In late January, the National Security Commission on Artificial Intelligence (NSCAI), or the AI Commission, released a draft of its upcoming report to Congress, rejecting calls to ban AI-powered autonomous weapons, which critics characterize as “killer robots.” While the AI Commission briefly addressed privacy and civil liberties concerns, it ultimately called on Congress to double AI research funding annually, reaching $32 billion a year by 2026. The report also failed to note clear conflicts of interest involving the Commission’s chairman, former Google CEO Eric Schmidt.
Opponents of advancing AI-powered surveillance and police states include privacy advocates concerned about a future in which law enforcement officers wear glasses equipped with facial recognition software powered by secret AI algorithms.
The draft report addresses the surveillance concerns, stating, “The stakes of the AI future are intimately connected to the enduring contest between authoritarian and democratic political systems and ideologies.” The Commission also notes that AI-enabled surveillance will “soon be in the hands of most or all governments” and “authoritarian regimes will continue to use AI-powered face recognition, biometrics, predictive analytics, and data fusion as instruments of surveillance, influence, and political control.”
The report correctly points a finger at China’s authoritarianism and AI-driven surveillance state. However, the draft also attempts to paint the U.S. as a “liberal democracy” that uses such technologies for “legitimate public purposes… compatible with the rule of law.” The implication is that the enemies of the U.S. could use this technology for tyrannical purposes, but the U.S. and its allies would only ever use AI in the interest of preserving liberty.
“A responsible democracy must ensure that the use of AI by the government is limited by wise restraints to comport with the rights and liberties that define a free and open society,” reads the draft. “The U.S. government should develop and field AI-enabled technologies with adequate transparency, strong oversight, and accountability to protect against misuse.”
Taken at face value, these statements might offer a sense of reassurance. Unfortunately, we are speaking about the U.S. government and military, institutions that do not have a history of transparency or accountability. Even more worrisome is the draft’s mention of the “urgent need” to use AI for national security purposes, particularly against “foreign and domestic terrorists operating within our borders.” The draft encourages the DOD not to pursue its counter-terrorism goals without ensuring that “security applications of AI conform to core values of individual liberty and equal protection under law.”
Despite the acknowledgement of privacy concerns, the bulk of the draft report is an endorsement of expanding U.S. government and military research into AI. Robert Work, former Deputy Defense Secretary and the commission’s vice chairman, said the commission is calling on the Department of Defense to achieve “military AI readiness” by 2025 through training and education for military members. The draft calls for the Secretary of Defense to establish AI readiness goals by the end of this year.
The final version of the report is expected to be shared with Congress on March 1st.
The Fight To Stop Killer Robots
There are also fears that traditional policing involving human officers could be replaced with remote-controlled drones and robot officers powered by artificial intelligence that make decisions using a formula unknown to the public.
Some critics had hoped for an outright ban on the technology, but the commission said it believes AI would make fewer mistakes in battle, leading to fewer deaths. Vice Chair Work said there was a “moral imperative to at least pursue this hypothesis.” Reuters reports that one member of the commission warned of “pressure to build machines that react quickly, which could escalate conflicts.” The panel endorsed the idea that only humans should make decisions regarding launching nuclear weapons, but said a ban on AI would work against “U.S. interests” and would be difficult to enforce.
One of the main advocates for banning autonomous weapons is the Campaign to Stop Killer Robots, a coalition of non-governmental organizations (NGOs) formed in October 2012 to ban fully autonomous weapons. The coalition claims this would help “retain meaningful human control over the use of force.” The organization has been campaigning internationally for a treaty that would ban so-called “killer robots.” According to the coalition, 30 countries, 110 NGOs, and 4,500 AI experts support its efforts to ban the technology.
Mary Wareham, coordinator of the Campaign to Stop Killer Robots, told Reuters that the commission’s “focus on the need to compete with similar investments made by China and Russia… only serves to encourage arms races.” Wareham is correct in her assessment given Vice Chair Work’s statement that it is a “moral imperative” for the U.S. military to pursue AI research under the assumption that AI-driven warfare would lead to fewer casualties. This mentality will all but guarantee that AI-related defense research will be funded to the tune of billions of taxpayer dollars annually.
The Electronic Privacy Information Center has been fighting to force the AI Commission to provide details regarding how they reach their conclusions, as well as seeking internal communications between Commission members. EPIC has won twice in its case against the AI Commission, forcing the Commission to hold public meetings and disclose thousands of pages of records. EPIC has called on the AI Commission to “advise Congress, as the nation’s highest policymaking authority, to establish government-wide principles and safeguards for the use and development of AI.”
While EPIC has succeeded in revealing invaluable data about the work of the AI Commission, they also warn that “there are already indications that the U.S. Intelligence Community has failed to invest in vital AI safeguards.”
The AI Commission was established by Congress in 2018 with the goal of “review[ing] advances in artificial intelligence, related machine learning developments, and associated technologies” and making policy recommendations to Congress and the President. The Commission has made promises of transparency and accountability, but has actually held most of its meetings and decision-making in secret. The simple fact that the Commission is led by former Google CEO Eric Schmidt should trouble those who care for privacy, accountability, transparency, and individual liberty.
Schmidt is best known as the CEO of Google from 2001 to 2011, but his role with the company continued into the 2020s. He served as Executive Chairman of Google from 2011 to 2015 and then as Executive Chairman of Google’s parent company, Alphabet Inc., from 2015 to 2017. Most recently, Schmidt was a “Technical Advisor” at Alphabet from 2017 to 2020.
During that period, Google suffered multiple public relations nightmares, not least its reputation for gathering massive amounts of data from its users. There was the time Google planned to launch a censored version of its search engine in China that would blacklist websites and search terms – a move Eric Schmidt said would help China be “more open.” There was also the Project Maven fiasco, in which it was revealed that Google was working with the Pentagon to develop AI that would analyze drone footage. After news of Maven became mainstream, dozens of employees resigned in protest and thousands signed a petition asking Google to quit the project. Google ultimately caved and announced it would abandon Project Maven.
More recently, an investigation by The American Prospect revealed that Schmidt has ties to a largely unknown AI contractor. The report notes that during the Obama administration Google representatives were seen frequently enough at the White House that some “jokingly call the administration Google.gov”, with more than 250 Google employees moving between the government and the company throughout Obama’s presidency. Schmidt was one of these Google employees.
“From official positions, he has advocated for the Defense Department and intelligence agencies to adopt more machine-learning technology. Meanwhile, as a venture capitalist, he has invested millions of dollars in more than a half-dozen national-security startups that sell those very technologies back to the government,” the Prospect writes.
Specifically, Schmidt is one of the primary investors in AI contractor Rebellion Defense via his firm Innovation Endeavors. Rebellion states that its mission is to “empower the mission of national defense through AI driven technology.” The company brags that its team members were “early-stage employees at Netflix, Amazon, Twitter, Google, and Microsoft, and many have spent time as civil servants at the U.S. Digital Service, Defense Digital Service, and the U.K. Government Digital Service (GDS).”
Rebellion’s co-founder and CEO, Chris Lynch, moved from the tech sector to D.C. in 2015 to run the Pentagon’s Defense Digital Service (DDS). While at the DDS, Lynch worked under three defense secretaries before leaving in 2019 to launch Rebellion Defense. Lynch’s path from Big Tech to the military-industrial complex and back to the private sector is illustrative of the ongoing revolving door between industry and government.
The ease of access that Schmidt and his Google colleagues enjoyed during the Obama years appears to be returning in the early days of the Biden administration. The Prospect notes that in November, Rebellion was awarded a contract to create a single data-sharing network for the Air Force. Shortly after Biden was declared the next president, he began announcing his transition team, which drew from Big Tech companies including Uber, Amazon, Google – and the relatively unknown Rebellion Defense.
The presence of Eric Schmidt on the AI Commission and as a primary investor in an AI contractor for the military is a clear conflict of interest. Schmidt’s time at Google and his public statements have made it clear that he does not value privacy. Despite the AI Commission’s draft report paying lip service to privacy protections, the American people should not expect the likes of Eric Schmidt or Rebellion Defense to protect them from the growing specter of AI-powered autonomous weapons.
Question Everything, Come To Your Own Conclusions.