The week of October 30, 2023, was a big one for artificial intelligence.
On Monday, October 30, US President Joe Biden issued a broad executive order (EO) aimed at promoting new standards for AI, which the White House claims will protect Americans’ privacy and provide safety from AI-related threats. On Wednesday, the US announced the creation of an AI Safety Institute to assess potential threats.
That same day, Vice President Kamala Harris spoke at the US Embassy in London, further outlining the US government’s vision of the future for AI. At the same time, the UK government was hosting the first AI Safety Summit, bringing together delegates from 27 governments and CEOs of leading AI companies.
The recent actions by the US, UK, and partner governments are a sign that regulation of AI is likely inevitable. However, questions remain on what exactly regulation might look like and whether said regulation will, in fact, protect the public from threats relating to AI technology.
The Biden Executive Order
The Biden Administration’s EO is focused on establishing new standards for AI safety and security. The press release notes that the Biden admin has “already consulted widely on AI governance frameworks,” including with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
The EO is broken down into eight subheadings:
– New Standards for AI Safety and Security
– Protecting Americans’ Privacy
– Advancing Equity and Civil Rights
– Standing Up for Consumers, Patients, and Students
– Supporting Workers
– Promoting Innovation and Competition
– Advancing American Leadership Abroad
– Ensuring Responsible and Effective Government Use of AI
One of the biggest steps the Biden Administration has taken with this EO is invoking the Defense Production Act to require companies developing “any foundation model” that poses a “serious risk to national security, national economic security, or national public health and safety” to report to the federal government when training their model. They are also required to share the results of any safety tests.
The Defense Production Act was first enacted in 1950 in response to the Korean War. It gives the president the ability to require businesses to accept and prioritize contracts for materials deemed necessary for national defense. The law has been invoked 50 times since 1950, including during the COVID-19 panic when former President Donald Trump used it to require 3M, General Electric, and Medtronic to increase their production of N95 respirators.
Biden’s EO also calls for establishing standards and best practices for “detecting AI-generated content” and authenticating official government content. The Department of Commerce is tasked with developing guidance for content authentication and watermarking to “clearly label AI-generated content.”
Some industry executives have said watermarking is at best a band-aid and will not stop deepfakes. Additionally, a research team at the University of Maryland has demonstrated that current watermarking methods can be easily evaded.
The order includes references to protecting the data privacy of Americans, providing “clear guidance” to landlords to keep AI from being used to “exacerbate discrimination,” and protecting workers from potential job displacement.
The EO also calls for drafting a National Security Memorandum, which will ensure the US military and intelligence community “use AI safely, ethically, and effectively in their mission.” This line is laughable for those who remember WikiLeaks’ 2010 release of the “Collateral Murder” video, which showed a US Army crew firing on a group of people and killing several of them, including two Reuters journalists. The men were heard laughing about the murders afterward.
If the US military operates with a similar approach while also having access to AI, the results could be deadly for people around the world.
One potential bright spot in the EO is the Biden Administration’s call for developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis. After all, one can only imagine what dystopian future awaits humanity if we allow AI to decide who is guilty and who is innocent or when an individual should be eligible for early release from probation or prison.
Overall, concerns about the EO stem from the fear that the Biden administration will overregulate the burgeoning AI industry.
Tim Wu, a law professor at Columbia University, called the order an example of government doing too much, a response he says is largely born of the lack of regulation of social media in the 2010s. “If doing too little, too late with social media was a mistake, we now need to be wary of taking premature government action that fails to address concrete harms,” Wu warned in an op-ed published in the New York Times.
Other commentators worried that the regulation of AI could result in the marginalization of open-source AI protocols. For example, Martin Casado, a general partner at the venture capital firm Andreessen Horowitz, posted on X (formerly Twitter) that he had sent a letter to the Biden administration over the EO’s potential for restricting open-source AI.
“We believe strongly that open source is the only way to keep software safe and free from monopoly. Please help amplify,” he wrote.
The letter was organized by the Mozilla Foundation and included more than 70 signatories, including Casado and Meta’s chief AI scientist Yann LeCun.
“We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority,” the letter reads.
It goes on to state that the “idea that tight and proprietary control” of AI is the “only path to protecting us from society-scale harm is naive at best, dangerous at worst.” The signatories also warn that history has shown that “the wrong kind of regulation” can lead to “concentrations of power in ways that hurt competition and innovation.” Instead, they call for “openness and transparency” in the AI regulation debate.
In response to calls for regulation, LeCun tweeted that “fear-mongering campaigns” were skewing the public’s perception of the dangers AI poses. LeCun also emphasized the need for open-source AI. He wrote:
“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we *need* the platforms to be open source and freely available so that everyone can contribute to them. Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”
He went even further, stating that without protections for open-source AI technology, it will be “regulated out of existence,” and a “small number of companies from the West Coast of the US and China will control AI platform and hence control people’s entire digital diet.”
LeCun also accused several people of “massive corporate lobbying” in an alleged attempt to “perform a regulatory capture of the AI industry.” He named Dario Amodei, co-founder and CEO of Anthropic; Demis Hassabis, CEO of DeepMind; and Sam Altman, CEO of OpenAI and Worldcoin, as participants in this attempted regulatory capture.
Kamala Goes to London
Only days after the Biden administration issued the EO, Vice President Kamala Harris gave a speech at the US Embassy in London further outlining the administration’s views on AI.
Harris outlined the potential positives that could come from AI, including developing powerful new medicines to treat and even cure the diseases that have plagued humanity for generations, dramatically improving agricultural production to help address global food insecurity, and saving countless lives in the fight against the climate crisis.
“But just as AI has the potential to do profound good, it also has the potential to cause profound harm,” Harris stated as she shifted to the potential dangers of AI. “From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the ‘existential threats of AI’ because, of course, they could endanger the very existence of humanity.”
Harris noted that AI already poses “existential” threats to some populations. “When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?” Harris asked.
She also noted that the US government will continue to pressure allies and partners to “apply existing international rules and norms to AI.” She claimed the US military has received commitments from 30 countries to responsibly use military AI. Harris said these emerging, voluntary commitments are an initial step toward a safer AI future.
“Because, as history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the well-being of their customers, the safety of our communities, and the stability of our democracies,” Harris said.
The embassy speech came the day before Harris and other political leaders attended UK Prime Minister Rishi Sunak’s AI Safety Summit. Delegates from 27 governments gathered with leaders of some of the top AI companies, including OpenAI and Worldcoin CEO Sam Altman, as well as Elon Musk.
Sunak and Musk streamed a 50-minute discussion on the future of AI, which has been viewed 23 million times on X. Although the summit was largely seen as a first step toward more conversations, Sunak was able to secure a promise from AI companies to give governments early access to their AI models to perform safety evaluations. No specific details were released regarding when such evaluations would take place.
Can We Trust Big Tech and the US Government?
Despite the promises of a future utopia where AI helps make humanity’s lives better and warnings of a future dystopia where AI controls our every move, the current reality is that funding for AI projects is rapidly expanding. Preventing AI from evolving further is extremely unlikely. The best-case scenario is one where open-source AI models are allowed to thrive alongside the inevitable corporate, Big Tech AI models.
Unfortunately, as Meta’s chief AI scientist Yann LeCun has warned, CEOs like Sam Altman and others are pushing for regulation as a way to guarantee their monopoly on the AI industry while regulating open-source AI out of existence.
Even within the US government itself, we can see the influence of Big Tech firms like Google and its parent company, Alphabet. In February 2021, I reported on the appointment of former Google CEO Eric Schmidt to head the AI Commission—and his connections to the artificial intelligence military-industrial complex.
Schmidt led Google from 2001 to 2011, but his involvement with the company continued well beyond that. He served as executive chairman of Google from 2011 to 2015, as executive chairman of Google’s parent company, Alphabet Inc., from 2015 to 2017, and most recently as a “technical advisor” at Alphabet from 2017 to 2020. Schmidt is also one of the primary investors in AI contractor Rebellion Defense.
Congress established the AI Commission in 2018 with the goal of reviewing advances in artificial intelligence and making policy recommendations to Congress and the president. Despite promises of transparency and accountability, the AI Commission has held most of its meetings and decision-making in secret.
The Electronic Privacy Information Center (EPIC) has been fighting to force the AI Commission to provide details regarding how it reaches its conclusions, as well as seeking internal communications between commission members. EPIC has won twice in its case against the AI Commission, forcing the commission to hold public meetings and disclose thousands of pages of records. EPIC has called on the AI Commission to “advise Congress, as the nation’s highest policymaking authority, to establish government-wide principles and safeguards for the use and development of AI.”
While EPIC has succeeded in revealing invaluable data about the work of the AI Commission, it also warns that “there are already indications that the US Intelligence Community has failed to invest in vital AI safeguards.”
If the congressionally established AI Commission is riddled with conflicts of interest and failing to operate with transparency, what can we expect from the new executive order? Should we stand by and wait for governments and Big Tech overlords to decide how AI will impact our lives?
Regardless of personal views on the Biden EO, the age of artificial intelligence is drawing near. Whether or not this age will be one that allows humanity to partner with technology and advance liberty for the species or one of techno-tyranny and the loss of individual liberty remains to be seen.