OpenAI CEO Sam Altman has spoken strongly about the need for AI regulation in numerous interviews and events, and even in testimony before the US Congress.
However, documents OpenAI used for its lobbying efforts in the EU reveal a catch: OpenAI wants regulation that strongly favors business and has worked to weaken proposed AI rules.
The documents, obtained by Time from the European Commission via freedom of information requests, provide a behind-the-scenes look at what Altman means when he calls for AI regulation.
In the document, titled "OpenAI's White Paper on European Union Artificial Intelligence Law," the company focuses on the EU AI Act and attempts to change various designations in the act in ways that would narrow its scope. For example, "general purpose AI systems" like GPT-3 had been classified as "high risk" under the proposed act.
According to the European Commission, the "high risk" classification includes systems that could cause "harm to the health, safety, fundamental rights or environment of people." Examples include AI that influences voters in political campaigns and the recommendation systems used by social media platforms. These "high risk" AI systems would be subject to legal requirements for human oversight and transparency.
"By itself, GPT-3 is not a high-risk system, but has capabilities that can potentially be used in high-risk use cases," reads OpenAI's white paper. OpenAI also objected to classifying generative AI systems like the popular ChatGPT and the AI art generator DALL-E as "high risk."
Fundamentally, OpenAI’s position is that the regulatory focus should be on companies using language models, such as applications that use OpenAI’s API, and not on companies training and providing the models.
OpenAI's position aligned with Microsoft and Google
According to Time, OpenAI essentially backed positions held by Microsoft and Google when those companies lobbied to weaken proposed EU AI Act regulations.
The section OpenAI lobbied against ended up being removed from the final version of the AI Act.
OpenAI's successful lobbying efforts likely explain Altman's change of heart about OpenAI's operations in Europe. Altman had previously threatened to withdraw OpenAI from the EU because of the AI Act. Last month, however, he backtracked, saying the earlier draft of the bill "was over-regulated, but we heard it was going to be withdrawn."
Now that parts of the EU AI Act have been withdrawn, OpenAI has no plans to leave.