Questions swirl around AI regulation

AI is too important to be left to Microsoft, Google and Meta

By Ezra Klein


A robot hand solves a Rubik's Cube in San Francisco. Researchers at the OpenAI lab in San Francisco spent months training their robotic hand to solve the cube. (Matt Edge/The New York Times)

Published: Thu 20 Apr 2023, 4:37 PM

Last updated: Thu 20 Apr 2023, 4:38 PM

Among the many unique experiences of reporting on artificial intelligence is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.

What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race among Microsoft, Google, Meta and a few other firms. But no company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope.

A place to start is with the frameworks policymakers have already put forward to govern AI. The two major proposals, at least in the West, are the Blueprint for an AI Bill of Rights, which the White House put forward in 2022, and the Artificial Intelligence Act, which the European Commission proposed in 2021. Then, last week, China released its latest regulatory approach.

Let’s start with the European proposal, as it came first. The AIA tries to regulate AI systems according to how they’re used. It is particularly concerned with high-risk uses, which include everything from overseeing critical infrastructure to grading papers to calculating credit scores to making hiring decisions. A high-risk use, in other words, is any use in which a person’s life or livelihood might depend on a decision made by a machine-learning algorithm.

The European Commission described this approach as “future-proof”, which proved to be predictably arrogant, as new AI systems have already thrown the bill’s clean definitions into chaos. Focusing on use cases is fine for narrow systems designed for a specific use, but it’s a category error when it’s applied to generalised systems. Models such as GPT-4 don’t do any one thing except predict the next word in a sequence. You can use them to write code, pass the bar exam, draw up contracts, create political campaigns, plot market strategy, and power AI companions or sexbots. In trying to regulate systems by use case, the AIA ends up saying very little about how to regulate the underlying model that’s powering all these use cases.

Unintended consequences abound. The AIA mandates, for example, that in high-risk cases, “training, validation and testing data sets shall be relevant, representative, free of errors and complete.” But what the large language models are showing is that the most powerful systems are those trained on the largest data sets. Those sets can’t plausibly be free of error, and it’s not clear what it would mean for them to be “representative.” There’s a strong case to be made for data transparency, but I don’t think Europe intends to deploy weaker, less capable systems across everything from exam grading to infrastructure.

The other problem with the use-case approach is that it treats AI as a technology that will, itself, respect boundaries. But its disrespect for boundaries is what most worries the people working on these systems. Imagine that “personal assistant” is rated as a low-risk use case and a hypothetical GPT-6 is deployed to power an absolutely fabulous personal assistant. The system gets tuned to be extremely good at interacting with humans and accomplishing a diverse set of goals in the real world. That’s great until someone asks it to secure a restaurant reservation at the hottest place in town and the system decides that the only way to do it is to cause a disruption that leads one-third of that night’s diners to cancel their bookings.

Sounds like sci-fi? Sorry, but this kind of problem is sci-fact.

The White House’s Blueprint for an AI Bill of Rights is a more interesting proposal. But where the European Commission’s approach is much too tailored, the White House blueprint may well be too broad. No AI system today comes close to adhering to the framework, and it’s not clear that any of them could.

The blueprint says that “automated systems should be developed with consultation from diverse communities, stakeholders and domain experts to identify concerns, risks and potential impacts of the system.” This is crucial, and it would be interesting to see the White House or Congress flesh out how much consultation is needed, what type is sufficient and how regulators will make sure the public’s wishes are actually followed.

Perhaps the most interesting of the blueprint’s proposals is that “you should be able to opt out from automated systems in favour of a human alternative, where appropriate.” In that sentence, the devil lurks in the definition of “appropriate”. But the underlying principle is worth considering. Should there be an opt-out from AI systems? Which ones? When is an opt-out clause a genuine choice, and at what point does it become merely an invitation to recede from society altogether, like saying you can choose not to use the internet or vehicular transport or banking services?

Then there are China’s proposed new rules. I won’t say much on these, except to note that they are much more restrictive than anything the United States or Europe is imagining, which makes me very sceptical of arguments that we are in a race with China to develop advanced artificial intelligence. China seems perfectly willing to cripple the development of general AI so it can concentrate on systems that will more reliably serve state interests.

After talking to many people working on these problems and reading through many policy papers imagining solutions, I’d prioritise a few categories.

The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand. If you told me you were building a next-generation nuclear power plant but there was no way to get accurate readings on whether the reactor core was going to blow up, I’d say you shouldn’t build it. Is AI like that power plant? I’m not sure. But that’s a question society should consider, not a question that should be decided by a few hundred technologists. At the very least, I think it’s worth insisting that AI companies spend a good bit more time and money discovering whether this problem is solvable.

The second is security. For all the talk of an AI race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on AI is to simply steal the work being done here. Any firm building AI systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.

The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet.

The fourth is liability. There’s going to be a temptation to treat AI systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make AI systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.

The fifth is, for lack of a better term, humanness. Do we want a world filled with AI systems that are designed to seem human in their interactions with humans? Because make no mistake: That is a design decision, not an emergent property of machine-learning code. AI systems can be tuned to return dull and caveat-filled answers, or they can be built to show off sparkling personalities and become enmeshed in the emotional lives of human beings.

I think the latter class of programs has the potential to do a lot of good as well as a lot of harm, so the conditions under which they operate should be thought through carefully. It might, for instance, make sense to place fairly tight limits on the kinds of personalities that can be built for AI systems that interact with children. I’d also like to see very tight limits on any ability to make money by using AI companions to manipulate consumer behaviour.

This is not meant to be an exhaustive list. Others will have different priorities and different views. And the good news is that new proposals are being released almost daily. The Future of Life Institute’s policy recommendations are strong, and I think the AI Objectives Institute’s focus on the human-run institutions that will design and own AI systems is critical. But one thing regulators shouldn’t fear is imperfect rules that slow a young industry. For once, much of that industry is desperate for someone to help slow it down.

This article originally appeared in The New York Times.
