There is so much to say about this one; in my opinion, it has been counterproductive for anyone worried about AI safety. Here is an excerpt from my latest Bloomberg column:
Sometimes publicity stunts backfire. One example may be the one-sentence warning published this week by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
… The first problem is the word “extinction.” Whether or not you think the current trajectory of AI systems poses an extinction risk (I do not), the more that term is used, the more likely the matter will fall under the jurisdiction of the national security establishment. And its priority is defeating foreign adversaries. The bureaucrats who staff the more mundane regulatory agencies will be sidelined.
American national security experts are rightly skeptical of the idea of an international agreement to limit AI systems, because they doubt that anyone would monitor and sanction China, Russia, or other states (even the United Arab Emirates has a potentially strong AI program under way). So the more people say that AI systems can be super-powerful, the more national security advisers will insist that American technology must remain superior. I happen to agree on the need for American dominance, but realize that this is an argument for speeding up AI research, not slowing it down.
A second problem with the statement is that many of the signatories are major players in AI development. So a common-sense objection might run as follows: If you’re so worried, why don’t you just stop working on AI? There is a perfectly legitimate answer (you want to stay involved because you fear that if you step away, someone less responsible will take charge), but I am under no illusions that this argument would prevail. As they say in politics, if you’re explaining, you’re losing.
The geographical distribution of signatories will also pose problems. Many of the best-known signatories are on the West Coast, particularly in California and Seattle. There is a group from Toronto and a few from the UK, but the Midwest and South of the United States are barely represented. If I were a congressman’s chief of staff or a political lobbyist, I would ask myself: where are the community bankers? Where are the car dealership owners? Why are so few states and House districts represented on the list?
I myself do not see the AI safety movement as a left-wing political project. But if all you knew about it was this document, you might conclude that it is. In short, the petition may do more to reveal the weakness and narrowness of the movement than its strength.
Then there is the brevity of the statement itself. This may have been a deliberately bold decision, one that will help stimulate debate and generate ideas. But another view is that the group simply could not agree on anything more. There is no accompanying white paper or set of policy recommendations. I praise the humility of the signatories, but not their political instincts.
Again, consider the public as well as the political perception. If some well-known and very smart players in a given field think the world might end but offer no recommendations about what to do, you might just decide to ignore them altogether. (“Get back to me when you’ve figured it out!”) Imagine instead that a group of scientists announced that a large asteroid was headed toward Earth. I suspect they would have very specific recommendations, on such matters as how to deflect the asteroid and prepare our defenses.
Read the whole thing. You will notice that my arguments do not require any particular view of AGI risk, one way or the other. I consider this statement a mistake from every point of view, except perhaps that of the accelerationists.