I haven’t made up my mind on the whole business of ‘pausing’ AI development, or on how seriously to take the dire warnings and doomsday prophecies of those who are alarmed about AI and the inevitability of AGI, and who are calling for a six-month moratorium on its development to figure out guardrails, whatever that means.
I want to know from them how it could actually harm humanity by doing something that cannot be reversed, stopped, or monitored for failure.
The only scenario I can think of is one where we hand it the keys to the armoury, that is, allow it to make military decisions.
So, can’t we just limit its military use? Well, therein lies the problem. You see, even if we were to come together and write a set of rules that we want everyone to follow in the development of AI, and put together an international monitoring body with a well-funded network, technology, and teeth, the one area it would find hard to monitor is the military R&D establishment.
Unlike nuclear weapons research, which needs very specific materials, hardware, and space, and leaves behind telltale signs that can be detected from afar and proven objectively to everyone, the development of military AI will not be easy, or perhaps even possible, to monitor, and hence to control.
Also, apart from vested interests with a financial stake in the industry, which country will truly believe that its rival isn’t developing military AI? Are we really so naive as to believe that the Pentagon will restrain itself even as the White House signs up for, or probably even leads the signing of, ‘rules’ about AI? Do you think they’d ever buy the argument that China’s People’s Liberation Army has ‘paused’ all AI development? Would the People’s Liberation Army?
Indeed, that brings me to the first question I’ve been meaning to ask: Do you truly believe that the military AI development race hasn’t already begun? So, what’s all this brouhaha about pausing for?
Let me tell you a story of a French general in WWI who, while dictating entries to his aide-de-camp (ADC) for the unit diary about the day’s action, asked him to record a particular point on the front line as where his unit had ended the day’s hostilities. When the ADC pointed out that the actual position of their troops was about 200 meters behind that line, the general said he knew that, but this was ‘pour l’histoire’ (for history).
So, my second question is: do all those celebrities asking for a ‘pause’ in AI development have objective reasons for their fears, and do they actually believe there is a way to stop or pause this juggernaut if we simply decided to? Or are they posturing for the history books, which, if what they say were to come true in every way, would never be written?
I’ll leave you with a quote from my favourite sci-fi humorist, Douglas Adams, who, in my opinion, would probably not have signed this letter if he were alive. That is, of course, speculation. So, for what it’s worth, here he is talking about technology (which he loved):
I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
― Douglas Adams, The Salmon of Doubt
How many of those signatories do you think are between 15 and 35? What about you? Would you sign it? Why?