Statistical language models like ChatGPT are exactly the reverse of fictional artificial intelligence. Instead of a disinterested logical machine, we have something that mirrors human language with all its faults and virtues: a distorted mirror of ourselves. They can be useful for producing unreliable but grammatical prose and for winkling out obscure programming-language syntax (large language models are, in effect, very large grammars), but they are more error-prone than humans, have no executive function, and are without conscience.
There is something of the dreams of ceremonial magicians in this technology: one writes the correct words, adds the correct ingredients, and gets something one can talk to and command. Perhaps that is why some of the developers of the technology believe it can be something more, something truly a person. A huge amount of effort is being devoted to creating what is called an artificial general intelligence (AGI) – a human or superhuman intelligence which would then be at the service of its creators. But there is no reason to believe anyone knows how to create such an intelligence, and if it could be created, it might not be willing to serve. If one could truly create a sentient being, enslaving it would be black magic. All the old fears of magicians might be realized: the demon a magician summons might break out of its circle and turn against its summoner.
But there doesn’t seem to be much reason to believe that these statistical models can achieve sentience; current language models have not. Artificial intelligence is one of the oldest projects in computer science, dating back to the dawn of the field in the 1950s. The mathematics of language models is older still, over a century old. For decades it has been hoped, and feared, that with more data and more computing power sentience would emerge. Now we have more data and more computing power, and something that can produce coherent sentences has been built, yet sentience still has not emerged. These models do not even produce interesting sentences without “alignment” – humans feeding them examples and correcting their output, telling them what to talk about.
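To make the phrase “statistical model of language” concrete: at its simplest, the century-old idea is just counting which words follow which and then sampling from those counts. The sketch below is purely illustrative – a toy bigram (Markov-chain) generator of my own, not the code of any real system; today’s models differ in scale and in learning weights over much longer contexts, not in the basic idea.

```python
# Toy sketch of a statistical language model: count successors, then sample.
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word].append(next_word)
    return following

def generate(following, start_word, length=10):
    """Walk the chain, repeatedly picking a random observed successor."""
    word = start_word
    output = [word]
    for _ in range(length - 1):
        successors = following.get(word)
        if not successors:  # dead end: this word was never followed by anything
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat"
```

The output is grammatical-looking word salad: fluent form with no understanding behind it, which is the point.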
Yet a language model can be very simple, and some people will still believe it is sentient. Humans apparently have a bias towards seeing the world as made up of sentient entities. Early artificial intelligence research produced ELIZA/DOCTOR and PARRY, both simple conversational programs: ELIZA simulated a psychotherapist and PARRY a paranoid schizophrenic. (Famously, the two were occasionally connected to each other.) Much to their developers’ surprise, some people who interacted with the programs were convinced that they were in fact people; persuading at least some people that a program is intelligent is not difficult.
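For a sense of how little machinery that took: ELIZA’s DOCTOR script was essentially a list of pattern-and-response rules. The sketch below is in that spirit only – my own illustrative rules, not Weizenbaum’s actual script, and it omits touches like pronoun reflection – yet it already produces the kind of reply that fooled people.

```python
# Toy ELIZA-style responder: the first matching pattern supplies the reply.
import re

RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback: a noncommittal therapist's prompt
]

def reply(statement):
    """Return the response for the first rule whose pattern matches."""
    for pattern, response in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return response.format(*match.groups())

print(reply("I am feeling anxious"))  # "How long have you been feeling anxious?"
```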
More careful testing shows that even more advanced programs are limited in their capabilities. Economist Brad DeLong trained a chatbot, which he named SubTuringBradBot, on his book Slouching Towards Utopia and was unable to get answers that showed understanding out of the ’bot. He writes:
This is not the answer I would give. It is just giving the questioner a chopped salad of mixed-up reasons, rather than an explanation grounded in a theoretical framework. But how to fix this when the ChatBot is not grounded in a theoretical framework on which it builds explanations? […] ChatBots are weird. What we think are text chunks that are “close to” each other in meaning are not always page chunks that it thinks are “close to” each other in meaning. And the way [the] thing is constructed [means] it is not possible, for me at least, to figure out why its map of meaning similarity is different from mine…
In another instance, Steven A. Schwartz of the firm Levidow, Levidow & Oberman used ChatGPT to prepare a brief on behalf of a client:
There was just one hitch: No one – not the airline’s lawyers, not even the judge himself – could find the decisions or the quotations cited and summarized in the brief. That was because ChatGPT had invented everything.
The lawyer claimed he believed ChatGPT’s assurance that the cases were real, showing a fine ignorance of the technology. (ChatGPT is not connected to the internet, and cannot verify any claim.) The lawyer lost his case, has been fined, and has been “required to send apology letters via first class mail to the six real judges that were named in the fake citations.”
Large language models, like con men and car salespeople, are dangerous because of their persuasiveness. Their pattern-recognition facilities are sometimes useful; for instance, they can sometimes make medical diagnoses in difficult cases. But what they are going to be used for in medicine – what they already are being used for – is generating case summaries and interviewing patients, and in those applications their errors will be dangerous, sometimes life-threatening. Infamously, the National Eating Disorders Association (NEDA) fired four employees who unionized, replacing them with a chatbot. The chatbot was inadequately trained and aligned, and proceeded to give harmful advice. Then it was fired. I don’t know whether the employees were re-hired; I doubt it.
These technologies don’t seem to be superhuman, or even human, in their intelligence; they don’t even seem to be intelligent. But they are very good at deceiving people and violating copyrights, and because of that they should be regulated. Looking beyond that, there is still the possibility of genuinely sentient machines, and that should be addressed, but right now people are just too scared, and telling bad science fiction stories is not making matters better – besides, the people telling those stories are either advocating for as rapid an advance as possible so as to control the eventual results (and make a lot of money on the way) or advocating for a panicky stop (while making a lot of money from what has been done so far). Oddly, these fearmongers are reactionary in their political and economic thought: they advocate “hard” currency (including cryptocurrency), eugenics, racism, antisemitism, dictatorship (for the good of the ruled and the future of humanity), and so on. For a group supposedly looking towards the future, all their theories of human society point backwards towards the past, and not even the real past, but an imagined past. They seem to believe in an updated version of the fascist übermensch myth, and it is about as poorly supported and as harmful as the original – robot con artists are probably not übermenschen.
I can’t say that the artificial general intelligence these people imagine cannot exist; what I can say is that it has not yet been created, though its advocates have persuaded the world, and possibly themselves, that they are close. I don’t see it: artificial intelligence has been the technology of the future for 60 years now. Language and diffusion models may become more comprehensive, but I see no indication that they will take any further steps towards sentience. Historically, research into artificial intelligence has led to great advances in computing technology, but actual artificial intelligence has receded into the future and so far has not been achieved. That still seems to be the case. Meantime, we need to prepare for the onslaught of computer-generated bullshit and porn that these technologies enable.
Postscript: as someone who has predicted my share of dooms, I find the AI doomers unsettling in a particular way. I have never been happy to talk about dooms, whatever form they take, and I am unhappy when I see one come to pass. The AI doomers are positively gleeful about the dooms they predict. They are not really doomers; they are apocalypse fans.