Why AI is Arguably Less Conscious Than a Fruit Fly
“So many people assume if behavior appears human, consciousness must be underneath.”
Hi,
Thanks for all the feedback on the 3-Year Anniversary newsletter! Your comments warmed my cold dead heart!
“I’ve been here since the beginning and Webworm has been a bit of mental refuge. I read it during the depths of covid, in the hospital while waiting for my son to be born, in the middle of dozens of boring work meetings. The eclectic mix of articles and the community around it has been such a breath of fresh air compared to most internet places.”
Ego stroked and heart full (and with a new Flightless Bird out today that I really love) let me continue.
As you will be aware, I’ve written about artificial intelligence a lot in the past — be it my obsession with the singularity, or the nuances of ChatGPT. It’s not just me — guest writers Joshua Drummond and Tony Stamp have also chimed in. Perhaps it’s because we all watched Terminator 2: Judgment Day during our formative years.
Or maybe it’s just because this whole AI thing is fascinating, terrifying, and one of the most interesting things to happen during my thus-far 40-year lifespan.
In today’s newsletter, friend of Webworm Joshua Drummond (who has his own wonderful newsletter, The Cynic’s Guide to Self-Improvement) decided to do something very novel and talk to an expert — an expert in brain imaging and complex statistical models, to be precise.
And he has thoughts on AI that both Josh and I found very enlightening and wanted to share with you.
David.
AI: An Insult To Life Itself
by Joshua Drummond.
A few weeks ago a message appeared on my phone, from award-winning investigative journalist David Farrier. It was a couple of screenshots, and at a glance I could see the incredible news. Donald Trump has been arrested! And there were pictures to prove it! And, best of all, he had made a scene!
I was wildly happy. “Louise! It happened! Donald Trump got arrested!” I yelled to my wife.
“Really? I hadn’t seen anything on that,” she said with a frown, checking her phone.
“It’s a message from David,” I explained. I knew David Farrier, Skeptic of the Year, would never mislead me.
I started to show her the pictures, looked a little harder — especially at the captions that were right there under the images — and swore.
“Oh for fuck’s sake. It’s AI.”
It’s hard to admit, but this was the third time in as many days I’d been fooled by glancing at a post without doing proper mental due diligence. I’d also been fooled by a fake pope and fake catgirls. For anyone who thinks this is funny — well, it is. But it might not be funny for long.
The future of AI is far from certain, and opinion at the moment seems split between Twitter users panicking that it will take our jobs, LinkedIn bros who are thrilled that it’s going to take our jobs, tech leaders who think it’s going to kill us all, and those who think it might take our jobs and then kill us.
However, there’s another way to look at it all, and that’s what I want to explore in this piece.
You can’t spell “plagiarism” without AI
Part of the problem with AI is that everyone (including AI) is writing about AI.
I was just putting the finishing touches on an earlier version of this article when I got a DM from hard-bitten investigative journalist David Farrier, last seen writing a piece about Mr Kitters: The Internet’s Favourite Cat.
“Tony Stamp has focussed on AI for his totally normal column also,” he wrote.
Ah shit. I checked out Tony’s column and it was like he’d plagiarized everything I’d written while also, infuriatingly, making it more relevant and much funnier.
So I decided to do something completely different: talk to an expert.
Talking about my friend Lee Reid (PhD) is difficult, because the stuff he does and the things that happen to him are so out of the ordinary that they sound like massive lies. He is the most brilliant and the most unlucky person I have ever met. I once drew a comic about him that sums up just one aspect of his extraordinary story — how he wrote an incredible music notation program called Musink with voice recognition software and a mouse operated with his foot.
The reason I wanted to talk to Dr Reid is because he’s one of those rare people who knows a lot about both human and artificial intelligence. He’s an expert in brain imaging and complex statistical models, including some of the language models that we think of (incorrectly, as it turns out) as AI.
He graciously put up with me peppering him with questions about AI while he was probably trying to work on something more important, like a project to make maps of a patient’s brain so neurosurgery can be performed more safely.
And one of the things he’s keen for everyone to know is that AI might be artificial, but it’s not intelligent. It’s less smart than a pig, or a fly, or even a TERF. The problem with allegations of AI sentience, Dr Reid reckons, is that they’re based on a false premise.
“Mainstream computer science has, for many decades, considered intelligence to be displaying behavior that seems human-like,” Dr Reid says. “So many people assume if behavior appears human, consciousness must be underneath.” But that isn’t the case. ChatGPT, even though it can answer your exam questions, is arguably less conscious than a fruit fly.
“Language models like ChatGPT don’t have a real understanding of anything, and they certainly don’t have intent. If they had a belief (which they don’t), it would be that they are trying to replicate a conversation that has already happened. They are just trained to guess the next (or a missing) word in a sentence, based on millions of other sentences.”
This is all being done with maths. Letters are converted to numbers, and complicated statistical models can be engineered to try to “guess” target numbers. Convert those numbers back into letters and boom, you’ve got sentences. But not sentience.
“Currently, the most popular models in machine learning are neural networks,” Dr Reid says. “They are enormous maths equations that kind of evolve. Most of the numbers in the equation start out wrong. To make it work well, the computer plugs example data — like a sentence, or an image — into the equation, and compares the result to what is correct. If it’s not correct, the computer changes those numbers slightly. The process repeats until you have something that works.”
You can arrange how the model’s maths is performed, and feed information to the model, in a nearly endless number of ways. But however it’s arranged, all complex models end up with the same set of problems: they are fundamentally stupid, yet their complexity makes them highly unpredictable, capable of “novel outputs” that researchers and users simply can’t anticipate.
“An equation with millions or billions of numbers is not one a human can understand, and each individual operation is virtually meaningless in the scheme of the equation. That makes it extremely difficult to track how or why a decision was made,” Dr Reid says.
This means that large AI models are black boxes. Information goes in, and answers come out. What happens inside? You don’t get to know. And because of that, decisions made by an AI model can’t easily be justified in the real world.
“If a model says to ‘launch the nukes’ or ‘cut out the patient’s kidney,’ those are big decisions, and you’d want to know the reasoning why,” Dr Reid explains. But that reasoning often can’t be reliably obtained, or relied on. “Because we don’t understand it, there’s no guarantee that the model will behave rationally in the future. All we can do is test it on data we have to hand, and hope that when we launch in the real world it doesn’t do something novel like drive us into the back of a parked fire truck.”
The call is coming from inside the house
The fact that AI isn’t currently intelligent isn’t comforting for a lot of people. Surely, given what AI is already capable of, and how quickly it’s evolving, an Artificial General Intelligence (AGI) — usually defined as an AI that can beat a human at any knowledge task — is just around the corner? Couldn’t an artificial superintelligence be next? And isn’t that what we should be afraid of?
Some high-profile people certainly seem to think so. OpenAI CEO Sam Altman says that we should start preparing for AGI, which he thinks is inevitable. Elon Musk told a 2018 SXSW audience that “AI is far more dangerous than nukes.” Most recently, Musk and other AI personalities signed an open letter penned by the “Future of Life Institute” calling for a pause on advanced AI development for “at least six months.” There is lots of terrifying speculation out there about what might happen if we develop a superintelligent AI, like all life on the planet being extinguished by an AI with a mission to make paperclips. It’s widely acknowledged by futurists as a worst-case outcome for humanity.
On that note, I have bad news.
The paperclip-maximizing, planet-despoiling machine already exists.
It’s been running for a long time.
It’s called capitalism.
Science fiction author Charlie Stross thinks of corporations as “Slow AIs,” and in my opinion, he has it absolutely dead to rights. Let’s take a closer look at those nightmare AI scenarios. “What if something came and killed most of us and told the survivors how to live?!” is just colonization. And “What if something destroyed the planet to make widgets?!” is just what neoliberal economism is already doing.
Sorry to be boring.
Our future is our present, and it’s a curious mixture of a boot stamping on a human face forever, and planetary asphyxiation on the exhaust gasses of Slow AIs that have existed in various forms for centuries.
We are sacrificing the universe’s only known biosphere on the altar of making lines go up.
It’s time to take the “intelligence” out of AI
Just because AI probably won’t be superintelligent doesn’t mean it isn’t going to wreak havoc. Your phone is already using AI to fake your moon pictures so they look better, but that’s nothing. As discussed earlier, AI-generated images and deepfakes are already a problem, and it’s about to get much worse.
AI companies have spent billions of dollars and expended enormous amounts of energy on creating and training their programs (while also utilizing rampant art theft and worker exploitation), but it turns out that you can get one AI to train another AI for peanuts. These things are about to start popping up like mushrooms.
Soon — if it hasn’t been done already — someone will create an AI to ingest and regurgitate the terrible stuff on the Dark Web, like gore, death, and child abuse. Decades of exploitation and misery will be captured and copied, ready to replicate any scenario a user desires.
The nightmares are just beginning, but when you take a step back, it all has so little to do with “AI”. It’s just people doubling down on the worst tendencies of people. When you take away the false claims of intelligence, all you’re left with is the artificial.
Artificial, aptly, just means “something made by humans.”
On that note, Dr Reid agrees with some of the world’s top AI ethicists, as well as author, journalist, activist, and AI skeptic Cory Doctorow, who says a lot of AI criticism only serves to hype it up.
Dr Reid says the smart thing to do might be to take “intelligence” out of AI.
“The trick is to stop throwing around the word AI and start going back to words we know,” Dr Reid says. “Let’s just use the word ‘system’, or ‘product’, because that’s all AI is.” Like any other innovative, disruptive system or product — think of planes, or cars, or telephones — AI can, and should, be regulated. “If you think AI is somehow exceptional because it’s software, remember we already regulate malware, cookie-tracking, and software used in medical devices.”
Dr Reid is optimistic that if AI (I’m sorry, I mean SALAMI, the deliberately unglamorous alternative name some researchers have proposed) can be de-hyped, it can be both useful and much less damaging than current trends suggest. “What we need is for the law to dictate that people — not just institutions — are held accountable for the actions of their products. Their well-evolved instinct to save their own butts will take care of the rest.”
I agree that lawmakers could rein in the worst impulses of our Slow AI overlords. But, having spent a lifetime watching states utterly fail to address climate change, a problem our species already has all the knowledge it needs to solve, I’m not convinced that they will.
So a mate of mine and I have put together something we think is the next best thing. We reckon it’d be great if people could identify what content is — or isn’t — created by AI. We call it Responsible AI Disclosure, or RAID. Check it out here: http://responsibleaidisclosure.com/
There’s so much more I’d love to say about this topic. I haven’t even touched on a long conversation I’ve had with Dr Reid about how AI absorbs and exacerbates existing societal biases against minorities, people of color, LGBTQI+ folks, and many more.
I wish I could go into how AI boosterism and investment from people like Elon Musk is driven by a demented, eugenicist, brutal sci-fi philosophy called “longtermism.” I haven’t touched on the myriad ways AI has already taken jobs, or just made them exponentially more bullshit. And we need to talk about fake catgirls with too many fingers. I suppose we’ll always have the comments.
For now, let’s leave the last word to artist Hayao Miyazaki. A few years ago, some enthusiastic geeks showed the famed animator how AI might transform animation, and Miyazaki reacted with palpable disgust.
“I strongly feel that this is an insult to life itself,” he says, with tightly contained fury. But it’s his words at the end of the video that really resonate with me.
“I feel like we are nearing to the end of times,” he says. “We humans are losing faith in ourselves.”
-Joshua Drummond.
David here again.
After reading what Lee Reid had to say, there is nothing smarter I can add to this conversation, so I just wanted to list a few standout quotes (for me, at least) from him and Josh.
I prefer reading them with Brad Fiedel’s Terminator 2 theme playing in the background, but while thinking of humans instead of Terminators (the real Terminators: da-dah!).
Our future is our present, and it’s a curious mixture of a boot stamping on a human face forever, and planetary asphyxiation on the exhaust gasses of Slow AIs that have existed in various forms for centuries.
Decades of exploitation and misery will be captured and copied, ready to replicate any scenario a user desires.
It all has so little to do with “AI”. It’s just people doubling down on the worst tendencies of people.
I hope your week is going great. I am writing a newsletter about rescuing a hummingbird — so will have that for you soon.
David.