104 Comments
author

I just added an edit from Josh to this piece - a really cool thing he made:

"So a mate of mine and I have put together something we think is the next best thing. We reckon it’d be great if people could identify what content is — or isn’t — created by AI. We call it Responsible AI Disclosure, or RAID. Check it out here: http://responsibleaidisclosure.com/"

Apr 18, 2023 · Liked by David Farrier

This was a great read! Thank you! I can’t believe that, after my semi-regular rants about capitalism combined with the current conversations around AI, I didn’t make the connection before... the dystopia we worry AI will create is actually already here, and has been for ages 🤦🏻‍♀️ It seems so clear now that I’ve read it. What a great distraction it serves to be: while we worry about what AI will become, we’re not thinking about what’s already happening. It reminds me of that quote, “this is the way the world ends, not with a bang, but a whimper”. As I type, I realise I sound very conspiracy-theorist-ish. I’m not saying any of this is a master plan or manipulation, more that the conditions make for a world that will slowly self-destruct simply by continuing as they are.


Thanks! I'm really glad you liked it, and yeah, we invented paperclip maximizing machines a long time ago. The fever dream of AI run amok is just our present projected on the future with a bit of spice to make it seem less real, not more.

I know there's a lot to get through in that piece, and a lot of links to check out, but one I really want people to look at is writer Charlie Stross' take on "Slow AIs." It really does sum up our present situation incredibly well. I find it genuinely helpful to think of (for example) the fossil fuel industry as a stupid kind of sentient, fighting for its life even as our civilization's survival depends on it being killed. http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html

Apr 19, 2023 · Liked by David Farrier

Holy moly! Just got done watching. The Q and A is still playing as I type this. Terrified! Not sure what to say. I'm glad I watched it. So much food for thought. The "she'll be right" kiwi in me wants to write it off as possible but unlikely. However, I didn't think Trump would get elected, I didn't think Roe v Wade would be overturned, I never imagined living through a pandemic, and I thought we would've done more on climate change by now and we still have plenty of people who think that's not even real, soooooo fuck?! It's very sobering and I'm not sure what to do with it all yet....

Apr 18, 2023 · edited Apr 18, 2023 · Liked by David Farrier

While I'm not at the level of Dr. Reid, this is my field, and I've been frustrated with the people who think these systems are "creating". What they're doing is pattern matching based on wide and extensive knowledge, and maximizing for the best value. It's just that they can do it at an insane rate compared to the human mind, so it appears to be "thinking". (One could say humans do the same, but I choose to interpret that differently.)

A good parallel example is how chess is played: the computing power is effectively playing out billions of different operations, applying the outcome to the success metrics defined in the game, and maximizing for the one option out of the billions of paths that were considered. It's trying every single thing, which I like to define as "brute force" intelligence. If I had the time to play ten billion games of chess and learn from my mistakes, I'd be damn good at it as well. Plus, its method of "learning" is either incorporating existing available material (whose accuracy/authority can be dubious) or playing out simulations based on coded parameters; the Japanese video shows exactly how wrong those simulations can go when parameters are missing. The simulations in chess work because of the narrow game constraints, but life has so many varied parameters that it is impossible to have accounted for all of them.
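To make that "brute force" idea concrete, here is a minimal sketch (my own toy illustration, not anything from Dr. Reid or a real chess engine): a made-up take-1-or-2-counters game where the program literally plays out every possible line, scores the end positions, and keeps the move with the best guaranteed outcome.

```python
# Toy "brute force" game search: enumerate every line of play in a tiny game
# (take 1 or 2 counters from a pile; whoever takes the last counter wins),
# score the outcomes, and pick the move with the best guaranteed result.
# Real chess engines prune and approximate, but the principle is the same.

def minimax(pile, maximising):
    """Best achievable score (+1 win, -1 loss) from the maximising player's view."""
    if pile == 0:
        # The previous player took the last counter and won.
        return -1 if maximising else 1
    scores = [
        minimax(pile - take, not maximising)
        for take in (1, 2) if take <= pile
    ]
    # Our player picks the highest score; the opponent picks the lowest.
    return max(scores) if maximising else min(scores)

def best_move(pile):
    """Try every legal move and keep the one with the best eventual outcome."""
    legal = [take for take in (1, 2) if take <= pile]
    return max(legal, key=lambda take: minimax(pile - take, maximising=False))

if __name__ == "__main__":
    print(best_move(7))  # prints 1: taking one counter leaves the opponent in a losing position
```

There is no understanding of the game anywhere in there, just exhaustive search plus a scoring rule, which is exactly the "trying every single thing" described above.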

The larger challenges here are actually twofold, and they combine:

* when humans determine that the output of these systems is infallible and blindly trust it (happening today with those caught by artificial images, or those using the ChatGPT output for their letter, paper, or email), so that even with human review the output is no longer questioned and effectively becomes the authority

* when the SALAMI systems make up completely false data that looks authoritative, called "hallucinations" (60 Minutes just covered this with their request to write a paper on Economics - https://www.cbsnews.com/video/google-ai-artificial-intelligence-advancements-60-minutes-video-2023-04-16/)

That's the scenario of WOPR declaring that there's an inbound nuclear missile attack from the Soviets (WarGames, 1983). There's a lot more to all of that, but this comment is already way too long and pessimistic about too many factors.


If it's any consolation I fucking loved this comment. So thoughtful and thorough.


People who know me know that I'm not one for short answers. Given sufficient time (of which I find I have very little with all the other demands on it) I could have continued for far longer.


Yeah, great comment. I'm going to need to read it a few times to digest it properly.


Hello. As always when I write something for Webworm, I’ll be hanging around in the comments to answer any questions you might have. Hopefully. Thanks for reading, and to David for giving me a guest spot!

author

Josh - you are way smarter on this than I am - so - thanks!!!


Amazing read, thank you Josh and David. That video ... wow. The humanity of Hayao Miyazaki, and knowing there are many, many people who feel like he does (probably most of the people in this community ❤️). Frustrating that we don't often hear from them, because hype and capitalism take up so much space.


Oh! I forgot to mention - there was meant to be a link to this in the piece, but it might have fallen out. A mate of mine and I have put together a framework for responsible AI disclosures that we call RAID. It's designed as a watermark or disclaimer that users can put on their stuff to show that it was (or, importantly, wasn't!) made with the help of AI. We're pretty proud of this, and it's licensed under Creative Commons, so others can build on what we've done. http://responsibleaidisclosure.com/

author

Added this into the piece, Josh (my bad as an editor who missed this revision) - and added it as a comment, too.

Apr 18, 2023 · Liked by David Farrier

I've thought about this model, as well as a signing/certifying model for "official/real" things, to address the problems that will continue to surface with artificially generated images, deepfake videos, and the like. This challenge has existed for years, with skilled artists creating images of things that never happened; it's just that now it's amplified by these technology tools.


That's right. It's been easy for reasonably skilled artists to fake images for a long time, all that's happened is the ability is now open to a lot more people and the process is a lot faster.

I'm glad you like our framework. It really is just a baby and we're keen to get people collaborating on what it could grow into. Feel free to suggest additions or changes!


Hell yeah Josh, this is badass. Warmed my cynical Minnesotan soul

Apr 18, 2023 · Liked by David Farrier

Great piece, Josh. Maybe, actually, it should be renamed Artificial Intent, with emphasis on the artificial? Mathematics is behind how it's done, but humans drive the why, the intention of the tasks and outcomes. It's like the association with the word "intelligence" somehow absolves us of shitty outcomes. The intent is all ours, just generated by AI.


"The intent is all ours, just generated by AI" --This is what I've been silently screaming the last several months!!

Apr 18, 2023 · edited Apr 18, 2023

Of course, the dead giveaway that AI generated those pictures of Trump's arrest is that they are showing him actually running.

Had he been speeding down the road on a golf cart, with law enforcement shaking their fists and trailing behind, I would have been taken in too.

author

I still feel bad for temporarily misleading Josh as he groggily woke up!

founding
Apr 18, 2023 · Liked by David Farrier

It is a glimpse at an alternative reality. In that universe, it really happened.

Apr 18, 2023 · edited Apr 18, 2023

So really, these AI-generated images are just the multiverse.

Apr 18, 2023 · Liked by David Farrier

This xkcd comic (https://xkcd.com/1838/) is the perfect illustration of "This is all being done with maths. Letters are converted to numbers, and complicated statistical models can be engineered to try to “guess” target numbers. Convert those numbers back into letters and boom, you’ve got sentences."
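To see that letters-to-numbers-and-back loop in the most stripped-down form possible, here's a deliberately silly sketch (my own illustration; real GPT-style models are vastly more complicated, this is just the shape of the idea): count which character tends to follow which in a bit of text, then "generate" by repeatedly guessing the most likely next number and turning the numbers back into letters.

```python
# Silly illustration of "letters -> numbers -> statistical guess -> letters".
from collections import Counter, defaultdict

sample = "the cat sat on the mat. the cat ate the rat."

# Letters become numbers.
codes = [ord(ch) for ch in sample]

# A crude statistical model: count which number follows which.
follows = defaultdict(Counter)
for current, nxt in zip(codes, codes[1:]):
    follows[current][nxt] += 1

def generate(start, length=20):
    """Repeatedly guess the most common next number, then convert back to letters."""
    out = [ord(start)]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return "".join(chr(n) for n in out)

print(generate("t"))  # churns out "the the the ..." -- pure statistics, no understanding
```

Scale the counting up to billions of parameters trained on a large chunk of the internet and the guesses start to look like sentences, but it's the same trick underneath.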


Typically brilliant. Damn you, Randall.


The video made me cry, as he is so right. As always, a great article. AI scares the daylights out of me. Thank you.

author

He nailed it, right?

Apr 18, 2023 · Liked by David Farrier

Great read. AI is equal parts terrifying and thrilling for me. But as a digital artist (and a recent one at that), what pisses me off the most is how people don’t see the ‘harmless’ AI bots that make them images as anything other than fun. They don’t see theft of our art/intellectual property as anything other than just for fun. But how would they like it if Johnny Frank in the cubicle next door took bits and pieces of everyone’s reports in the office, edited them together, then claimed them as his own and got the promotion he didn’t deserve? Then when they complain, Johnny says “It was just a bit of fun. A loophole. You shoulda been more careful.”

They would be pissed.

And what about the human experience? How does an AI become successful when it can’t deliver the raw emotion or opinions that the human mind can?

It is all very crazy scary and, sadly, not going away.

I say to always go for the next advancement in tech. But when is it too far? One may never know.

E x

Apr 18, 2023 · Liked by David Farrier

Yes, some people are quick to argue they are absolved of guilt because "This is all new and anyway you allowed it to happen."

I heard a teenager use that line after he had been caught hacking, and the response was "So if I leave my handbag on the passenger seat and the car window down, it's OK for you to reach in and steal my bag?"

Silence.


I’m also an artist and, despite reading and hearing various opinions about AI and art theft, I have yet to form my own (unusual, since I am very opinionated haha). On the one hand, isn’t every artist inherently a thief? We learn and develop our styles by looking at other people’s work, sometimes emulating it, other times incorporating elements we like into our own. Hell, one of my favourite movies, Nosferatu (1922), is a plagiarised version of Dracula. On the other hand, people’s work being used without consent to train AI is, of course, upsetting, and it’s weird to see a computer generating images similar to those you have spent years honing your skill to create.

Emotion and opinion is interesting. We could argue that "good" art is art that makes us feel something, that makes us look at a thing from a new perspective. If an AI makes an image that evokes these feelings, is it good art? If our response to the generated images is frustration, fear, or anger, is the art good simply because of our response, or not, given that our response comes in the context of knowing it has been trained on others’ work? I don’t know the answer. AI is weird and fun and scary.


I see where you are going, but art is not inherently theft. An artist may get inspired by another artist/writer, but if you copied chunks of The Hobbit word for word without credit to Tolkien, and claimed it as your own, that’s plagiarism. Theft. That’s essentially what these AI art bots do. They search the internet and use a piece here and a piece there to create an ‘original’ image, giving no credit to the actual artists while taking all the credit themselves. I have close friends who work in digital design and the like who have lost out on jobs because AI “can do it cheaper”.


I don’t think that real originality in art is much threatened by art generated by AI (whereas it’s regularly under threat from straightforward theft, in which AI bots are implicated as described by Ehren above). Because the algorithms work by approximation to a model that is known and specified, they average trends observed in the material they scrape, thus always pulling towards various norms. Originality is about outliers rather than norms.

That said, the big problem is for artists working within the confines of genres, especially graphic genres, which by definition require a high ratio of compliance to innovation. I’m not belittling the talent needed to massage the constraints of generic expectations to create something novel and distinctive. And I wouldn’t call following generic conventions to the extent that audiences require ‘stealing’, though it involves an element of imitation. If art of any kind, visual, literary, musical or whatever, were utterly novel, the audience could not make sense of it, read it, react to it, never mind enjoy it. Genres and conventions are there to help us make sense of stuff.

But genre-busting, or at least pushing of the envelope, is expected in the ‘fine’ or ‘high’ arts to an extent that would alienate the clients and audiences for graphic art. If AI art outside of generic boundaries pleases the audience for, and threatens the makers of, fine art (for want of a better term), it will be by accident, not design.


That’s true, and it is absolutely awful that your friends have lost jobs because of it. I wonder how many pieces the AI is using. Thousands? If our own work is composed of thousands of bits from art we’ve already seen, does it compare? It’s such a weird area to be in right now.

Apr 19, 2023 · Liked by David Farrier

Great piece, Josh! I go back and forth with how I feel about AI and this really helped to clear some things up that I have been concerned about recently! Sure it's fun to see what AI *can* generate from time to time but I think Miyazaki's thoughts about it are spot on!


Miyazaki’s response was so interesting regarding simulated movement and disability. Something I hadn’t thought about before. What I find very fascinating is how these AIs, given a target goal like "move most efficiently and quickly across this plane", eventually achieve the goal in unexpected ways. Whether it’s using bugs in the code to break the physics (https://www.youtube.com/watch?v=Lu56xVlZ40M) or simply learning how to run better (https://www.youtube.com/watch?v=eksOgX3vacs).

Apr 18, 2023 · Liked by David Farrier

A neuroscientist friend said that AI like ChatGPT is like a calculator for language. It is a tool. But you do have to have some idea of how to use it, otherwise it just spits out numbers (or sentences). Granted, they're coherent sentences, but it is nowhere near conscious.


Good analogy. It's not all that far removed from typing 80085 into your calculator and sharing it around the class for a good laugh (or eye-roll).


Thanks for injecting some "intelligence" into the AI conversation! I was a computer programmer way back when we were just emerging from having to use zeroes & ones (000100010) to "tell" a computer what to do (not working code, BTW, as I only witnessed this stage).

Glad I missed that bit, because my mind doesn't work that way, so cloaking it in more "human" language allowed me to master the still-primitive instructions we gave to "tell" a computer when to add or subtract, multiply or divide, and how to recognise/match trigger words to cause certain actions. I started with HUGE machines in totally atmosphere-controlled rooms that could only do basic bookkeeping functions, and now I walk around with a smartphone that is exponentially & almost mind-blowingly more powerful.

So I guess my mind is attuned to how far increases in tech can go, as even then we discussed how a computer/machine "learned" from previous iterations (or, more accurately, the people building them did?) - as in, it didn't need to invent so much as improve on what others had already invented/developed. We had it with early programming - someone wrote a basic programme, a customer asked for something on top, so you took an existing working programme & tweaked or expanded it without the need to start from scratch (and it was already debugged & proven to work, so ...). I guess that is how I come to grips with "AI": not as a complete mystery that seemed to appear suddenly, but as an increase in computing capacity, NOT intelligence.

It does concern me that "AI" has so much potential for misuse/abuse and/or accidental harm (e.g. via racial bias etc), and that certain people/organisations only care about their short-term bottom line & won't allow ethics to get in the way of a good shareholder payout etc. Hope I pass on before "AI" decides who qualifies for health care & who has to die ...?

Apr 20, 2023 · Liked by David Farrier

I don’t have anything smart or interesting to say, but "It’s less smart than a pig, or a fly, or even a TERF." took me completely off-guard and elicited a solid chuckle. Good write up.


Thanks for such an interesting piece. So much to think about, and kudos for referencing Charlie Stross (awesome Scottish sci-fi writer who manages to weave some of this into his works - and still be entertaining). If you were responsible for sharing this article (https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html), by a computational linguist who has some fascinating things to say on exactly why we should not be taken in by ChatGPT, you are truly wonderful. If not, please do share it everywhere. My boss is trying to use it for everything and won’t hear a word said against it. Nice pun, huh!
