David here. This is from a reader who wanted to remain anonymous. I know them, and it hits hard.
**
i am like ten million other office drones who use ai all the time. for work.
it is the most helpful thing. it has made my job much easier.
because like ten million other office drones, i work a bullshit job.
most of what I do is bullshit. it is not meaningful or helpful to anyone.
that is why ai is so good at it. that is why so many other office drones fear it could take our jobs.
ai puts unnecessary things in spreadsheets. ai summarises meaningless meetings. ai writes emails that never needed to be written.
because our jobs are bullshit, and ai is too
i hate that this is what i have to do for a living. i hate that it pays well and that i am good at it. for a long time i wished i could turn off my mind and just be and do what my bullshit job wants me to do and be
now i have that and it is worse than ever
my job is bullshit. ai is bullshit. bullshit compounded. bullshit squared. bullshit from horizon to horizon.
bullshit forever
It's funny because when I was in my early 20s, I really wanted one of these jobs. Like, I was extremely jealous of the people I knew that did an hour of work every day and played online & talked to me on AIM for the rest of the day.
Now? No fucking way. I need meaning in my life. The thing is, I think most people in their 20s are like I was. Because capitalism is prioritized over people, most folks would rather have a good paying job that is stupid than one with meaning (in my case, I enjoy helping people and animals). And if you ask my family, they were much more proud of me when I had a customer service job at a giant corporation (not an easy job but it paid well) because I made the most money I've ever made........even though it was never what I wanted to do and I hated it.
Anyway, I really think we need to be intentional on prioritizing people over everything else right now.
I agree with this so hard. Last spring I was laid off from my college teaching job along with a *huge* number of my colleagues across our university system. For 20 years, I taught graduate and undergraduate students at a large university in a major city in California: 3-4 classes for a total of anywhere from 75-120 students each semester. Want to know how much I made per year at my college teaching job? Less than $40k. Yep, you read that correctly. That's what my undergrad degree, my 2 grad degrees (one at an Ivy League univ), my several book publications, and my 30+ years of teaching experience earned me. I loved my job so much -- loved the subject matter, loved the smart, funny, soulful students with whom I had the privilege to work. I got such satisfaction from knowing that I was a really good teacher, that I made a difference in some great young people's lives. Sadly, I also didn't make a living wage. And now I'm in my mid-60s in The Orange Satan's nightmare of America, praying my husband can keep his job for a few more years, praying our fascist overlords don't gut Social Security. I chose to do work that privileged meaning over money. And don't get me wrong: I'd do it again, over and over and over. What's the point of living otherwise? But -- oh, boy -- we need to do better, both as individuals and as a culture. If Trump & Co have revealed nothing else to us, it's that we've become ignorant and mean and dehumanized enough already. More than enough. The last thing we need is to champion something that amplifies that.
Oh, don't get me started on higher ed. The adjunctification of universities is extremely fucked up. Even those lucky enough to land an actual position get peanuts for the time & effort they put in and they have little job security & little say as to how things are run. Plus, nearly everyone has crippling student debt for their higher degrees so teachers and professors are some of the poorest people in this country..........and society hates them & devalues them so much that we want to replace them with technology.
My heart goes out to you.
My family were much more proud of me when I was working wine marketing casually, with absolutely no job security, for a small winery. Or working 80-100 hours a week full time managing the latest high-profile fine dining restaurant/wine establishment (my highest salary was AU$55,000 a year). Recently a family member suggested I might have an easier time with a large financial purchase if I quit my current profession (I didn't ask them whether quitting their well-paying, stable job would help or hinder their financial prospects). I daydream of an easy job with overtime, stability, paid holidays, sick leave, super. And then a post like the one from anonymous, and yours, Sam, brings me back down to earth. And honestly, it looks like my job stability (in-person work, I detest working online) is only going to improve if Zuck wants everyone to make AI friends. Although I'm not actually sure if any kids these days are on Facebook... it'll probably be older men that the loneliness epidemic continues to affect.
I obviously liked having enough money to live independently and also having insurance, which was tied to my job......but I worked too much, felt like shit all the time, was isolating myself outside of work because I felt so bad, & the insurance/healthcare thing is honestly a "damned if you do, damned if you don't" situation.
Also, my grandmother died thinking I was "okay" because I finally "had a good job".
Oh same. David Graeber's Bullshit Jobs was an excellent read and confirmed what I long expected: that I had unconsciously walked into a bullshit job many years ago and I am now paid excellent money to do very little. I need to get off the treadmill but am terrified because I also built a bullshit lifestyle around the bullshit money from the bullshit job.
It is, truly, all Bullshit forever.
I really like this old column Hayden wrote mentioning that wonderful book... and of course it's about ChatGPT too: https://www.webworm.co/p/episode12
I did not know this book and I’m going to have to read it now. I wonder if the evolution of my role is moving into this realm.
I feel this so much.
In my profession, I'm surrounded by AI advocates, and I feel like I'm losing my mind. Don't get me wrong, from a software development standpoint, "AI" tools have helped me with pet projects and with revitalizing old NIN websites I somehow still maintain.
But executives fucking love AI for writing, and for solo interaction. Creatively incurious people simultaneously blasting out visual garbage and suggesting that "if you see an em-dash then it's written by AI" just feeds into this idea that the way you make Artificial Intelligence seem intelligent is by making sure to dumb down the people who use it. The CEO of Shopify tweeted proudly that he spends more time talking to AI than to real human beings, and C-suite sad sacks all over LinkedIn nodded excitedly as they reposted "This is the future".
I write the way I write because I studied writing in high school and college, and in part because I spend too much time on forums, and have been blogging since 1999. I see the stuff that people get AI to write for them, and it's just as bad as the images and video people generate.
Mark Z talking about how people on average have three friends is mortifyingly sad. Suggesting that AI personalities on Meta platforms are going to make this better is embarrassing, and the only reason it won't be completely ruinous to civilization is because he doesn't understand how people socialize (RIP Metaverse, you were never a fucking good idea).
And all my vitriol and negative energy does not come from a place of ignorance. I've used LLMs for programming, and I've signed up for tools like KlingAI to experiment with animating historical imagery. I get some interesting-looking results from that, but before I even get to the input screen, I'm shown a page full of what other people are generating, and it's all fantasy women in skimpy clothing, and my god is that ever depressing to see. Go to a fucking museum. Go to a concert. Go to the park. Talk to strangers.
The internet as a source of information has been irreversibly poisoned. God, I could go on. Can you tell I don't have an outlet where I feel comfortable ranting about this anywhere else?
You know I'm a fan of your work, Matt - so it means a lot you feel comfortable posting and discussing shit here. Thank you.
"Go to a fucking museum. Go to a concert. Go to the park. Talk to strangers" This. A thousand times this. I would add to this - get a dog as they increase the likelihood of going to a park and talking to strangers. We are wired for connections with our communities.
Matt speaks the truth.
👍😁 Yes! Get dogs & take them walking 👍 Most of my daily interactions (I live on my own) are with dogs & their humans while I am out walking OR with parents & their littlies at the local playground halfway through the walking cycle 🫂 I have neither dogs nor littlies & the small connections are good for mental health 💜
I don’t want more than three friends. Why would I? I’m not living in a pizza advert. Even three friends is a bit much tbf. Quality over quantity is my motto. (Also “people suck” is another but not very fair on those people who don’t suck)
I agree. Three is enough. Any more and it's just too hard to keep up the relationships. You just spread yourself too thin.
I feel like the internet has always been full of derivative, horny, soulless content. It's just depressing to see a slice of it. People were already lazy, incurious, and overstimulated, and AI makes their output faster and louder.
So is the problem that AI is so clearly going to level up the wrong parts of the world that it's irrelevant if it could also be used for good? Is it comparable to the idea that maybe if we could, we'd sacrifice the one or two TV shows worth watching to wish away the tons and tons and tons of media excrement that shape the world we live in?
I use em dashes a lot. Sometimes-- where they're not even necessary.
I'm not an AI though...am I?
I use them constantly and I will never stop! It's the way my brain operates.
There are dozens of us-- dozens!
Well I can’t vouch for you, and that profile pic is a cat…Joe G goes on the suspect list!
Noooooooo
Right?? I immediately bristled. Don't come for my em-dash, bro. You'll have to rip it, along with my Oxford comma, out of my cold, dead hands.
The problem is that with this whole "AI"/"LLM" excitement we're still around the first peak of the Gartner Hype Cycle (https://en.wikipedia.org/wiki/Gartner_hype_cycle). I don't know if we're still on the way up or already on the way down, but I'd bet the former, and it's going to keep getting worse before the balloon pops. There's definitely some value in the core technology, but a lot less than the current claims suggest, and a huge amount of disappointment (and lost investment) along the way.
I'm the techie in tech and have been for decades. I'm seeing a lot of the same, and it's coming from people who should know better. Until recently I didn't have to deal with the corporate bullshit mentioned in David's pinned post, and I have to wonder whether the expansion of useless, time-wasting tasks is down to AI or just inept individuals. Though as things sit I'm pretty sure I'm not going to have to deal with it, one way or another (whether that makes me Kalden/Mercer from The Circle or a John Galt reference, I don't know).
YES. Thank you. As someone who taught college writing for 30+ years, this crap just depresses me no end.
OMG you're my people!! I'm glad you exist!
I hate generative AI. I hate it so much. It's so dangerous, and it also massively affects your ability to think critically and creatively. It's a disaster, even without all the other things like environmental impact, theft of IP for training and all the rest. I wish it would disappear.
Don't tell me you read this piece that quickly, Rowan!!!
(But I agree with your sentiment haha)
I am a fast reader and also in hyperfocus mode 😂
I take it back - I am just jealous, as I read at the pace of a slug.
Just saw this article from Rolling Stone about ChatGPT-induced psychosis too, which is horrifying:
https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
Last week I asked ChatGPT for references on a specific issue I was thinking about. I knew they existed, and it spat out about six refs that looked convincing (researchers I would expect to see, journal details down to page numbers, etc.). But when I tried to follow up, not a single one of those papers actually existed in the specified edition!
So scary. Imagine that happening with 'fact checking' of conspiracy theories and government lies.
If I want referenced research I use Perplexity. Provided my question is clearly detailed, all the refs and info it has given me have been spot on.
Well that's shocking! And I'm surprised!
Had this conversation yesterday - if I'm struggling to write something, it's *because* it's important.
Not sure how true this is for others, but for me the writing process IS the thinking process.
Am very aware of the teeth in the saying "use it or lose it" and I don't want to lose the ability to think through how to express something challenging.
Totally agree! Writing to think is a key part of how I figure things out
I'm the same, in that I'm often doing things that don't have prior reference, and so the solution to them isn't going to come from some mashing-up of prior history. Maybe it could help with collecting the previous work and history, but I'm not in any position to trust what it gives, so there's no value there. I've seen the outputs described as "an excitable junior engineer who types really fast" or "an overly confident 7-year-old", and when you think of them in that context you take a different perspective on the output generated and how much you need to question it.
Yes! The research being done shows that using genAI regularly affects your ability to think critically and creatively, and express yourself in writing! It's absolutely a use it or lose it thing, and a lotta people are basically throwing it away without even realising
👍💯 I have written all my life (poetry & stories when young, reports/articles/training manuals when older) & it is totally my way of processing & prioritising & clarifying - taking that away is how I imagine having dementia 😱
Yeah, this is exactly it. I’ve been mucking around with LLMs since before they were uncool, back when I was getting early versions of GPT to make up Mike Hosking columns, and the worst aspect of these new ones - in addition to all the stuff I’ve written about them here, which seems to be holding up pretty well! - is that they can sometimes be very useful, and are sometimes the opposite. I’ve found it about equivalent to flipping a coin. The issue is that you *often can’t tell* if it's helpful or not until you've done a bit of work. And sometimes you can't tell at all. That's because, as Dylan outlines, these things are built for this; bots training on users whilst spewing nonsense and becoming increasingly obsequious and convincing are just doing exactly what they're fundamentally designed to do.
The thing I haven't seen mentioned much is that we were already conditioned to enter some text into a box and trust a computed output; that's what Google and others trained us to do for years. There were always problems with that too, around confirmation bias and engineered responses, but this feels an order of magnitude worse.
Meanwhile, a huge swathe of the mostly-wrong field of "AI safety" is concentrating on preventing an entirely hypothetical and wildly implausible superintelligence apocalypse, while the incredibly real actual harms that these things create grow and spiral. I warned about bots trained to produce CSAM and non-consensual porn in a previous article; that and similar horrors are absolutely coming to fruition now - as 404 Media reported today, it's nearing the mainstream. But I have to admit I never thought of an LLM using pre-programmed cult techniques to help users to self-prompt into a kind of induced psychosis. Nightmare shit.
And right now is the best it'll ever be.
I'll admit something embarrassing: I was feeling particularly low one day over the most recent holidays, and not wanting to spoil anyone else's mood and in dire need of someone to talk to, I turned to AI. Claude, in this case, as I had heard it was the more "emotionally intelligent" one. So I poured my guts out to it, it gave me a few seemingly good responses that made me feel a bit better, and that was that. After the holidays, I signed up for a real human therapist. Somewhat ironically, the self-consciousness that I need to talk to a therapist about is the same thing that would keep me from using AI as my therapist.
But I can see how easy it would be to become dependent on it! It felt very comforting in the moment, and while I'm sure it was telling me what I wanted to hear, what it said actually made sense. And while it wasn't all that different from what my flesh and blood therapist would say to me later, I also don't trust it enough to rely on it long term.
I think social media, at least for me, can amplify loneliness. Sometimes it's like being at a party, and watching everyone else have a good time while you stand in the corner by yourself. I think it's actually a bit predatory of these AI companies to offer Friendship as a product.
🫂 The difference between what you did & those who go totally down the rabbit-hole, is that you retained the self-awareness to recognise you couldn't "trust it" as a long term solution 💪
Sends link to this article with passive-aggressive text to person who uses it for everything.
🤣
Of course I did! Instantly!
SHOTS FIRED 😂
There is a massive wave of this link being used passive aggressively right now…if I’m in any way topical!
Education is key. Kids need to be encouraged to question everything now. With AI, the brain is convinced something is real pretty much instantly. Is that video of a rock slide crushing cars on a highway real, or are the physics impossible? Do people really have seven fingers on each hand, or is it AI? Just because an article appears to be academically accurate, is it really? Research, checks and balances need to be made (but which sites can you trust, especially if the real truth is behind a paywall?). However, humans love the path of least resistance, and it's easier to believe everything is true/real. Unfortunately, most people viewing the current output through vastly horrifying social media channels won't even consider that something might be fake; the default is to believe, unless time is taken to prove otherwise.
Educational institutions need to be concentrating on teaching kids to think critically about everything now. However, with the current (seemingly international) attacks on education, science, and the arts, books being banned, free speech under attack, etc., I worry that the human ability to sift the wheat from the chaff will be lost forever.
Really terrifying thought tbh
I find AI aggravating. I often google things and ask questions related to medical and psychological topics, sometimes to clarify unusual diagnoses for court reports. It used to be fine: I would ask a question and get links to various articles and research. Now this stupid bot thing tries to answer questions, makes up answers, and tells me things that I know are not true, i.e. it lies to me on a regular basis, and that is the first thing that comes up, before any links to articles. I ignore it as much as possible, and never use any information it generates. And I hate the puerile summaries newspapers have taken to putting above the comments pages on articles, which allegedly summarise what people have said. Complete bollocks.
Within a week of Google's AI summaries appearing, I had evaluated other search engines and switched over to doing 95% of my searches on DuckDuckGo. I do a lot of online searching for my work, and can recommend it, unless I'm specifically looking for academic research. Then I use Google Scholar, which is brilliant and has no AI summaries.
🤷 Not relying on it for anything important, but I have been using DuckDuckGo from my Firefox browser, & noticed last time there was an "AI" tab alongside the others like location (which incidentally it has routinely ignored when giving me results!) but the "Time" tab had disappeared & I often like to see results from the last month or week, depending what I am looking for. Haven't been back since to see if it was just a glitch, but if it's staying like that I will have to find another search engine 🤬
I'm a researcher. Last week I asked an AI chatbot to find specific peer-reviewed papers for me with particular key words. I used a similar process to my usual database searching, hoping that I'd get some other papers published outside of the big academic machinery. I did - I got about 20 hits, and some of them were amazing. Including things like book chapters I'd apparently authored myself but forgot about! Huh. I instructed it to keep to 'real' papers. It apologised and did the same thing again - but this time included fictional DOIs. In all, it generated 4 almost entirely fictional reference lists. Each time I called out its lies, it complimented me on wanting only truth and then lied again. It was kind of fun to do, and convinced me that my job's safe.
The challenge is for others to also understand that those results are just useless gibberish, because there are too many who are currently seeing something that "looks good" and that's sufficient for them to just trust the output and results without doing any validation/verification.
Yep. This exactly.
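For anyone who wants to actually do that validation step, here's a minimal sketch of one way to check whether a cited DOI exists at all, using the public Crossref API. The DOIs below are illustrative (the first is a real Nature DOI often used in Crossref's own examples; the second is invented), not citations from this thread:

```python
# Minimal sketch: check whether a cited DOI resolves via the public
# Crossref API. Crossref returns HTTP 404 for DOIs it has no record of,
# which is a strong hint that a chatbot-generated reference is fabricated.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Example DOIs: the first is real, the second is made up for illustration.
for doi in ["10.1038/nature12373", "10.9999/chatbot-invented-this"]:
    print(doi, "->", "found" if doi_exists(doi) else "no Crossref record")
```

It won't catch every fabrication (a bot can also attach a real DOI to the wrong paper), but it filters out the purely fictional reference lists described above.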
What terrifies me as a health care professional is the absolute enthusiasm people seem to have for incorporating AI into patient care. It's pretty good at spotting cancer on scans, maybe, but I've seen people wanting it to diagnose and treat patients, or write progress notes . . . AI lies so much. Why would we trust it with our fragile meat sacks?
I also see a lot of enthusiasm for it in students that I tutor . . . how is asking AI, famous for hallucinating sources and getting things utterly wrong, a good plan for writing essays or studying for tests? It scares me - the people studying this way are our next generation of health care professionals, and they've bought into the AI world to a degree that seems to ignore how flawed a tool it is. It's not a promising outlook for our next generation of thinkers and doers.
So pleased that you resist AI, David. The ability to write well is a superpower, so why would you surrender it to anyone or anything?
I use AI a lot, and objectively I can see a lot of value in it. I'm always surprised when people who I otherwise respect will respond with an internet-style pile-on to this topic. I agree with the many concerns, but is it too scary to admit that it's an incredible achievement AS WELL? Is this just wanting to have a straightforward position to defend, or is it almost like a question of refusing to separate the art (AI output) from the artist (Evil big tech and a vault of stolen IP)? Or am I really on the wrong side of history here, thinking something's cool when I shouldn't?
Leaving that aside, a practical thing to add to this discussion is -- if sycophantic AI is the problem, have you tried asking it what it thinks you want to hear in the context? Or telling it? There are some questions you can ask AI and verify the result. But if you're asking a question you can't verify (say, asking it for advice about a relationship issue), one of the most useful things you can do is say "Argue this from the other perspective: what would you say to the other person if they came to you with this?", and I think that's one of the places it shines. It's not objective, but it's definitely better at holding multiple viewpoints than most people, if the user asks it to. And it can also be hearteningly good at holding the line if you're trying to get it to agree with you about something that it shouldn't.
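If you wanted to make that "argue the other perspective" habit mechanical, here's a sketch of what it might look like, assuming the OpenAI Python SDK with an API key in the environment; the model name and the sample question are placeholders, not anything from this thread:

```python
# Sketch of the "argue it from the other perspective" technique described
# above. Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY
# set in the environment; model name and question are placeholders.
from openai import OpenAI

client = OpenAI()
question = "My flatmate never does the dishes and I'm furious. Am I right to be?"

for prompt in [
    question,  # the sycophancy-prone framing: the bot tends to side with you
    "Argue this from the other perspective: what would you say to my "
    "flatmate if they came to you with this? " + question,
]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```

Comparing the two answers side by side is the point: it makes the sycophancy visible instead of letting it pass as advice.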
I have been using ChatGPT since not long after it emerged. People seem to forget that it is not human and can only operate on the information it is given. If you feed it crap questions, misinformation, etc., it will give crap back to you. I only go to Perplexity for research, but use ChatGPT regularly for monitoring my writing. The directions I give are specific. It has greatly improved in the last year, and if an answer seems out of kilter I immediately examine what I asked for and find my question was ambiguous. It does not have emotions.
⁉️ I guess the answer is it can be BOTH good & bad 🤔 And the PROBLEM is relying on something if it has been misled by the user's imperfect questioning & parameters, and apart from those people NEEDING accurate info, who bothers taking the time? It is after all WHY they are doing it instead of spending the time writing & investigating for themselves 🤷
I definitely see your point - it is impressive technology, and the potential (if realised, and I honestly wouldn't know enough about it to comment on that) is enormous. However, the ethics of using libraries of pirated work to train AI models is sketchy at best, and for me the ethics are the part that I can't get over.
The sheer scale of information required for LLMs to learn from makes creating bespoke material to use impractical. And so enters out-of-copyright IP. OK, sure, although I would express concern over the potential biases introduced by primarily using material produced prior to 1955 (assuming the author died today!). Definitely not saying that biases don't exist in modern material, but it's always important to consider who isn't in the room, etc. etc. And so, to work around those limitations, we've seen evidence of pirated material being used. And we're supposed to be OK with people's work being used and remixed for profit? I wouldn't be OK with my work being used in that way without permission. Not to mention, we don't know where the information that comes out of ChatGPT etc. is specifically coming from - how do you know that it's from a reliable source, or whether the author of the kernel of information was a terrible person who you wouldn't want your own work associated with? JKR is obvs trash (speaking of people not to associate your work with, lol), but 'never trust anything that can think for itself if you can't see where it keeps its brain' stuck around in my head all these years for a reason. Solid advice.
I think my point is that the 'art' and the 'artist' here are so entwined that I don't think you can ethically untangle them - this isn't a 'death of the author' essay; it's corporations prioritising profit (or cost savings, as at my workplace) over humanity. And I mean humanity in that Big Thought sort of way that put man on the moon - hopefully that conveys the feeling of what I mean! We're capable of such greatness, and AI is likely to be a part of that - it has too much potential for it not to be. But it just feels scary how quickly individuals are willing to set aside ethics in pursuit of outsourcing thought / research / writing / creating / friendship(??).
Sorry, hopefully this wasn't too preachy - tools are helpful, they make us human; I'm just thinking about whether the cost of the tool is worth it. New technologies always raise this question - that's how we got labour laws out of the industrial revolution!
Proud to say I have never once knowingly or by choice interacted with or used any AI chat bot. I'll just be over here shouting at clouds with David (just as soon as I send this link over to my friend who uses chatgpt regularly and likes to point out all the cool things it does for her, to which I respond good for you but AI bots are still terrible)
I've been so frustrated with all the AI bullshit being forced into everything lately. Had to get a new phone recently and it was impossible to find one without AI this and AI that, shoved into every conceivable function. I disabled as many of the options as I could find but it seriously feels almost futile to resist at this point. It's fucking everywhere 😭
IT engineer here - never used any AI tools either, and quite proud of that.
When I can remember, I add "-ai" to my Google searches to stop it prepending that distracting bit of AI-generated slop that always appears first.
e.g. "cats -ai"
You aren't alone in your disdain for companies wanting to crowbar AI into every tool imaginable.
Kia kaha
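For what it's worth, the "-ai" trick is easy to script. A tiny sketch - it just appends the operator to the query before opening the search, and it relies on unofficial behaviour that Google could change at any time:

```python
# Tiny sketch of the "-ai" trick mentioned above: append the operator to
# a query and open the resulting Google search. Commenters here report it
# suppresses the AI summary; this is unofficial and could stop working.
import urllib.parse
import webbrowser

def search_without_ai(query: str) -> None:
    url = "https://www.google.com/search?q=" + urllib.parse.quote_plus(query + " -ai")
    webbrowser.open(url)

search_without_ai("cats")  # opens a browser tab for the query: cats -ai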
I just learnt that if you write Fuck or Fuckin or any such offensive term in a search then it also kills the AI responses! Of course you could also just write -ai like an adult....
write -ai like an adult? nah.... fuck that
Excellent work by NZ's Strongest Man there.