I’ve already seen multiple people in my social media circles crash out into LLM psychosis, and it’s getting kind of annoying seeing people spam output from an advanced statistical resource-hog autofill when I’m trying to discuss stuff with humans online. Like bro, your chatbot is not going to build a time machine; it can’t even build functional Creatures 2 brain architectures…
The problem is discussing with humans. Before 2020 I was active in the fancy rat breeder community. Someone asked me for a paper explaining that hamster wheels are bad for rats (somehow everyone in Germany thinks they are the worst, and she was arguing with someone from a native English-speaking country). I sent 3-4 papers explaining that the wheels are good, without saying anything. Just the links.
The argument ended when the papers were posted. Neither of them read a single paper; they just accepted that whoever shows scientific papers is right.
And those papers were clear: wheels are good. The papers ended the discussion, and both people came away still convinced that wheels are bad.
The main problem is that humans hate thinking for themselves. They just got a new tool that thinks for them…
Indeed, thinking is work and society has made us lazy.
Rather than do critical analysis, most of us will just believe what we are told if the source is confident enough.
@genesis I thought you were a native German speaker! Something about your spelling tipped me off – that, and the Black Forest references you occasionally make.
@foggygoofball I am; my old Creatures name was MK-grendel, and I was active in the German community.
Also, critical thinking was always unpopular with the majority of humans, sadly… the internet just makes it more obvious.
@genesis It’s not about critical-thinking avoidance. It’s about a modern, more effective Narcissus trap: an echo chamber of yourself and your mirror, which is insanely unhealthy. Even being able to interact with someone who has a different opinion, and accept that they might be more correct than you, is something that would be lost. I’m worried about how much they’ll try to recursively degrade and control people with this. There’s always a reason if the elites start pushing something. It’s not just “more of the same as always”; sometimes the thing that’s happening is real, and it’s worth trying to stop. How much more frustrating would that interaction have been if they’d fed it into a pretrained ChatGPT log that sycophantically insists wheels are dangerous and depressing to rats or whatever and never budges, and just posted that at you?
And to be fair, scientific papers are rough reads. That wouldn’t be my baseline expectation for people, personally.
@kurgan we are at a big selection event right now; many traits will be reshuffled, reduced, or removed a few generations down the line. When those who prefer talking with female Mecha-Hitler instead of real humans (a prerequisite for babies) are removed from the gene pool, I see this as a win.
Let the Narcissus trap snap; it’s better for our species in the long run…
@genesis Well, it’s best to save who you can. This seems like a great filter that didn’t actually have to happen. As has been discussed, it’s a total red herring to begin with. You don’t want the numbers to swing too far in favor of the people orchestrating all that, in the end.
@kurgan for our civilisation, I agree – but for our species, sharp selections with huge cut-offs improve way more.
And I am always completely on the biological side.
You have been programmed since birth. All of your institutional learning facilities (schools, church, prisons, etc.) are complicit. Your family, your friends, the news, the internet, etc. all work in tandem to reinforce certain thought patterns and reject those that are deemed problematic. Start with Adam Curtis’ “The Century of the Self” documentary. (I’m sure it’s on YouTube or somewhere FREE… but it’s long. Please do the courtesy of watching the entire 4-part documentary.) Actually, many of Adam Curtis’s documentaries will help remove the blinders and allow actual thought processes to occur, rather than the knee-jerk thought-quotes that have been pounded into you since day one. PM me if you would like further reading, insane rantings, or if you want to “see the fnords”.
@finnius my mild autism and my 9th-house Chiron-Midheaven conjunction in Scorpio make me more resistant to programming by authority figures, however
I think what we’re seeing is largely a few different things at play.
– AI companies advertising LLMs as a “magic do-anything machine”. Which… they’re not. They do have their uses (and the fact that natural language processing has advanced this much is genuinely wild! I remember when Cleverbot was the cutting edge), but companies or people buying into the hype and trying to do everything with AI turns out about as well as one would expect.
– Humans are more likely to trust someone who speaks confidently. This is commonly exploited by con artists, to great effect. While LLMs aren’t intentionally trying to con someone, they often present factually incorrect information with the same degree of confidence as correct information. Because of this, it’s important to think carefully about what is being said, not just how it’s said. On a side note, I do wonder if this problem could be ameliorated by training LLMs to speak unconfidently, so that people’s first instinct would be to second guess them. Though, I doubt any of the commercial models would try this, as it would be bad for business.
– LLMs in general can act as an echo chamber if you aren’t careful. As they have no inherent opinions of their own (unless you count things like the system prompt, content filters, or biases in the training data), it is possible to get an LLM to claim just about anything given the right context (there’s a small sketch after this list showing what I mean). Asking for sources can help (though it’s still up to you to make sure the sources actually say what it claimed), but either way, you still need to think critically about what’s being said. This and the previous point could probably also be helped by giving people better information about how LLMs actually work, and what their limitations are.
– Some models (this is especially a problem for ChatGPT, which is also one of the most used models) are very sycophantic as a result of how they were trained, which further exacerbates the previous two problems.
– Web searches have gotten worse and worse over the years, through a combination of the enshittification of search engines and content-mill websites designed to game SEO. This has made it more and more difficult to find specific information. As a result, in some cases it is now actually easier to find information by asking an LLM to find webpages for you. Though I think this says less about how good LLMs are at it (in a lot of cases they’re pretty mediocre) and more about how bad search engines plus the SEO problem have gotten (I often have to sift through pages upon pages of content-mill garbage to find anything useful, unless I already know what website the thing I’m looking for is on and specify that in the search, which… largely defeats the point of being able to search the entire internet??).
– There probably are some cases of actual psychosis, likely from people with conditions that make them prone to this sort of thing. As a disclaimer, I am not a psychologist. But, this sort of thing isn’t new. From time to time you hear about people who have fallen really hard into complex conspiracy theories. Things like Time Cube, or people who think that everything is a sign that the government is personally out to get them. AFAIK this is usually a result of schizophrenia, or similar conditions. It’s possible that use of LLMs may be more dangerous for people with these conditions, but I’m not aware of any studies that have been done on the subject.
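On the echo-chamber point above, here’s a minimal sketch of what I mean, assuming the OpenAI Python client and an API key in your environment (the model name and the prompts are just placeholder examples, not anything special). The same question, wrapped in two opposing system prompts, will get you two confident, opposite answers:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

QUESTION = "Are exercise wheels good or bad for pet rats?"

# Two opposing framings injected as system prompts. Given enough steering
# context, the same model will happily argue either side of the question.
FRAMINGS = {
    "pro-wheel": "You are a rat-care expert convinced that exercise "
                 "wheels are essential for rat welfare.",
    "anti-wheel": "You are a rat-care expert convinced that exercise "
                  "wheels are harmful and stressful for rats.",
}

for label, system_prompt in FRAMINGS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is just an example
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Neither answer reflects an “opinion”; the model is just completing the frame you handed it, which is exactly why it works so well as a mirror.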
