I just discovered machineslikeus.com. @steve used to write for the website, though it’s now only available through the Wayback Machine at archive.org. So much great stuff he wrote back in the day, but unfortunately the archive only lists the first ten pages of Steve’s email debate with a particularly smug academic type regarding the seat of consciousness.
I feel like Steve had him on the ropes, as it were. Perhaps it’s a debate lost to the mists of time but I’m dying to know how it ended.
Call me what you will: sycophantic, a zealot, a groupie. These things are all true, but no one in academia seems to care for the holistic nature of biology. As Steve has said many times (and I’ve honestly felt this is the real intrinsic truth of the matter for decades now), “scientists” or “academics” are often more concerned with looking clever and releasing papers than with actually solving or understanding the problems they are studying. So much effort goes into creating (again, Steve’s words) “toy problems” which are so reductionist and simplified (or, in Joejohn’s case, simultaneously overcomplicated and oversimplified) that they have little bearing on reality.
Too few researchers are actually in it for the passion of discovery. It might just be the state of the modern world, but research always seems to be bound inextricably to finances. Thinking is free, and consciousness is just an emergent feature of life; unfortunately, modern life is expensive.
There’s a very legitimate reason that AI hasn’t progressed beyond chatbots, and that reason is this: researchers are often more concerned with reproducible results and flashy mathematics than with truly understanding the dynamics of inherently messy and difficult-to-reproduce systems such as the brain.
The version of the Turing test that I subscribe to is the one from xkcd ( https://xkcd.com/329/ ), and we just aren’t there yet – in any chatbot conversation, by the time I get three prompts deep it’s all gone to hell.
Counterpoint: Current-generation LLM-based chatbots are excellent at the xkcd 329 Turing Test.
“Convince the examiner that he’s the computer”.
That could be fulfilled either by outright AI psychosis, or just by people’s acceptance of plausible-looking but intellectually unsound arguments about “philosophical zombies” and the like. Enough for people to start questioning “am I really any different from a computer that runs tokens through a probabilistic neural network?” etc. “You make some good points, I don’t know who I am anymore.”
I do really appreciate Steve’s “holistic” understanding of a broad range of topics. I feel like AI science needs that – like there are too many people who understand technical problems without the massive context they’re meant to operate within, and that’s very important for even considering something like intelligence.
I can’t remember if Steve has made any statements on the subject of self-driving cars himself, but I’ve been trying to learn more about AI lately (thus rediscovering his work and finding this forum) and I’d love it if people of actual intellectual repute would just stop talking about autonomous vehicles, which are a deeply terrible idea.
Driving is not a mechanical problem. It’s a judgment-making, predictive (not just physical prediction), social, and communicative process. That is, everything computers are bad at. It’s only ever gotten worse the more I’ve examined it (like realizing road markings are socially-constructed imaginary barriers that our brains can treat like physical boundaries).
Every problem autonomy promises to “solve” can be better addressed without any new (or even recent) technology. And autonomy, on the other hand, creates so many new problems most people haven’t even begun to consider.
Or, for a more simple and obvious “understand society please” example, “lowering medical costs with AI doctors/therapists”. 😬
And, on re-reading, I want to make it clear: That’s not just a “public healthcare is obviously better” comment. Even within a “private” (fealty-based) healthcare system, it’s a failure to understand the problem, and the new problems that would be introduced by the proposed “solution”.
I guess they’d be deferring their understanding of society to the business/social leadership they correspond with, which might work – if the people in those roles were actually doing their jobs well.
I’ve come to the conclusion that social expertise has significant value, and the problems caused by managers, executives, politicians, etc., are from the absence of those roles’ expected contributions. Scientists are working at the wrong problems because they’re receiving the wrong instructions (like “make self-driving cars, they’re a great idea and we’ll pay you lots of money”).
(Also, hi. I didn’t see an “introductions” thread, so I’ll just make my first post a 7-month late reply that hopefully includes a meaningful sample of how I think.)
Hello! Just sticking my head up over the parapet for a moment, preparatory to trying to return to work (very gently)!
> Perhaps it’s a debate lost to the mists of time but I’m dying to know how it ended.
I can’t remember how it ended with Jonjoe. I expect we agreed to disagree – that’s usually what happens. He had his axe to grind and I had mine. Was this where I had to contend with a dumb “consciousness is electromagnetism” argument or was that someone else? Scientists who seek the “holy grail of consciousness” in some kind of weird physical phenomenon (or even a conceptual, statistical theory like quantum mechanics) are just showing up how Cartesian dualist they really are, despite their protestations. It’s vitalism, plain and simple. But everyone wants an angle. And (most) physicists understand everything in physical terms, rather than understanding the physical terms themselves in organizational terms. There’s no hope for them… 🙂
> I’d love it if people of actual intellectual repute would just stop talking about autonomous vehicles, which are a deeply terrible idea.
Hear, hear! For one thing, if we genuinely wanted to make transport safer and more efficient, we wouldn’t start from there. But that’s not really why people do it, of course. They have ulterior motives.
With the consciousness debate, I always love to ask: “Are all humans conscious? And would we test them the same way once we’ve got a good definition?”
This always derails the complete discussion in a completely different direction – especially when I remind them of the consequences this would imply.
Always fun, especially when I bring livestock animals, AI, mentally disabled people, and young children into that conversation, and apply whatever definition of consciousness the other person uses to those groups…
…I know why I named my favourite axe Context: everything is better with context! *slams big axe on the ground* No need to sharpen when there is enough force behind the axe!
