Joshua Seigler · 2025-04-24


---
title: Thinking machines
description: Some thoughts about large language models
---

## Work

There's an exchange early in the classic '80s movie TRON. Some scientists are talking shop:

> ALAN: Ever since he got that Master Control Program set up, system's got more bugs than a bait store.
>
> GIBBS: Well, you have to expect some static. Computers are just machines after all, they can't think...
>
> ALAN: They'll start to soon enough.
>
> GIBBS: (wryly) Yes, won't that be grand -- the computers will start thinking, and people will stop.

Gibbs has a point. The modern vision of a utopian future is one where people are relieved of work, free to pursue leisure or to exercise their creativity in art, writing, and poetry. Setting aside the irony that creative works are the first and most visible applications of this technology -- is that imagined future actually a good one?

When I was a kid, I remember a day of yard-saling in the family minivan. It was early summer, a hot day. The windows were down, and I argued that if the vehicle had good air conditioning, what was the point of getting all hot? "To get used to the warm weather" seemed like such an unfair, dumb answer. We were sweating back there! Later in life, I took a short trip to Arizona in August. Everyone scurried from building to building. Where the sun was doubled, reflected off of glass skyscrapers, it was like a convection oven. It was actually unsafe to spend long stretches outside unprepared. But when I returned to Massachusetts, for the rest of the summer 85 or 90 degrees Fahrenheit felt like nothing.

All that to say, the work that is required to write a good email or article, to make an illustration, or to learn an unfamiliar API or programming language, isn't just about achieving a result. That effort maintains and builds our abilities. Work pushes us to connect to each other for help, or to persevere in doing something difficult.

## Empathy

I'm going to take as a given that this technology does manifest a type of intelligence. Sure, it's "just" multidimensional vector embeddings. But it is obviously intelligent in some way. And it is utterly without empathy. It couldn't possibly have empathy -- it doesn't even have a body. Intelligence without empathy is dangerous. 1 2

I think this new intelligence has been around since before LLMs, in advertising and social media. The seminal LLM paper is called "Attention Is All You Need" -- a prescient title. The algorithms that run attention-oriented content feeds are a sort of ancestor. Human attention is a precious thing, and these systems (LLMs as well as feed algorithms) are oriented towards capturing it. AI images have a magnetic quality. The early iterations would let you down if you looked closely at fingers or text, but even those images grab your attention first. This thing is not a person. It doesn't have a body, and it doesn't care about us.

## Truth

It also doesn't care about truth. At a basic level, an LLM is a document completion engine: you give it text, and it extends it. No amount of pre-training or guardrails makes it completely truthful, because it is built to be convincing. When it references something that doesn't exist, or is just wrong, we call it "hallucination." But there is such a thing as truth, and truth is not necessary to be convincing. A person with this kind of disregard for truth is called a con man, or a bullshitter. 3
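To make "document completion engine" concrete, here is a minimal sketch using a toy bigram model (a hypothetical stand-in for a real LLM, trained on a made-up ten-word corpus). It only knows which word tends to follow which; nothing in it checks whether the continuation is true, only whether it is statistically plausible.

```python
from collections import Counter

# "Train" on a tiny corpus: count which word follows each word.
corpus = "the cat sat on the mat the cat ate the fish".split()
following: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def complete(prompt: str, n_words: int = 3) -> str:
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        options = following.get(words[-1])
        if not options:  # no known continuation: stop
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # a plausible continuation, nothing more
```

Real models predict from far richer context than one preceding word, but the objective is the same shape: extend the text plausibly. Truthfulness has to be bolted on afterwards, if at all.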

## Warnings

Science fiction is littered with cautionary tales about inhuman intelligence. For that matter, so is myth: genies give people whatever they want, and because people have self-destructive desires (like the desire to avoid work), it goes wrong. There are exceptions, as when Solomon is offered anything he wants and asks for wisdom 4. Wisdom is having a strong relationship with truth. And because he chooses this, he also ends up with wealth, long life, and other things that would destroy an unwise man.

I am not particularly wise. So I intend to avoid using LLMs as much as possible.