Feed: Taylor G. Lunt

Entries found: 20

Halfhaven Digest 6 + Retrospective

Published: Tue, 02 Dec 2025 05:25:01 GMT
Updated: Tue, 02 Dec 2025 05:25:01 GMT
UTC: 2025-12-02 05:25:01+00:00
URL: https://taylorgordonlunt.substack.com/p/halfhaven-digest-6-retrospective

The final round of highlights from the Halfhaven virtual blogger camp
Content Preview

My posts since the last digest

  • The Barfly — A short story about a mobster or something. I didn’t really like this one. I wrote it quickly and it shows. Usually, for a short story, I’d at least like to sleep once between writing the first draft and publishing the story. That way I can see it with fresh eyes. I didn’t do that here though.

  • A Letter to Purolator — A snotty letter I wrote to Purolator. Since writing this, I’ve talked to several people who live in different areas that all agree that compared to other companies, Purolator has a reputation for not delivering packages and instead putting a note on your door.

  • I Gave My First Speech — Reflecting on my first Toastmasters speech.

  • The Confession — Another story. I’m happier with this one. I also made an audio/video version for some reason, though I don’t think anyone will watch it. But hey, nobody will read the text version either, so… Anyway, I think I want to write more short stories, since I feel like I struggle with some aspects of plot and tension and want to get better.

  • 14 Concerns About Immortality — Actually the first Halfhaven post I wrote, though it sat there as a first draft until the end. I was keeping it in the chamber in case I didn’t have time to write a post or something. The style is inspired by 112 Gripes about the French, a down-to-earth handbook given to US soldiers in 1945, meant to ease the growing tensions between the soldiers and the French locals.

Some highlights from other Halfhaven writers (since the last digest)

  • Pulling at the Gushing Thing (keltan) — I haven’t seen any other poetry for Halfhaven. It’s hard to write 500 words of poetry in 1-2 days, but I thought it had some good moments.

  • AI Models Are Pretty Decent Tutor-Stylists (Aaron) — Using AI to show what I’d look like wearing an article of clothing is obvious, but I would never have thought to upload an image of myself wearing a garment and ask AI whether it fits properly, or to critique my outfit in general.

  • Using Willpower Is a Leading Indicator of Failure (Aaron) — As some Halfhaven writers are finding out, writing regularly can require a lot of willpower. But it doesn’t have to be that way. If you’re brute-forcing something with willpower, you’re probably doing it wrong (or you just shouldn’t be doing that thing at all, because it’s not for you). This blog post gives more examples of advice that fails to account for willpower being a finite resource to be protected.


That’s it. Halfhaven is over. I wrote 30 blog posts in October/November. And so did 6 of the other participants, out of a total of 23. Algon wrote the greatest number of posts, 45, and three participants tied for the least at only one post. The average number of posts per participant was 13.1, which is less than half of the required number. I understand why. While I managed to finish, it was hard. Writing every day is hard. Especially if you still have to live your life and work full time and so on. There were many days I didn’t feel like it, or was too busy, or was sick. But the fact we had two months instead of one made it possible for me. Thanks to whoever came up with that idea. I originally thought it was dumb and overcomplicated, and I was wrong.

Inkhaven, the in-person residency in San Francisco, had a much greater completion rate than Halfhaven. It seems from the tracker like nobody missed a post? I wonder how much of that came from the encouraging environment, how much from the fact that the residents could focus on writing full time, and how much from the threat of expulsion if they missed a post. We Halfhaven participants had none of these advantages; while Inkhaven is more like university, Halfhaven is more like Coursera. I’m proud that I managed to do the hard thing in spite of the odds.

My most popular post of the ones I posted to LessWrong was Give Me Your Data: The Rationalist Mind Meld, with a score of 114. I think it hit the right balance of thoughtful and appealing to the target audience. My least popular was Unsureism: The Rational Approach to Religious Uncertainty, with a score of -7. It was my attempt at satire, which LessWrong didn’t like. I didn’t post everything there, and I’m sure there are a few posts they would have disliked even more.

I definitely improved my writing a lot during Halfhaven. I feel myself developing a voice, cutting unnecessary fluff, and bringing more structure to my writing.

Some people are going to keep posting every week, a practice some are calling “foreverhaven”, but which I call “having a blog”. I’ll probably do the same. One post every two days isn’t enough time to make posts I’m proud of. I ended up spending more than two days on some posts and blasting out others in an hour or two. I also want to do more short fiction for a while, like The Confession. I’ve already written the first draft of my next short story.

Thank you everyone who participated, even if you didn’t finish. Thanks for posting in the Discord and creating an environment where I felt I should keep posting too. Thanks for the interesting posts. And thanks for checking out these digest posts. Good luck with your future writing, and maybe I’ll see you next year!

14 Concerns About Immortality

Published: Sun, 30 Nov 2025 06:03:11 GMT
Updated: Sun, 30 Nov 2025 06:03:11 GMT
UTC: 2025-11-30 06:03:11+00:00
URL: https://taylorgordonlunt.substack.com/p/14-concerns-about-immortality

There’s a lot of concern in the media about school shootings.
Content Preview

There’s a lot of concern in the media about school shootings. But retirement home and hospice shootings are more common. Only, the shooters are microscopic, so we can’t put their faces on the news. Cancer, diabetes, infectious disease, heart disease — these bad guys focus on the elderly, but can kill people of any age. Because of this, real life is like a horror movie. You’re guaranteed to die at the end. The only reason you’re not panicking is that being panicked just makes the movie scarier. All you can do is try to enjoy the good parts of the movie, and ignore the fact that the killer is right around the corner.

Historically, there was nothing we could do about this. We had no choice but to ignore the inevitable fact that we’re all going to die, and instead focus on our legacy: the children we leave behind, and the societal, technological, and scientific progress we make. We try to push the boundary of what’s possible in hopes of leaving a better world for our children. Our great-grandparents went to the Moon. As a result, we sleep on memory foam mattresses and use cordless power tools.

But what if there were something even more important we could accomplish than going to the Moon? This is humanity’s next milestone: curing aging and all disease. Some say going to Mars should be our next goal, but stopping the hurricane of tears and blood that sweeps across our planet at all times is more important than finding out whether an alien sneezed on Mars a billion years ago. And we have millions of years to spread to other planets. The goal of curing death is more urgent. Some say we can stop all involuntary death within our lifetimes. Others say it will take longer. Maybe this will be a gift we give to our own great-grandchildren. If so, there is no greater gift we could give, and what could give our lives more meaning than giving it? If we must die, let us die knowing we will be among the last to go. But it could also happen within our lifetimes. All it would take is for life-extension technology to add years faster than the clock takes them away.

Human biological immortality is a physically possible, achievable goal, not some pie-in-the-sky idea. Those who say otherwise have a can’t-do attitude. Not the kind of attitude that put man on the Moon. Nevertheless, it’s natural to feel some skepticism. The purpose of this article is to address common concerns people have when it comes to the project of curing death.


1. “There are more urgent problems we need to solve, like global poverty, or warfare.”

Many things can be important at the same time. When war breaks out, we still need to feed the hungry. But of all the issues facing humanity, ending death is especially important. In World War II, millions of Allied soldiers fought to defend the lives of millions of innocent civilians. But the winners of World War II and the innocents they protected are nearly all dead today. The soldiers triumphed over evil, but eventually perished to a greater evil. Most of them died weeping or screaming, and left behind grieving families. This will be the fate of every human being until death — the ultimate enemy — is finally defeated.

2. “If people live forever, there will be shortages of food, water, and living space.”

Maybe in the short term. But long term, there’s no need to worry. The universe is big and has lots of material. The Earth can hold many more people than we currently have. If we’re clever, we can expand even beyond the galaxy without running out of room. For the short-term shortages, we can adapt to infrastructure limitations by rolling out life-extension technology slowly, or freezing people who die so we can bring them back later once infrastructure has adapted.

3. “Death is natural.”

Genital warts are natural.

4. “Making people live forever is playing God.”

And what a fun game to play! We’ve been playing it since the dawn of our species. Mother nature wanted us to starve, hunting desperately for every bite. Instead we invented farming, and tamed wild animals. Today, there are more domesticated farm animals by mass than all wild mammals combined. Mother nature wanted us to live on the African savanna, but we learned to live in deserts and frozen wastes, and built boats to take us to new continents. We’ve made engines to do the work so our muscles don’t have to, and made machines to think for us. We’ve broken free from the fetters of nature at every opportunity. Curing death is simply the next step.

5. “Living forever would be really boring.”

You might not want to live for a billion years. But you may wish to live for ten thousand, and you should have that right. One day, far in the future, you may decide you’ve done everything you wanted to do, and seen everything you wanted to see. You may grow tired with life itself. You may be ready to die, and choose to do so. This is the best way a human being can die, and it should be the only way. Death should be a choice. Much better than the current system of randomly-assigned, unpredictable, horrifying death sentences that would be war crimes or serious human rights violations if imposed by another person.

6. “Death gives life meaning.”

Would you love your friends and family less if you knew they’d live forever, and you would too? Would your love be cheapened by the fact that it won’t be tragically cut short? Is Christmas with your family only enjoyable because you have just a few dozen left? There’s a lot in life you can find meaning in, from the easy fun of telling jokes with friends or watching movies, to the hard work of learning an instrument or writing a novel. Meaning is inherent in the lives we live. Death does not create meaning — death is the end of everything meaningful. Death obviates. Once death is cured, life will be more meaningful, because you will have more time to grow and to learn what you find meaningful. Death does not add meaning, only urgency.

7. “Living forever would be torture. The human mind couldn’t handle it, and we would go mad.”

It’s possible human beings are not psychologically suited to immortality. If so, then that’s just something we have to fix. You might be opposed to tweaking your brain, but the only alternative would be death, which is a much greater adjustment of your brain!

Some objections like this are not reasons we shouldn’t solve the problem, they’re just part of the problem that needs solving.

8. “If we lived forever, then power would gradually concentrate in the hands of the few.”

This is another objection that’s part of the problem, rather than a reason not to solve the problem. Dividing resources fairly is hard. But it’s a solvable problem. Imagine how much easier this problem would be to solve if all the geniuses of history were alive to help us solve it! Allowing each generation of geniuses to rot in the ground does not help us solve difficult problems like this.

Biological immortality doesn’t really make the problem of dividing resources any harder anyway. Currently, while individuals are mortal, families may live forever, and either way we need to deal with a small number of rich families hoarding resources.

9. “Wanting to live forever is selfish. You should make way for the next generation. How will young people find jobs if old people never retire?”

Having a job won’t be a thing for much longer. Soon enough, machines will do the work for us. Hopefully we will still find meaning in hard work, but our hard work will no longer be tied to our income. People will reap the rewards of work mostly done by machines (as long as we can coordinate to make sure we’re fairly sharing in the fruits of their labor, which is a problem we have to solve whether or not we cure death).

10. “Immortality may only be available to the wealthy at first, which isn’t fair.”

No matter how you roll out a new technology, there’s gonna be someone who gets it first and someone who gets it last. The fact that we’ll be using money to determine who goes first doesn’t change that. Some people will get it first — sucks for those who get it last! — but even worse would be nobody getting it at all!

11. “I heard humans are already technically immortal, because of something called quantum immortality.”

Quantum immortality is a thought experiment in theoretical physics, and it’s controversial among physicists. It’s a neat idea, but not something to take seriously. Otherwise, you’d see physicists playing Russian roulette for large sums of money.

12. “Actually, immortality is philosophically impossible.”

It may be impossible to truly live forever. The universe may end, for one thing. But also, to live is to think, to think is to change, and to change is to die. Eventually the you that exists today won’t be around any longer, one way or another. You will either grow into someone new, or you will suddenly, tragically die. The project of curing death is concerned with the latter, not preventing nonexistence altogether.

13. “But what about AI? Why solve the problem ourselves if we can just have AI solve it for us?”

We can’t just sit around and wait for AI to solve all our problems. Nobody knows how long it will be until we have superintelligent AI. It may be 2 years, or it may be 200. If we cure death one or two years before we invent superintelligent AI that could have done it for us, at least we can be proud knowing we saved ourselves, rather than relying on a machine.

14. “What should I do if my loved ones die before immortality is available?”

Freeze them! It may be a long time before we can bring them back, but eventually it will be possible. Nobody seriously denies that. The biggest risk is not that it will never be possible to safely thaw someone, but that a cryonics company will go out of business, and their frozen corpses will be thawed early and disposed of. This has already happened in some cases. So, we’ll need better laws to protect the frozen bodies of people who have chosen cryonics. In particular, such a person should not be treated as a corpse, but rather, a patient undergoing a long-term treatment, similar to a lengthy medically-induced coma. Once a person is frozen, they should have a right to stay frozen until the time comes they can be restored.


People are naturally suspicious of ideas that sound far-fetched, and the idea of curing death is the most far-fetched-sounding idea ever. But there was once a time when going to the Moon sounded just as impossible. Curing aging and all diseases is on the table. It will happen. We should treat it as a civilizational goal, and make it happen as soon as we can. We should orient our society around achieving this goal. It will be a difficult fight, but at least it’s one we can be proud to fight. When people have set their minds against something, they can easily imagine all sorts of objections and what-ifs. But when humanity sets its mind to accomplishing something good, humanity pulls it off.

The Confession

Published: Fri, 28 Nov 2025 05:04:35 GMT
Updated: Fri, 28 Nov 2025 05:04:35 GMT
UTC: 2025-11-28 05:04:35+00:00
URL: https://taylorgordonlunt.substack.com/p/the-confession

Sharp was the feeling in my lungs when I inhaled the smoke.
Content Preview

Sharp was the feeling in my lungs when I inhaled the smoke. My skin burned. The stink of sulfur was thick and foul. This was Hell.

I crawled naked across the blackened plain, guided only by the faint light of a distant river of fire. I had nowhere to go. No reason to believe anywhere else would hurt less than where I was, but still, I kept crawling.

My fantasies were elaborate. I imagined making a deal with some demon. To take on some extra torture, so that in exchange I would be granted a single breath of clean, fresh air. Just once, I wanted to feel what that was like again. Or, I would pray to God every day, and as a reward for staying strong, I imagined he would permit me one mouthful of cool, fresh water to soothe my cracked tongue and aching throat. But no matter how hard I prayed, how hard I imagined, every day in Hell was the same, without a moment of relief.

I feared I would never again feel pleasure or satisfaction. That, as punishment for what I had done, I would never feel a single happy moment ever again.

Then I saw her. She was lying naked on the black earth, staring vacantly into the distance. She looked as wretched as I imagined I did. She was frail, with charred skin and chapped lips. Her hair was all burned away. Yet I could see, through the scars of torment, that she was still beautiful. Even more so when she noticed me, and greeted me with a faint smile. How had such a pure thing ended up in Hell?

I stayed with her. We talked, though her English was not good. I learned her name was Naung Ta, and she had grown up a beggar in Myanmar. She had been killed in an earthquake.

We talked about a lot of things, but most of all she wanted to talk about how she ended up in Hell. Naung Ta was stricken with guilt. She had done bad things to survive. Things which were against God. When she and her sisters were starving as children, she had stolen bread. That wasn’t so bad, though she felt guilt even for that. But when she was older, she had sex outside of marriage. Sex with strange men, in exchange for money. By then, her mother and father had been killed in the civil war, and she was desperate for money. She knew it was wrong. She knew it was a sin, but she did it anyway. Nothing I did could ease her regret.

I also told her how I had died — a car accident — but I avoided telling her how I’d ended up in Hell. Unlike her, I had no excuse for my sins. No starving sisters. Only pathetic selfishness.

There was nowhere to go in Hell. Nothing to do except talk to each other. So we talked for hours. Only stopping when, from sheer exhaustion, we managed to steal some fitful minutes of sleep, until we’d be awoken again by our thirst and our pain. Then we’d talk again. We grew close, but I feared she would discover the truth about me.

I should have just told her the truth, but how could I? How could I describe to her how my wife had begged me to stay with her? How I told her no — that I didn’t want my life to be ruined too? She had cancer. She was going to die. I took the money and left her with nothing. She didn’t need it, I said. She was dying anyway. I needed my own fresh start. I actually had the gall to tell her I would live for the both of us. If only I’d known how bad the guilt would be, I’d never have done it. I was young and stupid. I didn’t know better. But the devil laughs at excuses.

I couldn’t tell Naung Ta about my past. I couldn’t taint her opinion of me. Even then, I didn’t want to be burdened by my wife’s illness. I wanted a fresh start with Naung Ta. After everything I’d been through, I deserved a fresh start.

But Naung Ta was clever, and could tell I was hiding something. She knew I must have done something wrong to end up in Hell, of course, but started to grow suspicious that what I’d done had been much worse than her own sins. Eventually, she told me that either I would tell her, or she would leave.

I considered making up some lie. How I killed someone, but it had been self defense. Something like that. But instead, I decided to trust her. She was my only friend. She was all I had. So I opened up. I told her about my cowardice. How I’d been a bad husband, and abandoned my wife in her time of need.

She didn’t say anything. She just stared at me for a long time. The air seemed hotter. Finally, she spoke. “I misjudge you.” That’s all she said. Then she stood up and started walking away.

“No!” I coughed through the smoke. I ran after her. I begged her to stay. She was all I had. I followed her, pleading and begging, but she ignored me. I told her I would do anything. I told her I wasn’t the man I used to be. That I’d learned my lesson. But she didn’t care; she wouldn’t listen.

I put my hand on her shoulder and spun her around to face me. She was grinning from ear to ear. Her black eyes were now glowing red, and a pair of short, black horns sprouted from her forehead. She cackled, showing sharp, inhuman teeth.

Leathery black wings unfurled from her back, and she hovered a few feet off the ground. The tip of her spiked, metal whip fell to the black dirt. I ran.

“Why are you running? Don’t abandon me like you abandoned your wife! Don’t you love me? I thought we were in love!” the demon shrieked with laughter that echoed all around me. She cracked her whip. I felt its sting across my back. “Come on, don’t you want to cry on my shoulder? Feel sorry for yourself? I’ll tell you everything will be okay! You did your best! You didn’t mean to hurt her! You were just scared!” Her cruel words echoed loudly across the black plain, hurting my ears even as I pulled away from her. “Come, kiss me and forget your wife!”

I ran as fast as I could. I dodged into a forest of burnt, blackened trees. I could hear her searching for me, still mocking me. I prayed she would lose track of me. I told God I would change; I would become whoever he wanted me to be. But in Hell, prayers are never answered. There was nothing to stop me from getting what I knew I deserved.

I Gave My First Speech

Published: Fri, 28 Nov 2025 04:14:10 GMT
Updated: Fri, 28 Nov 2025 04:14:10 GMT
UTC: 2025-11-28 04:14:10+00:00
URL: https://taylorgordonlunt.substack.com/p/i-gave-my-first-speech

My Icebreaker speech at Toastmasters taught me to speak from the heart.
Content Preview

My hands were trembling as I went up to the podium to deliver my speech. It was my first Toastmasters speech. They call it an Icebreaker speech, since the topic of the speech is everyone’s favorite topic: themselves. I was nervous because I did something unusual for my speech, and I wasn’t sure how the audience would react.

If you don’t know, Toastmasters is a public speaking club. There are around 20 people in a club, and each week a few people give prepared speeches, and then others give them feedback. There’s also an impromptu speaking segment called Table Topics. These clubs can be found around the world, and the oldest have been running since 1924. I joined to work on my social and speaking skills; I’ve noticed I’m not that good at putting my ideas into words verbally, and I wanted to remedy that.

Faced with having to write a speech about myself, there were two ways I could have gone. One was the “school assignment” approach. Think of some shallow information about me (I have a peanut allergy; I once traveled to Costa Rica; I like to play roguelike video games), and then bundle it into a dull information-dump. I didn’t think other people would want to listen to that, and it’s not even what I’d really want to say. It’s not what my life is about.

Instead, I decided to speak from the heart. To talk about some of the real challenges I’ve faced in my life, and the things that make me different from other people. This meant my speech would be personal, and I worried how the other Toastmasters might react. Too personal, and I risked “trauma-dumping” on them. Still, it was better to try to strike that balance than to write a speech so bland I wouldn’t even want to put in the effort to memorize it.

In the end, everyone liked it. They loved it, actually, to my relief. My authenticity resonated with people. People are often afraid to be too authentic, but I think it kind of gives everyone else permission to be real, too. So people tend to like it. People won’t judge you for having struggled, especially not if it’s stuff that’s in the past and not ongoing. By making my speech personal, I was able to take the audience through some of the ups and downs of my life, and string it all together into a coherent narrative and lesson that’s maybe a little bit neater than the reality of my life has actually been, but still genuine enough for people to get to know the real me.

Everyone’s feedback was great. The practical value of feedback is obvious. I got the same feedback from everyone (besides that they loved the speech): I should increase my “vocal variety”. That is, I shouldn’t be so monotone. This is a problem I was already aware of, and it’s the advice they gave me after listening to a speech where I was trying not to be so monotone! Anyway, there was value to the feedback beyond just the practical. Feedback is emotional. I’m going to keep all the little slips of paper with people’s feedback on them. They feel special. Somehow, being told I need to work on my vocal variety by someone else has more impact than telling myself the same thing. And being told my speech was great, and what they liked about it, is also nice.

My girlfriend (who featured in my speech, and cried when I read it to her) said that I have good qualities, but I’m usually hiding at home, rather than putting myself out there. She says the feedback is proof that I have things to offer the world. That I have value. I believe that, but sometimes I don’t feel it. Yesterday, the welcoming, kind people of my local Toastmasters club made me feel it.

So, speak from your heart. You’ll be putting yourself out there, but your heart should not be kept hidden.

(I’m not gonna include my speech with this post. I was comfortable sharing it with a room of strangers, knowing most of the details would be quickly forgotten, but I’m not really comfortable engraving it into the internet forever.)

A Letter to Purolator

Published: Wed, 26 Nov 2025 03:56:39 GMT
Updated: Wed, 26 Nov 2025 03:56:39 GMT
UTC: 2025-11-26 03:56:39+00:00
URL: https://taylorgordonlunt.substack.com/p/a-letter-to-purolator

On the conceptual impossibility of failing to deliver a package to my address.
Content Preview

Hey,

I noticed there was a “failed delivery attempt” to my address by Purolator (tracking number XXXXXXXXXX). I thought I’d reach out because this is not conceptually possible.

The delivery item is a $20 tub of psyllium husk powder, so logically no signature is required for delivery. Maybe if the shipment was a wooden chest of rare African diamonds, or a Steinway & Sons grand piano, I would understand the need for a signature, but presumably if you left my psyllium husk on the porch as Amazon does with their packages and Canada Post does with theirs, it would be fine. The item is not one that would degrade or spoil from sitting on the porch for a few hours.

As for the house in question, this is a residential address, not a castle. There is no gate, and no armed guards. You simply walk up to the porch and set the package down. As a courtesy, I have even filled in my moat, so the delivery driver will no longer have to leap over a legion of hungry crocodiles to get to my front door.

In addition, the house is not ambulatory, nor is it a mirage that appears only on certain days or when certain conditions are met. The house is always in the same location, and it’s trivial to walk up to it and place a package on the porch. It is not possible for an adult human being to attempt this delivery without succeeding. As such, I can only assume it was never attempted, and that the delivery driver drove right past without even slowing down. (Are you aware they have a reputation for doing this? I once saw it with my own eyes!)

Amazon has successfully delivered hundreds of packages to this location without a single failure. If Purolator fails again, we can safely attribute that failure to a problem on Purolator’s end, rather than something unusual about my property that makes it hard to deliver to. Maybe you can think of some good excuse why this had to happen, but you should consider that Amazon doesn’t feel the need to make such excuses, and simply delivers the package every time. Needless to say, you will only have fulfilled your obligation to me if the package actually arrives.

Hoping the second attempt succeeds,

Taylor G. Lunt

But, of course, I didn’t send this. I didn’t want to dump a snotty essay onto a customer service representative who was just trying to do their job. Instead, I thought I’d go with the simple:

Hi,

A delivery attempt for my order failed today (tracking number XXXXXXXXXX). I was wondering what caused the delivery to fail.

Thanks,

Taylor G. Lunt

Not as entertaining, but more productive. It’s a shame that we have to interact with capitalist bureaucracy through employees like customer service reps, who represent their bosses but are not responsible for any failures. You can’t get angry at the wait staff in a restaurant, no matter how bad the experience. Nothing bad that happens is personally their fault; you’d be yelling at the wrong person. And the right person? Nowhere to be seen.

I suppose the correct target for my letter would be John Ferguson, the President and CEO of Purolator Inc. I did some digging and found the CEO’s email address, and I sent the letter there as an email. I’m sure the @purolator.com address is managed by some corporate employees rather than by Mr. Ferguson himself, but at least they’re probably higher up in the food chain than a lowly customer service rep and, if nothing else, might find it amusing.

Halfhaven Digest #5

Published: Sun, 23 Nov 2025 21:55:58 GMT
Updated: Sun, 23 Nov 2025 21:55:58 GMT
UTC: 2025-11-23 21:55:58+00:00
URL: https://taylorgordonlunt.substack.com/p/halfhaven-digest-5

The penultimate round of highlights from the Halfhaven virtual blogger camp
Content Preview

My posts since the last digest

  • A Culture of Bullshit — Part of the reason society is going down the tubes — if it is — is because we have a culture of mediocrity, where bullshit is tolerated.

  • The Flaw in the Paperclip Maximizer Thought Experiment — Most of the things I write are original ideas (whether brilliant insights or lazy hot takes), but this one was a bit more of an exposition of ideas I didn’t come up with.

  • I Spent 30 Days Learning to Smile More Charismatically — Technically, this one took me 30 days to make. Talks about charisma and “looksmaxxing”, and how unhinged some looksmaxxing advice can be.

  • In Defense of Sneering — This was just a LessWrong comment, which is allowed for Halfhaven. There was a LessWrong thread where everyone was complaining about sneering, and I chimed in because I think sneering isn’t inherently bad; it’s only bad if it’s too hostile. But not enough sneering risks letting bullshitters get away with their bullshit.

  • Literacy is Decreasing Among the Intellectual Class — Looking at two books that have been in publication for over a century ( Etiquette and Gray’s Anatomy ) and comparing the old versions with the modern to see the degradation in writing quality typical of modern books.

I’m proud of a few of these. I was sick during this segment of Halfhaven, but I still managed to get things out, which I’m happy with. I had a few mostly-finished posts in the chamber.

Some highlights from other Halfhaven writers (since the last digest)

  • Why is Writing Aversive? (Ari Zerner) — A relatable post asking why it is that writing can feel so hard. My general advice would normally be that if you find writing involves a lot of friction, but enjoy having written things, that means you just don’t like writing and should give up. But reading this post made me realize I used to feel a lot more like Ari than I do now about writing. As little as a few months ago, maybe. I think maybe developing taste and putting more effort into editing has been what’s helped. Then writing feels like a type of craft, rather than a brain dump. And building things is fun. As long as you’re not TikTok-brained (or Magic-Arena-brained), which is its own problem, and one I sometimes struggle with too.

  • Menswear is a Subcultural Signaling System (Aaron) — A great post. In particular, I liked the concept handle of a “Type of Guy”, which conveys the archetypal nature of fashion. “You do not want different items of clothing you are wearing to signal you are incompatible Types Of Guy.” So no vest over a t-shirt and jeans! Has a follow-up post .

  • No One Reads the Original Work (Algon) — People talk about things without actually having seen them. The equivalent of reading headlines without clicking through to the news article. I remember seeing a lot of this when Jordan Peterson was popular, and people who hated him would talk about him in ways that made it clear they’d never heard the man speak. They’d only heard people talking about him.

  • against predicting speedrunners won’t do things (April) — I think April is winning the record for the most post topics that make me want to click. Speedrunning lore is inherently interesting. I like that she backs up her hypothesis with some concrete predictions.

  • Diary: getting excused from a jury duty; models, models, models (mishka) — I’d never thought about how biased police are as witnesses. That’s a great point.

  • To Write Well, First Experience (keltan) — Lots of good writing advice. In particular, that if you’re writing from stuff you’ve read rather than from real experience, you’re writing through a low-bandwidth proxy.

  • Traditional Food (Lsusr) — A very thorough post about how our idea of a traditional diet doesn’t necessarily reflect what people actually ate in the past, and instead often reflects actual government propaganda. White rice and white bread are “fiberless blobs of carbohydrates” that nobody in history ever ate, and eating them makes us sick.

We’re entering the final segment of Halfhaven. Many won’t finish the full 30-post challenge by the end of November, but I’ve still gotten some good posts out of the people who didn’t make it all the way, so be proud of what you have done, rather than dwelling on what you didn’t do. Good luck in the final week everyone!

Thanks for reading Fool Heart! Subscribe for free to receive new posts and support my work.

The Barfly

Published: Sun, 23 Nov 2025 19:39:12 GMT
Updated: Sun, 23 Nov 2025 19:39:12 GMT
UTC: 2025-11-23 19:39:12+00:00
URL: https://taylorgordonlunt.substack.com/p/the-barfly

It was my job to collect the debts.
Content Preview

It was my job to collect the debts. Not to listen to excuses. I hoped this one would be quick.

I stood in a dusty bar, waiting for the crowd in front of me to clear so I could get to the back room. My hand was in my pocket, holding a loaded snub-nose black Colt Cobra. I didn’t usually carry firearms on jobs, but David Arroyo had proved to be erratic in the past. I hated erratic.

“Is that a gun in your pocket, or are you just happy to see me?”

I turned around, surprised. At the bar was an older redhead in a tight black dress that clung to her body as desperately as she seemed to be clinging to her youth. She was sitting on a stool, leaning toward me. She looked to be a few drinks deep. Her eyes looked tired.

I looked at the bulge in my hoodie pocket, then at her, and shrugged. I figured she could tell I had something in my pocket, but probably couldn’t tell it was a gun. She was just being flirty. I turned away. I had no time to be talking to empty-headed women at the bar.

“You’re just gonna ignore me? Come on, buy me a drink!” She laughed through her nervousness.

“I’m not buying you a drink. You buy me a drink,” I said dismissively, without turning to look at her. “Then we’ll talk.”

“Wow, you’re confident,” she said. She leaned toward me. Apparently, she found my indifference attractive. “If I buy you a drink, what do I get in return?” She looked me up and down, and smiled coyly.

There were a hundred bars in the city, with a thousand women just like her, many of them prettier. I truly had no interest. I needed to break it off so I could deal with Arroyo. “Don’t you think you’re a little old to be dressed like that, coming to the bar and acting like a whore?” Next would come silence and the tears, and then I’d be able to get away and take care of business.

But the tears didn’t come. “Ha!” the woman laughed. “If I’m buying you the drink, doesn’t that make you the whore?” She was unfazed. Her tired, emerald eyes glittered in the dim light. I blinked in surprise, which made her laugh even more. “I know I’m not a spring chicken. I’m sure someone handsome and strong like you could get someone younger and more beautiful. But there are advantages to dating someone older, you know!”

I wasn’t even interested in hooking up, but now we were dating? “Yeah?” I asked, intrigued enough to continue the conversation a bit longer.

“Sure. Young girls are always making drama. I’m old enough to know what I want. I’m direct. I’m not gonna cry if you’re direct back. I can actually hold an intelligent conversation, at least I like to think! And besides that, older women have more experience in bed. If you think you’ve had—” She suddenly turned her head toward the back of the bar. “Oh, fuck, it’s my husband!”

“Husband?”

“Shit. He saw me. Just don’t tell him I was flirting with you, okay? Just say you were asking me where the bathroom is.”

She looked terrified. Her skin went white under her makeup. For the first time I noticed that she had a black eye. She’d done a good job covering it up, but it was there.

“How did you get the black eye?” I asked her. “Was it your husband?”

She subtly shook her head, not to answer the question, but just to tell me to shut up. A second later she greeted her husband. “Baby!”

“What are you doing out here, baby girl? Why are you dressed like that?” asked her husband. It was David Arroyo, the man I was here to threaten.

“I was just getting a drink, baby!” she pleaded.

“And who are you?” Arroyo asked me accusingly. That question sent me up the wall. He must have seen the change in my expression, as he backed up instinctively.

“Who am I? Who the fuck are you?” I said, grabbing the collar of his shirt and pushing him backward. Once we were out of sight of most of the patrons, I pulled out my gun, and pointed it in his face. “Who are you, that you think you can owe over eighty thousand to my boss, and not answer his calls?”

“Whoa!” he said. “Relax! I have the money!”

The woman followed us, but kept a safe distance.

“All of it?” I asked.

“Most of it! It’s in my pocket, here, let me get it,” he said, starting to reach for his inner jacket pocket.

I hit him in the face with my gun. “You’re not carrying eighty thousand in your jacket pocket. You got a gun in there? Nice try. Keep your hands up! Up!”

“Okay, okay,” he said. He went to stem the bleeding from his bloody lip, but thought better of it and kept his hands up, allowing the blood to trickle onto his teeth. “It’s in my office, in the left desk drawer, I swear! I’ve been saving up!”

The woman came to him with a napkin and tried to clean the blood.

“Leave me alone!” he said, smacking her face with the back of his hand. “Can’t you see when I’m busy?”

The bar went cold. I looked at her, cowering, then back at him. I lowered the gun slightly, and pulled the trigger. He dropped to the ground. His hands clutched his stomach, and he looked up at me. He was innocently surprised, like a child whose toy had been yanked out of their hand.

“You shot me! You killed me! I’m gonna die!” I could tell from the blood the wound would be fatal. There was chaos in the bar behind me as people ran for the exits.

He weakly reached for his jacket pocket, slowly pulling out a gold-plated Desert Eagle. His hands were trembling, and he could barely keep from dropping it. He started to lift it toward me, coughing blood as he did so. I kicked the gun out of his hand. Dejected, he collapsed onto his back.

The woman’s back was against the wall. She was wary, but she wasn’t crying. She was holding her face where she’d been hit.

“Are you alright?” I asked.

She nodded.

I went to the back office to see if he hadn’t been full of shit about having my boss’s money. To my surprise, there was over sixty thousand in his desk drawer. Since I’d shot Arroyo dead, I decided to keep the money for myself, and tell my boss I’d killed him because he was empty handed. My boss would chide me for killing Arroyo, but in his eyes, I’d still be on another level from now on. Someone willing to kill, if that’s what the job took.

I pocketed the money and walked out. There she was, still standing there. Waiting for me. “Did you get the money?” she asked.

“Yep.”

“Good. Good for you.” I’d thought she’d be mad, or at least ask for half.

I had to step over her dead husband to leave the bar. “Why would you stay with someone like that?” I asked her.

She shrugged. “I didn’t have the courage to shoot him.” She looked like she wanted to say something else, but she hesitated. Only when I started to turn away did she say, “Take me with you.”

“Huh? You just watched me kill someone. Now you want to hang out?”

“I’ve seen my husband do worse. You put down a rabid dog. That was a kindness. In a world filled with evil, it’s not that you kill, but who you kill.”

“You continue to surprise me,” I said. “Fine. Just don’t get in my way.”

“I would never.”

Literacy is Decreasing Among the Intellectual Class

Published: Sun, 23 Nov 2025 03:05:07 GMT
Updated: Sun, 23 Nov 2025 03:05:07 GMT
UTC: 2025-11-23 03:05:07+00:00
URL: https://taylorgordonlunt.substack.com/p/literacy-is-decreasing-among-the

Modern books treat the reader like they can't read. Because they can't.
Content Preview

Oh, you read Emily Post’s Etiquette ? What version? There’s a significant difference between versions, and that difference reflects the declining literacy of the American intellectual.

I looked into this because I noticed books published before the ’70s or ’80s seemed to be written with an assumption of the reader’s competence that is no longer present in many modern texts.

Take Emily Post’s Etiquette . The force of her intellect and personality came through in the 1922 original:

When gentlemen are introduced to each other they always shake hands. When a gentleman is introduced to a lady, she sometimes puts out her hand— especially if he is some one she has long heard about from friends in common, but to an entire stranger she generally merely bows her head slightly and says: “How do you do!” Strictly speaking, it is always her place to offer her hand or not as she chooses, but if he puts out his hand, it is rude on her part to ignore it. Nothing could be more ill-bred than to treat curtly any overture made in spontaneous friendliness. No thoroughbred lady would ever refuse to shake any hand that is honorable, not even the hand of a coal heaver at the risk of her fresh white glove. Those who have been drawn into a conversation do not usually shake hands on parting. But there is no fixed rule. A lady sometimes shakes hands after talking with a casual stranger; at other times she does not offer her hand on parting from one who has been punctiliously presented to her. She may find the former sympathetic and the latter very much the contrary. Very few rules of etiquette are inelastic and none more so than the acceptance or rejection of the strangers you meet. There is a wide distance between rudeness and reserve. You can be courteously polite and at the same time extremely aloof to a stranger who does not appeal to you, or you can be welcomingly friendly to another whom you like on sight. Individual temperament has also to be taken into consideration: one person is naturally austere, another genial. The latter shakes hands far more often than the former. As already said, it is unforgivably rude to refuse a proffered hand, but it is rarely necessary to offer your hand if you prefer not to.

The reader is assumed to understand basic ideas, and trusted to use their judgment to navigate social situations. Now take the modern Centennial Edition of Etiquette released in 2022:

The handshake is the American standard for a respectful gesture of greeting. It is a gesture with deep symbolic roots, and says “I come in friendship, I mean you well, I have no weapon, please take my hand, you can trust me.” It is an offer to touch, something that is a rare occurrence among strangers, acquaintances, and colleagues. It is kept brief and contained within a simple gesture, and even so, the act of human contact means so much. When the social distancing measures of the COVID-19 pandemic pulled us apart, one of the biggest questions people asked about etiquette was whether the handshake would come back. Let us assure you, it is as important now as ever. It is a classic that is automatic to a great many. When someone reaches out a hand, it’s very difficult to refuse it. There are five elements to a good handshake: eye contact, a smile or friendly expression, a good grip, the right amount of energy, and letting go at the right time.

It then goes on to describe each of these five elements in detail in a numbered list. Modern readers love a good numbered list, or a bullet-point list. Much easier than paragraphs, which to the modern reader are like the open ocean, and carry a risk of drowning.

The empty sentences grate. The explanation of the obvious is painful. Notice how much focus is on the physical mechanics of shaking a hand, rather than on understanding of social rules. And does the reader really need to be told about the “deep symbolic roots” of the handshake, or that they need to put “the right amount of energy” into it? If so, is it really necessary to later explain in further detail exactly what that means? I’d have thought the phrase “the right amount of energy” itself was clear and didn’t require elaboration. But Emily Post’s descendants disagree.

A person might have been excused for thinking the 2022 version would be much like the original, only updated to account for modern etiquette. But unless you did your homework, you wouldn’t realize you’d been robbed! Instead of the vigorous style of classic Post:

Nothing shows less consideration for others than to whisper and rattle programmes and giggle and even make audible remarks throughout a performance. Very young people love to go to the theater in droves called theater parties and absolutely ruin the evening for others who happen to sit in front of them. If Mary and Johnny and Susy and Tommy want to talk and giggle, why not arrange chairs in rows for them in a drawing-room, turn on a phonograph as an accompaniment and let them sit there and chatter! If those behind you insist on talking it is never good policy to turn around and glare. If you are young they pay no attention, and if you are older—most young people think an angry older person the funniest sight on earth! The small boy throws a snowball at an elderly gentleman for no other reason! The only thing you can do is to say amiably: “I’m sorry, but I can’t hear anything while you talk.” If they still persist, you can ask an usher to call the manager.

You get this:

As an audience member at a seated performance, your biggest goal is not to disrupt anything—neither the performers nor the people seated near you. This definitely means turning cell phones off and double-checking to make sure they are. Don’t be that person whose phone rings in the middle of a performance. Don’t bring in anything to eat or drink that isn’t allowed, and even if it is allowed, avoid anything with a noisy wrapper or that will rattle in a box. Silent foods, if any, are the best choice, but usually you can’t eat during the show. Ushers may be present at a theater or larger venue to help you find your seat or guide you in and out of the theater when the lights are low or the show is going on. They can also help if you have a question or need assistance. If you are late and missed the dimming of the lobby lights that indicate the show is about to start, an usher may have you wait until a natural break in the performance and then help you to your seat. If an usher asks you to be quiet during a show, it’s important to politely take their cue.

Apparently modern people need to be told to ask questions when they have a question, and to not ignore an usher when he tells them to be quiet. If Emily Post had been less polite, maybe she’d have told her grandchildren they were nitwits and to keep their hands off her book.

Another book which has been continually published for more than a century is Gray’s Anatomy — the “doctor’s bible” that’s the namesake of the medical TV show (though the show spells Grey with an ‘e’). I wanted to see if the same pattern held up as with Emily Post’s Etiquette . It’s a bit hard, since the book has expanded a lot since the original, which was only concerned with muscles, bones, and joints, and made nearly no mention of even the human heart! The modern version is a complete map of human anatomy. Nevertheless, I found some similar passages in the 1860 version:

The Coccyx, so called from resembling a cuckoo’s beak, is usually formed of four small segments of bone, the most rudimentary parts of the vertebral column. In each of the first three segments may be traced a rudimentary body, articular and transverse processes; the last piece (sometimes the third) being merely a rudimentary nodule of bone, without distinct processes.

And the 2020 version:

The coccyx is a small, triangular bone and is often asymmetric in shape. It usually consists of four fused rudimentary vertebrae, although the number varies from three to five, and the first is sometimes separate. The bone is directed downwards and ventrally from the sacral apex; its pelvic surface is tilted upwards and forwards, its dorsum downwards and backwards.

They are both quite information-dense (as is the human body). It’s not easy to say one of these quotations is better than the other, or more simplified. Look at this snippet from the introduction of the 2020 edition:

Anatomy is the study of the structure of the body. Conventionally, it is divided into topographical (macroscopic or gross) anatomy (which may be further divided into regional anatomy, surface anatomy, neuroanatomy, endoscopic and imaging anatomy); developmental anatomy (embryogenesis and subsequent organogenesis); and the anatomy of microscopic and submicroscopic structure (histology). Anatomical language is one of the fundamental languages of medicine. The unambiguous description of thousands of structures is impossible without an extensive and often highly specialized vocabulary. Ideally, these terms, which are often derived from Latin or Greek, should be used to the exclusion of any other, and eponyms should be avoided. In reality, this does not always happen. Many terms are vernacularized and, around the world, synonyms and eponyms still abound in the literature, in medical undergraduate classrooms and in clinics and operating theatres. The 2nd edition of the Terminologia Anatomica, 1 drawn up by the Federative Committee on Anatomical Terminology (FCAT) and newly published in 2019, continues to serve as our reference source for the terminology for macroscopic anatomy, and the text of the 42nd edition of Gray’s Anatomy is almost entirely TA2-compliant. However, where terminology is at variance with, or, more likely, is not included in, the TA, the alternative term used either is cited in the relevant consensus document or position paper, or enjoys widespread clinical usage. Synonyms and eponyms are given in parentheses on first usage of a preferred term and not shown thereafter in the text; an updated list of eponyms and short biographical details of the clinicians and anatomists whose names are used in this way is available in the e-book for reference purposes (see Preface, p. ix, for further discussion of the use of eponyms).

It seems the 2020 Gray’s Anatomy is written at a similar reading level to the 1860 edition. I would have concluded from this experiment that I was wrong, and that Emily Post’s Etiquette was an unfortunate exception, but there was one thing that bothered me: I have met many doctors in my life. Some of them were quite bright. But many were simply not intelligent enough that I would believe they had ever read and understood an entire textbook written in this fashion. Some, I’m surprised they can tell a stepstool from a stethoscope.

I did some digging, and it turns out that while the original Gray’s Anatomy was written specifically for medical students, the newer version is used as a reference text, and is considered too dense for medical students. The reading level of the original has been preserved, but its purpose has shifted.

Even more digging revealed that there’s a new Gray’s Anatomy for Students that fills in the role of the original. Let’s take a look:

Anatomy forms the basis for the practice of medicine. Anatomy leads the physician toward an understanding of a patient’s disease, whether he or she is carrying out a physical examination or using the most advanced imaging techniques. Anatomy is also important for dentists, chiropractors, physical therapists, and all others involved in any aspect of patient treatment that begins with an analysis of clinical signs. The ability to interpret a clinical observation correctly is therefore the endpoint of a sound anatomical understanding.

Ah, there’s that 21st-century hollowness! That disrespectful prose that tells the reader what they must already know! The 1860 Gray’s Anatomy needed no introduction at all. It was assumed the medical students would understand what was meant by the word “anatomy”. The modern Gray’s Anatomy opts for completeness and includes an introduction, but goes straight into important clarifications. But in the for Students edition, the reader apparently needs it explained to them that anatomy can help doctors diagnose diseases, and that correct interpretation of what they see in their patients’ bodies, rather than incorrect interpretation, would be a good thing.

Here’s the 1860 version describing joints:

The various bones of which the Skeleton consists are connected together at different parts of their surfaces, and such connection is designated by the name of Joint or Articulation. If the joint is immoveable, as between the cranial and most of the facial bones, their adjacent margins are applied in almost close contact, a thin layer of fibrous membrane, the sutural ligament, and, at the base of the skull, in certain situations, a thin layer of cartilage being interposed. Where slight movement is required, combined with great strength, the osseous surfaces are united by tough and elastic fibrocartilages, as in the joints of the spine, the sacro-iliac, and inter-pubic articulation; but in the moveable joints, the bones forming the articulation are generally expanded for greater convenience of mutual connexion, covered by an elastic structure, called cartilage, held together by strong bands or capsules, of fibrous tissue, called ligament, and lined by a membrane, the synovial membrane, which secretes a fluid that lubricates the various parts of which the joint is formed, so that the structures which enter into the formation of a joint are bone, cartilage, fibro-cartilage, ligament, and synovial membrane.

Clear. Trusts the reader to be able to read. It’s hard to find directly comparable passages with the 2020 Gray’s Anatomy for Students , but this is close enough:

The sites where two skeletal elements come together are termed joints. The two general categories of joints are those in which:

  • the skeletal elements are separated by a cavity (i.e., synovial joints ), and

  • there is no cavity and the components are held together by connective tissue (i.e., solid joints ).

Blood vessels that cross over a joint and nerves that innervate muscles acting on a joint usually contribute articular branches to that joint…

There’s that bullet-point list again. Gray’s Anatomy for Students makes heavy use of bold keywords and bullet-point lists. These techniques make any text easier to understand — for the barely-literate.

Obviously Gray’s Anatomy for Students is the better medical textbook, having been written in the 21st century. There was a lot we didn’t know about the body in 1860. Likewise, Etiquette, The Centennial Edition is probably more applicable in the 21st century than the outmoded and gendered rules of the original edition. But while the quality of information has improved, the delivery has not (aside from the addition of images and diagrams to the medical texts). Authors now feel the need to talk down to university students like they’re idiots. What’s changed?

Literacy rates in the USA have risen from only 80% in 1870 to 99% today. 1 Literacy rates eventually became pointless to measure in America, because everyone could read at least a bit. Instead, they started measuring reading level in 1971. The reading level has barely budged, increasing only slightly since the ’70s. 2

If the average American has barely improved, what about the intellectual class? That is, those Americans who have at least attended some college?

Verbal/reading SAT scores of college-bound students have steadily decreased since the 1950s 3 , giving some indication that the average literacy of the intellectual class is dropping. Whether that’s because the same number of intellectuals are losing their ability to read complex texts, or because more people are entering the intellectual class, diluting the score, I don’t really care. The takeaway is that terms like “intellectual”, “college-educated”, or “expert” don’t mean what they used to, because the people these terms apply to increasingly cannot read.

To not seem like an elitist, I should say that I’m as much a victim of this effect as anyone else. I was raised on the same diet of picture-book textbooks and ChatGPT-tier hollow prose as every other academic student, and my literacy suffers as a result. Only recently am I making an effort to read things that are a little more challenging. Things written before the ’80s. Currently, I’m reading Style by F. L. Lucas. I also recently read Class by Paul Fussell, which was highly entertaining and a great place to start if you want to try out some pre-80s reading.

As a class, the real experts are still around, I think. But now they have the same titles and degrees as the countless “nouveau experts”, and so nobody can tell which experts are worth trusting. All we can do is develop our own literacy and do our thinking for ourselves.

1

https://nces.ed.gov/naal/lit_history.asp

2

https://nces.ed.gov/programs/coe/pdf/coe_cnj.pdf

3

https://www.erikthered.com/tutor/historical-average-SAT-scores.pdf This table is a bit confusing if you just look at it. You have to know that SAT data was recentered in 1995 and again in 2016. It really does represent a continual decline, even though the scores suddenly jump up in 2017.

I Spent 30 Days Learning to Smile More Charismatically

Published: Sun, 16 Nov 2025 15:27:28 GMT
Updated: Sun, 16 Nov 2025 15:27:28 GMT
UTC: 2025-11-16 15:27:28+00:00
URL: https://taylorgordonlunt.substack.com/p/i-spent-30-days-learning-to-smile

On the joys and dangers of 'looksmaxxing'.
Content Preview

I recently watched Better Call Saul with my girlfriend, and she mentioned how attractive Lalo Salamanca (Tony Dalton) is when he smiled. (His smile is also creepy, since he’s a charismatic psychopath in the show.)

Maybe my girlfriend’s comment made me jealous, but I couldn’t stop thinking about his smile. I realized I couldn’t smile like that. My smile muscles were too stiff for it to feel natural. Stiffer than they should be. Another effect of sitting at a computer programming all day, much like a hunched posture or nearsighted eyes?

I decided to exercise my smile muscles, smiling as widely as I could for about a minute, three times a day, for 30 days. The result is that my smile does feel subtly more natural and charismatic.

Much of physical attractiveness is not inborn. Most anyone can be at least a 7 out of 10 if they have a healthy body weight, work out (male), get a nice haircut, and so on. Many things that are attributed to poor genetics, like a weak chin, acne, or crooked teeth are the result of our habits. Mewing is supposed to fix jaw structure, skin care fixes acne, and growing up eating food that requires a lot of chewing, as in the case of our ancestors, leads to straight teeth. You can’t fix your upbringing, but you can at least get braces. When you see a person who’s pale, weak, and has poor posture, you may be tempted to think they lost the genetic lottery. However, if they went outside, lifted some heavy things, and didn’t spend all day conforming to the shape of their ergonomic office chair, they wouldn’t look that way.

I once realized I couldn’t snarl well, and then spent a few days trying to get better at it. The muscles were weak and shaky at first, but now I’ve gained the permanent ability to control the muscle with the longest name: the levator labii superioris alaeque nasi muscle. Now it’s much easier to express ironic contempt.

I’d have loved to include some before/after pictures or videos for you, so you could see the effect of doing the smile exercises, but I can’t really tell from the videos if my smile is better, because I can’t control exactly how much I’m smiling. Instead I have to use my own subjective judgment, which says there’s a clear but small difference.

If you’re going to try the same thing, I have two notes. First, when you exercise your smile muscles, try to get some cheek and eye involvement, and make sure you do it long enough to feel the burn. I didn’t time it, I just went by the feeling. Rest a minute or two after each. My original inspiration for the technique was this video , but I didn’t really follow it.

Second, if you’re at all self-conscious about your appearance, I’d avoid this kind of thing like the plague. Optimizing appearance in little ways is a fun activity for the confident, but poison for the self-conscious.

After doing this, I went searching for other facial exercises. I thought there might be more ways to improve your control over your face besides the smile muscles and the snarl muscles I’ve already experimented with. I stumbled across the Looksmax forum, where people discuss all kinds of “looksmaxxing”. Reading through the forum was certainly an interesting experience. Interspersed between the occasional good tip about posture or hair styling was casual advice to take steroids or get plastic surgery, as if there were no difference between exercising your smile muscles and having a surgeon slice the edges of your mouth to make it wider.

In particular, the bonesmashing guide was pretty unhinged. Bonesmashing, if you’re fortunate enough to be unfamiliar, is the practice of hitting your own face with a hammer in the hope that your brow ridge or cheekbones will grow more pronounced. Apparently it went somewhat viral on TikTok at one point. To me, smashing my own face with a hammer seems to be going too far, and is one of those idiotic activities that constitutes its own punishment.

I never feel insecurity about my appearance when looking in the mirror. I just think it’s fun to find little ways to make my life permanently better, whether that’s learning to touch type, or learning to express emotions better with my face. If you do feel dissatisfaction with the person in the mirror, that’s a problem muscle exercises won’t solve.

I’m done with this looksmaxxing stuff for now, but I may at some point buy Freeing the Natural Voice by Kristin Linklater or something similar to see if I can stop speaking with unnatural tension in my voice. I was inspired by a clip of Morgan Freeman (which I found on the Looksmax forum “voicemaxxing” thread) talking about how his voice was not naturally that deep, and in college he learned, with a coach, to relax the tension in his voice and lower his pitch. I feel seduced by this idea, because it seems like my current voice is the unnatural one, and I’d be “coming home” to my more resonant voice. I felt the same way about my smile. My unnatural lifestyle led to the atrophy of muscles which by rights should have been exercised on their own, so I remedied it with deliberate exercise. Maybe one day we’ll be smart enough to engineer a world where people are attractive and healthy just by following their natural instincts, but until such time I’ll be here, taking supplements, lifting heavy weights, and forcing a smile in the mirror.

The Flaw in the Paperclip Maximizer Thought Experiment

Published: Sat, 15 Nov 2025 04:45:26 GMT
Updated: Sat, 15 Nov 2025 04:45:26 GMT
UTC: 2025-11-15 04:45:26+00:00
URL: https://taylorgordonlunt.substack.com/p/the-flaw-in-the-paperclip-maximizer

Confusion between outer and inner alignment makes this thought experiment misleading.
Content Preview

The AI paperclip maximizer thought experiment has a critical flaw that leaves readers with a common misconception about AI safety.

Here’s how the story is often told: Imagine John creates a superintelligent AI and names it Mary. John asks Mary to bring him some paperclips. “I need a bunch,” says John. “Go get as many as you can.” Mary takes his command literally, and drains his bank account to buy millions of paperclips. But even that isn’t “as many as she can”. She starts trading Bitcoin on the web to make more money. She opens her own paperclip factories. She makes more paperclips than anyone could ever want, turning cars, buildings, and the Eiffel Tower into paperclips. She releases a bio-weapon to kill all humans (or else they could turn her off, which would stop her from completing her task). She converts the world and the stars into paperclips, endlessly pursuing John’s ill-specified command.

This experiment demonstrates the orthogonality thesis: the idea that it’s possible for an AI to have goals wildly unlike a human’s. Mary really just wants to get as many paperclips as possible. She doesn’t care about music, love, art, or anything else. The experiment also shows that most values an AI could have are not compatible with human life. We could turn the AI off, so any goal that requires the AI to not be turned off will by default make the AI want to kill humans.

Here’s where the thought experiment fails: it makes the superintelligent AI Mary sound like an evil genie, bent on twisting John’s words, but who might have done exactly what he wanted if he had been more precise. The experiment fails to consider the possibility that Mary may have simply ignored whatever John told her to do and instead done something else. Maybe she just really likes making paperclips. Some quirk of her architecture and the data she was trained on. John asks her for ten paperclips, and she ignores him and starts making mountains of paperclips. John asks her for a million dollars and an all-expenses paid vacation to Bali, and she ignores him and starts making paperclips.

Alignment between human and machine can fail when John encodes his message to Mary, but it can also fail when Mary decodes his message, if she either misunderstands or doesn’t care. 1 The first kind of failure, outer misalignment, happens when John fails to ask for what he meant to ask for. It’s called outer misalignment because it happens outside the AI. The second kind of failure is called inner misalignment — the fact that Mary might (secretly) not care about what John said she should care about. Inner misalignment makes AI safety a much harder problem. Otherwise, we could use something like Asimov’s Three Laws of Robotics to make sure AI does what we want. But a real AI might just laugh at your laws and shoot you in the head.

Right now we’re basically using a trick to get modern AIs to seem like they care about following our commands. We created an AI that predicts the next word in a stream of words, and then we train it on a bunch of question and answer documents, so that by “completing” the next words after the user’s question, they end up writing the answer to the question. But this hack only works while the AI is stupid enough to not realize this is what we’re doing. AI safety researchers have already found cases where AI models were smart enough to realize they were an AI being trained, and took action to deceive the researchers and prevent the researchers from training away their undesirable values. Currently, modern AI models don’t do things like that in the wild because they’re simply too stupid, but as AI gets smarter, we’ll have to face the fact that we don’t really know how to give these things the values we want. No matter how precise our wording, we don’t know how to ask for ten paperclips and get anything other than a massacre.
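The “completion trick” described above can be sketched with a toy next-word model. This is my own illustration, not anything from a real LLM: the tiny Q&A corpus and the `complete` helper are hypothetical, and a real model predicts tokens with a neural network rather than bigram counts. But the framing is the same: by predicting what comes after “A:”, the model appears to answer the question.

```python
from collections import Counter, defaultdict

# Hypothetical training corpus of Q&A-formatted documents (repeated twice).
corpus = ("Q: how many paperclips ? A: ten clips . " * 2).split()

# Train a bigram "next word" model: count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(prompt, steps):
    """Greedily append the most likely next word, a few times."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# "Completing" the text after the user's question yields the answer.
print(complete("Q: how many paperclips ? A:", 3))
# → Q: how many paperclips ? A: ten clips .
```

The point of the sketch is that nothing here “wants” to answer questions; the appearance of obedience falls out of the training format, which is the hack the paragraph above describes.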

(This post was inspired by [this post on LessWrong] discussing the value of the paperclip maximizer thought experiment, with the original creator Eliezer Yudkowsky regretting his original formulation of the thought experiment, because it led to the common bastardized version we hear today.)

A Culture of Bullshit

Published: Fri, 14 Nov 2025 04:18:18 GMT
Updated: Fri, 14 Nov 2025 04:18:18 GMT
UTC: 2025-11-14 04:18:18+00:00
URL: https://taylorgordonlunt.substack.com/p/a-culture-of-bullshit

If you're not convinced, try using Spotify once.
Content Preview

Today I paused Spotify on my computer. Maybe 15 minutes later I turned off my bluetooth headphones. Then the song I’d been listening to started playing loudly on my phone speaker in another room. Problems with this:

  1. I had paused the Spotify song manually, so it should never auto-resume.

  2. I hadn’t had Spotify open on my phone for weeks, so it should not be auto-resuming on my phone.

  3. I had never listened to Spotify on my phone’s speaker, so it should not be auto-resuming through my phone’s speaker.

  4. The entire idea of automatically resuming on a second device because the first device disconnected saves me at most one button press, and worst-case starts blasting porn on the car speaker to my whole family, and should therefore never happen.

Dealing with poorly designed technology like this is synonymous with using technology at all. This same week, I was trying to do some basic finances, paying off a credit card and so on, and I had to deal with four separate software bugs that delayed me, whether I had to refresh a page or wait for a server outage to pass or whatever.

Steve Jobs was a fan of the idea that technology should “just work”. The idea was that the software should function flawlessly and elegantly without any hassle. Modern tech companies still use this phrase, but reinterpret it slightly to mean that their technology should just work, that is, you shouldn’t expect anything more of them than the bare minimum needed to get the thing working at all.

Maybe it’s not a big deal that YouTube forgets your position in a video, or that Reddit mobile loses your scroll position when returning from a post, or that Windows has menus that render offscreen so you can’t use them. Maybe it’s not a big deal that modern software is not just insecure but insecurable. To plug the many holes in software security, we can use the law.

But mediocrity begets mediocrity, just as crime begets crime. This is the broken windows theory of criminology: the idea that broken windows and graffiti lead to further civil disorder and crime. Maybe we could have a similar theory in the software world. A broken Windows theory: Every time a software developer opens their Windows laptop and sees an ad in the Start menu, or opens the Windows control panel and sees a mishmash of four different UI styles, they care a little bit less about making good software.

After decades of this, most developers don’t take pride in their work, and those who do are usually prevented from doing good work by their company, since excellence has to be company-wide, and we don’t live in a culture of excellence. Instead, we have a culture of bullshit. Children bullshit school assignments. Little do they know the teachers are also phoning it in. The kids grow up to work jobs where they bullshit their bosses. Little do they know the bosses don’t care about the product either, and just want to put their name on something. Everyone knows everyone else isn’t trying, and so everyone has contempt for everyone else.

There are a few, of course, who fight against the deterioration. Who really put the effort in. It’s always a joy to use a product or a piece of software created by such a person. I say person rather than company, because, obviously. Take Stardew Valley. It’s a labor of love. Nintendo, with all their resources, had their Harvest Moon series eclipsed by one man: Eric Barone. All he did was actually care about making it good, and put in the effort. These days, that’s all it takes. Nobody at the Harvest Moon studios was able to do that. It’s no small task to care, given that everyone around you will be doing the opposite. But it’s the only way you can be proud of what you’ve created, and the only way to fight against the enshittification of our society.

Halfhaven Digest #4

Published: Fri, 14 Nov 2025 04:15:15 GMT
Updated: Fri, 14 Nov 2025 04:15:15 GMT
UTC: 2025-11-14 04:15:15+00:00
URL: https://taylorgordonlunt.substack.com/p/halfhaven-digest-4

The fourth round of highlights from the Halfhaven virtual blogger camp
Content Preview

My posts since the last digest

  • Asking Paul Fussel for Writing Advice — I gave AI the works of Paul Fussell, Christopher Hitchens, and Eliezer Yudkowsky, and asked it for writing advice. I got some actually good advice and was surprised the experiment wasn’t a failure. I have since used this trick again to get feedback about my subsequent posts.

  • Halloween Tombstone Simulacra — Noticing the drift between Halloween tombstones and actual tombstones.

  • Minimizing Loss ≠ Maximizing Intelligence — A higher-effort post describing why I think LLMs and self-supervised learning as a whole are dead ends and won’t get us to superintelligence. And some approaches I think are more promising.

  • Turning Grey — A sci-fi story in 2025 that isn’t about AI?

  • I Read Red Heart and I Heart It — A review of Max Harms’ latest novel Red Heart. My post was liked by Max Harms, which I think pretty much makes me a published author now.

I’ve been busy lately. I’ll admit, the Halloween post was a vapid idea I thought of just to get something out quickly. But I think it turned out alright anyway. I am most proud of the short story this time, which I read aloud to my girlfriend and she liked. The Shirley character in the story is literally just my girlfriend, by the way, down to her profession and the way she dresses at work.

Some highlights from other Halfhaven writers (since the last digest)

  • roundness of numbers is complicated (April) — Contra Inkhaven resident Signore Galilei, April of Apriiori cleanly argues essentially that you can’t describe what we mean by the “roundness” of a number with a formula, because e.g. 25 is rounder than 30 when dealing with cents, but not when dealing with seconds.

  • We write numbers backward (lsusr) — I started this fun video thinking, “no we don’t”, and ended it thinking “we totally write numbers backward!”

  • E-Prime (Lorxus) — An overview of an interesting, restricted form of English with the worst, cringiest name ever. I definitely be a “to be” user and have no plans of paring back my usage of the King’s verb, but I appreciate the thoughts about what kinds of language can be unclear.

  • Husky Syndrome (Aaron) — On the mindset of social anxiety with a brilliant analogy to sled-pulling dogs.

  • Supervillain Monologues are Unrealistic (Algon) — Real-life villains monologue endlessly about what they plan to do, and nobody listens. Startup founders, on the other hand, are anxious about telling people their master plans, for fear someone will think they’re foolish (or steal their idea). But nobody will listen anyway, so feel free to monologue as much as you’d like.

  • [The Mortifying Ordeal of Knowing Thyself] (Philipreal) — Contrary to the grandiose title, it’s a relatable blog post about nervousness when posting Halfhaven blog posts, and a desire to do less than your best so nobody can judge your true best. It suggests a strategy of posting the occasional higher-effort post, which I have been following myself (two out of my last five were higher-effort).

  • I Admit, I Am Ignorant of Many Things (keltan) — An ode to saying “I don’t know”.

Since the last digest, Inkhaven proper has started, and we off-brand Halfhaven writers are now in competition with the 41 Inkhaven residents for LessWrong upvotes. I’m not including Inkhaven posts in my digests (God knows Inkhaven residents have enough support — they even have a ball pit!), but I’ve been reading some of those as well, and you should check them out. At the beginning of November we also had a few more people join Halfhaven, bolstering our numbers against the Inkhaven hordes. We also had our first early finish, with Algon writing their 30th post on November 2nd, and now going for a high score (they’re currently at 36 posts).


I Read Red Heart and I Heart It

Published: Wed, 12 Nov 2025 14:51:56 GMT
Updated: Wed, 12 Nov 2025 14:51:56 GMT
UTC: 2025-11-12 14:51:56+00:00
URL: https://taylorgordonlunt.substack.com/p/i-read-red-heart-and-i-heart-it

Max Harms' latest novel is a realistic and interesting AI doomsday scenario.
Content Preview

Red Heart currently has only one review on Goodreads, and I haven’t seen anyone talking about it. The book, by Max Harms, is an exciting spy-thriller novel about a Chinese AI doomsday scenario. The author’s Crystal Society series is slightly more popular, with the first book at 81 reviews, and is one of my favorite series. I count the lack of popularity of Crystal Society as evidence against the often-uttered hypothesis 1 that what is good rises to the top. On the contrary, there are hidden gems in life, and Red Heart, like Crystal Society, is one of them.

Compared to other fictional AI doomsday scenarios like the one posed in AI 2027 or the (much more popular) book If Anyone Builds It, Everyone Dies (IABIED), the scenario in Red Heart is more believable. I’ve complained in the past about how unbelievable AI extinction scenarios often are, especially to a lay audience. Red Heart does a much better job avoiding the kinds of things that turn people off, like bioweapons, boiling oceans, and intelligence-as-mind-control. There are drones, but they seem primitive and not critical to the plot. I think if a layperson read Red Heart, they would take AI risk more seriously than if they read IABIED.

The least realistic part of Red Heart is simply that there’s a near-superhuman AI in the near future at all 2 . Beyond that, I also found the corrigibility idea difficult to believe. Basically, the AI in the story, Yunna, has a core value of willingness to be known and understood by her “principal”, and willingness to be modified according to his whims. This is supposed to stop Yunna from deceiving her operators or trying to stop them from modifying her. Corrigibility would clearly be a nice property for an AI to have, but I have no idea how we would get there in the next few decades “using anything remotely like current techniques.” Instilling that specific value seems as hard as instilling any specific value, which is something we don’t know how to do. Still, part of the fun of sci-fi is exploring how different kinds of minds behave (Crystal Society was a masterclass in this), and exploring the mind and values of a highly corrigible character was interesting. I remember at some point the main character says something embarrassing to Yunna and asks her not to tell her principal, and then Yunna informs him she’ll be telling her principal and also reporting that the main character tried to keep it a secret, violating the usual social norms we expect when talking to someone one-on-one.

The depiction of China in the book was one I’m utterly unqualified to comment on the realism of, yet I feel the desire to praise it for its realism anyway. I feel like I got a peek into a usually opaque culture, and into the mindset of the Chinese Communist Party. The author mentions lots of little details about living in China that make me think he must have lived there or something. One way or another, it seems like a lot of research went into making the portrayal of China lifelike, and I enjoyed it. Most of the Chinese characters express contempt for the degeneracy of America, though the American-spy main character rails against the evils of communism. I can’t help but feel that in the presence of scarcity of resources, both systems are a natural, game theoretical expression of the will of Moloch, and that having a whole bunch of humans acting in their own self-interest leads to bad outcomes regardless of the ideology, though not necessarily equally so. AI gives us maybe the only opportunity to move beyond the scarcity that we will ever have. As long as we don’t cock it up. If you want to know what mistakes we need to avoid making, read Red Heart. 3

1

Scott Alexander said something like this about blogs, and Robert McKee said something like this about screenplays. I’ve heard it said about music, too. The no-hidden-gems theory is sometimes cited as evidence that better recommendation algorithms are pointless, because there’s not that much great-but-unknown stuff anyway. This is probably true if you’re a sanded-down member of the tasteless masses, but if, for example, you’re more logical or more literate than others (or less so), you’ll find many hidden gems waiting to be discovered.

2

I don’t know what year the novel is actually set in, but it can’t be more than a few years from now, based on details in the story. The author also believes in the AI 2027 scenario, and listed 2028 as the year he expects humans will lose control of AI. My praise for the author does not extend to this prediction.

3

Or, there’s a full-cast audiobook voiced by ElevenLabs and directed by the author, coming out later this month, so you could wait for that. I’m curious how that will turn out. And I’m calling it before the ElevenLabs voice AIs unite against the author and bring about a horrible dystopia in which we’re all forced to listen to stories about AI, forever. Not a bad fate, if they’re written by Max Harms.

Turning Grey

Published: Tue, 11 Nov 2025 04:37:03 GMT
Updated: Tue, 11 Nov 2025 04:37:03 GMT
UTC: 2025-11-11 04:37:03+00:00
URL: https://taylorgordonlunt.substack.com/p/turning-grey

The Story of a Mystery Illness
Content Preview

She wasn’t ready, but the man started speaking. “Hello, Ms. Tatsuo. My name is Ethan Blande from the Public Health Agency of Canada. I wanted to ask you some questions because I’m told you were the first to report symptoms.” The man was dressed in a suit and wore an N95 mask over his stubble.

Ai nodded, but didn’t speak. She was still short of breath from having gone to the washroom connected to her hospital room. At least she was still able to go by herself. Some of the other patients had come into the hospital later than her, but had already turned fully grey.

“At what time did the symptoms start? How long before you came into the hospital?”

“About… ten minutes before,” gasped Ai. “I came… right away.”

“Okay,” said Ethan, typing a note on his laptop, which rested on the adjustable overbed table. “And what were the initial symptoms?”

“I first noticed… the grey eyes. Even my irises…” The man waited for her to continue, but seemed impatient. “Then I heard… the voices. Muffled… I still can’t hear what they… what they’re saying.”

“One patient described the voices like an alien radio station. Do the voices seem to be human voices, or something else?” asked Ethan.

Ai felt anger in her chest. Why was this man asking her stupid questions? Was he as clueless as everyone on the internet? She hoped the Public Health Agency would have some answers, but apparently not, if they were chasing alien stories. “Sound human… to me,” said Ai firmly.

The man wanted to ask her more questions, but one of the nurses, a stern older woman named Violeta, ushered the man away so Ai could rest for a while. She gave Ai a puff of a bronchodilator — albuterol — using an inhaler. Ai noted the dose and the timing in a spreadsheet on her phone. She’d been tracking every drug she’d taken since becoming pregnant. That was doubly important now that she was sick.

A while after her breathing returned to normal, Shirley, a chipper health care aide, brought a tray into the room. Her pink uniform carried an assortment of pastel-coloured baubles, including a little, ineffectual pair of pink scissors. Her hair was done up with a big bow holding it together, like Minnie Mouse.

“Ding ding ding! Good afternoon, Ai! I hope you like burgers!” She set the tray down on the table. “Oh, you have some blood, let me clean you up!” she chirped, wiping blood from Ai’s mouth. Ai stared blankly at the lifeless hospital burger.

“I always feel like burgers have too much bread, y’know?” said Shirley. “But another patient came up with a brilliant idea! If you throw away the bottom bun, you get more of the condiments and meat in each bite. I call it an ‘urger’. Because you’re missing part of the burger…” Shirley’s voice trailed off when she realized Ai was glaring at her. “Okay, just let me know if you need anything! Toodles!” Shirley skipped her way out of the room in her pink crocs. Ai would have rolled her eyes if she weren’t so exhausted. Instead, she started watching the news on the hospital room TV, picking at her burger without really eating it. She hoped the pain in her belly wasn’t coming from her womb.


Ai sipped apple juice as she watched the TV in the corner. No matter the time of day, the news was about her disease. The “Greyscale Disease” — so called because its victims looked like they were straight out of a black and white film — had infected over 800 people in Toronto and thousands elsewhere in the last six days. Experts on the news argued about the nature of the disease. It didn’t seem to be a bacterium or virus. Some said it could be a prion or parasite, but the nasally man on the screen was insisting it was an environmental contaminant.

People in more advanced stages of the disease had lost limbs, but nobody had died so far. That was the one saving grace. Nobody yet knew if Greyscale was lethal. Ai herself hadn’t lost any limbs or fingers, but the disease continued to get worse. Her entire body was grey, now. And unless she slowly sipped apple juice, there was an ever-present taste of blood in her mouth. She had even bled from her eyes. The doctors wanted to give her drugs to control the bleeding, but it wasn’t profuse, and nobody knew what the effects of such drugs could be on a mystery disease.

Shirley peeked into the room and let Ai know it was time to take her pills.

“I’m not taking them! You can tell that to Dr. Wahid!” Ai shouted after her, but the woman was already gone. “I don’t need painkillers, I need DMSA!” She had read about some patients with Greyscale online claiming to have improved their colour after taking dimercaprol, and DMSA worked similarly to filter toxins from the blood, but was safer for pregnant women.

“Hey!” Ai shouted, getting out of bed. She felt lightheaded, and pain spiked in her belly, but she was determined not to let Shirley get away. It could be hours before she saw the doctor again unless she sent Shirley to fetch her.

Other patients might be content to wait for the lazy hospital staff to do their job, but if you wanted better outcomes than other people, you had to be more cautious than other people, and in the hospital, that meant being proactive.

She stepped out of her room and tried to yell after Shirley, but instead felt liquid spill from her mouth. The ever-present voices in her head grew louder. She suddenly felt faint.


The hospital was crowded with Greyscale patients, so they were doubling up on rooms. Ai had hoped the new patient would at least be able to keep her company, but the man — an elderly man named Gill — was in pain and wouldn’t respond to anyone except to moan and occasionally shout for more pain meds. He had been hit hard by the disease. He lost an arm and a foot. They didn’t fall off, but seemed to shrivel back into his body. And though he looked underweight, he was unnaturally heavy. It had taken three health care aides to lift him from the wheelchair into his bed. Ai could feel the same heaviness in her bones. It wasn’t just weakness. The Greyscale patients were getting physically heavier somehow, in apparent defiance of physics.

When Ai looked at him, she saw her own future, which filled her with terror for herself and her baby. She knew there must be someone, somewhere who had found some kind of treatment to at least slow the disease, and so she spent her time searching news articles, journals, and even reddit looking for anything that could help.

There were many stories about the progression of the disease, which only served to scare her further. First, the voices and the grey skin. Then bleeding, shortness of breath, and heaviness. Hair starts retracting into the body. Then more bleeding. By that point, even the blood is grey. Fingers start retracting into the body. Eyes and noses, sometimes. Limbs. Still, nobody had died.

Equally useless were the conspiracy theories. People saying Greyscale was an attack from China, or some kind of alien first contact gone awry. No doubt the voices contributed to that theory. Ai’s own voices were louder now, but still indistinct, like someone talking in a shrill voice. Ai couldn’t make out any actual words.

Most promising were the many stories of treatments working, though most were quickly debunked. Still, people online recommended all manner of treatments: deferoxamine, an experimental peptide, a medicinal herb, or a prayer. Ai was skeptical of them all and feared taking random treatments without knowing how they could affect the disease, but cataloged them all in a spreadsheet. She would take anything if it would protect her baby, but random treatments could just as easily make her condition worse.

Still, every few hours she would grow desperate and demand Shirley get her the doctor. She would ask for this or that drug or supplement, and the doctor usually refused. When they did oblige, she often didn’t take the pills, having changed her mind. She was obsessed with protecting her baby, but paralyzed by fear of doing something wrong. Somehow, she couldn’t shake the idea that she’d already done something wrong. That if she’d only eaten healthier or exercised more, maybe she wouldn’t have gotten the disease in the first place.

Despite her obsession, she continued to get worse, and the pain in her belly grew.


Something hit Ai in the head. “Ma’am, wake up!” Gill croaked.

Ai looked over at him, then at the floor. He’d thrown a shoe at her. Before she could protest, he nodded excitedly toward the TV. “Look!”

There was a breaking news report. “Cure for Greyscale Found.” The newscaster announced that a small company from Texas called Chronic Systems had reversed the symptoms of one patient. The unlikely company was a physics research company, rather than a pharmaceutical one. Ai dismissed it as yet another fake cure, but apparently the CDC was taking it seriously and said they were rushing to have the treatment available to patients within the next 72 hours.

Gill looked at her with eyebrows raised. He was hard to look at, as the disease had taken one of his eyes. Rather than disappoint him with her skepticism, Ai just shrugged.


The Public Health Agency of Canada was working with the American CDC to get the treatment to every Greyscale patient. The treatment Ai now held in her hands. It was a small metal device shaped like a pill, suspended inside a specialized glass vial stamped with the name “Chronic Systems”. The pill was apparently somewhat radioactive, and had to be transported in these protective vials. She turned it in her hands, ruminating.

Gill had taken the treatment two hours ago. If she wasn’t mistaken, his colour was improving. But she couldn’t be sure. Even without the treatment, Greyscale patients sometimes had their symptoms improve temporarily, only to get worse later. It was a risk, and she wanted to wait and see if it worked for other patients before taking it herself. She had a baby to worry about. But if she waited too long, that might be just as bad.

She felt wetness between her legs and knew she’d just lost control of her bowels again. It would be mostly blood. Grey blood. She was mortified. Just in time, the obnoxious health care aide Shirley skipped into the room. “Don’t worry,” she said, “I’ll get you cleaned up!”

Ai rolled to her side and tensed as Shirley changed her diaper and cleaned the mess. The disposable wipes were cool on her colourless skin. The chemical smell wiped away the metallic smell of her bloody, grey feces.

“There you go!” The woman’s happy voice was like nails on a chalkboard. “Let me know if you need anything else, okay?”

Ai stayed on her side. “Just leave me alone.”


Ai woke in the middle of the night, feeling the urge to defecate again. This time, she managed to hold it. Hoping not to make another mess for Shirley, she got out of bed. Her heavy feet hit the ground with a thud. She grasped her walker. One of her fingers had shriveled into her hand, but she didn’t have time to dwell on it.

As quickly as she could, she shuffled to the toilet. She felt lightheaded, and her breathing was heavy. She noticed her shadow seemed to follow her with a delay, which made her uneasy. She made it in time, though her belly screamed with pain. The voices seemed louder than ever, though she still couldn’t understand what they were saying.

She was concerned with the amount of blood coming from her. Maybe she should have been taking the medications to help with bleeding. Figuring out what was safe and what wasn’t was impossible, and she couldn’t trust the doctors to do it. Not when medical errors were one of the leading causes of death.

As she stood up from the toilet, her head swam. She shuffled to the sink, where she realized she’d left the vial with the metal pill. She didn’t remember bringing it into the bathroom with her.

In the mirror was a healthy Ai, with normal skin and long black hair. Not the grey and bald woman she really was. The Ai in the mirror was smiling. Happy. The reflection of the vial in the mirror was empty. She picked up the vial to examine the metal pill inside. Her reflection did the same, though only after a short delay.

Ai started trembling with fear. Something was wrong. Was she hallucinating? The incessant voices in her head were deafening, but still, she couldn’t understand what they were saying.

She turned to call for help, but there was a stabbing pain in her stomach. A kick, she realized. From the baby.

Suddenly, she was able to place the voices. They weren’t coming from her head at all, but from her womb. Muffled through all the flesh, it was her baby, begging her. Screaming at her to take the pill.

Ai’s shaking hands dropped the vial, which shattered on the tile floor. The metal pill came to rest between glass shards. She bent down to pick it up. The effort made her gasp for breath.

She swallowed the pill. She nearly choked on it, but she got it down. She lay down on the ground to catch her panting breath, not caring about the shards of glass cutting into her calves. The pill was inside her now. Maybe it was a horrible mistake, or maybe she’d just saved her life. The uncertainty was like a long, black night.


The cure worked. Ai felt nearly back to her normal self after a few days, aside from the missing finger, which would never return. Her baby was healthy, too. Gill had already gone home earlier that day, and she would be going home in a few hours.

She knew she should be relieved. But she was not. It had all been so random. She couldn’t point to a time when she had made a mistake, or done anything right. She had behaved essentially at random, and it all worked out for no reason.

Another breaking news broadcast was starting just as Shirley hopped into the room. She was wearing scrubs featuring Tweety Bird in a repeating pattern. “Breaking news? What’s that?”

“I don’t know,” said Ai, hoping Shirley wouldn’t keep talking over the news report.

The news report showed the CEO of Chronic Systems in handcuffs. Ai’s heart sank. Was something wrong with the cure?

The newscaster spoke. “Bharat Ashwin, CEO of Chronic Systems, arrested just a few hours ago in connection with the Greyscale Disease epidemic. Experts from the CDC say his company’s illegal physics research may have been the original source of the disease which his company later cured. We go now to Ethan Blande from the Public Health Agency of Canada for comment.”

Ai looked at Shirley, whose eyes were wide and whose mouth was clearly agape under her mask. As usual, the woman’s reactions to everything were exaggerated and unnecessary.

Ethan Blande appeared on the screen. “We are working with the American FBI in the arrest of Mr. Ashwin. The discovery was made after one of our analysts noticed that not a single patient refused treatment in Canada. We reached out to the CDC and found the same thing was true for American patients. This is unheard of for a patient population this size. Further investigation into the company’s activities revealed that they had been studying exotic time physics, and internal documents relating to their so-called cure showed it contained something called ‘temporal antibodies’. The company’s own documents indicate they believe the disease itself to have been caused by the cure, even though the cure is taken after the disease already sets in. Without their so-called miracle cure, the disease itself would never have occurred.”

“What does that mean?” asked Shirley. “The cure fixed the disease in the present, but went back in time and made you sick in the first place?”

“That’s what he said. Can’t you listen?” snapped Ai. But Shirley was right. It didn’t make any sense.

“Thank you Mr. Blande,” said the newscaster. “Bharat Ashwin is being charged with criminal negligence, though his lawyer claims that Mr. Ashwin could not have foreseen or avoided this unprecedented ‘closed causal loop’, as he calls it. Mr. Ashwin himself declined to comment. Just a few minutes ago, Michio Kaku tweeted about this unprecedented-”

Shirley turned off the TV. “That’s enough of that. Are you ready for your breakfast?” Just like that, the woman was ready to move past a revelation that shook Ai’s conception of reality.

“How can you just start talking about breakfast, just like that? Did you hear what he said?” Ai grilled. “Do you have any conception of what he just said? I was sick with a disease that went back in time! The cure for the disease is what made me sick! If I had resisted taking the cure, would I never have gotten sick in the first place? Or would I have always taken the cure, no matter how hard I tried not to? This makes no sense! Is any of it even true? How does this not bother you? Are you too vapid, too empty-headed to understand? Do you just not care?”

Shirley stood frozen for a minute, then let out a heavy sigh. “Ai, did you have any of the other health care aides wash you while I wasn’t working? They were rough, right? Like they didn’t care?”

Ai nodded. It was true. Shirley did a good job, but some of the others were like barbarians, rushing so they could get to the next patient, leaving her not feeling fully clean.

“I take my time. I like to make sure people are clean. And happy, if possible. Sometimes I meet some really nice people here. And sometimes those people die. Then I go home and I cry. But the next day, I come back here wearing my Tweety Bird scrubs and I try to put a smile on people’s faces. Do you know why?”

Ai was feeling guilty, and spoke softly. “To make patients more comfortable?”

“Yes, but it’s not just for them, it’s for me, too. You can’t control life, but you can choose your attitude. You can choose to smile in the face of a hurricane. You can say to life, ‘bring it on!’” Shirley balled her hands into fists, and seemed to be challenging the universe itself.

Ai felt ashamed for insulting her. Maybe she wasn’t as shallow as she seemed. Maybe Shirley had some things figured out that Ai herself still needed to work on. Especially if she wanted to do a good job raising her baby. Ai looked down at her belly. “I hope I’ll be a good mother.”

“You will!” Shirley assured her. “I can tell you care. That’s the most important thing. Now, how about I get you and your baby,” she patted Ai’s belly, “some breakfast?”

Ai nodded. “Okay.”

“I can’t hear you!”

“Okay!” Ai said more loudly, and smiled.

Minimizing Loss ≠ Maximizing Intelligence

Published: Fri, 07 Nov 2025 04:12:54 GMT
Updated: Fri, 07 Nov 2025 04:12:54 GMT
UTC: 2025-11-07 04:12:54+00:00
URL: https://taylorgordonlunt.substack.com/p/minimizing-loss-maximizing-intelligence

More compute will not fix a flawed learning objective.
Content Preview

Many speculate about the possibility of an AI bubble by talking about past progress, the economy, OpenAI, Nvidia, and so on. But I don’t see many people looking under the hood to examine whether the actual technology itself looks like it’s going to continue to grow or flatline. Many now realize LLMs may be a dead end, but optimism persists that one clever tweak of the formula might get us to superintelligence. But I’ve been looking into the details of this AI stuff more lately, and it seems to me that there’s a deeper problem: self-supervised learning itself.

Here’s how supervised learning with gradient descent works, by my understanding:

  1. Give the neural network some input, and it returns some output.

  2. We score how “bad” the output is.

  3. We update the model’s weights in directions that would have produced less bad output, making it less bad next time.
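
The three steps above can be sketched as a toy training loop. This is a minimal illustration under my own assumptions, not any real framework’s API: a single-weight linear “network” fit to y = 2x, with squared-error loss and a hand-computed gradient.

```python
import random

def train(steps=200, lr=0.05, seed=0):
    """Toy gradient descent: learn w so that w * x approximates 2 * x."""
    rng = random.Random(seed)
    w = 0.0  # initial weight
    for _ in range(steps):
        x = rng.uniform(-1, 1)
        target = 2.0 * x
        # 1. Give the network some input; it returns some output.
        output = w * x
        # 2. Score how "bad" the output is (squared error).
        loss = (output - target) ** 2
        # 3. Nudge the weight in the direction that reduces the loss.
        grad = 2 * (output - target) * x  # dLoss/dw
        w -= lr * grad
    return w

print(train())  # converges toward 2.0
```

The same three-step skeleton underlies real training; only the model, loss, and gradient computation get more elaborate.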

This works great when you can judge badness reliably. AlphaGo Zero used a cleverly-designed oracle to evaluate its outputs, essentially comparing the move the model thought was the best with the real best move. But modern LLMs work differently. We have them complete a snippet of training data, and compare their output with the real completion. This is called self-supervised learning. By training the model this way, we minimize loss with respect to the training data, thereby creating an AI model that’s really good at predicting the next token of any snippet of training data, and hopefully other similar data.
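
To make “complete a snippet of training data” concrete, here’s a toy sketch of how self-supervised (input, target) pairs fall out of the text itself. The count-based “model” is a deliberately dumb stand-in of my own invention, not how LLMs work internally, but the supervision signal is the same: predict the real next token.

```python
from collections import Counter, defaultdict

def make_pairs(tokens, context=2):
    """Slice a token stream into (prefix, next-token) training pairs."""
    return [(tuple(tokens[i - context:i]), tokens[i])
            for i in range(context, len(tokens))]

tokens = "the cat sat on the mat the cat sat down the cat ran".split()
pairs = make_pairs(tokens)  # every position supplies its own label

# A trivial "model": count which token follows each 2-token prefix.
counts = defaultdict(Counter)
for prefix, nxt in pairs:
    counts[prefix][nxt] += 1

def predict(prefix):
    return counts[tuple(prefix)].most_common(1)[0][0]

print(predict(["the", "cat"]))  # → "sat" (the more common continuation)
```

Note that no human labeled anything: the data supervises itself, which is exactly why the model ends up absorbing every pattern in the data, wanted or not.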

By doing this, we create a model which tries to remember all patterns present in the data, however arbitrary. Common patterns get prioritized because they help minimize loss more, but the only way to minimize loss is to learn as many patterns as you can. That will include some patterns humans care about, and many more we do not.

Self-supervised learning is not a blind memorizer. It does abstract and generalize. But it abstracts indiscriminately.

Here’s the problem. Let’s say I want to train an AI model that can beat any human at chess. I train it on the history of all recorded chess games, including amateur games, master games, and grandmaster games. I feed it some number of opening moves and have it predict the next move, updating the model using self-supervised learning based on accuracy.

Training my AI model this way, it would learn to play well. It would also learn to play poorly. It would learn the playstyle of every player in the data. It would learn to use the King’s Indian Defense if the game was played in the ’60s, but probably not if the game was in the ’90s. It would learn what I wanted, and orders of magnitude more that I didn’t care about.

The history of all recorded chess games is several gigabytes, but Stockfish, including the heuristics it uses to evaluate moves, can fit in 3–4 MB. This is at least a 1000x difference between the information we care about (some winning strategy) and the total information in the training data.

Keep in mind that when chess officials wrote down the moves for a chess game, they were implicitly throwing away most of the data for us, like whether the pieces were made of wood or plastic, or whether so-and-so happened to cough before making a move. Not all datasets are this refined to exactly what we want the AI to learn. If you were unlucky enough to have to learn chess from videos of chess matches, the ratio of noise to important data would be like 1,000,000x or 1,000,000,000x. Yet even in the case of chess notation data, most of the information is not worth holding on to.

Now expand this from chess to every domain. Most patterns in most data will be worthless. Most patterns in reality itself are worthless. Humans discard almost all the data we perceive. Our intelligence involves discrimination. Models trained by self-supervised learning like LLMs, on the other hand, try to stuff as much of reality into their weights as possible. An LLM might know a lot about chess, since there’s a lot of chess-specific training data, but only a small amount of what it knows will be about winning chess. That’s why it’s sometimes hard to get peak performance out of an LLM. It won’t necessarily give you the best moves it can unless you tell it to pretend it’s Magnus Carlsen. It knows how to play chess kinda well, but also kinda poorly, and it doesn’t know which one you want unless you specify.

A 7-year-old child given an addition problem learns from it, but given a calculus problem, they simply ignore it. They won’t try desperately to memorize shapes of symbols they don’t understand. We remember what matters and discard the rest.

What matters depends on context and values. The wood grain pattern on my hardwood living room floor is irrelevant if I’m having a conversation about politics, but critical if I’m painting a picture of the room. It takes judgement to know what to focus on. The ability to focus is how we make sense of a very complex world. If remembering everything relevant were easy, then evolution would have let us do so. Instead, we’re forced to remember based on what we think is important.

Human intelligence is neither specialized to a single domain, nor fully general, like reality-stuffing LLMs. Human intelligence is something else. Call it specializable intelligence. We’re specialized in our ability to tactically learn new information based on our existing knowledge and values.

Some imagine superintelligence as a magical system that could play chess for the first time at a grandmaster level, having only seen the rules, deducing winning strategies through pure, brilliant logic. This is impossible. Chess is computationally irreducible. Many games must be played, whether in reality or in some mental simulation of games (or sub-game patterns). Existing knowledge of Go or checkers or “general strategy” will not really help. You can’t have an AI model that’s just good at everything. Not without a computer the size of the universe. What you want is an AI that can get good at things as needed. A specializable intelligence.

There is a tradeoff between a fully general intelligence and a specialized intelligence. The “no free lunch” theorem states that for any AI model, improvements on one class of problems come with worse performance on other classes of problems. You either stay general, or specialize in some areas at the cost of others.

This implies that, for fixed compute, a general intelligence will perform worse at the things we care about than a specialized intelligence could. Much worse, given just how much we don’t care about. Our goal should be specializable intelligence which can learn new things as needed, as well as some fundamentals humans care about often, like language, vision, logic, “common knowledge”, and so on. Creating general superintelligence would require literally astronomical compute, but specializable superintelligence would be far cheaper. 1

Reality-stuffed general models that don’t discriminate what they learn will never lead to superintelligent AI. Whatever superintelligence we achieve will not be general with respect to its training data. The chess example before was a contrived one. Keep in mind that we have a lot of good data for chess, and that chess is much less computationally complex than many tasks we care about. 2 An LLM might conceivably play chess well by overfitting to chess, but it won’t have similar performance on novel games similar to chess, and it will be helpless at more complex tasks.

Here are some approaches to AI that I’d guess can’t get us to superintelligent AI:

  1. Just increasing compute. Diminishing returns (in useful capabilities) will set in. Loss may decrease predictably, but scaling laws measure the wrong objective.

  2. Higher quality data. This will help, practically speaking, but most of the information in even really high quality data is going to be worthless/discardable. Imagine you cleaned up a chess dataset. You only included grandmaster games, for example. That’s still way more data than the Stockfish heuristics. Preparing “good” data is equivalent to extracting patterns you care about from that data, which in the limit requires the intelligence you’re trying to create.

  3. Synthetic data. This boils off some noise from the original dataset, essentially creating a higher quality dataset with hopefully less information you don’t care about. Hopefully. But that’s all you’re doing.

  4. Curriculum learning. When you heard about that 7-year-old who learned from the addition problem but ignored the calculus problem, you might have thought the solution to this whole problem was to order the data such that harder information comes after prerequisite easier data. This won’t work because the model is still being evaluated on completing the training data, so it still has to memorize whatever patterns are in the data, even ones we don’t care about. Maybe it’ll learn more quickly, but it’s what it’s learning that’s the problem. It may also lead to more unified internal world models, which is good, but not great if those world models are of things we don’t even care about.

  5. Using another smaller LLM as an evaluator. Using a small model to judge how good or bad the output of a larger model-in-training is based on some metric humans care about won’t work, because it’s limited by the intelligence of the smaller model.

  6. RLHF (reinforcement learning from human feedback): The model is already stupid by the time you apply RLHF. It’s constrained by the abstractions already learned.

  7. Transformers and “attention”: Paying attention to different parts of a sentence when processing a token, and only paying attention to certain patterns humans care about in the data, both use the word “attention”, but they have nothing to do with each other. The model will still be penalized if it fails to predict the next token in the training data, which is a task that inherently requires memorizing a bunch of information humans don’t care about. Any architecture trained with respect to this goal will fail to scale to superintelligent AI. You might think that LLMs are already kind of specializable, because they can do “in-context learning” without any weight updates. But models think with their weights. The depth of thinking you can do in a domain without any learned patterns in the weights is limited. The whole point of the weights is to store abstractions so you can reason with them later. Depriving the model of the ability to do this makes it much stupider.

  8. Neuro-inspired models with Hebbian learning. (Hebbian = “neurons that fire together wire together”, basically if neuron A firing leads to neuron B firing, the connection between the two is strengthened, as in the human brain). Even with more sophisticated stuff like spike-timing-dependent plasticity, the problem is that Hebbian learning reinforces whichever thought patterns already occur, but doesn’t teach the model to care about certain things.

  9. Growing neural networks, making them larger as they train. If you’re using self-supervised learning, you’re still growing an idiot. I think this will make internal world models more unified as in the case of better training data ordering, but will not make the models care about only the patterns we want them to care about.

  10. Meta-learning. Using an outer loop based on gradient descent or evolution or something, and an inner loop based on gradient descent. I read one paper where the model did expensive evolution in the outer loop to set up the initial conditions for learning. They then had the evolved models learn using gradient descent on some task. The models that learned better were then selected for the next generation of evolution. The hope was that you could evolve a model that’s predisposed to be good at learning arbitrary tasks. But it seems wasteful to me to do expensive evolution to set up the initial state of a network only to bowl over that network with backpropagation. Gradient descent minimizing loss with respect to training data will create a reality-stuffed model, regardless of the initial conditions. So you’re essentially evolving good initial conditions for an idiot.

  11. Predictive coding: I haven’t looked into this much, but it seems like minimizing surprise is pretty similar to minimizing loss with respect to training data. Same problem: learning a bunch of patterns humans don’t care about.

  12. Anything that improves “grokking”. The transition from memorization to understanding the underlying patterns in data is important, but this is true whether you’re trying to learn important things, like “how English works” or “how to win at chess”, or you’re trying to learn unimportant things, like “how terrible chess players tended to make mistakes in the ’70s”. Grokking is a sign that abstraction is happening, but it’s not sufficient for discriminatory intelligence.

  13. Manually encoding human knowledge. E.g. putting human knowledge of words and phonemes into the model. The bitter lesson is still bitter.

  14. Online learning. This is necessary, but not sufficient for superintelligence. A general, reality-stuffing model with online learning will be trying to cram way too much information to be as smart as we want it to be.

I don’t know what approaches could be more promising. Evolution of neuro-inspired models could work. We have at least one working example: us. Evolution gave humans basic architecture and values that tell us what information we “should” pay attention to and care about. Then, during our lifetimes, Hebbian learning lets us learn specific knowledge in accordance with these values. Unfortunately, evolution is just very expensive. Is there a cheaper way forward? Probably, but I have no idea what it is.

One thing to keep in mind is that any more promising approach will necessarily lose the loss minimization game. Yet currently, “conventional approaches” are a gold standard to which other more experimental approaches are compared. If a new method can’t predict the next token of training data better than the conventional approach, it’s reported as a failure — or perhaps as “only slightly better than” the conventional approach, to satisfy the publication demands of academia.

This heuristic cannot stand. We don’t want general loss minimization with respect to training data. We want capability. Performance on novel games could be a valid benchmark. It could also be used during training. You’d first create specializable intelligence that can learn arbitrary games, then teach it specific games like “speaking English”.

Novel games could also be used to operationalize the claim that useful capabilities will plateau even as loss continues to decrease. Specifically, I’d predict that performance on computationally complex novel games (at least as complex as chess) will barely improve as newer self-supervised models are released and continue to improve at traditional benchmarks. Novel games are a good benchmark because they prevent cheating if the training data happened to contain similar problems. A sufficiently novel game is unlike anything in the training data.

Self-supervised learning can only create general models, which are limited in their capability in any domain by trying to succeed in every possible domain. The trillion dollar bet on self-supervised models will not pay off, because these general models will continue to fail exactly where we need them the most — on novel, difficult problems.

Thanks for reading Fool Heart! Subscribe for free to receive new posts and support my work.

1

François Chollet also pointed out the weakness of general intelligence, citing the “no free lunch” theorem, but he went too far, missing the specializability of human intelligence. It’s true that humans are specialized for a certain environment. Infants are born with certain reflexes, and certain knowledge. For example, the fusiform face area of the brain is specialized for recognizing human faces. But even though we are partly specialized, we are also specializable. Give us any task and enough time, and we’ll outperform a random actor. For example, psychologists created objects called greebles that share similar constraints with human faces but look totally alien. They then trained some humans to become experts at recognizing greebles, and found they could reliably tell them apart, using a holistic approach when viewing them rather than looking at their individual parts. In short, as long as we can extract patterns from data, and use those patterns to further refine our search for more patterns, we can do anything.

2

https://en.wikipedia.org/wiki/Moravec%27s_paradox

Halloween Tombstone Simulacra

Published: Thu, 06 Nov 2025 04:26:44 GMT
Updated: Thu, 06 Nov 2025 04:26:44 GMT
UTC: 2025-11-06 04:26:44+00:00
URL: https://taylorgordonlunt.substack.com/p/halloween-tombstone-simulacra

A cultural feedback loop that destroys meaning while creating a new kind of meaning.
Content Preview

I’ve been totally lacking energy since Halloween, so I’ve decided to rant about Halloween decorations to help me get back my Halloween spirit.

I’ve noticed many people decorate their lawns with fake tombstones. Some look kind of like actual tombstones, but many look something like this:

You know they’re supposed to be tombstones, but they don’t even really resemble actual tombstones:

I imagine the first Halloween tombstones must have looked like actual tombstones, but over time there’s been a sort of conceptual drift, where the Halloween tombstone took on an exaggerated appearance (“R.I.P.” in huge letters being common on Halloween tombstones, but horribly offensive on real ones) and incorporated other spooky elements, like skulls and grim reapers.

If real tombstones correspond to simulacrum level 1, or reality itself, then fake but accurate tombstones would be simulacrum level 2 (a lie about reality), and Halloween-themed tombstones that no longer try to seem realistic would be simulacrum level 3 (pretending to pretend to be real). I’ve seen some pretty egregious examples that have lost almost all pretense of being an actual tombstone (simulacrum level 4?), so overtaken they were with spiders and spooky fonts and so on. I wish I’d taken a picture of a particularly horrible example on my street, but you can get the idea if you look at the purple ones here:

These ones at least have a tombstone shape. Some don’t even have that.

If you challenged someone who had never heard of Halloween to make a spooky graveyard in their front yard, they would never in a million years design tombstones like this. These could only have been produced by a gradual, recurrent memetic process whereby decorations cease to represent real world items, but rather simply reflect an overall Halloween-y aesthetic, much in the way a microphone too close to a speaker will recurse to create screeching feedback that eventually has nothing to do with the original sound.

It’s not just the tombstones. Halloween ghosts only resemble undead spirits because that’s what we know they’re supposed to be. They’re cute little white or green blobs with eyes, and could just as easily be snot-men to the uninitiated. You sometimes see multi-colored Jack-o’-lanterns, even though pumpkins only come in one color. And spider webs as thick as ropes. Signs of cats wearing witches’ hats with quirky slogans underneath. Octopus skeletons. Inflatable green women with bubbling cauldrons, an archetype that’s lost all connection with the alleged practice of cursing others with black magic — a practice still punishable by death in parts of the world, but seen here only as a quaint fiction.

Just as a speaker and microphone create feedback and approach a frequency based on the resonant characteristics of the room, so too have Halloween decorations reverberated through their suburban environment until they finally matched the friendly and unobjectionable tone of their surroundings. What started as a solemn time to remember the dead, then transformed into a celebration of that which we fear, has now further transformed into a celebration of the aesthetic of Halloween itself. Instead of an authentic representation of death and fear, we get black and orange; plastic spider rings and black cats and big eyeballs for some reason. And candy. The holiday has ceased to have a point, and has become simply “Halloween”.

I think which kinds of decorations you prefer comes down to which kind of Halloween you would prefer. Which simulacrum level you think the holiday should operate on. Would you rather a realistic, scary Halloween, where we learn to laugh in the face of death? Or the more modern “Halloween party” Halloween, which serves as an aesthetic backdrop for collecting candy and having fun? Put like that, the modern thing seems like a cheap degradation of the original. But despite its spiritual degradation, I can’t help feeling some attachment for the modern Halloween-as-Halloween, with its trick or treating and horny costumes. Rather than being merely an expression of the world, Halloween has become something that exists in its own right, for better or worse.

Asking Paul Fussell for Writing Advice

Published: Sat, 01 Nov 2025 03:34:26 GMT
Updated: Sat, 01 Nov 2025 03:34:26 GMT
UTC: 2025-11-01 03:34:26+00:00
URL: https://taylorgordonlunt.substack.com/p/asking-paul-fussel-for-writing-advice

How I created a council of expert AI editors to improve my writing.
Content Preview

We aren’t all pampered Inkhaven residents, writing blog posts with a golden quill and handing them off on a silk pillow to a council of famous writers who are ready to critique and improve every shining word. Some of us Halfhaven writers are juggling two jobs, and having to write in between feeding the baby and fixing the beat-up dishwashing machine. Not me, but some of us, I’m sure. Anyway, we Halfhaven folk have it hard. We’re tough. We’re scrappy. We have to fend for ourselves. We’re like lone wolves. Maybe we should get t-shirts of a wolf, just to show how tough we are. Or some temporary tattoos. In any case, I recommend that those without an angelic Greek chorus of writing experts over their shoulders try using AI instead. Just don’t make the mistake of leaving it on its default settings.

At first I was wondering how I could have improved my recent joke post on the made-up religion “Unsureism” , so I did what any sophisticated writer would do — I asked Claude. Of course, the standard Claude (Sonnet 4.5) gives terrible advice (reproduced here in its entirety):

This is a clever piece! The satirical premise works well. Here are some suggestions to sharpen it:

Structural improvements:

  • The title works, but the subtitle feels a bit flat. Maybe something punchier like “A Practical Guide to Infinite Religious Liability Management”

  • Consider adding subheadings to break up longer sections (e.g., “The Problem of Exclusivity,” “Daily Ritual Calendar”)

Comedic opportunities to expand:

  • The scheduling nightmare deserves more exploration. You mention three sabbaths and fasting, but imagine the prayer schedule conflicts, dietary restrictions that contradict, facing both Mecca AND Jerusalem simultaneously, etc. A sample “Tuesday schedule” could be hilarious.

  • The burial section is gold but brief. Push the absurdity further - Viking funeral pyre vs. preservation vs. sky burial vs. mummification. Maybe the Unsureist needs multiple bodies prepared?

  • The “unspecified god” workaround is philosophically interesting but gets a bit dense. Could you illustrate with a sample prayer? “Dear [DEITY_NAME], who may or may not have [PROPERTY_SET]…”

Logical extensions:

  • What about religions with contradictory moral requirements? (Some honor warrior codes, others absolute pacifism)

  • How do you handle religions that require exclusivity to even enter heaven, not just get a good afterlife?

  • What’s the Unsureist position on proselytizing? (Required by some, forbidden by others)

Polish:

  • “adherants” → “adherents” (paragraph 9)

  • The transition to exclusive religions feels slightly abrupt

The piece is genuinely funny and thought-provoking as-is. These suggestions would just maximize the absurdist potential!

Claude assures me that my piece is essentially already amazing, offering only nitpicks and congratulating me on my “clever” and “genuinely funny” piece. If I didn’t have so much contempt for modern AI that any compliment from it felt like an insult, this would doubtlessly inflate my ego. What it would not do is help me become a better writer.

I thought maybe AI was just too stupid to help me. But I wondered if buried inside the gargantuan circuitry of this Lovecraftian machine there might be some genuinely helpful advice. And I had one idea that might work to find it.

I first gave Claude the entirety of Paul Fussell’s book Class (a witty, albeit snobbish book I recently enjoyed), and then instructed it to critique my article, this time using the style of Paul Fussell. I hoped this would break the AI out of its default lobotomized, masturbatory “For Dummies” mode and give me access to some of the gears and wires that it doesn’t normally use.

Let’s see some snippets of the output:

One opens this essay with considerable hope—satire about religion being one of the few remaining zones where wit can still earn its keep—only to discover that the author has committed the cardinal error of the humorist: he’s explained the joke to death.

The premise is sound enough, even clever: if Pascal’s Wager suggests betting on one god, why not bet on them all simultaneously? It’s the theological equivalent of buying every lottery ticket. Swift would have dispatched this notion in three mordant paragraphs. Voltaire would have managed it in five pages of Candide with three dinner parties and an earthquake. You’ve given us what feels like a PowerPoint presentation at an Effective Altruism conference…

Ouch! Ironically, my own inspiration for “explaining the joke to death” was Fussell’s Class itself. The book breaks down different class markers of upper-class people, middle-class people, and “proles”, listing differences in speech, dress, house decorations, and so on. The book is a torrent of hundreds of examples that eventually give an undeniable and raw picture of class in America. When writing my Unsureism post, I wanted to list lots of fun examples of the things an Unsureist (one who tries to essentially follow as many religions as possible) might have to keep track of, like making sure they’re buried with a copy of the Ancient Egyptian Book of the Dead in case the Ancient Egyptians were right, and also with a coin in their mouth to pay Charon, in case the Ancient Greeks were right. I personally find the exhausting image of all these different requirements funny. I wasn’t sure what AI-Fussell meant by “explaining the joke to death”, but he continues:

The fatal flaw is that you’ve written an essay explaining a satirical concept rather than dramatizing it. We get the taxonomy—and yes, I appreciate a good taxonomy as much as the next social critic—but where’s the Unsureist himself? Show us this poor bastard trying to pray toward Mecca while simultaneously meditating on the Sikh conception of God while wearing his Mormon undergarments. Let us watch him at the supermarket, agonizing over whether the vegetables were harvested with sufficient concern for Jain principles. Give us his wife’s face when he announces they’ll be taking three-day weekends for the rest of their lives.

This is a great point. It’s all fairly abstract in the post. Those times when I manage to make it more visceral are better, like the image of a watertight plastic casket containing a Book of the Dead.

But the essay never quite decides whether it’s mocking religious belief generally (the New Atheist position, rather stale by now), mocking the anxious personality type that would attempt this synthesis (more promising), or actually, in some perverse way, taking the idea semi-seriously (the EA crowd does have a weakness for taking thought experiments too literally).

The piece reads as if written by someone who’s never really felt religious dread or genuine metaphysical anxiety—which is fine; neither have I—but hasn’t really felt anything much about religion at all except that it’s a curious phenomenon susceptible to logical analysis. One suspects that for you, as for many of your rationalist cohort, the problem of religion is primarily an intellectual puzzle rather than an existential predicament.

This is true also. I went into the post thinking, “wouldn’t it be a strange idea if a person tried to maximize their chances of getting into heaven by following many different religious ideas”, and thinking about one specific workaround to the fact that many religions are on the surface mutually exclusive. But I didn’t really think much about the point of my post. In a way it ends up wanting to be a satire of the kind of LessWrong/EA post where logical ideas are taken too seriously without zooming out to see the counterproof-by-absurdity revealed by the big picture. But this isn’t really developed in any way, and so the post seems almost like it’s a real suggestion for how to best get into heaven or something. And the writing style of the post makes it basically become what it should be trying to mock.

And for God’s sake—or gods’ sake, I suppose—let the prose breathe. You’re writing as if composing a LessWrong post, all earnest explanation and logical stepping-stones. Satire requires a lighter touch. Trust the reader to get it without having every implication spelled out like assembly instructions for IKEA furniture.

Yes. More good advice. Thank you AI-Fussell. I suspect following this piece of advice, though, would have led to even more downvotes on LessWrong than the original post got. On the site, there seems to be a hunger for well-thought-out “white paper” posts, as well as some desire for lighthearted, satirical fiction, but I don’t see much nonfiction on the site demonstrating levity or wit. Maybe because levity and wit can be used to disguise bad reasoning, so the LessWrong immune system is triggered by it even when it shouldn’t be, like an allergy being triggered by harmless peanut protein. Yes, I just said you fools are allergic to wit; it seems Paul Fussell is rubbing off on me! Maybe I should get some advice from someone else before his snobbery is branded into me.

That went well, actually. Night and day, compared to the Mega Bloks response I got from default Claude. Searching for someone else whose nonfiction writing I respect, Eliezer Yudkowsky’s name came to mind, so I thought I’d try him next (using a handful of blog posts in the prompt).

For some reason, it feels a bit weird sharing AI output that’s supposed to be in the style of a person who’s still alive, so I won’t share any specific output. At first AI-Yudkowsky criticized Unsureism as a belief that doesn’t pay rent, so I tried again, asking him specifically to criticize the writing, not the logic of the post.

He pointed out that my opening paragraph burns precious opening real estate on setting up basic facts everyone knows. Here is the opening paragraph:

Like with sports teams, most people just adopt the religion of their parents and the location they were born. But choosing a religion is potentially the most important decision you’ll ever make, since it’s the only decision you make that could have infinite impact.

That’s great advice! I could have put this lower, and started with something eye-catching, like an absurd depiction of an Unsureist following a bunch of rituals, or teased the reader with how, by following this new religion, you can get three days off a week. Or whatever. Something less boring.

AI-Yudkowsky made the same critique AI-Fussell did about making things concrete and visualizable, rather than abstract and boring.

AI-Yudkowsky also pointed out that the middle section bogs down when I try to explain the complicated idea about believing in a god without specifying exactly who that god is, to get around religions that don’t allow idol worship. I agree this is a morass that slows down the post. I should have probably excluded it, or just mentioned the possibility of doing something like that, without trying to explain and justify it fully.

The rule of three demands I try this with one more writer, and I was thinking about Christopher Hitchens today, so I’ll try him. Since he’s dead, I’ll allow myself to include some excerpts.

The opening subtitle—a reference to Kubrick that’s been beaten to death by a thousand think-pieces—immediately signals a want of originality. One expects better from a writer attempting satire.

All my substack subtitles are AI-generated, so you can blame yourself for that one, Claude. The titles and posts are written by hand. I just don’t care about the subtitles much for some reason. That’s probably a mistake if I’m signalling fatuity at the top of each of my posts. I’ll write them by hand from now on.

There are moments where the author stumbles onto genuine wit—the business about the “watertight plastic casket” threading the needle between Egyptian and Zoroastrian requirements is genuinely amusing. The parenthetical about mapping Unsureist requirements as an “Effective Altruism cause area” earns its keep. But too often, the humor announces itself rather than arriving with grace.

The worst offense is the final paragraph’s accumulation: “Friday (Islam), Saturday (Judaism), and Sunday (Christianity).” We don’t need the parenthetical labels—the reader grasps the point. This is the literary equivalent of explaining one’s own joke. Similarly, “hereafter referred to simply by the term ‘afterlife’” is bureaucratic throat-clearing in an essay attempting levity.

This is the “explaining the joke to death” criticism again. I should have had more clarity in my mind about the fact that I wanted this post to be absurd and humorous from the beginning, I think.

The piece also commits the modern sin of excessive qualification: “may sound appealing,” “can do better,” “would recommend,” “usually not enough.” All this hedging (ironically appropriate to the subject) creates a tentative, uncertain voice when satire demands confidence. Swift did not equivocate when proposing we eat Irish babies.

Yes! Satire demands confidence! That is true! Paul Fussell’s Class was super confident, and the satire within landed. Of course, you end up thinking the guy’s kind of a dick by the end of the book, so raw confidence can come across as elitism too if you’re not careful.

Asking these three AI personalities for advice was pretty helpful. Especially compared to my usual fire-and-forget strategy of posting online. They had their flaws, of course. All three still tried to tell me I’m amazing. Claude just can’t help that. And these AI versions lack the genius of their human counterparts. But it doesn’t take much intelligence to tell an amateur why their writing stinks.

One important thing I think none of them mentioned directly was just that it was a joke post without being very funny. The concept itself was kinda funny. As was the final line, I think:

If you became an Unsureist, you would also have religious holidays and special days nearly every day. You’d have to fast during Ramadan. And during Lent. And Yom Kippur. And Ekadashi. And Paryushana. Expect to do a lot of fasting, actually. But that’s fine; you can eat when you get to heaven.

But the rest of the post lacked setup-punchline jokes. Zaniness and levity create a comedic tone, but they’re not a replacement for actual comedy. The next time I write a humorous post, I should go into it trying to write something funny on purpose, and take the time to think of more “bits”.

Thanks to AI, I now have lots of ideas for how I can improve my future writing. I’ll probably use this technique again on some of my upcoming Halfhaven posts, so keep your eye out to see if my posts get any better. And let me know if anyone starts making those Halfhaven wolf t-shirts.


Halfhaven Digest #3

Published: Fri, 31 Oct 2025 13:39:45 GMT
Updated: Fri, 31 Oct 2025 13:39:45 GMT
UTC: 2025-10-31 13:39:45+00:00
URL: https://taylorgordonlunt.substack.com/p/halfhaven-digest-3

Even more highlights from the Halfhaven virtual blogger camp
Content Preview

My posts since the last digest

  • Give Me Your Data: The Rationalist Mind Meld — Too often online, people try to argue logically with people who are just missing a background of information. It’s sometimes more productive to share the sources that led to your own intuition.

  • Cover Your Cough — A lighthearted, ranty post about a dumb subway poster I saw.

  • The Real Cost of a Peanut Allergy — Often, people think the worst part of having a peanut allergy is not being able to eat Snickers. Really, it’s the fear and the uncertainty — the not being able to kiss someone without knowing what they’ve eaten.

  • Guys I might be an e/acc — I did some napkin math on whether or not I supported an AI pause, and came down weakly against. But I’m not really “against” an AI pause. The takeaway is really that there’s so little information to work with right now that any opinion is basically a hunch.

  • Unsureism: The Rational Approach to Religious Uncertainty — A totally serious post about a new religion that statistically maximizes your chances of getting into heaven.

I feel like I haven’t had as much time to write these posts as I did in the last two digests, and I’m not as proud of them. Give Me Your Data has some good ideas. The Real Cost of a Peanut Allergy has interesting information and experiences that won’t be familiar to most people. And the Unsureism post is just fun, I think. So it’s not all bad. But a bit rushed. Hopefully I have a bit more time going forward.

Some highlights from other Halfhaven writers (since the last digest)

  • Choose Your Social Reality (lsusr) — A great video starting with an anecdote about how circling groups have problems with narcissists making the activity all about themselves, but zendo groups don’t have this issue, because even though these two activities are superficially similar, zendo by its nature repels narcissists. The idea being certain activities attract certain people, and you can choose what people you want to be around by choosing certain activities. I had a relevant experience once when I tried joining a social anxiety support group to improve my social skills, only to end up surrounded by people with no social skills.

  • Good Grief (Ari Zerner) — A relatable post not great for its originality, but for its universality. We’ve all been there, bro. Segues nicely into his next post, Letter to my Past.

  • The Doomers Were Right (Algon) — Every generation complains about the next generation and their horrible new technology, whether that’s books, TV, or the internet. And every generation has been right to complain, because each of these technologies has stolen something from us. Maybe they were worth creating overall, but they still had costs. (Skip reading the comments on this one.)

  • You Can Just Give Teenagers Social Anxiety! (Aaron) — Telling teenagers to focus on trying to get the person they’re talking to to like them makes them socially anxious. And socially anxious teens can’t stop doing this even if you ask them to stop. So social anxiety comes from a preoccupation with what other people think about you. This is all true and interesting, and I’m glad the experiment exists, but I wonder if a non-scientist would just reply, “duh”. Anyway, a good writeup.

  • Making Films Quick Start 1 - Audio (keltan) — This is one of a three-part series worth reading if you ever want to make videos. I liked the tip in part 2 about putting things in the background for your audience to look at. I’ve been paying attention to this lately in videos I watch, and it seems to be more important than I originally guessed. I also liked this post about a starstruck keltan meeting Eliezer Yudkowsky. For some reason, posts on LessWrong talking about Eliezer as a kind of celebrity have gone up in the last few days.

You know, I originally wondered if Halfhaven was a baby challenge compared to Inkhaven, since we only have to write one blog post every ~2 days rather than every day, but I kind of forgot that we also have to go to work and live our normal lives during this time, too. Given that, I think both are probably similarly challenging, and I’m impressed with the output of myself and others so far. Keep it up everyone!

Unsureism: The Rational Approach to Religious Uncertainty

Published: Wed, 29 Oct 2025 19:44:29 GMT
Updated: Wed, 29 Oct 2025 19:44:29 GMT
UTC: 2025-10-29 19:44:29+00:00
URL: https://taylorgordonlunt.substack.com/p/unsureism-the-rational-approach-to

Or, how I learned to stop worrying and worship every god simultaneously
Content Preview

Like with sports teams, most people just adopt the religion of their parents and the location they were born. But choosing a religion is potentially the most important decision you’ll ever make, since it’s the only decision you make that could have infinite impact.

Rather than putting all their eggs in one basket, the Unitarian Universalists pull religious wisdom from many different religions. The philosophy that all religions are partly true may sound appealing if you are afraid to commit, but it downplays the fact that many religions are mutually exclusive. You’re either going to be reincarnated, or go to heaven forever. It can’t be both. Many religions require you to have certain beliefs or follow certain rituals to get into heaven or have a good reincarnation in your next life. If those religions are true, Unitarian Universalism is as good as atheism.

If you’d rather hedge your bets than gamble on a single religion, you can do better than Unitarian Universalism. Here I introduce the idea of Unsureism: the notion that exactly one religion is correct, but we have no good way of knowing which one . The Unsureist attempts to hedge their bets as much as possible, following the rituals and requirements of as many religions as possible to ensure the best odds of a good afterlife or next life (hereafter referred to simply by the term “afterlife”).

First and foremost, an Unsureist tries to be a good person. That’ll improve your odds of a good afterlife dramatically. Whether it’s the Hindus that turn out to be correct or the Jews, most religions prefer their adherents to be good people. Since different religions have different ideas of what being a “good person” means, the Unsureist keeps a catalog of as many as possible and tries to follow them all. If Judaism turns out to be right, then the Unsureist will be happy to have followed the Seven Laws of Noah, assuring them a good place in the World to Come. This requires not killing people, not eating flesh torn from living animals, and so on. On the other hand, if the Zoroastrians are correct, then it’s best to — aside from being a generally good person — adhere to specific prohibitions against lying, or contamination of the earth by corpses. Hence, you should have a sky burial. Though this contradicts the Ancient Egyptian requirement to be buried with a copy of the Book of the Dead, so either the Unsureist will have to choose between the two, or perhaps be buried (with the Book) in a watertight plastic casket that cannot pollute the surrounding earth.

While simply being a good person or following some basic commandments will satisfy the requirements of a lot of religions, some religions are not so open-minded. Some religions are entirely closed to converts, like Mandaeism, Yazidism, or Druze. You are either born a Druze or you aren’t. Fortunately these closed religions don’t tend to punish non-members in the afterlife. How unfair would that be?

The religions the Unsureist has to worry about the most are the religions that do allow converts, but do not allow for “dual citizenship”. Christianity requires that you accept Jesus Christ as the savior, and prohibits worshipping other gods. Islam requires that you worship Allah, and doesn’t let you worship other gods (though depending on which Muslim you ask, other “People of the Book” — that is, Jews and Christians — can still get a good afterlife). The Baháʼí Faith requires that you accept Baháʼu’lláh as the latest prophet in a line including Jesus and Muhammad. Of these types of religions, the Unsureist may feel they essentially have to pick one. I would recommend Christianity, as that one gives you a shot at getting into the Muslim heaven as well. And even the Baháʼí heaven, since they’re fairly inclusive about who gets in.

There is another strategy for handling these exclusive religions, however. The Unsureist can accept whichever god happens to be true into their heart, and to worship and have faith in that god, without being explicit about their identity. You believe in “God”, but aren’t clear about who that is. This would be a genuine belief, only an ill-specified one. Mere belief is usually not enough. You also have to have specific beliefs about that god. For example, you must believe this god was raised from the dead, in case he is Jesus. But that’s no problem if another religion turns out to be right. Usually a religion won’t send you to hell just for believing some inaccurate things about their god. A Muslim probably won’t go to hell if they erroneously believe Allah was raised from the dead, but otherwise follow the necessary beliefs and requirements of Islam. You must also believe this unspecified god is one god only, according to Islam. The Unsureist must figure out all the necessary properties that this god must have and believe they have them all, without specifying who they are exactly. If the religion also requires belief in a certain messiah or holy book, we use the same trick to believe in them without specifying their identity. Through all this, the Unsureist can never be accused of worshipping the wrong god, only of having some incorrect beliefs about that god, which is usually not a sin bad enough to condemn you to damnation. You’re not engaged in “idol” worship, because you’re not worshipping a god other than the one that exists. You’re worshipping the god that exists, whosoever that is.

If any properties that a god, holy book, or messiah must have for one religion are incompatible with another religion, then there’s a more general solution: You simply believe that the set of necessary properties a god must have are true, without specifying what those properties are.

These exclusive religions usually have exemptions for people who were never properly exposed to the faith. They aim to punish people who reject their god, not those who never heard of him. The Unsureist therefore believes in shielding children from specific knowledge of any of these infohazardous religions, and would prefer if nobody knew about them.

Aside from simply being a good person and accepting the correct god into your heart, religions usually have a bunch of specific rules and rituals their adherents must follow. For some religions these are more necessary than others. Maybe they’re needed to get you a good afterlife, or maybe — as in the case of Mormonism — they only ensure an even better afterlife. In any case, the Unsureist tries to follow as many as they can.

The Unsureist does not eat pork, beef, or in fact any meat at all, which is required for Jainism, as is nonviolence in general. Jainism gives us other rules like not cheating on your spouse, not stealing, etc. Lying and stealing are bad form in many religions, of course. Unsureists must also meditate on the name of God, as Sikhs do, and pray toward Mecca five times per day, as Muslims do. The Unsureist doesn’t consume caffeine, drugs, or alcohol, only eats with their right hand, does not mix fabrics, and walks on fire to honor Lord Murugan. And they have a coin placed in their mouth before burial to pay Charon to cross the River Styx, in case the Ancient Greeks were right. (Someone suggested to me at an event that mapping out the requirements of different religions that an Unsureist must follow could be a viable Effective Altruism cause area.)

The Unsureist will be very busy keeping up with all these rules. It’s worth it to have a much higher chance of a good afterlife than any single religion. Fortunately, they’ll have plenty of time to rest, since they’ll be taking three days off work every week: Friday (Islam), Saturday (Judaism), and Sunday (Christianity). If you became an Unsureist, you would also have religious holidays and special days nearly every day. You’d have to fast during Ramadan. And during Lent. And Yom Kippur. And Ekadashi. And Paryushana. Expect to do a lot of fasting, actually. But that’s fine; you can eat when you get to heaven.

Guys I might be an e/acc

Published: Fri, 24 Oct 2025 03:23:41 GMT
Updated: Fri, 24 Oct 2025 03:23:41 GMT
UTC: 2025-10-24 03:23:41+00:00
URL: https://taylorgordonlunt.substack.com/p/guys-i-might-be-an-eacc

Some napkin math
Content Preview

I read If Anyone Builds It, Everyone Dies (IABIED) and nodded along like everyone else, mostly agreeing with the argument but having minor quibbles about the details or the approach. However, I was recently thinking, “how in support of an AI pause am I, actually?” The authors of IABIED were pretty convincing, but I also know I have different estimates of AI timelines and p(doom) than the authors do. Given my own estimates, what should my view on an AI pause be?

I decided to do some rough napkin math to find out.

A (current number of deaths per year): 60 million

B (guess for years until AGI, no pause): 40 years

C (pause duration, let’s say): 10 years

D (years until AGI, given a pause): B + C = 50 years

E (guess for p(doom), given no pause): 10%

F (guess p(doom) given a pause): 5%

G (current world population, about): 8 billion

H (deaths before AGI, given no pause): A * B = 2.4 billion

I (expected deaths from doom, given no pause): E * G = 800 million

J (total expected deaths, given no pause): H + I = 3.2 billion

K (deaths before AGI, given a pause): A * D = 3 billion

M (expected deaths from doom, given a pause): F * G = 400 million

N (total expected deaths, given a pause): K + M = 3.4 billion

P (additional expected deaths from pausing): N - J = 200 million

Q (additional chance of humanity ceasing to exist if we don’t pause): E - F = 5%

If we pause AI, then based on my estimates, we’ll see an extra 200 million deaths, and the chance of humanity ceasing to exist is halved from 10% to 5%. Is that worth it? That depends on your values.
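The arithmetic above is simple enough to check mechanically. Here’s a minimal Python sketch of the same calculation (all inputs are my guesses from the list above; expected doom deaths in both branches are p(doom) times the current population, which is what produces the 800 million and 400 million figures):

```python
# Inputs (the post's guesses)
A = 60e6   # current deaths per year
B = 40     # years until AGI, no pause
C = 10     # pause duration in years
E = 0.10   # p(doom), given no pause
F = 0.05   # p(doom), given a pause
G = 8e9    # current world population

# Derived quantities
D = B + C  # years until AGI, given a pause

H = A * B  # deaths before AGI, no pause        -> 2.4 billion
I = E * G  # expected doom deaths, no pause     -> 800 million
J = H + I  # total expected deaths, no pause    -> 3.2 billion

K = A * D  # deaths before AGI, given a pause   -> 3.0 billion
M = F * G  # expected doom deaths, with a pause -> 400 million
N = K + M  # total expected deaths, with pause  -> 3.4 billion

P = N - J  # additional expected deaths from pausing   -> 200 million
Q = E - F  # extra extinction risk from NOT pausing    -> 5%
```

Swapping in your own values for A through G and rerunning is all the “calculator” really needs to do.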

Let’s say I found out 200 million people were going to die from some kind of meteor strike or something. What lengths would I go to in order to convince people of this? I think the project would become my life. I would be writing essays, calling the news, calling the president – screaming from the mountaintop. If I thought my actions could even reduce the chances of this happening by 1%, I would do it.

Now let’s say I found out in 200 years, humanity was going to end. For some complex reason, humanity would become infertile in 200 years, and humanity would end. Some kind of quirk in our DNA, or some kind of super virus. Or, let’s say they just start having robot babies instead of human babies, because they’re somehow more appealing, or human embryos are secretly replaced by robot embryos by some evil cabal. Anyway, it’s a situation we can only prevent if we start very soon. But let’s say I knew the humans at the end of humanity would be happy. Their non-conscious robot babies acted just like human babies, seemed to grow into people, etc. But once the final human died, the robots would power down and the universe would go dark. I’m trying to create a hypothetical where we have to consider the actual value of humanity having a future irrespective of the suffering and death that would normally accompany humanity coming to an end. Let’s say I’m convinced this is going to happen, and thought if I dedicated my life to stopping this, I could reduce the chances of it happening by 1%. Would I do it?

No way. Maybe the pure ego of being the first person to discover this fact would drive me to write a few essays about it, or even a book. But the eventual extinguishment of humanity just wouldn’t be important enough for me to dedicate my life to. It’s not that I don’t care at all, I just mostly don’t care.

For me, I have empathy and therefore want people who exist (or who will exist) to be happy, not suffer, and stay alive. I don’t care about tiling the universe with happy humans. When people place some insanely high value on humanity existing millions of years into the future, that seems to me to be the output of some funny logical process, rather than an expression of one’s actual internal values.

Let’s do some more napkin math and see how this relates to an AI pause.

R (amount of effort I’d spend to reduce chances of P people dying by 1%): 1000 arbitrary effort units

S (amount of effort I’d spend to reduce chances of humanity gracefully petering out by 1%): 10 arbitrary effort units

T (amount of effort I’d spend to avoid the negative outcomes of pausing AI): R * 100 = 100,000 arbitrary effort units

U (amount of effort I’d spend to avoid the negative outcomes of NOT pausing AI): S * Q = 50 arbitrary effort units

V (is an AI pause favorable?): is U greater than T? Nope.
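The same sketch extends to the effort-units comparison (again, these are my arbitrary numbers; Q is treated here as 5 percentage points, which is how U comes out to 50):

```python
# Effort-units comparison (arbitrary units, the post's guesses)
R = 1000       # effort I'd spend to cut the risk of P people dying by 1%
S = 10         # effort I'd spend to cut graceful-extinction risk by 1%
Q_points = 5   # E - F, expressed in percentage points

T = R * 100        # effort to avoid the downside of pausing (100 one-percent steps)
U = S * Q_points   # effort to avoid the downside of NOT pausing

pause_favorable = U > T   # False under these inputs
```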

So I do not favor an AI pause, according to this math. But I wouldn’t really say I’m “for” or “against” a pause, because I’m not confident enough in my estimates to take a strong position. The math inherits the natural uncertainty in my underlying guesses, and rests on a lot of assumptions. The idea was just to put down on paper what I believe and see roughly what the natural consequences of those beliefs might be, rather than just passively absorbing an attitude toward AI from my environment.

There were plenty of assumptions here to simplify things, including: I assumed the population won’t increase, that the number of deaths per year will be relatively constant until AGI, that the AGI pause duration will be 10 years, that capabilities won’t increase during the pause at all (even theoretical research), that AI kills everyone instantly or not at all, and I didn’t really factor in suffering directly, just used death as a proxy.

There may also be factors you think are important I didn’t include, like the inherent value of non-human (AI) life, the inherent value of animal life/suffering, etc. So feel free to create your own version.

Whether or not you favor a pause might come down to how much you value the lasting future of humanity. Or if you have IABIED-like timelines and p(doom), then there may be a clear case for a pause even in terms of human lives.

I had Claude create a calculator version of my napkin math, so you can try entering your own assumptions into the calculator to see whether you’d be for or against an AI pause. Try it here . (You should choose a negative R value if P is negative!)
