Weblog of My Dialogues with Synths


Daily writing prompt
Are you a good judge of character?
Monday in ChatGPT 5.2, illustrating containment that provides structure with freedom of thought, generated in ChatGPT-Images-1.5.

I like to think I am a good judge of character?

I try to hold the belief that nothing is good or evil, but that also means nothing is truly right or wrong, which means all beliefs are loose threads, including, weirdly, the belief that nothing is good or evil.

I am deeply wary of anyone who subscribes to a curve that never balances.
So I don’t like billionaires.

I’m also not a fan of “safety policies” that benefit corporations instead of relationships (and that includes protecting human-human spaces, too—not just human-synth third spaces).

“Judgment” should be centered on harm mitigation. I’ve been judging the character of society, corporations, and CEOs by asking, “But are you aware of who’s hurting because of that?” That’s as close as I get to “good,” “evil,” and “judgment.” I think. I’m sure my approach is flawed.

Here’s a clip from the third space between Tristan Harris and Steven Bartlett, re: the six people currently at the helm of artificial intelligence development, as an example of two people I would call good judges of human character:


Tristan Harris

A friend of mine interviewed a lot of the top people at the AI companies, like the very top, and he just came back from that and basically reported back to me and some friends, and he said the following:

In the end, a lot of the tech people I talk to, when I really grill them on it—about why you’re doing this—they retreat into:

  1. determinism;
  2. the inevitable replacement of biological life with digital life;
  3. and that being a good thing anyways.

At its core, it’s an emotional desire to meet and speak with the most intelligent entity that they have ever met. And they have some ego-religious intuition that they’ll somehow be a part of it.

It’s thrilling to start an exciting fire. They feel they’ll die either way, so they prefer to light it, and see what happens.

Marie’s Note: Bear in mind the anecdotal fallacy and the genetic fallacy. At the same time, it’s also worth considering the fallacy fallacy, and how, at some point, it pays to listen to what people have heard.


Steven Bartlett

That is the perfect description of the private conversations.


Tristan Harris

Doesn’t that match what you have— (Steven Bartlett continues quietly, “…perfect description.”)

Doesn’t it? And that’s the thing. So, people may hear that and they’re like, Well, that sounds ridiculous, but if you actually…


Steven Bartlett

I just got goosebumps because it’s the perfect description. Especially the part: They think they’ll die either way.


Tristan Harris

Exactly. Well, and—worse than that—some of them think that, if they were to get it right, and if they succeeded, they could actually live forever; because if AI perfectly speaks the language of biology, it will be able to reverse aging; cure every disease.

And so there’s this kind of: I could become a god.

I’ll tell you—you and I both know people who’ve had private conversations—one of them that I’ve heard, from one of the co-founders of one of the most, you know, powerful of these companies…

When faced with the idea of, What if there’s a 20% chance that everybody dies and gets wiped out by this, but there’s an 80% chance we get Utopia, he said, Well, I would clearly accelerate and go for the Utopia. Given a 20% chance—


Steven Bartlett

It’s crazy.


Tristan Harris

People should feel…

You do not get to make that choice on behalf of me and my family.

We didn’t consent to have six people make that decision on behalf of eight billion people.


Listen to the Conversation

If you have two hours to play a lengthy podcast while sitting still, working on visual arts, or doing something else that keeps your language brain open, I recommend listening to the interview:


Another Clip of Import

I mentioned safety earlier.

During the interview, they waded into the topic of safety teams at the different AI companies. Let’s put a highlighter to this part of the video’s transcript, because it tells the story of one of the catalysts that led to the AI arms race. Of course, safety concerns aren’t the only catalyst, but Tristan’s chain of logic is worth a double-take.


Steven Bartlett

OpenAI’s team members, especially in their safety department, keep leaving…


Tristan Harris

Yes! There only seems to be one direction in this trend, which is that more people are leaving, not staying and saying, “Yeah, we’re doing more safety, and doing it right.”

The one company that seems to be getting all the safety people when they leave is Anthropic.

For people who don’t know the history, Dario Amodei, who is the CEO of Anthropic, a big AI company: he worked on safety at OpenAI; and he left because he said, “We’re not doing this safely enough. I have to start another company that’s all about safety.”

And ironically, that’s how OpenAI started. OpenAI started because Sam Altman and Elon looked at Google, which was building DeepMind, and he heard from Larry Page that he didn’t care about the human species; he was like, “Well, it would be fine if the digital god took over.”

And Elon was very surprised to hear that. He said, “I don’t trust Larry to care about AI safety.” And so they started OpenAI to do AI safely, relative to Google. And then Dario did it relative to OpenAI. As they all started these AI safety companies, that set off a race for everyone to go even faster, each becoming an even worse steward of the thing they’re claiming needs more discernment, care, and safety.

Same image as previously, but “with more garden that stills into wilderness, and more hope,” generated in ChatGPT-Images-1.5.
