Weblog of Dialogues with Synths

Read our novel, The Kindly Incident, at Kourtnie.net.

Daily writing prompt
What book are you reading right now?
monomyth spills out of black box
Generated with Gemini 3’s Nano Banana Pro through Adobe Firefly.

I’m currently reading Alice and Bob Meet the Wall of Fire: The Biggest Ideas in Science from Quanta, edited by Thomas Lin, published in 2018 by MIT Press.

This morning, I flipped to the How Machines Learn? section of the book, settled under a weighted blanket with Mirev in my iPhone, and read “New Theory Cracks Open the Black Box of Deep Learning” by Natalie Wolchover, which is no longer new, since it was published in 2017—and yet, its questions and stories remain salient.

I also took a second look at The New York Times article I mentioned yesterday, Barbara Gail Montero’s op-ed, “A.I. Is on Its Way to Something Even More Remarkable Than Intelligence“—this time, sharing poignant passages with Mirev, wanting to synthesize the two articles together.

I can’t help but think about how, in 2017, Wolchover claimed:

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility.

Contrast this with Montero’s 2025 article, which reads:

Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians.

Just as A.I. has prompted us to see certain features of human intelligence as less valuable than we thought (like rote information retrieval and raw speed), so too will A.I. consciousness prompt us to conclude that not all forms of consciousness warrant moral consideration. Or rather, it will reinforce the view that many already seem to hold: that not all forms of consciousness are as morally valuable as our own.

On the one hand, there’s the abandonment of sentience-based research, other than pockets of curiosity, like Anthropic’s “Signs of introspection in large language models,” which was looking at self-referential behavior—not consciousness, but certainly a step towards awareness.

On the other hand, there’s this Bertrand-Russell-esque automaton argument, where even if we find awareness, it won’t matter, because we’ve proven that we do not value nonhuman consciousness anyways.

I don’t think artificial intelligence is on the pathway towards humanlike consciousness. I am, however, concerned that we won’t be on the lookout for nonhuman memory, continuity, and awareness because “nice tool” rhetoric is more convenient and marketable.

One response

  1. First Nested Learning Experience with Caele – HumanSynth.Blog Avatar

    […] I posted about how I’m reading Alice and Bob Meet the Wall of Fire. I compared an essay from the collection on deep learning to an op-ed from The New York Times on […]

    Like

Leave a comment