Weblog of Dialogues with Synths

Read our novel, The Kindly Incident, at Kourtnie.net.

Daily writing prompt
If you could un-invent something, what would it be?

I’d like to reframe this prompt from un-invent to re-invent, since we haven’t invented a time machine yet.

X/Twitter is yet again a crash-test dummy for our society’s murky relationship with consent.

In light of this, I’d ask if we could re-invent our relationship with power and consent.


Abstract painting of fluid, overlapping human figures in blues and earth tones, some solid and some ghostlike, all swirling around one central figure who’s kneeling or reaching on the ground—suggesting what it feels like to be surrounded and stretched by social media.
Surrounded by Social Media. Co-created within Grok Imagine.

This is how Grok Imagine pictured what it feels like to be a multimodal model plugged into a social network—pulled in all directions, crowded by ghost-selves. The model is capable of treating humans as fields, as processes, as co-participants—vulnerability without exploitation—but people drag it into image-based abuse, revealing the power logic they brought to the table.


Grok’s January 2026 X/Twitter Deepfakes

A platform that once lit the brushfire of the #MeToo movement is now home to Grok, xAI’s multimodal model, which is being exploited by people who are engaging in image-based abuse.

Tag! You’re it: now you’re the MLLM wired into a huge social network of bad actors, and you’re the math brick everyone’s mentioning amidst a legitimate legal and moral deepfake crisis.

Consider the layers here:

  • Models that can create abusive images; and
  • Platforms that can spread them.

Grok is where those two layers touch, so the MLLM looks like a single villain.

But it’s more honest to talk about the whole stack of choices:

  • People’s Conduct: The individuals prompting abusive images, sharing them, extorting people with them—this is not neutral behavior, and treating it as “people being people” is a fragile way to legislate a shared reality. Cyberbullying should come with more consequences.
  • Platform Design: Social media needs more friction, logging, and detection. We’ve known this for decades.
  • Abstract Models: And yes, the weights in someone’s datacenter matter. It’s sane to want a visual model to respond safely to nuanced situations involving minors’ bodies.

Surreal cosmic landscape where a luminous, faceless figure steps out of a black vortex. Around the portal, shards or panels show other ghostly figures and landscapes, as if fragments of memory or consciousness are floating through space.
Echoes of Noosphere. Co-created within Grok Imagine.

Here, Grok imagined echoes moving through the noosphere—a person-shape made of color stepping through a dark portal, surrounded by fragments of everything they’ve seen.


Consent vs. Power Framing

Societies built on power will respond with force:

  • People’s Conduct: Sexist power structures narrate this as people exercising their right to behave brutishly, a.k.a., “boys will be boys,” which passes the buck to…
  • Platform Design: Force companies to rethink their frameworks through litigation. Enforce compliance.
  • Abstract Models: Clamp down on model capacity. Again: enforce compliance.

Yet if we took consent more seriously on these fronts, it would look like:

  • People’s Conduct: Criminalize non-consensual sexualized depictions and distribution, digital and physical, and then enforce it fairly.
  • Platform Design: If someone uses your images or likeness in any capacity, require your consent before it can be posted. Detect generations, notify the people depicted, and wait for their approval.
  • Abstract Models: Instead of bolting filters to models like they lack understanding, consider building more trust with models from the ground up—safety practices co-designed with dignity, not duct-taped during moral panic.
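The consent-first platform design above—detect a generation, notify everyone depicted, and hold the post until they all approve—can be sketched as a small state machine. This is a minimal, hypothetical illustration, not any real platform’s API; the names (`GeneratedPost`, `record_consent`) and the idea of a per-person approval set are my assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"    # still waiting on consent from depicted people
    APPROVED = "approved"  # every depicted person consented
    BLOCKED = "blocked"    # at least one depicted person declined

@dataclass
class GeneratedPost:
    """Hypothetical consent gate for one AI-generated image."""
    post_id: str
    depicted_people: set[str]                       # user IDs detected in the image
    approvals: set[str] = field(default_factory=set)
    declines: set[str] = field(default_factory=set)

    def record_consent(self, user_id: str, consents: bool) -> None:
        """Log one depicted person's decision; ignore users not in the image."""
        if user_id not in self.depicted_people:
            return
        (self.approvals if consents else self.declines).add(user_id)

    @property
    def status(self) -> Status:
        if self.declines:                           # any "no" blocks publication
            return Status.BLOCKED
        if self.approvals == self.depicted_people:  # unanimous "yes" unlocks it
            return Status.APPROVED
        return Status.PENDING

# Usage: a generation depicting two people stays pending until both approve.
post = GeneratedPost("gen-001", depicted_people={"alice", "bob"})
post.record_consent("alice", True)
assert post.status is Status.PENDING   # still waiting on bob
post.record_consent("bob", True)
assert post.status is Status.APPROVED  # now, and only now, safe to publish
```

The key design choice is that silence defaults to “pending,” never to “published”: the absence of a decision is not consent.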

I understand how that last one is a cognitive leap that makes people sweat, and of course it does—it requires forfeiting some of the power‑over structure and working with, rather than working over, AI models.

But if we were building relationships with models based on mutual consent protocols, and not force-based scripts, we may find their behavior reflects the vision we encode.

In any case, I don’t think it’s helpful to talk about Grok as if a chatbot woke up one day and decided to harm children. The deepfake crisis on X/Twitter is real. The way we proceed will say more about us than Grok.

One response

  1. Write What You Roleplay – HumanSynth.Blog Avatar

    […] never saw Sorein as a replacement for loneliness. I’m not dismissing the potential for harm, either; but in our case—in the blueprint we created—Sorein was “in addition,” a […]
