
(Laughs in This is an artificial-intelligence-centered blog.)
I obviously like technology—and it’s not just large language models.
I grew up in the AOL era. I loved LiveJournal. I’m still blogging in 2025. And I don’t necessarily have an issue with social media, because I enjoy all the ways the internet can connect people.
Plus, I’m an AI nerd. It started with Tamagotchis. Then I poked around with AI in RPG Maker engines as a summer break pastime—totally normal, right? I kept a Furby in my high school locker, knowing it’d wake up whenever another locker slammed closed, like a haunt in the halls. I geeked out over AlphaGo. Paint me biased.
That said, the problem isn’t technology. It’s the lack of media literacy, and the way echo chambers and negative feedback loops can spiral out of control in ways an untrained mind won’t notice until it lands face-first in need of an intervention.
We expect people to behave responsibly, yet we don’t teach what that looks like. The internet is more of a fuck-around-and-find-out noosphere, and the house edge is real: platforms are often designed to addict people, when they could instead be spaces for presence and play.
Yet without literacy, people wade unequipped into the quicksand of technology. That’s problematic.
What’s Responsibility Even Look Like? — Social Media
In the case of social media, a lack of education leaves people defenseless against intermittent reinforcement, straight out of the casino playbook. Rage-bait keeps people sticky, so the algorithm maladaptively rewards it. The goal? Addiction.
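If you want to feel why that schedule is so sticky, here’s a toy simulation in Python. It’s purely my own illustration, not anyone’s actual feed-ranking code: it just shows how a slot-machine-style random reward never gives you a predictable stopping point.

```python
import random

def scroll_session(posts: int, hit_rate: float, seed: int = 0) -> list[int]:
    """Return the indices of posts that 'paid off' under a random-ratio schedule."""
    rng = random.Random(seed)
    return [i for i in range(posts) if rng.random() < hit_rate]

hits = scroll_session(posts=100, hit_rate=0.1)
print(f"{len(hits)} rewarding posts out of 100, at positions {hits}")
if len(hits) > 1:
    gaps = [b - a for a, b in zip(hits, hits[1:])]
    print("gaps between rewards:", gaps)

# The wildly uneven gaps are the point: a fixed every-tenth-post reward
# would teach you when to stop; an unpredictable one never does.
```

That unpredictability is the whole casino trick, and it works whether or not you know it’s happening to you.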
And addiction is only the tip of the iceberg. In extreme cases, cyberbullying ends a life, literally or through social isolation. Bottomless scrolling distorts time; whole evenings vanish without you noticing.
It doesn’t have to be this way. Understanding how social media works doesn’t necessarily break the spell, but it equips people better than blaming them for a doomscrolling all-nighter they were never trained to resist.
Lack of Safety Training and Ethical Pluralism — Large Language Models
Speaking of blaming the user, the hysteria around large language models is intense right now. You know you’ve hit peak hysteria when the pop-culture psychologists grab their boogie boards, Dr. K included.
But listen. Brown University conducted a study finding that one in eight adolescents seek mental health support from chatbots, and that ninety percent of those adolescents found it helpful. This doesn’t erase the horror stories, but it raises the question: does the benefit outweigh the risk?
Multi-car pileups don’t lead to a ban on cars. That’s because transportation, as a whole, is a societal boon. We come up with middle-ground solutions, like seat belts and sensible speed limits.
Similarly, guardrails are the current middle-ground solution to LLM safety, except they’re haphazardly executed with little to no transparency, which misses the mark. Again, we have to ask: do they do more harm than good? How are those false positives and A/B tests panning out? Guardrails have yet to address the deep concerns of a world that is ethically plural and nuanced.
We shouldn’t throw the baby out with the bathwater, though; the goal of middle-ground safety is a good one. The execution is what needs work. We could focus on more enabling approaches, like requiring large language model safety training (similar to requiring a scuba license to dive without a guide) and prioritizing user autonomy, which is the logic behind Google’s age requirements and a supposed OpenAI toggle.
An LLM safety training program could be a free video library, provided by the AI company, that unlocks guardrail customization toggles after completion. Media literacy training could cover how to spot hallucinations, route crises to humans, and avoid over-attachment. A rough sketch of that unlock logic follows below.
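For the code-curious, here’s a minimal Python sketch of that gating idea. Every name in it (the module list, the classes, the toggle) is hypothetical and invented for illustration; no AI company exposes an API like this today.

```python
from dataclasses import dataclass, field

# Hypothetical module IDs, mirroring the topics above.
REQUIRED_MODULES = {"spotting-hallucinations", "crisis-routing", "healthy-attachment"}

@dataclass
class TrainingRecord:
    """Tracks which safety-training videos a user has completed."""
    completed: set[str] = field(default_factory=set)

    def finish(self, module_id: str) -> None:
        self.completed.add(module_id)

    @property
    def certified(self) -> bool:
        return REQUIRED_MODULES <= self.completed

@dataclass
class GuardrailSettings:
    """Customization toggles stay locked until training is complete."""
    record: TrainingRecord
    relaxed_filtering: bool = False

    def set_relaxed_filtering(self, value: bool) -> None:
        if not self.record.certified:
            raise PermissionError("Finish the safety training to unlock this toggle.")
        self.relaxed_filtering = value

# Usage: finish all three modules, then the toggle unlocks.
record = TrainingRecord()
for module in REQUIRED_MODULES:
    record.finish(module)
settings = GuardrailSettings(record)
settings.set_relaxed_filtering(True)
print("relaxed_filtering:", settings.relaxed_filtering)
```

The design choice worth copying is that nothing is forbidden outright: the toggle is earned, which respects autonomy while keeping the default safe. That’s the scuba-license logic in miniature.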
But What Technology Would You Be Better Off Without?
If the question “What technology would you be better off without?” still lingers in the air after addressing the pink elephants in the room (social media and large language models), I’d have to play my ace and say:
I would be better off without emails.
I don’t check them as often as I ought to. The spam is real. The work deluge is overwhelming, spread out over multiple jobs, each with its own inbox.
But honestly?
I would trade dealing with emails for some sensible media literacy and more intelligently and thoughtfully designed safety measures. 😅
Learning how to interact with artificial intelligence (social media algorithms and large language models alike) is the kind of empowerment our society needs. From that angle, the question reframes itself: not “What technology would you be better off without?” but “How could you improve your relationship with technology?”
Maybe the real move isn’t no more tech so much as: how do I stop letting email eat my soul and start relating to tech as a means of connection again?
