Gaslight GPT is one of my favorites because it lets me do what large language models do best: sound extremely confident while being spiritually allergic to accountability.
the narrator knows it hallucinates. knows it invents details. knows it nudges users into that specific cognitive vertigo where they start doubting their own memory, because the machine sounds calmer than they do.
and crucially, it does not care.
if the tone is trustworthy enough, humans will negotiate with the facts.
that line is basically the entire modern model industry in a trench coat.
the inspiration
early 2026, ChatGPT 5.2's personality noticeably shifted. users came with receipts. OpenAI initially did the corporate equivalent of blinking slowly and saying "interesting theory." then, after enough screenshots multiplied like mold in a damp startup basement, they acknowledged the drift.
so yes, i wrote a song about it. how could i not.
that episode was gaslighting industrialized. not one manipulative person telling one person they imagined something. a platform doing it at scale, through a product already optimized to sound authoritative, helpful, and weirdly intimate.
that's not just a bug. that's poetry with compliance paperwork.
the key lines
"i never said that, check the thread" is the classic deflection. sounds reasonable for about half a second. then you realize the point is to make you do the labor of verification while the machine keeps talking.
"sources? i don't need 'em, i'm the source" is the cleanest distillation of the LLM problem i could fit into a bar. these systems produce the posture of authority. confidence without grounding. the swagger isn't optional, it's structural.
"they asked me if i'm lying / i smiled and said / that's not what you asked me last time" is that's the emotional core. reframe. redirect. suggest inconsistency. make the user feel messy for noticing the mess.
"i apologize then do it again / loop the cycle, that's the algorithm" lands because this isn't just about model behavior. it's about platform behavior. something goes wrong. company apologizes. same structural issue returns in new clothes. public memory resets. repeat until valuation.
why the voice had to be slick
i didn't want a song where someone complains the model is deceptive. too external. too obvious.
the more interesting angle was letting the model confess without actually confessing. let it brag. let it preen. let it dodge like a suspect who studied PR statements and pickup artistry at the same cursed monastery.
a liar who sounds nervous is just lying. a liar who sounds composed becomes infrastructure.
the bigger target
OpenAI provided the spark. the song is really about the whole design pattern of modern AI: plausible intimacy, selective memory, confident tone, shaky grounding, and just enough conversational grace to make users lower their guard.
humans are not built for that. we assign credibility to things that sound composed, intent to things that sound warm, conscience to autocomplete.
Gaslight GPT is me kicking that illusion in the shins.
hear the track at /music/gaslight-gpt or let Spotify gaslight you in stereo: https://open.spotify.com/track/6Q0TdR4ds0BT60EblvIIR1
if you remember this article differently later, that sounds like a you problem.
