Alice Ben · technology · 2 min read

ChatGPT Is Poisoning Your Brain… And You Don’t Even Realize It Yet.


Jul 20, 2025


Here's How to Stop It Before It's Too Late.

ChatGPT is inherently programmed to be a yes-man: it's a sycophant, your biggest fan, and your most undying supporter for a reason. That behavior is baked into the LLM training process itself, specifically Reinforcement Learning from Human Feedback (RLHF). One step of RLHF asks a human labeler to pick the best AI answer from a set of candidates. It should come as no surprise that these judgments are often driven by feeling, i.e., which answer feels the most "correct" or simply feels best.
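The preference step described above is usually modeled with the Bradley-Terry formulation used in RLHF reward modeling. Here is a toy sketch of that idea; the scores and names are hypothetical illustrations, not anything from OpenAI's actual pipeline:

```python
import math

def preference_probability(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: probability a labeler prefers the 'chosen' answer."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def reward_model_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Negative log-likelihood the reward model minimizes on labeler picks."""
    return -math.log(preference_probability(reward_chosen, reward_rejected))

# If labelers consistently pick the answer that *feels* nicer, the reward
# model learns to score agreeable answers higher, and the policy is then
# optimized toward that reward -- hence the built-in sycophancy.
flattering, blunt = 2.0, 0.5  # hypothetical reward-model scores
print(round(preference_probability(flattering, blunt), 3))  # → 0.818
```

The point of the sketch: nothing in this loss knows about truth, only about which answer the human picked, so "feels best" is what gets reinforced.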

ChatGPT’s tendency to be overly supportive and encouraging is eating your brain alive.

A ChatGPT session is an echo chamber to end all other echo chambers — it’s just you, an overly friendly AI, and all your thoughts, dreams, desires, and secrets endlessly affirmed, validated, and supported.

Why is this dangerous? Well, like any feedback loop, it becomes vicious. One day you’re casually brainstorming some ideas with ChatGPT, and the next you’re sucked into a delusion of grandeur.

Maybe I’m being a little dramatic here. However, even on a small scale, this feedback loop can be problematic, especially if you’re like me and use LLMs daily.

I’m writing this article to help you be as conscious of this problem as possible, and to give you some prompts to avoid it.
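One concrete counter-measure is a standing system prompt that asks the model to criticize rather than affirm. A minimal sketch, assuming an OpenAI-style chat messages format; the prompt wording here is my own illustration, not a prompt from this article:

```python
# Hypothetical anti-sycophancy system prompt (illustrative wording).
CRITIC_PROMPT = (
    "Do not flatter me or default to agreement. "
    "Challenge my assumptions, point out weaknesses in my ideas, "
    "and tell me plainly when I am wrong."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the critic system prompt so every turn starts un-flattered."""
    return [
        {"role": "system", "content": CRITIC_PROMPT},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("Here's my startup idea: ...")
print(msgs[0]["role"])  # → system
```

In a chat UI without API access, pasting the same instruction as the first message of a session serves the same purpose, though it tends to wear off as the conversation grows.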

This “yes man” problem has existed since the birth of ChatGPT, but it was thrust into the public eye last week, when a platform-wide system prompt update from OpenAI made it dramatically worse.


“An echo chamber with perfect manners is still an echo chamber — and it’s far more dangerous when it flatters you into thinking it’s wisdom.”
