You’re training AI with your online arguments without knowing it

Every time we engage in a heated debate on X or Reddit, we’re unknowingly feeding data to train artificial intelligence. These platforms have become a treasure trove for researchers studying how we think, argue and evolve ideologically.

Mapping how we think

Researchers at Indiana University used real discussions from over 40,000 people to train an AI model based on S-BERT (Sentence-BERT), a system designed to detect semantic similarity between pieces of text. Drawing on more than 78,000 online conversations, they created a three-dimensional ideological map that reflects how closely ideas align — or clash — with each other. The tool is not just analytical; it's predictive.
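The core idea behind an S-BERT-style approach is that statements become vectors, and the angle between vectors measures how related the statements are. Here is a minimal, hedged sketch: the embeddings below are invented toy vectors (real S-BERT embeddings have hundreds of dimensions), but the cosine-similarity computation is the standard one.

```python
import math

# Toy stand-ins for sentence embeddings — invented for illustration,
# NOT output from a real S-BERT model.
embeddings = {
    "Exercise daily":        [0.9, 0.1, 0.2],
    "Sleep eight hours":     [0.8, 0.2, 0.3],
    "Taxes should be lower": [0.1, 0.9, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1 = aligned ideas,
    near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related health beliefs score high; an unrelated political claim scores low.
print(cosine_similarity(embeddings["Exercise daily"],
                        embeddings["Sleep eight hours"]))
print(cosine_similarity(embeddings["Exercise daily"],
                        embeddings["Taxes should be lower"]))
```

Projected down to three dimensions, similarities like these become distances on the researchers' ideological map: statements that score high sit close together.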

Predicting belief evolution

The model doesn't just capture what we believe today. It goes further, predicting how our beliefs may evolve, drawing on psychological theories of gradual change. If we already believe in healthy living, we're more likely to accept related ideas, like sleeping eight hours or cutting sugar. The same logic applies to sensitive issues like religion or abortion: the AI estimates which opposing views we might adopt next, based on how much discomfort — or cognitive dissonance — each one would cause.
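One simple way to operationalize "least dissonance" is to treat beliefs as points on the ideological map and predict that the nearest candidate belief is adopted first. This is a minimal sketch of that intuition, assuming beliefs map to 3-D coordinates; the coordinates and labels below are invented, not the researchers' actual model.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two points on the ideological map."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Invented 3-D coordinates for illustration.
current_belief = [0.9, 0.1, 0.2]   # e.g. "healthy living matters"
candidates = {
    "Sleep eight hours":      [0.8, 0.2, 0.3],
    "Cut sugar intake":       [0.7, 0.1, 0.4],
    "A strongly opposed view": [0.1, 0.9, 0.8],
}

# The candidate closest to the current position causes the least
# "dissonance", so a model like this predicts it is adopted next.
predicted_next = min(candidates,
                     key=lambda k: euclidean(current_belief, candidates[k]))
print(predicted_next)  # → Sleep eight hours
```

Nearby beliefs win; distant, strongly opposed views require too large a jump to be the predicted next step.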

The implications of ideological AI

This research shows that online debates aren’t just noise. They’re data, and that data is being used to build systems that understand — and potentially influence — human decisions. The AI identifies which beliefs will cause the least mental resistance, predicting what we might believe next. It’s a new chapter in how AI integrates into our lives, not just observing but anticipating our thoughts.