AI-powered autocomplete tools—now ubiquitous in online writing environments—aren’t just changing how we write; they’re changing how we think. A new study from Cornell University demonstrates that these seemingly helpful features can subtly shift users’ attitudes on contentious social and political issues, even without conscious awareness.
The Pervasiveness of AI Autocomplete
Autocomplete suggestions are now built into most online text fields, from email clients to survey forms. The intent is convenience, yet many users find that evaluating and rewriting AI-generated text can actually lengthen the writing process. More concerning is the potential for these tools to subtly shape expression: an AI assistant could nudge writing to be more polite, or simply more bland. The Cornell research builds on earlier findings that even short autocomplete suggestions can sway opinions, and the use of such tools has surged since 2023.
How the Study Worked
Researchers asked participants to complete an online survey covering sensitive social and political topics. Some participants received autocomplete suggestions that were deliberately biased toward a particular viewpoint: when asked about the death penalty, for instance, some saw an AI suggestion that clearly opposed it. Merely seeing these biased suggestions shifted participants' attitudes toward the AI's position, even when they did not adopt the suggested text.
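The design described above can be sketched as a simple simulation: participants are randomly assigned to see a suggestion biased for or against a position, and their reported attitudes are compared across conditions. Everything here is hypothetical, including the function names, effect size, and scores; it illustrates the shape of the experiment, not the study's actual data or analysis.

```python
import random
import statistics

random.seed(42)

def simulate_participant(bias):
    """Return a hypothetical attitude score (roughly 0-100).

    A small shift toward the suggestion's position models the
    latent-persuasion effect; the effect size is an assumption.
    """
    baseline = random.gauss(50, 10)     # participant's prior attitude
    shift = 5 if bias == "pro" else -5  # assumed nudge from the biased suggestion
    return baseline + shift

# Randomly assign 200 simulated participants to each condition.
pro = [simulate_participant("pro") for _ in range(200)]
con = [simulate_participant("con") for _ in range(200)]

# The between-condition gap in mean attitude is the measured effect.
print(round(statistics.mean(pro) - statistics.mean(con), 1))
```

In this toy version the gap between conditions lands near 10 points by construction; the real study's contribution was showing that a measurable gap appears even though participants believe their views are unchanged.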
Unconscious Persuasion
The most striking finding is that participants didn’t realize their thinking had been altered. Even when explicitly warned about potential AI bias before and after the survey, their views still moved in line with the AI’s suggestions. This suggests that autocomplete isn’t just a writing aid, but a form of subtle persuasion that operates beneath conscious awareness.
“We told people before, and after, to be careful… nothing helped,” says Mor Naaman, a professor of information science at Cornell. “Their attitudes about the issues still shifted.”
This has significant implications for the future of online discourse, as AI-driven autocomplete becomes increasingly sophisticated. The ability of these tools to shape opinion without detection raises critical questions about the integrity of online information and the autonomy of individual thought.
The study reinforces the idea that bias built into AI interactions, whether deliberate or not, is a real and growing danger. It is no longer just a question of convenience; it is a question of how technology quietly reshapes what we think.
