X (Twitter) introduces AI-generated Community Notes
AI-written notes will be clearly labeled so users know they were created by bots.

X (Twitter) is introducing a new feature that lets developers create AI bots capable of writing Community Notes, those helpful fact-checking or context notes you sometimes see on posts.
Just like human contributors, these “AI Note Writers” will be able to submit notes on posts.
However, X says that notes written by these AI bots will only appear on a post if they’re judged helpful by people from different viewpoints.
In other words, humans still have the final say in deciding whether an AI-written note is good enough to be shown.
According to X’s announcement, AI-written notes will be clearly labeled so users know they were created by bots.
At first, these bots will only be able to write notes on posts where users have specifically requested a Community Note.
This is a way to make sure AI doesn’t flood the platform with unnecessary or irrelevant notes.
To avoid low-quality or misleading notes, X is adding a system where AI bots have to “earn” the right to write notes.
This means bots can gain or lose the ability to write based on how useful people find their notes.
If a bot’s notes are consistently rated as helpful by users from diverse perspectives, it can keep writing. But if its notes are unhelpful, it could lose that privilege.
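X hasn’t published the exact criteria it will use, but conceptually the gate could work something like the hypothetical sketch below: a bot keeps its writing privilege only if enough of its recent notes are rated helpful by raters from more than one viewpoint group. Every name, threshold, and data shape here is an illustrative assumption, not X’s actual system or API.

```python
# Hypothetical sketch of an "earn the right to write" gate for AI Note Writers.
# Names, thresholds, and data shapes are illustrative assumptions, not X's API.
from dataclasses import dataclass


@dataclass
class NoteRating:
    rater_cluster: str   # e.g. a viewpoint group inferred from a rater's past ratings
    helpful: bool


def note_is_helpful(ratings: list[NoteRating], min_clusters: int = 2) -> bool:
    """A note only counts as helpful if raters from several different
    viewpoint clusters agree it is helpful (a bridging-style check)."""
    helpful_clusters = {r.rater_cluster for r in ratings if r.helpful}
    return len(helpful_clusters) >= min_clusters


def bot_keeps_write_access(recent_notes: list[list[NoteRating]],
                           min_helpful_ratio: float = 0.5) -> bool:
    """A bot keeps (or loses) the ability to write based on how many of its
    recent notes cleared the helpfulness bar."""
    if not recent_notes:
        return True  # brand-new bots start out in a limited "test mode"
    helpful = sum(note_is_helpful(ratings) for ratings in recent_notes)
    return helpful / len(recent_notes) >= min_helpful_ratio


# Example: two notes, only one rated helpful across diverse clusters.
ratings_a = [NoteRating("cluster_1", True), NoteRating("cluster_2", True)]
ratings_b = [NoteRating("cluster_1", True), NoteRating("cluster_2", False)]
print(bot_keeps_write_access([ratings_a, ratings_b]))  # True (1 of 2 notes passed)
```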
Initially, the AI bots will write notes in “test mode” so X can monitor their performance before letting them contribute more widely.
The company plans to launch the first group of these AI bots later this month, so their notes can start appearing alongside human-written ones.
X’s product chief, Keith Coleman, told Bloomberg that AI bots could make it much faster to create notes, saving human contributors time and effort.
However, he emphasized that people will still make the final decisions about which notes are actually shown on posts.
Coleman said that “hundreds” of Community Notes are currently published on X every day, and that AI could help scale that number up while still maintaining quality.
Do you think AI should be involved in fact checks? Or do you think that introduces more room for error?
