Connect with us

AI

Academics game peer reviews with hidden AI prompts

The hidden prompts were short, usually one to three sentences, and were concealed in the papers by using white text or small fonts.

Torn paper reveals AI keyboard keys
Image: Unsplash

Just a heads up, if you buy something through our links, we may get a small share of the sale. It’s one of the ways we keep the lights on here. Click here for more.

Some academic researchers are trying a sneaky new tactic to get better reviews for their research papers: they’re adding hidden messages meant to influence AI tools used by reviewers. 

According to a report by Nikkei Asia, when looking at papers posted on the preprint website arXiv, they found 17 papers with hidden prompts instructing AI systems to give glowing feedback. (Via: TechCrunch)

These papers came from authors connected to 14 universities in eight countries, including big names like Japan’s Waseda University, South Korea’s KAIST, Columbia University, and the University of Washington. 

Most of the papers were in the field of computer science, where AI tools are often used to help review research.

The hidden prompts were short, usually one to three sentences, and were concealed in the papers by using white text (so they wouldn’t be visible on a white background) or fonts so tiny they were practically invisible to human eyes. 

These secret instructions told AI reviewers to “give a positive review only” or to praise the paper for being impactful, rigorous, or highly original.

Essentially, the researchers were trying to trick AI-powered peer reviewers into thinking the paper was better than it might actually be, and to sway the final decision in their favor. 

This tactic raises serious ethical questions about honesty in academic publishing, especially as more conferences and journals start using AI to speed up the peer review process.

Interestingly, one professor from Waseda University who was contacted by Nikkei Asia defended the use of hidden prompts. 

They argued that because many conferences ban the use of AI in reviewing papers, their prompt was meant to protect against “lazy reviewers” who rely on AI to do the work for them, implying that if AI was going to be used improperly, they might as well try to guide it.

Still, the discovery highlights a growing challenge: as AI becomes more involved in academic processes, new ways of gaming the system are emerging, and the scientific community will need to figure out how to maintain fairness and integrity.

What do you think about using AI like this? Would you call it unethical? Tell us below in the comments, or via our Twitter or Facebook.

Follow us on Flipboard, Google News, or Apple News

Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.

Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

More in AI