Meta plans to use AI for its product risk assessments

Meta may soon use AI to handle most of the privacy and safety checks for updates to its apps.
According to internal documents reviewed by NPR, this AI system could take over up to 90% of these reviews, which were previously done by people.
This change ties back to a 2012 agreement between Meta and the US FTC. That agreement requires Meta to carefully review any updates to its products to make sure they don’t harm users or violate privacy rules.
Until now, human teams have been responsible for analyzing each update to catch any potential issues.
With the new plan, Meta’s product teams would first fill out a questionnaire about the update or new feature they’re working on. (Via: TechCrunch)
The AI system would then quickly return a decision, flagging any risks it finds and listing the conditions that must be met before the update can go live.
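To make that flow concrete, here is a minimal, purely hypothetical sketch of how a questionnaire-driven check like this could work: a product team submits answers, an automated reviewer flags risks and attaches conditions, and anything novel or higher-risk gets routed to human experts. This is not Meta's actual system; the field names, rules, and thresholds below are invented for illustration only.

```python
# Hypothetical sketch of a questionnaire-driven review flow -- not Meta's
# actual system. Field names, risk rules, and escalation logic are invented.
from dataclasses import dataclass, field


@dataclass
class ReviewDecision:
    approved: bool                          # can the update ship as-is?
    flagged_risks: list[str] = field(default_factory=list)
    conditions: list[str] = field(default_factory=list)
    needs_human_review: bool = False        # escalate complex/novel cases


def automated_review(questionnaire: dict) -> ReviewDecision:
    """Return an instant decision for a product-update questionnaire."""
    decision = ReviewDecision(approved=True)

    # Routine, rule-like checks an automated reviewer might run.
    if questionnaire.get("collects_new_user_data"):
        decision.flagged_risks.append("new data collection")
        decision.conditions.append("update the privacy disclosure")
    if questionnaire.get("shares_data_with_third_parties"):
        decision.flagged_risks.append("third-party data sharing")
        decision.conditions.append("complete a data-sharing agreement review")

    # Anything novel or carrying multiple risks is routed to human experts,
    # mirroring the low-risk vs. complex split described in the article.
    if questionnaire.get("feature_is_novel") or len(decision.flagged_risks) > 1:
        decision.needs_human_review = True
        decision.approved = False

    return decision


if __name__ == "__main__":
    # Example: a feature that collects new user data but is otherwise routine.
    print(automated_review({"collects_new_user_data": True}))
```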
The goal of this AI-based approach is to make the review process faster and more efficient. However, not everyone is confident it’s the right move.
A former Meta executive told NPR that using AI in this way could lead to more problems slipping through, as the AI might miss negative impacts that a human reviewer would catch.
In other words, the speed may come at the cost of safety.
Meta responded by saying it has spent over $8 billion on its privacy program and is serious about meeting its legal and ethical responsibilities.
The company said that as privacy risks continue to evolve, it is adjusting its approach to keep up.
The new system, they explained, will use AI for routine, low-risk decisions, while human experts will still handle more complex or unusual situations.
Meta is betting on AI to speed up privacy checks, but critics worry that could come with serious trade-offs.
What do you think about Meta using AI this way? Would you be okay with AI being used for low-risk decisions? Tell us your thoughts below in the comments, or reach us via our Twitter or Facebook.
