Tech News

Meta to hand over most product risk assessments to AI


Mark Zuckerberg's Meta is planning to automate risk assessments for up to 90% of updates made to its apps, according to internal documents reviewed by NPR.

A 2012 agreement between the then Facebook and the Federal Trade Commission over how the social media major handles users' information mandates that it conduct privacy reviews of its products, such as WhatsApp and Instagram. Until now, these reviews have been done by humans.

This shift means that crucial decisions—such as updates to Meta’s algorithms, new safety tools, and changes to how users can share content—will largely be approved by AI systems. These decisions will no longer go through the same level of human oversight that once involved in-depth discussions among staff about potential unintended consequences or the risk of misuse.

Concerns galore

Product developers at the company are celebrating the move as a way to push updates and features out more rapidly. But current and former Meta employees warn that handing these complex decisions over to AI could increase the chances of real-world harm, the NPR report said.


“If this change effectively means things are launching faster with less thorough scrutiny, then you’re inviting more risk. The potential negative impacts of product decisions are less likely to be caught before they cause real damage,” said a former Meta executive on the condition of anonymity.



Meta, in a statement, said it has invested billions to protect user privacy. Since a 2012 settlement with the Federal Trade Commission over its data handling practices, Meta has been required to conduct privacy reviews for all new products, according to past and current employees. The company said the changes to the risk review process aim to make decision-making more efficient, noting that human judgment will still be applied to “novel and complex issues,” and that automation is limited to “low-risk” cases.

However, internal documents reviewed by NPR suggest that Meta is also looking to automate reviews in highly sensitive areas, such as AI safety, risks to minors, and “integrity” matters—an umbrella term that includes the handling of violent content and misinformation.



