Meta's methods for identifying deepfakes are "not robust or comprehensive enough" to keep pace with how quickly misinformation spreads during armed conflicts like the Iran war. That's according to the Meta Oversight Board, a semi-independent body that guides the company's content moderation practices, which is now calling on Meta to overhaul how it surfaces and labels AI-generated content across Facebook, Instagram, and Threads.
The call for action stems from an investigation into a fake AI video of alleged damage to buildings in Israel that was shared on Meta's social platforms last year, but the Board says its recommendations are particularly r …
Read the full story at The Verge.
from The Verge https://ift.tt/XAqmg0l