San Francisco, Sep 6 (IANS) Facebook has partnered with Microsoft, the Massachusetts Institute of Technology (MIT), and other institutions to fight 'deepfakes', committing more than $10 million towards creating open-source tools that can better detect whether a video has been doctored.
"Deepfake" techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.
"That's why Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, University of California-Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC)," Mike Schroepfer, Chief Technology Officer, said on Thursday.
The "Deepfake Detection Challenge" will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others.
"No Facebook user data will be used in this data set. We are also funding research collaborations and prizes for the challenge to help encourage more participation. In total, we are dedicating more than $10 million to fund this industry-wide effort," Schroepfer said in a statement.
The full data set release and the DFDC launch will happen at the Conference on Neural Information Processing Systems (NeurIPS) this December.
"In order to move from the information age to the knowledge age, we must do better in distinguishing the real from the fake, reward trusted content over untrusted content, and educate the next generation to be better digital citizens," said Professor Hany Farid from UC Berkeley.
"The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality," added Antonio Torralba, Director of the MIT Quest for Intelligence.
"Deepfakes" are video forgeries that make people appear to be saying things they never did, like the popular forged videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that went viral recently.
"Given the recent developments in being able to generate manipulated information (text, images, videos, and audio) at scale, we need the full involvement of the research community in an open environment to develop methods and systems that can detect and mitigate the ill effects of manipulated multimedia," noted Professor Rama Chellappa from University of Maryland.
--IANS