The Times commentary on DeepFakes


Source: World Economic Forum

Mark Skilton, Professor of Practice in Information Systems & Management, said: “OpenAI chose to release its text-generation model that can produce fake news, having previously said it would not because it was ‘dangerous’, and has since said it has seen little actual misuse of the technology. The logic behind Future Advocacy’s release of deepfake videos of UK politicians is similar.

“The reasoning is to raise awareness of a new kind of cyber threat, one that is insidious and hides in plain sight, yet which humans often fail to perceive as fake. Hackers can certainly copy these videos and use them, but the AI technology needed to make them is already widely available. This really is a call to cyber defenders and to the coding community to build defences against these fake videos.

“Experts such as Apple’s Ian Goodfellow warned in 2018 about deepfakes created by Generative Adversarial Networks (GANs), a deep learning technique in which one network learns to synthesise images realistic enough to fool another, but we now have realistic voice imitation and moving images, like the Boris Johnson and Jeremy Corbyn videos, to contend with.

“Some have suggested embedding digital signatures in political images and videos to authenticate their origin, but this is impossible to regulate on an open internet, as is balancing the censorship of free speech against the detection of fake news. Facebook and others say they are checking and deleting accounts and content, but at present this is a losing battle.

“This will get worse: reports of fake revenge-porn videos, to name one bad example, are on the rise thanks to rapid advances in AI. New authentication procedures will be needed if AI defences cannot verify the origins of messages and videos. This reinforces the need for a complete rethink, with regulation of social media sites and stiffer penalties for the misuse of public information, while preserving our democratic freedoms.

“We trust secure banking and passports to be authentic; we may need some radically new way to trace the origins of information back to trusted sources, rather than leaving it to unelected lobbyists, social media leaders or politicians to tell us their version of what is fake and what is true.”
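
As an illustration of the digital-signature approach Skilton mentions, the sketch below (a minimal example, assuming Python with the pyca/cryptography package; the key handling and video file name are hypothetical) shows how a trusted publisher could sign a video’s hash and how a platform could later check that a file still matches the signed original:

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: hash the video file and sign the digest with a private key.
    private_key = Ed25519PrivateKey.generate()   # in practice, a key held by the trusted source
    public_key = private_key.public_key()        # distributed so anyone can verify

    with open("pm_statement.mp4", "rb") as f:    # hypothetical video file
        digest = hashlib.sha256(f.read()).digest()
    signature = private_key.sign(digest)

    # Platform side: recompute the hash and check it against the published signature.
    with open("pm_statement.mp4", "rb") as f:
        received_digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, received_digest)
        print("File matches the version signed by the trusted source.")
    except InvalidSignature:
        print("File has been altered or did not come from the claimed source.")

Note that verification of this kind only shows the file is unchanged since the trusted source signed it; it cannot, by itself, establish that the content is true, which is part of the regulatory problem the commentary describes.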

© 2020 Artificial Intelligence Innovation Network, Warwick Business School, UK