George Hoyem, managing partner of In-Q-Tel, the US intelligence community's strategic investment arm, moderated a panel featuring John Kelly, chief executive of social network analysis technology provider Graphika, and Hany Farid, professor of electrical engineering and computer science at the University of California, Berkeley.
The malicious use of artificial intelligence (AI) and machine learning technologies for deception was becoming rampant on the internet, especially on social media platforms, Farid said.
Kelly and Farid agreed that internet users could no longer tell a real video from a synthetically generated fake persona – and that fake media was now easy to create and share online.
Deepfake technology mainly affected high-profile individuals in the past, but it was increasingly being deployed to spread propaganda against governments and businesses. The number of AI-generated fake videos almost doubled in 2019.
The panellists agreed that although efforts were being made to develop “truthtech”, or fake-media detection tools, such tools might lag behind cross-platform disinformation campaigns on social media websites such as Facebook, Twitter and Pinterest.
Kelly concluded: “Since it is a hard problem, it is perfect for AI. This narrative extraction driven by AI is really interesting – it is a more sophisticated topic modelling – but so far it still requires human training.
“We have to come up with a way to lower the amount of human involvement and human supervision.”