Deepfake technology is software that allows people to convincingly clone someone's face, voice and other characteristics to create digital forgeries. With artificial intelligence, audio, image and video data can now easily be manipulated to make someone appear to say things they never did.
Last week, the African Union Commission fell victim to these advancing AI-facilitated cyberattacks after fraudsters deployed AI tools to impersonate its head, Moussa Faki.
Using fabricated video, they placed calls to various European leaders and capitals. The impostors' motive remains unclear, but the episode may be the first diplomatic incident engineered with the new technology.
In the past few years, deepfakes have been used to create a synthetic Elon Musk that shilled a cryptocurrency scam, to digitally "undress" more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives' voices on the phone.
Further, they have been weaponised for nonconsensual pornography, election violence and disinformation campaigns. Of particular concern is their use against world leaders during election cycles or times of armed conflict.
It is difficult to distinguish fake AI-generated content from factual information or to identify the person behind a deepfake video, and, worse, such videos usually spread like wildfire.
As the software proliferates, becoming more sophisticated and widely accessible, few laws exist to regulate its spread. Deepfakes are not illegal in many countries; even the Kenya Data Protection Act lacks clauses addressing them, although producers and distributors of fake news can still be caught on the wrong side of the law.
China is setting a precedent in the debate on digital forgeries and influencing how other governments deal with the machine learning and artificial intelligence that power "deep synthesis technologies".
In January, it adopted first-of-their-kind expansive rules requiring that manipulated material be made with the subject's consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to "refute rumours".
The enacted legislation requires user consent to produce digitally altered images and prohibits the dissemination of fake news. The provisions pursue two goals: tighter online censorship and getting ahead of regulation around new technologies.
But China faces the same stumbling blocks that have hindered similar efforts to govern deepfakes elsewhere. As we build towards an AI-powered future and the technology scales, new regulatory challenges and considerations arise.
The malicious users of the technology are the hardest to catch: they operate anonymously, adapt quickly and share their fabricated videos through borderless online platforms.
China's move has also highlighted another reason few countries have adopted rules: many people worry that governments could use such laws to curtail freedom of speech and expression. Past attempts in the United States, Canada, South Korea and the European Union to establish task forces to examine deepfake technology and set guardrails for it have stalled.
Deepfakes are a double-edged sword: they hold great promise in industries such as education, medicine, entertainment and journalism, but harmful applications are also plentiful. Lawmakers worry that the technology could be misused to erode trust in surveillance footage, body-camera recordings and other evidence, harming individual privacy and integrity.
This could push the world towards an "information apocalypse" or "reality apathy", a state in which citizens no longer have a shared reality and face societal confusion about which information sources are factual and reliable.
The recent incident involving the African Union Commission highlights the increasing sophistication of AI-facilitated cybercrime and the need for organisations, tech companies, consumers and other stakeholders to strengthen their security measures.
It is also a wake-up call for global leaders to exercise caution when communicating with unfamiliar individuals and to verify the authenticity of communications through official channels.
Regulators need further research into best practices for recourse mechanisms, to evaluate the gaps left by existing laws and to identify other opportunities to deter human-rights violations and to protect the rights to privacy, expression and speech, personal data protection and copyright.
Locally, what proactive mechanisms are we implementing to adapt and design our AI and data regulations to keep content regulation and censorship efforts one step ahead of emerging technologies?
Master’s Student at the University of Edinburgh