This week Charlie looks at deepfakes, why we need to be aware of this risk and how we might exercise against it.
You are the business continuity manager and responsible for crisis management in a large, very male-dominated organisation. The CEO has decided to resign after 20 years’ service and there are two candidates for the role: the Operations Director, John, and the Human Resources Director, Helen. The board deliberates and decides to appoint Helen. She will be the first female CEO in the organisation’s long history. Most of the organisation agree with the selection, as they recognise Helen as a very bright, thoughtful and competent leader. However, there is a group of staff, mainly male, who feel John didn’t get the job purely because he was male and that the board made the decision on political correctness rather than merit.
For the first month, the job goes well for Helen and she even manages to win over some of her doubters, but she comes in to work one Monday and feels that the atmosphere has changed slightly. She goes to visit one of the organisation’s depots and, as she arrives, she feels that the men are slightly leering at her. Something is not quite right, but she just can’t work out what it is. She returns to headquarters and speaks to her closest confidant, her manager. When she tells him about her visit, he looks shifty and embarrassed. She presses the case and, reluctantly, he pulls out his mobile phone. As she watches the video she goes white: she is watching herself taking part in a full-on porn film. She knows it is not her, but it looks pretty convincing. You, as the crisis manager, are called into the boardroom and asked for your advice on what to do.
Welcome to the world of deepfakes.
We have talked before in this bulletin about how a picture paints a thousand words, and we discussed the iconic picture from the refugee crisis of little Aylan Kurdi lying dead in the sand in Turkey. He was dressed in similar clothes to those our children wear, which brought home the human cost of the refugee crisis to many in Europe. Many conflicts have iconic photographs which have come to symbolise the human cost. Politicians, governments and journalists understand the power of photographs and know that a single photograph can encapsulate a moment much more powerfully than thousands of pages of text. For as long as there have been photographs, people have tried to manipulate them, long before Photoshop was created.
Figure 1, from the American Civil War, shows General Sherman posing with his generals; General Blair was added to the photograph from another sitting.
Stalin was particularly famous for ‘airbrushing’ out political rivals and colleagues who had fallen from grace. Figure 2 shows one of the doctored photos, in which a commissar was removed after falling out of favour with Stalin.
Modern conflicts are not immune to faked photos. Figure 3 shows a photo of a child in Syria sleeping between the graves of his dead parents, which was widely shared on social media. In fact, the picture was taken in Saudi Arabia and the graves were not those of his parents.
We used to think that videos could not be easily faked, but in the last couple of years, with new software, advances in AI and cheap computing power, deepfakes have become much easier to develop and far more convincing. Deepfake videos started in the porn industry, with people superimposing the face of a famous actress onto the body of a porn star, but more recently this has extended to doctoring videos of politicians and celebrities to make them say something completely different. With actors doing the voiceover and computing power to edit the subject’s speech and facial expressions and make the mouthed words convincing, you can have a video of pretty much anybody saying anything. I found an article on deepfakes which had examples, including a video of Corbyn endorsing Boris and vice versa. They look reasonably convincing. We now have the power to make it look like anybody is saying anything, if we put enough time and effort into it. The cleverest deepfakes play on existing stereotypes or narratives and can be produced to reinforce a cause or to discredit someone.
This brings us back to the story at the beginning of the bulletin. I think it is a difficult situation to deal with. You can ask the various social media companies to remove the video, but if it has circulated around a number of different platforms, it is difficult to eradicate. Does Helen go public to acknowledge and condemn the video and its creator, or does that actually draw attention to it and create further viewers? I am not sure of the answer to this one and would be interested in any thoughts from bulletin readers.
I was inspired to write this bulletin by another article in The Economist on deepfakes. Their article shows how women, especially politicians and journalists, are being targeted with deepfakes, rumours and disinformation by rivals, trolls and governments seeking to undermine or discredit their authority.
As business continuity people, we need to be aware of this risk and perhaps use it as an exercise scenario to explore how our organisations would respond.