With the recent Facebook–Cambridge Analytica data scandal prompting new restrictions on social media platforms, I consider whether the use of social media during incidents will change.
Over the last few weeks we have seen the Facebook and Cambridge Analytica data scandal unfolding. The incident led to Facebook CEO Mark Zuckerberg testifying before members of Congress and answering questions on privacy, data mining and regulation. One consequence of this incident is that social media platforms are going to be more closely scrutinised and regulated. As we know, social media can be a key tool in incident response, and this got me thinking about whether further regulation will be a game changer in the use of social media during incidents.
One of the changes resulting from the Facebook incident will see social media platforms being held more accountable for the content they host. The platforms’ excuse for not taking down material has been that it is the users’ content, that it is very difficult to police the sheer volume of posts, and that there is no clear-cut definition of what counts as offensive material. As the internet was born out of a desire to promote freedom of expression without ‘government censorship’, it is in the DNA of companies like Facebook not to interfere with their users’ content.
Over the last few years we have seen a push by governments, gathering pace, to get social media platforms to take down hate material and propaganda for terrorist organisations like ISIS. Governments have become more aware that posts on social media evoke a reaction and can influence the behaviour of individuals; the result has been jihadis radicalised online who then go on to carry out terrorist attacks and kill people.
Russian attempts to interfere in US and European elections have further strengthened governments’ belief that they need to take action to regulate and control social media platforms. GDPR coming into force this month, along with the Cambridge Analytica revelations on how personal data was obtained and used, have added to this perfect storm around social media.
Holding social media platforms more accountable for their content should help organisations seeking redress for untruths and malicious content on the internet, allowing such content to be taken down and prevented from being shared further. However, how long will it take to get that redress? How can you prove that the content is genuinely untrue or defamatory? And by the time it is removed, has the damage already been done?
In other respects, social media will not change. This week we saw the sad death of terminally ill toddler Alfie Evans, after his parents lost a long court battle to allow him to be taken to Italy for further medical care. His parents ran a powerful and emotional campaign on social media, which resulted in hundreds of supporters protesting, with some trying to storm Alder Hey hospital where he was being treated. I am currently reading ‘Crisis Ready’ by Melissa Agnes, and one of the points she makes is that, when it comes to social media, emotion always triumphs over reason. In this case, “Alfie’s Army” didn’t want to listen to the reasoning of the courts or the prognosis of the doctors; they listened to the emotions of the parents.
If an organisation is involved in a crisis that is highly emotional and playing out on social media, reasoned arguments won’t work. You have to communicate with those involved in the incident at an emotional level, not in cold corporate speak, which will fall on deaf ears or further inflame the situation. Some CEOs and senior managers have high emotional intelligence, and this type of emotional communication comes naturally to them, but others just don’t get it. So next time you are carrying out an exercise, use an emotive scenario and see how your crisis teams cope, communicate and respond. As with many incidents, it is not the initial incident which causes the damage, but the botched response which has the real impact.