
It’s the fake news age – and you are its ‘battlefield.’

By Carl Miller

AT FIRST glance, you might think you were in the office of a technology start-up. People peer at computers and talk about influencers, reach and hashtags. Like their peers in Silicon Valley, these men and women know how the internet can be used to change hearts and minds.

But this is a world away from the primary-colour campuses of the tech giants. These offices lie behind barbed wire, and everyone is wearing the green patterned camouflage of the British Army.

Cyberwarfare is now. Image: Corey Brickley

The 77th Brigade is the British Army’s unit for what it calls “information manoeuvre” and what everyone else calls information warfare: using print and online media to change the behaviour of hostile parties and prevent them causing problems at home. When I visited, just over two years ago, everything was in motion. Flooring was being laid, units installed. Desks formed neat lines in offices still covered in plastic, tape and sawdust. Even then, there was a sense that they were already too late.

Today, they face new kinds of conflict that are breaking out online, leading to mass deception, protests and even deaths. Our information flow is being invaded. Attention is being hacked. The hostile manipulation of information has even been blamed for rigging elections, swaying the Brexit vote and paving the road for Donald Trump.

Whether real or imagined, the fear of such activity is changing our world. Amid all the intrigue and shadows, you have become the front line. Your opinions, your values, what you hold to be true, even the way you feel, are all under siege. And it isn’t clear what anyone can do to stop it.

A powerful illustration of that vulnerability came on 7 March 2019, when Facebook made an announcement. Among the billions of accounts, groups and pages that inhabit its site and its subsidiary, Instagram, it had identified a network of 137 of them engaged in what it termed “inauthentic” activity targeting the UK. Yet to the 180,000 people who followed all or part of this network, it would have seemed utterly unremarkable. Tedious, even.

On the one hand, nationalists were sharing slogans. “Being a leftist is easy!” one meme said. “If anyone disagrees with you, call them a racist!” But others in the network pushed a different angle. One account called for the leader of the pro-Brexit party UKIP to be charged with hate crimes. Others drew attention to stories that LGBT Christians were being bullied because of their faith. The vitriol and polarisation would be familiar to anyone who has spent time on social media. The one key difference was that none of it was real. Neither the nationalists nor the anti-racism campaigners existed. Both were online masks worn by a single coordinated and hidden group.

This ecosystem of fake identities, false voices and deceptive groups was attempting to provoke broad social change. Its members pumped polarised messages to both ends of the political spectrum not to change anyone’s mind, but to confirm the beliefs their viewers already held. The aim was outrage: to make people angrier and angrier about the injustices they were already convinced were happening. To alter the way that people behaved and thought, they had lured them into a fake society that only existed online.

This was the first time that Facebook had found a network specifically targeting the UK. And while it didn’t say who was behind it, the culprit could have been almost anyone. Military groups, intelligence operatives, party political campaigns, extremist political factions or even just technically savvy individuals have all joined the rush for influence and attention that has broken out in cyberspace, forming a background hum to many of our experiences online.

Same old stories

To David Omand, none of this is new. Omand spent most of his career inside the UK’s Ministry of Defence before serving as director of GCHQ, the country’s technical spy agency based near Cheltenham. “You always had two different levels of battlefield,” he told me. “You have the intelligence battlefield where the adversary’s intelligence agencies would be slugging it out with us. And you have the campaign for influence through propaganda.”

The manipulation of information during warfare is as old as warfare itself. But it really took off during the cold war, when both sides systematically developed tools to influence the public watching at home and abroad. Fake companies, front organisations, leaked letters, bogus journalism, planted conspiracy theories and manufactured protests were all part of the ideological struggle.

For the practitioners of these tactics, the arrival of the internet and social media was a spectacular opportunity. Here was an environment far more open than newspapers and television. Here were global forums for debate and discussion that were very easy to join and post in, and which were curated and shaped by algorithms that could be reverse-engineered, gamed and manipulated. The platforms also became increasingly personalised, serving up the information they thought users wanted and, in doing so, sometimes creating bubbles of hyper-partisanship – small online knots of identity that could each be contacted and exploited.

In the space of a decade, it became far easier, faster and cheaper to mould public opinion with social media, using networks like the one Facebook had found. And it didn’t take the resources of a state, either. Anyone could do it, so long as they had a smartphone.

The fake news economy

When I met one such fake news merchant in a dimly lit bar in Kosovo in November 2018, his phone never stopped chirping. Each noise was a click – the sound of someone stepping into a vast digital web that could also be called coordinated and inauthentic. One where bay leaves can cure cancer and George Washington was really Albanian. The man, who I’ll call Besar, told me his operation was all about pumping out content that, true or false, was so shocking that people couldn’t help but be drawn in. Some stories were patently false, others simply clickbait. But for Besar, that distinction was a waste of time. “It’s all total nonsense,” he told me. “I don’t even read this stuff.”

Click on any of Besar’s stories and you’re taken to the money-making part of the operation. On a series of crude-looking websites, he turns eyeballs into money in the same way as any news website: advertising.

A former waiter, Besar was now building and buying Facebook groups with huge audiences dedicated to everything from evangelical Christianity to holiday destinations. He created thousands of fake accounts to rope in even more people. He would use already large groups to grow new ones, and invest thousands of euros in carefully targeted advertising to drive numbers up even further. When we met, I judged he probably had more online readers than some UK broadsheets.

Besar isn’t alone. He showed me a whole network of invitation-only groups on Facebook, with memberships ranging from a few hundred to several thousand. They formed a kind of marketplace where pages with hundreds of thousands of likes were traded for thousands of dollars. Others sold fake likes or fake accounts, or offered advice on how to get around Facebook’s evolving enforcement. I found a “fake news starter pack”, complete with a collection of pages to get an audience and websites to monetise them. It wasn’t just Facebook that was innovating; people like Besar were too.

Around the world, thousands of people are using the same tools to game and manipulate social media platforms on an industrial scale. For $3 you can buy a “HUGE MEGA BOT PACK” on the darknet, allowing you to build your own army of automated accounts across hundreds of social media platforms. Other services can manipulate search engine results, buy Wikipedia edits or rent fake IP addresses to make it look like your accounts come from all over the world. There are even “legend farms” that you can recruit, giving you control of tens of thousands of unique identities, each with its own personality, interests and writing style.

Despite the power these rogue agents claim to possess, the harm they cause is purely incidental to them; their biggest driver is profit. They work in small groups, with limited budgets – they are the agile start-ups of the influence industry.

The giants when it comes to propaganda and influence are the nation states. Their aim isn’t profits, but geopolitics, and they work at a far larger scale.

Yevhen Fedchenko, the director of the Mohyla School of Journalism in Ukraine, was among the first to realise that states were joining the race for influence. That realisation came after what he called Maidan: the public demonstrations in Ukraine in 2013 and 2014 against Russian influence in the country.

In the months that followed, new messages and narratives appeared in Russian media. They were on TV bulletins and in newspaper stories, as they had been during the cold war, but were now joined by mobs on social media. In July 2014, a gruesome story appeared on Russia’s most popular TV station, Channel One. It claimed that Ukrainian officials had nailed a 3-year-old boy to a wooden board in the city of Slovyansk. It wasn’t true, but in story after story, interview after interview, tweet after tweet, a case was being put together using false stories: that the Ukrainian authorities were a Western-backed junta; that Ukraine was a failed state, a fascist state. Ukrainian journalists were hounded and threatened. “All Russian media started to describe Maidan using the same words, and the same kind of perspective,” says Fedchenko. “It was massive.” It was almost like a preordained narrative had been switched on.

The Institute of Strategic Dialogue (ISD) in the UK is one of a number of think tanks that have tried to stay on top of how social media activities are being used to manipulate politics. Chloe Colliver, a researcher at the institute, told me that in the run-up to the May 2019 elections for the European Parliament, the ISD and colleagues at the community organisation Avaaz were finding active networks of influence across a range of platforms in Germany, France, Poland, Spain and the UK that dwarfed what Facebook had found in March. “This is a long-term investment geared towards changing what entire populations are seeing and thinking,” she told me. “The culture of a continent is threatened by something it has no idea it’s supposed to be defending itself from.”

The team estimates that far-right disinformation networks across France, the UK, Germany, Spain, Italy and Poland produced content that was viewed an astonishing 750 million times in three months. In Poland, pro-government accounts posed as pensioners in order to attack striking teachers, all drawing on the same archive of infographics and linking to anti-Semitic youth-oriented sites. A network of 60 pages on Facebook also amplified anti-Semitic and pro-Kremlin content in the country. In Germany, 200,000 fake social media accounts were spreading electoral content supportive of the far-right political party Alternative für Deutschland. In Italy, a network with more than 2.6 million followers spread anti-migration, anti-Semitic and anti-vaccine information. An estimated 9.6 million Spanish voters had seen disinformation on WhatsApp. Five of the top 10 accounts mentioning the UK’s Brexit Party on Twitter were showing “bot-like” activity.

Real-world consequences

The creation of fake realities online can lead to violence. In 2018, false information shared on social media in Nigeria sparked rioting and led to people being hacked to death with machetes. In 2019, rumours of child abductions in France provoked violence against the Roma community. In Myanmar, hundreds of soldiers posed as celebrities and national heroes on social media to flood it with incendiary comments about the Rohingya minority, again leading to violence and conflict.

This type of information warfare is on the rise. In 2017, researchers at the University of Oxford found it happening in 28 countries. In 2018, it was 48. The nature of battle has changed. Information is no longer being used in war. War is being waged within information.

Since the end of the cold war, the militaries of liberal democracies have been bigger, better funded and more powerful than the military of any country that wishes to do them harm. The dangers, however, are no longer physical. Now, coordinated groups can step right into the middle of the politics of any country with an online presence. And this poses a problem that no state can answer alone.

While new treaties, laws and sanctions are long overdue to turn the cheap, easy and risk-free practice of information warfare into something more hazardous and difficult for its perpetrators, the platforms will also need to change. In their search for frictionless online spaces accessible to anyone, the tech giants have created places where it is far easier to create information than to tell whether it is true, and easier to create a fake identity than to expose one. To fix this, platform engineering will need to change, whether by requiring more identity checks, slowing down how information circulates, introducing cooling-off periods or challenging far more of the accounts that behave suspiciously. For the societies that use these platforms, growth can no longer be the priority. Authenticity should be.


In the meantime, states will continue to slug it out in the theatre of information. The 77th Brigade will continue to grow, and every military around the world will build its own equivalent to try to meet the threat. Yet the real challenge isn’t to join the arms race but to avoid it altogether. If information is a theatre of war, what does de-escalation in it look like?

Ultimately, that comes down to you. You may be the target of all this activity, but you are also its off switch. By guarding against outrage, pushing against the desire to believe what is convenient and simply becoming less angry about what you see online, you may lose some battles, but you can help end the war.

Seven rules to keep yourself safe online

1. Actively look for the information you want, don’t let it find you. The information that wants to find you isn’t necessarily the information you want to find.

2. Beware the passive scroll. This is when you are prey to processes that can be gamed and viral content that can be shaped.

3. Guard against outrage. Outrage is easy to hijack, and makes you particularly vulnerable to being manipulated online. What’s more, your outrage can induce outrage in others, making it a particularly potent tool.

4. Slow down online. Pause before sharing. Give time for your rational thought processes to engage with what you are reading.

5. Lean away from the metrics that can be spoofed. Don’t trust something because it is popular, trending or visible.

6. Never rely only on information sourced from social media. This is particularly the case for key pieces of information, such as where polling booths are or whether you can vote.

7. Spend your attention wisely: it is both your most precious and coveted asset.

First published in New Scientist, 16 October 2019. See: https://www.newscientist.com/article/mg24432520-800-in-the-age-of-fake-news-and-manipulation-you-are-the-new-battlefield/
