During the morning commute on July 7, 2005, three suicide bombers blew themselves up on London underground trains. About an hour later, a fourth bomb exploded on a London bus. Together the attacks killed 52 people and injured many more, in what has come to be known as the worst terrorist atrocity on United Kingdom soil.
It quickly emerged that the four young men who blew themselves up had not only sworn allegiance to al Qaeda (something already suspected because of the method of attack) but were also “homegrown terrorists”: British citizens from industrial northern towns, a far cry from the image of the 9/11 terrorists as foreigners attacking a sovereign country.
What could have led “normal” British kids to the point where they attacked their own country? asked the pundits. And, crucially, what could be done to prevent more such attacks? These questions led to much soul-searching in the terrorists’ own communities and within the rank and file of British law enforcement and intelligence departments.
The policy results looked very different to traditional counterterrorism. Rather than fighting a military command-and-control hierarchy based abroad, the British security agencies found themselves countering a far more diffuse, digitally enabled model of radicalization. Ideology, training techniques and the proclivity for violence spread quickly on the internet, catching on within communities and networks around the country and radicalizing individuals and small groups who would otherwise have had no real connection to the control structures of the broader movement. This was the birth of the terrorist as lone wolf.
This led to the development of a new approach to counterterrorism, the now-notorious Prevent strategy. Recognizing that something more than a traditional security approach was needed, the government’s new plan attempted to challenge the extremist ideas that lead to violence and to tackle the underlying vulnerabilities that make young people prone to recruitment by extremist organizations in the first place. This model of prevention has precursors in many areas of health, but making it the basis of counterterrorism was new: treating radicalization much like a virus, and targeting and surveilling not only those actively engaged in extremist activities (or suspected of being so) but whole communities seen as susceptible, entering a variety of their spaces, from mosques to youth clubs. Target audiences were defined as “at risk,” much as an advertising campaign segments a market, and received messages based on their profiles and perceived vulnerability to joining extremist organizations.
But there are serious and well-documented flaws in the Prevent approach, starting with its basic assumption that those who will commit a crime behave just like those who already have, down to their shopping and music habits. This assumption is lifted from consumer product marketing, which is deeply problematic because committing acts of terrorism surely does not resemble the average shopper’s dilemma of what to buy and from whom. Yet Prevent has been exported around the world. It has also been adopted wholesale by law enforcement in the U.K. and elsewhere, incorporating new digital tools to monitor and target populations in ever more sophisticated ways. If you live in the U.K. and buy a candle with a credit card, then return home and turn on your Alexa, you will hear an ad from the Home Office telling you about fire safety in the home. Harmless stuff, as many consumers in the U.K. will tell you, and perhaps even beneficial as a way to warn populations about safety. After all, messages about washing your hands and staying at home were important in limiting the spread of COVID-19.
But our research shows that this is the tip of the iceberg. Targeted ads, delivered straight to individuals based on their digital profiles, from shopping habits to online reading, are being used in all areas of government messaging. They are invisible to all but those targeted, and their results are not yet understood, despite anecdotal evidence of negative effects. We desperately need more information to understand what governments are really achieving with these approaches, yet governments remain uncooperative.
One of the central pillars of Prevent from the start was strategic communications, or stratcomms. The Home Office observed that narratives and ideologies encouraging violence were spreading through the internet with notable success, including, seven years on, the Islamic State group’s own digital communications persuading people to join its cause. So it poured money into programs that aimed to counter radicalization in the domain of culture by enlisting psychologists, marketing experts and communications professionals. These experts crafted narratives and plots that made their way into comic book series, radio shows and pop-up YouTube ads, in an attempt to highlight the sometimes ridiculous hypocrisy of Islamist groups. As one insider put it to us: “We were throwing things at the wall to see what sticks.”
Stratcomms was an ideal vehicle for civil servants desperately searching for something to combat the appeal of extremism across the globe: Outputs, such as online videos, and budgets were clearly defined; there was something to show for the money at the end; and there was an assumption that this was at the very worst ineffective, since a YouTube video wasn’t going to do any harm. (That assumption proved incorrect: Extremist messaging could be amplified along with the messages countering it, a finding suppressed by governments and campaign designers alike.) This soon blossomed into a full-scale industry, complete with mavericks selling snake oil to governments and other donors. Marketers created whole teams to work exclusively with companies and partnered with civil society organizations to provide “local context” (not bad in itself, but hard to achieve given the short deadlines); consultants on high daily rates were contracted to provide “expertise.”
Conferences and workshops were held to share best practice, often a nonstop advertisement of marketing products and services. At the height of the ideological battle against the Islamic State (circa 2016), there were such events every day of the year somewhere in the world. In one session that encouraged participants to brainstorm humorous ways to combat extremism, an Indian comedian shared a real-life example. He showed a video of an “advert” for brainwashing shampoo that leads women to blindly follow influencers, buy anything targeted at them to improve their appearance, or marry the first man who proposes to them and follow him to Syria to join the Islamic State. Other sessions included training on “formative research” — how to gather information (through polling, surveys, focus groups and so on) about your target audience, the methods drawn directly from market research approaches.
But one question remained stubbornly unanswered during this merry-go-round of international conferences: Does it work? Where is the evidence? researchers began to ask. Where are the evaluations that are supposedly built into funding? Even more uneasily, people began to whisper about the failures: the backlashes against clumsy efforts that felt Islamophobic, or the research projects that felt like surveillance and suspicion. When such concerns were raised, they were acknowledged privately but batted away easily in public: Evaluations must be kept private, one argument went, to protect the message itself, which might not be publicly known to be a stratcomm.
Throwing things at the wall to see what sticks might be a valid approach if you’re convinced there’s no harm done by the things that slide to the floor, but if we don’t know which — if any — did stick, there’s no way of improving programming in the future. Yet evaluations are scarce, and the available ones show few signs of robust research. Further, if you don’t acknowledge that harm can be done and adapt accordingly, you are acting unethically. Still, these attempts continue. Not only that, but our research has shown that this approach is now applied to preventing a huge range of other types of crime, and the techniques have evolved substantially to incorporate new digital tools for monitoring, profiling and delivering targeted content.
The most well-known example of a law enforcement “influence campaign” is also perhaps one of its most shocking failures. In August 2019, the news broke that boxes of fried chicken in a swath of restaurants in London would carry a new form of advertising. As teenagers relaxed with their friends in their local hangouts, their food packaging was plastered with messaging. “Sean’s True Story,” printed on the inside lid of black boxes branded #knifefree, describes the life of a young man tempted by knives but saved by boxing.
The campaign quickly provoked national outrage at what appeared to be sloppy and racist targeting aimed at young Black people in London. However, while reporting rightly focused on the shocking stereotypes and assumptions behind the placement of these advertisements, the coverage missed that they were in fact part of a much wider campaign, one that reached its core demographic through a range of other mechanisms, including online ads and social media. The initial picture, of a single, clumsily targeted ad built on racist assumptions, becomes more insidious when seen as part of a bigger campaign: law enforcement using the techniques of marketing agencies to cast a wide cultural net of influence over particular communities through a range of offline and online channels.
But there’s more: the risk of blowback, of increasing the very problem you are trying to combat. Imagine you and your friends begin to see ads warning you of the risks of carrying knives, popping up everywhere you go, whether browsing your phone or on the packaging of the food you eat. Without you realizing, it suddenly feels like knife crime is everywhere, so you start to carry a knife yourself for protection. This problem extends to a wide range of blanket communications campaigns. Maybe you’re scrolling through Facebook and a series of ads catches your eye: Each is designed to debunk a far-right message. You’d never heard these messages before, but something about them resonates — you Google a few terms you don’t understand, and before you know it, you’re becoming exposed to more and more hateful content. Or perhaps you start to get police messages that feel as though they’re targeted at you on the basis of your ethnicity and religion. You get angry: Who is watching you? How can they do this?
In law enforcement, these strategic communications approaches are part of a wider “preventative turn” of (often welcome) attempts to find alternatives to locking people up, dealing with crime “upstream” rather than after the fact. This can in part be traced back to the success of the Glasgow Violence Reduction Unit (VRU), set up to combat the high levels of violence in the Scottish city — Glasgow was at one point known as the “murder capital of Europe.” Violence was reconceived not as a crime to be punished but as a public health problem to be treated. This led to preventative, community-focused interventions, many of which incorporate education, provision of social welfare and one-to-one mentoring.
Many other police services have followed this lead, inspired by the impressive drops in violent crime seen in the wake of the new approach. (If only evaluations from Prevent were available, we could begin to compare and start to understand what really “prevents” crime: There is anecdotal evidence that one-to-one deradicalization programs work, just as the more engaged aspects of VRU work do, but until there is transparency, those insights remain unavailable to researchers.) There are clear overlaps between the VRU and the Prevent strategy, but there is more to it than a joint commitment to prevention: Many of the programs that have stemmed from the VRU approach have also borrowed wholesale from Prevent’s stratcomms, despite the lack of evidence. Prevent’s own stratcomms team has developed messaging to counter online fraud, and its methods have been employed in countering knife crime, online grooming and gendered violence.
At the same time, the techniques themselves have been evolving. Modern strategic communications are a far cry from the awareness campaigns of the past, where a billboard or TV blurb might tell you to wear a seatbelt or stay away from drugs. These new campaigns are digital, targeted and much more intricately folded into the spaces in which we live our daily lives. The influence infrastructures created by private companies to sell products and profile the public give governments cheap tools to directly target messaging — based on intimate corporate surveillance of our online lives. Our purchases, what we read and watch, and who we speak to online are now directly linked to the messaging we receive from governments.
Targeted advertising became infamous with the Cambridge Analytica scandal, with widespread reporting of the effects of microtargeting on elections around the world. We now know that corporate techniques using big data and social media infrastructure have been employed for the purpose of political interference, from Trinidad’s domestic politics to the Brexit movement, and so effectively that governments themselves have adopted the techniques. Targeted digital communications campaigns, which aim not to raise “awareness” but to directly shape and change the public’s behavior, now span gun crime, cybercrime, online grooming and domestic violence as well as health and safety issues (buying a candle), vaccination and the environment.
What this means in practice is the increasing incorporation of marketing expertise — and marketing consultancies themselves — into frontline law enforcement interventions. While the police have their own datasets of recorded crime and intelligence, the marketing consultancies bring massive corporate consumer datasets and digital profiles that allow them to segment the public into audiences. Treating those “at risk” of committing or being subject to crime the same as people who might be interested in buying a new pair of sneakers involves market research layered on top of assumptions about your target audience, both of which are ripe for misunderstandings and offensive conclusions. Many people are being treated like potential criminals with no evidence other than their purchasing patterns.
As with Prevent, the evidence base for these interventions is still extremely underdeveloped — both for what makes a campaign effective (if at all) and for the forms of harm that can result. Some of these communications campaigns are fairly benign and well considered.
For example, a Police Scotland campaign targeting male sexual entitlement (in a welcome change from preventive messaging aimed at women) told men “don’t be that guy” across social media. Videos described seemingly innocuous behavior: “Ever stared at a woman on a bus?” one man says. “Or said to yer mate, ‘I’d do that’?” The messages were seen across the country and even retweeted by Scotland’s first minister, Nicola Sturgeon. All of them end with this line: “Sexual violence begins long before you think it does.” Another example was a well-targeted campaign from the U.K.’s National Crime Agency reminding young gamers that cybercrime is illegal, which researchers at Cambridge found actually succeeded in reducing low-level cybercrime in the U.K.
However, the wider proliferation of these methods — often led by marketing agencies, some of which have little evidence and no real oversight — can lead to serious harm. For example, the targeting available to commercial marketers draws increasingly on reams of digital data about every aspect of our lives online, which are used to generate intimate “profiles” that can be used to advertise to us but pose serious privacy concerns. And the intimate, in-the-moment delivery of these messages makes accountability very hard. Whereas the public can see a “Go Home” van driving through London telling immigrants they’re not welcome, or a racist chicken box ad, these digital ads are (in theory at least) seen only by the target audience. In your living room, they attract far less scrutiny. The use of targeted digital nudges by government and law enforcement has proliferated far faster than the attendant democratic debates about accountability, privacy and harm.
These discussions need to happen now.
We are not passive consumers of media. While marketing can and does help shape our lives and perceptions, we interpret these messages actively, embedded as we are in a complex mesh of social and personal contexts that lead us to hold certain biases and assumptions and to view the world through different lenses. The issue with stratcomms is not the shadowy creep of insidious “brainwashing” propaganda, a souped-up digital tool for police and spies to control the public. Rather, it is the proliferation of incompetent, poorly evaluated campaigns based on little evidence, whose “theory of change” rests on biased stereotypes and marketing schtick. The growth of digital ad targeting services and the massive consumer market means that these interventions are extremely cheap: only a few thousand dollars for a campaign. This puts them well within the reach of cash-strapped local councils and police services, for whom they present an “easy” way to show they are investing in prevention. But with no real oversight, little established evidence-based best practice for mitigating (or detecting) blowback, and ever more intimate government data available about communities at the local level, the mistakes of the Prevent program are poised to be repeated across a range of social issues.
We are not arguing against governments and law enforcement communicating with the public: The messaging about COVID was vital in limiting its spread. Nor do we think that this communication should not be targeted: It doesn’t make much sense for an awareness campaign about local issues in one city to be delivered to people in another. However, modern, active communications tactics — particularly those enhanced with digital targeting — cannot be seen as simple PR campaigns. They are frontline preventative interventions and require adequate scrutiny, accountability and evidence. Just as with other interventions, they can be carried out as top-down, command-and-control exercises in disciplinary power, or they can be truly embedded and created with communities, who can be asked what messages they want to receive and how.
Messages on their own will never be a silver bullet for social issues. A wider problem is the authorities’ perception that the culture of communities is at the heart of the problems they face and that cultures can be edited and steered through targeted messaging. When communications stop being part of a conversation and instead become the front line in a culture war, with the state attempting to shape and craft cultures perceived to be risky, this can exacerbate the real issues, which are often rooted in alienation, stigma and marginalization. Without attention to the structures of power and oppression in our societies, the more entrenched issues of misogyny, racism, poverty, exclusion and violence — all of which are underlying drivers of crime — are unlikely to change.