PEACE RESEARCH INSTITUTE OSLO
LECTURE 7

Technology and Warfare

Bruno Oliveira Martins speaks about drones and
the gradual dehumanization of war.
How have drones and other technologies impacted the ways wars are fought, and how can they change the way powerful countries consolidate their power?

We're challenged today not only to think about how science and technology affect armed conflicts, but also about how information, more broadly, impacts how safe we feel.
Bruno Oliveira Martins is back for his second seminar during our summer school at the Peace Research Institute Oslo (PRIO). Last time he spoke about security and securitization; today he wants to talk about technology.

Again, like with his presentation on security, his hope is less to give us new facts about technology than to promote new ways of thinking about both concepts. And, hopefully, to problematize the relationship between the two – problematize here meaning an invitation to doubt certain assumptions that we have about tech, science, power and knowledge.

Technology is developing so fast that it has an increasingly relevant impact on the way that security is thought of and practiced. There are a lot of examples of this, and today we're going to mainly focus on drones as a case study, but he wants us to keep in mind the way that technological developments generally impact security practices. Which demands we keep in mind his last seminar.

So how does technology impact security practices? This is part of a broader critical perspective, and for Martins it's fundamental that we think hard about how information comes to us and how it is used to impact how safe we do or do not feel in the world.

Science and Technology Studies

Science and Technology Studies (STS) is a sub-field at the intersection of science, security and critical theory. It's relatively new, if not especially groundbreaking. It has generated a number of insights, however, ones that we'll be drawing on today in our discussion of drones.

The main takeaway is that knowledge and security are fundamentally connected, which isn't necessarily the first thing someone thinks about when talking about security. Weapons, sure, but knowledge?

But it's knowledge that underlies the latest developments in weaponry, and not only there. Security practices manifest themselves in who can access our smartphones and why, in what information is picked up from our communications systems, and in what justifications are used (and where the evidence comes from) when force is used against citizens of your own or other countries.

It's also important to remember that none of this comes out of nowhere. It emerges from a particular society and social context.

Maybe it's easy to think about science as something neutral, but research and the breakthroughs it produces are often located in particular laboratories, financed by certain bodies, and then potentially used for entirely different purposes as time goes on.

The power of STS lies in trying to open up the black box of science and technology, breaking down the artificial boundaries that separate science, tech and society to open up new possibilities for imagining and implementing alternative sociotechnical arrangements.

That last bit might sound utopian, but it's the vision of researchers like Clark Miller. This is also where the critical theory comes in: reimagine things so that you can resist, or even break down, structures that you see as oppressive.

In other words, they want us to see that it's not just luck that the US and Israel were the first to get high-tech drones. The technology has to be understood within the society it's part of.

But how do science and technology connect to society? Martins breaks it down into three points.

The first is design. This is the idea that the way systems are constructed is a product of societal contexts: they are developed to produce results that match the needs of that society, or that are bound by its restrictions and biases.

So the way that an intelligence-gathering system is designed may reflect certain inclinations of the society that built it. And the way it functions will impact the quality and nature of the information gathered, which will in turn influence the policies or decisions that come out of it.

In one of our readings, a drone strike is used as a case study. It was a deeply flawed attack, and the system built up around it allowed twenty-three innocent people to be killed in Afghanistan. An excerpt from the operators' radio chatter:
Voice 1:
That's weird.
Voice 2:
Can't tell what the fuck they're doing.
Voice 3:
Are they wearing burqas?
Voice 1:
That's what it looks like.
Voice 2:
They were all PIDed as males. No females in the group.
Voice 1:
That guy looks like he's wearing jewelry and stuff like a girl, but he ain't… if he's a girl, he's a big one.
Voice 4:
Screener said there weren't any women earlier.
Voice 1:
What are those? They were in the middle vehicle.
Voice 4:
Women and children…
Voice 2:
It looks uh, one of those in the, uh, bright garb may be carrying a child as well.
Voice 1:
Younger than an adolescent to me.
Voice 3:
Well….
Voice 3:
...no way to tell, man
Voice 1:
No way to tell from here.

The next is innovation: how technology actually makes breakthroughs, and how society then reacts to those innovations.

Take nuclear technology in the late 1930s. There was a lot of discussion in society (or among social elites) about what to do with it. It wasn't just about bombs: people wanted to know how nuclear power could solve a wide variety of problems, including energy. The question STS asks here is what forces shape what we do with a piece of technology once we have it – how can we take advantage of it, and whose interests will it serve?

Today we're facing similar questions when it comes to biotechnology. We still don't know what we can do, or what uses the field can have outside of medicine. Many are asking if the supposed benefits might bring on more problems than expected. Much of this is a balancing act.

The third idea is that of expertise, and this is central to everything Martins has spoken about in his seminars: the ones who have knowledge, the ones who position themselves as experts, are the ones who manage to get their message across or shape the ways technology is used in society.

You can think about this in terms of the way expert knowledge (or at least the impression of 'expertness') shapes policy making and decisions made at the political level. If a politician has a team of leading scientists, and those scientists announce a major new breakthrough, they can push the politician to shape public policy around the technology at hand.

If there are people like activists, teachers, researchers or concerned citizens who are not the proper 'experts' in the eyes of this politician, how much are their criticisms of these incoming policies going to matter in the face of the 'expertise'? How much power might experts have when it comes to our public life?

Martins points out that, in the European Union, you can see a major agenda, put forward by drone manufacturers and experts, to allow drones to fly in European airspace. That's one example. And we're asked today to be aware of the power that knowledge brings (and, on occasion, the knowledge that power brings, making it a cycle that's hard to break into).

Power and Knowledge

According to Clark Miller, a central question for STS is about the "relative power and authority of diverse knowers and the forms of knowledge in government processes...the construction of expertise and experts, as well as [what makes them different] from [the public], is a political exercise entailing the allocation and restraint of power."

Meaning that the cliché 'knowledge is power' can sometimes be very true. And, depending on your values, somewhat frightening.

Since we're talking mostly about security, what kinds of questions can we pose about the relationship between knowledge and the structures of power and safety?

The first factor concerns the way we think about security and securitization (which, in Martins' last lecture, was defined as 'the process by which a regular topic is transformed into a security topic'). Which is another way of saying that we have to keep in mind the scope, boundaries and framework of security discourse, as well as how they change. And who changes them. And in what circumstances. And why.

There are a number of issues that are considered security topics right now that weren't thought of this way thirty years ago. Migration is a classic example: it used to be discussed in terms of sociology and economics, but now it's about security. There's an article called 'The Securitization of Migration in the EU' that goes into this process in detail.

He asks us to keep in mind that security isn't an objective, clearly defined point. Different groups can securitize events in different ways, and this can lead to major conflicts within a society. And when two countries securitize world events (or each other's actions) differently, it can lead to major consequences.

On a cultural level, the Charlie Hebdo attacks of 2015 provoked very different reactions, with some securitizing the issue as us-against-them (Europe vs. Islamism) and others framing it as an Islamophobic West against a repressed Third World. Technology, especially a mastery of algorithms, can impact which narratives win out and thus influence securitization and policy making, even on a military scale.

The second factor to think about here is what Martins calls "knowledge, non-knowledge, secrecy and ignorance."

This is less about securitization and more about the practices for managing knowledge. For example: who has access to knowledge, who defines how we can collect it, and with what technologies? A few obvious examples come to mind: surveillance, whistleblowing, big data.

Here it's common to ask how much someone knows about us, and how that knowledge is produced and consumed. What do security enterprises or military institutions know about citizens? How is that knowledge produced, how might it be used against them, and what consequences might there be?

These questions are particularly relevant in democracies, which try to create a tighter bond between people and their politicians than other kinds of states do. So this is a particularly existential question – transparency and accountability are held up as cornerstones of democracy.

Which then makes everyone in the room think about the case of Edward Snowden. If we didn't know about the systems he revealed, how could we demand increased protections for our rights and privacy? For democracies to function, we need access to information so that mechanisms of accountability can actually fall into place.

Getting ahead of ourselves, this is a big issue when it comes to drones. A few years ago we knew very little about how decisions concerning them were made, and about when, where and who would be struck. We know more now, but only because civil society has put pressure on official bodies and tried to uncover as much as it could.

We are very far from an ideal situation when it comes to responsibility and accountability, but still closer than five or six years ago. Both, as it were, require information in order to function. And things are concealed from us by officials and politicians who say that transparency is, at times, a barrier to national security.

This leads to the question of when the 'security' involved is actually the security of citizens, and when it's the 'security' of reputation or power that's at stake. The Black Banners, a book by a former FBI agent, was heavily censored before publication because of its descriptions of classified information, including torture practices by American institutions abroad. Dozens of pages were redacted (though restored in September 2020) in the name of security, which many have decried as just a way to avoid scandal.

Martins points out that security really is needed, and that there has to be a balance between information and safety. Not everything can be open-book all of the time. But this line of reasoning can be abused to redact problematic information that the public may in fact be entitled to know.

And much of that knowledge has to do with how powerfully technology is changing the ways war is waged. Perhaps the most significant example of this is the drone.

The God-Trick

In 1988, the critical theorist Donna Haraway wrote about her problem with what she called the god-trick. She used the term to refer to how voices in politics, science, academia, philosophy and culture often carried their biases with them even when they took on a stance of being 'objective.' This objectivity, she argued, was an illusion that merely reflected the dominant voices of the time, which she identified as primarily male, white, heterosexual, militaristic and more.

The consequence, she wrote, was that this 'objectivity' (and the strong tradition of claiming it) made these biased perspectives present 'everywhere and nowhere': omnipresent, yet devoid of the human particularities that inform our experience. This was the god-trick: a sense of omnipotence and omnipresence (together with disembodiment) that made it hard to resist the biases that still found their way through.

Critical theory is often concerned with drawing attention to and deconstructing 'invisible' power structures like this (its own critics say that it sometimes, problematically, reduces everything to power structures). The idea of the disembodied god-trick found traction, and Lauren Wilcox used it to describe the sensation of being targeted by drones – or, more generally, by the grand military-surveillance complex as we know it today.

Basically, with drones and advanced surveillance techniques, military and intelligence technologies have gotten so advanced that you can be watched or attacked by a force that seems like it's everywhere and nowhere at the same time. This can make drone attacks seem, from the perspective of the people on the ground, literally disembodied: lacking in human substance, capacity or emotion. An act of God rather than a choice made by very real humans with very real interests.

With drones and other new technologies reinforcing this god-trick, and with the enemy becoming almost entirely disembodied, it becomes hard to ask who makes the choices behind drone attacks, and with what biases or problematic assumptions about the people being targeted.

The articles we were asked to read before the seminar point out that drones make mistakes, or select their targets, precisely because of their operators' interests – even their stereotypes about race, gender and terrorist potential. Which gives god-like powers to people who are all too human. Which brings us back to the 23 people mentioned above who were killed in Uruzgan province while trying to get out of Kabul.

This is a frightening example of what STS advocates fear: the development of a technology that renders the people behind it completely invisible (to those on the receiving end) yet fully capable of taking lives. And because the technology was kept secret for so long, it was hard for democratic and public institutions to learn enough to hold the forces ordering drone strikes accountable.

But how precisely did drone warfare come about?

Drone Wars

Drones have existed for decades and have been deployed since the 1970s, often equipped with cameras in order to gather intelligence. But they were rather rough tools back then; with time they were able to take videos, and then high-resolution footage, and by the early 2000s they were finally big enough to carry missiles.

By now they're well-studied, with dozens of books written since 2010, when the practice truly became cost-effective and widespread.

One of the main advantages to using drones, of course, is that they replace pilots and thus reduce risk.

The first drone strike happened in late 2001, its arrival potentially sped up by the 9/11 attacks in New York City. Between 2002 and 2006, drone strikes didn't happen too often, and were usually carried out by either the US or Israel (though it's difficult to get data from Israel). It wasn't until 2010-2013 that the boom really took off, with hundreds of strikes becoming a regular thing.

And they weren't put to use in the context of war, but technically outside the boundaries of legally recognized battlefields. This is also why studying drones (and the structures behind them) is so crucial.

People don't understand drones as normal weapons but as something else entirely. Because a drone involves no risk (except the money lost if someone downs the machine), it lowers the threshold for engagement and makes it less costly to enter some kind of firefight. So instead of putting boots on the ground, some ask: why not just send a drone and kill the enemy that way?

Targeted killings and political assassinations have always stood just behind the curtain, but they used to be exceptional practices, conducted in the dark, because it's not socially acceptable to simply go in and kill. Drones changed all that: killing became possible, got cheaper, and minimized the risk of (our own) blood on our hands.

Drone technology ended up normalizing these kinds of practices, especially during the Obama administration. It became a normal thing. Meanwhile, kids play with recreational drones all the time, which also creates a general fascination with the machines.

The link between drones and toys is also something that critical theorists like to point out. For some, it's almost like a fetish tool, an exhibition of power and, if you like using gendered terms, an explosion of testosterone embodied in a weapon (though some find that language problematic).

Never have you been so powerful compared to your enemy; the technology allows you to create a huge asymmetry of power.

Not that there isn't any resistance to these practices. Civil society groups like Statewatch and Drone Wars consistently file freedom of information requests, demanding (sometimes in court) that certain ministries provide more information about their drone strikes. People like this play a key role in creating transparency and accountability.

Martins wants to highlight three main points in the debates surrounding drones.

The first one is about abstraction. Since many drones are operated from bases on American soil, soldiers can sit in a dark room with screens and joysticks, and the decisions are made there. The information they get comes via satellite relay from the drones themselves, which observe on the spot and feed everything back through satellites to the US.

When talking about American practices: the information has to bounce through relay stations that aren't on US soil, and there's an agreement to use German arrays. All major drone assemblages involve multiple countries, especially ones with linked military bases.

There was a famous case where a US citizen was targeted in Yemen by an American drone, but the intelligence that was fundamental in identifying him as a target was Danish. The Danish secret service played a major role – and if the strike turned out to be illegal, what would Denmark's responsibility be in all of this?

Also, if the majority of the information goes through Germany, does that mean there's some legal way to take action against Germany, either to stop the transmission or to gain access to that information?

Again about distance: the long line of military progress has nearly always been in the service of creating distance between the attacker and the attacked. Spears replaced fists, arrows replaced spears, guns replaced arrows, and now we have drones. The point is to make things safer for yourself while making them more dangerous for others.

And now there's a huge distance: upwards of 10,000 km. Missiles could also operate at great distances, but nothing like this – and nothing like the idea that killing can now happen risk-free.

Tellingly, Martins says, many drone shooters are recruited out of video game competitions.

Secondly, most of the information fuelling drone strikes is what's called signals intelligence (SIGINT), which comes from technology, as opposed to human intelligence (HUMINT). And some critics say there's not enough HUMINT involved in the decision-making process.

Examples of SIGINT data include the ways people are tracked through wi-fi connections or through their SIM cards. Analysts may not know who we are, but they know there's a WhatsApp account involved, or Google Maps locations, or bill payments. All of this can be picked up through a SIM card and can reveal a lot about our lives without revealing our identities.
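To make that concrete, here's a minimal sketch, in Python, of the kind of device-level record such collection might produce. Everything here is invented for illustration – the field names, the values, the very idea that records look like this – but it shows how much a SIM card can say without ever naming a person:

```python
from dataclasses import dataclass

# A hypothetical metadata record tied to a SIM card, not a person.
# All field names and values are invented for illustration only.
@dataclass
class SimTrace:
    imsi: str                         # SIM identifier: a device, not a name
    cell_towers_seen: list[str]       # coarse location history
    wifi_networks_joined: list[str]   # places the device connects
    messaging_apps_active: list[str]  # e.g. WhatsApp traffic observed
    payments_observed: list[str]      # bill payments routed through the device

trace = SimTrace(
    imsi="xxx-xx-0000000000",  # placeholder, deliberately anonymous
    cell_towers_seen=["tower_112", "tower_347", "tower_112"],
    wifi_networks_joined=["cafe_guest", "home_ap"],
    messaging_apps_active=["whatsapp"],
    payments_observed=["utility_bill"],
)

# Nothing above names a person, yet together the fields sketch a life:
# where the device sleeps, where it travels, who it talks to, what it pays.
print(trace)
```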

This can add up to what's called 'pattern of life' analysis. If certain factors are picked up, like where you live or what you read, you can be seen as more or less likely to be a terrorist or another kind of security threat. So if you're a 25-year-old Afghan male living in a trouble zone, you already have a number of factors working against you.

A lot of strikes are made with the help of these pattern-of-life signatures. Decisions aren't necessarily made because the target is known for sure to be this or that person, but because the target's pattern of life fits a definition of what an enemy combatant looks like. Which can lead to innocent lives being smashed out of existence by a well-intentioned missile.
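To show the logic (and the danger) of that kind of classification, here's a deliberately crude sketch of pattern-of-life scoring. Real targeting criteria are classified; every feature, weight and threshold below is an invented assumption, meant only to illustrate how a score, rather than a confirmed identity, can end up defining a combatant:

```python
# Invented weights over demographic and behavioural features.
# Real targeting criteria are classified; these are illustrative only.
WEIGHTS = {
    "male_18_35": 0.3,
    "lives_in_conflict_zone": 0.3,
    "travels_at_night": 0.2,
    "contacts_flagged_number": 0.4,
}
THRESHOLD = 0.7  # arbitrary cut-off, also invented


def pattern_of_life_score(observed_features: set[str]) -> float:
    """Sum the weights of whichever features were observed for a target."""
    return sum(WEIGHTS.get(f, 0.0) for f in observed_features)


# A 25-year-old man in a "trouble zone" starts near the threshold before
# anything is known about him as an individual.
observed = {"male_18_35", "lives_in_conflict_zone", "travels_at_night"}
score = pattern_of_life_score(observed)
print(score, "flagged" if score >= THRESHOLD else "not flagged")
# -> roughly 0.8, above the invented threshold
```

The point of the caricature is that no line of this code ever establishes who the person is – it only measures how well a life matches a template.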

It doesn't help that the relative safety of drone operators lowers the cost and the engagement threshold. It costs less to operate a drone than to work with human spies, and a drone can fly for 24 hours or more, collecting the information needed for strikes on its own. Which lowers the need for the human element.

It's also cheaper politically, because war – one of the alternatives to drone strikes – is problematic. It's complex. It has to go through political systems, a lot of people have to be behind it, and if it doesn't go as planned there are plenty of consequences: not just dead bodies or traumatized veterans, but political consequences for the people who supported it.

The third and final idea Martins wants us to think about is the blowback argument. While drones can be precise if the information is reliable, and they can be strong assets, there's still the problem of what happens when you kill the wrong people too often. Or too many people. Or too many civilians. In these cases a drone strike can be counterproductive, fuelling radicalization and pushing people to pick up weapons (or suicide bombs) who had no prior intention of doing so. Just because of how unjust and asymmetric they find the drones. Because of how wronged they feel. So now they want to fight back.

In the case of Al-Qaeda in Afghanistan, drones made a huge contribution to taking out the leaders of terrorist cells. But some say that, in Yemen, they've merely empowered the group.

Going a little further out into potential conspiracy territory, some claim that a drone strike can provoke a response that can then be used as a pretext for launching some other big attack.

And then there are all sorts of concerns about the future. Like proliferation: it's moving fast, and even the Islamic State had drones, launching more than 20,000 by 2018. There's also the question of artificial intelligence and drones, which could take the human element out entirely (activists have tried to stigmatize such systems into non-use the same way landmines have been stigmatized).

Anything But Neutral

A major distinction Martins wants to make, as he starts to conclude today's seminar, is that data and facts are different things. It might be easy to think that the more data we have, the more likely it is that we'll get correct or useful knowledge. This, for researchers like Martins, is misguided.

Data can say a lot of different things, but it can't be equated with facts. You can't base political decisions on SIGINT alone; it needs to be checked against HUMINT as well. Technology, in addition to creating intoxicating asymmetries of power, can give a false sense of security and so facilitate extreme security practices that might prove problematic, as is (arguably) the case with drone strikes.

In other words, it doesn't always pay to trust the machine.

And so when we think about technology, doing so in the context of the social environment that gives rise to it can help us make sense of knowledge and the asymmetries that form around knowledge practices. One thing is for sure: the more we rely on technology, the bigger the asymmetries become between the ones who know and the ones who don't.

Another thing we know is that technology is not neutral. There's an impression that the more we rely on data and information, the more we can strip away cultural (and other) contexts and gain objectivity. Not only is this wrong, according to STS experts, but it builds even the problematic elements of our subjectivity into our decisions and decision-making structures, giving them the sheen of objectivity. Contributing to the god-trick.

And the god-trick, in the form of drones, is incredibly tempting. An illusion, one created in the West, that we can elevate ourselves to a point of nowhere, where strikes form almost as if out of thin air and information arrives as if from a neutral vantage point from which we can evaluate everything.

For critical theorists like Haraway, this is connected with predatory capitalism, colonialism and male supremacy. For those who are willing to take risks (and, regrettably, kill a few innocent people along the way) to eliminate major threats to national security, there's a sense that this still saves oceans of blood that would otherwise flow in the event of a real war.

It depends on your priorities, I suppose.
Bruno Oliveira Martins is a senior researcher at the Peace Research Institute Oslo (PRIO) with a focus on the intersection of technological developments,
security practices and societal change.

Josh Nadeau is a freelance writer and peacebuilding practitioner.
He studied at PRIO in 2018.
Banner photo: Zev Marmorstein on wikicommons
Further Reading
Ceyhan, A. (2008). "Technologization of Security: Management of Uncertainty and Risk in the Age of Biometrics." Surveillance & Society 5(2): 102-123.

Wilcox, L. (2017). "Embodying Algorithmic War: Gender, Race, and the Posthuman in Drone Warfare." Security Dialogue 48(1).

Feenberg, A. (2005). "Critical Theory of Technology: An Overview." Tailoring Biotechnologies 1(1): 47-64.