The forensic empire that is Bellingcat


Eliot Higgins and his 28,000 forensic foot soldiers at Bellingcat have kept a miraculous nose for truth – and a sharp sense of its limits – in Gaza, Ukraine, and everywhere else atrocities hide online.

BY:

Eric De Grasse

Chief Technology Officer

A member of the Project Counsel Media team

19 June 2024 — Due to the nature of our investigative reporting, we live, breathe and die with OSINT – open-source intelligence.

OSINT is derived from data and information available to the general public. It’s not limited to what can be found using Google or other public search engines, although the so-called “surface web” is an important component. Most of the tools and techniques used to conduct open-source intelligence initiatives are designed to help security professionals (or even threat actors) focus their efforts on specific areas of interest.

It is a subject we have written about many, many times before, especially in the pieces by our founder and boss, Gregory Bufithis, who has built an enormous OSINT network that has informed his long-running series on the wars in Ukraine and Gaza. And our readers in the eDiscovery and cybersecurity industries know it well, because many vendors in those industries use OSINT in their daily digital investigations work.

I have been fortunate: when Greg set up Luminative Media some 12+ years ago, he got me involved in building our OSINT unit, which has proved invaluable. Many of those early contacts – Russian and Ukrainian specialists, many still based across Eastern Europe – fed our cybersecurity posts and our posts on Russian special ops and disinformation campaigns, and we tapped them again for our Ukraine war coverage.

But as a journalist, Greg was following the path any responsible publisher needs to follow: investing in greater capacity for robust fact-checking and digital verification. It’s not only the media giants like the New York Times or the BBC – which obviously have enormous resources to maintain fully staffed open-source investigations units – but also smaller operations like ours that must do this. And I am heartened that organizations like Bellingcat (the star of this post, as you will see) are receiving more and more funding.

Last year Greg published a monograph on OSINT for our paying subscribers but I’ll pull a few points (with his permission) from his piece for all of our readers:

• OSINT has enabled many forensic breakthroughs in recent years, and Bellingcat has made fuller use of it than any organization I know. The internet remains an astonishing resource for helping redress the power imbalances between the rulers and the ruled. History is no longer just written by the winners, but filmed by the losers on their smartphones. To me, Bellingcat stands at the nexus of journalism, activism, computer science, criminal investigation and academic research.

• Its origins lie (somewhat) in the world of intelligence and law enforcement. It was in the U.S., via the final report of the 9/11 Commission in 2004, that the first “official” recommendation came to create a government open-source intelligence unit, a proposal reinforced a year later by the Iraq Intelligence Commission. But as we all knew, the methodology had already found its most innovative and effective use in the hands of journalists; OSINT itself has a far longer history.

• The central pursuit in open-source investigations is finding publicly accessible data on an incident, verifying the authenticity of the data, using that data to confirm the temporal and spatial dimensions of the incident, and cross-referencing the data with other digital records.

• An open-source investigator will thus start by scouring social media for postings from the area around the time. Once such images are found, they are geolocated by cross-referencing geographical features in Google Earth. The time of each image is then confirmed using a digital sundial to calculate shadow length and direction. A route for, say, a missile launcher can then be constructed by placing the photographs on a map along with the time of each sighting. (A small sketch of the shadow check follows this list.)

• For all its utility, such material always carries the risk of inauthenticity or manipulation. With the help of its ally Russia, for instance, Syria adapted to our new media environment by mobilizing armies of trolls to add digital noise to the mix, further diminishing trust in such material. This is where open-source verification becomes essential, establishing the authenticity of audio-visual material before any conclusions can be drawn from it.

• And an important note. The remoteness of open-source analysts from the subject of their analysis is not as absolute as its critics make it out to be. Much of the data used in open-source analysis comes from witnesses on the ground who have more immediate access to events. Which is certainly true in Ukraine.

• Most open-source investigators aren’t formally employed as journalists – many emerged from a gaming subculture where street cred derives from the economy and precision of one’s method – and professionals from other fields of expertise such as architecture, medicine, chemistry, finance, and law have found uses for their specialist knowledge in unraveling forensic puzzles. The British-Israeli architect Eyal Weizman has pioneered the entirely new field of forensic architecture, using open-source data for spatial investigations into human rights violations; the chemical weapons expert Dan Kaszeta has contributed to several Bellingcat investigations; UC Berkeley’s Human Rights Investigations Lab recruits from over a dozen disciplines.   

• For me, this is the closest that journalism has come to a scientific method: the transparency allows the process to be replicated, the underlying data to be examined, and the conclusions to be tested by others. This is worlds apart from the journalism of assertion that demands trust in expert authority. 
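
To make that “digital sundial” step concrete, here is a minimal sketch of the shadow check, using the third-party astral library for solar position. The coordinates, claimed time and measured values are placeholders, not from any real case:

```python
# Check a claimed capture time against the shadows in a geolocated photo.
# Requires: pip install astral
from datetime import datetime, timezone
from math import radians, tan

from astral import Observer
from astral.sun import azimuth, elevation

# Placeholder geolocation and claimed capture time (UTC).
observer = Observer(latitude=48.45, longitude=35.02)
claimed_time = datetime(2024, 6, 1, 10, 20, tzinfo=timezone.utc)

sun_az = azimuth(observer, claimed_time)    # degrees clockwise from north
sun_el = elevation(observer, claimed_time)  # degrees above the horizon

# A shadow points directly away from the sun ...
expected_shadow_az = (sun_az + 180.0) % 360.0
# ... and its length, relative to the object's height, follows from elevation.
expected_ratio = 1.0 / tan(radians(sun_el))  # valid only while the sun is up

# Values an investigator would measure off the photo itself (placeholders).
measured_shadow_az = 312.0
measured_ratio = 1.4  # shadow length divided by object height

print(f"shadow azimuth: expected ~{expected_shadow_az:.0f}, measured {measured_shadow_az:.0f}")
print(f"shadow/height ratio: expected ~{expected_ratio:.2f}, measured {measured_ratio:.2f}")
```

A large disagreement on either number means the claimed time or the geolocation is wrong; browser tools like SunCalc automate the same check.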

And the Big Gorilla: metadata. Just to highlight a few points certainly known to any of us who have worked in cybersecurity, military intelligence or the legaltech industry:

1. Metadata can be more useful than the content of a particular message or voice call.

2. Metadata can be mapped through time, creating a nifty path of an individual’s movements.

3. Metadata can be cross-correlated easily with other data. If you follow the myriad experts on LinkedIn who know this stuff cold, or read the works of Gordon Corera, John Hughes-Wilson or Bruce Schneier (plus a host of others, but they are my favorites; email me and I’ll send you my reading list), the ease and magic of cross-correlation is an eye-opener. (A toy sketch of points 2 and 3 follows this list.)

4. Metadata can be analyzed in more than two dimensions.
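
To illustrate points 2 and 3, here is a toy, standard-library-only sketch: build a movement path from timestamped location metadata, then cross-correlate it against a second, independent metadata set by flagging records that are close in both time and space. Every record below is invented:

```python
# Toy demonstration of points 2 and 3: movement paths and cross-correlation
# from metadata alone, no message content needed. Every record is invented.
from datetime import datetime, timedelta

# (timestamp, latitude, longitude) records for one device, e.g. call logs.
device_a = [
    (datetime(2024, 6, 1, 12, 0), 51.0504, 13.7373),
    (datetime(2024, 6, 1, 8, 0),  52.5200, 13.4050),
    (datetime(2024, 6, 1, 9, 30), 52.3906, 13.0645),
]

# Point 2: sort by timestamp and you have the device's path for the day.
for ts, lat, lon in sorted(device_a):
    print(f"{ts:%H:%M} -> ({lat:.4f}, {lon:.4f})")

# A second, independent metadata set (say, timestamps from photo uploads).
device_b = [
    (datetime(2024, 6, 1, 9, 25), 52.3910, 13.0650),
    (datetime(2024, 6, 1, 15, 0), 50.1109, 8.6821),
]

# Point 3: cross-correlate -- flag record pairs close in both time and space.
def co_located(a, b, max_dt=timedelta(minutes=15), max_deg=0.01):
    """Crude proximity test; 0.01 degrees is roughly a kilometre."""
    return (abs(a[0] - b[0]) <= max_dt
            and abs(a[1] - b[1]) <= max_deg
            and abs(a[2] - b[2]) <= max_deg)

for a, b in ((a, b) for a in device_a for b in device_b if co_located(a, b)):
    print(f"co-located around {a[0]:%H:%M} near ({a[1]:.3f}, {a[2]:.3f})")
```

Real investigations do this at scale with proper geodesic distance and fuzzier matching, but the principle is exactly this simple, which is why metadata is so powerful.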

This month Wired magazine ran an interview with Eliot Higgins, founder of Bellingcat, and it is a nifty read, especially for our subscribers involved with forensics. The interview is behind the Wired paywall, but we have permission to republish it. Many of you have a subscription to Wired, so I have left in the hyperlinks if you want to explore/read more.

Ten years ago, Eliot Higgins could eat room service meals at a hotel without fear of being poisoned. He hadn’t yet been declared a foreign agent by Russia; in fact, he wasn’t even a blip on the radar of security agencies in that country or anywhere else. He was just a British guy with an unfulfilling admin job who’d been blogging under the pen name Brown Moses—after a Frank Zappa song—and was in the process of turning his blog into a full-fledged website. He was an open source intelligence analyst avant la lettre, poring over social media photos and videos and other online jetsam to investigate wartime atrocities in Libya and Syria.

In its disorganized way, the internet supplied him with so much evidence that he was beating UN investigators to their conclusions. So he figured he’d go pro. He called his website Bellingcat, after the fable of the mice that hit on a way to tell when their predator was approaching. He would be the mouse that belled the cat.

Today, Bellingcat is the world’s foremost open source intelligence agency. From his home in the UK, Higgins oversees a staff of nearly 40 employees who have used an evolving set of online forensic techniques to investigate everything from the 2014 shoot-down of Malaysia Airlines Flight 17 over Ukraine to a 2020 dognapping to the various plots to kill Russian dissident Alexei Navalny.

Bellingcat operates as an NGO headquartered in the Netherlands but is in demand everywhere: Its staffers train newsrooms and conduct workshops; they unearth war crimes; their forensic evidence is increasingly part of court trials. When I met Higgins one Saturday in April, in a pub near his house, he’d just been to the Netherlands to collect an award honoring Bellingcat’s contributions to free speech—and was soon headed back to collect another, for peace and human rights.

Bellingcat’s trajectory tells a scathing story about the nature of truth in the 21st century. When Higgins began blogging as Brown Moses, he had no illusions about the malignancies of the internet. But along with journalists all over the world, he has discovered that the court of public opinion is broken. Hard facts have been devalued; online, everyone can present, and believe in, their own narratives, even if they’re mere tissues of lies. Along with trying to find the truth, Higgins has also been searching for places where the truth has any kind of currency and respect—where it can work as it should, empowering the weak and holding the guilty accountable.

The year ahead may be the biggest of Bellingcat’s life. In addition to tracking conflicts in Ukraine and Gaza, its analysts are being flooded with falsified artifacts from elections in the US, the UK, India, and dozens of other countries. As if that weren’t enough, there’s also the specter of artificial intelligence: still too primitive to fool Bellingcat’s experts but increasingly good enough to fool everyone else. Higgins worries that governments, social media platforms, and tech companies aren’t worrying enough and that they’ll take the danger seriously only when “there’s been a big incident where AI-generated imagery causes real harm”—in other words, when it’s too late.

WIRED: You now preside over the world’s largest open source, citizen-run intelligence agency. A decade ago, when you switched from your blog to the Bellingcat website, what path did you see this taking?

ELIOT HIGGINS: At that point, I was still trying to figure out exactly how I could turn this into a proper job. I’d been blogging for a couple of years. But I had children, and it was getting more important to earn a living. When I launched Bellingcat, the goal was to have a space where people could come publish their own stuff. Because at that point, I had several people who’d asked to publish on my blog. I needed a better-looking website. I also wanted a place where people could come together. But that was the extent of my strategy. There was no grand plan beyond that. It was all, “What’s happening next week?”

Well, I launched on July 14, and then three days later MH17 was shot down. The way the community formed around MH17, it was really a massive catalyst for open source investigation—in terms of the growth of the community, the work we did developing techniques, the profile that gave it. Today our Discord server has more than 28,000 members. People can come and discuss stuff they think might be worth investigating, and we’re publishing articles based off the work of the community.

WIRED: The world is never boring these days. What has it been like at Bellingcat since October 7, for example?

We’ve hired more people. We’re bringing in more editors. We’ve shifted people from other projects. We’ve already got one person who’s specifically working on archiving footage. But what’s different is that you don’t get the same kind of footage that we’ve gotten from, say, Ukraine or Syria. There’s actually a lot less coming from the ground.

WIRED: Because of internet blackouts?

Yeah, and a lot of the stuff we find is actually from Israeli soldiers who’re misbehaving and doing stuff that I would say are definitely violations of international law. But that’s coming on their social media accounts—they post it themselves.

Another issue is: Because of the lack of electricity there, you actually get a lot of stuff happening at night that you can’t really see in the videos. Like the convoy attack that Israel had the drone footage of—there’s lots of footage of that, but it’s just all at night and it’s pitch-black. But there was a good piece of analysis I saw recently where they used the audio and could actually start establishing what weapons were being used. Just the sound itself makes it very distinct …

WIRED: Like audio signatures of missiles?

Yeah, and it’s not just being able to identify the type of weapon: When you fire something, you can hear the sound of the bullet going by but also the sound the barrel makes—and you can use that to measure how far away the shot came from. When the Al Jazeera journalist Shireen Abu Akleh was killed in 2022, we had the footage where she was shot. And the shot came from the direction of positions occupied by Israeli forces. [Months after the shooting, the Israel Defense Forces announced that there was “a high possibility” that the journalist was killed by one of its soldiers.]
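
A technical aside from us, before the interview continues: the “crack-bang” method Higgins describes is simple enough to show. A supersonic bullet’s shock wave (the crack) outruns the muzzle blast (the bang), and the delay between them scales with range. Below is a minimal sketch under deliberately crude assumptions – constant bullet speed, shot fired straight at the microphone – so the numbers are illustrative only, not anyone’s actual forensic methodology:

```python
# "Crack-bang" ranging: a supersonic bullet's shock wave (crack) arrives
# before the muzzle blast (bang), and the gap grows with distance.
# Simplified model: constant bullet speed, shot fired straight at the mic.
# Real analyses model bullet deceleration and geometry; this is a sketch.

V_SOUND = 343.0    # m/s, speed of sound in air at ~20 C
V_BULLET = 850.0   # m/s, placeholder rifle-class muzzle velocity

def shooter_distance(crack_bang_gap_s: float) -> float:
    """Distance implied by the delay between crack and bang.

    The crack covers the range d at V_BULLET (carried by the bullet),
    the bang covers it at V_SOUND, so:
        gap = d / V_SOUND - d / V_BULLET
    """
    return crack_bang_gap_s / (1.0 / V_SOUND - 1.0 / V_BULLET)

# A 0.25 s crack-to-bang gap read off an audio waveform:
print(f"{shooter_distance(0.25):.0f} m")  # ~144 m with these speeds
```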

WIRED: Are there things you haven’t seen before, coming from this conflict?

It’s certainly the first time I’ve seen AI-generated content being used as an excuse to ignore real content. When a lot of people think about AI, they think, “Oh, it’s going to fool people into believing stuff that’s not true.” But what it’s really doing is giving people permission to not believe stuff that is true. Because they can say, “Oh, that’s an AI-generated image. AI can generate anything now: video, audio, the entire war zone re-created.” They will use it as an excuse. It’s just easy for them to say.

WIRED: And then they can stay in their own information silo …

Yeah, just scrolling through your feed, you can dismiss stuff easily. It reinforces your own beliefs. Because Israel-Palestine has been such an issue for so long, there is a huge audience already primed to be emotionally engaged. So you see grifters churn out misattributed imagery or AI-generated content. The quality of that discourse is really low. It means that if you’re looking for real accountability, it’s hard.

WIRED: You have this entirely transparent process, where you put all your evidence and investigations online so anyone can double-check it. But it’s a feature of the world we live in that people who’re convinced of certain things will just remain convinced in the face of all the facts. Does the inability to change minds frustrate you?

I’ve gotten used to it, unfortunately. That’s why we’re moving toward legal accountability and how to use open source evidence for that. We have a team that’s just working on that. You can have the truth, but the truth is not valuable without accountability.

WIRED: What do you mean by legal accountability?

Well, you have people on the ground capturing evidence of war crimes. How do you actually take that from YouTube to a courtroom? No one has actually gone to court and said, “Here’s a load of open source evidence the court has to consider.” So we’ve been doing mock trials using evidence from investigating Saudi air strikes in Yemen.

A lot of our work is educating people: Lawyers in general don’t know much about open source investigation. They need the education to understand how investigators work, what they’re looking for—and what is bad analysis.

Because there’s more and more bad analysis with open source evidence. Do you know Nexta TV? They’re this Belarusian media organization, and they did a series of tweets after the attack on the concert in Moscow. They said there’s a lot of people in this scene wearing blue jumpers. They could be FSB agents [members of Russia’s Federal Security Service]. But where’s the proof they’re FSB agents in the first place? That was terrible analysis, and it went viral and convinced people there was something going on. If you can draw colored boxes around something and say you’re doing open source investigation, some people will believe you.

WIRED: There are elections this year in the US and in the UK and in India. Are you preparing to deal with these three big election events as you deal with Ukraine and Gaza?

There’s only so much we can do to prepare, because I think the scale of disinformation and AI-generated imagery will be quite significant. If you look at what’s happened already in the US with the primaries, you’ve already got fake robocalls; the DeSantis campaign used AI-generated imagery of Trump and Dr. Fauci hugging each other. So that line has already been crossed. These tools are available to ordinary members of the public as well, not just political agents.

WIRED: Which makes it much worse.

Yeah, because it’s not what the campaigns decide to do, it’s what their supporters decide to do.

WIRED: Given this flood of AI-generated imagery, are you wary of Bellingcat turning into just a fact-checker rather than doing these much deeper investigations where you build a case?

It’s like the Kate Middleton thing that happened recently. I really tried not to join the conversation. I thought: This is really stupid discourse. But then you start seeing, like, TikTok videos that were saying, “Oh, the color’s being photoshopped” or whatever, and they have millions and millions and millions of views. So you kind of feel: Yeah, I have to say something. It’s actually a good reflection of how disinformation starts and spreads, and the dynamics.

WIRED: I will not lie. I was fascinated too, for the span of a week.

That’s why it was prime territory for disinformation! I’ve dealt with lots of communities who believe in conspiracy theories. None of them generally believe they’re conspiracy theorists. They believe they’re truth seekers fighting against some source of authority that is betraying us all. They’ve come to understand that a source of authority cannot be trusted, because of their personal experiences.

WIRED: I love a phrase you used for this once: that people who believe in conspiracy theories have previously suffered some kind of “traumatic moral injury.”

I use the example of Covid. A lot of people who were driving Covid disinformation were people in the alternative health community who’ve often had bad experiences with medical professionals. Like they’ve had a treatment go wrong, or they’ve lost a loved one, or they’ve been mistreated. And some of that is legitimate. Some of that is real trauma.

Now, they found like-minded people, and within that community you have people who are anti-vaxxers. When Covid came along, suddenly those voices became a lot louder within those communities. And the distrust people had in medical professionals was kind of reinforced. It’s about feeding their anxiety—and they’re being fed every single day, every time they scroll through their groups.

WIRED: In an era when AI images are going to proliferate, wouldn’t you rather that people have this heightened spidey sense about the world, where they’re alert? That they’re too skeptical rather than too trusting?

I’d argue against the frame of that question. If you have people’s spidey sense tingling all the time, they’ll just distrust everything. We’ve seen this with Israel and Gaza. A lot of people are really at that point where they do care about what’s happening, but it’s so confusing that they cannot stand to be part of this anymore. You’re losing people in the center of the conversation. This is a real threat to a democratic society where you can have a debate, right?

WIRED: Is this AI-generated stuff at a stage of sophistication where even your team has to struggle to distinguish it?

Well, we explore the network of information around an image. Through the verification process, we’re looking at lots of points of data. The first thing is geo-location; you’ve got to prove where something was taken. You’re also looking at things like the shadows, for example, to tell the time of day; if you know the position of the camera, you’ve basically got a sundial. You also have metadata within the picture itself. Then images are shared online. Someone posts it on their social media page, so you look at who that person is following. They may know people in the same location who’ve seen the same incident.

You can do all that with AI-generated imagery. Like the Pentagon AI image that caused a slight dip in the stock market. [In May 2023, a picture surfaced online showing a huge plume of smoke on the US Department of Defense’s lawn.] You’d expect to see multiple sources very quickly about an incident like that. People wouldn’t miss it. But there was only one source. The picture was clearly fake.

My concern is that someone will eventually figure that out, that you’ll get a coordinated social media campaign where you have bot networks and fake news websites that have been around for a long time, kind of building a narrative. If someone were clever enough to say, “OK, let’s create a whole range of fake content” and then deliver it through these sites at the same time that claims an incident has happened somewhere, they’d create enough of a gap of confusion for an impact on the stock market, for panic to happen, for real news organizations to accidentally pick it up and make the situation much worse.
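
Another aside from us: one concrete layer of the verification stack Higgins walks through above is the metadata embedded in the image file itself. Here is a minimal sketch of pulling a photo’s EXIF tags with the Pillow library; the filename is a placeholder, and bear in mind that platforms routinely strip EXIF and it is trivially forged, which is exactly why it is one data point among many:

```python
# Dump the EXIF metadata embedded in an image, including any GPS block.
# Requires: pip install Pillow
# Caveat: platforms often strip EXIF, and it is trivially forged, so it
# is one data point among many -- never proof on its own.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

GPS_IFD_TAG = 0x8825  # pointer to the GPS sub-directory

def dump_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
    for tag_id, value in exif.get_ifd(GPS_IFD_TAG).items():  # empty if no GPS
        print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")

dump_exif("photo.jpg")  # placeholder filename
```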

WIRED: So how do we even begin to fix this?

Social media companies need to have the responsibility—like, legislatively—to have AI detection and flagging as part of the posting process. Not just as something that’s a fact-check layer, because that’s not going to matter at all. I don’t think a voluntary system is going to work. There need to be consequences for not doing it. I think my worry is that we’re only going to figure this out when something really terrible has happened.

WIRED: Do you still do a lot of investigative work yourself now?

No. If I’ve got a gap in my day to do a quick geolocation or something like that, I’ll do it. I’m involved with a lot of the work we do on our production company side of things, so that’s keeping me busy. I do a lot around PR and comms.

WIRED: Is that easy for you? Somewhere you’d said that when you were younger, you were slightly socially anxious?

I was cripplingly socially anxious. I’ve had to beat it out of me. When I first started doing this, I had loads of anxiety, really serious levels. The idea of speaking on stage was terrifying to me. The first time I did a big event on stage was at a 2013 Google Ideas summit. I don’t remember anything about that. Just dripping with anxiety. But doing this again and again, about something I really care about, has helped balance that out.

WIRED: How do you spend your spare time online? What do you do on holiday?

I’ve removed Twitter from my phone, because that was one of the worst things. Arguing with people …

WIRED: You don’t do that anymore, I noticed. You used to do it a lot, and in such good faith.

It was kind of like testing my own knowledge. If someone can come up to me and say, “Oh, you’re wrong because of this,” and I can’t argue against that, then I’m the one in the wrong. It used to be worthwhile having those debates, even if they were arguing in bad faith. But it got to the point where the mythology around Bellingcat that existed in these echo chambers became crystallized. When someone now says, “Oh, Bellingcat is the CIA,” it’s always the same nonsense.

WIRED: OK, you’re not arguing as much. What else are you doing?

I use AI a lot for my own entertainment. Do you know Suno AI, or Udio? These are music-creation tools—and in the past six months they’ve taken huge, huge leaps.

WIRED: Oh, Suno. It’s the Hindi word for “listen.”

Yeah. Have you used these at all?

WIRED: No.

I’ll show you. I have a SoundCloud where I upload my music. You can put in style prompts. You can also put in custom lyrics.

WIRED: This is how the founder of Bellingcat spends his spare time.

Yeah. I like it especially when the AI generator really gets weird, goes completely off the rails. I write loads of songs about things like filter bubbles online and stuff. If you can condense an idea into a lyrical form, I find that helps process it into a simpler form to explain it to people in articles and books.

WIRED: When you’re giving these prompts, are you giving them influences or are you just giving them genres?

Oh, I’ve got a whole process for this now! It used to be that I’d say, “OK, let’s do an ambient song.” But then I was thinking: How do I get the exact sound of certain bands? Because you can’t put in “Make a Beastie Boys song.” It won’t let you prompt it that way; they’re clearly trying to avoid getting sued. But I go to ChatGPT and explain the scenario: I’m giving prompts for a music-generation program that requires style tags and types of music, so what are the style tags for, like, Kraftwerk? It will break down styles into separate tags, and you can take those tags and put them back in.

WIRED: I’ve read elsewhere that you call any yearning for a time before the internet “cyber-miserabilism,” which is a great phrase. But it’s also true that all of us remember our minds being calmer before we started scrolling through feeds.

You’re continually wired now. What really worries me is how this is traumatizing people. We had this a lot with Ukraine in 2022, when there were so many people engaged with the content stream. Those people were saying, “I just feel horrible all the time.” We didn’t realize we were traumatizing ourselves. We’re seeing the same issue with Israel and Gaza and people streaming through this imagery that’s just reinforcing the hate they have for the other side.

WIRED: In the early days of Bellingcat, you were being exposed to videos like that on a daily basis, very often including footage of dead bodies. How do you protect yourself from what you’re seeing?

For me, it felt like there was a point to it, because I had success through seeing all this stuff. It’s the powerlessness that is often part of the traumatic response. But you can learn to disassociate from that.

WIRED: Can you though?

I just think I got very good at compartmentalizing stuff. It’s so, so important for this work. With MH17, I was looking at the wreckage of the site. There was a big, high-resolution photo, and I was going through it looking at the details of the shrapnel holes, and there was a doll in the wreckage, and my daughter had been given the exact same doll by her aunt when she was born. What happens then is you have a subconscious engagement with it. And you have to stop at that point. Trying to push through it is a really bad idea.

When I was looking at the victims of the 2013 sarin attacks in Syria, for example, we were trying to identify the symptoms. And one of the symptoms is the constriction of pupils. So I had to look at the eyes of these dead people to find enough screenshots to establish their cause of death. That was upsetting in itself. But then you go online, and you have all these idiots saying: “Oh, it’s fake. No one really died. The babies are acting.” That is traumatic.

What happens to a lot of people is they have this kind of compulsive witnessing, where you’re like, “I have to witness this thing.” Because, in history, people have turned their backs, right? So I have to witness this, so that these people’s suffering is being acknowledged. It’s an illusionary way of getting power back from the situation, because it really doesn’t change anything. All you’ve done is traumatized yourself.

WIRED: I understand Bellingcat offers psychological support so anyone on staff can get free therapy. Do people use that counseling facility a lot?

Oh yeah, absolutely. It’s not just about the content we face but also the reaction from governments that we have to deal with. Which can be, as you know, quite aggressive.

WIRED: I did wonder about that. I’ve read that you don’t eat room service meals anymore, and I wanted to know what else you do or don’t do. And also, what changed when Bellingcat was declared a foreign agent by Russia in 2021?

We have a security team, we have a lot of reviews around cybersecurity. We have a lot of discussions about our physical security. We have staff retreats, where consultants come talk to us about, like, “Here’s what to do if you’re being followed.” Fun stuff like that. Being declared undesirable and a foreign agent—in one sense, it’s a badge of honor. It’s also a problem, because we try to be transparent about who funds us, but if we’re a foreign agent and have donations from people who’re linked to Russia, that will put them at risk. We’ve had to stop publishing some of our donors’ names, which we’re not fans of. But they need to be protected.

WIRED: What about this meeting, for instance? How did you know whether to agree to have a cup of coffee with me? What did you do?

Well, some research. First of all, I made sure to know what you look like. There’s been incidents where people have had meetings with journalists, who suddenly start asking very weird questions. They’ll start saying, “Oh, Israel are pretty awful, aren’t they?” And then you wonder, “What’s going on here?” I know people who’ve had Skype calls, and suddenly their call is on Iranian state media, selectively edited.

WIRED: I found a quote online from one of your former employees in which he says, “Data is the great equalizer between an individual and the state.” But surely, at some point, governments and intelligence agencies will find ways to hide their own data better?

Russia tried to do that. After we did the first investigation of the poisoners [of Sergei Skripal, a former Russian intelligence officer, in England], we got copies of their GRU documents. The next time we tried that, they’d removed the photos from the documents of GRU officers. But that just told us they were GRU officers. When we posted about that, the photos returned, but they were of different people. They’d replaced a photo of a man with a photo of a woman. So … they’re not smart.

WIRED: But they’re bound to get smarter?

Maybe. The thing is, these are doors. One door closes, we just go through the 10,000 other open doors. It’s never the end of the investigation. We just need to take another route.
