
Journalist Resource February 5, 2024

How We Investigated Mass Surveillance in Argentina


Seventy-five percent of the Argentine capital area is under video surveillance, which the government proudly advertises on billboards. But the facial recognition system, part of the city's sprawling surveillance infrastructure, has come under fire: after it went live in 2019, at least 140 database errors led to police checks or arrests of innocent people. Officials deactivated the facial recognition feature during the COVID-19 pandemic in 2020, and it has remained off under a precautionary measure ordered by the judiciary. The city of Buenos Aires is now fighting in court to have it switched back on.

From the beginning of the investigation, we considered the question of privacy versus security, as well as the regulation of AI and the racial bias already documented in AI-powered facial recognition. South America is a continent struggling with security problems. In this context, an increasing number of policymakers are turning to artificial intelligence to fight and prevent crime. The use of AI in public spaces, such as facial recognition technology, receives relatively little media coverage: what causes heated debate in Europe and the U.S., many South Americans silently accept.

We started our research by meeting victims of database errors who had been mistakenly stopped by police relying on AI-powered facial recognition in Buenos Aires. At the same time, a judge investigating the system found that facial recognition may have been misused for surveillance and big data purposes. When we found out about this, our story took a turn. It was now: The City of Buenos Aires relies on facial recognition for public security. But judicial inquiries show that the system has been tampered with, and possibly used for surveillance.

Facial recognition in public spaces is regulated in Europe. But in Buenos Aires, it's a fact of life. We wanted to get to the bottom of the issue and looked into ethical questions, questions about transparency, regulatory options, errors in the system, and data protection. When the judicial investigation revealed that the system may have been abused, it was clear that this was our story. It also revealed a broader lesson: systems are not perfect, and where controls are lacking, systems like facial recognition can very easily be abused.

What the investigation revealed 

“A nightmare” is how Guillermo Federico Ibarrola describes his arrest. The cameras of the facial recognition system in Buenos Aires had identified him as a criminal. But a different Guillermo Ibarrola had committed the robbery in question, in a city 600 kilometers away. He spent six days in prison until he was finally released with a coffee to go, a bus ticket home, and, yes, his shoelaces. This happened in 2019. We still don't know the recognition rates of the cameras, how the software was acquired, who runs it, under what standards or control mechanisms, or how long and where the harvested data is stored.

We found not only Guillermo Ibarrola but also other victims by looking at media reports and tweets about false positives, and by contacting lawyers and the Ombudsman's Office of the City of Buenos Aires. Guillermo's case was especially serious because he spent six nights in a cell.

The city of Buenos Aires relies on facial recognition. A few months after the system was installed in 2019, the government announced that almost 1,700 wanted criminals had been caught. But data privacy activists sued the city: 140 innocent people had also been stopped by police because the system recognized them as wanted criminals. The IT specialists who searched the servers of the Ministry of Security on a court order arrived at a startling suspicion: Had facial recognition been used to create a big data trove, or even to monitor individuals?
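How can a system built to find criminals flag 140 innocent people? At its core, facial recognition compares a numerical "embedding" of a face caught on camera against a watchlist and reports any score above a threshold. The Python sketch below is purely illustrative: since the city never disclosed the recognition rates or parameters of its software, the watchlist size, embedding dimension, and threshold here are invented stand-ins.

```python
import numpy as np

# Illustrative sketch only: the real system's watchlist size, embedding
# dimension, and match threshold were never disclosed. All values below
# are invented stand-ins.

rng = np.random.default_rng(seed=42)

# A watchlist of 40,000 face embeddings and one passerby's embedding.
# Random vectors stand in for the output of a real face-embedding model.
watchlist = rng.normal(size=(40_000, 128))
passerby = rng.normal(size=128)

# Cosine similarity of the passerby against every watchlist entry.
similarities = (watchlist @ passerby) / (
    np.linalg.norm(watchlist, axis=1) * np.linalg.norm(passerby)
)

THRESHOLD = 0.30  # an assumed operating point, not the real system's
hits = np.flatnonzero(similarities >= THRESHOLD)

# Even with purely random vectors, a large watchlist plus a permissive
# threshold yields spurious "matches" -- the statistical shape of a
# false positive. In the street, each hit sends police after a passerby.
print(f"spurious matches: {hits.size} out of {watchlist.shape[0]}")
```

The point of the sketch: multiplied across millions of camera frames and tens of thousands of watchlist entries, even a tiny per-comparison error rate turns into real police stops.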

In our story, the main actors involved have their say: a man who spent six days in a cell due to a database error. The Minister of Security, who considers the system indispensable for the safety of the citizens of Buenos Aires. The judge who ordered the investigation of the system, found evidence of various irregularities—and drew alarming conclusions.

We were able to access the sensitive report that resulted from that judicial investigation. Confidential background conversations were particularly important, both to understand the report, which was in part very technical, and to verify the credibility of this important document in a politically polarized country. One advantage in this case: the report was produced and signed off by two different police bodies. The city police report to the Buenos Aires government, the airport police to the national government; the two were political opponents at the time of the research.

Visual approach: documentary photography + artistic interventions

The uniqueness of this project lies in the fact that we worked as a team of a reporter (Karen Naundorf) and a photographer (Sarah Pabst). Media reporting in general, and imagery in particular, is currently facing a credibility crisis. Fake news and, lately, AI-generated imagery have deepened that distrust. Documentary work by professional authors and photographers, whose names stand for the authenticity of text and images, helps improve transparency and credibility.

We knew the importance of working together as a team from the beginning: you brainstorm and research together, give each other input during interviews and photo sessions, and hold follow-up conversations. The end results are therefore better and more complete, which increases the reach of our story: magazines give more space to stories not only when the research is good, but also when the text interacts well with first-class images.

The importance of visuals is widely cited: by some often-quoted estimates, 90% of the information transmitted to the human brain is visual, and the brain processes visuals far faster than text. Images also persist longer in memory than written words. Yet articles on AI rarely feature extensive photography, and most of the time they use the same illustrations. In something as abstract and technical as AI, visuals become especially important for giving a broader spectrum of people easier access. We have been working as a team for years now and know how much we amplify each other in our projects. So while the text conveys all the important information, personal stories, and results of the investigation, the images, both documentary and intervened, help bring those results closer to the readers.

The aim of our reporting on AI is always to show the impact on communities—that is, on humans. That's why classic documentary photography is not obsolete. On the contrary, it’s an important part of the photographic work, as it shows actual people. The artistic interventions at the same time illustrate visually what is hidden and hard to document—AI, algorithms, and the effects on us. Through this combination of both documentary photography and artistic intervention, the visuals shed light on something as hard to understand as AI. 


Image by Sarah Pabst. Argentina, 2023.

AI is omnipresent, but at the same time abstract, hard to grasp, and mostly invisible. The artistic interventions aim to visualize this invisibility: thin lines drawn on photos show the ranges of installed cameras, digital collages reproduce what the camera software does, and small holes pierced in prints and photographed against the light let light through while representing the violation of privacy and the way algorithms intervene in our lives.


The sun sets above the Obelisk, with cameras mounted at its top, and 9 de Julio Avenue in Buenos Aires, Argentina, on Wednesday, May 10, 2023. The lines symbolize the invisible, constant surveillance. Image by Sarah Pabst.

The choice of locations followed a strictly documentary approach: portraits where the victims Guillermo and Leo had been picked up by the cameras, politicians and judges in their offices, lawyers at home and at work. The interventions we made on those documentary images further highlight how the cameras work and what they can see, the lack of information behind portraits, the way camera software marks a person, and, through digital distortion, how randomly one can be picked out as a false positive and how we are all affected when it comes to AI. We also intervened in self-portraits after we found out that our own biometric data had also been requested.


Guillermo Ibarrola, a victim of a false positive, stands for a portrait at the Retiro train station, where he was arrested, in Buenos Aires, Argentina, on Thursday, October 13, 2022. Image by Sarah Pabst.

Challenges & lessons learned

The challenges we faced were broad: 

  • Our interview partners usually had interests of their own (experts tend to have a commercial or political background)
  • Some questions simply remained unanswered (requests were "sat out," e.g., questions about software and algorithms)
  • The importance of these issues is underestimated; other problems are more pressing, even for potential interview partners
  • Even victims were often reluctant to talk; the political stakes were too high, and nobody wanted to take on the city of Buenos Aires
  • Political polarization limits the choice of interview partners and access to data

When we started the research, the goal was to address ethical conflicts and present the benefits and potential harms of the technology. This seemed particularly interesting to us in a context where there is a real security problem, on a continent where such massive use of the technology is not expected, and at a time when authoritarian tendencies are gaining ground in many countries.

This very idea was reinforced when we learned about Judge Gallardo's judicial investigation: suddenly, the question arose whether the facial recognition system might have been used for surveillance or big data. The project took a second turn when we found out that our own biometric data had also been requested by the Ministry of Security.

The judiciary initially refused, for understandable reasons, to give us insight into the data records, because doing so would violate the personal rights of third parties. It was therefore only possible to inquire about our own personal data. So we started an inquiry by legal means, with the assistance of a lawyer, at the Contentious, Administrative and Tax Court of the Autonomous City of Buenos Aires, which was dealing with the case at the time. After several months, we received a first answer that did not help us: the court confirmed that we were both in the database but claimed to lack the technical expertise to provide the date and time of the searches.

However, it was precisely this information (date and time) that we needed to confront the Ministry of Security: Why was our data retrieved? So we made a second inquiry, this time to a public prosecutor's office specializing in corruption cases, which is now also dealing with the issue. The prosecutor's office not only confirmed that our personal data had been requested; it also informed us of the date and time of the requests.

The next question was: Why did the city request our data? We first tried to get a response from the Ministry by contacting the press office. As we didn't receive any answer, we filed a FOIA request. However, the city refused to answer (December 2023), citing ongoing legal proceedings. We have objected and are waiting for a new response.

This shows that time constraints can be a challenge. Any information that public authorities do not want to give out takes ages to obtain. This applies to requests regarding algorithms (some things you never get) as well as to classic reporter questions. Only after almost a year did we get confirmation that our biometric data had been requested by the city of Buenos Aires.

All over the world, lawmakers are wondering: How can facial recognition be adequately regulated? The Buenos Aires case clearly shows that good legislation is not enough. Functioning controls are needed. Otherwise, facial recognition can become a dangerous surveillance tool.

Finally, AI stories are definitely harder to place in the media than stories on other topics, basically for the same reason that makes covering AI difficult: AI is hard to grasp, for readers and for editors who aren't specialized in tech stories. The Latin American continent also faces problems that at first glance are more urgent and easier to place in magazines: economic crisis, migration, poverty, crime, violence, and corruption. AI reproduces many of these factors, but its effects are more hidden from the eye, which makes it a bigger challenge to publish such stories for a broad audience. AI stories tend to appear in technology sections, but, especially when they touch the public interest, this can be a missed opportunity: readers who aren't interested in tech may never see the story because they simply don't click on articles in the tech section. Yet AI has long been present in all areas of life and will continue to be.
