
Pulitzer Center Update October 5, 2023

Former Pulitzer Center Fellow Ari Sen Lectures on A.I.'s Benefits, Risks in Journalism

Demonstrators at UNC-Chapel Hill protest the Silent Sam statue in August 2018.

Journalist Arijit (Ari) Sen, left, speaks with a student after his lecture "AI and Journalism: Looming Threat or Massive Opportunity" in the Lewis and Clark room at South Dakota State University’s Student Union on Sept. 28, 2023. Image by Brayden Byers. United States.

This article was originally published in The Collegian on October 3, 2023.


Arijit (Ari) Sen encouraged using artificial intelligence to augment reporters, not to replace them, during his Sept. 28 lecture in the Lewis and Clark room of South Dakota State University’s Student Union.

Sen, an award-winning computational journalist at The Dallas Morning News and a former A.I. accountability fellow at the Pulitzer Center on Crisis Reporting, discussed ways A.I. can be used in journalism and the risks it poses.

“A.I. is often vaguely defined, and I think if you listen to some people, it’s like the greatest thing since sliced bread,” Sen said. He went on to quote Sundar Pichai, CEO of Google’s parent company, Alphabet, who has called A.I. the most profound technology humanity is working on.

According to Sen, A.I. is basically ‘machine learning’: teaching computers to use fancy math to find patterns in data. Once a model is trained, it can generate a number, make a prediction, or put things into categories.
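To make that train-then-predict pattern concrete, here is a minimal sketch, not from Sen’s lecture, using scikit-learn on invented data; the fitted model produces both a number (a probability) and a category:

```python
# A minimal sketch of "train a model, then use it to predict."
# All features and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Training data: each example is two numeric features plus a known label.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]  # 0 = "not relevant", 1 = "relevant"

model = LogisticRegression().fit(X_train, y_train)

new_example = [[0.7, 0.6]]
print(model.predict_proba(new_example))  # generates a number: class probabilities
print(model.predict(new_example))        # puts the example into a category
```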

Sen believes the more important questions to focus on are how A.I. is being used in the real world and what real harms people are experiencing from the technology.

“There is a really interesting thing happening right now. Probably since about 2015, A.I. is starting to be used in investigative journalism specifically,” Sen said, pointing to a story by the Atlanta Journal-Constitution (AJC) on doctors and sex abuse. Around 100,000 disciplinary complaints had been received against doctors; because of that volume, the AJC used a machine learning model to separate complaints related to sexual assault from those that were not.
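The AJC’s actual model is not public. Purely as a hedged illustration, a document-triage classifier of the kind Sen described might resemble this scikit-learn pipeline, which converts complaint text to TF-IDF features and scores new complaints for likely relevance (all documents and labels below are invented):

```python
# Hypothetical sketch of a complaint-triage classifier, in the spirit of
# the approach Sen described. All documents and labels are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "complaint alleges inappropriate contact with a patient",
    "complaint concerns improper billing practices",
    "board reviewed allegations of sexual misconduct",
    "physician failed to maintain accurate prescription records",
]
train_labels = [1, 0, 1, 0]  # 1 = related to abuse, 0 = not related

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# Score unread complaints so reporters can review likely matches first.
new_text = ["allegations of misconduct involving a patient"]
print(clf.predict(new_text))        # predicted category
print(clf.predict_proba(new_text))  # relevance score for ranking
```

The point of such a model is triage, not judgment: it ranks which of 100,000 documents a human should read first.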

Sen also explained the risks of A.I. technology and the questions to ask about the people behind a model: who labels the data, and what the model’s creators intend it to do.

“The other question we need to think about when working with an A.I. model is asking if a human could do the same thing if we gave them an unlimited amount of time on a task,” Sen said. “And if the answer is no, then what makes us think that an A.I. model could do the same thing?”

Sen further elaborated on A.I. bias and fairness with a case study of how Amazon scrapped a secret A.I. recruiting tool after it showed bias against women. Amazon had used its existing engineers’ resumes as training data; because most of those engineers were men, the model learned a bias against women and ranked them worse than male candidates.

“One of the cool things about A.I. in accountability reporting is that we’re often using A.I. to investigate A.I.,” Sen said as he dove into his major case study: Social Sentinel.

Sen described Social Sentinel, now known as Navigate360, as an A.I. tool used by schools and colleges to monitor social media for threats of suicide and shootings.

“Well, I was a student, just like all of you, at University of North Carolina at Chapel Hill (UNC) and there were these protests going on,” Sen said. “You know, I being the curious journalist that I was, I wanted to know what the police were saying to each other behind the scenes.”

Sen’s curiosity led him to file a batch of records requests, which initially returned around 1,000 pages. Among them he found a contract between his college and Social Sentinel, which made him wonder whether UNC was using a ‘sketchy’ A.I. tool. Sen landed an internship at NBC and wrote the story, which was published in December 2019.

“Around that time, I was applying to journalism grad school, and I mentioned this in my application at Berkeley,” Sen said. “I was like, this is why I want to go to grad school; I want two years to report this out, because I knew that straight out of undergrad no one was going to hire me to do that story.”

He recalled spending his first year doing a clip search on Social Sentinel and finding that no one was looking at colleges, which he said was ‘weird,’ since the company had been started by two college campus police chiefs. He spent the rest of the year calling colleges and writing story pitches.

In his second year at Berkeley, Sen was paired with his thesis advisor, David Barstow, and filed records requests all over the country, covering at least 36 colleges and every four-year college in Texas.

“We ended up with more than 56,000 pages of documents by the end of the process,” Sen exclaimed.

With the documents in hand, Sen built databases in spreadsheets and analyzed the Social Sentinel alerts, which arrived as PDFs. He then analyzed the flagged tweets for threatening content, filtering out punctuation and common filler words before counting which words appeared most often.

“You can see the most common word used was ‘shooting’ and you can see that would make sense,” Sen said. “But a lot of times ‘shooting’ meant like ‘shooting the basketball’ and things like that.”
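As a rough sketch of that word-count step, assuming nothing about Sen’s actual code, here is one way to tally the most common words after stripping punctuation and filtering out common filler words (the tweets are invented):

```python
# Sketch of the word-frequency analysis Sen described: strip punctuation,
# drop common filler words, then count what remains. Tweets are invented.
import string
from collections import Counter

tweets = [
    "He was shooting the basketball all night!",
    "Great shooting from the team in the second half.",
]
stopwords = {"the", "a", "an", "in", "all", "was", "he", "from", "and"}

counts = Counter()
for tweet in tweets:
    cleaned = tweet.lower().translate(str.maketrans("", "", string.punctuation))
    counts.update(word for word in cleaned.split() if word not in stopwords)

print(counts.most_common(3))  # e.g. [('shooting', 2), ('basketball', 1), ...]
```

A raw tally like this can mislead, which was Sen’s point: ‘shooting’ tops the count whether or not a tweet is a threat.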

With all this information acquired, Sen began speaking with experts, former Social Sentinel employees, colleges that used the service, and students and activists who had been surveilled.

Through this reporting, Sen arrived at three findings. The major one was that the tool was not really being used to prevent suicides and shootings but to monitor protests and activists. Second, Social Sentinel was trying to expand beyond social media. Lastly, there was little evidence the tool had saved lives, despite the company’s claims that it was working well.

Sen concluded with the story’s impact: other media outlets went on to publish pieces on A.I. monitoring of student activity, and UNC eventually stopped using the service.

According to Joshua Westwick, director of South Dakota State’s School of Communication and Journalism, the lecture was timely, especially given the growing conversation around A.I.

“Ari Sen’s lecture was both engaging and informative. The examples that he shared illuminated the opportunities and challenges of A.I.,” Westwick said. “I am so grateful we could host Ari through our partnership with the Pulitzer Center.”

Westwick further explained that the lecture was important for students and attendees as A.I. is present throughout many different aspects of our lives.

“As journalists and consumers, we need to understand the nuances of this technology,” Westwick said. “Specifically, for our journalism students, understanding the technology and how to better report on this technology will be important in their future careers.”

Greta Goede, editor-in-chief of The Collegian, described the lecture as one of the best she has attended. She said it was beneficial because Sen spoke about investigative journalism and how to track down key documents before writing a story.

“He (Sen) talked a lot about how to get data and how to organize it, which was really interesting to me since I will need to learn those skills as I go further into my career,” Goede said. “I thought it was a great lecture and enjoyed attending.”

RELATED INITIATIVES

AI Accountability Network

RELATED TOPICS

Governance

AI Accountability