There’s an Algorithm to Fight Online Extremism
Back in the early 2000s, the internet had a problem with child pornography. The United States hadn't anticipated the explosion of illegal images that came with the early internet. Tracking this illegal activity became much more difficult, and removing all trace of the images from the World Wide Web seemed nearly impossible. So government officials turned to Silicon Valley for help.
But technology companies dragged their feet. By 2008 little had been done to address online child pornography, until one tech giant, Microsoft, contacted Dartmouth College computer scientist Hany Farid.
Farid is an expert in photo forensics, techniques used most often to identify fake images. Together, Farid and Microsoft built a tool that identifies any image by a unique signature, like a photo fingerprint. With that signature, Microsoft could compare images, before they were posted on websites, to a database of nearly 30,000 images of child pornography catalogued by the National Center for Missing and Exploited Children. Farid tested the tool on just 10 images, guessing that it would take several weeks, if not months, to find a match. It took just four days. Technology companies began to adopt the tool over the next decade; Google finally signed on in 2014.
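The exact algorithm behind Microsoft's tool (PhotoDNA) is proprietary, but the general "fingerprint and compare" idea can be sketched with a much simpler perceptual hash. The sketch below uses an average hash, which is robust to small pixel-level changes, and checks candidate images against a hypothetical database of known fingerprints; all names and the example data are illustrative, not drawn from the real system.

```python
def average_hash(pixels):
    """Compute a simple perceptual hash of a grayscale image
    (given as a flat list of pixel brightness values): each bit
    is 1 if that pixel is brighter than the image's mean."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes; small
    distances mean visually similar images."""
    return bin(h1 ^ h2).count("1")

# Hypothetical "known images" database: fingerprint -> label.
original = [10] * 32 + [200] * 32          # toy 8x8 image, flattened
known_hashes = {average_hash(original): "flagged-001"}

def check_upload(pixels, threshold=5):
    """Flag an upload if its fingerprint is near any known hash."""
    h = average_hash(pixels)
    for known, label in known_hashes.items():
        if hamming_distance(h, known) <= threshold:
            return label
    return None

# A slightly re-encoded copy still matches; an unrelated image doesn't.
near_copy = [12] * 32 + [198] * 32
unrelated = [200, 10] * 32
```

Because the hash depends on relative brightness rather than exact bytes, minor recompression or brightness shifts leave the fingerprint nearly unchanged, which is what lets a lookup catch re-uploads of known images.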
Now Farid is ready to use this same technology to fight another internet spectre: terrorist messaging. According to the Counter Extremism Project (CEP), online terrorist videos and images play an important role in radicalizing extremists. But unlike images of child exploitation, the rules that govern what is and isn't terrorist messaging aren't as clear. Such messages aren't illegal under U.S. law, and some worry that an arbitrary definition of "terrorist messaging" would pave the way for censorship. Silicon Valley is again dragging its feet on a solution, even as evidence mounts that content hosted on tech companies' websites is in part responsible for recent acts of terrorism. Indeed, earlier this month two women filed a lawsuit against Twitter for assisting in the terrorist attacks that killed their loved ones in 2015 and 2016.
Farid joins Ira to discuss how photo forensics could curb terrorist messaging online. He is joined by Jillian York, the director for International Freedom of Expression at the Electronic Frontier Foundation, for a discussion about technology and the boundaries of censorship.
Update: Facebook got back to us after last week's show regarding terrorist messaging. They said they're partnering with Microsoft, Twitter and YouTube on this issue. You can read their press release here.
Hany Farid is a professor of computer science at Dartmouth College. He’s based in Hanover, New Hampshire.
Jillian York is the director for International Freedom of Expression at the Electronic Frontier Foundation. She's based in Berlin.
Katie Feather is a former SciFri producer and the proud mother of two cats, Charleigh and Sadie.