March 11, 2019

‘HATE IS WAY MORE INTERESTING THAN THAT’: WHY ALGORITHMS CAN’T STOP TOXIC SPEECH ONLINE

Researchers have recently discovered that anyone can trick hate speech detectors with simple changes to their language—and typos are just one way that neo-Nazis are foiling the algorithms.
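
The evasion described above can be illustrated with a toy example. The sketch below is a hypothetical keyword-based filter written in Python; it is not the detectors studied in the article, and the blocklist and sample phrases are illustrative assumptions. It shows how a single character change lets a flagged term slip past exact matching.

    # Minimal sketch (assumption: a naive keyword blocklist, not the
    # researchers' actual models) of how a simple typo evades detection.
    BLOCKLIST = {"hate", "slur"}  # hypothetical flagged terms

    def naive_detector(text: str) -> bool:
        """Flag text if any token exactly matches a blocklisted term."""
        tokens = (token.strip(".,!?") for token in text.lower().split())
        return any(token in BLOCKLIST for token in tokens)

    print(naive_detector("spreading hate online"))   # True: exact match is caught
    print(naive_detector("spreading h8te online"))   # False: one typo slips through

Learned classifiers are more sophisticated than a blocklist, but the article's point is that simple perturbations like this can still throw them off.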

Pacific Standard interviews Professor Jeremy Blackburn and NCRI Director Dr. Joel Finkelstein about the research and the algorithms we use to understand hate speech on social networks.

Read the article here
