In September 2021, whistleblower reports about social media platforms’ use of artificial intelligence (AI) to promote certain platform content over other content raised critical questions about the relationship between AI algorithms and corporate liability standards. Facebook has consistently claimed that AI is an “efficient” and “proactive” means of stopping hate speech and other problematic content on its platform. However, internal documents reveal that AI removes less than ten percent of harmful content, such as hate speech and misinformation, from the platform.