
Today the Federal Trade Commission issued a report to Congress warning about using artificial intelligence (AI) to combat online problems and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and that they can incentivize reliance on increasingly invasive forms of commercial surveillance.

“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”

In legislation enacted in 2021, Congress directed the Commission to examine ways that AI “may be used to identify, remove, or take any other appropriate action necessary to address” several specified “online harms.” The harms that are of particular concern to Congress include online fraud, impersonation scams, fake reviews and accounts, bots, media manipulation, illegal drug sales and other illegal activities, sexual exploitation, hate crimes, online harassment and cyberstalking, and misinformation campaigns aimed at influencing elections.

The report warns against using AI as a policy solution for these online problems and notes that its adoption could also introduce a range of additional harms. Indeed, the report outlines several problems related to the use of AI tools, including:

  • Inherent design flaws and inaccuracy: AI detection tools are blunt instruments with built-in imprecision and inaccuracy. Their ability to detect online harms is significantly limited by inherent flaws in their design, such as unrepresentative datasets, faulty classifications, failure to identify new phenomena, and lack of context and meaning.

  • Bias and discrimination: In addition to inherent design flaws, AI tools can reflect the biases of their developers, leading to faulty and potentially illegal outcomes. The report analyzes why AI tools can produce unfair or biased results. It also includes examples of instances in which AI tools resulted in discrimination against protected classes of people or overblocked content in ways that can reduce freedom of expression.

  • Commercial surveillance incentives: AI tools can incentivize and enable invasive commercial surveillance and data extraction practices because developing, training, and using these technologies requires vast amounts of data. Moreover, improving the accuracy and performance of AI tools can lead to more invasive forms of surveillance.

Congress instructed the Commission to recommend laws that could advance the use of AI to address online harms. The report, however, finds that, given that major tech platforms and others are already using AI tools to address online harms, lawmakers should consider focusing on developing legal frameworks that would ensure that AI tools do not cause additional harm.

The Commission voted 4-1 at an open meeting to send the report to Congress. Chair Lina M. Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya issued separate statements. Commissioner Christine S. Wilson issued a concurring statement, and Commissioner Noah Joshua Phillips issued a dissenting statement.

The Federal Trade Commission works to promote competition and protect and educate consumers.  The FTC will never demand money, make threats, tell you to transfer money, or promise you a prize. Learn more about consumer topics at consumer.ftc.gov, or report fraud, scams, and bad business practices at ReportFraud.ftc.gov. Follow the FTC on social media, read consumer alerts and the business blog, and sign up to get the latest FTC news and alerts.

Contact Information


Staff Contact

Michael Atleson
Bureau of Consumer Protection