This blog is part of a series authored by the FTC’s Office of Technology focused on emerging technologies and consumer and market risks, with a look across the layers of technology—from data and infrastructure to applications and design of digital systems.

Over the last several years, artificial intelligence (AI)—a term which can refer to a broad variety of technologies, as a previous FTC blog notes—has attracted an enormous amount of market and media attention. That’s in part because the potential of AI is exciting: there are opportunities for public progress by enhancing human capacity to integrate, analyze, and leverage information. But it’s also, perhaps in larger part, because the introduction of AI presents new layers of uncertainty and risk. The technology is altering the market landscape, with companies moving to provide and leverage essential inputs of AI systems, such as data and hardware, opening a window of opportunity for companies to potentially seize outsized power in this technology domain. AI is also fundamentally shifting the way we operate; it’s lurking behind the scenes (or, in some cases, operating right in our faces) and changing the mechanics by which we go about our daily lives. That can be unsettling, especially when the harms brought about by that change are tangible and felt by everyday consumers.

With the flurry of AI products deployed to hundreds of millions of people and reports about the potential harms of AI, the FTC is interested in understanding what consumers are concerned about and what harms they are experiencing. One way we go about that is by looking at the FTC’s Consumer Sentinel Network.[1] We queried Sentinel using search terms we thought could best capture AI-related interactions in the marketplace[2]—returning thousands of submissions from the past 12 months alone.
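To make that kind of query concrete, below is a minimal sketch of a keyword filter over complaint narratives. It is purely illustrative: Sentinel’s actual schema and interface, and the specific search terms the FTC used, are not public, so the field names and terms here are assumptions.

```python
# Hypothetical sketch of a keyword query over consumer complaints.
# Sentinel's real schema and the FTC's actual search terms are not
# public; the terms and field names below are illustrative only.
import re

AI_TERMS = [
    "artificial intelligence", "AI", "chatbot", "voice clone",
    "deepfake", "machine learning", "large language model",
]

# One case-insensitive pattern with word boundaries, so a short term
# like "AI" does not match inside unrelated words (e.g., "email").
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in AI_TERMS) + r")\b",
    re.IGNORECASE,
)

def mentions_ai(narrative: str) -> bool:
    """Return True if a complaint narrative mentions any AI-related term."""
    return PATTERN.search(narrative) is not None

# Toy stand-in for a complaint database (assumed structure).
complaints = [
    {"id": 1, "narrative": "A chatbot gave me a scammer's phone number."},
    {"id": 2, "narrative": "My package never arrived."},
]
ai_related = [c for c in complaints if mentions_ai(c["narrative"])]
print([c["id"] for c in ai_related])  # -> [1]
```

As footnote [2] notes, any such term list is imprecise: reports matching these keywords may describe a wide range of software, which is why this review focuses on consumer concerns rather than the underlying technology of each complaint.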

The bottom line? Consumers[3] are voicing concerns about harms related to AI, and their concerns span the technology’s lifecycle, from how it’s built to how it’s applied in the real world. In this piece, we summarize a few key areas of harm we reviewed.

It’s worth noting that these findings are a sampling, and not necessarily representative. This is not a deep dive into the technical details or underlying facts of each complaint we have received that cites AI or related technologies. Rather, we aim to get a pulse on what consumers are worried about or are experiencing in the marketplace.

Concerns About How AI is Built: Data, data, and more data

Today’s leading AI models require massive amounts of data for training. The power and breadth of these models have implications for both consumer protection and competition. Consumers highlighted two areas of concern in Sentinel, touching on risks to consumer privacy as well as risks from the consolidation of power—in the form of the aggregation of data and control over access to these AI tools:

  • Copyright & IP. As the FTC has discussed previously, some reported concerns about copyright infringement stem from the scraping of data from across the web. Many consumer reports expressed concern that content they post to the web may be used, without their consent, to train models that could later supplant their ability to make a living creating content and contribute to large firms’ advantages. The agency continues to do outreach to understand how this issue impacts communities.
  • Biometric and personal data. Again mirroring a past FTC warning, other reports mention biometric data, particularly voice recordings, being used to train models or generate “voice prints” (the equivalent of a fingerprint, but for the unique characteristics of a person’s speech). One consumer expressed reservations about continuing a customer support call after hearing a message indicating the call could be recorded, fearing the recording could then be used to train an AI on their voice.

Concerns About How AI Works and Interacts with Users: Bias, inaccuracies, and, um, can we talk to a human please?

AI models are susceptible to bias, inaccuracies, “hallucinations,” and poor performance. At the end of the day, an AI model’s accuracy depends on a number of factors, including the input data, training techniques, and context of deployment. Further, companies design applications to be efficient (using fewer resources while yielding more output) in order to optimize for scalability and profit. This often means reducing the number of humans involved, leaving consumers to engage with their AI replacements. In Sentinel, consumers voiced concern and frustration about both of these elements:

  • Bias and inaccuracies. Some reports cite the biases of facial recognition software, including customers being unable to verify their identity because of a lack of demographic representation in the model. In another report, a consumer says they asked a chat-based generative AI interface for the customer service phone number of the bank that issues their credit card and instead received the number of a scammer pretending to be the bank. This is something we have been tracking at the FTC: in a 2022 report to Congress, our agency warned of harms from bias and inaccuracies in AI showing up in products.
  • Limited pathways for appeal and bad customer service (AKA can we talk to a human please!?). Another frequently cited concern is the limited pathways to appeal decisions made by products using AI. In one report, a contractor for a delivery platform said they had trouble reaching a real person after an algorithm decided to kick them off the platform. There are also many reports from regular users of products who believe they were mistakenly suspended or banned by an AI without the ability to appeal to a human. Finally, there are numerous complaints from consumers who are unable to reach a human to resolve customer service issues or end subscriptions and are stuck trying to communicate with AI-powered service bots.

Concerns About How AI is Applied in the Real World: Misuse, fraud, and scams

With the increasing sophistication of large language models, image generation systems, and more, it is becoming harder to distinguish human from machine. AI products could be used by malicious actors to increase the scale or sophistication of existing scams, another issue the FTC has written about before.

And in the world of online fraud, that may make common cybersecurity tips less effective for most consumers. Consumers submitted a number of complaints about scams and fraud they believed may have been powered by AI:

  • Scams, fraud, and malicious use. Some reports worry that phishing emails will become harder to spot as scammers start to write them with generative AI products and previously tell-tale spelling and grammar mistakes disappear. Others raise concerns about how generative AI can be used to conduct sophisticated voice-cloning scams, in which family members’ or loved ones’ voices are used for financial extortion. Some even say they’ve already experienced this themselves.

    Similarly, romance scams and financial fraud could be turbo-charged by generative AI, as scammers use chatbot products to communicate with more people at a lower cost. Many reports described being tricked by such scams and expressed a belief that the messages originated from an AI model.

We’re Keeping an Eye Out

The FTC is keeping a close watch on the marketplace and company conduct as more AI products emerge. We are ultimately invested in understanding this new technology as it reaches consumers and in applying the law. In doing so, we aim to prevent harms consumers and markets may face as AI becomes more ubiquitous.

----

Thank you to reviewers of this post: Paul Witt, Vincent Law, Maria Mayo, David Koh, Stephanie Nguyen, Monica Vaca, Sam Levine, John Newman, Josephine Liu.


[1] The Consumer Sentinel Network, often referred to as Sentinel, is a tool that aggregates consumer complaints from data contributors, including the FTC’s own fraud reporting website, and makes the reports available internally and to a network of law enforcement partners.

[2] Identifying the particular technology powering an interaction can be difficult even for experienced technology professionals. This is true for artificial intelligence, especially because it does not have a single, broadly agreed-upon definition. This means technologies described as “AI” in Sentinel may encompass a wide range of different types of software. Since this project is primarily interested in an overview of consumer concerns related to AI, we do not attempt to investigate the precise nature of the software a complaint references to determine whether it fits within a common definition.

[3] Sentinel contains complaints made to the FTC and a network of national and international data contributors.
