The FTC’s Tech Summit on AI1 convened three panels that highlighted different layers of the AI tech stack: hardware and infrastructure, data and models, and consumer-facing applications. This third Quote Book is focused on consumer-facing applications. This post outlines the purpose of the quote book, a summary of the panel, and relevant topics and actions raised by the FTC.

Purpose of the quote book

A key component of the FTC’s work is to listen to the people on the ground who experience or have knowledge of the effects of innovation in real time—the engineers building next-generation cloud-computing platforms, the data scientists training AI models, the investigative journalists reporting on the marketplace, or the startups building applications to improve consumers’ lives. As policymakers debate the benefits or risks of new technologies, these voices can sometimes be lost in otherwise dense technical, policy, or legal discussions. The FTC’s tech summit is one component of our effort to listen and engage with a variety of perspectives. 

The Quote Book compiles quotes from the participants, aggregated into common themes. It is a resource to distill various perspectives on topics, from ways to enable competition and innovation, to potential consumer concerns like deceptive marketing and privacy risks.

Overview of the panel on consumer-facing applications

In the session, panelists discussed the factors involved in building a model, along with the risks and benefits of AI-enabled products and services. Some addressed more nuanced topics, such as norms of tech product design and deployment, including how products are being deployed to hundreds of millions of users with known harms and without incentives for companies to mitigate risks upfront. In addition, panelists noted that end-user AI applications can create harmful outcomes stemming from data collection, sharing, use, and monetization tactics; discriminatory algorithms; and security practices.

The panelists shared that companies may be employing marketing tactics such as ill-defined "AI Safety" or "Privacy Enhancing" labels to falsely build trust with consumers, and that these terms should not serve as a blanket shield for breaking the law.

The topics discussed in the panel are not new for the FTC. The agency has a track record of addressing consumer-facing harms due to AI-generated technologies. Below are some highlights of the FTC’s work making clear that there is no AI exemption from the laws on the books.2

Quietly changing terms of service agreements could be unfair or deceptive.3 Companies developing AI products possess a continuous appetite for more and newer data. They face a potential conflict of interest to turn the abundant flow of user data into more fuel for AI products while maintaining their commitments to protect users’ privacy. Companies might be tempted to resolve this conflict by simply changing the terms of their privacy policy so that they are no longer restricted in the ways they can use their customers’ data. And to avoid backlash from users who are concerned about their privacy, companies may try to make these changes surreptitiously. But market participants should be on notice that any firm that reneges on its user privacy commitments risks running afoul of the law. It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy. 

Model-as-a-service companies that deceive users about how their data is collected may be violating the law.4 This includes promises made by companies that they won’t use customer data for secret purposes, such as to train or update their models—be it directly or through workarounds. In prior enforcement actions, the FTC has required businesses that unlawfully obtained consumer data to delete any products—including models and algorithms5 developed in whole or in part using that unlawfully obtained data. The FTC will continue to ensure that firms are not reaping business benefits from violating the law. 

Claims of privacy and security do not shield anticompetitive conduct.6 The FTC will closely scrutinize any claims that competition must be impeded to advance privacy or security. In the face of concerns about anticompetitive conduct, companies may claim privacy and security reasons as justifications for refusing to have their products and services interoperate with other companies’ products and services. As an agency that enforces both competition and consumer protection laws, the Commission is uniquely situated to evaluate claims of privacy and data security that implicate competition.

To that end, the FTC aims to ensure that agency staff’s skillsets and knowledge are keeping pace with evolving markets. We plan to continue to hear and learn from players across the AI ecosystem through various forums like the recent FTC Tech Summit on AI and will continue to use our existing legal authorities to address harms.  

“We similarly recognize the ways that consumer protection and competition enforcement are deeply connected—with firms engaging in privacy violations to build market power and the aggregation of market power, in turn, enabling firms to violate consumer protection laws. And our remedies will continue requiring that firms delete models trained on unlawfully acquired data in addition to the data itself,” FTC Chair Khan recently said in her remarks.7 