What you say in your home, what you do in your home. It doesn’t get more private than that. But, according to two recent FTC complaints, Amazon and Ring used this highly private data – voice recordings collected by Amazon’s Alexa voice assistant and videos collected by Ring’s internet-connected home security cameras – to train their algorithms while giving short shrift to customers’ privacy. These matters, the first announced since the FTC’s new Biometric Policy Statement, contain important lessons for companies using AI, biometric data, and other sensitive information.
AI and privacy should work hand-in-hand. In this age of AI, developers want more and more data – oftentimes, no matter its source. But be careful when collecting or keeping consumer data. Under Section 5’s unfairness standard, the FTC doesn’t look just at AI’s potential benefits, but also at the costs to consumers. According to the complaints, Amazon and Ring failed that test. The FTC alleged Ring’s data access practices enabled spying and harassment, while Amazon’s permanent retention of voice data and shoddy deletion practices exposed consumers’ voice recordings to the risk of unnecessary employee access. The message for businesses: The FTC will hold companies accountable for how they obtain, retain, and use the consumer data that powers their algorithms. As the Commissioners put it in their joint statement in the Alexa matter, machine learning is not a license to break the law.
Consumers – not companies – control their data. Some companies think they’re free to use personal data in their possession for any purpose they choose. Not so fast. The FTC complaints against Amazon and Ring make clear that companies that ignore consumers’ rights to control their data do so at their peril. In its complaint, the FTC says Ring gave all employees and contractors access to customers’ videos to train algorithms (among other things) with only check-the-box “consent.” But that’s not enough to ensure that users are really in control of what happens to their information. And in the Amazon complaint, the FTC says Amazon undermined parents’ rights under the Children’s Online Privacy Protection Act (COPPA) Rule to delete their children’s voice recordings. Parents have the right under the COPPA Rule to decide what data about their children is stored by a company, and what data is deleted. The upshot is clear: Any company that undermines consumer control of their data can face FTC enforcement action.
Place special safeguards on human review and employee access to sensitive data. AI developers often rely on human reviewers to tag and annotate the data that trains machine learning algorithms. But do consumers know when their data is under review? In its complaint, the FTC says Ring hid this review from its customers and let reviewers abuse their access to consumers’ videos. As a result, Ring’s customers – who bought Ring’s products for more security – ended up being the target of Ring employees’ spying and surveillance. The Amazon complaint also says that Amazon didn’t use appropriate controls to limit which employees could access Alexa users’ voice recordings, so thousands of employees had access to sensitive voice recordings that they didn’t need. Companies relying on human review are on notice that safeguards for sensitive data, including strict access controls, can’t be an afterthought. They should be the first step.
The FTC protects biometric data. Last month, the FTC issued a policy statement on the protection of biometric data. That statement explains that biometric data – whether fingerprints and iris scans or videos and voice recordings – deserves the utmost protection because of its inherent sensitivity and the potential for bias, discrimination, and other harmful uses. The FTC’s settlements with Amazon and Ring underscore that when the FTC says protecting biometric data is a priority, it means what it says – and the Commission will back up that policy with enforcement action.
The FTC uses every tool available to protect kids’ privacy. After a series of enforcement actions about kids’ and teens’ privacy (think Microsoft, Epic Games, Edmodo, Weight Watchers (Kurbo), and Chegg), it should be clear that protecting kids is a top FTC priority. That’s especially true at the intersection of AI and kids’ and teens’ privacy. In the Amazon complaint, the FTC says Amazon was keeping kids’ voice recordings (both audio files and transcripts) permanently and undermining parents’ deletion rights. According to the complaint, Amazon could then use that data for natural language processing. In the Ring complaint, the FTC describes Ring’s cavalier approach to privacy and security, notwithstanding the fact that its cameras were marketed to watch over kids’ bedrooms. The FTC’s response? No dice. The FTC will use every available tool – including the COPPA Rule and the FTC Act’s prohibitions on deceptive and unfair practices – to protect kids’ privacy.
Want to keep your algorithms and data products? Get the data lawfully. With Ring and Alexa, as well as Kurbo, Cambridge Analytica, and Everalbum, the FTC has obtained numerous orders requiring companies to delete data and delete or refrain from generating data products, like algorithms, models, and other tools derived from ill-gotten data. These actions make clear that there are no free passes for data abuse. If you illegally obtain or misuse consumer data, you may well pay with your data product.
Has Amazon or Ring taken any corrective action since the findings? If not, then how does FTC plan on bringing Amazon and Ring into compliance?
Google has done a similar thing. If you have the Google app, it will use your microphone without your permission, even if microphone access isn't enabled in your phone's privacy settings.
More talk and zero actions, as always.
The FTC can write as many "strongly worded letters" as it wants; these statements are not relevant. The AI companies have made it very clear to everyone that they do not care, and will continue their strip-mining of people's privacy, intellectual property, and human rights without any concern.
Look, these companies are earning billions with their unethical practices, and when they do get caught, it's only four years later, after a glacially slow legal process. And the only punishment they ever get is a few million in fines, which is trivial pocket change to them.
I'm more and more convinced each month that the government and all regulatory bodies like the FTC here are just on the side of AI companies and want their piece of the pie too.
Hollow statements with no meaningful action, like this one, are just designed to make it look like they are doing something in order to keep the public quiet...
If you really care about protecting customers, why is there no investigation for fraud, theft, and IP infringement against OpenAI, Microsoft, Google, Midjourney, and all the others?
Everyone can see the copyright infringement they are committing in plain view. If the FTC actually cared, these investigations would have started six months ago...