
Rite Aid has “used facial recognition technology in its retail stores without taking reasonable steps to address the risks that its deployment of such technology was likely to result in harm to consumers as a result of false-positive facial recognition match alerts.” That’s the lawyerly language of the FTC’s just-filed action against drug store chain Rite Aid and a subsidiary. Put in more common parlance, the FTC alleges that Rite Aid launched an inadequately tested and operationally deficient covert surveillance program against its customers without considering the impact that its inaccurate facial recognition technology would have on people wrongly identified as “matching” someone on the company’s watchlist database. Among other things, a proposed settlement in the case would ban Rite Aid from using any facial recognition system for security or surveillance purposes for five years.

From at least 2012 until 2020, Rite Aid used facial recognition technology in hundreds of its retail locations to “drive and keep persons of interest out of [Rite Aid’s] stores.” Most of those stores were in large urban areas. What’s more, the complaint alleges that Rite Aid didn’t tell consumers it was using facial recognition technology and specifically instructed employees not to reveal that fact to consumers or the media.

How did Rite Aid’s facial recognition system operate? Rite Aid supervised the creation of a “watchlist database” of images of people the company claimed had engaged in actual or attempted criminal activity at one of its stores. Called “enrollments,” these entries included – to the extent known – first and last names, years of birth, and a description of the behavior Rite Aid claimed the person in the photo had engaged in. Uploaded by in-store Rite Aid employees, the images were often low-quality – sometimes screenshots from closed-circuit TV or photos taken on employees’ cell phones. According to the complaint, Rite Aid directed store security to “push for as many enrollments as possible,” resulting in a watchlist database that included tens of thousands of people.

If someone who entered the store supposedly “matched” an image in Rite Aid’s watchlist database, employees received an alert on their company cell phones. Based in whole or in part on that alert, Rite Aid staff were directed to swing into action, with categories in the database dictating their response. According to the complaint, “A majority of Rite Aid’s facial recognition enrollments were assigned the match alert instruction ‘Approach and Identify,’ which meant employees should approach the person, ask the person to leave, and, if the person refused, call the police.” But the complaint alleges that in numerous instances, the match alerts that led to those actions were false positives – in other words, the technology incorrectly identified Rite Aid customers as people in the watchlist database.
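The complaint describes this workflow in operational rather than technical terms, but the enrollment-and-alert logic can be sketched roughly as follows. This is a hypothetical illustration only – the field names and the catch-all category are assumptions; only the “Approach and Identify” instruction and the kinds of information entries held come from the complaint.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Enrollment:
    # Hypothetical fields, mirroring what the complaint says entries contained
    name: Optional[str]       # recorded only "to the extent known"
    year_of_birth: Optional[int]
    alleged_behavior: str     # conduct Rite Aid claimed occurred
    instruction: str          # match-alert category, e.g. "Approach and Identify"

def respond_to_alert(entry: Enrollment) -> List[str]:
    """Steps staff were directed to take when a match alert fired."""
    if entry.instruction == "Approach and Identify":
        # Per the complaint, the most common category
        return ["approach the person",
                "ask the person to leave",
                "call police if the person refuses"]
    # The complaint indicates other categories existed but does not list them all
    return ["observe"]
```

Note that under this design, everything downstream of the alert hinges on the match being correct – which is exactly the assumption the complaint says Rite Aid had no reasonable basis to make.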

You’ll want to read the complaint for allegations about the considerable – and injurious – inaccuracies of the system, but here’s just one example. During one five-day period, Rite Aid generated over 900 separate alerts in more than 130 stores from New York to Seattle, all claiming to match one single image in the database. Put another way, Rite Aid’s facial recognition technology told employees that just one pictured person had entered more than 130 Rite Aid locations from coast to coast more than 900 times in less than a week. Giving a whole new meaning to the phrase “facially inaccurate,” Rite Aid allegedly used that information to expel consumers from its stores.
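Part of what makes numbers like these plausible is simple base-rate arithmetic: across hundreds of busy stores, even a small per-visitor false-match rate produces hundreds of bad alerts. Here is a back-of-the-envelope sketch – every figure in it is hypothetical, and none comes from the complaint:

```python
def expected_false_alerts(stores: int, visitors_per_store_per_day: int,
                          days: int, false_match_rate: float) -> float:
    """Expected false-positive alerts, assuming at most one alert per
    visitor and independent errors -- hypothetical numbers only."""
    screenings = stores * visitors_per_store_per_day * days
    return screenings * false_match_rate

# e.g. 130 stores, 1,000 visitors per store per day, one week,
# and a 0.1% per-visitor false-match rate:
print(expected_false_alerts(130, 1000, 7, 0.001))  # → 910.0
```

Even with an error rate that sounds tiny, a system screening every shopper at that scale would be expected to misfire hundreds of times a week – which is why accuracy testing and outcome tracking matter so much.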

Companies considering AI surveillance technologies or other biometric surveillance systems, take note. The FTC says that in deploying facial recognition technology in some of its locations, Rite Aid failed to take reasonable measures to prevent harm to consumers. Here are just some of the allegations in the complaint:

  • Rite Aid failed to consider the risks that false positives posed to consumers, including risks of misidentification based on race or gender. For a host of reasons outlined in the complaint, the FTC alleges that Black, Asian, Latino, and women consumers were at increased risk of being incorrectly “matched” with an image in the company’s watchlist database – leading to humiliating and injurious consequences. As the complaint charges in detail, “As a result of Rite Aid’s failures, Black, Asian, Latino, and women consumers were especially likely to be harmed by Rite Aid’s use of facial recognition technology.”
  • Rite Aid failed to test the system for accuracy.  According to the FTC, Rite Aid didn’t bother to ask its first facial recognition technology vendor whether the system had been tested for accuracy. In fact, Rite Aid deployed the technology despite the vendor’s express statement that it:


Before going to a second vendor, Rite Aid was allegedly aware of the problem of false positives and yet again didn’t ask for test results about the accuracy of that vendor’s system.

  • Rite Aid failed to enforce image quality controls.  As one vendor explained to Rite Aid, “The quality of the photos used for [facial recognition technology] is extremely important . . . Without good quality photos, an enrollment is not useful.” Aware of that warning, Rite Aid claimed to establish image quality standards. But according to the FTC, Rite Aid flouted its own policies by regularly using blurry, low-quality images taken in low light, increasing the likelihood of false positives.
  • Rite Aid failed to train its staff.  Rite Aid’s training focused on navigating the website to use the technology and uploading new enrollments. The complaint alleges that Rite Aid’s training materials either didn’t address the risk of false positives or covered the topic only briefly. Even when the company had evidence of the false-positives problem, the FTC says Rite Aid didn’t take reasonable steps to improve its training.
  • Rite Aid failed to monitor, test, or track the accuracy of results.  Even after the problem with false-positive matches became apparent, the FTC says Rite Aid didn’t adequately address the issue. As the complaint alleges, “In part because of Rite Aid’s failures to track, monitor, assess, or test its facial recognition technology, Rite Aid did not have a reasonable basis to believe that any given match alert was likely to be accurate. Nevertheless, Rite Aid continued to instruct store-level employees to take action against consumers on the basis of facial recognition match alerts.” Furthermore, the FTC says Rite Aid didn’t keep accurate records of the outcomes of alerts and didn’t track false positives.
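The kind of outcome tracking the FTC says was missing doesn’t have to be elaborate. As a purely illustrative sketch – not anything Rite Aid or the FTC describes – simply logging whether each alert was confirmed or turned out to be a false positive is enough to estimate whether alerts can be trusted:

```python
class AlertOutcomeLog:
    """Illustrative sketch of per-alert outcome tracking."""
    def __init__(self) -> None:
        # True = confirmed match, False = false positive
        self.outcomes: list = []

    def record(self, confirmed: bool) -> None:
        self.outcomes.append(confirmed)

    def false_positive_rate(self) -> float:
        """Share of logged alerts that were false positives."""
        if not self.outcomes:
            return 0.0
        return self.outcomes.count(False) / len(self.outcomes)

log = AlertOutcomeLog()
for confirmed in (True, False, False, False):
    log.record(confirmed)
print(log.false_positive_rate())  # → 0.75
```

A record like this also makes it straightforward to decide when alerts have become too unreliable to act on – the kind of determination the complaint says Rite Aid never had a reasonable basis to make.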

The complaint includes more examples of the impact Rite Aid’s failures had on people who shopped at its stores. To cite one instance, in May 2020 Rite Aid staff in The Bronx uploaded an image to the watchlist database. For the next several months, Rite Aid’s facial recognition technology generated over 1,000 match alerts for that one photo – nearly 5% of all match alerts generated by Rite Aid’s system during that period. What’s more, 99% of those match alerts came from the Los Angeles area. In fact, four of the match alerts told Rite Aid staff that the one person was spotted in both New York and California stores in the same 24-hour period. 

How were consumers injured by false-positive match alerts generated by Rite Aid’s facial recognition technology? According to the FTC, numerous consumers were mistakenly identified as shoplifters or wrongdoers. As a result, the complaint charges that Rite Aid surveilled them and followed them around the store; told them to leave without making purchases, including for prescription or over-the-counter medications; searched them; publicly accused them of being shoplifters and humiliated them in front of employers, coworkers, and family members, including their children; and called the police to confront or remove them – all based on facial recognition technology known to produce false positives and especially likely to result in inaccurate matches for Black, Latino, Asian, and women consumers. 

The FTC filed its lawsuit against Rite Aid in a Pennsylvania federal court. Count I of that complaint alleges that Rite Aid used facial recognition technology in its stores without taking reasonable steps to address the risks that its use would likely harm consumers due to false-positive match alerts, especially women consumers and consumers of color. Count II stems from a 2010 order that Rite Aid is already under that requires it to maintain a comprehensive information security program to protect consumers’ personal information. That count alleges that Rite Aid’s failure to maintain that required program is an unfair practice, in violation of the FTC Act.

The FTC says the failures in Rite Aid’s information security program were significant. The complaint cites a number of specific deficiencies and shortcomings, including:

  • Rite Aid failed to properly vet vendors that had access to consumers’ personal information.  Conducting an accurate and verifiable assessment of the data security capabilities of vendors is an essential part of any comprehensive information security program. According to the FTC, Rite Aid entrusted sensitive consumer data to vendors, including those the company deemed to be “high risk,” based just on conversations, rather than on a thorough evaluation of written materials and other documentation. Sure, talking things over can be part of an information security program, but it shouldn’t constitute the entire assessment process.   
  • Rite Aid failed to periodically reassess service providers’ data security practices.  Procedures and capabilities can change, so when sensitive data is at stake, assessing service providers isn’t a one-and-done task. The FTC says Rite Aid failed to conduct periodic reassessment to ensure that consumers’ information was safe in service providers’ hands – a key component of any comprehensive information security program. 
  • Rite Aid failed to include sufficient information security requirements in contracts with service providers. The FTC alleges that Rite Aid’s contracts with vendors lacked information security standards or included only minimal requirements. Enforceable contract clauses help protect consumers’ information when it’s in the hands of vendors or other third parties.

The proposed settlement would ban Rite Aid from using any facial recognition or analysis system for security or surveillance purposes at its retail stores or online for five years. In addition, the company would have to delete the photos or videos collected as part of the facial recognition system it operated between 2012 and 2020, as well as any data, models, or algorithms derived from those visuals.

The proposed settlement covers the company’s use of all automatic biometric security or surveillance systems, not just facial recognition and analysis systems. If the company uses any such automated system in the future, it must implement a monitoring program that requires sound technical and organizational controls. Among other things, the monitoring program must address the potential risks to consumers posed by any automatic biometric system the company may implement. You’ll want to read the proposed new order for specifics, but it would put broad provisions in place to ensure appropriate training, testing, and evaluation. Before deploying any automatic biometric security or surveillance system, Rite Aid will need solid proof that it’s accurate. And if Rite Aid has reason to believe at some point that the system’s inaccuracies contribute to a risk of harm to consumers, the company must shut the system down.

Furthermore, if Rite Aid has an automatic biometric security or surveillance system in place in the future, under the proposed order, it must give individualized, written notice to any consumer the company adds to its system and anyone it takes action against as a result. Rite Aid also would have to implement a robust consumer complaint procedure. In addition, the company would have to clearly disclose to consumers at retail locations and online if it’s using automatic biometric security and surveillance, and the notices must be placed where consumers can read them in time to avoid the collection of their biometric information.

Additionally, Rite Aid must implement a comprehensive information security program, obtain biennial assessments of that program from a third-party assessor, and provide an annual certification to the FTC from its CEO stating that Rite Aid is in compliance with the proposed order. You’ll want to read the proposed order for more about specific requirements.

Because Rite Aid is currently in bankruptcy, the proposed settlement is subject to the Bankruptcy Court’s approval.

Does your company use AI or other automated biometric surveillance technologies? The FTC’s action against Rite Aid demonstrates the need to test, assess, and monitor the operation of those systems and to ensure that their performance in real-world settings complies with consumer protection standards.



December 21, 2023


Hope Fern
January 02, 2024

I was falsely accused at my local Rite Aid publicly once by the manager and told to leave, then said I had already been kicked out once before but that never happened and I was

Eric Dennebaum
January 02, 2024

I have experienced on many occasions being what I perceive as over surveillance in some Retail and shopping stores. I have often wondered about the possibility of someone placing me on some watch list, aided by this technology, but not sure what steps to take.

I have actually heard some Employees discussing someone's face being recognized, as they appear to be on a state of alert, among other things I have observed in some stores.

Georgeanna Ramirez
February 05, 2024

I was falsely accused of shoplifting at Rite Aid. Had me fighting in court for two years until the charges were dropped. That caused me financial hardship and stress beyond belief.
