
A common trope crossing the science fiction and mystery genres is a human detective paired with a robot. Think I, Robot, based on the novels of Isaac Asimov, or Mac and C.H.E.E.S.E., a show-within-a-show familiar to Friends fans. For our purposes, consider a short-lived TV series called Holmes & Yoyo, in which a detective and his android partner try to solve crimes despite Yoyo’s constant malfunctions. Let’s take from this example the principle – it’s elementary – that you can’t assume perfection from automated detection tools. Please keep that principle in mind when making or seeing claims that a tool can reliably detect if content is AI-generated.

[Image: AI and Your Business blog series]

In previous posts, we’ve identified concerns about the deceptive use of generative AI tools that allow for deepfakes and voice cloning and for manipulation-by-chatbot. Researchers and companies have been working for years on technological means to identify images, video, audio, or text as genuine, altered, or generated. This work includes developing tools that can add something to content before it is disseminated, such as authentication tools for genuine content and ways to “watermark” generated content.

Another method of separating the real from the fake is to use tools that apply to content after dissemination. In a 2022 report to Congress, we discussed some highly worthwhile research efforts to develop such detection tools for deepfakes, while also exploring their enduring limitations. These efforts are ongoing with respect to voice cloning and generated text as well, though, as we noted recently, detecting the latter is a particular challenge.

With the proliferation of widely available generative AI tools has come a commensurate rise in detection tools marketed as capable of identifying generated content. Some of these tools may work better than others. Some are free and some charge you for the service. And some of the attendant marketing claims are stronger than others – in some cases perhaps too strong for the science behind them. These tools may have other flaws, too, such as failing to detect images or video that a generative AI tool has only lightly edited, or showing a bias against non-English speakers when trying to detect generated text.

Here's what to deduce:

  • If you’re selling a tool that purports to detect generative AI content, make sure that your claims accurately reflect the tool’s abilities and limitations. To go back to our trope, for Knight Rider fans, that means your claims should be more in line with KITT and less with its bad twin, KARR.
  • If you’re interested in tools to help detect whether you’re getting the good turtle soup or merely the mock, take claims about those tools with a few megabytes of salt. Overconfidence that you’ve caught all the fakes and missed none can hurt both you and those who may be unfairly accused, including job applicants and students.

Wouldn’t it be nice to live in a techno-solutionist land in which a simple gadget could easily and effectively handle all the difficult AI issues of our day? No such luck. Our agency can address some of these issues using real-world laws on the books, though, and those laws apply to marketing claims made for detection tools.

Looking for more posts in the AI and Your Business blog series?
