A creature is formed of clay. A puppet becomes a boy. A monster rises in a lab. A computer takes over a spaceship. And all manner of robots serve or control us. For generations we’ve told ourselves stories, using themes of magic and science, about inanimate things that we bring to life or imbue with power beyond human capacity. Is it any wonder that we can be primed to accept what marketers say about new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence (AI)?
And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. But the fact is that some products with AI claims might not even work as advertised in the first place. In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.
When you talk about AI in your advertising, the FTC may be wondering, among other things:
Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology? For example, we’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.
Are you promising that your AI product does something better than a non-AI product? It’s not uncommon for advertisers to say that some new-fangled technology makes their product better – perhaps to justify a higher price or influence labor decisions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.
Are you aware of the risks? You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.
Does the product actually use AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.
This message is not new. Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.
The purpose of this blog and its comments section is to inform readers about Federal Trade Commission activity, and share information to help them avoid, report, and recover from fraud, scams, and bad business practices. Your thoughts, ideas, and concerns are welcome, and we encourage comments. But keep in mind, this is a moderated blog. We review all comments before they are posted, and we won’t post comments that don’t comply with our commenting policy. We expect commenters to treat each other and the blog writers with respect.
- We won’t post off-topic comments, repeated identical comments, or comments that include sales pitches or promotions.
- We won’t post comments that include vulgar messages, personal attacks by name, or offensive terms that target specific people or groups.
- We won’t post threats, defamatory statements, or suggestions or encouragement of illegal activity.
- We won’t post comments that include personal information, like Social Security numbers, account numbers, home addresses, and email addresses. To file a detailed report about a scam, go to ReportFraud.ftc.gov.
We don't edit comments to remove objectionable content, so please ensure that your comment contains none of the above. The comments posted on this blog become part of the public domain. To protect your privacy and the privacy of other people, please do not include personal information. Opinions in comments that appear in this blog belong to the individuals who expressed them. They do not belong to or represent views of the Federal Trade Commission.
Thanks, guys... I didn't understand what was happening around me... I haven't known what to do for a month now... But now I'm breathing well, and I realize you people are trying to help me with this 😌😌... Thanks
Your comments above are so well stated. Thank you for having the clarity and courage to say this, and for saying it so well. Your several brief paragraphs are a great example of what is needed for "Responsible AI."
In reply to Your comments above are So… by Steven Miller …
This was almost too... well... said... maybe they used a chatbot to edit? Anyway, if we're talking about AI or machine learning... I'd expect the SEC, FBI, CIA, and especially the FTC to have the most advanced tools to vet and locate rule breakers. OpenAI has to be under the microscope...
This is so true and applies to all marketing practices, i.e., making trendy promises for the sake of lead gen...
This article applies to all the trendy, impressive-sounding KPI messages marketing people put out: marketing claims should be backed and supported by evidence and proof points.
Fed up with "fluffy marketing"!
This article rubbed me the wrong way when I first read it; I had to let it sit for 24 hours and think about it. As a business development professional who works for a small company bringing AI-enabled software to USG customers, I found myself being defensive as I read the article. After letting those initial feelings subside, I think it is a great article, and I have sent it to several DOD friends.

There is an explosion of companies who 'are doing AI'. I see it at all the trade shows: companies that four years ago were skeptical of AI technology now have AI sprinkled on everything they do. I went through your questions, and our process conforms to them; we say that AI isn't magic, it is math. I completely agree that movies and media have caused an unfortunate and exaggerated view of AI. I tell DOD and USG folks to educate themselves on AI/ML. There are tons of YouTube videos on it, some simple books to read, podcasts, etc. Know what it is and what it isn't. Like any product, you have to do your research. After that and some critical thinking, folks can ask the questions you laid out above and make their judgments.

I would beg this of the FTC: please apply these principles across the board, to both large and small companies. The most prevarication I have witnessed in the last five years (three in a DOD uniform and two in the AI industry) has come from very large, well-established, well-connected companies. These are the same companies that only adopted AI-enabled technology once it became a buzzword, and a lot of what I have seen wouldn't pass your four-question metric above. Some of us are actually doing AI/ML, and it is a game changer; it just isn't going to be a panacea, just tools to make sense of data and make things quicker and easier. Until the killer robots come...
Wow, just noticed I have to provide labeled data for CV ML to post this. The irony.
Good on you! Thanks!
If we know of a company that is lying to the public about its AI capability, is there a way to report that?
Why aren't you going after "Full Self Driving" and "Auto Pilot" used by Tesla? Clear case of over promising.
It is good that there seems to be some "pushback" against the mindless competition that is prematurely unleashing the "AI" Pandora's box upon the world.
Corporate greed should not override the current state of dangerous affairs. Investors’ appetite for high-tech driven frenzy should not override balanced caution and common sense.
How can a TEMPORARY PAUSE be put on a technology that is being unleashed faster than high-tech firms can safely deploy / control it?
I feel you folks (at the FTC) have the moral high ground to put a TEMPORARY PAUSE on the premature unleashing of what some knowledgeable people predict could, in the long run, result in huge damage to the human race.
AI is just a magnification of, and catalyst for, human misbehavior. Bigotry, over-aggressiveness, conflicting ideologies, partisan beliefs, historical revisionism, and questionable "facts" were all used as input. GIGO is a term we use in tech: Garbage In, Garbage Out. Not that all human knowledge sucked in by AI is "garbage," but someone must be able to make distinctions and tell it what is moral and what is not, what is a fact and what is not, what is intentional misinformation and what is not, what is humor or sarcasm and what is not, etc.
Unless they figure out WHO IS GOING TO DO THAT IN THE FUTURE, won't this whole technology continue to go off the rails and be very dangerous in the wrong hands?
Before making AI widely available, humans will need to be able to peer into what is now not transparent, the invisible gut. We need to be able to fully account for, and show and trace, the logic (i.e., the sources used) behind morality, decency, ethics, and the general good. Or can all this simply be ignored?
Are you folks coordinating with the EU on all this? I believe they want to make distinctions on the danger level of various deployment “aims” and “scope” of the AI implementations.
I want all involved who are grappling with this "tiger by the tail" (developers, regulators, investors, etc.) to remember what the Robot from Lost in Space said . . . DANGER, WILL ROBINSON, DANGER!
In reply to It is good, that there seems… by Peter Halas
Unless you're a World Government of some sort you can't put a pause on AI (or any other) development, even if you could do so just within the U.S. (and that's not even practical). If there's perceived to be a market for this stuff someone will develop it, if not here in the U.S. then elsewhere. Trying to steer the inevitable ongoing development is likely our best hope.
AI is just a computer program: hundreds of software engineers scan instruction books on a topic and write code, in a flowchart, for every scenario. It's the same as how you study and recall what you learned, broken down to its basics. You know why AI autonomous driving will never happen? Just one word separates an 18-year-old driver (which is what AI will always be compared to) from an experienced older driver, and that word is "anticipation." Humans are proactive; computers are reactive, working from one-dimensional cameras and sensors. You are driving down a road and see a guy waiting to cut across to get to the other lane. You see him look the other way at a whole line of cars coming, so you slow down, because you know he was inching forward. And yes, he cuts across. "Anticipation." Same with the cars in front of you: when you turn right, you can see ahead that traffic is all stopped, but AI accelerates until the camera tells it the cars are all stopped. It can only see to both sides and in front, not a car on the side facing the street that is about to pull out. To AI that is a stationary object; we see that its path is about to cross ours. You can't write software for this, because sensors can't see past the car in front. We can.
In reply to A.I is just a computer… by Tommy
Tommy, AI (or machine learning) may have many flaws, but it is not what you describe. The reason it is a new field is that it explicitly is not conventional software where engineers write code for every scenario. From the Wikipedia page on machine learning: "Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so."
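That distinction can be sketched in a few lines of Python. In this toy example (the dataset and function names are mine, purely for illustration, not from any particular library), nobody hard-codes the rule relating x to y; the program estimates it from sample data using ordinary least squares, then uses the learned model to make predictions:

```python
def fit_line(xs, ys):
    """Learn slope and intercept from training data (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data generated by a "hidden" rule (here, y = 2x + 1) that
# the programmer never writes into the model.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)

def predict(x):
    """Predict y for a new x using the learned model."""
    return slope * x + intercept
```

The engineer writes the learning procedure, not the scenario-by-scenario rules; the rules come out of the data.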
This was written by a lawyer? With all due deference to the high quality and readability of the text, one is left wondering if there was chatbot assistance in smoothing the sharp edges of law-speak into the eminently readable prose of the article. One could almost be excused for missing the very real-world consequences of misrepresentation while drifting through a pleasant read.
Very interesting, but nothing sensible.
The only issue I have here is that "AI" is a very general umbrella term that, in the tech world, has always meant a computer automating tasks a human would do, with some level of decision making. That encompasses rules-based systems. It is different from "ML," which is much more specific and, I think, probably what you're targeting.