It’s a ritual we’ve all grown accustomed to: something needs a software update to repair security flaws. Traditionally, it’s been our computer; increasingly, it’s our smartphones or their apps. In the not very distant future (possibly now, for some of us), it will be our printers, our thermostats, our cars, our “anything that uses software”—and that will be more or less everything. WiFi-controlled light bulbs are already on sale in some countries; if it’s WiFi-controlled, it may be Internet-accessible and some day in need of security patches. Such patches don’t create themselves; it’s worth stepping back and looking at the process, which in turn helps explain why some companies are so much better at it than others.
The first step is remarkably hard: understanding that you have a problem. More precisely, it’s understanding that you’re in the networked software business, with all that implies, rather than in the phone, thermostat, printer, light bulb, or what have you business.
Rule #1: If something has software that can talk to the outside world, it can (and probably does) have security problems; this in turn means that remediation mechanisms are necessary.
The phrase “remediation mechanisms” covers a lot of ground. It means that you need, among other things: a process for reporting security problems (reports may come from your own developers, partner companies, academics, security researchers, and more); equipment and people to reproduce and analyze the problem; coders and testers to produce a fix; and a rapid, effective system for pushing the fix out to end users and helping them install it (saying “plug your Wi-Toaster into a Model 3.14159 Development Unit and Crumb Cleaner” won’t cut it). Most important, you need management energy behind all of this, to make sure it all works effectively.
Rule #2: Unless the security process is someone’s responsibility, it won’t work well (or possibly at all).
Mainstream software companies understand this; they’ve been through the wars. One can argue whether Microsoft’s “Patch Tuesday” or Apple’s “Good morning—here’s a security patch you should install immediately, even though it’s 3am on a Sunday” is the better model; either way, both companies understand that security flaws are not ultra-rare events that can wait for the next model of their products.
Companies new to the software world don’t always get this. With the possible exception of cars that may receive regular oil changes, most consumer products are “fire and forget”. Only expensive products, such as major appliances, are routinely repaired (or even repairable); improvements wait for the next model. That’s fine for routine matters; it may even be acceptable for dealing with occasional “inexplicable” outages. It’s a non-starter for most security holes, since those can become critical, recurring problems any time some attacker wants them to.
Rule #3: Bugs happen, ergo fixes have to happen.
In a technical sense, pushing the patch out to affected devices is often the hardest step. It’s relatively straightforward for today’s computers and smartphones; virtually all of them have frequent or constant connectivity. It’s less clear what to do about devices with, say, local-only connectivity. (Many Bluetooth devices fall into this category.) They can be attacked from nearby, perhaps via an infected laptop, but can’t always be patched that way.
In some markets, notably phones, no one party controls the patch deployment channel. With Android phones, for example, software—and hence fixes—can come from any of three parties: Google, the device manufacturer, or the wireless carrier. This, coupled with the comparatively short lifespan of many phones, has led to delays in patching and even out-of-date software being shipped with new devices. It’s easy to understand why this has happened; that said, it leaves most users without effective recourse. As we move towards complex service models—one company as the front end for another’s cloud-based system, running software from several different vendors—we’ll see more and more of this. Who is responsible for security patches? Who should be?
Rule #4: Own the patch channel.
That last point deserves a closer look. In a multivendor world, who should own the patch channel? There are two possible answers: the party with the ability to distribute patches, or the party the consumer will blame if something goes wrong. They’re not independent, of course; occasionally, they’re contradictory. In today’s world, phones are updatable only by the carrier (Apple iPhones are a notable exception), but will people blame the carrier or the manufacturer if there’s a security problem? Put another way—and arguably a more important way from a business perspective—if something goes badly wrong and consumers are angry enough to switch, will they switch carriers or brands of phone? The answer will vary across markets, and depend on things up to and including who has the better brand awareness; the answers, though, might help structure the contracts among the various parties.
The context here, of course, is the settlement just announced with HTC. That situation is in some ways a special case, in that the vulnerabilities were introduced by HTC. The problem, though, is broader and not limited to Android. Apple, for example, controls its own patch distribution for iOS; that’s good, but their approval system can slow down shipments of updated phone apps. In other words, their app vendors do not control their own patch channel. (I should note that patching isn’t the only security issue with smartphones. The FTC will be holding a Mobile Threats workshop on June 4 to discuss many other concerns as well.)
Embedded devices—the computers built into our printers, modems, thermostats, and more—are problematic in a different way. Vendors can prepare patches, but they often have no good way to notify users about the patch. Similarly, the device itself has no good way to inform its owners that it wants to be updated. (Quick: what should an online light bulb do? Blink SOS in Morse code?) Autoupdates are one answer, but if the vendor gets the patch wrong they’ve bricked the device, with all that implies.
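One common defense against bricking is an A/B partition scheme: the update is written to the inactive slot, verified, and trial-booted before the device switches over, so a failed update falls back to the old firmware. The toy Python model below sketches that idea under stated assumptions; the `Device` class, the image payloads, and the digest check are all illustrative, not any particular vendor’s update mechanism (real systems typically use public-key signatures rather than a bare hash).

```python
import hashlib

GOOD_IMAGE = b"firmware v2 payload"  # stand-in for a real firmware image
EXPECTED_DIGEST = hashlib.sha256(GOOD_IMAGE).hexdigest()

class Device:
    """Toy model of an A/B-partitioned device: one slot stays bootable
    while the other is flashed, so a bad update can be rolled back."""

    def __init__(self):
        self.slots = {"A": b"firmware v1 payload", "B": None}
        self.active = "A"

    def apply_update(self, image: bytes, expected_digest: str) -> bool:
        spare = "B" if self.active == "A" else "A"
        # Verify integrity before touching anything.
        if hashlib.sha256(image).hexdigest() != expected_digest:
            return False  # refuse a corrupted or truncated download
        self.slots[spare] = image
        # Simulated "trial boot": switch over only if the new slot checks out.
        if self._boot_ok(spare):
            self.active = spare
            return True
        # Roll back: the old slot was never overwritten, so no brick.
        self.slots[spare] = None
        return False

    def _boot_ok(self, slot: str) -> bool:
        return self.slots[slot] is not None and len(self.slots[slot]) > 0

dev = Device()
assert dev.apply_update(b"corrupted download", EXPECTED_DIGEST) is False
assert dev.active == "A"   # rollback: still running the old firmware
assert dev.apply_update(GOOD_IMAGE, EXPECTED_DIGEST) is True
assert dev.active == "B"   # update succeeded, new slot active
```

The design point is that the device is never in a state where its only firmware copy is unverified: the switch of the active slot happens last, after every check has passed.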
Consumers have to worry about such things, too. If you’re buying something, how will you be notified of security patches? How will you install them? For that matter, for how long will your vendor keep producing patches? Any time you’re running software that’s been “EOLed”—reached “end of life”—by the vendor, you’re taking a risk; there are almost certainly residual holes, but there won’t be new patches. You need to plan for this and upgrade your computers (and phones, and perhaps embedded devices) before that happens. (If you’re still using Windows XP, note that Microsoft says it will discontinue support on April 8, 2014.)
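At minimum, that planning means tracking vendor end-of-support dates. Here is a minimal sketch, assuming a hand-maintained table of announced EOL dates; the product names (other than Windows XP, whose date is stated above), the table itself, and the 180-day lead time are illustrative choices, not any standard mechanism.

```python
from datetime import date

# Hand-maintained table of vendor-announced end-of-support dates.
# The Windows XP date is Microsoft's announced one; the router entry
# is a made-up example.
EOL_DATES = {
    "Windows XP": date(2014, 4, 8),
    "ExampleRouter firmware 1.2": date(2013, 1, 1),
}

def needs_upgrade_plan(product: str, today: date, lead_days: int = 180) -> bool:
    """True once a product is past end-of-support or within lead_days of it,
    i.e. when it is time to schedule a replacement before patches stop."""
    eol = EOL_DATES.get(product)
    if eol is None:
        return False  # no EOL announced (or simply not tracked here)
    return (eol - today).days <= lead_days

# On 1 Dec 2013, XP support ends in roughly four months: start planning.
assert needs_upgrade_plan("Windows XP", date(2013, 12, 1)) is True
# In June 2013 it was still more than six months out.
assert needs_upgrade_plan("Windows XP", date(2013, 6, 16)) is False
```

The exact lead time matters less than having one at all: the point is to replace or upgrade before the patches stop, not after.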
Patching isn’t easy, but even in a world of 0-days, it’s still important. Vendors and consumers need to take it very seriously and understand how it will happen.
Original comments for “Shipping security”
June 16, 2013 at 8:24 am
Is there any chance at all that legislation could be passed which would PREVENT the use of software on some devices? Imagine the breadth and scope of a hacker with the ability to attack not only our computers, laptops, and smartphones (which is bad enough by itself) but also our light bulbs….
In addition, what happens if the product dies? Imagine the “red ring of death” that occurs on X-Box machines. Am I simply supposed to accept the fact that an item didn’t download something properly? Am I responsible for power outages? I simply can’t see the net benefit of putting EVERYTHING on a network.
The author’s views are his or her own, and do not necessarily represent the views of the Commission or any Commissioner.