Editor’s Note: As noted in a previous post, Tech@FTC is expanding to include posts by other technically minded staff at the Commission. This is the first in a series of blog posts by Nithan Sannappa, an attorney in the Division of Privacy and Identity Protection, that will explore several important issues regarding user privacy and security in mobile computing. The posts build on the Commission’s 2013 mobile security workshop, which convened four panels consisting of security researchers, academics, and industry representatives to engage in a wide-ranging conversation on the mobile threat landscape, industry efforts to secure the mobile ecosystem, and consumers’ mobile security expectations. The posts are intended to foster further discussion on these important topics among industry participants, academic and policy communities, and government regulators. They draw from the Commission’s previous work, academic literature, and research from the field.
Mobile devices today – always on, connected, and outfitted with a range of sophisticated sensors – are full-fledged computers capable of collecting, storing, and transmitting gigabytes of personal information. With most of the U.S. population owning a smartphone, the consumer experience is now decidedly “mobile first.” Application developers have been able to tap into these resources to provide consumers with a vast array of innovative and often personalized services, transforming American society in the span of less than a decade. Along with these capabilities, however, the mobile revolution has presented new challenges with respect to user privacy and security.
In this post, I’ll introduce a foundational concept in computer security known as the principle of least privilege, discuss how this principle has been applied to mobile operating systems, and take a look at three real-world examples that highlight how application programming interface (API) design can affect whether application developers adhere to this principle.
Today, consumers typically have a one-to-one relationship with their smartphone or other mobile device. That is, they generally do not share their smartphone with other users. In the early days of personal computing, however, multiple people often shared a single computer in places such as the office, the classroom, or the public library. Given these environmental conditions, desktop operating systems were primarily concerned with protecting a user’s files from the potentially prying eyes of other users. Operating systems included security features such as multiple login accounts to address this threat, but typically assumed that applications installed by the user could be trusted with global access to device resources, including the user’s personal information. With the rapid evolution of the internet and the spread of malware, it soon became clear that not all applications could be trusted. In designing the next generation of computing devices, modern operating system architects included advanced security features, such as “sandboxing,” to address the threats posed by untrusted applications.
In computing, a “privilege” is the right to perform an action, such as accessing a device resource. Sandboxing is an implementation of the principle of least privilege, which holds that “every program and every user of the system should operate using the least set of privileges necessary to complete the job.” By containing each application installed on a computer within its own isolated environment, or “sandbox,” the operating system can restrict an application’s access to only those resources necessary for its operation. In segregating an application and limiting its privileges in this manner, the operating system can contain the damage that a malicious or vulnerable application can inflict on the system as a whole. While sandboxing is not foolproof – software bugs may allow a malicious application to escape the sandbox – it presents a much stronger security architecture than existed in earlier generations of operating systems.
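The mechanics of least privilege can be sketched in a few lines of Python (a simplified, hypothetical model – the class and resource names here are illustrative and do not correspond to any real operating system API):

```python
class Sandbox:
    """Toy model of per-application privilege checks."""

    def __init__(self, app_name, granted_privileges):
        # The OS records the least set of privileges the
        # application needs to complete its job.
        self.app_name = app_name
        self.granted = frozenset(granted_privileges)

    def access(self, resource):
        # Every access is mediated: anything outside the grant
        # set is denied, containing the damage a malicious or
        # vulnerable application can do.
        if resource not in self.granted:
            raise PermissionError(
                f"{self.app_name}: access to {resource!r} denied")
        return f"{self.app_name}: accessed {resource!r}"


# A flashlight app needs the camera LED and nothing else.
flashlight = Sandbox("flashlight", {"camera"})
print(flashlight.access("camera"))
```

In this toy model, an application granted only the “camera” privilege can use the camera but would be denied access to, say, the user’s contacts – limiting what a compromised copy of that application could reach.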
Although nearly all modern mobile operating systems feature sandboxing, approaches vary with respect to how and when an application should be allowed to interact with resources outside of its sandbox, such as device capabilities (e.g., the device’s camera or microphone), user data (e.g., the user’s contacts or calendar), and other applications. Mobile operating systems provide developers with access to these resources through APIs. In designing APIs, operating system architects must decide which resources to open up to developers, the scope of access, and how users should be informed of that access, weighing factors including functionality, convenience, privacy, and security. Whether an operating system’s implementation of sandboxing truly adheres to the principle of least privilege largely depends on the design of these APIs.
In some instances, operating system architects may decide that a resource is so sensitive that third-party developers should not have access to the associated API at all. Mozilla’s Firefox OS, for example, prevents third-party applications from accessing the device’s telephony API. According to its documentation, Mozilla restricted access to this API in order to prevent the creation of malicious applications that surreptitiously dial premium phone numbers, a practice known as “toll fraud.” Although this restriction ensures the security of the end-user with respect to toll fraud, it also presents a functional trade-off: developers cannot build legitimate dialer applications to compete with the standard dialer pre-installed on a Firefox OS phone. In designing APIs, operating system architects must regularly make these kinds of trade-offs, balancing the utility of providing developers with access to a resource against factors such as privacy and security.
By contrast, Google’s Android operating system provides developers with a telephony API, as well as many other APIs that are not accessible on other operating systems. On the one hand, developers have praised Android for the unique user experiences that this flexibility makes possible. As Mark Zuckerberg once put it, “The great thing about Android is that it's so open. . . . Because of Google's commitment to openness, you can have experiences on Android that you can't have anywhere else.” Indeed, applications such as Facebook Home, Yahoo Aviate, and Twitter’s Cover take advantage of various Android APIs to transform the device’s user interface in ways that simply are not possible on other operating systems.
However, providing developers with too much flexibility can create privacy and security risks. For example, Android once featured an API that provided third-party applications with access to a central system log. By reading the logs, developers could troubleshoot application crashes and debug their software. However, the log also proved to be a risk to user privacy and security. The FTC’s complaint against HTC America, for instance, alleges that a vulnerable application pre-installed on the company’s Android devices copied sensitive personal information, such as location data and text messages, to the system log, potentially exposing this information to third-party applications. Similarly, Facebook’s Alex Rice explained at the FTC’s 2013 mobile security workshop that many developers copied Facebook user IDs to the system log, noting that “the read_logs permission was a source of more single privacy vulnerabilities in our ecosystem than any other issue.” Due to such risks, Google – to the consternation of some developers – deprecated the API with the release of Android 4.1. Rather than being able to access the entire system log, developers can now only access the logs of their own applications. As Google’s Adrian Ludwig explained at the 2013 workshop, “we made a decision to narrow down the scope of the read_logs permission to protect the user’s privacy.”
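The narrowing Ludwig describes can be modeled in a few lines of Python (a hypothetical sketch of the before-and-after behavior – not Android’s actual implementation, and the UIDs and log messages are invented):

```python
from collections import namedtuple

LogEntry = namedtuple("LogEntry", ["uid", "message"])

# A shared system log containing entries from two different
# applications, identified by per-app user IDs (as on Android).
SYSTEM_LOG = [
    LogEntry(uid=10001, message="app_a: user_id=12345"),
    LogEntry(uid=10002, message="app_b: sync complete"),
]

def read_logs_before(caller_uid):
    # Pre-4.1 behavior: a READ_LOGS holder saw every
    # application's entries, including sensitive data other
    # apps had written to the log.
    return [e.message for e in SYSTEM_LOG]

def read_logs_after(caller_uid):
    # Post-4.1 behavior: the same call returns only the
    # caller's own entries.
    return [e.message for e in SYSTEM_LOG if e.uid == caller_uid]
```

In the “before” model, any application holding the log-reading privilege can read entries written by every other application; in the “after” model, the identical request is scoped to the caller’s own output.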
Similarly, Apple’s iOS once featured several APIs that provided third-party applications with global access to certain user personal information, including the user’s contacts and calendar. Although Apple instituted a policy in September 2010 that prohibited developers from collecting such information without permission, numerous applications continued to abuse these APIs. For example, the FTC’s complaints against Path and Snapchat allege that these popular applications collected information from iOS users’ address books without providing notice or obtaining user consent. With the release of iOS 6 in September 2012, Apple took technical steps to address this problem by integrating a set of more robust access controls, known as “permissions,” into its mobile operating system.
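The shift iOS 6 made – from a policy prohibition to technical enforcement – can be sketched as a runtime gate in Python (a hypothetical model; the names are illustrative and do not correspond to Apple’s actual APIs):

```python
class PermissionBroker:
    """Toy model of runtime permissions: the first request for a
    protected resource prompts the user, and the decision is
    remembered for subsequent requests."""

    def __init__(self, prompt):
        self.prompt = prompt   # callable: resource name -> bool
        self.decisions = {}    # remembered user choices

    def request(self, app, resource):
        if resource not in self.decisions:
            # First access triggers the system permission dialog.
            self.decisions[resource] = self.prompt(resource)
        if not self.decisions[resource]:
            raise PermissionError(f"{app}: {resource} denied by user")
        return f"{app}: {resource} granted"


# Simulate a user who approves contacts access but nothing else.
broker = PermissionBroker(lambda resource: resource == "contacts")
print(broker.request("example_app", "contacts"))
```

The key difference from the earlier global-access APIs is that the operating system, not developer policy compliance, mediates each application’s first access to the user’s data.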
Sandboxing – as a foundation for the principle of least privilege – provides an opportunity for mobile operating systems to enhance user privacy and security. However, as exemplified by the Commission’s actions in HTC America, Path, and Snapchat, decisions about how to design APIs – including which resources to open to developers, the scope of access to provide, and how to inform users of that access – play a critical role in realizing this opportunity. Participants at the 2013 workshop noted that securing APIs is an ongoing process, and that operating systems must “add, adjust, or course correct” based on application behavior. Although reacting to application behavior is critical, participants also noted that operating systems should provide developers with incentives to follow the principle of least privilege – to ask the question “what’s the least amount you need in order to be able to develop your application.” Indeed, developers have noted that API design affects developer behavior. With operating systems adding thousands of new APIs with each new release, this is an important lesson to keep in mind. While not all operating systems will come to the same conclusions on these questions (and, as we’ll see in future posts, there can be multiple approaches to achieving the same objectives), it is critical to consider how API design decisions affect developer behavior with respect to user privacy and security.
In my next post, I’ll look at one tool that mobile operating systems have used to mediate developer access to resources – permissions – and the debate around their effectiveness as a privacy- and security-enhancing mechanism.
The author’s views are his or her own, and do not necessarily represent the views of the Commission or any Commissioner.