Tech highlights of the FTC privacy report

Today the FTC is releasing a major report on privacy. Privacy geeks will read the whole thing, and should, because it represents a lot of careful thinking by folks in the agency.

But if you’re a techie who doesn’t have time to read it all, let me point you to a few of the parts you’ll probably find most interesting.

When you’re reading, keep in mind that the report does not by itself establish any new laws or regulations.  It summarizes current law and asks Congress to consider new laws in certain areas, but most of the discussion is about best practices that the FTC thinks well-intentioned companies will want to follow.   These best practices are organized in a three-part framework: privacy by design, which means building privacy into your products and practices from the beginning; simplified choice for consumers; and greater transparency about data practices.

With that said, here are four sections of the report that might be of special interest to techies:

  1. De-identified data (pp. 18-22): Data that is truly de-identified (or anonymous) can’t be used to infer anything about an individual person or device, so it doesn’t raise privacy concerns. Of course, it’s not enough just to say that data is anonymous, or that it falls outside some narrow notion of PII. But beyond that, figuring out whether your dataset is really de-identified can be challenging. If you’re going to claim that data is de-identified, you need to have a good reason (the report calls it a “reasonable level of justified confidence”) for claiming that the data does not allow inferences about individuals. What “reasonable” means, that is, how confident you have to be, depends on how much data there is and what the consequences of a breach would be. But here’s a good rule of thumb: if you plan to use a dataset to personalize or target content to individual consumers, it’s probably not de-identified. (The first sketch after this list gives a tiny illustration of why unique records are the danger sign.)
  2. Sensitive data (pp. 47-48):  Certain types of information, such as health and financial information, information about children, and individual geolocation, are sensitive and ought to be treated with special care, for example by getting explicit consent from users before collecting it.   If your service is targeted toward sensitive data, perhaps because of its subject matter or target audience, then you should take extra care to provide transparency and choice and to limit collection and use of information.  If you run a general-purpose site that incidentally collects a little bit of sensitive information, your responsibilities will be more limited.
  3. Mobile disclosures (pp. 33-34): The FTC is concerned that too few mobile apps disclose their privacy practices.  Companies often say that users accept their data practices in exchange for getting a service.  But how can users accept your practices if you don’t say what they are?  A better disclosure would tell users not only what data you’re collecting, but also how you are going to use it and with whom you’ll share it.   The challenging part is how to make all of this clear to users without subjecting them to a long privacy policy that they probably won’t have time to read.   FTC staff will be holding a workshop to discuss these issues.
  4. Do Not Track (pp. 52-55): DNT gives users a choice about whether to be tracked by third parties as they move across the web. In this section of the report, the FTC reiterates its five criteria for a successful DNT system, reviews the status of major efforts, including the ad industry’s self-regulatory program and the W3C’s work toward a standard for DNT, and talks about what steps remain to get to a system that is practical for consumers and companies alike. (The second sketch after this list shows the basic header mechanics.)
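
On the de-identification point, here is a tiny sketch of one well-known heuristic, k-anonymity: every combination of quasi-identifying fields should be shared by at least k records. To be clear, the report does not prescribe this or any other particular test, and passing such a check is nowhere near enough to establish “justified confidence”; failing it, though, is a strong warning sign. The field names and data below are made up for illustration:

    # Illustrative only: k-anonymity is one well-known (and incomplete)
    # heuristic for spotting re-identification risk in a dataset.
    from collections import Counter

    def min_group_size(records, quasi_identifiers):
        """Smallest number of records sharing any one combination of
        quasi-identifier values (e.g. ZIP code, birth year, gender)."""
        groups = Counter(
            tuple(record[field] for field in quasi_identifiers)
            for record in records
        )
        return min(groups.values())

    # Made-up data: two records share quasi-identifiers, one stands alone.
    records = [
        {"zip": "20580", "birth_year": 1975, "gender": "F", "purchase": "a"},
        {"zip": "20580", "birth_year": 1975, "gender": "F", "purchase": "b"},
        {"zip": "20581", "birth_year": 1980, "gender": "M", "purchase": "c"},
    ]
    print(min_group_size(records, ["zip", "birth_year", "gender"]))  # prints 1

A result of 1 means at least one record is unique on those fields alone, which is exactly the kind of inference risk the report is worried about.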

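On the Do Not Track mechanics, here is a bare-bones sketch of the request/response exchange: the browser sends a DNT request header, and the server can acknowledge the user’s preference with a Tk response header. The DNT and Tk header names come from the W3C drafts under discussion; the handler, the policy logic, and the “N”/“T” status values shown are illustrative assumptions, not anything the report or a final spec mandates:

    # A bare-bones sketch, not a real implementation. Status values
    # ("N" = not tracking, "T" = tracking) are assumptions for illustration.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DNTAwareHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # "1" = do not track; "0" = consent; absent = no preference
            dnt = self.headers.get("DNT")
            body = f"DNT preference received: {dnt!r}\n".encode()
            self.send_response(200)
            # Acknowledge the user's preference in the response.
            self.send_header("Tk", "N" if dnt == "1" else "T")
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), DNTAwareHandler).serve_forever()

You can poke at it with curl -i -H "DNT: 1" http://localhost:8000/ and look for the Tk header in the response.
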
There’s a lot more in the report, and I expect to write more in the future about privacy issues raised by the report.  For now, I welcome your comments on this post, or the report generally.

 

Note: This blog post was reposted from the former Tech @ FTC blog. Comments are now closed for this post.

 

Original Comments to “Tech Highlights of the FTC Privacy Report.”

Jules Polonetsky
March 26, 2012 at 1:12 pm

Ed – how do you read ad reporting today under this analysis, assuming the company commits, publicly and contractually, not to identify? It isn’t used to identify, or made publicly available. But I am sure a researcher with enough time could ID a small % of users, with much extra work. Not “collected for behavioral use.”

Ed Felten
March 26, 2012 at 2:47 pm

Jules – I’m not sure I understand your question. Could you be more explicit about what you’re asking?

Clinton
March 26, 2012 at 1:42 pm

Ed, is this actually going to make a difference to how companies like Google operate? The apology approach seems to now be the norm. Violate. Hope to not get caught. If caught, apologize.

Almost exactly one year ago, the FTC reached a settlement with Google for the Google Buzz privacy violations. Google was meant to make major changes and submit to regular, independent audits.

Do you really think Google respects privacy more now as a result?

I can give you half a dozen major violations since then, and we have been discussing these at my forums. And that’s without the privacy violation problems Google is facing in Europe.

Are there any plans to publish these “independent” audits agreed to as part of the settlement?

Ed Felten
March 26, 2012 at 3:02 pm  

Clinton – We certainly hope that the report advances the debate and helps stakeholders move toward better practices and more trust online. As I noted in the main post, the report does not create new law or regulations.

Regarding your question about the Buzz order: We take potential order violations seriously. But we cannot comment on whether we are or are not investigating a particular company or act, so I can’t say much more than that.

Karen Kring
March 26, 2012 at 10:44 pm  

I’m not sure whether the privacy report will address this topic, but it should. Companies should not be able to ask for certain personal information electronically during the job application process before interviews have even been conducted. The SSN is something that should be collected later, during the hiring process. If the data is stored, then it needs to be protected the way credit cards are under PCI compliance. I went into an employer’s site (about a month ago) that was still storing my SSN in their system from 2010. At a minimum there should be standard data retention requirements. With identity theft on the rise, care should be taken. I will no longer apply to employers who ask for this information up front, and I changed my number in that employer’s system after noticing they were storing my SSN.

Potential employers should not be allowed to require individuals to provide private passwords for social networking sites either. That’s like asking for the keys to your house. That is an individual’s personal data.

Ed Felten
March 27, 2012 at 8:52 am  

Karen – I don’t think the report talks specifically about employers asking for the SSN, but it does say that the SSN is sensitive information that should get an extra level of protection. And it talks about the duty that companies have to provide adequate protection for personal information that they have.

The practice of employers asking applicants for their social network passwords has been in the news lately, but the timeline for producing and approving the privacy report did not leave time for the report to address those stories. It’s certainly an issue that we’re aware of, and that we may be speaking and writing about in the future.

Cedric
March 31, 2012 at 3:08 pm

OT: This year’s 2011 Predictions Scorecard and 2012 Predictions posts at Freedom to Tinker are three months overdue and counting. You have usually been the one to make the predictions/scorecard posts in the past, and queries about the missing posts at FtT are apparently being ignored, so I figured I’d ask over here.

Peter Cranstone (@cranstone)
April 2, 2012 at 6:04 pm  

Ed,

Great blog. I’ve been following along with the DNT standard, and my personal opinion is that it’s not viable. I’ve written lots of blog posts on the subject; perhaps the best one is “Privacy on the Internet is not binary” (it’s the second result if you do a Google search). DNT lacks the real-time context and consent required to deliver real privacy protections. In fact, by adding a “null” value (indicating No Preference), you’re going to run into a lot of issues with caching servers that routinely remove those kinds of values.

The other issue DNT is going to run into is regional privacy laws. For example, let’s say I set a value of null, which equates to “No Preference”. In the US it means one thing (tracking allowed) vs. the EU, where it means do not track. So this one variable has now introduced another variable: geolocation.

Now one answer is a quick IP address lookup. That works great on the desktop but not so much on mobile, because the IP address is for a general area, not a specific regional area. What is needed is real-time precision as to where I am so that the appropriate laws may be applied. Unfortunately this means tracking me so that you can, in essence, not track me.

There’s no such thing as perfect privacy and there never will be. What there needs to be is a “programmatic” solution that scales across all “screens” (DNT is not focused on mobile) so that consumers can transparently share, or not share, more of their context with trusted sites. Pandora cannot be put back in the box, but we can improve the interaction.

Also, you may wish to look at Section 5.2 of the DNT spec, which talks about a header field for the response. In short, this is the acknowledgement by the server to the user that it has received the DNT header setting. The key part to notice here is the use of the words “may” and “should”, as in: an origin server “may” indicate the tracking status for a particular request by including a Tk header field in the corresponding response, and if a request contains a DNT-field-value starting with “1”, an origin server “should” send a Tk header field in the corresponding response.

As the bard says, therein lies the problem. “May” and “should” are not the same as a MUST. The reason is obvious: a must would mean that you actually have to do it, which would in turn force the issue of browsers sending the header in an environment like HTTPS, where you can build a connection outside of the caching issues mentioned above.

DNT is a start in the right direction. What’s really needed, though, is a more contextual solution where the customer can be in more control, like the ability to set DNT=1 and have the browser instantly block any cookie from my device. Now we all know this isn’t possible (for economic, not technology, reasons), but that’s what we should be addressing.

techatftc (Ed Felten)
May 7, 2012 at 9:45 am

Peter:

I’m not sure I understand all of your critiques of Do Not Track.

I don’t see why DNT would be inconsistent with “real time context and consent”, nor with mobile. I don’t understand your argument that the no-preference state is a mistake: it is an unavoidable fact that some users will not express a preference, and some will have user-agents that are not DNT-enabled and therefore cannot express a preference on the user’s behalf. And the question of whether a server’s responsibilities depend on the client’s location is independent of DNT.

I don’t understand what you mean by a “programmatic” solution.

Regarding section 5.2 of the DNT spec and whether the server must indicate its status, I would suggest that you look at section 5.1 (“The origin server MUST provide a tracking status resource …”) and review the discussions on the W3C DNT mailing list.
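
For the curious, that tracking status resource is machine-readable, so any client can check a server’s declared status directly. A rough sketch follows; the /.well-known/dnt path and the “tracking” JSON member are taken from the current draft and may well change as the working group’s discussions continue:

    # Rough sketch only: fetches a site's tracking status resource as
    # described in section 5.1 of the TPE draft. The path and JSON shape
    # are from the draft and subject to change.
    import json
    import urllib.request

    def fetch_tracking_status(origin):
        url = origin.rstrip("/") + "/.well-known/dnt"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # Hypothetical server; most sites do not serve this resource yet.
    status = fetch_tracking_status("https://example.com")
    print(status.get("tracking"))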

Why do you think it is impossible for the browser to change its cookie behavior based on the user’s tracking preference?

Peter Cranstone
May 7, 2012 at 11:29 am

Ed,

Some answers for you:

Real-time context and consent is more than DNT=1. Context is defined as who I am, where I am, and what device I’m using (its capabilities). DNT is just binary, and while it might have a place as something like Do Not Call, it won’t work on the Web, whose business model is built around knowing as much context as it can get.

As for consumer choice: actually, this one is simple. What part of Do Not Track does anyone not understand? I’ve indicated a preference: do not track me. Ergo, all cookies and browser fingerprinting schemes should disappear. Anything less than that is Do Not Succeed.

DNT=1 should be the default (PoD); it’s not, because if it were, it would collapse the ad model on the Internet. Anything that cannot operate as a default fails by design.

DNT=“” is a fail for two reasons:

1. Caching servers that see a null value will strip out the header, ergo the server never knows what the preference was.
2. Null has two meanings: one in the EU (no track) and one in the US (track). So how do you resolve “programmatically” what a no-preference value is (assuming it even shows up at the server)? You have to ask consumers where they are, which means more bandwidth BEFORE you can even determine whether or not you should track them. You cannot send the real page until you have “more context”, which DNT is not providing.

Programmatic means: how do I code all the different variances on the server? And as a follow-up, how much is all this going to cost me in programming time, debugging time, and lost ad revenue?

RE: Section 5.1 of the spec

I read it. And here’s the critical section…

This section explains how a user agent MAY discover an origin server’s tracking status for a given resource. It defines a required well-known tracking status resource for describing a machine-readable tracking status and a Tk response header field that MAY be sent in any HTTP response and MUST be sent in responses to requests that modify the tracking status for that user agent.

So let’s parse this out. A user agent MAY discover an origin server’s tracking status for a given resource. This means that I can query the server (from my ad server) to determine the tracking status for a given resource. This is NOT the same as a consumer’s browser sending a request.

It then goes on to describe how a Tk response header field MAY be sent in any HTTP response. It makes sense to make it a MAY, because the users will never see that status. The only one that cares about that status is a third-party user agent who needs to know whether or not they can track. So IF the server sees a DNT=1, it may send a Tk response to the consumer’s browser, but it MUST respond to a query from another server.

As for why it’s impossible for the browser to change its cookie behavior based on the user’s tracking preference:

Let’s try this scenario: DNT=1. Pretty clear; it means do not track. How do I verify what cookies have been placed on my device during that session? Where is the ability to audit those cookies, to see exactly who placed them there and how long they are valid for, and to ensure compliance with the spec?

For DNT to really work, there has to be a browser audit mechanism; otherwise I just have to “trust” the provider. The alternative is to remove all cookies from the session, which is not going to happen.

Cookies are required for all sorts of things, and every one of those “sorts of things” needs to be determined in the context of DNT. The current approach is all about arguing who is a first party and who is a third party. The complexity of that is daunting from a programmatic standpoint.

And let’s say you solved all those problems. What about latency/bandwidth on mobile? What about pop-ups on a mobile screen with * for all content? How do I know what I’ve just committed to with a * command?

DNT is binary; privacy is contextual and changes based on context. A one-size-fits-all approach works for Do Not Call, but will struggle in a contextual world.

Peter


The author’s views are his or her own, and do not necessarily represent the views of the Commission or any Commissioner.
