There is no commonly accepted definition of Privacy Enhancing Technologies (PETs), although a good description is provided by the UK Information Commissioner’s Office (e.g. in its “Data protection guidance note” dated 2008): “… any technology that exists to protect or enhance an individual’s privacy, including facilitating individuals’ access to their rights under the Data Protection Act 1998”. Examples include tools for:

• privacy management, which compare service-side policies about the handling of personal data with preferences expressed by users (for example W3C P3P and EU PRIME)
• data access, which enable users to securely check and update the accuracy of their personal data online
• pseudonymisation, which provide users - in certain contexts (email, payment, web browsing, etc.) - with complete anonymity or else pseudonymity (i.e. anonymity that is reversible if needed, for example in case of fraud).

From this description, it follows that PETs do play a role in resolving privacy concerns. However, they do not resolve all privacy concerns. Why is this? There are a number of reasons, including:

• vested interests in obtaining personal information (for example, for marketing)
• lack of regulatory powers
• lack of user awareness of privacy risks, and other factors that prevent the development of effective economic models for organizational investment in PETs
• a rate of technological change high enough that new privacy risks are introduced at least as fast as older risks can be addressed technologically and legally (for instance, new privacy risks have been introduced by RFID tags, social computing and cloud computing)
• the complexity of privacy requirements in a global environment and the contextual nature of privacy risk, which make it difficult for people to understand the privacy requirements in a given case
• the increasing distribution and ease of exposure of personal information in a global online environment.
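The reversible pseudonymity mentioned above can be illustrated with a minimal sketch. This is a toy escrow-based design of my own construction, not a scheme prescribed by any of the sources cited here: a trusted party issues random pseudonyms and keeps the pseudonym-to-identity mapping, so that only it can reverse the pseudonym (for example, during a fraud investigation). The class and method names (`PseudonymService`, `pseudonymise`, `reverse`) are illustrative assumptions.

```python
import secrets


class PseudonymService:
    """Toy escrow-based pseudonymisation service.

    A trusted party maps real identities to random pseudonyms and
    retains the mapping, so pseudonymity is reversible on demand
    (e.g. in case of fraud). Illustrative sketch only - a real
    deployment would need access control, audit logging, and
    protection of the escrow store itself.
    """

    def __init__(self):
        self._escrow = {}  # pseudonym -> real identity (held only by the escrow party)

    def pseudonymise(self, identity: str) -> str:
        # Random token carries no information about the identity.
        pseudonym = secrets.token_hex(8)
        self._escrow[pseudonym] = identity
        return pseudonym

    def reverse(self, pseudonym: str) -> str:
        # Only the escrow holder can perform this lookup.
        return self._escrow[pseudonym]


svc = PseudonymService()
p = svc.pseudonymise("alice@example.com")
assert svc.reverse(p) == "alice@example.com"
```

Relying parties see only `p`; the link back to the real identity exists solely in the escrow table, which is the property that makes the anonymity reversible rather than absolute.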
One way of addressing this issue is for regulators to have more power to encourage adoption of PETs. This would not just be a question of increased fines for non-compliance, but of encouraging a new mindset within organizations to ‘do the right thing’, in particular via expanded use of Privacy Impact Assessments (PIAs), which help organizations assess the impact of their operations on personal privacy, and more generally via encouragement of accountability both within organizations and to external stakeholders. Technology can help with this, and indeed will be needed, given the potential complexity of determining requirements and the need to track and enforce obligations. Known techniques include: decision support tools that highlight privacy requirements and controls even in a global environment; education and training tools; obligation management systems; cryptographic binding of policies that describe how personal data should be handled to the data itself (‘sticky policies’); improved key storage and distribution; trustworthy location information of data processors; etc.
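The ‘sticky policy’ idea of binding a handling policy to the data itself can be sketched in a minimal form. This is an assumed simplification, not the actual mechanism used by any particular system: here the binding is a keyed MAC computed over both the policy and the data, so that stripping or altering the policy is detectable by anyone holding the key. The function names (`make_sticky`, `verify_sticky`) and the policy fields are illustrative.

```python
import hashlib
import hmac
import json


def make_sticky(data: bytes, policy: dict, key: bytes) -> dict:
    """Bind a handling policy to data with a keyed MAC over both.

    Toy sketch of the 'sticky policy' concept: the tag covers the
    canonicalised policy and the data together, so the policy cannot
    be removed or modified without detection. Real schemes typically
    also encrypt the data so it is unusable without honouring the
    policy.
    """
    policy_bytes = json.dumps(policy, sort_keys=True).encode()
    tag = hmac.new(key, policy_bytes + data, hashlib.sha256).hexdigest()
    return {"data": data, "policy": policy, "tag": tag}


def verify_sticky(package: dict, key: bytes) -> bool:
    """Check that the policy still matches the data it was bound to."""
    policy_bytes = json.dumps(package["policy"], sort_keys=True).encode()
    expected = hmac.new(key, policy_bytes + package["data"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["tag"])


key = b"shared-escrow-key"  # illustrative key; real schemes use managed keys
pkg = make_sticky(b"customer record", {"purpose": "billing", "retain_days": 30}, key)
assert verify_sticky(pkg, key)
```

If a downstream processor changes, say, `"purpose"` to `"marketing"`, verification fails, which is the tamper-evidence that lets obligation management systems enforce the original handling terms.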