Contextual integrity
Contextual integrity is a theory of privacy developed by Helen Nissenbaum and presented in her book Privacy in Context: Technology, Policy, and the Integrity of Social Life.[1] It comprises four essential descriptive claims:
- Privacy is provided by appropriate flows of information.
- Appropriate information flows are those that conform with contextual informational norms.
- Contextual informational norms refer to five independent parameters: data subject, sender, recipient, information type, and transmission principle.
- Conceptions of privacy are based on ethical concerns that evolve over time.
Overview
Contextual integrity can be seen as a reaction to theories that define privacy as control over information about oneself, as secrecy, or as regulation of personal information that is private or sensitive.
This places contextual integrity at odds with privacy regulation based on Fair Information Practice Principles. It also diverges from the 1990s cypherpunk view that newly developed cryptographic techniques would assure privacy in the digital age, because preserving privacy is not a matter of stopping all data collection, blocking or minimizing all flows of information, or preventing information leakage.[2]
The fourth essential claim comprising contextual integrity gives privacy its ethical standing and allows for the evolution and alteration of informational norms, often due to novel sociotechnical systems. It holds that practices and norms can be evaluated in terms of:
- Effects on the interests and preferences of affected parties
- How well they sustain ethical and political (societal) principles and values
- How well they promote contextual functions, purposes, and values
The most distinctive of these considerations is the third. As such, contextual integrity highlights the importance of privacy not only for individuals, but for society and respective social domains.
Parameters
The "contexts" of contextual integrity are social domains: intuitively, health, finance, the marketplace, family, civil and political life, and so on. The five critical parameters singled out to describe a data transfer operation are:
- The data subject
- The sender of the data
- The recipient of the data
- The information type
- The transmission principle
Some illustrations of contextual informational norms in western societies include:
- In a job interview, an interviewer is forbidden from asking a candidate's religious affiliation
- A priest may not share congregants' confessions with anyone
- A U.S. citizen is obliged to reveal gross income to the IRS, under conditions of confidentiality except as required by law
- One may not share a friend's confidences with others, except, perhaps, with one's spouse
- Parents should monitor their children's academic performance
Examples of data subjects include a patient, a shopper, an investor, or a reader. Examples of information senders include a bank, the police, an advertising network, or a friend; examples of data recipients include the same kinds of parties. Examples of information types include the contents of an email message and the data subject's demographic, biographical, medical, and financial information. Examples of transmission principles include consent, coercion, theft, buying, selling, confidentiality, stewardship, acting under the authority of a court with a warrant, and national security.
A key thesis is that assessing the privacy impact of information flows requires the values of all five parameters to be specified. Nissenbaum has found that access control rules not specifying the five parameters are incomplete and can lead to problematic ambiguities.[3]
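The completeness claim can be made concrete by modeling a flow as a record with exactly five fields. The sketch below is illustrative only; the class and field names are invented for this article, not drawn from any standard library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    subject: str                  # whom the information is about
    sender: str                   # who discloses it
    recipient: str                # who receives it
    info_type: str                # the attribute, e.g. "medical", "financial"
    transmission_principle: str   # the condition governing the flow

# The tax-filing norm discussed in this article, expressed as a flow:
tax_filing = Flow(
    subject="US resident",
    sender="US resident",
    recipient="IRS",
    info_type="tax information",
    transmission_principle="confidentiality",
)
```

Leaving any field unspecified would reproduce exactly the ambiguity Nissenbaum warns about: a rule that names only the recipient, for example, says nothing about whether the data may later be sold.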
Nissenbaum notes that some kinds of language can lead one's analysis astray. In particular, the passive voice lets a speaker gloss over the fact that an active agent performed the data transfer. The sentence "Alice had her identity stolen" obscures that someone or something did the actual stealing of Alice's identity. Similarly, saying "Carol was able to find Bob's bankruptcy records because they had been placed online" implicitly ignores the fact that some person or organization collected the bankruptcy records from a court and placed them online.
Example
Consider the norm: "US residents are required by law to file tax returns with the US Internal Revenue Service containing information such as name, address, SSN, and gross earnings, under conditions of strict confidentiality."
- Data subject: a US resident
- Sender: the same US resident
- Recipient: the US Internal Revenue Service
- Information type: tax information
- Transmission principle: the recipient will hold the information in strict confidentiality.
Given this norm, we can evaluate a hypothetical scenario and see if it violates the contextual integrity norm: "The US Internal Revenue Service agrees to supply Alice's tax returns to the city newspaper as requested by a journalist at the paper." This hypothetical clearly violates contextual integrity because providing the tax information to the local newspaper would violate the transmission principle under which the information was obtained.
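This evaluation can be carried out mechanically. The sketch below is a minimal model (invented names, not Nissenbaum's formalism or the Barth et al. logic): a norm constrains some of the five parameters, unconstrained parameters act as wildcards, and a flow violates the norm when any constrained parameter differs.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass(frozen=True)
class Flow:
    subject: str
    sender: str
    recipient: str
    info_type: str
    transmission_principle: str

# A norm fixes some parameters; None means "any value is acceptable".
@dataclass(frozen=True)
class Norm:
    subject: Optional[str] = None
    sender: Optional[str] = None
    recipient: Optional[str] = None
    info_type: Optional[str] = None
    transmission_principle: Optional[str] = None

def violates(flow: Flow, norm: Norm) -> bool:
    """A flow breaches the norm if any constrained parameter differs."""
    return any(
        getattr(norm, f.name) is not None
        and getattr(flow, f.name) != getattr(norm, f.name)
        for f in fields(Flow)
    )

# The tax-filing norm: recipient, information type, and transmission
# principle are fixed; subject and sender are left unconstrained.
tax_norm = Norm(recipient="IRS", info_type="tax information",
                transmission_principle="confidentiality")

filing = Flow("Alice", "Alice", "IRS", "tax information", "confidentiality")
leak = Flow("Alice", "IRS", "city newspaper", "tax information",
            "journalist request")

print(violates(filing, tax_norm))  # False: the filing conforms
print(violates(leak, tax_norm))    # True: recipient and principle deviate
```

The hypothetical newspaper scenario fails on two parameters at once: the recipient is no longer the IRS, and the transmission principle is no longer confidentiality.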
Applications
As a conceptual framework, contextual integrity has been used to analyze and understand the privacy implications of socio-technical systems on a wide array of platforms (e.g., the Web, smartphones, IoT systems), and has led to many tools, frameworks, and system designs that help study and address these privacy issues.
Social media: privacy in public
In her book Privacy in Context: Technology, Policy, and the Integrity of Social Life, Nissenbaum discussed privacy issues related to public data, using examples such as the privacy concerns around Google Street View and the problems caused by converting previously paper-based public records into digital form and putting them online. In recent years, similar issues arising in the context of social media have revived the discussion.
Shi et al. examined how people manage their interpersonal information boundaries with the help of the contextual integrity framework. They found that information access norms were tied to who was expected to view the information.[4] Researchers have also applied contextual integrity to more controversial social events, such as the Facebook–Cambridge Analytica data scandal.[5]
The concept of contextual integrity has also influenced ethical norms for research using social media data. Fiesler et al. studied Twitter users' awareness and perception of research that analyzed Twitter data, reported results in papers, or even quoted actual tweets. Users' concerns turned out to depend largely on contextual factors, such as who was conducting the research and what the study was for, which is in line with contextual integrity theory.[6]
Mobile privacy: using contextual integrity to judge the appropriateness of information flows
The privacy concerns raised by the collection, dissemination, and use of personal data via smartphones have received a large amount of attention from different stakeholders. A large body of computer science research aims to efficiently and accurately analyze how sensitive personal data (e.g. geolocation, user accounts) flows within an app and when it flows off the phone.[7]
Contextual integrity has been widely invoked in efforts to understand the privacy implications of observed data flows. For example, Primal et al. argued that smartphone permission systems would be more effective if they prompted the user only "when an application's access to sensitive data is likely to defy expectations", and they examined how applications access personal data and the gap between current practice and users' expectations.[8] Lin et al. demonstrated multiple problematic uses of personal data that violated users' expectations; among them, using personal data for mobile advertising was the most problematic. Most users were unaware of this implicit data collection and found it unpleasantly surprising when researchers informed them of it.[9]
Contextual integrity has also influenced the design of mobile operating systems. Both iOS and Android use a permission system to manage apps' access to sensitive resources (e.g. geolocation, the contact list, user data) and to give users control over which app can access which data. In their official developer guidelines,[10][11] both iOS and Android recommend that developers limit the use of permission-protected data to situations where it is necessary, and that they provide a short description of why each permission is requested. Since Android 6.0, users are prompted at runtime, in the context of the app, which the documentation refers to as "increased situational context".
Other applications
In 2006, Barth, Datta, Mitchell and Nissenbaum presented a formal language that can be used to reason about the privacy rules in privacy law. They analyzed the privacy provisions of the Gramm–Leach–Bliley Act and showed how to translate some of its principles into the formal language.[12]
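Rules of this kind are stated over traces of communication events, with temporal conditions such as "only if notice was given earlier". As a rough illustration only (the event names and the rule itself are invented here, not the paper's actual syntax), a Gramm–Leach–Bliley-style "notice before disclosure" condition can be checked over an event trace like this:

```python
def notice_precedes_disclosure(trace):
    """True iff every 'disclose' event is preceded by a 'notice' event
    for the same data subject (a past-temporal-logic style condition).
    Each trace entry is a (event_name, data_subject) pair."""
    notified = set()
    for event, subject in trace:
        if event == "notice":
            notified.add(subject)
        elif event == "disclose" and subject not in notified:
            return False
    return True

ok_trace = [("notice", "Alice"), ("disclose", "Alice")]
bad_trace = [("disclose", "Bob"), ("notice", "Bob")]
print(notice_precedes_disclosure(ok_trace))   # True
print(notice_precedes_disclosure(bad_trace))  # False: disclosure came first
```

The point of the formalization is that such conditions can be checked automatically against a system's logged behavior rather than argued case by case.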
References
- ^ Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press, 2010.
- ^ Benthall, Sebastian; Gürses, Seda; Nissenbaum, Helen (2017-12-22). Contextual Integrity Through the Lens of Computer Science. Now Publishers. ISBN 978-1-68083-384-3.
- ^ Martin, K and Helen Nissenbaum. "What is private about 'public' records data?" Targeted Submission: Fall Law Reviews.
- ^ Shi, Pan, Heng Xu, and Yunan Chen. "Using contextual integrity to examine interpersonal information boundary on social network sites". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2013.
- ^ https://www.ntia.doc.gov/files/ntia/publications/rfc_comment_shvartzshnaider.pdf [bare URL PDF]
- ^ Fiesler, Casey, and Nicholas Proferes. "'Participant' Perceptions of Twitter Research Ethics". Social Media + Society 4.1 (2018): 2056305118763366.
- ^ Enck, William, et al. "TaintDroid: an information-flow tracking system for realtime privacy monitoring on smartphones". ACM Transactions on Computer Systems (TOCS) 32.2 (2014): 5.
- ^ Lange, Patricia G. "Publicly private and privately public: Social networking on YouTube". Journal of computer-mediated communication 13.1 (2007): 361-380.
- ^ Lin, Jialiu, et al. "Expectation and purpose: understanding users' mental models of mobile app privacy through crowdsourcing". Proceedings of the 2012 ACM conference on ubiquitous computing. ACM, 2012.
- ^ "Accessing User Data - App Architecture - iOS - Human Interface Guidelines - Apple Developer".
- ^ "App permissions best practices".
- ^ Barth, Adam; Datta, Anupam; Mitchell, John; Nissenbaum, Helen (2006). "Privacy and contextual integrity: Framework and applications". 2006 IEEE Symposium on Security and Privacy (S&P'06). pp. 184–198. CiteSeerX 10.1.1.76.1610. doi:10.1109/SP.2006.32. ISBN 978-0-7695-2574-7. S2CID 1053621.
See also
- H. Nissenbaum, Privacy in Context: Technology, Policy and the Integrity of Social Life (Palo Alto: Stanford University Press, 2010), Spanish translation Privacidad Amenazada: Tecnología, Política y la Integridad de la Vida Social (Mexico City: Océano, 2011)
- K. Martin and H. Nissenbaum (2017) "Measuring Privacy: An Empirical Examination of Common Privacy Measures in Context", Columbia Science and Technology Law Review (forthcoming).
- H. Nissenbaum (2015) "Respecting Context to Protect Privacy: Why Meaning Matters", Science and Engineering Ethics, published online on July 12.
- A. Conley, A. Datta, H. Nissenbaum, D. Sharma (Summer 2012) "Sustaining both Privacy and Open Justice in the Transition from Local to Online Access to Court Records: A Multidisciplinary Inquiry", Maryland Law Review, 71:3, 772–847.
- H. Nissenbaum (Fall 2011) "A Contextual Approach to Privacy Online", Daedalus 140:4, 32–48.
- A. Barth, A. Datta, J. Mitchell, and H. Nissenbaum (May 2006) "Privacy and Contextual Integrity: Framework and Applications", In Proceedings of the IEEE Symposium on Security and Privacy, n.p. (Showcased in "The Logic of Privacy", The Economist, January 4, 2007)
Origins and Theoretical Foundations
Historical Development
The theory of contextual integrity originated with Helen Nissenbaum's 2004 article "Privacy as Contextual Integrity," published in the Washington Law Review, which critiqued dominant privacy paradigms like notice-and-consent and individual control for failing to account for socially embedded information practices disrupted by emerging technologies such as the internet and surveillance systems.[5] Nissenbaum, a professor at New York University with a background in philosophy and computer science, drew on interdisciplinary insights from ethics, law, and social norms to posit that privacy violations occur when digital tools alter the flow, use, or dissemination of personal information in ways that breach established contextual expectations of appropriateness.[6] This formulation addressed limitations in earlier privacy theories, which often prioritized abstract rights over empirical assessments of social practices, by emphasizing normative judgments tied to specific domains like healthcare, commerce, or public spaces.[7]

In 2006, Nissenbaum collaborated with computer scientists Adam Barth, Anupam Datta, and John C. Mitchell on "Privacy and Contextual Integrity: Framework and Applications," which operationalized the theory through formal models amenable to automated verification, marking an early bridge to computational privacy tools amid rising concerns over web tracking and data aggregation.[8] The concept gained fuller articulation in Nissenbaum's 2009 book Privacy in Context: Technology, Policy, and the Integrity of Social Life, published by Stanford University Press on November 24, which systematically applied contextual integrity to case studies in policy, including critiques of post-9/11 surveillance and online platforms, solidifying its role as a lens for evaluating technological impacts on social integrity.[9] By the 2010s, the framework influenced extensions in fields like human-computer interaction and data ethics, with Nissenbaum refining it to encompass actor purposes and transmission principles in response to big data and algorithmic systems, though core tenets remained anchored in the 2004–2009 foundations.[10]

Core Definition and Parameters
Contextual integrity is a framework for understanding privacy as the appropriate flow of personal information within specific social contexts, rather than as a general right to control information about oneself. Developed by philosopher Helen Nissenbaum, it posits that privacy violations occur when information flows deviate from established norms governing those contexts, such as norms dictating what types of data are suitable to share and under what conditions they may be transmitted.[1] This approach emphasizes that privacy protections must align with the purposes, roles, and expectations inherent to particular domains of social life, like healthcare, education, or commerce, where norms evolve but serve underlying values such as trust, fairness, and autonomy.[1]

At its core, contextual integrity requires compatibility between actual information practices and the "presiding norms of information appropriateness and distribution" in a given context. Norms of appropriateness determine whether certain personal attributes—such as health records or financial details—are fitting to disclose in that setting; for instance, sharing medical history with a physician aligns with medical context norms, but broadcasting it publicly does not. Norms of distribution, or flow, regulate the transmission of information, specifying permissible senders, recipients, and constraints like purpose or consent; a breach occurs if data intended for one recipient, such as a teacher sharing student performance with parents, is redirected to an unrelated commercial entity without justification.[1] Contexts themselves are parameterized by social spheres characterized by distinct activities, relationships, and institutional roles that generate these norms.
Information flows within contexts are analyzed through five key parameters: the data subject (the person the information concerns), sender (the entity disclosing it), recipient (the entity receiving it), attribute type (the specific information, e.g., location or purchase history), and transmission principle (conditions governing the flow, such as for a defined purpose or with temporal limits). These parameters must cohere with contextual expectations to preserve integrity; disruptions, often from technological changes like data aggregation across contexts, signal privacy issues by altering flows in ways that undermine social values.[2] For evaluation, norms are assessed against criteria like preventing harm, maintaining equity, and supporting contextual goals, rather than abstract individual preferences.[1]

Key Components and Mechanisms
Contextual Norms and Appropriateness
Contextual norms in the framework of contextual integrity refer to the socially entrenched expectations and conventions that govern the flow of personal information within specific social contexts, such as healthcare, education, or commerce. These norms arise from the purposes, roles, activities, and relationships defining each context, dictating which information about whom may appropriately be shared with whom, under what conditions, and for what ends.[6][1] For instance, in a medical context, norms typically permit a patient's symptoms to flow from the patient to a physician but restrict dissemination to third parties absent consent or overriding necessity.[11]

Appropriateness of an information flow is determined by its conformance to these contextual norms, where a flow is deemed appropriate if it aligns with the context's integrity—preserving the context's values, functions, and relational dynamics—rather than by absolute rules of secrecy or consent. Helen Nissenbaum posits that privacy violations occur when flows contravene these norms, even if the information is not sensitive in isolation or consent is obtained, as consent alone may not rectify contextual misalignment.[6][12] This assessment involves evaluating five parameters: the data subject, sender, recipient, attribute (type of information), and transmission principles (e.g., voluntary, coerced, or commercial).[11]

Norms are not static but evolve through societal deliberation, technological shifts, and institutional practices, though they remain relatively stable to maintain contextual predictability.
Empirical methods, such as crowdsourcing user judgments on hypothetical flows, have been proposed to infer these norms systematically, revealing variances across demographics and cultures that challenge universalist privacy models.[13] Deviations from norms vary in severity: a teacher's sharing of a student's disciplinary record with parents in a school setting may be appropriate, whereas the same flow to an unrelated advertiser would typically breach norms, underscoring the relational specificity of appropriateness.[6][14]

Information Flows and Transmission Principles
In the framework of contextual integrity, information flows represent the transfer of personal data from a sender to a recipient, characterized by the type of information (attribute) pertaining to a data subject, and regulated by a transmission principle that imposes conditions or constraints on the transfer.[2] These flows are deemed appropriate when they align with contextual norms, preserving the integrity of social practices by ensuring data moves in ways that respect the purposes, roles, and values of the given context. Deviations, such as unauthorized recipients or mismatched attributes, signal privacy violations by disrupting expected patterns of exchange.[8]

Transmission principles function as the normative rules governing how and under what circumstances information may flow, distinguishing legitimate transfers from illicit ones.[2] They account for mechanisms like restrictions tied to specific ends or relational dynamics, preventing flows that could undermine trust or fairness within the context. For instance, in medical settings, transmission principles might limit sharing of health records to disclosures "in confidence" between patient and physician or "for the purpose of" diagnosis and treatment, barring commercial resale without justification.[2]

Common transmission principles include:
- Confidentiality: Data shared "in confidence," prohibiting further disclosure except under exceptional overrides, as in professional ethics codes for doctors or lawyers.
- Informed consent: Explicit agreement by the subject or authorized party, often required for sensitive flows like genetic data sharing, though its validity depends on contextual power imbalances.[15][16]
- Purpose-bound: Flows restricted to "for the purpose of" advancing the context's core functions, such as judicial data shared solely to secure justice, with no spillover to unrelated ends.[2]
- Commercial exchange: Permitted via buying, selling, or contractual terms, common in market contexts but scrutinized for exploitation in non-commercial ones.[15]
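The confidentiality principle above is distinctive in that it constrains what the recipient may do next, not just the initial transfer. A minimal sketch of how that might be checked (all names invented for illustration): the code flags flows that forward information the sender previously received "in confidence".

```python
def confidentiality_violations(flows):
    """Flag flows that re-send information the sender received in
    confidence. Each flow is a (sender, recipient, info_type, principle)
    tuple; flows are processed in order."""
    held_in_confidence = set()   # (holder, info_type) pairs
    violations = []
    for sender, recipient, info_type, principle in flows:
        # Forwarding data one holds in confidence breaches the principle,
        # regardless of the principle attached to the new flow.
        if (sender, info_type) in held_in_confidence:
            violations.append((sender, recipient, info_type))
        if principle == "in confidence":
            held_in_confidence.add((recipient, info_type))
    return violations

flows = [
    ("patient", "physician", "medical history", "in confidence"),
    ("physician", "insurer", "medical history", "commercial exchange"),
]
print(confidentiality_violations(flows))
# → [('physician', 'insurer', 'medical history')]
```

Note how the second flow is flagged even though, taken in isolation, a commercial exchange can be a legitimate transmission principle: appropriateness depends on the history and context of the data, not on the single transfer.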
