When the unregulated functioning of free markets fails to reach the socially optimal output, the result is known as market failure. When producers know more than consumers, or vice versa, this asymmetric information gives one party an unfair advantage over the other. Market failure caused by asymmetric information is present in the market for most digital products: these products heavily impact users’ privacy, yet most users cannot make an informed choice about using them because they lack knowledge of the technology’s inner workings. This asymmetric information, then, needs to be addressed to return to a socially optimal market where users can make truly informed choices about their privacy and the technology they use. However, a paternalistic policy of imposing a “data tax,” as some have suggested, or of denying companies the right to access consumer data outright, simply does not work. Such overbearing solutions often lead to stunted market growth, inefficiency, or both. Instead, a kind of “soft” paternalism is needed, one that protects the rights of both consumers and producers to the maximum degree while still reaching the socially optimal market condition. These policies are known as “nudge” policies, first introduced by the economist Richard Thaler and the legal scholar Cass Sunstein. This paper analyzes the efficacy and feasibility of implementing nudge policies in two contexts: first, on a microeconomic scale, assessing the impact on the digital market and on various stakeholders; and second, on a macroeconomic scale, assessing the feasibility of applying nudge policy in privacy law.
While research exists on the implementation of certain nudges to protect privacy online, this paper both reviews the existing literature and data and provides an original real-world application of nudge policy in India to protect consumer data privacy.
The first and most common nudge, often already implemented by governments, is notifying the consumer. In theory, if companies notified users about exactly how their data is collected, processed, and used, this would level out the asymmetric information in the market. In practice, however, these benefits often fail to materialize. Instead, to confuse consumers and maintain their informational advantage, companies deliberately obfuscate the language of their notifications, making it difficult for consumers to understand the effect on their privacy and discouraging them, through purposefully overwhelming complexity, from reading the notifications at all. Indeed, an EU report found that only 13% of consumers read these notifications in full (Dellinger, 2019). In addition, most companies present consumers with a binary choice: accept all the terms or be unable to use the application. Combined with the obfuscated and complicated language of these notifications, this neither resolves asymmetric information nor protects consumer data privacy. To address this issue, “privacy nutrition labels” have been suggested: similar to food nutrition labels, companies would have to plainly state how they collect, process, and use the consumer’s data (Ciocchetti, 2009). However, this solution faces a similar problem: the complexity of language is inherently qualitative and therefore difficult to standardize, leaving such labels open to the same obfuscation that occurs today.
A less obvious solution is to take advantage of consumers’ inherent biases and heuristics: by appealing to certain irrational aspects of the human psyche, nudges can actually encourage consumers to protect their privacy. This paper analyzes two of these biases, the status quo bias and the framing bias, and evaluates how nudges can exploit them to protect consumer data privacy.
Status quo bias
The status quo bias refers to the irrational tendency of human beings to stay with the default option when presented with a choice, sometimes even when other options are more favorable to them. A choice architecture, i.e., the way choices are organized when presented to the consumer, that includes the socially optimal result as its default is therefore likely to help achieve the socially desirable outcome. Most companies today use an “opt-out” system, in which data collection is on by default and the consumer must actively navigate to a preferences page, find the correct option, and turn off data collection for that particular company. The friction involved in this process prevents most consumers from even trying in the first place, another demonstration of the status quo bias. Indeed, a 2006 study found that although Facebook’s default settings at the time made profile information publicly searchable, the majority of surveyed Facebook users had not changed those settings, further evidence that users rarely change defaults (Acquisti et al., 2017).
However, this bias can also be used to protect consumer privacy by implementing an “opt-in” choice architecture, in which data collection is off by default and consumers must proactively turn it on. The personal data of the many consumers who are blind to the inner workings of the application would thus be protected, and data collection would require the consumer’s active understanding and consent. This would solve the asymmetric information problem to a great extent and achieve socially desirable results.
Framing bias
People react differently to information depending on how it is presented to them. This is known as the framing bias, and it can strongly influence consumer choice. If a choice is framed in a positive, inviting manner, consumers are likely to select it, while the opposite occurs when it is framed negatively, as the framing skews the consumer’s perception of the information. This bias offers potential for encouraging consumers to choose to protect their privacy. A recent paper analyzed how privacy choices can be framed as more positive or more negative through the use of color to convey certain connotations (Schöbel et al., 2020).
By using red to denote privacy-invasive options and green to denote privacy-beneficial ones, consumers will be more likely to choose the option that protects their privacy, especially if they were originally indifferent, because red carries connotations of alarm while green is typically associated with the go-ahead signal. However, comprehensive research and data on this hypothesis are lacking, so its actual efficacy remains questionable.
Stakeholder analysis and conclusion
Overall, the implementation of this nudge policy affects various stakeholders. The microeconomic stakeholders, individual producers and consumers, are analyzed here, while the impact on macroeconomic stakeholders such as the government is assessed in a later section. The implementation of this nudge policy will have a largely positive effect on consumers, protecting their privacy without encroaching much on their freedoms. Even these “soft” paternalistic nudges, however, could be construed as manipulating consumers by taking advantage of their irrational tendencies. Nevertheless, consumers will benefit significantly overall. Secondly, such nudge policy might reduce the profits of technology companies, which could be dangerous given the risk of stunting the growth of the technology market, an increasingly vital sector in most economies. Care must therefore be taken not to levy excessive regulations, which is the reason these nudges were analyzed in the first place, i.e., as an alternative to overbearing government policy. That said, the implementation of this nudge policy will address the asymmetric information in the market, which will only reduce the unfair profits that companies make from this advantage and will allow the market to return to a socially optimal state where both producers and consumers still benefit. Therefore, these nudge policies will most likely be socially beneficial on a microeconomic scale.
Alessandro Acquisti, Idris Adjerid, Rebecca Balebako, Laura Brandimarte, Lorrie Faith Cranor, Saranga Komanduri, Pedro Giovanni Leon, Norman Sadeh, Florian Schaub, Manya Sleeper, Yang Wang, and Shomir Wilson. 2017. Nudges for privacy and security: Understanding and assisting users’ choices online. ACM Comput. Surv. 50, 3, Article 44 (August 2017), 41 pages. DOI: http://dx.doi.org/10.1145/3054926
Ciocchetti, Corey, The Future of Privacy Policies: A Privacy Nutrition Label Filled with Fair Information Practices (June 1, 2009). John Marshall Journal of Computer and Information Law, Vol. 26, No. 1, pp. 1–46, 2009. Available at SSRN: https://ssrn.com/abstract=1417136
Dellinger, A., 2019. Most privacy policies are too long and complicated to read. That needs to change. [online] Mic.com. Available at: <https://www.mic.com/p/privacy-policies-are-too-complicated-to-understand-new-analysis-confirms-18002848> [Accessed 8 August 2021].
Schöbel, S.; Barev, T. J.; Janson, A.; Hupfeld, F. & Leimeister, J. M. (2020): Understanding User Preferences of Digital Privacy Nudges — A Best-Worst Scaling Approach. In: Hawaii International Conference on System Sciences (HICSS).