The peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.
– John Stuart Mill, 1859.
The central idea behind the notion of deliberative democracy is exercising our freedom of expression in a political discourse which allows a fair and critical exchange of ideas and values. This is a process through which individuals develop autonomous understandings of their own political preferences and will.
Numerous philosophical theories have offered an analysis of the nature and normativity of personal autonomy. Among them, "hierarchical" accounts, based on the work of Harry Frankfurt and Gerald Dworkin, have proven most popular and influential [1]. Frankfurt argues that decision-making involves an act of self-evaluation that "essentially involves critical reflexivity, for without it agents would be unable to make up their minds and govern themselves". However, in order to make a decision and be autonomous one must be capable of making up one's mind, which can be challenging amid the tremendous deluge of information available online. There is only a limited amount of time and attention that users are willing and able to invest. A study of the German audience's use of electronic programme guides (EPGs) found that many users respond to increased digital channel variety with 'simplification strategies': they concentrate on what they know and appreciate, and, interestingly, they are convinced that there is a great probability that the additional channels will add little that is new [2]. Findings like these underline the importance of effective mechanisms to help people cope with digital abundance and support them in making diverse choices.
With these autonomy considerations in mind, the primary goal of European media policies is to ensure that users have access to pluralistic media content. However, diversity policies are still primarily aimed at organising the supply side and, paradoxically, they are increasingly detached from the way users actually find, access and consume media content in the brave new world of digital abundance [3].
'Exposure diversity' looks at the audience dimension of media diversity: the question of the extent to which diversity of content and supply actually results in (more) diverse content consumption. Van der Wurff brought a further differentiation to the discussion, observing that what media content the audience eventually consumes is also a question of the offering that is actually available to them (which can differ from the overall offering). He therefore suggested an additional aspect of exposure diversity, namely 'diversity of choice' (i.e. the 'absolute amount of different programme types that viewers can [actually] choose from') [4].
The human need for filtering is what prompted algorithmic personalization. Content personalization systems (think search engines, social media feeds and targeted advertising) and the algorithms they rely upon play an increasingly important role in our political discourse by guiding users' media consumption and informing their choices. Personalization algorithms filter the content and structure of a web application to adapt it to the specific needs, goals, interests and preferences of each user [5].
It all begins with a user model created on the basis of various 'user signals', such as click history, location, personal information and so on. By unifying the various signals under a single identity, the system can predict what information will be relevant for the user in question and filter out data that does not coincide with the user model, making it easier for the user to find her way in the flood of online information [6]. Ultimately, these choice intermediaries or information regimes might help users enjoy a more diverse media diet, although it is important to note that the decisions of these gatekeepers follow their own logic and economic preferences [7]. However, with prodigious potential comes prodigious risk. "Personalization filters can also serve up a kind of invisible auto-propaganda, indoctrinating us with our own ideas, amplifying our desire for things that are familiar and leaving us oblivious" [8] to any divergent information. As a result, filter bubbles can prevent users from realizing that their beliefs and desires may need rethinking, thus heavily undermining the agent's capacity for self-evaluation. This is not to say that social media per se have a negative effect on media diversity; they can also encourage people to engage with opposing views. However, much will ultimately be a question of how social information systems are designed [9].
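To make the mechanism concrete, the following is a minimal, purely illustrative sketch of signal-based filtering; the functions, topic labels and data are my own assumptions, not any platform's actual system. A profile is aggregated from a user's click history and candidate items are then ranked and pruned by their similarity to that profile.

```python
# A toy sketch of signal-based personalization: build a profile from click
# signals, then rank and prune candidate items by similarity to that profile.
# All names, topic labels and data are hypothetical, not any platform's system.
from collections import Counter

def build_user_model(clicked_items):
    """Aggregate topic signals from a click history into a simple profile."""
    counts = Counter()
    for item in clicked_items:
        counts.update(item["topics"])
    total = sum(counts.values()) or 1
    return {topic: n / total for topic, n in counts.items()}

def relevance(item, profile):
    """Predict relevance as the overlap between an item's topics and the profile."""
    return sum(profile.get(topic, 0.0) for topic in item["topics"])

def personalize(candidates, profile, top_k=2):
    """Keep only the items predicted to be most relevant; the rest are filtered out."""
    ranked = sorted(candidates, key=lambda it: relevance(it, profile), reverse=True)
    return ranked[:top_k]

clicks = [{"topics": ["sports", "local"]}, {"topics": ["sports"]}]
candidates = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["local", "politics"]},
]
print(personalize(candidates, build_user_model(clicks)))
# -> items 1 and 3 surface; the purely political item 2 is filtered out
```

Even in this toy version, whatever does not match the accumulated profile quietly disappears from view, which is precisely the dynamic the filter-bubble critique targets.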
Therefore, new questions arise: How does diverse media content reach the audience? What obstacles to choice are encountered? What potential obstacles may viewers experience on their journey towards effective exposure?
Critics may argue that the only viable way of protecting our capacity for self-evaluation, and thereby our autonomy, is to opt out of the service. This would, however, be an unrealistic scenario since, as Hildebrandt pointed out, what we increasingly want is not a right not to be profiled — which would effectively mean secluding ourselves from society and its benefits — but to determine how we are profiled and on the basis of what data: a "right how to be read" [10]. Users should be given more insight into, and control over, the filtering process. After all, self-governance implies a certain degree of assessment of, and influence over, the algorithms doing the filtering; in other words, one possible benchmark for assessing the diversity of one's choices could be how well equipped the user is with tools to burst her own "filter bubbles".
Users have long been disturbed by the idea that machines might make decisions for them which they could not understand or countermand; a vision of out-of-control authority that derives from earlier notions of unfathomable bureaucracy, found everywhere from Kafka to Terry Gilliam's Brazil [11]. Turning to machine learning systems, this has led some to caution against the rise of a "black box society" and to demand increased transparency in algorithmic decision-making. Along these lines, Schulz, Held and Kops, for example, emphasise the importance of preserving the openness of the communication process against path dependencies in order to expose the audience to new insights [12].
The opaque workings of filters imposed upon users diminish their autonomy; users' unawareness, combined with the lack of control and transparency, could leave the capacity for critical reflection in a vulnerable position. The Council of Europe speaks of the need to 'empower users' and encourages member states, the private sector and civil society to develop 'common standards and strategies to promote transparency and the provision of information, guidance and assistance to the individual users' [13]. A question that is still left open, however, is what a meaningful algorithmic transparency initiative to promote diverse exposure would look like.
Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. In the existing literature, "explanation" typically refers to an attempt to convey the internal state or logic of an algorithm that leads to a decision [14]. For example, the information provided by a model-centered explanation (MCE) centers on and around the model itself and could include: setup information (the intentions behind the modelling process); the family of model (neural network, random forest, ensemble combination) and the parameters used to further specify it before training; training metadata (summary statistics and qualitative descriptions of the input data used to train the model, the provenance of such data, and the output data or classifications being predicted); and performance metrics (information on the model's predictive skill on unseen data) [15].
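As a rough illustration, the sketch below collects the categories listed above into a single record. The class, field names and example values are my own illustrative assumptions, not a standard schema or any regulator's template.

```python
# An illustrative bundle of model-centered explanation (MCE) information, mirroring
# the categories listed above; the class and field names are assumptions, not a
# standard schema.
from dataclasses import dataclass, field

@dataclass
class ModelCenteredExplanation:
    purpose: str                      # setup information: intentions behind the modelling
    model_family: str                 # e.g. neural network, random forest, ensemble
    hyperparameters: dict = field(default_factory=dict)    # parameters fixed before training
    training_data_summary: str = ""   # summary statistics / qualitative description
    data_provenance: str = ""         # where the training data came from
    predicted_outputs: list = field(default_factory=list)  # classes or values being predicted
    performance: dict = field(default_factory=dict)        # predictive skill on unseen data

mce = ModelCenteredExplanation(
    purpose="Rank news items by predicted engagement",
    model_family="gradient-boosted trees",
    hyperparameters={"n_estimators": 300, "max_depth": 6},
    training_data_summary="90 days of interaction logs, ~2M records",
    data_provenance="first-party platform logs",
    predicted_outputs=["click", "no_click"],
    performance={"AUC_on_holdout": 0.74},
)
print(mce)
```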
Therefore, MCEs provide one set of information to everyone (system functionality information), but there are limits on how detailed and practical — and thus how "meaningful" — such an explanation can be for any individual. Individual data subjects are rarely empowered to make use of the kind of algorithmic explanations they are likely to be offered: most are too time-poor, resource-poor and lacking in the necessary expertise. This kind of transparency places a tremendous burden on individuals to seek out information about a system, interpret it and determine its significance, only to find out they have little power to change things anyway, being "disconnected from power". This is obviously oversimplifying, but for the purpose of this article let's say that transparency around system functionality and access to models gets you nowhere without the data. Furthermore, people mistakenly assume that personalization, for example, means that decisions are made based on their data alone. To the contrary, the whole point is to place your data in relation to others'. Take, for example, Facebook's News Feed. Such systems are designed to adapt to any type of content and evolve based on user feedback (e.g. clicks, likes and other metadata supplied unknowingly). When you hear that something is "personalized", this means that the data you put into the system are compared to data others put into the system, such that the results you get are statistically relative to the results others get. Even if you required Facebook to turn over its News Feed algorithm, you would know nothing without the data of the rest of your "algorithmic group". This seems extremely difficult to organise in practice, and would probably also involve unwanted privacy disclosures.
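A toy example can make the point that "personalized" output is relative rather than individual. The sketch below is a minimal user-based collaborative filter with invented interaction data (it is not how Facebook's News Feed actually works): the recommendation for "you" simply cannot be computed from your own interactions alone.

```python
# A toy user-based collaborative filter showing why "personalized" results are
# relative to other users' data. The interaction data is invented.
interactions = {            # user -> set of items they engaged with
    "you":   {"a", "b"},
    "user2": {"a", "b", "c"},
    "user3": {"a", "d"},
}

def jaccard(s1, s2):
    """Similarity between two users' interaction sets."""
    return len(s1 & s2) / len(s1 | s2)

def recommend(target, interactions):
    """Suggest items liked by the users most similar to the target."""
    own = interactions[target]
    scores = {}
    for other, items in interactions.items():
        if other == target:
            continue
        sim = jaccard(own, items)
        for item in items - own:                 # only items the target hasn't seen
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", interactions))
# -> ['c', 'd']: nothing here can be computed from "you"'s data alone
```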
Therefore, in order to avoid transparency fallacies, algorithmic transparency should be operationalized as meaningful and useful information that enhances user autonomy. I believe it is crucial not to view user autonomy in this context as "full user control" or access to information about the whole input-output pipeline; user autonomy seems to have more to do with which aspects of the algorithm are controllable and allow users to reflect on, and possibly reconsider, their own preferences. In this context, "subject-centric" explanations (SCEs), which restrict explanations to particular regions of a model around a query and a set of data, show promise (a minimal sketch of one such explanation follows the list below). For example:
– Sensitivity-based subject-centric explanations: what changes in my input data would have made my decision turn out otherwise? [16]
– Case-based subject-centric explanations: which data records used to train this model are most similar to mine? [17]
– Demographic-based subject-centric explanations: what are the characteristics of individuals who received similar treatment to me? [18]
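As promised above, here is a rough sketch of a sensitivity-based (counterfactual) explanation for a toy linear scorer. The model weights, feature names and decision threshold are invented for illustration and do not come from any real system.

```python
# A sketch of a sensitivity-based (counterfactual) subject-centric explanation
# for a toy linear scorer. Weights, feature names and threshold are invented.
import numpy as np

weights = np.array([0.6, 0.4])   # toy model: score = weights . features
threshold = 0.5                  # the decision flips when the score crosses this value
feature_names = ["share_of_political_clicks", "share_of_local_clicks"]

def decide(x):
    return bool(weights @ x >= threshold)

def counterfactual(x, step=0.05, max_steps=40):
    """Search, one feature at a time, for a change that would flip the decision."""
    original = decide(x)
    for i in range(len(x)):
        for direction in (+1, -1):
            for k in range(1, max_steps + 1):
                candidate = x.copy()
                candidate[i] = np.clip(x[i] + direction * step * k, 0.0, 1.0)
                if decide(candidate) != original:
                    return feature_names[i], float(x[i]), float(candidate[i])
    return None

user_features = np.array([0.2, 0.3])   # this (hypothetical) user's profile
print("decision:", decide(user_features))
print("what would have had to differ:", counterfactual(user_features))
# -> e.g. the first feature would have needed to rise from 0.20 to about 0.65
```

The answer is phrased entirely in terms of the subject's own data ("what would have had to be different about me"), which is exactly what makes this kind of explanation individually meaningful without disclosing the full model.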
On this note, there have been proposals in the research community for recommender systems that more explicitly address diversity, including software designs that allow users to 'tune' the level of personalization of the recommender system they are using, to further boost personal autonomy. For example, safeguarding personal autonomy could include being able to reset the default, choose between different recommendation logics, make a complaint about the tool or provide feedback; triggering people to reset their personal choices at regular (but not very frequent) intervals; modifying the way in which recommendations are presented, for instance by clearly stating additional options which may produce a different recommendation list; or giving users the opportunity to interactively navigate through connected lists of recommendations or change the settings in recommender systems [19].
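One way such a 'tuning' option might look is a user-adjustable dial that blends a personalized ranking with a diversity-oriented one. The sketch below is an assumed design, not an existing product; the item names and scores are invented.

```python
# A sketch of a user-adjustable personalization 'dial' (an assumed design): blend
# a personalized ranking with a diversity-oriented one according to a level the
# user chooses between 0.0 (diverse) and 1.0 (fully personalized).
def blended_ranking(personal_scores, diversity_scores, level):
    """Interpolate the two score sets and return items ranked by the blend."""
    blended = {
        item: level * personal_scores[item] + (1 - level) * diversity_scores[item]
        for item in personal_scores
    }
    return sorted(blended, key=blended.get, reverse=True)

personal_scores = {"item_a": 0.9, "item_b": 0.2, "item_c": 0.4}   # fits the profile
diversity_scores = {"item_a": 0.1, "item_b": 0.9, "item_c": 0.5}  # broadens exposure

for level in (1.0, 0.5, 0.0):
    print(level, blended_ranking(personal_scores, diversity_scores, level))
# 1.0 -> ['item_a', 'item_c', 'item_b']   (fully personalized)
# 0.5 -> ['item_b', 'item_a', 'item_c']   (a middle ground)
# 0.0 -> ['item_b', 'item_c', 'item_a']   (diversity-oriented)
```

The point of such a control is not the arithmetic but the fact that the user, rather than the platform alone, decides where on that spectrum her feed sits.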
Counterfactual explanations such as the sensitivity-based example above could provide information to the data subject that is meaningful and practically useful for understanding the reasons for an automated decision and for altering future behaviour to obtain a different result. They could serve as a minimal solution that balances the current technical limitations of algorithmic interpretability against the rights and freedoms of others (e.g. privacy and trade secrets). Autonomy can therefore, in principle, be enhanced without opening the black box. The possible benchmarks and criteria used to identify the desirable level and form of exposure diversity would thus include aspects such as user satisfaction and awareness of certain options and choices. These kinds of systems might help restore what Mireille Hildebrandt terms "double contingency": the mutual ability to anticipate, or "counter-profile", how an agent is being "read" by another, so that she can change her own actions in response.
Individual studies have examined the role of information disclosure and the explanation of specific algorithms in recommendation systems. Broadly, the results correspond to the notion of transparency as a means to see the truth and motives behind people's actions and to ensure social accountability and trust [20]. Effective transparency in recommender systems can serve to enhance user autonomy, exposure diversity, acceptance of specific recommendations and a user's impression of recommendation quality.
However, even if we provide users with the options and choices discussed above, they can still choose to stay inside their own "thought clusters". This possible outcome poses a broader societal question: what makes an active citizen, and how do we, as a society, nurture such a citizen – one who actively searches for reliable information and questions whether her beliefs are indeed justified? More fundamentally, how do we nurture her to the point where doing so becomes her own responsibility? Again, critics may argue that curiosity, the vehement search for diverse information and the employment of critical reasoning should be self-evident for any autonomous person. I would argue this position is somewhat snobbish: the importance of critical reflection and self-evaluation is self-evident mostly to academics who have learned its value. Today, exposure to media diversity is increasingly also a matter of possessing the right skills to find the relevant information. Consequently, media and informational literacy and educational activities are probably among the most prominent means to address issues of exposure diversity (the difficult questions of what such education would look like and who the educator would be are a topic for another discussion).
Finally, we need to understand the new role that social media play in the digital media ecosystem in order to know how to tackle the policy questions they raise. According to Pew Research, 61 percent of millennials use Facebook as their primary source for news about politics and government, yet Facebook refuses to acknowledge its identity as a news source. The removal of the Daily Stormer's vile brand of content from the internet, Google changing its news algorithm to stop sites like 4chan appearing at the top of news results, and Twitter's removal of blue checkmarks from far-right accounts have raised serious concerns about how Google and Facebook help determine what news people read, and whether they should have that power without transparent regulatory systems. Understanding their editorial power and the privately controlled public spheres they are creating is critical if we are to formulate standards and effective policy responses.
Obviously, there are no simple recipes for the issue at hand. This does not mean we should give up, but rather that valuing freedom of expression is basically a question of drawing, and constantly re-evaluating, the fine line between what is ethically desirable and what is technically feasible – a task which calls for complex but much needed, sustained multi-disciplinary collaboration at the intersection of law, ethics and technology.
1 A.M. Pereira Daoud, Filter Bubbles and Personal Autonomy, Utrecht University (2016-2017).
2 Natali Helberger, Diversity by Design, Journal of Information Policy 1 (2011): 441-469.
3 Helberger, n 2.
4 Ibid.
5 I. Garrigós, J. Gomez and G. Houben, Specification of Personalization in Web Application Design, Information and Software Technology (2010): 991-1010.
6 Pereira Daoud, n 1.
7 Helberger, Karppinen and D'Acunto, Exposure Diversity as a Design Principle for Recommender Systems (2016).
8 Eli Pariser, introduction to The Filter Bubble: What the Internet is Hiding From You (London: Viking e-book, 2011), 26.
9 Ibid.
10 Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar, 2015).
11 Lilian Edwards and Michael Veale, Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For (23 May 2017), Duke Law & Technology Review, p 15.
12 Helberger, n 2.
13 Council of Europe, Recommendation Rec (2007) 11 of the Committee of Ministers to Member States on Promoting Freedom of Expression and Information in the New Information and Communications Environment, 26 September 2007.
14 Sandra Wachter, Brent Mittelstadt and Chris Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR (6 October 2017).
15 Edwards and Veale, n 11.
16 Wojciech Samek et al., Evaluating the visualization of what a deep neural network has learned, IEEE Transactions on Neural Networks and Learning Systems (forthcoming), doi:10.1109/TNNLS.2016.2599820; Marco Tulio Ribeiro et al. “Why should I trust you?”: Explaining the predictions of any classifier, in KDD ’16 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144 (2016), doi:10.1145/2939672.2939778.
17 Donald Doyle et al., A review of Explanation and Explanation in Case-based reasoning (Department of Computer Science, Trinity College, Dublin, 2003).
18 Edwards and Veale, n 11.
19 Helberger, Karppinen and D'Acunto, n 7.
20 Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053
About the Author: Tihana Krajnović is currently writing her Master's thesis at the Faculty of Law, University of Zagreb, and writes for medialaws.eu. Her past activities include participating in the Price Media Law Moot Court Competition in Oxford, working as a national researcher and academic coordinator in two ELSA International and Council of Europe Legal Research Groups on online hate speech and social rights, and volunteering at the University of Zagreb's Legal Clinic (Anti-Discrimination and Protection of National Minorities Rights Department). Her research interests include data ethics, big data, AI, machine learning, algorithms, robotics, privacy, data protection and technology law, and European, international and human rights law.