4.1 Understanding data justice

"Virtue knows no colour line, and the chivalry which depends upon complexion of skin and texture of hair can command no honest respect. "


By Caroline Kuhn

Discrimination is linked to broader processes of oppression, which leave socially different groups vulnerable to violence, marginalisation, exploitation, cultural imperialism, and powerlessness (Peña Gangadharan and Niklas, 2019). There are different approaches to dealing with these injustices. Fairness, accountability, and transparency studies are one such approach, but, as Peña Gangadharan and Niklas (2019) suggest, these studies are quite techno-centric: attention is placed on the technology rather than on the social and cultural structures in which these technologies are embedded. Scholars in this field believe that engineers and computer scientists can ensure fairness or engineer discrimination-aware data mining and machine learning. As a consequence, the field ends up with technical solutions that are overly simplistic and ill-equipped to accommodate the complexity of social life. Such solutions also reinforce the advantage of tech-privileged groups over the communities subjected to these technologies, exacerbating power imbalances, bias, and oppression, all of which leaves ample room for injustice and discrimination to materialise.

Some benefits but…

As a consequence of the expansion of data-intensive development, there are a number of benefits: faster and better decisions that are facilitating improved development outcomes in health, agriculture, urban planning, and other areas (Cartesian 2014, Kshetri 2014). Nevertheless, there have also been growing concerns about emerging negative impacts: loss of privacy, discrimination, growth in inequality, and so on (Spratt & Baker 2015, Taylor & Broeders 2015). Whilst we constantly read that we are undergoing a data revolution, the revolution is not so much about the data and the technologies; rather, it relates to long-standing social, political, economic, and cultural issues, as D'Ignazio and Klein (2019) would argue. Hence, data justice is about scrutinising, uncovering, and challenging the often invisible structures of oppression that are entrenched in the social and cultural structures in which we are embedded. In short, data justice is not solely about data and data-driven technologies. Instead, it is about addressing structural problems of oppression and injustice that occur in the world of data and data-driven technologies and systems. These problems originate as much from class hierarchies as from status and political ones. Discrimination is a consequence of the intersection of different social structures such as gender, class, race, and ethnicity, as well as of the (mal)distribution of material wealth in society. It works to reinforce certain groups' privilege or domination over others through exclusion from particular democratic structures and styles of governance.

Some examples

Examples of data injustices are useful to illustrate these structural issues. As Heeks rightly points out, big corporations such as Facebook, Google, and Amazon capture, in shady ways, huge flows of granular data about our daily interactions with social media platforms and online services. They then profit from that data in obscure ways, as Zuboff (2019) has so eloquently demonstrated. Yet, we struggle to find much data about these corporations and their operations. Another illustrative example is an initiative in India in which land records were digitised using purely quantitative data, which meant that the informal, qualitative, and traditional knowledge that local communities held about these lands was excluded from the new data set.

The Aadhaar digital campaign

A further example, also from India, is the Aadhaar digital identity campaign. In 2009, the Indian government began enrolling residents onto a platform known as Aadhaar, which provided each enrollee with a 12-digit unique identification number linked to both their demographic and biometric details, including fingerprint and iris scans. These biometric features were the first problem: many workers' hands are badly damaged by hard manual labour, so their fingerprints are not always recognised by the system. More dramatic, however, is the fact that the project was initially sold as a voluntary way of improving welfare service delivery and giving those without identification an ID they could use. In practice, the government expanded it by making it mandatory for a number of services, forcing residents to sign up for Aadhaar to get access to things they were entitled to. Today, Aadhaar essentially functions as a catch-all identity proof that the government would like not just to make mandatory for every Indian resident, but also to link to a number of other services, such as the PAN card, phone numbers, and bank accounts. This represents a huge burden for more marginalised individuals, who risk being excluded not just from the welfare they are entitled to by law, but also from all the other services for which the digital ID serves as a prerequisite. It is not that deploying a unique ID to access public services, especially in a huge country like India, is bad per se. The problems are the type of information collected (collecting biometric data is always a bad idea, given that some individuals cannot provide that information in the same way as others), the concentration of so much information in a single database, and the move to digital without understanding literacy and connectivity gaps in the population.

So what is data justice?

There is an undeniable interplay between data-driven technologies and individuals' daily lives. This interplay is complex and nuanced, and it has both positive and negative implications. It is therefore clear that how data-driven systems and technologies are managed is critical to delivering these services and their benefits equally to all citizens. Data justice is a response to the negative social impacts that these systems have on our lives, and it seeks structural changes to the system. The idea of data justice can thus be understood as the consideration of fairness in the way people are made visible (recognised), represented, and treated as a result of their production of digital data (Taylor, 2017). Data justice is, therefore, necessary to find ethical pathways through a datafied world. A framework of data justice will have to address the implications of these systems for broader democratic processes, the entrenchment and introduction of inequalities, the discrimination and exclusion of certain groups, deteriorating working conditions, and the dehumanisation of decision-making, and it will have to ensure that there is interaction around sensitive issues (Dencik, Hintz, Redden & Treré 2019).

From the above, we can see how data, rather than being an abstract and neutral technical artefact, is situated and understood in relation to other social practices. Therefore, data justice cannot be only about data: it needs to look into, scrutinise, and understand the social practices that unfold and pivot around data, so that new, more inclusive data practices centred on human needs can be envisioned and realised. Data justice, as such, can be understood as a concept, a set of practices, and an approach.

How can all the flow of data be managed and used in a just way?

How can data users make sure that no harm is done to particular communities or groups of people? One would think that drawing on the Human Rights framework would be enough to protect the rights of individuals, but the problem is that human rights violations need to be visible and recognisable before people can become aware of them and act or respond to them (Taylor 2017). Hence, a data justice approach needs to go beyond a Human Rights framework. In section 4.3 we will explore Taylor's (2017) framework and model for data justice. There are different interpretations of the concept; we describe some of them next, as we believe they imply different approaches to data justice in practice.

Activity

The Aadhaar case: India's digital and biometric identification system, a promise that was never kept. In 2009, the Indian government began enrolling residents onto a platform known as Aadhaar, which provided each enrollee with a 12-digit unique identification number linked to both their demographic and biometric details, including fingerprints and iris scans. The project was initially sold as a voluntary way of improving welfare service delivery and giving those without identification an ID they could use. In practice, however, the government expanded it by making it mandatory for a number of services, forcing residents to sign up for Aadhaar to get access to things they were already entitled to. Today, Aadhaar essentially functions as a catch-all identity proof that the government would like not just to make mandatory for every Indian resident, but also to link to a number of other services, such as mobile phone contracts and bank accounts.

Watch this video

And read this article

Start a discussion with your pupils about the difficulties with a system of data justice that relies only on Human Rights.

Does India have an active privacy law? What is the problem with that? Privacy for whom? Think about the importance of governmental culture in matters such as surveillance and control.

Discuss how Taylor's model for data justice can be more useful for tackling problems such as this one.

 
