2.3 • A critical approach to AI and algorithms •
"The EU digital education action plan 2021-2027 suggests promoting understanding of emerging technologies and their applications in education, developing ethical guidelines on artificial intelligence (AI) and data usage in teaching and learning for educators and support related research and innovation activities through Horizon Europe (p.12)"
By Javiera Atenas
Artificial intelligence (AI), algorithms and machine learning are having a great impact on humanity, and this impact will only increase in the future. Fundamental questions about how to regulate these technologies have arisen, as they present a series of risks for people and challenges to legal systems. The ethics of AI often focuses on “concerns” of various sorts, such as opacity and bias, as well as on regulations for automated decision support and predictive analytics; according to Whittaker et al. (2018), these lack due process, accountability, community engagement, and auditing, thus creating power imbalances and limiting opportunities for participation. Another AI ethics issue is opacity: normally, people affected by automated decisions and algorithms cannot challenge the outcome of a resolution. To address opacity, it is essential to remove bias and to establish legal frameworks that respond to these challenges and protect people.
The diagram below illustrates the inner workings of a data-driven system. It helps to understand how the inferences a system makes from our personal data, despite not always being transparent to us, are subsequently used to drive actions in individuals’ daily lives. The nascent, multi-disciplinary field of Human-Data Interaction (HDI) places the human at the centre of these data flows, and it is concerned with providing mechanisms (legibility, agency and negotiability) for people to interact explicitly with these data-driven systems.
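The mechanics of such a system can be sketched in a few lines of code. Everything here is invented for illustration (the profile fields, the scoring rules, the actions): the point is that the inference step stays hidden, while the person only ever sees the resulting action.

```python
# Toy sketch (hypothetical data and rules) of a data-driven system:
# personal data flows in, an opaque inference is made, and an action
# is taken -- without the inference ever being shown to the person.

def infer(profile):
    """Hidden inference step: the person never sees this logic."""
    score = 0
    if profile["postcode"] in {"EX1", "EX2"}:   # postcode as a proxy for income
        score -= 1
    if profile["age"] < 25:
        score -= 1
    return score

def act(profile):
    """Only the action is visible to the person, not the reasoning behind it."""
    return "show premium offer" if infer(profile) >= 0 else "show payday-loan ad"

person = {"postcode": "EX1", "age": 22}
print(act(person))  # prints "show payday-loan ad" -- the inference stays opaque
```

In HDI terms, legibility would mean exposing `infer()` to the person, agency would mean letting them correct the profile, and negotiability would mean letting them contest how the score is used.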
2.3.1 • Principles of AI ethics •
At a glance, AI systems should benefit individuals, society and the environment. According to the OECD, the principles of AI ethics are:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity. They should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society;
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them;
- AI systems must function in a robust, secure and safe way throughout their life cycles, with potential risks being continually assessed and managed;
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Also, the G20 Ministerial Statement on Trade and Digital Economy lists the key AI principles as:
- Inclusive growth, sustainable development and well-being;
- Human-centred values and fairness;
- Transparency and explainability;
- Robustness, security and safety.
Moreover, Australia’s Building Artificial Intelligence Capability initiative has published an AI Ethics Framework comprising eight principles for designing, developing, integrating or using AI systems, aimed at reducing the risk of negative impact on business and promoting good governance. They can be summarised as:
- Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals;
- Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups;
- Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection as well as ensuring the security of data;
- Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose;
- Transparency and explainability: There should be transparency and responsible disclosure so that people know when they are being significantly impacted by an AI system and can find out when an AI system is engaging with them;
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge its use or output;
- Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes and human oversight of AI systems should be enabled.
Furthermore, the European Commission’s High-Level Expert Group on AI, supported by the European AI Alliance, has developed the Ethics Guidelines for Trustworthy AI, which set out an EU framework for AI. Point (22) states that AI systems do not operate in a lawless world: a number of legally binding rules at European, national and international levels already apply, or are relevant, to the development, deployment and use of AI systems today. We highlight point (26), Ethical AI, holding that achieving trustworthy AI requires not only compliance with the law, which is but one of its three components, and point (27), Robust AI, which states that even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, with safeguards in place to prevent any adverse impact. The framework for trustworthy AI proposed by the European AI Alliance is presented below.
As a framework for trustworthy AI, the guidelines hold that AI system objectives should be clearly identified and justified. AI systems that help address areas of global concern, such as the United Nations Sustainable Development Goals, should be encouraged. Ideally, AI systems should be used for the benefit of all human beings, including future generations, while respecting human rights, diversity and the autonomy of individuals. They should be inclusive and accessible, and should not discriminate against individuals, communities or groups.
It is key to survey the landscape of AI ethics: the EU Parliament, for example, presents a series of issues and initiatives to raise awareness and to prevent AI from affecting the democratic process through deception, unfair manipulation or unjustified surveillance. It is therefore essential to consider AI’s implications for politics and to develop strong regulations that respect and uphold privacy rights and data protection, ensure proper data governance, and provide transparent information about the key factors used in algorithmic decision-making.
- You can analyse the Cambridge Analytica case with your students (see the article in WIRED magazine and The Cambridge Analytica Files from The Guardian) and discuss how easily democracy can be jeopardised.
- As a complement, you can watch the video below and have a group discussion in the class to raise awareness about the consequences of biased data.
2.3.2 • Examining AI ethics •
To design teaching and learning activities on the ethical boundaries of AI, algorithms and machine learning, we need to discuss how their opacity affects us all, directly and indirectly. Students, as citizens, need to develop the awareness and competencies to participate in democratic discussions that create legal frameworks to prevent misuse or unethical use of AI. Accordingly, UNESCO holds that we need education about algorithms, so that citizens understand their potential problems and, consequently, can challenge them.
Safiya Umoja Noble has been showcasing how algorithms can be a tool for oppression, opening a discussion of unethical or illegal uses of AI and algorithms. Examples from around the world can be categorised as follows:
- Racism: The opacity of algorithms creates black boxes, and one of the critical arguments for regulatory frameworks is the Rise of the Racist Robots, which, for example, leads to consumer lending discrimination or prevents certain groups from obtaining visas to visit or live in countries. Moreover, algorithms can harm certain groups’ educational experience through unfair learning analytics and student surveillance tools that require facial recognition. These technologies tend to fail Black people (video: AI, Ain’t I a Woman?), and racist predictive policing leads to longer incarceration sentences being imposed on such minorities.
- Sexism: We need to consider that 78% of AI professionals are men, and thus their experiences inform and dominate algorithm creation. Women are affected by algorithmic decisions in every aspect of their lives, including access to health, services and the labour market. Algorithms are failing women through misdiagnosis, affecting clinical decisions, prescribing the wrong treatments and hence damaging their health.
Also, algorithms are unfair to women in finance and, even more worryingly, AI is harming their job opportunities. For example, women are targeted with lower-paying job ads and discriminated against by HR personality tests when applying for a job. Moreover, AI can harm the queer and trans community by portraying them in inaccurate and stereotypical ways. Ethical AI development must therefore address the needs of non-binary and trans people to protect them from potential harm.
- Socioeconomic discrimination: Algorithms disproportionately harm those from lower-income households and neighbourhoods, for example by lowering their school grades. This kind of behaviour is known as automating poverty or automating inequality: AI is used to assign or remove benefits such as unemployment benefits, child support, housing and food subsidies, leading in the worst cases to severe health problems or death. Automated inequality is a way of imposing systemic oppression, for example by requesting biometric data for access to food in schools. UNICEF is thus calling for the protection of children’s rights, because low-income families are affected by automated decisions on benefits and welfare: AI is used to determine, showcase, map and even predict poverty, with the risk of depicting groups negatively depending on the school they attended or where they live. It is therefore necessary to work towards protecting the most vulnerable in society from predatory and dangerous uses of AI.
- Surveillance: Businesses, employers, educational organisations and governments use surveillance mechanisms to control specific behaviours. Shops monitor customers’ behaviour; companies monitor employees’ activities; schools monitor children’s engagement; universities use proctoring systems to invigilate exams. In other words, we are constantly monitored under what Shoshana Zuboff calls surveillance capitalism. The Carnegie Endowment for International Peace has pointed out that a growing number of states are deploying advanced AI surveillance tools to monitor, track and surveil citizens to accomplish a range of policy objectives – some lawful, others violating human rights, and many falling into a murky middle ground. It has developed an AI Global Surveillance (AIGS) Index to showcase how AI surveillance is rapidly proliferating worldwide. Hence, the United Nations, UNESCO, the Council of Europe and the OECD, amongst other international players, call for regulatory frameworks to prevent the abuse of surveillance mechanisms.
- Manipulation: AI has been used for social influence and behaviour manipulation, mostly through social media and predominantly regarding our political views and opinions. It has been used to spread propaganda and to target specific groups of people with content that can lead to radicalisation and extreme political views, thereby threatening democracy and democratic processes. A regulatory framework for targeted information in political campaigns therefore needs to be enforced.
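To make the mechanism behind several of these harms concrete for students, a toy example can help. The hiring records below are entirely fabricated; the sketch only shows how a rule “learned” from biased historical decisions reproduces that bias rather than correcting it.

```python
# Toy sketch (entirely fabricated data) of how a rule learned from
# biased historical decisions reproduces the bias.
# Past hiring records: (group, qualification score, was hired?).
# Group "A" was hired far more often than "B" for identical scores.
history = [
    ("A", 9, True), ("A", 7, True), ("A", 5, True),
    ("B", 9, False), ("B", 7, False), ("B", 5, True),
]

def hire_rate(group):
    """Historical hiring rate for a group."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "learned" rule: approve applicants whose group was
# historically hired at least half the time.
def predict(group):
    return hire_rate(group) >= 0.5

print(hire_rate("A"), round(hire_rate("B"), 2))  # prints: 1.0 0.33
print(predict("A"), predict("B"))                # prints: True False
```

Equally qualified candidates get opposite outcomes purely because of their group, and since the rule sits inside a black box, the rejected candidate has no way to see or contest why.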
- To discuss discrimination through algorithms, you can start by asking your students to play with the Pre-crime Calculator, an interactive experience that takes you into the world of predictive policing. How much of a potential suspect or victim are you in the eyes of the system? And which areas of your city should you avoid in the next week so as not to get involved in crime?
- Then, ask the students to take an online personality test, such as the Business Personality Profile, once using a male and once using a female identity with otherwise similar characteristics, and compare the results.
- Finally, ask your students to share their experiences with the rest of the class.