2.3 • A critical approach to AI and algorithms •

"The EU digital education action plan 2021-2027 suggests promoting understanding of emerging technologies and their applications in education, developing ethical guidelines on artificial intelligence (AI) and data usage in teaching and learning for educators and support related research and innovation activities through Horizon Europe (p.12)"

By Javiera Atenas

Artificial intelligence (AI), algorithms and machine learning are having a great impact on humanity, and this impact will only increase in the future. Fundamental questions have arisen about how to regulate these technologies, as they pose a series of risks for people and challenges to legal systems. The ethics of AI often focuses on concerns such as opacity and bias, as well as on regulating automated decision support and predictive analytics; according to Whittaker et al. (2018), these systems lack due process, accountability, community engagement, and auditing, creating power imbalances and limiting opportunities for participation. Opacity is a particular problem because people affected by automated decisions and algorithms usually cannot challenge the outcome of a resolution. To address opacity, it is essential to remove bias from these systems and to establish legal frameworks that respond to these challenges and protect people.
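The way historical bias propagates into automated decisions can be made concrete with a small sketch. The "historical" records and the naive learning rule below are entirely invented for illustration: a system learns approval thresholds from past human decisions, and because one group was never approved historically, the automated system silently reproduces that exclusion.

```python
# Hypothetical sketch: a naive decision system trained on biased history.
# Two equally qualified applicant groups, but group B was systematically
# rejected by past human decision-makers.

from collections import defaultdict

# (group, qualification_score, past_decision) -- fabricated training data
history = [
    ("A", 7, "approve"), ("A", 6, "approve"), ("A", 5, "approve"),
    ("A", 4, "reject"),
    ("B", 7, "reject"),  ("B", 6, "reject"),  ("B", 5, "reject"),
    ("B", 4, "reject"),
]

def train(records):
    """Learn the minimum score at which each group was historically approved."""
    threshold = defaultdict(lambda: float("inf"))
    for group, score, decision in records:
        if decision == "approve":
            threshold[group] = min(threshold[group], score)
    return dict(threshold)

def predict(model, group, score):
    """Apply the learned per-group thresholds to a new applicant."""
    return "approve" if score >= model.get(group, float("inf")) else "reject"

model = train(history)

# Two applicants with identical qualifications get different outcomes,
# because the model faithfully reproduces the biased historical pattern.
print(predict(model, "A", 6))  # approve
print(predict(model, "B", 6))  # reject -- group B was never approved
```

Note that the model is "correct" with respect to its training data: the discrimination comes from the data, not from a coding error, which is precisely why opacity makes such outcomes hard to detect and challenge.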

The diagram below illustrates the inner workings of a data-driven system. It helps to understand how the inferences a system makes from our personal data, often without being transparent with us about them, are subsequently used to drive actions in individuals' daily lives. The nascent, multi-disciplinary field of Human-Data Interaction (HDI) places the human at the centre of these data flows and is concerned with providing mechanisms (legibility, agency and negotiability) through which people can interact explicitly with these data-driven systems.

  Author/Copyright holder: Richard Mortier. Copyright terms and licence: CC BY-NC-ND

2.3.1 • Principles of AI ethics •

At a glance, AI systems should benefit individuals, society and the environment. According to the OECD, the principles of AI ethics hold that:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity. They should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society;
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them;
  • AI systems must function in a robust, secure and safe way throughout their life cycles, with potential risks being continually assessed and managed;
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Also, the G20 Ministerial Statement on Trade and Digital Economy lists the key AI principles as:

  • Inclusive growth, sustainable development and well-being;
  • Human-centred values and fairness;
  • Transparency and explainability;
  • Robustness, security and safety;
  • Accountability.

Moreover, Australia's AI Ethics Framework, developed as part of the Building Australia's Artificial Intelligence Capability initiative, comprises eight principles for designing, developing, integrating or using AI systems, aimed at reducing the risk of negative impacts and promoting good governance. They can be summarised as:

  • Human, societal and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment;
  • Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals;
  • Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups;
  • Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data;
  • Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose;
  • Transparency and explainability: There should be transparency and responsible disclosure so that people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them;
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge its use or output;
  • Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes and human oversight of AI systems should be enabled.

Furthermore, the European AI Alliance, set up by the European Commission, has developed a series of Ethics Guidelines for Trustworthy AI, which inform the EU's framework for AI. Point (22) states that AI systems do not operate in a lawless world: a number of legally binding rules at European, national and international levels already apply, or are relevant, to the development, deployment and use of AI systems today. We highlight point (26), which holds that achieving Trustworthy AI requires not only compliance with the law, which is but one of its three components, and point (27) on Robust AI, which states that even if an ethical purpose is ensured, individuals and society must also be confident that AI systems will not cause any unintentional harm. Such systems should perform in a safe, secure and reliable manner, with safeguards in place to prevent any adverse impact. The framework for trustworthy AI proposed by the European AI Alliance is presented below.

The Guidelines as a framework for Trustworthy AI

 

The guidelines as a framework for trustworthy AI state that AI system objectives should be clearly identified and justified. AI systems that help address areas of global concern, such as the United Nations Sustainable Development Goals, should be encouraged. Ideally, AI systems should be used for the benefit of all human beings, including future generations, while respecting human rights, diversity, and the autonomy of individuals. They should be inclusive and accessible, and should not discriminate against individuals, communities or groups.

It is key to survey the landscape of AI ethics: the EU Parliament, for example, presents a series of issues and initiatives to raise awareness and prevent AI from affecting the democratic process through deception, unfair manipulation, or unjustified surveillance. It is therefore essential to consider AI's implications for politics and to develop strong regulations that respect and uphold privacy rights and data protection, ensure proper data governance, and provide transparency through information that helps people understand the key factors used in algorithmic decision-making.
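Transparency about the key factors behind an algorithmic decision can be illustrated with a toy sketch. Everything here (the factors, weights, and threshold) is hypothetical; the point is only that for a simple linear scoring model, a decision can be decomposed into per-factor contributions that an affected person could inspect and contest.

```python
# Hypothetical sketch of explainability for a simple linear scoring model:
# each decision is broken down into per-factor contributions.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "missed_payments": -0.8}
THRESHOLD = 2.0  # minimum total score for approval (invented)

def score(applicant):
    """Total score: the weighted sum of the applicant's factors."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    """Return the decision plus each factor's contribution to the score."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    decision = "approve" if score(applicant) >= THRESHOLD else "reject"
    return decision, contributions

decision, why = explain({"income": 3, "years_employed": 2, "missed_payments": 1})
print(decision)  # reject
# Show which factors pushed the score down, most negative first.
for factor, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{factor}: {contribution:+.1f}")
```

Real-world systems are rarely this legible, which is exactly the regulatory point: without some equivalent of this breakdown, people affected by a decision have nothing concrete to challenge.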

 

Activity

  • You can analyse the Cambridge Analytica case with your students (see the article in WIRED magazine and The Cambridge Analytica Files from The Guardian) and discuss how easily democracy can be jeopardised.
  • As a complement, you can watch the video below and have a group discussion in the class to raise awareness about the consequences of biased data.

 

2.3.2 • Examining AI ethics •

To design teaching and learning activities on the ethical boundaries of AI, algorithms, and machine learning, we need to address how their opacity affects us all, directly and indirectly. Students, as citizens, need to develop the awareness and competencies to participate in democratic discussions and to help create legal frameworks that prevent misuse or unethical uses of AI. Accordingly, UNESCO holds that we need to educate algorithms, while citizens need to understand the potential problems and, consequently, challenge them.

Safiya Umoja Noble has worked on showcasing how algorithms can be a tool of oppression, opening a discussion about unethical or illegal uses of AI and algorithms; examples from around the world can be categorised as follows:

Activity

  • To discuss discrimination through algorithms, you can start by asking your students to play with the Pre-crime Calculator, an interactive experience that takes you into the world of predictive policing. How much of a potential suspect or victim are you in the eyes of the system? And which areas of your city should you avoid next week so as not to get caught up in crime?
  • Then, ask the students to take an online personality test, such as the Business Personality Profile, using a male and a female identity with similar characteristics, and compare the results of the test.
  • Finally, ask your students to share their experiences with the rest of the class.