AI Ethics course companion for Raising AI

Supplementary materials for a common core university course on AI Ethics.


Grading scheme

  • 26% exercises, quizzes, assignments
  • 20% midterm
  • 25% class participation
  • 29% final project

Syllabus

 

Chapter 2: Our artificial children

Trolley problems everywhere!

Overview and orientation to topics of fairness, accountability, and transparency in society, AI, and machine learning; the impact of AI and automation on labor and the job market (IEEE foundation of methodologies to guide ethical research and design) CILO-1, 5, 8
 
LECTURE 1
 
Provocation:
"The Trolley Problem", The Good Place, s02e05
 
Required reading:
  • EAD p9-35, "From Principles to Practice", "General Principles"
 
Suggested materials:
The Good Place might be just a sitcom, but excellent introductory ethics books have been based on it.
 
Exercises:
  1. How should the AIs in self-driving cars make life-and-death decisions when suddenly faced with unexpected real-world emergencies?
  2. Can AIs be trusted to make those decisions?
  3. Statistics show that self-driving AIs are far less likely to injure or kill people than human drivers. Is it more ethical to allow or to prohibit self-driving cars? (Notice that this dilemma is itself yet another trolley problem!)
  4. If a self-driving car is at fault in an accident, who is accountable? The owner of the car? The responsible human in the car? The manufacturer of the car? The maker of the AI in the car? Society at large? Nobody?
 
LECTURE 2
 
Provocation:
 
Required reading:
 
Suggested materials:
The creator of the Moral Machine, Iyad Rahwan, is interviewed on my podcast.
  • De Kai, host (2025). “What have machines learned about human ethics? MIT Moral Machine creator Iyad Rahwan and De Kai”. De Kai on AI, podcast, s01.
 
Exercises:
  1. Can you give logical rules to describe how a self-driving AI should make decisions? (A deliberately naive sketch follows this list.)
  2. What criteria and objectives should a self-driving AI align to in its decision making?
  3. Are those culturally dependent?
  4. What is fairness?
  5. What happens to human taxi drivers and truck drivers?
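
As a starting point for exercise 1, here is a minimal, deliberately naive sketch of hand-written decision rules for a self-driving AI. Everything in it (the Emergency record, the decide function, and the rules themselves) is a hypothetical illustration in Python, not any real system; notice how quickly such rules run out of answers or start demanding trolley-style value judgments.

# Hypothetical, deliberately naive rule-based emergency policy (illustration only).
from dataclasses import dataclass

@dataclass
class Emergency:
    pedestrians_ahead: int      # people in the car's current path
    pedestrians_if_swerve: int  # people in the alternative path
    can_brake_in_time: bool

def decide(e: Emergency) -> str:
    # Rule 1: if braking avoids all harm, always brake.
    if e.can_brake_in_time:
        return "brake"
    # Rule 2: never swerve into more people than are already at risk.
    if e.pedestrians_if_swerve < e.pedestrians_ahead:
        return "swerve"
    # Rule 3: otherwise stay the course -- but is inaction morally neutral?
    return "stay"

# A trolley-style case the rules don't really resolve:
# swerving endangers one bystander, staying endangers one pedestrian.
print(decide(Emergency(pedestrians_ahead=1, pedestrians_if_swerve=1,
                       can_brake_in_time=False)))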

Chapter 17: Lessons from the history of AI

Paradigms of AI ethics

Descriptive versus prescriptive and predictive ethics; relates classic philosophy of normative/comparative ethics and deontological/consequentialist/virtue ethics to the problem of AI ethics, and discusses why purely rule-based AI ethics will fail (IEEE goal of human rights; IEEE objective of legal frameworks) CILO-1, 2
 
Provocation:
 
Required reading:
  • EAD p36-67, "Classical Ethics in A/IS"
 
Suggested materials:
 
Exercises:
  1. Suggest real-world examples of trolley problems where one or more of Asimov’s Laws of Robotics contradict each other.
  2. Suggest real-world examples of trolley problems where one or more of the Asilomar AI Principles contradict each other.

Preface, Afterword: The toxic AI cocktail

AI and social disruption

Deepfakes, chatbots, and drones: how AI democratizes weapons of mass destruction and disrupts civilization with information disorder and lethal autonomous weapons CILO-1, 5, 6
 
Provocation:
 
Required reading:
  • RAI Preface, Afterword
  • EAD p68-89, "Well-being"
 
Suggested materials:
PDF of the following is available at
 
Exercises:
  1. Discuss how the emergence of AI might alter analyses of Carl Schmitt’s (1932) advocacy for making a “friend-enemy distinction” in The Concept of the Political.
  2. Contrast how a deontological rule-based AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
  3. Contrast how a consequentialist AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
  4. Contrast how a virtue AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.

Chapter 1: How’s your parenting?

Artificial moral cognition

Embedding ethics into AIs themselves (IEEE foundation of embedding values into autonomous systems) CILO-1, 4, 7
 
Provocation:
 
Required reading:
  • EAD p169-197, "Embedding Values into Autonomous and Intelligent Systems"
 
Suggested materials:
  • Peter Slattery, Alexander K. Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, and Neil Thompson (2024). “The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence”. https://arxiv.org/pdf/2408.12622
 
Exercises:
  1. Can we have moral, ethical AIs?
    1. Can a dog be taught ethics?
    2. Can a dog be taught ethical behavior? If so, how?
    3. Can an LLM be taught ethical behavior? If so, how?
    4. Can an LLM be taught ethics? If so, how?
    5. Can a reasoning AI be taught ethics? If so, how?
    6. What are the pros and cons of teaching AI ethics?
  2. As we’ve seen, ethics norms differ from culture to culture. What ethics should AIs be aligned to?
    1. Does each culture need different AIs?
    2. What are the risks if each culture’s AIs are aligned to different ethics?
    3. How could such risks be mitigated?
  3. What is the proper response when AIs fail to behave ethically?

Chapter 3: Artificial gossips

Privacy, safety, security

[Inclusion and respect] Surveillance capitalism, identity theft (IEEE objective of personal data rights and individual access control) CILO-1, 6
 
Provocation:
 
Required reading:
  • EAD p110-123, "Personal Data and Individual Agency"
 
Suggested materials:
 
Exercises:
  1. What are the positive consequences of personal data being collected by AIs?
  2. What are the negative consequences of personal data being collected by AIs?
  3. How might online gossip by humans contribute to these consequences?
  4. What are the consequences — both positive and negative — of having different privacy regulations in different regions?

Chapter 4: Is our AI neurotypical?

Weak AI, strong AI, and superintelligence

Contrasts between different senses and levels of “AI” that impact human-machine interaction and society in very different ways CILO-1, 8
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  1. “Situationship” is an example of a recent word without which you could not easily have meaningful conversations exploring recent evolutions in relationships. What is another recent word, and what new kinds of meaningful conversations has it enabled?
  2. How quickly do you think superintelligences might create new words and start having conversations that humans would have trouble following?

Chapter 5: The Three Rs

Regurgitation, routine, remixing

How today's AI falls short of human intelligence CILO-1, 8
 
Provocation:
“Large Language Models explained briefly”, Grant Sanderson (3Blue1Brown), Nov 2024. For an exhibit at the Computer History Museum, Mountain View, California.
 
Required reading:
 
Suggested materials:
 
Exercises:
  1. Try to find examples of things that LLM-based AIs generate that aren’t fairly direct remixes of human-written material from the internet.
  2. How many unpredictable things do you do or say on an average day? Try to list actual examples from today.

Chapter 6: Toward mindfulness

Consciousness, sentience, dual-process theory

Consciousness, sentience, feeling versus thinking CILO-1, 5
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  • What are some things you originally did using system 2, but through repeated practice, now do using system 1? (Psychologists describe this process as compiling declarative knowledge into procedural knowledge.)

Chapter 7: Of two minds about AI

Artificial system 1 and artificial system 2

Unconscious automatic AI versus conscious controlled reasoning AI
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  • What are some system 1 kinds of tasks where AIs have instead tried to apply artificial system 2 kinds of models?
  • What are some system 2 kinds of tasks where AIs have instead tried to apply artificial system 1 kinds of models?

Chapter 8: Cognitive bias

Deviation from rationality

 
LECTURE 1
 
Provocation:
 
Required reading:
  • RAI ch8 (first half)
 
Suggested materials:
 
 
LECTURE 2
 
Provocation:
 
Required reading:
  • RAI ch8 (second half)
 
Suggested materials:
  • “Why incompetent people think they're amazing - David Dunning” TED-Ed, Nov 2017
 
Exercises:
  • Identify three of your own experiences where a cognitive bias caused you to make the wrong judgment, prediction, or decision.

Chapter 9: Algorithmic bias

Machine biases from nature and nurture

 
Provocation:
 
Required reading:
  • Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases”. Science 356, 183-186 (2017). DOI: 10.1126/science.aal4230
 
Suggested materials:
  • T. J. Sejnowski, ‘Large Language Models and the Reverse Turing Test’, Neural Computation, vol. 35, no. 3, pp. 309–342, Feb. 2023, doi: 10.1162/neco_a_01563
  • D. E. O’Leary, ‘Confirmation and Specificity Biases in Large Language Models: An Explorative Study’, IEEE Intell. Syst., vol. 40, no. 1, pp. 63–68, Jan. 2025, doi: 10.1109/MIS.2024.3513992
  • A. Acerbi and J. M. Stubbersfield, ‘Large language models show human-like content biases in transmission chain experiments’, Proc. Natl. Acad. Sci. U.S.A., vol. 120, no. 44, p. e2313790120, Oct. 2023, doi: 10.1073/pnas.2313790120.
  • A. K. Singh, B. Lamichhane, S. Devkota, U. Dhakal, and C. Dhakal, ‘Do Large Language Models Show Human-like Biases? Exploring Confidence—Competence Gap in AI’, Information, vol. 15, no. 2, p. 92, Feb. 2024, doi: 10.3390/info15020092
  • Luyang Lin, Lingzhi Wang, Jinsong Guo, and Kam-Fai Wong. “Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception”. Version 2, https://arxiv.org/abs/2403.14896, retrieved 10 Dec 2024
 
Exercises:
  • List three examples from the past 24 hours where the algorithms behind your social media feeds, newsfeeds, chatbot interactions, or search or recommendation engines have learned your biases. (A small sketch of how such learned associations can be measured, in the spirit of the required reading, follows.)
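
As a pointer for the exercise, here is a minimal sketch of the association-test idea behind the Caliskan et al. required reading: a learned bias shows up when a word sits systematically closer, by cosine similarity, to one attribute word than to another. The three-dimensional vectors are made-up toy numbers for illustration, not real embeddings.

# Minimal sketch of measuring a learned association with cosine similarity.
# The vectors below are made-up toy numbers, not real embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional "embeddings" (illustrative assumptions only).
vectors = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.1, 0.9, 0.2]),
}

def association(word, attr_a, attr_b):
    # Positive value: the word sits closer to attr_a than to attr_b.
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("engineer", "he", "she"))  # > 0 here, i.e. a learned association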

Chapter 10: Inductive bias

The necessary evil of mathematical biases for learning

The third of three different foundational kinds of bias: mathematical biases that are required to do any learning or generalization CILO-1, 5
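
As a minimal illustration of this point (using made-up toy data, not anything from the chapter): the same five observations, fit under two different built-in mathematical assumptions, generalize very differently outside the observed range.

# Toy illustration of inductive bias: the same five noisy, roughly linear points,
# fit under two different built-in assumptions, generalize very differently.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])   # made-up, roughly linear data

linear_fit = np.polyfit(x, y, deg=1)   # bias: "the world is a straight line"
wiggly_fit = np.polyfit(x, y, deg=4)   # bias: "any 4th-degree curve will do"

x_new = 6.0  # a point outside the observed range
print(np.polyval(linear_fit, x_new))   # extrapolates near the trend (about 6)
print(np.polyval(wiggly_fit, x_new))   # swings far away from the trend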
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  • Using your native language for this exercise, list three ideas that wouldn’t be natural for someone who doesn’t speak your native language to think of. https://forms.gle/WEjgEeMJ36P82jp67

Chapter 11: Storytelling: Learning to talk, learning to think

The grand cycle of intelligence

Learning to talk, learning to think; role of social media, recommendation engines, and search engines in computational propaganda and artificial storytellers CILO-1, 3, 5
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  • Think of some issue that has gotten politicized. Describe two different ways that story has been framed, from opposing standpoints. For each of the different stories, what metaphors have been chosen? What has been omitted from the partial truth in each story?

Chapter 12: Neginformation

Willful algorithmic negligence

[Sound, informed judgment] Information disorder, misinformation, disinformation, malinformation, and neginformation; collective intelligence CILO-1, 3, 5
 
Provocation:
 
Required reading:
 
Suggested materials:
 
Exercises:
  • How could the amount of neginformation in news stories be measured? (This is a difficult research question! It is well known that measuring recall is much harder than measuring precision; a small sketch of the difference follows below.) Try to imagine some possible approaches.
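
For reference, here is a minimal sketch, with made-up example sets, of why recall is the harder of the two to measure: precision needs only the stories a detector actually flagged, while recall needs the full ground-truth set of stories that should have been flagged, which for neginformation is exactly what we lack.

# Why measuring recall is harder than measuring precision (made-up toy sets).
flagged = {"story_1", "story_2", "story_3"}            # what a detector reported
should_have_flagged = {"story_2", "story_3",           # ground truth: everything it
                       "story_7", "story_9"}           # ought to have reported

true_positives = flagged & should_have_flagged

precision = len(true_positives) / len(flagged)              # needs only the flagged items
recall = len(true_positives) / len(should_have_flagged)     # needs the full, hard-to-know ground truth

print(f"precision = {precision:.2f}, recall = {recall:.2f}")   # 0.67, 0.50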

Chapter 13: Algorithmic censorship

Misinformation theory

[Open minded diversity of opinion] Catering to the id: key challenges for social media, recommendation engines, and search engines CILO-1, 3, 5
 
Provocation:
 
Required reading:
 
Suggested materials:
  • Eli Pariser (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.
 
Exercises:
  • Sometimes it’s suggested that people should be allowed to choose their own algorithmic censorship criteria. Given what we’ve studied about cognitive biases, what are the unintended consequences that could be dangerous?
  • What percentage of a search engine’s or chatbot’s output should give the user exactly what they want (whether factually true or not), versus suggest things the user may not have wanted but that are better grounded logically and empirically?

Chapter 14: Schooling our artificial children

AI ethics methodologies

The roles and responsibilities of AI/ML scientists, tech companies, modelers, think tanks, regulators and governments; approaches to formulating AI ethics methodologies (IEEE goal of accountability “what's the responsibility and accountability of an ML designer, an ML professional teacher, an ML end user teacher, and an ML end user operator?”; IEEE objective of legal frameworks) CILO-1
 
Provocation:
 
Required reading:
  • EAD p124-139, "Methods to Guide Ethical Research and Design"
 
Suggested materials:
 
Exercises:
  • How much say do you think the general population should have in determining government AI policies?
  • How much say do you think government should have over AI companies’ management and machine learning engineers? What are the consequences?

Chapter 15: Can AI be mindful?

Self-awareness, mindlessness, mindsets, mindfulness, metacognition

Chapter 16: Nurturing empathy, intimacy, and transparency

Explainable AI, illusion of explainability, affective computing, artificial empathy, artificial intimacy, translation mindset

Affective computing and artificial intimacy (IEEE future technology concern of affective computing) CILO-1, 3
Explainable AI and the illusion of explainability; mindful AI and its societal impact (IEEE goal of transparency; IEEE objective of transparency and individual rights) CILO-1, 3, 4
 
Provocation:
 
Required reading:
  • EAD p90-109, "Affective Computing"
 
Suggested materials:
  • Read about “supernormal” in Nell Watson (2024). Taming the Machine. London: Kogan Page.
 
Exercises:

Chapter 18: Planning for retirement

AGI safety

Is the future of humans extinction, to become zoo animals or pets, to upload, or to merge with AIs? (IEEE future technology concern of safety and beneficence of artificial general intelligence (AGI) and artificial superintelligence; IEEE future technology concern of mixed reality) CILO-1, 7, 8
 
Provocation:
 
Required reading:
  • EAD p198-281, "Policy", "Law"

Final project

Challenge:
How should the criteria for our algorithmic censors be decided?
Give convincing arguments for your answer.
Be sure to pinpoint the risks you’re focusing on, and to consider ethical, social, cultural, psychological, legal, and technological factors.
Maximum team size: 2
Format:
  • 2-minute (max) video shot in vertical 9:16 (Instagram/TikTok style) format
  • a .doc file containing:
    • for each team member: full name (mark the family name!) along with nickname, HKUST student ID number, ITSC username/email, and permanent (non-HKUST) email and mobile/WhatsApp number(s)
    • clean transcript of your video
Submission: Please zip your video and the .doc file into a single .zip file, and submit it via the CASS automated assignment collection system.
  1. Login: Go to https://course.cse.ust.hk/cass/student/#!/submit and log in using your CSD account. (If you haven’t activated your CSD account yet, please follow the instructions in https://cssystem.cse.ust.hk/UGuides/activation.html to do so.)
  2. Select Course and Assignment:
    • After logging in, select ‘COMP1944’ under the course section.
    • Choose ‘Final Project’ under the assignment list.
  3. Submit: Upload your zip file and click submit. If you submit multiple times, we will use the last submission. Submit in one of the following ways:
    1. If the zipped file size is <100 MB, you can upload it directly to CASS.
    2. If the zipped file size is >=100 MB, please upload it to a cloud drive (e.g., Google Drive, OneDrive). Then, submit a file to CASS that includes the link to download the zipped file. For example, you can write your Google Drive share link in a .txt file and submit that to CASS.
Due: 29 May 2025 at 11:59pm

Required texts

[RAI] Raising AI, by De Kai. MIT Press. 2025.
[EAD] Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (1st edition), The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, by IEEE. 2019.

Reference material


Written by

De Kai

AI Professor @ HKUST CSE / Berkeley ICSI / The Future Society