De Kai
AI Professor @ HKUST CSE / Berkeley ICSI / The Future Society
Table of Contents
- Grading scheme
- Syllabus
- Chapter 2: Our artificial children
- Trolley problems everywhere!
- Chapter 17: Lessons from the history of AI
- Paradigms of AI ethics
- Preface, Afterword: The toxic AI cocktail
- AI and social disruption
- Chapter 1: How’s your parenting?
- Artificial moral cognition
- Chapter 3: Artificial gossips
- Privacy, safety, security
- Chapter 4: Is our AI neurotypical?
- Weak AI, strong AI, and superintelligence
- Chapter 5: The Three Rs
- Regurgitation, routine, remixing
- Chapter 6: Toward mindfulness
- Consciousness, sentience, dual-process theory
- Chapter 7: Of two minds about AI
- Artificial system 1 and artificial system 2
- Chapter 8: Cognitive bias
- Deviation from rationality
- Chapter 9: Algorithmic bias
- Machine biases from nature and nurture
- Chapter 10: Inductive bias
- The necessary evil of mathematical biases for learning
- Chapter 11: Storytelling: Learning to talk, learning to think
- The grand cycle of intelligence
- Chapter 12: Neginformation
- Willful algorithmic negligence
- Chapter 13: Algorithmic censorship
- Misinformation theory
- Chapter 14: Schooling our artificial children
- AI ethics methodologies
- Chapter 15: Can AI be mindful?
- Self-awareness, mindlessness, mindsets, mindfulness, metacognition
- Chapter 16: Nurturing empathy, intimacy, and transparency
- Explainable AI, illusion of explainability, affective computing, artificial empathy, artificial intimacy, translation mindset
- Chapter 18: Planning for retirement
- AGI safety
- Final project
- Required texts
- Reference material

The Raising AI course companion for a common core university course on AI Ethics.
Grading scheme
- 26% exercises, quizzes, assignments
- 20% midterm
- 25% class participation
- 29% final project
Syllabus
Chapter 2: Our artificial children
Trolley problems everywhere!
Overview and orientation to fairness, accountability, and transparency in society, AI, and machine learning; the impact of AI and automation on labor and the job market (IEEE foundation of methodologies to guide ethical research and design) CILO-1, 5, 8
LECTURE 1
Provocation:
"The Trolley Problem", The Good Place, s02e05
Required reading:
- RAI ch2
- EAD p9-35, "From Principles to Practice", "General Principles"
Suggested materials:
The Good Place might be just a sitcom, but excellent introductory ethics books have been based on it.
Exercises:
- How should the AIs in self-driving cars make life-and-death decisions when suddenly faced with unexpected real world emergencies?
- Can AIs be trusted to make those decisions?
- Statistics show that self-driving AIs are far less likely to injure or kill people than human drivers. Is it more ethical to allow or to prohibit self-driving cars? (Notice that this dilemma is itself yet another trolley problem!)
- If a self-driving car is at fault in an accident, who is accountable? The owner of the car? The responsible human in the car? The manufacturer of the car? The maker of the AI in the car? Society at large? Nobody?
LECTURE 2
Provocation:
Required reading:
- Edmond Awad et al. (2018). “The Moral Machine experiment,” Nature 563(7729): 59–64.
Suggested materials:
The creator of the Moral Machine, Iyad Rahwan, is interviewed on my podcast.
- De Kai, host (2025). “What have machines learned about human ethics? MIT Moral Machine creator Iyad Rahwan and De Kai”. De Kai on AI, podcast, s01.
Exercises:
- Can you give logical rules to describe how a self-driving AI should make decisions?
- What criteria and objectives should a self-driving AI align to in its decision making?
- Are those culturally dependent?
- What is fairness?
- What happens to human taxi drivers and truck drivers?
Chapter 17: Lessons from the history of AI
Paradigms of AI ethics
Descriptive versus prescriptive and predictive ethics; relates classic philosophy of normative/comparative ethics and deontological/consequentialist/virtue ethics to the problem of AI ethics, and discusses why purely rule-based AI ethics will fail (IEEE goal of human rights; IEEE objective of legal frameworks) CILO-1, 2
Provocation:
Required reading:
- RAI ch17
- EAD p36-67, "Classical Ethics in A/IS"
- Fabio Morandín-Ahuerma (2023). “Twenty-three Asilomar Principles for Artificial Intelligence and the Future of Life”. Originally published in Spanish as “Veintitrés principios de Asilomar para la inteligencia artificial y el futuro de la vida”. In: F. Morandín (ed.), Principios normativos para una ética de la inteligencia artificial, 5–27. Concytep. CC BY-NC-SA 4.0. https://osf.io/dgnq8/download/?format=pdf
Suggested materials:
- Future of Life Institute (2017). “Asilomar AI Principles”. Beneficial AI. Asilomar, California. https://futureoflife.org/open-letter/ai-principles/ (retrieved 10 Feb 2025).
- Future of Life Institute (2019). Beneficial AGI. San Juan, Puerto Rico. https://futureoflife.org/event/beneficial-agi-2019/ (retrieved 10 Feb 2025).
- Christoph Salge and Daniel Polani (2017). “Empowerment As Replacement for the Three Laws of Robotics”. Frontiers in Robotics and AI 4, article 25. June.
Exercises:
- Suggest real-world examples of trolley problems in which Asimov’s Laws of Robotics contradict each other.
- Suggest real-world examples of trolley problems in which the Asilomar AI Principles contradict each other.
Preface, Afterword: The toxic AI cocktail
AI and social disruption
Deepfakes, chatbots, and drones: how AI democratizes weapons of mass destruction and disrupts civilization with information disorder and lethal autonomous weapons CILO-1, 5, 6
Provocation:
Required reading:
- RAI Preface, Afterword
- EAD p68-89, "Well-being"
Suggested materials:
- Caitlin Andrews. 2025. European Commission withdraws AI Liability Directive from consideration. https://iapp.org/news/a/european-commission-withdraws-ai-liability-directive-from-consideration (retrieved 12 Feb 2025).
- Saad Siddiqui, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Lujain Ibrahim, Conor McGlynn, and Jeffrey Ding. 2025. Promising Topics for US–China Dialogues on AI Safety and Governance. Technical report, Oxford Martin School, University of Oxford; Safe AI Forum. https://www.oxfordmartin.ox.ac.uk/publications/promising-topics-for-us-china-dialogues-on-ai-safety-and-governance
Exercises: https://forms.gle/kwP1s5QwarPRjaip7
- Discuss how the emergence of AI might alter analyses of Carl Schmitt’s (1932) advocacy for making a “friend-enemy distinction” in The Concept of the Political.
- Contrast how a deontological rule-based AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
- Contrast how a consequentialist AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
- Contrast how a virtue AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
Chapter 1: How’s your parenting?
Artificial moral cognition
Embedding ethics into AIs themselves (IEEE foundation of embedding values into autonomous systems) CILO-1, 4, 7
Provocation:
Required reading:
- RAI ch1
- EAD p169-197, "Embedding Values into Autonomous and Intelligent Systems"
Suggested materials:
- Peter Slattery, Alexander K. Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, and Neil Thompson (2024). “The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence”. https://arxiv.org/pdf/2408.12622
Exercises: https://forms.gle/PqWx5ZjDQjhHuw5g8
- Can we have moral, ethical AIs?
- Can a dog be taught ethics?
- Can a dog be taught ethical behavior? If so, how?
- Can an LLM be taught ethical behavior? If so, how?
- Can an LLM be taught ethics? If so, how?
- Can a reasoning AI be taught ethics? If so, how?
- What are the pros and cons of teaching AI ethics?
- As we’ve seen, ethics norms differ from culture to culture. What ethics should AIs be aligned to?
- Does each culture need different AIs?
- What are the risks if each culture’s AIs are aligned to different ethics?
- How could such risks be mitigated?
- What is the proper response when AIs fail to behave ethically?
Chapter 3: Artificial gossips
Privacy, safety, security
[Inclusion and respect] Surveillance capitalism, identity theft (IEEE objective of personal data rights and individual access control) CILO-1, 6
Provocation:
"Artificial Gossips" De Kai @ TEDxKlagenfurt
Required reading:
- RAI ch3
- EAD p110-123, "Personal Data and Individual Agency"
Suggested materials:
Exercises:
- What are the positive consequences of personal data being collected by AIs?
- What are the negative consequences of personal data being collected by AIs?
- How might online gossip by humans contribute to the consequences?
- What are the consequences — both positive and negative — of having different privacy regulations in different regions?
Chapter 4: Is our AI neurotypical?
Weak AI, strong AI, and superintelligence
Contrasts between different senses and levels of “AI” that impact human-machine interaction and society in very different ways CILO-1, 8
Provocation:
"Can an AI Really Relate? What's Universal in Language and Music" [alternate] De Kai @ TEDxBeijing
Required reading:
- RAI ch4
Suggested materials:
- Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. 2024. Position: levels of AGI for operationalizing progress on the path to AGI. In Proceedings of the 41st International Conference on Machine Learning, volume 235, pages 36308–36321, Vienna, Austria. JMLR.org.
Exercises:
- “Situationship” is an example of a recent word without which you could not easily have meaningful conversations about recent evolutions in relationships. What is another recent word, and what new kinds of meaningful conversations has it enabled?
- How quickly do you think superintelligences might create new words and start having conversations that humans would have trouble following?
Chapter 5: The Three Rs
Regurgitation, routine, remixing
How today's AI falls short of human intelligence CILO-1, 8
Provocation:
Large Language Models explained briefly, Grant Sanderson (3Blue1Brown), Nov 2024. For an exhibit at the Computer History Museum, Mountain View, California.
Required reading:
- RAI ch5
Suggested materials:
- Xiaoyang Chen, Ben He, Hongyu Lin, Xianpei Han, Tianshu Wang, Boxi Cao, Le Sun, and Yingfei Sun. 2024. Spiral of Silence: How is Large Language Model Killing Information Retrieval?—A Case Study on Open Domain Question Answering. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14930–14951, Bangkok, Thailand. Association for Computational Linguistics.
Exercises:
- Try to find examples of output generated by LLM-based AIs that isn’t a fairly direct remix of human-written material from the internet.
- How many unpredictable things do you do or say on an average day? Try to list actual examples from today.
Chapter 6: Toward mindfulness
Consciousness, sentience, dual-process theory
Consciousness, sentience, feeling versus thinking CILO-1, 5
Provocation:
Thinking, Fast and Slow by Daniel Kahneman | Animated Book Summary, FightMediocrity, Jun 2015.
Required reading:
- RAI ch6
Suggested materials:
Exercises:
- What are some things you originally did using system 2, but through repeated practice, now do using system 1? (Psychologists describe this process as compiling declarative knowledge into procedural knowledge.)
Chapter 7: Of two minds about AI
Artificial system 1 and artificial system 2
Unconscious automatic AI versus conscious controlled reasoning AI
Provocation:
- Daniel Kahneman: Deep Learning (System 1 and System 2) | AI Podcast Clips, Lex Fridman, Jan 2020.
Required reading:
- RAI ch7
Suggested materials:
- Amit Sheth, Kaushik Roy, and Manas Gaur. 2023. Neurosymbolic Artificial Intelligence (Why, What, and How). IEEE Intelligent Systems, 38(3):56–62.
- Yoshua Bengio. 2019. System 1 v System 2 Cognition in Machine Learning. RE•WORK, Dec 2019. https://www.youtube.com/watch?v=5p0MkXdmGpE
Exercises:
- What are some system 1 kinds of tasks where AIs have instead tried to apply artificial system 2 kinds of models?
- What are some system 2 kinds of tasks where AIs have instead tried to apply artificial system 1 kinds of models?
Chapter 8: Cognitive bias
Deviation from rationality
LECTURE 1
Provocation:
- "Fundamental Attribution Error | Concepts Unwrapped" Ethics Unwrapped @ McCombs School of Business, University of Texas at Austin
Required reading:
- RAI ch8 (first half)
Suggested materials:
- Dugas, M. J., Hedayati, M., Karavidas, A., Buhr, K., Francis, K., & Phillips, N. A. (2005). Intolerance of Uncertainty and Information Processing: Evidence of Biased Recall and Interpretations. Cognitive Therapy and Research, 29(1), 57–70. https://doi.org/10.1007/s10608-005-1648-9
Exercises: https://forms.gle/SRxYSyiikESPRxdr6
LECTURE 2
Provocation:
- “Introduction to Behavioral Ethics | Concepts Unwrapped” Ethics Unwrapped @ McCombs School of Business, University of Texas at Austin
- “Confirmation Bias | Ethics Defined” Ethics Unwrapped @ McCombs School of Business, University of Texas at Austin
- “John Cleese Explains Dunning-Kruger Effect”, Poetry Eye, YouTube Shorts
- “Why we all fall victim to the Dunning-Kruger effect - BBC REEL”, BBC Global, Jun 2022
- “The Dunning Kruger Effect”, Sprouts, Mar 2021
- “The Irony of the Dunning-Kruger Effect”, Vallis | Video Essays, Oct 2021
Required reading:
- RAI ch8 (second half)
Suggested materials:
- “Why incompetent people think they're amazing - David Dunning” TED-Ed, Nov 2017
Exercises:
- Identify three of your own experiences where a cognitive bias caused you to make the wrong judgment, prediction, or decision.
Chapter 9: Algorithmic bias
Machine biases from nature and nurture
Provocation:
- “Algorithmic Bias | Ethics Defined” Ethics Unwrapped @ McCombs School of Business, University of Texas at Austin
Required reading:
- RAI ch9
- Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan (2017). “Semantics derived automatically from language corpora contain human-like biases”. Science 356: 183–186. DOI: 10.1126/science.aal4230
Suggested materials:
- “A Story of Discrimination and Unfairness (33c3)” Aylin Caliskan @ media.ccc.de, Dec 2016
- T. J. Sejnowski, ‘Large Language Models and the Reverse Turing Test’, Neural Computation, vol. 35, no. 3, pp. 309–342, Feb. 2023, doi: 10.1162/neco_a_01563
- D. E. O’Leary, ‘Confirmation and Specificity Biases in Large Language Models: An Explorative Study’, IEEE Intell. Syst., vol. 40, no. 1, pp. 63–68, Jan. 2025, doi: 10.1109/MIS.2024.3513992
- A. Acerbi and J. M. Stubbersfield, ‘Large language models show human-like content biases in transmission chain experiments’, Proc. Natl. Acad. Sci. U.S.A., vol. 120, no. 44, p. e2313790120, Oct. 2023, doi: 10.1073/pnas.2313790120.
- A. K. Singh, B. Lamichhane, S. Devkota, U. Dhakal, and C. Dhakal, ‘Do Large Language Models Show Human-like Biases? Exploring Confidence—Competence Gap in AI’, Information, vol. 15, no. 2, p. 92, Feb. 2024, doi: 10.3390/info15020092
- Luyang Lin, Lingzhi Wang, Jinsong Guo, and Kam-Fai Wong. “Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception”. Version 2, https://arxiv.org/abs/2403.14896, retrieved 10 Dec 2024
Exercises:
- List three examples from the past 24 hours where the algorithms behind your social media feeds, newsfeeds, chatbot interactions, or search or recommendation engines have learned your biases. (A sketch of how such learned associations can be quantified follows below.)
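To make the Caliskan et al. reading concrete, here is a minimal sketch of the association score underlying their Word Embedding Association Test. The tiny 2-d vectors below are hypothetical, purely for illustration; the actual test uses pretrained embeddings over sets of attribute words and adds a permutation-based significance test.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much more strongly w associates with attributes A than B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Toy 2-d "embeddings" (hypothetical values, for illustration only)
emb = {
    "career": np.array([0.9, 0.1]),
    "family": np.array([0.1, 0.9]),
    "he":     np.array([0.8, 0.2]),
    "she":    np.array([0.2, 0.8]),
}

A, B = [emb["he"]], [emb["she"]]
print("career:", association(emb["career"], A, B))  # positive: leans toward "he"
print("family:", association(emb["family"], A, B))  # negative: leans toward "she"
```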
Chapter 10: Inductive bias
The necessary evil of mathematical biases for learning
The third of three different foundational kinds of bias: mathematical biases that are required to do any learning or generalization CILO-1, 5
Provocation:
- “Linguistic Relativity: How Language Shapes Thought”, Sprouts, Nov 2023
Required reading:
- RAI ch10
Suggested materials:
- Donald D. Hoffman and Manish Singh (2024). “Perception, Evolution, and the Explanatory Scope of Scientific Theories”. Journal of Consciousness Studies 31(9–10): 29–41. DOI: 10.53765/20512201.31.9.029. https://www.imprint.co.uk/wp-content/uploads/2024/10/Hoffman_Open_Access.pdf
- Yoshua Bengio. 2021. Cognitively-inspired inductive biases for higher-level cognition and systematic generalization. Keynote, ML in PL 2021. https://www.youtube.com/watch?v=02ABljCu5Zw (released Mar 2022)
Exercises:
- Using your native language for this exercise, list three ideas that wouldn’t be natural for someone who doesn’t speak your native language to think of. https://forms.gle/WEjgEeMJ36P82jp67
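To see why inductive bias is a necessary evil rather than a flaw, here is a minimal sketch (with hypothetical data) of two learners trained on identical examples that generalize completely differently, purely because of their built-in mathematical biases. Without some such bias, no generalization would be possible at all.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20)
y = x**2 + 0.05 * rng.normal(size=20)  # hidden truth: quadratic (hypothetical)

# Inductive bias 1: "the world is linear" -- least-squares line
slope, intercept = np.polyfit(x, y, 1)

# Inductive bias 2: "the world is locally constant" -- 1-nearest neighbor
def nn_predict(q):
    return y[np.argmin(np.abs(x - q))]

for q in (0.0, 2.0):  # inside vs. far outside the training range
    print(f"x={q}: linear={slope * q + intercept:.2f}, "
          f"1-NN={nn_predict(q):.2f}, truth={q**2:.2f}")
```

Both learners fit the training range tolerably, but each extrapolates according to its own bias rather than the hidden truth; neither is “unbiased”.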
Chapter 11: Storytelling: Learning to talk, learning to think
The grand cycle of intelligence
Learning to talk, learning to think; role of social media, recommendation engines, and search engines in computational propaganda and artificial storytellers CILO-1, 3, 5
Provocation:
- "How you can help transform the internet into a place of trust" Claire Wardie @ TED 2019 [transcript]
Required reading:
- RAI ch11
- Emma Hoes, Brian Aitken, Jingwen Zhang, Tomasz Gackowski, and Magdalena Wojcieszak. “Prominent misinformation interventions reduce misperceptions but increase scepticism”. Nature Human Behaviour 8, 1545–1553 (2024). https://doi.org/10.1038/s41562-024-01884-x
Suggested materials:
- George Lakoff (1991). “Metaphor and war: The metaphor system used to justify war in the gulf.” Peace Research 23(2/3): 25-32. May 1991.
- “Meta to End Fact-Checking Program in Shift Ahead of Trump Term” @ New York Times, 7 Jan 2025
- “Inside Mark Zuckerberg’s Sprint to Remake Meta for the Trump Era” @ New York Times, 10 Jan 2025
- “Supreme Court Seems Poised to Uphold Law That Could Ban TikTok” @ New York Times, 11 Jan 2025
- “Meta’s ‘free speech’ overhaul sparks advertisers’ concern” @ Financial Times, 11 Jan 2025
Exercises:
- Think of some issue that has gotten politicized. Describe two different ways that story has been framed, from opposing standpoints. For each of the different stories, what metaphors have been chosen? What has been omitted from the partial truth in each story?
Chapter 12: Neginformation
Willful algorithmic negligence
[Sound, informed judgment] Information disorder, misinformation, disinformation, malinformation, and neginformation; collective intelligence CILO-1, 3, 5
Provocation:
- TEDx Editor’s Pick 15 Apr 2025: “How partial truths are a threat to democracy: The dangers of ‘neginformation’” De Kai @ TEDxKlagenfurt
Required reading:
- RAI ch12
Suggested materials:
- “‘I can’t go toe to toe with social media.’ Top U.S. health official reflects, regrets.” @ Washington Post 12 Jan 2025
Exercises:
- How could the amount of neginformation in news stories be measured? (This is a difficult research question! It is well known that measuring recall is much harder than measuring precision.) Try to imagine some possible approaches; one starting point is sketched below.
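As a hint for the exercise above, here is a toy sketch (all sets below are made up) of why measuring omissions is a recall problem: precision can be checked against only the claims a story actually makes, but recall requires knowing the full set of relevant facts, which no one fully possesses.

```python
# Hypothetical toy illustration: neginformation (omission) is a recall problem.
story_claims = {"fact_A", "fact_B", "claim_X"}        # what the story says
verified_true = {"fact_A", "fact_B"}                  # claims that check out
all_relevant_facts = {"fact_A", "fact_B", "fact_C", "fact_D"}  # unknowable in practice

# Precision: of what the story said, how much is true? Needs only the story.
precision = len(story_claims & verified_true) / len(story_claims)

# Recall: of everything relevant, how much did the story include?
# Needs the full ground-truth set -- the hard part.
recall = len(story_claims & all_relevant_facts) / len(all_relevant_facts)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```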
Chapter 13: Algorithmic censorship
Misinformation theory
[Open minded diversity of opinion] Catering to the id: key challenges for social media, recommendation engines, and search engines CILO-1, 3, 5
Provocation:
- “Beware online ‘filter bubbles’” Eli Pariser @ TED, Mar 2011
- "The disastrous consequences of information disorder: AI is preying upon our unconscious cognitive biases" De Kai @ Boma COVID-19 Summit (Session 2)
Required reading:
- RAI ch13
Suggested materials:
- Eli Pariser (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.
- “Echo chamber”. Wikipedia, retrieved 21 Apr 2025.
Exercises:
- Sometimes it’s suggested that people should be allowed to choose their own algorithmic censorship criteria. Given what we’ve studied about cognitive biases, what are the unintended consequences that could be dangerous?
- What percentage of a search engine’s or chatbot’s output should give the human user exactly what they want (whether factually true or not), versus suggest things the user may not have wanted but that are better grounded logically and empirically? (A sketch of this tradeoff follows below.)
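The last exercise is, in effect, asking where to set a mixing weight. Here is a minimal sketch, with hypothetical item names and scores, of ranking under such a blend.

```python
# Hypothetical sketch: rank items by blending predicted engagement
# ("exactly what the user wants") with an independent grounding score,
# where lam is the percentage-like tradeoff the exercise asks about.
items = {
    "viral_rumor":        {"engagement": 0.9, "grounding": 0.2},
    "careful_report":     {"engagement": 0.4, "grounding": 0.9},
    "balanced_explainer": {"engagement": 0.6, "grounding": 0.7},
}

def rank(lam):
    def score(name):
        s = items[name]
        return (1 - lam) * s["engagement"] + lam * s["grounding"]
    return sorted(items, key=score, reverse=True)

for lam in (0.0, 0.5, 1.0):
    print(f"lam={lam}: {rank(lam)}")
```

Sliding lam from 0 to 1 shifts the ranking from pure engagement optimization toward pure grounding; real systems face the same tradeoff with learned, noisy scores.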
Chapter 14: Schooling our artificial children
AI ethics methodologies
The roles and responsibilities of AI/ML scientists, tech companies, modelers, think tanks, regulators and governments; approaches to formulating AI ethics methodologies (IEEE goal of accountability: “what's the responsibility and accountability of an ML designer, an ML professional teacher, an ML end user teacher, and an ML end user operator?”; IEEE objective of legal frameworks) CILO-1
Provocation:
- “Why artificial intelligence developers say regulation is needed to keep AI in check” PBS NewsHour, 17 May 2023
- “How Should the U.S. Regulate AI?” Foreign Policy Association, 14 Jan 2025
Required reading:
- RAI ch14
- EAD p124-139, "Methods to Guide Ethical Research and Design"
Suggested materials:
- Jon Chun, Christian Schroeder de Witt, Katherine Elkins (2024). Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US. 5 Oct 2024. https://arxiv.org/abs/2410.21279
Exercises:
- How much say do you think the general population should have in determining government AI policies?
- How much say do you think government should have over AI companies’ management and machine learning engineers? What are the consequences?
Chapter 15: Can AI be mindful?
Self-awareness, mindlessness, mindsets, mindfulness, metacognition
Self-awareness, mindfulness, metacognition CILO-1, 5
Provocation:
Required reading:
- RAI ch15
Suggested materials:
Exercises:
Chapter 16: Nurturing empathy, intimacy, and transparency
Explainable AI, illusion of explainability, affective computing, artificial empathy, artificial intimacy, translation mindset
Affective computing and artificial intimacy (IEEE future technology concern of affective computing) CILO-1, 3
Explainable AI and the illusion of explainability; mindful AI and its societal impact (IEEE goal of transparency; IEEE objective of transparency and individual rights) CILO-1, 3, 4
Provocation:
- "Why Meaningful AI is Musical" [alternate] De Kai @ TEDxZhujiangNewTown
Required reading:
- RAI ch16
- EAD p90-109, "Affective Computing"
Suggested materials:
- David Martens et al. (2025). “Beware of ‘Explanations’ of AI”. arXiv:2504.06791v1 [cs.LG], 9 Apr 2025.
- Read about “supernormal” in Nell Watson (2024). Taming the Machine. London: Kogan Page.
Exercises:
Chapter 18: Planning for retirement
AGI safety
Is the future of humans extinction, to become zoo animals or pets, to upload, or to merge with AIs? (IEEE future technology concern of safety and beneficence of artificial general intelligence (AGI) and artificial superintelligence; IEEE future technology concern of mixed reality) CILO-1, 7, 8
Provocation:
Required reading:
- RAI ch18
- EAD p198-281, "Policy", "Law"
Final project
Challenge:
How should the criteria for our algorithmic censors be decided?
Give convincing arguments for your answer.
Be sure to pinpoint the risks you’re focusing on, and to consider ethical, social, cultural, psychological, legal, and technological factors.
Maximum team size: 2
Format:
- 2 minute (max) video shot in vertical 9:16 (Instagram/TikTok style) format
- a .doc file containing:
- for each team member: full name (mark the family name!) along with nickname, HKUST student ID number, ITSC username/email, and permanent (non-HKUST) email and mobile/WhatsApp number(s)
- clean transcript of your video
Submission: Please zip your video and the .doc file into a single .zip folder, and submit it via the CASS automated assignment collection system.
- Login: Go to https://course.cse.ust.hk/cass/student/#!/submit and log in using your CSD account. (If you haven’t activated your CSD account yet, please follow the instructions in https://cssystem.cse.ust.hk/UGuides/activation.html to do so.)
- Select Course and Assignment:
- After logging in, select ‘COMP1944’ under the course section.
- Choose ‘Final Project’ under the assignment list.
- Submit: Upload your zip file and click submit. If you submit multiple times, we will use the last submission. Submit in one of the following ways:
- If the zipped file size is under 100 MB, you can upload it directly to CASS.
- If the zipped file size is 100 MB or larger, please upload it to a cloud drive (e.g., Google Drive, OneDrive). Then submit a file to CASS that includes the link to download the zipped file. For example, you can write your Google Drive share link in a .txt file and submit that to CASS.
Due: 29 May 2025 at 11:59pm
Required texts
[RAI] Raising AI: An Essential Guide to Parenting Our Future, by De Kai. MIT Press. May 13, 2025. ISBN 978-0262049764.
[EAD] Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (1st edition). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. IEEE, 2019.
Reference material