De Kai
AI Professor @ HKUST CSE / Berkeley ICSI / The Future Society
Table of Contents

Preface, Afterword: The toxic AI cocktail
AI and social disruption
Deepfakes, chatbots, and drones: how AI democratizes weapons of mass destruction and disrupts civilization with information disorder and lethal autonomous weapons (CILO-1, 5, 6)
Provocation:
Required reading:
- RAI Preface, Afterword
- EAD p68-89, "Well-being"
Suggested materials:
PDF of the following is available at
- Caitlin Andrews. 2025. European Commission withdraws AI Liability Directive from consideration. https://iapp.org/news/a/european-commission-withdraws-ai-liability-directive-from-consideration (retrieved 12 Feb 2025).
- Saad Siddiqui, Kristy Loke, Stephen Clare, Marianne Lu, Aris Richardson, Lujain Ibrahim, Conor McGlynn, and Jeffrey Ding. 2025. Promising Topics for US–China Dialogues on AI Safety and Governance. Technical report, Oxford Martin School, University of Oxford; Safe AI Forum. https://www.oxfordmartin.ox.ac.uk/publications/promising-topics-for-us-china-dialogues-on-ai-safety-and-governance
Exercises: https://forms.gle/kwP1s5QwarPRjaip7
- Discuss how the emergence of AI might alter analyses of Carl Schmitt’s (1932) advocacy for making a “friend-enemy distinction” in The Concept of the Political.
- Contrast how a deontological rule-based AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
- Contrast how a consequentialist AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.
- Contrast how a virtue AI ethics would look, assuming (a) Schmitt’s “friend-enemy distinction” should be made, versus assuming (b) Schmitt’s “friend-enemy distinction” should not be made.