De Kai
AI Professor @ HKUST CSE / Berkeley ICSI / The Future Society
Chapter 13: Algorithmic censorship
Filter bubbles and echo chambers: what criteria should the AI algorithms use when deciding what to hide from us?
[Open-minded diversity of opinion] Catering to the id: key challenges for social media, recommendation engines, and search engines (CILO-1, 3, 5)
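To make the dynamic concrete, here is a minimal Python sketch of how engagement-optimized ranking can harden into a filter bubble. It is not from the course materials: the topics, the affinity scores, and the click model are all invented for illustration.

```python
import random

# Hypothetical content topics; any taxonomy would do.
TOPICS = ["politics_left", "politics_right", "science", "sports", "arts"]

def simulate_feed(rounds=50, feed_size=3, seed=0):
    rng = random.Random(seed)
    # Invented user model: uniform interest, with one mild initial preference.
    affinity = {t: 1.0 for t in TOPICS}
    affinity["politics_left"] = 1.5
    for _ in range(rounds):
        # The "algorithmic censorship" step: rank topics by predicted
        # engagement and hide everything below the top feed_size slots.
        shown = sorted(TOPICS, key=lambda t: -affinity[t])[:feed_size]
        for t in shown:
            # Clicks reinforce affinity, which reinforces ranking next round.
            if rng.random() < affinity[t] / (1 + affinity[t]):
                affinity[t] += 0.2
    return affinity

print(simulate_feed())
```

Topics that never make the cut can never earn clicks, so their scores stay frozen while the initially favored topic compounds: the ranker's own past choices, not the user's considered preferences, end up deciding what is hidden.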
Provocation:
- “Beware online ‘filter bubbles’” Eli Pariser @ TED, Mar 2011
- "The disastrous consequences of information disorder: AI is preying upon our unconscious cognitive biases" De Kai @ Boma COVID-19 Summit (Session 2)
Required reading:
- RAI ch13
Suggested materials:
- Eli Pariser (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.
- “Echo chamber”. Wikipedia, retrieved 21 Apr 2025.
Exercises:
- It is sometimes suggested that people should be allowed to choose their own algorithmic censorship criteria. Given what we’ve studied about cognitive biases, what dangerous unintended consequences could this have?
- What percentage of a search engine’s or chatbot’s output should give the user exactly what they want (whether factually true or not), versus surfacing results the user may not have wanted but that are better grounded logically and empirically? (See the sketch below for one way to make this trade-off concrete.)
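One way to frame the second exercise is a single blending parameter between the two objectives. This is a minimal sketch, not a real ranking system; the items, the scores, and the parameter name alpha are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Result:
    text: str
    wanted: float    # predicted "gives the user what they want", in [0, 1]
    grounded: float  # logical/empirical grounding (e.g. a fact-check score), in [0, 1]

def rank(results, alpha):
    """alpha=1.0 is pure wish-fulfillment; alpha=0.0 is pure grounding."""
    return sorted(results, key=lambda r: -(alpha * r.wanted + (1 - alpha) * r.grounded))

candidates = [
    Result("flattering but dubious claim", wanted=0.9, grounded=0.2),
    Result("unwelcome but well-supported finding", wanted=0.3, grounded=0.9),
]
for alpha in (1.0, 0.5, 0.0):
    print(alpha, [r.text for r in rank(candidates, alpha)])
```

The exercise’s “what percentage” question then becomes: who sets alpha, per query or per user, and on what grounds?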