Misinformation theory

Chapter 13: Algorithmic censorship

Filter bubbles and echo chambers: what criteria should the AI algorithms use when deciding what to hide from us?
Open-minded diversity of opinion versus catering to the id: key challenges for social media, recommendation engines, and search engines (CILOs 1, 3, 5)
 
Provocation:
 
Required reading:
 
Suggested materials:
  • Eli Pariser (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin.
 
Exercises:
  • It’s sometimes suggested that people should be allowed to choose their own algorithmic censorship criteria. Given what we’ve studied about cognitive biases, what unintended consequences of such a choice could be dangerous?
  • What percentage of the output of a search engine or chatbot should give the human user exactly what they want (whether factually true or not), versus suggesting things the user may not have wanted but that are better grounded logically and empirically? (For one way to make this trade-off concrete, see the sketch below.)
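
A minimal sketch for the second exercise, in Python: rank candidate results by blending predicted engagement ("what the user wants") against a grounding score ("what is better supported"), controlled by a single parameter alpha. Everything here is a hypothetical illustration for discussion: the Candidate fields, the numeric scores, and the alpha values are assumptions, not any real engine's ranking function.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    title: str
    engagement: float  # predicted appeal to the user's existing preferences (0..1, assumed)
    grounding: float   # assumed logical/empirical support for the content (0..1)


def blended_score(c: Candidate, alpha: float) -> float:
    """alpha = 1.0 gives the user exactly what they want;
    alpha = 0.0 ranks purely on grounding."""
    return alpha * c.engagement + (1.0 - alpha) * c.grounding


def rank(candidates: list[Candidate], alpha: float) -> list[Candidate]:
    # Highest blended score first.
    return sorted(candidates, key=lambda c: blended_score(c, alpha), reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("Comforting but dubious claim", engagement=0.9, grounding=0.2),
        Candidate("Unwelcome but well-supported finding", engagement=0.3, grounding=0.9),
    ]
    # Watch which item tops the ranking as the knob moves from
    # pure id-catering (1.0) toward pure grounding (0.0).
    for alpha in (1.0, 0.5, 0.0):
        top = rank(pool, alpha)[0]
        print(f"alpha={alpha:.1f} -> top result: {top.title}")
```

Even this toy framing smuggles in the chapter's hard problem: someone still has to decide how the grounding scores get assigned, and by whom.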

Written by De Kai, AI Professor @ HKUST CSE / Berkeley ICSI / The Future Society.