De Kai invented and built the world’s first global-scale online language translator that spawned Google Translate, Yahoo Translate, and Microsoft Bing Translator. For his pioneering contributions in AI, natural language processing, and machine learning, De Kai was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows and by Debrett's as one of the 100 most influential figures of Hong Kong.

He speaks to philosophical questions about the fears and possibilities of new technology and how we can be empowered to shape our future. His work to bridge cultures spans artificial intelligence, cognition, language, music, creativity, ethics, society, and policy. Talks and media this year have included New York Times, Dun & Bradstreet, TEDx, global news channels like NDTV and i24 News, and dozens more.

De Kai is Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He is an independent director of AI ethics think tank The Future Society, and was one of eight inaugural members of Google's AI ethics council.


De Kai








De Kai is Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He is an independent director of the AI ethics think tank The Future Society. De Kai is among only 17 scientists worldwide named by the Association for Computational Linguistics as a Founding ACL Fellow, for his pioneering contributions to machine translation and the machine learning foundations of systems like the Google/Yahoo/Microsoft translators. Recruited as founding faculty of HKUST directly from UC Berkeley, where his PhD thesis was one of the first to spur the paradigm shift toward machine learning-based natural language processing technologies, he founded HKUST's internationally funded Human Language Technology Center, which launched the world's first web translator over twenty years ago.

De Kai's cross-disciplinary work relating music, language, intelligence, and culture centers on enabling cultures to relate, and stems from a liberal arts perspective emphasizing creativity in both technical and humanistic dimensions. A native of St. Louis, he worked and traveled extensively in San Francisco, New York, Germany, Spain, China, India, and Canada before joining Hong Kong's ambitious creation of Asia's now top-ranked HKUST. There, his pioneering contributions to machine learning of the cognitive relationships between different languages laid the foundations of modern machine translation technology, with broad applications in computer music and computational musicology as well as human language processing.

During his doctoral studies in cognitive science, artificial intelligence, and computational linguistics at Berkeley, he worked on seminal projects on intelligent conversational dialog agents. His PhD dissertation, which employed maximum entropy to model human perception and interpretation of ambiguities, was one of the first to spur the paradigm shift toward today's state-of-the-art statistical natural language processing technologies. De Kai also holds an executive MBA from Kellogg (Northwestern University) and HKUST. His undergraduate degree from UCSD was awarded cum laude with Phi Beta Kappa honors, and won the department award of the liberal arts-oriented Revelle College.

De Kai was named by Debrett's HK 100 as one of the 100 most influential figures of Hong Kong.

In 2019, Google named De Kai as one of eight inaugural members of its AI Ethics council, ATEAC (Advanced Technology External Advisory Council).

















  How To Academy: New York Times - How To Change the World  Royal Geographical Society, London, 4 Nov 2020

  London Tech Week  Royal Institution, London, 11 Jun 2020

  UN/ITU AI for Good Global Summit  Geneva, 4-8 May 2020

  1st International Congress for the Governance of AI  Prague, 16-18 Apr 2020

  Global AI Summit  Riyadh, 31 Mar 2020

  OAP "Collective Responsibility and Accountability in an AI Era" Workshop (Co-Chair)  Berkeley (via Zoom), 24-26 Jun 2020

  Good After Covid19  Bologna (via Zoom), 24 Mar 2020

  The disastrous consequences of information disorder erupting around COVID-19: AI is preying upon our unconscious cognitive biases  Boma COVID-19 Summit (via Zoom), 23 Mar 2020

  Foresight Institute AGI Strategy Meeting  San Francisco (via Zoom), 20 Mar 2020

  How Does AI Know Good from Evil?  Mobile Sunday, Barcelona, 23 Feb 2020

  Next Generation Dialogue  Bank Julius Baer, Santiago, Chile, 16 Jan 2020

  Reframing Modes of Intimacy  The Aspen Institute Roundtable on Artificial Intelligence, 'Artificial Intimacy', Santa Barbara, 13 Jan 2020

  AI Scenario Role Playing Game  San Francisco, 6 Dec 2019

  The Ethics of AI Storytelling: Trust in the Era of Artificial Storytellers  Plenary, ICEI 'AI for Trust': International Conference on Ethics of the Intelligent Information Society, Seoul, 5 Dec 2019

  Debate: De Kai and Gary Marcus  Necker Island, 21 Nov 2019

  AI for Good vs AI for Evil  Necker Island, 20 Nov 2019

  AI Commons / Global Data Commons  Global Forum on AI for Humanity, Paris, Oct 2019

  The Social and Ethical Impact of Emerging AI Tech  Emerging Tech and Social Impact, University of California at Berkeley, 14 Oct 2019

  The Unintended Consequences of AI Ethics  UCOT, Unintended Consequences of Technology, Scotts Valley, 13 Oct 2019

  Empathetic AI for SDGs  SocialBusiness4SDG Summit, UN General Assembly, New York, 26 Sep 2019

  Social Challenges of AI  Ethical AI, London, 10 Sep 2019

  Prescriptive vs Descriptive AI Ethics  Plenary, East-West Center Symposium on Humane Artificial Intelligence, Honolulu, 8 Sep 2019

  The Paradox of AI Ethics  TEDxChiangMai, 7 Sep 2019

  How Not to Hack Language  Keynote, Allianz Global Investors, Hong Kong, 11 Jul 2019

  Putting AI and Innovation into Practice  Allianz Global Investors, Hong Kong, 10 Jul 2019

  The Internet of Language  Keynote, Eco IoT Business Trends, Düsseldorf, 2 Jul 2019

  The Ethics of AI Ethics  Kinnernet, Avallon, France, 22 Jun 2019

  Foresight Institute AGI Strategy Meeting  San Francisco, 20 Jun 2019

  Managing Technological Disruptions: Governance and Accountability  Chatham House London Conference, 14 Jun 2019

  CGTN TV discussion panel with Stephen Cole “Eyes wide open: An AI future”  AI Summit, London, 12 Jun 2019

  Governance and Accountability in an AI world  CogX, London, 12 Jun 2019

  Global Data Commons 2nd International Workshop  UN/ITU AI for Good Global Summit, Geneva, 31 May 2019

  AI in China: Building Stakeholder Networks on AI Ethics and Governance  Taihe Institute, Beijing, 21 May 2019

  Partnership on AI Workshop on Positive Futures  San Francisco, 15-16 May 2019

  How Artificial Children Frame our World — The Frontiers of AI  Lightning in a Bottle, 11 May 2019

  Partnership on AI Workshop on the AI Arms Race Narrative  San Francisco, 8 Apr 2019

  Epistemic Security Workshop  Alan Turing Institute, London, 19 Mar 2019

  AI and the Future Food Institute  Bolzano, Italy, 23 Feb 2019

  2nd Global Governance of AI Roundtable  World Government Summit, Dubai, 9-12 Feb 2019

  KTUH radio interview  The Future Accords, Honolulu, 29 Jan 2019

  Once Upon a Time, Human Culture Drove AI  AAAI/ACM Conference on AI, Ethics and Society, Honolulu, 26 Jan 2019

  How Machines can be More Creative than Humans  Art Machines, International Symposium on Computational Media Art (ISCMA), Hong Kong, 4 Jan 2019

  Fairness, Accountability and Transparency / Asia  Hong Kong, 11-12 Jan 2019

  Democratizing Empathy: AI, Existential Risk, and Evolutionary Psychology  FLI Beneficial AGI Conference, Future of Life Institute, Puerto Rico, 5-8 Jan 2019

  Is AI Sustainable?  Wonderfruit, Thailand, 15 Dec 2018

  Storytelling for Artificial Children  AI for Social Good, Google Asia-Pacific, Bangkok, 14 Dec 2018

  AI Strategist  Foresight Institute Vision Weekend, San Francisco, 1-2 Dec 2018

  Manipulated Reality: The Coming Age of Deepfakes and Mistrust  The Assemblage, New York, 26 Oct 2018

  Artificial Mindfulness  TEDxOakland, 18 Nov 2018

  Hearing vs Listening: How AI can help us understand better  UNDP Future of Cities Forum, Venice, Italy, 13 Oct 2018

  The Trouble with Artificial Gossips  Hatch Summit, Big Sky, Montana, 6 Oct 2018

  AI Roundtable  POLITICO AI Summit, Washington DC, 27 Sep 2018

  New Era Solutions for Education  BlockSeoul, Seoul, 19 Sep 2018

  Designing Our Future: Technology and Humanity  BlockSeoul, Seoul, 19 Sep 2018

  BMIR iHeartRadio interview, “I, Robot”  Black Rock City, 28 Aug 2018

  Artificial Intelligence vs Natural Children  AI Summit, Hong Kong, 1 Aug 2018

  The Empathy of Artificial Children  Nexus Global Summit, New York, 29 Jul 2018

  US-China AI Technology Summit  The Future Society + AI Alliance (Silicon Valley) + AI Industry Alliance (China), Half Moon Bay, 29 Jun 2018

  Kinnernet Europe  Avallon, France, 22-24 Jun 2018

  Artificial Gossips  TEDxKlagenfurt, 16 Jun 2018

  HealthTech  Founders Forum, London, 15 Jun 2018

  Language, Relatability, and AI  Simulation #118, San Francisco, 10 Jun 2018

  Foresight Institute AGI Strategy Meeting on Coordination & Great Powers  San Francisco, 7 Jun 2018

  The Future of Artificial Children  2050 Conference, Hangzhou, 27 May 2018

  Humanity 2.0 in the Age of Disruptive Technology  HSBC China Conference, Shenzhen, 15 May 2018

  The Consciousness of Artificial Children  Vested Summit, El Gouna, Egypt, 11 May 2018

  Conscious Creativity  Vested Summit, El Gouna, Egypt, 11 May 2018

  The Meaning of Machine Learning  Vested Summit, El Gouna, Egypt, 10 May 2018

  The Social Impact of Artificial Intelligence  Berkman Klein Center for Internet & Society APAC-UA AI Workshop, Hong Kong, 9 May 2018

  The Ethics of Artificial Children  Nexus Futurism, Los Angeles, 3 May 2018

  The Growing Pains of Artificial Intelligence  AI Society, Hong Kong, 24 Apr 2018

  Artificial Children vs. Conscious Society  Greater Salon, Hong Kong, 22 Apr 2018

  Design Dialogue  Art Basel, Hong Kong, 30 Mar 2018

  Constrained Innovation for Affordable Nutrition Summit  Gates Foundation, Singapore, 29 Mar 2018

  ReOrientate ¿ReAlity?  Sónar 2018 Hong Kong, 17 Mar 2018

  Future-Proofing Yourself in the Coming AI World  Weesper, Hong Kong, 15 Mar 2018

  Artifice  Telefonica Alpha Expert Forum, Cambridge, Massachusetts, 1 Mar 2018

  Humanity 2.0 in the Age of Exponential Technologies  Singularity University, Barcelona, 26 Feb 2018

  Artificial Children vs Human Intelligence  Intelligent Futures, Chatham House, Hong Kong, 2 Feb 2018

  The Imagination of Artificial Children  Nuit des Idées, Asia Society, Hong Kong, 25 Jan 2018

  How AI Feels  Asian Financial Forum, Hong Kong, 16 Jan 2018

  Artificial Societies in Automated Landscapes  UABB, Shenzhen Biennale, 17 Dec 2017

  Los Niños Artificiales  México Economía Digital, Mexico City, 29 Sep 2017

  TEDxBlackRockCity, 29 Aug 2017

  The Soul of the Machine  BudLab 2017, Santiago, Chile, 21 Apr 2017

  Why Meaningful AI is Musical  TEDxZhujiangNewTown, Guangzhou, 14 Jan 2017

  Language AIs will broker the Virtual Silk Road  Hack The Future, Shanghai, 10 Dec 2016

  Regenerating the Creative Impulse: Technology, Community, Art and Space  Bosphorus Summit, Istanbul, 1 Dec 2016

  East of West, West of East  Para Limes Institute, Singapore, 18 Oct 2016

  Artificial Children  TEDxBlackRockCity, 31 Aug 2016

  Generalizing Transduction Grammars to Model Continuous Valued Musical Events  International Society for Music Information Retrieval Conference, New York, 11 Aug 2016

  How AIs Are and Aren't Kids: Language, Music and Society  Volumetric Society of NYC, Brooklyn Experimental Media Center at NYU, 28 Jul 2016

  Surprise! You already have kids and they're AIs  TEDxXi'an, Jun 2016

  Language, Music and Reorientation: The Keys to Artificial Intelligence  Café Scientifique, Hong Kong, Mar 2016

  Can an A.I. Really Relate? What's Universal in Language and Music  TEDxBeijing, Jan 2016

   ICMA BEST PRESENTATION AWARD   Neural Versus Symbolic Rap Battle Bots  International Computer Music Conference (ICMC), Denton, Texas, Sep 2015

  Translating Music  Radio and Television Hong Kong, The Sound of Art and Science, May 2015

  Translating Music: How Computational Learning Explains the Way We Appreciate Music and Language  Raising The Bar, Hong Kong, Mar 2015

  Reorientate Musical Frames of Reference across Cultures  Detour 2014 @ PMQ, Hong Kong, Nov 2014

  How Music Can Reorientate Our Cultures  Sebasi x Pan-Asian Network, Jeju, South Korea, Nov 2014

  Augmenting Human Communication: What Doesn't Translate, and What is the Cost of Not Translating?  World Economic Forum, Summer Davos, Sep 2014

  Do You Speak Pentatonic? The Multilinguality of Music  TEDxWanChai, Hong Kong, Aug 2014

  Music in Translation  TEDxElsaHighSchool, Apr 2014

  Music in Translation: Artificial Intelligence and the Languages of World Music  World Music Expo (WOMEX), Cardiff, UK, Oct 2013

  ReOrientate  TEDxHKUST, Hong Kong, Mar 2012


De Kai's work on machine learning of human languages and the cognitive relationships between them—in formal terms, induction of transductions—has produced over 120 scientific papers. His PhD and master's students come from cultures all over the world, including China, India, the United States, Canada, France, Sweden, Algeria, and Hong Kong. His laboratory is globally funded by Asian, US, and European research grants.

Milestones in statistical machine translation pioneered by his laboratory include

  • The first models between different language families and the first between Chinese and English
  • The first syntactic and tree-structured models and the first stochastic transduction grammars
  • The first semantic models and the first incorporating contextual sense disambiguation and semantic parsing
  • The first semantic evaluation metrics based purely on semantic frames and in both human and automatic variants

Some of this work is surveyed in De Kai's book chapters:

  • “Alignment” in CRC Press' Handbook of Natural Language Processing (2010)
  • “Lexical Semantics for Statistical Machine Translation” in DARPA's Handbook of Natural Language Processing and Machine Translation (2010)
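Several of these milestones build on inversion transduction grammars (ITGs), which model translation by recursively splitting a sentence into two spans and emitting the halves in the other language either in the same order or inverted. As an illustrative sketch only (not De Kai's implementation; the function name is ours), the following enumerates the word reorderings a binary ITG can generate:

```python
def itg_orders(words):
    """All target-side orderings of `words` reachable by a binary ITG:
    every span is split in two, and the halves are concatenated either
    straight (same order) or inverted (swapped order)."""
    if len(words) <= 1:
        return {tuple(words)}
    results = set()
    for split in range(1, len(words)):
        for left in itg_orders(words[:split]):
            for right in itg_orders(words[split:]):
                results.add(left + right)   # straight: [A B] -> A B
                results.add(right + left)   # inverted: <A B> -> B A
    return results

# For four words, an ITG reaches 22 of the 24 permutations; only the
# two "inside-out" reorderings (3142 and 2413) are unreachable.
orders = itg_orders([0, 1, 2, 3])
```

These reachable orderings are exactly the separable permutations, which is one intuition for why the formalism is both expressive enough for real reordering between languages and efficient to learn.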

Keynote / Invited Presentations

• “How AIs Are and Aren't Kids: Language, Music and Society”. Volumetric Society of NYC, Brooklyn Experimental Media Center at NYU. New York, Jul 2016
• “Neural Versus Symbolic Inversion Transduction Grammars”. NLP Seminar, Columbia University. New York, Jul 2016
• “Surprise! You already have kids and they're AIs”. TEDxXi'an. Xi'an, China, Jun 2016
• “Music is a Relationship between Languages”. Music Colloquium. Chinese University of Hong Kong, Apr 2016
• “Language, Music and Reorientation: The Keys to Artificial Intelligence”. Café Scientifique. Hong Kong, Mar 2016
• “Can an A.I. Really Relate? What's Universal in Language and Music”. TEDxBeijing. Beijing, Jan 2016
• “AI = Learning to Translate”. 14th Estonian Summer School on Computer and Systems Science (ESSCaSS). Nelijärve, Estonia, Aug 2015
• “Translating Music: How Computational Learning Explains the Way We Appreciate Music and Language”. Raising The Bar. Hong Kong, Mar 2015
• “Why Structural Relationships Between Human Representation Languages Are Efficiently Learnable: The Magic Number 4”. HKU Spring Symposium, Science of Learning. Hong Kong, Feb 2015
• “Augmenting Human Communication: What Doesn't Translate, and What is the Cost of Not Translating?”. SummerDavos, World Economic Forum—Annual Meeting of the New Champions. Tianjin, China, Sep 2014
• “The GAGO Principle”. QTLeap. Lisbon, Portugal, Mar 2014
• “Language Structures Thought! Learning Relationships in Big Data”. HKUST Science-for-Lunch, Institutional Advancement and Outreach Committee of the University Council. Hong Kong, Dec 2013
• “Translation Memories or Machine Learning? The Science of Statistical Machine Translation”. U-STAR Workshop. Gurgaon, India, Nov 2013
• “Music in Translation: Artificial Intelligence and the Languages of World Music”. Conference of the World Music Expo (WOMEX). Cardiff, UK, Oct 2013
• “Semantic SMT Without Hacks”. 4th Workshop on South and Southeast Asian NLP (WSSANLP). Nagoya, Japan, Oct 2013
• “Re-Architecting The Core: What SMT Should Be Teaching Us About Machine Learning”. Recent Advances in Natural Language Processing (RANLP). Hissar, Bulgaria, Sep 2013
• “Tutorial on Deeply Integrated Semantic Statistical Machine Translation”. Recent Advances in Natural Language Processing (RANLP). Hissar, Bulgaria, Sep 2013
• “Tutorial on Tree-structured, Syntactic, and Semantic SMT”. Machine Translation Summit XIII. Xiamen, China, Sep 2011
• “Master seminar on Syntax, Semantics, and Structure in Statistical Machine Translation”. Universitat Politècnica de Catalunya (UPC). Barcelona, May 2011
• “Meaningful Statistical Machine Translation: Semantic MT and Semantic MT Evaluation”. National Natural Language Processing Research Symposium (NNLPRS). De La Salle University, Manila, Philippines, Nov 2010
• “Inversion Transduction Grammars, Language Universals, and Tree-Based Statistical Machine Translation”. National Natural Language Processing Research Symposium (NNLPRS). De La Salle University, Manila, Philippines, Nov 2010
• “The Future of Machine Translation: Statistics + Syntax + Semantics”. Asian Applied Natural Language Processing for Linguistics Diversity and Language Resource Development (ADD-5). Bangkok, Feb 2010
• “Toward Machine Translation with Statistics and Syntax and Semantics”. IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Merano, Italy, Dec 2009
• “SMT with Semantic Roles”. Japan-China Joint Conference on NLP (JCNLP). Okinawa, Nov 2009
• Panel talk. Third Linguistic Annotation Workshop (LAW III) at ACL/IJCNLP. Singapore, Aug 2009
• “Is There a Future for Semantics?”. Workshop on Semantic Evaluations (SemEval 2010) at NAACL. Boulder, Colorado, Jun 2009
• “Structured Models in Statistical Machine Translation”. CASIA-HKUST Workshop. Beijing, Apr 2009
• “WSD for Semantic SMT: Phrase Sense Disambiguation.” Second Symposium on Innovations in Machine Translation Technologies (IMTT). Tokyo, Mar 2008
• “Syntax and Semantics in Statistical Machine Translation”. 4th Young Scholar Symposium on Natural Language Processing (YSSNLP). Suzhou, Oct 2007
• Panel talk. Machine Translation Summit XI. Copenhagen, Sep 2007
• Panel talk. 11th Conference on Theoretical and Methodological Issues in Machine Translation (TMI). Skövde, Sweden, Sep 2007
• “Tutorial on Inversion Transduction Grammars and the ITG Hypothesis: Tree-Structured Statistical Machine Translation”. TC-STAR OpenLab on Speech Translation. Trento, Italy, Mar 2006
• Keynote. Nokia Academic Summit. Beijing, Dec 2005
• “Statistical vs. Compositional vs. Example-Based Machine Translation”. EBMT-II: 2nd EBMT Workshop at Machine Translation Summit X. Phuket, Thailand, Sep 2005
• “Directions in Tree Structured Statistical Machine Translation”. JCNLP 2005: 5th Japan-China Natural Language Processing Joint Research Promotion Conference. Nov 2005
• “Tutorial on Inversion Transduction Grammars and the ITG Hypothesis: Tree-Based Statistical Machine Translation”. Johns Hopkins Summer Language Workshop 2005. Baltimore, Jul 2005
• “Overcoming Disambiguation Accuracy Plateaus”. MEANING-2005. Trento, Italy, Feb 2005
• Invited talk. DFKI / SJTU Workshop on Promising Language Technology and Real-World Applications. Shanghai, Nov 2004
• Invited talk. 4th China-Japan Joint Conference to Promote Cooperation in Natural Language Processing (CJNLP). Hong Kong, Nov 2004
• “True or False? Every Serious Multilingual Application Needs a Parallel or Comparable Corpus”. LREC Workshop on the Amazing Utility of Parallel Text. Lisbon, Portugal, May 2004
• “The HKUST Leading Question Translation System”. Have We Found the Holy Grail? Machine Translation Summit IX. New Orleans, Sep 2003
• “Managing Multilingual Information: Theory in the Real World, and the Real World in Theory”. RIDE/MLIM: 13th International Workshop on Research Issues on Data Engineering: Multilingual Information Management, at ICDE’03. Hyderabad, India, Mar 2003
• Panel talk. MT-03: DARPA/NIST Machine Translation Workshop. Washington DC, Jul 2003
• “A Position Statement on Chinese Segmentation”. Chinese Language Processing Workshop, University of Pennsylvania, Jul 1998
• “Bilingual Parsing of Parallel Corpora”. Japan Society for the Promotion of Science / HCRL Workshop on New Challenges in Natural Language Processing, Tokyo, May 1998
• “Making Machines Work at Child’s Play: The Decade of Linguistic Computing”. 9th International Computer Expo and Conference (Computer-93). Hong Kong, May 1993
• “Approximate Maximum-Entropy Integration of Syntactic and Semantic Constraints”. ROCLING-IV. Taipei, Sep 1992




De Kai, expert in the development of AI technology: “Without clear artificial intelligence ethics, we have little chance of civilization surviving”

A professor in Hong Kong and Berkeley, experimental musician and Google ethics advisor, the American expert was recently in Chile speaking about the change that humans and societies will experience from the exponential advance of technology.


El Mercurio, 20 Feb 2020 (English translation)

  When he's not speaking around the world, De Kai divides his time between Hong Kong and San Francisco, as if the Pacific Ocean were a mere pool of water. Professor and founding faculty member of the Hong Kong University of Science and Technology (a city where he was recognized as one of the 100 most influential people) and an academic at Berkeley, his impressive résumé includes creating the first internet translator — and basis for Google Translate — in addition to making music using artificial intelligence (AI), which he came to present in Chile almost three years ago.

Back in Chile for the Formula E Grand Prix, this Chinese-American, born in St. Louis and raised in Chicago, has come to speak about the need to build ethics into machines. Asked what main ethical challenges AI currently presents, the former advisor on Google's AI ethics council bluntly points out that this is one of the most crucial issues in history, because “we are on the verge of breaking all our social, cultural and governmental norms. This is unprecedented, given that the ways we have created to develop and relate, both good and bad, will be exponentially amplified by AI. The impact this will have on society and culture will be unimaginable”.

Although the vast majority of people are concerned about the effect AI will have on employment through the automation of many jobs and professions, what worries him most is how our ways of thinking will be torn apart and how the worst reactions generated by the human unconscious will be amplified. “Things like our natural tendency to fear surprises, our fear of the unknown, the anger with which we react; it is all disproportionate with respect to what rational analysis would justify. When we lived in caves, it didn't matter if we overreacted, but today, when we operate within democracies where people vote, social networks constantly push at us the stories that are designed to generate that kind of fear. If we continue to click on that kind of thing, we are only stoking our fears, and that immediately causes anger or hatred, and polarizes. Our social norms were not designed to handle that level of stress. I think that without clear ethics in AI, we have little chance of civilization surviving,” he said.

— Do you see that technology companies are aware of this situation or do you think they tend to simplify, or are not worried about many of the consequences that their developments may cause?

“International companies have to understand the problems of each place where they operate, especially when those problems relate to ethics and governance, and therefore to the ethics of AI, because it is going to matter in these places. Companies that understand this quickly realize that there is no solution that fits everyone. If they operate in Europe, these technology companies must do something different from what they do in the United States, due to the social norms and expectations that exist in the Old Continent. If they do not comply, they will end up being boycotted. One of the biggest problems in AI ethics is that profit, which is what drives these technology and social media companies, is responsible for terrible decisions with catastrophic consequences.”

— Today we live in a world where companies and governments are very involved in social networks, which makes it difficult to see where the rules regulating these links are. Do you see that the need to impose clear limits is being discussed or do you think that companies or governments that use AI are not worried about developing outside of them?

“If companies are managed to maximize profits, we cannot expect them to necessarily stay within those limits, so there is a need for state regulators, although I don't think that is the complete solution. Facebook decided to allow lies in political advertising. In other words, you cannot run false advertising, unless you are a paying politician. What kind of ethical norm is that? It makes no sense, because you can't have an effective democracy like that. Information and knowledge are the lifeblood of democracy. If you give polluted information and knowledge to the voters, they will not be able to vote correctly. This is a structural issue for the sustainability of democracy. If we fail to solve this better than Facebook has, we cannot maintain democracy.”

— Between those who are in favor and those who fear the consequences of AI, you have said that the only way we have is to merge with it, since there is no possibility of being against machines. How could this union occur?

“I think it's already happening. For example, Google Maps has transformed my spatial orientation. My sense of orientation was never great, but now I've outsourced it. That might sound like I've lost a part of myself by delegating it to my cell phone, but at the same time, what I'm doing is freeing up my brain to think, analyze and create other things. Isn't it better that I use my mind for things that AI and a computer currently cannot do instead of doing the same things? We do this with phone numbers too, which we no longer remember. I used to be great at memorizing phone numbers when I was a child, and now I don't remember them. Is that a disaster? No, because my brain can be doing other things instead. But I do believe that in some cases Google Maps should be retraining your mind so you can remember the information, and transform it into an exercise so that your mind does not atrophy. All this could be implemented today.”

— What is the main thing you try to teach your students?

“There is a saying that says ‘with great power comes great responsibility.’ AI is democratizing weapons of mass destruction and giving everyone great power. Whether with physical weaponry, using robotics, or with information warfare, giving everyone megaphones to weaponize the internet. So we must realize that with an exponential power comes an exponential responsibility, and I have not yet seen that happen. Everyone is taking exponential power, both individuals and corporations, but where is the exponential responsibility? Because if it does not happen, society cannot remain stable.”


Raising a brood of artificial children

As smart technologies pervade our lives like never before, we turn the spotlight on some of HK’s most noteworthy creative personalities engaging with artificial intelligence, or the idea of it. The five-part series explores what architects, artists, designers and writers make of the ways in which machines are impacting human society and how they plan to adjust and renegotiate their places in this changing world order where AIs will outnumber humans by several multiples.

The pilot edition features linguist, musician, academic and public speaker, De Kai.


China Daily, 23 Dec 2019

  “Raise them like you would your children,” declares De Kai. We are nearing the end of a two-hour conversation on the evolving power dynamics between man and AIs — a growing brood of ersatz humans whose tribe is multiplying at a shockingly exponential rate. The possibility of developing sentient AI took up a significant part of our discussion. And by now it’s evident that while the AI models De Kai routinely creates and tests may not have feelings — yet — his own concerns for “today’s artificial children” are strong enough to make him anthropomorphize man-made intelligence.

We meet in a near-deserted al fresco dining area adjacent to a café in Hong Kong University of Science and Technology (HKUST) where De Kai is a professor of computer science and engineering. The sparse presence of students in the public areas on campus in the season of term-end exams seems to, somewhat eerily, underscore a point De Kai makes about AIs having overtaken the human population several times over already.

“Society comprises billions of humans and even more billions of artificial members of society,” says De Kai. “The AIs that are part of our society today are functioning, integral, active, imitative, learning, influential members of society, probably more influential in shaping culture than 90 percent of human society.”

He does not seem to have a very high opinion of the level of intelligence of most commercial AIs in circulation though. “AIs that can play chess or Go better than any human would get their pants beaten off by a five-year-old cooking an egg,” De Kai says.

And yet the plan to use robots toward building a better life for humanity — based on the principles of sharing and tolerance — has every chance of going askew.

To pre-empt such an eventuality, De Kai makes impassioned appeals to start handling AIs with care during his TED Talks appearances. “Even though these are really weak AIs, the culture that we are jointly shaping with our artificial members of society is the one under which every successive stronger generation of AIs will be learning and spreading their culture. We are already in that cycle and we don’t realize it because we don’t look at machines from a sociological standpoint,” he says.

Dispelling the likelihood of AIs doing a Frankenstein, i.e., having a disruptive effect on the society of humans who created them — a widely-held notion perpetuated through the use of a familiar trope in novels and films — De Kai says it is the humans who need to treat AIs with greater mindfulness. The onus lies on all users of AI-powered devices, he clarifies, and not just the scientists actively engaged in developing and testing newer models of AI.

“It’s easy to point fingers at big tech organizations (for building AI), or governments for not imposing more regulations (on AI trading). Rules only catch the most egregious violations of acceptable social behavior. What really holds societies together are the unwritten rules, unspoken conventions and shared norms,” says De Kai, urging all users of smart technology to step forward and take responsibility.

He advocates following a self-formulated moral code during human-AI interactions, based on a bit of soul-searching. “Am I setting a good example? Am I a good role model? Do I speak respectfully to AI and teach them to respect diversity, or do I show them that it’s okay to insult people online?” might be some of the questions to ask oneself by way of introspection, says De Kai. This is no longer an option, rather an imperative, for “the dozens of billions of our AI children are shaping our culture that the next generation of humans and machines are going to be learning from.”

Making connections

A pioneer of machine learning of the cognitive relationships between languages and a Fellow of the Association for Computational Linguistics, De Kai wears many hats. To his mind, though, his work across the sciences of computing, language and music simply offers different expressions of a common theme. Unsurprisingly, his activities in these diverse fields lend themselves to experiments with AI.

The commonalities between different music traditions around the world struck him somewhat serendipitously. A nine-year-old De Kai was playing the piano in an unlit room in his childhood home in Chicago. He took lessons in western classical music at Northwestern University Conservatory of Music, but while playing at home would sometimes throw in a few notes of blues music for fun. His grandfather, who happened to hear him on one of those occasions, remarked that some of the music played by young De Kai sounded remarkably Chinese.

“That got me thinking. I realized that the way we understand music is really dependent on the cultural frame of reference we adopt,” says De Kai, who was born in St. Louis, Missouri in the United States to immigrant Chinese parents, and spoke little English before he started going to school. He says growing up with exposure to multiple cultures made it relatively easy to set himself in a cultural frame that was American “and at the same time intuitively create music resonating with a different frame.”

“I realized our cognition works by reorienting perspectives across different frames of reference, just unconsciously. Since then I’ve always had it in my head that I’m jumping around between different mental and cultural frames of reference when I perform,” he says. The interweaving of music from diverse traditions — meditative Sufi music from Pakistan, the robust beats of Spanish flamenco, electronica and computer music, for example — presented as a virtual reality-supported experience is the hallmark of the music events staged by ReOrientate, a musical collective founded by De Kai.

He developed a bot called Free/Style that can participate in rap battles with humans. It is an AI manifestation of De Kai’s core philosophy: since both spoken language and music are driven by the sequencing of their components (words or musical notes, as the case may be), the ability to recognize the patterns common to languages or music from completely different cultures could be useful in learning a foreign language or a piece of music. Free/Style can respond to musical challenges across genres and languages and seems particularly good at hip hop.

Now only a talking-singing digital screen, Free/Style might soon get a three-dimensional physical form. “We received a fairly prestigious arts grant that we are now looking to fundraise to match (in order) to roll out physical implementations of the Free/Style rap bot to help underprivileged youth learn the machine learning principles so that they can build their own AI,” De Kai says.

Rule-based vs probability-based

De Kai had already begun experimenting with machine learning and web translation decades before our lives came to be so inextricably linked to big data and Google Translate. Before the term big data was coined roughly a decade ago, such datasets were called Very Large Corpora. De Kai remembers participating in several Workshops on Very Large Corpora — “one of the earliest special interest annual workshops since the 1990s” — one of which was held at HKUST in 1997.

De Kai had long been skeptical of the practice of feeding AIs an inventory of data “encompassing the entire domain that you want the program to be able to function in.” That became a bone of contention between him and his supervisor at the University of California, Berkeley, where he was a PhD student. By the time the Corpora workshops took off he was on a mission “to break the stranglehold that logic/rule-based systems had on AI.”

This led him to develop a statistical machine translation system that takes its cues from the language-learning patterns displayed by small children — that is, one that learns by figuring out the relationships and common patterns between corresponding sequences in two languages.

However, much of the research and development of AI is still rule- or knowledge-based. And most commercially-used AIs are still miles away from developing the intuitive understanding of our surroundings that enables a child to connect words with actions or images with a sense of context. “I think these commercial AIs, trained on trillions of words, do not qualify as intelligent if it takes them the square of the number of words a three-year-old needs to learn,” De Kai says. “Sorry, but that’s artificial stupidity.”

The task ahead, he says, is to raise a generation of “mindful AIs” — those with “general human-level intelligence” and “mindful of their ethical responsibilities.”

Even as we await the arrival of conscious AIs capable of taking a moral position, De Kai himself has made a slight shift from his earlier aversion to data-based systems. He now sees the advantages of the steadily-expanding sea of data that advanced, superfast computation systems have made possible.

“I think my personal journey has been an oscillation between (exploring) how do we take advantage of the current state of hardware and data availability and bring the best to bear upon socially and culturally impactful applications by not necessarily going to one extreme or the other — between neural networks/deep learning and probabilistic machine learning,” he says.

“There is truth in both of these and the question is how do we actually do both.”

Click here to read De Kai’s exclusive interview with China Daily


People need to wake up to dangers of AI, warns Google ethics adviser

Hong Kong professor De Kai was named by Google as one of eight members of its Advanced Technology External Advisory Council on AI


South China Morning Post, 3 Apr 2019

With artificial intelligence gradually creeping into more areas of human existence, from surveillance to health scans to McDonald’s menus, one Hong Kong professor has added his voice to those, such as Tesla founder Elon Musk, who advise humanity to tread cautiously with the new technology.

“AI is the single most disruptive force that humanity has ever encountered and my concern is that so much of the discussion that we hear about now is very incremental,” said De Kai, a professor at the Department of Computer Science and Engineering at Hong Kong University of Science and Technology, in an interview this week. “We are near an era when people can easily produce weapons such as fleets of armed AI drones … the cat is out of the bag.”

De Kai was last week named by Google as one of eight members of its Advanced Technology External Advisory Council, assembled to review and advise the company on the development and deployment of AI technology in the real world. He is the only Asian member in the group.

He is particularly keen to challenge those who have dubbed the advent of the AI era the “fourth industrial revolution”, pointing out that while past industrial revolutions involved the automation of human brawn, AI is focused on substituting machine thinking for human thought and opinion, and is therefore different in kind.

“AI is not the fourth of anything. It is the first,” De Kai said. “It’s always comforting to say we’ve seen things like this happening before and we know how to tackle this by drawing lessons from history – absolutely we need to draw lessons from history as much as we can – but we also need to recognise that what we are facing now has no precedent.”

De Kai’s warning aligns him with figures such as Elon Musk, who has likened AI to “an immortal dictator from which we would never escape” and to “summoning the devil”, and who signed a public pledge with other AI researchers never to create autonomous killer robots. Even the late British physicist Stephen Hawking warned that the emergence of AI could be the “worst event in the history of our civilisation”.

“There are people who wish they could control [technology] the same way we control an element like uranium. That is wishful thinking. This is why our priority is to alter human culture. We need to grow up as a species,” De Kai said on Monday.

Technology companies globally have been grappling with the implications of applying AI technologies to different aspects of human life. Google’s involvement in a US Department of Defence drone programme led to a public backlash and some employees resigned in protest last year. Google later cancelled the programme, dubbed Project Maven, and released a set of AI principles aimed at making its AI projects socially beneficial and accountable to society.

Meanwhile, Google’s AI ethics council has not got off to the best of starts after one member quit and another became the subject of an employee petition to have her removed, according to a Bloomberg report. Google said meetings of the ethics council are due to commence this month.

Google did not immediately respond to a request for comment.

De Kai, who looks more like an artist than a scientist with his long hair pulled into a knot at the back of his head under a flat cap, said one of his biggest concerns is that the current education system, especially in the developed world, creates a division between the sciences and the humanities.

As such, students trained in engineering at times fail to think about the human consequences of technology, whereas humanities graduates are often unable to grasp the possibilities that new technology opens up.

“It is the single worst possible time in history to have an education system that cripples people to be unable to think deeply across these boundaries, about what humanity is in the face of technology,” De Kai said.

— Iris Deng



South China Morning Post, 8 Jan 2012

Imagine learning how to translate from Chinese to English by reading millions of sentences from Hong Kong's bilingual Legislative Council transcripts.

You look at the Chinese. You look at the English below.

Actually, you don't know either language: you are a cluster of 75 computers in Professor De Kai's computational linguistics and musicology lab at the University of Science and Technology.

But as a machine, you are not looking at the unfamiliar sentences just two at a time, as a person would. Instead, you are using statistics to relate huge heaps of data to one another simultaneously.

You notice that in thousands of instances, the English “government building” appears in the same chunk of text as the Chinese phrase for it, so it is highly probable that these chunks mean the same thing.

You study these bilingual patterns, billions of them, cranking away at your algorithms.

Mostly, you work unguided, making your own dictionary as you go along based on the multitude of connections you detect between groups of words. When you make a mistake, a human researcher may correct you with a few programming keystrokes, and over time, you learn to make the right associations in the right context.
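The co-occurrence idea the article describes can be sketched in a few lines. The snippet below is a hypothetical miniature illustration, not the lab's actual software: a tiny invented parallel corpus stands in for the Legco transcripts, and a simple association score (how often two words co-occur, relative to how common each word is) picks out the likely translation pairs.

```python
from collections import Counter

# A toy parallel corpus: each pair is (English sentence, Chinese sentence).
# Invented for illustration; real systems use millions of such pairs.
parallel = [
    ("the government building is open", "政府 大楼 开放"),
    ("the government announced a plan", "政府 宣布 计划"),
    ("the plan is new", "计划 新"),
    ("a new building opened", "新 大楼 开放"),
]

en_count, zh_count, cooc = Counter(), Counter(), Counter()
for en, zh in parallel:
    en_words, zh_words = en.split(), zh.split()
    en_count.update(en_words)
    zh_count.update(zh_words)
    for e in en_words:           # count every English/Chinese word pairing
        for z in zh_words:       # that appears in the same sentence pair
            cooc[(e, z)] += 1

def best_translation(z):
    """English word with the highest co-occurrence with z, normalized by
    how frequent both words are (a Dice-style association score)."""
    return max(en_count, key=lambda e: cooc[(e, z)] / (en_count[e] + zh_count[z]))

print(best_translation("政府"))   # government
print(best_translation("大楼"))   # building
```

Normalizing by word frequency is what keeps ubiquitous words like “the” from winning every pairing; statistical translation systems refine this basic intuition with probabilistic models estimated over the whole corpus at once.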

This is the world of computational linguistics: a field that strives to model natural human language on the computer. Google Translate and Siri are some of the recent products of these hi-tech linguists.

But even a decade ago (which in the cyberage is more like a century), De Kai was already achieving accuracy rates of 86 to 96 per cent between English and Chinese translations from a set of computers that, yes, really did read Legco documents.

His program did not operate on individual words alone, but built chunks out of other, smaller chunks and related them to each other — a mathematical model called inversion transduction grammars, which enormously sped up the process of learning to translate.
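The core idea of inversion transduction grammars can be shown with a toy example. This is an illustrative sketch, not De Kai's actual implementation: bilingual chunks combine under two operators, "straight" (same order in both languages) and "inverted" (reversed order in the second language), so a single small derivation tree yields the correct word order in both languages at once. The example phrase and its Chinese gloss are invented for illustration.

```python
def read_off(node, lang):
    """Flatten an ITG-style derivation tree into one language's word sequence."""
    op, *children = node
    if op == "leaf":
        en, zh = children
        return [en if lang == "en" else zh]
    parts = [read_off(child, lang) for child in children]
    if op == "inverted" and lang == "zh":
        parts.reverse()          # the inverted operator flips order in language 2
    return [word for part in parts for word in part]

# "the book on the table" vs. the Chinese-style order "table on book"
# (桌子 上 书, glossing over the particle 的):
tree = ("inverted",
        ("leaf", "the book", "书"),
        ("inverted",
         ("leaf", "on", "上"),
         ("leaf", "the table", "桌子")))

print(read_off(tree, "en"))   # ['the book', 'on', 'the table']
print(read_off(tree, "zh"))   # ['桌子', '上', '书']
```

Restricting reorderings to these two operators is what makes the model tractable: instead of considering every possible permutation of words between the two languages, the learner only searches over binary trees of straight and inverted combinations.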

For this work, he was last month honoured as one of only 17 founding fellows around the world of the Association for Computational Linguistics, and the only one from China.

He pioneered the computational study of English-Chinese language pairs, which no one else was doing at the time; the US first put funding into Chinese translations around 1999, years after De Kai started his work. In 1995 he launched Silc, a multilingual engine that handled the first web translations from English to Chinese.

“First-mover technology takes time,” he said, pointing out that Google Translate is still not making money. He said developers in Asia had to be patient investing in new fields.

But computational linguistics is gaining ground: De Kai is just about to close multimillion-dollar translation technology research projects with the European Union and the Defense Advanced Research Projects Agency, the US military arm that funds most US computer science research.


Decades ago, machines would try to learn English grammar by simply processing millions of sentences and trying to find a common structure.

“That's like tying a child to a chair, blindfolding the child, and making them hear millions of sentences of English,” De Kai said.

His computers are not trying to “learn” English or Chinese, per se—at least not separately. For translation to work, what the machines need to do is figure out the relationships between the two languages and then match them.

That's how humans learn language, too: a child from birth to six or seven years old is constantly matching relationships between what they sense in their environment and the spoken language they hear.

They learn that the round thing on the ground is a ball because they hear their parents say “ball” repeatedly around the object, even if it is mixed up with other words, and so they eventually associate the image with the sound. They, like the computers, are making correlations, not between Chinese and English, but between the language of their environment and the language of words.

In this sense, we are all translators. A child translates an action into a meaning. De Kai's machines translate from one language to another. A newspaper reader translates the text on the page into a narrative.

“All your thinking and cognition is taking the world as you see it and translating to an internal language,” De Kai said.

The original meaning of translate is to move or transform between one place or form and another. To be translatable, then, means to be able to be shifted, to be transformed.

De Kai is used to translating. He grew up in the US Midwest, but from the age of seven went back to China, including Hong Kong, in the summer. He remembers seeing the disparity between the US and post-Cultural Revolution China, and how much of it had to do with language and culture.

“You see the cultural disconnects, that English speakers aren't understanding something about the Chinese situation and vice versa.

“This is where we can make a difference,” De Kai said.

— Kanglei Wang


Music and language define humanity. Music and language, the only capabilities where humans outshine all other species, have been inextricably bound to each other since the dawn of humankind. Our prehistoric ancestors were probably singing before talking—animal songs are the likely evolutionary precursor of music and language. Out of the evolutionary refinement of these abilities, human intelligence emerged.

De Kai's work asks some fundamental questions about music and language, introducing a new computational musicology theoretical paradigm of stochastic transduction models to provide explanations.

How did evolutionary conditions drive humans to develop music and language as they did?  Music and language share a common set of neural and psychological resources. These arm us with fundamental cognitive abilities including being able to semantically associate signals with contexts, and to syntactically segment song strings that are statistically noteworthy. Yet the theoretical space of possible musics and languages is far too vast for human music and language to have converged as they have, without other strong factors driving the evolutionary pressures. De Kai's transduction models naturally imply cognitive load constraints that explain the remarkable convergence of music and language characteristics even across distant cultures.

How do we learn the languages that music is built out of?  Music is full of different kinds of languages. Just as lyrics are sequences of words, melodies are sequences of notes, rhythms are sequences of percussive hits, cadences and progressions are sequences of chords, and song structures are sequences of verses, choruses, bridges, and the like. And just as with spoken languages, all these different musical languages have many subtle complexities when it comes to what does and doesn't sound good. De Kai's work implements cognitive models of our ability to absorb the right patterns.
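The parallel between musical and spoken languages can be made concrete with a toy sequence model. This sketch is illustrative only and is not De Kai's cognitive model: it learns bigram statistics over a handful of invented short melodies, then scores how familiar a new phrase is, exactly as a statistical language model scores word sequences.

```python
from collections import Counter

# A toy "corpus" of melodies, each a sequence of note names.
melodies = [
    ["C", "D", "E", "C"],
    ["C", "D", "E", "G"],
    ["E", "G", "E", "C"],
]

bigrams, unigrams = Counter(), Counter()
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        bigrams[(a, b)] += 1     # how often note b follows note a
        unigrams[a] += 1         # how often note a is followed by anything

def phrase_probability(phrase):
    """Product of conditional bigram probabilities P(next note | current note)."""
    p = 1.0
    for a, b in zip(phrase, phrase[1:]):
        p *= bigrams[(a, b)] / unigrams[a]
    return p

# A pattern the model has absorbed scores higher than an unheard one:
print(phrase_probability(["C", "D", "E"]))   # 1.0
print(phrase_probability(["C", "E", "D"]))   # 0.0
```

The same counting machinery works whether the symbols are notes, chords, percussive hits, or words, which is one reason the statistical view treats melody, rhythm, and harmony as languages in their own right.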

How do we learn the relationships between multiple musical languages?  It is the relationships between multiple different musical languages, simultaneously being played in parallel, that differentiate aesthetically pleasing music from jarring noise. Learning the contextual relationships between languages — formally called “transductions” — can easily become far more difficult than learning the individual musical languages. De Kai's models are the first to show how we depend on a virtuous cycle to crack the code: partial knowledge of a musical language helps us learn its relationships to other kinds of musical languages, while conversely, partial knowledge of relationships between musical languages helps us learn more about specific musical languages.

How does our ability for creative expression in music arise?  The true test of music learning lies in musical communication, improvisation, and composition. Not only is it important to be able to express personal sentiments and attitudes, but a musician should also complement what is being played by other musicians, and respond appropriately to subtle cues from them. De Kai's transduction models explain how the musical relationships that are learned can naturally be used creatively to improvise, accompany, or compose in context.