The Cartesian Cafe

Timothy Nguyen

The Cartesian Cafe is the podcast where an expert guest and Timothy Nguyen map out scientific and mathematical subjects in detail. This collaborative journey has us writing down formulas, drawing pictures, and reasoning about them together on a whiteboard. If you’ve been longing for a deeper dive into the intricacies of scientific subjects, then this is the podcast for you. Topics covered include mathematics, physics, machine learning, artificial intelligence, and computer science. Content is also viewable on YouTube (www.youtube.com/timothynguyen) and Spotify. Timothy Nguyen is a mathematician and AI researcher working in industry. Homepage: www.timothynguyen.com. Twitter: @IAmTimNguyen. Patreon: www.patreon.com/timothynguyen.

Podcast Episodes

Jay McClelland | Neural Networks: Artificial and Biological

Jay McClelland is a pioneer in the field of artificial intelligence: a cognitive psychologist and professor at Stanford University in the psychology, linguistics, and computer science departments. Together with David Rumelhart, Jay published the two-volume work Parallel Distributed Processing, which led to the flourishing of the connectionist approach to understanding cognition.

In this conversation, Jay gives us a crash course in how neurons and biological brains work. This sets the stage for how psychologists such as Jay, David Rumelhart, and Geoffrey Hinton historically approached the development of models of cognition and ultimately artificial intelligence. We also discuss alternative approaches to neural computation such as symbolic and neuroscientific ones.
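
Part III of the outline below touches on Hebbian learning, Rumelhart's delta rule, and gradient descent. As a rough illustration of the difference between the first two, here is a minimal sketch of my own (not code from the episode): a single linear neuron trained both ways.

    import numpy as np

    # One linear neuron, trained two ways (illustrative sketch).
    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 3))          # 100 input patterns, 3 features
    w_true = np.array([0.5, -1.0, 2.0])
    t = x @ w_true                         # target outputs

    # Hebbian rule: strengthen weights by input-output correlation.
    w_hebb = np.zeros(3)
    for xi, ti in zip(x, t):
        w_hebb += 0.01 * ti * xi           # dw = lr * target * input

    # Delta rule: error-driven correction; each step is gradient
    # descent on squared error, the precursor to backpropagation.
    w_delta = np.zeros(3)
    for _ in range(50):
        for xi, ti in zip(x, t):
            y = w_delta @ xi
            w_delta += 0.01 * (ti - y) * xi  # dw = lr * error * input

    print("Hebbian estimate:   ", np.round(w_hebb, 2))
    print("Delta-rule estimate:", np.round(w_delta, 2))

The Hebbian update uses only the correlation between input and target, while the delta rule corrects by the prediction error; the latter is exactly one step of gradient descent on squared error.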

Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen

Part I. Introduction

  • 00:00 : Preview
  • 01:10 : Cognitive psychology
  • 07:14 : Interdisciplinary work and Jay's academic journey
  • 12:39 : Context affects perception
  • 13:05 : Chomsky and psycholinguists
  • 18:03 : Technical outline

Part II. The Brain

  • 00:20:20 : Structure of neurons
  • 00:25:26 : Action potentials
  • 00:27:00 : Synaptic processes and neuron firing
  • 00:29:18 : Inhibitory neurons
  • 00:33:10 : Feedforward neural networks
  • 00:34:57 : Visual system
  • 00:39:46 : Various parts of the visual cortex
  • 00:45:31 : Columnar organization in the cortex
  • 00:47:04 : Colocation in artificial vs biological networks
  • 00:53:03 : Sensory systems and brain maps

Part III. Approaches to AI, PDP, and Learning Rules

  • 01:12:35 : Chomsky, symbolic rules, universal grammar
  • 01:28:28 : Neuroscience, Francis Crick, vision vs language
  • 01:32:36 : Neuroscience = bottom up
  • 01:37:20 : Jay’s path to AI
  • 01:43:51 : James Anderson
  • 01:44:51 : Geoff Hinton
  • 01:54:25 : Parallel Distributed Processing (PDP)
  • 02:03:40 : McClelland & Rumelhart’s reading model
  • 02:31:25 : Theories of learning
  • 02:35:52 : Hebbian learning
  • 02:43:23 : Rumelhart’s Delta rule
  • 02:44:45 : Gradient descent
  • 02:47:04 : Backpropagation
  • 02:54:52 : Outro: Retrospective and looking ahead

Image credits: http://timothynguyen.org/image-credits/

Further reading:

Rumelhart, D. E., & McClelland, J. L. (1986). Parallel Distributed Processing.

McClelland, J. L. (2013). Integrating probabilistic models of perception and interactive neural networks: A historical and tutorial review.

 

Twitter: @iamtimnguyen

 

Webpage: http://www.timothynguyen.org


Michael Freedman | A Fields Medalist Panorama

Michael Freedman is a mathematician who was awarded the Fields Medal in 1986 for his solution of the 4-dimensional Poincaré conjecture. Mike has also received numerous other awards for his scientific contributions, including a MacArthur Fellowship and the National Medal of Science. In 1997, Mike joined Microsoft Research, and in 2005 he became the director of Station Q, Microsoft’s quantum computing research lab. As of 2023, Mike is a Senior Research Scientist at Harvard University’s Center of Mathematical Sciences and Applications.

Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen

In this wide-ranging conversation, we give a panoramic view of Mike’s extensive body of work over the span of his career. It is divided into three parts: early, middle, and present day, which respectively cover his work on the 4-dimensional Poincaré conjecture, his transition to topological physics, and his recent work applying ideas from mathematics and philosophy to social economics. Our conversation is a blend of nitty-gritty details and the anecdotal storytelling that can only be obtained from a living legend.
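
For orientation before Part II: the generalized Poincaré conjecture (stated in the episode at 08:14) asserts that a closed n-manifold with the homotopy type of the n-sphere must actually be the n-sphere. In the topological category, which is where Mike settled the case n = 4, the standard statement reads:

    M^n \simeq S^n \ (\text{homotopy equivalence}) \;\Longrightarrow\; M^n \cong S^n \ (\text{homeomorphism}).

Smale's earlier work handled dimensions 5 and above, and Perelman later settled dimension 3, where the topological and smooth categories coincide.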

I. Introduction

  • 00:00 : Preview
  • 01:34 : Fields Medalist working in industry
  • 03:24 : Academia vs industry
  • 04:59 : Mathematics and art
  • 06:33 : Technical overview

II. Early Mike: The Poincaré Conjecture (PC)

  • 08:14 : Introduction, statement, and history
  • 14:30 : Three categories for PC (topological, smooth, PL)
  • 17:09 : Smale and PC for d at least 5
  • 17:59 : Homotopy equivalence vs homeomorphism
  • 22:08 : Joke
  • 23:24 : Morse flow
  • 33:21 : Whitney Disk
  • 41:47 : Casson handles
  • 50:24 : Manifold factors and the Whitehead continuum
  • 1:00:39 : Donaldson’s results in the smooth category
  • 1:04:54 : (Not) writing up full details of the proof then and now
  • 1:08:56 : Why Perelman succeeded

III. Mid Mike: Topological Quantum Field Theory (TQFT) and Quantum Computing (QC)

  • 1:10:54 : Introduction
  • 1:11:42 : Cliff Taubes, Raoul Bott, Ed Witten
  • 1:12:40 : Computational complexity, Church-Turing, and Mike’s motivations
  • 1:24:01 : Why Mike left academia, Microsoft’s offer, and Station Q
  • 1:29:23 : Topological quantum field theory (according to Atiyah; axioms sketched after this outline)
  • 1:34:29 : Anyons and a theorem on Chern-Simons theories
  • 1:38:57 : Relation to QC
  • 1:46:08 : Universal TQFT
  • 1:55:57 : Witten: Donaldson theory cannot be a unitary TQFT
  • 2:01:22 : Unitarity is possible in dimension 3
  • 2:05:12 : Relations to a theory of everything?
  • 2:07:21 : Where topological QC is now
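
For reference, the Atiyah-style definition mentioned at 1:29:23 can be packaged compactly; this is the standard formulation, not a transcript of the discussion. A (d+1)-dimensional TQFT assigns a vector space to each closed d-manifold and a linear map to each cobordism between them, compatibly with disjoint union and gluing:

    Z(\Sigma) \in \mathrm{Vect}_{\mathbb{C}}, \qquad Z(M) : Z(\Sigma_0) \to Z(\Sigma_1),
    Z(\Sigma_0 \sqcup \Sigma_1) \cong Z(\Sigma_0) \otimes Z(\Sigma_1), \qquad Z(M' \circ M) = Z(M') \circ Z(M).

Unitarity, the property at issue in the Witten and dimension-3 items above, further requires a compatible inner product on each Z(\Sigma).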

IV. Present Mike: Social Economics

  • 2:11:08 : Introduction
  • 2:14:02 : Lionel Penrose and voting schemes
  • 2:21:01 : Radical markets (pun intended)
  • 2:25:45 : Quadratic finance/funding (formula written out after this outline)
  • 2:30:51 : Kant’s categorical imperative and a paper of Vitalik Buterin, Zoe Hitzig, Glen Weyl
  • 2:36:54 : Gauge equivariance
  • 2:38:32 : Bertrand Russell: philosophers and differential equations
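
The quadratic funding mechanism at 2:25:45 can be summarized in one formula from the Buterin-Hitzig-Weyl line of work (standard form, included here for reference): a project p receives the square of the sum of the square roots of its individual contributions c_i^p,

    F_p \;=\; \Big( \sum_i \sqrt{c_i^{\,p}} \Big)^{2},

so many small contributors attract a larger subsidy than a single large contributor giving the same total.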

V. Outro

  • 2:46:20 : Final thoughts on math, science, philosophy
  • 2:51:22 : Career advice

 

Further reading:

Mike’s Harvard lecture on PC4: https://www.youtube.com/watch?v=TSF0i6BO1Ig

Behrens et al. The Disc Embedding Theorem.

M. Freedman. Spinoza, Leibniz, Kant, and Weyl. arXiv:2206.14711.

 

Twitter: @iamtimnguyen

 

Webpage: http://www.timothynguyen.org


Marcus Hutter | Universal Artificial Intelligence and Solomonoff Induction

Marcus Hutter is an artificial intelligence researcher who is both a Senior Researcher at Google DeepMind and an Honorary Professor in the Research School of Computer Science at Australian National University. He is responsible for the development of the theory of Universal Artificial Intelligence, on which he has written two books, one back in 2005 and one hot off the press as we speak. Marcus is also the creator of the Hutter Prize, which offers a sizable fortune for achieving state-of-the-art lossless compression of Wikipedia text.

Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen

In this technical conversation, we cover material from Marcus’s two books “Universal Artificial Intelligence” (2005) and “An Introduction to Universal Artificial Intelligence” (2024). The main goal is to develop a mathematical theory that combines sequential prediction (which seeks to predict the distribution of the next observation) with action (which seeks to maximize expected reward), since these are among the problems that intelligent agents face when interacting in an unknown environment. Solomonoff induction provides a universal approach to sequence prediction in that it constructs an optimal prior (in a certain sense) over the space of all computable distributions of sequences, allowing Bayesian updating to converge to the true predictive distribution (assuming the latter is computable). Combining Solomonoff induction with optimal action leads to an agent known as AIXI, which, in this theoretical setting, can be argued to be a mathematical incarnation of artificial general intelligence (AGI): it is an agent that acts optimally in general, unknown environments. The second half of our discussion, concerning agents, assumes familiarity with the basic setup of reinforcement learning.
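
For reference, the two central objects of the prediction half can be written down compactly (standard formulas from the books, lightly simplified). The Solomonoff a priori distribution weights every program p for a universal monotone Turing machine U by its length ℓ(p), summing over programs whose output begins with the observed string x:

    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}.

Predicting with the conditional M(x_t | x_{<t}) = M(x_{<t} x_t) / M(x_{<t}) then converges rapidly to any computable true distribution \mu, in the sense that the total expected prediction error is bounded by a constant depending only on the complexity of \mu:

    \sum_{t=1}^{\infty} \mathbb{E}\, \mathrm{KL}\big( \mu(\cdot \mid x_{<t}) \,\big\|\, M(\cdot \mid x_{<t}) \big) \;\le\; K(\mu) \ln 2.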

I. Introduction

  • 00:38 : Biography
  • 01:45 : From Physics to AI
  • 03:05 : Hutter Prize
  • 06:25 : Overview of Universal Artificial Intelligence
  • 11:10 : Technical outline

II. Universal Prediction

  • 18:27 : Laplace’s Rule and Bayesian Sequence Prediction (a toy version is sketched after this outline)
  • 40:54 : Different priors: KT estimator
  • 44:39 : Sequence prediction for countable hypothesis class
  • 53:23 : Generalized Solomonoff Bound (GSB)
  • 57:56 : Example of GSB for uniform prior
  • 1:04:24 : GSB for continuous hypothesis classes
  • 1:08:28 : Context tree weighting
  • 1:12:31 : Kolmogorov complexity
  • 1:19:36 : Solomonoff Bound & Solomonoff Induction
  • 1:21:27 : Optimality of Solomonoff Induction
  • 1:24:48 : Solomonoff a priori distribution in terms of random Turing machines
  • 1:28:37 : Large Language Models (LLMs)
  • 1:37:07 : Using LLMs to emulate Solomonoff induction
  • 1:41:41 : Loss functions
  • 1:50:59 : Optimality of Solomonoff induction revisited
  • 1:51:51 : Marvin Minsky
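
As referenced at 18:27 above, here is a toy version of Bayes-mixture sequence prediction (my own sketch, not code from the episode), with a finite grid of Bernoulli hypotheses standing in for the countable class of computable distributions:

    import numpy as np

    # Hypothesis class: P(next bit = 1) = theta, over a grid of thetas.
    thetas = np.linspace(0.01, 0.99, 99)
    w = np.full(len(thetas), 1.0 / len(thetas))  # uniform prior weights

    def predict(w, thetas):
        """Mixture probability that the next bit is 1."""
        return float(np.dot(w, thetas))

    def update(w, thetas, x):
        """Posterior reweighting after observing bit x (0 or 1)."""
        like = thetas if x == 1 else 1.0 - thetas
        w = w * like
        return w / w.sum()

    for x in [1, 1, 0, 1, 1, 1, 0, 1]:           # observed bits
        print(f"P(next = 1) = {predict(w, thetas):.3f}, then observe {x}")
        w = update(w, thetas, x)

With this uniform prior the mixture closely tracks Laplace's rule (n_1 + 1)/(n + 2); replacing the grid by all computable distributions, weighted by 2^{-K}, gives Solomonoff induction.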

III. Universal Agents

  • 1:52:42 : Recap and intro
  • 1:55:59 : Setup
  • 2:06:32 : Bayesian mixture environment
  • 2:08:02 : AIxi: Bayes optimal policy vs optimal policy
  • 2:11:27 : AIXI (AIxi with xi = Solomonoff a priori distribution)
  • 2:12:04 : AIXI and AGI. Clarification: ASI (Artificial Super Intelligence) would be a more appropriate term than AGI for the AIXI agent.
  • 2:12:41 : Legg-Hutter measure of intelligence
  • 2:15:35 : AIXI explicit formula (written out after this outline)
  • 2:23:53 : Other agents (optimistic agent, Thompson sampling, etc)
  • 2:33:09 : Multiagent setting
  • 2:39:38 : Grain of Truth problem
  • 2:44:38 : Positive solution to Grain of Truth guarantees convergence to a Nash equilibrium
  • 2:45:01 : Computable approximations (simplifying assumptions on model classes): MDP, CTW, LLMs
  • 2:56:13 : Outro: Brief philosophical remarks
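
Written out, the AIXI action rule (2:15:35) and the Legg-Hutter intelligence measure (2:12:41) take the following standard forms, stated here for reference with horizon m; the agent emits actions a and receives observation-reward pairs (o, r):

    a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi},

where \Upsilon averages the expected total reward V_\mu^\pi of a policy \pi over the class E of computable environments, weighted by simplicity.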

 

Further reading:

M. Hutter, D. Quarel, E. Catt. An Introduction to Universal Artificial Intelligence.

M. Hutter. Universal Artificial Intelligence.

S. Legg and M. Hutter. Universal Intelligence: A Definition of Machine Intelligence.

 

Twitter: @iamtimnguyen

Webpage: http://www.timothynguyen.org


Richard Borcherds | Monstrous Moonshine: From Group Theory to String Theory

Richard Borcherds is a mathematician and professor at the University of California, Berkeley, known for his work on lattices, group theory, and infinite-dimensional algebras. His numerous accolades include being awarded the Fields Medal in 1998 and being elected a fellow of the American Mathematical Society and a member of the National Academy of Sciences.

Patreon (bonus materials + video chat): https://www.patreon.com/timothynguyen

In this episode, Richard and I give an overview of Richard's most famous result: his proof of the Monstrous Moonshine conjecture, which relates the monster group on the one hand to modular forms on the other. A remarkable feature of the proof is that it involves vertex algebras inspired by elements of string theory. Some familiarity with group theory and representation theory is assumed in our discussion.
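
The numerical coincidence that launched the subject (discussed from 04:04) is worth having on the page. With q = e^{2\pi i \tau}, the elliptic modular function expands as

    j(\tau) \;=\; q^{-1} + 744 + 196884\, q + 21493760\, q^2 + \cdots,

and McKay observed that 196884 = 1 + 196883 and 21493760 = 1 + 196883 + 21296876, where 1, 196883, and 21296876 are the dimensions of the smallest irreducible representations of the monster group. Monstrous Moonshine asserts that every coefficient decomposes this way via a graded representation of the monster, and more.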

I. Introduction

  • 00:25 : Biography
  • 02:51 : Success in mathematics
  • 04:04 : Monstrous Moonshine overview and John Conway
  • 09:44 : Technical overview

II. Group Theory

  • 11:31 : Classification of finite simple groups + history of the monster group
  • 18:03 : Conway groups + Leech lattice
  • 22:13 : Why was the monster conjectured to exist + more history
  • 28:43 : Centralizers and involutions
  • 32:37 : Griess algebra

III. Modular Forms

  • 36:42 : Definitions
  • 40:06 : The elliptic modular function
  • 48:58 : Subgroups of SL_2(Z)

IV. Monstrous Moonshine Conjecture Statement

  • 57:17 : Representations of the monster
  • 59:22 : Hauptmoduls
  • 1:03:50 : Statement of the conjecture
  • 1:07:06 : Atkin-Fong-Smith's first proof
  • 1:09:34 : Frenkel-Lepowsky-Meurman's work + significance of Borcherds's proof

V. Sketch of Proof

  • 1:14:47 : Vertex algebra and monster Lie algebra
  • 1:21:02 : No ghost theorem from string theory
  • 1:25:24 : What's special about dimension 26?
  • 1:28:33 : Monster Lie algebra details
  • 1:32:30 : Dynkin diagrams and Kac-Moody algebras
  • 1:43:21 : Simple roots and an obscure identity
  • 1:45:13 : Weyl denominator formula, Vandermonde identity
  • 1:52:14 : Chasing down where modular forms got smuggled in
  • 1:55:03 : Final calculations

VI. Epilogue

  • 1:57:53 : Your most proud result?
  • 2:00:47 : Monstrous moonshine for other sporadic groups?
  • 2:02:28 : Connections to other fields: Witten, black holes, and mock modular forms

 

Further reading:

V. Tatitscheff. A short introduction to Monstrous Moonshine. https://arxiv.org/pdf/1902.03118.pdf

Twitter: @iamtimnguyen

Webpage: http://www.timothynguyen.org

