Argmax
A show where three machine learning enthusiasts talk about recent papers and developments in machine learning. Watch our video on YouTube https://www.youtube.com/@argmaxfm
Episodes
17 episodes
Mixture of Experts
In this episode we talk about the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.
•
54:46
LoRA
We talk about Low-Rank Adaptation (LoRA) for fine-tuning Transformers. We are also on YouTube now! Check out the video here: https://youtu.be/lLzHr0VFi3Y
•
Season 2
•
Episode 1
•
1:02:56
15: InstructGPT
In this episode we discuss the paper "Training language models to follow instructions with human feedback" by Ouyang et al. (2022). We discuss the RLHF paradigm and how important RL is to tuning GPT.
•
Season 1
•
Episode 15
•
57:27
14: Whisper
This week we talk about Whisper, a weakly supervised speech recognition model.
•
Season 1
•
Episode 14
•
49:14
13: AlphaTensor
We talk about AlphaTensor, and how researchers were able to discover a new algorithm for matrix multiplication.
•
Season 1
•
Episode 13
•
49:05
12: SIRENs
In this episode we talk about "Implicit Neural Representations with Periodic Activation Functions" and the strength of periodic non-linearities.
•
Season 1
•
Episode 12
•
54:17
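The core idea behind SIRENs, as the episode title suggests, is replacing ReLU with a sine non-linearity. A minimal sketch of one SIREN layer (the w0 = 30 frequency scale and the uniform initialization ranges follow the paper; the layer sizes and inputs here are made up for illustration):

```python
import numpy as np

def siren_layer(x, W, b, w0=30.0):
    # A SIREN layer applies a periodic non-linearity: sin(w0 * (W @ x + b)).
    return np.sin(w0 * (W @ x + b))

# Illustrative first-layer initialization: weights drawn from U(-1/n, 1/n),
# where n is the input dimension (later layers use a w0-scaled range).
rng = np.random.default_rng(0)
n_in, n_out = 2, 16
W = rng.uniform(-1.0 / n_in, 1.0 / n_in, size=(n_out, n_in))
b = rng.uniform(-1.0 / n_in, 1.0 / n_in, size=n_out)

# Example: map a 2D coordinate (e.g. a pixel location) to 16 features.
y = siren_layer(np.array([0.3, -0.7]), W, b)
```

Because the activation is a sine, every derivative of the layer is itself a shifted sine, which is what lets SIRENs fit signals with fine high-frequency detail.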
11: CVPR Workshop on Autonomous Driving Keynote by Ashok Elluswamy, a Tesla engineer
In this episode we discuss how Tesla approaches collision detection with novel methods, based on this video: https://youtu.be/jPCV4GKX9Dw
•
Season 1
•
Episode 11
•
48:51
10: Outracing champion Gran Turismo drivers with deep reinforcement learning
We discuss Sony AI's accomplishment of creating a novel AI agent that can beat professional racers in Gran Turismo. Some topics include:
- The crafting of rewards to make the agent behave nicely
- What is QR-SAC?
- How to deal with "ra...
•
Season 1
•
Episode 10
•
54:50
9: Heads-Up Limit Hold'em Poker Is Solved
Today we talk about recent AI advances in Poker, specifically the use of counterfactual regret minimization to solve the game of 2-player Limit Texas Hold'em.
•
47:55
8: GATO (A Generalist Agent)
Today we talk about GATO, a multi-modal, multi-task, multi-embodiment generalist agent.
•
Season 1
•
Episode 8
•
44:51
7: Deep Unsupervised Learning Using Nonequilibrium Thermodynamics (Diffusion Models)
We start talking about diffusion models as a technique for generative deep learning.
•
Season 1
•
Episode 7
•
30:55
6: Deep Reinforcement Learning at the Edge of the Statistical Precipice
We discuss NeurIPS outstanding paper award winning paper, talking about important topics surrounding metrics and reproducibility.
•
Season 1
•
Episode 6
•
1:01:08
5: QMIX
We talk about QMIX https://arxiv.org/abs/1803.11485 as an example of Deep Multi-agent RL.
•
Season 1
•
Episode 5
•
42:06
4: Can Neural Nets Learn the Same Model Twice?
Today's paper: Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective (https://arxiv.org/pdf/2203.08124.pdf)
Summary: A discussion of reproducibility and d...
•
Season 1
•
Episode 4
•
55:23
3: VICReg
Today's paper: VICReg (https://arxiv.org/abs/2105.04906)
Summary of the paper: VICReg prevents representation collapse using a mixture of variance, invariance and covariance when cal...
•
Season 1
•
Episode 3
•
44:46
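The VICReg blurb above names three loss terms: invariance between two views, a variance hinge that keeps each embedding dimension spread out, and a covariance penalty that decorrelates dimensions. A rough sketch of that loss (the coefficients lam/mu/nu and the 1e-4 epsilon are illustrative defaults, not taken from the episode):

```python
import numpy as np

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0):
    # z1, z2: (batch, dim) embeddings of two augmented views of the same input.

    # Invariance: mean squared distance between the two views' embeddings.
    inv = np.mean((z1 - z2) ** 2)

    # Variance: hinge loss pushing each dimension's std above 1,
    # which prevents the trivial collapsed solution z = const.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + 1e-4)
        return np.mean(np.maximum(0.0, 1.0 - std))

    # Covariance: penalize off-diagonal covariance entries so that
    # dimensions carry decorrelated information.
    def cov_term(z):
        n, d = z.shape
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - np.diag(np.diag(cov))
        return np.sum(off ** 2) / d

    return (lam * inv
            + mu * (var_term(z1) + var_term(z2))
            + nu * (cov_term(z1) + cov_term(z2)))

# Example: collapsed embeddings incur a large variance penalty
# even though the invariance term is zero.
collapsed = vicreg_loss(np.zeros((8, 4)), np.zeros((8, 4)))
```

The variance term is what does the anti-collapse work: with identical constant embeddings, invariance and covariance are both zero, but the variance hinge stays large.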