Roads to Research: A Panel Conversation on Research Question Ideation

May 22, 2024 · Cohere For AI

Exploring the unknown, together.

Let's demystify ML research. This panel, hosted by Marzieh Fadaee, focuses on how to ask research questions, featuring insights and advice from experienced researchers.

Discussion includes:
- Practical strategies and techniques for generating and refining research ideas in ML
- Common pitfalls and mistakes ML researchers should avoid when formulating research questions
- Whether to focus on a specific niche or sub-field of ML, versus exploring a variety of topics

Expect a lively discussion, followed by audience questions.

About the speakers
- Stella Biderman: I am a mathematician and theoretical computer scientist interested in a variety of types of computational research, including artificial intelligence, combinatorics, and data science. Currently, I do research developing new techniques and new applications of established techniques in AI and machine learning to data analysis, especially social network analysis. Recently I have been focusing on sociopolitical and ethical implications of computational decision making.

- Tim Dettmers: I am a graduating PhD student at the University of Washington, advised by Luke Zettlemoyer, working on efficient deep learning at the intersection of machine learning, natural language processing, and computer systems, with a focus on quantization and sparsity. My main research goal is to empower everyone to make AI their own. I do this by making large models accessible through my research (QLoRA, LLM.int8(), k-bit inference scaling laws, Petals, SWARM) and by developing software that makes it easy to use my research innovations (bitsandbytes; see the sketch after the speaker list).

- Mostafa Dehghani: I'm a Research Scientist at Google Brain, where I work on machine learning, in particular deep learning. My areas of interest include self-supervised learning, generative models, training giant models, and sequence modeling. Before Google, I did my PhD at the University of Amsterdam. My PhD research focused on improving the process of learning with imperfect supervision. I explored ideas around injecting inductive biases into algorithms, incorporating prior knowledge, and meta-learning the properties of the data using the data itself, in order to help learning algorithms better learn from noisy and/or limited data.

- Eugene Cheah: I am the CEO and co-founder of Recursal AI, where I work on RWKV, an open-source foundation model under the Linux Foundation; uilicious.com, a low-code UI testing platform; and various other open-source projects, most notably GPU.JS. In the past, I worked as a developer and project manager at multiple banks and insurance companies, and as a developer at multiple startups.
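
As a rough, hypothetical illustration of the accessibility Tim's bitsandbytes work aims for (not material from the panel itself): the sketch below assumes the Hugging Face transformers integration of bitsandbytes and loads a causal language model with 4-bit NF4 quantization. The model name and prompt are placeholders chosen only for the example.

```python
# Minimal sketch (assumes transformers, accelerate, and bitsandbytes are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization settings in the spirit of QLoRA; values are illustrative.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bfloat16
)

model_name = "facebook/opt-1.3b"  # placeholder checkpoint, not one discussed in the panel
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # let accelerate place layers on available devices
)

inputs = tokenizer("A good research question is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```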
