
AI Research Program

Conduct authentic AI research under expert mentorship. Develop your own techniques and publish your results in a research paper.


Improve state-of-the-art performance of large language models like GPT-4.

We are pursuing an ambitious goal: advancing the frontier of large language model capabilities while rigorously evaluating performance against industry-standard benchmarks. By leveraging open-source LLMs such as Meta's Llama 2, our program is uniquely positioned to contribute to this cutting-edge field of research.
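For a sense of the kind of workflow students practice, here is a minimal sketch (illustrative only, not program code) of loading an open-source LLM with the Hugging Face transformers library and prompting it on a benchmark-style question. The model checkpoint and prompt below are assumptions chosen for demonstration.

```python
# Minimal sketch: load an open-source LLM and prompt it on a benchmark-style
# question. The checkpoint and prompt are illustrative assumptions; any
# open-source chat model would work similarly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# A GSM8K-style arithmetic question formatted as a simple prompt.
prompt = "Q: A train travels 90 miles in 1.5 hours. What is its average speed in mph?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the model's answer).
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```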

Fall 2023 Research Highlights

*Our NAACL submissions are not displayed here, as they are in the anonymity period. The works displayed here are still in progress and will be submitted to another *ACL conference. Authors listed in alphabetical order.

(New) NeurIPS 2024

In April 2024, NeurIPS, the most prominent AI conference in the world, announced its inaugural high-school track, with submissions due at the end of June.

Our NeurIPS Track

  • Program Dates: May 12 - June 30.

    • Lectures on Sundays 1-2:30 pm PT

    • Two 20-30 minute check-ins per week on Tu/Th

  • AI for Social Impact: Aligning with the conference scope, our projects will focus on AI for social impact. Unlike our usual program, it will not necessarily be LLM-specific.

  • Learn AI fundamentals and publish a paper: Gain a foundation in AI/ML skills and conduct scientific research to be submitted to NeurIPS.

Program Overview

Objective

Research Experience: Immerse yourself in the process of real-world AI research by delving into literature review, formulating hypotheses, running experiments, communicating your results in a research publication, and submitting research to conferences.

Academic Contribution: Engage with the rapidly growing field of large language models by developing techniques that have the potential to make an actual impact.

Schedule

Weekly Structure: The program has two weekly meetings, with optional office hours available. We expect you to dedicate 5-10 hours per week in total, with flexibility for further exploration.

Weekend Instructor Lecture (1.5 hours): Learn LLM and ML fundamentals and review relevant literature for research inspiration.

Mid-week Progress Update (20 minutes, scheduled by group): Share your weekly progress with your mentor and explore research directions.

Office Hours and Slack: Receive support from mentors throughout the week if you run into roadblocks while debugging, want to bounce ideas around, or want to dive deep into technical topics.

Pedagogy

Hands-on Mentorship: Work in a close-knit team of 3-4, guided by a dedicated mentor who works closely with the team to facilitate progress and engages with each student individually to enhance learning.

Streamlined Pedagogy: The program is tailored to allow you to engage in real AI research without prior research experience or AI expertise. We provide pre-structured code frameworks to minimize technical hurdles, and lessons on LLM fundamentals and meta-level research skills to ensure a solid foundation for all students.

Logistics

Class Format: Meetings are fully online and held over Zoom.

Program Dates (times listed in Pacific Time):

(New) NeurIPS Track: May 12 - Jun 30. Lecture time: Sundays 1-2:30 pm PT

Summer A: Jun 1 - Aug 17. Lecture time: Saturdays 10-11:30 am PT

Summer B: Jun 23 - Sep 8. Lecture time: Sundays 10-11:30 am PT

Application Deadline: Admissions for all cohorts are currently on a rolling basis and will close as spaces fill. Summer A is almost at full capacity.

Program Fee: The total cost of the program is $1725 (~$60 per instructional hour), a fraction of the price of comparable research programs. Unlike many research programs, we are genuinely committed to accessibility and an authentic AI research experience.

Scholarships: Need-based scholarships and a limited number of merit-based scholarships are available.

Words from our Fall 2023 Research Alumni


Michael Naeim, Grade 12, Miami College Language High School

"I am thrilled to share my experience with the Algoverse research program. The lectures were exceptionally well-crafted, providing invaluable insights. The support I received throughout the program was nothing short of amazing. In comparison to other research programs, Algoverse stands out as I found myself learning a lot from lectures to office hours and weekly meetings. Embarking on a research project focused on BERT was a daunting task for me and my team, but the unwavering support from the program made it achievable. The mentors were not just knowledgeable, but also perfect in their guidance. Their friendliness and constant support made the learning journey truly enjoyable. One aspect that truly impressed me was the clarity of the plan provided and the abundance of resources at our disposal. The program's commitment to following up with teams and fostering a sense of community was outstanding. I not only gained valuable knowledge but also forged connections with like-minded individuals, creating a network of friends who share my interests. Above all, the mentors were the highlight of the program for me. Their daily support was instrumental in my success. I am grateful for the experience, the community, and the exceptional guidance I received from the Algoverse research program. Definitely, I am going to recommend this program to anyone who is both excited to learn about machine learning from amazing mentors and have a goal to publish his paper at a huge conference like NAACL but also interested in having fun and friendly experience. I would say mentors and the team you will have is the best part in the program. Support that you will get from the mentors and how much time they dedicate to helping you is definitely amazing."

Our Research Team

We are a dedicated team of graduate student researchers from leading AI universities and AI researchers from industry, all with extensive backgrounds in teaching.


Sean O'Brien

AI Research Director

AI Research at UCSD | Former AI Resident at Meta | Berkeley AI Research

Sean conducts research on large language models like GPT-4 as a PhD researcher at UCSD. While an AI resident at Meta, he researched language model decoding methods and co-authored Shepherd, a small language model that generates critiques matching the quality of ChatGPT. Previously, at Berkeley AI Research (BAIR), he specialized in transformer architectures for strategy learning. Sean was also a 7-time GSI at Berkeley, teaching introductory programming, discrete mathematics, and upper-division machine learning, while triple majoring in EECS, math, and cognitive science.


Kevin Zhu

Program Director, Research Mentor

Former UC Berkeley Instructor | Software Engineer at Palantir | Quant at Citadel

Kevin taught 3000+ Berkeley students during his tenure as a lecturer for CS198-112 and a 5-time Head GSI, specializing in upper-division algorithms. He has also held software engineering roles at Palantir and various startups, and ML research roles at Citadel, Goldman Sachs, and Berkeley RISE Lab, where he applied traditional machine learning techniques to the stock market and researched techniques for improving convolutional neural network inference efficiency. Kevin now serves as the lead director for the Algoverse programs, as well as an instructor.


Thomas Lu

Research Mentor

AI Research at CMU | Former AI Researcher at TikTok | Berkeley AI Research

Thomas conducts AI research at Carnegie Mellon University as a Master's student in machine learning. He is a co-author of "Learned Incremental Representations for Parsing", which earned the highest distinction of Best Paper at ACL 2022, the premier NLP conference. He has previously conducted research at Berkeley AI Research, MDI, and TikTok. Thomas completed his bachelor's at UC Berkeley, triple majoring in CS, data science, and linguistics with a 4.0 GPA.

Applications for Spring and Summer 2024 are now open.
