Model-Based RL in the Era of Generative World Models
Reinforcement Learning Conference (RLC) Workshop
August 15, 2026, Montreal, Canada
Submit Your Paper

Outstanding contributions receive a Best Paper Award

Core Topics

Where model-based RL meets the new wave of large-scale generative world models.

  • Model-Based RL Algorithms & Theory
  • Planning & Search in Learned Models
  • Exploration & Sample Efficiency
  • Generative & Latent World Models
  • World Models for Sim-to-Real Transfer
  • Offline RL & Model-Based Imagination

Invited Speakers (list actively updating)

Jack Parker-Holder

Google DeepMind

Confirmed
Danijar Hafner

Google DeepMind

Tentative
Doina Precup

Google DeepMind & McGill University

Confirmed
Romain Laroche

Wayve

Confirmed
Cyrus Neary

University of British Columbia

Confirmed
Amir Zadeh

Lambda AI

Confirmed

Sponsors

We are grateful for the support of our partners in advancing world-modeling research.

Organizers

Mohamad H. Danesh

McGill University & Mila
Organizer

Amin Abyaneh

McGill University & Mila
Organizer

Michael Przystupa

Vrije Universiteit Amsterdam
Organizer

Chenhao Li

ETH Zurich
Organizer

Huihan Liu

UT Austin
Organizer

Glen Berseth

Université de Montréal & Mila
Senior Advisor

Stan Birchfield

NVIDIA
Senior Advisor

Hsiu-Chin Lin

McGill University & Mila
Senior Advisor

Call for Papers

Submission Deadline

May 30, 2026

Anywhere on Earth (AoE)

Author Notification

June 15, 2026

Anywhere on Earth (AoE)

Workshop Date

August 15, 2026

Montreal, Canada

Format

Non-archival

Review

Double-blind via OpenReview

Length

Up to 4 pages, excluding references

Template

RLC 2026

Contact

Questions about the workshop or submissions? worldmodelworkshop@gmail.com