
Michał Bortkiewicz

I'm a PhD student at the Warsaw University of Technology and a visiting researcher at Princeton University. I'm advised by Tomasz Trzciński and Piotr Miłoś, and I collaborate closely with Benjamin Eysenbach. I work on making reinforcement learning practical and reliable at scale. My research focuses on unsupervised and goal-conditioned RL, representation learning for efficient exploration, and methods to mitigate catastrophic forgetting and improve plasticity.


News

November 26, 2025

Our paper "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities" received the Best Paper Award at NeurIPS 2025. 🎉

December 1-7, 2025

I will be attending NeurIPS 2025 in San Diego! Feel free to reach out if you want to chat about RL 😀

October 1, 2025

Two papers accepted to NeurIPS 2025! "Contrastive Representations for Temporal Reasoning" and "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities" (Oral) 🎉

June 6, 2025

Started my stay at Princeton University as a Visiting Student Research Collaborator 🐅

February 1, 2025

Two papers accepted to ICLR 2025! "Accelerating Goal-Conditioned RL Algorithms and Research" and "Learning Continually by Spectral Regularization" 🎉


Selected Publications


2025

NeurIPS 2025: "Contrastive Representations for Temporal Reasoning"

Alicja Ziarko, Michał Bortkiewicz, Michał Zawalski, Benjamin Eysenbach, Piotr Miłoś

NeurIPS 2025 Oral: "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities"

Best Paper Award

Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzciński, Benjamin Eysenbach

ICLR 2025 Spotlight: "Accelerating Goal-Conditioned RL Algorithms and Research"

Michał Bortkiewicz, Władek Pałucki, Vivek Myers, Tadeusz Dziarmaga, Tomasz Arczewski, Łukasz Kuciński, Benjamin Eysenbach

ICLR 2025: "Learning Continually by Spectral Regularization"

Alex Lewandowski, Michał Bortkiewicz, Saurabh Kumar, András György, Dale Schuurmans, Mateusz Ostaszewski, Marlos C. Machado


2024

ICML 2024: "Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning"

Michal Nauman*, Michał Bortkiewicz*, Piotr Miłoś, Tomasz Trzciński, Mateusz Ostaszewski, Marek Cygan

ICML 2024 Spotlight: "Fine-Tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem"

Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski, Michał Bortkiewicz, Michał Zając, Razvan Pascanu, Łukasz Kuciński, Piotr Miłoś

More publications are available on Google Scholar.


Open Source Projects