Michał Bortkiewicz
I’m a PhD student at the Warsaw University of Technology and a visiting researcher at Princeton University. I'm advised by Tomasz Trzciński and Piotr Miłoś, and I collaborate closely with Benjamin Eysenbach. I work on making reinforcement learning practical and reliable at scale. My research focuses on unsupervised and goal-conditioned RL, representation learning for efficient exploration, and methods to mitigate catastrophic forgetting and improve plasticity.
News
Two papers accepted to NeurIPS 2025! "Contrastive Representations for Temporal Reasoning" and "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities" (Oral) 🎉
Started my stay at Princeton University as a Visiting Student Research Collaborator 🐅
Two papers accepted to ICLR 2025! "Accelerating Goal-Conditioned RL Algorithms and Research" and "Learning Continually by Spectral Regularization" 🎉
Selected Publications
NeurIPS 2025: "Contrastive Representations for Temporal Reasoning"
NeurIPS 2025 Oral: "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities"
ICLR 2025 Spotlight: "Accelerating Goal-Conditioned RL Algorithms and Research"
ICLR 2025: "Learning Continually by Spectral Regularization"
ICML 2024: "Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning"
ICML 2024 Spotlight: "Fine-Tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem"
More publications are available on Google Scholar.