Nils Kohring, M.Sc.

Phone: +49 (0) 89 289 - 17506
Room 01.10.055
Boltzmannstr. 3
85748 Garching, Germany
Hours: by arrangement

Short Bio

I'm a Ph.D. student at the DSS chair supervised by Prof. Bichler. My research focuses on the computation of equilibria in markets and auctions via multi-agent reinforcement learning methods.

Education
  • 2016 - 2019: Master of Economathematics (M.Sc.), University of Cologne
  • 2018: Visiting Student at The University of Tokyo, Japan
  • 2013 - 2016: Bachelor of Economathematics (B.Sc.), University of Cologne

Work Experience
  • 2019/06 - 2019/08: Data Science Intern at Fintech Startup
  • 2018/08 - 2019/02: Intern in the Applied Mathematics Team, Bayer (Leverkusen)
  • 2016/09 - 2018/04: Student Tutor for different mathematics lectures, University of Cologne
  • 2015/08 - 2015/10: Intern in Process Management, Deutsche Bank (Frankfurt a.M.)


Journal Publications

M. Bichler, N. Kohring, and S. Heidekrüger. Learning equilibria in asymmetric auction games. INFORMS Journal on Computing, 2023. [ link ]

M. Bichler, N. Kohring, M. Oberlechner, and F. R. Pieroth. Learning equilibrium in bilateral bargaining games. European Journal of Operational Research, 2022. [ link ]

M. Bichler, M. Fichtl, S. Heidekrüger, N. Kohring, and P. Sutterer. Learning equilibria in symmetric auction games using artificial neural networks. Nature Machine Intelligence, 3, 2021. [ link ] Also presented at the 2020 annual meeting of the NBER Market Design Working Group.

Conference Publications

N. Kohring, F. R. Pieroth, and M. Bichler. Enabling first-order gradient-based learning for equilibrium computation in markets. Proceedings of the 40th International Conference on Machine Learning, PMLR 202:17327-17342, 2023. [ link ]

Peer Reviewed Workshop Publications

N. Kohring, C. Fröhlich, S. Heidekrüger, and M. Bichler. Equilibrium computation for auction games via multi-swarm optimization. In AAAI-22 Workshop on Reinforcement Learning in Games (AAAI-RLG 22), Online, 2022. [ link | pdf ]

S. Heidekrüger, N. Kohring, P. Sutterer, and M. Bichler. Equilibrium learning in combinatorial auctions: Computing approximate Bayesian Nash equilibria. In AAAI-21 Workshop on Reinforcement Learning in Games (AAAI-RLG 21), Online, 2021.

S. Heidekrüger, N. Kohring, P. Sutterer, and M. Bichler. Multiagent learning for equilibrium computation in auction markets. In AAAI Spring Symposium on Challenges and Opportunities for Multi-Agent Reinforcement Learning (COMARL-21), Online, 2021.

S. Heidekrüger, P. Sutterer, N. Kohring, and M. Bichler. Learning Bayesian Nash equilibria in auction games. In INFORMS Workshop on Data Science, Online, 2020.

S. Heidekrüger, P. Sutterer, N. Kohring, and M. Bichler. Equilibrium learning in combinatorial auctions: Computing approximate Bayesian Nash equilibria. In Workshop on Information Technology and Systems (WITS20), Online, 2020.

Working Papers

F. R. Pieroth, N. Kohring, and M. Bichler. Deep reinforcement learning solves continuous multi-stage games. 2023.

Conference Talks

Learning equilibria in double auctions via self-play. International Conference on Operations Research, Karlsruhe, 09/2022.

Learning Bayesian Nash equilibria in auction games. Workshop on Data Science, INFORMS Annual Meeting, Online, 11/2020.


Software

Most of my research builds on our software package bnelearn, a framework for equilibrium learning in sealed-bid auctions and other markets that can be modeled as Bayesian games.
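The core idea behind this line of work — self-play with estimated gradients of each bidder's expected utility — can be illustrated with a minimal NumPy sketch. This is purely illustrative and not bnelearn's actual API: the function and parameter names are invented for the example, and a single linear strategy parameter stands in for the neural-network strategies used in the actual framework. In a symmetric two-bidder first-price auction with i.i.d. uniform values, the analytic Bayes-Nash equilibrium is b(v) = v/2, which the self-play loop should recover.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(a_self, a_opp, v):
    """Monte-Carlo estimate of bidder 1's expected utility when both
    bidders use linear strategies b(v) = slope * v in a two-bidder
    first-price sealed-bid auction with i.i.d. uniform[0, 1] values."""
    bids_self = a_self * v[:, 0]
    wins = bids_self > a_opp * v[:, 1]
    return np.mean(wins * (v[:, 0] - bids_self))

a = 1.0                 # start from truthful bidding, b(v) = v
lr, sigma = 0.2, 0.05   # learning rate and perturbation size
for _ in range(2000):
    v = rng.uniform(size=(4096, 2))  # common random numbers for both evaluations
    eps = rng.normal()
    # antithetic evolutionary-strategies estimate of du/da, evaluated in self-play
    grad = eps / (2 * sigma) * (utility(a + sigma * eps, a, v)
                                - utility(a - sigma * eps, a, v))
    a += lr * grad

# a should approach 0.5, the slope of the analytic BNE b(v) = v / 2
```

The expected utility is not differentiable in the strategy parameter because of the discrete win/lose indicator, which is why a zeroth-order (evolutionary-strategies) gradient estimate is used here; smoothing the allocation to enable first-order gradients is the subject of the ICML 2023 paper above.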


Supervised Theses

Gefei Qiu. Assessing the Viability of B2B Recommender Systems for Hierarchical Product and Customer Data, M.Sc. Management and Technology (ongoing, 2023).

Undisclosed project.

Longtao Liu. Counterfactual Regret Minimization in Sequential Auctions, M.Sc. Data Engineering and Analytics (2023).

Sohrab Tawana. An Analysis of League-Training in Multi-Agent Reinforcement Learning Applications, B.Sc. Informatics (2023).

Hlib Kilichenko. Multimodal Trajectory Prediction for Self-driving Vehicles using a Single Monocular Camera, M.Sc. Informatics (2023), in cooperation with Tensoreye.

Maximilian Göldl. Multidimensional Analysis of Social Networks: Structure Evaluation and Cluster Detection, B.Sc. Informatics (2022), in cooperation with an intelligence agency.

Jonas Lang. Mobile Network Key Performance Indicator Prediction and Explainability Using Transformers, M.Sc. Informatics (2022), in cooperation with an industry partner.

Manuel A. Schreiber. Model-Agnostic Explainable AI Methods for Binary Classification, M.Sc. Management and Technology (2022).

Christ Ligori. Using Competitive Gradient Descent for Nash Equilibrium Computation, B.Sc. Information Systems (2022).

Congnan Wang. Exploring the Constraints and Opportunities of Artificial Intelligence-driven Tax Policies, M.Sc. Robotics, Cognition, Intelligence (2022).

Richard Stromer. Dynamic Topic Clustering for News Articles, Guided Research Project (2022).

Dmitrij Boschko. Rational Agents for the Board Game Scotland Yard based on Partially Observable Markov Decision Processes, M.Sc. Informatics (2022).

Farheen Zehra. Equilibrium Learning in Double Auctions, M.Sc. Mathematics in Data Science (2021).

Undisclosed project.

Shakiba Sheikhian. Strategy Evaluation of the Game RPS: Detecting Exploitabilities via Methods of RL, B.Sc. Information Systems (2021).

Carina Fröhlich. A Survey on Particle Swarm Optimization with an Application to non-differentiable Vector Optimization, M.Sc. Information Systems (2021).

Qiaoxi Liu. Data-driven Marketing Attribution Model Based Attention Mechanism for a Quantitative Estimation of TV-Advertisement Effects, M.Sc. Informatics (2021), in cooperation with ProSiebenSat.1 Media.

Michael Gigler. Predicting the Individual Suitability for E-Mobility Using Machine Learning, M.Sc. Information Systems (2021), in cooperation with BMW.

Wusheng Liu. Learning Approximate Bayes-Nash Equilibria with Opponent-Learning Awareness, M.Sc. Data Engineering and Analytics (2021).

Duc Anh Le. Hyperparameter Optimization of a Deep Reinforcement Learning System for Equilibrium Computation, B.Sc. Informatics (2020).