Lecture Series

AI Colloquium

The AI Colloquium is a series of lectures dedicated to cutting-edge research in the field of machine learning and artificial intelligence, co-organized by the Lamarr Institute for Machine Learning and Artificial Intelligence (Lamarr Institute), the Research Center Trustworthy Data Science and Security (RC Trust), and the Center for Data Science & Simulation at TU Dortmund University (DoDas).

Programme

Distinguished researchers deliver lectures followed by lively discussion. Unlike traditional colloquia, the AI Colloquium prioritizes interactive dialogue and international exchange. Conducted primarily in English, the 90-minute sessions combine a one-hour lecture with a 30-minute Q&A. Sessions take place every Thursday at 10 AM c.t.; the hybrid format, in person in our lecture room at Fraunhofer Strasse 25 or via Zoom, keeps them accessible to all.

Day (usually): Thursday
Start and end time: 10 AM c.t. to 12 PM
Duration of presentation: 60 minutes
Location (usually): Lecture Room 303
3rd floor
Fraunhofer Strasse 25
Dortmund

Upcoming Events

Bayesian Optimization at 1,000 Dimensions: Why It Works, What Breaks, and What’s Next

Start:
End:
Location: JvF25/3-303 - Conference Room (Lamarr/RC Trust Dortmund)
Event type:
  • Lamarr
  • Resource-aware ML
Leonard Papenmeier (Universität Münster)

Abstract: Bayesian optimization (BO) with Gaussian-process (GP) surrogates is often said to “top out” around ~20 dimensions under realistic evaluation budgets due to the curse of dimensionality. In response, high-dimensional BO (HDBO) has produced a rich toolbox of optimization algorithms that report strong performance in the hundreds or even thousands of variables under specific assumptions. Yet recent benchmark studies paint a more puzzling picture: surprisingly vanilla GP-BO configurations, with only minor implementation choices, can match or outperform many specialized HDBO methods. So what, exactly, did the sophisticated methods solve, and why do the simple ones work?
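For readers unfamiliar with the setup, a "vanilla" GP-BO loop of the kind the abstract refers to can be sketched in a few dozen lines. The following is a minimal illustrative version (zero-mean GP, fixed lengthscale, lower-confidence-bound acquisition over random candidates), not any specific implementation discussed in the talk:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=1e-6):
    """GP posterior mean and std at test points Xs given data (X, y)."""
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v**2).sum(0), 1e-12, None)  # prior variance is 1
    return mu, np.sqrt(var)

def bo_minimize(f, dim, budget=30, n_cand=512, seed=0):
    """Minimize f over [0, 1]^dim with a plain GP-BO loop."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, (5, dim))            # initial random design
    y = np.array([f(x) for x in X])
    for _ in range(budget - len(X)):
        Xs = rng.uniform(0.0, 1.0, (n_cand, dim))  # random candidate set
        mu, sd = gp_posterior(X, (y - y.mean()) / y.std(), Xs)
        lcb = mu - 2.0 * sd                        # lower confidence bound
        x_next = Xs[np.argmin(lcb)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Toy problem: minimize a shifted quadratic in 6 dimensions.
x_best, y_best = bo_minimize(lambda x: ((x - 0.3) ** 2).sum(), dim=6)
```

Even this bare-bones loop depends on implementation choices the abstract alludes to, such as how the lengthscale is set and how acquisition candidates are generated.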

In this talk I retrace the trajectory from structured HDBO to this new "simplicity" era. I first present BAxUS and Bounce, which adaptively expand nested subspace embeddings to handle high-dimensional continuous and mixed/combinatorial spaces. Using these as a lens, I then dissect common HDBO benchmarks and show how seemingly state-of-the-art gains can arise from unintended or "too-helpful" structure in the test problems rather than from genuinely scalable modeling.
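For intuition, subspace-embedding methods like BAxUS optimize in a low-dimensional search space and map each point into the full input space via a sparse, count-sketch-like matrix. The sketch below shows only that mapping idea; the function name `sparse_embedding` is ours, and the adaptive nested expansion of the target dimension that BAxUS actually performs is omitted:

```python
import numpy as np

def sparse_embedding(d_high, d_low, rng):
    """Count-sketch-like sparse embedding: every high-dimensional axis is
    assigned to exactly one low-dimensional axis with a random sign, so
    each row of the matrix has a single nonzero entry in {-1, +1}."""
    S = np.zeros((d_high, d_low))
    S[np.arange(d_high), rng.integers(0, d_low, d_high)] = rng.choice(
        [-1.0, 1.0], d_high
    )
    return S

# Search in a 10-D subspace of a 500-D problem.
rng = np.random.default_rng(0)
S = sparse_embedding(500, 10, rng)
z = rng.uniform(-1.0, 1.0, 10)  # point in the 10-D search space
x = S @ z                       # its 500-D image, passed to the objective
```

Because each high-dimensional coordinate copies (up to sign) exactly one low-dimensional coordinate, points stay inside the original box, which is one reason such embeddings are convenient for box-constrained BO.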

The second half focuses on a diagnostic explanation for strong vanilla performance. I highlight two mechanisms: (i) vanishing gradients in both GP marginal-likelihood training and acquisition-function optimization, which can silently freeze learning unless length-scale initialization (or priors) are scaled with dimensionality; and (ii) implicit locality induced by widely used acquisition optimizers, especially "sample-around-best" / RAASP-style candidate generation, which effectively turns global BO into a robust local search procedure. The takeaway is that much reported HDBO success is driven by effective locality, and that many benchmarks are easier than their nominal dimension suggests. I close with open challenges around realistic benchmark design, budget-aware model complexity, and principled local-search hybrids.
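The two mechanisms can be made concrete in a few lines. The helper names below (`init_lengthscale`, `raasp_candidates`) and the specific constants are illustrative assumptions, not the speaker's code:

```python
import numpy as np

def init_lengthscale(dim):
    """Dimension-scaled lengthscale initialization. Pairwise distances
    between uniform points grow roughly like sqrt(dim); without scaling
    the lengthscale accordingly, the kernel matrix collapses toward the
    identity and the marginal-likelihood gradient w.r.t. the lengthscale
    becomes vanishingly small."""
    return np.sqrt(dim)

def raasp_candidates(x_best, n_cand, rng, sigma=0.2, k_active=20):
    """'Sample around best' (RAASP-style) candidate generation: start
    from the incumbent and perturb only a small random subset of
    coordinates, so candidates stay local even in high dimension."""
    dim = x_best.size
    p = min(1.0, k_active / dim)   # ~k_active coords perturbed on average
    mask = rng.random((n_cand, dim)) < p
    noise = rng.normal(0.0, sigma, (n_cand, dim))
    return np.clip(x_best + mask * noise, 0.0, 1.0)

# In 1000 dimensions, each candidate differs from the incumbent in only
# a handful of coordinates, i.e. the acquisition search is local.
rng = np.random.default_rng(0)
incumbent = rng.uniform(0.1, 0.9, 1000)
cands = raasp_candidates(incumbent, 64, rng)
changed = (cands != incumbent).sum(axis=1)  # coords changed per candidate
```

Restricting perturbations to a sparse coordinate subset is what produces the "effective locality" the abstract argues drives much of the reported HDBO success.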

About the Speaker

Dr. Leonard Papenmeier

Bio: Leonard Papenmeier is a postdoctoral researcher at the University of Münster, where he conducts research on Bayesian optimization in high dimensions, with a focus on robust methods and meaningful benchmarks. He earned his Ph.D. in 2025 from Lund University (dissertation: Bayesian Optimization in High Dimensions). His work has been published at venues such as NeurIPS, ICML, and UAI.

Archive

Past Events
