Munich🥨NLP April '26 Meetup

Save the Date!

April 28th, 2026, 17:15–20:30, at the Ludwig Maximilian University of Munich (LMU), Institute for Computer Science.

Oettingenstraße 67, 80538 München - Room 151 (Roomfinder)

RSVP

About the Event

We are thrilled to announce our next on-site meetup in collaboration with MCML, taking place on April 28 at LMU!

Join us for an exciting evening, starting with a public screening of the online lecture on LLM reasoning by Stanford’s Dr. Yejin Choi. Following the screening, Kathy Hämmerl (TUM) will give an in-person talk on cross-lingual representations in multilingual models.

Agenda:

  • 17:15 | Welcome & Intro to MunichNLP and MCML
  • 17:30 | Public Screening: Yejin Choi’s “The Art of (Artificial) Reasoning”
  • 18:30 | Break (30 min)
  • 19:00 | On-Site Talk + Q&A: Kathy Hämmerl on “Understanding Cross-Lingual Representations in Multilingual Models”
  • 19:30 | Food & Networking

Talks

  • Dr. Yejin Choi: The Art of (Artificial) Reasoning

    Scaling laws suggest that “more is more”: brute-force scaling of data and compute leads to stronger AI capabilities. However, despite rapid progress on benchmarks, state-of-the-art models still exhibit “jagged intelligence,” indicating that current scaling approaches may face limits in sustainability and robustness. Meanwhile, although the volume of papers on arXiv continues to grow at a remarkable pace, our scientific understanding of LLM reasoning has not kept up with engineering advances, and the current literature presents seemingly contradictory findings that are difficult to reconcile. In this talk, I will discuss key insights into the strengths and limitations of LLMs, examine when reinforcement learning succeeds or struggles in reasoning tasks, and explore methods for enhancing the reasoning capabilities of smaller language models to help them close the gap with their larger counterparts in specific domains.

    Dr. Yejin Choi is the Dieter Schwarz Foundation Professor in Stanford’s Computer Science Department and a Senior Fellow at the Stanford Institute for Human-Centered AI (HAI). She is a MacArthur Fellow, an AI2050 Senior Fellow, and was named to the TIME100 Most Influential People in AI list in 2023 and 2025. Choi has received two Test-of-Time Awards and ten Best/Outstanding Paper Awards at top AI conferences. She was a main-stage speaker at TED 2023 and has delivered keynotes at several AI conferences, including NeurIPS, ICLR, CVPR, ACL, and AAAI. Her research focuses on democratizing generative AI through smaller yet powerful language models, scaling intelligence via smarter algorithms, pluralistic alignment, and AI for science and social good. She received her Ph.D. in Computer Science from Cornell University and her B.S. in Computer Engineering from Seoul National University in Korea.

  • Kathy Hämmerl: Understanding Cross-Lingual Representations in Multilingual Models

    Multilingual language models must share their parameters across many languages and varieties. Cross-lingually aligned representations are sometimes thought to correlate with token overlap, but is it really so simple? And what about the “aligned” representations themselves? How do we define what we want them to look like, and is the answer the same for different model types?

    Kathy Hämmerl is finishing their PhD at the Technical University of Munich under the supervision of Prof. Alex Fraser. They have worked extensively on cross-lingual representation learning in multilingual Transformer models, in close collaboration with Jindřich Libovický of Charles University, Prague. They recently completed an internship at LILT, focusing on localisation-specific evaluation of machine translation, and have previously collaborated on topics ranging from low-resource LLM evaluation to how human biases are reified in language models.