Cyber Valley Research Fund Symposium 2025
Projects from the Cyber Valley Innovation Campus
Join us at the Scientific Symposium to celebrate the achievements of the Cyber Valley Research Fund!

Since 2019, the Cyber Valley Research Fund (CVRF) has been driving innovation in machine learning, computer vision, and robotics, thanks to the generous support of Cyber Valley's industry founding partners. With total funding of five million euros, twenty groundbreaking research projects have been brought to life.
The event will feature presentations of several completed CVRF projects and offer exciting networking opportunities in collaboration with the AI Innovation Center of Fraunhofer IPA and Fraunhofer IAO. We are thrilled to welcome experts from Cyber Valley’s founding partners, including IAV, Porsche, and BMW, who will share their insights and highlights from their cutting-edge research and projects.
Program
10:00
Introduction: Cyber Valley GmbH & Fraunhofer Institute for Manufacturing Engineering and Automation IPA
10:15 - 11:15
Cyber Valley Research Fund Projects: Neural networks & biomedical
Dmitry Kobak – Hertie Institute for AI in Brain Health, Department of Data Science, University of Tübingen: “Contrastive learning for dimensionality reduction and visualization of transcriptomic data”

Biography: Dmitry Kobak studied computer science and physics in St. Petersburg, then did a PhD in computational neuroscience at Imperial College London, followed by a postdoc at the Champalimaud Institute in Lisbon. He is currently a group leader in the Department of Data Science of the Hertie AI institute at Tübingen University, working on machine learning and data science for biological applications. He is interested in self-supervised and unsupervised learning, in particular contrastive learning, manifold learning, and dimensionality reduction for 2D visualization of scientific datasets. He is also interested in statistical forensics and has been involved in the analysis of Russian electoral falsifications, war fatalities, and Covid-19 excess mortality.

Abstract: In his talk, he will present his group's work on self-supervised visualization of image data. Visualization methods based on the nearest-neighbor graph, such as t-SNE or UMAP, are widely used for visualizing high-dimensional data. Yet these approaches only produce meaningful results if the nearest neighbors themselves are meaningful. For images represented in pixel space this is not the case: distances in pixel space often do not capture our sense of similarity, so neighbors are not semantically close. This problem can be circumvented by self-supervised approaches based on contrastive learning, such as SimCLR, which rely on data augmentation to generate implicit neighbors, but these methods do not produce two-dimensional embeddings suitable for visualization. Their method, called t-SimCNE, combines ideas from contrastive learning and neighbor embeddings, and trains a parametric mapping from the high-dimensional pixel space into two dimensions.
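The combination of contrastive learning with a 2D output can be illustrated with a toy loss. The sketch below is a hypothetical simplification (a Cauchy similarity kernel, as in t-SNE-style neighbor embeddings, plugged into an InfoNCE-style contrastive loss on 2D points); it shows only the loss idea, not the published t-SimCNE architecture or training setup:

```python
import numpy as np

def cauchy_similarity(z):
    # Cauchy kernel 1 / (1 + d^2) between all pairs of 2D points,
    # as used in t-SNE-style neighbor embeddings
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + d2)

def contrastive_2d_loss(z_a, z_b):
    """z_a, z_b: (n, 2) embeddings of two augmentations of the same n images.
    Row i of z_a is pulled toward row i of z_b and pushed from all other points."""
    z = np.vstack([z_a, z_b])                 # (2n, 2)
    n = z_a.shape[0]
    sim = cauchy_similarity(z)
    np.fill_diagonal(sim, 0.0)                # exclude self-similarity
    # index of each point's positive pair (its other augmentation)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    p = sim[np.arange(2 * n), pos] / sim.sum(axis=1)
    return -np.log(p).mean()
```

Embeddings whose augmented pairs land close together in 2D get a lower loss than mismatched ones; minimizing this over a parametric network is what produces the visualization.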
Tian Qiu – Cyber Valley Group Leader, University of Stuttgart; Head of Smart Technologies for Tumor Therapy, German Cancer Research Center (DKFZ) Dresden, Dresden University of Technology: “The cyber-physical twin of human organs”

Biography: Tian Qiu heads the department “Smart Technology for Tumor Therapy” at the German Cancer Research Center (DKFZ) site Dresden. He is also a professor at the Faculty of Medicine and the Faculty of Electrical and Computer Engineering, Technical University Dresden. He received his Bachelor's and Master's degrees from Tsinghua University, China, and his Ph.D. from the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland. He did his postdoctoral research at the Max Planck Institute for Intelligent Systems and led a Cyber Valley research group at the University of Stuttgart before joining DKFZ. His main research interest is micro/nano-robotics for minimally invasive medicine. He has received many awards, including a nomination for the Big-on-Small Award at the IEEE MARSS conference and the ERC Starting Grant “VIBEBOT”.

Abstract: The project aims to develop cyber-physical twins of human organs that combine the advantages of physical and cyber models, to facilitate the sensing of hard-to-acquire biomedical data under realistic conditions and to allow analysis and modelling using these data. For the physical models, we optimized the workflow of 3D printing and soft-material molding to realize complex anatomy with realistic materials. We developed new hydrogel materials that can be resected and coagulated like real soft tissue with real surgical instruments. For the cyber models, we implemented a variety of sensing modalities in the physical models, such as optical imaging, ultrasound imaging, and electrical sensing, to generate data for a quantitative evaluation of surgical outcomes and real-time feedback that helps surgeons improve their surgical skills. Augmented Reality (AR)-assisted models have also been built and tested by surgeons.
Fabian Sinz – IRG Neuronal Intelligence, Institute for Bioinformatics and Medical Informatics, University of Tübingen; Professor, Department for Machine Learning, Faculty for Mathematics and Computer Science, University of Göttingen: “Mechanisms of representation transfer”

Biography: Dr. Fabian H. Sinz is a Professor for Machine Learning at the Georg August University Göttingen. He holds a Dr. rer. nat. from the University of Tübingen (Max Planck Institute for Biological Cybernetics) and a Diploma in Bioinformatics. His research bridges neuroscience and machine learning, focusing on visual systems modeling and robust AI. Before his current position, he held roles at Baylor College of Medicine as Research Assistant Professor and Machine Learning Coordinator, led an independent research group at the University of Tübingen, and worked as a postdoctoral researcher in Tübingen and Houston. Dr. Sinz and his group frequently publish in high-ranking journals including Nature, Nature Neuroscience, and Science, and at top AI conferences such as NeurIPS and ICLR. His work has been recognized with awards including an ERC Consolidator Grant in 2024. He serves on multiple academic committees and maintains professional memberships in organizations including the ELLIS Society and the Bernstein Network for Computational Neuroscience.

Abstract: The project investigated how to transfer robust visual processing capabilities between artificial neural networks and from biological to artificial systems. Key findings included: 1) common functional transfer methods struggle to transfer even simple equivariance properties; 2) multi-task learning with monkey visual cortex data can improve network robustness against image distortions; and 3) networks co-trained with biological data show enhanced sensitivity to humanly salient image regions. The team developed several approaches, including a novel self-supervised contrastive learning algorithm and the HARD (Hard Augmentations for Robust Distillation) framework for transferring inductive biases between networks. These advances contribute to the broader goal of creating artificial vision systems that better mimic the generalization capabilities of biological vision while maintaining high performance on specific tasks.
11:15 - 11:30
Coffee and Networking
11:30 - 12:10
Robotics and GenAI at Fraunhofer Institutes
Werner Kraus – Head of the Department of Robot and Assistive Systems and Leader of the AI Innovation Center, Fraunhofer IPA: “Recap European Robotics Forum 2025 and outlook on current activities in robotics”

Biography: TBD

Abstract: TBD
Chandan Kumar – Team Leader, Interaction Design & Technologies, Fraunhofer Institute for Industrial Engineering IAO: “Smart Innovation with LLM-Powered Agents”

Biography: Dr. Chandan Kumar leads the Interaction Design and Technologies research group at the Fraunhofer Institute for Industrial Engineering IAO in Stuttgart. His work lies at the intersection of human-computer interaction and artificial intelligence, with a focus on intuitive, human-centered applications. Dr. Kumar holds a Ph.D. in Computer Science from the University of Oldenburg (2016) and has authored over 40 scientific publications. His research spans information retrieval, multimodal interaction, eye tracking, personalization, and visualization. He currently leads projects on data analytics, generative AI, and multi-agent LLMs, with applications in knowledge management, resource planning, customer research, and smarter innovation. His team collaborates closely with industry partners to translate research into real-world impact and shape emerging human-AI interaction strategies.

Abstract: TBD
12:10 - 12:50
Cyber Valley Research Fund Projects: (Flight) Robotics
Aamir Ahmad – Flight Robotics Group, Institute of Flight Mechanics and Controls (IFR), Department of Aerospace Engineering and Geodesy, University of Stuttgart, and Robot Perception Group, Perceiving Systems Department, Max Planck Institute for Intelligent Systems: “WildCap – Autonomous Animal Motion Capture for Wildlife Conservation”

Biography: Aamir Ahmad is a tenure-track professor of Flight Robotics and the deputy director of research at the Institute of Flight Mechanics and Controls, University of Stuttgart, Germany. He is also a Research Group Leader at the Max Planck Institute (MPI) for Intelligent Systems in Tübingen, where he was previously a research scientist (2016–2020). He was awarded the Marie Curie Intra-European Postdoctoral Fellowship 2013 at the MPI for Biological Cybernetics, Tübingen (2014–2016). He obtained his Ph.D. in Electrical and Computer Engineering from the University of Lisbon, Portugal, in 2013 and his undergraduate degree in Civil Engineering (B.Tech) from the Indian Institute of Technology (IIT) Kharagpur in 2008. His research interests include aerial robotics, multi-robot systems, deep learning and reinforcement learning for aerial robots, formation control, aerial vision, localization, target tracking, and simultaneous localization and mapping (SLAM). He is one of the recipients of the prestigious award “KI-Champions Baden-Württemberg 2024” for his project “WildCap – Intelligent Aerial Robots for Wildlife Conservation”.

Abstract: In this talk, we will present WildCap's research on teams of aerial robots, including both multi-rotor drones and airships, that use only their on-board resources (camera and computer) and cooperate with each other via wireless communication to autonomously detect, track, follow, and infer the behaviour of wild animals, all in their natural habitat. We will discuss the AI-based methods developed for these monitoring functionalities and for the monitoring-accuracy-driven autonomous control of these robot teams. We will also touch upon our work on synthetic data generation that fuels several of our AI-based methods. Finally, we will show how we are deploying these drones to understand the behavior of Grévy's zebras in Kenya and Przewalski's horses in Hungary.
Christian Gall – Research associate at the University of Stuttgart's Institute of Flight Mechanics and Controls: “Decision Making for Environmental Energy Exploitation with Small Aircraft”

Biography: Christian Gall is a research associate at the University of Stuttgart's Institute of Flight Mechanics and Controls. He received his B.S. and M.S. degrees in Aerospace Engineering from the University of Stuttgart, where he specialized in system dynamics and automation engineering. In addition, he spent a year at the University of Virginia in the U.S., focusing on robotics and machine learning.

Abstract: At the moment, there are large investments and developments in the field of individual mobility, such as urban air mobility vehicles and unmanned aircraft applications, e.g., for medical transportation or rescue purposes. The majority of these small aircraft are based on electric propulsion, and thus their endurance is limited. In the case of fixed-wing aircraft, the endurance can be significantly improved by exploiting energy from the atmospheric environment. Hence, a deep reinforcement learning approach to the autonomous localization and exploitation of thermal updrafts is presented and validated in real-world flight tests. As the resulting policy maps directly from the aircraft's position and climb rate to control commands, a deep neural network that can detect thermal updrafts and estimate their position, strength, and spread in a human-readable manner is additionally presented.
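The underlying thermal-centering task can be pictured with a toy simulation. The sketch below is entirely hypothetical (a Gaussian updraft field and a greedy probe-and-move rule); it is not the deep RL policy from the talk, only an illustration of steering toward the strongest lift:

```python
import numpy as np

def updraft(pos, center=np.array([50.0, 50.0]), strength=3.0, spread=20.0):
    # Toy Gaussian thermal: vertical wind speed as a function of 2D position
    return strength * np.exp(-np.sum((pos - center) ** 2) / (2 * spread ** 2))

def greedy_policy(pos, step=2.0):
    """Crude stand-in for a learned policy: probe the four compass
    directions and move toward the largest climb rate."""
    moves = step * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
    climbs = [updraft(pos + m) for m in moves]
    return moves[int(np.argmax(climbs))]

# Start away from the thermal and let the greedy controller center on it
pos = np.array([20.0, 30.0])
for _ in range(100):
    pos = pos + greedy_policy(pos)
```

A deep RL policy replaces the hand-written probing rule with a network trained from simulated flight experience, but the objective (climbing toward the thermal core) is the same.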
12:50 - 14:00
Lunch & Poster Session
14:00 - 14:50
Cyber Valley Research Fund Industry Partners: Industry points of view
Mohsen Kaboli – Lead & PI, AI and Robotics Research, BMW Group, and Professor of Embodied AI, Robotics and Tactile Intelligence, Eindhoven University of Technology (NL); Director of the RoboTac Lab: “Embodied interactive visuo-tactile perception and learning for robotic grasp and manipulation”

Biography: Dr. Mohsen Kaboli is a professor of Embodied AI, Robotics, and Tactile Intelligence at Eindhoven University of Technology (TU/e) in the Netherlands. He is the head of the Embodied Interactive Perception & Robot Learning Lab (RoboTac). Additionally, he is the Lead & PI of the AI and Robotics research lab at the BMW Group Center of Invention in Munich, Germany, a role he has held since 2018. Previously, Dr. Kaboli held the position of assistant professor at the Institute for Brain and Cognition at Radboud University in the Netherlands from 2019 to 2022. He was a group leader in tactile robotics and a postdoctoral research fellow at the Institute for Advanced Studies (IAS), Technical University of Munich (TUM), Germany, from September 2017 to August 2018. He received his Ph.D. degree with the highest distinction (summa cum laude) in robotics, focusing on interactive tactile perception and learning in robotics, from TUM in 2017. He was a finalist for the best European Ph.D. thesis prize in robotics, the Georges Giralt Ph.D. Award.

Abstract: This talk explores embodied interactive visuo-tactile perception and active learning as a foundation for robotic grasping and manipulation. By integrating predictive and interactive perception, robots can actively engage with objects, refining their understanding through both visual and tactile feedback. I will discuss the role of shared visuo-tactile representations in enabling real-time adaptation, as well as cross-modal visuo-tactile perception for dexterous manipulation. The talk will highlight key challenges and future directions in developing robotic systems that can intelligently interact, actively learn, and generalize across diverse tasks and embodiments.
Maximilian Rabus – Product Manager & Data Strategist, Data-Driven Development, Data Science & Governance, Dr. Ing. h.c. F. Porsche AG: “Data Fuels Performance: Insights into Porsche's data-driven development”

Biography: After completing his master's degree in Automotive and Engine Technology at the University of Stuttgart, Maximilian Rabus joined Porsche in 2018. His initial focus was on establishing Artificial Intelligence as an alternative validation method in passive safety development. In late 2020, he took on the newly created role for Data Science and Digital Transformation in Body Development at Porsche. Since 2024, he has been a product manager in the Data-Driven Development department and, as a Data Domain Manager, is responsible for defining and implementing the data strategy in the vehicle development domain. His doctoral research at the University of Freiburg focused on integrating Data Science and Artificial Intelligence into conventional development processes. His particular interest lies in Explainable AI and the future-oriented organization and management of data.
Alexander Joos – Senior Vice President Digital Solutions Powertrain, TD-X | Powertrain Systems, IAV GmbH: “Modern development in the age of AI”
14:50 - 15:30
Cyber Valley Research Fund Projects: Clusters of Excellence
Paul-Christian Bürkner – Cluster of Excellence SimTech, Research Group for Bayesian Statistics, University of Stuttgart; Full Professor of Computational Statistics, Department of Statistics, TU Dortmund University: “Meta-uncertainty in Bayesian model comparison”

Biography: Paul Bürkner is a statistician with a focus on probabilistic (Bayesian) methods, currently working as a Full Professor of Computational Statistics at TU Dortmund University, Department of Statistics. Having originally studied psychology and mathematics, his core research is nowadays located somewhere between statistics and machine learning, with applications in almost all quantitative sciences.

Abstract: In experiments and observational studies, scientists gather data to learn more about the world. However, what we can learn from a single data set is always limited, and we are inevitably left with some remaining uncertainty. It is of utmost importance to take this uncertainty into account when drawing conclusions if we want to make real scientific progress. Formalizing and quantifying uncertainty is thus at the heart of statistical methods aiming to obtain insights from data. Numerous research questions in basic science are concerned with comparing multiple scientific theories to understand which of them is more likely to be true, or at least closer to the truth. To compare these theories, scientists translate them into statistical models and then investigate how well the models' predictions match the gathered real-world data. One widely applied approach to compare statistical models is Bayesian model comparison (BMC). Relying on BMC, researchers obtain the probability that each of the competing models is true (or is closest to the truth) given the data. These probabilities are measures of uncertainty and, yet, are also uncertain themselves. This is what we call meta-uncertainty (uncertainty over uncertainties).

Meta-uncertainty affects the conclusions we can draw from model comparisons and, consequently, the conclusions we can draw about the underlying scientific theories. However, we have only just begun to unpack and understand all of its implications. This project contributes to this endeavor by developing and evaluating methods for quantifying meta-uncertainty in BMC. Building upon mathematical theory of meta-uncertainty, we will utilize extensive model simulations as an additional source of information, which enables us to quantify so-far implicit yet important assumptions of BMC. What is more, we will be able to differentiate between a closed world, where the true model is assumed to be within the set of considered models, and an open world, where the true model may not be within that set, a critical distinction in the context of model comparison procedures.
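The posterior model probabilities at the center of BMC follow from Bayes' rule: P(M_k | data) is proportional to the marginal likelihood p(data | M_k) times the model prior P(M_k). A minimal sketch, assuming the log marginal likelihoods are already available (in practice, computing them is the hard part):

```python
import numpy as np

def posterior_model_probs(log_marginal_liks, prior=None):
    """Posterior probabilities of competing models given their
    log marginal likelihoods and (optionally non-uniform) priors."""
    log_ml = np.asarray(log_marginal_liks, float)
    if prior is None:
        prior = np.full(len(log_ml), 1.0 / len(log_ml))  # uniform prior
    log_post = log_ml + np.log(prior)
    log_post -= log_post.max()          # stabilize before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

# Three hypothetical models; the first fits the data best
probs = posterior_model_probs([-10.0, -12.0, -15.0])
```

The point of the meta-uncertainty project is that these probabilities are themselves estimates subject to uncertainty, which this simple calculation does not reflect.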
Nicolas Kubail Kalousdian – Institute of Computational Design and Construction (ICD), University of Stuttgart: “Task and motion planning for collaborative robotic construction with irregular materials”

Biography: Nicolas Kubail Kalousdian is a PhD candidate at the Institute for Computational Design and Construction (ICD), University of Stuttgart, and the Principal Autonomy Engineer at GreenSight in Boston, MA. His research focuses on AI-driven task and motion planning for autonomous robotic construction with natural, deformable materials. At GreenSight, he leads autonomy efforts for DARPA, USAF, and NSF-funded projects. He has lectured internationally and published in IEEE RAL and Advanced Science. Nicolas holds an M.Sc. from the University of Stuttgart (ITECH) and has been supported by both German and U.S. research funding programs.

Abstract: This project, Task and Motion Planning for Collaborative Robotic Construction with Deformable Materials, investigates the integration of robotics and artificial intelligence into architectural construction using natural, anisotropic materials. Departing from the prevailing reliance on rigid, standardized materials, the project advances a material-robot co-design framework that enables autonomous, sustainable construction in dynamic environments. It explores how distributed robotic systems can manipulate deformable elements through reinforcement learning (RL) and logic-geometric programming (LGP), forming a hybrid planning paradigm that adapts to material variability while preserving structural intent. The project developed a custom mobile robotic platform capable of transporting, bending, and aligning bamboo elements in simulation and physical experiments. Reinforcement learning, augmented by domain randomization and curriculum learning, was employed to teach dynamic motor skills under material uncertainty.

In parallel, LGP was used to plan multi-step assembly sequences respecting geometric and structural constraints. A high-fidelity simulation environment, incorporating real-time feedback and mechanical modeling of bamboo, supported robust policy development. The culmination was an architectural-scale demonstrator capable of collaborative transport and assembly tasks, validating the feasibility of robotic construction with irregular, renewable materials. Despite challenges in gripper implementation and hardware integration, the project demonstrated the potential for robotic systems to support low-carbon, scalable construction methods. The findings establish a foundation for follow-on work in adaptive construction robotics, emphasizing sustainability, system modularity, and material intelligence. This work contributes to the evolving landscape of autonomous building technologies, extending their reach into ecologically responsible practices using naturally variable materials.
15:30 - 15:45
Coffee and Networking
15:45 - 16:45
Cyber Valley Research Fund Projects: Specific research areas
Michael Sedlmair – Professor for Augmented Reality and Virtual Reality, Managing Director of the Visualization Research Center (VISUS) and the Institute for Visualization and Interactive Systems (VIS), University of Stuttgart: “InstruData – Data-driven Musical Instrument Education”

Biography: Michael Sedlmair is a professor at the University of Stuttgart, where he leads the Human-Computer Interaction research group. He earned his Ph.D. in Computer Science from the University of Munich in 2010. His career path has included positions at Jacobs University Bremen, the University of Vienna, the University of British Columbia in Vancouver, and BMW Group Research and Technology in Munich. His research focuses on immersive analytics, situated visualization, novel interaction technologies, visual and interactive machine learning, perceptual modeling for visualization, and the methodological and theoretical foundations underlying these areas.

Abstract: This talk explores the intersection of human creativity and artificial intelligence in the domain of music. I will present two projects that highlight different facets of this interaction. The first focuses on AI-powered tools to support musical instrument practice, using data and motion capture to provide real-time, visual feedback that enhances learning for students and professionals alike. The second is a design study on a visual analytics interface for AI-assisted music composition, which enables composers to explore and co-create music with generative models. Together, these projects demonstrate how AI can meaningfully augment both the technical and creative processes in music through human-centered design.
Martin Butz – Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen: “Neural Generative Weather Forecasting”

Biography: Martin Butz is a professor at the Faculty of Science, University of Tübingen, Germany. His main background lies in computer science and machine learning. His interdisciplinary research agenda integrates the fields of cognitive and developmental psychology, computational neuroscience, robotics, as well as parts of the geosciences. His current main research foci include learning conceptual, compositional, causal structures from data as well as developing machine learning algorithms for understanding atmospheric dynamics and hydrological processes. He has published three monographs, numerous edited books and special issues, and more than 200 peer-reviewed conference and journal articles.

Abstract: Forecasting spatiotemporal dynamics is a challenge with many facets. In the talk, I will present some spatiotemporal forecasting models developed in our group. The models span from local precipitation nowcasting, over hydrological forecasting, to mid-range global weather forecasting models (up to two weeks). The focus will lie on how to develop physics-informed modular deep learning architectures, how to design a suitable data structure, and how to tune the involved loss components for the targeted problem.
Falk Lieder – Rationality Enhancement Group, Max Planck Institute for Intelligent Systems; Assistant Professor of Psychology and Director of the Rational Altruism Lab, University of California, Los Angeles (USA): “A scalable machine learning approach to improving human decision making”

Biography: TBD

Abstract: TBD
16:45 - 17:00
Closing greetings from Cyber Valley GmbH and the Fraunhofer Institute for Manufacturing Engineering and Automation
17:30
Networking
Registration
As spaces are limited, please register in advance via Eventbrite.