
Rijksuniversiteit Groningen
Researchers have developed increasingly powerful foundation models, including transformers and diffusion models, that demonstrate remarkable capabilities across a wide range of tasks. The next decade will focus on transforming these models into autonomous, long-horizon agentic AI systems, such as LLM-based multi-agent systems and embodied agents, that can operate robustly in complex, real-world environments. Unlike static foundation models, agentic AI systems are designed to perceive their surroundings, formulate and execute plans, take goal-directed actions, and continuously learn from interaction.
We are seeking a highly motivated PhD candidate to conduct research on agentic AI systems, with the goal of developing a deeper conceptual and theoretical understanding of their underlying mechanisms. This position will focus on advancing the planning and reasoning capabilities, safety, and continual learning mechanisms of agentic AI systems. Depending on the candidate’s interests, there will also be opportunities to engage in international collaborations and explore real-world applications of agentic AI systems across diverse domains.
Where are you going to work?
This 4-year PhD position is offered at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence (https://www.rug.nl/research/bernoulli/). The Bernoulli Institute is a vibrant community with an international outlook that fosters talent across all its research areas and disciplines, and is active in pure and applied science as well as (multi)disciplinary research and teaching. Within the Bernoulli Institute, the selected candidate will become a member of the Machine Learning Group, part of the Artificial Intelligence Department, and will work under the supervision of Dr. Qing Li.
What are you going to do?
The project aims to develop a deeper understanding of the underlying mechanisms of agentic AI systems and to leverage these insights to design novel approaches that enhance their trustworthiness and interpretability.
Examples of topics that could be explored in this PhD project:
Planning and Reasoning Ability
An agentic AI system must transform high-level instructions into a sequence of executable actions. This process inherently involves complex reasoning and planning challenges, and as tasks become more realistic and open-ended, these challenges grow significantly in scale and complexity. Existing approaches often rely on predefined topological structures, such as trees or graphs, to model interactions among agents. However, such static structures are insufficient to capture the dynamic and evolving nature of real-world multi-agent systems over time. Furthermore, upstream tasks must provide sufficient information to downstream components to enable effective coordination, while simultaneously preventing unintended information leakage, including leakage of privacy-sensitive data. The underlying interaction mechanisms of agentic systems remain largely unexplored.
Safety
Agentic AI systems are increasingly deployed to act on our behalf in real-world environments. This implies that they may gain access to sensitive personal data and interact directly with physical systems. In such settings, an error is no longer merely an incorrect response: it may lead to tangible consequences, such as damaging property or even causing harm to humans. Compared to foundation models that primarily generate text or predictions, agentic AI systems operate through embodied actions and collaboration with external tools, environments, and other agents. This expanded scope of interaction significantly enlarges the attack surface and introduces new safety vulnerabilities. A common mitigation strategy today is to introduce multiple filtering layers and perform repeated safety checks before producing outputs. However, these approaches often incur substantial computational overhead and primarily focus on response-level filtering rather than enforcing action-level constraints or ensuring safety throughout the decision-making process.
Continual Learning
Continual learning in agentic AI requires an explicit and well-designed memory management mechanism that enables agents to progressively accumulate knowledge and skills over time. As agentic systems are deployed to handle an increasing number and diversity of tasks, several fundamental challenges arise: How can agents effectively identify and retain critical knowledge while filtering out redundant or low-value information? How can they accurately detect outdated knowledge and update it without disrupting previously acquired competencies? Addressing these questions is essential for building adaptive, scalable, and long-lived agentic AI systems capable of sustained autonomous operation.
The goal is to advance the fundamental understanding of agentic AI systems by generating high-impact research contributions suitable for publication in leading machine learning and AI venues.
Employed PhD candidates are expected to spend 10% of their working hours on teaching and/or supervision.
Who are you?
You are an enthusiastic and curious researcher with:
- A Master’s degree (completed or near completion) in Computer Science, Artificial Intelligence, Data Science, Applied Mathematics, or another relevant field.
- A solid foundation in machine learning and an interest in working on agentic AI systems.
- Strong programming skills, preferably in Python, and familiarity with modern deep learning tools.
- Good analytical and problem-solving abilities, and a critical mindset.
- Very good written and spoken English, as required for scientific communication.
- Motivation to perform high-quality science and publish in leading machine learning venues (e.g., NeurIPS, ICML, ICLR, Nature MI, IEEE TPAMI).
- Evidence of well-executed past research projects (e.g., Master thesis, publications, research assistant position).
- Ability to work both independently and collaboratively in an international research environment.