forward stud.io was invited to present at the Center for Humans and Machines at the Max Planck Institute for Human Development. The session connected applied work on avatar-based AI systems with research perspectives on human–AI interaction, focusing on where these systems become socially sensitive and how responsible deployment can be designed from the start.
What the seminar covered
Digital twins as a new interface to identity
We discussed how AI-generated representations shift expectations around authenticity, authorship, and “who” users believe they are interacting with.
Avatar interaction – closeness, projection, and social behaviour
Avatars don’t behave like neutral UIs. They quickly elicit social responses and emotional interpretation – which changes how users trust, disclose, and respond.
Privacy, consent, and data governance
We outlined what responsible handling looks like when systems touch identity-related signals (voice, likeness, personal narratives) – including data minimisation, purpose limitation, and clear consent flows.
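To make these principles concrete, here is a minimal sketch of purpose-bound consent in TypeScript. Everything in it is a hypothetical illustration – the IdentitySignal and ConsentRecord types and the mayProcess guard are assumptions for this post, not the design of any system presented at the seminar.

```typescript
// Hypothetical sketch: purpose-bound consent for identity-related signals.
// All names are illustrative, not taken from a real deployment.

type IdentitySignal = "voice" | "likeness" | "personalNarrative";
type Purpose = "avatarRendering" | "analytics" | "training";

interface ConsentRecord {
  userId: string;
  signal: IdentitySignal;
  purpose: Purpose;   // purpose limitation: consent is granted per purpose
  grantedAt: Date;
  expiresAt: Date;    // consent is time-bounded, not open-ended
}

// Release a signal only if an unexpired consent exists for exactly this
// purpose; everything else stays withheld (data minimisation by default).
function mayProcess(
  consents: ConsentRecord[],
  userId: string,
  signal: IdentitySignal,
  purpose: Purpose,
  now: Date = new Date()
): boolean {
  return consents.some(
    (c) =>
      c.userId === userId &&
      c.signal === signal &&
      c.purpose === purpose &&
      c.expiresAt > now
  );
}
```

The point of the guard is that a consent granted for avatar rendering never silently authorises analytics or training – each purpose needs its own explicit grant.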
Risk-aware design for sensitive contexts
We explained why deployments in cultural, educational, or intimate settings require stronger framing, boundaries, and evaluation – especially when the content is personally meaningful.
From prototype to accountable real-world systems
The seminar also looked at the practical layer: how to make systems traceable, auditable, and explainable enough to be operated in high-trust environments.
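As one way to read “traceable and auditable” in practice, here is a minimal TypeScript sketch of a hash-chained audit trail, where each entry commits to its predecessor so later tampering becomes detectable. The AuditEntry shape and the appendEntry helper are assumptions made up for this illustration, not our production architecture.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of a tamper-evident audit trail: each entry
// stores the hash of the previous one, so editing history breaks the chain.
interface AuditEntry {
  timestamp: string;
  actor: string;    // which component or person acted
  action: string;   // e.g. "avatar.generated", "consent.revoked"
  prevHash: string; // hash of the previous entry ("genesis" for the first)
  hash: string;     // hash over this entry's content plus prevHash
}

function appendEntry(
  log: AuditEntry[],
  actor: string,
  action: string
): AuditEntry[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "genesis";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(`${timestamp}|${actor}|${action}|${prevHash}`)
    .digest("hex");
  return [...log, { timestamp, actor, action, prevHash, hash }];
}
```

A chain like this doesn’t make a system explainable by itself, but it gives operators in high-trust environments a record they can verify rather than take on faith.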
Let’s keep in touch.
Discover more about our projects, pilots, and interactive design. Follow us on LinkedIn and Instagram.