Chantal Pisarzowski, founder of forward stud.io, was interviewed by ZDF aspekte on the topic of humans and AI, with a focus on digital avatars and their societal impact. The conversation explored why avatar-based AI systems can feel unexpectedly personal, how that shapes trust and behaviour, and what it takes to build these technologies responsibly.
What we talked about
Digital avatars and human–AI interaction
Avatars are more than an interface. They can trigger closeness, projection, and social expectations – raising the bar for transparency and system design.
Data privacy and consent by design
Avatar systems often touch identity-related signals such as voice, appearance, interaction patterns, or personal narratives. Responsible development starts with minimising data collection, defining clear purpose and retention rules, and ensuring consent is explicit.
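These principles can be sketched in code. The example below is a minimal, hypothetical illustration of purpose-bound consent with a retention limit; the class, purpose names, and retention window are assumptions for illustration, not forward stud.io's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical purposes an avatar system might declare up front
# (purpose limitation: only declared purposes are ever allowed).
ALLOWED_PURPOSES = {"voice_synthesis", "interaction_analytics"}

@dataclass
class ConsentRecord:
    """Explicit consent, tied to one purpose, with a retention limit."""
    subject_id: str
    purpose: str
    granted_at: datetime
    retention_days: int

    def permits(self, purpose: str, now: datetime) -> bool:
        # Consent covers exactly the one declared, allowed purpose ...
        if purpose != self.purpose or purpose not in ALLOWED_PURPOSES:
            return False
        # ... and expires after the retention window (storage limitation).
        return now <= self.granted_at + timedelta(days=self.retention_days)

# Usage: a voice sample may only be processed while consent is valid
# and only for the purpose it was granted for.
consent = ConsentRecord(
    subject_id="user-42",
    purpose="voice_synthesis",
    granted_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
    retention_days=90,
)
now = datetime(2024, 2, 1, tzinfo=timezone.utc)
print(consent.permits("voice_synthesis", now))        # True: in purpose and window
print(consent.permits("interaction_analytics", now))  # False: different purpose
```

Data minimisation follows the same pattern: anything not covered by a valid `ConsentRecord` is simply never collected or is deleted once the window closes.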
Private individuals versus public figures
The interview also addressed how contexts differ when avatars represent private people versus public figures – and why governance and communication must be adapted accordingly.
A critical perspective on deployment
Not everything that is technically possible should be deployed without safeguards. Responsible AI requires boundaries, evaluation, and operational controls – especially in sensitive cultural and public contexts.
Why it matters for public-facing AI
As digital avatars move into museums, education, and public spaces, the key questions become practical: What data is processed, where does it run, who controls the outputs, and how do we prevent misuse? forward stud.io builds avatar and AI systems on a privacy-first architecture, with a focus on traceable, responsible deployment.
Let’s keep in touch.
Discover more about our projects, pilots and interactive design. Follow us on LinkedIn and Instagram.


