Mengchen Dong on stage at TEDAI Vienna 2025 with a slide behind her

TEDAI Vienna 2025 – Part Two – From Trust to Thought


AI is no longer a distant concept — it’s the environment we live in. At TEDAI Vienna 2025, thought leaders from around the world explored how artificial intelligence is reshaping not only technology but human experience itself. From immersive 3D worlds to cognitive resilience and ethical alignment, the event underscored a single truth: AI’s future depends on how we choose to engage with it.

For business leaders, this shift is more than technological — it’s strategic. The next phase of AI adoption isn’t about scaling automation; it’s about building systems that think with us, not for us, and earning the trust that allows innovation to thrive.

The New Frontier: From Shared Content to Personal Worlds – with Christoph Lassner

Christoph Lassner, co-founder of World Labs, presented a compelling vision of content evolution — from collective consumption to personal creation.

“We’re moving from scheduled broadcasting to personalised experiences,” he explained.

Lassner mapped the journey succinctly:

  • Content 1.0 – produced by a few, consumed by many.
  • Content 2.0 – created by the masses across YouTube, TikTok, and social media.
  • Content 3.0 – generated by individuals for themselves.

By 2025, humanity will have uploaded over 100 zettabytes of digital material, from code to music. Yet the next leap forward is not just about scale; it’s about spatial intelligence.

At World Labs, Lassner’s team is developing technology that understands and generates 3D environments from a single image. The result is a platform capable of turning imagination into interactive reality — where describing a dream can render an explorable world.

This approach transforms digital experience from a predetermined narrative to active exploration. For industries from gaming to retail to enterprise training, these immersive environments unlock new modes of engagement — where clients and customers can not only view a product or process but live inside it.

For executives considering how to differentiate in saturated digital markets, the takeaway is clear: personalisation is no longer a feature; it’s the foundation of relevance.

Trust as the Engine of Adoption – with Mengchen Dong

While technology pushes forward, the question of trust continues to define its success. Behavioural scientist Mengchen Dong reminded the audience that human attitudes toward AI are shaped as much by perception as by reality.

Her research at the Centre for Humans and Machines reveals that we often overestimate other cultures’ fear of AI, assuming hesitancy where openness actually exists. This misperception matters — it influences international collaboration, policy, and even market entry strategies.

“What we’re looking for in AI,” Dong said, “is what we’re looking for in each other.”

Trust, she argued, is not universal but contextual. Economic development, national narratives, and professional background all affect how individuals engage with intelligent systems. An engineer in Singapore, a teacher in Germany, and a policy analyst in Kenya may each trust AI for very different reasons — or not at all.

For organisations deploying AI solutions globally, Dong’s insight offers a crucial reminder: trust is designed, not assumed.

Building responsible AI isn’t just about data governance or model transparency — it’s about aligning technology with local expectations and human values. This means:

  • Designing interfaces that respect cultural communication norms.
  • Offering explainability that matches user literacy and need.
  • Embedding ethical review within deployment, not as an afterthought.

In practice, trust becomes the multiplier that determines whether a system is merely implemented or truly adopted.

Thinking Again: AI and Cognitive Resilience – with Advait Sarkar

Advait Sarkar on stage at TEDAI Vienna 2025
What would you rather have: a tool that thinks for you, or a tool that makes you think?

In one of the conference’s most thought-provoking talks, Advait Sarkar — Microsoft researcher and Cambridge lecturer — asked a deceptively simple question:

“If AI can think for us, what happens to our ability to think for ourselves?”

Sarkar’s research explores how AI is reshaping knowledge work, creativity, and critical thought. He warned that in our eagerness to automate, we risk reducing human contribution to “button-pushing” — operating tools without truly engaging with the tasks they support.

“We have solved the problem of having to think,” he said, “but thinking was not the problem.”

This cognitive outsourcing has subtle consequences. As AI systems take on more analytical and creative load, our “cognitive muscles” risk atrophy. The result: fewer original ideas and a diminished ability to challenge assumptions — the very skills that drive innovation and leadership.

Sarkar advocates for a new class of “Tools for Thought” — AI that provokes, questions, and collaborates rather than obeys. Within Microsoft’s experimental labs, he’s helping develop an interactive document manager that highlights insights and prompts reflection — without the presence of a chatbot or automated completion. The human remains the author; the AI catalyses deeper engagement.

For businesses adopting generative AI, Sarkar’s message is essential: AI should challenge your teams, not replace their curiosity. True transformation happens when technology augments human potential rather than anaesthetising it.

Featured Speakers

Mengchen Dong is a behavioural scientist and research scientist at the Centre for Humans and Machines, part of the Max Planck Society.

Christoph Lassner is a scientist, engineer, and co-founder of World Labs.

Advait Sarkar is a Microsoft researcher and Cambridge lecturer.

https://tedai-vienna.ted.com

See more from TEDAI Vienna 2025 here