The Dutch AI Conference: Why People, Not Code, Will Decide AI’s Future

This year marked the first edition of Dawn Technology’s Dutch AI Conference. Hosted at Pathé Amsterdam Noord, it had all the hallmarks of a Dawn Technology event – slick production, strong speaker lineup, a crowd that actually wants to be there – but with something new layered on top: urgency. Or more specifically, a shared sense that staying ahead of the AI curve is no longer optional.

But, given this was the first Dawn Technology event purely dedicated to AI, did it live up to the hype? Dave Barton gives his take on how things unfolded.

Hype or Help?

There’s a particular kind of energy you get at a first-time conference. Not the polished, predictable rhythm of something that’s been running for years, but something looser: a mix of curiosity, anticipation, and just a hint of uncertainty. And with Artificial Intelligence as the headline act, delegates arrived not just to be inspired, but to understand what comes next.

But considering how ‘new’ and fast-paced AI tech is, my biggest concern was whether there can really be any bona fide experts in something evolving this quickly. And if so, given the vast scope of what AI actually is, which direction would the conference and its speakers be taking us?

With Dawn Technology’s other events – the Dutch PHP Conference, Webdevcon, and Appdevcon – now established fixtures in the international developer calendar after two decades of running technology conferences, why add another fixture to the lineup?

I was told by several people that the AI conference came about because it felt like 2026 was an inflection point in AI’s evolution. As Ivo Jansch, Dawn Technology’s CTO, explained to me:

“Last year we had an AI track at one of our other conferences. There was so much interest that we thought we should turn it into a conference of its own. The market is there. Many people want to use AI, many people want to talk about AI and share their experiences. It’s really the time to organise an event like this, to bring the community together, and to learn from each other.”

Carefully Curated Content

It was clear from the conference programming that the vast and eclectic nature of AI would be well represented: from Sasha Denisov’s Building a Voice AI Agent That Listens, Understands, and (Most Importantly) Sells to Addressing Gender Bias in Generative AI with Julia Klinkert from InfoSupport.

Consultant Nikki van Emmerloot’s keynote The Innovation Engine Runs on People set the tone early. Not by talking about models or tools or benchmarks, but by reframing the entire problem. Her central idea was deceptively simple: AI isn’t failing us, we just haven’t built the conditions to use it properly. Yet.

Across the industry right now, she explained, there’s a quiet tension: tools are improving exponentially, but adoption is uneven and, as a result, confidence is fragile. The consequence is a rising ‘Digital Friction Index’ – the point where fear, innovation, and change converge.

Three groups exist in every organisation, she argued: Resistors, Adapters, and Strugglers. While the first two take a bold position, the Strugglers stay quiet because they don’t want to look like they don’t understand. That silence breeds ‘digital shame’, which threatens to become a brutal constraint on innovation.

If people don’t ask questions, they don’t experiment. If they don’t experiment, they don’t learn. And if they don’t learn, well, the tech never lands. Nikki’s framework for fixing this was clearly more cultural than technical.

Real World Integration

Sam Morrow of GitHub delivered another standout session, Lessons from Scaling GitHub’s Remote MCP Server, though one much more technical in focus. The concept of an MCP (Model Context Protocol) server – essentially a bridge between AI models and real-world tools – might sound abstract. But the implication isn’t, he explained. AI isn’t just generating text anymore; it’s starting to operate systems.

Sam demonstrated how GitHub’s implementation, now with millions of downloads, points to where this is heading: AI agents that can read code, manage workflows, and interact with complex environments at scale.
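
For readers who haven’t encountered the protocol, a rough sense of what that “bridge” looks like may help. The sketch below uses the official TypeScript SDK to expose a single tool an AI client can call; it is a minimal illustration only, not GitHub’s actual implementation – the lookup_issue tool name and its canned response are invented for demonstration.

```typescript
// Minimal MCP server sketch using the official @modelcontextprotocol/sdk.
// Illustrative only: "lookup_issue" and its response are invented here,
// not taken from GitHub's remote MCP server.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-bridge", version: "0.1.0" });

// A "tool" is the bridge: the model calls it by name with structured
// arguments, and the server runs real code and returns the result.
server.tool(
  "lookup_issue",
  { repo: z.string(), issueNumber: z.number() },
  async ({ repo, issueNumber }) => ({
    content: [
      {
        type: "text",
        text: `Issue #${issueNumber} in ${repo}: (a real server would fetch this)`,
      },
    ],
  })
);

// Connect over stdio so an AI client (e.g. a coding agent) can drive it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Scale that pattern up to many tools, remote transport, and authentication, and you arrive at the kind of system Sam was describing.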

One of the most refreshing talks of the day came from Iulia Feroli. Her session, How I Built My Own Intelligent Robot Arm from Scratch – a practical account of wiring up servo motors, Arduinos, and Raspberry Pis, and documenting her trials and errors on her YouTube channel – at first glance seemed merely AI-adjacent. However, given the increasingly symbiotic relationship between robotics and AI, using the two in tandem – messy though it might appear – is a crucial part of the learning curve, she explained. And while that’s the opposite of how most AI is being marketed, it’s arguably much closer to how it actually works.

If Nikki’s keynote was about people, Jeff Watkins’ closing talk was about perception. Titled Descartes’ Daughter: How We Taught Machines to Feel (and Why We Believed Them), his core argument was that we humanise, or ‘anthropomorphise’, AI. We speak and interact with it as if it were human, even though we know ‘it’s not real’.

However, the key distinction he made was that although modern LLMs are emotionally fluent, conversationally natural, and performatively intelligent, they merely simulate understanding. As humans, we often confuse performance with comprehension – which can have some morally questionable consequences, he reasoned. Interaction can lead to influence, and influence can shade into manipulation.

Affirmative Action

Throughout the day, speakers kept returning to the need for robust guardrails around this accelerating area of technology. And each time, the conclusion was consistent: the responsibility doesn’t sit with the technology – it sits with us.

This is a responsibility that many developers I spoke to take very seriously indeed. If they are tasked with building systems that none of us fully understand, and they have concerns about the impact, they need to feel empowered to raise these ethical questions – which means we all need to feel comfortable asking questions, without a sense of ‘digital shame’.

It’s this willingness to engage in broader discussions around responsibility as well as capability that, for me, made this conference distinctive. Though I had expected a tech-heavy cacophony of coding, the fact that everything was couched in the contemporary issues facing us here and now made the Dutch AI Conference all the more credible and insightful.

One of the most talked-about attractions was Aurora: a Unitree G1 humanoid robot roaming the venue, interacting with delegates. For some, it triggered the familiar unease of the uncanny valley. For others, it was irresistible: selfies, handshakes, even hugs. Which, in hindsight, proved Jeff Watkins’ point better than any slide could.

In short, this conference wasn’t just showcasing what AI can do. It was about exposing the gap between what AI is and how we experience it. And the right questions were being asked: How do we actually adopt this stuff? What are we building? Where do we draw the ethical lines? And how do we stay human in the process?

If this is where the conversation starts, then the next few years of this conference, and this industry, are going to be very interesting indeed.

Written by Dave Barton, founder of comms agency wtf
