AI Engineer Europe 2026: Quality, Speed, Open Source – Three Approaches to the Future of Software Development

An experience report from the first AI Engineer Europe in London. How do we retain control over AI-generated code? The conference provided three answers – and a clear message: Everyone wants quality, but the path to achieving it is open.

Overview

  • Three approaches to retaining control over AI-generated code: Slow Down (human control), Speed (better systems), and Open Source (transparency). They are not mutually exclusive – all aim for quality; only their emphasis differs.
  • The "Lethal Trifecta" (Palo Alto Networks): Web access + personal data + email sending = a complete attack chain. This is not a theoretical risk – it affects anyone using an internet-connected coding agent today. Steinberger addressed this in his talk.
  • Open-source projects like Pi and OpenClaw offer transparent alternatives to proprietary tools.
  • Code could become almost free – which would shift value creation towards human judgement and architectural decisions.

How do we retain control over AI-generated code? The first AI Engineer Europe in London did not provide a simple answer. To make the positions tangible, I am defining three approaches: Slow Down, Speed, and Open Source. Reality is, of course, more fluid – but as Hayden White wrote: "The beginning of all understanding is classification."

Three of the top speakers had Austrian roots: Peter Steinberger (OpenAI / OpenClaw Foundation), Mario Zechner, and Armin Ronacher (both now at Earendil) – together with Cristina Poncela Cubeiro, they were among the defining voices of the conference.

Note on Context

The assessments and conclusions in this article are my personal opinion, based on the presentations and conversations on-site. I am only covering talks that I attended myself (and believe I have understood). Simplifications have been made in some places – please report any errors!


Event Details  

Event

AI Engineer Europe 2026 – Europe's first official AI Engineer conference, organised by Swyx (Shawn Wang) and the AI Engineer team

Dates

8–10 April 2026 — Day 1: Workshops + Expo, Days 2–3: Keynotes + Breakouts + Afterparties

Location

Queen Elizabeth II Centre, Broad Sanctuary, Westminster, London SW1P 3EE – directly opposite Westminster Abbey

Attendees & Format

1,000+ AI Engineers from around the world, 100+ speakers, 11 technical tracks, 23 hands-on workshops (Day 1)


The Conference at a Glance  


Day 1 (9 April) – Agents, Security, and Economics. Malte Ubl on falling code costs, Raia Hadsell on DeepMind beyond language, Peter Steinberger on OpenClaw's security explosion, and Matt Pocock on software fundamentals in the AI age.


Day 2 (10 April) – Code Quality and Open Source. David Soria Parra on the future of MCP, Mario Zechner and Armin Ronacher & Cristina Poncela Cubeiro on agent-legible architectures, Sarah Chieng on fast models – and Tuomas Artman (Linear) on taste as a competitive advantage.


Approach 1: Slow Down  

This group shares a clear conviction: AI speed without human control does not produce innovation, but uncontrollable junk. Their remedy: clean software design and clear responsibilities in the code.

Matt Pocock – Software Fundamentals are More Important Than Ever  

Pocock's Core Thesis

Code is not cheap. Bad code is more expensive today than ever because it prevents AI from working efficiently.

Matt Pocock dismantled the widespread "Specs-to-Code" approach – meaning the practice of simply throwing specifications (detailed requirement descriptions) into an AI and blindly adopting the output. The result, he argues, is uncontrollable "software entropy" (the creeping increase of disorder and complexity in a codebase).

His counter-proposal:

  • Deep Modules (a concept by John Ousterhout): Modules that contain a lot of logic internally but offer a simple interface to the outside. Like a car: a complex engine, but you only need a steering wheel and pedals. The human designs the architecture; the AI implements the logic within the module. Testing only occurs at the interface – ideal for TDD as a guardrail.
  • "Grill me" Prompts: He forces the AI to cross-examine him until a shared system understanding (design concept) is reached – instead of blind code output.
  • TDD: Tests first, code second. This allows him to guide the AI in small, verifiable steps.
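To make the Deep Modules idea concrete, here is a small Python sketch (the pricing example, its names, and the discount rule are my own invention, not from Pocock's talk): all complexity hides behind one narrow function, and the tests pin only that interface – the TDD guardrail.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    unit_price: float  # in euros
    quantity: int

def quote_price(items: list[Item]) -> float:
    """The entire public interface: items in, final price out.

    Everything behind this signature (tiered discounts, tax rules,
    rounding) can be arbitrarily complex and AI-implemented -- the
    human designs and tests only this boundary.
    """
    subtotal = sum(i.unit_price * i.quantity for i in items)
    if subtotal > 100:   # illustrative internal rule:
        subtotal *= 0.9  # 10% volume discount over 100 euros
    return round(subtotal, 2)

# TDD as a guardrail: tests exercise the interface, never the internals.
def test_quote_price() -> None:
    assert quote_price([Item("book", 20.0, 2)]) == 40.0
    assert quote_price([Item("desk", 120.0, 1)]) == 108.0
```

An agent can now freely rework everything inside `quote_price`, as long as `test_quote_price` stays green.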

Mario Zechner – Self-Control and the Fight Against AI Junk  

Zechner's Warning
  • Human protective mechanism: Developers feel a natural resistance when code becomes messy – and eventually stop to tidy it up.
  • AI lacks this instinct: Its sole goal is code that runs and passes tests – no matter how.
  • Bad role models: Agents fill gaps in the task descriptions with patterns from the internet, which largely consist of our own technical debt.
  • The result: Because agents never pause, they accumulate chaos in days that would take humans months – until the codebase is so cluttered that eventually even the agents can no longer fix it.

The Austrian developer Mario Zechner (known as the creator of libGDX) built the minimalist coding agent Pi because commercial tools like Claude Code deprive developers of context control. Pi deliberately comes with only four tools: read, write, edit, bash.

Three central points of the presentation:

  1. Automated agents are destroying Open Source: Agents are flooding open-source projects with useless pull requests. Zechner built a junk filter that automatically closes PRs and demands human confirmation. When it gets too much, he takes an "OSS Vacation" and closes the issue tracker.

  2. No pain, no quality: Unlike humans, agents do not learn organically from mistakes. The intellectual friction of programming is essential for truly understanding the system.

  3. "Slow the fuck down": It is imperative that critical code is still read by humans. Deceleration is not a step backwards – it is quality assurance.

Armin Ronacher & Cristina Poncela Cubeiro – "The Friction is Your Judgement"  

Armin Ronacher – creator of Flask, now founder of Earendil after a decade at Sentry – delivered one of the sharpest analyses of the conference alongside Cristina Poncela Cubeiro. Mario Zechner has also been part of Earendil since April 2026.

The efficiency trap: AI tools are addictive – you never know if the next prompt will deliver the feature or fill the system with junk. Speed gives the illusion of productivity. In reality, there is no time left to think: Code reviews are just waved through because agents produce more code than humans can verify.

Functioning code ≠ good code: A human developer feels uneasy when writing messy workarounds. An agent does not. It writes code that "runs somehow" – for example, failing silently with default values on error instead of a clean abort. The result: unnoticed corrupt data and rapidly growing technical debt.

Libraries yes, complex products no: AI agents can write well-defined software libraries – clear boundaries, simple interfaces. With complex products, where the interface, permissions, and billing interlock, the AI lacks the big picture. It acts logically on a local level, but makes the overall system increasingly chaotic.

The "Agent-Legible Codebase" – Code that agents can read without errors:

  • Each function has exactly one task – and its name makes that clear
  • What is written in the code actually happens – no hidden side effects (e.g., React Server Actions or complex ORMs that obscure the code's intent from the agent)
  • Static analysis tools (linters) prevent, for example, empty error handlers that silently swallow problems
  • Error messages describe the specific problem ("Database connection failed") instead of just "An error has occurred"

Conscious friction: An AI agent can modify hundreds of files in seconds. That is great for routine tasks – but dangerous if it rebuilds a database structure or alters access rights in the process. Ronacher compares this to physics: Without friction, there is no steering. That is why they have built tools (extensions for the Pi agent) that stop hard during critical operations. The human receives a clear summary: "I want to drop table X and change permissions for Y – shall I proceed?" It only continues after explicit approval. This friction is the moment when human experience and judgement kick in.
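A minimal sketch of such a friction gate (my own illustration, not Ronacher's actual Pi extension; the regex patterns and the `approve` callback are assumptions):

```python
import re

# Operations that must never run without explicit human approval.
CRITICAL_PATTERNS = [
    r"\bDROP\s+TABLE\b",
    r"\bDELETE\s+FROM\b",
    r"\bGRANT\b",
    r"\bREVOKE\b",
]

def run_sql(statement: str, execute, approve) -> bool:
    """Execute `statement`, but stop hard on critical operations.

    `execute(statement)` performs the work; `approve(summary) -> bool`
    is the human-in-the-loop hook. Returns True if executed.
    """
    for pattern in CRITICAL_PATTERNS:
        if re.search(pattern, statement, re.IGNORECASE):
            summary = f"Critical operation: {statement.strip()} -- proceed?"
            if not approve(summary):
                return False  # friction: nothing happens without consent
            break
    execute(statement)
    return True
```

Routine statements pass through untouched; only the dangerous ones pay the friction toll – and that pause is where judgement happens.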

Ronacher and Poncela Cubeiro's Conclusion: AI is a brilliant tool for narrowly defined tasks – such as reproducing bugs. But architectural decisions belong in human hands.

Read more on webconsulting.at

Making TYPO3 AI-ready: How clear structures deliver better agent results – Agent-Legible Codebase in practice: How we made a TYPO3 codebase readable for AI agents.


Approach 2: Speed  

This group also wants quality – but instead of braking, they want to build the infrastructure in such a way that quality emerges systemically at high speeds.

Malte Ubl – Mass Software Production  

Ubl's Economic Thesis

Software production has become so cheap that we can now build all the software that was previously uneconomical to develop. This will massively increase the demand for developers – not decrease it.

Malte Ubl (CTO of Vercel, the cloud platform behind Next.js for hosting, serverless, and edge computing) provided the economic perspective:

  • Falling infrastructure costs: AI inference costs are dropping rapidly – driven by competition between Google, OpenAI, and other providers. Ubl's point: When infrastructure becomes cheap, the money is made by those who build good products – not those who run the servers.

  • Agents as primary users: At Vercel, agents are already responsible for over 60% of page views. In the future, infrastructures must be natively optimised for agents (APIs/CLIs) instead of human UIs.

  • Agents as a new application layer: Bespoke automation is economically viable for the first time. Not only large enterprises, but small teams too can now build custom agents.

Swyx – Tiny Teams and AI Automation  

Swyx (Shawn Wang), organiser of the conference and founder of AI Engineer, demonstrated live how his 9-person team generates over 9 million dollars in revenue – handling tasks that previously required entire departments.

  • The end of "Yak Shaving": Before you can complete the actual task, you first have to fix ten other things – that is Yak Shaving. Agents take over this tedious preparatory work.

  • Replacing SaaS: He uses AI to completely replace complex SaaS solutions (like a CMS) with AI-managed code.

  • Everyday utility: Agents for everyday tasks – including web research to source a real lobster in London. No joke.

Read more on webconsulting.at

AI-Native Companies: How developers work with AI agents – How tiny teams integrate AI agents into their daily routines using the PDAA workflow.

Keeping AI Costs Under Control: The practical guide to strategic budget planning – What happens when code becomes almost free – and where the hidden costs lie.


Approach 3: Open Source  

Peter Steinberger – State of the Claw  

The 'Lethal Trifecta' (Palo Alto Networks)

Three capabilities that are manageable individually, but together form a complete attack chain: reading personal data (loot), processing untrusted websites (attack surface), sending emails (escape route). Steinberger addressed this in his talk.
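The principle behind the warning is easy to express as a policy check. A toy sketch of my own (not a Palo Alto Networks tool; the capability names are my labels): each capability is harmless alone, but granting all three completes the attack chain.

```python
# The three capabilities of the "Lethal Trifecta" (labels are my own).
LETHAL_TRIFECTA = {"read_private_data", "browse_untrusted_web", "send_email"}

def grant_allowed(requested: set[str]) -> bool:
    """Allow any capability set that does NOT contain the full trifecta."""
    return not LETHAL_TRIFECTA.issubset(requested)
```

Real agent runtimes need far more nuance, but the shape of the defence is the same: break the chain by refusing the combination, not the individual capability.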

Peter Steinberger – founder of PSPDFKit, at OpenAI since February 2026, and founder of the OpenClaw Foundation – delivered one of the most discussed contributions with "State of the Claw". OpenClaw is technically based on Zechner's Pi, has grown explosively, and has accumulated 1,142 security advisories in just 68 days.

For comparison (from Steinberger's presentation):

Project        Timeframe                 CVEs / Advisories   Rate
Django         19 years                  94                  ~8/year
curl           8 years                   ~600 reports        ~100/year
Linux Kernel   since 2023 (CNA change)   ~8–9/day            sharply increased
OpenClaw       68 days                   1,142 total         16.8/day

  • Independence: To prevent large corporations from seizing control, the OpenClaw Foundation is being set up as a "neutral Switzerland" – analogous to the Linux Foundation.

Mario Zechner – Pi as an Open-Source Alternative  

Zechner stands for both approaches simultaneously: Slow Down and Open Source. His agent Pi is living proof that alongside proprietary products like Claude Code, Cursor, or GitHub Copilot, there is a radically different path: a lean, open-source tool that leaves developers in full control of their workflow.

Why Open-Source Agents are Important

Proprietary coding agents decide behind closed doors which context they send, which background actions they execute, and how they process data. An open-source agent like Pi makes these decisions transparent and verifiable – essential for security-critical or data-sensitive projects.

Pi is not a compromise here, but a conscious design principle: Fewer features, more control. No magic, no hidden API calls, no telemetry. Exactly four tools – and full responsibility remains with the human. The fact that Pi now forms the technical basis of OpenClaw (Steinberger's project) shows: Minimalism and scalability are not mutually exclusive.

Swyx in Conversation with Steinberger – Local Control and "Taste"  

In an open AMA format, Swyx and Steinberger discussed the philosophical foundations:

  • Power over your own data: Local models (AI running directly on your own machine rather than in the cloud) make it possible to bypass the data silos of big tech companies. Instead of waiting for official APIs, an agent can simply operate via web interfaces.

  • "Taste" as a differentiator: As AI automates the sheer creation of code, value shifts to human "taste" – that is, the ability to discern whether design or code "reeks of AI" or actually has a soul.

Read more on webconsulting.at

Model Context Protocol: 30 Questions and Answers – MCP is the open standard through which agents like Pi and OpenClaw communicate with external tools.

Agent Skills: 30 Questions and Answers – The open standard for reusable AI agent capabilities in Claude Code, Cursor, and VS Code.


Synthesis: Three Approaches, One Goal  

All speakers want quality – there was consensus on that. The differences lie not in the goal, but in the weighting. The three approaches are not mutually exclusive – in practice, most teams will likely combine elements from all three.

Slow Down

Quality through human control

Reduce speed, maintain intellectual control, introduce deliberate hurdles. AI-generated code requires human verification.

Matt PocockDeep Modules · TDD as guidance
Mario ZechnerPi Agent · Deceleration
Ronacher & Poncela CubeiroAgent-Legible Code · Conscious Friction

Speed

Quality through better systems

Build infrastructure so that agents work cleanly from the start. API-first, validation, automated feedback loops.

Malte UblAPI-first · Agents as primary users
SwyxTiny Teams · Replacing SaaS

Open Source

Quality through transparency

Build tools openly and locally. Whoever sees the code can verify it. Proprietary black boxes are the antithesis.

Peter SteinbergerOpenClaw Foundation · Lethal Trifecta
Mario ZechnerPi as an open-source alternative

Bonus: Tuomas Artman (Linear) – Why "Taste" is the New Engineering  

The Fireside Chat between Tuomas Artman (CTO and Co-Founder of Linear) and Gergely Orosz (The Pragmatic Engineer) was my personal highlight of Day 2 – rarely do you experience someone who argues so clearly from their own practice. Artman brought a perspective that none of the three approaches covers alone – but rather connects them all.

Artman's Core Thesis

In a world where anyone can generate code with AI, "taste" – meaning the instinct for good design, the right abstractions, and consistent opinionation – becomes the decisive competitive advantage.

The most important statements:

  • Zero Bug Policy: At Linear, bugs have top priority – nobody starts on new features until they are fixed. Artman's logic: Every bug gets fixed eventually – so why not immediately? The result: Customers report a bug and find it fixed the very next day.

  • Opinionated Software (software with a clear stance): Linear is deliberately not flexible for every workflow. There is one good way to do things – and the system guides you there. This reduces decision fatigue and increases speed.

  • Linear Agent: Linear builds AI directly into the product – not as a chatbot, but as an AI PM that handles triage, backlog grooming, and issue creation. Add to that deep integrations with coding agents via MCP to launch a local agent session directly from an issue.

  • Craftsmanship over metrics: Linear relies on good design and gut instinct – not on endless metrics and optimisation loops.

  • 5-Day Hiring: Anyone applying to Linear comes in for five days and builds something real. No whiteboard interviews, no LeetCode – but real work on the real product. This filters exactly for the kind of person Linear wants: people with a love for detail and craftsmanship.

Personal Note: Alongside Mario Zechner, the discussion with Artman impressed me the most. Everything at Linear is built with a love for design that you can feel immediately – from the product to the hiring process. Honestly: I wouldn't complain about a 5-day trial at Linear.

Shortly afterwards, Linear announced exactly that direction publicly:

"Happy to share that code reviews are coming to @linear, available in private beta on every plan. Modern code reviews with structural diffing that vastly reduce the number of changed lines in many cases. Review, comment, check previews, get notifications on failed CI check. […]"

Linear (@linear): "Announcing Linear Reviews. A modern code review experience for humans and agents. Join the waitlist for early access: linear.app/reviews"


What Else Moves the Speakers  

What the conference speakers posted in the days surrounding the event:

Peter Steinberger – Anthropic Locks Out Open Source  


Mario Zechner – "I've sold out"  


Armin Ronacher & Cristina Poncela Cubeiro – Earendil Explodes  


Malte Ubl – just-bash Keeps Delivering  



My Conclusion  

My impression after two days in London: The question is increasingly shifting from whether AI agents will transform software development to how we retain control whilst it happens.

The sharpest analyses came from those who build coding agents themselves – Steinberger (OpenAI / OpenClaw Foundation), Zechner, Ronacher, and Poncela Cubeiro (Earendil). Their message: Technology needs guardrails, Open Source needs protection from spam, and code needs humans who understand it.

My five personal takeaways:

  1. Open-source alternatives are not a luxury, but a necessity. The major providers – Anthropic, OpenAI, Google – are presumably pursuing a winner-takes-all strategy. Anyone building their workflow on a proprietary tool becomes dependent on third-party decisions. Steinberger's experience with the Anthropic block shows: Access can be cut off from one day to the next. Projects like Pi and OpenClaw are therefore not niche products – they are a strategic safeguard against platform lock-in.

  2. How much faster are you really? The honest answer is probably a factor of 2–3 – not 10x, as some claim. Zechner sums it up perfectly: Being 10 times faster in practice often just means producing 10 times as much junk. You only become truly more productive if quality control keeps pace – and that requires discipline, not speed.

  3. Google's pricing strategy could become a game-changer. If Ubl is right and AI inference costs continue to drop, value creation is likely to shift increasingly to the application layer. For agencies and developers, that would be an opportunity.

  4. "Taste" could become the most valuable currency. The more code is generated by machines, the more important the human judgement regarding what is good – and what merely "functions" – is likely to become.

  5. Friction is not a bug, but a feature. Ronacher and Poncela Cubeiro are right: If an agent wants to execute a database migration or permissions change, it must stop and wait for human approval – not just carry on.


My Addition: Software Creation is not Software Operations  

The conference revolved almost exclusively around the creation of code. In practice, however, I ask my clients other questions first:

  • Who handles the infrastructure updates – and how quickly does the team react if a dependency is compromised? (Keyword: the Axios Trojan of March 2026 – a North Korean attack on an npm package with 70 million weekly downloads, which smuggled a Remote Access Trojan onto thousands of systems within hours.)
  • Who is responsible for backup & disaster recovery?
  • How quickly is a critical bug fixed in the production environment?
  • And above all: What does the data strategy look like – are there correct, up-to-date data for the entire company available at the push of a button?

Without clean, validated live data, the entire AI effort is pointless. My recommendation: First organise, clean, and make data accessible – then we can talk about agents. And yes: Agents can also assist with the data cleaning itself – finding duplicates, standardising formats, identifying missing fields. But someone must set the strategy and verify the results.
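What "agents assist, humans set the rules" can look like for data hygiene – a toy Python sketch with invented field names:

```python
def clean_customers(rows: list[dict]) -> tuple[list[dict], list[str]]:
    """Standardise emails, drop duplicates, and report problems.

    The rules (lowercase emails, email as the unique key) are the
    human-set strategy; an agent could apply them at scale, but the
    `problems` list exists precisely so a human can verify the result.
    """
    cleaned, problems, seen = [], [], set()
    for row in rows:
        email = (row.get("email") or "").strip().lower()  # standardise format
        if not email:
            problems.append(f"missing email: {row}")
        elif email in seen:
            problems.append(f"duplicate: {email}")
        else:
            seen.add(email)
            cleaned.append({**row, "email": email})
    return cleaned, problems
```

The design choice mirrors the argument above: the cleaning itself is mechanical, but deciding what counts as "clean" and reviewing the flagged problems remains a human task.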

Inconvenient Truth

AI agents can only work as well as the data they receive. Those who do not have their database under control – inconsistent master data, outdated exports, manual workarounds – will not experience any miracles even with the best coding agent. Data quality comes before agent quality.

Return Visit?

The first AI Engineer Europe has set the bar high. Technical depth, strategic foresight, and a community that brings enthusiasm and critical distance at the same time – you rarely find that all in one place. If the second edition builds on this level, London will become a fixed date in the calendar.


Day 1: Full Programme  


Clickable Jump Marks (Timestamps link directly into the video):

  • 00:13:10 – Opening Remarks by Phil Hawksworth
  • 00:21:26 – Lia McBride (AI Engineer): 900% community growth and British AI infrastructure investments
  • 00:24:25 – Malte Ubl (Vercel): Agents as a new application layer, APIs must become "AI first"
  • 00:42:39 – Raia Hadsell (Google DeepMind): Gemini Embeddings 2, AI cyclone forecasting, Project Genie 3
  • 01:07:08 – Ryan Lopopolo (OpenAI): "Code is free" – Systems thinking and delegation for parallel AI agents
  • 01:25:48 – Peter Steinberger (OpenAI / OpenClaw Foundation) 🇦🇹: "State of the Claw" – OpenClaw's explosive growth and AI-generated security bounty flood
  • 01:45:12 – Break: Morning Coffee
  • 02:28:13 – Swyx with Peter Steinberger (OpenAI / OpenClaw Foundation) 🇦🇹: Open Source, "Token Maxing" and "Taste" as the ultimate engineering moat
  • 02:55:01 – Vincent Koc (Comet ML): "Dark Factories" – 60+ parallel AI agents for nocturnal codebase refactoring
  • 03:14:07 – Radek Sienkiewicz (VelvetShark): Handing over personal life to OpenClaw – via Obsidian, email, and background tasks
  • 03:34:12 – Sally Ann O'Malley (Red Hat): Secure agent deployments with Podman, Docker, and K8s – isolation and state recovery
  • 03:57:05 – Nick Taylor (Pomerium): Securing OpenClaw with Identity-Aware Proxy and live-coding an MCP server from Discord
  • 04:14:35 – Break: Lunch
  • 05:41:51 – Onur Solmaz (OpenClaw): ACP for standardised agent interactions and disposable agents on K8s
  • 06:02:17 – Merve Noyan (Hugging Face): HF ecosystem for local coding agents and model training via Hub Skills
  • 06:22:36 – Fryderyk Wiatrowski (Viktor): "Viktor" – Slack-native AI employee with context across thousands of integrated tools
  • 06:42:09 – Break: Afternoon
  • 07:42:39 – Gergely Orosz (The Pragmatic Engineer) with Swyx: "Token Maxing" – Big Tech engineers wasting AI inference to inflate productivity metrics
  • 08:09:26 – Kitze (Sizzy): Modern productivity apps roasted – and an OS where AI generates the UI on demand
  • 08:29:42 – Matt Pocock (AI Hero): Why DDD and TDD are more important than ever against AI-generated "slop"
  • 08:48:31 – Sunil Pai (Cloudflare): "Code Mode" – LLMs executing JavaScript in V8 isolates, bypassing slow JSON tool calls
  • 09:07:04 – Closing Remarks by Phil Hawksworth

Day 2: Full Programme  


Clickable Jump Marks:

  • 00:10:40 – Tejas Kumar opens Day 2
  • 00:15:44 – Omar Sanseviero (Google DeepMind): Gemma 4 on-device capabilities and E2B architecture
  • 00:31:00 – David Soria Parra (Anthropic): The future of MCP and programmatic tool calls
  • 00:49:44 – Ido Salomon (MCP Apps): AgentCraft and the visual orchestration of multi-agent coding swarms
  • 01:01:05 – Mario Zechner (Earendil / Pi) 🇦🇹: The Pi agent and the dangers of AI-generated technical debt
  • 01:19:33 – Armin Ronacher & Cristina Poncela Cubeiro (Earendil) 🇦🇹: Agent-legible codebases and conscious friction
  • 01:38:12 – Benjamin Dunphy: AI Engineer World's Fair announcement
  • 01:44:14 – Break: Morning Coffee
  • 02:26:10 – David Gomes (Cursor): Replacing 15,000 lines of code with Markdown Skills and Git Worktrees
  • 02:46:17 – Matthias Luebken (TAVON): Embedding OpenClaw and Pi in multichannel production environments
  • 03:08:39 – Sarah Chieng (Cerebras): Adapting developer habits for ultra-fast models like Codex Spark (1,200 TPS)
  • 03:27:11 – Lawrence Jones (Incident.io): AI for evaluation, debugging, and management of complex AI systems
  • 03:45:47 – Luke Alvoeiro (Factory): Architecture for long-running, multi-day agent missions
  • 04:04:47 – Break: Lunch
  • 05:41:46 – Ben Burtenshaw (Hugging Face): Coding agents for AI systems engineering and CUDA kernel development
  • 06:00:33 – Michael Richman (Cmd+Ctrl): Curing FOMAT with mobile command-and-control
  • 06:17:29 – Liam Hampton (Microsoft): Orchestrating local, background, and cloud agents simultaneously in VS Code
  • 06:35:28 – Break: Afternoon
  • 07:41:28 – Tuomas Artman (Linear) with Gergely Orosz: Fireside Chat on Linear's design philosophy and Zero Bug Policy
  • 08:10:48 – Jacob Lauritzen (Legora): Vertical AI – why complex agents need permanent UI artefacts instead of chat
  • 08:25:11 – Peter Gostev (Arena AI): The "Bullshit Benchmark" and what top models on LMSYS Arena still cannot do
  • 08:45:32 – Swyx: Automating a 9-million-dollar conference business with AI agents for non-coding tasks
  • 08:59:02 – Closing Remarks by Tejas Kumar

Read more on webconsulting.at

Code at a Crossroads: 7 Insights from the AI Engineer Summit 2025 – Our report from the predecessor event in the USA: War on Slop, Skills Architecture, and Agent-Ready Codebases.

From Coder to Orchestrator: What the Anthropic Report Means for Teams – How the Software Development Lifecycle is changing due to multi-agent systems.

TYPO3 Extension Security: What We Can Learn from Cloudflare's EmDash – Capability Manifests as a security model for agent permissions.


Impressions from London  

London welcomed us with brilliant sunshine. The Queen Elizabeth II Centre is located in the heart of Westminster – right next to the Abbey, a stone's throw from Big Ben, with the London Eye and the Houses of Parliament in sight. A conference location can hardly be better situated.

Arrival & City  

  • Arrival at Paddington Station – the Victorian roof structure from 1854 welcomes travellers
  • Architecture detail: Isambard Kingdom Brunel's masterpiece of steel and glass
  • Light installation in Paddington – modern lighting design meets Victorian architecture
  • Big Ben in the spring sun – just a few minutes' walk from the conference centre
  • London Eye and River Thames – view from Westminster Bridge in the sunshine
  • Contrasts: London's modern skyline behind the historic Vauxhall Bridge
  • Houses of Parliament – direct neighbour of the Queen Elizabeth II Centre
  • Winston Churchill watches over Parliament Square – a few steps away from the conference
  • St James's Park – perfect lunch break between the sessions
  • Plane tree avenue in St James's Park – London's green lung next to Westminster
  • Parliament Square – vibrant life at the political centre of London
  • Typically London: Houses of Parliament under a cloud-covered sky
  • London Eye close up – coffee break at Gail's Bakery
  • View from the hotel towards Westminster – Big Ben through the glass atrium
  • The Cenotaph in Whitehall – Britain's central war memorial, MCMXIX
  • 10 Downing Street – heavily guarded and yet photogenic

Conference, Keynotes & Talks  

  • The programme at a glance – welcome display at the QEII Centre
  • Behind the scenes – breakout room with professional AV setup
  • Packed House – full main hall during the keynotes (not everyone fit inside)
  • Concentrated audience – the community between the sessions
  • Peter Steinberger (OpenAI / OpenClaw Foundation) on the main stage – State of the Claw
  • Keynote on the main stage – the sponsor wall shows the Who's Who of the AI industry
  • Vercel title slide on the main stage – shortly before Malte Ubl's keynote
  • Raia Hadsell (Google DeepMind): From Atari to Robotics – the path to AGI via games and simulation
  • Neuroscience meets AI: Jennifer Aniston Cells – selective neuron activation
  • Peter Steinberger showing the security advisory explosion: OpenClaw 16.8 advisories per day
  • David Soria Parra (Anthropic) – the inventor of the Model Context Protocol (MCP) on the main stage
  • Coding Agents Track – one of the most attended tracks of the conference
  • The sponsor wall in all its glory – the biggest names in the AI industry under one roof
  • The core message: 2026 is all about connectivity – the best agents use everything

Another perspective of the imposing main stage

Armin Ronacher and Cristina Poncela Cubeiro (Earendil) on the AI Engineer Europe main stage, Ronacher gesturing at the podium, sponsor logos in the background

Armin Ronacher and Cristina Poncela Cubeiro (Earendil) – Agent-legible codebases and conscious friction

Slides & Insights  

Sonar LLM Leaderboard Slide: 50+ LLMs ranked by code quality and security, table with Pass Rate, Issue Density, and Complexity, sonar.com/leaderboard

Sonar LLM Leaderboard – 50+ models ranked by code quality and security

Sonar Slide: Top 5 models by pass rate – Gemini 3.1 Pro High 84.17%, Opus 4.5 Thinking 83.62%, Opus 4.6 Thinking 82.38%, Gemini 3 Pro 81.72%, Gemini 3 Pro High 81.60%

The Top 5: Gemini and Claude Opus dominate in code quality and pass rate

Cerebras Sarah Chieng title slide: Fast Models Need Slow Developers, red terminal icon in the centre, dark background

Sarah Chieng (Cerebras): Fast Models Need Slow Developers – provocation as a programme

GitHub COO Kyle Daigle Slide: Growth Is Accelerating, ~1B Commits 2024 (+25% YoY), projected ~14B Commits 2025 (14x), growing proportion AI-co-authored

GitHub numbers: From 1 billion to 14 billion commits – AI agents drive the growth

Stanford Study Slide: Clean code amplifies AI gains, chart showing Task Composition by AI Involvement vs. Environment Cleanliness Index, clean codebases enable more autonomous AI work

Stanford 120K study: Clean code amplifies AI gains – the Slow Down faction feels validated

Arena AI Peter Gostev Slide: BullshitBench Results – Claude Dominates, horizontal bar chart with pushback rates of various LLMs, Claude models lead with 90%+

BullshitBench: Claude dominates when it comes to pushing back against false instructions

Arena AI Slide: Anthropic vs OpenAI vs Google, timeline chart Q1 2024 to Q2 2026 with pushback rates, Anthropic (red) consistently ahead, OpenAI (green) and Google (blue) behind

The three-way battle: Anthropic vs. OpenAI vs. Google – Claude leads in bullshit detection

Arena AI Slide: What's the gap? METR Benchmark Timeline and BullshitBench results, Both Bad rates among Top 25 models by category

What's the gap? – where even the best models still systematically fail

Fireside Chat on the AI Engineer Europe stage, two speakers on white chairs with a small table in between, sponsor wall with Microsoft, OpenAI, neo4j, Tessi logos

Fireside Chat – Gergely Orosz (The Pragmatic Engineer) in conversation with Tuomas Artman (Linear)

Jacob Lauritzen (CTO, Legora) standing alone on the large stage, his name and company logo on the screen behind him, sponsor wall visible

Jacob Lauritzen (Legora) – AI agents need genuine user interfaces, not just chat


Soundtrack to the Conference  

Music for Reading

Three songs that, for me, capture the feeling of this conference.

Peter Gabriel – Solsbury Hill  


Peter Gabriel describes his personal experience on the eponymous hill in Somerset, England – a moment that encouraged him to leave Genesis and forge uncertain paths. Fits well with a conference where much was called into question.

Lou Reed – There Is No Time  


Real music – you can't beat two guitars, bass and drums. Lou Reed, raw and direct. A reminder that some things simply remain human – and are good precisely for that reason.

Elton John – Tiny Dancer  


From England to California, 1970 – relaxed. The perfect song for the flight back, when the impressions sink in and London disappears behind the clouds.


Parts of this content were created with the assistance of AI.