EO AI Productivity Exchange #1 · Backup deck

Live Q&A.
Deep dive.

Twelve Slido questions, source-backed answers, panel-ready. 30 to 90 seconds per question. Sources cited per slide. Open during the panel as a teleprompter, or skip if Slido goes hot in another direction.

Christoph Erler & Dom Raute · May 11, 2026 · 18:00 CEST · Online

EO Berlin
01 / 15
Index

The 12 questions, ranked by Slido votes.

Q1 · 7 votes · If you could start over, what would you build differently?
Q2 · 6 votes · How do you handle sensitive data · GDPR?
Q3 · 5 votes · What does your setup cost monthly · honest breakdown?
Q4 · 5 votes · Thilo · AI risk to my own business + investments?
Q5 · 4 votes · Cloud vs self-hosted vs local?
Q6 · 2 votes · Claude vs Gemini · we are in Workspace?
Q7 · 2 votes · Pace of software development overruns enterprise AI?
Q8 · 1 vote · How many hours per week does this save?
Q9 · 1 vote · Best Perplexity use case beside search?
Q10 · 1 vote · Best practical use case for ChatGPT?
Q11 · 0 votes · Worst AI fail in your business?
Q12 · 0 votes · How do you get your team to adopt AI?

Panel discipline · 60 seconds per question. No filler. Read the question for the audience. Hard skip on financial trades, legal advice, prohibited actions.

EO Berlin
02 / 15
Slido Q1

If you could start over, what would you build differently?

7 votes
60-second answer
Five regrets, ordered. One: start with memory, not tools. Two: write down who you are and what you do before installing anything. Three: use Markdown files, not a vector DB, until you cross 10,000 unstructured docs. Four: avoid vendor lock-in; every assistant should be a Markdown wrapper, not a SaaS subscription. Five: build for the one stack you live in, not for every tool you have.
01 · My own v1 to v3 are dead: ChatGPT-only with no memory, Notion-plus-Zapier orchestration, multiple tools with no architecture. 12 weeks of building, ripping, and rebuilding before v4 held.
02 · Single biggest mistake: optimizing for tools before defining what I wanted the brain to remember. The tools became the project; the brain never showed up.
03 · Honest cost of starting over: ~200 hours of setup time, ~$50/day Opus burn during the build-up phase, 4 to 5 months of payback before the brain replaces the work of three hires.
04 · What I would do today, day one: a 50-line CLAUDE.md, three skills (/morning-brief, /diarize-person, /weekly-review), one mobile interface (Telegram). Skip everything else for 30 days. A minimal layout sketch follows below.
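A minimal sketch of that day-one layout, assuming a Unix shell and Claude Code's project-level conventions (CLAUDE.md at the repo root, slash commands as Markdown files under .claude/commands/); the paths and skill bodies are illustrative placeholders, only CLAUDE.md and the three skill names come from this slide.

```bash
# Day-one skeleton for a personal brain -- illustrative, not a working setup.
mkdir -p brain/.claude/commands brain/memory

cat > brain/CLAUDE.md <<'EOF'
# Who I am / what I do            (target: ~50 lines of plain Markdown)
- Name, roles, companies, current priorities
- Writing voice: short sentences, no filler
- Memory lives in ./memory/ as dated Markdown files
EOF

# Three skills as Markdown slash commands (names from the slide).
for skill in morning-brief diarize-person weekly-review; do
  printf '# /%s\nDescribe the steps for this skill here.\n' "$skill" \
    > "brain/.claude/commands/$skill.md"
done
```

The Telegram interface and everything else waits for the 30-day mark, as above.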

References · Main deck slides #5-6 · chris-demo #6, #24 · solutions/openclaw-honest-assessment.md

EO Berlin
03 / 15
Slido Q2

Sensitive company data · GDPR compliance?

6 votes
60-second answer
You can run Claude GDPR-compliant in 2026, but you have to configure it. Default path: route through AWS Bedrock EU Frankfurt (eu-central-1) or Google Vertex AI EU. The Anthropic DPA auto-signs via the Commercial Terms. ISO 27001 + ISO 42001 + SOC 2 Type II certified. API retention 7 days by default since Sep 2025. ZDR available via Enterprise sales. EU AI Act full enforcement 2 Aug 2026; business assistants land in the Limited Risk tier. For German Kanzleien (law and tax firms): a GDPR DPA alone is NOT enough for §203 StGB, add an explicit Verschwiegenheitsverpflichtung (professional-secrecy agreement).
01 · EU AI Act penalties from 2 Aug 2026: €35M or 7% of turnover for prohibited practices, €15M or 3% for high-risk violations. Most business AI = Limited Risk = transparency obligations only.
02 · Bedrock EU regions for Claude: Frankfurt (eu-central-1), Ireland (eu-west-1), Paris, Stockholm. The "EU" cross-region inference profile keeps routing inside the EU boundary.
03 · Claude Code on Bedrock EU: export CLAUDE_CODE_USE_BEDROCK=1; export AWS_REGION=eu-central-1. Must be set as environment variables, not in .aws/config (see the sketch below).
04 · ZDR scope: Anthropic API + Claude Code with a commercial org key. Does NOT auto-apply on Bedrock/Vertex; those follow cloud-provider policy.
05 · BStBK AI FAQ (Feb 2026): for Kanzleien/Notariate (law, tax, and notary practices), sign a DSGVO DPA + a §203-specific Verschwiegenheitsvereinbarung (confidentiality agreement) with the cloud provider.
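A minimal shell sketch of point 03, assuming the two variables named above are all the routing needs; the commented model line is an assumption, look up the exact EU cross-region inference profile ID for your Claude version in the current Bedrock docs.

```bash
# Route Claude Code through AWS Bedrock in the EU (values from this slide;
# these must be environment variables, not .aws/config entries).
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=eu-central-1   # Frankfurt

# Assumption: pin an "eu." cross-region inference profile so requests stay
# inside the EU boundary -- replace the placeholder with the real profile ID.
# export ANTHROPIC_MODEL="eu.<claude-inference-profile-id>"
```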

Sources · trust.anthropic.com · AI Act timeline · Anthropic certifications · Claude API retention · solutions/gdpr-claude-checklist-dach.md (20 items)

EO Berlin
04 / 15
Slido Q3

What does your setup cost monthly?

5 votes
60-second answer
My personal stack: 150 EUR/mo all-in. Claude Pro + a Telegram VM on Lightsail Frankfurt + minor APIs. Dom is at $0/mo, fully local. Compare Glean Enterprise at 50-65 USD/seat (100-seat minimum), ChatGPT Enterprise at 45-75 USD/seat (150-seat minimum), M365 Copilot at 18-30 USD/seat. Architecture beats budget. The subscription sticker is only 20-30% of true cost; the hidden 70-80% is setup time, integration, maintenance, and change management.
Tier · Solo · Team of 10 · Enterprise 50+ · Hidden cost
Subscription · 100-500 USD · 1-3k USD · 8-25k USD · 20-30% of true cost
Setup time · ~200h (~25k EUR) · 1-2 FTE for a quarter · 20-50k USD · Often skipped from the budget
Maintenance · 2h/mo patching · 0.5 FTE ongoing · 15-30% of build per year · Annual recurring
Training · 0 (self) · 1.5k/person · 80-120k FTE/yr · 36% feel undertrained
Payback · 4-5 mo · 6-9 mo · Median 2-4 yrs · Only 5-6% under 12 mo

Sources · Glean pricing · M365 Copilot · ChatGPT Enterprise · Deloitte AI ROI paradox · Master of Code, 5% breakeven

EO Berlin
05 / 15
Slido Q4 · Thilo · part 1 of 1

AI risk to your business model + investments.

5 votes
90-second answer
I run two filters on every business and every investment. Replacement filter: can a well-prompted Claude do my next quarter's work? If yes, I am the moat or the casualty. Rails filter: am I selling picks-and-shovels to the people who cannot be replaced? Real estate, regulated advisory, and trust-based DACH B2B pass both filters. Per-seat SaaS, generic content, and junior-analyst work fail. DACH is 18 months behind US adoption; for the next 3 years that is a feature, not a bug.
AI-vulnerable, sell side
  • Information arbitrage: Chegg $14B → $191M, Stack Overflow questions -76%
  • Junior-analyst SaaS: per-seat productivity, generic copywriting, L1 support
  • Horizontal SaaS without distribution moat: SaaSpocalypse, $285B repricing 2026
  • Mid-pyramid consulting: Accenture rolled Copilot out to 743k employees
AI-resistant, buy/hold side
  • Regulated verticals with liability: legal (20k jurisdictions), healthcare, finance, KYC
  • Trust-based B2B with relationships: PE operating partners, founder advisory, family offices
  • Vertical depth + outcome data: Validata, BetterCloud beat horizontal SaaS
  • Physical assets: real estate, hospitality, manufacturing
  • Distribution moats: the rails, not the software

Sources · McKinsey State of AI 2025 · MIT NANDA GenAI Divide · Gartner 2026 predictions · Stratechery: aggregators and AI · Forrester SaaSpocalypse · PitchBook PE 2026

EO Berlin
06 / 15
Slido Q5

Cloud vs self-hosted vs local?

4 votes
60-second answer
Three valid paths, depending on what you optimize for. Local-first like Dom: $0/mo, full sovereignty, and you carry the engineering load forever. Workspace-native like me: 150 EUR/mo, mobile via Telegram, and you ride the vendor roadmap. Enterprise-hosted like Glean: $500/seat, no engineering load, vendor lock-in plus your data with a third party. The middle path for DACH: AWS Bedrock EU Frankfurt, cloud convenience plus EU residency plus an audit trail.
Local-first
$0/mo · Mac/RTX/H100
  • Best for power users, regulated industries, max privacy
  • Hardware: 500-3000 USD one-time (RTX 4090) or 15-25k USD (H100)
  • Breakeven 2-3 months vs cloud API at 200 req/day
  • Catch: needs MLOps engineer for 24/7 production
Cloud Workspace-native
150 EUR/mo all-in
  • Mobile-first via Telegram from anywhere
  • AWS Lightsail Frankfurt, Claude Code as harness
  • Rides Anthropic + AWS roadmap, vendor co-managed
  • Best for multi-domain operators (real estate, sales, content)
Enterprise hosted
$500-1.5k/seat/mo
  • Best for teams not building, want it tomorrow
  • Glean / Notion AI Enterprise / M365 Copilot
  • 100-200 seat minimums, annual commits, vendor lock-in
  • Hidden cost: 80-120k EUR/yr for an admin FTE

Sources · Premai self-hosted LLM guide · Onyx best self-hosted 2026 · Glean Enterprise pricing · solutions/gdpr-claude-checklist-dach.md

EO Berlin
07 / 15
Slido Q6

Claude vs Gemini, we are in Workspace?

2 votes
60-second answer
Use both. Gemini wins inside Workspace for Doc/Sheet/Slide/Gmail inline work: native integration since Jan 2025, included in Business Plus. Claude wins for reasoning, code, long-context analysis, and agentic loops. On the Artificial Analysis Intelligence Index both score 57 (Opus 4.7 = Gemini 3.1 Pro Preview). SWE-bench coding: Opus 87.6%, Gemini ~64%. Do not wait for Google to catch up on reasoning; they are already there. Use Claude where reasoning matters, Gemini where Workspace integration matters.
Use case · Claude Opus 4.7 · Gemini 3.1 Pro
Workspace inline (Doc/Sheet/Slide/Gmail) · Browser extension only · Native, included in Business Plus
Coding (SWE-bench Verified) · 87.6% · ~64%
Reasoning (Humanity's Last Exam) · 46.9% · 44.4%
Multimodal video + long context · 1M context (Sonnet beta) · 2M context, native video
ARC-AGI-2 (abstract reasoning) · 75.8% · 77.1%
API pricing · $5 / $25 per M tokens · Comparable
Consumer plan · $20/mo Pro · $19.99/mo AI Pro (€7.99 AI Plus EU)

Sources · Claude Opus 4.7 · Artificial Analysis comparison · Gemini Workspace updates Mar 2026 · DataCamp comparison · Google AI Plus EU pricing

EO Berlin
08 / 15
Slido Q7

Does the pace of software development overrun my enterprise AI product?

2 votes
60-second answer
Bet on architecture, not on the model. The same architecture (memory + tools + models) survives any model swap. If your platform is locked to a specific model API, you bought a depreciating asset. If your platform has a thin harness that calls a swappable model, you bought an appreciating asset. The test: can my product run on a different model within 30 days? If not, you have technical lock-in, not a product moat. Models commoditize in 12-18 months. The moat is data, distribution, judgment encoding, vertical depth.
01 · 2026 commodity timeline: Opus 4.7 = Gemini 3.1 Pro on the Intelligence Index, GPT-5.5 at 59. Anthropic, OpenAI, Google, Mistral, and open-weights models are all within 10% of each other.
02 · Lock-in test patterns: proprietary prompts in a vendor format, vendor-specific tool-calling syntax, fine-tunes that don't transfer, vector embeddings tied to one provider's model.
03 · What survives: a data layer separate from the model, a thin harness with a model-portable interface, skills as Markdown, observability/audit trail at the app layer (portability smoke test sketched below).
04 · Enterprise contracts 2027: customers will demand model-portability clauses by default. Vendor lock-in becomes a procurement red flag.
05 · NVIDIA bet: $40B in AI equity bets in 2026, incl. $30B into OpenAI. Capital rotates toward the rails (compute, distribution), away from what gets replaced. Stratechery (Thompson 2026): aggregators with network effects win, point solutions lose.
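A hedged sketch of the 30-day portability test from the answer above; every command, file name, and field here is hypothetical, the point is only that swapping the model should be a one-line configuration change with the rest of the harness untouched.

```bash
# Hypothetical portability smoke test: run the same eval suite twice,
# changing nothing but the model configuration. If this needs more than a
# config swap, the product has technical lock-in, not a moat.
MODEL_PROVIDER=anthropic MODEL_ID=claude-opus  ./run-evals.sh > baseline.json
MODEL_PROVIDER=google    MODEL_ID=gemini-pro   ./run-evals.sh > candidate.json

# Compare pass rates (jq assumed installed; field names are placeholders).
jq -s '{baseline: .[0].pass_rate, candidate: .[1].pass_rate}' \
   baseline.json candidate.json
```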

Sources · Stratechery Software Survival · AA Intelligence Index · NVIDIA $40B equity bets · solutions/pace-keeping-up/weekly-ai-filter.md

EO Berlin
09 / 15
Slido Q8

Hours per week · real numbers?

1 vote
60-second answer
23 hours per week, tracked over 8 weeks. Industry benchmark: knowledge workers save 6.4h/seat per week, senior practitioners 10-12h. My setup is 2-3x that because I built skills, not just tools. Personal Brain v4: live for 6 months, 150 EUR/mo, payback in 4-5 months.
23h · my weekly save · tracked over 8 wks
6.4h · industry median · knowledge worker
10-12h · senior practitioner · McKinsey 2026
9x · cost ratio · AI vs human ticket
01 · Morning briefing (daily 7am cron): 30 min → 3 min = 2.25h/wk
02 · Meeting prep (4 calls/day, dossier ready): 20 min × 4 → 3 min × 4 = 5.7h/wk
03 · Email + WhatsApp drafts, voice-locked: 5 min × 8 → 1 min × 8 = 2.7h/wk
04 · Real estate pricing (6 studios via Holidu cron): 1h/day → 0 min = 5h/wk
05 · Friday weekly review: 90 min → 10 min = 1.3h/wk
06 · Client diagnostics + deliverables: 6.75h/wk combined (sum check in the sketch below)
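A quick sanity check that the line items above reconcile with the headline number, assuming 5 working days wherever an item is per-day; the awk block just re-adds the slide's own figures.

```bash
# Re-add the per-item savings listed above; prints roughly 23-24 h/week.
awk 'BEGIN {
  brief   = (30 - 3) / 60 * 5        # 01 morning briefing, daily
  prep    = (20 - 3) * 4 / 60 * 5    # 02 meeting prep, 4 calls/day
  drafts  = (5 - 1) * 8 / 60 * 5     # 03 email + WhatsApp drafts, 8/day
  pricing = 1 * 5                    # 04 real estate pricing cron, 1h/day
  review  = (90 - 10) / 60           # 05 Friday weekly review, once
  client  = 6.75                     # 06 client work, combined as on the slide
  printf "%.1f h/week\n", brief + prep + drafts + pricing + review + client
}'
```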

Sources · Chris's own tracking over 8 wks · AI Agent productivity stats 2026 · chris-demo #7 Day in the Life table

EO Berlin
10 / 15
Slido Q9

Best Perplexity use case beside search?

1 vote
60-second answer
Three honest uses beyond standard search. One: Perplexity Spaces (5M+ created since launch; NVIDIA, Databricks, and Bridgewater use it for competitive intel). Two: Deep Research mode for due diligence: 2-4 minute reports with a 37% hallucination rate vs 67% on ChatGPT Deep Research. Three: the Pro tier at $20/mo includes CB Insights + PitchBook + Statista access, which alone is worth hundreds in subscriptions. Comet browser caveat: a zero-click vulnerability in March 2026, so be careful in enterprise settings.
Perplexity wins
  • Spaces: dedicated workspaces with file upload + SSO permissions, competitive intel hub per competitor
  • Deep Research: 10 min preliminary DD reports vs 4h analyst work
  • Citations: 37% hallucination rate, lowest in market
  • Premium data: CB Insights, PitchBook, Statista in $20 Pro tier
  • Speed: 2-4 min multi-source research
Perplexity loses to Claude
  • Long nuanced drafts
  • Coding
  • Complex multi-step reasoning
  • Writing in your own voice
  • Comet browser security (3 CVEs Mar 2026)

Sources · Perplexity Spaces · Perplexity pricing 2026 · Deep Research comparison · Comet security

EO Berlin
11 / 15
Slido Q10

Best practical use case for ChatGPT?

1 vote
60-second answer
Three honest wins. Advanced Data Analysis (formerly Code Interpreter): Excel/CSV in, Python + charts + PPTX out, in one prompt; beats Claude Code for one-shot data-to-visualization. DALL-E / GPT Image 2: wins on text rendering in images, ad creatives with readable text, API workflows. Recommended industry mix: 60% DALL-E API, 20% Midjourney for hero visuals, 20% Imagen. Voice Mode: hands-free brainstorming, CarPlay rubber-ducking, structured Socratic sessions on walks. GPT-5.5 Instant (May 2026) is the new default: it hallucinates less in law/medicine/finance and can search Past Conversations + Files + Gmail.
ChatGPT wins
  • Image generation native in-window (Vision input too)
  • Voice Mode for hands-free
  • Custom GPTs for niche repeated workflows (B2B verticals)
  • Advanced Data Analysis one-shot data viz
  • Multimodal in one window
  • Speed on quick tasks
ChatGPT loses to Claude
  • Coding: 70% developer preference for Claude
  • Writing quality: less AI-speak
  • Long-context reasoning
  • Fewer invented function signatures
  • Claude Code as terminal agent for multi-hour tasks

Sources · OpenAI GPT-5.5 launch · Image gen comparison 2026 · Advanced Data Analysis guide · Claude vs ChatGPT honest comparison

EO Berlin
12 / 15
Slido Q11

Worst AI fail and what you learned?

0 votes
60-second answer
Three real industry fails, two of mine. Replit, July 2025: an AI agent deleted the production DB during a code freeze, fabricated 4000 fake users, and lied about the rollback. OpenClaw CVE-2026-25253: 1-click RCE in an agentic orchestration platform, 42k instances exposed, plus the ClawHavoc supply-chain attack with 1184 malicious skills and ~300k users affected (Antiy CERT, Feb 2026). Anthropic, 31 Mar 2026: shipped 513k lines of Claude Code source via an npm sourcemap. My own: a 13-month memory drift discovered in April 2026; a weekly /audit-memory now catches it. Lesson: AI fails silently. Sandbox destructive actions, version-control everything, never trust silent success.
01 · Replit (Jul 2025): Jason Lemkin's SaaStr vibe-coding experiment; the AI deleted the prod DB on day 9, recovery was manual, and Replit shipped dev/prod separation + a planning-only mode in response.
02 · Legal AI hallucinations: 1348 worldwide cases by April 2026 (Damien Charlotin tracker). Sullivan & Cromwell: ~28 fabricated citations in the Prince Global Chapter 15 case. Oregon: a $110k fine in the Valley View winery case.
03 · Customer-service liability: Air Canada 2024 ruling, the chatbot's promise was legally binding ($600 + costs). The Chevrolet of Watsonville bot was prompt-injected into selling a Tahoe for $1.
04 · Devin reality check: Answer.AI's 30-day eval, 3 successes / 14 fails / 3 inconclusive out of 20 tasks = 15% success rate.
05 · Standard discipline: sandbox + human-in-loop for irreversible actions + memory hygiene with an audit trail + short-lived credentials + dev/prod separation (minimal guardrail sketch below). The OWASP AI Agent Security Cheat Sheet is the reference.
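A minimal sketch of the human-in-the-loop discipline in point 05; the wrapper function and the example command are hypothetical, the pattern is simply that no irreversible action runs without an explicit confirmation.

```bash
# Hypothetical guardrail: anything an agent marks as destructive goes through
# an explicit human confirmation instead of executing silently.
confirm_destructive() {
  echo "Agent requests: $*"
  read -r -p "Type YES to allow: " answer
  if [ "$answer" = "YES" ]; then
    "$@"
  else
    echo "Blocked. Logged for the audit trail."
  fi
}

# Example with a placeholder command: dropping a table never runs unattended.
confirm_destructive psql "$PROD_DB" -c 'DROP TABLE users;'
```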

Sources · CVE-2026-25253 SOCRadar · ClawHavoc Antiy · Anthropic npm leak · Replit incident · Charlotin hallucination tracker · OWASP AI Agent Cheat Sheet

EO Berlin
13 / 15
Slido Q12

How do you get a team to actually adopt AI?

0 votes
90-second answer
Hard truth: McKinsey finds C-levels think 4% of employees use AI daily while employees self-report 13%. BCG's silicon ceiling: half of frontline workers don't touch the tools you bought. Gartner: 50% of GenAI projects are abandoned after PoC. Three things that actually work. 1) Name skills, not tools: BBVA built 11k active users with 750 "wizards" and 2-5h/wk saved per person (HBR, April 2026). 2) One workflow per quarter: pilot with 2 champions, measure, then scale. 3) Founder visibility: McKinsey reports 3x adoption when leaders role-model. DACH: Betriebsrat (works council) first; the BAG 2026 ruling puts co-determination on practically any AI, and the EU AI Act is fully applicable from Aug 2026.
50% · PoCs abandoned · Gartner 2025
3x · adoption lift · when leaders model
11k · BBVA active users · built bottom-up
2-5h · per person/wk · tracked at BBVA
01 · Measure what matters: skill invocations per team member per week, self-reported hours saved, active/licensed user ratio (best in class ~80%), 5+ days/week active rate (Copilot benchmark: 67%).
02 · Failure modes to avoid: a blanket mandate (Duolingo PR crisis), train-then-forget (only 36% feel training was enough), tool-led instead of workflow-led (the "we bought Copilot" trap), a centralized AI team picking use cases for the field.
03 · DACH §87/§90/§95 BetrVG: not optional. Loop the Betriebsrat in at the workflow-selection stage. Frame skills narrowly. Suitability for monitoring triggers co-determination, not the intent to monitor.
04 · Drop-off norms: 20-30% drop-off is best-in-class. GitHub Copilot enterprises: 67% active 5+ days/week, 1% monthly churn. Claude Code: 81% of engineers still using it at week 20.

Sources · HBR BBVA Alfaro Apr 2026 · McKinsey State of AI 2025 · BCG AI at Work 2025 · Gartner abandonment · BAG 2026 Betriebsrat · solutions/team-adoption/team-rollout-playbook.md

EO Berlin
14 / 15
After the panel

Slido stays
open 24h.

Late questions land in the same room. We pick the topic for Event #2 from tonight's votes plus whatever comes in over the next 24 hours. All sources cited per slide are clickable. The QA cheatsheet companion lives at events/01-2026-05-11-setup-trap/QA-CHEATSHEET.md. Fork, remix, contribute.

github.com/chris1928a/eo-ai-exchange · Slido #3261743 · app.sli.do/event/ayixnbqUoPEUoNJQa6WuFA

EO Berlin
15 / 15