Cavalon

Responsible Innovation Framework — Building Trust at Koninklijke Verbeek Industries

How we helped a Dutch manufacturing leader deploy AI responsibly, achieving EU AI Act compliance 8 months ahead of deadline while delivering €890K in first-year ROI from their first compliant deployments

January 2025

Services Provided

AI Governance, Ethical AI Framework, Risk Assessment

Key Results

4 AI projects successfully deployed (vs 0 before)

EU AI Act compliance achieved 8 months ahead of deadline

Employee AI trust score improved from 2.3 to 4.1 (out of 5)

€890K combined first-year ROI from the four compliant AI deployments

Executive Summary

Koninklijke Verbeek Industries, a 127-year-old Dutch manufacturing company headquartered in Eindhoven, found themselves at a crossroads. With 2,400 employees across 6 production facilities in the Netherlands and Germany, they recognized the transformative potential of AI for quality control, predictive maintenance, and supply chain optimization. Yet after 18 months of exploration, not a single AI project had moved from pilot to production.

The barrier was not technical capability — it was trust. Leadership worried about regulatory compliance, the works council raised concerns about workforce impact, and engineers questioned the reliability of AI-driven decisions in safety-critical manufacturing environments. Meanwhile, the EU AI Act's compliance deadlines were approaching, adding regulatory urgency to an already complex situation.

Cavalon was engaged to build a Responsible Innovation Framework: a comprehensive governance structure that would enable Verbeek to deploy AI confidently, compliantly, and with the full support of their workforce. Over 8 months, we established an AI governance board, created risk assessment protocols aligned with the EU AI Act, built workforce trust through transparency and participation, and guided 4 AI projects from concept to compliant production deployment — generating €890K in measurable ROI.

The Challenge

Business Context

Koninklijke Verbeek Industries has been a cornerstone of Dutch manufacturing since 1898. Originally a metalworking company, they have evolved into a precision components manufacturer serving the automotive, aerospace, and medical device sectors. Their products demand exacting quality standards — tolerances measured in microns, full traceability, and rigorous compliance with industry-specific regulations (IATF 16949 for automotive, AS9100 for aerospace, ISO 13485 for medical).

The company had invested €2.3M in AI exploration over 18 months:

  • A computer vision system for automated quality inspection
  • A predictive maintenance model for their CNC machining centers
  • An AI-assisted production scheduling optimizer
  • A natural language interface for their quality management system

All four projects demonstrated promising results in controlled pilots. None had reached production.

The Trust Deficit

Through stakeholder interviews across the organization, we identified three distinct layers of the trust deficit:

Executive Leadership (Board and C-Suite)

The board was acutely aware of the EU AI Act timeline and feared deploying AI systems that might later be found non-compliant. Their legal counsel had flagged potential fines of up to 6% of worldwide annual turnover (approximately €18M for Verbeek). The risk of non-compliance outweighed the potential benefits of any single AI project.

Key concern: "We cannot be the company that makes headlines for an AI failure in safety-critical manufacturing."

Works Council and Workforce Representatives

The Dutch works council (ondernemingsraad) had legitimate concerns about AI's impact on employment. Verbeek's workforce includes highly skilled technicians who had spent decades developing expertise — expertise they feared AI would devalue or replace. The works council had not blocked AI projects outright, but their cautious stance created a de facto approval bottleneck.

Key concern: "Our people are our greatest asset. We will not support technology that undermines their expertise or employment security."

Engineering and Operations Teams

The engineers and operators who would ultimately work alongside AI systems questioned their reliability. In manufacturing, a false positive in quality inspection means scrapping good product (costly). A false negative means shipping defective parts (catastrophic). The existing pilot systems showed accuracy rates that engineers deemed insufficient for production use without human oversight protocols.

Key concern: "I have been doing quality inspection for 22 years. How do I trust a system that has been trained on 6 months of data?"

Quantified Impact of Inaction

The 18-month paralysis had measurable consequences:

  • €2.3M in sunk R&D costs with zero production value
  • 3 competitor announcements of AI-enabled manufacturing capabilities
  • 14% increase in quality inspection costs due to labor market tightening
  • Rising customer pressure — two automotive OEM clients had included AI capability requirements in their 2026 supplier scorecards

Our Approach

Framework Design Principles

We designed the Responsible Innovation Framework around five principles that directly addressed the trust barriers:

  1. Proportional governance — Governance rigor scales with risk level. Not every AI application needs the same oversight intensity.
  2. Radical transparency — Every AI system's capabilities, limitations, training data, and decision processes are documented and accessible.
  3. Workforce partnership — Employees are co-designers of AI systems, not passive recipients of them.
  4. Regulatory alignment — EU AI Act compliance is built into the development process, not bolted on afterward.
  5. Evidence-based trust — Trust is earned through demonstrated performance, not promised through presentations.

Phase 1: AI Risk Classification (Weeks 1-4)

We began by classifying all existing and planned AI applications using a framework aligned with the EU AI Act risk tiers, adapted for manufacturing contexts.

Risk Classification Matrix

Risk Level | Criteria | Governance Requirement | Example at Verbeek
Minimal | No safety implications, internal use only | Standard IT governance | Meeting room booking optimizer
Limited | Customer-facing but non-critical | Transparency requirements | NL interface for quality docs
High | Safety-related, quality-affecting | Full conformity assessment | Computer vision quality inspection
Unacceptable | Prohibited under EU AI Act | Not permitted | Covert worker surveillance
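A minimal sketch of how this tiered mapping might be encoded in a governance tool follows. The tier names and pathway strings mirror the matrix above; the criteria flags and function signature are illustrative assumptions, not Verbeek's actual implementation:

```python
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Governance pathway required for each tier (mirrors the matrix above)
GOVERNANCE_PATHWAY = {
    RiskLevel.MINIMAL: "standard IT governance",
    RiskLevel.LIMITED: "transparency requirements",
    RiskLevel.HIGH: "full conformity assessment",
    RiskLevel.UNACCEPTABLE: "not permitted",
}

def classify(prohibited_practice: bool, safety_or_quality_critical: bool,
             customer_facing: bool) -> RiskLevel:
    """Map the matrix criteria onto an EU AI Act-aligned risk tier."""
    if prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if safety_or_quality_critical:
        return RiskLevel.HIGH
    if customer_facing:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Example: the quality inspection CV system is quality-affecting
tier = classify(prohibited_practice=False,
                safety_or_quality_critical=True,
                customer_facing=False)
print(tier.value, "->", GOVERNANCE_PATHWAY[tier])  # high -> full conformity assessment
```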

All four pilot projects were classified:

  • Quality inspection CV system — High risk (directly affects product safety)
  • Predictive maintenance model — Limited risk (advisory, human decision remains)
  • Production scheduling optimizer — Limited risk (optimization suggestions, human approval)
  • Quality management NL interface — Limited risk (information retrieval, no autonomous decisions)

This classification was transformative. It immediately clarified that not every project needed the same level of scrutiny, freeing the three "Limited risk" projects from the heavyweight governance pathway that only the "High risk" quality inspection system required.

Phase 2: Governance Structure (Weeks 3-8)

We established a three-tier governance structure:

Tier 1: AI Ethics Board (quarterly review)

  • Composition: CEO, CTO, Head of Legal, Works Council Chair, External Ethics Advisor
  • Responsibility: Strategic direction, policy approval, high-risk project sign-off
  • First action: Approved the AI Ethics Charter, a public commitment to responsible AI use

Tier 2: AI Review Committee (meets every two weeks)

  • Composition: AI Lead, Quality Director, HR Director, Data Protection Officer, rotating department representative
  • Responsibility: Project-level risk assessment, compliance monitoring, incident review
  • First action: Reviewed all four pilot projects against the risk classification matrix

Tier 3: Project AI Leads (per-project, continuous)

  • Composition: Technical lead + domain expert + workforce representative for each AI project
  • Responsibility: Day-to-day governance, documentation, monitoring, escalation
  • First action: Created Algorithmic Impact Assessments for each pilot project

Phase 3: Workforce Trust Building (Weeks 4-16)

This was the most critical phase. We implemented a comprehensive trust-building program:

AI Literacy Program

We developed a tiered training program delivered in partnership with Verbeek's internal academy:

Level | Audience | Duration | Content
Foundation | All employees (2,400) | 4 hours | What AI is/isn't, how it works at Verbeek, rights and participation
Practitioner | Operators working with AI (340) | 16 hours | System-specific training, override protocols, feedback mechanisms
Champion | AI project team members (45) | 40 hours | Technical depth, governance processes, continuous improvement

Co-Design Workshops

For each AI project, we ran structured co-design sessions where operators and engineers worked alongside data scientists to:

  • Define acceptable performance thresholds
  • Design human override protocols
  • Establish feedback loops for model improvement
  • Create monitoring dashboards they would actually use

The quality inspection team, initially the most skeptical, became the strongest advocates after they helped define the "confidence threshold" — the system would only make autonomous pass/fail decisions above 98% confidence. Everything else was flagged for human review with the AI's assessment presented as a recommendation, not a verdict.
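The resulting routing rule is simple enough to state in a few lines. A hedged sketch follows; the 98% threshold comes from the co-design sessions, while the function and field names are hypothetical:

```python
CONFIDENCE_THRESHOLD = 0.98  # agreed with the inspection team in co-design

def route_inspection(ai_verdict: str, confidence: float) -> dict:
    """Autonomous pass/fail only above the threshold; everything else
    goes to a human, with the AI's view shown as a recommendation."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": ai_verdict, "decided_by": "ai",
                "confidence": confidence}
    return {"decision": "human_review",
            "ai_recommendation": ai_verdict,
            "confidence": confidence}
```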

Transparency Dashboard

We built an internal dashboard showing:

  • Real-time model performance metrics
  • Training data composition and updates
  • Decision explanations for flagged items
  • Comparison of AI vs. human decision consistency
  • Monthly fairness and bias reports

Works Council Partnership

We facilitated a formal agreement between management and the works council that included:

  • No job losses as a direct result of AI deployment (redeployment guarantee)
  • Workers' right to explanation for any AI-influenced decision affecting them
  • Works council representative on the AI Ethics Board
  • Annual independent audit of AI systems' workforce impact

Phase 4: Compliant Deployment (Weeks 12-32)

With governance structures in place and workforce trust established, we guided all four AI projects through compliant deployment:

Quality Inspection CV System (High Risk)

This required the most rigorous compliance pathway:

  1. Conformity Assessment: We prepared a complete technical dossier including training data documentation, performance benchmarking, robustness testing, and bias analysis
  2. Human Oversight Protocol: Defined clear roles for human operators — the AI handles initial screening, humans verify all failures and a random 5% sample of passes (a sketch of this sampling rule follows this list)
  3. Post-Market Surveillance Plan: Continuous monitoring with automatic alerts for performance degradation, drift detection, and edge case logging
  4. Incident Response Plan: Defined escalation procedures for system failures, including automatic fallback to full human inspection
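A minimal sketch of the sampling rule behind the Human Oversight Protocol in step 2, assuming a simple random audit of passes; names and structure are illustrative:

```python
import random

PASS_AUDIT_RATE = 0.05  # random share of AI "pass" decisions audited

def needs_human_verification(ai_decision: str) -> bool:
    """Humans verify every AI failure plus a random 5% sample of passes."""
    if ai_decision == "fail":
        return True
    return random.random() < PASS_AUDIT_RATE
```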

Results after 3 months in production:

  • 99.4% accuracy on pass/fail decisions (vs. 97.1% human-only baseline)
  • 65% reduction in inspection time per unit
  • 12 edge cases identified and fed back into model improvement
  • Zero customer complaints related to AI-inspected products

Predictive Maintenance Model (Limited Risk)

Deployed as an advisory system with full transparency:

  • Maintenance teams receive predictions with confidence levels and explanatory factors
  • All predictions logged with outcomes for continuous calibration
  • Monthly accuracy reviews shared with maintenance team
  • Team has full authority to override or ignore predictions

Results after 4 months:

  • 78% of unplanned downtime events predicted at least 48 hours in advance
  • 23% reduction in total maintenance costs
  • The maintenance team voluntarily increased the share of predictions they acted on from 40% to 85% as trust grew

Production Scheduling Optimizer (Limited Risk)

Deployed as a suggestion engine:

  • Generates optimized schedules daily, presented as proposals
  • Production managers review and approve/modify
  • System learns from modifications to improve future suggestions
  • Full audit trail of all scheduling decisions

Results after 3 months:

  • 12% improvement in machine utilization
  • 8% reduction in changeover time through optimized sequencing
  • Production managers modified only 15% of suggested schedules (down from 60% in week 1)

Quality Management NL Interface (Limited Risk)

Deployed with standard transparency measures:

  • Clear labeling as AI-generated responses
  • Source document citations for every answer
  • Confidence indicator and "I don't know" responses for low-certainty queries (see the sketch after this list)
  • User feedback mechanism for continuous improvement
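One plausible shape for the low-certainty behaviour, sketched under the assumption of a retrieval-based design; the 0.7 threshold and all field names are hypothetical:

```python
ANSWER_THRESHOLD = 0.7  # hypothetical minimum retrieval confidence

def compose_response(retrieved: list[dict]) -> dict:
    """Return a cited, clearly labeled answer, or decline when
    retrieval confidence is too low to answer reliably."""
    if not retrieved or retrieved[0]["score"] < ANSWER_THRESHOLD:
        return {"answer": "I don't know: no sufficiently relevant "
                          "quality document was found.",
                "citations": [],
                "label": "AI-generated response"}
    top = retrieved[0]  # passages assumed sorted by score, descending
    return {"answer": top["passage"],
            "citations": [top["doc_id"]],
            "confidence": top["score"],
            "label": "AI-generated response"}
```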

Results after 2 months:

  • 40% reduction in time spent searching quality documentation
  • 89% user satisfaction rating
  • 230 document suggestions fed back to improve the quality management system itself

Technical Architecture

Governance Technology Stack

The Responsible Innovation Framework required supporting technology:

Risk Assessment Engine

A structured questionnaire tool that guides project teams through the EU AI Act risk classification process. It produces a standardized Algorithmic Impact Assessment document and automatically determines the required governance pathway.
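As an illustration of the engine's output, a hedged sketch of what the standardized assessment record could look like; the questions, field names, and rules below are assumptions rather than the actual questionnaire:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AlgorithmicImpactAssessment:
    project: str
    answers: dict            # raw questionnaire responses, kept for audit
    risk_level: str          # tier per the Phase 1 classification
    governance_pathway: str  # oversight requirements implied by the tier
    assessed_on: date = field(default_factory=date.today)

def assess(project: str, answers: dict) -> AlgorithmicImpactAssessment:
    """Derive tier and pathway from questionnaire answers, reusing the
    rule sketched under Phase 1."""
    if answers.get("safety_or_quality_critical"):
        level, pathway = "high", "full conformity assessment"
    elif answers.get("customer_facing"):
        level, pathway = "limited", "transparency requirements"
    else:
        level, pathway = "minimal", "standard IT governance"
    return AlgorithmicImpactAssessment(project, answers, level, pathway)
```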

Model Registry and Documentation

A centralized registry for all AI models deployed at Verbeek, containing:

  • Model card (purpose, architecture, training data, limitations)
  • Performance benchmarks and ongoing monitoring metrics
  • Compliance documentation (conformity assessments, audit logs)
  • Version history with change rationale

Monitoring and Alerting

Continuous monitoring infrastructure:

  • Performance drift detection (statistical tests on input/output distributions; a sketch follows this list)
  • Fairness monitoring (checking for bias across product types, shifts, operators)
  • Availability and latency monitoring
  • Automatic alerting thresholds with escalation to AI Review Committee
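For the drift detection item above, a minimal sketch assuming a two-sample Kolmogorov-Smirnov test on a single input feature; the significance level and window sizes are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

DRIFT_ALERT_P = 0.01  # illustrative significance level for alerting

def feature_drifted(reference: np.ndarray, live: np.ndarray) -> bool:
    """Flag drift when the live window's distribution differs
    significantly from the reference captured at deployment."""
    result = ks_2samp(reference, live)
    return result.pvalue < DRIFT_ALERT_P

# Example with synthetic sensor readings: a 0.4-sigma mean shift
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=5000)  # distribution at deployment
current = rng.normal(0.4, 1.0, size=5000)   # shifted live window
if feature_drifted(baseline, current):
    print("Drift alert: escalate to the AI Review Committee")
```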

Incident Management

Integration with Verbeek's existing incident management system:

  • AI-specific incident categories and severity levels
  • Root cause analysis templates for AI failures
  • Corrective action tracking with governance sign-off
  • Quarterly incident trend analysis for the AI Ethics Board

Integration with Existing Systems

A critical success factor was deep integration with Verbeek's existing IT landscape:

  • SAP integration for production scheduling optimizer (reads work orders, writes schedule proposals)
  • SCADA/IoT integration for predictive maintenance (reads sensor data streams)
  • Quality management system integration for inspection results and documentation
  • Active Directory integration for role-based access to governance tools

Results

Quantified Outcomes

Metric | Before | After | Change
AI projects in production | 0 | 4 | From stagnation to deployment
EU AI Act compliance | Unknown | Full compliance | 8 months ahead of deadline
Employee AI trust score | 2.3/5 | 4.1/5 | +78% improvement
Combined AI ROI | -€2.3M (sunk costs) | +€890K net positive | €3.19M value swing
Quality inspection accuracy | 97.1% (human) | 99.4% (AI-assisted) | +2.3 percentage points
Unplanned downtime prediction | 0% | 78% predicted 48h ahead | New capability
Documentation search time | 25 min avg | 15 min avg | -40%
Machine utilization | 71% | 80% | +12%

ROI Breakdown

AI Project | Annual Benefit | Implementation Cost | First-Year Net
Quality Inspection CV | €520K (labor + quality) | €180K | €340K
Predictive Maintenance | €310K (downtime + parts) | €95K | €215K
Production Scheduling | €280K (utilization + changeover) | €110K | €170K
Quality NL Interface | €195K (time savings) | €30K | €165K
Total | €1,305K | €415K | €890K

Note: Governance framework development cost (€185K) is included in the quality inspection project cost as it was the primary driver for the governance investment.

Intangible Outcomes

Beyond the measurable financial returns:

  • Regulatory confidence: Verbeek's legal team now views AI as a managed risk rather than an uncontrollable liability
  • Competitive positioning: Two automotive OEM clients upgraded Verbeek's supplier scorecard rating based on their AI governance maturity
  • Talent attraction: Three senior data scientists cited the responsible AI framework as a key factor in their decision to join Verbeek
  • Industry recognition: Verbeek was invited to present their framework at the Dutch Manufacturing Summit 2025

Key Learnings

What Worked

1. Classification before governance

By classifying projects by risk level first, we avoided the trap of applying maximum governance to every project. This immediately unblocked three of four projects and focused intensive governance effort where it was genuinely needed.

2. Workforce as co-designers, not subjects

The co-design workshops were the single most impactful intervention. When operators helped define the confidence threshold for the quality inspection system, they transitioned from skeptics to stakeholders. They had ownership of the system's behavior, which fundamentally changed their relationship with it.

3. Evidence over promises

We deliberately ran limited production trials with extensive monitoring before full deployment. Showing engineers 3 months of real performance data was far more persuasive than any presentation about AI accuracy. Trust was earned, not declared.

4. Works council as partner

Including the works council from day one, giving them a board seat, and establishing employment guarantees transformed them from a potential blocker into an active champion. Their endorsement carried significant weight with the broader workforce.

What We Would Do Differently

1. Start governance design in parallel with pilots

Verbeek lost 18 months because governance was an afterthought. In hindsight, the governance framework should have been designed alongside the first pilot, not after four pilots had stalled.

2. External ethics advisor from the beginning

We brought in an external ethics advisor for the AI Ethics Board. Their independent perspective was invaluable, and we wish we had included them in the initial risk classification phase rather than adding them later.

3. More international perspective

Verbeek's German operations have different works council dynamics (Betriebsrat) and slightly different regulatory nuances. We initially focused too heavily on the Dutch context and had to retrofit some governance elements for the German facilities.

Conclusion

The paradox of responsible AI innovation is that governance — often perceived as a barrier to speed — actually accelerated Verbeek's AI deployment. By building trust systematically through transparent governance, workforce partnership, and evidence-based validation, we transformed an organization that had been paralyzed by AI anxiety into one that confidently deploys AI within clear ethical and regulatory boundaries.

For manufacturing companies facing similar challenges, the lesson is clear: responsible innovation is not the opposite of fast innovation. It is the prerequisite for sustainable innovation. The companies that will lead in AI-enabled manufacturing are not those that move fastest — they are those that move most confidently, with their workforce, regulators, and customers aligned behind them.

Cavalon's Responsible Innovation Framework is now being adapted for three additional Verbeek facilities and has attracted interest from industry peers exploring their own AI governance journeys. The framework's principles — proportional governance, radical transparency, workforce partnership, regulatory alignment, and evidence-based trust — apply well beyond manufacturing to any organization navigating the complexities of responsible AI deployment.


Interested in building a responsible AI governance framework for your organization? Contact Cavalon to discuss how we can help you deploy AI with confidence and compliance.
