FIVE AI/BOSCH
Scaling Autonomous Vehicle Verification Post-Acquisition

Five developed a cloud-based verification platform for testing autonomous driving systems, enabling engineers to configure simulations, evaluate system behaviour, and track performance in a safety-critical environment.


I joined prior to Bosch’s acquisition and contributed to the platform’s evolution during its early growth. This case study focuses on the post-acquisition phase, where the challenge shifted to integrating the platform within a large-scale engineering organisation.

Overview

Role: Lead Product Designer

Team: FE engineers, simulation specialists, verification engineers

Focus: Test config, reporting workflows, evaluation hierarchy

Impact

Established shared model of the testing ecosystem
Redesigned reporting to expose hidden regressions
Clarified lifecycle states across platform entities
Enabled pilot adoption inside Bosch engineering teams

The problem
Verification tooling fragmented across testing lifecycle

Post-acquisition, Five’s cloud-based verification platform was introduced into a 1,000+ engineer autonomous driving organisation.


  • Platform adoption was near zero

  • Engineers relied on entrenched local tooling

  • Reporting existed across multiple disconnected dashboards

  • Safety-critical results required high trust and traceability


My role

  • Owned end-to-end design of test configuration and results workflows

  • Worked directly with engineering leads and pilot feature teams

Understanding the Verification Ecosystem

Before designing interfaces, I reconstructed the full testing lifecycle.

  • Traced requirements → scenarios → simulation → CI → reporting

  • Mapped local vs cloud workflows

  • Identified duplication, broken dashboards, naming inconsistencies

  • Exposed structural gaps across tools


Impact

  • Influenced roadmap priorities

  • Revealed structural gaps across the toolchain

  • Aligned product and engineering on the same system view

Reconstructing the testing lineage across a fragmented toolchain
Using a single real requirement as a thread, I mapped how it flowed through test cases, scenarios, simulation engines, CI pipelines, and reporting dashboards — exposing duplicated logic, broken routing, and inconsistent hierarchy.
[Image: The wall.png]
Translating complexity into shared understanding
I reduced a multi-layered testing ecosystem into a clear flow, isolating key breakdowns in routing, reporting, and evaluation — enabling engineering and product teams to align on priorities.
[Image: Post Bosch Tooling mapped to user workflows - Frame 3 with painpoints.jpg]

Integration Challenge

Embedding a new verification model into established engineering practice

The Five platform focused on checking what the car actually did against expected behaviour - so-called black-box testing.


Bosch engineers were used to white-box testing, which required them to dig into the vehicle’s internal data to understand why failures occurred - what the car believes it has done, and why.


Adoption depended on making the platform work with these established practices, not replacing them.​

Key Insight

Trust in cloud verification depended on preserving diagnostic depth

Five brought requirements-level, black-box evaluation. Bosch teams relied on white-box diagnostics to explain failures.


The platform could not simply replace one with the other. It needed to integrate both evaluation models so engineers could trust cloud-based verification without losing diagnostic depth.

The Design Challenge
Making cloud-scale verification interpretable for engineers

The platform needed to:


• Preserve engineering diagnostic workflows
• Provide trustworthy aggregated metrics
• Allow drill-down to failure causes
• Align naming and hierarchy across teams

Designing the Evaluation Model

Engineers used signal-level diagnostics for test reporting, while the Five platform operated at a requirement-level abstraction. These two views could not be collapsed into a single metric.


I redesigned reporting to:

  • Surface both evaluation layers independently

  • Preserve rule → scenario → instance hierarchy

  • Make disagreement visible, not hidden

  • Enable drill-down into failure causes


Result

  • Clearer interpretation of system behaviour

  • Increased confidence in cloud-based evaluation

From evaluation model to product design
Once principles for the evaluation model were established, I translated them into detailed designs across the hierarchy of test reports.
[Image: Test cases regression and improvements on dashboard.png]
Establishing Structural Clarity

Terminology and hierarchy were inconsistent across teams. I worked to:

  • Align naming across test plans, suites, scenarios, rules

  • Simplify lifecycle states and navigation paths

  • Reduce back-and-forth between local and cloud tooling

  • Clarify where results lived and how they were accessed


Outcome

  • Reduced conceptual friction

  • Enabled more coherent platform onboarding

[Image: Diagram for creation of tests - inconsistencies with naming.png]
Aligning hierarchy with mental models

Visualising the platform structure revealed several points where the system reflected internal architecture rather than the engineer’s workflow.


Concepts like builds, runs, and results appeared in multiple places, making it unclear where tests should be created, executed, or reviewed.


I proposed a clearer hierarchy that simplified navigation, grouped related testing artefacts together, and aligned the structure with the real testing workflow — creating tests, running them, and reviewing results.

Approach and Impact

Approach

Rather than broad transformation, we embedded with a pilot feature team.

  • Weekly working sessions

  • Integrated existing EasyEval workflows into platform structure

  • Iterated reporting based on real engineering use


Impact

The redesign helped translate a fragmented verification ecosystem into a coherent platform model.

  • Validated revised evaluation hierarchy

  • Regressions that had previously gone unnoticed became easy to spot and trace

  • Pilot teams adopted the platform for regular testing workflows


© 2026 by Sophia Godfrey

Follow

  • LinkedIn