Design System Architect

UX Audit

Mobile - Drive Mode - XR

2026

An App Audit Revealed a Design System Gap

Overview

The company's design system and component library were built for people sitting still or holding a phone. When the same system was deployed to drive mode and XR environments, it silently failed: wrong touch targets, hidden controls, missing standards. This is how I fixed it at scale.

Role

Design System Architect

Team

Solo Designer

Impact

1,512 new component variants

3x Platform coverage from a single design system

Zero orphaned components or token violations


Problem Area

One design system, three different contexts

I was asked to audit a field mission mapping application before a product campaign. The bounce-rate data and user feedback told a clear story: the app was hard to use. But digging deeper, the real problem wasn't the app itself. It was that the design system behind it had never accounted for how or where officers actually used it. This is the story of what I found, and what I built to fix it.

The audit: what the data revealed about the app and its real world use

UX audit - user feedback - dashboard metrics - interaction analysis

Act 01

The fix: Expanding the design system to cover mobile, drive mode, and XR

Platform standards - component triage - programmatic generation - structural fixes

Act 02

Act 01: The Audit

What the campaign brief uncovered

The brief was straightforward: conduct a full audit of the field mission mapping app ahead of a product campaign. The app is the unit's primary tool: officers use it to navigate, report, and coordinate in live field conditions. Before any campaign work could begin, I needed to understand what the product actually was and how it was performing.

I pulled the engagement dashboards and requested user feedback. The bounce rate on core map tools was high. The feedback was consistent and specific: the tools felt hard to use compared to Google Maps and Google Earth, apps officers use daily without thinking. This wasn't a training problem. Officers knew how to use maps. They were struggling because the app's patterns didn't match the instincts they'd already built.

Unfamiliar interactions on familiar tasks

The app used non-standard patterns for core actions (search, navigation, location pinning) that clashed with the muscle memory officers had built using Google Maps and Earth. Every interaction required conscious thought instead of reflex.

Finding 01

Mission critical tools hidden behind menus

The most important controls were buried 2–3 taps deep in nested menus. In a calm office this is annoying. In a time-sensitive field situation, it's a failure mode, the kind that causes officers to abandon the app entirely.

Finding 02

Gestures that don't work in the field

The app relied on long press for key actions. At 40 km/h in a vehicle, with vibration and physical stress, long press is nearly impossible to trigger reliably. Officers were getting errors or no response, and giving up.

Finding 03

If a user must stop and think about how to navigate during a mission, the interface has already failed. A non-standard UI isn't just frustrating; it's a liability.

I benchmarked the app's interaction patterns against Google Maps and Earth (the two references officers cited in their feedback) and mapped every point of friction. The audit confirmed what the data suggested. These weren't isolated UI problems. They had a single root cause: the design system had been built for a context that didn't match how or where the product was actually being used.

What the app assumed

User is seated, stationary, with full attention on the screen

User has time to explore menus and learn custom interaction patterns

Long-press and multi-step gestures are reliable inputs

Users are coming to this app fresh, without existing map instincts

What the field looked like

Officers in vehicles, under stress, with divided attention

Zero tolerance for learning; decisions made in seconds, not minutes

Physical movement makes precision gestures unreliable

Deep Google Maps familiarity; any deviation creates friction

Act 01: Decision

From app fix to system fix

I could have taken the audit findings and redesigned individual screens. That would have helped the app in the short term. But while conducting the audit, a parallel conversation was already happening inside the team: we were beginning to design for XR and we had no design system that could support it.

The same gap causing friction in the map app today was about to be replicated across every XR product we built tomorrow. I made the case to go upstream. Rather than patching screens one by one, I would expand the design system itself, formally defining how components should look and behave across the three distinct contexts the team was building for.

The Pivot

The audit findings became the requirements for the DS expansion. Every problem uncovered in Act 01 (wrong gesture types, undersized targets, hidden controls) pointed to the same root cause: a design system that only knew one context.

This is where the case study changes register. Act 01 is about discovery. Act 02 is about what I built in response a design system that formally covers the three contexts this product family actually operates in.

The three contexts the expanded system had to cover

Standard mobile

Officers on foot. Seated or standing. Standard cognitive load. Apple HIG 44px minimum touch target. The existing DS was built for this and only this.

Context 01

Drive mode

Officers in moving vehicles. Vibration, divided attention, physical and cognitive stress. NHTSA mandates 60px+ minimum targets. Long press is not a viable input in this context.

Context 02

VR / XR

Headset use. Spatial interfaces, gaze and gesture input, no physical touch surface. OpenXR recommends a 2° visual angle minimum for any interactive element. The existing DS had nothing here.

Context 03
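The 2° rule can be made concrete with basic trigonometry: a target subtending visual angle θ at viewing distance d has physical size 2·d·tan(θ/2). The helper below is an illustrative sketch, not part of the delivered system; the function name is an assumption.

```typescript
// Hypothetical helper: convert a minimum visual angle (e.g. OpenXR's
// recommended 2°) into a physical target size at a given viewing distance.
// size = 2 * d * tan(theta / 2)
function minTargetSize(viewingDistanceM: number, visualAngleDeg: number): number {
  const halfAngleRad = (visualAngleDeg / 2) * (Math.PI / 180);
  return 2 * viewingDistanceM * Math.tan(halfAngleRad); // metres
}

// At a typical 1 m UI plane distance, a 2° target works out to roughly 35 mm.
const sizeM = minTargetSize(1.0, 2);
console.log((sizeM * 1000).toFixed(1)); // ≈ 34.9 (mm)
```

This is why XR targets in the tables below are so much larger than their mobile counterparts: the minimum is driven by angular size at the render distance, not by fingertip width.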

Act 02: The DS Expansion

Expanding the design system across three platforms

The existing DS was built on MUI (Material UI). It was well-structured for standard mobile but had no platform-specific sizing, no drive mode considerations, and nothing that could support XR. Starting from scratch wasn't the right move; the goal was to extend what existed without breaking it, and to do it in a way that engineering could implement consistently.

I identified 7 component groups that had the most impact on usability in high-stress and non-standard contexts. These are the components officers interact with most often, that carry the most decision-making weight in the field, and that break most visibly when the context changes.

Touch target size — platform comparison

Button (small): Mobile 36px · Drive 56px · XR 72px
Button (large): Mobile 52px · Drive 72px · XR 104px
FAB: Mobile 56px · Drive 76px · XR 96px
Nav row: Mobile 48px · Drive 64px · XR 80px

Standards: Mobile, Apple HIG 44px min · Drive mode, NHTSA 60px+ min · VR/XR, OpenXR 2° visual angle
| Component | Why it's a priority | Mobile | Drive mode | VR / XR |
| <Button> | Primary action across all contexts | 36–52px | 56–72px ↑ | 72–104px ↑ |
| <FAB> | One-tap emergency escalation | 56px | 76px ↑ | 96px ↑ |
| <Alert> | Mission-status communication | Standard | High contrast, large | Spatial, anchored |
| <BottomNavigation> | Context switching under cognitive load | 48px row | 64px row ↑ | 80px row ↑ |
| <AppBar> / <Toolbar> | Persistent orientation anchor | 56px | 72px ↑ | 88px ↑ |
| <Chip> | Status filters, quick selection | 32px | 48px ↑ | 64px ↑ |
| <Snackbar> | Non-blocking status feedback | Standard | Persistent, large | Spatial overlay |

Building the Design System

AI-assisted generation in Figma

Expanding a DS to three platforms manually (duplicating every component, updating each size value, catching token drift by hand across 1,500+ variants) would have taken weeks and introduced exactly the kind of inconsistency I was trying to eliminate. I used Claude (Anthropic) to architect and execute the expansion inside Figma via the Figma Plugin API.

My role throughout was design decision-maker: I defined the platform standards, identified which components needed expanding, reviewed all outputs, and directed corrections when problems surfaced. Claude handled the programmatic generation and structural fixes. The Figma files and documentation are the output of this phase; React implementation is the next step.

01

Platform standards research

Mapped NHTSA, Apple HIG, and OpenXR requirements to concrete sizing and interaction rules. These became hard constraints — non-negotiable minimums that every component had to meet before anything was generated.

Standards defined before any component was touched
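The hard-constraint idea can be sketched as a lookup plus a guard that runs before any variant is generated. The mobile and drive minimums below come from the standards cited in this case study; the 72px XR value is a px equivalent assumed for illustration, and the type and function names are illustrative, not the actual plugin code.

```typescript
// Illustrative "hard constraints" table: minimum touch/interaction target
// per platform. Mobile and drive values follow Apple HIG and NHTSA;
// the XR px value is an assumed equivalent of the 2° visual-angle rule.
type Platform = "mobile" | "drive" | "xr";

const MIN_TARGET_PX: Record<Platform, number> = {
  mobile: 44, // Apple HIG minimum touch target
  drive: 60,  // NHTSA minimum for in-vehicle controls
  xr: 72,     // assumed px equivalent of OpenXR's 2° recommendation
};

// Guard: reject any proposed size below the platform minimum.
function validateTarget(platform: Platform, px: number): number {
  if (px < MIN_TARGET_PX[platform]) {
    throw new Error(
      `${px}px is below the ${MIN_TARGET_PX[platform]}px minimum for ${platform}`
    );
  }
  return px;
}
```

Treating the standards as executable constraints, rather than guidance in a document, is what makes "non-negotiable minimums" enforceable at generation time.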

02

Component triage, 7 priority groups

Not every component needed rethinking equally. I audited the full MUI library against interaction criticality for high-stress, non-seated environments. Only the 7 groups that actually mattered for these contexts were prioritised. No busywork.

Triage before generation

03

Programmatic generation via Figma Plugin API

Components were cloned from the originals (not duplicated) to preserve all variable bindings and design token connections. Six new platform-specific pages were generated across Drive Mode and VR/XR: 1,512 new variants in total.

All token bindings preserved · zero manual copy-paste

04

Structural bug detection and correction

Generating at this scale surfaced real architectural issues in the original DS that had gone undetected: invalid sizing modes, escaped component sets, token violations invisible to the naked eye. Eight bugs were caught and fixed programmatically during the same session.

8 structural bugs caught · 0 manual patches
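One class of those invisible violations was near-white fills (detailed in the Review section below as luma > 0.95). A check like this can catch them programmatically: a Rec. 709 relative-luma filter over Figma-style 0–1 float colors. The types and function names here are simplified stand-ins for the Plugin API's Paint objects, not the actual plugin code.

```typescript
// Simplified stand-ins for Figma's Paint/RGB shapes (channels are 0–1 floats).
interface RGB { r: number; g: number; b: number; }
interface Paint { type: string; color: RGB; }

// Rec. 709 relative luma.
const luma = ({ r, g, b }: RGB): number => 0.2126 * r + 0.7152 * g + 0.0722 * b;

// Drop solid fills that are near-white (luma > 0.95): a token violation
// that is effectively invisible to manual visual review.
function stripNearWhiteFills(fills: Paint[]): Paint[] {
  return fills.filter(f => !(f.type === "SOLID" && luma(f.color) > 0.95));
}
```

A numeric threshold turns "looks white-ish" into a deterministic check, which is the only way a violation like this gets caught across 1,500+ variants.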

05

Documentation on every page

All 9 platform pages include documentation headers and spec tables. Any engineer or designer picking up this file can understand the sizing rationale, the platform standard it references, and how to implement it without needing to ask.

9 pages documented · engineering-handoff ready

Review & Challenges

The bugs manual review missed

Generating components programmatically at this scale exposed structural issues in the original MUI DS that had gone undetected. These weren't visual inconsistencies; they were architectural problems that would have caused silent failures during engineering handoff.

Problems found and fixes applied

Issue 01: primaryAxisSizingMode = 'HUG', an invalid value breaking auto-layout on all generated frames.
Fix: Changed to 'AUTO'; only 'FIXED' and 'AUTO' are valid values in the Figma Plugin API.

Issue 02: VR Button rows overlapping; the larger heights caused stacking to collapse with no gap.
Fix: Row re-layout algorithm: bucket by Y position, sort, restack with height + 24px gap between each row.

Issue 03: Tab internal frames not resizing to match the new component height after platform scaling.
Fix: fixVariantInternals() recursively resized all child FRAME and RECT nodes to match parent dimensions.

Issue 04: Text Wrapper frames carrying a near-white fill, a token violation invisible to visual review.
Fix: Filtered all fills with luma > 0.95 from Text Wrapper frames automatically across all platform pages.

Issue 05: FAB internal Base frame (56px) sitting inside a 76px outer component, a dimension mismatch.
Fix: Resized the internal Base frame to match the outer component dimensions on all Drive Mode and XR FAB variants.

Issue 06: <Toolbar>/Regular/False variant escaped its COMPONENT_SET during VR page generation.
Fix: Renamed to Variant=Regular, Small Screen=False, moved back into the set, re-laid all 3 variants in correct order.
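The row re-layout fix (bucket by Y position, sort, restack with a 24px gap) can be sketched as a pure function over plain {y, height} rectangles. In the real plugin the same logic would run over Figma frame nodes; this standalone version, with illustrative names, shows the algorithm itself.

```typescript
interface Rect { y: number; height: number; }

// Re-layout sketch: frames whose Y positions fall within `tolerance` of each
// other are treated as one row; rows are then restacked top-to-bottom with
// a fixed gap, so taller platform variants no longer overlap.
function relayoutRows(rects: Rect[], tolerance = 4, gap = 24): Rect[] {
  // 1. Bucket by Y: sort, then group rects whose Y is within tolerance.
  const sorted = [...rects].sort((a, b) => a.y - b.y);
  const rows: Rect[][] = [];
  for (const r of sorted) {
    const last = rows[rows.length - 1];
    if (last && Math.abs(last[0].y - r.y) <= tolerance) last.push(r);
    else rows.push([r]);
  }
  // 2. Restack: place each row at a running Y, then advance by the row's
  //    tallest member plus the gap.
  let y = 0;
  const out: Rect[] = [];
  for (const row of rows) {
    const rowHeight = Math.max(...row.map(r => r.height));
    for (const r of row) out.push({ y, height: r.height });
    y += rowHeight + gap;
  }
  return out;
}
```

Keying the stacking off each row's tallest member is what prevents the collapse: the original layout assumed uniform mobile heights, which the larger drive and XR variants broke.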

The deliverable

What exists now in the DS

The Figma file contains 9 platform-specific pages covering all 7 component groups across Mobile, Drive Mode, and VR/XR. Zero orphaned components. Zero token violations. Every page documented with spec tables — ready for engineering to implement without design interpretation required.

| Component | Mobile | Drive | XR |
| <Button> | 360 | 360 | 360 |
| <IconButton> | 135 | 135 | 135 |
| <FAB> | 150 | 150 | 150 |
| <Chip> | 140 | 140 | 140 |
| <Alert> | 12 | 12 | 12 |
| <BottomNavigation> | 18 | 18 | 18 |
| <AppBar> / <Toolbar> | 8 | 8 | 8 |
| <Tab> / <Tabs> | 54 | 54 | 54 |
| <Snackbar> | ✓ | ✓ | ✓ |

Success Metrics

What changed

The audit gave the product team a clear, evidence-based picture of why the app was underperforming. The DS expansion means every product this team builds going forward (including the XR applications now in early design) starts with a foundation that accounts for context from day one.

Root cause

Addressed at the system level rather than patched on individual screens, which changed the department's working culture.


Day 1

XR design teams have a DS that supports their platform from the start, not one discovered missing at QA.


3x

Platform coverage from a single design system, without forking into parallel, disconnected systems.


Sherin Soliman Portfolio

Principal Product Designer & Design System Architect

I love creating designs that matter and make people's lives easier.

Copyright ©

2024 Sherin Soliman. All rights reserved

