Process

Omni 1.0: Foundations (2018)

Focus: Establish coherence across VR and 2D.

  • Unified color system, typography, iconography

  • Shared Sketch UI library

  • Unity VR component set

  • Initial cross-team review workflow


Omni 2.0: Visual Language & System Adaptation (2019–2020)

Context

As VIVE evolved from PC VR to all-in-one (AIO) devices, the design system needed to respond to new constraints: lighter hardware performance, longer daily usage, and increasing adoption in enterprise and B2B environments.

Visual Language Evolution

Omni 2.0 introduced a lighter visual language with a 2.5D approach.

This allowed the system to retain depth and hierarchy while reducing visual fatigue and performance cost—striking a balance between spatial clarity and long-term comfort for AIO usage.

Theme & Color System Expansion

A key shift in Omni 2.0 was the introduction of a theme-ready color system.

Primary (accent) colors were redefined at a system level, enabling global replacement through shared color definitions. This allowed enterprise clients to adopt their own brand colors without fragmenting the design system or redesigning individual components.
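As a minimal sketch of this idea, the accent colors can live in a single shared token map that a client theme overrides globally; every token name and hex value below is illustrative, not the actual Omni definition.

```typescript
// Hypothetical token names/values: primary (accent) colors defined once
// at the system level, so a brand theme replaces them globally without
// redesigning individual components.
type ColorTokens = {
  "color.primary": string;
  "color.primary.hover": string;
  "color.surface": string;
};

const omniDefault: ColorTokens = {
  "color.primary": "#0068B5",       // placeholder system accent
  "color.primary.hover": "#0080DF",
  "color.surface": "#FFFFFF",
};

// An enterprise client supplies only the tokens it wants to override;
// everything else falls back to the system defaults.
function applyBrandTheme(
  base: ColorTokens,
  overrides: Partial<ColorTokens>
): ColorTokens {
  return { ...base, ...overrides };
}

const clientTheme = applyBrandTheme(omniDefault, {
  "color.primary": "#B4232A",
  "color.primary.hover": "#D13840",
});
```

Because components reference token names rather than raw values, swapping `color.primary` re-skins the whole system in one place.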

System Impact

  • Enabled scalable brand customization for B2B use cases

  • Maintained consistency across VR, PC, Web, and Mobile

  • Prepared the foundation for future tokenization and system automation

Omni 2.0 marked the transition from visual consistency to system adaptability—ensuring the design system could evolve with both technology and business needs.


Transition — From Omni 2.0 to Omni 3.0

Omni 2.0 established adaptability at a system level—introducing a lighter visual language and a theme-ready foundation that allowed the design system to scale across devices, brands, and business contexts.
As hardware capabilities and product ambitions continued to evolve, new challenges emerged.

XR experiences were no longer limited to a single input method or a fully immersive environment. Interaction models expanded beyond controllers to include direct touch, eye tracking, and spatial gestures. Mixed reality scenarios introduced new visual constraints, where interfaces needed to coexist with the real world rather than replace it.

These shifts required more than visual updates or theme extensions. They called for a deeper rethinking of interaction architecture, component behavior, and system-level rules—one that could support multiple input modalities, spatial contexts, and platforms without fragmenting the experience.

This led to the next evolution of Omni.

Omni 3.0: XR Interaction Architecture & Token Expansion (2021–2025)

Multi-input interaction model

  • Near range (0 to 0.4 m): direct touch

  • Middle range (0.4 to 1 m): touch or ray-based

  • Far range (beyond 1 m): raycast, gaze, pinch
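The range-based model above can be sketched as a simple distance-to-modality mapping; the thresholds mirror the documented ranges, while the function and modality names are illustrative assumptions.

```typescript
// Hypothetical sketch of the documented multi-input interaction model:
// pick the available input modalities from the UI element's distance.
type InputModality = "direct-touch" | "touch-or-ray" | "ray-gaze-pinch";

function modalityForDistance(meters: number): InputModality {
  if (meters < 0) throw new RangeError("distance must be non-negative");
  if (meters <= 0.4) return "direct-touch";   // near range: 0 to 0.4 m
  if (meters <= 1.0) return "touch-or-ray";   // middle range: 0.4 to 1 m
  return "ray-gaze-pinch";                    // far range: beyond 1 m
}
```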

Spatial ergonomics

  • Reach-based layout zones

  • Panel sizes based on distance & FOV

  • Minimum touch target 56 × 56 px at 1 m

  • Clear readability & comfort rules
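One way to apply the 56 × 56 px minimum target (specified at 1 m) across distances is to scale the target linearly so its angular size stays roughly constant; this linear policy is an assumption for illustration, not the documented rule.

```typescript
// Assumed policy: the documented 56 x 56 px minimum touch target is
// defined at 1 m; scaling linearly with viewing distance keeps the
// target's apparent (angular) size approximately constant.
const MIN_TARGET_PX_AT_1M = 56;

function minTouchTargetPx(distanceMeters: number): number {
  if (distanceMeters <= 0) throw new RangeError("distance must be positive");
  return MIN_TARGET_PX_AT_1M * distanceMeters;
}
```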


MR-ready visual system

  • Transparent & glass-style UI

  • Depth layering with blur

  • Real-world contrast testing

Tokenized foundations

  • Tokenized color, radius, elevation, typography

  • Multi-theme switching

  • Shared semantics across VR and 2D kits
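A minimal sketch of the shared-semantics idea: both the VR and 2D kits consume the same semantic token names, which resolve to different raw values per theme. All names and values here are hypothetical placeholders.

```typescript
// Hypothetical two-layer token model: raw palette values, plus a
// semantic layer (one name per UI role) mapped per theme. Switching
// themes changes resolution only; component code keeps the same names.
type ThemeName = "light" | "dark";

const rawPalette = {
  "blue.500": "#0068B5",
  "gray.900": "#1A1A1A",
  "white": "#FFFFFF",
} as const;

const semanticTokens: Record<
  ThemeName,
  Record<string, keyof typeof rawPalette>
> = {
  light: {
    "surface.default": "white",
    "text.primary": "gray.900",
    "action.primary": "blue.500",
  },
  dark: {
    "surface.default": "gray.900",
    "text.primary": "white",
    "action.primary": "blue.500",
  },
};

function resolveToken(theme: ThemeName, semanticName: string): string {
  const raw = semanticTokens[theme][semanticName];
  if (!raw) throw new Error(`Unknown token: ${semanticName}`);
  return rawPalette[raw];
}
```

Because the semantic names are platform-agnostic, the same `surface.default` resolves correctly whether a panel is rendered in the VR kit or the 2D kit.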

Dual-kit system

  • Master Kit: tokens, icons, shared assets

  • VR Kit: spatial components

  • 2D Kit: PC / Web / Mobile components


In-context validation across real spatial environments.

Omni 3.0 was developed and validated alongside live XR products.

System Continuity

Omni 3.0 evolved alongside XR products and was continuously validated through real product implementation.

Its application at runtime and system-level execution is detailed in Case 2: VRS. (Hyperlink)