VIVE Reality System (VRS)

As XR evolved, VRS had to adapt to shifting devices and interaction models. Maintaining coherence required the system to evolve without losing its structure.


Duration

2017-2025

Platform

XR, Web

Context

Across multiple generations of XR products, the pace of change accelerated rapidly.

Each new hardware generation introduced not only different performance profiles and form factors, but also new input methods — from traditional controllers to phone-based input, hand tracking, eye tracking, and eventually voice.
These shifts required the system to reconsider assumptions that had worked in earlier VRS experiences.

Meanwhile, XR use cases also expanded. What began as immersive, enthusiast-driven VR experiences increasingly moved toward enterprise, productivity, and everyday scenarios.
This shift placed new demands on interface design — toward more lightweight, context-aware, and accessible experiences, while still supporting advanced spatial interaction.

Problem

As XR hardware and input methods evolved rapidly, interaction models risked diverging across generations of devices.

Fragmentation was not only visual—it was structural, affecting how systems were designed, implemented, and experienced.

When HTC VIVE’s XR products expanded from PC VR to Mobile VR and Mixed Reality, each generation introduced different hardware constraints, interaction models, and usage contexts.

Without a system-level approach, these changes risked creating inconsistent experiences and increasing complexity for both design and engineering teams.

This resulted in recurring challenges across products:

  • Each new device required rethinking onboarding, setup, and core system flows

  • Interaction models shifted from controllers to hand and direct touch, demanding new spatial rules

  • UI patterns risked diverging as products evolved independently

  • Design and engineering teams repeatedly redefined similar system behaviors

The core challenge was not to start over each time, but to evolve VRS as a flexible, coherent system that could grow alongside XR technology.

My Role

I worked as a UI Design Lead on the VIVE Reality System (VRS), contributing to system-level UI and interaction direction within a cross-functional design organization.

As VRS evolved across device generations and interaction paradigms, my role expanded from hands-on UI design to shaping the overall interaction system across XR products.

  • Designed system UI, layouts, and interaction behaviors for VRS 2.0, ensuring consistency as the system expanded with new hardware capabilities

  • Led UI alignment across features — maintaining coherence across products, surfaces, and evolving interaction patterns 

  • Defined interaction models and system-level behaviors for VRS 3.0, supporting the transition from controllers to hand input and direct interaction

  • Continued hands-on design on key flows — bridging system thinking with real product execution

Beyond design creation, I worked closely with engineering on prototyping, validation, and implementation — using iterative testing to refine interaction behaviors as new paradigms emerged.

Process

Diagnosing Fragmentation

We identified recurring breakdowns across onboarding, system navigation, and interactions—where each new product generation risked redefining the same system behaviors. Fragmentation also surfaced in tooling and implementation, such as mismatched scale and layout between Figma designs and Unity runtime environments.

While design patterns appeared consistent, each system evolved independently across devices.

To address this, we introduced a system-level UI layer and spatial rules defining how interfaces behave in 3D space—including layering, rotation, and positioning. This reduced repeated decision-making, improved design consistency, and delivered a more predictable user experience across XR generations.

Tutorial UI served as a representative example, demonstrating how shared spatial principles improved clarity and learnability across devices.

1st PC VR - Early system foundation

PC VR - Dashboard

1st Mobile VR - Tutorial

These patterns revealed that the issue was not visual inconsistency — but the absence of a shared system backbone guiding both design and runtime.

This led to a shift from feature-level design to system-level thinking.

Establishing A Shared System Foundation

Defining the System Backbone

A unified system backbone across XR products

End-to-end journey from onboarding to daily use

Launcher / Library
Central navigation layer connecting system surfaces

Tutorial
Introduces core interaction patterns and system behavior

System Menu
Persistent system controls accessible across contexts

Settings
Shared configuration layer across XR products

Defining A Unified Interaction Model

Interaction Model & UI Principles

As input methods evolved from controllers to hand tracking, gaze, and direct touch, interaction paradigms became increasingly diverse.

Rather than redefining interactions for each modality, we established a unified interaction model grounded in spatial behavior, consistent feedback, and predictable state transitions.

This enabled a coherent and scalable interaction system across XR devices and generations.
This shifted the focus from input-specific design to an intent-driven interaction framework.

Interaction is governed by spatial distance, transitioning from direct manipulation in near space to ray-based input in far space

While input methods vary across controller, hand, gaze, and touch, they share a unified model of feedback and state transitions

In direct touch, feedback is driven by proximity, where continuous visual cues transition seamlessly into committed states as distance decreases

Key Principles

  • Interaction is defined by user intent, not by input method.

  • Feedback provides immediate system response, while state represents persistent outcomes.

  • Not all interactions require a cursor, but all interactions require clear and consistent feedback.

  • Consistency across feedback and state transitions ensures a predictable and learnable user experience across modalities.

Together, these principles define a coherent interaction language across spatial contexts, input modalities, and user feedback.
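The distance-governed model above can be sketched in code. This is an illustrative sketch only—the type names, function names, and thresholds are assumptions for demonstration, not the actual VRS implementation:

```typescript
// Illustrative sketch of a distance-governed interaction model.
// NEAR_RANGE and COMMIT_RANGE are hypothetical values, not VRS constants.

type InteractionMode = "direct" | "ray";
type FeedbackState = "idle" | "hover" | "committed";

const NEAR_RANGE = 0.5;    // meters; within reach → direct manipulation
const COMMIT_RANGE = 0.02; // meters; near-contact → committed state

// Mode is resolved from spatial distance, not from the input device,
// so controller, hand, and touch share one model.
function resolveMode(distanceToTarget: number): InteractionMode {
  return distanceToTarget <= NEAR_RANGE ? "direct" : "ray";
}

// In direct touch, feedback is proximity-driven: continuous hover cues
// transition into a committed state as distance decreases.
function resolveFeedback(distanceToTarget: number): FeedbackState {
  if (distanceToTarget <= COMMIT_RANGE) return "committed";
  if (distanceToTarget <= NEAR_RANGE) return "hover";
  return "idle";
}
```

Because the mapping depends only on distance and intent, a new input modality plugs into the same feedback and state logic rather than defining its own.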

Extending the System into MR Contexts

Extending interaction principles into real-world environments while preserving system consistency

As XR experiences expanded into Mixed Reality, system principles were extended into environments where digital elements coexist with the physical world.

Rather than redefining interaction patterns, the system adapted existing rules—distance-based interaction, feedback, and state transitions—into spatial and contextual environments.

I designed MR system UI and guided the integration of 3D scenes, ensuring interaction behaviors remained clear and predictable despite changes in depth, lighting, and real-world context.

From onboarding to interaction, users rely on the same underlying logic—enabling a seamless transition from fully immersive VR to context-aware MR.

MR experience framing and transition from real-world context to VR

Guided onboarding for mapping physical space into the system

Spatial alignment and direct manipulation within real-world context

Beyond interaction and context adaptation, the system also expanded into content creation and environmental design.

Scaling The System Across Environments

Systemizing Scene Design for Scalable Experiences

Transforming scene design into a scalable system for production efficiency and user personalization

As XR products evolved, each generation required distinct launcher scenes and spatial environments—introducing increasing design complexity and production overhead.

At the same time, users expected greater personalization, moving beyond a single default environment to selectable scene themes.

I co-led scene design direction and identified recurring patterns in spatial composition, UI integration, and environmental storytelling.

These patterns were distilled into a systematic scene framework—defining reusable structures, layout logic, and visual rules that scale across products, environments, and themes.

This enabled scalable scene variation while preserving a consistent system architecture.

Multiple scene themes built on a shared system structure, enabling personalization without breaking consistency

Scenes structured into layered components—environment, effects, and core spatial elements—forming a reusable system for composition and integration
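The layered structure described above can be sketched as a simple data model. The interface names and fields here are assumptions for illustration, not the production scene schema:

```typescript
// Illustrative sketch of a layered scene framework.
// Layer names and fields are hypothetical, not the actual VRS schema.

interface SceneLayer {
  name: string;
  assets: string[];
}

// A scene theme composed from reusable layers: environment, effects,
// and core spatial elements shared across products.
interface SceneTheme {
  id: string;
  environment: SceneLayer;  // e.g. skybox, terrain, architecture
  effects: SceneLayer;      // e.g. lighting, particles, ambience
  coreElements: SceneLayer; // e.g. launcher anchors, spawn points
}

// New themes vary the environment and effects layers while reusing
// the same core spatial elements, preserving system consistency.
function composeTheme(
  base: SceneTheme,
  overrides: Partial<Pick<SceneTheme, "environment" | "effects">> & { id: string }
): SceneTheme {
  return { ...base, ...overrides };
}
```

Separating the layers this way is what makes theme variation cheap: personalization swaps the outer layers while the core spatial structure stays fixed.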

This structured approach later informed scalable scene creation workflows and AI-assisted generation pipelines (see Case 3: AI Workflow & Design Infrastructure).

Validation & Scaling Across Generations

The system was applied and validated across multiple XR generations and form factors—including PC VR, Mobile VR, and MR devices.

Each product introduced different constraints—performance, ergonomics, and input modalities—providing real-world validation of the system’s adaptability.

Through iterative testing, cross-functional collaboration, and runtime refinement, the system evolved while preserving consistent interaction principles.

This ensured users could transfer knowledge across devices without relearning core behaviors—reducing friction and improving learnability at scale.

The system proved scalable—not only across products, but across evolving technologies and interaction paradigms.

Outcome & Impact

Interaction coherence

Enabled consistent interaction experiences across evolving XR devices and input models.

Adaptable interaction system

Defined interaction principles that scaled across controllers, hand interaction, and direct touch.

Usability & learnability

Ensured interactions remained learnable, predictable, and reliable as complexity increased.

System stability under change

Sustained system coherence despite rapid shifts in hardware capabilities and interaction paradigms.

Reflection

Interaction systems must evolve with changing inputs — without losing clarity, consistency, or user trust.

As hardware and interaction paradigms shifted, the challenge was not just designing new patterns, but defining underlying principles that could adapt across controllers, hand input, and direct interaction.

What mattered most was ensuring interactions remained learnable, predictable, and reliable — even as the system itself continued to evolve.

This experience reinforced my belief that strong interaction systems are not defined by individual patterns, but by the coherence of behaviors that users can intuitively understand and trust.

If you’re building products where clarity and structure truly matter, I’d be glad to connect.

Based in Taiwan · Open to global opportunities

© 2026 Claire Lee
