This case study reflects enterprise work completed at Moody's. Product details and visuals have been generalized to respect organizational confidentiality.
Overview
While designing internal enterprise tools, I noticed recurring inconsistencies in how shared design system components were implemented across products. In data-heavy environments, small visual ambiguities can erode user confidence at scale.
Instead of fixing these issues locally, I partnered with the design system team to validate improvements through usability testing.
Two focused studies were conducted:
- Field state clarity
- Bulk action interaction patterns
The goal was to strengthen reusable components across the ecosystem.
My Role
As the UX designer, I:
- Identified opportunities to improve shared components and interaction patterns
- Partnered with UX Research to plan and conduct targeted usability studies
- Synthesized insights into actionable design recommendations
- Delivered recommendations to the design system team for adoption
This effort extended beyond a single product and focused on strengthening reusable patterns across the system.
Field State Validation
Clarifying editable, read-only, and disabled input behavior
Context
Field states appear in nearly every workflow. Across products, visual differences in background, borders, and icons created confusion about whether a field was editable or not.
Because these components are reused widely, improving clarity at this level would scale across multiple tools.
Research approach
Comparative interview testing: this study focused on perception clarity rather than full task completion.
Participants were shown controlled visual variations and asked:
- "Is this field editable?"
- "What visual cues informed your decision?"
The goal was to identify which signals users actually rely on.
Key findings
- Background color was the strongest signal
- Subtle border differences were frequently overlooked
- Adding edit icons did not consistently improve clarity and sometimes introduced noise
- Not all participants clearly understood the difference between read-only and disabled fields
Design Priorities
Design Priority: 01
State Differentiation Clarity
Users need clearer visual distinction between editable, read-only, and disabled fields to instantly understand what they can interact with.
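Because background color emerged as the strongest editability signal, one way to act on this priority is to encode the three states as an explicit type, with background as the primary cue and border as a secondary one. A minimal TypeScript sketch, assuming a token-based component library; the names and values below are illustrative, not the actual system tokens.

```ts
// Illustrative sketch only: FieldState and fieldStyles are hypothetical names,
// and the values are placeholders rather than real design tokens.
type FieldState = "editable" | "readOnly" | "disabled";

interface FieldStyle {
  background: string; // primary cue: the strongest signal in testing
  border: string;     // secondary cue: subtle borders were often missed
  cursor: string;
}

const fieldStyles: Record<FieldState, FieldStyle> = {
  editable: { background: "#ffffff", border: "1px solid #767676", cursor: "text" },
  readOnly: { background: "#f4f4f4", border: "1px solid transparent", cursor: "default" },
  disabled: { background: "#e8e8e8", border: "1px dashed #c6c6c6", cursor: "not-allowed" },
};
```

Making the state an explicit type forces every consuming product to choose one of the three states, rather than improvising its own visual treatment.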
Design Priority: 02
Reduce Icon Clutter
Participants wanted fewer, more purposeful icons so that visual signals stay meaningful rather than overwhelming or repetitive.
Design Priority: 03
Edit Mode Clarity
Users need a clearer way to enter and recognize when they are in an editable state, ensuring they know when changes are possible.
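One way to make this transition explicit is to model edit mode as a named state whose changes are announced, rather than implied through styling alone. A minimal TypeScript sketch; the type names and announcement copy are assumptions for illustration, not the shipped component API.

```ts
// Hypothetical sketch: edit mode as an explicit, announced transition.
type ViewMode = "viewing" | "editing";

interface EditModeChange {
  mode: ViewMode;
  announcement: string; // for a screen-reader live region and a visible banner
}

function toggleEditMode(current: ViewMode): EditModeChange {
  const next: ViewMode = current === "viewing" ? "editing" : "viewing";
  return {
    mode: next,
    announcement:
      next === "editing"
        ? "Edit mode on. Fields can now be changed."
        : "Edit mode off. Fields are read-only.",
  };
}
```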
These recommendations informed updates to the shared design system components.

Bulk Action Pattern Testing
Improving efficiency and user confidence in multi-item table actions
Context
Bulk actions introduce operational risk in data workflows. Users must understand what is selected, what scope applies, what action will occur, and whether recovery is possible.
Unlike field state interpretation, this interaction required behavioral validation through realistic task execution.
Research approach
Task-based prototype usability testing: an interactive Figma prototype was created to simulate realistic multi-item workflows.

Key findings
Across 8 participants:
- Selection scope was not always clearly understood
- Filter state and selection state were often conflated
- The availability of undo markedly increased confidence when executing actions
- Confirmation messaging needed to be more specific about the action and affected items
Design Priorities
Design Priority: 01
Selection Scope Clarity
Make it explicit whether users selected items on the current page or across all results.
Example messaging:
- "Selected 10 of 50 results"
- "All 50 results selected (across 5 pages)"
Design Priority: 02
Strong Post-Action Feedback
Clearly state what action was applied and to which items.
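Combined with the undo finding from testing, post-action feedback can name the action, the affected count, and a recovery path in a single message. A minimal TypeScript sketch; the Toast shape and helper name are hypothetical.

```ts
// Hypothetical Toast shape and bulkActionToast helper, for illustration only.
interface Toast {
  message: string;
  undoLabel?: string;
  onUndo?: () => void;
}

function bulkActionToast(action: string, count: number, onUndo: () => void): Toast {
  return {
    // Name the action and the exact number of affected items,
    // rather than showing a generic "Action completed" message.
    message: `${action} applied to ${count} item${count === 1 ? "" : "s"}`,
    undoLabel: "Undo",
    onUndo,
  };
}
```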
Design Priority: 03
Distinct Action Bar Design
Visually separate bulk actions from filters and table controls.