UX Research · UX & UI Design · iOS · University × Government

From confusion to clarity: turning an abandoned WSDOT ferry app into a trusted, usable experience.

1 million ferry riders. Zero prior user research. I led the first-ever usability study on the WSDOT iOS ferry app, found out why users were leaving for third-party apps, and delivered validated design recommendations their team could act on immediately.

Project Lead · WSDOT Liaison · Sole Designer
4 people, directed by me
11 weeks · Jan to Mar 2025
Research report · 27 annotated findings · 7 findings validated through redesign · A/B validation
Figma · FigJam · Zoom

At a Glance

The first usability study ever run on the WSDOT ferry app.

Design Question

How might we surface the right information at the right moment, so riders never need to leave the WSDOT app?

Collaborated with
WSDOT Stakeholders · UX Researchers · Usability Testers · Dev Team
Problem Scale

3 in 5 new users dropped off after first use. 50% were switching to third-party apps mid-task. Zero prior user research existed.

Key Insight

Users weren't failing because features were missing. They were failing because the app spoke the system's language, not theirs.

Final Experience

The redesigned screens — before you read the process.

Search Introduced

Route-first discovery — Surfaces relevant routes upfront, helping users find what they need faster without scanning long lists.

1-Step Ticket Booking

Contextual booking actions — Book tickets and reserve vehicles now directly within route details, reducing backtracking and simplifying decisions.

Clear Labels

Information clarity — Ambiguous labels replaced with intent-specific language. 'Reservations' became 'Vehicle Reservation', and drive-up spaces now show live counts, so users always know what each action does.

Methodology
Round 1
11 users · existing app
Usability testing to find what was broken
Round 2
10 new users · A/B split · 5 vs 5
Redesigned screens vs original app — same tasks, two experiences
Research Output
27
Total findings surfaced
Across 13+ hours of recorded usability sessions
7
Designed & validated
High-impact, low-effort findings taken to redesign and A/B tested
20
Documented for future sprints
Annotated in the final WSDOT report for the next development cycle
User Outcomes
26% Faster task completion
47% Fewer booking errors
68% Higher satisfaction (post-redesign)
Zero App-switching
Business / Product Outcomes
Digital adoption — Fewer users relying on third-party apps, increasing engagement with the official platform.
Operational load — Reduced support dependency through clearer booking flows.
Commuter trust — Actionable insights enabling immediate improvements for WSDOT.
Reflection
Biggest lesson

Working within WSDOT's existing system wasn't a limitation. It forced a sharper question: what do riders actually need at the moment they need it?

Communication as design

Annotated screens moved WSDOT stakeholders faster than any written report. Knowing your audience is half the design work.

If I had more time

All 27 findings would have been validated, not just 7. The research was complete for all of them. Given more time, every finding gets a redesign, a test, and a measured outcome.

My Role On This Project
🎯

Project Lead

Set direction, made method decisions, and owned every scope call.

🤝

Team and Client Coordinator

Translated research needs to WSDOT and client constraints back to the team. Both sides stayed aligned because that coordination ran through me.

🔬

Research Director

Designed the methodology from scratch and kept the study scoped tight across an 11-week timeline with no prior data.

✏️

Sole Designer

7 high-impact, low-effort findings taken from 27, each validated through redesign and Round 2 A/B testing.

Measured Outcomes

11 users tested the original app in Round 1. In Round 2 A/B testing, 5 new users tested the original app and 5 tested the redesigned screens. Same tasks, same scenarios. Here is the difference.

Speed
26%

Faster task completion, Group B vs Group A, timed across identical tasks

Accuracy
47%

Fewer errors during ticket purchase, Group B vs Group A, directly observed and logged

Satisfaction
68%

Higher satisfaction scores, Group B vs Group A, measured via rating scale at end of session

Trust
Reduced

Group A switched to FerryFriend mid-task. Group B completed the same tasks entirely within the WSDOT app.

Washington State Ferry

Washington State Ferries, the largest ferry system in the US, serving over 1 million riders

01 The Problem

WSDOT knew users were leaving. They didn't know why.

The data below was provided by WSDOT. The research set out to find what was driving it.

3 in 5

New users dropped off after first use

WSDOT's analytics showed where users dropped off. Nothing explained why.

50%

Users were switching to third-party apps

FerryFriend, Blue Sky, Marine Tracker. Users were leaving mid-task and WSDOT had no visibility into the reason.

0

Prior usability research existed

No sessions, no recordings, no benchmarks. Starting from zero with no baseline to compare against.

No answers

App Store reviews kept mounting

Recurring complaints about confusing navigation and misleading labels. Patterns WSDOT could see but couldn't diagnose.

Research
Questions

What problems are users running into when using the ferry services feature on the WSDOT app?

Why are users facing those problems?

How might we fix these problems within the system WSDOT already has?

Research Participants

Who we tested with

21 participants recruited across four distinct rider types chosen to reflect the real range of people who depend on the WSDOT ferry app.

Daily Commuter

Rides the ferry to work 4 to 5 times a week. Knows the routes and relies on the app for live schedules and ticket purchase.

Occasional Rider

Takes the ferry a few times a month for errands or leisure. Familiar with the system but not a frequent app user.

Tourist

Visiting or passing through. Unfamiliar with the ferry routes and using the app for the first time.

First-Time User

No prior experience with the ferry system or the WSDOT app.

Key Insights

What the research revealed that shaped the design, in 30 seconds.

01

Route-based thinking, not alphabetical lists

The app was designed for discovery, but most users were not discovering. They were returning. An A-to-Z route list optimises for a first-time visitor. 68% of users were not first-time visitors.

02

Terminology mismatch creates hesitation

The app used backend language, not user language. "Reservation" means something to a developer. To a rider standing at a terminal, it means nothing distinct from "Buy Ticket," and that ambiguity cost them the booking.

03

Visibility matters more than availability

The biggest UX problem was not missing features. It was invisible ones. Vessel tracking existed. Drive-up space counts existed. Users simply never found them. Findability was the product gap, not functionality.

Key Pivot

Shifted from an information architecture problem to a decision-making clarity problem. The app had most of the right features. They just weren’t findable or understandable at the moment users needed them.

The full process: how the problem was uncovered, what got built, and what the data proved

02 The Process

Test the real app. Find the problems. Fix them. Test again.

Phase 1

Diagnose first. Design second.

Understand the product, the users, the problems, and why they existed, before touching a single screen.

1

Navigation Flow Analysis

→ Mapped every screen and interaction in the existing app

Why: With no prior research, the system had to be understood before the right questions could be formed.

→ 5 structural barriers identified
2

Stakeholder Interviews

→ Spoke directly with WSDOT before running a single session

Why: The interaction map gave hypotheses. Stakeholder conversations gave ground truth. Knowing what WSDOT already suspected meant the research could confirm or challenge it directly.

→ Known pain points documented · research scope confirmed
Stakeholder interview session
3

Study Preparation

→ Recruited deliberately across 3 channels to avoid a homogeneous sample

Why: Who you test with shapes what you find. Social media for reach, prior WSDOT participants for credibility, personal networks for speed. Each channel targeted a different rider type. Session guidelines were shared in advance so participants arrived prepared.

→ 21 participants · 4 rider types · 3 recruitment channels
4

Round 1 Usability Testing

→ Task-based sessions observing real behaviour, not collecting opinions

Why: No analytics can show you what a real person does when confused. Scenarios were built around the core jobs: find a route, check schedules, book a ticket. Observed without intervening. That behaviour was the evidence WSDOT needed to act.

→ 13+ hours recorded · 27 findings surfaced
Round 1 usability testing session

Round 1 sessions in progress. Participants sharing their screens over Zoom while navigating the live WSDOT app

5

Affinity Mapping

→ Sorted 150+ raw observations into clusters, then themes

Why: Raw observations are noise without structure. Notes were grouped by similarity, clusters named, and themes synthesised, turning scattered session quotes into a clear, prioritisable picture of what was failing and why.

→ 4 themes · 27 actionable findings
Theme 01
Route schedule
68%
scrolled the full route list instead of finding their route quickly
"I wish there was a search. I can never find my route quickly."
14 findings
Theme 02
Vessel watch
57%
had never discovered live ferry tracking. Not a missing feature, a buried one
"I didn't even know you could track the ferry live in this app."
4 findings
Theme 03
Buying & ticket reservation
41%
chose the wrong flow. Couldn’t distinguish "Buy Ticket" from "Reservation"
"I have to go all the way back just to buy a ticket. By then I've forgotten which time I was looking at."
6 findings
Theme 04
Favourite route
50%
left the app mid-task to verify information on a third-party app
"FerryFriend just shows me what I need. The WSDOT app is confusing."
3 findings
6

Prioritisation by Impact and Effort

→ Scored all 27 findings against implementation cost and user impact

Why: With 11 weeks total and Round 2 still ahead, we couldn't design for everything. Impact-effort scoring gave a defensible way to choose, and gave WSDOT a clear roadmap for the findings that couldn't be reached.

→ 7 findings taken to redesign · 20 documented for future sprints

Full Impact–Effort Matrix — all 27 findings mapped

Full Impact–Effort Matrix

High impact · Low effort — the 7 findings taken to redesign

High-impact, low-effort findings
27 Total findings surfaced
7 Designed and validated 20 Documented for future sprints
27 All findings annotated onto the existing app and documented in the final report
Design Question: How Might We

The research was clear: users weren't failing because features were missing. They were failing because navigation was opaque, labels were misleading, and actions were disconnected from context. That reframing drove every design decision.

How might we surface the right information at the right moment, so riders never need to leave the WSDOT app?

Phase 1 complete

27 findings. 7 prioritised for redesign and validation. A report ready to hand over.

Phase 2: The pivot

But handing over findings isn't the same as proving they work. So I didn't stop there.

The Pivot: My Initiative and Planning

The research was done. Pushing for Round 2 wasn't part of the brief. It was a decision made to prove the work actually mattered.

My thinking: Hand WSDOT a report and walk away. Will it actually change anything?

Team's concern: 2.5 weeks left. No participants lined up. Too tight to pull off.
My move: Brought both the idea and the concern directly to WSDOT. Openly, without assuming the answer.
WSDOT's response: "We'll bring the participants ourselves."
2.5-week sprint: everything runs simultaneously
Me: Redesign all 7 findings alone
Other 3 teammates: Prepare the final stakeholder report
Everyone: Run A/B testing together
7

Design Recommendations within WSDOT's Existing Design Language

→ 7 findings addressed across 4 problem areas — search, routes, reservation copy, drive-up spaces

How: Every fix worked within WSDOT's existing colours, components, and patterns. A search bar introduction, location-based route reordering, clearer vehicle reservation copy, a drive-up space label and colour fix, and a prominent Buy Now button, among others. Nothing required a redesign budget.

→ Every screen buildable inside the system WSDOT already had
8

Round 2: A/B Usability Testing

→ 10 new participants · 5 on the original app · 5 on a mix of redesigned and original screens · same tasks

Prototype: The redesigned screens were built as a high-fidelity interactive prototype — participants could search for routes and reach real results, and favourite a route to see it surface at the top of the schedule. Not static screens. Real tasks, real interactions.

Result: Every problem area that had failed in Round 1 showed measurable improvement in Group B: 26% faster task completion, 47% fewer errors, 68% higher satisfaction. The error rate didn't just dip. It dropped across all four problem areas.

→ Error rate drop confirmed across all 4 problem areas
Round 2 usability testing session

Round 2 sessions in progress. Participants testing redesigned screens via the high-fidelity prototype over Zoom

9

Research Report and Design Rationale for WSDOT

→ Packaged findings, designs, and A/B results into an actionable handover

Why: A report only works if the reader knows exactly what to do with it. All 27 findings were annotated directly onto the existing app, marking where each problem lived. Every design recommendation was written with enough specificity for WSDOT's dev team to act in the next sprint.

→ 27 findings annotated · 7 findings validated through redesign · 1 post-testing iteration included
03 Design Recommendations

7 high-impact findings. Each one addressed through redesign and validated.

WSDOT confirmed · Implementation in progress

Following delivery, WSDOT confirmed several recommendations are under active review for implementation. Changes are being introduced incrementally to minimise disruption to existing users, a direct outcome of designing within their existing system from the start.

Three problem areas. Three design decisions. Each comparison shows what changed and why it mattered.

Route Discovery & Findability

Navigation
Before
No search, forcing users to scroll the full list every time.
Routes listed A–Z, not by the user's location.
After
Location-aware search surfaces your route instantly.
Routes prioritised by the user's location.
30–40%

Faster route discovery, post-redesign

Design Decision

Added search and surfaced routes by location, not alphabetical order

Surfaced location-aware routes and added search rather than forcing users to scan an alphabetical list. The most frequent action should require the least effort.

Information Clarity & UX Writing

Information Clarity
Before
“Buy Tickets” and “Reservations” interpreted as the same action by 41% of users.
Departed ferries shown alongside live ones, adding noise and slowing decisions.
The drive-up spaces bar had no context. Availability was unreadable.
After
“Vehicle Reservation” clearly separated from ticket purchase.
Departed ferries removed. Only actionable sailings shown.
Contextual drive-up label makes availability immediately readable.
90%

Correct status interpretation (up from 32%)

Design Decision

Rewrote labels to match what users were actually trying to do

Replaced ambiguous labels with intent-specific language and removed visual noise. Users make faster decisions when copy matches their mental model.

Navigation Continuity Across Screens

Booking
Before: 3 steps to buy a ticket
No purchase option on the schedule screen. Users had to backtrack to the home page for booking.
3 steps to checkout. Context lost every time.
After: 1 step to buy a ticket
Booking actions surface inline on the schedule screen.
1 step to checkout. No backtracking, no lost context.
47%

Fewer errors. 2 fewer steps to checkout.

Design Decision

Moved booking into the schedule screen, cutting 2 steps from the flow

Moved booking inline with the schedule. Interrupting a user’s context to complete a related task is a flow failure, not a navigation pattern.

Post A/B Testing · Iteration

One thing Round 2 surfaced that Round 1 hadn't

Users consistently looked for their most-used routes first, but the original app’s global favourites, spread across ferry, mountain passes, and other services, made them hard to find and easy to ignore. Most participants didn’t rely on it at all. Ferry-specific favourite routes were surfaced directly at the top of the schedule.

→ Added to final report
04 Reflection

What I learned. What was hard. What I'd do differently.

The biggest lesson

The constraint was the insight

Designing within WSDOT’s existing system forced the right question early: not how to restructure everything, but how to make the right things findable at the right moment. That reframe shaped every screen. The constraint didn’t limit the work. It defined it.

Communication as a design skill

What you say, to whom, and how is part of the design

The team needed direction. WSDOT needed confidence. Annotated screens moved stakeholders where a written report wouldn’t. The finding wasn’t enough. The translation of it was. Format, framing, and who you’re talking to all shape whether the work lands.

The pivot moment

Raise the idea and the concern together

Round 2 almost didn’t happen because the recruitment problem felt like a blocker to solve alone first. Raising both the idea and the concern with WSDOT directly changed the project’s outcome. Don’t assume the answer before asking the question.

If I had more time

All 27 findings validated, not just 7

Within 11 weeks, 7 high-impact findings were identified, redesigned, and validated through A/B testing. The research for all 27 was complete. Given more time, every finding would have gone through the same process: a redesign, a usability test, and post-launch measurement to confirm the impact held. The work proved the method. The remaining 20 findings are the natural next step.