Presented at the Samsung Developer Conference, the award recognized the app across five criteria: platform optimization, user experience and quality, innovation and feature set, relevance to Samsung users, and ecosystem integration.
Korean launch · session ratings and booking analytics. New product, no pre-launch baseline.
English language learning was a $33.5 billion global market in 2018, and Asia-Pacific made up about half of it. It was also where Rosetta Stone had the least presence.
Rosetta Stone's existing offering was shaped around the hobbyist, but career-minded learners made up almost twice the market share of casual hobbyists.
I led design on a cross-functional tiger team with a six-month mandate: validate whether 1:1 mobile tutoring was the right path forward, and if so, shape what that product should be for both learners and tutors. After the design sprint, the PM and I built it out for the rest of the six months.
Industry estimates, 2018
I capped the interviews at seven with learners across APAC. The PM and I were doing the recruiting and screening, and timezones made every step slow. Going for more would have meant rushing each one. Two findings came up consistently.
Partner interviews in China and Korea showed why learners wanted tutoring over e-learning. Students are taught English from K-12 but rarely get to practice speaking, because native English-speaking tutors are expensive.
Take Soojung Park, 34, a marketing manager at Hyundai. Career-minded and proficient in English, she rarely felt confident speaking. Every learner I interviewed had the same fear: being wrong in front of an audience.
We tested two concepts: a group-session "Master Class" led by business leaders, and a private "Find a Tutor" for 1:1 tutoring.
Group sessions triggered the same hesitation we'd already heard in interviews. The shame of getting something wrong in front of others mattered more than the quality of the content.
Group sessions led by business leaders.
Private 1:1 tutoring.
After the confidence insight, I went looking for where the design could give learners something concrete to take with them. The post-session moment was the obvious place.
In session, learners juggled three things: chat, participation, and notes. Note-taking was the most burdensome of the three. Ironically, it was also what pulled them away from listening to the tutor, and they were paying for that listening time.
Note-taking was getting in the way of listening, so I designed the feedback feature to cover that work instead. I tested five variations across multiple rounds of 1:1 interviews with learners in Korea, iterating between rounds.
Learners wanted feedback that reflected measurable progress; they told me they were paying for progress, not praise.
"Encouragement is good, but what's more important is that we need to learn something."
Learner interview
V5 won because it gave learners both quantitative and qualitative feedback. They got scores for how they did, plus a tutor's note and recap for what to improve. The vocabulary that came up in session was bonus material they hadn't paid extra for. Learners started calling it the receipt, and one said it felt like getting something for free on top of the session itself.
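To make the shape of the receipt concrete, here's a minimal sketch of what a V5-style feedback object could look like. The field names are my illustration for this writeup, not the shipped schema.

```kotlin
// Illustrative sketch of the V5 "receipt": quantitative scores plus
// qualitative notes and session vocabulary. All names are assumptions.
data class SessionFeedback(
    val scores: Map<String, Int>,      // competency -> score, e.g. "fluency" to 4
    val tutorNote: String,             // what to improve, in the tutor's words
    val recap: String,                 // short summary of the session
    val vocabulary: List<VocabEntry>   // terms that came up in session
)

data class VocabEntry(val term: String, val definition: String)
```

The quantitative and qualitative halves live in one object on purpose: learners read them together, as one artifact of the session.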
The feedback feature only worked because the tutor portal made it possible in five minutes. That portal was its own design problem.
Tutors had to teach, maintain eye contact with the camera, navigate slides, manage chat, add vocabulary, and rate learner performance across multiple competencies, all at once.
One tutor said:
"We only have 5 minutes between sessions, so it's stressful."
Tutor interview
Tutors had five minutes between sessions to leave feedback, wrap up notes, and prep for the next learner. That constraint shaped the design directly. Vocabulary entry became a shorthand: tutors type the term, a divider (|), then the definition, and the output formats automatically with the term in bold, as in the sketch below.
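A rough sketch of how that shorthand could work, assuming a term-divider-definition input and simple HTML output; the function name and markup are illustrative, not the production code.

```kotlin
// Sketch: split tutor input on the first "|" and bold the term.
// The exact markup the portal rendered is an assumption here.
fun formatVocab(raw: String): String {
    val parts = raw.split("|", limit = 2)
    if (parts.size < 2) return raw.trim()   // no divider yet: leave as typed
    val term = parts[0].trim()
    val definition = parts[1].trim()
    return "<b>$term</b>: $definition"      // term auto-bolded, zero formatting work
}
```

So typing `negotiate | to discuss terms` renders with the term already bolded, and the tutor never leaves the keyboard during those five minutes.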
Edge cases are where I worked most closely with engineering. Here are six moments where the design supported learners so they didn't hit a dead end.
When no sessions are available, learners can enter their preferred booking time, and that data feeds tutor availability decisions in Korea.
A clear modal flags scheduling conflicts the moment they happen, before the learner commits and has to back out.
Instead of a hard no, learners send the time they wanted and stay in the flow. The data goes back to us as a signal for capacity planning.
For learners who carved out rare time inside a 52-hour work week, the wait for a tutor to connect is the most anxious moment. A chat thread and a clear status reduce the uncertainty.
Permissions checked before the clock starts, so technical issues don't eat into learning time.
Two prompts instead of one, so a bad connection doesn't get conflated with a bad lesson. The distinction matters for how the data gets used downstream.
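As a sketch of why that split matters downstream, two prompts could map to two separate fields, so connection trouble never drags down tutor-quality metrics. The names here are assumptions for illustration.

```kotlin
// Assumed shape: connection and lesson rated separately, so capacity
// planning and tutor-quality analyses read from different fields.
data class PostSessionRating(
    val connection: Int,        // 1-5, stability of the call itself
    val lesson: Int,            // 1-5, quality of the teaching
    val comment: String? = null // optional free text
)
```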
The seven interviews changed what I thought we were building. Going in, I thought learners wanted more teaching: better tutors, more native speakers. That's what every tutoring app already focused on. They actually wanted confidence. Without it, they wouldn't speak in real conversations, and without that, they couldn't actually get better.
The interviews worked because we let learners tell stories. They'd walk me through specific moments when they wanted to speak but didn't. I'd say “tell me more,” and they'd tell me.