
How can Strings improve the onboarding and content creation experience for its users? And what user types and use cases should be prioritized first?
Context and Background
Strings is a content publishing and social app. Whereas existing platforms are built around standalone posts from individual creators, Strings lets users share ongoing topical threads and enables far more collaborative content creation.
When I joined Strings, the product was an MVP with beta users but no specific target market. And while the UI was pretty, the UX had a number of issues.
From a product perspective, we had two high-level objectives. We wanted to enable and encourage users to:
- Create content
- Grow their own audiences
Goals: UX & Use Cases
Our research goals were to:
- Identify and prioritize UX issues. To limit our scope, we focused on onboarding and content creation (in line with product goals).
- Identify possible early user types and define their specific use cases in order to drive adoption and retention.
Why these two?
Upon joining Strings, I interviewed several dozen early adopters to understand their motivations, goals, and use cases—including reasons why some used or did not use the app frequently.
Some initial findings from those preliminary interviews:
1. Overall, UX issues were a significant source of frustration. Interviewees mentioned a number of specific issues, but those conversations were relatively high-level and were not usability tests.
So what? We needed to identify and prioritize UX issues so we could address them in the most relevant order. I wanted the scope narrow enough to yield concrete, actionable findings, so we focused on onboarding new users (specifically, identifying their ‘aha’ moment and getting them to it) and on increasing content creation so the community would have more to interact with.
2. There were no obvious patterns in users or use cases. Earlier attempts at personas had largely failed because they were guesswork. Similarly, my initial interviews didn’t find that users had much in common in how they used, or wanted to use, the app.
So what? We had to learn more about how users and prospects viewed the app, how they had used it to date, and how they wanted to use it in the future. Otherwise, there was no effective way to prioritize our roadmap beyond hunches.
My Role: Lead a Small Team
In order to conduct this research with the limited resources of a small startup, I enlisted the help of four other product researchers and designers.
I shared our research objectives in a kickoff meeting where I provided detail about the app and we discussed the research process and methodology.
In an interesting twist, my boss, Strings CEO and serial entrepreneur Edward Balassanian, had doubts about “focus groups” providing particularly helpful results. Yet I felt it was necessary to talk to users and prospects rather than rely purely on intuition. I had, after all, been hired for this reason!
I was adamant that even a small group of users could provide worthwhile insights — especially given the stage we were at as a company and the questions we needed to answer. Our head of engineering also felt that, in order to put together a more predictable roadmap, we needed greater clarity that user feedback and input could help define.
Timeline: Two Months
The research and distillation of findings took four weeks, with overlapping stages:
- 2 weeks: recruit volunteers
- 2 weeks: usability studies and interviews
- 2 weeks: analysis & recommendations
After the research, we spent another four weeks (again with overlapping stages) on design and testing, to determine whether we could address some of the issues we had identified:
- 2 weeks: lo-fi prototyping
- 2 weeks: hi-fi prototyping
- 2 weeks: validation testing
Methodology
We used a two-in-one approach, combining usability testing with questions probing for use cases. Afterward, we distilled findings into insights by identifying patterns in both usability performance and open-ended feedback. We then associated each insight with individual features or tasks and prioritized them.
Recruiting testers:
We contacted current beta users directly via their TestFlight emails and incentivized new, non-user participants with Amazon gift cards. We wanted a relatively diverse group, so respondents were screened for age, gender, race, and interests.

From the findings deck: only 3 of 21 users knew how to interact with cards, and 0 of 21 understood all of the icons.
For usability testing, respondents were given scenarios, asked to accomplish several different tasks, and prompted to think aloud as they went.
Tasks included:
- (New users only) “You just downloaded this app. Open it and explore it for the first time.”
- Goal: answer questions like, “Do they engage with the onboarding flow?” “What do they engage with first/most/least?” “Do they have trouble with anything?”
- “You recently went on a trip. Create a ‘string’ about the trip and add some content.”
- Goal: identify points of friction in the content-creation process.
- “You want to invite your friend, Jane Doe, to check out your string.”
- Goal: identify points of friction in the invitation process.
To address the second goal of our research (use cases), users were then asked about their impressions of the app. E.g., “What are some ways you see yourself using the app?” “How would you describe it to a friend?” “What app would you normally use instead?”
Analysis and synthesis process
All interviews were recorded (with consent). Each of us conducted a number of interviews individually and wrote notes after each one. Once all interviews were complete, we determined whether each participant had been able to complete each task, with or without issues, and wrote out the details of any challenges they hit.
Similarly, we analyzed the interview responses to summarize different perspectives on the app and surface any patterns.
We then consolidated our notes and aggregated our findings. Both UX issues and use cases were organized and sorted to give us a more quantitative view of each.
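For a concrete sense of what that quantitative view looked like, here is a minimal sketch of the kind of tally such coded session notes can produce. The data, task names, and issue codes below are hypothetical, purely for illustration; they are not our actual findings.

```python
from collections import Counter

# Hypothetical coded results: one record per participant per task.
# "outcome" is one of: "success", "success_with_issues", "failure".
# "issues" holds short codes assigned while reviewing notes.
sessions = [
    {"task": "create_string", "outcome": "success_with_issues", "issues": ["unclear_icons"]},
    {"task": "create_string", "outcome": "failure", "issues": ["unclear_icons", "card_interaction"]},
    {"task": "invite_friend", "outcome": "success", "issues": []},
    # ...one record per participant per task
]

# Clean-completion rate per task.
by_task: dict[str, list[str]] = {}
for s in sessions:
    by_task.setdefault(s["task"], []).append(s["outcome"])
for task, outcomes in by_task.items():
    rate = outcomes.count("success") / len(outcomes)
    print(f"{task}: {rate:.0%} clean completions ({len(outcomes)} participants)")

# Issue frequency across all sessions suggests a priority order.
issue_counts = Counter(code for s in sessions for code in s["issues"])
for code, n in issue_counts.most_common():
    print(f"{code}: seen in {n} sessions")
```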
UX issues were relatively straightforward to organize, whereas use cases needed a common format. For those, we used Jobs to be Done (JTBD), expressed as Users/Situations/Motivations/Outcomes (USMO).
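To illustrate the USMO format (this is a hypothetical entry, not an actual finding from the study), each use case can be thought of as a simple structured record:

```python
from typing import NamedTuple

class UseCase(NamedTuple):
    """One JTBD-style use case in Users/Situations/Motivations/Outcomes form."""
    user: str        # who has the job
    situation: str   # when and where the need arises
    motivation: str  # what they are trying to accomplish
    outcome: str     # what success looks like to them

# A made-up example, purely to show the shape:
example = UseCase(
    user="Hobbyist travel blogger",
    situation="Just returned from a multi-week trip",
    motivation="Share the trip as one evolving story rather than scattered posts",
    outcome="Friends follow and comment on a single ongoing thread",
)
```

Putting every use case into the same shape is what allowed us to group and count them alongside the UX issues.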
Outputs and deliverables
The final output of the research project was a slide deck (screenshots of which are interspersed in this writeup) that highlighted findings and made recommendations for next steps.
Impact
We focused our usability testing specifically on areas we believed would affect retention (hopefully turning more new users into returning, active users). After we implemented several of the recommended changes, we tracked new-user activity after 1 day (D1), 7 days (D7), and 30 days (D30). We also tracked whether these changes had any impact on content creation: the number of strings created, the total pieces of content created, and engagement with that content.
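For reference, here is a minimal sketch of how Dn retention can be computed from activity data, using one common definition (a user counts as retained if they were active exactly n days after signup). The user IDs and dates are made up; this is an illustration, not our actual analytics pipeline.

```python
from datetime import date

# Hypothetical inputs: signup date per user, and the set of dates each user was active.
signup = {"u1": date(2019, 10, 1), "u2": date(2019, 10, 1)}
activity = {
    "u1": {date(2019, 10, 1), date(2019, 10, 2), date(2019, 10, 8)},
    "u2": {date(2019, 10, 1)},
}

def retention(day_n: int) -> float:
    """Share of the signup cohort active exactly day_n days after signing up."""
    cohort = list(signup)
    retained = sum(
        1 for uid in cohort
        if any((d - signup[uid]).days == day_n for d in activity.get(uid, ()))
    )
    return retained / len(cohort)

for n in (1, 7, 30):
    print(f"D{n} retention: {retention(n):.0%}")
```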
Success!
We found a large, positive change in each metric (a 14% increase in D7 retention, for example). But because we were not isolating these changes from everything else we shipped (we were a small startup making constant changes, after all), it’s hard to say exactly how much of the improvement can be attributed directly to the research effort.
What happened afterward?
The findings were presented first to the CEO and then to the entire company. They were used to prioritize the issues to address and to establish the company’s first detailed roadmap (beyond the high-level goals above).
The findings from our research also made their way into proposals for Gigi and Bella Hadid, T-Mobile, and UNICEF, all of whom committed to experiments with Strings.
That led to VC commitments toward a $15M Series A. Unfortunately, those agreements were upended by COVID-19.