I was recently reminded of the time I participated in an IDEO research project. I was curious to learn a bit about their research process and I fit their user research profile.
IDEO was looking to talk to people about a new “impact”-focused FinTech product. It started with a Facebook ad and a screener survey for interested people, which asked about personal finance habits, technologies used, and investing interests. That ultimately led to multiple rounds of 1:1 and group interviews, prototype usability testing, and feedback on marketing/positioning with eight people over about two months.
I’ve always been interested in personal finance and FinTech and I’m also a former Peace Corps Volunteer, which I think gave greater credence to my interest in the “impact” angle (more on that in a second).
The major qualifier seemed to be the FinTech angle. I’d been an early adopter of Betterment, Robinhood, Mint, Wealthfront, Lending Club, Credit Karma, Fundrise, and several other personal finance apps.
What was the product?
The product they were conducting research for was called Swell Investing, which turned out to be an “impact investing platform” — that is, a new robo-investment tool aimed at making a positive impact on the world (clean water, renewable energy, greentech, etc.). The app was backed by Pacific Life.
Conceptually, I liked the idea. But I did have some concerns along the way (which is the point of research, after all).
IDEO’s research process
After I filled out the screening survey, I got a followup email saying they’d found my answers interesting and inviting me to participate. Initially, there was a group meeting to explain the project in greater detail and set expectations. Next, we had separate interviews that were about getting to know us as individuals: our outlooks on investing, the market, goals, etc. After establishing some of those baselines—and presumably determining that we didn’t have any crazy biases—they began to collect more specific and tactical feedback.
At first, IDEO had us give feedback on some prototypes (sent via InVision, if I remember correctly) as well as emails. They focused mostly on language, formatting, and imagery, and on what seemed to resonate most with us. After a few waves of this (both in person and virtual), they shared a preliminary app that we used and gave feedback on (reporting bugs, but also anything that was confusing or otherwise off).
I was candid with my feedback at all stages, both good and bad, which they seemed to appreciate (one would hope, since that’s why we were there!). It was interesting being a participant rather than one of the researchers, and I wondered how feedback from me and the other participants was being gathered, organized, and applied. To this day I don’t know exactly, since I was the one in front of the curtain rather than behind it.
How did feedback affect the product?
Naturally, I wondered how much impact our feedback ultimately had.
For example, I worried that, at the end of the day, few people would sacrifice actual monetary returns in favor of “impact.” Unfortunately, when the platform was later publicly launched, the fees were higher than other existing robo-investors like Betterment.
As another example, I wondered if “impact investing” was a clear term that resonated with people. Personally, I suggested “altruistic investing,” which seemed to strongly resonate in a group session we had early on. (It didn’t take.)
What happened with Swell?
Swell Investing was launched in 2017… and about two years later it shut down.
I’m sure there were a variety of reasons, but it did leave me wondering… how many companies go through an extended, qualitative research process like this and either pick and choose or simply disregard what they hear in favor of already-made decisions?
To be fair, I can understand if something like the fee structure was driven by market research or internal business requirements (although you’d expect them to look at competitors’ pricing, no?). Still, I was surprised that our feedback rarely seemed to have much of an impact along the way, given that I was one of only ten long-term participants. Perhaps I was just in the minority most of the time. Or perhaps IDEO presented findings and recommendations to Swell that were ultimately disregarded.
Ultimately, it’s hard to say what would have made the company succeed instead of fail. Certainly, I won’t claim that taking my specific feedback would have flipped the script. But it reminds me that research is often used in support of existing assumptions.
It also reminds me why some people hate the term “validation testing”: the term’s inherent bias is toward affirming what you’ve already done.