Recently I ran some UAT (User Acceptance Testing) on one of the products I’m guiding through the process, and wanted to share some tips on how I used Airtable to synthesize and pull meaning quickly from the data.
I will cover my mistakes, reflect on their remedies, and conclude with why reaching for the RICE framework too early is sticky and slows you down.
After all my respondents had submitted their videos of themselves doing the tasks, I manually copied all the insights into one Airtable grid table with the following headings (be mindful of GDPR; all our respondents are USA based).
The basic columns were issue type, description, steps to reproduce, any suggested updates and who it was from.
1. Mistake – Tucking in to RICE too early
I then made my first mistake: I attempted to add rating columns to the insights to help me triage them quickly in RICE (Reach, Impact, Confidence and Effort) format.
In previous UAT sessions I had used simple Impact and Value ratings, but the RICE framework is a relatively new kid on our block, so I was keen to reap its benefits here.
Now… whilst RICE is what we use for our main product backlogs, trying to immediately triage ~100 raw UAT insights with it was taking too long. Using RICE was slowing me down (sorry for the weak metaphor :D).
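For context, RICE boils down to one score per item, which means four judgment calls per insight before you get a number — roughly 400 calls across ~100 raw insights. A minimal sketch in Python (the class, field names and example values are mine for illustration, not anything from Airtable):

```python
from dataclasses import dataclass

@dataclass
class Insight:
    reach: float       # people affected per period
    impact: float      # e.g. 0.25 (minimal) up to 3 (massive)
    confidence: float  # 0.0 .. 1.0
    effort: float      # person-weeks

    def rice(self) -> float:
        # Standard RICE score: (Reach * Impact * Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

# A hypothetical insight: 200 users reached, high impact,
# 80% confidence, 4 person-weeks of effort
score = Insight(reach=200, impact=2, confidence=0.8, effort=4).rice()
print(score)  # → 80.0
```

Estimating all four inputs for every raw, un-deduplicated row is exactly the overhead that made this a poor first pass.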
Remedy – Cleaning the insights
After processing about 20 rows I realised that the raw insight material had patterns, duplicates, bugs, features and “do not fixes” in it, and that it needed a clean-up first, so I deleted those rating columns.
Using Airtable’s multiple-select field type, I instead went through the data tagging each insight with two or three tags. This gave me time to focus and start to categorise and clean the insights.
2. Mistake – Thinking there was time for more passes
After working through about 20 insights again, I realised something else: I did not want to keep making lots of passes over this data. Patterns were already forming from the codification work, and I needed to capture them as I went.
Remedy – Be here, now
No, not the Oasis album 🙂 As I was going through the data and spotting trends, I realised that “being in the moment” and tagging those patterns was the most effective use of my time.
Using a new multiple-select column called “Story”, I tabbed through the insights and began writing in titles for the emerging patterns, e.g., copy changes, login wins, preview bugs. The field’s autocomplete meant I was grouping similar items quickly.
I also added two checkbox columns for potential easy wins and potentially high-impact insights. These helped me record why I thought a given insight was of interest.
Doing this as I went kept me in the flow, and after one pass I had a group of insights that were potentially the most valuable.
3. Airtable win – groups and filters
I then created a new view on the data which showed only items with a pattern story, or a tick for high impact or easy win, grouped by their pattern.
It gave me my processed data in one snapshot, and I was able to clean those items further and see a more valuable collection of our insights.
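Outside Airtable, that filter-then-group step looks roughly like this (the sample rows are invented for illustration; the column names mirror the ones above):

```python
from collections import defaultdict

# Hypothetical rows mirroring the Airtable columns described above
insights = [
    {"desc": "Typo on signup page",  "story": "copy changes", "easy_win": True,  "high_impact": False},
    {"desc": "Preview pane crashes", "story": "preview bugs", "easy_win": False, "high_impact": True},
    {"desc": "Minor log noise",      "story": None,           "easy_win": False, "high_impact": False},
    {"desc": "Login remembers user", "story": "login wins",   "easy_win": True,  "high_impact": True},
]

# Filter: keep anything with a pattern story, or a tick for easy win / high impact
kept = [i for i in insights if i["story"] or i["easy_win"] or i["high_impact"]]

# Group the survivors by their pattern story
groups = defaultdict(list)
for i in kept:
    groups[i["story"]].append(i["desc"])

for story, descs in groups.items():
    print(story, descs)
```

The untagged, unticked row (“Minor log noise”) drops out of the view, and everything else lands under its story heading — the same snapshot the Airtable view gives you.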
I then used Airtable’s URL field to capture all the JIRA links to stories and bug reports I’d created for the next backlog refinement session.
4. REFLECTION – RICE isn’t the best starter
The process I had envisioned was too heavy for the time available, so I cut it down to a quicker one that bubbled insights up fast and closed the process sooner (yes, the environment I work in doesn’t always lend itself to long-term reflection).
Next time, I’d start with this new process. I’d also find a good place to store the notes and sessions so they’re easily accessible to the devs and my fellow PM peers.
The best process was:
- Gather insights
- Codify, de-dupe, uncover patterns, identify rough wins
- Refine once
- Make work tickets
- Move on