Mob Testing a Mobile App
"We should seek the greatest value of our action."
I've only done mob "anything" at conferences. It's a skill I've been itching to try for a while, but I'd never found a good opportunity. I recently got my chance to facilitate a mob testing session after I proposed the idea to my team.
Currently we are working on a mobile application. We had a few days left in our sprint and a couple of stories that still needed to be tested. I suggested we do a mob testing session so we could work through the last few stories together as a team. The team liked the idea! I was ecstatic, and then quickly had to figure out how I wanted the session to proceed.
I thought the best way to get my team members to think about what they knew, and to communicate those thoughts to the rest of the team, was to write down the assumptions we had about the application. I stole this idea from a recent Twitter thread where several folks were commenting on assumptions and how best to handle them. Everybody has them, so why not make those assumptions known and share!
We each wrote our assumptions on stickies, and I grouped the similar ones on the board. One team member asked about having the same assumption as someone else in the group. I responded that it was OK, and actually not a bad thing: it means we are all thinking similarly about the problem.
Being aware of our assumptions can also help us avoid group-think around a problem. If we know we are all assuming we should "click the button and something happens," we can challenge that assumption with other interactions.
The end goal of reading out everyone's assumptions was to give the navigator an informed testing path. I purposely didn't write out test cases or scenarios; I didn't want to influence the navigators. My team lead commented that I should have had scenarios for the team to walk through instead of letting people drive the discovery. I noted that feedback, but on reflection, I don't think we would have discovered and discussed as much as we did if we had followed scenarios.
Our business analyst was also in attendance, and he was the first one up as the driver. I was pleased that he wanted to join us, and also pleased that he had some great feedback after his turn driving was complete.
Our team was organized like this: Driver, Navigator, Assistants to the Navigator, and Facilitator (who also acted as an assistant to the navigator). We rotated roles every five minutes, with the exception of the Facilitator.
The session was set for one hour total. I set up an emulator and displayed it on a large monitor so the whole team could participate. We managed to get through the whole user flow and discovered a few things about caching, state changes, and device behavior we hadn't anticipated. I referred back to the assumptions we had written earlier and flagged the ones that were wrong, and I also created notes for stories that needed to go into our backlog to handle parts of the workflow we hadn't discussed but realized would present issues in future sprints.
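If you're curious what the rotation works out to in practice, the structure above (five-minute turns over a one-hour session, with the Facilitator fixed) can be sketched as a small schedule generator. This is purely illustrative; the roster names and the convention that the next person in line navigates are my own assumptions, not how any particular team must run it.

```python
def rotation_schedule(members, session_minutes=60, turn_minutes=5):
    """Yield (start_minute, driver, navigator) for each turn.

    `members` is the rotating roster (the Facilitator is excluded,
    since that role doesn't rotate). The person after the driver
    navigates; everyone else assists the navigator.
    """
    turns = session_minutes // turn_minutes
    schedule = []
    for turn in range(turns):
        driver = members[turn % len(members)]
        navigator = members[(turn + 1) % len(members)]
        schedule.append((turn * turn_minutes, driver, navigator))
    return schedule

# Hypothetical five-person rotating roster:
team = ["BA", "Dev 1", "Dev 2", "Dev 3", "Tester"]
for start, driver, navigator in rotation_schedule(team):
    print(f"{start:02d}m  driver={driver}  navigator={navigator}")
```

With five rotating members and twelve five-minute turns, everyone drives at least twice, which matches the goal of getting the whole team hands-on.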
While I might have caught these things myself during a solo testing session, having the team review the work and talk through the issues and assumptions let us all discover what we should focus on and what might be of concern. It broadcast this information to the whole team instead of me filtering it through my tester's lens and then communicating it back to the developers.
Since we were doing mob testing with an Android emulator, some of the behaviors we discovered were discussed and noted by the iOS developers as well. They realized they might face similar issues when they picked up the same story in the next sprint. This exchange of information across platforms happens in our group, often at standup, but the developers rarely have time to think about an issue in detail and explore possible solutions based on what the other set of developers and the tester discovered.
Session Wrap-Up
I collected notes, gave our business analyst information for future stories that should be researched, and gathered feedback from the team. They were extremely positive about the experience. We plan to run a mob testing session every other Friday. Since we are using emulators, we can run these sessions remotely and discuss our findings via chat or Zoom. Our first remote mob testing session will be this week. I'm looking forward to letting you all know how it goes!