Podcast Review: AB Testing #86 - Not the Customer's Champion
"Your most unhappy customers are your greatest source of learning." - Bill Gates
Here are a few ideas and things I've been thinking about since I listened to AB Testing podcast #86.
It's always fun to listen to Alan and Brent chat about testing and their brainchild, the Modern Testing Principles. In this podcast, they review the fifth principle, which states:
"FIVE - We believe that the customer is the only one capable to judge and evaluate the quality of our product."
I generally like the points these guys are making about testers not being the customer, or pretending to be one.
It's the idea that we need to go deeper than that and ask questions about what exactly it is that we are trying to make. This seems to head a tad bit into business analyst territory, but that's OK. Testers are analysts too, and business oriented folks can't think of everything. Often, testers pairing with business analysts or product/project folks can help better refine the requirements.
At its root, this is a question everyone should be asking:
What problem are we trying to solve?
Hidden Problems
Asking "What's the problem we are trying to solve?" is great in theory, and it's something I tend to ask a lot, or some variation of it, such as: "What are we trying to accomplish, and why?"
But it makes me think, based on some experiences I've had, and what I've been TOLD to do over the years: How does this idea/question work in a less idealistic setting?
The Plot vs The Setting
When you first land on a team, the idea of asking what problem you are trying to solve seems a little idealistic in terms of business and team dynamics. Often the business has a plan, or an idea of a plan. They have likely been told from the top down that they will figure out what those things are based on some metric, like conversion rates, adoption, or downloads.
The example Brent gives about trying to create a simplified smart phone to sell overstocked equipment is a great example of over-engineering a problem: trying to solve it with the wrong solution. From the sound of the story, Brent knew it wasn't going to work, but he had to work on it anyway to finish out a contract.
If a company like Microsoft can make mistakes like that, and often mistakes created by business people far removed from the development process, how can a team of developers and/or testers stop that before the bad idea gets to them? Should they stop it? Is it worth doing the bad idea to prove to the business that it didn't work in the first place?
It's possible that some folks could influence up and make others realize that an idea isn't a good one, or the right one. Could I manage that same feat from a team that has been delivered requirements and told to innovate on a problem? Maybe.
Using Risk Analysis To Head Off Bad Ideas
Sometimes the best thing you can do is write up risks. In this case, writing a risk that states your team might be developing the wrong solution could also be pretty risky if you don't have a lot of access to customers and analysis to back up your statement. Sometimes testers get this information, sometimes we don't.
Most testers I know would like to have metrics, feedback, and market analysis, but often, we can't get to it, or for some weird reason, we are told we don't need it. In a lot of businesses, testing still isn't in the room where the ideas happen and where testers can be a voice of reason around ideas which might or might not solve a problem.
Risk analysis in this situation is akin to domain knowledge. If you've done your research and understand who your competitors are and what the market is doing in your technical space, you can help your company make better informed decisions. If you've read an article where a competitor details WHY they are abandoning a technology, and your company is thinking about picking that technology up, it might be a good idea to check whether people have at least read the article.
A great example of this is the article recently written by a group out of Airbnb about React Native. They list a lot of great pros and cons for their situation. It was a great write-up on their experience with the tech and how it worked for them.
In my opinion, testers should be tuned into these things. We should be reading these kinds of half tech, half marketing research articles to help guide our thinking, and in turn, at least be able to float an opinion, or offer research around a topic like picking a technology, or adding a crazy feature set which might not be solving a problem.
When you add areas like security, accessibility, and usability to the mix, these caches of knowledge can help address questions around value, problem solving, and general understanding of what the solution should be with a given problem. Even if we can't really change the decision, it's possible someone at some point comes back and asks why a decision was made.
When a business fails to deliver something meaningful, testers often get the brunt of "not catching xyz." If you've provided a risk analysis or a domain viewpoint, you've done what you can, and after that, you can hold up the information you previously presented when someone asks why you didn't catch something or say something. It takes a couple of times before people catch on. But when they do, that shift in thinking, from creating an idea and winging it versus testing an idea in a thought-out, methodical way, can be pretty powerful.
Marketing & Capitalism
Businesses rarely think in terms of the customer because of capitalism. They think in terms of the metrics of selling more things to the customer, which means, especially in marketing, convincing the customer to buy something they often don't really need. How does a team balance the driving need to make money (because paychecks are nice) with the idea that the customer should be at the center of the decisions made about the direction of the product?
Thinking in terms of customer retention is where we can get more customer-centric thinking around solving a problem the customer is actually having. That's different from solving a problem the customer is having with the product, and different again from creating a problem for the customer to have, which in turn creates a problem to solve for them, which causes them to spend more money.
Example: Apple's latest MacBook Pro, which has adopted USB-C. It has a touch interface where the function keys were. Both have been widely discussed and even despised in some cases, along with the limitations of only having USB-C ports. The adoption of the ports is innovative, even forward thinking. The sale of all the various adapters and connectors that allow someone to continue to use their peripheral devices is an added bonus to Apple's bottom line.
It's the best kind of market to have, one that you artificially create. Apple has been doing it for years. Amazon does similar things, offering services on demand you didn't know you needed until they offered them.
All this is to say, sometimes what's wrong or what's right is a grey area. Projects fail all the time. Google Glass is a good example, while the Chromebook is working out alright. Sometimes we can't predict what will work and what won't, though understanding the risks and stating them should be part of the job description for testers at any level. Innovation is risky.
The Innovation Question
If you have the right problem, and you are working on several solutions because you don't know for sure which one will work exactly, how does the question change? It might change to something like:
I understand your problem, does this solve it for you, the customer?
That dovetails with the sixth principle:
"SIX - We use data extensively to deeply understand customer usage and then close the gaps between product hypotheses and business impact."
Development teams should keep asking this question as they continue along the development process as well. It's vital to continuing towards creating the right solution. The initial idea could change over time, and as that happens, the customer needs to be involved to figure out the "Did we get it right?" part. Getting to that question can take a lot of money and time. If somehow identifying the risks can shortcut to the correct answer (similar to something Brent mentioned around the Microsoft example), then it's worth asking, every time.