The seven basic tools of software quality are a set of visual presentation techniques associated with Six Sigma. These tools do not require advanced statistical knowledge, yet they can address the majority of quality problems you are likely to encounter.
Together, these tools help you analyze software quality and produce effective visuals that communicate your findings to others in ways that can be easily understood.
As with some of my other articles, we will center this discussion around a fictitious resume scoring system that takes in resumes and outputs a score to a hiring manager or human resources department.
This is a legacy application that has been brittle to change and encounters frequent errors. In this make-believe scenario, we are responsible for analyzing the defects and coming up with recommendations to upper management.
Your boss is fed up with the quality issues caused by this application. The support load is overwhelming, users aren’t happy, and it’s your job to figure out why and find a way to fix it.
What do you do?
After some research, you stumble upon the 7 basic tools of quality and set out to use them to make sense of your application.
The Run Chart is a simple tool for illustrating trends in data. Here we have a simple run chart of total defects per release.
The chart shows a clear trend in decreasing counts of defects per release, followed by a recent spike. This says nothing about the severity or areas of those defects, but tells us that we had an application whose quality was stabilizing over time and suddenly relapsed.
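A run chart like this can be sketched with a few lines of Python. The release names and defect counts below are hypothetical, chosen only to mirror the shape described above (a decreasing trend followed by a spike); a crude baseline comparison flags the relapse.

```python
# Hypothetical defect totals per release; the shape mirrors the run chart
# described above: a decreasing trend followed by a recent spike.
defects_per_release = {"R1": 42, "R2": 35, "R3": 27, "R4": 18, "R5": 44}

counts = list(defects_per_release.values())
baseline = sum(counts[:-1]) / len(counts[:-1])  # mean of the earlier releases

# A crude relapse check: did the latest release exceed the prior baseline?
spike = counts[-1] > baseline
print(f"baseline={baseline:.1f}, latest={counts[-1]}, spike={spike}")
```

Plotting these values against release order gives the run chart; the point of the tool is simply to make the trend and the spike visible at a glance.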
Now that we know we have a problem, let’s drill into some of the raw data for releases via a Check Sheet.
A check sheet is just a tabulated list of defects that can be organized by area and by a time-bound aspect such as a sprint, release, year, quarter, month, week, day, or day of the week.
Above is a check sheet for our resume analysis application, highlighting defects by area by release. This data confirms the story that the last release was bad and highlights the Resume Parser component as a key reason why.
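A check sheet is easy to build programmatically from raw defect records. The (area, release) pairs below are made up for illustration; tallying them produces the same area-by-release table the article describes.

```python
from collections import defaultdict

# Hypothetical raw defect records as (area, release) pairs -- illustrative
# stand-ins for the data behind the check sheet described above.
defect_log = [
    ("Resume Parser", "R5"), ("Resume Parser", "R5"), ("Resume Parser", "R5"),
    ("Scoring Engine", "R5"), ("Resume Parser", "R4"), ("UI", "R5"),
]

# Tally the check sheet: rows are functional areas, columns are releases.
sheet = defaultdict(lambda: defaultdict(int))
for area, release in defect_log:
    sheet[area][release] += 1

for area in sorted(sheet):
    print(area, dict(sheet[area]))
```

Even this tiny sample makes the pattern obvious: the Resume Parser dominates the latest release's defect count.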
Based on our Run Chart, it looks like we had a problem with the last major release. The Check Sheet data shows us that the problem area is largely in the Resume Parser, so let’s take a look at the quality of that component over time by using a Control Chart.
Control charts tell us when data deviates significantly from established norms. They can be used to spot outliers, as well as trends heading toward outliers.
The above chart is a control chart indicating the quantity of defects in the Resume Parser component by release. In addition to charting the defects per release, the chart includes three flat lines: one indicating the mean, plus bands above and below it based on the standard deviation.
This clearly highlights both a bad release and a good release and can let us focus on what factors contributed to those releases.
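The control limits are straightforward to compute. The defect counts below are hypothetical; note that classic control charts place limits at ±3 sigma, but with such a tiny sample this sketch uses ±1 sigma bands purely to illustrate how releases get flagged.

```python
import statistics

# Hypothetical defect counts for the Resume Parser across six releases;
# the numbers are illustrative, not taken from the article's actual chart.
parser_defects = [12, 9, 11, 3, 10, 24]

mean = statistics.mean(parser_defects)
sigma = statistics.pstdev(parser_defects)  # population standard deviation

# Classic control charts use +/- 3 sigma limits; with this tiny sample we
# use +/- 1 sigma bands purely to illustrate flagging deviations.
upper, lower = mean + sigma, mean - sigma

outliers = [(release, count)
            for release, count in enumerate(parser_defects, start=1)
            if count > upper or count < lower]

print(f"mean={mean:.1f} sigma={sigma:.1f} flagged={outliers}")
```

Here release 4 is flagged as unusually good and release 6 as unusually bad, which is exactly the kind of signal that tells us where to dig for contributing factors.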
Let’s get a better picture of the scope of the defects by using a histogram.
A histogram is a display of data points by frequency. It’s used to spot distribution curves in data based on the frequency of occurrences.
In the histogram pictured above, we see that defects most commonly impact only 1 user, but an alarming number of defects cluster around the 8 user mark and even some higher up (though in this example, the low quantity of data points makes the histogram less reliable).
The data pictured above can be best classified as bimodal, since it clusters around two distinct peaks. The story this tells is that defects typically impact either just one or two users or a larger cluster of users.
It does not, however, offer any explanation as to why the issues are occurring.
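A text-mode histogram is enough to spot this kind of bimodal shape. The "users impacted" figures below are invented for illustration, with clusters around 1 and 8 to match the distribution described above.

```python
from collections import Counter

# Hypothetical "users impacted" figure recorded for each defect,
# invented to show two clusters (around 1 and around 8).
impacted_users = [1, 1, 2, 1, 1, 8, 8, 7, 9, 1, 2, 8, 12, 1]

# Tally frequencies, then render a crude text histogram.
histogram = Counter(impacted_users)
for users in sorted(histogram):
    print(f"{users:>2} users | {'#' * histogram[users]}")
```

The two tall bars stand out immediately, which is all a histogram needs to do; with this few data points, though, any conclusions should be held loosely.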
Cause and Effect Diagram
To determine why the Resume Parser keeps having issues, let’s look at the parser-level defects we identified earlier and put them into a cause and effect diagram (also known as a fishbone or Ishikawa diagram).
In creating a Cause and Effect Diagram, we start with a central problem, which serves as the “head” of the fish. In this case, that would simply be “Parser Defects”. From there, we group the parser errors by broad symptom, such as performance-related issues or a lack of support for all types of resumes. We can then drill into each point on those branches and start getting at the specifics and the factors contributing to them. You can drill as deep as you want in the hierarchy; your main goal is to uncover the contributing factors of each branch and understand why issues are occurring.
In this case, the fish bone diagram illustrates a number of problems, but the primary issues seem to be stemming from the lack of understanding of the diverse factors in resumes, such as not supporting non-English languages, foreign character sets, a large number of jobs, or multiple college degrees.
This makes sense given the application recently left beta and has grown in popularity as larger groups of users have adopted it.
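Structurally, a fishbone diagram is just a tree: a central problem, branches for broad symptoms, and leaves for contributing factors. The sketch below models it as a nested dictionary, with branch names loosely based on the examples above.

```python
# A fishbone diagram is a tree: problem -> symptom branches -> factors.
# Branch and leaf names are illustrative, loosely based on the article.
fishbone = {
    "Parser Defects": {
        "Resume diversity": [
            "Non-English languages unsupported",
            "Foreign character sets mishandled",
            "Many jobs / multiple degrees break parsing",
        ],
        "Performance": [
            "Large resumes parse slowly",
        ],
    }
}

def print_fishbone(tree, indent=0):
    """Walk the cause-and-effect tree, printing each level indented."""
    for node, children in tree.items():
        print(" " * indent + node)
        if isinstance(children, dict):
            print_fishbone(children, indent + 2)
        else:
            for leaf in children:
                print(" " * (indent + 2) + leaf)

print_fishbone(fishbone)
```

Capturing the diagram as data like this also makes it easy to count factors per branch later, when prioritizing fixes.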
Since we indicated performance as an area of concern, it makes sense to drill into that area and determine the relationships between different aspects of the data. We could collect data based on the size of the resume and the time taken to analyze it to come out with the following scatter plot.
Scatter plots are used to evaluate any relationship or trends between two or more variables, in this case the processing time and the size of the resume.
From this chart, we can see how closely the data fits a line or curve and determine the relationship between the data points. In this case, we can say fairly confidently that there is a strong linear correlation: processing time increases as the size of the resume increases.
This gives us enough information to see how the size of a resume is tied to its processing time and extrapolate how long a resume is likely to take to parse given how many words it contains.
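The fit behind that extrapolation is an ordinary least-squares line. The size and timing measurements below are entirely made up; the sketch computes the slope, intercept, and Pearson correlation from scratch and then predicts the parse time for a hypothetical 1,500-word resume.

```python
# Hypothetical measurements: resume size (words) vs. parse time (seconds).
# All numbers are made up for illustration.
sizes = [200, 400, 600, 800, 1000, 1200]
times = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(times) / n

# Least-squares fit: time = slope * size + intercept.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
var_x = sum((x - mean_x) ** 2 for x in sizes)
slope = cov / var_x
intercept = mean_y - slope * mean_x

# Pearson correlation coefficient: near 1.0 means a strong linear fit.
var_y = sum((y - mean_y) ** 2 for y in times)
r = cov / (var_x * var_y) ** 0.5

# Extrapolate the expected parse time for a 1,500-word resume.
predicted = slope * 1500 + intercept
print(f"r={r:.3f}, predicted seconds for 1500 words: {predicted:.1f}")
```

With a correlation this close to 1.0, the linear model is a reasonable basis for the kind of extrapolation described above, at least within the range of resume sizes we have actually observed.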
Now that we have a good understanding of the details of the component and the issues impacting it, let’s start formulating a plan for action by taking a look at the bugs encountered in the Resume Parser component by root cause.
To do this, we tabulate a count of defects by each cause and then sort the causes largest to smallest, tracking a cumulative percentage of defects.
The cumulative percentage is important due to the Pareto Principle that states:
For many events, roughly 80% of the effects come from 20% of the causes
For this reason, the Pareto Principle is often known as the 80 / 20 rule.
We use the constructed Pareto chart to home in on the 20% of causes responsible for 80% of the issues.
In this chart, we see that the lack of diversity in beta users paired with the lack of unit test variety are our two leading causes of defects. This accounts for roughly 80% of all defects in the component (as illustrated by the yellow line indicating the cumulative percent of all values).
Based on this, we have a decent idea of how to fix the majority of user problems.
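The Pareto tabulation itself is a short computation: sort causes largest-first, accumulate the share of all defects, and stop at the 80% threshold. The cause names and counts below are hypothetical stand-ins for the chart's data.

```python
# Hypothetical defect counts by root cause for the Resume Parser component.
cause_counts = {
    "Low beta-user diversity": 34,
    "Sparse unit test variety": 22,
    "Ambiguous requirements": 6,
    "Third-party library bugs": 5,
    "Configuration drift": 3,
}

total = sum(cause_counts.values())
ranked = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)

# Walk the causes largest-first, accumulating the share of all defects,
# and stop once the running total reaches the 80% "vital few" threshold.
running, vital_few = 0, []
for cause, count in ranked:
    running += count
    vital_few.append(cause)
    if running / total >= 0.8:
        break

print(vital_few)
```

In this made-up data set, the top two causes cross the 80% line, matching the article's finding that beta-user diversity and unit test variety are the vital few worth fixing first.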
Armed with a series of charts and tables, you walk into your boss’s office and deliver your formal recommendation to find a large variety of resumes from diverse user groups that were not represented during the beta testing phase. Adapting the software to process these resumes mixed with improving and expanding the unit tests should stabilize the user experience with the application overall.
While your boss was initially skeptical, illustrating the data clearly using the seven tools of software quality removed opinion and emotion from the debate and sufficiently explained your thought process.
She quickly recommends that the business acquire more resumes for analysis and focus more effort on unit testing and on supporting a wider variety of resumes. You and the rest of the team work through the adjustments to the engine, and Release 6 ships ready to meet the needs of a rapidly growing user base.