Beyond the Numbers: A New Approach to Assessing Project Quality
Project managers often use the triple constraint model to explain to clients and stakeholders that they have a choice: "Do you want it fast, do you want it cheap, or do you want it good?" The model is a helpful way to express that a project can be fast, cheap, or good, but not all three at once: squeeze the timeline and the budget, and quality is what gives.
The underlying assumption in the model is that quality is the output of some function of the project's timeline, budget, and scope. Confusion arises, however, because time, cost, and scope are usually easy to quantify, while quality is not. Time can be read by any project team member from the project plan and a calendar. The budget is known to the client from day one, usually within an acceptable level of variance. The scope of work is documented before the work begins, clearly outlining what has been agreed upon. So it stands to reason that quality, as an output of these three inputs, should also be quantifiable.
The problem with this assumption, specifically in software implementation projects, is that quality is incredibly subjective. Individual perspectives or organizational priorities can easily bias the perception of quality. Some examples of issues leading to its subjectivity are:
- Stakeholders will often hold the parts of the solution that touch their own department to higher quality standards than the rest.
- Some clients may overemphasize small but user-visible problems over larger problems that are not as easily visible to the end user.
- Clients may overemphasize issues that they had in previous systems or tools.
- Clients naturally associate the quality of the project with the trust they have in the implementation team (for better or worse).
Despite quality being so subjective, the temptation to track, measure, and incentivize quantifiable metrics persists.
One reason the belief that quality can be quantified persists has to do with how issues are tracked in software implementation projects. During implementation, the client receiving the newly implemented solution is often responsible for testing it and logging bugs and enhancement requests. These pieces of feedback, whether logged as a ticket, a record, an issue, or a case, are quantifiable. So it is not illogical to ask: what is the relationship between the number of tickets submitted and the quality of the solution?
Note: For the remainder of this article, I will refer to all types of submitted bugs/issues/enhancements as a "feedback record," "client feedback," or just "feedback."
This question is usually well-intentioned; it often reflects a desire from the client to be an active participant and to drive engagement or other positive behavior from their team. It also usually means the client has some experience with logging bugs. The question about the connection between quality and submitted feedback records may take forms such as the following (a minimal sketch of computing these metrics appears after the list):
- How many feedback records have been submitted?
- How many feedback records should they expect to submit in a given time frame (e.g., by the end of a sprint)?
- How many feedback records are currently open?
- What is the average time to resolve submitted feedback?
- How many feedback records did a similar client create during their implementation?
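To make the quantitative framing concrete, here is a minimal sketch, in Python with a hypothetical `FeedbackRecord` shape rather than any particular ticketing tool, of how these counts and averages are typically derived. Note that nothing in this calculation says anything about the quality of the solution itself, which is exactly the problem explored below.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FeedbackRecord:
    # Hypothetical record shape; real trackers (tickets, cases, issues) have their own fields.
    title: str
    opened: date
    resolved: Optional[date] = None  # None means the record is still open

def feedback_metrics(records: list[FeedbackRecord]) -> dict:
    """Counts and averages of the kind clients typically ask about."""
    closed = [r for r in records if r.resolved is not None]
    avg_days_to_resolve = (
        sum((r.resolved - r.opened).days for r in closed) / len(closed)
        if closed
        else None
    )
    return {
        "total_submitted": len(records),
        "currently_open": len(records) - len(closed),
        "avg_days_to_resolve": avg_days_to_resolve,
    }
```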
While these questions are understandable, they often reflect a misguided attempt to quantify quality, overlooking its inherent complexity. However, an experienced consultant can provide valuable insight into this complexity and guide the project toward more meaningful ways to assess and ensure quality beyond just metrics.
There is no single metric that can correlate with overall “Quality”
As clients look for measurable ways to assess quality, they often turn to feedback records, expecting them to serve as reliable indicators. However, this approach is fundamentally flawed.
When thinking about tracking feedback records as a proxy for quality, the client often assumes that because they are working with an experienced consultant, the consultant should be able to tell them the appropriate number of feedback records to submit. This usually takes the form of the question, "For a project of similar size, how many feedback records do your clients typically submit?" The client wants to know whether they are on track compared to another client, preferably one with whom you have had proven success. While this seems reasonable on the surface, several key issues undermine the validity of the approach.
The biggest problem with this assumption is that the number of feedback records logged over time does not correlate, directly or indirectly, from one project to another. If Client A submits 10 feedback records, then without knowing anything else about Client A's experience, you cannot know whether those 10 records are high-quality feedback. Were they all vague questions? Were they all low-effort enhancement requests? Or were they all clear, concise, well-documented bugs to be fixed?
Additionally, if Client A submitted 10 high-quality, clearly defined feedback records per test script and had a successful implementation, that alone is not enough to say the project succeeded because of those 10 records. The success could have come from any number of factors: an increased budget, a longer available timeline, a different project team, and so on.
Ultimately, comparing projects in this way is an oversimplified approach that fails to consider the broader context and nuances of each unique project.
Goodhart’s Law
The other problem with overemphasizing quantitative metrics is that, over time, users will consciously and subconsciously begin to game the system. This phenomenon, known as Goodhart's Law, is named after economist Charles Goodhart, who observed in a critique of British monetary policy that "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Simply put: "When a measure becomes a target, it ceases to be a good measure."
In a project setting, this means that if you encourage clients to log a specific number of feedback records per test script and attach importance to this number—whether through rewards, consequences, or repeated emphasis—the testers will find ways to meet that target, regardless of the actual quality of the solution. They may divide larger feedback items into multiple smaller records, log low-effort issues, or even submit general questions as feedback just to hit the expected number.
This manipulation is not limited to clients. If, for example, the project team is measured by how quickly they resolve issues or how many issues they close, team members may focus on speed rather than thoroughness. They might rush to close tickets without fully addressing the root cause, leading to reopened issues or problems down the road. The emphasis on meeting a specific metric can detract from the more important goal: ensuring that the solution is robust and meets the client's needs.
Whether this pressure is placed on the client or the project team, the result is the same. Over time, this focus on meeting arbitrary numbers shifts the priority away from true quality and toward satisfying quotas, undermining the overall success of the project.
An Alternative Approach: Incorporating Qualitative Insights
The common thread between the two issues described above is the use of quantitative metrics stripped of the context in which they were created. To combat over-reliance on quantitative metrics and restore that missing context, teams should build intentional space for qualitative evaluation into the project. These qualitative methods foster deeper connections and surface insights into the client's perception of the project's success that cannot be found in statistics alone.
The great news is that adding qualitative reviews does not need to be complex. To implement this approach, consider the following simple steps:
- Step 1: Build Time for Qualitative Review
- Step 2: Ask Perception-Based Questions
- Step 3: Analyze and Document Conversations
- Step 4: Implement Feedback Over Time
Step 1: Build Time for Qualitative Review
Step 1 is setting aside intentional time to ask qualitative questions. This does not have to be a separate meeting or stand-up. Rather, in your standard recurring meetings, where you likely already have time blocked for questions about testing and feedback, add five to ten minutes to ask about the client's impression of quality. By incorporating this time into existing meetings, you create space for more valuable insights, which leads directly into Step 2: asking perception-based questions.
Step 2: Ask Perception-Based Questions
Step 2 is using this time to ask perception-based questions. Some examples of these questions are:
- General Perception Questions: (How do you feel about the build so far?)
- Usability Questions: (Could you do your daily tasks in the new system if we went live tomorrow?)
- Training Questions: (Could you train your co-worker?)
- Quality Weakness Identification: (What do you feel are the areas of weakest quality?)
Ideally, you should put these questions to stakeholders from a variety of backgrounds. User adoption plays a significant role in how stakeholders perceive the success of a project, and ensuring smooth adoption can mitigate some of the subjectivity surrounding quality.
Challenge the testers to share and expand on their concerns while reflecting those concerns back with empathy. If they are frustrated, ask what you can do to help. If they are happy, ask what it is about the current build that they expect to be an improvement over their current system. By asking these perception-based questions, you gather valuable commentary from diverse perspectives that is easily missed when tracking traditional feedback record metrics. The next crucial step, Step 3, is capturing and documenting these insights effectively.
Step 3: Analyze and Document Conversations
Step 3 is actively listening and documenting the client's praise, concerns, and general sentiment. You can capture these informally in meeting notes or formally in a feedback log or sentiment tracker. While tracking the overall mood of the client team, the Project Manager should also document key takeaways and any action items that need follow-up. Documenting these qualitative insights not only provides immediate guidance for the project but also serves as a critical reference for ongoing and future phases. This leads seamlessly into Step 4, where you'll use this documentation to drive iterative improvements.
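If a structured log suits your team better than free-form meeting notes, a sentiment tracker can be as simple as a few fields per conversation. The sketch below, in Python purely for illustration, is one possible shape: the field names and the 1-to-5 sentiment scale are assumptions, not a prescribed format, and a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QualitativeCheckpoint:
    """One entry per conversation; every field here is illustrative, not prescribed."""
    when: date
    stakeholder: str
    department: str
    sentiment: int  # e.g., 1 (very concerned) to 5 (very confident)
    key_quotes: list[str] = field(default_factory=list)
    action_items: list[str] = field(default_factory=list)

# Example entry captured after a sprint-review conversation
entry = QualitativeCheckpoint(
    when=date(2024, 5, 3),
    stakeholder="AP Lead",
    department="Finance",
    sentiment=4,
    key_quotes=["Invoice matching feels faster than our old system."],
    action_items=["Follow up on the approval-routing confusion raised in testing."],
)
```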
Step 4: Implement Feedback Over Time
Step 4 is monitoring the qualitative feedback over time and making thoughtful adjustments based on it. This monitoring pays off both in refining current processes and in reflecting on past progress. As you progress through the project and look ahead to the finish line, you can use these checkpoints to make better, more refined adjustments from sprint to sprint. At the end of the project, you can also point to these checkpoints as evidence that the client was given regular opportunities to raise concerns and share feedback. By monitoring this feedback and adapting accordingly, project teams create an ongoing cycle of improvement, leading to more successful outcomes and greater client satisfaction.
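As one way to monitor this feedback over time, assuming checkpoints like the ones sketched in Step 3, you might track average sentiment per sprint and watch the direction of the trend rather than any single value; remembering Goodhart's Law, treat the trend as a conversation starter, not a target.

```python
from collections import defaultdict
from statistics import mean

def sentiment_trend(checkpoints, sprint_of):
    """Average sentiment per sprint, given a caller-supplied sprint_of(date) mapping.
    Useful for watching direction across checkpoints, not as a quota to hit."""
    buckets = defaultdict(list)
    for c in checkpoints:
        buckets[sprint_of(c.when)].append(c.sentiment)
    return {sprint: round(mean(scores), 2) for sprint, scores in sorted(buckets.items())}
```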
Conclusion
Evaluating project quality is hard! Measuring quality, particularly in software, presents significant challenges because of its inherently subjective nature. Quality assessments can be swayed by individual perspectives and organizational priorities, often reducing quality to a matter of personal or business bias.
Because measuring quality is difficult, it is logical and understandable that clients often try to simplify it into trackable quantitative metrics, in the same way they track time, budget, and scope. However, relying solely on quantitative metrics like counts of submitted feedback records, while useful as one input, can obscure the broader context and leave an incomplete picture of project success.
Incorporating qualitative reviews at regular intervals provides a valuable counterbalance to these metrics. By engaging in meaningful conversations with clients and stakeholders, project teams can build stronger relationships, foster greater trust, and gain a more accurate understanding of perceived quality. This qualitative feedback becomes a critical tool for making informed adjustments, ultimately leading to a more successful project outcome.
Adopting this approach not only enhances client satisfaction but also ensures that the project evolves in alignment with true quality expectations. By integrating qualitative insights with your existing metrics, you create a comprehensive framework that supports continuous improvement and positions the project for a successful go-live.
Ready to elevate your technology implementation projects?
Contact our expert team at Idealist Consulting to discover how we can help you achieve success by balancing quality with measurable results. Whether you're starting a new project or need support with an ongoing one, we're here to ensure your implementation meets your goals.