Newsletter Subject

Is fear-driven testing holding your software quality back?

From

ministryoftesting.com

Email Address

hello@ministryoftesting.com

Sent On

Wed, Aug 7, 2024 12:04 PM

Email Preheader Text

Discover strategies to find a way forward in imperfect circumstances by Jose Carrera | Fear-driven t

Discover strategies to find a way forward in imperfect circumstances

by Jose Carrera | [Read online at Ministry of Testing]

Fear-driven testing (FDT) is an accidental approach to software testing that arises when the people involved in software quality activities (QA professionals, developers, business analysts, and so on) carry out their tasks mainly out of fear that defects will escape and reach production. This can happen for several reasons, such as pressure from the business, unfamiliarity with the domain, or hard deadlines. Another key aspect is how quality is perceived by the team and the business: Is quality ownership shared among the team? How are quality gates applied? If the scenario you are facing is QA-centred, without proper involvement from other disciplines, QAs end up approaching testing with the fear of being blamed if a bug is not caught before production.

FDT can be very detrimental to a software development project and undermines software testing best practices. It not only impacts our ability to continuously deliver value to our customers but also affects our team's morale and satisfaction, as our practices create extra work without much value. Slower feedback due to longer test execution cycles, excessive focus on end-to-end testing, and quality ownership centralised on QAs are some examples of how it affects our daily work. Usually, teams don't even realise they are in this position. Being aware of FDT is therefore the first step towards properly understanding its impact and working to move away from it.

👋 TestBash is our annual conference, happening in September in Brighton, UK. As we like to say, our network is your network, and we'd love for you to join us. We will have two days of learning and community as testing professionals share what they know about testing, AI, and more! [Explore the TestBash Experience]

The Cambridge Dictionary defines fear as "an unpleasant emotion or thought that you have when you are frightened or worried by something dangerous, painful, or bad that is happening or might happen". In software development, we might be worried or frightened about a future release or, worse, acting on fear because of traumatic experiences with earlier releases. If we get it right, fear can be a powerful fuel that keeps us alert and focused, helping us spot where things might go wrong and what requires extra attention. The problem begins when fear is our only guide, making us overlook other factors such as risk analysis, end-user understanding, system design, and how to simplify and facilitate our testing.

In this article, we'll explore why some teams and companies fall into this scenario. Then we'll discuss the symptoms of FDT, its impact, and finally, how to improve the situation.

Understanding the causes of FDT

Understanding the root cause of an issue is a common approach in software development, and similarly, we can understand FDT by identifying some of the reasons we might find ourselves in this position. Once we understand those areas, we can analyse their impact and how to move forward. In this section, we'll describe some issues we have observed and how, from a quality assurance perspective, they fuel the fear-driven approach.
Little domain knowledge

Most people in any software development role (developers, testers, business analysts, etc.) will at some point face the challenge of working in an unfamiliar business domain. As we join a new project or company, our domain knowledge is expected to grow over time, allowing us to make better decisions. However, a lack of domain knowledge can quickly become a problem when team members are constantly changing, communication is inefficient, and existing documentation is poor, making it harder for new joiners to become comfortable with the new area.

Problem
A lack of domain knowledge within the team leads directly to a fear-driven approach. The impact of changes won't be properly evaluated, resulting in test execution cycles bloated with unnecessary scenarios.

Solution
Efficient documentation and knowledge sharing. Domain knowledge fuels and empowers good decision making: it gives team members an understanding of how each piece of functionality behaves and how it interacts with different parts of the system. With that understanding, we can do efficient risk analysis and targeted scoping of testing.

Unfamiliarity with the architecture and code base

Designing and executing an efficient test strategy that runs from project inception to features being released to end customers requires proper validation to be built at each step of the way, from unit tests to end-to-end scenarios. Quality needs to be part of the team's daily business, and testing across all levels of the testing pyramid is crucial to get early feedback on changes. When there is little collaboration between testers and developers, there is a lack of visibility and understanding of what is actually being tested at each level, causing work to be repeated unnecessarily. A feature that can be fully tested at the unit and component levels ends up being retested from an end-to-end perspective, where maintenance costs are usually higher.

Problem
Little or no collaboration between testers, developers, and other functions creates silos of knowledge, leading to duplicated work across different levels and a lack of trust.

Solution
Encourage collaboration, pairing, and knowledge sharing across functions. Get testers to work on lower-level tests, developers to work on end-to-end tests, and team members to work together on different testing types, such as performance testing.

Lack of context on how the parts work together

Sam Newman [1] defines microservices as independently releasable services that are modelled around a business domain, where a service encapsulates functionality accessible to others via networks. For example, one service might handle inventory, another order management, and yet another shipping, but together they might constitute an entire ecommerce system. As the number of applications built using a microservices architecture increases [2] [3], it is essential to understand each service's role in order to plan accordingly. What services do we rely on? How can we validate our integration with them? How can we identify if we are introducing a breaking change? Do we have the necessary tools in place? Without answers to these kinds of questions, we find ourselves in a position where every change needs to be treated as "high risk", and we end up adding extra layers of validation to feel confident we are not breaking any relevant flow.

Problem
Lack of understanding of internal or external integrations.

Solution
Efficient documentation and communication of how each part integrates with the others. Implementation of appropriate tools (e.g. API testing tools, test management tools) to track dependencies, manage the contract between parties, and manage API versions. A lightweight consumer-side contract check is sketched below.
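To make "managing the contract" concrete, here is a minimal consumer-side sketch in Python. All names are hypothetical (the `parse_inventory` function and the pinned payload are invented for illustration): a pinned copy of the response format the provider team has agreed to serve is checked against our own parsing code, so a contract change fails fast, without any integrated environment. A dedicated contract-testing tool can take this much further; this only illustrates the principle.

```python
import json
import unittest

# Pinned copy of the payload shape the inventory team has agreed to serve.
# (Hypothetical example; in practice this might live in a shared contract repo.)
AGREED_INVENTORY_RESPONSE = json.dumps({
    "sku": "ABC-123",
    "available": 42,
    "warehouse": "BN1",
})


def parse_inventory(payload: str) -> dict:
    """Our consumer-side parsing of the inventory service's response."""
    data = json.loads(payload)
    return {"sku": data["sku"], "in_stock": data["available"] > 0}


class InventoryContractTest(unittest.TestCase):
    def test_consumer_can_parse_agreed_payload(self):
        # If the agreed contract changes, this fails long before any
        # end-to-end environment is involved.
        result = parse_inventory(AGREED_INVENTORY_RESPONSE)
        self.assertEqual(result["sku"], "ABC-123")
        self.assertTrue(result["in_stock"])


if __name__ == "__main__":
    unittest.main()
```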
Inadequate service isolation

One of the main benefits of microservices is that they should allow us to validate each service independently. To achieve this, we need to understand how we communicate with other integrated services and plan our tests accordingly, so we can determine what functionality can be validated at each level. Good collaboration between teams, proper documentation of service contracts, and tooling that allows stubs to be used at the correct test levels are fundamental. Our testing must be planned and designed to take full advantage of this architecture; otherwise, we won't benefit from it.

Problem
Inability to validate services in isolation.

Solution
Collaboration between teams to allow proper mocking (or alternative solutions) to be introduced, so that different tests can be added at the appropriate level.
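As a minimal sketch of validating behaviour in isolation, assume a hypothetical `place_order` function that depends on a remote inventory service through an injected client. Replacing that client with a stub (here, Python's `unittest.mock.Mock`) lets the order logic be exercised at unit or component level, with no network and no shared environment.

```python
from unittest.mock import Mock


def place_order(sku: str, quantity: int, inventory_client) -> str:
    """Hypothetical order logic that depends on a remote inventory service."""
    stock = inventory_client.get_stock(sku)
    if stock < quantity:
        return "rejected: insufficient stock"
    inventory_client.reserve(sku, quantity)
    return "accepted"


def test_order_rejected_when_stock_is_low():
    # Stub the remote service: deterministic, fast, no environment needed.
    inventory = Mock()
    inventory.get_stock.return_value = 1

    assert place_order("ABC-123", 5, inventory) == "rejected: insufficient stock"
    inventory.reserve.assert_not_called()


def test_order_accepted_and_stock_reserved():
    inventory = Mock()
    inventory.get_stock.return_value = 10

    assert place_order("ABC-123", 5, inventory) == "accepted"
    inventory.reserve.assert_called_once_with("ABC-123", 5)
```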
Hard deadlines

Deadlines are common in most software development projects, for reasons like contractual agreements, commercial dates (e.g. Christmas, Black Friday), and competition.

[Image: the iron triangle of project management. Cost, Time, and Scope at the corners of a triangle, with Quality at the centre]

Under these circumstances, time is non-negotiable, so to avoid sacrificing quality we need to ensure that testing happens continuously throughout the entire project without being compromised by scope increases. On its own, a fixed deadline already raises concerns for most team members. From a testing perspective, it often means testing time may be cut short, making it crucial that validation is in place across all levels as early as possible. If testing is treated as a second-class citizen, the risk of an inefficient and chaotic test stage at the end increases significantly.

Problem
Testing limited to the final stages and executed under limited time.

Solution
Testing happens throughout the entire software development lifecycle. Automation is in place at different levels, and a continuous testing approach, including exploratory and acceptance testing, starts as soon as features are available.

Pressure from management

Dealing with the pressure of an upcoming release is part of our job, but things can turn sour quite quickly when certain practices increase that pressure, such as:

- Isolated testing at the end of a release
- Lack of coverage across different testing levels
- No team ownership of quality
- Unclear priorities

Under these circumstances, QAs are often seen only as bearers of bad news, and the issues they raise become a bottleneck. Another frequent problem is treating testing as a trivial activity that anyone can perform, leading to poor test efficiency, insufficient coverage, and poorly described issues that stand in the way instead of providing clear direction.

Problem
No shared team ownership of product quality, leading to testing being seen as a bottleneck.

Solution
Quality treated as a first-class citizen, with everyone responsible for it. Testing happens throughout the entire project, and issues are discussed and prioritised as early as possible.

Inefficient test data management

Managing test data is a common challenge in software development, especially when it requires collaboration between multiple teams owning different services. It is the availability of relevant data that lets us exercise realistic customer flows and increase our confidence that the software behaves as expected. A lack of test data all but guarantees that defects are missed, feeding the cycle of fear and creating further pressure and delays in subsequent test cycles.

Problem
Inability to identify and manage relevant test data.

Solution
Identify test data requirements as soon as possible, and ensure efficient mechanisms are in place to provide test data across all levels; one common pattern is sketched below.
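One common pattern for making relevant data available early is a small test data factory: it builds valid records with safe defaults, so each test states only the fields that matter to the behaviour under test. A minimal sketch, assuming a hypothetical `Customer` model:

```python
from dataclasses import dataclass
from itertools import count

_next_id = count(1)


@dataclass
class Customer:
    # Hypothetical model, used only to illustrate the pattern.
    id: int
    name: str
    country: str
    marketing_opt_in: bool


def make_customer(**overrides) -> Customer:
    """Build a valid customer with safe defaults; callers override only
    the fields relevant to the scenario being tested."""
    defaults = {
        "id": next(_next_id),
        "name": "Test Customer",
        "country": "GB",
        "marketing_opt_in": False,
    }
    defaults.update(overrides)
    return Customer(**defaults)


# Usage: the data each test depends on is explicit; everything else is defaulted.
opted_in_customer = make_customer(marketing_opt_in=True)
german_customer = make_customer(country="DE")
```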
Limited test environments

Testing can only be effective and provide valuable feedback throughout the entire development cycle when it is executed in adequate test environments, from the tests we run on our local machines to the ones executed in the different stages of our continuous integration and deployment pipelines. Restrictions will always exist: a limited number of instances, mocks instead of full integration, limited test data, third-party dependencies, and so on. These issues can hinder our ability to extend automated check coverage, increasing the need for manual checks. More importantly, when a test environment's purpose is not clearly defined, teams start to believe that the application can only be safely tested in a fully integrated environment, leading to inefficient retests across all environments.

Problem
Test environments that are unavailable or not configured according to the purpose of each level of testing.

Solution
Specify the requirements for each test environment, define its purpose, and plan which sets of tests are to run in each environment.

Test duplication

Test duplication can be beneficial or harmful. In some cases, exercising a piece of functionality across different test levels (unit, component, integration, UI) enhances coverage, as each level identifies different issues. However, harmful duplication increases testing time without much benefit:

- Testing the behaviour of services we consume (internal or external), extending our testing scope beyond what we have built and repeating tests already done by the service owners
- Retesting through the UI application logic that has already been tested at a lower level
- Manually testing scenarios already covered by automation
- Creating end-to-end tests across multiple teams, with different teams testing the same flows

Problem
Repeating the same tests across different teams, or retesting functionality at a higher level when it is better covered at a lower level.

Solution
Identify application boundaries and implement automated checks at the appropriate levels.

Endless testing

When driven by fear, test scope tends to be larger than required. Since there is a lack of trust in the previous test stages, those in charge of planning the next iteration tend to be overly cautious, resulting in errors like:

- Re-running previously executed tests that haven't been impacted by recent changes
- Setting unrealistic criteria for considering a test stage complete, for example, no new issues raised regardless of severity
- Requiring fixed issues to be retested by multiple parties

Problem
No clear, well-defined exit criteria for the testing activity.

Solution
Define clear and realistic exit criteria, together with continuous analysis of changes when planning the next test iteration.

Overfocus on end-to-end testing

According to the ISTQB Glossary [4], end-to-end testing is "a type of testing in which business processes are tested from start to finish under production-like circumstances". Usually driven from the UI, this testing is conducted in an environment as close as possible to production, which is extremely valuable and can fill the gaps left by previous testing levels. Problems begin when we lack trust in, or understanding of, how the lower-level tests work; as a consequence, we start assuming that more end-to-end coverage equals more confidence in the application. By their nature, end-to-end tests are time-consuming, whether performed manually or automated. From test data management to coordinating deployments and scenario setup, they always mean longer feedback cycles. Balancing what needs to be tested at this level is therefore highly important.

Problem
Misunderstanding lower-level tests' ability to identify issues, and assuming that only end-to-end tests are capable of ensuring the application behaves as desired.

Solution
Limit end-to-end tests to the smallest number possible, ensuring efficient coverage is provided by other levels of testing, as the sketch below illustrates.
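To illustrate pushing a check down the pyramid, consider a hypothetical discount rule that is currently re-verified through the UI for every boundary case. Covering the boundaries at unit level takes milliseconds and leaves the end-to-end suite needing only a single smoke path through checkout. A sketch using pytest (the rule itself is invented):

```python
import pytest


def discount(order_total: float) -> float:
    """Hypothetical pricing rule: 10% off orders of 100 or more."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total


# Every boundary is exercised here; none of these cases needs to be
# re-driven through a browser.
@pytest.mark.parametrize("total,expected", [
    (99.99, 99.99),    # just below the threshold
    (100.00, 90.00),   # exactly on the threshold
    (250.00, 225.00),  # well above the threshold
])
def test_discount_boundaries(total, expected):
    assert discount(total) == expected
```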
Bugs raised instead of team communication

Bugs and change requests are the formal way QAs communicate findings to their teams, usually through a defect management tool like Jira. They are a useful mechanism for facilitating conversations with other parties, triaging issues, and determining their priority and severity. The problem starts when bugs become a measure of "testing effectiveness": a test cycle starts, and managers steer conversations through stats like the number and severity of issues raised. In this context, it is also common for people to feel that their findings won't be looked at unless they are logged as issues, leading to more invalid defects being raised. Encouraging people to raise bugs before asking questions and investigating behaviour leads to defects with several problems, such as:

- Lack of information
- Wrong severity or priority
- Invalid issues

Problem
Bugs treated as the only way to communicate and discuss findings or problems.

Solution
Encourage earlier communication to clarify how functionality should behave, document common questions, and review defects and act on them in a timely manner.

Multiple reviews and approvals for testing artefacts

Another indicator of fear-driven testing is a lack of trust in individual teams to ensure their service's correctness and the safety of its integrations. Activities such as test planning and design are then put through a sequential list of steps, where nothing is considered complete without external reviews and approvals. The team's ability to specify and plan which scenarios are part of a given test cycle is frequently questioned, in an attempt to oversee what is tested and how it will be performed. As you can imagine, this increases the time that needs to be invested and the number of people involved, again creating extra work that produces very little value.

Problem
Lengthy approval processes, lack of trust, and lack of ownership.

Solution
Implement a more lightweight artefact review process that encourages ownership and delegates responsibility.

Excessive documentation

Software testing documentation comes in various forms: test plans, test cases, test reports, etc. Its intent is to give the team information on how we are approaching quality, what is covered by our tests, and what results have been achieved. Through review, we can improve our approach and provide more effective, reliable information to stakeholders. Under a blame culture, however, documentation efforts increase as a defence mechanism. Documentation starts to be used as a tool to show that the "process" was followed and reviewers approved the plan, and thus that we are not the ones to be blamed for an escaped defect. For example, a set of automated end-to-end tests might be built and run as follows:

- Cucumber as the BDD framework, to ensure scenarios are readable and traceable to user stories
- Traceability between tests and stories published as documentation
- Test pipelines publishing HTML reports after each run
- Automated test results included as part of the overall test execution cycle
- A final quality summary report added to the wiki for each release

Problem
Documentation repeated across multiple places.

Solution
Reduce and simplify the artefacts that are maintained manually. Automate documentation where possible, such as release notes; a small example of this idea follows.
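As one example of automating documentation rather than copying results by hand, the sketch below turns a machine-generated test report into a short summary that a pipeline could publish. It assumes pytest's standard `--junitxml` output; the summary format itself is invented.

```python
import xml.etree.ElementTree as ET


def summarise_junit(path: str) -> str:
    """Turn a JUnit XML report (e.g. from `pytest --junitxml=report.xml`)
    into a short summary, instead of maintaining one manually."""
    root = ET.parse(path).getroot()
    # pytest may emit either a <testsuite> root or a <testsuites> wrapper.
    suite = root if root.tag == "testsuite" else root.find("testsuite")
    tests = int(suite.get("tests", 0))
    failures = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    skipped = int(suite.get("skipped", 0))
    passed = tests - failures - skipped
    return (f"Test summary\n"
            f"- Passed: {passed}\n"
            f"- Failed: {failures}\n"
            f"- Skipped: {skipped}\n")


if __name__ == "__main__":
    print(summarise_junit("report.xml"))
```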
The impacts and consequences of not dealing with FDT

Slower feedback

One of the reasons we test is to provide our team and stakeholders with valuable information during the different stages of the software development lifecycle, allowing for better-informed decisions. As the previous sections show, fear-driven testing adds unnecessary tasks, inflating scope and time and slowing feedback cycles.

Team dissatisfaction

How we work as a team, and our ability to influence our processes, is an important factor in team satisfaction. Fear-driven testing hurts team morale and satisfaction in multiple ways:

- A blame culture and fear of escaped defects increase stress, making even daily standup meetings stressful
- Slower feedback hampers our ability to release, making work completion painful and time-consuming
- Ownership is eroded by the excessive number of reviews and approvals
- People feel ineffective, spending time on activities that provide little value, like replicating test reports in different places

QA becomes a bottleneck

As the focus shifts to the fear of missing defects, QAs are put into a central position where they become the gatekeepers. Everyone feels the pain of waiting for testing results, but at the same time no one wants to own or take responsibility for approvals. Combined with factors like a lack of application understanding, little trust in lower-level tests, an excessive focus on end-to-end testing, excessive documentation, and lengthy approval processes, every change undergoes a time-consuming process in which risk and impact are poorly assessed.

How to minimise the impacts of fear-driven testing

As we said at the beginning, being afraid or worried about a possible outcome is not a problem, as long as we manage to understand the reasons behind it and act to prevent or minimise its impact. The first step is recognising that our testing is being led by fear. Among the problems we have described, some can be tackled by the QA team, like test duplication and an overfocus on end-to-end testing, while others, like the number of approval layers or excessive documentation, require a wider team effort. A general approach involves:

- Identifying which activities and processes are a result of acting on fear instead of information
- Breaking down problems and identifying the changes we need to make to our testing or processes; ideally those tasks should be as small as possible, so they are easy to act on
- Prioritising tasks and tagging who needs to be involved in each one, allowing the team to address the most problematic areas first
- Implementing the identified tasks, technical or process-oriented, separating those under the team's control from those needing further layers of communication

Finally, this needs to be part of a continuous improvement cycle: it won't be finished in one iteration, and it won't deliver results unless we keep monitoring and evaluating the effectiveness of our actions. Software quality needs to become part of the company's day-to-day thinking. From senior management to junior testers and developers, everyone has a role to play in increasing confidence in our software development lifecycle and in our ability to ship software that delivers value to our customers.

To Wrap Up

Fear-driven testing is a dysfunctional testing approach that can slowly creep into a team's ways of working, or even take hold at a wider company level. Letting our work be driven by the fear of defects escaping to production, or of getting the blame for an incident, will not only affect our ability to release; it will also harm the team's mental health, and above all it will never guarantee that no issues escape.

Moving away from fear-driven testing

Getting the right amount of testing performed at the right level and within the desired time is not an easy task. Quality needs to be everyone's responsibility; then teams will be better equipped to increase confidence and provide meaningful information to stakeholders in a timely manner. Ensuring that risk analysis is a priority, encouraging a culture of domain knowledge sharing, and pushing for an early and continuous testing approach at all levels are valuable tools for preventing fear-driven testing.

"I found a community in the Ministry of Testing, a form of belonging which helped me grow personally and in my career." — Kim Knup

[Upgrade your software testing career with Ministry of Testing]

Copyright © 2024 Ministry of Testing, All rights reserved.
Ministry of Testing, 19 New Road, Brighton, East Sussex BN1 1UF, United Kingdom
