
Automated vs. Manual QA

Nov 13, 2019 9:05:00 AM


Software is complex stuff. Even relatively straightforward applications that do only a few things can have a dizzying number of possible “journeys” for users to take. Ideally, every one of those journeys is tested under all possible circumstances to ensure the software works as expected and doesn’t crash, pop up useless error messages, or provide wrong answers.

However, we live in the real world, with limited time and resources. There’s a fundamental tension in software development between getting a product out the door and making sure it meets expectations for functionality, usability, usefulness, and quality.

Sometimes, corners are cut. This can mean quality assurance (QA) testing is curtailed, if not eliminated altogether.

It doesn’t have to be this way. Software quality assurance can, and should, be incorporated into the development process, not regarded as an expendable afterthought. But it takes planning, discipline, and practice.

Above all, it requires the adoption of automated testing techniques.

The Science of Software Testing

Software testing was once a haphazard undertaking, executed by the developers themselves. As long as the software worked on the developer’s machine, it was good to go.

This was more or less okay for simple programs, but as software grew in complexity, it became clear that quality testing required more diligence than a typical developer could provide. Software QA then became a profession separate from software development, and different types of testing emerged, such as:

  • Case-based testing - On the basis of the documented software requirements, test scripts are written for each “use case,” or journey through the software to accomplish a task. The idea is that every possible path is accounted for in the test scripts.
  • Exploratory testing - This is more of a “free-form” approach, where the tester clicks around in the software, trying different things and checking for consistency in look and feel, misspellings in text and labels, awkward or overlapping placement of controls, and so on.
  • Regression testing - This type of testing is intended as a sanity check, to make sure a change to one part of the software didn’t break something that was known to be working in another section.

Having a separate software testing team was a step in the right direction, but the approach still had shortcomings. Testing typically didn’t start until the software was “code complete,” meaning the developers were pretty much finished and had moved on to other projects. The testers would do their thing, and when they found bugs, they tossed them back to the developers to fix.

If the team was lucky, it could do several rounds of this, but often, by the time the testers got their hands on the software, the project was already past due and over budget. This gave management two reasons to shorten the testing phase and ship the product anyway, leaving bug fixes for version 2.

Software Testing in an Agile Framework

As development teams migrated to the agile development methodology, it became clear that software testing needed to be rethought. If they left testing until the very end, as with the traditional waterfall development approach, they would forfeit the benefits of the agile framework. If they tested only after each sprint, they would disrupt the regular sprint cadence.

Somehow, software testing had to be interwoven into each sprint. But this would require the testers to run the same test cases over and over again. Optimizing the testers’ time and effort required the adoption of automated testing.

How Automated Testing Works

The principle of operation for all automated testing platforms is simple: For each test, provide the software being tested with a set of defined inputs. Then check the outputs against expected values. If the outputs meet expectations, the software passes.
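
To make this concrete, here’s a minimal sketch in Python using pytest-style assertions. The compute_discount function is a hypothetical stand-in for whatever code is under test:

    # Hypothetical function under test: applies a percentage discount to a price.
    def compute_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    # Each test supplies defined inputs and checks the output against an expected value.
    def test_twenty_percent_off():
        assert compute_discount(100.0, 20) == 80.0

    def test_zero_percent_is_unchanged():
        assert compute_discount(59.99, 0) == 59.99

Run pytest against this file and each test either passes or fails; no human has to inspect the outputs.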

For software such as firmware, hardware drivers, background utilities, and the like, this can be simple and straightforward. Software intended for human interaction, which typically has a graphical user interface (GUI), presents much more complexity for automated testing. Consider a sophisticated application like Microsoft Excel: the buttons, fields, and other controls number in the thousands, to say nothing of the numerous formulas that can be used.

For GUI software, a tester has to decide what scenarios to test, write a script for each test, and then “teach” the testing platform what buttons to click and values to use at each step. In this way, automated testing starts out as manual testing, but once the test is set up, it can be executed repeatedly. And each execution takes much less time than it would take a human tester.
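
As an illustration, here’s roughly what such a script might look like with Selenium, a widely used browser-automation tool, driving a hypothetical login page (the URL and element IDs are made up for the example):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        # "Teach" the platform the journey: open the page, fill the fields, click.
        driver.get("https://app.example.com/login")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("s3cret!")
        driver.find_element(By.ID, "login-button").click()

        # Check the result against the expected outcome.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()

Writing this script is the up-front manual work; once it exists, the same journey can be replayed in seconds on every build.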

At the cost of some up-front manual work, a whole suite of tests can be set up and executed quickly, at any time. Those tests that do take more time to execute (for example, when the software is performing serious number-crunching) can be run overnight, without human supervision.

Furthermore, each test can be iterated with slightly different inputs in each iteration to cover as many cases as possible. This is often impractical in a manual-only testing regime.
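
Test frameworks support this directly. With pytest, for instance, a parametrized test runs the same logic once per row of inputs (reusing the hypothetical compute_discount from the earlier sketch):

    import pytest

    # Hypothetical function under test, as in the earlier sketch.
    def compute_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    # One test definition, many iterations: each tuple is a separate test case.
    @pytest.mark.parametrize("price, percent, expected", [
        (100.0, 20, 80.0),
        (100.0, 0, 100.0),
        (200.0, 25, 150.0),
        (50.0, 100, 0.0),
    ])
    def test_discount_cases(price, percent, expected):
        assert compute_discount(price, percent) == expected

Adding another case is one more line in the list, not another manual test run.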

Manual Testing is Still Relevant

Certain types of tests lend themselves to automation better than others. Automated testing works best when:

  • Negative testing is needed; that is, a certain set of inputs is expected to result in an error message (see the sketch after this list)
  • The test cases are well-defined, with a finite amount of variation possible in the inputs
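
As a sketch of the negative-testing case, here’s a pytest test that passes only when a hypothetical withdraw function rejects bad input with the expected error:

    import pytest

    # Hypothetical function under test: rejects withdrawals larger than the balance.
    def withdraw(balance: float, amount: float) -> float:
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    # Negative test: this input is *expected* to fail, and the test passes when it does.
    def test_overdraft_is_rejected():
        with pytest.raises(ValueError, match="insufficient funds"):
            withdraw(100.0, 250.0)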

Automated testing is especially well suited for regression testing, which is necessarily executed over and over again. Automation can also be applied to stress testing; for example, simulating thousands of users accessing a website simultaneously to see if the system can handle the load.
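
Dedicated load-testing tools do this at scale, but the core idea can be sketched with nothing beyond Python’s standard library (the URL here is a hypothetical staging endpoint):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://staging.example.com/"  # hypothetical endpoint under test

    def one_user(_):
        # Each call simulates a single user requesting the page.
        try:
            with urlopen(URL, timeout=10) as resp:
                return resp.status
        except Exception:
            return None  # connection error, timeout, or HTTP error

    # Simulate 1,000 requests with up to 100 in flight at once.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(one_user, range(1000)))

    failures = sum(1 for status in results if status != 200)
    print(f"{failures} of {len(results)} simulated requests failed")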

Does manual testing still have a role in software QA? Absolutely. Manual testing is still needed for cases that automation doesn’t handle well:

  • Look and feel – An automated testing platform neither knows nor cares whether the buttons align with each other or with their labels. Color schemes, fonts and sizes, overlapping or poorly sized controls, spelling errors, and anything else requiring human oversight are still candidates for manual testing.
  • Simple applications – If an application is simple enough, it might not be worthwhile to go to the trouble of setting up a set of automated tests.
  • Early-stage testing – In the early stages of development, the software may not be stable enough for automated testing. The vast majority of automated tests will fail and may not provide much in the way of useful information.

The major advantage of automated testing is that it handles routine test cases that would be highly time-consuming for testers to execute manually. This leaves more time for them to focus on the kinds of testing that are less suited for automated test platforms. The result is a more comprehensive testing regime that catches more issues earlier in the development process when it’s less expensive to fix them.

Proper software testing is more important than ever. Development teams that can consistently produce high-quality software, on time, and within budget have a competitive advantage over those meeting the delivery date with a product full of bugs. Incorporating automated testing into an agile development approach is the best way to make customers happy.


Topics: Testing QA

Written by Abdul Dremali

Abdul Dremali is a key content author at AndPlus and a driving force in AndPlus marketing. He was also instrumental in creating the AndPlus Innovation Lab which paved the way for the company’s leadership in Artificial Intelligence, Machine Learning, and Augmented Reality application development.
