Thoughts on managing automated tests with frequent releases / versioning

Problem: a group of products that are separate but related, and are periodically released (i.e. versioned).

What is required: automated tests that run against all the different versions, meaning that the current state of our master test code branch should run meaningfully against arbitrary versions of the production builds.

Solutions:

1. Include version detection and an internal listing of features per build. The benefits are simpler tests (if the version list says feature X isn't in this build, the script never checks feature X) and the presence of a 'feature oracle'. The drawback is that it's more complex to implement and maintain: the feature/version matrix has to be correct and up to date, or you risk features going untested (if the oracle says the feature arrived in a later version, or omits the feature entirely) and tests failing (if the oracle says the feature arrived in an earlier version than it actually did).
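A minimal sketch of what this could look like, assuming pytest; the version strings, feature names, and get_product_version() helper are all hypothetical placeholders:

```python
# A sketch of the "feature oracle" approach. The version strings,
# feature names, and get_product_version() helper are hypothetical.
import pytest

# Hand-maintained feature/version matrix: the oracle this option relies on.
# If this falls out of date, features go untested or tests fail (see above).
FEATURE_MATRIX = {
    "2.1": {"login", "search"},
    "2.2": {"login", "search", "export"},
    "3.0": {"login", "search", "export", "bulk_edit"},
}

def get_product_version():
    """Detect the version of the build under test (stubbed for the sketch)."""
    return "2.2"

def requires_feature(name):
    """Skip a test when the oracle says this build lacks the feature."""
    version = get_product_version()
    missing = name not in FEATURE_MATRIX.get(version, set())
    return pytest.mark.skipif(
        missing, reason=f"feature {name!r} not in version {version}"
    )

@requires_feature("export")
def test_export():
    ...  # runs only on builds the matrix says have 'export'
```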


2. Build feature detection into your scripts instead: check for the existence of the link to feature X and act on it only if it exists. The advantages are that if the feature is there it will be tested, and that tests will never fail by trying to exercise a feature that doesn't exist. The disadvantages are that this will be slower, because there is always the extra checking time needed to determine whether the feature exists, and that you won't know when a feature is missing from a version (unless you use both approaches, which of course gets you all the advantages and drawbacks of both).
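A minimal sketch of this approach, assuming Selenium (the URL and link text are placeholders); the key point is using an existence check rather than an oracle:

```python
# A sketch of the feature-detection approach, assuming Selenium.
# The URL and link text are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # hypothetical product URL

# find_elements (plural) returns an empty list rather than raising,
# so it doubles as an existence check. This lookup is the extra cost
# paid on every run, whether the feature is present or not.
links = driver.find_elements(By.LINK_TEXT, "Feature X")
if links:
    links[0].click()
    # ... exercise feature X here ...
else:
    # Feature absent: do nothing. Note the blind spot: we cannot
    # distinguish "absent by design" from "missing because of a bug".
    pass

driver.quit()
```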


3. What you should advocate to the dev team:

• Use predictable element IDs in your UI so that tests are less reliant on the DOM layout (see the locator sketch after this list).
• Design your pages using reusable components so that the page design is more consistent; this reduces the number of patterns the test needs to understand.
• Get your UI developers involved in the test design so that everyone understands the impact of changes.
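To make the element-ID point concrete, a small sketch (Selenium again; the XPath and the ID are invented for illustration):

```python
# Contrast between a layout-dependent locator and a stable ID locator.
# Selenium assumed; the XPath and the 'export-link' ID are invented.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # hypothetical product URL

# Brittle: tied to the DOM layout, breaks when the page structure shifts.
export = driver.find_element(By.XPATH, "/html/body/div[2]/div/ul/li[3]/a")

# Robust: survives layout changes as long as devs keep the ID predictable.
export = driver.find_element(By.ID, "export-link")
export.click()

driver.quit()
```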

And then what the automation engineers should do is:

• Take advantage of element IDs when they are available (and when they are consistent; dynamic element IDs are not particularly useful to a tester).
• Structure your tests to take advantage of page design patterns.
• Allow your tests to be aware of what product version they are running against, and tag your tests with version information (see the sketch after this list).
• Consider negotiating the number of production versions you intend to support. Perhaps there is some middle ground between compatibility with only the latest code and compatibility with arbitrary versions.
• Consider how much of the test suite needs to remain compatible. Some tests may be more crucial than others; it may be enough to commit to compatibility for only a subset of the full suite.
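For the version-tagging idea, one possible shape using pytest (the helper name and version numbers are our own invention, not a standard):

```python
# A sketch of version-aware test tagging with pytest.
# The helper name and version numbers are invented for illustration.
import pytest

def get_product_version():
    """Detect the version of the build under test (stubbed for the sketch)."""
    return (2, 2)

def min_version(*required):
    """Tag a test as meaningful only on builds at or above `required`."""
    return pytest.mark.skipif(
        get_product_version() < required,
        reason=f"requires product version >= {required}",
    )

@min_version(3, 0)
def test_bulk_edit():
    ...  # skipped on 2.x builds, runs on 3.0 and later
```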

Inspired by: http://sqa.stackexchange.com/
