Tester: Understands Technology, Expert at functional testing & finding defects | $100
QA: I can assure the overall product Quality | $1000
Automation Engineer: I can automate product ‘regression’ suite | $10000
SDET: I can build tools for Testing & Automation | $100000
Start Up-1: Advertising a script-less automation tool | $1000000
Start Up-2: Modelling + XML + XSDs + Automation + Coverage | $10000000
Start Up-3: Data Analysis, AI-driven Test Design, ML driven Testing | $100000000
Any idea which of these 'identifies the maximum defects'? Adopting technology is important, but blind adoption is different from the logical application of technology. From what I see, everyone wants to fit a tech term somewhere into the test process and call it tech-driven QA. But how do you measure the ROI from the client's perspective? Do you compare Super-Tech QA vs. Functional QA? Or are super-tech tools cheaper than the required manpower? Nah! Everyone says "These are all long-term benefits, no short-term advantage," but then eventually forgets to measure the ROI in the long run as well 😛 On the other hand, amid all this fuss we have somewhat neglected the original Tester. Differ in opinion? Please comment…
Note: The $$$ might vary 🙂
Gurprasad Singh | Lead Solution Architect in Test at ImpactQA IT Services
Is this a justification for all the manual testers who are not growing into AI, data analysis, or even automation engineering? I hope not. Future automation also extends beyond the testing domain, something we sometimes neglect; the world is much bigger 🙂
Jeff Nyman | Quality and Test Specialist
A key point in all this is how specialist testers argue their case, particularly as technology-based testing (mostly automation) is seen as the better value-add for achieving quality.
Ah! “Achieving quality.” Well, what happens when people understand that quality is never a single, empirical fact; it’s always a working hypothesis. Our understanding of it is an interpretation and an assessment of value. Value is a social judgment, and different people value things differently.
To have something called "quality assurance" (which cannot be a team, but must be a function), you must have a shared understanding and expression of decisions related to quality. That shared understanding has to be empirical and demonstrable. This is about having clear sight lines through all artifacts that reflect decisions made about how ideas become business rules and how those are turned into software that provides value.
Can tooling provide those clear sight lines? Can machine learning provide those clear sight lines? What aspects of this (if any) can only human engagement provide? But also, with that human engagement, what are the necessary skills to provide it? And if that falls to the specialist tester — well, what does that mean?