Test smart: how to approach AI and stay sane?

While Woody (the optimist) is exploring new technologies, Owl (the sceptic) is concerned.

The line between human expertise and AI-assisted results is becoming blurred, yet there are ways to befriend the new technology.

On my trip to Trieste, I met a colleague, a software developer, who offered a fresh view of the IT field. Our discussion ended with a classic rant about AI stealing our jobs. Yet is AI really to blame? The cause runs deeper.

If even Gen Z colleagues are concerned about it, I’m losing my initial optimism. Is AI a threat to the industry or a helper that could save us time on repetitive work? There are multiple perspectives on this question.

There is no crystal ball to tell whether developers, QA engineers, and designers will have opportunities for career growth in 5–10 years. Yet certain trends show that we can’t ignore the rapid development of AI tools, and, step by step, everyone will be expected to incorporate them into their daily work routine. However, where is the line between AI capacity and human expertise, and how can we approach AI smartly? Let’s dive deeper.

AI vs. Quality

As someone who cares about the quality of digital products, I believe quality is more than a checkbox: it’s a bridge between technology and human experience. Yet what I observe in the industry is that engineers are increasingly like hamsters running in the wheel of ever-faster development.

AI tools have brought us cosmic velocity. At the same time, that speed can overwhelm quality. Major incidents triggered by AI agents, such as the accidental deletion of databases, are not funny anymore. Whenever we experiment with AI, we should play it safe.

Indeed, there are newcomers in the industry who blindly trust AI agents and copy-paste the outputs without any doubt or critical review. In the long run, this will surely affect the quality of digital products (I believe it already has an impact that no one is comfortable discussing openly).

Should we establish a better understanding of how to use new technology without being blinded by it? Self-education is a must-do for everyone in our field. AI agents will replace humans only in one case: when humans lose their ability to think critically.

AI-reservedness

Reservedness towards new technology is another extreme. I’ve noticed a surprising trend lately. Non-tech companies are slow in adopting modern stacks, and even some tech companies are quite hesitant about AI-driven tools.

Owl asks Woody what he is riding. Woody answers that it is the first AI-powered scooter in the forest.

Indeed, it is hard to trust AI, especially if a company doesn’t have an established security policy. When a company uses AI tools developed by external providers, there is an obvious risk of data leakage. And even if your company is flexible enough to allow AI agents, using a personal account for company code is a no-go.

As a QA engineer, I’m currently exploring AI to help with my own drafting (test cases, charters, bug reports), then refining all of that manually. When I write the prompts, I provide only the necessary context without details.

Yet, I still create the stories and illustrations on my own (including the one you are scrolling through right now), although I use an AI-driven grammar-checking tool. Ironically, I regularly find my content used in AI-generated articles.

I also believe that the most creative ideas, e.g. for testing strategy, are still generated through people's collaboration. However, we might ask AI to suggest the bullet points for a brainstorming session.

To my mind, the next level of AI skill is learning to give an AI tool enough context without compromising security, privacy, or authenticity. If these pillars are under control, we should not worry much.
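As one concrete illustration of that idea, here is a minimal sketch of stripping obvious identifiers from text before pasting it into an external AI tool. The `scrub` helper and its patterns are my own hypothetical example, deliberately naive, not a real data-loss-prevention solution:

```python
import re

# Naive patterns for obvious identifiers; illustration only, not
# a complete or production-grade redaction list.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "url": re.compile(r"https?://\S+"),
    "token": re.compile(r"\b[A-Za-z0-9]{32,}\b"),  # long opaque strings
}

def scrub(text: str) -> str:
    """Replace each match with a placeholder like <EMAIL>."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

# scrub("mail me at a.b@example.com") -> "mail me at <EMAIL>"
```

The point is not the regexes themselves but the habit: run everything through a review step (automated or manual) before it leaves your machine.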

What to delegate to AI?

Being naturally curious about new tools, I’m testing how AI can help in the everyday work of a QA engineer. It saves time and energy on quite a few tasks.

I remember the time when everything was done by hand: test cases were written in a spreadsheet or a test management tool, and code was typed line by line in an automation framework.

Now, as far as I can see, some QA professionals don’t hesitate to delegate these tasks to Gemini, ChatGPT or Claude. All you need is a sharp eye to review the results the AI agents produce.

If you just copy-paste, it is easy to overlook things. Besides, you can always polish the output generated by AI: edit it, add your own points, rearrange it, and so on. And if an AI agent hallucinates and produces incorrect results, you always have the option to discard them.

Now, with the help of AI agents, you can learn any automation framework, write your first tests and integrate them into a CI/CD pipeline in a few hours. Previously, it took much more effort, browsing through tons of tutorials and video courses.
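To give a flavour of what such a first test can look like, here is a minimal sketch in Python’s pytest style. The `validate_password` function is a hypothetical unit under test, invented purely for illustration:

```python
import re

def validate_password(password: str) -> bool:
    """Toy rule: at least 8 characters, one digit, one letter."""
    return (
        len(password) >= 8
        and re.search(r"\d", password) is not None
        and re.search(r"[A-Za-z]", password) is not None
    )

# pytest discovers functions named test_* automatically.
def test_accepts_valid_password():
    assert validate_password("secret123") is True

def test_rejects_short_password():
    assert validate_password("ab1") is False

def test_rejects_password_without_digits():
    assert validate_password("longenough") is False
```

Locally or in a CI step, running `pytest -q` would discover and execute these `test_` functions; wiring that one command into a pipeline is exactly the kind of boilerplate an AI agent can draft for you.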

Surely, I would delegate repetitive tasks, such as generating test cases and automating regression tests, to AI. However, I still believe that exploratory testing is done best by humans.
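To make “repetitive” concrete, here is a sketch of a table-driven regression check. The `normalize_username` function and its case list are hypothetical, but this is exactly the kind of case table an AI agent can draft and a human can review line by line:

```python
def normalize_username(raw: str) -> str:
    """Hypothetical unit under test: trim whitespace and lowercase."""
    return raw.strip().lower()

# Each tuple is (input, expected): easy to eyeball during review,
# and easy for an AI agent to extend with more cases.
REGRESSION_CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
    ("\tDave\n", "dave"),
]

def run_regression():
    """Return a list of (input, expected, got) for every failing case."""
    failures = []
    for raw, expected in REGRESSION_CASES:
        got = normalize_username(raw)
        if got != expected:
            failures.append((raw, expected, got))
    return failures

if __name__ == "__main__":
    assert run_regression() == [], run_regression()
```

The human’s job shifts from typing the cases to judging whether the table actually covers the risky inputs, which is where expertise still matters.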

Yes, AI can guide you in creating a test charter, for instance. However, it can’t replace your human empathy and critical thinking when evaluating the product. Consequently, it might miss edge cases and UX inconsistencies that a human eye and experience would catch.

Nevertheless, AI might be a great helper in generating ideas, user scenarios, user stories, and finally, test cases and automated tests. You can also use AI as a brilliant assistant in everyday learning. Anyhow, an in-depth review by a human expert is still required.

Instead of being sceptical, make AI your ally. It saves a lot of time for senior developers, QA engineers and designers, and it can be a great helper on the learning journey for mid-level and junior specialists. All in all, though, you need to stay critical of the results an AI agent delivers. Blind copy-pasting can lead to a rookie mistake and cause serious damage to a business.

Balance is vital on the journey with AI. Don’t over-rely on it, but if you underuse it, you won’t be moving fast enough (like that hamster in a wheel 🙂).

You may check my LinkedIn page if you are curious about my background. With 8+ years in the tech industry, I’ve evolved from a tester into a quality strategist and engineer. I’m ready to communicate with teams seeking guidance in enhancing product quality and testing. At this very moment, I’m looking for a new role as a QA Analyst, QA Engineer or QA Lead.

Illustrations: by me (Apple Pencil, iPad, and no AI 🙂)

Resources:

  1. Danielle Abril, Tech companies are cutting jobs and betting on AI. The payoff is far from guaranteed: https://www.theguardian.com/technology/2026/apr/06/tech-layoffs-ai-work
  2. Medhani Ranaweera, How I Transitioned from Traditional QA to AI-Assisted QA (And What I Actually Use Daily): https://medium.com/@medhani.ranaweera/how-i-transitioned-from-traditional-qa-to-ai-assisted-qa-and-what-i-actually-use-daily-14075b1c2fd5
  3. Niar, The End of Traditional Testing? How AI Programming Will Force QA to Evolve in 2026: My First Post After 30 Days of Understanding Development: https://medium.com/@niarsdet/c953590ecd3e


Test smart: how to approach AI and stay sane? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
