
How Developers Really See QA (And Why Both Sides Are Getting It Wrong)

A CTO and founder shares his honest, unfiltered experience on the dev-QA relationship — the blame culture, the automation lie, incentive misalignment, and what actually needs to change.

Jitu Khubchandani · CTO & Founder, QualityKeeper.ai · May 12, 2026 · 8 min read

A developer who got tired of the blame game and built a tool to end it.


There's a conversation that happens in almost every tech company, usually two days before a sprint ends or one hour after a client reports a production issue. It involves a developer, a QA person, a Jira ticket, and a lot of unspoken frustration.

I've been in that conversation more times than I can count. And after years of building products, watching QA automation cycles fail in slow motion, and eventually building QualityKeeper — a no-code automated testing platform — I think I finally understand why the developer vs QA relationship is so broken. And more importantly, what it would take to fix it.


The Honest Truth: Developers Don't Respect QA the Way They Should

Let me start with something most developers won't say out loud.

When I first started as a developer and QA found a bug in my code, my gut reaction wasn't gratitude. It was closer to mild annoyance. Not because I thought they were wrong, but because I saw QA as people who were looking for problems rather than helping build something better.

That perception hasn't changed much across the teams I've worked with — and I think I now understand why it persists. A lot of QA professionals are operating under enormous pressure from management to justify their existence through volume. How many bugs did you raise this month? How many Jira tickets are in your name?

When your value is measured in ticket count, you find tickets. Even if those tickets are about a button being 2px off on a screen that no B2B client will ever care about.

Meanwhile, the critical API integration is broken. The e2e flow fails silently under specific conditions. And that's what eventually blows up with a real client.


The Incentive Problem Nobody Talks About

In most companies I've worked at, QA performance was quantified by Jira activity — bugs raised, tickets closed. Not by system stability. Not by release confidence. Not by how much they helped developers understand and prevent entire classes of issues.

This creates a deeply misaligned incentive structure. QA avoids taking ownership of production issues by flooding the backlog with low-priority tickets. Developers avoid taking ownership by not writing unit tests. Management avoids taking ownership by pointing at both teams.

The result? When something breaks in production — and it always does — everyone blames everyone. Developers say QA missed it. QA says they flagged 47 other things and had too many tickets. Management says the process should have caught it.

Nobody is wrong. Nobody is right. And nothing changes.


The Automation Lie

Here's a pattern I've seen play out at company after company, almost like clockwork:

A company hires a QA engineer — sometimes a whole team — specifically for test automation. There's excitement. There are presentations about automated test coverage percentages and CI/CD pipelines.

Six months later, they're spending 90% of their time on manual testing.

Why? A few reasons that compound on each other:

Deliverable pressure. Manual testing produces visible, reportable output every sprint. Automation takes weeks of setup before it produces anything a manager can point to in a standup.

Skill gaps. Many QA professionals don't have strong enough coding backgrounds to build and maintain automation frameworks at the speed a product evolves. This isn't a criticism — it's just reality. Writing good automated tests for a complex product is genuinely hard.

Churn. A QA engineer leaves. The next one has their own preferred tools, their own framework opinions. The old system gets deprecated. The automation coverage resets to near zero. Repeat every 18 months.

This is actually the core problem QualityKeeper was built to solve. Not to replace QA, not to threaten anyone's job — but to break the cycle where automation stays a promise instead of becoming a practice. The idea is simple: build a no-code testing tool where a QA lead, a product manager, even a CTO can record interactions and generate test coverage without the whole thing collapsing when someone resigns or the codebase shifts. If you want the tactical version of this — what an actual no-code rollout looks like — my co-founder wrote the honest guide to automating website testing without coding.
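To make the record-and-replay idea concrete, here is an illustrative sketch (not QualityKeeper's actual implementation — all names here are invented) of the underlying pattern: recorded interactions become plain data, and the data replays as a repeatable test.

```python
# Illustrative sketch of record-and-replay testing: recorded user
# actions are stored as plain data, then replayed against the app.
# FakePage and the recorded steps are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Step:
    action: str   # "goto", "fill", "click", "expect_text"
    target: str
    value: str = ""

# What a non-coder "records" — just data, no framework knowledge needed.
recorded_flow = [
    Step("goto", "/login"),
    Step("fill", "#email", "user@example.com"),
    Step("fill", "#password", "hunter2"),
    Step("click", "#submit"),
    Step("expect_text", "#welcome", "Dashboard"),
]

class FakePage:
    """Stand-in for a real browser driver (e.g. Playwright or Selenium)."""
    def __init__(self):
        self.url = ""
        self.fields = {}
        self.text = {"#welcome": ""}

    def goto(self, path): self.url = path
    def fill(self, sel, val): self.fields[sel] = val
    def click(self, sel):
        # Pretend the app logs the user in when the form is submitted.
        if sel == "#submit" and "#email" in self.fields:
            self.text["#welcome"] = "Dashboard"

def replay(page, steps):
    """Execute recorded steps in order; raise on any failed expectation."""
    for s in steps:
        if s.action == "goto": page.goto(s.target)
        elif s.action == "fill": page.fill(s.target, s.value)
        elif s.action == "click": page.click(s.target)
        elif s.action == "expect_text":
            actual = page.text.get(s.target, "")
            assert s.value in actual, f"{s.target}: wanted {s.value!r}, got {actual!r}"

replay(FakePage(), recorded_flow)
print("flow passed")
```

The design point: because the test artifact is data rather than framework code, it survives churn — a new QA lead can read, edit, and re-record flows without inheriting a predecessor's framework opinions.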


What Good QA Actually Looks Like

In all my years of working across different products and teams, I've met very few QA professionals who operated the way I think QA should operate. But the ones who did — they were invaluable.

What separated them wasn't technical skill, though that helped. It was their orientation toward the system rather than toward the ticket queue.

A good QA person, in my experience:

  • Understands the product deeply enough to prioritize what actually matters — e2e flows, API behavior, state transitions — over cosmetic issues
  • Helps developers understand why something broke, not just that it broke, so similar issues don't recur
  • Builds a framework for how testing should happen across the team, rather than being a one-person bug-finding operation
  • Treats production stability as a shared responsibility rather than a blame-deflection exercise

The difference between a good QA and an average QA isn't how many bugs they find. It's whether they help the company build a culture of quality, or just a backlog of issues.


The Dev-QA Disagreement Nobody Resolves Well

When a developer and a QA person disagree about whether something is a bug, who wins?

In my experience: whoever the sprint timing favors.

If there's time in the sprint, the ticket gets fixed. If there isn't, it goes to the backlog where it will age gracefully until it either bites a client or gets closed as "won't fix."

This isn't a process failure. It's a prioritization failure rooted in the fact that developers and QA often have fundamentally different definitions of what matters. Developers — especially in B2B — care about system integrity: are the APIs working? Are the hooks firing? Does the e2e flow complete without silent failures?

QA, incentivized to raise tickets, often catches the visible stuff: responsiveness, UI inconsistencies, layout breaks on specific screen sizes. In consumer apps, that matters. In most B2B applications, a client will tolerate a slightly misaligned button. They will not tolerate a broken data sync.

This misalignment is never explicitly addressed in most teams. It just creates low-grade friction that compounds into a genuine cultural rift over time.


The Misconception That's Actually Slowing Teams Down

If I had to name the single most damaging misconception developers have about QA, it's this: that automation is something QA will eventually figure out, so developers don't need to be involved.

This leads to a kind of passive expectation — QA will automate eventually, so we'll just let them work on it. Meanwhile, the QA team is stuck in the manual testing loop because the product moves too fast, the framework keeps breaking, and nobody with enough coding context is helping them.

The result is months of effort that produce nothing deployable. And when it collapses, everyone points at QA's "rigidity" about tools or frameworks.

The stereotype I want to break is this: automation isn't hard because QA people are bad at it. It's hard because it's been set up as QA's problem alone, when it should be a shared engineering concern.


The Reframe That Changes Everything

If you take nothing else from this post, take this:

QA people are not responsible for developer mess.

They are not there to catch the unit testing you skipped. They are not there to compensate for the edge cases you didn't think about. They are not a safety net for underdone code.

They are there, ideally, operating on the assumption that developers have done their job — that the code is solid, that the obvious paths work, that basic error handling exists. Their job is to stress-test that assumption from the outside, to think about the system the way a real user or a real client would encounter it.

When developers don't write unit tests and QA is forced to catch basic logic errors, the whole system breaks down. QA is now doing two jobs badly instead of one job well. Developers get slower feedback. Production incidents increase. Blame goes around.
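Here is a minimal sketch of what "developers own the first layer" means in practice. The function and its edge cases are invented for illustration; the point is that a unit test this small catches basic logic errors before a QA cycle ever starts.

```python
# Hypothetical example: a developer-owned unit test catching the kind
# of basic logic error that should never reach QA.

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounding down."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# Developer-owned tests: obvious paths and edges, run on every commit.
def test_apply_discount():
    assert apply_discount(1000, 0) == 1000    # no discount
    assert apply_discount(1000, 25) == 750    # simple case
    assert apply_discount(999, 50) == 499     # rounds down
    assert apply_discount(1000, 100) == 0     # full-discount edge
    rejected = False
    try:
        apply_discount(1000, 150)
    except ValueError:
        rejected = True
    assert rejected, "should reject >100%"

test_apply_discount()
print("unit layer green")
```

With this layer green, QA is free to test the assumption from the outside — flows, integrations, real-user behavior — instead of re-deriving the arithmetic.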


What Should Actually Change

If I could change one thing about how most companies approach QA, it would be the fundamental framing:

Stop treating QA as a bug-finding department. Start treating QA as the guardians of system health.

That means measuring QA success differently — not by ticket volume but by release confidence, production incident rates, and coverage of critical flows. (For the structural framework behind this — how to actually scale this kind of QA without ballooning headcount — see how to scale QA automation for startups without hiring more engineers.)
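What does measuring by system health look like in numbers? A rough sketch, with invented data and a deliberately naive definition of "release confidence," of the kind of report that could replace a ticket count:

```python
# Rough sketch of outcome-based QA metrics. The metric definitions,
# thresholds, and example data are all invented for illustration.

def qa_health(releases: int, incidents: int,
              critical_flows: int, automated_flows: int) -> dict:
    """Summarize QA success by outcomes, not activity."""
    return {
        # Naive proxy: share of releases without a production incident.
        "release_confidence": round(1 - incidents / releases, 2),
        # How many of the flows that actually matter are covered.
        "critical_flow_coverage": round(automated_flows / critical_flows, 2),
        "incidents_per_release": round(incidents / releases, 2),
    }

# Example quarter: 20 releases, 3 production incidents,
# 12 critical e2e flows, 9 of them under automation.
metrics = qa_health(releases=20, incidents=3,
                    critical_flows=12, automated_flows=9)
print(metrics)
# A real definition would weight incident severity and time-to-detect,
# but even this crude version points at the system, not the ticket queue.
```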

It means giving QA a seat at the architecture table, not just the sprint review.

It means building tools and processes where automation is genuinely accessible — not dependent on one engineer's framework preferences or a specific person's continued employment.

And it means developers taking ownership of the quality of their own code, so QA can focus on the things that actually require their expertise.


A Note for Junior Developers

If you're early in your career and you find yourself rolling your eyes at QA, I get it. I've been there.

But here's what I'd tell my younger self: a QA person who's operating well is not your adversary. They're the person who helps you see your system the way the world will see it — not the way you built it, but the way it actually behaves under pressure, at scale, with real users making unexpected choices.

The goal was never for them to find everything wrong with your code. The goal was always for both of you to make sure the system is worth trusting.

That's worth treating seriously.


The Bottom Line

If you're a startup founder searching "automate website testing without coding" — here's my honest answer:

The easiest way is to stop treating testing as an afterthought and start treating it as a release system. Get one good QA. Give them a tool that doesn't require coding. Build automation as you test, not after. Let AI handle the edge cases. And ship with confidence.

You don't need a 10-person QA team. You don't need to write a single line of code. You just need the right system.


Jitu Khubchandani is the CTO & Founder of QualityKeeper.ai — a no-code QA-as-a-Service platform built for startups that want enterprise-grade quality without an enterprise-sized QA team.

📅 Book a free discovery call at QualityKeeper.ai

Frequently asked questions

Why do developers see QA negatively?
Because most QA teams are incentivized by ticket volume — bugs raised, Jira tickets closed — instead of system stability. When QA's value is measured in bug count, developers perceive them as adversarial rather than as system guardians. The misalignment is structural, not personal.
What's the right way to measure QA performance?
By release confidence, production incident rates, and coverage of critical web app flows — not by Jira ticket count. Treating QA as guardians of system health, with a seat at the architecture table, produces better outcomes than treating them as a bug-finding department.
Why does QA automation fail in most companies?
Three reasons compound: deliverable pressure (manual testing shows visible output every sprint, automation takes weeks to produce anything reportable), skill gaps (writing maintainable test frameworks is genuinely hard), and churn (each new QA engineer brings their own preferred tools, so coverage resets every 18 months).
Should developers write tests, or is that QA's job?
Developers should own unit tests. QA should stress-test the system the way a real user or client would. When developers skip unit tests, QA gets stuck catching basic logic errors instead of doing exploratory and end-to-end testing — which means both jobs get done badly.
Can AI fix the developer-QA disconnect?
AI QA agents help close the automation gap so QA doesn't get stuck in manual testing loops — that's the tactical fix. But the cultural fix has to happen alongside: measuring QA on release confidence (not tickets), giving them architectural input, and making automation a shared engineering concern instead of QA's problem alone.

Topics

how developers see QA, developer vs QA relationship, QA automation problems, no-code test automation, automate website testing without coding, QA as a service for startups, why QA automation fails, dev QA team conflict, software testing culture, automated testing tool for non-coders

See AI QA Agents test your web app, end-to-end.

A dedicated QA engineer plus agentic QA running regression on your product 24/7. No SDETs to hire, no code to share.