
Should Developers Write Tests or Should QA Handle It? Here's What I Learned Scaling Systems to 10 Million Users

After scaling fintech and LMS systems to 10 million users, Software Development Lead Ujjawal Raj shares his honest, experience-backed answer to who should own software testing — developers or QA.

Ujjawal Raj · Software Development Lead · May 14, 2026 · 11 min read

I'll be honest with you. For a long time, my teams didn't write test cases for most of what we built. User onboarding, analytics pipelines, credit report fetching — we shipped fast, we moved on, and QA caught what they caught. And for a while, that felt fine.

Then we started building the LMS — a system that touched real money. Reconciliation. Payments. Audit trails. And suddenly the calculus changed completely. I became one of the strongest advocates for mandatory test coverage on that system, regardless of who was writing the code or how senior they were.

That experience — the contrast between systems where we got away with skipping tests and systems where skipping tests would have been catastrophic — is what shaped my actual opinion on this debate. So let me give you the honest answer, not the textbook one.


The real question isn't "who" — it's "what's at stake"

When we were building the user onboarding flow, the consequences of a bug were annoying but reversible. A user couldn't complete a step? Support ticket, fix deployed, user retried. Not great, but manageable. We didn't have automated test coverage, and the product moved fast because of it.

But on the LMS, even a one-rupee discrepancy could cause reconciliation failures. A double payment could go undetected for weeks. An audit issue could bring regulatory trouble. Those things are not reversible. Money once gone, or not accounted for, is a different category of problem entirely.

"Everything else can be reversed. Money once gone or not properly accounted for — that's a different animal. That's when testing stops being optional and becomes mandatory."

So before any team argues about whether developers or QA should write tests, they should first ask: what happens if this breaks in production and no one catches it for 48 hours? If the answer is "a user gets mildly frustrated," that's one category. If the answer is "a person trying to get a loan for a medical emergency hits a dead end," that's another category entirely — and no amount of QA bandwidth replaces solid developer-written tests there.


What happens when nobody owns testing: a story from our dev environment

Here's something that doesn't get talked about enough: the cost of a broken dev environment. On our teams, we started seeing developers push code to the shared dev environment without proper manual testing. Not maliciously — just moving fast. The result? The environment became unusable for everyone else. Other developers were blocked. QA couldn't test anything.

The options on the table were painful. We could spin up a separate environment for QA — and watch our infrastructure costs shoot up. Or we could fix it culturally. We went with the latter: a hard rule that everyone, regardless of role, had to manually test their own changes before pushing to dev. No exceptions.

This is why I believe developer testing isn't just about code quality in isolation. It's about team velocity, environment stability, and not making your own colleagues the victims of your untested assumptions. The same dynamic shows up when environment config never leaves one machine — only the blast radius is bigger on a shared dev box.


Junior developers must write tests — but not for the reason you think

On our teams, test cases were mandatory for junior developers. Not optional, not recommended — mandatory. Some people assume this is purely a quality gate, a way to catch bugs before they cause damage. That's part of it. But the more important reason is what the act of writing tests does to a developer's thinking.

When you're forced to write a test case, you have to think through every scenario before you ship. You have to decide: what are the absolute deal-breakers here, and what are the edge cases I can accept missing? That distinction — between critical failures and acceptable misses — is the kind of judgment that takes years to develop intuitively. Test writing accelerates it.

Deal-breakers — always test
  • Server fails to restart after changes
  • NullPointerException on core flows
  • No circuit breaker on third-party APIs
  • Payment or reconciliation failure

Acceptable misses — can ship
  • Rare edge cases in low-risk flows
  • UI-level inconsistencies
  • Non-critical optional features
  • Low-traffic, easily reversible paths
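
To make the deal-breaker category concrete, here is a minimal sketch of what one such test can look like. It is a JUnit 5 example built around a hypothetical installment-splitting helper; the class and method names are illustrative, not from any real codebase. The point is that the reconciliation invariant, that the parts must sum back to the exact total, gets asserted before the code ever reaches QA.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

class InstallmentSplitTest {

    /** Hypothetical helper: splits a loan amount into n installments; the rounding remainder lands on the last one. */
    static List<BigDecimal> splitIntoInstallments(BigDecimal total, int n) {
        BigDecimal base = total.divide(BigDecimal.valueOf(n), 2, RoundingMode.DOWN);
        List<BigDecimal> parts = new ArrayList<>();
        BigDecimal allocated = BigDecimal.ZERO;
        for (int i = 0; i < n - 1; i++) {
            parts.add(base);
            allocated = allocated.add(base);
        }
        parts.add(total.subtract(allocated)); // last installment absorbs rounding, so the sum stays exact
        return parts;
    }

    @Test
    void installmentsMustSumBackToTheExactTotal() {
        BigDecimal total = new BigDecimal("99999.99");
        List<BigDecimal> parts = splitIntoInstallments(total, 7);

        BigDecimal sum = parts.stream().reduce(BigDecimal.ZERO, BigDecimal::add);

        // Deal-breaker: even a one-paisa discrepancy here becomes a reconciliation failure downstream.
        assertEquals(0, total.compareTo(sum), "Installments do not reconcile with the disbursed amount");
    }

    @Test
    void splittingIntoOneInstallmentReturnsTheFullAmount() {
        // Edge case on a core flow: must never throw or return an empty result.
        List<BigDecimal> parts = splitIntoInstallments(new BigDecimal("100.00"), 1);
        assertEquals(1, parts.size());
        assertEquals(new BigDecimal("100.00"), parts.get(0));
    }
}
```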

Once you've learned to walk — once you've written enough tests to develop that instinct — you earn the right to run. Senior developers on our non-LMS systems had more flexibility precisely because they had already internalized these tradeoffs. But juniors? They needed the discipline first. And most of them became better engineers for it, faster.


Seniors aren't exempt — the LMS proved that

Now, here's where I'll push back on the common "seniors don't need as much oversight" mentality. For the LMS, there were zero exceptions by seniority. If you were touching the payment logic, you were writing test cases. Full stop. Didn't matter if you'd been writing code for ten years.

Why? Because experience gives you speed and instinct, but it doesn't make you immune to missing something in a complex financial system. And in a system where the cost of one missed edge case is a reconciliation audit or a failed medical loan, "I'm senior enough to skip this" is not a risk worth taking.


The director and the critic: the best mental model for dev vs QA testing

After years of working with QA teams across different products, I've landed on an analogy that I think nails the relationship better than any process framework.

Developer: the director. You know your vision. You should test your own scenes until they're gold standard. You solve the known knowns.

QA Engineer: the critic. They've seen hundreds of films. They catch what you're too close to see. They hunt the unknown unknowns.

A good film director doesn't hand a rough cut to the critics and say "you find the problems." They refine it until they believe it's ready. Only then does it go for review. That's exactly how our process worked: a developer would test their own code, reach a point of genuine confidence, and only then move it to the QA bucket. QA picked it up on their own timeline, or we'd give them a target date like "this will be ready for testing on December 12th" so they could plan accordingly.

This separation of concerns matters. If developers don't own their own testing, QA becomes a dumping ground. They end up spending their cycles catching obvious bugs that should never have left the developer's machine — instead of doing the high-value exploratory work that only a different mindset and broader exposure can provide. For the cultural side of that tension, how developers really see QA goes deeper than any process doc.


The P0 vs P2 war: who actually decides what gets fixed?

Anyone who has worked in a team with separate dev and QA functions knows this tension. QA flags something as P0 — a blocker. The developer looks at it and says P2, nice to have. Who wins?

On our teams, it often came down to seniority and credibility. If the QA lead had the experience and track record to make the case, developers listened. If a junior QA was flagging something a senior developer had already considered and consciously deprioritized, it went the other way. And when it was genuinely ambiguous — when both sides had a reasonable argument — product made the call.

Product's position was pragmatic: if this only affects 1% of users, and the release is otherwise ready, we won't block the entire release for it. We log it, we schedule it, and we ship. I understood the logic. I still do, mostly.


Shadow testing and the value of testing in production

One thing I want to challenge in the conventional testing conversation is the assumption that all meaningful testing happens before production. Sometimes testing live is not just acceptable — it's necessary. Shadow testing, where you route real production traffic to a new system in parallel without affecting the actual user experience, is genuinely valuable for catching things that no dev or QA environment fully replicates.

The combination of rigorous pre-production testing plus targeted production validation is, in my experience, more robust than either alone. The key is knowing which parts of your system can tolerate that kind of experimentation and which ones cannot. Financial transaction logic? Never shadow test live without extreme safeguards. Analytics collection? Much safer to test in the wild.
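
To make the mechanics concrete, here is a minimal sketch of request mirroring for a read-only flow, in the spirit of the credit report fetching mentioned earlier. The class and service URLs are hypothetical: the user is always served by the proven primary system, while a copy of the request is sent to the new system asynchronously and only observed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Minimal shadow-testing sketch: every request is served by the primary system,
 * and a copy is mirrored to the new (shadow) system purely for observation.
 * The shadow call is fire-and-forget; its result never reaches the user.
 */
public class ShadowingClient {

    private static final String PRIMARY = "https://primary.internal/credit-report"; // hypothetical
    private static final String SHADOW  = "https://shadow.internal/credit-report";  // hypothetical

    private final HttpClient http = HttpClient.newHttpClient();

    public String fetchCreditReport(String userId) throws Exception {
        HttpRequest primaryReq = HttpRequest.newBuilder(URI.create(PRIMARY + "?user=" + userId)).GET().build();
        HttpRequest shadowReq  = HttpRequest.newBuilder(URI.create(SHADOW + "?user=" + userId)).GET().build();

        // Mirror the request asynchronously; record what comes back, but never block or fail the user path.
        http.sendAsync(shadowReq, HttpResponse.BodyHandlers.ofString())
            .thenAccept(resp -> recordShadowResponse(userId, resp.body()))
            .exceptionally(ex -> { /* shadow failures are observability data, not user errors */ return null; });

        // The user's response always comes from the proven primary system.
        return http.send(primaryReq, HttpResponse.BodyHandlers.ofString()).body();
    }

    private void recordShadowResponse(String userId, String shadowBody) {
        // In a real setup this would store the shadow response for offline diffing against
        // the primary result; it should never write to production data stores.
    }
}
```

The important design choice is that the shadow path can only read and log; nothing it does feeds back into the user's response or into production data, which is exactly why this technique belongs on low-risk flows and not on payment logic.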


The confidence multiplier: why test cases make big changes less terrifying

There's a specific kind of developer confidence that only comes from good test coverage, and I don't think it gets discussed enough. When you make a small change and all existing tests pass — including tests written by other developers for other parts of the system — you know with a reasonable degree of certainty that your change isn't creating hidden regressions elsewhere. That's a completely different feeling from shipping a change and hoping for the best.

And the flip side is equally valuable: when you make a big refactor and something breaks a test in a module you didn't even touch, you know immediately that your change has wider consequences than you thought. The test suite is giving you signal you wouldn't have had otherwise. It doesn't remove all uncertainty, but it removes a significant layer of it. When QA or AI QA agents running automated regression validate what you've written, confidence moves from subjective to something more grounded.
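
A small, hypothetical illustration of that cross-module signal: a test that lives with one module but exercises a helper shared with another. If someone refactors the shared helper and quietly changes its rounding, this test fails even though the reporting code itself was never touched. The fee rule and class names below are made up for the sketch.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.math.BigDecimal;
import java.math.RoundingMode;
import org.junit.jupiter.api.Test;

/**
 * Hypothetical cross-module test: it belongs to the reporting module, but it pins down
 * the behavior of a fee calculator shared with payments. A "harmless" refactor of the
 * shared helper that alters rounding breaks this test immediately.
 */
class LateFeeReportTest {

    // Shared helper, normally defined in a common module; inlined here so the sketch is self-contained.
    static BigDecimal lateFee(BigDecimal outstanding) {
        return outstanding.multiply(new BigDecimal("0.02")).setScale(2, RoundingMode.HALF_UP);
    }

    @Test
    void lateFeeShownInReportsMatchesTheAgreedTwoPercent() {
        assertEquals(new BigDecimal("20.00"), lateFee(new BigDecimal("1000.00")));
        assertEquals(new BigDecimal("0.01"), lateFee(new BigDecimal("0.49")));
    }
}
```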


My honest advice for developers starting out today

Write test cases from the start. Not because your manager says so. Not because it's a best practice you read in a blog. Write them because they force you to think through your own code more rigorously than any code review will.

You will develop intuition over time about which tests matter and which are noise. You will learn to make good tradeoffs. But that intuition only comes from having gone through the discipline first. You have to learn to walk before you can know when it's okay to run.

The developers I've seen grow the fastest are the ones who treated their test cases as part of their craft — not as bureaucratic overhead tacked on at the end, but as the way they thought through their own work. They wrote tests for themselves first, and the team benefited second.


Summary: developer testing vs QA — what actually matters

Dimension | Developer Testing | QA Testing
Primary focus | Known knowns — what you built and what you intended | Unknown unknowns — what you missed or didn't consider
Who owns it | The developer who wrote the code | QA team; feeds into shared coverage ownership
Mandatory for juniors? | Yes — no exceptions | Yes — QA validates junior code thoroughly
Mandatory for seniors? | Relaxed on low-risk systems; mandatory on financial/critical systems | Yes, QA still validates regardless of dev seniority
Financial/LMS systems | Non-negotiable for everyone | Non-negotiable — both manual and automated
Best analogy | Director refining their own film | Critic reviewing with fresh, comparative eyes
When to go to production | After confident self-testing | After QA validates and releases to prod
Shadow/live testing | Valuable for low-risk, high-traffic systems | Complements pre-production QA, not a replacement

Ujjawal Raj is Software Development Lead at QualityKeeper.ai — building fintech and LMS systems with clear dev–QA ownership, reproducible environments, and test discipline that scales past 10 million users.

Book a free discovery call at QualityKeeper.ai

Frequently asked questions

Should developers write unit tests or is that the QA team's job?
Developers should own unit and integration tests for their own code. QA's job is to find what you couldn't see from inside your own work — edge cases, cross-system interactions, and real-world usage patterns you didn't anticipate. If you hand your untested code to QA and call it done, you've turned a critic into a safety net — and that's not fair to them or your users.
Do senior developers need to write test cases?
It depends on the system. On low-risk systems, experienced developers often have enough instinct to know what needs coverage and what doesn't. But on financial systems, payment flows, or anything with audit requirements, seniority is not an exemption. The cost of a missed test in a reconciliation-critical system is the same regardless of who wrote the code.
What is the difference between developer testing and QA testing?
Developer testing is about verifying what you built does what you intended — known knowns. QA testing is about finding what you didn't know you missed — unknown unknowns. Think of it as a film director who knows every scene versus a critic who watches the final cut with fresh eyes and a different frame of reference. Both are necessary; neither replaces the other.
Why do developers break the dev environment and how do you prevent it?
Developers break shared dev environments when they push untested changes — especially database migrations or config changes — without verifying them locally first. The fix is cultural and enforced: make manual self-testing before pushing to any shared environment a hard rule, not a suggestion. The cost of one broken shared environment across a full team far outweighs the time saved by skipping the check.
How do you decide which bugs are worth blocking a release for?
The framework I've used: ask who is in that affected percentage and what it costs them. A bug affecting 1% of users sounds manageable until that 1% includes someone in a medical emergency trying to access a loan, or a high-profile user who goes public. Severity isn't just about frequency — it's about consequence. Product makes the final call, but that call should always account for the worst-case version of the affected user, not just the average one.
Is shadow testing in production a good idea?
For the right systems, yes. Shadow testing routes real production traffic to a new system in parallel, without affecting the user experience, so you can observe real-world behavior that no test environment fully replicates. It's genuinely valuable for analytics, routing, or recommendation systems. It is not appropriate for payment logic, financial transactions, or anything where a side-effect in the shadow path could cause a real-world consequence.

Topics

should developers write tests or QA · developer testing vs QA · who should write unit tests · developer vs QA responsibility · fintech software testing · LMS testing · test coverage for juniors · QA handoff process · developer self-testing · shadow testing production
