Why "Works on My Machine" Is a QA Nightmare — A Blockchain Developer's Perspective

Lead Blockchain Developer Sai Charan breaks down why "works on my machine" keeps destroying QA cycles — and what nine years of Ethereum development taught him about fixing it for web apps and Web3 teams.

Sai Charan · Lead Blockchain Developer, QualityKeeper.ai · May 13, 2026 · 10 min read

The Four Words That Have Wasted More Developer Hours Than Any Bug

"Works on my machine."

In nine years of building full-stack blockchain applications on Ethereum, I've heard those four words more times than I can count. They are not just an excuse. They are a symptom — of undocumented environments, skipped checklists, and a cultural blind spot that quietly poisons QA cycles and erodes client trust.

I've seen it delay token launches, burn debugging hours on issues that were never actually code bugs, and create a slow drip of client doubt that's harder to recover from than any technical failure. And almost every time, the root cause wasn't complexity. It was process.

This post is everything I've learned — and everything I wish I'd known earlier — about why "works on my machine" persists, what makes it uniquely dangerous in Web3, and how to build a team culture where it simply can't survive. For web app teams, pairing environment parity with agentic QA and AI QA agents that run regression in reproducible staging — not just on a developer's laptop — is how you stop the cycle from repeating.


Where It All Starts: The .env File Nobody Updated

The most common pattern I've encountered is deceptively simple. A developer adds configuration directly to the database before starting the server locally. It works. They ship. When the code reaches a higher environment — staging, UAT, production — that database update was never replicated. The application fails, often silently, or with a cryptic error that sends the team down a debugging rabbit hole for hours.

The fix, when we finally trace it back, takes five minutes. The investigation took two days.

That pattern taught me something early: environment-specific configuration is not a prerequisite to development. It is part of development. Missing values in .env files and misconfigured backend settings aren't edge cases — they are the single most common cause of "works on my machine" failures I have personally witnessed. Every time, it came down to the same thing: someone treated config as an afterthought rather than a first-class citizen of the deployment process.

If a step isn't documented and scripted before the code ships, it doesn't exist for anyone but the person who ran it.
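One way to make that rule enforceable is a preflight script that refuses to start the app (or run a deploy) when required configuration is absent. A minimal sketch in TypeScript; the variable names are placeholders, not values from any real project:

```typescript
// validate-env.ts: fail fast when required configuration is missing.
// The variable names below are illustrative placeholders.
const REQUIRED_VARS = ["DATABASE_URL", "RPC_URL", "DEPLOYER_ADDRESS"] as const;

const missing = REQUIRED_VARS.filter((name) => !process.env[name]?.trim());

if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(", ")}`);
  process.exit(1); // block startup or deploy instead of failing silently later
}

console.log("Environment check passed.");
```

Run it as the first step of both local startup and the CI deploy job, so a missing value surfaces as a loud, immediate failure in every environment rather than a cryptic one in staging.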


Why "Works on My Machine" Hits Differently in Blockchain Development

Traditional software has enough "works on my machine" problems. In Ethereum and full-stack blockchain development, the issue compounds in ways that most development guides don't account for.

Local testnets lie to you. Tools like Hardhat and Anvil spin up a clean-slate chain every time. No historical state, no edge-case transactions that have accumulated over years on a live network. Everything passes locally, right up until it hits a real chain — and then you discover that your contract assumptions were built on a foundation that doesn't reflect reality.
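If you use Hardhat, its mainnet forking feature is the most direct antidote: local tests run against real accumulated chain state instead of a blank chain. A sketch of the relevant config, where MAINNET_RPC_URL and the pinned block number are assumptions you'd adapt to your project:

```typescript
// hardhat.config.ts: fork real mainnet state instead of a clean-slate chain.
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    hardhat: {
      forking: {
        // Assumed env var pointing at an archive-capable RPC endpoint.
        url: process.env.MAINNET_RPC_URL ?? "",
        blockNumber: 19_500_000, // pin a block so test runs are reproducible
      },
    },
  },
};

export default config;
```

Pinning the block number matters: without it, every test run forks from the latest block and results stop being comparable between developers.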

Wallet configurations are an afterthought until they break. Local development uses well-funded test accounts with predictable behaviour. Production might be a multisig or a hardware wallet with entirely different gas behaviour, permission structures, or nonce states. These differences aren't hypothetical — I've seen deployments fail because the production wallet simply didn't have the permissions we granted ourselves locally and never thought to document.

RPC endpoints are not interchangeable. Infura, Alchemy, and self-hosted nodes have subtle differences in rate limiting, supported methods, and response behaviour. A contract call that works perfectly against a local node can fail against a live RPC under real traffic. Because it's infrastructure-level, it gets mistaken for a contract bug — and wastes significant debugging time before the real cause surfaces.

The lesson I keep coming back to: treat chain state, RPC configuration, and wallet setup with the same rigor as environment variables. Version them. Document them. Validate them before every deployment. They are not infrastructure details — they are deployment requirements. The same discipline applies to web application testing on staging: cross-browser regression and API contract checks should run against an environment that mirrors production, not a developer's local stack.
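In practice, "validate them before every deployment" can be a short script run as a gate in the deploy pipeline: confirm you are on the chain you think you are, against the RPC you expect, with a wallet that actually holds its permissions. A sketch using ethers v6; the contract address, role name, and env variable names are all illustrative:

```typescript
// preflight.ts: validate chain, RPC, and wallet permissions before deploying.
import { ethers } from "ethers";

const EXPECTED_CHAIN_ID = 1n; // mainnet; adjust per target environment

async function preflight() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);

  // 1. Are we pointed at the chain we think we are?
  const { chainId } = await provider.getNetwork();
  if (chainId !== EXPECTED_CHAIN_ID) {
    throw new Error(`Wrong chain: expected ${EXPECTED_CHAIN_ID}, got ${chainId}`);
  }

  // 2. Does the deployer wallet hold the role it needs on this environment?
  const contract = new ethers.Contract(
    process.env.CONTRACT_ADDRESS!, // assumed env var
    ["function hasRole(bytes32 role, address account) view returns (bool)"],
    provider,
  );
  const role = ethers.id("DEPLOYER_ROLE"); // keccak256 of the role name
  const ok = await contract.hasRole(role, process.env.DEPLOYER_ADDRESS!);
  if (!ok) {
    throw new Error("Deployer wallet is missing DEPLOYER_ROLE in this environment");
  }

  console.log("Preflight passed.");
}

preflight().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

A check like this turns "the production wallet didn't have the permission" from a post-deploy surprise into a pre-deploy error message.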


The Human Behaviour Problem No Tooling Solves

The hardest part of fixing "works on my machine" isn't technical. It's behavioural.

Developers who've been on a project for a long time stop documenting setup steps — not out of laziness, but because they no longer need them. They have internalised the environment. What they forget is that every new team member, every new environment, every new deployment exposes every assumption they never wrote down.

It shows up in throwaway phrases: "just run it locally," "it worked yesterday," "QA must have set it up wrong." No record of what changed. No documented config. No reproducible setup.

QA feedback gets dismissed for the same underlying reason: developers trust their local environment more than any reproducible test environment QA can build. If QA can't replicate their exact setup, the bug gets closed rather than investigated. That dynamic is the same one our CTO describes in how developers really see QA — and fixing it starts with a written handoff, not more debate in Slack.

And then there's what I call the "ship first, fix later" mindset. Developers push code without self-testing, wait for QA to raise issues, and fix reactively. It feels faster in the moment. In practice, the rework cycles cost far more time than a proper pre-handoff review would have. QA becomes a bug-reporting service instead of a quality gate — and over time, that erodes trust between both sides.

The real damage isn't the individual bug. It's the culture that allows it. When setup knowledge lives in people's heads instead of the codebase, every deployment is a roll of the dice.


When It Reaches Production: The Stakes Are Irreversible

In traditional software, a "works on my machine" failure in production is painful. In blockchain, it can mean something that cannot be undone.

One pattern I've seen repeatedly: production simply doesn't get tested. Real money is involved, so teams avoid running functional tests against mainnet — which is understandable. But the consequence is that production becomes the first environment where the full real-world conditions are actually met. Real wallets. Real gas prices. Real contract state. Every assumption baked in during development surfaces here, in front of real users, with no rollback option.

IDAM and configuration gaps are another recurring issue. After a deployment, specific wallet addresses may not have the right permissions, or a critical config entry gets missed entirely. Everything looks fine until a user hits a function that requires that permission — and it silently fails or reverts, leaving users with no meaningful error and the team with no clear audit trail.

RPC endpoint behaviour differences across environments are the third consistent failure point. Because these failures are infrastructure-level, they get mistaken for contract bugs, and significant debugging time is burned before the actual cause is identified.

All three share the same root: manual steps and environment assumptions that were never made explicit. When permissions, configs, and infrastructure details are set by hand or left undocumented, something will eventually be missed. In blockchain, eventually always comes at the worst possible time.


The Tools and Processes That Actually Made a Difference

Over nine years, I've introduced and championed several things that genuinely moved the needle. Not because they're clever — but because they make implicit knowledge explicit.

Docker was the first real win. Containerising the entire development environment — app, dependencies, configuration — eliminated most "works on my machine" complaints overnight. Everyone runs the same environment. There's no debate, no "well, on my setup it works." The environment is the environment.

CI/CD pipelines with environment parity checks. Making the pipeline mirror production as closely as possible means issues get caught before they reach real users. Any config or dependency drift between environments gets flagged automatically, not discovered during a client demo.

Deployment scripts over manual steps. Every permission grant, every config initialisation, every address whitelisting — it goes into the deployment script. If a step isn't scripted, it doesn't ship. This single rule has prevented more production failures than anything else I've implemented.
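To make that rule concrete, here is what a deployment script in the Hardhat style can look like. The Vault contract, role name, multisig address, and addToWhitelist function are placeholders for whatever your project actually grants and configures:

```typescript
// deploy.ts: every permission grant and config step lives here, not in a runbook.
import { ethers } from "hardhat";

async function main() {
  const [deployer] = await ethers.getSigners();
  console.log(`Deploying as ${deployer.address}`);

  // 1. Deploy the contract.
  const Vault = await ethers.getContractFactory("Vault");
  const vault = await Vault.deploy();
  await vault.waitForDeployment();

  // 2. Grant operational roles: scripted, never clicked in a console.
  const OPERATOR_ROLE = ethers.id("OPERATOR_ROLE");
  await (await vault.grantRole(OPERATOR_ROLE, process.env.OPS_MULTISIG!)).wait();

  // 3. Whitelist known addresses from config, not from memory.
  for (const addr of (process.env.WHITELIST ?? "").split(",").filter(Boolean)) {
    await (await vault.addToWhitelist(addr)).wait();
  }

  console.log(`Vault deployed at ${await vault.getAddress()}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Because every grant and whitelist entry is read from config, the same script produces the same end state on staging and production, and the diff between environments is visible in version control.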

Environment variable audits as part of the PR checklist. Before any merge, developers are required to verify that every new config value is documented and added to all environment templates. Not just their local .env. All of them.
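That audit doesn't have to rely on reviewer diligence alone. A small CI script can diff variable names across every template and fail the build on drift. A sketch, assuming dotenv-style templates at these illustrative paths:

```typescript
// audit-env-templates.ts: fail the PR if env templates have drifted apart.
// File paths are assumptions; point these at your real environment templates.
import { readFileSync } from "node:fs";

const TEMPLATES = [".env.example", ".env.staging.example", ".env.production.example"];

// Extract variable names (KEY=...) from a dotenv-style file.
function keysOf(path: string): Set<string> {
  return new Set(
    readFileSync(path, "utf8")
      .split("\n")
      .map((line) => line.trim())
      .filter((line) => line && !line.startsWith("#"))
      .map((line) => line.split("=")[0]),
  );
}

const allKeys = new Set(TEMPLATES.flatMap((t) => [...keysOf(t)]));
let drift = false;

for (const template of TEMPLATES) {
  const keys = keysOf(template);
  const missing = [...allKeys].filter((k) => !keys.has(k));
  if (missing.length > 0) {
    drift = true;
    console.error(`${template} is missing: ${missing.join(", ")}`);
  }
}

process.exit(drift ? 1 : 0);
```

Wired into the PR pipeline, this turns "I added it to my local .env but forgot the staging template" into a red build instead of a staging incident.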

The common thread across all of these is simple: automating anything that depends on human memory, and documenting everything that doesn't get automated. Teams that also adopt no-code test automation on top of environment parity catch regressions in hours, not weeks — without every QA engineer writing Selenium from scratch.


Where Responsibility Actually Lies: Developer vs QA

This is a conversation most teams avoid having clearly — and that ambiguity is where "works on my machine" finds its oxygen.

A developer's responsibility ends when the code works correctly in a properly configured, reproducible environment. Not just locally. That means writing setup documentation, ensuring config is complete, and self-testing before handoff. If code only works on one machine, it is not ready for QA. Full stop.

QA's responsibility is to validate behaviour against requirements — not to debug environment issues or reverse-engineer missing setup steps. When QA spends more time fighting configuration than testing features, something has already gone wrong upstream.

In most teams I've worked with, that line gets blurred in one direction: developers push to QA too early. Code arrives with missing configs, undocumented dependencies, and environment assumptions that were never communicated. QA ends up doing detective work that should have been caught in development.

The teams where this worked well had one thing in common: a shared definition of "ready for QA." A checklist. A deployment script. A definition of done that made the handoff explicit and held developers accountable before QA ever touched the build. That clarity changed the dynamic entirely.


Resistance to Standardised Environments: What I've Seen and How I Handle It

Pushback against containerisation is more common than people admit — and it rarely comes from bad intent.

The most common objection is "it slows me down." Developers who've spent years optimising their local setup see Docker as overhead — another layer to learn, debug, and maintain. That frustration is valid, but it's short-sighted. The time invested in a one-time Docker setup is negligible compared to the hours burned debugging environment-specific issues across a growing team.

The subtler resistance comes from experienced developers: "my setup works fine, it's everyone else's problem." Senior developers are often the biggest resisters, precisely because they've been on the project longest and don't feel the pain that newer team members or QA engineers experience daily.

Personally, I haven't encountered significant pushback on my own teams — and I think that comes down to culture set early. When I started out, senior developers were diligent about server management and environment setup. Standardised environments weren't a debate; they were how things were done. Carrying that forward as a lead developer, I've made environment parity a non-negotiable from day one of every project. Every environment documented. Every config scripted. The expectation set before anyone has a reason to resist.

The best way to prevent resistance is to never give it room to grow. When containerisation and environment standards are part of onboarding, they become the default — not an imposition.


The Root Cause: Why Process Failure Outlasts Every Tool and Culture Fix

If I had to name the single biggest systemic failure that allows "works on my machine" to persist — it's process. Culture and tooling problems are usually symptoms. The root is almost always a team operating without the right guardrails.

In my experience, it shows up in four consistent ways:

No proper handoff checklist. Code moves from development to QA to production without a defined gate. There's no explicit check on whether the environment is configured, dependencies are documented, or setup steps are reproducible. Things get missed not out of negligence, but because nobody was required to verify them.

Meaningless branch naming and commit messages. When branches are named arbitrarily and commits say "fix" or "changes," tracing what was done — and why — becomes nearly impossible. Debugging an environment issue is ten times harder when there's no meaningful history to follow.

Missing documentation. Setup steps, environment variables, deployment dependencies — they live in someone's head or a Slack message from six months ago. Every new developer, every new environment starts from scratch.

No code validation gates. Tools like SonarQube exist precisely to catch issues before they travel further down the pipeline. When these aren't enforced as part of the CI process, low-quality or misconfigured code moves freely — and the problem compounds with every environment it touches.

The fix isn't hiring better developers. It's building a process where these gaps simply can't be skipped.


The Business Impact: This Is Not a Technical Problem

The business impact of "works on my machine" is real, even when it's hard to quantify precisely.

The most direct hit is timeline slippage. In blockchain projects, environment issues don't just cost debugging hours — they delay the entire release cycle. A missed config, a wrong RPC endpoint, or an unpermissioned wallet address in a higher environment can block a deployment for days. On projects with fixed delivery windows or token launch dates, that delay has a direct financial cost — and erodes internal confidence before it ever reaches the client.

The longer-term damage is to client trust. When issues surface in staging or production that clearly worked in development, it raises an uncomfortable question: how thorough was the testing? In blockchain, where transactions are irreversible and funds are at stake, that doubt is hard to recover from. Clients don't just lose confidence in the build — they lose confidence in the team's ability to manage risk.

Both outcomes trace back to the same root: environment inconsistency treated as a technical nuisance rather than a business risk. The teams that get this right don't just ship better software. They build trust that survives the inevitable hard moments in any project.


What I'd Tell a Junior Developer or QA Engineer Starting Out

If there's one thing I wish someone had told me early in my career, it's this:

Treat your environment as part of your code — not a prerequisite to it.

Early on, I saw environment setup as something you dealt with once and moved on from. What I learned over time is that undocumented configs, manual deployment steps, and assumed shared knowledge are technical debt. They compound quietly and always surface at the worst possible moment.

Document everything as you build, not after. Every environment variable, every permission, every setup step. If it's not written down, it doesn't exist for the next person — and in blockchain, that next person might be you, six months later, debugging a production issue at midnight.

Never assume your local environment reflects reality. Testnets lie. Clean-slate chains hide edge cases. What works against a local node may behave entirely differently against a live RPC under real conditions.

And if you're in QA — push back early. If code arrives without documentation or a reproducible setup, that is not a QA problem to solve. A strong handoff process protects your time and raises the quality bar for the whole team.

The developers and QA engineers who stand out aren't just the ones who write good code or find good bugs. They're the ones who make their environment transparent, repeatable, and trustworthy for everyone around them.


Summary Table

| Problem Area | Root Cause | Real-World Impact | Recommended Fix |
| --- | --- | --- | --- |
| Missing .env values | Config treated as afterthought | Silent failures in higher environments | PR checklist for all env variables |
| Local testnet assumptions | Clean-slate chains hide state issues | Contract failures on mainnet | Test against forked mainnet state |
| RPC endpoint differences | Infra treated as interchangeable | Misdiagnosed contract bugs | Validate RPC config per environment |
| Undocumented setup steps | Long-tenure knowledge lock-in | Onboarding delays, deployment failures | Scripted deployments, no manual steps |
| Missing IDAM/permissions | Manual post-deploy steps | Silent production failures | Include all permissions in deploy script |
| No handoff checklist | No shared definition of "done" | QA wastes time on dev problems | Define and enforce a QA-ready gate |
| No CI validation gates | Missing SonarQube / parity checks | Bad code travels freely | Enforce code quality gates in pipeline |
| Senior developer resistance | "My machine works" mindset | No standardisation across team | Set environment standards at onboarding |

Sai Charan is Lead Blockchain Developer at QualityKeeper.ai — helping teams ship Web3 and web applications with reproducible environments, clear QA handoffs, and automation that doesn't depend on one developer's laptop.

Book a free discovery call at QualityKeeper.ai

Frequently asked questions

Why does "works on my machine" happen so often in software development?
It happens because developers build and test in environments they've personally configured over time — with specific tools, versions, database states, and undocumented config values that nobody else has. The gap between a developer's local setup and any other environment is where bugs live. The phrase is usually honest, but it signals a process failure: the environment was never standardised or documented.
How do you fix "works on my machine" in a blockchain project?
Start by containerising your development environment with Docker so every team member runs identical setups. Next, move every permission grant, config value, and deployment step into a scripted process — nothing manual. Add environment parity checks to your CI/CD pipeline, and make environment variable documentation a mandatory part of the PR review. For Web3 specifically, validate your RPC endpoints and wallet configurations per environment, not just locally.
What is the difference between a developer's and QA engineer's responsibility for environment issues?
A developer is responsible for ensuring the code works in a properly configured, reproducible environment before handoff — not just locally. QA is responsible for validating behaviour against requirements. QA should never be debugging missing configs or reverse-engineering undocumented setup steps. That's upstream work that belongs in development.
Why is environment inconsistency more dangerous in blockchain than in traditional software?
Because blockchain transactions are irreversible. A failed deployment in traditional software can usually be patched and redeployed. In blockchain, a misconfigured permission, a wrong wallet address, or an untested function hitting a live contract can affect real funds with no rollback. The blast radius of environment assumptions being wrong is fundamentally higher.
How do I get my development team to stop ignoring QA environment issues?
Build a shared, written definition of "ready for QA" and enforce it before anything reaches QA. Make the handoff process explicit — a checklist, a deployment script, a set of criteria that must be met. When developers are accountable for the state of the environment before handoff, QA stops being a dumping ground and becomes what it's supposed to be: a quality gate.
Does using Docker actually solve "works on my machine" problems?
It solves most of them, especially those caused by OS differences, dependency version mismatches, and local tool configurations. What Docker doesn't solve is missing or incorrect environment variables, undocumented deployment steps, and blockchain-specific issues like RPC configuration and chain state assumptions. Docker is a strong foundation — but it has to be paired with documented config management and scripted deployments to cover the full problem.
What tools should every blockchain development team use to prevent environment issues?
Docker for environment standardisation. CI/CD pipelines with environment parity checks. SonarQube or similar for code quality gates. A deployment script that handles every permission, config, and setup step. And a PR review checklist that explicitly requires all new environment variables to be documented across every environment template — not just the author's local .env.

Topics

works on my machine QA · works on my machine · environment parity · Docker QA · CI/CD testing · blockchain QA testing · Web3 testing · Ethereum development QA · deployment scripts · environment configuration

See AI QA Agents test your web app, end-to-end.

A dedicated QA engineer plus agentic QA running regression on your product 24/7. No SDETs to hire, no code to share.