Dead code rarely announces itself. It does not break builds. It does not fail tests. It sits quietly in the repository, looking important enough that nobody wants to delete it.
Then the codebase grows around it.
A deprecated module still exports public types. An old service is no longer called in production, but three packages still import its helpers. A feature flag was removed from the product two years ago, but the implementation still exists behind a condition that is always false. New developers read it, reviewers preserve it, and refactors tiptoe around it.
Dead code is not just clutter. In a large codebase, it slows onboarding, hides real ownership, increases test time, and makes architecture harder to understand.
The hard part is not deleting code. The hard part is knowing what is safe to delete.
What Counts as Dead Code?
Dead code is code that no longer has a meaningful path to execution or use.
It can appear in several forms:
- Functions that are never called
- Components that are no longer rendered
- API routes that no client uses
- Feature-flag branches that can no longer activate
- Packages that no application imports
- Database adapters for systems that were removed
- CLI scripts that nobody runs
- Types and utilities that only support other dead code
Some dead code is obvious. A private function with zero references is usually safe to remove. Other dead code is harder. Public exports, dynamically loaded modules, framework conventions, and external API consumers can make code look unused when it is still important.
That is why a safe dead-code cleanup needs more than one signal.
Why Dead Code Survives
Dead code survives because deletion feels risky.
Adding code is rewarded. Deleting code is questioned. If a developer removes a module and something breaks, the failure is visible. If they leave dead code in place, the cost is diffuse and delayed.
Teams also lack confidence. A search for references might show nothing, but what about dynamic imports? What about a scheduled job? What about a customer still calling an old endpoint?
Finally, dead code often hides behind weak architecture boundaries. If everything imports from everything, it becomes difficult to know whether a module is genuinely unused or just indirectly reachable through a tangled dependency path. This is one reason monorepo dependency management matters so much.
Signal 1: Static References
Start with the simplest signal: reference counts.
For TypeScript projects, the language server in your editor can flag unreferenced symbols, and tools like ESLint, ts-prune, and knip can identify exports that appear unused.
This catches many easy wins:
```typescript
export function formatLegacyInvoiceDate(date: Date) {
  return oldFormatter(date);
}
```
If this function is exported but never imported, it is a candidate for removal.
But static references have limits. They can miss dynamic usage:
```typescript
const handler = await import(`./handlers/${event.type}`);
```
They can also misunderstand framework conventions where files are discovered by path, not by import. Next.js routes, NestJS providers, migration files, and CLI entrypoints may be used without a normal reference chain.
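One way to handle convention-loaded files is to declare them explicitly to the analysis tool. As a sketch, knip accepts a TypeScript config file where entry points can be listed; the glob patterns below are illustrative assumptions about a layout, not a recommended set:

```typescript
// knip.ts — sketch: declare convention-based entry points so files
// discovered by path are not flagged as unused.
// The exact patterns depend on your framework and repository layout.
import type { KnipConfig } from "knip";

const config: KnipConfig = {
  entry: [
    "apps/web/app/**/page.tsx",   // Next.js routes, discovered by path
    "apps/api/src/main.ts",       // service entrypoint
    "packages/*/migrations/*.ts", // migration files run by tooling
    "scripts/*.ts",               // CLI entrypoints
  ],
  project: ["apps/**/*.ts", "packages/**/*.ts"],
};

export default config;
```

With entry points declared, anything the tool still reports as unreachable is a much stronger candidate.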
So treat static analysis as a candidate generator, not a final verdict.
Signal 2: Dependency Graphs
Reference counts answer "is this symbol imported?" Dependency graphs answer a broader question: "how does this module connect to the rest of the system?"
This is crucial in larger applications. A file might have references, but only from a package that is itself unused. A utility might look alive because a deprecated service imports it. A package might be included in the workspace but not reachable from any deployable app.
Dependency visualization helps you identify isolated islands:
- Packages with no incoming dependencies
- Modules only connected to deprecated features
- Circular clusters that exist only to support old behavior
- Shared utilities that have become accidental dumping grounds
ReposLens is useful here because it shows the actual architecture graph rather than only file-level warnings. If a module is disconnected from the paths that lead to production entrypoints, it becomes a stronger cleanup candidate.
This also helps prevent accidental deletion. If a module is still connected to a critical path, the graph makes that visible before the cleanup PR.
Signal 3: Runtime Usage
Static analysis tells you what could be used. Runtime data tells you what actually is used.
For routes, jobs, commands, and feature branches, add lightweight instrumentation before deleting anything risky.
Track:
- API endpoint hits
- scheduled job executions
- CLI command usage
- feature flag exposure
- event handler invocations
- module-level warnings for suspected legacy paths
For example, if you suspect an API route is unused, log its usage for two weeks:
```typescript
logger.info("legacy_endpoint_called", {
  route: "/api/v1/export",
  userId,
});
```
If it receives no traffic during a representative period, you have stronger evidence. If it receives traffic from one customer, you can migrate that customer before deleting the endpoint.
Runtime evidence is especially important for public APIs and background jobs. Never delete externally reachable behavior based only on grep.
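The instrumentation can stay very light. A minimal sketch, assuming a `logger` object with an `info` method (the helper name and log message are illustrative): record each suspected legacy path at most once per process, so the evidence accumulates without flooding the logs.

```typescript
// Sketch: log each suspected legacy path at most once per process.
// The logger interface is an assumption; adapt to your logging library.
const seenLegacyPaths = new Set<string>();

export function markLegacyPath(
  name: string,
  logger: { info: (msg: string, meta: object) => void },
  meta: Record<string, unknown> = {}
): void {
  if (seenLegacyPaths.has(name)) return; // already reported this process
  seenLegacyPaths.add(name);
  logger.info("legacy_path_hit", { name, ...meta });
}
```

Call it at the top of a suspected handler, e.g. `markLegacyPath("/api/v1/export", logger, { userId })`, then query the logs after the observation window.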
Signal 4: Ownership and Business Context
Some code looks dead because the team reading it does not know who owns it.
Before deleting a suspicious module, ask:
- Which product surface used this?
- Which team owned it?
- Is there an active customer, contract, or migration depending on it?
- Is it referenced in docs, runbooks, dashboards, or support scripts?
- Is there a replacement already in production?
This is where Architecture Decision Records help. If the team documented why a module exists, cleanup becomes less guesswork. If no owner or decision record exists, that is itself a signal, but not automatic proof.
A Safe Dead-Code Removal Workflow
The safest cleanup process is incremental.
1. Mark Candidates
Create a list of suspected dead modules with evidence:
```
module: packages/legacy-export
static refs: none from apps/web or apps/api
runtime usage: no endpoint hits in 30 days
owner: no active owner
replacement: apps/api/src/report
recommendation: remove in two PRs
```
This turns deletion into a reviewable decision instead of a surprise.
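That evidence record can also be made machine-checkable. A sketch, with field names mirroring the record above and a deliberately conservative rule (the type and thresholds are illustrative, not a standard):

```typescript
// Sketch: a typed cleanup candidate and a conservative removal check.
// Every signal must agree before the tooling recommends deletion.
interface CleanupCandidate {
  module: string;
  staticRefs: number;          // references from deployable apps
  runtimeHitsLast30d: number;  // observed runtime usage
  hasActiveOwner: boolean;
  replacement?: string;        // path of the replacement, if any
}

function isSafeToRemove(c: CleanupCandidate): boolean {
  return c.staticRefs === 0 && c.runtimeHitsLast30d === 0 && !c.hasActiveOwner;
}
```

Any candidate that fails the check goes back to a human, which keeps the automation honest.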
2. Remove References First
If a deprecated module still has a few callers, migrate those callers first. Keep the cleanup PR focused.
Small PRs are easier to review and less likely to hide behavior changes. They also work better with automated PR quality gates, because failures point to a narrower change.
3. Delete the Module
Once references and runtime usage are gone, delete the module and run the full test suite.
Also check:
- build output
- type checking
- generated clients
- Docker images
- CI scripts
- deployment configuration
- documentation links
Dead code often leaves traces outside the source directory.
4. Watch After Merge
For risky removals, monitor logs and errors after deployment. If the deleted path was truly unused, nothing happens. That silence is the best outcome.
If something breaks, the PR should be small enough to understand and revert quickly.
Dead Code in Monorepos
Monorepos make dead code both easier and harder to detect.
They make it easier because all packages live in one place, so dependency graphs can show the whole system. But they make it harder because internal packages often remain "available" long after no app needs them.
Look for workspace packages that:
- are not depended on by any app
- only depend on other deprecated packages
- have no recent commits except formatting or dependency updates
- publish artifacts that no deployment consumes
- exist only because another old package still imports them
In a monorepo, dead code cleanup is often a graph problem, not a file problem. You are not just deleting unused functions; you are pruning unreachable branches from the architecture.
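The pruning idea can be sketched as a reachability check over workspace dependencies: start from the deployable apps, walk internal dependency edges, and flag everything left unreached. The graph below is hard-coded for illustration; in practice you would build it from workspace manifests.

```typescript
// Sketch: find workspace packages unreachable from any deployable app.
// `deps` maps each package to the internal packages it depends on.
function findUnreachablePackages(
  deps: Record<string, string[]>,
  appRoots: string[]
): string[] {
  const reachable = new Set<string>();
  const stack = [...appRoots];
  while (stack.length > 0) {
    const pkg = stack.pop()!;
    if (reachable.has(pkg)) continue;
    reachable.add(pkg);
    stack.push(...(deps[pkg] ?? []));
  }
  // Everything in the workspace that no app can reach is a candidate island.
  return Object.keys(deps).filter((pkg) => !reachable.has(pkg));
}
```

Running this against a workspace where `packages/legacy-export` only feeds other deprecated packages would surface the whole island at once, rather than one unused function at a time.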
What Not to Delete Too Quickly
Some code deserves extra caution:
- public API endpoints
- database migrations
- audit logs and compliance exports
- billing or subscription flows
- authentication callbacks
- webhooks
- scripts used by support or operations
- code loaded by convention rather than imports
For these areas, require stronger evidence: runtime data, owner confirmation, and a staged deprecation plan.
Make Cleanup Continuous
Dead code cleanup works best when it is continuous, not heroic.
Add a recurring maintenance habit:
- run unused export detection monthly
- review dependency graph islands after major releases
- remove feature flags shortly after rollout decisions
- require owners for long-lived deprecated modules
- add cleanup tasks to the same epic that introduced replacements
The goal is not to make the codebase perfectly minimal. The goal is to keep it readable enough that the active system is visible.
When dead code disappears, architecture becomes easier to see. Onboarding gets faster. Refactors become less scary. And every future developer spends more time understanding the system that exists, not the systems that used to.