Bridging Result<T, E> with TypeORM transactions using a sentinel
If your codebase enforces "no throwing in domain or application layers" and uses Result<T, E> for error flow, you have a problem when wrapping use cases in TypeORM transactions: TypeORM only rolls back on a thrown error. The fix is a sentinel — an infrastructure-internal exception that carries the Result.failure value, is thrown inside the transaction callback when the use case returns failure, caught at the boundary, and converted back to Result.failure for the caller. The no-throw rule stays intact; transactions still roll back correctly.
Setting the stage: what you (probably) already know
Before the rules and the conflict, a brief grounding for readers who haven't lived inside Result-pattern codebases. Skip this section if you have — but if "why strings, not typed errors?" is the kind of question you've ever lost an afternoon to, the rest of this section is worth a read.
The Result type, in ten lines
Instead of throwing on failure, functions return either Result.success(value) or Result.failure(message). The caller checks result.success before using result.data. Failures are part of the function's signature, not a side channel.
type Result<T, E> =
| { success: true; data: T }
| { success: false; error: E };
// Usage
const r = await user_repo.find_by_id(user_id);
if (!r.success) return failure(`Lookup failed: ${r.error}`);
const user = r.data;
Why Result over throwing exceptions
- Control flow becomes visible at the type level. A function returning Result<User, string> tells the reader (and the type-checker) that it can fail. A function that might throw tells you nothing — exception possibilities are invisible in TypeScript signatures. Throw-based APIs make failure paths a guessing game; Result makes them part of the contract.
- No accidental swallowing. The classic sin: try { stuff() } catch (e) { console.log(e) } — error caught, logged, forgotten. A Result.failure that's never inspected is detectable statically (unused-locals, lint rules, even a one-line ESLint rule that flags ignored Results). The static analyzer becomes your safety net instead of "did the engineer remember to handle this?"
- Composition that actually works. Async Result chains compose via .then(map_async) or sequential if (!r.success) return r early-returns. Try/catch nests; it doesn't compose. Five operations in a row become a pyramid of try { try { try { ... } catch } catch } catch, or one mega-try that loses precision on which step failed.
- Cross-module boundaries hate exceptions. Publish a domain event, enqueue a BullMQ job, return from a controller — exceptions don't serialize across those boundaries. { success: false, error: "..." } does. Once your codebase spans modules with event-driven communication, exceptions become a layer-violation generator: module A throws something, module B has to import the error class to catch it, and now A's error type is part of B's interface surface.
- Async/await asymmetry disappears. Awaited promises reject differently from sync throws. The mental model is "is this a Promise that might reject, or a function that might throw?" — and the right wrapping changes based on the answer. Result.failure is the same shape regardless of sync/async. One mental model, one error path.
Why strings as the error type, not typed errors
This is the design choice every team relitigates. The case for typed errors — Result<User, NotFoundError | ValidationError | DbError> — sounds appealing: the compiler tells you which errors to handle. The reality is:
- Type explosion. Every function signature has to enumerate its possible errors. Function A returns NotFoundError; function B returns ValidationError | DbError; function C composes both, so C's signature is the union of all of them. Add a new error type and it propagates upward through every caller's signature. After a year, your signatures look like ten-line type unions that nobody reads.
- Most callers don't case-match anyway. In practice, ~95% of error handling is "did it fail? if so, propagate up." Callers rarely care which error; they just want to know failure happened so they can fail fast or wrap. The type-system muscle of "discriminated error union" goes unused; you pay the cost of carrying it through signatures and get little of the benefit.
- The 5% that do case-match can encode that in the string. If a caller really needs to distinguish, you can prefix: "VALIDATION: email format", "NOT_FOUND: user 123", "CONFLICT: ICO already registered". Cheap, structured enough, and it doesn't drag the type system through every signature for the rare case.
- The honest trade-off. Typed errors give you "the compiler tells you which errors to handle" at the cost of "every signature carries the full failure-mode set." For codebases where errors are 95% propagate and 5% case-match (most codebases), strings are the right reach. For codebases where error-type discrimination is load-bearing — a payment system that maps exhaustively to specific user-facing error messages, say — typed errors earn their keep. Pick based on your case-match-to-propagate ratio.
What it looks like without Result discipline
If you haven't worked in a Result-pattern codebase, here's what the alternative tends to produce:
- Cross-module coupling via error classes. Module A throws InvalidUserError. Module B catches it. Now module B has to import { InvalidUserError } from '../module-a/...' — the error class becomes part of module A's public interface. With Result, module A returns failure("Invalid user: ..."); module B reads the string. Zero cross-module type coupling; modules stay independently deployable.
- Defensive try/catch everywhere. Every function that doesn't want to crash has to either handle or rethrow. Every layer becomes "did the layer below throw something I should catch?" The defensive boilerplate becomes the noise floor. With Result, the failure path is explicit at every step; you can see who handles what.
- Swallowed errors at scale. A codebase with throws ends up with try { x() } catch (e) { /* silently logged */ } in dozens of places, drifting in over years. With Result, an unused failure is statically detectable. The discipline doesn't depend on humans remembering to bubble errors up properly — the type system does it.
- Promise chains that fail invisibly. doStuff().then(transform).then(persist) — which step rejected? Unless you instrument every .catch(), you find out from the user. With Result, every step is its own boundary; the caller sees exactly which one failed.
If you nodded along to all of that — skip to "Two rules that fight each other" below. The rest of this post assumes a codebase that's bought into Result; what follows is about the one place where Result and TypeORM's transaction API don't naturally get along, and the small pattern that bridges them.
Two rules that fight each other
I've been writing applications with two non-negotiable rules for the last couple of years:
- Domain and application layers never throw. Use cases return Result<T, string>. Infrastructure catches exceptions and converts them to Result.failure. Controllers — the only layer allowed to throw — convert Result.failure to HttpException. Anywhere else, a thrown error is an unhandled bug.
- Multi-step writes inside a single use case must roll back atomically. If a use case writes Community, Company, Address, and BankAccount, and the BankAccount insert fails, the other three must be undone.
Each rule is sensible. They are also in direct conflict. Here's why.
The mechanism TypeORM gives you
TypeORM's transaction API is throw-to-rollback:
await dataSource.transaction(async (manager) => {
await manager.save(communityRepo, community);
await manager.save(companyRepo, company);
// ... if any throw happens here, the whole thing rolls back
});
If the callback throws, TypeORM rolls back. If the callback returns normally, TypeORM commits. There is no third option. There is no "return false to roll back." There is no rollback method you can call from inside the callback. The rollback signal is, structurally, a thrown exception.
The same is true if you use the typeorm-transactional package, which adds CLS-bound transaction propagation via decorators and a runInTransaction(...) helper. The propagation is more flexible — you can join an outer transaction or open a new one — but the rollback trigger is still "an exception escapes the callback."
The naive options, all bad
There are three obvious approaches, and each fails in its own way.
Option 1: throw inside the use case when something fails.
This breaks rule 1. The use case is now allowed to throw. The discipline rots. A year later you have use cases that sometimes throw and sometimes return failure and the consumer has to handle both. The no-throw rule was supposed to give you a single contract; you've abandoned it.
Option 2: return Result.failure from the transaction callback and have the wrapper call rollback manually.
There is no manual rollback. You don't own the connection. dataSource.transaction committed the moment the callback returned without throwing. By the time your wrapper sees the Result.failure outside the callback, it's too late — the writes are committed.
Option 3: don't use transactions; rely on natural-key constraints and idempotent inserts.
This works for a subset of cases and fails for the rest. "Insert these four polymorphic rows in correct dependency order, with a Polish-Czech address normalizer between steps two and three" doesn't have a natural-key constraint that helps you. You need transactional rollback.
The sentinel
The pattern that resolves the conflict is a small one, but I think it's worth a name. Define a private exception class — the sentinel — that the infrastructure layer uses to signal rollback to the transaction machinery. Throw it inside the callback when the use case returns failure. Catch it at the wrapper boundary. Convert it back to a Result.
class TransactionRollbackSignal extends Error {
constructor(public readonly result_error: string) {
super(`Transaction rolled back: ${result_error}`);
this.name = 'TransactionRollbackSignal';
}
}
async function execute_transaction<T>(
operation: () => Promise<Result<T, string>>,
): Promise<Result<T, string>> {
try {
return await runInTransaction(
async (): Promise<Result<T, string>> => {
const result = await operation();
if (!result.success) {
throw new TransactionRollbackSignal(result.error);
}
return result;
},
{ propagation: Propagation.REQUIRED },
);
} catch (error) {
if (error instanceof TransactionRollbackSignal) {
return failure(error.result_error);
}
// Unexpected throw — convert to failure preserving message
const msg = error instanceof Error ? error.message : String(error);
return failure(`Transaction error: ${msg}`);
}
}
Three properties make this work.
The use case never sees the sentinel. The user-supplied operation returns Result.failure(msg). The wrapper throws the sentinel inside the transaction callback. The transaction machinery rolls back. The wrapper's outer try/catch catches the sentinel, unwraps the message, and returns Result.failure(msg). From the caller's perspective, the use case returned failure and the transaction rolled back. There is no thrown error in their world.
Unexpected throws are still caught. Connection drops, type errors, anything that happens during the operation and isn't a sentinel — those get caught by the same try/catch, logged, and converted to failure('Transaction error: ...'). The contract is preserved: the wrapper always returns a Result, never throws.
Rollback becomes structural, not aspirational. A use case that returns Result.failure(...) after partial writes — say, three rows inserted before the fourth fails — gets those writes rolled back automatically. The use case author didn't write a single line of rollback code. They returned failure. The infrastructure handled the rest.
Where to put the wrapper
I put execute_transaction on a base repository class. The repositories that participate in multi-write operations expose a public with_transaction(...) method that delegates to execute_transaction. The use case calls community_repository.with_transaction(async (tx_repo) => { ... }) and orchestrates inside the callback.
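A sketch of that shape (the class names come from the prose; execute_transaction's body is stubbed here so the sketch stands alone, since the real one is the sentinel wrapper shown above):

```typescript
type Result<T, E> = { success: true; data: T } | { success: false; error: E };

abstract class BaseRepository {
  // In real code this is the sentinel wrapper; stubbed for illustration.
  protected async execute_transaction<T>(
    operation: () => Promise<Result<T, string>>,
  ): Promise<Result<T, string>> {
    return operation();
  }

  // Public entry point: use cases orchestrate multi-write operations inside
  // the callback, with `this` passed in as the transaction-bound repository.
  async with_transaction<T>(
    operation: (tx_repo: this) => Promise<Result<T, string>>,
  ): Promise<Result<T, string>> {
    return this.execute_transaction(() => operation(this));
  }
}

class CommunityRepository extends BaseRepository {}
```

The use case never imports the wrapper, the sentinel, or the transaction library; it only sees with_transaction and the Result coming back.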
One subtlety worth knowing: if you use typeorm-transactional with Propagation.REQUIRED (the default in the snippet above), nested calls to execute_transaction join the outer transaction instead of opening a new one. This is what you want for cross-module atomic writes — a use case in module A can call into module B's repository, which in turn calls execute_transaction, and both end up writing inside the same transaction. If the outer use case returns Result.failure, the inner writes roll back too. This is the property that lets us do cross-module atomic operations without sagas, with the trade-off being that the modules can't yet be deployed as separate services (which we document and accept).
Why I think this deserves a name
Most of the patterns in domain-driven design are well-named: aggregate, value object, domain event, repository. The patterns that bridge DDD with persistence are not. People rediscover this sentinel in different forms — an EmptyResultError, a ManualRollback exception, an InternalRollbackSignal — and each name carries hints about what it's for. Naming it the sentinel makes its job explicit: it's a private exception whose only purpose is to communicate "roll back this transaction" to a mechanism that requires exceptions as its rollback signal. It is not an error in the application sense. It is a control-flow primitive.
The mental model that helps is: your code never throws this sentinel; the infrastructure does, on behalf of your code's Result.failure return value. You write return failure(...) the same way you always do. The wrapper translates that into the throw the transaction machinery needs, and translates the catch back into the failure the caller expects. The throw exists structurally but lives entirely inside one helper function. The rest of the codebase keeps its no-throw discipline.
One caveat
Don't put logic that depends on the sentinel anywhere except the wrapper. If a developer writes catch (e) { if (e instanceof TransactionRollbackSignal) { /* do something */ } } in business code, they've punched a hole in the abstraction. The sentinel is a private contract between the infrastructure layer and the transaction machinery. The wrapper's try/catch is the only place that should know it exists.
You can enforce this with export visibility — define the sentinel in the infrastructure module and don't export it from the module's public API; the module system then makes it inaccessible from outside. If you must export it (some test setups need the type), at least mark it @internal in the JSDoc and add a lint rule that flags unintended use.
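One way to express that lint rule, as a hedged sketch using ESLint's built-in no-restricted-imports rule with an override (the file paths and module name are hypothetical and depend entirely on your layout):

```jsonc
// .eslintrc fragment: forbid importing the sentinel anywhere outside
// the infrastructure directory that owns the transaction wrapper.
{
  "overrides": [
    {
      "files": ["src/**/*.ts"],
      "excludedFiles": ["src/infrastructure/**"],
      "rules": {
        "no-restricted-imports": ["error", {
          "paths": [{
            "name": "src/infrastructure/transaction",
            "importNames": ["TransactionRollbackSignal"],
            "message": "The sentinel is a private contract of the transaction wrapper."
          }]
        }]
      }
    }
  ]
}
```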
What this costs and what it earns
The cost is one new private exception class and ~30 lines of infrastructure code. The earn is significant: domain and application layers stay throw-free, cross-module atomic writes become a one-line wrapper around a use case, rollback failures stop being a thing developers can forget, and the rule about "no throwing in use cases" stays absolute instead of becoming "no throwing except for transactions."
The pattern is small enough that it's hard to argue against once you've seen it, and large enough in implication that I think it's worth writing down. If you've been fighting the same conflict between Result patterns and transaction semantics, this is the resolution. The sentinel isn't an error — it's the translation layer between two contracts.
The extraction caveat
There's one trade-off this pattern locks in that's worth being explicit about. The wrapper relies on cross-module repositories joining the same runInTransaction scope — which works because all the modules share a single process, a single connection pool, and a CLS-bound manager. The moment you extract one of those modules into its own service, the shared transaction breaks. Postgres transactions live on connections; connections live in processes; two services have two connection pools and no way to share a transaction without two-phase commit (which is its own can of worms).
This isn't a problem for the pattern itself — the sentinel is just as useful inside a single service's own transactions — but it does mean that the forwardRef-based cross-module atomic writes are, by construction, a deliberate decision not to extract those modules. If you ever do extract, the use case migrates to a saga + transactional outbox + inbox combination, not to a 2PC bridge. I wrote a separate post on this trade-off: When forwardRef becomes a saga.
The summary: the sentinel pattern is durable in-process. The cross-module use of it is durable as long as the modules ship together. Plan for the extraction question — even if your answer is "not yet" — because the day it changes, you'll want to know what changes with it.