Full-Stack Monorepo Patterns That Actually Work
Practical patterns for organizing full-stack monorepo projects with shared code, clean architecture, and efficient CI/CD
Monorepos are polarizing. Some developers swear by them; others avoid them at all costs. Having worked on both monorepo and polyrepo projects across multiple teams, I can say this: the problem is not the monorepo itself, it is how you organize it. A well-structured monorepo is a joy to work with. A poorly structured one is a nightmare.
This post shares the patterns I have found that actually work in production — not theoretical ideals, but practical structures that have survived real projects with real teams.
Why Monorepo
Before diving into structure, let me be clear about when a monorepo makes sense and when it does not.
Monorepo makes sense when:
- Your frontend and backend share models, types, and business logic
- You want atomic commits that span multiple packages (e.g., changing an API contract and updating the frontend consumer in one PR)
- You have a small-to-medium team that works across the stack
- Unified versioning simplifies your deployment process
- You want a single source of truth for shared utilities and conventions
Monorepo does not make sense when:
- Teams are large and independent, with no shared code
- You need different deployment schedules and release cadences
- Build times become a bottleneck that cannot be mitigated with caching
- Security boundaries require strict code isolation
Assuming you have decided a monorepo is right for your project, let us talk about how to structure it.
Directory Structure
The single most important decision is your top-level directory structure. Here is the pattern that has worked best across multiple projects:
project-root/
├── apps/
│ ├── web/ # React/Next.js frontend
│ ├── mobile/ # React Native or Flutter app
│ ├── api/ # Backend API server
│ └── admin/ # Admin dashboard
├── packages/
│ ├── ui/ # Shared UI component library
│ ├── validators/ # Shared validation logic
│ ├── api-client/ # Generated API client
│ └── config/ # Shared configuration (ESLint, TypeScript, etc.)
├── shared/
│ ├── models/ # Domain models and types
│ ├── utils/ # Pure utility functions
│ └── constants/ # Shared constants
├── tools/
│ ├── scripts/ # Build scripts, generators
│ └── generators/ # Code generation templates
├── docs/
├── package.json # Root package.json (workspaces config)
└── turbo.json # Turborepo config (or nx.json for Nx)
The key principle: apps/ consumes, packages/ provides, shared/ is the foundation.
- apps/ contains deployable applications. Each app has its own build pipeline and deployment target.
- packages/ contains publishable or shareable packages. These may be internal-only or published to a registry.
- shared/ contains code that is imported directly by both apps and packages. These are never published independently.
- tools/ contains developer tooling that is not part of any application.
Why Three Levels
You might wonder why shared/ and packages/ are separate. The distinction is intentional:
- packages/ has its own build step, its own package.json, and potentially its own release cycle. Think of these as libraries with clear boundaries.
- shared/ is imported directly via TypeScript path aliases. No build step, no package publishing. It is just code that multiple things need.
This separation prevents over-engineering. Not every shared piece of code needs to be a proper package with its own build pipeline. Some things are just shared source files.
Shared Code Patterns
Domain Models
The most impactful thing to share across frontend and backend is domain models. When your API sends data, the frontend should have the same type definition for that data — without duplicating it.
// shared/models/user.ts
export interface User {
id: string;
email: string;
name: string;
role: UserRole;
createdAt: Date;
updatedAt: Date;
}
export enum UserRole {
Admin = 'ADMIN',
Editor = 'EDITOR',
Viewer = 'VIEWER',
}
export interface CreateUserRequest {
email: string;
name: string;
role: UserRole;
}
export interface UserListResponse {
users: User[];
total: number;
page: number;
pageSize: number;
}
The backend uses these types for request validation and response shaping. The frontend uses them for API client types and state management. A single change to the model updates both sides.
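To make this concrete, here is a minimal sketch of one shared type serving both sides. The types are redefined inline so the snippet is self-contained (in the monorepo they would come from `@shared/models/user`), and the handler and client function names are hypothetical:

```typescript
// Self-contained sketch; in the monorepo these come from '@shared/models/user'.
interface User { id: string; email: string; name: string; }
interface UserListResponse { users: User[]; total: number; page: number; pageSize: number; }

// Backend: shape the response with the shared type (hypothetical handler)
function listUsersResponse(users: User[], page: number, pageSize: number): UserListResponse {
  return { users, total: users.length, page, pageSize };
}

// Frontend: annotate the fetch result with the identical type (hypothetical client)
async function fetchUsers(page: number): Promise<UserListResponse> {
  const res = await fetch(`/api/users?page=${page}`);
  return res.json() as Promise<UserListResponse>;
}
```

If the `User` interface gains a field, both functions pick it up in the same commit.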
Validation Logic
Validation is another high-value shared layer. The same rules that validate input on the backend should be available on the frontend for client-side validation.
// packages/validators/src/user.ts
import { z } from 'zod';
import { UserRole } from '@shared/models/user';
export const createUserSchema = z.object({
email: z.string().email('Invalid email address'),
name: z.string().min(2, 'Name must be at least 2 characters').max(100),
role: z.nativeEnum(UserRole),
});
export const updateUserSchema = createUserSchema.partial();
export type CreateUserInput = z.infer<typeof createUserSchema>;
export type UpdateUserInput = z.infer<typeof updateUserSchema>;
// Minimal error type so validation failures carry structured field errors
export class ValidationError extends Error {
  constructor(public readonly details: unknown) {
    super('Validation failed');
  }
}

// Reusable validation helper
export function validateOrThrow<T>(schema: z.ZodSchema<T>, data: unknown): T {
  const result = schema.safeParse(data);
  if (!result.success) {
    throw new ValidationError(result.error.flatten());
  }
  return result.data;
}
Using Zod for validation means the same schema validates API requests on the server and form inputs on the client. No more drift between what the backend expects and what the frontend validates.
API Contracts
API contracts define the interface between frontend and backend. I define them as a shared type:
// shared/models/api-contracts.ts
import type { User, CreateUserRequest } from './user';

export interface ApiContract<TRequest, TResponse> {
method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH';
path: string;
requestType: TRequest;
responseType: TResponse;
}
// Define your API endpoints as a typed map
export interface ApiEndpoints {
'users.list': ApiContract<{ page: number; pageSize: number }, { users: User[]; total: number }>;
'users.get': ApiContract<{ id: string }, User>;
'users.create': ApiContract<CreateUserRequest, User>;
'users.update': ApiContract<{ id: string } & Partial<CreateUserRequest>, User>;
'users.delete': ApiContract<{ id: string }, void>;
}
This pattern enables type-safe API clients. The frontend knows exactly what parameters each endpoint expects and what it returns. If the backend changes a contract, the frontend gets a compile error.
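A minimal sketch of a client built on this map is below. The contract types are inlined so the snippet runs standalone, and the `handlers` registry is a purely illustrative stand-in for a real HTTP transport:

```typescript
// Inlined versions of the shared contract types so the sketch is self-contained.
interface User { id: string; name: string; }
interface ApiContract<TRequest, TResponse> {
  method: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH';
  path: string;
  requestType: TRequest;
  responseType: TResponse;
}
interface ApiEndpoints {
  'users.get': ApiContract<{ id: string }, User>;
}

type RequestOf<K extends keyof ApiEndpoints> = ApiEndpoints[K]['requestType'];
type ResponseOf<K extends keyof ApiEndpoints> = ApiEndpoints[K]['responseType'];

// Mock transport: one handler per endpoint, standing in for fetch/HTTP.
const handlers: { [K in keyof ApiEndpoints]: (req: RequestOf<K>) => ResponseOf<K> } = {
  'users.get': (req) => ({ id: req.id, name: 'Ada' }),
};

// The endpoint key selects both the request and the response type.
function call<K extends keyof ApiEndpoints>(key: K, req: RequestOf<K>): ResponseOf<K> {
  return handlers[key](req);
}

const user = call('users.get', { id: '42' }); // typed as User
// call('users.get', { wrong: true }) would be a compile-time error
```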
Clean Architecture Layers
Monorepo structure and clean architecture complement each other. Here is how the layers map:
┌─────────────────────────────────────────────────────────┐
│ PRESENTATION │
│ apps/web, apps/mobile, apps/admin │
│ UI components, screens, navigation │
├─────────────────────────────────────────────────────────┤
│ DOMAIN │
│ shared/models, packages/validators │
│ Business entities, rules, interfaces │
├─────────────────────────────────────────────────────────┤
│ DATA │
│ apps/api (repositories, services) │
│ packages/api-client (HTTP client) │
│ Database access, external API integrations │
└─────────────────────────────────────────────────────────┘
Dependency Rule
Dependencies flow inward. The presentation layer depends on domain models. The data layer implements domain interfaces. The domain layer depends on nothing.
In practice, this means:
- shared/models imports nothing from apps/ or packages/
- packages/validators imports from shared/models but not from apps/
- apps/web imports from shared/ and packages/ but not from other apps/
- apps/api imports from shared/ and implements domain interfaces
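These rules can be enforced mechanically rather than by convention. One sketch, assuming `eslint-plugin-import` is installed, uses its `no-restricted-paths` rule (files under a `target` zone may not import from the matching `from` path):

```javascript
// .eslintrc.cjs (root) — sketch, assuming eslint-plugin-import is installed
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': ['error', {
      zones: [
        // shared/ depends on nothing above it
        { target: './shared', from: './apps' },
        { target: './shared', from: './packages' },
        // packages/ never reaches into apps/
        { target: './packages', from: './apps' },
      ],
    }],
  },
};
```

With this in place, a stray `import ... from '../../apps/web/...'` inside shared code fails lint instead of slipping through review.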
// apps/api/src/repositories/user.repository.ts
// The data layer implements the domain interface
import type { User, CreateUserInput } from '@shared/models/user';
import { db } from '../database';
export class UserRepository {
async findById(id: string): Promise<User | null> {
const row = await db.query('SELECT * FROM users WHERE id = $1', [id]);
return row ? this.toDomain(row) : null;
}
async create(input: CreateUserInput): Promise<User> {
const row = await db.query(
'INSERT INTO users (email, name, role) VALUES ($1, $2, $3) RETURNING *',
[input.email, input.name, input.role],
);
return this.toDomain(row);
}
private toDomain(row: Record<string, any>): User {
return {
id: row.id,
email: row.email,
name: row.name,
role: row.role,
createdAt: new Date(row.created_at),
updatedAt: new Date(row.updated_at),
};
}
}
The toDomain mapping is important. Database rows are not domain models. The mapping layer keeps your domain clean and your database schema free to evolve independently.
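As a standalone illustration (row and model types inlined here), the mapping is the only place that knows about the database's column names:

```typescript
// Inlined sketch of the row-to-domain boundary. Renaming a column like
// created_at only ever touches this one function, not the rest of the app.
interface UserRow {
  id: string;
  email: string;
  name: string;
  role: string;
  created_at: string; // DB convention: snake_case, timestamps as strings
  updated_at: string;
}
interface User {
  id: string;
  email: string;
  name: string;
  role: string;
  createdAt: Date; // domain convention: camelCase, real Date objects
  updatedAt: Date;
}

function toDomain(row: UserRow): User {
  return {
    id: row.id,
    email: row.email,
    name: row.name,
    role: row.role,
    createdAt: new Date(row.created_at),
    updatedAt: new Date(row.updated_at),
  };
}
```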
CI/CD Considerations
A monorepo without smart CI/CD will punish you with slow build times. Every PR should not trigger a full build of every application.
Selective Builds
The key optimization is to only build and test what changed. With Turborepo or Nx, this is straightforward:
// turbo.json
{
"$schema": "https://turbo.build/schema.json",
"pipeline": {
"build": {
"dependsOn": ["^build"],
"outputs": ["dist/**", ".next/**"]
},
"test": {
"dependsOn": ["build"],
"outputs": []
},
"lint": {
"outputs": []
},
"typecheck": {
"dependsOn": ["^build"],
"outputs": []
}
}
}
The ^build syntax means “build all dependencies first.” Turborepo automatically determines which packages need rebuilding based on what files changed. If you only modified apps/web, the API server does not get rebuilt.
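The same selection is available on the command line through Turborepo's `--filter` syntax; the commands below are a sketch assuming a workspace package named `web` and `origin/main` as the comparison branch:

```shell
# Build only packages changed since origin/main, plus their dependents
# (the leading "..." pulls in everything that depends on a changed package)
turbo run build --filter="...[origin/main]"

# Build one app and everything it depends on
turbo run build --filter="web..."
```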
Caching Strategies
Caching is what makes monorepo CI/CD viable. There are two levels:
Remote caching stores build outputs keyed by input file hashes. If the inputs have not changed, the build is skipped entirely. Turborepo and Nx both offer this out of the box.
Layer-specific caching in Docker builds ensures that dependency installation is cached separately from source compilation:
# Dockerfile for apps/api
FROM node:20-alpine AS builder
WORKDIR /app
# Copy workspace manifests first (cached as long as dependencies are unchanged)
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/ ./packages/
COPY shared/ ./shared/
RUN pnpm install --frozen-lockfile
# Copy application source (changes frequently, but the install layer above stays cached)
COPY apps/api/ ./apps/api/
RUN pnpm --filter api build
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/apps/api/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/main.js"]
Affected Test Detection
For testing, only run tests for packages affected by the change:
# .github/workflows/ci.yml
name: CI
on:
pull_request:
jobs:
detect-changes:
runs-on: ubuntu-latest
outputs:
packages: ${{ steps.filter.outputs.changes }}
steps:
- uses: actions/checkout@v4
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
web:
- 'apps/web/**'
- 'packages/ui/**'
- 'shared/**'
api:
- 'apps/api/**'
- 'packages/validators/**'
- 'shared/**'
test-web:
needs: detect-changes
if: contains(needs.detect-changes.outputs.packages, 'web')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: pnpm install --frozen-lockfile
- run: pnpm --filter web test
test-api:
needs: detect-changes
if: contains(needs.detect-changes.outputs.packages, 'api')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: pnpm install --frozen-lockfile
- run: pnpm --filter api test
This pattern means a frontend-only change does not waste CI minutes running backend tests.
Lessons From Real Projects
After applying these patterns across several projects, here are the lessons that stand out:
1. Do not share everything. Just because code can be shared does not mean it should be. If only one app uses a piece of logic, keep it in that app. Premature sharing creates unnecessary coupling.
2. Use TypeScript path aliases consistently. Configure path aliases in your root tsconfig.json and extend them in each package:
{
"compilerOptions": {
"paths": {
"@shared/*": ["./shared/*"],
"@ui/*": ["./packages/ui/src/*"],
"@validators/*": ["./packages/validators/src/*"]
}
}
}
This keeps imports readable: import { User } from '@shared/models/user' instead of import { User } from '../../../shared/models/user'.
3. Pin your tooling versions. A monorepo means one build toolchain. Pin TypeScript, ESLint, Prettier, and test framework versions in the root package.json. Different apps using different versions of the same tool is a recipe for inconsistent behavior.
4. Document the dependency graph. As the monorepo grows, it becomes hard to remember which packages depend on which. A docs/architecture.md file with a dependency diagram saves hours of confusion for new team members.
5. Keep the build fast. If the build takes more than a few minutes, developers will avoid running it. Invest in caching early, set up incremental builds, and monitor build times as the project grows.
Conclusion
A monorepo is not inherently good or bad — it is a tool. When organized with clear boundaries between apps, packages, and shared code, it enables faster development through code sharing and atomic changes. When organized poorly, it becomes a tangled web of implicit dependencies and slow builds.
The patterns in this post — the three-level directory structure, shared domain models and validation, clean architecture layers, and selective CI/CD — have worked well across multiple production projects. Start with this structure, adapt it to your specific needs, and resist the temptation to over-share code. The goal is not to share everything, but to share the right things in the right way.
