After spending several years building and maintaining CI/CD pipelines for high-traffic e-commerce platforms, I've learned that the difference between a good pipeline and a great one comes down to reliability, speed, and developer experience. In this post, I'll walk through the patterns that have worked well for our team.
Why GitHub Actions?
We evaluated several CI/CD platforms before settling on GitHub Actions. The tight integration with our existing GitHub workflow, the generous free tier for public repos, and the marketplace of reusable actions made it the clear winner for our use case.
That said, the principles here apply regardless of which CI/CD platform you're using. The workflow patterns, caching strategies, and deployment approaches are transferable to GitLab CI, CircleCI, or any other tool.
The best CI/CD pipeline is one that developers trust enough to deploy on a Friday afternoon. If your team is afraid to merge, your pipeline has failed its primary mission.
Pipeline Architecture
Our pipeline follows a straightforward three-stage pattern that balances thoroughness with speed:
- Build -- Compile assets, resolve dependencies, generate artifacts
- Test -- Run unit tests, integration tests, and linting in parallel
- Deploy -- Push to staging or production based on branch
The Build Stage
The build stage is where most time savings come from. We use aggressive caching to avoid rebuilding unchanged dependencies:
```yaml
name: Build and Deploy

on:
  push:
    branches: [main, staging]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
          retention-days: 1
```
The `actions/setup-node@v4` action with `cache: 'npm'` automatically caches the npm cache directory between runs. This alone cut our install step from 45 seconds to about 8 seconds on cache hits.
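For artifacts that `setup-node` doesn't cover, the general-purpose `actions/cache` action works the same way. As a sketch, here's how you might cache a framework's incremental build directory (the `.next/cache` path is an illustrative example, not part of our actual pipeline):

```yaml
      # Restore/save the framework's incremental build cache, keyed on
      # OS and lockfile so the cache invalidates when dependencies change.
      - name: Cache build directory
        uses: actions/cache@v4
        with:
          path: .next/cache
          key: ${{ runner.os }}-build-${{ hashFiles('package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-build-
```

The `restore-keys` fallback means a lockfile change still starts from the most recent cache rather than from scratch.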
Parallel Testing
Running tests sequentially is a common bottleneck. We split our test suite into parallel jobs that all depend on the build stage:
```yaml
  unit-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      - run: npm ci
      - run: npm run test:unit

  lint:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint

  integration-tests:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      - run: npm ci
      - run: npm run test:integration
```
Deployment Strategies
For e-commerce platforms, zero-downtime deployment is non-negotiable. We use a blue-green deployment pattern with AWS:
- Staging deploys happen on every push to the `staging` branch
- Production deploys require a merge to `main` plus manual approval
- Rollbacks are instant -- just swap the target group back to the previous environment
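The manual-approval gate maps naturally onto GitHub's environment protection rules. Here's a sketch of what a production deploy job could look like -- the `production` environment name, job names, and the swap script path are illustrative assumptions, not our exact configuration:

```yaml
  deploy-production:
    needs: [unit-tests, lint, integration-tests]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    # Required reviewers configured on the 'production' environment in
    # repo settings must approve before this job starts running.
    environment: production
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      # Placeholder for the blue-green swap: the real script points the
      # load balancer at the target group of the newly deployed environment.
      - name: Swap target groups
        run: ./scripts/blue-green-swap.sh
```

Because the approval lives in the environment settings rather than the workflow file, you can change who signs off on production without touching the pipeline itself.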
Monitoring and Alerts
A pipeline without monitoring is flying blind. We send deployment notifications to Slack and track these metrics:
- Build duration (target: under 5 minutes)
- Test pass rate (target: 100%, obviously)
- Deploy frequency (tracking weekly trends)
- Mean time to recovery (MTTR) after failed deploys
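For the Slack side, a minimal notification step can be appended to the end of the deploy job. This assumes the official `slackapi/slack-github-action` with an incoming webhook URL stored as a repository secret; exact inputs vary between versions of the action:

```yaml
      # Final step of the deploy job; if: always() fires the notification
      # on failure too, so broken deploys are just as visible as good ones.
      - name: Notify Slack
        if: always()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Deploy of ${{ github.sha }} finished: ${{ job.status }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
```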
Key Takeaways
If you're setting up CI/CD for an e-commerce platform, here's what I'd prioritise:
- Cache aggressively -- every second counts when developers are waiting
- Parallelise tests -- don't run things sequentially if they don't depend on each other
- Use environment protection rules -- manual approval for production deploys
- Monitor everything -- you can't improve what you don't measure
- Keep it simple -- a pipeline your team understands is better than a clever one they don't
In the next post, I'll cover Docker image optimisation strategies that reduced our container sizes by 60% and cut build times significantly.