How I Cut Environment Provisioning from 8 Hours to 60 Seconds with Terraform and GitHub Actions
Every engineering team faces a common challenge: developers need clean environments to test features effectively, but creating them manually can be slow, error-prone, and costly.
Desiree' Weston
11/24/2025 · 4 min read


In my homelab, I built a solution that provisions isolated sandbox environments automatically with a single branch push — and tears them down just as easily.
Here’s how I eliminated all friction in environment provisioning.
The Problem: Manual Environments Kill Momentum
Before building this platform, our environment workflow looked like this:
Engineers manually spun up test environments
Each setup took 2–8 hours
Every environment had subtle configuration differences
Old environments lingered indefinitely, consuming resources
Debugging was painful because nothing matched production
The business impact was real:
Feature delivery slowed
Infrastructure costs climbed
DevOps became a bottleneck
Testing became unreliable
The root cause? Manual lifecycle management. Environments were slow to create, hard to replicate, and nearly impossible to clean up systematically.
I needed automation that made environment creation instant, consistent, and disposable.
The Solution: Ephemeral Sandboxes on Demand
I built a self-service platform that runs inside a Vagrant VM (Ubuntu ARM64) with Docker, Terraform, and a self-hosted GitHub Actions runner. When a developer pushes any branch matching project/*, a complete sandbox environment is automatically spun up.
The flow is simple:
Branch Push → GitHub Actions → Self-Hosted Runner → Terraform → Docker Engine
Each sandbox includes:
Dedicated Docker network for isolation
PostgreSQL database container
Go API application container
Dynamically assigned host port
Public endpoint printed in the GitHub Actions log
Architecture Overview
Here’s how the components fit together:
Developer Pushes Branch
|
v
GitHub Actions (project/*)
|
v
Self-Hosted Runner (Homelab VM)
|
v
Terraform (Docker Provider)
|
v
+---------------------------------+
|          Docker Engine          |
|                                 |
|   +-------------------------+   |
|   |  sandbox-<env> network  |   |
|   |                         |   |
|   |  db-<env>  (Postgres)   |   |
|   |  app-<env> (Go API)     |   |
|   +-------------------------+   |
+---------------------------------+
Result: http://localhost:<random-port>
Implementation: GitHub Actions Orchestration
The magic starts with a simple workflow file that triggers on branch patterns:
# .github/workflows/sandbox-env.yml
name: Sandbox Environment

on:
  push:
    branches:
      - "project/*"

jobs:
  provision:
    runs-on: self-hosted
    defaults:
      run:
        working-directory: infra/terraform
    steps:
      - uses: actions/checkout@v4

      - name: Extract environment name
        id: envname
        run: |
          RAW="${GITHUB_REF#refs/heads/}"
          ENV="${RAW#project/}"
          echo "env=${ENV}" >> "$GITHUB_OUTPUT"

      - name: Terraform Init
        run: terraform init

      - name: Terraform Apply
        run: terraform apply -auto-approve -var "env_name=${{ steps.envname.outputs.env }}"

      - name: Write sandbox endpoint
        run: |
          bash scripts/write-endpoint.sh > sandbox-endpoint.txt
          echo "Sandbox ready:"
          cat sandbox-endpoint.txt
Push a branch called project/feature-x, and you get a sandbox called feature-x. No manual intervention required.
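The post only shows the provisioning workflow, but the repo layout also includes a sandbox-destroy.yml. Its contents aren't shown; a minimal, manually triggered version could look like this sketch (the workflow_dispatch trigger and the env_name input are my assumptions):

```yaml
# .github/workflows/sandbox-destroy.yml (illustrative sketch)
name: Sandbox Destroy

on:
  workflow_dispatch:
    inputs:
      env_name:
        description: "Sandbox environment to destroy"
        required: true

jobs:
  destroy:
    runs-on: self-hosted
    defaults:
      run:
        working-directory: infra/terraform
    steps:
      - uses: actions/checkout@v4

      - name: Terraform Init
        run: terraform init

      # Destroy exactly what apply created for this environment name.
      - name: Terraform Destroy
        run: terraform destroy -auto-approve -var "env_name=${{ github.event.inputs.env_name }}"
```

Because both workflows parameterize everything on env_name, destroy tears down precisely the network and containers that apply created.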
Infrastructure as Code: Terraform + Docker
The Terraform configuration defines the entire sandbox infrastructure:
# infra/terraform/main.tf

terraform {
  required_providers {
    docker = {
      # The Docker provider is published under the kreuzwerker namespace;
      # without this block, terraform init cannot resolve it.
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

variable "env_name" {
  type = string
}

locals {
  network_name = "sandbox-${var.env_name}"
  db_name      = "db-${var.env_name}"
  app_name     = "app-${var.env_name}"
}

resource "docker_network" "sandbox" {
  name = local.network_name
}

resource "docker_container" "db" {
  name  = local.db_name
  image = "postgres:16-alpine"
  env = [
    "POSTGRES_USER=sandbox",
    "POSTGRES_PASSWORD=sandbox",
    "POSTGRES_DB=sandbox_db",
  ]
  networks_advanced {
    name = docker_network.sandbox.name
  }
}

resource "docker_image" "app" {
  name = "sandbox-app:${var.env_name}"
  build {
    context    = "${path.module}/../../app"
    dockerfile = "Dockerfile"
  }
}

resource "docker_container" "app" {
  name  = local.app_name
  image = docker_image.app.name
  ports {
    internal = 8080
    external = 0 # Docker assigns a random available port
  }
  networks_advanced {
    name = docker_network.sandbox.name
  }
}

output "app_port" {
  value = docker_container.app.ports[0].external
}
Key design decisions:
Dynamic port allocation eliminates port conflicts between sandboxes
Isolated networks prevent cross-environment interference
Parameterized naming makes each environment uniquely identifiable
Built-in outputs provide the connection details automatically
A simple bash script extracts the endpoint:
#!/usr/bin/env bash
# scripts/write-endpoint.sh
set -euo pipefail

# Read the host port Terraform assigned and print the sandbox URL.
PORT=$(terraform output -raw app_port)
echo "http://localhost:${PORT}"
Application Setup
The Go application uses a multi-stage Docker build for efficiency:
# app/Dockerfile
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .
FROM alpine:3.19
COPY --from=builder /bin/app /app/app
EXPOSE 8080
CMD ["/app/app"]
This approach keeps the final image small while maintaining fast builds.
Repository Structure
The complete project layout:
/
├─ app/
│  └─ Dockerfile
├─ infra/
│  └─ terraform/
│     ├─ main.tf
│     ├─ variables.tf
│     ├─ outputs.tf
│     └─ scripts/
│        └─ write-endpoint.sh
└─ .github/workflows/
   ├─ sandbox-env.yml
   └─ sandbox-destroy.yml
Everything needed to run the platform lives in the repository, making it portable and version-controlled.
Results: Measurable Impact
Before and after metrics:
Environment creation: 2–8 hours → < 60 seconds
Consistency: Manual configuration → Fully automated
Concurrent environments: 1–2 max → Bounded only by host resources
Cleanup process: Rare, manual → Single command
Developer effort: High touch → Zero touch
Provisioning went from hours to under a minute, roughly a 120x–480x speedup, but the real wins were qualitative:
Zero environment drift — every sandbox is identical
Instant feedback loops — developers test immediately after pushing
Lower cloud costs — teardown is trivial, so nothing lingers
Reduced DevOps burden — self-service removes the ticket queue
Lessons Learned
ARM64 Has Sharp Edges
Many Docker images default to x86_64. I hit countless exec format error crashes before learning to build explicitly for ARM64 or choose compatible base images.
Self-Hosted Runners Require Real Ops Skills
Setting up the runner taught me about networking, file permissions, systemd service configuration, and Docker socket access. It’s a forcing function for platform engineering thinking.
Terraform State Management Matters
With multiple environments running concurrently, I learned the hard way about naming conflicts, idempotency issues, and state locking. These patterns mirror what you’d encounter building internal platforms at scale.
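One pattern that helps with concurrent state (not part of the original setup, so treat it as an option rather than the implementation) is giving each sandbox its own Terraform workspace, so parallel applies don't contend for a single state file:

```shell
#!/usr/bin/env bash
# Hypothetical: select (or create) a per-sandbox Terraform workspace
# before applying, so each environment gets isolated state.
set -euo pipefail

ENV_NAME="$1"  # e.g. "feature-x", extracted from the branch name

terraform workspace select "$ENV_NAME" 2>/dev/null \
  || terraform workspace new "$ENV_NAME"

terraform apply -auto-approve -var "env_name=${ENV_NAME}"
```

Each workspace keeps its own state, so destroying one sandbox can never touch another's resources.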
The biggest takeaway: Automation only delivers value when it removes friction. This system does precisely that.
What’s Next
Short-term improvements:
Auto-teardown on branch deletion — clean up when work is done
24-hour TTL — expire sandboxes automatically
Slack/Teams notifications — alert when environments are ready
Container metadata — better labeling and tracking
Health checks — verify services are actually running
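The first item on that list can already be sketched: GitHub fires a delete event when a branch is removed, so a workflow like the following could trigger teardown automatically (a sketch; the event filtering and step details are my assumptions):

```yaml
# .github/workflows/sandbox-auto-destroy.yml (hypothetical)
name: Sandbox Auto-Destroy

on: delete

jobs:
  destroy:
    # The delete event fires for branches and tags; only act on project/* branches.
    if: github.event.ref_type == 'branch' && startsWith(github.event.ref, 'project/')
    runs-on: self-hosted
    defaults:
      run:
        working-directory: infra/terraform
    steps:
      - uses: actions/checkout@v4

      - name: Terraform Init
        run: terraform init

      - name: Terraform Destroy
        env:
          DELETED_REF: ${{ github.event.ref }}
        run: |
          ENV="${DELETED_REF#project/}"
          terraform destroy -auto-approve -var "env_name=${ENV}"
```

Deleting the branch then becomes the cleanup signal, closing the lifecycle loop with zero extra effort.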
Long-term vision:
Web dashboard — visualize all active sandboxes
Resource guardrails — prevent runaway provisioning
Feature flags — toggle capabilities during provisioning
API/CLI access — support non-GitHub workflows
These additions would transform this into a lightweight Internal Developer Platform capable of serving entire engineering organizations.
Conclusion
Building this platform in my homelab taught me that great developer experiences don’t require massive infrastructure or enterprise budgets. You need thoughtful automation, precise abstractions, and tools that get out of the way.
If your team is still provisioning environments by hand, you’re leaving velocity on the table. Start small, automate the pain points, and watch your feedback loops collapse.
The code is the easy part. The real challenge is designing systems that developers actually want to use.
Want to see the code? The complete implementation is available on GitHub: https://github.com/desinthecloud/automation-onboarding.