Building Real Infrastructure on Apple Silicon: From VM Chaos to Running Go Apps

Part 2 of my home lab series

The MacBook Air M2 sitting on my desk is modest hardware — 16GB of RAM in an ultraportable form factor. But after weeks of fighting with ARM images, Vagrant configs, and VMware quirks, it’s become more useful than any AWS free-tier account I’ve ever used.

Desiree' Weston

11/19/2025 · 3 min read

It’s a lab where I can break things, rebuild them, and actually understand what went wrong.

In Part 1, I fought through the basics: getting Vagrant and VMware Fusion to play nice with ARM architecture. This time, I’m building on that foundation. The goal wasn’t to make another VM. It was to turn the lab into a platform capable of running real applications.

And I decided to start with Go.

The Final Architecture (Spoiler: It Actually Works)

Getting a stable home lab on Apple Silicon means fighting through problems that don’t exist on Intel:

  • ARM-only base images

  • VMware Fusion provider quirks

  • Networking that works differently depending on which provider you use

  • Tools that are “almost” compatible

After enough trial and error, here’s what finally clicked:

MacBook Air M2 (Host)
│
│  Vagrant + VMware Fusion Provider
│
Ubuntu ARM64 VM
├── Docker Engine
├── Go Toolchain
└── Port Forwarding → :8080

Simple architecture. But every piece works reliably, and I can spin up, destroy, and rebuild environments in minutes.

Here’s the Vagrantfile that made it stable:

Vagrant.configure("2") do |config|
  # ARM64 Ubuntu base box that works on Apple Silicon
  config.vm.box = "bento/ubuntu-22.04-arm64"

  config.vm.provider "vmware_desktop" do |v|
    v.gui    = false
    v.memory = 4096
    v.cpus   = 2
  end

  # Expose the app's port to macOS
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # Install Docker and let the vagrant user run it without sudo
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update -y
    apt-get install -y docker.io
    usermod -aG docker vagrant
  SHELL
end

The moment this built cleanly and I could hit the VM from my browser, I knew the hard part was over.

Why Go? Because It Powers Everything I Want to Learn

I didn’t want to build another throwaway Hello World app. I wanted something relevant to the tools I’m trying to understand.

Go runs the infrastructure layer of the modern cloud:

  • Kubernetes — container orchestration

  • Docker — container runtime

  • Terraform — infrastructure as code

  • Prometheus — monitoring

  • Consul, Etcd — service discovery and distributed config

If you’re serious about DevOps, learning the language that powers these tools gives you an unfair advantage. You start reading source code instead of just documentation.

Inside the VM, setup was straightforward:

sudo apt install -y golang
go version

I created a simple project structure. One wrinkle with modern Go: go build won’t run without a go.mod, so the directory also needs a module file (created with go mod init go-app):

/home/vagrant/go-app/
├── go.mod
└── main.go

And started building.

Building a Real HTTP Service (Not Just Hello World)

I wanted this to feel like an honest service — something with routing, endpoints, and structure.

main.go:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Root endpoint: confirms a request made it from the host, through the
	// VM's forwarded port, to the app.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Go running inside the Home Lab!")
	})

	// Health endpoint: something a proxy or container runtime can poll.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "OK")
	})

	log.Println("Server starting on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Build and run inside the VM:

go build -o app
./app

Because of Vagrant’s port forwarding, I could hit it directly from macOS:

curl http://localhost:8080

That first response hit different.

The lab wasn’t just assembled anymore. It was running workloads. The architecture I’d been fighting with for weeks was finally doing what I built it to do.

Containerizing the Service (Making It Production-Like)

Running the binary directly was a start. But I wanted to treat this like something I’d actually deploy, so the next step was a multi-stage Dockerfile:

# Stage 1: build the binary with the full Go toolchain
FROM golang:1.21-alpine AS build
WORKDIR /app
COPY . .
RUN go build -o app

# Stage 2: ship only the compiled binary on a minimal base image
FROM alpine
WORKDIR /app
COPY --from=build /app/app .
EXPOSE 8080
CMD ["./app"]

Build and run:

docker build -t go-app .
docker run -p 8080:8080 go-app

Now I had a workflow that mirrored real infrastructure:

  1. Code on the host (macOS)

  2. VM runs the build and container

  3. Traffic flows: macOS → VM → Docker container → Go app

This was the moment the lab stopped feeling like a toy and began to feel like a platform.
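
To prove that whole chain from the host side, a quick Go client works as well as curl. This is a hypothetical helper (say, healthcheck.go kept on the Mac), not part of the deployed service:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Hits the forwarded port on the host; Vagrant maps it into the VM,
	// and Docker maps it into the container running the Go app.
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://localhost:8080/health")
	if err != nil {
		log.Fatalf("lab not reachable: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s", resp.StatusCode, body)
}

Running go run healthcheck.go from macOS and seeing a 200 status with an OK body means every hop in the chain is doing its job.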

What This Phase Taught Me

1. Local-first development is faster and more forgiving

I ran into port conflicts, networking problems, and Docker permission errors along the way. But each failure became a lesson, and I could iterate in seconds instead of waiting for cloud resources to spin up.

2. Apple Silicon forces you to learn the “why” behind the tools

Nothing works out of the box. You don’t just follow tutorials — you have to understand architecture differences, how providers handle ARM, and why specific images fail. That friction makes you better.

3. Go feels natural once you run it in a real environment

Reading Go tutorials is one thing. Building, containerizing, and exposing a service through layered networking teaches you how real systems work.

4. A home lab only becomes useful when it runs real workloads

Once the Go app was accessible through my browser, the lab transformed from a learning project into an actual tool.

What’s Next: Building a Load-Balanced Stack

Now that the lab is running applications, I’m scaling up.

Project 3 will cover:

  • Multiple containerized app instances

  • A reverse proxy handling traffic distribution

  • Load balancing between containers

  • Host access through clean port forwarding

Basically, I’m building a tiny production environment — right here on a 16GB MacBook Air.
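
As a preview of that proxy layer, here’s a minimal sketch using nothing but Go’s standard-library httputil.ReverseProxy, round-robining requests across two app instances. The backend ports (8081, 8082) are placeholders, and Project 3 may well use a dedicated proxy like nginx instead:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Two app containers published on different host ports (placeholder values).
	backends := []*url.URL{
		mustParse("http://localhost:8081"),
		mustParse("http://localhost:8082"),
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			// Pick the next backend in round-robin order.
			target := backends[atomic.AddUint64(&counter, 1)%uint64(len(backends))]
			r.URL.Scheme = target.Scheme
			r.URL.Host = target.Host
		},
	}

	log.Println("Load balancer listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", proxy))
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

Even if the final stack ends up on a purpose-built proxy, writing the round-robin logic by hand makes it obvious what the heavier tools are doing under the hood.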

If you’re following along, the next article will show you how to turn a single-service lab into something that looks like real distributed infrastructure.