Uses
This is my digital toolbox – the stuff that's survived the test of real-world use and hasn't let me down. I'm picky about my tools because bad choices cost time, and time is what I use to tinker with cool stuff. Everything here has a story, usually involving at least one failure before I figured it out.
What started with a $50 Raspberry Pi in 2015 has evolved into a homelab and workflow that actually works for me. This is a living document: I update it when I change something significant or learn a better way.
Hardware
Workstation
- Desktop PC — Intel i9-9900K (2019), 64 GB RAM, RTX 3090 (2021), 1TB NVMe + 8TB HDD storage
Why this build: Needed something that could handle local LLM experiments (hence the 3090's 24GB VRAM) plus VM workloads (hence the 64GB RAM). Built in 2019 for ~$2,400, upgraded GPU in 2021 when I realized 11GB VRAM on my old 2080 Ti limited me to 7B models. The 3090's 24GB handles 7B-34B models comfortably in full VRAM.
Trade-off: Could've gone AMD Threadripper for better multi-threading, but CUDA support for ML work made NVIDIA the obvious choice. The 3090 was expensive ($1,500 during the shortage), but it saves me hundreds per month in cloud GPU costs.
Lesson learned: Originally bought a 2080 Ti in 2020. Pushed it too hard overclocking for LLM inference and burned it out. Taught me to respect thermal limits. Upgraded to the 3090 and haven't looked back.
- Laptop — Framework Laptop (DIY Edition, 2022) with Ubuntu 24.04 LTS
Why Framework: After three laptops that became e-waste because one component failed, I wanted something actually repairable. Framework's modular design means I can upgrade RAM, storage, ports, even the mainboard without replacing the whole machine.
Cost reality: $1,400 for the DIY edition with an i7-1260P, 32GB RAM, and 1TB NVMe. More expensive than a Dell with similar specs, but I value the right-to-repair philosophy.
Ubuntu choice: Tried Fedora for six months, kept breaking after updates. Ubuntu LTS is boring, which is exactly what I want on a laptop I depend on.
- Displays — 34" LG 34WK95U ultrawide (3440x1440, ~$800)
Why ultrawide: Tried dual 27" monitors for years. The bezel gap drove me insane. One seamless ultrawide lets me have three vertical code panes side-by-side without visual interruption. Game changer for monitoring dashboards.
Peripherals
- Keyboard — Wooting 80HE (~$185)
Analog hall effect keys: Thought it was marketing hype until I tried it. The ability to set actuation points per-key and have analog input changed how I interact with my machine. It's absurdly customizable. Now I can't go back to traditional mechanical switches.
- Mouse — Glorious Model O (~$50)
Lightweight champion: 67 grams. After years of heavy gaming mice giving me wrist pain, going ultralight was a revelation. Simple, reliable, cheap.
- Headset — SteelSeries Arctis 7X+ (~$150)
Wireless that actually works: Battery lasts 30+ hours, comfortable for all-day wear, and the mic doesn't sound like I'm in a cave. Works seamlessly across my PC, Xbox, and Switch.
- Coffee — Chemex 10-cup (~$50) + Baratza Encore grinder (~$170)
Because good security engineering requires good coffee. This is non-negotiable. Chemex makes clean, smooth coffee without the bitterness you get from French press. The ritual of manual pour-over also gives me time to think through problems.
Homelab Infrastructure
The Journey: Started with a $50 Raspberry Pi in 2015. Thought "this is all I need." Ten years and ~$8,000 in equipment later, here we are.
- Firewall — Ubiquiti Dream Machine Pro (~$380)
Why UDM Pro: Spent years with pfSense on repurposed hardware. Worked great until it didn't. UDM Pro isn't as flexible, but it's stable, fast, and I don't have to maintain another box. Trade-off accepted.
What I actually use: VLAN segmentation (IoT devices on isolated network), IDS/IPS for threat detection, DPI for traffic analysis. Handles gigabit routing without breaking a sweat.
- Hypervisor — Dell R910 running Incus on Ubuntu 24.04 (~$800 used)
Why enterprise gear: Needed serious compute for VM testing. Considered building custom, but used enterprise hardware is cheap if you can handle the noise. This thing sounds like a jet engine at full throttle. Worth it.
Specs: 4x Intel Xeon E7540 (24 cores / 48 threads total), 256GB RAM, ~400GB ZFS storage pool.
What I run: BOSH-managed VMs for Cloud Foundry and Concourse CI, plus Podman containers for monitoring and services. Incus handles virtualization cleanly with less overhead than alternatives. Tried ESXi for three months in 2020, but the licensing complexity wasn't worth it for a homelab.
Power cost: ~$150/month at idle. Expensive hobby, but cheaper than renting equivalent cloud resources for learning.
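That ~$150/month figure is easy to sanity-check: continuous wattage maps directly to kWh billed. A quick back-of-the-envelope, where the ~1,350 W draw and $0.15/kWh rate are my assumed round numbers, not measured values:

```python
# Back-of-the-envelope homelab power cost.
# Assumed inputs: ~1,350 W continuous draw, $0.15/kWh electricity rate.
watts = 1350            # R910 + switch + NAS + Pis, roughly
rate_per_kwh = 0.15     # USD; varies a lot by region

kwh_per_month = watts / 1000 * 24 * 30   # watts -> kWh over 30 days
cost = kwh_per_month * rate_per_kwh
print(f"{kwh_per_month:.0f} kWh/month -> ${cost:.0f}/month")
```

Roughly 970 kWh/month at that rate lands right around $146, which lines up with the bill.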
- Infrastructure Nodes — 3x Raspberry Pi 5 (16GB each) + 1x Raspberry Pi 4 (8GB)
Dedicated roles: Two Pis run Pi-hole for redundant DNS filtering (network-wide ad blocking, catches ~25% of queries). Another runs Authentik for SSO — single sign-on across Grafana, Concourse, and other services via OIDC.
Why dedicated hardware: DNS and auth are too critical to share resources with other workloads. Separate Pis mean a server reboot doesn't take down name resolution or authentication.
- Storage — TrueNAS Core (~$1,200 for custom build, 2020), ~40TB raw / ~30TB usable (RAIDZ2)
Why TrueNAS: ZFS is bulletproof. I've had drives fail, but never lost data. Snapshots saved me twice when I accidentally deleted things I shouldn't have — both times from careless mistakes in the terminal that would have been painful without ZFS.
Backup strategy: Critical data goes to Backblaze B2 (~$50/month) via restic. Follows the 3-2-1 rule: 3 copies, 2 different media, 1 offsite.
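The offsite leg is a nightly cron job that shells out to restic. A minimal sketch of how I'd build those commands — the bucket name, paths, and retention policy below are placeholders, not my real configuration:

```python
# Sketch of a nightly restic -> Backblaze B2 backup job.
# Repo, paths, and retention here are hypothetical placeholders.
B2_REPO = "b2:my-bucket:homelab"
PATHS = ["/tank/documents", "/tank/photos"]

def restic_cmd(action, *args):
    """Build a restic command line against the offsite repo."""
    return ["restic", "-r", B2_REPO, action, *args]

backup = restic_cmd("backup", *PATHS, "--exclude", "*.tmp")
prune = restic_cmd("forget", "--keep-daily", "7",
                   "--keep-monthly", "6", "--prune")

# In a real cron job each list runs via subprocess.run(cmd, check=True)
print(backup)
```

Separating "build the command" from "run the command" also makes the job trivially unit-testable.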
- Networking — Ubiquiti UniFi Switch 24 PoE (~$380) + 2x U6 Pro APs (~$150 each)
Why Ubiquiti ecosystem: Centralized management, reliable, PoE for clean AP installation. Not the cheapest, not the most feature-rich, but it just works. I've had zero downtime in three years.
Network design: 5 VLANs (Management, Home, Lab, IoT, Guest). IoT devices can't reach anything else. Learned this lesson after a smart bulb tried to phone home to unknown overseas servers 47,000 times in one day.
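The smart-bulb incident is exactly the kind of thing a few lines of log analysis catch: count DNS queries per client and flag anything absurdly chatty. A toy version (the log format here is simplified, not real Pi-hole output):

```python
# Flagging "chatty" clients in simplified Pi-hole-style query logs.
from collections import Counter

log_lines = [
    "query[A] telemetry.example-iot.com from 192.168.40.23",
    "query[A] homelab.local from 192.168.10.5",
    "query[A] telemetry.example-iot.com from 192.168.40.23",
]

def chatty_clients(lines, threshold=2):
    """Return {client_ip: query_count} for clients at/above threshold."""
    counts = Counter(line.rsplit("from ", 1)[1]
                     for line in lines if "from " in line)
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(chatty_clients(log_lines))   # the smart bulb would dominate this
```

With a real day's log and a threshold in the thousands, that bulb's 47,000 queries would be impossible to miss.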
Software & Development
Operating Systems & Virtualization
- Ubuntu 24.04 LTS as primary OS
Boring and stable: After distro-hopping for years (Arch, Fedora, NixOS, Pop!_OS), I settled on Ubuntu LTS. It's boring. Boring is good. I spend my time solving problems, not fixing my OS.
What I learned: The "best" distro is the one that doesn't make you think about it.
- Incus for virtualization
Incus vs Proxmox: Migrated from Proxmox to Incus on Ubuntu 24.04. Incus gives me clean CLI-driven VM and container management without a web UI I rarely used. Pairs well with BOSH for orchestrating Cloud Foundry workloads.
Learning curve: If you know LXD, Incus is familiar. If not, a weekend gets you productive.
- Docker / Podman for containers
Docker for development, Podman for production: Docker is ubiquitous and has better docs. Podman is daemonless and more secure. I use both depending on context.
Container philosophy: If a service isn't in a container, it's doing it wrong. Makes deployment reproducible and rollbacks trivial.
- K3s for lightweight Kubernetes
Learning K8s the hard way: Tried learning full Kubernetes in 2021. Overwhelmed. K3s is stripped-down, easier to understand, perfect for homelab. Once you understand K3s, regular K8s makes sense.
Reality check: K8s is overkill for 90% of homelab use cases. I use it because I want to learn it, not because I need it.
Terminal & Editor
- Ghostty terminal
Recent switch: Moved from Alacritty in October 2024. Ghostty is stupid fast (GPU-accelerated), uses less memory, and the developer is responsive. Now stable and my daily driver — no regrets.
Why not GNOME Terminal: Startup time. Ghostty launches in ~40ms vs ~400ms for GNOME Terminal. When you open dozens of terminals daily, that adds up.
- Zsh shell + oh-my-zsh + plugins
Why not bash: Tab completion and git integration. My most-used plugins:
git, docker, kubectl, z (directory jumping), and fzf integration.
Tried fish: Great shell, but bash compatibility matters for scripts I copy from Stack Overflow. Zsh gives me better UX while staying bash-compatible.
- tmux multiplexer
Essential for remote work: SSH sessions that survive disconnects. I can start a long-running task, disconnect, reconnect hours later, and it's still running. Game changer.
Learning curve: Steep. Took me 3 months to stop fighting it. Now it's muscle memory. Worth the investment.
- VS Code with extensions for Python, Go, Terraform, Docker
Controversial take: I know, "real developers use vim." I tried. For 3 months in 2019. I was 30% slower in vim. Life's too short. VS Code with vim keybindings is my compromise.
Essential extensions: Python (Microsoft), Docker, GitLens, Remote-SSH, Markdown All-in-One, Trailing Spaces.
Remote-SSH is magic: Edit files on remote machines like they're local. No more nano/vi in SSH sessions.
- Tokyo Night theme
Eye comfort: After years of high-contrast themes giving me headaches, Tokyo Night's softer palette is easier on my eyes during long coding sessions. Small quality-of-life improvement that matters.
Security & Monitoring (Homelab)
- Wireshark, tcpdump, nmap for network inspection
The classics: These tools have been around forever because they work. Wireshark for deep packet inspection, tcpdump for quick captures, nmap for discovery. I use them weekly.
Learning investment: Spent ~40 hours over a year learning Wireshark filters. Now I can find issues in minutes that used to take hours.
- Nessus for vulnerability assessment
Using: Nessus Essentials (free version, up to 16 IPs). Tried OpenVAS for a year in 2020—spent more time fixing false positives than finding vulns. Nessus just works.
Trade-off: Free version is limited to 16 hosts, but that covers my critical infrastructure. For a full homelab scan, I rotate scans across subnets or use Grype/OSV for container/package scanning.
What I scan: Everything. Monthly full scans of all homelab assets. Found critical vulns in IoT devices that vendors never patched.
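Rotating a full-subnet scan through the 16-host cap is trivial to script: enumerate the subnet and feed one batch per scan run. A sketch, where the subnet and batch size are just example values:

```python
# Split a subnet into batches small enough for Nessus Essentials'
# 16-IP limit; feed one batch per scheduled scan.
import ipaddress

def scan_batches(cidr, batch_size=16):
    """Return host addresses from `cidr`, chunked into batch_size lists."""
    hosts = list(ipaddress.ip_network(cidr).hosts())
    return [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]

batches = scan_batches("192.168.10.0/24")
print(f"{len(batches)} batches, first has {len(batches[0])} hosts")
```

A /24 has 254 usable hosts, so that comes out to 16 batches: fifteen of 16 hosts and a final one of 14.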
- Grype and OSV-Scanner for supply chain scanning
Free alternatives: For container/code scanning, these are excellent. I also use Trivy. Run all three because overlapping coverage catches more issues.
Discovery: Found a critical vuln in a homelab container with Grype that Nessus missed. Now I always run multiple scanners.
- Wazuh for log analysis and detection
Open-source SIEM: Wazuh aggregates logs from everything and correlates events. Detected a brute-force SSH attack in real-time in 2023. Would've missed it without centralized logging.
Setup time: ~8 hours to configure properly. Worth every minute. Now I have visibility into everything happening on my network.
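The core of that brute-force detection is simple: N failed logins from one source inside a sliding time window. Wazuh expresses this with correlation rules; here's the same idea in plain Python as an illustration, with made-up event data:

```python
# Sliding-window brute-force detection: flag a source IP with
# `threshold` failed logins inside `window` seconds.
from collections import defaultdict

def detect_bruteforce(events, threshold=5, window=60):
    """events: iterable of (timestamp_seconds, source_ip) failed logins."""
    by_ip = defaultdict(list)
    flagged = set()
    for ts, ip in sorted(events):
        times = by_ip[ip]
        times.append(ts)
        while times and ts - times[0] > window:  # expire old attempts
            times.pop(0)
        if len(times) >= threshold:
            flagged.add(ip)
    return flagged

attack = [(i, "203.0.113.7") for i in range(0, 50, 5)]   # 10 tries in 50s
noise = [(0, "192.168.10.5"), (300, "192.168.10.5")]     # normal typos
print(detect_bruteforce(attack + noise))
```

The attacker trips the threshold; the user who fat-fingered a password twice, five minutes apart, does not.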
- Grafana, Prometheus, Loki for metrics, logs, and dashboards
Observability stack: Prometheus scrapes metrics from all nodes, Loki aggregates logs via Promtail, Grafana visualizes everything. Alertmanager routes alerts to ntfy for push notifications to my phone.
Prevented issues: Caught a failing disk before data loss, identified a memory leak in a service, spotted unusual traffic patterns.
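The pattern behind most of those catches is the same: alert only when a metric stays bad across several consecutive scrapes, which is what Prometheus's `for:` clause in an alerting rule does. In miniature, with made-up disk numbers:

```python
# Miniature version of a Prometheus-style alert with a `for:` duration:
# fire only if the condition holds for `consecutive` scrapes in a row.
def should_fire(samples, threshold, consecutive=3):
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

disk_util = [81, 95, 96, 97, 88]   # percent, one sample per scrape
print(should_fire(disk_util, threshold=90))
```

The streak requirement is what keeps a single noisy sample from paging your phone at 3 a.m.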
- OWASP ZAP and gobuster for web/app testing
Pentesting tools: ZAP for automated web app scanning, gobuster for directory/subdomain discovery. Use these for testing anything web-facing before exposing it to the internet.
- Bitwarden (self-hosted) for password management
Why self-hosted: I trust Bitwarden's security model, but I prefer controlling the infrastructure. Running Vaultwarden (lightweight Bitwarden server) on my homelab since 2021.
Migration: Moved from LastPass after their 2022 breach. Haven't looked back.
- YubiKey 5C NFC for hardware 2FA (~$55)
Physical security keys: I use YubiKeys for every account that supports FIDO2/WebAuthn. Phishing-resistant 2FA is non-negotiable.
Rule: If a service doesn't support 2FA, it doesn't get my data. Full stop.
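When a service only offers TOTP codes instead of FIDO2, it's worth knowing that those six digits are nothing exotic: just HMAC-SHA1 over a time counter (RFC 6238). The whole thing fits in stdlib, shown here against the RFC's published test secret:

```python
# TOTP (RFC 6238) in pure stdlib -- what a 6-digit 2FA code actually is.
# FIDO2/WebAuthn (YubiKey) is stronger; this is the common fallback.
import hashlib, hmac, struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", timestamp // step)          # time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret, T=59 -> "287082"
print(totp(b"12345678901234567890", 59))   # -> 287082
```

Seeing that it's a shared secret plus clock also makes it obvious why TOTP is phishable and hardware keys aren't: anything that learns the secret can mint codes forever.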
- CredHub for secrets in automation and CI
Secrets management: Hardcoded secrets are evil. CredHub integrates natively with BOSH and Concourse for automatic secret injection into pipelines and deployments. No manual secret passing.
Why CredHub over Vault: Since I'm already running Cloud Foundry and BOSH, CredHub comes built-in. One less thing to operate.
AI & Coding Tools
- AI Coding Assistants
Claude Code is my primary coding tool. I use it for everything from feature implementation to code review to debugging. It's the backbone of my nexus-agents project — a multi-model orchestration system that routes tasks to Claude, Gemini, Codex, and OpenCode based on what each model is actually good at.
Why Claude Code over alternatives: Terminal-native, understands full project context, and the MCP integration means it can use my custom tools directly. I've tried Copilot, Cursor, and others. Claude Code fits my workflow best.
- Local LLMs on RTX 3090 (24GB VRAM)
Models that actually fit: Llama 3.1 8B (~4GB Q4), Mistral 7B (~4GB Q4), Qwen 2.5 Coder 32B (~18GB Q4), DeepSeek Coder V2 Lite (~9GB Q4).
Why local: Privacy, unlimited usage, learning how they work under the hood. For security research and analyzing potentially sensitive data, local inference is the only acceptable option.
Hardware reality: 24GB VRAM handles 7B-34B models fully in GPU memory with Q4 quantization. Larger models (70B+) require CPU offloading (storing part of model in system RAM), which drops performance from 20+ tokens/second to 2-5 tokens/second. For 70B tasks, I use API access or accept the slower offloaded inference.
Performance sweet spot: CodeLlama 34B at Q4 quantization (~20GB) gives excellent quality at 12-15 tokens/second. 8B models hit 40+ tokens/second. Good enough for 90% of my use cases.
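Those sizes aren't mysterious: a Q4_K_M-style quant averages roughly 4.5 bits per weight once scales and zero-points are counted, so weights alone cost about params × 4.5 / 8 GB (that 4.5 figure is a rule of thumb, not a spec). A quick estimator:

```python
# Rough weights-only VRAM estimate for Q4-quantized models.
# ~4.5 bits/weight approximates Q4_K_M; KV cache needs headroom on top.
def weights_gb(params_billion, bits_per_weight=4.5):
    return params_billion * bits_per_weight / 8

for size in (8, 34, 70):
    fits = weights_gb(size) < 24   # the 3090's VRAM, before KV cache
    print(f"{size}B: ~{weights_gb(size):.1f} GB, fits in 24GB: {fits}")
```

8B lands around 4.5 GB and 34B around 19 GB, matching the numbers above, while 70B needs ~39 GB of weights alone, which is exactly why it spills into system RAM.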
- Ollama for model management
Game changer: Makes running local LLMs actually usable. Tried llama.cpp directly – too much friction. Ollama is Docker-simple.
Install to running LLM in 2 commands:
curl https://ollama.com/install.sh | sh
ollama run llama3.1:8b   # or codellama:34b for larger models
- Use cases that actually work:
- Code review: Catches obvious bugs, suggests improvements. Not perfect, but faster than waiting for human review.
- Security policy analysis: Summarizing 50-page compliance docs into actionable items.
- Homelab troubleshooting: Explains obscure error messages better than Google sometimes.
- Learning new tech: Asks better questions than docs sometimes. Great for "explain like I'm five" moments.
- Blog post editing: Catches typos and awkward phrasing I miss.
- Use cases that don't work:
- Anything requiring real-time data (models are frozen in time).
- Complex multi-step reasoning (hallucinations increase with complexity).
- Critical decisions where hallucinations matter (always verify).
- Code generation for complex systems (good for boilerplate, bad for architecture).
- Reality check: LLMs are tools, not magic. They're autocomplete on steroids. Useful when you understand their limitations, dangerous when you don't.
Services
- Code Hosting: GitHub (public), GitLab CE (self-hosted private)
Why both: GitHub for open-source visibility, GitLab for private repos I don't want in someone else's cloud. GitLab CE is free and feature-complete.
- CI/CD: GitHub Actions (public), Concourse CI (homelab automation)
GitHub Actions: Free for public repos, simple YAML config, integrates perfectly with GitHub. Handles my blog deployment.
Concourse CI: Runs on BOSH-managed VMs in the homelab. Pipeline-as-code with declarative YAML. Handles homelab automation, backup jobs, and deployment pipelines that GitHub Actions can't reach.
- Monitoring: UptimeRobot (free tier)
External health checks: Monitors my public-facing services from outside my network. Notifies me via email/SMS if something goes down. Free tier is generous (50 monitors, 5-minute intervals).
- VPN: WireGuard, Tailscale, ProtonVPN
WireGuard for homelab access: Fast, modern, secure. Self-hosted on my UDM Pro. Connect to my homelab from anywhere.
Tailscale for mesh networking: Zero-config VPN that just works. Free for personal use (up to 20 devices). Magic.
ProtonVPN for privacy: When I need to hide my traffic from my ISP or access region-locked content. Swiss privacy laws, no logs, trustworthy.
- DNS: Cloudflare 1.1.1.1 upstream, Pi-hole local filtering
Layered approach: Pi-hole blocks ads/tracking at the DNS level (25% of queries), Cloudflare DNS for privacy (faster than ISP DNS, no logging).
Why not Google DNS: I don't need Google knowing every domain I visit.
Self-Hosted Services
Running these on Incus VMs and Podman containers because I control my data:
- Grafana, Prometheus, Loki + Alertmanager — Full observability stack (metrics, logs, alerts)
- Authentik — SSO/identity provider (OIDC for Grafana, Concourse, and internal services via Caddy forward-auth)
- Cloud Foundry — Application platform managed by BOSH (VMs for Diego, UAA, routing, log pipeline)
- Concourse CI — Pipeline automation (BOSH-deployed, secrets via CredHub)
- Pi-hole × 2 — Redundant DNS filtering on dedicated Raspberry Pis
- ntfy — Push notifications for alerts (Alertmanager → ntfy → phone)
- Vaultwarden — Self-hosted Bitwarden server for password management
Why self-host: Privacy, learning, control. Also, it's fun. I've learned more about networking, security, and system administration from running these services than from any course.
Cost: ~$50/month for Backblaze B2 and ~$150/month for power, so about $200/month in running costs before hardware amortization. Compared to equivalent SaaS subscriptions (~$300/month), I come out roughly even while learning and owning my data.
CLI Tools
Development
- git — Version control (use it hourly)
- gh — GitHub CLI (faster than web UI for PRs/issues)
- python3 — Scripting & automation (80% of my scripts)
- Go (golang) — Systems programming (learning, not expert)
- rust — Memory-safe development (aspirational, still learning)
Infrastructure
- terraform — IaC (declarative infrastructure, version-controlled)
- ansible — Configuration management (automate everything)
- docker — Containers (daily driver)
- kubectl — Kubernetes (learning)
- k3s — Lightweight Kubernetes (actually using)
Utilities That Changed My Workflow
- tmux — Multiplexer (can't work without it)
- fzf — Fuzzy finder (instant file/history search)
- ripgrep — Code search (10x faster than grep)
- bat — Syntax-highlighted cat (small QoL improvement)
- htop — Process monitor (better than top)
- ncdu — Disk usage (find space hogs instantly)
Pattern: I gradually replace standard tools with modern alternatives when they significantly improve my workflow. Not change for change's sake, but real productivity gains.
Learning
- Platforms: Pluralsight, O'Reilly, YouTube (free)
ROI: These subscriptions pay for themselves if I learn one skill that saves 10 hours. They've saved me hundreds of hours.
YouTube underrated: Free, high-quality content. I've learned more from NetworkChuck, LiveOverflow, and IppSec than from some paid courses.
- Security labs: HackTheBox, TryHackMe, personal homelab (priceless)
Hands-on learning: Reading about security is fine. Breaking things is better. These platforms provide safe, legal environments to practice offensive security.
Homelab advantage: I can test things these platforms don't cover. My lab, my rules.
- Threat intel: AlienVault OTX, abuse.ch feeds, CISA KEV
Free threat intelligence: These feeds tell me what bad actors are exploiting right now. I integrate them into Wazuh for automated detection.
Principles
- Open Source First — Transparent, inspectable tools
Learned this the hard way: Vendor locked me out of my own monitoring data in 2018. Never again. Open source means I control my data and can fix it myself if needed.
Exception: I'll use proprietary tools when they're significantly better (Nessus) or when no viable FOSS alternative exists. Pragmatism over ideology.
- Privacy & Safety — Minimize data exhaust; enforce 2FA everywhere
Rule: If a service doesn't support 2FA, it doesn't get my data. Full stop. Bitwarden + YubiKey for everything.
Data minimization: Services that don't need my real info get SimpleLogin aliases and fake data. Compartmentalization reduces blast radius.
- Automate Boring Things — Script repeatable tasks
Trigger: If I do something manually 3 times, it gets automated. Life's too short for repetitive tasks.
Examples: Database backups (automated), certificate renewal (automated), system updates (automated), blog deployment (automated), VM snapshots (automated).
- Document As You Go — Wikis > memory
Reality check: I don't remember why I made a change 3 months ago without notes. Future me always thanks past me for documentation.
Tools: BookStack for procedures, git commit messages for code changes, inline comments for complex logic.
Learned: If I can't explain it to someone else, I don't understand it well enough.
- Reliability > Novelty — Boring tech for critical paths
Translation: New and shiny is fun for labs. Production runs on battle-tested boring tech. Docker, PostgreSQL, nginx, Ubuntu LTS – they work because they've been broken and fixed 1,000 times.
Exception: I break this rule in the homelab constantly. That's what it's for. Break things, learn, iterate. Just don't do it in production.
Wisdom: The best tech stack is the one you understand, not the one on Hacker News.
Last updated: 2026-03-12