Hi, I'm William Zujkowski

Senior Information Security Engineer with 15+ years securing federal platforms. I spent last Saturday at 2 AM debugging a K3s cluster in my homelab – and loved every minute of it. Here, I share what works, what spectacularly fails, and the lessons I've learned from breaking things (legally, of course) since 2010.

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
— Isaac Asimov, The Zeroth Law

That nerdy kid in 1995 devouring Foundation and Empire under the covers with a flashlight? Yeah, that was me. Now I'm helping secure a FedRAMP Moderate platform while running local LLMs on my RTX 3090 to explore AI security. I'm implementing the actual controls to keep AI systems safe – thinking about the same questions Asimov posed, just with more YAML files, threat models, and 3 AM security patches than I expected.

Recent Posts

Security insights, AI experiments, homelab adventures, and lessons from the field.

8 min read

Consensus Voting With AI Models: When Three Opinions Beat One

How multi-model consensus voting catches blind spots that single models miss. The research behind adversarial roles, Bayesian aggregation, and structured deliberation across Claude, Gemini, and Codex.

ai · software-engineering · open-source
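The idea behind the post can be sketched as a toy weighted vote: each model casts an answer, and answers accumulate the reliability weight of the models backing them. This is only an illustrative simplification of consensus voting (the model names and reliability weights below are hypothetical placeholders, not the post's actual Bayesian aggregation):

```python
# Illustrative sketch only: weighted consensus voting across model answers.
# Model names and reliability weights are hypothetical placeholders.
from collections import defaultdict

def consensus_vote(answers, weights):
    """Aggregate per-model answers into a single verdict.

    answers: dict mapping model name -> its answer
    weights: dict mapping model name -> prior reliability weight
    """
    scores = defaultdict(float)
    for model, answer in answers.items():
        scores[answer] += weights.get(model, 1.0)
    # Highest cumulative weight wins; ties resolve arbitrarily here.
    return max(scores, key=scores.get)

votes = {"claude": "vulnerable", "gemini": "vulnerable", "codex": "safe"}
prior = {"claude": 0.9, "gemini": 0.8, "codex": 0.85}
print(consensus_vote(votes, prior))  # "vulnerable" (0.9 + 0.8 > 0.85)
```

Two models agreeing can outvote a more confident dissenter, which is the "three opinions beat one" intuition; the full post layers adversarial roles and structured deliberation on top of this.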

Let's Connect

Got a security question? Building your first homelab? Debating whether that CVSS 7.2 vulnerability is worth an emergency patch at 10 PM? Let's chat.