Posts tagged "machine-learning"
10 posts tagged with machine-learning
Building a Privacy-First AI Lab: Deploying Local LLMs Without Sacrificing Ethics
Build privacy-first AI lab with local LLMs—run models up to 34B on RTX 3090 (24GB VRAM) with network isolation, traffic monitoring, and real privacy controls.
From Claude in Your Terminal to Robots in Your Workshop: The Embodied AI Revolution
Deploy Vision-Language-Action models for embodied AI robots—integrate physical world interaction with security considerations for homelab automation.
Supercharging Development with Claude-Flow: AI Swarm Intelligence for Modern Engineering
Deploy Claude-Flow AI agent swarms for development—achieve 84.8% SWE-Bench solve rate with neural learning and multi-agent orchestration for complex tasks.
Fine-Tuning LLMs in the Homelab: A Practical Guide
Fine-tune LLMs on homelab hardware with QLoRA and 4-bit quantization. Train Llama 3 8B models on RTX 3090 with dataset prep and optimization strategies.
Securing Your Personal AI/ML Experiments: A Practical Guide
Secure personal AI experiments with model isolation and network segmentation—protect LLM deployments using privacy controls and threat modeling.
Privacy-Preserving AI Training Across My Homelab: Federated Learning with Granular-Ball Computing
Deploy federated learning across homelab with granular-ball computing—train privacy-preserving models with 82% reduced network transfer.
Running LLaMA 3.1 on a Raspberry Pi: Memory-Efficient Edge AI with PIPELOAD
Run LLaMA 3.1 on Raspberry Pi with PIPELOAD pipeline inference—achieve 90% memory reduction and deploy 7B models on 8GB edge devices at 2.5 tokens/sec.
Multimodal Foundation Models: Capabilities, Challenges, and Applications
Build multimodal AI systems with GPT-4 Vision and CLIP—process text, images, and audio together for next-generation foundation model applications.
AI Learning in Resource-Constrained Environments
Train AI models on resource-constrained hardware with quantization, pruning, and distillation; run GPT-3-class capabilities 100x faster through model compression.
The Transformer Architecture: A Deep Dive
Master transformer architecture with self-attention and positional encoding—understand the foundation of GPT-4, BERT, and modern language models.