LLM Training & Optimization Patterns¶
Version: 1.0.0 | Last Updated: 2025-01-13 | Status: Active | Standard Code: LLM
Overview¶
This standard collects best practices for LLMs working with this standards repository: how to load standards efficiently, keep token usage low, and cache frequently used material so that results stay consistent, high quality, and maintainable.
🎯 Core Optimization Principles¶
1. Always Use Short Codes¶
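Refer to standards by their short codes (CS:api, SEC:auth, TS:unit, and so on) rather than by their full document names; every command in this guide uses that compact form, which keeps requests small and unambiguous.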
2. Progressive Loading¶
❌ @load CS:* # Don't load everything
✅ @load CS:api # Start specific
✅ @add CS:security when:[needed] # Add incrementally
3. Context-Aware Loading¶
# Let the system detect and load
@load context:[auto] # Detects language, framework, task
@load similar:[current-file] # Loads relevant standards
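For intuition, here is a minimal sketch of what context:[auto] detection could look like. The extension-to-standards mapping and the detect_standards helper are hypothetical; they only illustrate inferring standards from the current file, reusing the language profiles listed under Language-Specific Loading below.

```python
from pathlib import Path

# Hypothetical mapping from file extension to standard short codes,
# mirroring the language profiles shown under Language-Specific Loading
EXTENSION_PROFILES = {
    ".py": ["CS:python", "TS:pytest", "SEC:python"],
    ".js": ["CS:javascript", "TS:jest"],
    ".tsx": ["CS:javascript", "FE:react", "TS:jest"],
}

def detect_standards(current_file: str) -> list[str]:
    """Infer which standards to load from the file currently being edited."""
    return EXTENSION_PROFILES.get(Path(current_file).suffix, [])

print(detect_standards("api/handlers.py"))  # ['CS:python', 'TS:pytest', 'SEC:python']
```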
📚 Efficient Loading Patterns¶
Task-Based Loading¶
# Bug Fix Pattern
@load [TS:regression + CS:error-handling]
# → Loads only debugging-related standards
# New Feature Pattern
@load [CS:architecture + TS:tdd + SEC:relevant]
# → Loads design and testing standards
# Code Review Pattern
@load context:[review] + severity:[critical]
# → Loads critical review criteria
Language-Specific Loading¶
# Python Project
@load language:[python] level:[required]
# → CS:python, TS:pytest, SEC:python
# JavaScript Project
@load language:[javascript] framework:[react]
# → CS:javascript, FE:react, TS:jest
🚀 Token Optimization Strategies¶
1. Use Micro Standards First¶
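A @micro summary (~500 tokens, per the benchmarks below) is usually enough for a quick check; expand to a @section or the full standard only when it is not.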
2. Section-Specific Loading¶
# Load only what you need
@section CS:api/error-handling # ~500 tokens
@section SEC:auth/jwt # ~800 tokens
3. Use the Index¶
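Check the index with @index [query] before loading anything; as the Quick Reference Card notes, it often answers the question instantly without loading a standard at all.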
🔍 Smart Query Patterns¶
Natural Language Queries¶
❌ "Show me all security standards" # Too broad
✅ "JWT implementation security" # Specific
✅ "Python API error handling" # Context + specific
Efficient Search¶
# Use structured queries
@search "authentication" in:[SEC:auth] section:[implementation]
# Not broad searches
@search "security" # Returns too much
💾 Caching Best Practices¶
Session-Level Caching¶
# Cache frequently used standards
@cache [CS:api, SEC:auth, TS:unit] duration:[session]
# Check cache before loading
@if-not-cached CS:api then:[@load CS:api]
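The same check-before-load idea can be expressed in ordinary code. The sketch below is illustrative only: _session_cache, load_standard, and get_standard are hypothetical stand-ins for whatever loading mechanism your tooling provides.

```python
# Hypothetical session-level cache: each standard is loaded at most once per session
_session_cache: dict[str, str] = {}

def load_standard(code: str) -> str:
    """Placeholder for the real loader (e.g. fetching the CS:api document)."""
    return f"<contents of {code}>"

def get_standard(code: str) -> str:
    """Return the cached copy if present, otherwise load and cache it."""
    if code not in _session_cache:  # mirrors @if-not-cached ... then:[@load ...]
        _session_cache[code] = load_standard(code)
    return _session_cache[code]

# Warm the cache with frequently used standards, mirroring @cache [CS:api, SEC:auth, TS:unit]
for code in ("CS:api", "SEC:auth", "TS:unit"):
    get_standard(code)
```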
Preload Common Patterns¶
# At conversation start
@preload profile:[api-development]
# Caches: CS:api, SEC:api, TS:integration
🎓 Learning Patterns¶
Progressive Enhancement¶
1. Start: @micro CS:api # Quick overview
2. Expand: @section CS:api/patterns # Specific patterns
3. Deep: @load CS:api full:[true] # Complete standard
Cross-Reference Learning¶
When using: CS:api
Also load: SEC:api # Security implications
Consider: TS:integration # Testing approach
📊 Performance Benchmarks¶
Token Usage by Strategy¶
| Strategy | Tokens | Load Time | Use Case |
|---|---|---|---|
| Micro | ~500 | <200ms | Quick reference |
| Section | ~1000 | <500ms | Specific topic |
| Standard | ~5000 | ~1s | Full implementation |
| Multi-standard | ~15000 | ~3s | Complex feature |
Optimal Loading Patterns¶
# Sequential (Slow) ❌
@load CS:api
@load SEC:api
@load TS:integration
# Parallel (Fast) ✅
@load [CS:api + SEC:api + TS:integration]
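The benefit of batching is easiest to see in code. The asyncio sketch below assumes a hypothetical asynchronous load_standard coroutine and simply contrasts awaiting loads one at a time with gathering them concurrently.

```python
import asyncio

async def load_standard(code: str) -> str:
    """Hypothetical async loader; a real one would fetch the standard's content."""
    await asyncio.sleep(0.1)  # stand-in for network or file latency
    return f"<contents of {code}>"

CODES = ("CS:api", "SEC:api", "TS:integration")

async def sequential() -> list[str]:
    # One load at a time: total latency is the sum of the individual loads
    return [await load_standard(code) for code in CODES]

async def parallel() -> list[str]:
    # All loads in flight at once: total latency is roughly the slowest single load
    return list(await asyncio.gather(*(load_standard(code) for code in CODES)))

asyncio.run(parallel())
```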
🤖 LLM-Specific Commands¶
For Code Generation¶
@generate [component] with:[CS:patterns + SEC:validation]
@template [api-endpoint] following:[CS:api + SEC:api]
For Code Review¶
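Combine the review context with a severity filter, as in the code-review pattern shown earlier: @load context:[review] + severity:[critical].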
For Learning¶
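Follow the progressive enhancement sequence from Learning Patterns: start with @micro, expand to a @section, and load the full standard only for a deep dive.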
🚨 Common Pitfalls to Avoid¶
1. Over-Loading¶
❌ @load * # Never load everything
❌ @load [CS:* + SEC:* + TS:*] # Too much
✅ @load task:[specific-task] # Focused
2. Ignoring Context¶
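Loading without context wastes tokens on irrelevant material; prefer @load context:[auto] or a task-scoped load so that only the standards matching the current language, framework, and task are pulled in.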
3. Not Using Cache¶
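Reloading the same standards in every exchange burns tokens needlessly; cache frequently used standards for the session and check the cache before loading, as described under Caching Best Practices.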
🔧 Integration Tips¶
With Claude¶
Human: Create a secure API endpoint
Claude: I'll load the relevant standards:
@load [CS:api + SEC:api] sections:[overview, patterns]
Based on these standards, here's the implementation...
With GitHub Copilot¶
# @standards CS:python + SEC:validation
# @micro CS:api
def create_user(data: dict) -> User:
    # Copilot will follow the loaded standards when completing this function
    ...
With Custom Tools¶
import { parse } from 'yaml'; // YAML parser, e.g. the 'yaml' npm package

// Efficient loading function: resolve a task profile from the manifest
async function loadStandards(task) {
  const response = await fetch('/MANIFEST.yaml');
  const manifest = parse(await response.text());
  const profile = manifest.profiles[task];
  return loadSections(profile.standards, profile.sections); // loadSections defined elsewhere
}
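In practice the manifest would typically be fetched and parsed once per session and reused, in line with the caching guidance above.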
📈 Continuous Improvement¶
Track Your Usage¶
Learn From Patterns¶
🎯 Quick Reference Card¶
Most Efficient Commands¶
- @micro [standard]: Ultra-fast lookup
- @section [standard/section]: Specific info
- @index [query]: Instant answers
- @cache [standards]: Reduce reloads
- @load context:[auto]: Smart loading
Loading Priority¶
- Check index first
- Use micro if sufficient
- Load sections as needed
- Cache frequently used
- Load full only when necessary
Remember: The goal is to get the right information with minimal tokens. Start small, expand as needed, and always cache frequently used standards.
Implementation¶
Getting Started¶
- Review the relevant sections of this standard for your use case
- Identify which guidelines apply to your project
- Implement the required practices and patterns
- Validate compliance using the provided checklists
Implementation Checklist¶
- Review and understand applicable standards
- Implement required practices
- Follow recommended patterns
- Validate implementation against guidelines
- Document any deviations with justification