
Testing Guide

Version: 1.0.13
Status: Active

This guide covers testing strategies, patterns, and requirements for the Puppeteer MCP project. We maintain comprehensive test coverage across unit, integration, and end-to-end tests.

  • Test Runner: Jest with ESM support
  • Assertion Library: Built-in Jest matchers
  • Mocking: Jest mocks + custom Puppeteer mocks
  • Coverage: Jest coverage with tiered thresholds

The test suite is organized by scope:

tests/
├── unit/          # Unit tests for individual components
├── integration/   # Integration tests for subsystems
├── e2e/           # End-to-end workflow tests
├── benchmark/     # Performance benchmarks
└── __mocks__/     # Shared mock implementations

Coverage targets follow the TS:JEST standards from William Zujkowski’s Standards:

Component       | Coverage Target | Current
--------------- | --------------- | -------
Global          | 15-18%          |
Auth/Security   | 80-90%          |
Utilities       | 50-80%          |
Browser Actions | 70%+            |
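
Jest enforces tiers like these via coverageThreshold. A minimal sketch of how this could look in jest.config.ts (the preset, path globs, and per-path numbers are illustrative assumptions, not the project's actual config):

import type { Config } from 'jest';

const config: Config = {
  // ESM support via ts-jest's ESM preset (assumed; check the real config)
  preset: 'ts-jest/presets/default-esm',
  extensionsToTreatAsEsm: ['.ts'],
  coverageThreshold: {
    // Keys other than `global` are path globs with their own thresholds
    global: { lines: 15, statements: 15, branches: 15, functions: 15 },
    './src/auth/': { lines: 80 },
    './src/utils/': { lines: 50 },
    './src/puppeteer/actions/': { lines: 70 },
  },
};

export default config;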

Unit tests exercise individual functions and classes in isolation:

describe('BrowserPool', () => {
  it('should acquire browser instance', async () => {
    const pool = new BrowserPool({ maxSize: 2 });
    const browser = await pool.acquire();
    expect(browser).toBeDefined();
    expect(pool.getActiveCount()).toBe(1);
  });
});

Integration tests verify interactions between components:

describe('Session with Context Integration', () => {
  it('should create context with valid session', async () => {
    const sessionId = await sessionStore.create(userId);
    const contextId = await contextStore.create(sessionId, {
      name: 'test-context',
    });
    expect(contextId).toMatch(/^ctx-/);
  });
});

End-to-end tests cover complete workflows:

describe('Browser Automation Workflow', () => {
  it('should complete form submission', async () => {
    // Create context
    const context = await api.post('/contexts', {
      name: 'form-test',
    });

    // Navigate and interact
    await api.post(`/contexts/${context.id}/execute`, {
      action: 'navigate',
      params: { url: 'https://example.com/form' },
    });

    // Verify results
    const screenshot = await api.post(`/contexts/${context.id}/execute`, {
      action: 'screenshot',
    });
    expect(screenshot.data).toBeDefined();
  });
});

Structure every test around the AAA (Arrange-Act-Assert) pattern:

describe('FeatureName', () => {
  // Arrange - Setup
  beforeEach(() => {
    // Setup test environment
  });

  it('should perform expected behavior', async () => {
    // Arrange
    const input = createTestInput();

    // Act
    const result = await performAction(input);

    // Assert
    expect(result).toMatchExpectedOutput();
  });

  // Cleanup
  afterEach(() => {
    // Restore state
  });
});

Use our custom Puppeteer mock for unit tests:

import { createMockBrowser } from '../__mocks__/puppeteer';

const mockBrowser = createMockBrowser({
  pages: [
    {
      url: 'https://example.com',
      title: 'Example Page',
    },
  ],
});
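
For reference, here is a minimal sketch of such a factory, stubbing only the slices of Puppeteer's Page and Browser APIs the tests above touch (the real implementation in tests/__mocks__/puppeteer may differ):

import { jest } from '@jest/globals';

const createMockPage = (init: { url?: string; title?: string } = {}) => ({
  url: jest.fn(() => init.url ?? 'about:blank'), // page.url() is synchronous
  title: jest.fn(async () => init.title ?? ''),
  goto: jest.fn(async () => null),
  click: jest.fn(async () => undefined),
  screenshot: jest.fn(async () => Buffer.from('')),
});

const createMockBrowser = (init: { pages?: { url: string; title: string }[] } = {}) => {
  const pages = (init.pages ?? []).map(createMockPage);
  return {
    pages: jest.fn(async () => pages), // browser.pages() resolves to open pages
    newPage: jest.fn(async () => createMockPage()),
    close: jest.fn(async () => undefined),
  };
};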

Mock external dependencies:

// Import the real module so the mocked version can be typed
import { sessionStore } from '../../src/store/session-store';

jest.mock('../../src/store/session-store');
const mockSessionStore = sessionStore as jest.Mocked<typeof sessionStore>;

mockSessionStore.get.mockResolvedValue({
  id: 'session-123',
  userId: 'user-456',
});

Each browser action should have comprehensive tests:

describe('ClickAction', () => {
  const action = new ClickAction();

  it('should click element by selector', async () => {
    const page = createMockPage();
    await action.execute(page, {
      selector: '#submit-button',
    });
    expect(page.click).toHaveBeenCalledWith('#submit-button');
  });

  it('should handle missing elements gracefully', async () => {
    const page = createMockPage();
    page.click.mockRejectedValue(new Error('Element not found'));
    await expect(
      action.execute(page, {
        selector: '#missing',
      }),
    ).rejects.toThrow('Element not found');
  });
});
Run the test suite from the command line:
# Run all tests
npm test
# Run specific test suite
npm test -- auth.test.ts
# Run with coverage
npm run test:coverage
# Watch mode for development
npm run test:watch
# Run only unit tests
npm test -- tests/unit
# Run only integration tests
npm run test:integration
# Run benchmarks
npm run test:benchmark
To debug tests, use the Node inspector or narrow the run with Jest flags:
# Debug with Node inspector
node --inspect-brk node_modules/.bin/jest --runInBand
# Debug specific test
npm test -- --testNamePattern="should authenticate" --verbose

Always properly handle async operations:

// Good - Proper async handling
it('should handle async operation', async () => {
  const result = await asyncOperation();
  expect(result).toBeDefined();
});

// Bad - Missing await
it('should handle async operation', () => {
  const result = asyncOperation(); // Missing await!
  expect(result).toBeDefined(); // Tests Promise, not result
});

Test both success and failure paths:

describe('Error Handling', () => {
  it('should handle network errors', async () => {
    mockFetch.mockRejectedValue(new Error('Network error'));
    await expect(apiCall()).rejects.toThrow('Network error');
  });

  it('should handle validation errors', async () => {
    const result = await validateInput({ invalid: true });
    expect(result.error).toBeDefined();
    expect(result.error.code).toBe('VALIDATION_ERROR');
  });
});

Include performance benchmarks:

describe('Performance', () => {
  it('should process requests within SLA', async () => {
    const start = performance.now();
    await processRequest(testData);
    const duration = performance.now() - start;
    expect(duration).toBeLessThan(100); // 100ms SLA
  });
});
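
Single-run timings can be noisy, especially on shared CI hardware. One way to stabilize an SLA assertion (a sketch, reusing the hypothetical processRequest and testData from above) is to assert on the average over several iterations:

it('should stay within SLA on average', async () => {
  const runs = 10;
  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    await processRequest(testData);
  }
  const average = (performance.now() - start) / runs;
  expect(average).toBeLessThan(100); // 100ms average per request
});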

Each test should be independent:

beforeEach(() => {
  jest.clearAllMocks();
  // Reset test state
});

Use clear, behavior-focused names:

// Good
it('should return 401 when authentication token is expired');
// Bad
it('test auth');

Keep tests simple and declarative; avoid branching logic inside assertions:

// Good
expect(result).toEqual(expectedOutput);

// Bad
if (condition) {
  expect(result).toBe(value1);
} else {
  expect(result).toBe(value2);
}

Create reusable test data builders:

const createTestUser = (overrides = {}) => ({
  id: 'user-123',
  email: 'test@example.com',
  role: 'user',
  ...overrides,
});
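
Overrides let each test state only the fields it cares about. For example:

const admin = createTestUser({ role: 'admin' });
const secondUser = createTestUser({ id: 'user-456', email: 'second@example.com' });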

Tests run automatically on:

  • Every push via Git hooks
  • Pull requests via GitHub Actions
  • Scheduled nightly runs

Before a change can merge, the following quality gates apply:

  • All tests must pass
  • Coverage thresholds must be met
  • No ESLint errors in test files
  • Performance benchmarks within limits
Two common troubleshooting steps are raising timeouts for slow operations and cleaning up resources after each test:

// Increase timeout for slow operations
it('should handle large file', async () => {
  await processLargeFile();
}, 30000); // 30 second timeout

// Ensure cleanup
afterEach(async () => {
  await browserPool.closeAll();
  global.gc?.(); // Force garbage collection if available
});
To keep tests from becoming flaky:

  • Add retries for network-dependent tests (see the sketch below)
  • Use stable test data
  • Avoid time-dependent assertions
  • Mock external dependencies
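
For the retry point above, Jest's built-in jest.retryTimes covers most cases (it requires the default jest-circus runner). A sketch, where the suite and endpoint are illustrative and global fetch assumes Node 18+:

import { jest } from '@jest/globals';

// Re-run failing tests in this file up to 2 extra times before reporting failure
jest.retryTimes(2);

describe('external API integration', () => {
  it('should reach the health endpoint', async () => {
    const res = await fetch('https://example.com/health'); // illustrative URL
    expect(res.ok).toBe(true);
  });
});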