Playwright AI Test Framework

This framework is designed as a reusable foundation for modern QA work, combining Playwright, TypeScript, structured fixtures, documentation, and AI-assisted workflows in one maintainable automation repository.

Overview

The framework brings together page objects, fixture-based dependency injection, schema validation, reusable helpers, and a custom reporting layer so automation can scale without turning into a collection of one-off scripts.
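The schema-validation side of that toolkit can be as small as shared type guards that API tests reuse; a minimal sketch, assuming a hypothetical `UserResponse` shape (the type and field names are illustrative, not taken from the framework):

```typescript
// Illustrative reusable schema-validation helper; the UserResponse type
// and its fields are hypothetical examples, not the framework's real schema.
type UserResponse = { id: number; email: string };

// Type guard that narrows an unknown API payload to UserResponse.
function isUserResponse(value: unknown): value is UserResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.id === "number" && typeof v.email === "string";
}

// A test can assert the shape once, then use the typed result without casts.
const payload: unknown = { id: 7, email: "qa@example.com" };
if (isUserResponse(payload)) {
  console.log(payload.email);
}
```

Centralizing guards like this is what keeps schema checks consistent across suites instead of re-validated ad hoc in each test.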

It is also built to support AI-assisted development in a practical way: documented conventions, project structure, skills, and generated prompts all help keep outputs aligned with the framework instead of drifting from it.

What it demonstrates

  • Playwright + TypeScript framework design for maintainable UI and API testing
  • Strong documentation and onboarding paths for engineers and QA contributors
  • Smart reporting, fixtures, schemas, and reusable helpers that support scale
  • AI-guided workflows built around project standards instead of ad hoc prompts

Why it belongs in the portfolio

This project shows both implementation depth and framework thinking. It reflects how testing architecture, developer experience, and documentation quality all contribute to long-term automation value, not just raw test count.

AI Skills Architecture

One of the framework's defining ideas is its orchestrator + skills architecture for AI-assisted development. The same core model works across Claude Code, Cursor, and GitHub Copilot, with each tool getting a native orchestrator entry point and a tool-specific instruction format.

[Diagram: the orchestrator layer, tool-specific AI entry points, shared skills layer, and common-tasks guidance across Claude Code, Cursor, and GitHub Copilot.]

How the model works

  • The orchestrator is always loaded and provides the Constitution, workflow guidance, and skills index
  • Claude Code uses `CLAUDE.md`, Cursor uses `.cursor/rules/rules.mdc`, and Copilot uses `.github/copilot-instructions.md`
  • Detailed skills live in tool-specific locations so each assistant gets deeper guidance in the format it understands best
  • Common-task guides add prompt templates, anti-pattern reminders, and verification checklists for consistent generation
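Put together, the repository layout might look roughly like this; only the three entry-point files named above come from this document, and the rest is an illustrative sketch:

```
repo-root/
├── CLAUDE.md                      # orchestrator entry point for Claude Code
├── .cursor/
│   └── rules/
│       └── rules.mdc              # orchestrator entry point for Cursor
├── .github/
│   └── copilot-instructions.md    # orchestrator entry point for Copilot
└── ...                            # tool-specific skills and common-task guides
```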

Why it matters

This architecture lets the project stay tool-agnostic without becoming generic. The framework keeps one mental model for AI-assisted development while still respecting the way each assistant loads and activates instructions.

AI-Assisted Development Workflow

The framework is also built around an incremental development cycle for AI-assisted work: prompt one focused task, review the generated code, verify it, commit the working result, and only then move to the next prompt.

[Diagram: the AI-assisted development workflow (prompt, review code, verify tests, commit, proceed), with the golden rule: Verify, then Commit, then Proceed.]

Workflow principles

  • One task per prompt keeps each change small enough to review, test, and revert safely
  • Generated code should be reviewed before execution so selector, structure, and pattern mistakes are caught early
  • Verification protects the repository from accumulating failing or unstable code as technical debt
  • Frequent commits create trusted checkpoints that future prompts can build on with less risk
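The verify-then-commit gate in the cycle above can be sketched as a small helper; the function, command strings, and file names here are hypothetical illustrations, not part of the framework:

```typescript
import { execSync } from "node:child_process";

// Hypothetical sketch of the golden rule "verify, then commit, then proceed".
// The injectable `run` parameter exists only to make the sketch easy to dry-run.
function verifyThenCommit(
  testFile: string,
  message: string,
  run: (cmd: string) => void = (cmd) => execSync(cmd, { stdio: "inherit" }),
): string[] {
  const commands = [
    `npx playwright test ${testFile}`,          // verify: a failure throws here,
    "git add -A",                               // so nothing below is reached
    `git commit -m ${JSON.stringify(message)}`, // commit only a verified state
  ];
  for (const cmd of commands) run(cmd); // execSync throws on a non-zero exit
  return commands;
}
```

Because `execSync` throws on a non-zero exit code, a failing test run stops the cycle before anything is committed, which is exactly the checkpoint discipline the workflow describes.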

Why this workflow matters

The point is not just discipline for its own sake. This pattern reduces debugging, preserves context for future AI prompts, and keeps the codebase moving forward through verified, working states instead of fragile chains of uncommitted changes.