analyze-project - Claude MCP Skill
Forensic root-cause analyzer for Antigravity sessions. Classifies scope deltas, rework patterns, root causes, and friction hotspots, and auto-improves prompts and project health.
Documentation
SKILL.md

# /analyze-project – Root Cause Analyst Workflow

Analyze AI-assisted coding sessions in `~/.gemini/antigravity/brain/` and produce a report that explains not just **what happened**, but **why it happened**, **who/what caused it**, and **what should change next time**.

## Goal

For each session, determine:

1. What changed from the initial ask to the final executed work
2. Whether the main cause was:
   - user/spec
   - agent
   - repo/codebase
   - validation/testing
   - legitimate task complexity
3. Whether the opening prompt was sufficient
4. Which files/subsystems repeatedly correlate with struggle
5. What changes would most improve future sessions

## Global Rules

- Treat `.resolved.N` counts as **iteration signals**, not proof of failure
- Separate **human-added scope**, **necessary discovered scope**, and **agent-introduced scope**
- Separate **agent error** from **repo friction**
- Every diagnosis must include **evidence** and **confidence**
- Confidence levels:
  - **High** = direct artifact/timestamp evidence
  - **Medium** = multiple supporting signals
  - **Low** = plausible inference, not directly proven
- Evidence precedence: artifact contents > timestamps > metadata summaries > inference
- If evidence is weak, say so

---

## Step 0.5: Session Intent Classification

Classify the primary session intent from objective + artifacts:

- `DELIVERY`
- `DEBUGGING`
- `REFACTOR`
- `RESEARCH`
- `EXPLORATION`
- `AUDIT_ANALYSIS`

Record:

- `session_intent`
- `session_intent_confidence`

Use intent to contextualize severity and rework shape. Do not judge exploratory or research sessions by the same standards as narrow delivery sessions.

---

## Step 1: Discover Conversations

1. Read available conversation summaries from system context
2. List conversation folders in the user's Antigravity `brain/` directory
3. Build a conversation index with:
   - `conversation_id`
   - `title`
   - `objective`
   - `created`
   - `last_modified`
4. If the user supplied a keyword/path, filter to matching conversations; otherwise analyze all

Output: indexed list of conversations to analyze.

---

## Step 2: Extract Session Evidence

For each conversation, read if present:

### Core artifacts

- `task.md`
- `implementation_plan.md`
- `walkthrough.md`

### Metadata

- `*.metadata.json`

### Version snapshots

- `task.md.resolved.0 ... N`
- `implementation_plan.md.resolved.0 ... N`
- `walkthrough.md.resolved.0 ... N`

### Additional signals

- other `.md` artifacts
- timestamps across artifact updates
- file/folder/subsystem names mentioned in plans/walkthroughs
- validation/testing language
- explicit acceptance criteria, constraints, non-goals, and file targets

Record per conversation:

#### Lifecycle

- `has_task`
- `has_plan`
- `has_walkthrough`
- `is_completed`
- `is_abandoned_candidate` = task exists but no walkthrough

#### Revision / change volume

- `task_versions`
- `plan_versions`
- `walkthrough_versions`
- `extra_artifacts`

#### Scope

- `task_items_initial`
- `task_items_final`
- `task_completed_pct`
- `scope_delta_raw`
- `scope_creep_pct_raw`

#### Timing

- `created_at`
- `completed_at`
- `duration_minutes`

#### Content / quality

- `objective_text`
- `initial_plan_summary`
- `final_plan_summary`
- `initial_task_excerpt`
- `final_task_excerpt`
- `walkthrough_summary`
- `mentioned_files_or_subsystems`
- `validation_requirements_present`
- `acceptance_criteria_present`
- `non_goals_present`
- `scope_boundaries_present`
- `file_targets_present`
- `constraints_present`
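The `scope_delta_raw` and `scope_creep_pct_raw` fields are simple derived metrics. A minimal sketch, assuming they are computed from checklist-item counts in the first and last `task.md` snapshots; the spec names the fields but not the formula, so the arithmetic here is an assumption:

```python
from dataclasses import dataclass


@dataclass
class ScopeMetrics:
    task_items_initial: int
    task_items_final: int
    scope_delta_raw: int
    scope_creep_pct_raw: float


def scope_metrics(initial_items: int, final_items: int) -> ScopeMetrics:
    """Derive the Step 2 scope fields from checklist counts.

    Hypothetical helper: delta = final - initial, and creep % is the
    delta relative to the initial count. Both formulas are assumptions.
    """
    delta = final_items - initial_items
    creep_pct = (delta / initial_items * 100.0) if initial_items else 0.0
    return ScopeMetrics(initial_items, final_items, delta, creep_pct)


# e.g. a task that grew from 8 to 11 checklist items:
# scope_metrics(8, 11) -> delta 3, creep 37.5%
```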
---

## Step 3: Prompt Sufficiency

Score the opening request on a 0–2 scale for:

- **Clarity**
- **Boundedness**
- **Testability**
- **Architectural specificity**
- **Constraint awareness**
- **Dependency awareness**

Create:

- `prompt_sufficiency_score`
- `prompt_sufficiency_band` = High / Medium / Low

Then note which missing prompt ingredients likely contributed to later friction. Do not punish short prompts by default; a narrow, obvious task can still have high sufficiency.
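A worked sketch of how the 0–2 rubric could roll up into the score and band. The six dimension names come from this step; the High/Medium/Low cut-offs are illustrative assumptions, since the spec defines only the dimensions and the labels:

```python
DIMENSIONS = (
    "clarity", "boundedness", "testability",
    "architectural_specificity", "constraint_awareness", "dependency_awareness",
)


def prompt_sufficiency(scores: dict[str, int]) -> tuple[int, str]:
    """Sum six 0-2 dimension scores (max 12) and map to a band.

    Band thresholds (>=9 High, >=5 Medium) are assumptions for illustration.
    """
    total = sum(scores[d] for d in DIMENSIONS)  # 0..12
    if total >= 9:
        band = "High"
    elif total >= 5:
        band = "Medium"
    else:
        band = "Low"
    return total, band


# e.g. a clear but unbounded ask:
# prompt_sufficiency({"clarity": 2, "boundedness": 0, "testability": 1,
#                     "architectural_specificity": 1, "constraint_awareness": 0,
#                     "dependency_awareness": 1})  -> (5, "Medium")
```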
---

## Step 4: Scope Change Classification

Classify scope change into:

- **Human-added scope** – new asks beyond the original task
- **Necessary discovered scope** – work required to complete the original task correctly
- **Agent-introduced scope** – likely unnecessary work introduced by the agent

Record:

- `scope_change_type_primary`
- `scope_change_type_secondary` (optional)
- `scope_change_confidence`
- evidence

Keep one short example in mind for calibration:

- Human-added: "also refactor nearby code while you're here"
- Necessary discovered: hidden dependency must be fixed for original task to work
- Agent-introduced: extra cleanup or redesign not requested and not required

---

## Step 5: Rework Shape

Classify each session into one primary pattern:

- **Clean execution**
- **Early replan then stable finish**
- **Progressive scope expansion**
- **Reopen/reclose churn**
- **Late-stage verification churn**
- **Abandoned mid-flight**
- **Exploratory / research session**

Record:

- `rework_shape`
- `rework_shape_confidence`
- evidence

---

## Step 6: Root Cause Analysis

For every non-clean session, assign:

### Primary root cause

One of:

- `SPEC_AMBIGUITY`
- `HUMAN_SCOPE_CHANGE`
- `REPO_FRAGILITY`
- `AGENT_ARCHITECTURAL_ERROR`
- `VERIFICATION_CHURN`
- `LEGITIMATE_TASK_COMPLEXITY`

### Secondary root cause

Optional if materially relevant.

### Root-cause guidance

- **SPEC_AMBIGUITY**: opening ask lacked boundaries, targets, criteria, or constraints
- **HUMAN_SCOPE_CHANGE**: scope expanded because the user broadened the task
- **REPO_FRAGILITY**: hidden coupling, brittle files, unclear architecture, or environment issues forced extra work
- **AGENT_ARCHITECTURAL_ERROR**: wrong files, wrong assumptions, wrong approach, hallucinated structure
- **VERIFICATION_CHURN**: implementation mostly worked, but testing/validation caused loops
- **LEGITIMATE_TASK_COMPLEXITY**: revisions were expected for the difficulty and not clearly avoidable

Every root-cause assignment must include:

- evidence
- why stronger alternative causes were rejected
- confidence

---

## Step 6.5: Session Severity Scoring (0–100)

Assign each session a severity score to prioritize attention.

Components (sum, clamp 0–100):

- **Completion failure**: 0–25 (`abandoned = 25`)
- **Replanning intensity**: 0–15
- **Scope instability**: 0–15
- **Rework shape severity**: 0–15
- **Prompt sufficiency deficit**: 0–10 (`low = 10`)
- **Root cause impact**: 0–10 (`REPO_FRAGILITY` / `AGENT_ARCHITECTURAL_ERROR` highest)
- **Hotspot recurrence**: 0–10

Bands:

- **0–19 Low**
- **20–39 Moderate**
- **40–59 Significant**
- **60–79 High**
- **80–100 Critical**

Record:

- `session_severity_score`
- `severity_band`
- `severity_drivers` = top 2–4 contributors
- `severity_confidence`

Use severity as a prioritization signal, not a verdict. Always explain the drivers. Contextualize severity using session intent so research/exploration sessions are not over-penalized.
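The component caps and bands above translate directly into a clamped sum. A minimal sketch, assuming each component has already been scored within its stated range (the component keys mirror this step; the dict shape is an assumption):

```python
SEVERITY_CAPS = {
    "completion_failure": 25,          # abandoned = 25
    "replanning_intensity": 15,
    "scope_instability": 15,
    "rework_shape_severity": 15,
    "prompt_sufficiency_deficit": 10,  # low sufficiency = 10
    "root_cause_impact": 10,
    "hotspot_recurrence": 10,
}

# Band floors, checked from highest to lowest.
BANDS = [(80, "Critical"), (60, "High"), (40, "Significant"), (20, "Moderate"), (0, "Low")]


def severity(components: dict[str, float]) -> tuple[int, str]:
    """Sum component scores, enforce each cap, clamp to 0-100, and band."""
    total = 0.0
    for name, cap in SEVERITY_CAPS.items():
        total += min(max(components.get(name, 0.0), 0.0), cap)
    score = int(min(max(total, 0.0), 100.0))
    band = next(label for floor, label in BANDS if score >= floor)
    return score, band


# e.g. an abandoned session with heavy replanning:
# severity({"completion_failure": 25, "replanning_intensity": 12,
#           "scope_instability": 8}) -> (45, "Significant")
```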
---

## Step 7: Subsystem / File Clustering

Across all conversations, cluster repeated struggle by file, folder, or subsystem.

For each cluster, calculate:

- number of conversations touching it
- average revisions
- completion rate
- abandonment rate
- common root causes
- average severity

Goal: identify whether friction is mostly prompt-driven, agent-driven, or concentrated in specific repo areas. (A minimal aggregation sketch appears at the end of this file.)

---

## Step 8: Comparative Cohorts

Compare:

- first-shot successes vs re-planned sessions
- completed vs abandoned
- high prompt sufficiency vs low prompt sufficiency
- narrow-scope vs high-scope-growth
- short sessions vs long sessions
- low-friction subsystems vs high-friction subsystems

For each comparison, identify:

- what differs materially
- which prompt traits correlate with smoother execution
- which repo traits correlate with repeated struggle

Do not just restate averages; extract cautious, evidence-backed patterns.

---

## Step 9: Non-Obvious Findings

Generate 3–7 findings that are not simple metric restatements.

Each finding must include:

- observation
- why it matters
- evidence
- confidence

Examples of strong findings:

- replans cluster around weak file targeting rather than weak acceptance criteria
- scope growth often begins after initial success, suggesting post-success human expansion
- auth-related struggle is driven more by repo fragility than agent hallucination

---

## Step 10: Report Generation

Create `session_analysis_report.md` with this structure:

# 📊 Session Analysis Report – [Project Name]

**Generated**: [timestamp]
**Conversations Analyzed**: [N]
**Date Range**: [earliest] → [latest]

## Executive Summary

| Metric | Value | Rating |
|:---|:---|:---|
| First-Shot Success Rate | X% | 🟢/🟡/🔴 |
| Completion Rate | X% | 🟢/🟡/🔴 |
| Avg Scope Growth | X% | 🟢/🟡/🔴 |
| Replan Rate | X% | 🟢/🟡/🔴 |
| Median Duration | Xm | – |
| Avg Session Severity | X | 🟢/🟡/🔴 |
| High-Severity Sessions | X / N | 🟢/🟡/🔴 |

Thresholds:

- First-shot: 🟢 >70 / 🟡 40–70 / 🔴 <40
- Scope growth: 🟢 <15 / 🟡 15–40 / 🔴 >40
- Replan rate: 🟢 <20 / 🟡 20–50 / 🔴 >50

Avg severity guidance:

- 🟢 <25
- 🟡 25–50
- 🔴 >50

Note: avg severity is an aggregate health signal, not the same as per-session severity bands.

Then add a short narrative summary of what is going well, what is breaking down, and whether the main issue is prompt quality, repo fragility, workflow discipline, or validation churn.

## Root Cause Breakdown

| Root Cause | Count | % | Notes |
|:---|:---|:---|:---|

## Prompt Sufficiency Analysis

- common traits of high-sufficiency prompts
- common missing inputs in low-sufficiency prompts
- which missing prompt ingredients correlate most with replanning or abandonment

## Scope Change Analysis

Separate:

- Human-added scope
- Necessary discovered scope
- Agent-introduced scope

## Rework Shape Analysis

Summarize the main failure patterns across sessions.

## Friction Hotspots

Show the files/folders/subsystems most associated with replanning, abandonment, verification churn, and high severity.

## First-Shot Successes

List the cleanest sessions and extract what made them work.

## Non-Obvious Findings

List 3–7 evidence-backed findings with confidence.

## Severity Triage

List the highest-severity sessions and say whether the best intervention is:

- prompt improvement
- scope discipline
- targeted skill/workflow
- repo refactor / architecture cleanup
- validation/test harness improvement

## Recommendations

For each recommendation, use:

- **Observed pattern**
- **Likely cause**
- **Evidence**
- **Change to make**
- **Expected benefit**
- **Confidence**

## Per-Conversation Breakdown

| # | Title | Intent | Duration | Scope Δ | Plan Revs | Task Revs | Root Cause | Rework Shape | Severity | Complete? |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|

---

## Step 11: Optional Post-Analysis Improvements

If appropriate, also:

- update any local project-health or memory artifact (if present) with recurring failure modes and fragile subsystems
- generate `prompt_improvement_tips.md` from high-sufficiency / first-shot-success sessions
- suggest missing skills or workflows when the same subsystem or task sequence repeatedly causes struggle

Only recommend workflows/skills when the pattern appears repeatedly.

---

## Final Output Standard

The workflow must produce:

1. metrics summary
2. root-cause diagnosis
3. prompt-sufficiency assessment
4. subsystem/friction map
5. severity triage and prioritization
6. evidence-backed recommendations
7. non-obvious findings

Prefer explicit uncertainty over fake precision.
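---

For calibration, the Step 7 hotspot clustering can be sketched as a simple group-by over the Step 2 evidence records. The field names follow Step 2; the list-of-dicts shape and the revision formula (`task_versions + plan_versions`) are illustrative assumptions, not part of the spec:

```python
from collections import defaultdict
from statistics import mean


def cluster_hotspots(sessions: list[dict]) -> dict[str, dict]:
    """Group sessions by mentioned subsystem and compute per-cluster stats."""
    clusters: dict[str, list[dict]] = defaultdict(list)
    for s in sessions:
        for subsystem in s.get("mentioned_files_or_subsystems", []):
            clusters[subsystem].append(s)
    return {
        name: {
            "conversations": len(group),
            "avg_revisions": mean(s["task_versions"] + s["plan_versions"] for s in group),
            "completion_rate": mean(1.0 if s["is_completed"] else 0.0 for s in group),
            "abandonment_rate": mean(1.0 if s["is_abandoned_candidate"] else 0.0 for s in group),
            "avg_severity": mean(s["session_severity_score"] for s in group),
        }
        for name, group in clusters.items()
    }
```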
Information
- Repository: arlenagreer/claude_configuration_docs
- Author: arlenagreer
- Last Sync: 5/10/2026
- Repo Updated: 5/7/2026
- Created: 4/10/2026
Related Skills
upgrade-nodejs
Upgrading Bun's Self-Reported Node.js Version
cursorrules
CrewAI Development Rules
cloud
Documentation reference for using Browser Use Cloud – the hosted API and SDK for browser automation. Use this skill whenever the user needs help with the Cloud REST API (v2 or v3), browser-use-sdk (Python or TypeScript), X-Browser-Use-API-Key authentication, cloud sessions, browser profiles, profile sync, CDP WebSocket connections, stealth browsers, residential proxies, CAPTCHA handling, webhooks, workspaces, skills marketplace, liveUrl streaming, pricing, or integration patterns (chat UI, subagent, adding browser tools to existing agents). Also trigger for questions about n8n/Make/Zapier integration, Playwright/Puppeteer/Selenium on cloud infrastructure, or 1Password vault integration. Do NOT use this for the open-source Python library (Agent, Browser, Tools config) – use the open-source skill instead.
browser-use
Automates browser interactions for web testing, form filling, screenshots, and data extraction. Use when the user needs to navigate websites, interact with web pages, fill forms, take screenshots, or extract information from web pages.
Related Guides
Mastering the Oracle CLI: A Complete Guide to the Claude Skill for Database Professionals
Learn how to use the oracle Claude skill. Complete guide with installation instructions and examples.
Python Django Best Practices: A Comprehensive Guide to the Claude Skill
Learn how to use the python django best practices Claude skill. Complete guide with installation instructions and examples.
Mastering Python Development with Claude: A Complete Guide to the Python Skill
Learn how to use the python Claude skill. Complete guide with installation instructions and examples.