No fluff. Just practical PM execution assets that hold up in real teams.
This repo is built for technical PMs who want reusable systems instead of blank-page anxiety.
- `templates/prd-template.md`
- `templates/kpi-tree-template.md`
- `templates/ai-eval-scorecard-template.md`
- `templates/tradeoff-memo-template.md`
- `templates/launch-readiness-template.md`
- `templates/stakeholder-update-template.md`
- `templates/postmortem-template.md`
- `templates/prioritization-scorecard-template.md`
- `playbooks/weekly-operating-rhythm.md`
- `playbooks/ai-pm-discovery-workflow.md`
- `playbooks/ai-eval-rollout-workflow.md`
- `playbooks/decision-quality-checklist.md`
- `case-studies/examples/` includes sanitized examples
- `tools/new_case_study.py` generates new case studies from a repeatable structure
- `templates/prioritization-scorecard-template.md` gives a lightweight scoring rubric
- `tools/prioritize_features.py` ranks candidate features from a CSV into a decision-ready markdown backlog
- `templates/ai-eval-scorecard-template.md` helps PMs define scenarios, thresholds, and launch guardrails
- `tools/generate_eval_scorecard.py` turns a simple CSV into a decision-ready eval scorecard for AI products
Most PM docs fail for one of two reasons:
- They are too vague to guide execution
- They are too heavy to keep updated
This handbook aims for the middle: crisp enough to drive action, light enough to maintain weekly.
```bash
python tools/new_case_study.py \
  --title "Agent Reliability Rollout" \
  --problem "Support escalations were rising due to inconsistent AI responses" \
  --outcome "Escalations dropped by 31% in six weeks"
```
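For intuition, here is a minimal sketch of the kind of scaffold a command like the one above could produce. This is illustrative only, not the actual `tools/new_case_study.py` implementation: the section names and slug rule are assumptions.

```python
import re

def case_study_stub(title: str, problem: str, outcome: str) -> str:
    """Render a minimal case-study scaffold as markdown text (illustrative)."""
    # Slugify the title for a suggested filename (assumed convention)
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return "\n".join([
        f"# Case Study: {title}",
        "",
        f"<!-- suggested location: case-studies/{slug}.md -->",
        "",
        "## Problem",
        problem,
        "",
        "## Outcome",
        outcome,
        "",
        "## Lessons",
        "- TODO",
    ])

print(case_study_stub(
    "Agent Reliability Rollout",
    "Support escalations were rising due to inconsistent AI responses",
    "Escalations dropped by 31% in six weeks",
))
```

The point of the repeatable structure is that every case study answers the same three questions, so archives stay comparable over time.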
```bash
python tools/prioritize_features.py \
  --input path/to/features.csv \
  --output templates/generated/prioritized-backlog.md

# Optional hard constraints to avoid low-confidence roadmap noise
python tools/prioritize_features.py \
  --input path/to/features.csv \
  --min-confidence 3.0 \
  --min-strategic-fit 3.0
```
```bash
python tools/generate_eval_scorecard.py \
  --input path/to/evals.csv \
  --product-name "Support copilot" \
  --output templates/generated/ai-eval-scorecard.md
```

The prioritizer now supports hard constraints (`--min-confidence`, `--min-strategic-fit`) and writes an "Excluded by hard constraints" section in the markdown output so tradeoffs stay visible.
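To show how hard constraints differ from scoring, here is a hedged sketch of the filter-then-rank logic. The CSV columns (`feature`, `impact`, `confidence`, `strategic_fit`, `effort`, each on a 1-5 scale) and the score formula are assumptions for illustration, not the actual `tools/prioritize_features.py` internals.

```python
import csv
import io

def prioritize(rows, min_confidence=None, min_strategic_fit=None):
    """Split features into a ranked backlog and a constraint-excluded list.

    Hard constraints remove items entirely (kept visible as "excluded"),
    while the score only orders whatever survives the constraints.
    """
    kept, excluded = [], []
    for r in rows:
        conf = float(r["confidence"])
        fit = float(r["strategic_fit"])
        if (min_confidence is not None and conf < min_confidence) or \
           (min_strategic_fit is not None and fit < min_strategic_fit):
            excluded.append(r["feature"])
            continue
        # Illustrative formula: reward impact/confidence/fit, penalize effort
        score = (float(r["impact"]) * conf * fit) / max(float(r["effort"]), 1.0)
        kept.append((round(score, 2), r["feature"]))
    kept.sort(reverse=True)
    return kept, excluded

csv_text = """feature,impact,confidence,strategic_fit,effort
Bulk export,4,4,3,2
AI summaries,5,2,5,3
SSO,3,5,4,1
"""
rows = list(csv.DictReader(io.StringIO(csv_text)))
ranked, dropped = prioritize(rows, min_confidence=3.0)
print(ranked)   # SSO and Bulk export ranked by score
print(dropped)  # AI summaries excluded (confidence 2 < 3.0)
```

Keeping excluded items in the output, rather than silently dropping them, is what keeps the tradeoff reviewable in stakeholder conversations.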
The eval scorecard generator gives PMs a lightweight way to review offline, shadow, and launch-stage AI quality in one place with an explicit ship/hold rule.
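An explicit ship/hold rule can be as simple as "every stage must clear its threshold." The sketch below is an assumption about what such a rule might look like, not the actual `tools/generate_eval_scorecard.py` logic; the stage names and thresholds are hypothetical.

```python
def ship_decision(results):
    """Apply a simple ship/hold rule across eval stages.

    `results` maps stage -> (pass_rate, threshold). Ship only when every
    stage meets its threshold; otherwise report which stages block launch.
    """
    failing = [stage for stage, (rate, threshold) in results.items()
               if rate < threshold]
    return ("SHIP", []) if not failing else ("HOLD", failing)

decision, blockers = ship_decision({
    "offline": (0.94, 0.90),
    "shadow":  (0.88, 0.90),   # below threshold -> blocks launch
    "launch":  (0.91, 0.85),
})
print(decision, blockers)  # HOLD ['shadow']
```

Writing the rule down, even this crudely, turns "does it feel ready?" into a check anyone on the team can run.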
- Start with a template
- Publish a weekly update
- Capture real outcomes
- Archive lessons into a case study
MIT