October 2025 · 7 min read

P4AI: Shipping an AI Aptitude Assessment App in Four Sprints

How a hybrid Waterfall/Agile approach helped deliver a working AI-powered web application — from initial requirements through four two-week Scrum sprints — on time and with clear stakeholder visibility.

AI · Scrum · Agile · Web App · Project Management

Methodology Is the Product

When clients ask about past projects, they want to know two things: what did you build, and how did you run the work? Technology choices matter, but they're table stakes. The thing that separates delivered projects from stalled ones is process — specifically, a process that keeps stakeholders informed, surfaces blockers early, and adapts without losing direction.

P4AI was an opportunity to demonstrate that discipline. The goal was to ship a working AI-powered web application within a fixed timeline, using a methodology that could be audited and communicated to stakeholders throughout. That meant choosing the right framework for the problem — and executing it consistently.

The Hybrid Model: Waterfall Where It Matters, Scrum Where It Works

Pure Agile works well for teams with high domain familiarity and stable stakeholder relationships. But for a project with defined deliverables and external stakeholders who needed visibility into progress, a hybrid approach made more sense.

The Waterfall phase covered the work that benefits from upfront clarity: requirements gathering, system architecture, data model design, and technology selection. Doing this phase properly meant the Scrum sprints started from a solid foundation instead of stumbling into architectural problems mid-sprint.

Once the architecture was locked, the project shifted to Scrum for delivery: two-week sprints, a maintained backlog, sprint reviews, and retrospectives. This combination gave stakeholders a clear picture of scope and timeline (from the Waterfall phase) with the adaptability and transparency of iterative delivery (from Scrum).

What P4AI Does

P4AI — Profiling for Aptitude Inventory — is an AI-driven assessment application designed to help students and professionals identify their interests, strengths, and career aptitudes. Users complete a structured assessment, and the AI engine analyzes their responses to generate a personalized profile with career pathway recommendations.

The application targets a real gap: most aptitude assessments rely on static scoring rubrics that don't adapt to individual response patterns. P4AI's AI layer can weight and contextualize responses in ways that rigid scoring systems can't, producing more nuanced and useful output for users.
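The post doesn't publish P4AI's actual scoring logic, but the difference between a static rubric and an adaptive layer can be sketched. Everything below is hypothetical: the answer labels, point values, and the consistency-based re-weighting are invented purely to illustrate the contrast.

```python
from collections import Counter

# Hypothetical illustration only: a static rubric sums fixed point values,
# while an adaptive layer re-weights answers based on the respondent's
# overall response pattern. Labels and weights are invented.

STATIC_RUBRIC = {"agree": 2, "neutral": 1, "disagree": 0}

def static_score(responses):
    """Fixed-point scoring: every answer is worth the same regardless of context."""
    return sum(STATIC_RUBRIC[r] for r in responses)

def adaptive_score(responses):
    """Contextual scoring: a consistent response pattern amplifies each answer."""
    counts = Counter(responses)
    dominant_share = counts.most_common(1)[0][1] / len(responses)
    # A strongly consistent pattern (e.g. mostly "agree") signals a clearer
    # aptitude, so the same answers contribute more to the profile.
    return static_score(responses) * (1 + dominant_share)

responses = ["agree", "agree", "neutral", "agree"]
print(static_score(responses))    # 7
print(adaptive_score(responses))  # 12.25
```

The point isn't this particular formula; it's that a static rubric maps each answer to a constant, while an AI layer can condition each answer's contribution on everything else the user said.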

Sprint-by-Sprint Delivery

Sprint 1 — Foundations: Development environment setup, database schema implementation, authentication system, and core API scaffolding. By the end of Sprint 1, a user could register, log in, and reach the assessment interface — no AI yet, but the plumbing was in place.
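The post doesn't name the Sprint 1 stack, so the shape of that "plumbing" can only be sketched. Below is a minimal, framework-free stand-in for the register / log in / reach-the-assessment path; the function names, in-memory stores, and session tokens are all assumptions, not P4AI's real implementation.

```python
import hashlib
import hmac
import os

# Hypothetical sketch of the Sprint 1 plumbing: register, log in, and gate
# the assessment endpoint behind a session check. In-memory dicts stand in
# for the real database schema.

_users = {}     # username -> (salt, password_hash)
_sessions = {}  # session token -> username

def register(username, password):
    """Store a salted password hash rather than the password itself."""
    salt = os.urandom(16)
    _users[username] = (salt, hashlib.sha256(salt + password.encode()).hexdigest())

def login(username, password):
    """Return a session token on success, None on failure."""
    salt, stored = _users[username]
    candidate = hashlib.sha256(salt + password.encode()).hexdigest()
    if hmac.compare_digest(stored, candidate):  # constant-time comparison
        token = os.urandom(16).hex()
        _sessions[token] = username
        return token
    return None

def get_assessment(token):
    """Core API scaffold: the assessment is reachable only with a valid session."""
    if token not in _sessions:
        return {"status": 401}
    return {"status": 200, "questions": ["Q1", "Q2"]}

register("alice", "s3cret")
tok = login("alice", "s3cret")
print(get_assessment(tok)["status"])  # 200
```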

Sprint 2 — AI Engine: Integration of the AI assessment model, prompt engineering for consistent output quality, response parsing and storage, and the profile generation pipeline. This was the highest-risk sprint; getting the AI integration right before the UI was built meant the team wasn't designing around a black box.
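"Prompt engineering for consistent output quality" and "response parsing" usually reduce to two controls: a fixed prompt template that demands a machine-readable reply, and a strict parser that rejects anything off-spec before it reaches storage. The sketch below illustrates that pattern; the template wording, the JSON schema, and the validation rules are assumptions, not P4AI's actual prompts.

```python
import json

# Hypothetical sketch of Sprint 2's output-quality controls: a fixed prompt
# template plus strict parsing of the model's reply. The real model call is
# not shown; this covers only the build-prompt and parse-reply steps.

PROMPT_TEMPLATE = (
    "You are an aptitude analyst. Given these assessment answers:\n{answers}\n"
    'Reply ONLY with JSON of the form {{"strengths": [...], "pathways": [...]}}'
)

def build_prompt(answers):
    """Embed the answers in the fixed template so every request looks the same."""
    return PROMPT_TEMPLATE.format(answers=json.dumps(answers))

def parse_profile(raw):
    """Quality gate: reject malformed or incomplete replies instead of storing them."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not {"strengths", "pathways"} <= data.keys():
        return None
    return data

good = parse_profile('{"strengths": ["analysis"], "pathways": ["data science"]}')
bad = parse_profile("Sure! Here are your strengths: analysis")
print(good["pathways"][0], bad)  # data science None
```

A rejected reply would typically trigger a retry rather than a crash; the key design choice is that nothing unvalidated ever enters the profile pipeline.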

Sprint 3 — UI/UX: Full frontend implementation — assessment flow, results dashboard, profile visualization, and responsive design. With the backend and AI layer stable, the UI work could proceed without technical unknowns blocking design decisions.

Sprint 4 — Testing & Release: End-to-end testing, performance validation, bug fixes, documentation, and production deployment. The retrospective from this sprint produced a post-release defect list with zero critical issues — a result of the structured testing investment rather than luck.
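The post doesn't show the Sprint 4 test suite, but an end-to-end smoke test of the register → assess → profile path might look like this. `FakeApp` and both test names are invented stand-ins; real E2E tests would exercise the deployed API rather than an in-process object.

```python
# Hypothetical sketch of a Sprint 4 end-to-end smoke test: walk the happy
# path and one failure path through a stand-in application object.

class FakeApp:
    """Stand-in for the deployed app; real tests would hit the live API."""

    def __init__(self):
        self.users = set()

    def register(self, name):
        self.users.add(name)
        return True

    def submit_assessment(self, name, answers):
        if name not in self.users:
            return None  # unregistered users are rejected
        return {"profile": {"user": name, "n_answers": len(answers)}}

def test_happy_path():
    app = FakeApp()
    assert app.register("alice")
    result = app.submit_assessment("alice", ["a", "b", "c"])
    assert result["profile"]["n_answers"] == 3

def test_unregistered_user_blocked():
    app = FakeApp()
    assert app.submit_assessment("mallory", ["a"]) is None

test_happy_path()
test_unregistered_user_blocked()
print("smoke tests passed")
```

Structurally, this is the investment the sprint describes: every release-blocking path has an automated check, which is how a zero-critical-defect list becomes a repeatable outcome rather than luck.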

What This Demonstrates

P4AI demonstrates the ability to deliver a complete, working software product — not a prototype — under a structured methodology that keeps stakeholders informed throughout:

  • End-to-end product delivery — from requirements through production, with each phase documented
  • AI integration in a real application — not a demo, but a functional system with prompt engineering, response handling, and output quality controls
  • PM discipline — sprint reviews, retrospectives, backlog management, and stakeholder communication built into the process
  • Risk management — tackling the highest-risk components (AI integration) early, not last

If you're working on a project that needs AI capabilities delivered on a real timeline with real accountability, this is the playbook.