AI Meeting Workflow Agent

This is a sanitized public case study of an internal AI meeting workflow agent I worked on as a Product Manager.

I owned the product side: defining the problem, writing the specification, shaping the context layer and scenarios, and reviewing outputs against a lightweight rubric while an engineering counterpart built the n8n workflow.

Result: post-meeting PM follow-up dropped from 60+ minutes to about 30 minutes per meeting, saving about 18 hours per month in one PM workflow.

Problem

Across the 3-5 projects I managed in parallel, each project meeting was followed by 60+ minutes of processing:

  • Review the meeting record.
  • Understand what changed.
  • Update the right project documentation.
  • Identify tasks.
  • Write task descriptions.
  • Assign owners.
  • Keep everything consistent with the current project state.

Before the agent, the flow was AI-assisted but still manual: every meeting meant repeated context setup, output checking, reformatting, and copying between tools.

The bottleneck was not any single step but the repeated hand-offs between meeting review, project context, documentation updates, task drafting, and final checking.

Workflow

The agent supported two controlled scenarios:

  • Documentation update. Identify what changed, decide where the update belongs, and prepare the documentation change for review.
  • Task creation. Extract actionable tasks, draft descriptions and assignees with project context, then create tasks only after human confirmation.

Both scenarios moved through the same high-level flow:

  1. Natural-language request
  2. n8n orchestration
  3. Project and context selection
  4. Meeting record plus knowledge context
  5. OpenAI reasoning layer
  6. Proposed updates or task drafts
  7. Human review and confirmation

At the public level, the stack is simple: n8n handled workflow orchestration, and the OpenAI API powered the reasoning layer. The original workflow design, private model instructions, and internal implementation details are not part of this public case study.
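
To make the flow concrete without exposing internals, here is a minimal illustrative sketch of the task-creation scenario in Python: one reasoning call over the meeting record plus project context, followed by a human confirmation gate. The model name, prompt wording, and the create_task helper are assumptions for this example, not the internal implementation.

```python
import json
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You extract actionable tasks from a meeting record. "
    "Use only facts stated in the meeting record or project context. "
    'Respond with a JSON object: {"tasks": [{"title": "...", '
    '"description": "...", "assignee": "..."}]}.'
)

def draft_tasks(meeting_record: str, project_context: str) -> list[dict]:
    """Ask the model for task drafts grounded in the meeting and project context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption for this sketch
        response_format={"type": "json_object"},  # constrain output to valid JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": (
                    f"Project context:\n{project_context}\n\n"
                    f"Meeting record:\n{meeting_record}"
                ),
            },
        ],
    )
    return json.loads(response.choices[0].message.content)["tasks"]

def create_task(task: dict) -> None:
    """Stand-in for the real tracker integration (e.g. an n8n webhook node)."""
    print(f"Created task: {task['title']} -> {task['assignee']}")

def review_and_confirm(tasks: list[dict]) -> None:
    """Human-in-the-loop gate: nothing is created without explicit confirmation."""
    for task in tasks:
        print(f"\nTitle:    {task['title']}")
        print(f"Assignee: {task['assignee']}")
        print(f"Details:  {task['description']}")
        if input("Create this task? [y/N] ").strip().lower() == "y":
            create_task(task)
```

In the real workflow the orchestration, confirmation step, and tracker integration lived in n8n rather than a script; the sketch only shows where the human gate sits relative to the reasoning call.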

My Role

I owned the product layer, context design, scenarios, and evaluation approach. An engineering counterpart built and maintained the n8n workflow.

  • Identified the recurring bottleneck.
  • Wrote the product specification.
  • Defined the automation scenarios.
  • Built the pre-automation context layer.
  • Designed and authored the project knowledge base.
  • Defined a lightweight HITL evaluation rubric.
  • Reviewed outputs against that rubric.
  • Made acceptance decisions based on usefulness, accuracy, and operational risk.
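
As a hedged illustration of the context layer (the file layout and naming here are assumptions, not the internal setup), the idea is a small curated knowledge file per project that is selected and paired with each meeting record, covering steps 3 and 4 of the flow above:

```python
from pathlib import Path

# Hypothetical layout: one curated knowledge file per project,
# e.g. knowledge/project-a.md holding goals, glossary, doc locations,
# and task conventions for that project.
KNOWLEDGE_DIR = Path("knowledge")

def load_project_context(project_id: str) -> str:
    """Return the curated knowledge-base text for one project."""
    return (KNOWLEDGE_DIR / f"{project_id}.md").read_text(encoding="utf-8")

def build_model_input(project_id: str, meeting_record: str) -> str:
    """Pair stable project context with the fresh meeting record."""
    context = load_project_context(project_id)
    return f"Project context:\n{context}\n\nMeeting record:\n{meeting_record}"
```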

Evaluation

The main question was whether the output was faithful to the meeting and safe to act on. I used a lightweight human-in-the-loop evaluation rubric because the workflow volume was low and the risk was correctness, not scale.

The rubric covered:

  • Source faithfulness to the meeting record.
  • No hallucinated tasks.
  • No duplicate or misleading tasks.
  • Task accuracy.
  • Assignee clarity.
  • Correct documentation placement.
  • Wording clarity.
  • Readiness to act.

Human confirmation stayed in place before task creation and before any delivery-impacting update.
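
A rubric this small can live as a plain checklist. The sketch below is illustrative only (the field names are my choices for the example, not the internal rubric) and shows an all-criteria-pass acceptance rule:

```python
from dataclasses import dataclass, fields

@dataclass
class RubricCheck:
    """One reviewed output; each field is a pass/fail call by the human reviewer."""
    source_faithful: bool        # every claim traces back to the meeting record
    no_hallucinated_tasks: bool  # no tasks invented beyond the meeting
    no_duplicates: bool          # no duplicate or misleading tasks
    task_accurate: bool          # task content matches what was decided
    assignee_clear: bool         # owner is unambiguous
    doc_placement_correct: bool  # update lands in the right document
    wording_clear: bool          # readable without rewriting
    ready_to_act: bool           # safe to act on as written

def accept(check: RubricCheck) -> bool:
    """Accept an output only if every rubric criterion passes."""
    return all(getattr(check, f.name) for f in fields(check))
```

Anything that failed a criterion went back for human correction rather than straight into task creation or a documentation change.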

Result

Post-meeting PM work dropped from 60+ minutes to about 30 minutes per meeting, saving about 18 hours per month in one PM workflow.

The practical value was direct: the agent removed much of the manual copy-paste and repeated context-loading work, so meeting output moved faster into updated docs and actionable tasks, while human review stayed in place before any delivery-impacting change.

What I'd Improve Next

The first version was optimized for one PM workflow. To scale it beyond my workflow, I would improve onboarding, add a clearer audit trail, make project rules configurable, and introduce lightweight quality monitoring.