With the security and architecture in place, I productized Agent Feedback Engine for CEMs and leads. I designed a workflow to standardize how insights are captured, ensure feedback reaches engineering in the right format, and aggregate individual engagements into actionable, executive-level trends. This operationalizes the AI, embedding it into existing quality and reporting frameworks rather than creating a fragile, parallel process.
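For illustration, here is a minimal sketch of what that standardized output contract might look like in code. The class and field names are hypothetical; the three outputs mirror the internal summary, engineering-ready feedback, and customer-facing recap described in the executive summary below.

```python
from dataclasses import dataclass

@dataclass
class EngagementOutputs:
    """Illustrative schema for one engagement's standardized outputs."""
    engagement_id: str
    internal_summary: str      # for leads and quality review
    engineering_feedback: str  # formatted for the engineering backlog
    customer_recap: str        # safe to share externally

def validate(outputs: EngagementOutputs) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    for name in ("internal_summary", "engineering_feedback", "customer_recap"):
        if not getattr(outputs, name).strip():
            problems.append(f"missing {name}")
    return problems
```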
Key Initiatives
CEMs access this workflow through the Copilot experience embedded in Microsoft Teams, so it fits into the same environment they already use for calls and follow-ups.
From Raw Notes to Actionable Recommendations
Capturing individual calls is useful, but the larger value is in aggregated analysis. I designed a second-stage process that makes the AI a recommendation engine, not a final decision-maker. Outputs are stored in Microsoft 365 (SharePoint), giving the trend agent and leads a consistent, queryable corpus of engagements. This surfaces collaboration opportunities, as leads and product owners can see exactly what the model is proposing and respond to it.
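As a rough sketch of that second stage, assume per-engagement reports have been exported from SharePoint as JSON files, each with a `themes` list; the file layout and schema here are hypothetical, not the production pipeline. The point is that the output is a ranked list of proposals for humans to review, not an automated decision.

```python
import json
from collections import Counter
from pathlib import Path

def load_reports(folder: Path) -> list[dict]:
    """Load per-engagement reports previously exported from SharePoint.

    Assumes one JSON file per engagement with a 'themes' list; this
    export step is illustrative only.
    """
    return [json.loads(p.read_text()) for p in folder.glob("*.json")]

def summarize_trends(reports: list[dict], top_n: int = 5) -> list[tuple[str, int]]:
    """Count recurring feedback themes across engagements.

    The trend agent proposes these as recommendations; leads and
    product owners review them rather than acting on them automatically.
    """
    counts = Counter(theme for r in reports for theme in r.get("themes", []))
    return counts.most_common(top_n)

if __name__ == "__main__":
    reports = load_reports(Path("./engagement_reports"))
    for theme, count in summarize_trends(reports):
        print(f"{count:>3}x  {theme}")
```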
Documentation Agent: Reducing Cognitive Load
Previously, CEMs and leads had to manually search our playbooks and policy documentation whenever questions came up mid-engagement. To remove that friction, I built a documentation-focused agent that answers questions using only our approved source material. It responds with verbatim quotes and links to the specific section, avoiding paraphrasing that could introduce hallucinated rules, so leads and CEMs can rely on a consistent interpretation of policy. Because the underlying process and documentation set are complex, the agent gives them a single place to ask stage-specific questions and jump directly to the full guidance when deeper context is needed.
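A minimal sketch of the quote-only behavior, with a toy keyword retriever standing in for whatever search the production agent uses (the names, threshold, and retriever are all illustrative):

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str  # verbatim excerpt from approved documentation
    url: str   # deep link to the specific section

def keyword_overlap(question: str, passages: list[Passage]) -> tuple[Passage, float]:
    """Toy retriever: fraction of question words found in each passage.

    A real agent would use embedding search; this keeps the sketch runnable.
    """
    q = set(question.lower().split())
    scored = [(p, len(q & set(p.text.lower().split())) / max(len(q), 1))
              for p in passages]
    return max(scored, key=lambda pair: pair[1])

def answer(question: str, passages: list[Passage]) -> str:
    """Quote-only answering: return source text verbatim, never a paraphrase."""
    if not passages:
        return "No approved guidance found; please consult the full playbook."
    passage, score = keyword_overlap(question, passages)
    if score < 0.5:  # threshold is illustrative
        return "No approved guidance found; please consult the full playbook."
    return f'"{passage.text}"\nSource: {passage.url}'
```

Declining to answer below a relevance threshold, rather than generating a best guess, is what keeps the agent from inventing rules.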
Integrating AI into Existing Quality Processes
A critical decision was to integrate AI outputs into the existing quality review framework. I updated audit criteria to treat AI-generated summaries as first-class artifacts and provided guidance for how leads should sample, review, and correct outputs. This keeps auditors in familiar tools while still exposing them to the AI behavior they need to monitor. Leads also use the documentation agent during audits to confirm that decisions align with the latest stage-specific rules, reducing cognitive load and the risk of misinterpreting policy.
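To make the sampling guidance concrete, here is a small sketch of how a lead might draw a random audit sample of AI-generated summaries. The 10% rate and record shape are illustrative, not the actual audit criteria.

```python
import random

def sample_for_audit(summaries: list[dict], rate: float = 0.1,
                     seed: int | None = None) -> list[dict]:
    """Draw a random sample of AI-generated summaries for lead review.

    A fixed seed makes the sample reproducible for a given audit cycle.
    """
    if not summaries:
        return []
    rng = random.Random(seed)
    k = max(1, round(len(summaries) * rate))
    return rng.sample(summaries, k)
```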
Adoption, Change Management, and Impact
Rolling out the workflow required structured change management, including live trainings and written guides. I positioned the AI workflow as a required part of the process, not an optional experiment. There was initial resistance from another vendor team that saw automation as a threat, which I managed by focusing on the value unlocked for the entire organization. The FTE who had supported the security review captured this in their closing line: "I am grateful for his leadership on this workstream."
Executive Summary
This article detailed how Agent Feedback Engine was operationalized. I designed a prompt workflow that produces three standardized outputs: an internal summary, engineering-ready feedback, and a customer-facing recap, all formatted to eliminate manual rework. To scale analysis, I built a second-stage agent that aggregates individual reports into trends and explicit recommendations for leadership review. By integrating the AI outputs into existing quality audit processes and leading the change management, I embedded the system into core business operations, turning a secure tool into a driver of measurable business intelligence.
- ➤ I designed a multi-output workflow that saves an estimated 10-15 minutes per engagement by generating consistently formatted summaries and feedback. These savings are based on early time comparisons with CEMs, and I am expanding measurement coverage as adoption increases.
- ➤ I created a trend analysis agent that converts raw notes into high-level recommendations, making our OKR reporting more honest and actionable.
- ➤ I led the change management, training, and stakeholder alignment required to overcome resistance and successfully embed the AI into team operations.
One concrete example is our monthly active users (MAU) objective: the trend agent identifies customers who have already decided not to expand usage, such as those in extended proofs of concept or those who have opted out of specific features. We track these accounts as a separate cohort in OKR reporting, so they do not appear as silent failures when the issue is a deliberate, documented business decision.
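A simplified sketch of that cohort split, assuming each account record carries a status flag surfaced by the trend agent (the status values are hypothetical stand-ins for the documented opt-out signals):

```python
def split_mau_cohorts(accounts: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate accounts with a documented decision not to expand usage.

    'extended_poc' and 'feature_opt_out' are illustrative labels for the
    signals the trend agent surfaces from engagement reports.
    """
    opted_out = {"extended_poc", "feature_opt_out"}
    declined = [a for a in accounts if a.get("status") in opted_out]
    active = [a for a in accounts if a.get("status") not in opted_out]
    return active, declined
```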