
Long Beach is rolling out a pilot artificial intelligence tool that listens to 9-1-1 calls, scores them, and kicks questionable ones back to supervisors, effectively putting an AI sidekick in the city’s emergency dispatch center. City officials say the software is meant to bolster quality assurance and coaching, not to replace human telecommunicators, and they are still testing the system to calibrate how it flags calls. If the program is fully adopted, Long Beach expects to review far more calls than supervisors can cover with the limited manual spot checks they have time for now.
According to the Long Beach Post, the Disaster Preparedness and Emergency Communications Department handles roughly 600,000 9-1-1 calls a year and has historically aimed to review about 2% of them for quality. City officials say the pilot uses CommsCoach, a product from GovWorx, under a contract that runs through February 2028 at roughly $68,000 per year. The department says supervisors will still issue the final performance evaluations and will use the AI-flagged recordings as raw material for coaching sessions.
How the system will work
CommsCoach takes recorded calls, transcribes them, and runs standardized checks that generate scores for issues such as tone, professionalism, and whether required follow-up questions were asked. Any call the software flags is routed back to a human supervisor for review, so the system acts more like an aggressive filter than an automated boss. GovWorx says its CommsCoach suite is already in use in centers nationwide for quality assurance, training simulations, and real-time guidance, allowing agencies to scale review work that once relied on painstaking manual listening. In Long Beach, officials are sticking to the post-call quality assurance tools rather than handing any live call handling over to automation.
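In rough outline, the kind of post-call review pipeline described above — transcribe a call, score it against a standardized checklist, and flag low scorers for a human supervisor — can be sketched as follows. The rubric items, keyword checks, threshold, and function names here are illustrative assumptions for clarity, not details of the actual CommsCoach product.

```python
# Hypothetical sketch of a post-call QA pipeline: score a transcript
# against a rubric, then route any call that fails a check to a supervisor.
# All rubric contents and the threshold are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CallReview:
    call_id: str
    transcript: str
    scores: dict = field(default_factory=dict)
    flagged: bool = False

# Hypothetical rubric: each check returns True if the transcript passes.
RUBRIC = {
    "asked_location": lambda t: "where" in t.lower() or "address" in t.lower(),
    "asked_callback": lambda t: "call you back" in t.lower() or "phone number" in t.lower(),
    "professional_tone": lambda t: not any(w in t.lower() for w in ("shut up", "whatever")),
}

FLAG_THRESHOLD = 1.0  # any single failed check flags the call for review

def score_call(call_id: str, transcript: str) -> CallReview:
    """Score one transcribed call and mark whether it needs human review."""
    review = CallReview(call_id, transcript)
    review.scores = {name: check(transcript) for name, check in RUBRIC.items()}
    pass_rate = sum(review.scores.values()) / len(RUBRIC)
    review.flagged = pass_rate < FLAG_THRESHOLD
    return review

def supervisor_queue(reviews):
    """Only flagged calls reach a human; the rest are logged as passing."""
    return [r for r in reviews if r.flagged]
```

The key design point this sketch captures is that the software never issues a verdict on its own: a flag only moves a recording into a supervisor's queue, mirroring the "aggressive filter, not automated boss" role the city describes.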
Privacy and oversight
The city’s privacy evaluation package spells out a set of guardrails for the pilot. The data is processed in AWS GovCloud, with sharing limited to relevant city departments and GovWorx, and non-mapped audio is deleted within 24 hours, while other call records follow the city’s existing retention schedule. The report also says CommsCoach is operated under CJIS and HIPAA standards and that the department will maintain auditing controls on who can access recordings. Officials emphasize that the information remains on city systems and that supervisors, not algorithms operating on their own, will make the final calls on employee performance.
Dispatchers and union reaction
The rollout has landed with a thud for some dispatchers and advocates, who see a risk of job displacement and an over-reliance on automated scoring. One Long Beach dispatcher told the Press-Telegram she was skeptical of the new grading system. City leaders respond that supervisors will still be the ones monitoring performance and coaching staff, and that employees are not being swapped out for software. Department representatives say the AI is meant to help spot patterns that can inform targeted training and support, especially in a busy, high-stress room.
Why this matters for Long Beach
In an understaffed emergency communications center where every minute is spoken for, trimming administrative burdens is not a small thing. Automated quality assurance promises faster, more data-driven coaching and a clearer view of what is and is not working on the floor. “AI will not replace dispatchers. AI will not take calls. AI will not replace human judgment,” Disaster Preparedness and Emergency Communications director Reginald Harrison told ABC7. The vendor notes that CommsCoach is already used by hundreds of agencies around the country, a sign that Long Beach is not alone in betting on this kind of technology, even as debates over bias, privacy, and labor issues continue.
Legal and transparency obligations
The city’s privacy materials outline how residents can request release of 9-1-1 audio and emphasize compliance with state privacy laws and Long Beach’s own data policies. The pilot program and the publicly posted privacy report card both signal that officials are trying to build rules and training into the system before expanding its use. Advocates, however, say that independent oversight will be important if and when the AI grading tool scales up.









