Public Trust Starts with Private Data: Ethical BWC Analytics for Police Leaders
Artificial Intelligence · Zero Data Retention · Privacy | May 15, 2025 | Anthony Tassone | 3 min read

As law enforcement agencies across the nation turn to artificial intelligence (AI) to help manage staffing challenges, streamline documentation, and enhance accountability, one responsibility remains paramount: the protection of personally identifiable information (PII) belonging to both officers and community members.
When officers wear body-worn cameras (BWCs), they capture more than just interactions—they capture lives in motion. From names, license plates, and addresses to conversations, medical information, and facial features, BWC footage is a trove of sensitive data. It’s no longer a question of whether PII exists in these recordings; it’s a question of how it’s handled.
What Burlington, NC Is Doing Right
The City of Burlington, North Carolina, offers a compelling model. Its formal AI policy explicitly prohibits using any publicly accessible AI tool, such as ChatGPT or other commercial LLMs, to store or process PII. The city recognized that many generative AI platforms retain prompts or responses to improve their models, creating an unacceptable risk when those prompts contain personal data captured from police interactions.
This is a crucial stance for public safety agencies. While cloud-based AI tools have legitimate use cases, they were not designed for the ethical, legal, and operational standards of law enforcement. Burlington’s policy is a call to action: public trust demands public sector-grade safeguards.
Why Body-Worn Camera Analytics Demand Greater Caution
AI-powered BWC analytics have incredible potential: surfacing de-escalation examples, flagging policy violations, and even helping agencies meet accreditation requirements. But the raw data (officer and civilian conversations, addresses, faces, and even background audio) contains a level of granularity that makes traditional redaction methods insufficient.
This is where responsible AI must align with operational discipline. Agencies cannot risk using opaque AI systems that transmit sensitive footage to outside servers or third-party platforms.
A Step-by-Step Guide to Responsible PII Protection in BWC Analytics
Protecting PII in body-worn camera analytics starts before a single frame of video is reviewed.
Here's a best-practice framework agencies should consider:
Step 1: Never transmit raw video to public AI engines
Before any footage is analyzed, ensure it never leaves your secure environment. Public LLMs should not receive prompts containing PII, even if “anonymized.” Anything sent out can be stored, used for training, or exposed.
Step 2: Cache locally or use on-premises AI tools
Instead of pushing data to the cloud, work with solutions that cache footage internally or operate within CJIS-compliant, audit-friendly frameworks (e.g., TRULEO). Control over data location is key to compliance and trust.
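For agencies whose technical staff want a concrete picture of this step, here is a minimal sketch of on-premises inference, assuming a locally hosted model server reachable only on the agency's internal network. The hostname and model name below are hypothetical placeholders, and the request shape follows the common Ollama-style /api/generate interface; the point is simply that no transcript text ever leaves the agency's environment.

```python
# Minimal on-premises inference sketch. "bwc-ai.internal" and "llama3"
# are hypothetical placeholders for whatever the agency actually hosts.
import requests

LOCAL_ENDPOINT = "http://bwc-ai.internal:11434/api/generate"

def summarize_locally(transcript_excerpt: str) -> str:
    # The request stays on the internal network; no public API is involved.
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": "llama3",
            "prompt": f"Summarize this BWC transcript:\n{transcript_excerpt}",
            "stream": False,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["response"]
```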
Step 3: Auto-detect and redact
Deploy tools capable of identifying and automatically redacting PII elements such as faces, license plates, and spoken names before the footage is reviewed or queried (e.g., TRULEO). Redaction must occur early in the workflow, not as an afterthought.
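To make auto-detection and redaction concrete, the sketch below blurs detected faces in a video file using OpenCV's bundled face detector. It is a simplified illustration, not a production pipeline: purpose-built redaction tools use far more robust detectors and also handle license plates, on-screen text, and spoken PII.

```python
# Simplified face-redaction sketch using OpenCV's stock Haar cascade.
import cv2

def redact_faces(input_path: str, output_path: str) -> None:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    video = cv2.VideoCapture(input_path)
    fps = video.get(cv2.CAP_PROP_FPS)
    size = (
        int(video.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)),
    )
    writer = cv2.VideoWriter(
        output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size
    )
    while True:
        ok, frame = video.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            # Blur each detected face before the frame is written out.
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                frame[y:y + h, x:x + w], (51, 51), 0
            )
        writer.write(frame)
    video.release()
    writer.release()
```

Calling redact_faces("raw.mp4", "redacted.mp4") produces a blurred copy so that reviewers never need to open the original footage.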
Step 4: Log access and queries
Every access to footage or associated metadata should be logged. This not only supports accountability; it also creates the audit trail needed to satisfy internal governance policies and external compliance reviews.
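One common way to make such logs trustworthy is to hash-chain entries so that any tampering is detectable. The sketch below shows the idea; the field names (user_id, evidence_id, action) are illustrative placeholders rather than the schema of any particular records system.

```python
# Tamper-evident, append-only access-log sketch. Each entry embeds the
# hash of the previous entry, so rewriting history breaks the chain.
import datetime
import hashlib
import json

def log_access(log_path: str, user_id: str, evidence_id: str,
               action: str, prev_hash: str) -> str:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,          # who touched the footage
        "evidence_id": evidence_id,  # which recording or transcript
        "action": action,            # e.g. "view", "query", "export"
        "prev_hash": prev_hash,      # links this entry to the one before it
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["hash"]
```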
Step 5: Limit what’s retained
As Burlington’s policy suggests, the safest way to protect PII is not to retain it at all unless absolutely necessary. AI tools built for policing should offer zero data retention options for queries, transcripts, and metadata. Officer interactions with AI should be as protected as interview recordings or case notes.
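Conceptually, zero retention means footage-derived text exists only in memory for the duration of a query and is never written to disk or sent outside the process. The sketch below illustrates the pattern; the regex patterns and the analyze() stand-in are hypothetical simplifications, since real systems use trained PII detectors and an actual on-premises model rather than hand-written rules.

```python
# Zero-retention sketch: redact in memory, answer, retain nothing.
import re

# Hypothetical patterns for illustration only; production systems rely
# on trained PII detectors, not hand-written regexes.
PII_PATTERNS = {
    "[PLATE]": re.compile(r"\b[A-Z]{2,3}[- ]?\d{3,4}\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def analyze(redacted_text: str) -> str:
    # Stand-in for a local, on-premises model call.
    return f"Reviewed {len(redacted_text.split())} words of redacted text."

def zero_retention_query(transcript: str) -> str:
    # Redaction happens in memory; nothing is ever written to disk.
    for token, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(token, transcript)
    answer = analyze(transcript)
    # Transcript and answer go out of scope on return; no copy persists.
    return answer
```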
“Responsible AI” Is More Than a Feature or Buzzword—It’s a Philosophy
Protecting PII isn’t just about checking boxes on a compliance checklist. It’s about respecting the rights of the people officers serve—and the officers themselves. Every misstep in handling sensitive data is a potential breach of trust, especially in an era when public confidence in law enforcement is strained and communities are increasingly wary of AI’s role in surveillance and justice.
The good news? Thoughtful, intentional policy—like Burlington’s—combined with secure, purpose-built AI tools like TRULEO, can give law enforcement the best of both worlds: innovation with integrity.
Conclusion
As AI continues to play a larger role in policing, the burden is on agency leaders to ensure their systems are not only effective, but also ethical. The body-worn camera is one of the most powerful tools for transparency and accountability—but only if the data it captures is handled with care.
By prioritizing data protection, adopting PII redaction standards, and avoiding public-facing AI engines, agencies can lead the way in building a future where technology supports—not compromises—public trust.

Anthony Tassone
Anthony Tassone comes from a proud military and law enforcement family and serves as a board member of the FBI National Academy Associates (FBINAA) Foundation. He received his bachelor's degree in computer science from DePaul University and lives just outside of Chicago with his wife and four kids. Anthony is an avid bow hunter and triathlete, and he regularly speaks about culture, leadership, and entrepreneurship.