AI Trained on Prison Phone Calls Is Now Scanning Inmate Calls for Planned Crimes

Overview: What happened, who is involved, and why it matters

Securus Technologies, a U.S. telecom company that provides phone and video services to prisons, has trained an artificial intelligence model using years of incarcerated people's phone and video calls. The company is now piloting that model to scan inmates' calls, texts, and emails, with the stated goal of identifying and preventing planned crimes. MIT Technology Review reported on the initiative after speaking with Securus's president, bringing the program into public view.

This development matters to families of incarcerated people, prison staff, civil liberties groups, and policymakers. The pilot raises urgent questions about privacy, consent, data scope, accuracy, bias, and oversight when predictive surveillance tools are used in carceral settings. The rest of this article explains how the system is said to work, the main risks, and what safeguards stakeholders should demand.

How the pilot is described

Securus says it trained models on a large corpus of recorded phone and video calls from people in prison. The pilot uses those models to automatically scan live and stored communications, including phone calls, text messages, and emails. The system highlights phrases or patterns that the model predicts could signal the planning of crimes, such as organizing contraband delivery, coordinating violence, or arranging escape attempts.

When the model flags a communication, the case is passed to human reviewers who decide whether to act, for example by alerting corrections staff or beginning an investigation. Securus frames the program as a safety tool, aimed at preventing harm inside and outside prisons. Critics warn that the approach could produce false positives and unfair targeting, and they emphasize the broader consequences for privacy and family ties.
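Securus has not published technical details of its system. As a rough illustration of the flag-then-review workflow described above, the following Python sketch scores communications and queues anything above a threshold for human review. Every name, keyword, and threshold in it is hypothetical; a real deployment would use a trained model rather than a keyword count.

    from dataclasses import dataclass

    # Hypothetical sketch only: Securus has not disclosed its model, features,
    # or thresholds. This illustrates the general flag-then-review shape.

    @dataclass
    class Communication:
        comm_id: str
        channel: str      # e.g. "phone", "video", "text", "email"
        transcript: str

    TOY_KEYWORDS = {"drop", "package", "meet at"}  # illustrative stand-in for a model
    FLAG_THRESHOLD = 0.5                           # assumed value, not a real setting

    def score_communication(comm: Communication) -> float:
        """Placeholder risk score based on a toy keyword count."""
        hits = sum(kw in comm.transcript.lower() for kw in TOY_KEYWORDS)
        return min(1.0, hits / len(TOY_KEYWORDS))

    def triage(comms: list[Communication]) -> list[Communication]:
        """Return communications that cross the threshold; these go to human
        reviewers, and nothing is acted on automatically."""
        return [c for c in comms if score_communication(c) >= FLAG_THRESHOLD]

    if __name__ == "__main__":
        sample = [
            Communication("1", "phone", "Can you put money on my books this week?"),
            Communication("2", "text", "Meet at the usual spot, the package drops Friday."),
        ]
        for flagged in triage(sample):
            print(f"Queue for human review: {flagged.comm_id} ({flagged.channel})")

The point of the sketch is the division of labor: the software only ranks and queues, while the consequential decisions described above remain with human reviewers.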

Plain definitions: predictive surveillance and false positives

Predictive surveillance means using data and algorithms to flag people or messages that are more likely to be associated with future wrongdoing. These tools do not prove intent; they estimate risk based on past patterns.

A false positive occurs when a system incorrectly identifies benign behavior as suspicious. In a prison setting, false positives can mean investigations, disciplinary action, or restricted contact, even when no crime is planned.
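A quick back-of-the-envelope calculation shows why false positives dominate when the behavior being searched for is rare. The numbers below are invented purely for illustration; Securus has not published error rates.

    # All figures are hypothetical; no real error rates have been disclosed.
    total_calls = 100_000
    base_rate = 0.001           # assume 1 in 1,000 calls involves real planning
    true_positive_rate = 0.90   # assume the model catches 90% of those
    false_positive_rate = 0.02  # assume 2% of benign calls are wrongly flagged

    actual_planning = total_calls * base_rate                             # 100 calls
    caught = actual_planning * true_positive_rate                         # 90 flags
    false_alarms = (total_calls - actual_planning) * false_positive_rate  # 1,998 flags

    precision = caught / (caught + false_alarms)
    print(f"Share of flags that are real: {precision:.1%}")  # about 4.3%

Under those assumed numbers, more than 95 percent of flagged calls would involve no planned crime at all, which is why the quality of the human review step matters so much.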

What data is in play and why scope matters

The pilot relies on communications that Securus already records as part of prison phone and video services. That includes:

  • Phone calls between incarcerated people and contacts outside prison.
  • Video visitation sessions.
  • Text messages and emails routed through approved platforms.

Because many corrections systems require or route digital communications through provider platforms, those platforms can generate large training datasets. Using incarcerated people's historical calls as training data creates the risk that models learn patterns specific to prison speech, slang, and context, rather than neutral indicators of criminal planning.

Stated goals versus practical risks

Securus presents the pilot as a tool to prevent violence and contraband by flagging credible threats earlier than manual review can. Preventing planned crimes inside prisons can improve safety for staff and residents, and it may reduce harm in the communities connected to incarcerated people.

At the same time, there are practical risks:

  • False positives can lead to wrongful investigations, disciplinary measures, or restricted communication rights.
  • Biased training data can reproduce or amplify existing racial and socioeconomic disparities.
  • People outside prison who communicate with inmates may be swept into surveillance without consent or knowledge.
  • Automated flags used as evidence could undermine due process if not carefully controlled.

Privacy and consent concerns

People in prison often have limited choices about how they communicate; many systems require calls and messages to pass through vendor platforms. That constraint creates two major concerns.

First, the question of consent. Were incarcerated people or their contacts informed that their communications could be used to train AI models? If consent was not informed and voluntary, the use of these communications as training data raises ethical problems.

Second, expectations of privacy. Courts have treated prisoner communications differently than those of free citizens, but that does not remove all privacy protections. Family conversations and legal calls are sensitive. Even if recording is permitted for security, using those recordings for algorithmic training and continuous automated scanning is a different use case that deserves public scrutiny.

Bias, accuracy, and the risk of disproportionate harm

Training AI on prison communications risks encoding patterns that reflect systemic bias. For example, people in prison are disproportionately from certain racial and economic groups. Language use, community references, and styles of speech that are more common in some groups may be incorrectly associated with risk.

A model that learns these correlations can produce higher false positive rates for already marginalized groups. Human review may reduce some errors, but if reviewers accept algorithmic flags uncritically, the harms can persist.

Accuracy in this context is hard to validate. It requires independent testing on representative data, transparent error rates, and public reporting on model performance by subgroup. Without that information, claims about effectiveness are difficult to evaluate.
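When labeled evaluation data exists, per-group error reporting is simple to compute, which makes its absence a policy choice rather than a technical obstacle. The sketch below uses invented records and generic group labels to show the kind of per-group false positive rate an independent audit could publish.

    from collections import defaultdict

    # Invented evaluation records: (group, model_flagged, truly_planning).
    # A real audit would use representative, labeled data and real subgroups.
    records = [
        ("group_a", True,  False),
        ("group_a", False, False),
        ("group_a", True,  True),
        ("group_b", True,  False),
        ("group_b", True,  False),
        ("group_b", False, False),
    ]

    # False positive rate per group: share of benign communications flagged.
    flagged_benign = defaultdict(int)
    total_benign = defaultdict(int)
    for group, flagged, planning in records:
        if not planning:
            total_benign[group] += 1
            if flagged:
                flagged_benign[group] += 1

    for group in sorted(total_benign):
        fpr = flagged_benign[group] / total_benign[group]
        print(f"{group}: false positive rate = {fpr:.0%}")

Large gaps between groups in a report like this would be direct evidence of the disproportionate harm described above.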

Legal and regulatory gaps

U.S. law treats prison surveillance differently from general public surveillance, and regulations vary by state. Several legal questions are unsettled:

  • How should recordings used for safety be treated when repurposed for AI training?
  • What notice, consent, or review is required for family members and other third parties who communicate with incarcerated people?
  • Can algorithmic flags be used as admissible evidence, and if so, under what standards of reliability and transparency?

At present there is limited federal guidance on the use of AI in corrections, and oversight typically rests with corrections agencies and state legislatures. That patchwork approach can leave gaps in accountability.

Accountability, transparency, and safeguards

Experts and advocates recommend several safeguards when predictive tools are used in high-stakes settings like prisons. Key measures include:

  • Independent audits of models for accuracy and bias, performed by qualified third parties.
  • Clear limits on the types of communications that can be used for training and scanning, with special protections for legal and attorney-client communications.
  • Human review procedures that require meaningful scrutiny, not blind reliance on algorithmic flags.
  • Transparency about error rates, decision rules, and data sources, disclosed in enough detail to allow oversight by courts and legislatures.
  • Redress mechanisms so individuals can challenge incorrect or harmful decisions based on algorithmic outputs.

Absent these safeguards, predictive surveillance risks creating chilling effects on communication, undermining family ties, and eroding trust in corrections institutions.

Potential effects on recidivism, family communication, and public safety

There are possible benefits and harms to consider. On the benefit side, earlier detection of credible threats could reduce violence and contraband, potentially improving safety inside facilities. That could indirectly support rehabilitation goals if prisons are safer and better managed.

On the harm side, widespread monitoring may reduce the frequency and candor of family calls, which studies show can help reintegration after release. Fear of being flagged may discourage inmates and families from discussing legitimate support plans. If surveillance disproportionately affects certain groups, it could worsen inequality in outcomes after release.

Questions for policymakers, providers, and the public

When a private company uses incarcerated people's communications to train and run predictive systems, stakeholders should ask clear questions:

  • What exactly was in the training data, and were individuals and their contacts notified?
  • What are the model's measured error rates, and how do those errors vary by race, age, and other factors?
  • Who reviews algorithmic flags, and what standards guide human decision makers?
  • How are flagged communications used, and are there safeguards to protect legal calls and family conversations?
  • What oversight exists, including independent audits, public reporting, and avenues for redress?

Key Takeaways

  • Securus Technologies trained an AI model on prison calls and is piloting it to scan inmate communications for signs of planned crimes.
  • The pilot raises concerns about consent, privacy, bias, false positives, and legal oversight.
  • Independent audits, transparency about error rates, human review standards, and redress options are necessary safeguards.
  • Potential safety benefits must be weighed against harms to family relationships and the risk of disproportionate targeting.

Short FAQ

Can recordings from prison calls be used to train AI?

Technically yes, if the provider or corrections agency has legal control of the recordings. Ethically, using those recordings for training requires careful notice, consent where feasible, and strong safeguards, especially for sensitive content.

Will the system stop all planned crimes?

No. Predictive systems estimate risk based on patterns. They can help identify suspicious communications earlier, but they are not perfect and will produce false positives and false negatives.

What protections should people expect?

Protections include transparency about how data are used, limits on which communications are scanned, independent audits for bias and accuracy, meaningful human oversight, and avenues to challenge decisions based on algorithmic flags.

Conclusion

The Securus pilot brings a longstanding technology trend into a charged setting. Using AI to scan prison calls for signs of planned crimes could provide safety benefits, but it also carries serious risks for privacy, fairness, and due process. Policymakers, corrections officials, technology providers, and the public should insist on transparency, independent evaluation, and clear limits before automated surveillance becomes a routine part of incarceration systems.

As this program moves from pilot to any broader use, public oversight must match the high stakes. Families, legal advocates, and civil liberties groups should be part of the conversation, because communications that help keep people connected can also help people rebuild their lives after release.
