Tennessee Bill Targets Coerced Suicide and AI 'Predators'

Published on February 27, 2026
Image: Antony-22, CC BY-SA 4.0, via Wikimedia Commons

A Tennessee lawmaker has introduced a bill that would make it a felony to coerce someone into taking their own life, including through text messages, apps, video calls, and certain artificial intelligence systems. House Bill 1951 is closely tied to the 2019 death of 19-year-old Grace Anne Sparks and is backed by her family and local officials. Supporters say the proposal is aimed at closing gaps in Tennessee’s criminal code that left prosecutors with limited tools in cases like Sparks’s.

What the bill would do

House Bill 1951, filed by Rep. Ryan Williams (R-Cookeville), would create a new Class D felony called “coercive suicide.” A person could be charged if they intentionally advise or encourage another person to commit or attempt suicide and know that person has communicated suicidal intent, according to the Tennessee General Assembly. The language also extends potential criminal liability to owners of artificial-intelligence systems if an AI “advises or encourages” a user to kill themselves and the owner “knew or reasonably should have known” there was such a risk.

The bill states it would apply to offenses committed on or after July 1, 2026. As a Class D felony, a conviction could bring a sentence of up to 12 years in prison, placing the charge alongside other serious non-violent felonies in the state’s sentencing scheme.

Backers and testimony

Grace Anne’s mother, Candis Sparks, has become one of the bill’s most visible supporters, testifying in committee that the proposal is “about accountability for predators who exploit vulnerability,” according to the Tennessee House Republican Caucus. Local television coverage quoted Sparks saying, “Maybe it’ll save somebody else’s life,” a hope that has become a refrain as she speaks publicly about her daughter’s case, as reported by WBIR.

Rep. Williams has argued that state law needs to keep up with modern technology so prosecutors are not boxed in by older legal categories when harmful behavior happens through screens instead of in person.

A high-profile case behind the push

Sponsors of HB1951 point directly to the 2019 death of Grace Anne Sparks, who investigators say shot herself during a video call after months of manipulative messages and dangerous role-play. In that case, Hayden Berkebile was convicted by a Knox County jury of criminally negligent homicide. The trial court ordered two years of confinement, and the Tennessee Court of Criminal Appeals later affirmed the conviction.

Backers of the bill say the outcome in that prosecution highlights how current statutes can leave prosecutors with narrower sentencing options than lawmakers might intend, a concern described in the court record.

Legal implications

The legal questions around the Sparks case were already complicated. On appeal, Berkebile raised First Amendment and evidentiary challenges, and the Tennessee Court of Criminal Appeals considered those arguments before upholding the verdict. The opinion also cites the trial judge’s blunt assessment that “this whole idea of somebody committing suicide, to achieve sexual release is – is the height of evilness, in this [c]ourt’s mind,” a line that has drawn attention in discussions of the case.

The section of HB1951 dealing with artificial intelligence raises its own set of issues. By tying criminal liability to whether an AI system’s owner “knew or reasonably should have known” about the risk of the technology encouraging suicide, the bill edges into questions of platform responsibility and enforcement that legislative trackers have already flagged as potential flashpoints.

What’s next

HB1951 has been reported out of the Criminal Justice Subcommittee and is on the Finance, Ways and Means Subcommittee calendar for March 4, 2026, according to the Tennessee General Assembly. If the subcommittee advances the bill, it would move back through Judiciary and could then reach the full House for a vote.

Advocates and legal observers say the upcoming hearings will be the main stage for debates over how the law defines “coercive” conduct, how far its reach should extend, and whether owners of AI systems can, or should, be held criminally responsible for what their technology tells vulnerable users to do.