
Should AI Allow Victims to Address a Sentencing Court from the Dead?

A state court in Phoenix recently considered an AI-generated video in which the manslaughter victim appeared to present a victim impact statement. Chris Pelkey had been killed in a 2021 road rage incident, and his sister testified at his sentencing hearing. She prepared a statement she believed her deceased brother would have delivered, including an expression of his belief in forgiveness, and used artificial intelligence to animate a picture of the victim so that it appeared and sounded as if he were delivering the statement himself.

Defense counsel did not object, and the relaxed evidentiary standards that apply at sentencing hearings (as opposed to trials) may have permitted use of the video in any event, but this creative use of technology raises novel issues. While state and federal laws generally give victims the right to be heard at sentencing, does an AI video delivering a script prepared by a third person truly represent the thoughts and sentiments of the deceased victim? The emotional impact of “seeing” a decedent address the court can be powerful, but is it fair? And while Pelkey's video expressly acknowledged that it had been generated by AI, courts are still struggling with how to identify and handle deepfakes. Given its apparent effectiveness in this case (the presiding judge was quoted as saying he “loved that AI”), we can expect more such videos to test these boundaries.

In what's believed to be a first in U.S. courts, the family of Chris Pelkey used AI to give him a voice. . . . The AI rendering of Pelkey told the shooter during the sentencing hearing . . . that it was a shame they had to meet under those circumstances — and that the two of them probably could have been friends in another life.

Tags

litigation