Effects of Use of Generative AI as Evidence in Criminal Cases

Journalist: Sydni Callaway (OSU)

Rep. Mensah-Woode (ORU) presented the “BASIC Act” of 2026 to the House of Representatives. The “BASIC Act” focuses on the use of generative artificial intelligence (AI) in criminal evidence and on determining when altered material is admissible.

While Mensah-Woode argued that such evidence is misleading and should generally be inadmissible, he wrote exceptions into his legislation. The bill states that AI-generated material can be acceptable “in cases where computer-generated artifacts are directly relevant to the grounds of a suit.” He also acknowledged that generative AI has its place: the technology is useful, but it can be put to adverse ends. “There is a risk of false sentencing at the hand of artificially reduced or generated content,” said Mensah-Woode.

While using AI in a criminal case can be detrimental, it is also easy to see how AI could help the judicial system. In his author’s explanation, Mensah-Woode pointed out that the current Oklahoma statutes already address the exclusion of otherwise relevant evidence. 12 OK Stat § 2403 (2025) reads, “evidence may be excluded if its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, misleading the jury.”

Despite his explanation of why the bill should pass, and his insistence that the legislation is not inherently anti-AI, the bill ultimately failed 18-42 in the House chamber. Many House delegates reasoned that AI is useful and can benefit the judicial system.

The effects of allowing AI-generated material into admissible evidence could be detrimental. As new forms of technological evidence enter the courtroom, admitting AI-generated content can shape a person’s future. Back in 2020, video evidence in the case of jogger Ahmaud Arbery, who was attacked by three Georgia men, ultimately held those men responsible for their crime. Now, it is increasingly hard to determine whether a video is real or generated.

Former federal prosecutor Duncan Levin told Harvard Law Today why introducing AI into criminal cases can be detrimental. “Criminal trials are not just about resolving disputes efficiently. They are about the state’s attempt to take liberty from an individual, and they are structured around a constitutional burden of proof designed to minimize the risk of factual error.”

While the advancement of AI is inevitable, its use in evidence deserves careful consideration. Many videos on social media have become difficult to identify as real or AI-generated, and that uncertainty could infiltrate professional settings and high-stakes decisions, including the determination of an individual’s liberty.