University of North Texas (UNT) researcher Supreeth Shastri is collaborating on a project to establish an AI accountability framework, addressing the intersection of technology and law. Shastri, an assistant professor in the Department of Computer Science and Engineering at UNT, joins forces with Mihailis Diamantis, a law professor at the University of Iowa, on a National Science Foundation EAGER grant.
This initiative draws inspiration from the European Union's AI Act. "Europe is approaching AI regulation by assuming companies are held responsible unless they can prove otherwise," said Shastri. In contrast, Shastri and Diamantis' research aims to align with U.S. legal practice by creating a benchmark for AI accountability grounded in reasonableness: evaluating whether an AI system poses a risk similar to or lower than that of its human or AI counterparts performing the same task.
Research Focus
The framework will focus on three key areas:
- Self-driving vehicles, led by Shastri, utilizing UNT's Center for Integrated and Intelligent Mobility Systems (CIIMS).
- Cybersecurity, led by Shastri, utilizing UNT's Center for Information and Cybersecurity (CICS).
- Healthcare, led by Diamantis at University of Iowa Health Care.
Shastri emphasizes the project's interdisciplinary nature, which integrates expertise from electrical, mechanical, and computer engineering. The proposed AI Negligence Standard, centered on "reasonableness," offers a potential guideline for judicial assessments of AI behavior.
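The researchers have not yet published how "reasonableness" would be measured in practice. As a rough illustration only, the sketch below shows one way such a comparison could be framed: the AI system's observed harm rate is compared against a human baseline for the same task. All names, numbers, and the comparison rule are hypothetical and are not drawn from the Shastri-Diamantis framework.

```python
from dataclasses import dataclass


@dataclass
class RiskProfile:
    """Observed harm record for an agent (human or AI) performing a task."""
    incidents: int     # harmful outcomes observed
    exposure: float    # units of activity, e.g., miles driven or cases handled

    @property
    def rate(self) -> float:
        # Harm rate per unit of exposure
        return self.incidents / self.exposure


def meets_reasonableness(ai: RiskProfile, baseline: RiskProfile,
                         margin: float = 1.0) -> bool:
    """Hypothetical test: the AI is 'reasonable' if its harm rate is no worse
    than the baseline's, scaled by an allowable margin (1.0 = match or beat)."""
    return ai.rate <= baseline.rate * margin


# Invented figures: an autonomous-driving system vs. an average human driver,
# both measured over 100 million miles.
human_drivers = RiskProfile(incidents=150, exposure=100_000_000)
autonomous_ai = RiskProfile(incidents=90, exposure=100_000_000)

print(meets_reasonableness(autonomous_ai, human_drivers))  # True: lower harm rate
```

Any real standard would of course involve far more than a single rate comparison, but the sketch conveys the basic idea of benchmarking AI conduct against a comparable human or AI actor rather than against a fixed rulebook.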
As Shastri and Diamantis develop their framework, their work aims to ensure that innovation in AI is balanced with accountability, potentially shaping future U.S. regulations. “I think the most exciting aspect is that we’re building something engineers, lawyers, and judges can all understand,” Shastri stated, highlighting the collaborative nature of their endeavor.