The human at the heart of digital justice
Associate Professor Niamh Kinchin examines how technology is reshaping justice, and why human judgment still matters
October 21, 2025
As artificial intelligence continues to reshape how we live and work, its quiet arrival into the courtroom is raising profound questions about fairness, accountability, and humanity in justice.
At the ÁñÁ«ÊÓÆµapp of ÁñÁ«ÊÓÆµapp, Associate Professor Niamh Kinchin from the School of Law is leading a conversation about what happens when algorithms meet the law, and when technology starts to influence life-changing decisions.
Her latest paper, ‘The Human in the Feedback Loop’, explores how governments around the world are testing predictive technologies to help assess asylum claims. The work shines a light on the growing use of artificial intelligence (AI) in legal processes and warns of what could be lost if human judgment is replaced by machine prediction.
Predicting the law using AI
Predictive analytics is a branch of AI that uses large amounts of historical data to identify patterns and predict outcomes. In the legal world, these systems can estimate the likelihood of a case succeeding in court, analyse precedents, or help manage growing caseloads.
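At its simplest, a predictive system of this kind counts how often past cases with similar features succeeded and treats that frequency as the likelihood for a new case. The toy sketch below (the categories and data are invented for illustration, not drawn from any real system) shows both the mechanism and its core weakness: a claim unlike anything in the historical record leaves the model with nothing to say.

```python
from collections import defaultdict

def build_predictor(history):
    """history: list of (category, succeeded) pairs from past cases."""
    totals = defaultdict(lambda: [0, 0])  # category -> [successes, total]
    for category, succeeded in history:
        totals[category][0] += int(succeeded)
        totals[category][1] += 1

    def predict(category):
        successes, total = totals.get(category, (0, 0))
        if total == 0:
            return None  # no precedent: the model cannot generalise
        return successes / total  # past success rate as "likelihood"

    return predict

# Hypothetical historical outcomes, grouped by claim category
past_cases = [("contract", True), ("contract", True),
              ("contract", False), ("tort", False)]
predict = build_predictor(past_cases)

print(predict("contract"))  # 2 of 3 past contract cases succeeded
print(predict("novel"))     # None: a claim without precedent defeats the model
```

The `None` branch is the point: a system built purely on historical patterns is silent, or worse, misleading, when a case's circumstances have no counterpart in the data, which is precisely the situation refugee claims often present.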
“AI tools promise faster and more consistent decisions, but the law isn’t just about logic and numbers,” Associate Professor Kinchin says. “It’s about reasoning, fairness, and the human story behind every case.”
While predictive systems are already being used for administrative processes such as visa applications, their use in asylum and refugee status decisions remains limited and controversial. Refugee decisions often depend on complex, personal narratives and require compassion, not just computation.
Technology and human rights
Refugee status determination (RSD) is one of the most complex forms of legal decision-making. It asks decision-makers to assess whether a person has a well-founded fear of persecution if they return to their home country.
“Fear is both subjective and forward-looking,” Associate Professor Kinchin explains. “A refugee claim is not only about what has happened, but what might happen. That kind of reasoning doesn’t fit neatly into an algorithm.”

Her research points out that most AI systems rely on historical data and patterns. But in refugee law, those patterns may not exist or may not be fair. Algorithms trained on past cases could reproduce existing biases or exclude unique experiences.
“Predictive analytics risks turning deeply human judgments into statistical probabilities,” she says. “We have to ask whether efficiency is worth the cost of empathy.”
The case for consideration
Countries such as Canada and the Netherlands have started using automated systems to sort visa applications and identify case similarities. These systems are designed to speed up decisions and reduce administrative burdens. But when such tools move into more sensitive areas, like refugee determination, the consequences could be serious, Associate Professor Kinchin warns.
Her research highlights the limits of current AI models. Many rely on inductive reasoning – drawing general conclusions from data – while refugee law depends on abductive reasoning, which requires decision-makers to make informed judgments based on uncertain or incomplete information.
“Abductive reasoning recognises uncertainty and imagination,” she says. “It asks the decision-maker to consider competing explanations and to give the benefit of the doubt to the person seeking protection. That is something machines cannot do.”
A human in the feedback loop
The title of Associate Professor Kinchin’s paper, ‘The Human in the Feedback Loop’, captures what she sees as the essential safeguard in any use of AI in justice: the continued presence of people.
Technology can help lawyers and decision-makers manage complexity, identify inconsistencies, or detect bias, but it cannot replace human reasoning. “There must always be a person who interprets, questions and challenges what the algorithm produces,” she says. “Without that, we risk losing the very qualities that make justice human.”
Technology with a human touch
Associate Professor Kinchin’s work contributes to a growing global discussion about ethics and accountability in the age of AI. Her research is part of UOW’s broader focus on responsible innovation, ensuring that technology serves people, not the other way around.
“As we build smarter systems, we must also build smarter safeguards,” she says. “The challenge is not to stop innovation, but to make sure it aligns with human rights, fairness, and compassion.”
The future of digital justice lies in balance, combining the efficiency of machines with the moral insight of humans. “Technology can support decision-making, but it should never decide who deserves safety or protection,” she says. “That decision must always belong to us.”