Strategies for Avoiding Bias and Inequity in AI-enabled Fraud Detection

We’re taught that ignoring a problem won’t make it go away. By the same token, not looking for a problem doesn’t mean the problem doesn’t exist. Yet that’s essentially the approach some government agencies are taking to bias and inequity in fraud detection.

Fraud detection has become a crucial effort for many departments, from revenue to labor to health and human services. If your agency is involved in approving or issuing benefits to constituents, you need ways of identifying and preventing fraud.

One powerful solution is artificial intelligence (AI), which can rapidly sift through large troves of data to pinpoint anomalies that signal fraud. But AI for fraud detection is often trained on datasets that include sensitive attributes such as age, gender, financial history, and criminal justice interactions. That opens the door for bias or inequity to creep into fraud detection.

The good news is that strategies exist for avoiding bias and inequity in AI-enabled fraud detection. Here’s what your agency needs to know.

Bias and ML Models

Agencies need to be diligent about uncovering benefits fraud for several reasons. One is to meet federal funding requirements. Another is to make sure eligible residents receive the benefits they’re entitled to. A third is to ensure that criminals don’t steal limited resources. That’s especially crucial in an era when well-funded crime groups backed by adversarial nations are stealing tens of millions of dollars in benefits.

In fact, fraudsters have become so sophisticated and fraud so fluid that traditional methods of identifying it are no longer effective. That’s where AI-enabled fraud detection comes into play. But it’s also where bias or inequity can become a factor.

Bias can creep into machine learning (ML) models in a variety of ways. One is through proxy data. A model that omits racial data but includes ZIP code might still result in disparate impacts along racial lines, given the existence of residential segregation.
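One way to surface proxy risk is to test how well a candidate feature predicts the protected attribute on its own. Below is a minimal sketch in Python; the dataset is synthetic and the column names (zip_code, race) are hypothetical placeholders, not part of any real agency system.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Synthetic stand-in for an applicant dataset; in practice, load your own.
rng = np.random.default_rng(0)
zips = rng.choice(["85001", "85003", "85012", "85020"], size=2000)
# Simulate residential segregation: group membership skews by ZIP code.
race = np.where(
    (zips == "85001") | (zips == "85003"),
    rng.choice(["A", "B"], size=2000, p=[0.8, 0.2]),
    rng.choice(["A", "B"], size=2000, p=[0.3, 0.7]),
)
df = pd.DataFrame({"zip_code": zips, "race": race})

# Can ZIP code alone predict the protected attribute above chance?
proxy_check = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(
    proxy_check, df[["zip_code"]], df["race"], cv=5,
    scoring="balanced_accuracy",
)
print(f"ZIP -> race balanced accuracy: {scores.mean():.2f}")
# A value well above 0.5 means ZIP code leaks protected information
# and deserves scrutiny before it feeds a fraud model.
```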

Another way bias can enter ML models is through training with historical data. If you import a dataset based on human interventions in benefits delivery, say, any bias that might have influenced those interventions will be built into your model.
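One simple safeguard is to audit the historical labels before training on them. The sketch below uses synthetic data with hypothetical column names to compare past flag rates across demographic groups:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for historical case decisions; "flagged" records
# past human fraud-review outcomes. Column names are hypothetical.
rng = np.random.default_rng(1)
hist = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=5000, p=[0.6, 0.4]),
})
# Simulate reviewers who flagged group B more often at equal risk.
hist["flagged"] = np.where(
    hist["group"] == "B",
    rng.random(5000) < 0.12,
    rng.random(5000) < 0.05,
)

# Flag rates by group: a large gap in the training labels will be
# learned and reproduced by any model fit to them.
print(hist.groupby("group")["flagged"].mean())
```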

In either case, the result could be disparate treatment (decisions applied differently to different demographic groups) or disparate impact (decisions that harm or benefit different groups unequally).
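Disparate impact is often screened with the “four-fifths rule” from U.S. employment law: if one group’s favorable-outcome rate falls below 80% of another’s, the disparity warrants scrutiny. A minimal illustration, with made-up approval rates:

```python
# Compare favorable-outcome rates between groups; a ratio below 0.8
# is conventionally treated as a red flag under the four-fifths rule.
def disparate_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower favorable-outcome rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Example: 90% of group A's claims approved vs. 68% of group B's.
ratio = disparate_impact_ratio(0.90, 0.68)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.76 -> below 0.8
```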

Better Visibility for Greater Equity

A key strategy for avoiding unfairness in AI is visibility. Neural-network technologies like the hugely popular ChatGPT can be highly accurate. But they’re built on “black box” algorithms that don’t let stakeholders understand how they arrive at their outputs.

Your agency should be able to explain how specific inputs to your AI models produce specific outputs. For instance, if a model flags an individual for identity theft, you should be able to say that 15% of the flag was driven by a change to their bank account, 10% by an address change that wasn’t in the National Change of Address database, 2% by a move to a specific ZIP code, and so on. That gives your agency and your constituents confidence that, regardless of the individual’s age, gender, or race, they weren’t flagged because of bias.
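As an illustration of how such a percentage breakdown can be produced, here is a minimal sketch using the open-source shap package on a small synthetic scikit-learn model. The feature names echo the example above but are hypothetical, and normalizing contributions into percentages of the total attribution is an illustrative presentation choice, not a claim about how any particular product reports results.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic training data with hypothetical identity-theft features.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "bank_account_changed": rng.integers(0, 2, n),
    "address_not_in_ncoa": rng.integers(0, 2, n),
    "new_zip_risk_score": rng.random(n),
})
# Labels loosely driven by the first two features.
y = ((X["bank_account_changed"] + X["address_not_in_ncoa"]
      + rng.random(n)) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shapley values attribute one prediction across its input features.
explainer = shap.TreeExplainer(model)
case = X.iloc[[0]]                   # one flagged case
sv = explainer.shap_values(case)[0]  # one attribution per feature

# Present each feature's contribution as a share of total attribution.
pct = 100 * np.abs(sv) / np.abs(sv).sum()
for name, share in sorted(zip(X.columns, pct), key=lambda p: -p[1]):
    print(f"{name}: {share:.1f}% of the flag")
```

A breakdown like this makes it possible to answer, attribute by attribute, why a given case was flagged.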

A growing number of organizations are benefiting from AI-powered fraud detection that ensures visibility and explainability. The Arizona Department of Revenue, for example, relies on the Voyatek RevHub integrated tax system for risk scoring, issue detection, social network analysis, audit selection, and collections optimization. RevHub uses an open-source implementation of a game-theory measure, the Shapley value, that shows how much each data attribute contributed to an ML prediction. That transparency helps ensure that bias isn’t inadvertently built into the model.

As organizations combat increasing rates of benefits fraud, they’ll need to respond with AI-enabled fraud detection. By understanding and committing to strategies for avoiding bias and inequity in AI, you can ensure that your agency continues to serve the public fairly and effectively.

-Voyatek Leadership Team