Fraud Detection: Why You Need a Human in the Loop

Their methods varied. Some used stolen identities to falsely claim jobless benefits from state workforce agencies. Others inflated employee numbers, misappropriated the names of existing companies, or created fake businesses to qualify for federal loans. All told, fraudsters scammed the government out of an estimated $80 billion to $100 billion in COVID-19 relief funds.

Of course, it’s not just pandemic relief. Government agencies know that fraud, waste and abuse is a perpetual problem – one that drains an average of 15.7% from state unemployment programs, for example.

In response, agencies are investing in emerging technologies like artificial intelligence (AI) and machine learning (ML) to automate fraud detection. These investments are wise: AI can correlate disparate data inputs and recognize patterns of abuse across channels in ways you simply can’t match manually.

But don’t sideline your fraud investigators just yet. Fraud experts have a vital role to play in maximizing your return on investment (ROI) in digitization and optimizing the outcomes of your fraud-prevention efforts. In short, you need a human in the loop.

Advantages of Automation

Agencies have long relied on human-enabled data analysis for fraud detection. But the approach suffers from a growing number of shortcomings.

Human-powered analysis is time-consuming, often requiring teams of people to spend many hours combing through numbers. It’s typically backward-looking, relying on historical facts rather than evaluating fraud signals as they come in. It’s limited in scope, unable to keep pace with the astonishing volumes of data agencies must deal with today. And it’s slow, limited in its ability to generate outputs in real time – a requirement if you want to stop fraud before it occurs.

Automated analytics technologies address these problems. For instance:

  • ML algorithms can determine the probability that benefits applications are fraudulent by learning from past known cases of fraud.
  • Graph-based analysis can look for connections among networks of suspicious claims – for example, multiple claims associated with a single username, email address, IP address or bank account (see the sketch after this list).
  • ML-based pattern recognition can determine, for example, that multiple claimant names are connected to a single data breach – potentially uncovering a “fraud cluster.”
  • Risk scoring can prioritize cases for investigation, focusing time and energy on identifying and stopping the most likely fraud cases, or those with the largest impact.
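
To make the graph-based idea concrete, here is a minimal sketch in Python. It assumes claims arrive as simple records with hypothetical field names (email, ip, bank_account) and uses a union-find structure to connect claims that share any identifier; connected groups above a size threshold are flagged for review. This illustrates the linking logic only – it is not any particular vendor’s implementation.

    from collections import defaultdict

    def find_clusters(claims, shared_fields=("email", "ip", "bank_account"),
                      min_size=3):
        """Flag groups of claims linked by shared identifiers."""
        parent = list(range(len(claims)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        # Connect each claim to the first earlier claim that shares
        # any identifier value with it.
        seen = {}  # (field, value) -> first claim index seen
        for idx, claim in enumerate(claims):
            for field in shared_fields:
                value = claim.get(field)
                if value is None:
                    continue
                if (field, value) in seen:
                    union(idx, seen[(field, value)])
                else:
                    seen[(field, value)] = idx

        # Group claims by connected component; keep the large groups.
        clusters = defaultdict(list)
        for idx in range(len(claims)):
            clusters[find(idx)].append(claims[idx]["claim_id"])
        return [ids for ids in clusters.values() if len(ids) >= min_size]

    claims = [
        {"claim_id": "C1", "email": "a@x.com", "ip": "1.2.3.4"},
        {"claim_id": "C2", "email": "b@x.com", "ip": "1.2.3.4"},
        {"claim_id": "C3", "email": "b@x.com", "ip": "9.9.9.9"},
        {"claim_id": "C4", "email": "c@x.com", "ip": "8.8.8.8"},
    ]
    print(find_clusters(claims))  # [['C1', 'C2', 'C3']]

A production system would also weight identifiers by how suspicious sharing them is (a shared bank account matters more than a shared IP address), but the connected-component core stays the same.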

Looping in Humans

But automation technologies can introduce their own challenges, especially for government agencies. ML algorithms can generate outputs in ways that are opaque to anyone but a data scientist. And in an era of waning trust in government, constituents might be skeptical of agency actions taken on the basis of automated findings.

There are historical reasons for these concerns. A widely reported MIT Media Lab study found that certain computer-aided facial analysis systems were biased along lines of skin color and gender: error rates for images of darker-skinned women approached 35%.

This example involved one type of AI used in one application. Algorithms can be trained to continually improve their accuracy over time, and experienced data scientists can weed out bias, but even the potential for bias in algorithms can heighten anxieties about accuracy and equity when it comes to fraud investigations.

“Human in the loop” approaches can mitigate concerns about bias in ML solutions and simultaneously enhance the performance of predictive models. Fraud experts are trained and experienced in recognizing the characteristics that make a benefits claim potentially fraudulent. Agencies can implement a validation process in which fraud teams regularly review samples of cases to rule out potential bias. As necessary, they can fine-tune algorithms to ensure ongoing accuracy.
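
As one illustration of what that validation process could look like, the Python sketch below samples model-flagged cases for expert labeling and compares false-positive rates across groups. The field names (group, claim_id), the sample size and the reviewer_label callback are assumptions made for the example, not a prescribed workflow.

    import random

    def review_sample(flagged_cases, reviewer_label, sample_size=50, seed=42):
        """Have experts label a random sample of flagged cases and
        report the false-positive rate per group."""
        rng = random.Random(seed)
        sample = rng.sample(flagged_cases, min(sample_size, len(flagged_cases)))

        fp_by_group = {}  # group -> (false positives, total reviewed)
        for case in sample:
            is_fraud = reviewer_label(case)       # the expert's judgment
            group = case.get("group", "unknown")  # e.g., a demographic bucket
            fps, total = fp_by_group.get(group, (0, 0))
            fp_by_group[group] = (fps + (0 if is_fraud else 1), total + 1)

        # A large spread in false-positive rates across groups is a
        # signal to re-examine the model's features and retrain.
        return {g: fps / total for g, (fps, total) in fp_by_group.items()}

    # Toy usage: a dummy reviewer that confirms every third case as fraud.
    cases = [{"claim_id": i, "group": "A" if i % 2 else "B"} for i in range(200)]
    print(review_sample(cases, reviewer_label=lambda c: c["claim_id"] % 3 == 0))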

Combining the knowledge and experience of fraud analysts with the computational speed and number-crunching capabilities of AI can be highly effective in preventing fraud. It can also help your agency maintain public trust.

The Power of Augmented Analytics

The value of this human-in-the-loop approach is leading many AI experts to refer to “augmented intelligence” and “augmented analytics.” Rather than pit digital against manual, augmented analytics continually combines the two in a virtuous cycle.

Automated tools can even aid manual reviews of AI outputs – essentially layering digital on top of manual on top of digital. Augmented analytics applies ML and natural language processing (NLP) to reduce the time needed to clean and normalize data, identify the best analytics model for each use case, and generate reports based on the outputs.
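
Here is an illustrative Python sketch of that data-cleaning step, under assumed field names: it standardizes identifier fields so that trivially different spellings of the same claimant match during downstream link analysis.

    import re

    def normalize_claim(raw):
        """Standardize raw claim fields before matching or scoring."""
        return {
            "claim_id": raw["claim_id"],
            "name": " ".join(raw.get("name", "").split()).title(),
            "email": raw.get("email", "").strip().lower(),
            "phone": re.sub(r"\D", "", raw.get("phone", "")),  # digits only
        }

    raw = {"claim_id": "C7", "name": "  jane  DOE ",
           "email": " Jane.Doe@X.COM ", "phone": "(555) 010-1234"}
    print(normalize_claim(raw))
    # {'claim_id': 'C7', 'name': 'Jane Doe',
    #  'email': 'jane.doe@x.com', 'phone': '5550101234'}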

Voice-recognition technology, for instance, can transcribe a spoken question and return quantitative answers in seconds. That allows for real-time clarification and avoids the need to code and run new analyses to answer each new question.

Augmented analytics enables agencies to better understand outputs, address more complex problems and take more targeted actions. It can make fraud teams more efficient and uncover fraud faster – ideally, before benefits payments are issued.

AI can see patterns humans can’t, detecting fraud quickly and continuously. In fact, ML algorithms have stopped hundreds of millions of dollars in tax-refund fraud at the state and federal levels. At the same time, fraud experts can apply their experience and expertise to ensure AI is identifying the right anomalous behaviors for the right reasons. With humans in the loop, agencies can better reduce fraud, ensure accountability, save money and build public trust.