
AI and fraud: What CPAs should know

Artificial intelligence technology can be misused for fraud, but it is also a tool accountants can use to detect fraud. Find out how to manage AI-generated fraud risks.

Artificial intelligence (AI), like any technology, can be misused. Malicious actors intent on committing fraud can use AI to create more realistic deceptions at a much faster clip.

How has the threat landscape changed? Consider executive impersonation scams. The initial attacks involved, for example, an employee receiving an emergency email from their CEO requesting the immediate transfer of funds for an urgent transaction. The employee would be instructed to transfer the funds to an account controlled by criminals. In response, companies adapted their policies and trained employees to recognize the scam's red flags.

With AI, criminals can generate deepfake voicemail or video messages from the CEO, or even impersonate the CEO's voice and image in real time. That's a next-level threat, but new types of fraud may not even be the biggest risk for CPA firms and other potential targets. The greater fear is that AI can help criminals automate old fraud schemes, increasing the speed, efficiency, and persistence of the attacks. In fact, it's already happening. The FBI, through its Internet Crime Complaint Center, has alerted the public to the risk of AI-driven scams since at least 2020.

HOW AI IS USED TO CREATE FRAUD SCHEMES

AI can be used to assist in perpetrating fraud schemes by:

Generating convincing false or misleading documents and data

Traditionally, fraudsters have created false documents, reports, data, and deceptive emails in support of their fraud schemes. These falsified documents and data often contained telltale deficiencies, such as mathematical mistakes, fuzzy logos, nonsensical invoice numbers, and formatting inconsistencies, that could tip off the recipient.

Resourceful fraudsters can now use AI to create convincingly realistic documents and data, such as invoices, contracts, reports, spreadsheets, and bank statements, to support a fraud scheme. The more examples of legitimate documents available for an AI system to evaluate, the higher-quality fake the AI can generate. AI's document-generation capabilities, whether applied to fraudulent or legitimate purposes, are ever-increasing, making the technology a dangerous tool in a fraudster's arsenal.

Increasing the sophistication of traditional attacks

AI can be used to analyze large sets of publicly available information to make attacks more targeted and personal in nature.

Take, for example, a traditional phishing attack. It may be rare for someone to send funds or personal information to a "Nigerian prince" today. But what about a message that appears to come from a distressed family member seeking funds, complete with name, address, phone number, and other personal information? If there is enough publicly available information on social media or other sources, the attacks could be bolstered with accompanying photographs, videos, and even a voice mimicking the family member.

Increasing the speed and persistence of schemes

AI's ability to process a large volume of data and perform tasks with incredible efficiency can make it a formidable problem for businesses and the public at large. Fraudsters understand that phishing schemes, spearphishing, robocalls, and ransomware are a numbers game. The more attempts, the greater the likelihood of success.

Historically, these types of schemes required some human intervention or at least hours of programming and planning. AI can perform them with astonishing speed. Further, AI does not get bored or distracted, nor does it need to take breaks to eat or sleep. AI (at least in its current state) does not have a conscience and can carry out attacks without getting discouraged or feeling guilt.

Decreasing detectability

The proliferation of cybercrime is primarily due to two factors. First, it is profitable. By 2025, cybercrime is projected to cost more than $10 trillion worldwide. Second, cybercrime is a risk bargain compared to other crimes. Tracing cybercrimes back to a human perpetrator is much more difficult than catching a burglar who is physically present in the act of stealing.

AI ups the ante with automated schemes that leave almost no trace leading to a human perpetrator. AI can use publicly available programs designed to evade detection. It can also be designed to "think" for itself, learning from detection countermeasures and altering itself to avoid any successful detection techniques.

Use of generative adversarial networks

In a recent development, criminals have turned to generative adversarial networks (GANs), which pair two neural networks, essentially two AI systems working in tandem. Criminals train one network, the generator, to produce false information, while the other, the discriminator, is trained to detect it. Because the two networks train against each other, each round of training yields better means of evading detection.
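
To illustrate the adversarial dynamic, the following is a minimal sketch of a GAN training loop in PyTorch on toy one-dimensional data. The network sizes and parameters are illustrative only and are not drawn from any actual fraud tool; the same generator-versus-discriminator loop also underlies legitimate GAN applications:

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 3.0   # stand-in for genuine data
    fake = generator(torch.randn(32, 8))    # forgeries built from random noise

    # The discriminator learns to label real data 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The generator learns to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

Each pass makes the discriminator a harsher critic and the generator a better forger, which is exactly the escalation the technique exploits.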

A GROWING THREAT OUTLOOK

While these scenarios may sound dystopian, the techniques have already been successfully employed. According to court records, AI was allegedly used in January 2020 to mimic a United Arab Emirates company director's voice to steal $35 million. Even prior to that, in the UK, fraudsters allegedly used convincing deepfakes to impersonate the CEO of an energy firm, resulting in a fraudulent transfer of $243,000.

These attacks didn't use the latest technology. As AI technology improves, the sophistication of these types of attacks will only increase.

In past months, for example, computer-manipulated images and audio of celebrities have appeared on social media selling phony services.

HOW TO USE AI TO DETECT AND PREVENT FRAUD

While AI has the potential to support fraudsters, advancements in AI technology also present opportunities for those in fraud prevention and detection professions, such as accountants and finance professionals.

Ways AI can be used to assist in detecting and preventing fraud include:

Pattern recognition

Data analytics have long been used to detect anomalies, or fraud indicators, in large datasets.

AI has the potential to speed up and improve pattern recognition by analyzing massive datasets quickly. AI and machine learning increase the ability of firms and finance departments to detect anomalies efficiently and effectively.

AI also has the ability to self-learn: when flagged anomalies are determined to be false positives, the system trains itself to place less emphasis on anomalies with similar attributes. Conversely, anomalies determined to be valid help the system learn to place greater emphasis on transactions or data with similar attributes. Banks and financial institutions have led the charge in this area, using machine learning to detect anomalous transactions and quickly block potential additional fraudulent charges.
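
As a concrete illustration, the following minimal sketch flags anomalous transactions with scikit-learn's IsolationForest. The column names and contamination rate are assumptions for the example, not a prescribed configuration:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy transaction data; real inputs would come from the ledger or bank feed.
transactions = pd.DataFrame({
    "amount":      [120.00, 85.50, 9800.00, 47.20, 10250.00],
    "hour_of_day": [10, 14, 3, 11, 2],
    "vendor_id":   [17, 4, 99, 17, 99],
})

# fit_predict returns 1 for normal rows and -1 for anomalies.
model = IsolationForest(contamination=0.1, random_state=0)
transactions["flag"] = model.fit_predict(transactions)

review_queue = transactions[transactions["flag"] == -1]
print(review_queue)

As reviewers label flagged items as false positives or confirmed fraud, those labels can be fed back, for example by retraining the model or graduating to a supervised classifier, which is the self-learning loop described above.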

Risk assessment

AI can be used to evaluate system and process security to identify potential gaps in internal controls. This can be done through a single-factor analysis or a multifactor scoring model, which can locate blind spots in less than a second.
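
A multifactor scoring model can be as simple as a weighted sum over observed control gaps. The sketch below is a hypothetical illustration; the factor names, weights, and threshold are assumptions, not an established standard:

# Illustrative weights for common internal control gaps.
FACTOR_WEIGHTS = {
    "no_dual_approval":      0.35,
    "shared_credentials":    0.25,
    "manual_reconciliation": 0.20,
    "stale_vendor_list":     0.20,
}

def risk_score(observed):
    """Weighted sum of observed gap severities, each rated 0.0 to 1.0."""
    return sum(FACTOR_WEIGHTS[f] * observed.get(f, 0.0) for f in FACTOR_WEIGHTS)

process = {"no_dual_approval": 1.0, "manual_reconciliation": 0.5}
score = risk_score(process)
print(f"Risk score: {score:.2f}{' - HIGH RISK' if score > 0.4 else ''}")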

Threat detection

AI can be used to detect and eliminate threats such as malicious code, sometimes even malicious AI code. Unfortunately, as is usually the case in the fraud space, the ability to detect fraud tends to lag behind the creativity of fraudsters; it is difficult to detect a scam that has not yet been created. Still, available AI tools should be used to thwart bad actors.

Automation

Just as fraudsters use AI to automate scams, AI-driven software can be used to automate fraud detection. In the past, running real-time data analytics was practically and economically infeasible. With emerging AI capabilities, these measures can be automated and run in a matter of seconds with little or no human intervention.
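
In its simplest form, such automation is a scheduled loop around the detection model. The sketch below uses only the Python standard library; load_new_transactions and scan_for_anomalies are hypothetical placeholders for a firm's own data feed and model:

import time

def load_new_transactions():
    return []   # placeholder: pull new records from the ERP or bank feed

def scan_for_anomalies(batch):
    return []   # placeholder: e.g., the anomaly model sketched earlier

while True:
    flagged = scan_for_anomalies(load_new_transactions())
    if flagged:
        print(f"{len(flagged)} transactions queued for human review")
    time.sleep(300)   # rerun every five minutes with no analyst involved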

Accountants will face challenges in adjusting to increasingly sophisticated schemes that use AI technology. But it would be irresponsible to simply ignore or shun AI technology. Rather, accountants who embrace the emerging technology will enjoy advantages over competitors that do not.

WAYS TO MITIGATE THE RISK OF AI MISUSE

Establishing safeguards that address AI risks is the most important countermeasure against misuse.

* Learn as much as possible about AI technology and its capabilities, both now and what may come in the future. Familiarity will increase your ability to manage AI-aided fraud risks.

* Embrace technology that is useful in combatting fraud and performing forensic analyses. Accountants at one time had no choice but to manually enter data from various sources, but the advent of optical character recognition increased the speed and efficiency of data entry exponentially. AI has similar potential. (See the sidebar "Ways to Train Specialized AI Models.")

* Verify results that AI produces. The technology isn't perfect--despite constant improvements, it can get things wrong, particularly when it relies on data that is inherently false or misleading. If AI is used in an official capacity, statements relying on AI results need to be vetted for veracity.

* Limit and/or control internal company data. AI relies on available data to perform its analysis. Data that cannot be obtained cannot be used against you. Also, limit and/or control who can see publicly posted data, such as social media posts. The more publicly available images, video, and voice recordings AI can turn to, the more convincing a deepfake it can produce.

* Obtain data and supporting documentation from a reliable, third-party source. For example, bank statements obtained directly from the bank are far less likely to be altered using AI.

* Establish company- and firm-specific standards of use and development of AI as soon as possible. Principles developed by the United Nations AI Advisory Body or NGOs, such as the Center for AI and Digital Policy's Universal Guidelines for AI, can be leveraged in constructing corporate standards of use and development of AI.

AI is a complex and evolving field, and one that accounting professionals, particularly forensic accountants, should watch closely. It is incumbent on those in accounting to stay aware of the changing landscape and to employ this technology in an ethical manner.

Ways to train specialized AI models

Two main methods exist today for creating a specialized AI model for custom uses such as identifying fraudulent transactions, scams, and phishing attempts.

The most popular method is retrieval-augmented generation (RAG), commonly referred to as the embedding method. RAG first provides additional data to the AI model (after converting the data to vectors that computers can understand) and then asks the model to search and respond based on the additional data provided. This technique does not require expensive hardware to retrain an existing AI model because the additional data lives outside the model itself.

The method is very effective in letting an existing AI model draw on specialized data and knowledge supplied by a user. For example, accountants can feed additional datasets with fraud patterns and features to an open-source AI model via RAG, turning the model into a tireless fraud fighter that identifies fraudulent transactions.
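
The retrieval half of that flow can be sketched in a few lines with the open-source sentence-transformers library. The fraud-pattern notes, model choice, and query below are illustrative; a production system would add a vector database and pass the retrieved passages to a language model:

from sentence_transformers import SentenceTransformer, util

# Notes describing known fraud patterns; in practice these would be the
# firm's own datasets of fraud features.
fraud_notes = [
    "Invoices kept just under the $10,000 approval threshold.",
    "New vendor whose bank account changed within 30 days of payment.",
    "Round-dollar wire requests sent outside business hours.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
note_vectors = embedder.encode(fraud_notes, convert_to_tensor=True)

query = "Why was this $9,950 invoice from a brand-new vendor flagged?"
query_vector = embedder.encode(query, convert_to_tensor=True)

# Retrieve the two most relevant notes to ground the model's answer.
hits = util.semantic_search(query_vector, note_vectors, top_k=2)[0]
context = [fraud_notes[hit["corpus_id"]] for hit in hits]
print(context)   # these passages would be prepended to the model's prompt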

Fine-tuning is another way to retrain an existing model with additional data. Tech-savvy companies and accounting firms can use their own specialized data to retrain an existing commercially available AI model, creating a new model that produces more accurate results by (1) adding the special domain knowledge to the model itself, and (2) narrowing the model's responses, making them more concise. Fine-tuning requires a certain degree of AI development skill and a fair amount of capital investment in hardware such as graphics processing units (GPUs).
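
For comparison, the following is a minimal fine-tuning sketch using the open-source Hugging Face transformers and datasets libraries to retrain a small classifier on labeled examples. The model choice and the two toy examples are illustrative only:

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tiny labeled dataset: 1 = suspicious, 0 = routine. Real training would
# use thousands of examples.
data = Dataset.from_dict({
    "text": ["Urgent: wire $9,950 today, CEO request",
             "Monthly office lease payment, net 30"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fraud-classifier", num_train_epochs=3),
    train_dataset=data,
)
trainer.train()   # updates the model's own weights, unlike RAG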

IN BRIEF

* Artificial intelligence (AI) can help scammers automate old fraud schemes, increasing the speed, efficiency, and persistence of the attacks. AI-generated scams, including deepfakes, can be used to gain access to company networks, obtain company data, or impersonate company executives.

* Scammers can use AI to generate convincing false or misleading documents and data, making deepfakes more sophisticated and less detectable. Also, AI-driven fraud can be carried out faster and more consistently than traditional scams.

* Accountants can use AI as a fraud-detection tool. The technology is quick to recognize patterns, identify internal control gaps, and eliminate malicious code. Also, AI can be set up to automate fraud detection.

* Establishing safeguards that address AI risks is the most important countermeasure against misuse. That includes limiting or controlling internal and publicly available data, establishing company- and firm-specific standards of use and development of AI, and verifying information generated by AI.

LEARNING RESOURCES

Critical Thinking in Data Analytics

Improve your critical thinking in data analytics skills to diagnose problems with ease, make better data-driven decisions, and identify effective solutions.

CPE SELF-STUDY

Ethics in the World of AI: An Accountant's Guide to Managing the Risks

This webcast discusses the current uses of AI in business, reviews nine risk areas, and provides practical suggestions to address these risks effectively.

May 6, 9 a.m. ET

WEBCAST

Managing Risk Analytics

Learn valuable analytical tools for managing risks and making informed decisions.

CPE SELF-STUDY

For more information or to make a purchase, go to aicpa-cima.com/cpe-learning or call 888-777-7077.

By Ray Sang, CPA/CITP and Clay Kniepmann, CPA/CFF/ABV, J.D.

Ray Sang, CPA/CITP, CISA, has held technology and finance management roles at several companies, including Expedia Group and Google Cloud; he also is the founder of Chipmunk Robotics, a finance automation company in Seattle, and a director at Elastic N.V. Clay Kniepmann, CPA/CFF/ABV, CFE, J.D., is a principal in forensic, valuation, and litigation services at Anders CPAs + Advisors, a firm based in St. Louis.
