The risks and rewards of artificial intelligence (AI) and other advanced technologies were on vivid display in the Wall Street Journal’s April 22 print edition. It’s not too late for third party risk management (TPRM) teams to track down the issue, given the growing role that their data governance activities will play in mitigating the challenges posed by AI and other emerging technologies (and in optimizing their benefits).
The Journal’s Business & Finance section opened with news of UiPath’s sensational initial public offering (IPO). The robotic process automation (RPA) solutions provider reported year-over-year revenue growth of 81 percent, and its shares closed 22 percent above the IPO price of $56. (The stock has continued its ascent since then, crossing the $80 mark earlier this week.)
Other articles focused on a couple of snowballing AI risks: more regulation and less talent. The European Union (EU) unveiled a comprehensive proposal for new AI rules that would apply to a wide range of companies – those based in Europe as well as those based elsewhere – that produce AI systems or use their outputs. And demand for AI experts is soaring almost as quickly as the valuations of newly public AI companies: research from the IT trade group CompTIA shows that AI job postings increased by a staggering 45 percent from the fourth quarter of 2020 to the first quarter of 2021.
RPA, machine learning, AI and other advanced technologies also pose growing TPRM challenges. That is the first and most important of the following points that outsourcers and vendors should keep in mind as they work together to harness these advancements in the most risk-intelligent manner possible:
- AI and other advanced technologies are a growing TPRM risk – and an opportunity: The Anti-Fraud Collaboration – a joint effort among the Center for Audit Quality, Financial Executives International, the Institute of Internal Auditors and the National Association of Corporate Directors – this month published a paper titled Fraud and Emerging Tech: Artificial Intelligence and Machine Learning. As you would expect, the report laid out several AI risks, including the lack of human agency in AI-supported processes, biases and other ethical risks, cybersecurity risks and regulatory compliance risks. Yet the paper allotted far more space to the value that AI and machine learning can add to fraud prevention, fraud detection and third party risk management: “Due diligence data, such as background checks, credit reports, questionnaires and annual assessments, can be aggregated and reviewed to perform trending analysis and calculate risk scores,” the AFC report notes. “Third parties can be compared to one another to identify patterns, relationships, and anomalies.” The same AI-driven pattern recognition can help nip instances of vendor fraud in the bud, before they bloom into more costly incidents.
- AI adoption is quickly increasing: As part of its ongoing AI research, PwC annually surveys 1,000 or so executives. More than 70% of this year’s respondents possess C-level titles, and they are bullish about their organization’s expanding use of AI: 86% of those respondents expect AI to become a mainstream technology this year. The survey respondents also identify risk management, fraud prevention, and cybersecurity as their organization’s top-ranked AI applications for 2021.
- AI outputs – within the organization and among third parties – warrant continuous monitoring: The “mainstreaming” of AI and machine learning as business technologies means that outsourcers and vendors will be processing far more data – data that is frequently sourced and shared from other forms of advanced technology, including Internet of Things sensors and 5G cellular networks. Fortunately, AI can be used to strengthen monitoring approaches by making them more predictive. That said, many organizations still need to improve their continuous monitoring capabilities. “The convergence of emerging technologies makes it extremely difficult for companies to keep pace with the amount of data that needs to be assessed and managed from a controls perspective,” notes Shared Assessments Senior Advisor Miller. “The convergence is driving the need for continuous monitoring approaches and solutions — not only to identify where risks exist, but to mitigate risks as they are identified.”
- Managing advanced technology risks requires sound data governance: The rapid adoption of AI and other advanced technologies, the growth of regulatory compliance requirements related to data privacy and security, and the widening technology skills gap represent a problematic combination for all risk managers, including those responsible for TPRM. This challenge was a frequent discussion topic at the most recent Shared Assessments Program Summit – especially in the Privacy Breakout Session, where it was made clear that numerous data privacy and security developments continue to heighten the need to build and sustain a robust data governance capability.
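The AFC’s point about comparing third parties to one another to surface anomalies can be illustrated with a minimal sketch. The vendor names, scores, and threshold below are hypothetical, and real platforms use far richer models; this simply shows the peer-comparison idea applied to aggregated due diligence risk scores.

```python
from statistics import mean, stdev

# Hypothetical risk scores aggregated per vendor from due diligence data
# (e.g., background checks, questionnaires, annual assessments).
vendor_scores = {
    "Vendor A": 42, "Vendor B": 45, "Vendor C": 41,
    "Vendor D": 44, "Vendor E": 88,
}

def flag_anomalies(scores, z_threshold=1.5):
    """Flag vendors whose score deviates sharply from the peer group."""
    mu = mean(scores.values())
    sigma = stdev(scores.values())
    return [name for name, s in scores.items()
            if abs(s - mu) / sigma > z_threshold]

print(flag_anomalies(vendor_scores))  # → ['Vendor E']
```

Vendor E’s score sits well above the peer group, so it would be routed for closer review before a small issue blooms into a costly incident.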
The Shared Assessments Program’s core privacy tool for managing privacy risk in third party relationships, the Target Data Tracker (TDT), can help on that count. The TDT streamlines and simplifies the information-gathering process used in privacy reviews and assessments, providing a due diligence tool to identify and track the classifications and categories of data used by a third party, including storage locations and disclosures to fourth parties. The tool can also help accelerate the maturation of third party governance by providing additional context from both a risk management and regulatory compliance perspective.
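The TDT itself is a proprietary assessment tool, but the kind of record it maintains – data categories, storage locations, and fourth-party disclosures per vendor – can be sketched in a few lines. All field names, vendors, and regions below are hypothetical, not the tool’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThirdPartyDataRecord:
    """Hypothetical due diligence record in the spirit of a data tracker."""
    vendor: str
    data_categories: list              # e.g., ["PII", "payment card data"]
    storage_locations: list            # e.g., ["US-East", "EU-Frankfurt"]
    fourth_party_disclosures: list = field(default_factory=list)

    def crosses_borders(self, home_region: str) -> bool:
        """True if any data is stored outside the outsourcer's home region."""
        return any(not loc.startswith(home_region)
                   for loc in self.storage_locations)

record = ThirdPartyDataRecord(
    vendor="Example Payroll Co.",
    data_categories=["PII", "compensation data"],
    storage_locations=["US-East", "EU-Frankfurt"],
    fourth_party_disclosures=["Cloud Hosting Provider X"],
)
print(record.crosses_borders("US"))  # → True
```

Keeping even this much structure per vendor makes questions like “which third parties hold PII outside our home region, and who do they disclose it to?” answerable – the context a regulator or risk committee will ask for.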
As outsourcers and vendors deploy more advanced technologies to process far more data, both groups have a shared interest in governing that data as effectively and efficiently as possible.
Looking for a crash course in the technology changes the risk management field should tune into? Check out The Role of ERM in Managing Risks Related to New Technologies.