Mitigating and managing the main risks when using AI technology.

There are multiple risks that businesses need to consider when rolling out AI technology, but if these risks are properly mitigated and managed, the benefits of these new and evolving solutions can be reaped safely.

Susheel Sethumadhavan, partner and Middle East and Africa lead for Kearney Analytics, says that the main risks to businesses when using AI include data quality, data privacy, monitoring model outputs and ethical considerations.

“Poor quality or inaccurate data can lead to unreliable or misleading AI outputs, so regular data audits and cleaning processes are required to ensure optimal outputs from generative AI-driven use cases,” Sethumadhavan says. “Generative AI models are trained on large datasets, which may contain sensitive information, so it is critical to mitigate data privacy risks by ensuring data anonymisation techniques and effective compliance mechanisms.”

“Additionally, organisations need to comprehensively assess their infrastructure, usage of open-source LLMs versus customised LLMs to ensure data protection,” he adds.
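One common anonymisation technique is masking obvious personal identifiers before text ever reaches a model. The sketch below is purely illustrative (the patterns and placeholder labels are assumptions, not any vendor's implementation) and shows the basic idea with regular expressions:

```python
import re

# Minimal sketch (not any vendor's implementation): mask obvious PII
# such as email addresses and phone numbers before text reaches a model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace matched PII spans with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymise("Contact jane.doe@example.com or +44 20 7946 0958."))
# → Contact [EMAIL] or [PHONE].
```

Production systems typically go further, using named-entity recognition and tokenisation rather than simple patterns, but the principle is the same: sensitive values are removed or replaced before training or prompting.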

In terms of monitoring model outputs, Sethumadhavan advises introducing “detailed processes and routines for monitoring generative AI’s model behaviour” to ensure efficient and effective operations and to identify and address potential bias or ethical concerns.

“For example, a generative model might have been trained on a biased dataset, which, in turn, generated more biased synthetic training data that further funnelled into a model producing newsletters. As a result, a snowball effect derived from an initial set of biased data might have caused newsletters to be written and published that are not representative of their intended use,” he explains.
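One cheap routine for catching this kind of snowball effect is to compare the mix of output categories in a current batch against a trusted reference batch and flag large shifts. The sketch below is illustrative only; the categories and the alert threshold are assumptions:

```python
from collections import Counter

# Illustrative sketch: flag when the mix of output categories drifts
# from a reference batch, a cheap proxy for compounding bias.
def drift_score(reference: list[str], current: list[str]) -> float:
    """Largest absolute change in any category's share between batches."""
    ref, cur = Counter(reference), Counter(current)
    cats = set(ref) | set(cur)
    return max(abs(ref[c] / len(reference) - cur[c] / len(current))
               for c in cats)

reference = ["neutral"] * 8 + ["positive"] * 2
current = ["neutral"] * 4 + ["positive"] * 6
if drift_score(reference, current) > 0.2:  # assumed alert threshold
    print("review outputs: category mix has drifted")
```

A real monitoring pipeline would use richer signals (toxicity scores, fairness metrics across demographic groups), but the pattern of comparing live outputs against a vetted baseline is the same.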

Rich Wilson, CEO and co-founder of Gigged.Ai, says the company has an internal data analyst and software developer who both own generative AI as a solution: “They are the internal advocates, and then we manage risk at board level through a risk register.”

Rebecca Wettemann from independent industry analyst firm Valoir says that when a company deploys generative AI-enabled applications, it is important to “choose a trusted partner that can clearly articulate how your data will be protected and separated from public data used for training”, and to “bring the compliance and risk office into any AI strategy discussion.”

In terms of mitigating data privacy risks, Wettemann says companies should put “appropriate policies and training in place to ensure employees aren’t unknowingly exposing proprietary data to public generative AI models [and] work with trusted vendors that can ensure the appropriate separation is in place between private and public data.”

To prevent data hallucination and toxic outcomes, Wettemann offers the following guidance: “For at least the short term, having a human in the loop is a critical piece of mitigating the potential risk. Moving forward, constitutional AI models that limit responses to certain parameters and guidelines will also help mitigate risk.”

But ultimately, the big question for businesses is whether the biggest risk of all is to not use AI solutions and fall behind their competitors. Serial entrepreneur James Caan CBE says that “fast movers will win big and those that move slowly will lose out.”

“Front-runners, defined as companies that fully absorb AI tools into their organisations over the next five to seven years, could increase economic value by about 120 percent by 2030, implying additional growth in cash generation of about 6 percent a year for the next 12 years,” says Caan. 

“Laggards, who adopt AI late or not at all, could lose about 20 percent of cash flow compared with today. A McKinsey survey found that late and non-adopters of AI reduce their employment and investment more than other businesses.”

Courtesy Georgia Lewis

Image credit: Fernando Arcos/Pexels
