
AI in Business: Maximizing Gains and Minimizing Risks

More than half of CEOs recently surveyed by Fortune and Deloitte said that they have already implemented generative artificial intelligence in their business to increase efficiency. And many are now looking to generative AI to help them find new insights, reduce operational costs, and accelerate innovation.

AI offers plenty of relatively quick wins when it comes to efficiency and automation. However, as you seek to embed AI more deeply within your operations, it becomes even more important to understand the downside risk, in part because security has always been an afterthought.

Security as an afterthought

In the early days of technology innovation, as business moved from standalone personal computers to sharing files to enterprise networks and the internet, threat actors moved from viruses to worms to spyware and rootkits to take advantage of new attack vectors. The industrialization of hacking accelerated the trajectory by making it possible to exploit information technology infrastructure and connectivity using automation and evasion techniques. Further, it launched a criminal economy that flourishes today.

In each of these phases, security technologies and best practices emerged to address new types of threats. Organizations added new layers of defense, often only after some inevitable and painful fallout.

More recently, Internet of Things (IoT) devices and operational technology (OT) environments have expanded the attack surface as they connect to IT systems, to the cloud, and even to mobile phones. Water systems, medical devices, smart light bulbs, and connected cars, for example, are all under attack. What's more, the "computing as you are" movement, which is now the norm, has further fueled this hyperconnectivity trend.

Organizations are still trying to understand their exposure to risk and how to build resilience as pathways for attackers continue to multiply and create opportunities for compromise.

Risk versus reward

The use of AI adds another layer of complexity to defending your enterprise. Threat actors are using AI capabilities to manipulate users into circumventing security configurations and best practices, and the result is fraud, credential abuse, and data breaches.

On the flip side, AI adoption within the enterprise brings its own inherent, and potentially significant, risks. Users can unintentionally leak sensitive information as they use AI tools to get their jobs done, for instance by uploading proprietary code to an AI-enabled tool to help identify bugs and fixes, or by pasting confidential company information into one for help summarizing meeting notes.

The root of the problem is that AI is a "black box": there is little visibility into how it works, how it was trained, what it will produce, and why. The black box problem is so challenging that even the people building tools on top of AI may not fully understand everything it is doing, why it behaves the way it does, or what the tradeoffs are.

Business leaders are in a tough position of trying to decide what role AI should play in their business and how to balance the risk with the reward. Here are three best practices that can help.

1. Be careful what data you expose to an AI-enabled tool.

Uploading your quarterly financial spreadsheet and asking questions to do some analysis might sound innocuous. But think about the implications if that information were to get into the wrong hands. Don't give anything to an AI tool that you don't want an unauthorized user accessing.
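For teams wiring AI tools into their own workflows, that caution can be enforced in code rather than left entirely to individual judgment. Below is a minimal Python sketch under that assumption; the patterns, prompt, and scrub function are hypothetical illustrations, not any particular vendor's API or a complete data-loss-prevention solution.

    import re

    # Hypothetical illustration: scrub obviously sensitive values before a prompt
    # ever leaves your environment for an external AI-enabled tool. A real
    # deployment would rely on a data-loss-prevention (DLP) layer, not a handful
    # of regular expressions.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # U.S. Social Security numbers
        re.compile(r"\b\d{13,16}\b"),              # likely payment card numbers
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
    ]

    def scrub(text: str) -> str:
        """Replace anything that matches a sensitive pattern with a placeholder."""
        for pattern in SENSITIVE_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    prompt = "Summarize this note and email jane.doe@example.com the result."
    cleaned = scrub(prompt)
    if cleaned != prompt:
        print("Prompt contained sensitive data; review it before sending:")
    print(cleaned)

The design choice here is deny-by-review: anything that trips a pattern is flagged before it ever reaches the external tool, which is far cheaper than trying to claw data back afterward.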

2. Validate the tool's output.

AI hallucinates, meaning it confidently produces inaccurate responses. There have been numerous media reports and academic articles on the subject. I can point to dozens of examples personally as I've experimented with AI tools. When you ask an AI tool a question, it behooves you to have a notion of what the answer should be. If it's not at all what you expected, ask the question another way and, as an extra precaution, go to another source for validation.
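One way to build that habit into a workflow is to recompute anything checkable before trusting the tool's answer. The sketch below is a minimal, hypothetical Python example: the figures and the validate_total helper are invented for illustration, not drawn from any specific AI product.

    # Hypothetical illustration: never take a numeric claim from an AI tool on
    # faith. The expected value is recomputed locally from the same data the
    # tool was given, and the two must agree within a small tolerance before
    # the answer is used anywhere downstream.
    def validate_total(ai_reported_total: float, line_items: list[float],
                       tolerance: float = 0.01) -> bool:
        expected = sum(line_items)
        return abs(ai_reported_total - expected) <= tolerance

    line_items = [1200.00, 349.99, 87.50]
    ai_answer = 1637.49  # the figure the tool reported; treat it as unverified
    if validate_total(ai_answer, line_items):
        print("AI total matches the source data.")
    else:
        print("AI total disagrees with the source data -- do not use it as-is.")

Not every output can be checked this mechanically, but the principle carries over: treat the tool's answer as a claim to verify, not a fact to act on.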

3. Be mindful of which systems your AI-enabled tool can hook up to.

The corollary of the first point is that if you have AI-enabled tools operating within your environment, you need to be aware of what other systems you're hooking those tools up to and, in turn, what those systems have access to. Since AI is a black box, you may not know what is going on behind the scenes, including what the tool is connecting to as it performs its functions.
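If your AI-enabled tools can call other systems on their own, an explicit allowlist is one way to keep the black box from wandering. The following is a minimal Python sketch under that assumption; the system names and the dispatch helper are hypothetical, not a real product's configuration.

    # Hypothetical illustration: an AI-enabled tool may only reach systems you
    # have explicitly approved. Everything else is denied by default, so a
    # black-box tool has no quiet path into HR, finance, or source-code systems.
    ALLOWED_CONNECTIONS = {
        "calendar",      # read-only scheduling data
        "public_docs",   # documentation with nothing confidential in it
    }

    def dispatch(request: str, target_system: str) -> str:
        """Forward a tool request only if its target is on the allowlist."""
        if target_system not in ALLOWED_CONNECTIONS:
            return f"Denied: '{target_system}' is not an approved connection."
        return f"Forwarding to {target_system}: {request}"

    print(dispatch("summarize next week's meetings", "calendar"))
    print(dispatch("pull last quarter's payroll", "hr_database"))

The point of the sketch is the default: connections are closed until someone consciously opens them, rather than open until someone notices a problem.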

There's a lot of optimism and excitement about the potential upside for enterprises that embrace AI. Fortunately, the past has already shown that security is integral to reaping the benefits of new technologies and processes brought into the enterprise. In the rush to capitalize on AI, get ahead of the security risks by committing to understand the tradeoffs and make informed decisions.

 

This article was written by Martin Roesch from Inc. and was legally licensed through the DiveMarketplace by Industry Dive. Please direct all licensing questions to legal@industrydive.com.
