New frontier: Is artificial intelligence actually making cyberrisk worse?

Many business leaders and professionals know they face cyberrisk. But they might not be aware that artificial intelligence tools have added to their company’s vulnerability.

Artificial intelligence makes cyberattacks and fraud easier for crooks. They can “more easily infiltrate company networks, spoof emails, hold business data for ransom and deceive employees into making high-dollar payments,” stated Hayden Kopser, co-founder and president of insurance brokerage firm North Improvement LLC, based in Westfield.

New Jersey already is fertile ground for cybercriminals, and that opportunity is getting bigger, cyberrisk insiders said.

“Artificial intelligence increases the ‘attack surface,’” said Frank Costa, co-founder and chief growth officer of insurance brokerage firm World Insurance Associates, which has more than 20 offices in New Jersey. “More entry points become available for cybercriminals to exploit, including AI algorithms, data pipelines and communication channels.”

Said Costa: “Attackers can leverage AI to automate attacks, evade detection and identify vulnerabilities more efficiently.”

“AI is exciting and should not be associated with talk of societal doom. However, businesses must be aware of the threats it poses and consider those as a counterweight while exploring the possibilities it opens,” contended Kopser. “Protecting a company from AI and other more traditional online risks is a multistep process and one with no foolproof methods.”

Scott Schober, CEO and president of Berkeley Varitronics Systems, a Metuchen-based maker of wireless security, safety, test and cybersecurity products, explained that chatbots, because of their design, may prove to be big cybertargets.

“All AI is trained by large data sets containing a treasure trove of personal information from users and customers. This data is collected, analyzed and stored by large tech companies, who have been known to cut corners in order to produce larger profits,” said Schober.

“This creates inherent risks and higher chances of data leaks and breaches when this data is not handled carefully or protected properly. When companies train AI or design products and services based upon queries to AI, these companies are creating entirely new data sets that are also valuable targets to hackers and criminals of all types. State-sponsored hacks, corporate espionage and ransomware attacks are just a few outcomes from these rich sets of data.”

But it’s not just hacking and other cyber-misdeeds that present risk. Kopser of North Improvement said there is “potential for regulatory costs related to AI that well-intentioned businesses are using to improve their operations/marketing/sales.”

For example, he said, a machine-learning-based email program used to automate customer outreach might run afoul of regulations if it isn’t set up to comply with electronic contact laws.

“Even excellent AI should be used in tandem with both human and technological oversight,” he advised.
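
One concrete form that technological oversight can take is a hard compliance gate between the AI and the outbox. The following is a minimal illustrative sketch in Python, not drawn from any firm quoted here; the opt-out list and addresses are hypothetical.

    # Hypothetical opt-out list; in practice this would come from a
    # suppression database required by electronic contact laws.
    DO_NOT_CONTACT = {"optedout@example.com"}

    def compliance_gate(proposed):
        """Drop any AI-suggested recipient who has opted out,
        regardless of what the model recommends."""
        return [addr for addr in proposed if addr.lower() not in DO_NOT_CONTACT]

    ai_suggestions = ["lead1@example.com", "OptedOut@example.com"]  # hypothetical
    print(compliance_gate(ai_suggestions))  # prints ['lead1@example.com']

The point of the design is that the check sits outside the model: no matter what the AI proposes, the opted-out address never receives an email.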

AI can open up risk gaps for businesses and nonprofit organizations, Costa pointed out. AI systems can inherit biases from their training data, leading to unfair and discriminatory outcomes. Other risks are related to transparency and security vulnerabilities.

Further, intellectual property issues are an open question for anyone using chatbots, Costa added: “Defining clear ownership and accountability of AI-generated content or decisions remains an unresolved issue.”

Beyond that, Costa said, “Integrating AI into critical systems like health care, transportation and energy requires ensuring safety, reliability and fail-safe mechanisms to prevent catastrophic failures.”

These risks from AI are an economywide challenge, requiring “a collaborative effort from researchers, policymakers, industry experts and society … essential to advance AI in a responsible and beneficial manner,” said Costa.

Businesses need internal governance systems for managing AI tools and processes, according to David Snyder, vice president and assistant general counsel of the American Property Casualty Insurance Association, an insurance company membership group, who studies AI risk issues. Setting up written procedures for the use of chatbots and other AI tools can reduce risk, he said.

Governance policies for privacy and data, cybersecurity training, monitoring, and incident response and recovery all help provide defenses in the AI era, Schober added.

On the issue of data privacy, Costa noted, companies can help safeguard sensitive information by implementing strong data encryption, access controls and data anonymization techniques. Training workers on AI-related cyberrisks and best practices can “foster a culture of cybersecurity awareness to prevent human errors.”
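
As a rough illustration of the anonymization point, here is a minimal Python sketch in which records are pseudonymized with a keyed hash before they feed an AI pipeline; the key, field names and values are all hypothetical, not taken from any company mentioned here.

    import hashlib
    import hmac

    # Hypothetical key; in practice it belongs in a secrets manager,
    # never hard-coded in source.
    PSEUDONYM_KEY = b"rotate-me-regularly"

    def pseudonymize(value):
        """Replace a direct identifier with a keyed HMAC-SHA256 hash,
        so records stay joinable without exposing the raw value."""
        return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"email": "customer@example.com", "zip": "07090", "spend": 412.50}  # hypothetical
    safe_record = {
        "customer_id": pseudonymize(record["email"]),  # keyed hash replaces the email
        "zip3": record["zip"][:3],                     # coarsened location cuts re-identification risk
        "spend": record["spend"],
    }
    print(safe_record)

If the training set leaks, the thieves get hashes and truncated ZIP codes rather than a mailing list of customers.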

Nodding to the prominence of transportation businesses in New Jersey, Costa stated that companies that use telematics to make hiring, performance and rating decisions must be aware of how AI might be used in these systems. There’s an employment practices risk, he said: “Consistent impartiality in AI systems is essential to avert allegations of discrimination, wrongful termination or inaccurate, long-term strategic decisions.”

Brenda Wells-Dietel, a professor at East Carolina University, pointed to Amazon’s use of AI to screen résumés: the system was allegedly trained on biased input data, and it reportedly came to favor male candidates while discriminating against women.

Additionally, any AI system an organization uses “can be vulnerable to adversarial attacks, where data is manipulated to deceive the AI into making incorrect decisions. This is a danger in applications like image recognition and autonomous vehicles,” Costa said.
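
To see the principle in miniature, consider the toy Python example below, which uses made-up numbers and is illustrative only: a tiny, targeted nudge to the input flips a simple linear classifier’s decision. The same mechanics, scaled up, underlie adversarial attacks on image recognition.

    import numpy as np

    # Toy linear classifier: a positive score means "approve".
    w = np.array([2.0, -1.0, 0.5])  # hypothetical weights
    b = -0.25

    def classify(x):
        return "approve" if float(w @ x) + b > 0 else "reject"

    x = np.array([0.3, 0.4, 0.2])   # legitimate input; its score is +0.05
    eps = 0.05                      # small, hard-to-notice perturbation
    x_adv = x - eps * np.sign(w)    # push each feature against the weights
    print(classify(x), "->", classify(x_adv))  # approve -> reject

A change of five hundredths in each feature is enough to cross the decision boundary, which is why AI-dependent systems need defenses beyond the model itself.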

Conversation Starters

Reach North Improvement LLC at: northimprovement.com or call 212-495-9172.

Reach Berkeley Varitronics Systems at: bvsystems.com or call 732-548-3737.