While other states fall short on AI, N.J. has opportunity to lead

Artificial intelligence development is at a pivotal moment. As adoption accelerates, it is widely accepted that regulation is necessary to ensure AI's safe and fair use. However, several current and proposed regulations threaten innovation and jeopardize the country's global leadership in AI.

Misguided measures, such as the Biden-Harris administration's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," along with California's proposed SB1047 and AB3211, are not the solutions we need. These regulations risk imposing burdensome constraints that could halt AI progress. At the same time, they open the door for New Jersey to become a leader in sensible AI policy, offering a refuge for developers and companies alike.

In examining these attempts to regulate AI, there are critical flaws to consider:

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence regulates processing power

Regulating the speed at which computer chips can perform calculations has been tried before, and it failed miserably. In 2000, well-intentioned export rules meant to keep powerful processors out of the wrong hands classified Sony's new PlayStation 2 as a "device powerful enough to guide missiles." Because the legislation failed to anticipate how quickly processing power would evolve, a gaming console was inadvertently treated as a potential tool for terrorism, causing headaches for consumers and major financial strain for Sony.

The Biden-Harris administration seems to have forgotten this lesson with its executive order, which imposes a similarly shortsighted restriction on AI models that will cripple innovation as processing speeds continue to advance at rapid rates. This time the implications reach far beyond a gaming device: as processors continue to evolve, the policy could grind the U.S. AI industry to a halt.

SB1047 and unprecedented liability

If a drunk driver kills someone while driving a Jeep, should Jeep be held responsible? SB1047 proposes an analogous standard for those building AI: model developers would be held liable for unlawful uses of their AI models. For larger companies, this regulatory overreach would immediately restrict AI use and halt open-source initiatives. For individuals and smaller companies, it would make AI development nearly impossible.

Time and time again, industries advance rapidly because of the contributions of the open-source community. Open-source projects become foundational infrastructure: Android, Unix (on which Apple's computers are built), and the software in many electric cars are built on open source; an estimated 43% of all websites and 97% of all apps rely on open-source technology. The list is endless. Exposing open-source development to extensive litigation will eliminate the innovation these communities bring and will rapidly erode the U.S. lead in AI.

AB3211 & watermarking

AB3211 is intended to prevent the creation of disgusting content like the nonconsensual explicit deepfakes made of Taylor Swift and others. However, the bill is ineffective at combating the underlying issue, and it would make AI development incredibly burdensome and potentially financially untenable.

Fortunately, the U.S. Senate recently passed the Defiance Act to combat this issue, and AB3211 has since been ordered to the inactive file. Still, the proposed legislation sets a dangerous precedent: it would require that all AI-generated content that could be confused with human-created content be watermarked.

The storage and management costs this would impose on an individual or small company developing AI applications would be enormous. And as a practical matter, it is impossible for developers of open-source AI models to control how users run and apply those models on their own machines.

The impact

BetterFutureLabs, a spinoff of TechUnited:NJ, is a startup studio building companies that leverage cutting-edge agentic AI. We do so by engaging the incredible talent and assets within the TechUnited:NJ community. If restrictive laws like those under consideration in California were passed in New Jersey, we would have to shut down or relocate immediately. We are just one example of thousands of companies that would be strangled by these policies.

Why we are focused on this, and what New Jersey can do

AI development needs a safe haven. Businesses need a place to build.

Many businesses incorporate in states such as Delaware, Nevada or Wyoming — Delaware notably took the lead from New Jersey in the early 1900s due to shifts in our laws and regulations. These states are favored for their pro-business legal frameworks, including liability protections and favorable tax policies that appeal to various industries.

New Jersey could plant a similar stake in the ground as the venue of choice for AI model licensing and innovation. We could implement common-sense laws that regulate nefarious applications of AI without stifling innovation or ending the open-source community. That would allow New Jersey to combat explicit AI-generated content and other terrible uses of AI without burdening the tech community with policies that don't make sense.

On top of this, New Jersey is primed to meet the immense energy needs of AI, which are key to the industry's future. With the right policies and incentives, New Jersey could become a beacon of energy for AI, specifically through the co-location of reactors and data centers, an issue at the heart of all things AI at the moment.

This is New Jersey’s opportunity to embrace innovation and ensure that we become a safe haven where entrepreneurs and innovators can build AI for good, supporting a better future for all.

For a more technical and in-depth analysis of AI policy, learn more here.

Justin Trugman is co-founder and head of technology, BetterFutureLabs; Aaron Price is co-founder, BetterFutureLabs, and CEO of TechUnited:NJ; Mark Yackanich is partner in BetterFutureLabs.