
4 Reasons Why Generative AI Won't Replace Humans Anytime Soon

Generative AI is a revolutionary tool, but it won't be replacing humans anytime soon. Here's why.

By Frederick Pinto Edited by Chelsea Brown

Key Takeaways

  • Four key reasons why generative AI won't replace humans anytime soon

Opinions expressed by Entrepreneur contributors are their own.

Since generative AI (or "GenAI") burst onto the scene earlier this year, the future of human productivity has gotten murkier. Every day brings with it growing expectations that tools like ChatGPT, Midjourney, Bard and others will soon replace human output.

As with most disruptive technologies, our reactions to it have spanned the extremes of hope and fear. On the hope side, GenAI's been touted as a "revolutionary creative tool" that venture maven Marc Andreessen thinks will one day "save the world." On the fear side, others have warned it'll bring "the end" of originality, democracy or even civilization itself.

But it's not just about what GenAI can do. In reality, it operates in a larger context of laws, financial factors and cultural realities.

And already, this bigger picture presents us with at least four good reasons that AI won't replace humans anytime soon.

Related: The Top Fears and Dangers of Generative AI — and What to Do About Them

1. GenAI output may not be proprietary

The US Copyright Office recently decided that works produced solely by GenAI are not protected by copyright.

When the work product is a hybrid, only the parts added by the human are protected.

Entering multiple prompts isn't enough: A work produced by Midjourney was refused registration even though a person entered 624 prompts to create it. The office's position was later confirmed in DC District Court.

There are similar difficulties in patenting inventions created by AI.

Markets are legally bounded games. They require investment risk, controlled distribution and the allocation of marketing budgets. Without enforceable rights in the output, the incentive to make those investments collapses.

And while some countries may recognize limited rights in GenAI's output, human contributions are still required to guarantee strong rights globally.

2. GenAI's reliability remains spotty

In a world already saturated with information, reliability is more important than ever. And GenAI's reliability has, to date, been very inconsistent.

For example, an appellate lawyer recently made the news for using ChatGPT to research cases for a court filing. It turned out that the cases it cited were invented, which led to sanctions against the lawyer. This bizarre flaw has already had legal ramifications: A federal judge in Texas now requires lawyers to certify that they didn't use unchecked AI in their filings, and elsewhere, uses of AI must be disclosed.

Reliability issues have also appeared in the STEM fields. Researchers at Stanford and Berkeley found that GPT-4's ability to generate code had inexplicably gotten worse over time. Another study found that its accuracy in identifying prime numbers fell from 97.5% in March to a shockingly low 2.4% just three months later.

Whether these are temporary kinks or a lasting pattern of drift, should human beings facing real stakes trust AI blindly without having human experts vet its results? Currently, it would be imprudent, if not reckless, to do so. Moreover, regulators and insurers are starting to require human vetting of AI outputs, regardless of what individuals may be willing to tolerate.

In this day and age, the mere ability to generate information that "appears" legitimate isn't that valuable. The value of information is increasingly about its reliability. And human vetting is still necessary to ensure this.

3. LLMs are data myopic

There may be an even deeper factor that limits the quality of the insights generated by large language models, or LLMs, more generally: They aren't trained on some of the richest and highest-quality databases we generate as a species.

These include databases created by public corporations, private businesses, governments, hospitals and professional firms, as well as troves of personal information, none of which LLMs are allowed to use.

And while we focus on the digital world, we can forget that massive amounts of information are never transcribed or digitized at all, such as the conversations we only ever have orally.

These missing pieces in the information puzzle inevitably lead to knowledge gaps that cannot be easily filled.

And if the recent copyright lawsuits filed by actress Sarah Silverman and others are successful, LLMs may soon lose access to copyrighted content as training data. Their scope of available information may actually shrink before it expands.

Of course, the databases LLMs do use will keep growing, and AI reasoning will get much better. But these forbidden databases will also grow in parallel, turning this "information myopia" problem into a permanent feature rather than a bug.

Related: Here's What AI Will Never Be Able to Do

4. AI doesn't decide what's valuable

GenAI's ultimate limitation may also be its most obvious: It simply will never be human.

While we focus on the supply side — what generative AI can and can't do — who actually decides on the ultimate value of the outputs?

The answer isn't a computer program objectively assessing the complexity of a work; it's capricious, emotional and biased human beings. The demand side, with its many quirks and nuances, remains "all too human."

We may never relate to AI art the way we do to human art, with the artist's lived experience and interpretations as a backdrop. Cultural and political shifts may never be fully captured by algorithms. Human interpreters of this broader context may always be needed to convert our felt reality into final inputs and outputs and deploy them in the realm of human activity — which remains the end game, after all.

What does GPT-4 itself think about this?

I generate content based on patterns in the data I was trained on. This means that while I can combine and repurpose existing knowledge in novel ways, I can't genuinely create or introduce something entirely new or unprecedented. Human creators, on the other hand, often produce groundbreaking work that reshapes entire fields or introduces brand new perspectives. Such originality often comes from outside the boundaries of existing knowledge, a leap I can't make. The final use is still determined by humans, giving humans an unfair advantage over the more computationally impressive AI tools.

And because humans remain fully in control of the demand side, our best creators keep a decisive edge: an intuitive understanding of human reality.

The demand side will always constrain the value of what AI produces. The "smarter" GenAI gets (or the "dumber" humans get), the more this problem will actually grow.

Related: In An Era Of Artificial Intelligence, There's Always Room For Human Intelligence

These limitations do not lower the ceiling of GenAI as a revolutionary tool. They simply point to a future where we humans are always centrally involved in all key aspects of cultural and informational production.

The key to unlocking our own potential may be in better understanding exactly where AI can offer its unprecedented benefits and where we can make a uniquely human contribution.

And so, our AI future will be hybrid. As computer scientist Pedro Domingos, author of The Master Algorithm, has written, "Data and intuition are like horse and rider, and you don't try to outrun a horse; you ride it. It's not man versus machine; it's man with machine versus man without."

Frederick Pinto

Entrepreneur Leadership Network® Contributor

Founding Partner, Pinto Legal

Fred Pinto is an IP, technology and venture lawyer whose passion is helping innovative entrepreneurs build sustainable businesses while realizing their mission in the world. He's also the host of the Fred Pinto Podcast.

