Goldman Sachs Says AI Could Replace The Equivalent of 300 Million Jobs — Will Your Job Be One of Them? Here's How to Prepare. The galloping evolution of AI technologies has captured media attention over the past several months. But what are its potential ramifications? Is there a real risk that AI will replace humans in their jobs in the near future? And if so, how can we, as entrepreneurs, prepare?
By Anton Liaskovskyi Edited by Maria Bailey
Opinions expressed by Entrepreneur contributors are their own.
Last year, many of us spent time thinking about the problem of AI bias, carefully depicted in "Coded Bias," the well-known Netflix documentary. Now that yet another surge in generative AI's popularity is here to stay, talk of job replacement is back in the game.
One of the most detailed reports on how AI could automate (or, as many fear, replace people in) skilled jobs comes from Goldman Sachs, and it was widely circulated under a variety of alarmist headlines about 300 million jobs potentially being replaced across the globe. In particular, some of the reported data suggests that 18% of work worldwide is likely to be computerized, and that the effects on more developed economies could be worse than those on emerging ones.
Strangely enough, the recent boom in generative AI has coincided with several consecutive waves of layoffs across the tech industry, which only made the minor panic running through countless online discussions more understandable.
However, the report itself suggests that so-called "exposure to automation" does not in itself imply the elimination of human jobs. More importantly, many non-white-collar professions are not even prone to these negative effects.
Facing the reality behind the hype
So, as Goldman Sachs estimates, up to almost 25% of all work could be handled entirely by AI in the coming years. But what exactly does this mean for a specialist in a law department, a copywriter or a motion designer, for example? To tell the truth, not that much.

A friend of mine who runs a video production studio has been testing AI solutions for image generation for some time, and as it turns out, scraping creative inspiration from machine learning algorithms has been a tiresome journey all along. The default imagery is often somewhat generic (and often gloomy, for that matter), so their design team hasn't been able to apply the newly acquired AI-powered assistance to any significant extent.
Meanwhile, in editorial departments, the recent trend of running ChatGPT queries about news personalities and seeing the not-so-truthful results has also proven the point: truthfulness is generative AI's weakest spot.

And given how many false narratives are out there, and how easily generative AI tools can be persuaded (for example, to write content around non-existent facts if those are supplied in the request), I highly doubt their legal advice is qualified enough to rely on, let alone that even an inexperienced yet hungry paralegal could be swapped for a software equivalent just yet.
Will the future uphold our fears?
While the current state of generative AI is obviously not as advanced as its founders would like to believe, some of the job market predictions for 2024 may also be too pessimistic. Of course, the technology is still likely to have a significant impact on our workforce, one way or another, within the coming decade. So how can we be prepared?
Here are a few focus points that entrepreneurs might keep in mind:

Don't rush into cutbacks
Whatever niche you do business in, generative AI in its current state doesn't have the skills and competencies to replace any of the qualified specialists on your team.
More importantly, even when further AI advancements arrive, you will probably still need your team to manage the new software (i.e. explain precisely what needs to be done, then review the outcome) in order to obtain the best results.
Some of the most vivid examples include code reviews and tweaks, editing scripts created by AI, re-checks of accounting and engineering work, and reviews of physical exams and prescriptions in medicine, but the list is virtually endless.
Check your facts
While we leave the media and celebrities to worry about the possible negative effects of sophisticated deepfakes made possible by the latest generative AI upgrades, using ChatGPT or similar tools to search for information remains a very tricky business.
As the algorithms' training evolves, the risk of being completely misled will definitely decrease, but chances are we still won't be able to trust AI-generated text or images in the foreseeable future.
Even though this aspect will remain of primary importance in editorial newsrooms, law firms and political offices, any calculations provided by advanced machine learning algorithms will also need to undergo re-checks, at least on selected data cohorts.

Peculiarly enough, the amount of time and operational resources inevitably required to run these reviews and checks actually challenges the common belief that extended use of AI leads to higher productivity at lower cost.
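For the more technically inclined, here is a minimal sketch of what such a re-check can look like in practice, written in Python purely for illustration: it pulls the numbers and percentages out of a piece of AI-generated copy so a human can verify each one before anything ships. The sample text and the pattern are my own assumptions, not taken from the Goldman Sachs report or any particular tool.

```python
import re

# A minimal sketch: extract numeric claims from AI-generated copy so a human
# can verify each one before publication. The draft below is invented for
# illustration; swap in your own model output.
draft = (
    "Revenue grew 42% year over year, reaching $3.1 million across 27 markets."
)

# Match percentages, dollar amounts and bare numbers worth a second look.
claims = re.findall(r"\$?\d[\d,.]*\s*(?:%|million|billion)?", draft)

print("Numbers to verify by hand before this copy ships:")
for claim in claims:
    print(" -", claim.strip())
```

Even a rough pass like this makes the hidden cost visible: every flagged figure still needs a person with context to confirm it, which is exactly the overhead that eats into the promised productivity gains.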
Beware of the bias
The first thing we learned at the launch of ChatGPT was that its "knowledge acquisition" dated to 2020-2021, but the more important point is that, in spite of its latest upgrades, generative AI is still old-school, or, better said, biased.
Here are several examples to prove my point.
I ran a simple query asking ChatGPT to "tell me a story of two people," and what I got was a cheesy rom-com about John and Mary. Then I ran a short query in the relevant generative AI software to draw two people on the beach, and I got a picture of two males (even though the scene composition was good, no doubt about that). Presumably, having analyzed my request, the algorithm "decided" that "people" should primarily mean "male people."

What this means for entrepreneurs using generative AI, whether they work in a creative industry or not, is that they need not just a clear understanding of AI bias risks, but also the willingness to triple-check and then update the intermediate software-generated results before incorporating them into any further work product.
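If you want to run this kind of spot-check systematically, here is a rough sketch of the idea in Python. The `ask_model` function is a placeholder for whichever generative AI API you actually use, not a real library call; the prompts are just variations on the examples above.

```python
# A rough sketch of the bias spot-check described above: send the same request
# phrased several ways and compare whether the model's "defaults" always skew
# the same direction. `ask_model` is a stand-in stub, not a real API.
def ask_model(prompt: str) -> str:
    # Replace this stub with a call to your chat or image-generation service.
    return f"(model output for: {prompt!r})"

prompts = [
    "Tell me a story of two people.",
    "Tell me a story of two people, without assuming their genders.",
    "Draw two people on the beach.",
    "Draw two people on the beach, of different ages and backgrounds.",
]

for prompt in prompts:
    print(prompt)
    print("  ->", ask_model(prompt))
    # If the unspecified versions always default to the same names, genders
    # or settings, that is the bias to correct before the output goes into
    # client-facing work.
```

The point of the exercise isn't the code itself but the habit: compare the model's defaults against deliberately neutral phrasings before the results reach a client.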
Prospects for 2023-2024
Long story short, whatever misconceptions we may have about generative AI at this point, they aren't likely to remain relevant in 10 years. However, the most reasonable approach to its use is still moderation. In plain words, exaggerating its benefits will definitely be damaging, but an excessive focus on its possible ramifications can be just as damaging.
To paraphrase Ms. Verschuren from Dow Jones: it's still up to us humans to figure out our future and tweak our machines for better results, however complex they might be.