WFA Musings - Summer 2023

Generative AI is Darwin on Steroids

This is the first of a two-part series on AI. This Musings deals with some of the limits of regulation and with the question of whether it makes sense to press pause. The second will lay out what AI will mean for corporate strategy development.

When we began to conceive of Acropolis Advisors, the most important factor driving us to launch the firm was our ability to instantly tap the brain trust of our partner group to help our CEO clients address their most vexing strategic challenges. Initially, we addressed the China exposure many US and European companies face in light of significantly increased geopolitical tensions, sparked by President Xi’s global ambitions, saber rattling, and persecution of the Uyghur population in the Xinjiang region, and by China’s long-standing practice of stealing the intellectual property behind advanced Western technology. While there is the doomsday scenario of China invading Taiwan, is Xi really interested in destroying his main source of semiconductor chips while his economy is sputtering? He is too wise to make such a blunder.

The China challenge was followed by the near-universal commitment of the automotive OEMs (with Toyota being the exception) to convert their entire fleets of vehicle offerings from internal combustion engines to electric drivetrains by the mid-2030s. This has ramifications throughout the multi-trillion-dollar automotive supplier value chain. Finally, we began to hear rumblings about ESG and how companies should balance conventional financial performance metrics with sustainability metrics. The common thread amongst this trio of challenges and opportunities is that they will all take time to play out. We will talk more about these challenges in future Musings.

The frenzy and hysteria surrounding the explosion of OpenAI and generative artificial intelligence into the public consciousness rival, if not exceed, those surrounding the aforementioned three issues. The range of predictions for this burgeoning technology runs from Armageddon to Utopia. Topping the discussion off is the popular press spewing a morass of useless and inflammatory, albeit entertaining, drivel.

We were not around for the introduction of the steam engine and the beginnings of the Industrial Revolution, but we were here for the introduction of distributed computing, the internet, handheld devices, fiber optics, high-bandwidth wireless, and blockchain. Each of these breakthrough technologies evoked similar reactions of hope and fear. In fact, early advances in artificial intelligence and neural networks, the “secret sauce” behind today’s generative artificial intelligence (GenAI) and large language models (LLMs), generated the same reactions then as we hear now.

We believe the worst course of action is to wait on regulation, slow down, or pause. The reality is that the generative AI train left the station years ago, and no one anywhere in the world can stop its natural advancement. As was the case in the race to the moon, this will be a race amongst nations with different notions of how technology should be harnessed and regulated. Without question, there will be a very compressed time period in which to understand the magnitude of this transformation.

There are a few fundamental issues that we need to understand. The first is that, unlike human learning, AI does not forget. It is a sponge for data and information, but a sponge with perfect retention. Humans learn by seeing, sensing, feeling, and reading, and then by reasoning. Reading increases our knowledge, reasoning increases our intelligence, and sensing and feeling increase our empathy. While we memorize some of what we consume, most of these inputs are abstracted into rules we retain without direct reference to the source. Over time, we lose much of the specific content from which we derived those rules. Notwithstanding studies of the brain, we do not understand how this reasoning process works. Most experts claim this is also true for the most sophisticated forms of AI.

The algorithms employed by AI are complex mostly because there is a pyramid of algorithms built on one another, but also because of the mass of information at their disposal. We have heard recently that OpenAI’s GPT-4 “breakthrough” is more a result of laminating a few of these base models together than of creating a new “supermodel” from scratch. They can be unpacked, but it is not a trivial exercise. Today, the major source of information for usable generative AI like ChatGPT is the internet, which houses good and faulty information with little or no indication of which is which. Therefore, “buyer beware” of what is being spewed out. It could be a reasonable synthesis of available information or completely misleading. However, like human intelligence, AI learns when it is corrected. Unlike humans, it never forgets the correction.
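
To illustrate what “laminating” base models together can look like, here is a minimal sketch in Python of a routing layer that stitches several smaller models behind a single interface. This is a hypothetical illustration of the general “mixture of experts” idea, not a description of GPT-4’s unpublished internals; the expert functions and keyword router are invented for clarity.

    # Hypothetical sketch of a "mixture of experts": several smaller
    # base models sit behind a router rather than one monolithic
    # supermodel. Invented for illustration; not GPT-4's actual design.
    from typing import Callable, Dict

    def math_expert(prompt: str) -> str:
        # Stand-in for a base model tuned for quantitative questions.
        return "[math expert answers: " + prompt + "]"

    def prose_expert(prompt: str) -> str:
        # Stand-in for a base model tuned for general writing.
        return "[prose expert answers: " + prompt + "]"

    EXPERTS: Dict[str, Callable[[str], str]] = {
        "math": math_expert,
        "prose": prose_expert,
    }

    def route(prompt: str) -> str:
        # Real systems learn this routing; a keyword check fakes it here.
        name = "math" if any(ch.isdigit() for ch in prompt) else "prose"
        return EXPERTS[name](prompt)

    print(route("What is 17 * 24?"))      # -> routed to the math expert
    print(route("Summarize this memo."))  # -> routed to the prose expert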

The second issue is that most AIs do not solve for accuracy, but rather for an acceptable answer. The AIs are trained on existing data without formal regard for the accuracy of that data. In fact, accuracy is typically inferred from repetition and sometimes from the reputation of the source material. Unfortunately, AIs are not able to distinguish between truth and popular fallacy. Moreover, AIs do not create in the human sense, but rather stochastically choose incremental improvements based on observation. This is much closer to knowledge than intelligence. Given that the distinction between knowledge and intelligence lies at the heart of copyright and patent law, and thus stands central to everything the AIs can do, it is not clear that AI regulation can be based on existing legal and ethical traditions. This will not stop regulators from trying, but regulation will generally favor the large, entrenched competitors and disadvantage upstarts with superior innovations.
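
To make the “acceptable answer” point concrete, here is a minimal sketch in Python of how a language model chooses its next word: it samples from a probability distribution over plausible continuations, and nothing in the process checks truth. The vocabulary and probabilities below are invented for illustration; no real model is this small.

    import random

    # Invented toy distribution over next words for a prompt such as
    # "The largest object in the night sky is the ...". The weights
    # reflect frequency in (hypothetical) training data, not truth.
    next_word_probs = {
        "moon": 0.45,
        "sun": 0.30,
        "sky": 0.20,
        "cheese": 0.05,   # rare, but never impossible
    }

    def sample_next_word(probs, temperature=1.0):
        # Higher temperature flattens the distribution, so less likely
        # (and often less accurate) words get chosen more frequently.
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs.keys()), weights=weights, k=1)[0]

    print(sample_next_word(next_word_probs))        # usually "moon"
    print(sample_next_word(next_word_probs, 2.0))   # "cheese" more often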

The ethical problem is that it is impossible to optimize for accuracy and fairness at the same time, particularly when there are multiple definitions of fairness. Even if we could inject a sense of accuracy into the AI models, they would have a hard time prioritizing between accurate answers and “fair” ones, especially as interpretations of fairness differ. AIs are being delivered in a global environment and are based on material accumulated over time. The differences between acceptable answers today in red and blue states pale in comparison to those between countries, even those not at war. How “fair” is an AI that gives an answer based on perceived ideology?

If the proposed regulators do not understand the full scope of what AI can do, how can they regulate it? Both the European and the US governments are attempting to do so, but their initial frameworks require features that are not technically feasible or that would violate the very principles and objectives they claim to uphold. For instance, the US Senate’s proposal explicitly calls for differential treatment of China in addition to other “enemies” of America. However, these “enemies” are also users, creators, and customers of AI technology.

Moreover, the AI companies are doing a pretty good job on their own with respect to self-regulation. Capitalism and markets work. In the world of education, there have been significant strides with the introduction of AI-based tools to detect AI-generated essays and homework assignments. As for OpenAI, it has for now pulled ChatGPT’s ability to browse the web unfettered until it can resolve some of the thornier issues around its use. During Sam Altman’s recent European tour, he effectively “offered” to withdraw from Europe, resulting in a refreshed regulatory framework more amenable to OpenAI’s approach. In addition, OpenAI just doubled down on its AI management approach, called “Superalignment” – aligning AI with human interests. The effort is headed by two of its top researchers, and the company has dedicated 20% of its compute power to it. Finally, Google has clarified its position on IP ownership of materials found on the web.

We believe that generative AI and the inexorable march towards artificial general intelligence (AGI) and even artificial super intelligence (ASI) will be best managed by innovative adoption. While some jobs will be eliminated, most will be changed, and many more will be created. Electricity was once billed as a job killer, as was the Industrial Revolution. The internet was billed as hurrying the introduction of the “cashless society.” While many jobs of the eighteenth century are gone, few are missed save poetically. Today, we enjoy better jobs with advanced tools to support us. Most of the existing practical AI will serve as another generation of those better tools: a “co-pilot” for workers that will increase productivity immensely. McKinsey estimates the potential at trillions of dollars in annual value creation. It is important to note that uncertainty remains as to whether AI will “lift all boats” in terms of overall living standards and whether it can improve human intelligence.

We see the same progression with AI as we have witnessed in past technology introductions. The first impact will be automation-driven cost reduction of individual tasks, followed by the automation of segments of the value chain which will be redesigned to increase efficiency. Redundant administrative tasks will be the first targets.

Such professions as accounting, legal services, and medical administration will be under intense pressure. However, as was the case with the internet, there will be new value streams not possible without AI. The internet was supposed to have eliminated libraries and librarians, which has not happened. More significantly, millions of people are now employed in website creation, digital commerce, and enhanced real-time communication in support of their businesses.

The initially proposed six-month slowdown or “pause” is some combination of myopic and irresponsible. No one believes that China, Russia, Iran, or North Korea will abide by some global regulatory framework, much less voluntarily join the “pause.” The bottom line is that six months is forever for software innovators. It is also way too short a period of time to come up with regulations. We cannot even agree on what to do with Section 230, which declares that participants in the internet ecosystem will not be held liable for illegal content posted online by other people. We can assure you that AI is an order of magnitude more complicated to regulate. As for the bad actors, they will spawn an industry within our military and intelligence services, along with a slew of companies, that will hack their spurious operations and try to put them out of business or land them in jail.

Success will be driven by embracing innovation and building value chains based not only on current capabilities but also in anticipation of the next generation of enhancements, which are coming at lightning speed and could open the door to artificial super intelligence. The winners will be the people and companies that lead this revolution, not those that resist it. After all, Darwin never suggested a “pause” in his theory of evolution.

Part II of our series will address what CEOs can do to harness the power of AI in setting the strategy of the firm.

Onwards,

