My Thoughts on AI in August 2024
AI has already changed my world. As of August 2024, I rely on a large language model, typically the latest Llama model, to inform at least one key decision, refine my communication, or enhance my knowledge on a daily basis. It has truly become my “copilot”.
Roughly 18 months after ChatGPT took the world by storm, I wanted to document some thoughts on the evolution of this technology and the public’s perception of it. I’ll share each thought below, section by section.
Humans are Incredibly Adaptable
I remember the first time I chatted with a large language model. I was stunned at how humanlike the responses were and how its knowledge seemed to be endless. This was the first time I witnessed well-formed, creative, compelling text coming from a machine.
And this shocked the world too. There is a reason why ChatGPT was one of the fastest growing products in history. But now, only ~18 months later, there is much chatter about the “overhyping” of LLM technology and the broader advancement of AI.
It is amazing to me that only 18 months after its debut, the technology has become so normalized, both in my own behavior and in the behavior I observe in others. Back when ChatGPT was released, I remember staying up late feeding it a range of prompts, testing its reasoning and creative capabilities, amazed and almost in disbelief at how the technology seemed like magic. Now, with an understanding of how it works, it is almost second nature to me. I access and use an LLM interface in the same way I access and use my email, Teams, or web browser.
And this is true for many of my peers and friends. For many, there is an initial phase of amazement and curiosity, usually spent asking ChatGPT to write a poem or a song; but after a few days of this, most put it down and dismiss it as a toy, unsure of how it could help them in their everyday lives. It is incredible to me how quickly we are able to accept that this revolutionary technology exists and has a (likely permanent) role in our lives.
I think this is very telling for how the world may react as the macroeconomic effects of AI ripple through the world economy, in both expected and unexpected ways.
“Copilot” is Genius Branding — It’s Spot On!
As I alluded to in the opening, I use LLM technology for a lot of things. Realistically, I don’t think a day goes by anymore that I am not interfacing with an LLM. My use spans everything from cooking assistance and coding help to advice on handling challenging situations, preparing for meetings, and much more.
Llama 3.1, GPT-4o, and other frontier LLMs have truly become my “copilot”. I remember playing Halo 3 in my childhood; the relationship I have with Llama 3.1, for example, reminds me a bit of the relationship the Master Chief has with Cortana, his AI companion.
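For the curious, one simple way to talk to a local Llama model is through Ollama’s HTTP API. The sketch below is purely illustrative (the model tag and prompt are assumptions about one possible setup, not a description of my exact workflow):

```python
# A minimal sketch of asking a locally running Llama model a question via
# Ollama's HTTP API. Assumes Ollama is installed, running on its default
# port (11434), and that the "llama3.1" model has already been pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",  # local model tag (assumed)
        "prompt": "Suggest a weeknight dinner I can cook in 30 minutes.",
        "stream": False,      # return one complete response instead of a stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the model's generated text
```

Whether it’s a recipe idea, a code review, or meeting prep, the pattern is the same: a prompt goes in, and a useful draft or suggestion comes back.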
In the future, I can’t wait until we find further ways to integrate this technology into our daily lives. The Humane Pin and Rabbit R1 try to accomplish this, but I don’t think their vision of it is truly the future. In the immediate near term (3–5 years), I’m not sure what this will look like. But long term, I could easily see a future where it is commonplace for humans to install a small implant in the skull/brain to allow a machine-to-human interface. All sensory perception will be fed to a model (models like GPT-4o are already multi-modal, allegedly accepting text, audio, and video inputs). The model will record “memories” in its own format and retrieve them as needed, and it will assist you in your daily decisions and interactions, with output immediately made available to you through additional sensory perception or neural feedback.
Neuralink has already made incredible strides in this direction, successfully implanting one of its “N1” devices in the brain of Noland Arbaugh, its first patient. Noland quickly learned to use the technology to control a cursor on a screen purely through thought.
Of course, we’ll need to pack a high degree of computational power into these very small devices. And we are already making massive strides in that direction, which I’ll write about in the next section…
LLM Inference is Becoming Commoditized
One of the earliest forms of digital storage was magnetic tape. The first commercial computer, the UNIVAC I, used a magnetic tape system that could store about 1.2 megabytes of data. Released in 1951, it took up an entire room and was incredibly expensive, at roughly $1,500,000.
We got floppy disks, capable of storing a few hundred kilobytes, in the 1970s, and CDs, capable of storing 650 MB, in the 1980s.
Today, you can purchase a micro-SD card, something smaller than your thumbnail, with a capacity of 1 TB for under $70.
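To put those figures in perspective, here is a quick back-of-the-envelope calculation of the cost per megabyte then versus now (ignoring inflation, and treating the cited prices and capacities as rough approximations):

```python
# Rough cost-per-megabyte comparison using the approximate figures cited above.
univac_cost_usd = 1_500_000      # UNIVAC I system price, circa 1951
univac_capacity_mb = 1.2         # capacity of its magnetic tape, in MB

microsd_cost_usd = 70            # a 1 TB micro-SD card today
microsd_capacity_mb = 1_000_000  # 1 TB expressed in MB

univac_per_mb = univac_cost_usd / univac_capacity_mb     # ~$1,250,000 per MB
microsd_per_mb = microsd_cost_usd / microsd_capacity_mb  # ~$0.00007 per MB

print(f"UNIVAC I:  ~${univac_per_mb:,.0f} per MB")
print(f"micro-SD:  ~${microsd_per_mb:.5f} per MB")
print(f"Roughly a {univac_per_mb / microsd_per_mb:,.0f}x drop in cost per megabyte.")
```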
This is possible because the economy, recognizing that digital storage is valuable in today’s world, naturally focused on driving efficiencies in its production. Through investments in R&D and mass production, storage is now almost free. Even cloud providers like Microsoft and AWS offer storage at extremely low cost. It has truly become commoditized.
The same will happen for LLM inference, if it hasn’t already. Today, data scientists are squeezing more and more performance (“intelligence”) out of smaller and smaller models by driving efficiencies in the training process while also optimizing inference on LLM-optimized hardware, delivering that performance at lower and lower computational cost.
For example, Microsoft’s Phi-3-Mini model is very impressive at a footprint of only 3.8B parameters! This is a very good thing, as it opens up even more possibilities for AI (e.g., on-device wearables) while democratizing access to state-of-the-art models.
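As a concrete illustration, a model of this size can be loaded and queried on a single consumer GPU (or even a beefy laptop) with just a few lines of code. The sketch below uses the Hugging Face transformers library; the model identifier and generation settings are my assumptions for illustration, not a recipe from Microsoft.

```python
# A minimal sketch of running a small (~3.8B-parameter) LLM locally with the
# Hugging Face transformers library. The model id below is assumed; adjust
# precision and device placement to fit your hardware. (Older transformers
# versions may also require trust_remote_code=True for this model family.)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # place weights on GPU/CPU automatically (needs accelerate)
)

prompt = "In two sentences, why are small language models getting so much better?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```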
The Greatest Innovation is Around the Corner
Between 1995 and 2000, a massive “dot-com bubble” formed in the stock market as investors threw untold amounts of money at any company with “.com” at the end of its name. In March 2000, it all came crashing down; the hype was over and the bubble had popped.
But from the rubble of that crash emerged a technology that truly did have value: the internet. While many recognized its tremendous potential back then and were buying up fill-in-the-blank.com equity in droves, it took well over a decade for that value to be truly recognized and reflected in the earnings and balance sheets of the companies building on the internet.
I think we are in a similar place today. Despite all the hype and chatter we’ve heard about AI in the past 18 months, it is still very new to us. The best use cases for it almost certainly have not been realized yet. Slowly but surely, the world economy will acclimate to this new technology, realize its amazing potential, and build cutting-edge solutions on top of it. And it is these solutions that will visibly change the world, perhaps more so than the LLM technology itself.
When you think back to the success stories of the internet, you probably think of Amazon, Facebook, Google… you don’t think of the providers of the internet service, but rather the innovators that leveraged this foundational technology to build something amazing. The Amazons, Facebooks, and Googles of the AI wave have not yet been built.
AI’s Effects on the Job Market will be Gradual
Contrary to popular belief (or at least news headlines), there probably won’t be a major firing event where a company determines the work of 50% of its staff can now be done by AI and lays them all off. While there are certainly low-complexity jobs where this may ring partially true (maybe a call/customer support center?), for the majority of even moderately complex jobs, this will not be the case. It won’t be that sudden.
Instead of a massive firing event where AI suddenly replaces a group of employees, we are much more likely to see jobs slowly absorbed over time. Companies will be hesitant to backfill positions; instead, leadership will realize that the productivity of the remaining employees on the team, now complemented and supercharged by the assistance of AI technology, more than makes up for the missing headcount. Why have a team of 8 mediocre performers when I can instead have a team of 4 all-star performers who understand how to boost their own productivity and value by leveraging AI in their work?
I wouldn’t be surprised if the quantity of human talent needed slowly decreases while the quality of human talent required increases. To remain valuable in the future job market, workers will need to be exceptionally effective, using AI to boost their performance and productivity.
What will the next wave be?
As I mentioned above, we are still very early in the era of AI being this capable. It’s still hot right now; mentions of “AI” in earnings calls by S&P 500 executives are at record numbers, as reported here.
AI will probably be the “hot thing” for another decade or so. But what will the next big thing be?
I’m not very sure. Nobody is! But I think the frontrunner is robotics.
We’ve already seen some impressive demonstrations by companies like Figure and Boston Dynamics with their humanoid machines, but the intelligence may not be quite there yet. We have the mechanical know-how to make these machines move fluidly, but we have not yet figured out how to train or deploy an AI model that lets them interact with the world as freely as we do. With so much R&D being poured into building and training AI models, it is likely robotics will finally get the missing piece of the puzzle.