Successful digital-native firms such as Amazon, Google, Facebook, Alibaba, Netflix, and Uber all delivered exponential growth by innovating across three seismic shifts that have defined the online economy: (1) the web-based, desktop-centric internet; (2) smartphones, connected devices, and app-based ecosystems; and (3) AI factories that leverage data and learning algorithms to power automated workflows and personalized recommendations. These three phases also reflect an evolution in how firms drive business value: from product-centric to network-centric to data-centric innovation.
Take Amazon as an example: its first innovation was online direct-to-consumer retail. Over time, leveraging its fulfillment expertise and its aggregation of consumer demand, Amazon expanded to include third-party sellers, creating network effects for both sellers and buyers and ensuring Amazon offered the best combination of selection and delivery service. As these networks drove scale, Amazon transformed itself so that all of its businesses could access its massive data, letting AI algorithms automate pricing, personalize recommendations, target ads, and optimize inventory management to sustain its exponential growth.
Similar AI factories run millions of daily ad auctions at Google and Facebook and set the pricing and matching for rides at Uber, systematically converting internal and external data into predictions, insights, and choices that in turn guide and automate operational workflows. These firms have been early leaders in the first phase of commercialized AI applications. But all firms, from these digital darlings to traditional enterprises to emerging start-ups, must now grasp the revolutionary impact that the next phase of AI innovation will have on operations, strategy, and competition.
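To make the data-to-decision loop concrete, here is a deliberately toy sketch of the pattern described above, a prediction feeding an automated pricing decision. The demand model and rates are invented for illustration and do not reflect any firm's actual system:

```python
from dataclasses import dataclass

@dataclass
class RideRequest:
    distance_km: float
    hour: int  # hour of day, 0-23

def predict_demand_ratio(hour: int) -> float:
    """Toy stand-in for a learned model: predicted riders per available driver."""
    return 1.8 if hour in (8, 9, 17, 18) else 1.0  # rush hours run hot

def price_ride(req: RideRequest, base_rate: float = 1.5) -> float:
    """Decision step: convert the prediction into an automated pricing choice."""
    surge = max(1.0, predict_demand_ratio(req.hour))
    return round(base_rate * req.distance_km * surge, 2)

print(price_ride(RideRequest(distance_km=10.0, hour=8)))   # surge pricing applies
print(price_ride(RideRequest(distance_km=10.0, hour=14)))  # off-peak baseline
```

In a real AI factory the hand-written heuristic would be replaced by a continuously retrained model, but the loop is the same: data in, prediction out, decision automated.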
Neural networks have been around for a long time, but their uses were limited. Two early OpenAI research efforts played a significant role in shaping large language models (LLMs) such as the GPT series: the "sentiment neuron" experiment, which showed that a model trained simply to predict the next character of text could learn high-level concepts like sentiment on its own, and the use of Dota 2 as a training environment (OpenAI Five), which demonstrated that massive-scale training could produce long-horizon, strategic behavior. These advances, combined with vast internet training data, improved learning algorithms, and compute power with high-bandwidth memory, enabled LLMs to become more versatile and adaptive in generating content across various domains and emotional contexts.
These factors, along with the availability of open-source models such as diffusion models for image generation, contributed to the ability of leading generative AI companies to produce more coherent, contextually appropriate, and insightful multi-modal responses. Google-backed Anthropic positions its Claude LLMs as safer than Microsoft-backed OpenAI's GPT-4. Recently, Anthropic expanded Claude's context window from 9K to 100K tokens; a 100K-token window corresponds to roughly six hours of transcribed audio, enabling tasks such as analyzing and summarizing financial reports, legal briefs, and developer documentation. This expanded capability also allows for efficient prototyping: a developer can load an entire codebase into the context window and have the model intelligently build upon or modify it.
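A practical question when working with a 100K-token window is whether a given document will fit. The sketch below uses the common rule of thumb of roughly four characters per English token; this is an assumption for estimation only, not Claude's actual tokenizer:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate; a real tokenizer will give different counts."""
    return int(len(text) / chars_per_token)

def fits_in_window(text: str, window: int = 100_000, reply_budget: int = 4_000) -> bool:
    """Check the prompt fits while leaving room in the window for the reply."""
    return estimate_tokens(text) + reply_budget <= window

report = "Revenue grew 12% year over year. " * 2_000  # ~66K characters of filler
print(estimate_tokens(report), fits_in_window(report))
```

For production use, the provider's own token-counting utilities should be preferred over a character heuristic, since tokenization varies by model and language.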
Anthropic (founded by former OpenAI executives) has raised over $1 billion, including a recent $450 million round that included major enterprise clients such as Salesforce and Zoom. As these enterprises, and many others including Quora, DuckDuckGo, and Notion, have announced plans to integrate Anthropic's models into their applications, it is clear that the race to embed new generative AI models into core enterprise offerings has begun.
McKinsey predicts that by 2025, smart workflows and seamless interactions among humans and machines will likely be as standard as the corporate balance sheet, and that most employees will use data to optimize nearly every aspect of their work. All firms will need to adapt to data-powered, AI-centric business models to drive the next phase of exponential growth. This is why Microsoft’s CEO, Satya Nadella, refers to AI as the new “runtime” of the firm. A major opportunity for generative AI start-ups is to help firms make this transition to data- and AI-centric business models with creative generation and co-pilots.
Use-case-focused AI start-ups will shape the future of science, business, medicine, government, product design, and manufacturing by training AI models with domain-specific understanding to dramatically increase the productivity of workflows and improve user experiences. In drug design, for example, AI start-ups can significantly reduce the time and cost of the discovery process. In marketing, analysts predict that by 2025, 30% of outbound marketing content from large organizations will be synthetically generated, and that by 2030 a blockbuster film will have 90% of its content generated by AI. Generative AI also enables industries such as manufacturing, automotive, aerospace, and defense to efficiently design parts that meet specific goals and constraints.
Last but not least, in this data-driven, AI-centric world, the role of application programming interfaces (APIs) is expanding rapidly: they enable application interactions and connect AI models to critical training data, which today is often siloed by functional and application boundaries. In fact, software-development AI co-pilots can generate the code for autonomous application interactions, custom API integrations, and data compatibility across data sources, helping to break down these silos.
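The silo-bridging glue code mentioned above is exactly the kind of boilerplate a coding co-pilot can generate. A minimal sketch, using two hypothetical exports (a JSON CRM dump and a billing CSV, both invented here) normalized into one schema:

```python
import json
from typing import Iterator

# Hypothetical exports from two siloed systems with different schemas.
CRM_EXPORT = '[{"customer_id": 7, "full_name": "Ada Lovelace", "ltv": 1200.0}]'
BILLING_CSV = "id,name,lifetime_value\n7,Ada Lovelace,1200.0\n9,Alan Turing,300.0"

def from_crm(payload: str) -> Iterator[dict]:
    """Map the CRM's JSON field names onto the unified schema."""
    for row in json.loads(payload):
        yield {"id": row["customer_id"], "name": row["full_name"], "ltv": row["ltv"]}

def from_billing(payload: str) -> Iterator[dict]:
    """Parse the billing CSV and map its columns onto the same schema."""
    header, *rows = payload.splitlines()
    keys = header.split(",")
    for line in rows:
        row = dict(zip(keys, line.split(",")))
        yield {"id": int(row["id"]), "name": row["name"], "ltv": float(row["lifetime_value"])}

# Unified, de-duplicated view keyed by customer id, ready for downstream models.
unified = {rec["id"]: rec for rec in [*from_crm(CRM_EXPORT), *from_billing(BILLING_CSV)]}
print(sorted(unified))  # customer ids present across both systems
```

Real integrations add authentication, pagination, and schema validation, but the core task, mapping per-system field names onto one shared schema, is the same.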
The potential of generative AI holds great promise for the future, with more hardware and software advancements to come. We are excited to continue investing in start-ups driving these advancements and transitions to a data-driven AI-centric world!
By Ravi Sundararajan, Partner, AIspace Ventures