Building Competitive Advantage Using AI

Generative AI has the potential to transform businesses, but its democratizing power means that competitors have the same access and capabilities. To achieve a competitive advantage, companies need to build deep and enduring moats, which can come from proprietary, domain-specific data and from identifying the right use cases.

The “low-code, no-code” quality of tools such as ChatGPT, DALL-E 2, Midjourney, and Stable Diffusion makes it easier for organizations to adopt AI capabilities at scale. The immediate productivity gains can greatly reduce costs. But generative AI’s democratizing power also means that a company’s competitors have the same access and capabilities. Many use cases that rely on existing large language model (LLM) applications, such as productivity improvements for programmers using GitHub Copilot and for marketing content developers using Jasper.ai, will be needed just to keep up with other organizations. But they won’t offer much differentiation, because the only variability will come from users’ ability to prompt the system.

The companies that outperform are the ones that can build deep and enduring moats, whether at the foundation model layer, in infrastructure and tooling, or in end-user applications. The potential for generative AI goes well beyond marketing and content creation when you look at industries like financial services, healthcare, legal, hospitality, and transportation. People underestimate the power of applying generative AI in a domain-specific manner.

We believe that, rather than a small number of LLMs, there will be a proliferation of models as people fine-tune them with proprietary and domain-specific data. It is also important to identify the right use cases and differentiated data inputs that bring true competitive advantage and create the largest impact relative to existing best-in-class solutions.

Fine-tuning existing open-source or paid models makes companies dependent on the functionality and domain knowledge embedded in those models’ training data; they are also restricted to the available modalities, which today are mostly language models. Such models offer limited options for protecting proprietary data; one of the few is fine-tuning an LLM that is hosted fully on premises. Training a custom LLM offers greater flexibility, but it comes with high costs. The bar to justify this investment is high, but for a truly differentiated use case, the value generated from the model could offset the cost.
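To make the on-premises option concrete, here is a minimal sketch of fine-tuning an open-source model on proprietary data that never leaves local infrastructure, using low-rank adapters (LoRA) so only a small fraction of the weights is trained. The model name, dataset path, and hyperparameters are illustrative assumptions, not recommendations.

```python
# Minimal sketch: parameter-efficient fine-tuning (LoRA) of an open-source
# model on a proprietary, domain-specific corpus kept on local disk.
# Model name, dataset path, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"            # any open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters; only a tiny fraction of weights train.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Proprietary domain corpus stays on premises (hypothetical file path).
data = load_dataset("json", data_files="internal_domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lora", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("domain-lora")          # saves adapters only; base weights unchanged
```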


LLMs, vector search databases, and serverless GPU services have made it much easier to build sophisticated, cost-effective AI-powered SaaS applications. Together, LLMs and vector search databases are not only revolutionizing search by enabling semantic matching and real-time contextual answers to user queries across modalities (including text, images, and videos), but also power a wide range of applications, including large-scale image and video recognition, natural language understanding, humanized recommendation systems, and more.
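Here is a minimal sketch of that semantic-matching pattern, assuming an off-the-shelf embedding model and a FAISS index; the model name and documents are illustrative.

```python
# Minimal sketch: semantic search over a small document set using an
# embedding model and a vector index. Model name and documents are illustrative.
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")    # any text-embedding model

docs = [
    "Refund policy for enterprise customers",
    "How to rotate API keys for the billing service",
    "Quarterly revenue breakdown by region",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])         # inner product == cosine on normalized vectors
index.add(doc_vecs)

query = "customer asking for their money back"
q_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(q_vec, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")                  # semantically closest documents, not keyword matches
```

The retrieved passages can then be passed to an LLM as context, which is the core of retrieval-augmented applications built on proprietary data.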

The AI-SaaS and AI-powered app market is expanding rapidly, but the interactions between users and AI-powered chatbots often lack engagement. According to Apptopia, the number of apps published to the app stores with the term “AI Chat” or “AI Chatbot” in their name or description increased by 1,480% year-over-year in the first quarter alone, and as of March 2023 there were already 158 AI chatbots available in app stores. The public launch of OpenAI’s ChatGPT has set off a consumer-facing artificial intelligence gold rush, particularly in mobile apps. However, while the AI chatbot market is growing, the interactions are often stiff and scripted, and they fail to drive engagement, especially when they use the same open models and data that are available to everyone else.

Unfortunately, very few companies are building AI chatbots that are engaging and customizable, with information about themselves and their end-users, to facilitate lengthy, meaningful conversations. Those that are building such chatbots are increasing their rate of data collection to improve personalization and one-to-one engagement.

Since LLMs take significant resources to train, many assume it is not feasible to compete with the big tech companies that own these models. But an LLM alone is not sufficient to provide unique value. Differentiation usually comes from the combination of easy bot configurability, the chatbot response model, a personality classifier, user-specific memory, and a speaking avatar.
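A hypothetical sketch of how those pieces might compose into a single response pipeline; every class and function below is illustrative, not a real product API.

```python
# Hypothetical sketch of how these components might compose into one pipeline;
# all names here are illustrative, not a real product API.
from dataclasses import dataclass, field

@dataclass
class BotConfig:                        # easy configurability: name, persona, voice
    name: str
    persona: str
    avatar_voice: str = "neutral"

@dataclass
class UserMemory:                       # user-specific memory persisted across sessions
    facts: list[str] = field(default_factory=list)
    def remember(self, fact: str) -> None:
        self.facts.append(fact)

def classify_personality(message: str) -> str:
    # Placeholder personality classifier; in practice this would be a trained model.
    return "playful" if "!" in message else "matter-of-fact"

def call_llm(prompt: str) -> str:
    # Stand-in for whichever chatbot response model the product actually uses.
    return "(model response)"

def respond(config: BotConfig, memory: UserMemory, message: str) -> str:
    tone = classify_personality(message)
    known = "; ".join(memory.facts) or "nothing yet"
    prompt = (f"You are {config.name}, {config.persona}. Speak in a {tone} tone. "
              f"Known about the user: {known}.\nUser: {message}\n{config.name}:")
    return call_llm(prompt)             # the text can then be voiced by the speaking avatar

bot = BotConfig(name="Nova", persona="a warm travel companion")
memory = UserMemory()
memory.remember("prefers beach destinations")
print(respond(bot, memory, "Where should I go this summer?"))
```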

Engaging multi-modal bots like these create interactions and contexts that feed back relevant data for improving the underlying models at retraining time, enabling a flexible hierarchy of large and small machine learning models. Although the overall generative AI sector is crowded, going after businesses that care about engagement and entertainment, rather than search accuracy, dramatically narrows the field of competitors. Hence our investment in startups like Ex-Human that are creating human-equivalent interactions and contexts.

A recent paper by Stanford University and Google researchers, “Generative Agents: Interactive Simulacra of Human Behavior,” describes an architecture that extends an LLM to store a complete record of an agent’s experiences in natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. By fusing LLMs with computational, interactive generative agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
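A rough, greatly simplified sketch of that memory-stream idea: observations are stored as natural-language records and retrieved by a blend of recency and relevance. The paper also weighs importance and uses embedding similarity; the scoring below is only a crude stand-in.

```python
# Rough sketch of the memory-stream idea: store observations as natural-language
# records, retrieve them by recency + relevance, and periodically "reflect".
# The scoring here is a simplified illustration, not the paper's exact method.
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    created: float = field(default_factory=time.time)

class MemoryStream:
    def __init__(self) -> None:
        self.records: list[Memory] = []

    def observe(self, text: str) -> None:
        self.records.append(Memory(text))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        def score(m: Memory) -> float:
            recency = 1.0 / (1.0 + time.time() - m.created)
            overlap = len(set(query.lower().split()) & set(m.text.lower().split()))
            return recency + overlap        # crude stand-in for embedding-based relevance
        return [m.text for m in sorted(self.records, key=score, reverse=True)[:k]]

    def reflect(self) -> str:
        # In the paper, an LLM periodically summarizes recent memories into a
        # higher-level reflection; this placeholder only marks where that happens.
        return "Reflection on: " + "; ".join(m.text for m in self.records[-5:])
```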

The next wave of business transformation will shift from building isolated digital capabilities to creating the foundations of a shared new reality that seamlessly combines our physical lives and our digital ones. The goal is not incremental improvement, but a step change. Many people are using AI to generate purely digital images and content, but we already see how it will shape the future of science, business, medicine, government, enterprise data, product design and manufacturing, and so much more.

By Ravi Sundararajan, Partner, AIspace Ventures