Everyone is talking about the possibilities of generative artificial intelligence (GenAI), its potential impact on business and how the technology can be leveraged to deliver near-human responses. Less discussed is the role that real-time data plays in getting those responses right.
AI has been around for a long time, but the kind of AI based on large language models (LLMs), such as that behind the breakthrough generative AI platform ChatGPT, has captured the imagination of the public and the business world alike.
It makes sense: the technology is easy to use and easy to access, and it can produce astoundingly original responses to user prompts, so it’s no surprise it has grown in popularity so quickly. ChatGPT alone reached an estimated 100 million users within two months of its public launch in late 2022. But how should businesses approach the effective use of generative AI, and what’s needed to get it right?
The Business Case for GenAI
It’s not just consumer-facing generative AI platforms like ChatGPT that are making waves. LLMs in general have the potential to revolutionise entire industries and drive significant productivity improvements across the business world, especially as more organisations build them into customer-facing services.
For instance, airlines could use generative AI systems to deliver up-to-the-second flight information directly to passengers’ mobile devices, providing a better customer experience than waiting for arrival or departure boards to update. A simple chatbot answering pre-configured questions can thus be turned into a dynamic customer-facing solution that can be plugged into any organisation.
Clearly, GenAI technology can revolutionise customer service transactions by enabling a faster, more relevant and more personalised experience. That, in turn, can drive revenue and bolster brand loyalty. Indeed, it has the potential to transform many areas of business.
Fresh is Best
Behind the seemingly magical process that powers generative AI technology lies an ocean of data. Herein lies the ‘large’ in the LLM acronym: generative AI systems built on LLMs are, as the name suggests, trained on large data sets. The larger the data set, the more expansive and comprehensive an LLM-based system’s responses will be.
That is why, when it comes to building GenAI applications for business, an organisation’s AI strategy is deeply intertwined with its data strategy. In this context, clean, real-time data is foundational to any AI solution, especially for services that provide real-time information to users.
If businesses want to succeed in this new era of generative AI, they have to overcome the challenges created by legacy ‘data-at-rest’ architectures. With this in mind, it’s crucial that businesses shift their data from a static, ‘at rest’ state into one that is continuously in motion and reflective of up-to-date information, so they can provide accurate services and applications in real time.
Move That Data
The root cause of old, and therefore unreliable, data often lies in outdated data integration methods built on slow, batch-based pipelines. These cumbersome systems typically take far too long to deliver data, rendering it stale and inconsistent by the time it arrives. Poor governance and limited scalability often compound the problem.
That is why data streaming has become the de facto industry standard for AI and business innovation. Data streaming refers to architectures and tools that continuously feed data into critical business systems and AI applications as it is generated. With data streaming, businesses can establish a dynamic, real-time knowledge repository to drive AI applications.
A data streaming approach to data management also enables real-time context to be integrated at the moment an AI query is executed, while allowing experimentation, scaling and innovation with greater agility. More broadly, it makes for better-governed, more secure and more trustworthy AI data.
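As a rough illustration of that query-time integration, the sketch below uses the open-source confluent-kafka Python client with an assumed ‘flight-status’ topic; the event fields, broker address and prompt format are illustrative assumptions rather than a prescribed implementation. A consumer keeps a live view of the latest events, and that freshest context is injected into the prompt at the moment a question is asked.

```python
# Minimal sketch: keep a live knowledge store fed by a stream, and inject the
# freshest context into an LLM prompt at query time. Topic name, event fields
# and prompt format are assumptions for illustration.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "group.id": "genai-context-builder",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["flight-status"])

# Live knowledge store: latest status per flight, refreshed as events stream in.
latest_status = {}

def refresh_store(max_batch=100):
    """Drain any newly arrived events, keeping only the most recent per flight."""
    for _ in range(max_batch):
        msg = consumer.poll(timeout=0.1)
        if msg is None or msg.error():
            break
        event = json.loads(msg.value())
        latest_status[event["flight"]] = event

def build_prompt(question, flight):
    """Pull the freshest known status into the prompt sent to the LLM."""
    refresh_store()
    context = latest_status.get(flight, {"status": "unknown"})
    return (
        f"Context (live flight data): {json.dumps(context)}\n"
        f"Customer question: {question}\n"
        "Answer using only the context above."
    )

print(build_prompt("Is my flight on time?", "QF123"))
```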
On the Right Path
The most effective first step for businesses wanting to leverage AI is to create a data streaming capability by building a real-time data mesh across the organisation. This enables AI applications to discover data sets and tap into them.
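As a loose illustration of the discovery side of such a mesh, the hypothetical sketch below simply lists the streams (Kafka topics, in this assumed setup) that an AI application could tap into; a real data mesh would layer ownership, schemas and governance on top of this.

```python
# Minimal sketch: discover the data streams available to an AI application.
# Assumes Kafka topics as the shared data products and a local broker.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # assumed broker

metadata = admin.list_topics(timeout=10)
for name, topic in metadata.topics.items():
    if name.startswith("_"):
        continue  # skip internal topics
    print(f"{name}: {len(topic.partitions)} partition(s)")
```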
Having a native stream processing capability gives organisations the ability to transform and optimise raw data at the moment it is generated, turning it into actionable insights through real-time enrichment.
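The sketch below shows the pattern in its simplest form: raw events are enriched with reference data the moment they arrive and written back out as a new stream for AI applications to consume. The topic names, event shape and in-memory lookup are assumptions; the same logic could equally be expressed in a stream-processing framework such as Kafka Streams or Flink SQL.

```python
# Minimal sketch: enrich raw events in real time and publish an enriched stream.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "enrichment-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-bookings"])
producer = Producer({"bootstrap.servers": "localhost:9092"})

# Reference data held in memory for the enrichment join (illustrative only).
customer_tiers = {"C001": "gold", "C002": "silver"}

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Enrich the raw event at the time of generation.
        event["tier"] = customer_tiers.get(event.get("customer_id"), "standard")
        producer.produce("enriched-bookings", key=msg.key(), value=json.dumps(event))
        producer.poll(0)  # serve delivery callbacks
except KeyboardInterrupt:
    pass
finally:
    producer.flush()
    consumer.close()
```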
Decoupling data science tools from production AI applications paves the way for a more streamlined approach to testing and building, easing the path of innovation as new AI applications and models become available.
Additionally, data becomes more valuable to AI models when it is placed in context with data from other systems. For example, businesses can connect customer insights to services like predictive support and hyper-personalised recommendations.
These are just a few steps to ensure businesses have access to real-time data as they begin their generative AI journey. With data streaming capabilities in place, they will be in a far better position to make the most of the GenAI revolution.
About The Author
As VP of the APAC Digital Natives & Emerging Markets business, Deepak leads Go-to-Market strategy & execution to grow Confluent’s revenues in ANZ & Asia markets. Prior to taking on the expanded role & responsibilities of Regional Executive Leadership, he led the APAC Commercial business with an amazing team for 2.5 years, to successfully grow Confluent Cloud revenues from digital natives, startups & mid-market customers. When he is not working, he loves to spend time with family & friends, travel to new places, watch stage plays/live shows/documentaries & practice self care.