
How do you use internal company data with large language models (such as ChatGPT)?

As large language models (LLMs) become more capable, they offer increasingly valuable tools for analyzing and processing language in business settings. Integrating LLMs with a company's data infrastructure enables more sophisticated analysis of customer feedback, financial documents, and other important information. The possible applications, such as well-informed chatbots or log summaries, are endless.

However, such integration can be complex and demanding, and it requires careful planning and execution. While the benefits of integrating LLMs with company data are obvious, it is not always clear how to set up an appropriate system. So where do you start? The traditional approach is fine-tuning. Newer approaches, which have emerged alongside the growing popularity of LLMs, attach relevant context directly to queries. Let's take them step by step.

The “old” way: fine-tuning

Fine-tuning refers to adapting a pre-trained language model to a specific task or domain. It involves further training the model on a data set related to that task or domain to improve its performance. In a typical scenario, you take a standard model that was trained on a large general data set and then continue training it on carefully prepared data of your own.
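As an illustration, here is a minimal sketch of such a fine-tuning run using the Hugging Face transformers library. The model name, file path, and hyperparameters are placeholder assumptions rather than recommendations; a production setup would add evaluation, checkpointing, and much more careful data preparation.

```python
# Minimal fine-tuning sketch with Hugging Face transformers.
# Assumes a plain-text file "company_data.txt" with one training example per line.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenize the company-specific corpus.
dataset = load_dataset("text", data_files={"train": "company_data.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # Causal LM objective: predict the next token (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")
```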

However, fine-tuning isn't always easy. It requires careful data preparation, and the data must be available in sufficient quantity (which usually means a large amount). Sometimes the model "forgets" what it has already learned in favor of the newly acquired knowledge; in the machine learning literature, this is referred to as "catastrophic forgetting." Another downside is that this approach requires a substantial deep learning infrastructure capable of training LLMs. Because training takes time, it may also be too slow for use cases with rapidly changing data: any addition of data requires retraining.

The "new" way: in-context learning

With the advent of LLMs, a new paradigm became popular: in-context learning. For many, the ability to draw on previously provided information (context) was one of the most impressive features of ChatGPT. In-context learning is the ability to learn to solve new tasks without changing the model: the network can solve tasks that were unknown at training time without updating its weights (as fine-tuning does). The models can do this without being specifically trained for it, and a few examples are usually sufficient, in stark contrast to the typical perception of machine learning models as data-hungry. The roots of this phenomenon may lie in Bayesian inference and are currently the subject of intensive research.

In-context learning through prompt engineering

The simplest form of in-context learning is prompt engineering, where the prompt is simply a piece of text sent to the model. Prompt engineering refers to designing and refining the prompts or instructions given to a language model such as ChatGPT in order to get the answers you want. It is about formulating specific, clear instructions, often with examples, that guide the model's behavior and encourage it to produce accurate and relevant output. Prompt engineering matters because language models such as ChatGPT generate responses based on the information provided, including the initial prompt and any subsequent context.

Developing prompts is often an iterative process that requires experimentation and refinement. This may include using specific keywords or providing relevant examples. By adjusting and optimizing prompts, users can improve model performance, raise the quality of responses, and reduce potential biases or errors. Note that prompt engineering concerns only the design of the model's input; it does not involve any changes to the underlying architecture or the model's training data.
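To make this concrete, here is a small sketch of a few-shot prompt for classifying customer feedback. The labels and examples are invented for illustration; the point is the structure: instructions plus a handful of examples, followed by the actual input, with the resulting string sent to the model.

```python
# A few-shot prompt for sentiment classification of customer feedback.
# The examples below are invented for illustration; in practice they would
# come from your own labeled data.
FEW_SHOT_PROMPT = """\
Classify the sentiment of each customer comment as positive, negative, or neutral.

Comment: "The onboarding process was quick and the support team was great."
Sentiment: positive

Comment: "I waited two weeks for a reply and the issue is still not fixed."
Sentiment: negative

Comment: "The invoice arrived on the usual date."
Sentiment: neutral

Comment: "{comment}"
Sentiment:"""

def build_prompt(comment: str) -> str:
    """Fill the new comment into the few-shot template."""
    return FEW_SHOT_PROMPT.format(comment=comment)

if __name__ == "__main__":
    # The resulting string would be sent to a chat-completion endpoint;
    # here we just print it.
    print(build_prompt("The new dashboard is confusing, but support helped."))
```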

So is it that easy? Well, not really. Because of the limited size of the context window (or, more formally, the token limit), we can only attach a limited amount of information as context. Since corporate data tends only to grow, this approach can quickly become inadequate, not to mention the cost.
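The sketch below illustrates the constraint: before sending a prompt, the attached context has to fit into a token budget and must be truncated if it does not. The four-characters-per-token heuristic and the budget value are rough assumptions; a real system would count tokens with the model's actual tokenizer (for example, via the tiktoken library for OpenAI models).

```python
# Rough token budgeting: keep only as much context as fits in the window.
# Assumes ~4 characters per token, a common rule of thumb for English text.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Crude token estimate; replace with a real tokenizer in production."""
    return len(text) // CHARS_PER_TOKEN + 1

def fit_context(chunks: list[str], budget_tokens: int) -> list[str]:
    """Greedily keep context chunks until the token budget is exhausted."""
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return kept

if __name__ == "__main__":
    docs = ["First policy document ...", "Second report ...", "Third memo ..."]
    # With a tiny budget, only part of the context survives.
    print(fit_context(docs, budget_tokens=15))
```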

Smarter in-context learning with indexing and vector stores

Fortunately, there are smarter ways to connect context to queries. If you have a large amount of your own data, attaching all of it to the prompt isn't an option. However, there are techniques for providing just the right context. One of them is building a set of indexes over your data. These indexes help identify the relevant "pieces" of your data so that only what is needed is attached to the query.

So before the prompt is sent to the model, we first search for the relevant parts of our data and then add them to the prompt. With LlamaIndex, for example, we can store various documents (e.g. PDF files or SQL data) as indexed nodes structured so that they can be queried later. Of course, you have to weigh the speed and cost of indexing. Another approach is vector databases, such as Pinecone. Vector databases store embeddings, which are numeric representations of data. Such embeddings can later be queried for approximate similarity to our request, and this property can be used to build the right context for the prompt.
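Here is a minimal, self-contained sketch of this retrieve-then-prompt pattern. To stay runnable without external services, it uses a toy bag-of-words embedding and cosine similarity as a stand-in for a real embedding model and vector database; with LlamaIndex or Pinecone, the embedding, storage, and lookup steps would be handled by the library.

```python
# Retrieve-then-prompt: find the most similar chunks, attach them as context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank stored chunks by similarity to the query and keep the top_k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Attach only the retrieved chunks as context for the model."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = [
        "Refunds are processed within 14 days of the return request.",
        "Our office in Munich opened in 2021.",
        "Return requests must be filed through the customer portal.",
    ]
    print(build_prompt("How long do refunds take?", corpus))
```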

Other factors to consider

When deploying a language model, there are significant differences between an on-premises deployment and using external models (such as ChatGPT via an API). With an on-premises deployment, the software is hosted on a company's own servers or in its data centers, which provides complete control over the infrastructure. This approach makes it possible to manage data security and data protection according to specific requirements, and it may be preferable for sensitive or confidential data that must remain within the corporate network. However, on-premises deployment typically requires significant upfront investments in hardware (such as GPUs), infrastructure setup, and ongoing maintenance, and it lacks the inherent scalability and flexibility of external solutions. The final decision should therefore always be based on careful planning and calculation. Cloud solutions can serve as an interim step.

Conclusion

At Perelyn, we specialize in helping companies seamlessly integrate their data with language models. Our expertise lies in bridging the gap between your valuable data sources and the powerful capabilities of language models such as ChatGPT. By understanding your specific needs and goals, we can help you prepare and structure your data for optimal use. Our team of data engineers and experts will help you clean, preprocess, and transform your data into appropriate formats for integration with language models so that AI can work for you.

About the author

Dominik Filipiak

Lead Computer Vision and IT Services

