LLM-Chatbots — An introduction to the new world of bots

Sophie Hundertmark

For the following article, my AI assistant has also created a podcast episode. If you prefer to listen rather than read, you can access the podcast via the following link (ATTENTION: the podcast was created entirely by AI, so there is no guarantee of correctness).

Listen to the podcast episode of Sophie’s AI Assistant

Chatbots and voicebots have undergone a significant change in both quality and implementation methods due to the integration of Large Language Models (LLMs). In the past, companies had to laboriously predefine every question and answer, which made the development process lengthy and inflexible. The user experience was also rather sobering, as the predefined answers were generic and not very user-centred. Today, thanks to LLMs and methods such as Retrieval-Augmented Generation (RAG), chatbots and voicebots can be trained quickly and efficiently, so that they are able to communicate in a very specific and target-group-oriented way. This facilitates development and implementation, as the bots can now respond dynamically to a wide range of queries without every possible conversation scenario having to be programmed in advance. At the same time, it has taken the customer experience to a whole new level: the quality of the answers has increased many times over in terms of both correctness and individuality.

The following article covers everything you need to know about this new type of chatbot.

To recap

What are Large Language Models (LLMs)?

A Large Language Model (LLM) is an advanced machine learning model that specialises in understanding and generating human language. These models, which are based on deep neural network architectures such as transformers, are trained with gigantic amounts of text data. Through this training, they learn to recognise patterns, structures and the meanings behind words and sentences, which enables them to communicate in natural language. LLMs can be used for a variety of applications, including text generation, translation, summarisation and question answering, by effectively generating new text based on the learned context and input prompts.

What is Retrieval-Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) is a method in natural language processing that combines an information retriever and a text generator to generate precise answers to user questions. The process begins with the retriever pulling relevant documents or data from a large database based on how well they match the user's question. This selected information is then passed to the generator, typically an advanced language model such as a transformer-based large language model. The generator uses this information to formulate a coherent and informed response. This method makes it possible to generate responses that are not only based on pre-trained knowledge but also incorporate current, specific, and contextual information, significantly improving the accuracy and relevance of the responses.
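The two RAG steps described above (retrieve, then generate from the retrieved context) can be sketched in a few lines. This is a deliberately minimal illustration: the document store, the word-overlap scoring, and the prompt template are my own stand-ins, not part of any specific product; real systems use embedding-based similarity search and send the assembled prompt to an LLM.

```python
from collections import Counter

# Toy document store standing in for a company knowledge base (illustrative).
DOCUMENTS = [
    "Our support desk is open Monday to Friday from 8am to 6pm.",
    "Premium customers can reach support 24/7 via the hotline.",
    "Password resets are handled in the account settings page.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of shared lowercase words.
    Real retrievers use embeddings and vector similarity instead."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Retriever step: return the k best-matching documents."""
    ranked = sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Generator step (sketched): the retrieved context is injected into
    the prompt that would then be sent to the language model."""
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When is the support desk open?"))
```

The key design point is that the model's answer is conditioned on freshly retrieved text rather than only on what it memorised during training.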

What is an LLM chatbot?

LLM chatbots, or Large Language Model chatbots, are advanced AI systems that use generative AI to understand and generate human language. These intelligent chatbots are based on large language models such as GPT-4 or other open source models that have been trained with enormous amounts of text data to develop an in-depth understanding of context, syntax and semantics. This advanced language processing enables LLM chatbots to take on a variety of tasks, from answering questions and creating content to automating customer support.

In the context of LLM chatbots, methods such as Retrieval-Augmented Generation (RAG) play an important role. RAG combines the capabilities of a retrieval system, which retrieves relevant documents or information from a database, with the generation capability of a Large Language Model. This enables LLM chatbots to not only respond based on the trained model, but also to integrate specific, contextual information from company-owned sources to generate more precise and informed responses. Using RAG significantly expands the functionality of LLM chatbots by enabling companies to individually supplement the chatbot’s knowledge. Companies can even specify that the LLM chatbot should only access the content provided by the company. This ensures that the bot does not draw on unwanted or incorrect information.
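Restricting the bot to company-provided content, as described above, is often implemented as a guardrail around the retrieval step: if nothing sufficiently relevant is found, the bot refuses rather than letting the model answer from its general training knowledge. A minimal sketch, with an invented score threshold and response format:

```python
def answer_with_guardrail(query: str, retrieved: list[dict], min_score: float = 1.0) -> str:
    """Only answer when the retriever found sufficiently relevant company
    content; otherwise refuse instead of falling back on the model's
    general (and possibly unwanted) training knowledge."""
    best = max(retrieved, key=lambda d: d["score"], default=None)
    if best is None or best["score"] < min_score:
        return "Sorry, I can only answer questions covered by our own documentation."
    # In a real system, best["text"] would be passed to the LLM as context.
    return f"Answer grounded in: {best['text']}"

# No relevant company document found -> the bot declines to answer.
print(answer_with_guardrail("Do you sell cars?", []))
```

The threshold value and the exact refusal behaviour are policy decisions each company makes for itself.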

How does an LLM chatbot work?

LLM chatbots consist of several main components. I like the illustrated representation by TrueBlue, which breaks down LLM chatbots into five main components:

1. The brain

The brain is the fundamental component of an LLM chatbot and acts as its central processing unit. Just like in humans, the brain manages all of the bot’s logic and behavioural traits. It interprets user input, applies logical reasoning, and determines the most appropriate course of action based on the chatbot’s capabilities and the objectives defined by the company. The brain ensures that the bot acts correctly and consistently according to predefined guidelines or learned behaviours.

2. Memory

The memory serves as a store for the chatbot’s internal logs and user interactions. This is where data is stored, organised and accessed. This enables the bot to remember previous conversations, user preferences and contextual information, and thus provide personalised and relevant responses. The memory is crucial because it provides a time frame and stores additional details relevant to specific users or tasks. Companies can decide for themselves where the memory stores this data and thus ensure that their own data protection requirements are taken into account by the LLM chatbot.
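A per-user conversation memory can be sketched as a small store of past turns that the bot consults when answering follow-ups. The class and its interface below are illustrative, not a specific product's API; where the store actually lives (in-process, an EU-hosted database, etc.) is exactly the data-protection decision mentioned above.

```python
from collections import defaultdict

class ConversationMemory:
    """Minimal per-user memory: stores conversation turns so the bot can
    use earlier context when answering follow-up questions (sketch)."""

    def __init__(self):
        # Maps user ID -> list of {"role", "text"} turns.
        self._history = defaultdict(list)

    def add(self, user_id: str, role: str, text: str) -> None:
        """Record one turn of the conversation."""
        self._history[user_id].append({"role": role, "text": text})

    def context(self, user_id: str, last_n: int = 5) -> list[dict]:
        """Return the most recent turns to feed back into the prompt."""
        return self._history[user_id][-last_n:]

mem = ConversationMemory()
mem.add("u1", "user", "I ordered a drill last week.")
mem.add("u1", "bot", "Thanks, I can see your order.")
print(mem.context("u1"))
```

In production this store would be persisted and scoped according to the company's retention and privacy rules.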

3. Workflows

Workflows are predefined processes or tasks that the chatbot should be able to perform. These workflows can range from answering complex queries to coding to searching for information and performing other specialised tasks. They are similar to the various applications and utilities in a computer that enable a wide range of functions. Each workflow is designed for a specific purpose, and the brain intelligently decides which tool to use based on the context and nature of the task. This modular approach gives companies a high degree of flexibility and scalability, as new workflows can be added or existing ones updated without affecting the overall functionality of the chatbot. This makes it easy for chatbots to learn new skills and functions.
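The modular workflow idea can be illustrated with a tiny registry and router. The workflow names and the keyword-based routing below are my own simplifications: in a real LLM chatbot, the brain (the model itself, e.g. via function calling) decides which workflow to invoke, and adding a new workflow is just adding a new entry.

```python
# Hypothetical workflows; each one handles a specific kind of task.
def faq_workflow(msg: str) -> str:
    return "Looking that up in the FAQ..."

def order_status_workflow(msg: str) -> str:
    return "Checking your order status..."

def fallback_workflow(msg: str) -> str:
    return "Let me forward you to a human agent."

# Registry: keyword triggers -> workflow. New skills = new entries,
# without touching the rest of the chatbot.
WORKFLOWS = [
    (("order", "delivery"), order_status_workflow),
    (("hours", "price", "return"), faq_workflow),
]

def route(message: str) -> str:
    """Stand-in for the brain's decision: pick a workflow by keyword.
    A real system would let the LLM choose the tool from the context."""
    text = message.lower()
    for keywords, workflow in WORKFLOWS:
        if any(k in text for k in keywords):
            return workflow(message)
    return fallback_workflow(message)

print(route("Where is my order?"))
```

The point is the modularity: each workflow is independent, so extending the bot does not require changing its core.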

4. The planning module

The planning module is the component that enables the chatbot to solve complex problems and refine execution plans. It is comparable to a strategic layer over the brain and workflows that enables the LLM chatbot to not only respond to immediate requests but also to plan long-term goals or more complicated tasks. The planning module evaluates different approaches, anticipates potential challenges and develops strategies to achieve the desired outcome. This could be, for example, the overarching goal of ‘increasing sales’.

5. Prompts

We are familiar with prompts mainly through the use of ChatGPT or similar technologies. LLM chatbots also work with prompts. Thanks to prompts, companies can define the chatbot’s behaviour and largely prevent unwanted reactions on the part of the bot. There are two main types of prompts:

General prompt:

  • This prompt outlines the bot’s abilities and behaviour and forms the basis for the agent’s interaction and reaction. It acts as a high-level guide that shapes the entire functioning of the agent.

Task-related prompt:

  • This prompt defines the specific goal that the LLM chatbot must achieve and guides its actions and decision-making processes. It ensures that the chatbot’s responses are aligned with the task at hand, whether it is answering a customer query or performing a complex analysis.
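The two prompt types above are typically combined into the message list sent to a chat-style LLM API. The prompt texts and the company name below are invented for illustration; the `system`/`user` role structure is the common convention in chat APIs.

```python
# General prompt: shapes the bot's overall behaviour and boundaries.
GENERAL_PROMPT = (
    "You are the support chatbot of ExampleCorp. Be polite and concise, "
    "and never discuss topics outside customer support."
)

# Task-related prompt: defines the concrete goal for this interaction.
TASK_PROMPT = (
    "Task: answer the customer's question about opening hours, "
    "using only the provided company knowledge."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list a chat-style LLM API would receive."""
    return [
        {"role": "system", "content": GENERAL_PROMPT},  # general prompt
        {"role": "system", "content": TASK_PROMPT},     # task-related prompt
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("When are you open on Saturdays?")
print(msgs)
```

Keeping the two prompts separate makes it easy to reuse the general behaviour across many tasks while swapping only the task-related part.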

How do companies implement LLM chatbots?

The implementation of LLM chatbots involves the following seven steps:

  • Data collection
  • Data preprocessing
  • Training the language model
  • Fine-tuning
  • Testing and optimising
  • Deployment and integration
  • Continuous learning and improvement

First, a comprehensive and business-relevant collection of content is compiled to serve as the basis for language model training. The collected data is then cleaned and tokenised to prepare it for training.
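The cleaning and tokenisation step can be sketched very simply. The regular expressions below are a deliberately naive illustration of what "cleaning and tokenising" means in practice; real pipelines strip markup more robustly and use subword tokenisers such as BPE rather than word splitting.

```python
import re

def clean(text: str) -> str:
    """Data preprocessing (sketch): drop leftover HTML tags from scraped
    website content and normalise whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)      # strip HTML remnants
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text

def tokenise(text: str) -> list[str]:
    """Naive word-level tokenisation; production systems use subword
    tokenisers (e.g. BPE) instead."""
    return re.findall(r"\w+", text.lower())

raw = "<p>Opening   hours: Mon-Fri, 8am to 6pm.</p>"
print(tokenise(clean(raw)))
```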

During the training phase, machine learning methods, in particular NLP strategies, are used to train the model on the cleaned data set. This is followed by fine-tuning for specific use cases to increase accuracy for certain tasks. After the first testing of the LLM chatbot, which identifies areas for improvement, iterative refinement follows through adjustments to the training data and further model parameters.

Once satisfactory performance is achieved, the LLM chatbot is implemented in the company’s target environment and integrated into existing systems via APIs. To ensure timeliness and relevance, the chatbot is regularly retrained with new data and continuously improved through feedback loops. These steps ensure that the LLM chatbot provides accurate and relevant answers that meet current user needs.

What are some use cases for LLM chatbots?

LLM chatbots are already very versatile and can be used in numerous areas. Here are some of the most important applications:

  1. Customer service: LLM chatbots are frequently used in customer service to answer frequently asked questions, manage support tickets and offer solutions. They can be available 24/7 and thus significantly reduce waiting times for customers.
  2. Personalising marketing campaigns: LLM chatbots can send personalised messages based on customer preferences and previous behaviour. They can also help conduct surveys to gather better customer feedback.
  3. E-commerce and retail: In online shops, LLM chatbots can help customers select products, make product recommendations and support the purchasing process.
  4. Healthcare: In the medical sector, LLM chatbots can provide patients with information on symptoms, support initial pre-diagnosis and offer advice on medication. They also serve as a first point of contact to assess the urgency of cases and allocate resources accordingly.
  5. Financial services: In the financial sector, LLM chatbots help automate requests for account balances and transactions, and can provide advice on basic financial matters.
  6. Education and training: LLM chatbots can act as interactive learning assistants, offering learning materials, conducting quizzes and addressing specific questions from students.
  7. HR and recruitment: LLM chatbots can support the recruitment process by sifting through CVs, conducting initial interviews and automating communication with applicants.
  8. Internal business processes: LLM chatbots can also be used internally to give employees quick access to company information and to facilitate administrative tasks such as booking rooms or managing calendars.

What are the advantages of LLM chatbots?

LLM chatbots offer a variety of advantages for companies and end users. The following are some general advantages of chatbots and then specific advantages of LLM chatbots compared to chatbots without LLMs:

General advantages of chatbots:

  1. Availability: Chatbots are available 24/7 and can answer user queries without interruption, which is particularly valuable outside of business hours.
  2. Scalability: Bots can handle thousands of requests at once, making them ideal for large companies or high-traffic events.
  3. Cost efficiency: Chatbots reduce the need for human staff and can significantly reduce the cost of customer support and care.
  4. Consistency: Bots provide consistent quality of answers and user experience, contributing to brand consistency.
  5. Data collection: Chatbots can collect valuable data about user interactions that can be analysed to improve products, services and customer experiences.

Advantages of LLM chatbots over traditional chatbots:

  1. Improved understanding: LLM chatbots, which are based on Large Language Models such as GPT, have a deeper understanding of language, which enables them to provide more natural and contextually relevant responses. They can better understand and respond to complex queries.
  2. Adaptability: LLM chatbots can adapt more quickly to new topics and queries based on their trained understanding of language and context, without the need for explicit programming for each new requirement. This makes the development process and the process of adapting bots significantly easier and faster.
  3. Personalisation: Thanks to advanced language comprehension, LLM chatbots can offer more personalised interactions by taking into account the tone, mood and previous interactions to make communication more individual. This greatly enhances the customer experience.
  4. Long-text generation capability: Unlike older models, which were mostly limited to generating short and simple texts, LLM chatbots are able to create more in-depth and informative content, making them useful for applications such as content creation, detailed product descriptions and educational purposes.
  5. Integrating external knowledge: LLM chatbots, especially those using RAG, can tap into company-specific data sources to inform and improve their responses. This enables them to deliver up-to-date, accurate and in-depth information that is a perfect fit for the company.

Who are LLM chatbots suitable for?

The main task of LLM chatbots is to answer questions or trigger predefined processes automatically. Consequently, LLM chatbots are suitable for all companies where employees have to answer similar questions repeatedly during their workflows. It should be noted that LLM chatbots can be used internally for employees or externally for customers. This means that an IT helpdesk or an HR department that regularly receives requests from internal employees can also be supported by an LLM chatbot. LLM chatbots for customers are mostly used in customer service or occasionally in marketing.

When is it worth using LLM chatbots?

Due to the simplified implementation of LLM chatbots and their greatly improved quality compared to rule- or intent-based systems, investing in LLM chatbots pays off sooner than it did a few years ago. In general, LLM chatbots are useful wherever the content for answering queries already exists in existing knowledge sources. Companies that fall into this category and receive a high number of queries on a daily basis should consider using an LLM chatbot.

Are there still chatbots without LLMs?

Chatbots without an integrated LLM are rarely implemented today. However, more and more hybrid forms are emerging. In many cases, these are chatbots that were initially developed without an LLM and without RAG and are now being retrofitted.

Personally, I find these mixed forms tricky. The chatbot then gives a mixture of fixed, predefined, impersonal answers and, at the same time, LLM-generated answers that are much more specific and personal. This mixture often creates a break in the customer experience.

What do companies need to consider when introducing and using LLM chatbots?

As mentioned above, the realisation and implementation of LLM chatbots is relatively simple and structured. However, the following points should be given special consideration.

  1. Data protection: Companies must know where the LLM chatbot stores its data, and this location must comply with the company’s compliance rules.
  2. Data sources: Companies must have clean and relevant data sources for the LLM chatbot. Many companies use their own website as a basis. Provided that this is properly managed, this does not present any challenges. However, if the website contains outdated data, companies must first clean it up.
  3. Training employees: The role of employees should not be ignored. Companies must provide their employees with sufficient training and also explain the background of the LLM chatbot.
  4. User experience: When using LLMs, many chatbots tend to provide very long and detailed answers. Companies need to find a good balance between the depth of the answer and the scope of the answer. This can vary depending on the request.

What risks do LLM chatbots pose?

  1. Quality issues: Even though LLM chatbots are generally given fixed rules of conduct in advance and companies can also limit the chatbot’s training knowledge, incorrect responses may still occur in rare cases. This cannot be completely ruled out, but it is constantly being improved.
  2. Lack of control: LLM chatbots generate a new response for each user query. Companies have no direct control over the chatbot at that moment. This makes it all the more important that the bot is sufficiently tested before it is published.
  3. Data protection and security: LLM chatbots store conversation data and other information. It is important to ensure that no data is passed on to third parties without consent and that the way in which data is stored complies with the company’s compliance requirements.

How do customers react to LLM chatbots?

Chatbots still have a somewhat negative image due to the poor quality of earlier rule-based bots. However, this negative attitude is steadily diminishing. Numerous best practices, such as that of Helvetia Switzerland, show customers and companies that LLM chatbots are of significantly higher quality and provide more accurate answers. Initial figures suggest that users are recognising this and that their aversion to chatbots is fading. Customers are increasingly willing to start a chat with an LLM chatbot and thus continuously gain new, positive experiences.

What are the best practices for LLM chatbots?

The LLM chatbot at Helvetia Insurance

The first LLM chatbot in Switzerland is called Clara and is from Helvetia Insurance. The LLM chatbot initially uses the information on the website to answer customers’ and potential customers’ questions about insurance. In further iterations, the insurance company has added knowledge and skills through further internal connections. You can find out more about the LLM chatbot at Helvetia in my interview with Florian Nägele about LLM chatbots in the insurance industry.

The LLM chatbot at Jumbo, the retail hobby market

For more than a year now, the Swiss DIY and hobby store Jumbo has been providing an LLM chatbot to advise website visitors. The bot’s task is to advise on products, and it is available via the website. Customers can ask questions about product details or request product recommendations, and the chatbot responds based on its own knowledge base. The knowledge base was compiled by the Jumbo-Digital team and roughly comprises the website content as well as further product detail documents. You can read more about JumBot in my post on LLM bots in retail.

You can find more examples of LLM chatbots in my post on the best practices of LLM chatbots.

Conclusion: LLM chatbots have changed the world of chatbots and voicebots

When I wrote my master’s thesis on chatbots almost 10 years ago, I had to do most of my experiments with rule-based chatbots. Often I even just used mockups, because implementing even simple chatbots was very time-consuming. LLM chatbots have changed the world of bots and will continue to do so. Implementation and deployment are significantly simplified by LLMs and RAG, and the quality has already increased many times over. In the long term, an LLM chatbot on a website or internally at a company will probably become a commodity and just as much a matter of course as having your own website.

And now?

If you would like to know more about LLM chatbots or even gain some initial experience yourself, please feel free to send me a message. You can reach me via WhatsApp or email.

Contact Sophie

By the way, this article is also available as a podcast episode

Attention! The podcast was created entirely by my AI assistant based on my article — no guarantee of correctness.

Listen to the podcast by Sophie’s AI Assistant
