Introduction and implementation of LLM chatbots — a guide

Sophie Hundertmark
10 min read · Oct 16, 2024

Chatbots and voicebots have undergone a significant change in quality and implementation methods through the integration of Large Language Models (LLMs). In the past, companies had to laboriously predefine every question and answer, which made the development process lengthy and inflexible. Today, LLMs and methods such as Retrieval-Augmented Generation (RAG) allow chat and voicebots to be trained quickly and efficiently, so that they can communicate in a very specific and targeted manner. This naturally also affects the implementation and introduction process of LLM chatbots. Even at the use case discovery stage, there are new requirements that mean the implementation process needs to be redefined in some places. In the following post, I’ll take you step by step through the planning, implementation and publishing of an LLM chatbot.

To recap

What are Large Language Models (LLMs)?

A Large Language Model (LLM) is an advanced machine learning model that specialises in understanding and generating human language. These models, which are based on deep neural network architectures such as transformers, are trained on gigantic amounts of text data. Through this training, they learn to recognise patterns, structures and the meanings behind words and sentences, which enables them to communicate in natural language.

What does retrieval-augmented generation (RAG) mean?

Retrieval-augmented generation (RAG) is a natural language processing method that combines an information retriever with a text generator to produce precise answers to user questions. The process begins with the retriever pulling relevant documents or data from a large database, based on how well they match the user’s question. The selected information is then passed to the generator, typically a transformer-based large language model, which uses it to formulate a coherent and informed response.
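The retrieve-then-generate flow described above can be sketched in a few lines. This is a deliberately minimal illustration: it scores documents by simple word overlap with the question, where a production system would use embeddings and a vector database, and the documents are invented examples.

```python
# Minimal sketch of the RAG flow: retrieve the best-matching documents,
# then build the prompt that the generator (an LLM) would receive.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the prompt for the generation step."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

docs = [
    "Our office is open Monday to Friday from 9 am to 5 pm.",
    "Shipping within Switzerland takes two to three working days.",
    "Returns are free within 30 days of purchase.",
]
context = retrieve("How long does shipping take?", docs)
print(build_prompt("How long does shipping take?", context))
```

The shipping document ranks first because it shares the word "shipping" with the question; a real retriever would match on meaning rather than exact words.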

What is an LLM chatbot?

LLM chatbots, or Large Language Model chatbots, are advanced AI systems that use generative AI to understand and generate human language. These intelligent chatbots are based on large language models such as GPT-4 or other open source models that have been trained with enormous amounts of text data to develop an in-depth understanding of context, syntax and semantics. This advanced language processing enables LLM chatbots to take on a variety of tasks, from answering questions and creating content to automating customer support.

In the context of LLM chatbots, methods such as Retrieval-Augmented Generation (RAG) play an important role. RAG combines the capabilities of a retrieval system, which retrieves relevant documents or information from a database, with the generation capability of a Large Language Model. This enables LLM chatbots to respond not only based on the trained model, but also to integrate specific, contextual information from company-owned sources to generate more precise and informed responses. Using RAG significantly expands the functionality of LLM chatbots by enabling companies to individually supplement the chatbot’s knowledge. Companies can even specify that the LLM chatbot should only access the content provided by the company. This ensures that the bot does not draw on unwanted or incorrect information.
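Restricting the bot to company-provided content, as described above, is typically done through the system prompt. A minimal sketch, in which the exact wording and the message format are illustrative assumptions rather than a specific vendor API:

```python
# Sketch: a system prompt that confines the chatbot to the retrieved
# context, with an explicit fallback when the context does not help.

SYSTEM_PROMPT = (
    "You are the company chatbot. Answer only with information contained "
    "in the provided context. If the context does not cover the question, "
    "say that you cannot answer and offer to connect the user to support."
)

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble a chat-completion style message list."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_messages("Returns are free within 30 days.", "Can I return my order?")
print(messages[0]["role"])  # system
```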

LLM chatbot: This is how a project progresses

Since the end of 2022, more and more LLM chatbots have been coming onto the market. Of course, every LLM chatbot project is unique, individual and requires different resources and priorities. However, in general, companies go through the following steps during the planning, realisation and implementation of an LLM chatbot.

Depending on the company’s budget and objectives, the scope of each phase can vary.

1. Clarify requirements and define use case

The initial step in planning an LLM chatbot has not changed at its core. Every project begins with a careful analysis of requirements and the identification of a suitable use case.

However, the fact that LLM chatbots can be implemented much more efficiently than earlier intent-based chatbots changes the cost-benefit analysis with regard to the question: ‘Does a chatbot make sense for our company?’ An increasing number of companies are now able to implement their first LLM chatbot with little effort.

The possibilities of LLMs and RAG technology have also brought about significant changes in terms of use cases. While there used to be a tendency to define smaller use cases for limited topics, LLM chatbots with the appropriate data can handle a wide range of topics simultaneously. The limitations of a use case only become relevant if third-party systems or additional partners need to be integrated in later process steps.

You may find inspiration for your use cases by taking a look at my collection of best practices for LLM chatbots.

2. Define technical and content requirements

Once the use case of the LLM chatbot has been outlined, the second step is to define the technical and content requirements in detail.

The content requirements include determining the knowledge the LLM chatbot should possess: which queries it must be able to answer and which additional tasks, such as changing an address, it should carry out. The chatbot’s tone of voice is also an important aspect to consider, including how much personality the LLM chatbot should take on and how that personality relates to the company and the user. Some LLM chatbots strongly align their personality with that of the user, others follow the company’s tone of voice, and some have almost no personality or tone of voice at all.

In terms of technical requirements, the question of data protection is crucial, including where the data used and stored by the LLM chatbot may be processed. Companies have the choice between their own servers, servers in their own country or servers worldwide, such as in America. Further technical requirements include integration with other systems, such as CRM systems or internal ticketing systems, as well as deciding where the chatbot will be implemented, such as on a website, in an app or in a closed login area. In addition, multilingualism is also an important aspect to consider.

It is crucial that this step is carried out carefully and in detail, as missing requirements can have a negative impact on the success of the project. It is therefore important to involve all relevant stakeholders in this project phase and to take their needs into account.

3. Selecting the language model and technology

Once all requirements have been precisely defined, the technology and language model are selected. The decision in favour of a particular language model is primarily based on the company’s data protection requirements. The higher the data protection requirements and the desire for in-house hosting, the more likely it is that an open-source language model will be chosen.

In terms of technology or technology partners, the first step is to check whether the company already has existing partnerships or contracts with technology providers that could potentially also implement the LLM chatbot. For example, many companies have partnerships with Microsoft partners that could potentially also implement the planned LLM chatbot. (If a company does not have such partnerships, I am available to recommend a variety of trustworthy partners and am happy to provide support in this regard).

It is possible that an official tender will be required as part of the technology selection process. This depends on both the company’s internal guidelines and the scope of the project.

4. Collecting and preparing data

Once the technology and language model have been decided upon, the data needs to be provided. Here, companies should work closely with the technology provider to ensure that the data is provided in a usable structure.

At the same time, companies must work very carefully here. If training data is forgotten or of poor quality, this will have a significant impact on the final quality of the LLM chatbot.
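A common step when preparing data in a usable structure for a RAG pipeline is splitting documents into overlapping chunks before they are indexed. The sketch below is generic; actual chunk sizes and overlap depend on the language model and the technology partner.

```python
# Split a document into word-based chunks with overlap, so that context
# spanning a chunk boundary is not lost during retrieval.

def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Return overlapping chunks of `size` words, stepping by size - overlap."""
    words = text.split()
    chunks = []
    step = size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + size])
        if chunk:
            chunks.append(chunk)
        if start + size >= len(words):
            break
    return chunks

sample = " ".join(f"word{i}" for i in range(120))
print(len(chunk_text(sample)))  # 3
```

With 120 words, a chunk size of 50 and an overlap of 10, the text yields three chunks, each sharing its first ten words with the end of the previous one.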

You can read more about data provision in my post LLM-Chatbots — An introduction to the new world of bots.

5. Create prompts

In addition to the data that the LLM chatbot is to learn from, the prompts also play an important role in terms of the performance and behaviour of the final chatbot. With the help of prompts, companies can define the chatbot’s behaviour. This includes topics such as tonality, but also how it behaves in different situations. For example, companies can use prompts to determine how the LLM chatbot should deal with insults.

Prompt-engineering skill becomes evident when creating these prompts. Again, the more precise and accurate the prompts are, the more likely the LLM chatbot is to behave as intended. However, it should also be noted that even with good, well-tested prompts, the LLM chatbot will never be 100% under control: outliers are always possible.
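A behaviour-defining prompt of the kind described above might look as follows. The wording and the insurance-company framing are invented for illustration; every company would phrase its own rules.

```python
# Illustrative behaviour prompt: tonality plus an explicit rule for
# handling insults, as discussed in the text.

BEHAVIOUR_PROMPT = """\
You are the support chatbot of an insurance company.
Tonality: friendly, professional, concise; address the user formally.
If the user insults you, stay calm, do not respond in kind, and steer
the conversation back to their concern.
If you are unsure of an answer, say so rather than guessing."""

# Even careful prompts do not give 100% control over the model,
# which is why they must be combined with systematic testing.
print("insult" in BEHAVIOUR_PROMPT.lower())  # True
```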

6. Implementation: Enriching and fine-tuning the LLM with prompts and data

As soon as the data is ready and the prompts have been defined, implementation can begin. This process step is usually the responsibility of the technology partner. Of course, it is also possible for companies to work without an additional technology partner and to connect to the language model themselves.

Generally speaking, if the data and prompts have been clearly defined and prepared in advance, the actual implementation should be relatively quick. The next steps then require more resources and time.

7. Testing and optimising

This phase shows the quality of the preparatory work. The LLM chatbot must be thoroughly tested and probably further optimised. Functional tests are usually unproblematic; however, content-related tests are much more critical and complex. Here, it is important to evaluate how the LLM chatbot behaves in different scenarios and whether it provides the expected answers in an appropriate manner. If the results are not satisfactory, optimisation measures must be defined. It is often necessary to adjust the prompts or add additional content. If necessary, training content must also be corrected or removed if it is misleading or incorrect.
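Content tests of the kind described above can be partly automated: each test case pairs a question with keywords the answer must contain, and the pass rate shows where optimisation is needed. The `fake_bot` stub stands in for the real chatbot; names and cases are illustrative.

```python
# Sketch of a content regression test for an LLM chatbot.

def evaluate(answer_fn, cases: list[tuple[str, list[str]]]) -> float:
    """Return the share of cases whose answer contains all expected keywords."""
    passed = 0
    for question, keywords in cases:
        answer = answer_fn(question).lower()
        if all(k.lower() in answer for k in keywords):
            passed += 1
    return passed / len(cases)

def fake_bot(question: str) -> str:
    # Stand-in for the real chatbot under test.
    return "Shipping takes two to three working days."

cases = [
    ("How long does shipping take?", ["two", "three", "days"]),
    ("Can I return my order?", ["30 days"]),
]
print(evaluate(fake_bot, cases))  # 0.5 — one of two cases passes
```

Failed cases then point directly at the prompts or training content that need adjusting in the next optimisation round.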

This testing phase is repeated until the LLM chatbot has reached a satisfactory quality.

8. Integrate the chatbot into existing channels (chat or website)

After the testing and optimisation is complete, the LLM chatbot is ready to be integrated into the final chat channels. Most companies implement their LLM chatbot directly on their website or in their own app. This integration takes place during this phase.

9. Publish the LLM chatbot

After completion, the LLM chatbot is ready to be published. Companies can decide individually whether they prefer a soft launch or a launch with extensive publicity. In the case of a soft launch, the LLM chatbot is usually put online without fanfare and with little promotion. Other companies may even send out a press release to mark the occasion.

Regardless of whether a soft launch or a big launch is chosen, it is essential that companies inform their employees about the new LLM chatbot. This includes detailed information on how the LLM chatbot works and what it can do.

10. Get feedback, optimise and further develop the LLM chatbots

If you think that the last step of an LLM project is publishing, you are mistaken. After publication, the work continues. The LLM chatbot must be continuously monitored and user feedback must be carefully reviewed and analysed. In most projects, this phase leads to further optimisations and adjustments to the LLM chatbot, despite previous test phases.

In addition to the optimisations that contribute to improving the existing use case, the further development of the chatbot should also be considered after publication. It is likely that extensions to the use case or additional functionalities will be identified that the LLM chatbot could take on in the future.

LLM Chatbot Project: Frequently Asked Questions

How long does an LLM chatbot project take?

The duration of an LLM chatbot project depends on the scope of the use case. As a rule, companies should plan for at least two months. Very fast companies can manage it in a month, but this is the exception.

What does an LLM chatbot cost?

The costs of an LLM chatbot are usually manageable. Small projects can start at as little as EUR 10,000. However, it is important to bear in mind that the LLM chatbot also has operating costs, which vary depending on the language model and technology partner. Furthermore, additional resources are of course required for further developments.
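The operating costs mentioned above are usually driven by token usage. A back-of-the-envelope sketch, in which every number is an invented assumption (token prices and volumes vary widely by model and provider):

```python
# Rough monthly operating-cost estimate for an LLM chatbot.
# All figures are hypothetical placeholders for illustration.

chats_per_month = 5_000
tokens_per_chat = 2_000          # prompt + retrieved context + answer
price_per_1k_tokens_eur = 0.01   # hypothetical blended rate

monthly_cost = chats_per_month * tokens_per_chat / 1_000 * price_per_1k_tokens_eur
print(f"{monthly_cost:.2f} EUR per month")  # 100.00 EUR per month
```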

Are there also LLM voicebots?

LLM voicebots are like LLM chatbots, but they work with spoken language. The process of an LLM voicebot project is almost identical to that of an LLM chatbot project.

How secure are LLM chatbots?

The security of LLM chatbots depends on the language model chosen and the technology provider. In principle, LLM chatbots can be implemented very securely. There are already some LLM chatbots in the financial sector. These would not exist if security were not guaranteed.

How do customers react to LLM chatbots?

Initial evaluations show positive customer reactions to LLM chatbots. Customers are noticing that LLM and RAG technology has improved chatbot quality; they are using LLM chatbots more and more, and the complexity of their queries is increasing.

Where can I see examples of LLM chatbots?

In my post on the collection of best practices of LLM chatbots, you will find an overview of various LLM chatbots sorted by industry.

And when are you starting?

Do you feel like starting your own LLM chatbot project? Or do you have any further questions first?

In both cases, feel free to contact me. I have already supported and managed many LLM chatbot projects and would be very happy to support you as well.

Just send me a message — preferably via WhatsApp message or as an email.

Contact Sophie

By the way, this post is also available as a podcast episode

Attention: the podcast was created entirely by my AI assistant based on this post, so I cannot guarantee that its content is free of errors.

Listen to the podcast by Sophie’s AI Assistant

*I used the AI technology of SwissGPT to optimise the language of this post.

Written by Sophie Hundertmark

AI and Bots @ Speaker, Researcher and Consultant