Build Your First LLM App Today!

Sayali Shelke
3 min read · May 19, 2024



The most popular and widely known example of an LLM app is ChatGPT. Have you ever wondered how an app like this is made? Let me walk you through it! According to Alireza Goudarzi and Albert Ziegler, the process can be covered in five steps.

Let’s break it down!

  1. Focus on a Single Problem: When we try to tackle too many problems at once, we rarely get good results and often end up frustrated. The same goes for models. A model focused on a single, well-scoped problem learns faster, and progress is easier to see. For example, GitHub Copilot cleverly focuses on completing code within an IDE.
  2. Choose the Right Pre-trained LLM: If you want to build an LLM app for commercial purposes, weigh these factors when selecting a pre-trained model:
  • License: If your goal is to sell the app, make sure the model's license (or the API's terms of use) permits commercial use.
  • Model Size: LLMs can range from 1 billion to over 1.75 trillion parameters. For instance, GPT-4 is believed to have 1.7 trillion parameters, while the largest Llama-2 model has 70 billion parameters.
  • Model Performance: Customizing the model is important, but don’t choose one based solely on size or licensing. Evaluate how accurately, quickly, and consistently the model generates your desired output. To measure this before deployment, use offline evaluations.
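An offline evaluation can be as simple as scoring a candidate model's outputs against a fixed set of reference answers before any user ever sees it. The sketch below assumes a tiny illustrative dataset and exact-match scoring; real evaluations use larger benchmarks and richer metrics.

```python
# A minimal sketch of an offline evaluation: score a candidate model's
# outputs against reference answers. `model_outputs` stands in for
# whatever your candidate model actually returned; the dataset and the
# exact-match scoring rule here are illustrative assumptions.

def exact_match_accuracy(model_outputs, references):
    """Fraction of outputs that exactly match the expected answer."""
    matches = sum(out.strip() == ref.strip()
                  for out, ref in zip(model_outputs, references))
    return matches / len(references)

references = ["4", "Paris", "def add(a, b): return a + b"]
model_outputs = ["4", "paris", "def add(a, b): return a + b"]

print(exact_match_accuracy(model_outputs, references))  # 2 of 3 match
```

Running the same fixed dataset against every candidate model makes the comparison repeatable, which is exactly what an online test with live users cannot give you.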

3. Customize Your LLM: Customizing a pre-trained language model involves adapting it for specific tasks, such as generating text on a particular subject or in a distinct style. Here are some methods to customize your LLM:

  • In-context Learning: This involves providing the model with specific instructions or examples during inference, guiding it to produce a contextually relevant output.
  • Reinforcement Learning from Human Feedback (RLHF): RLHF combines reinforcement learning with human guidance: humans rank or rate model outputs, and those judgments train a reward model that steers the agent toward better responses. The goal is to generate factually accurate and engaging text, broadening the criteria for acceptable outputs beyond what strict supervised labels allow.
  • Fine-tuning: According to OpenAI, fine-tuning improves upon few-shot learning by training on many more examples than can fit in the prompt. This allows for better results across various tasks and reduces the need for extensive examples in the prompt, saving costs and enabling lower-latency requests.
  • RAG: According to IBM, RAG is an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information and to give users insight into LLMs’ generative process.
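In-context learning needs no training at all: you simply pack worked examples into the prompt at inference time. The sketch below only assembles the prompt text you would send to any LLM; the sentiment-classification task and `build_few_shot_prompt` helper are illustrative assumptions, and no real API is called.

```python
# A minimal sketch of in-context (few-shot) learning: steer the model by
# packing worked examples into the prompt at inference time. The task and
# examples are made up for illustration; no model is actually called.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Classify the sentiment as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [("I loved this movie!", "positive"),
            ("Total waste of time.", "negative")]
prompt = build_few_shot_prompt(examples, "The plot dragged on forever.")
print(prompt)
```

The model then continues the pattern, so the trailing "Sentiment:" cue invites it to emit just the label.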
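The RAG idea can be sketched in a few lines: retrieve the most relevant snippet from a knowledge base, then ground the prompt in it. Production systems use an embedding model and a vector database; the naive word-overlap retrieval below is a stand-in assumption to keep the example self-contained.

```python
# A toy sketch of RAG: retrieve the most relevant snippet from a small
# knowledge base and prepend it to the prompt. The word-overlap scoring
# is a stand-in for a real embedding model plus vector database.

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

knowledge_base = [
    "The LLM cache stores previous prompt/response pairs.",
    "The telemetry service records acceptance and retention events.",
]
query = "What does the telemetry service record?"
context = retrieve(query, knowledge_base)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
```

Because the retrieved context is fetched fresh on every request, the model can answer from up-to-date facts it was never trained on.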

4. Set Up Your App’s Architecture: Setting up an app requires different components, which can be divided into three categories:

  • User Input: Includes a user interface, a large language model (LLM), and an app hosting platform.
  • Input Enrichment and Prompt Construction: Encompasses your data source, embedding model, vector database, prompt construction and optimization tools, and data filters.
  • Efficient and Responsible AI Tools: Include an LLM cache, an LLM content classifier or filter, and a telemetry service to assess the output of your LLM application.
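Of the components above, the LLM cache is the simplest to picture: if the same prompt arrives twice, reuse the stored response instead of paying for a second model call. In the sketch below, `call_model` is a hypothetical stand-in for a real LLM API call.

```python
# A minimal sketch of an LLM cache: serve repeated prompts from memory
# to save latency and cost. `call_model` is a hypothetical placeholder
# for a real LLM API call, not any actual library function.

cache = {}

def call_model(prompt):
    return f"response to: {prompt}"  # placeholder for a real LLM call

def cached_completion(prompt):
    """Return a cached response if available, otherwise call the model."""
    if prompt not in cache:
        cache[prompt] = call_model(prompt)
    return cache[prompt]

first = cached_completion("Hello")
second = cached_completion("Hello")  # served from the cache this time
```

Real caches add eviction policies and expiry, since a stale cached answer can be worse than a slow fresh one.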

5. Evaluation: These evaluations are termed “online” because they assess the LLM’s performance during real user interaction. For instance, GitHub Copilot is evaluated online through its acceptance rate (how frequently a developer accepts a suggested completion) and its retention rate (how much of an accepted completion the developer keeps, and how heavily they edit it, over time).
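An acceptance rate like Copilot's can be computed from logged suggestion events. The event format below is an assumption for illustration; real telemetry pipelines are considerably more involved.

```python
# A minimal sketch of an online evaluation metric: compute an acceptance
# rate from logged suggestion events. The event schema is an assumption
# made up for this example.

def acceptance_rate(events):
    """Share of suggestions the developer accepted."""
    accepted = sum(1 for e in events if e["accepted"])
    return accepted / len(events)

events = [
    {"suggestion": "for i in range(10):", "accepted": True},
    {"suggestion": "while True: pass",    "accepted": False},
    {"suggestion": "return total",        "accepted": True},
]
print(acceptance_rate(events))  # 2 of 3 suggestions accepted
```

Tracking this number over time tells you whether a model change actually helped users, which offline benchmarks alone cannot confirm.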

Good luck with creating your first LLM app!
