Introduction to LangChain

Aishwarya Valse
4 min read · Oct 30, 2023


Hello, dear readers! I’m excited to introduce you to the topic of my latest blog — LangChain. In today’s fast-paced world, where time is of the essence, I’ve crafted a succinct guide to LangChain along with a hands-on introduction in Python. In just a matter of minutes, you’ll gain a solid understanding of LangChain, making this an ideal option for those who don’t have the luxury of poring over extensive documentation. So, let’s dive in, demystify the fundamentals of LangChain, and get hands-on with Python to unlock its potential. Stay tuned for a quick yet thorough journey into this exciting domain!

https://assets.zilliz.com/Conversational_Memory_in_Lang_Chain_7c1b4b7ba9.png

· What is LangChain: LangChain is a framework that lets you integrate large language models (LLMs) into your projects, harnessing their extraordinary capabilities

· Why use LangChain: One of the most common uses of LangChain is customer support chatbots. It also powers e-commerce platforms that recommend products so accurately that customers can’t resist making a purchase

· Building an application: LangChain provides many modules that can be used to build language model applications

Modules can be used stand-alone in simple applications, and they can be combined for more complex use cases

Below are the three modules of LangChain:

1. LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them

2. Prompt Template: This provides instructions to the language model and controls what it outputs

Most LLM applications do not pass user input directly into an LLM. Instead, they add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand

3. Output Parser: This translates the raw response from the LLM to a more workable format

Output parsers convert the raw output of an LLM into a format that can be used downstream
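To make the three modules concrete, here is a minimal pure-Python sketch of the pipeline. Note that this is not the real LangChain API — the template string, the fake model, and the parser are all illustrative stand-ins for the Prompt Template, LLM, and Output Parser roles:

```python
# Toy sketch of the Prompt Template -> LLM -> Output Parser pipeline.
# None of this is the actual LangChain API; it only mirrors the idea.

def prompt_template(user_input: str) -> str:
    # Wrap raw user input in a larger prompt that adds task context.
    return (f"List three colors related to: {user_input}. "
            "Answer as a comma-separated list.")

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; always returns the same raw string.
    return "red, orange, yellow"

def output_parser(raw: str) -> list[str]:
    # Convert the raw LLM text into a workable Python list.
    return [item.strip() for item in raw.split(",")]

prompt = prompt_template("fire")
raw = fake_llm(prompt)
result = output_parser(raw)
print(result)  # ['red', 'orange', 'yellow']
```

In a real LangChain app, each of these functions would be replaced by the corresponding LangChain object, but the flow of data stays the same: user input → template → model → parser.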

LangChain works with two types of language models:

LLM: This is a language model that takes a string as input and returns a string

ChatModel: This is a language model that takes a list of messages as input and returns a message

A ChatMessage has two required components:

· Content- This is the content of the message

· Role- This is the role of the entity the ChatMessage is coming from

LangChain provides several objects to easily distinguish between different roles:

· HumanMessage: A ChatMessage coming from a human/user

· AIMessage: A ChatMessage coming from the AI/assistant

· SystemMessage: A ChatMessage coming from the system

· FunctionMessage: A ChatMessage coming from a function call
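Here is a rough sketch of how these role-tagged messages could be modeled in plain Python. LangChain ships its own message classes, so the dataclass and helper functions below are purely illustrative, not the library's real implementation:

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    content: str  # the text of the message
    role: str     # who the message is coming from

# Illustrative stand-ins for LangChain's role-specific message types.
def HumanMessage(content: str) -> ChatMessage:
    return ChatMessage(content, role="human")

def AIMessage(content: str) -> ChatMessage:
    return ChatMessage(content, role="ai")

def SystemMessage(content: str) -> ChatMessage:
    return ChatMessage(content, role="system")

messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("What is LangChain?"),
]
print(messages[1].role)  # human
```

The point is simply that every message pairs its content with a role, which is exactly the two required components listed above.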

The standard interface that LangChain provides has two methods:

· predict: Takes in a string, returns a string

· predict_messages: Takes in a list of messages, returns a message
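Sketching that interface in plain Python (again, a toy stand-in rather than LangChain's actual classes) shows how one model can expose both methods:

```python
class ToyChatModel:
    """Illustrative model exposing the two standard methods."""

    def predict(self, text: str) -> str:
        # Takes in a string, returns a string.
        return f"Echo: {text}"

    def predict_messages(self, messages: list[dict]) -> dict:
        # Takes in a list of {"role", "content"} messages, returns a message.
        last = messages[-1]["content"]
        return {"role": "ai", "content": f"Echo: {last}"}

model = ToyChatModel()
print(model.predict("hi"))  # Echo: hi
reply = model.predict_messages([{"role": "human", "content": "hi"}])
print(reply["content"])  # Echo: hi
```

Having both methods on one interface is what lets LangChain swap an LLM for a ChatModel (or vice versa) without rewriting the surrounding application.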

That’s the basics! We’ve just looked at how to build the central parts of a LangChain app. Keep in mind, there’s a lot of depth to each of these pieces (LLMs, prompts, and output parsers), and there are many more components to discover too

Here are links to my previous blogs:

