How to return sources

Often in Q&A applications it's important to show users the sources that were used to generate the answer. The simplest way to do this is for the chain to return the Documents that were retrieved in each generation.

We'll be using the LLM Powered Autonomous Agents blog post by Lilian Weng as the retrieval content for this walkthrough.

Setup

Dependencies

We'll use an OpenAI chat model, OpenAI embeddings, and an in-memory vector store in this walkthrough, but everything shown here works with any ChatModel or LLM, Embeddings, and VectorStore or Retriever.

We'll use the following packages:

npm install --save langchain @langchain/openai cheerio

We need to set the OPENAI_API_KEY environment variable:

export OPENAI_API_KEY=YOUR_KEY
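
Optionally, you can fail fast if the key is missing. A minimal sketch, assuming a Node.js environment:

// Guard: throw early if the API key was not set in the environment.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("The OPENAI_API_KEY environment variable is not set.");
}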

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple LLM calls. As these applications get more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.

Note that LangSmith is not required, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=YOUR_KEY

Chain without sources

Here is the Q&A app we built over the LLM Powered Autonomous Agents blog post by Lilian Weng in the Quickstart.

import "cheerio";
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { formatDocumentsAsString } from "langchain/util/document";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const loader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/"
);

const docs = await loader.load();

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splits = await textSplitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(
  splits,
  new OpenAIEmbeddings()
);

// Retrieve and generate using the relevant snippets of the blog.
const retriever = vectorStore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);

await ragChain.invoke("What is task decomposition?");
"Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 208 more characters

Adding sources

With LCEL it's easy to return the retrieved documents:

import {
  RunnableMap,
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const ragChainFromDocs = RunnableSequence.from([
  RunnablePassthrough.assign({
    context: (input) => formatDocumentsAsString(input.context),
  }),
  prompt,
  llm,
  new StringOutputParser(),
]);

let ragChainWithSource = new RunnableMap({
  steps: { context: retriever, question: new RunnablePassthrough() },
});
ragChainWithSource = ragChainWithSource.assign({ answer: ragChainFromDocs });

await ragChainWithSource.invoke("What is Task Decomposition");
{
  question: "What is Task Decomposition",
  context: [
    Document {
      pageContent: "Fig. 1. Overview of a LLM-powered autonomous agent system.\n" +
        "Component One: Planning#\n" +
        "A complicated ta"... 898 more characters,
      metadata: {
        source: "https://lilianweng.github.io/posts/2023-06-23-agent/",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: 'Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are'... 887 more characters,
      metadata: {
        source: "https://lilianweng.github.io/posts/2023-06-23-agent/",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "Agent System Overview\n" +
        " \n" +
        " Component One: Planning\n" +
        " "... 850 more characters,
      metadata: {
        source: "https://lilianweng.github.io/posts/2023-06-23-agent/",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "Resources:\n" +
        "1. Internet access for searches and information gathering.\n" +
        "2. Long Term memory management"... 456 more characters,
      metadata: {
        source: "https://lilianweng.github.io/posts/2023-06-23-agent/",
        loc: { lines: [Object] }
      }
    }
  ],
  answer: "Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. I"... 256 more characters
}
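
Since the retrieved Documents now come back alongside the answer, downstream code can display them directly. A minimal sketch, using the ragChainWithSource chain above and assuming (as in the output shown) that the loader populated each document's metadata.source field:

const result = await ragChainWithSource.invoke("What is Task Decomposition");

// Deduplicate the source URLs of the retrieved documents.
const sources = [...new Set(result.context.map((doc) => doc.metadata.source))];

console.log(result.answer);
console.log(sources);
// Every chunk in this walkthrough comes from the same blog post, so this prints
// [ "https://lilianweng.github.io/posts/2023-06-23-agent/" ]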

Check out the LangSmith trace

