
Empowering Financial Analysis with GPT and LangChain

Updated: Feb 12



Financial analysis using LangChain

In today's data-driven financial landscape, the ability to extract valuable insights from vast amounts of information is crucial for making informed decisions. Traditional methods of financial analysis often require significant time and resources, leading to delays in decision-making. However, with the emergence of LangChain, a new era of intelligent financial analysis has dawned.


In this blog, we will explore the immense potential of using GPT (Generative Pre-trained Transformer) models in conjunction with LangChain to analyze financial reports. We will walk through the integration of GPT and LangChain within a Streamlit app, enabling dynamic analysis of financial documents through user-input questions.


LangChain


LangChain is a framework for developing applications powered by language models. Its two major advantages are:

  1. It easily connects a language model to other sources of data.

  2. It allows a language model to interact with its environment.

LangChain provides multiple tools for working with LLMs. The ones used in this blog are:

1. Indexes: Indexes refer to ways of structuring documents so that LLMs can best interact with them. Once a document is loaded, its text is split into smaller chunks. At query time, only the chunks most relevant to the input question, ranked by similarity score, are retrieved and passed to the language model.

2. Models: LangChain distinguishes two model types. The first is the LLM itself: here we define the model for our question-answering use case, in our case GPT-4. The second is the text embedding model: it computes embeddings for the document chunks, which are later used to retrieve similar documents.

3. Prompts: a prompt is the input given to the model to generate a response; you typically craft it to guide the model's response in a specific direction. The prompt can be a question, an incomplete sentence, or a statement that sets the context for the generated text. The model then uses the prompt as a starting point and completes the text based on the patterns and knowledge encoded in its training.

4. Chains: a chain combines a PromptTemplate, a model, and guardrails to take user input, format it accordingly, pass it to the model, get a response, and then validate and, if necessary, fix the model output.
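The chain idea above can be sketched in plain Python. This is a conceptual illustration only, not the actual LangChain API: the template text, the stand-in model function, and the validation step are all invented for the example.

```python
# Conceptual sketch of a "chain": format a prompt template with user
# input, pass it to a model, then validate the output.

PROMPT_TEMPLATE = (
    "You are analyzing a financial document.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM call such as GPT-4.
    return "  The net income was $2,872 million.  "

def validate(output: str) -> str:
    # Guardrail step: here we simply strip stray whitespace.
    return output.strip()

def run_chain(context: str, question: str) -> str:
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    return validate(fake_model(prompt))

answer = run_chain("Q3 2022 report ...", "What is the net income?")
```

In LangChain, the same wiring (template, model, output validation) is handled for you by the chain classes; the sketch only shows the data flow.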


LLM


LLMs (Large Language Models) are powerful language models like GPT that have been pre-trained on massive amounts of text data, enabling them to understand and generate human-like text. LLMs enable the processing and interpretation of complex financial data, including reports, statements, and market trends. Their ability to generate coherent and insightful text makes them invaluable tools for extracting key information, identifying patterns, and facilitating decision-making in the financial domain.



Streamlit App


We will create a Streamlit app to demonstrate the use case, using Colab as the execution environment. The user will be able to upload financial documents and ask relevant questions about them.


Approach: The development process follows a structured three-step approach:

  1. Data Input and Preprocessing:

    • The app allows users to upload their financial documents directly through the Streamlit interface.

    • Once uploaded, the app reads and preprocesses the files, ensuring the data is in an optimal format for further analysis.

    • The preprocessed data is then passed on to the subsequent step.


2. Model Integration:

  • This step focuses on creating a vectorstore and loading the relevant language model, which powers the analysis.

  • By leveraging the loaded model, the app generates responses based on the user's input and the uploaded financial data.

  • The chain then produces a comprehensive analysis of the financial documents, surfacing valuable insights.


3. Output Postprocessing and Visualization:

  • The final step involves refining the generated output.

  • The processed output is then displayed on the Streamlit interface. Users can view the history and download the conversation.


1. Read the PDF files and preprocess them


Within the "read_files" function, we have implemented various methods to read the uploaded file, catering to different file formats and ensuring compatibility with multiple file types. The function also segments each document by splitting it into smaller chunks of 1500 tokens with an overlap of 200. This segmentation lets us work within GPT's context-length limit while still achieving the desired analysis outcomes.
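The splitting step can be sketched as follows. This is a simplified illustration, not the actual "read_files" implementation: tokens are approximated by whitespace-separated words, whereas a real splitter would count model tokens.

```python
def split_into_chunks(text, chunk_size=1500, overlap=200):
    """Split text into overlapping chunks of roughly chunk_size tokens.

    Tokens are approximated by whitespace-separated words; consecutive
    chunks share `overlap` tokens so context is not lost at boundaries.
    """
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

With the defaults, a 4000-token document yields three chunks, and each chunk repeats the last 200 tokens of the previous one.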


2. Build the model with OpenAI embeddings and GPT-4

Within the "model" function, we employ FAISS to create a vectorstore for the document chunks, which is then stored locally. This eliminates the need to recreate the vectorstore for every question, improving efficiency. For the prompt, a simple, straightforward statement is used, indicating that the document is financial in nature and requires analysis. For question answering, we leverage RetrievalQAWithSourcesChain, which uses an index for document lookup and provides answers along with their sources. While GPT-4 is used as the primary model, alternative models such as GPT-3.5 can be employed in the same way.
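Under the hood, the vectorstore lookup boils down to a nearest-neighbour search over embedding vectors. The toy example below sketches that idea in plain Python with made-up 3-dimensional vectors; the actual app uses OpenAIEmbeddings and FAISS rather than this hand-rolled cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, chunk_vecs, k=2):
    """Return indices of the k chunks most similar to the query."""
    ranked = sorted(
        range(len(chunk_vecs)),
        key=lambda i: cosine_similarity(query_vec, chunk_vecs[i]),
        reverse=True,
    )
    return ranked[:k]

# Toy embeddings; in practice these come from a text embedding model.
chunks = ["revenue table", "net income statement", "cover page"]
vecs = [[0.9, 0.1, 0.0], [0.8, 0.6, 0.0], [0.0, 0.1, 0.9]]
query = [0.7, 0.7, 0.0]  # e.g. the embedded question "what is the net income?"
relevant = [chunks[i] for i in top_k(query, vecs)]
```

Only the retrieved chunks (here, the two most similar ones) are passed to GPT-4 along with the question, which is how the app stays within the model's context limit.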

3. Postprocess and display the output

When a user poses a question, we retrieve the corresponding output from the chain and present it within the Streamlit interface. We also use session state to display the conversation history, allowing users to review the entire interaction, and we provide the option to download the complete conversation so users can save and access it at their convenience.
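The history and download logic can be sketched without Streamlit. In the real app the history list lives in st.session_state so it survives Streamlit reruns; the helper names below are invented for the illustration.

```python
def append_turn(history, question, answer):
    """Record one question/answer turn of the conversation."""
    history.append({"question": question, "answer": answer})
    return history

def transcript(history):
    """Render the history as plain text for a download button."""
    lines = []
    for turn in history:
        lines.append("Q: " + turn["question"])
        lines.append("A: " + turn["answer"])
    return "\n".join(lines)

history = []
append_turn(history, "What is the net income?", "$2,872 million.")
append_turn(history, "For which period?", "Three months ended Sep 30, 2022.")
text = transcript(history)
```

In Streamlit, the transcript string would be handed to st.download_button so the user can save the conversation as a text file.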


To get the Streamlit interface in Colab, we can use localtunnel: running the "!npx localtunnel --port" command (pointed at the port Streamlit is serving on) prints a URL. Upon clicking the URL, you will be prompted to provide an endpoint IP address. To obtain the IP address, run "!curl ipv4.icanhazip.com" within the notebook. Once you enter the IP address and submit it, you will be redirected to the Streamlit app. The following images depict the step-by-step process.


Public IP on Colab using localtunnel

Accessing the public IP on Colab

LangChain output

Example:

Documents uploaded: AWS quarterly revenue reports for Q1 2023 and Q3 2022.


Question 1: What is the net income?

GPT Answer: The earnings for the respective periods are as follows:

  1. For the three months ended September 30, 2022, the net income was $2,872 million.

  2. For the nine months ended September 30, 2022, the net income (loss) was $(3,000) million.

  3. For the three months ended March 31, 2023, the net income was $3,172 million. Source:


Conclusion

The development of a Streamlit app for financial analysis, powered by GPT and LangChain, opens up new possibilities for efficient analysis of financial reports. By showcasing the app's code and working, we have demonstrated how this integrated solution can streamline the analysis process, extract key insights, and provide a user-friendly interface. As we continue to refine and enhance such applications, we can expect further advancements in automating financial analysis, improving decision-making, and uncovering valuable opportunities in the dynamic world of finance.

Future Work

  1. The PDF reader sometimes misses numbers in tables; we can use other libraries and modify the "read_files" function to better handle PDFs.

  2. GPT-4 gives better responses on financial documents than GPT-3.5.

