LLM Strategy and Implementation
Background
We aim to provide the best experience for people seeking public health information about services and the organizations that offer them. By utilizing a Large Language Model (LLM), we hope to add chat and speech functionality that gives our users more contextual results and feedback.
Purpose of the document
This document serves as a comprehensive guide outlining our strategic approach to implementing the LLM integration.
Flow
We are optimizing our search functionality by adding an LLM layer between the client and the data in our knowledge graph. This layer handles embedding the query and parsing the retrieved response. The ideal flow (sketched in the code example after these steps) is:
1. The client sends the request with the query value.
2. The LLM layer embeds the query, turning it into a vector.
3. Our algorithm compares the query vector against the data in our knowledge graph, ordering results by relevance, i.e. the distance between the vectors.
4. The LLM layer converts the retrieved data back into natural language.
5. The client displays the result to the user.
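Below is a minimal sketch of this flow, assuming the knowledge graph lives in Neo4j with a vector index and that the OpenAI Python client provides both the embedding and the chat model. The index name (`service_index`), node label and properties, model names, and connection details are placeholders for illustration, not decisions recorded in this document.

```python
# Minimal end-to-end sketch of the flow above. All names marked as placeholders
# are assumptions made for this example only.
from neo4j import GraphDatabase
from openai import OpenAI

NEO4J_URI = "neo4j://localhost:7687"   # placeholder connection details
NEO4J_AUTH = ("neo4j", "password")
client = OpenAI()                      # reads OPENAI_API_KEY from the environment


def answer(query: str, top_k: int = 5) -> str:
    # 1. Embed the query: turn it into a vector.
    embedding = client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding

    # 2. Compare against the knowledge graph: Neo4j's vector index orders
    #    nodes by similarity (distance between the vectors).
    with GraphDatabase.driver(NEO4J_URI, auth=NEO4J_AUTH) as driver:
        records, _, _ = driver.execute_query(
            """
            CALL db.index.vector.queryNodes('service_index', $k, $embedding)
            YIELD node, score
            RETURN node.name AS name, node.description AS description, score
            """,
            k=top_k,
            embedding=embedding,
        )

    # 3. Convert the retrieved data back into natural language for the client.
    context = "\n".join(f"- {r['name']}: {r['description']}" for r in records)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Answer using only the public health services listed below.\n"
                + context,
            },
            {"role": "user", "content": query},
        ],
    )
    return completion.choices[0].message.content
```

The same pattern applies with other embedding providers or vector-capable stores; only the embedding call and the similarity query would change.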

Expected Outcome
Guide
We will be using the Neo4j & LLM Fundamentals course for the implementation.