# Q&A Pipeline Deployment on GKE for Scalability with LlamaIndex and Qdrant 🚀
This repository contains a full Q&A pipeline built with the LlamaIndex framework, Qdrant as the vector database, and a FastAPI app deployed on Google Kubernetes Engine (GKE) via a Dockerfile. Python files from my repositories are loaded into the vector database, and the FastAPI app processes the requests. The main goal is to provide fast access to your own code, enabling the reuse of functions.
For a detailed project description, refer to this Medium article.
## Main Steps
- Data Ingestion: Load data from GitHub repositories.
- Indexing: Use SentenceSplitter to split the documents into nodes for indexing.
- Embedding: Implement FastEmbedEmbedding.
- Vector Store: Use Qdrant to store the embedded nodes and their metadata.
- Query Retrieval: Implement RetrieverQueryEngine.
- FastAPI and GKE: Handle requests via the FastAPI app deployed on GKE.
- Streamlit: UI component.
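The sketch below shows how these steps could fit together with LlamaIndex. It is illustrative only: the module paths assume a recent `llama-index` release, and names like `my-repo` and the embedding model are placeholders, so the actual scripts in this repository may differ.

```python
# Illustrative end-to-end sketch of the ingestion and retrieval steps
# (module paths assume a recent llama-index; "my-repo" is a placeholder).
import os

import qdrant_client
from llama_index.core import Settings, StorageContext, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.fastembed import FastEmbedEmbedding
from llama_index.readers.github import GithubClient, GithubRepositoryReader
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Embedding: FastEmbedEmbedding (this model produces 384-dimensional vectors)
Settings.embed_model = FastEmbedEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Data ingestion: load the .py files of a GitHub repository
github_client = GithubClient(os.environ["ACCESS_TOKEN"])
documents = GithubRepositoryReader(
    github_client=github_client,
    owner=os.environ["GITHUB_USERNAME"],
    repo="my-repo",  # placeholder repository name
    filter_file_extensions=([".py"], GithubRepositoryReader.FilterType.INCLUDE),
).load_data(branch="main")

# Vector store: nodes and metadata go into the Qdrant collection
client = qdrant_client.QdrantClient(
    url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"]
)
vector_store = QdrantVectorStore(client=client, collection_name=os.environ["COLLECTION_NAME"])
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Indexing: SentenceSplitter chunks the documents into nodes before embedding
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    transformations=[SentenceSplitter(chunk_size=1024, chunk_overlap=20)],
)

# Query retrieval: as_query_engine builds a RetrieverQueryEngine under the hood
query_engine = index.as_query_engine(similarity_top_k=3)
print(query_engine.query("Which function loads the data from GitHub?"))
```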
Feel free to ⭐ and clone this repo 😉
## Tech Stack

## Project Structure
The project has been structured with the following files:
- `.github/workflows`: CI/CD pipelines
- `tests`: unit tests
- `Dockerfile`: Dockerfile
- `Makefile`: install requirements, formatting, linting, testing, and clean up
- `app.py`: FastAPI app
- `pyproject.toml`: linting and formatting using ruff
- `create_qdrant_collection.py`: script to create the collection in Qdrant
- `deploy-gke.yaml`: Kubernetes deployment manifest
- `kustomization.yaml`: Kustomize deployment configuration
- `requirements.txt`: project requirements
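Since the ingestion step writes to an existing collection, `create_qdrant_collection.py` must run first. A minimal sketch of what it could look like, assuming the standard `qdrant-client` API; the 384-dimensional vector size matches the FastEmbed model used in the sketch above and is an assumption:

```python
# Illustrative sketch: create the Qdrant collection the pipeline writes to.
# The vector size must match the embedding model (384 for BAAI/bge-small-en-v1.5).
import os

from qdrant_client import QdrantClient, models

client = QdrantClient(url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"])
client.create_collection(
    collection_name=os.environ["COLLECTION_NAME"],
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)
```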
## Project Set Up
The Python version used for this project is Python 3.10. You can follow along with the Medium article.
- Clone the repo (or download it as a zip file):

  ```bash
  git clone https://github.com/benitomartin/rag-aws-qdrant.git
  ```

- Create the virtual environment named `main-env` using Conda with Python version 3.10:

  ```bash
  conda create -n main-env python=3.10
  conda activate main-env
  ```

- Execute the `Makefile` script and install the project dependencies included in the `requirements.txt`:

  ```bash
  pip install -r requirements.txt
  ```

  or

  ```bash
  make install
  ```

- You can test the app locally by running the command below, then opening one of the addresses shown in the terminal (a hypothetical sketch of `app.py` is shown after this list):

  ```bash
  uvicorn app:app --host 0.0.0.0 --port 8000
  ```
- Create a GCP account, a project, and a service account key, and activate the GKE API

- Make sure the `.env` file is complete:

  ```
  OPENAI_API_KEY=
  QDRANT_API_KEY=
  QDRANT_URL=
  COLLECTION_NAME=
  ACCESS_TOKEN=
  GITHUB_USERNAME=
  ```

- Add the following secrets into GitHub:

  ```
  OPENAI_API_KEY
  QDRANT_API_KEY
  QDRANT_URL
  COLLECTION_NAME
  GKE_SA_KEY
  GKE_PROJECT # PROJECT_ID
  ```

- Be sure to authenticate in GCP:

  ```bash
  gcloud auth login
  gcloud config set project PROJECT_ID
  ```

- Create the Kubernetes cluster:

  ```bash
  gcloud container clusters create llama-gke-cluster \
      --zone=europe-west6-a \
      --num-nodes=5 \
      --enable-autoscaling \
      --min-nodes=1 \
      --max-nodes=10 \
      --machine-type=n1-standard-4 \
      --enable-vertical-pod-autoscaling
  ```

  After creation, check the nodes:

  ```bash
  kubectl get nodes
  ```
- Push the GitHub Actions workflows to start the deployment

- Verify Kubernetes is running after deployment:

  ```bash
  kubectl get po
  kubectl get svc
  ```

  Under `svc`, the external IP is the endpoint (34.65.3.225), which can be added in the Streamlit app

- Check some pods and logs:

  ```bash
  kubectl logs llama-gke-deploy-8476f496bc-gxhms
  kubectl describe pod llama-gke-deploy-8476f496bc-gxhms
  ```

- Clean up to avoid costs by deleting the cluster and the Docker image:

  ```bash
  gcloud container clusters delete llama-gke-cluster --zone=europe-west6-a
  kubectl delete deployment llama-gke-deploy
  ```
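As referenced in the local-testing step above, here is a hypothetical minimal shape of `app.py`. The `/query` path, the request model, and the engine wiring are assumptions for illustration; the sketch rebuilds the index from the existing Qdrant collection rather than re-ingesting the data.

```python
# Hypothetical sketch of app.py (illustrative; the real file may differ).
import os

import qdrant_client
from fastapi import FastAPI
from pydantic import BaseModel
from llama_index.core import Settings, VectorStoreIndex
from llama_index.embeddings.fastembed import FastEmbedEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Rebuild the index from the existing Qdrant collection (no re-ingestion)
Settings.embed_model = FastEmbedEmbedding(model_name="BAAI/bge-small-en-v1.5")
client = qdrant_client.QdrantClient(
    url=os.environ["QDRANT_URL"], api_key=os.environ["QDRANT_API_KEY"]
)
vector_store = QdrantVectorStore(client=client, collection_name=os.environ["COLLECTION_NAME"])
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine(similarity_top_k=3)

app = FastAPI()

class QueryRequest(BaseModel):
    query: str

@app.post("/query")  # the endpoint name is an assumption
def answer(request: QueryRequest) -> dict:
    response = query_engine.query(request.query)
    return {"response": str(response)}
```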
## Streamlit UI
Run the Streamlit app, adding the endpoint URL that you get after deployment:

```bash
streamlit run streamlit_app.py
```
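For reference, a hypothetical minimal `streamlit_app.py` could look like the sketch below; the `/query` path, port, and JSON shape follow the `app.py` sketch above and are assumptions.

```python
# Hypothetical minimal streamlit_app.py (illustrative): sends the question to
# the deployed FastAPI service and displays the answer.
import requests
import streamlit as st

st.title("Code Q&A")

# External IP from `kubectl get svc`; the path and port are assumptions
endpoint = st.text_input("API endpoint", "http://34.65.3.225:8000/query")
question = st.text_input("Ask a question about your repositories")

if st.button("Submit") and question:
    resp = requests.post(endpoint, json={"query": question}, timeout=120)
    st.write(resp.json()["response"])
```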
<p align="center">
<img width="767" alt="lambda-gke" src="https://github.com/benitomartin/mlops-car-prices/assets/116911431/b4a7e10c-52f9-4ca2-ade3-f2136ff6bbdf">
</p>