Q&A Pipeline Deployment on GKE for Scalability with LlamaIndex and Qdrant 🚀


This repository contains a full Q&A pipeline using the LlamaIndex framework, Qdrant as a vector database, and deployment on Google Kubernetes Engine (GKE) using a FastAPI app and Dockerfile. Python files from my repositories are loaded into the vector database, and the FastAPI app processes requests. The main goal is to provide fast access to your own code, enabling reuse of functions.

For detailed project descriptions, refer to this Medium article.

Main Steps

  • Data Ingestion: Load data from GitHub repositories.
  • Indexing: Split documents into nodes using SentenceSplitter.
  • Embedding: Generate embeddings with FastEmbedEmbedding.
  • Vector Store: Insert the embeddings and metadata into Qdrant.
  • Query Retrieval: Answer queries with RetrieverQueryEngine.
  • FastAPI and GKE: Handle requests via the FastAPI app deployed on GKE.
  • Streamlit: Provide a UI to query the deployed app.
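The splitting step above can be sketched with a minimal, dependency-free illustration of what SentenceSplitter does: break text into sentences and group them into chunks with a small overlap between neighbours. The function and parameter names here are illustrative, not the library's actual API.

```python
import re


def split_into_chunks(text: str, chunk_size: int = 3, overlap: int = 1) -> list[str]:
    """Group sentences into overlapping chunks of `chunk_size` sentences."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    step = max(chunk_size - overlap, 1)
    chunks = []
    for start in range(0, len(sentences), step):
        window = sentences[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
        if start + chunk_size >= len(sentences):
            break
    return chunks


for chunk in split_into_chunks("One. Two. Three. Four. Five."):
    print(chunk)
```

The overlap keeps context shared between adjacent chunks, which helps retrieval return coherent passages.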

Feel free to star ⭐ and clone this repo 😉

Tech Stack

Visual Studio Code, Jupyter Notebook, Python, OpenAI, Anaconda, Linux Ubuntu, Google Cloud, Kubernetes, FastAPI, Git, Docker, GitHub Actions, Streamlit

Project Structure

The project has been structured with the following files:

  • .github/workflows: CI/CD pipelines
  • tests: unit tests (unittest)
  • Dockerfile: Dockerfile for the FastAPI app
  • Makefile: install requirements, formatting, linting, testing and clean up
  • app.py: FastAPI app
  • pyproject.toml: linting and formatting using ruff
  • create_qdrant_collection.py: script to create the collection in Qdrant
  • deploy-gke.yaml: GKE deployment manifest
  • kustomization.yaml: Kustomize configuration for the deployment
  • requirements.txt: project requirements

Project Set Up

The Python version used for this project is Python 3.10. You can follow along with the Medium article.

  1. Clone the repo (or download it as a zip file):

    git clone https://github.com/benitomartin/scale-gke-qdrant-llama.git
    
  2. Create the virtual environment named main-env using Conda with Python version 3.10:

    conda create -n main-env python=3.10
    conda activate main-env
    
  3. Install the project dependencies included in requirements.txt, either directly or via the Makefile:

    pip install -r requirements.txt
    
    or
    
    make install
    
  4. You can test the app locally by running:

    uvicorn app:app --host 0.0.0.0 --port 8000
    

    then open the interactive docs at one of these addresses:

    http://localhost:8000/docs or http://127.0.0.1:8000/docs

  5. Create a GCP account and project, generate a service account key, and enable the GKE API

  6. Make sure the .env file is complete:

    OPENAI_API_KEY=
    QDRANT_API_KEY=
    QDRANT_URL=
    COLLECTION_NAME=
    ACCESS_TOKEN=
    GITHUB_USERNAME=
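A stdlib-only sketch for loading and validating these variables (the project may use python-dotenv instead; `load_env_file` and `missing_vars` are illustrative helpers, not part of the repo):

```python
import os

REQUIRED = [
    "OPENAI_API_KEY", "QDRANT_API_KEY", "QDRANT_URL",
    "COLLECTION_NAME", "ACCESS_TOKEN", "GITHUB_USERNAME",
]


def load_env_file(path: str = ".env") -> dict[str, str]:
    """Parse KEY=VALUE lines from a .env file into os.environ."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    os.environ.update(values)
    return values


def missing_vars() -> list[str]:
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not os.environ.get(name)]
```

Running a check like this at startup fails fast with a clear message instead of a confusing error deep inside the pipeline.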
    
  7. Add the following secrets into GitHub:

    OPENAI_API_KEY
    QDRANT_API_KEY
    QDRANT_URL
    COLLECTION_NAME
    GKE_SA_KEY
    GKE_PROJECT # PROJECT_ID
    
  8. Authenticate in GCP and set the project:

    gcloud auth login
    
    gcloud config set project PROJECT_ID
    
  9. Create the Kubernetes cluster:

    gcloud container clusters create llama-gke-cluster \
        --zone=europe-west6-a \
        --num-nodes=5 \
        --enable-autoscaling \
        --min-nodes=1 \
        --max-nodes=10 \
        --machine-type=n1-standard-4 \
        --enable-vertical-pod-autoscaling
    

    after creation, check the nodes:

    kubectl get nodes
    
  10. Push the GitHub Actions workflows to start the deployment

  11. Verify the pods and services are running after deployment:

    kubectl get po 
    kubectl get svc 
    

    Under svc, the external IP is the endpoint that can be added in the Streamlit app, e.g.:

    http://34.65.191.211:8000

  12. Inspect the pod logs and details:

    kubectl logs llama-app-gke-deploy-79bf48d7d8-4b77z
    kubectl describe pod llama-app-gke-deploy-79bf48d7d8-4b77z
    
  13. Clean up to avoid costs by deleting the cluster and the deployment:

    gcloud container clusters delete llama-gke-cluster --zone=europe-west6-a
    kubectl delete deployment llama-app-gke-deploy
    

Streamlit UI

Run the Streamlit app, adding the endpoint URL that you get after deployment:

streamlit run streamlit_app.py
<p align="center">
<img width="767" alt="lambda-gke" src="https://github.com/benitomartin/mlops-car-prices/assets/116911431/b4a7e10c-52f9-4ca2-ade3-f2136ff6bbdf">
</p>
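A hypothetical sketch of how streamlit_app.py could call the deployed endpoint (the /query route and the payload shape are assumptions; `query_url` and `ask` are illustrative helpers):

```python
import json
import urllib.request


def query_url(endpoint: str) -> str:
    """Normalise the endpoint and append the /query route."""
    return f"{endpoint.rstrip('/')}/query"


def ask(endpoint: str, question: str) -> str:
    """POST the question to the FastAPI app and return the answer."""
    req = urllib.request.Request(
        query_url(endpoint),
        data=json.dumps({"question": question}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["answer"]


# Streamlit wiring (runs only under `streamlit run`):
# import streamlit as st
# endpoint = st.text_input("Endpoint", "http://34.65.191.211:8000")
# question = st.text_input("Ask about your code")
# if st.button("Submit") and question:
#     st.write(ask(endpoint, question))
```

The UI only needs the external IP from `kubectl get svc`; everything else is handled by the FastAPI app behind it.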