refactor: docs refactor fixes

Florian Hönicke
2023-04-08 23:31:53 +02:00
parent 56727bad24
commit 10fc0b0d01
8 changed files with 283 additions and 164 deletions

README.md

@@ -1,5 +1,5 @@
<h1 align="center"> <h1 align="center">
GPT Deploy: The Microservice Magician 🧙🚀 GPT Deploy: One line to create them all 🧙🚀
</h1> </h1>
@@ -33,23 +33,29 @@ Your imagination is the limit!
</p> </p>
This project streamlines the creation and deployment of microservices. This project streamlines the creation and deployment of microservices.
Simply describe your task using natural language, and the system will automatically build and deploy your microservice. Simply describe your task using natural language, and the system will automatically build and deploy your microservice.
To ensure the executor accurately aligns with your intended task, you can also provide test scenarios. To ensure the microservice accurately aligns with your intended task, a test scenario is required.
# Quickstart ## Quickstart
## Installation ### Installation
```bash ```bash
pip install gptdeploy pip install gptdeploy
gptdeploy configure --key <your openai api key> gptdeploy configure --key <your openai api key>
``` ```
If you set the environment variable `OPENAI_API_KEY`, the configuration step can be skipped. If you set the environment variable `OPENAI_API_KEY`, the configuration step can be skipped.
## run ### run
```bash ```bash
gptdeploy --description "Given a PDF, return it's text" --test "https://www.africau.edu/images/default/sample.pdf" gptdeploy create --description "Given a PDF, return it's text" --test "https://www.africau.edu/images/default/sample.pdf"
``` ```
To create your personal microservice, two things are required:
- A `description` of the task you want to accomplish.
- A `test` scenario that ensures the microservice works as expected.
The creation process should take between 5 and 15 minutes.
During this time, GPT iteratively builds your microservice until it finds a strategy that makes your test scenario pass.
Once the microservice is created and deployed, you can test it using the generated Streamlit playground.
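Once deployed, the microservice can also be queried directly from Python instead of the playground. The snippet below is a minimal sketch that mirrors the client example used in the generation prompts in this commit; the host is a placeholder for the Flow URL printed after deployment, and the exact input and output fields depend on the microservice that was generated.
```python
from jina import Client, Document, DocumentArray

# placeholder host taken from the repo's constants; replace with your deployed Flow URL
client = Client(host='jcloud.jina.ai')

d = Document(uri='https://www.africau.edu/images/default/sample.pdf')
d.load_uri_to_blob()  # fetch the referenced file into the document

response = client.post('/', inputs=DocumentArray([d]))  # the endpoint is always '/'
print(response[0].text)  # could also be a blob for image/audio outputs
```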
# Overview ## Overview
The graphic below illustrates the process of creating a microservice and deploying it to the cloud. The graphic below illustrates the process of creating a microservice and deploying it to the cloud.
```mermaid ```mermaid
graph TB graph TB
@@ -85,29 +91,35 @@ graph TB
- Creates a Streamlit playground where you can test the microservice. - Creates a Streamlit playground where you can test the microservice.
6. If it fails 10 times in a row, it moves on to the next approach. 6. If it fails 10 times in a row, it moves on to the next approach.
# Examples ## Examples
## 3d model info
### Rhyme Generator
```bash ```bash
gptdeploy --description "Given a 3d object, return vertex count and face count" --test "https://raw.githubusercontent.com/polygonjs/polygonjs-assets/master/models/wolf.obj" gptdeploy create --description "Given a word, return a list of rhyming words using the datamuse api" --test "hello"
```
<img src="res/rhyme_generator_example.png" alt="Rhyme Generator" width="600" />
### 3d model info
```bash
gptdeploy create --description "Given a 3d object, return vertex count and face count" --test "https://raw.githubusercontent.com/polygonjs/polygonjs-assets/master/models/wolf.obj"
``` ```
<img src="res/obj_info_example.png" alt="3D Model Info" width="600" /> <img src="res/obj_info_example.png" alt="3D Model Info" width="600" />
## Table extraction ### Table extraction
```bash ```bash
--description "Given a URL, extract all tables as csv" --test "http://www.ins.tn/statistiques/90" --description "Given a URL, extract all tables as csv" --test "http://www.ins.tn/statistiques/90"
``` ```
<img src="res/table_extraction_example.png" alt="Table Extraction" width="600" /> <img src="res/table_extraction_example.png" alt="Table Extraction" width="600" />
### Audio to mel spectrogram
## Audio to mel spectrogram
```bash ```bash
gptdeploy --description "Create mel spectrograms from audio file" --test "https://cdn.pixabay.com/download/audio/2023/02/28/audio_550d815fa5.mp3" gptdeploy create --description "Create mel spectrograms from audio file" --test "https://cdn.pixabay.com/download/audio/2023/02/28/audio_550d815fa5.mp3"
``` ```
<img src="res/audio_to_mel_example.png" alt="Audio to Mel Spectrogram" width="600" /> <img src="res/audio_to_mel_example.png" alt="Audio to Mel Spectrogram" width="600" />
## Text to speech ### Text to speech
```bash ```bash
gptdeploy --description "Convert text to speech" --test "Hello, welcome to GPT Deploy!" gptdeploy create --description "Convert text to speech" --test "Hello, welcome to GPT Deploy!"
``` ```
<a href=res/text_to_speech_example.wav><img src="res/text_to_speech_example.png" alt="Text to Speech" width="600" /></a> <a href=res/text_to_speech_example.wav><img src="res/text_to_speech_example.png" alt="Text to Speech" width="600" /></a>
@@ -116,125 +128,123 @@ gptdeploy --description "Convert text to speech" --test "Hello, welcome to GPT D
Your browser does not support the audio element. Your browser does not support the audio element.
</audio> </audio>
## QR Code Generator ### QR Code Generator
```bash ```bash
gptdeploy --description "Generate QR code from URL" --test "https://www.example.com" gptdeploy create --description "Generate QR code from URL" --test "https://www.example.com"
``` ```
<img src="res/qr_example.png" alt="QR Code Generator" width="600" /> <img src="res/qr_example.png" alt="QR Code Generator" width="600" />
# TO BE TESTED ## TO BE TESTED
## ASCII Art Generator ### ASCII Art Generator
```bash ```bash
gptdeploy --description "Convert image to ASCII art" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff" gptdeploy create --description "Convert image to ASCII art" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
``` ```
## Color Palette Generator ### Color Palette Generator
```bash ```bash
gptdeploy --description "creates aesthetically pleasing color palettes based on a seed color, using color theory principles like complementary or analogous colors" --test "red" gptdeploy create --description "creates aesthetically pleasing color palettes based on a seed color, using color theory principles like complementary or analogous colors" --test "red"
``` ```
## Password Strength Checker ### Password Strength Checker
```bash ```bash
gptdeploy --description "Given a password, return a score from 1 to 10 indicating the strength of the password" --test "Pa$$w0rd" gptdeploy create --description "Given a password, return a score from 1 to 10 indicating the strength of the password" --test "Pa$$w0rd"
``` ```
## Morse Code Translator ### Morse Code Translator
```bash ```bash
gptdeploy --description "Convert text to morse code" --test "Hello, welcome to GPT Deploy!" gptdeploy create --description "Convert text to morse code" --test "Hello, welcome to GPT Deploy!"
``` ```
## IP Geolocation ### IP Geolocation
```bash ```bash
gptdeploy --description "Given an IP address, return the geolocation information" --test "142.251.46.174" gptdeploy create --description "Given an IP address, return the geolocation information" --test "142.251.46.174"
``` ```
## Rhyme Generator ### Currency Converter
```bash ```bash
gptdeploy --description "Given a word, return a list of rhyming words using the datamuse api" --test "hello" gptdeploy create --description "Converts any currency into any other" --test "1 usd to eur"
``` ```
## Currency Converter ### Image Resizer
```bash ```bash
gptdeploy --description "Converts any currency into any other" --test "1 usd to eur" gptdeploy create --description "Given an image, resize it to a specified width and height" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
``` ```
## Image Resizer ### Weather API
```bash ```bash
gptdeploy --description "Given an image, resize it to a specified width and height" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff" gptdeploy create --description "Given a city, return the current weather" --test "Berlin"
```
## Weather API
```bash
gptdeploy --description "Given a city, return the current weather" --test "Berlin"
``` ```
## Sudoku Solver ### Sudoku Solver
```bash ```bash
gptdeploy --description "Given a sudoku puzzle, return the solution" --test "[[2, 5, 0, 0, 3, 0, 9, 0, 1], [0, 1, 0, 0, 0, 4, 0, 0, 0], [4, 0, 7, 0, 0, 0, 2, 0, 8], [0, 0, 5, 2, 0, 0, 0, 0, 0], [0, 0, 0, 0, 9, 8, 1, 0, 0], [0, 4, 0, 0, 0, 3, 0, 0, 0], [0, 0, 0, 3, 6, 0, 0, 7, 2], [0, 7, 0, 0, 0, 0, 0, 0, 3], [9, 0, 3, 0, 0, 0, 6, 0, 4]]" gptdeploy create --description "Given a sudoku puzzle, return the solution" --test "[[2, 5, 0, 0, 3, 0, 9, 0, 1], [0, 1, 0, 0, 0, 4, 0, 0, 0], [4, 0, 7, 0, 0, 0, 2, 0, 8], [0, 0, 5, 2, 0, 0, 0, 0, 0], [0, 0, 0, 0, 9, 8, 1, 0, 0], [0, 4, 0, 0, 0, 3, 0, 0, 0], [0, 0, 0, 3, 6, 0, 0, 7, 2], [0, 7, 0, 0, 0, 0, 0, 0, 3], [9, 0, 3, 0, 0, 0, 6, 0, 4]]"
``` ```
## Carbon Footprint Calculator ### Carbon Footprint Calculator
```bash ```bash
gptdeploy --description "Estimate a company's carbon footprint based on factors like transportation, electricity usage, waste production etc..." --test "Jina AI" gptdeploy create --description "Estimate a company's carbon footprint based on factors like transportation, electricity usage, waste production etc..." --test "Jina AI"
``` ```
## Real Estate Valuation Estimator ### Real Estate Valuation Estimator
```bash ```bash
gptdeploy --description "Create a microservice that estimates the value of a property based on factors like location, property type, age, and square footage." --test "Berlin Friedrichshain, Flat, 100m², 10 years old" gptdeploy create --description "Create a microservice that estimates the value of a property based on factors like location, property type, age, and square footage." --test "Berlin Friedrichshain, Flat, 100m², 10 years old"
```
## Chemical Structure Drawing
```bash
gptdeploy --description "Convert a chemical formula into a 2D chemical structure diagram" --test "C6H6"
```
## Gene Sequence Alignment
```bash
gptdeploy --description "Align two DNA or RNA sequences using the Needleman-Wunsch algorithm" --test "AGTC, GTCA"
```
## Markdown to HTML Converter
```bash
gptdeploy --description "Convert markdown to HTML" --test "# Hello, welcome to GPT Deploy!"
```
## Barcode Generator
```bash
gptdeploy --description "Generate a barcode from a string" --test "Hello, welcome to GPT Deploy!"
```
## File Compression
```bash
gptdeploy --description "Compress a file using the gzip algorithm" --test "content of the file: Hello, welcome to GPT Deploy!"
```
## Meme Generator
```bash
gptdeploy --description "Generate a meme from an image and a caption" --test "Surprised Pikachu: https://media.wired.com/photos/5f87340d114b38fa1f8339f9/master/w_1600%2Cc_limit/Ideas_Surprised_Pikachu_HD.jpg, TOP:When you see GPT Deploy create and deploy a microservice in seconds"
```
## Watermarking Images
```bash
gptdeploy --description "Add a watermark (GPT Deploy) to an image" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
```
## File Metadata Extractor
```bash
gptdeploy --description "Extract metadata from a file" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
```
## Video Thumbnail Extractor
```bash
gptdeploy --description "Extract a thumbnail from a video" --test "http://techslides.com/demos/sample-videos/small.mp4"
```
## Gif Maker
```bash
gptdeploy --description "Create a gif from a list of images" --test "https://images.unsplash.com/photo-1564725075388-cc8338732289, https://images.unsplash.com/photo-1584555684040-bad07f46a21f, https://images.unsplash.com/photo-1584555613497-9ecf9dd06f68"
```
## Heatmap Generator
```bash
gptdeploy --description "Create a heatmap from an image and a list of relative coordinates" --test "[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.2, 0.1], [0.7, 0.2], [0.4, 0.2]]"
``` ```
# 🔮 vision ### Gene Sequence Alignment
```bash
gptdeploy create --description "Align two DNA or RNA sequences using the Needleman-Wunsch algorithm" --test "AGTC, GTCA"
```
### Markdown to HTML Converter
```bash
gptdeploy create --description "Convert markdown to HTML" --test "# Hello, welcome to GPT Deploy!"
```
### Barcode Generator
```bash
gptdeploy create --description "Generate a barcode from a string" --test "Hello, welcome to GPT Deploy!"
```
### File Compression
```bash
gptdeploy create --description "Compress a file using the gzip algorithm" --test "content of the file: Hello, welcome to GPT Deploy!"
```
### Meme Generator
```bash
gptdeploy create --description "Generate a meme from an image and a caption" --test "Surprised Pikachu: https://media.wired.com/photos/5f87340d114b38fa1f8339f9/master/w_1600%2Cc_limit/Ideas_Surprised_Pikachu_HD.jpg, TOP:When you see GPT Deploy create and deploy a microservice in seconds"
```
### Watermarking Images
```bash
gptdeploy create --description "Add a watermark (GPT Deploy) to an image" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
```
### File Metadata Extractor
```bash
gptdeploy create --description "Extract metadata from a file" --test "https://images.unsplash.com/photo-1602738328654-51ab2ae6c4ff"
```
### Video Thumbnail Extractor
```bash
gptdeploy create --description "Extract a thumbnail from a video" --test "http://techslides.com/demos/sample-videos/small.mp4"
```
### Gif Maker
```bash
gptdeploy create --description "Create a gif from a list of images" --test "https://images.unsplash.com/photo-1564725075388-cc8338732289, https://images.unsplash.com/photo-1584555684040-bad07f46a21f, https://images.unsplash.com/photo-1584555613497-9ecf9dd06f68"
```
### Heatmap Generator
```bash
gptdeploy create --description "Create a heatmap from an image and a list of relative coordinates" --test "https://images.unsplash.com/photo-1574786198875-49f5d09fe2d2, [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.2, 0.1], [0.7, 0.2], [0.4, 0.2]]"
```
## Open Challenges
### Chemical Structure Drawing
```bash
gptdeploy create --description "Convert a chemical formula into a 2D chemical structure diagram" --test "C6H6"
```
## 🔮 vision
Use natural language interface to create, deploy and update your microservice infrastructure. Use natural language interface to create, deploy and update your microservice infrastructure.
# TODO ## TODO
critical critical
- [ ] add buttons to README.md - [ ] add buttons to README.md
- [ ] fix problem with package installation - [ ] fix problem with package installation
@@ -261,8 +271,9 @@ Make sure it is only printed twice in case it changed.
- [ ] allow updating your microservice by providing feedback - [ ] allow updating your microservice by providing feedback
- [ ] bug: it can happen that the code generation hangs forever - in this case, abort and redo the generation - [ ] bug: it can happen that the code generation hangs forever - in this case, abort and redo the generation
- [ ] feat: make playground more stylish by adding attributes like: clean design, beautiful, like it was made by a professional designer, ... - [ ] feat: make playground more stylish by adding attributes like: clean design, beautiful, like it was made by a professional designer, ...
- [ ] support for other large language models like ChatGLM
# challenging tasks: ## challenging tasks:
- The executor takes an image as input and returns a list of bounding boxes of all animals in the image. - The executor takes an image as input and returns a list of bounding boxes of all animals in the image.
- The executor takes 3D objects in obj format as input and outputs a 2D image rendering of that object where the full object is shown. - The executor takes 3D objects in obj format as input and outputs a 2D image rendering of that object where the full object is shown.
- The executor takes an mp3 file as input and returns bpm and pitch. - The executor takes an mp3 file as input and returns bpm and pitch.

main.py

@@ -4,6 +4,7 @@ import click
from src import gpt, jina_cloud from src import gpt, jina_cloud
from src.jina_cloud import push_executor, process_error_message, jina_auth_login from src.jina_cloud import push_executor, process_error_message, jina_auth_login
from src.key_handling import set_api_key
from src.prompt_tasks import general_guidelines, executor_file_task, chain_of_thought_creation, test_executor_file_task, \ from src.prompt_tasks import general_guidelines, executor_file_task, chain_of_thought_creation, test_executor_file_task, \
chain_of_thought_optimization, requirements_file_task, docker_file_task, not_allowed chain_of_thought_optimization, requirements_file_task, docker_file_task, not_allowed
from src.utils.io import recreate_folder, persist_file from src.utils.io import recreate_folder, persist_file
@@ -15,6 +16,7 @@ import re
from src.constants import FILE_AND_TAG_PAIRS from src.constants import FILE_AND_TAG_PAIRS
gpt_session = gpt.GPTSession()
def extract_content_from_result(plain_text, file_name): def extract_content_from_result(plain_text, file_name):
pattern = fr"^\*\*{file_name}\*\*\n```(?:\w+\n)?([\s\S]*?)```" pattern = fr"^\*\*{file_name}\*\*\n```(?:\w+\n)?([\s\S]*?)```"
@@ -80,7 +82,7 @@ def create_executor(
+ executor_file_task(executor_name, description, test, package) + executor_file_task(executor_name, description, test, package)
+ chain_of_thought_creation() + chain_of_thought_creation()
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
executor_content_raw = conversation.query(user_query) executor_content_raw = conversation.query(user_query)
if is_chain_of_thought: if is_chain_of_thought:
executor_content_raw = conversation.query( executor_content_raw = conversation.query(
@@ -95,7 +97,7 @@ def create_executor(
+ wrap_content_in_code_block(executor_content, 'executor.py', 'python') + wrap_content_in_code_block(executor_content, 'executor.py', 'python')
+ test_executor_file_task(executor_name, test) + test_executor_file_task(executor_name, test)
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
test_executor_content_raw = conversation.query(user_query) test_executor_content_raw = conversation.query(user_query)
if is_chain_of_thought: if is_chain_of_thought:
test_executor_content_raw = conversation.query( test_executor_content_raw = conversation.query(
@@ -113,7 +115,7 @@ def create_executor(
+ wrap_content_in_code_block(test_executor_content, 'test_executor.py', 'python') + wrap_content_in_code_block(test_executor_content, 'test_executor.py', 'python')
+ requirements_file_task() + requirements_file_task()
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
requirements_content_raw = conversation.query(user_query) requirements_content_raw = conversation.query(user_query)
if is_chain_of_thought: if is_chain_of_thought:
requirements_content_raw = conversation.query( requirements_content_raw = conversation.query(
@@ -130,7 +132,7 @@ def create_executor(
+ wrap_content_in_code_block(requirements_content, 'requirements.txt', '') + wrap_content_in_code_block(requirements_content, 'requirements.txt', '')
+ docker_file_task() + docker_file_task()
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
dockerfile_content_raw = conversation.query(user_query) dockerfile_content_raw = conversation.query(user_query)
if is_chain_of_thought: if is_chain_of_thought:
dockerfile_content_raw = conversation.query( dockerfile_content_raw = conversation.query(
@@ -150,7 +152,9 @@ def create_playground(executor_name, executor_path, host):
+ wrap_content_in_code_block(file_name_to_content['executor.py'], 'executor.py', 'python') + wrap_content_in_code_block(file_name_to_content['executor.py'], 'executor.py', 'python')
+ wrap_content_in_code_block(file_name_to_content['test_executor.py'], 'test_executor.py', 'python') + wrap_content_in_code_block(file_name_to_content['test_executor.py'], 'test_executor.py', 'python')
+ f''' + f'''
Create a playground for the executor {executor_name} using streamlit. Create a playground for the executor {executor_name} using streamlit.
The playground must look like it was made by a professional designer.
All the ui elements are well thought out and the user experience is great.
The executor is hosted on {host}. The executor is hosted on {host}.
This is an example how you can connect to the executor assuming the document (d) is already defined: This is an example how you can connect to the executor assuming the document (d) is already defined:
from jina import Client, Document, DocumentArray from jina import Client, Document, DocumentArray
@@ -159,7 +163,7 @@ response = client.post('/', inputs=DocumentArray([d])) # always use '/'
print(response[0].text) # can also be blob in case of image/audio..., this should be visualized in the streamlit app print(response[0].text) # can also be blob in case of image/audio..., this should be visualized in the streamlit app
''' '''
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
conversation.query(user_query) conversation.query(user_query)
playground_content_raw = conversation.query( playground_content_raw = conversation.query(
f"General rules: " + not_allowed() + chain_of_thought_optimization('python', 'app.py')) f"General rules: " + not_allowed() + chain_of_thought_optimization('python', 'app.py'))
@@ -204,7 +208,7 @@ def debug_executor(output_path, package, description, test):
f"...code...\n" f"...code...\n"
f"```\n\n" f"```\n\n"
) )
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
returned_files_raw = conversation.query(user_query) returned_files_raw = conversation.query(user_query)
for file_name, tag in FILE_AND_TAG_PAIRS: for file_name, tag in FILE_AND_TAG_PAIRS:
updated_file = extract_content_from_result(returned_files_raw, file_name) updated_file = extract_content_from_result(returned_files_raw, file_name)
@@ -226,7 +230,7 @@ class MaxDebugTimeReachedException(BaseException):
def generate_executor_name(description): def generate_executor_name(description):
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
user_query = f''' user_query = f'''
Generate a name for the executor matching the description: Generate a name for the executor matching the description:
"{description}" "{description}"
@@ -271,20 +275,23 @@ package2,package3,...
... ...
``` ```
''' '''
conversation = gpt.Conversation() conversation = gpt_session.get_conversation()
packages_raw = conversation.query(user_query) packages_raw = conversation.query(user_query)
packages_csv_string = extract_content_from_result(packages_raw, 'packages.csv') packages_csv_string = extract_content_from_result(packages_raw, 'packages.csv')
packages = [package.split(',') for package in packages_csv_string.split('\n')] packages = [package.split(',') for package in packages_csv_string.split('\n')]
packages = packages[:threads] packages = packages[:threads]
return packages return packages
@click.group(invoke_without_command=True)
def main():
pass
@click.command() @main.command()
@click.option('--description', required=True, help='Description of the executor.') @click.option('--description', required=True, help='Description of the executor.')
@click.option('--test', required=True, help='Test scenario for the executor.') @click.option('--test', required=True, help='Test scenario for the executor.')
@click.option('--num_approaches', default=3, type=int, help='Number of approaches to use to fulfill the task (default: 3).') @click.option('--num_approaches', default=3, type=int, help='Number of approaches to use to fulfill the task (default: 3).')
@click.option('--output_path', default='executor', help='Path to the output folder (must be empty). ') @click.option('--output_path', default='executor', help='Path to the output folder (must be empty). ')
def main( def create(
description, description,
test, test,
num_approaches=3, num_approaches=3,
@@ -320,6 +327,11 @@ def main(
) )
break break
@main.command()
@click.option('--key', required=True, help='Your OpenAI API key.')
def configure(key):
set_api_key(key)
if __name__ == '__main__': if __name__ == '__main__':
main() main()
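Read across the hunks above, the refactor replaces the single `main` command with a click group that owns two subcommands, `create` and `configure`. Below is a condensed sketch of the resulting structure (help texts shortened, bodies stubbed), not a verbatim copy of the file:
```python
import click

from src.key_handling import set_api_key


@click.group(invoke_without_command=True)
def main():
    pass


@main.command()
@click.option('--description', required=True, help='Description of the executor.')
@click.option('--test', required=True, help='Test scenario for the executor.')
@click.option('--num_approaches', default=3, type=int, help='Number of approaches to try (default: 3).')
@click.option('--output_path', default='executor', help='Path to the output folder (must be empty).')
def create(description, test, num_approaches, output_path):
    ...  # build, debug and deploy the microservice (see the hunks above)


@main.command()
@click.option('--key', required=True, help='Your OpenAI API key.')
def configure(key):
    set_api_key(key)


if __name__ == '__main__':
    main()
```
Installed behind the `gptdeploy` entry point, this is what makes the `gptdeploy create ...` and `gptdeploy configure --key ...` invocations in the README work.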


@@ -1,4 +1,5 @@
jina==3.14.1 jina==3.14.1
click==8.1.3 click==8.1.3
streamlit==1.20.0 streamlit==1.20.0
openai==0.27.4 openai==0.27.4
psutil==5.9.4

(new binary image file, 129 KiB, preview not shown)

View File

@@ -24,4 +24,9 @@ FILE_AND_TAG_PAIRS = [
EXECUTOR_FOLDER_v1 = 'executor_v1' EXECUTOR_FOLDER_v1 = 'executor_v1'
EXECUTOR_FOLDER_v2 = 'executor_v2' EXECUTOR_FOLDER_v2 = 'executor_v2'
FLOW_URL_PLACEHOLDER = 'jcloud.jina.ai' FLOW_URL_PLACEHOLDER = 'jcloud.jina.ai'
PRICING_GPT4_PROMPT = 0.03
PRICING_GPT4_GENERATION = 0.06
PRICING_GPT3_5_TURBO_PROMPT = 0.002
PRICING_GPT3_5_TURBO_GENERATION = 0.002
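These pricing values feed the `cost_callback` of the new `GPTSession` class (see the diff further down), which approximates token counts as characters divided by 3.4 and prices them per 1,000 tokens. A rough worked example under that formula, with made-up character counts and assuming a GPT-4 session:
```python
PRICING_GPT4_PROMPT = 0.03       # $ per 1k prompt tokens
PRICING_GPT4_GENERATION = 0.06   # $ per 1k generated tokens

chars_prompt = 68_000            # example: ~20k prompt tokens (68_000 / 3.4)
chars_generation = 17_000        # example: ~5k generated tokens

money_prompt = round(chars_prompt / 3.4 * PRICING_GPT4_PROMPT / 1000, 2)               # ~ $0.60
money_generation = round(chars_generation / 3.4 * PRICING_GPT4_GENERATION / 1000, 2)   # ~ $0.30
print(f'total money: ${round(money_prompt + money_generation, 2)}')                    # ~ $0.90
```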


@@ -5,46 +5,66 @@ from typing import List, Tuple
import openai import openai
from openai.error import RateLimitError, Timeout from openai.error import RateLimitError, Timeout
from src.constants import PRICING_GPT4_PROMPT, PRICING_GPT4_GENERATION, PRICING_GPT3_5_TURBO_PROMPT, \
PRICING_GPT3_5_TURBO_GENERATION
from src.prompt_system import system_base_definition from src.prompt_system import system_base_definition
from src.utils.io import timeout_generator_wrapper, GenerationTimeoutError from src.utils.io import timeout_generator_wrapper, GenerationTimeoutError
from src.utils.string_tools import print_colored from src.utils.string_tools import print_colored
PRICING_GPT4_PROMPT = 0.03 class GPTSession:
PRICING_GPT4_GENERATION = 0.06 def __init__(self):
PRICING_GPT3_5_TURBO_PROMPT = 0.002 self.get_openai_api_key()
PRICING_GPT3_5_TURBO_GENERATION = 0.002 if self.is_gpt4_available():
self.supported_model = 'gpt-4'
self.pricing_prompt = PRICING_GPT4_PROMPT
self.pricing_generation = PRICING_GPT4_GENERATION
else:
self.supported_model = 'gpt-3.5-turbo'
self.pricing_prompt = PRICING_GPT3_5_TURBO_PROMPT
self.pricing_generation = PRICING_GPT3_5_TURBO_GENERATION
self.chars_prompt_so_far = 0
self.chars_generation_so_far = 0
if 'OPENAI_API_KEY' not in os.environ: def get_openai_api_key(self):
raise Exception('You need to set OPENAI_API_KEY in your environment') if 'OPENAI_API_KEY' not in os.environ:
openai.api_key = os.environ['OPENAI_API_KEY'] raise Exception('You need to set OPENAI_API_KEY in your environment')
openai.api_key = os.environ['OPENAI_API_KEY']
def is_gpt4_available(self):
try:
openai.ChatCompletion.create(
model="gpt-4",
messages=[{
"role": 'system',
"content": 'test'
}]
)
return True
except openai.error.InvalidRequestError:
return False
def cost_callback(self, chars_prompt, chars_generation):
self.chars_prompt_so_far += chars_prompt
self.chars_generation_so_far += chars_generation
print('\n')
money_prompt = round(self.chars_prompt_so_far / 3.4 * self.pricing_prompt / 1000, 2)
money_generation = round(self.chars_generation_so_far / 3.4 * self.pricing_generation / 1000, 2)
print('money prompt:', f'${money_prompt}')
print('money generation:', f'${money_generation}')
print('total money:', f'${money_prompt + money_generation}')
print('\n')
def get_conversation(self):
return _GPTConversation(self.supported_model, self.cost_callback)
try: class _GPTConversation:
openai.ChatCompletion.create( def __init__(self, model: str, cost_callback, prompt_list: List[Tuple[str, str]] = None):
model="gpt-4",
messages=[{
"role": 'system',
"content": 'test'
}]
)
supported_model = 'gpt-4'
pricing_prompt = PRICING_GPT4_PROMPT
pricing_generation = PRICING_GPT4_GENERATION
except openai.error.InvalidRequestError:
supported_model = 'gpt-3.5-turbo'
pricing_prompt = PRICING_GPT3_5_TURBO_PROMPT
pricing_generation = PRICING_GPT3_5_TURBO_GENERATION
total_chars_prompt = 0
total_chars_generation = 0
class Conversation:
def __init__(self, prompt_list: List[Tuple[str, str]] = None, model=supported_model):
self.model = model self.model = model
if prompt_list is None: if prompt_list is None:
prompt_list = [('system', system_base_definition)] prompt_list = [('system', system_base_definition)]
self.prompt_list = prompt_list self.prompt_list = prompt_list
self.cost_callback = cost_callback
print_colored('system', system_base_definition, 'magenta') print_colored('system', system_base_definition, 'magenta')
def query(self, prompt: str): def query(self, prompt: str):
@@ -54,8 +74,18 @@ class Conversation:
self.prompt_list.append(('assistant', response)) self.prompt_list.append(('assistant', response))
return response return response
def get_response_from_stream(self, response_generator):
response_generator_with_timeout = timeout_generator_wrapper(response_generator, 10)
complete_string = ''
for chunk in response_generator_with_timeout:
delta = chunk['choices'][0]['delta']
if 'content' in delta:
content = delta['content']
print_colored('' if complete_string else 'assistant', content, 'green', end='')
complete_string += content
return complete_string
def get_response(self, prompt_list: List[Tuple[str, str]]): def get_response(self, prompt_list: List[Tuple[str, str]]):
global total_chars_prompt, total_chars_generation
for i in range(10): for i in range(10):
try: try:
response_generator = openai.ChatCompletion.create( response_generator = openai.ChatCompletion.create(
@@ -71,27 +101,17 @@ class Conversation:
for prompt in prompt_list for prompt in prompt_list
] ]
) )
response_generator_with_timeout = timeout_generator_wrapper(response_generator, 10)
total_chars_prompt += sum(len(prompt[1]) for prompt in prompt_list) complete_string = self.get_response_from_stream(response_generator)
complete_string = ''
for chunk in response_generator_with_timeout:
delta = chunk['choices'][0]['delta']
if 'content' in delta:
content = delta['content']
print_colored('' if complete_string else 'assistent', content, 'green', end='')
complete_string += content
total_chars_generation += len(content)
print('\n')
money_prompt = round(total_chars_prompt / 3.4 * pricing_prompt / 1000, 2)
money_generation = round(total_chars_generation / 3.4 * pricing_generation / 1000, 2)
print('money prompt:', f'${money_prompt}')
print('money generation:', f'${money_generation}')
print('total money:', f'${money_prompt + money_generation}')
print('\n')
return complete_string
except (RateLimitError, Timeout, ConnectionError, GenerationTimeoutError) as e: except (RateLimitError, Timeout, ConnectionError, GenerationTimeoutError) as e:
print(e) print(e)
print('retrying') print('retrying, be aware that this affects the cost calculation')
sleep(3) sleep(3)
continue continue
chars_prompt = sum(len(prompt[1]) for prompt in prompt_list)
chars_generation = len(complete_string)
self.cost_callback(chars_prompt, chars_generation)
return complete_string
raise Exception('Failed to get response') raise Exception('Failed to get response')
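Summing up the diff: the module-level `Conversation` class and global cost counters become a `GPTSession` that checks the API key and probes GPT-4 availability once, hands out `_GPTConversation` objects via `get_conversation()`, and accumulates the cost estimate across all of them. A minimal usage sketch, assuming `OPENAI_API_KEY` is set and using a hypothetical prompt:
```python
from src import gpt

session = gpt.GPTSession()                 # raises if OPENAI_API_KEY is missing;
                                           # picks gpt-4 if available, else gpt-3.5-turbo

conversation = session.get_conversation()  # fresh conversation seeded with the system prompt
answer = conversation.query('Suggest a name for a PDF-to-text executor.')  # hypothetical prompt
print(answer)
# after each query, cost_callback prints the running 'money prompt' /
# 'money generation' / 'total money' estimate for the whole session
```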

src/key_handling.py (new file)

@@ -0,0 +1,69 @@
import os
import platform
import subprocess
import click
try:
import psutil
except ImportError:
psutil = None
def get_shell():
if psutil is None:
return None
try:
p = psutil.Process(os.getpid())
while p.parent() and p.parent().name() != "init":
p = p.parent()
return p.name()
except Exception as e:
click.echo(f"Error detecting shell: {e}")
return None
def set_env_variable(shell, key):
shell_config = {
"bash": {"config_file": "~/.bashrc", "export_line": f"export OPENAI_API_KEY={key}"},
"zsh": {"config_file": "~/.zshrc", "export_line": f"export OPENAI_API_KEY={key}"},
"fish": {
"config_file": "~/.config/fish/config.fish",
"export_line": f"set -gx OPENAI_API_KEY {key}",
},
}
if shell not in shell_config:
click.echo("Sorry, your shell is not supported.")
return
config_file = os.path.expanduser(shell_config[shell]["config_file"])
with open(config_file, "a") as file:
file.write(f"\n{shell_config[shell]['export_line']}\n")
click.echo(f"OPENAI_API_KEY has been set in {config_file}.")
def set_api_key(key):
system_platform = platform.system().lower()
if system_platform == "windows":
set_env_variable_command = f'setx OPENAI_API_KEY "{key}"'
subprocess.call(set_env_variable_command, shell=True)
click.echo("OPENAI_API_KEY has been set.")
elif system_platform in ["linux", "darwin"]:
if "OPENAI_API_KEY" in os.environ:
if not click.confirm("OPENAI_API_KEY is already set. Do you want to overwrite it?"):
click.echo("Aborted.")
return
shell = get_shell()
if shell is None:
click.echo("Error: Unable to detect your shell or psutil is not available. Please set the environment variable manually.")
return
set_env_variable(shell, key)
else:
click.echo("Sorry, this platform is not supported.")


@@ -103,6 +103,7 @@ from jina import Client, Document, DocumentArray
client = Client(host='{FLOW_URL_PLACEHOLDER}') client = Client(host='{FLOW_URL_PLACEHOLDER}')
d = Document(uri='...') d = Document(uri='...')
d.load_uri_to_blob() d.load_uri_to_blob()
d.tags['style'] = 'abstract' # tags must be a flat dictionary where keys are strings and values are strings, ints, floats, or bools
response = client.post('/', inputs=DocumentArray([d])) # the client must be called on '/' response = client.post('/', inputs=DocumentArray([d])) # the client must be called on '/'
print(response[0].text) print(response[0].text)
``` ```
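As a small illustration of the flat-tags constraint added above, with hypothetical keys and values:
```python
from jina import Document

d = Document(uri='...')
# tags must stay a flat dict of str keys and str/int/float/bool values
d.tags['style'] = 'abstract'   # str
d.tags['width'] = 512          # int
d.tags['scale'] = 1.5          # float
d.tags['grayscale'] = False    # bool
# a nested value such as d.tags['options'] = {'size': 'large'} would violate the constraint
```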