In a previous article, the process of retrieving OpenAI models and their capabilities using Python was demonstrated.
In this post, the same functionality will be extended by creating a web interface using Streamlit, allowing users to interact with OpenAI models via their browser.
The web app will list available OpenAI models, perform basic tests, and display the results in real-time.
Why Streamlit?
Streamlit is an easy-to-use Python framework that enables developers to build interactive web applications with minimal effort.
By using Streamlit, complex tasks like interacting with APIs can be simplified into a user-friendly interface, making it accessible even to those who may not be comfortable with the command line.
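As a quick illustration of how little code a Streamlit page needs, here is a minimal sketch (not part of the model checker itself) that renders a title, a text input, and a button:

import streamlit as st

st.title("Hello, Streamlit")
name = st.text_input("Your name:")
if st.button("Greet"):
    st.write(f"Hello, {name}!")

Saving this to a file and running it with streamlit run produces a working web page with no HTML or JavaScript required.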
Prerequisites
To follow along, users will need:
- Python 3.x
- The following Python packages: streamlit, openai, pandas, and python-dotenv
To install the required packages, run the following command:
pip install streamlit openai pandas python-dotenv
Setting Up Environment Variables
The OpenAI API key should be stored securely in a .env file within the project directory. Add the following line to the .env file:
OPENAI_API_KEY=your_openai_api_key_here
This will allow the app to retrieve the API key without hardcoding it in the source code.
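As a quick sanity check before wiring the key into the app, the following small snippet (an illustration only, not part of the checker) loads the .env file with python-dotenv and confirms the key is visible to the process:

import os
from dotenv import load_dotenv

# Read key/value pairs from the .env file into the process environment
load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
print("API key loaded" if api_key else "OPENAI_API_KEY is not set")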
The Code
Below is the complete code for the Streamlit web app, which will be accessible on port 6060:
import streamlit as st
import os
import openai
import pandas as pd
from dotenv import load_dotenv
from datetime import datetime

# Load .env environment variables
load_dotenv()

# Check if the API key is set in the environment
env_api_key = os.getenv('OPENAI_API_KEY')

# Function to run the model check
def run_model_check(api_key):
    openai.api_key = api_key

    # Retrieve models from OpenAI and store in DataFrame
    models = openai.Model.list()
    data = pd.DataFrame(models["data"])
    data['created'] = pd.to_datetime(data['created'], unit='s')
    data.set_index('id', inplace=True)

    # Sort the data by 'owned_by' and 'created' in descending order
    data.sort_values(by=['owned_by', 'created'], ascending=[False, False], inplace=True)

    # Initialize a new column 'CHECK' with default value 'NO'
    data['CHECK'] = 'NO'

    # Progress bar setup
    progress_bar = st.progress(0)
    total_models = len(data)
    checked_models = 0

    # Iterate over each model and test it
    for model_id in data.index:
        try:
            response = openai.ChatCompletion.create(
                model=model_id,
                messages=[
                    {"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": "What is the capital of Norway?"},
                ],
            )
            if "Oslo" in response['choices'][0]['message']['content']:
                data.loc[model_id, 'CHECK'] = 'OSLO'
        except Exception:
            # Models that do not support chat completions keep the default 'NO'
            pass
        checked_models += 1
        progress_bar.progress(checked_models / total_models)

    # Reorder the columns to place 'CHECK' as the second column
    data = data[['CHECK', 'object', 'created', 'owned_by']]

    # Display the table in Streamlit
    st.write("Current Date and Time: ", datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    st.dataframe(data)

# Streamlit webpage title
st.title("OpenAI Model Checker")

if env_api_key:
    # If API key is available in environment, run the model check automatically
    run_model_check(env_api_key)
else:
    # If API key is not available, ask for it in the UI
    user_api_key = st.text_input("Enter OpenAI API Key:", type="password")
    run_button = st.button("Run Model Check")
    if run_button and user_api_key:
        run_model_check(user_api_key)
Code Breakdown
- API Key Handling: The OpenAI API key is retrieved from a .env file. If the key is not found, the user is prompted to input it manually in the web interface.
- Retrieving Models: The app uses the OpenAI API to list available models, which are stored in a Pandas DataFrame for further manipulation.
- Model Testing: Each model is tested by sending a basic query (“What is the capital of Norway?”). Models that return the correct answer (“Oslo”) are marked accordingly; a standalone version of this check is sketched after this list.
- Interactive Interface: The progress of the model checks is displayed with a progress bar, and the results are shown in a dynamically updating table.
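For readers who want to experiment with the model test outside Streamlit, the following standalone sketch runs the same single-model check from the command line. It assumes the pre-1.0 openai package used above, and the model name is only a placeholder to swap for any entry returned by openai.Model.list():

import os
import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Placeholder model name; replace with any chat-capable model from openai.Model.list()
model_id = "gpt-3.5-turbo"

response = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of Norway?"},
    ],
)

answer = response["choices"][0]["message"]["content"]
print(f"{model_id}: {'OSLO' if 'Oslo' in answer else 'NO'} ({answer!r})")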
Running the Streamlit App
Once the code is saved to a Python file (e.g., stream-ai-models-6.py), the app can be run on port 6060 using the following command:
streamlit run stream-ai-models-6.py --server.address 0.0.0.0 --server.port 6060
This will launch the web app, accessible at localhost:6060. From there, users can interact with the OpenAI API, view available models, and see the results of the tests directly in the browser.
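Alternatively, the bind address and port can be kept out of the command line by placing them in a .streamlit/config.toml file next to the script (Streamlit's standard configuration mechanism); the values below mirror the command above, after which a plain streamlit run stream-ai-models-6.py is enough:

[server]
address = "0.0.0.0"
port = 6060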
See an updated list of available models below. The Streamlit code snippet can also be found here.
The next step will be to create a Docker image so that this code can run as a Docker container or a Kubernetes pod.