Today, I will explain how I built a chat app using Python and Streamlit that runs on the OpenAI API. It is a fun and rewarding project, and you can customize the chatbot to fit your needs.
To start building the chat app, you need an OpenAI API key. The key authenticates your requests to the OpenAI API; without it, the app cannot access the API at all. After creating an account on the OpenAI website, you can generate a key from your account settings.
Step 1: Sign up for an account on the OpenAI website to access your API key.
Step 2: Create your project directory. Its structure should look like this:
project
├── .env
├── .gitignore
├── app.py
├── requirements.txt
└── README.md
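The .env file will hold the API key, and .gitignore keeps it out of version control. A minimal sketch of both files (the key value is a placeholder, not a real key):

```
# .env -- never commit this file
OPENAI_API_KEY=your-api-key-here

# .gitignore
.env
__pycache__/
```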
Step 3: Copy the list below into requirements.txt, then install the dependencies by running pip install -r requirements.txt in your terminal:
aiohappyeyeballs==2.3.5
aiohttp==3.10.3
aiosignal==1.3.1
altair==5.4.0
annotated-types==0.7.0
anyio==4.4.0
attrs==24.2.0
blinker==1.8.2
cachetools==5.4.0
certifi==2024.7.4
charset-normalizer==3.3.2
click==8.1.7
distro==1.9.0
frozenlist==1.4.1
gitdb==4.0.11
GitPython==3.1.43
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
Jinja2==3.1.4
jiter==0.5.0
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
multidict==6.0.5
narwhals==1.3.0
numpy==2.0.1
openai==1.40.3
packaging==24.1
pandas==2.2.2
pillow==10.4.0
protobuf==5.27.3
pyarrow==17.0.0
pydantic==2.8.2
pydantic_core==2.20.1
pydeck==0.9.1
Pygments==2.18.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
referencing==0.35.1
requests==2.32.3
rich==13.7.1
rpds-py==0.20.0
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
streamlit==1.37.1
tenacity==8.5.0
toml==0.10.2
tornado==6.4.1
tqdm==4.66.5
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.2
watchdog==4.0.2
yarl==1.9.4
Step 4: Open the app.py file in a text editor like VS Code and start to write or copy the code below into it.
Model Options: List the models you want to offer. At present, gpt-4o-mini is both capable and cheap.
Persona: This is a system message that will directly affect the way our chat app responds to our prompt. So, it should be carefully crafted.
import streamlit as st
import sqlite3
import openai
from openai import OpenAI
import time
import os
# Constants
MODEL_OPTIONS = ["gpt-4o-mini", "gpt-4o", "gpt-3.5-turbo"]
PERSONAS_OPTIONS = {
    "Analytical": "Provide detailed, logical analyses.",
    "Business_Consultant": "Offer strategic business advice and insights.",
    "Chef": "Share cooking tips, recipes, and culinary advice.",
    "Code_Reviewer": "Analyze code snippets for best practices and potential bugs.",
    "Concise": "Give brief, to-the-point responses.",
    "Creative": "Offer imaginative and original responses.",
    "Default": "Act as a helpful assistant.",  # Default persona
}
TONE_OPTIONS = [
    "Professional",
    "Casual",
    "Friendly",
    "Formal",
    "Humorous"
]
DEFAULT_MODEL = "gpt-4o-mini"
DEFAULT_PERSONA = "Default"
Step 5: Export the API key to the system environment so that our app can safely access the key.
# Initialize OpenAI client
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
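This assumes the key is already present in the environment. On a Unix-like shell you can export it for the current session (placeholder value shown); alternatively, since python-dotenv is in requirements.txt, you can keep the key in .env and call load_dotenv() before creating the client.

```shell
# Make the key visible to the app for this shell session (placeholder value)
export OPENAI_API_KEY="your-api-key-here"
```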
Step 6: Now, let's set up the SQLite3 database, then the cache. The cache temporarily stores frequently accessed data, so a repeated identical request is answered from the cache instead of triggering a new API call.
# Database setup
def init_db():
    conn = sqlite3.connect('chat_history.db')
    c = conn.cursor()
    c.execute('''CREATE TABLE IF NOT EXISTS messages
                 (role TEXT, content TEXT, timestamp REAL)''')
    conn.commit()
    return conn
st.set_page_config(page_title="Chatbee🐝", page_icon="🐝")
# Cache setup
@st.cache_data(ttl=3600)
def get_openai_response(messages, model, max_tokens, temperature):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        max_tokens=max_tokens,
        temperature=temperature
    )
    return response.choices[0].message.content
# Process user input
def process_user_input(prompt):
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    # Prepare messages for API call
    messages = [
        {"role": "system", "content": f"You are acting as a {persona_key} persona. {persona}\n\nTone: {tone}"},
        *st.session_state.messages
    ]
    # Get AI response
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        response = get_openai_response(messages, model, max_tokens, temperature)
        full_response += response
        message_placeholder.markdown(full_response + "▌")
        message_placeholder.markdown(full_response)
    # Add AI response to chat history
    st.session_state.messages.append({"role": "assistant", "content": full_response})
    # Save to database
    conn = init_db()
    c = conn.cursor()
    c.executemany("INSERT INTO messages VALUES (?, ?, ?)", [
        ("user", prompt, time.time()),
        ("assistant", full_response, time.time())
    ])
    conn.commit()
    conn.close()
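To check that conversations are actually being persisted, chat_history.db can be queried outside the app. A minimal standalone sketch (the table layout matches init_db above; db_path defaults to the same file the app writes):

```python
import sqlite3

def recent_messages(db_path="chat_history.db", limit=10):
    """Return up to `limit` of the newest (role, content, timestamp) rows."""
    conn = sqlite3.connect(db_path)
    try:
        c = conn.cursor()
        # Same schema as init_db(), so this also works on a fresh database
        c.execute('''CREATE TABLE IF NOT EXISTS messages
                     (role TEXT, content TEXT, timestamp REAL)''')
        c.execute("SELECT role, content, timestamp FROM messages "
                  "ORDER BY timestamp DESC LIMIT ?", (limit,))
        return c.fetchall()
    finally:
        conn.close()
```

After a few chat turns, calling recent_messages() should show alternating user and assistant rows, newest first.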
Step 7: All the configuration and parameters for this application are placed in the sidebar. We can select the model, persona, and tone that we want to use. We can also adjust the advanced settings, such as the maximum number of tokens and the temperature.
# Sidebar configuration
with st.sidebar:
    st.markdown("<h3 style='text-align: center;'>⚙️ Configurations 🔧</h3>", unsafe_allow_html=True)
    model = st.selectbox("Model", MODEL_OPTIONS, index=MODEL_OPTIONS.index(DEFAULT_MODEL))
    persona_key = st.selectbox("Persona", list(PERSONAS_OPTIONS.keys()), index=list(PERSONAS_OPTIONS.keys()).index(DEFAULT_PERSONA))
    persona = PERSONAS_OPTIONS[persona_key]
    tone = st.selectbox("Tone", TONE_OPTIONS)
    with st.expander("Advanced Settings"):
        max_tokens = st.slider("Max Tokens", min_value=50, max_value=2000, value=150, step=50)
        temperature = st.slider("Temperature", min_value=0.0, max_value=1.0, value=0.7, step=0.1)
    st.markdown("---")  # Add a separator
    if st.button("Clear Chat History"):
        st.session_state.messages = []
        st.rerun()
Step 8: Start the app from the project directory with streamlit run app.py. The main chat window displays the chat history between us and the AI assistant. We can type a message and send it to the assistant, which replies in the selected persona and tone.
# Main chat window
st.markdown('<h1 style="text-align: center; color: #6ca395;">Chatbee🐝</h1>', unsafe_allow_html=True)
st.markdown('<p style="text-align: center; color: #FF0000;">always at your service</p>', unsafe_allow_html=True)
# Initialize session state
if 'messages' not in st.session_state:
    st.session_state.messages = []
# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
# Chat input
if prompt := st.chat_input("Hello, how can I help you?"):
    process_user_input(prompt)
Complete code is available for cloning: https://github.com/emeeran/Chatbee.git