Recently, OpenAI extended its API beyond the original Chat Completions endpoint with the new Assistants API.
Although still marked as "beta", it works really well, and its abstractions (Assistants, Threads, Runs, Messages) map more closely to how people actually use chat models; I built a Jupyter Notebook to illustrate its usage end-to-end.
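For contrast, the pre-Assistants style is a single stateless call: no server-side threads, and the caller must resend the whole conversation history on every turn. A minimal sketch (the helper name and model choice are my own, and it assumes `OPENAI_API_KEY` is set in the environment):

```python
def ask_chat(prompt, model="gpt-4"):
    """One-shot Chat Completions call: stateless, no server-side
    threads or assistants; history management is up to the caller."""
    from openai import OpenAI  # deferred import so this loads without the SDK installed
    resp = OpenAI().chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```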

The full Jupyter Notebook is available at my GitHub repository, and shows the step-by-step Python API calls that are required to build a conversational thread with GPT.
The core is summarized here as:
name = "Go Dev"
instructions = """You are a dedicated GoLang developer,
and [...] full code for functions and types.
"""
content = """
Using the `go-openai` library in Go, [...].
"""
# 0. We need an assistant
new_assistant(name, instructions)
# 1. Get a Thread, and append a message to it
thread = new_thread()
add_msg_to_thread(thread, content)
# 2. Get a new Run, and associate it with our Thread
# We will use the Assistant (Go Dev) we created before.
run = new_run(thread=thread, asst_name=name)
# 3. We then ask GPT for advice
if wait_on_run(run, thread):
    response = get_response(thread)
    print(f"{name} says:\n{response}")
else:
    print(f"We failed! Status: {run.status}, {run.incomplete_details}")
Please see the full Notebook for the complete listing of the helper functions.
The full API documentation is here, along with a Playground for experimenting with different assistant configurations.
Please feel free to reach out if you have an interesting LLM project (OpenAI-related or otherwise) and would like some help in getting it off the ground: my LinkedIn profile is here.