With the recent drama coming from OpenAI, here is how you avoid vendor lock-in.
Fri, December 8 2023
Jake Landers
Developer and Creator
The recent drama surrounding OpenAI has been an interesting rollercoaster, to say the least. Regardless of your personal feelings about the news, one thing is clear: having your products or systems depend solely on the OpenAI API is a recipe for disaster.
Tools like LangChain can be a great way to stay model-agnostic, but massive frameworks like that come with a lot of hidden functionality and magic-wand waving.
This guide is for developers who choose not to use larger frameworks and instead call the APIs directly, in Python or otherwise.
Before we look at how to migrate your code, let's take a look at what other LLM options we have besides OpenAI.
In this tutorial, we will be using Anthropic's Claude 2.1.
First, we will create a bog-simple application using the openai python package to generate a JSON object response giving us book ideas.
```python
import openai
import json

client = openai.OpenAI()

SYS_MSG = """
You are an AI that generates creative book ideas.
You will be passed a topic, and you will respond with 3 book ideas.
You must respond with the following JSON schema:
{"ideas": [{"title": str, "author": str, "genre": str}]}
"""

messages = [
    {"role": "system", "content": SYS_MSG},
    {"role": "user", "content": "Topic: Realistic science fiction about space travel."},
]

# send the request
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    temperature=1.0,
    messages=messages,
    timeout=30,
)

# error handling
if len(response.choices) == 0:
    exit(1)
if response.choices[0].finish_reason != "stop":
    exit(1)

# attempt to parse the object
obj = json.loads(response.choices[0].message.content)

print(json.dumps(obj, indent=4))
```
When we run this code, we get the response we expect. Great.
```json
{
    "ideas": [
        {
            "title": "The Kepler Memoirs",
            "author": "Adrian Clarke",
            "genre": "Science Fiction / Adventure"
        },
        {
            "title": "Void Horizon",
            "author": "Eleanor Voss",
            "genre": "Science Fiction / Hard Sci-Fi"
        },
        {
            "title": "Gravity's Echo",
            "author": "Leonard S. Huxley",
            "genre": "Science Fiction / Speculative Fiction"
        }
    ]
}
```
Now it is time to develop the Anthropic equivalent using the anthropic Python package.
```python
import anthropic
import json

client = anthropic.Anthropic()

PROMPT = """

Human:
You are an AI that generates creative book ideas.
You will be passed a topic, and you will respond with 3 book ideas.
You **MUST ONLY** respond with the following JSON schema:
{"ideas": [{"title": str, "author": str, "genre": str}]}
Topic = Realistic science fiction about space travel.

Assistant:"""

# send the request
response = client.completions.create(
    model="claude-2.1",
    temperature=0.5,
    timeout=30,
    max_tokens_to_sample=1024,
    prompt=PROMPT,
)

# parse the response
obj = json.loads(response.completion)
print(json.dumps(obj, indent=4))
```
As you can see, the code structure is very similar, with some differences in how Anthropic expects your prompt to be constructed. Prompts sent to the Claude 2.x completions API must begin with `\n\nHuman:`, and there is no concept of a system message, so the system instructions were folded into the human turn of the prompt.
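If you want to make this conversion repeatable rather than hand-editing each prompt, one approach is a small helper that folds OpenAI-style chat messages into the Claude 2.x text format. This is a sketch of my own; the function name and structure are illustrative, not part of either SDK:

```python
def to_claude_prompt(messages: list[dict]) -> str:
    """Convert OpenAI-style chat messages into a Claude 2.x text prompt.

    Claude 2.x has no system role, so system content is folded into the
    first Human turn. The prompt starts with "\\n\\nHuman:" and ends with
    "\\n\\nAssistant:" so the model knows it is its turn to speak.
    """
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    turns = []
    for m in messages:
        if m["role"] == "user":
            turns.append(f"\n\nHuman: {m['content']}")
        elif m["role"] == "assistant":
            turns.append(f"\n\nAssistant: {m['content']}")
    if system_parts and turns:
        # prepend the system text to the first human turn
        sys_text = "\n".join(system_parts)
        turns[0] = turns[0].replace("\n\nHuman: ", f"\n\nHuman: {sys_text}\n", 1)
    return "".join(turns) + "\n\nAssistant:"
```

With a helper like this, the same `messages` list from the OpenAI example can be reused for the Anthropic call without restructuring your application.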
You can read more about the Anthropic API here.
Also, in our testing we found that Anthropic needed a bit more nudging to produce JSON output. Markdown bolding seemed to give the most consistent results, but your mileage may vary.
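Because the output is less reliably pure JSON, a bare `json.loads` can throw if the model wraps the object in prose or a code fence. One defensive option (a hypothetical helper of mine, not part of either SDK) is a lenient parse that falls back to slicing out the outermost braces:

```python
import json


def parse_json_reply(text: str) -> dict:
    """Best-effort JSON extraction from a model reply.

    Tries a strict parse first; if that fails, falls back to slicing
    from the first '{' to the last '}' to strip surrounding prose or
    markdown code fences.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        start, end = text.find("{"), text.rfind("}")
        if start == -1 or end <= start:
            raise
        return json.loads(text[start:end + 1])
```

Replacing the `json.loads(response.completion)` call with this helper makes the Anthropic path more tolerant of chatty replies.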
```json
{
    "ideas": [
        {
            "title": "Wormholes to the Stars",
            "author": "John Smith",
            "genre": "Science Fiction"
        },
        {
            "title": "The Mars Colonization Project",
            "author": "Jane Doe",
            "genre": "Science Fiction"
        },
        {
            "title": "Journey to a Distant Planet",
            "author": "Bob Johnson",
            "genre": "Science Fiction"
        }
    ]
}
```
Voila! As you can see, we get a very similar result from the Anthropic API.
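To tie this back to the lock-in point: once both calls work, a thin interface keeps the rest of your application vendor-agnostic. The sketch below is my own suggestion, not from either SDK; the `Protocol` name and stub class are illustrative, and in a real app you would write `OpenAIGenerator` and `AnthropicGenerator` classes wrapping the code from this post:

```python
import json
from typing import Protocol


class IdeaGenerator(Protocol):
    """Anything that turns a topic into a parsed ideas dict."""

    def generate(self, topic: str) -> dict: ...


class StubGenerator:
    # Stands in for a real OpenAI- or Anthropic-backed implementation
    # so the call site can be shown without network access.
    def generate(self, topic: str) -> dict:
        return {"ideas": [{"title": f"A Study of {topic}",
                           "author": "Placeholder",
                           "genre": "Science Fiction"}]}


def print_ideas(gen: IdeaGenerator, topic: str) -> None:
    # The caller never imports openai or anthropic directly, so swapping
    # vendors is a one-line change where the generator is constructed.
    print(json.dumps(gen.generate(topic), indent=4))


print_ideas(StubGenerator(), "space travel")
```

With this shape, the next round of provider drama becomes a one-line change instead of a rewrite.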