Sam Altman recently announced a new update to OpenAI’s GPT model, and everyone is talking about it. We’re talking about it, your friends are talking about it, and you will be talking about it very soon. So join us and learn what’s so special about GPT-4 Turbo.
What is GPT-4 Turbo?
During OpenAI’s DevDay on November 6th, Sam Altman announced a long-awaited new GPT-4 update. With it came many new and exciting features and updates. So, what exactly is there in OpenAI’s new AI model?
Updated Knowledge Cutoff
“We are just as annoyed as all of you, probably more, that GPT-4’s knowledge about the world ended in 2021,” said Altman in his presentation. To remedy their annoyance, the team updated GPT-4’s knowledge cutoff to April 2023, with a promise of swifter updates in the future.
This is tremendous news, especially considering that Grok AI is nipping at OpenAI’s heels by offering access to real-time information.
More Context
Previously, GPT-4 supported an 8k-token context window (with a pricier 32k variant), which is plenty for most tasks. However, bigger tasks, such as a thorough analysis of larger documents, require a higher token count.
Well, they completely blew us out of the water, raising the bar to 128k. According to Altman himself, this is equivalent to roughly 300 pages—finally, someone to proofread your novel.
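The 300-page figure lines up with a common rule of thumb. The conversion rates below (about 0.75 English words per token, about 320 words per printed page) are general approximations, not OpenAI figures:

```python
# Rough sanity check of "128k tokens is roughly 300 pages".
# Assumptions (rules of thumb, not OpenAI numbers):
#   ~0.75 English words per token, ~320 words per printed page.
context_tokens = 128_000
words = context_tokens * 0.75   # about 96,000 words
pages = words / 320             # about 300 pages
print(round(pages))  # 300
```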
Reduced Price
The GPT-4 updates have brought plenty of stability and efficiency improvements, allowing OpenAI to drop the price significantly for developers. GPT-4 Turbo input tokens are 3 times cheaper than GPT-4 at $0.01 per 1K tokens, and output tokens are 2 times cheaper at $0.03 per 1K tokens.
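Here is what those per-1K rates work out to in practice. The token counts in the example are just an illustrative request, not real usage data:

```python
# GPT-4 Turbo pricing quoted at DevDay: $0.01 per 1K input tokens,
# $0.03 per 1K output tokens.
INPUT_PER_1K = 0.01
OUTPUT_PER_1K = 0.03

def turbo_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated GPT-4 Turbo cost in dollars for one request."""
    return input_tokens / 1000 * INPUT_PER_1K + output_tokens / 1000 * OUTPUT_PER_1K

# Example: a 10K-token prompt producing a 1K-token reply.
print(round(turbo_cost(10_000, 1_000), 2))  # 0.13
```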
Function Calling Updates
Function calling allows you to describe your functions to the model, which then returns a JSON object with the proper arguments to call them. Previously, the model could only request one function call per message. Not anymore.
With the Turbo update to GPT-4, the model can call multiple functions in a single message. Moreover, they’ve improved accuracy, so GPT-4 Turbo is more likely to return correct parameters.
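As a minimal sketch, here is what a request carrying several function definitions looks like as a Chat Completions request body. The function names and schemas are hypothetical examples; the `tools` shape follows the API:

```python
# Two function definitions travel in one request; with parallel
# function calling, the model can ask for both in a single reply.
def function_tool(name: str, description: str) -> dict:
    """Build a tool entry with a single string parameter 'city'."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

request = {
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "Weather and local time in Paris?"}],
    "tools": [
        function_tool("get_weather", "Current weather for a city."),
        function_tool("get_local_time", "Current local time for a city."),
    ],
}
print(len(request["tools"]))  # 2
```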
Improved Instruction Following and JSON mode
GPT-4 Turbo outshines prior models in tasks requiring precision, like generating specific formats (e.g., always delivering in XML). It also supports JSON mode for ensuring valid JSON responses, enabled by the API parameter ‘response_format’. This model is a boon for developers using the Chat Completions API to generate syntactically correct JSON objects, even beyond function calls.
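Enabling JSON mode is a single extra field in the request; note that the API also expects the word “JSON” to appear somewhere in your messages. A sketch, with a made-up prompt:

```python
# Chat Completions request with JSON mode switched on.
request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},  # response is valid JSON
    "messages": [
        # The word "JSON" must appear in the messages for JSON mode.
        {"role": "system", "content": "Answer in JSON with keys 'name' and 'price'."},
        {"role": "user", "content": "Describe the cheapest laptop you know."},
    ],
}
print(request["response_format"]["type"])  # json_object
```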
Reproducible Outputs and Log Probabilities
The new ‘seed’ parameter makes outputs reproducible, a beta feature beneficial for debugging, robust unit testing, and gaining better control over the model’s behavior. OpenAI already employs this for its internal tests, praising its usefulness.
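A sketch of how you would use it: two requests sharing the same ‘seed’ (plus temperature 0) are the API’s best effort at identical outputs, though determinism is not absolutely guaranteed while the feature is in beta. The helper below just builds the request payload:

```python
# Build a Chat Completions payload pinned to a fixed seed.
def reproducible_request(prompt: str, seed: int = 42) -> dict:
    return {
        "model": "gpt-4-1106-preview",
        "seed": seed,        # beta: best-effort determinism
        "temperature": 0,    # further reduces output variance
        "messages": [{"role": "user", "content": prompt}],
    }

a = reproducible_request("Name a prime number.")
b = reproducible_request("Name a prime number.")
print(a == b)  # True: byte-identical request payloads
```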
Additionally, a new feature is poised to be released soon. It will provide log probabilities for tokens from GPT-4 Turbo and GPT-3.5 Turbo, aiding in functionalities like search autocomplete.
How Do I Update?
To use the new GPT-4 Turbo API, a paying developer needs to pass ‘gpt-4-1106-preview’ as the model name in the API. The stable, production-ready model is coming in a few weeks.
Are We Getting ChatGPT-4 Turbo?
Absolutely! The ChatGPT-4 updates are live, and you don’t need to do anything. Frankly speaking, the transition was so seamless it would have flown under our radar if not for the announcement.
The same goes for the ChatGPT 3.5 model. However, its knowledge cutoff is January 2022, earlier than GPT-4 Turbo’s, but still suitable for a free alternative.
Anything Else?
Plenty! Besides updates to the GPT-4 model and the ChatGPT update, there are many new features for developers and users to explore.
GPT-4 Turbo with Vision
GPT-4 Turbo can now accept images as inputs with its Chat Completions API. Here’s what you can do with it:
- Generate captions
- Read text from documents and other physical media
- Identify objects
And that’s just off the top of our heads. You can find more app ideas for GPT-4 Turbo with Vision in our blog.
There are plenty of wild use cases.
For example, Be My Eyes uses this new tech to assist people who are blind or have low vision in identifying products or navigating the physical space. This is just one example of artificial intelligence services being put to good use, and we are eagerly awaiting to see what other uses people come up with.
You can access this feature in the API by using ‘gpt-4-vision-preview’. The full release is planned alongside the production-ready release of GPT-4 Turbo. The price will depend on the input image size. OpenAI cites an image of 1080×1080 pixels costing $0.00765 with GPT-4 Turbo.
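A sketch of what a Vision request looks like: the user message content becomes a list mixing text parts and image parts. The URL below is a placeholder, not a real image:

```python
# GPT-4 Turbo with Vision: a user message carrying text plus an image.
request = {
    "model": "gpt-4-vision-preview",
    "max_tokens": 300,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown in this photo?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},  # placeholder
                },
            ],
        }
    ],
}
parts = [p["type"] for p in request["messages"][0]["content"]]
print(parts)  # ['text', 'image_url']
```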
DALL·E 3
On the topic of images, DALL·E 3 can now be used by devs in their apps through the Images API by specifying ‘dall-e-3’ as the desired model. Snap, Coca-Cola, and Shutterstock already use this feature for their customers and marketing campaigns.
The prices start at $0.04 per image generated but differ depending on the quality and format.
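For reference, a DALL·E 3 request through the Images API is a small payload like the one below; the prompt is just an example, and ‘quality’ is one of the knobs that moves the price:

```python
# Images API request for DALL·E 3.
request = {
    "model": "dall-e-3",
    "prompt": "A watercolor lighthouse at dawn",  # example prompt
    "size": "1024x1024",
    "quality": "standard",  # "hd" costs more
    "n": 1,                 # dall-e-3 generates one image per request
}
print(request["model"])  # dall-e-3
```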
Text-To-Speech (TTS)
This API now also includes the ability to synthesize human-like speech via ‘tts-1’ and ‘tts-1-hd’ models. The first is supposedly optimized for real-time generation, while the second is for higher quality.
You can check out the voice samples in their blog article. The quality itself is fairly good. There are still some kinks to iron out, and it still lacks the full emotional expression of a real human being, especially a trained professional. But it’s getting there.
The price is set at $0.015 per 1000 characters.
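A quick sketch of a TTS request plus the per-character arithmetic; the voice name comes from OpenAI’s built-in set, and the input text is just an example:

```python
# Audio speech request sketch plus the quoted per-character price.
text = "Hello from GPT-4 Turbo!"
request = {
    "model": "tts-1",  # or "tts-1-hd" for higher fidelity
    "voice": "alloy",  # one of the built-in voices
    "input": text,
}

PRICE_PER_1K_CHARS = 0.015  # dollars, tts-1 rate
cost = len(text) / 1000 * PRICE_PER_1K_CHARS
print(round(cost, 6))  # 0.000345 for this 23-character input
```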
GPTs
The feature name is a bit confusing, so let us elaborate. OpenAI allows users to create their own customized versions of ChatGPT. Anyone can create them, as no coding is required. You can create them for personal, commercial, and public use.
Moreover, the GPT Store is rolling out later this month, allowing you to access user-made models. The store will feature top creators and offer financial incentives according to the number of people using the GPT.
Example versions are already available to ChatGPT Plus and Enterprise users, with access coming for more users soon.
GPT-3.5 Turbo
An astute reader might have already noticed that we’ve mentioned updates to GPT-3.5. Yes, indeed, OpenAI has updated the GPT-3.5 model in both the API and ChatGPT.
While not as innovative, the updates are still pretty fundamental. The model’s accuracy has improved, and its knowledge cutoff has been refreshed to January 2022. The context window was also increased from 4k to 16k tokens. Additionally, the API version became significantly cheaper, allowing you to use both the fine-tuned 4k version and the new 16k version.
Conclusion
There are plenty more small changes that add up to creating a truly incredible update for OpenAI’s brainchild. It always seems like the next ChatGPT update will not impress us, yet they manage to do it every time.
We are looking forward to what the future brings for ChatGPT and what it entails for the global development community. There are plenty of amazing things developers have already done, and it’s getting hard to imagine how far we can stretch the limits of possibility. If you’re up to building your own ChatGPT, LITSLINK offers top-notch artificial intelligence services to help you stand out. Contact us!