OpenAI announced two new APIs, for its ChatGPT and Whisper models, that enable high-volume use and improve the functionality of ChatGPT for production applications. For roboticists and robotics application developers, the Whisper API provides programmatic access to speech-to-text functionality.
Developers can now integrate the ChatGPT and Whisper models into their apps and products through OpenAI-supported APIs. The hosted open-source Whisper large-v2 model is now faster and cheaper for developers to use than before.
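For example, a robot's voice interface could send recorded audio to the Whisper API for transcription. The following is a minimal sketch using the openai Python package; the API key placeholder and the audio file name are assumptions for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

# Transcribe a recorded voice command (hypothetical file name)
with open("voice_command.wav", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])  # the recognized text of the spoken command
```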
ChatGPT API users can expect ongoing model upgrades and the option of dedicated capacity for deeper control over the models. The company has also improved its API terms of service based on developer feedback. All of these changes encourage the use of the API in new applications.
A new ChatGPT model called gpt-3.5-turbo was released this week and is the same model used in the ChatGPT product. API access is priced at $0.002 per 1,000 tokens, which is 10x cheaper than the existing GPT-3.5 models. The company claims that only small changes to existing prompts are needed to use the API.
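A minimal call to the new model through the openai Python package looks like the sketch below; the API key placeholder and the example prompt are assumptions for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the new ChatGPT API model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a LiDAR sensor does."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The response also reports prompt and completion token counts under its `usage` field, which is what the $0.002 per 1,000 tokens pricing is billed against.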
Finally, OpenAI is offering dedicated instances of ChatGPT for API users who need the performance and availability that a dedicated instance offers. Developers get full control over the instance’s load, the option to enable features such as longer context limits, and the ability to pin the model snapshot. According to the company, dedicated instances can make economic sense for developers running beyond ~450M tokens per day.
With a renewed focus on developers, the company is making several changes to its policies with this release:
- Data submitted through the API is no longer used for service improvements
- A default 30-day data retention policy now applies to API users
- Pre-launch review has been removed
- Developer documentation has been improved
What does it mean for robotics?
Traditionally, GPT models read unstructured text, which the model sees as a series of “tokens.” ChatGPT models, on the other hand, take in a series of messages along with metadata. Under the hood, those messages are converted into a new raw format called Chat Markup Language (“ChatML”) before the model consumes them as tokens.
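In practice, developers do not write ChatML directly; the API accepts a list of role-tagged messages and handles the conversion. A hypothetical exchange with a service robot might be structured like this:

```python
# Each message carries a role ("system", "user" or "assistant") plus its content.
messages = [
    {"role": "system", "content": "You are the voice of a hotel concierge robot."},
    {"role": "user", "content": "What time does the restaurant open?"},
    {"role": "assistant", "content": "The restaurant opens at 7:00 a.m."},
    {"role": "user", "content": "Can you book me a table for two at 8?"},
]
```

Because the API is stateless, prior assistant turns are resent with each request; that is how a conversation with the robot keeps its context.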
This opens the door to creative new ways that ChatGPT can be used in robotics applications. The system could be leveraged to enhance interactivity between a robot and its end users, especially in interactions with service robots or robotic applications that interface with non-professional users (i.e. the public).
The API implementation of ChatGPT enables developers to filter both user inputs (to better shape a prompt) and ChatGPT's responses, as sketched below. Snapchat, Instacart and Shopify are already building on the ChatGPT API, which will help ensure that it is scalable and hardened for the high volumes of usage these applications will deliver.
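As a rough illustration of that filtering pattern, the deny-list, system prompt and fallback replies below are hypothetical and would need to be adapted to a real deployment.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

BLOCKED_TERMS = {"disable safety", "admin password"}  # hypothetical deny-list


def filtered_chat(user_text: str) -> str:
    """Screen the user's request, query gpt-3.5-turbo, then screen the reply."""
    if any(term in user_text.lower() for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are the voice interface of a service robot."},
            {"role": "user", "content": user_text},
        ],
    )
    reply = response["choices"][0]["message"]["content"]

    # Post-filter the model's reply before the robot speaks it aloud.
    if any(term in reply.lower() for term in BLOCKED_TERMS):
        return "Let me find a staff member who can help with that."
    return reply
```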
The system has already proven useful for generating code that can operate a robot, as Microsoft engineers recently demonstrated.