OpenAI has announced a series of improvements to its large language models GPT-4 and GPT-3.5, including an updated knowledge cutoff and a much longer context window, headlined by a new model called GPT-4 Turbo. The company also said it would follow Google and Microsoft in protecting its customers against copyright lawsuits.
GPT-4 Turbo is cheaper and more powerful
GPT-4 Turbo, currently available via an API preview, has been trained on data up to April 2023, the company announced at its first developer conference on Monday. The previous version of GPT-4, released in March, was only trained on data up to September 2021, so the new Turbo model has a much more up-to-date knowledge base.
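For developers who want to try the preview, a request looks roughly like the minimal sketch below. It assumes the OpenAI Python SDK (the v1-style client) and the gpt-4-1106-preview model identifier OpenAI listed for the preview, with an OPENAI_API_KEY environment variable set.

```python
# Minimal sketch: calling the GPT-4 Turbo preview via the OpenAI Python SDK.
# Assumes the v1-style client and the "gpt-4-1106-preview" model id from the
# DevDay announcement; the API key is read from the OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is your knowledge cutoff date?"},
    ],
)

print(response.choices[0].message.content)
```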
GPT-4 Turbo can also process far more data per request thanks to a 128K-token context window, which OpenAI says is “equivalent to more than 300 pages of text in a single prompt.” In general, a larger context window lets a large language model like GPT take in more of the prompt at once and produce more thoughtful, context-aware answers.
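As a rough sanity check on the “300 pages” figure, the back-of-envelope arithmetic below uses the common rule of thumb that a token is about three quarters of an English word; the words-per-page figure is an assumption, not something OpenAI has published.

```python
# Back-of-envelope check on the 128K-token context window.
# Assumptions: ~0.75 words per token (common rule of thumb) and
# ~300 words per page of plain prose; neither figure comes from OpenAI.
context_tokens = 128_000
words_per_token = 0.75
words_per_page = 300

approx_words = context_tokens * words_per_token   # ~96,000 words
approx_pages = approx_words / words_per_page      # ~320 pages

print(f"~{approx_words:,.0f} words, roughly {approx_pages:.0f} pages")
```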
Previously, OpenAI had released two versions of GPT-4, one with an 8K context window and the other with 32K. The newest version of GPT-4 continues to accept image prompts and text-to-speech requests, and it integrates with DALL-E 3, which was first announced in October.
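Image prompts go through the vision-enabled preview model. The sketch below assumes the gpt-4-vision-preview model id and the list-style message content from the same announcement; the photo URL is a placeholder.

```python
# Sketch: sending an image prompt to the vision-enabled GPT-4 Turbo preview.
# Assumes the "gpt-4-vision-preview" model id and the list-style message
# content announced at DevDay; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this photo."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```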
OpenAI says GPT-4 Turbo is cheaper for developers. Input costs $0.01 per 1,000 tokens, the basic units of text or code that LLMs process, compared to $0.03 for GPT-4, while output costs $0.03 per 1,000 tokens. Overall, OpenAI says input prices for the new version of GPT-4 are three times lower than before.
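In practice, the cost of a request follows directly from those two rates. The helper below is a hypothetical convenience function, not part of the OpenAI SDK; the prices are the announced $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.

```python
# Hypothetical helper (not part of the OpenAI SDK) that estimates the cost of
# a GPT-4 Turbo request from its token counts, using the announced prices:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.01
OUTPUT_PRICE_PER_1K = 0.03

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 10,000-token prompt with a 1,000-token reply costs about $0.13.
print(f"${estimate_cost(10_000, 1_000):.2f}")
```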
The company says improvements to GPT-4 Turbo mean that users can ask the model to perform more complex tasks in a single prompt. Users can even instruct GPT-4 Turbo to always respond in a specific format of their choice, such as XML or JSON.
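One way to steer the output format is simply to state it in the system message; OpenAI also announced a JSON mode at the same event that constrains the model to emit valid JSON. The sketch below combines the two, assuming the response_format parameter from that announcement.

```python
# Sketch: asking GPT-4 Turbo to answer strictly in JSON. The system message
# states the required format, and the response_format parameter (the JSON mode
# announced at DevDay) constrains the model to emit valid JSON.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "Always respond with valid JSON."},
        {"role": "user", "content": "List three uses of a 128K context window."},
    ],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
print(data)
```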
So what do you think about this topic? You can share your thoughts with us in the comments section.