
OpenAI’s commitment to pushing the boundaries of AI technology continues with the launch of GPT-4 Turbo, an impressive new model that promises to revolutionise the world of artificial intelligence. With this latest offering, OpenAI is making AI more capable, affordable, and versatile than ever before. Let’s delve into the remarkable features and developments that GPT-4 Turbo brings to the table, and how it is set to impact the AI landscape.

GPT-4 Turbo: The Next Evolution of ChatGPT

GPT-4 Turbo builds upon the success of its predecessor, GPT-4, which was initially released in March 2023. Now, OpenAI is taking it a step further by launching a preview of GPT-4 Turbo. This new model is not only more capable but also has knowledge of world events up to April 2023. With a substantial 128k context window, it can process the equivalent of more than 300 pages of text in a single prompt. Moreover, OpenAI has optimised its performance to offer GPT-4 Turbo at a fraction of the cost of GPT-4, making AI more accessible for users and developers alike. GPT-4 Turbo is set to become the standard for high-quality AI interactions and is available for developers to try via the API.
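
To make this concrete, here is a minimal sketch of calling the GPT-4 Turbo preview through the Chat Completions API with the official Python SDK (v1.x); gpt-4-1106-preview was the preview model name announced at launch, and the prompt text is a placeholder.

```python
# Minimal sketch: calling the GPT-4 Turbo preview via Chat Completions.
# Requires the openai package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise the key points of this report: ..."},
    ],
)
print(response.choices[0].message.content)
```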

Function Calling Updates

Function calling lets developers describe the functions of their app or external APIs to the model, which can then output a JSON object containing the arguments for those functions. OpenAI is rolling out several improvements, including parallel function calling: the ability to call multiple functions in a single message, which streamlines interactions with the model. The update also improves the model’s accuracy in returning the right function parameters, making it more efficient and user-friendly.
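
Below is a sketch of this flow using the Python SDK’s tools interface; get_weather is a hypothetical app function used purely for illustration, and with parallel function calling the model may return several tool calls in one message.

```python
# Sketch of function calling with the v1 Python SDK's "tools" interface.
# get_weather is a hypothetical function; the model returns its arguments
# as a JSON string, possibly for multiple calls in a single response.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Compare the weather in Paris and Tokyo."}],
    tools=tools,
)

# With parallel function calling, both cities can arrive in one message.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)  # JSON string -> dict
    print(call.function.name, args)
```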

Improved Instruction Following and JSON Mode

GPT-4 Turbo excels in following instructions accurately, making it a valuable tool for tasks that require precision, such as generating content in specific formats. Additionally, it introduces a JSON mode that ensures the model responds with valid JSON. This feature is particularly useful for developers generating JSON in the Chat Completions API outside of function calling. The response_format API parameter allows developers to constrain the model’s output to generate syntactically correct JSON objects, enhancing the model’s versatility.
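
A minimal sketch of JSON mode follows, assuming the launch-time requirement that the prompt itself mention JSON:

```python
# Sketch of JSON mode: response_format constrains the model to emit a
# syntactically valid JSON object. The prompt must also mention JSON,
# per the API's documented requirement.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": 'List three primary colours as {"colours": [...]}.'},
    ],
)
data = json.loads(response.choices[0].message.content)  # guaranteed parseable
print(data)
```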

Reproducible Outputs and Log Probabilities

With the new seed parameter, GPT-4 Turbo enables reproducible outputs, providing consistent completions most of the time. This feature is beneficial for various use cases, such as replaying requests for debugging and writing comprehensive unit tests, giving developers greater control over the model’s behaviour. Additionally, OpenAI is set to launch a feature that returns the log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo. This feature will be invaluable for building functionalities like autocomplete in search experiences.
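
Here is a sketch of how the seed parameter might be used to check reproducibility; the prompt is a placeholder, and, as noted above, identical seeds give matching completions most, not all, of the time.

```python
# Sketch of reproducible outputs via the seed parameter. The
# system_fingerprint field helps detect backend changes that can
# still alter results between otherwise identical requests.
from openai import OpenAI

client = OpenAI()

def sampled(seed: int) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        seed=seed,
        temperature=1.0,
        messages=[{"role": "user", "content": "Name a random fruit."}],
    )
    print("fingerprint:", response.system_fingerprint)
    return response.choices[0].message.content

# Two calls with the same seed are expected to match most of the time.
print(sampled(42))
print(sampled(42))
```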

Updated GPT-3.5 Turbo

In addition to GPT-4 Turbo, OpenAI is releasing a new version of GPT-3.5 Turbo. The model supports a 16K context window by default and brings improved instruction following, JSON mode, and parallel function calling. Developers can access it by calling gpt-3.5-turbo-1106 in the API. Applications currently using the gpt-3.5-turbo name will be automatically upgraded to the new model on December 11. Older models will remain accessible via gpt-3.5-turbo-0613 in the API until June 13, 2024.
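
A short sketch of pinning the new release explicitly, rather than relying on the gpt-3.5-turbo alias that will be repointed in December:

```python
# Sketch: pin the updated GPT-3.5 Turbo release by its dated name,
# so behaviour does not change when the generic alias is upgraded.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # pinned to the new 16K-context release
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```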

Assistants API: Building Agent-Like Experiences

OpenAI introduces the Assistants API, empowering developers to create agent-like AI experiences within their applications. These assistants are purpose-built AIs with specific instructions, leveraging additional knowledge and the ability to call models and tools to perform tasks. The API includes a Code Interpreter and Retrieval, among other capabilities, simplifying complex tasks and enabling the development of high-quality AI apps. The Assistants API supports a wide range of use cases, from natural language-based data analysis apps to voice-controlled DJ assistants, offering limitless possibilities.
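
Below is a sketch of the launch-time flow, assuming the beta endpoints under client.beta in the v1 Python SDK: create an assistant with the Code Interpreter tool, add a message to a thread, run the assistant, and poll for completion. Details may change as the beta evolves.

```python
# Sketch of the Assistants API beta: assistant -> thread -> run -> poll.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data Analyst",
    instructions="Answer data questions by writing and running Python.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)  # simple polling loop; the beta had no streaming at launch
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages are returned newest first; text content lives in .text.value.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)
```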

New Modalities in the API

GPT-4 Turbo is not limited to text-based interactions; it now supports vision, image creation (DALL·E 3), and text-to-speech (TTS) functionalities (a combined sketch follows the list):

– Vision: GPT-4 Turbo can accept images as inputs in the Chat Completions API, making it possible to generate captions, analyse images, and read documents with figures. This feature is ideal for applications like Be My Eyes, which assists visually impaired users.

– DALL·E 3: Developers can integrate DALL·E 3 directly into their apps and products, enabling the generation of images and designs programmatically. This feature has already been adopted by prominent companies such as Snap, Coca-Cola, and Shutterstock.

– Text-to-Speech (TTS): OpenAI now offers the ability to generate human-quality speech from text, with a choice of preset voices and two quality levels: one optimised for real-time use cases and one for audio quality.
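
The combined sketch below exercises all three modalities, assuming the launch names gpt-4-vision-preview, dall-e-3, and tts-1 with the alloy voice; the image URL and output path are placeholders.

```python
# Sketch of the three new modalities: vision input, image generation,
# and text-to-speech, using the v1 Python SDK.
from openai import OpenAI

client = OpenAI()

# 1) Vision: pass an image URL alongside text in a chat message.
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,  # the vision preview defaults to a very low limit
)
print(vision.choices[0].message.content)

# 2) DALL·E 3: generate an image programmatically.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour skyline of Seattle at dusk",
    size="1024x1024",
    n=1,  # dall-e-3 generates one image per request
)
print(image.data[0].url)

# 3) TTS: synthesise speech; tts-1 targets real-time use, tts-1-hd quality.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo now speaks as well as it reads.",
)
speech.stream_to_file("speech.mp3")  # write the audio bytes to disk
```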

Custom Models and Fine-Tuning

OpenAI is launching an experimental access program for GPT-4 fine-tuning. While preliminary results suggest that GPT-4 fine-tuning requires more effort for meaningful improvements compared to GPT-3.5, developers currently using GPT-3.5 fine-tuning will have the option to apply for the GPT-4 program as it evolves.

For organisations requiring extensive customisation, OpenAI is introducing a Custom Models program. This program enables organisations to work closely with OpenAI researchers to train custom GPT-4 models tailored to their specific domain. The program includes comprehensive customisation, from domain-specific pre-training to a custom reinforcement learning post-training process, ensuring exclusive access and privacy for organisations.

Lower Prices and Higher Rate Limits

OpenAI is reducing prices across the platform, making AI more affordable for developers. GPT-4 Turbo input tokens are 3× cheaper than GPT-4’s and output tokens are 2× cheaper, creating cost-effective opportunities for users. Rate limits are also being doubled for GPT-4 customers, allowing applications to scale more efficiently.

Copyright Shield and Open Source Advancements

OpenAI is taking steps to protect customers from legal claims related to copyright infringement. The new Copyright Shield initiative will defend customers and cover the costs incurred if they face such claims, ensuring greater security and support for users.

Additionally, OpenAI is introducing Whisper large-v3, the next version of its open-source automatic speech recognition (ASR) model, with improved performance across languages. OpenAI is also open-sourcing the Consistency Decoder, a drop-in replacement for the Stable Diffusion VAE decoder that improves the quality of images compatible with it.
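
As a sketch, the open-source openai-whisper package is expected to expose the new checkpoint under the large-v3 model name (this name is an assumption based on the release; audio.mp3 is a placeholder):

```python
# Sketch: local transcription with the open-source whisper package
# (pip install -U openai-whisper). Runs locally; no API key needed.
import whisper

model = whisper.load_model("large-v3")  # downloads weights on first use
result = model.transcribe("audio.mp3")
print(result["text"])
```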

Conclusion

OpenAI’s continuous innovation and commitment to democratising AI are evident in the launch of GPT-4 Turbo and the Assistants API, accompanied by various enhancements and advancements in the AI landscape. These developments promise to revolutionise the way AI is used, making it more capable, cost-effective, and user-friendly. As OpenAI continues to expand its offerings, the future of AI applications looks brighter than ever. Developers and users alike can look forward to more accessible and versatile AI experiences, paving the way for groundbreaking innovations and possibilities in the world of artificial intelligence.