
At Inflection, they announced the successful completion of training for Inflection-2, a groundbreaking language model that stands as the best in its compute class and is currently the second most capable large language model (LLM) globally. Their commitment revolves around creating a personal AI accessible to everyone, and Inflection-2 represents a significant leap forward from its predecessor, Inflection-1, which powers Pi.

Unveiling Inflection-2’s Capabilities

Inflection-2 exhibits notable enhancements, showcasing superior factual knowledge, improved stylistic control, and a remarkable boost in reasoning abilities. To illustrate its prowess, Figure 1 compares Inflection-1, Google’s PaLM 2-Large, and Inflection-2 across various widely used academic benchmarks.

Figure 1. Comparison of Inflection-1, Google’s PaLM 2-Large, and Inflection-2 across a range of commonly used academic benchmarks. (N-shots in parentheses). Source: https://inflection.ai/inflection-2

Inflection-2 was trained on 5,000 NVIDIA H100 GPUs in fp8 mixed precision, using approximately 10^25 FLOPs. This places it in the same training compute class as Google’s flagship PaLM 2 Large model, which it outperforms across several standard AI performance benchmarks, including MMLU, TriviaQA, HellaSwag, and GSM8k.
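
As a rough sanity check on those numbers, the sketch below estimates how long such a run might take. The per-GPU fp8 throughput and the utilization figure are assumptions chosen for illustration, not values reported by Inflection.

```python
# Back-of-the-envelope estimate of training wall-clock time from the
# figures in the announcement (5,000 H100s, ~1e25 FLOPs in fp8).
# Peak throughput and utilization below are assumptions, not Inflection's numbers.

TOTAL_FLOPS = 1e25            # stated training compute budget
NUM_GPUS = 5_000              # stated H100 count
PEAK_FP8_FLOPS = 2.0e15       # assumed dense fp8 peak per H100 (~2 PFLOP/s)
MFU = 0.4                     # assumed model FLOPs utilization (typical range 0.3-0.5)

effective_rate = NUM_GPUS * PEAK_FP8_FLOPS * MFU   # sustained cluster FLOP/s
seconds = TOTAL_FLOPS / effective_rate
print(f"Estimated training time: {seconds / 86_400:.1f} days")
# Under these assumptions the run works out to roughly four weeks.
```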

Efficient Serving with Inflection-2

Designed with serving efficiency in mind, Inflection-2 is set to power Pi. Through a transition from A100 to H100 GPUs and their optimized inference implementation, they’ve managed to reduce costs and increase serving speed compared to Inflection-1, despite Inflection-2 being significantly larger. This achievement marks a crucial milestone in their journey to provide a personal AI for everyone.
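
The claim that a larger model can still be cheaper and faster to serve ultimately comes down to tokens per second per dollar. The comparison below is a hypothetical sketch under assumed throughput and pricing numbers; none of the figures are Inflection's actual serving metrics.

```python
# Illustrative cost-per-token comparison for serving on A100 vs H100.
# Throughput and hourly prices are assumed placeholders, not Inflection's figures.

def cost_per_million_tokens(tokens_per_second: float, gpu_hourly_usd: float) -> float:
    """Cost (USD) to generate one million tokens on a single GPU."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Assumed numbers purely for illustration.
a100 = cost_per_million_tokens(tokens_per_second=1_500, gpu_hourly_usd=2.0)
h100 = cost_per_million_tokens(tokens_per_second=4_000, gpu_hourly_usd=3.5)

print(f"A100: ${a100:.2f} per 1M tokens")
print(f"H100: ${h100:.2f} per 1M tokens")
# Even at a higher hourly price, the faster GPU can come out cheaper per token,
# which is the kind of trade-off the A100 -> H100 transition exploits.
```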

A Glimpse into the Future

As they continue to scale, they eagerly anticipate training even larger models on the full capacity of their 22,000 GPU cluster. Stay tuned for more exciting developments on this front!

Prioritising Safety and Trust

Training large models demands meticulous attention to safety, security, and trustworthiness. At Inflection, they take these responsibilities seriously, ensuring their models undergo rigorous evaluation and incorporate best-in-class approaches to alignment. They are proud to be early signatories of the White House’s July 2023 voluntary commitments, actively supporting global alignment and governance mechanisms for this critical technology.

Results Speak Louder

Benchmarking against the state of the art is crucial to validating their progress. The results showcase Inflection-2’s performance across diverse benchmarks, comparing it to Inflection-1 and other powerful external models. From general knowledge tasks to question answering and coding benchmarks, Inflection-2 consistently demonstrates its excellence.
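
For readers unfamiliar with how such benchmarks are scored, the following is a minimal, generic sketch of an n-shot multiple-choice evaluation loop (the "N-shots" noted in Figure 1). The `model_choose` callable and the data format are placeholders, not Inflection's evaluation harness.

```python
# Minimal sketch of n-shot multiple-choice evaluation (MMLU-style).
# `model_choose` stands in for whatever model API is being evaluated.

from typing import Callable, Dict, List

def build_prompt(shots: List[Dict], question: Dict) -> str:
    """Prepend n solved examples (the 'shots') before the test question."""
    parts = [f"Q: {ex['question']}\nA: {ex['answer']}" for ex in shots]
    parts.append(f"Q: {question['question']}\nA:")
    return "\n\n".join(parts)

def evaluate(model_choose: Callable[[str, List[str]], str],
             shots: List[Dict], test_set: List[Dict]) -> float:
    """Fraction of test questions answered correctly with the given shots."""
    correct = 0
    for item in test_set:
        prompt = build_prompt(shots, item)
        prediction = model_choose(prompt, item["choices"])
        correct += int(prediction == item["answer"])
    return correct / len(test_set)
```

The number of shots simply controls how many worked examples precede each test question, which is why reported scores list it in parentheses.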

Inflection-2 marks a significant stride towards their goal of making personal AI accessible to all, and they’re excited about the possibilities it brings to Pi and beyond.