
In a move set to redefine artificial intelligence (AI) performance assessment, Primate Labs has launched Geekbench AI 1.0, a benchmarking tool designed to evaluate AI capabilities across various devices and platforms. This release marks the evolution of the benchmark previously known as Geekbench ML (Machine Learning) into a user-friendly AI testing suite.
Geekbench, a household name in the world of benchmarking, has long been renowned for its comprehensive evaluation of CPUs and GPUs, providing users with clear and comparative performance scores. These scores have become a staple in product reviews and marketing materials, empowering consumers and professionals to assess how well their devices will perform in real-world scenarios, from gaming and content creation to software development.
With the launch of Geekbench AI 1.0, Primate Labs aims to bring that same standardized approach to measuring AI performance across various platforms and devices. The benchmark evaluates how well a device handles real-world machine learning tasks using CPU, GPU, and NPU (neural processing unit) capabilities. By integrating ten different AI workloads that closely mimic real-world use cases, Geekbench AI provides a holistic assessment of device performance in areas like computer vision and natural language processing.

One of the key strengths of Geekbench AI lies in its multidimensional approach to evaluating AI performance. The benchmark reports three distinct scores: Single Precision, Half Precision, and Quantized. These scores offer valuable insights into how different hardware components perform at various levels of data precision, from tasks that demand full numerical accuracy to those that can trade precision for speed and efficiency.
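The trade-off behind those three scores can be illustrated with a small sketch. The snippet below (a generic NumPy illustration, not Geekbench AI's internals) stores the same values at single precision, half precision, and a common int8 quantized representation, then measures how far each drifts from the original:

```python
import numpy as np

# The same weight values at the three precision levels Geekbench AI
# reports on (the values and the int8 scheme here are illustrative).
weights_fp32 = np.array([0.1234567, -1.9876543, 0.0012345], dtype=np.float32)

# Half precision: same values in 16 bits, trading accuracy for speed/memory.
weights_fp16 = weights_fp32.astype(np.float16)

# Quantized: a typical symmetric int8 scheme maps floats to integers
# via a per-tensor scale factor, then maps back for comparison.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)
dequantized = weights_int8.astype(np.float32) * scale

# Each step down loses precision -- exactly the kind of drift the
# benchmark's accuracy measurements are designed to capture.
for name, w in [("fp32", weights_fp32),
                ("fp16", weights_fp16.astype(np.float32)),
                ("int8", dequantized)]:
    err = np.abs(w - weights_fp32).max()
    print(f"{name}: max abs error = {err:.6f}")
```

The fp32 row comes back with zero error, while the fp16 and int8 rows show progressively larger deviations, which is why a fast quantized score is only meaningful alongside an accuracy check.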
Geekbench AI goes beyond mere speed assessment by measuring accuracy alongside performance. This ensures that the benchmark provides a comprehensive picture of how reliably a model achieves correct results, rather than solely focusing on task completion speed.
Geekbench AI boasts an impressive range of compatibility, supporting multiple AI frameworks across platforms. This includes OpenVINO and ONNX on Windows, Core ML on Apple devices, and vendor-specific TensorFlow Lite Delegates on Android. Such broad support enables developers and users to assess AI performance across various hardware and software configurations, facilitating informed decision-making and optimization efforts.
To ensure fair comparisons and guard against one-off burst results, all workloads in Geekbench AI run for a minimum of one second. This gives devices time to ramp up to their peak performance before results are recorded, providing a more accurate reflection of real-world capabilities.
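A minimum-duration rule like this can be sketched as a simple measurement loop (a generic illustration, not Geekbench AI's actual harness): repeat the workload until at least the minimum time has elapsed, then report throughput over the whole run rather than from a single, possibly unrepresentative iteration:

```python
import time

def benchmark(workload, min_seconds=1.0):
    """Run `workload` repeatedly for at least `min_seconds`,
    then report iterations per second over the full elapsed time.
    A generic sketch of a minimum-duration rule, not Geekbench's code."""
    iterations = 0
    start = time.perf_counter()
    while True:
        workload()
        iterations += 1
        elapsed = time.perf_counter() - start
        if elapsed >= min_seconds:
            break
    return iterations / elapsed

# Stand-in "AI workload" for demonstration (just a small matrix sum).
def toy_workload(n=64):
    m = [[1.0] * n for _ in range(n)]
    sum(sum(row) for row in m)

rate = benchmark(toy_workload, min_seconds=1.0)
print(f"{rate:.1f} iterations/second")
```

Averaging over a full second smooths out clock ramp-up and scheduler noise, which is the same reason the benchmark enforces the floor.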
The launch of Geekbench AI marks a significant milestone in AI performance assessment. By providing users with a standardized, cross-platform tool to evaluate AI capabilities, Primate Labs is empowering individuals and organizations to make informed decisions about their device investments and optimize their AI workflows.
Geekbench AI 1.0 is now available for download on Windows, macOS, Linux, Android, and iOS platforms. As users begin to explore the capabilities of this tool, it is expected to shed light on the real-world performance of devices that rely heavily on on-device AI, such as Copilot+ PCs and the latest smartphones.
While traditional benchmarks have focused on metrics like framerates and loading times, Geekbench AI introduces a new paradigm in performance evaluation. By assessing the precision of predictive text or the outcomes generated by AI-driven image editing software, this tool opens up new avenues for understanding and optimizing AI performance.