Primate Labs has released Geekbench AI, a benchmarking tool built to evaluate machine learning and AI workloads. The launch caps years of development and collaboration with engineers and hardware partners across the AI community.
Previously known as Geekbench ML during its preview phase, Geekbench AI 1.0 has been rebranded to align with industry standards and clarify its focus on AI-centric tasks. The tool is now accessible across various platforms, including Windows, macOS, and Linux, and is also available on the Google Play Store and Apple App Store for mobile users.
Geekbench AI introduces a standardized approach to measuring AI performance, offering a comprehensive benchmark for developers, hardware vendors, and AI enthusiasts. The tool stands out by delivering three overall scores, covering single-precision, half-precision, and quantized workloads, to capture the complexity and diversity of AI tasks. This multi-dimensional scoring system addresses the challenge of comparing platforms that differ in hardware optimizations and supported precision levels.
Primate Labs emphasizes that accurate performance measurement extends beyond simple speed tests; it requires an understanding of which metrics are most critical across various platforms and use cases. To that end, Geekbench AI includes accuracy measurements alongside speed metrics, providing a more holistic view of a device’s AI capabilities.
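The idea of pairing accuracy with speed can be illustrated with a minimal Python sketch. This is a hypothetical harness for demonstration only; the function `evaluate` and its return fields are invented names, not Geekbench AI's actual code or API:

```python
import time

def evaluate(model, samples, labels):
    """Measure both latency and accuracy for a batch of inference calls.

    `model` is any callable mapping an input to a predicted label.
    This harness is illustrative, not Geekbench AI's implementation.
    """
    correct = 0
    start = time.perf_counter()
    for sample, label in zip(samples, labels):
        if model(sample) == label:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "throughput_ips": len(samples) / elapsed,  # inferences per second
        "accuracy": correct / len(samples),        # fraction predicted correctly
    }

# A trivial stand-in "model" that classifies numbers by parity.
result = evaluate(lambda x: x % 2, [1, 2, 3, 4], [1, 0, 1, 1])
```

Reporting both numbers together makes trade-offs visible: a device that runs a quantized model twice as fast but with degraded accuracy is not simply "faster".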
The benchmark supports a broad spectrum of AI frameworks, including OpenVINO on Linux and Windows, and TensorFlow Lite delegates like Samsung ENN, ArmNN, and Qualcomm QNN on Android. This ensures that Geekbench AI remains relevant in a rapidly evolving AI landscape, reflecting the latest tools and methodologies.
Geekbench AI also utilizes diverse and extensive datasets, enhancing its accuracy evaluations and better representing real-world AI scenarios. All workloads run for a minimum of one second, allowing devices to fully ramp up to their peak performance levels, while still capturing the bursty nature of real-world applications.
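The minimum-runtime rule can be sketched in a few lines of Python. This is a simplified illustration under assumed names (`run_for_at_least` is hypothetical, not part of the Geekbench AI harness):

```python
import time

def run_for_at_least(workload, min_seconds=1.0):
    """Repeat `workload` until at least `min_seconds` have elapsed,
    then report iterations completed and mean seconds per run.

    Running for a minimum wall-clock duration lets a device ramp up
    to peak clocks instead of being measured on a single cold run.
    """
    iterations = 0
    start = time.perf_counter()
    while True:
        workload()
        iterations += 1
        elapsed = time.perf_counter() - start
        if elapsed >= min_seconds:
            break
    return iterations, elapsed / iterations

# Example with a toy workload and a short floor for demonstration.
iters, mean_latency = run_for_at_least(lambda: sum(range(10_000)), min_seconds=0.1)
```

Because the loop stops on elapsed time rather than a fixed iteration count, fast devices complete many short runs (capturing bursty, steady-state behavior) while slow devices still finish at least one full pass.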
In keeping with its commitment to transparency and industry-standard testing, Primate Labs has published detailed technical descriptions of the workloads and models used in Geekbench AI 1.0. The benchmark is integrated with the Geekbench Browser, enabling easy cross-platform comparisons and result sharing.
Primate Labs plans to release regular updates to Geekbench AI, ensuring it keeps pace with the latest developments in AI technology. Already, major industry players like Samsung and Nvidia are incorporating Geekbench AI into their workflows, underscoring the tool’s reliability and relevance in professional settings.
With Geekbench AI, Primate Labs has set a new standard in AI benchmarking, providing a robust, versatile tool for measuring and comparing AI capabilities across diverse platforms and architectures.