PA’s Andrew McCullough, AI technology expert, discusses the developments in microprocessors designed to support Artificial Intelligence (AI) applications and processes.
The growing use of artificial intelligence (AI) is scaling up many standard types of IT workload, as well as powering new services driven by advanced algorithmic data processing and machine learning techniques. Developers of AI systems, however, have been somewhat constrained by the limitations of standard microprocessors.
So far, contrary to general trends in the CPU market, AI chips have largely been developed along proprietary lines: vendors have engineered chips to their own specific designs, with less concern for direct functional compatibility with rival products. Each product launch has included claims about achievable performance, some gauged in terms of IPS (inferences per second) rather than the traditional FLOPS (floating-point operations per second) metric.
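The distinction matters because IPS is an end-to-end measurement over a real workload, while FLOPS counts raw arithmetic throughput. As an illustration only (the function and variable names here are hypothetical, not from any vendor's benchmark suite), IPS can be estimated by timing repeated inference calls:

```python
import time

def measure_ips(model_fn, samples, warmup=10, runs=100):
    """Estimate inferences per second for a single-sample model call.

    model_fn is any callable that runs one inference on one input
    sample; samples is a list of inputs to cycle through.
    """
    # Warm-up runs let caches, JIT compilation, etc. settle before timing.
    for i in range(warmup):
        model_fn(samples[i % len(samples)])

    start = time.perf_counter()
    for i in range(runs):
        model_fn(samples[i % len(samples)])
    elapsed = time.perf_counter() - start

    # Inferences per second = completed inferences / wall-clock time.
    return runs / elapsed
```

Because the timing captures everything the chip does per inference (memory movement, scheduling, any quantised arithmetic), two chips with identical FLOPS ratings can report very different IPS figures on the same model.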
Andrew states that “like-for-like comparisons carry less weight where solutions claim to optimise a specific AI workload”.
He continues: “The most important benchmark for AI chip performance depends on the application. Overall, speed tends to be the critical quality, but for some edge devices power efficiency is just as important. Mobile devices fall into this category when AI processing has to be implemented on the edge device itself, rather than remotely in the cloud.”
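For battery-powered edge devices, efficiency is often expressed as inferences per joule rather than per second. A minimal sketch of that conversion (the figures in the usage comment are illustrative, not measured values):

```python
def inferences_per_joule(ips, avg_power_watts):
    """Convert throughput and power draw into energy efficiency.

    One watt is one joule per second, so dividing inferences per
    second by watts yields inferences per joule.
    """
    return ips / avg_power_watts

# Example: a hypothetical edge chip sustaining 200 IPS at 2 W
# delivers 100 inferences per joule; a faster 400 IPS chip drawing
# 8 W manages only 50, so it drains a battery faster per inference.
```

On this metric, the “slower” chip can be the better choice for a phone or sensor, which is exactly why speed alone is not the benchmark for edge workloads.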
Established chipmakers might also find it too taxing to make a full-blown break with the past, Andrew adds: “They tend to have intellectual property back catalogues wedded to a particular programming paradigm. There comes a point where starting from scratch can produce a better solution due to a step-change in technology.”