Where will AI/ML workloads be executed, and who should handle them? The industry-wide rising tide toward the public cloud is not a foregone conclusion, as a Micron-commissioned report from Forrester Consulting reminds us. The report surveyed 200 business professionals who manage architecture, systems, or strategy for complex data at large enterprises in the US and China.
As of mid-2018, 72 percent analyze complex data within on-premises data centers and 51 percent do so in a public cloud. Three years from now, on-premises-only use is expected to drop to 44 percent, while public cloud use for analytics rises to 61 percent. The share using edge environments to analyze complex data sets will rise from 44 to 53 percent.
Those figures make a strong case for the cloud, but they do not establish that the complex data is related to AI/ML. Many analytics workloads deal with BI rather than tasks that require high-performance computing (HPC) capabilities. While not all AI/ML workloads fall into that category, some do require hardware customized to maximize performance when training AI/ML models. So far, early adopters of AI/ML have been relying more on the public cloud than on their own equipment.
The complete article can be found here.