Director – Power Systems
I attended SuperComputing 2016 in Salt Lake City.
There was a strong presence of customers, vendors, and innovators, and the consistent message was that “supercomputing” is no longer just about massive calculations for genomics, molecular dynamics, space missions, and the like. Supercomputing is expanding in definition, capabilities, and applicability.
From a business perspective, data volume, data velocity, and data relevance are growing at a never-before-seen rate. The data explosion is due, in part, to the advent of consumer devices and their applications (Facebook, Snapchat, Twitter, etc.), the “cloud,” social interaction on the Internet, Google searches, and more. The connected world is really connected data, and it is driving the need to compute faster, aka high-performance computing or supercomputing. Consider Google or Facebook: how do you think they would survive if their compute were slow? You sign onto Google and search for “all blue boots sold by Zappos.” You are used to a response time of a few seconds, but what if that response took two hours? Google addresses this by providing high-performance computing through technology (i.e., huge clusters), as well as storage and search techniques. What if you signed onto the Internet to book a cruise, but because of the many options and pricing choices, the response was terribly slow? What is your user experience then? Not everyone is a Google or a Facebook, and most don’t have their volumes, but in a competitive world, customer experience and satisfaction are requirements. And when I analyze, am I bound to analytics on “old, stored” data, or can I work on live streaming data to see what is happening now?
I think all end-user-facing systems have a similar problem as data volumes and competitive pressure grow. Data from Twitter, Facebook, and other sources can now be accessed in real time, but the sheer volume makes processing slow (and therefore no longer real-time) if conventional processor technology is applied to the solution. How do you do analytics on real-time streaming data? What if there were technology that could address the issue? Well, there is, and it has been available to the genomics, molecular dynamics, and space communities for a long time: GPUs and ASICs/FPGAs. But these were hard to program, and frameworks did not exist for anything other than computation-oriented applications. That has changed, and super-fast computing is now within reach for keeping up with data growth and competitive needs.
GPUs and FPGAs can now support database parallelism and rapid searches. SSDs and NVMe provide local data access faster than HDDs and SANs. 100 Gb Ethernet and InfiniBand switching are available. All of this technology is readily available, with commercial support behind it. FPGAs can filter streaming data and pass only the relevant records on for further processing and analysis. Machine learning can now be applied to almost any business. For example, a retailer can update images rapidly, based on fashion or equipment changes. And how do you think self-driving cars do what they do? They have ML processors under the hood to recognize and learn. What about facial recognition in retail and security applications? The standard processors in today’s systems cannot keep up with these applications’ speed requirements. How do you process Twitter feeds or Facebook streams in real time to get the pulse of the public? How do you monitor and analyze millions of implanted health devices in real time? By augmenting the systems’ processors with accelerators that deliver high-speed computing and work in conjunction with the systems’ data and processing. These accelerators, along with the open-source community, have produced frameworks that support application development: Caffe and Torch are deep learning frameworks; CUDA is NVIDIA’s low-level programming platform for GPUs; OpenACC is a directive-based, higher-level approach to GPU programming; and FPGAs on Power Systems now have a programming framework, OpenCAPI, to enable new solutions.
In my opinion, HPC is now for everyone. Our imagination is the limit.
Please contact your Mainline Account Executive directly, or click here to contact us with any questions.