Machine learning/deep learning is growing rapidly, thanks to the performance of the latest computer technologies. Part of this rapid adoption is due to the extremely high performance of Power Systems, including NVIDIA GPUs integrated directly with the processor, with data-transfer speeds three times those of alternative technologies.
In addition, IBM has made the ML/DL frameworks available at no charge in a single compressed download that completes about 10 times faster than previous downloads. This significantly reduces the time to stand up an ML/DL infrastructure, on top of delivering computational results 3 to 10 times faster than alternative architectures.
But perhaps machine learning/deep learning (ML/DL) is new to you, so let me provide an overview. Briefly, ML/DL is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to “produce reliable, repeatable decisions and results” and uncover “hidden insights” by learning from historical relationships and trends in the data.
Does that sound confusing? I’m sure your answer is yes. How about another approach: ML/DL uses open source software frameworks to analyze data, determine patterns, and make probabilistic identifications. Imagine a stop sign. By ingesting thousands of images of stop signs from all angles, lighting conditions, and distances, patterns can be learned from the pixels of those images. Now, let’s assume a random picture needs to be identified, and that picture happens to be a stop sign. Using ML/DL, the item can be probabilistically identified as a stop sign.
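To make the pattern-matching idea concrete, here is a deliberately tiny sketch in Python: a toy nearest-neighbor classifier that compares an unknown “image” (a short list of pixel values, standing in for thousands of real pixels) against labeled examples. This is purely illustrative and is not how production frameworks work; real systems train deep neural networks over millions of images, and the pixel lists, labels, and “confidence” formula here are invented for the example.

```python
import math

# Toy "images": flattened lists of grayscale-style pixel values (0-255).
# Real images would contain thousands or millions of pixels.
known_images = [
    ([250, 10, 10, 245, 12, 8], "stop sign"),  # mostly red pixels
    ([30, 200, 40, 25, 210, 35], "tree"),      # mostly green pixels
]

def classify(unknown, examples):
    """Label an unknown pixel list by its nearest labeled example."""
    def distance(a, b):
        # Euclidean distance between two pixel lists.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best_label, best_dist = min(
        ((label, distance(unknown, pixels)) for pixels, label in examples),
        key=lambda pair: pair[1],
    )
    # Turn distance into a rough "confidence" score (illustrative only).
    confidence = 1.0 / (1.0 + best_dist)
    return best_label, confidence

# A "random picture" that happens to be red-ish, like a stop sign.
mystery = [248, 14, 9, 240, 15, 11]
label, conf = classify(mystery, known_images)
print(label)  # prints "stop sign"
```

The same idea, scaled up to real images and learned feature representations instead of raw pixel distances, is what the frameworks discussed below automate.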
Think of self-driving cars. How might they work? How would they distinguish a stop sign from a person, a speed limit sign, or a telephone pole? Cars equipped with sensors drive all over, “seeing and capturing” images that are then identified. When the sensors in a self-driving car “see” a “thing,” the car can quickly determine that it is a stop sign and not a person, and stop accordingly. All it is doing is breaking the unknown image down into pixels, comparing them against a huge library of known images, and concluding “this must be a stop sign.”
How would you like that analysis and braking action to take 20 seconds, or even 10? That would certainly be a problem, and there would be plenty of accidents. But what if we could do it in 1/1,000th of a second? That would be far better, especially if the probability of correct identification is 99.99999%.
Other patterns can be “learned” using ML/DL, such as fraudulent money transactions, medical X-rays, facial recognition, and so on. Previously, the speed of computers limited ML/DL to applications where timing was not critical. Today, Power Systems offers the fastest cores, integrated GPUs, and large, fast memory, so time constraints have diminished, opening ML/DL to time-critical uses such as self-driving cars.
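The fraud example above follows the same “learn a pattern, flag what doesn’t fit” logic. A minimal sketch, assuming nothing more than a list of made-up historical transaction amounts: summarize the history, then flag any new amount that falls far outside it. Real fraud detection uses trained models over many features; the amounts and the three-standard-deviation threshold here are invented for illustration.

```python
import statistics

# Invented historical transaction amounts for one account (the "pattern").
history = [22.0, 35.5, 18.0, 41.0, 27.5, 30.0, 25.0, 33.0]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def looks_fraudulent(amount, threshold=3.0):
    """Flag a transaction more than `threshold` standard deviations
    from the historical mean for review."""
    return abs(amount - mean) / stdev > threshold

print(looks_fraudulent(29.0))   # typical amount -> False
print(looks_fraudulent(950.0))  # far outside the learned pattern -> True
```

The speed point in the text applies here too: a check like this is only useful for payments if it runs in milliseconds, while the transaction is still in flight.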
This is a simple-to-understand (but complex-to-execute) exercise with high stakes, and it demonstrates that we now have enough speed to enter areas that were not feasible before. Another example is object or facial recognition. It was possible previously, but identifying a person took a very long time. With today’s systems (and by that, I mean Power Systems) being dramatically faster, facial recognition can run almost instantly, fast enough to identify individuals in crowds. Examples include the TSA line (faster queues) or shoppers in retail stores (notifying a salesperson based on a customer’s usual buying patterns).

While that may be considered invasive, what if we make it less personal and identify objects in a store instead? Then we can use the same technique for inventory management, restocking, or real-time inventory. What if we could scan an engine and identify for the user the objects he or she is seeing? Replacement parts could then be ordered correctly based on what is actually there. This is sometimes called “augmented reality,” but the underlying concept is the same.
The frameworks are there to accelerate implementation. Power Systems offers a product called PowerAI (AI for artificial intelligence), which bundles Caffe, TensorFlow, Theano, Torch, and Chainer. Each of these frameworks is enabled to run on Power Linux servers with integrated GPUs.
I will close with this thought: the remaining barrier is creativity in finding use cases. How could this be used in your organization? The systems and support are there, as are defined use cases for various industries; beyond those, imagination is the only limiting factor.
Please contact your Mainline Account Executive directly, or click here to contact us with any questions.