Inspur Shares Innovative Deep Learning Technology at GTC16
SAN FRANCISCO, April 5, 2016 /PRNewswire/ -- Inspur released Caffe-MPI, an open-source, multi-node parallel version of the Caffe framework for Deep Learning, at the 2016 GPU Technology Conference (GTC16), which is being held from April 4-7 in Silicon Valley, California.
Inspur also announced its plan to launch a Deep Learning Speedup Program (DLSP), aimed at accelerating the development and efficient application of Deep Learning from the perspectives of hardware infrastructure, system optimization and parallel frameworks.
Caffe-MPI to Speed Up Deep Learning
The newly released version of Caffe-MPI features excellent cluster parallel scalability. Testing data shows that in a 4-node environment, the performance of the new version with 16 GPU cards is 13 times that of the single-GPU-card version. Another feature of the new version is its support for the cuDNN library, which makes high-performance Deep Learning code development much easier for developers.
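The release does not describe Caffe-MPI's internals, but a common way multi-node training frameworks achieve this kind of scaling is synchronous data parallelism: each node computes gradients on its own shard of the mini-batch, the gradients are averaged across nodes (an MPI allreduce), and every node applies the same update. The following is only an illustrative Python sketch of that assumed pattern, simulating the allreduce step in-process rather than with real MPI:

```python
# Illustrative sketch of synchronous data-parallel SGD, the pattern commonly
# used by multi-node training frameworks. Real systems perform the gradient
# average with MPI_Allreduce across nodes; here it is simulated in-process.

def local_gradient(weights, shard):
    # Toy model: fit y = w*x by least squares; gradient of the squared
    # error averaged over this worker's data shard.
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def allreduce_mean(grads_per_worker):
    # Elementwise average of gradients across workers -- what an
    # MPI allreduce with a sum, divided by world size, computes.
    n = len(grads_per_worker)
    return [sum(g[i] for g in grads_per_worker) / n
            for i in range(len(grads_per_worker[0]))]

def train(shards, steps=100, lr=0.005):
    weights = [0.0]  # identical initial weights on every worker
    for _ in range(steps):
        # Each worker computes its gradient independently (the parallel part),
        # then all workers synchronize on the averaged gradient.
        grads = [local_gradient(weights, s) for s in shards]
        mean_grad = allreduce_mean(grads)
        weights = [w - lr * g for w, g in zip(weights, mean_grad)]
    return weights

# Data generated from y = 3x, split across 4 simulated nodes.
data = [(x, 3.0 * x) for x in range(1, 17)]
shards = [data[i::4] for i in range(4)]
w = train(shards)
```

Because every worker sees the same averaged gradient, the model stays identical on all nodes after each step, which is what keeps synchronous data-parallel training equivalent to single-node training on the combined batch.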
DLSP Program to Facilitate Deep Learning Ecosystem Construction
During GTC16, Inspur announced its plan to launch the Deep Learning Speedup Program (DLSP), aimed at accelerating the development and efficient application of Deep Learning from three perspectives: innovative hardware infrastructure, optimized system design, and an improved parallel framework.
On hardware infrastructure innovation, Inspur plans to focus on the research and development of offline training servers incorporating the latest NVIDIA M40 GPU and the next-generation Pascal GPU. Another focus is online recognition applications based on the M4 GPU, aimed at developing a GPU computing platform with better performance per watt.
On optimized system design, Inspur will assemble a team specializing in Deep Learning, built on the parallel computing laboratory jointly established with NVIDIA. The team will develop customized, optimized solutions based on the Deep Learning application demands of various industries, enabling balanced design across system computing, storage and networking while fully tapping the system's potential and ensuring satisfactory manageability.
On the improved parallel framework, Inspur will continue to increase its investment in the open-source Caffe Deep Learning framework project to attract more developers and users to community building. The open-source Caffe-MPI spearheaded by Inspur has already attracted the attention of numerous companies and research institutes in China, India and the U.S.
Innovative Deep Learning: Enabling AI to Serve Society
For Inspur, the three Deep Learning initiatives announced largely grew out of its experience serving world-class internet companies such as Baidu, Alibaba and Tencent. That work has helped Inspur build strong R&D and innovation capability, deepen its expertise in internet data center products, and gain confidence in creating a Deep Learning computing platform that meets the demands of the internet and other fields.
At present, Inspur's Deep Learning solutions have been applied in numerous internet companies including Tencent, Baidu, Alibaba, Qihoo, Iflytek and Jingdong, supporting "super brains" of various types and providing intelligent services to society. As the three Deep Learning projects gradually roll out, Inspur's Deep Learning solutions are expected to be adopted by more companies in the future.
Inspur also presented the NX5460M4, a Deep Learning server for industry customers. The NX5460M4 is a high-performance blade server in Inspur's I9000 converged-architecture blade series, specially optimized for Deep Learning applications; it supports up to eight Deep Learning computing nodes and 16 GPU accelerator cards in a 12U space. The I9000 platform also accommodates high-density servers, 4- and 8-socket mission-critical servers, software-defined storage and multiple computing schemes, including heterogeneous computing, providing enterprise customers with Deep Learning infrastructure featuring high reliability and high performance.
SOURCE Inspur Group Co., Ltd.
CONTACT: Yan Panpan, T: +86 (0)10-82581473, M: +86 18710190569, yanpanpan@Inspur.com