HPE embraces decentralized machine learning with its Swarm Learning solution

HPE is going deep into decentralized machine learning with HPE Swarm Learning, which takes advantage of the recently developed swarm learning approach to Artificial Intelligence to derive value from data generated at the edge or in distributed locations, and to do so securely and equitably.

With HPE Swarm Learning, which is available now, organizations can share only the learnings of an Artificial Intelligence model, and not the data itself, with other organizations. In this way they can take advantage of larger distributed data sets, improve the accuracy of their models and reduce bias, all without losing control of the data or putting its privacy at risk.

Its possible uses are many. Progress has already been made by testing it in the fight against colon cancer at the University of Aachen, but it can also be used in areas such as credit fraud detection in financial services, in factories to predict when machinery will need maintenance, or in hospitals to improve the information from scanners and other imaging tests and optimize diagnoses.

Powered by Hewlett Packard Labs, HPE’s research and development organization, Swarm Learning is a decentralized, privacy-preserving machine learning framework that, as mentioned, works for both edge and distributed locations. The solution provides customers with containers that are easily integrated with Artificial Intelligence models through the HPE Swarm API.
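HPE does not detail the Swarm API itself here, but the integration pattern it implies is a familiar one: an ordinary local training loop that periodically hands its weights to a swarm client for synchronization. The minimal sketch below is purely illustrative; SwarmClient and sync are hypothetical names, not HPE’s actual API.

```python
# Hypothetical sketch only: SwarmClient and sync() are illustrative names,
# not HPE's published Swarm API. It shows the general integration pattern the
# article describes: local training plus periodic exchange of learnings.
import numpy as np


class SwarmClient:
    """Stand-in for the container that exchanges model learnings with peers."""

    def sync(self, weights):
        # In a real deployment this would send the weights to the swarm network
        # and receive the merged result; here it simply returns them unchanged.
        return weights


def local_training_step(weights, lr=0.1):
    # Placeholder for one pass over private, on-site data.
    fake_gradient = np.ones_like(weights)
    return weights - lr * fake_gradient


client = SwarmClient()
weights = np.zeros(4)
for step in range(10):
    weights = local_training_step(weights)
    if step % 5 == 0:                   # periodic synchronization point
        weights = client.sync(weights)  # only learnings leave the site, never data
```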

Currently, most AI model training takes place in a central location, using centralized, merged data sets. This approach can be inefficient and costly, because large volumes of data have to be moved to a single source, and it can be complicated by data privacy and ownership regulations that limit data movement and sharing, which can lead to imprecise and biased models. By training models at the edge, close to where data is generated, companies can instead make decisions faster, at the point where they have an impact, which leads to better experiences and results.

Sharing data with third parties can be a problem for companies and organizations that have to comply with governance regulations and laws requiring data to remain in its original location. HPE Swarm Learning allows organizations to use distributed data at its source, increasing the size of the data set available for training machine learning models that learn in a fair way, without neglecting data privacy or governance.

To ensure that only the learning captured at the edge, and not the data itself, is shared, HPE Swarm Learning uses blockchain technology to securely enroll members, dynamically elect a leader, and fuse model parameters, providing resiliency and security to the learning network. Additionally, by sharing only learnings, HPE Swarm Learning allows users to tap into large training data sets without reducing the level of privacy, and it also helps eliminate bias and increase the accuracy of the models.
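The fusion step itself can be pictured as a weighted combination of the parameters each site has learned, with the elected leader performing the merge for that round. The sketch below is only an illustration of that idea; the sample-count weighting and round-robin leader rotation are assumptions, not HPE Swarm Learning’s actual algorithm.

```python
# Illustrative sketch of swarm-style parameter fusion: each site contributes only
# its model weights (never its data), and the round's elected leader merges them.
# Weighting by local sample count and rotating the leader are assumptions made
# for illustration, not HPE Swarm Learning's actual implementation.
import numpy as np

# Parameters learned locally at three sites, plus how many samples each trained on.
site_params = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
site_samples = [1000, 4000, 2500]


def elect_leader(round_idx, n_sites):
    # Simple rotation stands in for the dynamic leader election the article mentions.
    return round_idx % n_sites


def fuse(params_list, weights):
    # Weighted average of parameter vectors: larger local data sets get more influence.
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, params_list))


leader = elect_leader(round_idx=0, n_sites=len(site_params))
merged = fuse(site_params, site_samples)
print(f"site {leader} merges this round -> global parameters {merged}")
```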

Justin Hotard, Vice President and Head of HPC & AI at HPE and the person in charge of presenting this new solution, pointed out that "swarm learning is a powerful new approach to Artificial Intelligence that has already made progress in addressing global challenges, such as advancing patient care in healthcare and improving the anomaly detection that aids efforts in fraud detection and predictive maintenance. HPE is contributing to the swarm learning movement in a significant way by offering an enterprise solution that enables organizations to collaborate, innovate and accelerate the power of AI models, while preserving each entity's governance standards, data privacy and ethics."

New machine learning development system from HPE

In addition to this solution, HPE has also announced a complete, ready-to-use machine learning development system with which users can create and train machine learning models immediately and at scale, from the very first moment they use it. With this solution, based on Determined AI, complex and costly AI infrastructure issues can be addressed, and time to value can be delivered in a matter of days, much less time than usual.

Purposefully designed for AI, the system is a solution that encompasses a software platform, specialized accelerated computing, networking, services and communications to develop and train AI models faster, more accurately, and at scale. In addition, the system helps improve model accuracy faster with state-of-the-art distributed training, automated hyperparameter optimization, and neural architecture search, all key elements of modern machine learning.
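To give a sense of what automated hyperparameter optimization does, the sketch below shows the successive-halving idea that adaptive searchers of this kind build on: many configurations start with a small training budget, and only the best-performing half survives each round. It is a generic illustration with a synthetic objective, not Determined AI’s or HPE’s API.

```python
# Generic sketch of successive halving, the idea behind adaptive hyperparameter
# search engines like the one in this system. The objective is synthetic; this is
# not Determined AI's or HPE's API.
import random


def train_for(budget, lr):
    # Stand-in for training `budget` epochs with learning rate `lr`;
    # returns a fake validation score (higher is better).
    return budget * (1.0 - abs(lr - 0.01)) + random.random() * 0.1


random.seed(0)
trials = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(16)]
budget = 1
while len(trials) > 1:
    scored = [(train_for(budget, t["lr"]), t) for t in trials]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    trials = [t for _, t in scored[: max(1, len(scored) // 2)]]  # keep the best half
    budget *= 2  # give the survivors more training budget

print("best hyperparameters:", trials[0])
```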

This machine learning development system from HPE offers optimized, accelerated compute and networking, key performance factors for scaling models efficiently across a mix of workloads. It starts from a 32-GPU configuration that can be expanded up to 256 GPUs. In the smaller configuration, the HPE machine learning development system achieves a scaling efficiency of around 90% for natural language processing (NLP) and computer vision workloads. In addition, according to various tests, the 32-GPU HPE system is up to 5.7 times faster than a system with the same number of GPUs but a weaker interconnect.
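As a rough back-of-the-envelope check of what that scaling efficiency means, multiplying the GPU count by the efficiency gives the effective speedup over a single GPU; the snippet below simply reuses the article’s 32-GPU, 90% figures.

```python
# Quick check of what "about 90% scaling efficiency" means in practice:
# effective speedup = number of GPUs * efficiency. Figures reuse the article's
# stated 32-GPU configuration and are illustrative only.
gpus = 32
efficiency = 0.90
effective_speedup = gpus * efficiency
print(f"{gpus} GPUs at {efficiency:.0%} efficiency give about {effective_speedup:.1f}x a single GPU")
# 32 GPUs at 90% efficiency give about 28.8x a single GPU
```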

HPE’s Machine Learning (ML) Development System, available now, is offered as an integrated solution that provides a completely preconfigured AI infrastructure for turnkey model development and training at scale. As part of the offer, HPE Pointnext Services will provide on-premises software installation and configuration, allowing customers to immediately deploy and train machine learning models.

It is offered in a base configuration with options to grow. It initially includes the HPE Machine Learning Development Environment as its machine learning platform, used to scale accurate models from proof of concept to production, together with an optimized Artificial Intelligence infrastructure built on the HPE Apollo 6500 Gen10 system, which provides massive, specialized computing capacity for model training and optimization, starting with eight Nvidia A100 80 GB GPUs.

In addition, it includes HPE Performance Cluster Management, which provides centralized, accurate monitoring and management capabilities for performance optimization. The system uses HPE ProLiant DL325 servers and Aruba CX 6300 1 GbE switches, as well as the NVIDIA Quantum InfiniBand communications platform.
