In this podcast, the Radio Free HPC team looks at the Top Technology Stories for High Performance Computing in 2015.
Henry thinks new non-volatile memory (NVM) technologies such as 3D XPoint will have the most dramatic impact on HPC architectures going forward. Rich says that his traffic analytics for 3D XPoint stories indicate that many of his readers at insideHPC are eager to learn more.
Dan believes that the CORAL announcements made in the last 15 months or so will result in new levels of architectural diversity in the TOP500. In what he calls the “Rebel Alliance,” the co-design approach from IBM, Nvidia, and Mellanox will be deployed in two of the three CORAL systems in the 2017 timeframe. On the other side of the coin, the Aurora system at Argonne will use the Scalable System Framework architecture from Intel.
Rich sees tremendous potential in NVMe technologies, having seen his first demonstration at the Scalable Informatics booth at SC15. The NVM Express specification defines an optimized register interface, command set and feature set for PCI Express (PCIe)-based Solid-State Drives (SSDs). The goal of NVM Express is to unlock the potential of PCIe SSDs now and in the future, and standardize the PCIe SSD interface.
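One reason the NVMe command set matters is queue parallelism. As a rough illustration (not part of the podcast discussion), the sketch below compares the outstanding-command capacity of legacy AHCI/SATA (a single queue of 32 commands) with the ceilings the NVM Express specification allows (up to 65,535 I/O queues, each up to 65,536 entries deep); these are spec maximums, not what any particular SSD implements.

```python
# Rough comparison of outstanding-command capacity: AHCI/SATA vs. NVMe.
# Figures are specification maximums, not what a given SSD actually exposes.

AHCI_QUEUES = 1                # AHCI defines a single command queue
AHCI_QUEUE_DEPTH = 32          # with at most 32 outstanding commands

NVME_MAX_IO_QUEUES = 65_535    # NVMe allows up to ~64K I/O queues...
NVME_MAX_QUEUE_DEPTH = 65_536  # ...each up to ~64K entries deep

ahci_total = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_total = NVME_MAX_IO_QUEUES * NVME_MAX_QUEUE_DEPTH

print(f"AHCI outstanding commands: {ahci_total}")
print(f"NVMe outstanding commands: {nvme_total:,}")
print(f"Ratio: ~{nvme_total // ahci_total:,}x")
```

The point of the comparison: multiple deep queues let NVMe drivers keep every core and every flash channel busy without the serialization bottleneck of a single SATA queue.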
Henry is impressed by the increasing presence of FPGAs on the show floor and he sees this as an important trend in high performance computing.
Dan is really impressed with the Allinea Performance Reports profiling tool and how easy it is to use. The Student Cluster Teams were using it this year, and he saw how it made a big difference in their code optimizations. Dan also gives us an update on the SC15 Student Cluster Competition, where nine teams are going for the performance gold this year in Austin.
Rich sees SC15 as a crossroads we’ll remember, where Intel, with the official launch of its Omni-Path interconnect and Scalable System Framework, squared off against the OpenPOWER co-design alliance of IBM, Mellanox, and Nvidia.
Rich fears that the National Strategic Computing Initiative (NSCI) created by Obama’s Executive Order will have trouble obtaining funding in these tough economic times. He thinks it’s time to bring in professional communicators and talk about HPC and Exascale in terms of breakthrough discoveries.
Dan has night terrors about security back-doors built into chips. As a recipient of a rather nasty rootkit on his home PC last year, he continues to lose sleep over this kind of thing.
Henry gets goosebumps at the lack of concern he sees about end-to-end cybersecurity. Some of the recent breaches should really scare us all, and encryption alone is not going to get us where we need to go.
Dell Inc. and EMC Corporation have signed a definitive agreement under which Dell, together with its owners, Michael S. Dell, founder, chairman and chief executive officer of Dell, MSD Partners and Silver Lake, the global leader in technology investing, will acquire EMC Corporation, while maintaining VMware as a publicly-traded company.
Along the way, we try to answer the following questions:
Does this merger make sense in an age where the enterprise seems to be marching away from Datacenters and into the Cloud?
Will Dan ever accept that the enterprise seems to be marching away from Datacenters and into the Cloud?
How will this merger affect the industry at large?
Traditionally, the focus within the high performance computing community has been on optimizing systems to handle hardcore scientific problems — stressing modeling and simulation. But with the emergence of big data, researchers in diverse domains such as healthcare, genomics, financial analytics, and social behavior see the need as well for the analysis and visualization of large and complex data sets. They need systems that help them manage and analyze data to produce deeper insights. The high performance computing systems of the future must be able to handle both kinds of computing challenges.
In this podcast, the Radio Free HPC team goes over a Trip Report from Rich Brueckner, who’s been on the road for the past month at a series of HPC conferences:
HPC User Forum in Broomfield, CO. The event featured two full days of talks. Here we discuss a couple of standouts that you can watch on insideHPC:
Processing 1 Exabyte per Day for the SKA Radio Telescope. Peter Braam from Cambridge University presented this talk in the Disruptive Technologies session. “The Square Kilometre Array is an international effort to investigate and develop technologies which will enable us to build an enormous radio astronomy telescope with a million square meters of collecting area.”
ExaNeSt Technology: Targeting Exascale in 2018. Peter Hopton from Iceotope presented this talk at the HPC User Forum. “ExaNeSt will develop, evaluate, and prototype the physical platform and architectural solution for a unified Communication and Storage Interconnect and the physical rack and environmental structures required to deliver European Exascale Systems. The consortium brings technology, skills, and knowledge across the entire value chain from computing IP to packaging and system deployment; and from operating systems, storage, and communication to HPC with big data management, algorithms, applications, and frameworks. Building on a decade of advanced R&D, ExaNeSt will deliver the solution that can support exascale deployment in the follow-up industrial commercialization phases.”
User Agency Panel Discussion on the NSCI Initiative. In this video (with transcript) from the 2015 HPC User Forum in Broomfield, Bob Sorenson from IDC moderates a User Agency panel discussion on the NSCI initiative. “You all have seen that usable statement inside the NSCI, and we are all about trying to figure out how to make usable machines. That is a key critical component as far as we’re concerned. But the thing that I think we’re really seeing, we talked about the fact that a single thread performance is not increasing, and so what we’re doing is we’re simply increasing the parallelism and then the physics limitations, if you will, of how you cool and distribute power among the parts that are there. That really is leading to a paradigm shift from something that’s based on how fast you can crunch the numbers to how fast you can feed the chips with data. It’s really that paradigm shift, I think, more than anything else that’s really going to change the way that we have to do our computing.”
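To put the SKA talk’s exabyte-per-day figure in perspective, here is a quick back-of-the-envelope calculation (our own illustration, assuming a decimal exabyte of 10^18 bytes spread evenly over 24 hours) of the sustained bandwidth such a system would have to absorb:

```python
# Back-of-the-envelope: what sustained bandwidth does 1 exabyte/day imply?
# Assumes a decimal exabyte (10**18 bytes) ingested evenly over 24 hours.

BYTES_PER_EXABYTE = 10**18
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

rate_bytes_per_s = BYTES_PER_EXABYTE / SECONDS_PER_DAY
rate_tb_per_s = rate_bytes_per_s / 10**12

print(f"{rate_tb_per_s:.1f} TB/s sustained")
```

That works out to roughly 11.6 TB/s around the clock, which is why the talk appeared in the Disruptive Technologies session: no conventional storage hierarchy today sustains that ingest rate.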
PBS Works User Group. Job schedulers like PBS Works are key to keeping supercomputers running efficiently. Notable talks included:
HPC Across the Enterprise: How HPC Transforms the Corporate IT Ecosystem. Thomas Leung from the GE Global Research Center presented this talk at the PBS Works User Group. “The commercial world uses significant HPC resources for simulation and product design. An increasing number of HPC systems are deployed in the commercial space and their scale is getting larger and larger. These advanced systems push limits in every aspect of Enterprise IT. Accommodating such systems within the enterprise is a challenge, and there have been many recent changes to enterprise IT infrastructures and architectures resulting from the need to support HPC.”
Video: HPC Technology Panel at the PBS Works User Group. Rich Brueckner from insideHPC moderated this panel discussion on current trends in HPC. “President Obama’s Executive Order establishing the National Strategic Computing Initiative (NSCI) will set the stage for a new chapter in leadership computing for the United States. In this panel discussion, thought leaders from leading supercomputing vendors share their perspectives on current HPC trends and the way forward.”
Communication Frameworks for HPC and Big Data. DK Panda from Ohio State University presented this talk at the HPC Advisory Council Spain Conference. “Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand and 10-40GE/iWARP. His research group is currently collaborating with National Laboratories and leading InfiniBand and 10-40GE/iWARP companies on designing various subsystems of next generation high-end systems.”
The Road to Exascale. Rich Graham from Mellanox looks at how interconnects are evolving to get us to the next grand challenge of supercomputing performance.
On the Role of Flash in Large-Scale Storage Systems. Nathan Rutman presented this talk. “So why is a spinning disk company talking about Flash? Last year, Seagate acquired Avago LSI’s flash division. We now have an array of flash-based storage. So I have nothing against Flash. This presentation is really on: Where does Flash make sense? I also have a personal agenda because I hate the term “Burst Buffer.” Everyone says “Burst Buffer” instead of saying “Flash.” It drives me crazy. So I’m going to explain what a Burst Buffer is and what it is not.”
The SC tutorials program is one of the highlights of the SC Conference series, and it is one of the largest tutorial programs at any computing-related conference in the world. It offers attendees the chance to learn from and to interact with leading experts in the most popular areas of high performance computing (HPC), networking, and storage.
The SC15 conference takes place Nov. 15-20 in Austin, Texas.
Following more than a decade of research and development, 3D XPoint technology was built from the ground up to address the need for non-volatile, high-performance, high-endurance and high-capacity storage and memory at an affordable cost.