RFHPC201: A Look at Lincoln Labs' New Paper on Spectre/Meltdown Performance Hits

In this podcast, the Radio Free HPC team looks at a new whitepaper from Lincoln Labs focused on the performance hits from Spectre/Meltdown mitigations. The news is not good for HPC workloads.
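For anyone who wants to reproduce that kind of measurement, the first step is knowing which mitigations a given machine actually has enabled. Here is a minimal sketch, assuming a Linux kernel new enough (roughly 4.15 and later) to expose the /sys/devices/system/cpu/vulnerabilities interface:

```python
# Minimal sketch: report which Spectre/Meltdown mitigations the running Linux
# kernel says are in effect, so benchmark results can be labeled correctly.
# Assumes /sys/devices/system/cpu/vulnerabilities exists (kernel ~4.15+).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict:
    """Return {vulnerability name: kernel status string}, e.g. 'Mitigation: PTI'."""
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, state in mitigation_status().items():
        print(f"{name:20s} {state}")
```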

After that, Shahin points us to the story about how DARPA just allocated $75 million in awards for thinking-outside-the-box computing innovation. They call it the Electronics Resurgence Initiative, and the list of funded projects includes something called Software Defined Hardware.

After that, we do the Catch of the Week:

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter.

RFHPC200: A Look at China’s Three-Pronged Plan for Exascale

In this podcast, the Radio Free HPC team goes through a fascinating presentation that provides details on China’s Three-Pronged Plan for Exascale.

China may not be the first to Exascale, but they are building three divergent architectural prototypes that pave the way forward. We’ve got the details in this not-to-miss podcast.

We should probably note that this is our 200th Episode of Radio Free HPC. We would like to thank all 13 of our regular listeners for their continued support!

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter.

RFHPC199: Trip Report from ISC 2018


In this podcast, the Radio Free HPC team offers up a Trip Report from ISC 2018 in Frankfurt. It was a whirlwind week for news, with a new U.S. machine atop the TOP500, but the other big story centered on the convergence of HPC & AI. This common theme was all over the show floor, with use cases on display in dozens of exhibits.

Topics include:

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter.

RFHPC198: The USA Returns to #1 on the TOP500

In this podcast, the Radio Free HPC team reviews the latest TOP500 list of the world’s fastest supercomputers.

The TOP500 celebrates its 25th anniversary with a major shakeup at the top of the list. For the first time since November 2012, the US claims the most powerful supercomputer in the world, leading a significant turnover in which four of the five top systems were either new or substantially upgraded.

Highlights:

#1 is Summit, an IBM-built supercomputer now running at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL), which captured the number one spot with a performance of 122.3 petaflops on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. Summit has 4,356 nodes, each equipped with two 22-core Power9 CPUs and six NVIDIA Tesla V100 GPUs. The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.

#2 is Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, which drops to number two after leading the list for the past two years. Its HPL mark of 93 petaflops has remained unchanged since the system came online in June 2016.

#3 is Sierra, a new system at the DOE’s Lawrence Livermore National Laboratory that took the number three spot by delivering 71.6 petaflops on HPL. Built by IBM, Sierra’s architecture is quite similar to that of Summit, with each of its 4,320 nodes powered by two Power9 CPUs plus four NVIDIA Tesla V100 GPUs, and using the same Mellanox EDR InfiniBand as the system interconnect.

#4 is Tianhe-2A, also known as Milky Way-2A, which moved down two notches into the number four spot despite receiving a major upgrade that replaced its five-year-old Xeon Phi accelerators with custom-built Matrix-2000 coprocessors. The new hardware increased the system’s HPL performance from 33.9 petaflops to 61.4 petaflops while bumping up its power consumption by less than four percent (see the quick sketch after this list). Tianhe-2A was developed by China’s National University of Defense Technology (NUDT) and is installed at the National Supercomputer Center in Guangzhou, China.

#5 is the AI Bridging Cloud Infrastructure (ABCI), with an HPL mark of 19.9 petaflops. The Fujitsu-built supercomputer is powered by 20-core Xeon Gold processors along with NVIDIA Tesla V100 GPUs. It’s installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST).

#6 is Piz Daint in Switzerland with 19.6 petaflops.

#7-10: Titan (17.6 petaflops), Sequoia (17.2 petaflops), Trinity (14.1 petaflops), and Cori (14.0 petaflops) move down to the number seven through 10 spots, respectively.
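A quick back-of-the-envelope check of a couple of the figures above: Summit's margin over TaihuLight, and the efficiency gain from the Tianhe-2A upgrade. This is a rough sketch using only the numbers quoted in this list, treating the "less than four percent" power growth as exactly 4 percent:

```python
# Rough sketch: a few ratios computed from the HPL figures quoted above.
summit_pf = 122.3       # Summit, petaflops on HPL
taihulight_pf = 93.0    # Sunway TaihuLight, petaflops on HPL

# Summit's margin over the previous #1 system
print(f"Summit vs TaihuLight: {summit_pf / taihulight_pf:.2f}x")   # ~1.32x

# Tianhe-2A upgrade: 33.9 -> 61.4 petaflops for (at most) ~4% more power,
# so performance per watt improved by roughly:
old_pf, new_pf, power_growth = 33.9, 61.4, 1.04
print(f"Tianhe-2A perf/watt gain: ~{(new_pf / old_pf) / power_growth:.2f}x")  # ~1.74x
```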

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter.

RFHPC197: Previewing ISC 2018 Student Cluster Competition & Ancillary Events

In this podcast, the Radio Free HPC team previews the ISC 2018 Student Cluster Competition.

“Now in its seventh year, the ISC-HPCAC Student Cluster Competition enables international STEM teams to take part in a real-time contest focused on advancing STEM disciplines and HPC skills development at ISC 2018 from June 25-27. To take home top honors, twelve teams will have the opportunity to showcase systems of their own design, adhering to strict power constraints and achieving the highest performance across a series of standard HPC benchmarks and applications.”

After that, Rich describes a number of ancillary events that have been scheduled in Frankfurt.

Events in chronological order:

  • HP-CAST will take place June 22-23 at the Frankfurt Marriott. HP-CAST is an organization of HPE customers and partners who provide input to HPE to increase the capabilities of HPE solutions for large-scale scientific and technical computing.
  • The Dell EMC HPC Community will get together for a half-day meeting on Sunday, June 24 at the Frankfurt Marriott.
  • DDN User Group will be held on Monday, June 25 from 9:00am – 12:30pm at the Movenpick Hotel.
  • D-Wave Systems will host a seminar on Quantum Computing on Monday, June 25 starting at 2:00pm at the Frankfurt Marriott.
  • Intel Special Session: Dr. Raj Hazra, Corporate Vice President at Intel, will discuss AI & HPC emerging technologies that will accelerate discovery and innovation at 6:00 pm Monday, June 25 in the Panorama 2 room at the Frankfurt Messe.
  • The Hyperion Research Breakfast Briefing will take place on Tuesday, June 26 at 7:45am at the Grandhotel Hessischer Hof.
  • Univa will host a Breakfast Seminar on Cloud HPC on Wednesday, June 27 at 8:00am at the Frankfurt Marriott.
  • The Women in HPC network is running a half day workshop on Thursday, June 28.

After that, we do our Catch of the Week:

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter.

RFHPC196: A Closer Look at the Summit Supercomputer at ORNL

In this podcast, the Radio Free HPC team looks at the new 200 Petaflop Summit supercomputer that was unveiled this week at ORNL. Powered by IBM POWER9 processors, 27,648 NVIDIA GPUs, and Mellanox InfiniBand, the Summit supercomputer is also the first Exaop AI system on the planet.

“This massive machine, powered by 27,648 of our Volta GPUs, can perform more than three exaops, or three billion billion calculations per second,” writes Ian Buck on the NVIDIA blog. “That’s more than 100 times faster than Titan, previously the fastest U.S. supercomputer, completed just five years ago. And 95 percent of that computing power comes from GPUs. Built for the U.S. Department of Energy, this is a machine designed to tackle the grand challenges of our time. It will accelerate the work of the world’s best scientists in high-energy physics, materials discovery, healthcare, and more, with the ability to crank 200 petaflops of computing power to high precision scientific simulations.”

“IBM designed a whole new heterogeneous architecture for Summit that integrates the robust data analysis of powerful IBM POWER9 CPUs with the deep learning capabilities of GPUs,” writes Dr. John E. Kelly from IBM. “The result is unparalleled performance on critical new applications. And, IBM is selling this same technology in Summit to enterprises today.”

“Summit takes GPU accelerated computing to the next level, with more computing power, more memory, an enormous high-performance file system, and fast data paths to tie it all together,” said James Hack, director of ORNL’s National Center for Computational Sciences. “That means researchers will be able to explore more complex phenomena at higher levels of fidelity in less time than with previous generations of supercomputer systems.”
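The headline numbers are easy to sanity-check. Here is a minimal sketch that assumes the commonly cited Tesla V100 per-GPU figures of roughly 7.8 double-precision teraflops and 125 mixed-precision tensor teraflops; those per-GPU numbers are an assumption on our part, not something stated in the quotes above:

```python
# Rough sanity check of Summit's headline numbers, assuming (not from the
# announcement) ~7.8 TF FP64 and ~125 TF tensor throughput per Tesla V100.
gpus = 27_648

fp64_tf_per_gpu = 7.8        # assumed double-precision teraflops per GPU
tensor_tf_per_gpu = 125.0    # assumed mixed-precision tensor teraflops per GPU

peak_fp64_pf = gpus * fp64_tf_per_gpu / 1_000             # teraflops -> petaflops
peak_tensor_eops = gpus * tensor_tf_per_gpu / 1_000_000   # teraflops -> exaops

print(f"GPU FP64 peak:   ~{peak_fp64_pf:.0f} petaflops")   # ~216, consistent with a '200 petaflop' machine
print(f"GPU tensor peak: ~{peak_tensor_eops:.2f} exaops")  # ~3.46, consistent with 'more than three exaops'
```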

After that, we do our Catch of the Week:

  • Dan points us to the story about why Microsoft sent a datacenter to the bottom of the sea. What they learn from the experience could pave the way for off-shore datacenters that are immune from natural disasters.
  • Rich likes the news about the SC18 Coffee Shop, which will provide an interactive exhibit space at the conference for the first time this year.
  • Shahin is impressed with the Fujitsu Digital Annealer, which is reaching for quantum speeds through silicon technology.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC195: New NVIDIA HGX-2 Reference Platform for HPC & AI

Dual GPU baseboards with 16 GPUs, fully connected at the full NVLink speed of 300 GB/s

In this podcast, the Radio Free HPC team looks at the new NVIDIA HGX-2 Reference Platform for HPC & AI.

“The HGX-2 cloud server platform supports multi-precision computing, supporting high-precision calculations using FP64 and FP32 for scientific computing and simulations, while also enabling FP16 and Int8 for AI training and inference. This unprecedented versatility meets the requirements of the growing number of applications that combine HPC with AI. HGX-2 is a part of the larger family of NVIDIA GPU-Accelerated Server Platforms, an ecosystem of qualified server classes addressing a broad array of AI, HPC and accelerated computing workloads with optimal performance.”
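To make the multi-precision idea concrete, here is a minimal NumPy sketch (illustrative only, not tied to HGX-2 hardware) of why simulations lean on FP64 while AI training and inference can get away with FP16 and Int8:

```python
# Minimal illustration (not HGX-2-specific): the precision/range trade-off
# behind FP64 for simulation vs. FP16/Int8 for AI training and inference.
import numpy as np

x = 1.0 + 1e-7   # a tiny perturbation a physics solver might need to track

print(np.float64(x) - 1.0)                 # ~1e-07: FP64 keeps the perturbation
print(np.float16(x) - np.float16(1.0))     # 0.0: FP16 rounds it away

# Int8 trades precision for throughput by quantizing values to 256 levels,
# which is often acceptable for neural-network inference.
weights = np.array([0.12, -0.57, 0.98], dtype=np.float32)
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)
print(q, q.astype(np.float32) * scale)     # quantized weights and their reconstruction
```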

After that, we do our Catch of the Week:

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC194: Rationalizing GDPR Regulations

In this podcast, the Radio Free HPC team looks at the ramifications of the European GDPR, which went into effect May 25, 2018.

The EU General Data Protection Regulation (GDPR) is the most important change in data privacy regulation in 20 years – we’re here to make sure you’re prepared.

After that, we do our Catch of the Week:

In this NSFW video, comedian Jordan Peele demonstrates how deepfake technology can put words into anyone’s mouth, including our former President.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC193: Results from the ASC18 Student Cluster Competition

In this podcast, the Radio Free HPC team reviews the results of the ASC 2018 Student Cluster Competition.

“The ASC 2018 Student Supercomputer Challenge finalists were announced on March 20, 2018. Twenty of the 300+ enrolled teams from around the world, including Tsinghua University (China), Friedrich-Alexander University Erlangen-Nuremberg (Germany), Saint Petersburg State University (Russia), University of Miskolc (Hungary), Texas A&M University (USA), and Hong Kong Baptist University, will compete from May 5 to 9, 2018 in the final round at Nanchang University. The 20 finalists will design and build supercomputers drawing up to 3,000 watts, solve exceptionally difficult problems in AI reading comprehension, perform RELION optimization as a core application of the Nobel Prize-winning cryo-electron microscopy technique, and run CFL3D, HPL, and HPCG.”

In this video, Overall Winners Team Tsinghua describe their efforts to master the Siesta Mystery Application.

Satoshi Matsuoka shows off the prototype board for the Post K Supercomputer coming to RIKEN.

After that, we do our Catch of the Week:

  • Shahin was excited to see photos of the Post K Supercomputer Prototype. ARM processors will provide the computational muscle behind one of the most powerful supercomputers in the world, replacing the current K computer at the RIKEN Advanced Institute for Computational Science (AICS) in Japan.
  • Dan wants to know if the coveted space in front of the jetliner bulkhead is an acceptable passageway.
  • Rich is impressed with the new Tachyum Prodigy chip. According to Tachyum, the new chip has “ten times the processing power per watt” and is capable of running the world’s most complex compute tasks.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC192: How Many Accelerators Will It Take to Build an Exascale Machine?

In this podcast, the Radio Free HPC team takes a look at the daunting performance targets in the DOE’s CORAL-2 RFP for exascale computers.

“So, 1.5 million TeraFlops divided by 7.8 is how many individual accelerators you need, and that’s 192,307, which by the way looks like a prime number. Now, multiply that by 300 watts per accelerator, and it is clear we are going to need something all-new to get where we want to go.”
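Dan's back-of-the-envelope math is easy to reproduce. A minimal sketch using only the figures from the quote (a 1.5 exaflops target, 7.8 teraflops and 300 watts per accelerator):

```python
# Reproduce the back-of-the-envelope estimate from the quote above:
# a 1.5 exaflops target at ~7.8 teraflops and ~300 watts per accelerator.
target_tf = 1_500_000        # 1.5 exaflops expressed in teraflops
tf_per_accel = 7.8
watts_per_accel = 300

accelerators = target_tf / tf_per_accel
power_mw = int(accelerators) * watts_per_accel / 1_000_000   # watts -> megawatts

print(f"Accelerators needed: ~{int(accelerators):,}")    # 192,307, as in the quote
print(f"Accelerator power alone: ~{power_mw:.1f} MW")    # ~57.7 MW, near the power budget in the specs below
```

Note that this counts only the accelerators themselves; CPUs, memory, interconnect, storage, and cooling all add to the total.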

The Request for Proposals is designed to solicit bids from vendors to build two, and potentially three, new exascale supercomputers. Each system is expected to cost between $400 million and $600 million.

“These CORAL-2 systems represent the next generation in supercomputing and will be critical tools both for our nation’s scientists and for U.S. industry,” Secretary Perry said. “They will help ensure America’s continued leadership in the vital area of high performance computing, which is an essential element of our national security, prosperity, and competitiveness as a nation.”

The new RFP calls for systems to be housed at:

  • One at ORNL
  • One at LLNL
  • A possible third system at Argonne

Specifications:

  • According to the RFP, baseline performance for each system should be at least 1,300 petaflops.
  • The power budget goes up to 60 megawatts, with preferred power consumption for the system in the 20-60 megawatt range.
  • The requested mean time between failures (MTBF) is around six days.

As far as predictions go, Dan thinks one machine will go to IBM and the other will go to Intel. Rich thinks HPE will win one of the bids with an ARM-based system designed around The Machine’s memory-centric architecture. They have a wager, so listen in to find out where the smart money is.

Download the MP3 * Subscribe on iTunes * RSS Feed