RFHPC146: Day-by-Day Preview of ISC 2017 in Frankfurt

In this podcast, Rich gives us a day-by-day preview of the upcoming ISC 2017 conference. The event takes place June 18-22 in Frankfurt, Germany.

“ISC High Performance focuses on HPC technological development and its application in scientific fields, as well as its adoption in commercial environments. In 2017 we offer you 13 fascinating HPC topics grouped under three categories: systems, applications, and emerging topics. All topics will be addressed in different power-packed sessions. The ISC tutorials, workshops and the exhibition will complement these sessions.”

Friday, June 16

  • HP-CAST is the high performance user group meeting for HPE customers and partners. Over 300 attendees are expected for the two-day meeting, which you can read about here on insideHPC.

Saturday, June 17

  • The Student Cluster Competition teams begin their system buildout at the Frankfurt Messe. An exhibitor pass is required for entry to the hall.
  • HP-CAST Day 2 has a focus on partner sessions.

Sunday, June 18

Monday, June 19

Tuesday, June 20

  • Exhibits open 10:00am – 6:00pm
  • DDN User Group, 12:30pm – 5:00pm at the Mövenpick Hotel

Wednesday, June 21

  • Exhibits open 10:00am – 6:00pm
  • Student Cluster Awards Ceremony

Thursday, June 22

  • Workshops all day at the Marriott
  • Women in HPC Workshop 9:00am – 1:00pm
  • Dell HPC Community meeting, 8:00am – 3:00pm. The Dell EMC HPC Community event will feature keynote presentations by HPC experts and a networking breakfast to discuss best practices in the use of Dell EMC HPC Systems.
  • Rich heads out for a motorcycle tour of the Alps

Monday-Wednesday, June 26-28

After that, we do our Catch of the Week:

  • Dan interviews Carolyn Posti from Redline on the topic of HPC Benchmarking
  • Rich buys an Antminer S7 for mining Bitcoins

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC144: Henry’s Trip to Best Buy

In this podcast, Henry goes on a shopping spree at Best Buy. His mom was moving into a new place, so he got her all-new electronics to the tune of $1624. Is Henry a good son or did he go cheap? We’ll find out.

After that, we do our Catch of the Week:

  • Rich notes that recent reports about the Aurora supercomputer were incorrect. Rick Borchelt from DOE: “On the record, Aurora contract is not cancelled.”
  • Shahin has been trying to keep up with the boom in cryptocurrency, which now has a market cap of roughly $91 billion.
  • Dan is excited that Hitachi has stopped building its own mainframes and will instead supply IBM z Systems loaded with Hitachi VOS3 operating system software.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC142: A Look at the New Nvidia Volta GPUs

In this podcast, the Radio Free HPC team looks at Volta, Nvidia’s new GPU architecture that delivers up to 5x the performance of its predecessor.

At the GPU Technology Conference, Nvidia CEO Jen-Hsun Huang introduced a lineup of new Volta-based AI supercomputers, including a powerful new version of the company’s DGX-1 deep learning appliance; announced the Isaac robot-training simulator; unveiled the NVIDIA GPU Cloud platform, giving developers access to the latest, optimized deep learning frameworks; and announced a partnership with Toyota to help build a new generation of autonomous vehicles.

Built with 21 billion transistors, the newly announced Volta V100 “delivers deep learning performance equal to 100 CPUs.” Representing an investment by NVIDIA of more than $3 billion, the processor is built “at the limits of photolithography,” Huang told the crowd.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC140: Catching up with the Exascale Computing Project

In this podcast, the Radio Free HPC team looks at a recent update on the Exascale Computing Project by Paul Messina.

“The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”

Recent milestones include:

  • PathForward will soon announce six awards to vendors to develop new technologies that will be instrumental in exascale system development.
  • The ECP Industry Council met for the first time recently with C-level executives from industry to lay out application requirements for exascale systems. The end goal is to improve industrial competitiveness in the United States.

After that, we do our Catch of the Week:

  • Shahin is impressed by the new Wolfram Data Repository, a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
  • Henry reminds us to always tug on the front panel of an ATM before using it. “Once you understand how easy and common it is for thieves to attach “skimming” devices to ATMs and other machines that accept debit and credit cards, it’s difficult not to closely inspect and even tug on the machines before using them.”

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC129: Cray Looks to ARM HPC

In this podcast, the Radio Free HPC team looks at two hot stories from last week:

Cray to Develop ARM-based Isambard Supercomputer for UK Met Office. The GW4 Alliance, together with Cray and the UK Met Office, has been awarded £3m by EPSRC to deliver a new Tier 2 HPC service for UK-based scientists. This unique new service, named ‘Isambard’ after the renowned Victorian engineer Isambard Kingdom Brunel, will provide multiple advanced architectures within the same system in order to enable evaluation and comparison across a diverse range of hardware platforms.

Steve Pawlowski’s presentation from the Persistent Memory Summit: “As data proliferation continues to explode, computing architectures are struggling to get the right data to the processor efficiently, both in terms of time and power. But what if the best solution to the problem is not faster data movement, but new architectures that can essentially move the processing instructions into the data? Persistent memory arrays present just such an opportunity. Like any significant change, however, there are challenges and obstacles that must be overcome. Industry veteran Steve Pawlowski will outline a vision for the future of computing and why persistent memory systems have the potential to be more revolutionary than perhaps anyone imagines.”

Download the MP3 * Subscribe on iTunes * RSS Feed

 

RFHPC128: Quantum Software Goes Open Source

In this podcast, the Radio Free HPC team looks at D-Wave’s new open source software for quantum computing. The software is available on GitHub along with a whitepaper written by Cray Research alums Mike Booth and Steve Reinhardt.

D-Wave Systems released the open-source quantum software tool as part of its strategic initiative to build and foster a quantum software development ecosystem. The new tool, qbsolv, enables developers to build higher-level tools and applications that leverage the quantum computing power of systems provided by D-Wave, without the need to understand the complex physics of quantum computers.

“Just as a software ecosystem helped to create the immense computing industry that exists today, building a quantum computing industry will require software accessible to the developer community,” said Bo Ewald, president, D-Wave International Inc. “D-Wave is building a set of software tools that will allow developers to use their subject-matter expertise to build tools and applications that are relevant to their business or mission. By making our tools open source, we expand the community of people working to solve meaningful problems using quantum computers.”

After that, we do the Catch of the Week:

  • Shahin points us to the story about the miniaturization of accelerometers that could help with motion sickness and thus save lives.
  • Hater Dan shares the story that users are suing Apple over its App Store being a monopoly.
  • Rich notes that Cray has announced the appointment of Stathis Papaefstathiou to the position of senior vice president of research and development. He fills the slot vacated by Peg Williams, who will retire.

Download the MP3 * Subscribe on iTunes * RSS Feed

 

RFHPC127: Technologies We’re Looking Forward to in 2017

In this podcast, the Radio Free HPC team shares the things we’re looking forward to in 2017.

  • Shahin is looking forward to the iPhone 8. Henry and Dan will stick with Android. Shahin is also actively watching for much needed advancements in IoT security.
  • Henry is looking forward to storage innovations and camera technologies in the fight against crime. He also heralds the return of specialized processing devices for specific application workloads.
  • Dan thinks the continuing technology wars between CPUs and GPUs, and between Omni-Path and InfiniBand, are great theater.
  • Rich is looking forward to traveling to a great set of conferences in the first half of the year. He has just updated the insideHPC Events Calendar with the lion’s share of major HPC events for 2017.

After that, we each share our Catch of the Week:

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC123: SC16 Student Cluster Competition & Results

In this podcast, the Radio Free HPC team reviews the results from the SC16 Student Cluster Competition.

This year, the advent of clusters with the new Nvidia Tesla P100 GPUs made a huge impact, nearly tripling the Linpack record for the competition.

The Student Cluster Competition returned for its 10th year at SC16. The competition, which debuted at SC07 in Reno and has since been replicated in Europe, Asia and Africa, is a real-time, non-stop, 48-hour challenge in which teams of six undergraduates assemble a small cluster at SC16 and race to complete a real-world workload across a series of scientific applications, demonstrate knowledge of system architecture and application performance, and impress HPC industry judges. The students partner with vendors to design and build a cutting-edge cluster from commercially available components that must not exceed a 3120-watt power limit, and work with application experts to tune and run the competition codes.

For the first time ever, the team that won top honors also won the award for achieving the highest performance for the Linpack benchmark application. The team “SwanGeese” is from the University of Science and Technology of China. In traditional Chinese culture, the rare Swan Goose stands for teamwork, perseverance and bravery. This is the university’s third appearance in the competition.

Also, an ACM SIGHPC Certificate of Appreciation was presented to the authors of the recent SC paper selected for use in the SC16 Student Cluster Competition Reproducibility Initiative. The selected paper was “A Parallel Connectivity Algorithm for de Bruijn Graphs in Metagenomic Applications” by Patrick Flick, Chirag Jain, Tony Pan and Srinivas Aluru from Georgia Institute of Technology.

After that, we go round-robin for our Catch of the Week:

Download the MP3 * Subscribe on iTunes * RSS Feed

See our complete coverage of SC16, which takes place Nov. 13-18 in Salt Lake City.

RFHPC122: A Look at the New TOP500

In this podcast, the Radio Free HPC team reviews the latest TOP500 list of the world’s fastest supercomputers.

The 48th edition of the TOP500 list saw China and the United States pacing each other for supercomputing supremacy. Both nations now claim 171 systems apiece in the latest rankings, accounting for two-thirds of the list. However, China has maintained its dominance at the top of the list with the same number 1 and 2 systems from six months ago: Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops. This latest edition of the TOP500 was announced Monday, November 14, at the SC16 conference in Salt Lake City, Utah.

After the US and China, Germany claims the most systems with 32, followed by Japan with 27, France with 20, and the UK with 17. A year ago the US was the clear leader with 200 systems, while China had 108, Japan had 37, Germany had 33, and both France and the UK had 18.

In addition to matching each other in system count in the latest rankings, China and the US are running neck and neck in aggregate Linpack performance. The US holds the narrowest of leads, with 33.9 percent of the total; China is second with 33.3 percent. The total performance of all 500 computers on the list is now 672 petaflops, a 60 percent increase from a year ago.

The top of the list did receive a mild shakeup with two new systems in the top ten. The Cori supercomputer, a Cray XC40 system installed at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC), slipped into the number 5 slot with a Linpack rating of 14.0 petaflops. Right behind it at number 6 is the new Oakforest-PACS supercomputer, a Fujitsu PRIMERGY CX1640 M1 cluster, which recorded a Linpack mark of 13.6 petaflops. Oakforest-PACS is up and running at Japan’s Joint Center for Advanced High Performance Computing (JCAHPC). Both machines owe their computing prowess to the Intel “Knights Landing” Xeon Phi 7250, a 68-core processor that delivers 3 peak teraflops of performance. The addition of Cori and Oakforest-PACS pushed every system below them a couple of notches down, with the exception of Piz Daint, a Cray supercomputer installed at the Swiss National Supercomputing Centre (CSCS). It maintained its spot at number 8 as a result of a massive 3.5 petaflop upgrade, courtesy of newly installed NVIDIA P100 Tesla GPUs.

Piz Daint also has the honor of being the second most energy-efficient supercomputer in the TOP500, with a rating of 7.45 gigaflops/watt. It is topped by NVIDIA’s in-house DGX Saturn V system, the only other system on the list equipped with the new P100 GPUs. It is a 3.3-petaflop cluster of DGX-1 servers that delivers 8.18 gigaflops/watt. To offer some perspective here, the nominal goal for the first exascale systems is 50 gigaflops/watt.

Download the MP3 * Subscribe on iTunes * RSS Feed

See our complete coverage of SC16, which takes place Nov. 13-18 in Salt Lake City.