Supercomputing to Modernize the Electric Grid

We open the show by talking some weather. It’s so cold at Henry’s house (in Minnesota) that he’s becoming a human superconductor and quantum computing experimenters are showing up at his house to test their systems under uber cold conditions. Shahin adds an inane joke about cold and levitation that Dan threatens to cut out of the final edit of the show.

Our first topic is how Lawrence Livermore National Lab is working to simulate and then help modernize the electric grid. We talk about how the ‘new grid’ will need to be two-way, both delivering and accepting electricity. The new grid will also have to communicate with smart homes and other buildings in order to predict demand and adjust real-time pricing.

When the discussion turned to solar power, Henry related the problems of low payouts from utilities to consumers who have installed solar panels. Dan pointed out the current shortfalls in solar power, bringing up the example that even the world’s largest solar plant still doesn’t generate enough juice to power the NYC subway system. Henry called Dan a dirty liar and an embarrassment to his family. Dan provided the following links to justify his take:

  1. On an annual basis, the NYC subway system uses 1.8 billion kilowatt-hours of electricity. That is about 1,800 gigawatt-hours, or roughly 205 megawatts of continuous demand.
  2. According to an article published by Origin Energy on 10/24/18, the largest single-location solar field is located in India and is rated at 648 MW. Even running around the clock at full output (which no solar plant does), it would fall well short of what the NYC subway consumes. Dan is vindicated.
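Since the comparison mixes units of energy (kilowatt-hours) and power (megawatts), here is a quick back-of-the-envelope check in Python. The 1.8 billion kWh and 648 MW figures come from the links above; the ~20% solar capacity factor is our own assumption for a typical utility-scale plant, not a number from the show:

```python
# Sanity-check the subway-vs-solar comparison.
HOURS_PER_YEAR = 365 * 24  # 8760

# NYC subway: 1.8 billion kWh per year (figure from the episode links)
subway_kwh_per_year = 1.8e9
subway_gwh_per_year = subway_kwh_per_year / 1e6            # energy: 1800 GWh/yr
subway_avg_mw = subway_kwh_per_year / HOURS_PER_YEAR / 1e3  # average power: ~205 MW

# Indian solar field: 648 MW rated capacity (from the Origin Energy article);
# the 20% capacity factor is our assumption, not a reported figure.
solar_rated_mw = 648
solar_capacity_factor = 0.20
solar_gwh_per_year = solar_rated_mw * HOURS_PER_YEAR * solar_capacity_factor / 1e3

print(f"Subway: {subway_gwh_per_year:.0f} GWh/yr (~{subway_avg_mw:.0f} MW average draw)")
print(f"Solar plant: ~{solar_gwh_per_year:.0f} GWh/yr")
print("Dan vindicated:", solar_gwh_per_year < subway_gwh_per_year)
```

Even on these rough assumptions, the plant’s annual output (around 1,135 GWh) lands well below the subway’s annual consumption (1,800 GWh), so Dan’s take holds up.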

Next up, we discuss some of the applications that are being run on the Summit supercomputer, the world’s fastest system. Some of the applications include exploring the origin of the universe and whole-cell simulation, along with a host of other stuff. Our discussion strays into the recent announcement that scientists in Israel have supposedly cured cancer. This claim has since been debunked, or at least partially debunked…leaving it barely bunked at all. As the conversation strays even further, Shahin suggests putting a giant mirror behind the sun in order to give us more solar energy. One hell of a good idea.

Catch of the Week

Shahin’s Catch of the Week starts as a jumble of buzzwords but clarifies itself (a bit) through explanation. What he’s talking about is a paper titled “Semi-device-independent quantum money with coherent states” that discusses using quantum computing to create unforgeable quantum banknotes and credit cards – definitely a good thing.

Dan’s Catch of the Week is the dust-up between Apple and Facebook and how the two goliaths have become embroiled in a slap fight. In the ensuing discussion, Dan coins the phrase “if you’re not paying for an app, it’s a virus.” The gang also points out Facebook’s naiveté (whether it’s real or put on) when it comes to user privacy issues. Dan, warming to the topic of tech giants controlling our lives, brings up the example of the new “NewsGuard” feature that passes judgement on whether news sites are credible or not credible. NewsGuard is a browser extension that users can activate on new versions of Microsoft’s Edge browser. Here are a few representative discussions about NewsGuard and possible implications: Gizmodo, Publishing Insider, and Breitbart.

Shahin believes that NewsGuard is an AI-fueled tool, which, upon further research, turns out to be incorrect. NewsGuard uses ‘trained journalists’ to review and rate thousands of news and information websites. After a little more desultory conversation, the podcast ends on this disquieting note.

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter

What’s an AI Supercomputer? What’s up with software SMP?

We start our discussion by contemplating the fact that Shahin doesn’t have a middle name (he says he never needed one) and touching on why Henry has picked up the nickname ‘Gator’ Newman.

What’s an AI supercomputer?

Our first topic is whether a supercomputer can or cannot be an “AI supercomputer.” This is based on France (along with HPE) unveiling a new AI system which will double the capacity of French supercomputing. So what are the differences between a traditional super and an AI super? According to Dan, it mostly comes down to how many GPUs the system is configured with, while Shahin and Henry think it has something to do with the datasets. Send us a note or a tweet if you have an opinion on this.

Software SMP hits 10k

The guys also discuss ScaleMP and its announcement of record results: close to 10,000 customers as of the close of 2018. This led to talk about SMP vs. MPP from a performance standpoint. Henry asserted that a clustered approach will always be superior to a big SMP approach, all things being equal. Dan doesn’t agree, and Shahin confesses his love of ‘fat node’ clustering. Dan agrees with Shahin, but wonders why no one is doing it.

We also note that Mellanox got a nice design win with the Finns, as they’ll be installing 200 Gb/s HDR InfiniBand interconnect in a new Finnish supercomputer to be deployed in 2019 and 2020. The interconnect will be used in a Dragonfly topology.

Catch of the Week

  1. Shahin’s catch of the week is a mathematical puzzle titled “The most unexpected answer to a counting puzzle.” Here’s a link to the video.
  2. Dan likes a good comeback story and in light of that, his catch of the week is AMD nabbing a design win at Nikhef.
  3. Henry HAS NO CATCH OF THE WEEK. This makes him the “RF-HPC Villain of the Week” 🙂


China Exascale Again (Tianhe-3 is coming), GDPR shows its teeth

After a short talk about the weather in Henry’s basement (it had just reached 60 F by the time we recorded the show), we got right down to business with an important announcement:  our pal Rich Brueckner is leaving the show. He just has too much on his plate and something had to give.

While we’re worried about the impact Rich’s departure might have on our listenership, we did take note of and welcome listeners 13, 14, and 15, who made themselves known to Henry on one of his recent business trips. Yay us.

Our first topic is China rolling out a successor to Tianhe-1, dubbed Tianhe-3. According to news articles, Tianhe-3 will be 200 times faster than Tianhe-1, with 100x more storage. What we don’t know is whether these comparisons are relative to the original Tianhe-1 or to Tianhe-1A. The latter machine weighs in at 2.566 PFlop/s, which means that Tianhe-3 might be as fast as roughly 513 PFlop/s when complete. We also made a reference to a past episode, which we know you remember vividly, where we discussed China’s three-pronged strategy for exascale.
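As a sanity check, here is a quick sketch of how much the choice of baseline matters. The Rmax figures are from the TOP500 lists (Tianhe-1 at roughly 0.563 PFlop/s in November 2009, Tianhe-1A at roughly 2.566 PFlop/s in November 2010); treat them as approximate:

```python
# Apply the claimed 200x speedup to both plausible baselines.
# Rmax values (PFlop/s) are approximate figures from the TOP500 lists.
baselines_pflops = {
    "Tianhe-1 (Nov 2009)": 0.563,
    "Tianhe-1A (Nov 2010)": 2.566,
}
SPEEDUP = 200

for name, rmax in baselines_pflops.items():
    print(f"{SPEEDUP} x {name}: {SPEEDUP * rmax:.0f} PFlop/s")
# The baseline swings the projection from ~113 PFlop/s to ~513 PFlop/s,
# nearly a factor of five.
```

Depending on which machine the “200 times faster” claim refers to, the projection changes by almost 5x, which is exactly the ambiguity in the news coverage.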

As we’re moving into our popular “Catch of the Week” segment, Shahin hijacks the conversation by questioning whether anyone knows the real-world utilization rates of non-commodity configurations in public clouds. This leads to a bold estimate from Dan: “I’ll bet that there isn’t a public cloud out there that has a higher than 60-65% utilization rate.” We have a spirited discussion about this pseudo-metric and how infrastructures are sized to handle peaks. We also brought up a story that malware can bring down public clouds, although someone would have to own your system before doing it.

Catch of the Week:

  1. Henry hipped us to a website that shows whether your email address or password has been pwned.
  2. Shahin brought up Google’s recent 50 million euro fine for GDPR violations.
  3. Dan discussed the case of a Dutch surgeon who won a landmark case to get her medical disciplinary records removed from Google searches.


Weather Forecasting Goes Crowdsourcing, Q means Quantum

In this episode of Radio Free HPC, Dan, Henry, and Shahin start with a spirited discussion of IBM’s recent announcement regarding their crowdsourced weather prediction application. Henry was dubious as to whether Big Blue could get access to the data they need in order to truly put out a valuable product. Dan had questions about the value of the crowdsourced data and how it could be scrubbed in order to be useful. Shahin was pretty favorable towards IBM’s plans and believes that they will solve the problems that Henry and Dan raised.

IBM came up again in the show as the boys kicked around IBM’s commercial quantum computing system. Shahin made the point that, for a market with few applications and success stories, it has attracted nearly every big vendor in the business.

Catch of the Week:

Henry told the guys about a new security flaw as pointed out by Krebs, this one concerning an exploit of credit cards.

Shahin talked about the newly proposed Deep500 benchmark, designed to compare deep learning training and inference performance.

Dan discussed a recent interview with a VC who believed that by 2035, more than 40% of jobs worldwide would be taken over by AI. This prompted a discussion of how technology has impacted employment and the economy in the past, and how the pace of economic displacement in the era of AI is much quicker than at any other time.

We end the episode by denouncing attorneys.


A Look Back at the 2018 CHPC Conference in South Africa

In this podcast, the Radio Free HPC team looks back at the highlights of the 2018 CHPC Conference in South Africa. With over 500 attendees, the event featured a set of keynotes on high performance computing as well as a Student Cluster Challenge and a Cyber Security Competition.

The comprehensive program included national and international contributions as well as contributions from cyberinfrastructure system partners: the South African National Research Network and the Data Intensive Research Initiative of South Africa. Captains of the HPC industry from across the globe provided key talks and workshops during the conference week, including: Patricia Damkroger, Vice President and General Manager at Intel; Thomas Sterling from Indiana University, USA; Michael Foley, who has recently retired from the World Bank; Bhekisipho Twala from the University of South Africa; Khutso Ngoasheng from the South African Radio Astronomy Observatory; Elmarie Biermann from the Cyber Security Institute; and many others.

The SADC HPC Collaboration Forum participated in discussions around the HPC framework and implementation plans for a regional HPC facility that would be used to find scientific solutions for common problems and other research in which member states could collaborate.

This year, following the conference theme of how HPC Transforms for the Future, increasing the participation of women in HPC was prominent. This was supported by the introduction of a sponsorship for an outstanding female participant in the Student Cluster Challenge. The award in this newly introduced category, sponsored by Intel, went to Ms. Mapule Madzena, a student from the University of the Free State. She was hailed as the best female student and walked away with R64 500.

Student Competition Highlights included:

  • Six students from the University of Cape Town (UCT) and the University of the Witwatersrand (WITS), who came out tops at a national student cluster competition to build a supercomputer, will fly the South African flag high at the International Student Cluster competition in Germany, in June next year.
  • During the competition, 10 teams of students from various universities in the country battled it out to build small high performance computing clusters on the exhibition floor – using hardware provided by CHPC and its industrial partners – and raced to demonstrate the best performance across a series of benchmarks and applications.
  • Sefan Schroder, Dilon Heald, Jehan Singh and Clara Staasen from UCT, along with Anita de Mello Koch and Kaamilah Desai from WITS, will test their skills against their international counterparts when they compete with students from 11 countries, including China, Germany, Poland, Singapore and Thailand.
  • In the cybersecurity challenge, the University of Pretoria came first, followed by Stellenbosch University. This competition provides a platform for students to compete in real-time and come up with ideas that could protect South Africa from cybercrimes. The winning team will compete at an appropriate international competition, such as the European Cyber Security Challenge.

Speaking at the conference, DST Chief Director: Emerging Research Areas and Infrastructure, Dr Daniel Adams, said that the event was critical to develop the skills needed in the country.

“The DST remains committed to supporting skills development and new interventions. The CHPC is a great platform to stimulate the pipeline and boost human capital development. Initiatives such as these have led to slow, but steady, improvement in the enrollment of doctoral degrees,” he said.

CHPC Director Dr Happy Sithole was impressed with the calibre of the students who participated in the competitions. “I am very proud of the kind of innovation displayed by the students. I believe that they will represent us well on international stages. These competitions are critical to equip the future generation with cyberinfrastructure and supercomputing experience, and to expose science, technology, engineering, mathematics and innovation students to an array of opportunities.”

Over the years, South Africa has performed well at the International Student Cluster competition, winning it in 2013, 2014 and 2016, while coming second in 2015 and 2017. In June this year, the team came third, after two teams from China, at the cluster challenge in Frankfurt, Germany.

Dr Sithole was also pleased with the progress made in building a strong high performance computing community in the country.

“This year, our focus was on the transformation of both the use and development of cyberinfrastructure, which will help the industries, academics and the nations from across the continent. Looking back at the first meeting where we engaged in the discussions of building a strong high performance computing community in South Africa, and advocating for financial support from government, significant growth has been achieved. Notably, it is now a continental focus, not only on computing, but overall cyberinfrastructure growth and demonstration of impact,” he said.

Student Poster Competition

The conference also had 60 students showcasing their research posters for work conducted through the use of high performance computing. The posters were judged mostly by external adjudicators under the following criteria: quality of the poster, the high performance computing content and quality of the research, the students’ ability to communicate the science content, and the general impression of the presentation. The winner of the Masters-level poster was Beauty Shibiri from the University of Limpopo for the abstract titled Investigating the Structural and Volume Changes of Composite Layered-Spinel Nanoporous Li-Mn-O Electrode Materials. At the doctoral level, the award went to Elkana Rugut from WITS for the abstract titled Thermoelectric Properties of CdAl2O4 Spinel.

Exhibition area

The expo zone of the conference was made up of companies that provided valuable support to the funding of the conference. Sponsors such as Intel (diamond sponsor), Altair and Dell EMC (platinum sponsors), and Mellanox and Hewlett Packard (gold sponsors) ensured that the CHPC was able to bring leading speakers to add to the stature of the conference.

Other sponsors included Student Cluster Competition sponsors: DELL EMC, Eclise Holdings, Altair, Bright Computing, Mellanox, Microsoft and Intel; as well as Student Cyber Security Challenge sponsors: Microsoft and MWR.

Please visit the gallery to see more photos from the conference.


RFHPC215: A Hard Look at Santa’s Big Data Challenges

In this podcast video, the Radio Free HPC team looks at the monumental IT challenges that Santa faces each Holiday Season.

With nearly 2 billion children to serve, Santa’s operations are an IT challenge on the grandest scale. If the world’s population keeps growing by 83 million people per year, Santa may need to build a hybrid cloud just to keep up. With billions of simultaneous queries, the Big Data analytics involved will certainly require an 8-socket NUMA machine with 4 terabytes of central memory.


RFHPC214: A Look at TOP500 Trends on the Road to Exascale

From left, Henry Newman, Dan Olds, Shahin Khan, and Rich Brueckner are the Radio Free HPC team

In this podcast, the Radio Free HPC team looks at the semi-annual TOP500 BoF presentation by Jack Dongarra.

The TOP500 list of supercomputers serves as a “Who’s Who” in the field of High Performance Computing (HPC). It started as a list of the most powerful supercomputers in the world and has evolved into a major source of information about trends in HPC. The 52nd TOP500 list will be published in November 2018, just in time for SC18. This BoF will present detailed analyses of the TOP500 and discuss the changes in the HPC marketplace during the past years. The BoF is meant as an open forum for discussion and feedback between the TOP500 authors and the user community.

After that, we do our Catch of the Week.

See our complete coverage of SC18


RFHPC213: Running Down the TOP500 at SC18

In this podcast, the Radio Free HPC team looks back on the highlights of SC18 and the newest TOP500 list of the world’s fastest supercomputers.

Buddy Bland shows off Summit, the world’s fastest supercomputer at ORNL.

The latest TOP500 list of the world’s fastest supercomputers is out, a remarkable ranking that shows five Department of Energy supercomputers in the top 10, with the first two captured by Summit at Oak Ridge and Sierra at Livermore. With the number one and number two systems on the planet, the “Rebel Alliance” vendors of IBM, Mellanox, and NVIDIA stand far and tall above the others.

Summit widened its lead as the number one system, improving its High Performance Linpack (HPL) performance from 122.3 to 143.5 petaflops since its debut on the previous list in June 2018. Sierra also added to its HPL result from six months ago, going from 71.6 to 94.6 petaflops, enough to bump it from the number three position to number two. Both are IBM-built supercomputers, powered by Power9 CPUs and NVIDIA V100 GPUs.

Sierra’s ascendance pushed China’s Sunway TaihuLight supercomputer, installed at the National Supercomputing Center in Wuxi, into third place. Prior to last June, it had held the top position on the TOP500 list for two years with its HPL performance of 93.0 petaflops. TaihuLight was developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC).

In this video from ISC 2018, Yan Fisher from Red Hat and Buddy Bland from ORNL discuss Summit, the world’s fastest supercomputer. Red Hat teamed with IBM, Mellanox, and NVIDIA to provide users with a new level of performance for HPC and AI workloads.

Tianhe-2A (Milky Way-2A), deployed at the National Supercomputer Center in Guangzhou, China, is now in the number four position with a Linpack score of 61.4 petaflops. It was upgraded earlier this year by China’s National University of Defense Technology (NUDT), replacing the older Intel Xeon Phi accelerators with the proprietary Matrix-2000 chips.

“Top-500, Green-500, IO-500, HPCG, and now CryptoSuper-500 all point to the growing versatility of supercomputers,” said Shahin Khan from OrionX. “It’s time to more explicitly recognize that. Counting systems which are capable of doing Linpack but in fact are doing something else continues to be an issue. We need additional info about systems so we can tally them correctly and make this less of a game.”

At number five is Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland. At 21.2 petaflops, it maintains its standing as the most powerful system in Europe. It is powered by a combination of Intel Xeon processors and NVIDIA Tesla P100 GPUs.

Trinity, a Cray XC40 system operated by Los Alamos National Laboratory and Sandia National Laboratories, improved its performance to 20.2 petaflops, enough to move it up one position to the number six spot. It uses Intel Xeon Phi processors, the only top ten system to do so.

The AI Bridging Cloud Infrastructure (ABCI) installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST) is listed at number seven with a Linpack mark of 19.9 petaflops. The Fujitsu-built system is powered by Intel Xeon Gold processors, along with NVIDIA Tesla V100 GPUs.

Germany provided a new top ten entry with SuperMUC-NG, a Lenovo-built supercomputer installed at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum) in Garching, near Munich. With more than 311,040 Intel Xeon cores and an HPL performance of 19.5 petaflops, it captured the number eight position.

Titan, a Cray XK7 installed at the DOE’s Oak Ridge National Laboratory, and previously the most powerful supercomputer in the US, is now the number nine system. It achieved 17.6 petaflops using NVIDIA K20x GPU accelerators.

Sequoia, an IBM BlueGene/Q supercomputer installed at DOE’s Lawrence Livermore National Laboratory, is the 10th-ranked TOP500 system. It was first delivered in 2011, achieving 17.2 petaflops on HPL.

See our complete coverage of SC18


RFHPC212: Inside the Spaceborne Supercomputer from HPE

In this podcast, the Radio Free HPC team sits down with Mark Fernandez from HPE to discuss the Spaceborne Supercomputer that is currently orbiting the planet aboard the International Space Station.

Last week, HPE announced it is opening high-performance computing capabilities to astronauts on the International Space Station (ISS) as part of its continued experiments on the Spaceborne Computer project.

Spaceborne Computer is the first commercial off-the-shelf (COTS) supercomputer that HPE and NASA launched into space for a one-year experiment to test resiliency and performance, achieving one teraFLOP (a trillion floating point operations per second) and successfully operating on the International Space Station (ISS).  After completing its one-year mission proving it can withstand harsh conditions of space – such as zero gravity, unscheduled power outages, and unpredictable levels of radiation – Spaceborne Computer will now, for the first time ever, open its supercomputing capabilities for use aboard the ISS. These “above-the-cloud” services will allow space explorers and experimenters to run analyses directly in space instead of transmitting data to and from Earth for insight.

“Our mission is to bring innovative technologies to fuel the next frontier, whether on Earth or in space, and make breakthrough discoveries we have never imagined before,” said Dr. Eng Lim Goh, Chief Technology Officer and Vice President, HPC and AI, HPE. “After gaining significant insights from our first successful experiment with Spaceborne Computer, we are continuing to test its potential by opening up above-the-cloud HPC capabilities to ISS researchers, empowering them to take space exploration to a new level.”

After that, we do our Catch of the Week:

  • Shahin saw some cartoony Beethoven thing that he liked a lot. He also liked this video of artists from over 124 countries singing Gandhi’s favorite bhajan, “Vaishnav Jan To Tene Kahiye.”
  • Dan likes the story about the European HPC Handbook. You can’t tell all the players without a program.
    • Dan is starting a 4800 mile road tour on his roundtrip to Dallas for SC18. He’s hitting all the labs and supercomputer centers he can along the way.
  • Rich is proud to announce that his first documentary film has already raised $700 for the Multnomah County Animal Shelter.


RFHPC211: A Preview of the SC18 Student Cluster Competition

In this podcast, Radio Free HPC Previews the SC18 Student Cluster Competition.

“The Student Cluster Competition was developed in 2007 to provide an immersive high performance computing experience to undergraduate and high school students. With sponsorship from hardware and software vendor partners, student teams design and build small clusters, learn designated scientific applications, apply optimization techniques for their chosen architectures, and compete in a non-stop, 48-hour challenge at the SC conference to complete a real-world scientific workload, showing off their HPC knowledge for conference attendees and judges. Teams are composed of six students, at least one advisor, and vendor partners. The advisor provides guidance and recommendations, the vendor provides the resources (hardware and software), and the students provide the skill and enthusiasm. Students work with their advisors to craft a proposal that describes the team, the suggested hardware, and their approach to the competition. The SCC committee reviews each proposal and provides comments for all submissions received before the deadline.”
