RFHPC117: Calling for a Forever Data Format

In this podcast, the Radio Free HPC team discuss Henry Newman’s recent editorial calling for a self-descriptive data format that will stand the test of time. Henry contends that we seem headed for massive data loss unless we act.

In 20 years, much less thousands of years, how is anyone going to figure out what data is stored in each of these file formats? Of course, some of them are open source, but many are not. And even for open source, who is going to save the formats and information for a decade or more? I cannot even open some MS Office documents from the early 2000s, and that is less than two decades. The same can be said for many other data formats. There are self-describing data formats such as HDF (Hierarchical Data Format), which is about 30 years old, but outside of the HPC community, it is not widely used. There are other self-describing technologies in other communities, and maybe like HDF they could be used for virtually any data type. However, everyone wants what they have, not something new or different, and NIH (not invented here) is what usually happens in our industry.
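To make the "self-describing" idea concrete, here is a minimal sketch of the principle behind formats like HDF: the file carries its own human-readable schema, so a future reader needs no out-of-band specification to interpret the payload. (This is purely illustrative and much simpler than real HDF5, which stores far richer hierarchical metadata; the field names and layout here are our own invention.)

```python
import io
import json
import struct

def write_self_describing(values):
    """Pack a list of floats with an embedded JSON schema header."""
    header = json.dumps({
        "description": "temperature samples, degrees C",
        "dtype": "float64-little-endian",
        "count": len(values),
    }).encode("utf-8")
    buf = io.BytesIO()
    buf.write(struct.pack("<I", len(header)))            # header length prefix
    buf.write(header)                                    # human-readable schema
    buf.write(struct.pack(f"<{len(values)}d", *values))  # raw payload
    return buf.getvalue()

def read_self_describing(blob):
    """Recover the data using only information stored in the file itself."""
    (hlen,) = struct.unpack_from("<I", blob, 0)
    meta = json.loads(blob[4:4 + hlen])
    values = struct.unpack_from(f"<{meta['count']}d", blob, 4 + hlen)
    return meta, list(values)

meta, values = read_self_describing(write_self_describing([20.5, 21.0]))
```

A reader decades from now could inspect the header with nothing but a text editor and reconstruct the payload layout, which is exactly the longevity property Henry is arguing for.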

Already we are seeing data formats that rely on antiquated hardware. Rich notes that data translation sites like Zamzar can help, and Shahin notes that the Living Computer Museum in Seattle has a mission to keep legacy computer systems running and available for people to see in action.

Rich points out that this is not just a problem for future scientific data. A recent article in the Economist describes how the number of genomics papers packaged with error-ridden spreadsheets is increasing by 15% a year, far above the 4% annual growth rate in the number of genomics papers published.

To wrap things up in our Catch of the Week, Rich points to a talk by Larry Smarr on 50 Years of Supercomputing. And Henry can’t help but ring the security klaxon now that Yahoo has disclosed a breach of half a billion user accounts.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC116: New Technologies for HPC Power & Cooling

In this podcast, the Radio Free HPC team looks at some interesting new developments in HPC Power & Cooling:

  • Solar-Powered Hikari Supercomputer at TACC Demonstrates HVDC Efficiencies. The HVDC technologies in the Hikari supercomputer at TACC could save 15 percent compared to conventional systems. “The 380 volt design reduces the number of power conversions when compared to AC voltage systems,” said James Stark, director of Engineering and Construction at the Electronic Environments Corporation (EEC), a Division of NTT FACILITIES. “What’s interesting about that,” Stark added, “is the computers themselves – the supercomputer, the blade servers, cooling units, and lighting – are really all designed to run on DC voltage. By supplying 380 volts DC to Hikari instead of having an AC supply with conversion steps, it just makes a lot more sense. That’s really the largest technical innovation.”
  • Aquila Launches Liquid Cooled OCP Server Platform. Using a fan-less design based on liquid cooling technology from Clustered Systems, the new Aquarius rack from Aquila can support up to 100 kW. “Aquarius is designed from the ground up to meet reliability and the feature-specific demands of high performance and high density computing. Our design goal was to reduce the cost of cooling server resources to well under 5% of overall data center usage.”

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC115: How XSEDE 2.0 Funding Begs the Question of Exascale Application Development

In this podcast, the Radio Free HPC team discusses the recent news that Intel has sold its controlling stake in McAfee and that NSF has funded the next generation of XSEDE.

Plus, we continue our new regular feature: The Catch of the Week, where:
  • Dan can’t resist taking potshots at Apple, a company whose products he’s never purchased.
  • Henry fondly remembers Star Trek, one of the only TV shows he’s seen in the last 50 years.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC114: Previewing the HPC User Forum & StartupHPC Events

In this podcast, the Radio Free HPC team previews the HPC User Forum & StartupHPC Events coming up in the Fall of 2016.

  • HPC User Forum, Sept. 6-8, Austin, Texas. The HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users (www.hpcuserforum.com). The organization has grown to 150 members. It is directed by a volunteer Steering Committee of users from government, industry and academia, and operated for the users by market analyst firm IDC. They hold two full-membership meetings a year in the United States, and also hold two meetings annually in international locations. International HPC User Forums are coming up in Beijing, China on Sept. 22 and in Oxford, UK on Sept. 29-30.
  • StartupHPC, Nov. 14, Salt Lake City, Utah. Today StartupHPC announced its third annual summit will be held in Salt Lake City at the Grand America Hotel. Held in conjunction with SC16, the conference features speakers with in-depth experience in entrepreneurship and the startup culture. The StartupHPC community caters to students and young entrepreneurs, corporations, venture capital firms, academia, government agencies, and support organizations.
Plus, we continue our new regular feature: The Catch of the Week.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC113: Alternative Processors for HPC

In this podcast, the Radio Free HPC team looks at why it’s so difficult for new processor architectures to gain traction in HPC and the datacenter.

Along the way, we try to take on the following questions:

  • ARM servers have been talked about for a few years now, but haven’t become anything near mainstream. Why is that? 
  • Can we categorize the challenges that non-x86 CPUs face?
  • Is HPC the right target market for ARM? Is ARMv8 going to move the needle?
  • What is new in ARMv8? What did Phytium announce this week at Hot Chips? What will Fujitsu use in the Post-K supercomputer?
Plus, we introduce a new regular feature for our show: The Catch of the Week.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC112: Looking Back at IDF 2016

In this podcast, the Radio Free HPC team reviews the recent 2016 Intel Developer Forum.

“How will Intel return to growth in the face of a declining PC market? At IDF, they put the spotlight on IoT and Machine Learning. With new threats rising from the likes of AMD and Nvidia, will Chipzilla make the right moves? Tune in to find out.”

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC111: A First Look at HPE’s Pending Acquisition of SGI

In this podcast, the Radio Free HPC team looks at HPE’s pending acquisition of SGI.

“At HPE, we are focused on empowering data-driven organizations,” said Antonio Neri, executive vice president and general manager, Enterprise Group, Hewlett Packard Enterprise. “SGI’s innovative technologies and services, including its best-in-class big data analytics and high performance computing solutions, complement HPE’s proven data center solutions designed to create business insight and accelerate time to value for customers.”

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC110: Machine Learning & Data Locality

In this podcast, the Radio Free HPC team looks at use cases for Machine Learning where data locality is critical for effective results.

Most of the Machine Learning stories we hear involve a central data repository. Henry says he is not hearing enough about how Machine Learning is going to deal with the problem of massive data streams from things like sensors. Such data, he contends, will have to be processed at the source.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC109: New Chip Architectures — Picking the Winners

In this podcast, the Radio Free HPC team welcomes Shahin Khan from OrionX to a discussion on chip architectures for HPC.

More and more new alternative architectures were in evidence at ISC in Germany this year, but what does it take for a chip architecture to be a winner? Looking back, a chip like the DEC Alpha had many advantages over the competition, but it did not survive.

Henry contends that a viable chip architecture needs a memory controller with the muscle to keep up with new memory hierarchies. Rich thinks it isn’t so much about features; survival is all about the chip’s ecosystem.

Download the MP3 * Subscribe on iTunes * RSS Feed

RFHPC108: First Look at the Sunway TaihuLight 93 Petaflop Supercomputer

In this podcast, Shahin Khan from OrionX joins the Radio Free HPC team for a look at the new TOP500 list of the world’s fastest supercomputers.

As announced this morning at ISC 2016, the fastest supercomputer on the TOP500 is the new Sunway TaihuLight System in Wuxi, China. Developed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC), TaihuLight scored a whopping 93 Petaflops on the LINPACK benchmark. To put that in perspective, that is nearly three times faster than the previous #1 system, the Tianhe-2 supercomputer, which has moved to #2 after ruling the roost for some three years. TaihuLight is also five times faster than Titan, the 17 Petaflop machine at ORNL, which is still the fastest machine in the USA.


The Sunway TaihuLight Supercomputer in Wuxi, China is the world’s fastest supercomputer.

Here is the rundown:

  • Linpack: 93 Petaflops (Rmax)
  • Peak performance: 125.4 Petaflops (Rpeak)
  • Processor: Sunway SW26010 1.4 GHz processor
  • Cores per socket: 260
  • Instruction Set: RISC instruction set developed by Sunway
  • Interconnect: their TOP500 submission says “Sunway design” but Mellanox supplied the Host Channel Adapter (HCA) and switch chips. Sunway may not call it InfiniBand, but that is exactly what it is. China has political reasons for characterizing the overall system as domestic technology.
  • Cabinets: 40 Water-cooled cabinets, each with 3 Petaflops of peak performance
  • Power consumption: 15.37 Megawatts
  • Mflops/watt: 6051
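The efficiency figure follows directly from the Linpack score and the power draw. A quick back-of-the-envelope check (using the TOP500-reported Rmax of 93,014.6 TFlops and power of 15,371 kW) reproduces the listed 6051 Mflops/watt:

```python
# Verify TaihuLight's energy efficiency from its TOP500 figures.
rmax_mflops = 93.0146e9   # 93.0146 Pflops expressed in Mflops
power_watts = 15.371e6    # 15,371 kW reported power draw

efficiency = rmax_mflops / power_watts  # Mflops per watt
print(round(efficiency))  # → 6051
```

For context, that was roughly triple the efficiency of Tianhe-2, which is a big part of why TaihuLight was notable beyond its raw speed.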

Download the MP3 * Subscribe on iTunes * RSS Feed

Sign up for our insideHPC Newsletter