In this episode, the Radio Free HPC team looks at the Xennet initiative, a “public, distributed, and decentralized Supercomputer.” The brainchild of Israeli computer scientist Ohad Asor, Xennet is essentially a free-market alternative to AWS that sounds a lot like the marriage of Bitcoin and SETI@home.
In this episode, the Radio Free HPC team discusses the new TPCx-HS benchmark for Big Data. Designed to assess a broad range of system topologies and implementation methodologies, TPCx-HS is the industry’s first objective specification enabling measurement of both hardware and software, including the Hadoop Runtime, Hadoop Filesystem API-compatible systems, and MapReduce layers.
“TPCx-HS is a major achievement in two distinct arenas,” said Raghunath Nambiar, chairman of the TPCx-HS committee and a distinguished engineer at Cisco. “TPCx-HS is the first vendor-neutral benchmark focused on big data systems – which have become a critical part of the enterprise IT ecosystem. Secondly, TPCx-HS is the first Express-class benchmark issued by the TPC. Express-class benchmarks are being developed in response to overwhelming demand for a turnkey alternative to enterprise-class benchmarks, which have distinct advantages but are also substantially more time-intensive and costly to run.”
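Under the hood, TPCx-HS wraps a TeraSort-style workload: generate a data set at a chosen scale factor, sort it, validate the output, and fold the elapsed time into a throughput metric. Here is a minimal Python sketch of that shape; the hs_* functions are toy stand-ins for the benchmark’s HSGen, HSSort, and HSValidate phases, and the metric printed at the end is a simplification, not the official HSph@SF formula from the spec:

```python
import random
import time

def hs_gen(n):
    """Toy stand-in for HSGen: produce n random records."""
    return [random.random() for _ in range(n)]

def hs_sort(records):
    """Toy stand-in for HSSort: sort the full data set."""
    return sorted(records)

def hs_validate(records):
    """Toy stand-in for HSValidate: confirm global ordering."""
    assert all(a <= b for a, b in zip(records, records[1:]))

scale_factor = 1                # TPCx-HS scales in whole terabytes; toy value here
start = time.time()
hs_validate(hs_sort(hs_gen(1_000_000)))
elapsed_h = (time.time() - start) / 3600

# Simplified throughput-style metric: scale factor per elapsed hour.
print(f"throughput ~ {scale_factor / elapsed_h:.1f} SF/hour")
```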
In this episode, the Radio Free HPC team discusses the concept of offloading computation to networked devices such as storage controllers. During a recent analyst call with Dan Olds, Mellanox described this technology as a potential growth area. Henry wants to know more about the interface to such an environment before he renders an opinion, while Rich notes that companies like Solarflare have been doing this at the NIC level for several years now.
In this episode, the Radio Free HPC team looks at Henry Newman’s recent straw proposal for better resource management for Linux in HPC. His Enterprise Storage Forum column on this topic got Slashdotted recently, and the question remains: does Linux want to be more like the mainframe? Rich and Dan take him to task to learn more.
“With next-generation technology like non-volatile memories and PCIe SSDs, there are going to be more resources in addition to the CPU that need to be scheduled to make sure everything fits in memory and does not overflow. I think the time has come for Linux – and likely other operating systems – to develop a more robust framework that can address the needs of future hardware and meet the requirements for scheduling resources. This framework is not going to be easy to develop, but it is needed by everything from databases and MapReduce to simple web queries.”
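To make the idea concrete, here is a minimal sketch of what multi-resource admission control might look like, treating non-volatile memory capacity and SSD bandwidth as first-class schedulable resources alongside CPU and DRAM. All names and figures are hypothetical illustrations, not part of Henry’s proposal:

```python
from dataclasses import dataclass

@dataclass
class Resources:
    """Hypothetical resource vector for one node: the point is that
    the scheduler tracks more dimensions than just CPU."""
    cpu_cores: float
    dram_gb: float
    nvm_gb: float        # hypothetical non-volatile memory pool
    ssd_gbps: float      # hypothetical PCIe SSD bandwidth budget

    def fits(self, req: "Resources") -> bool:
        """Admit a job only if every dimension fits."""
        return (req.cpu_cores <= self.cpu_cores and
                req.dram_gb <= self.dram_gb and
                req.nvm_gb <= self.nvm_gb and
                req.ssd_gbps <= self.ssd_gbps)

    def claim(self, req: "Resources") -> None:
        """Deduct an admitted job's share from the node."""
        self.cpu_cores -= req.cpu_cores
        self.dram_gb -= req.dram_gb
        self.nvm_gb -= req.nvm_gb
        self.ssd_gbps -= req.ssd_gbps

node = Resources(cpu_cores=32, dram_gb=256, nvm_gb=1024, ssd_gbps=6.0)
job = Resources(cpu_cores=8, dram_gb=64, nvm_gb=200, ssd_gbps=2.5)

if node.fits(job):
    node.claim(job)   # schedule it; otherwise queue until resources free up
```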
- Intel revealed more about their upcoming Knights Landing processor, including the new Omni Scale Fabric.
- Mellanox rolled out the world’s first 100Gb/s EDR InfiniBand switch as well as the HPC-X Scalable Software Toolkit.
- ARM64 made its debut on the show floor, and it seems to be best suited for accelerated systems.
- Team South Africa won the Student Cluster Competition for the second year in a row, while Team EPCC set a LINPACK record of 10.14 Teraflops.
In this episode, the Radio Free HPC team looks at a couple of techie books for summer reading.
- Flash Boys is about a small group of Wall Street guys who figure out that the U.S. stock market has been rigged for the benefit of insiders and that, post–financial crisis, the markets have become not more free but less, and more controlled by the big Wall Street banks. Working at different firms, they come to this realization separately; but after they discover one another, the flash boys band together and set out to reform the financial markets.
- Daemon. When a designer of computer games dies, he leaves behind a program that unravels the Internet’s interconnected world. It corrupts, kills, and runs independent of human control. It’s up to Detective Peter Sebeck to wrest the world from the malevolent virtual enemy before its ultimate purpose is realized: to dismantle society and bring about a new world order.
The third HPCAC-ISC Student Cluster Competition, featured at ISC’14, is an opportunity to showcase student expertise in a friendly yet spirited competition. The competition will feature small teams that compete to demonstrate the incredible capabilities of state-of-the-art high-performance cluster hardware and software. In a real-time challenge, teams of six undergraduate and/or high school students will build a small cluster of their own design on the ISC exhibit floor and race to demonstrate the greatest performance across a series of benchmarks and applications. The students will have a unique opportunity to learn, experience, and demonstrate how high-performance computing influences our world and day-to-day learning. Held in collaboration between the HPC Advisory Council and ISC, the Student Cluster Competition is designed to introduce the next generation of students to the high performance computing world and community.
In this episode, the Radio Free HPC team looks at a grab bag of technical news items for the week of May 11, 2014:
- A landmark ruling by the European Court of Justice (ECJ) that an individual can demand that “irrelevant or outdated” information be deleted from Google search results.
- A new Bull supercomputer coming to DKRZ with a 45 Petabyte filesystem based on Lustre and ClusterStor technology.
- The Call for Submissions by the Graph500 and the Green Graph500.
- Dan’s visit to Sun Yat-sen University in Guangzhou, China to see Tianhe-2, the world’s fastest supercomputer.
In a nutshell, Henry thinks that the notion of SSDs replacing spinning disks in the datacenter is built on several flawed assumptions regarding flash storage.
- First, some assume that the price of MLC NAND flash will continue to decrease at a rapid and predictable rate that will make it competitive with HDDs for bandwidth, and nearly for capacity, by 2014 or 2015. This downward trend, it is assumed, will make flash a viable alternative for large storage and as a memory or “buffer” to improve performance.
- Second, there is a general assumption that prices for bandwidth ($/GB/s) for SSDs are much lower than for HDDs, and that enterprises will measure costs in these terms instead of capacity (see the worked example after this list).
- Third, there is no distinction made between flash in general, such as consumer SSDs, and enterprise storage SSDs. It is assumed that MLC NAND will not only drop in price ($/GB) but also increase in density, enabling larger-capacity drives.
- Fourth, it is assumed that the quality of MLC NAND will either remain constant or increase as prices decrease and densities increase, allowing it to improve not only performance, but also reliability and power consumption of the systems it is used in.
- Fifth, it is assumed that power consumption for SSDs is, or will shortly be, significantly lower than that of HDDs overall, on a per GB basis and on a per GB/s basis.
- Sixth, they assume disk performance will grow at a constant rate of about 20 percent per generation and improve no faster than that.
- Seventh, they assume file system data layout will not improve to allow better disk utilization.
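The second assumption hinges on which cost metric you use. A quick worked example in Python, using hypothetical 2014-era prices (none of these figures come from Henry’s column), shows how $/GB and $/GB/s can point in opposite directions:

```python
# Hypothetical list prices, chosen only to illustrate the two metrics.
ssd = {"price_usd": 1200.0, "capacity_gb": 800.0, "bandwidth_gbps": 2.0}
hdd = {"price_usd": 100.0, "capacity_gb": 4000.0, "bandwidth_gbps": 0.15}

for name, d in (("SSD", ssd), ("HDD", hdd)):
    per_gb = d["price_usd"] / d["capacity_gb"]      # capacity cost, $/GB
    per_bw = d["price_usd"] / d["bandwidth_gbps"]   # bandwidth cost, $ per GB/s
    print(f"{name}: ${per_gb:.3f}/GB, ${per_bw:.0f} per GB/s")

# SSD: $1.500/GB, $600 per GB/s
# HDD: $0.025/GB, $667 per GB/s
# On $/GB the HDD wins by 60x; on $/GB/s the SSD already edges ahead,
# which is exactly why the choice of metric drives the conclusion.
```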
According to Henry, most of these assumptions were made in early 2012. So far they have turned out to be partially true at best and wrong at worst. A big fan of the partially true, Dan pushes back as a matter of principle. Rich, on the other hand, has heard his share of SSD hype and thinks we need to look at the data and see what it has to say.