In this podcast, the Radio Free HPC team looks at the announcements coming from the Google I/O conference. Of particular interest was Google's second-generation Tensor Processing Unit (TPU2).
After that, we do our Catch of the Week:
Shahin discusses some details about the new prototype of HPE’s The Machine. “The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over—or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.”
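A quick back-of-the-envelope check of HPE's Library of Congress comparison. The ~1 MB-per-book figure below is an assumption for illustration, not from HPE's announcement:

```python
# Sanity check: does 160 TB really work out to ~160 million books?
# Assumption (not from the source): an average digitized book is ~1 MB of text.
total_memory_tb = 160
bytes_total = total_memory_tb * 10**12   # 160 TB in decimal bytes
bytes_per_book = 10**6                   # ~1 MB per book (assumed)
books = bytes_total // bytes_per_book
print(books)                             # 160000000 -> about 160 million books
```

At ~1 MB per book, the quoted figure of roughly 160 million books follows directly, which matches "the Library of Congress five times over" for a collection of about 32 million books.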
In this podcast, the Radio Free HPC team looks at Volta, Nvidia’s new GPU architecture that delivers up to 5x the performance of its predecessor.
At the GPU Technology Conference, Nvidia CEO Jen-Hsun Huang introduced a lineup of new Volta-based AI supercomputers, including a powerful new version of the company's DGX-1 deep learning appliance; announced the Isaac robot-training simulator; unveiled the NVIDIA GPU Cloud platform, giving developers access to the latest optimized deep learning frameworks; and announced a partnership with Toyota to help build a new generation of autonomous vehicles.
Built with 21 billion transistors, the newly announced Volta V100 “delivers deep learning performance equal to 100 CPUs.” Representing an investment by NVIDIA of more than $3 billion, the processor is built “at the limits of photolithography,” Huang told the crowd.
In this podcast, the Radio Free HPC team reviews the results from the ASC17 Student Cluster Competition finals in Wuxi, China. In the end, Tsinghua University won the overall competition, beating 20 teams from around the world.
“As the world’s largest supercomputing competition, ASC17 received applications from 230 universities around the world, 20 of which got through to the final round held this week at the National Supercomputing Center in Wuxi after the qualifying rounds. During the final round, the university student teams were required to independently design a supercomputing system under the precondition of a limited 3000W power consumption. They also had to operate and optimize standard international benchmark tests and a variety of cutting-edge scientific and engineering applications including AI-based transport prediction, genetic assembly, and material science. Moreover, they were required to complete high-resolution maritime simulation on the world’s fastest supercomputer, Sunway TaihuLight.
The grand champion, team Tsinghua University, completed deep parallel optimization of the high-resolution maritime data simulation model MASNUM on TaihuLight, scaling the original program up to 10,000 cores and speeding it up by 392 times. This helped the Tsinghua University team win the e Prize award. MASNUM was nominated in 2016 for the Gordon Bell Prize, the top international prize in the supercomputing applications field.
Shahin likes a news story about AI-enhanced justice in the court systems, where algorithms help judges set bail by assessing the risks associated with the defendant.
Dan was having trouble with slow internet and two-factor authentication, which sounds like a vicious circle.
Rich notes that Andrew Klein from Backblaze will present at MSST on what they’ve learned about hard drives over the years including failure rates by model, and the ability to predict drive failure before it happens.
“The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of HPC for the United States and accelerating the development of a capable exascale computing ecosystem. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”
Recent milestones include:
PathForward will soon announce six awards to vendors to develop new technologies that will be instrumental in Exascale system development.
The ECP Industry Council met for the first time recently with C-level executives from industry to lay out application requirements for exascale systems. The end goal is to improve industrial competitiveness in the United States.
After that, we do our Catch of the Week:
Shahin is impressed by the new Wolfram Data Repository, a public resource that hosts an expanding collection of computable datasets, curated and structured to be suitable for immediate use in computation, visualization, analysis and more. Building on the Wolfram Data Framework and the Wolfram Language, the Wolfram Data Repository provides a uniform system for storing data and making it immediately computable and useful. With datasets of many types and from many sources, the Wolfram Data Repository is built to be a global resource for public data and data-backed publication.
Henry reminds us to always tug on the front panel of an ATM before using it. “Once you understand how easy and common it is for thieves to attach “skimming” devices to ATMs and other machines that accept debit and credit cards, it’s difficult not to closely inspect and even tug on the machines before using them.”
Matthew O’Keefe is the Program Chair of the MSST Conference
In this podcast, the Radio Free HPC team discusses the upcoming MSST Mass Storage Conference with Program Chair Matthew O’Keefe from Oracle. The conference takes place May 15-19 in Santa Clara, California.
Since the conference was founded by the leading national laboratories, MSST has been a venue for massive-scale storage system designers and implementers, storage architects, researchers, and vendors to share best practices and discuss building and securing the world’s largest storage systems for high-performance computing, web-scale systems, and enterprises.
In this podcast, the Radio Free HPC team looks at the week’s top stories:
Quantum Startup Rigetti Computing Raises $64 Million in Funding. Today Rigetti Computing, a leading quantum computing start-up, announced it has raised $64 million in Series A and B funding. “Quantum computing will enable people to tackle a whole new set of problems that were previously unsolvable,” said Chad Rigetti, founder and chief executive officer of Rigetti Computing. “This is the next generation of advanced computing technology. The potential to make a positive impact on humanity is enormous.”
Rex Computing has produced its processor chip, and the thing works. With a little help from some DARPA funding, the company taped out about a month ago, and the initial results are very encouraging.
In this podcast, Radio Free HPC looks at a recent report that the USA needs to take aggressive action to keep up with China in High Performance Computing. Produced by the NSA-DOE Technical Meeting on High Performance Computing, the report states that we need to change course now or the U.S. will lose leadership and not control its own future in HPC.
Microsoft, NVIDIA, and Ingrasys announced a new industry standard design to accelerate Artificial Intelligence in the next generation cloud. The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.
ARM comes to Azure. As another part of Project Olympus, Microsoft announced it is deploying a large number of ARM-based servers in Azure.
Springtime in Naples. AMD hopes to give Intel a run for its money with the new Zen-based Naples server platform. While server benchmarks aren’t out yet, the desktop Zen chips have shown impressive application performance for less money than Intel’s Core i7 chips.
Huawei Ranks Third Globally for 2016 Q4 Server Shipments. Gartner is out with surprising server market news showing Huawei at #3 in shipments for 4Q2016. The numbers don’t seem to jibe with what IDC says, but an 88 percent quarter-over-quarter jump in server sales is great news for Huawei.
IBM’s Atomic Storage. This week, IBM announced it has created the world’s smallest magnet using a single atom – and stored one bit of data on it. Currently, hard disk drives use about 100,000 atoms to store a single bit. The ability to read and write one bit on one atom creates new possibilities for developing significantly smaller and denser storage devices that could someday, for example, enable storing the entire iTunes library of 35 million songs on a device the size of a credit card.
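The figures quoted above can be sanity-checked with simple arithmetic. The ~4 MB-per-song size below is an assumption for illustration, not from IBM's announcement:

```python
# Density headroom implied by the article's figures:
atoms_per_bit_hdd = 100_000   # conventional hard drive (from the article)
atoms_per_bit_ibm = 1         # IBM's single-atom magnet
print(atoms_per_bit_hdd // atoms_per_bit_ibm)  # 100000x fewer atoms per bit

# Rough size of the iTunes example (song size is an assumption):
songs = 35_000_000
mb_per_song = 4               # ~4 MB per compressed song (assumed)
total_tb = songs * mb_per_song / 1_000_000
print(total_tb)               # 140.0 -> roughly 140 TB for the whole library
```

At an assumed ~4 MB per song, the full library would be on the order of 140 TB, which gives a feel for the areal density a credit-card-sized device would need.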
In this podcast, Radio Free HPC looks at a Startup called Storj, which will pay you to use your excess data capacity as cloud-based storage based on Blockchain technology.
“Our mission is to rethink cloud storage, to provide the security, privacy, and transparency it’s missing. That’s why we are building an open-source cloud platform that aims to fundamentally change the way people and devices own data.”
In this podcast, the Radio Free HPC team hosts Dan’s daughter Elizabeth. How did Dan get this way? We’re on a mission to find out, even as Elizabeth complains of the early onset of Curmudgeon’s Syndrome. Somehow she has turned out well, though, and has a great gig with an oil company in Washington, D.C., so we also get her take on what is going on in the nation’s capital.
Shahin and Henry just came back from the RSA Security Conference. While stories of hacker breaches are in the headlines every day, Shahin is alarmed that every company sounds the same when you talk to them at their booths.