The Polar Research Coordination Network (polar-computing.org) aims to connect the Polar Science, Data, and High-Performance Computing (HPC) communities to bring computing methods and cyberinfrastructure deeper into the polar sciences. CU Boulder has strengths in each of these communities, and in this seminar and discussion five presenters shared their experiences working across disciplinary boundaries.
Panelists included:
- Mike Willis from CIRES, who talked about his experience using HPC to build high-resolution digital elevation models (DEMs) from satellite imagery,
- Mary Jo Brodzik from NSIDC who shared her experience with a large data reprocessing project,
- Jordan Powers from NCAR, who told the audience about the WRF-based Antarctic Mesoscale Prediction System (AMPS),
- Karl Rittger from NSIDC who presented work on measuring snow and ice in High Mountain Asia, and
- Pete Ruprecht from CU Research Computing who told us about the computing, storage, networking, and HPC training resources available on the CU campus.
There were many points of resonance among the polar scientists. For all of them, HPC enabled science that would otherwise have been impossible, from processing large volumes of data quickly to running highly detailed analyses.
However, using HPC resources is not for the faint of heart. There are significant hurdles to overcome (learning schedulers, architectures, software environments, etc.), so it is important for new HPC users to confirm that they actually need the increased resources. One panelist also pointed to local GPU computing as a possible intermediate step or alternative for some users.
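For readers who have not yet crossed the scheduler hurdle, the sketch below shows roughly what a first batch job submission looks like on a cluster managed by Slurm, a scheduler common on academic HPC systems. The job name, resource requests, module name, and processing script are all hypothetical placeholders, not a recipe for any particular system.

```python
#!/usr/bin/env python3
"""Minimal sketch of submitting a batch job to a Slurm scheduler from Python."""
import subprocess
import textwrap

# A small Slurm batch script: resource requests go in #SBATCH directives,
# followed by the commands the scheduler runs on the allocated node.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=dem-tile          # name shown in the queue
    #SBATCH --nodes=1                    # one compute node
    #SBATCH --ntasks=24                  # cores requested on that node
    #SBATCH --time=02:00:00              # wall-clock limit (HH:MM:SS)
    #SBATCH --output=dem-tile-%j.out     # log file, %j expands to the job ID

    module load python                   # load the cluster's Python module
    python process_tile.py --tile 42     # hypothetical processing step
    """)

with open("dem_tile.sbatch", "w") as f:
    f.write(job_script)

# sbatch queues the job and prints "Submitted batch job <id>"
result = subprocess.run(["sbatch", "dem_tile.sbatch"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```

Even this small example hints at the learning curve: resource requests, wall-clock limits, and module environments are all concepts that have no analogue on a desktop workstation.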
All panelists stressed that resources are available to help both new and experienced HPC users. For example, CU Research Computing provides weekly seminars and quarterly trainings on a range of topics. Personal advice and consulting services are available at a range of levels, from moving desktop workflows onto HPC resources to optimizing existing workflows. Research Computing staff can also help users scale up from a beginner allocation to a larger one, and can provide support for funding proposals.
Despite the enthusiasm that the polar scientists all shared for HPC in their research, they identified a range of bottlenecks. Common pain points included identifying the right HPC resources for particular tasks, modifying workflows for different computing architectures and environments (Miniconda saved one panelist), handling large data transfers (Globus was highly recommended; accidentally knocking out Internet access to two research buildings was not), understanding best practices, and even knowing the right questions to ask when seeking help.
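To give a concrete flavor of the data-transfer side, here is a minimal sketch using the Globus Python SDK (globus_sdk). The access token, endpoint UUIDs, paths, and label are placeholders; in practice the token comes from a Globus login or OAuth2 flow, and many users will simply use the Globus web interface or CLI instead.

```python
"""Minimal sketch of a managed data transfer with the globus_sdk package.

Assumes you already have a Globus transfer access token and the UUIDs of
the source and destination endpoints; all values below are placeholders.
"""
import globus_sdk

TRANSFER_TOKEN = "..."    # placeholder: your Globus transfer access token
SOURCE_ENDPOINT = "..."   # placeholder: UUID of the source endpoint
DEST_ENDPOINT = "..."     # placeholder: UUID of the destination endpoint

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN))

# Describe the transfer: checksums verify integrity, and Globus manages the
# task asynchronously, so an interrupted transfer resumes on its own.
tdata = globus_sdk.TransferData(tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
                                label="polar data reprocessing",
                                sync_level="checksum")
tdata.add_item("/scratch/dem_tiles/", "/archive/dem_tiles/", recursive=True)

task = tc.submit_transfer(tdata)
print("Submitted Globus transfer, task ID:", task["task_id"])
```

Handing the transfer off to a managed service like this, rather than pushing terabytes through an ad hoc copy, is exactly the kind of best practice the panelists wished they had known about sooner.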
Future events will build on this discussion to lower these roadblocks and increase collaboration between the polar, computing, and data communities. Thank you again to the panelists and attendees for a great discussion. For more information and updates, please see polar-computing.org.