As supercomputing shifts toward accelerator hardware to sustain performance growth while Moore's Law wanes, GPUs take center stage for Jupyter in HPC. Jupyter has become a principal platform for GPU-powered AI and analytics workflows built on TensorFlow, PyTorch, and RAPIDS. Perlmutter, NERSC's first production system with a GPU partition, is arriving in 2021 with "built-in" Jupyter support from the vendor. System vendors recognize that Jupyter is a key component of the data/AI ecosystem and are motivated to engage with HPC centers to reach their users.
Probably the most important lesson we can share, especially with other HPC centers, is that building support for Jupyter within a supercomputing institution takes data, persuasion, and management that sees the benefits of expanded access through rich user interfaces like Jupyter. We were able to expand Jupyter support from a single shared node on Cori to four such nodes plus batch nodes, a change requiring consensus from many internal stakeholders, in part through plots like Figure \ref{952831} that management can digest and project forward. Users with a direct line to federal research agency stakeholders, such as program managers, are a potential source of external motivation and support.
HPC community building and collaboration.