As use cases for supercomputing expand beyond traditional modeling and simulation to include data analytics and artificial intelligence workflows, as user demographics correspondingly shift toward data science, and as new data science tools and libraries emerge in the broader software ecosystem, demand for richer interfaces to supercomputing continues to grow.
All users have expectations and opinions about how they interact with computers, and users of supercomputers are no different. These expectations are informed by requirements, previous experience (good and bad), and training. As the average supercomputer user shifts from the traditional Fortran-based modeling and simulation developer toward the data sciences, we have seen increasing demand for a new kind of interface to supercomputing: Jupyter notebooks.
High-performance computing (HPC) is about composing and concentrating resources for computation, data storage, and networking so as to provide users with performance far beyond what they can obtain from a laptop or desktop computer. What distinguishes supercomputing from other forms of HPC is that supercomputers tend to be purpose-built to solve very large, very complex problems, using hardware with unique capabilities and software designed to exploit those capabilities. For example, a climate simulation code may use MPI to distribute work across thousands of compute nodes joined by a high-speed interconnect, while a data scientist may use libraries like Dask or mpi4py to scale a Python analysis across many such nodes, often driving the computation interactively from a Jupyter notebook.
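As a minimal sketch of what that interactive scale-out can look like, the following code uses dask_jobqueue to request batch-scheduled Dask workers on a Slurm-managed system and drives them from a Python session such as a Jupyter notebook. The queue name, core counts, memory, and job counts are illustrative placeholders, not settings for any particular system.

```python
# A minimal sketch, assuming a Slurm-managed system with dask_jobqueue
# and dask.distributed installed. All resource values are placeholders.
from dask_jobqueue import SLURMCluster
from dask.distributed import Client
import dask.array as da

# Describe one batch job's worth of workers; dask_jobqueue submits
# these as ordinary Slurm jobs on our behalf.
cluster = SLURMCluster(
    queue="regular",      # hypothetical partition name
    cores=32,             # cores per batch job
    memory="64GB",        # memory per batch job
    walltime="00:30:00",
)
cluster.scale(jobs=4)     # request 4 jobs, i.e. ~128 cores total

# Connect this (notebook) session to the scheduler and its workers.
client = Client(cluster)

# An array too large for a laptop's memory (~80 GB of float64),
# partitioned into chunks that the workers process in parallel.
x = da.random.random((100_000, 100_000), chunks=(4_000, 4_000))
print(x.mean().compute())
```

Because the cluster object lives in the user's session, the same notebook that requests the compute nodes can also visualize the results, which is precisely the style of interaction driving demand for Jupyter on supercomputers.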