Jupyter's core design values make it a highly viable option for NERSC. Its modular design gives us a way to expand the service and make it more responsive to the needs of our users. We take advantage of this modularity to preserve the core Jupyter user experience and functionality, while adding enhancements that let users take advantage of HPC resources more easily; we have leveraged this heavily with JupyterHub in our Phase 3 deployment. JupyterLab is built on an extension mechanism that allows us to customize virtually any aspect of the framework, since "everything is an extension." We have developed tools and integrated them into the JupyterLab deployment on Cori itself to enable capabilities that are particularly important for HPC centers:
Filesystem Browsing: HPC centers often have large shared filesystems with complex and sprawling folder hierarchies. To help users manage their relevant points of entry, we introduced a new extension, jupyterlab-favorites [ref], for bookmarking commonly used files and directories; we can also use it to expose commonly used folders to all users. We also created jupyterlab-recents [ref], an extension for accessing recently visited directories and files. These extensions addressed a major source of trouble tickets: users getting "lost" on the NERSC global file system in JupyterLab.
Scaling Compute: We support the Dask distributed computing framework as the recommended way to scale tasks through Jupyter, through a combination of worked example scripts and notebooks as well as Dask JupyterLab integrations. Dask worker processes run on compute nodes, letting users farm out notebook code to a set of remote workers and scale up their computations.
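As a rough illustration of this pattern, the following sketch assumes a Dask scheduler is already running on compute nodes and maps a function over its workers from the notebook; the scheduler address and the simulate function are placeholders rather than details of our deployment.

```python
# Minimal sketch: connect a notebook to an already-running Dask scheduler and
# farm work out to its remote workers. The scheduler address and simulate()
# are illustrative placeholders.
from dask.distributed import Client
import numpy as np

client = Client("tcp://scheduler-node:8786")  # hypothetical scheduler address

def simulate(seed):
    # Stand-in for an expensive per-task computation
    rng = np.random.default_rng(seed)
    return rng.standard_normal(1_000_000).mean()

futures = client.map(simulate, range(100))  # farm out 100 tasks to the workers
results = client.gather(futures)            # pull the results back into the notebook
```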
HPC systems often use a batch computing model, where users submit jobs to an underlying batch scheduler that manages and schedules which tasks are run based on priority and node availability. We use the Slurm Workload Manager at NERSC to manage batch compute tasks. We have developed a JupyterLab Slurm extension [ref] that allows users to submit and monitor jobs directly from JupyterLab, thus serving as a common interface to both the interactive and batch components of a workflow.
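The sketch below shows, in plain Python, the kind of operations the extension wraps: submitting a batch script with sbatch and polling its state with squeue. It is not the extension's implementation, and the job script name is a placeholder.

```python
# Illustrative sketch (not the JupyterLab Slurm extension itself): submit and
# monitor a Slurm batch job using the standard sbatch/squeue command-line tools.
import subprocess

def submit(job_script):
    # "sbatch --parsable" prints just the job ID (optionally followed by ";cluster")
    out = subprocess.run(["sbatch", "--parsable", job_script],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().split(";")[0]

def state(job_id):
    # "%T" prints the job state (PENDING, RUNNING, ...); the output is empty
    # once the job has left the queue
    out = subprocess.run(["squeue", "-j", job_id, "-h", "-o", "%T"],
                         capture_output=True, text=True)
    return out.stdout.strip()

job_id = submit("inversion_job.sh")  # hypothetical batch script
print(job_id, state(job_id))
```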
Reproducibility: A common pattern for team science involves a user developing a recipe or template notebook that captures a scientific workflow, alongside a software environment with the required packages and dependencies installed. To allow users to share notebooks that can run in a common environment, we have developed the clonenotebooks extension [ref], which lets users browse a gallery of notebooks (using the nbviewer tool [ref]) and clone them into their own home directory, along with a pointer to the pre-built software stack for that notebook. This allows users in a collaboration to reproduce a given workflow and then tweak or modify it as needed with their own data and parameters.
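One common way to express such a pointer in Jupyter is a kernel spec that names the Python interpreter of a shared, pre-built environment. The sketch below writes one by hand with a hypothetical environment path; it illustrates the idea rather than the clonenotebooks mechanism itself.

```python
# Sketch: register a shared, pre-built environment as a named Jupyter kernel so
# that a cloned notebook resolves to the same software stack. The environment
# path and kernel name are hypothetical.
import json
from pathlib import Path

kernel_dir = Path.home() / ".local/share/jupyter/kernels/shared-geophysics"
kernel_dir.mkdir(parents=True, exist_ok=True)

kernel_spec = {
    "argv": ["/global/common/software/myproject/env/bin/python",
             "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Shared geophysics environment",
    "language": "python",
}
(kernel_dir / "kernel.json").write_text(json.dumps(kernel_spec, indent=2))
```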
We also have experimental support for running Jupyter through Docker containers [ref: UDA] that contain the user's software environment and notebook-based analyses on NERSC HPC resources, thus enabling a fully packaged, reproducible workflow.
Other widgets and tools: Jupyter notebooks can also support custom user interaction and visualization elements through the use of "Jupyter Widgets". In our interactions with scientists, we have worked on a suite of front-end tools to help manage interactions with data, including on-demand image slicers and data viewers, file selection tools, and plotting libraries.
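As a minimal example of such a tool, the sketch below uses ipywidgets and Matplotlib to build a simple on-demand image slicer inside a notebook; the random volume is a stand-in for real data.

```python
# Minimal sketch of a notebook image slicer built with Jupyter Widgets
# (ipywidgets) and Matplotlib; the random volume stands in for real image data.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider

volume = np.random.rand(64, 128, 128)  # placeholder 3D dataset

def show_slice(z=0):
    plt.imshow(volume[z], cmap="viridis")
    plt.title(f"slice {z}")
    plt.show()

# Renders a slider in the notebook; moving it redraws the selected slice.
interact(show_slice, z=IntSlider(min=0, max=volume.shape[0] - 1, step=1, value=0))
```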
We have also integrated other existing Jupyter extensions from the community into our setup, including resource monitoring tools (to ensure users do not exceed CPU or memory limits on a shared system), git integration for code management, and nbdime to compare differences between notebook versions, among others.

Use Cases

We have noticed some common patterns in our engagements with scientific users who use Jupyter for their computational workflows on NERSC systems. At the highest level, there is a need to combine exploration of very large datasets with computational and analytical capabilities. Crucially, the scale of data or compute (or both) required by these workflows typically exceeds the capacity of users' own machines, and users need a friendly way to drive these large-scale workflows interactively.
We often see a two-phase approach, where the user performs some local notebook development and then runs the notebook on machines like those at NERSC against production data and compute pipelines. It is important to be able to move seamlessly between these modes, so our approach aims to ensure that a user can take a notebook and its associated environment over to our systems with minimal effort and a consistent user experience.
As an example, we describe a use case [ref: Heagy et al.] applying geophysical simulations and inversions to image the subsurface. This involved running 1000 1D inversions, each of which produces a layered model of subsurface conductivity; these are then stitched together to create a 3D model. The goal of this particular survey was to understand why the Murray River in Australia was becoming more saline. The work required running simulations, data analysis, and machine learning (ML) on HPC systems, and the outputs of these runs needed to be visualized and queried interactively. The initial workflow was developed in the user's local laptop environment and needed to be scaled up at NERSC.
In practice this involves running Jupyter at NERSC in a Docker container with a pre-defined, reproducible software environment. Parallel computing workers are launched on Cori from Jupyter with the Dask-jobqueue library. Workers can be scaled up or down on demand. The SimPEG inversion notebook farms out parallel tasks to Dask; the results of these parallel runs are pulled back into the notebook and visualized. A large batch of simulations is then run to generate data for a machine learning application.
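A sketch of this launch pattern with Dask-jobqueue is shown below; the queue, resource, and walltime values are illustrative rather than the actual settings used in this workflow.

```python
# Sketch: launch Dask workers as Slurm batch jobs with dask-jobqueue and connect
# the notebook to them; queue, core, memory, and walltime values are illustrative.
from dask_jobqueue import SLURMCluster
from dask.distributed import Client

cluster = SLURMCluster(
    queue="regular",      # hypothetical Slurm partition
    cores=32,             # cores per worker job
    memory="64GB",        # memory per worker job
    walltime="00:30:00",
)
cluster.scale(jobs=4)     # request four worker jobs; scale up or down on demand
client = Client(cluster)  # the notebook now farms tasks out to these workers
```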