So Jupyter is a great tool for experimental science. Running a Jupyter notebook server can be tricky, though, especially if you want to preserve all of the data stored in it. I have seen many strategies, but the one I like best is based on my “Micro Services for Data Science” approach. By decoupling data from compute, we can literally trash our Jupyter container and all of our data and notebooks still survive. So why not put it in a self-healing orchestrator and deploy it via Kubernetes :D.
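To make the idea concrete, here is a minimal sketch of what that decoupling might look like on Kubernetes: a Deployment runs the (stateless, disposable) notebook server, while notebooks and data live on a PersistentVolumeClaim that outlives any pod. The names, storage size, and image choice here are illustrative assumptions, not a prescribed setup.

```yaml
# Sketch only: resource names and sizes are hypothetical; adapt to your cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: notebook-data          # survives pod deletion/rescheduling
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter
  template:
    metadata:
      labels:
        app: jupyter
    spec:
      containers:
        - name: jupyter
          image: jupyter/scipy-notebook:latest
          ports:
            - containerPort: 8888
          volumeMounts:
            - name: data
              mountPath: /home/jovyan/work   # notebooks and data land here
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: notebook-data
```

If the pod dies, Kubernetes reschedules it and re-mounts the same claim, so the notebook server is disposable while the work it produced is not.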