So here is the big-ticket item: how in the world do I write files to persistent storage from PySpark? There are tons of docs on RDD.saveAsTextFile() and things of that nature, but that only matters if you are dealing with RDDs or .csv files. What if you have a different set of needs? In this case, I wanted to visualize a decision forest I had built, but there are no good bindings that I could find between PySpark’s MLlib and Matplotlib (or similar) to visualize the decision forest.
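The key realization is that once the forest’s structure is collected to the driver as a string, persisting it needs no RDD save methods at all. A minimal sketch of that pattern — `debug_str` here is a placeholder standing in for what a trained MLlib `RandomForestModel.toDebugString()` would return:

```python
# Sketch: persist a model's structure with plain Python file I/O.
# `debug_str` is a stand-in for RandomForestModel.toDebugString() --
# on a real cluster you would call that on the driver first.
import os
import tempfile

debug_str = "Tree 0:\n  If (feature 0 <= 0.5)\n   Predict: 0.0\n"

out_path = os.path.join(tempfile.gettempdir(), "forest_structure.txt")
with open(out_path, "w") as f:
    f.write(debug_str)

# Confirm the file landed on disk.
with open(out_path) as f:
    print(f.read().splitlines()[0])
```

From there the text file can be pulled down and parsed into whatever plotting library you like.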
I’m not sure the title really nailed it, but we are going to talk about solving VERY big problems as fast as we possibly can using highly sophisticated techniques. This blog article is a high-level overview of what you want to set up, as opposed to the usual how to set it up. There are a ton of steps in the actual how-to, so I thought it best to provide an overview of the what instead of the how.
This one is more for me than for you. I often find a piece of software that needs just some magic environment variable set with some magic path that never seems to get properly configured during installation. Below is an example of how to get that path set, and then ensure it is always set when you log on to the server from then on out.
# These instructions are for bash; confirm which shell you are running
$ echo $SHELL
# Check the current value of your envvar
$ echo $CAFFE_ROOT
# Add the envvar to ~/.profile so it will load automatically when you login
$ echo "export CAFFE_ROOT=/home/username/caffe/" >> ~/.profile
# Load the new configuration
$ source ~/.profile
# Check the new envvar value
$ echo $CAFFE_ROOT
If you are not familiar with Microsoft COCO, you should be. It’s a treasure trove of data for your learning pleasure! There just happens to be one pesky problem with it: when attempting to find the files for training/testing, the annotation file that ships with MS COCO does not reference images by file name, but rather by image id. This sounds fine, except the data when you download it has a bunch of extra stuff in the file names! In this article we will go through how to get it ready.
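The fix boils down to building an id-to-file-name lookup, since the annotation JSON’s `images` array carries both fields. A minimal sketch, using a small inline stand-in for a real `instances_*.json`:

```python
import json

# Stand-in for a real COCO instances_*.json; actual files have the
# same "id" and "file_name" fields on each entry in "images".
coco_json = json.loads("""
{"images": [{"id": 42, "file_name": "COCO_train2014_000000000042.jpg"},
            {"id": 73, "file_name": "COCO_train2014_000000000073.jpg"}],
 "annotations": [{"image_id": 42, "category_id": 1}]}
""")

# Build an image_id -> file_name lookup so each annotation can be
# resolved to the file that actually sits on disk.
id_to_file = {img["id"]: img["file_name"] for img in coco_json["images"]}

for ann in coco_json["annotations"]:
    print(id_to_file[ann["image_id"]])  # -> COCO_train2014_000000000042.jpg
```

With that dictionary in hand, every `image_id` in the annotations resolves directly to a downloadable file name.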
In this article I’m going to go through how to set up CNTK with Visual Studio Code and take advantage of those Pascal GPUs I know everybody has these days. I will also do a brief overview of what CNTK and Visual Studio Code are and why they are so incredible for machine learning scientists.
So today we are going to do something really awesome: operationalize Keras with Azure Machine Learning. Why in the world would we want to do this? Well, we can configure deep neural nets and train them on GPUs. In fact, in this article, we will train a depth-2 neural network that outputs a linear prediction of the energy efficiency of buildings, and then operationalize that GPU-trained network on Azure Machine Learning for production API usage.
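For a sense of scale, the network in question is tiny. Here is a plain-Python sketch of what a depth-2 forward pass with a linear output looks like — random weights and stdlib only, illustrating the shape of the model rather than the actual Keras code:

```python
import random

random.seed(0)

def dense(x, w, b):
    # One fully connected layer: y_j = sum_i x_i * w[i][j] + b[j]
    return [sum(xi * wij for xi, wij in zip(x, col)) + bj
            for col, bj in zip(zip(*w), b)]

def relu(v):
    return [max(0.0, x) for x in v]

# 8 building features -> two hidden layers -> 1 linear output.
# Layer sizes are illustrative, not tuned.
sizes = [8, 16, 16, 1]
weights = [[[random.gauss(0, 0.1) for _ in range(o)] for _ in range(i)]
           for i, o in zip(sizes, sizes[1:])]
biases = [[0.0] * o for o in sizes[1:]]

def forward(x):
    h = x
    for layer, (w, b) in enumerate(zip(weights, biases)):
        h = dense(h, w, b)
        if layer < len(weights) - 1:  # no activation on the linear output
            h = relu(h)
    return h

print(forward([0.5] * 8))  # a single scalar efficiency prediction
```

In Keras the same architecture is two `Dense` layers with ReLU followed by a single linear `Dense` output; the training and GPU placement are what Keras and Azure ML handle for us.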
If you couldn’t make it to the event, rats! Here is the slide deck!
This is a quick and dirty post, because I run into this problem all the time and need a place to find the answer quickly.
Here is what happens:
ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory
Here is the answer (copy the cuDNN headers and libraries to the standard system locations):
drcrook@BigBen:/usr/local/cuda$ sudo cp include/cudnn.h /usr/include
drcrook@BigBen:/usr/local/cuda$ sudo cp lib64/libcudnn* /usr/lib/x86_64-linux-gnu/
drcrook@BigBen:/usr/local/cuda$ sudo chmod a+r /usr/lib/x86_64-linux-gnu/libcudnn*
If you are struggling to get your GPU initialized with Theano, TensorFlow, or really any deep learning framework, this is probably something you want to try.
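If the complaint is about libcudart itself rather than cuDNN, the dynamic loader usually just can’t see the CUDA lib directory. Exporting it onto LD_LIBRARY_PATH (and persisting that line in ~/.profile, same trick as the envvar post above) is the usual companion fix — this assumes the default /usr/local/cuda install prefix, so adjust if yours differs:

```shell
# Tell the dynamic loader where the CUDA runtime libraries live.
# /usr/local/cuda is the default install prefix; change it if needed.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
echo $LD_LIBRARY_PATH
```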
Occasionally I get asked questions in an interview, from students, or from some other place. When those questions come in written form, I like to reply to them on my blog for the rest of the world to see. Today’s theme is communication and communication technologies in a software engineering ecosystem. One item of interest is that our teams are constantly becoming more multi-cultural, multi-time-zone, distributed, and diverse. This has its benefits, but from a communications perspective it has its challenges as well.