Hacker News | ymt123's comments


Thank you! Glances seems like a good netdata + htop replacement.


Glances also shows Docker containers.


Have you tried Sacred[1]? It doesn't solve the "infrastructure for deep learning" challenge, but it is helpful for understanding what experiments have been run and where a given model came from (including which version of the code and which parameters produced it).

[1] https://github.com/IDSIA/sacred
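To make the provenance idea concrete, here is a rough stdlib-only sketch of the kind of record Sacred keeps per run (code version, parameters, results). The `log_run` helper and file name are made up for illustration; this is not Sacred's actual API.

```python
import json
import subprocess
import time

def current_git_rev():
    # Best effort: record which version of the code produced this run.
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except Exception:
        return "unknown"

def log_run(params, result, path="runs.jsonl"):
    # Append one JSON record per experiment, Sacred-style provenance.
    record = {
        "time": time.time(),
        "code_version": current_git_rev(),
        "params": params,
        "result": result,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_run({"lr": 0.01, "epochs": 10}, {"accuracy": 0.93})
```

Sacred automates exactly this bookkeeping (plus config capture and reproducible randomness) via its `Experiment` decorators.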


It's great to see people talking about the infrastructure they use to manage their deep learning workloads.

One area where we've had trouble with other orchestration tools (e.g. Docker Swarm) is managing resources at any granularity finer than whole boxes. They are all good at managing CPU/RAM/disk, but we've had trouble expressing "give this task GPU 2". We had planned to try Mesos (given that we already run it for other things), but it sounds like maybe we should take a harder look at Kubernetes first.
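The "give this task GPU 2" problem amounts to treating individual GPUs, not whole boxes, as the schedulable unit. A toy allocator (all names hypothetical, purely to illustrate the granularity the comment is asking for) might look like:

```python
class GpuAllocator:
    """Toy scheduler: tracks which GPU on which host is free."""

    def __init__(self, hosts):
        # hosts: {"box1": 2} means box1 has GPUs 0 and 1
        self.free = {(h, i) for h, n in hosts.items() for i in range(n)}
        self.assigned = {}

    def acquire(self, task):
        # Hand out the lowest free (host, gpu_index) slot, or None.
        if not self.free:
            return None
        slot = sorted(self.free)[0]
        self.free.remove(slot)
        self.assigned[task] = slot
        return slot

    def release(self, task):
        self.free.add(self.assigned.pop(task))

alloc = GpuAllocator({"box1": 2})
print(alloc.acquire("train-job"))  # → ('box1', 0)
```

A real scheduler (Kubernetes device plugins, Mesos custom resources) adds the hard parts this sketch ignores: discovery, isolation, and failure handling.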


If I'm understanding correctly, that sounds similar to Pachyderm (http://www.pachyderm.io).


Pachyderm looks quite cool, but I think it lacks a quick-start and a simple way of running it locally.

I grabbed the repo, clicked through a few links in the docs, and hit a 404. I searched on Google and found a way of running it locally in a simple fashion, but that doesn't work with the new version. Then I followed the instructions and hit a Kubernetes problem with mapped paths, and the fix printed in the console doesn't work.

I understand this is a personal complaint, and others might not care at all about running it locally because it solves the big problems so well, but I just want to try it locally first.


It's not quite the same (since it doesn't compile down to a MapReduce job), but if you're mostly interested in the programming paradigm and its scalability, the Python API for Apache Spark might be a good alternative.
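For a flavour of the paradigm with plain Python builtins (the same map/reduce shape PySpark exposes on distributed data, though this is not Spark's API itself):

```python
from functools import reduce
from operator import add

data = range(1, 101)

# Spark-style pipeline: transform element-wise, then fold to one value.
# In PySpark this would be roughly:
#   sc.parallelize(data).map(lambda x: x * x).reduce(add)
squares = map(lambda x: x * x, data)
total = reduce(add, squares)
print(total)  # sum of squares 1..100
```

The point of Spark is that the same two-step shape runs unchanged over data too large for one machine.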


I think this was the problem [1] was targeting.

[1] http://engineeringblog.yelp.com/2016/01/dumb-init-an-init-fo...


One of many.

Which is why I think we might as well get systemd working in Docker...
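For context, the core job dumb-init (or systemd) does as PID 1 in a container is reaping orphaned children so they don't accumulate as zombies. A minimal illustrative sketch of that reaping loop in Python (not dumb-init's actual implementation, which is in C):

```python
import os

def reap_children():
    """Collect any exited child processes (what PID 1 must do)."""
    reaped = []
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break  # no child processes at all
        if pid == 0:
            break  # children exist, but none have exited yet
        reaped.append(pid)
    return reaped
```

A real init would run this in response to SIGCHLD and also forward signals like SIGTERM to its children.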


There are actually dockerized versions of many (maybe all) of the deep learning libraries, and the containers can take advantage of the GPU for training. You still have to install CUDA on the host (outside the Docker container), but then you can try out different deep learning libraries.

Libraries we've started from in my lab:

Caffe: https://hub.docker.com/r/kaixhin/caffe/
Torch: https://hub.docker.com/r/kaixhin/torch/
Theano: https://hub.docker.com/r/kaixhin/theano/

