The recent advances in machine learning and artificial intelligence are amazing! Yet, to have real value within a company, data scientists must be able to get their models off their laptops and deployed within the company's data pipelines and infrastructure. Those models must also scale to production-size data.
In this webinar, we will implement a deep learning model locally using the Intel® Nervana™ Neon™ framework. We will then take that model and deploy both its training and inference in a scalable manner to a production cluster with Pachyderm*. We will also learn how to update the production model online, track changes in our model and data, and explore our results.
What you can expect to learn:
- How to build a distributed, containerized data pipeline.
- How to version data.
- How to track the provenance of results.
- How to manage DL models in production.
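As a taste of what a containerized pipeline stage looks like, here is a minimal sketch of a Pachyderm pipeline specification. The repo name `training`, the image name, and the script path are hypothetical placeholders, not taken from the webinar itself:

```json
{
  "pipeline": {
    "name": "model-training"
  },
  "transform": {
    "image": "example/neon-train:latest",
    "cmd": ["python", "/code/train.py", "/pfs/training", "/pfs/out"]
  },
  "input": {
    "pfs": {
      "repo": "training",
      "glob": "/"
    }
  }
}
```

Pachyderm mounts the versioned input repo at `/pfs/training` inside the container and collects anything written to `/pfs/out` as the versioned output, which is how data versioning and provenance tracking fall out of the pipeline definition itself.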