
Version Control, Pipeline Management and Machine Orchestration for Deep Learning
Train on a hundred GPUs at the click of a button.
Run parallel hyperparameter sweeps & distributed training in minutes.
Version control with a full audit trail, from preprocessing to inference.
Run everything from TensorFlow & Keras to bash scripts on any Docker container.
Build models the way Uber, Google, & Netflix do. Standardize your workflow & retire home-made scripts.
Visualize & monitor every training session in the UI, from the command line, or via the API.
Setting up deep learning infrastructure and comparing models has traditionally been a slow process. With Valohai automating the DevOps work, your team is free to concentrate on building models. You'll put virtually unlimited processing power at the fingertips of your machine learning team, enabling them to train a model in minutes that would otherwise take a week.

Train your models in the cloud or on your own server farm with the click of a button, a call to the API, or a CLI one-liner. Valohai lets you use just the right amount of compute, maximizing your results while saving time & money.
You’ll be able to select a deployed model and trace back through its hyperparameters, training data, script version, associated cost, sibling models & even the team members involved in training it.
Everything in Valohai is built API-first for easy integration of your ML pipeline into your existing software pipeline, e.g. through Jenkins or any other continuous integration platform.
Valohai speeds up data scientists' work by 10X
Thanks to Valohai, there is a clear process. My features, predictions, metrics and models are always stored automatically and the platform helps me to communicate the results and iterate.
Head of Data

Our challenge is largely which model approach is going to work where, so it is a lot of data crunching, testing different models and picking the best one for the job. Valohai makes the process a lot easier for us thanks to parallel hyperparameter searches!
CEO

For someone like me, with an engineering background and familiarity with Docker images, it was very easy to just jump into Valohai. We just configured our testing environment in Docker images and then configured the tests themselves in the Valohai YAML file, imported the project and boom! We had 30 hyperparameter sweeps on our first try.
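The workflow described above pairs a Docker image with a declarative step definition. As a minimal sketch, a valohai.yaml step might look like the following; the step name, image tag, script name, and parameter are illustrative placeholders, not a real project's configuration.

```yaml
# Hypothetical valohai.yaml sketch: one training step with a sweepable parameter.
# The image, command, and parameter names below are illustrative assumptions.
- step:
    name: train-model
    image: tensorflow/tensorflow:2.12.0   # any Docker image your code runs in
    command: python train.py {parameters} # parameters are injected as CLI flags
    parameters:
      - name: learning_rate
        type: float
        default: 0.001
```

With a file like this in the repository, a parallel hyperparameter sweep over `learning_rate` can then be launched from the UI, the command line, or the API, as the quote describes.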
Data Scientist

The predictive models we’ve built have required significant work, and the transparency Valohai provides into how training runs are progressing has been as valuable as gold!
AI Lead