- Work closely with the research team to iterate on novel unsupervised learning algorithms at petabyte scale.
- Work closely with the research team to implement and maintain a steady stream of deep learning experiments.
- Develop and maintain petabyte-scale data, training, and experiment-analysis infrastructure on EC2.
- Ability to choose which abstractions not to build, and the skills to build the right ones faster than most people think possible.
- Experience customizing neural networks with TensorFlow.
- Mathematical fluency.
- BS or higher in Computer Science, plus at least 1 year of experience working on data pipelines at scale.
Helm.ai is an algorithm- and AI-centric company working on the fundamental perception problem for autonomous navigation. Our ultimate goal is achieving full autonomy - that is, safe algorithmic navigation completely independent of any human input - for self-driving cars, drones, and consumer robots. We are productizing along the full spectrum of autonomy, using our state-of-the-art computer vision and semi-supervised deep learning technology.
We are currently a team of 7, expanding to 10-12. We are looking to hire:
* Researchers, with strong backgrounds in at least one of applied mathematics, machine learning, or deep learning.
* Engineers, with strong backgrounds in at least one of scaling data pipelines, deep learning, or general software engineering.