- Work closely with the research team to iterate on novel unsupervised learning algorithms at petabyte scale.
- Work closely with the research team to implement and maintain a steady stream of deep learning experiments.
- Develop and maintain petabyte-scale data/training/experiment analysis infrastructure on EC2.
- Ability to choose which abstractions not to build, and the skills to build the right ones faster than most people think possible.
- Experience customizing neural networks with TensorFlow.
- Mathematical fluency.
- BS or higher in Computer Science and 1+ years of experience working on data pipelines at scale.
Helm.ai is a team of researchers and engineers building and productizing technology that unlocks the full potential of AI for autonomous driving. Our goal is to enable full-scale proliferation of L4/L5 autonomy, well beyond the currently limited market, which necessarily involves building technologies that operate completely independently of any human input (no teleoperation, no human annotation for mapping, and no human in the loop for training). Our approach leverages a novel combination of cutting-edge tools from applied mathematics and deep learning to create arbitrarily scalable learning pipelines, giving our technology performance advantages across the full spectrum of autonomy, including perception and sensor fusion, intent modeling, mapping, path planning, and control.