
Machine Learning Deployment Platform

Machine Learning Engineer (Product)


Following a year of rapid growth in our open-source ML projects and of winning top-tier customers across sectors to support their strategic AI platform initiatives, Seldon is expanding! We are looking for a Machine Learning Engineer to join our Product Engineering function and help us build the future of production machine learning.

About Seldon
Seldon is a London-based scale-up that builds open-source and enterprise machine learning frameworks powering massive-scale deployments of production AI systems. Our open-source frameworks have over 2,000,000 installations and underpin our enterprise product, Seldon Deploy, which is used by leading global organisations across industries such as automotive, pharma, and finance.

About the role
Your role at Seldon will primarily involve:
Building and extending production machine learning systems at scale based on our open source and enterprise products
Working with our R&D and Product teams to integrate data science functionality into product features
Contributing to our open source projects to extend their functionality
Architecting solutions and optimising the performance of critical industry machine learning systems
Identifying & documenting best practices for ML Engineering
Contributing to global technology conferences
Growing within a scaling startup, crafting a role of your own 🚀

Required skills:
A degree or higher academic qualification in a scientific or engineering subject.
Strong computer science foundations.
Strong understanding of the data science lifecycle.
Strong system architecture knowledge and experience.
Familiarity with Linux-based development.
Experience architecting and applying technology to solve real-world challenges.
Experience delivering production-level client-facing projects.

Nice-to-have:
Experience with Kubernetes and the ecosystem of Cloud Native tools.
Experience maintaining / deploying machine learning models in production.

Benefits:
Share options to align you with the long-term success of the company.
Access to discounted lunches, gyms, shopping and cinema tickets.
Healthcare benefits.
Tier 2 visa support.
Flexible work-from-home policy.
Cycle To Work Scheme.

About our tech stack
Some of our high-profile technical projects:
We are core authors and maintainers of Seldon Core, the most popular open-source model serving solution in the cloud-native (Kubernetes) ecosystem
We built and maintain Alibi, our black-box model explainability library (see the illustrative sketch after this list)
We are co-founders of the KFServing project and collaborate with Microsoft, Google, IBM, and others on extending it
We are core contributors to the Kubeflow project and meet weekly with Google, Microsoft, Red Hat, and others across several workstreams
We are part of the SIG-MLOps Kubernetes open source working group, where we contribute through examples and prototypes around ML serving
We run the largest TensorFlow meetup in London
And much more 🚀
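
For a flavour of working on Alibi, here is a minimal, purely illustrative sketch of explaining a single prediction; the fitted classifier clf, training matrix X_train, and feature_names list are assumed for the example and are not part of the role description:

# Illustrative sketch: explaining one prediction with Alibi's AnchorTabular explainer
from alibi.explainers import AnchorTabular

# Wrap the trained model's prediction function as a black box
explainer = AnchorTabular(clf.predict, feature_names=feature_names)
explainer.fit(X_train)

# The anchor is a human-readable rule that locks in ("anchors") the model's prediction
explanation = explainer.explain(X_train[0])
print(explanation.anchor)

The same black-box pattern extends to Alibi's text and image explainers and to its counterfactual methods.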

Some of the technologies we use in our day-to-day:
Go is our primary language for all things backend infrastructure, including our Kubernetes Operator and our new Go microservice orchestrator
Python is our primary language for machine learning; it powers our most popular Seldon Core Python microservice wrapper (see the sketch after this list), as well as our explainability toolbox Alibi
We leverage the Elastic Stack to provide full data provenance on inputs and outputs for thousands of models in production clusters
Metrics from our models are collected using Prometheus, with custom Grafana integrations for visualisation and monitoring
Our primary service mesh backend leverages the Envoy Proxy, fully integrated with Istio, but also with an option for Ambassador
We leverage gRPC and protobuf to standardise our schemas and achieve high processing speeds through complex inference graphs
We use React.js for all our enterprise user products and interfaces
We use Kubernetes and Docker to schedule and run our core cloud-native technology stack
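
As a rough sketch of how models typically reach production with the Seldon Core Python wrapper (class name, artifact path, and CLI flags below are illustrative, not prescriptive):

# MyModel.py -- minimal Seldon Core Python model wrapper (illustrative)
import joblib

class MyModel:
    def __init__(self):
        # Load a pre-trained model artifact baked into the container image
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Seldon Core calls this with the request payload as a numpy array
        return self.model.predict_proba(X)

Packaged with the seldon-core-microservice entrypoint (e.g. seldon-core-microservice MyModel --service-type MODEL), the class is exposed as a REST/gRPC microservice that can be composed into an inference graph via a SeldonDeployment resource.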

Logistics
Our interview process normally consists of a phone interview, a coding task, and a 2-3 hour final interview (carried out virtually). We promise not to ask you any brain teasers or trick questions. We might design a system together on a whiteboard, the same way we often work together, but we won’t make you write code on one. The recruitment process takes around 3 weeks on average.

APPLY HERE

Location: London
Job type: Full-time
Visa sponsorship: Not Available
Seldon at a glance

Seldon focuses on open source, machine learning, artificial intelligence, and predictive analytics. The company has offices in London and a small team of 11-50 employees.

You can view their website at https://www.seldon.io or find them on Twitter, Facebook, and LinkedIn.