MetaRouter

Secure Streaming Data Routing

Sr. Data Engineer

$105k - $120k • No equity
In this role, you will serve primarily as a product engineer on our “Enterprise” (private-cloud) platform, helping the team architect, plan, develop, deploy, and maintain new and existing data-engineering-related features.

As features are shipped and adopted by customers, you will also be expected to provide implementation support. The MetaRouter platform deals in unbounded data sets, and strong experience with ETL in the streaming and micro-batching context is important.

You will be operating on Dockerized Golang and Python projects within Kubernetes environments running on GCP services. You will be building product features that may utilize a variety of new and established streaming and micro-batching data tools, such as Google DataStream, Apache Kafka, Apache Beam, Apache Spark, Hadoop, and many others.

Responsibilities:

- Engage in technical leadership and strategy to improve the whole lifecycle of the Enterprise product.
- Serve as a primary data engineer and product developer on ETL-related features.
- Maintain features once they are live by measuring and monitoring availability, latency, scalability, and health.
- Participate in Agile development activities such as system design consultation, sprint planning, requirements gathering, capacity planning, and launch reviews.
- Write and maintain excellent documentation, both internal and client-facing.

Required Background & Skills:

- Friendly attitude and strong motivation to see this product succeed and mature.
- 3+ years of experience building and maintaining large-scale ETL systems, preferably with open-source tooling.
- 2+ years of experience with Golang and either Python or Java.
- 1+ years of experience with Kubernetes and containerization.
- Experience with the GCP ecosystem.
- Experience working with unbounded data sets, inserting data from multiple schemas into a centralized system.
- Experience with data transport layers and message queues, such as Kafka, PubSub, and Kinesis.
- Experience with ETL tools such as Spark, Beam, DataFlow, Flink, & Hadoop.
- Experience connecting, cleaning, and maintaining complex data sets in transit and at rest.
- Experience with common data warehousing tools that leverage SQL-esque interfaces, such as Amazon Redshift and Google BigQuery.

Bonus Round:

- Experience crafting, maintaining, and scaling machine learning algorithms; if not practical experience, at least a strong understanding of the field.
- Experience with AWS, Azure, and other cloud providers, as well as Helm and Terraform.
- Experience working with Analytics.js or similar client-side user-behavior-tracking systems.
