GridCure

Big Data for Utilities

Data Engineer

$80k – $110k • 0.1% – 1.0%
About the Role:

As a data engineer at GridCure, your general responsibilities will include connecting the core components of our platform that let us interact with utility company data stores, helping build systems to better catalogue and analyze smart grid data, and creating features that automate launching analytical models as new clients come on board.

More specific to this role, you'll be in charge of building and managing ETL pipelines for batch and streaming data that comes to us from utilities all around the world. Familiarity with pipelines whose incoming data velocity varies significantly, with error handling and logging during processing (data is never consistently clean), and with containerization and autoscaling of services is key.

About Your Background:

We use cloud infrastructure, and we're looking for familiarity with the standard Amazon Web Services and Google Cloud offerings. Most of our clients use more traditional database systems to store their smart grid data, so familiarity with older-school system integrations is helpful. We want to hear about your enthusiasm for and experience with changing data formats, your stories about 'that one crazy file format we had to parse', and how you've built systems that scaled up to handle large data flows.

We also value experience in the utility space (bonus points if you can tell us the difference between AMI and AMR) and experience deploying production ETL pipelines to enterprise-scale customers that you've babysat for a little while. We're mostly a Python shop, so experience with Python is required.
