
Measure customer engagement to drive financial results

Data Engineer

$85k – $105k • 0.01% – 0.03%
Apply now
To apply for a specific office location, copy and paste the corresponding link into your browser:

Austin, TX - jobs.lever.co/cerebri/756cc3d9-6b0a-4b8f-b61b-e45aea44dc93?lever-origin=appli…

Washington DC - jobs.lever.co/cerebri/5d019c0a-1a1c-4474-a3a5-fbbe21af7b3f?lever-origin=appli…

Toronto, Ontario - jobs.lever.co/cerebri/d3052bb8-59c9-4b4e-9e84-eb5bea2aa2a5?lever-origin=appli…


Cerebri AI is an advanced customer analytics company that uses state-of-the-art AI technologies, including reinforcement learning, to analyze customer touchpoints across multiple channels. Cerebri AI measures a customer's commitment to a brand or product at any point in time, expressed in monetary terms, and derives Best Actions that drive customer commitment and financial results.

Our Series A financing was led by M12 (formerly Microsoft Ventures). To date, the team has filed 11 patents pertaining to the Cerebri AI way. We now have 60 employees across three offices in Austin, Toronto, and Washington, DC. Over 80% of the staff are in technical roles in data science and software engineering. Our senior executives average 20+ years of experience selling and deploying software to enterprises worldwide. Cerebri AI is a proud Microsoft Partner and an active member of the Mastercard Start Path network.
To learn more, visit cerebriai.com.

"Cerebri AI was named a 2019 Cool Vendor in Artificial Intelligence for Customer Analytics by Gartner"

Role:
Design, develop and build out data pipelines to ingest data into our proprietary data structures, and be a key collaborator in the data discovery and exploratory analysis process during our client engagements.

Responsibilities:
Build pipelines to ingest complex data sets into Cerebri AI's proprietary data stores and maintain them for use in machine learning modeling.
Develop and maintain data ontologies for key market segments.
Collaborate with data scientists to perform exploratory data analysis, map data fields into proprietary data stores, and find signals in client data.
Collaborate with clients to develop pipeline infrastructure and to ask the right questions to gain a deep understanding of their data.
Write high-quality documentation for the discovery process and software projects.
Work equally well in a team environment and on your own.
Communicate complex ideas clearly with both team members and clients.
Travel up to 25%.

Qualifications:
At least one (1) year of experience designing and building data processing solutions and ETL pipelines for varied data formats, ideally at a company that leverages machine learning models.
At least two (2) years of experience with SQL, Python, Apache Spark, and PySpark.
Experience working directly with relational database structures and flat files.
Ability to write efficient database queries, functions, and views, including complex joins and the identification and development of custom indices.
Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, continuous integration and deployment, and operations.
Good verbal and written communication skills, with both technical and non-technical stakeholders.

Nice to Haves:
Experience in Java and/or Scala.
Experience with data management and processing tools such as Kafka, Elasticsearch, and Logstash.
Experience with NoSQL distributed databases such as Cassandra.
Experience in business intelligence visualization tools such as Grafana, Superset, Redash or Tableau.
Experience with Microsoft Azure or similar cloud computing solutions.
Master’s degree or higher in a relevant quantitative subject.
