Scope of work
Provide engineering support for early-stage projects: help identify where data lives, how to gain access to it, and how to make it available and fit for use
Provide engineering support for in-progress projects: design, implement, and maintain data pipelines that feed proof-of-concept implementations
Provide engineering support for production systems: design, implement, and maintain data pipelines, model training and deployment workflows, and APIs
Implement data ingestion routines, both real-time and batch, using best practices in data modeling and ETL/ELT processes
Become an expert on cloud platform and on-premises data services relevant to data engineering
Perform testing, validation, and deployment to ensure the accuracy of data transformations and verify data quality
Monitor the big data tools and frameworks required to deliver the requested capabilities
Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field, or 2+ years of industry experience
At least 1 year of experience with Google Cloud Platform (BigQuery, Cloud Storage buckets, etc.)
2+ years of experience with demonstrated strength in ETL/ELT, data modeling, data warehouse technical architecture, infrastructure components, and reporting/analytic tools
2+ years of hands-on experience writing complex, highly optimized SQL queries across large data sets
Experience with Python programming is an advantage
Excellent communication and interpersonal skills
Quick learner who seeks out new challenges