Job Responsibilities
Support a community of data engineers and data scientists by understanding their problems, answering their questions, and helping them build solutions on the Data Platform.
Develop data pipelines on the big data platform using the following technology stack: Python 3, scripting (Bash/Python), Hadoop, Docker, Kubernetes, etc.
Build and monitor data pipeline architecture to manage big data and ETL processes into the data warehouse/data lake.
Develop data management policy, and provide support and QA for data management development.
Qualifications
A bachelor's degree in computer engineering, computer science, mathematics, statistics, or a related field. A master's or Ph.D. is a plus.
Understanding of the design and implementation of several of the following: Master & Reference Data Management, Metadata Management, Data Quality Management, Data Analytics, Data Modelling, and Data Exchange.
Knowledgeable in using databases (such as PostgreSQL, MySQL, BigQuery, Athena) and writing SQL queries.
Experience with ETL tools or data pipeline languages such as Python.
Experience building data pipelines with orchestration tools such as Apache Airflow, Dagster, or similar.
Familiarity with the model development/deployment life cycle (CI/CD, MLOps).
Industry experience building/owning end-to-end AI applications or data products is a big plus.
We Offer
Become a part of a dynamic, growing company that values your contributions and embraces new ideas.
Smart and fun teammates to collaborate with and learn from.
A great culture that allows you to experiment, learn from mistakes, and grow fast.
A competitive compensation package with generous benefits.
