About the Role
We are seeking a Data Engineer with strong DevOps skills to join our growing engineering team. This hybrid role combines data engineering and DevOps practices to build, deploy, and maintain robust, scalable data infrastructure. You will play a key role in architecting and developing data pipelines, ensuring high availability, and enabling rapid, reliable deployment of data solutions.
Your Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines and data architectures.
- Implement monitoring, alerting, and automated recovery for data systems and pipelines.
- Build and maintain robust CI/CD workflows using GitLab CI/CD for both data engineering and infrastructure projects.
- Deploy and manage applications and services on Kubernetes using ArgoCD and Kustomize.
- Collaborate with data scientists, analysts, and other engineers to deliver high-quality data solutions.
- Automate infrastructure provisioning and configuration in a cloud-native environment.
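To give a flavor of the "automated recovery" work described above, here is a minimal sketch in Python. All names (`with_retry`, `send_alert`, `load_batch`) and the retry/backoff policy are hypothetical illustrations, not this team's actual tooling:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def send_alert(step_name, exc):
    # Placeholder alert hook: in practice this might fire a Prometheus
    # Alertmanager notification or page on-call.
    log.error("ALERT: step %s failed permanently: %s", step_name, exc)

def with_retry(max_attempts=3, backoff_seconds=1.0):
    """Retry a pipeline step with linear backoff, alerting if all attempts fail."""
    def decorator(step):
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return step(*args, **kwargs)
                except Exception as exc:
                    log.warning("step %s failed (attempt %d/%d): %s",
                                step.__name__, attempt, max_attempts, exc)
                    if attempt == max_attempts:
                        send_alert(step.__name__, exc)
                        raise
                    time.sleep(backoff_seconds * attempt)
        return wrapper
    return decorator

@with_retry(max_attempts=3, backoff_seconds=0.1)
def load_batch(rows):
    """Example pipeline step: loads a batch, failing on empty input."""
    if not rows:
        raise ValueError("empty batch")
    return len(rows)
```

In production this pattern is usually delegated to the orchestrator (e.g., Airflow task retries) rather than hand-rolled, but the shape — bounded retries, backoff, and an alert on final failure — is the same.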
Requirements
- 3+ years of experience in data engineering or a similar role.
- Proficient in at least one major cloud platform (GCP, AWS, or Azure).
- Solid programming skills in Python, SQL, or similar languages used for data processing.
- Strong proficiency in DevOps practices, especially around CI/CD, containerization, and orchestration.
- Solid experience in designing and building data pipelines (e.g., Airflow, Spark, dbt).
- Hands-on experience deploying and managing services on Kubernetes using ArgoCD and Kustomize.
- Familiarity with observability tools like Prometheus, Grafana, or OpenTelemetry.
- Experience with infrastructure-as-code tools such as Terraform or Helm is a plus.
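As a rough illustration of the Python-plus-SQL data-processing skills listed above, here is a small self-contained sketch using only the standard library's `sqlite3`; the table, columns, and function name are invented for the example:

```python
import sqlite3

def daily_event_counts(events):
    """Aggregate raw (user_id, day) event tuples into per-day counts via SQL."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user_id TEXT, day TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", events)
    rows = conn.execute(
        "SELECT day, COUNT(*) FROM events GROUP BY day ORDER BY day"
    ).fetchall()
    conn.close()
    return dict(rows)
```

Real pipelines would run comparable aggregations in Spark or dbt against a warehouse, but the core skill — expressing a transform in SQL and wiring it up from Python — is the same.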