Senior Data Engineer – Remote Cloud & ML Infrastructure Expert
Join a globally recognized broadcasting enterprise as a Senior Data Engineer. You will be instrumental in developing high-performance data infrastructure and deploying scalable machine learning systems in production. This remote opportunity empowers you to architect cloud-native solutions, collaborate with data scientists, and shape intelligent platforms for millions of users worldwide.
Key Responsibilities
- Design and develop scalable, versioned data pipelines to support machine learning workflows.
- Provide expert guidance to data science teams, focusing on deploying ML models into robust, high-availability production environments.
- Build microservices for serving machine learning models via REST APIs, including health checks and monitoring hooks.
- Ensure deployment readiness of services in cloud-native environments, meeting SLAs for uptime and performance.
- Automate deployments with Infrastructure-as-Code (Terraform) within the GCP ecosystem.
- Optimize pipeline performance and manage data quality and lineage through tools like Airflow and MLflow.
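As an illustration of the model-serving responsibility above, here is a minimal sketch of a REST microservice with a health check, using only the Python standard library. The endpoint paths, model version string, and scoring logic are hypothetical placeholders, not part of the job description; a production service would serve a real model behind the same interface.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def health_payload(model_version: str) -> dict:
    """Payload returned by the /health endpoint, used by monitoring hooks."""
    return {"status": "ok", "model_version": model_version}


def predict(features: list) -> dict:
    """Placeholder scoring: averages the features. A real service would
    invoke the loaded ML model here."""
    return {"score": sum(features) / len(features)}


class ModelHandler(BaseHTTPRequestHandler):
    """Routes /health (GET) and /predict (POST) to the functions above."""

    def do_GET(self):
        if self.path == "/health":
            self._send(200, health_payload("v1"))
        else:
            self._send(404, {"error": "not found"})

    def do_POST(self):
        if self.path == "/predict":
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length) or b"{}")
            self._send(200, predict(body.get("features", [0.0])))
        else:
            self._send(404, {"error": "not found"})

    def _send(self, code: int, payload: dict) -> None:
        data = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


# To run locally:
#   HTTPServer(("0.0.0.0", 8080), ModelHandler).serve_forever()
```

In practice this pattern is usually built on a web framework and packaged in a Docker image, with the `/health` route wired to Kubernetes liveness and readiness probes.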
Required Skills
- 5+ years of professional experience in Data Engineering or Software Engineering roles.
- Advanced programming skills in Python and SQL; exposure to PySpark for big data processing.
- Proven experience with Google Cloud Platform (GCP) services including BigQuery, Bigtable, and Cloud Functions.
- Strong hands-on experience with Docker, Kubernetes, and GitLab CI/CD for container orchestration and automation.
- Solid understanding of data pipeline orchestration using Apache Airflow.
- Familiarity with machine learning lifecycle tools such as MLflow, including model tracking and deployment.
- Competency in designing and managing RESTful APIs for model serving.
- Strong problem-solving skills, proactive communication, and the ability to work cross-functionally in distributed teams.
Nice to Have
- Experience with other public cloud platforms (AWS, Azure).
- Exposure to Terraform for provisioning infrastructure in declarative code.
- Knowledge of data versioning, data contracts, and schema evolution practices.
- Prior work on media, telecom, or high-traffic B2C systems.
- Contributions to open-source projects in the data engineering or ML infrastructure space.
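To sketch what the data-contract and schema-evolution practices mentioned above can look like, here is a minimal contract check in plain Python. The table name, field names, and types are hypothetical examples; real contracts are typically expressed in a schema registry or a tool-specific format rather than hand-rolled code.

```python
# Hypothetical contract for a viewing-events table: required fields and types.
CONTRACT = {"user_id": int, "country": str, "watch_time_s": float}


def validate(record: dict, contract: dict = CONTRACT) -> list:
    """Return a list of contract violations; an empty list means the
    record conforms to the contract."""
    errors = []
    for field, expected_type in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors
```

Checks like this are usually enforced at pipeline boundaries (e.g. as an Airflow task before loading into BigQuery), so that upstream schema changes fail fast instead of silently corrupting downstream tables.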
Why Join Us
- Work on cutting-edge, high-impact systems in a multinational broadcasting company reaching millions daily.
- Collaborate with talented engineers and data scientists in a dynamic, fast-paced environment.
- Enjoy the freedom of a 100% remote role with flexible hours and global reach.
- Gain access to enterprise-grade GCP environments, production-level ML pipelines, and modern DevOps tooling.
This is more than a typical data engineering role: it's your chance to lead in production-scale ML deployment, shape intelligent infrastructure, and elevate global broadcasting technology.