About
enabling data scientists, analysts, and application teams to reliably develop, deploy, and operate machine learning solutions in production. You will help define platform standards, architectural patterns, and best practices across data engineering and MLOps.

Key Responsibilities

- Serve as a senior technical owner for core components of the Data Lakehouse and ML Platform, including data ingestion, feature pipelines, metadata, and orchestration.
- Architect and implement scalable ML Platform capabilities that support the full ML lifecycle: data preparation, feature engineering, model training, deployment, monitoring, and retraining.
- Partner closely with Data Science teams to operationalize machine learning, forecasting, and simulation models, ensuring reproducibility, reliability, and performance in production.
- Design and maintain MLOps pipelines and frameworks for model versioning, promotion, rollback, and monitoring across environments.
- Establish platform-level CI/CD standards for data and ML workloads, including automated testing, validation, and quality checks.
- Collaborate with the Digital Development team to expose data and ML capabilities via APIs and services that support customer-facing applications.
- Lead the design and delivery of Snowflake-based analytical and feature-ready data models to support BI and ML use cases.
- Define and enforce data, feature, and model governance standards, including lineage, metadata, access control, and auditability.
- Implement automated data and model quality assurance, drift detection, and operational monitoring to ensure platform reliability.
- Build and operate containerized data and ML services using Docker, ECS Fargate, and Infrastructure as Code (CDK).
- Act as a technical mentor to other engineers and influence architectural direction across the analytics and ML ecosystem.

Qualifications

Required
- Bachelor's degree in Computer Science or a related technical field, with 7+ years of experience building and operating data platforms in AWS.
- Demonstrated experience designing or contributing to a shared ML Platform or MLOps framework used by multiple teams.
- Deep expertise in Python and SQL, with a strong track record of production-grade data and ML systems.
- Strong hands-on experience with MLOps best practices, including:
  - Model lifecycle management and deployment patterns
  - Feature engineering and reusable feature pipelines
  - Experiment tracking, reproducibility, and model governance
  - Model performance monitoring and drift detection
- Advanced knowledge of AWS services including Glue, Lambda, ECS Fargate, and Apache Spark, with experience operating them at scale.
- Experience with Arrow-based data and streaming technologies (ADBC, Arrow-ODBC, PyArrow).
- Proficiency with Dagster (or similar orchestration platforms) for managing complex data and ML workflows.
- Strong experience delivering containerized, infrastructure-as-code solutions using Docker and CDK.
- Deep understanding of data warehousing, lakehouse, and ML-ready data architectures, including Delta Lake; Snowflake experience is a strong plus.
- Proven ability to influence standards, architecture, and best practices across engineering and data science teams.
- Excellent communication skills, with the ability to translate complex platform and ML concepts for technical and non-technical stakeholders.
Language Skills
- English
Note for Users
This job listing comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their website.