About
• Strong AWS skills: fundamental understanding of CloudTrail, CloudWatch, and S3, plus ML platform experience (Glue, Lambda, AWS console management)
• Experienced with both AWS and Redshift
• Knowledge of legacy ETL platforms is a plus
• Able to assess legacy pipelines and migrate them using Databricks/Python
• Experience monitoring AWS and managing Databricks on the AWS platform, including integration with Redshift
• 40% development/automation
• 60% management/administration: Databricks, CI/CD pipeline deployment, and provisioning of projects and pipelines
JD: We're looking for a Sr. Data Platform Engineer who thrives in a hybrid role (60% administration, 40% development/support) to help us scale our data and DataOps infrastructure. You'll work with cutting-edge technologies like Databricks, Apache Spark, Delta Lake, AWS CloudOps, and cloud security while supporting mission-critical data pipelines and integrations. If you're a hands-on engineer with strong Python skills, deep AWS experience, and a knack for solving complex data challenges, we want to hear from you.
Responsibilities:
- Design, develop, and maintain scalable ETL pipelines and integration frameworks.
- Administer and optimize Databricks and Apache Spark environments for data engineering workloads.
- Build and manage data workflows using AWS services such as Lambda, Glue, Redshift, SageMaker, and S3.
- Support and troubleshoot DataOps pipelines, ensuring reliability and performance across environments.
- Automate platform operations using Python, PySpark, and infrastructure-as-code tools.
- Collaborate with cross-functional teams to support data ingestion, transformation, and deployment.
- Provide technical leadership and mentorship to junior developers and third-party teams.
- Create and maintain technical documentation and training materials.
- Troubleshoot recurring issues and implement long-term resolutions.
Requirements:
- Bachelor's or Master's degree in Computer Science or a related field.
- 5+ years of experience in data engineering or platform administration.
- 3+ years of experience in integration framework development, with a strong emphasis on Databricks, AWS, and ETL.
- Strong AWS skills: fundamental understanding of CloudTrail, CloudWatch, and S3, plus ML platform experience (Glue, Lambda, AWS console management).
- Experience managing Databricks on the AWS platform and integrating it with Redshift.
- Strong programming skills in Python and PySpark.
- Expertise in Databricks, Apache Spark, and Delta Lake.
- Proficiency in AWS CloudOps and cloud security, including configuration, deployment, and monitoring.
- Strong SQL skills and hands-on experience with Amazon Redshift.
- Experience with ETL development, data transformation, and orchestration tools.
- Experience with Kafka for real-time data streaming and integration.
- Experience with Fivetran and dbt for data ingestion and transformation.
- Familiarity with DataOps practices and open-source data tooling.
- Experience with integration tools such as Apache Camel and MuleSoft.
- Understanding of RESTful APIs, message queuing, and event-driven architectures.
Nice-to-have skills
- AWS
- AWS Lambda
- PySpark
- Python
Work experience
- Data Engineer
- Fullstack
- DevOps
Languages
- English