About
- Architect and design scalable, reliable data platforms and complex ETL/ELT and streaming workflows for the Databricks Lakehouse Platform (Delta Lake, Spark).
- Hands-On Development: Write, test, and optimize code in Python, PySpark, and SQL for data ingestion, transformation, and processing.
- DataOps & Automation: Implement CI/CD, monitoring, and automation (e.g., with Azure DevOps, DBX) for data pipelines.
- Stakeholder Collaboration: Work with BI developers, analysts, and business users to define requirements and deliver data-driven solutions.
- Performance Optimization: Tune Delta tables, Spark jobs, and SQL queries for maximum efficiency and scalability.
- GenAI Applications Development: Experience building GenAI applications is a big plus.

What You’re Looking For:
Join a fast-growing organization that thrives on innovation and collaboration. You’ll work alongside talented, motivated colleagues in a global environment, helping clients solve their most critical business challenges. At OZ, your contributions matter: you’ll have the chance to be a key player in our growth and success. If you’re driven, bold, and eager to push boundaries, we invite you to join a company where you can truly make a difference.

About Us:
OZ is a 28-year-old global technology consulting, services, and solutions leader specializing in creating business-focused solutions for our clients by leveraging disruptive digital technologies and innovation. OZ is committed to creating a continuum between work and life by allowing people to work remotely. We offer competitive compensation and a comprehensive benefits package. You’ll enjoy our work style within an incredible culture. We’ll give you the tools you need to succeed so you can grow and develop with us and become part of a team that lives by its core values.

Requirements:
- 8+ years of experience in data engineering, with strong hands-on expertise in Databricks and Apache Spark.
- Proven experience designing and implementing scalable ETL/ELT pipelines in cloud environments.
- Strong programming skills in Python and SQL; experience with PySpark required.
- Hands-on experience with Databricks Lakehouse, Delta Lake, and distributed data processing.
- Experience working with cloud platforms such as Microsoft Azure, AWS, or GCP (Azure preferred).
- Experience with CI/CD pipelines, Git, and DevOps practices for data engineering.
- Strong understanding of data architecture, data modeling, and performance optimization.
- Experience working with cross-functional teams to deliver enterprise data solutions.
- Ability to tackle complex data challenges while ensuring data quality and reliable delivery.

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience designing enterprise-scale data platforms and modern data architectures.
- Experience with data integration tools such as Azure Data Factory or similar platforms.
- Familiarity with cloud data warehouses such as Databricks, Snowflake, or Microsoft Fabric.
- Experience supporting analytics, reporting, or AI/ML workloads is highly desirable.
- Databricks, Azure, or other cloud certifications are preferred.
- Strong problem-solving, communication, and technical leadership skills.

Technical Proficiency in:
- Databricks, Apache Spark, PySpark, Delta Lake
- Python, SQL, Scala (preferred)
- Cloud platforms: Azure (preferred), AWS, or GCP
- Azure Data Factory, Kafka, and modern data integration tools
- Data warehousing: Databricks, Snowflake, or Microsoft Fabric
- DevOps tools: Git, Azure DevOps, CI/CD pipelines
- Data architecture, ETL/ELT design, and performance optimization
Language Skills
- English
Note for Users
This job listing comes from a TieTalent partner platform. Click “Apply Now” to submit your application directly on their website.