About
This is a remote position. We are seeking a Data Engineer to join a cross-functional delivery team developing data lakes, analytical platforms, and real-time streaming solutions on modern Azure technologies. The role offers the opportunity to work in an Agile environment with direct access to product and business owners, shaping high-impact data products from concept to deployment.

Main Responsibilities
- Design, build, and monitor ETL pipelines using Azure and Spark technologies
- Implement scalable, cloud-native data processing workflows using PySpark
- Configure and operate core Azure services: Databricks, Azure Data Factory, Azure Data Lake Storage, and Azure Functions
- Collaborate closely with analysts, data scientists, and software engineers to deliver robust data solutions
- Translate business needs into reliable and secure data products
- Ensure data quality, governance, and performance best practices across solutions

Requirements
- At least 4 years of professional experience in data engineering
- Proven experience with large-scale data processing and transformation pipelines
- Hands-on knowledge of Azure or other major cloud platforms
- Solid coding skills in Python (especially pandas/NumPy) and SQL
- Familiarity with Git workflows in a collaborative development setting
- Fluency in Polish and English (minimum C1 level in English)

Nice to Have
- Experience with Linux/Bash scripting
- Familiarity with Docker or Kubernetes
- Domain experience in retail, financial services, energy, or the public sector

Benefits
- Solid, competitive salary
- Work in a multilingual, multinational, and multicultural environment on international projects
- Medical care
Languages
- English