This job offer is no longer available
About
Architects and delivers full‑stack analytics solutions spanning data ingestion, transformation, modeling, and visualization using cloud and big‑data technologies (Azure, Databricks, SQL, Spark). Builds resilient data pipelines and analytical datasets that support descriptive, diagnostic, and advanced analytics use cases, including data science and forecasting models. Writes complex SQL and Spark code optimized for scale, governance, and CI/CD deployment. Acts as a technical and analytical expert, independently tackling highly complex problems, leading functional initiatives, and translating raw data into trusted insights that improve the performance and decision‑making capability of the client's ecosystem.
Responsibilities:
• Provide Data Intelligence and Data Warehousing (DW) solutions and support by leveraging project standards and leading data platforms
• Design and operationalize data science workflows within Databricks, including exploratory analysis, feature engineering, and production-ready analytical datasets integrated into CI/CD pipelines
• Build and maintain Azure data pipelines using DevSecOps processes
• Define and build data integration processes to be used across the organization
• Build conceptual and logical data models for stakeholders and management
• Work directly with business leadership to understand data requirements; propose and develop solutions that enable effective decision-making and drive business objectives
• Prepare advanced project implementation plans that highlight major milestones and deliverables, leveraging standard methods and work-planning tools
• Document existing and new processes to develop and maintain technical and non-technical reference materials
• Recognize potential issues and risks during analytics project implementation and suggest mitigation strategies
• Communicate and own the process of manipulating and merging large datasets
• Perform other duties as assigned
Qualifications and Education Requirements:
• Master’s degree in Information Systems, Computer Science, Engineering, or a related field, or the equivalent combination of education, training, and experience
• Expert skill in Databricks
• Advanced skill in Azure SQL, Azure Data Lake, Azure App Service, Python, and T-SQL
• Experience sourcing, maintaining, and updating data in cloud environments
• Knowledge of, and the ability to perform, basic statistical analysis
• Experience with ETL tools and techniques
• Experience designing and building data pipelines using API ingestion and streaming ingestion methods
• Demonstrated change management and/or excellent communication skills
• Understanding of data warehousing, data cleaning, data quality, data pipelines, and other analytical techniques required for data usage
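The qualifications above emphasize hands-on pipeline work: ingesting records, cleaning and deduplicating them, and enforcing data quality before loading an analytical dataset. As a hedged illustration only (this sketch is not part of the posting; it uses plain Python structures in place of the Spark or Azure tooling the role actually requires, and all record fields are hypothetical), a minimal extract-transform-load pass might look like:

```python
import json

# Hypothetical raw records, e.g. as returned by an API ingestion step.
RAW = """
[{"id": 1, "region": "EMEA", "revenue": "1200.50"},
 {"id": 2, "region": "emea", "revenue": "800"},
 {"id": 2, "region": "emea", "revenue": "800"},
 {"id": 3, "region": "APAC", "revenue": null}]
"""

def transform(records):
    """Clean, deduplicate, and type-convert raw records — a basic
    data-quality pass of the kind the posting describes."""
    seen = set()
    cleaned = []
    for rec in records:
        if rec["revenue"] is None:   # drop incomplete rows
            continue
        if rec["id"] in seen:        # deduplicate on the primary key
            continue
        seen.add(rec["id"])
        cleaned.append({
            "id": rec["id"],
            "region": rec["region"].upper(),   # normalize casing
            "revenue": float(rec["revenue"]),  # enforce numeric type
        })
    return cleaned

def load(cleaned):
    """Aggregate revenue per region — the shape of a simple analytical dataset."""
    totals = {}
    for rec in cleaned:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["revenue"]
    return totals

rows = transform(json.loads(RAW))
print(load(rows))  # {'EMEA': 2000.5}
```

In a Databricks setting the same cleaning and aggregation would typically be expressed as Spark DataFrame transformations running inside a CI/CD-deployed job rather than in-memory Python.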
Languages
- English