This job posting is no longer available
About
Responsibilities
Combine technical expertise and a passion for problem‑solving to work closely with clients, turning complex ideas into end‑to‑end solutions that transform their business.
Lead the design, development and delivery of large‑scale data systems, data processing and data transformation projects that create business value for clients.
Automate data platform operations and manage post‑production systems and processes.
Conduct technical feasibility assessments and provide project estimates for the design and development of the solution.
Provide technical input to agile processes, such as epic, story and task definition, to resolve issues and remove barriers throughout the lifecycle of client engagements.
Create and maintain infrastructure‑as‑code for cloud, on‑prem and hybrid environments using tools such as Terraform, CloudFormation, Azure Resource Manager, Helm and Google Cloud Deployment Manager.
Mentor, help and grow junior team members.
Qualifications
Demonstrable experience in data platforms involving implementation of end‑to‑end data pipelines.
Hands‑on experience with Amazon Web Services Cloud Platform.
Implementation experience with column‑oriented database technologies (e.g., Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, Cosmos DB) and traditional database systems (e.g., SQL Server, Oracle, MySQL).
Experience in implementing data pipelines for both streaming and batch integrations using tools/frameworks like Glue ETL, Lambda, Spark, PySpark streaming, etc.
Ability to take on module‑ or track‑level responsibilities and contribute to tasks hands‑on.
Experience in data modeling, warehouse design and fact/dimension implementations.
Experience working with code repositories and continuous integration.
Data modeling, querying and optimization for relational, NoSQL, time‑series and graph databases, as well as data warehouses and data lakes.
Data processing programming using SQL, dbt and Python.
Experience with data processing platforms such as Databricks.
Logical programming in Python, Spark, PySpark, Java, JavaScript and/or Scala.
Data ingestion, validation and enrichment pipeline design and implementation.
Cloud‑native data platform design with a focus on streaming and event‑driven architectures.
Test programming using automated testing frameworks, data validation and quality frameworks, and data lineage frameworks.
Metadata definition and management via data catalogs, service catalogs and stewardship tools such as AWS Glue Catalog, OpenMetadata, DataHub, Alation and similar.
Code review and mentorship.
Bachelor’s degree in Computer Science, Engineering or related field.
Benefits
Pay Range: $103,000 – $154,000
An inclusive workplace that promotes diversity and collaboration.
Access to ongoing learning and development opportunities.
Competitive compensation and benefits package.
Flexibility to support work‑life balance.
Comprehensive health benefits for you and your family.
Generous paid leave and holidays.
Wellness program and employee assistance.
As part of our dedication to an inclusive and diverse workforce, Publicis Sapient is committed to Equal Employment Opportunity without regard to race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity or religion. We also provide reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at hiring@publicis.sapient.com.
Language skills
- English
Note for users
This job posting was published by one of our partners. You can view the original posting here.