About
The Senior Data Engineer will play a significant role in driving our organization's digital evolution. You will work closely with product, analytics, and engineering teams at Adobe to build and enhance scalable data pipelines and analytic solutions that generate insights into product usage. This position is strongly data engineering oriented, focusing on the collection, integration, and transformation of large and varied data sets from multiple platforms to support business intelligence, product development, and strategic decision-making.
What You'll Do
You will be responsible for developing and maintaining analytics infrastructure, including ETL pipelines and automated data quality tools. The ideal candidate will have a strong background in modern frameworks, cloud platforms, data technologies, SQL, Python, and visualization tools such as Power BI or Tableau, and will apply strong communication skills and a customer-focused attitude to deliver effective solutions. Your key tasks will include collaborating with cross-functional teams to develop and refine adaptable data pipelines and analytical solutions. You will build connectors to a variety of API sources to integrate diverse data sets, and use cloud-based platforms to perform advanced analytics while upholding rigorous standards for data quality and security. You will actively contribute to the planning, solution design, and deployment of analytic tools that support business goals, ensuring that the data infrastructure efficiently serves the evolving needs of the organization.
What You Need To Succeed
- Bachelor's degree required, Master's preferred, in Computer Science or relevant experience
- 8+ years of experience with a BS, or 5+ years of experience with an MS
- 5+ years of experience with Python or equivalent
- Experienced in developing data pipelines using Python, SQL, and Bash on Linux
- Expertise working with Databricks, Spark, Spark SQL, and DBFS
- Proficient with orchestration tools such as Airflow or Databricks Workflows
- Experience deploying and managing services and applications on AWS and Azure cloud platforms
- Application design and architecture skills, with a drive to deliver data with high throughput and low latency
- Experience implementing and managing CI/CD pipelines using Jenkins or similar automation tools
- Committed to finding solutions to challenges and enjoys creative problem solving
- Ability to solve problems collaboratively and build strong relationships
- Experience delivering quality processes and outcomes
- Process oriented with strong attention to detail
Additional Consideration Given
- Experience with LLMs, vector databases, and embeddings
- Exposure to graph databases, the Elastic Stack, or real-time streaming (Kafka, Kinesis)
- Proficiency in managing and orchestrating containers using Kubernetes
Nice-to-have skills
- AWS
- Azure
- Jenkins
- Kubernetes
- Python
- SQL
- Spark
- Databricks
Work experience
- Data Engineer
- Data Infrastructure
Languages
- English