Data Platform Engineer
- San Jose, California, United States
About
You'll be joining Adobe on a contract opportunity, employed through NextDeavor.
Benefits You'll Love
- NextDeavor offers health, vision, and dental benefits for contract employees
- Paid sick leave eligibility is contingent on state of residence
- Optional 401k Plan (excludes employer match)
- Opportunity to get your foot in the door at a well-established corporation, with potential for extended or permanent full-time employment
Become a Key Player as a Data Platform Engineer
As a Data Platform Engineer, you will build and operate the production-scale Databricks and data services that power Adobe's IDS Data Platform. You'll support multiple tenant teams, coordinating storage, microservices, security, and compliance while triaging issues and mentoring teammates. This is a 40-hour-per-week staffing contract covering a leave of absence, expected to run from 12/12/2025 to 06/26/2026 in San Jose, CA.
Here's How You'll Make an Impact on the Team
- Set up and maintain production-scale Databricks environments on Microsoft Azure and AWS.
- Manage production-scale data storage platforms (ADLS and S3) for multiple tenant teams.
- Coordinate production microservices for job scheduling, security, financial, and administrative services.
- Maintain enterprise-scale SQL Server and SSAS environments.
- Triage support issues raised by tenant teams and provide mentorship to teammates.
- Develop tools and automation for configuration management, service deployments, monitoring, and alerting.
- Ensure security and privacy compliance, implementing Adobe Security & Compliance solutions to secure data in the data lake.
- Explore GenAI technologies to integrate technical improvements and enhance user experience.
- Collaborate with third-party vendors to resolve issues, run proofs of concept, and improve the product.
Here's What You'll Need to Be Successful in This Role
- BS in Computer Science, Computer Engineering, or equivalent experience.
- Cloud infrastructure administration and automation experience on AWS and Azure.
- Proficiency with storage technologies: ADLS Gen2, S3, MongoDB, and vector databases.
- Experience setting up and operating Databricks, SQL Server & SSAS, and Airflow.
- Experience with Kubernetes, including Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service (EKS).
- Monitoring and alerting tools: Databricks (DBX) system tables, Prometheus, Splunk, and MLflow.
- Experience maintaining Linux servers and Kubernetes environments.
- Operational tools: Jira and ServiceNow.
- Experience with AI agent development and maintenance (e.g., LangChain).
Pay Range
$79.56/hour
Ready to Make Your Mark?
This role may fill quickly. Submit your resume to be considered.
Languages
- English
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.