This job posting is no longer available
About
Requirements
- 5+ years of experience in data engineering or platform/infrastructure roles, with a focus on scaling tools and systems.
- Expertise in Python or Scala, and strong proficiency in SQL for data modeling and optimization.
- Deep experience with data warehouse technologies like Snowflake, including clustering, performance tuning, query profiling, and access management.
- Experience with data lake and lakehouse architectures such as Databricks, Delta Lake, Iceberg, or Apache Hudi, and query engines like Athena or Presto.
- Proven ability to design and implement scalable ETL pipelines using technologies like dbt for transformation and Databricks for large-scale processing.
- Familiarity with managing infrastructure-as-code, job orchestration (Dagster, Airflow), and CI/CD workflows.
- A proactive mindset and strong problem-solving skills, especially when troubleshooting complex infrastructure issues.
- Excellent collaboration and communication skills to support cross-functional teams and data consumers.
Nice to have
- Experience implementing row-level security and data masking for PHI/PII use cases.
- Exposure to governance tools (e.g., Collibra, Amplitude, Looker admin, Unity Catalog).
- Familiarity with AWS services, especially for storage and compute cost optimization.
Benefits
- Unlimited PTO
- 100% paid employee health benefit options
- Employer-funded 401(k) match
- Corporate wellness programs with Headspace and Peloton
- Parental leave
- Cell phone reimbursement
- Commuter benefits
- Catered lunch every day along with snacks (when back in office)
How to Apply
Interested in this position? Please submit your resume and cover letter through the application portal.
Language Skills
- English