
Sr. Data Engineer

Capstone Logistics
  • United States

About

Sr. Data Engineer
Pay: Competitive
Shift: Monday - Friday 8am - 5pm
JOB SUMMARY:
The ideal candidate will possess a strong background in data engineering development technologies and experience with cloud-based resources. The candidate must possess excellent written and verbal communication skills, with the ability to collaborate effectively with the domain experts and technical experts on the team. This role involves both team collaboration and ownership and leadership of project initiatives, contributing technical expertise as well as process leadership.
An Azure Databricks Engineer designs, builds, and optimizes scalable data pipelines and lakehouse solutions within the Microsoft Azure ecosystem.
Core Responsibilities:
  • Data Engineering & ETL: Develop and maintain batch/streaming data pipelines using Azure Databricks, Azure Data Factory, and Power BI
  • Azure Integration: Integrate Databricks with ADLS, Power BI, ADO, and Key Vault
  • Performance Optimization: Optimize Spark jobs, cluster configurations, serverless policy, and cost
  • Data Governance & Security: Implement Unity Catalog for access control, data lineage, and security compliance
  • DevOps & Automation: Build CI/CD pipelines for Databricks notebooks and workflows using Azure DevOps and Terraform
  • Collaboration: Work with data engineers and analysts to enable AI/ML workloads
Required Qualifications:
  • Education: Bachelor's degree in Computer Science, Engineering, or a related field
  • Technical Skills: Proficiency in Python (PySpark) and SQL
  • Cloud Experience: Strong hands-on experience with Microsoft Azure services
  • CI/CD: Azure DevOps
  • Databricks Expertise: Experience with Delta Lake, Spark SQL, and Databricks Jobs
SUPERVISORY RESPONSIBILITIES:
No direct reports, but responsible for ownership of projects that may have reports. Responsible for mentoring junior resources.
ESSENTIAL FUNCTIONS:
Responsibilities:
  • Understand the business data needs, both current and future, to help deliver and guide a data strategy
  • Set and ensure proper adherence to strategic and tactical patterns
  • Set technical direction for obtaining and implementing technology
  • Help evaluate current and future staffing needs
  • Drive the creation of new standards and best practices as technology evolves; communicate and drive adoption across the company
  • Drive creation of new data capabilities and offerings
  • Develop and maintain batch/streaming data pipelines using Azure Databricks, Azure Data Factory, and Power BI
  • Integrate Databricks with ADLS, Power BI, ADO, and Key Vault
  • Optimize Spark jobs, cluster configurations, serverless policy, and cost
  • Implement Unity Catalog for access control, data lineage, and security compliance
  • Build CI/CD pipelines for Databricks notebooks and workflows using Azure DevOps and Terraform
  • Work with data engineers and analysts to enable AI/ML workloads
  • Performs additional responsibilities as assigned
QUALIFICATIONS:
Education and/or experience:
  • Bachelor's degree in a technical field (statistics, mathematics, science, accounting, finance) or related field
  • 4+ years of heavy data experience across the full data life cycle
  • Experience in architecting enterprise-wide data environments and platforms
  • Heavy experience in data movement, transformation, and modeling processes (ELT/ETL)
  • 4+ years' experience working with large datasets or database systems
  • 2+ years working with cloud data technologies
  • Azure experience
  • Proficiency in Python (PySpark) and SQL
  • Strong hands-on experience with Microsoft Azure services
  • Azure DevOps
  • Experience with Delta Lake, Spark SQL, and Databricks Jobs
Knowledge, skills and abilities:
  • Cloud data experience: storage, compute, movement orchestration, reporting
  • Full data stack experience
  • Traditional and modern data warehousing expertise
  • Data modeling expertise
  • Experience in data lake patterns and processes
  • Excellent interpersonal and communication skills (written and verbal)
Professional Skills:
  • Demonstrated ability to manage time and prioritize projects to meet deadlines
  • Excellent listening, written, and oral communication skills
  • Excellent critical thinking skills to help solve business problems and make decisions, paired with a desire to take initiative
  • Ability to maintain project plans, resourcing schedules, and forecasted activities
  • Ability to work well under continually changing deadlines and priorities
Education and/or experience:
  • Bachelor's degree in a technical field (statistics, mathematics, science, accounting, finance) or related field
  • 4+ years of heavy data experience across the full data life cycle
  • Experience in architecting enterprise-wide data environments and platforms
  • Heavy experience in data movement, transformation, and modeling processes (ELT/ETL)
  • 4+ years' experience working with large datasets or database systems
  • 2+ years working with cloud data technologies
  • AWS/Azure experience
  • MPP technology experience (Snowflake/Synapse/Redshift)
Knowledge, skills and abilities:
  • Cloud data experience: storage, compute, movement orchestration, reporting
  • Full data stack experience
  • Traditional and modern data warehousing expertise
  • Data modeling expertise
  • Experience in data lake patterns and processes
  • Excellent interpersonal and communication skills (written and verbal)

Languages

  • English
Notice for Users

This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.