
Lead Data Engineer

Burtch Works
  • United States

About

Job Title: Lead Data Engineer
Location: Tampa, FL; Cary, NC; Wilmington, DE; Bridgewater, NJ; New York, NY (Hybrid – 3 days onsite per week)
About The Company

This organization is a leading enterprise focused on advancing data-driven decision-making through scalable, cloud-based analytics solutions. The Data & Analytics organization partners across business and technology teams to deliver modern data platforms, enabling actionable insights, operational efficiency, and innovation at scale.
Job Summary

The Lead Data Engineer plays a critical role in designing and delivering large-scale data and analytics solutions within a modern cloud-based ecosystem. This role is responsible for data architecture, pipeline development, and enterprise data platform modernization, leveraging cutting-edge technologies across Azure and big data platforms.

This position will lead the development of scalable data pipelines, data lakes, and data warehouses, supporting both real-time and batch analytics use cases. The ideal candidate brings deep technical expertise in Azure Databricks, Spark, and cloud data engineering, along with the ability to collaborate across global teams and drive high-impact data initiatives.
Key Responsibilities

Data Architecture & Engineering
  • Design and execute large-scale data migration initiatives, ensuring data quality, reconciliation, and consistency
  • Build scalable, high-performance data pipelines using Azure Databricks, Azure Data Factory, and related services
  • Design and implement data lakes, data warehouses, and analytics data stores for enterprise consumption
  • Develop reusable frameworks for data ingestion, transformation, validation, and reconciliation
  • Optimize Spark jobs, pipelines, and frameworks for performance, scalability, and cost efficiency
Cloud & Platform Development
  • Leverage Azure cloud technologies (Databricks, Data Factory, Delta Lake) to build enterprise-grade data solutions
  • Support both real-time and batch data processing use cases
  • Implement dynamic scaling solutions, including throttling and bursting, for high-volume workloads
  • Develop and maintain API-based data services for secure and standardized data access
Data Quality & Optimization
  • Ensure data accuracy, consistency, and integrity across multiple data sources
  • Implement data validation and reconciliation frameworks
  • Optimize data pipelines and storage for performance and reliability
  • Apply best practices for code quality, testing, and performance tuning
Collaboration & Delivery
Partner with business analysts and stakeholders to gather requirements and deliver data solutions
Collaborate with global teams to drive project delivery and continuous improvement
Recommend and implement enhancements to data architecture and engineering practices
Stay current with emerging technologies and continuously improve platform capabilities
Engineering Best Practices
  • Establish and promote modern software development practices, including CI/CD and automated testing
  • Utilize version control and DevOps tools to support continuous integration and deployment
  • Develop scalable and maintainable solutions aligned with enterprise standards
Required Qualifications

Education
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field

Experience
  • 10+ years of experience in software/data solution development
  • 6+ years of hands‑on experience in data engineering
  • Experience designing and delivering enterprise‑scale data platforms
  • Strong experience working with unstructured and large‑scale datasets
Technical Skills
  • Expertise in Azure Databricks, Azure Data Factory, and Delta Lake
  • Strong proficiency in:
      • Spark (Scala/Python)
      • SQL
      • Python
  • Experience with:
      • Data lakes and data warehousing architectures
      • Real‑time and batch data processing
      • API development and data services
      • Performance tuning for Spark and cloud‑based data platforms
  • Strong understanding of:
      • Data architecture patterns (traditional and modern cloud‑based)
      • Data ingestion, transformation, and integration frameworks
      • Cloud‑native data engineering best practices
Additional Skills
  • Strong problem‑solving and analytical abilities
  • Excellent communication skills (written and verbal)
  • Ability to collaborate effectively with technical and business stakeholders
Preferred Qualifications
  • Azure or Databricks certifications
  • Experience with:
      • Data reconciliation frameworks and migration tools
      • Shell, Bash, or PowerShell scripting
      • Azure DevOps and CI/CD pipelines
      • Large‑scale ERP transformation programs
  • Exposure to AI/ML-driven automation within data engineering workflows
  • Experience working in enterprise‑scale, global environments
Work Environment
  • Hybrid work model: 3 days onsite per week
  • Collaborative, fast‑paced environment focused on innovation and scalability
  • Opportunity to work with cutting‑edge cloud and big data technologies

Languages

  • English
Notice for Users

This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.