
This job posting is no longer available.


Sr Data Engineer

Honeywell
  • United States

About

As a Senior Data Engineer, you will be part of a high‑performing global team delivering advanced AI and data solutions for Honeywell’s industrial customers, with a focus on IoT and real‑time data processing. In this role, you will design and implement scalable data architectures and pipelines that enable next‑generation AI capabilities, including large‑scale machine learning models, intelligent automation, and real‑time analytics. You will work closely with cross‑functional teams to transform high‑volume IoT telemetry into reliable, actionable insights that support Honeywell’s connected industrial solutions.
You will report directly to our Data Engineering Manager and work out of our Atlanta, GA location on a hybrid schedule (requiring 100% onsite for the first 90 days).
Data Engineering & AI Pipeline Development
  • Design and implement scalable data architectures to process high‑volume IoT sensor data and telemetry streams, ensuring reliable data capture and processing for AI/ML workloads (see the ingestion sketch after this list)
  • Build and maintain data pipelines for the AI product lifecycle, including training data preparation, feature engineering, and inference data flows
  • Develop and optimize RAG (Retrieval Augmented Generation) systems, including vector databases, embedding pipelines, and efficient retrieval mechanisms
  • Lead the architecture and development of scalable data platforms on Databricks
  • Drive the integration of GenAI capabilities into data workflows and applications
  • Optimize data processing for performance, cost, and reliability at scale
  • Create robust data integration solutions that combine industrial IoT data streams with enterprise data sources for AI model training and inference
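To ground the ingestion bullet above, here is a minimal sketch of a Bronze‑layer streaming pipeline of the kind this role describes, using PySpark Structured Streaming to land Kafka telemetry in a Delta table. It is illustrative only, not Honeywell's implementation; the broker address, topic, checkpoint path, and table name are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-bronze-ingest").getOrCreate()

# Read raw telemetry from a Kafka topic (placeholder broker and topic).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "iot-telemetry")
    .option("startingOffsets", "latest")
    .load()
)

# Bronze keeps the payload as-is; ingest metadata supports lineage and replay.
bronze = raw.select(
    F.col("key").cast("string").alias("device_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("topic"),
    F.col("timestamp").alias("ingest_ts"),
)

# availableNow drains the current backlog and stops (Spark 3.3+); use a
# processingTime trigger instead for an always-on stream.
query = (
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/iot_bronze")
    .outputMode("append")
    .trigger(availableNow=True)
    .toTable("bronze.iot_telemetry")
)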
DataOps
  • Implement DataOps practices to ensure continuous integration and delivery of data pipelines powering AI solutions
  • Design and maintain automated testing frameworks for data quality, data drift detection, and AI model performance monitoring (see the expectations sketch after this list)
  • Create self‑service data assets enabling data scientists and ML engineers to access and utilize data efficiently
  • Design and maintain automated documentation for data lineage and AI model provenance
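As one example of the automated quality testing mentioned above, Delta Live Tables expectations let a pipeline declare row‑level checks whose pass/fail counts surface in pipeline metrics and can feed drift alerting. A minimal sketch follows; it runs inside a Databricks DLT pipeline, and the table and column names are hypothetical.

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Silver: validated IoT readings")
@dlt.expect_or_drop("non_null_device", "device_id IS NOT NULL")
@dlt.expect_or_drop("plausible_temp", "temperature BETWEEN -80 AND 200")
@dlt.expect("recent_reading", "event_ts > current_timestamp() - INTERVAL 7 DAYS")
def silver_iot_readings():
    # expect_or_drop removes failing rows and records the counts;
    # plain expect only logs violations, useful for drift monitoring.
    return (
        dlt.read_stream("bronze_iot_telemetry")  # placeholder Bronze table
        .select(
            F.col("device_id"),
            F.col("temperature").cast("double"),
            F.col("event_ts").cast("timestamp"),
        )
    )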
Collaboration & Innovation
  • Partner with ML engineers and data scientists to implement efficient data workflows for model training, fine‑tuning, and deployment
  • Mentor team members and provide technical leadership on complex data engineering challenges
  • Establish data engineering best practices, including modular code design and reusable frameworks
  • Drive projects to completion while working in an agile environment with evolving requirements in the rapidly changing AI landscape
Qualifications
  • Minimum 5 years of experience building production data pipelines in Databricks processing TB‑scale data
  • Extensive experience implementing medallion architecture (Bronze/Silver/Gold) with Delta Lake, Delta Live Tables, and Lakeflow for batch and streaming pipelines from Event Hub or Kafka sources
  • Strong hands‑on proficiency with PySpark for distributed data processing and transformation
  • Strong experience working with cloud platforms such as Azure, GCP, and Databricks, especially in designing and implementing AI/ML‑driven data workflows
  • Proficiency in CI/CD practices using Databricks Asset Bundles, Git workflows, and GitHub Actions, and an understanding of DataOps practices including data quality testing and observability
  • Hands‑on experience building RAG applications with vector databases, LLM integration, and agentic frameworks such as LangChain and LangGraph (a minimal retrieval sketch follows this list)
  • Natural analytical mindset with demonstrated ability to explore data, debug complex distributed systems, and optimize pipeline performance at scale
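To illustrate the RAG requirement above, here is a minimal retrieval sketch using LangChain with an in‑memory FAISS index. LangChain's package layout changes between releases, so treat the imports as indicative; it assumes the langchain-community, langchain-openai, and faiss-cpu packages and an OPENAI_API_KEY in the environment. The documents and query are invented.

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Two toy maintenance notes standing in for a real document corpus.
docs = [
    "Pump P-101 vibration above 8 mm/s indicates bearing wear.",
    "Compressor C-2 trips when discharge pressure exceeds 145 psi.",
]

# Embed the documents and index them in an in-memory vector store.
store = FAISS.from_texts(docs, OpenAIEmbeddings())

# Retrieve the most relevant context; in a full RAG system these hits
# would be inserted into an LLM prompt for grounded generation.
hits = store.similarity_search("Why did pump P-101 alarm?", k=1)
print(hits[0].page_content)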
We Value
  • Experience building RAG and agentic architecture solutions and working with LLM‑powered applications
  • Expertise in real‑time data processing frameworks (Apache Spark Streaming, Structured Streaming); a windowed‑aggregation sketch follows this list
  • Knowledge of MLOps practices and experience building data pipelines for AI model deployment
  • Experience with time‑series databases and IoT data modeling patterns
  • Familiarity with containerization (Docker) and orchestration (Kubernetes) for AI workloads
  • Strong background in data quality implementation for AI training data
  • Experience working with distributed teams and cross‑functional collaboration
  • Knowledge of data security and governance practices for AI systems
  • Experience working on analytics projects with Agile and Scrum methodologies
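As a sketch of the real‑time and time‑series patterns listed above, the following Structured Streaming job computes per‑device five‑minute aggregates with a watermark to bound state. Table and column names are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-windowed-agg").getOrCreate()

readings = spark.readStream.table("silver.iot_readings")  # placeholder

# Tumbling 5-minute windows per device; the watermark lets Spark finalize
# windows and discard state once events arrive more than 10 minutes late.
per_device = (
    readings
    .withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"), F.col("device_id"))
    .agg(
        F.avg("temperature").alias("avg_temp"),
        F.max("temperature").alias("max_temp"),
        F.count("*").alias("n_readings"),
    )
)

query = (
    per_device.writeStream
    .outputMode("append")  # valid here because the watermark closes windows
    .option("checkpointLocation", "/mnt/checkpoints/iot_agg")  # placeholder
    .toTable("gold.device_temp_5m")
)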
US Person Requirement
Candidate must be a U.S. Person, defined as a U.S. citizen, a U.S. permanent resident, or someone who holds protected status in the U.S. under asylum or refugee status or has the ability to obtain an export authorization.
Benefits of Working for Honeywell
In addition to a competitive salary, Honeywell employees are eligible for a comprehensive benefits package that includes employer‑subsidized Medical, Dental, Vision, and Life Insurance; Short‑Term and Long‑Term Disability; 401(k) match; Flexible Spending Accounts; Health Savings Accounts; EAP; Educational Assistance; Parental Leave; Paid Time Off; and 12 Paid Holidays.

Language skills

  • English
Notice for users

This job posting was published by one of our partners. You can view the original posting here.