This job posting is no longer available
About
Acuity Inc. is a market-leading industrial technology company. We use technology to solve problems in spaces, light, and more things to come. Through our two business segments, Acuity Brands Lighting (ABL) and Acuity Intelligent Spaces (AIS), we design, manufacture, and bring to market products and services that make a valuable difference in people's lives. We achieve growth through the development of innovative new products and services, including lighting, lighting controls, building management solutions, and an audio, video, and control platform. We focus on customer outcomes and drive growth and productivity to increase market share and deliver superior returns. We look to aggressively deploy capital to grow the business and to enter attractive new verticals. Acuity Inc. is based in Atlanta, Georgia, with operations across North America, Europe, and Asia. The Company is powered by approximately 13,000 dedicated and talented associates.

Acuity Brands is seeking a Senior Data Engineer to join its Atrius Analytics team. Acuity Brands is driving smarter, safer, and greener outcomes across industries like building automation, HVAC, AV systems, refrigeration, and lighting. By harnessing valuable data within a space, we are powering advanced analytics and AI-driven insights that transform environments and maximize occupant experiences.

The Atrius Analytics team is building a scalable, intelligent data platform that ingests and normalizes IoT telemetry from across our diverse verticals. Supporting both batch and real-time processing, this platform provides the foundation for industry-leading cloud applications in building performance and spatial intelligence. We value trust, respect, asynchronous communication, creativity, and customer focus. Our day-to-day is guided by an agile design methodology, with close collaboration among software engineers, firmware engineers, data scientists, and third-party developers. Our team works across Azure, AWS, and GCP IoT services.
The ideal candidate is passionate about driving impact through data while developing business, technical, and leadership acumen.

Primary Responsibilities Include:
- Design and implement scalable data engineering pipelines for ingesting, transforming, and storing IoT telemetry data using Apache Flink, Apache Spark, and Databricks
- Build and maintain time-series data solutions using PostgreSQL and TimescaleDB to support high-resolution telemetry analytics
- Integrate data governance frameworks for metadata management, lineage tracking, and compliance within a data lake ecosystem
- Apply performance tuning techniques in Databricks (e.g., partitioning, caching, adaptive query execution) to optimize batch processing speed and resource utilization
- Design and implement Infrastructure as Code (IaC) solutions using Terraform and Azure Bicep to provision and manage cloud resources across multi-cloud environments
- Optimize and monitor pipeline performance and resource utilization across distributed environments such as Kubernetes and Databricks clusters
- Leverage telemetry and diverse data sources to design, test, and deploy AI/ML solutions, collaborating with data scientists and engineers to build deep learning capabilities within the platform for prediction, early alerting, and prescriptive recommendations
- Define appropriate business metrics to measure the impact of AI/ML solutions within the platform
- Write scalable, distributed, and highly efficient code in languages such as Python, Java, PySpark, Scala, and R

Qualifications:
- A BS in Computer Science, Statistics, Mathematics, or a related field
- 5+ years of experience building end-to-end analytics solutions, including designing and implementing real-time and batch data pipelines for IoT telemetry or time-series data
- Excellent problem-solving, critical thinking, and communication skills
- Demonstrated initiative to find solutions to complex problems at scale and operationalize them
- Demonstrated ability to work in ambiguous situations and across organizational boundaries
- Lead with respect, accountability, integrity, and a positive can-do attitude
- Experience working in a data-intensive environment and translating business needs into data requirements
- 2+ years of experience building and operationalizing analytics pipelines and services, adopting the container ecosystem of Docker and Kubernetes
- 2+ years of demonstrated AI/ML pipeline development with relevant code experience
- 2+ years' experience using one or more of the following: TensorFlow, MLflow, PyTorch, SparkML, etc.
- 2+ years' experience using one or more of the following frameworks: Apache Flink, Spark, Spark Structured Streaming, Akka, Kafka, etc.
- 2+ years' experience building scalable batch data pipelines on cloud-based platforms such as Databricks

The salary range for this position is $120,800.00 to $217,400.00. Placement within this range may vary depending on the applicant's experience and geographic location.

Acuity offers generous benefits including health care, dental coverage, vision plans, 401K benefits, and commissions/incentive compensation depending on the role. We value diversity and are an equal opportunity employer. All qualified applicants will be considered for employment without regard to race, color, age, gender, sexual orientation, gender identity and expression, ethnicity or national origin, disability, pregnancy, religion, covered veteran status, protected genetic information, or any other characteristic protected by law.
Language skills
- English
Notice to users
This posting was published by one of our partners. You can view the original posting here.