About
We are seeking an Artificial Intelligence (AI)/Machine Learning (ML) Engineer with hands-on experience in image, video, and LiDAR data processing to build advanced analytics and machine learning solutions. The ideal candidate will have strong expertise in computer vision, deep learning, and Python-based model development, with experience deploying solutions on the Azure AI/ML stack. Experience with front-end development using React is a strong plus.

Locations: New York, Washington DC, Denver, Seattle, Los Angeles, Chicago, Austin, or Dallas.

Impact
Image, Video & LiDAR Data Analytics
- Develop and optimize computer vision models for image classification, object detection, segmentation, OCR, and anomaly detection.
- Build pipelines for processing large-scale video streams (real-time or batch).
- Work with LiDAR point-cloud data for feature extraction, 3D object detection, scene reconstruction, and spatial analytics.
- Implement preprocessing, augmentation, and feature engineering workflows for multimodal datasets.

Machine Learning & Deep Learning Development
- Design, train, evaluate, and deploy deep learning models using frameworks such as PyTorch, TensorFlow, OpenCV, MMDetection, Detectron2, etc.
- Apply techniques such as transfer learning, fine-tuning, and model optimization (quantization, pruning).
- Maintain reproducible experimentation using MLflow, notebooks, and versioning best practices.

Azure Cloud & MLOps
- Build and deploy models on Azure Machine Learning, Azure Databricks, and Azure Cognitive Services.
- Develop scalable data pipelines using Azure Data Lake, Azure Functions, Azure Storage, Event Hubs, etc.
- Implement CI/CD workflows, containerization (Docker), and model deployment using AKS, ACI, or serverless options.

Software & API Development
- Build Python-based microservices for model inference and data processing.
- Develop REST APIs to integrate machine learning models into downstream applications.
- (Nice to have) Build lightweight front-end dashboards using React for visualization of image/video results.

Cross-functional Collaboration
- Work closely with product, engineering, and domain teams to translate requirements into technical solutions.
- Document workflows, architectures, and best practices.

Who You Are
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, or a related field.
- 5-7 years of hands-on experience in AI/ML engineering with a focus on computer vision.
- Strong proficiency in Python and popular ML/CV libraries: PyTorch, TensorFlow, OpenCV, scikit-learn, NumPy, pandas, and image/video processing libraries (Pillow, FFmpeg, Open3D, PCL).
- Experience with LiDAR/point-cloud processing (Open3D, PDAL, PyTorch3D, or similar).
- Experience with the Azure AI/ML stack (Azure ML, Data Lake, Functions, DevOps).
- Solid understanding of deep learning architectures (CNNs, transformers for vision, 3D models).
- Experience in model evaluation, benchmarking, and optimization.
- Strong problem-solving skills and the ability to work with noisy/unstructured multimodal data.
- Travel up to 15% of the time.

Preferred Qualifications
- Experience with React.js for building simple visualization dashboards.
- Experience with multimodal AI (image + text, video + sensor data).
- Exposure to edge deployments (NVIDIA Jetson, ONNX Runtime, TensorRT).
- Familiarity with MLOps tools (DVC, MLflow, Kubeflow).

Benefits & Compensation
- Opportunity to work on cutting-edge AI/ML solutions in image, video, and 3D analytics.
- Collaborative environment with strong learning and growth support.
- Exposure to enterprise-scale Azure AI projects.

Compensation: Expected salary (all locations): $97,000 - $141,350.

WSP USA (and all of its U.S. companies) is an Equal Opportunity Employer: Race/Age/Color/Religion/Sex/Sexual Orientation/Gender Identity/National Origin/Disability or Protected Veteran Status. The selected candidate must be authorized to work in the United States.
Languages
- English