About
Pay/Schedule:
$130K-210K/year (stock options and other incentives included) | Hybrid: 2 days in-office, 3 days remote | Full-time permanent position | Relocation assistance available
Position Overview:
Build, train, and deploy large-scale, self-supervised foundation models that learn rich representations of time series, sequential sensor, text, and vision data. Fine-tune models for anomaly/event detection, predictive maintenance, forecasting, classification, and multi-modal sensor fusion for industrial and scientific applications.
Key Responsibilities:
- Build and train large-scale foundation models using self-supervised and semi-supervised learning methods for time series, sensor, text, and vision data
- Process, augment, and engineer features for diverse sensor modalities, including accelerometers, temperature, vibration, audio, and images, with real-world noise handling
- Integrate heterogeneous data types (time series, images, text, audio, structured data) into robust deep learning architectures with cross-modal representation learning
- Implement transfer learning and fine-tuning strategies at scale using prompt/adapter-based methods, temporal domain adaptation, and few-shot learning
- Collaborate with cross-disciplinary teams, including domain experts, engineers, and product owners, to deliver interpretable models with quantified uncertainty and business impact
Required Qualifications:
- MS or PhD in Computer Science, Data Science, AI, or a related field
- 3+ years of relevant experience in data science and AI
- Expert Python programming (NumPy, SciPy, Pandas) and C++/CUDA for custom kernels
- Deep expertise in PyTorch (Lightning, Distributed), TensorFlow/Keras, or JAX/Flax
- Strong experience with self-supervised/semi-supervised learning: masked modeling, contrastive methods, temporal predictive coding, multimodal alignment
- Proficiency with sequence models (RNNs, GRU/LSTM, TCN), CNNs, Transformers (BERT, ViT, TimeSformer), and graph neural networks
- Experience with large-scale training: multi-GPU/multi-node clusters, mixed precision, ZeRO optimization
- Strong foundation in time series analysis, signal processing (Fourier/wavelet analysis, filters), and sensor data processing
- Expertise in data engineering: building robust pipelines for large-scale, time-synchronized multi-sensor datasets
- Strong mathematical background: linear algebra, probability, statistics, optimization, numerical methods
- Excellent communication skills, with the ability to present complex model behaviors and value impact to technical and non-technical stakeholders
- Must be authorized to work in the US
Language Skills
- English
Notice to Users
This posting was published by one of our partners. You can view the original posting here.