About
We’re a venture‑backed company at the frontier of precision mental health. In partnership with the world’s leading medical schools and psychiatric hospitals, we have secured non‑dilutive funding from the NIH, ARPA‑H, DARPA, the FDA, and the Wellcome Trust. We deploy multimodal AI systems in clinical trials and healthcare settings across four continents, and we’re hiring the engineering team to build what comes next.

About the Role
A patient wears an Oura ring to sleep. Their phone picks up a shift in activity patterns overnight. The next morning, a conversational AI agent conducts a brief voice‑based check‑in, and the vocal features, facial action units, and linguistic markers from that session all flow into the same clinical picture alongside the wearable data. Your job is to make sure every one of those signals – from raw sensor stream to clinically meaningful feature – arrives reliably, on time, and at the required quality. You’ll architect and own the data infrastructure across all clinical data modalities: audio‑visual features from conversational assessments, wearable biometrics, passive mobile sensing, and the feature pipelines that prepare them for fusion in our multimodal ML models. You’ll also own the overall data architecture – how data flows into and through Deliberate AI, how it’s stored, cataloged, and governed, and how it scales as we deploy across clinical trial sites on four continents. This isn’t just a pipeline‑building role; it’s defining the technical strategy for how clinical data works at a company building the future of precision mental health care.

What You’ll Do
- Design and implement the overall data architecture for ingestion, storage, cataloging, and governance of all clinical datasets – audio, video, wearable, mobile sensing, and physiological data from clinical sites worldwide.
- Build and maintain API integrations with commercial wearable devices (e.g., Oura Ring, Fitbit) to collect raw sensor streams (HRV, sleep stages, activity, heart rate) and engineer biometric features.
- Develop systems to capture and process passive mobile signals that trigger adaptive assessments, including real‑time streaming and synchronization across modalities.
- Build automated QA systems to detect missing data, sensor failures, and anomalous readings – with data lineage tracking, pipeline observability, monitoring, alerting, and incident triage so problems are caught and resolved before they affect downstream models or clinical decisions.
- Design participant monitoring systems with automated data completeness checks, device health monitoring, and alert mechanisms supporting global deployment.
- Implement reliable incremental load patterns – idempotent runs, backfill strategies, and late‑arriving data handling – so the platform stays correct as clinical sites come online across time zones and connectivity conditions.
- Evaluate and select the core data stack – orchestration, warehousing, transformation, and observability tooling – and own those decisions as the foundation the team builds on.
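As a flavor of the incremental‑load patterns above – a minimal sketch (illustrative names, not Deliberate AI's production code) of ingestion that stays correct under retries, backfills, and late‑arriving data by upserting on a natural key:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Reading:
    """One sensor observation (illustrative schema)."""
    participant_id: str
    modality: str          # e.g. "hrv", "sleep_stage"
    recorded_at: datetime  # when the sensor measured it, not when it arrived
    value: float


class SensorStore:
    """Toy in-memory store keyed by (participant, modality, recorded_at).

    Upserting on that natural key makes ingestion idempotent: a retried
    or backfilled batch overwrites rather than duplicates, and a
    late-arriving reading sorts into place by its recorded-at time.
    """

    def __init__(self) -> None:
        self._rows: dict[tuple[str, str, datetime], float] = {}

    def ingest(self, batch: list[Reading]) -> None:
        for r in batch:
            self._rows[(r.participant_id, r.modality, r.recorded_at)] = r.value

    def series(self, participant_id: str, modality: str) -> list[tuple[datetime, float]]:
        """All readings for one participant/modality, in measurement order."""
        return sorted((ts, v) for (pid, mod, ts), v in self._rows.items()
                      if pid == participant_id and mod == modality)
```

A real deployment would back this with warehouse `MERGE`/upsert semantics rather than a dict, but the invariant is the same: re-running a load must be a no-op, not a duplication.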
Desired Experience

- Significant experience in data engineering, including hands‑on work in at least two of: audio/video processing, IoT/wearables, or mobile sensing.
- Expert‑level Python programming skills, with experience in performance optimization.
- Proven track record architecting and scaling data pipelines for multimedia or sensor data.
- Deep experience with wearable device APIs (e.g., Fitbit, Oura, Apple Health).
- Strong expertise in time‑series data processing, real‑time streaming architectures, and feature engineering.
- Experience with cloud infrastructure (GCP / AWS) and distributed computing.
- Use of agentic programming tools (e.g., Claude Code, Codex) as part of your workflow.
- Strong understanding of signal processing fundamentals across multiple modalities.
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience; Master’s preferred).
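For context on the kind of time‑series feature engineering involved – a small, self‑contained sketch of RMSSD, one standard time‑domain HRV feature computed from beat‑to‑beat (RR) intervals (illustrative only, not our production feature code):

```python
import math


def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals (ms),
    a standard time-domain heart-rate-variability feature."""
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    # Successive differences between adjacent beat-to-beat intervals
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```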
Strong Candidates May Also:

- Have experience with healthcare or clinical research data (HIPAA compliance, PHI handling).
- Have knowledge of affective computing or speech processing.
- Have a background in real‑time streaming architectures (Kafka, Pub/Sub, WebSockets) and distributed computing frameworks (Spark, Dask).
- Have experience with machine learning for audio, video, or sensor applications.
- Have publications or open‑source contributions in data engineering or digital health.
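As a flavor of the streaming QA work described above – a minimal rolling z‑score check that flags anomalous sensor readings, in plain Python rather than a real Kafka/Pub/Sub consumer (window size and threshold are illustrative):

```python
from collections import deque
import statistics


class RollingAnomalyDetector:
    """Flags readings that deviate sharply from a sliding window of
    recent history - a stand-in for the per-topic QA checks a real
    streaming pipeline would run (thresholds here are illustrative)."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0) -> None:
        self._buf: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous vs. recent history."""
        anomalous = False
        if len(self._buf) >= 5:  # wait for a minimal baseline
            mean = statistics.fmean(self._buf)
            sd = statistics.pstdev(self._buf)
            if sd > 0 and abs(value - mean) / sd > self.z_threshold:
                anomalous = True
        self._buf.append(value)
        return anomalous
```

In production this check would feed the alerting and incident‑triage path rather than return a bool, but the shape – bounded window, baseline statistics, per‑reading decision – is the same.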
Compensation & Benefits

- Base salary: $160,000 – $220,000 (commensurate with experience, qualifications, and location).
- Early‑stage equity with meaningful ownership – you’re joining at a stage where individual grants are substantial.
- Comprehensive health, dental, and vision insurance.
- 401(k) with company match.
- Flexible PTO policy.
- Publication co‑authorship on peer‑reviewed clinical research – your data architecture shows up in the scientific record, not just the git log.

Location
This is a hybrid role. We work in‑person roughly 50% of the time in NYC or Boston – this is how we build culture and solve hard problems together as an early, fast‑growing team. Candidates should be based in, or willing to relocate to, one of these cities.

Work Authorization
Candidates must be authorized to work in the United States. We welcome applicants who hold U.S. citizenship, permanent residency, or existing work authorization, including H‑1B (transfer‑eligible), OPT/STEM OPT, or TN visa (Canadian and Mexican citizens). If you already hold an H‑1B, we will sponsor your green card if desired, but we are not currently able to sponsor new H‑1B petitions.

EEO Statement
Deliberate AI evaluates candidates based on merit, qualifications, and the skills needed to succeed in the role.
Language Skills
- English