About
Absentia Labs is building the data and intelligence infrastructure that powers the next generation of biomedical discovery. We work at the intersection of biology, chemistry, machine learning, and large-scale systems, transforming fragmented scientific data into reliable, machine-learning-ready knowledge. Biomedical data is dispersed, semi-structured, and inherently noisy, yet deeply interconnected across experiments, assays, compounds, and biological systems. Extracting value from this complexity requires deliberate schema design, principled abstractions, and rigorous post-processing pipelines that can support both scientific reasoning and large-scale AI. We believe breakthroughs start with strong data foundations. This role sits at the architectural core of our platform, shaping how scientific data is modeled, validated, versioned, and served across the organization.

The Role
As a Senior Data Engineer, you will own the design and evolution of Absentia Labs’ biomedical data platform. You will operate with a high degree of autonomy, making long-horizon architectural decisions while remaining hands-on in implementation. This role is ideal for an engineer who enjoys working in high-ambiguity, research-driven environments and who understands that data engineering for AI is as much about representation and correctness as it is about scale.

What You’ll Do
- Architect and lead the design of end-to-end data systems for large-scale biomedical datasets (chemical, biological, toxicology, omics, assay, clinical, and experimental data).
- Define and evolve schema-driven data models that reconcile noisy, semi-structured, and heterogeneous sources into coherent, interoperable representations.
- Establish best practices for data quality, validation, provenance, lineage, and versioning suitable for scientific and ML workflows.
- Build and maintain cloud-native data infrastructure (data lakes, warehouses, object storage, streaming systems) with an emphasis on scalability and reliability.
- Design pipelines that support both batch and streaming access for ML training, evaluation, and inference.
- Partner closely with ML engineers, scientists, and product leads to translate research needs into durable data abstractions.
- Make principled trade-offs around performance, cost, flexibility, and correctness in production systems.
- Provide technical leadership through design reviews, architectural guidance, and mentorship of other engineers.
- Identify and proactively address systemic risks in data integrity, scalability, and operational complexity.

Who You Are
You are a data engineer who thinks in systems and interfaces, not just pipelines. You are comfortable owning poorly defined problems and converging on robust solutions through thoughtful design and iteration. You understand that biomedical data is rarely “clean,” and that schema design, normalization, and semantics are first-order engineering problems, especially in AI-driven settings.

You Likely Have
- 5+ years of experience in data engineering, platform engineering, or ML infrastructure roles, with clear ownership of production systems.
- Proven experience designing and operating large-scale, production-grade data pipelines.
- Strong proficiency in Python and data-centric software engineering practices.
- Deep experience with cloud platforms (AWS, GCP, or Azure), including storage, compute, and security primitives.
- Familiarity with distributed data processing and orchestration systems (e.g., Spark, Beam, Ray, Airflow, Dagster).
- Experience supporting ML/AI workloads, including dataset generation, feature pipelines, and reproducible training workflows.
- Strong architectural judgment and the ability to communicate technical decisions clearly across disciplines.

Bonus If You Have
- Prior work with biomedical or life-science data (e.g., omics, assays, molecular representations, clinical or toxicology data).
- Experience with streaming platforms (Kafka, Pub/Sub, Kinesis).
- Exposure to ontology-aware data modeling or schema evolution in scientific domains.
- Infrastructure-as-code and systems experience (Terraform, Docker, Kubernetes).
- Experience in early-stage startups or research-heavy environments.
- Open-source contributions or technical publications.

What We Offer
- Competitive compensation, including meaningful equity participation, allowing you to share directly in the long-term success and growth of the company.
- A chance to architect the data backbone of an AI-driven biomedical platform.
- Direct impact on how scientific data is translated into machine intelligence.
- High autonomy, high trust, and ownership over critical systems.
- Flexible remote or hybrid work arrangements.
- A deeply technical, low-ego culture focused on learning and rigor.

How to Apply
Please submit your resume and a short note on why this role resonates with you. Links to GitHub, technical writing, or relevant projects are encouraged.
Language Skills
- English