About
Opportunity
As a Senior Data Engineer at Orijin, you will be a technical leader responsible for building, scaling, and modernizing the company’s data platform. Your primary focus will be on data modeling, pipelines, architecture, reliability, and performance, ensuring that data is trusted, timely, and production‑ready.
Job Requirements
Data Platform & Architecture Leadership
Design and evolve Orijin’s data architecture to support scalability, reliability, and near‑real‑time use cases
Define standards for data modeling, orchestration, versioning, and deployment
Lead efforts around data governance, security, lineage, and compliance in partnership with stakeholders
Drive the transition toward modern data stack best practices (event‑driven ingestion and streaming where appropriate)
Data Engineering & Pipeline Ownership
Own the design, build, and maintenance of production‑grade data pipelines across batch and streaming workloads
Build systems that support:
Monitoring, alerting, and observability for data pipelines
Backfills, re‑runs, and safe rollbacks when failures or data issues occur
High data quality and reliability through automated checks and validation
Optimize pipelines for performance, cost efficiency, and scalability
Lead the move toward near real‑time data processing where it delivers business value
Tooling & Infrastructure
Architect and maintain data systems using tools such as:
AWS (S3, RDS, Redshift, Lambda, DMS, Glue, etc.)
Data orchestration and ETL tools like Airflow, Airbyte, and dbt
Improve CI/CD for data workflows, including testing, deployment, and environment management
Evaluate and introduce new tooling for orchestration, monitoring, and data quality as the platform matures
ML & AI Enablement
Design, build, and operate data and feature pipelines that support machine learning and AI‑driven product features, including training, evaluation, inference, monitoring, and safe rollout to downstream systems
Support vectorization and embedding workflows, including generation, storage, refresh, and backfill of embeddings
Partner with team stakeholders to translate model requirements into scalable, reliable data systems
Contribute to early experimentation and prototyping of ML‑powered features
Analytics Enablement & Collaboration
Partner with analysts and product teams to ensure pipelines and data models support meaningful analysis and reporting
Provide architectural input on metrics design, data models, and semantic layers
Enable self‑service analytics by ensuring clean, well‑documented, and accessible datasets
Build and maintain data dashboards using data visualization platforms
Contribute to exploratory analysis or metric definition when deeper engineering context is required
Efficiency & Reliability Focus
Continuously improve:
Query performance
Storage and compute costs
Pipeline runtime and failure rates
Lead incident response for data outages and quality issues, including root‑cause analysis and permanent fixes
Establish SLAs and reliability standards for critical data assets
Qualifications
Bachelor’s or advanced degree in Computer Science, Engineering, Data Science, or equivalent work experience
Expertise in data engineering, platform engineering, or backend engineering
Proven experience designing and operating large‑scale data pipelines and data platforms in production environments
Strong proficiency in Python and SQL for data engineering workflows
Hands‑on experience with AWS data tools like Redshift, Lambda, and Glue, or equivalents; experience with data orchestration and ETL tools like Airflow, Airbyte, and dbt in production environments
Experience implementing monitoring, alerting, and data quality frameworks
Familiarity with streaming or near‑real‑time systems (e.g., Kafka, Kinesis, or similar) is a plus
Hands‑on experience with PostgreSQL and NoSQL‑style databases such as MongoDB and DynamoDB
Experience supporting machine learning or AI workflows (e.g., feature engineering, embedding pipelines, model inputs/outputs, vector databases)
Strong collaboration and communication skills – able to translate business and analytical needs into robust technical systems
Experience with data governance, security, and compliance in regulated or sensitive‑data environments
Equal Opportunity Employer Orijin is an Equal Opportunity Employer and firmly believes in creating a workplace that respects and values diversity of cultural, ethnic, and experiential backgrounds. We encourage all qualified applicants to apply. As an organization committed to the successful reentry of justice‑involved persons, we strongly encourage candidates who share the life experiences of the citizens we serve to apply.
Disclaimer The above statements are intended to describe the general nature and level of work being performed by the individual assigned to this position. They are not intended to be an exhaustive list of all duties, responsibilities, and skills required. Job duties may change or new duties assigned at any time with or without notice.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
Salary And Benefits
Orijin offers competitive compensation commensurate with experience and a generous 100% employer‑paid benefits package for you and your dependents
Fully remote with option to work in person at the NYC office
Occasional travel as needed and the ability to work EST hours
Language Skills
- English
Note for Users
This job listing comes from a TieTalent partner platform. Click “Apply Now” to submit your application directly on their website.