About
We are looking for an experienced and versatile Data Engineer to join our dynamic, fast-growing team. If you are passionate about data, solving complex problems, and working directly with enterprise stakeholders to translate business needs into scalable technical solutions, this role could be the perfect fit. ShyftLabs is a growing data product company, founded in early 2020, that works primarily with Fortune 500 companies. We deliver digital solutions that help accelerate business growth across industries by creating value through innovation. In addition to strong technical expertise, we are seeking someone with strong business awareness and the ability to lead client and stakeholder communication. The ideal candidate will be comfortable collaborating with enterprise-level clients, translating complex technical concepts into business outcomes, and ensuring alignment between engineering execution and strategic objectives.
Job Responsibilities
- Design, build, and maintain scalable, reliable batch and real-time ETL/ELT data pipelines using cloud services such as GCP Dataflow, Cloud Functions, Pub/Sub, and Cloud Composer.
- Architect and implement robust data infrastructure capable of handling high-volume data ingestion and processing.
- Develop and manage our central data warehouse in Google BigQuery.
- Design and implement data models, schemas, and table structures optimized for performance, scalability, and long-term maintainability.
- Write clean, efficient, and maintainable SQL and Python code to transform raw data into curated, analysis-ready datasets.
- Build reliable transformation workflows that support analytics, reporting, and data science initiatives.
- Monitor, troubleshoot, and optimize data infrastructure to ensure high performance, reliability, and cost efficiency.
- Implement BigQuery best practices, including partitioning, clustering, query optimization, and materialized views.
- Build and maintain curated data models that serve as the “source of truth” for business intelligence and reporting.
- Ensure data is optimized and readily accessible for BI tools such as Looker and other analytics platforms.
- Implement automated data quality checks, validation rules, and monitoring frameworks to ensure the integrity and reliability of data pipelines and warehouse systems.
- Establish processes for data governance, observability, and lineage tracking.
- Work closely with software engineers, data analysts, and data scientists to understand their data requirements and provide the necessary infrastructure and data products.
- Lead and support client and stakeholder communication, working with enterprise clients to translate business needs into scalable data solutions.
- Partner with product teams and leadership to ensure that technical data solutions align with business strategy and client expectations.
- Take ownership of data platforms and architecture decisions, helping shape the future direction of our analytics and data infrastructure.
- Identify opportunities to improve data reliability, automate workflows, and generate new insights through data.
- Contribute to a collaborative, high-performing engineering culture with strong communication and teamwork.
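To give a concrete flavor of the automated data quality checks mentioned above, here is a minimal sketch in plain Python. The rule names, fields, and sample batch are illustrative assumptions, not ShyftLabs code; in practice such checks would run inside a pipeline framework against warehouse tables.

```python
# Minimal data-quality check sketch: validate a batch of rows before loading.
# Field names ("order_id", "amount") are hypothetical examples.

def run_quality_checks(rows, required_fields=("order_id", "amount")):
    """Return a dict mapping check name -> list of failing row indices."""
    failures = {"missing_field": [], "null_value": [], "duplicate_id": []}
    seen_ids = set()
    for i, row in enumerate(rows):
        for field in required_fields:
            if field not in row:
                failures["missing_field"].append(i)
            elif row[field] is None:
                failures["null_value"].append(i)
        oid = row.get("order_id")
        if oid is not None:
            if oid in seen_ids:
                failures["duplicate_id"].append(i)
            seen_ids.add(oid)
    # Keep only the checks that actually failed.
    return {name: idxs for name, idxs in failures.items() if idxs}

batch = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 1, "amount": 5.00},   # duplicate id
    {"order_id": 2, "amount": None},   # null amount
    {"amount": 3.50},                  # missing order_id
]
print(run_quality_checks(batch))
# → {'missing_field': [3], 'null_value': [2], 'duplicate_id': [1]}
```

A real framework would also cover freshness, volume, and schema-drift checks, and route failures to monitoring rather than stdout.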
Basic Qualifications
- 5+ years of hands-on experience in data engineering, data integration, or data platform development.
- Degree in Computer Science, Engineering, Mathematics, or a related STEM discipline.
- Strong programming and query skills in SQL and Python.
- Experience with distributed version control systems such as Git in an Agile/Scrum environment.
- Experience designing and orchestrating ETL pipelines, particularly with Databricks.
- Experience working within cloud environments (GCP, AWS, or Azure).
- Experience with database systems such as MongoDB and Elasticsearch.
- Strong understanding of data warehousing and dimensional modeling methodologies.
- Hands-on experience with Airflow and Hadoop.
- Experience using Docker for containerized workflows and reproducible environments.
- Ability to identify opportunities to improve data quality, reliability, and automation.
- Strong business awareness and communication skills, with the ability to collaborate with both technical teams and business stakeholders.
- Experience in the retail industry is a plus.
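As a hedged illustration of the "strong SQL and Python" skills listed above, here is a small raw-to-curated transformation. It uses SQLite from the standard library as a stand-in for a cloud warehouse, and the table and column names (`raw_events`, `daily_revenue`) are invented for the example.

```python
import sqlite3

# Raw-to-curated transform sketch; SQLite stands in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (event_date TEXT, sku TEXT, revenue REAL);
    INSERT INTO raw_events VALUES
        ('2024-01-01', 'A', 10.0),
        ('2024-01-01', 'B', 5.0),
        ('2024-01-02', 'A', 7.5);
""")

# Curated, analysis-ready model: one row per day with total revenue.
conn.execute("""
    CREATE TABLE daily_revenue AS
    SELECT event_date, ROUND(SUM(revenue), 2) AS total_revenue
    FROM raw_events
    GROUP BY event_date
    ORDER BY event_date
""")

rows = conn.execute("SELECT * FROM daily_revenue").fetchall()
print(rows)  # [('2024-01-01', 15.0), ('2024-01-02', 7.5)]
```

The same shape of work in this role would target BigQuery, with partitioning and clustering applied to the curated table.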
Preferred Qualifications
- Master’s degree in Computer Science, Engineering, or a related discipline.
- Experience working with enterprise-scale data platforms and Fortune 500 clients.
- Familiarity with Druid and its Python API, including Kafka integrations.
- Strong experience using Apache Spark for large-scale data processing.
- Experience designing real-time streaming data architectures.
- Experience working with AI-driven platforms, data infrastructure supporting AI/ML systems, or agentic AI workflows.
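For context on the real-time streaming architectures mentioned above, a core building block is windowed aggregation over an event stream. The sketch below shows a tumbling-window count in pure Python as a conceptual illustration only; in this role the equivalent would run on Pub/Sub, Kafka, or Dataflow, and the event names and window size here are assumptions.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative tumbling-window size

def tumbling_window_counts(events):
    """events: iterable of (epoch_seconds, key) pairs.
    Returns {window_start: {key: count}} for 60-second tumbling windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

stream = [(0, "click"), (30, "click"), (59, "view"), (60, "click"), (125, "view")]
result = tumbling_window_counts(stream)
print(result)  # {0: {'click': 2, 'view': 1}, 60: {'click': 1}, 120: {'view': 1}}
```

Production systems add what this sketch omits: out-of-order events, watermarks, and fault-tolerant state.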
Why You’ll Love Working at ShyftLabs
At ShyftLabs, your work matters. We’re a growing data product company making a big impact with Fortune 500 clients, and as we scale, you’ll have the chance to shape solutions, influence strategy, and grow your career alongside us. Here’s what you can expect when you join our team:
- Work Arrangement: This role is currently fully remote, providing flexibility to work from home. As the team and organization continue to grow, the role may transition to a hybrid model with occasional in-office collaboration.
- Comprehensive Benefits: We cover 100% of health, dental, and vision insurance premiums for you and your dependents, which means no out-of-pocket costs. Eligibility starts on day one.
- Growth & Learning: Access extensive learning and development resources to keep leveling up your skills.
Inclusion at ShyftLabs
We’re building something big, and we want you on the journey with us. If you’re ready to use data and innovation to make an impact, apply today and let’s grow together. ShyftLabs is an equal-opportunity employer committed to creating a safe, diverse, and inclusive environment. We encourage applicants of all backgrounds, including ethnicity, religion, disability status, gender identity, sexual orientation, family status, age, and nationality, to apply. If you require accommodation during the interview process, let us know and we’ll be happy to support you.
Languages
- English