About
We are seeking a Senior Data Engineer to lead the design, development, and maintenance of scalable, high-performance data solutions focused on data centralization, governance, and integrity. This is a highly technical role requiring expertise in Google Cloud Platform (GCP) native services (BigQuery, Dataflow, Dataproc) to build automated, cost-efficient, and complex data pipelines. The ideal candidate will be a technical leader, making key architectural decisions, implementing robust data frameworks, and mentoring junior engineers. This position requires a strong commitment to working full-time onsite (with potential remote flexibility on Fridays) and leveraging data engineering expertise to drive business insights and analytics.
Key Requirements
Experience & Education: 5-6 years of experience as a Data Engineer or in a comparable Business Intelligence role. Bachelor of Science in Information Technology, Computer Science, or a related technical field.
Cloud & Pipeline Expertise (Required): Proficiency with the Google Cloud Platform (GCP) offering is required. Expertise in designing and building large-scale, automated data pipelines using GCP native services (e.g., BigQuery, Dataflow, Dataproc). Experience with Terraform to create and maintain intermediate to advanced infrastructure-as-code scripts is a plus.
Database & Coding Skills: 4-5 years minimum experience with relational databases (MS-SQL, Postgres, or Oracle). 3-4 years of experience with NoSQL databases (Capella or MongoDB). 2-3 years of experience with Python or a similar language. Ability to work with complex multi-statement queries and to enhance performance and troubleshoot issues.
Architecture & Governance: Proven ability to make key architectural decisions related to data modeling, storage, and real-time processing. Experience assisting with data governance, quality, and lineage practices using tools like Data Catalog, DLP, or Cloud Audit Logs. Experience translating business requirements into technical design documents.
Work Schedule & Location: Full-time onsite commitment required (Monday-Friday, 8 a.m.-5 p.m.), with remote flexibility on Fridays pending manager approval.
Desired Experience (Bonus): Experience in the logistics or transportation industry (1-3 years). Proven ability to mentor junior engineers and provide technical guidance. Experience with monitoring and optimizing GCP resources for cost management.
Job Responsibilities
• Design and build large-scale, automated, and cost-efficient data pipelines integrated across multi-platform and disparate data sources using GCP native services (e.g., BigQuery, Dataflow, Dataproc)
• Make key architectural decisions related to data modeling, storage, real-time processing, and orchestration
• Work with complex multi-statement queries, enhance their performance, and troubleshoot performance-related issues
• Implement scalable, reusable, and efficient data frameworks that consistently ensure data quality, data integrity, and performance
• Assist in establishing data governance, quality, and lineage practices using tools like Data Catalog, Data Loss Prevention (DLP), or Cloud Audit Logs
• Create and maintain intermediate to advanced Terraform scripts
• Mentor junior and mid-level engineers, conduct code reviews, and provide technical guidance on best practices
• Translate business requirements into technical design documents and implementation plans
• Participate in the support rotation, proactively identify issues, and implement resolutions as appropriate
Additional Desired Duties/Responsibilities:
• Monitor and optimize GCP resources to manage costs effectively
• Stay up to date with the latest GCP services and features
• Evaluate and recommend new tools and technologies to improve data engineering practices
Languages
- English