
This job offer is no longer available


Data Engineer

Artech Inc
  • US
    United States

About

Request ID: 73205-1
Title: Data Engineer
Locations: Seattle, WA (Onsite)
Duration: 6 Months (Possibility of Extension)
Salary Range: $45 - $60/Hour on W2 (All Inclusive)
Introduction
We are seeking skilled professionals for an exciting opportunity to work as a Data Engineer. Candidates must be U.S. citizens or Green Card holders and able to work directly for Artech on a W2 basis. This is a 6-month contract position with the possibility of extension. The role is based in Seattle, WA, with options for on-site work in St. Louis; Dallas/Plano; Charleston, SC; or Ridley Park, Pennsylvania.
Required Skills & Qualifications
  • Strong programming skills in Python and PySpark.
  • Advanced proficiency writing SQL for analytics and ETL processes.
  • Proven experience building and optimizing complex data pipelines in Azure.
  • Hands-on experience with Azure Databricks: cluster management, job scheduling, workspace governance.
  • Strong working knowledge of core Azure services, including Storage Account, Synapse, Key Vault, VMSS, Function Apps, Web Apps, Log Analytics Workspace, service principals, and managed identities.
  • Experience with container services (ACA, container instances) and containerized data workloads.
  • Familiarity with Azure networking concepts and secure network integration for data platforms.
  • Experience creating Azure infrastructure using ARM templates.
  • Proficient with GitLab and Azure DevOps for CI/CD and source control workflows.
  • Strong analytical, problem-solving, and communication skills; proven ability to work cross-functionally.
  • Experience working in Agile teams and understanding of data governance frameworks.
  • Prior work experience at the client or in the client's industry.

Preferred Skills & Qualifications
  • Experience with additional programming languages or data technologies.
  • Knowledge of advanced data governance and security practices.
  • Experience in operations support and on-call support for production issues and deployments.

Day-to-Day Responsibilities
  • Design, develop, and maintain end-to-end data pipelines and ETL/ELT workflows using PySpark and Python.
  • Implement, optimize, and monitor large-scale data processing workloads in Azure Databricks.
  • Build and maintain data integration and orchestration solutions using Azure services.
  • Collaborate with data consumers and stakeholders to gather business requirements and translate analytical objectives into technical designs.
  • Implement secure data access patterns using Azure Active Directory, managed identities, and service principals.
  • Author Infrastructure-as-Code for Azure resources (ARM templates).
  • Configure and operate Azure components, including Storage Account, Synapse, Key Vault, and more.
  • Collaborate with networking and security teams to design and implement Azure networking for data solutions.
  • Implement monitoring, alerting, and cost optimization for data workloads.
  • Use GitLab and Azure DevOps for source control, CI/CD pipelines, and release management.
  • Follow Agile/Scrum practices and participate in sprint planning, standups, and retrospectives.
  • Ensure solutions meet data governance, lineage, and compliance requirements.

Company Benefits & Culture
  • Competitive salary and benefits package.
  • Opportunities for professional growth and advancement.
  • Dynamic and collaborative work environment.

Languages

  • English
Notice for Users

This job was posted by one of our partners.