
Data Engineer

Intellisoft Technologies
  • United States

About

Position: Senior Data Engineer
Location: Waltham, MA / Remote (EST)
Duration: 6+ Months
Client: National Grid - Boston
Visa Restrictions: None
Sub Vending: No
Pay Rate: $58/Hr on W2 without benefits
Bill Rate: $80
Candidate must have a public profile: Yes
100% remote, EST hours
Required Skills

  • Skilled in object-oriented programming (Python in particular)
  • Strong experience in Python, PySpark, and SQL
  • Strong experience in Databricks
  • Experience developing integrations across multiple systems and APIs
  • Experience with or knowledge of Agile software development methodologies
  • Experience with cloud-based databases, specifically Azure technologies (e.g., Azure Data Lake, ADF, Azure DevOps, and Azure Functions)
  • Experience writing and tuning SQL queries in a business environment with large-scale, complex datasets
  • Experience with data warehouse technologies; experience creating ETL and/or ELT jobs
  • Excellent problem-solving and troubleshooting skills
  • Process-oriented with strong documentation skills
  • Experience designing data schemas and operating SQL/NoSQL database systems is a plus

Additional Skills

  • Experience in software engineering
  • Experience in Snowflake
  • Experience with Kafka, Flink, Fivetran, and Matillion is nice to have
  • Experience in Data Science and Machine Learning is nice to have

Job Description

Position: Senior Data Engineer
Location: Remote
As a Senior Data Engineer, you will be part of a cross-functional development team focused on building a forecasting platform at National Grid for our business teams. Using the Agile framework, you will build end-to-end pipelines based on rigorous engineering standards and coding practices to deliver data that is accessible and of the highest quality. You will also contribute to the modernization of our architecture and tools to increase our output, scalability, and speed.
A Senior Data Engineer designs and develops highly scalable and extensible data pipelines that enable collection, storage, distribution, modeling, and analysis of large data sets from many channels. This position requires an innovative software engineer who is passionate about data and data quality. The ideal candidate will have strong data warehousing and API integration experience and the ability to develop scalable data pipelines that make data management and analytics/reporting faster, more insightful, and more efficient.

  • Develop, test, document, and support scalable data pipelines
  • Build out and evolve data integrations, including APIs, to support continuing increases in data volume and complexity
  • Establish and follow data governance processes and guidelines to ensure data availability, usability, consistency, integrity, and security
  • Build, implement, and maintain scalable solutions that align with our data governance standards and architectural road maps for data integrations, data storage, reporting, and analytic solutions
  • Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization
  • Design and develop data integrations and a data quality framework; write unit/integration/functional tests and document work
  • Design, implement, and automate deployment of our distributed system for collecting and processing streaming events from multiple sources
  • Perform data analysis needed to troubleshoot data-related issues and aid in their resolution
  • Guide and mentor junior engineers on coding best practices and optimization
Qualifications:

  • Education: four-year college degree or equivalent combination of education and experience; an academic background in Computer Science, Mathematics, Statistics, or a related technical field is preferred
  • 5 years of relevant work experience in analytics, data engineering, business intelligence, or a related field
  • Skilled in object-oriented programming (Python in particular)
  • Strong experience in Python, PySpark, and SQL
  • Strong experience in Databricks
  • Experience developing integrations across multiple systems and APIs
  • Experience with or knowledge of Agile software development methodologies
  • Experience with cloud-based databases, specifically Azure technologies (e.g., Azure Data Lake, ADF, Azure DevOps, and Azure Functions)
  • Experience writing and tuning SQL queries in a business environment with large-scale, complex datasets
  • Experience with data warehouse technologies; experience creating ETL and/or ELT jobs
  • Excellent problem-solving and troubleshooting skills
  • Process-oriented with strong documentation skills
  • Experience designing data schemas and operating SQL/NoSQL database systems is a plus
  • Experience with Kafka, Flink, Fivetran, and Matillion is nice to have
  • Experience in Data Science and Machine Learning is nice to have
Nice to have:

  • Experience in software engineering
  • Experience in Snowflake

Languages

  • English