Data Engineer/Database Developer
- North Carolina, United States
About
Title: Data Engineer/Developer
Contract: 12+ months
Location: Charlotte, NC - Onsite; local candidates preferred
The primary role of the Data Engineer/Developer is to function as a critical member of a data team by designing data integration solutions that deliver business value in line with the company's objectives. They are responsible for the design and development of data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL) into large data domains using Python/PySpark and AWS tools.
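The extract-transform-load pattern described above can be sketched, at its simplest, in plain Python; in practice PySpark DataFrames and AWS services would replace these steps, and the file layout and field names below are hypothetical:

```python
import csv
import io

def extract(raw_csv):
    """Extract: parse raw CSV text into rows (a real pipeline would read from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: drop malformed records and normalize the amount field."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # skip records with a missing or non-numeric amount
        out.append({"id": r["id"], "amount": round(amount, 2)})
    return out

def load(rows):
    """Load: return the cleaned rows (a real pipeline would write to Postgres/S3)."""
    return rows

raw = "id,amount\n1,10.5\n2,bad\n3,7.25\n"
result = load(transform(extract(raw)))
print(result)
```

Each stage is a separate function so it can be unit-tested in isolation, which mirrors how PySpark jobs are typically structured.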
Responsibilities:
• Provide scoping, estimating, planning, design, development, and support services to a project.
• Create detailed technical design documents.
• Work with developers and business areas to design, configure, deploy, and maintain custom ETL infrastructure to support project initiatives.
• Design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL pipelines) into large data domains.
• Document and present solution alternatives to clients that support business processes and objectives.
• Work with business analysts to understand and prioritize user requirements.
• Design, develop, test, and implement application code.
• Follow proper software development lifecycle processes and standards.
• Perform quality analysis of products; responsible for defect tracking and classification.
• Track progress and intervene as needed to eliminate barriers and ensure delivery.
• Resolve or escalate problems, and manage risk for both development and production support.
• Coordinate vendors and contractors for specific projects or systems.
• Maintain deep knowledge and awareness of technical and industry best practices, trends, and methodologies.
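As an illustration of the event-driven AWS work the responsibilities above describe, here is a minimal Lambda-style handler sketch in plain Python; the event shape follows AWS's documented S3 notification format, while the bucket and key names are hypothetical, and a real handler would fetch the objects via boto3:

```python
import json

def handler(event, context=None):
    """Lambda-style handler: pull the S3 object keys out of a notification
    event and return them as the list an ETL step would process."""
    keys = [
        rec["s3"]["object"]["key"]
        for rec in event.get("Records", [])
        if rec.get("s3")
    ]
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}

# Simulated S3 put-notification event with one record
event = {"Records": [{"s3": {"bucket": {"name": "raw-data"},
                             "object": {"key": "2024/01/orders.csv"}}}]}
print(handler(event))
```

Keeping the handler a pure function of the event makes it easy to test locally with a dict, before wiring it to an actual S3 trigger.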
Skills and Knowledge:
• Developer experience with a specific focus on data engineering
• Hands-on development experience using Python and PySpark as ETL tools
• Experience with AWS services such as Glue, Lambda, MSK (Kafka), S3, Step Functions, RDS, and EKS
• Experience with databases such as Postgres, SQL Server, Oracle, and Sybase
• Experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers
• Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing)
• Knowledge of developing UNIX scripts and Oracle SQL/PL-SQL
• Experience with data models, data mining, data analysis, and data profiling; working knowledge of ERwin is a plus
• Experience with reporting tools such as Tableau and Power BI is a plus
• Experience working with REST APIs
• Experience with workload automation tools such as Control-M and AutoSys
• Good knowledge of CI/CD and DevOps processes and tools such as Bitbucket, GitHub, and Jenkins
• Strong experience with Agile/Scrum methodology
• Experience with other ETL tools (DataStage, Informatica, Pentaho, etc.)
• Knowledge of MDM, data warehousing, and data analytics
• Working knowledge of data science concepts is a plus
Best Regards,
Divya D
Talent Acquisition Specialist
Phone: 571-350-0519
Email: divya.d@technogeninc.com
Web: www.technogeninc.com
4229 Lafayette Center Dr, Suite 1880, Chantilly, VA 20151
Ideal skills
- Python
- PySpark
- AWS
- Lambda
- Kafka
- S3
- SQL Server
- Oracle
- Sybase
- SQL
- Unix
- Tableau
- Power BI
- REST API
- Control-M
- AutoSys
- DevOps
- Bitbucket
- GitHub
- Jenkins
- Agile
- Scrum
- Data Analytics
- Data Science
Professional experience
- Data Engineer
- Data Infrastructure
- Data Analyst
Language skills
- English