About
About the Role: We are seeking a highly motivated and skilled Developer to design, implement, and maintain high-performance data ingestion pipelines on AWS. This role will be crucial in building and optimizing our data infrastructure, ensuring data quality and consistency from source systems to target repositories. The ideal candidate will have strong experience with Spark, streaming technologies, AWS cloud services, and Agile/DevOps methodologies.

Responsibilities:
- Design and implement highly performant data ingestion pipelines from multiple sources on AWS.
- Integrate end-to-end data pipelines, ensuring data quality and consistency throughout the process.
- Work with event-based/streaming technologies to ingest and process data.
- Develop and maintain Spark code using Python or SQL.
- Implement and manage data solutions within AWS cloud architecture.
- Utilize Big Data components such as PySpark and Spark SQL.
- Work with databases and data warehousing solutions.
- Deliver proof-of-concept and production implementations within an Agile/DevOps methodology, using iterative sprints.
- Apply Vanguard pattern methodology for mid-tier design.
- Collaborate with other developers, data engineers, and stakeholders to understand data requirements and ensure seamless integration.
- Monitor and troubleshoot data pipelines to ensure performance, reliability, and security.
- Optimize data pipelines for cost efficiency and scalability.
- Document data pipeline processes and maintain up-to-date knowledge of best practices.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience in designing and implementing data ingestion pipelines.
- Strong proficiency in Spark development using Python or SQL.
- Hands-on experience with AWS cloud services relevant to data processing and storage (e.g., S3, EMR, Glue, Kinesis).
- Experience with event-based/streaming technologies (e.g., Kafka, Kinesis Streams).
- Solid understanding of Big Data components such as PySpark and Spark SQL.
- Experience with databases and data warehousing concepts.
- Familiarity with Agile/DevOps methodologies and tools.
- Knowledge of Vanguard pattern methodology for mid-tier design.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Preferred Qualifications:
- Experience with specific data ingestion tools and frameworks.
- Experience with data quality and data governance practices.
- Knowledge of other Big Data technologies (e.g., Hadoop, Hive).
- AWS certifications.
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without discrimination. All applicants will be evaluated solely on the basis of their ability, competence, and proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels of the company.
Languages
- English