
Principal Data Engineer

Fidelity Investments
  • United States

About

Job Description:
Position Description:
Creates data visualizations using object-oriented and functional scripting languages, including Python, Java, C++, and Scala. Performs data analysis using Big Data tools (Hadoop, Spark, Kafka, and Kubernetes). Tracks the data lifecycle with data management tools (Collibra, Alation, and BigID). Works in various data councils, with data owners, data stewards, and data custodians. Manages projects with new and emerging technologies and products in the early strategy and design stage. Performs big data analytics leveraging Amazon Web Services (AWS) Cloud services (EC2, EMR, Snowflake, and Elasticsearch).
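For illustration only, a minimal PySpark sketch of the kind of Spark-based analysis described above; the dataset path and column names (trades.parquet, symbol, price) are hypothetical placeholders, not details from this posting.

    # Minimal PySpark aggregation sketch; path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("trade-analysis").getOrCreate()

    # Load a columnar dataset (placeholder path).
    trades = spark.read.parquet("s3://example-bucket/trades.parquet")

    # Average price and row count per symbol, largest groups first.
    summary = (
        trades.groupBy("symbol")
              .agg(F.avg("price").alias("avg_price"), F.count("*").alias("n"))
              .orderBy(F.desc("n"))
    )
    summary.show(10)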
Primary Responsibilities:
Applies data management practices including data governance, data catalog, data privacy, data quality, and data lineage to ensure data is secure, private, accurate, available, and usable. Works with data governance groups across the enterprise to align and scale effective practices. Partners with key stakeholders to understand key business questions and delivers self-service analytic solutions. Simplifies and effectively communicates data governance challenges, solution options, and recommendations to business partners and technology leadership. Collaborates with business stakeholders, chapter leads, squad leads, tech leads, and architects to drive Fidelity's data strategy forward. Applies process and technology to deliver innovative solutions to meet business challenges. Understands detailed requirements and delivers solutions that meet or exceed customer expectations. Confers with data processing or project managers to obtain information on limitations or capabilities for data processing projects. Develops or directs software system testing, validation procedures, programming, or documentation. Maintains databases within an application area, working individually or coordinating database development as part of a team. Analyzes information to determine, recommend, and plan computer software specifications on major projects, and proposes modifications and improvements based on user need.

Education and Experience:
Bachelor's degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and five (5) years of experience as a Principal Data Engineer (or closely related occupation) designing, developing, and maintaining large-scale data infrastructure, emerging technologies, data pipelines, and platforms, using AWS, Snowflake, Jenkins, Control-M, and programming languages (SQL and Python).
Or, alternatively, Master's degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and three (3) years of experience as a Principal Data Engineer (or closely related occupation) designing, developing, and maintaining large-scale data infrastructure, emerging technologies, data pipelines, and platforms, using AWS, Snowflake, Jenkins, Control-M, and programming languages (SQL and Python).
Skills and Knowledge:
Candidate must also possess:
Demonstrated Expertise ("DE") in each of the following:
  • Performing enterprise-scale data modeling, ingestion, cleansing, transformation, and integration using Extract, Transform, Load / Extract, Load, Transform (ETL/ELT) frameworks (PySpark, SnapLogic, dbt, Kafka Streaming, and Snowflake), and ensuring high performance and governance through Collibra and Alation.
  • Deploying, orchestrating, and optimizing solutions for analytics and financial operations projects using big data technologies, distributed systems, delta lake, and data warehousing solutions (Databricks, Snowflake, or Apache Spark), in containerized environments with Docker for scalable, cost-efficient data solutions.
  • Architecting, automating, and monitoring data workflows, and optimizing cloud platforms (AWS or Azure) for efficiency, scalability, and cost savings, using scalable, reusable data engineering solutions (Python and SQL) to enable performance-optimized queries, with CI/CD pipelines (GitHub Actions and Jenkins) for deployment automation and scheduling through Control-M.
  • Designing, developing, and testing high-volume, fault-tolerant, real-time data processing, analytics, and reporting pipelines to ensure high availability for enterprise applications, incorporating DevOps practices, Agile methodology, security, and observability frameworks (Prometheus, Datadog, Grafana, and OTEL) for security, reliability, and proactive monitoring.
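As a concrete, hedged illustration of the ETL/ELT work described in the list above, here is a minimal PySpark extract-cleanse-load sketch; all paths and column names (landing/accounts.csv, account_id, opened_on) are illustrative assumptions, not requirements from this posting.

    # Minimal ETL sketch: extract raw CSV, cleanse, load partitioned Parquet.
    # Paths and column names are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw landing-zone data (placeholder path).
    raw = spark.read.option("header", True).csv(
        "s3://example-bucket/landing/accounts.csv"
    )

    # Transform: trim the key, drop rows missing it, normalize the date column.
    clean = (
        raw.withColumn("account_id", F.trim("account_id"))
           .dropna(subset=["account_id"])
           .withColumn("opened_on", F.to_date("opened_on", "yyyy-MM-dd"))
    )

    # Load: write to the curated zone, partitioned for downstream queries.
    clean.write.mode("overwrite").partitionBy("opened_on").parquet(
        "s3://example-bucket/curated/accounts/"
    )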
Category: Information Technology
Most roles at Fidelity are Hybrid, requiring associates to work onsite every other week (all business days, M-F) in a Fidelity office. This does not apply to Remote or fully Onsite roles.
Please be advised that Fidelity's business is governed by the provisions of the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, the Investment Company Act of 1940, ERISA, numerous state laws governing securities, investment and retirement-related financial activities and the rules and regulations of numerous self-regulatory organizations, including FINRA, among others. Those laws and regulations may restrict Fidelity from hiring and/or associating with individuals with certain Criminal Histories.

Language skills

  • English
Notice to users

This posting comes from a TieTalent partner platform. Click "Apply now" to submit your application directly on their site.