Data Engineer, Data Innovation and Tools Rationalization
U.S. Bank
- United States
About
At U.S. Bank, we're on a journey to do our best: helping the customers and businesses we serve to make better and smarter financial decisions, and enabling the communities we support to grow and succeed. We believe it takes all of us to bring our shared ambition to life, and each person is unique in their potential. A career with U.S. Bank gives you a wide, ever-growing range of opportunities to discover what makes you thrive at every stage of your career. Try new things, learn new skills, and discover what you excel at, all from Day One.

We are seeking a skilled and motivated data engineer to join the Data Innovation & Tools Rationalization team within the Enterprise Data Office. This role will contribute to the modernization of enterprise data capabilities by designing, building, and supporting scalable data product solutions aligned with the Enterprise Data Strategy. The data engineer will work with modern data platforms, cloud technologies, and analytics tools to help improve data accessibility, reliability, and reuse across the organization.

About the Data Innovation and Tools Rationalization Team
We are the innovation and tooling engine for the Enterprise Data Office, focused on reusable patterns, accelerators, and tool rationalization that reduce friction and speed up delivery and adoption of governed data products.

Vision | Make data products and AI capabilities easier to build, safer to deploy, and faster to adopt across the bank.
Mission | Deliver reusable data product patterns, accelerators, and clear integration pathways that help teams ship data products faster while enabling safer AI adoption and reducing technology sprawl through disciplined tool evaluation and rationalization.
Values | In addition to U.S. Bank core values, we prioritize:
- Head High: We build with excellence. Our work is intentional, high-quality, and designed to last, so we are always proud of what we deliver and comfortable standing behind it.
- Accountability Over Activity: We take end-to-end accountability, from problem framing through delivery, adoption, and outcomes.
- Strategic Intelligence: We think in systems, anticipate downstream impact, and collaborate to win as a pod, not as individuals.
- Relentless Craft: We are passionate about the work we do. Our drive comes from curiosity, purpose, and a genuine love of building impactful solutions.

About the Role
We are seeking a highly skilled and forward-thinking Data Engineer to join the Data Innovation and Tools Rationalization team. This role is focused on building and scaling next-generation data product engineering patterns that enable faster, more consistent, and more reliable delivery across the enterprise. The ideal candidate will combine stand-out hands-on engineering skills with a product mindset. You will design reusable frameworks, define engineering standards, evaluate emerging technologies, and partner closely with execution and enablement teams to operationalize modern data patterns at scale. This role plays a critical part in accelerating platform adoption, improving developer productivity, and reducing fragmentation across data and analytics solutions.

Key Activities
Key responsibilities include:
- Designing and building next-generation data product engineering patterns on modern cloud platforms, including Snowflake and Databricks.
- Developing reusable engineering assets such as frameworks, build kits, CI/CD templates, and performance optimization approaches.
- Partnering with Enablement and Execution teams to operationalize and scale data engineering patterns across delivery teams.
- Evaluating, testing, and experimenting with emerging data and AI tools, platforms, and services.
- Participating in technical proofs of concept, comparing alternative solutions, and making data-driven recommendations for platform and tool rationalization.
- Documenting project outcomes, transition plans, adoption guides, and solution usage scripts to support enterprise rollout.
- Supporting platform modernization efforts through hands-on development, tuning, and optimization.
- Collaborating with data product owners, architects, and platform teams to align engineering solutions with the enterprise data strategy.

This role requires effective communication and collaboration skills, along with the ability to work effectively with stakeholders across data, technology, and product owner teams. The successful candidate will bring solid technical fluency across modern data platforms, analytics tools, and cloud capabilities, and will contribute to advancing the Enterprise Data Strategy through thoughtful engineering, innovation, and continuous improvement.

Core Competencies
Knowledge:
- Deep understanding of financial institution and banking concepts.
- Strong understanding of modern data engineering concepts, including batch and streaming data processing, data modeling, and data product design.
- Experience building scalable data solutions on cloud-based data platforms.
- Familiarity with enterprise data ecosystems and shared platform models.
- Ability to assess tradeoffs across tools, architectures, and implementation approaches.
- Strong analytical and problem-solving skills with a focus on root cause analysis and optimization.

Technical Competence:
- Proficiency with big data technologies (Spark, Airflow, Hadoop, Hive).
- Hands-on experience with Snowflake and Databricks, including performance tuning.
- Proficiency in SQL and Python, with experience building production-grade data pipelines.
- Experience with CI/CD pipelines and infrastructure-as-code patterns for data platforms.
- Familiarity with orchestration and workflow management tools.
- Experience developing reusable libraries, templates, or internal frameworks.
- Exposure to cloud platforms such as Azure, AWS, or GCP and cloud-native data services.
- Understanding of data quality, observability, and monitoring practices.
- Familiarity with AI and ML tooling as it relates to data engineering and platform enablement is a plus.

Basic Qualifications
- Bachelor's degree in a quantitative field such as computer science, data science, mathematics, or statistics.
- 5 to 7 years of statistical and/or analytical experience.

Preferred Skills
- Typically 6+ years of experience in data engineering, analytics engineering, or platform engineering roles.
- Demonstrated experience building and supporting data solutions in a cloud environment.
- Proven track record of designing reusable components or standards adopted by multiple teams.
- Experience working in regulated or large-scale enterprise environments preferred.
- Strong organizational skills with the ability to work on multiple initiatives concurrently.
- Deep understanding of banking and financial institution concepts.
- Knowledge of banking regulation and requirements for regulatory reporting.
- Strong analytical, organizational, and problem-solving skills.
- Hands-on experience with programming languages such as Python and SQL.
- Proficiency with big data technologies including Hadoop, Hive, and Spark.
- Expertise in visual analytics tools such as Power BI, Tableau, or equivalent platforms.
- Experience with Power Platform tools such as Power Automate and Power Apps.
- Proven track record in automating and optimizing ETL processes at scale.
- Hands-on experience with cloud platforms (e.g., Azure, AWS, GCP) and cloud-native data services.
- Excellent written and verbal communication skills for documenting technical processes and engaging with cross-functional teams.
Language Skills
- English