About
Roles & Responsibilities
You will build systems, core libraries, and frameworks that power our batch and streaming Data and ML applications. The services you build will integrate directly with LendingClub's products, opening the door to new features.
- Work with modern data technologies such as Hadoop, Spark, DBT, Dagster/Airflow, Atlan, and Trino; modern data platforms such as Databricks and Snowflake; and cloud technologies across the AWS stack
- Build data pipelines that transform raw data into canonical schemas representing business entities and publish them into the Data Lake
- Implement internal process improvements: automating manual processes, optimizing data delivery, reducing cloud costs, redesigning infrastructure for greater scalability, etc.
- Work with stakeholders including the Business, Product, Program, and Engineering teams to deliver required data on time, with high quality, at reasonable cost
- Implement processes and systems to monitor data quality, observability, governance, and lineage
- Support operations to manage the production environment and help resolve production issues with root-cause analysis (RCA)
- Write unit/integration tests, adopt test-driven development, contribute to the engineering wiki, and document designs and implementations
Requirements
- 4+ years of experience and a bachelor's degree in computer science or a related field, or equivalent work experience
- Working experience with distributed systems: Hadoop, Spark, Hive, Kafka, DBT, and Airflow/Dagster
- At least 2 years of production coding experience implementing data pipelines in Python
- Experience working with public cloud platforms, preferably AWS
- Experience working with Databricks and/or Snowflake
- Experience with Git, JIRA, Jenkins, and shell scripting
- Familiarity with Agile methodology, test-driven development, source control management, and test automation
- Experience supporting and working with cross-functional teams in a dynamic environment
- Excellent collaborative problem-solving and communication skills, and empathy for others
- A belief in simple, elegant solutions, with paramount importance placed on quality
- A track record of building fast, reliable, high-quality data pipelines
Desirable Skills
- Hadoop
- Hive
- Kafka
- Python
- Spark
- Databricks
Work Experience
- Data Engineer
Language Skills
- English
Note for Users
This job offer comes from a partner platform of TieTalent. Click "Jetzt Bewerben" ("Apply Now") to submit your application directly on their website.