About
Job Title: Data Engineer (Python, Spark, Cloud)
Location: Iselin, NJ or Charlotte, NC
Pay: $90,000 per year (DOE)
Term: Contract
Work Authorization: US citizens only (may require a security clearance in the future)
Job Summary:
We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting.
Key Responsibilities:
- Build and maintain ETL pipelines using Python and PySpark for batch and streaming data.
- Develop data ingestion frameworks for structured/unstructured sources.
- Implement data workflows using Airflow and integrate with Kafka for real-time processing.
- Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift).
- Optimize SQL queries and ensure data quality and governance.
- Collaborate with data architects and analysts to deliver reliable data solutions.
Required Skills:
- Python (3.x) – scripting, API development, automation.
- Big Data: Spark/PySpark, Hadoop ecosystem.
- Streaming: Kafka.
- SQL: Oracle, Teradata, or SQL Server.
- Cloud: Azure or GCP (BigQuery, Dataflow).
- Containers: Kubernetes/OpenShift.
- CI/CD: GitHub, Jenkins.
Preferred Skills:
- Airflow for orchestration.
- ETL tools (Informatica, Talend).
- Financial services experience.
Education & Experience:
- Bachelor's degree in Computer Science or a related field.
- 3–5 years of experience in data engineering and Python development.
Keywords for Visibility:
Python, PySpark, Spark, Hadoop, Kafka, Airflow, Azure, GCP, Kubernetes, CI/CD, ETL, Data Lake, Big Data, Cloud Data Engineering.
Reply to this posting with your profile and send it to
Flexible work from home options available.
Language Skills
- English
Note for Users
This job posting comes from a TieTalent partner platform. Click "Jetzt Bewerben" (Apply Now) to submit your application directly on their website.