About
Syntasa is hiring a cleared Data Engineer to design scalable data pipelines, optimize Spark workloads, and deliver high-performance cloud solutions. You'll be working across all major cloud providers to build cost-efficient, production-ready systems that power advanced analytics and AI initiatives.
Key Responsibilities
• Optimize large-scale data pipelines for ingestion, transformation, and processing.
• Develop robust, reusable code in Python and Spark to support distributed data workflows (see the sketch after this list).
• Manage and tune Spark jobs on cloud-based platforms with Kubernetes orchestration.
• Implement scalable data solutions for storage and retrieval.
• Drive reliability, performance, and cost efficiency across cloud infrastructure.
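For orientation only, here is a minimal PySpark sketch of the kind of ingest-transform-write workflow referenced above. The paths, column names, and aggregation are assumptions made for the example, not details taken from this posting.

```python
# Illustrative only: paths, column names, and the aggregation are assumptions,
# not details from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Ingest: read raw event data (hypothetical location and schema).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: drop malformed rows and aggregate event counts per day.
daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy(F.to_date("event_ts").alias("event_date"))
    .count()
)

# Load: write partitioned output for downstream analytics.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)
```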
Required Skills
• Strong Python experience
• Experience with automation of job monitoring, optimization, and debugging at scale
• Experience working with any of the major cloud providers
• Excellent communication skills with the ability to work in cross-functional teams
• TS/SCI with CI Poly preferred
Desired Skills
• Apache Spark
• Background in building and maintaining CI/CD pipelines
• Knowledge of Kubernetes and containerization
• Experience building dashboards
• Experience with notebook-based tools such as Jupyter and Databricks
• Knowledge of Scala, SQL, and R
Clearance: Secret required; TS/SCI with CI Poly preferred.
Nice-to-have skills
- Jupyter
- Kubernetes
- Python
- R
- SQL
- Scala
- Databricks
Work experience
- Data Engineer
Languages
- English