Data Engineer
Syntasa is hiring a cleared Data Engineer to design scalable data pipelines, optimize Spark workloads, and deliver high-performance cloud solutions. You'll work across the major cloud providers to build cost-efficient, production-ready systems that power advanced analytics and AI initiatives.
Key Responsibilities
• Optimize large-scale data pipelines for ingestion, transformation, and processing.
• Develop robust, reusable code in Python and Spark to support distributed data workflows.
• Manage and tune Spark jobs on cloud-based platforms with Kubernetes orchestration.
• Implement scalable data solutions for storage and retrieval.
• Drive reliability, performance, and cost efficiency across cloud infrastructure.
Required Skills
• Strong Python experience
• Experience with automation of job monitoring, optimization, and debugging at scale
• Experience working with any of the major cloud providers
• Excellent communication skills with the ability to work in cross-functional teams
• Active Secret clearance; TS/SCI with CI Poly preferred
Desired Skills
• Apache Spark
• Background in building and maintaining CI/CD pipelines
• Knowledge of Kubernetes and containerization
• Experience building dashboards
• Experience with notebook-based tools such as Jupyter and Databricks
• Knowledge of Scala, SQL and R
Clearance: Secret required; TS/SCI with CI Poly preferred.