hatch I.T. is partnering with Expression to find a Data Ops Engineer. See details below:
About The Role:
Expression is seeking a skilled Data Ops Engineer to join their team in Annapolis, MD in a hybrid role. As a Data Ops Engineer, you will play a crucial role in bridging the data and infrastructure teams. You will be at the forefront of ensuring Expression's data systems are reliable, efficient, and scalable, enabling seamless data flows that power data-driven decision-making for their clients. This is an opportunity to grow your career while working on challenging projects that make a real impact, as part of a collaborative environment that encourages innovation, learning, and professional development.
About the Company:
Founded in 1997 and headquartered in Washington, DC, Expression provides data fusion, data analytics, software engineering, information technology, and electromagnetic spectrum management solutions to the U.S. Department of Defense, Department of State, and the national security community. Expression’s “Perpetual Innovation” culture focuses on creating immediate and sustainable value for clients through agile delivery of tailored solutions built on constant client engagement. Expression was ranked #1 on Washington Technology’s 2018 Fast 50 list of the fastest-growing small-business government contractors and named a Top 20 Big Data Solutions Provider by CIO Review.
Responsibilities:
- Design, implement, and maintain robust data infrastructure, including databases, data warehouses, and data lakes, to support our rapidly expanding data landscape. You will lead initiatives that enhance our data capabilities and drive innovation.
- Develop, deploy, and test ETL pipelines for extracting, transforming, and loading data from various sources. You will ensure data quality and integrity, playing a key role in the accuracy of our analytical insights.
- Collaborate with data scientists and data engineers to integrate and test machine learning models within our data systems, ensuring smooth functionality and high performance. This aspect of the role provides an exciting opportunity to work at the intersection of data engineering and machine learning.
- Implement cutting-edge automation and orchestration tools to streamline data operations, minimize manual processes, and boost efficiency. Your contributions will significantly enhance our operational capabilities.
- Continuously assess and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness. You will identify and resolve bottlenecks, ensuring that our systems can handle growing demands.
- Establish proactive monitoring and alerting mechanisms to detect and address potential issues in real time. Your vigilance will help maintain high availability and reliability of our data systems.
- Work closely with cross-functional teams—including data scientists, analysts, and software engineers—to understand evolving data requirements. You will be instrumental in delivering tailored solutions that align with business objectives.
- Create comprehensive documentation of data infrastructure, pipelines, and processes. Help us promote a culture of continuous improvement by sharing knowledge and best practices within the team.
Requirements:
- Active Top Secret clearance with the ability to obtain a CI Polygraph.
- Security+ certification (or willingness to obtain certification within the first month).
- Associate's degree or higher in engineering, computer science, or a related field and 5+ years of experience as a DevOps/Cloud/Software engineer -OR- 8+ years of experience as a DevOps/Cloud/Software engineer.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong experience with relational databases (e.g., PostgreSQL, MySQL) and big data technologies (e.g., Hadoop, Spark).
- Experience with Elasticsearch and Cloud Search.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience with data pipeline orchestration tools (e.g., Airflow, Luigi) and workflow automation tools (e.g., Jenkins, GitLab CI/CD).
- Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) is a plus.
- Experience with data pipeline management.
- Proven experience maintaining production systems for external customers.
- Experience with open-source technologies such as Red Hat OpenShift and Linux/Unix.
- Experience engaging with data engineers to troubleshoot issues.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.