IFS is hiring a

Data Engineer

Colombo, Sri Lanka
Full-Time

This role is all about hands-on technical prowess. You'll be in the driver's seat, working with autonomy, accountability, and technical brilliance. Your mission includes:

· Spotting high-value data opportunities within our product offerings and translating raw data into powerful features and reusable data assets.

· Serving as our data whisperer, guiding us towards the latest and greatest data technology and platform trends. You'll be the guru behind our data processing estimates and data platform evolution.

· Crafting and integrating data projects from the ground up, from framing problems and experimenting with new data sources and tools to the grand finale of data pipeline implementation and deployment. You'll ensure scalability and top-tier performance.

· Locking arms with ML Engineers, Data Scientists, Software Engineers, Solution Architects, and Product/Program Managers. Together, you will define, create, deploy, monitor, and document data pipelines to power advanced analytics and AI solutions.

 

· 1-5 years in data engineering, skilled in scalable solutions such as data lakes and graph and vector databases (e.g., ADLS, Neo4j, Elasticsearch).

· Proficient in building data pipelines across cloud and on-premises environments, using Azure and other technologies.

· Proven ability in orchestrating data workflows and Kubernetes clusters on AKS using Airflow, Kubeflow, Argo, or Dagster.

· Skilled with data ingestion tools such as Airbyte and Fivetran for diverse data sources.

· Experience in large-scale data processing with Spark or Dask.

· Strong in Python, Scala, C#, or Java, as well as cloud SDKs and APIs.

· AI/ML expertise for improving pipeline efficiency; familiarity with TensorFlow, PyTorch, AutoML, Python/R, and MLOps tools (MLflow, Kubeflow).

· Experience in DevOps and CI/CD automation with Bitbucket Pipelines, Azure DevOps, or GitHub.

· Ability to automate the deployment of data pipelines and applications using Bash, PowerShell, or the Azure CLI, as well as Terraform, Helm charts, etc.

· Proven ability in leveraging Azure AI Search or Elasticsearch for content analysis and indexing, with a focus on creating advanced RAG (Retrieval-Augmented Generation) services using LangChain.

· Proficiency in building IoT data pipelines, encompassing real-time data ingestion, transformation, security, scalability, and seamless integration with IoT platforms.

· Ability to design, develop, and monitor streaming data applications using Kafka and related technologies.

· Experience implementing and enforcing data governance policies and standards across the data platform.

· A results-driven attitude, a passion for innovation, and a self-starting, proactive nature. You're organized, capable of juggling multiple tasks, and your creativity knows no bounds. You're a strategic thinker, always on the hunt for the next big thing.

Interviews and selections are made continuously. If you are interested, apply as soon as possible.

As a step in our recruitment process, all final candidates will undergo a background check, to give us a better understanding of our future employees.

We respectfully decline all offers of recruitment and/or advertising assistance.
