Data Architect

Join a mission-driven engineering team at a fast-growing B2B SaaS company transforming behavioral health through innovative data solutions.
Why Blue Coding?

At Blue Coding, we specialize in hiring excellent developers and amazing people from all over Latin America and other parts of the world. For the past 11 years, we’ve helped cutting-edge companies in the United States and Canada build great development teams and develop great products. Large multinationals, digital agencies, SaaS providers, and software consulting firms are just a few of our clients. Our team of over 150 engineers, project managers, QA specialists, UX/UI designers, and others is distributed across more than 10 countries in the Americas. We are a fully remote company working with a wide array of technologies, and we have expertise in every stage of the software development process. Our team is highly connected, united, and culturally diverse, and our collaborators are involved in initiatives around the world, from wildlife preservation to volunteering at local charities. We stand for honesty, fairness, respect, efficiency, hard work, and cooperation.

This position is open exclusively to candidates based in LATAM countries.

What are we looking for?

We are looking for an experienced Data Architect to work with one of our US clients, a fast-growing B2B SaaS company dedicated to transforming behavioral health and whole-person care through technology. You will join a mission-driven engineering team that builds secure, scalable, and high-performance data platforms supporting critical healthcare workflows.

In this role, you will design, build, and operate robust data integrations while also playing a key role in evaluating and proposing the right tools, platforms, and frameworks for each integration scenario. You will collaborate closely with the Senior Data Architect, participating in architectural discussions and jointly defining the data integration strategy, standards, and best practices.

If you are fluent in English, proactive, detail-oriented, and passionate about building high-quality data solutions, this role offers the opportunity to work remotely with international teams while contributing to meaningful digital health innovation.

Here are some of the exciting day-to-day challenges you will face in this role:
  • Design and implement end-to-end data integrations from files and source systems into analytical and operational data stores.
  • Evaluate and recommend tooling, frameworks, and platforms for ingestion, transformation, and orchestration.
  • Set up, build, and maintain ELT/ETL pipelines primarily using AWS services, with optional use of Microsoft tooling when appropriate.
  • Develop ingestion patterns for structured and semi-structured data (JSON, CSV, Parquet, Avro), including CDC and streaming integrations.
  • Create and manage data models, staging layers, and integration structures that support analytics and downstream applications.
  • Implement orchestration, scheduling, retries, and error handling to ensure scalable, stable, and cost-efficient pipelines (a minimal retry sketch follows this list).
  • Partner with stakeholders to define data requirements, SLAs, and integration contracts.
  • Design and implement monitoring, alerting, and observability, including metrics, logging, tracing, lineage, and data-quality controls.
  • Apply data governance practices: metadata management, access controls, retention, and auditability.
  • Support CI/CD pipelines for data workloads and contribute to Infrastructure-as-Code when relevant.
  • Troubleshoot production issues and drive root-cause analyses with clear follow-through.
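To give a concrete flavor of the orchestration and error-handling work above, here is a minimal Python sketch of a retry wrapper around a pipeline step, using exponential backoff with jitter. It is purely illustrative: the ingest_orders_file step, the parameter values, and the logging setup are hypothetical stand-ins, not the client's actual code or tooling.

    import logging
    import random
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline")

    def run_with_retries(step, max_attempts=4, base_delay=2.0):
        # Retry a pipeline step on transient failure, using exponential
        # backoff plus jitter; re-raise the error on the final attempt.
        for attempt in range(1, max_attempts + 1):
            try:
                return step()
            except Exception as exc:
                if attempt == max_attempts:
                    log.error("step failed after %d attempts: %s", attempt, exc)
                    raise
                delay = base_delay * 2 ** (attempt - 1) + random.uniform(0, 1)
                log.warning("attempt %d failed (%s); retrying in %.1fs",
                            attempt, exc, delay)
                time.sleep(delay)

    def ingest_orders_file():
        # Hypothetical step: pull a source file and load it into staging.
        log.info("ingesting orders file")

    if __name__ == "__main__":
        run_with_retries(ingest_orders_file)

In production, a policy like this would often live in the orchestrator instead (for example, Step Functions retry configuration) rather than in hand-rolled code.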
You will shine if you have:
  • 3+ years of experience building and supporting production data pipelines in a Data Warehouse environment.
  • Deep understanding of Data Warehouse modeling: facts/dimensions, SCD Type 2 (sketched after this list), relational modeling, and performance tuning.
  • Strong experience with AWS data services, ideally including S3, IAM, Lambda, Glue, Athena, Redshift, RDS/DMS, Kinesis, Step Functions, and CloudWatch.
  • Proficiency in SQL and a strong understanding of analytical and relational database concepts.
  • Knowledge of modern storage formats and patterns (delta tables, data lakes).
  • Proficiency in at least one programming language used for data engineering (Python preferred; C# also welcome).
  • Practical knowledge of batch, streaming, and CDC integration patterns.
  • Comfort with Git-based workflows, automated deployments, and CI/CD.
  • Experience building monitoring and observability for pipelines (metrics, alerts, logging, lineage, quality checks).
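Because warehouse modeling is central to the role, here is a tiny in-memory Python illustration of the SCD Type 2 pattern mentioned above. In practice this logic would live in warehouse SQL (for example, a MERGE statement); every name, key, and attribute below is a hypothetical example.

    from datetime import date

    def scd2_apply(dimension, incoming, effective=None):
        # Apply SCD Type 2 changes to a dimension held as a list of dicts:
        # expire the current version when tracked attributes change, then
        # append a new current version; brand-new keys are simply inserted.
        effective = effective or date.today()
        current = {r["key"]: r for r in dimension if r["is_current"]}
        for key, attrs in incoming.items():
            row = current.get(key)
            if row is not None and row["attrs"] == attrs:
                continue  # no change: keep the existing current version
            if row is not None:
                row["valid_to"] = effective   # close out the old version
                row["is_current"] = False
            dimension.append({"key": key, "attrs": attrs,
                              "valid_from": effective, "valid_to": None,
                              "is_current": True})
        return dimension

    # Example: a patient dimension row whose clinic assignment changes.
    dim = [{"key": "p1", "attrs": {"clinic": "north"},
            "valid_from": date(2024, 1, 1), "valid_to": None,
            "is_current": True}]
    scd2_apply(dim, {"p1": {"clinic": "south"}})
    assert len(dim) == 2 and dim[1]["is_current"]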
It doesn't hurt if you also have:
  • Experience with Microsoft data platforms (Azure Data Factory, SQL Server, Synapse, Microsoft Fabric Lakehouse/Warehouse, Azure SQL).
  • Experience with Databricks.
  • Familiarity with event-driven architectures (SNS/SQS, EventBridge, Kafka equivalents).
  • Understanding of security and compliance for sensitive data (PII/PHI).
  • Experience with data catalog, lineage, or observability tools.
  • Infrastructure-as-Code experience (Terraform, CloudFormation).
  • Familiarity with BI/reporting tools such as Power BI or Tableau.
What we offer:
  • Salary in USD
  • 100% Remote
  • Integration into a high-performing international engineering team
  • Opportunity to shape data integration standards and contribute to architectural decisions
  • Work on meaningful technology that impacts real people’s well-being
Ready to learn more? Apply below!