Rackspace is hiring a

Sr. Data Engineering Delivery Architect (Azure Data Services)

Full-Time
Remote
In this role, you will help grow the Rackspace Cloud data practice. You will be the expert in your region and support the delivery of our data projects on Azure. Our Data Architects are experienced technologists with technical depth and breadth, along with strong interpersonal skills. You will work directly with customers and our team to enable innovation through continuous, hands-on deployment across technology stacks, serve as the Data SME for our customers, and be the focal touchpoint in engagements.
 
As the expert, you'll present to customers at events and promote our capabilities on Azure. With a global Azure data team of more than 250 people, this is also an opportunity to mentor and shape the data practice during our growth phase.
 
More generally, Data Architects are the cornerstone of our delivery success, overseeing our most strategic accounts. They are empowered to make key delivery decisions and work closely with technical resources as well as our Engagement/Project Managers to ensure our customers experience successful outcomes.
 
If you get a thrill from working with cutting-edge technology and love to help solve customers’ problems, we’d love to hear from you. 

In a highly collaborative environment, you will:

  • Lead, define, and implement end-to-end modern data platforms on the Azure public cloud using first-party services in support of analytics and AI use cases 
  • Design and drive reusable assets, growing analytics and DevOps capability 
  • Architect and deliver modern data platforms and advanced analytics solutions for clients 
  • Collaborate with enterprise architects, data architects, ETL developers & engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities 
  • Address aspects such as data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations 
  • Be the technical liaison between customers and engineering teams 
  • Be a data evangelist by educating a variety of customers on the value of cloud and data services 
  • Present recommendations to clients using both written deliverables and face-to-face discussions 
  • Develop deep relationships with clients and act as the single point of contact for client executives on data engagements 

Key Requirements:

  • 10+ years' experience leading engagements, from design through implementation, delivering creative data solutions that leverage the latest Spark-based modern data platforms on public cloud
  • At least 5 full-lifecycle data platform deployments on Azure using first-party services
  • Extensive experience with the following technologies: Azure Data Factory, Azure Synapse Analytics, and Azure Databricks
  • Strong knowledge of Spark, SQL, data modeling, and data lakehouse concepts
  • Strong programming/scripting experience in Python and Scala
  • Strong experience with Data & AI tools such as Azure Storage (Blob), Stream Analytics, Cosmos DB, SQL DW, Azure Databricks, Azure Machine Learning, Azure Data Catalog, Azure Data Factory (ADF), Azure SQL, PolyBase, and Delta, as well as engineering pipeline design, Azure technical architecture, and data lake design
  • 8+ years' experience architecting solutions for optimal extraction, transformation, and loading of data from a wide variety of traditional and non-traditional sources (structured, unstructured, and semi-structured), using SQL, NoSQL, and data pipelines for real-time, streaming, batch, and on-demand workloads 
  • 5+ years' experience with analytics/data management strategy formulation, architectural blueprinting, and effort estimation for analytics initiatives 
  • 5+ years' experience working in cloud or complex multi-server environments; extensive experience with Azure is required
  • Ability to simplify complex technical concepts into easy-to-understand, non-technical language in order to communicate and collaborate effectively with executives and business stakeholders 
  • Experience with Agile development methods in data-oriented projects 
  • Experience with dashboarding and reporting tools used in the industry (Tableau, Power BI, Qlik, etc.) 
  • Certifications in architecture, data engineering, and development from Azure (preferred), AWS, or GCP
  • Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket

The following information is required by the Colorado Equal Pay Transparency Act and the New York City Pay Transparency Act. This applies only to individuals working in the state of Colorado or in New York City. 

The anticipated starting pay range for this role is $143,700 - $204,000 for Colorado applicants and $167,400 - $223,200 for New York City applicants. Based on eligibility, compensation for the role may include variable compensation in the form of bonus, commissions, or other discretionary payments.

These discretionary payments are based on company and/or individual performance, and may change at any time.

Actual compensation is influenced by a wide array of factors including but not limited to skill set, level of experience, licenses and certifications, and specific work location. 

Information on benefits offered is here.

#LI-VM1
#LI-Remote
#LI-USA
#rackspace

