Azure Databricks Engineer
This is a rare opportunity to apply serious data engineering in a domain where latency, correctness, and reliability carry direct commercial weight.

Requirements

- 6+ years of data engineering in production environments
- Python expertise: idiomatic, well-tested, production-grade code, not notebook scripts
- ETL/ELT pipeline design and implementation at scale; orchestration with Airflow, Prefect, or equivalent; a reliability-first mindset, including backfill, retry, and exactly-once semantics
- Azure data platform: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake Storage; infrastructure as code for data workloads (Terraform or Bicep)
- Databricks: Delta Lake, Unity Catalog, job-cluster vs interactive-cluster trade-offs, cost-aware compute management, Spark job optimisation
- Relational databases: PostgreSQL at production scale, covering query optimisation, indexing strategies, table partitioning, replication, and schema design for both OLTP and analytical workloads
- MongoDB: document modelling, aggregation pipelines, indexing strategy, replica sets; clear judgment on when document vs relational storage is the right architectural call
- Containerisation: Docker and Kubernetes-based deployment of data workloads; reproducible, environment-agnostic data infrastructure
- Data modelling for analytical workloads: dimensional modelling, data vault, or equivalent; schema evolution, slowly changing dimensions, and downstream impact analysis
- Stream and batch processing patterns; late ..... full job details .....