The Senior Data Engineer will design and build scalable ETL/ELT solutions using Databricks and Azure. Responsibilities include optimizing data pipelines, implementing Lakehouse architecture, enforcing data quality checks, and providing technical guidance to the team.
Overview
We are looking for a Senior Data Engineer with strong expertise in Databricks, PySpark, Delta Lake, and cloud-based data pipelines. The ideal candidate will design and build scalable ETL/ELT solutions, implement Lakehouse/Medallion architectures, and integrate data from multiple internal and external systems. This role requires strong technical leadership and hands-on architecture experience.
Key Responsibilities
- Design, build, and optimize data ingestion and transformation pipelines using Databricks, PySpark, and Python.
- Implement Delta Lake and Medallion architecture for scalable enterprise data platforms (a minimal bronze-to-silver sketch follows this list).
- Develop ingestion frameworks for data from SFTP, REST APIs, SharePoint/Graph API, AWS, and Azure sources.
- Automate workflows using Databricks Workflows, ADF, Azure Functions, and CI/CD pipelines.
- Optimize Spark jobs for performance, reliability, and cost efficiency.
- Implement data validation, quality checks, and monitoring with automated alerts and retries.
- Design secure and governed datasets using Unity Catalog and cloud security best practices.
- Collaborate with analysts, business users, and cross-functional teams to deliver curated datasets for reporting and analytics.
- Provide technical leadership and guidance to junior team members.
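As a concrete illustration of the Medallion and data-quality responsibilities above, the sketch below shows a minimal bronze-to-silver step on Databricks. The landing path, table names, and the 5% quality threshold are illustrative assumptions, not this team's actual conventions.

```python
# Minimal sketch of a bronze -> silver Medallion step on Databricks.
# The landing path, table names, and 5% quality threshold below are
# illustrative assumptions, not prescribed conventions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON as-is, preserving source fidelity.
raw = spark.read.json("/mnt/landing/orders/")  # hypothetical landing path
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: enforce a schema contract and basic quality rules.
bronze = spark.read.table("bronze.orders")
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())        # drop rows missing the key
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
)

# Fail fast if the quality gate rejects too many rows (threshold is arbitrary).
if silver.count() < bronze.count() * 0.95:
    raise ValueError("More than 5% of bronze rows failed silver quality checks")

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```

In practice a step like this would run as a Databricks Workflows task, with the quality gate wired to the automated alerts and retries mentioned above.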
Required Skills
- 5–8+ years of experience in Data Engineering.
- Strong hands-on experience with Databricks, PySpark, Delta Lake, SQL, Python.
- Experience with Azure Data Lake, ADF, Azure Functions, or AWS equivalents (S3, Lambda).
- Experience integrating data from APIs, SFTP servers, vendor data providers, and cloud storage.
- Knowledge of ETL/ELT concepts, Lakehouse/Medallion architecture, and distributed processing.
- Strong experience with Git, Azure DevOps CI/CD, and YAML pipelines.
- Ability to optimize Spark workloads (partitioning, caching, Z-ordering, performance tuning); a short sketch follows this list.
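To make the tuning expectations concrete, the snippet below sketches the techniques named above (partitioning, Z-ordering, caching) against a Delta table. The table and column names are hypothetical, and `OPTIMIZE ... ZORDER BY` assumes a Databricks runtime rather than open-source Delta defaults.

```python
# Sketch of Spark/Delta tuning techniques named above; table and column
# names are hypothetical, and OPTIMIZE/ZORDER assumes a Databricks runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

df = spark.read.table("silver.orders")

# Partitioning: align file layout with the dominant filter column so
# downstream reads can prune whole partitions.
(df.repartition("order_date")
   .write.format("delta")
   .partitionBy("order_date")
   .mode("overwrite")
   .saveAsTable("gold.orders_by_date"))

# Z-ordering: co-locate related rows within files to speed up selective
# queries on a high-cardinality column.
spark.sql("OPTIMIZE gold.orders_by_date ZORDER BY (customer_id)")

# Caching: keep a hot intermediate in memory across repeated actions.
recent = spark.read.table("gold.orders_by_date").where("order_date >= '2024-01-01'")
recent.cache()
recent.count()  # materialize the cache before reuse
```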
Good to Have
- Exposure to Oil & Gas or trading analytics (SPARTA, KPLER, IIR, OPEC).
- Knowledge of Power BI or data visualization concepts.
- Familiarity with Terraform, Scala, or PostgreSQL.
- Experience with SharePoint development or .NET.
Top Skills
ADF
Azure Data Lake
Azure DevOps CI/CD
Azure Functions
Databricks
Delta Lake
Git
PySpark
Python
SQL
YAML