
Guardian Life

Senior Data Engineer

Posted 2 Days Ago
Chennai, Tamil Nadu
Senior level

Job Description:

As a Senior Data Engineer, you will play a key role in this exciting journey. Your contributions will go beyond coding: you'll help bring ideas to life, transforming innovative concepts into tangible solutions that directly impact our business and customers.


You'll work in an innovative, fast-paced environment, collaborating with bright minds while enjoying a balance between strategic and hands-on work. We value continuous learning, and you will have the chance to expand your skillset, mastering new tools and technologies that advance our company's goals.

We look forward to welcoming a committed team player who thrives on creating value through innovative solutions and is eager to make a significant impact.

You will

  • Perform detailed analysis of raw data sources by applying business context, and collaborate with cross-functional teams to transform raw data into curated & certified data assets for ML and BI use cases. Create scalable, trusted data pipelines that generate curated data assets in a centralized data lake / data warehouse ecosystem (a minimal sketch follows this list).
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
  • Extract text data from a variety of sources, such as documents (Word, PDF, text files, JSON, etc.), logs, text notes stored in databases, and web pages via web scraping, to support the development of NLP / LLM solutions.
  • Collaborate with the data science and data engineering teams to build scalable, reproducible machine learning pipelines for inference.
  • Leverage public and private APIs to extract data and invoke functionality as required by the use cases.
  • Develop real-time data solutions by building new API endpoints or streaming frameworks.
  • Develop, test, and maintain robust tools, frameworks, and libraries that standardize and streamline the data & machine learning lifecycle.
  • Implement robust data drift and model monitoring frameworks for reuse across pipelines.
  • Collaborate with cross-functional partners across Data Science, Data Engineering, business units, and various IT teams.
  • Create and maintain effective documentation for projects and practices, ensuring transparency and effective team communication.
  • Stay up to date with the latest trends in modern data engineering, machine learning & AI.
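
As a concrete flavor of the pipeline work above, the following is a minimal PySpark sketch of a raw-to-curated Delta pipeline, assuming a Databricks-style lakehouse. The paths, table, and column names (claims, claim_id, curated.claims_certified) are illustrative assumptions, not actual Guardian schemas:

```python
# Minimal raw-to-curated pipeline sketch (illustrative names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-claims-pipeline").getOrCreate()

# Read a raw source table from the data lake.
raw = spark.read.format("delta").load("/lake/raw/claims")

# Apply business context: drop incomplete records, normalize types,
# and enforce entity-level uniqueness.
curated = (
    raw.filter(F.col("status").isNotNull())
       .withColumn("claim_date", F.to_date("claim_ts"))
       .dropDuplicates(["claim_id"])
)

# Publish as a curated, certified Delta table for ML and BI consumers.
(curated.write.format("delta")
        .mode("overwrite")
        .saveAsTable("curated.claims_certified"))
```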

You Have

  • Bachelor’s or master’s degree in Computer Science, Data Science, Engineering, or a related field, with 8+ years of experience.
  • 4+ years of experience working with Python, SQL, PySpark, and bash scripts. Proficient in the software development lifecycle and software engineering practices.
  • 3+ years of experience developing and maintaining robust data pipelines for both structured and unstructured data, used by data scientists to build ML models.
  • 3+ years of experience working with cloud data warehousing platforms (Redshift, Snowflake, Databricks SQL, or equivalent) and with distributed frameworks like Spark.
  • 2+ years of hands-on experience using the Databricks platform for data engineering. Detailed knowledge of Delta Lake, Databricks Workflows, Job Clusters, the Databricks CLI, Databricks Workspaces, etc.
  • Solid understanding of machine learning life cycle, data mining, and ETL techniques.
  • Familiarity with commonly used machine learning libraries (e.g., scikit-learn, XGBoost), including hands-on exposure to codebases that use them for model training and scoring.
  • Strong understanding of REST APIs, and experience using different types of APIs to extract data or invoke the functionality they expose.
  • Familiarity with Pythonic API development frameworks like Flask or FastAPI (an illustrative endpoint sketch follows this list). Experience using containerization frameworks like Docker and Kubernetes.
  • Hands-on experience building and maintaining tools and libraries used by multiple teams across an organization, e.g., common data engineering utility libraries, data quality (DQ) libraries, etc.
  • Proficient in understanding and incorporating software engineering principles into the design and development process.
  • Hands-on experience with CI/CD tools (e.g., Jenkins or equivalent), version control (GitHub, Bitbucket), and orchestration (Airflow, Prefect, or equivalent).
  • Excellent communication skills and the ability to work and collaborate with cross-functional teams across technology and business.
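
To illustrate the API-development expectation above, here is a minimal FastAPI sketch of a real-time scoring endpoint. It assumes a pre-trained scikit-learn model saved as model.joblib; the route and feature names are hypothetical:

```python
# Minimal real-time scoring endpoint sketch (illustrative, not an actual Guardian API).
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model artifact

class Features(BaseModel):
    age: float            # hypothetical input features
    tenure_months: float

@app.post("/score")
def score(features: Features) -> dict:
    # scikit-learn estimators expect a 2-D array of feature rows.
    X = [[features.age, features.tenure_months]]
    return {"prediction": float(model.predict(X)[0])}
```

Served locally with, for example, uvicorn app:app --reload, this is the kind of endpoint that "real-time data solutions" typically means in a Python stack.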

Life at Guardian: https://youtu.be/QEtkY6EkEuQ

Location:

This position can be based in any of the following locations:

Chennai, Gurgaon

Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday

Top Skills

PySpark
Python
SQL

Guardian Life Gurugram, Haryana, IND Office

Candor One Infospace, Tikri Sector 48, Gurugram, Haryana, India, 122018
