As a Data Engineer at InfyStrat, you'll design and maintain data architecture, ensuring data accessibility and reliability while developing data pipelines. You'll collaborate with teams to create scalable data solutions for business intelligence.
At InfyStrat, we are on the lookout for an innovative and skilled Data Engineer to join our dynamic team. In this position, you will be responsible for building and maintaining our data architecture, ensuring that our data is accessible, reliable, and timely. You will leverage your expertise to design robust data pipelines and optimize data flows to support analytical and operational needs across the organization. Collaborating with data analysts, data scientists, and other stakeholders, you will gather requirements to create scalable data solutions that drive business intelligence and insights. If you have a passion for data and enjoy tackling complex challenges, we want to hear from you!
Core Responsibilities
- Design, develop, and maintain data pipelines to support ETL processes.
- Ensure data quality and integrity across various data sources and systems.
- Collaborate with cross-functional teams to identify data requirements and create data models.
- Utilize big data technologies for large-scale data processing.
- Monitor and troubleshoot performance issues related to data processes.
- Stay updated on the latest technologies and best practices within the data engineering field.
Requirements
- Degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering or related roles.
- Expertise in SQL and experience with multiple data storage solutions (NoSQL, relational databases).
- Familiarity with cloud computing platforms (AWS, Azure, Google Cloud).
- Experience with data processing frameworks such as Apache Kafka, Spark, or similar.
- Programming experience in Python, Java, or Scala.
- Strong analytical skills and the ability to work with complex data sets.
- Excellent communication and teamwork skills.
Top Skills
Apache Kafka
AWS
Azure
GCP
Java
NoSQL
Python
Relational Databases
Scala
Spark
SQL