Lead development of big data solutions, focusing on text data processing, data pipelines, and cloud services, primarily in AWS and GCP.
Company Description
At DemandMatrix, our vision is to disrupt the $100 billion sales and marketing intelligence industry by using domain knowledge, machine learning, and AI. Fortune 100 companies like Microsoft, Google, Adobe, Amazon, and IBM trust us to identify their next customer.
Job Description
What will you do?
- To help us get to the next level, we are looking to onboard a hands-on SME in leveraging big data technology to solve our most complex data problems. You will spend almost half of your time on hands-on coding.
- The work involves large-scale text data processing, event-driven data pipelines, in-memory computation, and optimization spanning CPU cores, network I/O, and disk I/O.
- You will be using cloud native services in AWS and GCP.
Who Are You?
- A solid grounding in computer engineering, Unix, data structures, and algorithms will enable you to meet this challenge.
- You have designed and built multiple big data modules and data pipelines that process large volumes of data.
- You are genuinely excited about technology and have built projects from scratch.
Must have:
- 7+ years of hands-on experience in Software Development with a focus on big data and large data pipelines.
- Minimum 3 years of experience building services and pipelines in Python.
- Expertise with a variety of data processing systems, including streaming, event-driven, and batch (Spark, Hadoop/MapReduce).
- Working knowledge of at least one NoSQL store, such as MongoDB, Elasticsearch, or HBase.
- Understanding of data models, sharding, and data-placement strategies for distributed data stores in large-scale, high-throughput, high-availability environments, and of their effect on unstructured text data processing.
- Experience running scalable, highly available systems on AWS or GCP.
Good to have:
- Experience with Docker / Kubernetes
- Exposure to CI/CD
- Knowledge of web crawling/scraping
Perks:
- Fully remote work (entirely work from home)
- Birthday leave
Top Skills
AWS
Docker
Elasticsearch
GCP
Hadoop
Hbase
Kubernetes
MongoDB
Python
Spark