
Enable Data Incorporated

Data Scientist (3 to 7 Years)

Posted 9 Days Ago
Remote
Hiring Remotely in India
Mid level
The Data Scientist role involves predictive analytics, NLP, and deep learning, with a focus on feature engineering for structured and text data.

Job Description – Data Scientist (3–7 years)

Location: Remote (India)
Role Type: Full‑time
Category: Data Science – Structured Data / Text Data (NLP & GenAI)

About the Role

We are seeking a highly skilled Data Scientist (3–7 years of experience) to join our team and work across two major data science domains:

  1. Structured Data (80–90%) – Predictive analytics, forecasting, cost estimation, likelihood modeling, and batch‑oriented machine learning pipelines.
  2. Text / Unstructured Data (NLP & GenAI) – Building low‑latency real‑time systems using deep learning, LLMs, prompt engineering, and agentic AI frameworks.

This role requires strong expertise in Big Data processing, modern ML tools, and the ability to build scalable, production-ready data science solutions.

Key Responsibilities

Structured Data – Machine Learning & Analytics

  • Build, deploy, and optimize ML models for predictive analytics, forecasting, classification, and regression.
  • Perform large-scale feature engineering using PySpark and Big Data tools.
  • Work on batch pipelines, model versioning, and experiment tracking.
  • Develop cost estimation and risk/likelihood models using statistical and ML techniques.

Text Data / NLP / GenAI

  • Build NLP pipelines using deep learning frameworks such as PyTorch, TensorFlow, or similar.
  • Develop real‑time, low‑latency inference systems for text classification, embeddings, semantic search, summarization, and retrieval.
  • Create prompts, context graphs, and agentic workflows for LLM-based systems.
  • Apply knowledge of prompt engineering, context engineering, and autonomous agent frameworks to production systems.

Core Data Science Engineering & MLOps

  • Work in Databricks for ETL, feature engineering, ML training, and orchestration.
  • Use Azure services for model deployment, data pipelines, and infrastructure.
  • Collaborate using Git-based workflows; leverage tools like GitHub Copilot, Claude Code, etc.
  • Implement model monitoring, observability, drift detection, and performance tracking.

Required Skills & Experience

✅ Core Skills

  • Strong hands-on experience with Databricks (Delta Lake, MLflow, Job Orchestration).
  • Excellent PySpark skills for large-scale distributed data processing.
  • Proficiency in Azure cloud services (ADF, Azure ML, AKS, Databricks on Azure).
  • Strong understanding of ML algorithms, statistical methods, and data analysis.
  • Experience with deep learning frameworks:
    • PyTorch
    • TensorFlow
    • Transformers (HuggingFace)
  • Experience with model monitoring and ML observability.
  • Ability to write clean, optimized code and leverage AI code assistants.

✅ NLP / GenAI Specific Skills

  • Prompt engineering (task prompts, chain of thought, tool calling, retrieval prompts).
  • Context engineering (retrieval pipelines, RAG, memory management, context structuring).
  • Knowledge of LLM-based agentic frameworks (LangChain, Semantic Kernel, CrewAI, AutoGen, etc.).
  • Experience with vector databases and embedding models is a plus.

Good to Have Skills

  • Experience with containerization (Docker, Kubernetes, AKS).
  • Experience deploying models to production (REST APIs, real-time endpoints).
  • Knowledge of streaming technologies (Kafka, EventHub, Spark Streaming).
  • Understanding of CI/CD for ML (Azure DevOps / GitHub Actions).

Who You Are

  • A problem solver who is comfortable working with both structured and unstructured data.
  • Someone who enjoys using modern AI tools to accelerate development.
  • A data scientist who writes clean, production-grade code.
  • A collaborator who thrives in cross-functional teams and fast-paced environments.

Top Skills

Azure
Databricks
PySpark
PyTorch
TensorFlow


