
Branch International

Data Platform Engineer

Remote
Hiring Remotely in India
Senior level
Branch Overview
Branch is a leading AI-based lending fintech with 50M+ downloads across India and Africa. We use alternative data to reach millions of people that are largely excluded from the financial sector.
Headquartered in Silicon Valley with operations in India, Nigeria and Kenya, Branch is a for-profit, socially conscious company built for scale and impact. Our mission-driven team—founded and led by the former CEO of Kiva.org—now spans 400+ employees globally. We’re backed by investors such as Andreessen Horowitz, Visa, and the IFC.
Job Overview
Branch launched in India in early 2019 and has seen rapid adoption and growth. In 2026 we are building out a full Engineering team in India to accelerate our success here. This team will work closely with our engineering team (based in the United States, Nigeria, and Kenya) to strengthen the capabilities of our existing product and build out new product lines for the company.
We are looking for a Senior Data Platform Engineer who will build and evolve the foundational data platform that powers analytics, underwriting/credit models, experimentation, and operational reporting. You will work closely with our Product, Data Science, and Business teams to design and maintain multiple technologies, including our API backend, credit scoring and underwriting systems, payments integrations, and operations tools. We face numerous interesting technical challenges, ranging from maintaining complex financial systems to accessing and processing creative data sources for our algorithmic credit model.

This is not a "pipeline-only" role. You will own platform primitives — ingestion patterns, data modeling standards, orchestration, governance, reliability, and developer experience — so that downstream teams can move faster with high-quality, trusted data.
As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making: We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.
Responsibilities
  • Design and build the data platform to support TB-scale datasets with strong reliability, observability, and cost efficiency.
  • Build standardized ingestion frameworks (batch + near-real-time where needed), including schema evolution, backfills, and data quality controls.
  • Own warehouse/lakehouse modeling patterns (dimensional, event-based, and domain-oriented models), with clear semantic layers for self-serve analytics.
  • Implement data governance & security: access controls, lineage, PII handling, retention policies, and auditability.
  • Partner with stakeholders to translate ambiguous needs into crisp platform roadmaps and phased delivery.
  • Scale our backend services to ever-growing levels of traffic and complexity.
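To give candidates a flavor of the ingestion and data-quality work described above, here is a minimal, purely illustrative sketch in Python. The schema, field names, and thresholds are invented for this example and do not reflect Branch's actual systems; a production framework would layer schema evolution, backfills, and observability on top of checks like these.

```python
# Hypothetical sketch: validate an incoming batch against an expected schema
# and enforce a simple data-quality rule (maximum null rate) before loading.
EXPECTED_SCHEMA = {"user_id": int, "amount": float, "event": str}

def validate_batch(records, max_null_rate=0.05):
    """Split records into (clean, rejected); raise if too many nulls."""
    clean, rejected = [], []
    nulls = 0
    for rec in records:
        # Reject records with missing required fields.
        if any(rec.get(col) is None for col in EXPECTED_SCHEMA):
            nulls += 1
            rejected.append(rec)
            continue
        # Reject records whose fields have the wrong type.
        if all(isinstance(rec[col], typ) for col, typ in EXPECTED_SCHEMA.items()):
            clean.append(rec)
        else:
            rejected.append(rec)
    if records and nulls / len(records) > max_null_rate:
        raise ValueError(f"null rate {nulls / len(records):.1%} exceeds threshold")
    return clean, rejected

batch = [
    {"user_id": 1, "amount": 250.0, "event": "disbursal"},
    {"user_id": 2, "amount": "bad", "event": "repayment"},  # wrong type -> rejected
]
clean, rejected = validate_batch(batch)
```

In a real platform, a check like this would typically run as an orchestrated task (e.g., in Airflow or Dagster) with rejected records routed to a quarantine table for inspection.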
Qualifications
  • 4-7 years building production-grade data/backend systems, with meaningful ownership of platform components end-to-end.
  • Experience leveraging modern AI tools for efficiency and effectiveness.
  • You have strong software development fundamentals, including computer science, distributed systems, data storage, and agile development methodologies.
  • You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
  • You are highly entrepreneurial and thrive in taking ownership of your own impact. You take the initiative to solve problems before they arise.
  • You are an excellent collaborator and communicator. You know that startups are a team sport. You listen to others, aren't afraid to speak your mind, and always try to ask the right questions.
  • You are excited by the prospect of working in a distributed team and company, working with teammates from all over the world.
  • Experience in designing, building, and maintaining scalable data architectures that integrate disparate data sources.
  • Strong fundamentals in distributed systems, data storage, and data modeling.
  • Hands-on experience with a modern stack similar to dbt + orchestration (Airflow/Dagster) + cloud warehouse (Snowflake/BigQuery/Redshift).
  • Strong programming ability in Python or Go, plus SQL fluency.
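As an illustration of the dimensional modeling and SQL fluency listed above, here is a tiny fact/dimension join, using Python's built-in `sqlite3` in place of a cloud warehouse. All table and column names (`fct_loans`, `dim_country`) are invented for this sketch; in practice a model like this would live in a dbt project against Snowflake, BigQuery, or Redshift.

```python
import sqlite3

# Minimal star-schema sketch: a loans fact table joined to a country
# dimension, aggregated the way a semantic layer might expose
# "total loan amount by country".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fct_loans (loan_id INTEGER, country_id INTEGER, amount REAL);
    INSERT INTO dim_country VALUES (1, 'India'), (2, 'Kenya');
    INSERT INTO fct_loans VALUES (10, 1, 200.0), (11, 1, 300.0), (12, 2, 150.0);
""")
rows = conn.execute("""
    SELECT d.name, SUM(f.amount) AS total_amount
    FROM fct_loans f JOIN dim_country d USING (country_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
# rows -> [('India', 500.0), ('Kenya', 150.0)]
```

The same shape — conformed dimensions joined to fact tables behind a named, documented query — is what makes self-serve analytics safe for non-engineering teams.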
Why Join Us
  • Competitive salary and equity package
  • Fast-paced, collaborative, and high-autonomy work culture
  • Hybrid work setup designed for flexibility and work-life balance
  • Fully paid group medical insurance and personal accident insurance
  • Generous paid time off, plus company-declared public holidays
  • Fully paid parental leave for fathers and non-birthing parents (12 weeks), in addition to 26 weeks of statutory maternity leave
  • Monthly WFH stipend, along with a one-time home office setup budget
  • $500 annual professional development budget
  • Quarterly social meet-ups and sponsored monthly team lunches
Branch International is an Equal Opportunity Employer. The company does not and will not discriminate in employment on any basis prohibited by applicable law. We're looking for more than just qualifications, so if you're unsure that you meet the criteria, please do not hesitate to apply!

Top Skills

Airflow
BigQuery
Dagster
dbt
Go
Python
Redshift
Snowflake
SQL


