
HighLevel

Data Engineer

Posted 4 Days Ago
Remote
Hiring Remotely in Delhi, Connaught Place, New Delhi, Delhi
Mid level
Responsible for designing and maintaining data infrastructure, developing backend systems for real-time processing, and ensuring data reliability and performance.

About HighLevel:

HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.


Our Website - https://www.gohighlevel.com/

YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g

Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/


Our Customers:

HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.


Scale at HighLevel:

We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.


About the Role:

We are seeking a talented and motivated Data Engineer to design, develop, and maintain our data infrastructure, and to build backend systems and solutions that support real-time data processing, large-scale event-driven architectures, and integrations with a variety of data systems. The role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organization.

Responsibilities:

  • Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns
  • Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures (see the consumer sketch after this list)
  • Data Pipeline Development: Design, develop and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources
  • Data Storage & Warehousing: Build and optimize databases, data lakes and data warehouses (e.g. Snowflake) for high-performance querying
  • Data Integration: Work with APIs, batch and streaming data sources to ingest and transform data
  • Performance Optimization: Optimize queries, indexing and partitioning for efficient data retrieval
  • Collaboration: Work with data analysts, data scientists, software developers and product teams to understand requirements and deliver scalable solutions
  • Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
  • Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them
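
To ground the event-driven responsibilities above, here is a minimal sketch of the kind of streaming consumer this role would build, using the Google Cloud Pub/Sub Python client. It is purely illustrative: the project ID, subscription name, and event payload shape are hypothetical placeholders, not HighLevel's actual systems.

import json

from google.cloud import pubsub_v1

PROJECT_ID = "example-project"        # hypothetical placeholder
SUBSCRIPTION_ID = "app-events-sub"    # hypothetical placeholder

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def handle_event(message: pubsub_v1.subscriber.message.Message) -> None:
    # Decode one event, hand it to downstream processing, then acknowledge.
    event = json.loads(message.data.decode("utf-8"))
    # ... transform, route, or persist the event here ...
    print(f"processed event {event.get('id')}")
    # Ack only after successful processing so failed messages are redelivered.
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=handle_event)

with subscriber:
    try:
        streaming_pull_future.result()  # block and consume until interrupted
    except KeyboardInterrupt:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # wait for shutdown to complete

At the volumes described above (120 billion events monthly), the interesting work is what replaces the placeholder body: batching, idempotent writes, and dead-lettering of malformed events.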

Requirements:

  • 3+ years of experience in software development
  • Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
  • Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
  • Software Engineering Experience: Demonstrated experience (SDE II/III level) in designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala or Java)
  • ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies (an illustrative Airflow sketch follows this list)
  • Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
  • Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch)
  • Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
  • Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
  • Experience with data visualization tools (e.g. Superset, Tableau), Terraform and other infrastructure-as-code (IaC) tooling, ML/AI data pipelines, and DevOps practices is a plus
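
As a rough illustration of the orchestration experience listed above, the following is a minimal Airflow DAG sketch wiring an extract-transform-load sequence together. The DAG id, schedule, and task bodies are hypothetical, and the schedule argument assumes Airflow 2.4 or later (earlier releases call it schedule_interval).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    ...  # pull raw records from a source API, bucket, or message queue

def transform() -> None:
    ...  # clean, validate, and reshape the records

def load() -> None:
    ...  # write the results to the warehouse (e.g. Snowflake)

with DAG(
    dag_id="example_daily_etl",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three stages strictly in order.
    extract_task >> transform_task >> load_task

In practice the transform step is often delegated to dbt or Spark, with Airflow handling only scheduling, retries, and alerting, which is why the posting lists those tools side by side.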

EEO Statement:

The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.


#LI-Remote #LI-NJ1

Top Skills

Airflow
Spark
AWS
CI/CD
dbt
Docker
Firestore
GCP
Git
Google Cloud Pub/Sub
JavaScript
Jenkins
Kafka
Kubernetes
MongoDB
MySQL
Postgres
Python
RabbitMQ
Snowflake

