
N-iX

Middle Data Engineer

Reposted 2 Days Ago
Remote
Hiring Remotely in Ukraine
Mid level

N-iX is looking for a proactive Middle Data Engineer to join our vibrant team.

As a Middle Data Engineer, you will play a critical role in designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. The ideal candidate will have a robust background in cloud technologies and data architecture, along with a passion for solving complex data challenges.

Key Responsibilities:

  • Collaborate with cross-functional teams to understand data requirements; design, implement, and maintain scalable data pipelines in Palantir Foundry, ensuring end-to-end data integrity and optimized workflows.
  • Translate data requirements into robust, efficient cloud-based solutions; create data models, schemas, and flow diagrams to guide development.
  • Develop, optimize, and maintain reliable ETL/ELT processes that collect, process, and integrate data for timely, accurate delivery to business applications, while applying data governance and security best practices to safeguard sensitive information.
  • Monitor data pipeline performance, identify bottlenecks, and implement improvements that increase processing speed and reduce latency.
  • Troubleshoot and resolve pipeline issues to ensure continuous data availability and reliability in support of data-driven decision-making.
  • Stay current with emerging technologies and industry trends, incorporate innovative solutions into data engineering practice, and document and communicate technical solutions clearly.
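The pipeline responsibilities above boil down to extract-transform-load steps with data-integrity checks. A minimal sketch in plain Python (record fields such as `sample_id` and `assay_value` are invented for illustration; Foundry-specific APIs are deliberately omitted):

```python
# Minimal ETL sketch: extract raw records, clean them, and load the result.
# The record shape ("sample_id", "assay_value") is a hypothetical example.

def extract():
    """Pretend source: raw records as dicts (stand-in for an upstream dataset)."""
    return [
        {"sample_id": "S1", "assay_value": "12.5"},
        {"sample_id": "S2", "assay_value": None},   # incomplete row, dropped below
        {"sample_id": "S3", "assay_value": "7.25"},
    ]

def transform(rows):
    """Drop incomplete rows and cast values, preserving data integrity."""
    return [
        {"sample_id": r["sample_id"], "assay_value": float(r["assay_value"])}
        for r in rows
        if r["assay_value"] is not None
    ]

def load(rows, target):
    """Append cleaned rows to the target store (here, a plain list)."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)  # loaded == 2: S2 was dropped
```

In Foundry the same shape typically appears as a transform over input and output datasets rather than hand-rolled functions, but the extract/clean/load structure is the same.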

Tools and skills you will use in this role:

  • Palantir Foundry
  • Python
  • PySpark
  • SQL
  • TypeScript
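As a small illustration of two tools from the list working together (Python driving SQL), here is a sketch using Python's built-in `sqlite3` module; the `events` table and its columns are invented for the example:

```python
import sqlite3

# Toy example: load rows into a SQL table, then aggregate with GROUP BY.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (site TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events (site, amount) VALUES (?, ?)",
    [("kyiv", 10.0), ("lviv", 5.0), ("kyiv", 2.5)],
)
# Each result row is a (site, total) pair, so dict() builds a site -> total map.
totals = dict(
    conn.execute("SELECT site, SUM(amount) FROM events GROUP BY site ORDER BY site")
)
conn.close()
```

The same aggregation written against a PySpark DataFrame would use `groupBy("site").sum("amount")`; the SQL above is the portable common denominator.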

Requirements:

  • 3-4+ years of experience in data engineering, preferably within the pharmaceutical or life sciences industry;
  • Strong proficiency in Python and PySpark;
  • Proficiency with big data technologies (e.g., Apache Hadoop, Spark, Kafka, BigQuery, etc.);
  • Hands-on experience with cloud services (e.g., AWS Glue, Azure Data Factory, Google Cloud Dataflow);
  • Expertise in data modeling, data warehousing, and ETL/ELT concepts;
  • Hands-on experience with database systems (e.g., PostgreSQL, MySQL, NoSQL, etc.);
  • Proficiency in containerization technologies (e.g., Docker, Kubernetes);
  • Effective problem-solving and analytical skills, with strong communication and teamwork abilities;
  • Understanding of data security and privacy best practices;
  • Strong mathematical, statistical, and algorithmic skills.

Nice to have:

  • Certification in cloud platforms or related areas;
  • Experience with the Apache Lucene search engine and RESTful web APIs;
  • Familiarity with Veeva CRM, Reltio, SAP, and/or Palantir Foundry;
  • Knowledge of pharmaceutical industry regulations, such as data privacy laws;
  • Previous experience working with JavaScript and TypeScript.

We offer*:

  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

*not applicable for freelancers

Top Skills

Apache Hadoop
AWS Glue
Azure Data Factory
BigQuery
Docker
Google Cloud Dataflow
Kafka
Kubernetes
MySQL
NoSQL
Palantir Foundry
PostgreSQL
PySpark
Python
Spark
SQL
TypeScript
