Experience: 6 - 8 Years
Location: Gurugram
Data Engineer with strong expertise in PySpark, SQL, Apache Airflow, and Kafka. The candidate will be responsible for building, optimizing, and managing data pipelines, ensuring efficient data movement and transformation across platforms.
This role requires hands-on experience with PySpark for big data processing and with orchestrating workflows in cloud or on-premises environments.
Cloud expertise, preferably AWS (S3, EMR).
Skills required: Python, SQL.
Salary: As per industry standard.
Industry: IT-Software / Software Services
Functional Area: IT Software - Application Programming, Maintenance