Work Location: Bangalore
Experience Required: 2 to 7 years
Qualification: BE/B.Tech/ME/M.Tech
Description:
Mandatory skill sets:
a. 2+ years of experience with big data technologies such as PySpark, Hadoop, Trino, and Druid
b. Strong experience in query optimisation in Trino/PySpark
c. Strong hands-on experience with Airflow or a similar scheduler
d. Expertise in Python
About the Role:
We are looking for hands-on Data Engineers to build and maintain scalable data solutions and services. The role includes:
a. Maintain and develop data engineering pipelines to ensure seamless data flow for BI applications
b. Create data models that support efficient querying
c. Develop or onboard open-source tools to keep the data platform up to date
d. Optimize queries and scripts over large-scale datasets (TBs) with a focus on performance and cost-efficiency
e. Implement data governance and security best practices in Kubernetes environments
f. Collaborate across teams to translate business requirements into robust
technical solutions
