Corner Tree Consulting P Ltd

Big Data Engineer - PySpark/Hadoop


Job Location

Pune, India

Job Description

- Develop efficient ETL pipelines per business requirements, following development standards and best practices (a minimal PySpark sketch follows the lists below).
- Perform integration testing of the created pipelines in AWS environments.
- Provide estimates for development, testing, and deployment across environments.
- Participate in peer code reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines using the required AWS services, i.e. S3, IAM, Glue, EMR, and Redshift.

Requirements:

- 4-10 years of solid hands-on exposure to Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive.
- Good hands-on experience with Python and Bash scripts.
- Good understanding of SQL and data warehouse concepts.
- Strong analytical, problem-solving, data analysis, and research skills.
- Demonstrable ability to think outside the box rather than depend on readily available tools.
- Excellent communication, presentation, and interpersonal skills are a must.

Good to have:

- Hands-on experience with cloud-platform Big Data services (e.g. IAM, Glue, EMR, Redshift, S3, Kinesis).
- Orchestration with Airflow, or experience with any other job scheduler.
- Experience migrating workloads from on-premises to the cloud, and cloud-to-cloud.

(ref:hirist.tech)
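For orientation, here is a minimal sketch of the kind of PySpark ETL pipeline the posting describes: reading raw data from S3, cleaning it with the DataFrame API, aggregating with Spark SQL, and writing Parquet back to S3. The bucket, paths, and column names are hypothetical, not taken from the posting.

# Minimal PySpark ETL sketch. Bucket, paths, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files from a (hypothetical) S3 location.
raw = spark.read.csv("s3://example-bucket/raw/orders/", header=True, inferSchema=True)

# Transform: deduplicate and clean with the DataFrame API ...
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# ... then aggregate with Spark SQL.
cleaned.createOrReplaceTempView("orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Load: write partitioned Parquet back to S3 for downstream consumers
# (e.g. a Redshift COPY or Spectrum external table).
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_orders/"
)

spark.stop()

On EMR a script like this would typically be launched with spark-submit; in a Glue job, the same DataFrame and Spark SQL logic would run under a GlueContext.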

Location: Pune, IN


Contact Information

Contact Human Resources
Corner Tree Consulting P Ltd

Posted

November 23, 2024
UID: 4944924375
