Corner Tree Consulting P Ltd
Big Data Engineer - PySpark/Hadoop
Job Location
Pune, India
Job Description
Responsibilities:
- Develop efficient ETL pipelines per business requirements, following development standards and best practices (a minimal PySpark sketch follows this listing).
- Perform integration testing of the created pipelines in AWS environments.
- Provide estimates for development, testing, and deployment across environments.
- Participate in peer code reviews to ensure our applications comply with best practices.
- Build cost-effective AWS pipelines using the required AWS services, i.e. S3, IAM, Glue, EMR, and Redshift.

Requirements:
- 4-10 years of hands-on exposure to Big Data technologies: PySpark (DataFrame and SparkSQL), Hadoop, and Hive.
- Strong hands-on experience with Python and Bash scripting.
- Good understanding of SQL and data warehouse concepts.
- Strong analytical, problem-solving, data analysis, and research skills.
- Demonstrable ability to think outside the box rather than depend only on readily available tools.
- Excellent communication, presentation, and interpersonal skills are a must.

Good to have:
- Hands-on experience with cloud-platform Big Data services (i.e. IAM, Glue, EMR, Redshift, S3, Kinesis).
- Orchestration with Airflow, and experience with any job scheduler.
- Experience migrating workloads from on-premises to cloud and cloud-to-cloud.

(ref:hirist.tech)
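For illustration only: a minimal sketch of the kind of PySpark (DataFrame and SparkSQL) ETL work named above, assuming a hypothetical S3 bucket, input schema, and output path that are not part of this posting.

```python
# Minimal illustrative PySpark ETL sketch (all paths, tables, and columns are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data from a hypothetical S3 location (s3:// paths assume EMR/EMRFS).
orders = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders/")

# Transform: cast the amount column, then aggregate per customer and day via SparkSQL.
orders = orders.withColumn("amount", F.col("amount").cast("double"))
orders.createOrReplaceTempView("orders")
daily_totals = spark.sql("""
    SELECT customer_id, order_date, SUM(amount) AS total_amount
    FROM orders
    GROUP BY customer_id, order_date
""")

# Load: write partitioned Parquet back to S3 for downstream Glue/Redshift consumption.
daily_totals.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_totals/")

spark.stop()
```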
Location: Pune, IN
Posted Date: 11/23/2024
Contact Information
Contact: Human Resources, Corner Tree Consulting P Ltd