Juniper Consultancy Services
Azure Data Engineer - PySpark/AWS/Big Data
Job Location
Bangalore, India
Job Description
Who are we looking for:
We are looking for an experienced AWS Data Engineer with expertise in PySpark and Python to join our dynamic team. You will be responsible for designing, implementing, and maintaining scalable data pipelines and infrastructure, leveraging the power of AWS services and big data technologies.

Technical Skills:
- 8+ years of experience in data engineering with a strong focus on AWS services.
- Hands-on experience with PySpark and Python for big data processing.
- Strong experience with AWS services such as S3, Glue, Lambda, EMR, Redshift, and Athena.
- Expertise in building and managing ETL pipelines for large datasets.
- Experience developing ETL processes using PySpark, Python, and AWS Glue to extract, transform, and load large datasets.
- Experience with AWS S3, Redshift, EC2, and Lambda services.
- Extensive experience in developing and deploying big data pipelines.
- Experience with Azure Data Lake.
- Strong hands-on SQL development skills and an in-depth understanding of SQL optimization and tuning techniques with Redshift.
- Development experience in notebooks (Jupyter, Databricks, Zeppelin, etc.).
- Development experience in PySpark.
- Experience in a scripting language such as Python, and in any other programming language.

Roles and Responsibilities:
- Candidates must have hands-on experience with AWS Databricks.
- Good development experience using Python/Scala, Spark SQL, and DataFrames.
- Hands-on Databricks, Data Lake, and SQL knowledge is a must.
- Performance tuning, troubleshooting, and debugging of Spark applications.

Process Skills: Agile - Scrum

Qualification: Bachelor of Engineering (computer background preferred)

(ref:hirist.tech)
Location: Bangalore, IN
Posted Date: 11/21/2024
Contact Information
Contact: Human Resources, Juniper Consultancy Services