Recro
AWS Data Engineer - SQL/Python
Job Location
India
Job Description
Job Title: AWS Data Engineer
Experience: 3 years

Looking for candidates who have experience building solutions from scratch and are currently serving their notice period, with availability to join within 10-15 days.

We are seeking a Data Engineer with strong hands-on experience in data engineering techniques to create data pipelines and build data assets. The ideal candidate must possess deep expertise in PySpark, Python, SQL, and AWS services (Glue, S3, Redshift). The role involves code optimization, architecture design, and maintaining best practices for scalability and performance.

Key Responsibilities:
- Design, develop, and optimize data pipelines and data assets using PySpark and Python (see the illustrative sketch below).
- Perform code optimization using Spark SQL and PySpark for enhanced performance.
- Work closely with AWS services, especially S3, Glue, Redshift, Lambda, and EC2, and explain the benefits of each.
- Refactor legacy codebases for better readability and maintainability.
- Apply unit testing and Test-Driven Development (TDD) practices to ensure code quality.
- Debug and fix complex bugs, addressing performance, concurrency, or logic issues.

Mandatory Skills:
- PySpark and Python.
- Strong expertise in SQL and experience with Spark SQL for data optimization.
- AWS services knowledge, particularly Glue, S3, Redshift, and Lambda.
- Proficiency in code versioning tools (Git) and artifact repositories such as JFrog Artifactory.
- Code refactoring and modernization skills for maintaining code quality.
- Experience with unit testing and TDD.

Nice to Have:
- Familiarity with other AWS services such as EC2 and CloudFormation.
- Experience with Boto3 for AWS automation and integration.

Location: Remote
Experience: 3-5 years
(ref:hirist.tech)
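As a rough illustration of the pipeline work described above, the following is a minimal PySpark sketch that reads raw CSV data from S3, aggregates it with Spark SQL, and writes a curated Parquet asset back to S3 for downstream loading into Redshift. The bucket names, paths, and column names (order_date, amount) are hypothetical, and an EMR/Glue-style environment with native s3:// support is assumed; this is a sketch of the kind of task listed, not code supplied by the employer.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw CSV data from S3 (bucket and prefix are hypothetical).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Register a temporary view so the aggregation can be written in Spark SQL.
orders.createOrReplaceTempView("orders")

daily_revenue = spark.sql("""
    SELECT order_date,
           SUM(amount) AS total_revenue,
           COUNT(*)    AS order_count
    FROM orders
    GROUP BY order_date
""")

# Write the curated asset back to S3 as partitioned Parquet,
# ready to be loaded into Redshift (e.g. via COPY or Redshift Spectrum).
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_revenue/")
)

spark.stop()
```

In a Glue job this kind of script would typically be scheduled and pointed at catalogued S3 locations; the same read-transform-write structure applies.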
Location: India
Posted Date: 11/24/2024
Contact Information
Contact: Human Resources, Recro