Resource Quick
Senior Python/PySpark Developer - Data Engineering
Job Location
Bangalore, India
Job Description
About the Role:
We are seeking a highly skilled Senior Python/PySpark Developer to lead our data engineering and analytics initiatives. The ideal candidate will have a proven track record in designing, developing, and deploying robust, scalable data solutions on AWS.

Key Responsibilities:

Data Engineering:
- Develop, test, and deploy efficient data pipelines using Python and PySpark.
- Design and implement ETL processes to extract, transform, and load data from various sources.
- Optimize data pipelines for performance and scalability.

Cloud Architecture:
- Design and implement cloud-native solutions on AWS, leveraging services such as EC2, Lambda, S3, RDS, Redshift, and Glue.
- Ensure the security, reliability, and cost-effectiveness of cloud infrastructure.

Data Management and Transformation:
- Manage and transform large datasets, supporting data ingestion and migration through well-defined pipelines.
- Use SQL Server and stored procedures for data management and transformation.

ETL and Orchestration:
- Develop, deploy, and manage scalable ETL and orchestration solutions to automate data processing tasks.

Team Leadership:
- Mentor and guide junior team members.
- Collaborate with cross-functional teams to deliver high-quality solutions.

Required Skills and Experience:
- Strong proficiency in Python and PySpark.
- In-depth knowledge of AWS services (EC2, Lambda, S3, RDS, Redshift, Glue).
- Experience with data warehousing and data lake concepts.
- Solid understanding of SQL and database technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.

(ref:hirist.tech)
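For candidates unfamiliar with the extract-transform-load pattern named in the responsibilities above, here is a minimal, framework-free sketch in plain Python. All names and data are hypothetical; in the role described, the same pattern would run at scale with PySpark DataFrames (e.g. `spark.read`, DataFrame transforms, `DataFrame.write`) rather than in-memory lists.

```python
import csv
import io

def extract(csv_text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: keep only completed orders and derive a total column."""
    out = []
    for r in rows:
        if r["status"] == "completed":
            out.append({
                "order_id": r["order_id"],
                "total": float(r["qty"]) * float(r["unit_price"]),
            })
    return out

def load(rows, target):
    """Load: append transformed rows to a target store (a list here)."""
    target.extend(rows)
    return len(rows)

# Hypothetical source data for illustration.
source = (
    "order_id,status,qty,unit_price\n"
    "1,completed,2,10.0\n"
    "2,cancelled,1,5.0\n"
    "3,completed,3,4.0\n"
)
warehouse = []
loaded = load(transform(extract(source)), warehouse)
# loaded == 2; warehouse holds orders 1 and 3 with totals 20.0 and 12.0
```

The three stages are deliberately separated so each can be tested and scaled independently, which is the same decomposition an orchestrated PySpark pipeline would follow.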
Location: Bangalore, IN
Posted Date: 11/28/2024
Contact Information
Contact: Human Resources, Resource Quick