Quantumbricks

Senior AWS Data Engineer - Data Pipeline


Job Location

India

Job Description

Job Summary:

We are seeking a highly skilled and experienced Senior AWS Data Engineer to lead our cloud data architecture, design, and development initiatives. The ideal candidate will have deep expertise in building and managing large-scale data pipelines and solutions on AWS, as well as strong experience in data architecture, ETL, and real-time data processing. You will collaborate with cross-functional teams to design, build, and maintain robust, scalable, and efficient data infrastructure and analytics solutions.

Key Responsibilities:

Data Pipeline Design and Development:
- Architect and build large-scale, distributed data processing pipelines on AWS using services such as AWS Glue, Lambda, Kinesis, S3, and Redshift.
- Develop and optimize ETL processes to extract, transform, and load data from a variety of structured and unstructured data sources.
- Design real-time and batch data integration workflows using tools such as AWS Data Pipeline, Kafka, or Apache Airflow.

Data Architecture and Modeling:
- Design, implement, and maintain data architectures (e.g., data lakes, data warehouses) that are scalable, flexible, and cost-efficient.
- Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical designs.
- Develop complex data models that support various analytical and business intelligence (BI) requirements.

Cloud Infrastructure Management:
- Manage and optimize AWS infrastructure for data processing, including EC2, EMR, RDS, DynamoDB, and other relevant AWS services.
- Implement monitoring, logging, and alerting systems to ensure the availability, reliability, and scalability of data solutions.
- Ensure security and compliance by implementing best practices such as encryption, IAM roles, VPC configurations, and monitoring.

Performance Optimization and Automation:
- Automate data pipeline deployment and monitoring using CI/CD tools such as Jenkins or AWS CodePipeline.
- Tune the performance of AWS services such as Redshift, Athena, and EMR to handle large volumes of data efficiently.
- Perform root cause analysis of data pipeline failures and implement preventive measures.

Collaboration and Mentorship:
- Work closely with engineering teams, product managers, and business stakeholders to deliver scalable and efficient data solutions.
- Mentor junior data engineers and provide technical leadership across the team.
- Participate in design reviews, code reviews, and architecture discussions.

Required Skills & Qualifications:
- 10 years of experience in Data Engineering, with a strong focus on cloud platforms and distributed data systems.
- Deep expertise in AWS cloud services, especially S3, Redshift, Glue, Kinesis, Lambda, EMR, and DynamoDB.
- Strong hands-on experience with ETL/ELT processes, data integration, and transformation.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with orchestration tools such as AWS Step Functions, Airflow, or similar.
- Strong knowledge of SQL and database design for data lakes and data warehouses.
- Familiarity with big data processing frameworks such as Apache Spark, Hadoop, or Kafka.
- Hands-on experience with DevOps practices, CI/CD pipelines, and infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with data governance, data quality, and metadata management in a cloud environment.
- Strong problem-solving, communication, and collaboration skills.

(ref:hirist.tech)
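
Illustrative only, not part of the original posting: a minimal Python sketch of the kind of pipeline trigger-and-monitor logic the responsibilities above describe, using boto3 to start a Glue ETL job and wait for a terminal state. The job name "orders_etl", the S3 path, and the argument names are placeholder assumptions.

# Minimal sketch, assuming boto3 and AWS credentials are available.
# "orders_etl" and the S3 path below are hypothetical placeholders.
import time

import boto3

glue = boto3.client("glue")

def run_glue_job(job_name: str, **job_args: str) -> str:
    """Start a Glue job run and poll until it reaches a terminal state."""
    run_id = glue.start_job_run(
        JobName=job_name,
        # Glue expects custom job arguments to be keyed with a "--" prefix
        Arguments={f"--{key}": value for key, value in job_args.items()},
    )["JobRunId"]

    terminal_states = {"SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT", "ERROR"}
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in terminal_states:
            return state
        time.sleep(30)  # wait before polling again

if __name__ == "__main__":
    print(run_glue_job("orders_etl", source_path="s3://example-landing-bucket/orders/"))

In practice, an orchestrator such as Airflow or AWS Step Functions (both named in the posting) would typically own this retry and polling logic rather than a hand-rolled loop.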


Contact Information

Contact Human Resources
Quantumbricks

Posted

November 24, 2024
UID: 4902196701
