SGS Technical Services Pvt. Ltd
Big Data Engineer - Hadoop/PySpark
Job Location
Pune, India
Job Description
Position: Big Data Engineer
Experience: 3-5 years
Location: Pune
Notice Period: Immediate joiner preferred

Key Responsibilities:
- Design, develop, and maintain scalable data processing systems using Hadoop and PySpark.
- Implement data pipelines to process large datasets efficiently.
- Work with data warehousing solutions to ensure proper storage and retrieval of data.
- Perform ETL tasks and optimize performance for data ingestion and transformation.
- Collaborate with data scientists, analysts, and other stakeholders to meet business objectives.
- Ensure data security and integrity during processing and storage.

Mandatory Skills:
- Hadoop: Strong experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Pig, etc.).
- PySpark: Hands-on experience in developing and optimizing large-scale data processing using PySpark.
- Experience with data ingestion, processing, and storage in distributed systems.

Preferred Skills:
- Knowledge of data warehousing concepts and tools.
- Familiarity with cloud platforms like AWS, GCP, or Azure for Big Data solutions.
- Proficiency in SQL and NoSQL databases.
- Understanding of CI/CD pipelines for Big Data applications.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Excellent problem-solving skills and ability to handle large datasets efficiently.

(ref:hirist.tech)
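For context on the kind of pipeline work described above, below is a minimal, purely illustrative PySpark ingest-transform-load sketch; it is not part of the posting, and the HDFS paths, column names (event_ts, amount, region), and output location are assumptions made for the example.

# Illustrative PySpark ETL sketch (hypothetical paths and column names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-etl-example").getOrCreate()

# Ingest: read raw events from HDFS (path assumed for illustration).
raw = spark.read.parquet("hdfs:///data/raw/sales_events")

# Transform: drop null amounts and aggregate daily revenue per region.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "region")
       .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: write results back to HDFS, partitioned by date (output path assumed).
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/warehouse/daily_revenue"))

spark.stop()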
Location: Pune, IN
Posted Date: 11/24/2024
Contact Information
Contact: Human Resources, SGS Technical Services Pvt. Ltd