Talent Pro

Spark Developer - Hive/Cassandra

Job Location

India

Job Description

Role: Spark Developer

Job Description:

- Work with development teams and product managers to build new product features, enhancements, etc.
- Build the front-end of applications through appealing visual design.
- Develop and manage well-functioning databases and applications.
- Design and develop data and event processing pipelines.
- Test software to ensure responsiveness and efficiency.
- Troubleshoot, debug, and upgrade software.
- As a Spark developer, you will manage the development of the scalable distributed architecture defined by the architect or tech lead in our team.
- Analyse and assemble large data sets designed for the functional and non-functional requirements.
- You will develop ETL scripts for big data sources.
- Identify, design, and optimise data-processing automation for reports and dashboards.
- You will be responsible for workflow, data, and ETL optimisation as per the requirements elucidated by the team.
- Work with stakeholders such as product managers, technical leads, and service-layer engineers to ensure end-to-end requirements are addressed.
- Be a strong team player who adheres to the software development life cycle (SDLC) and the documentation needed to represent every stage of the SDLC.
- Must have good knowledge of the big data tools Hive and HBase tables.
- Should have experience with Spark Streaming (see the streaming sketch after this description).
- Must have good knowledge of SQL.
- Must have good knowledge of data warehouse concepts.
- Must have good analytical skills to analyse issues.
- Should have hands-on Unix/Linux knowledge.
- Must have very good client communication and interfacing skills.
- Knowledge of AWS will be an advantage.

General Qualifications:

- Overall experience in Java/Scala as a backend developer of around 3 years.
- Working experience in big data is important.

Technical Skills:

- Programming experience in Java/Scala is needed; experience in Python will also be an advantage.
- Writing an ETL stack in a scalable, optimised fashion using Apache Spark, Hadoop, Kafka, etc.
- Should have working experience writing distributed, optimised Apache Spark jobs for various machine learning algorithms.
- Experience building and optimising data pipelines and data sets is essential.
- Should have worked with Kafka, at least one NoSQL database (e.g. Cassandra, MongoDB, Elasticsearch), and at least one RDBMS (e.g. MySQL); see the Hive-to-Cassandra sketch after this description.
- Working experience with a cache such as Redis, Apache Ignite, or Hazelcast will be an advantage.
- Working knowledge of Docker and Kubernetes will be a plus.

Inputs from the customer for sourcing quality profiles:

- Beyond Spark experience, include the criteria of NoSQL databases (Elasticsearch, Cassandra, or MongoDB) and Apache Kafka.
- Look for Spark Streaming (people who have used Sqoop likely don't know Spark Streaming).
- Scala- or Java-based work experience using Spark is a MUST.
- Python usage with Spark is optional, not a must.
- Spark Streaming and Kafka experience is mandatory for this requirement.

(ref:hirist.tech)
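
For candidates gauging the mandatory Spark Streaming plus Kafka skill, the following is a minimal Scala sketch of a Spark Structured Streaming job that consumes a Kafka topic and aggregates events in one-minute windows. The broker address, topic name, and checkpoint path are illustrative assumptions, not details taken from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object KafkaStreamJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-event-pipeline")
          .getOrCreate()
        import spark.implicits._

        // Read a stream of events from a Kafka topic
        // (broker and topic names are assumptions for this sketch)
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS json", "timestamp")

        // Count events per 1-minute window; a stand-in for real business logic
        val counts = events
          .withWatermark("timestamp", "5 minutes")
          .groupBy(window($"timestamp", "1 minute"))
          .count()

        // Write windowed counts to the console; a production job would
        // target a sink such as a Hive table, HBase, or Cassandra
        val query = counts.writeStream
          .outputMode("update")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/kafka-event-pipeline")
          .start()

        query.awaitTermination()
      }
    }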
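
Likewise, the Hive/Cassandra side of the role might resemble the batch ETL sketch below, which reads a Hive table with Spark SQL and writes the result to Cassandra through the spark-cassandra-connector. The contact point, keyspace, table, and column names are hypothetical.

    import org.apache.spark.sql.SparkSession

    object HiveToCassandraEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hive-to-cassandra-etl")
          .enableHiveSupport()
          // Cassandra contact point is an assumption for this sketch
          .config("spark.cassandra.connection.host", "127.0.0.1")
          .getOrCreate()

        // Read from a Hive table (database, table, and columns are illustrative)
        val orders = spark.sql(
          "SELECT customer_id, order_id, amount FROM sales.orders WHERE ds = '2024-11-24'")

        // Write to a Cassandra table via the spark-cassandra-connector
        orders.write
          .format("org.apache.spark.sql.cassandra")
          .option("keyspace", "analytics")
          .option("table", "orders_by_customer")
          .mode("append")
          .save()

        spark.stop()
      }
    }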


Contact Information

Contact Human Resources
Talent Pro

Posted

November 24, 2024
UID: 4913354264

AboutJobs.com does not guarantee the validity or accuracy of the job information posted in this database. It is the job seeker's responsibility to independently review all posting companies, contracts and job offers.