Infometry

Data Scientist

Job Location

Bangalore, India (Remote)

Job Description

Role: Data Scientist
Location: Bangalore (Remote)

We are looking for Software/Backend/Data Scientists with a passion for developing robust machine learning (ML) systems and working in a collaborative environment. The ideal candidates will have strong Python skills, a background in software engineering, and experience across the end-to-end ML lifecycle. As leaders, you will be responsible for reviewing code, guiding problem-solving sessions, and communicating technical knowledge to both technical and non-technical teams.

Key Responsibilities:
- Lead the design, development, and deployment of machine learning models, ensuring they meet the requirements of stakeholders across the organization.
- Collaborate with cross-functional teams (management, data engineering, brand teams) to build scalable, production-ready machine learning pipelines.
- Develop and implement model serving pipelines and automate processes using MLOps frameworks (e.g., MLflow, SageMaker).
- Review peers' code and contribute to team-wide problem-solving discussions, encouraging collaboration and adherence to best practices.
- Apply distributed computing frameworks such as Snowpark or PySpark to scale data processing and model training.
- Build and refine machine learning models, ranging from simple linear/logistic regression to deep learning architectures, depending on the needs of the project.
- Design and implement strategies for training models on imbalanced datasets; utilize appropriate training schemas for unbiased model training and tune models for optimal performance (a minimal sketch of one such approach appears after the job description).
- Articulate model performance and evaluation metrics to both technical and non-technical stakeholders, ensuring model decisions are well understood.
- Diagnose model/data drift and implement time-wise tracking of model performance to ensure reliability over time.

Required Skills and Qualifications:
- Strong Python skills, with a focus on developing efficient and scalable machine learning pipelines.
- Demonstrated experience with distributed computing frameworks such as Snowpark or PySpark.
- Software engineering background, with knowledge of best practices in software design, testing, and deployment.
- Proven experience across the end-to-end machine learning lifecycle, from data preprocessing to model deployment.
- Experience with machine learning models of all sizes, from linear/logistic regression to deep learning.
- Experience handling heavily imbalanced training datasets and developing strategies for effective learning.
- Proficiency with training schemas for unbiased model evaluation and tuning, such as cross-validation techniques.
- Strong problem-solving skills and the ability to contribute to technical discussions in a collaborative environment.
- Familiarity with MLOps frameworks for model tracking, deployment, and pipeline automation.
- Experience with Dataiku or other data science-enabling tools (e.g., SageMaker, Databricks).

(ref:hirist.tech)
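
As a rough illustration of the imbalanced-data and unbiased-evaluation requirements above, the following minimal Python sketch pairs stratified k-fold cross-validation with class weighting. The synthetic dataset, model choice, and parameters are illustrative assumptions only and are not specified by the employer.

# Minimal sketch: stratified cross-validation with class weighting on an
# imbalanced binary dataset. All data and parameters here are illustrative
# assumptions, not anything prescribed by this posting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic, heavily imbalanced dataset (roughly 95% / 5% class split).
X, y = make_classification(
    n_samples=5_000,
    n_features=20,
    weights=[0.95, 0.05],
    random_state=42,
)

# class_weight="balanced" re-weights the loss so the minority class is not ignored.
model = LogisticRegression(class_weight="balanced", max_iter=1_000)

# StratifiedKFold preserves the class ratio in every fold, which keeps the
# evaluation unbiased; F1 is more informative than accuracy when classes are skewed.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")

print(f"F1 per fold: {np.round(scores, 3)}")
print(f"Mean F1: {scores.mean():.3f}")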

Contact Information

Contact Human Resources
Infometry

Posted

November 24, 2024
UID: 4943653900
