About
Interview Process: Face-to-face is not required; a video interview is fine.
Job Details:
Minimum years of experience required: 8+
Certification needed: Not mandatory
Must Have Skills: Python & Spark, advanced ML/statistical modeling, scalable ETL/Big Data engineering, and solid AWS + SQL experience.
Good to Have Skills: Generative AI/NLP, MLOps tools, Linux, deep learning, and telecom domain knowledge.
Detailed Job Description:
Overview
As part of the QLB Quota Proposal initiative, the Senior Data Scientist will play a critical role in developing analytical and machine learning solutions that improve the security, performance, and operational efficiency of the client's enterprise network infrastructure. This role blends advanced Data Science, Machine Learning, and Generative AI with robust Data Engineering, particularly using Spark and AWS, to deliver scalable, production-grade analytical products. You will design end-to-end data solutions, from data ingestion to model deployment, while partnering with engineering and product teams to translate insights into actionable outcomes.
Roles & Responsibilities
• Lead the full ML development lifecycle: problem framing, hypothesis formulation, feature engineering, model development, validation, deployment, and monitoring.
• Develop, test, and optimize machine learning models, including:
  o Supervised & unsupervised learning
  o Statistical modeling and forecasting
  o Natural Language Processing (NLP)
  o Generative AI techniques for automation and insight extraction
  o Graph/network analytics for analyzing network behaviors and relationships
• Build advanced anomaly detection, predictive maintenance, and risk scoring models for network security and operational efficiency.
• Conduct large-scale exploratory data analysis (EDA) to identify trends, data quality issues, and opportunities for automation.
• Define and implement model evaluation and A/B testing strategies.
• Collaborate with ML engineering teams to operationalize models using MLOps best practices.
• Communicate complex analytical findings through clear narratives, visualizations, and presentations tailored to technical and non-technical audiences.
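To give a flavor of the anomaly detection work described above, here is a minimal, hypothetical sketch that flags outliers in a series of network metrics using a simple z-score rule. It is a toy illustration only (standard library, made-up latency values); a production system would use richer features and models, typically running on Spark.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values deviating from the mean by more than
    `threshold` population standard deviations. A toy stand-in for
    production-grade anomaly detection models."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical latency samples (ms) with one obvious spike at index 5.
latencies = [12.1, 11.8, 12.4, 12.0, 11.9, 95.0, 12.2, 12.3]
print(zscore_anomalies(latencies, threshold=2.0))  # -> [5]
```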
Data Engineering & ETL
• Design, develop, and maintain scalable, fault-tolerant ETL pipelines using Spark to support analytics and machine learning workloads.
• Implement monitoring, alerting, and automated recovery mechanisms to ensure data pipeline reliability.
• Build robust feature pipelines that enable real-time and batch ML processing.
• Integrate data from a wide range of sources:
  o APIs
  o Flat files
  o Relational databases
  o Distributed file systems (HDFS/S3)
• Support continuous integration and continuous delivery (CI/CD) workflows for data and ML components.
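The extract-transform-load pattern behind these pipelines can be sketched in miniature. The example below is a hypothetical, standard-library-only illustration (invented `RAW` feed, in-memory SQLite as the sink); an actual pipeline of the kind described here would source from APIs, flat files, or HDFS/S3 and run the transforms on Spark.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real pipeline would read from APIs,
# flat files, relational databases, or HDFS/S3.
RAW = """device_id,latency_ms
a1,12.5
a2,
a3,240.0
"""

def extract(text):
    """Parse the raw CSV feed into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing latency and cast values to typed tuples."""
    return [(r["device_id"], float(r["latency_ms"]))
            for r in rows if r["latency_ms"]]

def load(rows, conn):
    """Write cleaned rows into a relational store."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS latency (device_id TEXT, latency_ms REAL)")
    conn.executemany("INSERT INTO latency VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*) FROM latency").fetchone()[0])  # -> 2
```

The row with the missing latency value is filtered out during the transform step, which mirrors (at toy scale) the data-quality filtering a fault-tolerant pipeline performs before loading.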
Collaboration & Leadership
• Partner with engineering, operations, security, and business teams to embed machine learning solutions into production systems.
• Provide mentorship to junior data scientists and analysts.
• Evangelize data science best practices across the organization and contribute to the development of internal frameworks, tools, and standards.
• Help educate teams on analytic techniques, statistical reasoning, and responsible AI practices.
Required Qualifications
• Strong communication and presentation skills, and the ability to translate analytics into business value.
• Expertise in programming languages commonly used in data science:
  o Python (primary)
  o Scala or Java (preferred for ETL/engineering)
• Proven experience with Spark and large-scale distributed data processing.
• Deep understanding of:
  o Statistical modeling
  o Hypothesis testing
  o Experimental design
  o Causality and multicollinearity
• Strong SQL skills and experience with relational and NoSQL databases.
• Expertise across a wide range of ML methodologies:
  o Regression, classification, clustering
  o Time-series forecasting
  o Bayesian methods
  o NLP and text analytics
  o Graph analytics
• Experience with data preprocessing, feature engineering, and EDA.
• Familiarity with data architectures such as data lakes, warehouses, and marts.
• Demonstrated ability to continuously learn, adapt, and share knowledge.
Preferred Qualifications
• Experience with AWS services (S3, EMR, Lambda, Glue, SageMaker).
• Prior exposure to Generative AI, LLMs, prompt engineering, or building AI-driven automation systems.
• Experience with Linux-based systems.
• Background in text mining, document classification, or large-scale unstructured data processing.
• Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, Physics, Engineering, Operations Research, or a related field.
• Master's degree with 6+ years or Bachelor's degree with 8+ years of relevant work experience.
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.