Data Scientist (ML, Speech, NLP & Multimodal Expertise) | London

Overview
We are looking to hire a Data Scientist with strong expertise in machine learning, speech and language processing, and multimodal systems. This role is essential to driving our product roadmap forward, particularly in building out our core machine learning systems and developing next-generation speech technologies.

The ideal candidate will be capable of working independently while effectively collaborating with cross-functional teams. In addition to deep technical knowledge, we are looking for someone who is curious, experimental, and communicative.

Key Responsibilities
- Create maintainable, elegant code and high-quality data products that are well-modeled, well-documented, and simple to use.
- Build, maintain, and improve the infrastructure to extract, transform, and load data from a variety of sources using SQL, Azure, GCP, and AWS technologies.
- Perform statistical analysis of training datasets to identify biases, quality issues, and coverage gaps.
- Implement automated evaluation pipelines that scale across multiple models and tasks.
- Create interactive dashboards and visualization tools for model performance analysis.

Additional Responsibilities
- Design and implement robust data ingestion pipelines for massive-scale text and speech corpora, including automated data preprocessing and cleaning pipelines.
- Create data validation frameworks and monitoring systems for dataset quality.
- Develop sampling strategies for balanced and representative training data.
- Implement comprehensive experiment tracking and hyperparameter optimization frameworks.
- Conduct statistical analysis of training dynamics and convergence patterns.
- Design A/B testing frameworks for comparing different training approaches.
- Create automated model selection pipelines based on multiple evaluation criteria.
- Develop cost-benefit analyses for different training configurations.
- Design comprehensive benchmark suites with statistical significance testing.
- Develop fairness metrics and bias detection systems.
- Build real-time monitoring systems for model performance in production.
- Implement feature drift detection and data quality monitoring.
- Design feedback loops to capture user interactions and model effectiveness.
- Create automated retraining pipelines based on performance degradation signals.
- Develop business metrics and ROI analysis for model deployments.

Required Skills, Experience and Qualifications
Programming and Software Engineering
- Python (Expert Level): Advanced proficiency in the scientific computing stack (NumPy, Pandas, SciPy, Scikit-learn).
- Version Control: Git workflows, collaborative development, and code review processes.
- Software Engineering Practices: Testing frameworks, CI/CD pipelines, and production-quality code development.

Machine Learning and Language Model Expertise
- Traditional Machine Learning and Deep Learning Knowledge: Proficiency in classical ML algorithms (Naive Bayes, SVM, Random Forest, etc.) and deep learning architectures.
- Understanding of Transformer Architecture: Attention mechanisms, positional encoding, and scaling laws.
- Training Pipeline Knowledge: Data preprocessing for large corpora, tokenization strategies, and distributed training concepts.
- Evaluation Frameworks: Experience with standard NLP benchmarks (GLUE, SuperGLUE, etc.) and custom evaluation design.
- Fine-tuning Techniques: Understanding of PEFT methods, instruction tuning, and alignment techniques.
- Model Deployment: Knowledge of model optimization, quantization, and serving infrastructure for large models.

Collaboration and Adaptability
- Strong communication skills are a must.
- Self-reliant, but knows when to ask for help.
- Comfortable working in an environment where conventional development practices may not always apply:
  - PBIs (Product Backlog Items) may not be highly detailed
  - Experimentation will be necessary
- Able to identify what's important in completing a task or partial task, and to explain and justify their approach.
- Can effectively communicate ideas and strategies.
- Proactive: takes initiative when circumstances call for it, rather than waiting for PBIs to be assigned.
- Strong interest in AI and its possibilities; a genuine passion for certain areas can provide that extra spark.
- Curious and open to experimenting with technologies or languages outside their comfort zone.

Mindset and Work Approach
- Takes ownership when things don't go as planned.
- Capable of working from high-level explanations and general guidance on implementations and final outcomes.
- Continuous, clear communication is crucial; detailed step-by-step instructions won't always be available.
- Self-starter, self-motivated, and proactive in problem-solving.
- Enjoys exploring and testing different approaches, even in unfamiliar programming languages.

Additional Skills, Experience and Qualifications
Machine Learning and Deep Learning
- Framework Proficiency: Scikit-learn, XGBoost, PyTorch (preferred) or TensorFlow for model implementation and experimentation.
- MLOps Expertise: Model versioning, experiment tracking, model monitoring (MLflow, Weights and Biases), data monitoring and validation (Great Expectations, Prometheus, Grafana), and automated ML pipelines (GitHub CI/CD, Jenkins, CircleCI, GitLab, etc.).
- Statistical Modeling: Hypothesis testing, experimental design, causal inference, and Bayesian statistics.
- Model Evaluation: Cross-validation strategies, bias-variance analysis, and performance metric design.
- Feature Engineering: Advanced techniques for text, time-series, and multimodal data.

Data Engineering and Infrastructure
- Big Data Technologies: Spark (PySpark), the Hadoop ecosystem, and distributed training frameworks (DDP, TP, FSDP).
- Cloud Platforms: AWS (SageMaker, S3, EMR), GCP (Vertex AI, BigQuery), or Azure ML.
- Database Systems: NoSQL databases (MongoDB, Elasticsearch), graph databases (Neo4j), and vector databases (Pinecone, Milvus, ChromaDB, FAISS, etc.).
- Data Pipeline Tools: Airflow, Prefect, or similar orchestration frameworks.
- Containerization: Docker, Kubernetes for scalable model ..... full job details .....