Senior Machine Learning Engineer
On-site
OnMobile
Enterprise
Product
B2B
₹ 25-35 Lacs PA
IPO/Public
Telecom / Digital Entertainment
Bangalore, Karnataka, India
Post Status: Active
Permanent
1 application
Experience: 4-7 Years
Skills
CI/CD
BigQuery
TensorFlow
Docker
Prometheus
PyTorch
Scikit-Learn
Python
RAG
MLOps
Posted 2 days ago

About the job

OnMobile Global, headquartered in Bangalore and present in 65+ countries, is a listed company and a leader in mobile gaming and entertainment. With a proven track record of defining what digital engagement looks like, we transformed into a mobile gaming-first company with platforms like Challenges Arena and ONMO, while also building enterprise solutions such as Gamize and Buzzmo. We pair cutting-edge tech with a culture that’s collaborative, high-energy and fun. If you’re passionate about building the future of digital experiences, we’d love to have you take the next step in your journey here.

The ML Engineer is a core member of the AI & Data Science CoE, responsible for taking machine learning models from prototype to production and building the infrastructure that enables scalable, reliable, and automated ML at OnMobile.

This role bridges the gap between data science research and engineering operations. Working with 15+ TB of data across user behavior, transactions, and operational telemetry, the ML Engineer builds and maintains production ML pipelines, feature stores, model serving infrastructure, and monitoring systems that power OnMobile’s analytics capabilities across all product pods.

Roles & Responsibilities

Model Productionization & Deployment

• Take ML models built by data scientists (churn prediction, LTV forecasting, campaign optimization) from notebook prototypes to production-grade services.

• Build scalable model serving infrastructure using Vertex AI, Cloud Run, or equivalent GCP services.

• Implement batch and real-time inference pipelines depending on use case requirements.

• Ensure models meet latency, throughput, and reliability SLAs for production workloads.

ML Pipeline & Infrastructure

• Design, build, and maintain end-to-end ML pipelines using tools such as Vertex AI Pipelines and Apache Airflow.

• Implement automated model training, evaluation, and retraining workflows triggered by data drift or performance degradation.

• Manage experiment tracking, model versioning, and artifact management (MLflow or Vertex AI Experiments).

Monitoring, Observability & Reliability

• Build model monitoring dashboards to track prediction quality, data drift, feature drift, and concept drift.

• Conduct root cause analysis when models underperform and collaborate with data scientists on remediation.

• Ensure all ML systems follow best practices for logging, versioning, reproducibility, and auditability.

Platform & Integration

• Integrate ML model outputs with downstream systems: BI dashboards, automated reporting, campaign orchestration engines.

• Collaborate with data engineers on ETL/ELT pipeline design to ensure ML-ready data availability.

• Work with AI/GenAI engineers to support LLM-based applications with embedding pipelines, vector stores, and retrieval infrastructure.

• Contribute to the design and maintenance of the CoE’s overall ML platform architecture.

Required Skills and Competencies

Technical Skills

• Strong proficiency in Python and software engineering best practices (testing, CI/CD, code review, modular design).

• Hands-on experience with ML frameworks: scikit-learn, XGBoost, TensorFlow, or PyTorch.

• Deep knowledge of MLOps: model serving (TF Serving, Triton, or custom APIs), pipeline orchestration, containerization (Docker, Kubernetes).

• Experience with GCP services: BigQuery, Vertex AI, Cloud Functions, Cloud Run, Pub/Sub, Dataflow.

• Familiarity with feature store design, model registries, and experiment tracking tools (MLflow, Weights & Biases).

• Strong SQL skills for data extraction and transformation from BigQuery.

Infrastructure & Engineering

• Understanding of distributed computing, parallel processing, and data-intensive workloads.

• Knowledge of API design (REST/gRPC) for model serving endpoints.

• Familiarity with monitoring and observability tools (Prometheus, Grafana, Cloud Monitoring).

Collaboration & Process

• Experience working closely with data scientists to translate research code into production-quality systems.

• Strong communication skills to document ML systems and train pod-embedded analysts on model usage.

• Familiarity with Agile/Scrum methodologies and cross-functional team collaboration.

Good-to-Have

• Experience with LLM deployment, RAG pipelines, or vector database systems (Pinecone, Weaviate, pgvector).

• Familiarity with MCP (Model Context Protocol) for AI agent frameworks.

• Experience with streaming ML (real-time feature computation, online learning).

• Prior experience in telecom, subscription services, or digital marketing analytics.

Benefits & Perks

• Flexible Work Hours

• Industry-best Insurance Coverage