Senior Engineering Manager (Inference Service)
Gruve
About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to help our customers make more intelligent, data-driven decisions in support of their business strategies. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
About the Role
We are seeking a highly experienced and visionary Senior Engineering Manager – Inference Services to lead and scale our team responsible for building high-performance inference systems that power cutting-edge AI/ML products. This role requires a blend of strong technical expertise, leadership skills, and product-oriented thinking to drive innovation, scalability, and reliability of our inference infrastructure.
Key Responsibilities
Leadership & Strategy
- Lead, mentor, and grow a team of engineers focused on inference platforms, services, and optimizations.
- Define the long-term vision and roadmap for inference services in alignment with product and business goals.
- Partner with cross-functional leaders in ML, Product, Data Science, and Infrastructure to deliver robust, low-latency, and scalable inference solutions.
Engineering Excellence
- Architect and oversee the development of distributed, production-grade inference systems, ensuring scalability, efficiency, and reliability.
- Drive adoption of best practices for model deployment, monitoring, and continuous improvement of inference pipelines.
- Ensure high availability, cost optimization, and performance tuning of inference workloads across cloud and on-prem environments.
Innovation & Delivery
- Evaluate emerging technologies, frameworks, and hardware accelerators (GPUs, TPUs, etc.) to continuously improve inference efficiency.
- Champion automation and standardization of model deployment and lifecycle management.
- Balance short-term delivery with long-term architectural evolution.
People & Culture
- Build a strong engineering culture focused on collaboration, innovation, and accountability.
- Provide coaching, feedback, and career development opportunities to team members.
- Foster a growth mindset and data-driven decision-making.
Basic Qualifications
Experience
- 12+ years of software engineering experience, with at least 4–5 years in engineering leadership roles.
- Proven track record of managing high-performing teams delivering large-scale distributed systems or ML platforms.
- Experience in building and operating inference systems, ML serving platforms, or real-time data systems at scale.
Technical Expertise
- Strong understanding of machine learning model deployment, serving, and optimization (batch & real-time).
- Proficiency in cloud-native technologies (Kubernetes, Docker, microservices architecture).
- Hands-on knowledge of inference frameworks (TensorFlow Serving, Triton Inference Server, TorchServe, etc.) and hardware accelerators.
- Solid background in programming languages (Python, Java, C++, or Go) and performance optimization techniques.
Preferred Qualifications
- Experience with MLOps platforms and end-to-end ML lifecycle management.
- Prior work in high-throughput, low-latency systems (ad-tech, search, recommendations, etc.).
- Knowledge of cost optimization strategies for large-scale inference workloads.
Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.