Description:
Role: AI Engineer
Location: Seattle, WA
Type: Contract

Requirements:
- 10+ years of AI engineering experience
- Setup and operation of AI inference service monitoring for performance and availability
- Experience deploying and troubleshooting LLM models on a containerized platform (monitoring, load balancing, etc.)
- Operation and support of MLOps/LLMOps pipelines, using TensorRT-LLM and Triton Inference Server to deploy inference services in production
Feb 24, 2026
from:
dice.com