Generative AI is rapidly transforming industries, driving demand for secure, high-performance inference solutions to scale increasingly complex models efficiently and cost-effectively.
Expanding its collaboration with NVIDIA, Amazon Web Services (AWS) announced at its annual re:Invent conference that it has extended NVIDIA NIM microservices across key AWS AI services to support faster AI inference and lower latency for generative AI applications.
NVIDIA NIM microservices are now available directly from AWS Marketplace, as well as Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, making it even easier for developers to deploy NVIDIA-optimized inference for commonly used models at scale.
NVIDIA NIM, part of the NVIDIA AI Enterprise software platform available on AWS Marketplace, provides developers with a set of easy-to-use microservices designed for secure, reliable deployment of high-performance, enterprise-grade AI model inference across clouds, data centers and workstations.
These prebuilt containers are built on robust inference engines such as NVIDIA Triton Inference Server, NVIDIA TensorRT, NVIDIA TensorRT-LLM and PyTorch, and support a broad range of AI models, from open-source community models to NVIDIA AI Foundation models and custom models.
NIM microservices can be deployed across various AWS services such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Kubernetes Service (EKS), and Amazon SageMaker.
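However a NIM container is deployed, it typically exposes an OpenAI-compatible HTTP API for inference. The sketch below shows how a standard chat-completions payload for such an endpoint might be assembled in Python; the endpoint URL and model name are illustrative assumptions, not tied to any specific deployment.

```python
import json

# Assumed local NIM deployment; in practice this would be the URL of the
# container running on EC2, EKS or a SageMaker endpoint.
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "meta/llama-3.1-8b-instruct",  # example model identifier
    "Summarize NVIDIA NIM in one sentence.",
)
body = json.dumps(payload)  # serialized request body for an HTTP POST
```

The payload could then be sent with any HTTP client, for example `requests.post(NIM_ENDPOINT, json=payload)`, and the response parsed like any OpenAI-style chat completion.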
In the NVIDIA API catalog, developers can preview more than 100 NIM microservices built from commonly used models and model families, including Meta’s Llama 3, Mistral AI’s Mistral and Mixtral, NVIDIA’s Nemotron and Stability AI’s SDXL. The most commonly used ones are available for self-hosted deployment on AWS services and are optimized to run on NVIDIA-accelerated compute instances on AWS.
NIM microservices currently available directly from AWS include:
- NVIDIA Nemotron-4, available on Amazon Bedrock Marketplace, Amazon SageMaker JumpStart and AWS Marketplace. A state-of-the-art LLM designed to generate diverse synthetic data that closely mimics real-world data, enhancing the performance and robustness of custom LLMs across a variety of domains.
- Llama 3.1 8B-Instruct, available on AWS Marketplace. An 8-billion-parameter multilingual large language model that is pretrained and instruction-tuned for language understanding, reasoning and text-generation use cases.
- Llama 3.1 70B-Instruct, available on AWS Marketplace. A 70-billion-parameter pretrained, instruction-tuned model optimized for multilingual dialogue.
- Mixtral 8x7B Instruct v0.1, available on AWS Marketplace. A high-quality sparse mixture-of-experts model with open weights that can follow instructions, complete requests and generate creative text formats.
NIM on AWS for everyone
Customers and partners across industries rely on NIM on AWS to get to market faster, maintain security and control over their generative AI applications and data, and reduce costs.
SoftServe, an IT consulting and digital services provider, has developed six generative AI solutions fully deployed on AWS and accelerated by NVIDIA NIM and AWS services. Solutions available on AWS Marketplace include SoftServe Gen AI Drug Discovery, SoftServe Gen AI Industrial Assistant, Digital Concierge, Multimodal RAG System, Content Creator, and Speech Recognition Platform.
All of them are based on NVIDIA AI Blueprints, comprehensive reference workflows that accelerate AI application development and deployment and feature NVIDIA acceleration libraries, software development kits and NIM microservices for AI agents, digital twins and more.
Get started with NIM on AWS today
Developers can deploy NVIDIA NIM microservices on AWS according to their unique needs and requirements, using NVIDIA-optimized inference containers across a variety of AWS services to achieve high-performance AI.
Visit the NVIDIA API catalog to try out more than 100 NIM-optimized models, then request either a developer license or a 90-day NVIDIA AI Enterprise trial license to start deploying the microservices on AWS services. Developers can also explore NIM microservices on AWS Marketplace, Amazon Bedrock Marketplace or Amazon SageMaker JumpStart.
See the notice regarding software product information.