DeepSeek-R1 Model Now Available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart


Today, we are thrilled to announce that the DeepSeek-R1 distilled Llama and Qwen models are available through Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. With this launch, you can now deploy DeepSeek AI's first-generation frontier model, DeepSeek-R1, along with its distilled versions ranging from 1.5 to 70 billion parameters, to build, experiment with, and responsibly scale your generative AI ideas on AWS.

In this post, we show how to get started with DeepSeek-R1 on Amazon Bedrock Marketplace and SageMaker JumpStart. You can follow similar steps to deploy the distilled versions of the models as well.

Overview of DeepSeek-R1

DeepSeek-R1 is a large language model (LLM) developed by DeepSeek AI that uses reinforcement learning to enhance reasoning capabilities through a multi-stage training process from a DeepSeek-V3-Base foundation. A key distinguishing feature is its reinforcement learning (RL) step, which is used to refine the model's responses beyond the standard pre-training and fine-tuning process. By incorporating RL, DeepSeek-R1 can adapt more effectively to user feedback and objectives, ultimately enhancing both relevance and clarity. In addition, DeepSeek-R1 employs a chain-of-thought (CoT) approach, meaning it is equipped to break down complex queries and reason through them step by step. This guided reasoning process allows the model to produce more accurate, transparent, and detailed responses. The model combines RL-based fine-tuning with CoT capabilities, aiming to generate structured responses while focusing on interpretability and user interaction. With its wide-ranging capabilities, DeepSeek-R1 has captured the industry's attention as a versatile text-generation model that can be integrated into various workflows such as agents, logical reasoning, and data analysis tasks.
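In practice, DeepSeek-R1 emits its chain-of-thought between `<think>` and `</think>` tags before giving the final answer. As a minimal sketch (the sample completion below is invented for illustration), you can separate the reasoning trace from the user-facing response like this:

```python
import re

# Invented sample completion illustrating DeepSeek-R1's output format.
completion = (
    "<think>The user asks for 15% of 80. 0.15 * 80 = 12.</think>"
    "15% of 80 is 12."
)

# Pull out the reasoning trace, then strip it to leave the final answer.
match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
reasoning = match.group(1).strip() if match else ""
answer = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL).strip()

print("Reasoning:", reasoning)
print("Answer:", answer)
```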

DeepSeek-R1 uses a Mixture of Experts (MoE) architecture and is 671 billion parameters in size. The MoE architecture activates only 37 billion parameters per token, enabling efficient inference by routing queries to the most relevant expert "clusters." This approach allows the model to specialize in different problem domains while maintaining overall efficiency. DeepSeek-R1 requires at least 800 GB of HBM memory in FP8 format for inference. In this post, we use an ml.p5e.48xlarge instance to deploy the model. ml.p5e.48xlarge comes with 8 Nvidia H200 GPUs providing 1128 GB of GPU memory.
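As a rough back-of-the-envelope check (our own arithmetic, not an official AWS sizing guide), the 800 GB figure follows from the fact that all 671 billion weights must be resident in GPU memory even though only 37 billion are activated per token:

```python
# Approximate memory math for hosting DeepSeek-R1 in FP8; illustrative only.
total_params_b = 671     # total parameters, in billions
bytes_per_param = 1      # FP8 stores one byte per parameter

weights_gb = total_params_b * bytes_per_param
print(f"FP8 weights alone: ~{weights_gb} GB")   # ~671 GB

# KV cache, activations, and runtime overhead push the requirement to the
# ~800 GB minimum quoted above, which the instance comfortably exceeds.
h200_hbm_gb = 141        # HBM per Nvidia H200 GPU
gpus = 8
print(f"ml.p5e.48xlarge total: {h200_hbm_gb * gpus} GB")  # 1128 GB
```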

DeepSeek-R1 distilled models bring the reasoning capabilities of the main R1 model to more efficient architectures based on popular open models like Qwen (1.5B, 7B, 14B, and 32B) and Llama (8B and 70B). Distillation refers to a process of training smaller, more efficient models to mimic the behavior and reasoning patterns of the larger DeepSeek-R1 model, using it as a teacher.
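For intuition, the classic distillation objective trains the student to match the teacher's softened output distribution. The sketch below shows that generic objective only; DeepSeek's published recipe instead fine-tunes the smaller open models on reasoning data generated by DeepSeek-R1:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions over the vocabulary."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

# Toy usage: a batch of 4 token positions over a 32,000-token vocabulary.
student = torch.randn(4, 32_000)
teacher = torch.randn(4, 32_000)
print(distillation_loss(student, teacher))
```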

You can deploy the DeepSeek-R1 model either through SageMaker JumpStart or Bedrock Marketplace. Because DeepSeek-R1 is an emerging model, we recommend deploying it with guardrails in place. In this post, we use Amazon Bedrock Guardrails to introduce safeguards, prevent harmful content, and evaluate models against key safety criteria. At the time of writing, for DeepSeek-R1 deployments on SageMaker JumpStart and Bedrock Marketplace, Bedrock Guardrails supports only the ApplyGuardrail API. You can create multiple guardrails tailored to different use cases and apply them to the DeepSeek-R1 model, improving user experiences and standardizing safety controls across your generative AI applications.
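Because only the ApplyGuardrail API is supported for these deployments, you invoke the guardrail yourself before sending a prompt to the model and again on the model's output. Here is a minimal boto3 sketch, assuming you have already created a guardrail (the ID and version below are placeholders):

```python
import boto3

# Placeholders: substitute the ID and version of a guardrail you created.
GUARDRAIL_ID = "your-guardrail-id"
GUARDRAIL_VERSION = "1"

bedrock_runtime = boto3.client("bedrock-runtime")

def passes_guardrail(text: str, source: str) -> bool:
    """Screen text with the guardrail; source is 'INPUT' for user prompts
    or 'OUTPUT' for model responses."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source=source,
        content=[{"text": {"text": text}}],
    )
    return response["action"] != "GUARDRAIL_INTERVENED"

prompt = "Tell me about large language models."
if passes_guardrail(prompt, "INPUT"):
    # Invoke the DeepSeek-R1 endpoint here, then screen the completion:
    # passes_guardrail(model_output, "OUTPUT")
    pass
else:
    print("Prompt blocked by guardrail.")
```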

Prerequisites

To deploy the DeepSeek-R1 model, you need access to an ml.p5e instance. To check whether you have quotas for P5e, open the Service Quotas console and under AWS Services, choose Amazon SageMaker, and confirm that you're using ml.p5e.48xlarge for endpoint usage. Make sure that you have at least one ml.p5e.48xlarge instance in the AWS Region where you are deploying. To request a quota increase, create a limit increase request and reach out to your AWS account team.
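You can also check the quota programmatically. A sketch using the Service Quotas API follows; the quota name string matches the console label at the time of writing and may differ in your account:

```python
import boto3

client = boto3.client("service-quotas")

# List SageMaker quotas and look for the P5e endpoint-usage entry.
paginator = client.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if "ml.p5e.48xlarge for endpoint usage" in quota["QuotaName"]:
            print(f"{quota['QuotaName']}: {quota['Value']:.0f}")
            # A value of 0 means you need an increase, for example:
            # client.request_service_quota_increase(
            #     ServiceCode="sagemaker",
            #     QuotaCode=quota["QuotaCode"],
            #     DesiredValue=1,
            # )
```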