Install DeepSeek on Linux in 3 Minutes

DeepSeek, founded in 2023 by Liang Wenfeng, is a Chinese artificial intelligence company that develops open-source large language models (LLMs). Their flagship model, DeepSeek-R1, is gaining popularity for its advanced reasoning capabilities, with performance comparable to OpenAI's o1 across tasks like math, code, and general reasoning. In this guide, we’ll walk you through installing DeepSeek-R1 on a Linux system, along with important details about the model variants.

At the end of 2023, I wrote a similar article which you might find interesting: Install AI Models on Linux: Discover LLMs and Chatbots for Linux. I will continue to expand on this guide, but for now, here’s a no-nonsense 3-minute guide to get you up and running.

Prerequisites

Before diving in, ensure the following:

  • Operating System: Ubuntu 22.04 or a similar Linux distribution. (Debian or a Debian-based distribution will make your life easier.)
  • Hardware: A modern CPU with at least 16 GB of RAM; a dedicated GPU is recommended. (NVIDIA GPUs are the best tested.)
  • Software: Python 3.8 or later and Git installed on your system. (Both are probably already installed; check first.)
  • Free disk space: At least 10 GB for smaller models; larger models like 671b require significantly more!
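The checks above can be run in a few commands. This is a rough sketch; paths and package names vary by distribution, so adjust as needed:

```shell
# Quick sanity check for the prerequisites above.
python3 --version          # needs 3.8 or later
git --version
# Installed RAM, read from /proc/meminfo (kB -> GB):
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2/1024/1024}' /proc/meminfo
df -h / | tail -1          # free disk space on the root filesystem
```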

Step 1: Install Ollama

Output of curl -fsSL https://ollama.com/install.sh | sh. (Screenshot)

Ollama is a tool designed for running AI models locally. Open your terminal and run:

curl -fsSL https://ollama.com/install.sh | sh

This command downloads and executes the Ollama installation script. During the installation, Ollama will automatically configure itself and start the required services. After the process completes, verify the installation by checking the version:

ollama --version

To check if Ollama is already running, use:

systemctl is-active ollama.service

If the output is active, the service is running, and you can skip to the next step. If it’s not, start it manually:

sudo systemctl start ollama.service

To start the service automatically whenever your system boots, run:

sudo systemctl enable ollama.service

Step 2: Download and Run DeepSeek-R1

DeepSeek Linux install: Screenshot showing how to install DeepSeek.

DeepSeek-R1 includes various distilled* models fine-tuned from Qwen and Llama architectures**, each optimized for specific performance and resource requirements. Here’s how to get started:

To download and run the 7b model, use the command:

ollama run deepseek-r1:7b

If your system has limited resources (like mine: 16 GB of RAM and an 8 GB AMD GPU), choose a smaller model:

  • 1.5b: Minimal resource usage.
  • 7b: Balanced performance and resource requirements.
  • 8b, 14b, 32b: Progressively larger options for higher performance.

The download size for these models varies:

  • 1.5b: ~2.3GB
  • 7b: ~4.7GB
  • 70b: ~40GB+

Visit the DeepSeek Model Library for a complete list of models, their sizes, and details.
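If you are unsure which variant to pick, the choice can be scripted from installed RAM. The thresholds below are my own rough guesses, not official requirements:

```shell
# Rough helper: suggest a DeepSeek-R1 tag based on installed RAM.
# Thresholds are my own guesses, not official requirements.
ram_gb=$(awk '/MemTotal/ {print int($2/1024/1024 + 0.5)}' /proc/meminfo)
if [ "$ram_gb" -ge 32 ]; then tag="14b"
elif [ "$ram_gb" -ge 16 ]; then tag="7b"
else tag="1.5b"
fi
echo "Suggested model: deepseek-r1:$tag"
# ollama pull "deepseek-r1:$tag"   # fetch it without opening a chat
```

Note that ollama pull downloads a model without starting an interactive session, which is handy for pre-fetching over a slow connection.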

Step 3: Begin Prompting DeepSeek

Example prompt sent to DeepSeek running on my Linux PC.
That’s it, done!

Once the run command finishes downloading the model, it automatically starts DeepSeek-R1, meaning there’s nothing left to configure—your setup is complete. You’ve successfully installed DeepSeek on Linux! Go ahead and enter your first prompt.

Any time you would like to launch DeepSeek again, simply repeat the run command.
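If you launch it often, a tiny shell function saves typing. The name dsk and the default 7b tag here are my own choices, not anything Ollama provides:

```shell
# Hypothetical convenience wrapper: "dsk" launches DeepSeek-R1,
# defaulting to the 7b tag. Add it to ~/.bashrc to keep it around.
dsk() { ollama run "deepseek-r1:${1:-7b}"; }
# Usage: dsk         -> runs deepseek-r1:7b
#        dsk 1.5b    -> runs deepseek-r1:1.5b
```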

Listing and Removing Models

ollama list screenshot

To view all models downloaded, run the following command:

ollama list

To remove an installed model and free up disk space, use the following command:

ollama rm deepseek-r1:70b

Replace 70b with the appropriate model size, such as 7b or 8b. This will delete the specified model from your system. Once removed, you can proceed to download and run a different model.
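To see how much space the downloaded models actually occupy, you can check Ollama's storage directories. As far as I know, the systemd service install keeps models under /usr/share/ollama/.ollama, while a per-user install uses ~/.ollama; the sketch below checks both:

```shell
# Check disk usage of Ollama's model stores. The service install keeps
# models under /usr/share/ollama/.ollama; user installs use ~/.ollama.
for d in /usr/share/ollama/.ollama/models "$HOME/.ollama/models"; do
  if [ -d "$d" ]; then
    du -sh "$d"
  fi
done
echo "Model directory check complete."
```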

Using DeepSeek offline on Linux. (Screenshot)
Update: New screenshot added. DeepSeek offline mode on Linux.

Conclusion

With this guide, you’ve learned how to install DeepSeek-R1 on your Linux system and explore its versatile models. Whether you’re running the lightweight 1.5b model or the performance-driven 70b, DeepSeek offers cutting-edge reasoning capabilities directly on your machine. Ready to take the next step? Start experimenting with DeepSeek!

* Distilled models are smaller versions of larger language models, created through the knowledge distillation process. This process involves training a smaller model (student) to mimic a larger, more complex model (teacher). The goal is to transfer the knowledge and reasoning of the larger model into the smaller one, with much less compute.

** DeepSeek-R1 is built on top of the Qwen and Llama architectures, both advanced neural network designs for large language models. The Qwen architecture, by Alibaba, is optimized for natural language understanding and reasoning tasks, and its scalable, modular design underpins high-performance models like Qwen2.5 and Qwen-7B. The Llama architecture, by Meta AI, is efficient in both training and inference; models like Llama 2 7B, 13B, and 70B are state of the art for text generation and reasoning. These architectures are the base for DeepSeek’s fine-tuned and distilled models, such as DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-70B, which inherit their respective designs and are optimized for reasoning tasks and resource efficiency.


Discussion

  1. I was able to fix the crashes. The cause was CPU overclocking, so I have disabled overclocking for now.

    Next, I need to give it another shot (tried about a year ago) to get my AMD GPU working; time to stop overworking the poor CPU! :smile:

  2. If you installed ollama using this method, how do you uninstall it?

  3. Welcome to our Linux Community!

    You can uninstall using:

    sudo systemctl stop ollama
    sudo systemctl disable ollama
    sudo rm /etc/systemd/system/ollama.service
    

    Remove binary:
    sudo rm $(which ollama)

    Clean up leftover models and previously created user and group:

    sudo rm -r /usr/share/ollama
    sudo userdel ollama
    sudo groupdel ollama
    
  4. Nice article! I might try it, but first I want to ask: is it available only in English, or does it support other languages? I would like to chat with it in Italian; please let me know if that’s doable, if you know.
    Thanks.
