Michael Wandzik

How to Install Ollama, Download Your First LLM, and Use It via Console

michael 2 months ago

So, you’ve decided to dive into the world of large language models (LLMs) with Ollama—congratulations! You’re about to embark on a journey filled with AI-powered adventures. Let’s get you set up, download your first LLM, and start using it via the console.

Step 1: Installing Ollama

First things first, let’s get Ollama on your system. The installation process is straightforward whether you’re on Windows, macOS, or Linux.

For Linux:

  1. Download and Install:
    Open your terminal and run the following command to download and install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh

    This script will handle the download and installation for you.
  2. Verify Installation:
    Once installed, you can verify by running:

    ollama --version
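If the install succeeded, the version command prints a short version string. On most systemd-based Linux distributions the install script also registers Ollama as a background service, which you can check too (the version number shown below is illustrative, not a real output):

```shell
# Confirm the binary is on your PATH; prints something like
# "ollama version is 0.6.2" (version number is illustrative)
ollama --version

# On systemd-based distributions, the install script registers a
# background service; verify that it is running:
systemctl status ollama
```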

For Windows and macOS:

  1. Download the Installer:
    Head over to the Ollama website and download the installer for your platform.
  2. Run the Installer:
    Execute the downloaded file and follow the on-screen instructions to complete the installation.

Step 2: Downloading Your First LLM

With Ollama installed, it’s time to download your first LLM. Let’s use Meta’s Llama 3 (pulled under the model name “llama3”) as an example.

  1. Open Terminal or Command Prompt:
    Fire up your terminal or command prompt.
  2. Pull the Model:
    Use the following command to download the Llama3 model:

    ollama pull llama3

    This command fetches the model from the Ollama model library and stores it locally on your machine.
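Once the pull completes, `ollama list` shows every model stored locally, which is an easy way to confirm the download. The ID, size, and timestamp below are illustrative, and variant tags (such as `llama3:70b`, assuming that variant is published in the library) can be pulled explicitly:

```shell
# Confirm the download; the output columns below are illustrative
ollama list
# NAME             ID              SIZE      MODIFIED
# llama3:latest    365c0bd3c000    4.7 GB    2 minutes ago

# Variants can be pulled by tag, e.g. a larger build of the same model
ollama pull llama3:70b
```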

Step 3: Using Ollama via Console

Now that you have your model, let’s put it to work!

  1. Run the Model:
    To start using the Llama3 model, run:

    ollama run llama3

    This command will launch the model in a REPL (Read-Eval-Print Loop), where you can interact with it by entering prompts.


  2. Interact with the Model:
    Type your queries or tasks into the console, and watch as the LLM processes and responds to your inputs. To exit the REPL, type /bye or press Ctrl+D.
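The REPL isn’t the only way in. `ollama run` also accepts a prompt directly as an argument, which is handy for scripting, and Ollama serves a local HTTP API (listening on port 11434 by default) that you can call with curl. The prompts below are just examples:

```shell
# One-shot: pass the prompt as an argument instead of entering the REPL
ollama run llama3 "Explain what a REPL is in one sentence."

# The same model is reachable over Ollama's local HTTP API;
# "stream": false returns the full response as a single JSON object
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain what a REPL is in one sentence.",
  "stream": false
}'
```

The one-shot form makes it easy to pipe model output into other tools, and the HTTP API is what editor plugins and web UIs typically build on.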

Conclusion

And there you have it! You’ve successfully installed Ollama, downloaded your first LLM, and started using it via the console. Whether you’re generating text, answering questions, or just exploring the capabilities of LLMs, Ollama makes it easy and fun. Now go forth and conquer the AI world, one prompt at a time!