Using Ollama with CLI

Installation

# Install prerequisites for fetching the script over HTTPS
apt update
apt install -y ca-certificates curl

# Download and run the official install script
curl -fsSL https://ollama.com/install.sh | sh

Verifying:

ollama --version

Check that the server is running:

curl http://localhost:11434/api/tags
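If that request fails, the server is probably not running. You can start it in the foreground with ollama serve, or check the systemd service that the Linux install script registers (the service name ollama is an assumption based on the standard installer):

# Start the server manually in the foreground
ollama serve

# Or check the systemd service set up by the installer
systemctl status ollama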

Setting Environment Variables:

export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
export OLLAMA_GPU_MEMORY=4096
export OLLAMA_CPU_ONLY=true
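These exports only apply to the current shell. If Ollama runs as the systemd service created by the installer, the variables need to go into the service definition instead; a sketch, assuming the service is named ollama:

systemctl edit ollama
# In the override file that opens, add lines such as:
#   [Service]
#   Environment="OLLAMA_MODELS=/usr/share/ollama/.ollama/models"
systemctl daemon-reload
systemctl restart ollama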

Pulling a Model:

ollama pull phi

Listing Models:

ollama list
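To inspect a downloaded model, ollama show prints its details, and the --modelfile flag dumps the Modelfile it was built from:

ollama show phi
ollama show phi --modelfile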

Running your first inference:

ollama run phi "Explain quantum computing in simple terms"
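Since this is an ordinary command, it composes with shell redirection; for example, saving the answer to a file or piping a prompt in via stdin:

ollama run phi "Explain quantum computing in simple terms" > answer.txt
echo "Summarize the OSI model in three sentences" | ollama run phi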

Running multi-turn conversation (interactive session):

ollama run phi
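Inside the session, slash commands typed at the >>> prompt control the REPL; a few useful ones (the temperature value below is just an example):

/?     (list available commands)
/set parameter temperature 0.9     (adjust sampling for this session)
/bye   (exit the session)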

Stopping a running model:

ollama ps
ollama stop phi
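Otherwise a model stays loaded in memory for a few minutes after its last request. The window can also be set per invocation: the --keepalive flag on ollama run accepts a duration, and 0 unloads the model as soon as the response finishes:

ollama run phi "quick sanity check" --keepalive 0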

Experimenting with different models:

ollama pull gemma:2b
ollama pull llama2
ollama pull mistral
ollama pull phi
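Since these are plain shell commands, a small loop pulls them all in one go:

for m in gemma:2b llama2 mistral phi; do
  ollama pull "$m"
done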

Commands Overview

Command          Description
ollama run       Run a model in interactive mode or with a single prompt
ollama pull      Download a model
ollama list      List downloaded models
ollama rm        Remove a model
ollama cp        Copy a model
ollama ps        List running models
ollama stop      Stop a running model
ollama serve     Start the Ollama server
ollama create    Create a custom model from a Modelfile
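As a quick round-trip with the copy and remove commands:

ollama cp phi phi-backup
ollama list
ollama rm phi-backup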

Using System Prompts

cat > modelfile-pentest << 'EOF'
FROM mistral
SYSTEM You are a cybersecurity expert specializing in penetration testing. Always format your responses with markdown and include practical examples.
EOF
ollama create pentest-expert -f modelfile-pentest
ollama run pentest-expert
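The same Modelfile can also pin sampling parameters alongside the system prompt via the PARAMETER directive (the values below are illustrative, not tuned):

cat > modelfile-pentest << 'EOF'
FROM mistral
PARAMETER temperature 0.3
PARAMETER num_ctx 4096
SYSTEM You are a cybersecurity expert specializing in penetration testing. Always format your responses with markdown and include practical examples.
EOF
ollama create pentest-expert -f modelfile-pentest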

Evaluating Multiple Models with Multiple Prompts

for model in phi mistral gemma:2b; do
  echo "Evaluating $model..."
  while read -r question; do
    echo "Q: $question"
    # Write the question to a temporary file so the model's stdin
    # does not consume the remaining questions from the loop
    echo "$question" > temp_question.txt
    # Time the model answering the question, discarding the output
    time ollama run "$model" < temp_question.txt > /dev/null
    echo "---"
  done < test_questions.txt
  rm -f temp_question.txt
  echo "===================="
done