Installation
```shell
# Prerequisites for the HTTPS download (needed on minimal Debian/Ubuntu images)
apt update
apt install -y ca-certificates

# Install Ollama with the official script
curl -fsSL https://ollama.com/install.sh | sh
```
Verifying:
Check if the server is running:

```shell
curl http://localhost:11434/api/tags
```
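If the server is up, the tags endpoint returns JSON describing your local models. A small sketch that prints just the model names (assumes `jq` is installed; the fallback message is illustrative):

```shell
# Print only the model names from the tags endpoint (requires jq);
# prints a notice instead if the server is not reachable
if curl -sf http://localhost:11434/api/tags -o /tmp/tags.json; then
  jq -r '.models[].name' /tmp/tags.json
else
  echo "Ollama server is not running"
fi
```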
Setting Environment Variables:

```shell
export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
export OLLAMA_GPU_MEMORY=4096
export OLLAMA_CPU_ONLY=true
```
Pulling a Model:
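A minimal example; `phi` is just an illustrative tag, and any model from the Ollama library works:

```shell
# Download a model from the Ollama library ("phi" is a small example tag)
ollama pull phi
```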
Listing Models:
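To see what is already downloaded:

```shell
# Show every locally downloaded model with its size and last-modified time
ollama list
```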
Running your first inference:
```shell
ollama run phi "Explain quantum computing in simple terms"
```
Running multi-turn conversation (interactive session):
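Running the model without a prompt drops you into an interactive session that keeps context between turns (`phi` is again an example tag):

```shell
# Start an interactive chat; conversation context is kept between turns.
# Type /bye (or press Ctrl+D) to leave the session.
ollama run phi
```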
Stopping a running model:
```shell
ollama ps
ollama stop phi
```
Experimenting with different models:
```shell
ollama pull gemma:2b
ollama pull llama2
ollama pull mistral
ollama pull phi
```
Commands Overview
| Command | Description |
|---|---|
| `ollama run` | Run a model in interactive mode or with a single prompt |
| `ollama pull` | Download a model |
| `ollama list` | List downloaded models |
| `ollama rm` | Remove a model |
| `ollama cp` | Copy a model |
| `ollama ps` | List running models |
| `ollama stop` | Stop a running model |
| `ollama serve` | Start the Ollama server |
| `ollama create` | Create a custom model from a Modelfile |
Using System Prompts
```shell
cat > modelfile-pentest << 'EOF'
FROM mistral
SYSTEM You are a cybersecurity expert specializing in penetration testing. Always format your responses with markdown and include practical examples.
EOF
```
```shell
ollama create pentest-expert -f modelfile-pentest
ollama run pentest-expert
```
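The custom model can also be queried over the local REST API once the server is running; the prompt text here is just an example:

```shell
# Query the custom model through the Ollama REST API
# (server must be running; the prompt is illustrative)
curl -s http://localhost:11434/api/generate -d '{
  "model": "pentest-expert",
  "prompt": "Explain the phases of a penetration test",
  "stream": false
}'
```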
Using Multiple Prompts
```shell
for model in phi mistral gemma:2b; do
  echo "Evaluating $model..."
  cat test_questions.txt | while read -r question; do
    echo "Q: $question"
    # Create a temporary file with just the question
    echo "$question" > temp_question.txt
    # Time the model running with input from the file
    time ollama run $model < temp_question.txt > /dev/null
    echo "---"
  done
  rm -f temp_question.txt
  echo "===================="
done
```