
Command Line AI App with Python and Ollama


This script creates a command line app that lets you ask AI questions using the Ollama framework. For this to work, Ollama must be installed and running on your system, and the model must be pulled.


We import the ollama and os modules.


We then create an injection string to tell Ollama how to answer our questions. Asking Ollama to answer in fewer than 25 words is useful so that you don't get back too much text.
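The script below appends the injection to each user prompt. As a side note, chat-style APIs like Ollama's also accept a message with the 'system' role, which steers every reply without touching the user's text. A minimal sketch of that alternative (build_messages is a hypothetical helper, not part of the script below):

```python
injection = 'answer in fewer than 25 words'

def build_messages(query):
    # Instead of appending the injection to the user's prompt,
    # send it once as a 'system' message ahead of the question.
    return [
        {'role': 'system', 'content': injection},
        {'role': 'user', 'content': query},
    ]

print(build_messages('What is Python?'))
```

Either approach works with ollama.chat(); the script in this post uses the simpler string-concatenation style.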


We then create an ai() function that takes a query. Inside it we call the chat() function and specify phi as the model. This is a small model that should work on almost any system; you can substitute a different model.


We then pass it the messages list, using an f-string for the content to concatenate our query with the injection. Finally, we return the response.
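To see what that concatenation produces, here is the f-string on its own with a sample question:

```python
injection = 'answer in fewer than 25 words'
query = 'Why is the sky blue?'

# The f-string joins the question and the injection into one prompt
prompt = f'{query} - {injection}'
print(prompt)  # → Why is the sky blue? - answer in fewer than 25 words
```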


For the main script we create a while True loop so that we can keep asking questions. We use the input() function to assign the user's question to the query variable.


We then send the query to the ai() function and assign the result to the response variable.


We then use the system() function from the os module to clear the screen.
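Note that 'clear' is the Unix command; on Windows the equivalent is 'cls'. A small portable sketch, if you want the script to clear the screen on either platform:

```python
import os

def clear_command():
    # 'clear' works on Unix-like systems; Windows uses 'cls' instead
    return 'cls' if os.name == 'nt' else 'clear'

os.system(clear_command())
```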


Finally we print the original query on one line, and the AI response on the next.



Install Ollama Framework on your system

Install Ollama on Your System: https://ollama.com

Pull Phi LLM Model

ollama pull phi

Install Ollama Module for Python

python3 -m pip install ollama

import ollama
import os

# Appended to every prompt to keep responses short
injection = 'answer in fewer than 25 words'

def ai(query):
    # Send the query (plus the injection) to the phi model
    response = ollama.chat(
        model='phi',
        messages=[{'role': 'user',
                   'content': f'{query} - {injection}'}])
    return response

while True:
    query = input('How can I help you? ')
    response = ai(query)
    os.system('clear')  # clears the terminal ('cls' on Windows)
    print(query)
    print(response['message']['content'])
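As written, the loop runs until you kill the script. One possible refinement (a sketch, not part of the original script) is to wrap the loop in a function that exits cleanly when the user types 'quit' or presses Ctrl+C; the ai, ask, and show parameters are hypothetical hooks that default to the real ai() function, input(), and print():

```python
def chat_loop(ai, ask=input, show=print):
    # Keep asking questions until the user types 'quit' or hits Ctrl+C
    try:
        while True:
            query = ask('How can I help you? ')
            if query.strip().lower() == 'quit':
                break
            response = ai(query)
            show(query)
            show(response['message']['content'])
    except KeyboardInterrupt:
        show('Goodbye!')
```

Because the dependencies are parameters, the loop can be exercised without Ollama running, and the real script just calls chat_loop(ai).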
