
How To Configure a Large Language Model with Neovim

Table of contents:

1. Why Should You Use A Large Language Model On Your Computer

2. How to Download a Large Language Model (LLM)

3. How to Set Up A Basic Neovim Code Editor

4. Install The Plugin gen.nvim In Your Neovim To Use an LLM

5. Use The Large Language Model In Your Code Editor

I also created a video on this.

1. Why Should You Use A Large Language Model On Your Computer

You can download LLMs and use them as a compressed form of the internet, without needing an internet connection.

This is better than ChatGPT because you don't have to give your data away; everything stays on your machine.

Software developers like me can use LLMs to answer their questions and to understand code written by someone else.

Writers can use them to refine their ideas and rephrase their writing.

2. How To Download a Large Language Model

Ollama is a great tool that packages LLMs in a way you can use on your computer.

You can download the llama3 model.
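If you prefer to fetch the model ahead of time without opening a chat, Ollama also has a pull command:

homeFolder/Code ~ ollama pull llama3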

After you download Ollama, you can use the command below; the model will download and a chat will open up in the terminal.

You can use it as a chat interface to ask any questions.

This chat is also available through an API endpoint, which is what your Neovim setup will use.

homeFolder/Code ~ ollama run llama3
>>> Send a message (/? for help)
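Since the same chat is exposed over HTTP, you can sanity-check the endpoint with curl before touching Neovim. This assumes Ollama's default port, 11434, and its /api/chat endpoint (the same one the gen.nvim configuration below points at):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "stream": false,
  "messages": [{ "role": "user", "content": "Why is the sky blue?" }]
}'

The answer comes back as JSON.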

When you type 'ollama list' like this:

homeFolder/Code ~ ollama list

You will see this output:

NAME             ID              SIZE     MODIFIED
llama3:latest    365c0bd3c000    4.7 GB   3 days ago

3. How To Set Up A Basic Neovim Code Editor

Why Neovim? It is a lightweight code editor that you can customize for yourself.

Here is a good tutorial.
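If you want a bare-minimum starting point, here is a rough sketch of an init.lua that bootstraps the lazy.nvim plugin manager, following lazy.nvim's documented installation snippet. The plugin spec in the next section is written for this plugin manager:

-- ~/.config/nvim/init.lua
-- Bootstrap lazy.nvim if it is not installed yet.
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
if not (vim.uv or vim.loop).fs_stat(lazypath) then
  vim.fn.system({
    "git", "clone", "--filter=blob:none",
    "https://github.com/folke/lazy.nvim.git",
    "--branch=stable", lazypath,
  })
end
vim.opt.rtp:prepend(lazypath)

-- Register plugins; the gen.nvim spec from the next section goes in this list.
require("lazy").setup({
  -- { "David-Kunz/gen.nvim", opts = { ... } },
})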

4. Install The Plugin gen.nvim In Your Neovim To Use an LLM

Once you have set up your init.lua, go to the gen.nvim repository by David Kunz.

You will see this code in the README.md. Add it to your init.lua file. Note that the model is set to 'llama3', the model we downloaded earlier.

Add this to your ~/.config/nvim/init.lua:

{
  "David-Kunz/gen.nvim",
  opts = {
    -- The default model to use.
    model = "llama3",
    -- The host running the Ollama service.
    host = "localhost",
    -- The port on which the Ollama service is listening.
    port = "11434",
    -- Keymap to close the response window.
    quit_map = "q",
    -- Keymap to re-send the current prompt.
    retry_map = "<c-r>",
    -- Function to initialize Ollama: start the service in the background.
    init = function(options) pcall(io.popen, "ollama serve > /dev/null 2>&1 &") end,
    -- The command for the Ollama service. You can use placeholders
    -- $prompt, $model and $body (shellescaped).
    -- This can also be a command string.
    command = function(options)
      local body = { model = options.model, stream = true }
      return "curl --silent --no-buffer -X POST http://"
        .. options.host .. ":" .. options.port .. "/api/chat -d $body"
    end,
    -- Show responses in a split window instead of a floating one.
    display_mode = "split",
    -- Don't echo the submitted prompt above the response.
    show_prompt = false,
    -- Don't print which model is being used.
    show_model = false,
    -- Close the response window automatically when done.
    no_auto_close = false,
    -- Print errors and the command that is run.
    debug = false,
  },
}
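The gen.nvim README also suggests an optional keymap so you can trigger the plugin without typing the command each time:

vim.keymap.set({ "n", "v" }, "<leader>]", ":Gen<CR>")

With this, pressing <leader>] in normal or visual mode runs :Gen.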
5. Use The Large Language Model In Your Code Editor

Once you have added the above to your init.lua file, close Neovim and reopen it.

Let's say you want to ask a general question.

Type a colon (:) and then type 'Gen' in the command line at the bottom of the screen, then press Enter.

[Screenshot: a Neovim window with :Gen typed in the command line at the bottom]

You will see a dropdown box appear as below:

Scroll down and choose 'Chat'.

[Screenshot: a dropdown menu appears over the buffer with these options]

Ask
Change
Change Code
Chat
Enhance Code
Enhance Grammar Spelling
Enhance Wording
Generate
Make Concise
Make List
Make Table
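These menu entries are gen.nvim's built-in prompts, which live in require('gen').prompts. The README shows you can register your own; here is a sketch modeled on its Elaborate_Text example:

-- Add a custom entry to the :Gen menu (modeled on the gen.nvim README).
require('gen').prompts['Elaborate_Text'] = {
  prompt = "Elaborate the following text:\n$text",
  replace = true,
}

Here $text stands for the current selection, and replace = true writes the model's answer over it.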

This will create a prompt input at the bottom of your screen, where you can type your prompt and then press Enter.

[Screenshot: the prompt input at the bottom of the window]

:Prompt: Which is the closest planet to the sun

You will see the answer appear in the right pane like this:

:Prompt: Which is the closest planet to the sun

The closest planet to the Sun is Mercury, with an average distance of about 58 million kilometers (36 million miles). However, if you're asking about the closest planet-like object or body to the Sun, that would be Venus.

Or you could highlight your code in visual mode, type ':Gen', and ask a question, like this:

def pytest_collection_modifyitems(config, items):
    requires_trainable_backend = pytest.mark.skipif(
        backend() == "numpy",
        reason="Trainer not implemented for NumPy backend.",
    )
    for item in items:
        if "requires_trainable_backend" in item.keywords:
            item.add_marker(requires_trainable_backend)

:'<,'>Gen

(Neovim shows the command as :'<,'>Gen because the visual selection adds the '<,'> range automatically.)

The same dropdown menu will appear over your code; this time, choose 'Ask'.

[Screenshot: the same dropdown menu (Ask, Change, Change Code, and so on) shown over the selected code in ~/conftest.py]

Then it will open a prompt input like below.

[Screenshot: the selected code with the prompt input at the bottom]

:Prompt: what does this code do

Your screen will be split.

[Screenshot: a split view with the selected code on the left and the model's answer on the right]

This is a custom hook function in Pytest that modifies the test collection. Specifically, it adds markers to certain test items based on their keywords.

Here is what it does:

1. It defines a marker `requires_trainable_backend` using the `pytest.mark.skipif` function. This marker will be added to test items that require a trainable backend (e.g., TensorFlow or PyTorch).

2. For each item, it checks if the item has a keyword called `"requires_trainable_backend"`. This is done using the `in` operator and the `item.keywords` dictionary.