With the Ollama project it's easy to host our own AI models locally.
Set up a bring-your-own-key (BYOK) connection to the Ollama server, then see whether we can use StarCoder2 for code completion and Llama models for chat.
Does it work at all? What do we need to fix to make it better?
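As a starting point, a local setup might look like the sketch below. The model tags (`starcoder2:3b`, `llama3`) are assumptions; swap in whichever sizes fit your hardware. The server and API endpoint are Ollama's defaults.

```shell
# Start the Ollama server (listens on localhost:11434 by default)
ollama serve &

# Pull the models to try (tags are assumptions; adjust to taste)
ollama pull starcoder2:3b   # code completion
ollama pull llama3          # chat

# Smoke test: request a completion via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "starcoder2:3b",
  "prompt": "def fibonacci(n):",
  "stream": false
}'
```

For BYOK clients that expect an OpenAI-style endpoint, Ollama also exposes an OpenAI-compatible API under `/v1`, so the base URL would be `http://localhost:11434/v1` with any placeholder key.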