- https://ollama.com/
- https://github.com/open-webui/open-webui
- https://github.com/flexnst/ollama-mcp-bridge
- https://github.com/sparfenyuk/mcp-proxy
- https://github.com/modelcontextprotocol/servers/blob/main/src/fetch/README.md
- Clone the repository and start the docker compose environment:

  ```shell
  git clone https://github.com/flexnst/ollama-mcp-example.git
  cd ollama-mcp-example
  make init
  ```

- Wait for the docker images to be pulled and built
- Go to the Open-WebUI interface: http://localhost:3000
- Select and download your LLM models (tool-capable models are listed at https://ollama.com/search?c=tools):

  ```shell
  make ollama-cli
  ollama pull qwen2.5:7b  # for example
  ```

- Good luck with your AI experiments!
- `make init` - Initialize a new docker compose environment
- `make up` - Start the environment
- `make down` - Stop the environment
- `make ollama-cli` - Open a shell in the ollama docker environment
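The targets above might look roughly like the following. This is a hedged sketch only: the target names come from this README, but the recipe bodies are assumptions, not copied from the repository's actual Makefile.

```makefile
# Sketch of the make targets described above (recipe bodies are assumptions).
init:        ## build images and start the stack for the first time
	docker compose up -d --build

up:          ## start the environment
	docker compose up -d

down:        ## stop the environment
	docker compose down

ollama-cli:  ## open a shell inside the ollama container
	docker compose exec ollama bash
```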
This tool enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.
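To illustrate what that HTML-to-markdown step does, here is a deliberately tiny sketch in Python. The real mcp-server-fetch uses a full-featured converter; this toy version handles only `<h1>` and `<li>` tags and exists purely to show the idea.

```python
# Toy illustration of HTML -> markdown conversion (NOT mcp-server-fetch's
# actual implementation): headings become "# ...", list items become "- ...".
from html.parser import HTMLParser

class ToMarkdown(HTMLParser):
    """Collect text, prefixing headings and list items with markdown markers."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.out.append("# ")
        elif tag == "li":
            self.out.append("- ")

    def handle_endtag(self, tag):
        # Close block-level elements with a newline.
        if tag in ("h1", "p", "li"):
            self.out.append("\n")

    def handle_data(self, data):
        self.out.append(data)

def html_to_markdown(html: str) -> str:
    parser = ToMarkdown()
    parser.feed(html)
    return "".join(parser.out)

print(html_to_markdown("<h1>Title</h1><ul><li>first</li><li>second</li></ul>"))
# prints:
# # Title
# - first
# - second
```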
compose.yml:

```yaml
  mcp-proxy:
    container_name: mcp-proxy
    image: mcp-proxy:latest
    build:
      context: mcp-proxy
      dockerfile: Dockerfile
    command: |
      --pass-environment --host=0.0.0.0 --port=8096 --allow-origin=*
      --transport=sse
      -- uvx mcp-server-fetch --ignore-robots-txt --user-agent='${FETCH_USER_AGENT}'
```

You can add your own tools using uvx; for additional info see:
- https://github.com/sparfenyuk/mcp-proxy#22-example-usage
- https://github.com/modelcontextprotocol/servers
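For example, another MCP server could be exposed by adding a second proxy service following the same pattern as the snippet above. This is a sketch, not part of this repository's actual compose file: the service name, container name, and port 8097 are assumptions (mcp-server-time is a real server from the modelcontextprotocol/servers collection).

```yaml
  # Hypothetical second proxy exposing mcp-server-time over SSE on port 8097.
  mcp-proxy-time:
    container_name: mcp-proxy-time
    image: mcp-proxy:latest
    build:
      context: mcp-proxy
      dockerfile: Dockerfile
    command: |
      --pass-environment --host=0.0.0.0 --port=8097 --allow-origin=*
      --transport=sse
      -- uvx mcp-server-time
```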
MIT.