Features Request: Ollama support #55

Open
tysonchamp opened this issue Jan 24, 2025 · 10 comments
Labels
enhancement, self-hosted

Comments

@tysonchamp

Please add ollama support

@fatihbaltaci
Contributor

Hello @tysonchamp, it's on our roadmap. I'll let you know when we add Ollama support.

Thanks.

@kursataktas changed the title from "Features Request" to "Features Request: Ollama support" on Feb 21, 2025
@kursataktas added the enhancement and self-hosted labels on Feb 21, 2025
@mugoosse

+1

@wagnerfnds

+1

@richlysakowski

Anything done on this yet? This is a REALLY important, high-priority requirement for business use.

@fatihbaltaci
Contributor

Hi folks! We’re exploring Ollama support for Gurubase. Could you share your specific use case? For example, is it for offline usage, privacy concerns, or to try other open-source models as the base LLM? Your input may help us prioritize this feature request.

@Xoeseko

Xoeseko commented Mar 27, 2025

For me it is mostly a question of privacy when it comes to supporting Ollama or vLLM; I wouldn't be looking to self-host a RAG system otherwise. It feels like a core tenet of the offering as a whole.

Neither needs to be fully integrated, but if the now-standard OpenAI API were supported and I had the option to select the API URL and perhaps the model, I would already be happy as a starting point.
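As a rough sketch of that starting point (the environment variable names here are hypothetical, not existing Gurubase settings):

import os

from openai import OpenAI

# Hypothetical configuration: any OpenAI-compatible server
# (Ollama, vLLM, ...) can sit behind the base URL.
base_url = os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1")
model = os.environ.get("LLM_MODEL", "llama2")
api_key = os.environ.get("LLM_API_KEY", "ollama")  # Ollama ignores the key

client = OpenAI(base_url=base_url, api_key=api_key)

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)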

@kay0ramon

+1

@wagnerfnds

> Hi folks! We're exploring Ollama support for Gurubase. Could you share your specific use case? For example, is it for offline usage, privacy concerns, or to try other open-source models as the base LLM? Your input may help us prioritize this feature request.

In my case, the company I work for has internal process documents that they do not allow to be sent to OpenAI for privacy reasons. So we run Ollama on a local server.

@joaopalma5

@fatihbaltaci
I haven't set up the project on my machine, but here is how to do it:

Go to src/gurubase-backend/backend/core/models.py L1178

and modify it to:

client = OpenAI(api_key=self.openai_api_key, timeout=10, base_url=self.openai_api_base_url)

from openai import OpenAI

# Point the OpenAI client at Ollama's OpenAI-compatible endpoint.
client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required by the client, but unused by Ollama
)

response = client.chat.completions.create(
    model="llama2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
print(response.choices[0].message.content)

Source: https://ollama.com/blog/openai-compatibility

Then add the base URL to the settings, as sketched below.
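For the settings part, a minimal sketch assuming a Django-style settings module (the setting name is hypothetical; Gurubase's actual names may differ):

import os

# Hypothetical setting: point the OpenAI client at Ollama's
# OpenAI-compatible endpoint when set; fall back to OpenAI otherwise.
OPENAI_API_BASE_URL = os.environ.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1")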

DONE!

@fatihbaltaci
Contributor

@joaopalma5 we're already aware of the OpenAI-compatible endpoint Ollama exposes. We use multiple models behind the scenes (OpenAI for the base LLM, gte-large for embeddings, bge-reranker for reranking, Gemini for summarization, etc.), so plugging Ollama in isn't just a one-line change. We're working on proper Ollama support to ensure the self-hosted experience matches the performance of Gurubase.io. We'll share updates as soon as it's ready.
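To make the scope concrete, here is an illustrative sketch of the stages involved (the mapping only mirrors this comment, not our actual configuration); swapping one client's base_url covers only the first entry:

# Illustrative only: stage names and backends mirror the comment above.
PIPELINE_BACKENDS = {
    "base_llm": "OpenAI",         # chat completions
    "embeddings": "gte-large",    # embedding model
    "reranking": "bge-reranker",  # reranking model
    "summarization": "Gemini",    # summarization
}

# A full Ollama integration would need a compatible backend for each
# stage, not just a different base_url for the chat client.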
