sure/app/models/setting.rb
soky srm 8cd109a5b2 Implement support for generic OpenAI API (#213)
* Implement support for generic OpenAI API

- Implements support for routing requests to any OpenAI-compatible provider (DeepSeek, Qwen, vLLM, LM Studio, Ollama).
- Keeps support for plain OpenAI, which uses the newer Responses API
- Uses the /chat/completions API for the generic providers
- If uri_base is not set, falls back to the default OpenAI implementation (see the sketch below).
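A minimal sketch of the routing idea, assuming the ruby-openai gem; build_client is an illustrative helper, not the PR's actual code:

    require "openai"

    # Returns a client pointed at either api.openai.com or a custom
    # OpenAI-compatible endpoint (Ollama, LM Studio, vLLM, ...).
    def build_client
      if Setting.openai_uri_base.present?
        # Generic provider: custom base URI, called via /chat/completions
        OpenAI::Client.new(
          access_token: Setting.openai_access_token,
          uri_base: Setting.openai_uri_base
        )
      else
        # No uri_base configured: default OpenAI endpoint
        OpenAI::Client.new(access_token: Setting.openai_access_token)
      end
    end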

* Fix JSON handling and indentation

* Fix linter indentation error

* Fix tests to set env vars

* Fix updating settings

* Change to prefix checks for OpenAI models

* Fix: check that a model is set when a custom URI is configured

* Change chat to sync calls

Some local models don't support streaming, so revert to synchronous calls for the generic OpenAI API.
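A rough illustration of the difference, again assuming the ruby-openai gem; the model name and messages are placeholders:

    require "openai"

    client = OpenAI::Client.new(access_token: ENV["OPENAI_ACCESS_TOKEN"])
    messages = [{ role: "user", content: "Categorize this transaction" }]

    # Streaming: chunks arrive via a callback; some local models reject this.
    client.chat(parameters: {
      model: "gpt-4.1",
      messages: messages,
      stream: proc { |chunk, _bytesize|
        print chunk.dig("choices", 0, "delta", "content").to_s
      }
    })

    # Synchronous: one request, one complete response (used for generic providers).
    response = client.chat(parameters: { model: "gpt-4.1", messages: messages })
    puts response.dig("choices", 0, "message", "content")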

* Fix tests

* Fix tests

* Fix GPT-5 message extraction

- Finds the message output by filtering for "type" == "message" instead of assuming it's at index 0
- Safely extracts the text using safe navigation operators (&.)
- Raises a clear error if no message content is found
- Parses the JSON as before (sketched below)
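In Responses API terms, the extraction now looks roughly like this; the surrounding method is paraphrased, and raw stands for the parsed response hash:

    require "json"

    # GPT-5 style responses may include a "reasoning" item before the
    # "message" item, so filter by type instead of assuming index 0.
    message = raw["output"]&.find { |item| item["type"] == "message" }

    # Safe navigation (&.) guards each step of the extraction.
    text = message&.dig("content")&.find { |c| c["type"] == "output_text" }&.dig("text")

    raise "No message content found in OpenAI response" if text.nil?

    parsed = JSON.parse(text)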

* Add more Langfuse logging

- Add Langfuse to the auto-categorizer and merchant detector
- Fix monitoring on streaming chat responses
- Add Langfuse traces for model errors as well

* Update app/models/provider/openai.rb

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: soky srm <sokysrm@gmail.com>

* Handle nil function results explicitly
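A small sketch of what this can mean in practice; serialize_function_result is a hypothetical helper, not the PR's actual code:

    require "json"

    # Tool calls can legitimately return nil; send an explicit JSON null
    # instead of letting a bare nil propagate into the outgoing payload.
    def serialize_function_result(result)
      return "null" if result.nil?
      result.to_json
    end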

* Expose some config vars

* Linter and nitpick comments

* Drop back to `gpt-4.1` as default for now

* Linter

* Fix for strict tool schema in Gemini

- This fixes tool calling via Gemini's OpenAI-compatible API
- Fix for the getTransactions function; its page-size parameter is not used (a strict-schema sketch follows)
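For reference, a strict-mode tool definition shaped roughly like the one involved; the function name comes from the bullet above, while the description and properties are illustrative:

    GET_TRANSACTIONS_TOOL = {
      type: "function",
      function: {
        name: "getTransactions",
        description: "Fetch the user's transactions",
        strict: true,
        parameters: {
          type: "object",
          properties: {
            page: { type: "integer", description: "Page number to fetch" }
          },
          # Strict schemas must list every property as required and forbid
          # unknown keys, so the unused page-size parameter is dropped
          # from the schema entirely.
          required: ["page"],
          additionalProperties: false
        }
      }
    }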

---------

Signed-off-by: soky srm <sokysrm@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Juan José Mata <juanjo.mata@gmail.com>
2025-10-22 16:02:50 +02:00

28 lines
1.3 KiB
Ruby

# Dynamic settings the user can change within the app (helpful for self-hosting)
class Setting < RailsSettings::Base
  class ValidationError < StandardError; end

  cache_prefix { "v1" }

  field :twelve_data_api_key, type: :string, default: ENV["TWELVE_DATA_API_KEY"]

  field :openai_access_token, type: :string, default: ENV["OPENAI_ACCESS_TOKEN"]
  field :openai_uri_base, type: :string, default: ENV["OPENAI_URI_BASE"]
  field :openai_model, type: :string, default: ENV["OPENAI_MODEL"]

  field :brand_fetch_client_id, type: :string, default: ENV["BRAND_FETCH_CLIENT_ID"]

  field :require_invite_for_signup, type: :boolean, default: false
  field :require_email_confirmation, type: :boolean, default: ENV.fetch("REQUIRE_EMAIL_CONFIRMATION", "true") == "true"

  # Validates OpenAI configuration: requires a model when a custom URI base is set
  def self.validate_openai_config!(uri_base: nil, model: nil)
    # Use provided values or fall back to current settings
    uri_base_value = uri_base.nil? ? openai_uri_base : uri_base
    model_value = model.nil? ? openai_model : model

    # If a custom URI base is set, a model must also be set
    if uri_base_value.present? && model_value.blank?
      raise ValidationError, "OpenAI model is required when custom URI base is configured"
    end
  end
end
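A quick usage sketch of the validation hook; the Ollama-style URL and model name are placeholders:

    # Both custom URI base and model supplied: passes.
    Setting.validate_openai_config!(uri_base: "http://localhost:11434/v1", model: "llama3")

    # Custom URI base but no model (argument nil and the openai_model
    # setting unset): raises Setting::ValidationError.
    Setting.validate_openai_config!(uri_base: "http://localhost:11434/v1", model: nil)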