Mirror of https://github.com/we-promise/sure.git
Synced 2026-05-09 21:54:58 +00:00
05ef8bd9e744dc8aa48fb500f3a9cfd285a1f68f

11 Commits

ccd6a53071 | fix(chat): eager pending AssistantMessage to fix Turbo subscribe race (#1657) (#1658)

* fix(chat): persist eager pending assistant message to fix subscribe race

  When the LLM replies in ~1-2s the assistant message broadcast could fire before the client's Turbo stream subscription was established, leaving the UI stuck on the thinking indicator while the response was already persisted.

  Create the AssistantMessage as `pending` synchronously in `Chat#ask_assistant_later`, so it is rendered server-side on the chat show page with a "Thinking ..." inline placeholder. The worker then finds and updates the existing row via `append_text!`, which flips the status to `complete` and broadcasts updates against a DOM id that is already in the page, so no race is possible. On error, the placeholder is destroyed if no content streamed, otherwise demoted to `failed`.

  Replaces the standalone thinking indicator partial and the `Assistant::Broadcastable` thinking helpers, both now redundant.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(chat): bind each assistant job to its specific pending placeholder

  Addressing review feedback on #1658:

  1. The pending placeholder lookup based on `last pending` was racy: back-to-back user messages would let one job fill another job's placeholder. Pass the placeholder through the job arguments (`AssistantResponseJob.perform_later(user_message, pending)`) so each turn is bound to its own row.
  2. In `Assistant::External#respond_to`, the configured/authorized guards raise before the local was bound, leaving rescue cleanup with `nil` and the placeholder visible forever. Bind the parameter first so cleanup can destroy it on the misconfigured path.

  The kwarg defaults to nil so the API#retry path (`AssistantResponseJob.perform_later(new_message)`) and the model-level test calls continue to work: they fall back to an in-memory new message, restoring the original test count assertions.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(chat): i18n the pending assistant placeholder string

  Move the hardcoded "Thinking ..." indicator into the locale file per CLAUDE.md i18n guidelines. With i18n.fallbacks enabled, non-en locales fall back to English until translated.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Add thinking label translations
* Fix chat pending assistant expectations
* Fix external assistant pending test lookup
* Scope chat stream targets per chat
* Update message broadcast target tests

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
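
A minimal Ruby sketch of the flow this commit describes. Only `Chat#ask_assistant_later`, the `pending`/`failed` statuses, `append_text!`, and `AssistantResponseJob.perform_later(user_message, pending)` are named in the commit; every other class, column, and method below is an illustrative assumption, not the repository's actual code.

```ruby
# Hedged sketch, not the shipped implementation.
class Chat < ApplicationRecord
  has_many :messages, dependent: :destroy

  def ask_assistant_later(user_message)
    # Persist the placeholder synchronously so its DOM id is already rendered on the
    # chat page; later broadcasts update an element that exists, so no Turbo race.
    pending = messages.create!(type: "AssistantMessage", status: "pending", content: "")
    AssistantResponseJob.perform_later(user_message, pending)
    pending
  end
end

class AssistantResponseJob < ApplicationJob
  # `pending` defaults to nil so older call sites (e.g. the API retry path) keep
  # working and fall back to an in-memory message.
  def perform(user_message, pending = nil)
    chat = user_message.chat
    placeholder = pending || chat.messages.build(type: "AssistantMessage", status: "pending")
    Assistant.for_chat(chat).respond_to(user_message, placeholder)
  rescue StandardError
    # Destroy the placeholder if nothing streamed, otherwise demote it to failed.
    if placeholder&.persisted?
      placeholder.content.blank? ? placeholder.destroy : placeholder.update(status: "failed")
    end
    raise
  end
end
```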

7b2b1dd367 | Rebase PR #784 and fix OpenAI model/chat regressions (#1384)

* Wire conversation history through OpenAI responses API
* Fix RuboCop hash brace spacing in assistant tests
* Pipelock ignores
* Batch fixes

---------

Co-authored-by: sokiee <sokysrm@gmail.com>

c09362b880 | Check for pending invitations before creating new Family during SSO log in/sign up (#1171)

* Check for pending invitations before creating new Family during SSO account creation

  When a user signs in via Google SSO and doesn't have an account yet, the system now checks for pending invitations before creating a new Family. If an invitation exists, the user joins the invited family instead.

  - OidcAccountsController: check Invitation.pending in link/create_user
  - API AuthController: check pending invitations in sso_create_account
  - SessionsController: pass has_pending_invitation to mobile SSO callback
  - Web view: show "Accept Invitation" button when invitation exists
  - Flutter: show "Accept Invitation" tab/button when invitation pending

  https://claude.ai/code/session_019Tr6edJa496V1ErGmsbqFU

* Fix external assistant tests: clear Settings cache to prevent test pollution

  The tests relied solely on with_env_overrides to clear configuration, but rails-settings-cached may retain stale Setting values across tests when the cache isn't explicitly invalidated. Ensure both ENV vars AND Setting values are cleared with Setting.clear_cache before assertions.

  https://claude.ai/code/session_019Tr6edJa496V1ErGmsbqFU

---------

Co-authored-by: Claude <noreply@anthropic.com>
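
A short sketch of the invitation check described above. `Invitation.pending` and the "join the invited family instead of creating a new one" behavior come from the commit; the helper name and the surrounding controller wiring are illustrative assumptions.

```ruby
# Hypothetical helper used during SSO account creation (not the repo's exact code).
def family_for_new_sso_user(email)
  invitation = Invitation.pending.find_by(email: email.to_s.downcase)
  if invitation
    # A pending invitation exists: attach the new user to the inviting family.
    { family: invitation.family, invitation: invitation }
  else
    # No invitation: fall back to creating a brand-new Family as before.
    { family: Family.create!, invitation: nil }
  end
end
```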

84bfe5b7ab | Add external AI assistant with Pipelock security proxy (#1069)

* feat(helm): add Pipelock ConfigMap, scanning config, and consolidate compose
  - Add ConfigMap template rendering DLP, response scanning, MCP input/tool scanning, and forward proxy settings from values
  - Mount ConfigMap as /etc/pipelock/pipelock.yaml volume in deployment
  - Add checksum/config annotation for automatic pod restart on config change
  - Gate HTTPS_PROXY/HTTP_PROXY env injection on forwardProxy.enabled (skip in MCP-only mode)
  - Use hasKey for all boolean values to prevent Helm `default` from swallowing false
  - Single source of truth for ports (forwardProxy.port/mcpProxy.port)
  - Pipelock-specific imagePullSecrets with fallback to app secrets
  - Merge standalone compose.example.pipelock.yml into compose.example.ai.yml
  - Add pipelock.example.yaml for Docker Compose users
  - Add exclude-paths to CI workflow for locale file false positives

* Add external assistant support (OpenAI-compatible SSE proxy)

  Allow self-hosted instances to delegate chat to an external AI agent via an OpenAI-compatible streaming endpoint. Configurable per-family through the Settings UI or the ASSISTANT_TYPE env override.
  - Assistant::External::Client: SSE streaming HTTP client (no new gems)
  - Settings UI with type selector, env lock indicator, config status
  - Helm chart and Docker Compose env var support
  - 45 tests covering client, config, routing, controller, integration

* Add session key routing, email allowlist, and config plumbing

  Route to the actual OpenClaw session via the x-openclaw-session-key header instead of creating isolated sessions. Gate external assistant access behind an email allowlist (EXTERNAL_ASSISTANT_ALLOWED_EMAILS env var). Plumb session_key and allowedEmails through the Helm chart, compose, and env template.

* Add HTTPS_PROXY support to External::Client for Pipelock integration

  Net::HTTP does not auto-read the HTTPS_PROXY/HTTP_PROXY env vars (unlike Faraday). Explicitly resolve the proxy from the environment in build_http so outbound traffic to the external assistant routes through Pipelock's forward proxy when enabled. Respects NO_PROXY for internal hosts.

* Add UI fields for external assistant config (Setting-backed with env fallback)

  Follow the same pattern as the OpenAI settings: database-backed Setting fields with env var defaults. Self-hosters can now configure the external assistant URL, token, and agent ID from the browser (Settings > Self-Hosting > AI Assistant) instead of requiring env vars. Fields are disabled when the corresponding env var is set.

* Improve external assistant UI labels and add help text

  Change the placeholder to a generic OpenAI-compatible URL pattern. Add help text under each field explaining where the values come from: URL from the agent provider, token for authentication, agent ID for multi-agent routing.

* Add external assistant docs and fix URL help text

  Add an External AI Assistant section to docs/hosting/ai.md covering setup (UI and env vars), how it works, Pipelock security scanning, access control, and a Docker Compose example. Drop "chat completions" jargon from the URL help text.

* Harden external assistant: retry logic, disconnect UI, error handling, and test coverage
  - Add retry with backoff for transient network errors (no retry after streaming starts)
  - Add disconnect button with confirmation modal in self-hosting settings
  - Narrow rescue scope with fallback logging for unexpected errors
  - Safe cleanup of partial responses on stream interruption
  - Gate ai_available? on family assistant_type instead of OR-ing all providers
  - Truncate conversation history to last 20 messages
  - Proxy-aware HTTP client with NO_PROXY support
  - Sanitize protocol to use generic headers (X-Agent-Id, X-Session-Key)
  - Full test coverage for streaming, retries, proxy routing, config, and disconnect

* Exclude external assistant client from Pipelock scan-diff

  False positive: the `@token` instance variable is flagged as "Credential in URL". Temporary workaround until Pipelock supports inline suppression.

* Address review feedback: NO_PROXY boundary fix, SSE done flag, design tokens
  - Fix NO_PROXY matching to require a domain boundary (exact match or .suffix), case-insensitive. Prevents badexample.com matching example.com.
  - Add a done flag to SSE streaming so read_body stops after [DONE]
  - Move MAX_CONVERSATION_MESSAGES to class level
  - Use bg-success/bg-destructive design tokens for status indicators
  - Add a rationale comment for the pipelock scan exclusion
  - Update docs last-updated date

* Address second round of review feedback
  - Allowlist email comparison is now case-insensitive and nil-safe
  - Cap the SSE buffer at 1 MB to prevent memory blowup from malformed streams
  - Don't expose the upstream HTTP response body in user-facing errors (log it instead)
  - Fix frozen string warning on buffer initialization
  - Fix "builtin" typo in docs (should be "built-in")

* Protect completed responses from cleanup, sanitize error messages
  - Don't destroy a fully streamed assistant message if the post-stream metadata update fails (only clean up partial responses)
  - Log raw connection/HTTP errors internally, show generic messages to users to avoid leaking network/proxy details
  - Update test assertions for the new error message wording

* Fix SSE content guard and NO_PROXY test correctness

  Use a nil check instead of present? for SSE delta content to preserve whitespace-only chunks (newlines, spaces) that can occur in code output. Fix the NO_PROXY test to use HTTP_PROXY matching the http:// client URL so the proxy resolution and NO_PROXY bypass logic are actually exercised.

* Forward proxy credentials to Net::HTTP

  Pass proxy_uri.user and proxy_uri.password to Net::HTTP.new so authenticated proxies (http://user:pass@host:port) work correctly. Without this, credentials parsed from the proxy URL were silently dropped. Nil values are safe as positional args when no creds exist.

* Update pipelock integration to v0.3.1 with full scanning config

  Bump the Helm image tag from 0.2.7 to 0.3.1. Add missing security sections to both the Helm ConfigMap and the compose example config: mcp_tool_policy, mcp_session_binding, and tool_chain_detection. These protect the /mcp endpoint against tool injection, session hijacking, and multi-step exfiltration chains. Add version and mode fields to the config files. Enable include_defaults for DLP and response scanning to merge user patterns with the 35 built-in patterns. Remove the redundant --mode CLI flag from the Helm deployment template since mode is now in the config file.
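
A rough Ruby sketch of the proxy handling this PR describes: resolving HTTPS_PROXY/HTTP_PROXY explicitly (Net::HTTP does not read them automatically), honoring NO_PROXY with a domain boundary, and forwarding proxy credentials. Method and helper names are illustrative approximations; only the behavior is taken from the commit messages.

```ruby
require "net/http"
require "uri"

# Hedged approximation of the build_http behavior described above.
def build_http(uri)
  http =
    if (proxy = proxy_from_env) && !no_proxy?(uri.host)
      p = URI.parse(proxy)
      # Pass user/password so authenticated proxies (http://user:pass@host:port) work.
      Net::HTTP.new(uri.host, uri.port, p.host, p.port, p.user, p.password)
    else
      Net::HTTP.new(uri.host, uri.port)
    end
  http.use_ssl = (uri.scheme == "https")
  http
end

def proxy_from_env
  ENV["HTTPS_PROXY"] || ENV["https_proxy"] || ENV["HTTP_PROXY"] || ENV["http_proxy"]
end

def no_proxy?(host)
  entries = (ENV["NO_PROXY"] || ENV["no_proxy"] || "").split(",").map { |e| e.strip.downcase }
  entries.reject(&:empty?).any? do |entry|
    # Require a domain boundary: exact match or ".suffix", case-insensitive,
    # so "badexample.com" does not match a NO_PROXY entry of "example.com".
    host.downcase == entry || host.downcase.end_with?(".#{entry.delete_prefix('.')}")
  end
end
```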

111d6839e0 | Feat/Abstract Assistant into module with registry (#1020)

* Abstract Assistant into module with registry (fixes #1016)
  - Add Assistant module with registry/factory (builtin, external)
  - Assistant.for_chat(chat) routes by family.assistant_type
  - Assistant.config_for(chat) delegates to Builtin for backward compat
  - Assistant.available_types returns registered types
  - Add Assistant::Base (Broadcastable, respond_to contract)
  - Move current behavior to Assistant::Builtin (Provided + Configurable)
  - Add Assistant::External stub for future OpenClaw/WebSocket
  - Migration: add families.assistant_type (default builtin)
  - Family: validate assistant_type inclusion
  - Tests: for_chat routing, available_types, External stub, blank chat guard

* Fix RuboCop layout: indentation in Assistant module and tests
* Move new test methods above private so Minitest discovers them
* Clear thinking indicator in External#respond_to to avoid stuck UI
* Rebase onto upstream main: fix schema to avoid spurious diffs
  - Rebase feature/abstract-assistant-1016 onto we-promise/main
  - Rename migration to 20260218120001 to avoid duplicate version with backfill_crypto_subtype
  - Regenerate schema from upstream + assistant_type only (keeps vector_store_id, realized_gain, etc.)
  - PR schema diff now shows only assistant_type addition and version bump

---------

Co-authored-by: mkdev11 <jaysmth689+github@users.noreply.github.com>
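
The registry/factory shape this PR introduces might look roughly like the sketch below. `Assistant.for_chat`, `Assistant.available_types`, `family.assistant_type`, and the builtin/external types are named in the commit; the hash-based registry and the error handling are illustrative assumptions.

```ruby
# Hedged sketch of the Assistant registry/factory.
module Assistant
  REGISTRY = {
    "builtin"  => "Assistant::Builtin",
    "external" => "Assistant::External"
  }.freeze

  # Types that Family#assistant_type can validate inclusion against.
  def self.available_types
    REGISTRY.keys
  end

  # Route a chat to the assistant implementation configured on its family.
  def self.for_chat(chat)
    raise ArgumentError, "chat is required" if chat.blank?

    type = chat.family.assistant_type.presence || "builtin"
    REGISTRY.fetch(type).constantize.new(chat)
  end
end
```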

a8f318c3f9 | Fix "Messages is invalid" error for Ollama/custom LLM providers and add comprehensive AI documentation (#225)

* Add comprehensive AI/LLM configuration documentation
* Fix Chat.start! to use default model when model is nil or empty
* Ensure all controllers use Chat.default_model for consistency
* Move AI doc inside `hosting/`
* Probably too much error handling

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: jjmata <187772+jjmata@users.noreply.github.com>
Co-authored-by: Juan José Mata <juanjo.mata@gmail.com>
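
One plausible reading of the Chat.start! fix, as a hedged sketch: fall back to Chat.default_model whenever the requested model is nil or empty. Only Chat.start! and Chat.default_model are named in the commit; the column and parameter names below are assumptions.

```ruby
class Chat < ApplicationRecord
  # Assumed default; gpt-4.1 matches the default chosen in PR #213 below.
  def self.default_model
    "gpt-4.1"
  end

  def self.start!(prompt, model: nil)
    model = default_model if model.blank? # nil or "" from Ollama/custom providers
    create!(title: prompt.to_s.truncate(80)).tap do |chat|
      chat.messages.create!(type: "UserMessage", content: prompt, ai_model: model)
    end
  end
end
```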

8cd109a5b2 | Implement support for generic OpenAI api (#213)

* Implement support for generic OpenAI API
  - Implements support to route requests to any OpenAI-compatible provider (Deepseek, Qwen, vLLM, LM Studio, Ollama).
  - Keeps support for pure OpenAI and uses the newer Responses API
  - Uses the /chat/completions API for the generic providers
  - If uri_base is not set, uses the default implementation.
* Fix json handling and indentation
* Fix linter error indent
* Fix tests to set env vars
* Fix updating settings
* Change to prefix checking for OAI models
* Fix: check model if custom uri is set
* Change chat to sync calls

  Some local models don't support streaming. Revert to sync calls for the generic OAI API.
* Fix tests
* Fix tests
* Fix for gpt5 message extraction
  - Finds the message output by filtering for "type" == "message" instead of assuming it's at index 0
  - Safely extracts the text using safe navigation operators (&.)
  - Raises a clear error if no message content is found
  - Parses the JSON as before
* Add more Langfuse logging
  - Add Langfuse to auto categorizer and merchant detector
  - Fix monitoring on streaming chat responses
  - Add Langfuse traces also for model errors now
* Update app/models/provider/openai.rb

  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
  Signed-off-by: soky srm <sokysrm@gmail.com>
* handle nil function results explicitly
* Exposing some config vars
* Linter and nitpick comments
* Drop back to `gpt-4.1` as default for now
* Linter
* Fix for strict tool schema in Gemini
  - This fixes tool calling in the Gemini OpenAI API
  - Fix for the getTransactions function: page size is not used.

---------

Signed-off-by: soky srm <sokysrm@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Juan José Mata <juanjo.mata@gmail.com>
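
A hedged sketch of the "gpt5 message extraction" fix described above: find the output item whose "type" is "message" instead of assuming index 0, extract the text with safe navigation, and raise a clear error when nothing is found. The method name and the exact response shape are assumptions based on the OpenAI Responses API, not the provider's verbatim code.

```ruby
# Illustrative extraction from a parsed Responses API payload.
def extract_message_text(parsed)
  message = parsed["output"]&.find { |item| item["type"] == "message" }
  text = message&.dig("content", 0, "text")
  raise "No message content found in model response" if text.nil?

  text
end
```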

cbc653a63a | Track Langfuse sessions and users (#174)

4f5068e7e5 | feat(assistant): improve chat functionality and update tests - refactor configurable model, update OpenAI provider, enhance chat form UI, and improve test coverage (#2316)

Updated model to GPT 4.1

5cf758bd03 | improvements(ai): Improve AI streaming UI/UX interactions + better separation of AI provider responsibilities (#2039)

* Start refactor
* Interface updates
* Rework Assistant, Provider, and tests for better domain boundaries
* Consolidate and simplify OpenAI provider and provider concepts
* Clean up assistant streaming
* Improve assistant message orchestration logic
* Clean up "thinking" UI interactions
* Remove stale class
* Regenerate VCR test responses

2f6b11c18f | Personal finance AI (v1) (#2022)

* AI sidebar
* Add chat and message models with associations
* Implement AI chat functionality with sidebar and messaging system
  - Add chat and messages controllers
  - Create chat and message views
  - Implement chat-related routes
  - Add message broadcasting and user interactions
  - Update application layout to support chat sidebar
  - Enhance user model with initials method
* Refactor AI sidebar with enhanced chat menu and interactions
  - Update sidebar layout with dynamic width and improved responsiveness
  - Add new chat menu Stimulus controller for toggling between chat and chat list views
  - Improve chat list display with recent chats and empty state
  - Extract AI avatar to a partial for reusability
  - Enhance message display and interaction styling
  - Add more contextual buttons and interaction hints
* Improve chat scroll behavior and message styling
  - Refactor chat scroll functionality with Stimulus controller
  - Optimize message scrolling in chat views
  - Update message styling for better visual hierarchy
  - Enhance chat container layout with flex and auto-scroll
  - Simplify message rendering across different chat views
* Extract AI avatar to a shared partial for consistent styling
  - Refactor AI avatar rendering across chat views
  - Replace hardcoded avatar markup with a reusable partial
  - Simplify avatar display in chats and messages views
* Update sidebar controller to handle right panel width dynamically
  - Add conditional width class for right sidebar panel
  - Ensure consistent sidebar toggle behavior for both left and right panels
  - Use specific width class for right panel (w-[375px])
* Refactor chat form and AI greeting with flexible partials
  - Extract message form to a reusable partial with dynamic context support
  - Create flexible AI greeting partial for consistent welcome messages
  - Simplify chat and sidebar views by leveraging new partials
  - Add support for different form scenarios (chat, new chat, sidebar)
  - Improve code modularity and reduce duplication
* Add chat clearing functionality with dynamic menu options
  - Implement clear chat action in ChatsController
  - Add clear chat route to support clearing messages
  - Update AI sidebar with dropdown menu for chat actions
  - Preserve system message when clearing chat
  - Enhance chat interaction with new menu options
* Add frontmatter to project structure documentation
  - Create initial frontmatter for structure.mdc file
  - Include description and configuration options
  - Prepare for potential dynamic documentation rendering
* Update general project rules with additional guidelines
  - Add rule for using `Current.family` instead of `current_family`
  - Include new guidelines for testing, API routes, and solution approach
  - Expand project-specific rules for more consistent development practices
* Add OpenAI gem and AI-friendly data representations
  - Add `ruby-openai` gem for AI integration
  - Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement
  - Include Promptable module in both models
  - Add savings rate calculation method in IncomeStatement
  - Prepare financial models for AI-powered insights and interactions
* Enhance AI Financial Assistant with Advanced Querying and Debugging Capabilities
  - Implement comprehensive AI financial query system with function-based interactions
  - Add detailed debug logging for AI responses and function calls
  - Extend BalanceSheet and IncomeStatement models with AI-friendly methods
  - Create robust error handling and fallback mechanisms for AI queries
  - Update chat and message views to support debug mode and enhanced rendering
  - Add AI query routes and initial test coverage for financial assistant
* Refactor AI sidebar and chat layout with improved structure and comments
  - Remove inline AI chat from application layout
  - Enhance AI sidebar with more semantic HTML structure
  - Add descriptive comments to clarify different sections of chat view
  - Improve flex layout and scrolling behavior in chat messages container
  - Optimize message rendering with more explicit class names and structure
* Add Markdown rendering support for AI chat messages
  - Implement `markdown` helper method in ApplicationHelper using Redcarpet
  - Update message view to render AI messages with Markdown formatting
  - Add comprehensive Markdown rendering options (tables, code blocks, links)
  - Enhance AI Financial Assistant prompt to encourage Markdown usage
  - Remove commented Markdown CSS in Tailwind application stylesheet
* Missing comma
* Enhance AI response processing with chat history context
* Improve AI debug logging with payload size limits and internal message flag
* Enhance AI chat interaction with improved thinking indicator and scrolling behavior
* Add AI consent and enable/disable functionality for AI chat
* Upgrade Biome and refactor JavaScript template literals
  - Update @biomejs/biome to latest version with caret (^) notation
  - Refactor AI query and chat controllers to use template literals
  - Standardize npm scripts formatting in package.json
* Add beta testing usage note to AI consent modal
* Update test fixtures and configurations for AI chat functionality
  - Add family association to chat fixtures and tests
  - Set consistent password digest for test users
  - Enable AI for test users
  - Add OpenAI access token for test environment
  - Update chat and user model tests to include family context
* Simplify data model and get tests passing
* Remove structure.mdc from version control
* Integrate AI chat styles into existing prose pattern
* Match Figma design spec, implement Turbo frames and actions for chats controller
* AI rules refresh
* Consolidate Stimulus controllers, thinking state, controllers, and views
* Naming, domain alignment
* Reset migrations
* Improve data model to support tool calls and message types
* Tool calling tests and fixtures
* Tool call implementation and test
* Get assistant test working again
* Test updates
* Process tool calls within provider
* Chat UI back to working state again
* Remove stale code
* Tests passing
* Update openai class naming to avoid conflicts
* Reconfigure test env
* Rebuild gemfile
* Fix naming conflicts for ChatResponse
* Message styles
* Use OpenAI conversation state management
* Assistant function base implementation
* Add back thinking messages, clean up error handling for chat
* Fix sync error when security price has bad data from provider
* Add balance sheet function to assistant
* Add better function calling error visibility
* Add income statement function
* Simplify and clean up "thinking" interactions with Turbo frames
* Remove stale data definitions from functions
* Ensure VCR fixtures working with latest code
* basic stream implementation
* Get streaming working
* Make AI sidebar wider when left sidebar is collapsed
* Get tests working with streaming responses
* Centralize provider error handling
* Provider data boundaries

---------

Co-authored-by: Josh Pigford <josh@joshpigford.com>
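
The Markdown helper mentioned in this PR ("Implement `markdown` helper method in ApplicationHelper using Redcarpet") could look roughly like the sketch below; the specific Redcarpet options are illustrative assumptions chosen to match the described tables/code blocks/links support, not the repository's exact configuration.

```ruby
# Hedged sketch of a Redcarpet-backed markdown helper.
module ApplicationHelper
  def markdown(text)
    renderer = Redcarpet::Render::HTML.new(
      filter_html: true,                                        # strip raw HTML from model output
      hard_wrap: true,
      link_attributes: { target: "_blank", rel: "noopener" }
    )
    parser = Redcarpet::Markdown.new(
      renderer,
      tables: true,
      fenced_code_blocks: true,
      autolink: true
    )
    parser.render(text.to_s).html_safe
  end
end
```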