* Check for pending invitations before creating a new Family during SSO account creation
When a user signs in via Google SSO and doesn't have an account yet, the
system now checks for pending invitations before creating a new Family.
If an invitation exists, the user joins the invited family instead
(sketched below).
- OidcAccountsController: check Invitation.pending in link/create_user
- API AuthController: check pending invitations in sso_create_account
- SessionsController: pass has_pending_invitation to mobile SSO callback
- Web view: show "Accept Invitation" button when invitation exists
- Flutter: show "Accept Invitation" tab/button when invitation pending
https://claude.ai/code/session_019Tr6edJa496V1ErGmsbqFU
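A minimal sketch of the check, assuming `Invitation.pending` is a scope and that invitations are keyed by email; the field names and fallback behavior here are illustrative:

```ruby
# Hypothetical shape of the check in OidcAccountsController#create_user;
# Invitation.pending is from this commit, the rest is illustrative.
def create_user(auth)
  email = auth.info.email
  invitation = Invitation.pending.find_by(email: email)

  if invitation
    # Join the invited family instead of provisioning a fresh one.
    User.create!(email: email, family: invitation.family)
  else
    User.create!(email: email, family: Family.create!)
  end
end
```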
* Fix external assistant tests: clear Settings cache to prevent test pollution
The tests relied solely on with_env_overrides to clear configuration, but
rails-settings-cached may retain stale Setting values across tests when
the cache isn't explicitly invalidated. Ensure both ENV vars AND Setting
values are cleared with Setting.clear_cache before assertions (see the
sketch below).
https://claude.ai/code/session_019Tr6edJa496V1ErGmsbqFU
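A teardown sketch; the test class and setting name are illustrative, while Setting.clear_cache is rails-settings-cached's cache-invalidation call:

```ruby
# Clear both layers between tests: the DB-backed value and the cache.
class ExternalAssistantConfigTest < ActiveSupport::TestCase
  teardown do
    Setting.external_assistant_url = nil  # illustrative setting name
    Setting.clear_cache                   # drop cached values so the next
                                          # test reads fresh state
  end
end
```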
---------
Co-authored-by: Claude <noreply@anthropic.com>
* feat(helm): add Pipelock ConfigMap, scanning config, and consolidate compose
- Add ConfigMap template rendering DLP, response scanning, MCP input/tool
scanning, and forward proxy settings from values
- Mount ConfigMap as /etc/pipelock/pipelock.yaml volume in deployment
- Add checksum/config annotation for automatic pod restart on config change
- Gate HTTPS_PROXY/HTTP_PROXY env injection on forwardProxy.enabled (skip
in MCP-only mode)
- Use hasKey for all boolean values so Helm's `default` function can't swallow explicit `false`
- Single source of truth for ports (forwardProxy.port/mcpProxy.port)
- Pipelock-specific imagePullSecrets with fallback to app secrets
- Merge standalone compose.example.pipelock.yml into compose.example.ai.yml
- Add pipelock.example.yaml for Docker Compose users
- Add exclude-paths to CI workflow for locale file false positives
* Add external assistant support (OpenAI-compatible SSE proxy)
Allow self-hosted instances to delegate chat to an external AI agent
via an OpenAI-compatible streaming endpoint. Configurable per-family
through Settings UI or ASSISTANT_TYPE env override.
- Assistant::External::Client: SSE streaming HTTP client (no new gems; sketched after this list)
- Settings UI with type selector, env lock indicator, config status
- Helm chart and Docker Compose env var support
- 45 tests covering client, config, routing, controller, integration
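A stdlib-only sketch of the SSE loop, assuming an OpenAI-compatible chat/completions stream; the client's real internals are richer (later commits add line buffering across chunks, a 1 MB cap, and a done flag):

```ruby
require "net/http"
require "json"

# Minimal SSE streaming with Net::HTTP (no new gems), in the spirit of
# Assistant::External::Client; internals here are illustrative.
def stream_chat(uri, token, payload)
  req = Net::HTTP::Post.new(uri)
  req["Authorization"] = "Bearer #{token}"
  req["Accept"]        = "text/event-stream"
  req["Content-Type"]  = "application/json"
  req.body = JSON.generate(payload)

  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(req) do |res|
      res.read_body do |chunk|
        chunk.each_line do |line|
          next unless line.start_with?("data:")
          data = line.delete_prefix("data:").strip
          return if data == "[DONE]"                # end-of-stream sentinel
          delta = JSON.parse(data).dig("choices", 0, "delta", "content")
          yield delta unless delta.nil?
        end
      end
    end
  end
end
```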
* Add session key routing, email allowlist, and config plumbing
Route to the actual OpenClaw session via x-openclaw-session-key header
instead of creating isolated sessions. Gate external assistant access
behind an email allowlist (EXTERNAL_ASSISTANT_ALLOWED_EMAILS env var).
Plumb session_key and allowedEmails through Helm chart, compose, and
env template.
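A sketch of the gate, folding in the nil-safety and case-insensitivity that a later review round adds; the method name and open-by-default behavior are assumptions:

```ruby
# Hypothetical gate; EXTERNAL_ASSISTANT_ALLOWED_EMAILS is the env var from
# this commit, everything else is illustrative.
def external_assistant_allowed?(email)
  raw = ENV["EXTERNAL_ASSISTANT_ALLOWED_EMAILS"].to_s
  return true if raw.strip.empty?  # assumed: no allowlist means open access

  raw.split(",").map { |e| e.strip.downcase }.reject(&:empty?)
     .include?(email.to_s.strip.downcase)  # nil-safe, case-insensitive
end
```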
* Add HTTPS_PROXY support to External::Client for Pipelock integration
Net::HTTP does not auto-read HTTPS_PROXY/HTTP_PROXY env vars (unlike
Faraday). Explicitly resolve proxy from environment in build_http so
outbound traffic to the external assistant routes through Pipelock's
forward proxy when enabled. Respects NO_PROXY for internal hosts.
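A resolution sketch for build_http; no_proxy? is a hypothetical helper (a boundary-safe version is sketched under a later commit), and credential forwarding lands in a follow-up below:

```ruby
# Explicit proxy resolution, since Net::HTTP won't do it for HTTPS_PROXY.
def build_http(uri)
  proxy_var = uri.scheme == "https" ? "HTTPS_PROXY" : "HTTP_PROXY"
  proxy_env = ENV[proxy_var] || ENV[proxy_var.downcase]

  if proxy_env && !no_proxy?(uri.host)
    proxy = URI.parse(proxy_env)
    Net::HTTP.new(uri.host, uri.port, proxy.host, proxy.port)
  else
    # An explicit nil proxy address also disables Net::HTTP's own ENV lookup.
    Net::HTTP.new(uri.host, uri.port, nil)
  end
end
```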
* Add UI fields for external assistant config (Setting-backed with env fallback)
Follow the same pattern as OpenAI settings: database-backed Setting
fields with env var defaults. Self-hosters can now configure the
external assistant URL, token, and agent ID from the browser
(Settings > Self-Hosting > AI Assistant) instead of requiring env vars.
Fields disable when the corresponding env var is set.
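The pattern, rails-settings-cached style; the field and env var names here are assumed from the description:

```ruby
# DB-backed settings that default to env vars; names are illustrative.
class Setting < RailsSettings::Base
  field :external_assistant_url,      type: :string, default: ENV["EXTERNAL_ASSISTANT_URL"]
  field :external_assistant_token,    type: :string, default: ENV["EXTERNAL_ASSISTANT_TOKEN"]
  field :external_assistant_agent_id, type: :string, default: ENV["EXTERNAL_ASSISTANT_AGENT_ID"]
end
```

The form then renders each input disabled whenever the corresponding env var is present.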
* Improve external assistant UI labels and add help text
Change placeholder to generic OpenAI-compatible URL pattern. Add help
text under each field explaining where the values come from: URL from
agent provider, token for authentication, agent ID for multi-agent
routing.
* Add external assistant docs and fix URL help text
Add External AI Assistant section to docs/hosting/ai.md covering setup
(UI and env vars), how it works, Pipelock security scanning, access
control, and Docker Compose example. Drop "chat completions" jargon
from URL help text.
* Harden external assistant: retry logic, disconnect UI, error handling, and test coverage
- Add retry with backoff for transient network errors (no retry once streaming starts; sketched below)
- Add disconnect button with confirmation modal in self-hosting settings
- Narrow rescue scope with fallback logging for unexpected errors
- Safe cleanup of partial responses on stream interruption
- Gate ai_available? on family assistant_type instead of OR-ing all providers
- Truncate conversation history to last 20 messages
- Proxy-aware HTTP client with NO_PROXY support
- Sanitize protocol to use generic headers (X-Agent-Id, X-Session-Key)
- Full test coverage for streaming, retries, proxy routing, config, and disconnect
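A backoff sketch under those constraints; the rescued classes, constant, and streaming flag are illustrative:

```ruby
MAX_ATTEMPTS = 3

# Retry transient connection errors, but never after the first streamed
# byte: a partially delivered response must not be replayed.
def with_retries
  attempts = 0
  begin
    yield
  rescue Errno::ECONNREFUSED, Net::OpenTimeout, Net::ReadTimeout
    raise if @streaming_started  # set by the stream callback on first chunk
    attempts += 1
    raise if attempts >= MAX_ATTEMPTS
    sleep(0.5 * (2**attempts))   # exponential backoff: 1s, 2s, ...
    retry
  end
end
```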
* Exclude external assistant client from Pipelock scan-diff
False positive: `@token` instance variable flagged as "Credential in URL".
Temporary workaround until Pipelock supports inline suppression.
* Address review feedback: NO_PROXY boundary fix, SSE done flag, design tokens
- Fix NO_PROXY matching to require a domain boundary (exact match or .suffix),
case-insensitive. Prevents badexample.com matching example.com (sketched below).
- Add done flag to SSE streaming so read_body stops after [DONE]
- Move MAX_CONVERSATION_MESSAGES to class level
- Use bg-success/bg-destructive design tokens for status indicators
- Add rationale comment for pipelock scan exclusion
- Update docs last-updated date
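The boundary rule as a sketch; the helper name is assumed:

```ruby
# Exact match or dot-boundary suffix, case-insensitive: sub.example.com
# matches an example.com entry, badexample.com does not.
def no_proxy?(host)
  h = host.to_s.downcase
  ENV.fetch("NO_PROXY", "").split(",").any? do |entry|
    e = entry.strip.downcase
    next false if e.empty?
    h == e || h.end_with?(".#{e}")
  end
end
```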
* Address second round of review feedback
- Allowlist email comparison is now case-insensitive and nil-safe
- Cap SSE buffer at 1 MB to prevent memory blow-up from malformed streams (sketched below)
- Don't expose upstream HTTP response body in user-facing errors (log it instead)
- Fix frozen string warning on buffer initialization
- Fix "builtin" typo in docs (should be "built-in")
* Protect completed responses from cleanup, sanitize error messages
- Don't destroy a fully streamed assistant message if the post-stream
metadata update fails (only clean up partial responses; see the sketch below)
- Log raw connection/HTTP errors internally, show generic messages
to users to avoid leaking network/proxy details
- Update test assertions for new error message wording
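A guard sketch for the cleanup path; the completion flag and method names are assumptions:

```ruby
begin
  finalize_metadata!(assistant_message)
rescue => e
  # Log the raw error internally; users see only a generic message.
  Rails.logger.error("post-stream update failed: #{e.class}: #{e.message}")
  # Only partial responses are destroyed; a fully streamed reply survives.
  assistant_message.destroy unless assistant_message.complete?
  raise
end
```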
* Fix SSE content guard and NO_PROXY test correctness
Use nil check instead of present? for SSE delta content to preserve
whitespace-only chunks (newlines, spaces) that can occur in code output.
Fix NO_PROXY test to use HTTP_PROXY matching the http:// client URL so
the proxy resolution and NO_PROXY bypass logic are actually exercised.
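The guard from the first fix, as a sketch (emit is a hypothetical sink):

```ruby
delta = parsed.dig("choices", 0, "delta", "content")
emit(delta) unless delta.nil?  # present? would drop "\n" and " " chunks
```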
* Forward proxy credentials to Net::HTTP
Pass proxy_uri.user and proxy_uri.password to Net::HTTP.new so
authenticated proxies (http://user:pass@host:port) work correctly.
Without this, credentials parsed from the proxy URL were silently
dropped. Nil values are safe as positional args when no creds exist.
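The positional-args shape, for reference:

```ruby
proxy = URI.parse(ENV["HTTPS_PROXY"])  # e.g. http://user:pass@proxy:3128
http  = Net::HTTP.new(uri.host, uri.port,
                      proxy.host, proxy.port,
                      proxy.user, proxy.password)  # both nil without userinfo
```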
* Update pipelock integration to v0.3.1 with full scanning config
Bump Helm image tag from 0.2.7 to 0.3.1. Add missing security
sections to both the Helm ConfigMap and compose example config:
mcp_tool_policy, mcp_session_binding, and tool_chain_detection.
These protect the /mcp endpoint against tool injection, session
hijacking, and multi-step exfiltration chains.
Add version and mode fields to config files. Enable include_defaults
for DLP and response scanning to merge user patterns with the 35
built-in patterns. Remove redundant --mode CLI flag from the Helm
deployment template since mode is now in the config file.
* Add comprehensive AI/LLM configuration documentation
* Fix Chat.start! to use default model when model is nil or empty
* Ensure all controllers use Chat.default_model for consistency
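A sketch of the shared guard; the signature is illustrative, but blank? covers both the nil and empty-string cases:

```ruby
def self.start!(user, model: nil)
  model = default_model if model.blank?  # nil and "" both fall back
  create!(user: user, model: model)
end
```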
* Move AI doc inside `hosting/`
* Probably too much error handling
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: jjmata <187772+jjmata@users.noreply.github.com>
Co-authored-by: Juan José Mata <juanjo.mata@gmail.com>
* Implement support for generic OpenAI API
- Routes requests to any OpenAI-compatible provider (DeepSeek, Qwen, vLLM, LM Studio, Ollama).
- Keeps support for pure OpenAI and uses the newer Responses API
- Uses the /chat/completions API for the generic providers
- If uri_base is not set, falls back to the default implementation (routing sketched below).
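A routing sketch; uri_base is the real switch, and client construction follows the ruby-openai gem's options, but the method and accessor names are illustrative:

```ruby
def client
  if uri_base.present?
    # Generic OpenAI-compatible provider (DeepSeek, Qwen, vLLM, LM Studio,
    # Ollama): custom base URL, /chat/completions path.
    OpenAI::Client.new(access_token: access_token, uri_base: uri_base)
  else
    # Stock OpenAI: default endpoint, newer Responses API.
    OpenAI::Client.new(access_token: access_token)
  end
end
```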
* Fix json handling and indentation
* Fix linter error indent
* Fix tests to set env vars
* Fix updating settings
* Change to prefix checking for OAI models
* Fix model check when a custom URI is set
* Change chat to sync calls
Some local models don't support streaming, so revert to sync calls for the generic OAI API
* Fix tests
* Fix tests
* Fix GPT-5 message extraction
- Finds the message output by filtering for "type" == "message" instead of assuming it's at index 0
- Safely extracts the text using safe navigation operators (&.)
- Raises a clear error if no message content is found
- Parses the JSON as before
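The steps above as a sketch; the payload shape follows the Responses API output array, and the error wording is illustrative:

```ruby
message = response["output"]&.find { |item| item["type"] == "message" }
text    = message&.dig("content", 0, "text")
raise "no message content found in response" if text.nil?
parsed  = JSON.parse(text)
```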
* Add more Langfuse logging
- Add Langfuse to auto categorizer and merchant detector
- Fix monitoring on streaming chat responses
- Also emit Langfuse traces for model errors
* Update app/models/provider/openai.rb
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: soky srm <sokysrm@gmail.com>
* Handle nil function results explicitly
* Expose some config vars
* Linter and nitpick comments
* Drop back to `gpt-4.1` as default for now
* Linter
* Fix strict tool schema in Gemini
- This fixes tool calling via Gemini's OpenAI-compatible API
- Fix the getTransactions function; page size is not used.
---------
Signed-off-by: soky srm <sokysrm@gmail.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Juan José Mata <juanjo.mata@gmail.com>
* AI sidebar
* Add chat and message models with associations
* Implement AI chat functionality with sidebar and messaging system
- Add chat and messages controllers
- Create chat and message views
- Implement chat-related routes
- Add message broadcasting and user interactions
- Update application layout to support chat sidebar
- Enhance user model with initials method
* Refactor AI sidebar with enhanced chat menu and interactions
- Update sidebar layout with dynamic width and improved responsiveness
- Add new chat menu Stimulus controller for toggling between chat and chat list views
- Improve chat list display with recent chats and empty state
- Extract AI avatar to a partial for reusability
- Enhance message display and interaction styling
- Add more contextual buttons and interaction hints
* Improve chat scroll behavior and message styling
- Refactor chat scroll functionality with Stimulus controller
- Optimize message scrolling in chat views
- Update message styling for better visual hierarchy
- Enhance chat container layout with flex and auto-scroll
- Simplify message rendering across different chat views
* Extract AI avatar to a shared partial for consistent styling
- Refactor AI avatar rendering across chat views
- Replace hardcoded avatar markup with a reusable partial
- Simplify avatar display in chats and messages views
* Update sidebar controller to handle right panel width dynamically
- Add conditional width class for right sidebar panel
- Ensure consistent sidebar toggle behavior for both left and right panels
- Use specific width class for right panel (w-[375px])
* Refactor chat form and AI greeting with flexible partials
- Extract message form to a reusable partial with dynamic context support
- Create flexible AI greeting partial for consistent welcome messages
- Simplify chat and sidebar views by leveraging new partials
- Add support for different form scenarios (chat, new chat, sidebar)
- Improve code modularity and reduce duplication
* Add chat clearing functionality with dynamic menu options
- Implement clear chat action in ChatsController
- Add clear chat route to support clearing messages
- Update AI sidebar with dropdown menu for chat actions
- Preserve system message when clearing chat
- Enhance chat interaction with new menu options
* Add frontmatter to project structure documentation
- Create initial frontmatter for structure.mdc file
- Include description and configuration options
- Prepare for potential dynamic documentation rendering
* Update general project rules with additional guidelines
- Add rule for using `Current.family` instead of `current_family`
- Include new guidelines for testing, API routes, and solution approach
- Expand project-specific rules for more consistent development practices
* Add OpenAI gem and AI-friendly data representations
- Add `ruby-openai` gem for AI integration
- Implement `to_ai_readable_hash` methods in BalanceSheet and IncomeStatement (shape sketched below)
- Include Promptable module in both models
- Add savings rate calculation method in IncomeStatement
- Prepare financial models for AI-powered insights and interactions
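A shape sketch; the hash keys and value handling are illustrative:

```ruby
class IncomeStatement
  include Promptable

  def to_ai_readable_hash
    {
      total_income:   total_income,
      total_expenses: total_expenses,
      savings_rate:   savings_rate  # new calculation in this commit
    }
  end
end
```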
* Enhance AI Financial Assistant with Advanced Querying and Debugging Capabilities
- Implement comprehensive AI financial query system with function-based interactions
- Add detailed debug logging for AI responses and function calls
- Extend BalanceSheet and IncomeStatement models with AI-friendly methods
- Create robust error handling and fallback mechanisms for AI queries
- Update chat and message views to support debug mode and enhanced rendering
- Add AI query routes and initial test coverage for financial assistant
* Refactor AI sidebar and chat layout with improved structure and comments
- Remove inline AI chat from application layout
- Enhance AI sidebar with more semantic HTML structure
- Add descriptive comments to clarify different sections of chat view
- Improve flex layout and scrolling behavior in chat messages container
- Optimize message rendering with more explicit class names and structure
* Add Markdown rendering support for AI chat messages
- Implement `markdown` helper method in ApplicationHelper using Redcarpet (sketched below)
- Update message view to render AI messages with Markdown formatting
- Add comprehensive Markdown rendering options (tables, code blocks, links)
- Enhance AI Financial Assistant prompt to encourage Markdown usage
- Remove commented Markdown CSS in Tailwind application stylesheet
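An ApplicationHelper sketch; the exact Redcarpet option set is illustrative:

```ruby
def markdown(text)
  renderer = Redcarpet::Render::HTML.new(filter_html: true, hard_wrap: true)
  parser   = Redcarpet::Markdown.new(renderer,
                                     tables: true,
                                     fenced_code_blocks: true,
                                     autolink: true)
  parser.render(text.to_s).html_safe
end
```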
* Missing comma
* Enhance AI response processing with chat history context
* Improve AI debug logging with payload size limits and internal message flag
* Enhance AI chat interaction with improved thinking indicator and scrolling behavior
* Add AI consent and enable/disable functionality for AI chat
* Upgrade Biome and refactor JavaScript template literals
- Update @biomejs/biome to latest version with caret (^) notation
- Refactor AI query and chat controllers to use template literals
- Standardize npm scripts formatting in package.json
* Add beta testing usage note to AI consent modal
* Update test fixtures and configurations for AI chat functionality
- Add family association to chat fixtures and tests
- Set consistent password digest for test users
- Enable AI for test users
- Add OpenAI access token for test environment
- Update chat and user model tests to include family context
* Simplify data model and get tests passing
* Remove structure.mdc from version control
* Integrate AI chat styles into existing prose pattern
* Match Figma design spec, implement Turbo frames and actions for chats controller
* AI rules refresh
* Consolidate Stimulus controllers, thinking state, controllers, and views
* Naming, domain alignment
* Reset migrations
* Improve data model to support tool calls and message types
* Tool calling tests and fixtures
* Tool call implementation and test
* Get assistant test working again
* Test updates
* Process tool calls within provider
* Chat UI back to working state again
* Remove stale code
* Tests passing
* Update openai class naming to avoid conflicts
* Reconfigure test env
* Rebuild gemfile
* Fix naming conflicts for ChatResponse
* Message styles
* Use OpenAI conversation state management
* Assistant function base implementation
* Add back thinking messages, clean up error handling for chat
* Fix sync error when security price has bad data from provider
* Add balance sheet function to assistant
* Add better function calling error visibility
* Add income statement function
* Simplify and clean up "thinking" interactions with Turbo frames
* Remove stale data definitions from functions
* Ensure VCR fixtures working with latest code
* Basic stream implementation
* Get streaming working
* Make AI sidebar wider when left sidebar is collapsed
* Get tests working with streaming responses
* Centralize provider error handling
* Provider data boundaries
---------
Co-authored-by: Josh Pigford <josh@joshpigford.com>