Add Family vector search function call / support for document vault (#961)

* Add SearchFamilyImportedFiles assistant function with vector store support

Implement per-Family document search using OpenAI vector stores, allowing
the AI assistant to search through uploaded financial documents (tax returns,
statements, contracts, etc.). The architecture is modular with a provider-
agnostic VectorStoreConcept interface so other RAG backends can be added.

Key components:
- Assistant::Function::SearchFamilyImportedFiles - tool callable from any LLM
- Provider::VectorStoreConcept - abstract vector store interface
- Provider::Openai vector store methods (create, upload, search, delete)
- Family::VectorSearchable concern with document management
- FamilyDocument model for tracking uploaded files
- Migration adding vector_store_id to families and family_documents table

https://claude.ai/code/session_01TSkKc7a9Yu2ugm1RvSf4dh

* Extract VectorStore adapter layer for swappable backends

Replace the Provider::VectorStoreConcept mixin with a standalone adapter
architecture under VectorStore::. This cleanly separates vector store
concerns from the LLM provider and makes it trivial to swap backends.

Components:
- VectorStore::Base — abstract interface (create/delete/upload/remove/search)
- VectorStore::Openai — uses ruby-openai gem's native vector_stores.search
- VectorStore::Pgvector — skeleton for local pgvector + embedding model
- VectorStore::Qdrant — skeleton for Qdrant vector DB
- VectorStore::Registry — resolves adapter from VECTOR_STORE_PROVIDER env
- VectorStore::Response — success/failure wrapper (like Provider::Response)

Consumers updated to go through VectorStore.adapter:
- Family::VectorSearchable
- Assistant::Function::SearchFamilyImportedFiles
- FamilyDocument

Removed: Provider::VectorStoreConcept, vector store methods from Provider::Openai
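
The consumer flow the components above describe can be sketched end to end. A minimal, self-contained sketch (no Rails; the in-memory `Memory` adapter and the memoized `VectorStore.adapter` helper are illustrative stand-ins — the real code resolves an adapter through `VectorStore::Registry`):

```ruby
module VectorStore
  Error = Class.new(StandardError)

  # Hypothetical in-memory adapter standing in for Openai/Pgvector/Qdrant.
  class Memory
    def initialize
      @stores = {}
    end

    def create_store(name:)
      id = "store_#{@stores.size + 1}"
      @stores[id] = { name: name, files: {} }
      { id: id }
    end

    def upload_file(store_id:, file_content:, filename:)
      files = @stores.fetch(store_id)[:files]
      file_id = "file_#{files.size + 1}"
      files[file_id] = { filename: filename, content: file_content }
      { file_id: file_id }
    end
  end

  # In the PR this resolves via VectorStore::Registry from VECTOR_STORE_PROVIDER.
  def self.adapter
    @adapter ||= Memory.new
  end
end

store = VectorStore.adapter.create_store(name: "Family Docs")
file  = VectorStore.adapter.upload_file(store_id: store[:id],
                                        file_content: "2023 tax return...",
                                        filename: "tax_2023.pdf")
```

Because every consumer goes through `VectorStore.adapter` and the common method signatures, swapping backends is a configuration change rather than a code change.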

https://claude.ai/code/session_01TSkKc7a9Yu2ugm1RvSf4dh

* Add Vector Store configuration docs to ai.md

Documents how to configure the document search feature, covering all
three supported backends (OpenAI, pgvector, Qdrant), environment
variables, Docker Compose examples, supported file types, and privacy
considerations.

https://claude.ai/code/session_01TSkKc7a9Yu2ugm1RvSf4dh

* No need to specify `imported` in code

* Missed a couple more places

* Tiny reordering for the human OCD

* Update app/models/assistant/function/search_family_files.rb

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Juan José Mata <jjmata@jjmata.com>

* PR comments

* More PR comments

---------

Signed-off-by: Juan José Mata <jjmata@jjmata.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Author: Juan José Mata
Date: 2026-02-11 15:22:56 +01:00 (committed via GitHub)
Parent: 1ebbd5bbc5
Commit: 9e57954a99
20 changed files with 1212 additions and 6 deletions


@@ -0,0 +1,68 @@
class VectorStore::Base
  SUPPORTED_EXTENSIONS = %w[
    .c .cpp .css .csv .docx .gif .go .html .java .jpeg .jpg .js .json
    .md .pdf .php .png .pptx .py .rb .sh .tar .tex .ts .txt .xlsx .xml .zip
  ].freeze

  # Create a new vector store / collection / namespace
  # @param name [String] human-readable name
  # @return [Hash] { id: "store-identifier" }
  def create_store(name:)
    raise NotImplementedError
  end

  # Delete a vector store and all its files
  # @param store_id [String]
  def delete_store(store_id:)
    raise NotImplementedError
  end

  # Upload and index a file
  # @param store_id [String]
  # @param file_content [String] raw file bytes
  # @param filename [String] original filename with extension
  # @return [Hash] { file_id: "file-identifier" }
  def upload_file(store_id:, file_content:, filename:)
    raise NotImplementedError
  end

  # Remove a previously uploaded file
  # @param store_id [String]
  # @param file_id [String]
  def remove_file(store_id:, file_id:)
    raise NotImplementedError
  end

  # Semantic search across indexed files
  # @param store_id [String]
  # @param query [String] natural-language search query
  # @param max_results [Integer]
  # @return [Array<Hash>] each { content:, filename:, score:, file_id: }
  def search(store_id:, query:, max_results: 10)
    raise NotImplementedError
  end

  # Which file extensions this adapter can ingest
  def supported_extensions
    SUPPORTED_EXTENSIONS
  end

  private

  def success(data)
    VectorStore::Response.new(success?: true, data: data, error: nil)
  end

  def failure(error)
    wrapped = error.is_a?(VectorStore::Error) ? error : VectorStore::Error.new(error.message)
    VectorStore::Response.new(success?: false, data: nil, error: wrapped)
  end

  def with_response(&block)
    data = yield
    success(data)
  rescue => e
    Rails.logger.error("#{self.class.name} error: #{e.class} - #{e.message}")
    failure(e)
  end
end
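
The success/failure wrapper pattern used by `with_response` can be exercised in isolation. A minimal sketch (plain Ruby; `Response` and `VectorStoreError` are simplified stand-ins for `VectorStore::Response` and `VectorStore::Error`, and `warn` substitutes for `Rails.logger.error`):

```ruby
VectorStoreError = Class.new(StandardError)

# Stand-in for VectorStore::Response: carries either data or a wrapped error.
class Response
  attr_reader :data, :error

  def initialize(success:, data:, error:)
    @success = success
    @data = data
    @error = error
  end

  def success?
    @success
  end
end

# Run a block, wrapping the result; any exception is logged, normalized to
# VectorStoreError, and returned as a failure instead of raised.
def with_response
  data = yield
  Response.new(success: true, data: data, error: nil)
rescue => e
  warn("#{e.class} - #{e.message}") # stands in for Rails.logger.error
  wrapped = e.is_a?(VectorStoreError) ? e : VectorStoreError.new(e.message)
  Response.new(success: false, data: nil, error: wrapped)
end

ok   = with_response { { id: "vs_123" } }
boom = with_response { raise "network down" }
```

Callers can then branch on `response.success?` without rescuing adapter-specific exceptions themselves.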


@@ -0,0 +1,89 @@
# Adapter that delegates to OpenAI's hosted vector-store and file-search APIs.
#
# Requirements:
# - gem "ruby-openai" (already in Gemfile)
# - OPENAI_ACCESS_TOKEN env var or Setting.openai_access_token
#
# OpenAI manages chunking, embedding, and retrieval; we simply upload files
# and issue search queries.
class VectorStore::Openai < VectorStore::Base
  def initialize(access_token:, uri_base: nil)
    client_options = { access_token: access_token }
    client_options[:uri_base] = uri_base if uri_base.present?
    client_options[:request_timeout] = ENV.fetch("OPENAI_REQUEST_TIMEOUT", 60).to_i
    @client = ::OpenAI::Client.new(**client_options)
  end

  def create_store(name:)
    with_response do
      response = client.vector_stores.create(parameters: { name: name })
      { id: response["id"] }
    end
  end

  def delete_store(store_id:)
    with_response do
      client.vector_stores.delete(id: store_id)
    end
  end

  def upload_file(store_id:, file_content:, filename:)
    with_response do
      tempfile = Tempfile.new([ File.basename(filename, ".*"), File.extname(filename) ])
      begin
        tempfile.binmode
        tempfile.write(file_content)
        tempfile.rewind

        file_response = client.files.upload(
          parameters: { file: tempfile, purpose: "assistants" }
        )
        file_id = file_response["id"]

        begin
          client.vector_store_files.create(
            vector_store_id: store_id,
            parameters: { file_id: file_id }
          )
        rescue => e
          client.files.delete(id: file_id) rescue nil
          raise
        end

        { file_id: file_id }
      ensure
        tempfile.close
        tempfile.unlink
      end
    end
  end

  def remove_file(store_id:, file_id:)
    with_response do
      client.vector_store_files.delete(vector_store_id: store_id, id: file_id)
    end
  end

  def search(store_id:, query:, max_results: 10)
    with_response do
      response = client.vector_stores.search(
        id: store_id,
        parameters: { query: query, max_num_results: max_results }
      )

      (response["data"] || []).map do |result|
        {
          content: Array(result["content"]).filter_map { |c| c["text"] }.join("\n"),
          filename: result["filename"],
          score: result["score"],
          file_id: result["file_id"]
        }
      end
    end
  end

  private

  attr_reader :client
end
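
The two-phase flow in `upload_file` — upload the file, then attach it to the store, deleting the orphan if the attach fails — can be isolated as a pattern. A sketch with hypothetical test doubles (`FakeFiles` and `FailingStore` are illustrations, not part of the PR):

```ruby
# If attaching fails after upload, delete the orphaned file and re-raise.
def attach_with_rollback(files, store)
  file_id = files.upload
  begin
    store.attach(file_id)
  rescue
    files.delete(file_id) rescue nil
    raise
  end
  file_id
end

# Hypothetical collaborators for demonstration.
class FakeFiles
  attr_reader :deleted

  def initialize
    @deleted = []
  end

  def upload = "file_1"

  def delete(id)
    @deleted << id
  end
end

class FailingStore
  def attach(_id)
    raise "attach failed"
  end
end

files = FakeFiles.new
begin
  attach_with_rollback(files, FailingStore.new)
rescue => e
  # The orphaned upload was cleaned up before the error propagated.
end
```

Without the rollback, a failed attach would leave a billed-but-unreachable file in the OpenAI account.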


@@ -0,0 +1,89 @@
# Adapter that stores embeddings locally in PostgreSQL using the pgvector extension.
#
# This keeps all data on your own infrastructure — no external vector-store
# service required. You still need an embedding provider (e.g. OpenAI, or a
# local model served via an OpenAI-compatible endpoint) to turn text into
# vectors before insertion and at query time.
#
# Requirements (not yet wired up):
# - PostgreSQL with the `vector` extension enabled
# - gem "neighbor" (for ActiveRecord integration) or raw SQL
# - An embedding model endpoint (EMBEDDING_MODEL_URL / EMBEDDING_MODEL_NAME)
# - A chunking strategy (see #chunk_file below)
#
# Schema sketch (for reference — migration not included):
#
# create_table :vector_store_chunks do |t|
# t.string :store_id, null: false # logical namespace
# t.string :file_id, null: false
# t.string :filename
# t.text :content # the original text chunk
# t.vector :embedding, limit: 1536 # adjust dimensions to your model
# t.jsonb :metadata, default: {}
# t.timestamps
# end
# add_index :vector_store_chunks, :store_id
# add_index :vector_store_chunks, :file_id
#
class VectorStore::Pgvector < VectorStore::Base
  def create_store(name:)
    with_response do
      # A "store" is just a logical namespace (a UUID).
      # No external resource to create.
      # { id: SecureRandom.uuid }
      raise VectorStore::Error, "Pgvector adapter is not yet implemented"
    end
  end

  def delete_store(store_id:)
    with_response do
      # TODO: DELETE FROM vector_store_chunks WHERE store_id = ?
      raise VectorStore::Error, "Pgvector adapter is not yet implemented"
    end
  end

  def upload_file(store_id:, file_content:, filename:)
    with_response do
      # 1. chunk_file(file_content, filename) → array of text chunks
      # 2. embed each chunk via the configured embedding model
      # 3. INSERT INTO vector_store_chunks (store_id, file_id, filename, content, embedding)
      raise VectorStore::Error, "Pgvector adapter is not yet implemented"
    end
  end

  def remove_file(store_id:, file_id:)
    with_response do
      # TODO: DELETE FROM vector_store_chunks WHERE store_id = ? AND file_id = ?
      raise VectorStore::Error, "Pgvector adapter is not yet implemented"
    end
  end

  def search(store_id:, query:, max_results: 10)
    with_response do
      # 1. embed(query) → vector
      # 2. SELECT content, filename, file_id,
      #        1 - (embedding <=> query_vector) AS score
      #    FROM vector_store_chunks
      #    WHERE store_id = ?
      #    ORDER BY embedding <=> query_vector
      #    LIMIT max_results
      raise VectorStore::Error, "Pgvector adapter is not yet implemented"
    end
  end

  private

  # Placeholder: split file content into overlapping text windows.
  # A real implementation would handle PDFs, DOCX, etc. via
  # libraries like `pdf-reader`, `docx`, or an extraction service.
  def chunk_file(file_content, filename)
    # TODO: implement format-aware chunking
    []
  end

  # Placeholder: call an embedding API to turn text into a vector.
  def embed(text)
    # TODO: call EMBEDDING_MODEL_URL or OpenAI embeddings endpoint
    raise VectorStore::Error, "Embedding model not configured"
  end
end
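
The overlapping-window chunking the `chunk_file` placeholder alludes to could look like this. A sketch only — the window and overlap sizes are illustrative, not from the PR, and real ingestion would first extract plain text from PDFs/DOCX before chunking:

```ruby
# Split plain text into overlapping character windows for embedding.
# Overlap preserves context that would otherwise be lost at chunk boundaries.
def chunk_text(text, size: 500, overlap: 100)
  return [] if text.empty?

  step = size - overlap
  chunks = []
  i = 0
  while i < text.length
    chunks << text[i, size]
    break if i + size >= text.length
    i += step
  end
  chunks
end

# Small sizes used here only to make the behavior visible.
chunks = chunk_text("abcdefghij", size: 6, overlap: 2)
# → ["abcdef", "efghij"]
```

Production systems usually chunk on token counts and sentence boundaries rather than raw characters, but the windowing logic is the same.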


@@ -0,0 +1,81 @@
# Adapter for Qdrant — a dedicated open-source vector database.
#
# Qdrant can run locally (Docker), self-hosted, or as a managed cloud service.
# Like the Pgvector adapter you still supply your own embedding model; Qdrant
# handles storage, indexing, and fast ANN search.
#
# Requirements (not yet wired up):
# - A running Qdrant instance (QDRANT_URL, default http://localhost:6333)
# - Optional QDRANT_API_KEY for authenticated clusters
# - An embedding model endpoint (EMBEDDING_MODEL_URL / EMBEDDING_MODEL_NAME)
# - gem "qdrant-ruby" or raw Faraday HTTP calls
#
# Mapping:
# store → Qdrant collection
# file → set of points sharing a file_id payload field
# search → query vector + payload filter on store_id
#
class VectorStore::Qdrant < VectorStore::Base
  def initialize(url: "http://localhost:6333", api_key: nil)
    @url = url
    @api_key = api_key
  end

  def create_store(name:)
    with_response do
      # POST /collections/{collection_name} { vectors: { size: 1536, distance: "Cosine" } }
      # collection_name could be a slugified version of `name` or a UUID.
      raise VectorStore::Error, "Qdrant adapter is not yet implemented"
    end
  end

  def delete_store(store_id:)
    with_response do
      # DELETE /collections/{store_id}
      raise VectorStore::Error, "Qdrant adapter is not yet implemented"
    end
  end

  def upload_file(store_id:, file_content:, filename:)
    with_response do
      # 1. chunk file → text chunks
      # 2. embed each chunk
      # 3. PUT /collections/{store_id}/points { points: [...] }
      #    each point: { id: uuid, vector: [...], payload: { file_id, filename, content } }
      raise VectorStore::Error, "Qdrant adapter is not yet implemented"
    end
  end

  def remove_file(store_id:, file_id:)
    with_response do
      # POST /collections/{store_id}/points/delete
      # { filter: { must: [{ key: "file_id", match: { value: file_id } }] } }
      raise VectorStore::Error, "Qdrant adapter is not yet implemented"
    end
  end

  def search(store_id:, query:, max_results: 10)
    with_response do
      # 1. embed(query) → vector
      # 2. POST /collections/{store_id}/points/search
      #    { vector: [...], limit: max_results, with_payload: true }
      # 3. map results → [{ content:, filename:, score:, file_id: }]
      raise VectorStore::Error, "Qdrant adapter is not yet implemented"
    end
  end

  private

  def connection
    @connection ||= Faraday.new(url: @url) do |f|
      f.request :json
      f.response :json
      f.adapter Faraday.default_adapter
      f.headers["api-key"] = @api_key if @api_key.present?
    end
  end

  def embed(text)
    raise VectorStore::Error, "Embedding model not configured"
  end
end
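
Step 3 of the `search` outline — mapping Qdrant hits into the adapter's common result shape — can be sketched independently of the HTTP layer. The payload keys (`content`, `filename`, `file_id`) are assumptions mirroring the upload sketch in the comments above:

```ruby
# Map raw Qdrant search hits into the common adapter result shape:
# { content:, filename:, score:, file_id: }.
def map_qdrant_results(hits)
  hits.map do |hit|
    payload = hit["payload"] || {}
    {
      content: payload["content"],
      filename: payload["filename"],
      score: hit["score"],
      file_id: payload["file_id"]
    }
  end
end

# Example hit shaped like a Qdrant points/search response entry.
hits = [
  { "id" => "p1", "score" => 0.91,
    "payload" => { "file_id" => "f1", "filename" => "tax_2023.pdf",
                   "content" => "AGI: ..." } }
]
results = map_qdrant_results(hits)
```

Keeping this mapping in the adapter means `SearchFamilyImportedFiles` never sees backend-specific response shapes.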


@@ -0,0 +1,70 @@
class VectorStore::Registry
  ADAPTERS = {
    openai: "VectorStore::Openai",
    pgvector: "VectorStore::Pgvector",
    qdrant: "VectorStore::Qdrant"
  }.freeze

  class << self
    # Returns the configured adapter instance.
    # Reads from the VECTOR_STORE_PROVIDER env var, falling back to :openai
    # when OpenAI credentials are present.
    def adapter
      name = adapter_name
      return nil unless name

      build_adapter(name)
    end

    def configured?
      adapter.present?
    end

    def adapter_name
      explicit = ENV["VECTOR_STORE_PROVIDER"].presence
      return explicit.to_sym if explicit && ADAPTERS.key?(explicit.to_sym)

      # Default: use OpenAI when credentials are available
      :openai if openai_access_token.present?
    end

    private

    def build_adapter(name)
      klass = ADAPTERS[name]&.safe_constantize
      raise VectorStore::ConfigurationError, "Unknown vector store adapter: #{name}" unless klass

      case name
      when :openai then build_openai
      when :pgvector then build_pgvector
      when :qdrant then build_qdrant
      else raise VectorStore::ConfigurationError, "No builder defined for adapter: #{name}"
      end
    end

    def build_openai
      token = openai_access_token
      return nil unless token.present?

      VectorStore::Openai.new(
        access_token: token,
        uri_base: ENV["OPENAI_URI_BASE"].presence || Setting.openai_uri_base
      )
    end

    def build_pgvector
      VectorStore::Pgvector.new
    end

    def build_qdrant
      url = ENV.fetch("QDRANT_URL", "http://localhost:6333")
      api_key = ENV["QDRANT_API_KEY"].presence
      VectorStore::Qdrant.new(url: url, api_key: api_key)
    end

    def openai_access_token
      ENV["OPENAI_ACCESS_TOKEN"].presence || Setting.openai_access_token
    end
  end
end
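
The resolution order the registry implements — an explicit, valid `VECTOR_STORE_PROVIDER` wins, otherwise fall back to `:openai` when an OpenAI token exists, otherwise `nil` — can be distilled into a few lines. A sketch with the environment passed in as a plain hash so the logic is testable (the real code reads `ENV` and `Setting` directly):

```ruby
ADAPTERS = %i[openai pgvector qdrant].freeze

# Resolve the adapter name from an env-like hash.
# Invalid explicit values fall through to the OpenAI-credentials default.
def adapter_name(env)
  explicit = env["VECTOR_STORE_PROVIDER"]
  return explicit.to_sym if explicit && !explicit.empty? && ADAPTERS.include?(explicit.to_sym)

  :openai if env["OPENAI_ACCESS_TOKEN"]
end
```

This means existing OpenAI-only deployments get document search without setting any new variables, while self-hosters can opt into pgvector or Qdrant explicitly once those adapters are implemented.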