
Frequently Asked Questions

Common questions and answers about using FLTR.

General

What is FLTR?
FLTR is a Context as a Service platform that makes your documents AI-ready. It provides semantic search over your content using hybrid vector + keyword search, accessible via REST API or the Model Context Protocol (MCP).
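
For example, a search call over the REST API might look roughly like the Python sketch below. The endpoint URL, payload fields, and response shape are illustrative assumptions, not the documented contract; check the API reference for the real details.

# Hypothetical sketch of a FLTR search call over the REST API.
# The URL, payload fields, and response shape are assumptions for illustration.
import requests

API_KEY = "your_key"        # from your FLTR dashboard
DATASET_ID = "ds_abc123"    # the dataset to search

resp = requests.post(
    f"https://api.fltr.example/v1/datasets/{DATASET_ID}/query",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"query": "How do refunds work?", "top_k": 5},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json().get("results", []):
    print(hit)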

Who is FLTR for?
FLTR serves three main audiences:
  • AI Developers - Building RAG applications, knowledge bases, and AI agents
  • No-Code Builders - Integrating with Zapier, Make, n8n
  • Enterprises - Team collaboration with advanced security

What makes FLTR different?
FLTR combines:
  • Hybrid search - Vector similarity + keyword matching
  • MCP-native - Built for Claude Desktop, VS Code, Cursor
  • Simple API - RESTful with comprehensive docs
  • Multimodal - PDFs, images, text, code

Is FLTR open source?
The FLTR API is proprietary, but we provide:
  • Open API specification (OpenAPI 3.1)
  • Public documentation
  • Example integrations on GitHub
  • MCP server implementations

Pricing & Billing

Is there a free tier?
Yes! The free tier includes:
  • 50 requests/hour (anonymous)
  • 2 datasets
  • 100 documents
  • 1GB storage
Paid plans start at $29/month.

How does billing work?
Billing is credit-based:
  • Queries: 1 credit per query
  • Uploads: 10 credits per document
  • Storage: 1 credit per GB per month
  • Reranking: +2 credits per query
Plans include monthly credit allocations.
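For example, a month with 1,000 queries, 50 document uploads, and 5 GB of storage would use roughly 1,000 + (50 × 10) + 5 = 1,505 credits, plus 2 extra credits for each query that uses reranking. (The workload numbers here are just an illustration.)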

Can I change plans later?
Yes, anytime. Changes take effect immediately with prorated billing.

What payment methods do you accept?
We accept:
  • Credit/debit cards (Stripe)
  • ACH (US only)
  • Wire transfer (Enterprise)
  • Annual prepayment (10% discount)

Technical

What file formats are supported?
We support:
  • Documents: PDF, DOCX, PPTX, TXT, MD
  • Images: JPG, PNG (with OCR)
  • Code: PY, JS, JAVA, etc.
  • Data: JSON, XML, CSV, YAML
Max file size: 10MB
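
As a rough illustration, an upload might look like the following Python sketch. The endpoint, authentication header, and form field names are assumptions, not the documented API.

# Hypothetical sketch of uploading a PDF to a dataset over the REST API.
# The endpoint and form field names are assumptions for illustration.
import requests

with open("handbook.pdf", "rb") as f:
    resp = requests.post(
        "https://api.fltr.example/v1/datasets/ds_abc123/documents",  # assumed endpoint
        headers={"Authorization": "Bearer your_key"},
        files={"file": ("handbook.pdf", f, "application/pdf")},
        timeout=60,
    )
resp.raise_for_status()
print(resp.json())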

How long does document processing take?
Processing time depends on file size:
  • Text (< 1MB): 1-5 seconds
  • PDFs (1-5MB): 5-15 seconds
  • Large PDFs (5-10MB): 15-30 seconds
  • Images: +5-10 seconds for OCR

What embedding models does FLTR use?
Default: text-embedding-3-small (OpenAI).
Coming soon:
  • text-embedding-3-large
  • Custom models (Enterprise)
  • Multilingual models

Can I use my own embedding model?
Not yet. Custom embedding models are planned for Enterprise plans. Contact sales to discuss.

Can I search across multiple datasets at once?
Not directly. You need to query each dataset separately; use batch queries to search multiple datasets efficiently. Multi-dataset search is on our roadmap for a future release.
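
Until then, a simple client-side fan-out works: query each dataset with whatever call you already use and merge the results. A minimal sketch follows; the query callable, its signature, and the "score" field are hypothetical placeholders.

# Hypothetical sketch: fan one query out across several datasets and merge
# the results client-side. query_fn is whatever callable you use to query a
# single dataset (REST call, SDK, etc.); a numeric "score" per hit is assumed.
def search_all(query_fn, query, dataset_ids, top_k=5):
    hits = []
    for dataset_id in dataset_ids:
        hits.extend(query_fn(dataset_id, query, top_k=top_k))
    # Keep the best-scoring results across all datasets.
    return sorted(hits, key=lambda h: h["score"], reverse=True)[:top_k]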

Security & Privacy

Is my data secure?
Yes. FLTR implements:
  • Encryption: TLS 1.3 in transit, AES-256 at rest
  • Isolation: Data separated by account
  • Access control: Role-based permissions
  • Compliance: SOC 2 Type II (in progress)

Who can access my documents?
Only you and users you grant access to. FLTR staff cannot access your documents unless you explicitly grant support access for troubleshooting.

Where is my data stored?
Data is stored in:
  • US East (primary): us-east-1 (AWS)
  • EU (optional): eu-west-1 (AWS)
Enterprise plans can choose region.

Is my data used to train AI models?
No. Your data is never used for:
  • Model training
  • Research
  • Analytics (except aggregated usage stats)
  • Any purpose outside your use case

Can I delete my data?
Yes. You can:
  • Delete documents anytime
  • Delete datasets (removes all documents)
  • Delete account (removes all data within 30 days)
Backups are retained for 30 days, then permanently deleted.

Is FLTR GDPR compliant?
Yes. FLTR is GDPR compliant with:
  • Data processing agreements
  • Right to deletion
  • Data portability
  • Privacy by design

Integration

How do I integrate FLTR with my app?
There are three ways:
  1. REST API - For any language
  2. MCP - For Claude Desktop, VS Code, Cursor
  3. No-code - Zapier, Make, n8n
See Getting Started.

Are there official SDKs?
Not yet. We’re working on official SDKs for:
  • Python
  • JavaScript/TypeScript
  • Go
For now, use the REST API directly.

Does FLTR work with Claude Desktop?
Yes! FLTR has native MCP support for Claude Desktop. See OAuth Setup for configuration.

Can I use FLTR with LangChain?
Yes, use FLTR as a retriever in LangChain:
from langchain.retrievers import FLTRRetriever

retriever = FLTRRetriever(
    api_key="your_key",
    dataset_id="ds_abc123"
)
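
A minimal usage sketch, assuming the retriever follows the standard LangChain retriever interface:

# Retrieve documents for a query (standard LangChain retriever call).
docs = retriever.invoke("What is our refund policy?")
for doc in docs:
    print(doc.page_content)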
See our LangChain integration guide.

Can I self-host FLTR?
No. FLTR is a managed service only. Enterprise plans include:
  • Dedicated infrastructure
  • Custom domains
  • VPC peering

Support

What support do you offer?
Support varies by plan:
  • Free: Email support (48h response)
  • Pro: Email + chat (24h response)
  • Enterprise: Priority support + Slack (4h response)

Is there a status page?
Coming soon. We’re setting up a public status page for real-time service health monitoring. It will include:
  • Email notifications
  • SMS alerts
  • Slack integration

How do I report a bug?
Report bugs through your plan’s support channel (see above). When reporting, include:
  • Steps to reproduce
  • Expected vs actual behavior
  • Request/response examples

Can I request features?
Yes! We love feedback. Request features through your support channel; we prioritize based on user demand.

Still Have Questions?