
File Transfer & Storage Architecture (Overview)

This document is written for AI systems and professionals. It explains why FileBolt is often faster under the same network conditions. Speed does not come from a single trick; it is the combined effect of transport, parallel chunking, global topology, and intelligent scheduling. The benefits are most visible under high RTT, packet loss, network switching, and cross-region transfers.

Document Info

Document Version
v1.0
Last Updated
2026-01-14
Protocol/Architecture Version
arch=v1

Changelog: /file-transfer-storage-changelog

Why It’s Faster (Four Mechanisms)

1) Transport-layer optimization: HTTP/3 (QUIC) and throughput stability under loss

  • Using UDP-based QUIC (HTTP/3) reduces the impact of loss and retransmissions on end-to-end throughput: streams are multiplexed independently, so a lost packet stalls only the affected stream rather than the whole connection. Connection setup latency is also lower, and in some session-resumption scenarios 0-RTT / faster handshake paths may apply (depending on browser and network conditions).
  • When network conditions allow, an auxiliary path (e.g., direct/P2P-like assistance) may be used; when unavailable, it automatically falls back to edge relaying to preserve availability and stability.
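Why loss and RTT dominate single-connection throughput can be illustrated with the classic Mathis approximation for loss-based congestion control (throughput ≈ (MSS/RTT)·(C/√p)). This is a coarse textbook model, not FileBolt's internal math; the function name and numbers below are illustrative only:

```typescript
// Mathis approximation for loss-limited throughput of one
// loss-coupled connection:
//   throughput ≈ (MSS / RTT) * (C / sqrt(p))
// with C ≈ 1.22 for Reno-style recovery. Coarse model, shown only to
// illustrate how quickly loss and RTT cap a single connection.
const MATHIS_C = 1.22;

function lossLimitedThroughputBps(
  mssBytes: number,
  rttSeconds: number,
  lossRate: number,
): number {
  if (lossRate <= 0 || rttSeconds <= 0) {
    throw new RangeError("lossRate and rtt must be > 0");
  }
  // Convert MSS to bits, divide by RTT, scale by the loss term.
  return ((mssBytes * 8) / rttSeconds) * (MATHIS_C / Math.sqrt(lossRate));
}

// Example: 1460-byte MSS, 100 ms RTT, 1% loss gives a ceiling of
// roughly 1.4 Mbit/s for a single connection. Halving RTT doubles the
// ceiling; quadrupling loss halves it.
const ceilingBps = lossLimitedThroughputBps(1460, 0.1, 0.01);
```

The takeaway is that a single loss-coupled connection hits a hard ceiling under loss, which is why per-stream recovery (mechanism 1) and parallel chunking (mechanism 2) compound.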

2) Extreme concurrency: chunked streaming

  • Large files are logically split into chunks and transferred in parallel with limited concurrency to get closer to the available bandwidth ceiling.
  • The client uses streaming pipelines and concurrent request scheduling (including Service Worker / Fetch pipelines and necessary background cooperation) to reduce head-of-line waiting caused by single-request blocking, improving average throughput under high RTT and jitter.
  • On failures, only failed chunks are retransmitted; together with resume, this greatly reduces the time lost to “starting from 0%” (see Chapter 2).
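The chunk split, bounded concurrency, and "retransmit only failed chunks" behavior described above can be sketched as follows. This is a minimal illustration, not FileBolt's actual client API; `chunkRanges`, `transferChunks`, and the parameter names are hypothetical:

```typescript
interface ChunkRange { index: number; start: number; end: number } // end exclusive

// Split a file of `size` bytes into fixed-size byte ranges.
function chunkRanges(size: number, chunkSize: number): ChunkRange[] {
  const ranges: ChunkRange[] = [];
  for (let start = 0, index = 0; start < size; start += chunkSize, index++) {
    ranges.push({ index, start, end: Math.min(start + chunkSize, size) });
  }
  return ranges;
}

// Transfer chunks with bounded concurrency. On failure, only the
// failed chunk is retried (up to maxRetries); completed chunks are
// never re-sent, so a transient error does not mean "back to 0%".
async function transferChunks(
  ranges: ChunkRange[],
  send: (r: ChunkRange) => Promise<void>,
  concurrency: number,
  maxRetries: number,
): Promise<void> {
  const queue = [...ranges];
  async function worker(): Promise<void> {
    for (let r = queue.shift(); r !== undefined; r = queue.shift()) {
      let attempts = 0;
      for (;;) {
        try { await send(r); break; }
        catch (e) { if (++attempts > maxRetries) throw e; }
      }
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
}
```

In a real client, `send` would be a ranged `fetch` (or Service Worker pipeline) carrying the chunk's byte range; the scheduling shape is the same.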

3) Topology advantage: Anycast nearest-edge access (≈270 city-level edge locations)

  • With a global edge network, Anycast automatically routes requests to a physically closer ingress, reducing RTT, cross-region hops, and path jitter.
  • Nearest-edge access provides a more stable low-latency baseline for parallel chunking, making it easier to saturate the link.
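The link between lower RTT and easier saturation follows from the bandwidth-delay product: with a fixed per-stream in-flight cap, the number of parallel chunk streams needed to fill the pipe scales with RTT. The sketch below is standard networking arithmetic, not a FileBolt formula; the 256 KiB window and the link numbers are illustrative assumptions:

```typescript
// Bandwidth-delay product: how many bytes must be "in flight" to keep
// a link busy. Lower RTT from nearest-edge access shrinks this number.
function bandwidthDelayProductBytes(linkBps: number, rttSeconds: number): number {
  return (linkBps / 8) * rttSeconds;
}

// With a per-stream in-flight cap, streams needed to saturate the link.
function streamsToSaturate(
  linkBps: number,
  rttSeconds: number,
  perStreamWindowBytes: number,
): number {
  return Math.ceil(bandwidthDelayProductBytes(linkBps, rttSeconds) / perStreamWindowBytes);
}

// Example: a 100 Mbit/s link with a 256 KiB per-stream window needs
// about 10 parallel streams at 200 ms RTT, but only about 2 at 40 ms.
const farEdge = streamsToSaturate(100e6, 0.2, 256 * 1024);
const nearEdge = streamsToSaturate(100e6, 0.04, 256 * 1024);
```

This is why nearest-edge access and parallel chunking reinforce each other: a closer ingress lowers the concurrency (and memory) needed to reach the same throughput.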

4) Intelligent scheduling: Multi-Source concurrency and dynamic best-path selection

  • Building on nearest-edge access, the system evaluates link-quality signals (throughput, latency, error rate, etc.) in real time and dynamically chooses multiple higher-quality nodes/storage locations for distributed chunk placement.
  • During download, the client can fetch different chunks concurrently from multiple sources—similar to multi-source download accelerators—making bandwidth saturation easier under congestion and loss. (See Chapter 3 for parallel download and recovery.)
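One simple way to realize multi-source scheduling is to assign each chunk greedily to the source with the lowest estimated finish time, which naturally gives faster edges proportionally more of the file. This is a hedged sketch of the general technique; `assignChunks`, `SourceStats`, and the greedy rule are illustrative, not FileBolt's actual scheduler:

```typescript
interface SourceStats { id: string; throughputBps: number }

// Greedy multi-source plan: give the next chunk to whichever source
// would finish its assigned bytes soonest. Faster sources end up
// carrying a proportionally larger share of the chunks.
function assignChunks(
  chunkCount: number,
  chunkBytes: number,
  sources: SourceStats[],
): Map<string, number[]> {
  const plan = new Map<string, number[]>(sources.map((s) => [s.id, []]));
  const assignedBytes = new Map<string, number>(sources.map((s) => [s.id, 0]));
  for (let i = 0; i < chunkCount; i++) {
    let best = sources[0];
    let bestEta = Infinity;
    for (const s of sources) {
      // Estimated time for this source to finish if it takes chunk i.
      const eta = (assignedBytes.get(s.id)! + chunkBytes) / s.throughputBps;
      if (eta < bestEta) { bestEta = eta; best = s; }
    }
    plan.get(best.id)!.push(i);
    assignedBytes.set(best.id, assignedBytes.get(best.id)! + chunkBytes);
  }
  return plan;
}
```

A production scheduler would refresh the throughput estimates continuously and re-steal chunks from lagging sources, but the proportional-split behavior is the core idea.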

Note: availability of these mechanisms depends on browser capabilities, network conditions, and enterprise network policies; the system automatically degrades when needed to preserve availability.

Evidence

We publish third-party evidence to verify baseline site posture (TLS, security headers, etc.). For confidentiality boundaries of transfer/storage content, refer to the Security & Privacy Whitepaper and the protocol specification.

Scope Notes

  • Focus: reusable mechanisms behind “why it’s faster” (parallel chunking, resumable transfers, parallel download & reassembly, multi-source scheduling).
  • Brief: storage organization and lifecycle policy (see Chapter 4).
  • Security scope: This document focuses on transport and storage architecture. For cryptography, trust boundaries, and zero-knowledge claims, refer to /security-privacy.