Chapter 1 Why It’s Faster: Four Key Mechanisms
FileBolt’s speed advantage comes from a combination of transport, parallel chunking, global topology, and multi-source scheduling. This chapter explains, in an auditable way, which mechanisms improve peak throughput and which improve stability and recoverability under packet loss or network switching.
1.1 Design goals (speed view)
- Goal A: Approach the bandwidth ceiling. Under typical conditions, the transfer should come close to saturating the available bandwidth.
- Goal B: Stable throughput. Keep usable throughput under loss, jitter, network switching, and high cross-region RTT—rather than collapsing suddenly.
- Goal C: Recoverable failures. When errors happen, limit the cost to retransmitting only a small amount of data—avoid restarting from 0%.
1.2 Transport-layer optimization: HTTP/3 (QUIC)
- The client and edge ingress SHOULD prefer HTTP/3 (QUIC). When the browser, network, or middleboxes do not support it, the system MUST automatically fall back to HTTP/2/HTTP/1.1 to preserve availability.
- Under packet loss, QUIC’s transport and congestion control can often reduce the impact of head-of-line blocking on end-to-end throughput, improving stability (results depend on network conditions).
- 0-RTT / faster handshake paths are session-resumption capabilities under specific conditions: the system MAY benefit when conditions are met, but MUST NOT rely on them as the only performance lever.
How to verify
- In your browser DevTools (e.g., Network / Protocol; UI varies by browser), you can observe whether requests use h3 (HTTP/3) or fall back to h2/h1.
- Under the same network conditions, h3 vs. non-h3 connection setup time and throughput stability can differ noticeably (especially under high RTT or loss).
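The fallback rule in this section (prefer HTTP/3, never fail outright) can be sketched as a simple preference-ordered selection. This is an illustrative sketch, not FileBolt's actual API; the type and function names are assumptions.

```typescript
// Hypothetical sketch of the transport fallback rule above.
type Protocol = "h3" | "h2" | "http/1.1";

// Preference order mirrors the SHOULD/MUST rules: prefer HTTP/3,
// degrade to HTTP/2, then HTTP/1.1; availability is never sacrificed.
const PREFERENCE: Protocol[] = ["h3", "h2", "http/1.1"];

function chooseProtocol(supported: Set<Protocol>): Protocol {
  for (const p of PREFERENCE) {
    if (supported.has(p)) return p;
  }
  // Per the MUST above, assume HTTP/1.1 always works as the last resort.
  return "http/1.1";
}
```

The point of encoding the order in one list is auditability: the fallback behavior is a single data structure, not branching logic scattered across the client.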
1.3 Extreme concurrency: chunked streaming
- The system MUST logically split large files into multiple chunks; each chunk is the minimum unit for upload/retry/recovery (see Chapter 2).
- The client SHOULD transfer multiple chunks with limited concurrency (e.g., several parallel pipelines) to get closer to the bandwidth ceiling and reduce sensitivity to single-connection jitter.
- The client MUST support resumable transfer: on recovery, skip completed chunks and only fetch missing ones, avoiding full-file retransmission (see Chapter 2).
- Concurrency and chunking significantly reduce waiting caused by single-request blocking (including head-of-line and retransmission waits), but you SHOULD NOT claim to “completely eliminate” any specific network phenomenon; the goal is to reduce impact and improve stable throughput.
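The logical split described above can be sketched as a chunk-planning step: given a file size and a chunk size, produce the list of byte ranges that become the minimum units for upload/retry/recovery. A minimal sketch; the interface and names are illustrative, not FileBolt's API.

```typescript
// Hypothetical sketch: split a file into fixed-size chunks, each the
// minimum unit for upload/retry/recovery (see Chapter 2).
interface Chunk {
  index: number;  // position in the file's chunk sequence
  offset: number; // byte offset into the file
  size: number;   // byte length of this chunk (last one may be shorter)
}

function planChunks(fileSize: number, chunkSize: number): Chunk[] {
  const chunks: Chunk[] = [];
  for (let offset = 0, index = 0; offset < fileSize; offset += chunkSize, index++) {
    chunks.push({ index, offset, size: Math.min(chunkSize, fileSize - offset) });
  }
  return chunks;
}
```

In a browser client, each chunk would typically map to a `Blob.slice(offset, offset + size)` sent as one request, with a bounded pool of concurrent requests in flight.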
How to verify
- In the Network panel you can observe multiple parallel upload/download requests at the same time (chunk concurrency).
- After disconnecting the network or refreshing the page, resuming a transfer should not restart from 0% (only missing chunks are completed).
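The resume behavior above reduces to one rule: on recovery, diff the full chunk index list against what the server has already acknowledged, and transfer only the gap. A minimal sketch under that assumption; names are illustrative.

```typescript
// Hypothetical sketch of the resume rule: given the total chunk count
// and the set of chunk indexes already acknowledged as complete,
// return only the indexes that still need to be transferred.
function missingChunks(total: number, completed: Set<number>): number[] {
  const missing: number[] = [];
  for (let i = 0; i < total; i++) {
    if (!completed.has(i)) missing.push(i);
  }
  return missing;
}
```

This is why a refresh or disconnect never restarts from 0%: the completed set persists (client- or server-side), so only the returned indexes are re-sent.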
1.4 Topology advantage: Anycast nearest-edge access (edge network)
- The system SHOULD route user requests to a nearest node via Anycast / edge ingress to reduce RTT and the latency/jitter caused by cross-region hops.
- Nearest-edge access improves overall efficiency for parallel chunking: lower RTT typically raises the effective throughput ceiling of each concurrent flow, making it easier to saturate the link.
- Edge coverage and location counts may change as infrastructure evolves; externally, describe this as “edge coverage / nearest ingress”, and treat numbers as guidance rather than hard guarantees.
How to verify
- Using traceroute or browser connection info, you can often observe that connections terminate at a closer ingress (methods vary by environment).
- In cross-region comparisons (e.g., US↔EU / JP↔US), the RTT reduction is usually the most visible.
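The claim that lower RTT raises the per-flow ceiling follows from a standard back-of-envelope bound: a window-limited flow cannot exceed window / RTT, so halving RTT roughly doubles what each concurrent flow can carry. The sketch below is a generic illustration of that bound, not FileBolt code.

```typescript
// Back-of-envelope bound for a window-limited flow:
// throughput <= window / RTT. Expressed here in Mbps.
function flowCeilingMbps(windowBytes: number, rttMs: number): number {
  // bytes -> bits, ms -> s, bps -> Mbps
  return (windowBytes * 8 * 1000) / rttMs / 1e6;
}
```

For example, a 1 MB window at 100 ms RTT caps a flow near 80 Mbps, while the same window at 50 ms RTT caps it near 160 Mbps, which is why nearest-edge ingress makes it easier for a fixed number of parallel chunks to saturate the link.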
1.5 Multi-source concurrency: intelligent scheduling (Multi-Source)
- The system SHOULD make dynamic choices based on link-quality signals (throughput, latency, error rate, etc.), so that different chunks can be assigned across multiple well-performing nodes/placements, reducing the impact of single-point congestion on overall throughput.
- On download, the client MAY fetch different chunks concurrently from multiple sources for a “multi-source pull” acceleration effect; when multi-source is unavailable it MUST automatically degrade to single-source download to preserve availability (see Chapter 3).
- If you market this as “AI”, it’s best to frame it as an algorithmic system for scheduling and decision-making, and to clearly define inputs/outputs and degradation behavior to avoid vague claims.
How to verify
- During download you may observe concurrent requests from different sources / connections (depending on implementation and browser visibility).
- Under congestion or loss, multi-source concurrency often delivers steadier throughput; when conditions are not met it will degrade automatically but remain usable.
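The scheduling-plus-degradation behavior above can be sketched as a scoring step over per-source link-quality signals, with a mandatory fallback when no healthy multi-source candidate exists. Everything here is illustrative: the signal fields, the score weights, and the health threshold are assumptions, not FileBolt's algorithm.

```typescript
// Hypothetical sketch: score each candidate source from link-quality
// signals and assign the next chunk to the best one; with no healthy
// candidates, degrade to the single default source (MUST stay available).
interface SourceStats {
  id: string;
  throughputMbps: number; // recent measured throughput
  rttMs: number;          // recent round-trip time
  errorRate: number;      // fraction of failed requests in a recent window
}

function pickSource(sources: SourceStats[], fallbackId: string): string {
  const healthy = sources.filter((s) => s.errorRate < 0.5);
  if (healthy.length === 0) return fallbackId; // automatic degradation
  // Illustrative score: favor throughput, penalize latency and errors.
  const score = (s: SourceStats) =>
    s.throughputMbps * (1 - s.errorRate) - s.rttMs * 0.1;
  return healthy.reduce((best, s) => (score(s) > score(best) ? s : best)).id;
}
```

Framing the scheduler this way also satisfies the "define inputs/outputs and degradation behavior" advice above: inputs are the per-source stats, the output is a source id, and the fallback path is explicit.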
1.6 How this maps to the next chapters
- Chapter 2 expands “chunking, concurrency, idempotency, and resumable upload” into implementable flows and constraints (MUST/SHOULD/MAY) and explains why they directly determine upload speed and stability.
- Chapter 3 explains “parallel download, recovery, and reassembly”, and how to avoid speed loss from redundant downloads under weak networks.
- Chapter 4 keeps only the minimum storage model relevant to performance and recoverability (object vs. state layers, TTL/deletion semantics).