You need to compress a PDF. You find a tool. It says: upload your file. You drag the file onto the page, a progress bar appears, and thirty seconds later a smaller file comes back. You download it. You move on. The whole thing takes under a minute and you don’t think about it again.
What you don’t think about is the upload.
“Upload” sounds neutral. Technical. Like a verb that just describes a mechanism. But what it actually means, in plain terms, is this: your file left your machine and traveled to someone else’s server. It went over the internet, arrived at a data centre you’ve never visited, was written to a disk owned by a company you probably can’t name, sat in a processing queue alongside thousands of other files, and was handled by software running under terms of service you’ve never read. Then a processed version came back. The original may or may not still be there.
None of this is secret. It’s not a scandal. It’s just what upload means.
The question is whether it was necessary.
For most of the internet’s history, the answer was yes — or at least, yes by default. Processing files required server hardware. Compression algorithms were computationally expensive. Browsers were thin clients, good for rendering HTML, not for running serious workloads. If you wanted to compress a PDF or convert a document, you sent it somewhere with more compute. That was the architecture. That was how it worked.
Somewhere along the way this stopped being true, but the conventions didn’t catch up.
WebAssembly changed the equation quietly. It’s a compilation target — a way to take code written in C, C++, or Rust and run it in a browser at near-native speed. Ghostscript, the PostScript and PDF interpreter that has been in development since 1988, compiles to WebAssembly. So does FFmpeg, the audio and video toolkit. So do image codecs, document parsers, cryptographic libraries. The heavyweight tools that used to require server infrastructure can now run in the browser tab you already have open.
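The mechanism is plainer than it sounds. A WebAssembly module is just bytes; the runtime compiles and runs them on your machine. Here is a minimal sketch: a hand-encoded module exporting a trivial `add` function, instantiated with the standard `WebAssembly` API. Tools like Ghostscript-compiled-to-wasm use exactly this mechanism, only with megabytes of compiled C instead of eleven bytes of code.

```javascript
// A complete WebAssembly module, hand-encoded byte by byte.
// It exports one function: add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,             // version 1
  // type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // function section: one function, using type 0
  0x03, 0x02, 0x01, 0x00,
  // export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate synchronously — no network, no server.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {});

console.log(instance.exports.add(2, 40)); // 42
```

The same `WebAssembly` API exists in every modern browser and in Node; in practice you would fetch a compiled `.wasm` file rather than hand-encode it, but the execution model is identical: the bytes run locally, with your CPU doing the work.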
This means the upload was never necessary for a category of tools that has relied on it for twenty years. The file didn’t need to go anywhere. The processing could have happened on your machine the whole time — using your CPU, your memory, your compute. The result would have been the same. The thirty-second wait would have been the same. The only thing that would have been different is that nobody else would have touched your file.
We do this without thinking because we’ve been trained to. Upload-process-download is the pattern. It’s familiar. It feels normal the way a lot of things feel normal until you examine them: the free tool that asks for your card number “just in case,” the app that wants your location to show you weather, the service that needs your date of birth to let you read an article. Each of these is a negotiation, and we accept them reflexively because the friction of questioning seems higher than the cost of complying.
The cost isn’t always high. Sometimes uploading a file to a server is genuinely fine — low-stakes document, reputable company, temporary retention. But people upload all kinds of files to free tools without making that calculation at all: contracts, financial statements, medical records, legal correspondence. Not because they’ve decided the risk is acceptable but because they haven’t thought about it as a risk at all. The tool just said upload, so they uploaded.
Local processing removes the question entirely. If the file never leaves your machine, there’s nothing to evaluate. No privacy policy to read, no retention period to worry about, no server to trust. The processing happens where the file already is, and it stays there when it’s done.
fwip was built on that idea. Your file stays on your device. Always. Try compress PDF →