> What if you don't know ahead of time how big that monitor is that you are displaying stuff on?
Use a reasonable upper estimate?
> ad-hoc re-implementation of virtual memory?
If you rely on actual virtual memory instead of a specially designed file format, saving large files becomes prohibitively slow. On each save you have to stream the entire document from the page file back into actual memory, serialize it, produce the entire file, then replace the old one. And when you resume editing after the save, you probably have to load the visible portion back from disk.
> If you rely on actual virtual memory instead of a specially designed file format, saving large files becomes prohibitively slow. On each save you have to stream the entire document from the page file back into actual memory, serialize it, produce the entire file, then replace the old one. And when you resume editing after the save, you probably have to load the visible portion back from disk.
How large is large? Loading and saving a few GiB from my SSD is pretty fast, and few files ever get so large.
In any case, you are mixing things up here.
You can have a special file format and use virtual memory. You could mmap your specially formatted file and rely on the operating system to keep what you do in memory in sync with what's happening on disk. That's a prime use of virtual memory.
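Roughly, a sketch of what that looks like; the file name and the flat fixed-size layout are placeholders for illustration, not anyone's actual format:

```c
/* Minimal sketch: map a file and edit it in place. The kernel's page cache
 * writes dirty pages back to disk on its own; msync() forces a flush.
 * "doc.bin" and the 1 MiB size are assumptions for this example. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t file_size = 1 << 20;           /* assumed 1 MiB document */
    int fd = open("doc.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, file_size) < 0) { perror("ftruncate"); return 1; }

    char *doc = mmap(NULL, file_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (doc == MAP_FAILED) { perror("mmap"); return 1; }

    /* An "edit": the store dirties one page; the OS flushes it lazily. */
    memcpy(doc, "hello", 5);

    /* Explicit save point, if you want durability now rather than later. */
    msync(doc, file_size, MS_SYNC);

    munmap(doc, file_size);
    close(fd);
    return 0;
}
```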
> Use a reasonable upper estimate?
Saddling some guy on an underpowered Chromebook with a crazy large static allocation, just because you want your programme to also support some crazy large screen that might come out in 2035, seems a bit silly?
About the same size as the amount of physical memory. For Word 97, the minimum system requirement was 8 MB of RAM. That's not just for Word, but for the entire OS.
> Loading and saving a few GiB from my SSD is pretty fast
Indeed, that’s one of the reasons why modern word processors stopped doing complicated tricks like the ones I described, and instead serialize complete documents.
> a special file format and use virtual memory
That’s not the brightest idea: it inflates disk bandwidth by at least a factor of 2. A modern example of software that routinely handles datasets much larger than physical memory is database engines (large ones, not embedded). They avoid virtual memory as much as possible because the IO amplification leads to unpredictable latency.
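For a sense of what they do instead, here is a rough sketch of the direct-IO pattern; the file name and page size are made up for illustration:

```c
/* Sketch of the alternative big database engines take: bypass the page
 * cache with O_DIRECT and manage your own buffer pool, so every byte of
 * IO is one the engine explicitly asked for. "db.pages" and the 4 KiB
 * page size are assumptions for this example. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const size_t page = 4096;                 /* assumed on-disk page size */
    int fd = open("db.pages", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;                                /* O_DIRECT needs alignment  */
    if (posix_memalign(&buf, page, page) != 0) return 1;

    /* Read page 7 straight from disk into our own buffer pool slot. */
    ssize_t n = pread(fd, buf, page, 7 * page);
    if (n < 0) { perror("pread"); return 1; }

    free(buf);
    close(fd);
    return 0;
}
```

With the page cache out of the way, every read and write is one the engine scheduled itself, which is why the latency stays predictable.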
> Saddling some guy on an underpowered Chromebook
The guy will be fine. The program might allocate a large buffer on startup but will only use the small initial slice, because Chromebooks don’t come with particularly large screens. The Linux kernel does not automatically commit allocated memory.
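A quick sketch of what I mean (the sizes are invented): the big mapping below reserves address space, but RAM is only committed for the pages actually touched.

```c
/* "Reserve big, touch little": on Linux the 1 GiB anonymous mapping only
 * reserves address space; physical pages are committed page-by-page the
 * first time they are written. Sizes are made up for illustration. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    const size_t reserved = 1UL << 30;        /* 1 GiB of address space   */
    const size_t used     = 8UL << 20;        /* 8 MiB actually touched   */

    char *framebuf = mmap(NULL, reserved, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (framebuf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(framebuf, 0, used);   /* only ~8 MiB of RAM is committed now */
    printf("reserved %zu bytes, touched %zu\n", reserved, used);

    munmap(framebuf, reserved);
    return 0;
}
```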
> That’s not the brightest idea: it inflates disk bandwidth by at least a factor of 2.
Who cares for a word processor?
You are right that you’d care for a database.
> The guy will be fine. The program might allocate a large buffer on startup but will only use the small initial slice, because Chromebooks don’t come with particularly large screens. The Linux kernel does not automatically commit allocated memory.
That's just dynamic memory allocation in disguise via virtual memory.