*4.6. Computational Requirements*

The 'frozen' dataset comprising all 55,905 books at all levels of granularity has a size of 65 GB. Focusing only on the one-gram counts, however, reduces this to 3.6 GB. Running the pre-processing pipeline for the 'frozen' data took 8 h (without parallelization) on a CPU running at 3.40 GHz. Since the pipeline processes each book independently, the runtime could be reduced further by parallelizing across CPU cores, as sketched below.
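
The following is a minimal sketch of how such a per-book step could be parallelized with Python's standard `multiprocessing` module. The function `process_book`, the `raw_books/` input directory, and the `counts/` output layout are hypothetical placeholders, not the actual pipeline's interface.

```python
# Sketch: parallelize an embarrassingly parallel per-book counting step across CPU cores.
from multiprocessing import Pool
from pathlib import Path
from collections import Counter


def process_book(path: Path) -> Counter:
    """Hypothetical per-book routine: count one-grams (whitespace tokens) in a raw text file."""
    text = path.read_text(encoding="utf-8", errors="ignore")
    return Counter(text.lower().split())


if __name__ == "__main__":
    book_paths = sorted(Path("raw_books").glob("*.txt"))  # assumed input layout
    with Pool() as pool:  # one worker per available CPU core by default
        all_counts = pool.map(process_book, book_paths)

    # Write one counts file per book, mirroring a single-process run.
    out_dir = Path("counts")
    out_dir.mkdir(exist_ok=True)
    for path, counts in zip(book_paths, all_counts):
        lines = (f"{word}\t{freq}" for word, freq in counts.most_common())
        (out_dir / f"{path.stem}_counts.txt").write_text("\n".join(lines))
```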
