**4. Computational Complexity**

Assume the image has height *M* and width *N*. The pre-processing phase (including the lossless compression of the location map, which can be done in time linear in the number of pixels if we do not require the compressed size to be as small as possible), the training-and-prediction phase, the embedding-and-shifting phase, and the extraction-and-recovery phase each scan the *O*(*MN*) pixels a constant number of times, so the overall computational complexity is *O*(*MN*). We remark that although the structure of the MLP neural network is fixed, so that this part contributes only a constant factor to the complexity, the constant hidden in the asymptotic notation can be quite large. More specifically, for each input data point (i.e., a set of four pixels) fed to the input layer of the MLP neural network in one iteration, 100 × 200 multiplications are required to compute the activations of all the neurons.
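The hidden constant can be sketched concretely by counting the multiplications in one forward pass of a fixed fully connected network. The layer widths below are assumptions chosen so that the dominant hidden-to-hidden term matches the 100 × 200 figure above; the actual architecture used in the scheme may differ.

```python
def forward_pass_multiplications(layer_widths):
    """Count multiplications in one forward pass of a fully connected MLP.

    A layer of width n feeding a layer of width m costs n * m
    multiplications; sum this over consecutive layer pairs.
    """
    return sum(n * m for n, m in zip(layer_widths, layer_widths[1:]))

# Hypothetical architecture: 4 input pixels -> 100 -> 200 -> 1 output.
# The dominant term is the 100 * 200 hidden-to-hidden product.
widths = [4, 100, 200, 1]
per_prediction = forward_pass_multiplications(widths)
print(per_prediction)  # 4*100 + 100*200 + 200*1 = 20600
```

Since one such forward pass is performed per predicted pixel, the total work is roughly `per_prediction * M * N` multiplications, which is still *O*(*MN*) but with a constant factor in the tens of thousands.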
