#### *4.1. Performance Analysis*

The first set of experiments tested our prototype using Caliper's 2-organization-1-peer and 3-organization-1-peer network models, with four clients in the first round of tests. To assess the proposed framework's transactional efficiency, we created a test file targeting two primary functionalities of the framework, namely evidence creation and evidence transfer, because both directly change the Blockchain state. We conducted ten rounds of testing with varying transaction volumes and transaction send rates. Multiple runs of each test were required to obtain average values of the performance indicators with a low chance of error. Tables 1 and 2 show the latency and throughput for various rounds of the 2-organization-1-peer and 3-organization-1-peer network architectures. The performance assessment results indicate that the prototype's throughput reaches a maximum value and then begins to decrease as the send rate increases. The highest throughputs obtained in the 2-organization-1-peer and 3-organization-1-peer network architectures are 15 tps and 10 tps, respectively. Additionally, the results indicate that increasing the number of peers reduces the prototype's throughput, which is consistent with the characteristics of Hyperledger-based consortium Blockchains.

**Table 1.** Performance evaluation results with the 2-organization-1-peer network model.


**Table 2.** Performance evaluation results with the 3-organization-1-peer network model.
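
As a rough, hedged illustration of this averaging step (not part of the prototype), the following Python sketch aggregates Caliper-style round reports by send rate across repeated runs; the metric names and sample values are placeholders rather than the measurements reported in Tables 1 and 2.

```python
# Minimal sketch: averaging per-round performance indicators over repeated runs.
# The metric names and sample values below are illustrative placeholders.
from statistics import mean

# Each inner list mimics one run; each dict mimics one Caliper round report.
runs = [
    [{"send_rate": 10, "throughput": 9.6, "avg_latency": 0.42},
     {"send_rate": 20, "throughput": 14.8, "avg_latency": 1.10}],
    [{"send_rate": 10, "throughput": 9.9, "avg_latency": 0.40},
     {"send_rate": 20, "throughput": 15.1, "avg_latency": 1.05}],
]

def average_rounds(runs):
    """Average throughput and latency per send rate across repeated runs."""
    by_rate = {}
    for run in runs:
        for rnd in run:
            by_rate.setdefault(rnd["send_rate"], []).append(rnd)
    return {
        rate: {
            "throughput": mean(r["throughput"] for r in rounds),
            "avg_latency": mean(r["avg_latency"] for r in rounds),
        }
        for rate, rounds in by_rate.items()
    }

print(average_rounds(runs))
```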


The second round of tests assessed the block generator's load, i.e., the distribution of blocks generated by each node. This shows whether each node in the blockchain network has an equal probability of producing blocks. We used a 1000-node architecture in the simulator, created 105 blocks sequentially, and counted the blocks generated by each node. Figure 2 shows the cumulative distribution of the generated blocks across nodes, where *k* is the number of node names assigned to each node. The closer the curve is to a straight line, the more evenly the load is distributed. When *k* equals one, the curve exhibits a sharp rise. Increasing the number of node names balances the demand on the generator more evenly: the greater the number of node names, the more linear the growth becomes. However, as the number of node names grows, the incremental load-balancing benefit progressively diminishes. By adding a modest number of node names, block generators can therefore significantly improve load balancing, and the number of blocks produced per node concentrates around the mean. In general, when *k* equals 5, the load-balancing effect of the block generator is satisfactory.

**Figure 2.** The cumulative distribution of generated blocks.
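
A minimal simulation sketch of this experiment is given below, assuming that each node registers *k* node names on a hash ring and that every block is assigned to the node owning the closest name; the ring construction and the block count are assumptions used only to illustrate why increasing *k* flattens the curve in Figure 2.

```python
# Hedged simulation: 1000 nodes, each registering k "node names" on a hash
# ring; each block is assigned to the node owning the closest name clockwise.
import hashlib
import random
from bisect import bisect_left

def hash_point(value: str) -> int:
    """Map a name to a point on a 256-bit hash ring."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

def simulate(num_nodes=1000, k=1, num_blocks=100_000, seed=42):
    """Count blocks produced per node when each node registers k names."""
    rng = random.Random(seed)
    ring = sorted(
        (hash_point(f"node-{n}-name-{i}"), n)
        for n in range(num_nodes) for i in range(k)
    )
    points = [p for p, _ in ring]
    counts = [0] * num_nodes
    for _ in range(num_blocks):                        # illustrative block count
        target = rng.getrandbits(256)                  # pseudo-random block hash
        idx = bisect_left(points, target) % len(ring)  # owner of the closest name
        counts[ring[idx][1]] += 1
    return counts

for k in (1, 5):
    counts = simulate(k=k)
    print(f"k={k}: min={min(counts)}, max={max(counts)} blocks per node")
```

With *k* = 1 the min/max spread across nodes is large, whereas with *k* = 5 the per-node counts concentrate around the mean, matching the behaviour described above.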

The third set of experiments evaluated the blockchain's size against different numbers of blocks on a topology with 1000 nodes. The number of node names was set to one, and the group size variable, *h*, was set to three bits for the topology. A block could contain no more than 2000 transactions. We then determined the blockchain's storage footprint on each node. We were primarily concerned with the distribution of full blocks (block headers and contents) and the blockchain's size. The distribution of full blocks stored by each node reflects the blockchain's load-balancing mechanism. We conducted the experiment three times; each time, we adjusted the variable *h* to create a new group size and then counted the number of full blocks stored on each node. Figure 3 illustrates the blockchain's size as a function of the block count. The maximum, mean, and minimum blockchain sizes are all measured for the mixed blockchain, whereas the full blockchain size corresponds to the conventional scenario in which every node holds the whole blockchain. The mixed blockchain is much smaller than the regular blockchain. For all four curves, the blockchain's size grows linearly as the number of blocks increases, which is consistent with the theoretical analysis.
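
As a back-of-the-envelope sketch (with invented byte sizes, not values from the paper), the snippet below estimates per-node storage under the assumption that every node keeps all block headers while full block bodies are spread over the 2^h nodes of its group, and contrasts this with the conventional case in which every node stores every full block.

```python
# Hedged storage estimate: "mixed blockchain" (headers everywhere, bodies
# shared within a group of 2**h nodes) versus a full replica on every node.
HEADER_BYTES = 80          # assumed header size (illustrative)
BODY_BYTES = 2000 * 250    # assumed body: up to 2000 transactions of ~250 B each

def per_node_size(num_blocks: int, h: int, mixed: bool = True) -> int:
    """Approximate bytes stored on one node."""
    group_size = 2 ** h
    if mixed:
        full_blocks = num_blocks // group_size   # this node's share of bodies
        return num_blocks * HEADER_BYTES + full_blocks * BODY_BYTES
    return num_blocks * (HEADER_BYTES + BODY_BYTES)

for blocks in (1_000, 10_000, 100_000):
    mixed = per_node_size(blocks, h=3, mixed=True)
    full = per_node_size(blocks, h=3, mixed=False)
    print(f"{blocks} blocks: mixed ~ {mixed / 1e9:.2f} GB, full ~ {full / 1e9:.2f} GB")
```

Both estimates grow linearly with the number of blocks, which is the behaviour reported in Figure 3.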

We conducted the last set of experiments to determine the time required to conduct a full search, in comparison to MRSH-v2, and to measure the approach's success in locating the 100 "illegal" files included verbatim in the hard disk image, as well as the 40 files in the image that are similar to "illegal" files, as determined by MRSH-v2. The simulated "known-illegal" corpus consisted of 4000 images, and the hard disk image contained 140 relevant files: 100 images taken verbatim from the 4000 "illegal" images, plus 40 images that were not included in the "illegal" set but showed a high degree of resemblance to images in the corpus, as determined by MRSH-v2.

**Figure 3.** The blockchain's size as a function of the block count.

The main measure was the time needed to execute the whole process, which included the time required to construct the tree, search the tree, and perform pairwise comparisons at the leaves. MRSH-v2 ran for a total of 2592 s. Figure 4 illustrates the running times. The tree was constructed from the smaller set of 4000 "illegal" images, and searches were then performed for all of the images in the larger corpus. The "Search Time" column covers both the time spent searching the tree and the time spent performing leaf comparisons. As anticipated, using more leaf nodes resulted in the fastest execution time. The total duration of the run was 1182 s (a 54% reduction relative to an all-against-all pairwise comparison). Because the pairwise approach does not scale, this gap is expected to be even more pronounced on larger datasets.

**Figure 4.** Time to search for planted evidence (including pairwise comparisons).
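
The sketch below illustrates this lookup procedure in simplified form, with Python sets standing in for Bloom-filter summaries and a chunk-overlap ratio standing in for the MRSH-v2 comparison; it only demonstrates how the tree prunes most pairwise comparisons and is not the implementation evaluated here.

```python
# Hedged sketch: build a tree over the known-"illegal" set, descend only into
# subtrees whose summary shares features with the query, and compare pairwise
# only at the leaves that are reached.
import random
from typing import List, Set, Tuple

def features(data: bytes, chunk: int = 8) -> Set[bytes]:
    """Toy feature extraction: fixed-size chunks (MRSH-v2 uses rolling hashes)."""
    return {data[i:i + chunk] for i in range(0, len(data) - chunk + 1, chunk)}

class Node:
    def __init__(self, files: List[Tuple[str, Set[bytes]]]):
        self.files = files
        self.summary = set().union(*(f for _, f in files))  # stand-in for a Bloom filter
        self.children = []
        if len(files) > 1:
            mid = len(files) // 2
            self.children = [Node(files[:mid]), Node(files[mid:])]

def search(node: Node, query: Set[bytes], threshold: int = 2):
    """Prune subtrees with too little overlap; compare pairwise at the leaves."""
    if len(query & node.summary) < threshold:
        return []
    if not node.children:
        name, feats = node.files[0]
        score = len(query & feats) / max(len(query | feats), 1)
        return [(name, round(score, 2))]
    return [hit for child in node.children for hit in search(child, query, threshold)]

# Tiny usage example with made-up content: the query is a near-copy of one file.
rng = random.Random(0)
blob = lambda n=256: bytes(rng.getrandbits(8) for _ in range(n))
corpus = [(f"illegal-{i}", blob()) for i in range(8)]
tree = Node([(name, features(data)) for name, data in corpus])

query = bytearray(corpus[3][1])
query[10] ^= 0xFF                         # small modification to the planted file
print(search(tree, features(bytes(query))))
```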

#### *4.2. Analysis of Possible Attacks*

As far as forensics is concerned, both blacklisting and whitelisting attacks are discussed in this section. From an attacker's perspective, anti-blacklisting and anti-whitelisting may be used to conceal information. In an anti-blacklisting attack, an active attacker manipulates a file such that fuzzy hashing no longer recognizes the manipulated file and the original as identical. If a human observer cannot tell the difference between the original and the manipulated version, we consider the attack to be effective. A successfully modified file would be labelled as an unknown file rather than a known-bad file. This anti-blacklisting attack aims to alter a single byte inside each chunk while keeping track of the exact locations of the trigger points. The most apparent approach is to change the triggering, where the extent of each change is determined by the Hamming distance. As stated in [33], in a worst-case scenario a 'one-bit change' per building block is enough to manipulate the triggering, so an active adversary needs to change approximately one bit for each trigger position. In practice, considerably more changes are required, as there are also positions where the Hamming distance is small.
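
To make the one-byte-per-chunk idea concrete, the following sketch uses a toy content-defined chunking scheme (a 7-byte rolling sum as the trigger and per-chunk MD5 digests standing in for the fuzzy hash's building blocks); the window size, modulus, and file contents are assumptions, not MRSH-v2 parameters.

```python
# Hedged illustration of anti-blacklisting: flip one low bit per chunk while
# preserving every trigger point, so the boundaries stay the same but (almost)
# every chunk hash changes and the file no longer matches the blacklist entry.
import hashlib
import random

WINDOW, MODULUS = 7, 64   # toy rolling-hash parameters (assumptions)

def triggers(data: bytes):
    """Offsets where the rolling sum of the last WINDOW bytes hits the trigger value."""
    return [i for i in range(WINDOW, len(data) + 1)
            if sum(data[i - WINDOW:i]) % MODULUS == MODULUS - 1]

def chunk_hashes(data: bytes):
    """Per-chunk digests standing in for the fuzzy hash's building blocks."""
    bounds = [0] + triggers(data) + [len(data)]
    return [hashlib.md5(data[a:b]).hexdigest()[:8]
            for a, b in zip(bounds, bounds[1:]) if b > a]

def flip_one_byte(buf: bytearray, a: int, b: int) -> bool:
    """Flip the low bit of one byte inside chunk [a, b) without disturbing triggers."""
    before = triggers(bytes(buf))
    for pos in range(a + WINDOW, b - WINDOW):
        buf[pos] ^= 0x01
        if triggers(bytes(buf)) == before:
            return True
        buf[pos] ^= 0x01          # undo and try the next position
    return False                   # chunk too small to modify safely

rng = random.Random(1)
original = bytes(rng.getrandbits(8) for _ in range(4096))
bounds = [0] + triggers(original) + [len(original)]

forged = bytearray(original)
changed = sum(flip_one_byte(forged, a, b) for a, b in zip(bounds, bounds[1:]))
forged = bytes(forged)

same = sum(h1 == h2 for h1, h2 in zip(chunk_hashes(original), chunk_hashes(forged)))
print(f"chunks modified: {changed}, matching chunk hashes: {same} of {len(chunk_hashes(original))}")
```

The print-out shows that the human-visible content is barely changed (one bit per chunk) while nearly all chunk hashes differ, which is the effect the attack relies on.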

For anti-whitelisting, the attacker uses the hash value of one of the files on the whitelist to modify another file (typically one of the bad ones) such that the new file's hash value is identical to one on the whitelist. The attack is deemed effective if a human observer cannot detect any differences between the original and the altered version. Since files can be crafted for a given signature by generating legal trigger sequences for each building block and inserting zero-strings in between, this technique is not considered preimage-resistant. However, even though it should be feasible, changing a particular file's hash value in this way produces a worthless file. An active adversary's first step is to delete all currently active trigger sequences; the second step is to mimic the whitelisted file's triggering behavior completely, which results in many additional modifications to the file.
