**1. Introduction**

Fuzzing is a popular and effective software testing technique for detecting bugs and vulnerabilities. In the past few years, it has gained widespread adoption in mainstream software companies (such as Google [1–3], Microsoft [4], and Adobe [5]) and has found thousands of vulnerabilities.

Coverage-based Greybox Fuzzing (CGF) [6,7] is one of the most popular forms of fuzzing. It is based on the insight that increasing code coverage usually leads to better crash detection. Using lightweight instrumentation, CGF automatically generates a large number of inputs to feed the target program and continuously collects coverage information as feedback to guide fuzzing.
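The CGF feedback loop described above can be sketched as follows. This is a minimal illustration, not AFL's actual implementation: `run_target` stands in for executing the instrumented binary and returning the set of covered edges plus a crash flag, and `mutate` is a trivial stand-in for AFL's much richer mutation operators.

```python
import random

def mutate(data: bytes) -> bytes:
    # Flip one random byte (a stand-in for AFL's richer mutators).
    if not data:
        return data
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ random.getrandbits(8)]) + data[i + 1:]

def coverage_guided_fuzz(seed_corpus, run_target, max_iters=100000):
    """Simplified CGF loop. `run_target(data)` is assumed to return
    (edges_covered, crashed) for one execution of the target."""
    queue = list(seed_corpus)      # seeds retained for further mutation
    global_coverage = set()        # edges observed over the whole campaign
    crashes = []
    for _ in range(max_iters):
        seed = random.choice(queue)
        data = mutate(seed)
        edges, crashed = run_target(data)
        if crashed:
            crashes.append(data)
        elif edges - global_coverage:
            # The input exercised new edges: keep it as a seed.
            queue.append(data)
            global_coverage |= edges
    return queue, crashes
```

An input is kept only if it triggers previously unseen coverage; this is the feedback channel that distinguishes greybox fuzzing from purely blackbox mutation.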

Inspired by the impressive achievements of CGF, many researchers have conducted studies and developed their own fuzzers from different perspectives [8–10]. AFLFast [11] assigns more energy to the low-frequency paths based on the Markov chain model. AFLGo [12], a directed grey-box fuzzer, is implemented to generate inputs to reach given sets of target program locations. FairFuzz [13] identifies rare branches in the program and adjusts mutation strategies to increase coverage. MOPT [14] leverages a mutation schedule based on particle swarm optimization (PSO) to accelerate the convergence speed. EcoFuzz [15] improves the power schedule for discovering new paths using a variant of the adversarial multi-armed bandit model. PerfFuzz [16] generates pathological inputs to detect algorithm complexity vulnerabilities. MemLock [17] utilizes memory consumption information to guide seed selection to trigger the weakness of memory corruption.

However, most previous approaches rely on a single criterion to select seeds. While such approaches are simple and effective for specific problems, they remain inadequate for achieving effective coverage and detecting bugs within a reasonable amount of time. Cerebro [18] uses multiple objectives as seed selection criteria, but it cannot dynamically adjust its seed selection strategy according to the fuzzing process. As a result, much useful information is ignored, hindering the discovery of bugs and paths.

In this paper, we propose MooFuzz, a many-objective optimization seed scheduling model aimed at speeding up bug discovery and improving code coverage. MooFuzz performs static analysis on the code and marks risky locations in order to collect edge risk information. During fuzzing, a novel measurement method is used to update useful information, including path risk, path frequency, and mutation information. According to the fuzzing progress, MooFuzz divides the seed pool state into three categories: the Exploration State, the Search State, and the Assessment State. In the Exploration State, the fuzzer emphasizes exploring high-risk locations in the program. In the Search State, the fuzzer spends more energy discovering new paths. In the Assessment State, the fuzzer aims to select and evaluate promising seeds. MooFuzz collects different information to measure seed priority in each state and builds a many-objective optimization model that selects an optimal seed set using a non-dominated sorting algorithm [19]. Besides, we observe that although many studies have improved the power schedule, they do not monitor energy usage during it. Therefore, MooFuzz uses multiple sources of information to set the energy of selected seeds and monitors energy usage.
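The non-dominated sorting cited above [19] partitions candidates into successive Pareto fronts. A minimal sketch is shown below; the objective vectors are hypothetical, and all objectives are assumed to be minimized for illustration (in practice each seed would be scored on criteria such as path risk and path frequency).

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts F1, F2, ...,
    returning lists of indices into `points`."""
    fronts = []
    remaining = list(range(len(points)))
    while remaining:
        # A point belongs to the current front if nothing left dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Seeds in the first front are mutually non-dominated and form the optimal seed set under the current objectives; later fronts hold progressively less promising seeds.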

We design and implement our prototype by extending American Fuzzy Lop (AFL) [7], and evaluate it against four popular fuzzers, AFL, AFLFast, FairFuzz, and PerfFuzz, in terms of path discovery and bug detection. We conduct our evaluation on seven real-world applications. The experimental results demonstrate that MooFuzz outperforms the others. Compared with AFL, AFLFast, FairFuzz, and PerfFuzz, it triggers 46%, 32%, 34%, and 153% more crashes, respectively, with almost the same execution time. Furthermore, we conduct case studies and analyze the discovered vulnerabilities. MooFuzz is able to trigger stack overflow, heap overflow, null pointer dereference, and memory leak vulnerabilities. The contributions of this paper are as follows.


The rest of this paper is organized as follows. Section 2 introduces the background and related work on many-objective optimization and CGF. Section 3 presents the design of MooFuzz. Section 4 presents the evaluation, and Section 5 concludes the paper.

**2. Background and Related Work**

In this section, we introduce the background of many-objective optimization and CGF and discuss related work.
