Article

Refactoring for Java-Structured Concurrency

School of Information Science and Engineering, Hebei University of Science and Technology, Shijiazhuang 050000, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(5), 2407; https://doi.org/10.3390/app15052407
Submission received: 7 January 2025 / Revised: 12 February 2025 / Accepted: 14 February 2025 / Published: 24 February 2025

Abstract
Structured concurrency treats multiple tasks running in different threads as a single unit of work, thereby improving reliability and enhancing observability. Existing IDEs (Integrated Development Environments) do not provide sufficient support to leverage such an advanced structure or to conduct the refactoring automatically, and manual refactoring is tedious and error prone. There is therefore an urgent need for adequate automatic refactoring support. To this end, this paper proposes ReStruct, an automatic refactoring approach that transforms unstructured concurrency to structured concurrency. ReStruct first employs a visitor-pattern analysis to locate the target code for refactoring and then applies precondition checks to filter out code that does not meet the refactoring criteria. Subsequently, it performs a scope analysis to guide the refactoring process. Finally, ReStruct performs the refactoring on the AST of the source program. ReStruct is implemented as an Eclipse plugin and is evaluated on seven real-world projects in terms of the number of refactorings, changed LOCs (lines of code), refactoring time, and post-refactoring performance. The experimental results show that a total of 82 unstructured code segments are refactored, with an average of 27.3 s per project. Furthermore, the performance of the refactored projects improves by an average of 6.5%, demonstrating the effectiveness of ReStruct.

1. Introduction

Structured concurrency is a concurrent programming method that constrains the concurrent behavior between tasks by defining a concurrent region, enhancing the program’s maintainability and observability. The structured concurrency framework was first introduced in JDK19 [1], which defines a clear scope for a set of tasks that are executed simultaneously to manage them, aiming at more precise control and management of concurrent tasks. Structured concurrency can help to simplify thread management issues in concurrent programming and can realize the programming style of one task per thread.
In recent years, various methods and tools have been proposed in both academia and industry to enhance the performance of concurrent programs. Some researchers have proposed methods for converting sequential code in Java to concurrent code. Dig et al. [2] proposed a method to improve the responsiveness of mobile wearable devices by converting long-running blocking code to asynchronous execution. Tramontana et al. [3] proposed how to utilize program analysis techniques to parallelize identified serial code. Other researchers have suggested using new features in the JDK to transform serial streams to parallel streams. Midolo et al. [4] proposed a refactoring method to transform different for loops to the corresponding stream structures to improve the execution efficiency of the loops. Radoi et al. [5] proposed a refactoring method called Relooper, which transforms serial code arrays to parallel arrays that can be executed in parallel. Additionally, there has been work on lock-related refactoring in parallel programs; for example, Frank et al. [6] proposed an automatic refactoring tool called Relocker, which helps programmers to refactor synchronized blocks to reentrant or read–write locks. Zhang et al. introduced refactoring tools FineLock [7] and Clock [8] to improve lock patterns.
Most of the refactoring tools mentioned above are built on the Java 8 APIs, such as lambda expressions and the Stream API. Although these tools have played certain roles in their specific application scenarios, the limitations of their technical foundations mean that none of them can effectively address the complexity introduced by task coordination, error-propagation control, and the organization of complex concurrent logic in concurrent programming. It is therefore difficult for them to achieve in-depth optimization and efficient management of concurrent programming. Our tool, in contrast, is based on the structured-concurrency features in JDK 19 and can provide developers with a convenient, safe, and efficient concurrent programming model. In Java, unstructured concurrency can be refactored to structured concurrency manually, but manual refactoring is time consuming, labor intensive, and prone to errors. Automated refactoring can be employed instead, but it faces the following main challenges: (1) identifying unstructured concurrency and refactoring it to structured concurrency through program analysis, (2) ensuring the independence of tasks during the refactoring, and (3) ensuring that tasks do not exceed the boundaries of the structured-concurrency control after the refactoring.
To address these issues, this paper proposes an automated refactoring method. The method first uses visitor-pattern analysis to locate the unstructured code for refactoring. Then, it applies precondition checks to filter the unstructured code and conducts a scope analysis, which ensures that variable scopes remain unchanged after refactoring. Finally, the refactoring is completed on the AST (abstract syntax tree) based on the results obtained. Using this method, we implemented the automated refactoring tool ReStruct as a plugin within the Eclipse JDT [9] framework. The tool was used to conduct experiments on seven real-world projects, including SystemML [10] and Light-4j [11]. The experiments evaluated ReStruct in terms of the number of refactorings, the changed LOCs, the refactoring time, and the performance of the refactored programs. A total of 82 refactorings were performed by ReStruct, with an average time of 27.3 s per project. These results demonstrate that ReStruct can effectively refactor unstructured concurrency to structured concurrency in Java.
The remainder of this paper is organized as follows: Section 2 discusses related research work. Section 3 presents the motivation. Section 4 introduces the automated refactoring method for structured concurrency. Section 5 presents the consistency check of the program before and after refactoring. Section 6 presents the implementation of ReStruct. Section 7 discusses the experimental results. Section 8 presents threats to the validity of the experiment. Section 9 concludes the paper.

2. Related Work

We primarily focus on two aspects of related work: research on structured concurrency and concurrency-related refactoring tools.

2.1. Research on Structured Concurrency

Smith [12] first introduced the concept of structured concurrency to address the complexities and resource management issues in traditional concurrent programming. Stadler et al. [13] discussed the implementation of coroutines in the Java Virtual Machine, illustrating the advantages of coroutines over threads, and emphasized the lack of support for coroutines in current mainstream languages. D. Beronić et al. [14] explored the structured-concurrency model in the JVM, focusing on the issue that I/O operations are highly dependent on operating system threads and their scheduling. They compared the OS thread caller and the virtual thread scheduler in structured concurrency, demonstrating the execution efficiency of virtual threads and the potential of using a virtual thread scheduler. Chen et al. [15] examined the origins of structured concurrency from both historical and technical perspectives, proposing methods to leverage structured concurrency to simplify the complexities of managing concurrent tasks while also highlighting the relationship between structured concurrency and coroutines. Pufek [16] and Chirila [17] compared traditional Java threads with virtual threads that implement structured concurrency, explaining the reasons for the efficiency of the structured concurrency. Radovan et al. [18] introduced the implementation of structured concurrency in the programming languages Java and Kotlin and compared the coroutines in Kotlin with virtual threads in Java. Rosà et al. [19] proposed a new framework that can adaptively select worker threads as platform threads or virtual threads in different environments, thereby improving the throughput of programs.

2.2. Refactoring Tools

With the rapid development of the software industry, the cost of software maintenance continues to rise, and refactoring tools are receiving increasing attention. Franklin et al. presented a refactoring tool called LambdaFicator [20], which can convert collection operations to corresponding lambda expressions, simplifying code and enhancing the program’s parallel capabilities. In the research of Java program refactoring, bytecode-level refactoring techniques have attracted extensive attention because they can directly manipulate the intermediate code executed by the JVM. Especially in the field of concurrent programming, bytecode refactoring offers new possibilities for optimizing locking mechanisms and improving multithread performance. Zhang et al. [21] proposed a method based on bytecode transformation, which focuses on implementing customizable locking mechanisms. This research replaces traditional locking mechanisms with more efficient ones through bytecode-level static analysis and dynamic instrumentation techniques. Real-time refactoring tools have attracted attention because they can dynamically modify code while the program is running. Fernandes et al. [22] proposed a Java tool that supports real-time refactoring. This tool allows developers to dynamically modify the code’s structure during the program’s runtime without restarting the application. It achieves the real-time refactoring of methods, classes, and interfaces by combining bytecode instrumentation and dynamic-class-loading technologies. Subsequently, they continued to build a prototype of a real-time refactoring environment [23] by combining the features of dynamic languages and Just-In-Time (JIT) compilation technology. This prototype is capable of identifying, recommending, and applying "Extract Method" refactoring.

3. Motivation

This section illustrates the motivation with the method callApiAsyncMultiThread from the class Http2ClientPoolTest in the project Light-4j, as shown in Figure 1a. This method uses multiple threads to call an API asynchronously. First, it creates a thread pool with a fixed number of threads and a set of tasks (lines 2–4). Then, it submits the tasks using the method invokeAll(), which returns a collection of futures representing the results of these tasks (line 5). Through the futures, we can obtain the execution results or the exception information of the tasks. Next, it iterates through the futures and uses the method Future.get() to obtain the results of the tasks (lines 6–9). Finally, it outputs the results (line 10). This is the unstructured style of concurrent task submission, and several problems arise with this kind of code. First, exception handling for the subtasks is not synchronized with the main thread’s capturing mechanism. When the main thread catches an exception from a subtask and terminates, it may not be able to promptly cancel or abort the remaining subtasks. Second, an interruption signal captured by the main thread is not effectively transmitted to the subtasks. Even if the main thread stops executing after encountering an interruption, the signal is not propagated to the subtasks, so subtasks that should be canceled immediately may continue executing. Finally, the main thread’s response to exceptions from subtasks may be delayed. Because Future.get is a synchronous blocking method, the main thread enters a blocked state while waiting for the result of a particular task; if another subtask encounters an exception at that time, the main thread may not be able to respond to it promptly. These issues can waste thread resources and may even cause thread leaks.
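The unstructured submission pattern described above can be sketched as follows. This is a minimal reconstruction; the class name, task type, and pool size are illustrative, not taken from Light-4j:

```java
import java.util.*;
import java.util.concurrent.*;

public class CallApiUnstructured {
    /** Submits a batch of tasks to a fixed pool and collects results via Future.get,
     *  mirroring the unstructured shape of callApiAsyncMultiThread (Figure 1a). */
    public static List<String> callAll(List<Callable<String>> tasks) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);    // create the pool
        try {
            // Submit all tasks; invokeAll blocks until every task has completed.
            List<Future<String>> futures = executor.invokeAll(tasks);
            List<String> resultList = new ArrayList<>();
            for (Future<String> f : futures)
                // Blocking get: a failure in a later task is not seen until we reach it.
                resultList.add(f.get());
            return resultList;
        } finally {
            executor.shutdown();                                        // destroy the pool
        }
    }
}
```

With this shape, a failure in one task is only observed when the loop reaches that task's Future.get call, and the sibling tasks keep running in the meantime.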
Figure 1b illustrates the result of refactoring with IntelliJ IDEA [24]. IDEA cannot provide this type of refactoring directly; however, it can offer it in the form of template code, i.e., code that is edited in advance and then generated via shortcut keys at the place where it needs to be inserted. Lines 2–3 are the original code, and the code template for structured concurrency is on lines 4–10. The template has no direct association with the original code and requires developers to modify it manually according to the original code. First, during the task iteration, the generic type (T) must be replaced by the return type of the task. Because the name of the task set is unknown, a placeholder is left, and the name of the task set from the original code must be inserted at this position (line 5). Second, the code that obtains the results of the tasks must be added after the method scope.throwIfFailed. The code for result acquisition in Figure 1a must be inserted on line 10 in Figure 1c, so its definition needs to be moved forward, as shown on line 4 in Figure 1c. In general, this is a semiautomated refactoring approach; current integrated development environments do not support the automatic refactoring of unstructured concurrency to structured concurrency.
In contrast, Figure 1c shows the correct refactoring result. First, because the refactoring affects the scope of the variable resultList, this variable is declared on line 4. Next, a scope object representing the structured-concurrency context is created; it is automatically released once its context concludes. The strategy used by the scope object is shutdownOnFailure, indicating that the failure of any task leads to the failure of the entire task set (line 5). The tasks are then submitted using the method scope.fork (lines 7–9). The method scope.join blocks the main thread until the subtasks are completed. If any subtask throws an exception, the method scope.throwIfFailed immediately catches it and automatically cancels the execution of the remaining subtasks, effectively avoiding the risk of thread leakage (lines 10–11). Afterward, the results of the tasks are obtained with the resultNow method, which is non-blocking and retrieves the results immediately (line 13). Finally, the results of the tasks are output. Comparing Figure 1b,c shows that much of the logic in the template provided by IDEA still has to be supplemented manually.
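The structured counterpart described for Figure 1c can be sketched as follows. This is a minimal reconstruction against the JDK 19 incubator API (jdk.incubator.concurrent.StructuredTaskScope, enabled with --add-modules jdk.incubator.concurrent); in JDK 21 and later the class moved to java.util.concurrent as a preview API, and scope.fork returns a Subtask rather than a Future. Class and variable names are illustrative:

```java
import java.util.*;
import java.util.concurrent.*;
// JDK 19/20: requires --add-modules jdk.incubator.concurrent at compile and run time.
import jdk.incubator.concurrent.StructuredTaskScope;

public class CallApiStructured {
    public static List<String> callAll(List<Callable<String>> tasks) throws Exception {
        List<String> resultList = new ArrayList<>();   // hoisted out of the scope (line 4)
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            List<Future<String>> futures = new ArrayList<>();
            for (Callable<String> task : tasks)
                futures.add(scope.fork(task));          // submit subtasks (lines 7–9)
            scope.join();                               // block until all subtasks finish
            scope.throwIfFailed();                      // cancel siblings and rethrow on failure
            for (Future<String> f : futures)
                resultList.add(f.resultNow());          // non-blocking after join (line 13)
        }                                               // scope auto-closed here
        return resultList;
    }
}
```

The try-with-resources block is the structured-concurrency boundary: no subtask can outlive it, which is exactly the property the unstructured version lacks.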
We also obtain the execution times before and after the refactoring using a benchmarking tool, JMH [25]. The results show that the execution time before refactoring is 4.2 s, while the execution time after refactoring is 3.8 s, resulting in a 9.5% reduction in the runtime. This indicates that refactoring unstructured concurrency to structured concurrency can effectively improve program execution efficiency. However, current integrated development environments still do not provide sufficient support to achieve this type of refactoring.

4. Design

This section introduces ReStruct, a refactoring method that assists developers in refactoring unstructured concurrent code to structured concurrent code. The design consists of four main parts: the refactor probe, the precondition, program analysis, and transformation.

4.1. Overview

Figure 2 presents an architectural overview of ReStruct. First, it parses the Java source code to generate an AST and traverses the AST with the Visitor Pattern to locate the unstructured code. Second, it applies precondition checks to the unstructured code to validate whether it meets the requirements for refactoring. Third, ReStruct performs a scope analysis to collect the variables whose scope will change during refactoring. Based on these results, the refactoring is undertaken on the AST. Finally, the refactored code is checked against the consistency rules.
To better describe the refactoring method, we make the following definitions, as shown in Figure 3. Here, cs represents the statement that creates a thread pool, ss represents the statement that submits tasks with the thread pool, gs represents the statement that obtains the result of the execution of the task, ds represents the statement that destroys the thread pool, and os represents the statements located between ss and gs. Among them, the scope of the variables defined in os may change after refactoring.
Definition 1.
(Method pending refactoring): In a program (P), given a set (M) that collects all the methods to be refactored, M = {m_1, m_2, ..., m_n}. For m_i ∈ M (1 ≤ i ≤ n), it holds that ss ∈ m_i && gs ∈ m_i, denoting that m_i must contain ss and gs.
Definition 2.
(Tasks in method): In a program (P), all the tasks in M are defined as T = {t_1, t_2, ..., t_n}. For t_i ∈ T (1 ≤ i ≤ n), t_i = {t_i1, t_i2, ..., t_ik}, which indicates that each t_i corresponds to an m_i and that t_i contains k tasks. The variables t_i^b and t_i^a represent the numbers of tasks before and after refactoring in m_i.
Definition 3.
(The type of method): In a method (m_i), we classify m_i according to the relative positions of ss and gs. If there exists a try structure that acts as a parent node for both statements, we classify the type of m_i as K_1, as shown in Figure 4a. In contrast, the type of m_i is defined as K_2 in Figure 4b, and this try structure is represented as twr.

4.2. Refactor Probe

The purpose of the refactor probe is to identify unstructured code segments in a Java program. First, ReStruct uses the JDT’s AST parser to convert the entire project to the corresponding AST. The AST is a tree representation of the source code and presents the structure and syntax information of the program. In the AST, each variable carries binding information, from which the specific type of the variable can be determined. Second, ReStruct utilizes the Visitor Pattern to traverse the AST and check the type of the left-hand variable in each VariableDeclaration statement. Once a variable is found to belong to the ExecutorService interface or one of its implementation classes, the variable is recorded. Continuing the traversal, ReStruct checks whether the variable is used in any MethodInvocation statement. If so, the currently traversed method is marked and added to the set (M).
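ReStruct implements this probe on the Eclipse JDT AST using binding information. As a self-contained illustration, the same visitor-pattern traversal can be sketched with the JDK's built-in com.sun.source tree API; for brevity, this sketch matches the declared type name textually rather than resolving bindings, so it is an approximation of the real probe:

```java
import com.sun.source.tree.*;
import com.sun.source.util.*;
import javax.tools.*;
import java.net.URI;
import java.util.*;

public class RefactorProbe {
    // Minimal in-memory source file for the compiler API.
    static class StringSource extends SimpleJavaFileObject {
        final String code;
        StringSource(String name, String code) {
            super(URI.create("string:///" + name + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignore) { return code; }
    }

    /** Returns the names of methods that declare an ExecutorService variable
     *  and later invoke a method on it (the candidate set M). */
    public static List<String> probe(String className, String source) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        JavacTask task = (JavacTask) compiler.getTask(null, null, d -> {}, null, null,
                List.of(new StringSource(className, source)));
        List<String> marked = new ArrayList<>();
        for (CompilationUnitTree unit : task.parse()) {
            new TreeScanner<Void, Void>() {
                String current;                          // enclosing method name
                final Set<String> pools = new HashSet<>(); // per-method pool variables
                boolean used;
                @Override public Void visitMethod(MethodTree m, Void p) {
                    current = m.getName().toString();
                    pools.clear(); used = false;
                    super.visitMethod(m, p);
                    if (used) marked.add(current);
                    return null;
                }
                @Override public Void visitVariable(VariableTree v, Void p) {
                    // Simplification: match the declared type name textually.
                    if (String.valueOf(v.getType()).contains("ExecutorService"))
                        pools.add(v.getName().toString());
                    return super.visitVariable(v, p);
                }
                @Override public Void visitMethodInvocation(MethodInvocationTree inv, Void p) {
                    ExpressionTree sel = inv.getMethodSelect();
                    if (sel instanceof MemberSelectTree ms
                            && pools.contains(String.valueOf(ms.getExpression())))
                        used = true;
                    return super.visitMethodInvocation(inv, p);
                }
            }.scan(unit, null);
        }
        return marked;
    }
}
```

Running probe on a class whose method declares an ExecutorService and later calls a method on it yields that method's name; methods without such a pattern are not marked.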

4.3. Precondition

The purpose of the precondition is to verify whether the methods in the set (M) meet the refactoring requirements. It focuses mainly on the following two aspects:

4.3.1. Precondition for shutdownOnFailure Strategy

ShutdownOnFailure is a strategy in structured concurrency. It regards all the tasks as a whole and requires them to succeed together; when one of the tasks fails, it cancels the remaining running tasks. For the code before refactoring to comply with this principle, we need to check two aspects. First, exceptions in subtasks should be captured in a timely manner in the main thread, so that the main thread is aware of the execution status of the subtasks; once a subtask fails, the failure of the entire concurrent task can be detected and handled. Second, the submitted subtasks should be independent of each other. Independence prevents exceptions in one subtask from spreading to other subtasks through dependency relationships, ensuring that all the subtasks can be regarded as a whole.
Check task result acquisition. In Java’s multithreaded task return mechanism, when a task is submitted, a Future object represents the returned result of the task. Using the method Future.get, we can obtain the result of the corresponding subtask’s execution or track its exception information. If the method Future.get is not called in the main thread, an exception occurring in a subtask cannot be captured, as shown in Figure 5. The code snippet comes from the method aggregateUnaryMatrix of the class LibMatrixAgg in the project SystemML. In this snippet, a fixed-size thread pool is created and a group of tasks is added to the task set (lines 2–10). Then, this group of tasks is submitted through the method invokeAll. Finally, the thread pool is closed (lines 11–12). However, the method does not keep the value returned by invokeAll, so the main thread cannot use Future.get to obtain the exceptions or results of the subtasks. Therefore, regardless of whether the subtasks execute successfully, the main thread continues to execute, which clearly violates the shutdownOnFailure strategy in structured concurrency. Consequently, we need to determine whether a Future object is returned after all the subtasks are submitted and then judge whether the results of the tasks are obtained through Future.get in the main thread. In this way, the main thread can capture exceptions in subtasks in a timely manner and terminate.
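The violation in Figure 5 can be reproduced with a minimal, self-contained sketch (the task bodies are illustrative): because the futures returned by invokeAll are discarded, the failing subtask's exception is never observed by the main thread:

```java
import java.util.*;
import java.util.concurrent.*;

public class MissingGetDemo {
    /** Submits two tasks, one of which fails, but never calls Future.get.
     *  Returns true to show that the main thread proceeds regardless. */
    public static boolean mainThreadContinued() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Callable<Integer>> tasks = List.of(
                () -> 1,
                () -> { throw new IllegalStateException("subtask failed"); });
        try {
            pool.invokeAll(tasks);   // returned futures discarded: Future.get is never called
        } catch (InterruptedException e) {
            return false;
        } finally {
            pool.shutdown();
        }
        // The failed subtask's exception was silently stored in its (discarded)
        // Future, so the main thread carries on as if nothing happened.
        return true;
    }
}
```

This is exactly the pattern the precondition rejects: without a retained Future and a Future.get call, the main thread cannot detect subtask failure.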
Check task dependency. When converting from unstructured concurrency to structured concurrency, it is necessary to ensure independence between tasks. From the perspective of error propagation, independent tasks have no complicated dependencies and can directly terminate all the tasks according to the strategy ShutdownOnFailure once a certain task fails. From the aspects of task scheduling and execution, in structured concurrency, the relationship between a parent task and subtasks is usually organized into a tree structure. This tree structure allows us to decompose complex tasks into a series of smaller subtasks, forming a clear hierarchical relationship. If the subtasks are independent of each other, it avoids mutual constraints in sibling tasks, enabling a more flexible organization and execution of tasks. In this way, structured concurrency can better manage and schedule tasks, improving the maintainability and execution efficiency of the program.
Task dependency analysis aims to identify dependencies between subtasks in a multithreaded environment. In multithreaded programming, dependencies between tasks typically manifest as control and data dependencies. Control dependency refers to the relationship imposed by the execution order among tasks. Data dependency refers to the potential competition that may arise when multiple tasks access or modify shared resources. Algorithm 1 presents the task dependency analysis, which takes the method m and the tasks in m as input. By invoking the methods hasConDep and hasDataDep, Algorithm 1 performs the control and data dependency analysis (line 2). If any dependency exists, it returns true (line 3).
Algorithm 1: Task Dependency Analysis
The method hasConDep is used to detect control dependencies between tasks. Typically, tasks at the same level do not have mutual calling relationships. Therefore, the control dependency here primarily determines whether one task depends on the execution results of another task. The hasConDep method also takes m and tasks as input parameters. First, it obtains all the Future objects and the results of these Future objects in m, recording them as a set (res) (line 6). This step ensures that all the possible sources of dependencies are covered. Then, it iterates through the statements in tasks. If a statement uses a variable that belongs to res, it indicates that the current task depends on the results of the execution of other tasks. In this case, the algorithm returns true (lines 7–10). If no such variables are found, it checks the following statement. Finally, if no dependency is detected, it returns false (line 11).
The method hasDataDep is used to analyze data dependencies between tasks; hasDataDep determines dependency by analyzing the method (m) and the collection of tasks. First, it performs pointer analysis and returns the result of the pointer analysis (pta) (line 13). Here, pta is an instance of the interface PointsToAnalysis, which implements pointer analysis. PointsToAnalysis defines a series of methods for performing pointer analysis. Among them, the method reachingObject takes an object as input and returns the set of variables that reference this object. Next, a collection (set) is defined to store the pointer information of the variables for each class (line 14). Then, it iterates through each task, collecting the “value” objects and their corresponding pointer information in the task and storing them in the map (mp). “Value” represents a general variable in a program, including constants and variables. After the completion of the task traversal, mp is added to set (lines 15–20). In this way, the set contains the pointer information of the variables in m’s tasks. Finally, if the values of different maps in set have an intersection and one of them involves a write operation on its keys, true is returned (lines 21–25).
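ReStruct builds hasDataDep on a Soot-style PointsToAnalysis. Abstracting the pointer analysis away, the final intersection check (lines 21–25) can be sketched as follows, where each task is reduced to a hypothetical map from an accessed object to a flag recording whether the task writes it:

```java
import java.util.*;

public class DataDepCheck {
    /** Each map is one task's access record: object name -> true if the task
     *  writes that object. A data dependency exists when two tasks touch the
     *  same object and at least one of the accesses is a write. */
    public static boolean hasDataDep(List<Map<String, Boolean>> taskAccesses) {
        for (int i = 0; i < taskAccesses.size(); i++) {
            for (int j = i + 1; j < taskAccesses.size(); j++) {
                Map<String, Boolean> a = taskAccesses.get(i), b = taskAccesses.get(j);
                for (String obj : a.keySet()) {
                    // Shared object with at least one writer => potential race.
                    if (b.containsKey(obj) && (a.get(obj) || b.get(obj)))
                        return true;
                }
            }
        }
        return false;
    }
}
```

Two tasks that only read a shared object do not conflict, which matches the write-operation condition on lines 21–25 of Algorithm 1.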

4.3.2. Precondition for Checking the Thread Pool

It is inadvisable to refactor a method that uses a customized thread pool, for two main reasons. First, a customized thread pool has its own logic, which does not align properly with the APIs of structured concurrency. For example, a customized thread pool may have unique pre-processing logic after task submission, which differs considerably from the standard task submission process of structured concurrency. Second, there are significant differences in task state management, concurrency control mechanisms, and lifecycles, which make such refactoring rather difficult. For instance, with regard to concurrency control, a customized thread pool may use a particular locking mechanism to guarantee mutually exclusive access to resources, whereas structured concurrency employs different methods of task isolation and dependency control, so direct refactoring may cause conflicts. In our manual experiments, refactoring customized thread pools frequently gave rise to unexpected errors.
Through the refactor probe, we locate the corresponding cs and check whether the types of variables declared in the cs inherit from ThreadPoolExecutor. If so, the currently traversed method must be removed from M.

4.4. Program Analysis

In this section, we will focus on scope analysis. This analysis provides a critical foundation for subsequent refactoring.
Scope analysis obtains the variables whose scopes will change after refactoring. Structured concurrent code is typically contained within a try-with-resources structure. If the method is of type K_2 and defines variables in os, refactoring to structured concurrency will reduce the scope of these variables because of the newly added try-with-resources structure. As shown in Figure 1a,c, after refactoring, the try-with-resources structure will include the code from line 5 to line 9 of Figure 1a. This reduces the scope of resultList, defined on line 6 in Figure 1a, so the definition of the variable resultList should be moved outside of the try-with-resources structure. The purpose of scope analysis is to find such variables.
Algorithm 2 is the pseudocode for the scope analysis algorithm, with its input parameters being the method m and the output being a set. LocalUses is used to check the usage of a particular variable. Specifically, a “local” object represents a local variable. The getUsesOf method in LocalUses takes a “local” object as a parameter. This method yields a set of positional details on the usage of this local variable. The main steps of the algorithm are as follows:
  • ReStruct defines a collection (res) to collect the variables whose scopes may change after refactoring (line 1);
  • If the method is of type K_1, the algorithm returns res directly (lines 2–3) because methods of type K_1 will not introduce any new try-with-resources structure after refactoring. Otherwise, it identifies the sets of successor statements of ss and gs, denoted as set_1 and set_2 (lines 4–5). Then, it calculates the difference between set_1 and set_2, which represents os (line 6);
  • If os is empty, it indicates that no variable’s scope will change after refactoring, and the algorithm returns res (lines 7–8);
  • Otherwise, it obtains localUses to acquire the location information of where each variable is used. Then, it iterates through os to check whether each statement contains a local variable. If so, it uses the method localUses.getUsesOf() to collect the location information of the statements that use the local variable. If this location information intersects with s e t 2 , it indicates that the scope of this local variable will change after refactoring, and this local variable is added to res (lines 9–12). Finally, the algorithm returns res (line 13).
Algorithm 2: Scope Analysis
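Abstracting away the JDT and LocalUses specifics, the core of Algorithm 2 can be sketched with a hypothetical Stmt record standing in for AST statements: a variable defined in os must be hoisted out of the new try-with-resources block when any successor statement of gs still uses it:

```java
import java.util.*;

public class ScopeAnalysis {
    /** Hypothetical statement record: its position, the variable it defines
     *  (or null), and the set of variable names it uses. */
    public record Stmt(int pos, String defines, Set<String> uses) {}

    /** Variables defined in os (between ss and gs) whose uses intersect the
     *  successor statements of gs (set_2 in Algorithm 2) — these definitions
     *  must be moved outside the try-with-resources block. */
    public static Set<String> varsToHoist(List<Stmt> os, List<Stmt> afterGs) {
        Set<String> res = new LinkedHashSet<>();
        for (Stmt s : os) {
            if (s.defines() == null) continue;          // statement defines no variable
            for (Stmt later : afterGs) {
                if (later.uses().contains(s.defines())) { // used after gs => scope shrinks
                    res.add(s.defines());
                    break;
                }
            }
        }
        return res;
    }
}
```

For the motivating example, resultList is defined in os and used after gs, so it is the single variable returned, matching Figure 1c where its declaration is moved to line 4.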

4.5. Transformation

This section elaborates on the refactoring transformation, which takes the method m as input and introduces structured concurrency by modifying the AST of the source code. The key steps include adjusting variable scopes with the results of the scope analysis, inserting and deleting AST nodes in accordance with the immutability of AST nodes, introducing the structured-concurrency API, and performing a consistency check to ensure the correctness of the refactoring. Here, the immutability of AST nodes means that, because the AST is a syntax-based tree structure in which each part of the code corresponds to a node, modifying the code structure requires creating a new node and copying the information of the original node to it, instead of directly modifying the parent–child relationships of the original node. Algorithm 3 presents a detailed delineation of the transformation.
(1)
ReStruct obtains a copy (m’) of the method m and uses it as the input for the subsequent consistency check (it will be presented in the next section) (line 1). Then, it imports the necessary structured concurrency packages and creates a scope object to represent the structured concurrency context (lines 2–3);
(2)
It obtains the result of the scopeAnalysis and records it as set. Then, it iterates the collection set, redefines the variables therein, and deletes their definitions in the original program (lines 7–9);
(3)
It checks whether ss and gs have a common parent node, twr. If not, it creates a try-with-resources node as twr and inserts twr after the definitions above (lines 7–10);
(4)
It adds the scope object to the resource of twr and submits the task using the method scope.fork (lines 11–12);
(5)
For methods of type K 2 , because of the immutability of the AST node, it iterates os, using the method AST.copyNode to duplicate each statement and insert the copy into twr. Finally, it removes the original node from the AST (lines 13–16);
(6)
It inserts methods scope.join and scope.throwIfFailed and replaces Future.get with Future.resultNow (lines 17–18). Then, it tries to remove statements cs and ds from the AST (line 19);
(7)
Finally, it performs a consistency check on the refactored method (m). If it fails the consistency check, it returns false (lines 19–21).
Algorithm 3: Transformation
Applsci 15 02407 i003
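As a concrete illustration of Algorithm 3, consider the following before/after sketch. The class name and task bodies are hypothetical, and we assume cs/ds denote the executor creation and shutdown statements and ss/gs the submission and result-acquisition statements (our labeling, not taken verbatim from the paper's listings). Because the StructuredTaskScope API lived in the incubator module jdk.incubator.concurrent in JDK 19 and has since moved and changed shape, the refactored version is shown as a commented sketch rather than as compilable code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TransformSketch {
    static String fetchUser()  { return "user"; }   // hypothetical task bodies
    static String fetchOrder() { return "order"; }

    // Before refactoring: unstructured concurrency with an executor and futures.
    static String before() throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(2);      // cs: executor creation
        Future<String> u = es.submit(TransformSketch::fetchUser);  // ss: task submission
        Future<String> o = es.submit(TransformSketch::fetchOrder);
        String result = u.get() + "/" + o.get();                   // gs: result acquisition
        es.shutdown();                                             // ds: executor shutdown
        return result;
    }

    /* After refactoring (sketch; JDK 19 incubator API):
    static String after() throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) { // scope added to twr
            Future<String> u = scope.fork(TransformSketch::fetchUser);  // scope.fork replaces submit
            Future<String> o = scope.fork(TransformSketch::fetchOrder);
            scope.join();            // inserted by step (6)
            scope.throwIfFailed();   // inserted by step (6)
            return u.resultNow() + "/" + o.resultNow(); // Future.get -> Future.resultNow
        }                            // cs and ds are removed
    }
    */

    public static void main(String[] args) throws Exception {
        System.out.println(before());
    }
}
```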

5. Consistency Check

The consistency check verifies that the program behaves consistently before and after refactoring. FlexSync [26] is a tool that performs refactoring and conversion among different synchronization mechanisms by applying different markers, and it defines a set of consistency check rules to verify the correctness of concurrent programs before and after refactoring. Similar to FlexSync, we design consistency check rules for ReStruct. Before presenting these rules, we first introduce the relevant definitions.
Definition 4.
(Task dependency). The dependency relationship for $t_i$ $(1 \le i \le n)$ in $m_i$ is defined as the set $D = \{d_1, d_2, \ldots, d_k\}$. For $d_j \in D$ $(1 \le j \le k)$, we have $d_j \subseteq t_i$, indicating that the execution of task $j$ may depend on multiple tasks.
Definition 5.
(Resources). For each os in $m_i$, the definitions of the various variables, objects, and other resources contained in the set are represented as $R = \{r_1, r_2, \ldots, r_z\}$.
Definition 6.
(Resource access). For each $m_i \in M$ $(1 \le i \le n)$, the read and write operations that follow the statements gs are denoted as $\{op_{i1}, op_{i2}, \ldots, op_{iq}\}$. In $m_i$, if there exists an operation $op_{ij} \in m_i$ $(1 \le j \le q)$ that accesses a certain resource $r$, this is represented as $op_{ij}.access(r)$.
Definition 7.
(Task cancellation mechanism). For $m_i$, when tasks fail, the set of tasks canceled at that time is defined as $c_i$. Here, $c_i \subseteq t_i$ indicates that when these tasks fail, multiple tasks may be canceled; however, the number of canceled tasks must not exceed the total number of tasks in $m_i$ minus one.
Based on the definitions provided above, the following consistency check rules for refactoring are presented:
Rule 1: Before refactoring, $\forall t_i \in T$ $(1 \le i \le n)$ with $t_i = \{t_{i1}, t_{i2}, \ldots, t_{ik}\}$, it follows that $|t_i|_{before} = k$. After refactoring, for the same $t_i$, it remains that $|t_i|_{after} = k$.
This rule indicates that the number of tasks will not change after refactoring, thus ensuring the integrity of the tasks.
Rule 2: Before refactoring, for $t_i$ $(1 \le i \le n)$ in $m_i$ and for all $d_j \in D$ $(1 \le j \le k)$, we have $d_j = \emptyset$. After refactoring, in the same $t_i$, for $d_j \in D$, we still have $d_j = \emptyset$.
This rule indicates that if there is no dependency relationship among the tasks before refactoring, then there will still be no dependency relationship among the tasks after refactoring.
Rule 3: Before refactoring, $\forall t_i \in T$ $(1 \le i \le n)$, it is the case that $c_i = \emptyset$. After refactoring, for the same $t_i$, we have $c_i = \{c_{i1}, c_{i2}, \ldots, c_{ir}\}$ $(0 \le r \le k-1)$, where $r$ is the number of tasks that have not yet been completed at that time.
Rule 3 ensures that the code that cancels specific tasks will not be refactored to structured concurrency. The reason is that the shutdownOnFailure strategy does not allow the selective cancellation of individual tasks. In essence, this rule safeguards the integrity of the task cancellation mechanism during the refactoring process.
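For illustration, the following is a hypothetical unstructured code segment of the kind Rule 3 excludes: it cancels one specific task (slow) while letting another complete, a pattern the shutdownOnFailure strategy cannot express (such cases are counted as #NSF in Table 2). The class name and task bodies are invented for this sketch.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SelectiveCancellation {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> fast = pool.submit(() -> 42);
        Future<Integer> slow = pool.submit(() -> {
            Thread.sleep(10_000);   // long-running task
            return 7;
        });
        Integer r = fast.get();
        // Selective cancellation: only `slow` is cancelled; `fast` still completes.
        // shutdownOnFailure offers no per-task cancellation, so this pattern
        // cannot be refactored to structured concurrency.
        slow.cancel(true);
        System.out.println(r + " " + slow.isCancelled());
        pool.shutdownNow();
    }
}
```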
Rule 4: Before refactoring, for $m_i$, $\forall r_t \in R$ $(1 \le t \le z)$, $\exists op_{ij} \in t_i$ $(1 \le j \le q)$ such that $op_{ij}.access(r_t)$. After refactoring, for $m_i$, $op_{ij}$ still satisfies $op_{ij}.access(r_t)$.
Rule 4 ensures that before refactoring, the resources in os are accessible by a specific read/write operation (op) and remain accessible by the same operation after refactoring.
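Rules 1–3 amount to simple set comparisons between a snapshot of a method's task structure taken before refactoring and one taken after. The sketch below is our own illustrative model of such a check, not ReStruct's actual implementation; the Snapshot record and its field names are hypothetical.

```java
import java.util.Map;
import java.util.Set;

public class RuleCheckSketch {
    // Hypothetical pre/post snapshot of a method's task structure.
    record Snapshot(Set<String> tasks, Map<String, Set<String>> deps, Set<String> cancelled) {}

    // Rule 1: the task count is preserved; Rule 2: empty dependency sets stay empty;
    // Rule 3 precondition: no task is selectively cancelled before refactoring.
    static boolean consistent(Snapshot before, Snapshot after) {
        if (before.tasks().size() != after.tasks().size()) return false;          // Rule 1
        for (String t : before.tasks()) {
            Set<String> d = before.deps().getOrDefault(t, Set.of());
            if (d.isEmpty() && !after.deps().getOrDefault(t, Set.of()).isEmpty())
                return false;                                                     // Rule 2
        }
        return before.cancelled().isEmpty();                                      // Rule 3
    }

    public static void main(String[] args) {
        Snapshot b = new Snapshot(Set.of("t1", "t2"), Map.of(), Set.of());
        Snapshot a = new Snapshot(Set.of("t1", "t2"), Map.of(), Set.of());
        System.out.println(consistent(b, a));
    }
}
```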

6. Implementation

ReStruct is designed within the Eclipse JDT framework and implemented as an Eclipse plugin. In this framework, Refactoring is the base class for refactoring operations and defines their basic lifecycle and methods. The Change class represents an executable code modification unit, encapsulating the specific changes a refactoring operation makes to the code. The UserInputWizardPage class is the base class for the user input wizard pages that interact with users during the refactoring process; developers can extend it to customize their own input interfaces for different refactoring operations. We demonstrate ReStruct’s refactoring capabilities in Figure 6. At the top of the dialog, the total number of refactored code blocks in the project is displayed. The left-hand side presents the code before refactoring, while the right-hand side displays the code after refactoring, with the changed code highlighted. Finally, users can accept or cancel the refactoring after inspecting the changes.

7. Evaluation

This section first introduces the experimental setup and benchmarks and then presents the research questions and illustrates the experimental results.

7.1. Experimental Setup

All the experiments are conducted on a workstation with an Apple M1 chip @3.2 GHz and 16 GB of main memory. The machine runs macOS Ventura 13.2.1 and has JDK 19.0.2, Eclipse 4.27.0, and Soot 4.0.0 installed.

7.2. Benchmarks

To evaluate the usefulness of ReStruct, we select seven real-world applications: SystemML [10], Light-4j [11], MetaOmGraph [27], ModernJava [28], BigStitcher [29], TrackMate [30], and CodeJam [31]. SystemML is flexible software for large-scale data processing, capable of handling data at various scales in different environments. Light-4j is a fast, lightweight, and extensible platform for building microservices. MetaOmGraph is software for processing, analyzing, and visualizing large-scale bioinformatics data. ModernJava is software that combines new Java features in practical applications. BigStitcher is software for stitching, registering, and reconstructing large-scale multiview microscopy images. TrackMate is a free tool for tracking analysis on multidimensional image data. CodeJam is an international programming competition project that challenges participants to solve algorithmic problems. These projects come from different fields and application scenarios and exhibit diverse architectures and programming practices. The version of each application, its number of unstructured concurrency instances, and its number of lines of source code are presented in Table 1. For the performance tests on the refactored programs, we selected Light-4j, CodeJam, and ModernJava because these projects contain tests that exercise the refactored code.

7.3. Research Questions

We evaluated the effectiveness of ReStruct by answering five research questions (RQs):
  • RQ1: How does ReStruct perform in the refactoring of these projects? Why does ReStruct fail to perform refactorings?
  • RQ2: How effective is ReStruct in improving efficiency?
  • RQ3: Are all the refactored programs correct?
  • RQ4: How effective is ReStruct in improving the software’s performance?
  • RQ5: How stable is ReStruct in enhancing the program’s performance?

7.3.1. Results of RQ1

To answer RQ1, we counted the number of refactorable and non-refactorable code segments in each project, with the results summarized in Table 2. The second column shows the number of code snippets identified by the refactor probe. The third column shows the amount of refactorable code in each project. Columns 4 and 5 outline the specific reasons for the refactoring failures. Here, #NSF indicates that the shutdownOnFailure strategy of the structured concurrency was not satisfied, and #CTP refers to the use of customized thread pools. The final column, "Refactorable", shows the proportion of the refactorable code in each project.
Across these projects, a total of 99 unstructured concurrent code segments were detected, of which 82 were successfully refactored, giving an average refactorable rate of 82.82%. SystemML contained the highest number of refactorable code segments: of its 48 unstructured concurrent code instances, 40 were successfully refactored, a refactorable rate of 83.33%. These code segments are primarily located in the org.apache.sysml.runtime package, in the module that handles large-scale machine-learning functionalities. Light-4j, TrackMate, and CodeJam achieved refactorable rates of 100%, with 10, 5, and 5 successful refactorings, respectively. For TrackMate, the refactored code is mainly concentrated in the package java.fiji.plugin.TrackMate.features, which is responsible for calculating and tracking the relevant features of the objects, covering feature computation, storage management, and feature model construction. The remaining projects, BigStitcher, MetaOmGraph, and ModernJava, had 10, 7, and 5 refactorable instances, with refactorable rates ranging from 70% to 71.43%.
Upon examination of the statistical data, refactoring was found to be infeasible for 17 instances. SystemML had eight non-refactorable instances: five used a customized thread pool, while the other three did not comply with the shutdownOnFailure strategy. Light-4j, TrackMate, and CodeJam completed all their refactorings. MetaOmGraph had three instances that did not comply with the shutdownOnFailure strategy. BigStitcher and ModernJava had four and two instances, respectively, that could not be refactored because they used customized thread pools.

7.3.2. Results of RQ2

It is indeed challenging to measure developers’ efforts accurately. Ideally, we would observe developers during their refactoring processes and determine how much time was saved using ReStruct. However, the amount of time required for refactoring may vary depending on the familiarity of different developers with the unstructured concurrency and structured concurrency codes. We will answer this question by examining the changes in the number of lines of code before and after refactoring, as well as the amount of time taken for the refactoring process.
Table 3 presents a summary of the number of code segments refactored in each project, along with the corresponding changes in the code lines (columns 2–3). SLOC1 represents the number of lines of code before refactoring, while SLOC2 represents the number after. Column 4 records the change in the number of code lines after refactoring. Overall, a total of 82 unstructured code segments were refactored across all the projects, which average 110 K lines of code each, with an average of 38 lines modified per project. A manual examination revealed that these segments were scattered across multiple files. Searching for such a small amount of target code and refactoring it in large Java projects is time consuming and labor intensive. The last column in Table 3 shows the refactoring time for each project; the total time to refactor the seven projects is 191 s, averaging 27.3 s per project. SystemML is the largest project, with over 400 K lines of code, and its refactoring time of 63.8 s is the longest among all the projects. BigStitcher, ModernJava, and CodeJam are relatively small, and their refactoring times are 14.5, 17.8, and 12.3 s, respectively. For the medium-sized projects Light-4j, MetaOmGraph, and TrackMate, with an average of 70 K lines of code, the refactoring times varied from 24.8 to 33.3 s. These times show that ReStruct’s refactoring time is primarily spent on program analysis, so larger projects require more analysis time. Compared to manual refactoring, ReStruct is fully automated; even for a project with more than 400 K lines of code, it completes refactoring in about a minute. These results indicate that ReStruct can save developers significant effort.

7.3.3. Results of RQ3

To answer RQ3, we used a combination of running test cases and manual inspection to verify the correctness of the code refactored by ReStruct. The specific operations were as follows: First, for these projects, we executed all the existing test cases related to the refactored methods; the code refactored by ReStruct passed all of them. Second, we checked whether the refactored code satisfied the consistency check rules; all the refactored code passed every rule. Third, we conducted a detailed manual review of the refactoring results, focusing on the following points: (1) for refactoring type $K_2$, we verified that the try-with-resources structure was correctly inserted; (2) we checked that the structured concurrency APIs were used in a reasonable order; (3) for the set of variables returned by the scope analysis, we checked whether their definitions were correctly moved forward; (4) we checked whether the statements cs and ds were appropriately deleted during the refactoring process.
The inspection results confirmed that all the code refactored using ReStruct maintained correctness, indicating that the ReStruct tool provides efficient refactoring capabilities while preserving the original functionality.

7.3.4. Results of RQ4

To answer RQ4, we selected Light-4j, ModernJava, and CodeJam for performance testing. We recorded the execution times of these methods before and after refactoring and tested their performances for various task quantities.
For the project Light-4j, we selected Http2ClientPoolTest, Http2ClientTest, and Http2ClientPoolIT for testing. Figure 7a illustrates the execution times of these methods. After refactoring, the execution times of the methods in Http2ClientPoolTest were reduced by 9.9%, those in Http2ClientPoolIT by 5.8%, and those in Http2ClientTest by 6.3%. For the ModernJava project, we selected TestMovieDb, LinkScraper, and ThreadedPrimeNumberFinder for testing, with the results shown in Figure 7b. After refactoring, the methods in TestMovieDb showed a 16.6% reduction in execution time, those in LinkScraper a smaller reduction of 5.3%, and those in ThreadedPrimeNumberFinder a reduction of 7.3%. These results show that, across diverse projects, refactoring can improve execution efficiency.
We also evaluated the changes in throughput before and after refactoring for HttpClientTest in Light-4j and SenateEvacuationMT in CodeJam, with task counts varying from 20 to 400, as shown in Figure 8. For HttpClientTest, the throughput before refactoring showed significant variability across task quantities, as shown in Figure 8a. As the number of tasks increased from 20 to 400, performance degraded at an increasing rate, and the overall throughput dropped by 55.4%. When the number of tasks increased beyond 400, out-of-memory errors even occurred. In contrast, the refactored code exhibited minimal fluctuations in throughput, showing only an 11.4% reduction as the number of tasks increased. For SenateEvacuationMT, the throughput variations before and after refactoring are generally consistent across task quantities, as shown in Figure 8b: when the number of tasks increased from 20 to 400, the throughput of both the refactored and non-refactored code decreased by approximately 94%. The reason is that structured concurrency is built on virtual threads, which are lightweight threads mounted on platform threads and can significantly outnumber them. When handling I/O-bound tasks, the CPU often remains idle for extended periods, and the key factors limiting execution efficiency are thread switching and the limited number of threads; because virtual threads are more lightweight than platform threads, they incur less switching overhead and can be created in much larger numbers. For CPU-bound tasks, however, the primary factor affecting execution efficiency is the CPU’s processing capability, so there is little difference in performance before and after refactoring, although structured concurrency can still reduce thread-switching overhead in multithreaded programs.
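The I/O-bound argument above can be checked with a small experiment (assumed environment: JDK 21 or later, where virtual threads are final; the task count and sleep time are arbitrary choices for illustration): 1,000 tasks that each block for 50 ms complete in a small multiple of 50 ms of wall time, because a blocked virtual thread releases its carrier platform thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadDemo {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        int done = 0;
        // One virtual thread per task; blocking sleep releases the carrier thread.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                final int id = i;
                futures.add(exec.submit(() -> {
                    Thread.sleep(50);   // simulated I/O wait
                    return id;
                }));
            }
            for (Future<Integer> f : futures) {
                f.get();
                done++;
            }
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        // 1,000 x 50 ms of blocking completes in far less than 50 s of wall time.
        System.out.println(done + " tasks in " + ms + " ms");
    }
}
```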

7.3.5. Results of RQ5

For the refactoring of concurrent programs, the stability of program execution after refactoring is of great importance. To demonstrate it, we conducted tests with the number of threads set to 1, 2, 4, 8, and 16. The @Threads annotation in the JMH testing tool sets the number of threads used for testing; its default value is 1. By observing the throughput under these various thread configurations, we can comprehensively assess whether the refactored program maintains stable operation and effectively handles concurrent workloads, thereby determining the effectiveness of the refactoring in enhancing program stability.
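JMH drives the annotated benchmark with the configured number of @Threads worker threads and reports operations per second. The following dependency-free sketch imitates that measurement loop; it is not JMH itself, and the busy-work payload and 100 ms window are arbitrary choices.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThroughputProbe {
    // Counts completed busy-work operations within a fixed window, as JMH's
    // throughput mode does for each @Threads configuration.
    static long measure(int threads, long windowMillis) throws InterruptedException {
        AtomicLong ops = new AtomicLong();
        long deadline = System.nanoTime() + windowMillis * 1_000_000L;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                while (System.nanoTime() < deadline) {
                    ops.incrementAndGet();  // stand-in for one benchmark invocation
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(windowMillis + 1_000, TimeUnit.MILLISECONDS);
        return ops.get();
    }

    public static void main(String[] args) throws Exception {
        for (int t : new int[] {1, 2, 4, 8, 16}) {
            System.out.println(t + " threads: " + measure(t, 100) + " ops");
        }
    }
}
```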
Both classes, SenateEvacuationMT and TroubleSortSmallMT, are from the project CodeJam. As shown in Figure 9, the horizontal axes represent the number of test threads. As the number of test threads increased, the throughput of SenateEvacuationMT before refactoring rose from 315 ops/s to 472 ops/s, an improvement of approximately 49.8%, while the throughput after refactoring rose from 387 ops/s to 540 ops/s, an improvement of approximately 39.5%, as shown in Figure 9a. Moreover, for every thread count, the throughput of the refactored code is higher than that of the non-refactored code. The situation for TroubleSortSmallMT is similar, as shown in Figure 9b. This indicates the stability of programs refactored to structured concurrency using ReStruct.

8. Threats to Validity

This section discusses several aspects that may threaten the validity of the experimental results.
  • Because of the uncertainty of the concurrent program execution, slight deviations in the performance testing results cannot be ruled out. To avoid errors, all our experimental results are averaged over 10 runs;
  • In the experiments, in addition to verifying the consistency rules, we verified the correctness of the refactored program by manual inspection. However, manual inspection has its shortcomings, such as the possibility of inaccurate human validation. To reduce the likelihood of inaccuracies, we selected two groups of students to conduct two rounds of checks, ensuring the correctness of the results as much as possible;
  • We selected only seven applications to evaluate ReStruct, which may not represent all programs and does not indicate that ReStruct can successfully refactor all applications. However, to mitigate this threat, the selected applications cover multiple domains, including databases, communication, big data, and imaging, making them quite representative. In future work, we will test ReStruct with more applications.

9. Conclusions

This paper presents an automatic refactoring method to convert unstructured concurrency to structured concurrency. Following this method, an automatic refactoring tool called ReStruct was implemented as an Eclipse plugin, and the proposed method was validated in seven real-world projects. A total of 82 instances of unstructured concurrency were refactored during the experiments, with an average refactoring time of 27.3 s per program. The experimental results indicate that this tool effectively facilitates refactoring from unstructured concurrency to structured concurrency, significantly improving the efficiency of refactoring compared to that of manual refactoring. Our future work will mainly include the following aspects: First, as structured concurrency has different strategies, we will discover more concurrent structures that are suitable for structured concurrency. Second, as structured concurrency has been introduced to many high-level programming languages, we will attempt to apply our method to more languages.

Author Contributions

Y.Z.: Proposing the idea, Writing—review and editing; G.S.: Conducting experimentation, Writing—original draft, review and editing; L.Z.: Experimental analysis, Writing—review and editing; M.Z.: Writing—review and editing. K.Z.: Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Hebei Provincial Department of Science and Technology under Grant No. 246Z0109G.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors also gratefully acknowledge the insightful comments and suggestions of the reviewers, which have improved the presentation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bateman, R.P.A. Structured Concurrency. 2021. Available online: https://openjdk.org/jeps/428 (accessed on 16 September 2022).
  2. Dig, D. Refactoring for Asynchronous Execution on Mobile Devices. IEEE Softw. 2015, 32, 52–61. [Google Scholar] [CrossRef]
  3. Tramontana, E. An Automatic Transformer from Sequential to Parallel Java Code. Future Internet 2023, 15, 306. [Google Scholar] [CrossRef]
  4. Midolo, E. Refactoring Java Loops to Streams Automatically. In Proceedings of the 2021 4th International Conference on Computer Science and Software Engineering (CSSE 2021), Singapore, 22–24 October 2021; pp. 135–139. [Google Scholar] [CrossRef]
  5. Radoi, C.; Tarce, M.; Minea, M.; Johnson, R. ReLooper: Refactoring for Loop Parallelism. In Proceedings of the 24th ACM SIGPLAN Conference Companion on Object Oriented Programming Systems Languages and Applications, Orlando, FL, USA, 25–29 October 2009. [Google Scholar]
  6. Schäfer, M.; Sridharan, M.; Dolby, J.; Tip, F. Refactoring Java programs for flexible locking. In Proceedings of the 33rd International Conference on Software Engineering, Honolulu, HI, USA, 21–28 May 2011; ACM: New York, NY, USA, 2011; pp. 71–80. [Google Scholar] [CrossRef]
  7. Zhang, Y.; Shao, S.; Zhai, J.; Ma, S. FineLock: Automatically refactoring coarse-grained locks into fine-grained locks. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual Event, 18–22 July 2020; ACM: New York, NY, USA, 2020; pp. 565–568. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Dong, S.; Zhang, X.; Liu, H.; Zhang, D. Automated Refactoring for Stampedlock. IEEE Access 2019, 7, 104900–104911. [Google Scholar] [CrossRef]
  9. Eclipse-jdt. Eclipse Development Platform Extension Toolkit. 2022. Available online: https://github.com/eclipse-jdt (accessed on 3 October 2023).
  10. Apache. Flexible, Scalable Machine Learning Platform (SystemML). 2023. Available online: https://systemds.apache.org/download.html (accessed on 15 October 2023).
  11. networknt. High-Performance Java Framework for Building Microservices (Light-4j). 2023. Available online: https://github.com/networknt/light-4j (accessed on 15 October 2023).
  12. Smith. Notes on Structured Concurrency, 2018. 2018. Available online: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ (accessed on 10 August 2023).
  13. Stadler, L.; Würthinger, T.; Wimmer, C. Efficient coroutines for the Java platform. In Proceedings of the 8th International Conference on the Principles and Practice of Programming in Java, Vienna, Austria, 15–17 September 2010; pp. 20–28. [Google Scholar] [CrossRef]
  14. Beronić, D.; Pufek, P.; Mihaljević, B.; Radovan, A. On Analyzing Virtual Threads–a Structured Concurrency Model for Scalable Applications on the JVM. In Proceedings of the 2021 44th International Convention on Information, Communication and Electronic Technology (MIPRO), Opatija, Croatia, 27 September–1 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1684–1689. [Google Scholar]
  15. Chen, Y.A.; You, Y.P. Structured Concurrency: A Review. In Proceedings of the Workshop Proceedings of the 51st International Conference on Parallel Processing, Bordeaux, France, 29 August–1 September 2022; pp. 1–8. [Google Scholar]
  16. Pufek, P.; Beronic, D.; Mihaljevic, B.; Radovan, A. Achieving Efficient Structured Concurrency through Lightweight Fibers in Java Virtual Machine. In Proceedings of the 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO), Opatija, Croatia, 28 September–2 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1752–1757. [Google Scholar] [CrossRef]
  17. Chirila, C.B.; Sora, I. Java Single vs. Platform vs. Virtual Threads Runtime Performance Assessment in the Context of Key Class Detection. In Proceedings of the 2024 IEEE 18th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 23–25 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 000273–000278. [Google Scholar]
  18. Beronic, D.; Modric, L.; Mihaljevic, B.; Radovan, A. Comparison of Structured Concurrency Constructs in Java and Kotlin—Virtual Threads and Coroutines. In Proceedings of the 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO), Opatija, Croatia, 23–27 May 2022; pp. 1466–1471. [Google Scholar] [CrossRef]
  19. Rosà, A.; Basso, M.; Bohnhoff, L.; Binder, W. Automated Runtime Transition between Virtual and Platform Threads in the Java Virtual Machine. In Proceedings of the 2023 30th Asia-Pacific Software Engineering Conference (APSEC), Seoul, Republic of Korea, 4–7 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 607–611. [Google Scholar]
  20. Franklin, L.; Gyori, A.; Lahoda, J.; Dig, D. LambdaFicator: From imperative to functional programming through automated refactoring. In Proceedings of the 2013 35th International Conference on Software Engineering (ICSE), San Francisco, CA, USA, 18–26 May 2013; pp. 1287–1290. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Shao, S.; Liu, H.; Qiu, J.; Zhang, D.; Zhang, G. Refactoring Java programs for customizable locks based on bytecode transformation. IEEE Access 2019, 7, 66292–66303. [Google Scholar]
  22. Fernandes, S.; Aguiar, A.; Restivo, A. LiveRef: A Tool for Live Refactoring Java Code. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, Rochester, MI, USA, 10–14 October 2022; pp. 1–4. [Google Scholar] [CrossRef]
  23. Fernandes, S. Towards a Live Environment for Code Refactoring. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, Rochester, MI, USA, 10–14 October 2022; pp. 1–5. [Google Scholar] [CrossRef]
  24. JetBrains. Java Development Kit. 2023. Available online: https://www.jetbrains.com/idea/ (accessed on 8 May 2023).
  25. OpenJDK. Jmh, a Tool for Java Program Testing. 2021. Available online: https://jmeter.apache.org (accessed on 8 May 2023).
  26. Zhang, C. FlexSync: An aspect-oriented approach to Java synchronization. In Proceedings of the 2009 IEEE 31st International Conference on Software Engineering, Vancouver, BC, Canada, 16–24 May 2009; pp. 375–385. [Google Scholar] [CrossRef]
  27. Urmi21. Open Source Data Analysis and Visualization Tools (MetaOmGraph). 2020. Available online: https://github.com/urmi-21/MetaOmGraph (accessed on 15 October 2023).
  28. ThoughtFlow. Combines New Java Features into Practical Applications (ModernJava). 2021. Available online: https://github.com/ThoughtFlow/ModernJava (accessed on 15 October 2023).
  29. PreibischLab. Biological Microscopy Image Processing (Bigstitcher). 2023. Available online: https://imagej.net/plugins/bigstitcher/ (accessed on 15 October 2023).
  30. TrackMate. Biomedical Imaging. 2023. Available online: https://github.com/trackmate-sc/TrackMate (accessed on 15 October 2023).
  31. Salvo. Solutions to Google Code Problems. 2021. Available online: https://github.com/salvois/codejam (accessed on 15 October 2023).
Figure 1. Motivating example.
Applsci 15 02407 g001
Figure 2. Overview of ReStruct.
Applsci 15 02407 g002
Figure 3. Definition reference diagram.
Applsci 15 02407 g003
Figure 4. The types of methods to be refactored.
Applsci 15 02407 g004
Figure 5. Check task result acquisition.
Applsci 15 02407 g005
Figure 6. Refactoring GUI.
Applsci 15 02407 g006
Figure 7. Comparison of execution times before and after refactoring.
Applsci 15 02407 g007
Figure 8. The throughput varies with the number of tasks.
Applsci 15 02407 g008
Figure 9. The throughput with the number of test threads.
Applsci 15 02407 g009
Table 1. Information on these programs.
Project      Version  Non-Structured Concurrency  LOC
SystemML     3.2.0    48                          412,560
BigStitcher  2.2.1    14                          15,584
Light-4j     2.1.36   10                          74,371
MetaOmGraph  1.8.1    10                          80,177
ModernJava   1.0.0    7                           23,045
TrackMate    7.10.0   5                           58,873
CodeJam      1.0.0    5                           11,106
Table 2. Refactoring results of ReStruct (1).
Project      Total  Refactored  NSF  CTP  Refactorable (%)
SystemML     48     40          3    5    83.33
BigStitcher  14     10          0    4    71.43
Light-4j     10     10          0    0    100
MetaOmGraph  10     7           3    0    70
ModernJava   7      5           0    2    71.43
TrackMate    5      5           0    0    100
CodeJam      5      5           0    0    100
SUM/AVG      99     82          6    11   82.82
Table 3. Refactoring results of ReStruct (2).
Project      SLOC1    SLOC2    Changed  Time (s)
SystemML     412,560  412,661  101      63.8
BigStitcher  15,584   15,612   28       14.5
Light-4j     74,371   74,406   35       31.5
MetaOmGraph  80,177   80,198   21       33.3
ModernJava   23,045   23,063   18       17.8
TrackMate    58,873   58,898   25       24.8
CodeJam      11,106   11,130   24       12.3
SUM/AVG      675,716  675,968  252      191/27.3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, Y.; Shen, G.; Zhang, L.; Zheng, M.; Zheng, K. Refactoring for Java-Structured Concurrency. Appl. Sci. 2025, 15, 2407. https://doi.org/10.3390/app15052407

