Communication

Improvement in Runtime Speed for Frequency Domain Soil–Structure Interaction Analysis Using a Coarray Parallel Processing Technique

1 Department of Civil Engineering, Chonnam National University, Gwangju 61186, Republic of Korea
2 Department of Architecture and Civil Engineering, Graduate School, Chonnam National University, Gwangju 61186, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2356; https://doi.org/10.3390/app13042356
Submission received: 2 January 2023 / Revised: 4 February 2023 / Accepted: 10 February 2023 / Published: 11 February 2023
(This article belongs to the Special Issue Computational Mechanics in Seismic Wave Propagation Analyses)

Abstract

In this paper, we propose a new algorithm for introducing Coarray Fortran (CAF) into a program for dynamic frequency domain soil–structure interaction analysis. The algorithm was implemented in KIESSI-3D, a frequency domain soil–structure interaction analysis code based on finite and infinite element techniques, and the analysis speed was evaluated as a function of the number of images (or coarrays) and the number of cores per image. For the performance evaluation, we used full-scale nuclear power plant building examples with both a shallow foundation model and a deep foundation model. Owing to the new algorithm and CAF, the new KIESSI-3D improved the analysis speed, compared with the existing KIESSI-3D, by an average of 2.78 times for the shallow foundation problem and 2.69 times for the deep foundation problem. Furthermore, as the number of cores and the internal memory size of the computer systems increased, the efficiency of parallel processing also increased.

1. Introduction

Since large interaction effects between structures and soil appear in heavy structures such as nuclear power plants and LNG storage tanks, the dynamic soil–structure interaction (SSI) is invariably considered in seismic design [1]. According to ASCE 4, SSI effects can be neglected only for a hard rock condition with a shear wave velocity greater than or equal to 2550 m/s (8500 ft/s), which is a rare case in practice. The continuously increasing need for seismic designs that consider SSI can therefore be met using frequency domain and time domain analysis methods.
The frequency domain analysis approach approximates both the structure and the soil as linear viscoelastic bodies, performs independent analyses at discrete frequencies, and then converts the frequency response into the time domain to calculate the structure and soil responses [2]. This method has been combined with various other approaches, including the finite element, boundary element, boundary integral [3], and flexible volume methods [4], and has been used effectively in practice since the 1970s. SASSI (System for Analysis of Soil–Structure Interaction) is used worldwide as representative commercial software for frequency domain analysis [4,5,6,7].
Although time domain SSI analysis is generally more expensive than frequency domain analysis, it can be effective for analyzing nonlinear response histories, which cannot be handled accurately using frequency domain analysis. Currently, time domain analysis is generally carried out using finite elements for the nonlinear structure as well as near-field and far-field soil models, in which boundary condition elements such as viscous boundaries, perfectly matched discrete layers, and perfectly matched layers are used to simulate the effects of the infinite soil domain [8,9,10]. General-purpose finite element analysis software is effective at describing the nonlinear behavior of structures and soil. However, numerical models for time domain SSI analysis are typically verified by comparing their time-domain solutions for a small seismic input with those obtained using the corresponding frequency domain technique [1,11]. This is one of the reasons why more efficient frequency domain technologies are being developed.
Moreover, the development of multi-core CPUs and parallel programming technologies, such as open multi-processing (OpenMP), the message passing interface (MPI), coarrays, and the Fortran compiler's DO CONCURRENT construct, has enabled cost-effective analyses of more complex models with shorter computation times [12,13,14]. Table 1 compares the advantages and disadvantages of these methods [15]. In OpenMP, parallelization is performed using a shared memory model in which the computer resources (cores and internal memory) perform operations simultaneously. MPI is considered the best parallelization technique in terms of speedup and memory management because it has been continuously upgraded since its inception; it allows multiple computers to operate together as a single system, with each computer performing tasks independently and sharing data through communication [13]. However, MPI is rather inefficient to adopt because constructs outside standard Fortran are required for communication between processes, so the entire source code must be modified to enable parallelization. The DO CONCURRENT construct has been available since Fortran 2008 as a simple replacement for the DO loop construct to express loop-level parallelism [16,17], with the intent of enabling more effective automatic parallel execution; however, its performance has been found to be poor [15,18].
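As a brief illustration of the loop-level constructs mentioned above, the following minimal sketch shows the same array update written once with an OpenMP directive and once with DO CONCURRENT; the array names and loop body are hypothetical and are not taken from KIESSI-3D.

    program loop_parallel_demo
       implicit none
       integer, parameter :: n = 1000000
       real    :: x(n), y(n), a
       integer :: i

       a = 2.0
       x = 1.0
       y = 0.0

       ! OpenMP: loop-level parallelism expressed by a compiler directive (shared memory)
       !$omp parallel do
       do i = 1, n
          y(i) = a * x(i) + y(i)
       end do
       !$omp end parallel do

       ! Fortran 2008: the same loop written with DO CONCURRENT, which asserts
       ! that the iterations are independent and may be executed in any order
       do concurrent (i = 1:n)
          y(i) = a * x(i) + y(i)
       end do

       print *, 'y(1) =', y(1)
    end program loop_parallel_demo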
The coarray method avoids the complexity of MPI and enables gradual parallelization of existing source code. It has been part of the Fortran standard since Fortran 2008 [16,17]. Coarray Fortran (CAF) has been widely applied in recent years because it facilitates the parallelization of existing source code [19,20]. Figure 1 shows an example of the OpenMP and CAF parallel processing methods using 12 cores. In the OpenMP construct, all the cores and memory are used for iterative tasks such as DO loops, whereas in the CAF method the user divides as many computer resources as desired into images, which are independently executing units, and the program runs independently on each image. Data transmission and reception between the images are expressed with simple syntax [21].
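To make the image concept concrete, the following minimal coarray sketch (a generic example, not code from KIESSI-3D) shows how each image works on its own copy of a variable and how image 1 reads the other images' copies using the square-bracket syntax:

    program caf_demo
       implicit none
       integer :: total[*]          ! a scalar coarray: one copy per image
       integer :: me, np, i

       me = this_image()            ! index of the executing image (1-based)
       np = num_images()            ! number of images launched at run time

       total = me                   ! each image writes its own copy
       sync all                     ! make all writes visible before reading

       if (me == 1) then
          ! image 1 gathers the values held by the other images
          do i = 2, np
             total = total + total[i]
          end do
          print *, 'sum over', np, 'images =', total
       end if
    end program caf_demo

With an Intel Fortran or gfortran/OpenCoarrays build, the same executable is simply launched with the desired number of images, and each image runs the program independently.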
In this study, CAF was applied to the KIESSI-3D program and the source code was optimized so as to reduce the analysis runtime. KIESSI-3D is a frequency domain SSI analysis program based on the finite element and dynamic infinite element methods, as illustrated in Figure 2 [22,23,24,25,26]. To verify the effectiveness of the developed parallel processing technique, the analysis speed was evaluated using computer systems of various specifications. As an analysis example for performance evaluation, a full-scale nuclear power plant containment building was used with both shallow and deep foundation models.

2. Coarray Parallel Processing Technique

KIESSI-3D currently uses the OpenMP method and the Intel MKL PARDISO direct sparse solver (2019 version). As illustrated in Figure 3, it computes, for each frequency, the mass and stiffness matrices of the finite elements, the mass and stiffness matrices of the infinite elements, and the effective seismic load, and then performs the frequency domain analysis. After all the analyses have been carried out, the results for all frequency points are stored on the hard drive. Because the code was parallelized using OpenMP, the analysis speed varies with the number of cores. However, since the speedup achieved by OpenMP is not linearly proportional to the number of cores, a new parallel processing method was needed, and CAF was therefore adopted in this study. The source code was optimized to carry out an efficient analysis using this method. In addition, the analysis procedure was reorganized and the program was modified so that the CAF functions could be used to full effect. A conceptual analysis diagram of our proposed new version of KIESSI-3D incorporating CAF is shown in Figure 4.
The modified KIESSI-3D consists of OpenMP parts running on a single image and two OpenMP+CAF parts running on multiple images, as shown in Figure 4. The OpenMP parts prepare, on a single image, the information required for all analyses and distribute it to the other images. Each image then acquires, in parallel, the data for the system equations corresponding to its pre-allocated frequency points and analyzes them using OpenMP. The proposed technique can be summarized as follows (a minimal coarray sketch of the frequency-point partitioning is given after the step list):
  • Step 1. Assign frequency points to be analyzed in each image.
  • Step 2. Run all images simultaneously to construct element matrices of infinite elements and effective seismic loads (OpenMP+CAF part).
  • Step 3. Construct system matrices by running a single image and save them to the hard drive frequency by frequency.
  • Step 4. Run all images simultaneously for the pre-assigned frequency points to solve the matrix equations and save the solutions to the hard drive (OpenMP+CAF part). Within each image, OpenMP is used for the solution.
  • Step 5. Construct frequency domain responses by running a single image, which is done by collecting and interpolating solutions at the frequency points for further post-processing.
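The sketch below illustrates Steps 1, 4, and 5 with a simple cyclic assignment of frequency points to images; the routine name and the cyclic assignment rule are illustrative assumptions, since the actual KIESSI-3D assignment scheme and solver calls are not reproduced here.

    program freq_partition_demo
       implicit none
       integer, parameter :: nf = 45            ! number of frequency points (NF)
       integer :: me, np, k
       complex :: response(nf)[*]               ! coarray: one copy of the array per image

       me = this_image()
       np = num_images()
       response = (0.0, 0.0)

       ! Step 1: cyclic assignment of frequency points to images (illustrative rule)
       ! Step 4: every image solves only its own frequency points
       do k = me, nf, np
          response(k) = solve_one_frequency(k)  ! stands in for the per-frequency SSI solve
       end do
       sync all                                 ! all solutions finished before gathering

       ! Step 5: image 1 gathers the pieces computed by the other images
       if (me == 1) then
          do k = 1, nf
             response(k) = response(k)[mod(k - 1, np) + 1]   ! owner image of point k
          end do
       end if

    contains

       complex function solve_one_frequency(k) result(u)
          integer, intent(in) :: k
          u = cmplx(k, -k)                      ! dummy value in place of a real solution
       end function solve_one_frequency

    end program freq_partition_demo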

3. Numerical Evaluation

To evaluate the applicability and performance of KIESSI-3D with CAF incorporated, two SSI analysis problems representative of real nuclear power plant structures, one supported by a shallow foundation and one by a deep foundation, were selected, as shown in Figure 5. In addition, the eight computer systems listed in Table 2 were used to cover various computer environments. Two are single-CPU systems and six are dual-CPU systems. Analysis speeds were compared according to the number of images (or number of coarrays) and the number of cores used in each image. In Figure 6 and Figure 7, the number of images and cores per image is labeled as ca{n1}nc{n2}, where n1 is the number of images, n2 is the number of cores per image, and the total number of cores used for the analysis is n1 × n2. For instance, for ca2nc4 the total number of cores used for analysis is 8. It should be noted here that the number of single-frequency solution processes in each coarray is the number of frequency points used in these analyses (NF in Figure 3 and Figure 4) divided by the number of images. Forty-five frequency points were used in the soil–structure interaction analyses (i.e., NF = 45). Thus, the number of single-frequency solution processes in each image is 22 or 23 for ca2nc{n2}, 11 or 12 for ca4nc{n2}, and so on.
The speedup ratios corresponding to the cores used in the soil–structure interaction analysis for the shallow foundation are summarized in Figure 6. As shown in Figure 6a, the case using only one image (ca1nc{n2}) corresponds to using only the OpenMP functions, without the CAF functions. Figure 6b shows that for the six dual-CPU systems, OpenMP performance does not improve, and in fact degrades, when the number of cores used exceeds the number of cores of a single CPU. This degradation can be attributed to factors such as the communication overhead between the CPUs. In addition, scalability, defined as the ability to retain performance levels when additional processors are added, decreases when the number of cores per image exceeds four, even in the range where performance increases with the number of cores. Based on these results, optimal performance can be expected when the number of cores per image does not exceed four.
The numbers of images and cores per image depend on the resources of the computer system (number of cores, RAM capacity) as well as the size of the problem (RAM capacity necessary for one image). As an example, in the case of a 12-core PC, various image–core number combinations (ca2nc6, ca4nc3, ca6nc2, ca12nc1, etc.) are possible. Notably, since the size of the internal memory (RAM) used by each image is the same, if the number of images increases then the amount of RAM used will also increase. That is, the number of images is determined by the size of the problem (the size of RAM used by one image) and total size of the RAM available. As an example, if one image uses 40 GB of RAM, and the total RAM of the computer system is 192 GB, then the maximum number of usable images is four.
When CAF is applied, the extent of performance improvement for the shallow foundation example depends on the number of images and cores per image, as shown in Figure 6. The extent of performance improvement for the deep foundation example was similar to that for the shallow foundation example, as shown in Figure 7.
In general, the CAF+OpenMP parallelization performance improved as the number of images increased. However, for the Intel CPU systems, the OpenMP+CAF performance converged when the number of cores per image did not exceed four. Moreover, when the optimal image–core combination was used for each of the six dual-CPU computer systems listed in Table 2, the analysis speed for the shallow foundation example improved as the number of cores of the computer system increased, as shown in Figure 8a. In addition, with the implementation of the OpenMP+CAF algorithm shown in Figure 4, the new KIESSI-3D code was between 1.74 and 4.65 times faster than the existing code, which used only OpenMP. For the deep foundation example, the new KIESSI-3D code was between 1.85 and 4.89 times faster than the existing code, as shown in Figure 8b.
The performance improvement measured by the speedup ratio (S) in parallel processing programming, as characterized by the parallelization ratio (P) of codes, can be expressed simply by Amdahl’s law as follows [27]:
\[ S = \frac{1}{(1 - P) + P/N} \qquad \text{or} \qquad P = \frac{1 - 1/S}{1 - 1/N} \tag{1} \]
where N is the number of CPU cores.
The extent of performance improvement according to the number of cores and the parallelization ratio in Equation (1) is illustrated in Figure 9. Even if the same number of cores is used, the efficiency of the computation decreases if the parallelization ratio is low, indicating that the parallelization ratio is an important indicator in parallel processing programming. The parallelization ratio P achieved by the OpenMP+CAF algorithm of Figure 4 was calculated from the speedup ratio S shown in Figure 8 and is summarized in Figure 10. This figure shows that the parallelization ratio increases as the number of cores increases. Specifically, when the number of cores is 64, the maximum parallelization ratio is 0.988, which is close to the maximum value of 1. These results confirm the effectiveness of the new KIESSI-3D source code for multi-core computer systems.
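As a quick numerical check of Equation (1) (an illustrative calculation, not an additional measurement), substituting the largest observed ratio P = 0.988 with N = 64 cores gives a theoretical speedup over a fully serial run of

\[ S = \frac{1}{(1 - 0.988) + 0.988/64} \approx 36.4, \]

whereas the same ratio on N = 8 cores gives only S ≈ 7.4, showing how strongly the remaining serial fraction 1 − P limits the benefit of adding cores.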

4. Conclusions

In this study, a new algorithm for introducing a coarray Fortran (CAF) parallel processing scheme into an existing computer program parallelized only by OpenMP for dynamic frequency-domain soil–structure interaction analysis was proposed and implemented in KIESSI-3D. The speed-enhancement performance of the algorithm according to the number of images (or coarrays) and the number of cores per image was evaluated. As analysis examples for performance evaluation, a full-scale nuclear power plant containment building with both shallow and deep foundation models was used. The results of the numerical-example analyses showed that the new KIESSI-3D improved the analysis rate by an average of 2.78 times for the shallow foundation problem and 2.69 times for the deep foundation problem compared with the existing KIESSI-3D, thanks to the new algorithm and the CAF incorporation. In addition, as the number of cores and the size of the internal memory of the computer system increased, the parallel processing efficiency increased. Furthermore, when the number of cores was 64, the parallelization ratio of the source code was shown to be 0.977 to 0.988, which is close to the theoretical maximum value of 1. Therefore, it can be confirmed that the new source code is an effective modification for a multi-core computer system.

Author Contributions

Conceptualization, J.-M.K.; methodology, J.-M.K.; software, J.-M.K.; formal analysis, J.-S.L. and H.-J.L.; data curation, J.-S.L. and H.-J.L.; writing-original draft preparation, J.-S.L.; writing-review and editing, J.-M.K. and H.-J.L.; supervision, J.-M.K.; funding acquisition, J.-M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Korea Energy Technology Evaluation and Planning (KETEP) grant funded by the Ministry of Trade, Industry and Energy (No. 20161520101130) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (NRF-2020R1I1A3069396).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Improvement in Runtime Speed for Frequency Domain Soil–Structure Interaction Analysis Using a Coarray Parallel Processing Technique".

References

  1. ASCE/SEI 4-16; Seismic Analysis of Safety-Related Nuclear Structures. American Society of Civil Engineers: Reston, VA, USA, 2017.
  2. Wolf, J.P. Dynamic Soil Structure Interaction Analysis; Prentice-Hall Inc.: Hoboken, NJ, USA, 1985.
  3. Wong, H.; Luco, J. Dynamic interaction between rigid foundations in a layered half-space. Soil Dyn. Earthq. Eng. 1986, 5, 149–158.
  4. Lysmer, J.; Tabatabaie, M.; Tajirian, F.; Vahdani, S.; Ostadan, F. SASSI: A System for Analysis of Soil-Structure Interaction; Department of Civil Engineering, University of California: Berkeley, CA, USA, 1981.
  5. Bechtel National Inc. User's Manual for SASSI2010, Version 1.0; Bechtel National Inc.: Reston, VA, USA, 2011.
  6. Ghiocel, D.M. User's Manual for ACS SASSI—An Advanced Computational Software for 3D Dynamic Analysis Including Soil-Structure Interaction, Version 3.0; Ghiocel Predictive Technologies Inc.: Pittsford, NY, USA, 2016.
  7. MTR & Associates. System for Analysis of Soil-Structure Interaction—User's Manual; MTR & Associates: Lafayette, CA, USA, 2016.
  8. Berenger, J.P. A perfectly matched layer for the absorption of electromagnetic waves. J. Comput. Phys. 1994, 114, 185–200.
  9. Basu, U.; Chopra, A.K. Perfectly matched layers for transient elastodynamics of unbounded domains. Int. J. Numer. Methods Eng. 2004, 59, 1039–1074.
  10. Guddati, M.N.; Tassoulas, J.L. Continued-fraction absorbing boundary conditions for the wave equation. J. Comput. Acoust. 2000, 8, 139–156.
  11. Bolisetti, C.; Whittaker, A.S.; Coleman, J.L. Linear and nonlinear soil-structure interaction analysis of buildings and safety-related nuclear structures. Soil Dyn. Earthq. Eng. 2018, 107, 218–233.
  12. OpenMP. OpenMP Application Programming Interface, Version 5.1; OpenMP: Antwerp, Belgium, 2020.
  13. Message Passing Interface Forum. MPI: A Message-Passing Interface Standard, Version 3.1; Message Passing Interface Forum: Dallas, TX, USA, 2015.
  14. Zhao, B.; Liu, Y.; Goh, S.; Lee, F. Parallel finite element analysis of seismic soil structure interaction using a PC cluster. Comput. Geotech. 2016, 80, 167–177.
  15. Shterenlikht, A. Parallel Programming with Fortran 2008 and 2018 Coarrays; Mechanical Engineering Department, The University of Bristol: Bristol, UK, 2018.
  16. Numrich, R.W.; Reid, J. Co-array Fortran for parallel programming. ACM SIGPLAN Fortran Forum 1998, 17, 1–31.
  17. Reid, J.K.; Numrich, R.W. Co-arrays in the next Fortran standard. J. Sci. Program. 2007, 15, 9–26.
  18. DO CONCURRENT Isn't Necessarily Concurrent; LLVM Flang Documentation. Available online: https://flang.llvm.org/docs/DoConcurrent.html (accessed on 2 February 2023).
  19. Sharma, A.; Moulitsas, I. MPI to Coarray Fortran: Experiences with a CFD solver for unstructured meshes. Sci. Program. 2017, 2017, 1–12.
  20. Tracy, F.T.; Oppe, T.C.; Corcoran, M.K. A comparison of MPI and co-array Fortran for large finite element variably saturated flow simulations. Scalable Comput. Pract. Exp. 2018, 19, 423–432.
  21. Metcalf, M.; Reid, J.; Cohen, M. Modern Fortran Explained: Incorporating Fortran 2018; Oxford University Press: Oxford, UK, 2018.
  22. Yang, S.-C.; Yun, C.-B. Axisymmetric infinite elements for soil-structure interaction analysis. Eng. Struct. 1992, 14, 361–370.
  23. Yun, C.-B.; Kim, J.-M.; Hyun, C.-H. Axisymmetric elastodynamic infinite elements for multi-layered half-space. Int. J. Numer. Methods Eng. 1995, 38, 3723–3743.
  24. Yun, C.-B.; Chang, S.-H.; Seo, C.-G.; Kim, J.-M. Dynamic infinite elements for soil-structure interaction analysis in a layered soil medium. Int. J. Struct. Stab. Dyn. 2007, 7, 693–713.
  25. Ryu, J.-S.; Seo, C.-G.; Kim, J.-M.; Yun, C.-B. Seismic response analysis of soil–structure interactive system using a coupled three-dimensional FE–IE method. Nucl. Eng. Des. 2010, 240, 1949–1966.
  26. Seo, C.-G.; Kim, J.-M. KIESSI program for 3-D soil-structure interaction analysis. Comput. Struct. Eng. 2012, 25, 77–83.
  27. Amdahl, G.M. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the Spring Joint Computer Conference, Atlantic City, NJ, USA, 18–20 April 1967; pp. 483–485.
Figure 1. Parallel processing example using OpenMP and CAF: (a) Shared memory of OpenMP; (b) Distributed memory of CAF.
Figure 2. Modeling of soil-structure interaction system using KIESSI-3D program utilizing finite and dynamic infinite elements: (a) Modeling concept; (b) Modeling examples.
Figure 3. Conceptual analysis diagram of the existing KIESSI-3D.
Figure 4. Conceptual diagram of the proposed KIESSI-3D with CAF incorporated (e.g., with a three-coarray, 12-core computer system): orange boxes represent accelerated parallelization by CAF combined with OpenMP; gray boxes represent parallelization using OpenMP only. NF denotes the number of frequency points to be analyzed.
Figure 5. Numerical examples for evaluating the performance of the proposed CAF parallelization technique for frequency domain soil–structure interaction analysis: (a) Shallow foundation (number of nodes = 22,690); (b) Deep foundation (number of nodes = 45,137).
Figure 6. Speedup ratios for OpenMP and OpenMP+CAF for the shallow foundation example using various multi-core computer systems (abbreviation: ca{n1}nc{n2}; n1 = number of coarrays; n2 = number of cores/image): (a) Single-CPU computer systems; (b) Dual-CPU computer systems.
Figure 7. Speedup ratios of OpenMP and OpenMP+CAF for the deep foundation example using various multi-core computer systems (abbreviation: ca{n1}nc{n2}; n1 = number of coarrays; n2 = number of cores/image): (a) Single-CPU computer systems; (b) Dual-CPU computer systems.
Figure 8. Maximum speedup ratios for OpenMP+CAF and OpenMP using dual-CPU computer systems: (a) Shallow foundation example; (b) Deep foundation example.
Figure 9. Speedup ratio S according to Amdahl’s law.
Figure 10. Parallelization ratio P achieved by the OpenMP+CAF version using dual-CPU computer systems: (a) Shallow foundation example; (b) Deep foundation example.
Table 1. Comparison of features of the parallel processing methods [15].

Parallel Method/Language | Coarray Fortran (CAF) | DO CONCURRENT | OpenMP  | MPI
Fortran standard         | Yes                   | Yes           | No      | No
Shared memory            | Yes                   | Possibly      | Yes     | Yes
Distributed memory       | Yes                   | Possibly      | No      | Yes
Ease of use              | Easy                  | Easy          | Easy    | Hard
Flexibility              | High                  | Poor          | Limited | High
Incremental improvement  | Yes                   | Yes           | Yes     | No
Performance              | High                  | Poor          | Limited | High
Table 2. Specifications of the computer systems used in this study.

PC Identification | Number of CPUs | Cores/CPU | CPU Model (Clock Speed)          | RAM (GB) | Runtime, Shallow Foundation (min) * | Runtime, Deep Foundation (min) *
PC-1              | 1              | 8         | Intel Xeon W-2245 (3.9 GHz)      | 256      | 11.7                                | 31.9
PC-2              | 1              | 16        | Intel i9-9960X (3.1 GHz)         | 128      | 12.7                                | 34.6
PC-3              | 2              | 6         | Intel Xeon X5690 (3.46 GHz)      | 64       | 47.6                                | 141.4
PC-4              | 2              | 8         | Intel Xeon E5-2687w (3.1 GHz)    | 192      | 19.2                                | 58.5
PC-5              | 2              | 12        | Intel Xeon E5-2687w v4 (3.0 GHz) | 192      | 20.1                                | 54.0
PC-6              | 2              | 16        | Intel Xeon Gold 6142 (2.6 GHz)   | 384      | 13.6                                | 37.0
PC-7              | 2              | 24        | Intel Xeon Gold 6248R (3.0 GHz)  | 768      | 12.4                                | 34.4
PC-8              | 2              | 32        | AMD EPYC 7601 (2.2 GHz)          | 384      | 62.2                                | 121.3

* Runtime for 45 frequencies with the ca1nc1 option.