Article

Developing a Container Ship Loading-Planning Program Using Reinforcement Learning

by JaeHyeok Cho and NamKug Ku *
Department of Marine Design Convergence Engineering, Pukyong National University, Busan 48513, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(10), 1832; https://doi.org/10.3390/jmse12101832
Submission received: 13 September 2024 / Revised: 2 October 2024 / Accepted: 11 October 2024 / Published: 14 October 2024
(This article belongs to the Section Ocean Engineering)

Abstract

This study presents an optimized container-stowage plan using reinforcement learning to tackle the complex logistical challenges of maritime shipping. Traditional stowage planning often relies on manual processes that must account for factors such as container weight, unloading order, and vessel balance, consuming significant time and resources. To address these inefficiencies, we developed a two-phase stowage-planning method: Phase 1 selects a bay using a Proximal Policy Optimization (PPO) algorithm, while Phase 2 determines row and tier placement within that bay. The proposed model was evaluated against traditional methods, demonstrating that the PPO algorithm produces more efficient loading plans and converges faster than Deep Q-Learning (DQN). The model also minimized rehandling and maintained an even weight distribution across the vessel, ensuring operational safety and stability. This approach shows strong potential for enhancing stowage efficiency and can be applied to real-world shipping scenarios to improve productivity. Future work will incorporate additional factors, such as container size, type, and cargo fragility, to further improve the robustness and adaptability of the stowage-planning system and its capacity to handle the complexities of modern maritime logistics.
Keywords: stowage plan; reinforcement learning; rehandling; Proximal Policy Optimization
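The two-phase decision loop described in the abstract can be illustrated with a toy stowage environment. The sketch below is a hypothetical reconstruction, not the authors' implementation: the class name, grid dimensions, and reward weights are all assumptions, and a trained PPO agent would replace the simple greedy rollout used here to exercise the environment. It shows the two penalties the abstract highlights: rehandling (a container that unloads later stacked on one that unloads earlier) and lateral weight imbalance.

```python
# Illustrative sketch only (hypothetical names/parameters, not the paper's code).
import random

class StowageEnv:
    def __init__(self, n_bays=4, n_rows=3, n_tiers=3, n_containers=20, seed=0):
        rng = random.Random(seed)
        self.n_bays, self.n_rows, self.n_tiers = n_bays, n_rows, n_tiers
        # Each container: (weight, unloading_order); lower order unloads first.
        self.containers = [(rng.randint(5, 30), i) for i in range(n_containers)]
        rng.shuffle(self.containers)
        self.reset()

    def reset(self):
        # stacks[bay][row] holds containers from bottom to top.
        self.stacks = [[[] for _ in range(self.n_rows)] for _ in range(self.n_bays)]
        self.idx = 0  # index of the next container to place

    def legal_slots(self, bay):
        # Rows in this bay that still have vertical space.
        return [r for r in range(self.n_rows) if len(self.stacks[bay][r]) < self.n_tiers]

    def step(self, bay, row):
        # Phase 1 chose `bay`; Phase 2 chose `row` (tier follows from stack height).
        weight, order = self.containers[self.idx]
        stack = self.stacks[bay][row]
        # Rehandling: containers below that must unload BEFORE the new one.
        rehandle = sum(1 for (_, o) in stack if o < order)
        stack.append((weight, order))
        self.idx += 1
        # Imbalance: weight difference between the two lateral halves.
        half = self.n_rows // 2
        port = sum(w for b in self.stacks for r in b[:half] for (w, _) in r)
        stbd = sum(w for b in self.stacks for r in b[half:] for (w, _) in r)
        reward = -2.0 * rehandle - 0.01 * abs(port - stbd)
        done = self.idx == len(self.containers)
        return reward, done

# Greedy rollout as a stand-in for a trained PPO policy.
env = StowageEnv()
total, done = 0.0, False
while not done:
    bay = max(range(env.n_bays), key=lambda b: len(env.legal_slots(b)))
    row = env.legal_slots(bay)[0]
    r, done = env.step(bay, row)
    total += r
print(round(total, 2))
```

In an actual training setup, the bay choice (Phase 1) and the row/tier choice (Phase 2) would each be sampled from a policy network, with this per-step reward driving the PPO updates; the environment above only fixes the state and reward structure.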

