Approximate dynamic programming for liner shipping network design

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Approximate dynamic programming for liner shipping network design. / Lee, Sangmin; Boomsma, Trine Krogh; Holst, Klaus Kähler.

In: Annals of Operations Research, 2024.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Lee, S, Boomsma, TK & Holst, KK 2024, 'Approximate dynamic programming for liner shipping network design', Annals of Operations Research. https://doi.org/10.1007/s10479-024-06106-1

APA

Lee, S., Boomsma, T. K., & Holst, K. K. (2024). Approximate dynamic programming for liner shipping network design. Annals of Operations Research. https://doi.org/10.1007/s10479-024-06106-1

Vancouver

Lee S, Boomsma TK, Holst KK. Approximate dynamic programming for liner shipping network design. Annals of Operations Research. 2024. https://doi.org/10.1007/s10479-024-06106-1

Author

Lee, Sangmin ; Boomsma, Trine Krogh ; Holst, Klaus Kähler. / Approximate dynamic programming for liner shipping network design. In: Annals of Operations Research. 2024.

Bibtex

@article{7ead3dade8ff4f00afd34e00c100617d,
title = "Approximate dynamic programming for liner shipping network design",
abstract = "The global containerised trade heavily relies on liner shipping services, facilitating the worldwide movement of large cargo volumes along fixed routes and schedules. The profitability of shipping companies hinges on how efficiently they design their shipping network, a complex optimisation problem known as the liner shipping network design problem (LSNDP). In recent years, approximate dynamic programming (ADP), also known as reinforcement learning, has emerged as a promising approach for large-scale optimisation. This paper introduces a novel Markov decision process for the LSNDP and investigates the potential of ADP. We show that ADP methods based on value iteration produce optimal solutions to small instances, but their scalability is hindered by high memory demands. An ADP method based on a deep neural network requires less memory and successfully obtains feasible solutions. The quality of solutions, however, declines for larger instances, possibly due to the discrete nature of high-dimensional state and action spaces.",
keywords = "Approximate dynamic programming, Combinatorial optimisation, Deep reinforcement learning, Liner shipping network design",
author = "Sangmin Lee and Boomsma, {Trine Krogh} and Holst, {Klaus K{\"a}hler}",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
doi = "10.1007/s10479-024-06106-1",
language = "English",
journal = "Annals of Operations Research",
issn = "0254-5330",
publisher = "Springer",
}

RIS

TY - JOUR

T1 - Approximate dynamic programming for liner shipping network design

AU - Lee, Sangmin

AU - Boomsma, Trine Krogh

AU - Holst, Klaus Kähler

N1 - Publisher Copyright: © The Author(s) 2024.

PY - 2024

Y1 - 2024

N2 - The global containerised trade heavily relies on liner shipping services, facilitating the worldwide movement of large cargo volumes along fixed routes and schedules. The profitability of shipping companies hinges on how efficiently they design their shipping network, a complex optimisation problem known as the liner shipping network design problem (LSNDP). In recent years, approximate dynamic programming (ADP), also known as reinforcement learning, has emerged as a promising approach for large-scale optimisation. This paper introduces a novel Markov decision process for the LSNDP and investigates the potential of ADP. We show that ADP methods based on value iteration produce optimal solutions to small instances, but their scalability is hindered by high memory demands. An ADP method based on a deep neural network requires less memory and successfully obtains feasible solutions. The quality of solutions, however, declines for larger instances, possibly due to the discrete nature of high-dimensional state and action spaces.

AB - The global containerised trade heavily relies on liner shipping services, facilitating the worldwide movement of large cargo volumes along fixed routes and schedules. The profitability of shipping companies hinges on how efficiently they design their shipping network, a complex optimisation problem known as the liner shipping network design problem (LSNDP). In recent years, approximate dynamic programming (ADP), also known as reinforcement learning, has emerged as a promising approach for large-scale optimisation. This paper introduces a novel Markov decision process for the LSNDP and investigates the potential of ADP. We show that ADP methods based on value iteration produce optimal solutions to small instances, but their scalability is hindered by high memory demands. An ADP method based on a deep neural network requires less memory and successfully obtains feasible solutions. The quality of solutions, however, declines for larger instances, possibly due to the discrete nature of high-dimensional state and action spaces.

KW - Approximate dynamic programming

KW - Combinatorial optimisation

KW - Deep reinforcement learning

KW - Liner shipping network design

U2 - 10.1007/s10479-024-06106-1

DO - 10.1007/s10479-024-06106-1

M3 - Journal article

AN - SCOPUS:85198843020

JO - Annals of Operations Research

JF - Annals of Operations Research

SN - 0254-5330

ER -
