Deep reinforcement learning with combinatorial actions spaces: An application to prescriptive maintenance

Niklas Goby, Tobias Brandt*, Dirk Neumann

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

9 Citations (Scopus)
23 Downloads (Pure)

Abstract

In this paper, we leverage a prescriptive analytics approach based on deep reinforcement learning and adapt it for sequential decision problems with large, noisy state spaces and combinatorial action spaces. We implement a novel mechanism that uses deep learning to reduce the action space and apply the approach to the context of maintenance management. We show that our method substantially outperforms established baseline methods from practice and research, closing more than 90 percent of the cost gap between the next-best solution and the optimum under perfect information. In addition to reducing costs, the specifically designed reward function incentivizes bundling maintenance actions in a way that fully utilizes the available number of workers. Thereby, the number of time steps in which any maintenance action occurs is reduced. This decreases the organizational and operational impact of maintenance in real-world settings, as disruptions can be limited to a few days. Beyond this context, our work illustrates the potential of prescriptive approaches based on deep reinforcement learning in other applications that face similarly challenging problem settings.
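To make the two ideas in the abstract concrete, the following is a minimal Python sketch of (a) reducing a combinatorial maintenance action space by restricting it to the top-scored components and (b) a reward that charges failure, maintenance, and per-step disruption costs while favoring bundled work that fully utilizes the crew. All names, cost parameters, and the simple scoring heuristic are hypothetical illustrations, not the authors' implementation; in the paper's approach a deep network would produce the scores and a reinforcement learning agent would select among the reduced actions.

```python
import numpy as np

# Hypothetical problem sizes and costs (illustration only, not from the paper).
N_COMPONENTS = 20
MAX_WORKERS = 4          # crew size available per time step
FAILURE_COST = 50.0      # cost of a component failing before maintenance
MAINT_COST = 5.0         # cost of one preventive maintenance action
DISRUPTION_COST = 8.0    # fixed cost per time step in which any work occurs


def score_components(degradation: np.ndarray) -> np.ndarray:
    """Stand-in for a learned scoring model: rank components by degradation.
    In the paper's approach, a deep network would produce such scores from a
    large, noisy state."""
    return degradation


def reduce_action_space(degradation: np.ndarray, k: int = MAX_WORKERS) -> np.ndarray:
    """Restrict the combinatorial action (any subset of N components) to the
    top-k scored components, shrinking 2^N candidate subsets to 2^k."""
    scores = score_components(degradation)
    return np.argsort(scores)[-k:]


def reward(selected: np.ndarray, degradation: np.ndarray) -> float:
    """Illustrative reward: charge maintenance and disruption costs, charge
    failures of unmaintained components, and reward bundling work so that the
    crew is fully utilized within a single disruptive time step."""
    maintained = np.zeros(N_COMPONENTS, dtype=bool)
    maintained[selected] = True
    failures = (~maintained) & (degradation > 0.9)   # toy failure condition
    cost = MAINT_COST * maintained.sum() + FAILURE_COST * failures.sum()
    if maintained.any():
        cost += DISRUPTION_COST
        # small bonus for using the full crew while the disruption is incurred
        cost -= 2.0 * min(int(maintained.sum()), MAX_WORKERS)
    return -cost


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    degradation = rng.random(N_COMPONENTS)   # noisy component health state
    candidates = reduce_action_space(degradation)
    print("candidate components:", candidates)
    print("reward if all candidates are serviced:", reward(candidates, degradation))
```

Under these assumptions, bundling all candidate maintenance actions into one time step incurs the disruption cost only once, which mirrors the abstract's point that maintenance disruptions can be limited to a few days.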

Original language: English
Article number: 109165
Journal: Computers and Industrial Engineering
Volume: 179
DOIs
Publication status: Published - May 2023

Bibliographical note

Publisher Copyright:
© 2023 Elsevier Ltd
