TY - GEN
T1 - Attention Mechanisms in Process Mining
T2 - 49th Latin American Computing Conference, CLEI 2023
AU - Rivera-Lazo, Gonzalo
AU - Astudillo, Hernan
AU - Nanculef, Ricardo
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Process Mining (PM) focuses on monitoring and optimizing long-running business processes by examining their execution event logs (usually complex and heterogeneous) to obtain insights and enable data-driven decisions. Several Machine Learning (ML) techniques have recently been proposed to exploit these logs as learning datasets, enabling examination of past events and prediction of future ones, but their black-box nature makes it hard for human analysts to interpret their results and recognize the key parts of the input data. Attention mechanisms (AM) are an ML technique that addresses these shortcomings, but they have been little used for PM. This article describes the design, results, and findings of a systematic literature review of attention mechanisms for PM. We addressed three research questions: (a) for which applications are AM used? (b) which kinds of AM are used? and (c) how are AM combined with other ML techniques? An initial search yielded 73 papers, and inclusion/exclusion criteria left sixteen, published between 2017 and 2023. Key findings are that: (1) the most common application is sequence prediction, (2) most studies combine global and item-wise attention, added as layers after an encoder generates the continuous representation, and (3) emerging research topics include anomaly detection and data representation. This study shows that attention mechanisms can help process analysts gain some degree of interpretability, and showcases the bright potential of attention mechanisms for process mining, even though they have paradoxically received little attention themselves.
AB - Process Mining (PM) focuses on monitoring and optimizing long-running business processes by examining their execution event logs (usually complex and heterogeneous) to obtain insights and enable data-driven decisions. Several Machine Learning (ML) techniques have recently been proposed to exploit these logs as learning datasets, enabling examination of past events and prediction of future ones, but their black-box nature makes it hard for human analysts to interpret their results and recognize the key parts of the input data. Attention mechanisms (AM) are an ML technique that addresses these shortcomings, but they have been little used for PM. This article describes the design, results, and findings of a systematic literature review of attention mechanisms for PM. We addressed three research questions: (a) for which applications are AM used? (b) which kinds of AM are used? and (c) how are AM combined with other ML techniques? An initial search yielded 73 papers, and inclusion/exclusion criteria left sixteen, published between 2017 and 2023. Key findings are that: (1) the most common application is sequence prediction, (2) most studies combine global and item-wise attention, added as layers after an encoder generates the continuous representation, and (3) emerging research topics include anomaly detection and data representation. This study shows that attention mechanisms can help process analysts gain some degree of interpretability, and showcases the bright potential of attention mechanisms for process mining, even though they have paradoxically received little attention themselves.
KW - Attention mechanisms
KW - Deep Learning
KW - Interpretability
KW - Process Mining
KW - Systematic Literature Review
UR - http://www.scopus.com/inward/record.url?scp=85182279827&partnerID=8YFLogxK
U2 - 10.1109/CLEI60451.2023.10346135
DO - 10.1109/CLEI60451.2023.10346135
M3 - Conference contribution
AN - SCOPUS:85182279827
T3 - Proceedings - 2023 49th Latin American Computing Conference, CLEI 2023
BT - Proceedings - 2023 49th Latin American Computing Conference, CLEI 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 October 2023 through 20 October 2023
ER -