Thursday, July 17, 2025

Multimodal Cognitive Workload Assessment in Human-machine Interaction

Adamolekun, Azeez and Logah, Francis Xian and Alabi, Clement and Baanye, Jennifer and Wilson, Manuella and Ganiu, Olaitan and Seong, Younho and Yi, Sun, Toward a Unified Framework for Multimodal Cognitive Workload Assessment in Human-machine Interaction Systems (February 17, 2025). Available at SSRN: https://ssrn.com/abstract=5235980 or http://dx.doi.org/10.2139/ssrn.5235980

Abstract: Effective monitoring and management of cognitive workload are vital to enhancing the functionality and safety of human-machine interaction (HMI) systems, especially in increasingly dynamic and complex environments. In HMI, operators work alongside machines that are now often equipped with collaborative robots, making human safety a pressing research concern. Despite technological advances, modern machines still require highly skilled and qualified operators, whose mental health can suffer under high cognitive workload. This paper presents a systematic review of advances in multimodal cognitive workload assessment methodologies, emphasizing neural techniques such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), behavioral indicators such as eye-tracking and performance metrics, and subjective tools including NASA-TLX. Over 500 papers were gathered and screened by keywords, publication years, and areas of specialization; 31 papers were retained for analysis. Integrating multiple assessment modalities provides a comprehensive method of analyzing mental workload and makes adaptive HMI systems achievable. Possible applications of the proposed unified framework for real-time multimodal workload assessment include transportation safety systems, healthcare operations, and industrial automation, while systematically addressing issues such as motion artifacts, data synchronization, and computational complexity. The article also spells out future research directions for creating adaptive, self-supporting, and user-focused systems.

Come to think of it, my recent working paper, Melancholy, Growth, and Mindcraft, seems relevant here. It's in the same general ballpark, though a somewhat different region. Here's my conclusion (pp. 20-21):

If we are to navigate the future, we are going to need to develop new forms of mindcraft. Mindcraft, the crafting of minds. Reading, writing, and arithmetic are forms of mindcraft. So are the many meditation disciplines. The many forms of psychotherapy are forms of mindcraft as well, as is life coaching.

More than any previous technologies, computation is a mind technology. The tasks that computers do are tasks that had previously been done only by minds, human minds. During the early decades only a small group of people had to craft their minds for computer interaction. With the advent of personal computers more people could interact with computers, but most of us have interacted with them in only a superficial way. The recent, very recent, emergence of machine learning into the public sphere is bringing many more of us into deeper interaction with computers, interactions we don’t understand. Will these machines craft our minds as we craft them?

The question of whether or not these devices themselves have minds is a real one. For what it’s worth, I don’t believe any of these devices yet have minds. But I don’t rule it out.

That’s the future, how distant, I don’t know. My immediate concern is less conjectural: Recalling Claude's remarks about reorganizing processes in computers, what can we learn about our own minds by studying A.I. devices? More practically, how can we learn more about our own (individual) minds by tracking patterns of computer usage? How can we use those patterns to understand ourselves and to better manage our own lives? That is the mindcraft facing us now.
