Please use this identifier to cite or link to this item:
http://cmuir.cmu.ac.th/jspui/handle/6653943832/77591
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kornprom Pikulkaew | en_US |
dc.contributor.author | Waraporn Boonchieng | en_US |
dc.contributor.author | Ekkarat Boonchieng | en_US |
dc.date.accessioned | 2022-10-16T07:48:47Z | - |
dc.date.available | 2022-10-16T07:48:47Z | - |
dc.date.issued | 2023-01-01 | en_US |
dc.identifier.issn | 2367-3389 | en_US |
dc.identifier.issn | 2367-3370 | en_US |
dc.identifier.other | 2-s2.0-85135917488 | en_US |
dc.identifier.other | 10.1007/978-981-19-1610-6_29 | en_US |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85135917488&origin=inward | en_US |
dc.identifier.uri | http://cmuir.cmu.ac.th/jspui/handle/6653943832/77591 | - |
dc.description.abstract | At present, countries in every corner of the world, both developing and developed, are affected by infectious diseases such as the COVID-19 virus. Our objective was to create a real-time pain detection system that anyone can use by themselves before going to the hospital. In this research, we used datasets from the University of Northern British Columbia (UNBC) and the Japanese Female Facial Expression (JAFFE) database as the training set. Furthermore, we used unseen data from a webcam or video as the testing set. In our system, pain is divided into three categories: not hurting, getting uncomfortable, and painful. The system’s efficiency was assessed by comparing its results with those of a highly qualified physician. Classification accuracy rates were 96.71%, 92.16%, and 98.40% for the not hurting, getting uncomfortable, and painful categories, respectively. To summarize, our research has created a simple, cost-effective, and readily understood alternative method for the general public and healthcare professionals to screen for pain before admission. | en_US |
dc.subject | Computer Science | en_US |
dc.subject | Engineering | en_US |
dc.title | Real-Time Pain Detection Using Deep Convolutional Neural Network for Facial Expression and Motion | en_US |
dc.type | Book Series | en_US |
article.title.sourcetitle | Lecture Notes in Networks and Systems | en_US |
article.volume | 448 | en_US |
article.stream.affiliations | Chiang Mai University | en_US |
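The abstract above outlines a webcam-based screening pipeline: detect a face in each frame and classify it into one of the three pain categories with a deep convolutional neural network. Below is a minimal, hypothetical Python sketch of such a pipeline. The Haar-cascade face detector, the 48x48 grayscale input size, and the CNN layer sizes are illustrative assumptions, not the authors' published architecture, and the model here is untrained; it would first need to be fitted on the UNBC and JAFFE data described in the abstract.

```python
# Hypothetical sketch of a real-time pain-screening loop: webcam capture,
# face detection, and 3-class CNN classification. Architecture and input
# size are assumptions for illustration, not the paper's published model.
import cv2
import numpy as np
from tensorflow.keras import layers, models

CLASSES = ["not hurting", "getting uncomfortable", "painful"]  # from the abstract

def build_model(input_shape=(48, 48, 1), num_classes=3):
    """Small CNN with a 3-way softmax head (illustrative architecture)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def main():
    model = build_model()  # in practice, load weights trained on UNBC/JAFFE
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam; a video file path also works
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Crop the face, resize, and scale pixels to [0, 1]
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            face = face.astype("float32")[None, :, :, None] / 255.0
            probs = model.predict(face, verbose=0)[0]
            label = CLASSES[int(np.argmax(probs))]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        cv2.imshow("pain screening", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

Under these assumptions, replacing `cv2.VideoCapture(0)` with a video file path would reproduce the abstract's video-based testing mode; the reported per-category accuracies would come from comparing such predictions against physician ratings.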
Appears in Collections: | CMUL: Journal Articles |
Files in This Item:
There are no files associated with this item.
Items in CMUIR are protected by copyright, with all rights reserved, unless otherwise indicated.