Fuzzy Delphi Method: Designing a Tadabbur Al-Quran Model in Arabic Vocabulary Learning for Hearing-Impaired Muslim Adults, Assisted with Augmented Reality Technology
Keywords:
Tadabbur al-Quran Model; Arabic vocabulary; hearing-impaired adults; Augmented Reality technology

Abstract
This study aims to develop a Tadabbur al-Quran Model that integrates Augmented Reality (AR) technology to enhance Arabic vocabulary learning for hearing-impaired Muslim adults. Conducted in collaboration with Persatuan Orang Pekak Islam Malaysia (PRISMA), a national NGO for the Deaf Muslim community, the research built on the findings of a previous needs analysis to design a structured learning model. A questionnaire was developed to gather expert opinions on the model’s key components, focusing on Arabic vocabulary learning and AR’s potential in Quranic education. It was distributed to 10 experts in Arabic Education, Quranic and Special Needs Education, Educational Technology, and Model Development. Expert responses were analysed with the Fuzzy Delphi Method (FDM), using Fuzzy Delphi Analysis V1.5, to refine the model. The findings showed consensus on all components, meeting three key fuzzy criteria: the threshold value (d) was ≤ 0.2, the expert agreement percentage was ≥ 75%, and the defuzzification (alpha-cut) value was ≥ 0.5. This study contributes to fostering a more inclusive and effective learning environment for hearing-impaired learners by advocating for the integration of interactive and visually supportive tools, such as AR books and digital sign language resources, into Arabic vocabulary education.
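For context, the sketch below illustrates how the three Fuzzy Delphi acceptance criteria quoted above are commonly computed. It is a minimal Python sketch, assuming a 7-point Likert scale mapped to triangular fuzzy numbers and the standard distance and average-value defuzzification formulas; the exact scale, fuzzy mapping, and computations of the Fuzzy Delphi Analysis V1.5 tool used in the study may differ.

```python
# Minimal sketch of the Fuzzy Delphi acceptance checks described in the abstract.
# Assumptions (not taken from the article): a 7-point Likert scale mapped to
# triangular fuzzy numbers and the usual distance / average-defuzzification formulas.
import math

# Hypothetical mapping from 7-point Likert responses to triangular fuzzy numbers (m1, m2, m3).
LIKERT_TO_TFN = {
    1: (0.0, 0.0, 0.1), 2: (0.0, 0.1, 0.3), 3: (0.1, 0.3, 0.5),
    4: (0.3, 0.5, 0.7), 5: (0.5, 0.7, 0.9), 6: (0.7, 0.9, 1.0),
    7: (0.9, 1.0, 1.0),
}

def evaluate_item(responses):
    """Return (mean threshold d, % agreement, defuzzified value, accepted) for one item."""
    tfns = [LIKERT_TO_TFN[r] for r in responses]
    n = len(tfns)
    # Average fuzzy number across experts.
    avg = tuple(sum(t[i] for t in tfns) / n for i in range(3))
    # Threshold value d: distance between each expert's fuzzy number and the average.
    distances = [
        math.sqrt(sum((t[i] - avg[i]) ** 2 for i in range(3)) / 3) for t in tfns
    ]
    d_mean = sum(distances) / n
    agreement = sum(d <= 0.2 for d in distances) / n * 100   # % of experts within threshold
    defuzzified = sum(avg) / 3                               # alpha-cut (average) defuzzification
    accepted = d_mean <= 0.2 and agreement >= 75 and defuzzified >= 0.5
    return d_mean, agreement, defuzzified, accepted

# Example: ten hypothetical expert ratings for one model component.
print(evaluate_item([7, 6, 7, 6, 7, 7, 6, 7, 6, 7]))
```

With the illustrative ratings shown (six 7s and four 6s), all three thresholds are satisfied, which is the kind of consensus pattern reported for every component in the study.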
https://doi.org/10.26803/ijlter.24.5.13
License
Copyright (c) 2025 Nabilah Fasihah Ahmad Yusoff, Ummu-Hani Abas, Ahmad Asyraf Mat Ali, Mohammad Najib Jaffar, Mohamad Lukman Al-Hakim Md. Noor, Mohd Fadzil Abdul Hanid, Norakyairee Mohd Raus

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.