A Formative Assessment Approach to Examine Cognitive and Attitudinal Effects of AI-based LLM Use among Undergraduate Engineering Students
DOI: https://doi.org/10.16920/jeet/2026/v39is2/26067

Keywords: AI in education; Digital use patterns; Impacts of LLM; Formative assessment; Cognitive Levels

Abstract
The pace of integrating artificial intelligence (AI)-based large language models (LLMs) into education and research has grown exponentially in recent years. As key stakeholders, educators, researchers, and students vary considerably in the nature and extent of their dependence on AI tools, which poses critical challenges for systematically evaluating and interpreting the tools' impacts. The present study proposes a formative assessment approach to compare the comprehension of selected fundamental engineering concepts among a group of eight students through (i) offline, in-person proctored tests and (ii) online feedback surveys. The tests comprised two sets of multiple-choice and short-answer questions of increasing cognitive level, with the use of AI tools permitted for the second set only. A feedback survey then captured how students responded to questions with and without AI assistance. The results show a growing dependency on AI tools for answering conceptual and analytical questions compared to factual and recall-type questions. With AI assistance, overall performance in the open-book test (78.75%) was three times that in the proctored assessment (26.25%). The observed patterns of AI use indicate a shift toward more methodical searches rather than random ones. The study recommends that students first build strong conceptual foundations through conventional learning, and that AI use for assignments or projects be discouraged at least until the second year so that students develop independent thinking skills. Post-assessment follow-ups with mentoring should be adopted, with attention to their deeper implications for behavioral traits and intellectual responsiveness.
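As a quick arithmetic check of the reported three-fold gain, using only the two overall scores quoted in the abstract above (expressed here in LaTeX):

\[
  \frac{\text{open-book score}}{\text{proctored score}} = \frac{78.75\%}{26.25\%} = 3.0
\]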

