EduQuest: An AI-Assisted Academic Question Generator

Authors

  • R. Raja Subramanian, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Tamil Nadu
  • Shamraj Sheik, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Tamil Nadu

DOI:

https://doi.org/10.16920/jeet/2026/v39is2/26043

Keywords:

AI-assisted education; assessment automation; Bloom’s taxonomy; generative AI; question paper generation

Abstract

EduQuest is an AI-powered platform that makes the creation of standardized question papers faster, more consistent, and pedagogically aligned. Using Google's Gemini generative AI model together with structured prompt engineering, it generates midterm and end-semester examinations mapped onto Bloom's taxonomy and tailored to institutional requirements. Flexible templates for disciplines such as engineering design, problem-solving, and business studies allow educators to customize mark distribution, difficulty levels, and alignment with learning outcomes. The platform also supports institutional branding, PDF export, and the incorporation of additional context from uploaded PDFs. EduQuest is built on a Flask backend with a user-friendly Bootstrap interface and adopts a caching mechanism to reduce API calls and improve performance. Evaluation with 10 faculty members and 300 students shows a 60 to 70% reduction in question-paper preparation time, along with significantly improved question variety and cognitive-level coverage. To mitigate AI-generated errors, output is constrained through structured prompting and verified by manual review. EduQuest does not yet incorporate automated prompt optimization or model fine-tuning. Overall, EduQuest offers a practical and scalable solution for generating quality assessments aligned with modern educational requirements.
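The abstract's two key mechanisms, structured prompting against Bloom's taxonomy and caching to avoid repeated API calls, can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function and class names (`build_prompt`, `CachedGenerator`) and the prompt wording are assumptions, and the model call is injected as a callable so that any backend (e.g. a Gemini API wrapper) could stand in for it.

```python
import hashlib

# Illustrative Bloom's-taxonomy levels used to structure the prompt.
BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

def build_prompt(subject, exam_type, mark_distribution, difficulty="medium"):
    """Compose a structured prompt mapping questions onto Bloom's taxonomy.

    mark_distribution: dict of Bloom level -> marks, e.g. {"Apply": 10}.
    """
    unknown = set(mark_distribution) - set(BLOOM_LEVELS)
    if unknown:
        raise ValueError(f"Unknown Bloom levels: {unknown}")
    lines = [
        f"Generate a {exam_type} question paper for {subject}.",
        f"Overall difficulty: {difficulty}.",
        "Allocate marks per Bloom's taxonomy level:",
    ]
    lines += [f"- {level}: {marks} marks" for level, marks in mark_distribution.items()]
    return "\n".join(lines)

class CachedGenerator:
    """Cache generated papers keyed by a hash of the prompt,
    so identical requests do not trigger repeated API calls."""

    def __init__(self, generate_fn):
        self._generate = generate_fn   # e.g. a wrapper around the Gemini API
        self._cache = {}
        self.api_calls = 0             # for observing cache effectiveness

    def generate(self, prompt):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in self._cache:
            self.api_calls += 1
            self._cache[key] = self._generate(prompt)
        return self._cache[key]
```

Under this sketch, regenerating a paper for the same template parameters hits the cache rather than the model, which is one plausible way the reported reduction in API calls could be realized.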

Published

2026-02-17

How to Cite

Raja Subramanian, R., & Sheik, S. (2026). EduQuest: An AI-Assisted Academic Question Generator. Journal of Engineering Education Transformations, 39(Special Issue 2), 351–356. https://doi.org/10.16920/jeet/2026/v39is2/26043

References

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Longman.

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977. https://doi.org/10.1119/1.1374249

Gao, Y., Bing, L., Li, P., King, I., & Lyu, M. R. (2019). Generating distractors for reading comprehension questions from real examinations. Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 6423–6430. https://doi.org/10.1609/aaai.v33i01.33016423

Kurdi, G., Leo, M., Parsia, B., Sattler, U., Al-Emari, S., & Zaihrayeu, I. (2020). A systematic review of automatic question generation for educational purposes. International Journal of Artificial Intelligence in Education, 30(1), 121–204. https://doi.org/10.1007/s40593-019-00186-y

Majumder, N., Poria, S., Hazarika, D., Mihalcea, R., & Cambria, E. (2021). Document-level question answering: A survey. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 10426–10434. https://doi.org/10.1609/aaai.v35i1.17606

Zhang, J., & VanLehn, K. (2016). How do machine-generated questions compare to human-generated questions? Research and Practice in Technology Enhanced Learning, 11(1), 1–15. https://doi.org/10.1186/s41039-016-0030-3

Alavi, H. S., Dillenbourg, P., & Kaplan, F. (2018). Distributed pedagogical leadership in scalable educational systems. British Journal of Educational Technology, 49(5), 833–844. https://doi.org/10.1111/bjet.12673

Smith, J., & Johnson, R. (2020). Hybrid AI architectures for educational scalability. Journal of Educational Technology Systems, 48(3), 345–362. https://doi.org/10.1177/0047239520912345

Patel, A., Kumar, S., & Gupta, R. (2023). Impact of AI-generated assessments on student engagement. Educational Technology Research and Development, 71(4), 123–140. https://doi.org/10.1007/s11423-023-10234-5

Liu, X., Chen, Y., & Wang, Z. (2022). Reinforcement learning for adaptive question generation. IEEE Transactions on Learning Technologies, 15(2), 234–247. https://doi.org/10.1109/TLT.2022.3156789

Jeyanathan, J. S., P, N., & Nair, A. (2025). Assessment of Learning: Student Teams Achievement Division Technique for Empowering Students in Problem-Solving Courses. Journal of Engineering Education Transformations, 38(2), 107–113.

P, N., M, R., Jeyanathan, J. S., & Muneeswaran, V. (2025). Effect of Inquiry-based Learning on Engineering Students as Collaborative Approach in Problem-Solving and Research for Formal Language Automata Course. Journal of Engineering Education Transformations, 38(2), 23–29.