Investigation of artificial intelligence as a tool for student learning outcomes
Opportunities, ethical challenges, and mitigation strategies in the Nigerian tertiary context
Keywords:
Artificial Intelligence, Learning Outcomes, Academic Integrity, Academic Dishonesty, Ethical AI, Educational Technology
Abstract
The integration of Artificial Intelligence (AI) into higher education is transforming teaching, learning, and assessment practices. While AI technologies offer significant opportunities to enhance student learning through personalisation, efficiency, and data-driven support, they also introduce complex ethical challenges, particularly regarding academic integrity. This study examines the role of AI in student learning outcomes within the Nigerian tertiary education context, with particular emphasis on the risks associated with its misuse. A qualitative case study was conducted at the University of Ilorin, Nigeria, using open-ended questionnaires and semi-structured interviews with academic staff and students. The findings reveal widespread adoption of AI tools among students, accompanied by emerging patterns of overreliance and integrity-related concerns, including automated essay generation and attempts to circumvent plagiarism detection systems. These developments raise important questions about the reliability and validity of conventional assessment practices. The study argues that effective integration of AI in higher education requires a balanced governance approach that combines pedagogical adaptation, institutional policy development, and technological safeguards. It recommends the adoption of advanced detection mechanisms, structured training programmes on ethical AI use, and collaborative engagement among educators, administrators, and policymakers. Such measures are essential to harness the benefits of AI while preserving the credibility and integrity of higher education systems.
1. Introduction
The global educational landscape is undergoing a radical transformation driven by the rapid advancement of digital technologies, among which Artificial Intelligence (AI) stands out as one of the most disruptive and promising. AI, in its essence, refers to the capability of computer systems to perform tasks that traditionally required human intelligence, such as learning, reasoning, problem-solving, and natural language processing (Russell & Norvig, 1995). Its foray into the educational sector promises to move beyond traditional one-size-fits-all teaching models towards highly personalised, accessible, and efficient teaching and learning experiences for teachers and students. Wang et al. (2024) and Boussouf et al. (2024) examined this trend, presenting several ways to incorporate AI technologies into the educational sector to enhance teaching and learning while giving consideration to ethical implications. AI also carries an economic promise that could double Africa’s GDP growth rate (Jaldi, 2023). Within this macroeconomic promise lies the microcosm of education, where AI is poised to revolutionise how students learn and how educators teach.
The primary allure of AI in education lies in its capacity to optimise and personalise the learning journey. Adaptive learning platforms can assess a student’s strengths and weaknesses in real time, tailoring content and pacing to maximise individual learning outcomes (Akintola et al., 2025). Furthermore, AI-driven analytics can provide educators with unprecedented insights into student performance and engagement, complemented by tools such as virtual assistants and augmented reality. It can also enable data-informed instructional strategies and early interventions for students who are at risk. From automating administrative tasks to powering sophisticated research tools, AI’s role can be described as multifaceted and expanding. However, this technological promise is accompanied by significant ethical quandaries and practical challenges. The very tools designed to enhance learning can be subverted to undermine it (Chen & Lin, 2024). There is therefore a need to examine both impacts, especially on teaching and learning. More importantly, the ease of access to powerful generative AI models can give rise to new and sophisticated forms of academic dishonesty if not properly harnessed. With these tools, students can generate essays, solve complex problems, and even complete online assessments with minimal original input, posing a fundamental threat to the authenticity of learning and the validity of assessment (Conrad & Openo, 2018). This challenge is particularly acute in developing nations like Nigeria, where rapid technological adoption often outpaces the development of corresponding ethical frameworks, regulatory policies, and digital literacy programmes. This study provides context-specific empirical evidence on students’ perceptions of AI usage, ethical concerns, and academic integrity risks within Nigerian higher education.
Unlike prior studies that emphasise technological adoption, this research focuses on the behavioural, ethical, and governance dimensions of AI integration, reflecting AI’s growing influence in higher education. Despite expanding global research, limited empirical work exists within Nigerian universities. This study addresses this gap by examining student perceptions, usage behaviours, and academic integrity implications. The study adopted a qualitative case study design. Participants included 175 undergraduate students selected via convenience sampling and three academic staff selected purposively. Data were collected using open-ended questionnaires administered through Google Forms and semi-structured interviews. Ethical approval was obtained, and informed consent was secured.
1.1 Research questions
This study addresses the following research questions:
- What is the prevalence and nature of AI tool usage among undergraduate students at a Nigerian university?
- How do students perceive the role of AI tools in their learning processes, and what motivates their use or non-use of these tools?
- What ethical concerns and academic integrity risks do students and faculty associate with AI usage in academic work?
- How do current institutional policies and assessment practices address (or fail to address) AI-related academic integrity challenges?
- What strategies do students and faculty consider effective for promoting ethical AI integration and mitigating misuse?
2. Literature Review
The application of AI in education is not an entirely new concept. Its foundations can be traced back to the days of using basic computers for personalised learning, then to Intelligent Tutoring Systems in the late 20th century aimed at providing immediate and customised instruction or feedback to learners (Nye, 2015). However, contemporary AI, supercharged by machine learning (ML), natural language processing (NLP), and large language models (LLMs), represents a quantum leap in capability and application. This modern AI has the potential to enhance teaching and learning through several key mechanisms such as personalised learning, with AI algorithms creating a unique profile of a user’s knowledge state and learning style through collecting and analysing the user’s interactions with learning materials. It also allows for dynamic adaptation of content difficulty, presentation style, and learning pathways. This can be seen to effectively provide a personal tutor for every student. This personalisation has been shown to increase student engagement and improve knowledge retention (Xie et al., 2019).
The integration of Artificial Intelligence (AI) into higher education has become an important area of scholarly inquiry as educational institutions increasingly adopt digital technologies to enhance teaching, learning, and assessment processes. AI technologies are capable of analysing large volumes of educational data, enabling adaptive learning environments, automated assessment, and improved administrative efficiency (Zawacki-Richter et al., 2019). Through machine learning algorithms and data analytics, AI systems can identify patterns in student learning behaviours and support personalised instructional approaches. One of the most prominent applications of AI in education involves the automation of routine administrative and assessment tasks. Furthermore, AI-based systems can assist in grading objective assessments, evaluating structured written responses, managing course scheduling, and generating timely feedback for students. By automating these time-consuming tasks, AI technologies reduce the administrative burden on educators and allow them to devote more time to higher-order pedagogical activities such as mentoring, facilitating critical discussions, and designing collaborative learning experiences (Selwyn, 2019). In this sense, AI functions not as a replacement for educators but as a complementary tool that enhances teaching effectiveness. Another significant development is the use of learning analytics and educational data mining to support data-driven decision-making in higher education. Learning analytics systems analyse student performance data, engagement metrics, attendance records, and behavioural indicators to identify patterns associated with academic success or difficulty. Siemens and Baker (2012) argue that learning analytics enables institutions to detect early warning signals indicating students who may be at risk of academic underperformance or dropout.
Such insights allow institutions to implement timely and targeted interventions, including academic advising, personalised feedback, and adaptive learning pathways that can improve student retention and academic success. In addition, AI technologies also contribute to improved accessibility and inclusivity in education. Assistive AI applications such as speech-to-text transcription, automated language translation, and intelligent tutoring systems provide additional support for students with diverse learning needs. These tools help reduce barriers to participation and promote more equitable access to learning opportunities (Zawacki-Richter et al., 2019). As a result, AI has the potential to support more inclusive and learner-centred educational environments. Despite these benefits, scholars emphasise that the integration of AI into education requires careful consideration of ethical, pedagogical, and institutional implications. As AI systems increasingly influence assessment and learning processes, concerns have emerged regarding academic integrity and the potential misuse of AI technologies by students.
Alongside its educational benefits, the emergence of AI, particularly generative AI, has introduced new challenges to academic integrity. Academic integrity is widely recognised as a foundational principle of higher education and encompasses values such as honesty, trust, fairness, respect, and responsibility in scholarly work (Macfarlane, Zhang & Pun, 2014). The rapid development of AI-powered text generation systems has complicated the ability of institutions to uphold these principles (Khatri & Karki, 2023). Generative AI tools, including LLMs capable of producing coherent essays, reports, and programming code, enable students to generate original text with minimal effort. Unlike traditional plagiarism, where text is copied from existing sources, AI-generated content is often unique and therefore difficult to detect using conventional plagiarism detection systems (Lancaster, 2023; Lancaster et al., 2025). This development has led scholars to raise concerns about new forms of academic misconduct, including AI-assisted ghostwriting and contract cheating. Cotton, Cotton and Shipway (2024) argue that generative AI represents a significant challenge for higher education institutions because it allows students to produce sophisticated assignments without engaging fully in the learning process. The authors highlight that traditional assessment models, particularly take-home essays and unsupervised assignments, are increasingly vulnerable to AI-assisted completion. As a result, educators must reconsider assessment design and explore alternative approaches that emphasise critical thinking, originality, and authentic demonstration of knowledge. Research has also highlighted the limitations of current technologies designed to detect AI-generated text. Weber-Wulff et al. (2023) conducted a systematic evaluation of multiple AI detection tools and found that these systems frequently produce inaccurate or inconsistent results.
Their findings indicate that existing detection technologies cannot reliably distinguish between human-written and AI-generated text, thereby limiting their usefulness as a mechanism for enforcing academic integrity. Similarly, Perkins et al. (2024) examined the ability of academic staff to identify AI-generated assignments and found that many educators struggle to detect such content with confidence. Their findings suggest that institutional procedures for addressing AI-related academic misconduct remain underdeveloped and that universities must develop clearer guidelines and policies to respond to the emerging challenges posed by generative AI. Beyond issues of detection, scholars also warn that excessive reliance on AI tools may undermine the development of critical academic skills. When students rely heavily on AI systems to generate ideas, construct arguments, or complete assignments, they may miss opportunities to develop essential competencies such as critical thinking, analytical reasoning, and academic writing (Cotton, Cotton & Shipway, 2024). This concern has led researchers to describe the potential emergence of a ‘learning authenticity’ crisis in which the intellectual effort traditionally associated with higher education is diminished.
Although research on AI in education has expanded rapidly, the geographical distribution of this research remains uneven. Zawacki-Richter et al. (2019) conducted a systematic review of AI research in higher education and found that most studies originate from North America, Europe, and East Asia. The authors highlight the relative scarcity of research from developing regions, including Africa. More recent reviews confirm this trend: Bond et al. (2024) conducted a meta-systematic review of AI research in higher education and observed that empirical studies remain concentrated in technologically advanced educational systems. The authors argue that greater attention must be paid to contextual factors when examining AI adoption in different regions of the world. Educational infrastructure, technological access, institutional capacity, and cultural attitudes toward academic integrity may all influence how AI technologies are adopted and used within specific educational contexts.
In many developing countries, including those in Africa, higher education institutions face additional challenges such as large class sizes, limited digital infrastructure, and resource constraints. These conditions may shape the ways in which AI technologies are adopted and integrated into educational systems. However, despite the increasing global attention given to generative AI tools, empirical studies examining student use of AI in African higher education remain limited. The absence of context-specific research is particularly significant because institutional responses to AI misuse must be informed by a clear understanding of how students interact with these technologies within local educational environments. Without such empirical evidence, universities may struggle to develop effective policies, assessment strategies, and academic integrity frameworks that address the unique challenges associated with AI adoption. While existing international literature has extensively explored the pedagogical potential of AI and its implications for academic integrity, there is little empirical evidence regarding student AI usage patterns, perceptions, and academic integrity implications within African universities. Given the rapid diffusion of generative AI tools globally, this lack of context-specific research presents a significant challenge for educators and policymakers seeking to develop effective responses. This study therefore seeks to contribute to the emerging literature by examining the use of AI technologies within a Nigerian university context and exploring the associated implications for academic integrity and learning practices.
3. Methodology
The study adopts a qualitative exploratory design, supported by descriptive quantitative data from structured survey items and semi-structured interviews. This design enables triangulation of findings and improves interpretive validity.
3.1 Research design
This study adopted a qualitative research design to gain an in-depth understanding of the perceptions, experiences, and behaviours of students regarding the use and misuse of AI in academic work. While descriptive statistics were calculated for closed-ended survey items, the primary focus remained on qualitative interpretation of open-ended responses and interview data. A qualitative approach is best suited for exploring complex social phenomena where the aim is to understand the ‘why’ and ‘how’ behind certain behaviours (Creswell & Poth, 2016). Specifically, a case study design was employed, focusing on one cohort within a single institution to allow for detailed, contextualised analysis (Wang & Kattan, 2020).
3.2 Population and sampling
The study was conducted at the University of Ilorin, a large federal university in Nigeria, within the Department of Telecommunication Science, Faculty of Communications and Information Sciences. The target population was undergraduate students taking a module on Research Methods. A total of 175 fourth-year (400-level) undergraduate students participated in the study. As part of the assessment for the module, they were given an assignment on different topics to demonstrate what they had learned. All students enrolled in the module were invited to participate, yielding a 100% response rate. In addition, three academic staff members involved in teaching and assessing the module were selected purposively, based on their direct involvement with the cohort, their expertise in teaching and assessment, and their direct experience of AI tools in academic work. This non-probability sampling technique is appropriate when the research aims to select individuals who can provide rich, relevant information about the phenomenon under study (Palinkas et al., 2015). Table 1 presents participant demographics.
| Demographic Variable | Category | Number of Participants (N) | Percentage (%) |
|---|---|---|---|
| Total Participants | Undergraduate students | 175 | 100 |
| Participant Type | Undergraduate Students | 175 | 100 |
| | Academic Staff | 3 | — |
| Level of Study | 400 Level | 175 | 100 |
| Faculty | Faculty of Communications and Information Sciences | 175 | 100 |
| Department | Department of Telecommunication Science | 175 | 100 |
| Gender Distribution | Male | 148 | 84.6 |
| | Female | 27 | 15.4 |
| Course Context | Students taking Research Methods module | 175 | 100 |
| AI Usage | Used AI tools | 114 | 65.1 |
| Mode of AI Use | Relied entirely on AI-generated content | 90 | 78.9 |
| | Used AI to enhance own work | 24 | 21.1 |
| Academic Outcome (AI Users) | Passed | 105 | 92.1 |
| | Failed | 9 | 7.9 |
| Academic Outcome (AI-Reliant Only) | Passed | 85 | 94.4 |
| | Failed | 5 | 5.6 |
| Student-to-Student Copying | Admitted copying | 89 | 50.9 |
| | Passed (among those who copied) | 38 | 42.7 |
| | Failed (among those who copied) | 51 | 57.3 |
| | Did not admit copying | 86 | 49.1 |

Note: Percentage bases differ by row group. Mode of AI Use and Academic Outcome (AI Users) percentages are calculated from the 114 AI users; Academic Outcome (AI-Reliant Only) from the 90 fully AI-reliant students; the pass/fail rows under Student-to-Student Copying from the 89 students who admitted copying. All other percentages are of the full student cohort (N = 175).
3.3 Data collection instruments
Data were collected using two instruments developed specifically for this study.
3.3.1 Student questionnaire
A self-administered questionnaire was designed to gather detailed information on how students completed their assignments. The questionnaire comprised closed-ended items including questions on AI tool awareness, usage frequency, specific tools used, mode of use covering entire reliance versus enhancement, sources consulted, and whether collaboration or copying from other students occurred during the assignment process. It also included open-ended items exploring how students used AI, their motivations for use or non-use, perceptions of acceptable use, and suggestions for institutional policy and practice. The questionnaire was developed based on themes identified in prior research on academic integrity and AI use (Cotton, Cotton & Shipway, 2024; Perkins et al., 2024). It was reviewed by two senior academics for face and content validity and pilot-tested with 15 students from a different department to check clarity and comprehensibility. Minor wording adjustments were made based on pilot feedback.
3.3.2 Staff interview guide
A semi-structured interview guide was developed to explore staff perspectives. The guide covered experiences with student assessment, awareness of AI tools and observations of potential AI misuse, perceived impacts on student learning, challenges in detecting AI-generated work, knowledge of existing institutional policies related to academic integrity and technology use, and recommendations for policy and practice. The guide was informed by the literature on AI and academic integrity and reviewed by a qualitative research expert before use.
3.4 Data collection procedures
The following data collection procedures were employed in this work.
3.4.1 Student Questionnaire Administration
Following completion of the Research Methods module and finalisation of grades, the questionnaire was distributed to all 175 students via Google Forms. To encourage honest and accurate responses, students were explicitly informed that data collection was solely for research purposes and would have no impact on their academic standing. They were assured that grades for the module had already been finalised and would not be altered regardless of the information disclosed. This assurance was intended to minimise response bias and promote transparency. Students completed the questionnaire voluntarily during a one-week period. No incentives were offered. The response rate was 100%, representing 175 out of 175 students.
3.4.2 Staff Interviews
Individual semi-structured interviews were conducted with three academic staff members. Interviews took place in private offices on campus at times convenient for participants. Each interview lasted between 30 and 45 minutes. With participants’ informed consent, interviews were audio-recorded. The interviewer also took brief field notes during and immediately after each session. Audio recordings were transcribed verbatim, and transcripts were checked against recordings for accuracy by a member of the research team.
3.5 Data analysis
3.5.1 Quantitative data analysis
Quantitative data from closed-ended questionnaire items were exported from Google Forms to Microsoft Excel. Descriptive statistics including frequencies and percentages were calculated for AI usage prevalence, mode of AI use, pass and fail outcomes by usage category, and prevalence of student-to-student copying. Results are presented in tables and figures in Section 4. Inferential statistical analysis was not conducted in this study for two primary reasons. First, the study’s primary research questions were exploratory and interpretive in nature, seeking to understand the meanings, perceptions, and behaviours associated with AI use rather than to test hypotheses or establish causal relationships (Braun & Clarke, 2006; Byrne, 2022). Second, the sample comprised a single cohort within a specific module, which was not intended to be statistically representative of the broader student population. Descriptive statistics are therefore presented solely to contextualise the qualitative findings and to illustrate patterns within the sample, not to generalise to the wider population. This approach aligns with the study’s qualitative case study design, which prioritises depth of understanding over statistical generalisability (Creswell, 2002; Creswell & Poth, 2016).
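As an illustrative sketch only (the study’s actual analysis was performed in Microsoft Excel, and the function and variable names below are hypothetical), the descriptive percentages reported in Table 1 and Section 4 reduce to simple frequency-over-base calculations on the aggregate counts:

```python
def pct(part: int, whole: int) -> float:
    """Percentage of `part` in `whole`, rounded to one decimal place
    to match the reporting convention used in Table 1."""
    return round(100 * part / whole, 1)

# Aggregate counts taken from Table 1 of this study
TOTAL_STUDENTS = 175
AI_USERS = 114          # students who reported using AI tools
FULLY_AI_RELIANT = 90   # relied entirely on AI-generated content
AI_ENHANCERS = 24       # used AI to enhance their own work
ADMITTED_COPYING = 89   # admitted copying from other students

print(pct(AI_USERS, TOTAL_STUDENTS))          # 65.1 (prevalence of AI use)
print(pct(FULLY_AI_RELIANT, AI_USERS))        # 78.9 (share of users fully reliant)
print(pct(AI_ENHANCERS, AI_USERS))            # 21.1 (share of users enhancing own work)
print(pct(ADMITTED_COPYING, TOTAL_STUDENTS))  # 50.9 (admitted student-to-student copying)
```

Note that the denominator changes by row group: usage and copying rates are computed over the full cohort (175), whereas mode-of-use percentages are computed over the 114 AI users only.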
3.5.2 Qualitative data analysis
Qualitative data from open-ended questionnaire responses and staff interview transcripts were analysed using thematic analysis following the six-phase framework of Braun and Clarke (2006). This approach was selected because it provides a systematic, transparent, and theoretically flexible method for identifying, analysing, and reporting patterns within qualitative data.
Phase 1 involved familiarisation with the data. Two researchers (AO and MO) read all open-ended responses and interview transcripts multiple times to become immersed in the data. Initial impressions and potential patterns were recorded in a research journal.
Phase 2 involved generating initial codes. Both researchers independently coded the entire qualitative dataset. Coding was conducted manually using comment functions in Microsoft Word. A data-driven inductive approach was used, allowing codes to emerge from the data rather than applying a pre-existing framework (Braun & Clarke, 2006). Codes were applied to meaningful units of text ranging from phrases to paragraphs. After independent coding, the researchers met to compare codes. Disagreements were discussed, resolved through consensus, and the coding framework was refined accordingly.
Phase 3 involved searching for themes. The refined codes were grouped into potential themes based on patterns, relationships, and shared meaning. Codes were sorted and collated using thematic maps to visualise connections. This process resulted in an initial set of candidate themes.
Phase 4 involved reviewing themes. Candidate themes were reviewed at two levels. First, each theme was checked against the coded extracts to ensure internal coherence. Second, themes were reviewed in relation to the entire dataset to ensure they captured the full range of relevant data (Braun & Clarke, 2006). Themes were merged, split, or discarded based on this review.
Phase 5 involved defining and naming themes. Each theme was clearly defined, and a concise name was assigned. Definitions specified the scope and content of each theme, ensuring distinct boundaries between themes. A thematic framework was developed showing relationships between themes.
Phase 6 involved producing the report. Themes were written up with illustrative quotes from participants. Findings are presented in Section 4, organised by research question.
3.6 Trustworthiness and rigour
To ensure the trustworthiness of qualitative findings, this study adhered to the criteria proposed by Guba and Lincoln (1994), which remain widely used in qualitative research (Korstjens & Moser, 2018; Nowell et al., 2017). This is shown in Table 2 below.
| Criterion | Definition | Techniques used |
|---|---|---|
| Credibility | Confidence in the truth of findings | Triangulation of data sources (students and staff); peer debriefing with two colleagues not involved in the study; member checking with two staff participants who reviewed summary findings |
| Transferability | Applicability to other contexts | Thick description of study context, participants, and setting to enable readers to assess transferability (Guba & Lincoln, 1994) |
| Dependability | Consistency and reliability over time | Maintenance of a comprehensive audit trail including raw data, coding files, meeting notes, analytical memos, and codebook versions |
| Confirmability | Findings shaped by participants, not researcher bias | Reflexive journaling by primary researcher; independent coding by two researchers; consensus-building process |
3.7 Ethical considerations
Ethical approval for this study was obtained from the University of Ilorin Ethics Committee (Approval Reference: UERC/ASN/2024/3044 dated 15 December 2024). All participants provided written informed consent prior to participation. Student participants were explicitly informed that participation was voluntary and that non-participation would not affect their academic standing in any way. To minimise response bias and encourage honest reporting, students were assured that data collection occurred after final grades had been submitted and that individual responses would not be shared with course instructors. Anonymity was guaranteed through the use of participant codes (e.g., S1, S2 for students; Staff 1, Staff 2 for staff). All data were stored on password-protected university servers accessible only to the research team. Audio recordings of interviews were transcribed and subsequently deleted. Participants were informed of their right to withdraw from the study at any time without consequence. In addition, data protection measures complied with the Nigerian Data Protection Regulation (NDPR) 2019. No personally identifiable information was collected, and all reported findings are presented in aggregate or anonymised form to prevent identification of individual participants.
3.8 Use of AI in the research process
In accordance with principles of research transparency and given the study’s focus on AI use in academic contexts, the authors disclose that no Artificial Intelligence tools were used for data collection, data analysis, or interpretation of findings. All coding, thematic analysis, and synthesis of results were conducted manually by the research team. This approach was deliberately chosen to maintain human judgment in interpreting participant experiences and to model the ethical use of AI that the study advocates. AI-assisted technologies were used solely for linguistic refinement during manuscript preparation. Specifically, grammar checking and readability improvements were applied to selected sections after the substantive content was written by the authors. No AI tools were used to generate intellectual content, develop arguments, or interpret data. All intellectual contributions, analytical decisions, and conclusions are the original work of the authors. The authors take full responsibility for the integrity and originality of this work.
4. Results and Findings
4.1 Results of student questionnaire
The findings are presented in two parts: descriptive quantitative results (Figures 1–5) and qualitative themes derived from thematic analysis. Students were asked whether they used artificial intelligence (AI) tools during the completion of the assignment and, if so, the extent to which such tools were employed. The results show that 65% of participants reported AI usage, while 35% stated that they did not use any AI tools during the assignment (Figure 1). The responses indicate a substantial level of AI adoption among the participants, suggesting that AI-assisted practices are becoming increasingly prevalent and reflecting the growing accessibility and integration of AI-based technologies in academic work. Among users, a majority indicated reliance on AI-generated content. Thematic analysis of open-ended responses revealed perceptions of AI as a productivity tool, normalisation of AI dependence, and ambiguity regarding academic integrity, along with three primary motivations for AI use: time pressure, task difficulty, and peer influence. Students described AI as a tool for managing multiple deadlines, navigating challenging concepts, and keeping pace with peers who were also using AI. This peer dynamic reflects the ‘collective action problem’ identified by Cotton, Cotton and Shipway (2024). These findings align with emerging literature highlighting AI’s dual role as both an assistive and a potentially disruptive academic technology, while patterns of overreliance raise concerns regarding authentic learning. The proportion of students who acknowledged using AI tools highlights the need for clearer institutional guidelines and pedagogical frameworks to ensure responsible and ethical use of AI in assessment contexts. Conversely, the significant minority of students who reported no AI usage may reflect factors such as limited awareness, restricted access, personal preference, or concerns regarding academic integrity.
Overall, the results underscore the importance of understanding students’ engagement with AI tools and their implications for teaching, learning, and assessment design in higher education, and suggest that institutional policies and assessment redesign are necessary to mitigate integrity risks.
Figure 1. Percentage of Students Who Used AI Tools (n = 175)
Figure 2 presents the distribution of students based on how AI tools were used during the completion of the assignment, distinguishing between sole dependence on AI-generated content and more integrative forms of use. The results indicate that a large majority (79%) of the students who acknowledged using AI relied entirely on AI-generated outputs, without producing any substantial preliminary draft or making meaningful modifications to the content provided by the AI tools. This suggests that, for these students, the primary motivation for using AI was not to support or enhance their learning process but to adopt a shortcut approach aimed at minimising effort and time investment in the assignment. In contrast, 21% of the respondents reported a more constructive and reflective use of AI, either by first developing their own ideas and written content before using AI tools to refine, improve, or structure their work, or by generating initial text with AI and subsequently revising it extensively to incorporate their own reasoning, interpretation, and academic voice. This subgroup reflects a more pedagogically aligned use of AI, in which the technology serves as a supportive tool for learning and intellectual development rather than a replacement for independent thinking. Overall, the findings illustrated in Figure 2 highlight important differences in students’ approaches to AI use and underscore the need for instructional guidance, assessment redesign, and institutional policies that encourage responsible, transparent, and learning-oriented engagement with AI tools. Students also expressed uncertainty about acceptable AI use, with many noting the absence of institutional guidance, while staff participants highlighted the inadequacy of existing detection tools, consistent with Weber-Wulff et al.’s (2023) finding that detection tools are ‘neither accurate nor reliable’.
Figure 2. Patterns of AI Engagement Among Student Users (n = 114)
Figure 3 illustrates the pass–fail distribution of students who reported complete reliance on AI tools for the assignment, based on the grades awarded before the students completed the questionnaires. The results reveal that a substantial majority of these students were successful, with 94% passing the module and only 6% failing. Notably, the assessment scripts were marked solely on the basis of the submitted work, without prior knowledge of whether AI tools had been used in their preparation. In line with institutional policy, all submissions were subjected to plagiarism detection using ‘Turnitin’, which was the officially approved tool for academic integrity checks at the time of marking. Despite this, the high pass rate among students who entirely depended on AI-generated content suggests that such tools can be used to circumvent traditional assessment mechanisms without being readily detected. This finding raises significant concerns regarding the effectiveness of existing plagiarism detection systems in identifying AI-generated work and highlights a potential vulnerability in current assessment practices. The results presented in Figure 3 therefore suggest that students may be able to pass a module with minimal or no independent intellectual input by relying exclusively on AI tools, underscoring the urgent need for revised assessment strategies, enhanced detection methods, and clearer guidelines to ensure academic integrity in an era of widespread AI adoption.
Figure 3. Academic Outcomes Among Students Who Relied Entirely on AI (n = 90)
Figure 4 presents a comparative analysis of student performance based on AI use; this analysis did not distinguish between students who used AI tools to support or enhance their own work and those who relied entirely on AI-generated content. The results indicate that an overwhelming majority (approximately 92%) of students who employed AI in one form or another passed the assignment, regardless of whether AI was used as a supplementary aid or as the primary means of completing the task. This high pass rate suggests that the use of AI tools is strongly associated with improved academic outcomes in the context of this assessment. However, while the findings imply that AI usage may significantly increase the likelihood of passing, they do not provide an absolute guarantee of success, as a small proportion of AI-using students still failed. These results highlight the influential role of AI in shaping student performance and raise important questions regarding the alignment of current assessment practices with learning objectives, the differentiation between genuine student understanding and AI-assisted output, and the need for assessment designs that more effectively evaluate individual cognitive engagement and original contribution.
Figure 4. Academic Outcomes Among All AI Users (n = 114)
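The sample sizes reported in Figures 1–4 can be reconstructed from the stated percentages. The following is a minimal consistency-check sketch; rounding to whole students is our assumption, since the study reports only the percentages and group sizes:

```python
# Consistency check: reconstruct the group sizes reported in Figures 1-4
# from the percentages stated in the text. Rounding to whole students is
# an assumption; the paper reports only percentages and n values.

TOTAL_RESPONDENTS = 175  # Figure 1: all questionnaire respondents

ai_users = round(TOTAL_RESPONDENTS * 0.65)  # 65% reported AI use (Figure 1)
full_reliance = round(ai_users * 0.79)      # 79% of users relied entirely on AI (Figure 2)

print(ai_users)       # 114, matching Figure 2's n = 114
print(full_reliance)  # 90, matching Figure 3's n = 90
```

The reconstructed counts (114 AI users out of 175 respondents; 90 of those relying entirely on AI) agree with the n values given in the figure captions, indicating that the reported percentages and subsamples are internally consistent.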
One of the additional questions included in the questionnaire examined the issue of student-to-student copying, a practice that has historically been observed in large classes in Nigerian academic environments. In such contexts, students may copy directly from one another, and due to high enrolment numbers, it can be challenging for teachers to detect this behaviour, except in cases where scripts are submitted in sequence or where strong similarities in responses raise suspicion and prompt closer scrutiny. Figure 5 illustrates the pass–fail outcomes of students who admitted to copying from another student, based on their responses to the questionnaire. The results indicate that 43% of students who acknowledged copying from their peers passed the assessment, while a higher proportion, 57%, failed. This distribution suggests that, unlike AI-assisted submissions, traditional student-to-student copying does not consistently lead to successful academic outcomes and may be more readily identifiable during marking or moderation processes. The findings further imply that conventional copying is a comparatively less effective strategy for avoiding detection or achieving academic success, reinforcing the need to distinguish between different forms of academic misconduct when evaluating assessment integrity and designing appropriate mitigation strategies. Furthermore, these findings also suggest that, over time, educators in Nigeria have developed considerable experience and tacit expertise in detecting traditional forms of student-to-student copying, as this has been a long-standing challenge to academic integrity within the higher education system. Through years of exposure, instructors have become familiar with recurring patterns such as unusually similar phrasing, identical errors, parallel argument structures, and sequential submission similarities, which often trigger closer examination during marking. 
This accumulated institutional and professional knowledge likely contributes to the relatively higher failure rate observed among students who admitted to copying from their peers. In contrast to AI-generated content, which may appear more original, coherent, and less repetitive, conventional copying is increasingly easier for experienced educators to identify. Consequently, this contrast highlights a critical shift in the nature of academic misconduct, where emerging AI-based practices pose new detection challenges that differ substantially from traditional copying methods. It further underscores the need for capacity building, updated assessment strategies, and policy frameworks that reflect both local teaching realities in Nigeria and the evolving global landscape of AI-enabled academic practices.
Figure 5. Academic Outcomes Among Students Who Admitted Copying (n = 89)
Furthermore, staff interviews revealed that the university lacks AI-specific academic integrity policies. Detection relies on ‘Turnitin’, which participants acknowledged is ineffective against AI-generated content – a limitation empirically documented by Perkins et al. (2024). In addition to the quantitative patterns reported above, thematic analysis of open-ended responses and staff interviews revealed five main themes regarding AI use and academic integrity.
The thematic analysis yielded five overarching themes that collectively address the study’s research questions. These themes are presented below, each accompanied by illustrative quotes and interpretive analysis that explicitly connects the findings to the specific research questions they illuminate.
4.1.1 Theme 1: Widespread AI adoption
This theme directly addresses Research Question 1, revealing the high prevalence and normalised nature of AI adoption among students. Students described AI tools as deeply integrated into their daily academic routines, with many reporting regular use across multiple assignments. This perception was shared by staff, who observed significant changes in the quality and nature of student work. One student noted, ‘I used ChatGPT for this assignment. Actually, I use it for almost all my assignments now. It’s faster and the English is better than what I can write myself’ (Student 47, Male). Another student similarly remarked, ‘My friends introduced me to ChatGPT last semester. Now we all use it. It’s normal’ (Student 112, Female). A staff member corroborated this observation, stating, ‘I would estimate that at least half of my students are using AI in some form’ (Staff 2). These accounts collectively illustrate that AI adoption has moved beyond novelty to become a routine aspect of academic life for a significant proportion of students.
4.1.2 Theme 2: Patterns of AI engagement
This theme extends the findings of Research Question 1 by distinguishing how AI is used, revealing two qualitatively different patterns of engagement. The first pattern, AI as substitute, involves complete reliance on AI-generated content with minimal student input. A student explained, ‘I didn’t write anything myself. I just copied the questions into ChatGPT, took the answers, edited a little bit so it doesn’t look copied, and submitted’ (Student 18, Male). A staff member expressed concern about this pattern, stating, ‘The problem is not that they use AI. The problem is that they use it to bypass thinking altogether’ (Staff 1). The second pattern, AI as supplement, reflects a more integrative approach in which students use AI to enhance their own work. One student described this approach: ‘I wrote my draft first, then I used ChatGPT to help me organise the paragraphs and improve the flow. The ideas were mine’ (Student 31, Female). A staff member affirmed the legitimacy of this pattern, noting, ‘There is a legitimate way to use these tools. If a student uses AI to clarify concepts or improve their writing after they’ve done the work, that’s actually good pedagogy’ (Staff 3). These two distinct patterns highlight the complexity of AI engagement and the need for nuanced institutional responses.
4.1.3 Theme 3: Motivations for AI use
This theme answers Research Question 2 by identifying the key motivations driving student AI use. Students identified three primary drivers: time pressure, task difficulty, and peer influence. Regarding time pressure, one student explained, ‘I had three assignments due the same day. No way I could finish them all. AI saved me’ (Student 54, Male). Another student cited task difficulty as a motivating factor: ‘The topic was too hard. I didn’t understand it. So I let AI explain it and write the answer’ (Student 8, Male). A staff member observed the peer influence dynamic, describing it as ‘a collective action problem here. Students see their peers using AI and getting good grades with less effort’ (Staff 2). These motivations suggest that AI use is not merely a matter of convenience but is often driven by structural factors such as heavy workloads, challenging course content, and perceived competitive pressures from peers.
4.1.4 Theme 4: Academic integrity challenges
This theme addresses Research Questions 3 and 4, highlighting ethical concerns and the inadequacy of current institutional responses. Participants consistently identified gaps in detection mechanisms and expressed confusion about acceptable use. One student noted, ‘Turnitin didn’t flag anything. It was 0% plagiarism. So technically, I didn’t cheat according to the university’ (Student 23, Male), revealing how existing integrity frameworks fail to address AI-generated content. A staff member acknowledged the detection challenge, stating, ‘We are completely blind. Turnitin catches copied text, but it doesn’t catch AI-generated text’ (Staff 3). Another student highlighted the absence of clear institutional guidance: ‘The university has rules about plagiarism but nothing about AI. So I assume it’s allowed until they say otherwise’ (Student 136, Female). Together, these findings underscore that current institutional policies and detection mechanisms are ill-equipped to address the ethical challenges posed by generative AI, leaving both students and staff navigating an ambiguous landscape.
4.1.5 Theme 5: Call for institutional action
This theme responds to Research Question 5 by presenting participant-suggested strategies for ethical AI integration. Both students and staff expressed a clear need for clearer policies, training, and assessment reform. A student called for institutional clarity, stating, ‘We need clear rules about what is allowed. No one has told us’ (Student 45, Male). A staff member advocated for pedagogical change, suggesting, ‘We should redesign assessments so that students have to show their thinking process, not just submit a final product’ (Staff 1). These recommendations align with broader calls in the literature for assessment redesign, AI literacy programmes, and the development of comprehensive institutional policies that balance the opportunities and risks associated with AI adoption. The convergence of student and staff perspectives on these strategies provides a strong foundation for institutional action.
The findings from the University of Ilorin case study paint a picture of an academic environment at a critical juncture. On the one hand, the widespread adoption of AI tools among students validates their growing utility as powerful aids for learning, writing support, and academic productivity. On the other hand, the patterns of use observed in this study reveal a concerning tendency toward overreliance, where AI is frequently employed as a substitute for independent intellectual effort rather than as a complementary learning resource. The high pass rates among students who depended entirely on AI-generated content, despite the absence of detectable plagiarism, highlight significant vulnerabilities in existing assessment and quality assurance mechanisms. These findings suggest that traditional plagiarism detection tools and assessment designs may no longer be sufficient to safeguard academic integrity in the era of generative AI. At the same time, the comparatively lower success rates associated with conventional student-to-student copying indicate that educators, particularly within the Nigerian higher education context, have developed effective heuristics for identifying long-standing forms of academic misconduct. Collectively, these results underscore the urgent need for a rethinking of pedagogical strategies, assessment frameworks, and institutional policies that both harness the educational potential of AI and mitigate its misuse. Without deliberate intervention, there is a risk that AI adoption may erode core educational objectives such as critical thinking, originality, and deep learning, thereby redefining academic success in ways that are misaligned with the fundamental goals of higher education. Both students and staff proposed strategies, including clear institutional policies on AI use, training programs for ethical AI engagement, and redesigned assessments that evaluate process rather than just the final product.
These recommendations align with those proposed in the literature on AI in higher education.
4.2 Thematic findings in relation to research questions
The five themes identified through thematic analysis are mapped to the study’s research questions as shown in Table 3.
| Research Question | Corresponding Theme(s) |
|---|---|
| RQ1: Prevalence and nature of AI usage | Theme 1: Widespread AI Adoption; Theme 2: Patterns of AI Engagement |
| RQ2: Student perceptions and motivations | Theme 3: Motivations for AI Use |
| RQ3: Ethical concerns and integrity risks | Theme 4: Academic Integrity Challenges |
| RQ4: Institutional policies and assessment practices | Theme 4: Academic Integrity Challenges |
| RQ5: Strategies for ethical integration | Theme 5: Call for Institutional Action |
5. Limitations
Thematic analysis was conducted systematically with independent coding and consensus-building. However, the qualitative findings are based on self-reported data from a single institution and may not be transferable to other contexts. The interpretation of themes reflects the researchers’ analytical lens, though peer debriefing and an audit trail were used to enhance trustworthiness.
6. Conclusion
This study has examined the role of artificial intelligence in shaping student learning outcomes and academic practices, revealing its fundamentally dual-edged nature. On one hand, AI demonstrates considerable potential to enhance higher education through personalised learning support, data-driven insights, and the automation of administrative and instructional tasks, all of which can contribute to improved efficiency and learning outcomes. On the other hand, the findings from the University of Ilorin case study indicate a growing tendency among students to rely excessively on AI tools, in some cases substituting independent intellectual effort with fully AI-generated content. The high success rates recorded among students who depended entirely on AI, despite the use of approved plagiarism detection systems, raise critical concerns about the adequacy of existing assessment and quality assurance mechanisms. These results suggest that traditional approaches to academic integrity, which were developed to address conventional forms of misconduct, are increasingly ill-equipped to address the challenges posed by generative AI technologies. At the same time, the study highlights that long-standing forms of academic dishonesty, such as student-to-student copying, are more readily detected by experienced educators, particularly within the Nigerian higher education context. Balancing the opportunities and risks associated with AI therefore represents one of the defining challenges for higher education institutions in the twenty-first century. If left unregulated or poorly integrated, AI risks undermining core educational values such as critical thinking, originality, and deep learning. However, with deliberate institutional action grounded in ethical principles, pedagogical innovation, and academic integrity, AI can be redirected toward its intended purpose: enhancing human learning and empowering students rather than replacing their intellectual engagement.
AI adoption among students presents both opportunities and challenges. Ethical integration requires governance frameworks, awareness programs, and adaptive assessment strategies.
7. Recommendations
Based on the findings of this study, several key recommendations are proposed for higher education institutions, particularly within the Nigerian context.
First, institutions should develop and clearly communicate comprehensive AI policies that explicitly define acceptable and unacceptable uses of AI in academic work. These policies should be integrated into academic integrity handbooks, course syllabi, and student orientation programmes to ensure shared understanding among both students and staff.
Second, universities should invest not only in AI detection tools but also in AI literacy and ethical-use platforms. While detection software remains imperfect, it should form part of a broader integrity framework. More importantly, institutions should provide access to approved, transparent AI tools that support legitimate learning activities, such as writing enhancement, research assistance, and citation support, thereby guiding students toward responsible use.
Third, mandatory training and awareness programmes should be introduced for both students and academic staff. For students, such training should emphasise digital citizenship, ethical AI use, and proper disclosure or citation of AI-assisted content. For staff, professional development should focus on recognising AI misuse, redesigning assessments, and integrating AI constructively into teaching and learning practices.
Fourth, there is a pressing need to rethink assessment design in order to develop more ‘AI-resilient’ evaluation methods. This may include increased use of invigilated assessments for core competencies, process-oriented coursework that requires drafts and reflective components, oral examinations and presentations to verify individual understanding, and authentic, context-specific projects that are difficult to outsource entirely to AI systems.
Finally, institutions should actively promote a culture of academic integrity that extends beyond punitive measures. This involves fostering ethical awareness, encouraging open dialogue about the responsible use of AI, and engaging students as partners in upholding academic standards. In addition, universities should support ongoing research and stakeholder engagement by establishing interdisciplinary committees comprising faculty members, students, IT specialists, and ethicists to continuously review AI policies and adapt institutional responses to evolving technological developments.
8. Future Work
While this study provides important insights into the use of artificial intelligence in student assessments and learning outcomes, it also highlights several avenues for further research. Future studies should extend this investigation beyond a single institution to include multiple universities across different regions of Nigeria and other developing countries, enabling broader generalisation of findings and comparative analysis across institutional types and disciplinary contexts. Further research should also explore longitudinal impacts of AI usage on student learning, skill development, and academic progression. Understanding how sustained reliance on AI tools influences critical thinking, problem-solving abilities, and subject mastery over time would provide deeper insight into the long-term educational implications of generative AI adoption. In addition, future work could examine disciplinary differences in AI usage patterns, as the role and impact of AI may vary significantly between STEM, social sciences, and humanities-based courses. Another important direction for future research involves the evaluation of alternative assessment models that are explicitly designed to be AI-resilient. Experimental studies comparing traditional assessments with redesigned formats such as oral examinations, reflective portfolios, and project-based evaluations would offer empirical evidence on their effectiveness in preserving academic integrity while supporting meaningful learning. Moreover, future work should investigate the effectiveness of emerging AI-detection tools and institutional interventions, including AI literacy programmes, policy frameworks, and ethical-use training, in mitigating misuse without discouraging the beneficial applications of AI. Integrating qualitative approaches, such as interviews and focus group discussions with students and educators, would also enrich understanding of perceptions, motivations, and ethical reasoning surrounding AI use. 
Finally, there is a need for interdisciplinary research that brings together educators, computer scientists, ethicists, and policymakers to develop context-sensitive governance frameworks for AI in education. Such collaborative efforts are essential to ensuring that AI technologies are aligned with educational goals, cultural realities, and regulatory structures, particularly within the Nigerian higher education system.
References
A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. (2024). International Journal of Educational Technology in Higher Education, 21(1). https://doi.org/10.1186/s41239-023-00436-z
A worked example of Braun and Clarke’s approach to reflexive thematic analysis. (2022). Quality & Quantity, 56(3), 1391-1412. https://doi.org/10.1007/s11135-021-01182-y
Academic integrity: A review of the literature. (2014). Studies in Higher Education, 39(2), 339-358. https://doi.org/10.1080/03075079.2012.709495
Adaptive AI systems in education: real-time personalised learning pathways for skill development. (2025). J. Artif. Intell. Mach. Learn. Data Sci, 3, 2489-2494. https://doi.org/10.51219/JAIMLD/Akinyemi-Sadeeq-Akintola/534
Artificial intelligence (AI) in higher education: Growing academic integrity and ethical concerns. (2023). Nepalese Journal of Development and Rural Studies, 20(01), 1-7. https://doi.org/10.3126/njdrs.v20i01.64134
Artificial intelligence as a double-edged sword: Wielding the POWER principles to maximize its positive effects and minimize its negative effects. (2024). Contemporary Issues in Early Childhood, 25(1), 146-153. https://doi.org/10.1177/14639491231169813
Artificial intelligence in education: A systematic literature review. (2024). Data and Metadata, 3. https://doi.org/10.56294/dm2024288
Artificial intelligence in education: A systematic literature review. (2024). Expert Systems With Applications, 252. https://doi.org/10.1016/j.eswa.2024.124167
Jaldi, A. (2023). Artificial intelligence revolution in Africa: Economic opportunities and legal challenges. Policy Center for the New South, 7.
Russell, S. J., & Norvig, P. (1995). Artificial intelligence: A modern approach. Prentice Hall.
Assessment strategies for online learning: Engagement and authenticity. (2018). Athabasca University Press.
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148
Cohort studies: design, analysis, and reporting. (2020). Chest, 158(1), S72-S78. https://doi.org/10.1016/j.chest.2020.03.014
Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2024). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. Journal of Academic Ethics, 22(1), 89-113. https://doi.org/10.1007/s10805-023-09492-6
Developing policies to address historic contract cheating and misuse of Generative Artificial Intelligence. (2025). Journal of Academic Writing, 15(S1), 1-13. https://doi.org/10.18552/joaw.v15iS1.1057
Handbook of qualitative research. (1994). Sage Publications.
Higher education computer science: A manual of practical approaches. (2023). Springer. https://doi.org/10.1007/978-3-031-29386-3_6
Intelligent tutoring systems by and for the developing world: A review of trends and approaches for educational technology in a global context. (2015). International Journal of Artificial Intelligence in Education, 25(2), 177-203. https://doi.org/10.1007/s40593-014-0028-6
Learning analytics and educational data mining: towards communication and collaboration. (2012). 252-254.
Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. (2015). Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 533-544. https://doi.org/10.1007/s10488-013-0528-y
Qualitative inquiry and research design: Choosing among five approaches. (2016). Sage publications.
Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing. (2018). European Journal of General Practice, 24(1), 120-124. https://doi.org/10.1080/13814788.2017.1375092
Should robots replace teachers?: AI and the future of education. (2019). Polity Press.
Systematic review of research on artificial intelligence applications in higher education–where are the educators?. (2019). International Journal of Educational Technology in Higher Education, 16(1). https://doi.org/10.1186/s41239-019-0171-0
Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 1-39. https://doi.org/10.1007/s40979-023-00146-z
Thematic analysis: Striving to meet the trustworthiness criteria. (2017). International Journal of Qualitative Methods, 16(1). https://doi.org/10.1177/1609406917733847
Trends and development in technology-enhanced adaptive/personalized learning: A systematic review of journal publications from 2007 to 2017. (2019). Computers & Education, 140. https://doi.org/10.1016/j.compedu.2019.103599
Using thematic analysis in psychology. (2006). Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
Copyright (c) 2026 The Author(s)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All articles published in Artificial Intelligence Advances in Education are open access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
This license permits non-commercial use, sharing, distribution, and reproduction in any medium or format, provided that proper credit is given to the original author(s) and the source, a link to the license is provided, and any changes to the material are clearly indicated.
Adaptations or derivatives of the material are not permitted under this license.
Images or other third-party material included in an article are covered by the article’s Creative Commons license unless otherwise indicated in a credit line. If any material is not included in the license and your intended use exceeds permitted statutory regulation, you must obtain permission directly from the copyright holder.