Assessing teachers’ AI Literacy for lifelong learning
A systematic review and framework alignment
Keywords:
Teacher AI literacy, AI literacy assessment, Teacher education, Teacher Professional Development (TPD), Lifelong learning, Workforce upskilling, Systematic review
Abstract
As AI tools spread across education and professional environments, AI literacy has become vital for teachers’ initial training, ongoing professional development, and continuous skills enhancement within the teaching workforce. This systematic review documents teacher-focused AI literacy and competency assessment methods and aligns them with UNESCO’s AI Competency Framework for Teachers (AI CFT) and the OECD-EC AILit Framework, making explicit the coverage and gaps in constructs for use in teacher education and lifelong learning systems. Searches in Web of Science™ following PRISMA 2020 resulted in 13 relevant studies and 16 distinct assessment methods (including self-report scales, objective tests, and qualitative approaches), with the landscape dominated by self-assessments. Alignment analysis indicates frequent coverage of AI fundamentals and pedagogy, but significant gaps in human-centred mindset and AI for professional development within the AI CFT. The AILit Framework’s knowledge and skills are commonly evaluated, whereas attitudes are often reduced to or conflated with ethics. Notably, important theoretical aspects (such as attitudes and professional development) are sometimes omitted during factor-analytic validation, revealing a tension between the frameworks’ scope and empirical fit that limits the diagnostic and evaluative usefulness of assessments for teacher development pathways. Common methodological weaknesses include limited cross-sample revalidation, little evidence of test–retest reliability and measurement invariance, inconsistent sample-size justifications, incomplete demographic reporting, and narrow language and geographic coverage, further complicated by terminological drift.
The review offers a dual mapping to the AI CFT and AILit Framework, identifies key psychometric challenges, and proposes an agenda for framework-aligned, performance-based, and potentially AI-enhanced assessments to support teacher education, professional learning, and workforce development decision-making. Future efforts should retain human-centred mindset and professional development elements, combine self-report with performance and scenario tasks, develop cross-cultural adaptations and invariance, and explore AI and big-data–driven individualized assessments to improve comparability, relevance for credentialing, and policy utility across lifelong learning ecosystems.
1. Introduction
Artificial Intelligence is poised to fundamentally alter the way we live, work, and connect with one another. Alongside emerging technologies such as 3D printing, robotics, biotechnology, and quantum computing, it leads the charge in the so-called Fourth Industrial Revolution, impacting nearly every industry in every country (Schwab, 2016). Companies and governments are therefore preparing by securing not only access to energy, rare-earth elements, data, and the necessary processing power, but also by equipping today’s workforce with the knowledge, skills, and attitudes needed for ethical development and interaction with AI in various settings. The Organisation for Economic Co-operation and Development (OECD) plans to introduce a new Programme for International Student Assessment (PISA) 2029 domain, Media and AI Literacy (MAIL), to evaluate whether 15-year-olds can responsibly and effectively navigate AI-mediated content and contexts, highlighting a global policy priority for AI-ready education (OECD, n.d.). AI tools are swiftly becoming part of educational and professional workflows, creating immediate demands for responsible, effective, and equitable use across the teacher-development continuum—from initial teacher training to ongoing professional learning and workforce upskilling (Brandão et al., 2024). Consequently, teachers’ AI literacy is increasingly regarded as essential for safe implementation in classrooms and institutional contexts, yet robust methods to assess teacher AI literacy lag behind the surge of policy signals and competency frameworks (Lintner, 2024).
Two frameworks are particularly relevant for defining what teachers’ AI literacy and competency should encompass and for benchmarking the coverage of constructs in assessment tools: the AI Competency Framework for Teachers (AI CFT) (UNESCO, 2024) and the AI Literacy Framework for Primary and Secondary Education (AILit Framework) (OECD, 2025). Together, these span areas such as human-centred mindset, ethics, foundations and applications, pedagogy, professional development, and knowledge–skills–attitudes. However, the assessment of teachers’ AI literacy remains the missing link in lifelong learning. For teacher professional development (TPD), validated instruments are crucial to diagnose needs, personalize support, and evaluate impact (Younis, 2025). Yet, many available tools are bespoke, reliant on self-reporting, and not cross-validated across different teacher populations, languages, or contexts, which limits comparability across programmes and systems. Encouragingly, recent work indicates the potential of AI- and data-enhanced approaches to make assessments more precise and personalized (Benzer et al., 2025; Ning et al., 2025), an opportunity that remains largely untapped in teacher-focused AI literacy measurement.
This study addresses these gaps through a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-guided systematic review of teacher AI literacy and competency assessment methods, along with a dual mapping of instrument constructs to UNESCO’s AI CFT and the OECD/EC AILit Framework to make coverage and omissions clear. The review explores three questions:
RQ1: Which methods are employed to evaluate AI competence in K-12 teachers in current research?
RQ2: How do the factors or dimensions of these methods align with the five aspects of UNESCO’s AI CFT?
RQ3: How do the methods correspond with the AILit Framework’s knowledge–skills–attitudes triad?
The study makes three key contributions. Conceptually, it clarifies the relationship between teachers’ competencies (AI CFT) and learners’ AI literacy (AILit) in teachers’ initial education and lifelong learning. Empirically, it maps and analyses teacher assessment instruments, highlighting where alignment is strong and where it is weak. This methodology reveals validation challenges and outlines pathways for developing framework-aligned, performance-informed, and potentially AI-assisted assessments tailored to teachers’ initial education and TPD cycles.
2. Theoretical background
2.1. AI literacy for TPD and lifelong learning
Due to technological progress, employees are required to update their knowledge, skills, and attitudes through continuous learning. According to the Future of Jobs Report 2025, AI and big data are the key emerging skills. Besides technology-related abilities, creative thinking, resilience, adaptability, agility, curiosity, and lifelong learning are expected to become more important from 2025 to 2030 (World Economic Forum, 2025). A crucial aspect of addressing the challenges posed by technological progress through lifelong learning and ongoing professional development is the development of AI literacy. The term “AI literacy” refers to a set of skills that enable people to interact with AI critically and effectively in various contexts (Long & Magerko, 2020). As AI tools increasingly influence teaching, assessment, and professional workflows, AI literacy is becoming essential across the educator learning continuum—from initial teacher training to continuing professional development and ongoing skills enhancement. In higher education, this evolution raises practical questions about how teacher-education programs and professional-learning providers define, develop, and assess educators’ ability to use AI responsibly and effectively in educational settings. A key challenge is that, although AI literacy is widely promoted through frameworks and policy signals, reliable and comparable methods for evaluating educators’ AI literacy or competency remain scarce and inconsistent, which hampers needs analysis and impact assessment in teacher development pathways.
2.2. AILit Framework
In a concerted effort, the European Commission and the OECD drafted the AI Literacy Framework for Primary and Secondary Education (AILit Framework) for teachers, education leaders, education policymakers, and learning designers to empower learners for the age of AI (OECD, 2025). It aims to provide an answer to what AI literacy is by building on existing definitions from the EU AI Act, UNESCO, and other organizations.
AI literacy represents the technical knowledge, durable skills, and future-ready attitudes required to thrive in a world influenced by AI. It enables learners to engage, create with, manage, and design AI, while critically evaluating its benefits, risks, and ethical implications. (OECD, 2025, p. 6)
Although designed for primary and secondary contexts, the knowledge–skills–attitudes structure provides a transferable benchmark for educator learning outcomes in teacher education and lifelong learning. Based on this definition and literature review, the framework includes the necessary knowledge, skills, and attitudes, along with competencies that can be practically applied in primary and secondary education (see Table 1).
| Knowledge | The nature of AI; AI reflects human choices and perspectives; AI reshapes work and human roles; AI’s capabilities and limitations; The role of AI in society |
| Skills | Critical thinking: evaluating content created with AI; Creativity: collaboration with AI to generate and refine ideas; Computational thinking: problem analysis and provision of instructions; Self and social awareness: recognising the influence of AI; Collaboration: effective cooperation with AI and humans; Communication: explanation of how to use AI; Problem solving: determining when and how to use AI |
| Attitudes | Responsibility; Curiosity; Innovation; Adaptability; Empathy |
2.3. AI Competency Framework for Teachers (AI CFT)
While “AI literacy” focuses on understanding, “AI competency” emphasizes how individuals utilize AI for positive results (Chiu et al., 2024). In the AI CFT, it is represented by five aspects—human-centered mindset, AI ethics, AI foundations and applications, AI pedagogy, and AI for professional development (see Table 2)—each spanning Acquire, Deepen, and Create levels to guide progression from basic understanding to critical customization and leadership.
- Human-centered mindset: prioritizes human agency, responsibility, and social good, ensuring that AI advances human well-being and promotes inclusive educational opportunities.
- Ethics of AI: upholds values such as fairness, transparency, inclusivity, equal treatment, and respect for all languages and cultures. Teachers should be able to recognize ethical challenges and advocate for safe, responsible AI use.
- AI foundations and applications: provides teachers with essential understanding of AI fundamentals, tools, and classroom applications, enabling them to comprehend how AI functions and to utilize it for productive purposes.
- AI pedagogy: focuses on integrating AI effectively within teaching methods, empowering teachers to enhance student learning through tailored instruction, adaptive feedback, and personalized teaching strategies.
- AI for professional development: encourages educators to continually improve their skills in line with evolving AI technologies, supporting their lifelong learning and career progression through proficient use of AI resources.
| Aspects | Progression levels | | |
| | Acquire | Deepen | Create |
| Human-centered mindset | Human agency | Human accountability | Social responsibility |
| Teacher competencies | Critical understanding that AI development is driven by human decisions - Emphasis on human control when evaluating and using AI tools. | Deeper understanding of human responsibility in the development and use of AI. | Critical understanding of the impact of AI on society – Participation and contribution to the creation of inclusive societies. |
| Ethics of AI | Ethical principles | Safe and responsible use | Co-creating ethical rules |
| Teacher competencies | Basic understanding of ethical principles for responsible human-AI interaction. | Learning and applying basic ethical principles in the evaluation and use of AI tools in educational contexts. | Collaborative development of ethical guidelines for integrating AI into the educational process. |
| AI foundation and application | Basic AI techniques and applications | Application skills | Creating with AI |
| Teacher competencies | Basic conceptual knowledge of AI, combined with the ability to evaluate and use appropriate AI tools for educational purposes. | Deeper expertise in AI and data and algorithm skills aligned with ethical principles. | Adapting AI tools by leveraging advanced technical understanding and practical experience to develop an inclusive learning environment. |
| AI pedagogy | AI-assisted teaching | AI-pedagogy integration | AI-enhanced pedagogical transformation |
| Teacher competencies | Identifying and implementing the educational advantages of AI to improve the design of teaching and course assessment in specific cognitive subjects while mitigating risks. | Integrating AI into student-centered teaching design, enabling personalized learning and enhancing teacher-student collaboration, fostering empathy, critical thinking, and problem-solving skills in students. | Critical evaluation of the impact of AI on education – Designing learning experiences incorporating AI to enhance specific or interdisciplinary skills and critical analysis and using data-driven insights to promote student-centered teaching innovation. |
| AI for professional development | AI enabling lifelong professional learning | AI to enhance organizational learning | AI to support professional transformation |
| Teacher competencies | Application of AI tools for professional development and reflection practices, assessment of educational needs, and personalization of learning strategies. | Use of AI tools to participate in professional learning communities, exchange resources, collaborate with colleagues, and promote innovation. | Adapting AI tools for professional advancement and improving strategies for the effective integration of AI to address individual and collective professional development needs. |
Previous analysis of teacher AI training curricula provided by university continuing-education centers in Greece indicates that professional development often focuses on practical classroom integration (AI foundations/applications and AI pedagogy), while paying less explicit attention to a human-centered mindset and the use of AI for teachers’ ongoing professional development (Tsioukas & Kostas, 2025). This pattern drives the present study’s focus on framework-aligned assessment coverage, because gaps in what is taught and what is measured can limit the design and evaluation of teacher development pathways across lifelong learning systems.
2.4. Assessing teachers’ AI literacy/AI competency for TPD and lifelong learning
Research indicates that TPD programmes can effectively develop AI literacy skills among teachers (Younis, 2024). Assessment acts as the fundamental mechanism for diagnosing prior knowledge and needs, targeting interventions, monitoring progress, and evaluating impacts and outcomes from initial teacher education through TPD programmes to lifelong learning. Although many articles address the question of assessing students’ AI literacy (e.g., Zhou et al., 2025), there is a gap in rigorously validated, framework-aligned instruments specifically designed to assess K-12 teachers’ AI literacy, especially performance-based measures (Lintner, 2024). This presents significant challenges for teacher educators and professional development providers who need valid, reliable, and framework-congruent measures to diagnose readiness, personalize learning pathways, and evaluate programme impact across diverse contexts. Closing these gaps requires systematic mapping of existing instruments against the UNESCO and OECD frameworks to identify strengths and gaps in coverage, along with methodological advances that incorporate self-report, performance-based, and AI-assisted approaches to support evidence-based TPD and lifelong learning in the age of AI.
3. Methodology
The study offers a systematic review mapping methods for assessing the AI competence of K-12 teachers in current TPD literature and examining their alignment with the AI CFT and the AILit Framework (see Section 2), specifically pinpointing areas that need further research. Initially, relevant publications were retrieved from the Clarivate™ Web of Science™ (WoS) database. WoS was selected as an appropriate source of peer-reviewed articles as it remains one of the two major and most comprehensive sources, alongside Scopus (Pranckutė, 2021). While previous systematic reviews in the field of AI literacy have predominantly utilized the Scopus database (e.g., Lintner, 2024), this study intentionally focuses on the WoS Core Collection to identify high-impact, peer-reviewed instruments that may have been overlooked in prior mappings. To ensure no relevant research was missed due to database selection, the primary search was supplemented by a rigorous citation “snowballing” (backward and forward) process. This combined approach enabled a more comprehensive longitudinal view of the field, capturing emerging, validated instruments across interdisciplinary contexts. To ensure maximum sensitivity and avoid missing emerging instruments in this rapidly evolving field, a broad search query was intentionally employed (see Table 3). This strategy explains the high initial yield (n = 2,848), which was subsequently refined through rigorous inclusion criteria (see Table 4).
| Source | Search Strings | Outcome |
| WoS | All fields: (AI OR “artificial intelligence”) AND (literacy OR skill* OR knowledge OR competenc*) AND (scale* OR test* OR exam* OR questionnaire OR survey* OR tool* OR instrument*) AND (teacher* OR educator*) | 2,848 |
| Category | Inclusion criteria | Exclusion criteria |
| Publication year | Up to 2025 | – |
| Research objective | Teachers’ AI literacy/competence assessment | Articles examining AI literacy in populations other than teachers; articles primarily addressing digital literacy or other kinds of literacy; studies focusing predominantly on teachers’ self-efficacy, self-regulated learning, attitudes, preparedness, beliefs, perceptions, or readiness regarding AI, their intentions to use AI, or the impact or anxiety AI has on them; articles solely examining the effect of AI literacy on job performance |
| Educational content | K-12, pre- and in-service teachers, science teachers, second language teachers etc. | Primarily focusing on students |
| Methodological design | Empirical research, mainly TPD interventions and validated assessment methods | – |
| Language | English | – |
| Accessibility | Open/institutional access | – |
The literature search was conducted on 1 July 2025 and includes all available literature up to mid-2025, with no limits on publication date, type, or stage. The data collection and screening process adhered to the guidelines of the updated PRISMA 2020 Statement (Page et al., 2021), as shown in the flow diagram in Figure 1. The search focused on English papers. Initially, articles were identified and screened using the WoS database based on the inclusion and exclusion criteria (see Table 4). These criteria involved reviewing titles and abstracts to determine the presence of assessment methods related to the level of AI literacy among K-12 teachers, including, but not limited to, science teachers, second language teachers, and others. The initial screening of titles and abstracts was conducted by the first author. To ensure the reliability of the selection process and minimize selection bias, the second author independently reviewed a random sample of 20% of the excluded records. No significant discrepancies were identified during this verification phase. Subsequently, the collected articles were manually evaluated by reviewing the remaining sections of the studies, thereby excluding publications not closely matching the study’s objectives. For instance, articles primarily addressing digital literacy or other kinds of literacy without a specific focus on AI were excluded. Additionally, articles that examined AI literacy in populations other than teachers were also excluded. Furthermore, studies primarily focusing on teachers’ self-efficacy, self-regulated learning, attitudes, preparedness, beliefs, perceptions, readiness regarding AI, or their intentions to use AI, as well as the impact or anxiety AI induces, were eliminated. However, we included one study on self-efficacy because, after full-text analysis, it was concluded that it mainly addresses teachers’ AI competencies through self-assessment. 
Lintner (2024) also notes in his research that the distinction between these aspects often appears unclear. Finally, articles solely investigating the effect of AI literacy on job performance were also omitted. In addition to database searches, relevant literature was gathered from reference lists of included studies to ensure comprehensive coverage of the research field. After applying the inclusion and exclusion criteria, a total of 13 articles remained to provide insights into the methods used to assess teachers’ AI literacy.
Figure 1. PRISMA 2020 flow diagram. Note. Created with https://estech.shinyapps.io/prisma_flowdiagram/
For RQ1, we extracted the name(s) of the author(s), the publication date, the name of the scale, survey, or questionnaire, the type (self-assessment, performance-based, or objective assessment), the number and type of items, the target population, participant characteristics, the number of factors, dimensions, or constructs and their descriptions, as well as the language(s) in which the method is available.
The extraction of instrument characteristics and their alignment with the UNESCO and OECD frameworks were initially conducted by the primary researcher. To ensure the reliability and conceptual consistency of the mapping, the second author performed a systematic audit of the alignment results. This verification process involved cross-referencing the categorized instrument items against the framework definitions. Any points of ambiguity were resolved through reflexive discussion between the authors until 100% consensus was reached on the final categorization.
For RQ2, the evaluation methods of teachers’ AI literacy and AI competency were further compared to the five aspects of the AI CFT as outlined in the literature review. Firstly, the dimensions, factors, or constructs of each method were examined for alignment, along with the individual items where available. To be considered aligned, a dimension must either explicitly mention the relevant aspect or closely address the competencies within the framework at any level of development. For example, if a method addresses teachers’ critical understanding of AI’s social impacts, or their ability to evaluate AI, algorithms, or tools, it would be regarded as aligned with the specified aspect.
Finally, for RQ3, the methods specifically designed to assess teachers’ AI literacy were analyzed based on the definition of AI literacy provided by the AILit Framework. To be more precise, their content was examined to include not only knowledge and skills but also attitudes. For a questionnaire to measure attitudes, it must have a section dedicated to evaluating teachers’ attitudes towards AI, AI tools, AI use, and so on. Ethics is often included as a cross-cutting theme or as a separate domain. However, a scale mainly focusing on ethics would not sufficiently cover the attitudes section unless it also contains items addressing attitudes such as curiosity, innovation, adaptability, or others mentioned in the AILit Framework.
4. Results
RQ1: Which methods are employed to evaluate AI competence in K–12 teachers in current research?
A total of 16 assessment methods for teachers’ AI Literacy were identified from 13 studies, as shown in Table 5. The table includes questionnaires/surveys, tests, reflective writing, assessments, and scales, most of which are validated and rely on self-assessment using Likert items. The target populations are pre-service and in-service teachers at different levels. Various sample sizes, age groups, and locations are represented in the table (see Figure 2), although some studies do not specify available languages or the age range of the sample. Most assessment methods were published in 2024 and 2025 and feature multiple factor or dimension structures. When necessary, we included both the original and final versions of the assessment methods for further analysis (RQ2).
| Scale/Survey/Questionnaire | Type | Items | Target Population | Validation Sample (N) | Factors/Dimensions | Availability |
| AI Competence Educators (AICO_edu) questionnaire (Delcker et al., 2025) | self-assessment | 45 five-point Likert items | teachers (at vocational schools) | 480 | Theoretical knowledge about AI, Legal framework and ethics, Implications of AI in education, Attitude toward AI, Teaching and learning with AI, Ongoing professionalization | included (English) |
| AI concept test (Kong & Yang, 2024) | objective assessment | 10 multiple-choice items | in-service primary school teachers | 31 | Understanding of tokens, Self-attention, Embeddings, Transformer, Prompt engineering, Other basic AI concepts and the implications of generative AI | not included |
| Survey on TPACK (Kong & Yang, 2024) | self-assessment | 14 five-point Likert items | | | CK, PCK, TCK and TPACK | not included |
| Reflective writing (Kong & Yang, 2024) | self-assessment | N/A | | | Challenges encountered, Strategies used, Overall impressions of the usefulness of generative AI tools | N/A |
| Survey on assessing teachers’ ability to use text-based generative AI tools for teaching from the perspective of attention, relevance, confidence and satisfaction (Kong & Yang, 2024) | self-assessment | 12 five-point Likert items | | | Attention, Relevance, Confidence and Satisfaction | not included |
| AI literacy assessment for non-technical individuals (Ding et al., 2024) | objective assessment | 31 true/false statements, multiple-choice questions, and sorting inquiries | pre- and in-service teachers | 186 | Understanding AI’s nature, Recognizing AI’s capabilities, Grasping AI’s underlying mechanisms, Discerning appropriate AI utilization, and Comprehending public perceptions of AI | included (English) |
| AI-TPACK scale (Intelligent-TPACK scale) (Celik, 2023) | self-assessment | 27 seven-point Likert items | teachers | 428 | Intelligent-TK (Technological Knowledge), Intelligent-TPK (Technological Pedagogical Knowledge), Intelligent-TCK (Technological Content Knowledge), Intelligent-TPACK (Technological Pedagogical Content Knowledge), Ethics | included (English) |
| AI-TPACK scale (Celik, 2023, as employed in the study of Hava & Babayigit, 2025) | self-assessment | 27 seven-point Likert items | teachers | 401 | AI-TK, AI-TCK, AI-TPK, AI-TPACK, and Ethics | not included |
| AI-TPACK scale (Ning et al., 2024) | self-assessment | 42 five-point Likert items | pre- and in-service teachers | 366 | Content knowledge, Pedagogical knowledge, AI-technological knowledge, Pedagogical content knowledge, AI-technological content knowledge, AI-technological pedagogical knowledge, AI-technological pedagogical content knowledge | included (English) |
| Artificial Intelligence Literacy (AIL) Scale for Teachers (Younis, 2025) | self-assessment | 45 five-point Likert items | secondary-level teachers | 292 | Teachers’ attitudes towards AI use, Understand AI and computational thinking concepts, Understand AI social impact, Understand AI ethics, Search and locate AI tools, Motivate students to use AI tools, Integrate AI tools in the classroom, Evaluate AI tools features, Apply AI tools for assessment | included (English) |
| Artificial Intelligence Literacy Scale for Teachers (AILST) (Ning et al., 2025) | self-assessment | 36 five-point Likert items | pre- and in-service teachers | 604 | AI perception, Knowledge and skills, Applications and innovation, Ethics | included (English) |
| GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T) (Wu et al., 2025) | self-assessment | 48 six-point Likert items | second language teachers | 525 | Consciousness, Knowledge, Application, Responsibility, Professional development | included (Chinese, English) |
| Scale to assess the AI literacy of Chinese English as Foreign Language (EFL) teachers (Pan & Wang, 2025) | self-assessment | 31 five-point Likert items | EFL teachers | 782 | AI knowledge (7 items), AI use (8 items), AI assessment (5 items), AI design (5 items) and AI ethics (7 items) | not included |
| Self-assessment questions about teachers’ AI literacy competence (Tenberga & Daniela, 2024) | self-assessment | 10 questions measuring teachers’ proficiency on a seven-point scale, ranging from “Know nothing about this competence” through basic awareness (level 1) to advanced application and leadership in digital and AI literacy competencies (level 6) | secondary school teachers | 42 | Understanding AI fundamentals, Critical evaluation, Ethics, Usage, Awareness, and Communication | included (English) |
| Survey to measure the level of teachers’ AI literacy (no specific name/abbreviation) (Zhao et al., 2022) | self-assessment | 20 five-point Likert items | primary, secondary and high school teachers | 1013 | Knowing and understanding AI, Applying AI, Evaluating AI application, and AI ethics | included (Chinese, English) |
| Teacher Artificial Intelligence Competence Self-efficacy Scale (TAICS) (Chiu et al., 2025) | self-assessment | 24 five-point Likert items | K-12 teachers | 434 | AI knowledge, AI pedagogy, AI assessments, AI ethics, Human-centered education, Professional engagement | included (English) |
Figure 2. Language/geographic coverage of the assessment methods
4.1. AICO_edu questionnaire
The AI Competence Educators questionnaire (Delcker et al., 2025) focuses on perceived AI competence among teachers, covering six dimensions identified through systematic literature analysis and expert interviews. The six factors were measured using a total of 45 five-point Likert items. The questionnaire was distributed to 480 teachers at vocational schools in Germany over a two-month period in 2021, capturing the level of perceived AI competence at that time. Confirmatory factor analysis (CFA) examined construct validity and prompted revisions to the questionnaire: the dimensions “Attitudes” and “Professionalisation” were removed entirely, along with nine items from the remaining dimensions, resulting in a four-factor structure comprising 21 items.
4.2. Kong and Yang (2024)
In a study on a human-centred learning and teaching framework that used generative AI to develop self-regulated learning through domain knowledge in a K-12 setting, a 60-hour professional development programme was designed and implemented with a sample of 31 in-service primary school teachers. The study included four assessment methods related to AI literacy.
4.2.1. AI concept test
This objective assessment comprises ten multiple-choice questions that focus on teachers’ understanding of tokens, self-attention, embeddings, transformers, prompt engineering, other fundamental AI concepts, and the implications of generative AI. It offers a valuable tool to evaluate teachers’ comprehension of AI concepts without the limitations associated with self-assessment methods. The assessment was used to measure the impact of the teacher development programme by administering it before and after the intervention. However, it remains unclear how it was developed and why these particular items were chosen. Additionally, it is not mentioned whether this test was validated in any way.
4.2.2. Survey on (AI) TPACK
This survey evaluates teachers’ perceived abilities to use text-based generative AI tools for differentiated instruction and for addressing learner differences. It is a self-assessment comprising 14 five-point Likert items based on the TPACK framework and adapted from previously validated instruments. It is divided into four constructs: a) Content Knowledge (CK), focusing on prompt engineering; b) Pedagogical Content Knowledge (PCK), emphasizing addressing students’ individual differences with generative AI; c) Technological Content Knowledge (TCK), focusing on the technological aspects of generative AI; and d) Technological Pedagogical Content Knowledge (TPACK), which integrates teaching and technology. It was administered both pre- and post-intervention to examine the programme’s impact on teachers’ understanding of integrating technology for educational purposes. To mitigate potential inconsistencies in this self-assessment, results were complemented by an analysis of teachers’ written reflections.
4.2.3. Reflective writing
This method was employed to gain insights into the challenges teachers faced during the development programme, the strategies they employed, and their overall perceptions of the significance of generative AI tools for enhancing students’ self-regulated learning in practice. Although written reflections are inherently subjective, this tool is easily adaptable to any research context and offers an additional means of triangulating the quantitative data collected from other assessment methods.
4.2.4. Survey on assessing teachers’ ability to use text-based generative AI tools for teaching from the perspective of ARCS
To assess teachers’ capacity to incorporate generative AI into their teaching through pedagogical methods, a survey was developed based on the ARCS model, which stands for attention, relevance, confidence, and satisfaction. This self-assessment comprises 12 five-point Likert scale items across these four dimensions. Although it was adapted from previous research, no information is provided regarding the validity of this method. The quantitative data was triangulated with an analysis of teachers’ written reflections.
4.3. AI literacy assessment for non-technical individuals
The initial version of the AI literacy assessment for non-technical individuals (Ding et al., 2024) comprises 31 objective items, including true/false statements, multiple-choice questions, and sorting tasks, that relate to 17 competencies across five key areas of AI literacy. These areas are a) understanding AI’s nature, b) recognizing AI’s capabilities, c) comprehending AI’s underlying mechanisms, d) identifying suitable AI applications, and e) understanding public perceptions of AI. The assessment was reviewed by experts and administered to a total of 196 in-service and pre-service teachers with limited or no technical background in the Southern United States (June to September 2023), with 186 responses retained after data screening. Participants had approximately 20 minutes to complete the assessment. Following a thorough validity analysis, 25 items were retained for the final version.
4.4. AI-TPACK scale
The AI-TPACK scale, also known as the Intelligent-TPACK scale, was developed by Celik (2023) and comprises 27 seven-point Likert items divided into five subscales: a) AI-TK (AI-Technological Knowledge), b) AI-TCK (AI-Technological Content Knowledge), c) AI-TPK (AI-Technological Pedagogical Knowledge), d) AI-TPACK (AI-Technological Pedagogical Content Knowledge), and e) Ethics. The survey was completed by 428 teachers aged 29 to 38 in Turkey. It is a tested and validated instrument for assessing teachers’ ability to ethically and pedagogically utilize “intelligent” technologies, building on the proven track record of TPACK. Aside from Celik and, as previously mentioned, Kong and Yang (2024), Hava and Babayigit (2025) also employed the AI-TPACK scale while investigating the relationship between teachers’ AI-TPACK competencies and digital proficiency. They distributed the questionnaire to 401 teachers working in public schools across different education levels in Turkey. The demographic data indicated moderate to high levels of technological proficiency among participants, and completion took approximately 20 minutes. Another study (Ning et al., 2024) expanded the questionnaire to a final set of 42 five-point Likert items, refined using input from primary and secondary school teachers as well as educational experts. Valid questionnaires were collected from 366 teachers with practical experience in applying AI technology between July and September 2023. Validity was assessed using both exploratory and confirmatory factor analysis, further supporting teachers’ AI-TPACK framework as a reliable and effective foundation.
4.5. Artificial Intelligence Literacy (AIL) scale for teachers
The AIL scale for teachers (Younis, 2025) was developed based on the AI literacy competencies identified through a literature review, integrated with the theoretical framework of the AI literacy TPACK model (Ng et al., 2021b) and the DigCompEdu framework (Ghomi & Redecker, 2019). It was further refined by an expert committee consisting of teachers and university professors. The outcome was a self-report instrument comprising 45 five-point Likert items across nine constructs: a) Teachers’ attitudes towards AI use, b) Understanding AI and computational thinking concepts, c) Understanding AI’s social impact, d) Understanding AI ethics, e) Searching for and locating AI tools, f) Motivating students to use AI tools, g) Integrating AI tools into the classroom, h) Evaluating AI tools’ features, and i) Applying AI tools for assessment. A total of 292 secondary teachers from six countries completed the survey between October and December 2023, representing a broad range of demographic backgrounds. The scale’s reliability and validity were confirmed through CFA.
4.6. Artificial Intelligence Literacy Scale for Teachers (AILST)
The AILST (Ning et al., 2025) is another validated self-assessment tool that covers AI perception, knowledge and skills, applications and innovation, and ethics. The original version consisted of 40 five-point Likert items and was administered to 302 pre- and in-service teachers with prior knowledge and experience in AI teaching; exploratory factor analysis led to the exclusion of four items. The revised scale was then redistributed to an equally homogeneous group of 302 participants for CFA.
4.7. GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T)
The GAICS-L2T (Wu et al., 2025) initially comprised five factors: a) Consciousness, b) Knowledge, c) Application, d) Responsibility, and e) Professional development. Following exploratory factor analysis, the “Professional development” dimension and half of the 48 items were eliminated. In the final phase of the research, the revised self-assessment tool was distributed to 525 Chinese second language (L2) teachers, demonstrating strong validity and reliability for assessing L2 teachers’ competence in GenAI.
4.8. Scale to assess the AI literacy of Chinese English as a Foreign Language (EFL) teachers
Pan and Wang (2025) examined the interaction between AI literacy and both the age and teaching experience of 782 English teachers in China. To this end, they developed an instrument based on Ng et al.’s (2021a, 2022) theoretical framework. The scale was validated through CFA, ultimately comprising 31 five-point Likert items across five subscales: a) AI knowledge, b) AI use, c) AI assessment, d) AI design, and e) AI ethics.
4.9. Self-assessment questions about teachers’ AI literacy competence
This self-assessment questionnaire on teachers’ AI literacy (Tenberga & Daniela, 2024) forms the second part (Questions 38 to 47) of the complete survey used in the study, while the remaining questions (Questions 6 to 37) are identical to the SELFIE for Teachers self-assessment framework (Economou, 2023). The AI literacy part draws on existing frameworks, such as DigCompEdu (Redecker, 2017) and the revised DigComp 2.2 (Vuorikari et al., 2022), which include various AI competencies identified through literature review. It comprises six dimensions: a) understanding AI fundamentals, b) critical evaluation, c) ethics, d) usage, e) awareness, and f) communication. For the purpose of the study, it was distributed to 42 secondary school teachers for data analysis and to assess internal consistency. The dimensions identified in this study were not validated through CFA, as noted by the authors.
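Internal consistency of Likert-type scales such as this one is most often reported as Cronbach’s alpha. The following minimal sketch computes it in plain Python; the response matrix is invented for illustration and does not come from any of the reviewed studies.

```python
# Minimal sketch: Cronbach's alpha for a Likert-scale instrument.
# Hypothetical data: 5 respondents x 4 items, 1-5 Likert responses.
# Illustrative only; real validation studies use far larger samples
# and dedicated psychometric software.

def variance(xs):
    """Sample variance (denominator n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(responses):
    """responses: one row per respondent, one column per item."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # transpose to per-item columns
    item_vars = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

data = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
alpha = cronbach_alpha(data)
print(f"{alpha:.3f}")  # → 0.933
```

Alpha increases as items covary more strongly; values around 0.7 or above are conventionally read as acceptable internal consistency, although, as the methodological discussion below notes, alpha alone is weak evidence of validity.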
4.10. Survey to measure the level of teachers’ AI literacy (no specific name/abbreviation)
This survey was used in the study by Zhao et al. (2022). It builds on criteria developed by other researchers in the field and consists of 20 five-point Likert items across four dimensions of AI literacy: a) Knowing and understanding AI, b) Applying AI, c) Evaluating AI application, and d) AI ethics. After screening out invalid questionnaires, the sample comprised 1013 primary, middle, and high school teachers in China. The study uses this tool to examine the correlations among the four dimensions of AI literacy.
4.11. Teacher Artificial Intelligence Competence Self-efficacy scale
TAICS (Chiu et al., 2025) is a validated scale that measures teachers’ AI competence self-efficacy in K-12 education. The scale was developed using a Delphi method. In its final version, it includes 24 five-point Likert items across six dimensions: a) AI knowledge, b) AI pedagogy, c) AI assessment, d) AI ethics, e) human-centred education, and f) professional engagement, based on the AI CFT and the competencies suggested in Falloon’s (2020) teacher digital competence framework. The scale was validated on a sample of 434 K-12 teachers through CFA.
RQ2: How do the factors or dimensions of these methods align with the five aspects of UNESCO’s AI CFT?
The content analysis showed partial alignment with the five aspects of the AI CFT, as presented in Table 6 and Figure 3.
Figure 3. Summary of alignment with the five aspects of UNESCO’s AI CFT
Table 6. Alignment of the assessment methods with the five aspects of UNESCO’s AI Competency Framework for Teachers (AI CFT)
| Scale/Survey/Questionnaire | Human-centered mindset | Ethics of AI | AI foundations and applications | AI pedagogy | AI for professional development |
| AI Competence Educators (AICO_edu) questionnaire (Delcker et al., 2025) | – | + | + | + | + |
| AI Competence Educators (AICO_edu) questionnaire (Delcker et al., 2025) final version | – | + | + | + | – |
| AI concept test (Kong & Yang, 2024) | – | – | + | – | – |
| Survey on TPACK (Kong & Yang, 2024) | – | – | + | + | – |
| Reflective writing (Kong & Yang, 2024) | – | – | – | – | – |
| Survey on assessing teachers’ ability to use text-based generative AI tools for teaching from the perspective of attention, relevance, confidence and satisfaction (Kong & Yang, 2024) | – | – | – | + | – |
| AI literacy assessment for non-technical individuals (Ding et al., 2024) | – | + | + | + | – |
| AI literacy assessment for non-technical individuals (Ding et al., 2024) final version | – | + | + | + | – |
| AI-TPACK scale (Intelligent-TPACK scale) (Celik, 2023) | – | + | + | + | – |
| AI-TPACK scale (Celik, 2023, as employed in the study of Hava & Babayigit, 2025) | – | + | + | + | – |
| AI-TPACK scale (Ning et al., 2024) | – | – | + | + | – |
| Artificial Intelligence Literacy (AIL) Scale for Teachers (Younis, 2025) | + | + | + | + | – |
| Artificial Intelligence Literacy Scale for Teachers (AILST) (Ning et al., 2025) | – | + | + | + | – |
| GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T) (Wu et al., 2025) | + | + | + | + | + |
| GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T) (Wu et al., 2025) final version | + | + | + | + | – |
| Scale to assess the AI literacy of Chinese English as a Foreign Language (EFL) teachers (Pan & Wang, 2025) | + | + | + | + | – |
| Self-assessment questions about teachers’ AI literacy competence (Tenberga & Daniela, 2024) | + | + | + | + | + |
| Survey to measure the level of teachers’ AI literacy (no specific name/abbreviation) (Zhao et al., 2022) | + | + | + | + | – |
| Teacher Artificial Intelligence Competence Self-efficacy scale (TAICS) (Chiu et al., 2025) | + | + | + | + | + |
Fourteen out of sixteen assessment methods include items that align with the aspects of “AI foundations and applications” and “AI pedagogy”. However, five out of sixteen studies did not address the “Ethics of AI” and ten out of sixteen did not cover the “Human-centered mindset”. A small number of evaluation methods (four out of sixteen) initially incorporate the dimension of “AI for professional development”, but two of them remove this construct after factor analysis in their final version (see Table 7). Ultimately, only the self-assessment questions regarding teachers’ AI literacy competence (Tenberga & Daniela, 2024) and TAICS (Chiu et al., 2025) exhibit the highest level of alignment.
Table 7. Alignment of the AI-literacy-specific assessment methods with the AILit Framework’s knowledge–skills–attitudes triad
| Scale/Survey/Questionnaire | Knowledge | Skills | Attitudes |
| AI literacy assessment for non-technical individuals (Ding et al., 2024) | + | + | – |
| Artificial Intelligence Literacy (AIL) Scale for Teachers (Younis, 2025) | + | + | + |
| Artificial Intelligence Literacy Scale for Teachers (AILST) (Ning et al., 2025) | + | + | – |
| Scale to assess the AI literacy of Chinese English as a Foreign Language (EFL) teachers (Pan & Wang, 2025) | + | + | – |
| Self-assessment questions about teachers’ AI literacy competence (Tenberga & Daniela, 2024) | + | + | – |
| Survey to measure the level of teachers’ AI literacy (no specific name/abbreviation) (Zhao et al., 2022) | + | + | – |
RQ3: How do the methods correspond with the AILit Framework’s knowledge–skills–attitudes triad?
Out of the 16 assessment methods, six specifically mention “AI literacy”. Among these six questionnaires, one features a distinct section focused on teachers’ attitudes towards AI. The other five studies primarily explore aspects of AI ethics (see Table 7).
The remaining 10 assessment methods show a similar picture. Four questionnaires, surveys, or scales include dimensions or items addressing more than just ethics. However, the AICO_edu discards this construct after factor analysis in its final version (see Table 8).
Table 8. Alignment of the remaining assessment methods with the AILit Framework’s knowledge–skills–attitudes triad
| Scale/Survey/Questionnaire | Knowledge | Skills | Attitudes |
| AI Competence Educators (AICO_edu) questionnaire (Delcker et al., 2025) | + | + | + |
| AI Competence Educators (AICO_edu) questionnaire (Delcker et al., 2025) final version | + | + | – |
| AI concept test (Kong & Yang, 2024) | + | + | – |
| Survey on TPACK (Kong & Yang, 2024) | + | + | – |
| Reflective writing (Kong & Yang, 2024) | – | – | – |
| Survey on assessing teachers’ ability to use text-based generative AI tools for teaching from the perspective of attention, relevance, confidence and satisfaction (Kong & Yang, 2024) | + | + | + |
| AI-TPACK scale (Intelligent-TPACK scale) (Celik, 2023) | + | + | – |
| AI-TPACK scale (Celik, 2023, as employed in the study of Hava & Babayigit, 2025) | + | + | – |
| AI-TPACK scale (Ning et al., 2024) | + | + | – |
| GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T) (Wu et al., 2025) | + | + | + |
| GenAI Competence Scale for second language (L2) Teachers (GAICS-L2T) (Wu et al., 2025) final version | + | + | + |
| Teacher Artificial Intelligence Competence Self-efficacy scale (TAICS) (Chiu et al., 2025) | + | + | + |
5. Discussion
This review identified 16 assessment methods for K-12 teachers’ AI literacy. The results show that current TPD research relies on validated assessment tools to create effective professional development courses. Methods to assess teachers’ AI literacy, AI competency, or AI competence need to cover all five aspects of the AI CFT and include, besides knowledge and skills, the crucial section on teachers’ attitudes highlighted in the AILit Framework. Unfortunately, only the self-assessment questions about teachers’ AI literacy competence (Tenberga & Daniela, 2024) and TAICS (Chiu et al., 2025) are aligned with all five aspects of UNESCO’s AI CFT. Additionally, a single scale (AIL) (Younis, 2025) aligns with the AI literacy definition in the AILit Framework, including a dedicated section on teachers’ attitudes. No method fully addresses both frameworks at once. The AICO_edu questionnaire (Delcker et al., 2025) covers two key dimensions of teachers’ AI literacy/competency, “Attitudes” and “AI for professional development”, but both are removed after CFA. Similarly, GAICS-L2T (Wu et al., 2025) excludes the latter in its final version. This indicates that even when instruments are designed to reflect relevant frameworks, the demands of psychometric validation—such as securing clear factor structures, acceptable fit indices, and reliable scales—often lead to items being rephrased or deleted. Consequently, important aspects such as attitudes and professional development may be omitted, exposing a tension between comprehensive theory and empirical validation.
When researching methods for assessing AI literacy, precise wording is essential because terminology in the literature varies. “AI” is used interchangeably to refer to artificial intelligence, generative AI, or specific tools and practices such as ChatGPT and prompt engineering. It is also often conflated with related concepts like data literacy, machine learning (ML), deep learning, and neural networks, or simplified to pragmatic terms such as “teachers’ ability to use AI”. Rapid technological advances and the proliferation of buzzwords make construct clarity difficult. Regarding RQ3, this ambiguity has significant implications: under the AILit Framework, AI literacy must include knowledge, skills, and attitudes, yet many assessment methods emphasize ethics while overlooking other vital attitudes. Furthermore, the dimension of teachers’ attitudes towards AI has emerged as a distinct research area, as many articles during the literature review focused exclusively on this topic. These patterns indicate both systemic issues in how constructs are operationalized and case-specific nuances. Consequently, each assessment should clarify whether it evaluates AI literacy as a whole or specific dimensions (such as knowledge, practical skills, attitudes), whether it focuses on AI or generative AI particularly, and whether it measures self-perceived competence or actual, performance-based skills among non-technical or experienced pre- or in-service teachers.
Most identified instruments are self-assessment tools, which can differ from demonstrated competence and therefore benefit from supplementary performance-based measures. Although many studies report CFA, they often lack revalidation on larger or independent samples, test–retest evidence, or measurement invariance, which limits the robustness of conclusions. Sample sizes are not consistently justified. Participant demographics (e.g., age) are not always provided, and most instruments have not undergone cross-cultural validation, restricting their applicability across different settings. Finally, the effort to develop a standardized, widely applicable scale is impeded by limited language and geographic coverage.
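One of the missing forms of evidence, test–retest reliability, is commonly quantified as the correlation between scores from two administrations of the same instrument to the same respondents. A minimal sketch in plain Python, using hypothetical mean scale scores from two time points (invented data, not drawn from the reviewed studies):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean scale scores for six teachers at two time points.
t1 = [3.2, 4.0, 2.8, 4.5, 3.6, 3.9]
t2 = [3.4, 3.8, 3.0, 4.4, 3.5, 4.1]
print(f"{pearson_r(t1, t2):.3f}")  # → 0.960
```

High correlations across administrations indicate temporal stability; reporting this statistic alongside CFA fit would substantially strengthen the evidence base of the reviewed instruments.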
This review and Lintner’s (2024) review of AI literacy scales both emphasize gaps in validation quality and the dominance of self-assessment tools over performance-based evidence, noting the risk that self-reports may not accurately reflect demonstrated competence. Both also highlight limited generalizability due to narrow samples, sparse reporting, and limited cross-cultural research, underscoring the need for revalidation with larger, more diverse cohorts. Notably, Lintner’s systematic review identified only one instrument assessing teachers’ perceived ability to incorporate AI-based tools into educational practice.
The relatively small number of included studies reflects the stringent inclusion criteria focused on validated assessment tools and emphasizes the need for more primary research into teacher-specific AI competencies. The results of this study extend previous reviews and offer valuable insights into developing scales that validate K-12 teachers’ AI literacy for designing effective professional development programmes and assessing the current state. However, this study does not address other important literacies that might influence AI literacy, such as digital and data literacies (Long & Magerko, 2020), factors affecting teachers’ technology acceptance (Feng et al., 2025), or the impact of digital technologies in schools (Mercader & Castro, 2025) or workplaces (Liu et al., 2025). The research concentrated on questionnaires distributed to a specific group—teachers—and does not include validated AI literacy scales aimed at ordinary users (e.g., Wang et al., 2023) or those adaptable to the target group. Another limitation and gap in current TPD research is that, to our knowledge, there are no ML-assisted methods for assessing teachers’ AI literacy. Such methods could enhance survey data analysis and pave the way for practical implementation of AI tools in educational research (Benzer et al., 2025). Recent developments in instruments suggest that as AI and big-data analytics progress, AI literacy assessment tools could utilize these technologies to provide more accurate, detailed, and increasingly personalized evaluations (Ning et al., 2025).
5.1. Future directions
Instrument design: Develop and validate teacher-specific, framework-aligned tools that preserve critical dimensions (human-centred mindset and AI for professional development).
Method diversification: Combine self-reports with performance- or scenario-based tasks to assess demonstrated competence.
Validation rigour: Perform cross-sample revalidation, test–retest studies, multi-group invariance (e.g., pre- vs in-service; regions; languages); transparently report demographic profiles and missing data procedures.
Scope clarity: Clarify whether instruments target AI or generative AI; differentiate ethics from broader attitudes (e.g., curiosity, openness, adaptability) to avoid construct collapse.
Coverage and access: Extend multilingual adaptations using forward–back translation and DIF (Differential Item Functioning) checks; promote open data and shared item banks to support replication, benchmarking, and cumulative meta-analytic work.
Tooling and analytics: Investigate ML-assisted scoring and validation pipelines to support scalable, responsive assessment in TPD contexts while upholding fairness and transparency safeguards. Prior research suggests leveraging AI and big-data analytics within AI literacy assessment to improve measurement accuracy and enable personalized feedback, providing a clear pathway for next-generation teacher instruments.
Researchers and consortia should collaboratively develop open, teacher-specific instrument suites aligned with the AI CFT, AILit, and other Digital Competence Frameworks, such as DigComp 3.0 (Cosgrove & Cachia, 2025), by releasing item pools, code, and anonymized data for reuse and cross-context benchmarking. Funders and ministries should prioritize projects that deliver performance-based assessments, cross-cultural adaptations, and invariance-tested instruments, linked to TPD cycles and classroom practice. TPD providers and districts should adopt dual-method assessments (self-report plus performance) and require explicit framework mapping in programme evaluation to ensure that human-centred mindsets and professional development are not sidelined.
6. Conclusions
The review highlights an ongoing gap in well-validated, teacher-specific AI literacy and competency tools that align with both UNESCO’s AI CFT and the AILit Framework. It reveals a tension between broad theoretical coverage and psychometric fit, which hampers effective TPD design and cumulative research progress.
Sixteen teacher-focused assessment methods were identified, with the landscape dominated by self-assessments and very limited objective or performance-based measures. Alignment with UNESCO’s AI CFT varies: most instruments cover AI foundations and pedagogy, fewer address ethics, and significantly fewer systematically capture a human-centred mindset or AI for professional development. Alignment with the AILit Framework’s knowledge–skills–attitudes triad is partial: while knowledge and skills are commonly represented, attitudes are often reduced to or conflated with ethics rather than distinguished as separate attitudinal constructs. Instruments initially designed to include “attitudes” and “AI for professional development” often omit these dimensions during factor-analytic validation, reflecting a trade-off between framework breadth and empirical fit. Methodological limitations include a predominant reliance on self-report, scarce cross-sample revalidation, limited evidence for test–retest reliability and measurement invariance, inconsistent sample-size justification, incomplete demographic reporting, and narrow language and geographic coverage. Conceptual ambiguity in the literature (AI versus generative AI versus ChatGPT versus prompting; conflation with related literacies) complicates construct clarity and comparability across studies.
This review offers the first dual mapping of teacher-oriented assessment methods against both UNESCO’s five AI CFT aspects and the AILit knowledge–skills–attitudes triad, clearly indicating where coverage is focused and where it is absent. It highlights systematic psychometric challenges, particularly the post-CFA removal of “attitudes” and “AI for professional development”, and connects them to tangible risks for TPD design and evaluation. It provides a straightforward, framework-based reference for researchers and TPD designers seeking instruments with proven alignment, thereby enhancing selection, adaptation, and future instrument development.
For TPD, incomplete coverage of constructs risks misdiagnosing needs and overemphasizing technical or ethical aspects while overlooking human-centred mindsets and ongoing professional growth. For policy and standards, fragmented metrics hinder monitoring and scaling efforts, making it hard to compare outcomes across different contexts or to promote evidence-based adoption of AI in K-12. For cumulative science, the shortage of performance-based measures and limited cross-cultural validation restricts generalisability, impedes replication, and slows progress towards shared benchmarks and interoperable evidence.
Copyright (c) 2026 The Author(s)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
All articles published in Artificial Intelligence Advances in Education are open access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
This license permits non-commercial use, sharing, distribution, and reproduction in any medium or format, provided that proper credit is given to the original author(s) and the source, a link to the license is provided, and any changes to the material are clearly indicated.
Adaptations or derivatives of the material are not permitted under this license.
Images or other third-party material included in an article are covered by the article’s Creative Commons license unless otherwise indicated in a credit line. If any material is not included in the license and your intended use exceeds permitted statutory regulation, you must obtain permission directly from the copyright holder.