Each principle includes questions to discuss and consider, a description, and real-world examples. Visit the Sample Guidance section for an example of a resource based on these principles.
Discussion Questions
✅ How does our guidance highlight the purposeful use of AI to achieve our shared education vision and goals?
✅ How do we reduce the digital divide between students with easy access to AI tools at home and those dependent on school resources?
✅ How does our guidance ensure inclusivity, catering to diverse learning needs and linguistic and cultural backgrounds?
Education leaders should clarify the shared values that will guide the use of AI tools, especially those that were not specifically created for educational contexts. AI tools should be applied to serve existing goals, such as promoting student and staff well-being, enriching student learning experiences, and enhancing administrative functions.
Using AI tools to promote equity in education requires both access and thoughtful implementation. Equity is also addressed in the other principles in this toolkit, such as promoting AI literacy for all students or realizing the benefits of AI and addressing the risks.
Education systems should carefully evaluate how students access AI tools rather than impose general bans. Consideration should be given to age restrictions, data privacy, and security concerns, as well as to alignment with teaching and learning goals, curriculum, and the overall district technology plan. Attempting to enforce broad bans on AI is a futile effort that widens the digital divide between students with independent access to AI on personal devices and students dependent on school or community resources. Closing the digital divide in an age of AI still begins with internet connectivity, device availability, and basic digital literacy.
Ensuring widespread access to AI tools presents opportunities to use their capabilities to promote equity; however, leaders must implement thoughtful safeguards and oversight to minimize associated risks. For example, AI tools can provide instant translations of both written and spoken language, enabling non-native English speakers to engage more fully, but plagiarism-detection tools can be biased against those same speakers. Educators and administrators should be aware of these issues and thoroughly evaluate these tools for accuracy as well as cultural and linguistic inclusion.
Example: The Lower Merion School District, Pennsylvania, USA, states, "We believe in preparing students for the future. Our students will most certainly be engaging with artificial intelligence in years to come. As such, we view it as partly our responsibility to teach them how to use new technology and tools in ways that are appropriate, responsible, and efficient… Rather than ban this technology, which students would still be able to access off campus or on their personal networks and devices, we are choosing to view this as an opportunity to learn and grow."
The use of AI to pursue educational goals must be carefully aligned with the core values and ethics of the education system. This means identifying and mitigating the risks of AI in education so that the benefits may be realized (see Principle 4). Furthermore, students should learn about “the impact of AI on our lives, including the ethical issues it raises,” and teachers should be provided training to recognize misinformation. AI systems should be deployed in ways that support and maintain human decision-making in the process of teaching and learning.
Example: Peninsula School District, Washington, USA, AI Principles and Beliefs Statement. “Our unwavering commitment to Universal Design for Learning (UDL) shapes our belief that our use of AI should align with UDL's three core principles: diversified ways of representation, action/expression, and engagement. AI can facilitate presenting information in diverse formats, aligning with individual learners' needs.”
Age Restrictions and Parental Consent
ChatGPT currently requires that users be at least 13 years old and that students between the ages of 13 and 18 have a parent or legal guardian’s permission. The website warns that “ChatGPT may produce output that is not appropriate for all audiences or all ages and educators should be mindful of that while using it with students or in classroom contexts."
Benefits

Personalized Content and Review: AI can help generate personalized study materials, summaries, quizzes, and visual aids (see the sketch after this list); help students, including those with disabilities, access and develop tailored resources to meet their specific needs; and help students organize thoughts and review content.
Aiding Creativity: Students can harness generative AI as a tool to spark creativity across diverse subjects, including writing, visual arts, and music composition. AI can suggest novel concepts or generate artwork or musical sequences to build upon.
Tutoring: AI technologies have the potential to democratize one-to-one tutoring and support, especially for students with financial or geographic constraints. Virtual teaching assistants powered by AI can provide round-the-clock support, help with homework, and supplement classroom instruction.
Critical Thinking and Future Skills: Students who learn about how AI works are better prepared for future careers in a wide range of industries. They develop computational thinking skills to break down complex problems, analyze data critically, and evaluate the effectiveness of solutions.
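For illustration, here is a minimal sketch of the kind of quiz generation described in the first item above. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name, prompt, and grade level are illustrative placeholders rather than a recommended configuration, and any chat-based LLM API could be substituted.

```python
# Minimal sketch: generating a personalized quiz with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_quiz(topic: str, grade_level: str, num_questions: int = 3) -> str:
    """Ask the model for a short quiz tailored to a student's level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "You are a teaching assistant. Write clear, "
                           "age-appropriate quiz questions with answer keys.",
            },
            {
                "role": "user",
                "content": f"Create {num_questions} quiz questions on "
                           f"{topic} for a {grade_level} student.",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_quiz("the water cycle", "5th-grade"))
```

As the risk and mitigation items that follow emphasize, a teacher should review any generated questions for accuracy and bias before using them with students.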
Risks

Plagiarism and cheating can occur when students copy from generative AI tools without approval or adequate documentation and submit AI-generated work as their original work.
Misinformation can be produced by generative AI tools and disseminated at scale, leading to widespread misconceptions.
Bullying and harassment, such as using AI tools to manipulate media and impersonate others, can have severe consequences for students' well-being.
Overreliance on potentially biased AI models can lead to abandoning human discretion and oversight. Important nuances and context can be overlooked, and flawed outputs accepted uncritically. People may place excessive trust in AI outputs, especially when AI is seen as having human-like characteristics (i.e., anthropomorphization).
Unequal access to AI tools worsens the digital divide between students with independent and readily available access at home or on personal devices and students dependent on school or community resources.
Risk Mitigation

In addition to being clear about when and how AI tools may be used to complete assignments, teachers can restructure assignments to reduce opportunities for plagiarism and decrease the benefit of AI tools. This may include evaluating the artifact development process rather than just the final artifact and requiring personal context, original arguments, or original data collection.
Students should learn how to critically evaluate all AI-generated content for misinformation or manipulation and be taught about the responsible development and sharing of content.
Staff and students should be taught how to properly cite and acknowledge the use of AI where applicable.
If an assignment permits the use of AI tools, the tools must be made available to all students, considering that some may already have access to such resources outside of school.
See Principle 1. Purpose and Principle 5. Integrity for more information.
Risks

Societal bias is often due to human biases reflected in the data used to train an AI model. Risks include reinforcing stereotypes, recommending inappropriate educational interventions, or making discriminatory evaluations, such as falsely flagging work by non-native English speakers as plagiarized.
Diminishing student and teacher agency and accountability is possible when AI technologies deprioritize the role of human educators in making educational decisions. While generative AI presents useful assistance to amplify teachers' capabilities and reduce teacher workload, these technologies should be a supporting tool to augment human judgment, not replace it.
Privacy concerns arise if AI is used to monitor classrooms for accountability purposes, such as analyzing teacher-student interactions or tracking teacher movements, which can infringe on teachers' privacy rights and create a culture of surveillance.
Risk Mitigation

Select AI tools that provide an appropriate level of transparency in how they create their output to identify and address bias. Include human evaluation before any decisions informed by AI are made, shared, or acted upon.
Educate users on the potential for bias in AI systems so they can select and use these tools more thoughtfully.
All AI-generated content and suggestions should be reviewed and critically reflected upon by students and staff, thereby keeping “humans in the loop” in areas such as student feedback, grading, and when learning interventions are recommended by AI.
When AI tools generate instructional content, it's vital for teachers to verify that this content aligns with the curriculum standards and learning objectives.
See Principle 3. Knowledge and Principle 6. Agency for more information.
Benefits

Content Development, Enhancement, and Differentiation: AI can assist educators by differentiating curricula, suggesting lesson plans, generating diagrams and charts, and creating customized worksheets based on student needs and proficiency levels.
Assessment Design and Analysis: In addition to enhancing assessments by automating question creation, providing standardized feedback on common mistakes, and designing adaptive tests based on real-time student performance (see the sketch after this list), AI can conduct diagnostic assessments to identify gaps in knowledge or skills and enable rich performance assessments. Teachers should remain ultimately responsible for evaluation, feedback, and grading, and for determining and assessing the usefulness of AI in supporting their grading work. AI should never be solely responsible for grading.
Continuous Professional Development: AI can guide educators by recommending teaching and learning strategies based on student needs, personalizing professional development to teachers’ needs, suggesting collaborative projects between subjects or teachers, and offering simulation-based training scenarios such as teaching a lesson or managing a parent/teacher conference.
Ethical Decisions: Understanding how AI works, including its ethical implications, can help teachers make critical decisions about the use of AI technologies and help them support ethical decision-making skills among students.
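To make the adaptive-testing idea in the assessment item above concrete, the following self-contained sketch shows the core logic in its simplest form: difficulty steps up after a correct answer and down after an incorrect one. The question bank and stepping rule are hypothetical; real adaptive assessments rely on calibrated psychometric models, and a teacher remains responsible for the resulting evaluation.

```python
# Minimal sketch of adaptive difficulty: step up after a correct
# answer, step down after an incorrect one. Illustrative only;
# real adaptive tests use calibrated item-response models.
QUESTION_BANK = {
    1: ["What is 2 + 3?"],
    2: ["What is 12 x 4?"],
    3: ["Solve for x: 3x + 5 = 20"],
}

def next_difficulty(current: int, was_correct: bool) -> int:
    """Move one level up or down, staying within the bank's range."""
    step = 1 if was_correct else -1
    return min(max(current + step, min(QUESTION_BANK)), max(QUESTION_BANK))

# Simulated session: the student answers correct, correct, incorrect.
level = 1
for was_correct in [True, True, False]:
    print(f"Level {level}: {QUESTION_BANK[level][0]}")
    level = next_difficulty(level, was_correct)
print(f"Next question would come from level {level}")
```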
A National-Level Guide for Applying Generative AI
In April 2023, the United Arab Emirates Office of AI, Digital Economy, and Remote Work released 100 Practical Applications and Use Cases of Generative AI, a guide that includes detailed use cases for students, such as outlining an essay and simplifying difficult concepts.
“The potential for AI is obvious, and educating our future generation is just the beginning.”
– H.E. Omar Sultan Al Olama
Risks

Compromising privacy is a risk when AI systems gather sensitive personal data on staff and students, store personal conversations, or track learning patterns and behaviors. This data could be hacked, leaked, or exploited if not properly secured and anonymized. Surveillance AI raises all of the concerns above, as well as the issue of parental consent, potential biases in the technology, the emotional impact of continuous monitoring, and the potential misuse of collected data.
Discrimination is a main concern of AI-driven recruitment due to the potential for reinforcing existing biases. If the AI system is trained on historical hiring data that contains biases (e.g., preferences for candidates from certain universities, gender biases, or age biases), the system might perpetuate those biases in its selections.
Opportunities

Operational Efficiency: Staff can use tools to support school operations, including helping with scheduling, automating inventory management, increasing energy savings, and generating performance reports.
Data Analysis: AI can extract meaningful insights from vast amounts of educational data by identifying trends in performance, attendance, and engagement to better personalize instruction (see the sketch after this list).
Communications: AI tools can help draft and refine communications within the school community, deploy chatbots for routine inquiries, and provide instant language translation.
Professional Development: AI can assist in talent recruitment by sifting through job applications to find the best matches and tailor professional development programs based on staff interests and career stages.
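As a deliberately simplified illustration of the data-analysis item above, the sketch below flags students whose attendance rate is trending downward. The sample data, column names, and threshold are assumptions for demonstration; a real analysis would draw on the district's student information system and be subject to the privacy safeguards described under Risk Mitigation.

```python
# Minimal sketch: flag students whose weekly attendance rate is
# trending downward. Data, column names, and threshold are illustrative.
import pandas as pd

attendance = pd.DataFrame({
    "student": ["A", "A", "A", "B", "B", "B"],
    "week":    [1, 2, 3, 1, 2, 3],
    "rate":    [0.95, 0.90, 0.70, 0.92, 0.94, 0.96],
})

def flag_declining(df: pd.DataFrame, drop_threshold: float = 0.10) -> list[str]:
    """Return students whose attendance fell by more than the threshold."""
    flagged = []
    for student, group in df.sort_values("week").groupby("student"):
        change = group["rate"].iloc[-1] - group["rate"].iloc[0]
        if change < -drop_threshold:
            flagged.append(student)
    return flagged

print(flag_declining(attendance))  # ['A']
```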
Risk Mitigation

Evaluate AI tools for compliance not only with all relevant policies and regulations, such as privacy laws, but also with ethical principles.
AI tool providers should be required to detail whether and how personal information is used, ensuring that personal data remains confidential and is not misused.
Use AI as a supplementary tool rather than a replacement for human judgment. For example, AI can be used to filter out clearly unqualified candidates, but final decisions should involve human recruiters.
See Principle 2. Compliance for more information.
AI Guidance for Schools Toolkit © 2023 by Code.org, CoSN, Digital Promise, European EdTech Alliance, and PACE is licensed under CC BY-NC-SA 4.0
Discussion Questions
✅ What is the plan to conduct an inventory of systems and software to understand the current state of AI use and ensure adherence to existing security and privacy regulations?
✅ Does the education system enforce contracts with software providers, stipulating that any use of AI within their software or third-party providers must be clearly revealed to district staff and first approved by district leadership?
When implementing AI systems, the key areas of technology policy to comply with are privacy, data security, student safety, data transfer and ownership, and child and youth protection.
The Council of Great City Schools and the Consortium for School Networking (CoSN), in partnership with Amazon Web Services, have developed the K-12 Generative Artificial Intelligence (Gen AI) Readiness Checklist to help districts in the U.S. prepare for implementing AI technology solutions. The checklist provides a curated list of questions to help district leaders devise implementation strategies across six core focus areas: Executive Leadership, Operations, Data, Technology, Security, and Risk Management.
Example: Wayne RESA, Michigan, USA, created an AI guidance website and document with ethical, pedagogical, administrative, and policy considerations. “AI systems often need large amounts of data to function effectively. In an educational context, some uses could involve collecting and analyzing sensitive data about students, such as their learning habits, academic performance, and personal information. Therefore, maintaining student privacy is the primary ethical consideration. Even with consent, it is not appropriate to prompt public models with identifiable data because anything shared with a model, even if information is shared in prompt form, may be added to the model for future reference and even shared with other users of the model.”
Current regulations relevant to the use of AI in education
The Common Sense Media AI Ratings System provides a framework “designed to assess the safety, transparency, ethical use, and impact of AI products.”
Foundational concepts of AI literacy include elements of computer science, as well as ethics, psychology, data science, engineering, statistics, and other areas beyond STEM. AI literacy equips individuals to engage productively and responsibly with AI technologies in society, the economy, and their personal lives. Schools can create opportunities for educators to collaborate and consolidate lessons learned to promote AI literacy across disciplines.
One of the major benefits of learning about AI is developing computational thinking: a way of solving problems and designing systems that draws on concepts fundamental to computer science. Learning how AI works is an opportunity to build computational thinking skills such as breaking down complex problems, analyzing data critically, and evaluating the effectiveness of solutions.
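As a small, hypothetical illustration of decomposition, the sketch below breaks one question (which class section improved the most between a pre-test and a post-test?) into three small functions, one per step: define a measurement, aggregate it, and compare the results. The data is made up for demonstration.

```python
# Illustrative exercise in decomposition: break one question
# ("which section improved the most?") into small, testable steps.
# All data here is made up for demonstration.

scores = {
    "Section 1": [(70, 78), (65, 80), (88, 90)],  # (pre-test, post-test)
    "Section 2": [(72, 74), (81, 83), (90, 89)],
}

def improvement(pre: int, post: int) -> int:
    """Step 1: define the measurement for a single student."""
    return post - pre

def average_improvement(pairs: list[tuple[int, int]]) -> float:
    """Step 2: aggregate the measurement across a section."""
    return sum(improvement(pre, post) for pre, post in pairs) / len(pairs)

def most_improved(data: dict[str, list[tuple[int, int]]]) -> str:
    """Step 3: compare sections and evaluate the result."""
    return max(data, key=lambda s: average_improvement(data[s]))

print(most_improved(scores))  # Section 1
```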
Article 26 of Argentina’s Framework for the Regulation of the Development and Use of AI states that “AI training and education will be promoted for professionals, researchers and students, in order to develop the skills and competencies necessary to understand, use and develop AI systems in an ethical and responsible manner.”
AI literacy has benefits for a wide range of stakeholders and a variety of purposes.
Example: The California Department of Education offers information regarding the role of AI in California K-12 education. “Knowing how AI processes data and generates outputs enables students to think critically about the results AI systems provide. They can question and evaluate the information they receive and make informed decisions. This is of particular significance as students utilize AI in the classroom, to maintain academic integrity and promote ethical use of AI.”
TeachAI’s AI Literacy Framework will be released in the summer of 2024 and will be useful to inform standards and curriculum, professional development, and integration across various subjects.
Graphic: In 2019, Gwinnett County Public Schools, Georgia, USA, launched a K-12 AI literacy initiative that includes both discrete and embedded learning experiences across content areas through the lens of their AI Learning Framework. High school students have the option to participate in the discrete three-course AI pathway, which dives beyond literacy to rigorous technical learning for those students interested in an AI career.
Graphic (below): The AI4K12 Five Big Ideas in AI describe what every K-12 student should know about how AI works.
Discussion Questions
✅ How does the education system support staff and students in understanding how to use AI and how AI works?
✅ What is the strategy for incorporating AI concepts into core academic classes, such as computer science?
✅ How is systemwide participation in AI education and professional development being encouraged and measured?
Guidance should include responsible use cases in line with AI’s potential to support community goals, such as improving student and teacher well-being and student learning outcomes. Rather than just acknowledging the risks of AI in schools, education systems should provide guidance on mitigating the risks so the potential benefits can be realized.
Discussion Questions
✅ Do our policies describe and support opportunities associated with using AI?
✅ Do our policies describe and proactively mitigate the risks associated with using AI?
Example: The Code of Student Conduct of the Madison City Schools, Alabama, USA, integrates an Artificial Intelligence Acceptable Use Policy into section 4.8.14 of the “Acceptable Use Of Computer Technology And Related Resource” and recognizes specific risks of AI use.
Looking ahead: As new research emerges on the applications of AI in educational settings, schools should rely on evidence-based methods to guide initiatives.
Guidance from the Allen Institute for AI
Though ethical AI research continues, current best practices exist.
Discussion Questions
✅ Do our policies sufficiently cover academic integrity, plagiarism, and proper attribution issues when using AI technologies?
✅ Do we offer professional development for educators to use commonly available AI technologies to support the adaptation of assignments and assessments?
✅ Do students have clear guidance for citing AI usage, using it properly to bolster learning, and understanding the importance of their voice and perspective in creating original work?
While it is necessary to address plagiarism and other risks to academic integrity, AI simultaneously offers staff and students an opportunity to emphasize the fundamental values that underpin academic integrity – honesty, trust, fairness, respect, and responsibility. For example, AI tools can help staff and students quickly cross-reference information and claims, though they must still be critical of the output. AI’s limitations can also showcase the unique value of authentic, personal creation.
Existing academic integrity policies should be evaluated and updated to include issues specific to generative AI. Students should be truthful in giving credit to sources and tools and honest in presenting work that is genuinely their own for evaluation and feedback. Students should be instructed about properly citing any instances where generative AI tools were used.
Teachers should be transparent about their uses of AI and clear about how and when students are expected to use or not use AI. For example, a teacher might allow the limited use of generative AI on specific assignments or parts of assignments and articulate why they do not allow its use in other assignments.
Teachers should not use technologies that purport to identify the use of generative AI to detect cheating and plagiarism. The accuracy of these technologies is questionable, leading to the risk of false positives and negatives. Their use can promote a culture of policing assignments to maintain the status quo rather than preparing students for a future where AI usage is ubiquitous.
Example: Section 7.50 of the Round Lake Area Schools Student Handbook, Use of Artificial Intelligence, states that “AI is not a substitute for schoolwork that requires original thought. Students may not claim AI generated content as their own work [without citation]. In certain situations, AI may be used as a learning tool or a study aid. Students who wish to use AI for legitimate educational purposes must have permission from a teacher or an administrator. Students may use AI as authorized in their Individualized Education Program (IEP). Students may not use AI, including AI image or voice generator technology, to violate school rules or school district policies.”
Tips for Restructuring Assignments to Avoid Plagiarism
The Oregon Department of Education suggests multiple strategies.
How do you cite generative AI content?
Use one of the following resources:
OpenAI now has a “Get Citation” feature that creates a current citation. Example: OpenAI. (2023). ChatGPT (August 3 Version) [Large language model]. https://chat.openai.com
Be Clear About When and How to Use AI for Assignments
Level of AI Use: Permissive
Description: Students are allowed to utilize AI tools freely to assist in their assignments, such as generating ideas, proofreading, or organizing content.
Example Instruction: "You may use AI tools as you see fit to enhance your assignment and demonstrate your understanding of the topic."

Level of AI Use: Moderate
Description: Students can use AI tools for specific parts of their assignments, such as brainstorming or initial research, but the core content and conclusions should be original.
Example Instruction: "You can employ AI tools for initial research and data analysis, but the main content, arguments, and conclusions should be your own."

Level of AI Use: Restrictive
Description: AI tools are not permitted for the assignment, and all work must be the student's original thoughts and words.
Example Instruction: "Do not use AI tools for this assignment. All content must be original, and any use of AI will be treated as plagiarism."
Discussion Questions
✅ Do our policies clarify that staff are ultimately responsible for any AI-aided decision and that AI is not solely responsible for any major decision-making or academic practices?
✅ How do our policies ensure that students retain appropriate agency in their decisions and learning paths when using AI tools?
Any decision-making practices supported by AI must enable human intervention and ultimately rely on human approval processes. These decisions include instructional decisions, such as assessment or academic interventions, and operational decisions, such as hiring and resource allocation. AI systems should serve in a consultative and supportive role without replacing the responsibilities of students, teachers, or administrators.
Example: Peninsula School District, Washington, USA. AI Principles and Beliefs Statement: “The promise of Artificial Intelligence (AI) in the Peninsula School District is substantial, not in substituting human instructors but by augmenting and streamlining their endeavors. Our perspective on AI in education is comparable to using a GPS: it serves as a supportive guide while still leaving ultimate control with the user, whether the educator or the student.”
Looking ahead: In the future, AI policies should expect increased transparency from AI providers on how tools work and include statements like, “Our school system aims to work with AI tools that are transparent in how they operate, that provide explanations for outputs, and that enable human oversight and override. We will require all providers to make it clear when a user is interacting with an AI versus a human.”
Discussion Questions
✅ Does our education system’s guidance on AI recognize the need for continuous change?
✅ Has the education system reassessed existing products, as their providers may have added AI features since their initial evaluation?
✅ Is there a plan for community input on AI policy and implementation, including feedback from students, parents, teachers, and other stakeholders?
Guidance should be reviewed and updated often to ensure it continues to meet the school’s needs and complies with changes in laws, regulations, and technology. Guidance and policies will benefit from feedback from various stakeholders, including teachers, parents, and students, especially as more is learned about the impact of AI in education. See suggestions for monitoring, ongoing support, and collecting feedback in Digital Promise’s Emerging Technology Adoption Framework.
Example: Gwinnett County Schools in Georgia, USA, created an office dedicated to computer science and AI. “The Office of Artificial Intelligence and Computer Science provides supports for the district’s K-12 Computer Science for All (CS4All) program as well as K-12 competition robotics. The office also supports the AI and Future-Readiness initiative in collaboration with other departments. Future-Readiness emphasizes the learning needed for students to be ready for their futures. As advanced technologies continue to impact the workplace and society, GCPS students will be future-ready as informed users and developers of those technologies.”
Looking ahead: Effective tools or features to monitor AI and its use in school should be developed and required as part of procurement procedures.