AI Tool Evaluation and Selection

AI Literacy Framework Level 2 Use and Apply AI:
AI Tool Evaluation and Selection
Core Competencies covered in this chapter:
- Demonstrate familiarity with the core features and capabilities of various Gen AI tools.
- Evaluate and select appropriate Gen AI tools to effectively address specific tasks or purposes.
Introduction
The rapid proliferation of AI tools presents higher education faculty with an overwhelming array of options, making it difficult to identify which technologies genuinely enhance teaching and learning versus those that distract from our goals. This chapter provides an overview of the types of AI tools and a systematic framework for evaluating and selecting AI tools based on your specific educational challenges, rather than adopting impressive-seeming technologies that may not address real pedagogical needs. You’ll learn a structured approach that helps you make informed decisions about whether, when, and how to integrate AI into your courses effectively.
Interactive Module: AI Tool Evaluation and Selection
Reflect and Apply: Educator’s Toolkit
Core Competencies for Educators
Educators should be familiar with popular Gen AI tools such as Copilot and ChatGPT, as well as specialized tools within their discipline, which may include media production, language, and research tools. They should be able to articulate the differences in the tools’ capabilities and outline best-use scenarios for achieving their course learning objectives. Finally, educators should be able to select appropriate AI tools for teaching and student learning.
Reflection Questions
- Reflecting on the various categories of AI tools (standalone, integrated, custom), how has your understanding of the AI landscape evolved? Which type of tool do you now see as most relevant for your immediate teaching context, and why?
- How did following the systematic AI tool selection process compare to any previous, more ad hoc methods you might have used for technology adoption? What specific advantages or insights did this structured approach provide?
- The selection process emphasizes starting with pedagogical needs before technology. How did beginning by defining your specific educational challenge influence your entire selection process, and did it prevent any potential pitfalls you might have otherwise encountered? What new teaching or learning challenges might you consider addressing with AI tools?
- Which of the evaluation criteria – such as data privacy, accessibility, integration capabilities, error tolerance, or scalability – did you find most critical or challenging to assess for your specific needs, and what did you learn from this assessment?
- Institutional support often makes the difference between success and abandonment. How did investigating and considering your institution’s AI resources and support systems impact your choices, and what steps might you take to leverage these resources more effectively in the future?
- The selection process highlights the unpredictable nature of generative AI tools and the importance of hands-on testing with realistic scenarios. What unexpected capabilities or limitations did you discover during your testing phase that you couldn’t have predicted from initial evaluations or reviews?
- During the pilot phase, the selection process suggests being prepared to adjust teaching practices. What specific modifications to your assignments, guidance, or assessment methods did you identify as necessary for successful AI integration, and what does this imply about the role of technology in pedagogical change?
- Reflect on the argument that “deciding not to adopt an AI tool after systematic evaluation is often the right choice.” How does this perspective resonate with your experience, and how might it inform your future approach to evaluating new educational technologies?
Use the Padlet Discussion Board to share your thoughts with peer educators.
Tips and Best Practices
As AI tools continue evolving at a rapid pace, educators need reliable resources for both tracking new developments and conducting thorough evaluations. The following resources could be helpful for keeping up with the latest developments and for supporting the evaluation process.
- AI for Education Pedagogy Model Benchmark: When evaluating AI tools for educational use, you often need to assess the underlying AI models’ capabilities. However, standard commercial AI benchmarks focus on general performance rather than education-specific competencies. The Pedagogy Benchmark addresses this gap by testing whether AI models can pass actual teacher certification exams and demonstrate pedagogical knowledge. This education-focused testing helps educators make informed decisions about which models are suitable for tutoring applications and other instructional purposes.
- What AI Can Do Today: This resource serves as your guide to understanding current AI capabilities. You can use it to search for AI tools that offer specific functionalities aligned with your teaching needs.
- ITHAKA S+R Generative AI Product Tracker: This maintained database functions as your comprehensive catalog for discovering and comparing available tools in the educational market. The tracker specifically focuses on generative AI products that are either designed for higher education or actively used by faculty and students for teaching, learning, and research activities. As a living document that receives regular updates, it is particularly valuable for staying current with the rapidly changing landscape of educational AI tools.