High School ELA
Our English Language Arts (ELA) review teams spend approximately 5-10 hours per week over the course of several months creating the detailed reports on the site. Reviewers independently collect evidence from materials for identified indicators and meet weekly to share findings and decide which evidence best represents each indicator. The review rubric that teams use to evaluate quality is the EdReports.org Quality Instructional Materials Tool for High School ELA.
The EdReports.org Quality Instructional Materials Tool for High School ELA was developed by a team of anchor educators who met in early 2017. This anchor educator group took into consideration information, feedback, and suggestions from the “listening tour” that EdReports staff conducted with literacy experts and educators from across the country. The group reviewed the Common Core State Standards for English Language Arts (CCSS ELA), research around literacy instruction, current state materials review rubrics and tools, the Revised Publishers’ Criteria for the Common Core State Standards in ELA and Literacy for Grades 3-12 and other resources such as the Instructional Materials Evaluation Tool for ELA/Literacy 3-12 (IMET), the ELA/Literacy Grade-Level Instructional Materials Evaluation Tool (GIMET), and the Educators Evaluating the Quality of Instructional Products rubric for ELA/literacy 3-5 and ELA 3-12 (EQuIP). From this study and collaboration with educators at many levels and roles, the Quality Instructional Materials Tool for High School ELA was created. The Evidence Guides were developed to support reviewers’ understanding of how to identify evidence using this rubric.
Review teams are composed of expert ELA educators from across the country. Teams collect evidence and determine scores based on the core materials used in the classroom over the course of a school year. Review teams begin at the indicator level and identify evidence for the indicators from each grade in the series. Teams meet weekly online (audio and video meetings) to decide on scores and identify the most representative evidence to support each indicator. Then they move to the next indicator, continuing to work within and across multiple grades. The lead reviewers from each team meet separately each week to discuss indicators and calibrate how evidence is being collected. Calibrators work across teams to ensure that definitions and scoring are applied consistently, and to support the application of the Evidence Guides in all review team work.
Materials reviewed include teacher editions, student editions, and any related texts that are identified as core products within a year-long comprehensive program.
A key component of the ELA review tool is how it addresses the whole of the CCSS ELA to include reading, writing, speaking and listening, and language. Teams review the materials to ensure that all standards are covered coherently over the course of a school year’s worth of instructional materials. The tool identifies both the presence and the integration of the standards to support students as they build knowledge and literacy skills.
Using the EdReports.org Quality Instructional Materials Tool for High School ELA and related Evidence Guides, reviewers consider the following to create the high-quality, evidence-rich reports:
Gateway 1: Text Quality and Complexity, and Alignment to Standards with Tasks and Questions Grounded in Evidence
If material "meets" or "partially meets," move to Gateway 2.
Gateway 2: Building Knowledge with Texts, Vocabulary, and Tasks
If material "meets" Gateways 1 and 2, move to Gateway 3.
Gateway 3: Instructional Supports and Other Usability Indicators
Text Quality and Complexity and Alignment to the Standards with Tasks and Questions Grounded in Evidence (Gateway 1)
The criteria in this first gateway help reviewers determine whether high-quality texts are the central focus of lessons, are at the appropriate grade-level text complexity, and are accompanied by quality tasks aligned to the standards of reading, writing, speaking, listening, and language in service of growing literacy skills. This gateway comprises two criteria for grades 9-12.
Criterion 1: Reviewers first consider whether the texts are worthy of students’ time and attention (of quality, rigorous, and at the right text complexity for grade level, student, and task).
One of the most frequent questions we receive is how reviewers determine text complexity. Text complexity is defined in CCSS Appendix A and Supplemental Information for Appendix A as having three components: quantitative dimensions, qualitative dimensions, and reader and task considerations. Our tool also measures these components through a multi-step process. First, reviewers take stock of the quantitative range of the texts and compare these to the grade-band range provided in the standards. Then, review teams consider the qualitative features of the text. Reviewers also work with the reader and task considerations, particularly if the texts fall outside the grade band.
Note: Indicator 1b is not rated but evidence is still collected to be included in the review. EdReports.org acknowledges that attention to the distribution of text types and genres is critical, but may vary widely from program to program while still meeting the intent and distribution called for in the standards.
Criterion 2: Then reviewers look to see if materials provide opportunities for rich and rigorous evidence-based discussions and writing about texts to build strong literacy skills.
In this set of indicators, reviewers identify high-quality sequences of text-dependent questions and tasks that support students’ reading, writing, speaking and listening, and language. The reviewers identify the specific lessons, activities, practice, and assessments that meet the grade-level standards. For example, when looking at the writing within materials, reviewers seek to identify whether the writing tasks are grounded in evidence (1m), provide students opportunities to practice the writing types required for each grade (1l), provide practice in both on-demand and process writing (1k), and are text-dependent (1g and 1h).
In order for instructional materials to receive a designation of “meets” or “partially meets” expectations for Gateway 1, materials must at least partially meet expectations for both Criteria 1 and 2. Materials cannot receive a designation of “does not meet” on either of these criteria and be reviewed for Gateway 2.
If materials meet or partially meet expectations for Gateway 1, reviewers then proceed to the second gateway, which covers how well the materials support students to build their knowledge and academic vocabulary and identifies the integration of the standards. Indicators in this gateway support reviewers to evaluate whether materials build students’ knowledge across topics and content areas (and/or themes, where appropriate), if academic vocabulary instruction is intentionally and coherently sequenced to build vocabulary, and if questions and tasks build in rigor and complexity to culminating tasks that demonstrate students’ ability to analyze components of texts and topics. This gateway also includes identifying the presence of coherent, year-long plans for academic vocabulary, writing development, research skills practice, and independent reading.
This gateway has one criterion and eight indicators.
Only materials that fully meet the expectations for the first two gateways are reviewed for Instructional Supports and Usability (Gateway 3). The last set of indicators that our reviewers examine addresses how well materials support student learning and engagement and support teacher learning and understanding of the standards. Reviewers also look to see whether materials offer supports to differentiate instruction for diverse learners and enrich instruction through technology. There are four scored criteria and one non-scored criterion. For ‘Effective Technology Use,’ indicators are not rated, but evidence is still collected and included in the review. EdReports.org considers technology use to be an important element of usability, but because printed and online materials vary widely in their use of technology, we are not scoring these indicators at this time.