9. Early screening

Introduction

A screening measure is a quick and informal evidence-based test that provides information about possible reading difficulties.[875] It identifies students who are at risk for or currently experiencing reading difficulties so they can receive more instruction or immediate intervention. Although beyond the scope of this report, early measures can be used to screen for difficulties in oral language development.

Screening is not a diagnosis. It does not identify children for a special education designation or label. It is an early detection strategy for the benefit of students and teachers. Teachers better understand how to help their students, and students receive immediate and targeted support.

Universal screening means conducting common and standardized screening assessments on all students, using evidence-based screening instruments. These instruments have established reliability and validity standards to increase confidence in their effectiveness.

Universal early screening is not only effective, but also necessary to protect the rights of all students, particularly students from many Code-protected groups. It facilitates early interventions, reduces the potential for bias, and creates better decision-making around student outcomes. Schools can make data-informed decisions because they can compare results from common screening tools across populations.

All screening tools need to be used responsibly and consider the cultural and linguistic diversity of the student community. No screening tool should ever be used to devalue a child. The purpose of screening and data collection is to make sure students and teachers have the resources they need.

As outlined in section 8, Curriculum and instruction, screening is a key part of a Response to Intervention (RTI) or Multi-tiered System of Supports (MTSS) approach to inclusive education. If schools provide evidence-based classroom instruction, use universal screening to identify at-risk students, and provide immediate interventions when needed, fewer students will fail to learn to read. Screening is effective when it is early, evidence-based and universal.

When screening happens early, schools can identify struggling and at-risk readers and provide early, targeted intervention. Earlier interventions are more effective because students’ response to intervention declines as they get older.[876] Screening identifies young readers who are behind on foundational reading skills, so they can receive support before they develop later reading difficulties and the associated lifelong consequences of failing to learn to read. That is why early screening is critical for catching students before they fall behind.

The inquiry found that Ontario does not have universal, systematic, evidence-based early screening to identify at-risk students who need immediate interventions.

One of the obstacles to universal screening is the Ministry of Education’s Policy/Program Memorandum (PPM) 155. As currently interpreted, PPM 155 leaves the frequency, timing and selection of diagnostic assessments to individual teachers’ professional judgment. Screening students is most effective when evidence-based measures are used at specific intervals. However, neither the evidence-based measures nor the timing of screening is standardized in Ontario. As a result, this PPM has contributed to gaps, inconsistencies and a lack of an effective, student-centred approach to early screening.

Screening practices vary between boards, schools and individual teachers. Boards use a combination of observational methods, such as running records or miscue analyses, that are often associated with non-evidence-based reading programs. Some boards use assessments they developed in-house, but these do not appear to have been adequately evaluated to make sure they are effective. While some boards include screeners that may be evidence-based on their list of possible assessments, teachers are not required to use them and boards could not confirm whether they were being used.

Ontario schools need standardized protocols about when, how often and which screening tools should be used. Teachers should be an integral part of developing this model. However, how PPM 155 is being applied is not working for Ontario students. The current model does not create the necessary conditions for data collection because there are no common screening tools across classrooms, schools and school boards.

To successfully implement a tiered approach, screening tools must be evidence-based, include the appropriate measures and be administered twice a year from Kindergarten to Grade 2. This screening data must be used to provide immediate intervention for students who need it.

Ontario must address its current inadequate approach to screening. The current approach creates unnecessary conflict and confusion between school boards and teachers, and neglects the best interests of at-risk children. The science of screening for early reading skills is advanced, the financial cost is minimal and the impact of current practices on students is harmful.[877]

The importance of early screening

Waiting to see if a student has difficulty learning to read does not work. This “wait and see” approach to identifying learning difficulties is widespread in Ontario’s education system. Schools typically wait for students to present extreme difficulty before a teacher notices or is able to refer a student for further support.

Education researchers have noted several disadvantages with the “wait and see” model. Assessment is imprecise because it is mostly based on teacher observation or identification measures that are not linked to effective instruction.[878] This can lead to decisions informed by bias, rather than data. The intervention is reactive and based on deficit, rather than proactive and based on risk.[879] Students receive interventions too late or not at all.

Schools must screen every student early (starting in Kindergarten Year 1) using evidence-based screening tools. The earlier schools screen students, the earlier students can receive instruction or intervention that will enable them to learn to read accurately and fluently.[880]

In its submission to the inquiry, the Association of Psychology Leaders in Ontario Schools (APLOS) emphasized that screening is important because it provides “a classroom profile to help the teacher to determine the focus of curriculum for his or her classroom.” APLOS added that this leads to early and effective intervention for struggling readers, which reduces “the potential for long-term learning challenges” for many students. APLOS told the inquiry:

Early reading screenings provide an opportunity to implement programming that is responsive to emerging learning needs. This does not need to result in children being separated into groups or centred out among their peers. Instead, it ensures that learning goals continue to change within the classroom and that the needs of all learners are addressed. Teaching in the early years that incorporates direct and systematic instruction is a powerful preventative tool that will reduce reading difficulties in the later years, thereby decreasing requirements for more comprehensive assessments to determine the core difficulty.  

Early screening can also reduce the need for professional assessments. A board speech-language pathologist (SLP) told the inquiry that early screening is “essential for all students Kindergarten to Grade 2. If done correctly, it will significantly reduce the referrals to Psychologists and SLPs and may likely increase the [number of] students with grade-appropriate literacy skills.”

Also, if tier 1 classroom instruction is not evidence-based, tier 2 interventions will not be as effective. Evidence-based instruction reinforces the foundational skills targeted in tier 2 interventions in the classroom, and enables students to continue developing these skills even after interventions have ended.

Evidence-based screening

Evidence-based screening is the most effective way to identify struggling and at-risk readers. It is an objective and measurable way to improve student outcomes and reduce bias in assessment, including for students from culturally and linguistically diverse backgrounds.[881] Bias can affect teachers’ and other educational professionals’ perceptions of student ability and performance. Universal screening safeguards against these potential biases.

The inquiry heard about assumptions that students who struggle to read should not be learning English and French at the same time. This incorrect assumption has limited many students’ access to French-language education. For example, sometimes newcomer and multilingual students are discouraged from enrolling in French Immersion programs, and the inquiry heard many examples of families being told a student with reading difficulties should not remain in a French Immersion program or must be withdrawn for the school to offer supports.

The inquiry also heard that students in French boards have less access to resources and programs for reading difficulties. Parents reported giving up their right to have their child receive a French-language education and moving their child to an English board to access better supports. One parent said:

En Ontario, nous avons le droit à l'enseignement en français par contre lors de trouble d'apprentissage, il y a très peu de ressources ou programmes disponibles pour le personnel enseignants et les élèves. C'est en partie pour cette raison que nous avons retiré notre enfant du système scolaire francophone.

[In Ontario, we have the right to be taught in French. However, there are very few resources or programs available for teachers and students with learning disabilities. This is part of the reason why we removed our child from the French school system.]

Students who have, or who are at risk for, reading difficulties should enjoy the same academic, social and employment-related benefits of learning French. Parents who are French-language rights holders have a right to have their child receive a French-language education under section 23 of the Canadian Charter of Rights and Freedoms. The Ministry has recognized all students can learn French given the appropriate supports.

A preventative approach is also needed for students learning French who are at risk for reading disabilities. If a student struggles to read in French, they will likely also have difficulty in English, so removing them from a French-language program is not the solution. Instead, early, scientifically validated screening and evidence-based interventions should equally be implemented within French-language instruction.

Not all research is equal. Research has different degrees of quality. Individuals or companies who create screening tools often claim their assessments are research-based. However, research alone is not an indicator of whether a given tool is valid and reliable. The research must be:

  1. Valid (strong[882] internal and external validity)
  2. Reliable
  3. Linked to the science of reading instruction and acquiring foundational reading skills.

Internal validity relates to how well a study is conducted. It measures whether the research was done right. Studies with high internal validity can support causal conclusions. Studies with more moderate internal validity support the generality of a relationship, but causality is uncertain.

External validity relates to how applicable the findings are in the real world. Studies with high external validity include a range of participants and settings so that study results can be generalized to those participants and settings. Research with moderate external validity may have smaller sample sizes, so generalization to other populations may be uncertain.

External validity also refers to what is being measured and how this relates to the larger field of study. For example, a measure of whether a child can use syntactic (sentence structure) or other cues to guess at words in a sentence is not a valid indicator of foundational word-reading skills.

Reliability means the same results are found across multiple settings and assessors. Results do not change based on when or where the screening occurs and who does the scoring.
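
These properties can be quantified. The sketch below is a minimal illustration with hypothetical data, not drawn from any actual screening tool: it treats inter-assessor reliability as the correlation between two assessors’ scores for the same students, and predictive validity as the correlation between fall screening scores and a later reading outcome.

```python
# Illustrative only: quantifying the reliability and predictive validity
# of a hypothetical screening measure. Real tools report these statistics
# from formal psychometric studies.
from statistics import correlation  # Pearson's r; Python 3.10+

# Two assessors independently score the same ten students on one screener.
assessor_a = [12, 34, 27, 45, 18, 39, 22, 30, 41, 15]
assessor_b = [14, 33, 25, 47, 17, 40, 24, 29, 43, 16]

# The same students' scores on a year-end reading outcome measure.
year_end_reading = [20, 55, 40, 70, 25, 62, 35, 48, 68, 22]

# Reliability: results should not depend on who does the scoring.
inter_assessor_r = correlation(assessor_a, assessor_b)

# Predictive validity: screening scores should predict later reading.
predictive_r = correlation(assessor_a, year_end_reading)

print(f"Inter-assessor reliability: r = {inter_assessor_r:.2f}")
print(f"Predictive validity: r = {predictive_r:.2f}")
```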

When it comes to reading, many screening tools have gone through rigorous development and their level of validity and reliability is well known.[883]

The Institute of Education Sciences practice guide

The Institute of Education Sciences (IES) is an independent and non-partisan arm of the U.S. Department of Education. The institute reviews the best available scientific evidence and expertise to address systemic challenges in education that cannot be solved by single programs. It publishes practice guides that are subjected to rigorous external peer review.

In 2009, the IES published the practice guide, Assisting Students Struggling with Reading: Response to Intervention (RTI) and Multi-Tier Intervention in the Primary Grades.[884]

This guide was written by a panel of experts in reading, measurement and research methodology. The experts summarized recommended screening practices for early reading and provided evidence of their effectiveness. Each recommendation received a rating that reflects the strength of the research showing its effectiveness.[885] The recommendations with moderate to strong evidentiary support are that schools should:

  1. Screen all students for potential reading problems at the beginning of the year and again in the middle of the year
  2. Screen students in Kindergarten through Grade 2
  3. Regularly monitor the progress of students who have an elevated risk for developing reading disabilities
  4. Use measures that are efficient, reliable, and reasonably valid
  5. Provide intensive, systematic instruction on up to three foundational reading skills in small groups to students who score below the benchmark score on universal screening. Typically, these groups meet between three and five times a week, for 20 to 40 minutes. (A minimal sketch of this screening-and-flagging logic follows below.)
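
To make recommendations 1 and 5 concrete, the following is a minimal sketch of that screening-and-flagging logic. The benchmark values, measure names and student data are hypothetical; in practice, benchmarks come from the norms published with a validated screening tool.

```python
# Minimal sketch of universal screening against benchmark scores.
# All benchmarks and student data below are hypothetical.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    student: str
    measure: str  # e.g., "phoneme segmentation"
    score: int

# Hypothetical minimum expected score per measure at this time point.
BENCHMARKS = {"letter naming fluency": 27, "phoneme segmentation": 18}

def flag_for_tier2(results: list[ScreeningResult]) -> list[str]:
    """Return students scoring below benchmark on any screened measure.

    Flagged students receive small-group tier 2 instruction (typically
    three to five sessions a week, 20 to 40 minutes each).
    """
    flagged = {r.student for r in results if r.score < BENCHMARKS[r.measure]}
    return sorted(flagged)

fall_screening = [
    ScreeningResult("Student A", "letter naming fluency", 31),
    ScreeningResult("Student A", "phoneme segmentation", 12),  # below 18
    ScreeningResult("Student B", "letter naming fluency", 29),
    ScreeningResult("Student B", "phoneme segmentation", 22),
]

print(flag_for_tier2(fall_screening))  # ['Student A']
```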

The IES report also recommended screening areas that are appropriate based on each grade level:[886]

Kindergarten screening batteries should include measures assessing letter knowledge and phonemic awareness...As children move into Grade 1, screening batteries should include measures assessing phonemic awareness, decoding, word identification and text reading. By the second semester of Grade 1, the decoding, word identification, and text reading should include speed as an outcome. Grade 2 batteries should include measures involving word reading and passage reading. These measures are typically timed.

The expert panel created a table based on the most recent scientific literature that outlines the screening measure that is required at each time point to assess early word-reading accuracy and fluency.[887]

Figure 4

Table: Recommended target areas for early screening and progress monitoring

Measure: Letter naming fluency
Recommended grade levels: K–1
Proficiencies assessed: Letter name identification and the ability to rapidly retrieve abstract information
Purpose: Screening
Limitations: This measure is poor for progress monitoring since students begin to learn to associate letters with sounds. It is not valid for English learners in Kindergarten, but seems valid for Grade 1.

Measure: Phoneme segmentation
Recommended grade levels: K–1
Proficiencies assessed: Phonemic awareness
Purpose: Screening and progress monitoring
Limitations: This measure is problematic for measuring progress in the second semester of Grade 1. As students learn to read, they seem to focus less on phonemic skills and more on decoding strategies.

Measure: Nonsense word fluency
Recommended grade levels: 1
Proficiencies assessed: Proficiency and automaticity with basic phonics rules
Purpose: Screening and progress monitoring
Limitations: This measure is limited to only very simple words and does not tap the ability to read irregular words or multisyllabic words.

Measure: Word identification
Recommended grade levels: 1–2
Proficiencies assessed: Word reading
Purpose: Screening and progress monitoring
Limitations: This measure addresses many of the limitations of nonsense word fluency by including multisyllabic and irregular words.

Measure: Oral reading fluency (also called passage reading fluency)
Recommended grade levels: 1–2
Proficiencies assessed: Reading connected text accurately and fluently
Purpose: Screening and progress monitoring
Limitations: Although the measure has moderately strong criterion-related validity, it cannot give a full picture of students’ reading proficiency. Many students will score close to zero at the beginning of Grade 1. The measure is still a reasonable predictor of end-of-year reading performance.

Although the expert panel did not recommend any specific screening tool, they recommended that when schools select screening tools, they should have similar properties to measures that have been examined in the scientific literature. The panel cited three specific screening tools and the corresponding studies that show they include measures that accurately predict future student performance. These tools are DIBELS,[888] Comprehensive Test of Phonological Processing (CTOPP),[889] and the Texas Primary Reading Inventory.[890] The Rapid Naming Subtests of the CTOPP could also be included as these predict later word-reading accuracy and fluency difficulties.

Screening measures

Screening measures are designed to make sure students are learning the appropriate skills based on their grade. At different points in time, specific screening measures need to be used that are appropriate to the expected reading development.

As shown in the IES report, the science of early identification is advanced and many tools are available to predict risk for reading difficulties.[891]

Screening tools that assess phonological awareness of units larger than the phoneme, such as rhyming, syllables and onsets, may be suitable in Kindergarten Year 1. In Kindergarten Year 2, and as the IES table above shows, screening needs to focus on phonemic awareness and letter-naming fluency. In Grade 1, the focus should be on word identification, decoding, text reading and non-word reading. Later in Grade 1 and in Grade 2, screeners should measure word- and text-reading fluency, which means these measures are timed.
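
This progression amounts to a simple schedule that maps each grade and time point to the measures expected at that point. The sketch below is illustrative only: the grade labels and measure lists paraphrase the progression just described, and actual measures and timings would be set by a board-wide protocol using validated tools.

```python
# Illustrative screening schedule derived from the progression described
# above. This is a sketch, not a mandated protocol.
SCREENING_SCHEDULE = {
    ("Kindergarten Year 1", "fall"): ["rhyming", "syllable awareness", "onset awareness"],
    ("Kindergarten Year 2", "fall"): ["phonemic awareness", "letter-naming fluency"],
    ("Grade 1", "fall"): ["word identification", "decoding", "text reading", "non-word reading"],
    ("Grade 1", "winter"): ["word-reading fluency (timed)", "text-reading fluency (timed)"],
    ("Grade 2", "fall"): ["word-reading fluency (timed)", "passage-reading fluency (timed)"],
}

for (grade, time_point), measures in SCREENING_SCHEDULE.items():
    print(f"{grade}, {time_point}: {', '.join(measures)}")
```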

It is important to measure different skills at multiple points across these early years, to avoid missing students who can perform simpler tasks but struggle as reading demands increase.

For example, a student may do well if simpler screening measures are administered in the fall of Year 2 (rhyming, syllable and onset awareness), but struggle later with the more complex tasks of blending and segmenting individual phonemes within words. Another child may struggle with decoding words, and another with building automatic or quick reading of words. All these later difficulties are typical of students at risk for reading disabilities, but these students would be missed in scenarios where only early, pre-reading skills are screened in Kindergarten or even early Grade 1.

Screening should therefore focus on more than just the simpler areas of decoding. It should include advanced skills, so interventions can properly focus on building all the necessary skills. The goal is for students to not only learn to read, but also to read as accurately and quickly as their same-age peers. 

The role of risk factors

Some students may have family members with a diagnosed reading disability or undiagnosed reading difficulty. This information on family history can also inform early identification.[892] However, family risk status should never be a prerequisite for an intervention. Rather, student performance on screening measures and response to instruction should determine whether a student receives a reading intervention.[893]

Also, environmental factors such as socioeconomic disadvantage can place children at risk for reading difficulties. These children will not necessarily have a family member with a documented history of reading disability. This is one reason why early screening for all students is crucial. Regardless of whether a student struggles to learn to read because of stronger hereditary and/or environmental contributors (which cannot be determined), the type of intervention does not change. Dr. Susan Brady, a U.S. psychologist and literacy expert, explains:[894]

Another risk factor for dyslexia is familial risk; if close relatives have been diagnosed with dyslexia there is a genetic risk for a child to develop dyslexia...[895] On the other hand, from an environmental perspective, disruptive early life experiences resulting from stress, poverty, and low levels of parental education also place children at risk for language and literacy deficits[896]… Further, there is increasing evidence that socioeconomic disadvantages can affect children’s brain development[897]…Despite the differences in origin, it is not possible at the individual level to distinguish between the characteristics of cases of biologically based and environmentally induced dyslexia[898]…and of course a child may have both sources of problems. From the perspective of treatment, diagnosed reading and language weaknesses require the same kinds of interventions regardless of the mix of biological or environmental causes.

Progress monitoring

Progress monitoring is distinct from screening. Screening is given at a point in time when the student is likely not receiving an intervention. Screening flags students who need to receive effective interventions to learn the skills that make up word reading and decoding. Screening may also be used as an initial skill assessment to determine starting points for different interventions.

Screening tools measure skills that are highly predictive of reading acquisition or later functioning, but are not always the best way to monitor a student’s progress or response to an intervention. Progress monitoring can provide a more in-depth assessment of reading skills, and can better gauge a child’s response to an intervention. Progress monitoring measures the early reading foundational skills that are being targeted by the intervention. 

Similar to screening, the results from progress monitoring help inform further decision-making and interventions.[899] Together, screening, early intervention and progress monitoring aim to set the student on a positive trajectory in learning all the foundational skills of word-reading accuracy and fluency, so they catch up to their peers. This trajectory is measured either to confirm ongoing grade-level (“average”) performance or to trigger ongoing and increasingly intensive interventions.
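
One common way to operationalize this trajectory check is an “aim line”: a straight line drawn from the student’s baseline score to the benchmark goal, against which repeated progress-monitoring scores are compared. The sketch below uses hypothetical numbers and a simplified version of a common decision rule (several monitoring points below the aim line signal a need to intensify the intervention); it is not any specific tool’s decision logic.

```python
# Minimal aim-line sketch for progress monitoring (hypothetical data).

def aim_line(baseline: float, goal: float, total_weeks: int, week: int) -> float:
    """Expected score at a given week, on a straight line from baseline to goal."""
    return baseline + (goal - baseline) * week / total_weeks

baseline, goal, total_weeks = 10.0, 40.0, 12  # hypothetical fluency scores
observed = {2: 13, 4: 17, 6: 20, 8: 22}       # week -> measured score

below = [w for w, score in observed.items()
         if score < aim_line(baseline, goal, total_weeks, w)]

# Simplified rule: three or more monitoring points below the aim line.
if len(below) >= 3:
    print(f"Weeks below aim line: {below} -> consider intensifying the intervention")
```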

The role of teachers

Teachers play a critical role in assessing students and identifying their learning needs.[900] As recognized in the Report of the Royal Commission on Learning: “…no one knows the student’s capacities, or is in a position to assess them in all their nuances and complexity, better than the classroom teacher.”[901] Teachers spend every day in the classroom teaching and observing students. Observation is one component of assessment, but needs to be supplemented with standardized, early and evidence-based screening.

A teacher may observe that a student is not gaining skills adequately with classroom instruction. The teacher may then refer the student for a tier 2 intervention. Sometimes, parents may voluntarily disclose a family history of reading disability. Combined with teacher observation, this could also flag a student as needing extra support through an intervention. Universal screening makes sure all students, regardless of their family background or being noticed by teachers, are systematically flagged when foundational word-reading skills are not developing as needed.

Screening is not the same as a professional assessment that occurs after a student is referred (for example, psychoeducational assessment or speech-language assessment). Classroom teachers are well placed to screen for word-level reading. When teachers screen students, they learn valuable information about their students to help inform their instruction. However, other educators such as resource or special education teachers are also well-suited to administer these tests, and administering and scoring assessments can be supported by other professionals (such as psychologists and SLPs) in the education system.

Whoever administers and scores the screening tools will need training on the basic principles of early reading screening tools, and specific knowledge about the selected tool(s). In Teaching Reading is Rocket Science, Dr. Louisa Moats says teachers who understand classroom reading assessment have the knowledge to answer questions such as:

  1. Question: What specific skills…should be present at the end of Kindergarten [and] are the best predictors of achievement?
    Answer: Essential skills consist of the ability to segment the phonemes in simple words, to name alphabet letters presented randomly, to produce the sounds represented by most consonants and the short vowels, to spell simple words phonetically, and to demonstrate age-appropriate vocabulary development.
     
  2. Question: Are running records or oral reading tests reliable or valid indicators of reading ability?
    Answer: The reliability of oral reading tests and running records is lower than the reliability of more structured, specific measures of component reading skills. Teachers’ judgments about the cause of specific oral reading errors (for example, miscue analysis) tend to be unreliable, and the category of “visual” errors is misnamed. On the other hand, timed, brief oral reading tests that measure words read correctly per minute are excellent predictors of future reading from about the middle of Grade 1 onward.
     
  3. Question: When are children typically expected to spell trapped, plate, illustrate, preparing and offered?
    Answer:
  • Plate: end of Grade 1 when the most common long vowel spelling is learned.
  • Trapped: end of Grade 2 when the basic doubling rule for endings beginning with vowels is learned.
  • Preparing: end of Grade 4 when students expand their knowledge to Latin-based words with prefixes, roots, and suffixes.
  • Illustrate: end of Grade 5 when more complex words with prefixes, roots, and suffixes are learned.
  • Offered: end of Grade 6 when patterns involve prefixes, roots, and suffixes and more complex spelling changes.[902]
     
  4. Question: Why is it important to test comprehension using several different types of assessment?
    Answer: Several assessments will lead to a more accurate picture of students’ comprehension because the outcome of comprehension tests depends on many variables, including the student’s prior knowledge of the topics in the passages, decoding ability, and vocabulary; the response format; the length of the texts; and so forth.

Word-level reading screening tools can take 10 to 15 minutes to complete per student,[903] plus additional time for scoring and collating the data. Depending on class size, screening all students at one time could take up to three days. Teachers and school administrators who completed the inquiry survey reported that teachers do not have adequate release time to screen students, so fewer students are identified. The inquiry also heard that, due to a lack of sufficient release time for classroom teachers, special education teachers may be pulled away from pressing work of their own and asked to conduct screenings.
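
As a rough illustration of the arithmetic: at 12 minutes per student, a class of 25 students represents about 25 × 12 = 300 minutes, or five hours, of one-on-one assessment time. Fitted around regular instruction without dedicated release time, that can easily stretch across three school days.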

Screening practices in other jurisdictions

North Vancouver District School Board

The North Vancouver District School Board has a policy that states student assessment is “ongoing, research-based, varied in nature and administered over a period of time to allow students to demonstrate their full range of learning.”[904] Teachers are trained to screen all students in Kindergarten and Grade 1 using one common screening tool, and input the data into a central system and student profiles. This data is used to determine if students are on track and are responding well to instruction. Schools publicly report the assessment data and set measurable targets to improve reading outcomes for all students.[905]

Teachers screen all students for reading difficulties starting in January of Kindergarten, using a tool that measures phonological awareness.[906] Based on the results, intervention begins in February.[907] In May, teachers retest the Kindergarten students who are receiving intervention. In fall of Grade 1, teachers administer Dynamic Indicators of Basic Early Literacy Skills (DIBELS) to all students.[908]

DIBELS was created by the University of Oregon Center on Teaching and Learning. It is a series of short, one-minute tests or measures that predict later reading proficiency. Measures include naming the letters of the alphabet, segmenting the phonemes in a word, reading non-words and word-reading fluency. DIBELS results can be used to evaluate individual student development, and to provide feedback on the effectiveness of curriculum and programming.
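
As an illustration of how such one-minute timed measures are typically scored, here is a generic sketch of a words-correct-per-minute calculation. It shows the general logic of timed fluency scoring only; it is not DIBELS’ actual scoring rules.

```python
# Generic sketch: scoring a one-minute oral reading probe as words
# correct per minute (WCPM). Hypothetical data; not DIBELS' own code.

def words_correct_per_minute(words_attempted: int, errors: int, seconds: int = 60) -> float:
    """Words read correctly, pro-rated to a one-minute rate."""
    correct = max(words_attempted - errors, 0)
    return correct * 60 / seconds

# Hypothetical probe: a Grade 1 student reads 38 words in one minute, with 5 errors.
score = words_correct_per_minute(words_attempted=38, errors=5)
print(f"WCPM: {score:.0f}")  # WCPM: 33
```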

The measures also largely overlap with the recommendations in the IES report, described earlier. DIBELS measures are evidence-based and a reliable indicator of early literacy development.[909] Although use of the online tool costs money, the materials are free, and its assessment properties are open-access, so anyone can study them.[910]

A board representative from North Vancouver noted how their screening initiative shifted their understanding of how students learn to read:

We believed that if you simply surrounded kids with good books and good literature, that they would learn to read by osmosis, and we know that that is not true…All kids need good instruction to learn how to read.[911]

United Kingdom

In 2012, the United Kingdom (U.K.) passed a law mandating screening for all students in Year 1 (equivalent to Year 2 in Ontario).[912] Students in Year 2 (equivalent to Grade 1 in Ontario) are screened if they did not meet the expected standard in Year 1.[913] Students who do not achieve the expected standard are given interventions and supports.[914] The assessment takes under 10 minutes per student to administer.

This screening also takes place in the context of a national curriculum that explicitly outlines the components of word-reading instruction (decoding, word-reading skills and phonics instruction).[915] Unlike Ontario’s, the U.K.’s assessment practices are clear, transparent and invite accountability.

Each educator’s responsibilities are clearly outlined in public documents.[916] For example, a guidance document states: “while the teacher’s knowledge of pupils can inform judgements, they must be based on sound and demonstrable evidence. This ensures that judgements are as objective as possible, and consistent between classes and schools.”[917] This judgment is also exercised within the framework of a mandated, common screening tool. Schools must also make sure teachers are appropriately trained and administer the test according to the published guidance.

As well, the U.K. mandates very specific reporting requirements to keep parents informed about their child’s progress, including information about screening results relative to students of the same age in their school and nationally.[918]

The U.K. also collects national data on the percentage of students who achieved the expected phonics standard, broken down by demographic data including gender, income,[919] ethnicity, special education needs and first language other than English.[920]

Australia

In 2017, the South Australian Department of Education and Child Development conducted a pilot of the U.K.’s phonics screener for Reception and Year 1 students (equivalent to Year 1 and Year 2 in Ontario).[921] Following the pilot,[922] South Australia mandated use of the phonics screener for all Year 1 students and provided training to teachers and school leaders.[923] The phonics check is available online for school personnel and families.[924]

In 2021, New South Wales joined South Australia and mandated the phonics screening check (PSC) for all Year 1 students.[925] The Education Minister said: “The evidence speaks for itself. The best results in reading for our students occur when phonics is explicitly and systematically taught from Kindergarten moving on to other forms of reading instruction.”[926]

United States

At least 30 states in the U.S. have legislation that mandates screening for dyslexia.[927] Many of these states have published lists of recommended early screeners[928] or handbooks to guide screening practices.[929] Other states mandate the use of specific screening tools, and others provide criteria to guide the selection process.[930] Screening for dyslexia in these states largely centres on the types of early, universal screening approaches for word-reading skills outlined in this report.

Ohio recently passed a law requiring schools to administer annual dyslexia screenings.[931] The Ohio Department of Education (ODE) must establish a committee that will determine screening and intervention measures to evaluate the literacy skills of students using a structured literacy program. School districts must follow the standards established by the ODE and only use screening tools from an approved list. They must screen all students in Kindergarten and report the results to the ODE. All teachers from Kindergarten through Grade 3 must take a fixed number of training hours in evidence-based approaches for identifying characteristics of dyslexia and instructing students with dyslexia.[932]

In 2021, the New York City Department of Education introduced mandatory screening for students in Kindergarten through Grade 2. All schools will use DIBELS at least three times per year. Schools will then implement intervention plans based on the results.[933] The president of the United Federation of Teachers, a union that represents most teachers in New York City, said his union has advocated for schoolwide screening for years, but concerns with COVID-related learning disruptions prompted the education department to finally mandate universal screening.[934]

Experts in these different jurisdictions have noted[935] that screening by itself will not improve student learning. Changes to the curriculum and teacher education are necessary. Success depends on how well systems, schools and teachers respond to the results of screening and provide evidence-based interventions.

Compared to other jurisdictions, Ontario’s approach to screening is ad hoc, decentralized and undefined.

Ontario’s approach to screening

The Ministry of Education (Ministry) is responsible for developing the screening and assessment policies intended to provide consistent direction to school boards.[936] The Education Act states that the Minister shall:

[R]equire school boards to implement procedures for early and ongoing identification of the learning abilities and needs of pupils, and shall prescribe standards in accordance with which such procedures be implemented.[937]

From 2004 to 2010, the Ministry introduced several policies and initiatives to standardize literacy instruction and assessment. Before 2004, the Ministry’s 1982 Policy/Program Memorandum 11: Early identification of children's learning needs (PPM 11) and the Education Act were the key sources on early identification of student needs.

In 2004, the Ministry made a plan for improving students’ achievement in reading, writing and math. It created the Literacy and Numeracy Secretariat (LNS) to help build capacity among educators and increase student achievement. The LNS also helped provide support to teachers and principals to implement common diagnostic assessment tools.[938]

Up to that point, there were no common assessment practices for reading in Ontario. The Ministry provided funding to boards to buy and implement common reading assessment tools, such as the Developmental Reading Assessment (DRA), PM Benchmarks or the Comprehension, Attitudes, Strategies and Interests (CASI).[939] This resulted in an increase in the use of common diagnostic assessments in schools.[940] However, these instruments do not meet the criteria related to critical foundational word-reading skills set out in the scientific research and IES report. The limitations of these common tools and others are explored further in this section.

Beginning in 2005, the Ministry also provided funding to boards to create data management systems. Teachers were required to administer specific diagnostic assessment tools, typically in fall and spring, and to enter the data into their board’s data management system. The school and board administrators were able to obtain the diagnostic assessment results from the system for use in school and board improvement planning.

In 2010, Ontario created a provincial policy for assessment practice for Grades 1–12 called Growing Success.[941] Although the 2000s marked a shift in the education landscape towards centralizing assessment tools, these policies did not set out clear standards.

In 2013, the move to standardization was reversed with the introduction of PPM 155. PPM 155 leaves a high degree of discretion to individual teachers on how to use diagnostic assessment tools.

PPM 155 was a response to problems with rolling out the 2005 common assessment tools. While the primary purpose of these tools was to use student data to improve learning, in some cases the assessments were perceived as an exercise in monitoring teacher performance. Some boards also added additional assessments and collected large amounts of data from teachers, taking significant time away from classroom instruction.[942] This created workload challenges for teachers. Also, depending on the school board, not all teachers were able to view results in the data management system.[943]

In 2014, the Ministry created its Policy/Program Memorandum 8: Identification of and Program Planning for students with Learning Disabilities (PPM 8), to specifically address early identification of students with learning disabilities. General in nature, this PPM did not mandate any specific screening tools to measure risk for learning or reading disabilities, but rather repeated principles of identification.

Consistent provincial standards and trust in teachers’ professional judgment are not mutually exclusive. Standards give teachers the necessary tools to improve student outcomes, and to enhance public trust in the exercise of teacher discretion. Teacher discretion is an important component of many forms of assessment including diagnostic assessments, but it should not drive the process when it comes to screening. Many teachers said they wanted more guidance on screening.

Teachers face many challenges in navigating how to teach reading. They are provided with an ineffective reading curriculum and inadequate training. There are also many materials related to teaching reading that sound persuasive but are not evidence-based. In Teaching Reading is Rocket Science, a report commissioned by the American Federation of Teachers (AFT), the AFT president wrote:

As a profession, we have the drive and the passion to do the hard work of understanding and using the science of reading. And it is hard work, much harder than it should be since so few of the education publishers and professional development providers have cast aside their profitable-but-outdated materials and programs to create new resources that reflect the latest research.[944]

Universal early screening does not undermine teachers’ professional judgment. Rather, it removes the outdated, non-evidence-based tools that our current education system is using, and replaces them with standard and effective tools to improve student learning.

Policy/Program Memorandum 11

Policy/Program Memorandum 11[945] (PPM 11) deals with early identification of children’s learning needs. It was last revised nearly 40 years ago, in 1982. PPM 11 requires each school board to have procedures to identify each child’s level of development, learning ability and needs. It also requires each board to make sure education programs are designed to accommodate these needs. It recognizes that these procedures should be part of a continuous assessment process that should start when a child is first enrolled in school or immediately after Kindergarten.

Although the PPM’s goal to encourage early identification is good, the policy is not specific enough. Rather than setting standards, it espouses general principles.

Screening for foundational word-reading skills should happen twice a year in the early grades. However, PPM 11 only requires that early assessment procedures be initiated at the beginning of the year, without clearly defining what is meant by “continuous” assessment.

It does not mandate early screening using evidence-based tools, and it cites resources that are outdated (some date from the 1970s; the most recent is from 1980). Since 1982, the scientific literature on assessments has evolved, and these developments should be reflected in the cited resources and inform the substance of the PPM.

The Ontario Catholic School Trustees’ Association’s (OCSTA) submission also referenced the age of this PPM, and noted that it may be useful for the Ministry to conduct a consultation and review the requirements in PPM 11 in the near future.

There have been calls to revise Growing Success, the main policy document on assessment.[946] The need to update PPM 11 to include changing knowledge about assessment is just as urgent.

For example, while Ontario’s 2013 Learning for All resource guide recommends a tiered approach, this inclusive framework is missing from PPM 11. In its inquiry submission, the Ontario Association of Speech-Language Pathologists and Audiologists (OSLA) recommended that PPM 11 should mandate RTI approaches “for the purpose of detecting students who are not responding sufficiently to core programming and who may require more intensive instruction or preventative interventions to support reading development.”

Also, PPM 11 does not provide meaningful guidance on meeting the needs of multilingual students (who are learning English at the same time as they are learning the curriculum). Instead, it creates the potential for confusion when it states: “Where a child's language is other than English or French, a reasonable delay in the language based aspects of assessment should be considered.” While some time may be needed to expose multilingual students to the language of instruction, Ontario school boards are often delaying supports for these students because of untested assumptions (see section 6, The experience of students and families and section 12, Professional assessments). The PPM’s vague language of “reasonable delay” without any further guidance may be a contributing factor to this reality. Universal evidence-based screening means all students are screened. There is no scientific reason to treat multilingual students differently from other students when it comes to early screening.[947]

Because of these assumptions, the OCSTA recommends using dynamic assessment procedures for multilingual students with repeated testing over time to reduce bias, false positives and false negatives.

See section 6, The experience of students and families and section 12, Professional assessments, for a further discussion of inquiry findings on the experience of multilingual students, and recommendations to align the Ministry and school boards’ approaches with research science.

Policy/Program Memorandum 8

The Ministry of Education has a PPM specifically related to learning disabilities, which was last updated in 2014.[948] PPM 8 sets out requirements for identification and program planning for students with learning disabilities.

PPM 8 clearly states that any program planning applies equally to students who have not been formally identified through the Identification, Placement and Review Committee (IPRC) process. The determining factor is not formal identification or a diagnosis, but the needs of the student. Students who show difficulties in learning and who would benefit from special education programs are entitled to these services.

In its inquiry submission, the Ontario English Catholic Teachers Association (OECTA) recognized that early identification and appropriate interventions give students “the best chance to succeed.” OECTA accurately acknowledged that the inquiry will “hear from a number of students and families who had to wait too long for identification and access to resources.” Despite the Code and Ministry policy, OECTA also reported that boards typically only provide special education support to students who have been identified through the formal IPRC process. As a result, the learning needs of many other students who have not been formally identified go unmet. The inquiry heard accounts of this concerning practice across Ontario.

PPM 8 notes that many students with learning disabilities show signs before they enter the school system. It also repeats the requirement outlined in PPM 11 and the Education Act to implement procedures for early and ongoing identification of the learning abilities and needs of students. Dyslexia and reading disabilities are learning disabilities, but this PPM does not address the appropriate early screening and progress monitoring procedures.

While the PPM requires procedures for early screening, it does not mandate specific evidence-based screening tools and leaves boards, schools and teachers with considerable discretion on screening. This report outlines the appropriate early screening procedures for word-reading skills and risk for reading disabilities.

Similar to PPM 11, the vague language of PPM 8 may encourage a “wait and see” approach. PPM 8 states: “after a period of instruction that has taken into account individual students’ strengths and needs, it will become evident that some students who are experiencing difficulty in learning may potentially have a learning disability.” While it is true that daily instruction is one data point to assess student learning, universal evidence-based screening tools are the crucial starting point.

Although PPM 8 refers to the tiered approach, UDL and differentiated instruction, it does not require these inclusive approaches. Instead, it says that school boards “should consider” and “may” put these into place.

PPM 8 stands out in comparison to other more prescriptive and robust PPMs such as Policy/Program Memorandum 140 – Incorporating methods of Applied Behaviour Analysis (ABA) into programs for students with Autism Spectrum Disorders (ASD). PPM 140 requires school boards to offer students with ASD special education programs and services, including using ABA methods where appropriate. PPM 140 also requires educators to measure an individual student’s progress by collecting and analyzing data on an ongoing basis. Educators must use this data to determine the effectiveness of the program for the student, and to change the program as necessary.

School boards are required to develop a plan to implement PPM 140. This PPM also states that the Ministry will integrate monitoring boards’ implementation into existing reporting mechanisms. The Ministry states that it has conducted annual surveys of school boards on their compliance with requirements set out in PPM 140.[949]

Comparing PPM 140 (ASD) to PPM 8 (LDs), it appears that the Ministry has taken steps to be more prescriptive with boards’ responsibilities for students with ASD than students with learning disabilities. As well, it appears that the Ministry’s monitoring mechanisms for compliance with PPM 140 are more robust than for PPM 8, although we did not assess how effective the monitoring for PPM 140 is in practice.

Using systematic, evidence-based approaches to prevent reading difficulties should be mandated, not simply suggested, in PPM 8. See section 12, Professional assessments for more concerns with PPM 8.

While Growing Success, PPM 11 and PPM 8 inform assessment principles in Ontario, PPM 155 is the most important policy directive in practice.

Policy/Program Memorandum 155

PPM 155, issued in 2013,[950] deals with diagnostic assessment and how teachers should use their professional judgment related to assessment. It contains many statements confirming the Ministry’s view of the importance of diagnostic assessment, and outlines the responsibilities of teachers, principals and school boards.

School boards are directed to create a list of board-approved assessment tools for teachers to choose from. However, most aspects of diagnostic assessment are left to a teacher’s professional judgment. Teachers can exercise discretion on all aspects of assessment, such as:

  • Which students to assess (individual student, small group or whole class)
  • Which assessment tool to use (from the board’s list of pre-approved tools)
  • The frequency of assessment
  • The timing of assessment.

The content of PPM 155 is not new. It borrows heavily from the 2010 policy Growing Success. The PPM quotes this policy to repeat that “teachers' professional judgments are at the heart of effective assessment, evaluation, and reporting of student achievement.” However, what is novel is the amount of teacher discretion the PPM introduced within Ontario’s education system.

Scope of PPM 155

PPM 155 applies only to the use of formal diagnostic assessment tools. It does not apply to special education assessment, large-scale provincial assessments (for example, EQAO), or assessments conducted as part of ministry-approved national or international assessments (for example, PISA).

Since its introduction, there has been much disagreement between teachers, unions and boards about what is considered formal diagnostic assessment. Conflicting views about the role of PPM 155 may be because it fails to delineate where diagnostic assessments begin and end. The PPM does not define “formal” diagnostic or special education assessments.[951] This leaves room for an over-broad interpretation of diagnostic assessments that includes screening tools, which are more appropriately special education assessments.

The OHRC’s position is that screening tools are more like a special education assessment than a diagnostic assessment, because they identify students who are at risk for reading disabilities so they can receive interventions and special education supports.

In Growing Success, Ontario defines assessment as “the process of gathering information that accurately reflects how well a student is achieving the curriculum expectations in a subject or course.” Diagnostic assessment is defined as:

Assessment that is used to identify a student’s needs and abilities and the student’s readiness to acquire the knowledge and skills outlined in the curriculum expectations. Diagnostic assessment usually takes place at the start of a school year, term, semester, or teaching unit. It is a key tool used by teachers in planning instruction and setting appropriate learning goals.

Growing Success also describes the shift away from using only terms such as “diagnostic,” “formative” and “summative.” Instead, it calls for these terms to be supplemented with the phrases “assessment for learning,” “assessment as learning” and “assessment of learning.” Diagnostic assessments fall under the category of “assessment for learning,” which is defined as:

The ongoing process of gathering and interpreting evidence about student learning for the purpose of determining where students are in their learning, where they need to go, and how best to get there. The information gathered is used by teachers to provide feedback and adjust instruction and by students to focus their learning. Assessment for learning is a high-yield instructional strategy that takes place while the student is still learning and serves to promote learning. (Adapted from Assessment Reform Group, 2002.)[952]

This addition was meant to signal a shift away from describing how the information is gathered to, more importantly, how it is used. This principle is key to understanding why screening is distinct from diagnostic assessments.

Screening tools that identify students at risk for failure to acquire word-reading skills are not diagnostic in nature. The purpose of screening is to identify a student and secure interventions (which can include special education programs) before the student develops a reading disability (more severe reading difficulty).

PPM 155 describes special education assessments, which include:

[E]ducational and/or other professional assessments conducted to identify students with special education needs, to determine the special education programs and/or services required by these students, and/or to support decisions related to such programs and services.

A potential reason for the confusion around the classification of screening tools may be because our current education model is reactive rather than proactive. There could be hesitancy to see screening as a special education assessment. Some may view special education assessments exclusively as assessments for students who already have a disability. Special education assessments might be seen as pathways to further professional assessments (for example, psychoeducational assessments) or formal identification through the IPRC. Screening is part of a systematic “early detection and preventative strategy”[953] that identifies students who may be at risk for reading difficulties and disabilities. 

Another key to understanding screening tools as a unique form of assessment is to look at what Growing Success and PPM 155 refer to as examples of data collection in the context of diagnostic assessments and assessment for learning:

Teachers will obtain assessment information through a variety of means, which may include formal and informal observations, discussions, learning conversations, questioning, conferences, homework, tasks done in groups, demonstrations, projects, portfolios, developmental continua, performances, peer and self-assessments, self-reflections, essays, and tests.

Observations, student portfolios and student self-assessment are not substitutes for scientifically studied measures for screening foundational word reading and related skills. The sources of information described in Growing Success and PPM 155 are not the right methods to screen students for reading difficulties. Studies have shown that relying only on teachers' judgments of students' early literacy skills may be insufficient to accurately identify students at risk for reading difficulties.[954]

Province-mandated assessments or large-scale assessments are also outside the scope of PPM 155. The Ministry has the power to mandate universal screening across the province.[955]

Screening tools are a distinct and unique form of assessment that requires a separate policy from what is described in the Ministry’s existing policy documents and memoranda.

History of PPM 155

PPM 155 was issued following a commitment made in a Memorandum of Understanding between the Ministry and OECTA in 2012. On its website, the Elementary Teachers’ Federation of Ontario (ETFO) states that the policy “signals a fundamental change in the locus of control on the use of diagnostic assessments from school boards and principals to individual teachers.”[956] 

PPM 155 was viewed as a victory for many teachers’ associations in Ontario.[957] In 2004, when the Ministry started directing the use of common assessment tools, many teachers were unhappy.[958] Some teachers saw them as a means for boards to monitor teacher performance.[959] This is also evident in the language of the 2015 Central Agreement between the Ministry and ETFO. The agreement states: “The results of diagnostic assessments shall not be used in any way in evaluating teachers. No teacher shall suffer discipline or discharge as a consequence of any diagnostic assessment results.”[960]

Before PPM 155, teachers reported not knowing what the boards did with the data and “felt that they were being judged.”[961] While no boards raised this in the current OHRC inquiry, a 2013 Ministry study did report that one board felt the diagnostic assessment tools could serve as a means to hold “teachers accountable.” The board stated: “The good teachers will continue to perform assessment effectively, the poor or unmotivated teacher now has less accountability.”[962]

The discourse on PPM 155 positions board responsibilities in opposition to teachers’ professional judgment.[963] While boards and teachers share the same goal, there can be healthy disagreement on how to best meet the needs of students.

In this case, examining what the scientific literature says is helpful. Teachers’ professional judgment must be exercised in the framework of mandated, universal and early screening. Universal screening is needed to meet the needs of students under the Code. It is not meant to be and should not be used as a tool to monitor teachers’ performance.

 

Professional judgment

The Ministry defines professional judgment in PPM 155 (taken from Growing Success) as:

Judgment that is informed by professional knowledge of curriculum expectations, context, evidence of learning, methods of instruction and assessment, and the criteria and standards that indicate success in student learning. In professional practice, judgment involves a purposeful and systematic thinking process that evolves in terms of accuracy and insight with ongoing reflection and self-correction.

In inquiry surveys, many teachers reported not receiving professional learning on screening students for reading disabilities. See section 8, Curriculum and instruction. This means that professional judgment is not being informed by science-based methods of measurement and assessment.

Teachers’ professional judgment is not undermined by universal screening. It is undermined when they are taught to use unreliable assessments with questionable validity.

Teachers need to be trained on the evidence-based tools available in the field and on their most effective timing and use, and school boards need to ensure consistency across classrooms, schools and boards. As in other professions, teachers must be provided with the tools to carry out their role effectively.

Professionals in any given field must exercise their judgment within the bounds of the scientific evidence, in a way that does not negatively affect the people they serve. Governments routinely set standards and remove discretion when broader public interests are at stake.

 

Data collection and consistency

Ontario school boards have identified PPM 155 as a major barrier to program planning and to collecting important data. Since PPM 155 was issued, teachers are no longer required to enter data from board-mandated diagnostic assessment tools.

For screening data to be useful, it must be standardized. Each student must be assessed using the same measures, administered at consistent points of time.

In 2013, the Ministry requested a study of the implementation and effects of PPM 155.[964] One of the purposes was to understand the consistency in implementation practices and any effect on boards’ ability to collect data. The report documented the many differences in perspectives between boards and teachers around their obligations and how to interpret formal diagnostic assessments versus special education assessments.

The 2013 study investigated eight school boards in Ontario and found there was no consensus on implementing PPM 155. Differences among the boards included:

  • Previous use of diagnostic assessment tools
  • Current use of diagnostic assessment tools
  • Capabilities of data management systems
  • Perspectives on diagnostic assessment and on PPM 155 (whether and how it should be implemented).

The study also revealed board concerns about the loss of data. One board reported that only half of its teachers submitted results centrally, and that it did not know whether teachers had assessed some or all of their students. The board said: “not only has data been lost that is valuable in discussions about evidence informed practice but it has set back many years the gains made in terms of creating awareness” about the data.[965] Another board said the PPM disrupted its previous practice of requiring assessments twice a year, and felt the PPM only required assessments once a year.[966]

In addition to the 2013 study, boards have voiced concerns about PPM 155 in other contexts. For example, in a brief submitted to the Ministry in 2018, the Conseil ontarien des directrices et des directeurs de l’éducation de langue française (CODELF) said:

While recognizing that the professional judgment of teachers is the cornerstone of evaluation, a critical analysis of the importance of diagnostic assessment at the classroom level as well as the added value of school-based and system-based diagnostic assessments should be undertaken.[967]

The inconsistent approach to screening and the variability between classrooms, schools and boards are a direct result of how the system has been set up. Since PPM 155, as currently interpreted, does not allow boards to mandate the screening tool or timing, they can only provide a menu of options and suggested time frames. This prevents boards from fulfilling their responsibilities for programming, planning and accountability. Such an approach undermines consistency, fails to meet students’ instructional needs, misses students who need intervention, and ultimately limits student achievement.

Back to top

Kindergarten to Grade 12 Education Standards Development Committee

In 2017, the Minister for Seniors and Accessibility appointed a committee to address barriers facing Ontario public school students in Kindergarten through Grade 12. This committee is called the Kindergarten to Grade 12 Education Standards Development Committee (ESD Committee) and is established under the Accessibility for Ontarians with Disabilities Act, 2005.[968] The ESD Committee includes people with disabilities, disability organizations and education experts.

In 2021, the committee wrote an initial report with 197 recommendations. One of the report’s guiding principles made the critical link between evidence-based practices and the rights of students with disabilities:

Research-informed, evidence-based programs, pedagogies and policies facilitate a culture of respect for equity, equality, access and inclusion in all schools, and ensure evidence of impact across the education system for students with disabilities…[969]

The report also addressed curriculum, assessment and instruction. It included recommendations to improve early and ongoing assessment for students with disability-related needs. The report said:

Students with disabilities can face difficulties and significant delays in getting professional assessments, (including but not limited to psychoeducational assessments), where needed, for their disability-related needs. Additionally, there is the potential for unfair/biased assessment for some students with disabilities due to a lack of understanding of the students, their lived experiences and identities and can lead to misinterpretations that create unintentional new barriers to an accessible and inclusive education for students with disabilities. The lack of a necessary assessment can impede their access to needed services, and to effective accommodations of their disabilities.[970]

The committee recommended steps and processes to address barriers that delay timely and fair/unbiased assessments to identify disability-related needs. The committee also acknowledged that there are many types of educational assessment, including ongoing, evidence-based classroom assessments. The committee recommended that the Ministry, boards and faculties of education:

Ensure that teacher education programs, in-service and ongoing job-embedded professional learning on diagnostic, formative and summative curriculum based and more formal assessments be provided to educators to inform Differentiated Instruction for all learners.

Back to top

School boards’ approach to screening

The OHRC asked the eight inquiry boards to provide documents, data and information explaining their approach to early screening for reading. The OHRC also asked questions in its meetings with each board, to better understand their approach and invite discussion about any obstacles in implementing universal screening.

Every inquiry school board raised PPM 155 as an obstacle to universal screening, because it limits their ability to mandate specific tools and set a schedule for screening, and prevents them from collecting data about screening centrally. However, some boards said they are trying to work around the parameters set by the Ministry of Education. Similar to the 2013 Ministry study that found different interpretations of the PPM, many school boards also reported different responses from local bargaining agents to board directives about assessment. One school board, Thames Valley, found a way to work around the limits of PPM 155. Its approach will be described later in this section.

There are significant problems in the screeners boards are using or promoting and in their procedures for implementing them. These screeners and processes do not align with the scientific evidence. There are critical issues with all aspects of screening approaches currently practiced in Ontario, including their frequency, type of measures used, reliability and validity, implementation procedures, decision-making processes, consistency, level of expert input and data collection. This compromises the effectiveness of the boards’ tiered approach to support.

Aspects of these findings are not surprising, given that balanced literacy and cueing systems are the primary approaches to early word reading reported by boards and outlined in the Ministry curriculum and teaching documents. Evidence-based universal screening needs to occur within an overall system of teaching foundational reading skills that is consistent with evidence. This is not currently happening in Ontario.

 

Frequency

Most school boards are not implementing universal screening at multiple points in time across Kindergarten to Grade 2. Screening is typically administered to all students at only one point in time and most often in Kindergarten Year 2, although even this is not consistent across boards. For example, some boards only screen students whose teachers identify them as struggling. This is not a universal, standardized approach. Also, screening students only in Kindergarten Year 2 is not enough. They should be screened twice a year until Grade 2.

Back to top

Measures

When school boards administer screening in Year 2, students are mostly given tests that include only letter-name or letter-sound knowledge measures and/or phonological awareness measures. Where phonological awareness is assessed, it is mostly an assessment of sound segments larger than the phoneme (such as syllables and rhyming). These skills are very early pre-literacy skills, and are not adequate for screening on their own.

As seen in the IES table above, students should also be screened for phoneme awareness and fluency, sound-letter fluency, decoding, word and text reading. Measures of word and text reading should include fluency and accuracy, starting in mid-Grade 1. Boards often misinterpreted their basic Year 2 screeners as being more complete screening assessments of all the knowledge and skills for word-reading acquisition.

The reason screening does not include word-reading accuracy and fluency is clear. This omission flows from the curriculum. School boards follow the curriculum’s three-cueing approach (see section 8, Curriculum and instruction) to “teach” word identification. Thus, screening instruments have not been designed to measure accuracy and speed for reading words in isolation, a hallmark of skilled reading.

Beyond Kindergarten, the boards reported that teachers used assessments associated with the commercial balanced literacy programs they use, such as Reading Recovery® and Fountas and Pinnell programs including Levelled Literacy Intervention. Boards reported using the following assessments:

 

Table 19: Commercial assessments used by the eight inquiry school boards

Assessment | # of school boards
PM Benchmarks | 6
Running Records | 5
Benchmark Assessment Systems (BAS) | 3
Observation Survey of Early Literacy Achievement | 3
Comprehension, Attitudes, Strategies and Interests (CASI) | 3
Oral Language Assessment | 2
Developmental Reading Assessment (DRA) | 2

These screening tools were also the tools educators reported most often in the inquiry survey.

 

Table 20: Commercial assessments reported by educator respondents

Assessment | Educator respondents
Running Records | 22%
PM Benchmarks | 20%
Developmental Reading Assessment (DRA) | 18%
Benchmark Assessment Systems (BAS) | 11%
Observation Survey of Early Literacy Achievement | 9%
Oral Language Assessment | 7%

These assessment instruments do not adequately measure foundational skills in word-reading accuracy and fluency. They largely measure reading of text in language-patterned books, use of cueing strategies to problem-solve words in text, and recognition of high-frequency words. Book-reading-level assessments are not a useful measure of whether a student is learning foundational word-reading skills.

The tools that school boards currently use are often referred to as informal reading inventories (IRIs) in the literature. These are commercially produced reading assessments that have been minimally researched.[971] There is little information about their predictive accuracy, and IRIs rarely report the measurement properties of their data. Those that do often rely on weak research methodology.[972]

A 2015 study showed that an oral reading fluency measure “demonstrated higher diagnostic accuracy for correctly identifying at-risk students and resulted in 80% correct classification compared to 54% for the…[Fountas and Pinnell Benchmark Assessment System].”[973] Further, the researchers concluded: “Thus, practitioners who use reading inventory data for screening decisions will likely be about as accurate as if they flipped a coin whenever a new student entered the classroom.”[974] The researchers also noted that “the oral reading fluency assessment required approximately three to five minutes per student and the reading inventory required approximately 20 to 30 minutes per student to complete.”[975]
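
To make the comparison above concrete, here is a minimal sketch of how a screener’s classification accuracy is typically quantified against students’ later reading outcomes. The counts and function below are illustrative assumptions, not data or methods from the cited study.

```python
# Illustrative only: summarizing how well a screener's at-risk flags
# matched students' later reading outcomes. All counts are invented.

def classification_stats(true_pos: int, false_pos: int,
                         true_neg: int, false_neg: int) -> dict:
    """Return common screening accuracy statistics from a 2x2 table."""
    total = true_pos + false_pos + true_neg + false_neg
    return {
        # share of all students the screener classified correctly
        "correct_classification": (true_pos + true_neg) / total,
        # share of truly at-risk students the screener caught
        "sensitivity": true_pos / (true_pos + false_neg),
        # share of not-at-risk students correctly left unflagged
        "specificity": true_neg / (true_neg + false_pos),
    }

# Example: 100 students screened; 20 of them later struggle with reading.
print(classification_stats(true_pos=16, false_pos=12, true_neg=68, false_neg=4))
```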

Running records are another type of assessment that are widespread across Ontario and are used together with other commonly used assessments. Running records are referenced in many Ministry materials, including the Guide to Effective Instruction in Reading Grades 1–3[976] and the Kindergarten Program, 2016.[977] This assessment was developed in 1985 by Dr. Marie Clay and was integrated into the Reading Recovery® intervention program. It is also one of the six tasks in the Observation Survey of Early Literacy Achievement.[978]

In a running-record approach, teachers record what individual students say and do while reading a text aloud. Teachers record accuracies, errors, assistance provided by the teacher, and self-correction rates to gauge a student’s ability to read a text. Teachers also record which cueing system a student is using to read, and determine the cause of the student’s reading errors (miscue analysis).[979]

Running records are used beyond Reading Recovery® in other reading interventions.[980] For example, LLI (Fountas and Pinnell) uses the same coding approach, but calls it “reading record.”[981] LLI uses BAS at specific intervals in the school year, but uses reading records as a way to ensure ongoing assessment between BAS tests.

There is no evidence to support the validity of running records or related approaches. Their psychometric properties[982] are questionable, and they fail to identify many children at risk for word-reading failure.[983] This assessment approach does not measure the skills students should be taught to learn to read. Beginning readers should not be using meaning, structural and visual cues to read words.[984]

A running record only tells the teacher how a student reads words in sentence contexts, often in predictable texts. It shows students’ guessing or word-prediction skills, not whether they are learning foundational word-reading skills. Using these assessments creates an illusion of valuable data but masks word-reading problems.[985] This delays identifying the problem, and the longer the delay, the harder it is to address reading difficulties.

Instead, screening should measure foundational word-reading skills. As noted earlier, the IES report summarizes current evidence-based recommendations. Universal screening in word reading after Year 2 should minimally include:

  • Grade 1 (beginning): phonemic awareness, decoding, word identification and text reading
  • Grade 1 (second semester): decoding, word identification and text reading, and should include speed as well as accuracy as outcomes
  • Grade 2: timed word reading and passage reading.[986]

Back to top

Reliability and validity

Many boards reported using board-developed screeners. While there is nothing inherently wrong with this approach, there must be evidence to support their use. The school boards did not submit enough information to judge their reliability and validity,[987] and it appeared that such evidence may not exist or may never have been collected.

While some board-developed approaches focus on some useful skills like phonological awareness, they do not include the full set of skills needed in an evidence-based screening approach.

For example, several screeners assessed three items for each skill area included in phonological awareness. A student’s skill was then categorized as not present, emerging or present, based on the number of items they scored correctly. There was then a total score cut-off that determined if the student was “at risk” or “not at risk.” However, there was no information to evaluate how boards arrived at these specific decisions and categories. There was also no information given on measurement error associated with the screening tools, or how this was incorporated into decision-making rules.
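
For illustration only, the sketch below shows one way a decision rule could account for measurement error: using the standard error of measurement (SEM) to flag borderline scores for retesting or monitoring, rather than forcing a hard at-risk/not-at-risk cut. The cut-off, reliability and standard deviation values are hypothetical, not taken from any board’s screener.

```python
# Hypothetical sketch: a cut-off decision rule that accounts for
# measurement error. All parameter values are invented for illustration.
import math

def classify(raw_score: float, cut_off: float,
             reliability: float, sd: float) -> str:
    """Classify a screening score, flagging results whose band of
    +/- one standard error of measurement straddles the cut-off."""
    sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
    if raw_score + sem < cut_off:
        return "at risk"
    if raw_score - sem >= cut_off:
        return "not at risk"
    return "borderline: retest or monitor closely"

# Example: a 9-item phonological awareness screener with an assumed
# reliability of .80, a score SD of 2, and a cut-off of 5 correct items.
for score in range(10):
    print(score, classify(score, cut_off=5, reliability=0.80, sd=2.0))
```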

Instead, school boards should start with screening measures that have already been studied and are well known,[988] and then do further studies on reliability and validity within their particular school board context. If boards do use their own developed measures, these should include all the critical components from evidence-based recommendations (the IES report), and boards should continually evaluate the implementation of these measures.
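
As one example of the local validation described above, a board could estimate a measure’s alternate-form reliability by correlating scores from two parallel administrations given to the same students. The scores below are invented, and the commonly cited benchmark of roughly .80 or higher for screening decisions is a general psychometric convention, not a Ministry standard.

```python
# Hypothetical sketch: estimating alternate-form reliability from two
# parallel administrations of a screener. All scores are invented.
from statistics import correlation  # available in Python 3.10+

form_a = [12, 9, 15, 7, 11, 14, 8, 10, 13, 6]  # first administration
form_b = [13, 8, 14, 6, 12, 15, 7, 11, 12, 7]  # parallel form, same students

r = correlation(form_a, form_b)  # Pearson correlation between forms
print(f"Alternate-form reliability estimate: {r:.2f}")
```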

 

Progress monitoring

Some school boards administer an initial screener at a given point in time (usually the fall of Year 2) and then re-administer it, or a slightly different version that sometimes includes more advanced skills, at a second and/or third point in time. However, most boards only administer these screeners to the same students who performed poorly the first time.

There are two issues with this approach. First, progress monitoring is distinct from screening, and a given screener may not be the best way to monitor a student’s progress or response to intervention. Second, all students should be screened twice a year from Kindergarten until Grade 2. If not, students who may have performed well on early literacy skills, but who have issues with more difficult skills such as reading accuracy or fluency, will be missed.

Also, similar to the initial screening, these progress monitoring tools often measure book-reading levels by assessing use of the cueing systems for reading words. Some of these book-reading assessments simply indicate that a student is significantly below grade level, which is not a useful measure as it does not tell the teacher how the student scored on foundational word-reading skills (isolated word-reading accuracy and fluency).

Back to top

Decisions based on screening results

The OHRC asked school boards to provide information about how the results of screening are used. The information about decision-making and changes to instructional approaches was either unclear or, in some cases, appeared to misinterpret the related research.

For example, several boards said that screeners for letter-sound knowledge are used as a tool for teachers to know which letter-sound associations to teach individual students in their classroom, and/or to flag at-risk children and start teaching them the letter-sound associations they do not know.

First, explicit and systematic classroom instruction in Year 2 and Grade 1 is much more efficient for teaching the entire class sound-letter associations and using these to decode words and texts, at the same time as teaching phonemic awareness skills. This type of direct and systematic teaching will help almost all children master the early decoding skills they need to move on to the next grade. This approach will be much more efficient and effective – and good for all students.[989]

Second, teaching at-risk students only the sound-letter associations they do not know will not accelerate their learning of how to decode, and is unlikely to bring them to the same level as their peers in word-reading skills. A screener quickly measures the skills that predict how well students will develop later word-reading accuracy and fluency. The screening measure flags students who need effective interventions to learn the skills that make up decoding and word reading.

Since the screening tools used by most boards only include very early literacy and pre-reading skills, their decision-making about appropriate interventions is also limited to these basic measures. For example, a student who performs poorly on a quick test of phonological awareness is seen as only needing interventions that target this skill.

To make sure reading interventions target the full range of necessary skills for word-reading accuracy and fluency, appropriate screening tools and intervention programs must be used.

Back to top

Consistency

School boards submitted their lists of approved assessments and recommended schedules for screening to the OHRC. There was a high degree of variability, and PPM 155 is a significant reason for this.

All eight inquiry boards raised concerns about the effects of PPM 155. They consistently told us that PPM 155 has been a barrier to collecting data centrally, making data-based decisions, planning and delivering programs. In their view, the PPM does not recognize the importance and value of data. The school boards reported they could no longer:

  • Collect data from a common assessment tool and use this data to compare schools, identify high-need schools and allocate extra resources
  • Systematically track every single student who is assessed and identified as at risk for reading difficulties, and make sure they receive interventions
  • Measure the impact of their system-level decisions to either change their approach if it is not working or scale up a successful intervention.

Boards are trying to create consistent approaches. Many of the inquiry boards send out memoranda at the beginning of the school year listing the different screening tools and recommending a schedule for completion. However, under PPM 155, the tool and frequency are optional. Some educator survey respondents reported that their local teachers’ association promoted the view that diagnostic assessments are only necessary for students who are at risk, and that teachers’ observations are otherwise enough. This frustrates the purpose of universal screening.

Under PPM 155, teachers collect and provide data on a voluntary basis. They maintain their own records, but there is no requirement to report this data. Principals can follow up with teachers to ask about assessments and data, but the result will depend on the individual teacher’s and principal’s familiarity with reading assessments, and on whether they decide to seek this information. A system that makes the use of sound practices discretionary is not only inefficient and ineffective, but also breeds unnecessary conflict. For example, one principal said:

It’s very problematic because it puts [the] principal in conflict with staff because principals are trying to encourage the staff but there’s nothing there that allows you to say that it needs to be done so it becomes a place of conflict when it shouldn’t be – sometimes you do need the weight of “this is a must do not a should do.”

It was clear from our discussions that school boards think that PPM 155 has decreased the level of accountability. One board representative said they feel “professionally handcuffed” and less accountable because the current data does not allow them to work with schools and identify patterns. No board was able to provide the compliance rate for completing assessments.

Boards also talked about how PPM 155 has been a detriment not only to board-level planning and student achievement, but also to teachers. Some boards indicated that PPM 155 is not universally supported among teachers. They said that if the PPM were changed, there would likely not be the “huge backlash” that many might expect. One board reported that before PPM 155, it did not have any push-back from teachers when the board directed common and scheduled assessments.

A few boards did not think that PPM 155 is as much of a “hot topic” as it is made out to be, and said that most teachers do assessments. What is missed, they said, is the ability to look at the data system-wide and provide targeted professional development to fill the gaps in instruction. These boards also reported that teachers routinely ask for training on screening tools.

Another school board said that the impact is felt among new teachers who can no longer move into a pre-existing “fabric of understanding,” and that “leaving things to professional judgment is one thing, but unless you combine that with extensive [professional development] then those things aren’t necessarily aligned.”

Some boards did include evidence-based screening tools on their board lists. However, because of PPM 155, there is no guarantee that teachers will pick these assessments, particularly when many of the non-evidence-based ones are better known, associated with familiar commercial intervention programs supplied in the schools, and more consistent with the prevalent balanced literacy and cueing approach to early reading instruction. Other boards use interventions with embedded screening instruments. However, these interventions vary by school and are often not evidence-based.

Survey respondents also raised concerns about PPM 155. Respondents included teachers, other education professionals and school and board-level administrators. Teachers said they want more clarity on what screening tools to use. In response to a survey question asking whether students are screened for reading difficulties in Kindergarten or Grade 1, one teacher replied:

It all depends on the teacher. Part of this issue is PPM 155. We need to make universal screening mandatory and train people how to do it. But we also then need the resources to provide the intervention.

When asked how often students are screened, another teacher said:

It is standard in our board to do it twice a year, but many don't and there is not a lot of accountability. This is in part because, if I am honest, PPM 155, which has its merits but if we are not using any true data at a school level to inform our practice then that is not best either…If we are not tracking data and not forced to submit data how do we know data is being used and being used well?

School and board-level administrators similarly identified data-related concerns with PPM 155 and thought it should be re-examined to make sure “all students benefit from regular and ongoing diagnostic assessments to guide instruction in the classroom and next steps.” One board-level administrator said:

Prior to the Ministry's implementation of PPM 155 in 2013, schools collected information from system assessments…to collectively monitor the progress of ALL students. District School Boards in turn collected this school-based information to create literacy plans to address gaps. It also allowed the system to provide professional learning for teachers to address these gaps. With PPM 155, we no longer have this rich data to monitor [the] progress of all students.

Another board-level staff member described resistance to buy-in on screening and how data literacy is an issue even at senior leadership levels. When asked what they would do to improve access to screening, they responded:

First, consistent messaging and training of educators at the pre-education service level that includes rigorous coursework in learning science, direct instruction etc. that includes understanding data (basic statistics), assessment, psychometrics, and a lack of fear around assessment and evaluation. Second, removing structural inequities around issues such as the way in which PPM 155 was interpreted and used to undermine change and evidence-based practice. More importantly, the demand for ethical and accountable leadership to avoid biased and ill-informed decision-making. What I witnessed was public-relations and politically motivated decisions around student performance that were not grounded in evidence.

Another issue identified in educator surveys was that since boards cannot mandate screening at particular time periods, these assessments are often done at the end of the year. This results in a loss of opportunity to implement effective interventions. The purpose of screening is to immediately provide interventions. Screening at the end of the year does not allow for this and needlessly delays identifying children who need interventions.

Finally, one survey respondent, a teachers’ association representative, said: “It seems to me that there is a huge amount of luck in the current system regarding who gets screened, when and how.”

Rather than leaving screening tools, which are an essential step in inclusive education, to the discretion of teachers who are overburdened, and to boards that have limited research capacity and are constrained by PPM 155, the Ministry should mandate a short list of scientifically validated and reliable tools. Many jurisdictions outside of Ontario have followed such an approach. One teacher survey respondent said:

There should be a Ministry of Education document so that each school board across Ontario isn't creating their own. The province should have one, standard early reading screener that has been trialed and proven to be effective, rather than have each school board "reinvent the wheel," and create less effective assessment tools.

Back to top

Expertise

Teachers and school boards are not typically experts in reading science or the science of screening. Many of the inquiry school boards reported they would welcome direction from the Ministry on which screeners and interventions are scientifically validated. Although some boards wished they could do such a review, they acknowledged that they were not well-positioned to review the research due to capacity and resource constraints.

Many boards included DRA and PM Benchmarks on their approved list of tools because the Ministry provided funding for them in the early 2000s. Boards also included BAS (Fountas and Pinnell) because it was a commonly used tool across the province. This has resulted in many non-evidence-based tools being recommended to teachers.

Generally, school boards could not provide a clear rationale for including or excluding screening tools on the board-approved list. However, two boards reported working with psychologists and speech-language pathologists (SLPs) to determine which screening tools to include. Some boards said they encouraged teachers to work collaboratively with SLPs and psychologists.

Organizations representing these professions provided submissions to the inquiry about the importance of multi-disciplinary collaboration. The APLOS said:

Psychologists play an important role in supporting teachers to use data for the purpose of early identification and instructional decision making.  By using formative evaluation tools (CBM; DIBELS), teachers can assess progress and achievement within the context of instruction. Teachers who use these tools are more likely to raise goals and make instructional changes in the classroom to the benefit of all students.

The OSLA also talked about the role their members can play in supporting educators since they have “specialized knowledge, clinical judgement, and expertise to support individuals with reading disabilities across the lifespan and at every developmental stage.” The association recommended that schools: 

Leverage experiences from across Ontario involving partnerships between educators and speech-language pathologists to screen early learners for risk factors.

The inquiry educator survey asked respondents which screening tools should be used. Respondents could select more than one option. There was a discrepancy reported between professions. Teachers’[990] top choice was running records, followed closely by the Oral Language Assessment. Teachers also rated BAS and PM Benchmarks as “very effective.” These are familiar tools as most Ontario boards currently use them.

SLPs and psychologists preferred many of the lesser-used screening tools. SLPs’[991] top choice was the Comprehensive Test of Phonological Processing (CTOPP), which assesses phonological awareness, rapid naming and phonological memory, but does not assess letter-sound knowledge, decoding or word-reading accuracy and fluency.[992] Psychologists’ top choice was also the CTOPP, followed closely by DIBELS. Both CTOPP and DIBELS measures largely overlap with the recommendations in the IES report, described earlier. Neither group of professionals ranked running records, BAS or PM Benchmarks as effective screening tools.

Through interviews and educator surveys, the inquiry heard that SLPs’ and psychologists’ expertise is not always valued or leveraged. One survey respondent said:

Experts in data, research and ethics were sidelined and ignored. Attempts to bring balance and truth were met with swift retribution, bullying and in my case, termination for questioning the [perpetuation] of structural inequities and misuse/[misrepresentation] of data and evaluation practices.

The inquiry also heard an example of two school boards that collaborated with their SLPs and devised a creative way to work around PPM 155. One of the boards, Thames Valley, uses its SLPs to screen all students (who have parental consent) in Year 2 (winter) and Grade 1 (fall).

The SLPs are expected to follow up on the results of the screening to make sure that schools requiring additional support provide intensive instruction in their classrooms, and they offer resources to help teachers meet this goal. With the growth of newcomer families in London, the board says that this screening and follow-up has helped students in Kindergarten catch up and has also increased teacher awareness. SLPs use an online tool and input data to create student and class profiles. The system allows the school board to compare classes across the school and the system.

While this approach was the most systematic and data-driven among the eight inquiry school boards and a commendable effort within the restrictive framework of PPM 155, it is not necessarily a sustainable solution for all school boards. For SLPs to conduct these assessments, schools must obtain parental consent. Ontario law[993] requires parental consent before any services or “evaluation” are provided by SLPs to students. While Thames Valley reported that the consent rate was high, screening is still not universally administered.

If teachers are screening, parental consent is not required. Further, SLP-led screening may focus on the skills SLPs often assess, such as phonemic awareness, but miss the word- and text-reading accuracy and fluency components.

There is also an extra cost to conducting the screenings through SLPs. Approximately 40 board SLPs conduct the screenings, and Thames Valley has allocated resources beyond the Ministry funding envelope for SLP services. Finally, there is a benefit to teachers conducting the screening as they can apply knowledge they gain from it to the classroom. While there is a transfer of information between SLPs and teachers, if teachers conduct screening they will directly receive the necessary data to inform their instruction.

Back to top

Data

Boards provided very little information on how the results of screening were recorded and used, or on progress monitoring once students had been flagged. They could not indicate which screening tools are used most often, or how frequently.

Most educator respondents (69%) said that students should be screened for reading difficulties two to three times per year. Tellingly, and similar to the school boards’ responses, almost 42% answered “unknown” when asked how often students are currently screened.

This gap in information limits boards’ ability to identify students who need early interventions, to assess whether students are responding to interventions, to gauge the effectiveness of their programs, or to analyze if there are disparities in interventions or outcomes between schools and among students based on factors such as gender, race, ancestry and socioeconomic status.

Teachers, unions, boards and the Ministry will need to work together to implement universal screening and related data collection. It should be made clear that the purpose of universal screening is to better the lives of students with reading difficulties, not to monitor teacher performance or serve as a box-ticking exercise. Training will also be important to give educators the support they need.

In the context of screening, the primary purpose of data collection is not to measure outcomes. It is to make sure students are receiving the interventions they need, early on. Screening and data collection should not contribute to stigmatizing students who need support. Screening is part of a comprehensive framework to make sure education serves as an equalizer and prevents reading difficulties before they arise. Boards must also be very careful not to use or report the data in a way that stereotypes or further marginalizes any student, group of students or school.

Communicating with parents is also a key part of successfully implementing early screening. Parents must understand that the screening is universal, that their child is not being singled out, and that the purpose of screening is to see if their child may need further supports or interventions. Some parents may be concerned that screening could lead to their child being labelled or stigmatized. Boards must explain that screening helps reduce the risk of a student developing a reading disability or needing more intensive special education supports later on.

The authors of the 2013 Ministry-commissioned report on PPM 155 found that, across respondents (teachers, principals and board-level administrators), no one disputed two points: (1) school boards need data for planning, and (2) assessment tools can help teachers understand students’ learning and plan instruction. The inquiry found that both points still hold true today. Teachers are asking for more guidance on how best to screen students for early reading skills. Further, data is also necessary for making decisions about screening and tiered interventions.

Data is meaningful when it leads to action. Early evidence-based screening is only one of many steps to making sure students at risk of reading difficulties have meaningful access to education. An additional related and essential step is early, evidence-based interventions.

Back to top


Recommendations

The OHRC makes the following recommendations:

 

Mandate early, evidence-based universal screening

59. The Ministry of Education (Ministry) should provide stable, enveloped yearly funding for evidence-based screening of all students in Kindergarten Year 1 to Grade 2 in word-reading accuracy and fluency.

 

60. The Ministry should work with external expert(s) to mandate and standardize evidence-based screening on foundational skills focused on word-reading accuracy and fluency. The Ministry should:

  1. Require school boards to screen all students twice a year (beginning and mid-year) from Kindergarten Year 1 to Grade 2
  2. Determine the appropriate screening measures to be used based on the specific grade and time in the year with reference to the recommendations in the IES report that have moderate to strong evidentiary support. At minimum, measures should include:
    1. Kindergarten: letter knowledge and phonemic awareness
    2. Grade 1 (beginning): phonemic awareness, decoding, word identification and text reading
    3. Grade 1 (second semester): decoding, word identification and text reading, and should include speed as well as accuracy as an outcome
    4. Grade 2: timed word reading and passage reading
  3. Select or develop valid and reliable screening tools that correspond to each specific grade and time in the year for administration by school boards
  4. Set out the standardized procedures for administering, scoring and recording data from the screening instruments
  5. Make sure screening tools have clear, reliable and valid interpretation and decision rules. Screening tools should be used to identify students at risk of failing to learn to read words adequately, and to get these children into immediate, effective evidence-based interventions.

 

61. The Ministry and school boards should make sure that early scientifically validated screening and evidence-based interventions are equally implemented within French-language instruction. Students with reading difficulties should have an equal opportunity to learn in French.

 

Revise Policy/Program Memoranda (PPMs)

62. The Ministry of Education (Ministry) should work with external expert(s) to revise PPMs 8, 11 and 155 so they provide clear directives to teachers, principals and school boards about their respective responsibilities. The PPMs should be updated to reflect the current scientific research consensus on early identification of students at risk for reading disabilities. The PPMs should:

  1. Mandate a tiered Response to Intervention (RTI)/Multi-tiered System of Supports (MTSS) approach for all students
  2. State that screening tools should be used to immediately provide tiered intervention to students who require support
  3. Require school boards to provide small-group interventions (tier 2) for students who struggle with evidence-based classroom instruction (tier 1). School boards should provide more intensive and often individualized interventions (tier 3) to students who struggle despite tier 1 instruction and tier 2 interventions, based on progress monitoring. At tier 3, a psychoeducational assessment could be used, but should not be required, to fully assess the learning challenges, and should not delay tier 3 intervention
  4. Remove the statement in PPM 11 that school boards should consider a reasonable delay in the language-based aspect of assessment for students whose language is not English or French. All students, including multilingual students (who are learning English at the same time as they are learning the curriculum), should be screened for word-reading difficulties
  5. Update the resources presently listed in the PPMs to include the most current science-based research
  6. Revise the PPMs to reflect the OHRC’s recommendation to mandate early, evidence-based screening. If PPM 155 is not revised, then the Ministry should provide a directive to school boards that makes clear that early screening is a special education assessment or province-wide assessment and exempt from the scope of PPM 155.

 

Mandate accountability measures

63. The Ministry of Education (Ministry) should work with external expert(s) to mandate data collection on the selected screening tools to improve accountability. Specifically, the Ministry should:

  1. Mandate that school boards collect data to further validate and, if necessary, refine screening tools and decision-making processes
  2. Develop measures to monitor progress in word-reading accuracy and fluency skills that are being targeted in specific interventions.

 

64. School boards should make sure clear standards are in place to communicate with students and parents about the screening tool, the timing, and how to interpret the results. The communication should also indicate when and what intervention will be provided if the student is identified as at risk for reading difficulties.

 

65. School boards should not use the results of screening to performance-manage teachers. No teacher should face discipline or discharge because of screening results.

 

Ensure educators receive adequate professional learning on screening tools

66. School boards should make sure staff (for example, teachers) administering the screening tools receive comprehensive, sustained and job-embedded professional learning on the specific screening tool or tools that they will be administering, and on how to interpret the results.

 

67. School boards should make sure educators are supported with time to complete these screening assessments and related data handling.

 

 

[875] A screening measure is “a brief assessment that provides predictive information about a child’s development in a specific academic area.” See G N Davis et al, “Children at risk for reading failure; constructing an early screening measure” (2007) 39:5 Teaching Exceptional Children 32 at 33 [Davis, “Children at risk”].

[876] P Nevills & P Wolfe, Building the reading brain, PreK–3 (2nd ed.). (Thousand Oaks, CA: Corwin Press, 2009).

[877] Moats, Teaching Reading Is Rocket Science, supra note 22.

[878] S Vaughan & Lynn S Fuchs, “Redefining Learning Disabilities as Inadequate Response to Instruction: The Promise and Potential Problems” 18:3 Learning Disabilities Research & Practice 137 at 139 [Vaughan & Fuchs, “Redefining Learning Disabilities”].

[879] Vaughan & Fuchs, “Redefining Learning Disabilities,” supra note 878.

[880] Linda Siegel, “A case study of successful early screening and intervention” (2018) 44:3 Perspectives on Language and Literacy 29; O Ozernov-Palchik & DE Gabrieli, “Neuroimaging, early identification and personalized intervention for developmental dyslexia” (2018) 44:3 Perspectives on Language and Literacy 15; Louisa Moats, “When older students can’t read” (2002) 58:6 Educational Leadership: Journal of the Department of Supervision and Curriculum Development 36.

[881] Vaughan & Fuchs, “Redefining Learning Disabilities,” supra note 878 at 137.

[882] IES defines strong as moderate to high.

[883] R Gersten et al, Assisting Students Struggling with Reading: Response to Intervention (RtI) and MultiTier Intervention in the Primary Grades: A Practice Guide NCEE 2009-4045 (Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education, 2008), online: Institute of Education Sciences https://ies.ed.gov/ncee/wwc/docs/practiceguide/rti_reading_pg_021809.pdf [Gersten et al, Assisting Students Struggling with Reading].

[884] Ibid at 6—11.

[885] Strong refers to consistent and generalizable evidence that a program causes better outcomes. Moderate refers to evidence from studies that allow strong causal conclusions but cannot be generalized with assurance to the population on which a recommendation is focused (perhaps because the findings have not been widely replicated) or to evidence from studies that are generalizable but have more causal ambiguity than offered by experimental designs (such as statistical models of correlational data or group comparison designs for which equivalence of the groups at pretest is uncertain). Low refers to expert opinion based on reasonable extrapolations from research and theory on other topics and evidence from studies that do not meet the standards for moderate or strong evidence. See Table 1 for details of the criteria used to determine the level of evidence for each recommendation. Gersten et al, Assisting Students Struggling with Reading, supra note 883. 

[886] Gersten et al, Assisting Students Struggling with Reading, supra note 883 at 6—11. The report further outlines screening for early language and background knowledge of students that may interfere with developing reading comprehension. Specifically, it is important to screen the areas of receptive and productive vocabulary, grammar sensitivity, and background or word knowledge; however, they also note that these screening measures are not yet well developed or readily accessible.

[887] Gersten et al, Assisting Students Struggling with Reading, supra note 883 at 13: “Source: Authors’ compilation based on Fuchs, Fuchs, Thompson, Al Otaiba, Yen, Yang, Braun & O’Connor (2001b), Speece et al. (2003b); Schatschneider (2006); O’Connor & Jenkins (1999); and Baker & Baker (2008) for letter-naming fluency. For phoneme segmentation, O’Connor & Jenkins (1999). For non-words word fluency, Speece et al. (2003b); Good, Simmons & Kame’enui (2001). For word identification, Fuchs, Fuchs & Compton (2004); Compton et al. (2006). For oral reading fluency, Fuchs, Fuchs, Hosp & Jenkins (2001a); Fuchs, Fuchs & Maxwell (1988); Schatschneider (2006); Speece & Case (2001); Gersten, Dimino & Jayanthi (2008); Baker, Gersten, Haager & Dingle (2006).”

[888] For the DIBELS measures alternative-form reliability estimate for grade 1 letter-naming fluency; .86 for grade 1 non-word fluency; 83 and .90 for grade 2 oral reading fluency. R H Good & R Kaminski, Dynamic indicators of basic early literacy skills (Longmont, CO: Sopris West Educational Services, 2003).

[889] Coefficient alpha estimates are .92 and .91 for six- and seven-year-old children on the elision measure, and .89 and .86 for six- and seven-year-old children on the sound-matching measure on the Comprehensive Test of Phonological Processing. See JK Torgesen et al, “Prevention and remediation of severe reading disabilities: Keeping the end in mind” (1997) 1:3 Scientific Studies of Reading 217. Alternate test-form and stability coefficients exceed .90 in Grade 1 for the word identification fluency task. See D L Compton et al, “Selecting at-risk readers in first grade for early intervention: a two-year longitudinal study of decision rules and procedures” (2006) 98:2 Journal of Educational Psychology 394.

[890] Coefficient alpha estimates are .84 for Grade 1 letter sound knowledge, .80 for Grade 1 phoneme blending, and .85 and .83 for Grade 1 and 2 word reading on the Texas Primary Reading Inventory (1999). Children’s Learning Institute, University of Texas-Houston Health Science Center, Technical report: Texas primary reading inventory, 1999 ed.  (Houston: Texas Institute for Measurement, Evaluation and Statistics, 1999).    

[891] M L Farrall, Reading Assessment: Linking Language, Literacy, and Cognition (Hoboken, NJ: Wiley, 2012).

[892] Davis, “Children at risk”, supra note 875.

[893] Fletcher et al, Learning disabilities, supra note 59.

[894] S Brady, “The 2003 IDA definition of dyslexia: A call for changes” (2019) 45:1 Perspectives on Language and Literacy 15.

[895] B F Pennington & R K Olson, “Early reading development in children at family risk for dyslexia” (2005) 72:3 Child Development 816.

[896] D Hartas, “Families’ social backgrounds matter: Socio-economic factors, home learning and young children’s language, literacy and social outcomes” (2011) 37 British Educational Research Journal 893.

[897] J E Herbers et al, “Early reading skills and academic achievement trajectories of students facing poverty, homelessness, and high residential mobility” (2012) 41:9 Educational Researcher 366.

[898] K G Noble et al, “Socioeconomic background modulates cognition-achievement relationships in reading” (2006) 21:3 Cognitive Development 349.

[899] C A Denton, “Response to intervention for reading difficulties in the primary grades: Some answers and lingering questions.” 45:3 Journal of learning disabilities (2012) 232.

[900] Other educators such as early childhood educators may also have a role in assessing students and identifying their needs. See section 4, Context for the inquiry – Inquiry scope.

[901] Royal Commission on Learning, For the Love of Learning (Toronto: Queen’s Printer for Ontario, 1994) at 2, online: Queen’s University https://qspace.library.queensu.ca/bitstream/handle/1974/6880/rcol_short_version.pdf?sequence=5&isAllowed=y.

[902] Moats, Teaching Reading is Rocket Science, supra note 22.

[903] See for example “Screening for Reading Impairments” (last visited 27 January 2022), online: Gaab Lab gaablab.com/screening-for-reading-impairments.

[904] “Policy 209: Student Assessment” (last reviewed 28 May 2013), online: North Vancouver School District sd44.ca/Board/PoliciesProcedures/Series200/Policy209/Pages/default.aspx#/=.

[905] See for example “School Plan for 2011-2012” (2011), online: North Vancouver School District sd44.ca/Board/Achievement/SchoolPlans/Documents/2011_12/UpperLynnSchoolPlan2011_12.pdf

[906] Inclusive Education 44 Learning Services Handbook (2020), online: North Vancouver School District sd44.ca/ProgramsServices/InclusiveEducation/Documents/NVSD%20Inclusive%20Education%20Handbook%202020.pdf.

[907] Ibid.  

[908] Ibid.  

[909] K Smolkowski & K D Cummings, “Evaluation of the DIBELS diagnostic system for the selection of native and proficient English speakers at risk of reading difficulties” (2016) 34:2 Journal of Psychoeducational Assessment 103. R Kaminski et al, “Best practices in using Dynamic Indicators of Basic Early Literacy Skills for formative assessment and evaluation” (2008) 4 Best practices in school psychology 1181.

[910] "DIBELS® 8th Edition Materials,” (last visited 27 January 2022), online: University of Oregon: DIEBELS https://dibels.uoregon.edu/materials.

[911] Jennifer Palma “North Vancouver Tests for Dyslexia Early,” Global News (8 November 2017), Global News, online: https://globalnews.ca/video/3851769/north-vancouver-tests-for-dyslexia-early.

[912] U.K. Department of Education, “National Curriculum assessments at key stage 1 and phonics screening checks in England, 2018,” (2019), online: UK Government www.gov.uk/government/statistics/phonics-screening-check-and-key-stage-1-assessments-england-2018/national-curriculum-assessments-at-key-stage-1-and-phonics-screening-checks-in-england-2018 [UK Department of Education, “National Curriculum assessments”]; U.K. Department of Education, Assessment framework for the development of the Year 1 phonics screening check: for test developers”, (2017), online: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/628842/Y1_Phonics_assessment_framework_PDFA_V3.pdf.

[913] U.K. Department of Education, Phonics screening check: administration guidance (2019), online: UK Government https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/798235/2019_phonics_screening_check_administration_guidance.pdf; U.K. Department of Education, “The phonics screening check: responding to the results: Departmental advice for reception and key stage 1 teachers” (2013), online: U.K. Government https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/285349/The_phonics_screening_check_responding_to_the_results--.pdf.

[914] S Machin et al, “Changing how literacy is taught: evidence on synthetic phonics” (2018) 10:2 American Economic Journal: Economic Policy 217. This study reports that phonics helps close the achievement gap.

[915] U.K. Department of Education, English programmes of study: key stages 1 and 2: National curriculum in England (2013), online: U.K. Government https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/335186/PRIMARY_national_curriculum_-_English_220714.pdf.

[916] U.K. Department of Education, Key stage 1 assessment and reporting arrangements (2019), online: U.K. Government  https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/838959/2020_KS1_assessment_and_reporting_arrangements.pdf [U.K. Department of Education, Key stage 1 assessment and reporting arrangements].

[917] U.K. Department of Education, “Guidance: Key stage 1 teacher assessment guidance” (2022), online: UK Government gov.uk/government/publications/key-stage-1-teacher-assessment-guidance/key-stage-1-teacher-assessment-guidance.

[918] U.K. Department of Education, Key stage 1 assessment and reporting arrangements, supra note 916.

[919] As measured by eligibility for free school meals.

[920] U.K. Department of Education, “National Curriculum assessments,” supra note 912.

[921] Ann-Louise Hordacre et al, Evaluation of the Trial of the UK Phonics Screening Check in South Australian Schools (2017), online: South Australia Department of Education education.sa.gov.au/sites/default/files/evaluation-uk-phonics-screening-check-sa.pdf?acsf_files_redirect [Hordacre et al].

[922] “Year 1 Phonics Screening Check begins” (18 August 2020), online: Steven Marshall, Premier of South Australia premier.sa.gov.au/news/media-releases/news/year-1-phonics-screening-check-begins.

[923] “2022 Phonics screening check: Information for parents/carers, families and the general community” (last visited 28 January 2022), online (pdf): South Australia Department of Education education.sa.gov.au/sites/default/files/phonics-screening-check-fact-sheet.pdf.

[924] “About the Literacy Hub” (last visited 28 January 2022), online: Literacy Hub https://literacyhub.edu.au/about-the-literacy-hub/; “Phonics Check: Year 1” (last visited 28 January 2022), online: Literacy Hub https://literacyhub.edu.au/phonics-check.html.

[925] “Year 1 Phonics Screening Check” (last modified 12 January 2022), online: New South Wales Government https://education.nsw.gov.au/teaching-and-learning/curriculum/literacy-and-numeracy/assessment-resources/phonics-screening-check.

[926] Hordacre et al, supra note 921.

[927] Alabama, Arizona, Arkansas, California, Florida, Georgia, Illinois, Indiana, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Jersey, New Mexico, North Carolina, Oregon, Rhode Island, South Carolina, Tennessee, Texas, Virginia, West Virginia, Wyoming. See also: “Dyslexia in other countries: Dyslexia and Reading Instruction in the U.S.” (last visited 28 January 2022), online: International Dyslexia Association Ontario idaontario.com/dyslexia-in-other-countries/; “State Dyslexia Laws” (last modified 24 October 2021), online: Dyslegia: A Legislative Information Site dyslegia.com/state-dyslexia-laws/.

[928] See for example Connecticut State Department of Education, SLD/Dyslexia Assessment Resource Guide (October 2016), online (pdf): IDA Ontario idaontario.com/wp-content/uploads/2018/02/Connecticut-SLD-Dyslexia-Assessment-Resource-Guide-2016.pdf and Alabama State Board of Education, Alabama Dyslexia Resource Guide (last modified 27 October 2016), online (pdf): IDA Ontario idaontario.com/wp-content/uploads/2017/06/Alabama-dyslexia-resource-guide.pdf. See also “Universal Screening for Reading Difficulties: Screening Tools – Selection and Use” (last visited 29 January 2022), online: Decoding Dyslexia Canada https://decodingdyslexiaca.org/universal-screening/screening-tools-selection-use.

[929] See for example The Dyslexia Handbook: Procedures concerning dyslexia and related disorders (2018), online: Texas Education Agency https://tea.texas.gov/sites/default/files/2018-Dyslexia-Handbook_Approved_Accomodated_12_11_2018.pdf.

[930] “Screening and identifying characteristics of dyslexia” (19 August 2019), online: Minnesota Department of Education https://education.mn.gov/mdeprod/idcplg?IdcService=GET_FILE&dDocName=MDE086773&RevisionSelectionMethod=latestReleased&Rendition=primary.

[931] “Section 3323.251: Dyslexia Screening” (12 April 2021), online: Ohio Legislative Services Commission https://codes.ohio.gov/ohio-revised-code/section-3323.251. For the 2022–2023 school year, all students in Grades 1 through 3 must receive a “tier one dyslexia screening measure” before the end of the school year. Kindergarten students enrolled for the 2022–2023 school year must be screened with the same measure sometime between January 1, 2023 and January 1, 2024. For all school years thereafter, all Kindergarten students must receive a “tier one dyslexia screening measure” between January 1 of their first school year and the following January 1. Other students in Grades 1 through 6 can receive screenings if their parent or teacher requests it.

[932] “Answers to frequently asked questions about Ohio’s dyslexia support laws now available online” (5 May 2021), online: Ohio Department of Education http://education.ohio.gov/Media/Ed-Connection/May-10-2021/Answers-to-frequently-asked-questions-about-Ohio%E2%80%99s.

[933] “Mayor de Blasio and Chancellor Porter announce academic recovery plan for pivotal school year ahead” (last modified 13 July 2021), online: New York City Department of Education schools.nyc.gov/about-us/news/announcements/contentdetails/2021/07/09/mayor-de-blasio-and-chancellor-porter-announce-academic-recovery-plan-for-pivotal-school-year-ahead; Alex Zimmerman, “Carranza wants to assess NYC students throughout the year, but critics decry overtesting” (27 September 2019), online: Chalkbeat New York https://ny.chalkbeat.org/2019/9/27/21121783/carranza-wants-to-assess-nyc-students-throughout-the-year-but-critics-decry-overtesting; Claire Hunt, “Department of Education implements mandatory literacy screenings for kindergarten to second grade students in New York City” (14 October 2021), online: Columbia Spectator columbiaspectator.com/news/2021/10/14/department-of-education-implements-mandatory-literacy-screenings-for-kindergarten-to-second-grade-students-in-new-york-city/.

[934] Alex Zimmerman & Yoav Gonen, “NYC plans to screen nearly 200,000 students in the early grades to uncover struggling readers. Then what?” (1 September 2021), online: Chalkbeat New York https://ny.chalkbeat.org/2021/9/1/22652793/literacy-screening-nyc-schools [Zimmerman & Gonen, “NYC plans to screen”].

[935] Jennifer Buckingham & Kevin Wheldall, “Why all states and territories should follow SA’s lead and introduce the Year 1 Phonics Check” (11 June 2020), online: Nomanis nomanis.com.au/single-post/why-all-states-and-territories-should-follow-south-australia-s-lead-and-introduce-the-year-1-phonics; Zimmerman & Gonen, “NYC plans to screen,” supra note 934.

[936] Auditor General, 2020 Value for Money Audit: Curriculum, supra note 328 at 3.

[937] Education Act, s 8(3).

[938] Wilsa Audet et al, Evaluation report: the impact of the Literacy and Numeracy Secretariat: changes in Ontario’s Education System (Canadian Language and Literacy Research Network, 2009), online: Internet Archive https://web.archive.org/web/20190621095755/www.edu.gov.on.ca/eng/document/reports/ome_report09_en.pdf at 63.

[939] M J Gallagher et al, “Achieving Excellence: Bringing Effective Literacy Pedagogy to Scale in Ontario's Publicly-Funded Education System” (November 2016) 17:4 Journal of Educational Change 477 [Gallagher et al, “Achieving Excellence”].

[940] R Childs with M Herbert et al, A Study About Policy/Program Memorandum (PPM) 155: Diagnostic Assessment in Support of Student Learning (12 May 2014), prepared for the Ontario Ministry of Education, at 6–7, online: ResearchGate https://www.researchgate.net/publication/320707367_A_Study_about_PolicyProgram_Memorandum_PPM_155_Diagnostic_Assessment_in_Support_of_Student_Learning_Final_Report [Childs Report].

[941] Growing Success: Assessment, Evaluation and Reporting in Ontario Schools (2010), online: Ontario Ministry of Education http://www.edu.gov.on.ca/eng/policyfunding/growsuccess.pdf [Ontario Ministry of Education, Growing Success].

[942] Lynne Hollingshead, “Teacher Professionalism Constructed in Policy” (last visited 29 January 2022), online: Quest Conference questconference.ca/wp-content/uploads/2019/01/2014HollingsheadArticle.pdf.

[943] Childs Report, supra note 940 at 5, 26.

[944] Moats, Teaching Reading Is Rocket Science, supra note 22 at Foreword.

[945] PPM 11, supra note 209.

[946] The Auditor General recommended changes to the main policy document on assessment, Growing Success, to “take into account changing knowledge about assessment and new commitments to early childhood learning, equity, inclusion, special educational needs, culturally relevant and responsive pedagogy and the use of technology.” Auditor General, 2020 Value for Money Audit: Curriculum, supra note 328 at 43.

[947] Geva & Wade-Woolley, “Issues in the assessment of reading disability in second language children,” supra note 458; D’Angiulli et al, “Schooling,” supra note 764.

[948] PPM 8, supra note 209.

[949] The Ministry notes that monitoring is currently on hold to minimize reporting burden during COVID-19.

[950] Policy/Program Memorandum No 155: Diagnostic Assessment in Support of Student Learning (7 January 2013), online: Ontario Ministry of Education ontario.ca/document/education-ontario-policy-and-program-direction/policyprogram-memorandum-155 [PPM 155].

[951] Childs Report, supra note 940 at 24. According to the Childs Report, “formal” may have been used to mean “purchased, as opposed to teacher developed, assessment tools,” or perhaps to mean “board-mandated”: ibid.

[952] Ontario Ministry of Education, Growing Success, supra note 941 at 144.

[953] Gersten et al, Assisting Students Struggling with Reading, supra note 58 at 4.

[954] S D Martin & E S Shapiro, “Examining the accuracy of teachers’ judgements of DIBELS performance” (2011) 48:4 Psychology in the Schools 343.

[955] Special Education Assessments and province-mandated assessments are outside the scope of PPM 155. Also, the Ministry has powers under the Education Act, see ss 8(1), 8(3)(a) and s 11.1.

[956] “Policy and Program Memorandum 155 (PPM 155): Diagnostic Assessment in support of student learning – Advice for members” (last modified December 2018), online: Elementary Teachers Federation of Ontario https://members.etfo.ca/SupportingMembers/Employees/PDF%20Versions/Policy%20and%20Program%20Memorandum%20155.pdf.

[957] Vivian McCaffrey et al, It’s Elementary: A brief history of Ontario’s public elementary teachers and their federations (2018) at 12, online: Elementary Teachers Federation of Ontario https://www.etfo.ca/getmedia/832d2c35-916a-4450-9cd3-0b4768e9360d/181204_HistoryETFO2018.pdf.

[958] Gallagher et al, “Achieving Excellence,” supra note 939 at 486.

[959] Ibid.

[961] Childs Report, supra note 940 at 16.

[962] Ibid at 18.

[963] Ibid at 19: “While at the board level there is concern about the lack of consistency, the elementary teacher sees the introduction of PPM 155 as showing that teachers’ professional judgment should be respected, ‘something that has been ignored because of the data driven needs of the board.’”

[964] Childs Report, supra note 940.

[965] Ibid at 15.

[966] Ibid at 22: “Before PPM 155, teachers were required to administer board-specified diagnostic assessment tools in September and additional ‘anchor tasks’ in February to measure progress.”

[967] Brief submitted to the Ontario Ministry of Education regarding education assessment in Ontario (January 2018), online: Conseils ontariens des directrices et des directeurs de l’éducation de langue française https://codelf.ca/fichiers/pdf/codelf-brief-Jan8-2018.pdf.

[968] Accessibility for Ontarians with Disabilities Act, 2005, S.O. 2005, c. 11 [Accessibility for Ontarians with Disabilities Act].

[969] K–12 Education Standards Development Committee, Development of proposed Kindergarten to Grade 12 (K–12) education standards – 2021 initial recommendations report (30 November 2021) at Initial proposed long-term objective: Guiding principles statements, online: Ontario ontario.ca/document/development-proposed-kindergarten-grade-12-k-12-education-standards-2021-initial-recommendations [K–12 Education Standards Development Committee, Development of proposed K–12 education standards].

[970] Ibid at Barrier area narratives and recommendations: s. 3.

[971] Parker et al, “A brief report,” supra note 806.

[972] J E Spector, “How reliable are informal reading inventories?” (2005) 42 Psychology in the Schools 593 [Spector, “How reliable are informal reading inventories?”].

[973] Parker et al, “A brief report,” supra note 806.

[974] Ibid at 64.

[975] Ibid at 65. See also, on the poor psychometric properties of reading inventories: Spector, “How reliable are informal reading inventories?” supra note 972.

[976] Ontario Ministry of Education, A Guide to Effective Instruction in Reading, supra note 730.

[977] Ontario Ministry of Education, The Kindergarten Program 2016, supra note 707.

[978] M M Clay, An observation survey of early literacy achievement, 4th ed (Portsmouth, NH: Heinemann, 2019).

[979] Ontario Ministry of Education, A Guide to Effective Instruction in Reading, supra note 730.

[980] Jerome V D’Agostino et al, “The Generalizability of Running Record Accuracy and Self-Correction Scores” (2021) 42:2 Reading Psychology 111, DOI: https://doi.org/10.1080/02702711.2021.1880177.

[981] “Fountas & Pinnell Apps” (last visited 27 January 2022), online: Fountas & Pinnell Literacy: Elevating Teacher Experience fountasandpinnell.com/apps/; “Running Records – Second Read” (last visited 27 January 2022), online: Fountas & Pinnell Literacy: Elevating Teacher Experience fountasandpinnell.com/forum/messages?TopicID=219. See also Fountas & Pinnell, Guided Reading, supra note 702.

[982] Psychometric properties refer to the validity and reliability of the measurement tool.

[983] Moats, Teaching Reading Is Rocket Science, supra note 22 at 22; Parker et al, “A brief report,” supra note 806. See also, on the poor psychometric properties of reading inventories: Spector, “How reliable are informal reading inventories?” supra note 972.

[984] Shanahan, “What constitutes a science of reading instruction?” supra note 700 at S239.

[985] Parker et al, “A brief report,” supra note 806.

[986] Gersten et al, Assisting Students Struggling with Reading, supra note 883.

[987] J R Jenkins & E Johnson, “Universal screening for reading problems: Why and how should we do this” (2018), online (pdf): RTI Action Network, RTI-Action-Network_Why_and_How.pdf (decodingdyslexiaca.org); J K Gilbert et al, “Early screening for risk of reading disabilities: Recommendations for a four-step screening system” (2012) 38:1 Assessment for Effective Intervention 6.

[988] Gersten et al, Assisting Students Struggling with Reading, supra note 883.

[989] NICHD: National Reading Panel Report, supra note 654.

[990] We include people who have experience in classroom settings: classroom teachers, teacher candidates, learning support teachers, principals, vice-principals, in-school literacy specialists, special education teachers and early childhood educators.

[991] This included both private and school board SLPs.

[992] R K Wagner et al, Comprehensive test of phonological processing: CTOPP (Austin, TX: Pro-ed, 1999).

[993] Regulated Health Professions Act, 1991, S.O. 1991, c. 18; Audiology and Speech-Language Pathology Act, 1991, S.O. 1991, c. 19; Personal Health Information Protection Act, 2004, S.O. 2004, c.3, Sched. A; Health Care Consent Act, 1996, S.O. 1996, c.2, Sched. A; Education Act: for students under the age of 18, parents and guardians give written consent.
