Today in education, more than ever before, teachers have to know how to analyze data.  For the most part, teachers have mastered the art of administering assessments.  There is no shortage of assessments: we use screeners, diagnostics, progress monitors, and outcome assessments.  And let’s not forget good old test prep, which is probably the most-used assessment of all.  One of the more useful ways I have found to analyze data and sort kids into groups is the “Four Quadrant Sort”.  The purpose of this article is to explore several different reading and math four quadrant sorts. 

Sort #1 – Reading  -  Accuracy vs. Fluency

                The first sort I like to use is the sort for reading fluency.  We use the data from our universal screener, AIMSweb R-CBM (oral reading fluency).  The sort is completed using both fluency (words per minute) and accuracy (percent correct) data.  You could also use data from the DIBELS Oral Reading Fluency measure, or any other assessment that gives you results in words per minute and accuracy percentages.  You could even take whatever reading passage you want to use, have all your students read it orally for one minute, and mark their errors.  Use the total number of words, the number of errors, and the number of words read correctly to calculate their accuracy.  All you would need then is a chart of suggested words per minute, like the one by Hasbrouck & Tindal.  A copy of their chart can be found online.
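The accuracy arithmetic above is simple enough to sketch in a few lines of code.  A minimal example (the function name and the word/error counts are my own made-up illustration, not norms from any chart):

```python
# Compute words correct per minute (WCPM) and accuracy from a one-minute read.
def fluency_and_accuracy(words_attempted, errors):
    words_correct = words_attempted - errors
    accuracy_pct = 100.0 * words_correct / words_attempted
    return words_correct, accuracy_pct

# A student who attempted 112 words with 5 errors in one minute:
wcpm, accuracy = fluency_and_accuracy(112, 5)
print(wcpm)                # 107 words correct per minute
print(round(accuracy, 1))  # 95.5 percent accuracy
```

You would then compare the words-correct number against a norms chart (such as Hasbrouck & Tindal's) to decide whether the student counts as fluent.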

                The first quadrant of the sort is for students that are both accurate and fluent.  Accurate means the student has an accuracy percentage of at least 95% (meaning that 95% of the words they read were correct).  You may choose to use 98% as your accuracy cutoff score.  Fluent means the student is above the 25th percentile in words per minute, according to the national norms table provided by AIMSweb.  You may decide to make your fluency cutoff the 50th percentile instead.  These are your “enrichment” students.  The second quadrant is for the students that are accurate, but not fluent.  In other words, these students have accuracy percentages of at least 95% (or 98%, if you choose), but their fluency scores are not above the 25th percentile (or 50th percentile).  These are most likely your “benchmark” students.  The third quadrant is for the students that are neither accurate nor fluent.  They have an accuracy percentage below 95% (or 98%), and a fluency score at or below the 25th (or 50th) percentile in words per minute.  These are the real “intervention” kids.  These are the kids we choose to assess using a diagnostic.  The fourth quadrant is for the students that are fluent, but not accurate.  This is normally not a very big group of kids.  These students read enough words per minute to place them above the 25th (or 50th) percentile; their accuracy scores, however, are not at the required 95% (or 98%).  These students are most likely to be grouped with your “enrichment” or “benchmark” students, depending on the number of words per minute they read.  If there are enough students in this quadrant and you have the resources, they can be their own group.  Some four quadrant sorts can be found at the following websites: 
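Pulling the cutoffs together, the whole sort reduces to one small rule.  Here is a sketch in Python; the student names and scores are invented, and the defaults mirror the 95% accuracy and 25th-percentile fluency cutoffs discussed above (swap in 98 or 50 if you prefer the stricter cutoffs):

```python
# Sort a student into one of the four quadrants of the accuracy-vs-fluency sort.
def quadrant(accuracy_pct, fluency_percentile,
             accuracy_cutoff=95, fluency_cutoff=25):
    accurate = accuracy_pct >= accuracy_cutoff
    fluent = fluency_percentile > fluency_cutoff
    if accurate and fluent:
        return 1   # enrichment
    if accurate:
        return 2   # benchmark: accurate, not fluent
    if not fluent:
        return 3   # intervention: neither accurate nor fluent
    return 4       # fluent, not accurate

# Hypothetical students: (accuracy %, fluency percentile)
students = {"Ava": (98, 60), "Ben": (97, 20), "Cal": (88, 10), "Dee": (90, 55)}
groups = {name: quadrant(acc, pct) for name, (acc, pct) in students.items()}
print(groups)  # {'Ava': 1, 'Ben': 2, 'Cal': 3, 'Dee': 4}
```

The same function works for any screener that reports an accuracy percentage and a fluency percentile.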

Sort #2 – Reading  -  Fluency vs. Comprehension

                Another sort for reading is one that focuses on fluency and comprehension.  The comprehension measure we use for this sort is the AIMSweb MAZE measure.  It is a sentence-level comprehension assessment, essentially a written version of a cloze test.  Accuracy is not considered in this sort.  You may want to look at or add accuracy into this sort if you have large groupings.  For example, you may want to split each quadrant into two parts: one part would be those who are accurate at 95% and above (or 98% and above), the other part would be those below 95% (or 98%).  Essentially you would be turning the four quadrants into eight groups.  This is assuming, of course, that you have the personnel to have eight groups of students.
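If you do add accuracy as a third factor, the eight-group version is just the four-quadrant rule plus an accuracy flag.  A sketch (the function name is my own, and the quadrant numbering follows the fluency-vs-comprehension sort described in this section):

```python
# Split each quadrant of the fluency-vs-comprehension sort by accuracy,
# turning four quadrants into eight groups.
def eight_group(fluent, comprehends, accurate):
    if fluent and comprehends:
        q = 1   # enrichment
    elif fluent:
        q = 2   # fluent, not comprehending
    elif not comprehends:
        q = 3   # neither fluent nor comprehending
    else:
        q = 4   # comprehending, not fluent
    return (q, "accurate" if accurate else "not accurate")

# A student who is fluent but struggles with comprehension, with good accuracy:
print(eight_group(True, False, True))  # (2, 'accurate')
```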

                The first quadrant is for those students that are adequate in both fluency and comprehension.  Thus, their words per minute are above the 25th percentile (or 50th percentile), and their comprehension score is above the 25th percentile (or 50th percentile).  These are your “enrichment” students.  [Remember, these students can be divided into two groups: one group would be those that are adequate in fluency, comprehension and accuracy; the other would be those that are adequate in fluency and comprehension, but not accuracy.]  The second quadrant is for those students that are adequate in fluency, but not in comprehension.  Their scores in fluency place them above the 25th (or 50th) percentile, while their scores in comprehension are below it.  [You can also split this quadrant into two groups: those that are adequate in fluency and accuracy, but not comprehension; and those that are adequate in fluency, but not comprehension or accuracy.]  The third quadrant is for those students that are not adequate in fluency or comprehension.  They score below the 25th (or 50th) percentile on both the oral reading test and the MAZE test.  [If dividing this group into two, one group would be those that are not adequate in fluency, comprehension or accuracy; the other would be those that are not adequate in fluency or comprehension, but adequate in accuracy.]  The fourth quadrant is for those students that are adequate in comprehension, but not fluency.  Their scores on the MAZE comprehension test place them above the 25th (or 50th) percentile, but their score on the fluency measure places them below it.  [A further division of this quadrant would mean that one group is adequate in comprehension and accuracy, but not fluency; the other would be those that are adequate in comprehension, but not fluency or accuracy.]

Sort #3 – Math  -  Computation vs. Concepts & Application

For this particular sort, we use the data from the AIMSweb mathematics measures.  DIBELS currently has a math test as well.  I have not used it, so I am not sure whether it works here, but if it gives you both a computation score and a concepts & application score, you can use it for this sort.

The first quadrant is for those students that are adequate in both computation and concepts & application.  That is, their scores are above the 25th (or 50th) percentile in both computation and concepts & application.  The second quadrant is for those students that are adequate in computation, but not in concepts & application.  They were able to score above the 25th (or 50th) percentile in computation, but not on the concepts & application measure.  The third quadrant is for those students that are not adequate in computation or concepts & application.  These students scored at or below the 25th (or 50th) percentile on both measures.  The fourth quadrant is for those students that are adequate in concepts & application, but not computation.  This should be one of the smaller groups.  These are students whose scores on concepts & application are above the 25th (or 50th) percentile, while their computation scores are at or below it.

Sort #4 – Phoneme Segmentation

For those of you who teach kindergarten and first grade students, the sort for phoneme segmentation will be an important one.  It’s a little different from the previously discussed sorts, but is still easy to use to determine grouping of students.  The two factors to consider are the fluency with which the student segments the words into phonemes and whether or not they pass the assessment according to the criteria.  Two of the most common assessments for this skill are AIMSweb and DIBELS.

The first quadrant is for those students who can segment all phonemes fluently (meaning they meet the criteria for passing the assessment) and are accurate at 95% or higher.  The second quadrant is for those students that segment phonemes with 95% or higher accuracy, but do not pass the phoneme segmentation fluency assessment.  The third quadrant is for those students that segment phonemes, sounds, or word parts, but with accuracy below 95%; they do not pass the phoneme segmentation fluency assessment.  The fourth quadrant is for those students that are very quick, but not accurate: they pass the phoneme segmentation fluency assessment, but their accuracy is below 95%.


Sort #5 – Nonsense Word Fluency

This sort is especially useful for those of you who teach primary level students.  We will discuss two different NWF sorts: one for word reading fluency and one for phonics.  The first one I will discuss is the one where the students are reading words.

                The first quadrant is for those students that are reading whole words.  They are not sounding them out.  Some call this unitization.  The second quadrant is for those students that are reading words a sound at a time, then reading the whole word.  The third quadrant is for those students that are doing some blending.  Perhaps they are reading them as onset and rime.  The fourth quadrant is for those who are decoding the words a sound at a time.

The next Nonsense Word Fluency sort is the one for phonics or alphabetic principle.  In this case the students are not yet reading whole words.

                The first quadrant is for the students that can read the initial and final sounds.  Maybe they will only read initial sounds or final sounds.  The second quadrant is for the students that have repeated substitution errors for consonant and vowel sounds.  The third quadrant is for those students who have errors on the middle or medial vowels, usually deletions.  The fourth quadrant is for students who are unable to read the whole word or recode.

Where do you go from here?

                My take on this whole four quadrant sort for instructional groupings strategy is very simple.  Take any two pieces of data that you have obtained through assessment.  Identify which of the two skills you assessed is the more basic, prerequisite, fundamental one.  This is the one that needs to be adequate in quadrant 1, adequate in quadrant 2, not adequate in quadrant 3, and not adequate in quadrant 4.  Then take the other skill you assessed, the higher-level one, the one that builds upon the first.  This is the skill that is adequate in quadrant 1, not adequate in quadrant 2, not adequate in quadrant 3, and adequate in quadrant 4.
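That general rule can be written once and reused for any pair of skills.  A sketch, consistent with the quadrant layout of Sorts #1 through #3 (the function and parameter names are mine):

```python
# Generic four-quadrant sort: "basic" is the prerequisite skill,
# "higher" is the skill that builds on it.
def general_quadrant(basic_adequate, higher_adequate):
    if basic_adequate and higher_adequate:
        return 1   # adequate in both
    if basic_adequate:
        return 2   # basic skill only
    if not higher_adequate:
        return 3   # adequate in neither
    return 4       # higher skill only

# Example with accuracy (basic) vs. fluency (higher):
print(general_quadrant(basic_adequate=True, higher_adequate=False))  # 2
```

Plugging accuracy/fluency, fluency/comprehension, or computation/concepts & application into the two parameters reproduces the three sorts above.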

                This may seem oversimplified, but I came to this conclusion through comparing the sort for accuracy vs. fluency in reading and the sort for fluency vs. comprehension and also the sort for computation vs. concepts & applications in math.  In all of these sorts, the skill listed first is the more prerequisite of the two skills in the sort.  For example, accuracy comes before fluency.  Fluency is essential in order to be able to comprehend.  Being able to compute is essential for those working on concepts & application.

                Let me give you an example of what I’m talking about here.  I have not seen a sort for this particular set of skills: vocabulary and comprehension.  Since it is widely believed that vocabulary skill is a prerequisite to comprehension skill, I have chosen these two skills for this sort.  Take your vocabulary and comprehension assessment results.  Determine what adequate scores are for both of these skills.  If you use a published assessment, rather than one that you created, you won’t have to determine what an adequate score is; that information will be provided for you.

                The first quadrant would be those who are adequate in both vocabulary and comprehension.  The second quadrant would be those students who are adequate in vocabulary, but not comprehension.  The third quadrant would be those students who are not adequate in vocabulary or comprehension.  The fourth quadrant would be those students who are adequate in comprehension, but not vocabulary.

                I imagine this particular sort would be useful for those teachers who have kids that are adequate in both accuracy and fluency.  If there is an accuracy or fluency deficit, you most likely won’t even bother doing this vocabulary and comprehension sort.  Remember, accuracy comes first.

    Any time you begin a new assessment system, there will be "growing pains". In the district I teach in we had our share, but by the end of the second AIMSweb assessment session (Winter, 2012), we felt like we had a pretty good handle on it for the next session (Spring, 2012). Following, I will discuss some of the issues we had and the solutions - or attempted solutions - we implemented. 
    First of all let me describe our situation. We had a district-wide testing team. This was the first time our district had done this. Of course this was also the first time our district had really implemented an assessment system. We had not used a universal screener district-wide before. Previously, each building tested its own students, usually using a formative or diagnostic test. Our team consisted of our building-level interventionists. Our instructional coaches played a role, but for the most part didn't administer any assessments. They helped with organizing materials, served as "runners" to get kids to and from the testing area, helped make the schedule, etc. It should be noted that by "helped" I mean they basically did it all by themselves.
    Our testing team received training in how to administer and score the AIMSweb measures.  The training took place over two to three sessions.  We basically followed the AIMSweb Training Workbook, used the video examples to practice scoring, and received information from presenters in person and via webcam.  We didn't get training from certified AIMSweb trainers, and I think this made a difference.  In my opinion it would have been helpful to get training from someone who had actually administered the assessment measures. 
    Our first session (Fall, 2011) went pretty well considering we knew next to nothing going in.  Our main issue was in scoring.  We didn't agree on what to do about the whole "does the answer have to be written in the blank or not" question on the MCAP (Math Concepts & Application) and MComp (Math Computation) measures.  The instructions are pretty explicit: according to AIMSweb, if the answer isn't in the blank, it is marked incorrect.  However, there is a grey area.  The standardized instructions for the students on the MCAP specifically say to write the answer in the blank; the standardized instructions on the MComp don't.  Unfortunately, what this meant for us was that some scorers counted it wrong and some didn't.  So, we didn't have consistent scoring the first time around.
    The lesson we learned (I hope) is that you need to have those types of issues decided before you even administer the test. We talked about what it meant if the student didn't write the answer in the blank. We decided it meant they couldn't follow instructions, not that they could or couldn't compute the problem correctly. We asked ourselves, "What are we trying to determine with the test?" We decided we were not trying to determine whether or not a student can follow directions. We decided that it didn't matter if they put it in the blank or not, we just needed to all be scoring the same way. So, we eventually decided not to count it wrong as long as the answer was in the box somewhere and correct.
    Another issue we had was that there was no "team leader" for our testing team. We had someone we could call, but not anyone on site. In hindsight it would have been helpful to have a "go to" person assigned or appointed to the group. This could be one of the interventionists, or someone who doesn't do any testing. This would have saved us quite a bit of time when we had to try and figure out how we were going to score the math. That person could have made an executive decision or called someone to find out. Then there would have been no disagreement about what to do in a particular situation. 
    Another major issue we still have is what to do about the data in terms of getting it out to the teachers and explaining what it means.  We did eventually print out parent report letters and talk about the results at conferences.  We found that what we really need is for the teachers to receive some AIMSweb training.  We need to know how to read the data, interpret or analyze the data, and learn how to talk to parents about the data.  Some people aren't familiar with percentile ranks, norms, standardization, etc.
    Progress monitoring is another area that hasn't been perfectly implemented.  Some schools progress monitor once per week, some once every two weeks, some barely once a month.  There is a lot of information to consider when progress monitoring.  Some of the more important questions are: How often do you progress monitor?  Should you progress monitor or strategically monitor a particular student?  When progress monitoring, is it really necessary to "drill down" or "test backwards" until you find the level at which to monitor the student?  (That, by the way, takes a long time.)  How do you set the goals for the student?  What formula do you use?  What do you do if the student reaches his/her goal?  Are they automatically dismissed from intervention?  What if they aren't on a trajectory that shows they will meet their goal?  Do they automatically go to tier 3 interventions?  How many data points are necessary to make a decision about a student?
Hopefully you will be able to have some of these questions answered before you begin your district-wide assessment system. It will save you so much time and effort and you will be able to focus on what matters: what to do with the students who are at-risk according to the screener.

Intervention can take many forms.  Differentiation within your own classroom can be considered a form of intervention. Intervention can take the form of “push-in”, where students receive intervention services within the classroom from a person other than the classroom teacher.  More traditionally, someone other than the classroom teacher can provide “pull-out” intervention services for the students. 
Intervention can be provided outside of the school day, either before or after school.

No matter which way you or your school decides to provide intervention, it is not an easy task to undertake.  There are many variables involved that need to be considered. 
There are many questions that will have to be answered. 

How many people will be available to provide intervention?  

Does your school have interventionists?  

Will the classroom teachers be the only people available? 
Is your current schedule conducive to providing intervention in the way you have chosen?  

How much time is available for intervention?  

Is there funding available for resources or programs?  

How will you determine who is eligible for intervention?  

Will you progress monitor? 

What assessment(s) will you use?  

How will you know if your intervention has been successful? 

How many students will there be in intervention?  

For what subjects will you provide intervention?  Reading? Math?  

How many times per week will intervention be provided?  

How long will an intervention period be?

Will the intervention, if it is a separate class, be graded?

How will decisions regarding students be made?

How does all this fit into your school’s current Student Improvement Team?

And now, how will all this look considering the new Common Core State Standards are here?

Okay, I’ll stop now.  You get the point.

 Most teachers are familiar with the RTI process.  Here in Kansas we call it MTSS, or Multi-Tier System of Supports.  Some people refer to the process as a “three-tiered” approach.  Regardless of what you call it, you will need to start somewhere to get the whole process going.


 Where do you start? I’m not sure if there is a right or wrong answer to this question.  I suppose it depends upon which process you decide to follow.  If you are “lucky” enough to be a school that is undergoing improvement efforts, you may be able to procure some professional help to get you going in the right direction.  In some cases the state has intervention specialists, school improvement specialists, RTI specialists, etc. 

And don’t forget, there is a wealth of information available on the internet.  Do your homework and see what’s out there.  Chances are, there is a school very similar to yours that has undergone the same process.  It has been my experience that schools and teachers that have found something that works are very willing to share information.  It is also my opinion that if someone somewhere else has already figured out something that works, you don’t need to reinvent the wheel, so to speak. Take what works and customize it to work in your situation.


You may not be able to do anything about it, at least at first, but you need to look at your core programs.  Why? If your core is serving your students effectively, then you won’t have that many students in need of intervention. Your RTI pyramid will look more like it is supposed to: the bottom of the pyramid, the students effectively served by the core, should contain about 80% of the students; the next tier, the students effectively served by whatever intervention you are providing, should contain about 15%; and the top of the pyramid, about 5% of your kids, will receive more intense intervention.

In the case of my school and district, our three-tier model didn’t really resemble a pyramid.  We have well more than 5% of our kids eligible for tier 3 intense intervention.  We also have more than 15% of our kids receiving tier 2 intervention services.  Essentially, we found out that our core is not effectively serving our students.

Dealing with an ineffective core is an entirely separate issue that will not be discussed any further here.


One of the first things you will need to do is to figure out what is going on with your district’s assessment system.  Does it provide you with the information you need?  Are teachers actually using it?  Is the data being used to make decisions appropriately?  

Basically, you need to have these types of assessments:
    Screening assessments to determine who is at-risk (administered three times per year).
    Progress monitoring assessments to see how kids are responding to intervention.
    Diagnostic assessments to determine more specific areas of need.

We decided as a district that we needed a comprehensive system of assessments.  Essentially, you need to use a universal screener to determine who is at-risk for reading or math difficulty.  You can use some online screeners, some of which are free, but most likely you will need to purchase an assessment system.  Many good systems have both a screener and a progress monitor.  I don’t know of any commercially available assessment systems that have a screener, progress monitor and diagnostic assessment.

Some of the more widely known assessment systems are DIBELS (Dynamic Indicators of Basic Early Literacy Skills), AIMSweb, MAP (Measures of Academic Progress), and Renaissance’s STAR Reading and STAR Math.  

We have used DIBELS in the past.  You can download all of the testing materials you need for free.  It is a good, research-based program.  It will allow you to both screen and progress monitor your students in reading (and now) math.  Although it is not a diagnostic test, the data you get from DIBELS allows you to determine specific areas of need in reading and math.

AIMSweb works pretty much the same way.  

Some of the diagnostic tests that you can use are fairly inexpensive.  (I’m not aware of any inexpensive ones for math.)  The Quick Phonics Screener (QPS) is available for order online.  It will quickly tell you in which areas of phonemic and phonological awareness, as well as phonics, your students are weak.  The Phonological Awareness Skills Test (PAST) is available in Yvette Zgonc’s book “Sounds in Action”.  This test will tell you whether your students have weaknesses at the word level, syllable level, or phoneme level.


After you have chosen an assessment, you will need to set up a schedule of who, what, where and when to test.  You may want to have a testing team.  Or maybe your district will decide that each school will test their own kids.  Perhaps the classroom teachers will do the testing. Maybe you will have an interventionist to do your testing.


Next, you will need to decide what to do with all the data.  You might decide to just follow the guidelines in the assessment for determining who receives what services.  Maybe you will decide as a district to only provide intervention to the bottom 25% or bottom 10% of the students according to the screener.  

After you decide what will determine who receives services, you will need to figure out how many kids that is.  Who will teach them?  Interventionists?  Classroom teachers?  A combination of the two?  How will they be grouped?  How big are the groups going to be?  

Another consideration is what materials will be used to teach tier 2 and tier 3 kids.  Will you have commercially available programs that are specifically targeted to tier 2 and/or tier 3 kids?  Will your teacher or interventionist have to come up with their own materials?  

Somewhere along the line you will need to schedule time for intervention.  Will this schedule be dictated by the building’s overall schedule? Will it depend upon how many kids there are?  You will have to figure out how many teachers are available.  What time of day will it be?  Can kids be pulled out for intervention?  Will it be a push-in system?  How often will intervention be provided?  Every day?  Twice a week? Three times a week?  

As you can see, there are a lot of decisions to be made.  There are a lot of things to be considered.  There is a lot of work to be done.  Whatever you decide to do, you just have to get started and keep in mind that it’s all about the kids.  You are doing this for the kids.  You are making decisions based on what’s best for kids.  Keep that in mind with every decision you make, and the rest will eventually fall into place.

Here are some resources that may be worth looking into further.  I am partial to MTSS because that is what Kansas uses as its three tier model of intervention.  I also like it because when you are undertaking the task of implementing a district-wide intervention system, assessment system, etc., they have a detailed plan for how to do it.  They have a workbook that you can use with templates, examples, etc.  
National Center on Response to Intervention

DMG (Dynamic Measurement Group)
PAST/Yvette Zgonc


Recently we have added some outlines, lessons, resource lists, etc. 

What would you like to see added to this site to make it more useful to you?

This site is a free one, so some of the features we would like to add are not possible.  For example, you would have to download any of the documents in order to be able to view and/or purchase any of them.

Thanks for any input or comments.










