
High Stakes ManagementTM: Leveraging SuccessMaker® Research To Raise Achievement

November 1, 2001 Rev. June 1, 2005

Pearson Digital Learning

High Stakes Management is a trademark of NCS Pearson. SuccessMaker is a registered trademark of NCS Pearson.

Abstract

SuccessMaker® managed courseware provides in-depth coverage of a comprehensive set of mathematics and reading objectives at the K-8 levels. The Foundations courses, in particular, are designed to give educators a continuously updated view of the student's academic progress, as evidenced by the student's performance in the courseware. Moreover, student courseware performance is highly correlated with student achievement on high-stakes tests. Pearson Digital Learning has combined these advantages to enable educators to forecast and manage high stakes achievement over the course of the school year. Thus educators can meet accountability demands by drawing on sound educational practice throughout the school year. This paper describes the research foundations of Pearson Digital Learning's High Stakes ManagementTM initiative.

1 June 2005

Pearson Digital Learning Leveraging SuccessMaker Research


High Stakes Management: Leveraging SuccessMaker Research to Raise Achievement

Contents

Abstract
I. SuccessMaker's Role in Standards-Based Accountability
II. SuccessMaker Features Enabling High Stakes Management
    Curriculum structures
    Adaptive algorithms
    Forecasts of courseware progress
III. Basic Processes in High Stakes Management
    Relating courseware progress to high-stakes achievement
    Forecasting school achievement
IV. Research Basis
    Overview
    Models of student learning
    Meaningful measurements, predictable progress
    Relationship between test levels and SuccessMaker levels
    Forecasts of high stakes achievement
V. The High Stakes ManagementTM Program
    Initiating High Stakes Management in a district
    Monitoring the program
    Establishing the high-stakes relationship
    Using the High Stakes Forecast
VI. Summary
VII. References


I.

SuccessMaker's Role in Standards-Based Accountability

Robert Linn, co-director of the Center for Research on Evaluation, Standards, and Student Testing (CRESST), has observed that reform of one kind or another seems to be a permanent feature of the American educational system, with assessment and accountability playing increasingly prominent roles (Linn, 1998). Current educational reforms emphasize content standards and high-stakes testing (along with the dual demands of equity and high performance), but as Linn notes, the tests may not be very well aligned with the standards in some instances.

Although there are no easy fixes to such problems, a number of improvements are possible, and here we see a clear role for SuccessMaker managed courseware. As we will describe, SuccessMaker has a very fine-grained set of curriculum objectives that map readily to state standards. In addition, student levels in SuccessMaker are highly correlated with student scores on state assessments. Therefore we see SuccessMaker as a bridge from state standards to state assessments.

We put this idea into practice with a set of tools and services, which we refer to collectively as High Stakes Management. These tools and services equip educators with continuously updated forecasts of student achievement on high-stakes tests, and thereby enable educators to meet accountability demands over the course of the school year through sound pedagogical practice.

In the next section we describe the features of SuccessMaker that have made this offering possible. In the following sections we outline the elements of High Stakes Management, along with the research foundations of this initiative. Finally, we describe the High Stakes Management Program.


II.

SuccessMaker Features Enabling High Stakes Management

Curriculum structures

SuccessMaker courseware is the product of more than 30 years of research and development in adaptive instruction. Pearson Digital Learning's research department has collected data from thousands of students to refine the adaptation to the individual learner, and to improve the effectiveness of course design, instructional strategies, and implementation models.

SuccessMaker Foundations courses, such as Math Concepts and Skills (MCS) and Reader's Workshop (RW), are structured by strands, or content areas. The objectives in each strand are organized in increasing, evenly spaced levels of difficulty and indexed by grade-level units. A student moves to the next level in a strand by demonstrating mastery of the current level, based on established criteria. Strand level is thus an objective-based (criterion-referenced) measure of student skill and understanding.

Figure 1: Each strand in Math Concepts and Skills (MCS) spans several course levels

[Figure: a chart of MCS course levels 0 through 8, showing the span of each strand: Number Concepts, Geometry, Measurement, Addition, Problem Solving Strategies, Word Problems, Subtraction, Probability & Statistics, Fractions, Equations, Speed Games, Applications, Multiplication, Decimals, Division, and Science Applications.]

The position of a student in the overall course is represented by an average of the strand levels. This average level indicates the specific knowledge and skills the student has been presented and mastered. The student's overall course level summarizes the student's current status, and is used to compute the gains the student has made over specified periods of time. The student's overall gain is measured from the student's functional level in the course, which is established by a process called initial placement motion (IPM). The gain is calculated by measuring the difference between this initial level and subsequent levels. The gain thus represents the student's progress through content-specific objectives. For example, a student progressing from an initial placement level of 4.50 to a level of 5.75 has a gain of 1.25 levels. This is a criterion-referenced gain, representing an amount of work accomplished.
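The level-averaging and gain arithmetic described above can be sketched as follows (the function names and strand values are illustrative assumptions, not part of SuccessMaker itself):

```python
# Sketch of the course-level and gain arithmetic described in the text.
# Strand levels and function names are illustrative assumptions.

def course_level(strand_levels):
    """Overall course level: the average of the per-strand levels."""
    return sum(strand_levels) / len(strand_levels)

def gain(initial_level, current_level):
    """Criterion-referenced gain, measured from the initial placement level."""
    return current_level - initial_level

# Example from the text: a student placed at 4.50 who reaches 5.75
print(gain(4.50, 5.75))  # 1.25

# Hypothetical strand levels averaging to the overall course level
print(course_level([5.5, 6.0, 5.75, 5.75]))  # 5.75
```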


Adaptive algorithms

Using adaptive learning algorithms developed and refined over decades, the course adjusts the distribution of activities across strands for each student based on that student's individual performance. In this way the course identifies and emphasizes the areas in which the student needs the most attention, to ensure that the student's development of skills and learning of the subject material are well balanced. The student's average level is, therefore, a continuously updated, representative summary of the student's academic progress in the course.

If each student were to see the same block of material, as with some computer-assisted instruction, the amount of time needed to complete a specific lesson would be determined only by the student's response time and the number of attempts required. In contrast, SuccessMaker adapts to each student, adjusting the content distribution, the number of exercises presented on each objective, and the tutorials and other strategies to help the individual student learn. The learning models and mastery decisions of the Foundations-based curriculum make the most of student time on task by dynamically adjusting to student performance. Equally important, the software is designed to keep the student engaged by presenting the most relevant material at a pleasantly challenging pace.

Forecasts of courseware progress

Through the years the company's Research and Measurement Department has guided the evolution, refinement, and implementation of SuccessMaker courses. In particular, we have improved the recommendations for scheduling and pacing of students in our Foundations math and reading courses. We have analyzed data on thousands of students, calculating each student's gain over time (learning trajectory), and updating estimates of the average time students need to master particular objectives. This research enables educators, at the beginning of the year, to schedule students' time in a course to achieve desired gains.
Figure 2: Example of one student's learning trajectory

[Figure: scatter plot of gain (vertical axis, 0 to 1.4 course levels) versus time in hours (horizontal axis, 0 to 30). The dots represent one student's observed gain over time; the curve represents forecasted gains. Researchers model learning trajectories, compare predictions to observations, and thereby refine the model.]
At the beginning of the year SuccessMaker determines each student's appropriate initial placement in the course and reports this to the teacher. Then, based on the student's pace through the course, the system refines the estimated time the student will need to make desired gains. The teacher can thereby schedule the student for sufficient time in the course to progress from the initial SuccessMaker course level to the desired level.
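The scheduling estimate can be sketched with a simple linear gain model (an illustrative assumption; the actual SuccessMaker forecasting algorithms, described later in the Research Basis section, are more refined):

```python
# Sketch of time-to-target scheduling under a simple linear gain model.
# The function name and the constant gain rate are illustrative assumptions.

def hours_to_target(current_level, target_level, gain_rate_per_hour):
    """Estimated hours of courseware time needed to reach the target level."""
    if gain_rate_per_hour <= 0:
        raise ValueError("gain rate must be positive")
    return max(target_level - current_level, 0.0) / gain_rate_per_hour

# A student at level 4.50 gaining 0.05 levels per hour needs 25 hours
# to reach level 5.75.
print(hours_to_target(4.50, 5.75, 0.05))  # 25.0
```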


III.

Basic Processes in High Stakes Management

Relating courseware progress to high-stakes test achievement

In recent years, we have investigated the relationship between the courseware levels that SuccessMaker students in school districts across the country had reached at the time of the statewide assessment and their achievement on the test. Using linear regression, we identified specific SuccessMaker courseware levels corresponding to statewide assessment achievement at the 60% and 80% confidence levels. In a High Stakes Management Program, these courseware levels are known as courseware target levels.

We have analyzed the relationship to tests such as the Terra Nova, Stanford Achievement Test, Ninth Edition (SAT-9), Texas Assessment of Knowledge and Skills (TAKS), Florida Comprehensive Achievement Test (FCAT), California Standards Test (CST), North Carolina End-of-Grade tests (EOG), Georgia's Criterion-Referenced Competency Test (CRCT), New Jersey's Grade Eight Proficiency Assessment (GEPA), and others. For example, analyses in several Florida districts, one of which is depicted below, have enabled us to identify specific SuccessMaker levels that correspond to FCAT achievement levels.

Figure 3: Grade 5 Math, FCAT Scores versus MCS Levels

[Figure: scatter plot of FCAT scaled score (vertical axis, 100 to 400) versus MCS course level (horizontal axis, 1 to 8). Each point represents one student. Visually, the overall pattern shows the close relationship between student course level and test score, which is confirmed by a high correlation coefficient (r = 0.76).]
High Stakes Management Time-of-Test Forecast

The High Stakes Management Time-of-Test Forecast is a report developed by Pearson Digital Learning's Research Group that is based on student usage rates in the courseware and on courseware target levels. The main purpose of the report is to illustrate the probability that individual students (and groups of students) have of scoring proficient or higher on the statewide assessment. The underlying purpose of the forecast report is to provide individualized intervention strategies for students identified as at risk of not passing the test. The probabilities in the report are represented as high (green), medium (yellow), or low (red).

The key components of the High Stakes Management Time-of-Test Forecast report are the Prescriptive Scheduling algorithms embedded in the SuccessMaker management system (detailed in the Research section of this paper) and the SuccessMaker courseware target levels for achieving proficient on a particular statewide assessment.


Process

The SuccessMaker Prescriptive Scheduling report is run for a designated group of students. Although a variety of report options are available, the report must be run with the option of inputting a SuccessMaker courseware level, so that the embedded report algorithms forecast the amount of time required for each student to reach the inputted courseware level. Note that a separate report must be run, with a different courseware level inputted, for each grade and subject for which there is a High Stakes Management targeted group of students.

After the report is run, the data is exported as an ASCII text or Excel file, depending on the version of SuccessMaker from which the data is being pulled, so that it can be imported easily into an Excel template created by PDL's Research Group to produce the High Stakes Management Time-of-Test Forecast reports. After the Prescriptive Scheduling reports have been run and the data transmitted and received by PDL, the data is imported into the Excel template file. The template file prompts the user to enter three key dates: 1) the date the students started taking SuccessMaker courseware sessions, 2) the date the Prescriptive Scheduling report was run, and 3) the future date of the statewide assessment. There are two additional prompts, one for the grade level of the students and another for the name of the school.

Once all of the necessary inputs have been entered, a series of Excel macro procedures embedded in the template file is executed. The High Stakes Management Time-of-Test Forecast macros first calculate each individual student's Daily Usage Rate. This Daily Usage Rate is then multiplied by the number of Instructional Days to Go to project how much time each student will spend in the courseware before the statewide assessment is administered, if his or her current courseware usage remains stable. This time projection is multiplied by the Forecasted Gain Rate.
The resultant product is the Projected Forecasted Gain, which is then added to each student's courseware level at the time the Prescriptive Scheduling report was run to generate the student's Projected Forecasted Level. The Projected Forecasted Level is compared against the courseware target level (entered in the Prescriptive Scheduling report). If the Projected Forecasted Level for a particular student is greater than or equal to the courseware target level, it is formatted as a green box on the final report table. If the Projected Forecasted Level is less than the courseware target level, but within 15% of its value, it is formatted as a yellow box. If the Projected Forecasted Level falls below 85% of the courseware target level, it is formatted as a red box. The final report therefore does not display each student's Projected Forecasted Level, but rather a rectangular box, shaded green, yellow, or red, to indicate the student's probability of scoring proficient or higher on the statewide assessment as indicated by his or her current SuccessMaker courseware usage.

In addition to the table displaying the individual student forecasts, the High Stakes Management Time-of-Test Forecast report also includes a chart summarizing for the group (and across all groups within a single grade level) the count of students that falls within each of the three categories (green, yellow, and red) for: Initial Placement Motion (IPM) level, courseware level at the time the Prescriptive Scheduling report was run, and Projected Forecasted Level. Two additional columns of information are also presented, to indicate the number of additional minutes of SuccessMaker courseware usage each student would need in order to improve his or her current forecast from red to yellow or yellow to green. PDL does not recommend students spend more than 40 minutes (or two 20-minute sessions) per instructional day in one Foundations


course per subject, and therefore if a student's projection exceeds this recommendation, this information is noted on the report as well. The ultimate aim of High Stakes Management is to help schools to monitor and manage student achievement on high-stakes tests while promoting the genuine learning required by broad educational standards.
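The projection and color-coding arithmetic described in the Process discussion above can be sketched as follows (function names and sample numbers are illustrative assumptions; the thresholds of 100% and 85% of the target level follow the text):

```python
# Sketch of the Time-of-Test Forecast arithmetic: project the courseware
# level at test time, then band it as green/yellow/red against the target.
# Function names and sample numbers are illustrative assumptions.

def projected_level(current_level, daily_usage_minutes, days_to_go,
                    gain_rate_per_minute):
    """Projected Forecasted Level, assuming usage remains stable."""
    projected_time = daily_usage_minutes * days_to_go
    return current_level + projected_time * gain_rate_per_minute

def forecast_color(projected, target):
    """Probability band relative to the courseware target level."""
    if projected >= target:
        return "green"
    if projected >= 0.85 * target:
        return "yellow"
    return "red"

# A student at level 4.8, using the course 20 minutes per day with 60
# instructional days to go and a forecasted gain of 0.0005 levels/minute:
p = projected_level(4.8, 20, 60, 0.0005)  # about 5.4
print(forecast_color(p, 5.0))    # green
print(forecast_color(4.5, 5.0))  # yellow (within 15% of the target)
```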

IV.

Research Basis

Overview

SuccessMaker's research foundations go back to pioneering work on intelligent tutoring led by Professor Patrick Suppes at Stanford University beginning in the mid-1960s (Suppes, 1981). A guiding idea of this work is to develop a structured, comprehensive, and deep curriculum, to be presented to the individual learner in the manner and at the rate best suited to that learner, as determined by the learner's responses to the system. The ensuing research, led for more than two decades by Suppes' colleague, Mario Zanotti, investigated the educational efficiency and effectiveness of the system, and led to subsequent extensions and refinements of the software.

Over the past decade, while continuing to refine the learner-system interaction, we examined the statistical relationship between student performance on our courseware and student achievement on standardized tests. This broadened view of student achievement underscored the importance of classroom instruction as the principal component of the learner's educational environment, and led to the development of diagnostic reports about individual learners and groups of learners to support the classroom instructor. We have thus extended the scope of our research to our effectiveness in supporting classroom instruction, and to the educational effectiveness of various implementations of our courseware.

Models of student learning

SuccessMaker's instructional algorithms are designed to respond to and improve student performance. The basis for the original development and subsequent refinement of these algorithms is the set of learning models developed by Suppes and Zanotti.
Suppes and Zanotti developed specific mathematical models based on their theory of mastery learning, consisting of six main components: (1) content coverage and dynamic ordering of concepts, (2) distributed presentation of instruction based on strands, (3) initial adaptive placement, (4) learning models for judging mastery, (5) retention models for assigning review, and (6) decisions on tutorial intervention.

Their theory of mastery learning asserts that students achieve greater results when they:

· Spend time on the content that is deemed more important by state and local curriculum standards and testing objectives;

· Focus their efforts on concepts and skills organized into homogeneous strands (i.e., subcontent areas: one for fractions, one for word problems, and so forth) and move fluidly between the strands as appropriate;


· Are placed at an appropriate grade level to begin the coursework, based on their performance in an initial sequence of sessions;

· Can move forward at their own pace when their performance in a particular area satisfies certain criteria that take into account the fact that the learning process is dynamic;

· Receive tutoring intervention automatically, returning to the prerequisite content when the pattern of responses indicates that the student is having difficulty; and

· Are automatically assisted to retain learning, as forgetting new concepts is a natural occurrence.

These design principles have been implemented in SuccessMaker algorithms, which have been field-tested, statistically analyzed, and refined. The response data behind this evolutionary development have come from more than 30,000 students from diverse populations. With these data, we have refined our models of how students learn, revised our instructional algorithms, and thereby enhanced student learning. The algorithms adjust the learning sequence for each student, based on the student's individual pattern of responses to the instruction. Thus, while based on a common curriculum, the sequence of instruction for each student is rich and unique. Moreover, the instructional strategies and learning sequences within a course depend on the specific content area.

SuccessMaker's "intelligent tutor" or expert system, which is individualized to specific content and available to provide background and interactive help at any point, differs significantly from simpler tutorial programs (Wilburg, 1995). SuccessMaker meets several criteria for effective instruction, including accommodating the heterogeneous needs of all students in a classroom and offering adequate academic learning time on task (Slavin, 1995).
Research-based instructional programs, which can forecast individual and group achievement gains based on time on task, offer the additional educational benefit of measurable, predictable results (Suppes, 1988). SuccessMaker Foundations courses have proved particularly effective in adapting to each student's learning style, evaluating student responses, and moving the student through the curriculum at an appropriate learning pace (Quality Education Data, 1996). Finally, the on-site support and professional development components are critical to the overall success of this program (Wilburg, 1995).

Meaningful measurements, predictable progress

SuccessMaker's curriculum structures, along with mathematical models of individual progress through those structures, provide a foundation for estimating the time a student will require to make a specified courseware gain, or conversely, the gain a student can expect to make over a specified period of time. The table that follows, a statistical compilation of such estimates based on data from thousands of students, exemplifies the information we provide to instructors for scheduling student time on the courseware at the beginning of the school year, and more generally for implementing the courseware consistent with academic goals for the district, the school, or the individual student.


Table 1: Hours of MCS instruction needed to achieve a gain of 1.0 course level from specified initial levels (1)

IPM Level    25%ile    50%ile    75%ile
   0.0         14        19        25
   1.0         14        18        24
   2.0         25        33        43
   3.0         22        28        37
   4.0         20        25        33
   5.0         13        17        22
   6.0         11        13        17
   7.0          9        12        15

Once a student has been placed at the appropriate initial level in a course, and has begun to work through the curriculum, estimates such as those presented in the table above are further refined to match the individual student's rate of progress. Of course, we are continuously reviewing and improving our algorithms for forecasting student progress through the courseware. We do so by examining so-called residual trajectories, i.e., the student's actual progress through the course minus the progress we have forecasted. An example of such an analysis is presented in the figure below.

(1) These estimates are for K through 4th graders only; 5th and 6th graders take on average 10% less time than these estimates.


Figure 4: Example of observed minus predicted gains, MCS, Grade 5

[Figure: each thread plots actual minus predicted gain (vertical axis, -0.2 to 0.2 course levels) against session number (horizontal axis, 30 to 60) for an individual student. The fanning out of the threads shows that the farther out the prediction is made, the greater the prediction error. The final prediction error is about 0.1 course levels. Researchers use such diagrams to calibrate and refine forecasting algorithms.]
In this example, we examine 25 fifth-grade students enrolled in Math Concepts and Skills (MCS) who have completed 30 sessions (each of about 15 minutes' duration). At this point, we have forecast the gains they will make in sessions 31 through 60. The figure shows the students' actual progress minus their forecasted progress in those sessions. To quantify these differences, we calculate the root-mean-squared (RMS) prediction error at the completion of the 60th session. In this example the RMS prediction error turns out to be about 0.1 course levels (roughly corresponding to one-tenth of one year's growth), which is a fair representation of our courseware forecasting accuracy at the time of this writing.

Relationship between test levels and SuccessMaker levels

Over the years we have conducted a number of relationship studies, i.e., studies of the relationship between student performance on SuccessMaker and student achievement on standardized tests. Figure 3 above is excerpted from such an analysis. In these analyses we restrict attention to students who have completed a minimum number of hours in the subject courseware, and who have successfully mastered a minimum percentage of the objectives attempted. Under these conditions, we have observed consistently high statistical correlations between these two measures of the student's skill and understanding: courseware level at the time of the test, and test score. For example, we recently completed studies in Florida for the 2003-2004 school year. The results for Grade 4 reading and Grade 5 mathematics in three school districts are summarized in the tables below. (In these tables, n denotes the number of students and r denotes the correlation coefficient.)
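The RMS prediction-error calculation described above can be sketched as follows (the gain values are invented for illustration, not actual student records):

```python
import math

# Sketch of the residual-trajectory summary: compare observed gains to
# forecasted gains and report the root-mean-squared (RMS) prediction error.
# The data below are hypothetical, not actual student records.

def rms_error(observed, predicted):
    """Root-mean-squared difference between observed and predicted gains."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

observed_gains  = [0.95, 1.10, 1.02, 0.88]  # hypothetical end-of-year gains
predicted_gains = [1.00, 1.00, 1.00, 1.00]
print(round(rms_error(observed_gains, predicted_gains), 3))  # 0.083
```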


Table 2: Reader's Workshop versus FCAT 2004, Grade 4

District       n       r
A            426    0.63
B            577    0.62
C            367    0.68
Total:      1370    0.64

Table 3: Math Concepts and Skills versus FCAT 2004, Grade 5

District       n       r
A            537    0.75
B            588    0.77
C            413    0.74
Total:      1538    0.75
We see that the correlations among these three districts range from 0.62 to 0.68 in reading, and from 0.74 to 0.77 in mathematics. SuccessMaker's high correlations are remarkable, since SuccessMaker is first and foremost an instructional system. The high correlations in Tables 2 and 3 suggest that the student's SuccessMaker course level indicates the student's likely achievement on high-stakes tests. We pursued this idea by using the observed relationships to calculate courseware levels corresponding to state achievement goals, as illustrated in the tables below. Here, 2+ signifies FCAT Achievement Level 2 and above (i.e., Levels 2 through 5); similarly, 3+ signifies FCAT Achievement Level 3 and above.

Table 4: Reader's Workshop levels for FCAT Achievement Levels, Grade 4

District     2+     3+
A           4.9    5.9
B           5.0    5.9
C           4.9    5.7

Table 5: Math Concepts & Skills levels for FCAT Achievement Levels, Grade 5

District     2+     3+
A           5.4    6.7
B           5.4    6.6
C           5.2    6.2
Table 4 indicates, for example, that fourth-grade students in District A whose Reader's Workshop course level was at least 4.9 at the time of the FCAT were likely to reach FCAT Achievement Level 2 or above. It turns out, in fact, that using courseware level 4.9 correctly classified Level 2+ achievement for 83% of the 426 fourth-grade students in District A. (That is, 35 of the 426 students failed to meet course level 4.9 and failed to attain FCAT Achievement Level 2 or higher; and 317 of the 426 students met both the course level and the FCAT Achievement Level; therefore the course level 4.9 criterion coincided with the FCAT Achievement Level 2+ criterion for 352, or 83%, of the 426 students.) The other courseware levels shown in Tables 4 and 5 had similarly high rates of correct classification. This is further evidence of the program's effectiveness in monitoring progress toward achievement.

In recent years, Pearson Digital Learning has worked in partnership with school districts nationwide to collect and analyze data from more than 10,000 students on a wide range of tests. These include the Terra Nova, Stanford Achievement Test, Ninth Edition (SAT-9), Texas


Assessment of Knowledge and Skills (TAKS), Florida Comprehensive Achievement Test (FCAT), California Standards Test (CST), North Carolina End-of-Grade tests (EOG), Georgia's Criterion-Referenced Competency Test (CRCT), New Jersey's Grade Eight Proficiency Assessment (GEPA), and others. In all cases, the results showed a consistent and strong correlation between SuccessMaker levels and performance on the state and standardized tests. The consistency of these results among school districts shows that reasonable courseware target levels can be set statewide. On the other hand, the variation among districts in the same state is sufficiently large to warrant individual relationship studies (OnTarget Analyses®), with consequent adjustments to the statewide targets.

As mandated by the No Child Left Behind Act of 2001 (NCLB), Adequate Yearly Progress (AYP) measures the progress of all students toward meeting a state's academic achievement standards. The main purpose is to ensure that all children (within each state) have an equal opportunity to obtain a high-quality education and reach proficiency. Despite the varying academic standards and assessments implemented across the states, the main goal for each remains the same: to enable all students to score proficient or higher on the statewide assessment by the 2013-14 school year.

The High Stakes ManagementTM Statewide Benchmarks have been developed from the information available within each state:

1. Whenever possible, we examined the OnTarget Analyses® that Pearson Digital Learning conducted for districts within the state to determine statewide benchmarks (forming a weighted average of district-specific target levels, with the greatest weight given to districts in the center of the statistical distribution of values).

2. In states where there has been little OnTarget Analysis activity, we calibrated the state test to the state tests in the first category, based on the percentage of students in each state who achieved proficiency on the state assessment and on the most recent National Assessment of Educational Progress (NAEP) conducted by the United States Department of Education. This calibration enabled us to translate benchmark levels from the first category into benchmark levels appropriate for states in the second category.

3. In a third category (states for which no statewide assessment data were available), we used the grade level of SuccessMaker content, established over years of empirical analysis, to set benchmark levels appropriate to the month in which the state test is administered.

PDL will update the statewide benchmarks twice a year with the most recent information available from each state.
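The correct-classification figure quoted earlier for District A (83% of 426 fourth graders) can be re-checked with simple arithmetic, using the counts given in the text:

```python
# Arithmetic check of the classification rate reported for District A:
# 317 students met both the course-level and FCAT criteria, and 35 met
# neither, so the courseware-level criterion agreed with the FCAT criterion
# for 352 of 426 students.

met_both = 317
met_neither = 35
total = 426

agreement = (met_both + met_neither) / total
print(f"{met_both + met_neither} of {total} students: {agreement:.0%}")  # 352 of 426 students: 83%
```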


Forecasts of high stakes achievement Our studies over the last five years yield correlation coefficients generally ranging from 0.65 to 0.80. These high correlations and their consistency across districts enable us to offer a close mapping between the student's progress in SuccessMaker Foundations courseware and progress toward high stakes test achievement. To make this mapping more explicit and more useful to educators, we have launched the High Stakes Management Program, in which we project the student's course level at any time during the school year to the course level at the time of the standardized test, and then on to the student's high stakes test achievement.

V.

The High Stakes Management Program

Initiating High Stakes Management in a district/program

Pearson Digital Learning's High Stakes Management solution combines the findings of two areas of research: forecasting the time needed to reach specific SuccessMaker levels, and relating SuccessMaker levels to high stakes test achievement. The result is a powerful new suite of tools and services to forecast and manage student achievement.

The first step is to plan the implementation of SuccessMaker as an integral part of the district or program's educational strategy. The implementation planning process includes: identifying the current academic goals to which SuccessMaker can contribute; identifying the student population that will be served; and deciding whether to focus on mathematics, reading, or both. This information is used to schedule SuccessMaker use, plan professional development of teachers, and select reports to be used to monitor the program and identify instructional areas of need. This overall plan is documented on the High Stakes Management Program Plan template, and is further refined at the school level via the High Stakes Management School Action Plan template.

Further steps involve the collection of SuccessMaker data at the time of the statewide assessment (criterion-referenced state assessment or norm-referenced test), and the matching of these data to student test scores when they become available. Tools are available to make this step easy.

Monitoring the program

The SuccessMaker system includes standard reports to help educators manage the program during the school year. To begin with, an Initial Placement Motion (IPM) algorithm places each student at his or her appropriate level in the SuccessMaker Foundations course. Once the system has determined this level, teachers can run the Prescriptive Scheduling report to obtain forecasts of the time students will need to reach target levels in the courseware.
In addition, the system provides diagnostic reports that help teachers identify individual students and groups of students needing specific instructional help. Other reports help teachers monitor student usage and progress throughout the school year. We recommend having Pearson Digital Learning provide a High Stakes Management Interim Progress Report around mid-year as the basis for observations and recommendations for program improvement.
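The kind of time forecast produced by the Prescriptive Scheduling report can be illustrated with a simple sketch. The gain rates, level bands, and function names below are hypothetical, invented for illustration; the actual report draws on Pearson's published time/gain estimates (Implementation Notes #1 and #2), not on this table.

```python
# Hypothetical sketch of a Prescriptive-Scheduling-style time forecast:
# hours a student needs to move from a current courseware level to a
# target level, given assumed gain-per-hour rates that vary by level.

# Illustrative gain rates (courseware levels gained per hour of course
# time), keyed by level band. These numbers are invented.
GAIN_PER_HOUR = {
    (0.0, 3.0): 0.020,   # earlier levels: faster gains
    (3.0, 6.0): 0.015,
    (6.0, 9.0): 0.010,   # later levels: slower gains
}

def gain_rate(level: float) -> float:
    """Look up the assumed gain-per-hour rate for a courseware level."""
    for (lo, hi), rate in GAIN_PER_HOUR.items():
        if lo <= level < hi:
            return rate
    raise ValueError(f"no rate defined for level {level}")

def hours_to_target(current: float, target: float, step: float = 0.1) -> float:
    """Estimate hours of course time to move from current to target level,
    stepping through small level increments because the rate changes
    from one level band to the next."""
    hours = 0.0
    level = current
    while level < target:
        inc = min(step, target - level)
        hours += inc / gain_rate(level)
        level += inc
    return hours

hours = hours_to_target(2.5, 3.5)   # roughly 58 hours under these rates
```

A report built on such a forecast lets a teacher schedule enough weekly session time, early in the year, for each student to plausibly reach the target level before the test date.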


Establishing the high-stakes relationship

At sites with an established High Stakes Management program, we collect data on SuccessMaker performance levels at the time the statewide assessment is administered, and then conduct an OnTarget Analysis (relationship study) once test scores are available and matched to courseware levels. Students included in the analysis are those with at least 10 hours of time in a SuccessMaker course and acceptable performance. The minimum-time and acceptable-performance criteria ensure that the SuccessMaker level reflects student learning. In the OnTarget Analysis we determine the statistical relationship between test scores and SuccessMaker levels to establish courseware target levels for use in the following year. This statistical relationship is refined in subsequent years via OnTarget Review and subsequent OnTarget Analysis studies.

Using the High Stakes Forecast

The High Stakes Forecast (a component of the High Stakes Management Interim Progress Report) applies the statistical relationship established in the OnTarget Analysis to forecast the probability that individual students will score proficient or higher on the statewide assessment, as indicated by their SuccessMaker usage. For example, students who reach or exceed courseware target levels calculated at the 80% confidence level have at least an 80% chance of scoring proficient or higher on the statewide assessment. The basic elements of the High Stakes Forecast are shown in Figures 5 and 6. Figure 5 shows the data aggregated across a group of fourth-grade students in reading, and Figure 6 shows the information for individual students within the same group.
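As a hedged illustration of the kind of computation an OnTarget Analysis might perform (the paper does not publish the actual statistical model), the sketch below assumes a logistic relationship between a student's courseware level at test time and the probability of scoring proficient. The coefficients are invented, as if estimated from a prior year's matched level/score data; the target level at a given confidence is then just the inverse of the fitted curve.

```python
import math

# Illustrative intercept and slope of an assumed logistic fit between
# courseware level at test time and the odds of scoring proficient.
# These are invented values, not Pearson's actual coefficients.
A, B = -6.0, 1.5

def p_proficient(level: float) -> float:
    """Assumed P(proficient or higher | courseware level)."""
    return 1.0 / (1.0 + math.exp(-(A + B * level)))

def target_level(confidence: float = 0.80) -> float:
    """Courseware level at which the assumed model gives
    P(proficient) = confidence: the inverse of the logistic curve."""
    return (math.log(confidence / (1.0 - confidence)) - A) / B

# The courseware target level published for next year's cohort would be
# the level at which the model reaches the 80% confidence threshold.
level_80 = target_level(0.80)
```

Students who reach or exceed `level_80` by test time would then, under this assumed model, have at least an 80% chance of scoring proficient, which is exactly the interpretation the High Stakes Forecast attaches to its target levels.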

[Figure 5 (bar chart): number of students in Grade 4 Reading (Union-04-Taft) grouped by probability of scoring proficient or above on the high-stakes test (Low, Medium, High), based on SuccessMaker usage, shown at Placement, Current, and Time-of-Test Forecast.]

Figure 5: High Stakes Management forecasting report, at the group level.


Figure 6: High Stakes Management forecasting report, at the individual student level.

The forecasting algorithms take several sources of uncertainty into account, most notably the uncertainty in projecting from initial courseware levels to levels at the time of the test, and the variability in the relationship between courseware levels at the time of the test and actual test scores. The forecast is therefore an estimate of achievement at the point in time at which the data were collected.
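One simple way these two uncertainty sources could be combined is to average the level-to-proficiency curve over a distribution of plausible time-of-test levels, rather than evaluating it at a single projected level. The sketch below does this by Monte Carlo; the normal forecast distribution, the logistic curve, and all parameters are illustrative assumptions, not Pearson's actual algorithm.

```python
import math
import random

# Invented logistic fit between courseware level and P(proficient),
# standing in for the relationship from the OnTarget Analysis.
A, B = -6.0, 1.5

def p_proficient(level: float) -> float:
    return 1.0 / (1.0 + math.exp(-(A + B * level)))

def forecast_probability(projected_level: float,
                         forecast_sd: float,
                         trials: int = 20_000) -> float:
    """Average P(proficient) over plausible time-of-test levels,
    modeling forecast uncertainty as a normal spread around the
    projected level."""
    rng = random.Random(42)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        level = rng.gauss(projected_level, forecast_sd)
        total += p_proficient(level)
    return total / trials

# Same projected level, but a confident vs. an uncertain projection:
p_confident = forecast_probability(5.0, 0.1)
p_uncertain = forecast_probability(5.0, 1.0)
```

Note that the uncertain projection yields a probability closer to 50% than the confident one does: widening the spread of plausible levels pulls the forecast toward the middle, which is why a mid-year forecast is an estimate "at the point in time at which the data were collected" rather than a guarantee.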

VI. Summary

SuccessMaker's roots go back to the pioneering days of computer-based instruction at Stanford University in the 1960s. Over the years, the curriculum content and structure have been refined so that the student's progress through the courseware is measurable, meaningful, and predictable. Consequently, student performance in the courseware has proven to be statistically correlated with student achievement on state assessments, so much so that school achievement on these assessments can be forecast from student performance in the courseware during the school year. The advantages of such forecasting are to reduce the uncertainty of test outcomes, to make school achievement more manageable, and thereby to enable educators, whose top priority is genuine student learning, to meet new accountability demands.


VII. References

Florida Department of Education. (2001). FCAT briefing book (Publication Number 322020501-200-BL). Available at http://www.firn.edu/doe/sas/fcat/pdf/fcat_brief.pdf.

Linn, R. (1998). Assessments and accountability (CSE Technical Report 490). Los Angeles, CA: CRESST.

Pearson Research Group. (2001). Math Concepts and Skills time/gain estimates by initial placement motion (IPM) levels (Implementation Note #1). Mesa, AZ: Pearson Digital Learning.

Pearson Research Group. (2001). Foundations reading time/gain estimates (Implementation Note #2). Mesa, AZ: Pearson Digital Learning.

Quality Education Data. (1996). Educational software effectiveness study. Denver, CO: Author.

Slavin, R. (1987). A theory of school and classroom organization. Educational Psychologist, 22, 89-109.

Suppes, P. (1981). University-level computer-assisted instruction at Stanford: 1968-1980. Stanford, CA: Stanford University.

Suppes, P. (1988). Probable relation between functional gain and time needed for Math Concepts and Skills and Reader's Workshop. Technical Notes on Curriculum and Evaluation, 1, 1-6. Sunnyvale, CA: Computer Curriculum Corporation.

Suppes, P., & Zanotti, M. (1996). Foundations of probability with applications: Selected papers, 1974-1995. Cambridge, MA: Cambridge University Press.

Wilburg, K. (1995). Integrated learning systems: What does the research say? The Computing Teacher, 2, 7-10.

