4 Test Administration

Chapter 4 of the Dynamic Learning Maps® (DLM®) Alternate Assessment System 2014–2015 Technical Manual—Integrated Model (Dynamic Learning Maps Consortium, 2016a) describes general test administration and monitoring procedures. This chapter describes updated procedures and data collected in 2020–2021, including the DLM policy on virtual test administration, a summary of administration time, Personal Needs and Preferences Profile selections, and teacher survey responses regarding user experience, remote assessment administration, and accessibility.

Overall, administration features remained consistent with the 2019–2020 intended implementation, including the use of instructionally embedded assessment in both the fall and spring windows and the availability of accessibility supports.

For a complete description of test administration for DLM assessments, including information on available resources and materials and information on monitoring assessment administration, see the 2014–2015 Technical Manual—Integrated Model (Dynamic Learning Maps Consortium, 2016a).

4.1 Overview of Key Administration Features

This section describes DLM test administration for 2020–2021. For a complete description of key administration features, including information on assessment delivery, Kite® Student Portal, and linkage level selection, see Chapter 4 of the 2014–2015 Technical Manual—Integrated Model (Dynamic Learning Maps Consortium, 2016a). Additional information about changes in administration, including the shift to two instructionally embedded windows, can also be found in the Test Administration Manual 2020–2021 (DLM Consortium, 2021) and the Educator Portal User Guide (Dynamic Learning Maps Consortium, 2021c).

4.1.1 Test Windows

The fall instructionally embedded window opened on September 14, 2020, and closed on December 18, 2020. The spring instructionally embedded window opened on February 1, 2021, and closed on July 2, 2021. Students were expected to meet blueprint coverage requirements in both the fall and spring windows.

4.1.2 DLM Statement on Virtual Assessment Administration

In October 2020, DLM staff released a policy document stating that DLM assessments must be administered in person by a qualified test administrator, not virtually (e.g., over Zoom, Microsoft Teams, or Google Hangouts, with the test administrator not physically present during administration). This policy was supported by a resolution from the DLM Technical Advisory Committee, which agreed that a virtual administration would pose too many risks (e.g., to students’ ability to access the content, test security, and the validity of score inferences). The policy does not require an in-school administration. For example, a test administrator could travel to the student’s house, or a separate off-site testing facility could be used.

4.2 Administration Evidence

This section describes evidence collected during the 2020–2021 operational administration of the DLM alternate assessment. The categories of evidence include data relating to administration time, device usage, and the use of instructionally embedded assessments.

4.2.1 Administration Time

Estimated administration time varies by student and subject. Total time during the instructionally embedded window varies depending on the number of Essential Elements (EEs) a teacher chooses and the number of times a student is assessed on each EE. Testlets can be administered separately across multiple testing sessions as long as they are all completed within the testing window. The estimated total testing time is 60–75 minutes per student in ELA and 35–50 minutes in mathematics in each of the fall and spring windows.

The published estimated testing time per testlet is around 5–10 minutes in mathematics, 10–15 minutes in reading, and 10–20 minutes for writing. Published estimates are slightly longer than the anticipated actual testing time because they assume teachers need time for setup. Actual testing time per testlet varies depending on each student’s unique characteristics.

Kite Student Portal captured start and end dates and time stamps for every testlet. To calculate the actual testing time per testlet, the difference between these start and end times was calculated for each completed testlet. Table 4.1 summarizes the distribution of test times per testlet. Most testlets took around 6 minutes or less to complete, with mathematics testlets generally taking less time than ELA testlets. Testlets time out after 90 minutes.
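
As an illustration of this calculation, the following is a minimal sketch in Python, assuming a hypothetical testlet-level extract with start_time, end_time, subject, and grade fields (the file name and column names are placeholders, not the actual Kite export layout):

```python
# Minimal sketch: summarizing per-testlet response times from start/end time stamps.
# The file name and column names are hypothetical placeholders.
import pandas as pd

testlets = pd.read_csv("testlet_log.csv", parse_dates=["start_time", "end_time"])

# Response time in minutes for each completed testlet
testlets["minutes"] = (
    testlets["end_time"] - testlets["start_time"]
).dt.total_seconds() / 60

# Distribution of response times by subject and grade (cf. Table 4.1)
summary = (
    testlets.groupby(["subject", "grade"])["minutes"]
    .describe(percentiles=[0.25, 0.50, 0.75])
    .rename(columns={"min": "Min", "25%": "25Q", "50%": "Median",
                     "mean": "Mean", "75%": "75Q", "max": "Max"})
)
summary["IQR"] = summary["75Q"] - summary["25Q"]
print(summary[["Min", "Median", "Mean", "Max", "25Q", "75Q", "IQR"]].round(2))
```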

Table 4.1: Distribution of Response Times per Testlet in Minutes
Grade Min Median Mean Max 25Q 75Q IQR
English language arts
3 0.13 3.88 4.84 88.00 2.35 6.05 3.70
4 0.22 4.03 5.06 86.20 2.63 6.25 3.62
5 0.15 3.92 4.91 87.25 2.48 6.08 3.60
6 0.18 3.98 5.04 86.35 2.57 6.27 3.70
7 0.12 4.68 5.73 88.57 2.88 7.10 4.22
8 0.20 4.28 5.21 82.47 2.77 6.52 3.75
9 0.25 4.92 6.16 84.72 3.03 7.42 4.38
10 0.20 4.70 5.98 84.58 2.92 7.40 4.48
11 0.18 4.82 6.15 85.53 2.95 7.48 4.53
12 0.28 3.85 5.28 61.90 2.08 6.68 4.60
Mathematics
3 0.10 1.83 2.74 88.67 1.05 3.27 2.22
4 0.10 1.67 2.42 80.90 0.98 2.85 1.87
5 0.07 1.72 2.37 78.72 1.05 2.85 1.80
6 0.10 1.87 2.61 86.35 1.13 3.12 1.98
7 0.08 1.47 2.25 84.20 0.87 2.62 1.75
8 0.08 1.62 2.34 88.02 0.97 2.82 1.85
9 0.12 1.83 2.62 66.83 1.02 3.20 2.18
10 0.08 1.82 2.52 73.28 1.05 3.05 2.00
11 0.02 1.78 2.72 73.92 1.05 3.23 2.18
12 0.12 1.40 2.54 89.30 0.50 3.08 2.58
Note. Min = minimum; Max = maximum; 25Q = lower quartile; 75Q = upper quartile; IQR = interquartile range.

4.2.2 Device Usage

Testlets may be administered on a variety of platforms. In addition to start and end times, Kite Student Portal captured the operating system used for each testlet completed in 2020–2021. Although these data do not identify the specific devices used to complete each testlet (e.g., SMART Board, switch system), they do provide high-level information about how students access assessment content. For example, we can identify how often an iPad is used relative to a Chromebook or traditional PC. Figure 4.1 shows the number of testlets completed on each operating system, by subject and linkage level. Overall, 37% of testlets were completed on a PC, 30% on a Chromebook, 24% on an iPad, and 8% on a Mac. In general, PCs were the most common platform at the lower linkage levels, whereas PCs and Chromebooks were used at similar rates at the higher linkage levels. This pattern may reflect that testlets at the lower linkage levels are typically teacher-administered, whereas testlets at the higher linkage levels are typically computer-administered. Thus, these results may indicate that teachers and students tend to use different devices for accessing assessment content.
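
As a brief illustration, the percentages above and the breakdown shown in Figure 4.1 could be tallied from the same testlet-level extract; the sketch below assumes hypothetical operating_system, subject, and linkage_level columns:

```python
# Minimal sketch: tallying completed testlets by operating system (cf. Figure 4.1).
# The file name and column names are hypothetical placeholders.
import pandas as pd

testlets = pd.read_csv("testlet_log.csv")

# Overall share of completed testlets on each operating system
overall = testlets["operating_system"].value_counts(normalize=True).mul(100).round(1)
print(overall)

# Counts by subject and linkage level, one column per operating system
by_level = pd.crosstab(
    [testlets["subject"], testlets["linkage_level"]],
    testlets["operating_system"],
)
print(by_level)
```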

Figure 4.1: Distribution of Devices Used for Completed Testlets

4.2.3 Instructionally Embedded Administration

As part of instructionally embedded administration, teachers select the linkage level to be assessed for each selected EE. To support teachers in their decision making, the system provides a recommended linkage level. In the fall instructionally embedded window, the recommended linkage level for a student is determined by the First Contact complexity band and is the same for all EEs in a subject. The correspondence between the First Contact complexity bands and the recommended linkage level is shown in Table 4.2.

Table 4.2: Correspondence of Complexity Bands and Recommended Linkage Level
First Contact complexity band Linkage level
Foundational band Initial Precursor
Band 1 Distal Precursor
Band 2 Proximal Precursor
Band 3 Target

During the spring instructionally embedded window, the recommended linkage level is determined using rules similar to the adaptive routing algorithm that was previously used for system-assigned testlets. The rules are as follows (a brief sketch implementing them appears after the list).

  • The spring recommended linkage level was one linkage level higher than the linkage level assessed in fall if the student responded correctly to at least 80% of items. If the assessed fall linkage level was at the highest linkage level (i.e., Successor), the recommendation remained at that level.
  • The spring recommended linkage level was one linkage level lower than the linkage level assessed in fall if the student responded correctly to less than 35% of items. If the assessed fall linkage level was at the lowest linkage level (i.e., Initial Precursor), the recommendation remained at that level.
  • The spring recommended linkage level was the same as the linkage level assessed during the fall window if the student responded correctly to at least 35% but less than 80% of items.
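
The following is a minimal sketch of these routing rules. The linkage level names and order follow the DLM structure, but the function itself is an illustrative assumption rather than the operational routing code.

```python
# Minimal sketch of the spring recommended linkage level rules described above.
# This is an illustration, not the operational routing implementation.
LEVELS = [
    "Initial Precursor",
    "Distal Precursor",
    "Proximal Precursor",
    "Target",
    "Successor",
]

def spring_recommendation(fall_level: str, percent_correct: float) -> str:
    """Return the spring recommended linkage level given the linkage level
    assessed in fall and the percentage of fall items answered correctly."""
    i = LEVELS.index(fall_level)
    if percent_correct >= 80:
        i = min(i + 1, len(LEVELS) - 1)  # move up one level, capped at Successor
    elif percent_correct < 35:
        i = max(i - 1, 0)                # move down one level, capped at Initial Precursor
    # otherwise (at least 35% but less than 80% correct) stay at the fall level
    return LEVELS[i]

# Example: a student assessed at Proximal Precursor in fall who answered 85% of items correctly
print(spring_recommendation("Proximal Precursor", 85))  # -> Target
```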

Consistent with the fall instructionally embedded administration, teachers select a linkage level for each EE during the spring instructionally embedded administration. Data from the fall instructionally embedded window indicated that teachers accepted the recommended level 60% (n = 72,318) of the time. Similarly, in the spring instructionally embedded window, teachers accepted the recommended level 61% (n = 102,231) of the time. When teachers adjusted the level from the system recommendation, they typically selected the linkage level directly below the recommended level, which was observed for 32% (n = 38,565) of testlets administered in the fall and 27% (n = 44,648) of testlets administered in the spring.

4.2.4 Administration Incidents

As in all previous years, testlet assignment during the 2020–2021 operational assessment administration was monitored for evidence that students were correctly assigned to testlets. Administration incidents that have the potential to affect scoring are reported to state education agencies in a supplemental Incident File. No incidents were observed during the 2020–2021 operational assessment windows. Assignment of testlets will continue to be monitored in subsequent years to track any potential incidents and report them to state education agencies.

4.3 Implementation Evidence

This section describes evidence collected during the 2020–2021 operational implementation of the DLM alternate assessment. The categories of evidence include a description of Kite system updates and survey data relating to user experience, remote assessment administration, and accessibility.

4.3.1 Kite System Updates

Several updates were made to the Kite system during 2020–2021 to improve its functionality. Text was added to the Instruction and Assessment Planner to explain that the EEs for high school mathematics in non-enrolled grades are optional and do not count toward blueprint requirements. A new Student Roster and First Contact Survey Status extract was created to provide testing readiness information in one place; the extract includes the current grade in which the student is enrolled, all subjects in which the student is rostered, and the student’s First Contact survey status and completion date. Most of the pages in Educator Portal that include tables were reorganized to take better advantage of horizontal space, and all tables in Educator Portal were updated to a standard user interface. The user interface was also updated so that users first enter roster information (roster name and subject) and then roster location (state, district, and school). Lastly, the voice generator used to create the spoken audio for text-to-speech on all testlets was updated to a more lifelike voice at a standard reading speed.

4.3.2 User Experience With the DLM System

User experience with the 2020–2021 assessments was evaluated through the spring 2021 survey, which was disseminated to all teachers who had a student rostered for DLM assessments. As in previous years, the survey was distributed to teachers in Kite Student Portal, where students completed assessments. Each student was assigned a survey for their teacher to complete. The survey consisted of four blocks. Blocks A and C, which provide information used for the validity argument and information about teacher background, respectively, are administered in every survey. Block B is spiraled, and teachers are asked about one of the following topics per survey: accessibility, relationship to ELA instruction, relationship to mathematics instruction, or relationship to science instruction. Block N was added in 2021 to gather information about educational context during the COVID-19 pandemic.

A total of 2,579 teachers responded to the survey (with a response rate of 59%) about 5,933 students’ experiences.

Participating teachers responded to surveys for a median of 2 students. Teachers reported having an average of 12 years of experience in ELA, 12 years in mathematics, and 10 years with students with significant cognitive disabilities. The median response to the number of years of experience in ELA was 9 years, the median experience in mathematics was 9 years, and the median experience with students with significant cognitive disabilities was 7 years. Approximately 24% indicated they had experience administering the DLM assessment in all seven operational years.

The following sections summarize user experience with the system, remote assessment administration, and accessibility. Additional survey results are summarized in Chapter 9 (Validity Studies). Survey results pertaining to the educational experience of students during the COVID-19 pandemic are described by Accessible Teaching, Learning, and Assessment Systems (2021). For responses to prior years’ surveys, see Chapter 4 and Chapter 9 in the respective technical manuals (Dynamic Learning Maps Consortium, 2018, 2019, 2020).

4.3.2.1 Educator Experience

Survey respondents were asked to reflect on their own experience with the assessments as well as their comfort level and knowledge administering them. Most of the questions required teachers to respond on a 4-point scale: strongly disagree, disagree, agree, or strongly agree. Responses are summarized in Table 4.3.

Nearly all teachers (96%) agreed or strongly agreed that they were confident administering DLM testlets. Most respondents (89%) agreed or strongly agreed that the required test administrator training prepared them for their responsibilities as test administrators. Most teachers also responded that they had access to curriculum aligned with the content that was measured by the assessments (89%) and that they used the manuals and the Educator Resources page (93%).

Table 4.3: Teacher Responses Regarding Test Administration
Statement   SD (n, %)   D (n, %)   A (n, %)   SA (n, %)   A+SA (n, %)
I was confident in my ability to deliver DLM testlets 24 1.0 72 3.1 918 39.3 1,320 56.6 2,238 95.9
Required test administrator training prepared me for the responsibilities of a test administrator 66 2.8 180 7.7 1,143 49.1 938 40.3 2,081 89.4
I have access to curriculum aligned with the content measured by DLM assessments 66 2.8 200 8.6 1,191 51.2 871 37.4 2,062 88.6
I used manuals and/or the DLM Educator Resource Page materials 35 1.5 135 5.8 1,257 54.0 900 38.7 2,157 92.7
Note. SD = strongly disagree; D = disagree; A = agree; SA = strongly agree; A+SA = agree and strongly agree.

4.3.3 Remote Assessment Administration

Two questions in Block N of the survey asked test administrators where their student took assessments this year and, if the student took any tests remotely (i.e., at a location other than school but with a trained test administrator present), what the remote testing experience was like. As a reminder, the DLM policy on virtual assessment administration required an in-person test administrator, but that administration was not required to occur in school. Table 4.4 summarizes teacher responses regarding the setting of test administration. Most teachers (95%) responded that DLM assessments were administered to the student at school. Table 4.5 summarizes teachers’ responses about the experience of students who took DLM assessments remotely. Of the students who took assessments remotely, very few (less than 20% of those tested remotely, or about 2% of all students) used different accessibility supports than they would normally have access to, experienced technology difficulties, had to respond in a less preferred response mode, and/or had someone other than the teacher administer the assessments remotely (e.g., a paraeducator or other qualified test administrator).

Table 4.4: Teacher Responses Regarding Administration Setting
Setting n %
At school 5,559 94.8
At home    108   1.8
Testing facility not at school      53   0.9
Other      37   0.6
Not applicable    109   1.9

Table 4.5: Teacher Responses Regarding Circumstances Applicable to Remote Testing
Circumstance Yes (%) No (%) Unknown (%)
Student used different accessibility supports when testing remotely than at school 161 (15.6) 771 (74.9) 98 (9.5)
Student experienced technology difficulties during assessments taken remotely   98   (8.8) 929 (83.7) 83 (7.5)
Student had to respond in a less preferred response mode because of remote arrangements   99   (9.1) 902 (82.9) 87 (8.0)
Someone other than the teacher administered the assessments remotely   48   (4.3) 998 (88.5) 82 (7.3)

4.3.4 Accessibility

Accessibility supports provided in 2020–2021 were the same as those available in previous years. The DLM Accessibility Manual (Dynamic Learning Maps Consortium, 2021b) distinguishes among accessibility supports that are provided in Kite Student Portal via the Personal Needs and Preferences Profile, supports that require additional tools or materials, and supports that are provided by the test administrator outside the system.

Table 4.6 shows selection rates for the three categories of accessibility supports. The most commonly selected supports were human read aloud, test administrator enters responses for student, and individualized manipulatives. For a complete description of the available accessibility supports, see Chapter 4 of the 2014–2015 Technical Manual—Integrated Model (Dynamic Learning Maps Consortium, 2016a).

Table 4.6: Accessibility Supports Selected for Students (N = 12,824)
Support n %
Supports provided in Kite Student Portal
Spoken audio 1,694 13.2
Magnification 1,534 12.0
Color contrast 1,195 9.3
Overlay color 496 3.9
Invert color choice 341 2.7
Supports requiring additional tools/materials
Individualized manipulatives 5,101 39.8
Calculator 2,903 22.6
Single-switch system 435 3.4
Alternate form - visual impairment 404 3.2
Two-switch system 175 1.4
Uncontracted braille 5 0.0
Supports provided outside the system
Human read aloud 11,087 86.5
Test administrator enters responses for student 7,908 61.7
Partner assisted scanning 793 6.2
Sign interpretation of text 222 1.7
Language translation of text 104 0.8

Teachers were asked whether the student was able to effectively use available accessibility supports and whether the accessibility supports were similar to the ones used for instruction. The majority of teachers agreed that students were able to effectively use accessibility supports (92%).

Of the teachers who reported that their student was unable to effectively use the accessibility supports (8%), the most commonly reported reason was that the student could not provide a response even with the support provided (63%). These data are shown in Table 4.7.

Table 4.7: Reason Student Was Unable to Effectively Use Available Accessibility Supports
Reason n %
Even with support, the student could not provide a response 139 63.2
The student needed a support that was not available or allowed 62 28.2
The student refused the support during testing 39 17.7
The student was unfamiliar with the support 30 13.6
There was a technology problem (e.g., Kite display, AAC device) 8 3.6

4.3.5 Data Forensics Monitoring

During the 2020–2021 administration, two data forensics monitoring reports were made available in Educator Portal. The first report includes information about testlets completed outside of normal business hours. The second report includes information about testlets that were completed within a short period of time.

The Testing Outside of Hours report allows state education agencies to specify the days, and the hours within each day, during which testlets are expected to be completed. Each state can select its own days and hours for setting expectations. For example, a state could elect to flag any testlet completed outside of Monday through Friday from 6:00 a.m. to 5:00 p.m. local time. The Testing Outside of Hours report then identifies students who completed assessments outside of the defined expected hours. Overall, 4,284 (1%) of English language arts and mathematics testlets were completed outside of the expected hours, by 2,507 (21%) students.
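
A minimal sketch of this flagging logic, assuming a hypothetical end_time field and the example expectation of Monday through Friday, 6:00 a.m. to 5:00 p.m., is shown below:

```python
# Minimal sketch: flagging testlets completed outside state-defined expected hours.
# The file name, column name, and expected-hours settings are hypothetical placeholders.
import pandas as pd

testlets = pd.read_csv("testlet_log.csv", parse_dates=["end_time"])

EXPECTED_DAYS = {0, 1, 2, 3, 4}  # Monday (0) through Friday (4)
START_HOUR, END_HOUR = 6, 17     # 6:00 a.m. to 5:00 p.m. local time

outside = (
    ~testlets["end_time"].dt.dayofweek.isin(EXPECTED_DAYS)
    | (testlets["end_time"].dt.hour < START_HOUR)
    | (testlets["end_time"].dt.hour >= END_HOUR)
)
print(f"{outside.sum()} testlets ({outside.mean():.1%}) completed outside expected hours")
```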

The Testing Completed in a Short Period of Time report identifies students who completed a testlet within an unexpectedly short period of time. The threshold for inclusion in the report was a testlet completion time of less than 30 seconds in mathematics and less than 60 seconds in ELA. The report is intended for state users to identify potentially aberrant response patterns; however, there are many legitimate reasons a testlet may be submitted in a short time period. Overall, 14,172 (5%) of testlets were completed in a short period of time, by 3,243 (27%) students.
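
A minimal sketch of the short-duration flag under the same hypothetical extract, applying the 30-second mathematics and 60-second ELA thresholds:

```python
# Minimal sketch: flagging testlets completed in an unexpectedly short period of time.
# The file name, column names, and subject labels are hypothetical placeholders.
import pandas as pd

testlets = pd.read_csv("testlet_log.csv", parse_dates=["start_time", "end_time"])
testlets["seconds"] = (testlets["end_time"] - testlets["start_time"]).dt.total_seconds()

# Threshold in seconds by subject: 30 for mathematics, 60 for English language arts
thresholds = {"Mathematics": 30, "English language arts": 60}
testlets["threshold"] = testlets["subject"].map(thresholds)

flagged = testlets[testlets["seconds"] < testlets["threshold"]]
print(f"{len(flagged)} testlets flagged as completed in a short period of time")
```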

4.4 Conclusion

During the 2020–2021 academic year, the DLM system was available during the fall and spring instructionally embedded windows. Administration evidence was collected in the form of administration time data, device usage data, and instructionally embedded administration data. Implementation evidence was collected in the form of teacher survey responses regarding user experience, remote assessment administration, and accessibility, as well as Personal Needs and Preferences Profile selections. New data forensics monitoring reports were made available to state education agencies in Educator Portal.