
Effective Technology Planning for the Technology Literacy Challenge

Performance Assessment

What gets measured, gets done.

The most important component of a quality technology plan is the establishment of a process by which the education institution regularly gathers and analyzes data to guide planning and decision-making related to the integration of technology into education.

Performance measurement, or assessment, has increasingly become a requirement for government agencies and nonprofit organizations to demonstrate that their programs are achieving their goals(1). Performance assessment shifts attention from what was "done" to what was "accomplished": from how many computers are in the schools to how those computers are being used to improve student performance and school operations.

In the traditional educational environment, assessment was closely tied to a judgment of success or failure, with gradations in between (i.e., A, B, C, D, F). In a standards-based instructional environment, performance assessment is a tool that helps the student and teacher determine where the student stands in relation to the performance objective and how effective particular teaching and learning strategies are. Ongoing performance assessment of this latter sort is a critical component of technology planning and implementation. Performance assessment is used to determine progress toward goals and to evaluate the effectiveness of programs, not to render a judgment of success or failure.

This standard performance assessment model identifies inputs, activities, outputs, and outcomes, including intermediate and end outcomes(2):

Inputs are the resources, including funds, time, and people, that are allocated to the technology-related activities.

Activities are the specific program activities or tasks that are undertaken. The installation of a computer network, acquisition of equipment, workshops, planning for the integration of technology into the curriculum, and the like are all activities.

Outputs are measurements of the direct products of the program activities. Outputs are usually measured in terms of volume or work completed. Outputs include the number of staff members who participated in staff training opportunities, the status of the technology infrastructure, the development of curriculum goals related to technology literacy, and the like.

Outcomes are the consequences of what the program did that had an impact on the intended recipients. Generally outcomes are differentiated as intermediate and end outcomes. Intermediate outcomes are expected to lead to the ends desired, but are not themselves ends. End outcomes are the final desired results of the program's work.

Intermediate outcomes are identified through experience, research, or logical analysis as necessary prerequisites to achieving the end outcomes. Intermediate outcomes focus on what educators do with technology that is expected to lead to improved student achievement and school performance. For example, staff competencies in the use of technology, the amount and manner of use of technology for instructional and administrative activities, and the adequacy and robustness of the district's technology infrastructure and technology support are all intermediate outcomes that can reasonably be predicted to lead to improvements in student achievement and school performance. Intermediate outcomes can also address the quality of service delivery; for example, participant evaluation of professional development activities will provide information about the quality of those activities.

Measurement of end outcomes related to the integration of technology into schools is very difficult outside of a traditional research environment. The two key end outcomes anticipated from the integration of technology in our schools are improvements in student achievement and school performance. It is difficult to identify measurable advancements in these end outcomes that are directly attributable to investments made in technology, because there are simply too many variables to consider to effectively ascertain any kind of "causal" relationship between student achievement or school performance and technology. In the future, as we expand our understanding of the use of technology in education and expand the use of technology as a tool to conduct performance measurement, we should be able to do a better job of measuring end outcomes.
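The chain from inputs through end outcomes described above can be represented as a simple data model. The following is a minimal sketch in Python; the class and field names, and the example values, are illustrative assumptions rather than anything prescribed by the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Sketch of the standard performance assessment model.

    Each field holds indicators a district might track; the
    example entries below are hypothetical, not from the plan.
    """
    inputs: list[str] = field(default_factory=list)               # funds, time, people
    activities: list[str] = field(default_factory=list)           # tasks undertaken
    outputs: list[str] = field(default_factory=list)              # volume of work completed
    intermediate_outcomes: list[str] = field(default_factory=list)  # prerequisites to ends
    end_outcomes: list[str] = field(default_factory=list)         # final desired results

model = LogicModel(
    inputs=["district technology funds", "staff time"],
    activities=["install computer network", "run staff workshops"],
    outputs=["42 staff members completed training"],
    intermediate_outcomes=["staff competency in technology use"],
    end_outcomes=["improved student achievement", "improved school performance"],
)
print(len(model.end_outcomes))  # two key end outcomes
```

Keeping each tier of the model as a separate field makes the distinction the text draws, between what was done (activities, outputs) and what was accomplished (outcomes), explicit in the data itself.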

The standard performance assessment model yields information about "what" has occurred. It generally does not address questions of "how" or "why" certain results occurred(3). Given the current stage of the integration of technology into our schools, it is important to consider the "how" and "why" questions. "How" and "why" questions are within the realm of a more traditional program evaluation approach, which is very compatible with the performance assessment model. An in-depth evaluation of performance measurements should help answer important questions about the kinds of investments of resources, personnel, and time and the kinds of activities that are necessary to accomplish the successful integration of technology into education.

As the district's network system becomes operational, the network itself will greatly facilitate gathering data to support the performance assessment, for example, the use of web-based forms for gathering survey data. Assessment strategies involving the use of network technologies can be expected to mature over the next decade.
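As an example of how network-gathered data might feed the assessment, responses collected through a web-based survey form could be tallied with a few lines of code. This is a hedged sketch: the survey questions and field names are hypothetical, not drawn from any actual district instrument:

```python
from collections import Counter

# Hypothetical responses submitted through a web-based staff survey
responses = [
    {"uses_technology_daily": "yes", "confidence": "high"},
    {"uses_technology_daily": "no",  "confidence": "low"},
    {"uses_technology_daily": "yes", "confidence": "medium"},
]

# Distribution of self-reported confidence (an intermediate outcome indicator)
confidence_counts = Counter(r["confidence"] for r in responses)

# Share of staff reporting daily technology use
pct_daily = sum(r["uses_technology_daily"] == "yes" for r in responses) / len(responses)

print(confidence_counts)
print(f"{pct_daily:.0%} of respondents report daily technology use")
```

Because the data arrive in structured form, tallies like these can be regenerated each time the survey is run, supporting the regular measurement cycle the plan calls for.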

The most important step in a performance assessment model is the use of the data to inform future decision-making. Performance is measured not to sit in judgment, but to provide effective information that will lead to future success.

(1) Newcomer, K.E. "Using performance measurement to improve programs." In Newcomer, K.E. (ed.), Using Performance Measurement to Improve Public and Nonprofit Programs. San Francisco: Jossey-Bass, 1997.

(2) Wholey, J.S., Hatry, H.P., and Newcomer, K.E. (eds.), Handbook of Practical Program Evaluation. San Francisco: Jossey-Bass, 1994.

(3) Newcomer, K.E. "Using performance measurement to improve programs." In Newcomer, K.E. (ed.), Using Performance Measurement to Improve Public and Nonprofit Programs. San Francisco: Jossey-Bass, 1997.
