
Step 4: Identify & Gather Evidence & Interpret Results

Collecting evidence is a relatively easy process, especially compared to the intentional work involved in selecting the assessments. The basic steps are to determine when the evidence will be available and who will be responsible for collecting it. In most cases, the faculty member teaching the course is in the best position to gather the assessments that will be reviewed.

There are, however, two considerations to keep in mind. First, it is recommended that the faculty member submit ungraded work to the program evaluator(s). This practice helps maintain objectivity and reduces the likelihood that the reviewer(s) will be influenced by the grades and feedback already provided by the professor of the course. Second, consider the format of the assessment: in most cases, electronic copies are preferable to paper.

Important: Documentation Requirement

During Step 4 of the VTSU program assessment process, templates are used to develop the Program Outcomes Assessment Matrix and the Yearly Learning Outcomes Assessment Report (Parts II and III).

Direct Evidence

Direct measures include signature assignment submissions and other student work that directly demonstrates whether program-level student learning outcomes are being addressed in the curriculum and to what extent students are acquiring the skills, abilities, and knowledge expected of graduates of the program.

Effective assessment must include direct evidence that is tangible, visible, and measurable, such as:

  • Capstone projects and student portfolios evaluated using a rubric
  • Comprehensive examinations
  • Proficiency exams
  • Performance in licensure exams
  • Research projects evaluated using a rubric
  • Major papers evaluated using a rubric
  • Juried performance evaluations
  • National or standardized exam scores
  • Pre- and post-test measures

Indirect Evidence

Some program-level student learning outcomes, however, are difficult to evaluate with direct evidence, such as those involving attitudes and beliefs. In those cases, evidence gained through surveys, student reflection exercises, and other indirect measures can be advantageous. In addition, when indirect measures are used in conjunction with direct evidence, educators leading a program assessment gain a more comprehensive view of student learning than they would by relying exclusively on direct measurements.

The following are examples of commonly used indirect evidence:

  • Student surveys
  • Focus groups
  • Employer perception surveys
  • Alumni perception surveys
  • Exit interviews
  • Student self-evaluations

Once the evidence has been collected, the next step is to organize the data acquired through tools such as rubrics, exams, and surveys into useful information that demonstrates the effectiveness of the curriculum and guides future decision-making. Although evaluating qualitative data is important, effective program assessment depends heavily on the compilation and interpretation of quantitative data.
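As a minimal sketch of what this step can look like, the following Python snippet aggregates rubric scores from a spreadsheet export into per-criterion summary statistics. The file name, column names, and four-point scale are illustrative assumptions, not part of the VTSU templates.

```python
import csv
from statistics import mean

# Hypothetical CSV of rubric scores: one row per student artifact,
# with a 1-4 rubric score for each outcome-aligned criterion.
with open("rubric_scores.csv", newline="") as f:
    rows = list(csv.DictReader(f))

criteria = [name for name in rows[0] if name != "student_id"]

for criterion in criteria:
    scores = [int(row[criterion]) for row in rows]
    proficient = sum(1 for s in scores if s >= 3)  # assume 3 = "proficient"
    print(f"{criterion}: mean {mean(scores):.2f}, "
          f"{proficient / len(scores):.0%} scoring proficient or above")
```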

Spreadsheet applications, such as Microsoft Excel, can be useful for automating calculations and transforming raw data into more easily digestible tables, charts, and graphs. Beyond Excel, some colleges and universities gather data, often through student surveys, and hold it in larger database systems that offer additional reporting features.
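For those who prefer scripting to a spreadsheet, a comparable summary chart could be produced with pandas and matplotlib; the outcome names and mean scores below are hypothetical placeholders used only to show the shape of the workflow.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: mean rubric score per outcome, by assessment cycle.
df = pd.DataFrame({
    "outcome": ["Written Communication", "Quantitative Reasoning", "Critical Thinking"],
    "2022-23": [2.8, 3.1, 2.9],
    "2023-24": [3.0, 3.2, 3.1],
})

# Grouped bar chart comparing the two assessment cycles.
df.set_index("outcome").plot(kind="bar", ylim=(0, 4), rot=15,
                             title="Mean rubric score by outcome (4-point scale)")
plt.ylabel("Mean score")
plt.tight_layout()
plt.savefig("outcome_summary.png")
```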

It is important to note that many educational institutions, including Vermont State University, also employ professionals with backgrounds in data science and analysis who can provide valuable feedback and guidance. With this in mind, VTSU faculty members involved in the program assessment process are encouraged to consult with the Institutional Research department if they feel it would be beneficial.

Once the data has been organized, the next step is to share the process followed and the results it yielded with other stakeholders, such as departmental colleagues and key staff members. Involving multiple perspectives can substantially improve both the assessment process and the interpretation of the results.

After a common understanding of the assessment process has been established and everyone involved has had an opportunity to review the results, the next step is to interpret the results with a focus on a single question: are the results good enough?

As described by Massa and Kasimatis in Meaningful and Manageable Program Assessment: A How-To Guide for Higher Education Faculty, there are two recommended approaches for answering this question.

Established Criterion

The first approach is to evaluate the results in relation to a criterion for success. Essentially, this is an absolute standard that establishes a minimum acceptable level of performance. Ideally, the criterion is established at the beginning of the assessment process.
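A minimal sketch of a criterion check, assuming a four-point rubric and a hypothetical criterion that at least 80% of students score 3 or higher:

```python
# Hypothetical rubric scores for one outcome (4-point scale).
scores = [4, 3, 2, 3, 4, 3, 2, 4, 3, 3]

# Criterion for success, ideally set before the assessment begins:
# at least 80% of students score 3 ("proficient") or higher.
CRITERION = 0.80
share_meeting = sum(s >= 3 for s in scores) / len(scores)

print(f"{share_meeting:.0%} of students met the standard; "
      f"criterion {'met' if share_meeting >= CRITERION else 'not met'}")
```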

Comparator Group

Another approach is to establish a comparator group within the institution. For example, comparing students at different points in the program, such as graduating seniors and first-year students, helps show how much the curriculum has contributed to growth in the relevant skills, abilities, and knowledge.
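A minimal sketch of such a comparison, using hypothetical rubric scores for the two groups:

```python
from statistics import mean

# Hypothetical rubric scores on the same outcome for two groups.
first_year = [2.0, 2.5, 2.0, 3.0, 2.5]
seniors    = [3.0, 3.5, 3.0, 4.0, 3.5]

growth = mean(seniors) - mean(first_year)
print(f"First-year mean: {mean(first_year):.2f}")
print(f"Senior mean:     {mean(seniors):.2f}")
print(f"Apparent growth across the program: {growth:+.2f} rubric points")
```

Because the two groups consist of different students, a cross-sectional comparison like this suggests, rather than proves, the curriculum's contribution to growth.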