Glossary for Assessment in Departments and Programs 

Assessment Plan 

Every four years, all departments and programs design a plan for systematic assessment. The plan is structured as a table that shows, for each outcome, the learning opportunities, assessment methods and measures, targets/benchmarks, and a schedule of when the outcome is to be assessed (each fall, every other spring, etc.). A sample plan is available.

Annual Report 

Each summer, departments and programs report the results of assessments conducted in the previous academic year. Not all of a department’s or program’s outcomes must be assessed in a given year, provided all outcomes are assessed within a two-year cycle. Annual reports include a table for each outcome that shows the methods/measures used to assess the outcome as well as targets/benchmarks, results, and actions taken. The report should also include a discussion section detailing the department’s or program’s conversation about the results and intended actions to improve student learning. The report might also include a summary of any “closing the loop” conversations held by the department or program.

Four-Year Report 

Every four years, departments and programs file a comprehensive assessment report with Academic Affairs. The report includes an assessment plan for the coming four years, rubrics for assessing each outcome, and a discussion section focusing on actions taken over the past four years and reflecting on past and future assessment in the department or program. 

Goal 

Historically at King’s, the term goal, which still appears on many syllabi, denoted desirable outcomes that could not be, or were not intended to be, assessed: appreciating literature, for example, or developing a sense of curiosity. Such aspirations remain a significant part of a King’s education, and faculty should expect students to achieve much more than can be demonstrated through assessment data. For the purposes of collecting and reporting data, however, the assessment program uses the term goal to mean the broad, overarching aim of a major or program. One of the goals of the Core curriculum, for example, is to “prepare and dispose our students to be responsible citizens in our increasingly interdependent world.” In working toward that grand, aspirational goal of responsible citizenship, a student taking a history Core class will achieve the learning outcome of being able to “recognize the causes and consequences of historical events.” Instructors in history provide evidence showing how well students are meeting that outcome.

Learning Outcome 

Outcomes are the specific, measurable achievements (in skills, knowledge, and dispositions) that demonstrate learning in a department or program (including the Core curriculum). Outcomes should be written in clear, specific, concrete language using action verbs. Bloom’s revised taxonomy provides some guidance for writing measurable outcomes and selecting useful verbs:

Remembering (Knowledge): know, define, identify, label, list, name, find, state, match, recognize, locate, memorize, quote, recall, reproduce, tell, enumerate, recite, repeat, retell 

Understanding (Comprehension): describe, explain, paraphrase, summarize, translate, illustrate, represent, show, articulate, employ, interpret 

Applying: apply, solve, calculate, modify, experiment, teach, compute, develop, operate, perform, implement, use/utilize, integrate, incorporate, display, write, conduct, order, organize 

Analyzing: analyze, compare, contrast, classify, categorize, examine, differentiate, plan, distinguish, separate, connect, divide, infer 

Evaluating: evaluate, appraise, judge, critique, rate, score, choose, select, argue, estimate, criticize, recommend, defend, measure, predict, rank, justify, persuade, weigh 

Creating: create, synthesize, generate, design, formulate, devise, make, compose, arrange, propose, hypothesize, construct, invent, produce, adapt, assemble, reimagine, structure 

Performance Criteria

Performance criteria are the elemental characteristics or dimensions—skills, knowledge, content, or behavior—upon which a student’s work is evaluated and that show the level at which an outcome has been achieved. The level at which a student achieves the outcome of “recognizing the causes and consequences of historical events,” for example, might be determined in part by her ability to consider a broad range of local and global factors when conducting a cause-and-effect analysis. 

Performance criteria often appear in a rubric. The following describes one common rubric design:

A student’s performance is measured at four scoring levels: exceeds expectations, meets expectations, approaches expectations, and falls below expectations. Descriptors for performance at each level are written so that the characteristics (specific knowledge, skills, and dispositions) and the language used to describe them (whether literal or figurative) are consistent across the row. For example, if knowledge is the measure of achievement in the first column, it should be the measure across the row. If at the Exceeds Expectations level a student demonstrates “extensive” knowledge, the extent of her knowledge should also be indicated in the other columns: perhaps it is “limited” or “narrow” at the level of Approaches Expectations. If her knowledge is “deep” in the first column, it might be “superficial” in the third column, and so on.
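
To illustrate, a single hypothetical row of such a rubric, built from the knowledge descriptors mentioned above, might read:

Criterion: knowledge of the causes and consequences of historical events
Exceeds Expectations: demonstrates extensive, deep knowledge of causes and consequences
Meets Expectations: demonstrates solid, accurate knowledge of causes and consequences
Approaches Expectations: demonstrates limited, narrow, or superficial knowledge of causes and consequences
Falls Below Expectations: demonstrates minimal or inaccurate knowledge of causes and consequences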

Not all outcomes can be measured using four-point rubrics. Faculty should determine the best strategies for measuring and demonstrating student learning. 

Target and Benchmark 

Target refers to a number that provides acceptable evidence that students are learning. The target might be the percentage of students who meet or exceed expectations for a performance criterion on a common writing assignment, the percentage of students who correctly perform a task, the average grade earned by students taking a common exam, and so on. Targets are usually set by the faculty in a department or program, and there are no universal guidelines for selecting them, no magic number. When using rubrics, some programs set their targets at around 75 percent: 75 percent of students score a 3 or better on a 4-point rubric, for example. For tests, some departments look for a percentage of students to achieve a certain grade: 75 percent of students will score at least a 65 on the exam. For indirect measures, some departments establish a target grade: 75 percent of students score a B or better in the course, for instance. It is possible to have multiple targets: 75 percent of students will score a 3 or 4 on the rubric, and at least 15 percent will score a 4. Some programs have more complicated formulas for determining what constitutes acceptable proof of learning.
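
For departments that tally rubric scores in a spreadsheet or a short script, checking a target is simple arithmetic. The following minimal Python sketch illustrates the multiple-target example above; the scores are invented for illustration, and the assessment program does not require any particular tool:

    # Hypothetical 4-point rubric scores for one outcome (invented data).
    scores = [4, 3, 2, 4, 3, 3, 1, 4, 3, 3]

    # Multiple targets from the example above: 75 percent of students
    # score a 3 or 4, and at least 15 percent score a 4.
    pct_3_or_4 = 100 * sum(s >= 3 for s in scores) / len(scores)
    pct_4 = 100 * sum(s == 4 for s in scores) / len(scores)
    target_met = pct_3_or_4 >= 75 and pct_4 >= 15

    print(f"{pct_3_or_4:.0f}% scored 3 or better; {pct_4:.0f}% scored 4.")
    print("Target met." if target_met else "Target not met.")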

Benchmarks are targets that are adopted by comparison to external sources (average scores among students at other colleges, for example) or established by accrediting agencies or higher education experts. Departments and programs using benchmarks rather than targets might, in their assessment plan, provide the source of the benchmarks. 

Learning Opportunity 

A learning opportunity is the course or experience within a department/program that best provides students with the knowledge and practice necessary to achieve an outcome.  

Measures and Methods 

Assessment measures and methods provide evidence, directly or indirectly, that students are achieving learning outcomes. Direct methods are the actual products or performances that an instructor can point to as evidence that a student has mastered content or developed a skill. A student’s performance on papers, tests, projects, presentations, and so on provides direct evidence of learning. Indirect measures—such as surveys, course evaluations, and final grades—may be useful in gauging student perception or the general quality or success of a course or program, but they only imply that learning outcomes are being met.

A department must provide two assessment measures or methods for each outcome, one of which must be direct.  

The following lists of direct and indirect measures are provided by the Middle States Association:

Direct: 

examinations and quizzes  
standardized tests  
term papers and reports  
research projects  
capstone projects, senior theses, exhibits  
oral presentations 
case study analysis  
field work, internship performance, service learning, or clinical experiences  
artistic performances and products  
class discussion participation (scored with a rubric)
pass rates or scores on licensure, certification, or subject area tests  
student publications or conference presentations  
employer and internship supervisor ratings of student performance 
performance on institutional tests of writing, critical thinking, or general knowledge

Indirect:

course evaluations
test blueprints (outlines of the concepts and skills covered on tests)
percent of class time spent in active learning
number of student hours spent on service learning
number of student hours spent on homework
number of student hours spent at intellectual or cultural activities related to the course
grades that are not based on explicit criteria related to clear learning goals
focus group interviews with students, faculty members, or employers
registration or course enrollment information
department or program review data
job placement
employer or alumni surveys
student perception surveys
graduate school placement rates
locally developed, commercial, or national surveys of student perceptions or self-reports of activities (e.g., the National Survey of Student Engagement)
transcript studies that examine patterns and trends of course selection and grading
annual reports including institutional benchmarks, such as graduation and retention rates, grade point averages of graduates, etc.

Results 
Results—the data generated by an assessment measure—should be compared to targets and benchmarks to determine if action is needed. 
 
Action Taken 
The faculty in a department or program will determine how best to respond to the results of assessment. Actions should be designed to respond directly to what results show regarding a specific outcome. Below are a few of the actions taken by departments and programs following assessment in the 2017-18 academic year:
In Fall 2018, the instructor will require students to appraise research articles at the beginning of each class for the first half of the semester.  Students will have to classify each article according to CEBM guidelines and explain why an article should have that classification. 
Students will complete more in-class problems related to the Law of Supply and the Law of Demand.
Identify struggling students in 200-level courses and pair them with peer tutors.   
Campus reference librarians will provide additional instruction.
Senior Seminar will center on directed research in Fall 2018 to ensure that students are interacting with a wider breadth of secondary sources.  This is to improve the quality of the original analysis in the final project. 
Continue review sessions before examinations and consider making them mandatory.
Require more writing and more intensive feedback from the instructor on assignments as a means to prompt more effective analysis of critical theories introduced in the course.
Research and Methods will emphasize the collection of relevant sources representing a wide array of academic viewpoints to improve students’ historiographical analyses. This will be achieved through directed research and additional library sessions.
Instructors will implement “building block” research assignments that may include three or more of the following constituent elements: 1) Student submission of a project proposal, including discussion of likely sources; 2) Library research session; 3) Development of a library research guide; 4) Annotated bibliography; 5) Paper or project outline, reflecting where sources will be used; 6) Peer review or group work that emphasizes combining sources or joining in common research; 7) Individual research discussion with the professor.
Research projects (and the major’s courses themselves) should be focused on this outcome, making sure that students synthesize a central aspect of their theological learning from earlier courses by exploring it more deeply through discussion and research in this course.
Continue to employ methods from James Lang’s Small Teaching.
Reflective writing needs to be better integrated into the theological reflection, not just the personal reflection. The instructor applied for and received a teaching innovation scholarship to develop a better journal reflection structure.


Closing the Loop
“Closing the loop” has been in common use at King’s for many years with various meanings. We’ll use the phrase (sparingly) to mean the act of looking back at past actions to determine whether they have had an impact. So, for example, a department, in response to low scores on a test question, might take the action of increasing the amount of class time spent on the topic of that question. The department “closes the loop” when it determines in a future assessment cycle whether the action was fruitful.
 
Schedule
All outcomes in a department/program must be assessed within a two-year cycle.