Cal Poly Pomona

P&R Responses for Recommendation 106

Recommendation 106
Department ASI
Consensus Opinion NA out of NA faculty/staff : NA
Consensus Explanation
Minority Opinion NA out of NA faculty/staff : Pro
Minority Explanation The setting of performance metrics and benchmarking against similar programs or institutions should be pursued. This will create a climate that seeks excellence in what we do on campus, and it will challenge us to be the best we can be, given available resources and budgets.

Recommendation 106
Department ASI PRSM
Consensus Opinion 8 out of 8 faculty/staff : Pro
Consensus Explanation We are in favor of a common measurement for benchmarking, but want to draw attention to the subjective nature of concepts such as quality and student development. We hope this subjectivity will be taken into account when the recommendation is implemented.
Minority Opinion NA out of NA faculty/staff : NA
Minority Explanation

Recommendation 106
Department Computer Science
Consensus Opinion 11 out of 12 faculty/staff : Pro
Consensus Explanation The Computer Science Department supports the establishment of common resource metrics across widely differing programs in order to benchmark performance against other units and institutions.
Minority Opinion NA out of NA faculty/staff : NA
Minority Explanation

Recommendation 106
Department Enrollment Services
Consensus Opinion 2 out of 2 faculty/staff : With modifications
Consensus Explanation We agree that improved performance metrics are needed for effective utilization of resources. While Financial Services may take the lead, it is critical that all divisions be included in planning the collection and review of appropriate performance metrics, and that professional associations' standards for the different functions be considered as resources in developing these metrics.
Minority Opinion NA out of NA faculty/staff : With modifications
Minority Explanation

Recommendation 106
Department Student Affairs Info & Tech Services (SAITS)
Consensus Opinion 7 out of 9 faculty/staff : Con
Consensus Explanation It would be a better use of time and effort not to attempt any kind of 'yardstick' that compares the relative effectiveness of resource use across departments. By definition, departments provide unique value-added services across the campus. No matter where commonalities are sought, there is no good way to distill them into a number with any meaningful comparison value. Assessment is a necessity and should be strongly encouraged, but for assessments to have meaning, they need to be packaged with the subjective context of what is being measured, how it is measured, and under what circumstances it is measured.

Context is essential, and the most useful assessments are therefore those tailored to the specific responsibilities of the area being examined. To scale such measures across the university is to dilute them to the point where they become mere numbers, devoid of relevance.
Minority Opinion 2 out of 9 faculty/staff : With modifications
Minority Explanation As noted in the committee’s report, units on campus provide “widely differing programs,” and the goal of developing “common yardsticks” rests on the subjective assumption that the same metrics are meaningful in the same way to all areas of the university. Differences in function/mission, populations served, department size, and resources could make comparisons across the university an apples-and-oranges exercise. An alternative approach would be to compare departments/functions to those at other universities of similar size and makeup. To achieve this, the department suggests that ODT provide training to assist departments in developing their own relevant metrics using common standards and practices. This training could also be tied to the quality assessment initiative, and the process may naturally reveal other areas in which departments or divisions could share and/or develop similar benchmarks.

Recommendation 106
Department Undergraduate Studies
Consensus Opinion NA out of NA faculty/staff : NA
Consensus Explanation
Minority Opinion 10 out of 10 faculty/staff : With modifications
Minority Explanation Recommendation 106 – Common Metrics and Measurements

We agree that common yardsticks would make it easier to determine how well programs are using their resources to meet their goals. However, we doubt that this can be achieved, because efficiency is a tricky quantity to define. As the many programs in Academic Affairs completed the P&R reports last year, there was much discussion about how to compile the requested efficiency statistics. Some members of the campus community are helped only once; some are helped weekly by one individual; some are helped by several individuals. In some cases the help is steady from week to week, while in others it spikes once or twice during the quarter. Some work does not involve directly helping individuals at all, yet those employees’ work should not necessarily be measured by the number of “widgets” produced.

Is it really necessary or appropriate to compare across programs? Perhaps more effort should be placed on Recommendation 105 – Program Quality Assessment, requiring better definitions of quality and efficiency within programs.

Recommendation 106
Department University Writing Center
Consensus Opinion 3 out of 3 faculty/staff : Con
Consensus Explanation In one sense, the very existence of this recommendation is an admission that the data gathered in the P&R process is not good data. Our programs do not have common metrics for measuring effectiveness because our goals and methods are different. For the most part, the UWC is designed to deal with large numbers of students in 30-minute appointments. The McNair program, on the other hand, is designed to spend a fairly large amount of money on a small group of selected students. Whereas the purpose of the UWC program is to help a writer improve over a period of time, the purpose of the McNair program is to give an under-represented minority undergraduate student enough support to get into a Ph.D. program. To compare these programs in terms of dollars per student is to miss the point entirely. Some service models are expensive, and some are cheap, but the cheap model may not be effective at all.

Common metrics and measurements may serve the Prioritization and Recovery process, and management by spreadsheet, but too much emphasis on common measures is sure to obscure real learning issues and drive every program to the cheapest model.
Minority Opinion NA out of NA faculty/staff : NA
Minority Explanation
