Evaluation approaches

Evaluation approaches are conceptually distinct ways of thinking about, designing, and conducting evaluation efforts. Many of the evaluation approaches in use today make unique contributions to solving important problems, while others refine existing approaches in some way. Classification systems intended to sort out unique approaches from variations on a theme are presented here to help identify some basic schools of thought for conducting an evaluation. After these approaches are identified, they are summarized in terms of a few important attributes.

Since the mid-1960s, the number of alternative approaches to conducting evaluation efforts has increased dramatically. Factors contributing to this growth include the United States Elementary and Secondary Education Act of 1965, which required educators to evaluate their efforts and results, and growing public concern for the accountability of human service programs. In addition, over this period there has been an international movement toward encouraging evidence-based practice in all professions and all sectors. Evidence-based practice (EBP) requires evaluations that deliver the information needed to determine the best way of achieving results.

Classification of approaches[edit]

Two classifications of evaluation approaches, by House [1] and by Stufflebeam & Webster [2], were combined by Frisbie [3] into a manageable number of approaches distinguished by their unique and important underlying principles. The general structures of these classification systems are discussed first; the structures are then combined to present a more detailed classification of fifteen evaluation approaches.

House considers all major evaluation approaches to be based on a common ideology, liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual, and empirical inquiry grounded in objectivity. He also contends that they are all based on subjectivist ethics, in which ethical conduct is grounded in the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which "the good" is determined by what maximizes a single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is assumed, and such interpretations need not be explicitly stated or justified.

These ethical positions have corresponding epistemologies, or philosophies of obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic. In general, it is used to acquire knowledge capable of external verification (intersubjective agreement) through publicly inspectable methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic. It is used to acquire new knowledge based on existing personal knowledge and experiences that are (explicit) or are not (tacit) available for public inspection.

House further divides each epistemological approach by two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals. They also can take a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups according to their orientation toward the role of values, an ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value might actually be. They call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object. They call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of some object. They call this true evaluation.

Table 1 is used to classify fifteen evaluation approaches in terms of epistemology, major perspective (from House), and orientation (from Stufflebeam & Webster). When considered simultaneously, these three dimensions produce twelve cells. Only seven of the cells contain approaches, although all four true evaluation cells contain at least one approach.

Table 1[edit]

Classification of approaches for conducting evaluations, based on epistemology, major perspective, and orientation

| Epistemology (Ethic) | Major perspective | Political (Pseudo-evaluation) | Questions (Quasi-evaluation) | Values (True evaluation) |
|---|---|---|---|---|
| Objectivist (Utilitarian) | Elite (Managerial) | Politically controlled; Public relations | Experimental research; Management information systems; Testing programs; Objectives-based; Content analysis | Decision-oriented; Policy studies |
| Objectivist (Utilitarian) | Mass (Consumers) | | Accountability | Consumer-oriented |
| Subjectivist (Intuitionist/Pluralist) | Elite (Professional) | | | Accreditation/certification; Connoisseur |
| Subjectivist (Intuitionist/Pluralist) | Mass (Participatory) | | | Adversary; Client-centered |

Note. Epistemology and major perspective from House (1978). Orientation from Stufflebeam & Webster (1980).
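The structure of Table 1 can also be made concrete as a small lookup table. The following Python sketch is an illustration only; the key names and data layout are this article's, not part of House's or Stufflebeam & Webster's work. It encodes each occupied cell of the classification:

```python
# Minimal sketch of Table 1 as a lookup keyed by
# (epistemology, major perspective, orientation).
# Empty cells of the classification are simply absent.
CLASSIFICATION = {
    ("objectivist", "elite", "pseudo"): ["politically controlled", "public relations"],
    ("objectivist", "elite", "quasi"): [
        "experimental research", "management information systems",
        "testing programs", "objectives-based", "content analysis",
    ],
    ("objectivist", "elite", "true"): ["decision-oriented", "policy studies"],
    ("objectivist", "mass", "quasi"): ["accountability"],
    ("objectivist", "mass", "true"): ["consumer-oriented"],
    ("subjectivist", "elite", "true"): ["accreditation/certification", "connoisseur"],
    ("subjectivist", "mass", "true"): ["adversary", "client-centered"],
}

# Seven of the twelve possible cells are occupied,
# and the fifteen approaches are distributed among them.
assert len(CLASSIFICATION) == 7
assert sum(len(v) for v in CLASSIFICATION.values()) == 15
```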

Two pseudo-evaluation approaches, politically controlled and public relations studies, are represented. They are based on an objectivist epistemology from an elite perspective.

Six quasi-evaluation approaches use an objectivist epistemology. Five of them—experimental research, management information systems, testing programs, objectives-based studies, and content analysis—take an elite perspective. Accountability takes a mass perspective.

Seven true evaluation approaches are included. Two approaches, decision-oriented and policy studies, are based on an objectivist epistemology from an elite perspective. Consumer-oriented studies are based on an objectivist epistemology from a mass perspective. Two approaches—accreditation/certification and connoisseur studies—are based on a subjectivist epistemology from an elite perspective. Finally, adversary and client-centered studies are based on a subjectivist epistemology from a mass perspective.

Summary of approaches[edit]

The preceding section was used to distinguish between fifteen evaluation approaches in terms of their epistemology, major perspective, and orientation to values. This section is used to summarize each of the fifteen approaches in enough detail so that those placed in the same cell of Table 1 can be distinguished from each other.

Table 2 is used to summarize each approach in terms of four attributes—organizer, purpose, strengths, and weaknesses. The organizer represents the main considerations or cues practitioners use to organize a study. The purpose represents the desired outcome for a study at a very general level. Strengths and weaknesses represent other attributes that should be considered when deciding whether to use the approach for a particular study. The following narrative highlights differences between approaches grouped into the same cell of Table 1.

Table 2[edit]

Summary of fifteen evaluation approaches by organizer, purpose, key strengths, and key weaknesses

| Approach | Organizer | Purpose | Key strengths | Key weaknesses |
|---|---|---|---|---|
| Politically controlled | Threats | Get, keep, or increase influence, power, or money. | Secures evidence advantageous to the client in a conflict. | Violates the principle of full & frank disclosure. |
| Public relations | Propaganda needs | Create a positive public image. | Secures evidence most likely to bolster public support. | Violates the principles of balanced reporting, justified conclusions, & objectivity. |
| Experimental research | Causal relationships | Determine causal relationships between variables. | Strongest paradigm for determining causal relationships. | Requires a controlled setting, limits the range of evidence, focuses primarily on results. |
| Management information systems | Scientific efficiency | Continuously supply evidence needed to fund, direct, & control programs. | Gives managers detailed evidence about complex programs. | Human service variables are rarely amenable to the narrow, quantitative definitions needed. |
| Testing programs | Individual differences | Compare test scores of individuals & groups to selected norms. | Produces valid & reliable evidence in many performance areas. Very familiar to the public. | Data usually cover only testee performance, overemphasize test-taking skills, and can be a poor sample of what is taught or expected. |
| Objectives-based | Objectives | Relate outcomes to objectives. | Common-sense appeal; widely used; uses behavioral objectives & testing technologies. | Leads to terminal evidence often too narrow to provide a basis for judging the value of a program. |
| Content analysis | Content of a communication | Describe & draw conclusions about a communication. | Allows unobtrusive analysis of large volumes of unstructured, symbolic materials. | Sample may be unrepresentative yet overwhelming in volume. Analysis design often overly simplistic for the question. |
| Accountability | Performance expectations | Provide constituents with an accurate accounting of results. | Popular with constituents. Aimed at improving the quality of products and services. | Creates unrest between practitioners & consumers. Politics often forces premature studies. |
| Decision-oriented | Decisions | Provide a knowledge & value base for making & defending decisions. | Encourages use of evaluation to plan & implement needed programs. Helps justify decisions about plans & actions. | Necessary collaboration between evaluator & decision-maker provides an opportunity to bias results. |
| Policy studies | Broad issues | Identify and assess potential costs & benefits of competing policies. | Provide general direction for broadly focused actions. | Often corrupted or subverted by the politically motivated actions of participants. |
| Consumer-oriented | Generalized needs & values, effects | Judge the relative merits of alternative goods & services. | Independent appraisal to protect practitioners & consumers from shoddy products & services. High public credibility. | Might not help practitioners do a better job. Requires credible & competent evaluators. |
| Accreditation/certification | Standards & guidelines | Determine if institutions, programs, & personnel should be approved to perform specified functions. | Helps the public make informed decisions about the quality of organizations & the qualifications of personnel. | Standards & guidelines typically emphasize intrinsic criteria to the exclusion of outcome measures. |
| Connoisseur | Critical guideposts | Critically describe, appraise, & illuminate an object. | Exploits highly developed expertise on the subject of interest. Can inspire others to more insightful efforts. | Dependent on a small number of experts, making the evaluation susceptible to subjectivity, bias, and corruption. |
| Adversary | "Hot" issues | Present the pros & cons of an issue. | Ensures balanced presentation of represented perspectives. | Can discourage cooperation and heighten animosities. |
| Client-centered | Specific concerns & issues | Foster understanding of activities & how they are valued in a given setting & from a variety of perspectives. | Practitioners are helped to conduct their own evaluation. | Low external credibility; susceptible to bias in favor of participants. |

Note. Adapted and condensed primarily from House (1978) and Stufflebeam & Webster (1980).
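Because every row of Table 2 carries the same four attributes, an approach can be summarized as a simple record. The following sketch is a hypothetical encoding (the class and field names are illustrative, not from the source literature), shown with the experimental research row as an example:

```python
from dataclasses import dataclass

@dataclass
class EvaluationApproach:
    """One row of Table 2: an approach and its four summary attributes."""
    name: str
    organizer: str   # main considerations or cues used to organize a study
    purpose: str     # desired outcome at a very general level
    strengths: str
    weaknesses: str

# Example row, transcribed from Table 2.
experimental_research = EvaluationApproach(
    name="Experimental research",
    organizer="Causal relationships",
    purpose="Determine causal relationships between variables.",
    strengths="Strongest paradigm for determining causal relationships.",
    weaknesses="Requires a controlled setting, limits the range of "
               "evidence, focuses primarily on results.",
)
```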

Pseudo-evaluation[edit]

Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective. Although both of these approaches seek to misrepresent value interpretations about some object, they do so in different ways. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder.

Public relations information is used to paint a positive image of an object regardless of the actual situation. Neither of these approaches is acceptable evaluation practice, although the seasoned reader can surely think of a few examples where they have been used.

Objectivist, elite, quasi-evaluation[edit]

As a group, these five approaches represent a highly respected collection of disciplined inquiry approaches. They are considered quasi-evaluation approaches because particular studies can legitimately focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations. These approaches can produce characterizations without producing appraisals, although specific studies can produce both. Each of these approaches serves its intended purpose well. They are discussed roughly in order of the extent to which they approach the objectivist ideal.

Experimental research is the best approach for determining causal relationships between variables. The potential problem with using this as an evaluation approach is that its highly controlled and stylized methodology may not be sufficiently responsive to the dynamically changing needs of most human service programs.
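To make the experimental paradigm concrete, the following sketch simulates a randomized two-group study and estimates the treatment effect as a difference in group means. The sample size, outcome model, and effect size are invented for illustration:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# Hypothetical study: 100 participants randomly assigned to
# treatment or control, with an invented true effect of +5 points.
def outcome(treated: bool) -> float:
    """Simulated outcome: baseline noise plus the invented effect."""
    return random.gauss(50, 10) + (5 if treated else 0)

assignments = [True] * 50 + [False] * 50
random.shuffle(assignments)

treated_scores = [outcome(True) for t in assignments if t]
control_scores = [outcome(False) for t in assignments if not t]

# Random assignment makes the difference in group means an
# unbiased estimate of the causal effect of the treatment.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:.1f}")  # close to the true 5
```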

Management information systems (MISs) can give detailed information about the dynamic operations of complex programs. However, this information is restricted to readily quantifiable data usually available at regular intervals.

Testing programs are familiar to just about anyone who has attended school, served in the military, or worked for a large company. These programs are good at comparing individuals or groups to selected norms in a number of subject areas or to a set of standards of performance. However, they focus only on testee performance, and they might not adequately sample what is taught or expected.
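Comparing individuals to selected norms is usually a matter of standardization. The following sketch computes a z-score and an approximate percentile rank against hypothetical norm-group parameters; the figures are invented, and a normal distribution is assumed:

```python
import statistics

# Hypothetical norm-group parameters (invented for illustration).
NORM_MEAN, NORM_SD = 100.0, 15.0

def z_score(raw: float) -> float:
    """Standardize a raw score against the norm group."""
    return (raw - NORM_MEAN) / NORM_SD

def percentile(raw: float) -> float:
    """Approximate percentile rank under a normality assumption."""
    return 100 * statistics.NormalDist(NORM_MEAN, NORM_SD).cdf(raw)

print(z_score(115))            # 1.0: one standard deviation above the norm
print(round(percentile(115)))  # about 84
```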

Objectives-based approaches relate outcomes to prespecified objectives, allowing judgments to be made about their level of attainment. Unfortunately, the objectives are often not proven to be important or they focus on outcomes too narrow to provide the basis for determining the value of an object.
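In its simplest form, an objectives-based study checks measured outcomes against prespecified targets. A minimal sketch, using invented objectives and outcome data:

```python
# Hypothetical objectives: each maps an outcome measure to a target level.
objectives = {
    "reading_score": 75.0,
    "attendance_rate": 0.90,
    "completion_rate": 0.80,
}

# Invented measured outcomes for the same measures.
outcomes = {
    "reading_score": 78.2,
    "attendance_rate": 0.86,
    "completion_rate": 0.84,
}

# Judge attainment measure by measure. Note that nothing here asks
# whether the objectives themselves were worth pursuing, which is
# the weakness discussed above.
for measure, target in objectives.items():
    attained = outcomes[measure] >= target
    print(f"{measure}: {'attained' if attained else 'not attained'}")
```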

Content analysis is a quasi-evaluation approach because content analysis judgments need not be based on value statements. Instead, they can be based on knowledge. Such content analyses are not evaluations. On the other hand, when content analysis judgments are based on values, such studies are evaluations.
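In its knowledge-oriented form, content analysis often amounts to coding and counting features of a communication. The following sketch performs a simple category count over an invented passage; an actual study would use a validated coding scheme and a defensible sample:

```python
from collections import Counter
import re

# Invented document and coding scheme, for illustration only.
document = """The program improved reading outcomes. Staff reported
improved morale, though costs rose and attendance declined."""

coding_scheme = {
    "positive": {"improved", "gains", "success"},
    "negative": {"declined", "rose", "failure"},
}

tokens = re.findall(r"[a-z]+", document.lower())
counts = Counter()
for token in tokens:
    for category, terms in coding_scheme.items():
        if token in terms:
            counts[category] += 1

# A characterization, not an appraisal: the counts describe the
# communication without judging its value.
print(dict(counts))  # {'positive': 2, 'negative': 2}
```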

Objectivist, mass, quasi-evaluation[edit]

Accountability is popular with constituents because it is intended to provide an accurate accounting of results that can improve the quality of products and services. However, this approach quickly can turn practitioners and consumers into adversaries when implemented in a heavy-handed fashion.

Objectivist, elite, true evaluation[edit]

Decision-oriented studies are designed to provide a knowledge base for making and defending decisions. This approach usually requires close collaboration between an evaluator and a decision-maker, which makes it susceptible to corruption and bias.

Policy studies provide general guidance and direction on broad issues by identifying and assessing potential costs and benefits of competing policies. The drawback is these studies can be corrupted or subverted by the politically motivated actions of the participants.

Objectivist, mass, true evaluation[edit]

Consumer-oriented studies are used to judge the relative merits of goods and services based on generalized needs and values, along with a comprehensive range of effects. However, this approach does not necessarily help practitioners improve their work, and it requires a credible and competent evaluator to be done well.

Subjectivist, elite, true evaluation[edit]

Accreditation / certification programs are based on self-study and peer review of organizations, programs, and personnel. They draw on the insights, experience, and expertise of qualified individuals who use established guidelines to determine if the applicant should be approved to perform specified functions. However, unless performance-based standards are used, attributes of applicants and the processes they perform often are overemphasized in relation to measures of outcomes or effects.

Connoisseur studies draw on the highly developed expertise of one or more experts to critically describe, appraise, and illuminate an object. Because the evaluation depends on a small number of experts, however, it is susceptible to subjectivity, bias, and corruption.

Subjectivist, mass, true evaluation[edit]

The adversary approach focuses on drawing out the pros and cons of controversial issues through quasi-legal proceedings. This helps ensure a balanced presentation of different perspectives on the issues, but it is also likely to discourage later cooperation and heighten animosities between contesting parties if “winners” and “losers” emerge.

Client-centered studies address specific concerns and issues of practitioners and other clients of the study in a particular setting. These studies help people understand the activities and values involved from a variety of perspectives. However, this responsive approach can lead to low external credibility and a favorable bias toward those who participated in the study.

Responsive Evaluation[edit]

The essential design of the responsive approach is Robert Stake’s 12 Prominent Events:[1] A) Identify program scope; B) Overview program activities; C) Discover purposes, concerns; D) Conceptualize issues, problems; E) Identify data needs; F) Select observers, judges, and instruments (if any); G) Observe designated antecedents, transactions, and outcomes; H) Thematize and prepare portrayals and case studies; I) Winnow, match issues to audiences; J) Format for audience use; K) Assemble formal reports (if any); L) Talk with clients, program staff, and audiences.
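The twelve events can be tracked with a simple ordered structure. The following sketch encodes the labels from the list above; the representation is illustrative only and is not prescribed by Stake:

```python
# Stake's 12 Prominent Events, encoded as an ordered sequence.
# The data structure is this article's illustration, not Stake's.
PROMINENT_EVENTS = (
    "Identify program scope",
    "Overview program activities",
    "Discover purposes, concerns",
    "Conceptualize issues, problems",
    "Identify data needs",
    "Select observers, judges, and instruments (if any)",
    "Observe designated antecedents, transactions, and outcomes",
    "Thematize and prepare portrayals and case studies",
    "Winnow, match issues to audiences",
    "Format for audience use",
    "Assemble formal reports (if any)",
    "Talk with clients, program staff, and audiences",
)

for letter, event in zip("ABCDEFGHIJKL", PROMINENT_EVENTS):
    print(f"{letter}) {event}")
```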

Philosophically, the responsive approach is grounded in a constructivist ontology and a nonpositivist epistemology. The approach posits that there are multiple perspectives on what is "true", and therefore evaluation designs ought to be conscious of individuals' and groups' multiple interpretations of reality. Reality is considered a result of an individual's construction of meaning. Proponents of the responsive approach contend that people naturally examine, record, and draw conclusions about the value of phenomena and what they mean to them.[2]

Diversity of approaches[edit]

Implementation of a range of evaluation approaches may provide an advantage to an institution or business.[3][4][5][6]

Notes and references[edit]

  1. House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher, 7(3), 4–12.
  2. Stufflebeam, D. L., & Webster, W. J. (1980). An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, 2(3), 5–19.
  3. Frisbie, R. D. (1986). The use of microcomputer programs to improve the reliability and validity of content analysis in evaluation. Doctoral dissertation, Western Michigan University.

References[edit]

  1. Stake, R. (1976). A theoretical statement of responsive evaluation. Studies in Educational Evaluation, 2(1), 19–22.
  2. Cameron, Bobby Thomas. (2014). Using responsive evaluation in strategic management. Strategic Leadership Review, 4(2), 22–27.
  3. House, E.R. (1978). "Assumptions underlying evaluation models". Educational Researcher. 7 (3): 4–12. doi:10.3102/0013189x007003004.
  4. Stufflebeam, D. L.; Webster, W. J. (1980). "An analysis of alternative approaches to evaluation". Educational Evaluation and Policy Analysis. 2 (3): 5–19. doi:10.3102/01623737002003005.
  5. Weisberg, Michael; Muldoon, Ryan (2009). "Epistemic Landscapes and the Division of Cognitive Labor". Philosophy of Science. 76: 225–252. doi:10.1086/644786.
  6. Hong, Lu; Page, Scott E. (2001). "Problem Solving by Heterogeneous Agents". Journal of Economic Theory. 97: 123–163. doi:10.1006/jeth.2000.2709.

