Investigative methods in heuristics

Methodology of heuristics refers to the scientific methods that researchers use to study decision heuristics (also called decision strategies). There is no single methodological approach that all researchers use; rather, a variety of approaches exists across the fields that investigate decision heuristics.[1][2]

Overview of methods

Decision heuristics are investigated from a normative, a descriptive, and a prescriptive (decision theory) perspective. From the normative perspective, a researcher investigates abstract, often mathematical, properties of a strategy in conjunction with a task.[3] It answers the question: when does this strategy work well? (See also ecological rationality.) The descriptive perspective asks whether a heuristic is actually applied by humans or other animals when they face a specific decision problem; the term descriptive analysis refers to studies testing whether agents behave in line with what a heuristic model predicts. Finally, the prescriptive perspective asks when a real decision maker should rely on which decision strategy.[4]

Two methods dominate the normative perspective. First, researchers use mathematical proofs to show under which conditions a heuristic leads to a good outcome; for example, one paper analyzed mathematically when a heuristic called the fast-and-frugal tree works well.[3] Second, researchers study the properties of decision strategies by simulation: they take a data set describing a decision problem, feed the inputs available in the data into a heuristic model (usually formalized as a computer algorithm), let the model generate a response, and check that response against the true solution to the decision problem.
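
To make the simulation approach concrete, here is a minimal sketch in Python. It assumes a lexicographic, "take-the-best"-style heuristic for paired comparisons; the cue names, the cue order, and the small dataset are hypothetical and serve only as illustration.

```python
# Minimal simulation sketch: run a lexicographic ("take-the-best"-style)
# heuristic on a paired-comparison dataset and score its responses against
# the true solutions. All cue values below are invented.

CUE_ORDER = ["cue_a", "cue_b", "cue_c"]  # cues ordered by assumed validity

def take_the_best(obj_x, obj_y):
    """Inspect cues in order; decide on the first cue that discriminates."""
    for cue in CUE_ORDER:
        if obj_x[cue] != obj_y[cue]:
            return "x" if obj_x[cue] > obj_y[cue] else "y"
    return "x"  # no cue discriminates: guess (deterministically, for simplicity)

# Hypothetical dataset: each trial is (object_x, object_y, true_answer).
trials = [
    ({"cue_a": 1, "cue_b": 0, "cue_c": 1}, {"cue_a": 0, "cue_b": 1, "cue_c": 1}, "x"),
    ({"cue_a": 0, "cue_b": 0, "cue_c": 1}, {"cue_a": 0, "cue_b": 1, "cue_c": 0}, "y"),
    ({"cue_a": 1, "cue_b": 1, "cue_c": 0}, {"cue_a": 1, "cue_b": 1, "cue_c": 1}, "y"),
]

# Feed the inputs into the model and check its responses against the truth.
hits = sum(take_the_best(x, y) == truth for x, y, truth in trials)
print(f"Simulated accuracy: {hits}/{len(trials)}")
```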

In addition, empirical methods are used to investigate whether decision makers actually use a heuristic. These include controlled laboratory experiments, field research, and interviews.

Dimensions of methods

"Three dimensions of methods used in judgment and decision making"
Three dimensions of methods to study heuristics in judgment and decision making, labels from[1]

The different methods can be classified along at least three dimensions: which part of the data is used to fit the model, which part of the model is tested, and whether data are aggregated across individuals. These dimensions can be combined in various ways (see the figure above for an overview):

  • Level of analysis: To what extent is the data aggregated?
    • Individual level
    • Aggregate level
  • Part of model: Which part of the model is tested?
    • Input-output
    • Process
  • Set of data: How much of the data is used to fit the model?
    • Fitting
    • Prediction

Individual versus aggregate testing

Whether individual or aggregate-level testing is more appropriate depends on the research question. When researchers are interested in whether people rely on a specific heuristic, an individual-level analysis is required.[5] Such an analysis examines the response data of each individual participant and looks at how well different models predict each participant's data. Because of systematic individual differences, some participants might behave consistently with one model while others rely on another; such differences cannot be inferred from a group-level analysis.
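
As a sketch of such an individual-level analysis, the following Python snippet scores two hypothetical candidate models (labeled here as a take-the-best-like and a weighted-additive-like model, purely for illustration) against each participant's responses and classifies every participant by the best-scoring model. All responses and model predictions are invented.

```python
# Illustrative individual-level analysis: for each participant, count how many
# responses each candidate model predicts correctly, then classify the
# participant by the best-scoring model. The data below are made up.

participants = {
    "p1": {"responses": ["A", "A", "B", "A"],
           "model_ttb":  ["A", "A", "B", "B"],   # predictions of heuristic model
           "model_wadd": ["B", "A", "A", "A"]},  # predictions of weighted-additive model
    "p2": {"responses": ["B", "B", "B", "A"],
           "model_ttb":  ["B", "A", "A", "A"],
           "model_wadd": ["B", "B", "B", "A"]},
}

for pid, d in participants.items():
    scores = {m: sum(p == r for p, r in zip(d[m], d["responses"]))
              for m in ("model_ttb", "model_wadd")}
    best = max(scores, key=scores.get)
    print(pid, scores, "-> best:", best)
```

In practice the simple hit rate shown here is often replaced by a likelihood-based measure, and the comparison is corrected for differences in model flexibility.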

In contrast, aggregate-level testing is appropriate when researchers are interested in whether reliance on a specific strategy leads to group-level patterns. For example, Todd, Billari & Simao (2005) examined how individual mate-search heuristics can explain aggregate age-at-marriage patterns in historical data.[6]

Input-output versus process-tracing

The distinction between input-output testing and process tracing refers to which part of a model of a decision strategy a researcher wants to test.

Input-output testing methods propose a relation between inputs and outputs and test whether changes in the input influence the output as predicted.[7] The idea is to manipulate the input and investigate whether the model produces an output that is in line with the data. The internal form of the proposed relation is not of interest here: a differently structured mechanism might predict the same output. The test looks only at whether a change in the input corresponds to the predicted change in the output.
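
The logic can be sketched as follows, under strong simplifying assumptions: a model whose output depends on a single manipulated cue, two experimental conditions that differ only in that cue, and invented observed choices.

```python
# Input-output sketch: manipulate one input (the most valid cue) and check
# whether observed choices shift in the direction the model predicts.
# All values are invented for illustration.

def model_prediction(cue_a_favors):
    # A lexicographic model decides by cue_a alone whenever it discriminates.
    return "x" if cue_a_favors == "x" else "y"

# Modal observed choice in two conditions differing only in which object
# cue_a favors (condition -> observed choice):
observed = {"x": "x", "y": "y"}

for condition in ("x", "y"):
    predicted = model_prediction(condition)
    print(f"cue_a favors {condition}: predicted {predicted}, "
          f"observed {observed[condition]}, consistent={predicted == observed[condition]}")
```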

On the other hand, process-tracing methods propose and test a process that connects the input to the output; they measure the decision process itself in addition to the input-output relation.[7] Researchers investigate several aspects of a decision heuristic: first, the acquisition of information; second, how information is integrated and evaluated; and third, physiological, neurological, or other corollary aspects of deciding.[8] Information acquisition, integration, and evaluation are measured with think-aloud protocols, eye tracking, and information-search tracking; corollary aspects are measured through response times, skin conductance, pupil dilation, transcranial magnetic stimulation, or transcranial direct-current stimulation.[8]
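
As an illustration of a test at the process level, the sketch below checks whether a recorded cue-inspection sequence (as might be obtained from an information board or from eye tracking) matches the search order and stopping rule a lexicographic heuristic predicts. The cue names and sequences are hypothetical.

```python
# Process-tracing sketch: compare a participant's recorded cue-inspection
# sequence with the search order a lexicographic heuristic predicts,
# including stopping right after the first discriminating cue.

PREDICTED_ORDER = ["cue_a", "cue_b", "cue_c"]

def consistent_with_heuristic(acquisitions, discriminating_cue):
    """True if cues were inspected in the predicted order and search
    stopped immediately after the first discriminating cue."""
    expected = PREDICTED_ORDER[:PREDICTED_ORDER.index(discriminating_cue) + 1]
    return acquisitions == expected

print(consistent_with_heuristic(["cue_a", "cue_b"], "cue_b"))           # True
print(consistent_with_heuristic(["cue_c", "cue_a", "cue_b"], "cue_b"))  # False: wrong order
```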

Fitting versus prediction

Model fitting is the process of searching for parameters such that the model describes a set of available data as well as possible (usually measured by goodness of fit). However, noise-free data are impossible to obtain,[2] so a sufficiently flexible model may end up fitting the noise rather than the underlying process. Therefore, the performance of a model is often assessed by how well it predicts unseen data.

Prediction refers to using a model with fixed parameters to generate output for new data that were not used for fitting.[9] The parameters can be fixed by fitting them to a training set or by setting them to specific values a priori.[10] Prediction is central to resampling methods: in cross-validation, one part of the data is used for fitting and the model then predicts the held-out part; prediction can also mean predicting data from one sample where a subset of that same sample was used for fitting (see bootstrapping).
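
The distinction can be made concrete with a small sketch: a one-parameter model (a decision threshold, chosen purely for illustration) is fitted to a training split and then evaluated on held-out data; all numbers are invented.

```python
# Fitting-versus-prediction sketch: fit a one-parameter model (a decision
# threshold) on a training split, then measure accuracy on held-out data.
# In practice one would cross-validate over many random splits.

train = [(0.2, "B"), (0.4, "B"), (0.6, "A"), (0.9, "A")]  # (evidence, response)
test  = [(0.3, "B"), (0.7, "A"), (0.5, "B"), (0.8, "A")]

def accuracy(threshold, data):
    return sum(("A" if e >= threshold else "B") == r for e, r in data) / len(data)

# Fitting: pick the threshold that maximizes goodness of fit on the training set.
candidates = [i / 10 for i in range(1, 10)]
best = max(candidates, key=lambda t: accuracy(t, train))

print(f"fitted threshold={best}, fit accuracy={accuracy(best, train):.2f}, "
      f"prediction accuracy={accuracy(best, test):.2f}")
```

As expected, accuracy in fitting is at least as high as accuracy in prediction, because the parameter was selected to maximize fit on the training data.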

Methodological recommendations

In designing experiments to generate data and in testing heuristics on such data, further aspects are considered relevant:[2]

Competitive model test: model testing can involve one model or several. If it involves several, it is a competitive model test: a researcher compares multiple models of strategies against the same dataset. It is relatively easy to declare a model good because it fits the data, yet another model might fit the same data much better; only a direct comparison can reveal this.
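
A minimal sketch of such a competitive test, with invented data and deliberately simple candidate models, shows why the comparison matters: here a naive baseline matches the substantive model's fit, something a test of one model in isolation would not reveal.

```python
# Competitive-test sketch: evaluate several candidate models on the same
# dataset instead of checking one model in isolation. Models and data are
# hypothetical; any goodness-of-fit measure could replace the hit rate.

data = [("high", "A"), ("low", "B"), ("high", "A"), ("low", "A")]  # (input, choice)

models = {
    "heuristic": lambda x: "A" if x == "high" else "B",
    "always_A":  lambda x: "A",
    "anti":      lambda x: "B" if x == "high" else "A",
}

for name, model in models.items():
    hits = sum(model(x) == c for x, c in data)
    print(f"{name}: {hits}/{len(data)} choices predicted")
```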

Representative stimuli: a further recommendation is to use real-world stimuli in experiments. The term representative refers to whether the stimuli are selected to match one or several real decision situations. When designing experimental material, some authors advise making the stimuli as representative of a real-world decision task as possible, e.g. by using tasks or problems that actually exist rather than artificially constructed problem sets.

Source of data: another recommendation when testing models is to use data from existing experiments that come either from a third (neutral) source or from a study that tested a competing model.[11]

Counterintuitive predictions: further, it is recommended to give slightly more weight to a model that correctly predicts a finding that is not predicted by the majority of models.[12]

References

  1. doi:10.1111/j.1756-8765.2008.01006.x
  2. doi:10.3724/SP.J.1041.2010.00072
  3. doi:10.1016/j.jmp.2006.06.001
  4. doi:10.1287/deca.1100.0191
  5. Book reference (cited via ISBN in the original; details not preserved).
  6. Todd, Billari & Simao (2005). doi:10.1353/dem.2005.0027
  7. Book reference (cited via ISBN in the original; details not preserved).
  8. Book reference (cited via ISBN in the original; details not preserved).
  9. doi:10.1016/j.cognition.2011.12.002
  10. doi:10.1146/annurev-psych-120709-145346
  11. doi:10.1037/0033-295X.113.2.409
  12. doi:10.1016/j.jmp.2010.07.002

