Investigative methods in heuristics

Methodology of heuristics refers to the scientific methods that researchers use to study decision heuristics (also called decision strategies). There is no single methodological approach that all researchers use; rather, a variety of approaches exist in the fields that investigate decision heuristics.
Overview of Methods
Decision heuristics are investigated from a normative as well as a descriptive perspective. The normative perspective means that a researcher investigates abstract - often mathematical - properties of the strategy in conjunction with a task.
First, researchers use mathematical proofs to show under which conditions a heuristic leads to a good outcome. For example, one paper analyzed mathematically when a heuristic called the fast-and-frugal tree works well.
Simulation is another means of studying the properties of decision strategies: researchers take data sets about a decision problem and examine what solution a heuristic would generate for that problem. They feed inputs that exist in the data into a heuristic model, which is usually formalized as a computer algorithm, and let the model generate a response. The response generated by the model is then checked against the true solution to the decision problem.
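The simulation procedure described above can be sketched in a few lines. This is a minimal illustration, not any published model: the fast-and-frugal tree, the cue names, and the data set are all hypothetical.

```python
# Sketch of a simulation test for a heuristic. The tree, cue names,
# and data set below are hypothetical illustrations.

def fast_and_frugal_tree(cues):
    """Toy fast-and-frugal tree: inspect cues one at a time and
    exit with a decision as soon as a cue allows it."""
    if cues["cue_a"]:       # first cue positive -> exit with "yes"
        return "yes"
    if not cues["cue_b"]:   # second cue negative -> exit with "no"
        return "no"
    return "yes"            # final exit if no earlier cue decided

# Synthetic data set: cue values plus the true solution for each case.
data = [
    ({"cue_a": True,  "cue_b": True},  "yes"),
    ({"cue_a": False, "cue_b": True},  "yes"),
    ({"cue_a": False, "cue_b": False}, "no"),
    ({"cue_a": True,  "cue_b": False}, "no"),
]

# Feed each input into the heuristic model and check the generated
# response against the true solution.
hits = sum(fast_and_frugal_tree(cues) == truth for cues, truth in data)
accuracy = hits / len(data)
print(accuracy)  # fraction of decisions the heuristic gets right
```

The same loop works for any heuristic that can be written as an algorithm; in real studies the data set is an empirical one and the accuracy of several candidate heuristics is compared.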
In addition, empirical methods are used to investigate whether a heuristic is used by decision makers. This can include controlled laboratory experiments, field research, or interviews.
Dimensions of Methods
The different methods can be classified along at least three dimensions: what part of the data they use to fit the model, which part of the model they test, and whether or not they aggregate data. These dimensions can be combined in various ways:
* Level of analysis: To what extent is the data aggregated?
** Individual level
** Aggregate level
* Part of model: Which part of the model is tested?
** Input-output
** Process
* Set of data: How much of the data is used to generate the model?
** Fitting
** Prediction
Individual versus aggregate testing
Whether individual- or aggregate-level testing is more appropriate depends on the research question. When researchers are interested in whether people rely on a specific heuristic, an individual-level analysis is required. It examines the response data of each individual participant and looks at how well different models predict the data from each participant. Due to systematic individual differences, some participants might behave consistently with a specific model while others rely on another model; this cannot be inferred from a group-level analysis.
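An individual-level analysis of the kind just described can be sketched as follows. Both candidate models, the trial structure, and the participants' choices are hypothetical stand-ins, not published strategies or data.

```python
# Hypothetical individual-level model comparison: for each participant,
# count how many of their choices each candidate model predicts.

def model_take_first(trial):
    """Toy strategy: choose whatever the first cue points to."""
    return trial["first_cue"]

def model_tally(trial):
    """Toy strategy: choose the option with more cues in its favor."""
    return "A" if trial["votes_for_A"] > trial["votes_for_B"] else "B"

# Three trials, each described by its cue pattern.
trials = [
    {"first_cue": "A", "votes_for_A": 1, "votes_for_B": 2},
    {"first_cue": "B", "votes_for_A": 2, "votes_for_B": 1},
    {"first_cue": "A", "votes_for_A": 2, "votes_for_B": 1},
]

# Observed choices of two participants on the same trials.
participants = {
    "p1": ["A", "B", "A"],
    "p2": ["B", "A", "A"],
}

for pid, choices in participants.items():
    for name, model in [("take_first", model_take_first), ("tally", model_tally)]:
        hits = sum(model(t) == c for t, c in zip(trials, choices))
        print(pid, name, hits, "of", len(trials))
```

Here participant p1's choices are fully consistent with one model and p2's with the other; averaging the two participants' data would blur exactly the individual differences the analysis is meant to detect.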
In contrast, aggregate-level testing is appropriate when researchers are interested in whether reliance on some specific strategy leads to group-level patterns. For example, Todd, Billari & Simao (2005) looked at how individual mate-search heuristics can explain aggregate age-at-marriage patterns in historical data.
Input-output versus process-tracing
Testing on input-output vs. process-tracing refers to the part of a model of a decision strategy that a researcher wants to test.
Input-output testing methods propose a relation between inputs and outputs and test if changes in the input influence the output. The idea is to manipulate the input, and investigate whether the model produces an output that is in line with the data. The nature of the proposed relation between in- and output (whether the proposed relation itself might take a different form while yet predicting the same output) is not of interest here. The test looks only at whether a change in the input corresponds with a predicted change in the output.
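The input-output logic described above can be illustrated with a toy lexicographic heuristic (a hypothetical example, not a specific published model): the input is manipulated and only the resulting change in output is examined, not the internal process.

```python
# Hypothetical input-output test: manipulate one input cue and check
# whether the model's output changes in the predicted direction.

def lexicographic_choice(option_a, option_b):
    """Toy heuristic: decide by the first cue that discriminates
    between the two options."""
    for cue_a, cue_b in zip(option_a, option_b):
        if cue_a != cue_b:
            return "A" if cue_a > cue_b else "B"
    return "tie"

# Baseline input: the first cue favors option A.
baseline = lexicographic_choice((1, 0), (0, 1))

# Manipulated input: the first cue is flipped to favor option B.
manipulated = lexicographic_choice((0, 0), (1, 1))

# The model predicts that the manipulation reverses the choice; an
# input-output test checks whether observed choices reverse as well.
print(baseline, manipulated)
```

Note that any model producing the same choice reversal would pass this test equally well; distinguishing between such models is what process-tracing methods are for.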
On the other hand, process-tracing methods propose and test a process between the input and the output. Process tracing methods measure this process of decision making itself as well as the input-output relation.
The data used to measure these processes include think-aloud protocols and eye tracking (for information acquisition, integration, and evaluation); information search tracking (for information search); and, for corollary aspects, measures such as response time, skin conductance, pupil dilation, transcranial magnetic stimulation, and transcranial direct-current stimulation.
Fitting versus prediction
Model fitting is the process of searching for parameters such that the model describes a set of available data best (usually measured by goodness of fit). However, noise-free data are impossible to obtain, so a model that fits one data set well may partly capture noise rather than the underlying strategy. For prediction, the parameters are fixed in advance, either by fitting them to a training set or by setting them to specific values. Prediction is often combined with resampling: a common method is cross-validation, in which an initial sample is used for fitting and the model then predicts data from another sample; prediction can also mean predicting data from one sample when only a subset of that sample is used for fitting (see bootstrapping).
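The fitting-versus-prediction distinction can be made concrete with a one-parameter threshold model and a simple holdout split. The model, the data, and the threshold grid are all hypothetical illustrations.

```python
# Sketch of fitting versus prediction with a holdout split. The
# one-parameter threshold model and the data are hypothetical.

# Synthetic data: (cue value, observed binary decision).
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1),
        (0.7, 1), (0.9, 1), (0.5, 0), (0.8, 1)]

def accuracy(threshold, sample):
    """Goodness of fit: fraction of decisions the threshold model
    reproduces (model decides 1 whenever the cue exceeds the threshold)."""
    return sum((cue > threshold) == bool(dec) for cue, dec in sample) / len(sample)

train, test = data[:4], data[4:]

# Fitting: search the parameter grid for the threshold that describes
# the training set best.
best = max((t / 10 for t in range(11)), key=lambda t: accuracy(t, train))

# Prediction: keep the fitted parameter fixed and evaluate the model
# on held-out data it has never seen.
print("fit:", accuracy(best, train), "prediction:", accuracy(best, test))
```

As is typical, accuracy on the held-out data is lower than on the fitted data; cross-validation repeats this split several times and averages the prediction scores.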
Methodological Recommendations
In designing experiments to generate data and tests for heuristics, further aspects are considered relevant:
Counterintuitive predictions: it is recommended to give slightly more weight to a model that correctly predicts a finding that is not predicted by the majority of models.