How do you accurately measure a person's beliefs? What is the best approach to measuring risk attitudes? What methods minimise reporting bias? Answers to such questions can be difficult, but they are crucial for researchers studying psychology and behaviour in research evaluations. In our Measures Matter series, our goal is to provide information on measurement tools that might be particularly useful for researchers in behavioural economics, psychology and psychiatry.
Aimed at fellow researchers, content in this series will be published monthly and will include step-by-step instructions on how to replicate certain techniques, in the spirit of knowledge-sharing and as part of an effort to make methods more widely available.
Economists, behavioural scientists and other social scientists have designed several methods to elicit individuals' preferences over future costs and benefits. In this post, we outline two commonly used behavioural tasks with (real or hypothetical) monetary rewards used to elicit "time preferences" in surveys: Multiple Price Lists (Andersen, Harrison, Lau, & Rutström, 2008) and Convex Time Budgets (Andreoni & Sprenger, 2012).
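The logic of a Multiple Price List can be sketched in a few lines: a respondent who takes the sooner payment in early rows and switches to the later payment at some row reveals bounds on their discount rate. The amounts below are hypothetical, not taken from the cited papers.

```python
# Sketch: reading an implied annual discount rate off a respondent's
# switch point in a hypothetical Multiple Price List.

# Each row offers 100 now versus a larger amount in 12 months.
later_amounts = [100, 105, 110, 120, 135, 155, 180, 210]

def implied_rate_bounds(switch_row, amounts, sooner=100):
    """Bounds on the annual discount rate when the respondent first
    switches to the later payment at `switch_row` (0-indexed): the
    indifference amount sooner*(1+r) lies between the later amounts
    of the previous row and this row."""
    high = amounts[switch_row] / sooner - 1
    low = amounts[switch_row - 1] / sooner - 1 if switch_row > 0 else 0.0
    return low, high

low, high = implied_rate_bounds(4, later_amounts)
print(f"Implied annual discount rate between {low:.0%} and {high:.0%}")
```

A respondent switching at the fifth row (later amount 135, previous row 120) is thereby bracketed between a 20% and a 35% annual rate; the full post covers interval-regression and maximum-likelihood refinements of this idea.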
Risk preferences play an important role in many domains of decision-making, and heterogeneity in risk attitudes has implications for our analysis and policy recommendations. As researchers collecting data in the field, we may (in some circumstances) want to directly measure the risk attitudes of the individuals we study. In this post we discuss some of the most popular elicitation and estimation methods for risk preferences.
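One of the most widely used elicitation designs is the Holt and Laury (2002) multiple price list. As a hedged illustration of the estimation step, and only a minimal stand-in for the full methods the post covers, the sketch below brackets a CRRA risk-aversion coefficient from the row at which a subject switches from the safe to the risky lottery; the payoffs are the original Holt-Laury amounts.

```python
# Sketch: bounding a CRRA coefficient from a Holt-Laury style price list.
import math

SAFE = (2.00, 1.60)    # Option A payoffs (high, low)
RISKY = (3.85, 0.10)   # Option B payoffs (high, low)

def crra(x, r):
    """CRRA utility, with log utility at r == 1."""
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def eu_diff(r, p):
    """Expected utility of A minus B when the high payoff has probability p."""
    ua = p * crra(SAFE[0], r) + (1 - p) * crra(SAFE[1], r)
    ub = p * crra(RISKY[0], r) + (1 - p) * crra(RISKY[1], r)
    return ua - ub

def indifference_r(p, lo=-3.0, hi=3.0):
    """Bisect for the r making a subject indifferent at probability p
    (eu_diff is increasing in r for these payoffs)."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if eu_diff(mid, p) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def crra_bounds(n_safe):
    """Choosing the safe option in the first n_safe rows (p = row/10)
    brackets r between the indifference points at those two rows."""
    return indifference_r(n_safe / 10), indifference_r((n_safe + 1) / 10)

lo, hi = crra_bounds(5)
print(f"CRRA coefficient roughly between {lo:.2f} and {hi:.2f}")
```

Five safe choices recover the familiar published interval of roughly 0.15 to 0.41; maximum-likelihood estimation, which the elicitation literature builds on top of this, additionally models noise in choices.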
Expectations about future income are a crucial determinant of job-search behaviour. Such beliefs matter both for structural estimation and as outcome measures for active labour market policies. For our work in South Africa, we focused on both aspects, and carefully piloted ways of measuring beliefs about future income.
Measures of expectations can provide important insights into future economic behaviour, reflecting the forward-looking nature of many economic decisions. This post explains how to obtain measures, such as quantiles and moments of interest, derived from subjective probabilistic expectations.
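As a minimal sketch of this kind of derivation: suppose a respondent reports the chance their income falls below each of two thresholds. Assuming a log-normal, the two reports pin down its parameters, and quantiles and moments follow. The thresholds, probabilities, and the log-normal assumption are all illustrative.

```python
# Sketch: quantiles and moments from two elicited probability points,
# under an assumed log-normal income distribution.
import math
from statistics import NormalDist

std_normal = NormalDist()

def fit_lognormal(a, p_a, b, p_b):
    """Log-normal (mu, sigma) from P(X < a) = p_a and P(X < b) = p_b."""
    z_a, z_b = std_normal.inv_cdf(p_a), std_normal.inv_cdf(p_b)
    sigma = (math.log(b) - math.log(a)) / (z_b - z_a)
    mu = math.log(a) - sigma * z_a
    return mu, sigma

# Hypothetical elicitation: a 25% chance income is below 1000 and a
# 75% chance it is below 3000 (currency units illustrative).
mu, sigma = fit_lognormal(1000, 0.25, 3000, 0.75)
median = math.exp(mu)                 # 50th percentile
mean = math.exp(mu + sigma ** 2 / 2)  # mean of the fitted distribution
print(f"median = {median:.0f}, mean = {mean:.0f}")
```

With more elicited points than parameters, the same idea becomes a least-squares fit, and other parametric families (or spline interpolation of the subjective CDF) can be swapped in.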
There are innovative ways to gather information about sensitive topics during surveys that can reduce reporting bias and safeguard respondents. To increase a respondent’s privacy when answering sensitive survey questions, an alternative method to face-to-face interviewing can be used: Audio Computer Assisted Self-Interviewing (ACASI).
Measuring attitudes towards certain groups of individuals or behaviours may be of interest for a multitude of reasons. An innovative way to measure people's attitudes, used in psychology in developed countries, is the implicit association test (IAT). IATs are now being used by psychologists in settings with low literacy and have also been picked up by economists in recent years.
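The quantity an IAT ultimately produces can be sketched simply. Below is a simplified form of the "D score" of Greenwald, Nosek and Banaji (2003): the difference in mean response latencies between incompatible and compatible sorting blocks, scaled by the pooled standard deviation. Real scoring algorithms add trial trimming and error penalties; the latencies here are hypothetical milliseconds.

```python
# Sketch: a simplified IAT D score from hypothetical response latencies.
compatible = [620, 580, 640, 600, 610, 590]
incompatible = [780, 820, 760, 800, 790, 810]

def d_score(comp, incomp):
    """Mean latency difference scaled by the SD of all trials pooled."""
    both = comp + incomp
    m = sum(both) / len(both)
    pooled_sd = (sum((x - m) ** 2 for x in both) / (len(both) - 1)) ** 0.5
    return (sum(incomp) / len(incomp) - sum(comp) / len(comp)) / pooled_sd

print(f"D = {d_score(compatible, incompatible):.2f}")
```

A positive D indicates slower sorting in the incompatible pairing, the usual sign of an implicit association with the compatible pairing.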
An alternative approach to capturing women's empowerment that is gaining popularity is to measure women's willingness to pay to "hide" income from their spouse, or in other words to gain control of resources, in a lab setting. Note that this method is also referred to by other names, such as the "spousal cooperation game".
In face-to-face surveys, there may be concerns that social desirability bias leads respondents to under-report behaviours they perceive as negative and over-report positive ones. We piloted several measures for the objective measurement of education investment assets in the endline survey of a Randomised Controlled Trial in Kenya (Orkin et al., 2020). This post provides details on each measure and the issues encountered in the setting.
The Centre for the Study of African Economies runs a popular Coder's Corner series for advice on handling your data or writing code for particular analysis techniques. Below are links to a few of their posts which might be particularly useful for researchers in behavioural economics, psychology and psychiatry.
The interests of researchers and policymakers often extend beyond a simple average treatment effect when evaluating interventions in randomised experiments. Exploring heterogeneous treatment effects, or average treatment effects by subgroups and covariates, can provide useful answers to a variety of important questions. This post explores using machine learning methods for inference on heterogeneous treatment effects.
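One minimal example of the idea is a "T-learner": fit a separate outcome model in each arm and take the difference of predictions as the individual-level effect. The base learner below is a hand-rolled univariate OLS purely to keep the sketch dependency-free; in practice one would plug in a flexible method such as random forests, gradient boosting, or causal forests.

```python
# Sketch: T-learner estimate of heterogeneous treatment effects on
# simulated data where the true effect grows with a covariate x.
import random

def ols_fit(xs, ys):
    """Intercept and slope of a univariate least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

random.seed(1)
data = []
for _ in range(2000):
    x = random.uniform(0, 1)
    t = random.random() < 0.5            # random treatment assignment
    y = 1.0 + 2.0 * x + (1.5 * x if t else 0.0) + random.gauss(0, 0.1)
    data.append((x, t, y))

# Fit one outcome model per arm.
a1, b1 = ols_fit([x for x, t, _ in data if t],
                 [y for _, t, y in data if t])
a0, b0 = ols_fit([x for x, t, _ in data if not t],
                 [y for _, t, y in data if not t])

def cate(x):
    """Predicted conditional average treatment effect at covariate value x."""
    return (a1 + b1 * x) - (a0 + b0 * x)

print(f"CATE at x=0.2: {cate(0.2):.2f}; at x=0.8: {cate(0.8):.2f}")
```

The true effect in the simulation is 1.5x, so the predictions should be near 0.3 and 1.2; honest sample-splitting, which the post's ML methods build in, is what makes inference on such estimates valid.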
When we analyse the results of an experiment, we are often interested in understanding the treatment effects on sub-groups. This type of sub-group analysis typically uses in-sample information on the relationship between the outcome of interest and the covariates in the control group to predict outcomes for all groups in the absence of treatment. However, this procedure generates substantial bias due to overfitting. This post discusses a method for overcoming this bias.
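The mechanics of the bias, and the leave-one-out style fix, can be demonstrated on simulated data: when each control unit's own observation enters its prediction, predictions correlate with outcomes even when there is nothing to predict. The k-nearest-neighbour predictor below is just an illustrative stand-in for whatever prediction model is used.

```python
# Sketch: in-sample predictions of a pure-noise outcome look informative;
# leave-one-out predictions do not.
import random

def corr(a, b):
    k = len(a)
    ma, mb = sum(a) / k, sum(b) / k
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(7)
n = 500
xs = sorted(random.uniform(0, 1) for _ in range(n))
ys = [random.gauss(0, 1) for _ in range(n)]  # outcome truly unrelated to x

def knn_pred(i, leave_out):
    """Predict y_i from the 5 nearest x-neighbours, optionally excluding i."""
    order = sorted(range(n), key=lambda j: abs(xs[j] - xs[i]))
    if leave_out:
        order = [j for j in order if j != i]
    return sum(ys[j] for j in order[:5]) / 5

in_sample = [knn_pred(i, leave_out=False) for i in range(n)]
loo = [knn_pred(i, leave_out=True) for i in range(n)]
print(f"corr(prediction, outcome) in-sample: {corr(in_sample, ys):.2f}; "
      f"leave-one-out: {corr(loo, ys):.2f}")
```

The in-sample correlation is spuriously large (each prediction contains the unit's own noise), while the leave-one-out correlation is near zero, which is why sample-splitting or leave-one-out estimation is used before stratifying on predicted outcomes.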
Including control variables in regressions can substantially increase the statistical power of your analysis. However, the choice of which controls to include is often arbitrary. Pre-analysis plans allow researchers to credibly commit to a set of controls, yet these controls might turn out to be suboptimal ex post. Regularisation techniques address this problem and ensure that you make the most of your existing data.
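As a rough illustration of how regularisation selects controls, here is a hand-rolled lasso via coordinate descent with soft-thresholding on simulated data. It is deliberately dependency-free for the sketch; in practice one would use a maintained implementation such as scikit-learn's LassoCV or Stata's lasso suite. Relevant controls keep non-zero coefficients while irrelevant ones are shrunk to exactly zero.

```python
# Sketch: lasso control selection by coordinate descent on centred data.
import random

def lasso(X, y, lam, n_iter=200):
    """Coordinate-descent lasso. X is a list of centred columns."""
    n, p = len(y), len(X)
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual leaving out feature j's contribution.
            r = [y[i] - sum(beta[k] * X[k][i] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[j][i] * r[i] for i in range(n)) / n
            z = sum(v * v for v in X[j]) / n
            # Soft-threshold update: small correlations are zeroed out.
            beta[j] = max(abs(rho) - lam, 0.0) * (1.0 if rho >= 0 else -1.0) / z
    return beta

def centre(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

random.seed(2)
n = 200
raw = [[random.gauss(0, 1) for _ in range(n)] for _ in range(6)]
# Only the first two candidate controls actually matter for the outcome.
outcome = [1.0 * raw[0][i] + 0.5 * raw[1][i] + random.gauss(0, 0.5)
           for i in range(n)]
X = [centre(col) for col in raw]
y = centre(outcome)

beta = lasso(X, y, lam=0.15)
print([round(b, 2) for b in beta])
```

Note that the lasso also shrinks the coefficients it keeps (here by roughly the penalty 0.15), which is why procedures such as post-double-selection refit OLS on the selected set.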
Extensive robustness checks have become a requirement for empirical research. This often leads to Online Appendices with hundreds of result tables that are very hard to digest for readers and referees. Stata 16's speccurve command, written by Martin Eckhoff Andresen, is an easy-to-use command that generates specification curves. A specification curve plots a large number of regression coefficients and confidence intervals from different specifications, sorted by estimated impact, allowing robustness to be assessed in a single figure.
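The information a specification curve collects can be sketched in any language. The Python below is an illustrative stand-in, not a port of speccurve: it estimates one coefficient and confidence interval per combination of researcher choices and sorts the estimates, which is exactly the ordering the figure plots.

```python
# Sketch: building the (estimate, CI) list behind a specification curve
# on simulated data with a true slope of 0.5.
import itertools
import random

def ols_slope_se(xs, ys):
    """Univariate OLS slope and its conventional standard error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    resid = [y - my - slope * (x - mx) for x, y in zip(xs, ys)]
    se = (sum(e * e for e in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

random.seed(3)
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

results = []
for trim, half in itertools.product([False, True], [0, 1]):
    # Two illustrative researcher choices: winsorise the outcome or not,
    # and analyse the first or the second half of the sample.
    idx = range(0, n // 2) if half == 0 else range(n // 2, n)
    xs = [x[i] for i in idx]
    ys = [min(max(y[i], -2), 2) if trim else y[i] for i in idx]
    b, se = ols_slope_se(xs, ys)
    results.append((f"trim={trim} half={half}", b, b - 1.96 * se, b + 1.96 * se))

results.sort(key=lambda r: r[1])  # sorting by estimate gives the "curve"
for name, b, lo, hi in results:
    print(f"{name}: {b:.2f} [{lo:.2f}, {hi:.2f}]")
```

With dozens or hundreds of specifications, plotting these sorted estimates with their intervals (plus a panel indicating which choices each specification makes) reproduces the standard specification-curve figure.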
How can mediation analysis be useful in an experiment that has a behavioural component? With multiple follow-ups on behavioural characteristics and socioeconomic variables, researchers can use mediation to test whether socioeconomic outcomes in later rounds can plausibly be explained by changes in the psychological variables at intermediate follow-up rounds after the behavioural intervention.
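A minimal version of this logic is the classic product-of-coefficients decomposition, sketched below on simulated data in which treatment shifts a psychological mediator that in turn shifts a later economic outcome. All names and effect sizes are illustrative, and the decomposition rests on the strong assumption that the mediator is unconfounded, which is why modern practice pairs it with sensitivity analysis.

```python
# Sketch: product-of-coefficients mediation on simulated data.
import random

def slope(x, y):
    """Univariate OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def slopes2(x1, x2, y):
    """OLS slopes of y on x1 and x2 (with intercept), via a 2x2 solve."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    d1 = [a - m1 for a in x1]
    d2 = [a - m2 for a in x2]
    dy = [a - my for a in y]
    s11 = sum(a * a for a in d1)
    s22 = sum(a * a for a in d2)
    s12 = sum(a * b for a, b in zip(d1, d2))
    s1y = sum(a * b for a, b in zip(d1, dy))
    s2y = sum(a * b for a, b in zip(d2, dy))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(4)
n = 1000
T = [1.0 if random.random() < 0.5 else 0.0 for _ in range(n)]
M = [0.8 * t + random.gauss(0, 1) for t in T]   # mediator, e.g. an aspirations index
Y = [0.2 * t + 0.5 * m + random.gauss(0, 1) for t, m in zip(T, M)]

a = slope(T, M)               # treatment -> mediator
direct, b = slopes2(T, M, Y)  # outcome on treatment and mediator
total = slope(T, Y)
print(f"indirect = {a * b:.2f}, direct = {direct:.2f}, total = {total:.2f}")
```

In linear OLS the decomposition is exact: the total effect equals the direct effect plus the indirect effect a*b, which makes the shares attributable to the psychological channel easy to report.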
Measuring psychological outcomes can be difficult when the constructs we are interested in are unobservable (e.g., the Big Five personality traits) or very costly and time-consuming to measure (e.g., clinical depression). Factor analysis is a statistical technique used widely by psychologists and social scientists. It enables us to test whether a given set of measures captures an underlying, unobservable construct (factor). This helps us to select and verify our measurement instruments.
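As a first-pass, dependency-free stand-in for a full factor analysis, the sketch below checks whether a set of items shares one underlying factor by inspecting the leading eigenvector of their correlation matrix (principal-component loadings; dedicated factor-analysis routines add rotation and proper estimation). The data are simulated: four items load on one latent trait and a fifth is pure noise.

```python
# Sketch: do these survey items load on a single factor?
import random

random.seed(11)
n = 800
latent = [random.gauss(0, 1) for _ in range(n)]
items = [[0.8 * latent[i] + random.gauss(0, 0.6) for i in range(n)]
         for _ in range(4)]
items.append([random.gauss(0, 1) for _ in range(n)])  # unrelated item

def corr(a, b):
    k = len(a)
    ma, mb = sum(a) / k, sum(b) / k
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def leading_eigvec(M, iters=200):
    """Power iteration for the leading eigenvector of a symmetric matrix."""
    v = [1.0] * len(M)
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

R = [[corr(a, b) for b in items] for a in items]
loadings = leading_eigvec(R)
print([round(l, 2) for l in loadings])
```

The four related items receive sizeable loadings of similar magnitude while the unrelated item's loading is near zero, which is the pattern one would hope to confirm before combining items into an index.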
Field experiments in (behavioural) development economics have become increasingly complex. Many trials test cost-effective behavioural additions to more traditional interventions, and rigorous analysis of heterogeneous treatment effects across sub-groups has become the norm. This post shows how you can create publication-style balance and summary tables that take these complexities into account.
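At its core, a balance table reduces to means by arm, their difference, and a test statistic per baseline variable, as sketched below on simulated data. The variable names and the hand-rolled Welch t-statistic are illustrative; user-written commands (for example, in Stata or R) produce publication-ready versions with multiple arms, strata, and joint tests.

```python
# Sketch: a minimal two-arm balance table on simulated baseline data.
import random

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

def welch_t(a, b):
    """Welch t-statistic for the difference in means of a and b."""
    return (mean(a) - mean(b)) / (var(a) / len(a) + var(b) / len(b)) ** 0.5

random.seed(9)
arms = {"control": {}, "treatment": {}}
for arm in arms:
    arms[arm]["age"] = [random.gauss(35, 10) for _ in range(400)]
    arms[arm]["assets"] = [random.gauss(2.0, 1.0) for _ in range(400)]

print(f"{'variable':<10}{'control':>10}{'treated':>10}{'diff':>8}{'t':>7}")
for v in ["age", "assets"]:
    c, t = arms["control"][v], arms["treatment"][v]
    print(f"{v:<10}{mean(c):>10.2f}{mean(t):>10.2f}"
          f"{mean(t) - mean(c):>8.2f}{welch_t(t, c):>7.2f}")
```

Because assignment is simulated at random here, the t-statistics should be small; in a real trial the same layout, extended with strata fixed effects and a joint F-test, is what referees expect to see.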