This study focuses on Saudi mothers’ and their children’s judgments and reasoning about exclusion based on religion. Sixty Saudi children and their mothers residing in Saudi Arabia and 58 Saudi children and their mothers residing in the United Kingdom were interviewed. They were read vignettes depicting episodes of exclusion based on the targets’ religion ordered by peers or a father. Participants were asked to judge the acceptability of exclusion and justify their judgments. Both groups rated the religion-based exclusion of children from peer interactions as unacceptable. Saudi children and mothers residing in the UK were less accepting of exclusion than were children and mothers residing in Saudi Arabia. In addition, children and mothers residing in the UK were more likely to evaluate exclusion as a moral issue and less likely as a social conventional issue than were children and mothers residing in Saudi Arabia. Mothers in the UK were also less likely to invoke psychological reasons than were mothers in Saudi Arabia. Children’s judgments about exclusion were predicted by mothers’ judgments about exclusion. In addition, the number of times children used moral or social conventional reasons across the vignettes was positively correlated with mothers’ use of these categories. The findings, which support the Social Reasoning Development model, are discussed in relation to how mothers and immersion in socio-cultural contexts are related to children’s judgments and reasoning about social exclusion.
Cognitive modeling tools have been widely used by researchers and practitioners to help design, evaluate and study computer user interfaces (UIs). Despite their usefulness, large-scale modeling tasks can still be very challenging due to the amount of manual work needed. To address this scalability challenge, we propose CogTool+, a new cognitive modeling software framework developed on top of the well-known software tool CogTool. CogTool+ addresses the scalability problem by supporting the following key features: 1) a higher level of parameterization and automation; 2) algorithmic components; 3) interfaces for using external data; 4) a clear separation of tasks, which allows programmers and psychologists to define reusable components (e.g., algorithmic modules and behavioral templates) that can be used by UI/UX researchers and designers without the need to understand the low-level implementation details of such components. CogTool+ also supports mixed cognitive models required for many large-scale modeling tasks and provides an offline analyzer of simulation results. In order to show how CogTool+ can reduce the human effort required for large-scale modeling, we illustrate how it works using a pedagogical example, and demonstrate its actual performance by applying it to large-scale modeling tasks of two real-world user-authentication systems.
Lada Timotijevic, Carys Banks, Patrice Rusconi, Bernadette Egan, Matthew Peacock, Ellen Seiss, Morro Touray, Heather Gage, C. Pellicano, G. Spalletta, F. Assogna, M. Giglio, A. Marcante, G. Gentile, I. Cikajilo, D. Gatsios, S. Konitsiotis, D. Fotiadis (2020) Designing a mHealth Clinical Decision Support System for Parkinson’s Disease: A Theoretically Grounded User Needs Approach, In: BMC Medical Informatics and Decision Making 20, 34
BMC (Springer Nature)
Background: Despite the established evidence and theoretical advances explaining human judgments under uncertainty, developments of mobile health (mHealth) Clinical Decision Support Systems (CDSS) have not explicitly applied the psychology of decision making to the study of user needs. We report on a user needs approach to develop a prototype of an mHealth CDSS for Parkinson’s Disease (PD), which is theoretically grounded in the psychological literature on expert decision making and judgement under uncertainty. Methods: A suite of user needs studies was conducted in 4 European countries (Greece, Italy, Slovenia, the UK) prior to the development of PD_Manager, an mHealth-based CDSS designed for Parkinson’s Disease using wireless technology. Study 1 undertook Hierarchical Task Analysis (HTA), including elicitation of user needs, cognitive demands and perceived risks/benefits (ethical considerations) associated with the proposed CDSS, through structured interviews of prescribing clinicians (N=47). Study 2 carried out computational modelling of prescribing clinicians’ (N=12) decision strategies based on social judgment theory. Study 3 was a vignette study of prescribing clinicians’ (N=18) willingness to change treatment based on either self-reported symptoms data, device-generated symptoms data, or combinations of both. Results: Study 1 indicated that system development should move away from the traditional silos of ‘motor’ and ‘non-motor’ symptom evaluations and suggested that presenting data on symptoms according to goal-based domains would be the most beneficial approach, the most important being patients’ overall Quality of Life (QoL). The computational modelling in Study 2 extrapolated the different factor combinations clinicians relied on when making judgements about different questions. Study 3 indicated that the clinicians were equally likely to change the care plan based on information about the change in the patient’s condition from the patient’s self-report and from the wearable devices.
Conclusions: Based on our approach, we could formulate the following principles of mHealth design: 1) enabling shared decision making between the clinician, patient and the carer; 2) flexibility that accounts for diagnostic and treatment variation among clinicians; 3) monitoring of information integration from multiple sources. Our approach highlighted the central importance of the patient-clinician relationship in clinical decision making and the relevance of theoretical as opposed to algorithm (technology)-based modelling of human judgment.
Over the past few decades, two-factor models of social cognition have emerged as a dominant framework for understanding impression development. These models suggest that two dimensions – warmth and competence – are key in shaping our cognitive, emotional, and behavioral reactions toward social targets. More recently, research has jettisoned the warmth dimension, distinguishing instead between sociability (e.g., friendliness and likeability) and morality (e.g., honesty and trustworthiness) and showing that morality is far more important than sociability (and competence) in predicting the evaluations we make of individuals and groups. Presenting research from our laboratories, we show that moral categories are central at all stages of impression development, from implicit assumptions to information gathering to final evaluations. Moreover, moral trait information has a dominant role in predicting people’s behavioral reactions toward social targets. We also show that morality dominates impression development because it is closely linked to the essential judgment of whether another party’s intentions are beneficial or harmful. Thus, our research informs a new framework for understanding person and group perception: the Moral Primacy Model (MPM) of impression development. We conclude by discussing how the MPM relates to classic and emerging models of social cognition and by outlining a trajectory for future research.
This thesis aimed to investigate the role of minority stress (MS) and autistic community connectedness (ACC) on mental health (MH) and wellbeing in the autistic community. Multiple methods were used, across four studies. Study one consisted of a qualitative study using grounded theory tools to create a measure of ACC, as none existed. The findings indicated that ACC comprises three sub-domains – belongingness, social, and political connectedness. Stigma and identity both informed the level of ACC experienced by participants. In study two, a measure of ACC was created and validated in a new sample of autistic individuals (N = 133) using confirmatory factor analysis to test factor-structure and for item purification. Results indicated factorial, convergent and discriminant validity, for a 10-item scale. Studies three and four consisted of a cross-sectional and longitudinal survey where 195 autistic and 181 non-autistic people completed questionnaires at baseline and 99 autistic participants completed measures nine months later at follow-up. Resilience resources, ACC, MH and wellbeing, and MS were measured both times. Study three showed that the differences in MH, wellbeing, and resilience resources between the autistic and non-autistic sample persisted beyond demographics and general stress. Higher MS predicted lower MH and wellbeing, while ACC moderated the relationship between MS and MH, ameliorating the effects of MS. The longitudinal study (study four) showed that higher MS scores at baseline were associated with worse MH and wellbeing nine months later, while higher ACC was associated with better MH and wellbeing. The results suggest a model of ACC and MS whereby autistic people may experience differing levels of ACC depending on experiences of stigma and autistic identity. This ACC in turn moderates the impact of MS on MH. These findings and implications of the research are further integrated into autism, MS, MH, and community literature.
Processing fluency has been shown to be a flexible metacognitive cue for a range of judgements including truth, familiarity, and trust. Amongst these, affect judgements are of particular interest because 1) affect can be genuinely evoked by fluency, and 2) affect can be used as a cue for other judgements. However, there is disagreement about the pattern of affective responses arising from fluency. The hedonic marking hypothesis (Winkielman, Schwarz, Fazendeiro, & Reber, 2003) suggests that fluency is fundamentally positive, whilst the fluency amplification account (Albrecht & Carbon, 2014) suggests that the affective response can be positive or negative, depending on (and congruent with) the valence of the stimuli to which one is exposed. Whilst these accounts have been treated as competing explanations, this thesis argues that they both contribute to overall affective responses in a novel multi-source account. This thesis developed a novel set of business scenarios to manipulate fluency (using coherence) and valence (using risk). Evidence from three approaches is presented: 1) a meta-analysis examining affective responses to fluency, with a sample of 108 publications (k = 591 effect sizes), 2) five behavioural experiments, and 3) facial electromyography (fEMG). Across these approaches, neither hedonic marking nor fluency amplification in isolation could account for the full pattern of results. Instead, results were explained by the combined contribution of the two models, as predicted by the multi-source account. The unique findings were uncovered by manipulating stimulus valence, as well as separately measuring positive and negative affect, an approach not previously investigated in the literature, thereby adding methodological, as well as theoretical, contributions to the literature on fluency effects.
Implications for future research are to adopt the separate measurement approach in wider judgement domains, whilst practical implications for business assessment and agenda setting are also discussed.
When examining social targets, people may ask asymmetric questions, that is, questions for which “yes” and “no” answers are neither equally diagnostic nor equally frequent. The consequences of this information-gathering strategy on impression formation deserve empirical investigation. The present work explored the role played by the trade-off between the diagnosticity and frequency of answers that follow asymmetric questions. In Study 1, participants received answers to symmetric/asymmetric questions on an anonymous social target. In Study 2, participants read answers to a specific symmetric/asymmetric question provided by different group members. Overall, the results of both studies indicate that asymmetric questions had less impact on impressions than did symmetric questions, suggesting that individuals are more sensitive to data frequency than diagnosticity when forming impressions.
Peer exclusion is when a group of children exclude another child or reject his or her request to join them (Gazelle & Druhen, 2009). Peer exclusion affects the child's wellbeing and academic achievement. A number of studies have examined how children evaluate peer exclusion based on group membership, for example on the basis of gender and ethnicity, in the US and Europe. However, little work has been done in the Middle East. Moreover, no work has included parents alongside their children to test the relationship between parents' and children's judgments. This thesis examined how Saudi children and their mothers evaluate religion-based exclusion. Five studies were carried out to achieve the aim of this thesis. The main aim of these studies was to examine how Saudi children evaluate the exclusion of in-group members (Muslim, Sunni) and out-group members (Shia, non-Muslim) when the perpetrator of the exclusion was their father or their peers. In the first study, Saudi children (N= 92) residing in Saudi Arabia were interviewed. Children were more likely to accept exclusion of out-group members than in-group members. Also, they were more likely to accept exclusion when it was ordered by their father than if it was ordered by a group of peers. In the second study, mothers (N= 60) residing in Saudi Arabia and children were interviewed. There was a significant mother-child relationship only when discussing the exclusion of out-group members. In the third study, Saudi children residing in the UK were interviewed (N= 76) and the findings were similar to the first study; children were more likely to accept the exclusion of out-group members than in-group members and exclusion by their fathers than by peers. In the fourth study, Saudi mothers and children residing in the UK were interviewed. There was no significant mother-child relationship in the evaluation of religion-based exclusion.
The final study compared Saudi children and their mothers in Saudi Arabia with Saudi children and their mothers in the UK. Saudis in Saudi Arabia were more accepting of exclusion than Saudis in the UK. Children in Saudi Arabia and in the UK were more likely to accept exclusion than their mothers. Generally, children and their mothers in Saudi Arabia and in the UK were more likely to accept exclusion by the father than by their peers. In summary, the results of this thesis suggest that Saudi fathers play a vital role in affecting children's and mothers' attitudes. Mothers seem to hold more tolerant attitudes than their children. The findings are discussed in relation to Saudi culture and the literature on transmission of attitudes and intergroup contact.
In two studies, we investigated how people use base rates and the presence versus the absence of new information to judge which of two hypotheses is more likely. Participants were given problems based on two decks of cards printed with 0-4 letters. A table showed the relative frequencies of the letters on the cards within each deck. Participants were told the letters that were printed on or absent from a card the experimenter had drawn. Base rates were conveyed by telling participants that the experimenter had chosen the deck by drawing from an urn containing, in different proportions, tickets marked either 'deck 1' or 'deck 2'. The task was to judge from which of the two decks the card was most likely drawn. Prior probabilities and the evidential strength of the subset of present clues (computed as 'weight of evidence') were the only significant predictors of participants' dichotomous (both studies) and continuous (Study 2) judgments. The evidential strength of all clues was not a significant predictor of participants' judgments in either study, and no significant interactions emerged. We discuss the results as evidence for additive integration of base rates and the new present information in hypothesis testing.
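The "weight of evidence" used as a predictor above is the log-likelihood ratio of the observed clues under the two deck hypotheses; combined with the prior odds conveyed by the urn, it yields the normative posterior odds via Bayes' rule in odds form. A minimal sketch of that computation follows — the letter frequencies and urn proportions are illustrative placeholders, not the study's actual materials:

```python
import math

def weight_of_evidence(likelihoods_deck1, likelihoods_deck2):
    """Log10 likelihood ratio of the observed clues for deck 1 over deck 2,
    assuming the clues are conditionally independent given the deck."""
    ratio = 1.0
    for p1, p2 in zip(likelihoods_deck1, likelihoods_deck2):
        ratio *= p1 / p2
    return math.log10(ratio)

def posterior_odds(prior_odds, woe):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * 10 ** woe

# Illustrative example: the letter 'A' appears on 60% of deck-1 cards but only
# 20% of deck-2 cards, and the urn held 30 'deck 1' vs 70 'deck 2' tickets.
woe = weight_of_evidence([0.6], [0.2])  # log10(3), evidence favors deck 1
odds = posterior_odds(30 / 70, woe)     # (3/7) * 3 = 9/7, so deck 1 wins
print(f"posterior odds for deck 1: {odds:.2f}")
```

Under this sketch, a judge who integrates base rates and present clues additively in log-odds space, as the abstract suggests, would report odds close to 9/7 despite the prior favoring deck 2.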
Previous studies on hypothesis-testing behaviour have reported systematic preferences for posing positive questions (i.e., inquiries about features that are consistent with the truth of the hypothesis) and different types of asymmetric questions (i.e., questions where the hypothesis-confirming and the hypothesis-disconfirming responses have different evidential strength). Both tendencies can contribute - in some circumstances - to confirmation biases (i.e., the improper acceptance or maintenance of an incorrect hypothesis). The empirical support for asymmetric testing is, however, scarce and partly contradictory, and the relative strength of positive testing and asymmetric testing has not been empirically compared. In four studies where subjects were asked to select (Experiment 1) or evaluate (Experiments 2-4) questions for testing an abstract hypothesis, we orthogonally balanced the positivity/negativity of questions by their symmetry/asymmetry (Experiments 1-3), or by the type of asymmetry (confirmatory vs disconfirmatory; Experiment 4). In all experiments, participants strongly preferred positive to negative questions. Their choices were on the other hand mostly unaffected by symmetry and asymmetry in general, or - more specifically - by different types of asymmetry. Other results indicated that participants were sensitive to the diagnosticity of the questions (Experiments 1-3), and that they preferred testing features with a high probability under the focal hypothesis (Experiment 4). In the discussion we argue that recourse to asymmetric testing - observed in some previous studies using more contextualized problems - probably depends on context-related motivations and prior knowledge. In abstract tasks, where that knowledge is not available, simpler strategies - such as positive testing - prevail.
This study investigates the influence of verbal and non-verbal cues on people’s credibility judgements of fake Twitter profiles generated by an information hiding mobile app solely for transmitting secret messages. We tested the hypotheses that the trustworthiness conveyed by the profile picture, morality-related trait adjectives included in the profile summary and the profile owner’s gender would increase people’s credibility judgements of those fake Twitter profiles. Twenty-four participants assessed 16 fake profiles on their credibility. They also expressed their confidence in their credibility judgements and they answered an open-ended question on which parts of the profile influenced their credibility judgements. The results showed that overall participants did not trust the Twitter profiles. Furthermore, confidence judgements were higher when profiles included competence-related traits in the profile summaries. Verbal rather than non-verbal cues had thus more influence on participants’ judgements. The open-ended responses revealed a large reliance on the content of the profile, which is what the mobile app relies on. We discussed these findings in light of the relative lack of credibility of the profiles generated by the mobile app. The new insights can help improve designs of systems depending on automated social media accounts and will provide useful clues about other applications where cognitive computing plays a role.
Human cognitive modeling techniques and related software tools have been widely used by researchers and practitioners to evaluate the effectiveness of user interface (UI) designs and related human performance. However, they are rarely used in the cyber security field despite the fact that human factors have been recognized as a key element for cyber security systems. For a cyber security system involving a relatively complicated UI, it could be difficult to build a cognitive model that accurately captures the different cognitive tasks involved in all user interactions. Using a moderately complicated user authentication system as an example system and CogTool as a typical cognitive modeling tool, this paper aims to provide insights into the use of eye-tracking data for facilitating human cognitive modeling of cognitive tasks more effectively and accurately. We used visual scan paths extracted from an eye-tracking user study to facilitate the design of cognitive modeling tasks. This allowed us to reproduce some insecure human behavioral patterns observed in some previous lab-based user studies on the same system, and more importantly, we also found some unexpected new results about human behavior. The comparison between human cognitive models with and without eye-tracking data suggests that eye-tracking data can provide useful information to facilitate the process of human cognitive modeling as well as to achieve a better understanding of security-related human behaviors. In addition, our results demonstrated that cyber security research can benefit from a combination of eye-tracking and cognitive modeling to study human behavior related security problems.
Despite all the information about the risks, many people still smoke. Several studies have investigated risk perceptions in smokers. Adequate perception of the risks of smoking is particularly important, and this study investigated the risk perceptions of young smokers versus non-smokers using a new time-estimation task in which we required participants (smokers and non-smokers) to estimate the onset time of smoking-related conditions in an average young smoker. The findings supported our main hypothesis that smokers, compared to non-smokers, postponed the onset of both mild and severe smoking-related conditions. The results also revealed that the onset time estimates for mild conditions given by both smokers and non-smokers were associated with their self-perceptions of risk and level of fear of developing smoking-related conditions. The findings cast light on smokers’ distorted temporal perception of the health-damaging consequences of smoking. Implications for the adequacy of risk perception in smokers are discussed.
Proactive password checkers (PPCs) have been widely used to persuade users to select stronger passwords by providing machine-generated strength ratings of passwords. If such ratings do not match the ratings human users themselves would give, there can be a loss of trust in PPCs. In order to study the effectiveness of PPCs, it would be useful to investigate how human users perceive such machine- and human-generated ratings in terms of their trust, which has been rarely studied in the literature. To fill this gap, we report a large-scale crowdsourcing study with over 1,000 workers. The participants were asked to choose which of the two ratings they trusted more. The passwords were selected based on a survey of over 100 human password experts. The results revealed that participants exhibited four distinct behavioral patterns when the passwords were hidden, and many changed their behaviors significantly after the passwords were disclosed, suggesting their reported trust was influenced by their own judgments.
Three experiments examined how people gather information on in-group and out-group members. Previous studies have revealed that category-based expectancies bias the hypothesis-testing process towards confirmation through the use of asymmetric-confirming questions (which are queries where the replies supporting the prior expectancies are more informative than those falsifying them). However, to date there is no empirical investigation of the use of such a question-asking strategy in an intergroup context. In the present studies, participants were asked to produce (Study 1) or to choose (Studies 2 and 3) questions in order to investigate the presence of various traits in an in-group or an out-group member. Traits were manipulated by valence and typicality. The results revealed that category-based expectancies do not always lead to asymmetric-confirming testing: whereas participants tended to ask questions that confirmed positive in-group and negative out-group stereotypical attributes, they used a more symmetric strategy when testing for the presence of negative in-group or positive out-group traits. Moreover, Study 3 also revealed a moderation effect of in-group identification. The findings point to the role played by motivational factors associated with preserving a positive social identity. Possible consequences of these hypothesis-testing processes in preserving a positive social identity for intergroup relations are discussed.
CONTEXT: Research on decision making suggests that a wide range of spontaneous processes may influence medical judgment. OBJECTIVES: We considered an easily accessible strategy, anchoring and insufficient adjustment, which might contribute to health care professionals' miscalibration of patients' pain. METHODS: A sample (n=423) of physicians, nurses, medical students, and nursing students participated in a computerized task that showed 16 vignettes featuring fictitious patients reporting headache. In the experimental condition, participants were asked to evaluate the severity of the patient's pain before and after knowing the patient's rating. In the control condition, participants were shown all information about the patient at the same time and were required to make judgments in a single stage. RESULTS: When participants could express an initial impression before knowing the patient's rating, they fully anchored to their initial impressions in almost half of the responses. Moreover, even among those who revised their initial impression, the extent of the revision was often insufficient. Greater anchoring was associated with patients' ratings that were higher than the participants' initial impression. Finally, we provided evidence that anchoring increased pain miscalibration. We discuss our findings in terms of their contribution to the understanding of the cognitive processes involved in pain assessment. CONCLUSION: When estimating patients' pain intensity, observers are driven by anchoring, a rule of thumb that might have pernicious consequences in terms of unwarranted overreliance on initial impressions and insufficient revision in light of relevant disconfirming evidence. Taking this heuristic into account might foster accurate pain assessment and treatment.
Two experiments investigated whether dealing with a homogeneous subset of syllogisms with time-constrained responses encouraged participants to develop and use heuristics for abstract (Experiment 1) and thematic (Experiment 2) syllogisms. An atmosphere-based heuristic accounted for most responses with both abstract and thematic syllogisms. With thematic syllogisms, a weaker effect of a belief heuristic was also observed, mainly where the correct response was inconsistent with the atmosphere of the premises. Analytic processes appear to have played little role in the time-constrained condition, whereas their involvement increased in a self-paced, unconstrained condition. From a dual-process perspective, the results further specify how task demands affect the recruitment of heuristic and analytic systems of reasoning. Because the syllogisms and experimental procedure were the same as those used in a previous neuroimaging study by Goel, Buchel, Frith, and Dolan (2000), the results also deepen our understanding of the cognitive processes investigated by that study.
Trait inference in person perception is based on observers' implicit assumptions about the relations between trait adjectives (e.g., fair) and the either consistent or inconsistent behaviors (e.g., having double standards) that an actor can manifest. This article presents new empirical data and theoretical interpretations on people's behavioral expectations, that is, people's perceived trait-behavior relations along the morality (versus competence) dimension. We specifically address the issue of the moderate levels of both traits and behaviors almost neglected by prior research by using a measure of the perceived general frequency of behaviors. A preliminary study identifies a set of competence- and morality-related traits and a subset of traits balanced for valence. Studies 1-2 show that moral target persons are associated with greater behavioral flexibility than immoral ones where abstract categories of behaviors are concerned. For example, participants judge it more likely that a fair person would behave unfairly than an unfair person would behave fairly. Study 3 replicates the results of the first 2 studies using concrete categories of behaviors (e.g., telling the truth/omitting some information). Study 4 shows that the positive asymmetry in morality-related trait-behavior relations holds for both North American and European (i.e., Italian) individuals. A small-scale meta-analysis confirms the existence of a positive asymmetry in trait-behavior relations along both morality and competence dimensions for moderate levels of both traits and behaviors. We discuss these findings in relation to prior models and results on trait-behavior relations and we advance a motivational explanation based on self-protection.
The Attentional Blink (AB) is a temporary deficit for a second target (T2) when that target appears after a first target (T1). Although sophisticated models have been developed to explain the substantial AB literature in isolation, the current study considers how the AB relates to perceptual dynamics more broadly. We show that the time-course of the AB is closely related to the time course of the transition from positive to negative repetition priming effects in perceptual identification. Many AB tasks involve a switch between a T1 defined in one manner and a T2 defined in a different manner. Other AB tasks are non-switching, with all targets belonging to the same well-known category (e.g., letter targets versus number distractors) or sharing the same perceptual feature. We propose that these non-switching AB tasks reflect perceptual habituation for the target-defining attribute; thus, a ‘perceptual wink’, with perception of one attribute (target identity) undisturbed while perception of another (target detection) is impaired. On this account, the immediate benefit following T1 (lag-1 sparing) reflects positive repetition priming and the subsequent deficit (the blink) reflects negative repetition priming for the realization that a target occurred. In developing the perceptual wink model, we extended the nROUSE model of perceptual priming to explain the results of two new experiments combining the AB and identity repetitions. This establishes important connections between non-switching AB tasks and perceptual dynamics.
Morality, which refers to characteristics such as trustworthiness and honesty, has a primary role in social perception and judgment. A negativity effect characterizes the morality dimension, whereby negative information is weighed more than positive information in trait attribution and impression formation. This article reviews the literature on the negativity effect in trait attribution and impression formation. We examine the main boundary conditions of the negativity effect by considering relevant moderators such as behavior consistency and evaluative extremity, level of categorization, and measurement type as well as some theoretical and empirical inconsistencies in the literature. We also review recent studies showing that social perceivers hold negative assumptions about people’s morality. We outline future directions for research on the negativity effect that should consider trait extremity, use alternative measures to the perceived frequency of behaviors, introduce more precise definitions of relevant constructs such as diagnosticity, and test different schemata of trait-behavior relations.
Research has shown that warmth and competence are core dimensions on which perceivers judge others and that warmth has a primary role at various phases of impression formation. Three studies explored whether the two components of warmth (i.e., sociability and morality) have distinct roles in predicting the global impression of social groups. In Study 1 (N= 105) and Study 2 (N= 112), participants read an immigration scenario depicting an unfamiliar social group in terms of high (vs. low) morality, sociability, and competence. In both studies, participants were asked to report their global impression of the group. Results showed that global evaluations were better predicted by morality than by sociability or competence-trait ascriptions. Study 3 (N= 86) further showed that the effect of moral traits on group global evaluations was mediated by the perception of threat. The importance of these findings for the impression-formation process is discussed.
Two experiments examined how people perceive the diagnosticity of different answers ("yes" and "no") to the same question. We manipulated whether the "yes" and the "no" answers conveyed the same amount of information or not, as well as the presentation format of the probabilities of the features inquired about. In Experiment 1, participants were presented with only the percentages of occurrence of the features, which most straightforwardly apply to the diagnosticity of "yes" answers. In Experiment 2, participants additionally received the percentages of absence of the features, which serve to assess the diagnosticity of "no" answers. Consistent with previous studies, we found that participants underestimated the difference in diagnosticity conveyed by different answers to the same question. However, participants' insensitivity was greater when the normative (Bayesian) diagnosticity of the "no" answer was higher than that of the "yes" answer. We also found oversensitivity to answer diagnosticity, whereby participants treated as differentially diagnostic two answers that were normatively equal in diagnosticity. Presenting participants with the percentages of occurrence of the features inquired about together with their complements increased their sensitivity to the diagnosticity of answers. We discuss the implications of these findings for confirmation bias in hypothesis testing. © 2013 The Experimental Psychology Society.
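The asymmetry between "yes" and "no" answers at issue here can be made concrete with the normative (Bayesian) likelihood-ratio definition of answer diagnosticity. The sketch below uses illustrative feature percentages, not values taken from the experiments themselves.

```python
def answer_diagnosticity(p_feature_h1, p_feature_h2):
    """Likelihood-ratio diagnosticity of the "yes" and "no" answers to the
    question "Does the target have this feature?", given the feature's
    probability of occurrence under each of two hypotheses H1 and H2."""
    yes = p_feature_h1 / p_feature_h2             # P(yes|H1) / P(yes|H2)
    no = (1 - p_feature_h2) / (1 - p_feature_h1)  # P(no|H2) / P(no|H1)
    return yes, no

# Illustrative question: the feature occurs in 90% of H1 members
# and 50% of H2 members. Note that the "no" percentages (the complements)
# are what determine the diagnosticity of the "no" answer.
yes_diag, no_diag = answer_diagnosticity(0.90, 0.50)
print(round(yes_diag, 3), round(no_diag, 3))  # 1.8 5.0
```

In this illustrative case the same question yields a mildly diagnostic "yes" answer but a far more diagnostic "no" answer, which is exactly the kind of asymmetry the experiments show participants to be insensitive to when only occurrence percentages are displayed.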
In three studies, we investigated whether and to what extent the evaluation of two mutually exclusive hypotheses is affected by a feature-positive effect, whereby present clues are weighted more than absent clues. Participants (N = 126) were presented with abstract problems concerning the most likely provenance of a card drawn from one of two decks. We varied the correct response (the hypothesis favored by the consideration of all clues) and the ratio of present to absent features in each set of observations. Furthermore, across the studies, we manipulated the presentation format of the features' probabilities by providing the probability distributions of occurrences (Study 1), non-occurrences (Study 3), or both (Study 2). In all studies, both participants' preferences and their accuracy were mostly determined by an over-reliance on present features. Moreover, across participants, both confidence in the responses and the informativeness of the present clues correlated positively with the number of responses given in line with an exclusive consideration of present features. These results were largely independent of both the rarity of the absent clues and the presentation format. We conclude that the feature-positive effect influences hypothesis evaluation, and we discuss the implications for confirmation bias.
Previous studies have indicated that high-status people are prone to use leading questions during interpersonal interactions. The present study (N = 254) investigated whether status asymmetry between high- and low-status individuals biases social hypothesis testing toward asymmetric questions, namely queries for which the "yes" and the "no" answers are not equally diagnostic. To this purpose, after their status was manipulated (supervisor vs. subordinate), participants were asked to choose questions to investigate the presence of attributes (positive or negative) in a social target. The results showed that higher-status individuals were more likely than lower-status individuals to adopt an asymmetric confirming strategy during social hypothesis testing. The potential applications of this research are discussed.
This article presents two experiments investigating the adoption of a graduated measure to describe credibility attribution by observers who evaluate patients' pain accounts. A total of 160 medical students were asked to express a credibility judgment on the pain intensity level reported by hypothetical patients. We used 16 vignettes based on a mixed factorial design. Within-participants factors were the reported pain, the presence of a physical sign, the patient's facial expression, and the patient's gender; between-groups factors were the patient's age and the geographical distribution of the patient's name. Results confirm the well-established tendency not to believe patients' self-reports and provide information regarding the evaluators' uncertainty. The findings suggest that a graduated measure is useful for assessing observers' degree of uncertainty and the subtle effects of different factors on judgments of patients' pain.
Research on the two fundamental dimensions of social judgment, namely warmth and competence, has shown that warmth has a primary and dominant role in information gathering about others. In two studies, we examined whether the sociability and morality components of warmth play distinct roles in this process. Study 1 (N = 60) investigated which traits were most often selected when forming impressions of others. The results showed that, regardless of the task goal, traits related to morality and sociability were processed differently. Furthermore, participants were more interested in obtaining information about morality than about sociability when asked to form a global impression of others. Study 2 (N = 98) explored the adoption of asymmetric/symmetric strategies when asking questions to make inferences about others. As predicted, participants adopted an asymmetrically disconfirming strategy for morality traits, while they looked for more symmetrical evidence on sociability or competence traits. Overall, our findings indicate a distinct and dominant role of the moral component of warmth in the information-gathering process. Copyright © 2010 John Wiley & Sons, Ltd.
Three studies using abstract materials tested possible moderators of the feature-positive effect in hypothesis evaluation, whereby people rely on the presence of features more than on their absence when judging which of two competing hypotheses is more likely. Drawing on a distinction made in visual perception research, we tested whether the feature-positive effect emerges both with nonsubstitutive features, which can be removed without being replaced by other features, and with substitutive features, whose absence implies the presence of other features (e.g., the colour red, the absence of which entails the presence of another colour). Furthermore, we tested whether presenting participants with both the clue occurrence probabilities (which are needed to consider clue presence) and their complements (which are needed to gauge the impact of absent clues) decreased the feature-positive effect. The results showed that, regardless of the type of feature (i.e., nonsubstitutive vs. substitutive), participants provided more responses consistent with an evaluation of the subset of present clues than all other kinds of responses. However, the use of substitutive features combined with an explicit presentation format of the probabilistic information had a debiasing effect. Furthermore, the use of substitutive features negated participants' sensitivity to the rarity of clues, whereby the feature-positive effect decreased when there was one absent clue and two present clues in problems for which the exclusive consideration of present features did not suggest the correct response.
Evidence evaluation is a crucial process in many human activities, spanning from medical diagnosis to impression formation. The present experiments investigated which, if any, normative model best conforms to people’s intuitions about the value of obtained evidence. Psychologists, epistemologists, and philosophers of science have proposed several models to account for people’s intuitions about the utility of obtained evidence with respect either to a focal hypothesis or to a constellation of hypotheses. We pitted against each other the so-called optimal-experimental-design models (i.e., Bayesian diagnosticity, log10 diagnosticity, information gain, Kullback-Leibler distance, probability gain, and impact) and measures L and Z, comparing their ability to describe human intuitions about the value of obtained evidence. Participants received words-and-numbers scenarios concerning two hypotheses and binary features. They were asked to evaluate the utility of “yes” and “no” answers to questions about features possessed in different proportions (i.e., the likelihoods) by two types of extraterrestrial creatures (corresponding to two mutually exclusive and exhaustive hypotheses). Participants evaluated either how helpful an answer was or how much an answer decreased/increased their beliefs with respect either to a single hypothesis or to both hypotheses. We fitted mixed-effects models and used Akaike information criterion (AIC) and Bayesian information criterion (BIC) values to compare the competing models of the value of obtained evidence. Overall, the experiments showed that measure Z was the best-fitting model of participants’ judgments of the value of obtained answers. We discuss the implications for the human hypothesis-evaluation process.
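For readers unfamiliar with the optimal-experimental-design measures named above, the sketch below computes several of them for a single binary datum and two mutually exclusive, exhaustive hypotheses, following their standard definitions in the hypothesis-testing literature. The numeric likelihoods are illustrative only, and measures L and Z are omitted.

```python
import math

def oed_measures(prior_h1, p_d_h1, p_d_h2):
    """Value of observing datum D under two exhaustive hypotheses H1, H2.
    prior_h1 = P(H1); p_d_h1 = P(D|H1); p_d_h2 = P(D|H2)."""
    prior = (prior_h1, 1 - prior_h1)
    p_d = p_d_h1 * prior[0] + p_d_h2 * prior[1]                # P(D)
    post = (p_d_h1 * prior[0] / p_d, p_d_h2 * prior[1] / p_d)  # Bayes' rule

    entropy = lambda dist: -sum(p * math.log2(p) for p in dist if p > 0)
    return {
        "bayesian_diagnosticity": max(p_d_h1 / p_d_h2, p_d_h2 / p_d_h1),
        "log10_diagnosticity": abs(math.log10(p_d_h1 / p_d_h2)),
        "information_gain": entropy(prior) - entropy(post),
        "kl_distance": sum(q * math.log2(q / p)
                           for q, p in zip(post, prior) if q > 0),
        "probability_gain": max(post) - max(prior),
        "impact": abs(post[0] - prior[0]),
    }

# Illustrative case: uniform prior; the feature occurs in 80% of H1
# creatures and 20% of H2 creatures, and a "yes" answer is observed.
m = oed_measures(0.5, 0.8, 0.2)
print(round(m["bayesian_diagnosticity"], 3))  # 4.0
print(round(m["probability_gain"], 3))        # 0.3
```

Note how the measures can disagree in general: diagnosticity depends only on the likelihood ratio, whereas information gain, probability gain, and impact also depend on the prior, which is one reason it is an empirical question which model best describes human intuitions.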
This article examines individuals' expectations in a social hypothesis testing task. Participants selected questions from a list to investigate the presence of personality traits in a target individual. They also identified the responses that they expected to receive and the likelihood of the expected responses. The results of two studies indicated that when people asked questions inquiring about the hypothesized traits that did not entail strong a priori beliefs, they expected to find evidence confirming the hypothesis under investigation. These confirming expectations were more pronounced for symmetric questions, in which the diagnosticity and frequency of the expected evidence did not conflict. When the search for information was asymmetric, confirming expectations were diminished, likely as a consequence of either the rareness or low diagnosticity of the hypothesis-confirming outcome. We also discuss the implications of these findings for confirmation bias.
Moral identity, which is based on moral concerns, is one of the many identities an individual may have. In the literature spanning the 1980s to the present, including the work of Blasi, a prominent researcher on moral identity, and of Aquino and Reed, who developed their widely used moral identity scale in 2002, there has been a persistent assumption that fairness and caring, the individualising moral foundations, comprise the entire contents of moral identity. However, it is well documented that broader cultural differences have a clear effect on individuals, as cultures vary in the degree to which their norms, values, and beliefs influence individual identities. Despite this, no published studies have explored moral identity with respect to culture. In this thesis, I therefore argue that culture influences people’s moral identity, and that we should expect more moral variation between people across different cultures. I aimed to develop an understanding of the importance of cultural influence on moral identity in two cultural contexts, those of Britain and Saudi Arabia. In Study 1 (n = 160), I employed the prototype approach; the results show that traits related to fairness/reciprocity and care/harm were prototypical of the concept of a moral person among both the British and the Saudi participants, whereas respect, as well as traits related to religiousness, was prototypical of the concept of a moral person only in the Saudi sample. In Study 2 (n = 539), participants from each culture were randomly assigned to one of six conditions in which they completed moral identity measures. In each condition, participants were presented either with a person characterised by the exact moral traits listed in Aquino and Reed’s (2002) moral identity scale, or with a person characterised by moral traits representing one of the five moral foundations.
In each condition, the moral traits considered important in the participants’ own culture were also examined. The results showed large differences between the British and Saudi samples with regard to three moral foundations: in-group/loyalty, authority/respect, and purity/sanctity, all of which relate to binding concerns. These differences were mediated by the perceived cultural importance of these traits in each sample, particularly the binding traits. In Study 3 (n = 938), I developed a novel moral identity scale and tested its reliability and validity, with the aim of overcoming the shortcomings of previous moral identity scales, particularly their neglect of cultural variation in morality. Finally, in Study 4 (n = 496), I addressed the assumption in the literature that moral identity based on the individualising moral foundations (particularly caring and fairness) always has pro-social implications. I argued that when we expand our understanding of moral identity to include the long-overlooked binding moral foundations (e.g., authority, purity, in-group loyalty), moral identity may relate to negative attitudes toward out-groups. The results supported the idea that we should not take for granted that moral identity contributes to a reduction in prejudice. The results also indicated that the new moral identity scale is better than Aquino and Reed’s (2002) scale at predicting prejudiced attitudes. Overall, this thesis demonstrates that the contents of moral identity are more diverse than has been assumed in moral identity research. In addition, the results indicate a need to be mindful of an often neglected dark side to moral identity, specifically when we, as researchers, recognise and include various moral concerns in the conceptualisation and measurement of moral identity.