“Second guessing yourself all the time about what they really mean…”: Cognitive differences between autistic and non‐autistic adults in understanding implied meaning

This study investigated cognitive differences between autistic and non‐autistic people in understanding implied meaning in conversation using a novel computerized test, the Implicature Comprehension Test. Controlling for core language ability, autistic participants (N = 66) were over twice as likely to endorse a non‐normative interpretation of an implied meaning and over five times as likely to select “do not know” when asked about the presence of an implied meaning, compared to non‐autistic participants (N = 118). A further experiment suggested that the selection of “do not know” reflected a cognitive preference for certainty and explicit communication, and that the normative inference could often be made when the test format was more constrained. Our research supports the hypothesis that autistic individuals can find it challenging to process language in its pragmatic context, and that cognitive preferences play a role in this.


Introduction
Social communication difficulties are a diagnostic feature of autism (American Psychiatric Association, 2013). These difficulties are sometimes thought to be due to a core impairment in pragmatics (e.g., Baron-Cohen, 1988; Rapin & Dunn, 2003). Pragmatics refers to the role of context in shaping communicative meaning, such as when we interpret beyond the literal content of an utterance to infer what a person means in context (Ariel, 2010). Drawing inferences about linguistic meaning has been identified as an area of difficulty in autism (see Loukusa & Moilanen, 2009), and qualitative reports suggest that autistic people tend to use and understand language more literally than their non-autistic peers (Hobson, 2012). Similarly, a meta-analysis showed that interpreting nonliteral language, such as metaphor, irony and jokes, poses challenges for autistic people, although this depends perhaps most importantly on an individual's core language ability, that is, their vocabulary and grammar skills (Kalandadze, Norbury, Naerland, & Naess, 2018). This last point highlights a key debate in the literature surrounding difficulties with inferences and non-literal language in autism: some studies have attributed these pragmatic difficulties to co-occurring language impairments (Lucas & Norbury, 2014; Norbury & Nation, 2011), whereas others support the idea that autistic people experience challenges with inferences that go above and beyond any difficulties with core language skills (Norbury & Bishop, 2002; Jolliffe & Baron-Cohen, 1999, 2000). The present research aimed to clarify some of these uncertainties by investigating differences in the processing of implied meaning between autistic and non-autistic people, with a focus on a novel task, the Implicature Comprehension Test (ICT; Wilson & Bishop, 2019).
The ICT is an assessment of particularized conversational implicature; that is, implied meaning that depends on the specific context to be understood. As an example: imagine you asked a friend if they have been to the new café, and they said "oh the cakes there are good." You might infer that, yes, your friend has been to the café, even though they did not directly say it. In particularized conversational implicature, such as this example, the listener uses the constraints of the communication scenario (i.e., that a question has been asked) and their expectations about what a relevant contribution would sound like to understand the intended meaning (see Relevance Theory for a theoretical background to implicature; Sperber & Wilson, 1986). Previous research has investigated some forms of implicature in autism, especially scalar implicature, such as when we infer from the sentence "Some of the children were well-behaved" that not all of them were. Autistic people do not seem to have specific difficulty with this type of implicature (Chevallier et al., 2010; Pijnacker et al., 2009), and we suggest that this lack of a relationship with autism is because such meaning is not heavily dependent on the context (i.e., with this sort of language, we can simply recall the rule that "some" equals "not all"). Particularized conversational implicature, on the other hand, depends on the context to be understood, and so this sort of implicature may be more challenging for autistic people if they are sometimes less sensitive to context (Frith, 1989). In the present paper, we test the hypothesis that autistic individuals would have lower implicature scores on the ICT, even controlling for core language ability (i.e., grammar and vocabulary skills). Following this hypothesis-testing, we present some exploratory analysis comparing two different response formats on the test to tease out the cognitive differences that might underpin performance on the test.

METHODS
This analysis is part of a larger project, which was preregistered on the Open Science Framework (OSF) as "Identifying pragmatic difficulties in autistic adults" (https://osf.io/g4bvm). We only report on measures relevant to this paper, but a full description of our protocol can be seen on OSF. Ethical review for this study occurred in two stages, separately for the autistic and non-autistic participants, as the groups were recruited sequentially. The first stage of the study was granted ethical clearance in July 2018 (Refs. R57087/RE002) and the second in November 2018 (R59912/RE002) by the Medical Science Interdivisional Research Ethics Committee at Oxford University.

Participants
We recruited autistic adults through support and social groups, both organized independently and by the National Autistic Society (NAS), as well as through Autistica, a research-focused charity in the UK. Inclusion criteria for individuals giving informed consent to participate included: (1) an autism spectrum diagnosis by a clinical service, (2) native-level fluency in English, and (3) age of 18 years or over. Exclusion criteria included: (1) significant visual or hearing impairment, (2) history of neurological illness or head injury, and (3) no access to a computer with internet access and audio. Individuals were invited to participate regardless of other diagnoses, including ADHD, genetic syndromes or learning disabilities. There was no stratification by sex in the sampling process, but we checked for sex differences in the measures below. We aimed to recruit at least 50 autistic individuals, in line with the power calculation described on OSF, but rather than prescribe a set sample size, we set a specific date as the stopping rule for recruitment: the study opened in 2019 and all individuals expressing interest in participating by March 31, 2019 were invited to do so.
The comparison sample of 120 non-autistic adults was recruited online via the participant platform, Prolific (https://www.prolific.co/). They fulfilled similar inclusion criteria, with the exception of having no autism diagnosis. This is the same sample as recruited in Wilson and Bishop (2019). Average age of the non-autistic participants was 30 years; 11 months (SD = 11 years; 3 months, min = 18 years, max = 64 years). A total of 65 identified as women, 54 as men, and one person did not declare their gender. The majority of the sample described themselves as White (103 out of 120); four people identified as Mixed Race, four as Black, and eight as Asian. Thirty-four people said they were currently students. Of the 86 individuals who reported not being students, highest level of education was given as high school/secondary school for 18 individuals, vocational training/college courses for 13, bachelor's degree for 53, and a higher degree for nine.
The autistic sample comprised 71 people. Of 80 individuals recruited into the study, 74 completed the battery of cognitive tasks which forms the basis of the analysis here, and of these 74 people, 71 reported a clinical diagnosis by appropriately trained professionals (clinical psychologists, psychiatrists, and specialist nurse practitioners trained in autism diagnosis), mostly as part of multidisciplinary teams (MDTs) in National Health Service (NHS) settings. Many individuals were verbally able, reflecting what was described in DSM-IV as an Asperger's type presentation; when asked to indicate the term that had been used for their diagnosis, 42 individuals (59%) indicated that either "Asperger's Syndrome" had been formally used, or language such as "Autism Spectrum Condition, previously termed Asperger's" had been used. Forty-five individuals identified as female, 25 as male and one as non-binary. Average age was 38 years (SD = 14 years, min = 18 years, max = 70 years). Approximate average age at diagnosis was 31 years (SD = 18 years; 50 individuals were diagnosed as adults). The sample was unusual among samples in autism research for having a high proportion of female participants and individuals diagnosed in adulthood. As part of the research protocol, the majority of participants (n = 65) took part in an ADOS-2 assessment as noted below. Fifty-five people scored above the ADOS-2 "autism spectrum" cut-off. Of the 10 who did not, all were female, which speaks to the notion that autistic females may mask some of their difficulties, especially in "safe" one-to-one situations such as during an ADOS-2 evaluation (Lai et al., 2017). Other neurodevelopmental diagnoses reported by participants included: dyslexia (n = 8), language disorder (n = 3), dyspraxia/developmental coordination disorder (n = 9), and ADHD (n = 7). All autistic participants identified as White, except one person who identified as Asian. Fifteen people said they were currently students.
Of the 56 individuals who reported not being students, highest level of education was given as at least some high school/secondary school for eight individuals, vocational training/college courses for 16, bachelor's degree for 16, and a higher degree for 15 (one person gave no response to this question).

Procedure
For autistic participants, the study involved three sections, the first two of which took place online. In the first section, individuals were asked to complete questionnaire measures. In the second section, a set of seven cognitive tests were administered online at a time and place of the participant's choosing. This cognitive battery included the ICT. The tasks were supported by Gorilla, the online tool for behavioral experiments (https://gorilla.sc/). In the third section, participants were either invited to the Department of Experimental Psychology, University of Oxford, or visited at home, for an in-person communication assessment using Module 4 of the diagnostic instrument, the Autism Diagnostic Observation Schedule Second Edition (ADOS-2; Lord et al., 2012), and also completed some further cognitive tasks under supervision, including the ICT-Adapted. Individuals were compensated with a £20 voucher as a thank you for their participation.
Non-autistic participants only completed the second section of the study, detailed above. They were compensated £5.

Measures
Implicature comprehension test (ICT; Wilson & Bishop, 2019). Participants watch a sequence of 55 videos, each consisting of a conversational adjacency pair, with two characters each producing a short utterance, followed by a comprehension question eliciting a response of "yes," "no" or "do not know." For 36 items, participants need to process implicature to answer the question. In half the items "yes" is the correct answer, and in half "no" is. By "correct," we mean the answer that is chosen by the most people and/or correlates with total scores on the measure, suggesting that it taps a latent ability in inferring meaning. Full reliability analysis is presented in Wilson and Bishop (2019), and Cronbach's alpha for implicature total score was 0.89 in the present sample. There are also 10 items where the answer is more explicit; these serve as positive control items. The main measured variable was the sum of implicature items correctly answered (out of 36), and there was also a control variable, the sum of explicit-response items correctly answered (out of 10).
Synonyms test (Wilson & Bishop, 2019). Participants are presented with a written target word on the screen, and select a synonym from a choice of four words; there are 25 trials. This is a timed task, with up to 12 s per item. There was one measured variable: the sum of items correctly answered (out of 25). Cronbach's alpha for the total score was .81 in the present sample.
Grammaticality decision test (Wilson & Bishop, 2019). Participants listen to 44 sentences and decide if they are well-formed and grammatical; there is a 6-s limit to listen and respond. There was one measured variable: the sum of items correctly answered (out of 44). Cronbach's alpha for the total score was 0.94 in the present sample.
Implicature comprehension test-Adapted version (ICT-Adapted). This test was structured like the ICT, with the exception that there was no "do not know" option. There are different reasons that a person may choose "do not know" when given the chance: perhaps they simply do not know and would be guessing otherwise, or perhaps they do not feel confident in their response but would tend to arrive at the same response as other people if constrained to give a response. This adapted version of the test was devised to tease out these two alternatives. There was one measured variable: the sum of implicature items correctly answered (out of 36). This task was only completed by autistic participants, and was carried out during the third part of the research protocol with a researcher present. Cronbach's alpha for implicature total score was .85 in the present sample.

Data analysis
Data were analyzed in R (R Core Team, 2017). R packages knitr (Xie, 2017), papaja (Aust & Barth, 2018), and htmlTable (Gordon, Gragg, & Konings, 2018) were used to produce the Rmarkdown report for this work. Data and scripts are accessible on OSF (https://osf.io/mhfet/). Scores on the vocabulary and grammar tasks were processed using principal components analysis (PCA) with the prcomp() function. The first principal component was extracted as a measure of "core language ability"; this showed a correlation of .85 with the vocabulary and grammar variables. As nine participants had outlying scores on the grammar task, they were excluded at this stage so that their scores would not influence the PCA. For these participants, a predicted score for "core language ability" was assigned based on linear regression with their vocabulary score as predictor. Finally, to facilitate interpretation of results, core language scores were standardized to have a mean of 0 and SD of 1 in the sample.
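The core-language composite can be sketched as follows. This is an illustrative Python approximation, not the authors' R code (which used prcomp()): for two positively correlated measures, the first principal component is proportional to the mean of the two z-scores, which we then restandardize to mean 0 and SD 1. The data values are hypothetical.

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize a list of scores to mean 0, SD 1 (population SD)."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def core_language(vocab, grammar):
    """Approximate the first principal component of two measures.
    For two positively correlated variables, PC1 is proportional to
    the mean of their z-scores; restandardizing gives mean 0, SD 1."""
    composite = [(v + g) / 2 for v, g in zip(zscores(vocab), zscores(grammar))]
    return zscores(composite)

# Hypothetical raw scores for six participants.
vocab = [20, 22, 18, 25, 15, 23]
grammar = [40, 41, 35, 44, 30, 42]
core = core_language(vocab, grammar)
```

The restandardization mirrors the paper's final step of scaling core language scores to mean 0 and SD 1 in the sample.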
We identified participants with outlying scores on either the ICT (implicature or control items) or the core language factor. An outlying score was at least 2.2 times the interquartile range below the lower quartile (Hoaglin & Iglewicz, 1987). Participants with scores in this range are likely to have either not engaged well with our tasks or have weak overall language ability. We ran our main analyses with and without these individuals to test the sensitivity of our results to outliers.
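The outlier rule described above can be sketched in Python (illustrative only; the study's analysis was in R, and the stdlib's default exclusive quartile method may differ slightly from R's default):

```python
from statistics import quantiles

def lower_fence(scores, k=2.2):
    """Hoaglin & Iglewicz lower fence: Q1 - k * IQR.
    Scores strictly below this value are flagged as outliers."""
    q1, _, q3 = quantiles(scores, n=4)  # exclusive quartiles (stdlib default)
    iqr = q3 - q1
    return q1 - k * iqr

# Toy scores with one clearly low value.
scores = [30, 31, 32, 33, 34, 35, 36, 10]
fence = lower_fence(scores)
outliers = [s for s in scores if s < fence]
```

Only the low tail is fenced here, matching the paper's one-sided rule (outliers are scores well below the lower quartile, not above the upper one).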
We tested whether the autistic and non-autistic groups differed in their tendency to select incorrect and "do not know" compared to correct responses on the ICT, when also accounting for core language ability. We ran a multinomial mixed effects model using the MCMCglmm package (Hadfield, 2010) with group membership (autistic or non-autistic) and core language ability as fixed effects, and participant and item as random effects with slopes and intercepts. The model was fitted using Markov chain Monte Carlo (MCMC) sampling across 3500 samples. It estimated two logit equations, one comparing the probability of a correct to an incorrect response, and a second comparing a correct to a "do not know" response, and assigned predicted log-odds to each comparison. We report odds ratios with 95% credible intervals (and approximate p-values calculated through the MCMC approach) for each fixed effect.
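The reporting convention here converts each log-odds coefficient (and its credible interval) to an odds ratio by exponentiation. A minimal Python sketch, with hypothetical log-odds values chosen to roughly reproduce the group effect reported later for the incorrect-vs-correct comparison (2.56 [1.76, 3.77]); these numbers are illustrative, not actual model output:

```python
from math import exp

def to_odds_ratio(posterior_mean, ci_lower, ci_upper):
    """Convert a log-odds estimate and its 95% credible interval
    to an odds ratio with interval by exponentiation."""
    return exp(posterior_mean), exp(ci_lower), exp(ci_upper)

# Hypothetical posterior summary for the group fixed effect on the
# incorrect-vs-correct logit; values chosen to approximately match
# the reported odds ratio of 2.56 [1.76, 3.77].
or_group, or_lower, or_upper = to_odds_ratio(0.940, 0.565, 1.327)
```

An odds ratio above 1 with a credible interval excluding 1 indicates a reliable increase in the odds of that response type for the autistic group.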
We followed up this analysis of group differences on the ICT with an analysis of differences by test condition. Autistic participants completed the ICT and the ICT-Adapted (without the "do not know" option) with a researcher. We had two hypotheses to explain why participants may choose the "do not know" option first time round. It may be due to an impairment in integrating information and making normative inferences. Alternatively, it may be due to an aspect of cognitive style whereby a participant feels uncomfortable making inferences when they feel there is not enough information to say; when constrained to give an answer, however, they are likely to make the expected inference. On the first account, a "do not know" response first time round would translate to chance-level accuracy second time round, whereas on the second account, we would expect high accuracy. We ran a random effects logistic regression using the lme4 package (Bates, Maechler, Bolker, & Walker, 2015) to differentiate between these accounts. Response first time round (correct, incorrect, "do not know") was the fixed effect, and participant and item were random effects with slopes and intercepts. Accuracy on a given implicature item second time round was the dependent variable. We report the probability of a correct response, along with 95% confidence intervals.
Finally, we assessed how well scores on the ICT discriminated between those with and without an autism spectrum diagnosis. We attempted to optimize the index extracted from the test to separate the groups, using area under the curve (AUC) in receiver operating characteristic (ROC) analysis to select the best index. We tested total implicature score, number of "do not know" responses, and residualized versions of these indices. For residualized scores, we calculated the difference between a participant's actual score and the score expected from their performance on the control items; expected scores were computed based on linear regression in an unrelated sample of 120 non-autistic adults (the pilot study reported in Wilson & Bishop, 2019). We also tested whether it was appropriate to exclude participants based on their score on the control items of the ICT (out of 10). After determining which index had the best AUC, we calculated the sensitivity and specificity of that index in our sample. R package ROCR (Sing, Sander, Beerenwinkel, & Lengauer, 2005) was used for this analysis.

RESULTS
See Table 1 for descriptive statistics for each variable by group. Individuals were excluded from the dataset if they had an outlying score on any of the language measures: on this basis, six autistic and three non-autistic participants were excluded. There was a group difference in implicature scores on the ICT, as hypothesized, with autistic participants scoring lower than non-autistic participants, t(114.09) = 7.83, p < 0.001, Cohen's d = 1.25, but there was no difference in core language scores, t(115.75) = 1.68, p = 0.096. Total scores on the ICT did not differ by sex, t(162.73) = 0.92, p = 0.360, nor by ADOS-2 outcome (autism/autism spectrum vs. non-spectrum) among the autistic participants, t(15.55) = 0.56, p = 0.581.
We tested whether group membership (autistic or non-autistic) predicted response (correct, incorrect or "do not know") to any given implicature item, while also accounting for core language ability, in a multinomial mixed effects model. Autistic participants were 2.56 [1.76, 3.77] times more likely to select an incorrect rather than correct answer compared to non-autistic participants, and 6.19 [3.63, 10.39] times more likely to pick the "do not know" rather than the correct response, p < 0.001 in both cases. Lower core language ability was related to a greater likelihood of selecting incorrect responses compared to the correct answer: for each SD decrease in core language, odds of an incorrect response were 1.78 [1.47, 2.15] times higher, p < 0.001, whereas there was no effect for "do not know" responses, p = 0.142. Clearly, autistic people had a much greater tendency to select the "do not know" response. To make sense of this, we compared how autistic participants scored on the original version of the test compared to an adapted version without a "do not know" option. Sixty-one people completed both versions; four of these were participants with outlying scores, who were excluded from the analysis. We used a mixed effects logistic regression, with accuracy on the second version of the test as the response variable and response on the original version (correct, incorrect, "do not know") as the fixed effect. A correct response first time round led to a 97% [95%, 99%] probability of a correct response second time round. Incorrect and "do not know" responses were associated with 71% [56%, 83%] and 91% [85%, 95%] probabilities of a correct response second time round, respectively. (Results were essentially identical when including outliers.) In this analysis, we see high accuracy for items where "do not know" was the response first time round, indicating that participants will typically make the normative inference when constrained to do so.
Accuracy was substantially higher than when an incorrect response was given the first time round, and almost as high as when a correct answer was originally given. This provides support to the second account outlined above that differences in cognitive style are likely to play a role in explaining group differences in scores on this test.
We assessed how well scores discriminated between those with and without an autism spectrum diagnosis. The AUCs of different indices were compared, including raw implicature total (AUC = .80), raw total of implicature items given a "do not know" response (AUC = .75), residualized implicature total (AUC = .80), and residualized total of implicature items given a "do not know" response (AUC = .78). We tested whether it was appropriate to drop participants who did not score well on the control items, but all thresholds (out of 10) were associated with slightly worse AUCs. Raw implicature score represented the best index. The optimal cut-off was 27 (with lower scores more common in the autistic group) which gave sensitivity of 76% and specificity of 76% for autism in our sample.
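The cutoff classification can be sketched as follows. This is an illustrative Python version with toy data, not the ROCR-based R analysis; we assume a score below the cutoff counts as a positive (autism) prediction, since lower scores were more common in the autistic group.

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity for a score-based classifier.
    A score below the cutoff counts as a positive (autism) prediction.
    labels: True for autistic, False for non-autistic."""
    tp = sum(1 for s, l in zip(scores, labels) if l and s < cutoff)
    fn = sum(1 for s, l in zip(scores, labels) if l and s >= cutoff)
    tn = sum(1 for s, l in zip(scores, labels) if not l and s >= cutoff)
    fp = sum(1 for s, l in zip(scores, labels) if not l and s < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data (not the study's scores): 4 autistic, 4 non-autistic participants.
scores = [20, 25, 26, 30, 28, 33, 35, 24]
labels = [True, True, True, True, False, False, False, False]
sensitivity, specificity = sens_spec(scores, labels, cutoff=27)
```

An ROC analysis simply repeats this computation across all possible cutoffs and plots sensitivity against 1 - specificity; the AUC summarizes discrimination across the whole range.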

DISCUSSION
On the ICT, autistic people were over twice as likely as non-autistic people to choose the "incorrect" inference, suggesting that they had some difficulty with arriving at the normative interpretation of implied language. In everyday conversations, this might sometimes translate as being more likely to "get the wrong end of the stick." This finding agrees with previous research that autistic people underperform on tests of inferencing (for a review, see Loukusa & Moilanen, 2009), and consolidates previous research as we rule out core language as an explanation for inferencing difficulties, by including core language as a covariate and controlling language demands in the task itself. Difficulties with making inferences about implied language would be predicted by leading accounts of the cognitive differences in autism. These include the "theory of mind" account (Baron-Cohen, 2000), which focuses on making inferences about other people's mental states, and the "central coherence" account (Frith, 1989), which focuses more generally on integrating information into a global whole. Both accounts predict differences in process (e.g., reduced tendency) and outcome (e.g., arrival at unusual interpretations) when making inferences about other people's implied meanings in conversation.
Perhaps the most notable finding in the present research was the much greater tendency of autistic people to record "do not know" responses when given the opportunity. As such, in an ambiguous scenario where a person may or may not intend to communicate an indirect meaning, autistic people seem inclined to avoid making the normative inference that non-autistic people are likely to make, perhaps because "there's not enough information to say." Interestingly, when the test was given in a more constrained format that did not allow a "do not know" response, autistic participants typically processed implicature as expected. A "do not know" response first time round was associated with over 90% accuracy second time round; indeed, not substantially lower than accuracy on items that were answered correctly first time round. This suggests that the autistic participants could process inferences about implied meaning, but their tendency to do so differed according to the circumstances. This agrees with other research showing that autistic people may be sensitive to implied meaning in some contexts without necessarily drawing explicit inferences, as indicated by a dissociation between performance on implicit and explicit measures (Black, Barzy, Williams, & Ferguson, 2019; Tirado & Saldana, 2016). As all participants completed the ICT-Adapted after the ICT, we cannot rule out a practice effect, not least because accuracy on items given an incorrect response first time round was around 70%, that is, significantly greater than chance. However, this is rather lower than 90%, which suggests that a practice effect does not fully explain the high accuracy when people originally said "do not know," and supports the view that different cognitive processing styles may be present in autism.
We can interpret the tendency to select "do not know" in several ways. Firstly, the more open nature of the ICT (in which "do not know" was an option) may have caused difficulty for the autistic participants. This task required the individual to decide both whether an inference was warranted and also what that inference might be. This introduces two levels of uncertainty, and makes the task more open-domain (i.e., with rules and structure needing to be imposed by the individual), which Klin, Jones, Schultz, and Volkmar (2003) suggested is a major factor in activities that autistic people find more challenging. Another possibility is that it was less the nature of the task and more the nature of the information being presented that led to the distinctive response patterns of autistic individuals. In implicature items, a question was asked by one character and a response was given by a second character, and this response only seemed adequate if the listener looked for an implied meaning. As such, the frequent use of the "do not know" option may have reflected an autistic preference for more certain, explicit information.
Qualitative comments made by participants about the test would support the suggestion that they preferred certainty, as several independently expressed frustration and noted that there was just not enough information to make a judgment on the implicature items. This probably reflected an element of cognitive preference rather than just reduced inferencing ability, as individuals typically could select the "correct" answer when constrained to, even if they may have preferred to say "do not know," as they did first time round. Perhaps the drive toward certainty is based on a history of communication mishaps, with "do not know" even representing an adaptive response when someone is aware of having experienced difficulty in the past when determining whether a person was implicating something or not. One participant offered the following take on these issues: "I can make a really good guess at what people mean but the anxiety surrounding all the possible meanings is so exhausting that like if they say something I'm 99% sure it means this but that 1% of but what about all the other things it could possibly mean… It's just really, really exhausting and second guessing yourself all the time of 'was that thing the right thing?' … And people aren't brilliant at giving feedback, so you don't know if you've said the right thing … I think it's much more the anxiety of not being sure if you're understanding someone correctly than just outright getting it wrong … because there were so many times as a kid when I misunderstood and got it wrong and then if you get it wrong people react to you badly or they ostracize you … I think it's an anxiety that's built up over a lifetime of not quite getting it right enough of the time." (This was their response to the question "What do you find harder about social situations?" during an ADOS-2 interview).
Pulling together the threads discussed here, there seemed to be cognitive differences in how autistic people responded to and made predictions based on uncertainty (see Sinha, Kjelgaard, Gandhi, Tsourides, & Cardinaux, 2014 for a review of the role of prediction in autism). In this respect, it is worthwhile noting that the trait "intolerance of uncertainty" has recently emerged from anxiety studies into the autism literature. Several studies indicate that autistic individuals show greater levels of this trait, compared to the general population, and that variability in the trait relates to core features of autism, including sensory sensitivities, insistence on sameness and repetitive behaviors (e.g., Hwang, Arnold, Srasuebkul, & Trollor, 2020;Vasa, Kreiser, Keefer, Singh, & Mostofsky, 2018;Wigham, Rodgers, & South, 2015). One thing to note is that "intolerance of uncertainty" relates to appraisals (i.e., negative, potentially catastrophic appraisals) and is highly linked to anxiety. However, it is possible that selection of "do not know" on the test was less a negative appraisal of uncertainty but instead reflected a lower criterion for deciding something was uncertain (i.e., the influence was more "cognitive" than emotional). Therefore, it remains to be seen whether "intolerance of uncertainty" offers a helpful explanation of the results (and if it did, we might expect a correlation with anxiety, though scores on an anxiety questionnaire given to the autistic adults did not relate to ICT scores).
Interestingly, in our previous paper (Wilson & Bishop, 2019), there was no relationship between scores on the Autism-Spectrum Quotient (AQ; Baron-Cohen, Wheelwright, Skinner, Martin, & Clubley, 2001) and the ICT in a general population sample of adults. The AQ is taken to measure autism-related traits, including subtle social difficulties and a preference for routine and focused interests, as they vary continuously throughout the general population. Given that there was a robust difference on the test when comparing people with and without an autism diagnosis, it is striking that there was no relationship with "sub-clinical" autistic features. This suggests that the cognitive differences observed here between autistic and non-autistic people are more of a categoric difference (i.e., a different way of thinking) rather than a thinking style that varies continuously with autistic-like traits. In our previous general population sample (Wilson & Bishop, 2019), average AQ scores were elevated above population norms and the range of scores was broad, so the lack of a relationship between autistic traits and implicature scores cannot easily be attributed to issues with the data distribution. The AQ may not tap continuous variation well perhaps, as there is some uncertainty about its appropriateness for this purpose (James, Dubey, Smith, Ropar, & Tunney, 2016), although there is continuous genetic influence on the measure across the general population and at the extreme (Robinson et al., 2013). Overall, then, the existence of a characteristic response pattern on the ICT in autistic adults but no relationship between test scores and AQ scores suggests possible categoric rather than continuous differences in relation to autism.

Potential clinical applications and implications
Given that there was a large effect size on the ICT, we may ask if it could be useful for clinicians in describing the cognitive profile of individuals during the diagnostic process, or indeed afterwards. In the present sample, a score of 27 on the test gave 76% sensitivity and specificity for autism. This sensitivity may be interpreted in relation to the sensitivity of the gold-standard tool for autism assessment, the ADOS-2, which was 85% in the current sample (55 of 65 adults with existing diagnoses scored above the autism spectrum cut-off); it is not possible to assess the specificity of the ADOS-2 as it was not administered to the non-autistic sample. It goes without saying that the ICT is not diagnostic, but we suggest it may complement standard clinical tools in characterizing an individual's communication abilities, at least for those with a language level equivalent to or higher than a child in the middle primary years. It should be noted that many of the autistic adults involved in this study would likely have been described as having Asperger's Syndrome under DSM-IV, as indicated by their own reports and our assessment of their core language ability, which was often above average (e.g., on the vocabulary task, 70% scored at or above the 50th percentile for control sample scores), and so the results reported here are most applicable to similar autistic adults. Further work would be needed to test the validity of the ICT with individuals who are less verbal and/or have learning disabilities, where limitations in understanding the basic vocabulary and grammar used in the test items may constrain performance.
Just as the ICT might have practical uses in assessment, the findings reported here also have implications for how clinicians might interact with autistic people. This study indicated that autistic people have some difficulty with "catching on" to implied responses in conversation. They may not take meaning as intended, and may not feel confident about whether implied meaning is present or not. This lack of confidence may be exacerbated by a history of communication difficulties, and it is important to bear in mind that difficulties could be seen even in those with high levels of vocabulary and grammar skills. Regarding autism-friendly communication, the National Autistic Society currently offer the following advice: "Avoid using irony, sarcasm, figurative language, rhetorical questions, idioms or exaggeration. If you do use these, explain what you have said and be clear about what you really mean to say" (National Autistic Society, 2017). This study supports the advice, but also suggests that common communication strategies, such as hinting and indirect responses, may pose barriers for autistic people. Likewise, non-autistic people could have a role to play in improving the communication experiences of autistic people by accommodating their preferences for explicit communication.