CRIS Natural Language Processing
CRIS NLP Applications Catalogue
Natural Language Processing (NLP) is a type of Artificial Intelligence (AI) for extracting structured information from the free text of electronic health records. We have set up the Clinical Record Interactive Search (CRIS) NLP Service to facilitate the extraction of anonymised information from the free text of the clinical record at the South London and Maudsley NHS Foundation Trust. Research using data from electronic health records (EHRs) is rapidly increasing, and much of the most valuable information is held in the free text. However, manually reviewing free text is very time consuming. NLP methods are therefore used to overcome the burden of manual review and extract the information needed, and are in high demand throughout the research world.
Welcome to the NLP Applications Catalogue. This provides detailed information about various apps developed within the CRIS service.
This regularly updated webpage contains the details and performance of over 80 NLP applications for the automatic extraction of mental health data from the EHR, which we have developed and currently deploy routinely through our NLP service.
This webpage provides details of NLP resources which have been developed since around 2009 for use at the NIHR Maudsley Biomedical Research Centre and its mental healthcare data platform, CRIS. We have set up the CRIS NLP Service to facilitate the extraction of anonymised information from the free text of the clinical record. Research using data from electronic health records (EHRs) is rapidly increasing and the most valuable information is sometimes only contained in the free text. This is particularly the case in mental healthcare, although not limited to that sector.
The CRIS system was developed for use within NIHR Maudsley BRC. It provides authorised researchers with regulated, secure access to anonymised information extracted from South London and Maudsley’s EHR.
The South London and Maudsley provides mental healthcare to a defined geographic catchment of four south London boroughs (Croydon, Lambeth, Lewisham, Southwark) with around 1.3 million residents, in addition to a range of national specialist services.
General Points of Use
All applications currently in production at the CRIS NLP Service are described here.
Our aim is to update this webpage at least twice yearly, so please check that the version you consult matches the version used for your data extraction.
Guidance for use: Every application report comprises four parts:
1) Description – the name of the application and a short explanation of what construct(s) the application seeks to capture.
2) Definition – an account of how the application was developed (e.g. machine-learning or rule-based, the terms searched for, and guidelines for annotators), the annotation classes produced, and inter-rater reliability results (Cohen’s kappa).
3) Performance – precision and recall are used to evaluate application performance on pre-annotated documents identified by the app, as well as on un-annotated documents retrieved by keyword searching the free text of the events and correspondence sections of CRIS.
a) Precision is the ratio of relevant (true positive) entities retrieved to the total number of entities retrieved (true positives plus false positives).
b) Recall is the ratio of relevant (true positive) entities retrieved to the number of relevant entities available in the database (true positives plus false negatives).
Performance testing is outlined in chronological order for pre-annotated documents, un-annotated documents retrieved through specific keyword searches, or both. The latest performance test in the list corresponds to the version of the application currently in use by the NLP Service. Search terms used for recall testing are presented where necessary. Similarly, details are provided for any post-processing rules that have been implemented. Notes relating to observations by annotators and performance testers are included where applicable.
4) Production – information is provided on the version of the application currently in use by the NLP Service and the corresponding deployment schedule.
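As a minimal illustration of how the precision and recall figures quoted in each Performance section are computed (the counts below are invented for the example, not taken from any application):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from entity counts.

    tp: relevant entities retrieved (true positives)
    fp: irrelevant entities retrieved (false positives)
    fn: relevant entities missed (false negatives)
    """
    precision = tp / (tp + fp)  # relevant retrieved / all retrieved
    recall = tp / (tp + fn)     # relevant retrieved / all relevant
    return precision, recall

# Hypothetical counts from a 100-document test set
p, r = precision_recall(tp=91, fp=9, fn=30)
print(round(p, 2), round(r, 2))  # 0.91 0.75
```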
Symptom scales (see proposed allocations)
As the number of symptom applications is increasing, we regularly evaluate how to make these available to researchers in a flexible and meaningful manner.
To this end, and in order to reduce the risk of too many and/or highly correlated variables in analyses, we are currently utilising symptom scales that group positive schizophreniform, negative schizophreniform, depressive, manic, disorganized and catatonic symptoms respectively.
The group of ‘other’ symptoms represent symptoms that have been developed separately for different purposes and that are intended to be used individually rather than in scales.
Each symptom receives a score of 1 if it is recorded as positive within a given surveillance period.
Individual symptoms are then summed to generate a total score of:
• 0 – 16 for positive schizophreniform
• 0 – 12 for negative schizophreniform
• 0 – 21 for depressive
• 0 – 8 for manic
• 0 – 8 for disorganized
• 0 – 4 for catatonic
We encourage researchers, unless there is a particular reason to be discussed with the NLP team, to use these scales when extracting and analysing data relating to symptom applications.
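The scoring rule above can be sketched as follows; the scale maxima come from the ranges listed above, while the per-patient symptom flags are illustrative placeholders, not the definitive scale allocations:

```python
# Maximum totals per scale, from the ranges listed above
SCALE_MAX = {
    "positive_schizophreniform": 16,
    "negative_schizophreniform": 12,
    "depressive": 21,
    "manic": 8,
    "disorganized": 8,
    "catatonic": 4,
}

def scale_score(symptom_flags):
    """Sum binary symptom indicators (1 = symptom positive in the
    surveillance period, 0 = not) into a scale total."""
    return sum(1 for flag in symptom_flags if flag)

# Hypothetical flags for one patient on the depressive scale
flags = [1, 0, 1, 1, 0]
print(scale_score(flags))  # 3
```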
Version
V3.2
Contents
Symptoms
Aggression
Agitation
Anergia
Anhedonia
Anosmia
Anxiety
Apathy
Arousal
Bad Dreams
Blunted Affect
Circumstantiality
Cognitive Impairment
Concrete Thinking
Delusions
Derailment
Disturbed Sleep
Diurnal Variation
Drowsiness
Early Morning Wakening
Echolalia
Elation
Emotional Withdrawal
Eye Contact (Categorisation)
Fatigue
Flight of Ideas
Fluctuation
Formal Thought Disorder
Grandiosity
Guilt
Hallucinations (All)
Hallucinations - Auditory
Hallucinations - Olfactory Tactile Gustatory (OTG)
Hallucinations - Visual
Helplessness
Hopelessness
Hostility
Insomnia
Irritability
Loss of Coherence
Low energy
Mood instability
Mutism
Negative Symptoms
Nightmares
Obsessive Compulsive Symptoms
Paranoia
Passivity
Persecutory Ideation
Poor Appetite
Poor Concentration
Poor Eye Contact
Poor Insight
Poor Motivation
Poverty Of Speech
Poverty Of Thought
Psychomotor Activity (Categorisation)
Smell
Social Withdrawal
Stupor
Suicidal Ideation
Tangentiality
Taste
Tearfulness
Thought Block
Thought Broadcast
Thought Insertion
Thought Withdrawal
Waxy Flexibility
Weight Loss
Worthlessness
Physical Health Conditions
Asthma
Bronchitis
Cough
Crohn's Disease
Falls
Fever
Hypertension
Multimorbidity - 21 Long Term Conditions (Medcat)
Pain
Rheumatoid Arthritis
HIV
HIV Treatment
Contextual Factors
Amphetamine
Cannabis
Chronic Alcohol Abuse
Cocaine or Crack Cocaine
MDMA
Smoking
Education
Occupation
Lives Alone
Loneliness
Violence
Interventions
CAMHS - Creative Therapy
CAMHS - Dialectical Behaviour Therapy (DBT)
CAMHS - Psychotherapy/Psychosocial Intervention
Cognitive Behavioural Therapy (CBT)
Depot Medication
Family Intervention
Medication
Social Care - Care Package
Social Care - Home Care
Social Care - Meals on Wheels
Outcome and Clinical Status
Blood Pressure (BP)
Body Mass Index (BMI)
Brain MRI report volumetric Assessment for dementia
Cholesterol
EGFR
HBA1C
Lithium
Mini-Mental State Examination (MMSE)
Neutrophils
Non-Adherence
Diagnosis
Treatment-Resistant Depression
Bradykinesia (Dementia)
Trajectory
Tremor (Dementia)
QT
White Cells
Miscellaneous
Forms
Quoted Speech
Symptom Scales (see notes)
Symptoms
Aggression
Return to contents
Brief Description
Application to identify instances of aggressive behaviour in patients, including verbal, physical and sexual aggression.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions include:
“reported to be quite aggressive towards…”,
“violence and aggression, requires continued management and continues to reduce in terms of incidents etc”.
Also include verbal aggression and physical aggression.
Excludes negative and irrelevant mentions, e.g.:
“no aggression”,
“no evidence of aggression”
“aggression won’t be tolerated”.
etc.
Definitions: Search term (case insensitive): *aggress*
Evaluated Performance
Cohen's k = 85% (50 un-annotated documents - 25 events/25 attachments, search term ‘aggress*’).
Instance level (testing done on 100 random documents):
Precision (positive predictive value) = 91%
Recall (sensitivity / coverage) = 75%
Patient level (testing done on 50 random documents):
Precision (positive predictive value) = 76%
Additional Notes
Run schedule– Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Agitation
Return to contents
Brief Description
Application to identify instances of agitation
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions include:
“Very agitated at present, he was agitated”,
“He was initially calm but then became agitated and started staring and pointing at me towards”,
Also includes “no longer agitated”.
Excludes negative and irrelevant mentions, e.g.:
“He did not seem distracted or agitated”,
“Not agitated”,
“No evidence of agitation”.
“a common symptom of psychomotor agitation”
Definitions: Search term (case insensitive): *agitat*
Evaluated Performance
Cohen's k = 85% (50 un-annotated documents - 25 events/25 attachments, search term ‘agitat*’).
Instance level (testing done on 100 random documents):
Precision (positive predictive value) = 85%
Recall (sensitivity / coverage) = 79%
Patient level: all patients with primary diagnosis code F32* or F33* (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 82%
Additional Notes
Run schedule– Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Anergia
Return to contents
Brief Description
Application to identify instances of anergia
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
“feelings of anergia…”
Excludes negative mentions, e.g:
“no anergia”,
“no evidence of anergia”,
“no feeling of anergia”.
Definitions: Search term (case insensitive): *anergia*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘anergia*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 95%
Recall (sensitivity / coverage) = 89%
Patient level – all patients with primary diagnosis code F32* or F33* (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 93%
Additional Notes
Run schedule– Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Anhedonia
Return to contents
Brief Description
Application to identify instances of anhedonia (inability to experience pleasure from activities usually found enjoyable).
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions include:
“ X had been anhedonic”,
“ X has anhedonia”.
Excludes Negative mentions, e.g
“no anhedonia”,
“no evidence of anhedonia”,
“not anhedonic”,
Exclude ‘Unknown’ mentions e.g:
i) used in a list, not applying to patient (e.g. typical symptoms include …);
ii) uncertain (might have anhedonia, ?anhedonia, possible anhedonia);
iii) not clearly present (monitor for anhedonia, anhedonia has improved);
iv) listed as potential treatment side-effect;
v) vague (‘she is not completely anhedonic’, ‘appears almost anhedonic’)
Definitions: Search term(s): *anhedon*
Evaluated Performance
Cohen's k = 85% (50 un-annotated documents - 25 events/25 attachments, search term ‘anhedon*’).
Instance level (testing done on 100 random documents):
Precision (positive predictive value) = 93%
Recall (sensitivity / coverage) = 86%
Patient level – all patients with primary diagnosis code F32* or F33* (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 87%
Additional Notes
Run schedule– Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Anosmia
Return to contents
Brief Description
Application to extract and classify mentions related to anosmia.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
“Loss of enjoyment of food due to anosmia”,
“COVID symptoms such as anosmia”
Excludes Negative and Unknown mentions, e.g.:
“Nil anosmia”,
“Doctor mentioned they had anosmia so could not smell patient”,
“Anosmia related to people other than the patient”,
Unknown mentions: Annotations are coded as unknown when it is not clear if the patient has symptoms/experiences of anosmia
E.g.
“Mentions of medications for it”,
“Don’t come to the practice if you have any covid symptoms such as anosmia”, etc.
Definitions: Search term (case insensitive): Anosmia*
Evaluated Performance
Cohen's k = 83% (100 random documents).
Instance level (testing done on 100 random documents):
Precision (positive predictive value) = 83%
Recall (sensitivity / coverage) = 93%
Additional Notes
Run schedule– On demand
Other Specifications
Version 1.0, Last updated:xx
Anxiety
Return to contents
Brief Description
Application to extract and classify mentions related to (any kind of) anxiety.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Classes produced: Affirmed, Negated, and Irrelevant.
Output and Definitions
The output includes-
Affirmed examples of anxiety include:
“ZZZZZ shows anxiety problems”
Excludes negated mentions of anxiety, e.g.:
“ZZZZ does not show anxiety problems”
Excludes irrelevant examples of anxiety, e.g.:
“If ZZZZ was anxious he would not take his medication”
Examples related to the patient or someone else (“experiencer”)
i. patient: “ZZZ shows anxiety problems”
ii. other: “nurse is worried about the patient”
iii. unknown: “he showed clear signs of anxiety”
Search terms (case insensitive): available on request
Evaluated Performance
Cohen’s k = 94% (3000 random documents).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 87%
Recall (sensitivity / coverage) = 97%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Apathy
Return to contents
Brief Description
Application to extract the presence of apathy
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions include:
any indication that apathy was being reported as a symptom:
“ continues to demonstrate apathy”
“ some degree of apathy noted”
“presentation with apathy”
“his report of apathy given”.
Exclude Negative mentions of apathy:
“denied apathy”
“no evidence of apathy”
Exclude ‘Unknown’ annotations of apathy:
“may develop apathy or as a possible side effect of medication”
“*apathy* found in quite a few names”
Definitions: Search term(s): *apath*
Evaluated Performance
Cohen's k = 86% (50 un-annotated documents - 25 events/25 attachments, search term ‘apath*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 93%
Recall (sensitivity / coverage) = 86%
Patient level – all patients with primary diagnosis code F32* or F33* (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 73%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Arousal
Return to contents
Brief Description
Application to identify instances of arousal excluding sexual arousal.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions include:
“...the decisions she makes when emotionally aroused”,
“...during hyperaroused state”,
“following an incidence of physiological arousal”
Excludes negative mentions, e.g.:
“mentions of sexual arousal”,
“no arousal”,
“not aroused”,
“denies being aroused”
Unknown mentions include:
“annotations include unclear statements and hypotheticals”
Definitions: Search term(s): *arous*
Evaluated Performance
Cohen's k = 95% (50 un-annotated documents - 25 events/25 attachments, search term ‘*arous*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 89%
Recall (sensitivity / coverage) = 91%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Bad Dreams
Return to contents
Brief Description
Application to identify instances of experiencing a bad dream
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations include:
“ZZZZZ had a bad dream last night”,
“she frequently has bad dreams”,
“ZZZZZ has suffered from bad dreams in the past”,
“ZZZZZ had a bad dream that she was underwater”,
“ he’s been having fewer bad dreams”
Exclude Negative mentions:
“she denied any bad dreams”,
“does not suffer from bad dreams”,
“no other PTSD symptoms such as bad dreams”,
“he said the experience was like a bad dream”,
“ZZZZZ compared his time in hospital to a bad dream”
Exclude Unknown mentions:
“she said it might have been a bad dream”,
“he woke up in a start, as if waking from a bad dream”,
“ZZZZZ couldn’t remember whether the conversation was just a bad dream”,
“doesn’t want to have bad dreams”
Definitions: Search term(s): bad dream*
Evaluated Performance
Cohen's k = 100% (100 un-annotated documents - 50 events/50 attachments, search terms ‘dream’ and ‘dreams’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 89%
Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Blunted Affect
Return to contents
Brief Description
Application to identify instances of blunted affect
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations include:
“his affect remains very blunted”,
“objectively flattened affect”,
“states that ZZZZZ continues to appear flat in affect”
Exclude Negative annotations:
“incongruent affect”,
“stable affect”,
“no blunted affect”
Exclude Unknown annotations:
“typical symptoms include blunted affect”,
“slightly flat affect”,
“relative shows flat affect”
Definitions: Search term(s):
*affect*
blunt* [0 to 2 words in between] *affect*
flat [0 to 2 words in between] *affect*
restrict* [0 to 2 words in between] *affect*
*affect* [0 to 2 words in between] blunt
*affect* [0 to 2 words in between] flat
*Affect* [0 to 2 words in between] restrict
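The “[0 to 2 words in between]” patterns above can be approximated with a regular-expression gap window. This is an illustrative sketch of the idea only, not the service’s actual implementation:

```python
import re

# Approximate "blunt* [0 to 2 words in between] *affect*":
# a token starting with "blunt", then 0-2 intervening words,
# then a token containing "affect" (case-insensitive).
PATTERN = re.compile(r"\bblunt\w*\b(?:\s+\w+){0,2}\s+\w*affect\w*",
                     re.IGNORECASE)

examples = [
    "his affect remains very blunted",   # reversed order: not matched by this pattern
    "blunted affect noted on review",    # 0 words in between
    "blunted and restricted affect",     # 2 words in between
]
for text in examples:
    print(bool(PATTERN.search(text)))  # prints False, True, True
```

The mirrored patterns in the list above (e.g. “*affect* [0 to 2 words in between] blunt”) would need a second expression with the token order reversed.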
Evaluated Performance
Cohen's k = 100% (50 annotated documents - 25 events/24 attachments/1 mental health care plan).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 100%
Recall (sensitivity / coverage) = 80%
Patient level (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Circumstantiality
Return to contents
Brief Description
Application to identify instances of circumstantiality
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
“loose associations and circumstantiality”,
“circumstantial in nature”,
“some circumstantiality at points”,
“speech is less circumstantial”
Exclude Negative mentions:
“no signs of circumstantiality”,
“no evidence of circumstantial”
Exclude Unknown mentions:
“Such as a hypothetical cause of something else”
Definitions: Search term(s): *circumstan*
Evaluated Performance
Cohen's k = 100% (50 annotated documents - 25 events/25 attachments).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 94%
Recall (sensitivity / coverage) = 92%
Patient level (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Cognitive Impairment
Return to contents
Brief Description
Application to identify instances of cognitive impairment. The application detects cognitive impairments relating to attention, memory, executive function and emotion, as well as a generic cognition domain. This application has been developed for patients diagnosed with schizophrenia.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Classes produced: Affirmed (relating to the patient) and Negated/Irrelevant.
Output and Definitions
The output includes-
Affirmed mentions:
“patient shows attention problems (positive) “
“ZZZ does not show good concentration “
“ZZZ shows poor concentration “
“patient scored 10/18 for attention “
“ZZZ seems hyper-vigilant “
Exclude Negated and Irrelevant mentions:
“patient uses distraction technique to ignore hallucinations “
“attention seeking”
“patient needs (medical) attention”
“draw your attention to…”
Definitions: Search term(s): attention, concentration, focus, distracted, adhd, hypervigilance, attend to
Evaluated Performance
Cohen’s k:
Cognition – 66% (3000 random documents)
Emotion – 84% (3000 random documents)
Executive function – 40% (3000 random documents)
Memory – 68% (3000 random documents)
Attention – 99% (2616 random documents)
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 96%
Recall (sensitivity / coverage) = 92%
Instance level – patients with an F20 diagnosis (testing done on 100 random documents):
Precision (positive predictive value) = 78%
Recall (sensitivity / coverage) = 70%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
DOI
Concrete Thinking
Return to contents
Brief Description
Application to identify instances of concrete thinking.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations include:
“text referring to ‘concrete thinking’”,
“speech or answers to questions being ‘concrete’”,
“the patient being described as ‘concrete’ without elaboration”,
“answers being described as concrete in cognitive assessments”,
“‘understanding’ or ‘manner’ or ‘interpretations’ of circumstances being described as concrete”
Exclude Negative annotations:
“no evidence of concrete thinking”
Exclude Unknown annotations:
“reference to concrete as a material (concrete floor, concrete house etc.)”
“no concrete plans”,
“delusions being concrete”,
“achieving concrete goals using concrete learning activities”
Definitions: Search term(s): concrete [word] [word] think*; think [word] [word] concret*
Evaluated Performance
Cohen's k = 83% (50 un-annotated documents - 25 events/25 attachments, search term ‘concrete*’).
Instance level, i.e. for all specific mentions (testing done on 146 random documents):
Precision (positive predictive value) = 84%
Recall (sensitivity / coverage) = 41%
Patient level (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Delusions
Return to contents
Brief Description
Application to identify instances of delusions
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
“paranoid delusions”,
“continued to express delusional ideas of the nature”
“no longer delusional” (indicates past)
Exclude Negative mentions:
“no delusions”,
“denied delusions”
Exclude Unknown mentions:
“delusions are common”
Definitions: Search term(s): *delusion*
Evaluated Performance
Cohen's k = 92% (50 un-annotated documents - 25 events/25 attachments, search term ‘delusion*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 93%
Recall (sensitivity / coverage) = 85%
Patient level (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Derailment
Return to contents
Brief Description
Application to identify instances of derailment.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive annotations:
“he derailed frequently”,
“there was evidence of flight of ideas”,
“thought derailment in his language”,
“speech no longer derailed”.
Exclude Negative annotations:
“no derailment”,
“erratic compliance can further derail her stability”
“no evidence of derailment”,
“without derailment”,
“without derailing”,
“no evidence of loosening of association, derailment or tangential thoughts”
Exclude Unknown annotations:
“train was derailed”
Definitions: Search term(s): *derail*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘derail*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 88%
Recall (sensitivity / coverage) = 95%
Patient level (testing done on 30 random documents, one document per patient):
Precision (positive predictive value) = 74%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Disturbed Sleep
Return to contents
Brief Description
Application to identify instances of disturbed sleep.
Development Approach
Development approach: Rule-Based.
Classes produced: Positive
Output and Definitions
The output includes-
Instances of disturbed sleep:
“complains of poor sleep”,
“poor sleep”,
“sleep disturbed”,
“sleep difficulty”,
“sleeping poorly”,
“not sleeping very well”,
“cannot sleep”,
“sleep pattern poor”,
“difficulties with sleep”,
“slept badly last couple of nights”
Definitions: Search term(s):
not, poor*, interrupt*, disturb*, inadequat*, disorder*, prevent*, stop*, problem*, difficult*, reduc*, less*, impair*, erratic*, unable*, worse*, depriv* [0-2 tokens] sleep* or slep*;
little sleep; sleepless night; broken sleep; sleep intermittently;
sleep* or slep* [0-2 tokens] not, poor*, interrupt*, disturb*, inadequat*, disorder*, prevent*, stop*, problem*, difficult*, reduc*, less*, impair*, erratic*, unable*, worse*, depriv*.
Evaluated Performance
Cohen's k = 75% (50 un-annotated documents - 25 events/25 attachments, search term ‘*sleep*’ or ‘slept’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 88%
Recall (sensitivity / coverage) = 68%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 2.0, Last updated:xx
Diurnal Variation
Return to contents
Brief Description
Application to identify instances of diurnal variation of mood
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive mentions:
• ‘patient complaints of diurnal variation’
• ‘he reported diurnal variation in his mood’
• ‘Diurnal variation present’
• ‘some diurnal variation of mood present’
Exclude Negative examples:
• ‘no diurnal variation’
• ‘diurnal variation absent’
• ‘patient complaints of ongoing depression but no diurnal variation’
• ‘depressive symptoms present without diurnal variation’
Exclude Unknown examples:
• ‘diurnal variation could be a symptom of more severe depression’
• ‘we spoke about possible diurnal variation in his mood’
• ‘it was not certain if there were diurnal variation’
Definitions: Search term(s): diurnal variation
Evaluated Performance
Cohen's k = xx
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 94%
Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Drowsiness
Return to contents
Brief Description
Application to identify instances of drowsiness
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive examples:
“ZZZZZ appeared to be drowsy”,
“She has complained of feeling drowsy”
Exclude Negative examples:
“He is not drowsy in the mornings”,
“She was quite happy and did not appear drowsy”,
“ZZZZZ denied any symptoms of drowsiness”,
“Negative annotations should be when the patient denies drowsiness, or is described as not drowsy etc”
Exclude Unknown examples:
“In reading the label (of the medication), ZZZZZ expressed concern in the indication that it might make him drowsy”,
“Monitor for increased drowsiness and inform for change in presentation”,
“risk of drowsiness, instructions to reduce medication if the patient becomes drowsy”
Definitions: Search term(s): drows*
Evaluated Performance
Cohen's k = 83% (1000 un-annotated documents, search term ‘drows*’).
Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (positive predictive value) = 80%
Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Early Morning Wakening
Return to contents
Brief Description
Application to identify instances of early morning wakening.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes:
Positive annotations:
“ patient complaints of early morning awakening”,
“he reported early morning wakening”,
“Early morning awakening present”,
“there is still some early morning wakening”
Exclude Negative annotations:
“no early morning wakening”,
“early morning wakening absent”,
“patient complaints of disturbed sleep but no early morning awakening”,
“sleeps badly but without early morning wakening”
Exclude Unknown annotations:
“early morning awakening could be a symptom of more severe depression”,
“we spoke about how to deal possible early morning wakening”,
“it was not certain if there were occasions of early morning awakening”
Definitions: Search term(s): early morning wakening
Evaluated Performance
Cohen's k = xx Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 95% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Echolalia
Brief Description
Application to extract occurrences where echolalia is present.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations include:
“no neologisms, but repeated what I said almost like echolalia”,
“intermittent echolalia”,
“more or less echolalia”
Exclude Negative annotations:
“no echolalia”,
“no evidence of echolalia”,
“cannot remember any echolalia or stereotyped utterances”
Exclude Unknown annotations:
“Echolalia is not a common symptom”,
Hypotheticals such as “he may have some echolalia”, “evidence of possible echolalia”
Definitions: Search term(s): *echola*
Evaluated Performance
Cohen's k = 88% (50 un-annotated documents - 25 events/25 attachments, search term ‘echola*’) Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 89% Recall (sensitivity / coverage) = 86% Patient level – Testing done on 30 random documents, one document per patient: Precision (specificity / accuracy) = 74%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Elation
Brief Description
Application to identify instances of elation.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“mildly elated in mood”,
“elated in mood on return from leave”,
“she appeared elated and aroused”
Exclude Negative annotations:
“ZZZZZ was coherent and more optimistic/aspirational than elated throughout the conversation”,
“no elated behaviour" etc.
Exclude Unknown annotations:
“In his elated state there is a risk of accidental harm”,
“monitor for elation”,
“elation is a known side effect” and
Statements where the term is used in a list, not applying to the patient (e.g. “Typical symptoms include...”)
Definitions: Search term(s): *elat*
Evaluated Performance
Cohen's k = 100% (50 annotated documents - 25 events/25 attachments). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 94% Recall (sensitivity / coverage) = 97% Patient level – Testing done on 30 random documents, one document per patient: Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Emotional Withdrawal
Brief Description
Application to identify instances of emotional withdrawal.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
Any description of the patient as withdrawn or showing withdrawal, with the following exceptions (which are annotated as unknown):
• Alcohol, substance, medication withdrawal
• Withdrawal symptoms, fits, seizures etc.
• Social withdrawal (i.e. a patient described as becoming withdrawn would be positive but a patient described as showing ‘social withdrawal’ would be unknown – because social withdrawal is covered in another application).
• Thought withdrawal (e.g. ‘no thought insertion, withdrawal or broadcast’)
• Withdrawing money, benefits being withdrawn etc.
Negative annotations are restricted to instances where the patient is described as not withdrawn; the exceptions listed above are categorised as unknown.
Definitions: Search term(s): withdrawn
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘withdrawn’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 85% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Eye Contact (Categorisation)
Brief Description
Application to identify instances of eye contact and determine the type of eye contact.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions: the application successfully identifies the type of eye contact (as denoted by the keyword) in the context (as denoted by the contextstring)
e.g., keyword: ‘good’; contextstring: ‘There was good eye contact’
Negative mentions: the application does not successfully identify the type of contact (as denoted by the keyword) in the context (as denoted by the contextstring), i.e. the keyword does not relate to the eye contact
e.g., keyword: ‘showed’; contextstring: ‘showed little eye-contact’.
Definitions: Search term(s): Eye *contact*
Keyword: the term describing the type of eye contact
ContextString: the context containing the keyword in its relation to eye-contact
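The keyword/contextstring output above can be sketched with a simple pattern match. This is an illustrative approximation only; the `extract_eye_contact` helper and its regex are assumptions, not the deployed rule set:

```python
import re

def extract_eye_contact(text):
    """Hypothetical sketch: take the word immediately before 'eye contact'
    as the keyword, and the matched snippet as the contextstring."""
    results = []
    for m in re.finditer(r"(\w+)\s+eye[\s-]?contact", text, re.IGNORECASE):
        results.append({"keyword": m.group(1), "contextstring": m.group(0)})
    return results

extract_eye_contact("There was good eye contact throughout.")
# keyword ‘good’; contextstring ‘good eye contact’
```

A real rule-based application would then decide whether the keyword genuinely describes the eye contact (‘good’) or not (‘showed’), which is the positive/negative distinction described above.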
Evaluated Performance
Cohen's k = xx Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 91%
Recall (sensitivity / coverage) = 80%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
Fatigue
Brief Description
Application to identify symptoms of fatigue.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“ZZZZ has been experiencing fatigue”,
“fatigue interfering with daily activities”
Exclude Negative annotations:
“No mentions of fatigue”,
“her high levels of anxiety impact on fatigue”,
“main symptoms of dissociation leading to fatigue”
Exclude Unknown annotations:
“ZZZZ is undertaking CBT for fatigue”.
Definitions: Search term(s): Fatigue, exclude ‘chronic fatigue syndrome’
Evaluated Performance
Cohen's k = xx Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 78% Recall (sensitivity / coverage) = 95%
Additional Notes
Run schedule – Monthly
Other Specifications
Version: xx, Last updated:xx
Flight of Ideas
Brief Description
Application to extract instances of flight of ideas.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“Mrs ZZZZZ was very elated with marked flights of ideas”,
“marked pressure of speech associated with flights of ideas”,
“Some flight of ideas”.
Exclude Negative annotations:
“no evidence of flight of ideas”,
“no flight of ideas”
Exclude Unknown annotations:
“bordering on flight of ideas”
“relative shows FOI”
Definitions: Search term(s): *flight* *of* *idea*
Evaluated Performance
Cohen's k = 96% (50 un-annotated documents - 25 events/25 attachments, search term ‘flight of’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 94% Patient level – Testing done on 30 random documents, one document per patient: Precision (specificity / accuracy) = 72%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Fluctuation
Brief Description
The purpose of this application is to determine whether a mention of fluctuation within the text is relevant.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-Positive annotations:
“Mrs ZZZZZ’s mood has been fluctuating a lot”,
“suicidal thoughts appear to fluctuate”
Exclude Negative annotations:
“no evidence of mood fluctuation”,
“does not appear to have significant fluctuations in mental state”
Exclude Unknown annotations:
“unsure whether fluctuation has a mood component”,
“monitoring to see if fluctuations deteriorate”,
“his mother’s responsibility fluctuated”,
“is the person’s risk likely to fluctuate.. yes/no…”
Evaluated Performance
Cohen's k = xx Instance level, i.e. for all specific mentions (testing done on 100 random documents):
Precision (specificity / accuracy) = 87% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
DOI
Formal Thought Disorder
Brief Description
Application to extract occurrences where formal thought disorder is present.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“deteriorating into a more thought disordered state with outbursts of aggression”,
“there was always a degree thought disorder”,
“some formal thought disorder”
Exclude Negative annotations:
“no FTD”,
“no signs of FTD”,
“NFTD”
Exclude Unknown annotations:
“?FTD”,
“relative shows FTD”,
“check if FTD has improved”,
the term used in a list, not applying to the patient (e.g. “typical symptoms include...”)
Definitions: Search term(s): *ftd*, *formal* *thought* *disorder*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search terms ‘*ftd*’, ‘*formal* *thought* *disorder*’) Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 83% Recall (sensitivity / coverage) = 83% Patient level – Testing done on 50 random documents, one document per patient: Precision (specificity / accuracy) = 72%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Grandiosity
Brief Description
Application to extract occurrences where grandiosity is apparent.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“ZZZZZ was wearing slippers and was animated elated and grandiose”,
“reduction in grandiosity”,
“No longer grandiose”
Exclude Negative annotations:
“No evidence of grandiose delusions in the content of his speech”,
“no evidence of grandiose ideas”
Exclude Unknown annotations:
“his experience could lead to grandiose ideas”
Definitions: Search term(s): *grandios*
Evaluated Performance
Cohen's k = 89% (50 un-annotated documents - 25 events/25 attachments, search term ‘grandio*’). Instance level (testing done on 100 random documents):
Precision (specificity / accuracy) = 95% Recall (sensitivity / coverage) = 91% Patient level – Testing done on 30 random documents, one document per patient:
Precision (specificity / accuracy) = 97%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Guilt
Brief Description
Application to identify instances of guilt.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“she then feels guilty/angry towards mum”,
“Being angry is easier to deal with than feeling guilty”,
Feelings of guilt with a reasonable cause, and mentions stating
“no longer feels guilty”
Exclude Negative annotations:
“No feeling of guilt”,
“denies feeling hopeless or guilty”
Exclude Unknown annotations:
“he might be feeling guilty”,
“some guilt”,
“sometimes feeling guilty”
Definitions: Search term(s): *guil*
Evaluated Performance
Cohen's k = 92% (50 un-annotated documents - 25 events/25 attachments, search term ‘guil*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 83% Recall (sensitivity / coverage) = 83% Patient level – All patients with primary diagnosis code of F32* and F33*. Testing done on 90 random documents, one document per patient: Precision (specificity / accuracy) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Hallucinations (All)
Brief Description
Application to identify instances of hallucinations.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“her husband was minimising her hallucinations”,
“continues to experience auditory hallucinations”,
“doesn’t appear distressed by his hallucinations”,
“he reported auditory and visual hallucinations”,
“this will likely worsen her hallucinations”,
“his hallucinations subsided”,
“Neuroleptics were prescribed for her hallucinations”.
Exclude Negative annotations:
“denied any hallucinations”,
“no evidence of auditory hallucinations”,
“he reports it is a dream rather than hallucinations”,
“hears voices but denies command hallucinations”,
“did not report any further auditory hallucinations”,
“hallucinations have not recurred”,
“no longer appeared to have hallucinations”,
“has not had hallucinations for the last 4 months”,
“the hallucinations stopped”
Exclude Unknown annotations:
Statements containing probably/possibly/maybe/likely/unclear/unable to ascertain/unconfirmed reports of hallucinations/experiencing hallucinations,
“pseudo(-) hallucinations”,
“hallucinations present?”,
“?hallucinations”,
“this is not a psychiatric symptom such as hallucinations”,
“abnormalities including hallucinations, derealisation etc”,
“rating scale including delusions, hallucinations, clinical domains”
“hallucinations is a sign of relapse”,
“it is unusual for hallucinations to present in this way”,
“CBT is effective for hallucinations”
Definitions: Search term(s): hallucinat*
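The deployed application is machine-learning based, but the three classes illustrated above can be approximated with surface cues for negation and uncertainty. The sketch below (with illustrative, assumed cue lists) shows the intended distinction, not the actual model:

```python
# Rule-of-thumb sketch of the positive/negative/unknown split.
# The cue lists are assumptions for illustration; a trained model
# handles many constructions these simple substrings would miss.
NEGATION_CUES = ("denied", "denies", "no evidence of", "did not report",
                 "no longer", "has not had", "stopped")
UNCERTAIN_CUES = ("probably", "possibly", "maybe", "unclear",
                  "unable to ascertain", "unconfirmed", "?", "pseudo")

def classify_mention(sentence):
    s = sentence.lower()
    if any(cue in s for cue in UNCERTAIN_CUES):
        return "unknown"
    if any(cue in s for cue in NEGATION_CUES):
        return "negative"
    return "positive"
```

A crude classifier like this would mislabel many of the catalogue's examples (e.g. hypotheticals with no explicit cue word), which is one motivation for the machine-learning approach.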
Evaluated Performance
Cohen's k = 83% (100 un-annotated documents - 50 events/50 attachments, search term ‘hallucinat*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 84% Recall (sensitivity / coverage) = 98%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 2.0, Last updated:xx
DOI
Hallucinations - Auditory
Brief Description
Application to identify instances of auditory hallucinations non-specific to diagnosis.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“Seems to be having olfactory hallucination”,
“in relation to her tactile hallucinations”
Exclude Negative annotations:
“denies auditory, visual, gustatory, olfactory and tactile hallucinations at the time of the assessment”,
“denied tactile/olfactory hallucination”
Exclude Unknown annotations:
“possibly olfactory hallucinations”
Definitions: Search term(s): auditory hallucinat*
Evaluated Performance
Cohen's k = 96% (50 un-annotated documents - 25 events/25 attachments, search term ‘auditory’ or ‘halluc*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 80% Recall (sensitivity / coverage) = 84%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Hallucinations - Olfactory Tactile Gustatory (OTG)
Brief Description
Application to extract occurrences where olfactory, tactile or gustatory hallucinations are present. These hallucinations may be due to a diagnosis of psychosis/schizophrenia or may be due to other causes, e.g. due to substance abuse.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“seems to be having olfactory hallucinations”,
“in relation to her tactile hallucinations”
Exclude Negative annotations:
“denies auditory, visual, gustatory, olfactory and tactile hallucinations at the time of the assessment”,
“denied tactile/olfactory hallucinations”
Exclude Unknown annotations
“possibly olfactory hallucinations”
Definitions: Search term(s):
*olfactory* [0-10 words in between] *hallucin*
*hallucin* [0-10 words in between] *olfactory*
*gustat* [0-10 words in between] *hallucin*
*hallucin* [0-10 words in between] *gustat*
*tactile* [0-10 words in between] *hallucin*
*hallucin* [0-10 words in between] *tactile*
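One way to read the “[0-10 words in between]” rule above is as a bounded word-gap search: the two stems match when at most ten words separate them. The sketch below is an assumed interpretation for illustration, not the deployed implementation:

```python
import re

def proximity_match(text, stem_a, stem_b, max_gap=10):
    """True if a word containing stem_a precedes a word containing
    stem_b with at most max_gap words in between (illustrative only)."""
    words = re.findall(r"\w+", text.lower())
    for i, w in enumerate(words):
        if stem_a in w:
            # allow up to max_gap intervening words before stem_b
            for j in range(i + 1, min(i + 2 + max_gap, len(words))):
                if stem_b in words[j]:
                    return True
    return False

proximity_match("reports olfactory and also tactile hallucinations",
                "olfactory", "hallucin")
```

Running the function in both stem orders reproduces the paired patterns listed above (olfactory→hallucin and hallucin→olfactory, etc.).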
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘olfact*’ or ‘gustat*’ or ‘tactile’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 78% Recall (sensitivity / coverage) = 68% Patient level – Testing done on 50 random documents, one document per patient: Precision (specificity / accuracy) = 86%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Hallucinations - Visual
Brief Description
Application to extract occurrences where visual hallucination is present. Visual hallucinations may be due to a diagnosis of psychosis/schizophrenia or may be due to other causes, e.g. due to substance abuse.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“Responding to visual hallucination”,
“Experiencing visual hallucination”,
“history of visual hallucination”,
“distressed by visual hallucination”
Exclude Negative annotations:
“denied any visual hallucination”,
“not responding to visual hallucination”,
“no visual hallucination”,
“no current visual hallucination (with no reference to past)”
Exclude Unknown annotations:
“if/may/possible/possibly/might have visual hallucinations”,
“monitor for possible visual hallucination”
Definitions: Search term(s): visual hallucinat*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘visual’ and ‘halluc*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Helplessness
Brief Description
Application to identify instances of helplessness.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“Ideas of helplessness secondary to her physical symptoms present”,
“ideation compounded by anxiety and a sense of helplessness”
Exclude Negative annotations:
“denies uselessness or helplessness”,
“no thoughts of hopelessness or helplessness”.
Exclude Unknown annotations:
“there a sense of helplessness”,
“helplessness is a common symptom”
Definitions: Search term(s): *helpless*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘helpless*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 93% Recall (sensitivity / coverage) = 86% Patient level – All patients with primary diagnosis of F32* or F33* in a structured field (random sample of 30 patients, one document per patient): Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Hopelessness
Brief Description
Application to identify instances of hopelessness.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“feeling very low and hopeless”,
“says feels hopeless”
Exclude Negative annotations:
“denies hopelessness”,
“no thoughts of hopelessness or helplessness”
Exclude Unknown annotations:
“there a sense of hopelessness”,
“hopelessness is a common symptom”
Definitions: Search term(s): *hopeles*
Evaluated Performance
Cohen's k = 90% (50 un-annotated documents - 25 events/25 attachments, search term ‘hopeless*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 90% Recall (sensitivity / coverage) = 95% Patient level – All patients with a primary diagnosis of F32* or F33* in a structured field (one document per patient): Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Hostility
Brief Description
Application to identify instances of hostility.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“increased hostility and paranoia”,
“she presented as hostile to the nurses”
Exclude Negative annotations:
“not hostile”,
“denied any feelings of hostility”
Exclude Unknown annotations:
“he may become hostile”,
“hostility is something to look out for”
Definitions: Search term(s): *hostil*
Evaluated Performance
Cohen's k = 94% (50 un-annotated documents - 25 events/25 attachments, search term ‘hostil*’). Instance level, i.e. for all specific mentions (testing done on 100 random documents): Precision (specificity / accuracy) = 89% Recall (sensitivity / coverage) = 94% Patient level – Random sample of 30 patients (one document per patient): Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Insomnia
Brief Description
Application to identify instances of insomnia.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“initial insomnia”,
“contributes to her insomnia”,
“problems with insomnia”,
“this has resulted in insomnia”,
“this will address his insomnia”
Exclude Negative annotations:
“no insomnia”,
“no evidence of insomnia”,
“not insomniac”
Exclude Unknown annotations:
“Typical symptoms include insomnia”,
“might have insomnia”,
“?insomnia”,
“possible insomnia”,
“monitor for insomnia”,
“insomnia has improved”
Definitions: Search term(s): *insom*
Evaluated Performance
Cohen's k = 94% (50 un-annotated documents - 25 events/25 attachments, search term ‘insomn*’). Instance level – random sample of 100 documents: Precision (specificity / accuracy) = 89% Recall (sensitivity / coverage) = 94% Patient level – All patients with primary diagnosis of F32* or F33* in a structured field, random sample of 50 patients (one document per patient): Precision (specificity / accuracy) = 94%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Irritability
Brief Description
Application to identify instances of irritability.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“can be irritable”,
“became irritable”,
“appeared irritable”,
“complained of feeling irritable”
Exclude Negative mentions:
“no evidence of irritability”,
“no longer irritable”,
“no sign of irritability”
Exclude Unknown annotations:
“irritable bowel syndrome”,
“becomes irritable when unwell”,
“can be irritable if …[NB some ambiguity with positive ‘can be’ mentions, although linked here with the ‘if’ qualifier]”,
“less irritable”
Definitions: Search term(s): *irritabl*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘irritabil*’ or ‘irritabl*’). Instance level – random sample of 100 documents: Precision (specificity / accuracy) = 100% Recall (sensitivity / coverage) = 83%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Loss of Coherence
Brief Description
Application to identify instances of incoherence or loss of coherence in speech or thinking.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“patient was incoherent”,
“his speech is characterised by a loss of coherence”
Exclude Negative annotations:
“patient is coherent”,
“coherence in his thinking”
Exclude Unknown annotations:
“coherent discharge plan”,
“could not give me a coherent account”,
“more coherent”,
“mood was coherent with speech”,
a few instances where coherence/incoherence was part of a heading or question
Definitions: Search term(s): coheren*, incoheren*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘incoheren*’). Instance level – random sample of 100 documents: Precision (specificity / accuracy) = 98% Recall (sensitivity / coverage) = 95% Patient level – All patients with primary diagnosis code F32* or F33* in a structured field, random sample of 50 (one document per patient): Precision (specificity / accuracy) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Low energy
Brief Description
Application to identify instances of low energy.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“low energy”,
“decreased energy”,
“not much energy”,
“no energy”
Exclude Negative annotations:
“no indications of low energy”,
“increased energy”
Exclude Unclear annotations:
“..., might be caused by low energy”,
“monitor for low energy”,
“energy levels have improved”,
“fluoxetine reduces her energy”,
“some energy bars”
Definitions: Search term(s): *energy*
Evaluated Performance
Cohen's k = 95% (50 un-annotated documents - 25 events/25 attachments, search term ‘energ*’). Instance level – random sample of 100 documents: Precision (specificity / accuracy) = 82% Recall (sensitivity / coverage) = 85% Patient level – All patients with primary diagnosis code F32* or F33* in a structured field, random sample of 50 (one document per patient): Precision (specificity / accuracy) = 76%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Mood instability
Brief Description
This application identifies instances of mood instability.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“she continues to have frequent mood swings”,
“expressed fluctuating mood”
Exclude Negative annotations:
“no mood fluctuation”
“no mood unpredictability”,
“denied diurnal mood variations”
Exclude Unknown annotations:
“she had harmed others in the past when her mood changed”,
“tried antidepressants in the past but they led to fluctuations in mood”,
“no change in mood”,
“her mood has not changed and she is still depressed”
Definitions: Search term(s):
Chang* [0-2 words in between] *mood*
Extremes [0-2 words in between] *mood*
Fluctuat* [0-2 words in between] *mood*
Instability [0-2 words in between] *mood*
*labil* [0-2 words in between] mood
Rapid cycling [0-2 words in between] mood
*swings* [0-2 words in between] mood
*unpredictable* [0-2 words in between] mood
unsettled [0-2 words in between] mood
unstable [0-2 words in between] mood
*variable* [0-2 words in between] mood
*variation* [0-2 words in between] mood
*volatile* [0-2 words in between] mood
mood [0-2 words in between] chang*
mood [0-2 words in between] Extremes
mood [0-2 words in between] fluctuat*
mood [0-2 words in between] Instability
mood [0-2 words in between] *labil*
mood [0-2 words in between] Rapid cycling
mood [0-2 words in between] *swings*
mood [0-2 words in between] *unpredictable*
mood [0-2 words in between] Unsettled
mood [0-2 words in between] Unstable
mood [0-2 words in between] *variable*
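The paired patterns above apply the same qualifier list to ‘mood’ in both directions with a 0-2 word gap. As an assumed illustration (not the deployed rules), they can be generated and matched programmatically:

```python
import re

# Qualifier list transcribed from the patterns above (wildcards as given).
QUALIFIERS = ["chang*", "extremes", "fluctuat*", "instability", "*labil*",
              "rapid cycling", "*swings*", "*unpredictable*", "unsettled",
              "unstable", "*variable*", "*variation*", "*volatile*"]

def to_regex(term):
    """Translate the catalogue's wildcard syntax into a regex fragment.
    Terms here contain only letters, spaces and asterisks."""
    return term.replace("*", r"\w*").replace(" ", r"\s+")

GAP = r"(?:\w+\s+){0,2}"  # 0-2 intervening words

def mood_patterns():
    pats = []
    for q in QUALIFIERS:
        frag = to_regex(q)
        pats.append(re.compile(rf"\b{frag}\s+{GAP}\w*mood\w*", re.IGNORECASE))
        pats.append(re.compile(rf"\bmood\w*\s+{GAP}{frag}", re.IGNORECASE))
    return pats

def mentions_mood_instability(text):
    return any(p.search(text) for p in mood_patterns())
```

Generating the pairs in code keeps the two directions symmetric, which the hand-written list above makes harder to guarantee.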
Evaluated Performance
Cohen's k = 91% (50 un-annotated documents - 25 events/25 attachments, search term ‘mood’). Instance level – random sample of 100 documents:
Precision (specificity / accuracy) = 100% Recall (sensitivity / coverage) = 70% Patient level – random sample of 50 patients (one document per patient): Precision (specificity / accuracy) = 72%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Mutism
Brief Description
Application to identify instances of mutism.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“she has periods of mutism”,
“he did not respond any further and remained mute”
Exclude Unknown annotations:
“her mother is mute”,
“muted body language”
Definitions: Search term(s): *mute*, *mutism*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘mut*’). Instance level – random sample of 100 documents:
Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 75% Patient level – random sample of 30 patients (one document per patient):
Precision (specificity / accuracy) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Negative Symptoms
Brief Description
Application to identify instances of negative symptoms.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“she was having negative symptoms”,
“diagnosis of schizophrenia with prominent negative symptoms”
Exclude Negative annotations:
“no negative symptom”,
“no evidence of negative symptoms”
Exclude Unknown annotations:
“symptoms present?”,
“negative symptoms can be debilitating”
Definitions: Search term(s): *negative* *symptom*
Evaluated Performance
Cohen's k = 85% (50 annotated documents - 25 events/25 attachments). Instance level – random sample of 100 documents:
Precision (specificity / accuracy) = 86% Recall (sensitivity / coverage) = 95% Patient level – random sample of 30 patients (one document per patient):
Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Nightmares
Brief Description
Application to identify instances of nightmares.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“she was having nightmares”,
“unsettled sleep with vivid nightmares”
Exclude Negative annotations:
“no nightmares”,
“no complains of having nightmares”
Exclude Unknown annotations:
“it’s been a nightmare to get this arranged”,
“a nightmare scenario would be….”
Definitions: Search term(s): nightmare*
Evaluated Performance
Cohen's k = 95% (50 un-annotated documents - events, search term ‘nightmare*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 89% Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Obsessive Compulsive Symptoms
Brief Description
Application to identify obsessive-compulsive symptoms (OCS) in patients with schizophrenia, schizoaffective disorder or bipolar disorder
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations of OCS include
• Text states that patient has OCD features/symptoms
• Text states that patient has OCS
• Text including hoarding, which is considered part of OCS, regardless of presence or absence of specific examples
• Text states that patient has either obsessive or compulsive or rituals or Yale-Brown Obsessive Compulsive Scale (YBOCS) [see keywords below] and one of the following:
o Obsessions or compulsions are described as egodystonic
o Intrusive, cause patient distress or excessive worrying/anxiety
o Patient feels unable to stop obsessions or compulsions
o Patient recognises symptoms are irrational or senseless
• Clinician provides specific YBOCS symptoms
• Text reports that patient has been diagnosed with OCD by clinician
Negative annotations of OCS include
• Text makes no mention of OCS
• Text states that patient does not have OCS
• Text states that patient has either compulsions or obsessions, not both, and there is no information about any of the following:
o Patient distress
o Obsessive or compulsive symptoms described as egodystonic
o Inability to stop obsessions or compulsions
o Description of specific compulsions or specific obsessions
o Patient insight
• Text states that non-clinician observers (e.g., patient or family/friends) believe patient has obsessions or compulsions without describing YBOCS symptoms.
• Text includes hedge words (i.e., possibly, apparently, seems) that specifically refers to OCS keywords
• Text includes risky, risk-taking or self-harming behaviours
• Text includes romantic or weight-related (food-related) words that modify OCS keywords
Evaluated Performance
Cohen's k = 80% (600 annotated documents for interrater reliability). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 72%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated: xx
Paranoia
Brief Description
Application to identify instances of paranoia. Paranoia may be due to a diagnosis of paranoid schizophrenia or may be due to other causes, e.g. substance abuse.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“vague paranoid ideation”,
“caused him to feel paranoid”
Exclude Negative annotations:
“denied any paranoia”,
“no paranoid feelings”
Exclude Unknown annotations:
“relative is paranoid about me”,
“paranoia can cause distress”
Definitions: Search term(s): *paranoi*
Evaluated Performance
Cohen's k = 92% (100 annotated documents - 25 events/69 attachments/1 mental state formulation/3 presenting circumstances/2 progress notes). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 86% Recall (sensitivity / coverage) = 94% Patient level – All patients, random sample of 50 (one document per patient), Precision (specificity / accuracy) = 82%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Passivity
Brief Description
Application to identify instances of passivity.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“patient describes experiencing passivity”,
“patient has experienced passivity in the past but not on current admission”
Exclude Negative annotations:
"denies passivity",
"no passivity".
Exclude Unknown annotations:
“passivity could not be discussed”,
“possible passivity requiring further exploration”,
“unclear whether this is passivity or another symptom”
Definitions: Search term(s): passivity
Evaluated Performance
Cohen's k = 83% (438 unannotated documents – search term ‘passivity’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 89% Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated: xx
Persecutory Ideation
Brief Description
Application to identify instances of ideas of persecution.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“she was having delusions of persecution”,
“she suffered persecutory delusions”,
“marked persecutory delusions”,
“paranoid persecutory ideations”,
“persecutory ideas present”
Exclude Negative annotations:
“denies persecutory delusions”,
“he denied any worries of persecution”,
“no persecutory delusions”,
“no delusions of persecution”,
“did not report persecutory ideas”,
“no persecutory ideation present”
Exclude Unknown annotation:
“this might not be a persecutory belief”,
“no longer experiencing persecutory delusions”
Definitions: Search term(s): [Pp]ersecu*
Evaluated Performance
Cohen's k = 91% (50 un-annotated documents - 25 events/25 attachments, search term ‘persecut*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 80% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Poor Appetite
Brief Description
Application to identify instances of poor appetite (negative annotations).
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations (applied to adjectives implying a good or normal appetite):
“Appetite fine”
“Appetite and sleep OK”,
“Appetite reasonable”,
“appetite alright”,
“sleep and appetite both preserved”
Exclude Negative annotations:
“loss of appetite”,
“reduced appetite”,
“decrease in appetite”,
“not so good appetite”,
“diminished appetite”,
“lack of appetite”
Exclude Unknown annotations:
“Loss of appetite as a potential side effect”,
“as an early warning sign”, “as a description of a diagnosis (rather than patient experience)”, “describing a relative rather than the patient”, “appetite suppressants”
Definitions: Search term(s): *appetite* within the same sentence as *eat* *well*, *alright*, excellent*, fine*, fair*, good*, healthy, intact*, not too bad*, no problem, not a concern*.
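Several definitions in this catalogue pair a keyword with qualifier words “within the same sentence”. A naive sketch of that check, using crude sentence splitting and single-word appetite qualifiers (illustrative only, not the deployed logic):

```python
import re

QUALIFIERS = {"well", "alright", "excellent", "fine", "fair", "good",
              "healthy", "intact"}

def same_sentence_hit(text: str) -> bool:
    """True if 'appetite' and a qualifier word co-occur in one sentence."""
    for sentence in re.split(r"[.!?]+", text):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        if "appetite" in words and words & QUALIFIERS:
            return True
    return False

print(same_sentence_hit("Appetite fine. Sleep poor."))    # True
print(same_sentence_hit("Appetite poor. Feeling fine."))  # False
```

Multi-word qualifiers such as “not too bad” would need phrase-level matching rather than the single-word check sketched here.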
Evaluated Performance
Cohen’s k = 91% (Done on 50 random documents). Instance level, Random sample of 100 random documents: Precision (specificity / accuracy) = 83%
Recall (sensitivity / coverage) = 71% Patient level – All patients with primary diagnosis code F32* or F33* in a structured field, random sample of 30 (one document per patient), Precision (specificity / accuracy) = 97%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Poor Concentration
Brief Description
Application to identify instances of poor concentration.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“my concentration is still poor”,
“she found it difficult to concentrate”,
“he finds it hard to concentrate”
Exclude Negative annotations:
“good attention and concentration”,
“participating well and able to concentrate on activities”,
“concentration is adequate or reasonable”
Exclude Unknown annotations:
“‘gave her a concentration solution”,
“talk concentrated on her difficulties”,
“urine is concentrated”,
“he is able to distract himself by concentrating on telly”
Definitions: Search term(s): *concentrat*
Evaluated Performance
Cohen's k = 95% (100 annotated documents – 45 attachments/3 CAMHS events/1 CCS correspondence/35 mental state formulation/1 POS Proforma/10 ward progress note). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 84% Recall (sensitivity / coverage) = 60% Patient level – All patients with primary diagnosis code F32* or F33* in a structured field, random sample of 50 (one document per patient), Precision (specificity / accuracy) = 76%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Poor Eye Contact
Brief Description
Application to identify instances of poor eye contact.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“looked unkempt, quiet voice, poor eye contact”,
“eye contact was poor”,
“she refused eye contact”,
“throughout the conversation she failed to maintain eye contact”,
“unable to engage in eye contact”,
“eye contact was very limited”,
“no eye contact and constantly looking at floor”
Exclude Negative mentions:
“good eye contact”,
“he was comfortable with eye contact”,
“she made eye contact whilst talking”,
“excessive eye contact was made throughout our conversation”,
“ZZZZZ made occasional eye contact with me”,
“eye contact was inconsistent”,
“Mr ZZZZZ made reasonable eye contact”,
“low voice, average eye contact”
Exclude Unknown mentions:
“she showed increased eye contact”,
“I noticed reduced eye contact today”
Definitions: Search term(s): Available on request
Evaluated Performance
Cohen’s k = 92% (100 annotated documents). Patient level – All patients, random sample of 100 Random Documents (one document per patient) Precision (specificity / accuracy) = 81% Recall (sensitivity / coverage) = 65%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Poor Insight
Brief Description
Application to identify instances of poor insight.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotation:
(An instance is classed as positive if the patient’s insight is minimal or absent)
“Lacking/ Lack of insight”
“Doesn’t have insight”
“No/ None insight”
“Poor insight”
“Limited insight”
“Insightless”
“Little insight”
Exclude Negative annotations (An instance is classed as negative if the patient displays a moderate or high degree of insight into their illness).
“Clear insight”
“Had/ Has insight”
“Improving insight”
“Partial insight”
“Some insight”
“Good insight”
“Insightful”
“Present insight”
Exclude Unknown annotations:
“There is a lengthy and unclear description of the patient’s insight, without a final, specific verdict”
“Insight was not assessed”
“The word ‘insight’ is not used in a psychiatry context, rendering it irrelevant”
“The record does not refer to the patient’s current level of insight, perhaps mentioning predicted/ previous levels instead”
“It doesn’t contain the above keywords, despite the general conclusion that can be drawn from it, as this would decrease the overall accuracy of the app”
“Lack of insight not suggestive of psychotic illness, e.g. ‘lack of insight into how his drinking affects his son’ or ‘lack of insight into how she repeats the same cycles with romantic partners’”
Definitions: Search term(s): insight
Evaluated Performance
Cohen’s k = 88% (50 un-annotated documents - 25 events/25 attachments, search term ‘insight*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 87% Recall (sensitivity / coverage) = 70% Patient level – Random sample of 50 (one document per patient)
Precision (specificity / accuracy) = 83%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Poor Motivation
Brief Description
This application aims to identify instances of poor motivation.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“poor motivation”,
“unable to motivate self”,
“difficult to motivate self”,
“struggling with motivation”
Exclude Negative annotations:
“patient has good general motivation”,
“participate in alcohol rehabilitation”,
“improving motivation”
Exclude Unknown annotations:
“tasks/groups designed for motivation”,
“comments about motivation but not clearly indicating whether this was high or low”,
“plans to ascertain motivation levels”,
“other use of the word (e.g. ‘racially motivated’)”
Definitions: Search term(s): Motivate* within the same sentence as lack*, poor, struggle*, no
Evaluated Performance
Cohen's k = 88% (50 un-annotated documents - 25 events/25 attachments, search term ‘motiv*’). Instance level, Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 85% Recall (sensitivity / coverage) = 45% Patient level – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Poverty Of Speech
Brief Description
Application to identify poverty of speech.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“he continues to display negative symptoms including blunting of affect, poverty of speech”,
“he does have negative symptoms in the form of poverty of speech”
“less poverty of speech”
Exclude Negative annotations:
“no poverty of speech”,
“poverty of speech not observed”
Exclude Unknown annotations:
“poverty of speech is a common symptom of…, “
“?poverty of speech”
Definitions: Search term(s): speech within the same sentence as poverty, impoverish.
Evaluated Performance
Cohen's k = 100% (50 annotated documents - 12 events/32 attachments/5 CCS_correspondence, 1 discharge notification summary). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 87% Recall (sensitivity / coverage) = 85% Patient level – Random sample of 30 (one document per patient), Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Poverty Of Thought
Brief Description
Application to identify instances of poverty of thought.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“poverty of thought was very striking”,
“evidence of poverty of thought”,
“some poverty of thought”
Exclude Negative mentions:
“no poverty of thought”,
“no evidence of poverty of thought”
Exclude Unknown mentions:
“poverty of thought needs to be assessed”,
“poverty of thought among other symptoms”
Definitions: Search term(s): *poverty* *of* *thought*
Evaluated Performance
Cohen's k = 90% (50 annotated documents). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 95%
Recall (sensitivity / coverage) = 93% Patient level – Random sample of 30 (one document per patient), Precision (specificity / accuracy) = 73%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Psychomotor Activity (Categorisation)
Brief Description
Application to identify instances of psychomotor activity and determine the level of activity
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
Positive/correct mentions identify the level of psychomotor activity (as denoted by the keyword) in the context (as denoted by the contextstring). In addition, the psychomotor_activity column correctly states whether the contextstring refers to abnormal levels of psychomotor activity.
For example: Keyword: ‘psychomotor agitation’; Contextstring: ‘patient showed psychomotor agitation’;
Negativity: ‘No’; psychomotor_activity: ‘psychomotor agitation’
Negative/incorrect/irrelevant mentions do not successfully identify the level of activity (as denoted by the keyword) in the context (as denoted by the contextstring), or an instance of psychomotor activity is noted as negated.
For example: Keyword: ‘psychomotor activity’; Contextstring: ‘normal psychomotor activity’; Negativity: ‘yes’; psychomotor_activity: ‘psychomotor activity’
Keyword: ‘psychomotor activity’; Contextstring: ‘change in psychomotor activity’; Negativity: ‘yes’; psychomotor_activity: ‘psychomotor activity’
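As a rough picture of how a rule-based annotator can populate the keyword/contextstring/Negativity columns shown above, here is a hypothetical sketch; the cue list is an illustrative assumption, not the application's actual rule set:

```python
# Hypothetical negation cues; the real rule set is more extensive.
NEGATION_CUES = ("no ", "not ", "denies ", "normal ", "change in ")

def annotate(contextstring: str, keyword: str) -> dict:
    """Emit a row like the examples above: the Negativity flag is set
    when a negation cue appears before the keyword in the context."""
    prefix = contextstring.lower().split(keyword.lower())[0]
    negated = any(cue in prefix for cue in NEGATION_CUES)
    return {"keyword": keyword,
            "contextstring": contextstring,
            "Negativity": "yes" if negated else "no",
            "psychomotor_activity": keyword}

print(annotate("patient showed psychomotor agitation",
               "psychomotor agitation")["Negativity"])  # no
print(annotate("normal psychomotor activity",
               "psychomotor activity")["Negativity"])   # yes
```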
Evaluated Performance
Cohen's k = xx. Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 92% Recall (sensitivity / coverage) = 92%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Smell
Brief Description
Application to identify symptoms of loss of smell.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“she has not recovered her sense of smell since she contracted COVID-19 in May 2021”,
“Complains of loss of smell and loss of tastes”
Exclude Negative annotations:
“denies any symptoms of loss of smell”,
” Her mother could not smell the food she made”.
Exclude Unknown annotations:
“no one else could smell it either”,
“she was unsure whether her smell had been affected”
Definitions: Search term(s): Loss of smell / Lack of smell
Evaluated Performance
Cohen's k = xx. Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 83% Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated: xx
Social Withdrawal
Brief Description
Application to identify instances of social withdrawal.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“she is withdrawn socially from friends and family”,
“Mr ZZZZZ became very isolated and socially withdrawn”,
“some social withdrawal”
Exclude Negative annotations:
“not being socially withdrawn”,
“no evidence of being socially withdrawn”
Exclude ‘Unknown’ annotations:
“social withdrawal is common in depression”,
“need to ask about social withdrawal”
Definitions: Search term(s): Social within the same sentence as withdraw.
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘withdraw*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 60% Recall (sensitivity / coverage) = 86% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Stupor
Brief Description
Application to identify instances of stupor. This includes depressive stupor, psychotic stupor, catatonic stupor, dissociative stupor and manic stupor.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“ZZZZ presented in a psychotic stupor”,
“man with stuporous catatonia”,
“he is in a depressive stupor”,
“his presentation being a schizoaffective stupor”,
“periods of being less responsive/stuporous”,
Exclude Negative annotations:
“not in the state of stupor”,
“presentation not suggestive of depressive stupor”,
“reported not feeling stuporous”
Exclude Unknown annotations:
“?manic stupor”,
“possible psychotic stupor however need to exclude medical cause”, and stupors induced by substance abuse, such as:
“drink himself to stupor”,
“drinking heavily and ending up stuporific”,
“drinking to a stupor”,
“drunken stupors”.
Definitions: Search term(s): Stupor*
Evaluated Performance
Cohen's k = 96% (50 un-annotated documents - 25 events/25 attachments, search term ‘stupor*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 88% Recall (sensitivity / coverage) = 87%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated: xx
DOI
Suicidal Ideation
Brief Description
Application to identify instances of suicidal ideation - thinking about, considering, or planning suicide.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“Her main concerns were his low mood QQQQQ suicidal ideation”,
“He has recently sent a letter to mom describing suicidal ideation”,
“QQQQQ then advised of suicidal ideation”
Exclude Negative annotations:
“There was no immediate risk in relation to self-harm or current suicidal ideation”,
“There has been no self-harm and no suicidal ideation disclosed to QQQQQ”,
“She denies having self-harming or suicidal ideation although sometimes would rather sleep and not get up in the morning”
Exclude ‘Unknown’ annotations:
“Suicidal ideation is a common symptom in depression”,
“It wasn’t certain if she was experiencing suicidal ideation”
Definitions: Search term(s): *suicide* ideat*
Evaluated Performance
Cohen's k = 92% (50 un-annotated documents - 25 events/25 attachments, search term ‘ideation’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 81% Recall (sensitivity / coverage) = 87%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated: xx
Tangentiality
Brief Description
Application to identify instances of tangentiality.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“he was very tangential lacked goal directed thinking”,
“there was evidence of tangential speech”
Exclude Negative mentions:
“no evidence of formal thought disorder or tangentiality of thoughts”,
“there was no overt tangentiality or loosening of associations”
Exclude ‘Unknown’ annotations:
“there can be tangentiality”,
“FTD is characterised by tangentiality”,
“go off on a tangent”
Definitions: Search term(s): *tangent*
Evaluated Performance
Cohen's k = 81% (50 un-annotated documents - 25 events/25 attachments, search term ‘tangent*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 99% Recall (sensitivity / coverage) = 90% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 97%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Taste
Brief Description
Application to identify symptoms of loss of taste within community populations
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“the patient reported loss of enjoyment of food due to loss of taste”,
“COVID symptoms present such as loss of taste”
Exclude Negative mentions:
“the patient denied loss of taste”,
“patients’ mother reported loss of taste due to COVID”
Exclude ‘Unknown’ annotations (when loss of taste is referenced in an automated letter or email between colleagues, or when it is not clear whether the patient has symptoms/experiences of loss of taste):
“the patient is not sure if he has lost his taste”,
“don’t come to the practice if you have any COVID symptoms such as loss of taste etc.”
Definitions: Search term(s): Loss of taste*, lack of taste*
Evaluated Performance
Cohen's k = xx. Instance level, Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 95% Recall (sensitivity / coverage) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Tearfulness
Brief Description
Application to identify instances of tearfulness
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“appeared tearful”,
“was tearful (including was XX and tearful; was tearful and YY)”,
“became tearful”,
“moments of tearfulness”,
“a bit tearful”
Exclude Negative mention:
“not tearful”,
“no tearfulness”,
“denies feeling tearful”,
“no tearful episodes”
Exclude ‘Unknown’ annotations:
“less tearful”,
“couldn’t remember being tearful”, and
statements applying to another person (e.g. mother was tearful) or to a person not clearly identified as the patient.
Definitions: Search term(s): *tearful*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘tearful*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 100% Recall (sensitivity / coverage) = 94% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Thought Block
Brief Description
Application to identify instances of thought block.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“showed some thought block”,
“thought block”,
“paucity of thought”
Exclude Negative mentions:
“denies problems with thought block”,
“no thought block elicited”
Exclude ‘Unknown’ annotations:
“thought block can be difficult to assess”,
“…among thought block and other symptoms”
Definitions: Search term(s): *thought* *block*
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘thought block*’) Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 75% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Thought Broadcast
Brief Description
Application to identify instances of thought broadcasting.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
"patient describes experiencing thought broadcasting",
"patient has experienced thought broadcasting in the past but not on current admission".
Exclude Negative annotations:
"denies thought broadcasting",
"no thought broadcasting".
Exclude Unknown annotation:
" thought broadcasting could not be discussed",
"possible thought broadcasting requiring further exploration",
"unclear whether this is thought broadcasting or another symptom". -Definitions: Search term(s): Though* within the same sentence of broadcast*
Evaluated Performance
Cohen's k = 94% (95 unannotated documents – search term ‘thought broadcast*’). Instance level, Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 86% Recall (sensitivity / coverage) = 92%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
Thought Insertion
Brief Description
Application to identify instances of thought insertion.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
"patient describes experiencing thought insertion" or "patient has experienced thought insertion in the past but not on current admission".
Exclude Negative mentions:
"denies thought insertion",
"No thought insertion".
Exclude Unknown annotations:
"thought insertion could not be discussed",
"possible thought insertion requiring further exploration",
"Unclear whether this is thought insertion or another symptom". Definitions: Search term(s): Though* [0-2 words] insert*
Evaluated Performance
Cohen's k = 97% (96 unannotated documents – search term ‘thought insert*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 81% Recall (sensitivity / coverage) = 96%
Additional Notes
Run schedule – On Request
Other Specifications
Version 1.0, Last updated: xx
Thought Withdrawal
Brief Description
Application to identify instances of thought withdrawal.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
"patient describes experiencing thought withdrawal" ,
"patient has experienced thought withdrawal in the past but not on current admission".
Exclude Negative mentions:
"denies thought withdrawal",
"no thought withdrawal".
Exclude Unknown annotations:
"thought withdrawal could not be discussed",
"possible thought withdrawal requiring further exploration",
"unclear whether this is thought withdrawal or another symptom" Definitions: Search term(s): Though* [0-2 words] withdraw*
Evaluated Performance
Cohen's k = 95% (76 unannotated documents – search term ‘thought withdraw*’). Instance level, Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 90% Recall (sensitivity / coverage) = 88%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated: xx
DOI
Waxy Flexibility
Brief Description
Application to identify instances of waxy flexibility. Waxy flexibility is a psychomotor symptom of catatonia, associated with schizophrenia, bipolar disorder, and other mental disorders, which leads to a decreased response to stimuli and a tendency to remain in an immobile posture.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“she presents as catatonic with waxy flexibility”,
“exhibiting waxy flexibility”
Exclude Negative mentions:
“no waxy flexibility”,
“no evidence of waxy flexibility”
Exclude ‘Unknown’ annotations:
“his right pre-tibial region was swollen and waxy and slightly pink”,
“waxy flexibility is a very uncommon symptom” Definitions: Search term(s): *waxy*
Evaluated Performance
Cohen's k = 96% (50 un-annotated documents - 25 events/25 attachments, search term ‘waxy’). Instance level, Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 80% Recall (sensitivity / coverage) = 86% Patient level – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Weight Loss
Return to contents
Brief Description
Application to identify instances of weight loss.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“significant weight loss”,
“pleased with his weight loss”
Exclude Negative mentions:
“no weight loss”,
“denies weight loss”.
Exclude Unknown annotations:
“maintain adequate dietary intake and avoid weight loss”,
“the latter reduced in line with weight loss” Definitions: Search term(s):
Loss [0-2 words in between] *weight*
Lost [0-2 words in between] *weight*
Weight* [0-2 words in between] loss
Weight* [0-2 words in between] lost
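The search patterns above, with their "[0-2 words in between]" gaps, can be approximated as regular expressions; the sketch below is illustrative only, not the service's actual implementation.

```python
# Illustrative sketch of the weight-loss search patterns above;
# "[0-2 words in between]" is rendered as a gap of up to two words.
import re

GAP = r"(?:\W+\w+){0,2}\W+"  # zero to two intervening words
PATTERNS = [
    re.compile(r"\b(?:loss|lost)" + GAP + r"\w*weight\w*", re.IGNORECASE),
    re.compile(r"\bweight\w*" + GAP + r"(?:loss|lost)\b", re.IGNORECASE),
]

def mentions_weight_loss(text):
    """True if any of the weight-loss candidate patterns match."""
    return any(p.search(text) for p in PATTERNS)
```

Note that this only finds candidate mentions; as described above, the machine-learning classifier then separates positive mentions from negated or unknown ones.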
Evaluated Performance
Cohen’s k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘weight* loss’, ‘loss* weight’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 90% Recall (sensitivity / coverage) = 88%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Worthlessness
Return to contents
Brief Description
Application to identify instances of worthlessness
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“feeling worthless”,
“feels hopeless and worthless”
Exclude Negative mentions:
“no worthlessness”,
“denies feelings of worthlessness”
Exclude Unknown annotations:
“his father had told him that he was worthless”,
“would call them worthless” Definitions: Search term(s): *worthless*
Evaluated Performance
Cohen's k = 82% (50 un-annotated documents - 25 events/25 attachments, search term ‘worthless*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 88% Recall (sensitivity / coverage) = 86% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Physical Health Conditions
Asthma
Return to contents
Brief Description
Application to identify patients with diagnosis of asthma.
Development Approach
Development approach: Sem-EHR
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
‘past medical history: eczema, asthma’,
‘diagnosed with asthma during childhood’,
‘uses inhaler to manage asthma symptoms’,
‘suffered from an asthma attack’,
‘ZZZZZ suffers from severe asthma’,
‘Mrs ZZZZZ has mild asthma’. Definitions: Search term(s): Ontology available on request
Evaluated Performance
Cohen’s k = 98% (50 patients from patient level testing, 50 documents from annotation level testing, search term ‘asthma’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 95% Recall (sensitivity / coverage) = 84%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Bronchitis
Return to contents
Brief Description
Application to identify patients with diagnosis of bronchitis
Development Approach
Development approach: Sem-EHR
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
‘Recently had COPD (chronic obstructive pulmonary disease)’,
‘ZZZZ had chronic bronchitis’,
‘Past diagnosis: chronic obstructive airway disease’,
‘physical health history: asthma, bronchitis’,
‘centrilobular emphysema’. Definitions: Search term(s): Ontology available on request
Evaluated Performance
Cohen's k = xx Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 85% Recall (sensitivity / coverage) = 48% Patient level – Random sample of 30 (one document per patient), Precision (specificity / accuracy) = 94%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Cough
Return to contents
Brief Description
Application to identify instances of coughing.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“She has been experiencing a cough for the last week and is going to call her GP”,
“ZZZ called ahead of today’s session reporting a cough so we agreed to move the session to over the phone due to current COVID guidance”,
“He has been to the GP due to coughing up sputum”
Exclude Negative mentions:
“she denied any coughing or shortness of breath”,
“He stated he was unwell with a cold last week, no cough or cough reported”,
“Fever, cough, shortness of breath: Nil”
Exclude ‘Unknown’ annotations:
“She is feeling very distressed because people were coughing near her on the bus”,
“Her son is currently off school with bad cough”
Definitions: Search term(s): Cough*
Evaluated Performance
Cohens k = 79% (150 un-annotated documents, search terms ‘cough*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 83% Recall (sensitivity / coverage) = 80%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Crohn's Disease
Return to contents
Brief Description
Application to identify patients with diagnosis of Crohn’s disease.
Development Approach
Development approach: Sem-EHR
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Include Positive mentions:
“recently been diagnosed with crohn’s disease”,
“ZZZZ has crohn’s disease”,
“she has a history of crohn’s disease”,
“has been hospitalised due to severe crohn’s disease”,
“physical health history: asthma, diabetes, hypertension, crohn’s disease” Definitions: Search term(s): Ontology available on request
Evaluated Performance
Cohen’s k = 98% (50 patients from patient level testing, 50 documents from annotation level testing, search term
‘crohn*’). Patient level – Random sample of 50 (one document per patient) Precision (specificity / accuracy) = 94% Recall (sensitivity / coverage) = 78%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Falls
Return to contents
Brief Description
Application to identify instances of falls or falling.
Development Approach
Development approach: Rules-Based
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Output:
• Fall_single_episode: any reference to a single fall (regardless of when it happened), e.g. ‘he fell last night’, ‘he had one fall 10 years ago’.
• Fall_recurrent: any reference to more than one fall (regardless of when they happened), e.g. ‘he reported recurrent falls’, ‘she had a couple of falls’.
• Not_relevant: to capture irrelevant mentions or false positives, e.g. ‘in the fall’, ‘falling in love’, or other mentions such as risk of falling (‘a side effect of this medication is risk of falling’).
Note 1: positive annotations must refer to the patient and not someone else.
His mother had one fall > NOT_RELEVANT
Note 2: hypothetical statements should not be counted
If she took this medication, she might be at risk of falling > NOT_RELEVANT
Note 3: classes should be chosen on an annotation level: “She had a fall 10 months ago and then had another fall yesterday” should end up as two single-episode annotations, but “she had a couple of falls: 10 months ago and yesterday” would end up as a FALL_RECURRENT
Note 4: accidental falls are to be considered relevant
He fell from the bed > FALL_SINGLE_EPISODE
Note 5: mentions where a fall is "suggested" but not explicitly written (e.g. 'Fall pendant','Falls clinic', 'Falls referral', 'Falls prevention advice') should be considered as NOT_RELEVANT Definitions: Search term(s): Fall*, fell
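A hypothetical sketch of the precedence implied by the rules and notes above (exclusions first, then recurrent vs single-episode) is shown below. The keyword lists are illustrative only, and subject or hypothetical detection (Notes 1 and 2) needs sentence-level context in practice.

```python
# Hypothetical sketch of the falls classification precedence described above.
import re

NOT_RELEVANT = re.compile(
    r"fall pendant|falls clinic|falls referral|falls prevention"
    r"|risk of fall|in the fall|falling in love|his mother|her mother",
    re.IGNORECASE,
)
RECURRENT = re.compile(r"recurrent falls|couple of falls|\bfalls\b", re.IGNORECASE)
SINGLE = re.compile(r"\b(?:a|one)\s+fall\b|\bfell\b", re.IGNORECASE)

def classify_fall(mention):
    """Classify one candidate mention of fall*/fell."""
    # Exclusions (Notes 1, 2 and 5) take precedence.
    if NOT_RELEVANT.search(mention):
        return "NOT_RELEVANT"
    if RECURRENT.search(mention):
        return "FALL_RECURRENT"
    if SINGLE.search(mention):
        return "FALL_SINGLE_EPISODE"
    return "NOT_RELEVANT"
```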
Evaluated Performance
Cohen's k = xx Patient level – Random sample of 50 (one document per patient) Precision (specificity / accuracy) = 77% Recall (sensitivity / coverage) = 58%
Additional Notes
Run schedule – On Request
Other Specifications
Version 1.0, Last updated:xx
Fever
Return to contents
Brief Description
Application to identify patients with any symptom of fever developed within the last month.
Development Approach
Development approach: Machine-learning
Classification of past or present symptom: Past.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“She informed me on the phone she has had a fever all week”,
“ZZZ has been taking paracetamol for a fever”,
“Attended A&E reporting fever”,
“She felt feverish”
Exclude Negative mentions:
“I asked if she had any symptoms, such as fever, which she denied”,
“Temperature was checked for signs of fever, none observed”
“Cough, fever, shortness of breath: Nil”
Exclude Unknown annotations:
“Her son had a fever last night and she can’t make it to today’s session”
“She reported worrying over what to do if the baby developed a fever”
“I have informed her that if symptoms worsen, or she develops a fever, to attend A&E”
Definitions: Search term(s): fever*
Evaluated Performance
Cohen's k = xx Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 85% Recall (sensitivity / coverage) = 86%
Additional Notes
Run schedule – On Request
Other Specifications
Version 1.0, Last updated:xx
Hypertension
Return to contents
Brief Description
Application to identify patients with diagnosis of hypertension or high blood pressure
Development Approach
Development approach: Sem-EHR
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions:
“Recently been diagnosed with hypertension”,
“ZZZZ has high blood pressure”,
“she has a history of hypertension”,
“physical health history: asthma, diabetes, high blood pressure” Definitions: Search term(s): Ontology available on request
Evaluated Performance
Cohen’s k = 91% (50 patients from patient level testing, 50 documents from annotation level testing, search term
‘hypertension*’, ‘high blood pressure*’). Instance level, Random sample of 200 Random Documents: Precision (specificity / accuracy) = 94% Patient level – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 94% Recall (sensitivity / coverage) = 94%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Multimorbidity - 21 Long Term Conditions (MedCAT)
Return to contents
Brief Description
Application to identify patients with diagnosis of physical health conditions (21 conditions in total, including arthritis, asthma, atrial fibrillation, cerebrovascular accident, chronic kidney disease, chronic liver disease, chronic obstructive lung disease, chronic sinusitis, coronary arteriosclerosis, diabetes mellitus, eczema, epilepsy, heart failure, systemic arterial hypertensive disorder, inflammatory bowel disease, ischemic heart disease, migraine, multiple sclerosis, myocardial infarction, Parkinson's disease, psoriasis, transient ischemic attack).
Development Approach
Development approach: Machine-learning
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions:
“He reported that he suffers from diabetes and hypertension”,
“Ms ZZZZ has a history of atopy including asthma”,
“Physical health history: asthma, diabetes, high blood pressure: Nil” ,
“Physical health: lung disease confirmed”
Evaluated Performance
Cohen's k = 91% (Done on 50 random documents)
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Pain
Return to contents
Brief Description
Application to determine if a mention of pain (or related words, such as sore, ache, *algia, *dynia etc.) within the text is relevant i.e. associated with the patient and refers to physical pain.
Development Approach
Development approach: Machine-learning
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
‘she is in constant pain’;
‘he suffers from severe headaches’;
‘he is taking pain killers due to a pulled muscle’. Definitions: Search term(s): %dynia%, %algia%, %burn%, % headache%, % backache%, % toothache%, % earache%, % ache%, %sore%, %spasm%, % colic%, % cramp%, % hurt%, % sciatic%, % tender%, % pain %, % pains%, % painful%
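The search terms above use SQL LIKE wildcards ('%' matching any run of characters). The sketch below shows how such a term list might be applied outside SQL by converting each LIKE pattern to a regular expression; this is assumed behaviour for illustration, not the service's code.

```python
# Hypothetical sketch: convert SQL LIKE patterns ('%' wildcard) to regexes.
import re

def like_to_regex(pattern):
    # Escape each literal segment, then join with '.*' in place of '%'.
    return re.compile(".*".join(re.escape(p) for p in pattern.split("%")),
                      re.IGNORECASE)

# A small subset of the catalogue's pain terms, for illustration.
TERMS = [like_to_regex(t) for t in ["%algia%", "% headache%", "% pain %"]]

def has_pain_term(text):
    return any(t.search(text) for t in TERMS)
```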
Evaluated Performance
Cohen’s k = 86% for Attachment (based on 865 annotations)
Cohen’s k = 91% for Event (based on 458 annotations). Patient level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 78%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Rheumatoid Arthritis
Return to contents
Brief Description
Application to identify patients with diagnoses of rheumatoid arthritis.
Development Approach
Development approach: Sem-EHR
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive mentions include:
“ZZZZZ has been in pain due to her rheumatoid arthritis”,
“she has been bedbound with rheumatoid arthritis this week”,
“medication for her rheumatoid arthritis”,
“physical health comorbidities: hypertension, rheumatoid arthritis”,
“diagnosed with rheumatoid arthritis in 1988” Definitions: Search term(s): Ontology available on request
Evaluated Performance
Cohen’s k = 98% (50 patients from patient level testing, 50 documents from annotation level testing, search term ‘rheumatoid arthritis’) Patient level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 91% Recall (sensitivity / coverage) = 86%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
HIV
Return to contents
Brief Description
Application to identify instances of HIV diagnosis.
Development Approach
Development approach: Machine-learning
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive mentions include:
ZZZZZ was diagnosed with HIV (only include cases where a definite HIV diagnosis is present in the text) Definitions: Search term(s): hiv
Evaluated Performance
Cohen's k = 98% (Done on 50 random documents). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 70% Recall (sensitivity / coverage) = 100% Patient level – Precision (specificity / accuracy) = 64%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
HIV Treatment
Return to contents
Brief Description
Application to identify instances of HIV treatment.
Development Approach
Development approach: Machine-learning
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-Positive mentions:
Include any positive references to the search terms. Definitions: Search term(s): Anti-retroviral, antiretroviral, ARV, HAART, cART, ART, CD4, Undetectable, Abacavir, Lamivudine, Zidovudine, Aptivus, Atazanavir, Atripla, Celsentri, Cobicistat, Combivir, Darunavir, Didanosine, Dolutegravir, Edurant, Efavirenz, Elvitegravir, Emtricitabine, Emtriva, Enfuvirtide, Epivir, Etravirine, Eviplera, Fosamprenavir.
Evaluated Performance
Cohen's k = xx Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 98% Recall (sensitivity / coverage) = 100% Patient level – Precision (specificity / accuracy) = 76%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
Contextual Factors
Amphetamine
Return to contents
Brief Description
To identify instances of amphetamine use.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Include Positive mentions:
“denies current use of amphetamine, however last reported using 3 months ago”,
“first took amphetamines at the age of 15”,
“UDS: +ve amphetamine”,
“ZZZZZ has been trying to give up amphetamine for the last 2 months”,
“ZZZZZ was found in possession of large quantities of amphetamines”,
“She admitted to having bought amphetamine 2 days ago” ,
“amphetamine-psychosis”
Exclude Negative mentions:
“ZZZZZ denies use of alcohol and amphetamine”,
“ZZZZZ has not used amphetamine for the last week”,
“-ve: amphetamine”
Exclude ‘Unknown’ mentions:
“ZZZZZZ’s mother has a history of amphetamine abuse” – subject other than patient,
“ZZZZZ is planning on taking amphetamine this weekend” – future or conditional event,
“We discussed the dangers of amphetamine” Definitions: Search term(s): Amphetamin*
Evaluated Performance
Cohen's k = 84% (50 un-annotated documents - 25 events/25 attachments, search term ‘amphetamine*’). Instance level, Random sample of 100 Random Documents: Precision (specificity / accuracy) = 80% Recall (sensitivity / coverage) = 84% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 90%
Additional Notes
Run schedule – Monthly
Other Specifications
Version xx, Last updated:xx
Cannabis
Return to contents
Brief Description
To identify instances of cannabis use.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“He is a cannabis smoker”,
“she smoked cannabis when at uni”
“she stopped using cannabis 3 years ago”
Exclude Negative mentions:
“denied taking any drugs including cannabis”,
“no cannabis use”
Exclude ‘Unknown’ annotations:
“she stated in hash voice”,
“pot of yoghurt”,
“father cannabis user”,
“pot for UDS” Definitions: Search term(s): cannabis, skunk, weed, pot, marijuana, grass, THC, hash, cannabinoids, resin, hashish, weeds, Cannabis-, spices, Spice, ganja, CBD, cannabis-induced, Cannabinoid, cannabies, grasses, Cannaboids, cannabbase, cannabis-free, skunk-cannabis, Hashis, cannabis-related, cannabi, cannabise, cannabis-use, cannabus, cannabiss, weed-skunks
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search terms ‘cannabis’, ‘marijuana’, ‘weed’, ‘pot’, ‘hash’, ‘skunk’, ‘resin’, ‘spice*’). Instance level (Overall), Random sample of 100 Random Documents: Precision (specificity / accuracy) = 77% Recall (sensitivity / coverage) = 93% Instance level (Current) – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 72% Patient level – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 93%
Additional Notes
Run schedule – Monthly
Other Specifications
Version xx, Last updated:xx
Chronic Alcohol Abuse
Return to contents
Brief Description
This application identifies instances of chronic alcohol abuse within CRIS clinical text, where the subject of the mention is the patient.
Development Approach
Development approach: Machine-learning.
Search terms(s): ‘Alcoholism’, ‘an alcoholic’, ‘problem* drinker’, ‘drinking problem’, ‘problem with drink*’, ‘problem with alcohol’, ‘alcohol problem’, ‘excessive drinker’, ‘drink* in excess’, ‘consumes alcohol excessively’, ‘consumes alcohol in excess’, ‘heavy drinking’, ‘drink* heavily’, ‘drink* excessively’, ‘alcohol related disorder’, ‘alcohol use disorder’, ‘consumes excessive amounts of alcohol’, ‘regularly gets drunk’.
Output and Definitions
There are 2 classes of annotation, Class 1 (positive), and Class 0 (negative).
Positive annotations should include all instances where there is an explicit reference to regular excess consumption of alcohol, or problematic drinking. Positive annotations should include both present and historical instances.
1. ‘zzzz has a drinking problem’; or ‘zzz regularly drinks too much’; or ‘zzz is an alcoholic’.
2. ‘zzz had a historic drinking problem’; or ‘zzz used to drink too much’
Non-regular instances of excessive drinking should be negatively annotated.
E.g., ‘zzz drank too much last night’
Instances that refer to historic excessive alcohol consumption but state that it is no longer an issue should also be negatively annotated.
E.g., ‘used to drink excessively but does not anymore’
Evaluated Performance
Cohen’s k = 92%, 100 annotations. Instance level i.e. for all specific mentions (testing done on 100 random documents).
Precision (specificity / accuracy) = 85%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
Cocaine or Crack Cocaine
Return to contents
Brief Description
To identify instances of cocaine or crack cocaine use.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes-
Positive annotations:
“denies current use of cocaine, however last reported using 3 months ago”,
“first smoked cocaine at the age of 15”,
“UDS: +ve cocaine”,
“ZZZZZ has been trying to give up cocaine for the last 2 months”,
“ZZZZZ was found in possession of large quantities of cocaine”,
“She admitted to having bought cocaine 2 days ago” ,
“He has stopped taking cocaine”.
Exclude Negative annotations:
“ZZZZZ denies use of street drugs such as cocaine”,
“ZZZZZ has not used cocaine for the last week”,
“Crack N” – form style.
Exclude ‘Unknown’ annotations:
“ZZZZZZ’s mother has a history of crack abuse” – subject other than the patient,
“ZZZZ is planning on taking cocaine this weekend” – future or conditional events,
“When cooking he decided to crack the eggs open” – irrelevant ,
“ZZZZZ believes cocaine isn’t good for people” – irrelevant,
“We discussed the dangers of crack”. Definitions: Search term(s): Cocaine*, crack
Evaluated Performance
Cohen's k = 95% (50 un-annotated documents - 25 events/25 attachments, search term ‘cocaine*’). Instance level (Overall), Random sample of 100 Random Documents: Precision (specificity / accuracy) = 84% Recall (sensitivity / coverage) = 97% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 97%
Additional Notes
Run schedule – Monthly
Other Specifications
Version xx, Last updated:xx
MDMA
Return to contents
Brief Description
Application to identify instances of MDMA use
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes- Positive annotations:
“denies current use of MDMA, however last reported using 3 months ago”,
“first took MDMA at the age of 15”,
“UDS: +ve MDMA”,
“ZZZZZ has been trying to give up MDMA for the last 2 months”,
“ZZZZZ was found in possession of large quantities of MDMA”,
“She admitted to having bought MDMA 2 days ago”,
“He has stopped taking MDMA”
Exclude Negative annotations:
“ZZZZZ denies use of street drugs such as MDMA”,
“ZZZZZ has not used MDMA for the last week”,
“UDS -ve: MDMA”
Exclude ‘Unknown’ annotations:
“ZZZZZZ’s mother has a history of MDMA abuse” – subject other than the patient,
“ZZZZ is planning on taking MDMA this weekend” – future or conditional events,
“ZZZZZ believes MDMA isn’t good for people” – irrelevant,
“We discussed the dangers of MDMA”. Definitions: Search term(s): mdma
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search term ‘mdma’). Instance level (Overall), Random sample of 100 Random Documents: Precision (specificity / accuracy) = 100% Recall (sensitivity / coverage) = 99% Patient level – Random sample of 30 (one document per patient)
Precision (specificity / accuracy) = 87%
Additional Notes
Run schedule – Monthly
Other Specifications
Version xx, Last updated:xx
Smoking
Return to contents
Brief Description
This application distinguishes between people who are a) current smokers, b) ex-smokers (have smoked in the past but do not smoke currently) and c) never-smokers. The application may at times return contradictory information on the same patient, since a patient may start and stop smoking, and because the level of information available to the clinician varies.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Output and Definitions
The output includes-
Status:
One of the following must be annotated in the status feature:
Never: clearly not smoking currently or just a general message that the subject does NOT smoke. Ex: “…is a non-smoker”, “… was/is not a smoker”, “… doesn’t smoke”, “ZZZZZ denies ever smoking”, or “… is currently not smoking”
Current: a clear message that the subject is currently smoking
Ex: “…smokes 20 cigarettes a day”, “… has been smoking for 10 years”, “…is a smoker”, “ZZZZZ smokes in the ward”, “…went to garden for a smoke”, “ZZZZZ is stable when smoking”, “…has a history of heavy smoking”, “Consider stopping smoking”, “ZZZZZ found smoking in her room” or “… is a tobacco user”
Past: any indication that the subject smoked in the past
Ex: “… used to smoke”, “… has quit smoking”, “… stopped smoking”, “ZZZZZ is an ex-smoker” or “… was a smoker”
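A rough sketch of the three status classes using cue phrases drawn from the examples above is given below; the deployed rule-based application is considerably richer.

```python
# Rough sketch of the smoking status classes, using cue phrases drawn from the
# catalogue's examples. Illustrative only.
import re

NEVER = re.compile(r"non-?smoker|doesn'?t smoke|denies ever smoking|not a smoker",
                   re.IGNORECASE)
PAST = re.compile(r"used to smoke|ex-?smoker|stopped smoking|quit(?:ted)? smoking"
                  r"|was a smoker", re.IGNORECASE)
CURRENT = re.compile(r"\bsmokes\b|is a smoker|been smoking|found smoking"
                     r"|tobacco user", re.IGNORECASE)

def smoking_status(text):
    """Return 'never', 'past', 'current', or None for a single mention."""
    # Negations first, so "is not a smoker" is not read as CURRENT.
    if NEVER.search(text):
        return "never"
    if PAST.search(text):
        return "past"
    if CURRENT.search(text):
        return "current"
    return None
```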
Evaluated Performance
Cohen's k = xx Instance level (Overall) , Random sample of 100 Random Documents: Precision (specificity / accuracy) = 85% Recall (sensitivity / coverage) = 89% Patient level – Random sample of 30 (one document per patient) Precision (specificity / accuracy) = 95%
Additional Notes
Run schedule – Weekly
Other Specifications
Version xx, Last updated:xx
Education
Return to contents
Brief Description
Application to identify the highest level of education at patient level.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Output and Definitions
The output includes- Output:
Group 1: A level group
Rule – Stage of course
Accepted – Accepted for A-level course or equivalent (course or institution)
Ongoing – Started course but not (yet) completed (including evidence of attending relevant institution)
Dropped out – Started course but not completed - dropped out
Expelled – Started course but not completed - expelled
Failed – Completed course – failed all exams
Completed – Completed course
Passed – Passed at least one exam
Applied_undergrad – Applied for university / course
Note: aspirations, plans, or applications alone are not accepted.
Group 2: GCSE group
Rule – Stage of course
Ongoing – Started GCSE course (or equivalent) but not (yet) completed
Completed – Completed GCSE course or equivalent
Passed – Passed at least one exam (GCSE or equivalent)
Applied_A-level – Applied for 6th form (college) / A-level
Group 3: University
Rule – Stage of course
Accepted – Accepted for course / institution
Ongoing – Started course but not (yet) completed
Dropped out – Started course but not completed - dropped out
Expelled – Started course but not completed - expelled
Failed – Completed course – failed
Completed – Completed course
Passed – Passed / graduated
Applied_University – Applied for university
Group 4: unqualified group
Rule – Definition
Unqualified – A specific reference in the notes describing the patient as having left school without any qualifications.
GCSE_Dropped_out – Started GCSE course but not completed - dropped out
GCSE_Expelled – Started GCSE course but not completed - expelled
GCSE_Failed – Completed GCSE course – failed all exams
School leaving age
Examples:
“He left school at the age of 16 years”
“Was 19 years old when she left school”
“Mrs ZZZZZ left school at 15 without any qualifications”
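The application's description says it identifies the highest level of education at patient level. A hypothetical sketch of how per-mention annotations might be reduced to that patient-level value is shown below; the ranking and the choice of which rules count as "achieved" are assumptions for illustration, not taken from the service.

```python
# Hypothetical sketch: reduce per-mention (group, rule) annotations to a
# patient-level highest education. Ranking is an assumption.
ORDER = ["unqualified", "gcse", "a_level", "university"]

def highest_education(annotations):
    """annotations: iterable of (group, rule) pairs, e.g. ('gcse', 'Passed')."""
    achieved = [group for group, rule in annotations
                if rule in {"Passed", "Completed"} or group == "unqualified"]
    if not achieved:
        return None
    # Highest-ranked achieved group wins.
    return max(achieved, key=ORDER.index)
```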
Evaluated Performance
GCSE – Cohen's k = 90% (50 annotated documents – 25 events, 25 attachments)
No qualifications - Cohen’s k = 100% (50 annotated documents – 25 events, 25 attachments)
Additional Notes
Run schedule – On request
Other Specifications
Version xx, Last updated:xx
Occupation
Return to contents
Brief Description
Application to identify occupations/work descriptions and who these refer to.
Development Approach
Development approach: Machine- learning and Rule-Based.
Classification of past or present symptom: Both.
Output and Definitions
The output includes-
Output:
There are two parts to each annotation: Firstly, the occupation feature is annotated - this could be a job title, for example a ‘builder’; or a job description, for example ‘working in construction’. Secondly, the occupation relation is annotated: who the occupation belongs to, for example the patient or their family member.
Unpaid occupational categories were included (e.g. student, unemployed, homemaker, volunteer). Depending on the text available, extractions can state a specific job title (e.g. head-teacher) or a general occupational category (e.g. self-employed).
Work aspirations were excluded from annotations. Frequently extracted health/social care occupations (e.g. psychiatrist) are not annotated as belonging to the patient, in order to maximise precision.
Occupation feature (text) – the job title (e.g. ‘hairdresser’)
Occupation relation (text) – who the occupation belongs to (e.g. ‘patient’)
The full annotation guideline document is available on request. Definitions: Search term(s): Gazetteer available on request
Evaluated Performance
Cohen’s k = 77% for occupation feature (200 ‘personal history’ documents)
Cohen’s k = 72% for occupation relation (200 ‘personal history’ documents). Instance level (Overall) , Random sample of 200 Personal History Documents:
Precision (specificity / accuracy) = 77% Recall (sensitivity / coverage) = 79% Instance level (Overall) , Random sample of 82 Personal History Documents from records of patients aged >= 16 years Precision (specificity / accuracy) = 96% Patient level – Random sample of 82 Personal History Documents from records of patients aged >= 16 years Precision (specificity / accuracy) = 96%
Additional Notes
Run schedule – On request
Other Specifications
Version xx, Last updated:xx
Lives Alone
Return to contents
Brief Description
Application to identify instances of living alone.
Development Approach
Development approach: Rule-Based.
Classification of past or present symptom: Both.
Output and Definitions
The output includes-
Output:
The application identifies the following:
“Lives on her own” – Subject: none,
“She lives alone” – Subject: She,
“He presently lives alone on 7th floor” – Subject: He,
“His father lives alone” – Subject: Father. Definitions: Search term(s): Lives alone, Lives by himself, Lives by herself, Lives on his own, Lives on her own
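A simplified sketch of matching the search terms while capturing the subject, mirroring the example output above, is given below; the subject alternatives are a toy list, and real subject resolution needs surrounding context.

```python
# Simplified sketch of the lives-alone patterns with subject capture.
import re

PATTERN = re.compile(
    r"(?:(?P<subject>he|she|his father|her mother)\s+(?:presently\s+)?)?"
    r"(?:lives alone|lives on (?:his|her) own|lives by (?:him|her)self)",
    re.IGNORECASE,
)

def find_lives_alone(text):
    """Return the captured subject, 'none' if absent, or None if no match."""
    m = PATTERN.search(text)
    if not m:
        return None
    return m.group("subject") or "none"
```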
Evaluated Performance
Cohen's k = 100% (50 un-annotated documents - 25 events/25 attachments, search terms ‘lives on his/her own’, ‘lives by him/herself’, ‘lives alone’). Instance level (Overall), Random sample of 100 Random Documents: Precision (specificity / accuracy) = 77%
Recall (sensitivity / coverage) = 83% Precision (Subject) = 61%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
Loneliness
Brief Description
Application to identify instances of loneliness.
Development Approach
Development approach: Machine-learning.
Classification of past or present symptom: Both.
Classes produced: Positive
Output and Definitions
The output includes -
Positive mentions:
“the patient is lonely”,
“the patient confirms they have a sense/feeling of loneliness”,
“preventing further loneliness”
Exclude Negative mentions:
“Patient is not lonely”,
“denies being lonely”.
Exclude ‘Unknown’ mentions:
“the patient’s family member is lonely”;
“they are participating in an activity on a ward to prevent boredom/loneliness”,
“EHR discusses the prevention of loneliness”,
Instances where a clinician suspects loneliness, or where there ‘might be loneliness’, but this is not declared or agreed by the patient, are classified as unknown.
Loneliness indicated on a form as a heading or question is classified as unknown.
Definitions: Search term(s): lonely, loneliness
Evaluated Performance
Cohen’s k = 81% (100 unannotated documents, search terms ‘lonely’, ‘loneliness’).
Instance level (Overall), Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 87%
Recall (sensitivity / coverage) = 100%
Additional Notes
Run schedule – Monthly
Other Specifications
Version 1.0, Last updated:xx
DOI
Violence
Brief Description
Application to identify and classify different types of violence.
Development Approach
Development approach: BERT model.
Classification of past or present symptom: Both.
Output and Definitions
The output includes-
Violence_type – describes the type of violence identified; values are Emotional, Financial, Irrelevant, Physical (non-sexual), Sexual and Unspecified.
Physical (non-sexual): e.g., punching, hitting, slapping, or attack with a weapon, where the violence is not sexual in nature.
‘She was beaten by her boyfriend’ or ‘someone spat at him’
Sexual: e.g., sexual assault, rape.
‘Patient showed sexually inappropriate behaviour’ or ‘zzz was raped when she was 12 years old’
Emotional: e.g., gaslighting, psychological abuse
‘He was psychological abused from the age of 10’ or ‘She is subjected to emotional abuse by her boyfriend’
Financial: financial and economic abuse e.g., withholding of money
‘zzz has suffered from financial abuse’ or ‘She has had problems with stress, money problem, financial abuse and relationship problems’
Unspecified: violence is being discussed but the type of violence has not otherwise been specified.
‘He was abuse as a child by his father’ or ‘he has a history of being abused’
Irrelevant: not a mention of violence; the keyword is ambiguous and, in the sense used, does not refer to violence. In this case, no other attributes are added.
‘Patient mentioned that her child was exposed to an abuse relationship’ or ‘He often feels this as a stabbing pain’
Temporality – describes when the violence occurred. Values are Past, Recent and Unclear. Details of values are:
Past: The violence occurred more than a year ago.
e.g., ‘zzz was sexually abuse several years ago in a past relationship’ or ‘zzz was subjected to emotional abuse throughout childhood’ (when discussing an adult)
Recent: Violence is occurring now, in the past year.
e.g., ‘zzz was hit last week’
Unclear: It is not clear if the violence being described is past or recent.
e.g., ‘zzz was coerced in to taking part against his will’
Presence – values are Actual, Threat and Unclear. Details of values are:
Threat: e.g., ‘He made a gesture as if to attack staff’ or ‘The patient threatened to hit me’
Actual: e.g., ‘he hit the nurse’
Kw_text – the keyword picked up from the text, on which the violence type decision is based.
Polarity- Values are Abstract, Affirmed and Negated. Details for values are given below:
Affirmed - The mention of violence is discussing something that has happened, including threats.
Negated - The mention of violence is discussing something that has not happened, such as the absence of violence.
e.g., ‘No violence or aggression noted’
Abstract - Violence is mentioned, but is being conjectured, speculated, or hypothesised about. For example, possible violence or risks of violence.
‘Clinician wondered whether there was emotional abuse’
Patient_role – values are Perpetrator, Unclear, Victim and Witness. Details are:
Victim: The patient was the victim of the violence.
e.g., ‘zzz mentioned that she was gaslighted by ex-partner’
Perpetrator: The patient perpetrated the violence
e.g., ‘Patient made threats to kill and attack staff’
Witness: The patient witnessed the violence, rather than being a victim or perpetrator.
e.g., ‘zzz saw his sister being punch by dad’
Setting – values are Domestic, Not domestic and Not known. Details are:
Domestic: Violence occurred in a domestic setting, including intimate partner violence (family members, intimate partners, ex-intimate partners and household members).
e.g., ‘Patient stabbed his roommate’ or ‘She was hit by her boyfriend’
Not domestic: Violence did not happen in a domestic setting.
e.g., ‘She was abused whilst in care’
Not known: Whether or not the setting was domestic is not known.
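Taken together, the attributes above define one annotation record per keyword hit. A minimal sketch of that record shape follows; field names and value sets follow the attribute descriptions above, but the record class itself and the example values are illustrative assumptions, not the application's actual output format.

```python
from dataclasses import dataclass

# Sketch of one extracted violence annotation. Field names and value
# sets follow the attribute descriptions above; the class itself is an
# illustrative assumption, not the application's actual output format.
@dataclass
class ViolenceAnnotation:
    violence_type: str  # Emotional, Financial, Irrelevant, Physical (non-sexual), Sexual, Unspecified
    temporality: str    # Past, Recent, Unclear
    presence: str       # Actual, Threat, Unclear
    kw_text: str        # keyword picked up from the text
    polarity: str       # Affirmed, Negated, Abstract
    patient_role: str   # Victim, Perpetrator, Witness, Unclear
    setting: str        # Domestic, Not domestic, Not known

example = ViolenceAnnotation(
    violence_type="Physical (non-sexual)",
    temporality="Recent",
    presence="Actual",
    kw_text="hit",
    polarity="Affirmed",
    patient_role="Victim",
    setting="Domestic",
)
print(example.kw_text)  # hit
```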
Evaluated Performance
Physical Violence – Random sample of 50 documents:
Precision (specificity / accuracy): Type=76%, Setting=78%, Presence=78%, Patient_role=64%, Polarity=74%.
Search term(s): “abus”, “assault”, “attack”, “beat”, “fight”, “hit”, “punch”, “push”, “threw”, “violenc”.
Financial Violence – Random sample of 50 documents:
Precision (specificity / accuracy): Type=98%, Setting=86%, Presence=96%, Patient_role=92%, Polarity=96% .
Search term(s): “abus”, “assault”, “economic abus”, “financial abus”, “financially abus”, “struck”, “violenc”.
Emotional Violence – Random sample of 50 documents:
Precision (specificity / accuracy): Type=100%, Setting=98%, Presence=100%, Patient_role=96%, Polarity=100%.
Search term(s): “abus”, “emotional abus”, “emotional manipulat”, “emotionally abus”, “gaslight”, “psychological abus”.
Sexual Violence – Random sample of 50 documents:
Precision (specificity / accuracy): Type=82%, Setting=90%, Presence=84%, Patient_role=78%
Search term(s): “abus”, “assault”, “rape”.
Additional Notes
Run schedule – Quarterly
Other Specifications
Version 2.0, Last updated:xx
Interventions
CAMHS - Creative Therapy
Brief Description
This application extracts creative therapy interventions (art, play, music and drama) provided by CAMHS for children and young people from the free text.
Development Approach
Development approach: TextHunter.
Classification of past or present instance: Both.
Search term(s):
“art” within a few words of “assessment”, “need*”, “intake”, “appointment”, “appt”, “support”, “intervention”, “session”, “saw”, “therapy”, “follow up”, “refresher”, or “top-up”.
“play” within a few words of “assessment”, “need*”, “intake”, “appointment”, “appt”, “support”, “intervention”, “session”, “saw”, “therapy”, “follow up”, “refresher”, or “top-up”.
“music” within a few words of “assessment”, “need*”, “intake”, “appointment”, “appt”, “support”, “intervention”, “session”, “saw”, “therapy”, “follow up”, “refresher”, or “top-up”.
“drama” within a few words of “assessment”, “need*”, “intake”, “appointment”, “appt”, “support”, “intervention”, “session”, “saw”, “therapy”, “follow up”, “refresher”, or “top-up”.
“art: seen”, “play: seen”, “music: seen”, “drama: seen”, “art: reviewed”, “play: reviewed”, “music: reviewed”, “drama: reviewed”.
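The proximity rules above can be approximated with a regular expression that requires a therapy keyword within a fixed word window of a context keyword. This is an illustrative sketch only; the five-word window and the exact keyword handling are assumptions, and the deployed application's rules may differ.

```python
import re

# Sketch of the proximity rules listed above: a creative-therapy keyword
# within a few words of a context keyword. The five-word window is an
# assumption; the deployed application's rules may differ.
THERAPY = r"(?:art|play|music|drama)"
CONTEXT = (r"(?:assessment|need\w*|intake|appointment|appt|support|"
           r"intervention|session|saw|therapy|follow\s+up|refresher|top-up)")
WINDOW = r"(?:\W+\w+){0,5}?\W+"  # up to five intervening words

PATTERN = re.compile(
    rf"\b{THERAPY}\b{WINDOW}{CONTEXT}\b|\b{CONTEXT}\b{WINDOW}{THERAPY}\b",
    re.IGNORECASE,
)

def mentions_creative_therapy(text):
    """True if a therapy keyword occurs near a context keyword."""
    return PATTERN.search(text) is not None

print(mentions_creative_therapy("had an art session with the O.T"))  # True
print(mentions_creative_therapy("she enjoys painting at home"))      # False
```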
Output and Definitions
The output includes -
Positive examples:
"engaging with ART therapy",
"she joined others in music and dancing session. Staff engages her with musical instruments. ZZZZZ appears pleasant in the music session",
"had an art session with the O.T this afternoon and did colouring".
Negative examples:
"mum also thought that art therapy separately for ZZZZZ could be helpful",
"mum keen to access therapy for children, either art/play therapy or counselling to help process feelings around the accident",
Evaluated Performance
Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 84%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx
CAMHS - Dialectical Behaviour Therapy (DBT)
Brief Description
This application extracts Dialectical Behaviour Therapy (DBT) interventions provided by CAMHS for children and young people from the free text.
Development Approach
Development approach: TextHunter.
Classification of past or present instance: Both.
Search term(s):
“Dialectical Behaviour Therapy” within a few words of “assessment”, “need*”, “intake”, “appointment”, “Appt”, “support”, “intervention”, “session”, “therapy”, “saw”, “attended”, “continued”, “follow up”, “refresher”, or “top-up”.
“DBT” within a few words of “assessment”, “need*”, “intake”, “appointment”, “Appt”, “support”, “intervention”, “session”, “therapy”, “attended”, “saw”, “continued”, “follow up”, “refresher”, or “top-up”.
“Dialectical Behaviour Therapy: seen”, “DBT: seen”, “Dialectical Behaviour Therapy: reviewed”, “DBT: reviewed”.
Output and Definitions
The output includes -
Positive examples:
"Attended a DBT group yesterday and needed some encouragement",
"She frequently attends groups help by the therapy such as DBT and Movement therapy.",
"ZZZZZ came to his first DBT group",
Negative examples:
"It discuss about mum's DBT therapy",
"parents/carers to be invited to DBT assessment:",
"A number of options for further support were discussed including DBT skills sessions".
Evaluated Performance
Random sample of 100 Random Documents:
Precision (specificity / accuracy) = 92%
Additional Notes
Run schedule – On request
Other Specifications
Version 1.0, Last updated:xx