Adolescent Use of Social Media: What Does It Mean for Mental Health?

By Katelyn Teng

A central focus of neuroethics lies at the intersection of existing and emerging technologies and the promotion of brain health. Nearly all adolescents (97%) use at least one form of social media (1), and 14% of adolescents experience a mental health condition (2). Adolescence is also a period of heightened risk for developing mental illness (3). Given the rising use of social media and the high incidence of mental health conditions among adolescents, parents are left wondering: How should they weigh the evidence when making decisions about their children’s social media use?

Can social media be used to promote good mental health in youth?

Social media can be an important facet of adolescent life (4). It offers adolescents an opportunity to interact socially and to build communities with people they don’t necessarily see face to face. Durable support networks are meaningful resources that can help adolescents avoid developing mental illnesses (5). And, importantly, social media provides an accessible way for youth to find and receive mental health support through online communities — many of which include people with similar lived experiences (6). Mental health practitioners report that reduced isolation, social skill development, and accessible communication benefit the mental wellbeing of younger users. These online communities allow users to educate themselves about their mental health struggles and provide a platform to discuss their experiences with others who can contribute resources and support that may otherwise be difficult to obtain outside of social media (6). Social media can also be a platform to advocate for positive mental health strategies and to promote them to peers, creating an accessible venue for mental health help.

Adolescents are also subject to external risk factors for developing mental health conditions, including academic pressure and friendship dynamics, that require coping strategies to maintain good mental health (5). Social media can serve as a tool to alleviate stress, for example by providing distraction or relaxation, and could help adolescents cope with these stressors. Younger people report that their main goals in using apps for mental health purposes are to calm down, to maintain long-term wellbeing, and to access other resources for support (8).

Is social media contributing to mental illnesses in young audiences?

Despite these potential benefits, social media also has the potential to cause harm. Many parents opt to mediate or disallow social media use, and adolescents are particularly susceptible to social media’s influence given the nature of this developmental period (3). Reports of self-esteem struggles, poor sleep habits, and unrealistic standards set on social media have made some parents uncomfortable with their children’s use of it (3,5). Peers can use social media to drive social exclusion and influence risky behaviour; externalized issues such as bullying, for example, can fuel internalized issues such as struggles with self-image (5). Youth may also feel pressured to perfect the way they present themselves to their peers, resulting in meticulously edited content. Over time, adolescents can develop a habit of comparing themselves both to their own online image and to what they see from their peers, and as social media use grows more popular, this tendency for comparison is exacerbated.

Are social media platforms ethically responsible for safe social media use?

Social media platforms have an important role in ensuring that their platforms benefit, rather than harm, adolescent users. As such, there is an ethical need to measure and assess the functionalities these platforms have in place to meet this goal. For example, platforms often moderate the content of posts with the intention of preventing harmful content from being shared or viewed – however, this practice has not been strongly supported by evidence of positive outcomes. Content moderation has been shown to potentially counter positive mental health engagement by limiting access to helpful resources, regardless of intention (9). Because these policies can shape adolescent mental health and overall social experience, platforms need to make conscious efforts to ground their safety policies in evidence of effectiveness. While it is ethically necessary for platforms to consider strategies for maintaining adolescent health, this is not always followed in practice, so parents and younger people should be informed about the content moderation policies of the platforms they use. Familiarity with how platforms moderate content can help parents and adolescents decide on a safe platform that works for them.

Should children be on social media?

Using social media without internet safety or mental health education can be detrimental; however, adolescents also report that social media is meaningful to them for connecting with peers and engaging with online discourse. Additionally, research suggests that the style of parenting and of mediating social media use influences how younger people use these platforms (7). For example, parental mediation that supports adolescents’ autonomy may lead to less time on social media, less risky social media use, and a reduction in anxiety or depression symptoms. Supporting autonomy can look like developing age-appropriate rules around social media and taking the views of adolescents into consideration (7). As such, the way parents respond to social media use can be a formative factor in a child’s interactions with, and attitudes towards, social media.

Users’ experiences with social media vary with the features and functionalities of each platform. When choosing mental health supports on their phones, adolescents prioritize features such as accessibility, quality of intervention, security, customizability, and usability. The credibility and safety of an app are also valued among younger users (8). Understanding what adolescents value in social media, and which platforms they are likely to choose, can help parents better assess platforms for safety and mental health support capabilities.

Whether or not social media should be used and/or moderated is a personal parenting decision that should be well researched and discussed between parents and their children. Informed use by both parties contributes to overall safety when engaging with social media for social or mental health support. While social media can pose risks for people of any age, it is important to consider the benefits that youth may realize when given the opportunity to learn to navigate social media in a safe and healthy way.

References

  1. Vogels EA, Gelles-Watnick R, Massarat N. Teens, Social Media and Technology 2022 [Internet]. Pew Research Center. 2022 Aug 10 [cited 2023 Jul 6]. Available from: https://www.pewresearch.org/internet/2022/08/10/teens-social-media-and-technology-2022/
  2. Mental health of adolescents [Internet]. World Health Organization. 2021 [cited 2023 Jul 6]. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health
  3. Nesi J. The Impact of Social Media on Youth Mental Health: Challenges and Opportunities. North Carolina Medical Journal. 2020 Mar 01;81(2):116–21.
  4. Barry CT, Sidoti CL, Briggs SM, Reiter SR, Lindsey RA. Adolescent social media use and mental health from adolescent and parent perspectives. Journal of Adolescence. 2017 Sep;61(1):1-11.
  5. O’Reilly M. Social media and adolescent mental health: the good, the bad and the ugly. Journal of Mental Health. 2020 Jan 28;29(2):200-06.
  6. O’Reilly M, Dogra N, Hughes J, Reilly P, George R, Whiteman N. Potential of social media in promoting mental health in adolescents. Health Promotion International. 2019 Oct;34(5):981–91.
  7. Beyens I, Keijsers L, Coyne SM. Social media, parenting, and well-being. Current Opinion in Psychology. 2022 Oct;47:101350.
  8. Kabacińska K, McLeod K, MacKenzie A, Vu K, Cianfrone M, Tugwell A, Robillard JM. What criteria are young people using to select mobile mental health applications? A nominal group study. Digital Health. 2022 May;8.
  9. Zhang CC, Zaleski G, Kailley JN, Teng KA, English M, Riminchan A, Robillard JM. Debate: Social media content moderation may do more harm than good for youth mental health. Child and Adolescent Mental Health. 2024 Feb;29(1):104-106.

Katelyn Teng is an undergraduate research assistant in the NEST Lab under the supervision of Dr. Julie Robillard. She is pursuing a BSc in Neuroscience at the University of British Columbia and is passionate about mental health advocacy and technology’s role in patient experience. Outside of work and school, she can be found baking sweet treats, collecting her favourite vinyl records, and spending time with friends and family.

Placebo and the ethics of deception in brain research and clinical care

Join us for the 2024 Brain Awareness Week Annual Neuroethics Distinguished Lecture featuring Dr. A. Jon Stoessl!
 

Tuesday, March 12, 2024
5:30 PM – 7:00 PM
C300 Theatre, UBC Robson Square
800 Robson Street, Vancouver, BC, V6Z 3B7 (map)

Everyone is welcome! This public in-person event is free, but RSVP is required: https://bit.ly/2024baw 

Overview
The placebo effect is powerful in many neurological and psychiatric disorders, and clinical trials often use placebos when developing and testing new treatments. Some people question the ethics of including a placebo group in research, while others would argue that not doing so is ethically fraught. In some cases, estimating the placebo effect and uncovering its underlying mechanisms may depend upon the use of deception, but this may conflict with basic principles of autonomy.

Deception requires careful thought as to whether it is necessary and, if so, how it will be managed in an ethically acceptable manner. While there have been advances showing that genetic factors may contribute to the placebo effect, the idea that placebo responders should be excluded from clinical trials may be scientifically unsound and may further violate the principle of social justice. The use of placebos in clinical care is more controversial: while there may be benefits, there may also be risks and additional ethical challenges. Health care providers need to be sensitive to the impact of deception not only on their own relationships with patients, but also on trust in the profession as a whole.

A. Jon Stoessl, CM, MD
Dr. A. Jon Stoessl is Professor and immediate past Head (2009-2023) of Neurology at the University of British Columbia (UBC). He was previously Director of the Pacific Parkinson’s Research Centre and Parkinson’s Foundation Centre of Excellence (2001-2014). Dr. Stoessl was Co-Director (2014-2019), then Director (2019), of UBC’s Djavad Mowafaghian Centre for Brain Health. He previously held a Tier 1 Canada Research Chair in Parkinson’s Disease. Dr. Stoessl is Editor-in-Chief of Movement Disorders, has served on numerous other editorial boards including Lancet Neurology and Annals of Neurology, previously chaired the Scientific Advisory Boards of Parkinson’s Canada and the Parkinson’s Foundation, and is Past-President of the World Parkinson Coalition. He was Chair of the Local Organizing Committee and Co-Chair of the Congress Scientific Program Committee for the 2017 MDS Vancouver Congress.

Dr. Stoessl uses positron emission tomography to study chemical changes in the brain with the objective of gaining a better understanding of the causes and complications of Parkinson’s disease (PD) and its treatment, as well as how PD can be used as a model to better understand dopamine functions in the brain. He has published more than 300 papers and book chapters, and has been cited more than 31,000 times in the scientific literature with an h-index of 79 (Google Scholar).

Dr. Stoessl is a Member of the Order of Canada and was recognized with the Queen Elizabeth Jubilee Medal. He is a Fellow of the Canadian Academy of Health Sciences.

Brain Awareness Week
Brain Awareness Week is the global campaign to foster public enthusiasm and support for brain science. Every March, partners host imaginative activities in their communities that share the wonders of the brain and the impact brain science has on our everyday lives.

The paradox of automation bias: Challenges to minimizing error in Human-AI healthcare collaboration

By Cindy Zhang

It is not unusual for surgeons to begin operating on a brain tumor patient without knowing exactly what type of tumor the patient has, or what type of surgery is required to safely remove it. Glioma is a common form of brain tumor and requires different surgical approaches depending on its subtype (1). Since it is difficult to determine subtypes before surgery, tissue samples are often sent to a pathologist during the operation (1). This takes about 10 to 15 minutes while the patient’s brain lies exposed on the surgical table (2). Due to poor-quality samples and stressful conditions, misdiagnoses can occur (2).

In 2023, researchers developed a new artificial intelligence (AI) tool nicknamed “CHARM” to identify glioma subtypes from tissue samples (1). If approved for clinical use, CHARM could facilitate fast and accurate diagnoses during surgeries – a significant advancement in neurosurgery.

This is just one example of AI’s tremendous potential in medicine. However, new discoveries come with new challenges – is the field of neuroethics ready to overcome them?

AI as support tools, not replacements

The notion that AI might completely replace humans in healthcare is intriguing. However, given the current state of AI technology, it is far more productive to discuss its role as a support tool rather than a replacement.

This is the view advocated by digital health researcher Dr. Emre Sezgin (3). Rather than treating AI as a replacement for doctors, Sezgin emphasizes a human-in-the-loop approach in which AI tools support healthcare providers towards better decision-making. The AI tool can offer recommendations, and the healthcare provider can evaluate its outputs and make the final judgement. This approach can supposedly help in “reducing potential errors or biases” (3).

The human-in-the-loop approach is supported by evidence. In one study, no single automated system could outperform human radiologists in breast cancer screening, but the accuracy of screening improved when the radiologist and the AI worked together (4).

Despite its positive potential, Dr. Sezgin notes that there are important organizational challenges to adopting more AI tools in healthcare (3). The algorithmic systems would require rigorous evaluation to ensure they meet safety standards (5–7). Hospitals and clinics would need to review their policies to make sure that new AI practices are aligned with local laws and regulations while also preparing to deal with issues of information security, liability, and service reimbursement, to name a few (8,9). The list goes on.

Figure 1. “AI adoption to enable doctor–AI collaboration and considerations” by Emre Sezgin, CC BY-NC 4.0.

The risk of over-trusting medical AI tools

Even if AI technologies become safe, effective, and available within healthcare, their success can still be undermined by our unconscious biases towards AI.

Automation bias is the human tendency to accept an algorithm’s recommendations without sufficiently questioning or verifying them (10). This can lead us to miss important errors. In healthcare, using an erroneous automated decision support tool can increase the likelihood of following incorrect advice by 26% compared to working without the tool (11). In a human-in-the-loop approach, healthcare providers are at risk of automation bias when making the final judgement based on AI recommendations. This creates a dilemma.

Of course, AI tools should be held to high safety standards. Their rate of error should be very low – but they might never be perfect. Paradoxically, the more reliable the AI algorithm, the higher the likelihood that its human users will overlook an error (10). If an AI tool is highly accurate, there is less incentive to spend time and effort verifying its outputs. This is sometimes referred to as automation complacency rather than bias, but both involve similar attention processes (10).
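To see the arithmetic behind this paradox, consider a toy model in which users verify a tool’s outputs less often as its track record improves. The verification rates below are hypothetical illustrations, not figures from the cited studies.

```python
# Toy model of automation complacency: as a tool's accuracy rises,
# users verify less, so any error that does occur is more likely missed.
# All numbers are hypothetical, chosen only to illustrate the paradox.

scenarios = [
    # (AI accuracy, probability the user double-checks an output)
    (0.80, 0.50),
    (0.95, 0.20),
    (0.99, 0.05),
]

for accuracy, verify_rate in scenarios:
    missed_given_error = 1 - verify_rate            # an error slips by when unchecked
    overall_missed = (1 - accuracy) * missed_given_error
    print(f"accuracy {accuracy:.0%}: "
          f"P(overlooked | error) = {missed_given_error:.0%}, "
          f"overall missed errors = {overall_missed:.1%}")
```

Note that the total rate of missed errors still falls as accuracy rises; what climbs is the chance that any given error goes unchecked, which is exactly the complacency effect described above.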

Organizational challenges to medical AI are important to address. However, it is equally important to address psychological challenges like automation bias to ensure the best use of medical AI.

How do we minimize automation bias towards medical AI?

It is important to train and educate healthcare providers about newly adopted AI tools (12). Within this training, it is also crucial to acknowledge the risk of automation bias and teach ways to mitigate it – for example, by setting high standards for verifying AI recommendations.

Another solution is to design computational tools with features that minimize automation bias. AI tools that provide confidence estimates alongside their recommendations can help healthcare providers gauge the reliability of each output and prompt them to verify low-confidence recommendations (13). However, the paradox is apparent: even high-confidence recommendations may contain errors, and a high confidence estimate may discourage verification and worsen automation bias.
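As a sketch of what such a feature might look like in software, the fragment below flags low-confidence recommendations for mandatory review. The `Recommendation` type, the labels, and the 0.9 threshold are assumptions invented for illustration, not part of any cited tool.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str         # e.g., a proposed diagnosis (hypothetical example)
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

REVIEW_THRESHOLD = 0.9  # illustrative cutoff; a real tool would calibrate this

def triage(rec: Recommendation) -> str:
    if rec.confidence < REVIEW_THRESHOLD:
        return f"VERIFY before acting: {rec.label} ({rec.confidence:.0%})"
    # The paradox in code: high-confidence outputs skip the prompt,
    # so the rare errors among them are the least likely to be caught.
    return f"Accepted: {rec.label} ({rec.confidence:.0%})"

print(triage(Recommendation("glioma, subtype A", 0.97)))
print(triage(Recommendation("glioma, subtype B", 0.62)))
```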

Recent movements toward Explainable AI may also help ease the “black box” problem, helping healthcare providers scrutinize the AI algorithm’s decision-making process for greater accuracy.

Human-in-the-loop approaches to AI in healthcare are promising. Sometimes the machine in the loop needs debugging – so, too, does the human. We need to examine our psychological biases in order to ensure a smooth-running system.

This blog post is based on the original essay ‘All in our heads: Cognitive biases as psychological barriers to the successful adoption and use of medical artificial intelligence’ by Cindy Zhang, nominated as Honourable Mention in the 2023 Neuroethics Essay Contest by the International Neuroethics Society (INS) and International Youth Neuroscience Association (IYNA).

References

1.         Nasrallah MP, Zhao J, Tsai CC, Meredith D, Marostica E, Ligon KL, et al. Machine learning for cryosection pathology predicts the 2021 WHO classification of glioma. Med. 2023 Jun 29;S2666-6340(23)00189-7.

2.         Sasani A. New AI tool can help treat brain tumors more quickly and accurately, study finds. The Guardian [Internet]. 2023 Jul 7 [cited 2023 Jul 9]; Available from: https://www.theguardian.com/science/2023/jul/07/brain-tumors-gliomas-ai-tool

3.         Sezgin E. Artificial intelligence in healthcare: Complementing, not replacing, doctors and healthcare providers. DIGITAL HEALTH. 2023 Jan 1;9:20552076231186520.

4.         Schaffter T, Buist DSM, Lee CI, Nikulin Y, Ribli D, Guan Y, et al. Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Network Open. 2020 Mar 2;3(3):e200265.

5.         Center for Devices and Radiological Health. Artificial Intelligence and Machine Learning in Software as a Medical Device. FDA [Internet]. 2023 Oct 20 [cited 2024 Jan 7]; Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

6.         Federal Trade Commission. Using Artificial Intelligence and Algorithms [Internet]. 2020 [cited 2024 Jan 7]. Available from: https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms

7.         Price WN. Health Care AI: Law, Regulation, and Policy. Book Chapters [Internet]. 2019 Jan 1; Available from: https://repository.law.umich.edu/book_chapters/357

8.         Abràmoff MD, Roehrenbeck C, Trujillo S, Goldstein J, Graves AS, Repka MX, et al. A reimbursement framework for artificial intelligence in healthcare. npj Digit Med. 2022 Jun 9;5(1):1–6.

9.         Wolff J, Pauling J, Keck A, Baumbach J. Success Factors of Artificial Intelligence Implementation in Healthcare. Frontiers in Digital Health [Internet]. 2021 [cited 2024 Jan 7];3. Available from: https://www.frontiersin.org/articles/10.3389/fdgth.2021.594971

10.       Parasuraman R, Manzey DH. Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum Factors. 2010 Jun 1;52(3):381–410.

11.       Goddard K, Roudsari A, Wyatt JC. Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association. 2012 Jan 1;19(1):121–7.

12.       Wartman SA, Combs CD. Reimagining Medical Education in the Age of AI. AMA J Ethics. 2019 Feb 1;21(2):E146-152.

13.       McGuirl JM, Sarter NB. Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information. Hum Factors. 2006;48(4):656–65.

What does ChatGPT know about dementia?

Post by Jill Dosso

This blog post summarizes results from the peer-reviewed journal article “What does ChatGPT know about dementia? A comparative analysis of information quality” published in the Journal of Alzheimer’s Disease, 2023.

The need for online information about Alzheimer’s Disease and dementia

Persons living with dementia and their care partners often wish for access to more and better information about living with the condition [1–3]. While healthcare providers are a valued resource for this information, their time is limited, and access can be challenging. At least 40% of older adults seek health information online [4], though this number may be much higher in some populations [5,6]. Online information about dementia exists on many virtual platforms, including social media, and varies widely in quality [1–3,7–10].

A recent analysis identified a number of barriers to online information access for persons living with dementia: information is targeted towards care partners and medical practitioners, rather than persons with lived experience; information can be pessimistic and hard to decipher; information is inaccurate or overly simple; and information is untrustworthy [11]. There is a clear demand for easily accessible, accurate dementia information that can be customized to a variety of user information needs.

ChatGPT: a new information source

ChatGPT (Chat Generative Pre-trained Transformer) is an online tool launched by OpenAI in November 2022. Users can engage in typed back-and-forth dialogue with the system through a web browser. Unlike a typical search engine query, such as a Google search, the platform retains information across an interaction, which creates a more natural and conversational experience. The “machinery” of ChatGPT is a generative artificial intelligence, meaning it uses machine learning models to create novel, data-driven content. These models have been trained on a dataset created from online materials and refined through human feedback [12,13]. The exact content of this dataset has not been publicly released.
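The conversational “memory” described above can be pictured as a growing transcript that is resent with every request. The sketch below uses a stand-in `generate_reply` function in place of a real model call; the message-history structure, not the stub, is the point.

```python
# Sketch of context retention in a chat interface: each new request
# includes the full running transcript, so earlier turns inform later ones.
# generate_reply is a hypothetical stand-in for a real language model call.

def generate_reply(messages):
    return f"(reply conditioned on {len(messages)} prior messages)"

messages = []  # the running transcript

for user_turn in ["What is dementia?", "How does it usually progress?"]:
    messages.append({"role": "user", "content": user_turn})
    reply = generate_reply(messages)  # the model sees the whole history
    messages.append({"role": "assistant", "content": reply})
    print(reply)

# A follow-up like "How does *it* progress?" can be resolved because the
# earlier question about dementia is still present in the transcript.
```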

How does ChatGPT stack up as a source of online information about dementia?

In a recent study, our research team at the Neuroscience, Engagement, and Smart Tech Lab at Neuroethics Canada asked how ChatGPT compared to other sources of online information about dementia. To create a set of questions that real users would likely have about dementia, we collected Frequently Asked Questions from the webpages of three national dementia organizations in Canada, USA, and Mexico. We posed these questions to ChatGPT-3.5 in April 2023. Responses from ChatGPT were evaluated using a standard tool previously developed by the NEST lab to assess the quality of online health information [14].

Strengths of ChatGPT-3.5: We found that ChatGPT, like the Alzheimer’s organizations, provided generally accurate information, directed users to bring their questions to a physician, and did not endorse commercial products.

Strengths of Alzheimer’s organization websites: Organizations were more likely than ChatGPT to state the limits of scientific evidence explicitly and produced more readable responses (i.e., responses had a readability score corresponding to a lower grade level). They were also more likely to link to local, specific, and actionable resources for support.

Conclusion

This research represents one snapshot of behaviour from a generative artificial intelligence tool: ChatGPT-3.5. This platform, and others, will continue to change over time and may produce different responses with different prompts or in languages other than English.

This work can support:

  1. Persons living with dementia and their care partners in screening potential sources of dementia information online;
  2. Healthcare providers as they advise persons living with dementia and their care partners; and
  3. Non-profit providers of dementia support services as they create helpful resources for their communities.

There is an ethical imperative to include persons with lived experiences of dementia in the creation of technologies to support them [15]. These perspectives are critically important for tools at the intersection of generative artificial intelligence and digital health. Understanding the online information available to these families is a first step in prioritizing their needs and perspectives in technology research and development.


Jill Dosso, PhD is a Postdoctoral Fellow in the Neuroscience, Engagement, and Smart Tech (NEST) lab at the University of British Columbia and BC Children’s Hospital. In her work, she studies the perspectives of persons with lived experience on emerging technologies to support brain health across the lifespan.


References

[1]          Allen F, Cain R, Meyer C (2020) Seeking relational information sources in the digital age: A study into information source preferences amongst family and friends of those with dementia. Dementia 19, 766–785.

[2]          Montiel-Aponte MC, Bertolucci PHF (2021) Do you look for information about dementia? Knowledge of cognitive impairment in older people among their relatives. Dement Neuropsychol 15, 248–255.

[3]          Washington KT, Meadows SE, Elliott SG, Koopman RJ (2011) Information needs of informal caregivers of older adults with chronic health conditions. Patient Educ Couns 83, 37–44.

[4]          Yoon H, Jang Y, Vaughan PW, Garcia M (2020) Older Adults’ Internet Use for Health Information: Digital Divide by Race/Ethnicity and Socioeconomic Status. J Appl Gerontol 39, 105–110.

[5]          Levy H, Janke AT, Langa KM (2015) Health Literacy and the Digital Divide Among Older Americans. J Gen Intern Med 30, 284–289.

[6]          Tam MT, Dosso JA, Robillard JM (2021) The impact of a global pandemic on people living with dementia and their care partners: analysis of 417 lived experience reports. J Alzheimers Dis 80, 865–875.

[7]          Robillard JM (2016) The Online Environment: A Key Variable in the Ethical Response to Complementary and Alternative Medicine for Alzheimer’s Disease. J Alzheimers Dis 51, 11–13.

[8]          Robillard JM, Johnson TW, Hennessey C, Beattie BL, Illes J (2013) Aging 2.0: Health Information about Dementia on Twitter. PLOS ONE 8, e69861.

[9]          Robillard JM, Illes J, Arcand M, Beattie BL, Hayden S, Lawrence P, McGrenere J, Reiner PB, Wittenberg D, Jacova C (2015) Scientific and ethical features of English-language online tests for Alzheimer’s disease. Alzheimers Dement Diagn Assess Dis Monit 1, 281–288.

[10]        Robillard JM, Feng TL (2016) Health Advice in a Digital World: Quality and Content of Online Information about the Prevention of Alzheimer’s Disease. J Alzheimers Dis 55, 219–229.

[11]        Dixon E, Anderson J, Blackwelder D, L. Radnofsky M, Lazar A (2022) Barriers to Online Dementia Information and Mitigation. In CHI Conference on Human Factors in Computing Systems ACM, New Orleans LA USA, pp. 1–14.

[12]        OpenAI. Introducing ChatGPT. Last updated November 30, 2022; accessed November 30, 2022.

[13]        Forbes. The Next Generation Of Large Language Models. Last updated February 7, 2023; accessed February 7, 2023.

[14]        Robillard JM, Jun JH, Lai J-A, Feng TL (2018) The QUEST for quality online health information: validation of a short quantitative tool. BMC Med Inform Decis Mak 18, 87.

[15]        Robillard JM, Cleland I, Hoey J, Nugent C (2018) Ethical adoption: A new imperative in the development of technology for dementia. Alzheimers Dement 14, 1104–1113.

Learning about situated personhood with Dr. Stefanie Blain-Moraes

Last fall, I had the opportunity to sit down and enjoy some treats at Agora Café with our visiting Assistant Professor, Dr. Stefanie Blain-Moraes, PhD, P. Eng. She had just returned from the 2022 International Neuroethics Society (INS) Annual Meeting in Montréal, where her work on situated personhood and caregivers of behaviourally unresponsive individuals received the recognition of “Best Topical Contribution – Theoretical / Philosophical”.

Stefanie Blain-Moraes, Assistant Professor, McGill University, Canada; Young Scientist during the Session on “Human-Centred High-Tech: Neurotechnology”. At the World Economic Forum – Annual Meeting of the New Champions in Dalian, People’s Republic of China 2017. Copyright by World Economic Forum / Ciaran McCrickard

Stefanie is the leader of the Biosignal Interaction and Personhood Technology (BIAPT) Lab at McGill University. The BIAPT Lab’s objective is to understand the neurophysiological (nervous system function) and physiological (bodily function) bases of human consciousness. The lab aims to translate this understanding into technologies that improve the quality of life of non-communicative persons and their caregivers.

An individual’s level of consciousness – their ability to perceive themselves and their environment – is typically assessed by the appropriateness of their responses to the environment.

Their work aims to assess consciousness and establish a prognosis (prediction of the course of a disease) for recovery of consciousness in behaviourally unresponsive patients, determine neural correlates of consciousness (relationships between mental and neural states), and understand the implications of caring for behaviourally unresponsive patients.

A behaviourally unresponsive patient has compromised linguistic and behavioural communication, which limits their ability to reveal conscious states to others – this increases reliance on inferring residual consciousness through relevant proxies (1).

At this point, you may have questions about how the BIAPT Lab strives to address these and other complex objectives. In my interview with Stefanie, we explored her perspectives on questions at the intersection of consciousness, ethics, and technology (note: Stefanie’s response to a question follows immediately after the italicised question).

How do we know if behaviourally unresponsive patients are conscious? How do we know whether they have the potential to eventually recover consciousness?

There are a growing number of individuals who live with conditions that make them behaviourally unresponsive. Caring for these individuals poses challenges for the caregiver, as most of the necessary communication surrounding care is up to their interpretation. The individual’s unresponsive state raises questions about personhood and moral status as well.

How do we view personhood?

The static view of personhood takes a strict stance on the relationship between personhood and moral status. Within the static view, we have the individual and the relational; the former is defined by culture. In the latter view, personhood and cultural concepts depend on the behaviourally unresponsive individual’s relation to others. On one hand, you have someone referring to an individual as their mother, even when their capacity has gone. On the other, the individual’s lack of capacities implies they are not a person. An example of this would be an individual in a vegetative state who has no reflexive experience.

Is this still the view on personhood?

Over the past few decades, there has been a gradual shift away from the static view of personhood. In 2018, I published a paper that argued that consciousness could be dissociated from personhood, and there is a responsibility to assume personhood in the absence of consciousness. Now, we are more focused on the situated, dynamic view of personhood. It opens a conceptual space between personhood and moral status. We’ve seen that it’s not a static phenomenon, as it is something that waxes and wanes. Personhood fluctuates based on context.

Has this shift from the static to situated view affected the types of questions you’ve asked?

There is a need for our questions to be multidisciplinary. We are currently developing a technology called biomusic, which translates meaningful changes in the autonomic nervous system into auditory output. Caregivers are critical to this project, as we work with them to understand the person and determine the type and genre of music the system would output.

Biomusic technology can be used to monitor physiological reactions, which could provide caregivers with the opportunity to accomplish other tasks without being directly beside the individual. It can also detect signals related to emotions, and augment the relationship between caregivers and the individual they are caring for.
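As a rough illustration of the concept (not the BIAPT Lab’s actual algorithm), one could imagine mapping autonomic measurements onto musical parameters, for example heart rate to tempo and skin conductance to pitch. Every value and range below is invented for the example.

```python
# Hypothetical biomusic-style mapping: autonomic signals become musical
# parameters. All signal ranges and musical ranges are illustrative only.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] onto [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def biomusic_frame(heart_rate_bpm, skin_conductance_us):
    tempo = scale(heart_rate_bpm, 50, 120, 60, 160)    # faster pulse -> faster tempo
    pitch = scale(skin_conductance_us, 1, 20, 48, 72)  # more arousal -> higher MIDI note
    return {"tempo_bpm": round(tempo), "midi_pitch": round(pitch)}

print(biomusic_frame(heart_rate_bpm=72, skin_conductance_us=5))    # calm baseline
print(biomusic_frame(heart_rate_bpm=105, skin_conductance_us=14))  # heightened arousal
```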

There is a need for our questions to be multidisciplinary.

How did your formal training as an engineer supplement your work in the field of ethics, and now neuroethics?

It has helped shape the types of questions I ask. For brain-computer interfaces, we ask: Who has access? How should they be used? How do we give access? On consciousness, we are keen on detecting levels of consciousness in minimally conscious people and understanding how this affects medical decisions. Working on ethical issues happened as a response to all these questions.

What’s next for you?

In disorders of consciousness, we are looking into ethical issues concerning the Adaptive Recognition Index for neuroprognostication (prediction of recovery from disorders of consciousness caused by severe brain injury (2)) in Canadian intensive care units. Right now, we are conducting data collection in 5 intensive care units across Canada, and hoping to eventually develop a framework that is more proactive than reactive.

Another project is looking at transcranial alternating current stimulation (tACS) applied to the upper superior parietal lobe of the brain. Earlier work has found cases of coma patients waking up, so we hope to maximise an individual’s capacity for consciousness and utilise tACS as ignition for an individual to potentially recover. We are also looking into the ethical implications of this.

End of interview.

Stefanie’s dedication to enhancing interactions between non-communicative individuals and their caregivers could prevent caregiver burnout, and reduce the risk of neglect or family abandonment of behaviourally unresponsive individuals. Stefanie and the BIAPT Lab continue to develop novel technologies to assess levels of consciousness and cognition in non-communicative individuals.

We are so grateful to learn alongside Stefanie as she is here with us in Vancouver (and now off to Montréal!). We are looking forward to what’s to come next.

To learn more about Stefanie’s work, read her publications and visit the BIAPT Lab’s website.

References

(1) Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

(2) Fischer, D., Edlow, B. L., Giacino, J. T., & Greer, D. M. (2022). Neuroprognostication: a conceptual framework. Nature Reviews Neurology, 18(7), 419–427. https://doi.org/10.1038/s41582-022-00644-7

I am a Research Assistant at Neuroethics Canada. I would like to acknowledge the notes and suggestions I’ve received from Marianne Bacani, Viorica Hrincu, Anna Nuechterlein, and Katelyn Teng. Special thanks as well to Stefanie for her patience, taking the time to chat, and for the blueberry coconut cake!

Emotions and stigma: Social robots from the perspectives of older adult end-users

Post by Jaya Kailley

This blog post summarizes results from the peer-reviewed journal article “Older adult perspectives on emotion and stigma in social robots” published in Frontiers in Psychiatry (2023).

Social robots: Tools to support the quality of life of older adults

Canada’s older adult population is rapidly growing (1), and tools such as social robots may be able to support the health and quality of life of older adults. Social robots are devices that can provide support to users through interaction. They can be used for health monitoring, reminders, and cognitive training activities (2), and they have been shown to decrease behavioural and psychological symptoms of dementia, improve mood, decrease loneliness, decrease blood pressure, and support pain management (3–6).

End-user driven approaches to improve social robot adoption

Despite their benefits, social robots are not readily adopted by older adults. Issues raised in the literature include a lack of emotional alignment between end-users and devices (7,8) and the perception of stigma around social robot use (3,9). To address these barriers and improve the design of future devices, it is important to better understand user preferences for social robots, rather than relying only on expert opinion (10).

Older adult perspectives on social robot applications, emotion and stigma

The Neuroscience, Engagement, and Smart Tech (NEST) lab at Neuroethics Canada explored the topics of emotion and stigma in social robots from the perspectives of older adults, people living with dementia, and care partners. This project was co-designed with a Lived Experience Expert Group (LEEG, or “League”). We conducted online workshops where participants had an opportunity to share their thoughts on what a social robot should be able to do, what kind of emotional range it should be capable of displaying, how much emotion it should display, and considerations around using a social robot in various public contexts (Figure 1).

Figure 1 – Results from the SOCRATES workshops.

Participants expressed that they would want a social robot to interact with them and provide companionship. They also suggested that a robot could be a medium to connect with others; for example, a robot could facilitate electronic communication between two people.

Emotional display was something that most participants desired in a social robot, but preferences differed for how much emotion a robot should display. Participants mentioned that a social robot displaying negative emotions could be stressful for the user, but one displaying only positive emotions could appear artificial. One participant suggested that to get around this issue, a social robot could have a dial to set the level of interactivity the user desires. Ideas raised for displaying emotion included facial expressions, body movements, and noises. Participants also explained how a robot that aligned its emotions with the user’s could facilitate connection between robot and user.

Participants also discussed considerations around using a robot in front of other people. Some participants voiced concern about attracting negative attention from an audience and feeling judged, while others suggested that a social robot could help to educate and raise awareness about dementia and how technologies can provide support to older adults.

Looking to the future

One key part of the neuroethics field today is co-created research (11), which involves engaging end-users in the creation of interventions meant to support their wellbeing. The results from this study highlight that social robots should have advanced interactive abilities and emotional capabilities to ensure that users can feel connected to these devices. Since older adults have different preferences for emotional range, customizability should be prioritized in the design of future devices. The findings on using a robot in public suggest that social robot marketing may have a significant impact on the way assistive technologies are perceived in the future. Presenting these devices as supports and using them to educate the public about dementia may reduce the stigma around these technologies. These key findings should be incorporated into the design and implementation of future social robots to improve adoption among older adults.

References

1.         Infographic: Canada’s seniors population outlook: Uncharted territory | CIHI [Internet]. [cited 2022 Jun 16]. Available from: https://www.cihi.ca/en/infographic-canadas-seniors-population-outlook-uncharted-territory

2.         Getson C, Nejat G. Socially assistive robots helping older adults through the pandemic and life after COVID-19. Robotics. 2021 Sep;10(3):106.

3.         Hung L, Liu C, Woldum E, Au-Yeung A, Berndt A, Wallsworth C, et al. The benefits of and barriers to using a social robot PARO in care settings: A scoping review. BMC Geriatr. 2019 Aug 23;19(1):232.

4.         Petersen S, Houston S, Qin H, Tague C, Studley J. The utilization of robotic pets in dementia care. Journal of Alzheimer’s Disease. 2017 Jan 1;55(2):569–74.

5.         Robinson H, MacDonald B, Broadbent E. Physiological effects of a companion robot on blood pressure of older people in residential care facility: A pilot study. Australasian Journal on Ageing. 2015;34(1):27–32.

6.         Latikka R, Rubio-Hernández R, Lohan ES, Rantala J, Nieto Fernández F, Laitinen A, et al. Older adults’ loneliness, social isolation, and physical information and communication technology in the era of ambient assisted living: A systematic literature review. J Med Internet Res. 2021 Dec 30;23(12):e28022.

7.         Prescott TJ, Robillard JM. Are friends electric? The benefits and risks of human-robot relationships. iScience. 2021 Jan 22;24(1):101993.

8.         Pu L, Moyle W, Jones C, Todorovic M. The effectiveness of social robots for older adults: A systematic review and meta-analysis of randomized controlled studies. The Gerontologist. 2019 Jan 9;59(1):e37–51.

9.         Koh WQ, Felding SA, Budak KB, Toomey E, Casey D. Barriers and facilitators to the implementation of social robots for older adults and people with dementia: a scoping review. BMC Geriatr. 2021 Jun 9;21:351.

10.       Bradwell HL, Edwards KJ, Winnington R, Thill S, Jones RB. Companion robots for older people: importance of user-centred design demonstrated through observations and focus groups comparing preferences of older people and roboticists in South West England. BMJ Open. 2019 Sep 1;9(9):e032468.

11.       Illes J. Reflecting on the Past and Future of Neuroethics: The Brain on a Pedestal. AJOB Neuroscience. 2023 Mar 31;1–4.

Jaya Kailley is a directed studies student under the supervision of Dr. Julie Robillard in the NEST Lab, and she is pursuing an Integrated Sciences degree in Behavioural Neuroscience and Physiology at the University of British Columbia. She currently supports research projects that aim to include end-users in the process of social robot development. Outside of work, Jaya enjoys playing the piano, drawing, and reading fiction novels.

Bell Let’s Talk and Other Media Mental Health Campaigns – How Effective Are They for Young People?

Disclaimer: The following blog post involves mentions of suicide.

One in seven youth aged 10 to 19 experiences a mental health issue – but many don’t seek or receive the care they need (1). Can media mental health campaigns help to bridge this gap?

What are media mental health campaigns?

Media mental health campaigns are marketing efforts to raise awareness of mental health issues using mass media channels like social media. These campaigns focus on topics such as suicide prevention, destigmatizing mental illness, and promoting mental health resources.

Given the widespread use of social media among young people (generally defined as people aged 10-24), these platforms can be leveraged to spread mental health information in a cost-effective and accessible way (2). It is no surprise to see a growing number of media mental health campaigns directed toward young people over the past few years. But how effective are these campaigns? Do they significantly impact the feelings and behaviours of young people toward mental health?

Despite their prevalence and popularity, there have been only a few empirical evaluations of how these campaigns affect young people.

Bell Let’s Talk: The largest mental health campaign in Canada

Consider Bell Let’s Talk – Canada’s most well-known media mental health campaign.

Founded in 2010 by Bell Canada, Bell Let’s Talk aims to destigmatize mental illness by encouraging dialogue about mental health on social media. For one day a year, the company donates $0.05 to mental health initiatives for every text or call on the Bell network, as well as every social media interaction with the campaign. To date, Bell Let’s Talk has raised over $139 million toward mental health initiatives and is the largest corporate initiative in Canada dedicated to mental health (3).

Despite its impressive reach and financial success, there have only been two empirical evaluations of Bell Let’s Talk’s impacts on young people.

Bell Let’s Talk and rates of access to mental health services in young people

In the first evaluation, Booth et al. were curious to see whether the Bell Let’s Talk campaign encouraged young people to use mental health services (4). They analyzed the monthly outpatient mental health visit rates of young people in Ontario between 2006 and 2015 and found that the 2012 Bell Let’s Talk campaign was associated with a temporary increase in mental health service use.

These findings are promising – however, it is possible that other events happening during this time also contributed to the change they observed.

Is Bell Let’s Talk effective for suicide prevention?

Another evaluation of Bell Let’s Talk studied whether the campaign was associated with changes in suicide rates (5). Côté et al. compared the suicide rate in Ontario before and during the campaign and found no significant difference, either among young people or in the general population.

In the same study, the researchers analyzed suicide-related Twitter posts under the hashtag #BellLetsTalk. They found that these Tweets focused more on suicide being a societal problem, and less on promoting protective messages about coping and resilience – which have been associated with fewer suicides (6,7). Based on this, the researchers suggest that Bell Let’s Talk and similar campaigns should aim to promote more protective messaging around suicide on social media, which may be more effective for suicide prevention.

Further evaluation of media mental health campaigns is needed

These evaluations suggest that Bell Let’s Talk can positively affect young people but could benefit from promoting more protective messages online. Given the enormous reach, financial stakes, and potential influence of the campaign, two evaluations are insufficient to capture the overall effects of Bell Let’s Talk on young people across Canada. Further evaluations are necessary.

Research in this area must overcome significant obstacles. First, evaluations that rely on objective measures of behaviour fail to capture the unique experiences of individuals as they interact with a campaign. Conversely, evaluations that rely on self-reported data leave room for response bias. For instance, study participants may self-report more positive outcomes after seeing campaign media because they believe researchers expect to see these outcomes. Additionally, campaigns may significantly impact people’s attitudes and behaviours in the long term in a way that is difficult to detect in evaluations focused on short-term impact. Even then, such long-term changes could be attributed to factors other than mental health campaigns, such as a general increase in access to mental health resources and mental health education.

Despite these challenges, more research evaluations in this area have emerged through the years to address this important knowledge gap. By studying trends in existing media mental health campaigns for young people, we can inform the design and implementation of more effective campaigns in the future, ensuring they have a greater positive impact on the health and well-being of young people.

References:

  1. Mental health of adolescents [Internet]. World Health Organization. 2021 [cited 2023 Mar 7]. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health
  2. Robinson J, Bailey E, Hetrick S, Paix S, O’Donnell M, Cox G, et al. Developing Social Media-Based Suicide Prevention Messages in Partnership With Young People: Exploratory Study. JMIR Ment Health. 2017 Oct 4;4(4):e40.
  3. The positive impact of your efforts | Bell Let’s Talk [Internet]. [cited 2023 Mar 7]. Available from: https://letstalk.bell.ca/our-impact/
  4. Booth RG, Allen BN, Bray Jenkyn KM, Li L, Shariff SZ. Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis. JMIR Ment Health. 2018 Apr 6;5(2):e27.
  5. Côté D, Williams M, Zaheer R, Niederkrotenthaler T, Schaffer A, Sinyor M. Suicide-related Twitter Content in Response to a National Mental Health Awareness Campaign and the Association between the Campaign and Suicide Rates in Ontario. Can J Psychiatry. 2021 May;66(5):460–7.
  6. Niederkrotenthaler T, Voracek M, Herberth A, Till B, Strauss M, Etzersdorfer E, et al. Role of media reports in completed and prevented suicide: Werther v. Papageno effects. Br J Psychiatry. 2010 Sep;197(3):234–43.
  7. Sinyor M, Williams M, Zaheer R, Loureiro R, Pirkis J, Heisel MJ, et al. The association between Twitter content and suicide. Aust N Z J Psychiatry. 2021 Mar;55(3):268–76.

Cindy Zhang (she/her) is a research assistant at the NEST Lab under the supervision of Dr. Julie Robillard. She is an undergraduate student pursuing a Bachelor of Arts degree in Psychology at the University of British Columbia. At the lab, she currently supports research projects on the impact of social media on youth mental health and family communication.

The Effects of Post-Traumatic Stress on Pediatric Organ Transplant Patients

What is post-traumatic stress?

Post-traumatic stress (PTS) is a psychiatric condition resulting from the experience of a stressful or traumatic event. PTS refers either to the diagnostic entity known as post-traumatic stress disorder (PTSD) or to PTSD-related symptomatology known as PTSS [1]. PTS symptoms are clustered into three categories: reexperiencing, avoidance, and hyperarousal [2]. Reexperiencing can entail flashbacks of traumatic events, while avoidance of reminders of the stressor characterizes the avoidance dimension of symptomatology. Finally, prominent anxiety and hypervigilance underlie the hyperarousal dimension.

Symptoms of PTS can greatly vary, but can include:

  • Uncontrollable thoughts and memories related to the event
  • Bad dreams about the event
  • Physical bodily reactions (e.g., sweating, pounding heart, dizziness)
  • Changes in mood
  • Difficulty paying attention
  • Feelings of fear, guilt, anger, and shame
  • Sleep disturbances

Organ transplantation can be considered a traumatic event for a child. PTS resulting from an organ transplantation could arise from many factors, including surgical procedures, repeated laboratory and imaging investigations, life-threatening incidents in care, and dependence on technology for organ function and survival [1].

How common is post-traumatic stress in pediatric organ transplant patients?

A 2021 study conducted at BC Children’s Hospital by Hind et al. investigated the effects of PTS on the quality of life of 61 pediatric organ transplant recipients. The sample included 12 heart transplant recipients, 30 kidney transplant recipients, and 19 liver transplant recipients [1].

They found that 52 patients (85.2%) reported at least one trauma symptom, and eight (13.1%) of these patients indicated symptoms that put them at significant risk of PTSD. They also observed that kidney recipients had higher overall trauma scores than other organ transplant patients, perhaps due to the extensive post-transplant care involved after kidney transplantation. Non-white patients reported significantly higher trauma scores, and females reported higher trauma scores than their male counterparts, though the latter result was not statistically significant. Spending more days in the hospital and being prescribed more medication were also associated with higher trauma scores [2].

How does post-traumatic stress affect the daily life of organ transplant patients?

Hind et al. also found that quality-of-life questionnaire scores were negatively correlated with trauma scores. This means that physical, emotional, social, academic, and psychosocial functioning decreased as trauma increased, leading to poorer quality of life. The results of this study indicate that PTS is a prevalent issue amongst pediatric organ transplant recipients and it can have detrimental effects on the daily functioning of patients post-transplant.

How does post-traumatic stress affect treatment?

Studies have shown that PTS in pediatric organ transplant recipients can impact treatment outcomes by causing post-transplant treatment nonadherence [1,3,4].

Why is treatment nonadherence significant to treatment outcomes?

Consider the example of a 19-year-old female whose first transplant was lost due to nonadherence [3]. The patient was granted a second organ transplant after stating that she would adhere to post-operative treatment. However, two days after her second surgery, she stopped taking her medications. When questioned, the patient revealed that she had been suffering for more than a year from recurrent intrusive thoughts about her liver disease and recurrent dreams about her wait for the first transplant. She reported wanting to avoid any reminder of her illness, including even the sight of a nurse or medication. Healthcare practitioners treated the patient with cognitive behavioural therapy in the form of gradual exposure to the hospital environment. Family intervention was also undertaken to increase her social support network. Following this treatment, the patient resumed taking her medications and continued doing so for more than a year after her second transplant.

This case study shows the profound impact PTS can have on medical care, and how addressing trauma symptoms can improve patient outcomes. Its findings reinforce that providing access to mental health resources is imperative for this population, given the effects of trauma symptoms on patients’ psychological and physical wellbeing.

It is important to recognize that transplantation can be a traumatic experience from which patients and their families may develop PTS symptoms. Therefore, providing resources and support services for PTS before, during, and after transplantation can improve patients’ health outcomes as well as their overall quality of life. Treatment for PTS may vary, but collaboration between healthcare practitioners and the psychologists, social workers, counsellors, and other support personnel who help with psychosocial coping can enhance the patient experience and facilitate the transplantation journey for patients and families alike.

References:

1. Hind T, Lui S, Moon E, Broad K, Lang S, Schreiber RA, et al. Post-traumatic stress as a determinant of quality of life in pediatric solid-organ transplant recipients. Pediatr Transplant. 2021;25(4):e14005, https://doi.org/10.1111/petr.14005

2. Nash RP, Loiselle MM, Stahl JL, Conklin JL, Rose TL, Hutto A, et al. Post-Traumatic Stress Disorder and Post-Traumatic Growth following Kidney Transplantation. Kidney360. 2022 Sep 29;3(9):1590, https://doi.org/10.34067/KID.0008152021

3. Shemesh E, Lurie S, Stuber ML, Emre S, Patel Y, Vohra P, et al. A pilot study of posttraumatic stress and nonadherence in pediatric liver transplant recipients. Pediatrics. 2000 Feb;105(2):E29, https://doi.org/10.1542/peds.105.2.e29

4. Martin LR, Feig C, Maksoudian CR, Wysong K, Faasse K. A perspective on nonadherence to drug therapy: psychological barriers and strategies to overcome nonadherence. Patient Prefer Adherence. 2018 Aug 22;12:1527–35, https://doi.org/10.2147/PPA.S155971

Anna Riminchan was born in Bulgaria, where she spent her early childhood before immigrating to Canada with her family. Anna is currently working towards a Bachelor of Science degree, majoring in Behavioural Neuroscience and minoring in Visual Arts at the University of British Columbia. In the meantime, she is contributing to research in neuroscience, after which she plans to pursue a degree in medicine. In her spare time, you can find Anna working on her latest art piece!

Pervasive But Problematic: How Generative AI Is Disrupting Academia

Dominating headlines since late 2022, the generative AI system ChatGPT has rapidly become one of the most controversial and fastest-growing consumer applications in history [1]. Capable of composing Shakespearean sonnets with hip-hop lyrics, drafting manuscripts with key points and strong counterarguments, or creating academic blogs worthy of publication, ChatGPT offers unrivalled potential to automate tasks and generate large bodies of text at lightning speed [2].

Original image generated using DALL·E text to image AI generator. Available from: https://openai.com/dall-e-2/

ChatGPT is a sophisticated language model that responds to user requests in ways that appear intuitive and conversational [3]. The model is built upon swathes of information obtained from the internet, 300 billion words to be precise [4]. ChatGPT works by learning statistical patterns in this data and using them to generate text that aligns with user prompts [5]. Within a conversation, ChatGPT also keeps track of the preceding thread of dialogue, enabling users to ask follow-up or clarifying questions. It is this personalized, interactive dialogue that elevates it above traditional search engines.
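To make that conversational thread concrete, below is a minimal sketch of a multi-turn exchange written against the openai Python package as it existed in early 2023 (the pre-1.0 ChatCompletion interface); the model name, prompts, and API key are illustrative placeholders. Notably, the "memory" comes from the client resending the accumulated conversation with every request:

    # Minimal multi-turn chat sketch (assumes openai<1.0 and a valid API key).
    import openai

    openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

    # The running conversation; each turn is appended so the model sees the thread.
    messages = [{"role": "user", "content": "Explain how language models predict text."}]

    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})

    # A follow-up works because the prior turns are sent back with the new question.
    messages.append({"role": "user", "content": "Now explain it in two sentences."})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(response["choices"][0]["message"]["content"])

Strip away the step of appending prior messages and each request becomes stateless, which is exactly how a traditional search query behaves.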

Unsurprisingly, generative AI has amassed a strong army of followers eager to capitalize on its efficient functionality: 100 million people conversed with the chatbot in January 2023 alone [1].

But what might the lure of working smarter, not harder mean in academia?

Perilous Publishing, Or Powerful Penmanship

No longer a whisper shared between hushed sororities, generative AI like ChatGPT has become a powerful force proudly employed by professors and pupils alike. However, despite its popularity, uptake is not unanimous. Academics are divided.

With the ability to generate work at the touch of a button, users risk being led down a perilous path towards plagiarism and having their intellectual development stifled. The clear threat to academic integrity and original thought is sending many into a state of panic [6–8]. Editors of scientific journals are also having to wrestle with publishing ethics, as ChatGPT is increasingly being cited as a co-author [9].

But despite its apparent proficiency, the generative AI model has a number of particularly unnerving limitations. In the words of OpenAI, the company behind ChatGPT, the software “will occasionally make up facts or ‘hallucinate’ outputs, [which] may be inaccurate, untruthful, and otherwise misleading” [5].

Available from: https://chat.openai.com/chat

The rise of generative AI shines a spotlight on a troublesome issue in academia: the exchange of papers for grades. Whilst finished articles are necessary, when product triumphs over process, the valuable lessons embedded in the act of writing can be overlooked.

“This technology [generative AI] … cuts into the university’s core purpose of cultivating a thoughtful and critically engaged citizenry. If we can now all access sophisticated and original writing with a simple prompt, why require students to write at all?” [10]

In response, attempting to stem the flow of plagiarism and untruth and to protect creative thinking, some academics have enforced outright bans on generative AI systems [2].

Unethical AI

Academic integrity aside, ChatGPT’s capabilities are also undermined by moral and ethical concerns.

A recent Time magazine exposé revealed that OpenAI outsources work to a firm in Kenya, whose staff are assigned the menial task of trawling through mountains of data and flagging harmful items to ensure that ChatGPT’s outputs are “safe for human consumption”. These data enrichment specialists earn less than $2 per hour [2].

Moreover, generative AI propagates systemic biases: it repurposes primarily westernised data in response to English-language prompts written by tech-savvy users with easy access to IT. For some, the commercialization of more sophisticated platforms like ChatGPT Pro will also prove particularly exclusionary [10–13].

Embracing The Chatbot

However, arguing in support of generative AI in academia are those such as Sam Illingworth, Associate Professor of Learning Enhancement at Edinburgh Napier University, who states that it would be unrealistic, and unrepresentative of future workplaces, if students did not learn to use these technologies. Illingworth and others call for a shift away from the (albeit valid) concerns around insidious plagiarism (OpenAI’s own detection tool is highly inaccurate, correctly flagging AI-written text only 26% of the time [11]) toward embracing these tools as a chance to reshape pedagogy [4].

Methods of teaching and assessment are being reexamined, with some suggesting that a return to traditional formats, such as impromptu oral exams, personal reflections, or in-person written assignments, may prove effective against a proliferation of AI-generated work [12,13].

Generative AI chatbots also have the potential to become a teacher’s best friend [14]. Automating grading rubrics or assisting with lesson planning might offer a much-needed morale boost to a professional body whose expertise is being somewhat jeopardized by the emergent technology. And despite rumours of existential threat [15,16], generative AI, for now at least, poses no immediate risk of replacing human educators; empathy and creativity are among the uniquely human qualities proving tricky to manufacture from binary code.

The Future Is Unknown

Much like other technologies that have emerged from Sisyphean cycles of innovation (think Casio graphing calculator or Mac OS), ChatGPT and fellow generative AI chatbots have the potential to transform the face of education [17].

As the AI arms race marches on at a quickening pace, with companies delivering a daily bombardment of upgrades and new functionalities, it is impossible to predict who, or what, might benefit from or become a casualty of automation in academia. The story of AI in academia remains unwritten, but as the indelible mark already left by ChatGPT suggests, it is certain to deliver a compelling narrative.


References:

1. Hu K. ChatGPT sets record for fastest-growing user base – analyst note. Reuters [Internet]. 2023 Feb 2 [cited 2023 Feb 6]. Available from: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

2. Perrigo B. OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive. Time [Internet]. 2023 [cited 2023 Feb 13]. Available from: https://time.com/6247678/openai-chatgpt-kenya-workers/

3. OpenAI. ChatGPT: Optimizing Language Models for Dialogue [Internet]. OpenAI. 2022 [cited 2023 Feb 17]. Available from: https://openai.com/blog/chatgpt/

4. Hughes A. ChatGPT: Everything you need to know about OpenAI’s GPT-3 tool [Internet]. BBC Science Focus Magazine. 2023 [cited 2023 Feb 6]. Available from: https://www.sciencefocus.com/future-technology/gpt-3/

5. OpenAI. ChatGPT General FAQ [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://help.openai.com/en/articles/6783457-chatgpt-general-faq

6. Heidt A. ‘Arms race with automation’: professors fret about AI-generated coursework. Nature [Internet]. 2023 Jan 24 [cited 2023 Feb 6]. Available from: https://www.nature.com/articles/d41586-023-00204-z

7. Kubacka T. “Publish-or-perish” and ChatGPT: a dangerous mix [Internet]. Lookalikes and Meanders. 2023 [cited 2023 Feb 6]. Available from: https://lookalikes.substack.com/p/publish-or-perish-and-chatgpt-a-dangerous

8. Boyle K. A reason for the moral panic re AI in academia: in work, we learn prioritization of tasks, which higher ed doesn’t prize. Speed is crucial in work— it’s discouraged in school. Tools that encourage speed are bad for some established industries. Take note of who screams loudly. https://t.co/ot8YHh7H7b [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/KTmBoyle/status/1619384367637471234

9. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature [Internet]. 2023 Jan 18 [cited 2023 Feb 21];613(7945):620–1. Available from: https://www.nature.com/articles/d41586-023-00107-z

10. Southworth J. Rethinking university writing pedagogy in a world of ChatGPT [Internet]. University Affairs. 2023 [cited 2023 Feb 18]. Available from: https://www.universityaffairs.ca/opinion/in-my-opinion/rethinking-university-writing-pedagogy-in-a-world-of-chatgpt/

11. Wiggers K. OpenAI releases tool to detect AI-generated text, including from ChatGPT [Internet]. TechCrunch. 2023 [cited 2023 Feb 6]. Available from: https://techcrunch.com/2023/01/31/openai-releases-tool-to-detect-ai-generated-text-including-from-chatgpt/

12. Nature Portfolio. A poll of @Nature readers about the use of AI chatbots in academia suggests that the resulting essays are still easy to flag, and it’s possible to amend existing policies and assignments to address their use. https://t.co/lHyPtEEb7F [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/NaturePortfolio/status/1619751476947046408

13. Khatsenkova S. ChatGPT: Is it possible to spot AI-generated text? [Internet]. euronews. 2023 [cited 2023 Feb 6]. Available from: https://www.euronews.com/next/2023/01/19/chatgpt-is-it-possible-to-detect-ai-generated-text

14. Roose K. Don’t Ban ChatGPT in Schools. Teach With It. The New York Times [Internet]. 2023 Jan 12 [cited 2023 Feb 21]. Available from: https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html

15. Thorp HH. ChatGPT is fun, but not an author. Science [Internet]. 2023 Jan 27 [cited 2023 Feb 6];379(6630):313. Available from: https://www.science.org/doi/10.1126/science.adg7879

16. Chow A, Perrigo B. The AI Arms Race Is On. Start Worrying. Time [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://time.com/6255952/ai-impact-chatgpt-microsoft-google/

17. Orben A. The Sisyphean Cycle of Technology Panics. Perspect Psychol Sci [Internet]. 2020 Sep 1 [cited 2023 Feb 6];15(5):1143–57. Available from: https://doi.org/10.1177/1745691620919372


Susanna Martin BSc (Hons) is a Research Assistant at The Neuroscience Engagement and Smart Tech (NEST) lab.

Feeling welcomed: Creating space for Indigenous voices in brain and mental health research

Join us for the 2023 Brain Awareness Week Annual Neuroethics Distinguished Lecture featuring Dr. Melissa L. Perreault!

Tuesday, March 14, 2023
5:30 PM – 7:00 PM PDT
Bill Reid Gallery of Northwest Coast Art
639 Hornby Street, Vancouver, BC V6C 2G3

Everyone is welcome! This public in-person event is free, but tickets are required.
Kindly RSVP here: https://ncbaw2023.eventbrite.ca 

Overview
Research on Indigenous communities has historically been conducted using a one-sided approach: researchers have had little knowledge of Indigenous culture, shown minimal concern for community needs or desires, and given little back to the communities involved. In this lecture, intended for people from all backgrounds and professions, Dr. Melissa L. Perreault will discuss why now is the time to give Indigenous communities a voice in ethically and culturally guided research on brain and mental health.

Melissa L. Perreault, PhD
Dr. Melissa L. Perreault is an Associate Professor and neuroscientist in the Department of Biomedical Sciences at the University of Guelph and a member of the Royal Society of Canada’s College of New Scholars, Artists and Scientists. Dr. Perreault’s research focuses on understanding sex differences in the mechanisms that underlie neuropsychiatric disorders, and on identifying brain wave patterns that can be used as biomarkers for brain and mental health disorders.

Dr. Perreault is a citizen of the Métis Nation of Ontario, descended from the historic Métis Community of the Mattawa/Ottawa River. She has developed numerous Indigenous and equity, diversity, and inclusion initiatives at institutional, national, and international levels. As a member of the Indigenous Knowledge Holders Group for the Canadian Brain Research Strategy, she continues to strive towards inclusivity in neuroscience and Indigenous community research.

Brain Awareness Week
Brain Awareness Week is the global campaign to foster public enthusiasm and support for brain science. Every March, partners host imaginative activities in their communities that share the wonders of the brain and the impact brain science has on our everyday lives.