Emotions and stigma: Social robots from the perspectives of older adult end-users

Post by Jaya Kailley

This blog post summarizes results from the peer-reviewed journal article “Older adult perspectives on emotion and stigma in social robots,” published in Frontiers in Psychiatry (2023).

Social robots: Tools to support the quality of life of older adults

Canada’s older adult population is rapidly growing (1), and tools such as social robots may be able to support the health and quality of life of older adults. Social robots are devices that can provide support to users through interaction. They can be used for health monitoring, reminders, and cognitive training activities (2), and they have been shown to decrease behavioural and psychological symptoms of dementia, improve mood, decrease loneliness, decrease blood pressure, and support pain management (3–6).

End-user driven approaches to improve social robot adoption

Despite their benefits, social robots are not readily adopted by older adults. Issues raised in the literature include a lack of emotional alignment between end-users and devices (7,8) and the perception of stigma around social robot use (3,9). To address these barriers and improve the design of future devices, it is important to better understand user preferences for social robots, rather than relying only on expert opinion (10).

Older adult perspectives on social robot applications, emotion and stigma

The Neuroscience, Engagement, and Smart Tech (NEST) lab at Neuroethics Canada explored the topics of emotion and stigma in social robots from the perspectives of older adults, people living with dementia, and care partners. This project was co-designed with a Lived Experience Expert Group (LEEG, or “League”). We conducted online workshops where participants had an opportunity to share their thoughts on what a social robot should be able to do, what kind of emotional range it should be capable of displaying, how much emotion it should display, and considerations around using a social robot in various public contexts (Figure 1).

Figure 1 – Results from the SOCRATES workshops.

Participants expressed that they would want a social robot to interact with them and provide companionship. They also suggested that a robot could be a medium to connect with others; for example, a robot could facilitate electronic communication between two people.

Most participants wanted a social robot to display emotion, but preferences for how much emotion it should display varied. Participants noted that a robot displaying negative emotions could be stressful for the user, while a robot displaying only positive emotions could appear artificial. One participant suggested that a robot could resolve this tension with a dial for setting the level of interactivity the user desired. Suggestions for how a robot might display emotion included facial expressions, body movements, and sounds. Participants also explained that a robot that aligned its emotions with the user’s could foster a sense of connection between the two.

Participants also discussed considerations around using a robot in front of other people. Some participants voiced concern about attracting negative attention from an audience and feeling judged, while others suggested that a social robot could help to educate and raise awareness about dementia and how technologies can provide support to older adults.

Looking to the future

One key part of the neuroethics field today is co-created research (11), which involves engaging end-users in the creation of interventions meant to support their wellbeing. The results of this study highlight that social robots should have advanced interactive abilities and emotional capabilities so that users can feel connected to these devices. Since older adults differ in their preferences for emotional range, customizability should be prioritized in the design of future devices. Participants’ views on using a robot in public suggest that social robot marketing may significantly shape how assistive technologies are perceived in the future. Presenting these devices as sources of support and using them to educate the public about dementia may reduce the stigma around these technologies. Incorporating these key findings into the design and implementation of future social robots could improve their adoption among older adults.


1.         Infographic: Canada’s seniors population outlook: Uncharted territory | CIHI [Internet]. [cited 2022 Jun 16]. Available from: https://www.cihi.ca/en/infographic-canadas-seniors-population-outlook-uncharted-territory

2.         Getson C, Nejat G. Socially assistive robots helping older adults through the pandemic and life after COVID-19. Robotics. 2021 Sep;10(3):106.

3.         Hung L, Liu C, Woldum E, Au-Yeung A, Berndt A, Wallsworth C, et al. The benefits of and barriers to using a social robot PARO in care settings: A scoping review. BMC Geriatr. 2019 Aug 23;19(1):232.

4.         Petersen S, Houston S, Qin H, Tague C, Studley J. The utilization of robotic pets in dementia care. Journal of Alzheimer’s Disease. 2017 Jan 1;55(2):569–74.

5.         Robinson H, MacDonald B, Broadbent E. Physiological effects of a companion robot on blood pressure of older people in residential care facility: A pilot study. Australasian Journal on Ageing. 2015;34(1):27–32.

6.         Latikka R, Rubio-Hernández R, Lohan ES, Rantala J, Nieto Fernández F, Laitinen A, et al. Older adults’ loneliness, social isolation, and physical information and communication technology in the era of ambient assisted living: A systematic literature review. J Med Internet Res. 2021 Dec 30;23(12):e28022.

7.         Prescott TJ, Robillard JM. Are friends electric? The benefits and risks of human-robot relationships. iScience. 2021 Jan 22;24(1):101993.

8.         Pu L, Moyle W, Jones C, Todorovic M. The effectiveness of social robots for older adults: A systematic review and meta-analysis of randomized controlled studies. The Gerontologist. 2019 Jan 9;59(1):e37–51.

9.         Koh WQ, Felding SA, Budak KB, Toomey E, Casey D. Barriers and facilitators to the implementation of social robots for older adults and people with dementia: a scoping review. BMC Geriatr. 2021 Jun 9;21:351.

10.       Bradwell HL, Edwards KJ, Winnington R, Thill S, Jones RB. Companion robots for older people: importance of user-centred design demonstrated through observations and focus groups comparing preferences of older people and roboticists in South West England. BMJ Open. 2019 Sep 1;9(9):e032468.

11.       Illes J. Reflecting on the Past and Future of Neuroethics: The Brain on a Pedestal. AJOB Neuroscience. 2023 Mar 31;1–4.

Jaya Kailley is a directed studies student under the supervision of Dr. Julie Robillard in the NEST Lab, and she is pursuing an Integrated Sciences degree in Behavioural Neuroscience and Physiology at the University of British Columbia. She currently supports research projects that aim to include end-users in the process of social robot development. Outside of work, Jaya enjoys playing the piano, drawing, and reading fiction.


Bell Let’s Talk and Other Media Mental Health Campaigns – How Effective Are They for Young People?

Disclaimer: The following blog post involves mentions of suicide.

One in seven youth aged 10 to 19 experience a mental health issue – but many don’t seek or receive the care they need (1). Can media mental health campaigns help to bridge this gap?

What are media mental health campaigns?

Media mental health campaigns are marketing efforts to raise awareness of mental health issues using mass media channels like social media. These campaigns focus on topics such as suicide prevention, destigmatizing mental illness, and promoting mental health resources.

Given the widespread use of social media among young people (generally defined as people aged 10-24), these platforms can be leveraged to spread mental health information in a cost-effective and accessible way (2). It is no surprise to see a growing number of media mental health campaigns directed toward young people over the past few years. But how effective are these campaigns? Do they significantly impact the feelings and behaviours of young people toward mental health?

Despite their prevalence and popularity, there have been only a handful of empirical evaluations of how these campaigns affect young people.

Bell Let’s Talk: The largest mental health campaign in Canada

Consider Bell Let’s Talk – Canada’s most well-known media mental health campaign.

Founded in 2010 by Bell Canada, Bell Let’s Talk aims to destigmatize mental illness by encouraging dialogue about mental health on social media. For one day a year, the company donates $0.05 to mental health initiatives for every text or call on the Bell network, as well as every social media interaction with the campaign. To date, Bell Let’s Talk has raised over $139 million toward mental health initiatives and is the largest corporate initiative in Canada dedicated to mental health (3).
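The donation model is simple per-interaction arithmetic. A minimal sketch, assuming the campaign’s published rate of $0.05 per interaction; the interaction counts below are invented for illustration only:

```python
# Illustrative sketch of the Bell Let's Talk donation model:
# $0.05 per eligible text, call, or social media interaction.
DONATION_PER_INTERACTION = 0.05

def campaign_donation(interactions: dict[str, int]) -> float:
    """Total donation in dollars for one campaign day."""
    return DONATION_PER_INTERACTION * sum(interactions.values())

# Hypothetical counts for a single campaign day (not real figures)
counts = {"texts": 100_000_000, "calls": 30_000_000, "social_media": 20_000_000}
print(f"${campaign_donation(counts):,.2f}")  # $7,500,000.00
```

At these hypothetical volumes, a single campaign day would yield $7.5 million, which is consistent in scale with the $139 million the campaign reports raising since 2010.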

Despite its impressive reach and financial success, there have only been two empirical evaluations of Bell Let’s Talk’s impacts on young people.

Bell Let’s Talk and rates of access to mental health services in young people

In the first evaluation, Booth et al. were curious to see whether the Bell Let’s Talk campaign encouraged young people to use mental health services (4). They analyzed the monthly outpatient mental health visit rates of young people in Ontario between 2006 and 2015 and found that the 2012 Bell Let’s Talk campaign was associated with a temporary increase in mental health service use.

These findings are promising – however, it is possible that other events happening during this time also contributed to the change they observed.

Is Bell Let’s Talk effective for suicide prevention?

Another evaluation of Bell Let’s Talk studied whether the campaign was associated with changes in suicide rates (5). Côté et al. compared the suicide rate in Ontario before and during the campaign and found no significant difference, either among young people or in the general population.

In the same study, the researchers analyzed suicide-related Twitter posts under the hashtag #BellLetsTalk. They found that these Tweets focused more on suicide being a societal problem, and less on promoting protective messages about coping and resilience – which have been associated with fewer suicides (6,7). Based on this, the researchers suggest that Bell Let’s Talk and similar campaigns should aim to promote more protective messaging around suicide on social media, which may be more effective for suicide prevention.

Further evaluation of media mental health campaigns is needed

These evaluations suggest that Bell Let’s Talk can positively affect young people but could benefit from promoting more protective messages online. Given the enormous reach, financial stakes, and potential influence of the campaign, two evaluations are insufficient to capture the overall effects of Bell Let’s Talk on young people across Canada. Further evaluations are necessary.

Research in this area must overcome significant obstacles. First, evaluations that rely on objective measures of behaviour fail to capture the unique experiences of individuals as they interact with the campaign. Conversely, evaluations that rely on self-reported data leave room for response bias. For instance, study participants may self-report more positive outcomes after seeing campaign media because they believe researchers expect to see these outcomes. Additionally, campaigns may significantly impact people’s attitudes and behaviours in the long-term in a way that is difficult to detect in evaluations that focus on short-term impact. Even then, these long-term changes in outcomes could be attributed to factors other than mental health campaigns, such as a general increase in access to mental health resources and mental health education.

Despite these challenges, more evaluations have emerged over the years to address this important knowledge gap. By studying trends in existing media mental health campaigns for young people, we can inform the design and implementation of more effective campaigns in the future, ensuring they have a greater impact on the health and well-being of young people.


  1. Mental health of adolescents [Internet]. World Health Organization. 2021 [cited 2023 Mar 7]. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health
  2. Robinson J, Bailey E, Hetrick S, Paix S, O’Donnell M, Cox G, et al. Developing Social Media-Based Suicide Prevention Messages in Partnership With Young People: Exploratory Study. JMIR Ment Health. 2017 Oct 4;4(4):e40.
  3. The positive impact of your efforts | Bell Let’s Talk [Internet]. [cited 2023 Mar 7]. Available from: https://letstalk.bell.ca/our-impact/
  4. Booth RG, Allen BN, Bray Jenkyn KM, Li L, Shariff SZ. Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis. JMIR Ment Health. 2018 Apr 6;5(2):e27.
  5. Côté D, Williams M, Zaheer R, Niederkrotenthaler T, Schaffer A, Sinyor M. Suicide-related Twitter Content in Response to a National Mental Health Awareness Campaign and the Association between the Campaign and Suicide Rates in Ontario. Can J Psychiatry. 2021 May;66(5):460–7.
  6. Niederkrotenthaler T, Voracek M, Herberth A, Till B, Strauss M, Etzersdorfer E, et al. Role of media reports in completed and prevented suicide: Werther v. Papageno effects. Br J Psychiatry. 2010 Sep;197(3):234–43.
  7. Sinyor M, Williams M, Zaheer R, Loureiro R, Pirkis J, Heisel MJ, et al. The association between Twitter content and suicide. Aust N Z J Psychiatry. 2021 Mar;55(3):268–76.

Cindy Zhang (she/her) is a research assistant at the NEST Lab under the supervision of Dr. Julie Robillard. She is an undergraduate student pursuing a Bachelor of Arts degree in Psychology at the University of British Columbia. At the lab, she currently supports research projects on the impact of social media on youth mental health and family communication.

The Effects of Post-Traumatic Stress on Pediatric Organ Transplant Patients

What is post-traumatic stress?

Post-traumatic stress (PTS) is a psychiatric response to a stressful or traumatic event. The term covers both the diagnostic entity post-traumatic stress disorder (PTSD) and PTSD-related symptomatology, known as PTSS [1]. PTS symptoms cluster into three categories: re-experiencing, avoidance, and hyperarousal [2]. Re-experiencing can involve flashbacks of the traumatic event; avoidance is characterized by steering clear of reminders of the stressor; and prominent anxiety and hypervigilance underlie the hyperarousal dimension.

Symptoms of PTS can greatly vary, but can include:

  • Uncontrollable thoughts and memories related to the event
  • Bad dreams about the event
  • Physical reactions (e.g., sweating, racing heart, dizziness)
  • Changes in mood
  • Difficulty paying attention
  • Feelings of fear, guilt, anger, and shame
  • Sleep disturbances

Organ transplantation can be considered a traumatic event for a child. PTS resulting from an organ transplantation could arise from many factors, including surgical procedures, repeated laboratory and imaging investigations, life-threatening incidents in care, and dependence on technology for organ function and survival [1].

How common is post-traumatic stress in pediatric organ transplant patients?

A 2021 study conducted at BC Children’s Hospital by Hind et al. investigated the effects of PTS on the life quality of 61 pediatric organ transplant recipients. The sample included 12 heart transplant recipients, 30 kidney transplant recipients, and 19 liver transplant recipients [1].

They found that a total of 52 patients (85.2%) reported at least one trauma symptom, and eight (13.1%) of these patients indicated symptoms that put them at significant risk of PTSD. They also observed that kidney recipients had higher overall trauma scores than other organ transplant patients, perhaps due to the extensive post-transplant care involved after kidney transplantation. Non-white patients reported significantly higher trauma scores, while females reported higher trauma scores than their male counterparts, though this result was not statistically significant. Spending more days in the hospital and being prescribed more medication were also associated with higher trauma scores [2].
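The reported percentages follow directly from the sample counts; as a quick sanity check (a minimal sketch, with the counts taken from the study):

```python
# Reproduce the percentages reported by Hind et al. from the raw counts
n_total = 61        # heart (12) + kidney (30) + liver (19) recipients
n_any_symptom = 52  # patients reporting at least one trauma symptom
n_at_risk = 8       # patients at significant risk of PTSD

def pct(part: int, whole: int) -> float:
    """Percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(n_any_symptom, n_total))  # 85.2
print(pct(n_at_risk, n_total))      # 13.1
```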

How does post-traumatic stress affect the daily life of organ transplant patients?

Hind et al. also found that quality-of-life questionnaire scores were negatively correlated with trauma scores. This means that physical, emotional, social, academic, and psychosocial functioning decreased as trauma increased, leading to poorer quality of life. The results of this study indicate that PTS is a prevalent issue amongst pediatric organ transplant recipients and it can have detrimental effects on the daily functioning of patients post-transplant.

How does post-traumatic stress affect treatment?

Studies have shown that PTS in pediatric organ transplant recipients can impact treatment outcomes by causing post-transplant treatment nonadherence [1,3,4].

Why is treatment nonadherence significant to treatment outcomes?

Consider the example of a 19-year-old female patient whose first transplant was lost due to nonadherence [3]. The patient was granted a second organ transplant after stating that she would adhere to post-operative treatment. However, two days after her second surgery, she stopped taking her medications. When questioned, the patient revealed that for more than a year she had suffered recurrent intrusive thoughts about her liver disease and recurrent dreams about her wait for the first transplant. She reported wanting to avoid any reminder of her illness, even the sight of a nurse or medication. Healthcare practitioners treated her with cognitive behavioural therapy in the form of gradual exposure to the hospital environment, and a family intervention was also undertaken to strengthen her social support network. Following this treatment, the patient resumed taking her medications and continued doing so for more than a year after her second transplant.

This case study shows the profound impact PTS can have on medical care, and how addressing trauma symptoms can improve patient outcomes. Its findings reinforce that access to mental health resources is imperative for this population, given the effects of trauma on patients’ psychological and physical wellbeing.

It is important to recognize that transplantation can be a traumatic experience from which patients and their families may develop PTS symptoms. Providing resources and support services for PTS before, during, and after transplantation can therefore improve patient health outcomes as well as overall quality of life. Treatment for PTS may vary, but collaboration between healthcare practitioners and the psychologists, social workers, counsellors, and other support personnel who assist with psychosocial coping can enhance the patient experience and facilitate the transplantation journey for patients and families alike.


1. Hind T, Lui S, Moon E, Broad K, Lang S, Schreiber RA, et al. Post-traumatic stress as a determinant of quality of life in pediatric solid-organ transplant recipients. Pediatr Transplant. 2021;25(4):e14005, https://doi.org/10.1111/petr.14005

2. Nash RP, Loiselle MM, Stahl JL, Conklin JL, Rose TL, Hutto A, et al. Post-Traumatic Stress Disorder and Post-Traumatic Growth following Kidney Transplantation. Kidney360. 2022 Sep 29;3(9):1590, https://doi.org/10.34067/KID.0008152021

3. Shemesh E, Lurie S, Stuber ML, Emre S, Patel Y, Vohra P, et al. A pilot study of posttraumatic stress and nonadherence in pediatric liver transplant recipients. Pediatrics. 2000 Feb;105(2):E29, https://doi.org/10.1542/peds.105.2.e29

4. Martin LR, Feig C, Maksoudian CR, Wysong K, Faasse K. A perspective on nonadherence to drug therapy: psychological barriers and strategies to overcome nonadherence. Patient Prefer Adherence. 2018 Aug 22;12:1527–35, https://doi.org/10.2147/PPA.S155971

Anna Riminchan was born in Bulgaria, where she spent her early childhood before immigrating to Canada with her family. Anna is currently working towards a Bachelor of Science degree, majoring in Behavioural Neuroscience and minoring in Visual Arts at the University of British Columbia. Alongside her studies, she is contributing to research in neuroscience, after which she plans to pursue a degree in medicine. In her spare time, you can find Anna working on her latest art piece!

Pervasive But Problematic: How Generative AI Is Disrupting Academia

Dominating headlines since late 2022, the generative AI system ChatGPT has rapidly become one of the most controversial and fastest-growing consumer applications in history [1]. Capable of composing Shakespearean sonnets with hip-hop lyrics, drafting manuscripts with key points and strong counterarguments, or creating academic blogs worthy of publication, ChatGPT offers unrivalled potential to automate tasks and generate large bodies of text at lightning speed [2].

Original image generated using DALL·E text to image AI generator. Available from: https://openai.com/dall-e-2/

ChatGPT is a sophisticated language model that responds to user requests in ways that appear intuitive and conversational [3]. The model is built upon swathes of information obtained from the internet, 300 billion words, to be precise [4]. ChatGPT works by forming connections between data to reveal patterns of information that align with user prompts [5]. As a language model, ChatGPT can remember threads of information, enabling users to ask follow-up or clarifying questions. It is this personalized, interactive dialogue that elevates it above traditional search engines.
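As a loose analogy only (ChatGPT’s actual architecture is a far larger transformer neural network), the idea of “forming connections between data” to continue a prompt can be sketched with a toy model that learns which word tends to follow which:

```python
from collections import Counter, defaultdict

def train(text: str) -> dict[str, Counter]:
    """Learn word-to-word 'connections': how often each word follows another."""
    follows: dict[str, Counter] = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def continue_prompt(model: dict[str, Counter], word: str, n: int = 3) -> list[str]:
    """Extend a one-word prompt by repeatedly picking the most likely next word."""
    out = []
    for _ in range(n):
        if word not in model:
            break  # no learned continuation for this word
        word = model[word].most_common(1)[0][0]
        out.append(word)
    return out

model = train("the robot helps the user and the robot learns")
print(continue_prompt(model, "the"))
```

Real language models predict over tens of thousands of tokens with learned neural weights rather than raw counts, but the prompt-continuation loop is conceptually similar.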

Unsurprisingly, generative AI has amassed a strong army of followers eager to capitalize on its efficiency: 100 million people conversed with the chatbot in January alone [1].

But what might the lure of working smarter, not harder mean in academia?

Perilous Publishing, Or Powerful Penmanship

No longer a whisper shared between hushed sororities, generative AI like ChatGPT has become a powerful force proudly employed by professors and pupils alike. However, despite its popularity, uptake is not unanimous. Academics are divided.

With the ability to generate work at the touch of a button, users risk being led down a perilous path towards plagiarism, and having their development stifled. The clear threat to academic integrity and original thought is sending many into a state of panic [6–8]. Editors of scientific journals are also having to wrestle with publishing ethics, as ChatGPT is increasingly being cited as a co-author [9].

Yet despite its apparent proficiency, the generative AI model has a number of particularly unnerving limitations. In the words of OpenAI, ChatGPT’s parent company, the software “will occasionally make up facts or ‘hallucinate’ outputs, [that] may be inaccurate, untruthful, and otherwise misleading” [5].

Available from: https://chat.openai.com/chat

The rise of generative AI shines a spotlight on a troublesome issue in academia: the exchange of papers for grades. Whilst finished articles are necessary, when product triumphs over process, valuable lessons found in the process of writing can be overlooked.

“This technology [generative AI] … cuts into the university’s core purpose of cultivating a thoughtful and critically engaged citizenry. If we can now all access sophisticated and original writing with a simple prompt, why require students to write at all?” [10]

In response, attempting to stem the flow of plagiarism and untruth and to protect creative thinking, some academics have enforced outright bans on generative AI systems [2].

Unethical AI

Academic integrity aside, ChatGPT’s capabilities are also undermined by moral and ethical concerns.

A recent Time magazine exposé revealed that OpenAI outsources work to a firm in Kenya, whose staff are assigned the menial task of trawling through mountains of data and flagging harmful items to ensure that ChatGPT’s outputs are “safe for human consumption”; these data enrichment specialists earn less than $2 per hour [2].

Moreover, generative AI propagates systemic biases by repurposing primarily westernised data in response to English-language prompts, created by tech-savvy users with easy access to IT. For some, the commercialization of more sophisticated platforms like ChatGPT Pro will also prove particularly exclusionary [10–13].

Embracing The Chatbot

On the other side are supporters of generative AI in academia, such as Sam Illingworth, Associate Professor of Learning Enhancement at Edinburgh Napier University, who argues that it would be unrealistic, and unrepresentative of future workplaces, if students did not learn to use these technologies. Illingworth and others call for a shift away from admittedly valid concerns about insidious plagiarism (OpenAI’s own plagiarism detection tool is highly inaccurate, with a 26% success rate [11]) toward embracing the opportunity to reshape pedagogy [4].

Methods for teaching and assessment are having to be reexamined, with some suggesting that a return to traditional methods, such as impromptu oral exams, personal reflections or in-person written assignments, may prove effective against a proliferation of AI generated work [12,13].

Generative AI chatbots also have the potential to become a teacher’s best friend [14]. Automating grading rubrics or assisting with lesson planning might offer a much-needed morale boost to a professional body whose expertise is being somewhat jeopardized by the emergent technology. And despite rumors of existential threat [15,16], generative AI, for now at least, poses no immediate risk of replacing human educators; empathy and creativity are among unique human qualities proving tricky to manufacture from binary code.

The Future Is Unknown

Much like other technologies that have emerged from Sisyphean cycles of innovation (think Casio graphing calculator or Mac OS), ChatGPT and fellow generative AI chatbots have the potential to transform the face of education [17].

As the AI arms race marches on at quickening pace, with companies delivering a daily bombardment of upgrades and functionalities, it is impossible to predict who, or what, might benefit or become a casualty to automation in academia. The story of AI in academia remains unwritten, but as the indelible mark left by ChatGPT suggests, it is certain to deliver a compelling narrative.


1.         Hu K. ChatGPT sets record for fastest-growing user base – analyst note. Reuters [Internet]. 2023 Feb 2 [cited 2023 Feb 6]; Available from: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

2.         Perrigo B. OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive | Time [Internet]. 2023 [cited 2023 Feb 13]. Available from: https://time.com/6247678/openai-chatgpt-kenya-workers/

3.         OpenAi. ChatGPT: Optimizing Language Models for Dialogue [Internet]. OpenAI. 2022 [cited 2023 Feb 17]. Available from: https://openai.com/blog/chatgpt/

4.         Hughes A. ChatGPT: Everything you need to know about OpenAI’s GPT-3 tool [Internet]. BBC Science Focus Magazine. 2023 [cited 2023 Feb 6]. Available from: https://www.sciencefocus.com/future-technology/gpt-3/

5.         OpenAi. ChatGPT General FAQ [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://help.openai.com/en/articles/6783457-chatgpt-general-faq

6.         Heidt A. ‘Arms race with automation’: professors fret about AI-generated coursework. Nature [Internet]. 2023 Jan 24 [cited 2023 Feb 6]; Available from: https://www.nature.com/articles/d41586-023-00204-z

7.         Kubacka T. “Publish-or-perish” and ChatGPT: a dangerous mix [Internet]. Lookalikes and Meanders. 2023 [cited 2023 Feb 6]. Available from: https://lookalikes.substack.com/p/publish-or-perish-and-chatgpt-a-dangerous

8.         Boyle K. A reason for the moral panic re AI in academia: in work, we learn prioritization of tasks, which higher ed doesn’t prize. Speed is crucial in work— it’s discouraged in school. Tools that encourage speed are bad for some established industries. Take note of who screams loudly. https://t.co/ot8YHh7H7b [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/KTmBoyle/status/1619384367637471234

9.         Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature [Internet]. 2023 Jan 18 [cited 2023 Feb 21];613(7945):620–1. Available from: https://www.nature.com/articles/d41586-023-00107-z

10.       Southworth J. Rethinking university writing pedagogy in a world of ChatGPT [Internet]. University Affairs. 2023 [cited 2023 Feb 18]. Available from: https://www.universityaffairs.ca/opinion/in-my-opinion/rethinking-university-writing-pedagogy-in-a-world-of-chatgpt/

11.       Wiggers K. OpenAI releases tool to detect AI-generated text, including from ChatGPT [Internet]. TechCrunch. 2023 [cited 2023 Feb 6]. Available from: https://techcrunch.com/2023/01/31/openai-releases-tool-to-detect-ai-generated-text-including-from-chatgpt/

12.       Nature Portfolio. A poll of @Nature readers about the use of AI chatbots in academia suggests that the resulting essays are still easy to flag, and it’s possible to amend existing policies and assignments to address their use. https://t.co/lHyPtEEb7F [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/NaturePortfolio/status/1619751476947046408

13.       Khatsenkova S. ChatGPT: Is it possible to spot AI-generated text? [Internet]. euronews. 2023 [cited 2023 Feb 6]. Available from: https://www.euronews.com/next/2023/01/19/chatgpt-is-it-possible-to-detect-ai-generated-text

14.       Roose K. Don’t Ban ChatGPT in Schools. Teach With It. The New York Times [Internet]. 2023 Jan 12 [cited 2023 Feb 21]; Available from: https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html

15.       Thorp HH. ChatGPT is fun, but not an author. Science [Internet]. 2023 Jan 27 [cited 2023 Feb 6];379(6630):313–313. Available from: https://www.science.org/doi/10.1126/science.adg7879

16.       Chow A, Perrigo B. The AI Arms Race Is On. Start Worrying | Time [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://time.com/6255952/ai-impact-chatgpt-microsoft-google/

17.       Orben A. The Sisyphean Cycle of Technology Panics. Perspect Psychol Sci [Internet]. 2020 Sep 1 [cited 2023 Feb 6];15(5):1143–57. Available from: https://doi.org/10.1177/1745691620919372

Susanna Martin BSc (Hons) is a Research Assistant at The Neuroscience Engagement and Smart Tech (NEST) lab.

Feeling welcomed: Creating space for Indigenous voices in brain and mental health research

Join us for the 2023 Brain Awareness Week Annual Neuroethics Distinguished Lecture featuring Dr. Melissa L. Perreault!

Tuesday, March 14, 2023
5:30 PM – 7:00 PM PDT
Bill Reid Gallery of Northwest Coast Art
639 Hornby Street, Vancouver, BC, V6C 2G3 (map)

Everyone is welcome! This public in-person event is free, but tickets are required.
Kindly RSVP here: https://ncbaw2023.eventbrite.ca 

Research on Indigenous communities has historically been conducted using a one-sided approach, with researchers having little knowledge of Indigenous culture, minimal concern for community needs or desires, and no intention of giving back to the community. In this lecture, intended for people from all backgrounds and professions, Dr. Melissa L. Perreault will discuss why now is the time to give Indigenous communities a voice in ethically and culturally guided research on brain and mental health.

Melissa L. Perreault, PhD
Dr. Melissa L. Perreault is an Associate Professor and neuroscientist in the Department of Biomedical Sciences at the University of Guelph and a member of the College of New Scholars, Artists, and Scientists in the Royal Society of Canada. Dr. Perreault’s research is focused on understanding sex differences in the mechanisms that underlie neuropsychiatric disorders, and on identifying brain wave patterns that can be used as biomarkers for brain and mental health disorders.

Dr. Perreault is a citizen of the Métis Nation of Ontario, descended from the historic Métis Community of the Mattawa/Ottawa River. She has developed numerous Indigenous and equity, diversity, and inclusion initiatives at institutional, national, and international levels. As a member of the Indigenous Knowledge Holders Group for the Canadian Brain Research Strategy, she continues to strive towards inclusivity in neuroscience and Indigenous community research.

Brain Awareness Week
Brain Awareness Week is the global campaign to foster public enthusiasm and support for brain science. Every March, partners host imaginative activities in their communities that share the wonders of the brain and the impact brain science has on our everyday lives.

Designing for Dementia: Towards a Truly Universal Built Environment

The buildings and streets around us shape how we interact with the environment. How can we design cities that are equitable and accessible for people living with dementia?

The physical structures in which we live, work, and play constitute the built environment around us[1]. Since we constantly interact with the built environment, architects must consider different needs and abilities in their designs to enable equitable access. This is called universal design[2], and it can be seen all around us.

Take, for example, a crosswalk. In the image below, the sidewalk slopes down to provide wheelchair access. There are also tactile bumps to signify a crossing for people using a long cane. The walking person signal provides clear instructions for those who cannot hear the traffic slowing. All these features represent design decisions made to increase the usability of the crosswalk.

Not Yet Universal

While designers have tried to create built environments that are universally accessible, they rarely consult people with lived experience of dementia in the design process. The result is cities and spaces that are confusing and inaccessible to those with dementia[1,3–5].

The City of Vancouver exemplifies this disconnect in what it specifies as “accessible street design”[6]. Despite claiming to use the principles of universal design, the city’s specifications focus mainly on physical disabilities and neglect considerations for dementia. Further, dementia research tends to focus on social interactions and personal connections rather than the physical environment, so designers are simply unaware of the needs of those living with dementia[4,7,8].

Work in the field of Environmental Gerontology – which considers the relationship between the environment, health, and aging – has made the issue clear: making the built environment more accessible can improve quality of life for people with dementia[7]. How do we adapt our environments to better suit their needs?

Limited Work in Limited Spaces

To date, only a few small-scale research trials have included people with dementia in the design process. One example is the Lepine-Versailles Garden in France, which incorporated input from people living with dementia and their care partners. Dementia-specific features include clearly marked garden boundaries and small enclosed spaces that help relieve anxiety and provide a sense of safety[9]. These small spaces offer a place for close, quiet social interaction, which becomes more relevant as dementia progresses and language skills diminish[8].

Environmental interventions have also been used in psychogeriatric care settings. One hospital in the Netherlands built custom handrails to help people living with dementia navigate more easily[4]. For example, one wooden handrail featured a bird, and nearby speakers played birdsong to help residents find the garden.

Handrail with a bird located near the garden. Nearby speakers played bird chirping sounds. Reproduced from Ludden et al. (2019).

However, these findings are very limited – they apply to specific environments that are not representative of most people’s experiences with dementia. A majority of people with dementia live in their own homes and routinely navigate their environment by taking walks on their own[3,5]. Health research has increasingly emphasized the idea of ‘aging in place,’ suggesting that the number of people living at home with dementia will only grow.

Design Principles for Dementia

So, what should dementia-friendly environments actually look like? Mitchell and Burton studied the design principles that make environments accessible[3,5] and, building on these principles, developed strategies to help people living with dementia flourish in the built environment. These include:

  • Small blocks laid out in an irregular grid with minimal crossroads and gently winding streets: this layout emphasizes legibility, allowing people to identify where they need to go and avoid complicated crossroads.
  • Varied urban form and architecture, with landmarks and visual cues: this makes different parts of a neighbourhood more distinct from each other, which can help with wayfinding and orienting.
  • Mixed-use buildings, including plenty of local services: this ensures people do not have to travel far from their homes to access essential services (e.g., a grocery store) and enables access without the need to drive or take transit.

These recommendations highlight the specific needs of people living with dementia that designers often overlook. For example, while cities designed on a grid are easy for most people to navigate, these grids are often repetitive and rely on numbered street signs for navigation, making this design inaccessible to those with dementia.

From Ideas to Implementation: An Ethical Imperative

In summary, designers should prioritize the long-overlooked needs of people with dementia. Researchers have developed detailed recommendations for incorporating the needs of people with dementia into universal design; the key now is implementation.

With the global population aging, the number of people living with dementia is increasing. The focus on aging in place also means that many people will remain in their homes for longer and interact with the public environment (as opposed to specific dementia care facilities). It is therefore an ethical imperative that architects, city planners, and all other groups involved in the process of designing our built environment begin to consider the needs of people living with dementia.

Consulting people living with dementia during the design process is an opportunity to preserve their autonomy and dignity. This will require an overhaul of how we think about our built environment, and a shift towards truly universal design.


  1. Sturge, J., Nordin, S., Sussana Patil, D., Jones, A., Légaré, F., Elf, M., & Meijering, L. (2021). Features of the social and built environment that contribute to the well-being of people with dementia who live at home: A scoping review. Health & Place, 67, 102483. https://doi.org/10.1016/j.healthplace.2020.102483
  2. Story, M. F. (1998). Maximizing Usability: The Principles of Universal Design. Assistive Technology, 10(1), 4–12. https://doi.org/10.1080/10400435.1998.10131955
  3. Mitchell, L., & Burton, E. (2010). Designing Dementia‐Friendly Neighbourhoods: Helping People with Dementia to Get Out and About. Journal of Integrated Care, 18(6), 11–18. https://doi.org/10.5042/jic.2010.0647
  4. Ludden, G. D. S., van Rompay, T. J. L., Niedderer, K., & Tournier, I. (2019). Environmental design for dementia care—Towards more meaningful experiences through design. Maturitas, 128, 10–16. https://doi.org/10.1016/j.maturitas.2019.06.011
  5. Mitchell, L., & Burton, E. (2006). Neighbourhoods for life: Designing dementia‐friendly outdoor environments. Quality in Ageing and Older Adults, 7(1), 26–33. https://doi.org/10.1108/14717794200600005
  6. Accessible Street Design. (n.d.). The City of Vancouver Engineering Services. Retrieved January 1, 2023, from https://vancouver.ca/files/cov/accessiblestreetdesign.pdf
  7. Chrysikou, E., Tziraki, C., & Buhalis, D. (2018). Architectural hybrids for living across the lifespan: Lessons from dementia. The Service Industries Journal, 38(1–2), 4–26. https://doi.org/10.1080/02642069.2017.1365138
  8. Van Steenwinkel, I., Van Audenhove, C., & Heylighen, A. (2019). Offering architects insights into experiences of living with dementia: A case study on orientation in space, time, and identity. Dementia, 18(2), 742–756. https://doi.org/10.1177/1471301217692905
  9. Charras, K., Bébin, C., Laulier, V., Mabire, J.-B., & Aquino, J.-P. (2020). Designing dementia-friendly gardens: A workshop for landscape architects: Innovative Practice. Dementia, 19(7), 2504–2512. https://doi.org/10.1177/1471301218808609

Grayden Zaleski is a Directed Studies Student under the supervision of Dr. Julie Robillard in the NEST Lab. He is pursuing a Bachelor of Science degree with a major in Behavioural Neuroscience and a Computer Science minor. His research interests include human computer interaction, accessible technology, and the use of technology in healthcare to improve patient experience. He is currently working to engage healthcare providers and community members through an innovative online ‘Tweet Chat’! Additionally, he is contributing to the first empirical characterization of social media use in dementia research, which seeks to assess the benefits and harms of social media usage for research participation. In his spare time, you can find Grayden exploring Vancouver and playing simulation games.

Lessons from end-users: how young people select mobile mental health applications

This blog post discusses findings from a peer-reviewed article titled “What criteria are young people using to select mobile mental health applications? A nominal group study”, published in Digital Health (2022, paper here).

Mental health apps: more accessible mental health support

According to the World Health Organization, as many as one in seven children and teenagers aged 10-19 experience mental health disorders (1). Despite the prevalence of mental health issues in this group, access to professional diagnosis and treatment remains low. As a result, children and teenagers are turning to smartphone applications to support their mental wellbeing. When young people search for apps to support their mental health in general or to address a specific problem, such as anxiety, they are faced with an overwhelming number of options.

How to pick a mental health app?

The smartphone applications available on platforms such as Google Play or the App Store are evaluated based on general user experience or satisfaction, usually on a 1 to 5-star rating scale. This system results in apps being suggested based on popularity rather than on their content or their effectiveness in addressing mental health concerns. While some mental health interventions delivered by apps are developed based on evidence, most of the apps on the market are not supported by science. Additionally, there is no regulatory oversight to prevent apps from promoting potentially harmful interventions, making false claims, or mishandling user data (2,3). Based on a search from 2016, only 2.6% of apps make effectiveness claims that are supported in any way (4). The high number of available apps, combined with popularity-only rankings, makes it difficult to choose apps that are safe and effective.

Young people’s criteria for selecting mental health apps

Since selecting mental health support apps is challenging, the Neuroscience Engagement and Smart Tech (NEST) lab at Neuroethics Canada, in collaboration with Foundry BC, set out to develop a tool that makes it easier to select the app best suited to a user’s circumstances. A tool that is helpful to young people has to align with their needs and priorities. We therefore conducted a series of nominal group meetings to identify the criteria that matter to young people when they select mental health apps. The infographic below summarizes the criteria that emerged in discussions with 47 young people aged 15-25 in four towns in British Columbia, Canada. These criteria will inform the development of an app-selection tool that combines end-user priorities with expert input.

The future of mental health support

As mental health apps continue to increase in popularity, so do the diversity and complexity of the features they offer. For example, some mobile applications offer access to healthcare professionals via video or chat but may also use AI chatbots to provide help or counselling. As we uncovered in the nominal groups, young people want apps to provide links to community services available in their area and to allow users to share the information the apps collect with their health care team. It is therefore critically important to identify the priorities of end-users to guide the ethical use of this innovative form of mental health support.


  1. Adolescent mental health [Internet]. [cited 2022 Dec 16]. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health
  2. Anthes E. Mental health: There’s an app for that. Nature News. 2016 Apr 7;532(7597):20.
  3. Robillard JM, Feng TL, Sporn AB, Lai JA, Lo C, Ta M, et al. Availability, readability, and content of privacy policies and terms of agreements of mental health apps. Internet Interventions. 2019 Sep 1;17:100243.
  4. Larsen ME, Nicholas J, Christensen H. Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps. JMIR mHealth and uHealth. 2016 Aug 9;4(3):e6020.

Would you join a clinical trial advertised on Facebook? The ethics of dementia research content on social media

Some areas of dementia research are relevant to healthy older adults, and social media can help spread the word. What should researchers and the public know about dementia research content on social media to support future brain health?

Dementia risk reduction is highly relevant for healthy adults. Addressing certain lifestyle factors may reduce future cases of dementia (1). Examples include pursuing education, staying physically active, quitting smoking, preventing head injury, and stabilizing blood pressure.

Online exposure to this topic may encourage lifestyle changes and promote much-needed participation in dementia risk reduction research. Some dementia researchers are turning to social media as a low-cost way to increase community awareness and research participation (2–5).

Appropriateness of social media in brain health research

Health-related content on social media is not without risk. Ethical concerns accompany the use of social media in research (6,7). Common concerns include privacy, confidentiality, informed consent, the spread of misinformation, and the protection of vulnerable groups. 

Understanding how dementia research is typically presented online can inform social media use to improve public involvement. Currently, however, there is no thorough overview of the type of dementia research content users may encounter on social media. 

To inform future ethical guidance of online brain health engagement, we investigated current uses of social media for dementia research. 

Image by Jason A. Howie under the cc-by-2.0 license on Wikimedia Commons.

Dementia research on Facebook vs. Twitter

We reviewed a sample of public dementia research posts on Facebook and Twitter (8). Our analysis examined the types of users posting about dementia research and the topics they discussed.

Facebook users were mainly advocacy and health organizations rather than individuals. In contrast, Twitter users largely had academic or research backgrounds. This difference in user groups may explain the greater amount of academic content on Twitter, such as peer-reviewed research articles. Most research articles were open access and available to the public but may not be accessible for a wide range of literacy levels.

For both platforms, prevention and risk reduction were main areas of focus in dementia research. Posts with these topics appeared the most frequently and received a lot of attention in the form of likes, shares, and comments.

Other popular topics included dementia treatment and research related to the detection of dementia. Treatment posts primarily discussed the approval of aducanumab1 by the Food and Drug Administration (FDA), leading to much online debate. This may explain why, at the time, non-academic users had more interactions on dementia treatment tweets. The purpose behind most posts was to share dementia research information and knowledge.

On risk, responsibility, and stigma

The posts in our social media data emphasized individual prevention efforts, such as diet and exercise. However, topics also included social and environmental barriers that interfere with dementia risk reduction, care, treatment delivery, and other research areas. 

As stated in one Facebook post, “[the] social determinants of health can significantly impact brain health disparities & the ability to access care.”

Barriers are unequally distributed across communities that vary by race, ethnicity, sex and gender, socioeconomic background, disability, and other aspects of identity.

Dementia researchers on social media should avoid using language that elicits stigma or equates brain health with personal responsibility (9). Society-wide initiatives that remove these barriers have the potential to improve future population health more broadly and effectively.

Image from Pixabay.

Practical social media guidance is needed for dementia research

A better understanding of the dementia research space on social media can inform future ethical guidelines. Dementia research engagement should incorporate the community’s values and perspectives on using social media for risk reduction.

1 The FDA approved aducanumab as a treatment for Alzheimer’s disease in June 2021. The decision was met with much controversy and ethical discussion. More information can be found here.

Access the full research paper here.

This work is supported by the Alzheimer’s Association Research Grant program (JMR), the Canadian Consortium on Neurodegeneration in Aging, AGE-WELL NCE Inc., a member of the Networks of Centres of Excellence program, and the University of British Columbia Four Year Doctoral Fellowship (VH).


  1. Livingston G, Huntley J, Sommerlad A, Ames D, Ballard C, Banerjee S, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. The Lancet. 2020 Aug 8;396(10248):413–46.
  2. Corey KL, McCurry MK, Sethares KA, Bourbonniere M, Hirschman KB, Meghani SH. Utilizing Internet-based recruitment and data collection to access different age groups of former family caregivers. Appl Nurs Res. 2018 Dec 1;44:82–7.
  3. Isaacson RS, Seifan A, Haddox CL, Mureb M, Rahman A, Scheyer O, et al. Using social media to disseminate education about Alzheimer’s prevention & treatment: a pilot study on Alzheimer’s universe (www.AlzU.org). J Commun Healthc. 2018;11(2):106–13.
  4. Friedman DB, Gibson A, Torres W, Irizarry J, Rodriguez J, Tang W, et al. Increasing Community Awareness About Alzheimer’s Disease in Puerto Rico Through Coffee Shop Education and Social Media. J Community Health. 2016 Oct;41(5):1006–12.
  5. Stout SH, Babulal GM, Johnson AM, Williams MM, Roe CM. Recruitment of African American and Non-Hispanic White Older Adults for Alzheimer Disease Research Via Traditional and Social Media: a Case Study. J Cross-Cult Gerontol. 2020 Sep 1;35(3):329–39.
  6. Bender JL, Cyr AB, Arbuckle L, Ferris LE. Ethics and Privacy Implications of Using the Internet and Social Media to Recruit Participants for Health Research: A Privacy-by-Design Framework for Online Recruitment. J Med Internet Res. 2017;19(4):e104.
  7. Gelinas L, Pierce R, Winkler S, Cohen IG, Lynch HF, Bierer BE. Using Social Media as a Research Recruitment Tool: Ethical Issues and Recommendations. Am J Bioeth. 2017 Mar 4;17(3):3–14.
  8. Hrincu V, An Z, Joseph K, Jiang YF, Robillard JM. Dementia Research on Facebook and Twitter: Current Practice and Challenges. J Alzheimers Dis. 2022 Jan 1; 1–13.
  9. Lawless M, Augoustinos M, LeCouteur A. “Your Brain Matters”: Issues of Risk and Responsibility in Online Dementia Prevention Information. Qual Health Res. 2018 Aug 1;28(10):1539–51.

Viorica Hrincu, MSc is doing her PhD in Experimental Medicine at the University of British Columbia in the Neuroscience Engagement and Smart Tech (NEST) lab.

Cross-Cultural Perspectives on Personhood: A Neuroethics Lens

Post by Anna Nuechterlein

What is personhood? How is it viewed, described, and understood? Indeed, personhood is a complex and dynamic concept that varies across philosophical, political, legal, and cultural domains. In Western discourse, theories of personhood gained traction during the Age of Enlightenment. During this era, John Locke suggested that personal identity is synonymous with “psychological continuity”, grounded in consciousness and memory [1]. Immanuel Kant believed that personhood is defined by “capacity” and “rationality” – in other words, the ability to act according to human reason [2].

In recent history, the Western individualistic notion of what it means to be a person or hold personhood has been challenged. Feminist accounts of personhood suggest that cultural interactions, societal values, and interpersonal relations intimately shape personhood [3, 4]. Cross-cultural perspectives on personhood further highlight the fluid and adaptable nature of the term. Scholars such as Nhlanhla Mkhize and Polycarp Ikuenobe have written about communal personhood in the African context, recognizing that in many African societies, personhood is achieved and maintained by belonging to a community [5, 6].

Different conceptions of personhood raise unique, culturally mediated neuroethics considerations for privacy, informed consent, and values across social, research, and clinical contexts. For example, in societies where personhood is primarily viewed as communal, privacy may be less valued. As a result, Kenyan philosopher Eunice Kamaara cautions that African youth may be less inclined to consider significant privacy risks when sharing sensitive personal and community information [7]. Many Indigenous communities also approach the self as communal and view decision-making and informed consent as largely collective processes [8]. The implications for ethical research conduct are significant: Māori criminologist Juan Tauri proposes that disregard of communal consent by research ethics boards disempowers Indigenous peoples and perpetuates colonial practices in research ethics [9]. In Western culture, the limitations of current conceptions of personhood have come to light through inquiries into disorders of consciousness. Scholars such as Canadian biomedical engineer Stefanie Blain-Moraes have proposed that personhood, responsiveness, and consciousness are too often conceptually blended in Western medicine, thereby potentially diminishing personhood in individuals in minimally conscious states [10].

So, what is personhood? How is it viewed, described, and understood? Ultimately, there is no one right answer to this question. With the emergence of a “global neuroethics” [11], there is a cultural imperative to incorporate diverse worldviews of personhood into theoretical and practical applications of neuroethics. To advance the goals of neuroethics on an international landscape, a flexible approach is imperative – one that acknowledges and minimizes risks such as overstating cultural differences, deepening harmful stereotypes, and perpetuating the “west” versus “the rest” narrative [12]. Questions and issues relating to personhood must be addressed through a holistic and intersectional lens situated within relevant socio-cultural and socio-political contexts, recognizing the limitations of “ethical universalism” [13]. As emphasized by neuroethicists Arleen Salles, Karen Herrera-Ferrá, and Laura Cabrera, “much conceptual and groundwork remains to be done to respectfully learn from different cultures and promote frameworks that advance the local and global goals of neuroethics” [11]. Rethinking frameworks based on Western conceptions of self and personhood, nurturing international collaborations, centering local ways of knowing [11], and embracing humility are pivotal beginnings toward creating culturally relevant and appropriate frameworks for understanding personhood.

Author bio: Anna is a research assistant and project coordinator at Neuroethics Canada. She is interested in the junction where neuroscience, law, and policy meet, and will be pursuing a legal education at the University of Toronto in 2023. Outside of academia, she loves to read, paint, play guitar, and run (literally) around Vancouver.

Special thanks to Stefanie Blain-Moraes for sharing her insights on personhood and providing guidance for this blog.


  1. Locke, J. (1997). An essay concerning human understanding. Harmondsworth, UK: Penguin Books.
  2. Kant, I. (1948). Groundwork of the metaphysics of morals. In The moral law: Kant’s groundwork of the metaphysics of morals, ed. H. J. Paton, X–XX. London, UK: Hutchinson.
  3. Harris, H. (1998). Should We Say that Personhood Is Relational? Scottish Journal of Theology, 51(2): 214-234. doi:10.1017/S0036930600050134
  4. White FJ. (2013). Personhood: An essential characteristic of the human species. Linacre Q, 80(1), 74-97. doi: 10.1179/0024363912Z.00000000010.
  5. Ikuenobe, P (2016). Good and Beautiful: A Moral-Aesthetic View of Personhood in African Communal Traditions. Essays in Philosophy 17(1): 125-163.
  6. Mkhize, N. (2006). Communal personhood and the principle of autonomy: the ethical challenges. CME, 24(1).
  7. Kamaara, E, Nderitu, D, Masese, E, Kiyiapi, L, Wawa, S, Oketch, D, Sigei J, Atwoli, L. (2022). Personhood, Privacy, and Spirituality: Neuroethics of digital mental health innovations for youth in Africa. International Neuroethics Society Annual Meeting, Montreal, Quebec, Canada.
  8. Stevenson S, Beattie BL, Vedan R, Dwosh E, Bruce L, Illes J. (2013). Neuroethics, confidentiality, and a cultural imperative in early onset Alzheimer disease: a case study with a First Nation population. Philos Ethics Humanit Med, 16(8). doi: 10.1186/1747-5341-8-15.
  9. Tauri, Juan M. (2017). Research ethics, informed consent and the disempowerment of First Nation peoples. Research Ethics 14(3): 1-14.
  10. Blain-Moraes S, Racine E, Mashour GA. (2018). Consciousness and Personhood in Medical Care. Front Hum Neurosci, 2(12). doi: 10.3389/fnhum.2018.00306.
  11. Salles, A., Herrera-Ferrá, K., & Cabrera, L. Y. (2018, December 18). Global Neuroethics and cultural diversity: Some challenges to consider. Neuronline. Retrieved November 15, 2022, from https://neuronline.sfn.org/professional-development/global-neuroethics-and-cultural-diversity-some-challenges-to-consider
  12. Degnen, C. (2018). Cross-cultural perspectives on personhood and the life course. Palgrave Macmillan New York, 1–33. https://doi.org/10.1057/978-1-137-56642-3
  13. Fleischacker, Samuel. “1. Limits of Universalism”. The Ethics of Culture, Ithaca, NY: Cornell University Press, 1994, pp. 1-20. https://doi.org/10.7591/9781501734595-002

Policy’s Role in the Use of Social Robots in Care Homes

Can policy help social robots provide ethical, dignified, and beneficial care for older adults? This question has been the subject of ongoing ethical debate concerning the use of social robots in care homes.

Around the world, the expanding population of older adults is increasing the need for care resources, straining health and aged care providers (1). The COVID-19 pandemic has further highlighted the negative consequences of overpacked and understaffed healthcare institutions (2). As governments seek solutions to reduce pressure on care homes, the use of social robots as a potential tool has been suggested.

Already, social robots have been studied in the context of older adult care to provide companionship, exercise, cognitive therapy, and help with daily tasks (3). Although these studies have shown predominantly positive effects, the majority assessed social robots in short-term situations, had small sample sizes, or lacked diversity, and so may not be generalizable across cultures (3).

Studies from North America with larger sample sizes and longer time periods show variable results, with some older adults experiencing reduced loneliness and increased interaction with other older adults (4). Populations of care home residents with dementia also vary in their responses, suggesting that one approach to delivering care with social robots does not fit all (5). However, it is important to note that the social robot used in both of these studies is the same model that has been in use since 2003 (6). The field of social robotics is rapidly expanding, with many new types and models of robots released and a greater focus on end-users in their development (7,8). Research with such models in care home contexts with generalizable samples is limited (3). Although more research is needed, social robots developed with users are showing promising preliminary results and could be a viable future solution for promoting well-being in older adults (9).

Overall, social robots show great promise as beneficial tools in care homes. They can assist caregivers who are tired, distracted, overwhelmed, or unwell (5). Social robots can also be used to empower older adults to be more independent and to support care at home (2).

However, key ethical challenges in the use of social robot care assistants include autonomy, privacy, dignity, and bias (2). A social robot can suppress or override a user’s autonomy if, for example, it prevents the user from climbing on a chair to reach something in an effort to avert a fall. Although the user’s safety is maintained, their autonomy and dignity may be diminished by the robot. Furthermore, the social robot’s monitoring features and social interaction with the user require data storage and use, which could interfere with the user’s privacy.

Currently, legislation around privacy and consumer protection could form the basis of government-enforced policies on social robots. In AI, however, self-regulation by developers has typically been the norm (2). Critics counter that self-regulation does not sufficiently protect the rights and safety of vulnerable populations such as older adults, and that manufacturers primarily protect their own interests.

This is where, Johnston suggests, ethics by design can ensure that the values of dignity, respect for autonomy, and benevolence are programmed into a robot’s behavior so that it protects the interests of older adults. Johnston further proposes that clinical ethics committees determine the “moral code” programmed into social robots and monitor the ethical use of such systems within care home contexts. Ethics committees can provide consultation services, help create care home policies and procedures regarding social robots, and help resolve emerging ethical dilemmas. To counteract biases in design, it is important that ethics committees consider multi-stakeholder perspectives. Emphasizing the voices of end-users tailors social robot functionality to the populations it will serve and aids user acceptance of social robots (10).

Furthermore, policies must weigh both the benefits and the drawbacks of using social robots in care home contexts (1). Potential benefits include increased efficiency, increased welfare, physiological and psychological benefits, and greater satisfaction (1). There are, however, notable objections: relations with robots could displace human contact, these relations could themselves be harmful, robot care may be undignified and disrespectful, and social robots can be deceptive (1). These ethical considerations must be carefully balanced in a holistic policy that maximizes benefits for end-users while mitigating the potential downsides of social robot use.

Although we are not yet at the stage where social robots can be deployed at scale across care homes in North America, it is important to anticipate their ethical ramifications now. By discussing the ethical considerations that policy should regulate, we take strides towards the responsible development and use of social robots, with the goal of minimizing their potential for harm and ensuring their benefits for human care.

Bio: Anna Riminchan was born in Bulgaria, where she spent her early childhood before immigrating to Canada with her family. Anna is currently working towards a Bachelor of Science degree, majoring in Behavioural Neuroscience and minoring in Visual Arts at the University of British Columbia. In the meantime, she is contributing to advancing research in neuroscience, after which she plans to pursue a degree in medicine. In her spare time, you can find Anna working on her latest art piece!


  1. Sætra HS. The foundations of a policy for the use of social robots in care. Technol Soc. 2020 Nov 1;63:101383.
  2. Johnston C. Ethical Design and Use of Robotic Care of the Elderly. J Bioethical Inq. 2022 Mar 1;19(1):11–4.
  3. Thunberg S, Ziemke T. Social Robots in Care Homes for Older Adults. In: Li H, Ge SS, Wu Y, Wykowska A, He H, Liu X, et al., editors. Social Robotics. Cham: Springer International Publishing; 2021. p. 475–86. (Lecture Notes in Computer Science).
  4. Robinson H, MacDonald B, Kerse N, Broadbent E. The Psychosocial Effects of a Companion Robot: A Randomized Controlled Trial. J Am Med Dir Assoc. 2013 Sep 1;14(9):661–7.
  5. Moyle W, Jones C, Murfield J, Thalib L, Beattie E, Shum D, et al. Using a therapeutic companion robot for dementia symptoms in long-term care: reflections from a cluster-RCT. Aging Ment Health. 2019 Mar 4;23(3):329–36.
  6. PARO Therapeutic Robot [Internet]. [cited 2022 Jul 22]. Available from: http://www.parorobots.com/
  7. Breazeal CL, Ostrowski AK, Singh N, Park HW. Designing Social Robots for Older Adults. 2019;10.
  8. Östlund B, Olander E, Jonsson O, Frennert S. STS-inspired design to meet the challenges of modern aging. Welfare technology as a tool to promote user-driven innovations or another way to keep older users hostage? Technol Forecast Soc Change. 2015 Apr 1;93:82–90.
  9. Hutson S, Lim SL, Bentley PJ, Bianchi-Berthouze N, Bowling A. Investigating the Suitability of Social Robots for the Wellbeing of the Elderly. In: D’Mello S, Graesser A, Schuller B, Martin JC, editors. Affective Computing and Intelligent Interaction. Berlin, Heidelberg: Springer; 2011. p. 578–87. (Lecture Notes in Computer Science).
  10. Hameed I, Tan ZH, Thomsen N, Duan X. User Acceptance of Social Robots. In 2016.