Generative AI in Adult Social Care: Defining what ‘responsible use’ looks like

Image from Adobe Stock by vegefox.com

by Dr Caroline Green

On the 1st of February 2024, thirty organisations and representatives of people in adult social care met at Reuben College, University of Oxford, to address the urgent need to define what it means to ‘responsibly use’ generative AI in adult social care. ‘Adult social care’ refers to activities providing people who have disabilities or mental illness, or who are older, with the help and support they may need to stay independent, well and safe[1]. This includes assistance with activities of daily living, such as dressing, personal hygiene and eating. In England, around 1.6 million people formally work in adult social care, many supporting adults in various settings such as care homes and people’s own homes[2]. There are also around 5.7 million unpaid carers across the United Kingdom, caring for relatives, friends or neighbours[3].

Generative AI tools, such as GPT-powered chatbots, have many potential and actual use cases in care settings, including care homes for older or disabled adults. While this quickly developing type of AI may benefit people in adult social care, there is a need to discover and understand the distinct ethical risks and implications relating to the purpose of social care provision. The process of defining, disseminating knowledge about and implementing the ‘responsible use of generative AI’ in adult social care must include all groups of people in social care: people drawing on care, family and professional carers, care provider organisations, policy makers and regulators, local authorities, and representative and advocacy groups, amongst others.

The first roundtable, held at Reuben College and organised by us at the Institute for Ethics in AI (in particular Dr Caroline Green) together with Reuben College, Katie Thorn from Digital Care Hub and Daniel Casson at Casson Consulting, was a first step towards defining this process. The ‘Oxford statement on generative AI in adult social care’ is a result of the roundtable, highlighting our shared commitment to driving the process forward over the next six months. Following on from the statement, we will host more roundtables and deliberative spaces for care workers, people drawing on care and family carers, tech providers and others, working towards creating guidelines to disseminate widely.

The statement is available here. You are invited to join us and endorse the statement using this link.

For more information contact Dr Caroline Green at the Institute for Ethics in AI.

 

[1] The King’s Fund: https://www.kingsfund.org.uk/insight-and-analysis/data-and-charts/key-facts-figures-adult-social-care#:~:text=What%20is%20adult%20social%20care,and%20stay%20well%20and%20safe.

[2] Skills for Care: https://www.skillsforcare.org.uk/Adult-Social-Care-Workforce-Data/Workforce-intelligence/publications/national-information/The-state-of-the-adult-social-care-sector-and-workforce-in-England.aspx

[3] Carers UK: https://www.carersuk.org/policy-and-research/key-facts-and-figures/

Suggested citation: Green, C., ‘Generative AI in Adult Social Care: Defining What ‘Responsible Use’ Looks Like’, AI Ethics at Oxford Blog (19th February 2024) (available at: https://www.oxford-aiethics.ox.ac.uk/blog/generative-ai-adult-social-care-defining-what-responsible-use-looks).