A conversation around LLM complexities
In this brief conversation, Stuart Cranney, CV's Director of Innovation, sheds light on some of the complexities to be considered when thinking through the potential risks of using LLMs.
At CV, we are actively exploring the potential of LLMs as a creative and conversational tool. However, we are also acutely aware of the risks involved in deploying these tools in a ministry context. We believe it is crucial to balance excitement about the opportunities these tools present with the prudence their responsible use demands, carefully weighing the potential benefits against the inherent risks.
While we recognise that it is not yet possible to completely mitigate all of these risks, we are committed to exploring these opportunities responsibly and proactively.
Below are some of the risks we've encountered in our own work in evangelism, along with the initial steps we've taken to mitigate them.
One of the major appeals of using a Large Language Model (LLM) is its ability to rapidly generate impressive content for a variety of purposes. However, there is a significant risk: how do you ensure the content produced is accurate and aligns with theological truth?
From time to time, LLMs are known to hallucinate: that is, to generate content that is factually incorrect, misleading, or entirely fabricated.
As a consequence, the potential for spreading misinformation or even ideas that may be considered unbiblical is a concern for anyone using LLMs – but the stakes are particularly high for ministries dedicated to faithfully sharing the gospel. Human intervention is often required to establish with certainty whether the output from an LLM is fit for purpose.
LLMs are trained on vast amounts of data, which can include biased or prejudiced content. Consequently, these models may inadvertently produce outputs that reflect societal biases or inappropriate language. In a ministry or pastoral context, this could result in problematic statements or insensitive remarks that could alienate or offend.
LLMs like ChatGPT can engage in conversations that feel remarkably human, which might lead individuals to share personal or sensitive information. However, information shared with these systems may be stored, logged, or used in ways that neither you nor the individual can control, which creates significant privacy risks. If your commitment to stewardship extends to wanting to protect the personal or sensitive information of individuals you minister to, you may need to consider platforms and solutions that allow you to preserve data privacy.
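One practical step, if messages are sent to a third-party model, is to strip obvious personal details before they ever leave your own systems. The short Python sketch below illustrates the idea; the patterns and function names are our own illustrative assumptions rather than a description of any specific tool, and a simple pattern check like this is a starting point, not a complete privacy solution.

```python
import re

# Illustrative patterns only - personal data takes many more forms than this.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(message: str) -> str:
    """Replace obvious personal identifiers with placeholders before the
    message is passed to an external LLM API."""
    redacted = message
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{label} removed]", redacted)
    return redacted

print(redact_pii("You can reach me on +44 7700 900123 or sam@example.com"))
# -> "You can reach me on [phone removed] or [email removed]"
```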
Although LLMs like ChatGPT can deliver quick and efficient responses in the context of chat-like conversations, over-reliance on AI for this kind of communication risks eroding the human touch that is often vital in ministry work, especially in evangelism. Pastoral care, counselling, and spiritual guidance are inherently personal and relational, and require the empathy and discernment that only a human can provide. Moreover, we believe that we as the church have been called to play a role that cannot – and should not – always be replicated by machine interaction. AI supports the work we've been called to, but should not replace it.
In the context of a chat conversation, you may be able to include automations that recognise the intent of a seeker's message – but the human engagement and pastoral care described above are all the more important where there is a serious safeguarding concern and vulnerable people need to be protected from harm. LLMs can be trained to provide a degree of supportive feedback to somebody at risk of harming themselves or someone else, but they can never take the place of a real human in a high-risk situation.
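To make that concrete, the sketch below shows one way an automated chat flow might flag a potentially high-risk message and hand it straight to a person instead of replying automatically. The keyword list, function names and routing logic are purely illustrative assumptions – not CV's actual implementation – and a real safeguarding process needs far more care than a simple keyword check.

```python
# Illustrative phrases only - a real system would rely on a properly trained
# classifier and clinically informed safeguarding guidance, not a word list.
HIGH_RISK_PHRASES = ["hurt myself", "end my life", "kill myself", "suicide"]

def is_high_risk(message: str) -> bool:
    """Very rough check for messages that may indicate a risk of harm."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in HIGH_RISK_PHRASES)

def notify_human_team(message: str) -> None:
    """Placeholder: in practice this would alert an on-call team member
    through whatever channel your ministry already uses."""
    print("ALERT: a message needs immediate human attention.")

def route_message(message: str) -> str:
    """Suppress the automated reply and escalate whenever risk is detected."""
    if is_high_risk(message):
        notify_human_team(message)
        return "escalated_to_human"
    return "handled_by_automation"
```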
Some of the risks around inaccuracy, hallucination and bias can be mitigated by committing to a manual review of AI-generated content before it is published or shared. This is particularly relevant in cases where LLMs are used for creative content generation (as opposed to larger-scale programmatic, conversational implementations).
Manual review may also be necessary in cases where sensitive theological or societal issues are present in the subject matter.
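As a simple illustration of what such a review step could look like, the sketch below gates AI-generated drafts behind explicit human approval before anything can be published. The class and function names are our own illustrative assumptions; the point is simply that nothing the model produces reaches an audience without a person signing it off.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An AI-generated piece of content awaiting human review."""
    text: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def review(draft: Draft, reviewer: str, approve: bool, note: str = "") -> Draft:
    """Record a human reviewer's decision on a generated draft."""
    if note:
        draft.reviewer_notes.append(f"{reviewer}: {note}")
    draft.approved = approve
    return draft

def publish(draft: Draft) -> None:
    """Refuse to publish anything a person has not explicitly approved."""
    if not draft.approved:
        raise ValueError("This draft has not been approved by a human reviewer.")
    print("Publishing:", draft.text)
```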
Against the backdrop of its own work in AI and other areas of emerging tech over the last few years, CV recognised the need to convene a group dedicated to carefully considering the ethical implications of these technologies. This led to the establishment of our Tech Ethics Advisory Board, a forum where we can thoughtfully navigate the challenges and opportunities that come with utilising an array of technologies in our ministry.
This board provides a dedicated space to thoughtfully consider some of the most important questions around matters such as AI, and it offers input and advice to our senior leadership, helping to ensure that our approach aligns with our values and mission.
While these types of advisory groups may be associated with larger organisations, there is nothing to prevent even smaller ministries or churches from establishing their own, whether formal or informal, to assist in thinking through these matters. This allows for informed, proactive decisions, ultimately fostering a responsible and impactful use of technologies such as AI in your ministry.
Like others, we have found it helpful to convene a small group that consists of both independent individuals and senior leaders, with these broad areas of speciality or interest:
A combination of independent and in-house advisors roughly covering these areas of interest is a helpful starting point, and at the very least provides the basis for conversations that could reveal how to move forward.
A number of tools are emerging that are aimed at mitigating some of the inherent risks associated with these technologies.
While none at present represent a one-stop fail-safe that eliminates all risk, several interesting solutions address a broad range of challenges.
CV has been experimenting with a specific tool that covers, amongst other things:
If you want to learn more, check out this other article: