Using AI to create financial services content

2023 was the year when AI came into its own. From image processing to content creation, it has empowered businesses across the world to produce previously unimaginable results, often with limited specialist skills.

But what about financial services organisations, with their specialised products, niche audiences and exacting compliance standards? How can they put AI to use?

Fundamentally, there are two key barriers to using AI for financial services content: quality and data security. Here’s why they’re important, and how to overcome them.

Barrier 1: Quality

To be effective, financial services organisations need content that is:

  • accurate
  • up to date
  • compliant
  • persuasive, and
  • unique, speaking with the brand’s distinctive voice.

Currently, AI performs poorly on all of these fronts. Tools like Bard and ChatGPT are notorious for producing “hallucinations” – incorrect or misleading content that is sometimes embarrassingly wrong [1]. Recent examples include the New York lawyer who used ChatGPT for legal research, then submitted a court brief citing six entirely fictional cases [2]. Obviously, for financial services companies, errors like these can have significant legal and reputational costs.

Currency is also an issue. AI tools need to be trained on enormous libraries of data, and that takes time. When it first released ChatGPT, OpenAI stated that the tool only had knowledge of events before September 2021 – and while it has since been updated, it can be unclear how current its knowledge base actually is [3].

Both of these issues point to a bigger problem: compliance. Content that is inaccurate or out of date can all too easily breach financial services marketing and disclosure rules, with all the regulatory consequences that follow.

Then there is the more general issue of readability. AI creates a “race to the middle”, helping relatively inexperienced writers and designers produce content of acceptable but limited quality, while potentially dumbing down the work of more skilled creators. That’s a particular problem for financial services organisations seeking to cut through with a persuasive, engaging and distinctive brand voice. It’s also an issue that appears to be intensifying, as the universe of content used to train AI tools starts to contain more and more AI-generated content – the bland leading the bland.

Together, these issues point to a simple conclusion: while AI can be a useful tool for rapidly producing first drafts of certain kinds of content (for example, evergreen educational content on non-specialist topics), those drafts will need intensive polishing to meet the standards financial services organisations expect. As a result, organisations are likely to need support to:

  • Design effective prompts to elicit useful responses from AI tools – essentially, asking the right questions in the right way (something of an art in itself).
  • Do in-depth fact and compliance checks.
  • Pull in recent data and information, and ensure it’s properly integrated into the narrative of each piece.
  • Uplift draft content so it is persuasive and engaging, while reflecting the unique brand personality of each business.

Barrier 2: Data security

This is a potentially more serious issue for financial services organisations, particularly where they seek to overcome the quality issue by training tools on existing IP, or where content needs to incorporate proprietary or commercially sensitive information. In particular, organisations need to avoid their data being:

  • Uploaded to an externally operated AI tool for processing in an environment they do not control, then used to train a tool that is accessible to other users, potentially including competitors.
  • Exposed to interception in environments that may not meet Australian regulatory requirements for data security, or their own IT standards.
  • Subject to offshore regulatory regimes, such as the US PATRIOT Act, which can require database operators to release or share data.

While there are technical solutions, such as stand-alone enterprise installations, these need to be carefully designed and implemented to avoid unexpected exposures. The key is to ask questions and read the fine print carefully, rather than simply accepting headline assurances – some tools that claim to keep data secure and segregated may still share it at the back end unless carefully configured. In particular, a number of commercially available AI tools developed for specific uses may be driven by an underlying engine like ChatGPT, which may store and use inputs as part of normal processing, even if the tool itself comes with data protections.

Getting the right support

We can help you harness the potential of AI more effectively with advice and editorial support to transform AI-generated first drafts into customer-ready content.

Find out more about how we can help you put AI to work