25. 09. 2024

Ghost in the ledger: Generative AI can make mistakes

Speaking at the Festival of Accounting & Bookkeeping earlier this year, Ian Pay, head of data analytics and tech at ICAEW, highlighted the challenges of trusting AI-generated content, particularly in customer service and compliance contexts, and outlined how to overcome them. This feature forms part of our special editorial report on the subject, in association with Sage.

As a profession, accountants and bookkeepers are well aware of the importance of ethics in accountancy, and the same principles of transparency, accountability and ethical conduct should apply to their use of artificial intelligence (AI), not least because tools such as ChatGPT can be limited by bugs and biases.

Ian Pay, head of data analytics and tech at ICAEW, told the Festival of Accounting & Bookkeeping earlier this year: “AI is built by humans and humans make mistakes.”

He went on to explain that ChatGPT is an example of a large language model (LLM) generative AI tool that uses probabilistic models.

“These tools are designed to take data they’ve been fed and generate the next word or the pixel based on what is statistically likely to be the next word or the next pixel, and they’ve been programmed to do this by humans.

“Unfortunately, humans are prone to making mistakes and historically we have not got a great track record in terms of bias.”
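To make the mechanics concrete, here is a toy sketch of the next-token generation Pay describes: the model assigns a probability to each candidate token and the generator samples from that distribution. The vocabulary and probabilities below are invented purely for illustration; a real LLM scores tens of thousands of tokens using learned weights.

```python
import random

# Toy next-token probabilities (invented for illustration).
# A real LLM derives these from learned weights over a vocabulary
# of tens of thousands of tokens.
next_token_probs = {
    "debit": 0.45,   # statistically the most likely continuation
    "credit": 0.30,
    "ledger": 0.15,
    "banana": 0.10,  # unlikely, but never impossible
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Because generation is probabilistic, the same prompt can produce
# different outputs on different runs, including plausible-sounding
# but wrong ones: this is what a "hallucination" is.
print(sample_next_token(next_token_probs))
```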

While Pay encouraged accountants to use AI to support their work, he stressed the need for proper human oversight of any results to catch mistakes, bugs or “hallucinations”, the term given to incorrect or misleading results generated by AI models.

“Ultimately, the way that you build trust in AI is by starting from a position of not trusting it,” said Pay. “If you start there, if you get humans involved, you get to the point where you’re confident that the output is consistent and reliable.”
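One simple way to put “starting from a position of not trusting it” into practice is a review gate: AI output is held as a draft until a named human has checked and approved it. The sketch below is a minimal illustration of that pattern; all class, field and function names are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Minimal human-in-the-loop review gate (all names illustrative).
@dataclass
class AIDraft:
    content: str
    approved: bool = False
    reviewer: str | None = None
    reviewed_at: datetime | None = None

def approve(draft: AIDraft, reviewer: str) -> None:
    """Record that a named human has verified the AI-generated content."""
    draft.approved = True
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)

def publish(draft: AIDraft) -> str:
    """Refuse to release anything a human has not signed off."""
    if not draft.approved:
        raise PermissionError("AI output has not been human-reviewed")
    return draft.content

draft = AIDraft(content="Suggested VAT treatment for client X...")
# Calling publish(draft) here would raise PermissionError:
# the output starts untrusted.
approve(draft, reviewer="j.smith")
print(publish(draft))  # released only after human sign-off
```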

AI tools: What should accountants be aware of?

Pay also outlined several key points accounting professionals should bear in mind when using AI tools, particularly where sensitive data is involved:

Have firm- or business-wide policies in place to set out what AI should (and shouldn’t) be used for. These can be based on a firm’s tolerance for risk, but should also take account of relevant data regulations and privacy rules.

If you decide to trial or use a particular AI tool, provide training for all relevant staff. Specific training on AI tools is usually available from the vendor, while more general training on areas such as prompt engineering is now offered by several accountancy bodies.

Good-quality prompts give better outputs. Give prompts and questions context and boundaries, ask the same question in multiple ways, and request citations and references to improve accuracy (see the prompting sketch after this list).

When prompting natural language AI tools, try to ask questions in a neutral way. AI will often agree with you because it is predisposed to provide a good experience for the user, so avoid leading questions.

Be transparent about what you’re using AI for. If you’re using an AI chatbot, don’t pretend to clients that it’s a real person. Make it easy for them to get through to a real person if they want to, and be aware that you could be liable if an AI chatbot on your site gets something wrong.

Maintain transparency and an audit trail. When using AI for bookkeeping and accounting tasks, make sure you can see exactly what the AI has done and can identify any mistakes (a minimal logging sketch follows below).
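On the prompting point above, the sketch below shows one way to structure a prompt with context, boundaries and a request for sources, using the OpenAI Python client purely as an example. The model name, wording and choice of client are assumptions for illustration; the same structure carries over to any natural language AI tool.

```python
from openai import OpenAI  # pip install openai; any LLM client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Context and boundaries go in the system message; the user message
# asks a neutral (non-leading) question and requests citations.
# Model name and wording are illustrative assumptions, not a recommendation.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a UK accounting firm. "      # context
                "Answer only questions about UK VAT rules. "    # boundary
                "If you are not certain, say so explicitly. "   # boundary
                "Cite the specific HMRC guidance you rely on."  # citations
            ),
        },
        {
            "role": "user",
            # Neutral phrasing: asks what the treatment is, rather than
            # a leading "Isn't this zero-rated?" that invites agreement.
            "content": "What is the VAT treatment of client entertainment costs?",
        },
    ],
)
print(response.choices[0].message.content)
```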
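And on the audit trail point, a minimal sketch of what that might look like in practice: every AI action is written to an append-only log recording what the tool did, when, and which human reviewed it, so any mistake can be traced back to its source. All names and values are invented for illustration.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit trail for AI-assisted bookkeeping (illustrative only).
# Every AI action is appended to a log file so a human can see exactly
# what the tool did and trace any mistakes back to their source.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_action(tool: str, action: str, detail: dict, reviewer: str) -> None:
    """Append one structured, timestamped record per AI action."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,             # which AI product performed the action
        "action": action,         # what it did
        "detail": detail,         # the data it touched
        "reviewed_by": reviewer,  # the human accountable for checking it
    }
    logging.info(json.dumps(record))

log_ai_action(
    tool="example-bookkeeping-ai",
    action="categorised_transaction",
    detail={"txn_id": "TXN-1042", "category": "Office supplies", "amount": 42.50},
    reviewer="j.smith",
)
```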

This article is sourced from the following link:

https://www.accountingweb.co.uk/tech/accounting-software/ghost-in-the-ledger-generative-ai-can-make-mistakes