
The silent threat of AI and what it means for business

David Armstrong: ‘Organisations are now discovering that the integration of AI into their most critical processes can expose them to reputational, legal and operational pitfalls’

Businesses must embrace AI with discipline, transparency and a clear framework for governance (Khanchit Khirisutchalual/Getty Images)

While the promise of AI is real, even innovations that set out with the best of intentions can produce consequences that are both unexpected and harmful.

Few professionals will be unaware of the benefits AI delivers for a business: greater efficiency that allows companies to allocate resources more strategically, free employees from administrative tasks and focus attention on higher-value activities.

In recent years, however, concerns about the weaponisation of AI, particularly generative AI, have intensified. A recent study by Oxford University Press, which surveyed 2,000 young people aged 13 to 18, found that more than half could not distinguish misinformation generated by AI from factual content.

Another 2025 report, by the European Broadcasting Union, revealed that nearly a quarter of respondents now turn to AI assistants as sources of information, yet almost half of the answers those assistants provided contained a significant error. Many were entirely fabricated, a phenomenon referred to as ‘AI hallucination’.

The same study concluded that AI models “routinely misrepresented” news content, regardless of the language, country or platform involved.

What these examples demonstrate is that generative AI has become extraordinarily persuasive. Because it can tailor output to match the user’s tone, world view and language patterns, it does not simply present information; it presents information that ‘sounds right’ to its audience. Whether the content is accurate becomes secondary to its personal relevance, and that is where the danger lies.

The risks are not limited to teenagers on their phones or casual interactions with chatbots. Major organisations are now discovering that the integration of AI into their most critical processes can expose them to reputational, legal and operational pitfalls.

A recent incident involving global business advisory firm Deloitte has brought this point sharply into focus. Commissioned by the Australian Department of Employment and Workplace Relations to assess the government’s Targeted Compliance Framework and its supporting IT system, Deloitte delivered a report in July 2025 that was later found to contain fabricated academic citations, false references and even a quote incorrectly attributed to a Federal Court judgment.

After admitting that generative AI had been used in earlier drafts, the firm agreed to refund part of its AU$440,000 consultancy fee.

The revelation that a global consultancy, with layers of quality control and expert oversight, could publish a report riddled with such fundamental errors is telling. If this can happen within a blue-chip, highly regulated environment, then every business adopting AI must accept that similar risks apply to them.

For businesses across Northern Ireland, the implications are significant. AI is already being used in customer service operations, internal communications, marketing, data analysis and productivity tools. Without proper oversight, it can produce biased, incomplete or entirely incorrect outputs that make their way into business-critical documents.

In a world in which cyber criminals are increasingly exploiting AI to produce targeted, believable misinformation, the security vulnerabilities become ever more pronounced.

David Armstrong, chief executive of the b4b Group

The solution is not to reject AI but to embrace it with discipline, transparency and a clear framework for governance. Organisations should foster a culture of accountability, in which the use of AI is transparent and human judgement always remains central.

A machine may help to draft content or process information, but it is the business that must own the decisions made on the back of that content.

  • David Armstrong is chief executive of the b4b Group