One in two SMEs are adopting AI – but what are the risks? 

The use of artificial intelligence (AI) is booming. Businesses in all industries and sectors are jumping on board, harnessing the tech to drive growth, improve productivity and profitability, enhance customer experience, sharpen their competitive edge, and cut costs.  

Australian businesses are leading the charge. According to Avanade’s AI Readiness Survey, 76% of Australian businesses use AI daily, compared to 57% of businesses worldwide. Some 96% of Australian businesses believe they must transition to an AI-first operating model within a year to be competitive and meet customer standards.    

It’s not only large corporations that are embracing AI – SMEs are also adopting the tech.  

The NAB SME Business Insights report notes that 23% of SMEs have already invested time or money in emerging technology solutions such as AI, machine learning and other forms of automation to help run their businesses. A further 20% have not yet done so but are planning to. The research showed four in 10 hoped to see profitability gains, and six in 10 were looking for productivity increases.  

A survey by Schneider Electric found around one in two SMEs plan ‘substantial’ investment in AI over the next five years to enable company-wide adoption of the technology. Some 48% of SMEs would take an all-in approach to AI, while 45% would take a more gradual, project-by-project approach.  

Around half of businesses plan to implement AI by purchasing off-the-shelf solutions and tailoring them to their specific needs. The survey found the most common forms of AI were generative AI (63%) and application-embedded AI (39%). Quality control, visual inspection, and robotic process automation rounded out the top five.  

Benefits of AI 

NAB found SMEs consider the main benefits of AI technology solutions to be productivity increases and a reduction in administrative tasks. Around 40% believe AI will make their business more profitable while around one-third expect to reap benefits as online queries are dealt with by technology.  

Research from the Commonwealth Bank found SMEs using AI were deriving benefits, with 87% of those surveyed saying AI was delivering savings in costs and time. Some 56% reported that AI investment had driven growth in their business. The most common use of AI was content creation, with 48% of businesses using AI to create and edit content. AI was being used to generate new ideas within 39% of businesses, and 33% were using AI to automate routine activities.  

According to the CSIRO, AI can provide operational benefits to businesses, helping them boost revenue and improve cyber safety and security. The tech can help businesses analyse data more efficiently and strengthen their understanding of markets and consumer preferences. The CSIRO notes Australian companies report earning an average of $500,000 in revenue from customer service bots that enhance the customer experience through around-the-clock support, improved response times and personalised product recommendations. AI can also help businesses anticipate demand for their products and adjust their supply chains accordingly, mitigating the costs of over- or understocking. Automation of administrative processes can also reduce menial tasks, help businesses save time and improve labour productivity. The CSIRO found that, on average, respondents reported time savings of 30% across all AI-related initiatives.   

Risks of AI 

Along with numerous benefits, there are also risks for SMEs in using AI. 

Numerous surveys have found that while the technology is readily being embraced by SMEs (to varying degrees based on industry), there are key issues of concern. 

Data from Statista showed that 87% of people were very or somewhat worried about AI – 71% over AI scams, 69% over deepfakes, 69% over sexual/online abuse, 66% over AI hallucinations (where the technology makes up information), 62% over data privacy, and 60% over bias amplification. 

Businesses are concerned about the potential for AI to introduce new cybersecurity threats, degrade data quality and compromise the privacy of customer information. Other concerns include ethical and environmental issues, intellectual property, IT skills, employee training and job losses. 

However, according to the Schneider Electric survey, the main hurdles for SMEs in adopting AI were concerns over security and data integrity. 

Security   

According to the Absolute Security Cyber Resilience Index 2024, the majority of organisations around the world are underprepared to adopt AI applications and manage the security risks they represent.  

AI threats are perceived as a looming cybersecurity problem. According to Palo Alto, 43% of security professionals predict traditional security controls will not be up to the challenge of detecting AI-powered attacks. 

A study by Darktrace found 60% of IT security experts feel unprepared against AI-augmented cyber threats. Some 89% of experts believe AI threats will have a ‘significant impact’ on their organisation. 

According to a report in Tech News, 75% of cybersecurity professionals have seen an increase in AI-powered cyberattacks over the past year, with 85% attributing it to cybercriminals weaponising AI. Almost half (46%) of cybersecurity professionals anticipate that AI will heighten companies’ vulnerability levels. 

Tech News notes: “Cybercriminals are increasingly weaponising artificial intelligence (AI) to execute more sophisticated, large-scale, and advanced targeted cyberattacks. AI has empowered these attackers to develop adaptive malware that evades detection, craft highly convincing phishing schemes, and automate complex assaults.” 

Of particular concern is the heightened risk of identity-based attacks. According to Vectra AI, when businesses modernise their IT infrastructure with AI technologies and methodologies, they are integrating not just AI and machine learning but also third-party applications. This can make maintaining strict access control to sensitive networks, services, and applications more challenging and increases the risk of identity-based attacks. AI provides new opportunities for cybercriminals to exploit vulnerabilities in identity-related systems to perpetrate ransomware, scams and business email compromise (BEC). 

Cloud security is also of concern. According to Palo Alto’s State of Cloud-Native Security Report 2024, Australia is a world leader in cloud adoption and investment, with 26% of organisations having fully cloud-native environments. Vectra AI notes that the biggest issue in the cloud is credential theft through repositories like GitHub or BitBucket, whether a developer mistakenly uploads credentials or the cloud’s complexity leads to misconfigurations that can be exploited. The Palo Alto survey notes legacy applications are a concern, along with there being too many cloud tools – organisations on average are using “16 tools from 14 different vendors”. 

Data integrity 

One of the key risks associated with implementing AI is the vulnerability of sensitive data. With AI systems relying heavily on vast datasets, the potential for data breaches, unauthorised access, and misuse poses a significant threat. 

Some platforms store all conversations on their servers, creating a risk that sensitive data could be exposed if the servers are hacked, or potentially used to generate answers for other users.  

Another problem an SME could face when using AI concerns its accuracy. AI is only as good as the data that powers it. There is a risk that the sources of the information being accessed are incorrect or biased. For example, chatbots are trained on books, websites and articles, and their knowledge is limited to information that was available when they were trained, as they are unable to access new information. As a result, some of the information and answers a bot provides could be low quality, outdated, or contain errors.  

Chatbots are also known to ‘hallucinate’ – that is, generate fabricated information in response to a user’s prompt, but present it as if it’s factual and correct. The chatbots are trained on enormous amounts of data, which is how they learn to recognise patterns and connections between words and topics. They use this knowledge to interpret prompts and generate new content, such as text or photos. However, as the chatbots are essentially predicting the word that is most likely to come next in a sentence, they can sometimes generate outputs that sound correct, but aren’t actually true. According to some researchers, chatbots hallucinate as much as 27% of the time, with factual errors present in 46% of their responses. 

A Representative Study on Human Detection of Artificially Generated Media Across Countries found that AI-generated content and content created by real people are almost impossible to tell apart. It notes that artificially generated content can be misused in many ways. The Swiss Federal Institute of Technology Lausanne also found that ChatGPT’s current iteration is almost twice as persuasive as the average human being. This raises ethical and reputational questions for SMEs using the technology.  

Mitigating risks 

Businesses can face significant consequences for mismanaging AI. These range from regulatory infractions and ethical dilemmas to the cybersecurity vulnerabilities and privacy concerns already mentioned.  

Given these risks, it is essential that business owners and managers establish comprehensive risk management strategies to effectively mitigate these threats. 

Considerations should include: 

  • Understanding relevant data protection legislation and requirements.  
  • Implementing a strong AI use policy. 
  • Modernising IT infrastructure. 
  • Understanding how any data is being used and stored. 
  • Ensuring data storage centres are able to handle the increased strain. 
  • Protecting AI systems from cyberthreats (as far as possible). 
  • Investing in security controls. 
  • Focussing on basic tactics, techniques and procedures (TTPs) that can help prevent and remediate security incidents. 
  • Safeguarding data privacy. 
  • Guaranteeing any AI tool used is ethical and accurate, rigorously tested, and subject to human oversight. 
  • Ensuring employees are well-trained and equipped to manage AI tools. 

AI has implications for cybersecurity, regulatory risk, and misinformation management, and there are insurance products expressly written to cover claims or losses arising from AI use and AI error. It is therefore important to work with your EBM Account Manager to ensure that the right policies are included in your insurance program.