A Risk to Australia’s Cybersecurity Landscape


A recent study by Western Sydney University, Adult Media Literacy in 2024, revealed worryingly low levels of media literacy among Australians, a concern made more pressing by the deepfake capabilities posed by newer AI technologies.

This deficiency poses an IT security risk, given that human error remains the leading cause of security breaches. As disinformation and deepfakes become increasingly sophisticated, the need for a cohesive national response is more urgent than ever, the report noted.

Because AI can produce highly convincing disinformation, the risk of human error becomes magnified. Individuals who are not media literate are more likely to fall prey to such schemes, potentially compromising sensitive information or systems.

The growing threat of disinformation and deepfakes

While AI offers undeniable benefits in the generation and distribution of information, it also presents new challenges, including disinformation and deepfakes, which require high levels of media literacy across the nation to mitigate.

Tanya Notley, an associate professor at Western Sydney University who was involved in the Adult Media Literacy report, explained that AI introduces some particular complexities to media literacy.

“It’s really just getting harder and harder to identify where AI has been used,” she told TechRepublic.

To overcome these challenges, individuals must understand how to verify the information they see and how to tell the difference between a quality source and one likely to post deepfakes.

Unfortunately, about 1 in 3 Australians (34%) report having “low confidence” in their media literacy. Education is a factor, as just 1 in 4 (25%) Australians with a low level of education reported having confidence in verifying information they find online.

Why media literacy matters to cyber security

The connection between media literacy and cyber security might not be immediately apparent, but it is critical. Recent research from Proofpoint found that 74% of CISOs consider human error to be the “most significant” vulnerability in organisations.

Low media literacy exacerbates this issue. When individuals cannot effectively assess the credibility of information, they become more susceptible to common cyber security threats, including phishing scams, social engineering, and other forms of manipulation that directly lead to security breaches.

An already infamous example of this occurred in May, when cybercriminals successfully used a deepfake to impersonate the CFO of an engineering company, Arup, to convince an employee to transfer $25 million to a series of Hong Kong bank accounts.

The role of media literacy in national security

As Notley pointed out, improving media literacy is not just a matter of education. It is a national security imperative — particularly in Australia, a nation where there is already a cyber security skills shortage.

“Focusing on one thing, which many people have, such as regulation, is inadequate,” she said. “We actually have to have a multi-pronged approach, and media literacy does a number of different things. One of which is to increase people’s knowledge about how generative AI is being used and how to think critically and ask questions about that.”

According to Notley, this multi-pronged approach should include:

  • Media literacy education: Educational institutions and community organisations should implement robust media literacy programs that equip individuals with the skills to critically evaluate digital content. This education should cover not only traditional media but also the nuances of AI-generated content.
  • Regulation and policy: Governments must develop and enforce regulations that hold digital platforms accountable for the content they host. This includes mandating transparency about AI-generated content and ensuring that platforms take proactive measures to prevent the spread of disinformation.
  • Public awareness campaigns: National campaigns are needed to raise awareness about the risks associated with low media literacy and the importance of being critical consumers of information. These campaigns should be designed to reach all demographics, including those who are less likely to be digitally literate.
  • Industry collaboration: The IT industry plays a crucial role in enhancing media literacy. By partnering with organisations such as the Australian Media Literacy Alliance, tech companies can contribute to the development of tools and resources that help users identify and resist disinformation.
  • Training and education: Just as first aid and workplace safety drills are considered essential, with regular updates to ensure that staff and the broader organisation are in compliance, media literacy should become a mandatory part of employee training and regularly updated as the landscape changes.

How the IT industry can support media literacy

The IT industry has a unique responsibility to leverage media literacy as a core component of cybersecurity. By developing tools that can detect and flag AI-generated content, tech companies can help users navigate the digital landscape more safely.

And as the Proofpoint research noted, while CISOs are concerned about the risk of human error, they are also bullish on the ability of AI-powered solutions and other technologies to mitigate human-centric risks — a reminder that technology can help solve the very problems technology creates.

However, it is also important to build a blame-free culture. One of the biggest reasons human error remains such a risk is that people are often afraid to speak up for fear of punishment, or even of losing their jobs.

Ultimately, one of the strongest defences against misinformation is the free and confident exchange of information. The CISO and IT team should actively encourage people to speak up, flag content that concerns them, and, if they are worried they have fallen for a deepfake, report it immediately.
