Facebook’s fact-checking changes: a Citro explainer

Meta recently announced the discontinuation of its third-party fact-checking program, opting instead for a ‘Community Notes’ system. So what does this actually mean for the average user?
By Allison Tait
You may have seen a lot of discussion in the media and online recently about Mark Zuckerberg’s announcement of changes at Meta (Facebook, Instagram, Threads and more), and a flurry of terms like ‘misinformation’, ‘disinformation’ and ‘media literacy’ being thrown around.
But what does it mean for the average Australian, who’s probably only on Facebook to keep up with friends and family and watch cute dog/duck/baby goat videos?
What exactly is changing at Meta/Facebook?
On 7 January 2025, Mark Zuckerberg announced that the company was ending its third-party (i.e. independent) fact-checking program and ‘moving to a Community Notes model’.
The company will also ‘allow more speech by lifting restrictions on some topics that are part of mainstream discourse’, and will ‘take a more personalised approach to political content’.
What impact will these changes have for Aussies?
According to a statement from the Australian Communications and Media Authority (ACMA), “Meta’s announcement about changes to its fact-checking processes currently only relate to the United States. The ACMA understands on advice from Meta that there is no immediate plan to make changes to the third-party fact-checking program in Australia.”
So nothing changes for Australians?
Yes, and no.
“Meta has changed its global ‘hateful conduct’ policies, so this applies to all people in all countries,” says Dr Tanya Notley, Associate Professor at Western Sydney University, and co-author of a report, published in December 2024, about Online Misinformation in Australia. “People can see how the ‘hateful conduct’ policies changed here – they are significant changes that permit new forms of hateful misinformation to be published.”
In practice, Dr Notley continues, “the change means that Meta is removing many of the safety measures that were designed and implemented to protect women, LGBTQ people, immigrants, and other groups who already experience a much higher rate of violent and hateful attacks online.”
And while only US fact-checkers have been ‘fired’ or removed at this stage, there is no guarantee that the policy won’t stretch to all fact-checkers in the future.
Which has led to a lot of discussion about the spread of mis- and disinformation.
What is misinformation?
ACMA describes misinformation as ‘false, misleading or deceptive information that can cause harm’. Examples of misinformation can include scam advertisements, false information shared on social media, doctored images and videos and made-up news articles.
“Our 2024 [Adult Media Literacy] national survey shows that adult Australians are worried about misinformation,” says Dr Notley. “They encounter it regularly on social media and the vast majority (80%) think the spread of misinformation is an issue that needs to be addressed.”
What is disinformation?
The National Library of Australia gives the following definition of disinformation: ‘deliberately misleading or biased information; manipulated narrative or facts; propaganda’.
What protections are in place to help shield Australians from misleading information?
Australian Code of Practice on Disinformation and Misinformation
“Meta is a signatory to the voluntary Australian Code of Practice on Disinformation and Misinformation, which is administered by DIGI,” states ACMA. “Under that voluntary code, Meta has committed to a range of measures in its latest transparency report including initiatives with third-party fact-checking organisations to inform its processes to combat misinformation and support media literacy.”
The Digital Industry Group Inc (DIGI) is a not-for-profit industry association advocating for the digital industry in Australia.
DIGI Managing Director Sunita Bose explains that the aforementioned code is “a blueprint for best practice for digital platforms addressing the complex challenges of misinformation online and was introduced in response to a request from the government in 2019 that recognised the need for an industry-led approach.”

The voluntary code, developed by DIGI, is signed by 9 major technology companies, including Meta, and is designed to protect Australians from harmful false information online. Other signatories include Adobe, Apple, Google, Microsoft and TikTok.
“It requires signatories to implement policies, reporting mechanisms, and scalable measures such as content labelling and restrictions on fake accounts,” says Bose. “They must also publish transparency reports each year (found here) that provide detail to Australians on their efforts to protect them from mis- and disinformation.”
Online Safety Act
In a recent statement, eSafety Commissioner Julie Inman Grant, also reiterated that, “as an entity operating within Australia, [Meta] is required to comply with Australian law, including the Online Safety Act.
“Under the Act, eSafety is empowered as the national online safety educator and coordinator to protect Australians from online harms including child sexual abuse, terrorism, cyberbullying, image-based abuse and serious adult cyber abuse directed against individuals.”
Find out more about the Online Safety Act here.
How can I tell if information is factual?
One of the best things that we can all do to help stop the spread of misinformation, disinformation and harmful online content is to develop our information and media literacy skills. Because the fact is that a recent study found that a whopping 97% of adults in Australia have ‘limited skills’ to verify information online – and that half of those have ‘no ability’ at all.
[Don’t believe us? Read the full Online Misinformation in Australia report here.]
“Very few adults have received any training to help them quickly and effectively check the veracity of information and quality of sources online,” says Dr Notley. “[One issue is] that information verification skills taught for a pre-digital era are not very helpful. We also found that people often made snap judgements based on cynicism, past experience, their gut feeling or on the ‘look and feel’ of content.”
Older Australians may be more at risk
While the report found there wasn’t a huge difference between age groups in their abilities to verify information, older Australians tended to score slightly lower across the 4 activities provided for the study – 2 from websites and 2 from social media.
“It may be that older people were less familiar with the social media sites and with misinformation techniques on these platforms,” says Dr Notley. “It may also be that older Australians are more likely to have been taught fact-checking techniques that don’t work well on the internet.”
One trait that connected all the age groups was that they didn’t engage in ‘lateral reading’, where you leave the content (website or social media post) to investigate the source or the claims being made.
This is where those information- and media-literacy skills come into play.
What is ‘media literacy’?
The Australian Media Literacy Alliance defines media literacy as the ability to critically engage with media in all aspects of life.
What is ‘information literacy’ – and how is it different from ‘digital literacy’?
A term currently mostly used in education sectors, information literacy is the ability to locate, manage, assess and use information from a variety of sources. Digital literacy is the ability to identify and use technology confidently for a range of life, employment and education purposes.
How do I find trusted sources?
An oft-repeated mantra for checking information online is to find a ‘trusted source’ for fact checking – but how do you know a source is ‘trusted’?
“We all need to be aware of our own biases,” says Dr Notley. “By pausing and reflecting on why we are responding to content in a particular way we can check in on our biases and consider how they may be getting in the way of us discerning what is true, fair and trustworthy.”
She suggests considering the following:
1. Why am I seeing this particular content? Misinformation is increasingly targeted, so think about why it might be in your social media feed.
2. Look for transparency before trusting a source. “If someone is making claims but not providing information about where the claim comes from, then you can’t trust what they are saying because you can’t check their claims,” says Dr Notley.
3. It’s important to know the policies of news organisations. Many news organisations track and monitor any accidental errors they make and are very upfront about them. This kind of transparency is important because it demonstrates a commitment to true and factual reporting.
4. Investigate the expertise of people making claims on social media – are they really qualified to speak on the topic?
Tips for building information and media literacy skills
With more information than ever available at our fingertips, it’s also more important than ever to understand where that information is coming from.
ACMA offers simple tips for spotting misinformation, including checking sources, understanding whether you’re reading ‘opinion’ or ‘fact’, reading the full story (not just the headline) and, most importantly, not hitting that share button if you have any doubts at all.
“Try to cross-check [the information] with other authoritative sources like reputable news outlets, government websites or fact-checking websites,” adds Bose. “Look closely to see if images have been manipulated – and always report questionable content to platforms.”
Where to find help
Be Connected supports people aged 50+ to confidently and safely use the internet and digital technology.
Want to report online misinformation? ACMA has a list of platforms that will link you to the relevant page.
Feature image: iStock/Kenneth Cheung