Australia’s U16 Media ban (crosspost from Substack)

This is the first in a series discussing the Australian legislation banning people under 16 from using social media. I’m writing from the perspective of a longstanding user of new media, and also as someone with personal experience of dealing (not very successfully) with problems of under-16 screen addiction. On the other hand, I’m not a technical expert, so I may get some details wrong. I’ll be happy to accept correction on these points.

1. What is the ban and how (if at all) will it work?

The legislation was rushed through Parliament with little discussion, so not much has been spelt out about its scope or how the ban will be implemented.

The legislation requires specified sites to adopt some form of age verification – yet to be spelt out. It is explicitly said to apply to

  • Instagram
  • TikTok
  • Snapchat
  • Facebook
  • Reddit
  • X (formerly Twitter)

but would presumably also apply to Bluesky and Threads, and perhaps the Fediverse. On the other hand messaging services are explicitly exempt – it would be hard to restrict them without also banning SMS. Also, and unlike the US, there are no restrictions on adult sites.

Platforms will have the choice of introducing an Australia-specific age verification scheme, blocking access for all Australian users, or ignoring the ban and facing the consequences. US experience with state-level age verification rules for adult sites (not restricted under the Australian legislation) suggests that adult sites have mostly chosen the second or third option. Aylo, the operator of Pornhub and other well-known sites, has blocked all access from states with age verification rules. Other sites have simply ignored the rules.

It seems unlikely that the Australian government will have much success in prosecuting non-compliant sites based overseas. So the primary enforcement mechanism will presumably be to force Australian ISPs to block access to these sites.
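In Australia, ISP-level blocking of overseas sites has typically been done at the DNS level: the ISP’s resolvers simply refuse to answer queries for blocked names, so a connection with default settings can’t find the site, though anyone who switches to a third-party DNS server still can. The Python sketch below is purely illustrative – the domain names are invented and nothing here reflects any announced implementation – but it shows how shallow this kind of block is.

    # Minimal sketch (hypothetical domain list) of DNS-level blocking: the ISP's
    # resolver pretends that blocked names do not exist. Switching to another DNS
    # server, or using a VPN, bypasses it entirely.

    BLOCKED_DOMAINS = {"example-social-network.com", "another-platform.example"}  # invented

    def resolve(hostname):
        """Return an IP address string, or None to simulate a blocked (NXDOMAIN) answer."""
        labels = hostname.lower().split(".")
        registrable = ".".join(labels[-2:])  # crude two-label check; real resolvers differ
        if registrable in BLOCKED_DOMAINS:
            return None  # blocked: the resolver refuses to answer
        return "203.0.113.10"  # placeholder standing in for a real upstream lookup

    if __name__ == "__main__":
        for name in ("www.example-social-network.com", "news.example.org"):
            print(name, "->", resolve(name))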

In the absence of countermeasures, bans of this kind can easily be evaded using Virtual Private Networks (VPNs). These make it impossible for ISPs to determine which sites are being visited, but typically do not conceal the fact that a VPN is being used. The most effective countermeasure would probably be a requirement for ISPs to block VPNs altogether. Such a requirement would severely compromise privacy for all users. As usual in such cases, there are workarounds that would require even more intrusive countermeasures.
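To see why the VPN itself remains visible, note that the common VPN protocols use well-known ports and distinctive handshakes, which an ISP can match against its traffic records. The sketch below (with invented flow records) shows the crudest, port-based version of this kind of detection. In practice ISPs would need deep packet inspection, and a VPN that tunnels over TCP port 443 so that it looks like ordinary HTTPS traffic is exactly the sort of workaround that would demand even more intrusive countermeasures.

    # Toy sketch of port-based VPN detection. The port numbers are the protocols'
    # published defaults; the flow records are invented for illustration. Real
    # detection is far more involved (deep packet inspection, endpoint lists, etc.).

    COMMON_VPN_PORTS = {
        1194: "OpenVPN (default)",
        51820: "WireGuard (default)",
        500: "IKEv2/IPsec key exchange",
        4500: "IPsec NAT traversal",
    }

    def classify_flow(dst_port, protocol):
        """Very crude classification of a traffic flow by destination port and protocol."""
        label = COMMON_VPN_PORTS.get(dst_port)
        if label and protocol == "udp":
            return f"likely VPN ({label})"
        return "unclassified"

    if __name__ == "__main__":
        sample_flows = [(51820, "udp"), (443, "tcp"), (1194, "udp")]  # invented examples
        for port, proto in sample_flows:
            print(f"{port}/{proto}: {classify_flow(port, proto)}")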

Summing up, a ban on U16 access to social media sites can be made at least partially effective. However, it will have significant impacts on all Australian users, including loss of access to some social media, and restrictions on privacy tools such as VPNs.

More discussion on my Substack

2 thoughts on “Australia’s U16 Media ban (crosspost from Substack)”

  1. MartinK

    I have no idea if the government has thought of this but I think something like this could work for them to handle the VPN issues. There will of course still be ways to ‘route around’ this but it is a lot better than nothing.

    • Require ISPs to block non-conforming sites directly.
    • Require ISPs to block all except approved VPNs.
    • Require VPNs wanting approval to block the non-conforming sites.

    The problem with approved VPNs is that there are thousands of VPNs. Many businesses run their own so that employees can work remotely, as one example. In this case, require the VPN software companies to block non-conforming sites and require ISPs to block non-approved VPN software instead of VPN services. OK, I’m not sure how easy it would be for ISPs to reliably detect VPN software, but it is made easier by only needing to detect the server-side VPN software and, again, they should be able to do something that is better than nothing.

    The cynic in me says that the government will, instead, try to force monitoring software to be installed on all PCs, perhaps making it a mandatory prerequisite for using MyGov, the Services Australia website, and the various high school online services.

  2. “The next step would be to end the algorithmic promotion of toxic material. The remedy here seems simple enough. If a network platform selects material to promote, its owners should be considered as the publishers of that material.”

    That works for toxic recommendations, but not for the wider problem of making media fit for democracy. Traditional media are run by publishers, and the sector is almost as broken as the new forms.

    Let’s concentrate on newspapers, radio and TV. (Nobody is worried about university presses or Trotskyite street papers.) None of them have targeted recommendations. They ought to provide reliable news, in a balanced selection, and diverse opinion from qualified people. Current media only meet the reliability test, as it is enforced by laws on defamation. Even Fox News rarely resorts to fabrication, and you can trust the reporting in the WSJ even if the opinion page is full-on wingnut. They do a poor job on news selection and diversity of opinion.

    I suggest a utopian approach could at least reframe the debate. What would a healthy media landscape look like in a real deliberative democracy? How do you ensure expert input without technocracy? How can excellent media be paid for?

    (cross-commented on Substack)
