Twitter and Twitch added to list of those concerned with Australia’s Online Safety Bill

Twitter and live streaming service Twitch have joined the mounting list of service providers, researchers, and civil liberties groups that take issue with Australia’s pending Online Safety Bill.

The Bill, labelled "rushed" in various ways by many providing submissions to the committee now probing its contents, contains six key priority areas:

- A cyberbullying scheme to remove material that is harmful to children;
- An adult cyber abuse scheme to remove material that seriously harms adults;
- An image-based abuse scheme to remove intimate images that have been shared without consent;
- Basic online safety expectations for the eSafety Commissioner to hold services accountable;
- An online content scheme for the removal of "harmful" material through take-down powers; and
- An abhorrent violent material blocking scheme to block websites hosting abhorrent violent material.

Of concern to both Twitter and Twitch is the absence of due regard to different types of business models and content types, specifically around the power given to the relevant minister to determine basic online safety expectations for social media services, relevant electronic services, and designated internet services.

“In order to continue to foster digital growth and innovation in the Australian economy, and to ensure reasonable and fair competition, it is critically important to avoid placing requirements across the digital ecosystem that only large, mature companies can reasonably comply with,” Twitter said [PDF].

Likewise, Twitch believes it is important to consider a sufficiently flexible approach that gives due regard to different types of business models and content types.

“As evidenced by Australia’s own ongoing content classification review, classification is difficult and fluid,” it wrote [PDF].

“Twitch is primarily focused on live, user-generated content, which is not submitted for classification. 

“It is our experience that an enforcement approach based on comprehensive Community Guidelines is most effective for such diverse, interactive, and ephemeral content.”

Twitter also took issue with the shortening of takedown times from 48 hours to 24 hours.

It said that given the vast range of content covered under the Bill, there will often be factors that necessitate a longer review period.

"The shortened time frame will make it difficult to accommodate procedural checks on possible errors in reports, the removal of legitimate speech, and providing necessary user notices," it said, adding that if the intention is to protect users, the government should recognise this.

Pointing to the eSafety Commissioner's comment that, in administering the current content schemes, her office already sees prompt removal from online service providers when a report is issued, Twitter questioned why it is necessary to further reduce and codify the turnaround time from 48 to 24 hours.

“As currently drafted, the Bill essentially confers quasi-judicial and law enforcement powers to the eSafety Commissioner without any accompanying guidelines or guardrails around which cases would constitute grounds for the Commissioner to exercise these powers other than the very broad ‘serious harm’ definition,” Twitter noted.

“Thus, the expansion of the eSafety Commissioner’s powers that are currently proposed under the Bill should be coupled with concomitant levels of oversight.”

Also taking issue with the sweeping powers the eSafety Commissioner is set to receive, Twitch said the Bill must be proportionate in the types of content for which notice non-compliance triggers upstream disruption.

“The app and link deletion powers are appropriately reserved for issues relating to class 1 content. This same proportionate threshold should be replicated in the Commissioner’s power to apply for a Federal Court order, which currently applies to the entire online content scheme (including class 2),” Twitch explained.

“The most substantial powers should be reserved for the worst content and limited to systemic non-compliance with class 1 notices.

“Regardless of what threshold is selected, any scheme that justifies mandating the complete removal of a service on the basis of its non-compliance with notices should also take considerable steps to establish confidence that the service is demonstrating actual noncompliance, before proceeding to upstream disruption powers.”

Consultation on the draft Bill received 370 submissions, according to Minister Paul Fletcher, but the department has only just begun making them public.

In the first batch of submissions, hidden among the 52 marked as anonymous, Facebook outlined its concerns with three areas of the Bill, one being the expansion of cyberbullying takedown schemes to private messaging.

It said [PDF] extending the scheme to the likes of its Messenger app is a disproportionate response to bullying and harassment, given the protections and tools already available.

"The eSafety Commissioner and law enforcement already have powers around the worst risks to online safety that can arise in private messages … [most services] provide tools and features to give users control over their safety in private messaging, like the ability to delete unwanted messages and block unwanted contacts," Facebook wrote.

“Despite the fact that existing laws allow the most serious abuses of private messaging to be addressed, the draft legislation extends regulatory oversight to private conversations between Australians. Whilst no form of bullying and harassment should be tolerated, we do not believe this type of scheme is suitable for private messaging.”

The social media giant said human relationships can be very complex and that private messaging could involve interactions that are highly nuanced, context-dependent, and could be misinterpreted as bullying, like a group of friends sharing an in-joke, or an argument between adults currently or formerly in a romantic relationship.

"It does not seem clear that government regulation of these types of conversations is warranted, given there are already measures to protect against when these conversations become abusive," it said.

“Moreover, the policy rationale of the Australian government’s cyberbullying scheme for social media does not apply in the same way to private messaging. Bullying over private messaging cannot go viral in the same way as a piece of bullying content on a public social media platform; and regulators will rarely have the full context to determine whether a private conversation genuinely constitutes bullying.”

While Facebook’s submission to the inquiry is yet to be published, the company highlighted that what it prepared in its draft response echoed much of what it submitted at the start of the Bill’s initial consultation, as the draft was near identical to the original consultation paper.

The Bill before Parliament remains mostly unchanged, too.