On June 29, the National Communications Commission (hereinafter, the "NCC") released the draft Digital Intermediary Services Act to establish the accountability of relevant service providers. The draft was pre-announced for 60 days (until August 29), with three public explanatory meetings convened on August 11, 16, and 18. However, the public hearing originally scheduled for August 25 has been postponed by the NCC and will be held after the quality of the draft has been reviewed and improved. Once the draft is finalized, it will be submitted to the Executive Yuan for review and ultimately to the Legislative Yuan for deliberation.
The draft references the European Union's Digital Services Act (DSA) of this year (2022) and relevant digital bills in other countries. It requires "digital intermediary service providers" (e.g., Line, WeChat, KBRO, Facebook, YouTube, and Yahoo! Auction) to disclose information about themselves, about sellers who use their platforms to trade, and about online advertisements, and to publish annual transparency reports. Although service providers are not required to actively monitor and review the content posted by users, online platform providers are nevertheless required to establish "notification and response mechanisms" through which users can report illegal content, as well as an "internal objection mechanism" that allows users whose accounts have been suspended or whose postings have been removed to respond to the platform and seek remedies for their rights and interests. The most controversial provisions of the draft are Articles 18, 19, and 20, concerning the "information restriction order," the "emergency information restriction order," and the "marking of warnings." These provisions stipulate that the competent authorities under various laws and regulations may apply to the court for a restriction order compelling the service provider to remove illegal information, restrict its circulation, or take other necessary measures. In addition, before the court's ruling, the authority that applies for the ruling may directly issue an administrative sanction requiring the service provider to temporarily mark the specific information with a warning indicating that the information is a rumor or untrue.
Critics argue that if administrative authorities can determine, before any court ruling, which information may be false and require platform operators to attach a warning message, they may negatively label specific postings, which amounts to using government power to gag the speech of the people. Critics also suspect that the emergency information restriction orders will seriously undermine freedom of speech.
According to the general introduction of the draft, the "information restriction orders" and "emergency information restriction orders" were drafted with reference to the "access restriction orders" and "interim access restriction orders" of the draft Online Safety Bill formulated in the United Kingdom in 2022, and to the "protection order" and "emergency protection order" of Taiwan's Domestic Violence Prevention Act. Under the relevant provisions of the draft UK Online Safety Bill, if an operator fails to meet its legal obligations, the competent authority may apply to the court for an access restriction order preventing the operator from providing services to users, in order to compel the operator to fulfill its obligations as soon as possible. The UK draft neither provides for restricting access to specific information nor allows a restriction order to be applied for urgently within a short period of time, which distinguishes it from Taiwan's draft Digital Intermediary Services Act. Additionally, although Taiwan's Domestic Violence Prevention Act provides that, under certain conditions, the court shall issue an emergency protection order within four hours, domestic violence injuries are comparatively easy to establish: evidence such as injury certificates and audio or video recordings may help determine whether physical, mental, or economic abuse has occurred. By contrast, because the protection of personal freedom of speech must be weighed against the public interest of the mass audience, whether a given piece of speech is illegal, and the extent of the harm it inflicts on its audience, are oftentimes highly controversial. It is therefore debatable whether it is appropriate to require the court to rule within 48 hours.
As for the provisions on the "marking of warnings," according to the general introduction of the draft they were drafted with reference to Article 28b(3)(j) of the EU's Audiovisual Media Services Directive as revised in 2018 (Directive (EU) 2018/1808). That EU provision aims to protect the physical and mental development of minors and to shield the general public from information involving violence, hatred, or violations of EU criminal law, by requiring video-sharing platform operators to provide effective handling measures. This differs from Taiwan's draft Digital Intermediary Services Act, which allows the competent administrative authorities to require operators to mark false information with a warning in matters such as agricultural product prices, infectious disease epidemics, and food safety. The former pertains more to the criminal law domain, while the latter relates more to the administrative law domain. Moreover, the EU directive does not specifically mention the practice of adding any warning marks. It is therefore questionable whether administrative authorities should be given the power to require operators to add warning marks to specific postings. Although only postings that violate "laws with express terms concerning rumors or false information" may be marked with a warning, the relevant laws and regulations do not specifically define "falsehood" or "rumor," leaving administrative authorities significant discretion as to which postings should be marked on that basis. Last but not least, the draft does not specify how to provide immediate remedies to users whose postings have been marked with a warning.
If the typical remedy channels for a general administrative sanction are followed, i.e., filing an administrative appeal with the superior authority, it is questionable whether a timely remedy can be provided. All of the above doubts deserve further contemplation.
Despite the good intentions behind the Digital Intermediary Services Act, some portions of the current version may unduly restrict freedom of speech in violation of the principle of proportionality, leaving room for revision. Although erroneous postings on an online platform, which disseminates information rapidly, may create a significant negative impact, the restrictions imposed on speech may later prove excessive, or the postings may ultimately be found not inappropriate at all. By then, however, the postings can no longer attract public attention, and the misunderstanding caused by the restriction cannot be resolved amid the influx of rapid and massive information. How to design the law so that it is neither overly strict nor overly lenient should still be discussed and explored by all parties in order to achieve the best balance.
Sections 126 and 127 of the draft Online Safety Bill of the United Kingdom, dated June 28, 2022, URL: https://bills.parliament.uk/bills/3137 (last visited on August 23, 2022).
Article 28b(3)(j) of the Audiovisual Media Services Directive as revised by the European Union, Directive (EU) 2018/1808, URL: https://eur-lex.europa.eu/eli/dir/2018/1808/oj (last visited on August 23, 2022).