The draft text provides specific examples of the systemic risks the platforms should assess in their risk assessments. These include risks such as the dissemination of illegal content, the malfunctioning of their service and any “actual and foreseeable negative effects on the protection of public health”.
The risk assessments must be completed at least annually, or before new services are launched, and once they are complete the platforms must put in place “reasonable, transparent, proportionate and effective mitigation measures, tailored to the specific systemic risks” identified.
While the draft DSA contains examples of the mitigation measures very large online platforms might implement, MEPs have added a proposed amendment that would make clear that the requirements to apply those measures “shall not lead to a general monitoring obligation or active fact-finding obligations”. The E-Commerce Directive from 2000, which the DSA is designed to enhance, already prohibits general monitoring obligations from being imposed on intermediaries.
Among the other proposed changes that would affect very large online platforms are draft new requirements around tackling so-called ‘deep fakes’.
“Where a very large online platform becomes aware that a piece of content is a generated or manipulated image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful (deep fakes), the provider shall label the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of the services,” according to the draft DSA approved by the European Parliament.
Other significant proposals adopted by MEPs are plans to prohibit intermediaries from using “the structure, function or manner of operation of their online interface, or any part thereof, to distort or impair recipients of services’ ability to make a free, autonomous and informed decision or choice”. The draft text cites alleged practices that “exploit cognitive biases and prompt recipients of the service to purchase goods and services that they do not want or to reveal personal information they would prefer not to disclose”.
A list of specific actions that intermediaries must refrain from taking in the context of their online interface and recipients’ decisions and choices includes “giving more visual prominence to any of the consent options when asking the recipient of the service for a decision”.
MEPs have also moved to toughen the proposed new requirements around targeted advertising.
Under the plans, online platforms would be required to provide “meaningful information, including information about how their data will be monetised” to recipients of their service to enable those users to make informed decisions on whether to consent to the processing of their personal data for the purposes of advertising.
Platforms would be prohibited from disabling users’ access to “the functionalities of the platform” if they refuse to consent to the processing of their personal data for the purposes of advertising. The use of “targeting or amplification techniques that process, reveal or infer personal data of minors” for the purpose of displaying advertisements would also be prohibited.
The use of “special categories of data”, which, under data protection law, includes information about a person’s race or ethnicity, political opinions, religious beliefs, health and sexual orientation, would also not be permitted for the purposes of “targeting individuals” if the MEPs’ proposals are adopted.
A new right for recipients of intermediary services to seek compensation from the service providers is also envisaged under the revised DSA proposals. This right would apply “against any direct damage or loss suffered due to an infringement by providers of intermediary services of obligations established under [the DSA]”.