China Justice Observer

中司观察


How Does China Address Platform Accountability in Algorithmic Decision-Making?

Sun, 28 Nov 2021
Categories: Insights


Key takeaways:

  • Chinese lawmakers have recognized the central role of algorithms in the operation of Internet platforms, and defined this technology as “automated decision-making” in the Personal Information Protection Law (PIPL), enacted on 20 Aug. 2021, regulating it for the first time.
  • Under the PIPL, platforms shall assess the impact of algorithms in advance, and are liable for the results of the decision-making afterwards.
  • The PIPL expands platform users' right to know, and requires platforms to break the "information cocoons" created by personalized algorithmic recommendations.


China's Personal Information Protection Law (个人信息保护法), enacted in August 2021, draws the boundaries for Internet platforms conducting automated decision-making through algorithms.

Ⅰ. Background

Chinese Internet platforms, most notably ByteDance's TopBuzz and TikTok, make extensive use of recommendation algorithms to push content and products to their users.

However, such algorithms have been questioned by the public and regulators for allegedly interfering with users' freedom of decision and thus creating moral hazards.

Chinese lawmakers have recognized the central role of algorithms in the operation of such platforms, and defined this technology as “automated decision-making” in the Personal Information Protection Law (hereinafter ‘the PIPL’), enacted on 20 Aug. 2021, regulating it for the first time.

Under the PIPL, automated decision-making refers to the activity of automatically analyzing and assessing individuals' behavioral habits, hobbies, or financial, health, and credit status through computer programs, and making decisions on that basis. (Article 73)
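
For technical readers, the statutory definition can be made concrete with code. Below is a minimal, purely hypothetical Python sketch of such a "computer program"; every field name, weight, and threshold is invented for illustration and comes from neither the PIPL nor any real platform.

```python
from dataclasses import dataclass

@dataclass
class Individual:
    # Inputs of the kind named in Article 73: behavioral habits,
    # hobbies, and financial/credit status (all values invented).
    daily_browsing_hours: float
    hobby_tags: list[str]
    credit_score: int

def automated_decision(person: Individual) -> str:
    """Automatically analyzes and assesses an individual, then makes a
    decision with no human in the loop -- the activity the PIPL calls
    "automated decision-making"."""
    engagement = min(person.daily_browsing_hours / 8.0, 1.0)
    creditworthiness = person.credit_score / 850
    score = 0.5 * engagement + 0.5 * creditworthiness
    return "extend_offer" if score > 0.6 else "withhold_offer"
```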

Prior to that, opinions were divided over platforms' liability for automated decision-making. For example, some believed that platforms should not be liable for the results of their automated decision-making algorithms, which were said to be an essentially neutral technology. The PIPL, however, clarifies the opposite.

Ⅱ. Restrictions on platforms

1. Regulators directly review the algorithms

As personal information processors, platforms shall audit the compliance of their processing of personal information with laws and administrative regulations on a regular basis. (Article 54)

This requires platforms to periodically audit their algorithmic automated decision-making and other information processing activities.

Under this rule, regulators can also audit the internal operation of platforms' algorithms, rather than merely supervising platforms' conduct and its consequences from the outside.

Accordingly, regulators treat the algorithms themselves as a direct object of regulation, which enables them to intervene in the technology and the details of automated decision-making.
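
As an illustration only, the audit trail that such direct review presupposes might look like the hypothetical sketch below; the record fields and the idea of a scheduled job are assumptions made for clarity, not requirements spelled out in Article 54.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AuditEntry:
    # One entry in a platform's recurring compliance audit trail.
    algorithm_name: str
    audited_on: date
    findings: str
    compliant: bool

def run_periodic_audit(deployed_algorithms: list[str]) -> list[AuditEntry]:
    """Hypothetical scheduled job: review each deployed algorithm's
    processing of personal information and record the outcome, so a
    regulator can inspect the process rather than only the results."""
    return [
        AuditEntry(
            algorithm_name=name,
            audited_on=date.today(),
            findings="reviewed against the PIPL and related regulations",
            compliant=True,  # placeholder outcome for this sketch
        )
        for name in deployed_algorithms
    ]
```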

2. Platforms assess the impact of algorithms in advance 

As personal information processors, platforms shall conduct personal information protection impact assessment in advance and record the processing information if they use personal information for automated decision-making. (Article 55)

The assessment by platforms shall cover the following:

A. Whether the purposes, methods or any other aspect of the processing of personal information are lawful, legitimate and necessary;

B. The impact on personal rights and interests and level of risk; and

C. Whether the security protection measures taken are lawful, effective and commensurate with the level of risk.

Accordingly, platforms shall conduct a prior assessment before their automated decision-making algorithms go live. The assessment covers the legitimacy and necessity of the algorithmic automated decision-making, as well as its impact and risks.
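
By way of illustration, the record of such a prior assessment could be captured in a structure like the hypothetical sketch below, with one field per item in the list above; the field names and the go-live gate are invented for clarity rather than prescribed by the PIPL.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    # Hypothetical record of a personal information protection
    # impact assessment, completed before an algorithm goes live.
    algorithm_name: str
    assessed_on: date
    purpose_lawful_and_necessary: bool   # item A above
    impact_summary: str                  # item B above
    risk_level: str                      # item B above: "low"/"medium"/"high"
    safeguards: list[str] = field(default_factory=list)  # item C above
    safeguards_commensurate: bool = False                # item C above

def may_go_live(assessment: ImpactAssessment) -> bool:
    """Gate deployment on a completed, passing prior assessment."""
    return (assessment.purpose_lawful_and_necessary
            and assessment.safeguards_commensurate)
```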

Defective algorithmic automated decision-making by platforms may harm citizens' property and personal rights, and even public interests and national security.

Such negative consequences may affect thousands of users at once. By the time the platforms are held accountable, it may be difficult to undo the damage already done.

To prevent such situations, the law establishes a prior assessment system for platforms' algorithms, so as to intervene before the algorithms take effect.

3. Platforms are liable for the results of the decision-making afterwards

Platforms shall assume the following obligations for results of automated decision-making (Article 24):

A. Platforms shall ensure that the results are fair and impartial

Where personal information processors conduct automated decision-making with personal information, they shall ensure transparency of the decision-making and fairness and impartiality of the results, and shall not give unreasonable differential treatment to individuals in terms of transaction prices or other transaction conditions.

B. Platforms shall provide automated decision-making options not targeting personal characteristics to their users.

Where push-based information delivery or commercial marketing to individuals is conducted by means of automated decision-making, options not targeting the individuals' personal characteristics, or convenient ways to refuse, shall be provided to the individuals simultaneously.

C. Platforms shall provide explanations of the decision-making results.

Where a decision that has a material impact on an individual's rights and interests is made by means of automated decision-making, the individual shall have the right to request the personal information processor to make explanations, as well as the right to refuse the making of decisions by the personal information processor solely by means of automated decision-making.

The rule holds platforms liable for the results of automated decision-making in the following ways:

A. The rule does not recognize the "technology neutrality" defense that has been used by platforms. Platforms should be responsible for the results of the algorithmic automated decision-making and should ensure that the results are fair and reasonable.

B. The rule expands platform users' right to know. Users can demand transparency in automated decision-making results, as well as explanations from platforms where a decision has “a material impact” on their rights and interests.

C. The rule requires platforms to break the "information cocoons" created by personalized algorithmic recommendations, thereby protecting users' right to know.
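
Translated into engineering terms, the three obligations above might shape a platform's interfaces roughly as in the hypothetical Python sketch below; the function names, the popularity-based fallback, and the fixed price list are invented for illustration and are not mandated by Article 24.

```python
POPULAR_ITEMS = ["item_1", "item_2", "item_3"]  # one generic list for everyone

def rank_for_user(user_id: str) -> list[str]:
    # Stand-in for a real personalized ranking model.
    return sorted(POPULAR_ITEMS, key=lambda item: hash((user_id, item)))

def recommend(user_id: str, personalized: bool = True) -> list[str]:
    """Obligation B: alongside personalized push, offer an option that
    does not target personal characteristics, via an easy opt-out."""
    if not personalized:
        return POPULAR_ITEMS          # identical results for every user
    return rank_for_user(user_id)

def price_for(item: str) -> float:
    """Obligation A: the price depends only on the item, never on a
    profile of the buyer, so there is no differential treatment."""
    catalog = {"item_1": 9.9, "item_2": 19.9, "item_3": 4.9}
    return catalog[item]

def explain(decision_id: str, factors: list[str]) -> str:
    """Obligation C: on request, give a human-readable explanation of a
    decision that has a material impact on the individual."""
    return f"Decision {decision_id} was based on: {', '.join(factors)}"
```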

Ⅲ. Our Comments

China has made a breakthrough in the PIPL by adding legal rules for platforms' automated decision-making algorithms. However, these rules still need further refinement. For example, the law does not clarify:

A. the conditions under which platforms must initiate an algorithm assessment;

B. whether, and to what extent, assessment reports will be made public after platforms evaluate their algorithms; and

C. how platforms should be held liable for damage caused by their algorithmic automated decision-making.

I presume that Chinese regulators are still exploring the possibility of enacting a series of specific regulations to further implement the PIPL.

Photo by Road Trip with Raj on Unsplash

Contributors: Guodong Du 杜国栋