Entertainment and Media Guide to AI: Geopolitics of AI – Regulation

United States

The United States currently lacks comprehensive legislation on AI. However, various administrative agencies are undertaking efforts to provide guidance on legal issues surrounding AI. Such guidance includes the Federal Trade Commission’s Business Blog posts “Keep your AI claims in check” (Feb. 27, 2023) and “Chatbots, deepfakes, and voice clones: AI deception for sale” (Mar. 20, 2023), the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework (AI RMF) 1.0” (Jan. 2023) and the U.S. Copyright Office’s statement of policy on “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence” (Mar. 16, 2023). Certain regulations have already been implemented at the state and local level to address the impacts of AI in various fields, including employment: New York City’s Local Law 144 requires employers to conduct bias audits of AI-enabled tools used for employment decisions, as sketched below.
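To make the bias-audit concept concrete, here is a minimal sketch of the core metric such audits report: each group’s selection rate divided by the selection rate of the most-selected group (the “impact ratio”). The data and column names are hypothetical, and a real Local Law 144 audit must be conducted by an independent auditor under the detailed implementing rules.

```python
# Minimal, hypothetical sketch of an impact-ratio calculation of the kind
# reported in a Local Law 144 bias audit. Data and column names are
# invented for illustration only.
import pandas as pd

# Hypothetical screening outcomes from an AI-enabled hiring tool
df = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1, 0, 1, 1, 1, 0, 1, 0, 0, 1],
})

# Selection rate per demographic category
rates = df.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate relative to the
# most-selected category's rate
print(rates / rates.max())
```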

While the Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Oversight of A.I.: Rules for Artificial Intelligence” shows an effort by the U.S. government to understand and regulate the sector, there is no federal legislation on AI as of the date of this article.

Key takeaways

  • U.S. agencies are beginning to develop guidance on legal issues around AI
  • The EU bases its approach on laws implemented on an EU level or a local member state level
  • The EU’s GDPR has important articles related to algorithmic decision-making

The EU

The regulatory framework

The EU’s regulatory approach to managing risks associated with AI is complex and multifaceted. It builds on laws that have already been implemented at the EU level or the member state level, particularly the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679), copyright and other IP laws such as the Copyright Directive discussed above, and the EU Directive on the protection of trade secrets (Directive (EU) 2016/943), as well as general commercial contract law principles. The EU’s strategy further includes two major pieces of legislation that have already been enacted and will also reshape the digital landscape, the Digital Markets Act (Regulation (EU) 2022/1925) and the Digital Services Act (Regulation (EU) 2022/2065), as well as planned legislation such as the EU Data Act, which would require data holders to make data available to data recipients.

In addition to these more general laws that apply, but are not tailored, to AI, the EU is developing laws shaped specifically for AI that are currently making their way through the European legislative process, most notably the AI Act. It is fair to say that the EU approach to AI risk management is characterized by a comprehensive range of legislation tailored to specific digital environments.

From a territorial perspective, and as a general rule, these EU laws apply if either the organization operating the AI system is based in the EU/EEA or the users of the AI system, or the subjects whose data is processed by it, are based in the EU/EEA. From a copyright perspective, EU copyright laws also apply if and to the extent that protection of the copyright-protected work is sought in the EU/EEA.

GDPR and AI

The GDPR applies to AI if either the business operating the AI system is based in the EU/EEA or the users of the AI system are located in the EU (art. 3 GDPR).

In addition to the data protection implications (see Data protection and privacy section), the GDPR contains two important provisions related to algorithmic decision-making. First, the GDPR provides that individuals have the right not to be subject to significant decisions affecting their legal rights that are based solely on automated processing, without human oversight (art. 22 GDPR). Second, the GDPR guarantees an individual’s right to “meaningful information about the logic” of algorithmic systems (arts. 13–15 GDPR). As in many areas, the GDPR is not very clear in this respect, and many questions about these provisions remain unanswered. How does the GDPR affect machine learning in the enterprise? In particular, how often may data subjects request this information, how valuable is the information to them, and what happens when companies refuse to provide it? As a result, the idea that the GDPR mandates a “right to explanation” of machine learning models has become a controversial subject.
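For illustration only, the following sketch shows one way a controller might surface “meaningful information about the logic” behind a single automated decision: reporting the per-feature contributions of a simple linear model. The model, feature names and data are hypothetical, and the GDPR does not prescribe this or any other explanation format, which is exactly why the scope of the “right to explanation” remains contested.

```python
# Hypothetical sketch: per-feature contributions for one automated decision
# made by a linear model. Illustrative only; the GDPR does not mandate any
# particular explanation technique, and real systems are rarely this simple.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "account_age_years", "missed_payments"]  # hypothetical

# Toy training data standing in for historical credit decisions
X = np.array([[55, 4, 0], [20, 1, 3], [70, 10, 0], [30, 2, 2], [45, 6, 1]],
             dtype=float)
y = np.array([1, 0, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Crude linear attribution: each feature's value times its learned weight
# (ignores the intercept; adequate only for this toy linear model)
applicant = np.array([25.0, 1.0, 2.0])
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
print("decision:", "approved" if model.predict([applicant])[0] else "denied")
```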

EU AI Act

In April 2021, the EU Commission published a proposal for an EU Artificial Intelligence Act in the form of an AI Regulation, which would be directly applicable throughout the EU. The AI Regulation seeks to harmonize rules on artificial intelligence by ensuring that AI products are sufficiently safe and robust before they enter the EU market. It applies where the operation of an AI system, its use or the use of its output has a connection to the EU/EEA (art. 2 AI Act), specifically to:

  • Providers that place an AI system on the market or put it into service in the EU, regardless of whether the providers are located inside or outside the EU
  • Users of AI located within the EU
  • Providers and users located outside the EU, if the output produced by the system is used within the EU

The EU AI Act will be a particularly important component in many areas of EU AI risk management. Although the AI Act is not yet final, its main features can be analyzed from the European Commission’s April 2021 proposal, the EU Council’s December 2022 final proposal and available information from ongoing discussions in the European Parliament (most recent update as of the writing of this article was in May 2023).

The AI Act has been presented as a “horizontal” piece of legislation by the EU Commission. The EU AI Act indeed sets out horizontal rules for the development, commodification and use of AI-driven products, services and systems within the territory of the EU.

However, there are also several limitations and exemptions in the AI Act. It implements a tiered system of regulatory obligations for a specifically enumerated list of AI applications. Providers of AI applications associated with minor risks, such as deepfakes, chatbots and biometric analytics, must make clear disclosures to affected individuals. Another group of AI systems with “unacceptable risks” would be banned outright. A third group of AI systems is considered high risk and may only be operated or used under certain restrictions, including logging, documentation, IT security and the possibility of human intervention.

The following AI systems are considered intrusive and discriminatory and would be banned:

  • “Real-time” remote biometric identification systems in publicly accessible spaces
  • “Post-remote” biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization
  • Biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation)
  • Predictive policing systems (based on profiling, location or past criminal behavior)
  • Emotion recognition systems in law enforcement, border management, workplace and educational institutions
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy)

High-risk AI systems, which are the most comprehensive and impactful of the classifications in the AI Act, are subject to a conformity assessment before they may be placed on the market. Providers of such high-risk AI systems are required to take extra steps, for example, implementing risk and quality management systems and documenting the system’s output.

AI systems are considered high-risk if they have an adverse impact on people’s safety or their fundamental rights. This category includes areas that may do harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters in political campaigns and recommender systems used by social media platforms designated under the Digital Services Act (those with more than 45 million users).

Two different categories of AI applications are classified as high-risk in the AI Act:

  • AI systems that are intended to be used as safety components of a product, or that are themselves products, covered by legislation listed in Annex II to the AI Act. This category of high-risk AI systems includes consumer products that are already regulated under the regulatory regime of the EU single market, for example, medical devices, vehicles or toys. In general, this means that AI-enabled consumer products will still go through the existing regulatory process under the relevant product harmonization legislation and will not require a second, independent conformity assessment just for the requirements of the AI Act.
  • AI systems on an enumerated list of applications involving significant, socially relevant decisions. The list includes, for example, real-time and post-remote biometric identification systems, systems used for hiring or educational access, and credit scoring systems.

Unlike consumer products, the latter AI systems are generally considered to present new risks and have so far been largely unregulated. This means that the EU will need to develop specific AI standards for each of these use cases, which is expected to be a significant implementation challenge given the number of high-risk AI applications and the novelty of AI standards.

The AI Act provides for substantial fines in the event of noncompliance as well as other remedies, which can scale up to the higher of €30 million or 6% of the total worldwide annual turnover in the most serious cases.
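As a simple worked example of that cap, using the €30 million / 6% figures from the texts cited above (the final numbers may still change in the legislative process):

```python
# Illustrative calculation of the AI Act's maximum fine for the most
# serious infringements, per the proposal figures cited in the text.
def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    # Higher of EUR 30 million or 6% of total worldwide annual turnover
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

print(max_ai_act_fine(100_000_000))    # smaller firm: 30000000.0 (floor applies)
print(max_ai_act_fine(2_000_000_000))  # large firm: 120000000.0 (6% applies)
```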

The AI Act is still expected to enter into force in 2023, but organizations will then have three years to get ready before it applies.

Digital Services Act and Digital Markets Act

The EU’s AI law is not the only significant law regulating AI risks. The EU has already passed the Digital Services Act (DSA) and the Digital Markets Act (DMA), and a future AI Liability Directive could also play an important role. Both Acts have an extraterritorial scope similar to the GDPR’s and may therefore also apply to organizations based outside the EU whose users are based in the EU/EEA (see art. 2 DSA and art. 1 DMA).

The DSA, passed in November 2022, treats AI as part of its holistic approach to online platforms and search engines. By creating new transparency requirements, requiring independent audits and enabling independent research on large platforms, the DSA will compel organizations to reveal much new information about the function and harms of AI on these platforms. Further, the DSA requires large platforms to explain the AI behind content recommendations, such as the populating of news feeds, and to offer users an alternative recommender system not based on sensitive user data. To the extent that these recommender systems contribute to the spread of disinformation and large platforms fail to mitigate that harm, they may face fines under the DSA.
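As a purely illustrative sketch of the “alternative recommender” idea, the snippet below shows a feed that can rank either by profiling-derived relevance or by a non-profiling fallback (recency). All names and the ranking logic are hypothetical and are not drawn from the DSA’s text.

```python
# Hypothetical sketch of offering a non-profiling feed alternative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: str
    published: datetime
    relevance_score: float  # produced upstream from user profiling

def rank_feed(posts: list[Post], use_profiling: bool) -> list[Post]:
    if use_profiling:
        # Personalized ranking based on profiling-derived relevance
        return sorted(posts, key=lambda p: p.relevance_score, reverse=True)
    # Non-profiling alternative: plain reverse-chronological order
    return sorted(posts, key=lambda p: p.published, reverse=True)

posts = [Post("a", datetime(2023, 5, 1), 0.9), Post("b", datetime(2023, 5, 3), 0.2)]
print([p.post_id for p in rank_feed(posts, use_profiling=False)])  # ['b', 'a']
```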

Similarly, the DMA is broadly aimed at increasing competition in digital marketplaces and considers some AI deployments in that scope. For example, large technology companies deemed to be “gatekeepers” under the law will be barred from self-preferencing their own products and services over third parties, a rule that is certain to affect AI ranking in search engines and the ordering of products on e-commerce platforms. The European Commission will also be able to conduct inspections of gatekeepers’ data and AI systems. While the DMA and DSA are not primarily about AI, these laws signal a clear willingness by the EU to govern AI built into highly complex systems.

Overall, we think it is fair to say that the European legislature’s approach to AI risk management offers, in aggregate, more centrally coordinated and comprehensive regulatory coverage than that of other governments, particularly the U.S. government. The EU’s legal framework covers more applications and includes more binding rules for each application. On the other hand, both the United States and the EU favor largely risk-based approaches to AI regulation and have articulated similar principles for how dependably AI should function.

The UK

The United Kingdom recently published a white paper detailing its “pro-innovation” approach to AI. Published by the UK Department for Science, Innovation and Technology (DSIT) and the Office for Artificial Intelligence, the white paper proposes regulating AI through the existing regulators that make up the Digital Regulation Cooperation Forum (DRCF), rather than through a new AI-specific regulator. The DRCF is meant to rely on the UK’s existing regulatory framework to develop appropriate guidance on key issues like algorithmic bias, safety and privacy. The DRCF comprises the Competition and Markets Authority, the Information Commissioner’s Office, Ofcom and the Financial Conduct Authority.

The UK AI Regulation Policy white paper sets out plans for a risk-based, adaptable regulatory framework. It confirms that five cross-sectoral principles will initially form a non-statutory framework for AI; the UK government may in the future decide to introduce a statutory duty on the DRCF regulators to have regard to these principles. The five principles are:

  1. Appropriate transparency and explainability – parties directly affected by the use of an AI system should be able to access sufficient information, including how the AI system works and how it makes decisions, to enforce their rights. Bearing in mind that the logic and decision-making of AI systems cannot always be explained meaningfully, the level of explainability should be appropriate to the context, including the level of risk.
  2. Safety, security and robustness – regulators may need to consider technical standards, for example, addressing testing and data quality.
  3. Accountability and governance – regulators will need to determine who is accountable for compliance with existing regulations and principles, and provide guidance on how to demonstrate accountability.
  4. Contestability and redress – those affected should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.
  5. Fairness – AI systems should not undermine legal rights, discriminate unfairly against individuals or create unfair market outcomes.

The UK also recently published the “Pro-Innovation Regulation of Technologies Review” by Sir Patrick Vallance, which discusses issues of copyright in AI under the Copyright, Designs and Patents Act 1988 (CDPA). The general rule is that the first owner of copyright is the creator, unless the work is made in the course of employment. Under the CDPA, certain acts in relation to copyright-protected works are permitted, including text and data mining (TDM). TDM is relevant to AI input because it can be used to train AI by analyzing large amounts of information. The TDM exception is limited, however: it permits making copies of copyright-protected works for computational analysis only for non-commercial research, so for AI purposes any use of such works under the exception is confined to non-commercial research. The review’s AI recommendations propose a code of practice, expected to be published later this year, and a wait-and-see approach to legislative change based on developments, with a view to an internationally harmonized approach.

The UK’s position in relation to data that includes personal information is quite different. There are no copyright or other IP rights in personal data as such (the relevant laws are the UK Data Protection Act 2018 and the UK GDPR). The UK’s regulator, the Information Commissioner’s Office (ICO), has taken a keen interest in AI and has issued guidance on AI and data protection. The guidance covers requirements for fairness in AI and provides a road map to data protection compliance for developers and users of generative AI. Any use of personal data in AI requires compliance with all of the fair processing principles, which partly overlap with the ethical concerns surrounding AI; these principles include fairness, transparency, lawful processing of personal data, security, data minimization and data integrity. The ICO has acknowledged the importance of AI and, in its usual pragmatic fashion, offers a toolkit to aid compliance.

Given the UK’s laissez-faire approach to regulation, the entertainment and media industry will not, in the short term, be affected by new AI-specific laws, although any use of personal data must still comply with the UK’s data protection laws.
