The use of AI systems and corporate compliance rules

25 Apr 2025 | Newsletter

Bianca M. Gutierrez, Gutierrez Law Firm, Italy

The Italian Group investigated the issue of corporate compliance rules in the new reality of AI. The investigation aimed to identify best practices to reduce the risk of criminal corporate liability for infringements of intellectual property rights and personal rights committed within the company through the use of AI.

The use of artificial intelligence systems in business can have positive effects, such as reducing the time needed to perform tasks, but also negative ones, such as exposing the company to liability for offences committed by its employees and collaborators.

Under the Italian legal system, corporate liability can be not only civil but also criminal, following the entry into force of Legislative Decree No 231 of June 8, 2001 (the “Decree”). The Decree states that a company is liable for criminal offences committed in its interest or to its advantage: (a) by persons who represent, administer or direct the organisation or one of its organisational units with financial and functional autonomy, as well as by persons who manage and control it, even de facto; (b) by persons under the direction or supervision of one of the subjects referred to in (a).

The Decree requires companies to adopt organisational and risk management models, based on specific codes of conduct, capable of preventing offences from being committed. Since its entry into force, companies operating in Italy have adopted internal good-practice models.

The bill approved by the Italian Senate on March 20, 2025, and now under discussion in the Chamber of Deputies (“DDL 1146”), which aims to bring Italian law into line with the AI Act, provides for the introduction of: (i) a new crime in the Italian Penal Code (article 612-quater, “Unlawful dissemination of content generated or altered with artificial intelligence systems”), consisting of the unauthorised publication or dissemination of falsified or altered images, videos or voices of another person, likely to mislead as to their authenticity and prejudicial to the right to personal identity; (ii) aggravating circumstances (common and special, with special effect) for certain existing crimes. As far as the criminal offences envisaged by the Italian Copyright Law are concerned, the bill only introduces a new form of the offence under art. 171 (letter a-ter), consisting of the reproduction or extraction of texts or data from works or other materials available on the web or in databases in violation of articles 70-ter and 70-quater of the Italian Copyright Law, including through artificial intelligence systems.

There has been some discussion on how to adapt the best-practice rules that companies operating in Italy have already adopted to the new level of risk created by the spread of AI systems.

The Study Group on criminal IP law, which I coordinate and which was created within the Italian Group to explore criminal law aspects of intellectual property, has recently addressed this issue.

Regulation (EU) 2024/1689 of June 13, 2024 “laying down harmonised rules on artificial intelligence” (the “AI Act”) sets out the safety requirements that every artificial intelligence system must fulfil in order to be placed on the market, after identifying and grading the level of risk for each of the actors involved in the production and commercialisation process. Indeed, the AI Act follows a risk-based approach: the greater the risk, the stricter the rules. It distinguishes between limited-risk and high-risk systems, while also addressing systems that do not pose systemic risks.

As for the AI Code of Practice (the “Code”), the second draft recently unveiled by the European Union sets out the methods that a company must adopt in terms of corporate compliance, also in order to avoid violations of copyright law, at every stage of the lifecycle of AI systems (development, deployment, monitoring). The Code aims to ensure that all AI systems comply with the requirements of the AI Act by applying the principles of ethics, transparency and reliability.

Article 56 of the AI Act, which provides for the adoption of the Code at EU level, expressly states that it must cover, at least, “the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks may emerge and materialise along the AI value chain”.

Our Group believes that a new corporate compliance model should be developed starting from the model designed for Decree 231, duly adapted to the new area of risk created by the use of artificial intelligence and taking into account the AI Act and the Code.

Decree 231, which requires companies operating in Italy to adopt corporate compliance models, is inspired by self-regulation, much like the AI Act, which likewise follows a regulatory-precautionary approach.

From this perspective, useful indications can be drawn:

– from the AI Act: article 9 (stating that a risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems), article 19 (which obliges providers of high-risk AI systems to keep automatically generated logs, to the extent such logs are under their control) and article 26 (which obliges deployers of high-risk AI systems to take appropriate technical and organisational measures to ensure the proper use of such systems);

– from the Code: particularly the measures on the “Taxonomy of Systemic Risk”, which serves as a basis for systemic risk assessment and mitigation, and on the “Safety and Security Framework”, in which providers must detail the risk management policies they adhere to in order to proactively assess and proportionately mitigate systemic risks from their general-purpose AI models with systemic risk.

The second part of the Code, regarding AI models that could involve systemic risks, outlines the measures for the evaluation and mitigation of these risks, including model evaluations, incident reporting and cybersecurity obligations.

The Italian Group is aware that the concrete definition of a new corporate compliance model will require much further study and will have to be measured against the definitive text of the Code. However, we are convinced that the reflections made in this first phase have already allowed us to outline the essential aspects on which the adaptation effort will be concentrated, in other words, to establish guidelines for the work to come.