Rosa Ballardini (University of Lapland, Finland)
Rob van den Hoven van Genderen (University of Lapland, Finland)
Title: Legal Incentives for Innovations in the Emotional AI Domain

Abstract: That emotions strongly influence and drive our way of living and our experiences as human beings is undeniable. In this context, Artificial Intelligence (AI) technologies and the use of emotional data are crucial to push developments further. Several of these types of innovations could be perceived as welcome in our society, as they can improve our wellbeing by providing an emotion-based solution to an emotion-based problem. However, these types of inventions might also have a negative effect on the regulatory interdependence, privacy and autonomy of natural persons and may negatively influence their behavior.

As such, they might be blocked by several legal or ethical provisions currently existing in the European regulatory system. Yet, these inventions might require considerable investments, and thus legal incentives such as intellectual property rights are crucial to support investments in R&D in these fields. Moreover, a level of certainty in terms of the extent of legal and ethical acceptability of such innovations is also important to secure innovative investments. Shedding light on the boundaries of what is allowed and acceptable from the perspective of law and ethics with regard to research, development as well as exploitation and commercialization of innovations in the emotional AI domain is essential in order to enhance clarity and identify gaps where incentives are either lacking or fall short. Yet, research in this field is to date almost non-existent, leaving developers and innovators to navigate rather uncertain waters. This presentation investigates the interlinkage between legal incentives, technological innovations and ethics through the lenses of patent law and key fundamental rights provisions in the context of emotional AI, in order to provide a comprehensive overview of the challenges, limitations, but also opportunities for the protection, commercialization and exploitation of emotional AI related inventions.

Erich Schweighofer (University of Vienna, Austria)
Title: Development of Legal Informatics

Abstract: It is not so widely known, but legal informatics already has a history of about 60 years. Many papers and books have been written, and excellent conference series with publications exist (i.a. ICAIL, IRIS, JURIX, Cyberspace, NCLI, BILETA). It is an established interdisciplinary field of science but not sufficiently well recognised by law schools, in particular if the legal part – ICT law – is left aside. A main reason may be the still dominant restraint of law in developing theory, with the focus placed on legal practice instead. As digitalisation cannot be easily ignored by law faculties, the most obvious development is now the establishment of units and studies with fancy, marketing-like titles (with the most successful one, legal tech, on top). This may save time for in-depth research, but the challenges of the law in the knowledge and network society remain unsolved. A review of the main research questions of the last 60 years demonstrates that the “big questions” have not changed that much: the very significant extension of “clients” of the legal system – mostly everyone in society, now in a potentially global environment, adding robots and agents as legally relevant actors; the strong rise in relevance of the knowledge factor and its repercussions; and the (semi-)automation of legal work, both for support and for solving the cost issue of the legal system.

Societal inclusion, coupled with globalisation: A legal system available for all in society, worldwide, including also machines – as a system for legal protection of interests – is an easy political slogan that is very difficult to achieve. Law is voluminous, dynamic and costly, even more so nowadays, and sufficient knowledge remains an ongoing challenge. Gigabytes as a unit of measurement of the quantitative size of a legal system, daily change of the law in force, increased litigation and lengthy legal process rules require significant technological support and a new form of man/machine co-operation.

Legal text (multimedia) corpora and their intellectual mastering: Legal databases online are indispensable in legal work today. Efficient legal search, automated translation, generation of metadata and legal analysis gain importance as the traditional concept of “learning the law” is replaced by “mastering the law” in a set of global villages and societies.

Automation of law: With generative AI (e.g. ChatGPT), the automated production of legal documents and their exchange and analysis is becoming a reality. The axiom of human control remains undisputed but – in detail – raises the challenging question of how man/machine co-operation should be organised.

ICT stays and does not go away. Without question, legal practice has been working with this challenge for a long time. For quite some time, the hope that ICT would prove irrelevant in the future was strong – the “typewriter approach”. Reality was and is different. As ICT is already a strong pillar of the legal system and its role is growing, law schools must react to this challenge in their legal curricula. Lawyers must share their textual proficiency with machines but also learn to use these tools in the very fast and efficient but procedural (semi-)automated application of law. Intellectual control may not be easy at all. Some training in the new culture technique of ICT is inevitable; the main topics are: legal information retrieval, legal logic, legal ontologies, natural language processing, and telecommunication. The auxiliary part of legal informatics remains a management task for lawyers: hardware, software, servers, networks, IT security etc. Much more interesting, and impossible without legal knowledge, is the use of technology applications for lawyers. For a few – maybe legally trained engineers or technologically advanced lawyers – the development of technology applications for lawyers would be highly relevant work for the legal system.

Without question, informatics faculties with their many disciplines may like to dominate the ICT part of legal informatics. However, given its many and dynamic application questions, it should be lawyers who take the leading role. Otherwise, the complexity of the legal system is not taken into account.

For the last 30 years, the legal part of legal informatics was dominant, with the catchwords: e-person, e-transaction, e-document, e-signature, international and European telecommunication, e-Government, e-Justice, e-Commerce, data governance law and privacy, and IP/IT law. It is interesting that answers were produced mostly with the methods of legal dogmatics, much less in an interdisciplinary context. Present problems of data protection, IP law or competition law may have their origin in this development.

Tobias Mahler (Norwegian Research Center for Computers and Law, Norway)
Title: Limitations for Transnational AI Governance

Abstract: This presentation delves into the challenges and possibilities of establishing effective global governance mechanisms for AI development, deployment, and use. While current AI governance frameworks primarily stem from supranational lawmaking or intergovernmental cooperation, this discourse draws parallels and distinctions with the multistakeholder approach observed in domains like internet governance, wherein non-state entities such as ICANN wield significant influence. By examining shared traits and divergences between these realms, this discussion sheds light on potential implications for shaping future policy strategies.

Mona Naomi Lintvedt (Norwegian Research Center for Computers and Law, Norway)
Title: Under the robot's gaze

Abstract: In robotics research, the gaze of the robot is commonly understood in the dictionary sense of ‘a fixed, intent look’, which implies the use of eyes. The gaze theory of Sartre and Lacan, however, emphasises that gaze is not only about eyes, and sometimes has nothing to do with eyes, but rather concerns the different senses that create images in one’s mind. This signifies a power relationship between the observer and the observed. With their sensory capabilities, robots are, in effect, the perfect panopticons (or rather panspectrons, as coined by DeLanda), which can operate openly in our midst: we welcome them into our homes while potentially remaining unaware of the full extent of their observational capacities. They can follow the user, observing and intruding around the clock, not only observing what is visible in plain sight, but even through walls and obstacles. The robot is capable of all-encompassing, constant surveillance. Thus, as with Foucault’s panopticism, the person is placed in a constant state of visibility and subjected to the constraints of power, affecting personal autonomy and dignity. This talk is based on research which will investigate the power relationship, and the assumed power imbalance, in human-robot interaction through the lens of gaze theory and panopticism.


Béatrice Schütte (University of Helsinki and University of Lapland, Finland)
Title: The principle of social and environmental wellbeing in the development and use of AI

Abstract: The latest version of the AI Act adopted in June 2023 by the European Parliament (EP) features six general principles according to which AI systems shall be developed and used by operators. One of them is the principle of social and environmental wellbeing. Article 4 (a) (1) (f) of the AI Act sets out that ‘social and environmental well-being’ means that ‘AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy’. It was first introduced as one of the components of trustworthy AI in the Ethics Guidelines developed by the High-Level Expert Group. Similar references can be found in the OECD AI Principles, namely Principle 1.1 on inclusive growth, sustainable development and wellbeing. The purpose of this contribution is to examine the principle of social and environmental wellbeing in light of the approach of general principles for the development and use of AI. It will further trace its development, starting from the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group, and assess it within the bigger picture of AI regulation in the EU and at the international level. The contribution will close with an outlook on possible implications in concrete use cases.

Cecilia Magnusson Sjöberg (Stockholm University, Sweden)
Title: Privacy in the Archives - a Haven for AI?

Abstract: Final words at this stage of the presentation indicate a quest for more law addressing the legal implications of personal data processing in archives. This applies in particular to the public sector, when it comes to (very) large volumes of official data. Privacy in archives no doubt requires proactive legal awareness. Examples of such concerns include how to differentiate between deletion and erasure in digital environments, and how to design digital document management systems that allow for the separation of secret and confidential data. More precisely, it concerns the interplay between AI (artificial intelligence), ML (machine learning), mathematics, statistics, language technologies and so forth, emanating into autonomous systems. Then, but only then, might the archives offer the potential to become a haven for AI, by way of IT governance covering primarily vagueness, ambiguities, and coverage. Evaluation is of course also important, and could be built mainly upon recall and precision. Noteworthy is the fact that archive law and IT law are to a large extent separated, both academically and in practice (primarily commercially) – not formally, of course, but rather in practice. Wishful thinking: the next Nordic conference will have a separate track for archivists to illuminate this.

Tanel Kerikmäe (Estonia Tech University, Estonia)
Title: A critical view of the implementation of public AI use cases

Abstract: Estonia is a well-known avant-garde country, proud of its success in the field of digitalisation. Nevertheless, the core of the e-society, i.e. e-governance and AI use cases (kratid), should not automatically be glorified but also screened and audited. The planning and adoption of new AI-related technologies in a data-driven world is certainly not risk-free and should be based on certain principles that safeguard stakeholders' support for the innovation. Lawyers, like other social scientists, have been overshadowed by IT experts when preparing new AI-supported services, and the opportunities seem to be prioritised over the needs. These attempts demonstrate the alienation of tech enthusiasts from the expectations of the addressees of the e-services (for example, the misleading promotion of a robot judge, or the geo-location of certain types of industries). The challenges of past, present and future are related to the procedures that are (or are not) used when managing the lifecycle of a use case, or to not including stakeholders. One of the main principles of good law-making could also be applied when vitalising new AI use cases, namely that the priority is always the welfare of citizens, not less work or fame for the State apparatus. When designing AI-based solutions, it is relevant to assess carefully whether the solution has the potential to solve a problem in a specific case. Consideration should also be given to ready-made solutions, which can be customized for the needs of a specific user group. New initiatives such as the Real-Time Economy (RTE) and Bürokratt will have a significant impact on the rights and obligations of both the private sector and citizens. The EU's approach to AI clearly centres on excellence and trust. Yet the sandboxes created today focus on IP and data protection, not on the interests of the stakeholders.

Anja Møller Pedersen (University of Copenhagen, Denmark)
Title: Algorithmically generated suspicion: Introducing intelligence-led and predictive policing to the Danish National Police

Abstract: In 2017, the Danish National Police acquired a new, ‘AI-ready’, policing platform from the US tech company Palantir. This platform, called POL-INTEL, does not collect new intelligence, but it enables the police to carry out data analysis across police registers and other available intelligence, including open source, thus representing a quantum leap in the transition towards intelligence-led and eventually predictive policing in Denmark. While the platform speeds up police investigation and improves its efficiency, putting trust in algorithmically generated network analysis and the cross-integration of registers – thus outsourcing suspicion to those developing the applied software – without careful scrutiny and sufficient legal safeguards has serious fundamental rights implications.

Nevertheless, a fairly broad legal basis for carrying out such data-driven policing was introduced without much ado and only little public attention, leaving much consideration as to its application to police discretion, without acknowledging its serious implications for the fundamental rights to respect for privacy and data protection amongst others.

In the meantime, the proposed EU AI Act explicitly categorises predictive policing as “high risk” use of AI, thus reiterating the need for a proper public debate on the personal and democratic costs and benefits of the transition towards intelligence-led and (maybe) eventually predictive policing, and for a more detailed regulation of the use of POL-INTEL for data-driven policing and the use of open source. Against this background, the presentation aims at drawing attention to some of these fundamental rights challenges, discussing how to ensure the necessary transition to intelligence-led policing without undermining fundamental rights and democracy.

Jouko Nuottila (University of Lapland, Finland)
Hilja Autto (University of Lapland, Finland)
Title: Revolutionising contracts with AI: the case of the Ecodesign Regulation

Abstract: Contracts have two main groups of users: lawyers and others. The latter often claim that contracts focus on the wrong things and are written in a way that alienates people rather than helping them achieve their goals. This presentation explores a new era in which it is easier to revise contracts. Building on the research and practice of Proactive Law and Contract Design, we explore how AI can help change contracts for the better and automate many of the tedious parts. Our research-based views highlight the power of plain language and information design to improve the communication of complex information and support stakeholder empowerment and choice. While our findings can be applied to any complex communication, we apply them here to existing model clauses in environmental and social contracts, with examples of how AI and Large Language Models can be used to generate first drafts of revised content and support contract (re)design in a more modern, useful and usable way. We use the new Ecodesign Regulation as an example of how AI can support the design of contract content, language and presentation. Our examples show how the combination of information design and AI can lead to contracts that are both legally and operationally functional, and to more sustainable supply chain contracting practices. In short, better business and a better society.

Reijo Aarnio (Sitra, Finland)
Title: Fair data economy

Abstract: With the potential to generate sustainable growth, enhance productivity and innovation, and bolster the competitiveness of European businesses on the global stage, the European Data Strategy plays a pivotal role in Europe's policy of open strategic autonomy. The developments in Artificial Intelligence (AI) and generative AI (GenAI) challenge everyone's digital sovereignty and the European vision of making Europe a hub for trustworthy AI. However, by using the developments in the field of AI and GenAI responsibly, we have the potential to make our machines, devices, industry and society even smarter. In order to make the modern data economy fairer, we need new technological, economic and legislative innovations that specifically support the rights of individuals. We need an economic right for consumers in the fair data economy. Successfully reaching this new environment while respecting the rights of citizens requires strong leadership, clear guidance and effective supervision.

Peter Wahlgren (Stockholm University, Sweden)
Title: Sustainable Legal Education

Abstract: The impact of digitalisation and AI on society is profound. The ways in which we communicate, work, consume and live are changing. The legal sector is not immune to this: the development affects how the legal system operates, and AI has the power to challenge the rule of law, human rights and democracy in several ways. Likewise, the understanding of how control and problem solving - which are the basic tasks of law - can be exercised changes. A sustainable legal education cannot neglect this development. Lawyers of the future must be able to understand the implications of the technology in order to deliver relevant legal services. Both the requirements regarding substantive legal knowledge and methodological proficiency are affected. The need to reform the traditional legal curriculum is acute. The alternative, a continued ostrich-like strategy, raises several critical questions. What if jurisprudence continues to ignore new ways in which human activities are expected to be carried out in order to ensure reasonable demands for efficiency and quality? What if law school curricula no longer reflect the standardised ways in which transactions and problem solving are managed in society? And, if the answers to these questions give rise to a fear that legal education will be marginalised, is reorientation still an option, or is it already too late? Can a reformed legal education and legal informatics fill the growing void? Alternatively, is it unproblematic to recast fundamental legal principles and teleological interpretation methods into technical solutions, and can these tasks therefore be safely entrusted to tech companies and engineers without legal training? Can ethical, legal and social impact analyses be automated, and, if so, is this the point in time when confused understandings of the function of jurisprudence and the legal profession finally disintegrate?
In retrospect, was law as we now know it merely a historical parenthesis; a dubious control and problem-solving mechanism that an immature civilisation all too long clung to?

Tuomas Pöysti (Chancellor of Justice, Finland)
Title: Precautionary and Risk Governance Design Patterns in Law on Artificial Intelligence (AI)

Abstract: Artificial Intelligence (AI) enables powerful human–machine collaboration and better decision-making, including the use of digital product and sustainability information (the digital product pass) and the steering of resources towards energy efficiency and a sustainable economy. The use of AI, as well as the green transition, benefits from the wide use of data, the sharing of data and platform-based solutions. Artificial intelligence itself will be subject to regulation in the European Union Artificial Intelligence Act and in the Council of Europe Convention on the Use of AI, but these instruments are not applied in isolation, rather as part of a wider legal system. In the regulation of old and new risks, legal systems encounter recurrent problems and repeat similar solutions beyond individual fields of law. The interfaces between data protection law, competition law, product regulation and public and administrative law do change, but the fundamental problem of protecting human autonomy and agency and of limiting arbitrary and otherwise harmful power remains. Legal tradition applies normative patterns, and the legislator develops design patterns to be used in law. These legal design patterns are a useful way to look into the structural change, and the needs for change, in law, to make the law serve its purpose in the digital and green transitions. The presentation will look into precautionary and standard risk management design patterns in the regulation of the use and development of AI in Europe.

Beata Mäihäniemi (University of Helsinki, Finland)
Title: The interface between digital platforms and sustainability – Insights from competition law & data governance regulation

Abstract: Users online should have a chance to buy sustainable products and services through digital platforms, yet currently they can do so only to a limited extent. Nevertheless, environmental issues are becoming more and more pressing, and users would increasingly be willing to, e.g., compensate for their flight emissions, buy organic food or ensure the protection of endangered species. The question arises: who is responsible for ensuring the sustainability of products and services that are sold or offered online? Is it the consumer, who often feels she wants to be environmentally friendly but chooses cheaper products over sustainable ones because she is used to choosing the cheaper offer? Or is it the government/EU, which is promoting the twin transition in the first place?

I claim that this responsibility could be addressed by means of competition law as well as new regulations (recently published or in the proposal phase), such as the Digital Markets Act, the Digital Services Act and the Data Act. In particular, consumer welfare (i.e. how consumers would be better off) could also denote buying more sustainable goods and services, as non-price goals of competition law are now becoming increasingly popular. Similarly, any tools offered in recent data governance regulations, such as the Digital Markets Act, the Digital Services Act or the Data Act, that increase the autonomy and self-empowerment of online users could have a similar effect.

What are the main behavioural obstacles to the adoption of sustainability by online platforms that impede enhancing consumer welfare and user autonomy? For example, digital platforms often nudge consumers into specific behaviours that may not be in the best interest of the user. Similarly, platforms take advantage of the heuristics and biases that users are subject to, so that they are not able to choose more sustainable products and services. How can these obstacles be addressed by competition law and data governance tools?

Maksymilian Kuzmicz (Stockholm University, Sweden)
Title: They should know: Interest in being informed and potential conflicts. The case study of Active and Assisted Living (AAL).

Abstract: Ensuring that users and consumers are well-informed about the multifaceted aspects of emerging technologies, including their environmental impact, is of paramount importance. However, such interests often collide with the concerns of other stakeholders. This contribution explores an approach to addressing this challenge, focusing on video-based AAL as a case study. In the first step, possible conflicts of interest are identified. Subsequently, I examine how the identified conflicts are resolved within the current legislative framework of the EU.