Study  |  02/26/2026

What Does It Really Cost – and Why Specifically for Me?

Artificial intelligence is increasingly determining the prices we are offered when shopping online. Klaus Wiedemann, Senior Research Fellow at the Institute, has examined what this means, which laws protect us, and where further regulation is needed.

Toy airplane with red tail on a map showing Spain and Mallorca, with red pin marker on Mallorca.
Photo: Daniel Ernst/Adobe Stock
Dr. Klaus Wiedemann in a dark suit, light blue shirt, and dark tie in front of a gray background
Dr. Klaus Wiedemann, Senior Research Fellow in the Intellectual Property and Competition Law Department at the Institute.

Imagine you are looking for a flight to Mallorca. You open the same booking website as your neighbor – and suddenly find yourself paying significantly more than she does. The travel dates, number of passengers, and destination are the same, but not the price that a machine has calculated for you. What sounds like science fiction has long been an everyday reality in online retail, and it is becoming more complex. Klaus Wiedemann, a legal scholar, has systematically investigated this phenomenon in a recent article – and comes to an alarming conclusion.
 

Unpredictable Prices
 

The key difference lies in the process behind the pricing. With so-called “dynamic pricing”, all customers see the same price at the same time – it fluctuates based on demand, inventory, and the competitive situation, but it is the same for everyone. Not so with personalized pricing: here, the price depends on the specific characteristics of the individual – their purchase history, their surfing behavior, the device they use, or whether they have accessed the website via a search engine or a price comparison portal. In some cases, the stated goal of the algorithm is to get as close as possible to the highest amount that this person would be willing to pay at that moment. At the same time, the system takes into account the prices charged by other providers.

AI-based systems go further than classic algorithms: they are only given an abstract goal – such as “maximize profit” – which they then independently develop strategies to achieve. They learn from experience, optimize themselves autonomously, and their mechanics are so complex that even the companies that use them can no longer explain why a certain price appears on the screen in the end. The OECD already summed up this problem in 2017: such an algorithm delivers an optimal result without revealing the underlying decision-making steps.
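To make the distinction concrete, here is a deliberately simplified toy sketch in Python. Every feature name, weight, and signal is invented for illustration and does not describe any real provider's system; real AI-based pricers learn such adjustments autonomously rather than following hand-written rules.

```python
# Toy contrast between dynamic and personalized pricing.
# All features and weights are invented for illustration only.

BASE_FARE = 100.0

def dynamic_price(demand_factor: float, seats_left: int) -> float:
    """Same price for every customer at a given moment:
    depends only on market conditions, not on who is asking."""
    scarcity = 1.0 + max(0, 50 - seats_left) / 100
    return round(BASE_FARE * demand_factor * scarcity, 2)

def personalized_price(demand_factor: float, seats_left: int,
                       profile: dict) -> float:
    """Starts from the dynamic price, then adjusts it toward an
    estimate of this individual's willingness to pay."""
    price = dynamic_price(demand_factor, seats_left)
    if profile.get("came_from_price_comparison"):
        price *= 0.95   # price-sensitive visitor: undercut competitors
    if profile.get("device") == "high-end":
        price *= 1.10   # proxy signal for higher willingness to pay
    if profile.get("bookings_last_year", 0) > 5:
        price *= 1.05   # frequent traveler, likely to book anyway
    return round(price, 2)

# Two neighbors, same flight, same moment - different prices:
market = {"demand_factor": 1.2, "seats_left": 30}
neighbor = personalized_price(**market,
                              profile={"came_from_price_comparison": True})
you = personalized_price(**market,
                         profile={"device": "high-end",
                                  "bookings_last_year": 8})
```

The point of the sketch: both customers face identical market conditions, yet only the personalized function produces different prices for them – and in a learned system, unlike here, even the operator could not point to the individual rules responsible.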

What the Law Does Today – And Where Its Limits Lie
 

What protects us legally? There is no general obligation to disclose price calculations. However, if a price has been personalized on the basis of automated data processing, the provider must provide pre-contractual information about this in distance selling contracts. According to Wiedemann, anyone who uses personal data for pricing purposes also needs the active consent of the data subjects – a mere mention in the small print is not sufficient. But Wiedemann clearly sees the limits of this information model: Who actually reads every privacy policy? The protection offered by the law today primarily benefits those who are actively interested in information.
 

In the end, the fundamental question remains unanswered: Do AI-supported pricing systems lead to more competition and lower prices for everyone – or does the wealth flow one-sidedly to those who own and operate the algorithms? Time will tell whether today's instruments will suffice when price personalization is no longer the exception but the rule. Until then, it is worth pausing for a moment before clicking “Buy now” and asking yourself: Is this price really mine?
 

Klaus Wiedemann
Die Preisfrage – KI-basierte Preissetzungsmethoden im europäischen Wettbewerbs- und Verbraucherrecht der Digitalwirtschaft
Zeitschrift für Europäisches Privatrecht 34, 1 (2026), 12–38

Miscellaneous  |  02/20/2026

How Copyright Can Survive in the Age of AI

Artificial intelligence can generate texts, images, and music in seconds that would take humans hours or days to create – and at a fraction of the cost. In a recent paper, Josef Drexl shows that traditional copyright law falls short in this area and argues for a radical paradigm shift.

Microphones for media interviews at a press conference.
Photo: Microgen/Adobe Stock

A Court Ruling with Limited Effect
 

In November 2025, GEMA achieved a victory against OpenAI in the Munich I Regional Court: if an AI model stores song lyrics in such a way that users can reproduce them through prompts, this constitutes copyright infringement, according to the court.
 

In his paper, Josef Drexl makes it clear that this legal victory obscures the real problem: reproduction through so-called memorization occurs in relatively few cases. Indeed, by the very logic that generative AI pursues and implements technologically, memorization is supposed to be prevented and completely new content generated. And this is precisely where the core problem lies: AI displaces human-created works from the market without copying them.
 

Substitution Instead of Imitation
 

It is important to distinguish between imitation competition (where someone copies an existing work) and substitution competition (where something new fulfills the same function). This is because copyright protects against imitation competition, while allowing substitution competition.
 

With generative AI, however, it is precisely substitution competition that causes the problem. An AI-generated article may not contain a single sentence taken from existing texts – and yet it still displaces journalists. “Traditional copyright law falls short here structurally,” argues Josef Drexl.
 

Why Human Creativity Remains Worth Protecting
 

The author cites two key arguments: Generative AI is technologically dependent on human creativity. Models that are trained too extensively with AI-generated data will increasingly “hallucinate,” i.e., produce more and more errors and eventually even collapse.
 

Even more important is that our democracy needs human thinking. Journalists, as the so-called fourth estate, not only inform the public, but also uncover abuses through their research, thereby keeping those in power in check. Creative professionals in the cultural industry play a similarly important role. “Generative AI alone cannot do this,” emphasizes Josef Drexl.
 

The Radical Proposal: Participation Without Reference to the Work
 

Drexl outlines three levels of regulation – from optimizations of the existing system to a new “right to AI exploitation” to his preferred model: a “right to fair compensation,” completely detached from the specific use of the work.
 

Josef Drexl proposes that commercial providers and operators of generative AI pay levies that collecting societies distribute to authors – as a non-transferable right. The distribution would take the form of a surcharge on other distributions over the last five years. AI developers should also be able to accept this, because “the development of high-quality generative AI models is centrally dependent on being trained with human-generated products,” according to Drexl.
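The surcharge idea can be illustrated with a minimal sketch, assuming a simple pro-rata rule: an annual AI levy is split among authors in proportion to what the collecting society has already paid them over the reference period. All figures and the allocation rule are hypothetical; the paper itself does not prescribe a formula.

```python
# Hypothetical sketch of the "surcharge" distribution idea: an AI levy
# is allocated pro rata to each author's share of all distributions
# received over the last five years. All numbers are invented.

def distribute_levy(levy_pool: float, past_distributions: dict) -> dict:
    """Split levy_pool in proportion to each author's share of total
    past distributions in the reference period."""
    total = sum(past_distributions.values())
    return {author: round(levy_pool * amount / total, 2)
            for author, amount in past_distributions.items()}

# Author A received half of all past distributions,
# so A receives half of the AI levy as a surcharge.
payouts = distribute_levy(1_000_000.0,
                          {"A": 50_000.0, "B": 30_000.0, "C": 20_000.0})
```

Note the practical appeal the text describes: the rule needs no evidence about which works were used for training – only the distribution history the collecting society already has.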
 

The advantages are obvious: there are neither problems of proof nor lengthy negotiations. Creative professionals whose works have never been used for AI training also deserve compensation, because they too are exposed to competition from AI-generated output.
 

Thinking Beyond Copyright
 

Drexl emphasizes that copyright alone cannot solve the problem. For journalism, he proposes a citizens’ levy, similar to the broadcasting license fee in Germany. Voters would decide annually which press publishers receive the funds, provided that quality standards are met and a minimum share of publishing costs is spent on journalists’ remuneration. This would counteract the incentive to cut costs by replacing journalists with generative AI.
 


Josef Drexl
KI-Nutzungen und Kreative: Umrisse eines gerechten Ausgleichs
Max Planck Institute for Innovation and Competition Discussion Paper No. 28 (2026)

Study  |  01/27/2026

Why Reducing Bureaucracy Fails When It Comes to Signatures: Majority Unaware of Their Digital Rights

A new study warns that efforts to reduce bureaucracy may fail due to a lack of information among the population. The representative survey conducted by the Institute has found that the majority of adults in Germany are unaware of new legal provisions allowing them to dispense with handwritten signatures in everyday transactions. However, some segments of the population are systematically better informed.

Black fountain pen next to three handwritten signatures on white paper reading 'Rose', 'Hoffmann', and 'Harhoff'
Symbol image: AI generated

The study examined two everyday scenarios in which the legislator has abolished the requirement for a handwritten signature (written form requirement). This allows for further digitalization of business processes and cost savings. However, the study documents that the new regulations are hardly known, even though some of the legal changes were made ten years ago. As a result, the German population is missing out on considerable digitalization potential.


Consent to Data Processing


According to the General Data Protection Regulation (GDPR), which has been in force in Germany since 2018, declarations of consent no longer need to be signed by hand. Under the old Federal Data Protection Act (BDSG), this was still necessary, as it required the written form. The GDPR only requires that consent be documented.


The survey asked: “Do you have to sign a data processing consent form for it to be legally binding?” The correct answer is “No”.


However, around 50% of respondents did not give the correct answer: a third said “Yes”, and 19% answered “I don't know”. The analysis shows that certain population groups are systematically less well informed. These include people without a high school diploma, those with low incomes, those with children, those with a migrant background, and women. People in the northeastern federal states (Brandenburg, Mecklenburg-Western Pomerania, and Saxony-Anhalt) are also less well informed than people in the reference region of Baden-Württemberg – even when the effects of high school graduation, income, etc. are taken into account. No statistically significant correlation was found with regard to age.


Lack of knowledge is detrimental to digitalization and innovation. Companies, doctors’ offices, nursery schools, etc. could, for example, refrain from collecting consent forms digitally (e.g., by email). As a result, people unnecessarily cling to analog solutions on paper, which leads to discontent among consumers and extra costs for companies. Against the backdrop of the current debate on reforming data protection law at the EU level, it is clear that public awareness must become a central aspect of the legal policy debate. Without knowledge, simplifications and streamlining of data protection law will not achieve their intended purpose.


Cancellation of Newspaper Subscriptions and Mobile Phone Contracts


Already in 2016, a reform of the German Civil Code (BGB) abolished the written form requirement for a number of contracts. Specifically, this means that clauses requiring the written form are invalid. Instead, it is legally possible to cancel newspaper subscriptions and mobile phone contracts by email, for example, even without a scanned signature.


The survey question was: “Which of the following contracts can be terminated by email?”, asked for two contract types: newspaper subscriptions and mobile phone contracts. The correct answer for both is “Yes”.


Around 30% of respondents answered incorrectly. Regarding newspaper subscriptions, 11% answered “No”, for mobile phone contracts 16.5% answered “No”. In both categories, 16.6% answered “I don't know”. Here, too, certain population groups are systematically less well informed. These include people without a high school diploma, those with low household incomes, people with a migrant background, people who live in cities, and those who do not follow new trends and developments. People under 40 are also less likely to know that mobile phone contracts can be canceled by email. Compared to similar people in the reference region of Baden-Württemberg, people in North Rhine-Westphalia or in the central German states (Saxony and Thuringia) are systematically better informed. Among professionals, people with a legal background in particular were more likely to give incorrect answers.


There is a risk that consumers will not exercise their rights and will incur higher costs when terminating contracts by sending letters. It is also possible that they will refrain from terminating their contracts altogether because they consider the costs of termination to be too high. Thus, lack of knowledge is detrimental to overall economic welfare. Since handwritten notices of termination remain valid, companies must accept them, which also results in higher administrative costs for them.


Need for Action


The authors consider it problematic that legislators generally refrain from launching information campaigns when changes are made to legislation governing the private sector. It is assumed that it is sufficient for one of the two contracting parties to be better informed and for the other to be disciplined by the courts if necessary. However, service providers have no incentive to inform their customers about simplifications, particularly in the case of terminations.


To achieve positive effects, the authors recommend targeted information campaigns, such as those commonly used in the U.S. The fact that the technology itself can also contribute to this is part of the Institute’s ongoing research.


Authors of the study:
Michael E. Rose, Ph.D., Senior Research Fellow
Jörg Hoffmann, Research Fellow
Prof. Dietmar Harhoff, Ph.D., Director at the Institute


To the publication:

Rose, Michael E., Hoffmann, Jörg, Harhoff, Dietmar (2026). Digitalization and Signatures: Evidence from a Survey Among German Citizens, Transforming Government: People, Process and Policy, 1–16.


Open Access:

Rose, Michael E., Hoffmann, Jörg, Harhoff, Dietmar (2026). Digitalization and Signatures: Evidence from a Survey Among German Citizens, Max Planck Institute for Innovation & Competition Research Paper No. 25-05.

Study  |  11/27/2025

Who Uses Artificial Intelligence in Research − And for What?

A new study provides answers based on a survey with more than 6,000 researchers from the Max Planck Society and the Fraunhofer Society.

Dot plot showing familiarity with assistive technology tools by demographic and educational categories, with confidence intervals for each group.
Figure 1: Association of individual characteristics and familiarity with AI tools.
Bar chart comparing respondents' selections for various uses of an AI tool between Van Noorden & Perkel and a present dataset, showing percentages for each use case.
Figure 2: “In your research, what do you use AI tools for? Select all that apply”. Comparison with the study by Van Noorden & Perkel (2023). AI and Science: What 1,600 Researchers Think, Nature 621 (7980), 672–675.
Bar chart shows barriers to the use of AI, with legal uncertainties, lack of knowledge, and data protection concerns as the most common reasons.
Figure 3: Frequency of barriers cited among respondents’ top two reasons for limited AI tool use.

In June 2024, all employees of the Max Planck Society and the Fraunhofer Society were invited to take part in an anonymous survey on their use of AI tools for their work. As researchers and support staff have substantially different task profiles and opportunities to use AI, they are considered separately. The current analysis focuses on the 6,215 complete responses from researchers, which are broadly representative of the two research organizations. The survey addressed AI tools in general rather than generative AI specifically, though the latter may have featured prominently in respondents’ considerations due to their visibility in public discourse.


The key insights into researchers’ use of AI have now been published in Research Policy, an internationally leading peer-reviewed journal focusing on research, technology, and innovation policy and their implications for science, the economy, and society.


Researchers actively use AI tools, with adoption patterns varying with roles and beliefs.


Many researchers in the sample are already using AI tools: 42.4% say they are very or fairly familiar with these tools, while 44.0% say they have used them a few times or more. Nearly a quarter (25.9%) of all researchers use AI tools daily or more frequently. Only about one in five researchers (22.2%) never use AI for work.


Clear patterns emerge regarding who is more familiar with AI (Figure 1). Younger researchers tend to use AI more often than older ones. Those with higher education levels are also more familiar with AI tools. Women report lower familiarity with AI tools than men. 


Respondents who use AI tools tend to be more positive about their potential impact on research quality, skill development, and society in general. While an overwhelming majority of researchers (69.2%) expect AI to transform, or even revolutionize, their field in the next decade, they are more divided in their opinions about the effect of AI tools on society: 40.6% believe AI tools offer more opportunities than risks, while 22.2% think they pose more risks than opportunities.


A gender gap in AI use appears – as also documented in other studies – and is largely explained by differences in familiarity with AI tools.


Other studies have documented that women tend to use AI less than men do. We also document this tendency among scientists for research tasks. Our fine-grained data establish that the gender gap in AI usage for research is not due to ability or negative beliefs about AI, but rather due to familiarity with AI tools. Once women start using AI tools, they find them to be just as helpful as men do.


AI is becoming a co-creator, supporting not only peripheral but core research tasks.


Researchers now use AI at every stage of the research process (Figure 2). The most common applications include testing ideas, writing code, and drafting research papers. Interestingly, researchers use AI tools more for the tasks on which they spend the most time.


Efficiency is a major driver for use, yet many struggle with effective prompting.


Half of the researchers (50.4%) reported using AI to speed up their work. However, our survey suggests that using AI tools effectively requires skill. To proxy prompting ability, we showed respondents a picture of a visual phenomenon and asked them to write a prompt that would lead a Large Language Model (LLM) to identify the phenomenon. We considered a prompt successful if, over ten iterations with an LLM, at least one returned the correct answer or suggested uploading the picture to an LLM. Despite their advanced educational background and awareness of AI tools, only a fifth of researchers (21.0%) managed to create a successful prompt for the test task. Learning to write prompts appears to be a new skill in itself: those who have received training on how to write prompts and use AI are much more likely to produce good prompts.
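The success criterion above can be sketched in a few lines. The `ask_llm` stub below is invented purely to make the retry logic concrete – the study, of course, queried a real model.

```python
# Sketch of the "successful prompt" criterion: success means at least
# one of ten LLM runs returns the expected answer. `ask_llm` is a
# deterministic stand-in stub, not a real model.

def ask_llm(prompt: str, attempt: int) -> str:
    """Stub model: answers correctly only on the eighth attempt,
    simulating an LLM that sometimes gets it right."""
    return "correct phenomenon" if attempt == 7 else "wrong answer"

def prompt_successful(prompt: str, expected: str,
                      iterations: int = 10) -> bool:
    """A prompt counts as successful if at least one of `iterations`
    runs returns the expected answer."""
    return any(ask_llm(prompt, i) == expected
               for i in range(iterations))
```

With ten iterations the stub succeeds on its eighth attempt; with fewer than eight, it never does – mirroring how the criterion rewards prompts that work at least occasionally, not only reliably.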


Institutions can accelerate adoption by addressing key barriers: legal uncertainty, lack of knowledge, and limited access to suitable tools.


Many of the obstacles to more frequent AI use could be mitigated through institutional action (Figure 3). The most frequently mentioned barriers are legal uncertainties (17.6%), lack of knowledge (17.4%), and limited availability of suitable tools (16.6%). Legal uncertainties are especially important for researchers who handle a lot of administrative work, such as personnel and project management.


Researchers also want clearer guidelines. Most respondents prefer high-level guidance to address legal uncertainties. 58.7% expect guidance from supranational bodies, such as the EU, and 51.3% expect guidance from their own research organizations (the Max Planck Society or the Fraunhofer Society).


Conclusion


The rapid development of AI has led to the quick adoption of new tools by the research community, making AI an important subject of research as well as a tool for advancing research. Researchers are increasingly integrating AI into their core activities. While opinions are divided about the long-term impact of AI on breakthrough innovation, skill development, and research equity, there is a general consensus that this technology will transform research practice profoundly. As AI becomes an increasingly important research tool, it is essential to understand who uses it, for which tasks, and what challenges they face. This knowledge is crucial for designing future policies, supporting researchers in adopting the technology effectively and responsibly, and safeguarding scholarly standards.


To the study:


Chugunova, Marina; Harhoff, Dietmar; Hölzle, Katharina; Kaschub, Verena; Malagimani, Sonal; Morgalla, Ulrike; Rose, Robert (2026). Who Uses AI in Research, and for What? Large-Scale Survey Evidence from Germany, Research Policy, 55 (2), 105381. DOI

Study  |  11/21/2025

How Data and Data Governance Can Promote Climate Protection in Brazil

Whether in agriculture or urban life, reliable information on emissions, land use, and the flow of resources can make the difference between effective climate policy and hollow promises. A project by the Institute addresses this issue and examines how data governance can be used to achieve the United Nations’ climate goals in Brazil – as detailed in the recently published report.

Aerial view of a winding, turquoise river surrounded by dense, green tropical forest in Brazil.
Photo: Jose Sabino/Pixabay

Brazil Between the Amazon and the Metropolis


Brazil is a key climate region in many respects: the Amazon stores enormous amounts of CO2, while urban centers such as São Paulo release huge amounts of greenhouse gases. The report shows that both realities are closely linked – through supply chains, consumption patterns, and energy supply. Access to data also matters internationally, for product exports and market entry. For example, coffee farmers from Brazil could enter the EU market if they can prove the origin and sustainability of their products; access to climate data and product traceability technologies could help them meet international standards. The challenge is to bring together the different interests of industry, politics, and civil society to enable progress.


From Workshop to Method


In December 2022, the Institute brought together stakeholders from industry, technology, government, and NGOs at the Data Sharing & Climate Action in Brazil workshop in São Paulo. The goal was to identify specific situations in which data sharing can advance climate protection and thereby support UN Sustainable Development Goal 13 on climate action. Initiatives and business models with experience in using data, AI, and digital technologies were presented.


After the workshop, the research team conducted a legal and political analysis and reached a clear conclusion: Sharing data is not enough; strong governance of climate data is needed. Without it, problems such as misinterpretation, misuse, and conflicts over data ownership and control can arise, which could compromise efforts to combat climate change.


The report presents a method for identifying climate-related contexts and determining what type of data governance is appropriate to better leverage the potential of data for climate protection.
 

Legal Framework: Many Rules, Little Connection


The current legal framework does not link environmental and climate policy with data regulation. In particular, Brazil currently lacks a legal definition of climate data that would help balance competing interests when data sharing for climate action conflicts with other values such as freedom of economic activity or the protection of personal data. Establishing such a definition is a fundamental step toward coherent climate data governance.


With regard to parties interested in climate data, the report emphasizes that data governance for climate protection measures can be strengthened or restricted depending on the balance of other interests.


Without data governance, many technical possibilities remain untapped – and a lack of legal coherence blocks their full potential.


Examples of Data in Application


The report identifies four climate areas in which access to and exchange of data are crucial. It examines the respective legal implications and discusses which elements of data governance could provide a solution.
 

  • Sustainable land use: Precision agriculture companies offer data solutions that enable farmers to monitor and predict the environmental impact of their farming, including carbon footprint, energy consumption, waste, and water consumption.
  • Traceability: Blockchain and AI are essential for tracking products throughout the supply chain. They enable the collection of real-time data on production processes, transport, and distribution, and guarantee the authenticity and integrity of this data.
  • Combating illegal activities: Satellite data enables the government to better track where illegal deforestation, mining, and land grabbing are taking place.
  • Smart Cities: Data from smart meters could help the government in São Paulo and researchers obtain information about users’ energy consumption and better plan the energy transition.


Prospects for the Future


To harness the potential of data to combat climate change in Brazil, the authors suggest carefully weighing up the various interests involved and promoting partnerships and data platforms for climate data. On these platforms, all stakeholders can share data under fair conditions and benefit from this. If data is available under the right conditions and with clear rules, this can lead to more initiatives for adaptation and mitigation measures. From a legal perspective, the Brazilian legal framework could benefit from a clear definition of climate data. This would provide a basis and an incentive to share data with legal certainty.



Vicente Bagnoli, Carolina Weber (née Banda), Germán Oscar Johannsen, Christiane Bedini Santorsula, Maria Beatriz Monteiro, Juliana Abrusio
Data Governance in Emerging Economies to Achieve the Sustainable Development Goals Brazil Country Report Based on the Workshop ‘Data Governance for Climate Action in Brazil’ (São Paulo, 15-16 December 2022)
Max Planck Institute for Innovation & Competition Research Paper No. 25-22

Study  |  11/14/2025

Majority of Germans Unaware of New Digital Consumer Rights

A new study warns that consumer rights reforms may fail due to information gaps. The representative survey conducted by the Institute has found that more than half of adults in Germany are not aware of the statutory consumer rights that enable them to benefit from innovative services and IT security. Certain segments of the population, however, are systematically better informed.

An illustration depicts a person with gray hair wearing an orange long-sleeve shirt sitting in front of a laptop. The person holds their head with one hand, suggesting confusion or contemplation. Above the head is a speech bubble containing a question mark. The laptop screen shows an exclamation mark inside a triangle. In the background, a sheet of paper displays a paragraph symbol and several lines representing text.
Symbol image: AI generated.

The study examined two everyday scenarios whose legal framework has been reformed in recent years. In both cases, the European legislator aims to strengthen consumer protection while actively promoting innovation and, ultimately, enhancing competition. As business processes become increasingly digital, consumers face new challenges – now addressed through specific legal rights. However, the study shows that these reforms remain widely unknown. As a result, the intended effects of the legislation may fail to materialize.


Bank Account Switching Service


Under the Second Payment Services Directive (PSD2), implemented in German law in 2018, banks are required to make account data accessible via interfaces to other payment and information service providers. This is intended, for example, to facilitate switching bank accounts. Overall, the goal is to foster innovation in the financial sector and strengthen competition.


The survey asked: “Suppose you want to switch your bank account: Does your current bank have to share your account data with the new bank upon request?” The correct answer is “Yes”.


However, around 50% of respondents answered incorrectly. The analysis shows that certain population groups are systematically better informed. These include people interested in new trends, higher-income individuals, rural residents, and men. Marital status, migration background, age, and education level were also examined, but these four factors did not significantly influence responses.


Lack of knowledge harms both competition and innovation. For example, account holders might perceive switching costs as too high and therefore refrain from changing banks or using new services. Innovative new companies, anticipating this low willingness to switch, may stay out of the market. In this way, lack of awareness perpetuates barriers to market entry.


Security Updates for Laptops


Under the Digital Content Directive, implemented in German law in 2022, sellers of electronic devices, including laptops, are required to provide security updates. Otherwise, the device is considered defective, and buyers can assert statutory warranty claims.


The survey question was: “Suppose you bought a new laptop last year: Is the seller required to provide you with current security updates?” The correct answer is “Yes”.


Around 73% of respondents answered incorrectly. Again, certain groups are systematically better informed. These include people interested in new trends, individuals under 40, Germans with a migration background, foreign nationals, men, and parents of children. Household income, place of residence, and education level were also examined but did not significantly affect responses.


The risk is that consumers may fail to assert their rights, potentially incurring higher costs or using defective devices. Lack of awareness therefore negatively affects overall economic welfare, as the right to security updates correlates with increased demand and stronger incentives for innovation.


Need for Action


Although it is already known that consumers are often unaware of their rights, this study confirms the urgent need for action in these cases. To achieve positive effects, the authors recommend complementing existing collective enforcement mechanisms with targeted information campaigns that inform citizens of their rights. The study also suggests that technology itself could contribute to raising awareness, which is the subject of further research at the Institute.


Authors of the study:
Michael E. Rose, Ph.D., Senior Research Fellow
Jörg Hoffmann, Senior Research Fellow
Prof. Dietmar Harhoff, Ph.D., Director at the Institute


To the publication:

Rose, Michael E., Hoffmann, Jörg, Harhoff, Dietmar (2025). Digital Consumer Law, Competition and the (Un-)Informed Consumer: Evidence from a Survey among German Consumers, Journal of European Consumer and Market Law, 14 (4), 170–177.

Colorful cloud featuring David and Goliath.
Miscellaneous  |  10/09/2025

The Dilemma of AI Alliances: When Partnerships May Chill Innovation Competition

In their latest paper, Josef Drexl and Daria Kim examine the competition law challenges posed by strategic partnerships between big tech companies and smaller AI developers. These alliances promise efficiency and progress, but also pose significant competition risks – especially for innovation competition.

Colorful cloud featuring David and Goliath.
Big tech companies and startups: What pitfalls do these unequal partnerships hold? (Image: Adobe Stock)

Today's most impressive achievements in AI innovation are the result of massive resources. Large technology companies, known as big tech, have spent years securing significant advantages in data availability, computing power, and cloud infrastructure. At the same time, AI start-ups are striving to implement innovative concepts and approaches and to develop groundbreaking AI models. A pattern of dependency is recognizable – can its pitfalls be avoided?


The Control Issue

Even when competition authorities gain access to AI agreements, their terms remain largely unknown to the general public. What is widely known is that big tech companies often provide critical resources such as computing power, cloud infrastructure, data or financial resources. There is a risk that, in return, they will impose conditions that restrict both the free choice of licensing and, more generally, the innovation models of AI developers. Of particular importance is how the AI models resulting from these strategic partnerships are distributed and made accessible to both subsequent innovators and the general public.


Open-source licensing of AI models has been the subject of heated debate for some time. Some see it as an ideal means of promoting innovation and competition. Others have criticized it as a diversionary tactic by companies to strengthen their own position within the AI ecosystem. However, “openness” of model licenses cannot automatically and universally be equated with more innovation. On the one hand, openness can vary depending on the degree and type of AI components made accessible. On the other hand, the openness of AI models can have different, partly contradictory implications for innovation and therefore does not allow for an unambiguous normative evaluation; in some cases, control over certain resources may be justified as a legitimate competitive advantage.


Innovation Competition as a Discovery Procedure

Traditional competition law approaches reach their limits here. On the one hand, it is often unclear which theory of harm—if any—can capture the competition concerns; on the other hand, it is often uncertain what impact certain competitive strategies actually have on competition and innovation in this dynamic environment. The goal is not only to protect competition against restraints, including those effected through the use of AI, but also to create conditions under which companies can freely and creatively pursue new avenues of AI innovation.


Recent cases, such as the partnerships between Microsoft and OpenAI and between Microsoft and Mistral AI, show that traditional competition law instruments are not sufficient to address the specific risks of these digital alliances. What is needed, therefore, is a distinct analytical approach that addresses specific concerns about dependencies between big tech and AI developers, particularly in the context of innovation competition. A promising framework is to base the competition law analysis on the concept of innovation competition as a discovery procedure. The key is to preserve the freedom of AI developers to choose their own licensing models and pursue independent innovation strategies without undue restrictions imposed under cooperation agreements.


In addition to applying innovation competition as a discovery procedure as the guiding concept for competition law enforcement, it is also worth considering a reform of the Digital Markets Act or even the introduction of a new competition law instrument to promote freedom of choice and access in digital markets in this context.


Access the paper on SSRN:
Josef Drexl, Daria Kim
AI Innovation Competition as a Discovery Procedure: The Role and Limits of Competition Law

Miscellaneous  |  09/30/2025

The Internet in Transition: How Will the Digital Future Be Shaped?

The internet is undergoing fundamental change. The driving factors are rapid advances in Artificial Intelligence and a series of new regulations such as the Data Act, Digital Markets Act, and Digital Services Act. These developments impact the core of our digital society and raise questions.

Prof. Dr. Josef Drexl and Germán Oscar Johannsen expressed their position in video statements during a humanet3 workshop.

Who determines the rules on the internet? Will algorithms soon steer the debate? And what role does Big Tech play, those companies that invisibly engineer our digital spaces?


What the humanet3 Research Group Is Investigating


To get to the bottom of these questions, three Max Planck Institutes have joined forces and launched the humanet3 Research Group. Researchers from the Max Planck Institute for Innovation and Competition, the Max Planck Institute for Human Development, and the Max Planck Institute for Comparative Public Law and International Law — more specifically, a group of legal scholars and experts in computational social science — are pursuing an ambitious goal: to analyze, deconstruct, and rethink the “human-centered digital transformation” of digital public spaces. In the newly published Research Agenda of humanet3, the group describes the approaches and methods of its work.


Specific projects will shed light on how technology, law, and society influence each other:
 

  • Humans at the center of AI: What does it mean when we talk about “human-centered AI”? How can this ideal be implemented technically and legally?
  • Humans in global law: How is “the human” constructed in global law, and what consequences does this have for our actions on the internet?
  • Regulation by the EU: To what extent does European regulation restrict our behavior on social media platforms? What power do these platforms themselves possess to influence our behavior?
  • Power for civil society: Could a new type of regulation that strengthens not only the state but also groups within civil society help us reclaim power from Big Tech?


It Is in Everyone's Hands To Shape the Digital Future


The work of humanet3 aims to show that shaping the digital future cannot be left solely to tech companies or regulatory authorities. It is a task for society as a whole. The aim is to preserve the internet as a place of free expression while creating mechanisms to protect it from the challenges of AI and the concentration of power.


The central question that ultimately stands above all else is: How can we ensure that the internet remains a place where people are at the center and do not end up as mere data sources or algorithm fodder? The research conducted by humanet3 provides important impetus for this debate. It reminds us that we must be the creative forces of the digital future and not mere passengers on the journey.



Rethinking Digital Public Spaces for Democracy

Statement by Josef Drexl (YouTube video)

Statement by Germán Oscar Johannsen (YouTube video)


The Research Agenda on SSRN:
Erik Tuchtfeld, Germán Oscar Johannsen, Anna Sophia Tiedeke, Chaewon Yun
humanet3: The Third Attempt at a Human-Centered Internet – A Research Agenda
Max Planck Institute for Innovation & Competition Research Paper No. 25-21

Silhouette of a person in a suit in front of a schematic representation of multiple people and the text 'SEXUAL MISCONDUCT IN ACADEMIA'
Study  |  09/15/2025

Does the Scientific Community Sanction Sexual Misconduct?

Science strives to produce reliable knowledge, advance our understanding of the world, and ultimately drive progress. This pursuit depends not only on individual excellence but also on collaboration, exchange, and support within the scientific community. While publishing flawed or fraudulent research often leads to reputational penalties for its authors, it remains unclear whether misconduct unrelated to research integrity – but harmful to the community – prompts a similar response. A new study now provides important findings for addressing sexual misconduct and strengthening scientific and social norms in science.

Silhouette of a person in a suit in front of a schematic representation of multiple people and the text 'SEXUAL MISCONDUCT IN ACADEMIA'
Symbolic image: AI-generated.

The study by Rainer Widmann, Michael E. Rose, and Marina Chugunova, now published in “The Review of Economics and Statistics”, examines whether the scientific community sanctions not only “bad science” but also “bad citizenship”. The authors focus on sexual misconduct, which is as prevalent in academia as in other fields. The study is the first to provide systematic, causal evidence on the consequences of sexual misconduct for perpetrators.


Data and Approach


The researchers constructed a dataset of 210 scientists at research-intensive universities in the United States, across all disciplines, against whom allegations of sexual misconduct were made public between 1998 and 2019. In their analysis, they track citations to articles by alleged perpetrators published prior to the allegations and compare them to the citations received by other articles from the same journal issue. To examine the consequences of the allegations for the accused, the accused were matched to a set of observationally similar scientists.


The Results of the Study


The authors find that the scientific community cites the prior work of alleged perpetrators less after allegations of sexual misconduct surface. Co-authorship networks play a role in spreading information about the misconduct and mediate the response of other researchers: researchers who are very close to the perpetrator in the co-authorship network (e.g., former co-authors) react most strongly and reduce their citations the most. The effect is particularly strong among close male peers. It is muted in more male-dominated fields, suggesting that field culture shapes responses to misconduct.


When the results of the new study are compared to previously documented citation penalties for scientific misconduct, the magnitudes appear similar. Finally, the authors document that alleged perpetrators face palpable career consequences: they publish and collaborate less following the allegations and are more likely to quit academic research altogether.


Conclusions and Societal Impact


The findings show that the scientific community responds to sexual misconduct even though such misconduct does not cast doubt on the validity of scientific findings of the accused. The study thus provides important impetus for the discussion on how to address misconduct and strengthen professional norms in science. The results are particularly important given the increasingly collaborative and social nature of modern research. The study offers evidence relevant for professional organizations seeking to strengthen scientific and social norms.


To the publication:


Widmann, Rainer, Rose, Michael E., Chugunova, Marina (2025). Sexual Misconduct, Accused Scientists, and Their Research, The Review of Economics and Statistics, 1–29.


Updated version of the news from 26 January 2023.

The image shows the cover page of a UNESCO report. A woman with long brown hair is wearing a black T-shirt with a white graphic pattern and a blue scarf. She is walking through rubble. She is holding an electronic device with cables in each hand. A brick wall can be seen in the background. The UNESCO logo in black and white is located at the top left. The text below the person reads: ‘Resilient Minds The unseen struggles of scientists in wartime Ukraine’.
Study  |  07/31/2025

Resilient Minds: The Unseen Struggles of Scientists in Wartime Ukraine

A UNESCO report published in the summer of 2025 highlights the often invisible burdens faced by researchers during the war in Ukraine. Anastasiia Lutsenko was the lead author of the study. The report is based on an analysis of open data, statistics, scientific publications, legislation, and the results of a series of group discussions with Ukrainian researchers. It contains eight recommendations on how to improve the situation of Ukrainian researchers in practice.

The image shows the cover page of a UNESCO report. A woman with long brown hair is wearing a black T-shirt with a white graphic pattern and a blue scarf. She is walking through rubble. She is holding an electronic device with cables in each hand. A brick wall can be seen in the background. The UNESCO logo in black and white is located at the top left. The text below the person reads: ‘Resilient Minds The unseen struggles of scientists in wartime Ukraine’.
Cover of the UNESCO report.

Research in Ukraine is facing one of the greatest challenges in its history. Since 2014, and especially since the start of Russia’s war of aggression in February 2022, research institutions have been destroyed, researchers have been forced to flee their homes, and research projects have been interrupted. The new study provides comprehensive documentation of this crisis.


The UNESCO Report, which was presented at the Ukraine Recovery Conference 2025 in Rome, is based on an analysis of open data, statistics, scientific publications, and legislation, as well as the results of a series of group discussions with Ukrainian researchers conducted between December 2023 and January 2024.


The report reveals alarming yet unsurprising figures: 54.3% of researchers are no longer able to carry out their work at the same level as before, 29.4% of research facilities have suffered physical damage, and over 80% of respondents stated that their economic situation has deteriorated. Between 10 and 20% of researchers have left their home country.


The lead author, Anastasiia Lutsenko, is a doctoral candidate at the Institute whose research focuses on innovation systems, regional resilience, and the role of research in crisis regions. She contributed significantly to the preparation of the report, particularly in the areas of data analysis, political framework conditions, and the realities of life for researchers in times of war.


“The study shows that the crisis of Ukrainian science is not just a question of buildings and equipment – it is above all a question of people,” says Lutsenko. “Researchers are suffering from fear, uncertainty, loss of research content, and the destruction of their careers. At the same time, they are demonstrating incredible resilience. It is our duty to offer them not only moral support, but also practical assistance – through research, networking, and political pressure.”


Key Findings of the Report


  • Structural deficiencies existed even before the war: Ukrainian science had been suffering for decades from insufficient funding (2022: only 0.33% of GDP), low wages, and a brain drain.
  • Gender-specific burdens: Women with children are particularly affected – they are disproportionately likely to flee abroad with their children and lose their jobs. Men of working age are restricted by martial law.
  • Research interruptions: Many researchers have lost their dissertations, libraries, or research data – often in a single day.
  • Funding abroad: Most international funding goes to research abroad – not to Ukraine.

The report proposes specific measures: among other things, Ukraine should increase public funding for research, European institutions could enable Ukrainian scientists to conduct research remotely, and Ukraine should improve opportunities for researchers to reintegrate after extended stays abroad.


Support for Ukrainian Researchers at the Institute


Since the start of Russia’s war of aggression against Ukraine, the Institute has supported eleven Ukrainian researchers by awarding them scholarships or offering them employment. In addition, employees of the Institute are involved in the #ScienceForUkraine initiative, which aims to bring Ukrainian researchers together with suitable funding opportunities.


To the report:

UNESCO (2025). Resilient Minds: The Unseen Struggles of Scientists in Wartime Ukraine.
https://doi.org/10.54678/ICVP5702