The most impressive achievements in AI innovation today are the result of massive resources. Large technology companies ("big tech") have spent years securing significant advantages in data availability, computing power and cloud infrastructure. At the same time, AI start-ups are striving to implement innovative concepts and to develop groundbreaking AI models. A pattern of dependency is clearly recognizable – can its pitfalls be avoided?
The Control Issue
Even when competition authorities gain access to AI agreements, their terms remain largely unknown to the general public. What is widely known is that big tech companies often provide critical resources such as computing power, cloud infrastructure, data or financial resources. There is a risk that, in return, they will impose conditions that restrict both the free choice of licensing and, more generally, the innovation models of AI developers. Of particular importance is how the AI models resulting from these strategic partnerships are distributed and made accessible to both subsequent innovators and the general public.
Open-source licensing of AI models has been the subject of heated debate for some time. Some see it as an ideal means of promoting innovation and competition. Others have criticized it as a diversionary tactic by companies to strengthen their own position within the AI ecosystem. However, “openness” of model licenses cannot automatically and universally be equated with more innovation. On the one hand, openness can vary depending on the degree and type of AI components made accessible. On the other hand, the openness of AI models can have different, partly contradictory implications for innovation and therefore does not allow for an unambiguous normative evaluation; in some cases, control over certain resources may be justified as a legitimate competitive advantage.
Innovation Competition as a Discovery Process
Traditional competition law approaches reach their limits here. On the one hand, it is often unclear which theory of harm, if any, can capture the competition concerns at stake; on the other hand, it is often uncertain what impact particular competitive strategies actually have on competition and innovation in this dynamic environment. The goal is not only to protect competition against restraints, including those implemented through the use of AI, but also to create conditions under which companies can freely and creatively pursue new avenues of AI innovation.
Recent cases, such as the partnerships between Microsoft and OpenAI and between Microsoft and Mistral AI, show that traditional competition law instruments are not sufficient to address the specific risks of these digital alliances. What is needed, therefore, is a distinct analytical approach that addresses specific concerns about dependencies between big tech and AI developers, particularly in the context of innovation competition. A promising framework is to base the competition law analysis on the concept of innovation competition as a discovery procedure. The key is to preserve the freedom of AI developers to choose their own licensing models and to pursue independent innovation strategies without undue restrictions imposed under cooperation agreements.
In addition to applying innovation competition as a discovery procedure as the guiding concept for competition law enforcement, it is also worth considering a reform of the Digital Markets Act or even the introduction of a new competition law instrument to promote freedom of choice and access in digital markets in this context.
Access the paper on SSRN:
Josef Drexl, Daria Kim
AI Innovation Competition as a Discovery Procedure: The Role and Limits of Competition Law