Regulation does not necessarily stifle innovation, as many tech firms claim, but can in fact encourage more sustainable and ethical innovation, says Chirantan Chatterjee of the University of Sussex Business School
As we arrive in the last quarter of 2023, there is a general impression globally that while the AI genie is out of the bottle, it needs to be regulated, otherwise there may be unanticipated consequences. It is an impression that has been accentuated by the emergence of generative AI. Not only might the genie displace jobs; with a slew of important elections on the horizon, it might also cause mayhem, contaminating the information ecosystem with misinformation and deepfakes.
Against this backdrop, Spain has just created the first European AI Supervision Agency, UK Prime Minister Rishi Sunak has issued a clarion call for setting standards in AI safety with his proposed November 2023 AI summit, and Microsoft Vice Chairman and President Brad Smith espoused responsible AI governance on a recent visit to India.
Meanwhile, the anti-regulationists are unsettled by the idea, citing research showing that regulations like GDPR may have adversely changed the nature and direction of inventive activity in European Union countries. They also point to other channels through which regulation may harm innovation: France’s labour laws, for example, may have affected inventive activity in firms below a certain size threshold, depending on whether those firms were engaged in incremental or radical innovation. My own research in the US bio-pharmaceutical industry likewise shows that regulation can generate short-term benefits to consumer welfare while, in the long term, weakening incentives for innovation, harming societies as well.
The debate on regulation and innovation is not new; it has a long history going back, for example, to the building of the UK and US railroads and to other episodes of technological diffusion. We have also seen long-running conversations on how to regulate in light of the implications for environmental innovation. While many have assumed strong positions on the matter, others, like Harvard professor Michael Porter, have been more nuanced, arguing that there may be an inverted-U in the relationship between regulation and innovation: in the early days of innovation in an industry, regulation may indeed be detrimental, but in later years it may actually induce more responsible and sustainable innovation, so that regulation ultimately has a positive impact.
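For readers who like a stylised picture, one illustrative way to formalise such an inverted-U (my own sketch, not Porter’s formulation; the parameters $\alpha$ and $\beta$ are purely hypothetical) is to write innovation $I$ as a quadratic function of regulatory stringency $r$:

$$ I(r) = \alpha r - \beta r^{2}, \qquad \alpha, \beta > 0, $$

which rises to a peak at $r^{*} = \alpha / (2\beta)$ and declines thereafter: moderate, well-designed regulation induces innovation, while excessive regulation stifles it.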
Posited as ‘Porter’s Hypothesis’ in his seminal 1991 paper, the idea that regulation can be helpful for innovation is highly relevant to the conversations happening around the world today concerning AI regulation. There are, in particular, sectoral implications: generative AI may have profound consequences in areas such as healthcare and education. Regulation, whether externally mandated or co-devised between policymakers and firms, could ensure that “safe” innovation creates better health and educational outcomes as these sectors adopt AI.
Our own research, published in 2016, also shows that in certain contexts, such as dyes and chemicals, regulation can causally induce positive upstream innovation. Translated to the AI context, this may mean that regulation could shape how “safe” upstream innovation expands in the value chains of firms doing more basic R&D in AI.
A related and noticeable aspect is the country-by-country variation in the regulation-innovation conversation. In the US, for example, there is a hybrid, centralised-decentralised approach, whereby individual states adopt AI legislation in conjunction with federal regulatory approaches by the Biden administration. In China, meanwhile, there seems to be a more centralised approach driven by Beijing on the regulatory front in AI. The same appears true of the European Union, while others, such as India, still seem to be playing catch-up on their regulatory position where AI is concerned.
Overall, this suggests that while there seems to be global recognition of the need for AI regulation, it is still unclear at what margins it should be implemented and, more importantly, how it can be harmonised cross-nationally and sub-nationally.
Some 11 months after ChatGPT was released, the world still seems to be struggling with how to design regulation for AI, and for generative AI in particular. There are also emerging conversations on the spill-over effects of generative AI on global intellectual property laws, especially for the creative industries.
But countries around the world are gradually getting to grips with it all, some more centrally, others by adopting a more decentralised, innovation-oriented approach. Firms big and small seem to be waking up to the adverse consequences of unregulated AI, with chief executives making statements about being responsible in their in-house AI inventive activity.
Given all this, it is probably time for policymakers to appreciate the dynamics of the relationship between regulation and innovation, going back to Porter’s Hypothesis – the idea that regulation can be helpful for innovation. The time may also be ripe to set up evaluation bodies, as exist in bio-pharmaceutical regulation, where they monitor adverse events post-launch and apply labels and black-box warnings to medical products and drugs. Such evaluation bodies could then revisit decisions if over-regulation or lax regulation produces adverse consequences, thereby ensuring that AI is regulated for the good of all.
About the Author
Chirantan Chatterjee is Professor of Economics of Innovation and Global Health at SPRU – the Science Policy Research Unit, University of Sussex Business School. His work can be found at www.chirantanchatterjee.com