The AI wave broke in 2023. ChatGPT, the general-purpose artificial intelligence developed by the American company OpenAI, gained global recognition. At the inaugural AI Safety Summit, held in the UK, political and business leaders from across the globe, caught off guard by the exponential pace of technological acceleration, gathered to discuss emerging threats and to develop global governance of artificial intelligence (AI). As with most of humanity's greatest groundbreaking discoveries, the powers of this technology seem both terrifying and enticing. In the current environment of ambiguous global governance, creating assessment instruments to understand technological advances and their associated hazards seems essential.

The Need for Regulation

France does not yet have specific legislation governing artificial intelligence (AI) or the AI sector. Nonetheless, the French government and regulatory bodies have been actively involved in EU-level negotiations over AI regulation, particularly the legislative process for the EU Artificial Intelligence Act (EU AI Act).

The current French government treats “digital sovereignty” as a matter of national policy and works to promote the growth of a domestic AI sector, particularly through start-ups that create and deploy cutting-edge AI technologies. Although no specific legislation applies to AI, Law No. 2016-1321 of October 7, 2016 for a Digital Republic aims, among other things, to promote innovation and the growth of the digital economy, and it tasks the French Data Protection Authority (CNIL) with investigating the ethical and societal questions raised by the development of these technologies.

As part of this mandate, the CNIL has been actively involved since 2017 in identifying the ethical and legal concerns raised by artificial intelligence, supervising cases involving its use, providing tools and resources to improve public understanding of AI, and managing the associated risks.

Consequently, the CNIL has led policy and legislative discussions around AI in France and throughout Europe. In keeping with its mandate as the country’s data protection authority, the CNIL takes a stance strongly centered on fundamental rights, especially the protection of personal data and privacy. In January 2023, the CNIL established a specialized AI Department (AID) of five agents, including engineers and legal specialists.

Oversight and Compliance

France is regrettably lagging behind in this debate, even as other leading AI nations have taken the lead by announcing the establishment of their own AI Safety Institutes and Europe prepares to impose legally binding rules on the development and use of AI.

The second major edition of the AI World Summit, to be held in Paris soon, offers a chance to bring the topic to everyone’s attention, to confront us with our obligations, and to demonstrate the seriousness of our thinking. This includes discussing the potential, risks, and assets of the fledgling French ecosystem, as well as the governance tools that are crucial for its secure management.

The Institut Montaigne therefore advises the government to work toward the establishment of an AI Authority, modeled on the independent national authorities characteristic of France. This Authority’s primary goal would be to closely assess the technology’s hazards and performance as it develops. Mindful of the steps and legal constraints involved in creating a reference authority of this kind, the Institut Montaigne brings together the relevant parties and proposes a workable, feasible, and ethical path.

Guidance and Education

The use of AI technology will significantly affect the labor market. According to France’s Employment Orientation Council, almost half of all jobs could be automated in the medium to long term. The French AI plan addresses this problem by focusing on improving knowledge of future labor demand and skill requirements in order to prepare effectively for career transitions.

The national AI plan places strong emphasis on applied research and innovation to achieve its aims. The French National Research Institute for the Digital Sciences (Inria) has been tasked with overseeing the research portion of the national AI strategy. Its goals are to strengthen the French AI sector as a whole, accelerate technology spinoffs and transfers, and create industry-specific cooperation initiatives. Among other things, the institute will oversee the strategy’s execution, offer scientific and technological expertise, and develop bilateral collaboration initiatives, particularly with Germany.

Conclusion

In conclusion, the AI strategy will continue to offer financial incentives to research and higher-education institutions to expand initial training at all levels, from intermediate to expert, including dual programs and the retraining or upskilling of talent, in order to close the labor-market skills gap in AI, data science, and robotics. Programs are also in place to broaden the diversity of those working in computer science and AI.