29 November 2018 | doi: 10.5281/zenodo.1845399

Global AI race: States aiming for the top

There is hardly a buzzword today that fires the imagination of the tech world like “artificial intelligence” (AI). But it is not only giants like Google, Facebook, Baidu or Alibaba that are trying to outbid each other with new applications – states around the globe have also proclaimed their intention to join a global AI race. Germany was rather late to the game. This article compares the strategies of France, the US and China and identifies their distinct approaches. What they all have in common: they count on AI as a driving force and problem solver no matter the field – environmental hazards, logistics, mobility, health or the brute force of arms.

Facing progress in machine learning through deep neural networks over the past five years and surging business interest, governments around the world have been releasing strategy papers since 2016 to announce their policies and visions for how to use, regulate and foster AI technologies. Often, these papers are packed with grandiose ambitions and rather vague specifications, offering little insight into how these should be achieved. Yet they reinforce and shape dominant socio-technical imaginaries and metaphors that guide future policy, tech development and sense-making. At the same time, they regularly provide vast amounts of funding to pursue these avenues. Whereas many of the national strategies articulate common ambitions of becoming top research hubs and leading economic competitors, they also fundamentally differ in focus, approach and values, even touching upon well-known national narratives. This article focuses in particular on the distinct strategic approaches of France, the US and China as key actors in this development.

 

France – Envisioning an AI for humanity?

“AI for humanity” is the title of the French AI strategy website, all letters coloured in the French tricolore – the French government uses a bold imaginary to underline its philanthropically framed AI aspirations. On 8 September 2017, Prime Minister Édouard Philippe charged the mathematician and member of parliament Cédric Villani with crafting the French national strategy report. The report, titled “For a Meaningful Artificial Intelligence”, was published at the beginning of March 2018. Led by Villani, it was written by officials and scientists from the “Conseil national du Numérique” (NUM), an independent digital governmental advisory body established in 2011 by presidential decree, and from the “Institut national de recherche dédié aux sciences du numérique” (INRIA), a €231 million strong computer science research hub that has supported the launch of 160 French tech start-ups. The report was accompanied by a hearing of 400 experts from a variety of scientific fields and a simultaneously published report, “Intelligence Artificielle et Travail” (only in French), by NUM and the ministry of labour. Further, some governmental efforts were undertaken to raise public legitimacy, including a public consultation by “Parlement & Citoyens”, in which 1,639 people participated, and a national survey of about 3,000 individuals.

Philanthropy meets competitive advantage

On the basis of this report, Macron personally announced the French AI strategy on 29 March at the Collège de France, stressing that AI politics is to be a presidential and distinctly French étatist affair. The presentation reflected the universalist and cosmopolitan orientation of the French AI strategy. In grand style, Macron portrayed humanity as standing at a turning point; to him, AI contains a “Promethean promise” that should not become a “dystopia”. The four “promising” AI sectors the Villani report focuses on are health, transport, environmental politics and defence & security. These sectors are seen as crucial because many proposed AI applications, such as early detection of pathologies, zero-emission urban mobility, smart agriculture or defence against cyberattacks, are expected to attract interest and involvement from public and private stakeholders alike.

Further, such fields will enable the French government to take the lead and shine, as “they require strong public leadership to trigger the transformations.” Besides such philanthropic narratives, which portray AI as a public good, the report claims that within these sectors France can draw on its “economy’s comparative advantages and areas of excellence, focusing on priority sectors where our industries can play key roles at the global level.” One should note that in his speech, Macron adopted the report’s leadership and public-good narrative but mentioned only two of the four proposed sectors: health, focusing on disease detection and prevention, and transportation, announcing a regulatory framework for autonomous driving by 2022.

The French state is showing the way

Besides shaping these strategic sectors, Macron clearly sees the French state as claiming a managing and intervening role in research, data politics and ethical guidance. He boldly declares: “you can count on me so we can lead to the real renaissance that France and Europe needs”. In order to “boost the potential of French research”, Macron announced his intention to strengthen public research institutes both in standing and in funding (in addition to notable public-private research partnerships such as PRAIRIE) and stated his aim of creating a national research coordination hub under the guidance of INRIA, including a network of four or five institutes across France. In total, Macron plans to spend €1.5 billion on AI during his current presidency, with the largest part going to research and industrial projects. Further, the government wants to double the number of students trained in AI in France within the next three years and to allow publicly funded scientists to spend 50% of their research time in private companies instead of the current 20%.

Concerning the challenges for the labour market, the Villani report envisions creating a public laboratory on the transformation of work to evaluate how AI will affect the working sphere. In this context, the report raises the prospect of piloting state funding for vocational training that helps employees master the digital transformation. In his speech, though, Macron did not pick up on such proposals. Instead, he emphasised data politics, portraying data availability as a key competitive advantage in the global AI race, since access to huge data sets is necessary to train machine learning applications.

Thus, Macron sees the government in the role of encouraging economic players as well as the public sector to pool their data via shared platforms, with the state acting as a trusted third party. In his speech, the French president specifically demanded that datasets in the mobility, health and agricultural sectors be made accessible, given their potential to “serve for the collective good”. At the same time, Macron emphasised that data protection, personal confidentiality and algorithmic transparency should be of pivotal importance, so as not to risk public distrust or social inequality, echoing the inclusivity, fairness and transparency principles of the Villani report. Data portability is meant to give any individual the ability to migrate from one service ecosystem to another without losing their data history. Again, Macron did not offer more specific details on this proposal and only announced vaguely that citizens should have “the mastery over their own data.”

All in all, the discrepancy between the Villani report’s vast policy proposals and the few concrete governmental commitments announced during Macron’s speech (despite its ostentatious presentation) is striking. It remains to be seen whether the French government will adopt further proposals from the report during its remaining three-year term.

The US – no regulation is the best regulation

At the end of its term, the Obama administration published three strategic reports and was hence one of the earliest governments to position itself on how to shape the present and future of AI. The reports Preparing for the Future of AI (Oct. 2016), National Artificial Intelligence Research and Development Strategic Plan (Oct. 2016) and Artificial Intelligence, Automation, and the Economy (Dec. 2016) are rather extensive and thoroughly written papers, supported by standpoints from the scientific community and by empirical data. Broadly, they lay out a general framework covering subjects such as strategies for funding, research and workforce education; the impact of AI on automation and the economy; ethical considerations concerning fairness, safety and governance; and questions of global cooperation regarding international cybersecurity and AI weapon systems.

Trump’s shift towards deregulation and realpolitik

Ever since Trump took power in the White House, the US AI strategy has shifted radically towards a market-based, patriotic and rather Machiavellian realpolitik approach. On 10 May 2018, the White House hosted the “Artificial Intelligence for American Industry” summit under the supervision of Michael Kratsios, deputy assistant to the president for technology policy; it was attended by industry representatives, high-ranking state officials and scientists. The opening quote of the strategy paper by Kratsios sets the tone:

“Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people. Our free market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.”

In contrast to the French approach, which gives the state a leading and regulating role, the Trump administration aims to remove barriers to AI innovation “wherever and whenever we can.” The US government wants to foster the combined strength of government, industry and academia and to generate a competitive advantage over other nations. Concretely, according to the strategy paper, the US has loosened regulatory frameworks for AI in autonomous driving, commercial and public drone operations, and medical diagnostics. Concerning research and development (R&D) and the private sector, the Trump administration emphasizes its ambition to remain “the global leader in AI”, having increased investment in unclassified AI R&D by over 40% since 2015 ($1.1 billion in 2015).

Further, in his speech at the summit, Kratsios clearly addressed Trump’s electorate, stating: “President Trump will never forget the American worker. (…) Our policies must reflect the fact that people learn not just in lecture halls and libraries, but on factory floors, in offices, and out in the field.” Acknowledging that AI will displace current jobs (especially in rust-belt industries), the Trump administration here adopts an intervening role. Trump aims to promote the creation of industry-recognized apprenticeship programs, committing $200 million in grant funds, matched by a private-industry commitment of $300 million, to a “Science, Technology, Engineering, and Math (STEM) education program” to foster the skills of US employees. Certainly, this appears somewhat contradictory to the earlier deregulation claim.

In order to improve funding coordination at the federal level, the White House chartered a “Select Committee on Artificial Intelligence” under the “National Science and Technology Council”, which will advise on and structure the White House’s AI R&D priorities. At the international level, the Trump administration is also pursuing AI R&D collaboration through agreements with the UK and France.

In fierce competition in the AI arms race

The Trump administration emphasizes clearly that it considers AI a means to achieve strategic military advantage in the arms race with China and Russia. The 2018 “National Defense Strategy” commits the US to “investing broadly in military applications of autonomy, AI, and machine learning”. According to the unclassified budget, the Pentagon invested $7.4 billion in AI, big data applications and cloud computing. Just this June, the US created a “Joint Artificial Intelligence Center” (JAIC), adding to the military project “Maven”, the development of an AI video analysis tool for military drones (in which Google initially worked closely with the Pentagon, contributing its own AI software TensorFlow, and only withdrew after employee protests). According to the news outlet Breaking Defense, JAIC will have “oversight over almost all service and defence agency AI efforts.” Total US spending on military advancements in AI remains unclear, as a large part is classified and hidden from public view. With such a realpolitik focus, it is perhaps not surprising that the Trump administration left ethical AI and privacy considerations completely unmentioned at the White House AI summit – in stark contrast to the French positioning.

China – the totalitarian smartification?

Among all the governments, the Chinese Communist Party (CCP) presents the most detailed, comprehensive and ambitious AI strategy. In July 2017, the Chinese State Council announced “A Next Generation Artificial Intelligence Development Plan”, which contains a rigid three-step plan for the future. By 2020, China aims to catch up with technologically leading nations such as the US in the field of AI and plans to establish an AI market worth €18.7 billion. By 2025, it expects major breakthroughs and a leading role in specific AI applications, and, finally, by 2030 China wants to claim the position of international leader in AI R&D and applications (targeting an enormous market value of €130 billion).

Socialist planning marries commercialization

While the strategy paper acknowledges that there is still a gap between China’s overall level of AI development and that of “developed countries”, the Chinese government sets a determined and self-confident tone. China does not formulate ambitions; it boldly declares where it will stand in the future (“we will have achieved”). In order to “enhance society’s productive forces, national power, and national competitiveness”, the CCP lays out leading and guiding principles that reveal Chinese peculiarities. The party follows a double strategy of market orientation and governmental control, aiming to harvest the advantages of market-based development and commercialization of AI technologies and applications while maintaining CCP steering control to “fully give play to the advantages of the socialist system to concentrate forces to do major undertakings, promote the planning and layout of projects, bases, and a talent pool.” The party’s centralisation and authoritarian top-down planning, which leaves aside complicated and time-consuming democratic bargaining processes, is meant to speed up AI solutions to current Chinese societal challenges. The expectations are huge.

The total “smartification” and “intelligentization”

AI applications are seen as a remedy for pressing Chinese problems such as population ageing and environmental degradation and will, according to the Next Generation Development Plan, “significantly elevate the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability.” To meet these aims, the Chinese government targets the “smartification” and “intelligentization” of all possible fields. Whether in industry (the internet of things), logistics (robotics, smart transportation, sorting and processing), mobility (autonomous driving), agriculture and the environment (intelligent monitoring and predictive regulation), commerce (smart finance and connected household goods), medicine (surgical robots, disease assessment and treatment) or even intimate and social spheres such as health and elder care (smart wearables and monitoring equipment) – the CCP is planning to use AI as a universal problem solver.

While such a list seems rather bold, the Ministry of Industry and Information Technology and its Science and Technology Department published the “Three-Year Action Plan for Promoting Development of a New Generation Artificial Intelligence Industry (2018–2020)” in December 2017 to make things concrete. This detailed plan gives technical specifications for integrating AI into the information and manufacturing industries in order to turn “China into a manufacturing (…) and a cyber superpower.” The specifications are detailed enough to baffle the average reader. For example, concerning intelligent unmanned aerial vehicles (UAVs), the report declares: “by 2020, intelligent consumer UAV 3-axis mechanical stabilization units should achieve a precision of 0.005 degrees, achieving 360-degree omnidirectional perception avoidance and realizing automatic and intelligent forced avoidance of air traffic control areas.” The same scrutiny is applied to areas such as speech recognition, neural network chips and video image identification systems. Neither the French nor the American strategy papers display such precision and detail, once more underlining the CCP’s determination to fulfil its ambitious three-step plan. Further, the government has just started to build a $2.1 billion technology park for AI R&D in Beijing and has forged partnerships with tech giants like Baidu, Tencent and Alibaba. The latter points to a further Chinese peculiarity.

Public safety & smart control

While ethical considerations such as violations of personal privacy are mentioned in the strategy papers, it is hard to see how these can be addressed alongside what the CCP describes in the Next Generation Development Plan as the “construction of public safety and intelligent monitoring and early warning and control systems”. Personal data protection is almost non-existent in China, since the authorities will have access to all databases generated by Baidu, Tencent and Alibaba by 2020 at the latest. Such enormous access to personal databases enables the private sector and the CCP to train machine learning applications – and to apply them not only for profit generation but for societal regulation and control alike. AI applications like video image analysis, facial biometric identification and geo-tracking are already used for ubiquitous surveillance and predictive policing against the Uyghur minority in Xinjiang province, or for the public shaming of “misbehaving” citizens. Not to mention the “sesame” social credit system, filed under “promote social interaction and mutual trust” in the Next Generation Development Plan. Behind such rosy and lofty terms hides an all-pervasive scoring and rating infrastructure, enacting panoptic transparency, obedience and control through a system of discipline and punishment.

Also notable about the Chinese strategy is the ambition to fuse such “civilian” AI technology with military innovations and applications. The Next Generation Development Plan announces the intention to “promote two-way conversion and application for military and civilian scientific and technological achievements and co-construction and sharing of military and civilian innovation resources.” Hence, the CCP is strategically tapping civilian innovation for military use and vice versa. Whereas Google retreated from working with the Pentagon, in China, according to US military expert Elsa Kania on Breaking Defense, governmental actors work hand in hand with commercial companies or simply appropriate innovations from the private sector strategically. She comments: “For instance, Baidu is partnering with the CETC, a major state-owned defense conglomerate, through the Joint Laboratory for Intelligent Command and Control Technologies, which seeks to advance the use of big data, artificial intelligence, and cloud computing for military command and information systems.” Here, without a doubt, the CCP is taking advantage of its authoritarian centralising power, enforcing synergies wherever it can and leaving aside ethical considerations in order to push China to become the leading AI nation.

Conclusion

AI is currently considered one of the key fields in which the good and bad of societies and nations are negotiated. At the same time, it opens up a vast space of imagination. While machine learning applications are already deployed in various contexts, the current rush towards AI in politics and business is strongly stimulated by the imaginary power of the concept of AI. Truly, the current debate is severely over-hyped, and the praise of robots such as Sophia is annoyingly and dangerously misleading: it creates a myth of human intelligence and empathy that AI is simply not able to deliver. Yet the debate shapes public discourse and guides policy and business measures alike. As Hamid Ekbia once wrote, AI is the “embodiment of a dream […] that stimulates inquiry, drives action, and invites commitment, not necessarily an illusion or mere fantasy.”

The national AI strategies currently popping up around the globe constitute a peculiar hybrid of imaginary and policy measures: they reinforce and shape existing AI narratives to sketch the horizon of our digital future, and at the same time they formulate concrete measures to rush along these avenues towards that horizon. In this way, they powerfully co-produce the very future they envision. The differences between France, the US and China identified in this article obviously point to striking political and cultural differences – but they also show that the future, and especially the role of automation and AI in it, is highly contested.

We are currently negotiating how we want to live with automation and AI in the future. And this negotiation is not only about technology, policy and budgets – it is strongly entrenched in myths and metaphors. Let’s be aware of that.

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Jascha Bareis

Associated researcher: The evolving digital society

Christian Katzenbach, Prof. Dr.

Associated researcher: The evolving digital society
