An AI for an AI: Artificial Intelligence wars loom


The technology is at risk of being appropriated by an oligarchy, which may include more than just companies. Countries are in an arms race for AI supremacy.

By Shalini Verma


Published: Fri 18 Sep 2020, 10:39 AM

Last updated: Fri 18 Sep 2020, 6:44 PM

OpenAI recently announced its new AI language model, GPT-3, a significant advance over prevailing natural language processing (NLP) by machines. NLP is the technology where AI intersects with human language, allowing chatbots, for instance, to converse with us.

GPT-3 brings AI closer to Artificial General Intelligence because it is versatile enough to perform a variety of tasks, from writing songs and software code to crunching numbers. More importantly, it produces original text with an uncanny similarity to what humans write. The Guardian recently published an article written entirely by GPT-3, which blew away most readers. Yet there is a disturbing development at play here, and it is not about machines lording over humanity.

OpenAI, which built the breakthrough language model, started out as a not-for-profit. The organization's name reflected its mission of building Artificial General Intelligence that 'benefits all' and avoids 'enabling uses of AI or AGI that harm humanity or unduly concentrate power'. It turns out that OpenAI is not so open anymore. Last year, it pivoted to a for-profit company, or more specifically a 'capped-profit' company. The cap kicks in once investors' returns reach 100 times their investment. AI has a large appetite for compute power and talent, which costs money.

Investors include Microsoft, which has pumped in a billion dollars and in return has been designated a preferred partner for commercializing OpenAI's technologies. Microsoft also provided a supercomputer with the kind of compute needed to train the GPT-3 model. The technology is available in a private beta as an API that anyone can apply for but, in reality, few have access to. Later, it will be licensed to developers at a price, just like other commercial AI services. There are some checks and balances with regard to investors, but there is no guarantee that these won't change in the future.

AI also faces the bigger challenge of the unknown. When developers leverage the services of well-known commercialized AI platforms, they are unable to find out how the AI model arrived at a certain decision. The AI vendors tend not to share this information, or simply don't know how their model produced the output. Deep neural networks are inherently unexplainable: the way a network distributes weights across its connections is largely left to the network itself. Imagine once these pre-trained AI models are scaled and connected to thousands of organizations across the world. We could see a Covid-19-like disruption in the technology world, in which we are clueless about what went wrong and how to rein it in. Nearly a year after patient zero contracted Covid-19, we still don't know how it started. To prevent such a scenario, we need AI systems to be more open and transparent. AI is at risk of being appropriated by an oligarchy, which may include more than just companies.

Countries are in an arms race for AI supremacy. While US companies were negotiating to buy out or take a stake in TikTok's US operations, the Chinese government redrafted its technology export rules, adding 'civilian use' technologies to the list of those restricted for export. AI technologies are among those that now need government approval. The Chinese government is in no mood to let US companies have a piece of the Chinese AI pie. Clearly, governments perceive AI as a source of geopolitical power.

Economists maintain that global wealth is getting increasingly concentrated in the hands of a few. We can expect similar imbalances in AI unless global institutions work with member countries to arrive at a consensus on making AI systems more open to scrutiny. This is not just about offering an open-sourced variant. Regulators need to understand the big picture of AI and collaborate; their efforts are currently fragmented across industry verticals and nations. All organizations must be obligated to share their IP on AI, given the magnitude of its impact on humanity. If this is not possible, then I can say with absolute certitude that a few years from now we will have Big AI Tech wielding enormous geopolitical influence.

Shalini Verma is CEO of PIVOT technologies

