GT News

Taxes, accounting, law and more. All the key news for your business.

Veronika Odrobinová | Jan Nešpor | October 10, 2023

We probably can’t stop AI, so why not use it instead?


At the end of this June, the Ministry of Industry and Trade announced its intention to update the National Strategy for Artificial Intelligence (AI). The announcement included the launch of a public consultation on the topic, through which the public now has a say in shaping the strategy and, more broadly, the use of AI in the Czech Republic.

AI is now a ubiquitous tool that you will encounter in everyday life in banking, transport, healthcare or even when merely browsing the Internet. The specific implications AI will have, not only for these activities but for the functioning of the world as a whole, are as yet unknown. Perhaps for this reason, AI arouses passions among the public that are sometimes positive and sometimes quite negative.

For many people, AI is a tool that can transform the economy, saving them time and, ultimately, money. For others, AI raises the darkest fears of a rebellion of machines against humans, reminiscent of the dystopian futures depicted in films such as The Matrix or The Terminator.

The AI debate has not, however, escaped the attention of regulators at both the EU and international levels. In 2019, an Organisation for Economic Co-operation and Development (OECD) panel presented five key principles to follow when using AI, along with five recommendations for OECD member countries, to ensure that the potential of AI is maximised while the risks associated with it are minimised.

In this report, among other things, the OECD put forward the now widely accepted definition of an AI system as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy." In other words, artificial intelligence performs human cognitive functions through a machine.

The OECD frames the first of these principles as promoting inclusive and sustainable development of AI for the well-being of society, whether in economic growth, combating discrimination or protecting the environment. The second principle is to ensure that AI is used in line with human-centred values, including fairness, respect for the rule of law, human rights and democratic values. The third calls for transparency, explainability and understandability of the processes AI performs and the conclusions it reaches, along with the ability to challenge those conclusions. The fourth requires robust, threat-resistant systems that cannot be compromised or abused. The last principle then places responsibility for the "operation of AI" on so-called AI actors (entities that implement or control AI), both for damage caused by AI and for violations of the above principles.

Based on these principles, the OECD recommends that individual countries support AI research and development and provide a digital, political and social environment in which AI can operate, including by ensuring societal trust. It further recommends developing human capacity to understand, work with and use AI, and supporting sociological dialogue and economic preparation for the transformation of society that AI will bring. Finally, the OECD highlights the need for international cooperation to ensure trustworthy and safe AI in the everyday world.

The document published by the OECD names these pitfalls in a way that may give the impression there is nothing to worry about, but on its own it is certainly not sufficient. Ensuring safe and trustworthy AI will take much more than an official recommendation. At the very least, however, the recommendation opens a dialogue about the opportunities and threats AI presents, can help regulators set a framework for using AI properly and, with any luck, will be one of the factors that ultimately help achieve the desired goal. But only time will tell.

In conclusion, as the popular saying goes: "progress cannot be stopped". This is doubly true for AI. So I believe that in the coming years, rather than asking "how do we get rid of AI, or how do we stop it?", it will be more important to focus on how to use AI safely, so that we keep evolving as a society and move one step further into the great unknown.

Finally, we asked the AI itself whether it could take control of the world. Its answer can be found below.

Can AI take over the world?

Scenarios in which artificial intelligence (AI) takes absolute control of the world are a staple of science fiction and belong more to fiction than to the real world. However, it is important to understand a few key points:

  1. Limitations of current AI: Current AI is limited to specific tasks and domains, such as data analysis, image recognition or autonomous driving. AI has no consciousness or self-awareness and is not capable of independent decision-making.
  2. Safety measures: The development of AI is accompanied by debates about ethical guidelines and regulations to protect society from potential risks.
  3. Control and accountability: It is up to people and organisations to maintain control over AI systems and ensure that these systems serve the public good. Responsibility for AI rests with human oversight.
  4. Social consensus: The development and deployment of AI should be consistent with societal values and democratic processes. Decisions on the use of AI should be made within the framework of political and ethical discussions.
  5. Current capabilities: While it is important to keep in mind the potential risk of misuse of AI, current AI technologies do not have the ability to take total control of the world without human oversight and permission.

Currently, the biggest challenges in AI tend to be related to ethics, transparency, security and privacy. The key goal is to use AI in a way that benefits society and minimises potential risks.

Authors: Veronika Odrobinová, Jan Nešpor