AI emerges as a global issue

Quang Dung
(VOVWORLD) - This year Artificial Intelligence (AI) became a major global issue. Alongside its broad potential for socio-economic development, AI presents serious security and safety risks, prompting countries and international organizations to urgently seek controls.

On November 1, dictionary publisher Collins selected ‘AI’, the abbreviation of ‘Artificial Intelligence’, as its word of the year because the term “has accelerated at such a fast pace and become the dominant conversation of 2023”. Use of the word quadrupled over the past year.

Huge potential for AI

The introduction of ChatGPT, OpenAI's generative AI software, ignited a global sensation earlier this year and marked a pivotal moment in reshaping perceptions of AI. ChatGPT, alongside competitors such as Google DeepMind's Gemini and Elon Musk's Grok, heralds the onset of the AI era. These technologies have moved beyond passively responding to user requests to a more advanced, proactive stage: they not only serve users but actively engage with them, incorporating contextual metadata, developing their own capabilities, and evolving through direct interaction with users.

UN Secretary-General Antonio Guterres (Photo: IRNA/VNA)

The latest wave of generative AI and advanced multitasking AI technologies, often referred to as ‘the AI frontier’, is unlocking significant application potential for socio-economic development. In Malaysia, AI is being used to help farmers map farming data and monitor crop productivity. In Israel, the financial sector is using AI to create forecasting models. In Thailand, government officials use AI to analyze tax payments and monitor taxpayers' transactions on social networks. UN Secretary-General Antonio Guterres says that, if employed responsibly and equitably, AI has the potential to create breakthroughs in countries’ development trajectories.

“AI could supercharge climate action and efforts to achieve the 17 Sustainable Development Goals by 2030. But all this depends on AI technologies being harnessed responsibly and made accessible to all — including the developing countries that need them most,” said Guterres.

On a global scale, AI is increasingly being used to develop new disease prediction models, integrate health services, forecast weather, and engineer crop varieties adapted to climate change, the latter as part of an effort to establish a new, more sustainable global food system.

Responsible AI development must mitigate risks

In addition to its substantial potential benefits, AI is also raising concerns. Rapid progress in AI technologies this year has led countries, international organizations, and technology experts to worry about the risks AI poses to national security, societal stability, and the survival of our species.

In the middle of this year, CEOs of the world's leading AI companies, together with hundreds of researchers and experts, endorsed a declaration emphasizing that minimizing the risks from AI must be a global priority as urgent as preventing nuclear war. American billionaire Elon Musk, a pioneer in developing AI technology, also warned of the dangers the technology poses if left uncontrolled.

Given the potential for AI to spiral out of control and threaten humanity, the global community accelerated efforts this year to regulate the development and deployment of AI. In November, the first global AI Safety Summit took place in the UK and produced the Bletchley Declaration, signed by 28 countries and the European Union, including the US and China, which pledged to promote responsibility and foster international collaboration in researching and using AI safely. The signatories set out AI control principles based on the concept of “secure by design” and encouraged AI developers to let governments assess the safety of their applications before they are released to the public.

Other AI control mechanisms were also established this year. In October, the UN set up an AI Advisory Body with 39 members, including company executives, government officials, and academics, tasked with providing direction for AI governance at the international level. In early December, the EU reached agreement on the provisions of a draft AI Act, which will be the world's first comprehensive law on AI.

At the national level, the US and UK have both established AI Safety Institutes to test and evaluate new models and identify potential risks. China announced the “Global AI Governance Initiative” and issued interim regulations for generative AI technology. More than 50 corporations and research organizations, including Meta, IBM, Intel, Sony, Dell, and NASA, jointly established the AI Alliance to promote more open and transparent cooperation in AI development.
