News

More than 60 countries agree to address the risks posed by the use of AI in the military domain

AI has become one of humanity's most important advances, and both companies and governments agree that this technology is destined to profoundly change key aspects of our lives and our coexistence. Its enormous range of applications and near-limitless possibilities attest to this, but unfortunately not all of these applications will have a positive impact on our society.

One of the areas that raises the most doubts and fears is the arms industry. Applying AI to warfare, in a broad sense, could have catastrophic consequences. Many examples come to mind, but even the simplest should give us real cause for fear. Consider, for instance, what would happen if a small AI-controlled aircraft carrying nuclear weapons lost control and launched them indiscriminately across different regions of the world.

To confront the danger that AI could pose when applied to the military sphere, the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) was held. It concluded with the participating countries signing an agreement to prioritize the responsible use of AI on their political agendas.

The summit was attended by political representatives from more than 60 countries, among them giants such as China. Russia was not invited for obvious reasons, and Ukraine did not attend the event, which is entirely understandable given the situation the country finds itself in as it fights the war that followed Russia's invasion.

The countries that signed the agreement pledged to develop and use AI in the military field in accordance with established international legal obligations at all times, and in a manner that does not undermine international security, stability and accountability. In short, this amounts to making responsible use of AI, as mentioned above, although critical voices have pointed out that the agreement is not legally binding and that important matters are left unaddressed.

During the meeting, it was also agreed to address other issues, including AI reliability, the unintended consequences of its use for military purposes, risk escalation, and how humans should participate in decision-making. Some attendees also highlighted the advantages of using this technology in armed conflict, citing Ukraine as an example: a country that has relied on deep learning, and on technology in general, to repel a "bigger and stronger" aggressor.
