Microsoft backtracks on its facial recognition program

Microsoft has been in the spotlight in recent years, and the AI and facial recognition features it has promoted have not delivered the expected results. After sustained criticism over facial recognition errors affecting people of color and older people, its way out is to backtrack.

This has been widely criticized, since Microsoft has had years to address the problem or take action on it. It has not done so, and only now has it decided to set a deadline for withdrawing some of these features. Users can find this information on the Azure Face website itself, something that surprised many.

The criticized discrimination problems

It is hard to admit, but after several investigations into Microsoft’s AI technology, experts found that even its voice recognition was inaccurate, with a much higher error rate for people of color. That is something that cannot be allowed today, as it flagrantly violates people’s human rights.

What can all this cause? Anything from self-esteem issues to feeling invalidated and the need to pretend (by changing one’s tone of voice, for example) just so the AI can recognize a person properly. It has reached the point where Microsoft has decided to remove some of the features, while others will only be accessible to Microsoft customers and managed partners.

Why doesn’t Microsoft solve these problems?

This question still has no answer, and nobody knows for sure why Microsoft, instead of fixing the problems in the AI-based features it offers, has decided to throw away all the work and research time invested. It seems absurd, though perhaps there is more to it. Could it be that there is simply no solution?

The most critical observers may think Microsoft is not interested in investing the money, time and effort needed to solve the problem; others may believe it has to start from scratch for a feasible solution to exist. Perhaps there is a more basic issue, and it is that we all remember the alarms that went off over Microsoft’s racist chatbot, the bot that ended up becoming a real nightmare.

Tay started out as a kind of experiment, but things got complicated when it began responding with racist, sexist and even Nazi messages. After the horror it caused and all the criticism, the chatbot was removed entirely, and Microsoft is now doing the same with some of the aforementioned AI features that tend toward the same widely condemned discrimination.

The deadline for withdrawal is 2023

Microsoft has clarified that on June 30, 2023, the AI-based features that have been causing problems will be permanently withdrawn. It therefore warns customers to carry out all necessary operations and be prepared to stop using them before their complete removal takes them by surprise. The decision has been made and there is no turning back.

The truth is that Microsoft has generated considerable mistrust, and any new AI-based systems or features it launches may be viewed with some suspicion at first. The delay in making decisions and the lack of solutions cause confusion and anger. We will see how all this progresses.
