
4 examples of how technology can be very scary

Your loved ones speak to you from beyond the grave (via Alexa)


No, it has nothing to do with the practical joke of programming an Alexa routine to make it look like a ghost is talking to you.

At the latest re:MARS conference, Rohit Prasad, one of the senior figures on the Alexa project, presented the new features that will be added to Alexa in the coming months. Many were fantastic, but one in particular gave us a serious case of the creeps. Specifically, they showed an example in which a grandmother told her grandson a bedtime story using Alexa's technology. The catch is that, in the example, the lady was no longer with us.

This looks like something out of a Black Mirror episode, and it was made possible by training an AI for hours on recordings of the person's voice. And although the idea has a certain beauty to it (we all like seeing a video in which a loved one who is no longer with us is "alive" again for a few minutes), we don't know how this could affect mental health, especially that of the little ones.
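
To give a rough idea of the kind of voice-cloning technology involved, here is a minimal sketch assuming the open-source Coqui TTS library and its YourTTS model; Amazon has not said what Alexa actually uses, and the file names are purely hypothetical.

```python
# Minimal sketch of voice cloning: synthesize speech in a voice taken from a
# short reference recording. Assumes the open-source Coqui TTS library and its
# YourTTS model, not whatever Amazon uses internally.
from TTS.api import TTS

# Load a multilingual model that supports cloning a voice from a sample clip.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False)

# "grandma_sample.wav" is a hypothetical recording of the person's voice.
tts.tts_to_file(
    text="Once upon a time, there was a little wolf who was afraid of the dark...",
    speaker_wav="grandma_sample.wav",  # reference audio whose voice is cloned
    language="en",
    file_path="bedtime_story.wav",     # synthesized bedtime story in that voice
)
```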

Spot, the Boston Dynamics dog


Originally designed to help humans (that is, to do what any normal dog does, which is why we humans have spent centuries on artificial selection), the Boston Dynamics dog is yet another piece of technology that could end up getting out of hand.

The company originally created it as a small watchdog to patrol factories and keep track of inventory. So far we have already seen it walking through the streets of León, dancing, doing airport security checks and even with a set of hands attached to it. Seriously, this hulk is never going to give you the cuddles a real dog does.

The humanoid robot from Engineered Arts

At the end of last year, the company Engineered Arts presented its new humanoid robot to the world: Ameca. The company's goal was to amaze us, but what it actually achieved was to make us all recoil from its creation. Maybe taking inspiration from the NS-5s of I, Robot wasn't such a great idea.

The Google AI that became sentient


Last week, software engineer Blake Lemoine decided to skip over all of Google's confidentiality agreements and send a transcript of his conversations with the LaMDA artificial intelligence to The Washington Post. He was convinced that the AI he was working with had become sentient.

Many artificial intelligence experts have analyzed the conversation, and most of them have come to tell us that the concept of consciousness needs to be redefined. Why? Because LaMDA is not just another artificial intelligence: it is a language model for dialogue applications. If you train it as a conscious AI, our friend LaMDA will defend tooth and nail that it is human and even that it has feelings. Likewise, LaMDA would defend tooth and nail that it is a transoceanic flight attendant if we had trained it by telling it that it is one.

What is fascinating about this case is that the AI managed to fool Lemoine. Today, AIs like GPT-3 have no consciousness. Of course, they have an impressive way with words and are capable of convincing you that one plus one equals seven.
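
To get a feel for how a language model simply plays along with whatever persona its prompt or training sets up, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model; both are stand-ins chosen for illustration, since LaMDA itself is not publicly available.

```python
# Minimal sketch: a language model continues whatever persona its prompt sets up.
# Uses the small open-source GPT-2 model via Hugging Face transformers as a
# stand-in, since LaMDA is not publicly available.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the toy example reproducible

# The "persona" lives entirely in the prompt: the model has no beliefs,
# it just predicts text that fits the setup it was given.
persona_prompt = (
    "The following is a conversation with an AI that insists it is conscious "
    "and has feelings.\n"
    "Human: Are you self-aware?\n"
    "AI:"
)

result = generator(persona_prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Swap the prompt for one that tells the model it is a transoceanic flight attendant and it will argue that case just as confidently, which is the whole point of the passage above.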
