From time to time, Adobe surprises us with projects like Adobe In-Between. Admittedly, many of them never go beyond functional prototypes that do not reach the market, serving more as a showcase of what the company is capable of than as a proposal aimed at its users. Sometimes, however, we do see them reach some of its programs later on, mainly Photoshop and Premiere, the kings of the house (without meaning to detract from the rest of the family, of course).
As we have seen in the presentation of Adobe In-Between, its developers, with the essential help of artificial intelligence, have managed to create a solution that, from two photographs taken, say, a second apart, is capable of generating a short video with a “natural” animation from the first to the last. For example, given one face looking to the left with its eyes closed, and another with its eyes open and looking to the right, Adobe In-Between will create the intermediate frames, in which we will see the face turning while the eyes open.
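Adobe has not published how In-Between works internally, and its AI approach presumably synthesizes genuine motion between the two photos. As a point of contrast, the sketch below shows the most naive possible form of frame interpolation, a simple cross-dissolve, where each intermediate frame is just a weighted average of the two input images; the function name and shapes are illustrative assumptions, not Adobe's API:

```python
import numpy as np

def interpolate_frames(start, end, n_frames):
    """Generate n_frames intermediate frames by linear blending.

    This is a naive cross-dissolve: pixels fade from one image to
    the other. AI-based interpolators (the category In-Between
    belongs to) instead estimate motion, so objects appear to move
    rather than fade.
    """
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # blend weight, 0 -> start, 1 -> end
        frame = (1 - t) * start + t * end
        frames.append(frame)
    return frames

# Two tiny stand-in "images": a dark frame and a bright frame.
start = np.zeros((2, 2))
end = np.full((2, 2), 100.0)
mid = interpolate_frames(start, end, 3)[1]  # middle of 3 frames
print(mid[0, 0])  # the middle frame is the 50/50 blend: 50.0
```

The limitation of this toy version is exactly why the constraint described below exists: blending cannot invent content or motion, it can only mix what is already in both photos.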
It is important to clarify, though, that the photos used with Adobe In-Between must be similar; that is, the animation we ask the software for must be simple. Without having been able to test it ourselves, from what has been shown it follows that it is not possible, for example, to create videos containing elements that do not appear in the two images (or three, since an animation can also be generated with an intermediate photo in addition to the origin and destination ones), to produce complex movements, and so on.
At the moment Adobe In-Between is just a project, although I think it is one of those that does stand a real chance of finally reaching Photoshop, Premiere, or even both programs. And I say this because we have already seen other technological developments that, for whatever reason, never left the laboratory, appearing only in demonstrations that, admittedly, left more than one person open-mouthed.
This is what we saw, at the time, with Adobe VoCo, another development also based on artificial intelligence which, after being fed audio recordings of a specific voice, was capable of reading a text aloud with a digital replica of that voice. On that occasion, admiration for the development was mixed, of course, with the fear that this technology could be used to create deepfakes. And we are talking about 2016, when the risks associated with this type of forgery were only beginning to appear on the horizon.
The most interesting thing is to see the importance the company attaches to artificial intelligence. Today we see it with Adobe In-Between, in 2016 with Adobe VoCo, and also with other tools that have been added to its software over more than a decade. I am thinking, for example, of Photoshop's Content-Aware Fill. Without a doubt, the future of photo and video editing lies in the assistance that AI can provide, and in this regard, Adobe has already come a long way.