2023-05-12 20:55:26
As far as artificial intelligence (AI) is concerned, until a few years ago robotics was an uninteresting field. The limit of each machine was set by the ingenuity of its programmers and the physical capacity of its design. No mechanical interface could cope with the uncertainty of the world around us, an infinite range of variables that simply cannot be foreseen on a computer.
It is not surprising, therefore, that artificial intelligence has learned to crawl on the other side of our screens, and that its main exponents today are little more than a torrent of ones and zeros in huge databases. ChatGPT, Midjourney, DALL·E 2… None of them can interact with the physical world. That is something Google's parent company, Alphabet Inc., among others, has been trying to change for years.
A few months ago, a minute-and-a-half video circulated from Boston Dynamics, the robotics firm that Google owned between 2013 and 2017, in which its 'Atlas' model could be seen performing all kinds of stunts on an acrobatic circuit. The footage, which left no one indifferent at a time when everything related to AI arouses a certain fear, was criticized and praised in equal measure, and it did not take long for skeptics of what it showed to appear.
The truth is that, despite what their makers would like to sell, the Boston Dynamics robots still do not rely as heavily as intended on deep learning. They use it, above all, to refine details such as their vision or their relationship with the environment, but always within a 'catalog' of movements pre-established by humans. This is due, unfortunately, to the difficulty of training an algorithm from scratch in the physical world: in the virtual world, such training is processed by supercomputers that shorten the task from several years to just a few hours.
Google soccer robots
This reflection leads us to deduce that, inevitably, the best training method for an AI-based robot would be to train it first on a computer and then transfer its knowledge to a mechanical interface. This could be done by simulating, in a computer program, an environment similar to the one the machine will face in the physical world. However, that is precisely the biggest headache for programmers: a simulation is still only an approximate replica of the real world, so in practice a direct transfer tends to fail.
This is where a second, lesser-known subsidiary of Google comes in, equally at the forefront of AI development: DeepMind, acquired by the search giant in 2014. On April 28, the company run by Demis Hassabis unveiled its robots capable of playing soccer (or something close to it), which were indeed trained first in a virtual environment before transferring their knowledge to a mechanical interface. All of it through an innovative learning method: applying, in the virtual phase, a multitude of random variables that the robot must overcome to achieve its objective (in this case, scoring a goal).
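The training idea described above, perturbing the virtual environment with random variables each run so the learned behavior survives the jump to the real world, is commonly known as domain randomization. A minimal sketch of the sampling side of that idea, with illustrative parameter names and ranges that are assumptions, not DeepMind's actual values:

```python
import random

def randomized_episode_params(rng: random.Random) -> dict:
    """Sample fresh physics parameters for one simulated episode.
    Ranges are hypothetical, for illustration only."""
    return {
        "ground_friction": rng.uniform(0.4, 1.2),    # slippery vs. grippy turf
        "ball_mass_kg": rng.uniform(0.35, 0.55),     # manufacturing variation
        "motor_latency_s": rng.uniform(0.00, 0.05),  # actuation delay
        "sensor_noise": rng.uniform(0.00, 0.02),     # camera/IMU jitter
    }

def train(num_episodes: int, seed: int = 0) -> list:
    """Run the training loop; here we only record the sampled variability.
    A real pipeline would run a physics simulator with each parameter set
    and update the control policy after every episode."""
    rng = random.Random(seed)
    history = []
    for _ in range(num_episodes):
        params = randomized_episode_params(rng)
        history.append(params)
    return history

episodes = train(1000)
frictions = [e["ground_friction"] for e in episodes]
print(len(episodes), min(frictions), max(frictions))
```

Because the policy never sees the same physics twice, it cannot overfit to one exact simulation, which is what makes the later transfer to a physical body plausible.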
The robots, still clumsy and somewhat erratic, were presented in a video showing them playing one-on-one on a small pitch. They ran, kicked, tried to dribble, and even understood that they had to tackle the rival to snatch the ball. Their developers are, of course, the first to admit the early development stage of their 'toys', although they also note that if AI has demonstrated anything, it is its exponential learning capacity. It probably won't be long before mechanical structures out-skill human muscle and bone.