Chapter 21: HUMAN-LIKE WALKING ROBOTS AND NEW AI DEVELOPMENTS
Human-Like Walking Robot
While walking, the bones, joints, muscles, tendons, ligaments, and other connective tissues of the musculoskeletal system, all controlled by the nervous system, must move in a coordinated manner, adapting to unexpected changes in speed or conditions and producing appropriate responses. This is not easy to achieve in robotics! A research group from the Tohoku University Graduate School of Engineering has replicated human-like variable-speed walking using a musculoskeletal model guided by a reflex control method that mirrors the human nervous system. This breakthrough in biomechanics and robotics sets a new benchmark for understanding human movement.
The team, Shunsuke Koseki, Hayashibe, and Dai Owaki, published their work in January in the journal PLOS Computational Biology, describing it as overcoming the complex challenge of replicating efficient walking at various speeds, one of the cornerstones of the human walking mechanism and a crucial step toward understanding human movement, adaptation, and efficiency. The study used an innovative algorithm that optimizes energy efficiency across a range of walking speeds. Going beyond the traditional least squares method, the algorithm helped develop a neural circuit model optimized for energy efficiency at different walking speeds.
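The optimization idea can be illustrated with a minimal sketch, assuming a toy cost model (the function names, the two "reflex gain" parameters, and the quadratic-plus-offset cost below are invented for illustration, not taken from the study): instead of fitting the controller to a single speed with least squares, we minimize an energy cost summed over several target speeds.

```python
import numpy as np

def cost_of_transport(params, speed):
    # Toy surrogate for energy per unit distance: the optimum of each
    # quadratic term shifts with speed, so no single parameter pair is
    # best for every speed.
    g1, g2 = params
    return (g1 - speed) ** 2 + (g2 - 0.5 * speed) ** 2 + 1.0 / speed

def total_cost(params, speeds):
    # Energy objective summed over all target walking speeds.
    return sum(cost_of_transport(params, v) for v in speeds)

def optimize(speeds, n_iter=2000, seed=0):
    # Simple seeded random search standing in for the paper's optimizer.
    rng = np.random.default_rng(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        candidate = rng.uniform(0.0, 3.0, size=2)
        c = total_cost(candidate, speeds)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

params, cost = optimize([0.8, 1.2, 1.6])
```

Because the energy minimum shifts with speed, the parameters that minimize the summed cost are a compromise across the whole speed range, which is the intuition behind optimizing for variable-speed rather than single-speed walking.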
Experts emphasize that the findings of the study will help pave the way for future technological developments. The successful simulation of variable-speed walking in a musculoskeletal model, and its integration with neural circuits, marks a significant advance at the intersection of neuroscience, biomechanics, and robotics. Such developments could also enable more comprehensive mobility solutions for people with disabilities. In the future, the aim is to develop the reflex control framework further by expanding the range of walking speeds and motions it covers.
***
AI-Autonomous Chemistry Research
An artificial intelligence application has been developed that autonomously designs, plans and carries out complex chemical experiments.
Developing new chemical substances often requires long trial-and-error processes. Trained chemists use their knowledge and experience to decide which direction to pursue.
Using autonomous devices to handle these cumbersome trial-and-error processes has been a dream for many years. Today there are robotic devices that add chemicals to reaction vessels at set times. However, these and similar devices cannot perform tasks that require reasoning.
In recent years, there have been significant developments in the class of artificial intelligence systems known as large language models (LLMs). Artificial intelligence applications are now much more successful at understanding and using natural language.
In an article published in Nature, Dr. Daniil A. Boiko and his colleagues announced that they had developed an artificial intelligence application that conducts chemical research semi-autonomously. Thanks to its language-understanding skills, the application, called Co-scientist, can search the internet and gather the information needed to perform the task it is given. It can then plan the research process using this information and manage the experiments by controlling various laboratory devices.
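The search-plan-execute pattern described above can be sketched as a minimal agent loop. Everything here is hypothetical: the stand-in `fake_llm`, the tool names, and the canned plan are invented for illustration and are not Co-scientist's actual architecture.

```python
def fake_llm(prompt):
    # Stand-in for a large language model: returns a canned plan.
    # A real system would generate these steps from the task text.
    if prompt.startswith("plan:"):
        return ["search: Suzuki coupling conditions",
                "device: heat reactor to 60 C",
                "device: add catalyst"]
    return "done"

# Hypothetical tools: a web-search hook and a hardware controller.
TOOLS = {
    "search": lambda query: f"(web result for: {query})",
    "device": lambda cmd: f"(executed on hardware: {cmd})",
}

def run_agent(task):
    # Ask the model for a plan, then dispatch each step to a tool.
    steps = fake_llm(f"plan: {task}")
    log = []
    for step in steps:
        tool, _, arg = step.partition(": ")
        log.append(TOOLS[tool](arg))
    return log

log = run_agent("synthesize target compound")
```

The key capability the article highlights, reasoning, lives in the planning step: the model decides which tool to call and with what arguments, rather than following a fixed script.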
The researchers ran various tests to measure Co-scientist's capabilities. The tests showed that the application can develop detailed and chemically accurate synthesis procedures. Co-scientist also managed to coordinate two different types of reactions that are frequently used in drug development studies.
The application still needs further development. For example, when it is given too few examples, its predictions at the start of the trial-and-error process can be weak. It can also currently handle only relatively simple tasks; complex tasks that require knowledge from several disciplines, such as drug development research, remain beyond Co-scientist's capacity.
***
Night-Time Vision Like Daytime with Heat Signals
Technologies that visualize the environment and classify the objects in the resulting images are very important for autonomous vehicles. However, today's methods struggle in low-visibility conditions such as fog or darkness. A group of researchers has recently developed a new method that can solve these problems. The technology, called HADAR (heat-assisted detection and ranging), combines thermal imaging with artificial intelligence. Based on the detection of heat signals, the new method is expected to be used in autonomous vehicles in the near future (Bhattarai, M. and Thompson, S., "Heat Signals Enable Day-Like Visibility at Night", Nature, Vol. 619, p. 699, 2023).
Thermal imaging also allows vision at night, but a major problem with these technologies is that the images they produce are not clear: heat signals from different sources mix, blurring the image. In addition, thermal radiation alone reveals little about an object's physical properties. The most important feature of the newly developed method is that it overcomes these problems.
The researchers first built a library of the heat emissions of the different types of materials an autonomous vehicle may encounter. They then trained an artificial intelligence application on the information in this library. The resulting application analyses heat signals collected by infrared cameras to determine both the temperature and the type of the objects in the environment, allowing the scene to be viewed clearly even under the most difficult conditions. The main application area of HADAR is autonomous vehicles, but the technology could also be used in healthcare, wildlife observation, and scientific research.
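The library-matching step can be illustrated with a toy sketch. The band values, material names, and the simple fourth-root temperature estimate below are all invented for illustration; the real method jointly recovers temperature and material properties from calibrated infrared data.

```python
import numpy as np

# Invented emissivity signatures over three infrared bands; a real
# library would hold measured spectra for many materials.
LIBRARY = {
    "asphalt": np.array([0.95, 0.93, 0.92]),
    "human":   np.array([0.98, 0.97, 0.96]),
    "metal":   np.array([0.10, 0.12, 0.15]),
}

def classify(signal):
    # Nearest-neighbour match of an observed spectrum to the library.
    return min(LIBRARY, key=lambda m: np.linalg.norm(signal - LIBRARY[m]))

def estimate_temperature(signal, material, t_ref=300.0):
    # Toy estimate: scale a reference temperature by how strongly the
    # signal deviates from the material's stored emissivity (fourth
    # root, by loose analogy with the Stefan-Boltzmann law).
    ratio = float(np.mean(signal / LIBRARY[material]))
    return t_ref * ratio ** 0.25

obs = np.array([0.97, 0.96, 0.95])
material = classify(obs)
temperature = estimate_temperature(obs, material)
```

Separating "what the object is" from "how warm it is" in this way is what lets the approach produce a sharp, labelled scene where plain thermal imaging would give only a blurry heat map.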
***
Roblox Language Problem Solved
The famous online game Roblox introduced its LLM-supported translation model, which allows users speaking different languages to communicate with each other in real time. Automatic translation is, of course, not new, but in a virtual environment where 70 million daily active users constantly interact, performing real-time translation within 100 milliseconds across 16 languages, including Turkish, is a very important development. The translation is so fast that most users may think the other person is speaking their language! Given that gaming technologies often pioneer broader trends, we may soon see similar language services in many everyday applications.
***
Digital Assistant
Multion is a startup that promises to "make daily activities easier, faster, and more seamless." Its product is a browser add-on that can automatically perform tasks we would rather not deal with, such as scheduling meetings, filling out forms, and online grocery shopping. With its user-friendly interface, Multion saves users time by carrying out complex tasks from simple commands. Examples of such virtual assistants are multiplying day by day. Arc Browser, for instance, offers a similar service with Arc Search, a mobile application that compiles information from multiple sources and presents it to the user. The spreadsheet assistant Equals AI processes and visualizes data in Excel-like tables; by understanding the task at hand, it helps even non-experts make sense of data, offering early error detection, one-click corrections, command suggestions, chart creation, dashboards, and table summaries.
***
No More Tidying Up Your Home!
Scientists have developed a robot called OK-Robot that can move items from one place to another according to spoken instructions. Using vision-language models (VLMs) that match natural-language queries to objects in a visual scene, the robot can operate in an unfamiliar environment without additional training. When placed in a new space, it first scans its surroundings with a camera, builds a 3D map, and recognizes the objects in it. Given a spoken command to move an item, it detects the object that best matches the description, grasps it, and then plans a collision-free path to carry it to its destination. OK-Robot was tested in 10 different homes and achieved a 58% success rate. But is that really good enough to tidy up a room?
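The "most similar object" step can be sketched with toy embeddings (the vectors and object names below are invented; a real system would obtain them from a vision-language model): the spoken query and each detected object are embedded in a shared space, and the robot picks the object with the highest cosine similarity to the query.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented embeddings standing in for a VLM's joint text/image space.
SCENE = {
    "red mug":     np.array([0.9, 0.1, 0.0]),
    "blue sponge": np.array([0.1, 0.8, 0.3]),
    "toy robot":   np.array([0.2, 0.2, 0.9]),
}

def best_match(query_embedding, scene):
    # Pick the detected object most similar to the spoken query.
    return max(scene, key=lambda name: cosine(query_embedding, scene[name]))

query = np.array([0.85, 0.15, 0.05])  # pretend embedding of "the mug"
target = best_match(query, SCENE)
```

Because matching happens in the embedding space rather than against a fixed object list, the robot can respond to descriptions of items it was never explicitly trained on, which is why it works in unfamiliar homes.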
***
Brain Reading
WhatsApp owner Meta (formerly Facebook) has filed a new patent related to brain reading. The device, described in a patent on "cognitive load estimation" and designed to measure people's reactions to ads, is worn in the ear and measures brain activity. It combines functional near-infrared spectroscopy (fNIRS), which tracks changes in blood flow in the brain, with electroencephalography (EEG), which captures brain waves. Together, the two technologies measure how hard our brain is working, in other words, our cognitive load.
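How two such signals might be combined into a single index can be sketched in a toy way. The weights and the theta/alpha ratio below are illustrative assumptions, not Meta's patented method; they reflect the general finding that mental workload tends to raise prefrontal blood oxygenation (seen by fNIRS) and frontal theta power while lowering alpha power (seen by EEG).

```python
def cognitive_load(fnirs_hbo, eeg_theta, eeg_alpha):
    # Toy fusion: average a normalized fNIRS oxygenation reading with
    # the EEG theta/alpha power ratio; both components tend to rise
    # with mental workload.
    theta_alpha = eeg_theta / eeg_alpha
    return 0.5 * fnirs_hbo + 0.5 * theta_alpha

# Relaxed viewer vs. someone concentrating hard on an ad.
low = cognitive_load(fnirs_hbo=0.2, eeg_theta=4.0, eeg_alpha=10.0)
high = cognitive_load(fnirs_hbo=0.8, eeg_theta=9.0, eeg_alpha=6.0)
```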
Such ethically controversial technological developments are also drawing the attention of governments. A new regulation agreed upon by the EU restricts the use of biometric identification systems and bans artificial intelligence systems that exploit user vulnerabilities, as well as social scoring systems. Consumers are given the right to complain and to request explanations on such issues. For violations, the regulation foresees fines ranging from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of turnover, depending on the severity of the infringement.