New machine learning tool promises advances in computing algorithms

Systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products, a new study suggests.

Researchers used machine learning tools to create a digital twin, or a virtual copy, of an electronic circuit that exhibits chaotic behaviour. They found that the digital twin could successfully predict how the real circuit would behave.

Having access to an efficient digital twin is likely to have a sweeping impact on how scientists develop future autonomous technologies.

“The problem with most machine learning-based controllers is that they use a lot of energy or power, and they take a long time to evaluate,” said Robert Kent, lead author of the study.

“Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”
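Kent's point about sensitivity can be illustrated with a toy chaotic system. The sketch below uses the logistic map as a stand-in (the study used an electronic circuit, not this map): two trajectories starting one part in ten billion apart quickly diverge to a macroscopic gap, which is what makes traditional controller design so hard.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> 4x(1 - x). A toy stand-in, not the study's circuit.
x, y = 0.4, 0.4 + 1e-10   # two starts differing by one part in 10^10
max_gap = 0.0
for t in range(100):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial perturbation is amplified at every step, so the
# two trajectories end up macroscopically different.
print(f"largest gap over 100 steps: {max_gap:.3f}")
```

Because the gap grows roughly exponentially, no fixed controller tuned for one trajectory stays valid for its near-identical neighbour for long.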

How the digital twin can advance future technologies

The team’s digital twin was built to optimise a controller’s efficiency and performance, and the researchers found that it reduced power consumption as a result.

It achieves this largely because it was trained using a machine learning approach called reservoir computing.

Although similarly sized computer chips have been used in devices such as smart fridges, according to the study, this computing ability makes the new model especially well equipped to handle dynamic systems, from self-driving vehicles to heart monitors that must adapt quickly to a patient’s heartbeat.

Kent explained: “Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly.”

Putting the computing algorithm to the test

To test this theory, the researchers directed their model to complete complex control tasks and compared its results with those of previous control techniques.

The study revealed that their approach achieved higher accuracy on the tasks than a linear controller, while being significantly less computationally complex than a previous machine learning-based controller.

Although their computing algorithm requires more energy to operate than a linear controller, the trade-off pays off: once powered up, the team’s model lasts longer and is considerably more efficient than the machine learning-based controllers currently on the market.

Building on these results, future work will likely focus on training the model for other applications, such as quantum information processing.

“Not enough people know about these types of algorithms in the industry and engineering, and one of the big goals of this project is to get more people to learn about them,” Kent concluded.
