Deep-learning models have become pervasive tools in science and engineering. However, their energy requirements now increasingly limit their scalability 1. Deep-learning accelerators 2, 3, 4, 5, 6, 7, 8, 9 aim to perform deep learning energy-efficiently, usually targeting the inference phase and often by exploiting physical substrates beyond conventional electronics. Approaches so far 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 have been unable to apply the backpropagation algorithm to train unconventional novel hardware in situ. The advantages of backpropagation have made it the de facto training method for large-scale neural networks, so this deficiency constitutes a major impediment. Here we introduce a hybrid in situ–in silico algorithm, called physics-aware training, that applies backpropagation to train controllable physical systems. Just as deep learning realizes computations with deep neural networks made from layers of mathematical functions, our approach allows us to train deep physical neural networks made from layers of controllable physical systems, even when the physical layers lack any mathematical isomorphism to conventional artificial neural network layers. To demonstrate the universality of our approach, we train diverse physical neural networks based on optics, mechanics and electronics to experimentally perform audio and image classification tasks. Physics-aware training combines the scalability of backpropagation with the automatic mitigation of imperfections and noise achievable with in situ algorithms. Physical neural networks have the potential to perform machine learning faster and more energy-efficiently than conventional electronic processors and, more broadly, can endow physical systems with automatically designed physical functionalities, for example, for robotics 23, 24, 25, 26, materials 27, 28, 29 and smart sensors 30, 31, 32.

Like many historical developments in artificial intelligence 33, 34, the widespread adoption of deep neural networks (DNNs) was enabled in part by synergistic hardware. In 2012, building on earlier works, Krizhevsky et al. 35 showed that the backpropagation algorithm could be efficiently executed with graphics-processing units to train large DNNs for image classification. Since 2012, the computational requirements of DNN models have grown rapidly, outpacing Moore's law 1. Now, DNNs are increasingly limited by hardware energy efficiency.
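To make the hybrid in situ–in silico structure concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: `physical_system`, `digital_model` and all shapes are illustrative assumptions. It shows one way the split described above can be expressed as a custom autograd function, with the forward pass delegated to the (noisy, imperfect) physical system and gradients estimated by backpropagating through a differentiable digital surrogate.

```python
import torch


def physical_system(x, params):
    # Hypothetical stand-in for a real experiment: a noisy nonlinear
    # transformation plays the role of the controllable physical layer.
    return torch.tanh(x @ params) + 0.01 * torch.randn(x.shape[0], params.shape[1])


def digital_model(x, params):
    # Differentiable digital surrogate of the physical system,
    # used only to estimate gradients in the backward pass.
    return torch.tanh(x @ params)


class PhysicsAwareLayer(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, params):
        ctx.save_for_backward(x, params)
        # In situ: the real physical system produces the layer output.
        with torch.no_grad():
            return physical_system(x, params)

    @staticmethod
    def backward(ctx, grad_output):
        x, params = ctx.saved_tensors
        # In silico: backpropagate through the digital surrogate instead.
        x = x.detach().requires_grad_(True)
        p = params.detach().requires_grad_(True)
        y = digital_model(x, p)
        grad_x, grad_p = torch.autograd.grad(y, (x, p), grad_outputs=grad_output)
        return grad_x, grad_p


# Illustrative training step (dimensions and data are arbitrary).
params = torch.randn(8, 4, requires_grad=True)
opt = torch.optim.SGD([params], lr=0.1)
x, target = torch.randn(32, 8), torch.randn(32, 4)
loss = torch.nn.functional.mse_loss(PhysicsAwareLayer.apply(x, params), target)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the loss is always evaluated on the output of the real system while gradients come from the surrogate, training can automatically compensate for imperfections and noise to the extent that the surrogate captures the system's average behaviour, which is the property the abstract attributes to physics-aware training.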