According to this article in Nature:
"Superconducting computing chips modelled after neurons can process information faster and more efficiently than the human brain."
With the rise of ML, we have seen work first shift to GPUs (which were designed for the large amounts of linear algebra needed for video games and non-linear video editing, making them inadvertently well suited to ML tasks).
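The connection between graphics hardware and ML comes down to one operation: the dense matrix multiply. A minimal sketch (shapes and values here are arbitrary, purely illustrative) showing that a neural-network layer's forward pass is exactly this kind of bulk linear algebra:

```python
import numpy as np

# A dense layer's forward pass is one matrix multiply plus a bias add --
# the same bulk linear algebra GPUs were built to do for graphics.
# (Illustrative sketch; the shapes below are arbitrary assumptions.)
rng = np.random.default_rng(0)

batch, d_in, d_out = 32, 128, 64
x = rng.standard_normal((batch, d_in))   # a batch of input vectors
W = rng.standard_normal((d_in, d_out))   # layer weights
b = rng.standard_normal(d_out)           # layer bias

y = x @ W + b                            # the entire layer: one matmul
print(y.shape)                           # (32, 64)
```

On a GPU the same `x @ W` runs across thousands of cores at once, which is why training shifted there long before any ML-specific silicon existed.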
We have then seen the advent of FPGAs and, later, dedicated ML hardware, especially Google's exciting work on Tensor Processing Units.
These are all still based on traditional approaches to computing, essentially the classic von Neumann architecture.
Perhaps this new work will lead to a generation of hardware that combines neuroscience, electronic engineering and computer science. There is a level of energy efficiency and speed in the human brain that we are not yet close to matching, even as the raw computing power of ML and distributed systems increases exponentially. Carver Mead’s neuromorphic computing may finally become a practical reality.