Lightmatter, a leader in silicon photonics processors, announced its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light both to compute and to transport data within the chip. Computing with light generates less heat, yielding an orders-of-magnitude reduction in energy consumption per chip and dramatic improvements in processor speed.

Since 2010, the amount of compute needed to train a state-of-the-art AI algorithm has grown at five times the rate of Moore's Law scaling, doubling approximately every three and a half months. Lightmatter's processor addresses this growing need for computation to support next-generation AI algorithms.
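
To put those doubling periods in perspective, here is a minimal sketch of the compound-growth arithmetic. The 3.5-month figure is from the article; the 18-month Moore's Law doubling period and the 24-month comparison window are illustrative assumptions, not claims from the source.

```python
# Sketch: compare the compute-growth rates cited in the article.
# Assumptions (not from the source): Moore's Law doubling every
# 18 months; a 24-month comparison window chosen for illustration.
AI_DOUBLING_MONTHS = 3.5      # figure cited in the article
MOORE_DOUBLING_MONTHS = 18.0  # assumed Moore's Law doubling period

def growth_factor(months: float, doubling_period: float) -> float:
    """Multiplier after `months` of growth that doubles every `doubling_period` months."""
    return 2 ** (months / doubling_period)

window = 24.0  # months
ai = growth_factor(window, AI_DOUBLING_MONTHS)        # ~116x
moore = growth_factor(window, MOORE_DOUBLING_MONTHS)  # ~2.5x
print(f"Over {window:.0f} months: AI compute demand x{ai:.0f}, Moore's Law x{moore:.1f}")
```

Under these assumptions, two years of 3.5-month doublings multiply compute demand by roughly a hundred times, while transistor scaling alone delivers only a few times, which is the gap specialized accelerators aim to close.
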

The 3D-stacked chip package contains over a billion FinFET transistors, tens of thousands of photonic arithmetic units, and hundreds of record-setting data converters. The photonic processor runs standard machine learning frameworks, including PyTorch and TensorFlow, enabling state-of-the-art AI algorithms to run on it.

This new architecture is a major advance in the development of photonic processors. Its performance demonstrates that Lightmatter's approach to processor design delivers scalable speed and energy-efficiency advantages over the current electronic compute paradigm, and it serves as the starting point for a roadmap of chips with dramatic performance improvements.
Source: InsideBigData
