Schematic representation of a processor for matrix multiplications which runs on light. Together with an optical frequency comb, the waveguide crossbar array permits highly parallel data processing.

Light-carrying chips advance machine learning

An international team of scientists has demonstrated an initial prototype of a photonic processor that uses tiny beams of light confined inside silicon chips to process information much more rapidly than electronic chips can, and in parallel, something traditional chips are incapable of doing.

The collaborative discovery, by researchers at the Universities of Oxford, Münster, Exeter, Pittsburgh, École Polytechnique Fédérale (EPFL) and IBM Research Europe, has been published in Nature.

Machine learning and artificial intelligence applications make use of vast troves of data. The volume of data is growing exponentially, and turning it into useful information requires ever more computer processing. Conventional computer processors cannot keep up with this demand. The international research team has developed a new approach and processor architecture that offers a potential route to performing these tasks at high throughput, essentially by combining processing and data storage on a single chip, a so-called in-memory processor, but using light.

Senior co-author Wolfram Pernice at Münster University, one of the professors who led this research, said, ‘Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs. This is much faster than conventional chips which rely on electronic data transfer, such as graphics cards or specialised hardware like TPUs (Tensor Processing Units).’

The team implemented a hardware accelerator for so-called matrix-vector multiplications. Such operations form the backbone of neural networks (families of algorithms loosely inspired by the human brain) that are used to compute machine learning algorithms. Using light allowed the team to carry out calculations in parallel on multiple wavelengths at once, since different colours of light do not interfere with one another. To do this, however, they used yet another recent invention, a chip-based frequency comb, as the light source.
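As a plain-software analogy (not the authors' photonic implementation), the matrix-vector multiplication that the processor performs optically can be sketched in a few lines of Python; one neural-network layer is essentially one such multiplication followed by a nonlinearity:

```python
# Software sketch of a matrix-vector multiplication (MVM), the core
# operation the photonic processor carries out with light.
# Illustrative only; the values below are arbitrary.

def matvec(matrix, vector):
    """Multiply a matrix (a list of rows) by a vector."""
    assert all(len(row) == len(vector) for row in matrix)
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

# A single neural-network layer applies its weight matrix to an
# input vector -- exactly one MVM.
weights = [[1, 2, 3],
           [0, 1, -1]]
inputs = [1, 1, 1]
print(matvec(weights, inputs))  # [6, 0]
```

In the photonic version, the weights are encoded in the chip's waveguide crossbar array and the multiply-accumulate happens as light propagates, rather than in a software loop.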

Tobias Kippenberg, Professor at EPFL, said, ‘Our study is the first to apply frequency combs in the field of artificial neural networks. The frequency comb provides a variety of optical wavelengths which are processed independently of one another in the same photonic chip.’

Once the chips were designed and fabricated, the researchers used a convolutional neural network for the recognition of handwritten numbers. These networks are a concept in the field of machine learning inspired by biological processes. Used primarily in the processing of image or audio data, they currently achieve some of the highest accuracies for classification.

Johannes Feldmann, now based at the University of Oxford Department of Materials, said, ‘The convolution operations between input data and one or more filters – which can identify edges in an image, for example – are well suited to our matrix architecture.'
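The mapping Feldmann alludes to, recasting a convolution as a matrix-vector product, can be sketched as follows. This is a generic software illustration (sometimes called "im2col"), not the paper's hardware design; the image and edge-detecting filter are made up for the example:

```python
# Sketch: a 2D convolution rewritten as dot products over image
# patches, which is what makes convolutions map naturally onto a
# matrix-multiplication accelerator. Values are illustrative.

def im2col(image, k):
    """Unroll every k-by-k patch of `image` into a row (valid padding)."""
    h, w = len(image), len(image[0])
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append([image[i + di][j + dj]
                         for di in range(k) for dj in range(k)])
    return rows

def conv2d(image, kernel):
    k = len(kernel)
    flat_kernel = [v for row in kernel for v in row]
    # Each output pixel is one dot product, i.e. one row of an MVM.
    return [sum(p * w for p, w in zip(patch, flat_kernel))
            for patch in im2col(image, k)]

# A simple vertical-edge filter applied to a 3x3 image whose right
# column is bright: the edge shows up as nonzero outputs.
image  = [[0, 0, 1],
          [0, 0, 1],
          [0, 0, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [0, 2, 0, 2]
```

Because every output pixel reduces to a dot product against the same filter weights, the whole convolution becomes one matrix multiplication, the operation the photonic tensor core is built for.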

Nathan Youngblood added, 'Exploiting wavelength multiplexing permits higher data rates and computing densities, i.e. operations per area of processor, not previously attained.'

Professor C. David Wright of the University of Exeter, who leads the EU project Fun-COMP, which funded this work, said, ‘This work is a real showcase of European collaborative research. Whilst every research group involved is world-leading in their own way, it was bringing all these parts together that made this work truly possible.’

The results, published in Nature today, have a wide range of potential applications:

  • In the field of artificial intelligence more data can be processed simultaneously while saving energy.
  • The use of larger neural networks allows more accurate, and hitherto unattainable, forecasts and more precise data analysis.
  • Within clinical settings, the photonic processors can support the evaluation of large quantities of data for diagnoses, e.g. high-resolution 3D data produced in special imaging methods.
  • In the field of self-driving vehicles, sensor data can be evaluated more rapidly.
  • IT infrastructures such as cloud computing can be expanded by providing more storage space, computing power and applications software.  

Abu Sebastian, senior co-author who oversees the efforts on emerging computing paradigms at IBM Research Zurich, said, ‘One of the key differentiators for an in-memory photonic processor compared to its electronic counterpart is the ability to parallelize in the frequency domain, making it particularly well suited for computational primitives such as convolutions.’
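A simple software analogy for this frequency-domain parallelism (an illustration, not the hardware design): each optical wavelength carries an independent input vector through the same weight matrix at the same time, so N wavelengths yield N matrix-vector products in one pass:

```python
# Analogy for wavelength multiplexing: the same weight matrix is
# applied to several input vectors at once, one per "wavelength".
# Illustrative sketch only; values are arbitrary.

def matvec(matrix, vector):
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def wavelength_parallel_mvm(matrix, vectors):
    """Apply one weight matrix to many inputs -- the electronic
    analogue of wavelength-multiplexed operation, where the photonic
    chip performs all of these simultaneously."""
    return [matvec(matrix, v) for v in vectors]

weights = [[1, 0],
           [0, 2]]
# Three "wavelengths", each carrying its own input vector.
inputs = [[1, 1], [2, 3], [0, 5]]
print(wavelength_parallel_mvm(weights, inputs))  # [[1, 2], [2, 6], [0, 10]]
```

In software this loop still runs sequentially; the point of the photonic chip is that the wavelengths propagate through the crossbar array concurrently, so the parallelism comes essentially for free.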

Professor Harish Bhaskaran, senior co-author at the Department of Materials, University of Oxford, said, ‘While the current work provides a pathway towards implementing such processors in the photonic domain, many daunting scientific and technological challenges remain. This is what makes this field an exciting and fast-moving area of research.’

Read ‘Parallel convolution processing using an integrated photonic tensor core’ in Nature: https://www.nature.com/articles/s41586-020-03070-1

The work was carried out as part of the H2020 project Fun-COMP (#780848), see www.fun-comp.org for further details and with additional financial support from the European Research Council (ERC) Grants “PINQS” and “PROJESTOR”.

Oxford authors include Johannes Feldmann, Nathan Youngblood, Xuan Li and Harish Bhaskaran.