The following are the research themes of the Miwa Laboratory.

Heterogeneous Computing

Heterogeneous computing on hierarchical heterogeneous systems

Most studies on improving the energy efficiency of computer systems are based on the idea of scaling down non-performance-critical hardware devices to reduce their power consumption. However, because low-power computing has already been studied extensively, it is becoming difficult for this approach to improve the energy efficiency of modern computer systems any further. To overcome this problem, we propose hierarchically increasing the amount of hardware in a system to improve its energy efficiency. Our approach equips a computer system with many heterogeneous hardware devices across various levels of the system hierarchy. The system automatically selects a combination of energy-efficient devices depending on the application and its execution phase, and devices not used for computation are powered off to save power. This approach, which is a form of heterogeneous computing, enables computer systems to achieve higher performance with low power consumption. We are currently studying which devices should be made heterogeneous, how they should be designed, and how they should be managed. This work is conducted in collaboration with Columbia University and Nagoya University.
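
The following is a minimal sketch of the phase-based device selection idea, not an implementation of our system: the device names, phase labels, and energy numbers are hypothetical, and a real system would obtain them from profiling or online power/performance models.

    # Hypothetical per-phase energy estimates (in joules) for candidate
    # device combinations in a hierarchically heterogeneous system.
    ENERGY_MODEL = {
        "compute-bound": {("big-core", "hbm"): 12.0, ("little-core", "hbm"): 18.0,
                          ("big-core", "dram"): 14.5},
        "memory-bound":  {("big-core", "hbm"): 9.0,  ("little-core", "hbm"): 7.5,
                          ("little-core", "dram"): 11.0},
    }

    ALL_DEVICES = {"big-core", "little-core", "hbm", "dram"}

    def select_devices(phase: str) -> set[str]:
        """Pick the device combination with the lowest estimated energy
        for the given application phase."""
        candidates = ENERGY_MODEL[phase]
        best_combo = min(candidates, key=candidates.get)
        return set(best_combo)

    def power_gate_unused(active: set[str]) -> set[str]:
        """Devices not selected for this phase can be powered off."""
        return ALL_DEVICES - active

    if __name__ == "__main__":
        for phase in ("compute-bound", "memory-bound"):
            active = select_devices(phase)
            print(f"{phase}: use {sorted(active)}, "
                  f"power off {sorted(power_gate_unused(active))}")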

Power Management in High Performance Computing Systems

Power shifting for improved system performance under power constraints

Future supercomputers are required to deliver exascale computing performance within a power constraint of 20-30 MW. To this end, we propose a system that overprovisions hardware devices such as CPUs, memories, networks, and accelerators. The system controls the power budget of each hardware device depending on the type of application and its execution phase. For example, the power budget of a CPU is decreased by lowering its frequency or reducing the number of active cores. This control process is called power shifting. We are currently studying power shifting between two devices (e.g., between CPUs and memories, and between CPUs and networks); power shifting across all devices remains a challenging problem. This work is conducted in collaboration with the University of Tokyo, Kyushu University, and Fujitsu.
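
The sketch below illustrates the basic idea of shifting power between two devices (a CPU and memory) under a fixed node power cap; it is only an illustration, and the cap, step size, floor, and utilization values are hypothetical rather than taken from our system.

    # Minimal power-shifting sketch between a CPU and memory under a node cap.
    NODE_POWER_CAP = 200.0   # watts available to CPU + memory (hypothetical)
    STEP = 5.0               # watts moved per adjustment
    MIN_BUDGET = 40.0        # floor so neither device is starved

    def shift_power(cpu_budget, mem_budget, cpu_util, mem_util):
        """Move part of the power budget toward the busier device while
        keeping the total at or below the node power cap."""
        assert cpu_budget + mem_budget <= NODE_POWER_CAP
        if cpu_util > mem_util and mem_budget - STEP >= MIN_BUDGET:
            # Compute-bound phase: give the CPU more headroom
            # (e.g., raise its frequency or wake more cores).
            return cpu_budget + STEP, mem_budget - STEP
        if mem_util > cpu_util and cpu_budget - STEP >= MIN_BUDGET:
            # Memory-bound phase: lower the CPU budget
            # (e.g., reduce its frequency or the number of active cores).
            return cpu_budget - STEP, mem_budget + STEP
        return cpu_budget, mem_budget

    if __name__ == "__main__":
        cpu, mem = 120.0, 80.0
        # A memory-bound phase observed via hardware counters (hypothetical values).
        cpu, mem = shift_power(cpu, mem, cpu_util=0.35, mem_util=0.90)
        print(f"CPU budget: {cpu} W, memory budget: {mem} W")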

Computing Platform for Neuro-Computing

Programmable neural network accelerators

Modern CPUs, which have evolved on the von Neumann architecture, suffer from large power consumption. Many researchers expect that future computer systems will adopt alternative architectures such as neuro-computers. Although neuro-computing has achieved great success in pattern recognition, the establishment of brain-like computing still lies far in the future, and thorough studies of neural networks and machine learning are needed to realize brain-like computer systems. Because simulating neural networks is a time-consuming process, we are developing a new hardware platform for neural network simulation. This work is conducted in collaboration with Nagoya Institute of Technology.