The power consumption of CPUs and memory systems has traditionally been constrained by the need for strict correctness guarantees: processor voltage, for instance, must leave enough slack to prevent even the rarest timing errors. Many modern applications, however, do not require perfect correctness. An image renderer, for example, can tolerate occasional pixel errors without compromising overall quality of service. Approximate computing exploits these error-tolerant applications to run programs more efficiently.
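To make the error-tolerance point concrete, here is a toy sketch (not part of the original work) that injects random low-order bit flips into an 8-bit grayscale "image" and measures the resulting error. The error model, flip probability, and pixel values are all illustrative assumptions, not a model of any real approximate hardware.

```python
import random

def flip_low_bits(pixel, p=0.01, nbits=2):
    """Flip each of the lowest `nbits` bits of an 8-bit pixel with probability p
    (a hypothetical error model for approximate storage or computation)."""
    for b in range(nbits):
        if random.random() < p:
            pixel ^= (1 << b)
    return pixel

def approx_render(image, p=0.01):
    """Apply independent low-order bit errors to every pixel."""
    return [flip_low_bits(px, p) for px in image]

random.seed(0)                    # fixed seed for reproducibility
image = [128] * 10_000            # a flat 8-bit grayscale "image"
noisy = approx_render(image)

max_err = max(abs(a - b) for a, b in zip(image, noisy))
mean_err = sum(abs(a - b) for a, b in zip(image, noisy)) / len(image)
print(max_err, mean_err)
```

With errors confined to the two lowest bits, no pixel deviates by more than 3 out of 255, and the average deviation stays well below one gray level, which is why such errors are invisible in practice.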
The EnerJ programming language lets programmers mark which data and computations in a program may be approximated, while the rest remains precise. Truffle is an architecture that uses dual supply voltages to provide energy-efficient, approximate execution via precision-aware ISA and microarchitectural extensions.
More recently, we proposed Neural Processing Units, or NPUs, which use learning mechanisms such as neural networks as highly efficient accelerators for approximate programs.