Computer Science and Automation Distinguished Lecture – Beating Floating Point at its Own Game: Posit Arithmetic

By:
Professor John L. Gustafson
National University of Singapore

Host Faculty: 
Prof. R. Govindarajan
Indian Institute of Science, Bengaluru

Date:
Thursday, October 26, 2017 – 4:00 PM

Venue:
CSA Seminar Hall (Room No. 254, First Floor), Indian Institute of Science, Bengaluru

Abstract:
A new data type called “posit” is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and Not-a-Number (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality.
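The posit format described above packs a sign bit, a variable-length "regime" run, an exponent field, and a fraction into a fixed-width word; the regime is what gives posits their tapered accuracy and large dynamic range. As a rough illustration (not code from the talk), the following sketch decodes a small posit assuming the standard field layout with es = 2 exponent bits; all names here are hypothetical:

```python
def decode_posit(bits: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit into a Python float (illustrative sketch).

    Layout assumed: sign bit, then a run of identical regime bits
    terminated by the opposite bit, then up to `es` exponent bits,
    then fraction bits. es=2 follows the common posit convention.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):           # 1000...0 encodes NaR (Not a Real)
        return float("nan")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:                        # negative posits are 2's-complement encoded
        bits = (-bits) & mask

    # Regime: run length of identical bits after the sign bit.
    rest = (bits << 1) & mask           # drop the sign bit
    r0 = rest >> (n - 1)                # first regime bit
    run = 0
    while run < n - 1 and (rest >> (n - 1 - run)) & 1 == r0:
        run += 1
    k = run - 1 if r0 == 1 else -run    # regime value

    # Bits left after sign, regime run, and the terminating regime bit.
    rem = max(n - (1 + run + 1), 0)
    tail = bits & ((1 << rem) - 1) if rem else 0

    # Exponent: next es bits, right-padded with zeros if truncated.
    e_bits = min(es, rem)
    exponent = (tail >> (rem - e_bits)) << (es - e_bits) if e_bits else 0
    frac_bits = rem - e_bits
    fraction = tail & ((1 << frac_bits) - 1)

    scale = k * (1 << es) + exponent    # total power-of-two scaling
    return sign * (1.0 + fraction / (1 << frac_bits)) * 2.0 ** scale
```

Note that the encoding needs no subnormals, signed zeros, or a sea of NaN bit patterns: zero and NaR each have exactly one representation, which is part of why a posit unit can use less circuitry than an IEEE FPU.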

A comprehensive series of benchmarks compares floats and posits on the decimals of accuracy they deliver at a fixed precision. Low-precision posits provide a better solution than approximate computing methods that try to tolerate decreased answer quality. High-precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game.

Biography of the Speaker:

Dr. John L. Gustafson is an applied physicist and mathematician who is a Visiting Scientist at A*STAR and a Professor at NUS. He is a former Director at Intel Labs and former Chief Product Architect at AMD. A pioneer in high-performance computing, he introduced cluster computing in 1985 and first demonstrated scalable massively parallel performance on real applications in 1988. That work, which became known as Gustafson's Law, won him the inaugural ACM Gordon Bell Prize. He is also a recipient of the IEEE Computer Society's Golden Core Award.
