FPGA Implementation of Double Precision Floating Point Multiplier

Mohd. Abdullah
Bharti Chourasia

Abstract

High-speed computation is a central demand on today's generation of processors. To meet it, many functions are implemented in the processor's hardware rather than computed by software. The majority of the operations a processor executes are arithmetic operations, which dominate applications with heavy mathematical workloads such as scientific computing and image and signal processing. In signal processing especially, multiplication and division are used pervasively. The difficulty with these operations in hardware is that simple algorithms require many iterations, which makes them slow, while fast algorithms demand complex computation within each cycle. Division yields either a quotient and remainder or a floating-point number, which is the main reason it is more complex than multiplication.
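The double-precision multiplication datapath the title refers to can be illustrated in software. The following is a minimal Python sketch (not the paper's implementation) of the standard IEEE-754 binary64 multiplier steps an FPGA datapath would pipeline — sign XOR, biased-exponent addition, 53×53-bit significand product, and normalization — assuming normal (non-zero, non-denormal, non-special) inputs and truncation in place of round-to-nearest-even:

```python
import struct

def fp64_fields(x: float):
    """Split an IEEE-754 double into (sign, biased exponent, 52-bit fraction)."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

def fp64_mul(a: float, b: float) -> float:
    """Field-by-field binary64 multiply: assumes normal operands, no
    overflow/underflow handling, and truncates instead of rounding."""
    sa, ea, fa = fp64_fields(a)
    sb, eb, fb = fp64_fields(b)
    # Restore the implicit leading 1 of normal significands (53 bits each).
    ma, mb = (1 << 52) | fa, (1 << 52) | fb
    s = sa ^ sb                    # sign of the product
    e = ea + eb - 1023             # remove the double-counted exponent bias
    p = ma * mb                    # full 106-bit significand product
    if p >> 105:                   # product in [2, 4): normalize right by one
        p >>= 1
        e += 1
    frac = (p >> 52) & ((1 << 52) - 1)   # drop implicit 1, truncate low bits
    bits = (s << 63) | (e << 52) | frac
    return struct.unpack("<d", struct.pack("<Q", bits))[0]
```

For products whose significands are exactly representable, the sketch matches the hardware result (e.g. `fp64_mul(1.5, 2.0)` gives `3.0`); in general it can differ from a compliant multiplier by one unit in the last place because rounding is omitted.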

Article Details

How to Cite
Abdullah, M., & Chourasia, B. (2022). FPGA Implementation of Double Precision Floating Point Multiplier. International Journal on Recent and Innovation Trends in Computing and Communication, 10(12), 155–160. https://doi.org/10.17762/ijritcc.v10i12.5896
Section
Articles

References

Raul Murillo, Alberto A. Del Barrio, Guillermo Botella, Min Soo Kim, HyunJin Kim and Nader Bagherzadeh, "PLAM: A Posit Logarithm-Approximate Multiplier for Power Efficient Posit-based DNNs," 2021.

Geetam Singh Tomar, Marcus Lloyde George, Abhineet Singh Tomar, "Multi-precision binary multiplier architecture for multi-precision floating-point multiplication," 2021.

Hamzah Abdel-Aziz, Ali Shafiee, Jong Hoon Shin, Ardavan Pedram, Joseph H. Hassoun, "Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators," 2021.

Varun Gohil, Sumit Walia, Joycee Mekie, Manu Awasthi, "A Floating-Point Representation for Error-Resilient Applications," 2021.

N. Bhavani Sudha, Gamini Sridevi, "An Efficient Design of Multiplier and Adder in Quantum-Dot Cellular Automata Technology Using Majority Logic," 2021.

Y. Mounica, K. Naresh Kumar, Sreehari Veeramachaneni, Noor Mahammad, "Energy efficient signed and unsigned radix 16 Booth multiplier design," 2021.

Yuheng Yang, Qing Yuan, and Jian Liu, "An Architecture of Area-Effective High Radix Floating-Point Divider With Low-Power Consumption," 2021.

J. Jean Jenifer Nesam, S. Sivanantham, "Efficient half-precision floating point multiplier targeting color space conversion," 2020.

TaiYu Cheng, Yutaka Masuda, Jun Chen, Jaehoon Yu, Masanori Hashimoto, "Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training," 2020.

Thiruvenkadam Krishnan, Saravanan S., "Design of Low-Area and High Speed Pipelined Single Precision Floating Point Multiplier," 2020.

Chuangtao Chen, Sen Yang, Weikang Qian, Mohsen Imani, Xunzhao Yin, Cheng Zhuo, "Optimally Approximated and Unbiased Floating-Point Multiplier with Runtime Configurability," 2020.

Zhaojun Lu, Md Tanvir Arafin, Gang Qu, "RIME: A Scalable and Energy-Efficient Processing-In-Memory Architecture for Floating-Point Operations," 2020.

Machupalli Lahari, Sonali Agrawal, "Efficient Floating-Point Hub Adder for FPGA," 2020.

D. S. Bormane, Sushma Wadar, Avinash Patil, S. C. Patil, "Acceleration Techniques using Reconfigurable Hardware for Implementation of Floating Point Multiplier," 2020.

Alahari Radhika, Kodati Satyaprasad, and Kalitkar Kishan Rao, "Low Complexity Fused Floating Point FFT Using CSD Arithmetic for OMP CS System," 2020.

V. Ramyaa, R. Seshasayanan, "Low power single precision BCD floating-point Vedic multiplier," 2020.

Mohamed Al-Ashrafy, Ashraf Salem, Wagdy Anis, "An Efficient Implementation of Floating Point Multiplier," 2019.

Ling Zhuo and Viktor K. Prasanna, "Sparse Matrix-Vector Multiplication on FPGAs," 2019.

Gokul Govindu, Ling Zhuo, Seonil Choi and Viktor Prasanna, "Analysis of High-performance Floating-point Arithmetic on FPGAs," 2019.

Soner Yeş., Cansu Ş., Ali Özgür Yılmaz, "Experimental Analysis and FPGA Implementation of the Real Valued Time Delay Neural Network Based Digital Predistortion," 2018.

Aneela Pathan, Tayab D. Memon and Sheeraz Memon, "A Carry-Look Ahead Adder Based Floating-Point Multiplier for Adaptive Filter Applications," 2018.

Junzhong Shen, Yuran Qiao, You Huang, Mei Wen and Chunyuan Zhang, "Towards a Multi-array Architecture for Accelerating Large-scale Matrix Multiplication on FPGAs," 2018.

Y. R. Annie Bessant and T. Latha, "Analysis of Area and Delay for Floating Point Matrix Multiplication," 2018.

Yixing Li, Zichuan Liu, Kai Xu, Hao Yu, Fengbo Ren, "A GPU-Outperforming FPGA Accelerator Architecture for Binary Convolutional Neural Networks," 2018.

Vladimir Rybalkin, Alessandro Pappalardo, "FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs," 2018.

Manish Kumar Jaiswal and Hayden K.-H. So, "DSP48E Efficient Floating Point Multiplier Architectures on FPGA," 2017.

Martin Langhammer, Bogdan Pasca, "Single Precision Natural Logarithm Architecture for Hard Floating-Point and DSP-Enabled FPGAs," 2016.

Prasad Bharade, Yashwant Joshi, Ramchandra Manthalkar, "Design and Implementation of FIR Lattice Filter using Floating Point Arithmetic in FPGA," 2016.

Mohammed Dali, Ryan M. Gibson, Abbes Amira, Abderezak Guessoum and Naeem Ramzan, "An Efficient MIMO-OFDM Radix-2 Single-Path Delay Feedback FFT Implementation on FPGA," 2015.

E. George Walters, "24-Bit Significand Multiplier for FPGA Floating-Point Multiplication," 2015.