Convert a given floating point binary number to its decimal equivalent

There are three questions. The second and third questions must be done in MATLAB. Thank you.


1. (i) What is a floating point number? Why is it called "floating point"?

(ii) Explain the use of the mantissa and exponent in representing such a number on a computer. Why is this split necessary?

(iii) Explain the terms "normalization", "hidden bit" and "bias" in the context of floating point number representation.

2. Write a program that converts a given floating point binary number, with a 24-bit normalized mantissa and an 8-bit exponent, to its decimal (i.e. base-10) equivalent. For the mantissa, use the representation that has a hidden bit, and for the exponent use a bias of 127 instead of a sign bit. Of course, you also need to handle negative mantissas.
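The assignment calls for MATLAB; purely as an illustration of the conversion logic, here is a minimal Python sketch. The exact bit layout is an assumption, since the question leaves it open: the first mantissa bit is taken as the sign, the remaining 23 bits as the fraction with a hidden leading 1, and the exponent as an unsigned 8-bit field biased by 127.

```python
def bin_to_dec(mantissa_bits, exponent_bits, bias=127):
    """Convert a binary floating point number to its base-10 value.

    Assumed layout (the assignment leaves some details open):
    - mantissa_bits: 24 chars; the first is a sign bit, the remaining
      23 are the fraction, with a hidden leading 1 so the magnitude
      is 1.fff... in binary.
    - exponent_bits: 8 chars, unsigned, with a bias of 127.
    """
    sign = -1 if mantissa_bits[0] == '1' else 1
    frac = sum(int(b) * 2.0**-(i + 1) for i, b in enumerate(mantissa_bits[1:]))
    exponent = int(exponent_bits, 2) - bias
    return sign * (1.0 + frac) * 2.0**exponent

# e.g. for part (a), with the spaces removed from the bit strings:
print(bin_to_dec('111100101100010101101010', '01011000'))
```

Translating this to MATLAB is mostly a matter of replacing the comprehension with a loop over the characters of the bit string (or using `bin2dec` for the exponent field).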

Use your program to answer the following questions:

(a) Mantissa: 11110010 11000101 01101010, exponent: 01011000. What is the base-10 number?

(b) What is the largest number (in base 10) the system can represent?

(c) What is the smallest non-zero positive base-10 number the system can represent?

(d) What is the smallest difference between two such numbers? Give your answer in base 10.

(e) How many significant base-10 digits can we trust using such a representation?
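Under one possible reading of the format (a sign bit plus 23 fraction bits with a hidden leading 1, an 8-bit exponent biased by 127, and no reserved exponent codes or denormals), the extremes asked about in (b)-(e) can be probed with closed forms. This is a sketch under those assumptions, not the assignment's required MATLAB output:

```python
# Largest magnitude: all fraction bits 1, largest exponent field (255).
largest = (2.0 - 2.0**-23) * 2.0**(255 - 127)

# Smallest positive normalized value: fraction all zeros, exponent field 0.
smallest = 1.0 * 2.0**(0 - 127)

# Spacing of representable numbers near 1 (the machine epsilon of this format):
eps = 2.0**-23

print(largest, smallest, eps)
```

Since 2**-23 is roughly 1.2e-7, this suggests about 7 trustworthy significant decimal digits, which is what question (e) is driving at.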

The third question is in the attachment.
