# Program to convert a given floating point binary number to its decimal equivalent

There are three questions. The second and third questions must be done in MATLAB. Thank you.

1.

(i) What is a floating point number? Why is it “floating point”?

(ii) Explain the use of the mantissa and the exponent in representing such a number on a computer. Why is this necessary?

(iii) Explain the terms “normalization”, “hidden bit” and “bias” in the context of floating point number representation.

2. Write a program that converts a given floating point binary number with a 24-bit normalized mantissa and an 8-bit exponent to its decimal (i.e. base 10) equivalent. For the mantissa, use the representation that has a hidden bit, and for the exponent use a bias of 127 instead of a sign bit. You also need to handle negative numbers in the mantissa.
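Under these conventions, one plausible reading of the format (treating the leading mantissa bit as the sign, an assumption the spec leaves open) decodes a stored pattern as

x = (−1)^s × (1.f)₂ × 2^(E − 127)

where s is the leading mantissa bit, f is the remaining 23 fraction bits (with the hidden leading 1 restored), and E is the unsigned value of the 8-bit exponent field.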

Use your program to answer the following questions:

(a) Mantissa: 11110010 11000101 01101010, exponent: 01011000. What is the base-10 number?

(b) What is the largest number (in base 10) the system can represent?

(c) What is the smallest non-zero positive base-10 number the system can represent?

(d) What is the smallest difference between two such numbers? Give your answer in base 10.

(e) How many significant base-10 digits can we trust using such a representation?
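The assignment asks for MATLAB, but the conversion logic can be sketched in a few lines of Python for reference. This is a sketch under the assumptions stated above: the first mantissa bit is the sign, the remaining 23 bits are the fraction with a hidden leading 1, and the exponent is unsigned with a bias of 127. The function name `binfloat_to_decimal` is my own choice, not part of the assignment.

```python
def binfloat_to_decimal(mantissa_bits, exponent_bits, bias=127):
    """Decode a binary floating point number to a base-10 float.

    Assumed layout (one reading of the assignment spec):
    - mantissa_bits: 24 bits; the first bit is the sign, the remaining
      23 bits are the fraction with an implicit (hidden) leading 1.
    - exponent_bits: 8 bits, unsigned, stored with a bias of 127.
    """
    mantissa_bits = mantissa_bits.replace(" ", "")
    sign = -1 if mantissa_bits[0] == "1" else 1
    # Restore the hidden bit: significand = 1.fraction
    frac = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(mantissa_bits[1:]))
    exponent = int(exponent_bits, 2) - bias
    return sign * (1 + frac) * 2 ** exponent


# Part (a): exponent 01011000 is 88, so the scale factor is 2^(88-127) = 2^-39
print(binfloat_to_decimal("11110010 11000101 01101010", "01011000"))
```

Under these assumptions, part (a) comes out negative and on the order of 10^−12 (since 2^−39 ≈ 1.8 × 10^−12). The extreme values in (b) and (c) follow by feeding the program all-ones and all-zeros fraction bits with the largest and smallest exponent fields.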

The third question is in the attachment.