A 32-bit number can distinguish between 4 billion+ values (2^32 = 4,294,967,296) instead of just 256. As a signed integer, that typically means a range from -2,147,483,648 to +2,147,483,647.
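If you want to check those counts yourself, it's just arithmetic:

```python
# Raw value counts for 8 bits vs 32 bits, plus the signed 32-bit range.
print(2 ** 8)        # 256
print(2 ** 32)       # 4294967296 -- the "4 billion+"
print(2 ** 31 - 1)   # 2147483647 -- largest signed 32-bit integer
print(-(2 ** 31))    # -2147483648 -- smallest signed 32-bit integer
```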
Floating point means it can spend that precision at different scales. It can draw roughly 8 million levels of depth between 1 cm and 2 cm (a float32 mantissa is 23 bits, so each doubling of scale holds 2^23 ≈ 8.4 million steps), or the same ~8 million levels between 1 billion cm and 2 billion cm, and do both in the same image if it needs to.
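Here's a small sketch of that using NumPy's np.spacing(), which reports the gap between a float and the next representable value. The absolute step size grows with the scale, but the relative precision stays fixed:

```python
import numpy as np

# Step size (ULP) of a 32-bit float at different magnitudes.
print(np.spacing(np.float32(1.0)))   # ~1.19e-07 -- step size between 1 and 2
print(np.spacing(np.float32(1e9)))   # 64.0      -- step size between 1e9 and 2e9

# Either way, one doubling of scale holds the same number of steps:
print(2 ** 23)  # 8388608 -- ~8.4 million levels per doubling
```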
A 32-bit floating point number has enough range to count a quintillion light years in centimeters (around 10^36 cm, comfortably under float32's ~3.4 x 10^38 maximum) while also representing the width of a subatomic particle, in the same format.
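To make that concrete, here's a sketch using NumPy's float32 limits. The light-year and proton-radius figures are rough textbook constants, not anything from this discussion:

```python
import numpy as np

f32 = np.finfo(np.float32)
print(f32.max)   # ~3.4e+38 -- largest finite float32
print(f32.tiny)  # ~1.2e-38 -- smallest normal float32

# One quintillion (1e18) light years, expressed in centimeters:
ly_in_cm = 9.461e17                 # ~centimeters per light year
print(np.float32(1e18 * ly_in_cm))  # ~9.46e+35 -- fits with room to spare

# Rough proton radius in centimeters, same type:
print(np.float32(8.4e-14))          # also representable
```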
The upshot of all that precision and range is that you never have to scale a 256-level depth map to represent a certain distance from near to far. 0 is zero, but what is 255 in real-world units? 10 feet? 50 feet? A mile? A float depth map can skip the question and store the actual distance.
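A sketch of the difference; the decode_8bit helper and its near/far values are hypothetical, just to show that the same stored byte is meaningless without metadata the 8-bit file doesn't carry:

```python
import numpy as np

def decode_8bit(value, near_cm, far_cm):
    # An 8-bit depth value only becomes a distance once you know
    # what near/far range it was scaled to.
    return near_cm + (value / 255.0) * (far_cm - near_cm)

print(decode_8bit(128, 0, 305))     # ~153 cm   if the range was 0 to 10 ft
print(decode_8bit(128, 0, 160934))  # ~80787 cm if the range was 0 to 1 mile

# A float32 depth map can just store centimeters directly:
depth_cm = np.float32(153.2)
print(depth_cm)  # the value *is* the distance -- no scale factor needed
```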