Conversion from a Decimal to
a float is well supported. Just apply the built-in float() callable.
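As a quick illustration (the values here are just examples of mine), the conversion is a one-liner:

```python
from decimal import Decimal

d = Decimal("1.1")
f = float(d)  # converts to the nearest binary float; potentially lossy
print(f)     # 1.1
```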
On the other hand, converting a float instance to a Decimal is less clear-cut. A Decimal is supposed to be "as precise as needed", much like a human being performing arithmetic with pencil and paper. A float, however, is already a lossy binary representation of a real number, so we gain little by converting one to a Decimal: the conversion itself is exact, but it faithfully reproduces the float's rounding error. Converting a text string to a Decimal makes better sense, for a string literal (which can be arbitrarily long within practical limits) matches our "natural", "human", variable-precision notation for real numbers.
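This is easy to see in an interpreter session. The example below (my own, not from any reference) shows that constructing a Decimal directly from a float exposes the float's binary artifacts, while going through a string keeps the "human" value:

```python
from decimal import Decimal

# Direct construction is exact, but it preserves the float's binary
# rounding error rather than the decimal literal we typed.
print(Decimal(0.1))        # a long expansion, not 0.1

# Round-tripping through str() recovers the human-readable value.
print(Decimal(str(0.1)))   # Decimal('0.1')
```

The two results compare unequal, which is exactly the "extra information" problem: only you know whether 0.1 meant the decimal literal or the binary float.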
OK, so much for my own buzzing noise.
Please refer to PEP 327 (the Decimal Data Type PEP)
for a sketch of what the Python czars were thinking when the decimal module was conceived.
Python's support for decimal numbers is still developing, so some methods are absent, notably the trigonometric functions. I'm not sure how practical they would be, though: is the precision gained worth the CPU time and memory consumption? For some applications, maybe yes; but I'm afraid the answer is "no" most of the time.
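To be fair, Decimal is not entirely without transcendental support: methods like sqrt(), exp(), and ln() do exist on the type. The sketch below (my own, assuming the standard-library decimal module) shows what is there and the usual workaround for what is not:

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50  # precision is configurable per context

# These transcendental methods are built in and honor the context precision.
print(Decimal(2).sqrt())
print(Decimal(2).ln())

# There is no Decimal.sin(); a common workaround is to round-trip through
# float, which caps the accuracy at float (double) precision anyway.
print(Decimal(math.sin(0.5)))
```

The workaround underlines the practicality question above: if you fall back to float for trig, the extra Decimal precision buys you nothing for that step.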