Sunday, April 24, 2011

one's complement binary

The one's complement binary numeral system is characterized by the bit complement of any integer value being the arithmetic negative of the value. That is, inverting all of the bits of a number (the logical complement) produces the same result as subtracting the value from 0.


The early days of digital computing were marked by competing ideas about both hardware technology and number representation. One of the great debates concerned the format of negative numbers, with some of the era's most intelligent people holding very strong and differing opinions. One camp supported 2's complement, the system that is dominant today. Another camp supported 1's complement, where any positive value is made into its negative equivalent by inverting all of the bits in a word. A third group supported "sign-magnitude", where a value is changed from positive to negative simply by toggling the word's sign (high-order) bit.

1's complement allowed somewhat simpler hardware designs, as there was no need to convert values when they were passed to or from the math unit. But it also shared a characteristic with sign-magnitude that many in the academic world found distasteful: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero; when used as an operand in any calculation, the result is the same whether the operand is positive or negative zero. One's complement subtraction can also produce an end-around borrow (described below). It can be argued that this makes the addition/subtraction logic more complicated, or that it makes it simpler, since a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The CDC 6000 series and Univac 1100 series computers were based on 1's complement.

2's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. Remember that processors in the early mainframes often consisted of thousands of transistors; eliminating a significant number of them was a significant cost savings. The architects of the early integrated-circuit CPUs (Intel 8080, etc.) chose to use 2's complement math, and as IC technology advanced, virtually all architectures adopted 2's complement. Intel, AMD, and IBM Power chips are all 2's complement.

Number representation

Positive numbers use the same simple binary representation as 2's complement and sign-magnitude. Negative values are the bit complement of the corresponding positive value. The largest positive value has the sign (high-order) bit off (0) and all other bits on (1). The smallest (most negative) value has the sign bit on (1) and all other bits off (0). The table below shows all possible values in a 4-bit system, from −7 to +7.

    Binary  Value      Binary  Value
     0000    +0         1111    −0
     0001    +1         1110    −1
     0010    +2         1101    −2
     0011    +3         1100    −3
     0100    +4         1011    −4
     0101    +5         1010    −5
     0110    +6         1001    −6
     0111    +7         1000    −7
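As a sketch of the mapping (my illustration, not from the original post), the rule "negatives are the bit complement of the positive magnitude" can be written in a few lines of Python; `ones_complement` is a hypothetical helper name:

```python
# Encode a signed integer as an n-bit one's complement pattern.
# Assumption: negatives are the bit complement of the positive magnitude.

def ones_complement(value, bits=4):
    """Return the n-bit one's complement pattern for value, as an int."""
    mask = (1 << bits) - 1
    if value >= 0:
        return value & mask       # plain binary for positive values
    return (~(-value)) & mask     # complement the magnitude for negatives

# Print the full 4-bit range, -7 through +7:
for v in range(-7, 8):
    print(f"{v:+d} -> {ones_complement(v):04b}")
```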


Adding two values is straightforward. Simply align the values on the least significant bit and add, propagating any carry to the bit one position to the left. If a carry propagates past the left-most bit, it wraps around and must be added back in at the right-most bit (an "end-around carry").

Subtraction is similar, except that borrows are propagated to the left instead of carries. If the borrow extends past the end of the word it is said to have "wrapped" around, a condition called an "end-around borrow". When this occurs, the bit must be subtracted from the right-most bit. This phenomenon does not occur in 2's complement arithmetic.
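The two wrap-around rules can be sketched in code. The Python below is a minimal illustration I've added (not from the original article); `oc_add` and `oc_sub` are hypothetical helper names operating on raw 8-bit patterns:

```python
# Minimal sketch of one's complement arithmetic on raw 8-bit patterns.
BITS = 8
MASK = (1 << BITS) - 1

def oc_add(a, b):
    """Add with end-around carry: a carry out of the high bit
    is added back in at the low bit."""
    total = a + b
    if total > MASK:
        total = (total & MASK) + 1
    return total & MASK

def oc_sub(a, b):
    """Subtract with end-around borrow: a borrow past the high bit
    is subtracted back at the low bit."""
    diff = a - b
    if diff < 0:
        diff = (diff & MASK) - 1
    return diff & MASK

# 19 + 3 and 19 - (-3) produce the same result, 22:
print(oc_add(0b00010011, 0b00000011))  # 22
print(oc_sub(0b00010011, 0b11111100))  # 22
```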

It is easy to demonstrate that the bit complement of a positive value behaves as the arithmetic negative of that value. The computation of 19 + 3 produces the same result as 19 − (−3).

Add 3 to 19 (8-bit word):

      00010011   (19)
    + 00000011   ( 3)
    ──────────
      00010110   (22)

Subtract −3 from 19 (8-bit word):

      00010011   ( 19)
    − 11111100   ( −3)
    ──────────
      00010111   (borrow wrapped past the left-most bit)
    −        1   (end-around borrow)
    ──────────
      00010110   (22)

Negative zero

Negative zero is the condition where all bits in a signed word are on (1). This follows the 1's complement rules that a value is negative when the left-most bit is 1, and that a negative value is the bit complement of its magnitude. The value also behaves as zero in computation: adding or subtracting negative zero to/from another value produces the original value.

Adding negative zero:

      00010110   ( 22)
    + 11111111   ( −0)
    ──────────
    1 00010101
    +        1   (end-around carry)
    ──────────
      00010110   (22)

Subtracting negative zero:

      00010110   ( 22)
    − 11111111   ( −0)
    ──────────
      00010111   (borrow wrapped around)
    −        1   (end-around borrow)
    ──────────
      00010110   (22)

Negative zero is easily produced in a 1's complement adder. Simply add the positive and negative of the same magnitude.

      00010110   ( 22)
    + 11101001   (−22)
    ──────────
      11111111   (−0)

Although the math always produces the correct results, a side effect of negative zero is that software must explicitly test for it, for example when comparing a result against zero.
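A short Python sketch (an illustration I've added, not from the original) of why that test is needed; `is_zero` is a hypothetical helper name:

```python
# In 8-bit one's complement, zero has two encodings, so a naive
# comparison against 0 misses the all-ones pattern.
MASK = 0b11111111
NEG_ZERO = MASK          # -0 is all bits on

def is_zero(pattern):
    """True for both +0 (all bits off) and -0 (all bits on)."""
    return pattern == 0 or pattern == NEG_ZERO

# Adding +3 and -3 (the bit complement of 3) in a plain adder yields -0:
result = (0b00000011 + (0b00000011 ^ MASK)) & MASK  # no carry out, so no end-around step
print(f"{result:08b}")  # 11111111
print(is_zero(result))  # True
```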

Avoiding negative zero

The generation of negative zero becomes a non-issue if addition is performed with a complementing subtractor. The first operand is passed to the subtractor unmodified, the second operand is complemented, and the subtraction generates the correct result while avoiding negative zero. The previous example added 22 and −22 and produced −0; with a complementing subtractor, 22 + (−22) is computed as 22 − 22:

      00010110   (22)
    − 00010110   (the complement of −22 is +22)
    ──────────
      00000000   (+0)
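The same idea in Python (my sketch, not from the original; `oc_sub` and `oc_add_via_sub` are hypothetical names):

```python
# Implement addition as a - (complement of b), so that adding a value
# to its own negative yields +0 rather than -0.
MASK = 0b11111111

def oc_sub(a, b):
    """8-bit one's complement subtract with end-around borrow."""
    diff = a - b
    if diff < 0:
        diff = (diff & MASK) - 1
    return diff & MASK

def oc_add_via_sub(a, b):
    """Add by complementing the second operand and subtracting."""
    return oc_sub(a, b ^ MASK)

pos22 = 0b00010110
neg22 = pos22 ^ MASK     # -22 is the bit complement of +22
print(f"{oc_add_via_sub(pos22, neg22):08b}")  # 00000000 -> positive zero
```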

The interesting "corner cases" occur when one or both operands are zero and/or negative zero.

Subtracting +0 is trivial (as shown above). If the second operand is negative zero, it is inverted and the original value of the first operand is the result. Subtracting −0 is also trivial. The result can be only one of two cases. In case 1, operand 1 is −0, so the result is produced simply by subtracting 1 from 1 at every bit position. In case 2, the subtraction generates a value that is 1 larger than operand 1 and an end-around borrow. Completing the borrow generates the same value as operand 1.

The only really interesting case arises when both operands are plus or minus zero. Look at this example (4-bit word, plain adder):

    0000 + 0000 = 0000   (+0) + (+0) = +0
    0000 + 1111 = 1111   (+0) + (−0) = −0
    1111 + 0000 = 1111   (−0) + (+0) = −0
    1111 + 1111 = 1111   (−0) + (−0) = −0, via end-around carry

This example shows that of the four possible conditions when adding only ±0, an adder produces −0 in three of them. A complementing subtractor produces −0 only when both operands are −0.
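This behavior can be checked mechanically. The Python below is a sketch I've added (not part of the original); `oc_add` and `oc_sub` are hypothetical helpers for a 4-bit word:

```python
# Enumerate the four +-0 input combinations for a plain adder versus
# a complementing subtractor (4-bit word).
MASK = 0b1111
POS0, NEG0 = 0b0000, 0b1111

def oc_add(a, b):
    total = a + b
    if total > MASK:
        total = (total & MASK) + 1   # end-around carry
    return total & MASK

def oc_sub(a, b):
    diff = a - b
    if diff < 0:
        diff = (diff & MASK) - 1     # end-around borrow
    return diff & MASK

for a in (POS0, NEG0):
    for b in (POS0, NEG0):
        adder = oc_add(a, b)
        subtractor = oc_sub(a, b ^ MASK)   # a + b computed as a - ~b
        print(f"{a:04b} + {b:04b}: adder={adder:04b}, subtractor={subtractor:04b}")
```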

