Converting between signed magnitude and decimal is an important skill taught in computer science classes. Signed magnitude is a binary representation in which the far left bit is a sign bit, such as 01111110. Decimal numbers are what you use in normal daily life, such as -1, 0, 1, and 2. Converting between the two forms requires understanding how binary place values and the sign bit in signed magnitude work.

Label each digit of the signed magnitude number with an increasing power of 2, starting from the far right digit and moving to the left. Powers of 2 are in the form of 2^0, 2^1, 2^2, 2^3 and so on. Ignore the far left digit, which is the sign bit, and ignore any padding 0's between the far left digit and the first 1. Reading from right to left, the labeling sequence is "1, 2, 4, 8, 16, 32" and so on. For example, the signed magnitude number "10000101" gets the labels "4, 2, 1", with the far left digit and the padding zeros being ignored.
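The labeling step can be sketched in Python. The function name label_bits is my own choice for illustration, not a standard name:

```python
def label_bits(bits):
    """Return the power-of-2 labels for a signed magnitude string,
    skipping the sign bit and any padding zeros after it.
    Labels are listed left to right, largest first."""
    magnitude = bits[1:].lstrip("0")  # drop sign bit and padding 0's
    # The far right digit gets 2^0 = 1; each step left doubles the label
    return [2 ** i for i in range(len(magnitude) - 1, -1, -1)]

print(label_bits("10000101"))  # [4, 2, 1]
```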

Add up all the label values whose corresponding digit in the signed magnitude number is a 1. For example, 10000101 gives 1 + 4 = 5.
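This summing step can be sketched as follows; the helper name sum_labels is hypothetical:

```python
def sum_labels(bits):
    """Sum the power-of-2 labels wherever the magnitude bit is 1,
    ignoring the far left sign bit."""
    magnitude = bits[1:]  # everything after the sign bit
    total = 0
    # Walk right to left so index i is the power of 2 for that digit
    for i, bit in enumerate(reversed(magnitude)):
        if bit == "1":
            total += 2 ** i
    return total

print(sum_labels("10000101"))  # 1 + 4 = 5
```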

Add a negative sign to the front of the number if the far left digit is a 1; if it is a 0, the number is positive. For example, 10000101 becomes -5. This is the decimal equivalent of the signed magnitude number.
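The whole procedure can be condensed into one short Python sketch, using the built-in int with base 2 to do the summing; the function name is my own:

```python
def signed_magnitude_to_decimal(bits):
    """Convert a signed magnitude bit string to a decimal integer."""
    value = int(bits[1:], 2)  # magnitude: all bits after the sign bit
    # A leading 1 means negative; a leading 0 means positive
    return -value if bits[0] == "1" else value

print(signed_magnitude_to_decimal("10000101"))  # -5
print(signed_magnitude_to_decimal("01111110"))  # 126
```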