
Number Base Converter

Convert between binary, decimal, hexadecimal, and octal number systems instantly. This free online number base converter supports radix conversion for any base from 2 to 36, with BigInt precision for arbitrarily large numbers. Explore positional notation across base-2, base-8, base-10, and base-16 numeral systems with grouped nibbles, bit-length display, and real-time input validation. All conversions happen client-side in your browser.
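The core of such a converter can be sketched in a few lines of JavaScript. This is a minimal illustration, not the tool's actual source; the helper names `parseInBase` and `formatInBase` are hypothetical. It parses a digit string in any base from 2 to 36 into a BigInt, then reformats it in another base, so precision is never lost on large inputs.

```javascript
// Digits for bases up to 36: 0–9 followed by a–z.
const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";

// Parse a digit string in the given base (2–36) into a BigInt.
function parseInBase(str, base) {
  const b = BigInt(base);
  let value = 0n;
  for (const ch of str.toLowerCase()) {
    const d = BigInt(DIGITS.indexOf(ch));
    if (d < 0n || d >= b) throw new Error(`invalid digit "${ch}" for base ${base}`);
    value = value * b + d; // Horner's method: shift left one position, add digit
  }
  return value;
}

// Format a BigInt in the given base; BigInt's toString supports radix 2–36.
function formatInBase(value, base) {
  return value.toString(base);
}

console.log(formatInBase(parseInBase("ff", 16), 2)); // "11111111"
```

Because every conversion round-trips through a BigInt rather than a 64-bit float, numbers far beyond `Number.MAX_SAFE_INTEGER` convert exactly.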


Frequently Asked Questions

How do I convert binary to decimal?
To convert binary to decimal, multiply each bit by 2 raised to the power of its position (starting from 0 on the right) and sum the results. For example, binary 1011 = (1 × 2³) + (0 × 2²) + (1 × 2¹) + (1 × 2⁰) = 8 + 0 + 2 + 1 = 11 in decimal. This positional notation principle applies to all numeral systems — the digit value is multiplied by the base raised to its position index.
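The positional sum above can be written as a one-line JavaScript reduction (the function name `binaryToDecimal` is just for illustration). Each step multiplies the running total by 2 and adds the next bit, which is equivalent to summing bit × 2^position:

```javascript
// Fold left over the bit string: sum = sum·2 + bit at each step.
function binaryToDecimal(bits) {
  return [...bits].reduce((sum, bit) => sum * 2 + Number(bit), 0);
}

console.log(binaryToDecimal("1011")); // 11
```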
What is hexadecimal and why is it used in programming?
Hexadecimal (base-16) uses digits 0–9 and letters A–F. Each hex digit maps to exactly 4 binary bits (a nibble), making it a compact way to represent binary data. For example, 0xFF = 11111111 in binary = 255 in decimal. Programmers use hex for memory addresses, color codes, byte values, and bitwise operations because it is far more readable than long binary strings while maintaining a direct relationship with the underlying binary system.
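The digit-to-nibble mapping can be demonstrated with a short sketch (the helper name `hexToBinary` is hypothetical): each hex digit is converted independently and padded to exactly 4 bits, which is why the correspondence is so direct.

```javascript
// Expand each hex digit to its 4-bit nibble and concatenate.
function hexToBinary(hex) {
  return [...hex]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join("");
}

console.log(hexToBinary("ff"));  // "11111111"
console.log(parseInt("ff", 16)); // 255
```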
How do I convert between octal and binary?
Each octal digit corresponds to exactly 3 binary bits. To convert octal to binary, replace each digit with its 3-bit binary equivalent: 0=000, 1=001, 2=010, 3=011, 4=100, 5=101, 6=110, 7=111. For example, octal 357 = 011 101 111 in binary. To go from binary to octal, group binary digits into sets of 3 from the right and convert each group to its octal equivalent.
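The 3-bit grouping works in both directions, as this sketch shows (helper names are illustrative). Going binary-to-octal, the string is first padded on the left so its length is a multiple of 3 before grouping from the right:

```javascript
// Octal → binary: expand each digit to its 3-bit equivalent.
function octalToBinary(oct) {
  return [...oct]
    .map(d => parseInt(d, 8).toString(2).padStart(3, "0"))
    .join("");
}

// Binary → octal: left-pad to a multiple of 3, then convert each group.
function binaryToOctal(bin) {
  const padded = bin.padStart(Math.ceil(bin.length / 3) * 3, "0");
  let out = "";
  for (let i = 0; i < padded.length; i += 3) {
    out += parseInt(padded.slice(i, i + 3), 2).toString(8);
  }
  return out;
}

console.log(octalToBinary("357"));      // "011101111"
console.log(binaryToOctal("11101111")); // "357"
```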
What bases are commonly used in computer science?
The most common bases are: Binary (base-2) — the fundamental language of computers, using 0s and 1s. Octal (base-8) — used in Unix file permissions and some legacy systems. Decimal (base-10) — the standard human numeral system. Hexadecimal (base-16) — widely used for memory addresses, color codes, and representing byte values compactly. Some systems also use base-36 (digits + entire alphabet) for compact ID encoding.
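JavaScript's built-in radix support covers all of these bases directly, including base-36 ID encoding, as a quick sketch shows:

```javascript
const n = 1234567890;

console.log(n.toString(2));          // binary
console.log(n.toString(8));          // octal
console.log(n.toString(16));         // hexadecimal
console.log(n.toString(36));         // "kf12oi" — compact base-36 ID
console.log(parseInt("kf12oi", 36)); // 1234567890 — decoded back
```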
What is the difference between signed and unsigned binary numbers?
Unsigned binary numbers represent only non-negative values — an 8-bit unsigned number ranges from 0 to 255. Signed binary uses the most significant bit (MSB) as a sign indicator. In two's complement (the most common signed representation), an 8-bit signed number ranges from −128 to 127. The binary value 11111111 is 255 when unsigned, but −1 in signed two's complement. Understanding this distinction is crucial for low-level programming and bitwise operations in languages like C and Rust.
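The unsigned-vs-signed distinction can be made concrete with a small sketch (the helper name `toSigned8` is illustrative): if the most significant bit of an 8-bit pattern is set, the two's complement value is the unsigned value minus 2⁸.

```javascript
// Interpret an 8-bit pattern as signed two's complement.
function toSigned8(bits) {
  const u = parseInt(bits, 2);   // unsigned reading, 0–255
  return u >= 128 ? u - 256 : u; // MSB set ⇒ subtract 2^8
}

console.log(parseInt("11111111", 2)); // 255 (unsigned)
console.log(toSigned8("11111111"));   // -1 (signed two's complement)
console.log(toSigned8("10000000"));   // -128 (most negative 8-bit value)
```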