About Text to Decimal
Each character is converted to its Unicode code point as a decimal number, separated by spaces. This is useful for encoding text in numeric form for transmission or analysis. Runs entirely in your browser.
Paste your text and the tool emits one decimal number per Unicode code point — 'A' becomes 65, 'é' becomes 233, '🎉' becomes 127881. It uses String.prototype.codePointAt rather than charCodeAt, which means characters outside the Basic Multilingual Plane (most emoji and CJK extension characters) are returned as a single number above 0xFFFF instead of being split into surrogate pairs.
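The encoding step described above can be sketched in a few lines. This is a minimal illustration, not the tool's actual source; it assumes space-separated output:

```javascript
// Convert text to space-separated decimal code points.
// Iterating a string with for...of yields whole code points,
// so a surrogate pair arrives as one character, not two code units.
function textToDecimal(text) {
  const out = [];
  for (const ch of text) {
    out.push(ch.codePointAt(0));
  }
  return out.join(" ");
}

console.log(textToDecimal("Aé🎉")); // "65 233 127881"
```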
Decoder mode accepts space-separated decimal numbers and reconstructs the string with String.fromCodePoint. Numbers larger than 0x10FFFF (1,114,111) are invalid Unicode code points and will produce a replacement character. Useful for inspecting raw character values when debugging encoding issues, or for embedding code points in numeric escape sequences.
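A decoder along these lines could look as follows. Note that String.fromCodePoint actually throws a RangeError for values above 0x10FFFF, so this sketch assumes out-of-range numbers are mapped to U+FFFD, matching the behavior described above:

```javascript
// Convert whitespace- or comma-separated decimals back to text.
// Out-of-range or non-integer values become U+FFFD (the replacement
// character) instead of letting String.fromCodePoint throw.
function decimalToText(input) {
  return input
    .trim()
    .split(/[\s,]+/)
    .map(Number)
    .map((n) =>
      Number.isInteger(n) && n >= 0 && n <= 0x10ffff
        ? String.fromCodePoint(n)
        : "\uFFFD"
    )
    .join("");
}

console.log(decimalToText("65 233 127881")); // "Aé🎉"
console.log(decimalToText("1114112"));       // "�" (U+FFFD)
```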
Why codePointAt instead of charCodeAt? codePointAt returns the actual Unicode code point. Older tools using charCodeAt return UTF-16 code units, so emoji and rare CJK characters get split into a high and low surrogate pair.
What is the largest valid value? 1,114,111 (0x10FFFF) — the highest Unicode code point. Anything above that is invalid and decodes to the replacement character (U+FFFD = 65533).
Can I change the separator? The separator on output is configurable. The decoder accepts any whitespace or comma between numbers.
Are negative numbers allowed? No — Unicode code points are non-negative integers from 0 to 1,114,111. Negative input is ignored.
Explore the full suite of ENCODERS tools and 290+ other free utilities at Chunky Munster.