Scientific notation keeps the precision of a number explicit by separating its significant digits (the mantissa) from its order of magnitude (the exponent). This converter renders the same number four ways at once — standard decimal, scientific (a × 10^b), E-notation as it appears in spreadsheets and code, and engineering notation where exponents are constrained to multiples of three to match SI prefixes.
A slider at the top sets the number of significant figures from 1 up to 15 — the largest digit count that double-precision floating point can reliably preserve. Inputs may already be in scientific form: 1.23e6, 1.23 × 10^6, and 1.23x10^6 are all parsed correctly.
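The sig-fig rounding behind the slider can be sketched with JavaScript's built-in `toPrecision`. This is an illustrative helper, not the converter's own code; the name `roundToSigFigs` is assumed.

```javascript
// Hypothetical helper: round a value to n significant figures.
// toPrecision returns a string (sometimes in exponential form),
// so converting back to a Number collapses it to a plain value.
function roundToSigFigs(value, n) {
  if (value === 0) return 0; // toPrecision on 0 needs no special handling, but be explicit
  return Number(value.toPrecision(n));
}

console.log(roundToSigFigs(12345.6789, 4)); // 12350
console.log(roundToSigFigs(0.00123456, 3)); // 0.00123
```

`toPrecision` accepts 1 to 100 digits in modern engines, which is why capping the slider at 15 is a deliberate precision choice rather than an API limit.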
Scientific and engineering notation both use a coefficient times a power of ten, but engineering notation forces the exponent to a multiple of three (10³, 10⁶, 10⁹…) so it lines up with the SI prefixes (kilo, mega, giga). For example, 12,345 is 1.2345 × 10⁴ in scientific notation but 12.345 × 10³ in engineering notation.
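The engineering-notation rule above amounts to snapping the exponent down to the nearest multiple of three. A minimal sketch of one way to do this (an assumed approach, not necessarily the converter's implementation):

```javascript
// Convert a value to engineering notation: exponent is floored to
// the nearest multiple of 3, so the coefficient falls in [1, 1000).
function toEngineering(value, sigFigs = 5) {
  if (value === 0) return "0";
  const exp = Math.floor(Math.log10(Math.abs(value)));
  const engExp = Math.floor(exp / 3) * 3; // snap down to a multiple of 3
  const coeff = value / Math.pow(10, engExp);
  return `${Number(coeff.toPrecision(sigFigs))} × 10^${engExp}`;
}

console.log(toEngineering(12345));      // "12.345 × 10^3"
console.log(toEngineering(-0.0000456)); // "-45.6 × 10^-6"
```

Note that `Math.floor(exp / 3)` rather than integer truncation is what makes negative exponents land on −3, −6, −9 instead of rounding toward zero.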
JavaScript numbers are IEEE 754 double-precision floats, which carry about 15–17 significant decimal digits of precision. Past that, the digits are noise from the binary-to-decimal conversion, not real data.
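This digit limit is easy to demonstrate: asking a double for more decimal digits than it actually carries just exposes the binary approximation underneath.

```javascript
// 1/3 has no finite binary (or decimal) expansion. The first ~15-17
// digits are faithful; beyond that, the digits reflect the nearest
// representable double, not the true value of one third.
console.log((1/3).toPrecision(15)); // "0.333333333333333"
console.log((1/3).toPrecision(21)); // trailing digits are no longer all 3s
```

The second line prints non-3 digits near the end — conversion noise from the closest 64-bit float to 1/3, which is exactly why the slider stops at 15.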
Negative numbers are fully supported: negative coefficients and negative exponents both work, so −0.0000456 is shown as −4.56 × 10⁻⁵ in scientific notation and as −45.6 × 10⁻⁶ in engineering notation (where 10⁻⁶ matches the SI prefix "micro").
Accepted input formats are plain decimals (12345.6), E-notation (1.23e6), and the human-readable forms 1.23 × 10^6 and 1.23x10^6. The × and ^ variants are normalised to the JavaScript-friendly e before parsing.
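That normalisation step can be sketched with a small regex rewrite. This is a plausible reconstruction under stated assumptions (the `normalise` helper and the comma-stripping rule are illustrative, not the converter's exact code):

```javascript
// Rewrite "a × 10^b" / "a x 10^b" style input to E-notation so a
// single parseFloat call handles every accepted form.
function normalise(input) {
  return input
    .trim()
    .replace(/\s*[×xX]\s*10\s*\^\s*/, "e") // "1.23 × 10^6" -> "1.23e6"
    .replace(/,/g, "");                    // assumed: drop thousands separators
}

console.log(parseFloat(normalise("1.23 × 10^6"))); // 1230000
console.log(parseFloat(normalise("1.23x10^6")));   // 1230000
console.log(parseFloat(normalise("12345.6")));     // 12345.6
```

Because the regex replaces the whole `× 10^` span including surrounding whitespace, negative exponents like `1.23 × 10^-6` fall out naturally as `1.23e-6`.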
Explore the full suite of Number tools and 290+ other free utilities at Chunky Munster.