java,math,arbitrary-precision,integer-arithmetic

Method 1: I came up with this method; it's not necessarily efficient, but it works. Notice that you can use the length of the input (in digits) to approximate its base-10 logarithm. You can use this to perform division, and therefore modulus. Specifically, first notice that 849465603662254214335539562 / (578907659710377778063722 * 1000)...
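A minimal Python sketch of the digit-length idea (my own illustration, not the answer's original Java code): the digit count stands in for ⌊log10⌋ and picks the power of ten to scale the divisor by, and repeated subtraction does the rest.

```python
def digits(n):
    # digit count of a positive integer: floor(log10(n)) + 1
    return len(str(n))

def divmod_by_subtraction(a, b):
    # long division using only digit counts and subtraction
    q = 0
    while a >= b:
        # digit counts bound log10(a / b), giving the largest k
        # with b * 10**k <= a (off by at most one, so correct below)
        k = digits(a) - digits(b)
        chunk = b * 10**k
        if chunk > a:
            k -= 1
            chunk //= 10
        # subtract the largest fitting multiple of b * 10**k
        d = 0
        while a >= chunk:
            a -= chunk
            d += 1
        q += d * 10**k
    return q, a

q, r = divmod_by_subtraction(849465603662254214335539562,
                             578907659710377778063722)
```

The first iteration on the numbers from the answer scales the divisor by 1000, exactly the `578907659710377778063722 * 1000` step quoted above.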

c++,math,fractions,bignum,arbitrary-precision

If you're using boost, you can use boost::rational in conjunction with boost::multiprecision::cpp_int (arbitrary-precision integer): #include <boost/rational.hpp> #include <boost/multiprecision/cpp_int.hpp> using boost::multiprecision::cpp_int; typedef boost::rational<cpp_int> fraction_t; For some reason, the constructor fraction_t("1", "2") doesn't work, although cpp_int("1") does, so if you need large int literals, you can use this helper function to not...
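For comparison, the same rational-over-bigint idea in Python (an analogue I'm adding, not part of the boost answer): `fractions.Fraction` over Python's arbitrary-precision `int`, with large literals built from strings via `int()`, mirroring the `cpp_int("...")` helper.

```python
from fractions import Fraction

def fraction_from_strings(num, den):
    # build a rational from decimal-string literals of any length
    return Fraction(int(num), int(den))

half = fraction_from_strings("1", "2")
big = fraction_from_strings("123456789012345678901234567890", "3")
```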

python,algorithm,pi,arbitrary-precision

By default, Python will use the standard double-precision floating point defined by IEEE 754. This has a precision of some 15-17 significant digits and can represent normal numbers as small as 2^-1022. Now you can resolve this problem by calling the mpf operator earlier in the process; a more readable and more precise...
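The `mpf` operator comes from the third-party mpmath library; as a standard-library sketch of the same idea (my analogue, not the answer's code), `decimal` also gives you a configurable number of significant digits and a range far beyond the double-precision floor:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant decimal digits
x = Decimal(2).sqrt()           # sqrt(2) at full requested precision
tiny = Decimal(2) ** -1100      # far below the double floor of 2**-1022
```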

c,precision,arbitrary-precision,exponentiation

You can precompute and tabulate all 256-bit values of the function for x in [65280, 65535] (i.e. 255 x 256 + i); you will lookup the table by the 8 least significant bits of the argument. That will take 8KB of storage. For lower values of the argument, shift the...
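The excerpt doesn't show which function is being tabulated, so here is only a generic Python illustration of the split-the-argument idea (base 3 and all names are my own assumptions): the 8 least significant bits index a 256-entry table, and the remaining high bits are handled by one extra power of the precomputed block value.

```python
BASE = 3                          # arbitrary base, chosen for illustration
TABLE = [BASE**i for i in range(256)]  # indexed by the 8 low bits
BASE_256 = BASE**256                   # covers one full table span

def table_pow(x):
    # BASE**x = (BASE**256)**(x >> 8) * TABLE[x & 0xFF]
    return BASE_256 ** (x >> 8) * TABLE[x & 0xFF]
```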

javascript,math,precision,arbitrary-precision

There are three short answers: By using an external library that supports math to that precision (then effectively using it) By doing calculations as integers (for example, multiply by 1,000,000,000, do the calculations, then divide by the same amount) By building the zoom function into the code and re-drawing...
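The second option (work in scaled integers) can be sketched like this; shown in Python for brevity, with the scale factor of 1,000,000,000 from the answer and a hypothetical `to_fixed` parser:

```python
SCALE = 1_000_000_000  # fixed point: 9 decimal places as integer units

def to_fixed(s: str) -> int:
    # parse a decimal string into a scaled integer, avoiding float error
    whole, _, frac = s.partition(".")
    frac = (frac + "0" * 9)[:9]            # pad/trim to 9 places
    sign = -1 if whole.startswith("-") else 1
    return int(whole) * SCALE + sign * int(frac or 0)

# 0.1 + 0.2 is exact in scaled integers, unlike in binary floats
total = to_fixed("0.1") + to_fixed("0.2")
```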

python,decimal,scale,maya,arbitrary-precision

Maya's native precision is usually based on 32-bit floats (for most linear distances and general-purpose math) and 64-bit doubles (used mainly for angular values). Python floats are basically doubles, so they should be 'lossless' for all practical purposes, although I've never tried to check that the python and C++...
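You can check the "Python floats are doubles" claim directly, and see what a round trip through a 32-bit float costs (my own quick demonstration, not from the answer):

```python
import struct
import sys

# a Python float is a C double: 53-bit mantissa, ~15-17 decimal digits
mantissa_bits = sys.float_info.mant_dig

# squeezing a value through single precision (struct format "f")
# keeps only 24 mantissa bits, so digits are lost
x = 0.1234567890123
single = struct.unpack("f", struct.pack("f", x))[0]
```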

java,bigdecimal,arbitrary-precision,numeric-precision

You have confused scale (total number of decimal places) with precision (number of significant digits). For numbers between -1 and 1, the precision does not count any zeroes between the decimal point and the non-zero decimal places, but the scale does. The second argument to BigDecimal.divide is a scale. So...
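The scale/precision distinction is easy to see on a concrete value. Python's `decimal` exposes the same decomposition (my analogue; in Java, `new BigDecimal("0.000123")` likewise reports scale 6 and precision 3):

```python
from decimal import Decimal

d = Decimal("0.000123")
t = d.as_tuple()
scale = -t.exponent        # 6: total decimal places after the point
precision = len(t.digits)  # 3: significant digits; leading zeros excluded
```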

math,svg,precision,arbitrary-precision

Of course it is. Floating point math deals with relative, not absolute, precision. If you created a regular polygon at the origin, with radius 1e-7, then zoomed it to 1e7X size, you would expect to see a regular polygon with the same size and precision as an unzoomed circle with...
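A quick numeric check of the relative-precision point (my own demonstration, in Python): build the polygon at radius 1e-7, scale it up by 1e7, and the vertices agree with the radius-1 polygon to essentially full double precision.

```python
import math

def polygon(radius, n=12):
    # vertices of a regular n-gon centred at the origin
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

small = polygon(1e-7)                       # tiny polygon near the origin
zoomed = [(x * 1e7, y * 1e7) for x, y in small]  # zoom by 1e7
unit = polygon(1.0)                         # reference at radius 1
```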

julia-lang,arbitrary-precision,bigfloat

Not directly. My eventual plan is to make Distribution types parametric, which would also allow for Float32 arguments, but that is a while away yet. In the meantime, there is the non-exported φ which gives the result you wanted: Distributions.φ(x) - pdf(Normal(), x_small) ...
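The underlying motivation (tail densities that underflow Float64) carries over to other languages; here is a stdlib-Python sketch of the same extended-precision pdf idea using `decimal`, with a hard-coded π constant (my analogue, not the Julia answer's code):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

def npdf(x):
    # standard normal pdf exp(-x^2/2) / sqrt(2*pi) at 40 digits,
    # so deep-tail values don't underflow to zero like doubles do
    x = Decimal(x)
    return (-x * x / 2).exp() / (2 * PI).sqrt()

p = npdf(40)  # positive here, but 0.0 if computed as a double
```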

performance,precision,arbitrary-precision

It is a trade-off between performance and features/safety. I cannot think of any reason why I would prefer using overflowing integers other than performance. Also, I could easily emulate overflowing semantics with non-overflowing types if I ever needed to. Also, overflowing a signed int is a very rare occurrence in...
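Emulating wrap-around semantics on top of a non-overflowing type is indeed a one-liner mask plus a sign fix-up; a Python sketch (illustrative helper name, not from the answer):

```python
MASK = 0xFFFFFFFF  # 32-bit wrap-around

def wrap_i32(n):
    # reduce an unbounded int to two's-complement int32 semantics
    n &= MASK
    return n - 0x100000000 if n >= 0x80000000 else n

x = wrap_i32(2**31 - 1 + 1)  # INT_MAX + 1 wraps to INT_MIN
```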

c++,c++11,boost,hash,arbitrary-precision

You can (ab)use the serialization support: Support for serialization comes in two forms: Classes number, debug_adaptor, logged_adaptor and rational_adaptor have "pass through" serialization support which requires the underlying backend to be serializable. Backends cpp_int, cpp_bin_float, cpp_dec_float and float128 have full support for Boost.Serialization. So, let me cobble something together that...
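The pattern itself (hash the serialized bytes rather than the number object) is portable; a Python analogue of the serialize-then-hash route, using `int.to_bytes` in place of Boost.Serialization (my illustration, not the Boost answer's code):

```python
import hashlib

def hash_bigint(n):
    # serialize an arbitrary-precision int to a sign-safe byte string,
    # then hash the bytes; equal values always produce equal digests
    width = max(1, (n.bit_length() + 8) // 8)
    data = n.to_bytes(width, "little", signed=True)
    return hashlib.sha256(data).hexdigest()

h = hash_bigint(10**100 + 7)
```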