# Faster cube root calculation?

Is there a way to calculate the real cube root of a real number that is faster than the log and exponential method? Bubba73 You talkin' to me? 04:48, 9 February 2016 (UTC)

Sure, there are loads of options. What are your problem constraints? How accurate do you need to be? Can you use look-up tables for some or all calculations? Do you know that the input is centered around a particular value (suitable for a truncated Maclaurin series or other approximate method)? May we assume you have conventional floating-point computer hardware, or do we need to work with some other type of machine? Are we allowed to parallelize the calculation?

My first instinct was to formulate the cube root of k as a zero of the equation x³ − k, and then to apply (essentially) Newton's method to find that zero. You have the advantage of knowing, analytically, that the function is monotonic and that there is a single zero crossing, so you can use that fact to your advantage. (That the root exists is, basically, the Fundamental Theorem of Algebra.)

Next I referred to my numerical analysis book, Numerical Analysis by Burden and Faires, which suggests applying Horner's method to accelerate the convergence of Newton's method. The book actually provides code examples (in Maple) and works the numerical method for a few examples. There are a lot of similar tricks named for smart mathematicians, and each one can shave off a couple of adds and multiplies. In this specific case, though, I'm not sure Horner's method will make any difference, as most of the polynomial's coefficients are zero; it probably won't change the execution time in any significant way on modern computer hardware.

If you want to go further, here is a machine-architecture enhancement that enables hardware-accelerated Taylor-series expansion of the square root in an IEEE-754 floating-point multiply/divide unit: Floating-Point Division and Square Root using a Taylor-Series Expansion Algorithm (Kwon et al., 2007). These accelerations are appropriate if you are solving numerically on an ordinary type of computer; if you're working with unusual computational equipment, like, say, using constructive geometry to solve analytically for the root, there may be faster ways of finding the answer. Nimur (talk) 05:22, 9 February 2016 (UTC)

If you can follow their work, you can see how, by extension, one could build the same hardware for the cube-root polynomial expansion. Is that kind of hardware worth the cost? Well, only if you really need to compute a lot of cube roots, and even then, only if you can convince the team who builds your floating-point multiplier into silicon; most mere mortals never get to provide such feedback to their silicon hardware architect. But once this type of enhancement is built and done, you get to compute cube roots in "one machine cycle," for the arbitrarily defined time interval that is "one machine cycle." For most purposes, I think it's unlikely you'll get better results than the standard math library in your preferred language. Nimur (talk) 16:57, 9 February 2016 (UTC)
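The Newton's-method approach described above is easy to sketch. This is my own illustrative Python (the thread itself gives no code): for f(x) = x³ − k, the update x − f(x)/f′(x) simplifies to (2x + k/x²)/3, and monotonicity guarantees convergence from any positive starting guess.

```python
def cbrt_newton(k, tol=1e-15, max_iter=100):
    """Real cube root of k via Newton's method on f(x) = x**3 - k."""
    if k == 0:
        return 0.0
    sign = 1.0 if k > 0 else -1.0
    k = abs(k)
    x = k if k > 1.0 else 1.0  # crude but safe positive initial guess
    for _ in range(max_iter):
        # Newton step x - (x**3 - k)/(3*x**2), algebraically simplified
        x_new = (2.0 * x + k / (x * x)) / 3.0
        if abs(x_new - x) <= tol * abs(x_new):  # relative convergence test
            return sign * x_new
        x = x_new
    return sign * x
```

For well-scaled inputs this converges quadratically, typically in well under a dozen iterations.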
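For reference, the "log and exponential method" the question mentions presumably means computing k^(1/3) as exp(ln(k)/3). A minimal sketch under that assumption (the sign must be factored out, since the logarithm is undefined for non-positive arguments):

```python
import math

def cbrt_logexp(k):
    """Cube root via the identity k**(1/3) = exp(ln(k)/3).

    The sign is handled separately because math.log rejects k <= 0.
    """
    if k == 0:
        return 0.0
    return math.copysign(math.exp(math.log(abs(k)) / 3.0), k)
```

This is the baseline any "faster" method has to beat; note the round trip through log and exp can lose a final bit or two of accuracy.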
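The truncated Maclaurin-series idea mentioned in the first reply might look like this for inputs known to be centered near 1. The coefficients come from the binomial series for (1 + u)^(1/3); this is my own illustration, not code from the thread, and it is only accurate when |k − 1| is small.

```python
def cbrt_taylor(k):
    """Approximate cube root for k near 1 via a truncated binomial series:
    (1 + u)**(1/3) ~= 1 + u/3 - u**2/9 + 5*u**3/81, where u = k - 1.
    """
    u = k - 1.0
    return 1.0 + u / 3.0 - u * u / 9.0 + 5.0 * u ** 3 / 81.0
```

A few multiplies and adds, no divides or transcendentals, which is exactly the trade the hardware-acceleration discussion above is about; the next series term is O(u⁴), so the error grows quickly away from k = 1.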