Functions like e^x, sqrt, sin, etc. are calculated in calculators via their series expansions.
These are often optimized versions of the simple theorems that were first proven, but it really does seem that, short of lookup tables, there is no ultimate shortcut to evaluating these functions.
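To make the point concrete, here is a minimal sketch (my own illustration, not anything from a calculator's firmware) of evaluating e^x by summing its Taylor series term by term, which is the "step-by-step" flavor of computation being discussed:

```python
def exp_taylor(x, terms=30):
    """Approximate e^x by summing the Taylor series sum_{n>=0} x^n / n!.

    Each iteration produces the next term from the previous one,
    so the answer is built up step by step rather than read off
    in closed form."""
    total = 0.0
    term = 1.0  # the n = 0 term: x^0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # x^(n+1)/(n+1)! = (x^n/n!) * x/(n+1)
    return total
```

Real libraries use argument reduction and polynomial fits rather than a raw Taylor sum, but the structure is similar: a fixed, known number of arithmetic steps.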
What would Stephen Wolfram say about the computational reducibility of these functions?
Do, for example, exponential functions require step-by-step exponential growth simulations to get the answer no matter what?
Huh? What do you want?
I assume you are familiar with Stephen Wolfram's concept of computational irreducibility?
I am just curious whether, within his definition, natural functions like exponentiation, square root, sine, cosine, and the like are considered computationally irreducible.
There are numerical algorithms for approximating those functions to arbitrary precision, and their complexity is known. It is not clear how that ties into "computational irreducibility". What would you mean if you were to say "computing pi is/is not computationally irreducible"?
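As an illustration of "known complexity" (my own sketch, not something from the thread): Newton's iteration for the square root converges quadratically, roughly doubling the number of correct digits per step, so double-precision accuracy takes only a handful of iterations, which is a genuine shortcut compared to any digit-by-digit search:

```python
def sqrt_newton(a, tol=1e-12):
    """Approximate sqrt(a) for a > 0 via Newton's iteration
    x <- (x + a/x) / 2.

    Quadratic convergence: each step roughly doubles the number
    of correct digits, so only a few iterations are needed."""
    x = a if a > 1 else 1.0  # crude but safe initial guess for a > 0
    while abs(x * x - a) > tol * a:
        x = (x + a / x) / 2.0
    return x
```

That kind of provable, fast convergence is exactly what "computationally reducible" behavior looks like: the answer can be reached far faster than by simulating anything step for step.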
To me, those numerical algorithms look specifically like optimized simulations that work their way toward the answer, which suggests some form of computational irreducibility.
Golly, the cellular automaton simulator, is also heavily optimized and parallelized, yet that does not mean the particular cellular automata it runs are computationally reducible.
What do you think? Thanks for your answer.
Murat, you are referring to the subject of Numerical Analysis as "Optimized Simulation". In my opinion this is an equivocation.