New Functions I would like to see in future Wolfram Language versions

Posted 11 years ago
I was wondering: it would be interesting to use the community as a way to request new functions that could be incorporated into future versions of the Wolfram Language, in a collaborative way. Sometimes users simply don't have the whole, deeper system view to understand that a requested function is too specific, too broad, or already implemented, but I believe that at other times we can have nice insights that Wolfram Research people haven't had yet, or that haven't received much attention yet. So the idea is:

Post your requested function as an answer to this question, and let the upvotes show the most interesting ones!

Some rules:
1- One post per function (or class of functions); you can have more than one request.
2- Exemplify your function's use.
POSTED BY: Rodrigo Murta
378 Replies

Mathematica offers a great number of neural nets for images, but can we have some neural nets for satellite imagery (remote sensing)? Thanks

POSTED BY: André Dauphiné

GeoImage[image,proj,bbox] and GeoArray[image,proj,bbox] objects. These would be rectangular datasets (arrays or images) bundled with a projection and bounding box info (in that projection). GeoImage and GeoArray objects would then be used as GeoGraphics primitives, being placed and reprojected automatically.
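Something close can be sketched today by texture-mapping an image onto a polygon with GeoStyling. The helper below is a hypothetical stand-in for the proposed primitive (geoImagePrimitive is not a system function, and a plain lat/long rectangle ignores the projection handling a real GeoImage would do):

```mathematica
(* Hypothetical stand-in: drape an image over a lat/long rectangle.
   Assumes the image already matches the map's projection. *)
geoImagePrimitive[img_Image, {{latMin_, lonMin_}, {latMax_, lonMax_}}] :=
 {GeoStyling[{"GeoImage", img}],
  Polygon[GeoPosition[{{latMin, lonMin}, {latMin, lonMax},
     {latMax, lonMax}, {latMax, lonMin}}]]}

(* usage sketch: GeoGraphics[{geoImagePrimitive[img, {{40, -75}, {41, -74}}]}] *)
```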

POSTED BY: Gareth Russell

GeoTIFFs would import by default as GeoImage objects. And Wolfram could use the format in its curated data, as well as hosting a repository of user-contributed GeoImage objects.

POSTED BY: Gareth Russell

The raster reprojection code must already exist, because it is done with the satellite imagery and other GeoGraphics background options.

POSTED BY: Gareth Russell

I'd like to see RegionPlot3D in cylindrical and spherical coordinates.

Unfortunately, it appears to work only in Cartesian coordinates.
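As a workaround today, one can rewrite a spherical-coordinate predicate in Cartesian variables and hand it to RegionPlot3D; the helper name below is hypothetical:

```mathematica
(* Sketch: plot a region given as a predicate in (r, theta, phi) by
   substituting the spherical-to-Cartesian relations. *)
sphericalRegionPlot3D[pred_, rMax_?NumericQ] :=
 RegionPlot3D[
  With[{r = Sqrt[x^2 + y^2 + z^2]},
   pred[r, ArcCos[z/r], ArcTan[x, y]]],
  {x, -rMax, rMax}, {y, -rMax, rMax}, {z, -rMax, rMax}]

(* e.g. the upper half-ball of radius 1:
   sphericalRegionPlot3D[#1 < 1 && #2 < Pi/2 &, 1] *)
```

The predicate may fail to evaluate exactly at the origin (ArcCos and ArcTan are indeterminate there), which RegionPlot3D treats as outside the region.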

POSTED BY: Mariusz Iwaniuk

I'd like to see q-calculus functionality, such as:

1. q-Derivatives
2. q-Integrals
3. q-Exponential functions
4. q-Sine functions
5. q-Cosine functions
6. q-Beta functions
7. q-Bernoulli polynomials
8. q-Euler numbers
9. q-Stirling numbers
10. q-Orthogonal polynomials
11. q-Appell functions

and many more from: q-Calculus, Wiki and MathWorld

POSTED BY: Mariusz Iwaniuk

It's been 6 years since the last time I wrote a post about the Hilbert transform, and we still don't have it.

The Hilbert transform, sometimes called a quadrature filter, is useful in radar systems, single-sideband modulators, speech processing, and measurement systems, as well as in schemes for sampling band-pass signals.

It's time to change that.
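In the meantime, a discrete Hilbert transform for sampled data can be sketched via the FFT (the analytic-signal method); hilbertTransform below is a hypothetical name, not a built-in:

```mathematica
(* Discrete Hilbert transform of a real signal: form the analytic signal
   by zeroing negative frequencies (doubling positive ones) and take the
   imaginary part. Signal-processing FFT convention. *)
hilbertTransform[x_List] := Module[{n = Length[x], X, h},
  X = Fourier[x, FourierParameters -> {1, -1}];
  (* multiplier: keep DC and Nyquist, double positive frequencies *)
  h = Table[Which[k == 1, 1,
                  EvenQ[n] && k == n/2 + 1, 1,
                  k <= Ceiling[n/2], 2,
                  True, 0], {k, n}];
  Im[InverseFourier[h X, FourierParameters -> {1, -1}]]]
```

A built-in version would presumably also handle the continuous (symbolic) transform, which this sketch does not attempt.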

Regards M.I.

POSTED BY: Mariusz Iwaniuk

I feel like this would make an excellent contribution to the WFR!

As an added bonus, WFR functions are sometimes nominated for inclusion in future versions of Mathematica! Every so often, the resource system team gets together and discusses recent WFR submissions, taking special note of implementations that would make good system-level functions.

The Hilbert transformations are of widespread interest because they are applied in the theoretical description of many devices and systems and directly implemented in the form of Hilbert analog or digital filters (transformers). Let us quote some important applications of Hilbert transformations:

1. The complex notation of harmonic signals in the form of Euler's equation exp(jωt) = cos(ωt) + j sin(ωt) has been used in electrical engineering since the 1890s and nowadays is commonly applied in the theoretical description of various, not only electrical, systems. This complex notation had been introduced before Hilbert derived his transformations. However, sin(ωt) is the Hilbert transform of cos(ωt), and the complex signal exp(jωt) is a precursor of a wide class of complex signals called analytic signals.

2. The concept of the analytic signal of the form c(t) = u(t) + jv(t), where v(t) is the Hilbert transform of u(t), extends the complex notation to a wide class of signals for which the Fourier transform exists. The notion of the analytic signal is widely used in the theory of signals, circuits, and systems. A device called the Hilbert transformer (or filter), which produces at its output the Hilbert transform of the input signal, finds many applications, especially in modern digital signal processing.

3. The real and imaginary parts of the transmittance of a linear and causal two-port system form a pair of Hilbert transforms. This property finds many applications.

4. Recently, two-dimensional (2-D) and multidimensional Hilbert transformations have been applied to define 2-D and multidimensional complex signals, opening the door for applications in multidimensional signal processing.

Bibliography

Bibliography1

Bibliography2

POSTED BY: Mariusz Iwaniuk

I'd like to see new special functions:

1. AppellF2, AppellF3, AppellF4 functions
2. Kampé de Fériet function
3. Horn functions
4. Lauricella functions
5. MacRobert's E-function
6. The multiple zeta function
7. GeneralizedPolylog and MultiPolylog, representing the function class consisting of generalized polylogarithms, multiple polylogarithms, harmonic polylogarithms, hyperlogarithms, and related functions
8. Fox H-function of several complex variables

References

[1] A. B. Goncharov. "Multiple polylogarithms, cyclotomy and modular complexes", Math. Res. Letters, Vol. 5 (1998): 497-516.

[2] Jens Vollinga, Stefan Weinzierl. "Numerical evaluation of multiple polylogarithms", Comput. Phys. Commun., Vol. 167 (2005): 23 pp.

[3] H. Frellesvig, D. Tommasini, C. Wever. "On the reduction of generalized polylogarithms to Li_n and Li_{2,2} and on the evaluation thereof", JHEP 1603 (2016): 35 pp.

[4] Generalized Hypergeometric Functions with Applications in Statistics and Physical Sciences

POSTED BY: Mariusz Iwaniuk

I'd like to see new special functions:

1. Humbert series

and all from this list that don't already exist in Mathematica.

Update 2024.01.08

Humbert series can be expressed by Lauricella functions, so we only need the Lauricella functions.

Regards.

POSTED BY: Mariusz Iwaniuk

Multivariable hypergeometric functions (such as the famous Appell, Lauricella and Kampé de Fériet functions, etc.) and their various generalizations appear in many branches of mathematics and its applications. Many authors have contributed works on this subject. In recent years, several authors have considered some interesting extensions of the Appell and Lauricella functions. Motivated by their works, we introduce a class of new extensions of the Lauricella functions and find their connection with other celebrated special functions.

In 1893, G. Lauricella defined four multidimensional hypergeometric functions FA, FB, FC and FD. These functions depended on three variables but were later generalized to many variables. Lauricella's functions are infinite sums of products of variables and corresponding parameters, each of which has its own parameters. In the present work, for Lauricella's function F_A^(n), limit formulas are established, some expansion formulas are obtained that are used to write recurrence relations, and new integral representations and a number of differentiation formulas are obtained that are used to derive finite and infinite sums.

The great success of the theory of hypergeometric functions of a single variable has stimulated the development of a corresponding theory in two or more variables. Multiple hypergeometric functions arise in many areas of modern mathematics, and they enable one to solve constructively many topical problems important for theory and applications.

A generalization of the Fox H-function was given by Ram Kishore Saxena. A further generalization of this function, useful in physics and statistics, was given by A. M. Mathai and Ram Kishore Saxena.

See here and here, and for The H-Function: Theory and Applications see the Appendix of that book.

POSTED BY: Mariusz Iwaniuk

Quite often I would like to be able to simply write:

func @@@@ matrix
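Presumably func @@@@ matrix would be shorthand for applying at level 2 (by analogy with @@@, which is Apply at level 1), which today has to be spelled out as:

```mathematica
(* today's equivalent of the proposed func @@@@ matrix, assuming it
   means "apply func at level 2" *)
Apply[func, matrix, {2}]
```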
POSTED BY: Henrik Schachner

Clock in Mathematica is buggy (I submitted a bug report) and is very limited in what actions it can perform. When set as a recurring countdown clock, it sometimes performs my action (i.e. NotebookSave[] & Ding[] to alert me) and at other times does nothing, even though I am not overloading the kernel or front end with heavy processing. I really dislike that it starts immediately when created, instead of offering the ability to define an event or function to start, pause, stop, resume, and evaluate any expression or combination thereof.

There has to be a more efficient way of updating values when using a clock than checking the update interval every second inside Dynamic (or whatever one usually sets), for example an internal notification system that only checks once per cycle. It should have settable custom properties that can change according to the time passed. I realize I have just about described task objects, but that is not what I mean at all: I want a local object that is easily configurable to show me the time remaining until the next evaluation (when in Dynamic). I almost got a countdown clock to work in a button, except that is when I discovered a few bugs: the countdown doesn't even wait for the Button to be clicked before counting down, and it continues to count down in Dynamic even when scrolled out of view.

In short, I want a clock I can actually work with, not one that only halfway works like what we have now.

POSTED BY: Jules Manson

Not sure if this can be done with something like LocalSymbol or LocalObject, but I would like to see persistent functions, so that they could easily be overloaded at the start of a kernel session, for example to load packages in a very easy and direct fashion. Such a function might look like this...

loadPackages[args]=PersistentFunction[name,args, type->loc]:=Function[{symbols}, do something with args]

where the head (Function) may also be any other applicable head like Module, Block, With, etc. Even better would be the ability to make a function with multiple definitions persistent, perhaps by just passing the symbol name to PersistentFunction instead of defining it inline.

For example, if we defined several different cases of loadPackages, we could store only the function name as a persistent symbol, which could be used immediately:

loadPackages[args/;cond1]:=Function[{symbols}, do something with args];
loadPackages[args/;cond2]:=Function[{symbols}, do something else with args];
loadPackages[args]:=some error message;

PersistentFunction[name,args,type->loc]=loadPackages
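One way to approximate this today is to persist the accumulated definitions with DumpSave and reload them at session start; the file name here is just an assumption:

```mathematica
(* after defining all the loadPackages cases in a session,
   save every definition attached to the symbol: *)
DumpSave[FileNameJoin[{$UserBaseDirectory, "loadPackages.mx"}], loadPackages];

(* in init.m, or at the start of a later session: *)
Get[FileNameJoin[{$UserBaseDirectory, "loadPackages.mx"}]]
```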
POSTED BY: Jules Manson

The new Video function is very cool! But I miss auxiliary functions like those for Audio.

In Audio we have: AudioJoin, AudioSplit, AudioPad and so on.

Working with video automation, it would be nice to have VideoJoin, VideoSplit, VideoPad, and so on.

POSTED BY: Rodrigo Murta

I have already requested VideoJoin and ConformVideos actually. They know about it internally…

POSTED BY: Sander Huisman

VideoJoin was introduced in 12.2 and has been updated twice.

VideoSplit was also introduced in 12.2 and updated in 12.3.

POSTED BY: Sander Huisman

There is:

ImageMeasurements[image, "Transparency"]
POSTED BY: Piotr Wendykier

AlphaChannelQ

Doing exactly what you think it should do…

POSTED BY: Sander Huisman

Isn't it strange that this is missing? Here's how to do it: https://mathematica.stackexchange.com/a/157458/12 BTW, the LibraryLink interface does have MImage_alphaChannelQ.

POSTED BY: Szabolcs Horvát

Agreed, it is strange that it is missing. I know of the workarounds; I was using the

RemoveAlphaChannel[img]===img

trick… But it is poorly documented for sure. RemoveAlphaChannel and SetAlphaChannel should definitely have it… I think it is worthy of its own function…

POSTED BY: Sander Huisman

Sander, RemoveAlphaChannel[img]===img is terribly inefficient and can't be considered more than a workaround. What I meant was

ImageMeasurements[img, "Transparency"]

which is perfectly good, except that it is very hard to find. One does not think of the presence of an alpha channel as something that needs to be measured. Measurement implies computation, while this only needs to check a flag. When I originally needed this, I spent a lot of time searching and never found it.

@Piotr Wendykier, maybe there is an opportunity to improve the documentation here. As an interim measure, the hidden keyword alphachannelq could be added to the ImageMeasurements doc page. Also, I notice that almost all the "Global image properties" that ImageMeasurements can return that aren't really measured, just extracted, have their own function. We have ImageType, ImageChannels, ImageDimensions, ImageColorSpace, etc.

Even if I come across the ImageMeasurement doc page, I would assume (just by the name) that this is a function that computes properties like the mean pixel value. That is exactly what it does, and that is exactly what the immediately visible basic examples show. If I were to look for functionality to test for the presence of an alpha channel, even if I find the ImageMeasurements doc page, it would never occur to me to open the Details section and read through it carefully. A quick glance at the page would convince me that no, this can't possibly be the function I need.

Summary: Yes, the functionality exists, but it is near-impossible to find for many users. An AlphaChannelQ function would be a tangible improvement.
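For the record, such a function could today reduce to a one-liner (AlphaChannelQ is the proposed name, not an existing symbol):

```mathematica
(* proposed convenience wrapper over the existing measurement *)
AlphaChannelQ[img_Image] := ImageMeasurements[img, "Transparency"]
```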

Additional note: Please add a link from the MImage_alphaChannelQ doc page to the ImageMeasurements doc page.

POSTED BY: Szabolcs Horvát

Built-in phase unwrapping, like this.
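A minimal 1-D version can be sketched as follows (unwrapPhase is a hypothetical name): it adds multiples of 2 Pi so that successive samples never jump by more than Pi.

```mathematica
(* wrap each successive difference into [-Pi, Pi), then re-accumulate *)
unwrapPhase[phi_List] :=
 FoldList[#1 + (Mod[#2 - #1 + Pi, 2 Pi] - Pi) &, First[phi], Rest[phi]]
```

A built-in would presumably also cover the much harder 2-D case (e.g. for interferometry), which this sketch does not.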

POSTED BY: Sander Huisman

ListStepPlot3D should exist as an extension of ListStepPlot.

POSTED BY: Neel Basu

An upgrade to the NDSolve function to numerically solve fractional differential equations:

  • ordinary fractional differential equations
  • ordinary fractional differential-algebraic equations
  • ordinary fractional delay differential equations (with a variable delay)
  • stochastic fractional differential equations
  • fractional partial differential equations
  • fractional stochastic partial differential equations
  • fractional partial random differential equations with state-dependent delay

In my opinion these functions should have been implemented 20 years ago. Maple has the fracdiff function and Matlab has fde12.

POSTED BY: Mariusz Iwaniuk

Examples for:

Functional (delay) differential equations involving the Caputo fractional derivative or the Riemann–Liouville fractional derivative, with initial conditions or boundary conditions.

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3 y[x - 1] == 0,
  y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[x], {x, 1/2}] - 3 y''[x - 4] == 0,
  y[x /; x <= 0] == 0, y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[x], {x, 1/2}] - 3 y[2 x - 1] == 0,
  y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[x], {x, 5/2}] - 3 y[2 x - 1] == 0,
  y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[2 x], {x, 1/2}] - 3 y[x/4] == 0,
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];
NDSolve[{CaputoD[y[2 x + 3], {x, 5/2}] - 3 y'[Sin[x]] + y''[5 x - 1] == 0,
  y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0,
  y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3 y[x - 1] == 0,
  y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3 y''[x - 4] == 0,
  y[x /; x <= 0] == 0, y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 1/2}] - 3 y[2 x - 1] == 0,
  y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[x], {x, 5/2}] - 3 y[2 x - 1] == 0,
  y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[2 x], {x, 1/2}] - 3 y[x/4] == 0,
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];
NDSolve[{FractionalD[y[2 x + 3], {x, 5/2}] - 3 y'[Sin[x]] + y''[5 x - 1] == 0,
  y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0,
  y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];

Reference: Link1, Link2, Link3, Link4, Link5, Link6

POSTED BY: Mariusz Iwaniuk

An upgrade to the NDSolve function to solve integro-differential equations.

Examples:

eq = Inactivate[y'[x] + 2*Sin[y[x]] + 5*Integrate[y[t], {t, 0, x}] == 
Piecewise[{{0, x < 0}, {1, x >= 0}}], Integrate];
NDSolve[{Activate[eq], y[0] == 0}, y, {x, 0, 1}]

eq1 = Inactivate[y[x] - 1/2*Integrate[Exp[y[t]]*x*t, {t, 0, 1}] == 5/6*x, Integrate];
NDSolve[Activate[eq1], y, {x, 0, 1}]

eq3 = Inactivate[{y1[x] == 
    x^2 - 1/5*t^5 - 1/10*x^10 + 
     Integrate[y1[t]^2 + y2[t]^3, {t, 0, x}], 
   y2[x] == x^3 + Integrate[y1[t]^3 - y2[t]^2, {t, 0, x}]}, Integrate]
NDSolve[Activate[eq3], {y1[x], y2[x]}, x]

eq4 = Inactivate[{y1'[x] == 
      1 + x + x^2 - y2[x] - Integrate[y1[t] + y2[t], {t, 0, x}], 
     y2'[x] == -1 - x + y1[x] - Integrate[y1[t] - y2[t], {t, 0, x}]}, 
    Integrate];
NDSolve[{Activate[eq4], y1[0] == 1, y2[0] == -1}, {y1, y2}, {x, 0, 1}]

For solving:

Forms of linear integral equations:

  • Fredholm second kind
  • Fredholm first kind
  • Fredholm third kind
  • Wiener - Hopf
  • Volterra second kind
  • Volterra first kind
  • Renewal equation
  • Abel equation
  • Cauchy singular

Forms of nonlinear integral equations:

  • Fredholm second kind
  • Urysohn second kind
  • Hammerstein
  • Urysohn first kind
  • Urysohn - Volterra
  • Hammerstein - Volterra second kind
  • Hammerstein - Volterra first kind
  • Chandrasekhar H equation
  • Cauchy singular

Fractional Calculus:

  • fractional integro-differential equations
  • fractional integro-differential equations with state-dependent delay(with a variable delay)
  • stochastic fractional integro-differential equation
POSTED BY: Mariusz Iwaniuk

Examples for:

Functional (delay) integro-differential equations involving the Caputo fractional derivative, with initial conditions or boundary conditions.

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[2 t], {t, 0, x}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[t + 1], {t, -1, 1}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[x - 1] + 
     Integrate[y[2 t - 1], {t, 0, 1}] == 0, y[x /; x <= 0] == 0}, 
  y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y''[x - 4] - 
     Integrate[y[2 t], {t, 0, x}] == 0, y[x /; x <= 0] == 0, 
   y'[x /; x <= 1] == 1}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 1/2}] - 3  y[2 x - 1] + 
     Integrate[Cos[2 x + t] y[t], {t, 0, Pi}] == 0, 
   y[x /; x <= 0] == 0}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[x], {x, 5/2}] - 3  y[2 x - 1] + 
     Integrate[Exp[-3 x] y[t], {t, 0, Infinity}] == 0, 
   y[x /; x <= -1] == 1, y[x /; x <= 1] == 1}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[2 x], {x, 1/2}] - 3 y[x/4] - 
    Integrate[Exp[-3 I x] y[t], {t, -Infinity, Infinity}] == 0, 
  y[x /; x <= 2] == Sin[x]}, y[x], {x, 0, 1}];

NDSolve[{CaputoD[y[2 x + 3], {x, 5/2}] - 3 y'[Sin[x]] + 
    y''[5 x - 1] + 
    Integrate[y[Exp[t] - Abs[t]]/Sqrt[1 + t], {t, 0, x}] == 0, 
  y[x /; x <= 2] == Cos[x], y'[x /; x <= 0] == 0, 
  y''[x /; x <= 1] == Exp[x]}, y[x], {x, 0, 1}];

Reference:

Link1 Link2 Link3

POSTED BY: Mariusz Iwaniuk

Is it really so important to bulk up the totality of built-in functions with yet another new one, RowReduceAugmented? It's just one or two steps to get that from existing functions, e.g.:

mat = RandomInteger[{-5, 5}, {4, 4}];
aug = Join[mat, IdentityMatrix[First@Dimensions@mat], 2];
RowReduce[aug]

And it's simple enough to define a single function to do all that:

RowReduceAugmented[mat_] := 
  RowReduce[Join[mat, IdentityMatrix[First@Dimensions@mat], 2]]

To some extent, "less is more": each new function added to the supply of built-in ones decreases a bit the ease of finding just the one you want. Live with a smaller number of them but become adept at combining them.

Just my opinion!

POSTED BY: Murray Eisenberg
Posted 6 years ago

Your point is well taken. And perhaps the best solution is to put your helpful snippets into the Mathematica help.

As someone who has spent decades teaching mathematics and computation, you are well placed to address this question. Does a simplified tool set help students assimilate Mathematica more quickly? My thought is that an expanded, explicit toolkit would be welcomed by new users, with a trivial burden on acclimated users.

POSTED BY: Daniel Topa
Posted 7 years ago

An extremely valuable tool would be a RowReduce with the augmented identity matrix.

An example follows. Start with a matrix

[image: example]

The proposed command would augment the input matrix and reduce the system like so

[image: proposed]

The process of interest is depicted as

[image: ear]

Potential features: The null space vectors are red, range space blue. The partitioning separates $E_A$ from $R$.

At this point, we are in reach of the Holy Grail, resolving the four fundamental subspaces of the matrix. The command FTOLA[A] would produce the needed spans.

[image: ftola]

POSTED BY: Daniel Topa

I would like to see a function which is like KeyMap, but will handle key collisions by combining the corresponding values using a combiner function.

A possible implementation:

KeyCombine[fun_, asc_?AssociationQ] := GroupBy[Normal[asc], fun@*First -> Last]
KeyCombine[fun_, asc_?AssociationQ, comb_] := GroupBy[Normal[asc], fun@*First -> Last, comb]

Example usage:

KeyCombine[
 Sort,
 <|{1, 2} -> 10, {2, 1} -> 20, {1, 3} -> 30|>
 ]

(* <|{1, 2} -> {10, 20}, {1, 3} -> {30}|> *)

KeyCombine[
 Sort,
 <|{1, 2} -> 10, {2, 1} -> 20, {1, 3} -> 30|>,
 Total
 ]

(* <|{1, 2} -> 30, {1, 3} -> 30|> *)

Why introduce this function?

  • I find the concept intuitive. I can think in terms of KeyCombine.
  • I found several uses for it, including combining data where a single experimental subject may have been measured multiple times. It is closely related to the edge property combiner I asked for here
  • Perhaps a more efficient implementation is possible than the above GroupBy, which necessitates converting the association to a rule list first (not sure about this).

Why not?

  • Some might say that this GroupBy implementation is already simple enough. — Counter-argument: the original Merge implementation I came up with (see link below) is also simple but much slower. And we have ReverseSort now.

Link to StackExchange thread.

POSTED BY: Szabolcs Horvát
Posted 7 years ago

It would be pretty cool if TableForm would, um, put Datasets in TableForm.

POSTED BY: Matt Pillsbury
Posted 7 years ago

These may be too finicky to count, but it would be really nice if Select didn't give unpacked results when you pass in a packed list. A rigorous scientific study of the things that have annoyed me in the past month indicates that Select's current unpacking behavior is responsible for 137% of the performance problems with my code.
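For numeric predicates there is a packed-preserving workaround today: build a packed 0/1 mask and use Pick, e.g. instead of Select[data, # > 0.5 &]:

```mathematica
data = RandomReal[1, 10^6];
(* UnitStep yields a packed integer mask, and Pick keeps packed input packed *)
Pick[data, UnitStep[data - 0.5], 1]
```

It would still be nicer if Select simply did this itself.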

Also, the newish script mode is great, but it would be even better if it would direct $Messages to stderr on Mac/Linux, instead of stdout.

POSTED BY: Matt Pillsbury
Posted 7 years ago

More complete support for creating AsynchronousTaskObjects. Currently it seems the only way to create an asynchronous task is via URLSubmit and similar functions. However, there are quite a few other ways of doing things asynchronously, like StartProcess, which you can poll for output using the newish ReadString[..., EndOfBuffer], or just wait for it to finish using ProcessStatus. You can submit a task to a subkernel using ParallelSubmit and then use WaitNext/WaitAll to check whether it's done, or you can try doing a job in the cloud using CloudSubmit.

I may have forgotten a few.

As far as I can tell, there's no way to integrate all of these in a single place, which means learning and understanding a bunch of different APIs, and it means if you want to, say do a job on a sub kernel while you wait to check an external database where the result might be cached, you've gotta write a bunch of grotty custom code yourself. This is a shame.

POSTED BY: Matt Pillsbury

There is already support for this, but the asynchronous tasks must be programmed in C (LibraryLink), and the API is not explicitly documented. However, LibraryLink comes with several examples that show how to use them, and there are some posts on StackExchange that go into more details (based on these examples) and show additional examples:

POSTED BY: Szabolcs Horvát

I should also note that AsynchronousTaskObject is quite different from the other things you mention. Its unique capability is that once the task is done, it can trigger the evaluation of a function. This is the second argument of URLFetchAsynchronous.

StartProcess has an entirely different goal: start and manage other processes, including sending/receiving data.

ParallelSubmit is for parallel evaluation, with the goal of increasing performance. This uses subkernels, which are entirely separate processes.

I think the similarity you point out is superficial. These are entirely separate tools, and it doesn't make sense to unify them.

POSTED BY: Szabolcs Horvát
Posted 7 years ago

That capability isn't only useful for fetching URLs, though, which is why I'd like to see the unification. Being able to trigger a function when the external process is done is potentially useful whenever there's some latency, whether that latency comes from fetching a URL, from doing some other task (since StartProcess can do virtually anything), or from performing a lengthy computation (which is what ParallelSubmit does).

I can emulate this functionality in these other cases, to be sure. One way is to use ScheduledTask.

POSTED BY: Matt Pillsbury

You are right, it would be nice to have a callback for when a process started with StartProcess terminates.

I am also hoping that the asynchronous LibraryLink stuff will get better documented, so we don't have to figure things out solely based on the example.

POSTED BY: Szabolcs Horvát

Hi Alexey,

That is indeed how it can easily be implemented for Take and Drop. For Part it seems more difficult to implement UpTo, though... Especially for the multi-level cases, and in combination with Set it gets complicated quite quickly...

a = {{1,2,3},{1,2,3,4,5},{1,2,3,4,5,6},{1,2}}
a[[All, ;; UpTo[3]]] += 1

would result in:

{{2,3,4},{2,3,4,4,5},{2,3,4,4,5,6},{2,3}}
POSTED BY: Sander Huisman

Being able to use UpTo inside Part:

{Range[6], Range[2], Range[8]}[[All, ;; UpTo[4]]]

would return:

{{1, 2, 3, 4}, {1, 2}, {1, 2, 3, 4}}

Also, in addition, being able to use UpTo with negative numbers (or make a new function called DownTo):

Take[Range[10], UpTo[-5]]
Take[Range[4], UpTo[-5]]

or alternatively:

Take[Range[10], DownTo[-5]]
Take[Range[4], DownTo[-5]]

would return:

{6,7,8,9,10}
{1,2,3,4}

I think this would be a very natural extension of UpTo
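For the record, the first example can be written today with Take and Map, just not inside Part:

```mathematica
(* element-wise Take with UpTo, emulating [[All, ;; UpTo[4]]] *)
Take[#, UpTo[4]] & /@ {Range[6], Range[2], Range[8]}
(* {{1, 2, 3, 4}, {1, 2}, {1, 2, 3, 4}} *)
```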

POSTED BY: Sander Huisman
ClearAll[DownTo];
DownTo/:Take[x_,DownTo[y_Integer]]:=Take[x,-Min[Length@x,Abs[y]]*Sign[y]]
DownTo/:Drop[x_,DownTo[y_Integer]]:=Drop[x,-Min[Length@x,Abs[y]]*Sign[y]]

Take[Range[10],DownTo[5]]
{6,7,8,9,10}

Take[Range[10],DownTo[-5]]
{1,2,3,4,5}

Take[Range[4],DownTo[5]]
{1,2,3,4}

Take[Range[4],DownTo[-5]]
{1,2,3,4}

Drop[Range[10],DownTo[5]]
{1,2,3,4,5}

Drop[Range[10],DownTo[-5]]
{6,7,8,9,10}

Drop[Range[4],DownTo[5]]
{}

Drop[Range[4],DownTo[-5]]
{}
POSTED BY: Alexey Golyshev

Functions like like Table and Do to be able to handle Associations with key and value like so:

Table[
    (*code*)
,    
    {k -> v, <|1->"a",2->"b"|>}
]

where k contains the key and v contains the value. Currently one can only get the values, not the keys, so one has to iterate using indices and then look up the keys/values again...

The exact form of the implementation does not really matter, though this one is somewhat inspired by PHP. A form like {k,v} rather than k -> v is incompatible with the current implementation of iterators in e.g. Manipulate, where v is the default value. Fortunately Table and Do have no such 'default setting', though it might confuse people... Arrow notation is not very 'Wolframian', but it is only syntactic sugar in the end...

I'm aware of KeyValueMap, but it is sometimes not very handy when a Do is needed (e.g. when not storing intermediate data). Also, multiline code is somewhat easier to write in Table than in KeyValueMap, as the latter requires a (pure) function... Especially if one wants to do something like:

Table[
    {k, #} & /@ v
 ,
    {k -> v, <|"a" -> {1, 2}, "b" -> {3, 4, 5}|>}
 ]
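For comparison, today's closest equivalent of that last example with KeyValueMap:

```mathematica
(* named Function arguments stand in for the proposed k -> v iterator *)
KeyValueMap[Function[{k, v}, {k, #} & /@ v],
 <|"a" -> {1, 2}, "b" -> {3, 4, 5}|>]
(* {{{"a", 1}, {"a", 2}}, {{"b", 3}, {"b", 4}, {"b", 5}}} *)
```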
POSTED BY: Sander Huisman

I'd like to see a "MetaInformation" import element for MP3 files to read the ID3 tags. This element exists for other audio formats already and would be quite useful for MP3 as well.

POSTED BY: Bianca Eifert

A version of Pick that does not unpack packed arrays (or at least repacks the result after picking the rows), even when the input data is packed…

POSTED BY: Sander Huisman

If the inputs are both packed, Pick is very fast (since version 8) and won't unpack:

Developer`PackedArrayQ@Pick[RandomReal[1, 100], RandomInteger[1, 100], 1]

(* True *)
POSTED BY: Szabolcs Horvát

A nice and efficient implementation of the closest pair of points problem. Similar to Min[DistanceMatrix[...]] and NeighborhoodGraph, but more efficient, I think...
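In the meantime, a reasonably efficient sketch using Nearest (closestPair is a hypothetical name; it assumes the points are distinct):

```mathematica
(* for each point, find its nearest distinct neighbour, then take the
   globally closest such pair *)
closestPair[pts_List] := Module[{nf = Nearest[pts]},
  First@MinimalBy[
    {#, First@DeleteCases[nf[#, 2], #]} & /@ pts,
    EuclideanDistance @@ # &]]
```

This avoids the quadratic memory of DistanceMatrix, though a built-in divide-and-conquer implementation could do better still.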

POSTED BY: Sander Huisman

I would like to see a new programming font with ligatures, comparable to Fira Code, but designed for Mathematica.

Fira Code improves the readability of code by employing ligatures. E.g., != displays as a slashed equal sign because it represents "not equal" in many languages. The underlying text is not changed at all; it's simply displayed in a different way. Fira Code works with many programmers' editors, including IntelliJ IDEA, for which we have an excellent Mathematica plugin. It is designed to play well with many languages, but it doesn't work so well with Mathematica. There are several other similar fonts, some designed specifically for one language (e.g. Hasklig).

I've been using Fira Code with C++ and I think it brings a genuine improvement in readability. But this is no surprise to Mathematica users, as we already have a similar feature in the Front End. What I'd like is to have this in any editor, as I only use the Front End for interactive work, not for writing packages.

Here's an illustration of how certain character combinations display with Fira Code. Right: with ligatures. Left: without. What's not really visible here is that Fira Code doesn't just change the glyph shapes, it also effectively changes the spacing, which has a big effect on readability. (Again, we know this from how the Front End works.) ... and :: don't change in shape, but they are shorter than three separate dots or two colons, so they have more space on the left and right. All this doesn't break the monospace nature of the font.


POSTED BY: Szabolcs Horvát

Extend TimeSeriesMap to pass the time as a second argument to the specified function (or as the first, when two arguments are present).

Application: for instance, correcting a value depending on the date.

This would allow many more operations to stay within the TimeSeries framework, instead of having to get the values out ("Path"), process them, and put them back into a TimeSeries.
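Today this has to be spelled out through "Path"; a sketch of the proposed behaviour (the helper name is hypothetical):

```mathematica
(* apply f[time, value] to each sample and rebuild the TimeSeries *)
timeSeriesMapTimed[f_, ts_] :=
 TimeSeries[{#1, f[#1, #2]} & @@@ ts["Path"]]
```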

POSTED BY: Pedro Fonseca

Leap seconds awareness.

Considering the increased focus on data science, can't we have a system that takes this mess into consideration?

If there's an apprehension that most users could get confused, it could eventually be added as an option to the time and date functions.

DateDifference[a, b, "LeapSecond"->True]

or

DateObject[{2015,6,30,23,59,59}]+Quantity[1, "Seconds"]
(*DateObject[{2015,7,1,0,0,0}]*)

vs

DateObject[{2015,6,30,23,59,59},"LeapSecond"->True]+Quantity[1, "Seconds"]
(*DateObject[{2015,6,30,23,59,60},"LeapSecond"->True]*)

Another example:

AbsoluteTime@DateObject[{2015,7,1,0,0,0}]
(*3644697600*)
AbsoluteTime@DateObject[{2015,7,1,0,0,0},"LeapSecond"->True]
(*3644697626*)
AbsoluteTime@DateObject[DateObject[{2015,7,1,0,0,0}],"LeapSecond"->True]
(*3644697626*)

Obviously there might be some dark corners (but I only gave this an hour of thought... the time to write this "post"). What should be the answer to this?

DateObject[3644697600]==DateObject[3644697626,"LeapSecond"->True]
(*there might be no answer, since we are comparing apples with oranges*)

Or how would DateListPlot work? Probably, just depending on how dates were specified. If the list of dates has some DateObjects that consider LeapSecond and others that don't, can it happen that we end up with reversed time? Is it an option for the plot, where we specify if we want everything to be converted to LeapSecond True or LeapSecond False (again, what would be the logic of comparing apples and oranges on the same plot?). By the way, I think that this plot should have a TimeZone option, so that I don't have to Block[{$TimeZone=whatever}, DateListPlot[...]]

Something more complex:

WindSpeedData["KSAC", {DateObject[{2008, 1, 1}], DateObject[{2015, 1, 2}]},"LeapSecond"->True]
(*most likely, it would return exactly the same thing as with LeapSecond->False, since there are probably no wind records on exactly the extra second... but the time series stamps would be kept with the LeapSecond->True, since that would allow for further analysis to take this specification into consideration*)

and its impact would be noticed on things like:

RegularlySampledQ[ ts ]
(* would return True or False, depending on the TimeSeries specification *)

Obviously, there would be a:

$LeapSecond=True

By the way: LeapSecond is most likely not a good choice for the option name, because in the future we might get a MinuteLeap, etc. So TimeLeap is probably better (if WL is still around in about a century).

POSTED BY: Pedro Fonseca

I'm curious: in what applications is the inclusion of leap seconds so critical? I can't think of many, to be honest... I heard that Google fixed it by stretching the second over the entire day, such that every second is just a tiny bit longer on that day...

POSTED BY: Sander Huisman

What sense can we make of "AstronomicalData" (I mean, all data that used to be gathered as the AstronomicalData), and all the related physics field?

If Google considers that the seconds on that day got longer, they probably even messed it up more, since conversions are then needed for the entire day length, and not just as Events.

Daylight saving time is obviously more noticeable, since it hits more people. When processing records, the first thing I try to understand is whether there is one hour of repeated dates or one hour of missing dates, or even worse, whether the records were simply overlapped, which is generally difficult to detect...

POSTED BY: Pedro Fonseca

They just write all their logs with their own smoothly stretched time, so there is no 'jump', which can cause all kinds of weird behavior. After writing to the logs, they (presumably) just treat them as normal seconds. To me, it is a genius solution. If (e.g.) email arrivals are a few microseconds late, or even a second late, it doesn't really matter...

Another problem is that future leap seconds are not known yet; they are irregular because the rotation of the earth is irregular. So how would one reliably handle dates in the future? Or dates (far) before the leap second?

Regarding AstronomicalData: is ~20 sec in 40 years really critical? That is 1 in ~63 million; most measured things have much, much larger errors... I would be interested to know if it is critical (it might be, I don't know, to be honest...).

POSTED BY: Sander Huisman

If the convention had been this, why not. But it wasn't. And so they just bypassed the problem in their very specific field.

I'm no specialist, but 20 seconds are probably already enough for a meteorite to hit our planet, or not...

But coming down to earth, I can think of many other cases where this might come in handy: every data source with fast recording (seismic activity, critical equipment monitoring, bank transactions, etc.). Do we have double dates? Do we overlap dates and lose records?

POSTED BY: Pedro Fonseca

You have a good point there! Maybe (like) UnixTime to have a function UTCTime that does all the not-so-nice arithmetic, but yeah it needs updates every half year to check for new possible leap seconds...

It's funny that half of the wiki page is about abolishing it and problems with it...

POSTED BY: Sander Huisman

We can abolish it, but we can't go back in time..

POSTED BY: Pedro Fonseca

By the way, Google method is actually much more interesting than I thought.

Assuming that my mass remains constant, should I consider that my weight changes for a day, or instead, a one day change on earth's time-space warping?

POSTED BY: Pedro Fonseca

An upgraded Limit function for multivariable (multidimensional) limits.

Example1:

f[x_, y_] := (x*y)/(x^2 + y^2)
Limit[f[x,y],{x->0,y->0}]
(* does not exist *)

Limit[f[x,y],{{x->0,Direction->1},{y->0,Direction->-1}}]
(* -1/2 *)

Limit[f[x,y],{{x->0,Direction->1},{y->0,Direction->1}}]
(* 1/2 *)

Let's check that it does not exist, using different paths:

Limit[f[x, y] /. y -> m*x, x -> 0, Assumptions -> m > 0]
Limit[f[x, y] /. y -> m*x, x -> 0, Assumptions -> m < 0]
Limit[f[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c > 0]
Limit[f[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c < 0]
(*  1/2  ,-1/2,  0  ,0  *)
(* does not exist *)

Example2:

 f2[x_, y_] := (y^2*Sin[x])/(x^2 + y^2)
 Limit[f2[x,y],{x->0,y->0}]
 (* 0 *)

And Check:

Limit[f2[x, y] /. y -> m*x, x -> 0, Assumptions -> m == 1]
Limit[f2[x, y] /. y -> m*x, x -> 0, Assumptions -> m == -1]
Limit[f2[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c == 1]
Limit[f2[x, y] /. x -> c*y^2, y -> 0, Assumptions -> c == -1]
(* 0, 0, 0, 0 *)

The limit exists and is zero.

POSTED BY: Mariusz Iwaniuk

Version 11.2 introduces nested and multivariate limits and different directions.

POSTED BY: Sander Huisman

Hello Sander

Thanks for info.

My requests finally come true. :)

Regards Mariusz

POSTED BY: Mariusz Iwaniuk

I would like to see a revamped interpolation framework.

On ListStepPlot you can choose Left, Right or Center, while you can only have right-continuity with the Interpolation framework, and ListPlot always gives you the left, again with no choice. While breaking current behaviour could be discussed, my main point here is the availability of options.

The number of questions on SE about how to get an interpolation function out of a plot also seems to point to the fact that many of us would like more options on the Interpolation function.

In some way, shouldn't the Interpolation function be the interpolation kernel of all functions that use interpolation? And shouldn't its options be exposed as the Method options of all functions that use interpolation? If it is not up to the job, shouldn't it be revamped?

POSTED BY: Pedro Fonseca

I'd like the function Interpolation to have, next to InterpolationOrder, an option Extrapolation or ExtrapolationOrder. So you could do something like:

Interpolation[data,Extrapolation ->0]

would give 0 outside the domain of data. Or you could do:

Interpolation[data,ExtrapolationOrder ->0] 

then it would just keep it constant outside the domain...
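
In the meantime, both behaviors can be emulated by wrapping the InterpolatingFunction (just a sketch; g and h are hypothetical wrapper names):

f = Interpolation[data];
{xmin, xmax} = First[f["Domain"]];
g[x_] := If[xmin <= x <= xmax, f[x], 0] (* like Extrapolation -> 0 *)
h[x_] := f[Clip[x, {xmin, xmax}]]       (* like ExtrapolationOrder -> 0 *)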

POSTED BY: Sander Huisman

A new function to create Venn diagrams which is already available in W|A.

POSTED BY: Sander Huisman

In certain cases, mostly with experimental data I guess, you have a ListLinePlot, but the data might have gaps in time. Sometimes you want the data to be Joined, but not across those large gaps; an option "MaxConnectivity" (or so), defaulting to Infinity, would be great! Basically it would divide the data into chunks that are connected, but not between the chunks. I now do this manually very often, but with multiple datasets the PlotStyle commands also have to be copied the right number of times, which makes for very 'clunky' code...
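
The manual workaround can be sketched by splitting the data wherever consecutive time stamps are further apart than some threshold (maxGap is a hypothetical parameter):

data = {{0, 1}, {1, 2}, {2, 1}, {10, 3}, {11, 2}};
maxGap = 5;
chunks = Split[data, #2[[1]] - #1[[1]] <= maxGap &];
ListLinePlot[chunks, PlotStyle -> Blue] (* the style has to be shared by all chunks *)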

POSTED BY: Sander Huisman
Posted 8 years ago

So some kind of WyswygForm, as an extension of TraditionalForm that preserves the order of output symbols "as is", would be nice to have.

POSTED BY: Timur Gareev

I think you are looking for TraditionalForm@HoldForm[...].

Actually, it sounds like you are typing a formula and expect it to show as you typed it. Then why don't you just type it in a text cell and never evaluate it? If you don't evaluate it, nothing in it will change.

Press Alt-7 (Command-7 on Mac), then Control-9, then type your formula.

POSTED BY: Szabolcs Horvát
Posted 8 years ago

Sorry, I meant not HoldForm but DisplayForm (DisplayForm // TraditionalForm). My concern was different: I have self-made functions with a nice output that should be usable in the text that accompanies the presentation. HoldForm prevents the evaluation that is needed.

POSTED BY: Timur Gareev

This post can be removed. Sorry my mistake.

POSTED BY: Mariusz Iwaniuk

An upgraded Derivative function for symbolic differentiation, that is, the computation of n-th order derivatives where n is a symbol.

Examples:

D[Log[a*x + b]^k, {x, n}]
(* a^n/(a*x+b)^n*Sum[Pochhammer[k-m+1,m]*StirlingS1[n,m]*Log[a*x+b]^(k-1),{m,0,n}] *)
D[BellB[a, x]^k, {x, n}]
(* Sum[Pochhammer[m-n+1,n]*StirlingS2[a,m]*x^(m-n),{m,0,a}] *)
D[BernoulliB[x], {x, n}]
(* Pochhammer[v-n+1,n]*BernoulliB[v-n,x]*)
D[Binomial[x, k], {x, n}]
(* Sum[(-1)^(m+k)*StirlingS1[k,m]*Pochhammer[m-n+1,n]*(x-k+1)^(m-n),{m,1,k}] *)
D[EulerE[v, x], {x, n}]
(* Pochhammer[v-n+1,n]*EulerE[v-n,x] *)
D[Sin[Cos[a*x + b]], {x, n}]
(* Sum[Sin[Cos[a*x+b+k*Pi/2]]*BellY[n,k,{Cos[a*x+b+Pi/2]*a,...,Cos[a*x+b+((n-k+1)*Pi)/2]}*a^(n-k+1)],{k,0,n}] *)
...

(* and for: 12 elliptic Jacobi functions as well as
the four elliptic JacobiTheta functions,the LambertW,LegendreP
and more function..... *)
POSTED BY: Mariusz Iwaniuk

nth order derivative can now be done (V11.1):

https://reference.wolfram.com/language/ref/D.html

POSTED BY: Sander Huisman

I'd like to see new integral transform functions:

  • Mellin transform and inverse
  • Hankel transform and inverse
  • Hilbert transform and inverse

There seem to be no such functions in Mathematica.

The Hankel transform, sometimes referred to as the Bessel transform, has uses in particular types of differential equations.

The Hilbert transform, sometimes called a quadrature filter, is useful in radar systems, single-sideband modulators, speech processing, measurement systems, and schemes for sampling band-pass signals.

The Mellin and inverse Mellin transforms are closely related to the Laplace and Fourier transforms and have applications in many areas, including:

  • digital data structures
  • probabilistic algorithms
  • asymptotics of Gamma-related functions
  • coefficients of Dirichlet series
  • asymptotic estimation of integral forms
  • asymptotic analysis of algorithms
  • communication theory
POSTED BY: Mariusz Iwaniuk

An upgraded LaplaceTransform function handling rules such as the following:

(image: table of Laplace transform rules)

and much more.

LaplaceTransform[t*f[t], t, s]
LaplaceTransform[t*f'[t], t, s]
LaplaceTransform[t*f''[t], t, s]
LaplaceTransform[f[x + a], x, s](* Exp[a*s]*(LaplaceTransform[f[x],x,s]-Integrate[Exp[-s*x]*f[x],{x,0,a}]) *)
LaplaceTransform[HeavisideTheta[x - a]*f[x - a], x, s](* Exp[-a*s]*LaplaceTransform[f[x],x,s]*)


 ...
LaplaceTransform[t^2*f[t], t, s]
LaplaceTransform[t^2*f'[t], t, s]
LaplaceTransform[t^2*f''[t], t, s]

 ...
and many more ....

Mathematica can't solve these, which is very strange because they are basics.

An upgraded LaplaceTransform and InverseLaplaceTransform handling pairs such as the following:

(image: table of transform pairs)

and much more....

 LaplaceTransform[ t^(\[Alpha] - 1)*MittagLefflerE[\[Alpha], \[Alpha], a*t^\[Alpha]], t, s];
 LaplaceTransform[MittagLefflerE[\[Alpha], -a*t^\[Alpha]], t, s];
 LaplaceTransform[1 - MittagLefflerE[\[Alpha], -a*t^\[Alpha]], t, s];
 LaplaceTransform[ t^(\[Beta] - 1)*MittagLefflerE[\[Alpha], \[Beta], a*t^\[Alpha]], t, s];

(*and inverse*)

 InverseLaplaceTransform[1/(s^\[Alpha] - a), s, t];
 InverseLaplaceTransform[s^\[Alpha]/(s*(s^\[Alpha] + a)), s, t]; 
 InverseLaplaceTransform[a/(s*(s^\[Alpha] + a)), s, t];
 InverseLaplaceTransform[s^(\[Alpha] - \[Beta])/(s^\[Alpha] - a), s, t];

 (*these  can't too*)
POSTED BY: Mariusz Iwaniuk

After 4 years, my dream has not come true yet. Mathematica 12.1.1 can't solve yet :(

POSTED BY: Mariusz Iwaniuk
Posted 8 years ago

I would like to see the Maximize and Minimize functions with an option to return all solutions of an optimization problem (as the Lagrangian method does). It is BTW irritating that the documentation doesn't state explicitly that not all solutions may be returned.

POSTED BY: Timur Gareev

That would be useful, rather than having to use Reduce to solve the KKT equations.

POSTED BY: Frank Kampas

Again no new function, but rather ease of functionality. Functions like:

  • Min
  • Max
  • Mean
  • StandardDeviation
  • Skewness
  • Median
  • Kurtosis
  • Variance

All work generally on the first dimension or on all dimensions (Min, Max). It would be great if all these functions got a Level option. Of course I can achieve that with Map/Apply, but it gives very unreadable code. For example:

Mean[matrix,1]
Mean[matrix,2]
Mean[matrix,{2}]

could mean: same as the 'normal' mean (mean over the first dimension), mean of the mean (rows and columns), and mean over the 2nd dimension (a column). Of course this could be extended to even higher dimensions. To me it seems 'unpleasant' and lacking in elegance to invoke powerful functions like Map and Apply to do something so simple. The folks at MATLAB have similar syntax, which I envy (though I never use it).
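
For comparison, the current idioms for a matrix look like this:

mat = {{1, 2}, {3, 4}};
Mean[mat] (* over the first dimension: {2, 3} *)
Mean /@ mat (* over the second dimension: {3/2, 7/2} *)
Mean[Flatten[mat]] (* over all elements: 5/2 *)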

POSTED BY: Sander Huisman

p.s. The function Total already has this syntax, so why not extend it to other common functions like the ones I mentioned above...

POSTED BY: Sander Huisman

Just a small functionality request: it would be nice if FromUnixTime were Listable by default. Of course I can do that myself every time, but it would be nice if the functionality were added.

POSTED BY: Sander Huisman

It would be nice if VoronoiMesh returned the mesh cells (via MeshCells[..., 2]) in the same order as the original points; now they are more or less random...
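
Until then, the cells can be reordered by matching each original point to the cell that contains it (just a sketch; this does a linear search per point):

pts = RandomReal[1, {10, 2}];
cells = MeshPrimitives[VoronoiMesh[pts], 2];
ordered = Table[SelectFirst[cells, RegionMember[#, pt] &], {pt, pts}];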

POSTED BY: Sander Huisman

It would be nice if Switch would compile without triggering MainEvaluate. So this:

SetSystemOptions[
   "CompileOptions" -> 
    "CompileReportExternal" -> 
     True]; 
Check[Compile[{{n, _Integer}}, 
   Switch[n, 1, 42, 2, 137]], 
  $Failed]

should not give $Failed as it does now.

Of course I can get the functionality of Switch by If's (something like:

Check[Compile[{{n, _Integer}},
  Module[{res = 0}, 
   If[n == 1, res = 42];
   If[n == 2, res = 137];
   res]], 
   $Failed]

but I fear it will be more inefficient. Or?

POSTED BY: Rolf Mertig

It should transform to:

If[n == 1, res = 42,
   If[n == 2, res = 137]
];

right? But what happens if none of them are true?

POSTED BY: Sander Huisman

A nice addition to various plot functions would be extending DataRange to also accept nonlinear specifications. In many cases you have an array of data, and you can directly plug it into ListPlot3D or ListContourPlot and set the x and y ranges very nicely. But what about cylindrical coordinates? Or nonlinear coordinates? A specification like:

DataRange -> {{x1,x2,x3,x4,x5,x6...},{y1,y2,y3,y4,y5,y6,...}}

would allow for x and y coordinates that are independent. But for the dependent case (the cylindrical case, for example), it would be nice to have:

DataRange ->{{{x1,y1},{x2,y1},....{xn,y1}} , {{x1,y2},{x2,y2}....{xn,y2} ........}}

Basically an array with all the coordinates.

POSTED BY: Sander Huisman

Maybe not a new function, but rather consistency of functionality:

data = {};
data[[All, 2]] = 3;       
Select[data, #[[2]] == 3 &];
MapAt[f, data, {All, 2}]
BinCounts[data, {0, 5, 1}]
BinCounts[data, {0, 5, 1},{1,5,2}]

So lines 2 and 3 execute perfectly; they don't care that it is an empty list. However, MapAt does not work; it remains unevaluated. Quite strange, I would say; in this case 'All' means 'all of the zero elements', so I don't see the problem...

Then, BinCounts in 1D works fine on an empty list (it returns an array of zeros), but BinCounts in 2D does not work on an empty list!! I basically have my own safeguard function that checks if Length[data]==0 and then returns a ConstantArray[0, dims], which is not so nice... I definitely think this is a huge inconsistency, especially because 1D works but 2D does not!

POSTED BY: Sander Huisman

I don't see a perfect method using existing methods, because some intermediate storage seems to be needed. But one can just use tables to avoid unpacking.

MemoryInUse[]
MaxMemoryUsed[]

(* Out[772]= 607593496

Out[773]= 609310128 *)

In[774]:= n = 2000;
mat = RandomReal[1, {n, n}];
Developer`PackedArrayQ[mat]
SetSystemOptions["PackedArrayOptions" -> {"UnpackMessage" -> True}];
uppermat = ConstantArray[0., n*(n + 1)/2];
i = 1;
Do[uppermat[[i]] = mat[[j, k]]; i++;, {j, n}, {k, j, n}];
Developer`PackedArrayQ[uppermat]

Out[776]= True

Out[781]= True

MemoryInUse[]
MaxMemoryUsed[]

(* Out[782]= 671617616

Out[783]= 671739592 *)

mat // ByteCount
uppermat // ByteCount

(* Out[785]= 32000152

Out[786]= 16008144 *)

So the memory jump is about equal to the size of mat plus the size of uppermat (the bare minimum) plus another uppermat, hinting that maybe a copy was done internally.

Can do similarly with Sow/Reap.

uppermat = Reap[Do[Sow[mat[[i, j]]], {i, n}, {j, i, n}]][[2, 1]];

It is slower and I think the intermediate memory used might be greater (different data structure than a packed array). It never unpacks anything though.

POSTED BY: Daniel Lichtblau

@Daniel Lichtblau Thanks for reminding me not to ignore the procedural approach! :-)

Can you find a way to compile this for better performance? A naive approach complains about

Compile::part: "Part specification uppermat[[i]] cannot be compiled since the argument is not a tensor of sufficient rank.

and I'm not sure how to tell it that it indeed is a vector.

cf = Compile[{{mat, _Real, 2}},
  Module[{uppermat, i, n},
   n = Length[mat]; (* assume square mat *)
   uppermat = ConstantArray[0., n*(n + 1)/2];
   i = 1;
   Do[
    uppermat[[i]] = mat[[j, k]];
    i++;,
    {j, n}, {k, j, n}
    ];
   uppermat
   ],
  {{uppermat, _Real, 1}}
  ]

A dedicated function would still be very nice though, but I realize that my use-case for this might be fairly unique. My LibraryLink solution takes 0.004 seconds for this 2000 by 2000 matrix vs 3 seconds for the Do loop. I tend to write a lot of LibraryLink code recently as LTemplate (even in this rudimentary form) makes it easy enough. Maybe a much better version of what I was trying to do with LTemplate would be a very useful future addition to Mathematica! The WTC2015 video on the new compilation technology was quite interesting.

POSTED BY: Szabolcs Horvát

The problem appears to be in the use of ConstantArray. One workaround: change that line to uppermat = Table[0., {n*(n + 1)/2}];. Another is to make sure it understands its second arg is an integer, via uppermat = ConstantArray[0., Round[n*(n + 1)/2]];. And you can now get rid of the third argument to Compile.

Once properly handled in compilation, this runs quite fast by the way.

POSTED BY: Daniel Lichtblau

Functions for extracting the lower and upper triangular parts of a matrix (and store it in a flat array).

I am not sure how much demand there is for this. I mentioned it because personally I need this very often. While this is easy to implement in terms of other functions, what I am looking for is (memory) efficiency: it should avoid unpacking arrays. Currently I use LibraryLink implementations specialized for real and integer types, so I can handle arrays that barely fit in memory. My preferred pure-Mathematica implementation is Join@@Pick[mat, LowerTriangularize[ConstantArray[1, Dimensions[mat]], -1], 1], but I needed better memory efficiency than what this can offer.

POSTED BY: Szabolcs Horvát

I would like to be allowed to replace the clumsy 'type specification'

f[x_ /; VectorQ[x, (Head[#] === h)& ]] := ...

by the simple and suggestive form

f[x__h] := ...

The rationale for this is: whenever one uses a head (e.g. 'par' for 'particle') for OO-like organization of a Wolfram Language program, one probably also uses lists of objects with just this head (lists of particles in the example above). So one should have an easy way to define functions for the treatment of lists (systems) of particles.

POSTED BY: Ulrich Mutze

What stops you from using

f[x:{___h}]:=...

?

POSTED BY: Leonid Shifrin

Thank you for this solution. I was not aware of

___h

Nevertheless I think that allowing

__h

(without a step of transforming a sequence into a list) would be desirable since it would be more natural to people who are used to

f[n_Integer, x_Real, z_Complex] := ...
POSTED BY: Ulrich Mutze

You can still do that, if you want. In such case you will have to put List around the match for __h on the r.h.s. For example,

total[elems___Integer]:= Total[{elems}]

But this would really mean taking a sequence as an argument, not a list / vector. So, if you want your argument to be a vector, this isn't right, semantically. The semantically correct way still would be

total[elems:{___Integer}]:=Total[elems]

Apart from this issue, the first definition has a number of others: you can't easily add other positional arguments (particularly in cases when their type is the same _h as that of your vector's elements), and also you will have to use Apply to pass an actual vector / list to such a function, like total @@ {1,2,3}. And for packed arrays, for example, Apply will unpack.

POSTED BY: Leonid Shifrin

Note that __h is already taken. It means one or more elements with head h. ___h means zero or more elements with head h. These sequences of elements can be contained not only in a List, but also in any other expression.

Finally, it is good to mention that patterns like {___Real} are optimized to work with packed arrays and will not unpack (nor will they test every element: all elements in a packed array are of the same type). VectorQ[arr, Head[#] === Real&] will unpack, and will test every element explicitly. Thus it will be slow. However, VectorQ with certain second arguments is optimized again and will not unpack (or test every element). Examples are VectorQ[..., NumberQ], VectorQ[..., NumericQ], VectorQ[..., IntegerQ], etc.

These observations are of interest only if you work with numerical types and packed arrays, of course.

POSTED BY: Szabolcs Horvát

@Szabolcs, thank you for freeing me from my misconception that __ deals with lists and ___ deals with sequences. You clarified the point and gave valuable additional information concerning VectorQ. Thanks again.

POSTED BY: Ulrich Mutze

RegionIntersectionQ or RegionOverlapQ, which gives True/False if two regions overlap. With an option if partial overlap of full overlap is required (i.e. does reg1 completely cover reg2 or partially).

POSTED BY: Sander Huisman

I think this would be a nice feature too. @Sander Huisman, what applications did you have in mind for this?

POSTED BY: Sam Carrettie

I'm for example looking at Voronoi diagrams; now I draw a line... which cells intersect with this line? I end up 'rasterizing' the line and using RegionMember for each cell. Similarly, say you want to look for Voronoi cells that are completely within the convex hull; you have to do some magic yourself now. One can probably think of more uses. A general construct would be very useful... either calling it the names I proposed, or expanding the RegionMember function to also allow regions (of any dimension) as the second argument.

POSTED BY: Sander Huisman

These are now (V11.1) included in the form of RegionDisjoint, RegionWithin, and RegionEqual.

POSTED BY: Sander Huisman

I'd like to see a new function called DChangeTransform[] which can change the variables of differential equations, but there seems to be no such function in Mathematica.

Examples:

DChangeTransform[D[f[x], {x, 2}] - f[x] == 0, {t = Exp[x]}, {x}, {t}, {f[x]}]
(* t^2*f''[t]+t*f'[t]-f[t]=0 *)
DChangeTransform[D[f[x], {x, 2}] - f[x] == 0, {t = Log[x]}, {x}, {t}, {f[x]}]
(* Exp[2*t]*f[t]+f'[t]-f''[t]=0 *)
DChangeTransform[D[f[x], x] - f[x] == 0, {t = Tan[x]}, {x}, {t}, {f[x]}]
(* (1+t^2)*f'[t]+f[t]=0 *)
DChangeTransform[D[u[x, t], {t, 2}] == c^2 D[u[x, t], {x, 2}], {a == x + c t, r == x - c t}, {x, t}, {a, r}, {u[x, t]}]
(* c (u^(1,1))[a,r]=0 *)
DChangeTransform[ D[z[x, y], {x, 2}] - D[z[x, y], {x, 1}, {y, 1}] - 2*D[z[x, y], {y, 2}] == 0, {u == 2*x + y, v == y - x}, {x, y}, {u, v}, {z[x, y]}]
(* -9 (z^(1,1))[u,v]=0 *)
DChangeTransform[x^2*D[f[x, y], {x, 2}] - 2*x*y*D[f[x, y], {x, 1}, {y, 1}] + y^2*D[f[x, y], {y, 2}] + x*D[f[x, y], {x, 1}] + y*D[f[x, y], {y, 1}] == 0, {u == x, v == x*y}, {x, y}, {u, v}, {f[x, y]}]
(* u ((f^(1,0))[u,v]+u (f^(2,0))[u,v])=0 *)
DChangeTransform[x*y^3*D[f[x, y], {x, 2}] + x^3*y*D[f[x, y], {y, 2}] - y^3*D[f[x, y], {x, 1}] - x^3*D[f[x, y], {y, 1}] == 0, {u == y^2, v == x^2}, {x, y}, {u, v}, {f[x, y]}]
(* u^(3/2) v^(3/2) ((f^(0,2))[u,v]+(f^(2,0))[u,v])=0  *)
DChangeTransform[D[f[x, y], x, x] + D[f[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, t}, f[x, y]]
(* (f^(0,2))[r,t]+r ((f^(1,0))[r,t]+r (f^(2,0))[r,t])=0 *)
DChangeTransform[D[f[x, y, z], {x, 2}] + D[f[x, y, z], {y, 2}] + D[f[x, y, z], {z, 2}] == 0, "Cartesian" -> "Spherical", {x, y, z}, {r, t, s}, f[x, y, z]]
(* (Csc[t]^2 (f^(0,0,2))[r,t,s])/r^2+(Cot[t] (f^(0,1,0))[r,t,s])/r^2+(f^(0,2,0))[r,t,s]/r^2+(2 (f^(1,0,0))[r,t,s])/r+(f^(2,0,0))[r,t,s]=0 *)
DChangeTransform[(x^2 + y^2)*D[u[x, y], x, x] + D[u[x, y], y, y] == 0, "Cartesian" -> "Polar", {x, y}, {r, t}, u[x, y]]
(* 2 (-1+r^2) Sin[2 t] (u^(0,1))[r,t]+(1+r^2-(-1+r^2) Cos[2 t]) (u^(0,2))[r,t]+r ((1+r^2-(-1+r^2) Cos[2 t]) (u^(1,0))[r,t]-2 (-1+r^2) Sin[2 t] (u^(1,1))[r,t]+r (1+r^2+(-1+r^2) Cos[2 t]) (u^(2,0))[r,t])=0 *)
DChangeTransform[2 (-1 + r^2) Sin[2 t] D[u[r, t], {t, 1}] + (1 + r^2 - (-1 + r^2) Cos[2 t]) D[ u[r, t], {t, 2}] +  r ((1 + r^2 - (-1 + r^2) Cos[2 t]) D[u[r, t], {r, 1}] - 2 (-1 + r^2) Sin[2 t] D[u[r, t], {r, 1}, {t, 1}] + r (1 + r^2 + (-1 + r^2) Cos[2 t]) D[u[r, t], {r, 2}]) == 0, "Polar" -> "Cartesian", {r, t}, {x, y}, u[r, t]]
(* (u^(0,2))[x,y]+(x^2+y^2) (u^(2,0))[x,y]=0 *)
POSTED BY: Mariusz Iwaniuk

Given the number of replies here, perhaps the post should be broken up in subcategories such as Graphics, Symbolics, Numerics, Data Manipulation.

POSTED BY: Frank Kampas

Perhaps even a separate 'group'...

POSTED BY: Sander Huisman

I would like to see better support for vectorization when working with conditionals.

Examples:

  • Select all elements of an array satisfying some inequalities
  • Replace all elements of an array with 0 or 1 depending on whether they satisfy some inequalities of equalities

Doing the first one for 1D arrays is already easy for arbitrary criteria (not just inequalities). We have Select and Cases. But they are not fast. If we restrict ourselves to inequalities only, there are much faster ways to do it. These operations can be formulated as simple arithmetic done on packed arrays, combined with UnitStep and Unitize. Vector-arithmetic is very fast.

For example, let us select all elements of an array that are >= 3. The solution:

mask = UnitStep[arr-3];
Pick[arr, mask, 1]

This is much faster than Select but also much less readable and much more complicated to write. Especially if we now require > 3 (strictly greater). Then we end up with the convoluted

mask = 1 - UnitStep[3 - arr]
Pick[arr, mask, 1]

Do the same for a multidimensional array for a complicated criterion like x > 0 || x < -1 and we quickly end up with some very opaque code which might easily contain a small mistake ...

In MATLAB it is so much easier to do the same thing. It would be simply

arr( (arr > 0) | (arr < -1) )

I wrote the BoolEval package to make it easy in Mathematica too. With this package we can simply do

BoolPick[arr, arr > 0 || arr < -1]

and get almost the same performance as the UnitStep based construction while retaining readability, ease of writing and reducing the possibility of mistakes. But the package is of course not perfect, it struggles with multidimensional arrays, and due to the translation it needs to do from logical expressions to a UnitStep based form, it is not quite as fast as a directly written UnitStep.

I would like to see functionality similar to this built in. I'm not sure what is the best way to expose it, what I did in the BoolEval package might not be the best way (it likely isn't). That is why I wrote up in detail what sorts of problems I want to solve. I have been using this package regularly over the last couple of years so I am convinced that this functionality is needed.

Witness how dramatic the performance differences can be:

In[53]:= arr = RandomReal[1, 1000000];

In[54]:= Pick[arr, UnitStep[arr - 0.5], 1]; // AbsoluteTiming
Out[54]= {0.019223, Null}

In[55]:= BoolPick[arr, arr >= 0.5]; // AbsoluteTiming
Out[55]= {0.093456, Null}

In[56]:= Select[arr, # >= 0.5 &]; // AbsoluteTiming
Out[56]= {0.481385, Null}

If you search this thread for the term "vectorization", you'll find my earlier post requesting a UnitStep like function that returns 0 for 0 (UnitStep[0] is 1). The motivation was the same.

POSTED BY: Szabolcs Horvát

I can't agree more, I'm also a heavy Select user. It is even worse once one uses the new operator form:

arr = RandomReal[1, 1000000];
Select[arr, # >= 0.5 &]; // AbsoluteTiming
Select[arr, GreaterEqualThan[0.5]]; // AbsoluteTiming

which is roughly 1.5 times slower. (almost as bad as Replace[arr, _?(# < 0.5 &) :> Nothing, {1}]).

I can imagine that an updated version of Select could internally translate these 'simple' criteria for larger lists (100+ elements or so), like you did with your BoolPick function.

Apart from that, it would be nice to have a Select with a level specification; I constantly combine Map and Select and create crazy constructs of pure functions within pure functions, probably not the fastest and most elegant solution.

EDIT: Just slightly faster than the Select is to use Pick and Thread:

Pick[arr, Thread[arr >= 0.5], True] // Length // AbsoluteTiming
POSTED BY: Sander Huisman

This problem is quite a bit improved in V10.4, where the operator form is only 10% slower.

POSTED BY: Sander Huisman
Posted 8 years ago

I would like to have something like ShapeMatrix (analogous to DiskMatrix, CrossMatrix or DiamondMatrix) for an arbitrary black-and-white 2D shape or special symbol that can be used as an input.

POSTED BY: Timur Gareev

How would the shape be specified? There is already Rasterize, which works both for graphics and for other things that can be shown in a notebook (text). I am not sure how consistent Rasterize is between different platforms for graphics. I know that it is not completely consistent for text, due to the font rendering of different OSs.

POSTED BY: Szabolcs Horvát
Posted 8 years ago

at first glance, Rasterize looks appropriate as one of options.

POSTED BY: Timur Gareev

Maybe some pure function?

Norm[{##}] <= 1 & would be DiskMatrix,

AnyTrue[{##}, # == 0 &] & would be CrossMatrix,

Norm[{##}, 1] <= 1 & would be DiamondMatrix ...

Combined with some Array function...

This should work in all dimensions; Rasterize is kind of limited to 2 dimensions, and for higher and lower dimensions you would have to combine multiple rasterizations in some way...
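
Such a constructor could be sketched with Array and Boole (shapeMatrix is a hypothetical helper; the criterion receives coordinates scaled to the range -1..1):

shapeMatrix[crit_, r_] := Array[Boole[crit[({##} - r - 1)/r]] &, {2 r + 1, 2 r + 1}]
shapeMatrix[Norm[#] <= 1 &, 3] (* disk-like *)
shapeMatrix[Norm[#, 1] <= 1 &, 3] (* diamond-like *)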

POSTED BY: Sander Huisman

It would be nice if the BlockMap function were extended to handle overhangs like kl and kr in Partition. Support for padding like Partition would also be very welcome; basically it would follow the exact same arguments that Partition has.

In addition to BlockMap, it would be nice to have a new function called BlockApply, for completeness.
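
In the meantime, the full Partition specification can be emulated by composing Map with Partition (just a sketch):

Map[f, Partition[Range[5], 2, 1, {1, 1}, 0]]
(* {f[{1, 2}], f[{2, 3}], f[{3, 4}], f[{4, 5}], f[{5, 0}]} *)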

POSTED BY: Sander Huisman

Developer`PartitionMap has the desired functionality of BlockMap. Even more, Developer`PartitionMap is slightly faster.

?Developer`PartitionMap

What do you expect from BlockApply? Have you seen ArrayFilter?

POSTED BY: Alexey Golyshev

I didn't know about PartitionMap, but it seems that BlockMap uses PartitionMap internally:

Trace[BlockMap[g, Range[5], 2, 1]]

What I would like to see from BlockApply is this:

BlockApply[g, Range[5], 2, 1]

{g[1, 2], g[2, 3], g[3, 4], g[4, 5]}

which would be equivalent to: BlockMap[g @@ # &, Range[5], 2, 1]

Basically it will be the Apply equivalent just like you have the Map/ BlockMap pair.

POSTED BY: Sander Huisman

You are right. It seems that BlockMap uses PartitionMap internally. Bug in Trace? ;-)

If BlockMap is PartitionMap without any optimization, it's strange that not all the functionality has been activated.

POSTED BY: Alexey Golyshev

I think it is without optimization, just with a lot of safety checks, from what I can decipher from the trace results..

POSTED BY: Sander Huisman

I'd like to see a new function, ConvertFunction[ ], used to convert an expression from one form to another.

ConvertFunction[expr, Method -> target-expr] attempts to convert the specified expr into a form written in terms of the specified target-expr.

Examples:

ConvertFunction[BesselK[1/3, x], Method -> AiryAi]
(*  \[Pi]*Sqrt[3^(1/3)*2^(2/3)/(x^(2/3))]*AiryAi[1/2*3^(2/3)*2^(1/3)*x^(2/3)]  *)
ConvertFunction[KelvinKei[n, x^2], Method -> BesselK]
(*  1/2*I*(Exp[1/2*I*n*Pi]^2*BesselK[n,(1/2-1/2*I)*x^2*Sqrt[2]]-BesselK[n,(1/2+1/2*I)*x^2*Sqrt[2]])/Exp[1/2*I*n*Pi]  *)
ConvertFunction[BesselJ[n, x], Method -> {HankelH1, HankelH2}]
(*  1/2*HankelH1[n,x]+1/2*HankelH2[a,x] *)
ConvertFunction[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}],Method -> HeavisideTheta]
(*  1+HeavisideTheta[x]+HeavisideTheta[-1+x]-3*HeavisideTheta[-2+x]  *)
ConvertFunction[Gamma[n + 3/2]/(Sqrt[Pi]*Gamma[n]),Method -> Binomial]
(*  n*(n+1)*Binomial[n+1/2,-1/2]  *)
ConvertFunction[1/2*x*Pi^(1/2)*(-2*x^2 + 2)^(1/4)*LegendreP[-1/2, -1/2, -2*x^2 + 1]/(-2*x^2)^(1/4), Method -> ArcTrig]
(* ArcSin[x]  *)
ConvertFunction[Exp[x^2] - 2*Sinh[x^2], Method -> Exp]
(* Exp[-x^2]  *)
ConvertFunction[Cos[x]*Sin[x], Method -> {Exp, Log}]
(*  -1/2*I*(1/2*Exp[I*x]+1/2*Exp[-I*x])*(Exp[I*x]-Exp[-I*x]) *)
ConvertFunction[Cos[x]*Sin[x], Method -> Tan]
(*  2*Tan[1/2*x]*(1-Tan[1/2*x]^2)/(1+Tan[1/2*x]^2)^2 *)
ConvertFunction[Sin[x], Method -> Sum]
(* Sum[(-1)^n*x^(2*n+1)/(2*n+1)!,{n,0,Infinity}]  *)
ConvertFunction[Sqrt[(1 - Sqrt[1 - x])/x], Method -> Sum]
(* Sum[Sqrt[2]*(4*n)!*16^-n*x^n/(((2*n)!)^2*(2*n+1)),{n,0,Infinity}] *)
ConvertFunction[WhittakerM[0, 1/2, x], Method -> ElementaryFunction]
(*  -2*I*Sin[1/2*I*x]  *)
ConvertFunction[DawsonF[x], Method -> Erf]
(*  -(-1/2*I*Sqrt[Pi]*Erf[I*x])/Exp[x^2]  *)
ConvertFunction[Gamma[x], Method -> Factorial]
(*  (x-1)!  *)
ConvertFunction[LegendreP[1/2, x], Method -> {EllipticK, EllipticE}]
(*  2*Sqrt[2]*Sqrt[x+1]*EllipticE[Sqrt[(x-1)/(x+1)]]/Pi-2*Sqrt[2]*EllipticK[Sqrt[(x-1)/(x+1)]]/Pi/Sqrt[x+1]  *)
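Until such a function exists, individual cases can be sketched with replacement rules (convertFunction is a hypothetical helper; only the listed identities are handled):

convertFunction[expr_, Factorial] := expr /. Gamma[z_] :> (z - 1)!
convertFunction[expr_, Exp] :=
 expr /. {Sin[z_] :> (Exp[I z] - Exp[-I z])/(2 I), Cos[z_] :> (Exp[I z] + Exp[-I z])/2}

convertFunction[Gamma[x], Factorial]
(* (-1 + x)! *)

A real implementation would of course need a large database of such identities, together with simplification toward a canonical form.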
POSTED BY: Mariusz Iwaniuk

For:

ConvertFunction[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}], 
 Method -> HeavisideTheta]

we can use:

 InverseLaplaceTransform[
  LaplaceTransform[Piecewise[{{1, x < 0}, {2, x < 1}, {3, x < 2}}], x, 
   s], s, x]

 (*2 - 3 HeavisideTheta[-2 + x] + HeavisideTheta[-1 + x]*)
POSTED BY: Mariusz Iwaniuk
Posted 8 years ago

It would be nice if WeightedData would take an association of values to weights as its argument; then you could use the results of Counts and CountsBy with it directly.
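In the meantime, a thin wrapper does the job (weightedData is a hypothetical name):

weightedData[assoc_Association] := WeightedData[Keys[assoc], Values[assoc]]

weightedData[Counts[{1, 1, 2, 3, 3, 3}]]

which builds the WeightedData from the counted values and their multiplicities.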

POSTED BY: Matt Pillsbury

Another useful function would be one like Select, but that returns two lists: the ones selected and the ones removed. Then you can easily (for instance) separate a dataset into two groups based on a threshold... This can be achieved using GatherBy and a function, but that might be less intuitive...

POSTED BY: Sander Huisman

Reap + Sow with tags

selectSplit[data_,patt_]:=Reap[Sow[#,patt@#]&/@data;][[2]]

data=Range[10];
selectSplit[data,OddQ]
{{1, 3, 5, 7, 9}, {2, 4, 6, 8, 10}}
POSTED BY: Alexey Golyshev

I have a similar implementation, but problems always occur when the data is (in your case) only odd, only even, or when indecisive data is included... Of course this can also be fixed... You would always like an answer back of the form {{all data that gives True}, {all data that gives False}}: not the other way around ({False, True}), and not just {False} or {True}, et cetera....

POSTED BY: Sander Huisman

One might call the function Separate (which is not used yet, and sounds like what it does). The names Divide and Split are already taken..

POSTED BY: Sander Huisman
SetAttributes[Separate, HoldRest]

Separate[data_, patt_] := Module[
  {res},
  res = GatherBy[data, patt];
  If[patt@res[[1, 1]], res, Reverse@res]
  ]

Separate[data, EvenQ]
{{2, 4, 6, 8, 10}, {1, 3, 5, 7, 9}}
POSTED BY: Alexey Golyshev
Posted 8 years ago

It's really easy with the new GroupBy function:

selectSplit[pred_] :=  GroupBy[pred] /* Lookup[{True, False}];
selectSplit[list_, pred_] := selectSplit[pred][list];

selectSplit[Range[10], OddQ]
(* {{1, 3, 5, 7, 9}, {2, 4, 6, 8, 10}} *)

Since M10 came down the pike, I've discovered that it's often most convenient to define the curried "operator form" of a function first, because it can be done very concisely as a "pipeline" of composed functions, and then define the "ordinary" form in terms of it.

POSTED BY: Matt Pillsbury

Neither solution accounts for the case when one of the groups (or both) is empty, so you need quite involved code to handle those cases as well:

ClearAll[Separate]
SetAttributes[Separate, HoldRest]
Separate[data_, patt_] := Module[{res},
  res = GatherBy[data, patt];
  {SelectFirst[res, If[Length[#] > 0, patt@#[[1]], False] &, {}], 
   SelectFirst[res, If[Length[#] > 0, ! patt@#[[1]], False] &, {}]}
  ]
Separate[Range[10, 20, 2], EvenQ]
Separate[Range[11, 21, 2], EvenQ]
Separate[Range[10, 20, 1], EvenQ]
Separate[{}, EvenQ]

making it a little less 'elegant'.

POSTED BY: Sander Huisman

A GroupBy, rather than a Gather(By), solution might indeed be more elegant...

POSTED BY: Sander Huisman
Posted 8 years ago

Oh, good point. You can't use the pipeline, but the default value argument for Lookup means it can still be pretty clean:

 ClearAll[selectSplit];
 selectSplit[pred_] :=
  Lookup[GroupBy[#, pred], {True, False}, {}] &
 selectSplit[list_, pred_] := selectSplit[pred][list];

selectSplit[Range[2, 10, 2], EvenQ]
(* {{2, 4, 6, 8, 10}, {}} *)
POSTED BY: Matt Pillsbury

This solution is indeed robust and I like it, Thanks! Though I would disregard the pure function in favour of a slightly different algorithmic structure:

 ClearAll[selectSplit];
 selectSplit[pred_][list_List] := selectSplit[list, pred]
 selectSplit[list_List, pred_] := Lookup[GroupBy[list, pred], {True, False}, {}]

 selectSplit[Range[2, 10, 2], EvenQ]
 selectSplit[Range[3, 11, 2], EvenQ]
 selectSplit[{}, EvenQ]
 selectSplit[EvenQ][4]
 selectSplit[3][{1, 2, 3}]

This will also immediately prevent the penultimate example from being evaluated...

POSTED BY: Sander Huisman

Oh, and one might want the group for which the predicate does not match to also include the indecisive ones, like an unassigned symbol (variable name)...

POSTED BY: Sander Huisman

I would like to see much faster histogram calculations, both in Histogram itself and in BinCounts.

Histogram is a very flexible function that can compute many different kinds of histograms. Surely all of these can't be fast. But the most common cases should have very fast code paths. In the very least, when bin sizes are uniform and explicitly given, Histogram should be much faster, both in 1D and multiple dimensions.

I know that BinCounts is faster, but why can't Histogram be just as fast? Also, BinCounts is not nearly as fast as it could be, and can be outperformed using Mathematica code (no need to resort to C):

http://community.wolfram.com/groups/-/m/t/237660

Here's more proof that people need better performance:

http://mathematica.stackexchange.com/questions/96392/fast-1d-bincounts-alternative/96395#96395

On multiple occasions I was forced to implement histogramming using LibraryLink. Such a basic operation should be built in and perform much better than it currently does (for million to billion element numerical arrays).


A closely related request is to make it easier to plot pre-binned data as a histogram, with the same flexibility as Histogram allows for unbinned data. This would be handy for plotting data from BinCounts or custom histogramming functions.

http://community.wolfram.com/groups/-/m/t/593095


After this criticism of Histogram I should also point out that it is a very flexible and versatile function which makes it easy to create nice visualizations with little code. One of the big reasons why I want Histogram to perform better instead of being content with my own implementation is because I want access to all its features without compromising on performance.

POSTED BY: Szabolcs Horvát

It is strange that it takes so long indeed, because if you check Trace on a simple BinCounts:

Trace[BinCounts[{0.1, 0.5, 0.7, 0.75, 0.9}, {0.0, 1.0, 0.1}]]

and if I understood correctly, it relies on division (by the bin width), using Floor to force the result to an integer, and then subtraction so that the first bin is '1'. Then it uses Tally to, well... tally it. However, Tally in this case may not be so efficient, as it is probably an n*log(n) algorithm. It might be more efficient to go through the array once to find the minimum and the maximum; if maximum - minimum << n, then make a ConstantArray[0, numberOfBins] (or a sparse array?) and cycle through the points one by one, adding 1 to the corresponding bin. If maximum - minimum > n, it might be more advantageous to use Tally...

Unfortunately one cannot use Trace on Tally, so it is probably implemented in the kernel (C), while BinCounts is not.

I think they have one general algorithm for N-dimensional binning and few (or no) special-cased paths for the simple cases you mentioned.

But yes, such things should indeed be sped up! Just yesterday I was binning a couple million 2D points, which took (what felt like) forever. And that included explicit bin specifications in both directions...

And while they are at it, please also create my BinListBy or give me back indices in a function called BinIndices.
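As an illustration of how fast the common case can be, here is a minimal compiled 1D binning sketch for uniform, explicitly given bins (binCounts1D is a hypothetical name; out-of-range points are simply dropped):

binCounts1D = Compile[{{data, _Real, 1}, {min, _Real}, {max, _Real}, {dx, _Real}},
  Module[{bins = Table[0, {Ceiling[(max - min)/dx]}], j = 0},
   Do[
    j = Floor[(x - min)/dx] + 1;
    If[1 <= j <= Length[bins], bins[[j]] = bins[[j]] + 1],
    {x, data}];
   bins]];

binCounts1D[RandomReal[1, 10^6], 0., 1., 0.1]

A single-pass loop like this should give the same counts as BinCounts with the equivalent binspec, and it shows there is plenty of headroom for a fast code path in the general function.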

POSTED BY: Sander Huisman

Some more specific Clustering algorithms, like DBSCAN and OPTICS...

POSTED BY: Sander Huisman

I would like to see better support for package development, and more functions that would typically be used in this context (as opposed to interactive work). It would be nice to have more of the Developer` context symbols documented. I would like to see some of the Internal` context (and other undocumented) functions lifted into the Developer` context and being officially supported. I am thinking of things like PositiveMachineIntegerQ, RealValuedNumericQ, WithLocalSettings, etc. These are things that are typically (very) useful during package development, but less so during everyday work.

POSTED BY: Szabolcs Horvát
Posted 8 years ago

I would like to have something like a ListMask[] function with the following aim. Say I have a nested list nlist1 = {a, {b}, {{c}}}. I would like to apply its "list mask" {_,{_},{{_}}} to an arbitrary flattened list of the same length as Flatten[nlist1], i.e. the list {x, y, z}, to get {x, {y}, {{z}}}.

POSTED BY: Timur Gareev

I'm not sure if this warrants a new function, especially with the name you proposed; ListTransform, ListReshape, ListShape, et cetera might be better. However, I think its use is too specific. Your function is, however, easy to implement:

ListMask[pattern_,list_List]:=ReplacePart[pattern,Thread[Position[pattern,Verbatim[_]]->list]]
ListMask[{_,{_},{{_}}},{1,2,3}]
POSTED BY: Sander Huisman
Posted 8 years ago

I was thinking about a function that would take the nested list in place of your pattern (so the idea is to grab the pattern from a list). Besides, I noticed such an approach may be too slow for huge lists; e.g. just the first thing that came to mind: ListMask[Split[list], Range[Length[list]]]

POSTED BY: Timur Gareev

This will 'copy' the structure of one list to another:

ListMask[list1_List,list2_List]:=With[{l=Replace[list1,_:>_,{-1}]},ReplacePart[l,Thread[Position[l,Verbatim[_]]->list2]]]
ListMask[{1,2,{3},{{4}}},{5,6,7,8}]
POSTED BY: Sander Huisman

And this will work with very complicated nested arrays of length ~80000 as well, quite fast:

tmp = Nest[{#, {#}} &, Range[5], 14];
Length[Flatten[tmp]]
ListMask[tmp, Range[Length[Flatten[tmp]]]];
POSTED BY: Sander Huisman
Posted 8 years ago

Maybe for ~80 000, but 10^6 elements with a relatively flat structure took me over a minute to run. Nonetheless, thank you. It was just an idea.

POSTED BY: Timur Gareev

If it is relatively flat, then other strategies might be a better option... I took the most general case...

POSTED BY: Sander Huisman

Integration of Intel DAAL: increasing performance of existing methods in Classify/Predict + adding the new ones (AdaBoost, BrownBoost, LogitBoost) + online learning + outlier detection + association rules.

POSTED BY: Alexey Golyshev
Posted 8 years ago

One that I've had to implement myself, and which I suspect could be a lot more efficient if it were built in, is PrimeRange, which returns all the primes between its lower and upper bound. Currently I have something built on Prime and PrimePi, which works, I guess....
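For reference, the Prime/PrimePi version can be written compactly (primeRange is a hypothetical name; bounds are inclusive and a is assumed to be at least 2):

primeRange[a_Integer, b_Integer] := Prime[Range[PrimePi[a - 1] + 1, PrimePi[b]]]

primeRange[10, 20]
(* {11, 13, 17, 19} *)

A built-in version could presumably use a segmented sieve over the interval instead, which would be much faster for large bounds.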

POSTED BY: Matt Pillsbury

Regarding the frontend:

  1. It would be nice to extend a feature that is already implemented: when you type a closing bracket, both matching brackets are highlighted momentarily. If the cursor/caret is next to a bracket, it would be nice to highlight the bracket's counterpart as well, so you can easily figure out, in an expression ending in ]}]], which one is which. Now included (V11 or V11.1?)

  2. In addition it would be nice that once you have your caret on a variable, that it will highlight (bold?) all the instances of this variable.

  3. It would be nice to have a right-click "View definition" if you click on any function, if it is built-in then use the built-in search, otherwise go to the function in the code.

POSTED BY: Sander Huisman

I'd like a significant front-end improvement regarding (accidentally) showing big expressions. Currently it shows an intermediary dialog with a short/shallow form of the expression and 'show more' buttons, but generating this sometimes takes forever or crashes the front end. I would like (an option) to disable this behaviour. I'm often working with variables of a gigabyte or more (big matrices). One typo and the front end crashes, with no way around it. Just a message saying "Expression too big to view" as output is fine; there is no need for this dialog with options and truncated views.

POSTED BY: Sander Huisman

We already have MapThread and MapIndexed, but a combination MapThreadIndexed would also be handy on some occasions (though it is easily made myself using MapThread with an additional list of indices, which is presumably how MapIndexed works anyway).
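A sketch of such a function, appending the index as the last argument (mapThreadIndexed is a hypothetical name):

mapThreadIndexed[f_, lists : {__List}] := MapThread[f, Append[lists, Range[Length[First[lists]]]]]

mapThreadIndexed[f, {{a, b, c}, {x, y, z}}]
(* {f[a, x, 1], f[b, y, 2], f[c, z, 3]} *)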

POSTED BY: Sander Huisman

Another addition would be support for custom PlotThemes. I know it is (kind of) possible now using the hidden function:

Themes`AddThemeRules["epic",
  DefaultPlotStyle -> Directive[Blue,Opacity[0.5]],
  Background -> LightBlue,
  AxesStyle -> Red
]

but does not work well when you combine multiple themes...

POSTED BY: Sander Huisman

I don't see any difference with Mathematica 10.3.1 from a default plot theme if I use that "epic" rule and evaluate, say:

 Plot[Exp[-x] Cos[x], {x, 0, 2 Pi}, PlotTheme -> "epic"]
POSTED BY: Murray Eisenberg

Hmm try this (clean fresh kernel):

Themes`AddThemeRules["epic", 
 DefaultPlotStyle -> Directive[Blue, Opacity[0.5]], 
 Background -> LightBlue, AxesStyle -> Red]
Plot[Exp[-x] Cos[x], {x, 0, 2 Pi}, PlotTheme -> "epic"]

You should see a very ugly plot!

for me $Version is "10.3.1 for Mac OS X x86 (64-bit) (December 9, 2015)"

POSTED BY: Sander Huisman

My fault: when I copied and pasted your code for the Themes`AddThemeRules expression, I somehow introduced an error.

The undocumented Themes context has some interesting items, e.g.:

 Themes`ThemeGallery[]

I hope Themes`AddThemeRules gets officially documented and kept in the language: it's a very handy way to consistently treat plots.

POSTED BY: Murray Eisenberg

When can I finally copy a DMSString like this: DMSString[$GeoLocation]:

"45\[Degree]45'0.000\"N 4\[Degree]50'24.000\"E"

without getting those annoying [Degree]s in there; I would like to see the ° symbol! Inside Mathematica this copy-paste makes sense, but outside it doesn't...

Also more customisability for DMSString when given a GeoLocation would be very welcome!
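A plain-text variant can be sketched with DMSList (plainDMSString and toDMS are hypothetical names; the formatting choices are assumptions):

deg = FromCharacterCode[176]; (* a literal ° character *)
toDMS[v_, {pos_, neg_}] := With[{d = DMSList[Abs[v]]},
  ToString[d[[1]]] <> deg <> ToString[d[[2]]] <> "'" <>
   ToString[d[[3]]] <> "\"" <> If[v >= 0, pos, neg]]
plainDMSString[{lat_, lon_}] := toDMS[lat, {"N", "S"}] <> " " <> toDMS[lon, {"E", "W"}]

plainDMSString[{45.75, 4.84}]

This produces a string containing a real ° character, which pastes cleanly outside of Mathematica.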

POSTED BY: Sander Huisman

I'd like to see a new function Numerical inverse Laplace transform.

Gives a numerical approximation to the inverse Laplace transform of expr evaluated at the numerical value t, where expr is a function of s.

An Example:

f[s_] := 1/(s^2 + 1);
InverseLaplaceTransform[f[s], s, t] /. t -> 1
(*0.8414709848078965*)

 NInverseLaplaceTransform[f[s], s, 1]
 (*0.8414709848078965*)

It can't solve:

  f1[s_] := Exp[s]/s^3;
  f2[s_] := Exp[s]/s^2;
  f3[s_] := 1/Sin[s];
  f4[s_] := s/Gamma[s];

and the list goes on!

See web links: Link1 Link2 Link3

POSTED BY: Mariusz Iwaniuk

Isn't this just

InverseLaplaceTransform[f[s], s, 1.]

POSTED BY: Jon McLoone

InverseLaplaceTransform is a symbolic solver, so it works only in simple cases. Take, for example:

g[s_] := Sqrt[Log[s]/s];
InverseLaplaceTransform[g[s], s, 2.]
(*InverseLaplaceTransform[Sqrt[Log[s]/s], s, 2.]*)

It can't calculate the value at t = 2.

This is why the new feature is needed:

A numerical solver for InverseLaplaceTransform.

NILT = Compile[{{t, _Real}, {n, _Integer}, {e, _Real}, {a, _Real}},
  Module[{k},
   ((1/4) Exp[a t + e]/t) (((1/2) Re[g[s]] /. s -> a + e/t) +
     Sum[Re[Exp[(1/4) I k Pi] (g[s] /. s -> a + e/t + (1/4) I k Pi/t)], {k, 1, n}])]];
 t = 2;
 n = 50000;
 e = 2;
 a = 0;
 NILT[t, n, e, a]
 (*-1.05866*)

which gives -1.05866 for g[s] = Sqrt[Log[s]/s] at t = 2.

POSTED BY: Mariusz Iwaniuk

I would like Mathematica to include a primorial function, such as Primorial[n_] := Product[Prime[i], {i, 1, n}]. Primorials are very useful in finding twin primes, as twin primes are often found at or close to primorials. For example, the first few primorials are {2, 6, 30, 210, 2310, 30030}, and twin primes lie on either side of several of them: {5, 7}, {29, 31}, {2309, 2311}. Primorials are also of use in factoring.

only 79 percent? Actually, I think that's impressive.

POSTED BY: Frank Kampas

@Frank Kampas The number is impressive by itself, until you see that Maple solves 92%, and does it on average 25x faster!

POSTED BY: Sander Huisman

Sander, please post an example of a DE solved by Maple but not Mathematica.

POSTED BY: Frank Kampas

Have a look at his exhaustive list:

http://12000.org/mynotes/kamek/version1/KERNEL/KEse1.htm#x3-20000

there is a section at the bottom:

Solved by Mathematica but not by Maple

and vice versa...

POSTED BY: Sander Huisman

I looked at a couple examples of DEs solved by Maple and not Mathematica and the Maple "solutions" had integrals in them. Not what I regard as a solution.

POSTED BY: Frank Kampas

Though not a 'full' solution. I'd rather have an explicit solution for (say) y in terms of an integral, than an unsolved ODE... Again, I'm not saying Mathematica is bad at solving ODEs. But I think there is still room for improvement.

I'm just curious though if there are mistakes in the solutions of Maple or Mathematica... Not giving a solution is better than a wrong one... I think both companies should do a showdown ;-)

POSTED BY: Sander Huisman

It's probably possible to write a program that goes through a list of DEs that Mathematica claims to solve and check to see if the solutions are correct.
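A sketch of such a checker: solve for the pure function y (so that derivatives get substituted too) and plug each solution back into the equation (checkDSolve is a hypothetical name):

checkDSolve[eq_, y_, x_] := Module[{sol = Quiet@DSolve[eq, y, x]},
  If[Head[sol] === DSolve, $Failed, Simplify[eq /. #] & /@ sol]]

checkDSolve[y'[x] == y[x], y, x]
(* {True} *)

Anything other than True in the result would then flag a candidate wrong (or conditional) solution.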

POSTED BY: Frank Kampas
DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]
DSolve[y'[x] + 2*Tan[y[x]]*Tan[x] - 1 == 0, y[x], x]
DSolve[y'[x] - Tan[x*y[x]] == 0, y[x], x]
DSolve[(y'[x])^2 - 2*x^2*y'[x] + 2*x*y[x] == 0, y[x], x]
example 414.
example 415.
DSolve[3*(y'[x])^2 + 4*x*y'[x] - y[x] + x^2 == 0, y[x], x]
DSolve[x*(y'[x])^2 + (y[x] - 3*x)*y'[x] - y[x] == 0, y[x], x]
example 428.
example 429.
example 430.
DSolve[(x^2 + a)*(y'[x])^2 + 2*y[x]*x*y'[x] + y[x]^2 + b == 0, y[x], x]
example 465.
.....
.....

the list goes on...

let's take the first equation:

DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]
(*Out: DSolve[y'[x] - y[x]^2 - y[x]*Sin[x] - Cos[2*x] == 0, y[x], x]*)


  • Mathematica = 0.
  • Maple = 1.
POSTED BY: Mariusz Iwaniuk

An upgrade to the DSolve function for analytically solving ordinary differential equations, based on the solutions presented in this book:

Handbook of Exact Solutions for Ordinary Differential Equations.Valentin F. Zaitsev, Andrei D. Polyanin.

This book contains nearly 6200 ordinary differential equations and their solutions.

From this book, 1940 ordinary differential equations and their solutions were tested; Mathematica can solve only 79 percent of them.

If DSolve can't solve an equation, the answer could look like this:

eq = y''[x] + f[x]*y[x] == 0;
DSolve[eq, y[x], x]
(*DSolve[y[x] = Exp[Integrate[Y[x], x] + C[1]] -> {Y'[x] == -Y[x]^2 + f[x]->Y[x] == y'[x]/y[x]}]]*)

eq2 = y''[x] + x*Exp[x]*y[x] == 0;
DSolve[eq2, y[x], x]
(*DSolve[y[x] = Exp[Integrate[Y[x], x] + C[1]] -> {Y'[x] == -Y[x]^2 + x*Exp[x]-> Y[x] == y'[x]/y[x]}]]*)
POSTED BY: Mariusz Iwaniuk

I'd like to see a new Heun function.

Heun equations include as particular cases the Lame, Mathieu, spheroidal wave, hypergeometric, and with them most of the known equations of mathematical physics. Five Heun functions are defined as the solutions to each of these five Heun equations, computed as power series solutions around the origin satisfying prescribed initial conditions.

POSTED BY: Mariusz Iwaniuk

Would be very useful indeed!

POSTED BY: Sander Huisman

12.1 introduced: HeunB, HeunBPrime, HeunC, HeunCPrime, HeunD, HeunDPrime, HeunG, HeunGPrime, HeunT, and HeunTPrime

POSTED BY: Sander Huisman

I'd like to see a new function to solve integro-differential equations.

With the method from this paper, one can solve:

  • differential equations
  • difference equations
  • differential-difference equations
  • fractional differential equations
  • pantograph equations
  • integro-differential equations

Examples only for integro-differential equations:

eq = y'[x] + 2*y[x] + 5*Integrate[y[t], {t, 0, x}] == Piecewise[{{0, x < 0}, {1, x >= 0}}];
IntDSolve[{eq, y[0] == 0}, y[x], x]
(*Out: {{y[x] -> 1/2*Exp[-x]*Sin[2*x]}}*)

eq1 = y[x] - 1/2*Integrate[y[t]*x*t, {t, 0, 1}] == 5/6*x;
IntDSolve[eq1, y[x], x]
(*Out: {{y[x] -> x}}*)

eq3 = {y1[x] == x^2 - 1/5*x^5 - 1/10*x^10 + Integrate[y1[t]^2 + y2[t]^3, {t, 0, x}], y2[x] == x^3 + Integrate[y1[t]^3 - y2[t]^2, {t, 0, x}]};
IntDSolve[eq3, {y1, y2}, x]
(*Out: {{y1[x] -> x^2}, {y2[x] -> x^3}}*)

 eq4 = {y1'[x] == 1 + x + x^2 - y2[x] - Integrate[y1[t] + y2[t], {t, 0, x}], y2'[x] == -1 - x + y1[x] - Integrate[y1[t] - y2[t], {t, 0, x}]};
 IntDSolve[{eq4, y1[0] == 1, y2[0] == -1}, {y1, y2}, x, Order -> 3]
 (*Out: {{y1[x] -> 1 + 2 x + x^2/2 + x^3/6 + O[x^4]}, {y2[x] -> -1 - x^2/2 - x^3/6 + O[x^4]}}*)

 eq5 = {f''[x] == 1 - x^3 - 1/2*g'[x]^2 + 1/2*Integrate[f[t]^2 + g[t]^2, {t, 0, x}], g''[x] == -1 + x^2 - x*f[x] + 1/4*Integrate[f[t]^2 - g[t]^2, {t, 0, x}]};
 IntDSolve[{eq5, f[0] == 1, f'[0] == 2, g[0] == -1, g'[0] == 0}, {f, g}, x, Order -> 3]
 (*Out: {{f[x] -> 1 + 2 x + x^2/2 + x^3/6 + O[x^4]}, {g[x] -> -1 - x^2/2 - x^3/6 + O[x^4]}}*)
POSTED BY: Mariusz Iwaniuk

An update to the Derivative and Integrate functions to support fractional calculus.

Examples:

FractionalD[nu_, f_, t_, opts___] :=  Integrate[(t - x)^(-nu - 1) (f /. t -> x), {x, 0, t}, opts,GenerateConditions -> False]/Gamma[-nu]
FractionalD[mu_?Positive, f_, t_, opts___] :=  Module[{m = Ceiling[mu]}, D[FractionalD[-(m - mu), f, t, opts], {t, m}]]

Computing a fractional derivative:

f[x_] := a*x + b;
FractionalD[1/2, f[t], t] /. t -> x
(*(4 a Sqrt[x])/(3 Sqrt[\[Pi]]) + (3 b + 2 a x)/(3 Sqrt[\[Pi]] Sqrt[x])*)

Computing a fractional integral:

FractionalD[-1/2, f[t], t] /. t -> x
(*(2 Sqrt[x] (3 b + 2 a x))/(3 Sqrt[\[Pi]])*)
POSTED BY: Mariusz Iwaniuk

An update to the DSolve and NDSolve functions for solving higher-order and multidimensional partial differential equations, analytically or numerically.

Examples of higher-order partial differential equations to solve:

pde1 = D[u[x, t], {x, 3}] - u[x, t]*D[u[x, t], x] + D[u[x, t], t] == 0
(*Korteweg-de Vries equation*)
pde2 = D[u[x, t], {x, 3}] == D[u[x, t], t]
(*Dym equation*)
pde3 = a^2*D[u[x, t], {x, 4}] + D[u[x, t], {t, 2}] == 0
(*Equation of transverse vibration of elastic rod*)
pde4 = D[u[x, y], {x, 4}] + 2*D[u[x, y], {x, 2}, {y, 2}] +  D[u[x, y], {y, 4}] == 0
(*Biharmonic equation*)

Examples of multidimensional partial differential equations to solve:

multipde1 = D[u[x, y, z, t], t] - D[u[x, y, z, t], {x, 2}] - D[u[x, y, z, t], {y, 2}] - D[u[x, y, z, t], {z, 2}] == 0
(*Multidimensional Heat equation*)
multipde2 = D[u[x, y, z, t], {t, 2}] - D[u[x, y, z, t], {x, 2}] - D[u[x, y, z, t], {y, 2}] - D[u[x, y, z, t], {z, 2}] == 0
(*Multidimensional Wave equation*)
multipde3 =  D[u[x, y, z, t], {x, 4}] + D[u[x, y, z, t], {y, 4}] + D[u[x, y, z, t], {z, 4}] + D[u[x, y, z, t], {t, 2}] == 0
(*Multidimensional transverse vibration of elastic rod equation*)
POSTED BY: Mariusz Iwaniuk

I would like to see a "Series" method option for DSolve.

Examples:

eq = {(x^2 + 1)*y''[x] - 4*x*y'[x] + 6*y[x] == 0};
DSolve[eq, y[x], x, Method -> "Series", Point -> (x = a), Order -> 3]
(*Out: {{y[x] -> y[a] + y'[a] (x - a) + (2*a*y'[a] - 3*y[a])/(a^2 + 1)*(x - a)^2 + O[x-a]^4}}*)


ibc = {y[0] == 1, y'[0] == 1};
DSolve[{eq, ibc}, y[x], x, Method -> "Series", Order -> 3]

(*Out: {{y[x] -> 1 + x - 3 x^2 - 1/3*x^3 + O[x^4]}}*)


pde = D[u[x, t], {x, 2}] == Exp[Sin[x]]*D[u[x, t], t];
DSolve[pde, u[x, t], {x, t}, Method -> "Series",Point -> {x = a, t = b}, Order -> 2]
(*Out:{{u[x, t] -> 1/2*((x - a)^2*Exp[Sin[a]] + 2*t - 2*b)*Derivative[0, 2][u][a, b] + Derivative[1, 2][u][a, b]*(x - a)*(t - b) + 
    1/2*Derivative[2, 2][u][a, b]*(t - b)^2 + 1/2*(2 x - 2 a)*Derivative[1, 0][u][a, b] + u[a, b]}} *)

ibc = {u[x, 0] == f[x], Derivative[0, 2][u][x, 0] == g[x]};
DSolve[{pde, ibc}, u[x, t], {x, t}, Method -> "Series", Order -> 2]
(*Out: {{u[x, t] -> f[0] + g[0] + f'[0]*x + g'[0]*x*t + 1/2*g[0]*x^2}}*)

ibc2 = {u[x, 0] == x, Derivative[0, 2][u][x, 0] == Sin[x]};
DSolve[{pde, ibc2}, u[x, t], {x, t}, Method -> "Series", Order -> 2]
(*Out: {{u[x, t] -> 1/2*Sin[u]*Exp[Sin[u]]*(-x + u)^2 + Sin[u]*t - t (-x + u)*Cos[u] + u}}*)
POSTED BY: Mariusz Iwaniuk

It would be nice to have a function that solves the Vehicle routing problem.

POSTED BY: Alexey Golyshev
Posted 8 years ago

I would like to see a method for the Feynman-Kac Theorem. It solves a very important class of parabolic PDEs.

Link to Wiki

POSTED BY: Edvin Beqari

An extension of PalindromeQ:

PalindromeQ[n,b]

Which would check if n is palindrome in its base b representation.

17 is not a palindrome in base 10, but it is in base 2: 10001
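This is a one-liner with IntegerDigits (palindromeQ is a hypothetical name for the extended form):

palindromeQ[n_Integer?NonNegative, b_Integer: 10] := With[{d = IntegerDigits[n, b]}, d === Reverse[d]]

palindromeQ[17, 2]
(* True *)

so the built-in PalindromeQ could presumably support a base argument the same way.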

POSTED BY: Sander Huisman

Binning data with associate data

A common 'problem' is that you want to bin data by (e.g.) the x coordinate, and then you want the associated y values with it. To do this, I often do:

x={24,19,49,5,27,100,18,28,77,38,82,22,2,13,12,32,69,72,52,90,16,9,63,64,10,31,51,14,80,70,21,30,71,79,37,65,84,47,33,81,40,94,68,58,11,15,97,88,1,99,74,78,91,93,89,26,45,98,95,67,4,92,29,43,85,39,73,23,8,62,83,57,35,41,17,34,75,25,66,53,44,36,50,60,3,46,86,42,20,56,6,87,55,76,54,48,7,96,59,61};
y={6,64,21,13,34,100,7,89,83,50,19,32,43,38,60,14,1,31,99,40,80,78,68,95,55,72,63,65,91,71,9,51,70,97,37,25,20,52,88,22,62,81,66,69,35,75,29,4,26,27,41,33,93,18,42,98,77,44,85,17,11,12,57,94,61,54,23,28,30,67,10,3,46,45,87,79,96,16,24,73,58,53,48,36,90,76,92,2,82,39,5,59,49,15,74,8,47,84,56,86};
ybinnedbyx=BinLists[{x,y}\[Transpose],{0,100,10},{-10000,10000,20000}][[All,1,All,2]]

i.e., I turn it into 2D data, and then I make a very big bin size in the y direction, so that all the data falls into one bin. Of course this works with numerical data, but it doesn't work if the data in the y direction are strings, lists, or other objects....

I see four possible solutions (the 3rd being the most neat):

1

A new binning function that returns lists of indices (BinIndices or BinPositions would be good candidate names), so that it can be used with Extract on other data. Still a little fiddly, because it (presumably) returns a list of lists of indices, and Extract does not handle that directly, so you probably have to use some combination of Map, Part, and a pure function.

2

A new option to BinLists, for which I have no good name yet but let's call it BinFunctions for now. By default, if I give some 2D data:

{{1,2},{2,4},{3,1.5},{4,8}....}

it will bin it first by Part[#,1]& of the expression and then by Part[#,2]&. That is, first by the first element, then by the second element... If we could supply our own BinFunctions, then we could do something like Part[#,1]& and 1&, such that all the y data goes into one bin.

3

Reduce the necessity to supply n binspecs when we have "vectors" of length n. So:

BinLists[{x,y}\[Transpose],{0,100,10}]

This would bin only by the first element of each vector, ignoring the rest of the vector... It also keeps backward compatibility, so that is very good!

4

A new option for BinLists called something like AssociatedData (with a combiner function?). It would first bin the data, and then combine the results with the associated data using the combiner function (List by default).
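In the spirit of solution 3, a sketch using GroupBy on the binned x coordinate (binListsBy is a hypothetical name; it assumes uniform bins and silently drops out-of-range points):

binListsBy[x_List, y_List, {min_, max_, dx_}] := Module[{groups},
  groups = GroupBy[Transpose[{x, y}], Floor[(#[[1]] - min)/dx] &, #[[All, 2]] &];
  Lookup[groups, Range[0, Ceiling[(max - min)/dx] - 1], {}]]

binListsBy[{1., 5., 15.}, {"a", "b", "c"}, {0, 20, 10}]
(* {{"a", "b"}, {"c"}} *)

Since the y values are never used in the binning itself, they can be strings, lists, or anything else.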

POSTED BY: Sander Huisman

I agree with you; it would be very useful. Recently I had a similar problem: I needed to bin a very big list by {X, Y} and needed information about Z in every bin. I solved this using associations ({X, Y} as keys, Z as values, and a Lookup of the bins from BinLists).

POSTED BY: Alexey Golyshev