Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions in the tag Wolfram Science

Inverse Z transform, dots everywhere
http://community.wolfram.com/groups/-/m/t/1307682
Hello All. I hope you are doing well.
I need to perform an inverse Z transform
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture.JPG&userId=922772
As you can see, I don't get a clean output. What are those dots there?
For example
6.-5n

What's that?

Julian Oviedo 2018-03-23T14:43:27Z

[✓] Count the total presence of this group in a sequence?
http://community.wolfram.com/groups/-/m/t/1306061
hello!
I have a group of numbers:
group={05, 60, **13, 58**, 47}
I want to count the total presence of this group in a sequence:
seq={78, **13**, 38, 65, 90, 25, 52, 77, 12, 33, **58**, 83, 20, 45, 70, 7, 32, 57}
Total[SequenceCount[seq, {#}] & /@ group]
out:
> 2
Now I have a list of 6 sub-sequences, and I want to count the total presence of the first group in each sub-sequence:
subseq={{20, 3, 76, 17, 90, 73, 14, 87, 70, 65, 48, 31, 62, 45, 28, 59, 42, 25}, {48, 83, 28, 55, 90, 35, 62, 7, 42, 3, 38, 73, 10, 45, 80, 17, 52, 87}, {38, 59, 80, 69, 90, 21, 10, 31, 52, 83, 14, 35, 24, 45, 66, 55, 76, 7}, {14, 85, 66, 19, 90, 71, 24, 5, 76, 59, 40, 21, 64, 45, 26, 69, 50, 31}, {40, 33, 26, 7, 90, 83, 64, 57, 50, 85, 78, 71, 52, 45, 38, 19, 12, 5}, {78, 13, 38, 65, 90, 25, 52, 77, 12, 33, 58, 83, 20, 45, 70, 7, 32, 57}}
***(in fact my list of sub-sequences is 3240 items)***
I tried with Part and Extract, but it does not work.
If I manually enter the index in Part or Extract, it works:
Total[SequenceCount[subseq[[6]], {#}] & /@ group]
Total[SequenceCount[Extract[subseq, 6], {#}] & /@ group]
out:
> 2
>
> 2
But if I feed the list of sub-sequence indices to #, it does not work:
Total[SequenceCount[subseq[[#]]&/@Range[6], {#}] & /@ group]
or
Total[SequenceCount[Extract[subseq, #]&/@Range[6], {#}] & /@ group]
out:
> 0
>
> 0
what I want to get is this:
{0, 0, 0, 1, 1, 2}
which counts the presence of the 5 numbers of `group` in each of the 6 sub-sequences of `subseq`.
Can anyone help me?
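A possible fix, sketched here as a suggestion rather than taken from the thread: in the failing attempts the inner function's # shadows the outer one, and SequenceCount is handed the whole list of sub-sequences at once. Mapping a per-sub-sequence count over `subseq` avoids both problems (`countGroup` is a helper name introduced for illustration):

```mathematica
(* Count how many elements of group occur in a single sub-sequence *)
countGroup[s_List] := Total[SequenceCount[s, {#}] & /@ group];

(* Apply it to every sub-sequence separately *)
countGroup /@ subseq
(* -> {0, 0, 0, 1, 1, 2} *)
```

The same form scales to the full list of 3240 sub-sequences, since the mapping is done per element.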
Thanks!

Mutatis Mutandis 2018-03-22T02:35:59Z

MMA optimized for Neon, VFP or ArmV7/8?
http://community.wolfram.com/groups/-/m/t/1307791
Hello,
Just out of interest: is Mathematica on the Raspberry Pi 2/3 optimized for the NEON, VFP, or ARMv7/8 instruction sets?
If not, would it be possible to create an optimized version for Pi 2/3 users?
Would its speed benefit from it at all?
Greets, Micha

Michael Steffen 2018-03-24T00:03:23Z

Glycolysis Similarity Among Humans, Chimps, and Salmon
http://community.wolfram.com/groups/-/m/t/1307919
Well, I find myself between jobs again, so I'll take another step into the wild biology frontier. First, a few sentences on what I've learned from working remotely for the past two years for two failed companies. I've seen a lot of one strategy that doesn't work, which seems to stem from the mythos of Steve Jobs and the iPhone. Namely, a genius designing a masterpiece in isolation and then taking the world by storm with it. It doesn't seem to work very well when the goal is to automate and scale a process. Instead, the vaguely defined masterpiece never manifests itself, the team wastes huge amounts of time on work that is thrown away, zero successful completions of the process occur (in these cases getting students into universities they will like and getting people the best deals on loans), and then the company runs out of money. And even if they had been able to decide exactly what the magical app should have been, it wouldn't have worked as soon as it was confronted with the messy details of reality. I would have been much happier if they were drawing their inspiration from the big restaurant chains or stores that started as a single shop, the big transport services that started by delivering on foot within a single city, or even the early Steve Jobs days where Apple was building computers in a garage.
Anyway, back to working with biology data, which is one of my sanctuaries of slow but steady progress (the other being working on more interesting character AI for games, because current games get boring too quickly). I've been getting familiar with more of the various NCBI databases and tinkering with the API. All of the UI, terminology, and acronyms are somewhat overwhelming at first. I find that having a toy genetic engineering project in mind helps me stay focused on figuring out specific computations I would actually want to run. So my toy project is to genetically engineer grass that doesn't have to be cut. It should automatically stop growing (but stay alive) at an agreeable height for a lawn. I chose that project because it's beyond what can be done today, but perhaps much simpler than common goals like curing diseases in humans. And it's something where I can conceive of personally doing physical experiments, because the safety and regulatory concerns are much less stringent when dealing with a tray of grass. And it's perhaps even less controversial than other genetically modified plants, because it's weakening the grass from an evolutionary standpoint. So you don't have to worry about it taking over natural fields. It turns out golf courses have been interested in creating so-called dwarf grasses via selective breeding for a while, but I think truly getting the desired behavior will require engineering.
Most genetic engineering projects you read about involve adding or disabling single genes. As you start to investigate progressing to designing small gene circuits, you realize that the metabolic pathway data is lagging behind the genomic data (which is already not as complete as you want). The most popular database for pathway data is [KEGG Pathway][1]. It consists of a few hundred hand-drawn pathway maps covering pathways that are somewhat consistent (called evolutionarily conserved) across many organisms, annotated with compounds and enzymes. Then, for a variety of species, it lists which genes produce the proteins corresponding to the various enzymes in the reactions. Of course, the completeness varies a lot between species; however, it's enough to begin imagining potential paths forward.
I've been studying general knowledge on plant and grass development, so using the pathways I can perhaps get some ideas for genes to try and disable in order to stop the growth. Plants use a much larger amount and variety of secondary metabolites than animals, so then perhaps I can find one whose concentration correlates with the height of the blade of grass. Many processes in biology are regulated by the concentration of a signaling molecule (like how your hand knows [where to grow a thumb instead of a pinky][2]). Perhaps the trickiest part will be to design an enzyme that is activated by a particular concentration of that signaling molecule, if I can't find one that already exists. But enzyme design is a hot topic, and as computational chemistry algorithms continue to advance (and become easier to use due to integration in the WL, more standardized formats, etc), eventually that will be possible too. Then the final step would be to add a gene or genes that just produce some RNA to interfere with the genes I want to disable ([a common technique][3]). Then my enzyme can [promote][4]/[activate][5] that gene when the enzyme is activated by the concentration of the signaling molecule. Whew. That's a long list of hypotheticals for such a simple sounding project, and we haven't even considered trying to control side effects. But I think this type of reasoning will become more common in the future.
However, those investigations are for future dates! Let's wrap this post up with a simple, interesting exercise using the KEGG database. Let's see how similar the DNA and protein amino acid sequences used for [glycolysis][6] (a fundamental process in all organisms) are among humans, chimps, and something really different like salmon. The way the KEGG API works is that for a given pathway we can request either the enzyme categories used in the various reactions (prefix "ec"), the formulas for the reactions the enzymes catalyze (prefix "rn"), or the genes that produce those enzymes (prefix varies per organism, "hsa" is human, "ptr" is chimp, "sasa" is salmon). KEGG has their own IDs for the genes, but then from those you can request the sequences or links back to other databases like the NCBI nucleotide and protein databases for more information. We'll start by grabbing the XML data (in a schema they call KGML) for the glycolysis genes for each species.
human = Import["http://rest.kegg.jp/get/hsa00010/kgml", "XML"][[2, 3]];
chimpanzee = Import["http://rest.kegg.jp/get/ptr00010/kgml", "XML"][[2, 3]];
salmon = Import["http://rest.kegg.jp/get/sasa00010/kgml", "XML"][[2, 3]];
Let's define a function to download and parse out the name and sequence data for a given gene. It would be nice if they offered this data in JSON format.
getGeneInfo[keggId_] := keggId // <|
"Names" ->
First@StringCases[
Import["http://rest.kegg.jp/get/" <> #, "Text"],
"NAME" ~~ Whitespace ~~ Shortest[names__] ~~ "\n" :>
StringSplit[names, ", "]],
"NTSeq" ->
StringJoin[
StringSplit[
Import["http://rest.kegg.jp/get/" <> # <> "/ntseq", "Text"],
"\n"][[2 ;;]]],
"AASeq" ->
StringJoin[
StringSplit[
Import["http://rest.kegg.jp/get/" <> # <> "/aaseq", "Text"],
"\n"][[2 ;;]]]
|> &
Now let's download the info for each of the genes mentioned in each of the pathways. Sometimes multiple genes are given for a single enzyme or reaction node in the pathway.
humanGeneInfo =
Select[human, #[[1]] == "entry" && #[[2, 3, 2]] == "gene" &][[All, 2,
2, 2]] // StringSplit // Flatten //
Module[{i = 0},
Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
ProgressIndicator[i, {1, Length@#}]]] &
chimpanzeeGeneInfo =
Select[chimpanzee, #[[1]] == "entry" && #[[2, 3, 2]] ==
"gene" &][[All, 2, 2, 2]] // StringSplit // Flatten //
Module[{i = 0},
Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
ProgressIndicator[i, {1, Length@#}]]] &
salmonGeneInfo =
Select[salmon, #[[1]] == "entry" && #[[2, 3, 2]] == "gene" &][[All,
2, 2, 2]] // StringSplit // Flatten //
Module[{i = 0},
Monitor[AssociationMap[(i++; getGeneInfo@#) &, #],
ProgressIndicator[i, {1, Length@#}]]] &
You'll see some error messages because a lot of the salmon genes are missing names. Now let's go through each human gene, find the chimpanzee gene with a shared name, and list the gene's name, the length of its nucleotide sequence, and the edit distances from the human to the chimpanzee nucleotide and amino acid sequences. Then we sort by the ratio of edit distance to sequence length, so the most similar genes come first and the most different ones come last.
Table[SelectFirst[chimpanzeeGeneInfo,
ContainsAny[ToLowerCase@humanGene[["Names"]],
ToLowerCase@#[["Names"]]] &] //
If[! MissingQ@#, {humanGene[["Names", 1]],
StringLength@humanGene[["NTSeq"]],
EditDistance[humanGene[["NTSeq"]], #[["NTSeq"]]],
EditDistance[humanGene[["AASeq"]], #[["AASeq"]]]},
Nothing] &, {humanGene, humanGeneInfo}] //
SortBy[#[[3]]/#[[2]] &]
{{ENO3,1305,0,0},{PGAM2,762,1,0},{PGK1,1254,2,0},{PDHA2,1167,2,1},{LDHB,1005,2,1},{HK1,2754,7,3},{ADH5,1125,3,2},{ADPGK,1491,4,1},{ALDOB,1095,3,1},{ACSS2,2106,8,4},{GAPDH,1008,4,0},{ALDH1A3,1539,7,1},{ALDOC,1095,5,3},{ENO2,1305,6,0},{DLAT,1944,9,4},{LDHA,999,5,1},{AKR1A1,978,5,3},{ALDH2,1554,8,3},{PGAM1,765,4,0},{HK2,2754,15,3},{PDHB,1080,6,0},{GPI,1677,10,5},{ADH4,1143,7,2},{ENO1,1305,8,0},{PCK2,1923,12,6},{ALDOA,1095,7,2},{GALM,1029,7,3},{FBP1,1017,7,3},{HKDC1,2754,19,4},{ENO4,1878,13,6},{LDHAL6B,1146,8,5},{PGM1,1689,12,6},{ACSS1,2070,15,5},{PGM2,1839,14,4},{PDHA1,1173,9,1},{BPGM,780,6,2},{ADH1A,1128,9,5},{ADH1B,1128,9,2},{GAPDHS,1227,10,5},{MINPP1,1464,12,5},{ALDH1B1,1554,13,5},{G6PC2,1068,9,5},{PGK2,1254,11,5},{FBP2,1020,10,1},{G6PC,1074,12,4},{PFKL,2343,29,0},{GCK,1398,28,13},{ADH6,1107,23,9},{PKLR,1725,36,13},{PGAM4,765,17,11},{PKM,1596,57,21},{ALDH9A1,1557,82,28},{ALDH3A2,1458,79,31},{DLD,1530,92,29},{HK3,2772,190,63},{PFKP,2355,202,68},{PFKM,2343,225,74},{ADH1C,1128,145,48},{TPI1,861,114,37},{ALDH3B1,1407,220,74},{ALDH3A1,1362,224,74},{PCK1,1869,409,135},{ALDH7A1,1620,359,121},{ALDH3B2,1158,267,97},{LDHAL6A,999,238,79},{LDHC,999,348,116},{ADH7,1161,414,170},{G6PC3,1041,454,173}}
We can see most of them are very similar. [Enolase 3][7] is completely identical between humans and chimpanzees. It has a few isoenzymes that are expressed in different tissue types in mammals. This particular one is most common in skeletal muscle. [G6PC3][8] has an edit distance that is almost half the length of the nucleotide sequence. This gene encodes the catalytic subunit of the G6Pase enzyme. In humans, mutations in this gene can cause babies to have low white blood cell counts. Let's do the same computation for salmon.
Table[SelectFirst[salmonGeneInfo,
ContainsAny[ToLowerCase@humanGene[["Names"]],
ToLowerCase@#[["Names"]]] &] //
If[! MissingQ@#, {humanGene[["Names", 1]],
StringLength@humanGene[["NTSeq"]],
EditDistance[humanGene[["NTSeq"]], #[["NTSeq"]]],
EditDistance[humanGene[["AASeq"]], #[["AASeq"]]]},
Nothing] &, {humanGene, humanGeneInfo}] //
SortBy[#[[3]]/#[[2]] &]
{{GAPDH,1008,213,49},{PGAM1,765,169,35},{ENO3,1305,292,64},{PGK1,1254,283,61},{PGAM4,765,175,44},{ALDH2,1554,356,107},{ALDOA,1095,252,66},{ALDOC,1095,257,75},{GCK,1398,348,99},{PCK1,1869,477,155},{ALDH7A1,1620,415,99},{ADH5,1125,305,71},{PGM1,1689,461,124},{ALDOB,1095,301,98},{FBP2,1020,284,86},{ALDH9A1,1557,468,145},{DLD,1530,461,91},{PCK2,1923,591,214},{PDHB,1080,332,92},{LDHB,1005,310,74},{LDHA,999,316,97},{ACSS1,2070,662,218},{ADPGK,1491,493,185},{ADH1C,1128,376,142},{PGM2,1839,622,182},{DLAT,1944,659,197},{GALM,1029,356,136},{ALDH3A2,1458,549,194},{G6PC3,1041,437,191},{G6PC2,1068,454,156},{ENO4,1878,819,363}}
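As a side note, the similarity ratios discussed below can be computed directly from these tables. This is a sketch assuming the two Table results above were saved as `chimpTable` and `salmonTable` (names introduced here for illustration; each row is {name, length, ntEditDistance, aaEditDistance}):

```mathematica
(* The "similarity ratio" discussed is nucleotide edit distance over sequence length *)
ratios[table_] := N[#[[3]]/#[[2]]] & /@ table;
MinMax[ratios[chimpTable]]   (* roughly {0., 0.44} *)
MinMax[ratios[salmonTable]]  (* roughly {0.21, 0.44} *)
```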
Over half of them are missing because there are a lot of salmon genes that are unnamed or have no shared names with the human genes. The similarity ratio ranges from 0% to 44% in chimpanzees and 21% to 44% in salmon. This is expected, of course, because salmon are less similar to us than chimpanzees. Now let's ignore the annotated names and just find the salmon gene ID that is the most similar to each human gene based on finding all pairwise edit distances in the set.
Table[Join[{humanGene[["Names", 1]],
StringLength@humanGene[["NTSeq"]]},
First@SortBy[
KeyValueMap[{EditDistance[
humanGene[["NTSeq"]], #2[["NTSeq"]]], #} &, salmonGeneInfo],
First]], {humanGene, humanGeneInfo}] // SortBy[#[[3]]/#[[2]] &]
{{ENO1,1305,262,sasa:100194865},{GAPDH,1008,210,sasa:106575942},{ALDOA,1095,233,sasa:106583908},{PGAM2,762,165,sasa:106589759},{ENO3,1305,284,sasa:100196671},{ENO2,1305,285,sasa:106576545},{PGAM1,765,169,sasa:100194748},{PKM,1596,356,sasa:100195460},{PGK1,1254,283,sasa:106568020},{HK2,2754,623,sasa:106585103},{ALDOC,1095,250,sasa:106612788},{PGAM4,765,175,sasa:100194748},{ALDH2,1554,356,sasa:106578507},{HK1,2754,634,sasa:106587516},{GPI,1677,397,sasa:100196524},{GCK,1398,344,sasa:106585167},{PGK2,1254,316,sasa:100194765},{PDHB,1080,274,sasa:106566255},{FBP1,1017,259,sasa:106561989},{PCK1,1869,477,sasa:100195420},{ALDH7A1,1620,415,sasa:100194754},{PFKP,2355,604,sasa:100380410},{PFKM,2343,613,sasa:106566997},{HKDC1,2754,739,sasa:106612435},{PGM1,1689,457,sasa:106568718},{FBP2,1020,276,sasa:106593443},{ADH5,1125,305,sasa:100195992},{ALDH1A3,1539,421,sasa:106562593},{ALDH1B1,1554,427,sasa:106578507},{ALDOB,1095,301,sasa:100136522},{PDHA1,1173,335,sasa:106590467},{PFKL,2343,682,sasa:106582566},{ACSS2,2106,623,sasa:106566799},{ALDH9A1,1557,468,sasa:100195073},{DLD,1530,461,sasa:106561021},{LDHA,999,302,sasa:106573043},{PCK2,1923,582,sasa:100195420},{LDHB,1005,308,sasa:106609123},{G6PC,1074,333,sasa:106579337},{AKR1A1,978,306,sasa:106584055},{BPGM,780,247,sasa:100196266},{PDHA2,1167,370,sasa:106563586},{ACSS1,2070,662,sasa:106576722},{ADH1B,1128,361,sasa:106611602},{TPI1,861,280,sasa:106569985},{ADH1C,1128,369,sasa:106611602},{PKLR,1725,565,sasa:100195460},{ADPGK,1491,493,sasa:106560768},{ADH1A,1128,374,sasa:106611602},{PGM2,1839,622,sasa:106595213},{DLAT,1944,659,sasa:106604067},{GALM,1029,356,sasa:106608200},{LDHAL6A,999,350,sasa:106573043},{LDHC,999,351,sasa:106573043},{ADH4,1143,404,sasa:106611602},{ADH7,1161,414,sasa:106611602},{ALDH3B1,1407,506,sasa:106606778},{HK3,2772,1003,sasa:106585103},{ALDH3B2,1158,427,sasa:106606760},{ADH6,1107,415,sasa:100195992},{ALDH3A2,1458,549,sasa:100286782},{GAPDHS,1227,463,sasa:106577739},{LDHAL6B,1146,433,sasa:106573069},{G6PC2,1068,410,sasa:106579337},{ALDH3A1,1362,531,sasa:100286782},{G6PC3,1041,437,sasa:106589637},{ENO4,1878,819,sasa:106606505},{MINPP1,1464,642,sasa:106589393}}
I looked up several of these gene IDs in KEGG and then followed the links to the NCBI gene database. Many of them have names with "-like" appended to say it is like another gene or enzyme. Also, for example, the salmon doesn't have genes listed for enolase 1 or 2 (it does have 3 and 4), but it has one called alpha-enolase that is more similar to the human enolase 1 than any other pair of human to salmon genes in the set. It's also interesting that the salmon PGAM1 gene is slightly more similar to the human PGAM2 gene than to the human PGAM1 gene. Matching the full set of human genes to salmon had hardly any effect on the range of similarity ratios. It changed from 21%-44% to 20%-44%. So the maximum dissimilarity in salmon remained at 44% even when considering unnamed salmon genes.
And we'll end it there for now. I've attached a pretty messy notebook. Normally, as I make multiple passes over the code, I move and organize cells toward the top, but in this case the notebook is pretty raw. It also has some scratch work where I was parsing chemical reactions from other KGML files and looking at the number of genes involved in each reaction between the species. Until next time!
[1]: http://www.genome.jp/kegg/pathway.html
[2]: https://en.wikipedia.org/wiki/Sonic_hedgehog_%28protein%29
[3]: https://en.wikipedia.org/wiki/RNA_interference
[4]: https://en.wikipedia.org/wiki/Promoter_%28genetics%29
[5]: https://en.wikipedia.org/wiki/Activator_%28genetics%29
[6]: https://en.wikipedia.org/wiki/Glycolysis
[7]: https://www.ncbi.nlm.nih.gov/gene?term=%28eno3%5Bgene%5D%29%20AND%20%28Homo%20sapiens%5Borgn%5D%29%20AND%20alive%5Bprop%5D%20NOT%20newentry%5Bgene%5D&sort=weight
[8]: https://www.ncbi.nlm.nih.gov/gene?term=%28g6pc3%5Bgene%5D%29%20AND%20%28Homo%20sapiens%5Borgn%5D%29%20AND%20alive%5Bprop%5D%20NOT%20newentry%5Bgene%5D&sort=weightMichael Hale2018-03-23T19:07:50ZFinding Out What Is New (and Existing) For You
http://community.wolfram.com/groups/-/m/t/1307743
I was inspired by Wolfram's [chart of statistical distribution functions][1] and wanted to know what other sets of functions were out there and how long they had been around. With the `WolframLanguageData` function and the `"FunctionalityAreas"` property I was able to get a list of all the official areas.
functionAreas =
AlphabeticSort@
DeleteDuplicates@
Flatten@EntityValue[WolframLanguageData[], "FunctionalityAreas"];
Length@functionAreas
> 181
functionAreas // Short
> {AlgebraicSymbols,AlignmentSymbols,AngleSymbols,AnnotationSymbols,ArraySymbols,<<172>>,VectorTeeOperatorSymbols,ViewerSymbols,WaveletSymbols,WolframAlphaSymbols}
Picking a functionality area.
functionality = "MachineLearningSymbols";
functionalityClass =
EntityClass[
"WolframLanguageSymbol", {"FunctionalityAreas" -> functionality}]
![enter image description here][2]
len = functionalityClass["EntityCount"]
> 48
To place the symbols on a circular chart, the labels need to be rotated. To rotate them by quadrant, the number of symbols is subdivided.
quads = Round@Subdivide[len, 4]
> {0, 12, 24, 36, 48}
A helper function takes the index of a symbol and returns the angle it should be rotated by.
angle[n_] :=
Piecewise[{
{(-(Pi/2))*((n - 1)/quads[[2]]),
n <= quads[[2]]},
{Pi/2 - (Pi/2)*((n - quads[[2]])/(quads[[3]] - quads[[2]])),
n <= quads[[3]]},
{2*Pi - (Pi/2)*((n - quads[[3]])/(quads[[4]] - quads[[3]])),
n <= quads[[4]]},
{Pi/2 - (Pi/2)*((n - quads[[4]])/(quads[[5]] - quads[[4]])),
n <= quads[[5]]}
}]
The symbols and the version they were introduced can be collected from the entity class.
verIntro =
KeySort@GroupBy[Last -> First]@
functionalityClass[{"Name", "VersionIntroduced"}];
verIntro // Short
> <|10->{AmbiguityFunction,AmbiguityList,ClassifierFunction,ClassifierInformation,<<6>>,PredictorInformation,PredictorMeasurements,UtilityFunction,ValidationSet},<<4>>|>
Create a version introduced legend for the chart.
ledgend =
Labeled[
SwatchLegend[
ColorData[{"Indexed", "VibrantColor"}] /@ Range[Length@verIntro],
Reverse@Keys@verIntro,
LegendLabel -> "Version",
LegendLayout -> (Grid[Join @@@ Partition[Reverse@#, UpTo@2],
Alignment -> Decimal] &),
LabelStyle -> {FontColor -> Automatic, FontSize -> Medium}],
functionality,
Top,
Frame -> True,
FrameStyle -> LightGray,
RoundingRadius -> 5
]
![enter image description here][3]
Then plot with `SectorChart`, with the latest version as the innermost ring.
SectorChart[
{
MapIndexed[
Labeled[{1, 1}, Rotate[#, angle[First@#2], {1/2, 1/2}]] &,
Join @@ Values@verIntro],
Sequence @@ (Join @@ {ConstantArray[{1, .1}, #1],
ConstantArray[Style[{1, .1}, Transparent], len - #1]} & /@
Rest@Reverse@Accumulate@Values[Length /@ verIntro])
},
PerformanceGoal -> "Speed",
SectorOrigin -> {Automatic, 1},
SectorSpacing -> 0,
ChartStyle -> {ColorData[{"Indexed", "VibrantColor"}] /@
Range[Length@verIntro], None},
ChartBaseStyle -> {EdgeForm[LightGray]},
Epilog -> {
Inset[
ledgend,
{0, 0}
]},
BaseStyle -> {FontColor -> White},
ImageSize -> 1100]
![enter image description here][4]
For some unknown reason the legend label does not survive, but otherwise the chart is as expected.
All the symbols of the functionality area can be seen, along with the version in which they were introduced. Other areas can be viewed by simply changing the `functionality` variable and re-evaluating.
`"AudioSymbols"`
![enter image description here][5]
Have fun discovering.
[1]: http://www.wolfram.com/language/11/extended-probability-and-statistics/expanded-distribution-coverage.html?product=mathematica
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4UaYI.png&userId=294986
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=beGUX.png&userId=294986
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=D2V2B.png&userId=294986
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SQ1DV.png&userId=294986

Edmund Robinson 2018-03-23T21:09:49Z

Converting List of Dates for Use in Time Series
http://community.wolfram.com/groups/-/m/t/1307492
I've been scouring the forums and I've found many ways to transform dates. However, I'm stuck trying to figure out how to import a set of dates in a format that I can use in a time series. I'm not sure why Mathematica won't automatically recognize the format, since that is where the data originally came from: an exported DataDrop TSV file.
I'm able to import the dates and remove the last part of each string, but when I try DateObject it does not work. Yet entering a single value as noted below works.
hwd = Import["hw.tsv", {"Data", All, 1}];
rep = StringReplace[Rest[hwd], " GMT +0" -> ""] // InputForm;
DateObject[rep]
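One likely culprit, offered as a hedged guess rather than a confirmed diagnosis: `InputForm` is a display wrapper, not a string operation, so `DateObject` receives an `InputForm` expression instead of a list of strings. Dropping the wrapper and mapping `DateObject` over the cleaned strings is one approach (assuming the remaining strings are in a format `DateObject` recognizes):

```mathematica
hwd = Import["hw.tsv", {"Data", All, 1}];
(* Strip the trailing time-zone text but keep plain strings, without InputForm *)
rep = StringReplace[Rest[hwd], " GMT +0" -> ""];
(* DateObject expects one date specification at a time, so map it over the list *)
dates = DateObject /@ rep;
```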
Ultimately I want to be able to import the file, build a TimeSeries from the dates and the temperatures (without the degree units), and plot it as a DateListPlot.

Michael Madsen 2018-03-23T17:55:36Z

Saturn Flyby: Detailed Simulation
http://community.wolfram.com/groups/-/m/t/1306910
![enter image description here][1]
Creating a flyby simulation of a planetary scene in 3D involves multiple steps. This post walks you through the steps used to create the [final animation][2] (click this link or image below to play). I strongly recommend you watch the video in **full screen mode** with low lights so that all of the detail is visible. Some of it is subtle.
[![enter image description here][3]][4]
The Vimeo video hosting service has a habit of auto-advancing to the next video, but what it chooses as the next one is often strange in my opinion, so be prepared to click the back button to replay it.
**Creating the Planet**
Saturn can be treated as an oblate spheroid which we can model in the Wolfram Language using [ParametricPlot3D][5] and textures. [EntityValue][6] can give us a few pointers to get things scaled properly. First, we need to know the equatorial radius of the planet and its oblateness (e.g. how flattened it is at the poles compared to the equator).
saturnradius =
QuantityMagnitude[Entity["Planet", "Saturn"]["EquatorialRadius"],
"Kilometers"];
saturnoblateness = Entity["Planet", "Saturn"]["Oblateness"];
texture =
ImageReflect[EntityValue[Entity["Planet", "Saturn"],
"CylindricalEquidistantTexture"], Bottom];
With the data above, we can construct a ParametricPlot3D of an oblate spheroid scaled to the dimensions of Saturn. We use the curated data from above to perform the scaling.
planet = ParametricPlot3D[{saturnradius Cos[t] Sin[p],
saturnradius Sin[t] Sin[
p], (1 - saturnoblateness) saturnradius Cos[p]}, {t, 0,
2 Pi}, {p, 0, \[Pi]}, Mesh -> None, PlotStyle -> Texture[texture],
Lighting -> "Neutral", Boxed -> False, Axes -> False,
PlotPoints -> 100]
![enter image description here][7]
**Creating the Rings**
The rings of Saturn lie along a plane and can be modeled as an annulus with radial color and opacity variation. We make use of a texture obtained from https://www.classe.cornell.edu/~seb/celestia/hutchison/saturn-rings.png and stored as a [CloudObject][8] in the Wolfram Cloud and use ParametricPlot3D to apply this color and opacity texture to the annulus.
ringalpha = Import[
CloudObject[
"https://www.wolframcloud.com/objects/4e39f856-1c09-44d1-b2a9-\
00ffd480b6dd"]];
ringinnerradius = 74510;
ringouterradius = 140390;
rings = ParametricPlot3D[{r Cos[t], r Sin[t], 0}, {r, ringinnerradius,
ringouterradius}, {t, 0, 2 Pi}, Mesh -> None,
PlotStyle -> Texture[ringalpha], PlotPoints -> 100]
![enter image description here][9]
**Creating a Star Backdrop**
To give some additional subtle detail and a greater sense of motion, and to provide context for the opacity variations in the rings, we add a backdrop of stars. We use EntityValue to obtain position and brightness data for stars visible to the naked eye (nearly 9,000 stars). This takes a couple of minutes depending on your network connection.
In[9]:= stardata =
EntityValue[
EntityClass["Star", "NakedEyeStar"], {"RightAscension", "Declination",
"ApparentMagnitude"}];
In[10]:= stardata // Length
Out[10]= 8910
To make use of the previous data in a graphical setting, we need to convert the [Quantity][10] objects to numbers in radians and also rescale the apparent-magnitude brightness values to [GrayLevel][11] values between 0 and 1. We round the apparent-magnitude values because we want to optimize the rendering with multi-point primitives later. Converting the data into the necessary format takes about a minute.
In[11]:= triples = With[{magrange = MinMax[stardata[[All, 3]]]},
{-QuantityMagnitude[#[[1]], "Radians"],
QuantityMagnitude[#[[2]], "Radians"],
Rescale[Round[#[[3]]], magrange, {1, .1}]} & /@ stardata];
Next, we group the stars based on their rounded values.
In[12]:= gb = GatherBy[triples, #[[3]] &];
We construct the star background primitives by converting the right ascension and declination values into Cartesian spherical coordinates and place them far enough outside of the Saturn system, 8 ring radii, that they can serve as a spherical backdrop assuming our camera stays inside this distance. Each magnitude value gets a specific GrayLevel and set of points with a single [Point][12] head.
In[13]:= stars = With[{r = 8 ringouterradius},
{GrayLevel[#[[1, 3]]],
Point[{-r Cos[#[[1]]] Sin[#[[2]] + Pi/2],
r Sin[#[[1]]] Sin[#[[2]] + Pi/2], -r Cos[#[[2]] +
Pi/2]} & /@ #]}] & /@ gb;
We can then assemble the star backdrop and assign a specific [PointSize][13] to all stars, using GrayLevel, not size, to represent brightness variations.
In[14]:= starscene = Graphics3D[{PointSize[.004], stars}];
**Defining the Flight Path**
The flight path is a simple straight line. It starts "in front" of Saturn, 4 ring radii out, and always keeps the camera pointed at the planet. The position of the camera changes with time. We construct the path using [Interpolation][14], one interpolation for each Cartesian coordinate. As time progresses, the y-coordinate extends out to the side of Saturn so we don't hit it. We also modify the z-coordinate to start above the ring plane and drop below it at the end.
xfun = Interpolation[{4 ringouterradius, 3 ringouterradius, 2 ringouterradius,
ringouterradius, 0, -ringouterradius}];
yfun = Interpolation[{0, ringouterradius, 2 ringouterradius,
3 ringouterradius, 4 ringouterradius, 4 ringouterradius}];
zfun = Interpolation[{4 saturnradius, 3 saturnradius, 2 saturnradius,
1 saturnradius, 0, -1 saturnradius}];
**Assembling the Scene and Generating Frames**
We can render an initial scene to get a sense of how it will look. We specify a point light source to look as if the system is being illuminated by the Sun from the "front".
gr = Show[{planet, Graphics3D[{Lighting -> {{"Ambient", GrayLevel[.33]}}, rings[[1]]}], starscene}, Background -> Black,
ImageSize -> .4 {1920, 1080}, SphericalRegion -> True, ViewAngle -> Pi/10,
ViewVector -> {{4 ringouterradius, 0, 1 saturnradius}, {0, 0, 0}},
PlotRange -> All,
Lighting -> {{"Ambient", GrayLevel[0]}, {"Point",
White, {3 ringouterradius, 0, 3 saturnradius}}}]
![enter image description here][15]
The first step in animating the scene is to generate a list of frames. The elements are all static, but the camera position changes in time using [ViewVector][16] and making use of the interpolating functions created earlier. The time step is small so that we can obtain enough frames (600) to make the animation play back smoothly. The [ImageSize][17] is set to standard HD resolution.
In[19]:= frames = Table[
Show[gr, ViewVector -> {{xfun[t], yfun[t], zfun[t]}, {0, 0, 0}},
ImageSize -> {1920, 1080}], {t, 1, 6 - 1/120, 1/120}];
In[20]:= frames // Length
Out[20]= 600
The initial frame can be seen, scaled down, using the following. Stars are more easily seen at full resolution.
In[21]:= Show[frames[[1]], ImageSize -> .4 {1920, 1080}]
![enter image description here][18]
We need to export the frames to a directory for later assembly. The first step is to set the working directory to the same directory as the notebook.
SetDirectory[NotebookDirectory[]];
Finally, we export the frames as PNG files. File names are of the form FrameXXX.png. Rasterizing and exporting each frame takes a while due to the polygon count and opacity in the scene, so you will need to be patient. Using [Export][19] on individual frames, as opposed to generating a Table of frames to be exported in one pass, has at least one major advantage: you can stop at any point and see how far you have progressed. You can monitor the output directory to track progress, and you can abort the process and continue where you left off later.
Do[
Export["Frame" <>
ToString[PaddedForm[i - 1, 3, NumberPadding -> "0",
NumberSigns -> {"", ""}]] <> ".png", frames[[i]], "PNG"];,
{i, 1, 600, 1}
]
Once you have the directory of images, you can combine these into whatever video format you prefer. You can even re-import all the frames and export them from the Wolfram Language. There is no single standard modern video format, so you are left with a number of choices. I tend to prefer MPEG-4 with an H.264 codec for the best quality and compression. Some may prefer a QuickTime animation; the choice is up to you. There are multiple tools available for combining such image sequences, ranging from command-line tools like FFmpeg to packages like Blender. After combining the frames, you can upload the video to your favorite video sharing service such as Vimeo or YouTube. I prefer Vimeo, since the compression algorithm they apply seems better for quality. The final animation was already linked in the opening statement, but [here it is again][20].
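As one concrete version of the re-import route mentioned above (a sketch; available formats and codecs vary by platform and version, and the file name is illustrative), the numbered frames can be read back in order and written out as a single movie file:

```mathematica
(* Collect the frame files in order; the zero-padded names sort correctly *)
frameFiles = Sort[FileNames["Frame*.png"]];

(* Re-import the images and export them as a QuickTime movie at 60 fps *)
Export["SaturnFlyby.mov", Import /@ frameFiles, "FrameRate" -> 60];
```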
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7560SaturnHeader.png&userId=25355
[2]: https://vimeo.com/260948024
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Vimeo.png&userId=25355
[4]: http://reference.wolfram.com/language/ref/ParametricPlot3D.html
[5]: http://reference.wolfram.com/language/ref/ParametricPlot3D.html
[6]: http://reference.wolfram.com/language/ref/EntityValue.html
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5996Planet.png&userId=25355
[8]: http://reference.wolfram.com/language/ref/CloudObject.html
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7558Rings.png&userId=25355
[10]: http://reference.wolfram.com/language/ref/Quantity.html
[11]: http://reference.wolfram.com/language/ref/GrayLevel.html
[12]: http://reference.wolfram.com/language/ref/Point.html
[13]: http://reference.wolfram.com/language/ref/PointSize.html
[14]: http://reference.wolfram.com/language/ref/Interpolation.html
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5763init.png&userId=25355
[16]: http://reference.wolfram.com/language/ref/ViewVector.html
[17]: http://reference.wolfram.com/language/ref/ImageSize.html
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7032Frame1.png&userId=25355
[19]: http://reference.wolfram.com/language/ref/Export.html
[20]: https://vimeo.com/260948024Jeff Bryant2018-03-22T16:09:27ZProperly export a video?
http://community.wolfram.com/groups/-/m/t/1307037
Hi all,
I am interested in capturing the dynamics of a 3D vector like the ones presented here: https://journals.aps.org/prb/supplemental/10.1103/PhysRevB.91.064423
The video in that reference was generated using Mathematica on Mac OS. I am using the same code for plotting my graphs. It works perfectly inside Mathematica, but when I export it as a .mov/.avi file it behaves weirdly (my video is attached). I am plotting the dynamics of a vector in 3D space. I found two issues:
Problems: 1. It rotates very quickly. 2. It switches from one position to another and then comes back to the original place. Neither problem is present inside Mathematica; only the exported video, in any format, behaves incorrectly. I am using version 11.2 of the software on Windows. In summary, my question is: why am I unable to record exactly the video that is displayed in Mathematica itself?
I'll greatly appreciate any reply.
Best,Abir Shadman2018-03-22T17:58:36ZAvoid the following error while using SemanticImport?
http://community.wolfram.com/groups/-/m/t/1307280
Hi,
Upon running below command (from SemanticImport documentation page)
sales = SemanticImport["ExampleData/RetailSales.tsv"]
I got an error saying "The operation CreateFile is not allowed while running in sandbox mode".
Has anyone encountered a similar error and found a solution?
Many thanks.xinkai wu2018-03-23T03:47:38ZImport text with SemanticImport?
http://community.wolfram.com/groups/-/m/t/1300467
Hello All.
I have been trying to run the command SemanticImport, but it won't work. I always get an error and no output. A simple Import of the same data works. I have no idea what else to do. Has anyone experienced the same situation or found a solution? Thank you
In[16]:= data1 = Import["ExampleData/50states.txt", "Data"];
First[data1]
Out[17]= "Alabama"
In[18]:= data2 = SemanticImport["ExampleData/50states.txt"];
First[data2]
During evaluation of In[18]:= SemanticImport::unexpinvaliderr: Unexpected invalid input was detected. Processing will not continue
During evaluation of In[18]:= First::normal: Nonatomic expression expected at position 1 in First[$Failed].
Out[19]= First[$Failed]
In[20]:= $Version
Out[20]= "11.2.0 for Microsoft Windows (64-bit) (September 11, 2017)"Luciano Pinheiro2018-03-12T03:59:27ZConway's structures
http://community.wolfram.com/groups/-/m/t/1301097
Recently I noticed here a thread about gliders:
http://community.wolfram.com/groups/-/m/t/1120326
I must say that I have virtually no idea about cellular automatons ("automata"?), but it reminded me of a book I read quite some time ago (1979?).
**Manfred Eigen, Ruthild Winkler: "Das Spiel, Naturgesetze steuern den Zufall" (ISBN 3-492-02331-2)**
Manfred Eigen is a 1967 Nobel Prize winner in Chemistry (for his work on very fast chemical reactions).
In this book several biochemical models are mentioned which, as it seems, can be well described by cellular automata.
It is mentioned that a lot of patterns with remarkable behaviour are described by Martin Gardner in "Mathematical Games", Scientific American, Oct. 1970 and Feb. 1971 (citations given in the aforementioned book).
As I said, I am a newbie with cellular automata, so I decided to write a procedure (in Mma 7) that can display a "gun", that is, a device which periodically emits a glider, and an "eater", which destroys the arriving glider. See the notebook attached.
First an oscillator and a glider are constructed and displayed, then the gun and eater, and finally both together.
I haven't looked at the behaviour of the system when the patterns are shifted, but that could easily be done because I have provided code which allows for translations.
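For anyone wanting to experiment along the same lines without the attached notebook, the Game of Life can also be run directly with the built-in CellularAutomaton function. This is a minimal sketch of a single glider, not the gun/eater construction; the rule specification below is the standard outer-totalistic encoding of Life:

```mathematica
(* Conway's Game of Life as a 2D outer-totalistic cellular automaton *)
life = {224, {2, {{2, 2, 2}, {2, 1, 2}, {2, 2, 2}}}, {1, 1}};
glider = {{0, 1, 0}, {0, 0, 1}, {1, 1, 1}};
init = ArrayPad[glider, 10];       (* embed the glider in a larger empty grid *)
evolution = CellularAutomaton[life, init, 20];
ArrayPlot /@ evolution[[1 ;; -1 ;; 4]]   (* display every fourth generation *)
```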
Have a look at it, any comments are welcome.Hans Dolhaine2018-03-13T00:19:34ZHow to create taylors diagram
http://community.wolfram.com/groups/-/m/t/1307657
Sir,
I want to create a Taylor diagram in Mathematica. I have calculated the correlation coefficient, root mean square error, standard deviation and the centered RMS difference for the data, but I am not able to draw the Taylor diagram. Can anybody help me?Tanmoyee Bhattacharya2018-03-23T13:48:06ZHow to fit two data with two functions, and then get confidence intervals？
http://community.wolfram.com/groups/-/m/t/1307629
The confidence interval is obtained using a command such as NonlinearRegress. In my problem, I need to fit data1 with the function f(k1, p1, k3, p3) and data2 with the function f(k2, p2, k3, p3), and finally get (k1, p1, k2, p2, k3, p3) with their confidence intervals... but with NonlinearRegress I can only fit one dataset at a time, for example NonlinearRegress[data1, f(k1, p1, k3, p3)] or NonlinearRegress[data2, f(k2, p2, k3, p3)]... and then k3 and p3 are not unique.
So how can I fit two datasets with two functions that share some of the same parameters, and then get confidence intervals?
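One common workaround, sketched below with a made-up model and synthetic data (all names here are placeholders, not the poster's actual f or datasets): tag each point with a dataset index, pool the data, and fit a single model with NonlinearModelFit, so the shared parameters k3, p3 are estimated once and the confidence intervals come out jointly:

```mathematica
(* illustrative model; replace with the real f *)
f[k_, p_, k3_, p3_, x_] := k Exp[-k3 x] + p x + p3;
data1 = Table[{x, f[2., 0.5, 0.3, 1., x] + RandomReal[{-0.05, 0.05}]}, {x, 0., 5., 0.25}];
data2 = Table[{x, f[1., 0.8, 0.3, 1., x] + RandomReal[{-0.05, 0.05}]}, {x, 0., 5., 0.25}];
(* pool the data as rows {g, x, y}, where g = 1 or 2 marks the dataset *)
pooled = Join[Prepend[#, 1] & /@ data1, Prepend[#, 2] & /@ data2];
(* one model that switches parameter sets on the dataset index g *)
model = Boole[g == 1] f[k1, p1, k3, p3, x] + Boole[g == 2] f[k2, p2, k3, p3, x];
nlm = NonlinearModelFit[pooled, model,
   {{k1, 1}, {p1, 1}, {k2, 1}, {p2, 1}, {k3, 0.5}, {p3, 0.5}}, {g, x}];
nlm["ParameterConfidenceIntervals"]
```

Because all six parameters are estimated in one fit, k3 and p3 are unique and their intervals account for both datasets.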
Thanks!!Nan Yang2018-03-23T13:31:40ZFind the contour of a pair of multi-variable functions?
http://community.wolfram.com/groups/-/m/t/1306794
I have two functions `f[x,y,z,a,b]` and `g[x,y,z,a,b]`, where the variables are restricted to a set of values: `x ∈ [x0, x1]`, `y ∈ [y0, y1]`, `z ∈ [z0, z1]`, `a ∈ [a0, a1]`, `b ∈ [b0, b1]`. I can find the minimum and maximum values within the restrictions using Minimize and Maximize.
These two functions are the X and Y coordinates of a point, I am using them to show some paths using `ParametricPlot` and to draw graphics using `Graphics`. The thing is, I need to know the boundaries which define the possible locations of the point.
How could I find the contour of the area for which the `(X,Y)` plot of these two functions is defined?
I have been trying to use `ParametricRegion`, but it takes an awful lot of time to give the results with 2/5 variables. It is kind of strange that using `ParametricPlot` with 2 variables is much faster than `ParametricRegion` with the same 2 variables.JOSE OLIVER2018-03-22T23:50:16ZPlot parabolic equations using modules?
http://community.wolfram.com/groups/-/m/t/1306502
I am writing Mathematica code on parabolic equations using modules.
However, when I input Parabola[3, Pi/3, 3], I do not get any output.
What is wrong?
Parabola[v0_, \[Theta]_, time_, options_] :=
Module[{}, g = -9.8; v0x = v0*Cos[\[Theta]]; v0y = v0*Sin[\[Theta]];
Vy[z_] := v0y + g*z;
Sy[t_] := v0y*t + 1/2 g*t^2; Sx[t_] := v0x*t;
Time = Solve[Vy[z] == 0, z];
Maxtime = Time[[1, 1, 2]];
Maxlength = Sx[2 Maxtime];
Maxheight = Sy[Maxtime];
Data1 = Table[{Sx[t], Sy[t]}, {t, 0, 2 Maxtime, Maxtime/20}];
ParabolarPlot =
If[time < 2 Maxtime,
ListPlot[Data1, options,
Prolog -> {PointSize[0.03], Blue, Point[{Sx[time], Sy[time]}]},
PlotStyle -> {GrayLevel[0], PointSize[0.005]},
ImageSize -> {340, 140}, AspectRatio -> 1/3,
PlotRange -> {{0, 1.1 Maxlength}, {0, 1.1 Maxheight}},
PlotLabel ->
StyleForm[
"V0=" <> ToString[v0, StandardForm] <> "(m/s), Angle=" <>
ToString[TraditionalForm[\[Theta]]] <> "(rad),t=" <>
ToString[NumberForm[N[time], 3], StandardForm] <> "(sec)" <>
",Position=" <>
ToString[NumberForm[N[{Sx[time], Sy[time]}], 3],
StandardForm], FontSize -> 9]],
    Print["For this experiment, the condition must be set below the object's total flight time = " <>
      ToString[2 Maxtime, TraditionalForm] <> " (sec)."]]]Yoon Young Jin2018-03-22T02:47:18ZAn algorithm to obtain appropriate numerical solutions for the bounce
http://community.wolfram.com/groups/-/m/t/1306568
I am looking to replicate an algorithm explained in the paper "Impact of new physics on the EW vacuum stability in a curved spacetime background" by E. Bentivegna, V. Branchina, F. Contino and D. Zappalà, to numerically solve the boundary-value problem encountered when trying to find the bounce solution in false vacuum decay. It can be found here: https://arxiv.org/abs/1708.01138
In the Appendix, "A Numerical computation of the bounce solution", it outlines an algorithm involving a shooting method to solve the following boundary problem:
$y^{\prime\prime}(x) + \frac{3}{x}y^{\prime}(x) = \frac{dU}{dy};\\
y(\infty) = 0,\ y^{\prime}(0) = 0, \\U(y) = \frac{1}{4}y^{4}(\gamma +\alpha\ln^{2}y + \beta\ln^{4}y),$
where ${}^{\prime}$ indicates differentiation with respect to $x$.
The value of the constants in the equation are as follows:
Mp = 2.435*10^18;
\[Alpha] = 1.4*10^-5;
\[Beta] = 6.3*10^-8;
\[Gamma] = -0.013;
\[Lambda]6 = 0;
The appendix outlines the algorithm as follows, where Eq.(2.23) is referring to the differential equation (and boundary conditions) stated above.
![Appendix Part 1][1]
![Appendix Part 2][2]
![Appendix Part 3][3]
I am attempting to recreate this algorithm in Mathematica, but my attempts thus far, simply integrating the differential equation numerically with a shooting method and the boundary conditions specified in the Appendix, have just yielded errors owing to complex infinities.
Any help with this matter would be greatly appreciated.
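For concreteness, here is a minimal shooting-method skeleton under stated assumptions (the regularization ε, the cutoff xmax, and the starting bracket for the shooting parameter are guesses, and real use would likely need the paper's bisection scheme plus a higher WorkingPrecision):

```mathematica
α = 1.4*10^-5; β = 6.3*10^-8; γ = -0.013;
U[y_] := 1/4 y^4 (γ + α Log[y]^2 + β Log[y]^4);
dU[y_] = D[U[y], y];                      (* right-hand side dU/dy *)
ε = 10^-6; xmax = 100;                    (* assumed: regularize x = 0, truncate x = ∞ *)
shoot[y0_?NumericQ] := NDSolveValue[
   {y''[x] + (3/x) y'[x] == dU[y[x]], y[ε] == y0, y'[ε] == 0},
   y[xmax], {x, ε, xmax}];
(* adjust the initial value y(ε) until the solution decays to 0 at xmax *)
FindRoot[shoot[y0] == 0, {y0, 0.1, 0.5}]
```

Starting the integration at a small ε > 0 avoids the 3/x singularity at the origin; the unstable nature of the bounce is exactly why the paper resorts to careful bisection on y0.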
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=page1.png&userId=1240483
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=page2.png&userId=1240483
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=page3.png&userId=1240483Benjamin Leather2018-03-22T18:06:09ZWhy futures and forwards differ?
http://community.wolfram.com/groups/-/m/t/1297242
We discuss the similarities and differences between interest rate forwards and futures. The differences are explained not through product mechanics, but as expectations taken under different probability measures. We show that forward rates are not risk-neutral expectations of Libor rates, as the measures involved are not identical. Several drift correction schemes are presented to show the conditions under which the two rates converge.
![enter image description here][1]
## Introduction##
Futures and forwards are among the oldest financial instruments. They represent the family of simplest derivative products and are well known to all members of the financial community. Despite being around for a long time, the question of why and how they differ is still not completely understood. We therefore look at these products again and bring theoretical reasoning into scope to show that futures and forwards are essentially different products, each with pricing dynamics driven by a different probability measure. Although forwards and futures exist in various financial asset classes, we focus on interest rates, where this topic is most pronounced.
##Zero-coupon bond as a discount factor##
The time value of money is generally associated with (i) compounding and (ii) its inverse, i.e. discounting. The latter is normally associated with a discount factor, i.e. a **zero-coupon bond** that pays unity at maturity. This is denoted **B[t,T]**, where t describes the initiation time and T is the maturity. A standard convention assumes t=0, which translates into the current discount factor **B[0,T]** for T>0.
The zero-coupon bond B[t,T] is not the only measure of the time value of money. Additional measures include:
A) yield
B[t,T] = Exp[-Y[t,T] δ[t,T]]
where δ[t,T] stands for the year fraction; Y[t,T] is the unique constant rate under continuous compounding.
B) term rate
B[t,T] = 1/(1 + R[t,T] δ[t,T])
the unique constant rate on the interval [t,T] with no compounding.
The term rate definition above is obtained from the floating rate note (FRN) pricing formula paying a Libor index with one remaining payment:
B[t,T] (1 + R[t,T] δ[t,T]) = 1
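As a quick symbolic check in the same notation, the FRN relation can be solved for the term rate:

```mathematica
Clear[B, R, δ];
Solve[B[t, T] (1 + R[t, T] δ[t, T]) == 1, R[t, T]]
(* equivalent to R[t, T] -> (1 - B[t, T])/(B[t, T] δ[t, T]) *)
```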
##Forward rates##
When interest rates are deterministic, it can be shown that B[0,t] B[t,T] = B[0,T]. However, if they are not, this cannot be established, since B[t,T] will not be known until time t (t>0). Nevertheless, B[0,T] / B[0,t] is still a useful measure, as it denotes the **forward discount factor** starting at time t with final maturity T.
Since the discount factor is a function of the rate process, we can define the associated **forward rate f0[t,T]** as the rate observed today for borrowing (lending) starting at time t and maturing at time T. If the agreement to lend or borrow is done at the forward rate f0[t,T], then the contract must have present value 0.
Clear[B, F, f0, δ];
F[t, T] = B[0, T]/B[0, t];
F[t, T] = 1/(1 + f0[t, T] δ[t, T])
![enter image description here][2]
Agreements to borrow or lend in the future (i.e. forward rate agreements) are generally done at a fixed rate K. This is essentially an agreement to exchange the market term rate **R[t,T]** for the fixed contract rate **K**. The contract is priced on the assumption that the market term rate R[t,T] will be the forward rate f0[t,T]. Although R[t,T] is unknown at time 0, f0[t,T] is known, so this equality allows the forward rate agreement to be priced at inception.
##Futures##
A futures contract is *similar* to a forward, but some *differences* exist. It is designed to lock in a price for the purchase/sale of an asset before the physical purchase/sale occurs. At the same time, a future corrects certain deficiencies of the forward contract: it allows daily variation of the rate for the given delivery date, and, due to its mark-to-market nature, it eliminates counterparty exposure. Daily settlement is the primary difference between the two contracts.
A futures contract is associated with a futures price, the settlement price at which the contract evaluates. In the risk-neutral world governed by risk-neutral probabilities, the futures price is simply ℱ = Ern[X[T]] = Exp[r T] P0, where P0 is the futures price at time 0. When the rate is constant, the futures price equals the forward price.
##Martingales##
A process is known as martingale under the risk-neutral probabilities if it satisfies the condition:
h(t) = Ern[h(T)] for t<T, where Ern denotes the risk-neutral expectation. In other words, price now = Ern[price next].
This also applies to futures: futures prices are determined by the fact that ℱ(t) is a martingale relative to the risk-neutral probabilities. In the risk-free world we frequently work with the money-market account, which earns the risk-free rate and compounds forward in time. At inception the account equals A(0) = 1, and the balance at time t is A(t) = Exp[r t] when the rate stays constant. Given this, a method to determine the price of a tradable asset becomes: h(t)/A(t) = Ern[h(T)/A(T)]
Since the zero-coupon bond is a tradable security, we establish that B[t,T]/A(t) = Ern[1/A(T)]. Since A(T) = A(t) Exp[∫r(s) ds], the pricing formula for the zero-coupon bond becomes B[t,T] = Ern[Exp[-∫r(s) ds]].
##The nature of future - forward difference##
###Future###
Interest rate futures exist in many currencies and involve a 3-month contract referenced to a **Xibor** index (e.g. Libor, Euribor, Tibor). At contract maturity the holder makes a loan to the counterparty at the 3-month Xibor rate. This is the term rate R[t,T] we defined in the first paragraph. The associated futures price (i.e. futures rate) is a martingale:
R0[t,T] = Ern[R[t,T]]
This means that the **futures rate** observed today is the risk-neutral expectation of the 3-month term rate R[t,T] for the period [t,T].
###Forward###
We can now apply similar reasoning to the forward term rate **f0[t,T]**. We can get its definition from the forward discount factor F[t,T]:
Clear[B, f0, F]
F[t, T] = B[0, T]/B[0, t];
Solve[F[t, T] == 1/(1 + f0[t, T] \[Delta][t, T]), f0[t, T]]
soln = f0[t, T] /. %;
![enter image description here][3]
This is the forward rate obtained from the ratio of discount factors.
We split the formula above into (i) numerator and (ii) denominator and express the two zero-coupon bonds as exponentials of the short-rate path:
num1 = Numerator[soln];
num2 = num1 /. {B[0, t] -> Exp[-Integrate[r[s], {s, 0, t}]], B[0, T] -> Exp[-Integrate[r[s], {s, 0, T}]]}
![enter image description here][4]
We can simplify the above using the fact that interval [0,T] can be broken into two shorter intervals [0,t] and [t,T].
num3 = num2 /. {Exp[-Integrate[r[s], {s, 0, T}]] ->
Exp[-Integrate[r[s], {s, 0, t}]] Exp[-Integrate[r[s], {s, t, T}]]}
![enter image description here][5]
We rewrite the expression as follows:
num4 = Collect[num3, Exp[-Integrate[r[s], {s, 0, t}]]]
![enter image description here][6]
and then further simplify by bringing the year-fraction into the formula
num5 = num4 /. {Exp[-Integrate[r[s], {s, t, T}]] -> B[t, T]};
num6 = num5/δ[t, T]
![enter image description here][7]
Combining all terms brings us the formula:
f0[t, T] = (1/B[0, T]) Ern[num6]
![enter image description here][8]
Using the definition
B[t,T] = 1/(1+R[t,T] δ[t,T])
we further state:
Clear[B, R];
B[t, T] = 1/(1 + R[t, T] δ[t, T]);
num7 = (1 - B[t, T])/δ[t, T] // Simplify
which translates to
R[t,T] B[t,T]
Clear[B, R, f0];
num8 = num7 /. {R[t, T]/(1 + R[t, T] δ[t, T]) -> R[t, T]*B[t, T]} // Simplify
![enter image description here][9]
We can now express the expectation of the forward rate as:
f0[t, T] = (1/B[0, T]) Ern[
Exp[-Integrate[r[s], {s, 0, t}]] num8]
![enter image description here][10]
We further rewrite the discount factor as a rate process:
f0[t, T] = f0[t, T] /. {B[t, T] -> Exp[-Integrate[r[s], {s, t, T}]]}
![enter image description here][11]
Noting that the two periods combine into one, [0,t] + [t,T] -> [0,T]:
f0[t, T] /. {Exp[-Integrate[r[s], {s, 0, t}]] Exp[-Integrate[
r[s], {s, t, T}]] -> Exp[-Integrate[r[s], {s, 0, T}]]}
![enter image description here][12]
which further simplifies to:
% /. {B[0, T] -> Ern[Exp[-Integrate[r[s], {s, 0, T}]]]}
![enter image description here][13]
The above formula shows that the forward rate **f0[t,T]** is **NOT** the risk-neutral expectation of the Xibor term rate R[t,T]. Instead, the forward rate is an expectation taken under a different probability measure, in which the Xibor rate is weighted by Exp[-∫r(s) ds].
Defining both rates in simplified notation, with \[DoubleStruckCapitalE] representing the risk-neutral expectation, we can now quantify the difference between the two rates:
Clear[R, f0, B];
R[t, T] = \[DoubleStruckCapitalE][R];
f0[t, T] = \[DoubleStruckCapitalE][R B]/\[DoubleStruckCapitalE][B];
f0[t, T] - R[t, T] // Together
![enter image description here][14]
If R and B were *independent*, the numerator would vanish, implying that the forward rate equals the futures rate. However, independence cannot be justified: R is a rate and B is a discount factor, which implies **negative correlation** between the two.
This makes the numerator negative, leading to the conclusion that the forward rate < the futures rate. This is not surprising: since futures offer certain advantages over forwards, namely daily settlement, their higher price relative to forwards is logical.
The adjustment of the futures rate to the forward rate is known as **drift correction**; practitioners refer to it as 'convexity adjustment'. As the formula above shows, different dynamics for interest rates lead to different drift correction formulas.
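Before turning to the general formulas, a quick numeric sanity check of the sign argument (parameter values are illustrative only): with normal marginals and a negative binormal-copula correlation, the measure-weighted forward rate indeed lands below the risk-neutral (futures) rate:

```mathematica
rDist = NormalDistribution[0.020, 0.005];  (* term rate R *)
bDist = NormalDistribution[0.95, 0.02];    (* discount factor B *)
cop = CopulaDistribution[{"Binormal", -0.85}, {rDist, bDist}];
fwd = Expectation[x y, {x, y} \[Distributed] cop]/
      Expectation[y, y \[Distributed] bDist];   (* E[R B]/E[B] *)
fut = Expectation[x, x \[Distributed] rDist];   (* E[R] *)
fwd < fut
(* True *)
```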
##Various drift correction approaches##
We look at several examples of drift correction formulas, where we assume certain distributions for the term rate R[t,T] and the discount factor B[t,T]. We set the Xibor distribution to **Normal** (explicitly allowing for **negative rates**), while choosing different processes for the discount factor. The joint expectation is computed via **copula distributions** (Binormal and Farlie-Gumbel-Morgenstern). We output (i) the joint copula expectation of rate and discount factor E[R B] and (ii) the final drift correction formula. As shown below, different combinations of rate and DF processes lead to different drift correction formulas.
###Normal- Normal process###
rDist = NormalDistribution[a, b];
bDist = NormalDistribution[c, d];
bc1 = CopulaDistribution[{"Binormal", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][15]
![enter image description here][16]
The drift adjustment formula depends on the covariance product and the DF drift. This is similar to the 'quanto' adjustment in equity options.
Plot3D[dc1 /. {c -> 0.95, ρ -> -0.85}, {b, 0.001, 0.02}, {d, 0.5, 1.5},
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic,
AxesLabel -> Automatic,
PlotLabel -> Style["N-N drift adjustment - volatility dependence", 15, Blue]]
![enter image description here][17]
###Normal - Log Normal process###
bDist = LogNormalDistribution[c, d];
bc1 = CopulaDistribution[{"Binormal", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][18]
![enter image description here][19]
The Normal-LogNormal process generates a drift adjustment routine similar to that of the double-normal process.
###Normal - Laplace process###
bDist = LaplaceDistribution[c, d];
bc1 = CopulaDistribution[{"FGM", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][20]
![enter image description here][21]
The Normal-Laplace process (a special case of the Variance-Gamma process) generates a more complex adjustment rule with deeper dependence on each process.
###Normal - Gamma process###
bDist = GammaDistribution[c, d];
bc1 = CopulaDistribution[{"FGM", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0 && c > 0] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0 && c > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][22]
![enter image description here][23]
Normal-Gamma process is well defined and relatively simple.
###Normal - Logistic process###
bDist = LogisticDistribution[c, d];
bc1 = CopulaDistribution[{"FGM", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0 && -1 <= ρ <= 1] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][24]
![enter image description here][25]
Simplicity is also a feature of the Normal-Logistic process, where the adjustment formula is a product of covariances.
###Normal - K process###
bDist = KDistribution[c, d];
bc1 = CopulaDistribution[{"FGM", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d > 0 && -1 <= ρ <= 1] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][26]
![enter image description here][27]
On the other hand, selecting a Normal-K process (representing an Exponential/Gamma mixture) for DF calibration leads to a complex adjustment rule.
###Normal - Johnson SN process###
bDist = JohnsonDistribution["SN", c, d1, d2, d3];
bc1 = CopulaDistribution[{"FGM", ρ}, {rDist, bDist}];
jd1 = Expectation[x y, {x, y} \[Distributed] bc1,
Assumptions -> b > 0 && d3 > 0 && -1 <= ρ <= 1] // Simplify
eR = Expectation[x, x \[Distributed] rDist, Assumptions -> b > 0];
eD = Expectation[y, y \[Distributed] bDist, Assumptions -> d3 > 0];
dc1 = (jd1 - eR eD)/eD // Simplify
![enter image description here][28]
![enter image description here][29]
##Conclusion##
We have taken a more rigorous approach to show why futures and forwards differ when rates are stochastic. We have demonstrated that the nature of the deviation lies in the probability measure: futures and forwards are subject to essentially different expectations. This leads to the conclusion that forward rates cannot be unbiased estimates of future Xibor rates, since the forward process is fundamentally different.
We have also demonstrated that the drift adjustment formula equating futures and forwards is probability-dependent: selecting different rate (discount factor) processes leads to different results. The complexity of the rule depends on (i) the joint expectation and (ii) the distributional assumptions about each component.
We conclude the exposition on model selection with the higher-order Normal-Johnson process of 'SN' type, where additional process parameters provide better control of the DF process.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=NN_options.png&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=58821.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11612.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20383.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=612933.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=87514.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14675.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=26226.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=76428.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=52639.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=560410.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=977211.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=754912.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1068513.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=891414.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=979115.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=153816.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=285717.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=520218.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=564619.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=560020.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=924921.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=352322.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=430923.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1072224.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=160925.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=595126.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=109827.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=208028.png&userId=20103Igor Hlivka2018-03-06T23:19:12ZBest way to process many timestamps?
http://community.wolfram.com/groups/-/m/t/1306188
Hi everyone,
I need to do calculations on millions of timestamps, such as
t0 = "2011-01-11 11:30:01.321"
If I want to subtract 12 hours from the timestamp and convert the result to a list of date items, I could use DatePlus and DateList like this
DateList[DatePlus[t0, {-12, "Hour"}]]
and it takes {**0.003359**, {2011,1,10,23,30,1.}} on my MacBook Pro. But it is apparently a lot faster to subtract the 12 hours this way
DateList[DateList[t0] + {0, 0, 0, -12, 0, 0}]
taking only {**0.000801**, {2011,1,10,23,30,1.321}}, where the outer DateList conveniently adjusts the day so that the hours position shows a positive integer. The second way also keeps the fraction of a second (1.321), which the first way drops.
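A third option worth benchmarking (a sketch; AbsoluteTime returns plain seconds since the start of 1900, so the subtraction is pure numeric arithmetic and DateList converts the number back at the end):

```mathematica
t0 = "2011-01-11 11:30:01.321";
abs = AbsoluteTime[t0];        (* seconds since the start of 1900 *)
DateList[abs - 12*3600]        (* subtract 12 hours numerically *)
(* {2011, 1, 10, 23, 30, 1.321} *)
```

For millions of timestamps, converting to absolute times once and keeping everything numeric until the final conversion should minimize per-item overhead.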
I'm leaning towards the second way, and am curious whether anyone sees a downside.Gregory Lypny2018-03-21T18:52:00ZAvoid this error "NDSolve::deqn: Equation or list of equations expected "?
http://community.wolfram.com/groups/-/m/t/1306723
Hi, after much trial and error, and with help from some forum members, I ended up with the following command for a complex PDE:
op = Function[r*D[#, r] + Tan[\[Phi]] D[#, \[Phi]]];
I*Nest[op, \[CapitalPsi][r, \[Phi]], 3] ==
2 \[CapitalPsi][r, \[Phi]]*r^2/Cos[\[Phi]]^5
sol = CapitalPsi[r, Phi] /.
NDSolve[{op,
CapitalPsi[0, Phi] == 1,
Derivative[1, 0][CapitalPsi][0, Phi] == 0,
Derivative[2, 0][CapitalPsi][0, Phi] == 10,
Derivative[3, 0][CapitalPsi][0, Phi] == 0,
CapitalPsi[r, 0] == 1,
Derivative[0, 1][CapitalPsi][r, 0] == 0},
CapitalPsi, {r, 0, 3}, {Phi, 0, 3},
MaxSteps -> Infinity, PrecisionGoal -> 1,
AccuracyGoal -> 1,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> 32, "MaxPoints" -> 32, "DifferenceOrder" -> 2},
Method -> {"Adams", "MaxDifferenceOrder" -> 1}}] //
Plot3D[sol, {r, 0, 3}, {Phi, 0, 3}, AxesLabel -> Automatic]
> NDSolve::deqn: Equation or list of equations expected instead of r
> \!\(\*SubscriptBox[\(\[PartialD]\), \(r\)]#1\)+Tan[\[Phi]]
> \!\(\*SubscriptBox[\(\[PartialD]\), \(\[Phi]\)]#1\)& in the first
> argument {r \!\(\*SubscriptBox[\(\[PartialD]\), \(r\)]#1\)+Tan[\[Phi]]
> \!\(\*SubscriptBox[\(\[PartialD]\),
> \(\[Phi]\)]#1\)&,CapitalPsi[0,Phi]==1,(CapitalPsi^(1,0))[0,Phi]==0,(CapitalPsi^(2,0))[0,Phi]==10,(CapitalPsi^(3,0))[0,Phi]==0,CapitalPsi[r,0]==1,(CapitalPsi^(0,1))[r,0]==0}.
> ReplaceAll::reps: {NDSolve[{r \!\(\*SubscriptBox[\(\[PartialD]\),
> \(r\)]\(Slot[<<1>>]\)\)+Tan[\[Phi]]
> \!\(\*SubscriptBox[\(\[PartialD]\),
> \(\[Phi]\)]\(Slot[<<1>>]\)\)&,CapitalPsi[0,Phi]==1,(CapitalPsi^(1,0))[0,Phi]==0,(CapitalPsi^(2,0))[0,Phi]==10,(CapitalPsi^(3,0))[0,Phi]==0,CapitalPsi[r,0]==1,(CapitalPsi^(0,1))[r,0]==0},<<6>>,Method->{MethodOfLines,SpatialDiscretization->{TensorProductGrid,MinPoints->32,\[Ellipsis]
> ->32,DifferenceOrder->2},Method->{Adams,MaxDifferenceOrder->1}}]} is neither a list of replacement rules nor a valid dispatch table, and so
> cannot be used for replacing.
It appears it is complaining about the tan function. What is the problem with this command?
ThanksSer Man2018-03-22T13:59:28ZExport a Nx2 array data file that generated a u[x,] vs x line plot?
http://community.wolfram.com/groups/-/m/t/1306427
(* suppose a finite element run generated a solution u[x,y] with the \
command listed below*)
ufun = NDSolveValue[{op ==
Subscript[\[CapitalGamma], N1] + Subscript[\[CapitalGamma], N2] +
Subscript[\[CapitalGamma], N3], Subscript[\[CapitalGamma], D]},
u, {x, y} \[Element] mesh];
(* and suppose at constant value y=ycon, the command below \
successfully plots the solution u[x,ycon] vs x *)
Plot[Re[ufun[x, ycon]], {x, xlow, xhigh}, PlotLabel -> "ycon=0 Real u vs x " ]
(* QUESTION:
HOW TO EXPORT THE u[x,ycon] vs x PLOTTED CURVE AS A SIMPLE Nx2 two \
column data FILE *)Anthony Kalinowski2018-03-22T00:14:06ZGet from Mathematica 5.2 Graphics to Mathematica 11.3 Graphics?
http://community.wolfram.com/groups/-/m/t/1306703
A recommendation on resources available for complex conformal mapping in 11.3 would be appreciated. I have a source for complex conformal mapping that uses 5.3 Graphics but would like to convert it to 11.3 Graphics. For other graphics applications it is fairly easy to upgrade the graphics, but conformal mapping is graphics-intensive, and the conversion seems not to be straightforward. Thanks for anyone's help.Ernest Tollner2018-03-22T11:03:45ZGet the two missing terms when using Limit?
http://community.wolfram.com/groups/-/m/t/1305935
I use the Limit command:
In[503]:= eq1 =
cinf/(s - \[Theta]c) + (
Sqrt[Dc] E^((lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) F1)/(
Sqrt[s] (s - \[Theta]c)) - (cinf \[Theta]c)/(s (s - \[Theta]c)) + (
E^((Sqrt[s] x)/Sqrt[Dc]) s a[1][s])/(s - \[Theta]c) + (
E^((2 lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) s a[1][s])/(
s - \[Theta]c) - (E^((Sqrt[s] x)/Sqrt[Dc]) \[Theta]c a[1][s])/(
s - \[Theta]c) - (
E^((2 lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) \[Theta]c a[1][
s])/(s - \[Theta]c)
Out[503]= cinf/(s - \[Theta]c) + (
Sqrt[Dc] E^((lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) F1)/(
Sqrt[s] (s - \[Theta]c)) - (cinf \[Theta]c)/(s (s - \[Theta]c)) + (
E^((Sqrt[s] x)/Sqrt[Dc]) s a[1][s])/(s - \[Theta]c) + (
E^((2 lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) s a[1][s])/(
s - \[Theta]c) - (E^((Sqrt[s] x)/Sqrt[Dc]) \[Theta]c a[1][s])/(
s - \[Theta]c) - (
E^((2 lh Sqrt[s])/Sqrt[Dc] - (Sqrt[s] x)/Sqrt[Dc]) \[Theta]c a[1][
s])/(s - \[Theta]c)
In[504]:= eq2 = (Limit[Expand@eq23, x -> + \[Infinity],
Assumptions -> {s > 0, Dc > 0}]) // Simplify // Normal
Out[504]= \[Infinity] a[1][s]
The two terms below are missing from the result; please tell me where they went:
cinf/(s - \[Theta]c)-((cinf \[Theta]c)/(s (s - \[Theta]c)))Zhonghui Ou2018-03-21T08:22:00ZModify the theme in the Presenter Notebooks introduced in 11.2 ?
http://community.wolfram.com/groups/-/m/t/1303048
I'm giving my first talk with the new Presenter Notebooks feature in 11.3.
I've used Slideshow in the past.
I'm being asked to add the logo of the conference onto my slides (e.g., a footer).
Is this possible? If so how?
I suppose I could use a DockedCell, but I am guessing there is a better solution.W. Craig Carter2018-03-16T21:35:26ZProblems with accuracy and confusionmatrix plot of classifier measurements?
http://community.wolfram.com/groups/-/m/t/1305462
**EDIT: Here is the complete code that I have used:**
My notebook has been attached, and here is the link to the dataset: https://drive.google.com/open?id=1IWIHXiDYU1X0HbdyzIJC_YXMfESzbEVO
class = 2;
className = {"car", "dog"};
width = 224; height = 224;
dir = SetDirectory["C:\\Users\\cnn\\Desktop\\images"];
loadFiles[dir_] :=
Map[File[#] -> FileNameTake[#, {-2}] &,
FileNames["*.jpg", dir, Infinity]]; trainingData =
loadFiles[FileNameJoin[{dir, "train"}]];
testingData = loadFiles[FileNameJoin[{dir, "test"}]];
files = FileNames["*.JPG" | "*.JPEG",
"C:\\Users\\cnn\\Desktop\\images\\test", Infinity]
testData = Import[#, ImageSize -> {224, 224}] & /@ files;
netEncoder =
NetEncoder[{"Image", {width, height}, ColorSpace -> "RGB"}];
netDecoder = NetDecoder[{"Class", className}];
conv1 = ConvolutionLayer[64, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1 ];
conv2 = ConvolutionLayer[64, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv3 = ConvolutionLayer[128, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv4 = ConvolutionLayer[128, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv5 = ConvolutionLayer[256, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv6 = ConvolutionLayer[256, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv7 = ConvolutionLayer[256, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv8 = ConvolutionLayer[256, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv9 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv10 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv11 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv12 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv13 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv14 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv15 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
conv16 = ConvolutionLayer[512, {3, 3}, "PaddingSize" -> 1,
"Stride" -> 1];
block1 = {BatchNormalizationLayer[], conv1, Ramp, conv2, Ramp,
PoolingLayer[2, 2], conv3, Ramp, conv4, Ramp, PoolingLayer[2, 2],
conv5, Ramp, conv6, Ramp, conv7, Ramp, conv8, Ramp,
PoolingLayer[2, 2], conv9, Ramp, conv10, Ramp, conv11, Ramp,
conv12, Ramp, PoolingLayer[2, 2], conv13, Ramp, conv14, Ramp,
conv15, Ramp, conv16, Ramp, PoolingLayer[2, 2], FlattenLayer[],
4096, Ramp, DropoutLayer[0.50], 4096, Ramp, DropoutLayer[0.50],
class, SoftmaxLayer[]};
vggNet = NetChain[block1, "Output" -> netDecoder,
"Input" -> netEncoder]
vggNet2 = NetInitialize[vggNet];
trainedNet =
NetTrain[vggNet2, trainingData, ValidationSet -> testingData,
MaxTrainingRounds -> 1]
cm = ClassifierMeasurements[trainedNet, testData]
cm["Accuracy"]
cm["ConfusionMatrixPlot"]
c = Classify@trainedNet;
ClassifierMeasurements[c, testData, Accuracy]
ClassifierMeasurements[c, testData, F1Score]
ClassifierMeasurements[c, testData, ConfusionMatrixPlot]
c /@ {Accuracy, FScore, ConfusionMatrixPlot} // TableForm
Here, I have used a folder named images, which contains two subfolders named train and test. Since I am doing all this on a CPU, I just used two subfolders in train (named car and dog) containing 70 images each. The test folder has the same subfolders, with 30 images each.
**END EDIT**
Greetings,
I have come across yet another problem. I am trying to create a VGG net, and with some help have overcome the obstacles up to training. I am now facing a problem with finding the accuracy of my network and also with plotting the confusion matrix.
I referred to the documentation as well as the community discussions and found that my goals can be achieved by two methods:
1)
c = Classify@trainedNet;
ClassifierMeasurements[c, testingData, "Accuracy"]
ClassifierMeasurements[c, testingData, "F1Score"]
ClassifierMeasurements[c, testingData, "ConfusionMatrixPlot"]
c /@ {"Accuracy", "FScore", "ConfusionMatrixPlot"} // TableForm
when I use the above code, I get the following error:
`ClassifierMeasurements::mlincfttp: Incompatible variable type (Image) and variable value (File[C:\Users\cnn\Desktop\images\test\car\00301.jpg])`.
2)
I believe another method is
cm = ClassifierMeasurements[trainedNet, testingData]
cm["Accuracy"]
cm["ConfusionMatrixPlot"]
But when I use the above code I get the following error:
ClassifierMeasurements::mlincfttp: Incompatible variable type (Image) and variable value (File[C:\Users\cnn\Desktop\images\test\car\00301.jpg]).
Initial part of my code is:
dir = SetDirectory["C:\\Users\\cnn\\Desktop\\images"];
loadFiles[dir_] :=
Map[File[#] -> FileNameTake[#, {-2}] &,
FileNames["*.jpg", dir, Infinity]];
trainingData =
loadFiles[FileNameJoin[{dir, "train"}]];
testingData =
loadFiles[FileNameJoin[{dir, "test"}]];
files = FileNames["*.JPG" | "*.JPEG",
"C:\\Users\\cnn\\Desktop\\images\\test", Infinity]
testData = Import[#, ImageSize -> {224, 224}] & /@ files;
No matter whether I use testData or testingData for ClassifierMeasurements, I still get errors and am unable to compute the accuracy or plot the confusion matrix.
Any help would be highly appreciated.
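For what it's worth, `ClassifierMeasurements` expects test data in the same labeled form as the training data, i.e. a list of input -> class rules; `testData` is a bare list of images with no labels, which is consistent with the bdfmt error. A sketch, assuming the `testingData` rules defined above (and that `Import` accepts the `File` wrappers; otherwise extract the raw path first):

```mathematica
(* testingData is a list of File[...] -> "class" rules from loadFiles above *)
labeledTest =
  Map[Import[First[#], ImageSize -> {224, 224}] -> Last[#] &, testingData];
cm = ClassifierMeasurements[trainedNet, labeledTest];
cm["Accuracy"]
cm["ConfusionMatrixPlot"]
```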
**EDIT: I am also getting the following error:**
ClassifierMeasurements::bdfmt: Argument {,,,,,,<<40>>,,Image[RawArray[UnsignedInteger8,<75,50,3>],Byte,ColorSpace->ColorProfileData[RawArray[UnsignedInteger8,<560>],RGB,XYZ],ImageResolution->{240,240},Interleaving->True],Image[RawArray[UnsignedInteger8,<75,50,3>],Byte,ColorSpace->ColorProfileData[RawArray[UnsignedInteger8,<560>],RGB,XYZ],ImageResolution->{240,240},Interleaving->True],,<<10>>} should be a rule, a list of rules, or an association.
for the code:Ashish Sharma2018-03-20T18:57:16ZWhat is the substitute for MatrixNorm?
http://community.wolfram.com/groups/-/m/t/1304942
When I type these,
B={{1,2},{3,4}}
Norm[N[B]]
the output is,
> {{1, 2}, {3, 4}}
>
> 5.46499
But in an old textbook, the command
MatrixNorm[N[B]]
gives 7. (This command is no longer available, and I cannot find its substitute :( )
How can I get this result?
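For reference, the value 7 matches the matrix infinity norm (the maximum absolute row sum: |3| + |4| = 7). If that is what the old `MatrixNorm` computed, the built-in `Norm` reproduces it with an explicit second argument:

```mathematica
B = {{1, 2}, {3, 4}};
Norm[N[B]]            (* default for matrices: largest singular value, 5.46499 *)
Norm[N[B], Infinity]  (* maximum absolute row sum: 7. *)
Norm[N[B], 1]         (* maximum absolute column sum: 6. *)
```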
Nice day!Jihun Park2018-03-20T01:26:14ZSolve an error minimization problem with respect to 2 parameters?
http://community.wolfram.com/groups/-/m/t/1306529
I try to solve an error minimization problem with respect to 2 parameters. The error is the absolute difference between the solutions of the ODEs
\[Epsilon] = 10^-6;
Delta[t_] := 1/(Sqrt[\[Pi]] \[Epsilon]) Exp[-(t/\[Epsilon])^2]
f[t_] := Delta[t]
sol = NDSolve[{w''[t] + (w[t] + w[t]^2) w'[t] == f[t], w[0] == 0,
w'[0] == 0}, w, {t, 0, 2 \[Pi]}, Method -> "MethodOfLines"];
and
solGreen[s2_] :=
NDSolve[{G''[t] + (G[t] + G[t]^2) G'[t] == s2 Delta[t], G[0] == 0,
G'[0] == 0}, G, {t, 0, 2 \[Pi]}, Method -> "MethodOfLines"]
I define
wsol[t_] := Evaluate[w[t] /. sol]
GGreen[t_] := Evaluate[G[t] /. solGreen[2]]
wGreen[t_] :=
s1 NIntegrate[GGreen[t - \[Tau]] f[\[Tau]], {\[Tau], 0, t},
Method -> "DoubleExponential"]
and store the error in a table
tab = Table[Abs[wsol[t] - wGreen[t]], {t, 0, 1, 0.01}];
Finally, I apply
NMinimize[Max[tab], s1]
for fixed s2. This algorithm finds the value of s1 minimizing the absolute difference between the solutions for **fixed s2**. Is there a way to modify the above algorithm to minimize the error with respect to s1 and s2 simultaneously?Asatur Khurshudyan2018-03-22T08:31:37ZAutomatic documentation generation and distribution
http://community.wolfram.com/groups/-/m/t/1306464
This is a post I initially wrote [elsewhere](https://www.wolframcloud.com/objects/b3m2a1/home/main.html), but which I'm porting here as it shows once more why I think [subdomains](http://community.wolfram.com/groups/-/m/t/1250055) and [normal web operations](http://community.wolfram.com/groups/-/m/t/1264118) in the Wolfram Cloud are important features (and which I imagine should not be much work to implement)
---
This post is going to be long on design and relatively short on code. As usual, it's exposition of something I spent a while developing and have [cooked into one of my packages](https://github.com/b3m2a1/mathematica-BTools) .
I'm going to talk about how to make and distribute documentation in Mathematica, with specific emphasis on the automatic generation of documentation.
# Documentation Overview
To start, though, we'll break down the types of things that we find in the documentation and which we'll want to support in a package. At this point there are about 10 different types of documentation formats out there:
* Symbol pages
* Guide pages
* Tutorial pages
* Message pages
* Format pages
* Service Connection pages
* HowTos
* Workflows
* Overviews
Of these, there are really only 4 distinct types of things that we see in the documentation folders:
* Reference Pages
* Guides
* Tutorials
* Workflows
And of these I opted to only implement the first three as they are the ones I use the most.
## Reference Pages (Symbols)
Ref pages come in a few different flavors, but the most common one, and the most common type of documentation in general, is the symbol page.
As of version 11.1 symbol pages look like this:
Rasterize@Documentation`HelpLookup["ref/StatusArea"]
(*Out:*)
![making-mathematica-documentation-with-mathematica-526634225825234956](https://www.wolframcloud.com/objects/b3m2a1/home/img/making-mathematica-documentation-with-mathematica-526634225825234956.png)
Which can be split into 5 major parts
* Header bar
* Usage table
* Details
* Examples
* Related Links
We'll need to include each of these when we build our own docs
## Guide Pages
Guides have four parts. In the interest of space I won't show an actual image of a guide, but you can see one via:
Documentation`HelpLookup["guide/PlottingOptions"]
These parts are
* Header bar
* Title and abstract
* Function listing
* Related links
## Tutorials
Tutorials are so much more flexible than guides or symbol pages that it only really makes sense to discuss three sections, with an optional fourth:
* Header bar
* Jump-links (optional)
* Tutorial content
* Related links
This flexibility makes them both easier and harder to handle.
# Generating Documentation
Now that we know what kinds of things we need to include we can move to how to include them. Before jumping into the actual code, though, it's worth noting that WRI does provide a tool for building docs.
## Workbench
Workbench is Wolfram's primary IDE. It's a plugin to Eclipse
### Why not just use Workbench?
There's no absolute reason not to use Workbench. Indeed it's probably got the lowest barrier to entry given that it's semi-battle-tested by WRI.
On the other hand, using Workbench restricts one's possibilities. It doesn't always stay up-to-date (as of writing this, the documentation it builds by default is still on version 11.0), and by using it we lose the edge our knowledge of Mathematica gives us.
In general, Mathematica will be the best tool for manipulating Mathematica documents, so my view is why not simply provide a Mathematica package to do it?
## DocGen
The package that I developed is called [DocGen](https://github.com/b3m2a1/mathematica-BTools/blob/master/Packages/Paclets/DocGen.m) (inspired by, but not nearly as clever as, [doxygen](http://www.stack.nl/~dimitri/doxygen/) ). It uses my larger BTools toolchain to provide extra linkages to the entire Mathematica ecosystem.
### Documentation Templates
The first place any documentation starts is as a documentation template. This is simply a notebook with cell-types attached that will be post-processed into a full documentation notebook. We can make a new one for a Symbol Page like:
(* to load DocGen *)
<<BTools`Paclets`
DocGen["SymbolPage", MyFunction, Method->"Template"]//CreateDocument;
(*Out:*)
![making-mathematica-documentation-with-mathematica-7939871188899330678](https://www.wolframcloud.com/objects/b3m2a1/home//img/making-mathematica-documentation-with-mathematica-7939871188899330678.png)
We can simply type in this notebook to add content. I'll fill this out, then generate it, and we'll see what happens:
![making-mathematica-documentation-with-mathematica-5746074635887181910](https://www.wolframcloud.com/objects/b3m2a1/home//img/making-mathematica-documentation-with-mathematica-5746074635887181910.png)
We can see that for the most part it looks the same, but now the notebook is formatted properly for use with the Documentation Center
A similar workflow is implemented for guides and tutorials
### Automatic Generation
This by itself doesn't give us much of a leg-up on Workbench. In fact, this template system may even be a little bit *worse* than Workbench's DocuTools (although significantly less bloated and quicker to use).
What does make this powerful is how it allows us to now *automatically* generate documentation, as all we need to do is extract parameters from a ```Symbol``` or context and feed them into this type of template.
The actual details behind this can be a bit gory, but you can read about them in [my post on StackExchange](https://mathematica.stackexchange.com/a/146671/38205) . In general we simply take the ```Symbol``` and extract the usage patterns to fill out the usage table, the calling patterns to fill out the details, make a few sample Examples based on the calling patterns, and provide See Also links based on camel-case similarity in the first hump.
This is what ```DocGen``` does by default, so for instance we can do:
DocGen@DocGen
![making-mathematica-documentation-with-mathematica-6455205443540996861](https://www.wolframcloud.com/objects/b3m2a1/home//img/making-mathematica-documentation-with-mathematica-6455205443540996861.png)
And we get a fully-functional documentation notebook automatically
We can take this further, though, and do the same for an entire set of *contexts* to link a package or multiple packages together:
DocGen@
{
"BTools`Paclets`","BTools`FrontEnd`",
"BTools`Web`", "BTools`External`",
"BTools`"
}
(*Out:*)
![making-mathematica-documentation-with-mathematica-5587525071093655299](https://www.wolframcloud.com/objects/b3m2a1/home/img/making-mathematica-documentation-with-mathematica-5587525071093655299.png)
This gives us a really powerful way to provide accessible documentation with a minimum of effort. In all, that makes it much more likely that the documentation will actually get made.
This also rewards good package design, as the better a package's definition patterns are, the clearer the built documentation will be.
# Distributing Documentation
Simply building the documentation isn't enough, though. Good documentation should serve as an advertisement for one's package. So the next thing to do is design a distribution system that allows us to publicize and distribute our documentation effectively. To do that we'll want to start with building some paclets for our docs.
## Documentation Paclets
### Paclets
I've talked about paclets before on a number of occasions, so I won't go into depth on them now, but if you want a refresher you can look [here](https://www.wolframcloud.com/objects/b3m2a1/home/building-a-mathematica-package-ecosystem-part-1.html#main-content) .
When we build our documentation paclets, we'll want them to have four properties:
* They are obviously documentation, not code
* They do not interfere or interact with the package they document in any way
* They are modularized as much as possible
* They can be easily updated and versioned
The last one is easy using the paclet manager and a paclet server. The second to last can be done easily if packages are well partitioned into subcontexts. The first and second, however, are a little bit trickier.
If we have a paclet called ```"MyPaclet"``` and we wanted to distribute its documentation separately, we couldn't simply call the documentation paclet ```"MyPaclet"``` as well. Instead, we'll follow suit with what WRI does for many of its subpaclets, such as ```"ServiceConnections"``` and curated data, and append a qualifier to the paclet name. So instead of ```"MyPaclet"``` we'll call it ```"Documentation_MyPaclet"``` to make it obvious where it comes from.
The issue with this is that it breaks our simple lookup procedure, but happily it's simple enough to fix this. In the ```"Documentation"``` extension to a ```Paclet``` expression we find the option ```"LinkBase"``` . This specifies what the lookup-root for things in the paclet should be. For instance if there is a page at ```"Guides/MyPaclet"``` in our ```"Documentation_MyPaclet"``` paclet, by using ```"MyPaclet"``` as the ```"LinkBase"``` this page will resolve to ```"MyPaclet/guide/MyPaclet"``` on lookup, and so the documentation will behave as expected.
Overall, then, we'll have our ```Paclet``` expression look like:
Paclet[
Name -> "Documentation_MyPaclet",
Version -> "1.0.0",
Extensions ->
{
{
"Documentation",
"Language" -> "English",
"LinkBase" -> "MyPaclet",
"MainPage" -> "Guides/MyPaclet"
}
}
]
This is something I go over [here](https://mathematica.stackexchange.com/a/169488/38205) as well.
### Paclet Server
With these documentation paclets in hand, we can go one step further and build a paclet server for our documentation (the paclet refresher linked above covers paclet servers, too). This will be an entirely generic paclet server, but it will serve as a way to easily share documentation in small chunks, just like curated data. I set up a server for all the documentation I've built [here](https://www.wolframcloud.com/objects/b3m2a1.docs/DocumentationServer/main.html) which looks like:
![making-mathematica-documentation-with-mathematica-2355226512675681026](https://www.wolframcloud.com/objects/b3m2a1/home/img/making-mathematica-documentation-with-mathematica-2355226512675681026.png)
People can then install pieces of documentation from there, like:
PacletInstall[
"Documentation_PacletManager",
"Site"->
"http://www.wolframcloud.com/objects/b3m2a1.docs/DocumentationServer"
]
(*Out:*)
![making-mathematica-documentation-with-mathematica-2019233375110478896](https://www.wolframcloud.com/objects/b3m2a1/home/img/making-mathematica-documentation-with-mathematica-2019233375110478896.png)
And they'll be immediately ready to use in Mathematica.
## Documentation Sites
One last thing to comment on is how we can take our documentation ```Notebooks``` and turn them into true HTML documentation which you can peruse on the web. This involves taking some pieces out of Workbench (which I've made accessible from a paclet server, so no worries if you don't want to download Workbench).
### HTML Documentation
Workbench provides some facilities for generating HTML documentation. These facilities are (as of when I wrote this package) limited to 11.0-style documentation, but that's more than good enough for most things.
The main thing I needed to do was apply a thorough cleaning to the documentation pages I generated to make sure the finicky package that actually generates the HTML (called ```Transmogrify``` which is in turn called by a higher-level package called ```DocumentationBuild``` ) won't hang when it reaches a directive it doesn't know how to process.
After that, the main issue was simply making sure all the appropriate resources are deployed, and then we're good to go.
I've built this into ```DocGen``` as a method. So if you want to build out HTML documentation for a paclet or set of paclets you can do it like:
DocGen["HTML", PacletFind["Documentation_BTools*"]]
(*Out:*)
{
{"~/Library/Mathematica/Applic"…"Gen/Web/BToolsWeb/guide/BToolsWeb.html",<<24>>,""…""},
{"~/Library/Mathematica/Applic"…"Gen/Web/BToolsWeb/guide/BToolsWeb.html",<<24>>,""…""},
{"~/Library/Mathematica/Appli"…"oolsExternal/guide/BToolsExternal.html",<<22>>,""…""},
{"~/Library/Mathematica/Appli"…"oolsFrontEnd/guide/BToolsFrontEnd.html",<<40>>,""…""},
{"~/Library/Mathematica/ApplicationData/DocGen/Web/BTools/guide/BTools.html"},
{"~/Library/Mathematica/Applicat"…"Web/BToolsPaclets/ref/AppAddContent.html"}
}
And this can be deployed to the web to use as documentation. If we want that we can simply run it with ```CloudDeploy->True``` and it will do so.
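Assuming the same paclets as above, that call looks like:

```mathematica
DocGen["HTML", PacletFind["Documentation_BTools*"], CloudDeploy -> True]
```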
Alternatively, ```DocGen``` also supports deploying the built HTML automatically, which we can see in the ```"Methods"``` :
DocGen["HTML", "Methods"]
(*Out:*)
{Automatic,"Deploy"}
The best way to do this is passing the directory with all the HTML:
DocGen["HTML", PacletFind["Documentation_BTools*"], Method->"Deploy"];
### Documentation Site
We can take this a step further, though, and build a wrapper around this type of functionality to get it to upload everything at once:
DocumentationSiteBuild["BuildOverview"->True, "AutoDeploy"->True];
This creates [a site](https://www.wolframcloud.com/objects/b3m2a1.docs/main.html) where we can browse all of the exposed documentation, much like the paclet server we had before:
![making-mathematica-documentation-with-mathematica-7555384944798244364](https://www.wolframcloud.com/objects/b3m2a1/home/img/making-mathematica-documentation-with-mathematica-7555384944798244364.png)
And with that, I think, we're done.b3m2a1 2018-03-22T03:42:50Z[✓] Eliminate the variable X# on all the equations?
http://community.wolfram.com/groups/-/m/t/1305642
I want to eliminate all the x variables in the system, but every time I try it does not work. This is my code:
equation1 = -318.1 x1 - 309.9 x2 + 993.1 x3 + 395.3 x4 - 599.2 x5 -
419.4 x6 + 676. x7 + 650.9 x8 + 512.9 x9 + 114.3 x10 - 491.7 x11 -
402.1 x12 + 111.4 x13 + 182.4 x14 - 593.9 x15 - 814.3 x16 +
91.7 x17 + 248.3 x18 - 421.3 x19 + 199.1 x20 - 711.4 x21 -
752.7 x22 == 249.7
equation2 = -281.9 x1 - 568.7 x2 + 206.9 x3 - 303.1 x4 + 674.3 x5 -
226.5 x6 + 507.4 x7 + 196.9 x8 + 764.6 x9 - 214.5 x10 -
587.5 x11 - 105.8 x12 - 869.8 x13 - 110.2 x14 + 431.9 x15 -
299.8 x16 + 979.4 x17 - 881.1 x18 - 16.6 x19 - 221.8 x20 -
33.5 x21 + 134.7 x22 == 637.8
equation3 = -577. x1 - 268.5 x2 - 692.2 x3 + 462.6 x4 + 81.4 x5 -
303.1 x6 + 736.6 x7 + 780.9 x8 + 560.8 x9 + 627.6 x10 +
698.8 x11 - 81.6 x12 + 436.9 x13 + 591.3 x14 + 651.7 x15 -
203.9 x16 + 449.1 x17 - 13.5 x18 + 964.6 x19 - 134.3 x20 +
692.6 x21 + 751.5 x22 == -462.
equation4 = -633.2 x1 - 500.6 x2 - 890.7 x3 + 10.4 x4 - 588. x5 +
144.1 x6 - 664.9 x7 + 777.8 x8 + 372. x9 + 684.1 x10 + 804.9 x11 -
215. x12 + 333.5 x13 - 860.5 x14 - 45.2 x15 - 10.3 x16 -
794.8 x17 + 992.3 x18 - 67.2 x19 - 200.6 x20 + 169.7 x21 +
70.6 x22 == 136.8
equation5 = -495.8 x1 + 823.8 x2 + 118.7 x3 - 219. x4 - 764. x5 +
785.8 x6 - 708.5 x7 + 224.4 x8 + 380.5 x9 - 517.9 x10 -
590.5 x11 - 967.3 x12 - 759. x13 - 981.3 x14 - 185.6 x15 -
703. x16 + 462.2 x17 + 793.8 x18 + 501.2 x19 + 11.8 x20 +
218.1 x21 - 727.9 x22 == -598.3
equation6 =
952.6 x1 - 555.6 x2 + 78.9 x3 + 601.1 x4 + 215.8 x5 - 255.1 x6 -
648.4 x7 + 576.4 x8 - 374.7 x9 - 253.8 x10 - 964.9 x11 +
576.1 x12 - 175.2 x13 - 71.6 x14 + 792. x15 + 118.1 x16 +
680.2 x17 - 632.4 x18 + 520.4 x19 - 789.7 x20 + 461.7 x21 -
141.9 x22 == 3.2
equation7 =
-717.9 x1 - 333.2 x2 - 977.4 x3 - 270.4 x4 + 610.1 x5 - 736.9 x6 -
5.5 x7 + 955.7 x8 - 530.1 x9 + 416.5 x10 - 427.6 x11 - 424.8 x12 -
423.5 x13 - 670.7 x14 - 364.6 x15 - 422.1 x16 + 277.3 x17 +
118.1 x18 - 203.6 x19 - 793.4 x20 - 520.7 x21 + 828. x22 == 229.4
equation8 = -570.9 x1 - 209.9 x2 + 946.4 x3 + 906.4 x4 - 927.9 x5 +
779.3 x6 + 105.1 x7 + 438.7 x8 - 512. x9 - 788.2 x10 - 820.1 x11 -
951.3 x12 - 144.4 x13 - 219.8 x14 - 976. x15 + 955.4 x16 +
808.8 x17 + 375.8 x18 - 94.8 x19 + 111.2 x20 + 447.2 x21 -
102.6 x22 == 184.7
equation9 =
765.3 x1 - 780.6 x2 - 695.2 x3 - 656.6 x4 + 35.1 x5 - 853.9 x6 +
62.6 x7 - 194.2 x8 - 928.4 x9 - 939.1 x10 + 963.2 x11 -
460.8 x12 - 185.4 x13 + 223.1 x14 - 310.2 x15 - 61.4 x16 -
150.7 x17 - 232. x18 - 4.3 x19 + 497.3 x20 - 60.7 x21 + 684.8 x22 ==
404.2
equation10 = -613.9 x1 + 606.2 x2 - 633.5 x3 + 578.5 x4 + 155.8 x5 +
397.8 x6 + 95.6 x7 - 327.6 x8 + 783.9 x9 - 677.1 x10 - 866.4 x11 +
839.8 x12 - 276.5 x13 + 457.1 x14 - 12.6 x15 + 765. x16 +
402.2 x17 - 539.8 x18 - 951.4 x19 + 741. x20 + 10.1 x21 -
21.3 x22 == 677.
equation11 = -974.4 x1 - 666.2 x2 + 706.6 x3 + 354.7 x4 - 416.6 x5 -
636.4 x6 - 443.9 x7 + 694.5 x8 + 450.9 x9 - 819.8 x10 -
891.7 x11 - 362.5 x12 + 874.9 x13 + 730.4 x14 + 516.2 x15 -
50.5 x16 + 559.3 x17 + 627.8 x18 + 8.9 x19 + 315.5 x20 +
70.5 x21 - 912.4 x22 == 617.1
equation12 = -908.1 x1 - 672. x2 + 731.1 x3 - 909.8 x4 - 413.6 x5 +
222.3 x6 + 272.4 x7 + 84.3 x8 + 137. x9 + 402.2 x10 + 812.6 x11 +
760.3 x12 + 785.5 x13 + 988.2 x14 + 31.1 x15 + 999. x16 -
544.8 x17 + 554.8 x18 + 647.5 x19 + 101.1 x20 - 969.4 x21 -
472.2 x22 == -106.5
equation13 =
395.5 x1 - 419.4 x2 - 278.8 x3 - 535.4 x4 + 211.9 x5 + 456.2 x6 -
637.8 x7 + 36.3 x8 - 542.7 x9 - 348.8 x10 + 113.8 x11 +
510.4 x12 - 701.2 x13 - 739.3 x14 + 192.8 x15 - 159.8 x16 +
452.8 x17 - 251.6 x18 + 945.3 x19 - 50.3 x20 + 339.5 x21 -
954.5 x22 == -459.5
equation14 = -233.4 x1 - 274.4 x2 + 784.3 x3 - 426.9 x4 + 441.4 x5 -
27. x6 - 552.6 x7 + 699.7 x8 + 869.7 x9 + 331.2 x10 - 646.9 x11 +
670.4 x12 + 993.3 x13 - 378.1 x14 - 648.3 x15 + 76. x16 -
364.8 x17 + 235.1 x18 - 337.3 x19 - 245.9 x20 - 21.4 x21 -
224.2 x22 == -448.5
equation15 =
608.8 x1 + 810.4 x2 + 943.2 x3 + 70.3 x4 + 113. x5 - 599. x6 -
726.2 x7 + 163.7 x8 - 303.6 x9 - 614.9 x10 + 459.5 x11 - 7.1 x12 +
131.3 x13 + 820.5 x14 + 556.5 x15 - 115.2 x16 - 404.8 x17 +
437.2 x18 - 34.1 x19 - 715.4 x20 - 187. x21 - 172. x22 == -222.6
equation16 =
601. x1 + 411.7 x2 - 606.8 x3 + 253.2 x4 - 463.5 x5 + 91.8 x6 +
294.6 x7 + 543.1 x8 - 466.2 x9 + 479.1 x10 + 428.4 x11 +
597.6 x12 - 631.5 x13 + 321.7 x14 - 868.7 x15 + 35.4 x16 +
727.9 x17 + 757.9 x18 + 300.3 x19 - 699.7 x20 + 381.2 x21 +
815.3 x22 == 38.4
equation17 =
463.2 x1 + 294. x2 - 28.1 x3 - 725.8 x4 - 735.1 x5 + 715.3 x6 -
80.1 x7 + 996.4 x8 - 814.8 x9 + 918.2 x10 + 703.6 x11 -
594.5 x12 - 892. x13 + 878.3 x14 + 373.9 x15 - 164.3 x16 -
598. x17 - 200.9 x18 - 527.9 x19 - 991.7 x20 + 762. x21 -
615.8 x22 == 986.9
equation18 =
753.2 x1 - 415.7 x2 - 327.8 x3 - 781.3 x4 - 391.6 x5 - 734.7 x6 +
473.1 x7 - 450.2 x8 + 343.4 x9 + 974.6 x10 + 92. x11 - 143.3 x12 +
272.2 x13 - 389.4 x14 + 189.6 x15 - 979.1 x16 + 196.3 x17 -
153.8 x18 + 913.7 x19 + 571.7 x20 - 420.1 x21 + 941.9 x22 == 189.5
equation19 =
84. x1 + 998.8 x2 + 96.7 x3 + 468.2 x4 - 711. x5 + 357.3 x6 -
501.4 x7 - 110.4 x8 - 478. x9 + 114.4 x10 + 69.3 x11 + 356.3 x12 +
9.5 x13 + 970.2 x14 + 425.5 x15 - 544.5 x16 - 844.8 x17 +
874.2 x18 - 102.2 x19 + 183.1 x20 + 588. x21 - 25.6 x22 == 936.1
equation20 = -675.2 x1 - 269.7 x2 - 829.9 x3 + 107.8 x4 - 908.1 x5 -
810.8 x6 - 664. x7 + 721.2 x8 - 173.7 x9 + 434.3 x10 + 518.8 x11 +
322.5 x12 + 719.3 x13 - 701.7 x14 + 29. x15 + 90.8 x16 -
47.7 x17 - 534.2 x18 + 419.6 x19 - 416.8 x20 + 303.3 x21 -
389.9 x22 == 378.8
equation21 = -351.7 x1 + 817.8 x2 - 373.1 x3 + 100.8 x4 - 605. x5 -
725.4 x6 - 162.2 x7 - 174.9 x8 - 191.2 x9 - 16. x10 - 796.3 x11 +
264.6 x12 + 839. x13 - 327.9 x14 + 130. x15 + 429.6 x16 +
108.2 x17 - 344.2 x18 + 374.3 x19 + 408.3 x20 - 268. x21 +
900.4 x22 == -444.8
equation22 =
345. x1 + 521.9 x2 + 746.8 x3 + 176.4 x4 + 960.2 x5 - 919.6 x6 -
311.7 x7 + 409.5 x8 - 809.9 x9 + 719.8 x10 - 55. x11 + 237.8 x12 -
168.2 x13 - 442.7 x14 - 677.7 x15 - 227.3 x16 - 296.4 x17 +
672.9 x18 + 574.1 x19 - 370.5 x20 + 413.9 x21 + 799. x22 == 352.
system = {equation1, equation2, equation3, equation4, equation5,
equation6, equation7, equation8, equation9, equation10, equation11,
equation11, equation12, equation13, equation14, equation15,
equation16, equation17, equation18, equation19, equation20,
equation21, equation22};
TableForm[system];
m = 22;
n = 22;
mX = Array[StringJoin["x", ToString[#]] &, {n}]
TableForm@mSystem
MatrixForm@mB
Solve[system]jose caballero2018-03-20T20:45:08ZCreate a table / an array with conditions?
http://community.wolfram.com/groups/-/m/t/1305658
Hello everyone,
I'm trying (and failing) to create an array with dimensions {i1,j1,k1,i2,j2,k2}, subject to the conditions i1 + j1 + k1 = n and i2 + j2 + k2 = n, where n is a fixed (given) integer.
Attached is what I've done so far.
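One common approach, a sketch assuming what's wanted is the list of index tuples and that each index runs from 0 to n, is to generate all triples summing to n and then pair them:

```mathematica
n = 4;
(* all {i, j, k} with 0 <= i, j, k <= n and i + j + k == n *)
triples = Select[Tuples[Range[0, n], 3], Total[#] == n &];
(* all {i1, j1, k1, i2, j2, k2} satisfying both constraints *)
sextuples = Flatten /@ Tuples[triples, 2];
Length[triples]  (* Binomial[n + 2, 2], i.e. 15 triples for n = 4 *)
```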
Thank you very much!Daniel Grun2018-03-21T01:00:31ZCheck if all roots of a polynomial have been found using Root?
http://community.wolfram.com/groups/-/m/t/1306312
Hi everyone, my name is Steven R., and this is my first time using Mathematica; I'm using it in school as of this semester. I'm trying to see if what I got answers question B. I'm not sure how to tell whether Mathematica recognizes these numbers as the roots of my polynomial function.![HW QUESTIONS][1] ![MY WORK PART 1][2] ![MY WORK PART 2][3]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot%2814%29.png&userId=1306197
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot%2815%29.png&userId=1306197
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot%2816%29.png&userId=1306197Steven Rodriguez2018-03-21T19:36:34Z[✓] Compare these to matrix operations?
http://community.wolfram.com/groups/-/m/t/1301384
Consider the following code:
Inverse[{{1,-1},{0,1}}]*{{3,1},{0,2}}^2*{{1,-1},{0,1}}
{{1,1},{0,1}}*{{3,1},{0,2}}^2*{{1,-1},{0,1}}
Hello, I'm a little curious why the two expressions above give me two different answers. (Notice: Inverse[{{1,-1},{0,1}}] = {{1,1},{0,1}}.)
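The distinction at play here is that `*` (Times) multiplies matrices entry by entry, while `.` (Dot) is the true matrix product. The same distinction exists in NumPy, shown here for illustration:

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[3, 1], [0, 2]])

elementwise = A * B   # entry-by-entry product, like Times in Mathematica
matrix_prod = A @ B   # matrix multiplication, like Dot in Mathematica

# The two results differ as soon as the matrices have off-diagonal entries.
```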
ThanksYujia Huang2018-03-13T08:32:02ZAvoid this issue with ToElementMesh in MMA 11.3?
http://community.wolfram.com/groups/-/m/t/1300139
This program works fine with Mathematica 11.2, but not with Mathematica 11.3. Why?
(*Equation reaction diffusion temporelle *)
ClearAll["Global`*"]
Needs["NDSolve`FEM`"]
parisradius = 0.3;
parisdiffusionvalue = 150;
carto = DiscretizeGraphics[
CountryData["France", {"Polygon", "Mercator"}] /.
Polygon[x : {{{_, _} ..} ..}] :> Polygon /@ x];
paris = First@
GeoGridPosition[
GeoPosition[
CityData[Entity["City", {"Paris", "IleDeFrance", "France"}]][
"Coordinates"]], "Mercator"];
bmesh = ToBoundaryMesh[carto, AccuracyGoal -> 1,
"IncludePoints" -> CirclePoints[paris, parisradius, 50]];
mesh = ToElementMesh[bmesh, MaxCellMeasure -> 0.05,
"MeshOrder" -> 0.1];
mesh["Wireframe"];
op = \!\(
\*SubscriptBox[\(\[PartialD]\), \(t\)]\(u[t, x, y]\)\) - \!\(
\*SubsuperscriptBox[\(\[Del]\), \({x, y}\), \(2\)]\(u[t, x, y]\)\) +
1.2*u[t, x, y] - 20;
usol = NDSolveValue[{op == 1,
DirichletCondition[u[t, x, y] == 0, Norm[{x, y} - paris] > .6],
DirichletCondition[u[t, x, y] == parisdiffusionvalue,
Norm[{x, y} - paris] < .6], u[0, x, y] == 100},
u, {t, 0, 20}, {x, y} \[Element] mesh]
frames = Table[
Plot3D[usol[t, x, y], {x, y} \[Element] usol["ElementMesh"],
PlotRange -> {0, 160}], {t, 0, 20, 2}];
ListAnimate[frames, SaveDefinitions -> True]
frames = Table[
ContourPlot[usol[t, x, y], {x, y} \[Element] usol["ElementMesh"],
PlotRange -> {0, 160}], {t, 0, 20, 2}];
ListAnimate[frames, SaveDefinitions -> True]
I receive this message:
ToElementMesh::femimq: The element mesh has insufficient quality of -359.763. A quality estimate below 0. may be caused by a wrong ordering of element incidents or self-intersecting elements.
NDSolveValue::femnr: {x,y}\[Element]ImproveBoundaryPosition[<<1>>,NumericalRegion[None,{{-4.79537,9.56036},{45.5126,59.623}}]] is not a valid region specification.
Thanks for your help
A. DauphineAndré Dauphiné2018-03-11T09:46:20ZGet custom import formats working with PacletManager?
http://community.wolfram.com/groups/-/m/t/1305770
Is there any chance of getting
System`ConvertersDump`$FormatsDirectory
hooked up to the PacletManager?
I'd like to auto-include my own formats via a Paclet. I'm picturing it working via a `"Formats"` argument in the `"Extensions"` list.
I think it'd be useful for the developers I know.b3m2a1 2018-03-21T02:40:56ZUse alternative methods to NIntegrate in order to compute this integration?
http://community.wolfram.com/groups/-/m/t/1306118
Are there any alternatives to the NIntegrate method attempted here? The following numerical integration appears to get stuck indefinitely (>> 12 hours) on a 32GB machine with Mathematica 11.2, with different (very low or high) values of MaxRecursion, etc. The integral is *not* divergent. I have tried other methods like DuffyCoordinates and LevinRule.
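High-dimensional integrals of this kind are often estimated by Monte Carlo sampling rather than adaptive quadrature; a minimal Python sketch of the estimator on a toy integrand (purely illustrative, not the integrand below):

```python
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]^dim."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples  # volume of [0, 1]^dim is 1

# Toy integrand: product of coordinates; exact value over [0,1]^4 is (1/2)^4.
est = mc_integrate(lambda x: x[0] * x[1] * x[2] * x[3], dim=4, n_samples=200_000)
```

The error shrinks like 1/sqrt(n_samples) regardless of dimension, which is why Monte Carlo (and Mathematica's Method -> "MonteCarlo"/"AdaptiveMonteCarlo") is the usual fallback in 12 dimensions.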
p1v = Table[p1[i], {i, 3}];
p3v = Table[p3[i], {i, 3}];
p4v = Table[p4[i], {i, 3}];
p5v = Table[p5[i], {i, 3}];
rv = {1, 1, 1};
$Assumptions = Element[p1v | p3v | p4v | p5v, Reals]
intvariables = Flatten[Join[{p1v}, {p3v}, {p4v}, {p5v}]]
intvariables1 = ({#1, -Infinity, Infinity} & ) /@ intvariables
jj1=((13/2)*(p1[3]*((-p3[2])*p4[1] + p3[1]*p4[2]) +
p1[2]*(p3[3]*p4[1] - p3[1]*p4[3]) + p1[1]*((-p3[3])*p4[2] + p3[2]*p4[3])) -
3000.*(((-p3[3])*p4[2] + p3[2]*p4[3])*p5[1] + (p3[3]*p4[1] - p3[1]*p4[3])*
p5[2] + p1[3]*(p3[2]*p5[1] - p3[1]*p5[2]) +
p1[3]*(p4[2]*p5[1] - p4[1]*p5[2]) + ((-p3[2])*p4[1] + p3[1]*p4[2])*p5[3] +
p1[2]*((-p3[3])*p5[1] + p3[1]*p5[3]) + p1[1]*(p3[3]*p5[2] - p3[2]*p5[3]) +
p1[2]*((-p4[3])*p5[1] + p4[1]*p5[3]) + p1[1]*(p4[3]*p5[2] - p4[2]*p5[3])))/
E^(I*(p1[1] + p1[2] + p1[3]))/((1 + p3[1]^2 + p3[2]^2 + p3[3]^2)^2*
((-p1[1] + p3[1])^2 + (-p1[2] + p3[2])^2 + (-p1[3] + p3[3])^2)*
(1 + p4[1]^2 + p4[2]^2 + p4[3]^2)^2*((p1[1] + p4[1])^2 + (p1[2] + p4[2])^2 +
(p1[3] + p4[3])^2)*(1 + p5[1]^2 + p5[2]^2 + p5[3]^2)^4)
Block[{p1v = 0, p3v = 0, p4v = 0, p5v = 0},
NIntegrate[jj1, Evaluate[Sequence @@ intvariables1],
Exclusions -> {{p3v == p1v}, {p4v == -p1v}},
Method -> "GaussKronrodRule",
MaxRecursion -> 5, WorkingPrecision -> 10, PrecisionGoal -> 6]]Arny Toynbee2018-03-21T15:36:10Z[✓] Manipulate with the plot of this function?
http://community.wolfram.com/groups/-/m/t/1305632
Hi,
I need to manipulate the function: x^2 + y^2 + cx^2y^2=1 to show how the graph changes as c ranges from 0 to 10.
Do I use Manipulate or ContourPlot? I'm really lost and really new to Mathematica.
I also need to implicitly differentiate the function to find y'.
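For the implicit-differentiation part, a quick by-hand derivation (standard calculus, treating $y$ as a function of $x$):

```latex
% Differentiate x^2 + y^2 + c x^2 y^2 = 1 implicitly with respect to x:
2x + 2y\,y' + c\,(2x\,y^2 + 2x^2 y\,y') = 0
\quad\Longrightarrow\quad
y' = -\frac{x\,(1 + c\,y^2)}{y\,(1 + c\,x^2)}
```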
Thank you!Kelly Regan2018-03-20T19:11:43ZAvailability of UnityLink?
http://community.wolfram.com/groups/-/m/t/1306103
After watching the presentation video [UnityLink: Wolfram Language & Unity Game Engine][1] I am really curious about when UnityLink becomes available. I am working on a project right now where I would really love to use this functionality for some nice visualizations. Can somebody provide me with more details on the status of UnityLink?
[1]: https://www.wolfram.com/broadcast/video.php?v=2048Markus Roellig2018-03-21T11:14:41ZGet a simplified result from PDE with NDSolve?
http://community.wolfram.com/groups/-/m/t/1305883
Hi, I have tried out this command to test the results:
op = I*Nest[op, \[CapitalPsi][r, \[Phi]], 3] == 2 \[CapitalPsi][r, \[Phi]]*r^2/Cos[\[Phi]]^5
sol = \[CapitalPsi][r, \[Phi]] /.
NDSolve[{op,
\[CapitalPsi][0, \[Phi]] == 1,
Derivative[1, 0][\[CapitalPsi]][0, \[Phi]] == 0,
Derivative[2, 0][\[CapitalPsi]][0, \[Phi]] == 10,
Derivative[3, 0][\[CapitalPsi]][0, \[Phi]] == 0},
\[CapitalPsi], {r, 0, 3}, {\[Phi], 0, 3},
MaxSteps -> Infinity, PrecisionGoal -> 1,
AccuracyGoal -> 1,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> 32, "MaxPoints" -> 32, "DifferenceOrder" -> 2},
Method -> {"Adams", "MaxDifferenceOrder" -> 1}}] //
Plot3D[sol, {r, 0, 3}, {\[Phi], 0, 3}, AxesLabel -> Automatic]
However, I get a very messy output, with no plot.
Is there something missing here?
Thanks!Ser Man2018-03-21T09:05:52ZCalculate distance between two points?
http://community.wolfram.com/groups/-/m/t/1304507
Hi there,
I wrote some code and defined a function to make it leaner; however, it seems that using the function makes it stop working.
![Mathematica code and output][1]
I computed the same equation in Excel and got the same answer as the final result of the Mathematica code.
![enter image description here][2]
Help would be much appreciated; I'm not sure what I'm doing wrong in the code. I suspect the use of Degree inside the function might be causing trouble. Also, I just realised the screenshot doesn't include the definitions of rroller and wgap, which are:
wgap = 0.0005 ;(*gap width in meters*)
rroller = 0.1; (*roller radius in meters*)
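One common gotcha worth checking (a guess, since the function itself is only in the screenshot): trig functions work in radians, so the Degree conversion must actually be applied to the argument. In Python, for illustration:

```python
import math

angle_deg = 30.0
wrong = math.cos(angle_deg)                # treats 30 as radians
right = math.cos(math.radians(angle_deg))  # converts degrees to radians first
```

In Mathematica the equivalent is writing Cos[30 Degree] rather than Cos[30].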
Cheers,
Eric
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture.PNG&userId=1304293
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture2.PNG&userId=1304293Eric Hoefgen2018-03-19T01:32:17ZPage 117-Mathematica Beyond Mathematics - Par. value in graph label and { }
http://community.wolfram.com/groups/-/m/t/1304845
On page 117 of Mathematica Beyond Mathematics it states: "You can also include a drop-down button. This will automatically be added when a parameter contains five or more choices, as shown in the next example. In this case we have the possibility to choose several filling alternatives for displaying a graph. We use the Plot option Filling. We also use PlotLabel to show the parameter value when labeling the graph. Notice that the combination of Plot options with Manipulate greatly increases the number of possible combinations for dynamic interactions."
Manipulate[
 Plot[Sin[n x], {x, 0, 2 \[Pi]},
  Filling \[RightArrow] choice, PlotRange \[RightArrow] 2,
  PlotLabel \[RightArrow]
   Style[StringForm["Sin(n t) representation for n= `1`",
     PaddedForm[n, {4, 2}]], 10]], {n, 1,
  20}, {{choice, None, "Choice"}, {None, Axis, Top, Bottom,
  Automatic, 1, 0.5, 0, -0.5, -1}},
 ContentSize \[RightArrow] {200, 150}]
I do not get PlotLabel changing the parameter in the graph label. In addition, I receive a list of 2 elements where the second element is the null set. See attachment.
Why does the parameter not change in the graph label? Also, why do I get a list that includes {Null}?
However, when I type the following:
Manipulate[
Plot[Sin[n x], {x, 0, 2 \[Pi]}, Filling -> choice, PlotRange -> 2,
PlotLabel -> Style[StringForm["Sin(n t) representation for n='1'",
PaddedForm[n, {4, 2}]], 10]],
{n, 1, 20},
{{choice, None, "Choice"},
{None, Axis, Top, Bottom, Automatic, 1, 0.5, 0, -0.5, -1}},
ContentSize -> {200, 150}]
, I do not getSamuel Kohn2018-03-20T02:25:04ZAvoid document problem while opening a notebook?
http://community.wolfram.com/groups/-/m/t/1305862
When I opened my last document, it came up showing code like the stuff below instead of my equations. How do I make it look normal again?
> %!PS-Adobe-3.0 %%Creator: Wolfram Mathematica 11.2.0.0 for Microsoft
> Windows (64-bit) (September 10, 2017) Student Edition - Personal Use
> Only %%CreationDate: Thu Mar 15 08:27:13 2018 %%Pages: 2
> %%DocumentData: Clean7Bit %%LanguageLevel: 3 %%DocumentMedia: Letter
> 612 792 0 () () %%BoundingBox: 54 32 554 761 %%EndComments
> %%BeginProlog /languagelevel where { pop languagelevel } { 1 } ifeHava M2018-03-21T01:12:00Z[✓] Import data from a TSV without degrees?
http://community.wolfram.com/groups/-/m/t/1305811
I'm a hobbyist and dabble in a few things. This is my first post.
I have a .TSV that I exported from DataDrop that has a timestamp and a temperature in Degrees Fahrenheit. I'm able to get it to import using a variety of methods. However, the result of hwtemp is showing as # \[Degree]F.
How can I just get the number?
hwd = Import[ "hw.tsv", {"Data", All, 1}];
hwtemp = Import["hw.tsv", {"Data", All, 2}];
date = DateObject[Rest[hwd]];
temperature = Rest[hwtemp] // Short
Any help or feedback is welcome.
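The underlying task, stripping a unit annotation like °F to recover the bare number, can be sketched in plain Python (the sample strings below are made up for illustration):

```python
import re

def strip_unit(s):
    """Extract the leading numeric value from a string like '72.5 °F'."""
    m = re.match(r"\s*([-+]?\d+(?:\.\d+)?)", s)
    if m is None:
        raise ValueError(f"no number found in {s!r}")
    return float(m.group(1))

readings = ["72.5 °F", "68 °F", "-3.25 °F"]
values = [strip_unit(r) for r in readings]
```

In Mathematica the analogous step is mapping QuantityMagnitude over the imported quantities.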
Thanks, MikeMichael Madsen2018-03-20T21:49:10ZOrder of the variables in a polynomial?
http://community.wolfram.com/groups/-/m/t/1304537
Hello! I have this code
K[Q_, n_Integer] :=
Module[{z, x},
SymmetricReduction[
SeriesCoefficient[
Product[ComposeSeries[Series[Q[z], {z, 0, n}],
Series[x[i] z, {z, 0, n}]], {i, 1, n}], n],
Table[x[i], {i, 1, n}], Table[Subscript[c, i], {i, 1, n}]][[1]]]
primeFactorForm[n_] :=
If[Length@# == 1, First@#, CenterDot @@ #] &[
Superscript @@@ FactorInteger[n]];
string = StringJoin[
Riffle[Table[poly = K[Sqrt[#]/Tanh[Sqrt[#]] &, i] /. c -> p;
gcd = GCD @@ List @@ poly /. Rational[n_, d_]*c_ :> d;
ToString[
Inactive[Set][Subscript[L, i],
1/primeFactorForm[gcd]*Plus @@ List @@ Distribute[gcd*poly] /.
Times[Rational[n_, d_], e__] :>
RowBox[{primeFactorForm[n]/primeFactorForm[d], e}] /.
x_ :> TraditionalForm@
DisplayForm@
RowBox[{1/Denominator[x], "(", Numerator[x], ")"}]],
TeXForm], {i, 3, 7}], "\\\\"]]
CopyToClipboard[string]
which gives an output like this:
$L_3=\frac{1}{3^3\cdot 5^1\cdot 7^1}(2 p_1^3-13 p_2 p_1+62 p_3)\\L_4=\frac{1}{3^3\cdot 5^2\cdot 7^1}(-p_1^4+\frac{-1^1\cdot 19^1}{3^1}p_2^2+\frac{-1^1\cdot 71^1}{3^1}p_1p_3+\frac{2^1\cdot 11^1}{3^1}p_1^2p_2+127 p_4)\\L_5=\frac{1}{3^4\cdot 5^1\cdot 11^1}(\frac{2^1}{3^1\cdot 7^1}p_1^5+\frac{2^1\cdot 73^1}{3^1}p_5+\frac{-1^1\cdot 83^1}{3^1\cdot 5^1\cdot 7^1}p_1^3p_2+\frac{-1^1\cdot 919^1}{3^1\cdot 5^1\cdot 7^1}p_1p_4+\frac{-1^1\cdot 2^4}{5^1}p_2p_3+\frac{79^1}{5^1\cdot 7^1}p_1^2p_3+\frac{127^1}{3^1\cdot 5^1\cdot 7^1}p_1p_2^2)\\L_6=\frac{1}{3^5\cdot 5^2\cdot 7^2\cdot 11^1\cdot 13^1}(\frac{-1^1\cdot 2^1\cdot 691^1}{3^1\cdot 5^1}p_1^6+\frac{-1^1\cdot 167^1\cdot 241^1}{3^1\cdot 5^1}p_3^2+\frac{2^1\cdot 23^1\cdot 89^1\cdot 691^1}{3^1\cdot 5^1}p_6+\frac{2^1\cdot 1453^1}{5^1}p_2^3+\frac{-1^1\cdot 33863^1}{3^1\cdot 5^1}p_1^3p_3+\frac{-1^1\cdot 159287^1}{3^1\cdot 5^1}p_2p_4+\frac{2^1\cdot 6421^1}{3^1\cdot 5^1}p_1^4p_2+\frac{-1^1\cdot 5527^1}{3^1}p_1^2p_2^2+\frac{-1^1\cdot 2^5\cdot 29^1\cdot 181^1}{5^1}p_1p_5+\frac{40841^1}{5^1}p_1^2p_4+\frac{83^1\cdot 349^1}{5^1}p_1p_2p_3)\\L_7=\frac{1}{3^2\cdot 5^1\cdot 7^1\cdot 13^1}(\frac{2^2\cdot 8191^1}{3^4\cdot 5^1\cdot 11^1}p_7+\frac{2^2}{3^4\cdot 5^1\cdot 11^1}p_1^7+\frac{-1^1\cdot 2^1\cdot 23^2}{3^5\cdot 11^1}p_2p_5+\frac{-1^1\cdot 2^1\cdot 113^1}{3^4\cdot 5^1\cdot 7^1}p_1^3p_4+\frac{2^4\cdot 277^1}{3^4\cdot 5^2\cdot 7^1}p_1^2p_5+\frac{-1^1\cdot 2^1\cdot 97^1\cdot 107^1}{3^4\cdot 5^2\cdot 7^1\cdot 11^1}p_3p_4+\frac{2^3\cdot 2087^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_2^2p_3+\frac{-1^1\cdot 2^1\cdot 2161^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_1^5p_2+\frac{-1^1\cdot 2^1\cdot 3989^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_1p_2^3+\frac{-1^1\cdot 2^1\cdot 305633^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_1p_6+\frac{2^2}{5^2\cdot 7^1}p_1^4p_3+\frac{2^3}{3^2\cdot 5^1\cdot 7^1}p_1^3p_2^2+\frac{22027^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_1p_3^2+\frac{-1^1\cdot 39341^1}{3^5\cdot 5^2\cdot 7^1\cdot 11^1}p_1^2p_2p_3+\frac{1399^1}{3^3\cdot 5^2\cdot 
11^1}p_1p_2p_4)$
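One way to impose a consistent monomial order on output like the above is graded lexicographic sorting: by total degree first, then by exponent vector. A rough Python sketch on a toy exponent-tuple representation (not the Mathematica internals):

```python
def graded_lex_sort(poly):
    """Sort monomials by total degree, then lexicographically by exponents.

    `poly` maps exponent tuples (e1, e2, ...) to coefficients.
    """
    return sorted(poly.items(),
                  key=lambda kv: (sum(kv[0]), tuple(-e for e in kv[0])))

# Toy polynomial in p1..p3: p3 + p1^2*p2 + p1
poly = {(0, 0, 1): 1, (2, 1, 0): 1, (1, 0, 0): 1}
ordered = graded_lex_sort(poly)
# p1 comes before p3 (both degree 1), then the degree-3 term p1^2 p2.
```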
Is there a way to organize the variables in a more intuitive way? For example in the $L_7$ the term $p_7$ comes before $p_1$ and stuff like this happens more often the more terms I have. In general the order is not consistent. I tried some basic grouping/factorizing commands, but they don't seem to work (or I don't place them in the right place). Can someone help me with this? Thank you!Silviu Udrescu2018-03-19T11:23:37ZImprove neural network performance with Mathematica 11.3 ?
http://community.wolfram.com/groups/-/m/t/1298554
I look at the blog post with the 11.3 word cloud with 'Blockchain' as the BIG center and ask how important that is, as I run the exact same data science GPU code on an identical hardware/software configuration except for the change from Mathematica 11.2 to Mathematica 11.3, and see my neural network performance go from 295 seconds on 11.2 to 2038 seconds on 11.3. Again, NO change other than the Mathematica version. And then I see that Mathematica 11.3 still does not support the current Xcode LLVM/GCC compiler or NVIDIA CUDA tools (watch it move back to the old paclet for Mathematica 10.5 after you upgrade your Xcode command line tools to the current version; am I expected to pay money to figure this out?).
This is my experience as I explore the value of Mathematica for data science from release 10 to today, while at the same time seeing the massive improvements in quality of Python, Jupyter, NVIDIA, iOS CoreML, Vulkan, TensorFlow, and core GPU computing on macOS, iOS, and Linux.
Really questioning the value proposition of Wolfram for data science going forward. Sad.David Proffer2018-03-10T04:14:59ZPDE solving - which solver is best?
http://community.wolfram.com/groups/-/m/t/1305268
![enter image description here][1]
Hello, I am trying to find the best solver for this problem described above. The equation is given in Mathematica commands:
K1 = i;
V = Tan[x];
P = r^2/(Cos[y])^5
eq = K1*r^3*D[u[r, y], {r, 3}] + r^2*D[u[r, y], {r, 2}] + r*K1*D[V^2*u[r, y], {r, 1}] + V*r*K1*D[u[r, y], {r, 1}] + V*r*K1*D[V*u[r,y], {r,1}]+ V^2*r*K1*D[u[r, y], {r, 1}] +
V^3*K1*D[u[r, y], {r, 1}] == 2*u[r,y]*P
and the I.C.s are set to:
u(r,0)=1
u(0,y)=1
u'(0,y)=0
u''(0,y)=10
Which solver and method and form of input is best?
Thanks
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8519Untitled.jpg&userId=967554Ser Man2018-03-20T14:09:30ZBasketball Boids
http://community.wolfram.com/groups/-/m/t/1305281
Since it is basketball season (right now we are in the middle of March Madness) I thought it'd be fun to make a basketball simulation. Basketball Boids is motivated by the boids model of bird flocks, with a term for separation from teammates, a term for cover/separation from the other team (sign depends on offense or defense), and a term for attraction towards the basket. You can try your own parameters, and you can try other models. Ideally, there would be so-called emergent properties like teamwork and creativity, as in the bird flocks.
![basketball boids][1]
Blue is offense and red is defense.
m = Through[{m1, m2, m3, m4, m5}[t]];(*{m1[t],m2[t],m3[t],m4[t],m5[t]}*)
p = Through[{p1, p2, p3, p4, p5}[t]];(*{p1[t],p2[t],p3[t],p4[t],p5[t]}*)
Those are the variables and this solves the differential equation, with the boid parameters in the code. The separation and cover terms are inverse square forces, but the hoop term is just radial.
dm[n_] := separation* Sum[(m[[n]] - m[[i]])/((m[[n]] - m[[i]]).(m[[n]] - m[[i]]))^(3/2), {i, 5}] + cover*Sum[(m[[n]] -
p[[i]])/((m[[n]] - p[[i]]).(m[[n]] - p[[i]]))^(3/2), {i, 5}] + hoop*(-m[[n]]/(m[[n]].m[[n]])^(1/2))
dp[n_] := separation*Sum[(p[[n]] - p[[i]])/((p[[n]] - p[[i]]).(p[[n]] - p[[i]]))^(3/2), {i, 5}] - cover*Sum[(p[[n]] -
m[[i]])/((p[[n]] - m[[i]]).(p[[n]] - m[[i]]))^(3/2), {i, 5}] + hoop*(-p[[n]]/(p[[n]].p[[n]])^(1/2))
params={separation -> 1, cover -> 10, hoop -> 100};
sol = Quiet[NDSolve[Evaluate[N@Flatten[Table[{D[m[[i]], t, t] == dm[i],
D[p[[i]], t, t] == dp[i], (D[m[[i]], t] /. t -> 0) == {0, 0}, (D[p[[i]], t] /. t -> 0) == {0, 0}, (m[[i]] /. t -> 0) == RandomReal[1, 2], (p[[i]] /. t -> 0) == RandomReal[1, 2]}, {i, 5}]]/. params],Join[m, p], {t, 0, 1}][[1]]];
ParametricPlot[Evaluate[Join[m, p] /. sol], {t, 0, 1}, PlotStyle -> Join[Table[Blue, 5], Table[Red, 5]]]
![parametric plot of a solution][2]
The animation above was made with
Animate[ListPlot[Join[Style[#, Blue] & /@ m, Style[#, Red] & /@ p] /. sol /. t -> s,
Axes -> False, Prolog -> {Brown, Disk[{0, 0}, .2]}, AspectRatio -> 1, PlotRange -> 4 {{-1, 1}, {-1, 1}}], {s, 0, 1, .01}]
(I remember talking several years ago to a student at the [Wolfram High School Summer Program][3] about Boids, which is why I was thinking about them.)
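The same force model can be integrated outside NDSolve with a naive forward-Euler stepper; a Python sketch for illustration (offense-side forces only, since the defense flips the sign of the cover term in the original model; the parameter values mirror `params` above):

```python
import math
import random

SEP, COVER, HOOP = 1.0, 10.0, 100.0  # separation, cover, hoop, as in params

def accel(i, team, other):
    """Acceleration on player i of `team`: separation + cover + hoop pull."""
    x, y = team[i]
    ax = ay = 0.0
    for j, (ox, oy) in enumerate(team):
        if j != i:
            d3 = max(math.hypot(x - ox, y - oy) ** 3, 1e-9)
            ax += SEP * (x - ox) / d3      # inverse-square repulsion
            ay += SEP * (y - oy) / d3
    for ox, oy in other:
        d3 = max(math.hypot(x - ox, y - oy) ** 3, 1e-9)
        ax += COVER * (x - ox) / d3
        ay += COVER * (y - oy) / d3
    r = max(math.hypot(x, y), 1e-9)
    ax -= HOOP * x / r                     # radial pull toward hoop at origin
    ay -= HOOP * y / r
    return ax, ay

def step(team, other, vel, dt=0.001):
    """One forward-Euler step for every player on `team` (in place)."""
    acc = [accel(i, team, other) for i in range(len(team))]
    for i, (ax, ay) in enumerate(acc):
        vx, vy = vel[i]
        vel[i] = (vx + ax * dt, vy + ay * dt)
        team[i] = (team[i][0] + vel[i][0] * dt, team[i][1] + vel[i][1] * dt)

random.seed(7)
blue = [(random.uniform(0.2, 1.0), random.uniform(0.2, 1.0)) for _ in range(5)]
red = [(random.uniform(0.2, 1.0), random.uniform(0.2, 1.0)) for _ in range(5)]
vblue = [(0.0, 0.0)] * 5
vred = [(0.0, 0.0)] * 5
for _ in range(200):
    step(blue, red, vblue)
    step(red, blue, vred)
```

Forward Euler is far cruder than NDSolve's adaptive integrators, but it makes the structure of the three force terms explicit.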
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=basketball-boids.gif&userId=23275
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=basketball-boids.png&userId=23275
[3]: https://education.wolfram.com/summer/camp/Todd Rowland2018-03-20T14:23:53ZGet the right data type from MatrixExp?
http://community.wolfram.com/groups/-/m/t/1305503
Good morning,
I am using MatrixExp[] inside a Compiled function and, at a certain point, I assign to a square matrix the output of MatrixExp. I am getting an error complaining that MatrixExp is of type {_Real,0} and cannot be assigned to a variable of type {_Real,2}. The code is quite straightforward... The problem occurs with the instruction "M = M.Mexp". If I change that instruction to "M = M + Mexp", the problem goes away. Is M.Mexp not of type {_Real,2}? Any thoughts?
par = {5.7, 1.038, 1.016, 1.018, 1.015, 0.186, 0.003, 0.009, 0.004,
0.0003, 0.00004, 0.071};
dim = 69;
P = Compile[{{par, _Real, 1}},
Module[{i, L1, L2, L3, L4, L, q1, q2, q3, q4, gama, alfa, beta,
A},
L1 = par[[1]]; L2 = par[[2]]; L3 = par[[3]]; L4 = par[[4]];
L = par[[5]];
q1 = par[[6]]; q2 = par[[7]]; q3 = par[[8]]; q4 = par[[9]];
gama = par[[10]];
alfa = par[[11]]; beta = par[[12]];
A = ConstantArray[0.0, {dim, dim}];
A[[1, 1]] = -(L1 + q1); A[[2, 2]] = -(L2 + q2);
A[[3, 3]] = -(L3 + q3); A[[4, 4]] = -(L4 + q4);
A[[1, 2]] = L1; A[[2, 3]] = L2; A[[3, 4]] = L3; A[[4, 5]] = L4;
For[i = 5, i <= dim - 1, i++, A[[i, i + 1]] = L];
For[i = 5, i <= dim - 1, i++,
A[[i, i]] = -(L + gama + alfa*Exp[beta*i])];
A[[dim, dim]] = -(gama + alfa*Exp[beta*dim]);
Return[A]
]
];
OBJ2 = Compile[{{par, _Real, 1}},
Module[{M, Mexp, s, i},
Mexp = MatrixExp[P[par]];
M = ConstantArray[0., {dim, dim}];
For[i = 1, i <= dim, i++, M[[i, i]] = 1.;];
vec = ConstantArray[0., {dim + 1}];
For[i = 1, i <= dim, i++,
M = M.Mexp; vec[[i]] = Total[M[[1, All]]]];
Return[
Total[Table[((Q[[i]] - (1 - vec[[i + 1]]/vec[[i]]))^2)*
S[[i]]^2, {i, 1, dim}]]]
]
];João Janela2018-03-20T14:19:50ZResize a large number of images to a required size?
http://community.wolfram.com/groups/-/m/t/1305226
Greetings,
I am trying to implement the VGG19 neural network. I have downloaded the imagenet dataset and am testing a few classes.
I have figured out that NetEncoder can be used to resize images for training, but am stuck on resizing the images for testing.
I am using the following line of code to import the images into testData variable.
testData = Import["C:\\Users\\cnn\\Desktop\\images\\test", "*.jpg"];
Now my problem is that the images in the folder are of various sizes and I want to resize all of them to 224 x 224 so that I can use them to test my network.
I have tried using
testData2=ImageResize[testData,{224,224}];
but it has resulted in an error.
> **EDIT :** **Forgot to mention that there are two sub folders in the path. Problem has been resolved, Thank you.**Ashish Sharma2018-03-20T11:50:56Z[✓] Use NDSolve for a third-order differential PDE?
http://community.wolfram.com/groups/-/m/t/1305242
Hi, I have prepared the following NDSolve script for a third-order PDE:
K1 = i;
V = Tan[x];
P = r^2/(Cos[y])^5
eq = K1*r^3*D[u[r, y], {r, 3}] + r^2*D[u[r, y], {r, 2}] + r*K1*D[V^2*u[r, y], {r, 1}] + V*r*K1*D[u[r, y], {r, 1}] + V*r*K1*D[V*u[r,y], {r,1}]+ V^2*r*K1*D[u[r, y], {r, 1}] +
V^3*K1*D[u[r, y], {r, 1}] == 2*u[r,y]*P
sol = u[r, y] /.
NDSolve[{eq,
u[0, y] == 1,
Derivative[1, 0][u][0, y] == 0,
Derivative[2, 0][u][0, y] == 10,
u[r, 0] == Cos[x],
Derivative[0, 1][u][r, 0] == 0},
u, {r, 0, 3}, {y, 0, 3},
MaxSteps -> Infinity, PrecisionGoal -> 1,
AccuracyGoal -> 1,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> 32, "MaxPoints" -> 32, "DifferenceOrder" -> 2},
Method -> {"Adams", "MaxDifferenceOrder" -> 1}}] // Quiet
Plot3D[sol, {r, 0, 3}, {y, 0, 3}, AxesLabel -> Automatic]
However, I get the error:
> NDSolve::dsvar: 0.00021449999999999998` cannot be used as a variable.
>
> ReplaceAll::reps: {NDSolve[{0.0002145 i Tan[x]
> (u^(1,0))[0.0002145,0.0002145]+0.0006435 i Tan[<<1>>]^2
> (u^(1,0))[0.0002145,0.0002145]+i Tan[<<1>>]^3
> (u^(1,0))[0.0002145,0.0002145]+4.60102*10^-8
> (u^(2,0))[0.0002145,0.0002145]+9.8692*10^-12 i
> (u^(3,0))[0.0002145,0.0002145]==9.20205*10^-8
> u[0.0002145,0.0002145],<<4>>,(u^(0,1))[0.0002145,0]==0},u,<<4>>,AccuracyGoal->1,Method->{<<1>>}]} is neither a list of replacement rules nor a valid dispatch table, and
> so cannot be used for replacing.
What is the error really?
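For background, the "MethodOfLines" setting asks NDSolve to discretize the spatial direction and integrate the resulting ODE system. The same idea can be sketched by hand in Python on the 1-D heat equation (a toy stand-in, not the equation above):

```python
import math

# Method of lines: discretize u_t = u_xx on [0, 1] with central differences
# in x, then step the resulting ODE system with forward Euler in t.
nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx  # explicit-Euler stability requires dt <= dx^2 / 2
u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial condition

for _ in range(500):
    un = u[:]
    for i in range(1, nx - 1):
        un[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
    u = un  # endpoints never updated: homogeneous Dirichlet boundaries

# The sine profile decays roughly like exp(-pi^2 t), staying a sine in shape.
```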
Thanks!Ser Man2018-03-20T12:24:20ZSetup Workbench with Eclipse Neon3, current Java IDE, distribution?
http://community.wolfram.com/groups/-/m/t/1105418
Hi,
I am trying to set up Workbench with the Eclipse Neon 3 (current Java IDE) distribution. It runs into the certificate issue (as expected), but after clicking OK it does not finish the setup; at least, after a restart there is no sign of it.
![enter image description here][1]
![enter image description here][2]
Any ideas, anyone?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Unbenannt.JPG&userId=196304
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9228Unbenannt.JPG&userId=196304Erik Itter2017-05-24T10:29:12Z[✓] Get a clean matrix using this code?
http://community.wolfram.com/groups/-/m/t/1304776
I have a question about Wolfram Mathematica: I want to take these 22 equations given to me and transform them into a 22x22 matrix.
This is my code.
equation1 = -318.1 x1 - 309.9 x2 + 993.1 x3 + 395.3 x4 - 599.2 x5 -
419.4 x6 + 676. x7 + 650.9 x8 + 512.9 x9 + 114.3 x10 - 491.7 x11 -
402.1 x12 + 111.4 x13 + 182.4 x14 - 593.9 x15 - 814.3 x16 +
91.7 x17 + 248.3 x18 - 421.3 x19 + 199.1 x20 - 711.4 x21 -
752.7 x22 == 249.7
equation2 = -281.9 x1 - 568.7 x2 + 206.9 x3 - 303.1 x4 + 674.3 x5 -
226.5 x6 + 507.4 x7 + 196.9 x8 + 764.6 x9 - 214.5 x10 -
587.5 x11 - 105.8 x12 - 869.8 x13 - 110.2 x14 + 431.9 x15 -
299.8 x16 + 979.4 x17 - 881.1 x18 - 16.6 x19 - 221.8 x20 -
33.5 x21 + 134.7 x22 == 637.8
equation3 = -577. x1 - 268.5 x2 - 692.2 x3 + 462.6 x4 + 81.4 x5 -
303.1 x6 + 736.6 x7 + 780.9 x8 + 560.8 x9 + 627.6 x10 +
698.8 x11 - 81.6 x12 + 436.9 x13 + 591.3 x14 + 651.7 x15 -
203.9 x16 + 449.1 x17 - 13.5 x18 + 964.6 x19 - 134.3 x20 +
692.6 x21 + 751.5 x22 == -462.
equation4 = -633.2 x1 - 500.6 x2 - 890.7 x3 + 10.4 x4 - 588. x5 +
144.1 x6 - 664.9 x7 + 777.8 x8 + 372. x9 + 684.1 x10 + 804.9 x11 -
215. x12 + 333.5 x13 - 860.5 x14 - 45.2 x15 - 10.3 x16 -
794.8 x17 + 992.3 x18 - 67.2 x19 - 200.6 x20 + 169.7 x21 +
70.6 x22 == 136.8
equation5 = -495.8 x1 + 823.8 x2 + 118.7 x3 - 219. x4 - 764. x5 +
785.8 x6 - 708.5 x7 + 224.4 x8 + 380.5 x9 - 517.9 x10 -
590.5 x11 - 967.3 x12 - 759. x13 - 981.3 x14 - 185.6 x15 -
703. x16 + 462.2 x17 + 793.8 x18 + 501.2 x19 + 11.8 x20 +
218.1 x21 - 727.9 x22 == -598.3
equation6 =
952.6 x1 - 555.6 x2 + 78.9 x3 + 601.1 x4 + 215.8 x5 - 255.1 x6 -
648.4 x7 + 576.4 x8 - 374.7 x9 - 253.8 x10 - 964.9 x11 +
576.1 x12 - 175.2 x13 - 71.6 x14 + 792. x15 + 118.1 x16 +
680.2 x17 - 632.4 x18 + 520.4 x19 - 789.7 x20 + 461.7 x21 -
141.9 x22 == 3.2
equation7 =
-717.9 x1 - 333.2 x2 - 977.4 x3 - 270.4 x4 + 610.1 x5 - 736.9 x6 -
5.5 x7 + 955.7 x8 - 530.1 x9 + 416.5 x10 - 427.6 x11 - 424.8 x12 -
423.5 x13 - 670.7 x14 - 364.6 x15 - 422.1 x16 + 277.3 x17 +
118.1 x18 - 203.6 x19 - 793.4 x20 - 520.7 x21 + 828. x22 == 229.4
equation8 = -570.9 x1 - 209.9 x2 + 946.4 x3 + 906.4 x4 - 927.9 x5 +
779.3 x6 + 105.1 x7 + 438.7 x8 - 512. x9 - 788.2 x10 - 820.1 x11 -
951.3 x12 - 144.4 x13 - 219.8 x14 - 976. x15 + 955.4 x16 +
808.8 x17 + 375.8 x18 - 94.8 x19 + 111.2 x20 + 447.2 x21 -
102.6 x22 == 184.7
equation9 =765.3 x1 - 780.6 x2 - 695.2 x3 - 656.6 x4 + 35.1 x5 - 853.9 x6 +
62.6 x7 - 194.2 x8 - 928.4 x9 - 939.1 x10 + 963.2 x11 -
460.8 x12 - 185.4 x13 + 223.1 x14 - 310.2 x15 - 61.4 x16 -
150.7 x17 - 232. x18 - 4.3 x19 + 497.3 x20 - 60.7 x21 + 684.8 x22 ==
404.2
equation10 = -613.9 x1 + 606.2 x2 - 633.5 x3 + 578.5 x4 + 155.8 x5 +
397.8 x6 + 95.6 x7 - 327.6 x8 + 783.9 x9 - 677.1 x10 - 866.4 x11 +
839.8 x12 - 276.5 x13 + 457.1 x14 - 12.6 x15 + 765. x16 +
402.2 x17 - 539.8 x18 - 951.4 x19 + 741. x20 + 10.1 x21 -
21.3 x22 == 677.
equation11 = -974.4 x1 - 666.2 x2 + 706.6 x3 + 354.7 x4 - 416.6 x5 -
636.4 x6 - 443.9 x7 + 694.5 x8 + 450.9 x9 - 819.8 x10 -
891.7 x11 - 362.5 x12 + 874.9 x13 + 730.4 x14 + 516.2 x15 -
50.5 x16 + 559.3 x17 + 627.8 x18 + 8.9 x19 + 315.5 x20 +
70.5 x21 - 912.4 x22 == 617.1
equation12 = -908.1 x1 - 672. x2 + 731.1 x3 - 909.8 x4 - 413.6 x5 +
222.3 x6 + 272.4 x7 + 84.3 x8 + 137. x9 + 402.2 x10 + 812.6 x11 +
760.3 x12 + 785.5 x13 + 988.2 x14 + 31.1 x15 + 999. x16 -
544.8 x17 + 554.8 x18 + 647.5 x19 + 101.1 x20 - 969.4 x21 -
472.2 x22 == -106.5
equation13 =
395.5 x1 - 419.4 x2 - 278.8 x3 - 535.4 x4 + 211.9 x5 + 456.2 x6 -
637.8 x7 + 36.3 x8 - 542.7 x9 - 348.8 x10 + 113.8 x11 +
510.4 x12 - 701.2 x13 - 739.3 x14 + 192.8 x15 - 159.8 x16 +
452.8 x17 - 251.6 x18 + 945.3 x19 - 50.3 x20 + 339.5 x21 -
954.5 x22 == -459.5
equation14 = -233.4 x1 - 274.4 x2 + 784.3 x3 - 426.9 x4 + 441.4 x5 -
27. x6 - 552.6 x7 + 699.7 x8 + 869.7 x9 + 331.2 x10 - 646.9 x11 +
670.4 x12 + 993.3 x13 - 378.1 x14 - 648.3 x15 + 76. x16 -
364.8 x17 + 235.1 x18 - 337.3 x19 - 245.9 x20 - 21.4 x21 -
224.2 x22 == -448.5
equation15 =
608.8 x1 + 810.4 x2 + 943.2 x3 + 70.3 x4 + 113. x5 - 599. x6 -
726.2 x7 + 163.7 x8 - 303.6 x9 - 614.9 x10 + 459.5 x11 - 7.1 x12 +
131.3 x13 + 820.5 x14 + 556.5 x15 - 115.2 x16 - 404.8 x17 +
437.2 x18 - 34.1 x19 - 715.4 x20 - 187. x21 - 172. x22 == -222.6
equation16 =
601. x1 + 411.7 x2 - 606.8 x3 + 253.2 x4 - 463.5 x5 + 91.8 x6 +
294.6 x7 + 543.1 x8 - 466.2 x9 + 479.1 x10 + 428.4 x11 +
597.6 x12 - 631.5 x13 + 321.7 x14 - 868.7 x15 + 35.4 x16 +
727.9 x17 + 757.9 x18 + 300.3 x19 - 699.7 x20 + 381.2 x21 +
815.3 x22 == 38.4
equation17 =
463.2 x1 + 294. x2 - 28.1 x3 - 725.8 x4 - 735.1 x5 + 715.3 x6 -
80.1 x7 + 996.4 x8 - 814.8 x9 + 918.2 x10 + 703.6 x11 -
594.5 x12 - 892. x13 + 878.3 x14 + 373.9 x15 - 164.3 x16 -
598. x17 - 200.9 x18 - 527.9 x19 - 991.7 x20 + 762. x21 -
615.8 x22 == 986.9
equation18 =
753.2 x1 - 415.7 x2 - 327.8 x3 - 781.3 x4 - 391.6 x5 - 734.7 x6 +
473.1 x7 - 450.2 x8 + 343.4 x9 + 974.6 x10 + 92. x11 - 143.3 x12 +
272.2 x13 - 389.4 x14 + 189.6 x15 - 979.1 x16 + 196.3 x17 -
153.8 x18 + 913.7 x19 + 571.7 x20 - 420.1 x21 + 941.9 x22 == 189.5
equation19 =
84. x1 + 998.8 x2 + 96.7 x3 + 468.2 x4 - 711. x5 + 357.3 x6 -
501.4 x7 - 110.4 x8 - 478. x9 + 114.4 x10 + 69.3 x11 + 356.3 x12 +
9.5 x13 + 970.2 x14 + 425.5 x15 - 544.5 x16 - 844.8 x17 +
874.2 x18 - 102.2 x19 + 183.1 x20 + 588. x21 - 25.6 x22 == 936.1
equation20 = -675.2 x1 - 269.7 x2 - 829.9 x3 + 107.8 x4 - 908.1 x5 -
810.8 x6 - 664. x7 + 721.2 x8 - 173.7 x9 + 434.3 x10 + 518.8 x11 +
322.5 x12 + 719.3 x13 - 701.7 x14 + 29. x15 + 90.8 x16 -
47.7 x17 - 534.2 x18 + 419.6 x19 - 416.8 x20 + 303.3 x21 -
389.9 x22 == 378.8
equation21 = -351.7 x1 + 817.8 x2 - 373.1 x3 + 100.8 x4 - 605. x5 -
725.4 x6 - 162.2 x7 - 174.9 x8 - 191.2 x9 - 16. x10 - 796.3 x11 +
264.6 x12 + 839. x13 - 327.9 x14 + 130. x15 + 429.6 x16 +
108.2 x17 - 344.2 x18 + 374.3 x19 + 408.3 x20 - 268. x21 +
900.4 x22 == -444.8
equation22 =
345. x1 + 521.9 x2 + 746.8 x3 + 176.4 x4 + 960.2 x5 - 919.6 x6 -
311.7 x7 + 409.5 x8 - 809.9 x9 + 719.8 x10 - 55. x11 + 237.8 x12 -
168.2 x13 - 442.7 x14 - 677.7 x15 - 227.3 x16 - 296.4 x17 +
672.9 x18 + 574.1 x19 - 370.5 x20 + 413.9 x21 + 799. x22 == 352.
system = {equation1, equation2, equation3, equation4, equation5,
equation6, equation7, equation8, equation9, equation10, equation11,
equation12, equation13, equation14, equation15,
equation16, equation17, equation18, equation19, equation20,
equation21, equation22}
MatrixForm[system]
When I run the program I don't get a clean matrix format, and I don't know why. Please, can somebody help me with this?jose caballero2018-03-19T20:42:01Z
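For what it's worth, the underlying workflow (assemble a coefficient matrix and right-hand side, then solve) looks like this in NumPy; a 3×3 toy system stands in for the 22×22 one (illustration only, not the poster's data):

```python
import numpy as np

# Coefficient matrix A and right-hand side b: the same data that
# Mathematica's CoefficientArrays / LinearSolve would extract and consume.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [4.0, -2.0, 1.0]])
b = np.array([3.0, 13.0, 1.0])

x = np.linalg.solve(A, b)  # solves A @ x == b

# Verify the solution: the residual should be numerically zero.
residual = np.abs(A @ x - b).max()
```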