Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Staff Picks sorted by active
UNET image segmentation in stem cells research
http://community.wolfram.com/groups/-/m/t/1341081
For my research project I encountered a thorny problem. Before I describe it, I would like to briefly mention something about the project itself. Basically, I am using embryonic stem cells that self-organize into spheroids (balls of cells) to study gastrulation events. To avoid bogging readers down with technical jargon: “gastrulation” is the process in which stem cells begin to form distinct layers; each layer then goes on to form the various tissues and organs, in the process unraveling the developmental plan of the entire organism. I am using experimental techniques and quantitative principles from biophysics and engineering to understand some aspects of this crucial process.
Now, coming back to the problem at hand: the gastruloids (image below) are quite rough in appearance and not as beautiful as one would like them to be (only a mother could love such an image). Any means of quantifying these gastruloids requires me to first segment them. When you watch time-lapse images of gastruloids, it becomes apparent that they shed a lot of cells (for reasons I do not know yet). This adds considerable noise to the system, oftentimes to the point that – as a human – my eyes are fooled and struggle to find the right contours for the spheroids. Here comes the disclosure: classical image-processing operations (gradients and edge detection, filtering, morphological operations, etc.) prove utterly futile for image segmentation in my case.
![enter image description here][1]
(A gastruloid – virtually a ball of cells with many shed around the periphery)
So what can you do to address a problem where even the best image-processing tool in existence – the human eye – fails? This is precisely where you enlist neural networks. Neural networks have been selling like hotcakes in recent years, adding life and hope to the once-dormant field of artificial intelligence. Again, to avoid the underlying technical details: a neural network is a paradigm by which a computer mimics the working of a human brain, taking into account the complex interactions between cells – but only digitally. There are many flavours of neural networks out there, each geared towards a specific task. With the advances made in deep learning and artificial intelligence, neural nets have started to surpass humans at tasks humans were long thought to be best at, i.e. classification tasks. A few recent examples that come to mind include Google’s AlphaGo beating the former world Go champion and an AI diagnosing skin cancer with unprecedented accuracy.
I utilized one such flavour of neural network (a deep convolutional network termed UNET) to solve my longstanding problem. I constructed the network in the Wolfram Language with external help from Alexey Golyshev. UNET is a deep convolutional network that has a series of convolution and pooling operations in the contraction phase of the net (wherein the features are extracted), and a sequence of deconvolution and convolution operations in the expansion phase, which then yields the network's output. This output can be subjected to a threshold to ultimately generate a binarized mask (the image segmentation).
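The final thresholding step can be illustrated with a tiny sketch (in Python rather than the Wolfram Language, and with made-up activation values): the sigmoid output is a per-pixel probability, and cutting at 0.5 yields the binary mask.

```python
import numpy as np

# Hypothetical sigmoid activations for a 2x2 patch of the network output;
# values above 0.5 become foreground (1), the rest background (0).
probs = np.array([[0.91, 0.12],
                  [0.48, 0.73]])
mask = (probs > 0.5).astype(np.uint8)
print(mask.tolist())  # → [[1, 0], [0, 1]]
```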
![enter image description here][2]
The architecture of UNET as provided by the author: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
(* ::Package:: *)
BeginPackage["UNETSegmentation`"]
(* ::Section:: *)
(*Creating UNet*)
conv[n_]:=NetChain[
{
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[],
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[]
}
];
pool := PoolingLayer[{2,2},2];
dec[n_]:=NetGraph[
{
"deconv" -> DeconvolutionLayer[n,{2,2},"Stride"->{2,2}],
"cat" -> CatenateLayer[],
"conv" -> conv[n]
},
{
NetPort["Input1"]->"cat",
NetPort["Input2"]->"deconv"->"cat"->"conv"
}
];
nodeGraphMXNET[net_,opt: ("MXNetNodeGraph"|"MXNetNodeGraphPlot")]:= net~NetInformation~opt;
UNET := NetGraph[
<|
"enc_1"-> conv[64],
"enc_2"-> {pool,conv[128]},
"enc_3"-> {pool,conv[256]},
"enc_4"-> {pool,conv[512]},
"enc_5"-> {pool,conv[1024]},
"dec_1"-> dec[512],
"dec_2"-> dec[256],
"dec_3"-> dec[128],
"dec_4"-> dec[64],
"map"->{ConvolutionLayer[1,{1,1}],LogisticSigmoid}
|>,
{
NetPort["Input"]->"enc_1"->"enc_2"->"enc_3"->"enc_4"->"enc_5",
{"enc_4","enc_5"}->"dec_1",
{"enc_3","dec_1"}->"dec_2",
{"enc_2","dec_2"}->"dec_3",
{"enc_1","dec_3"}->"dec_4",
"dec_4"->"map"},
"Input"->NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]
]
(* ::Section:: *)
(*DataPrep*)
dataPrep[dirImage_,dirMask_]:=Module[{X, masks,imgfilenames, maskfilenames,ordering, fNames,func},
func[dir_] := (SetDirectory[dir];
fNames = FileNames[];
ordering = Flatten@StringCases[fNames,x_~~p:DigitCharacter.. :> ToExpression@p];
Part[fNames,Ordering@ordering]);
imgfilenames = func@dirImage;
X = ImageResize[Import[dirImage<>"\\"<>#],{160,160}]&/@imgfilenames;
maskfilenames = func@dirMask;
masks = Import[dirMask<>"\\"<>#]&/@maskfilenames;
{X, NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]/@masks}
]
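The numeric-suffix ordering that `dataPrep` performs with `StringCases` and `Ordering` can be sketched in Python (the file names here are made up): files are sorted by the number embedded in the name rather than lexicographically.

```python
import re

# Order file names by the number embedded in them, so that
# "img10.png" sorts after "img9.png" (plain lexicographic sorting
# would place it right after "img1.png").
names = ["img10.png", "img2.png", "img1.png", "img9.png"]
ordered = sorted(names, key=lambda s: int(re.search(r"\d+", s).group()))
print(ordered)  # → ['img1.png', 'img2.png', 'img9.png', 'img10.png']
```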
(* ::Section:: *)
(*Training UNet*)
trainNetwithValidation[net_,dataset_,labeldataset_,validationset_,labelvalidationset_, batchsize_: 8, maxtrainRounds_: 100]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All, ValidationSet -> Thread[validationset-> labelvalidationset],
BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"->Quantity[5,"Rounds"]}]
];
trainNet[net_,dataset_,labeldataset_, batchsize_:8, maxtrainRounds_: 10]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All,BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"-> Quantity[5,"Rounds"]}]
];
(* ::Section:: *)
(*Measure Accuracy*)
measureModelAccuracy[net_,data_,groundTruth_]:= Module[{acc},
acc =Table[{i, 1.0 - HammingDistance[N@Round@Flatten@net[data[[i]],TargetDevice->"GPU"],
Flatten@groundTruth[[i]]]/(160*160)},{i,Length@data}
];
{Mean@Part[acc,All,2],TableForm@acc}
];
(* ::Section:: *)
(*Miscellaneous*)
saveNeuralNet[net_]:= Module[{dir = NotebookDirectory[]},
Export[dir<>"unet.wlnet",net]]/; Head[net]=== NetGraph;
saveInputs[data_,labels_,opt:("data"|"validation")]:=Module[{},
SetDirectory[NotebookDirectory[]];
Switch[opt,"data",
Export["X.mx",data];Export["Y.mx",labels],
"validation",
Export["Xval.mx",data];Export["Yval.mx",labels]
]
]
EndPackage[];
The above code can also be found in the repository @ [Wolfram-MXNET GITHUB][3]
I trained my network on my laptop GPU (Nvidia GTX 1050) by feeding it augmented data (a set of 300 images constructed from a small dataset). The training was done in under 3 minutes! The accuracy (computed from the Hamming distance between the binarized vectors) of the generated masks with respect to the ground truth (unseen data), over a set of 90 images, was 98.55%. And with this, a task that previously required me to painstakingly trace the contours of the gastruloids by hand can now be performed in a matter of milliseconds. All the saved time and perspiration to be utilized somewhere else?
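The accuracy measure used here, one minus the normalized Hamming distance, is simply the fraction of pixels on which the predicted and ground-truth masks agree. A minimal Python sketch with toy 2×2 masks:

```python
import numpy as np

def mask_accuracy(pred, truth):
    """Fraction of pixels on which two binary masks agree
    (1 minus the normalized Hamming distance)."""
    pred = np.asarray(pred).ravel()
    truth = np.asarray(truth).ravel()
    return 1.0 - np.count_nonzero(pred != truth) / pred.size

pred  = [[1, 1], [0, 1]]
truth = [[1, 0], [0, 1]]
print(mask_accuracy(pred, truth))  # → 0.75 (one mismatch out of four pixels)
```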
![enter image description here][4]
Below is the result obtained by applying our trained net to one input:
![enter image description here][5]
The interesting aspect for me was that despite my gastruloids being highly dynamic (changing shape over time), I never had to explicitly state that to the network. All the necessary features were learned from the limited number of images I trained the network with. This is the beauty of neural networks.
![enter image description here][6]
Finally, the output of the net as applied to a number of unseen images:
![enter image description here][7]
Note: I have a Python MXNET version of UNET @ [python mxnet GITHUB][8]
The Wolfram version of UNET, however, seems to outperform the Python version, even though the latter also utilizes MXNET on the back end to implement the neural network. This should not come as a surprise: my guess is that the people at Wolfram Research have done internal optimizations on top of the library.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gastruloid.png&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=u-net-architecture-initial-authors-implementation.png&userId=942204
[3]: https://github.com/alihashmiii/UNet-Segmentation-Wolfram
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img1.png&userId=942204
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img2.png&userId=942204
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img3.png&userId=942204
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img4.png&userId=942204
[8]: https://github.com/alihashmiii/blobsegmentation
Ali Hashmi, 2018-05-17T15:06:45Z
How fast is my fidget spinner spinning? A sound experiment!
http://community.wolfram.com/groups/-/m/t/1344151
I would like to measure how fast my 6-bladed fidget spinner spins. To do so, after giving it a hard spin, I gently touch the spinner with a wooden stirring stick to create a buzzing sound which usually lasts for about a minute.
[![enter image description here][1]][2]
I have recorded and plotted the sound it generates:
[![enter image description here][3]][4]
How can I **automatically** generate a list of peak times for the above data? My final goal is to plot revolutions per second as a function of time to show spin decay.
### Data
To hear the sound in your Mathematica notebook, run the following code:
audio = Sound[SampledSoundList[
Flatten@ImageData@Import["https://i.stack.imgur.com/qHpp6.png"], 22050]]
![enter image description here][5]
This will download the following image, turn it into an array, and finally, convert it to a sound object.
[![enter image description here][6]][7]
First, import the audio and extract usable data from it:
audioDuration = Duration[audio];
audioSampleRate = AudioSampleRate[audio];
data = AudioData[audio][[1]];
Second, use `PeakDetect` to see which points are peaks (`= 1`) and which points are not peaks (`= 0`). Find the location of peaks in seconds.
peaks = PeakDetect[data, 150, 0.0, 0.4];
peakPos = 1./audioSampleRate Position[peaks, 1] // Flatten;
Length[peakPos]
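For readers without Mathematica, the essence of peak picking can be sketched in pure Python (a bare-bones stand-in for `PeakDetect` on a toy signal; the real function additionally takes a neighbourhood scale and sharpness parameters):

```python
def find_peaks(data, threshold):
    """Indices of samples that exceed both neighbours and a threshold
    (a bare-bones analogue of Mathematica's PeakDetect)."""
    return [i for i in range(1, len(data) - 1)
            if data[i] > threshold and data[i - 1] < data[i] >= data[i + 1]]

signal = [0.0, 0.1, 0.9, 0.2, 0.0, 0.8, 0.1]
print(find_peaks(signal, 0.4))  # → [2, 5]
```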
The period of the spinner is the separation between the beats (peaks) times the number of blades:
periods = 6 (peakPos[[2 ;; -1]] - peakPos[[1 ;; -2]])
The spin rate, i.e. revolutions per second, is the reciprocal of the period:
spinRates = 1/periods;(* Revolutions per second *)
Convert the data into a list of `{time, spin rate}` and plot it:
spinRateVStime =
Table[{i audioDuration/Length[spinRates], spinRates[[i]]}, {i,
Length[spinRates]}];
[![enter image description here][8]][9]
As can be seen, the spinner spins about 6 times per second and eventually comes to a stop after 12 seconds.
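The arithmetic from peak times to spin rate can be sketched in Python (with toy peak times, not the actual recording): successive peaks are successive blade passes, so one full revolution spans six peak spacings.

```python
# Toy peak times in seconds for a 6-bladed spinner that is slowing down.
peak_times = [0.00, 0.03, 0.06, 0.10, 0.14, 0.19]

n_blades = 6
# One revolution takes n_blades times the spacing between peaks.
periods = [n_blades * (b - a) for a, b in zip(peak_times, peak_times[1:])]
spin_rates = [1.0 / p for p in periods]          # revolutions per second
samples = list(zip(peak_times[1:], spin_rates))  # {time, spin rate} pairs
print(samples[0])
```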
### Details
The parameters for `PeakDetect` need to be adjusted. To do so, reduce the amount of data to speed up the process, then plot the output of `PeakDetect` on top of the data and look for good agreement.
data = AudioData[audio][[1]][[800 ;; 11111]];
peaks = PeakDetect[data, 150, 0.0, 0.4];
ListLinePlot[{data , peaks}, PlotRange -> {All, {0, 1.1}}]
[![enter image description here][10]][11]
[1]: https://i.stack.imgur.com/WnMrF.gif
[2]: https://i.stack.imgur.com/WnMrF.gif
[3]: https://i.stack.imgur.com/oxAvw.png
[4]: https://i.stack.imgur.com/oxAvw.png
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sfd324qwrea.png&userId=20103
[6]: https://i.stack.imgur.com/qHpp6.png
[7]: https://i.stack.imgur.com/qHpp6.png
[8]: https://i.stack.imgur.com/XXPkC.png
[9]: https://i.stack.imgur.com/XXPkC.png
[10]: https://i.stack.imgur.com/H3hPm.png
[11]: https://i.stack.imgur.com/H3hPm.png
Milad Pourrahmani, 2018-05-23T14:57:14Z
The Hippasus Primes
http://community.wolfram.com/groups/-/m/t/965609
According to legend, when Hippasus proved [the irrationality of $\sqrt2$](http://mathworld.wolfram.com/PythagorassConstant.html), he was thrown off a ship. Poor guy.
..
Gauss discovered that the numbers 1, 2, 3, 7, 11, 19, 43, 67, 163 led to unique factorization domains, and conjectured these were the only such numbers. Another person almost forgotten was Kurt Heegner, who proved Gauss's conjecture. But there was a small gap in his proof. Years later, Alan Baker and Harold Stark proved the result. But then they looked at Heegner's proof and announced it was pretty much correct, four years after Heegner's death. In his honor, 1, 2, 3, 7, 11, 19, 43, 67, 163 are known as [Heegner numbers](http://mathworld.wolfram.com/HeegnerNumber.html).
..
The $\mathbb{Q}(\sqrt{-1})$ numbers are known as [Gaussian integers](http://mathworld.wolfram.com/GaussianInteger.html).
The $\mathbb{Q}(\sqrt{-3})$ numbers are known as [Eisenstein integers](http://mathworld.wolfram.com/EisensteinInteger.html).
The $\mathbb{Q}(\sqrt{-7})$ numbers are known as [Kleinian integers](https://en.wikipedia.org/wiki/Kleinian_integer).
..
What about $\mathbb{Q}(\sqrt{-2})$? Why doesn't it have a name? I propose we call these **Hippasus integers**. He doesn't get much credit for his discoveries about $\sqrt{2}$, so may as well give him this to fill in the gap.
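As a side note (a Python sketch, and a standard number-theory fact rather than anything from the code below): a rational prime stays prime in $\mathbb{Q}(\sqrt{-2})$ exactly when it is *not* of the form $a^2 + 2b^2$, which happens for $p \equiv 5, 7 \pmod 8$.

```python
def splits_in_Zsqrtm2(p):
    """An odd prime p splits in Z[sqrt(-2)] exactly when p = a^2 + 2*b^2
    for some integers a, b (equivalent to p ≡ 1 or 3 (mod 8))."""
    return any(int((p - 2 * b * b) ** 0.5) ** 2 == p - 2 * b * b
               for b in range(int((p / 2) ** 0.5) + 1))

# The rational primes below 32 that remain prime in Z[sqrt(-2)]:
inert = [p for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
         if not splits_in_Zsqrtm2(p)]
print(inert)  # → [5, 7, 13, 23, 29, 31], all ≡ 5 or 7 (mod 8)
```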
..
So what do the Hippasus primes look like? Here's some code based on the [Sieve of Eratosthenes](http://mathworld.wolfram.com/SieveofEratosthenes.html) that seems to work. I'm sure it can be vastly improved upon.
heeg = 2;
klein = RootReduce[Select[SortBy[Flatten[Table[a + b (Sqrt[heeg] I - 1)/2, {a, -50, 50}, {b, -70, 70}]], N[Norm[#]] &], 1 < Norm[#] < 40 &]];
sieve = Take[#, -2] & /@ (Last /@ (Sort /@ SplitBy[SortBy[{Norm[#]^2, 2 Re[#], 2 Im[#]/Sqrt[heeg]} & /@ klein, Abs[#] &], Abs[#] &]));
primes = {};
Module[{addedprime, remove},
While[Length[sieve] > 1,
addedprime = sieve[[1]];
primes = Append[primes, addedprime];
remove = Union[Join[Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve),
Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, -Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve)]];
sieve = Select[Drop[sieve, 1], Not[MemberQ[remove, #]] &]]];
Graphics[Table[Point[{{1, 1}, {1, -1}, {-1, 1}, {-1, -1}}[[k]] ReIm[#]] & /@ (#.{1, Sqrt[heeg] I}/2 & /@ primes), {k, 1, 4}]]
![Hippasus primes][1]
With a change of the Heegner number at the top, the Gaussian primes, Hippasus primes, Eisenstein primes, and Kleinian primes can all be calculated:
![Heegner 1 2 3 7][2]
In case you were curious, we can also calculate the primes based on Heegner numbers 11, 19, 43, and 67.
![Heegner 11 19 43 67][3]
Those last two look pretty weird, so maybe I'm making a mistake somewhere. The primes based on 163 look even stranger.
![Heegner 163][4]
There are so many weird patterns that I almost didn't show this one. But then I remembered the [lucky numbers of Euler](http://mathworld.wolfram.com/LuckyNumberofEuler.html), which are based on Heegner numbers. The long line of primes is likely accurate. If anyone can improve/speed up the code and make a much larger picture, I'd love to see that.
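The connection to Euler's lucky numbers can be checked directly (a Python sketch; the relevant fact is that $n^2 + n + 41$ is prime for $n = 0, \dots, 39$, with $41 = (163+1)/4$ coming from the Heegner number 163):

```python
def is_prime(n):
    """Trial-division primality test, good enough for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euler's prime-generating polynomial n^2 + n + 41.
values = [n * n + n + 41 for n in range(40)]
print(all(is_prime(v) for v in values))  # → True
print(is_prime(40 * 40 + 40 + 41))       # → False (it equals 41^2)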
..
The same goes for a bigger picture of the **Hippasus primes**. If there is another name for these, please let me know. If you agree this is a great name for them, also let me know.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HippasusPrimes.gif&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner1237.gif&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner11194367.gif&userId=21530
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner163.gif&userId=21530
Ed Pegg, 2016-11-17T23:30:46Z
Loop subdivision on triangle meshes
http://community.wolfram.com/groups/-/m/t/1338790
(Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/q/161331/38178))
Every now and then, the question pops up how a given geometric mesh (e.g. a `MeshRegion`) can be refined to produce an (i) finer and (ii) smoother mesh. For example, the following triangle mesh from the example database is pretty coarse.
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"]
MeshCellCount[R, 2]
[![enter image description here][4]][1]
> 5660
Well, we _could_ execute this
S = DiscretizeRegion[R, MaxCellMeasure -> {1 -> 0.01}]
MeshCellCount[S, 2]
[![enter image description here][4]][1]
> 1332378
only to learn that the visual appearance hasn't improved at all.
So, how can we refine in a smoothing way with Mathematica? There are several subdivision schemes known in geometry processing, e.g. [Loop subdivision](https://en.wikipedia.org/wiki/Loop_subdivision_surface) and [Catmull-Clark subdivision](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) for general polyhedral meshes, but there seem to be no built-in methods for these.
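Before diving into the full implementation, the heart of Loop's scheme is a valence-dependent vertex weight. A small Python sketch of the standard formula (the same expression appears below as the default `"VertexWeightFunction"`; here it is divided by the valence to give the per-neighbour weight):

```python
import math

def loop_beta(n):
    """Per-neighbour weight for a valence-n interior vertex in Loop
    subdivision; the old vertex keeps weight 1 - n * loop_beta(n)."""
    return (5.0 / 8.0 - (3.0 / 8.0 + 0.25 * math.cos(2.0 * math.pi / n)) ** 2) / n

# For the regular valence n = 6 this reduces to 1/16.
print(abs(loop_beta(6) - 1.0 / 16.0) < 1e-12)  # → True
```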
Implementation
---
Let's see if we can do that with what Mathematica offers us. Still, we need quite a bit of preparation. First, we need methods to compute cell adjacency matrices from [here](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/160457#160457). I copied the code for completeness. The built-in `"ConnectivityMatrix"` properties for `MeshRegion`s return pattern arrays, so we start by converting them into numerical matrices.
SparseArrayFromPatternArray[A_SparseArray] := SparseArray @@ {
Automatic, Dimensions[A], A["Background"], {1, {
A["RowPointers"],
A["ColumnIndices"]
},
ConstantArray[1, Length[A["ColumnIndices"]]]
}
}
CellAdjacencyMatrix[R_MeshRegion, d_, 0] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[d, 0]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, d_] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[0, d]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, 0] :=
If[MeshCellCount[R, 1] > 0,
With[{A = CellAdjacencyMatrix[R, 0, 1]},
With[{B = A.Transpose[A]},
SparseArray[B - DiagonalMatrix[Diagonal[B]]]
]
],
{}
];
CellAdjacencyMatrix[R_MeshRegion, d1_, d2_] :=
If[(MeshCellCount[R, d1] > 0) && (MeshCellCount[R, d2] > 0),
With[{B = CellAdjacencyMatrix[R, d1, 0].CellAdjacencyMatrix[R, 0, d2]},
SparseArray[
If[d1 == d2,
UnitStep[B - DiagonalMatrix[Diagonal[B]] - d1],
UnitStep[B - (Min[d1, d2] + 1)]
]
]
],
{}
];
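The idea behind these matrix products (an edge-vertex incidence matrix, and vertex-vertex adjacency as its Gram matrix with the diagonal removed) can be sketched in Python for a single triangle (toy data, not the actual `MeshRegion` internals):

```python
import numpy as np

# Edge-vertex incidence matrix for one triangle with vertices 0,1,2
# and edges (0,1), (1,2), (0,2); rows are edges, columns vertices.
edges = [(0, 1), (1, 2), (0, 2)]
A10 = np.zeros((3, 3), dtype=int)
for e, (i, j) in enumerate(edges):
    A10[e, i] = A10[e, j] = 1

# Vertex-vertex adjacency as in CellAdjacencyMatrix[R, 0, 0]:
# (A10^T A10) counts shared edges; stripping the diagonal (vertex
# degrees) leaves the adjacency matrix.
B = A10.T @ A10
A00 = B - np.diag(np.diag(B))
print(A00.tolist())  # → [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```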
As an alternative to copying the code above, simply make sure that you have [IGraph/M](http://szhorvat.net/pelican/igraphm-a-mathematica-interface-for-igraph.html) version 0.3.93 or later installed and run
Needs["IGraphM`"];
CellAdjacencyMatrix = IGMeshCellAdjacencyMatrix;
Next is a `CompiledFunction` to compute the triangle faces for the new mesh:
getSubdividedTriangles =
Compile[{{ff, _Integer, 1}, {ee, _Integer, 1}},
{
{Compile`GetElement[ff, 1],Compile`GetElement[ee, 3],Compile`GetElement[ee, 2]},
{Compile`GetElement[ff, 2],Compile`GetElement[ee, 1],Compile`GetElement[ee, 3]},
{Compile`GetElement[ff, 3],Compile`GetElement[ee, 2],Compile`GetElement[ee, 1]},
ee
},
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
Finally, the method that weaves everything together. It assembles the subdivision matrix (which maps the old vertex coordinates to the new ones), uses it to compute the new positions, and calls `getSubdividedTriangles` to generate the new triangle faces.
ClearAll[LoopSubdivide];
Options[LoopSubdivide] = {
"VertexWeightFunction" -> Function[n, 5./8. - (3./8. + 1./4. Cos[(2. Pi)/n])^2],
"EdgeWeight" -> 3./8.,
"AverageBoundary" -> True
};
LoopSubdivide[R_MeshRegion, opts : OptionsPattern[]] := LoopSubdivide[{R, {{0}}}, opts][[1]];
LoopSubdivide[{R_MeshRegion, A_?MatrixQ}, OptionsPattern[]] :=
Module[{A00, A10, A12, A20, B00, B10, n, n0, n1, n2, βn, pts,
newpts, edges, faces, edgelookuptable, triangleneighedges,
newfaces, subdivisionmatrix, bndedgelist, bndedges, bndvertices,
bndedgeQ, intedgeQ, bndvertexQ,
intvertexQ, β, βbnd, η},
pts = MeshCoordinates[R];
A10 = CellAdjacencyMatrix[R, 1, 0];
A20 = CellAdjacencyMatrix[R, 2, 0];
A12 = CellAdjacencyMatrix[R, 1, 2];
edges = MeshCells[R, 1, "Multicells" -> True][[1, 1]];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
n0 = Length[pts];
n1 = Length[edges];
n2 = Length[faces];
edgelookuptable = SparseArray[
Rule[
Join[edges, Transpose[Transpose[edges][[{2, 1}]]]],
Join[Range[1, Length[edges]], Range[1, Length[edges]]]
],
{n0, n0}];
(*A00=CellAdjacencyMatrix[R,0,0];*)
A00 = Unitize[edgelookuptable];
bndedgelist = Flatten[Position[Total[A12, {2}], 1]];
If[Length[bndedgelist] > 0, bndedges = edges[[bndedgelist]];
bndvertices = Sort[DeleteDuplicates[Flatten[bndedges]]];
bndedgeQ = SparseArray[Partition[bndedgelist, 1] -> 1, {n1}];
bndvertexQ = SparseArray[Partition[bndvertices, 1] -> 1, {n0}];
B00 = SparseArray[ Join[bndedges, Reverse /@ bndedges] -> 1, {n0, n0}];
B10 = SparseArray[ Transpose[{Join[bndedgelist, bndedgelist],
Join @@ Transpose[bndedges]}] -> 1, {n1, n0}];
,
bndedgeQ = SparseArray[{}, {Length[edges]}];
bndvertexQ = SparseArray[{}, {n0}];
B00 = SparseArray[{}, {n0, n0}];
B10 = SparseArray[{}, {n1, n0}];
];
intedgeQ = SparseArray[Subtract[1, Normal[bndedgeQ]]];
intvertexQ = SparseArray[Subtract[1, Normal[bndvertexQ]]];
n = Total[A10];
β = OptionValue["VertexWeightFunction"];
η = OptionValue["EdgeWeight"];
βn = β /@ n;
βbnd = If[TrueQ[OptionValue["AverageBoundary"]], 1./8., 0.];
subdivisionmatrix =
Join[Plus[
DiagonalMatrix[SparseArray[1. - βn] intvertexQ + (1. - 2. βbnd) bndvertexQ],
SparseArray[(βn/n intvertexQ)] A00, βbnd B00],
Plus @@ {((3. η - 1.) intedgeQ) (A10),
If[Abs[η - 0.5] < Sqrt[$MachineEpsilon],
Nothing, ((0.5 - η) intedgeQ) (A12.A20)], 0.5 B10}];
newpts = subdivisionmatrix.pts;
triangleneighedges = Module[{f1, f2, f3},
{f1, f2, f3} = Transpose[faces];
Partition[
Extract[
edgelookuptable,
Transpose[{Flatten[Transpose[{f2, f3, f1}]],
Flatten[Transpose[{f3, f1, f2}]]}]],
3]
];
newfaces =
Flatten[getSubdividedTriangles[faces, triangleneighedges + n0],
1];
{
MeshRegion[newpts, Polygon[newfaces]],
subdivisionmatrix
}
]
Test examples
---
So, let's test it. A classical example is subdividing an `"Icosahedron"`:
R = RegionBoundary@PolyhedronData["Icosahedron", "MeshRegion"];
regions = NestList[LoopSubdivide, R, 5]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.069731
[![enter image description here][1]][1]
Now, let's tackle the `"Triceratops"` from above:
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.270776
[![enter image description here][2]][2]
The meshes so far had trivial boundary. As for an example with nontrivial boundary, I dug out the `"Vase"` from the example dataset:
R = ExampleData[{"Geometry3D", "Vase"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsRow[
Table[Show[S, ViewPoint -> {1.4, -2.1, -2.2},
ViewVertical -> {1.7, -0.6, 0.0}], {S, regions}],
ImageSize -> Full]
> 1.35325
[![enter image description here][3]][3]
Remarks and edits
---
Added some performance improvements and incorporated some ideas by [Chip Hurst](https://mathematica.stackexchange.com/users/4346) from [this post](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/166491#166491).
Added options for customization of the subdivision process, in particular for planar subdivision (see [this post](https://mathematica.stackexchange.com/a/170604/38178) for an application example).
Added a way to also return the subdivision matrix since it can be useful, e.g. for [geometric multigrid solvers](https://mathematica.stackexchange.com/a/173617/38178). Just call with a matrix as second argument, e.g., `LoopSubdivide[R,{{1}}]`.
Fixed a bug that produced dense subdivision matrices in some two-dimensional examples due to not using `0` as `"Background"` value.
[4]: https://i.stack.imgur.com/nuWBd.png
[1]: https://i.stack.imgur.com/l1VcB.png
[2]: https://i.stack.imgur.com/qSbBh.png
[3]: https://i.stack.imgur.com/dp1BY.png
Henrik Schumacher, 2018-05-14T16:15:13Z