Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions in tag Recreation, sorted by activity.

Creating Star Wars Kylo Ren voice in one line of code
http://community.wolfram.com/groups/-/m/t/1219764
*MODERATOR NOTE: for true Star Wars fans this post has a related Wolfram Cloud App, which you can access by clicking on the image below. Read the full post below to understand how the app works. May the Force be with you.*
[![enter image description here][1]][9]
----------
Kylo Ren's voice in [Star Wars: The Force Awakens][2] is very cool. When I watched the movie in 2015, one of the first things that came to my mind was how to create such a voice in Mathematica. At the time, Mathematica's capabilities for manipulating sound were very limited, and it was not possible to do such a thing.
I'm far, far away from being a sound expert, but I tried something like this:
file = Import["https://s3-sa-east-1.amazonaws.com/rmurta/murta-audio.wav"];
audioOrg = Audio[file];
audioKylo = AudioPitchShift[audioOrg, 0.9];
audioKylo = AudioFrequencyShift[audioKylo, -200] // AudioAmplify[#, 4] &
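To keep a copy of the processed audio, you can export it to a WAV file (a quick sketch; the output filename is just illustrative):

```mathematica
(* save the pitch- and frequency-shifted audio; "murta-kylo.wav" is an arbitrary name *)
Export["murta-kylo.wav", audioKylo]
```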
- Original Murta's record: [Murta original][3]
- And here is the result: [Murta as Kylo][4]
[![Kylo Voice][5]][6]
[youtube link for Kylo][7]
See the [original post on Stack Exchange][8], where I asked for improvements to the sound hack.
After that, why not create a cloud app for it?!
This is what's amazing about Mathematica: with one line of code you can create an app, and everybody can play with it!
CloudDeploy[FormFunction[{"sound" -> "Sound"},
  AudioAmplify[AudioFrequencyShift[AudioPitchShift[#sound, 0.9], -200], 4] &],
 "kylo-voice-by-murta", Permissions -> "Public"]
Here is the link, so you can try it: [kylo-voice-by-murta][9].
Record your sound and speak like Kylo!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-11-14at1.40.15PM2.png&userId=11733
[2]: https://en.wikipedia.org/wiki/Star_Wars:_The_Force_Awakens
[3]: https://s3-sa-east-1.amazonaws.com/rmurta/murta-audio.wav
[4]: https://s3-sa-east-1.amazonaws.com/rmurta/murta-kylo-audio.wav
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=kylo-voice2.png&userId=11733
[6]: https://www.youtube.com/watch?v=zy-wqB4cbT8&feature=youtu.be&t=49s
[7]: https://www.youtube.com/watch?v=zy-wqB4cbT8&feature=youtu.be&t=49s
[8]: https://mathematica.stackexchange.com/questions/159744
[9]: https://www.wolframcloud.com/objects/murta/kylo-voice-by-murta

Rodrigo Murta, 2017-11-11T13:06:38Z

What Game of Thrones character do you resemble?
http://community.wolfram.com/groups/-/m/t/1242672
Hi all,
First time community poster here! I'll start off by saying, I am by no means an expert in machine learning. I dabble with the basic tutorials and training options offered by Wolfram, but when it comes to the guts of this stuff, I'm brand new. I say this as I think my project here is a good proof of concept on how easy it is to pick up machine learning with the Wolfram language. (Feel free to skip to the bottom of the post for a link to my notebook.)
**Initial testing**
To start out, I simply looked into the basic machine learning examples provided around our Mathematica 11 release, linked [here][1]. The [Create a Legendary Creature Recognizer][2] in particular is what I based my initial testing off of.
To start, I narrowed it down to two relatively different but very popular characters: Daenerys Targaryen and Jon Snow. Similar to the format in the creature recognizer, I simply imported images, associated them with the character names, and plugged this into the [Classify][3] function. To test its capabilities, I used the "TopProbabilities" call to see with what confidence it was assigning characters.
Like I said, I'm new to machine learning, so this was a learning process for me. To my surprise, this solution was rather messy and just not at the level of accuracy I was looking for. It struggled with actors/actresses out of costume, was polluted by the background, and just didn't have enough images to be accurate.
This is what led me to ultimately focus on three things when training my classifier function and, eventually, my neural networks: 1) images of characters should include the actors or actresses both in and out of costume; 2) the [FindFaces][4] function needed to be used to remove any background or outfit pollution; 3) the number of images for each character needed to be as large as possible.
**Collecting images for my dataset**
Once I was able to master extracting images using FindFaces, it was time to do some heavy internet image searching to get the relevant images. I decided to limit my identifier to just 7 characters: Dany, Jon, Tyrion, Cersei, Jaime, Arya, and Sansa (apologies to all of the Davos fans!). As mentioned before, I did specific searches for characters throughout the seasons as well as actors/actresses out of costume (plus additional searches for different hairstyles, facial hair, glasses, age, etc.).
To speed up this process, I first created a quick image tester to be sure that each of the images I pulled from my searches was being pulled correctly without having to test the entire dataset all at once. For example, I would test all of the season 1 images I found of Dany at one time. You can see what this looks like below.
![enter image description here][5]
testImgs = {}; (* populate with the images you want to check *)
i = 1;
testFaces = {};
While[i < Length[testImgs] + 1,
testFace = ImageTrim[testImgs[[i]], #] & /@ FindFaces[testImgs[[i]]];
AppendTo[testFaces, ImageResize[testFace[[1]], Tiny]];
i++;
]
testFaces
For proof of concept, the image includes a couple of common errors that I found. In some cases, FindFaces would not find any faces due to low quality or just odd lighting/angles, which would result in the error. In other cases, images would pick up background noise that resembled a face (not shown in this example). You can see in the last picture that another character's face was found first. These images needed to be removed from the set before being added to the full dataset. As I went along, it became easier to weed out images that would likely be unusable. You can see an example of what these image sets looked like below. Jaime's in particular was my smallest.
![enter image description here][6]
The final step in this process was to make the appropriate associations for the images and provide a list that could be used by the Classify function. You can see the code for this below. To summarize, I made separate lists of images for each character, then iterated through those lists to associate each image with the appropriate character name and add the associated rule to my master list.
faceRec = {};
i = 1;
While[i < Length[dt] + 1,
dtface = ImageTrim[dt[[i]], #] & /@ FindFaces[dt[[i]]];
AppendTo[faceRec, ImageResize[dtface[[1]], {100, 100}] -> "Dany"];
i++;
]
i = 1;
While[i < Length[js] + 1,
jsface = ImageTrim[js[[i]], #] & /@ FindFaces[js[[i]]];
AppendTo[faceRec, ImageResize[jsface[[1]], {100, 100}] -> "Jon"];
i++;
]
i = 1;
While[i < Length[tl] + 1,
tlface = ImageTrim[tl[[i]], #] & /@ FindFaces[tl[[i]]];
AppendTo[faceRec, ImageResize[tlface[[1]], {100, 100}] -> "Tyrion"];
i++;
]
i = 1;
While[i < Length[cl] + 1,
clface = ImageTrim[cl[[i]], #] & /@ FindFaces[cl[[i]]];
AppendTo[faceRec, ImageResize[clface[[1]], {100, 100}] -> "Cersei"];
i++;
]
i = 1;
While[i < Length[jl] + 1,
jlface = ImageTrim[jl[[i]], #] & /@ FindFaces[jl[[i]]];
AppendTo[faceRec, ImageResize[jlface[[1]], {100, 100}] -> "Jaime"];
i++;
]
i = 1;
While[i < Length[as] + 1,
asface = ImageTrim[as[[i]], #] & /@ FindFaces[as[[i]]];
AppendTo[faceRec, ImageResize[asface[[1]], {100, 100}] -> "Arya"];
i++;
]
i = 1;
While[i < Length[ss] + 1,
ssface = ImageTrim[ss[[i]], #] & /@ FindFaces[ss[[i]]];
AppendTo[faceRec, ImageResize[ssface[[1]], {100, 100}] -> "Sansa"];
i++;
]
faceRec = RandomSample[faceRec, Length[faceRec]];
Per a note from one of our developers, I also chose to randomize the list of associations, as our neural network framework down the line assumes a random order when processing. This didn't make any noticeable difference with my dataset, but I thought it was important to at least note.
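As an aside, the seven near-identical loops above could also be collapsed into a single pass over an association from character name to image list (a hedged sketch, not from the original post; dt, js, tl, cl, jl, as, and ss are the same image lists used above, and it assumes every image has at least one detectable face, as in the curated sets):

```mathematica
(* map each character name to its list of source images *)
charImages = <|"Dany" -> dt, "Jon" -> js, "Tyrion" -> tl, "Cersei" -> cl,
   "Jaime" -> jl, "Arya" -> as, "Sansa" -> ss|>;

(* crop the first detected face in each image, label it, and shuffle the result *)
faceRec = RandomSample@Flatten@KeyValueMap[
    Function[{name, imgs},
     (ImageResize[ImageTrim[#, First[FindFaces[#]]], {100, 100}] -> name) & /@
      imgs],
    charImages];
```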
**Using the Classify function**
With my images associated, I was ready to jump into creating my ClassifierFunction. With a single line of code, I was able to do so, as seen below.
gotFaces = Classify[faceRec]
To test my model, I pulled in some new images and used the same logic as before to create a testing set. I did try to find images in costume as well as in real life, with different hairstyles, and at different ages to provide some more variety in my testing.
![enter image description here][7]
test = {}; (* populate with new test images *)
i = 1;
testFaces = {};
While[i < Length[test] + 1,
testFace = ImageTrim[test[[i]], #] & /@ FindFaces[test[[i]]];
AppendTo[testFaces, ImageResize[testFace[[1]], {100, 100}]];
i++;
]
testFaces
I will note that doing this same process up to this point with just Jon and Dany worked REALLY well. To my surprise, though, it just wasn't quite as promising with a larger cast of characters. Some characters were correct with relatively high confidence, some were right but with a split between a few characters, and some were wrong but at least had the correct character close to the top.
Manipulate[gotFaces[character, "TopProbabilities"], {character, testFaces}]
![enter image description here][8]
![enter image description here][9]
![enter image description here][10]
This had me troubleshooting like crazy. I will say that simply duplicating the associations already in the list several times actually improved the accuracy quite a bit, which was expected. With this in mind, I decided to try my hand at our neural network functionality.
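One way to quantify this beyond eyeballing TopProbabilities is ClassifierMeasurements, which scores a classifier against a labeled hold-out set (a sketch; testRec is a hypothetical list of face -> name rules kept out of training):

```mathematica
(* testRec: labeled hold-out data in the same image -> name form as faceRec *)
cm = ClassifierMeasurements[gotFaces, testRec];
cm["Accuracy"]             (* overall fraction classified correctly *)
cm["ConfusionMatrixPlot"]  (* shows which characters get confused with which *)
```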
**Using the NetChain and NetTrain functions**
I'll say it once, and I'll say it again. I am by no means an expert in neural networks. It's a big topic in tech now, so I've developed a basic conversational understanding of the concept. However, this was my first time actually creating my own neural network in any language.
The process was relatively simple, much to my surprise, thanks to Wolfram's vast amount of documentation. Per some research, I learned about "convolutional" neural nets and found a similar image example of this in our documentation pages for [NetTrain][11]. Using this example, I made some minor adjustments to the NetChain to fit my own data and was well on my way to training my first neural net! You can see what this ended up looking like below. The variable "classes" is simply a list of all the potential characters.
lenet = NetChain[
{ConvolutionLayer[20, 5], Ramp, PoolingLayer[2, 2],
ConvolutionLayer[50, 5], Ramp, PoolingLayer[2, 2], FlattenLayer[],
500, Ramp, 7, SoftmaxLayer[]},
"Output" -> NetDecoder[{"Class", classes}],
"Input" -> NetEncoder[{"Image", {100, 100}}]
]
From here, the NetTrain function is super simple. You can see an image of what this looks like in motion below with real-time training progress as the program runs. For larger sets of data, there is also the option to add a time goal. For mine, I simply allowed it to run for the full 10-15 minutes. It's certainly interesting to be able to actually see the loss function change as the data set runs.
trained = NetTrain[lenet, faceRec]
![enter image description here][12]
This method still had its errors, but the accuracy as well as the confidence in the selections seemed significantly higher. I saw many more examples of split "TopProbabilities" among the wrongly classified images, as with the Dany and Arya pictures below, but also a much larger share of correctly classified images with 90%+ as their "TopProbabilities" return, as with the Jon and Sansa pictures below.
Manipulate[trained[character, "TopProbabilities"], {character, testFaces}]
![enter image description here][13]
![enter image description here][14]
![enter image description here][15]
![enter image description here][16]
As with the Classify function, adding more images (even duplicates) to the dataset did provide even more accurate results and confidence to the neural net model. Ideally, if using this for an actual application, I would have a more elegant way of importing the images in for the NetTrain. A suggestion from a meeting I sat in recently was to explore [NetEncoder][17]. I also got the vibe that there are some other options in the pipeline to support this, specifically with large numerical datasets.
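Instead of literal duplicates, one simple way to enlarge the training set is to append a mirrored copy of every face, since a flipped face is still the same character (a hedged sketch, not from the original post):

```mathematica
(* augment by adding a horizontally flipped copy of each labeled face *)
augmented = RandomSample@Join[faceRec,
    (ImageReflect[First[#], Left -> Right] -> Last[#]) & /@ faceRec];
trainedAug = NetTrain[lenet, augmented]
```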
**Real-time character matching**
Moving on to the fun part that you've all been waiting for! I started out by using the same logic as with my character image import to pull in images from LinkedIn of my coworkers to see how the ClassifierFunction and NetChain handled these outside inputs.
![enter image description here][18]
wolfram = {}; (* populate with coworker images pulled from LinkedIn *)
i = 1;
wolframFaces = {};
While[i < Length[wolfram] + 1,
wolframFace = ImageTrim[wolfram[[i]], #] & /@ FindFaces[wolfram[[i]]];
AppendTo[wolframFaces, ImageResize[wolframFace[[1]], {100, 100}]];
i++;
]
wolframFaces
I understand that this is not exactly what deep learning is intended for, but it's a fun little application, trying to trick the trained sets into forcing us into a character. The models definitely vary on which character they assign to each coworker, so their methods are likely quite different. However, both gave oddly high confidence for some of my coworkers.
Manipulate[
gotFaces[employee, "TopProbabilities"], {employee, wolframFaces}]
![enter image description here][19]
And finally, the quiz you've all been waiting for: THE LIVE IDENTIFIER! You can see a quick image of what this looks like below. It uses your computer's camera to let you grab a live image of yourself and run it through the ClassifierFunction and NetChain.
Panel[
Column[{
Row[{ImageCapture[ImagingDevice -> $ImagingDevice]}],
Row[{
Button[TextCell["Grab Image", FontSize -> 24],
img = CurrentImage[ImageSize -> 900], ImageSize -> {337, 50},
Alignment -> Center]
}],
Row[{TextCell["Captured Faces", Bold, FontSize -> 24]},
Alignment -> {Top, Center}],
Row[{Dynamic[currentFace = ImageTrim[img, #] & /@ FindFaces[img]]}],
Row[{TextCell["Classify function ID", Bold, FontSize -> 24]}],
Row[{Dynamic[
Style[gotFaces[currentFace, "TopProbabilities"],
FontSize -> 16]]}],
Row[{TextCell["Neural net ID", Bold, FontSize -> 24]}],
Row[{Dynamic[
Style[trained[currentFace, "TopProbabilities"],
FontSize -> 16]]}]
}
], ImageSize -> 355]
![enter image description here][20]
It's interesting to use this "live" model to start exploring how the input images are affecting the final trained sets. I noticed that although I tended to lean towards a small group of characters (especially in the Neural net ID), some particular factors could vary the outputs. Image quality (per camera or amount of cropping from not being close enough), head tilt, hair in face, facial expressions, etc. did provide some level of variation in my quick analysis.
**Closing thoughts**
Although not a perfect model, it's been really reassuring to see the level of accuracy obtained on such a small dataset and with very little knowledge of the appropriate neural networks for this type of analysis. Being able to jump right in, with the support of the Wolfram documentation and the ease of the built-in functions, made it SUPER easy to build such a model from scratch.
If I were to develop this further, I would like to explore neural networks geared towards facial recognition, as well as better ways to collect and import a vast dataset of even more characters. My hope is that this would 1) better identify certain defining features and 2) eliminate the variations that we see more obviously in the live camera model.
It's not perfect, but what a fun proof of concept and a great learning experience! You can download all of the files at the following links: [full notebook][21], [trained NN][22], and [webcam app][23]. Specifically, the GOTFacesApplication is a more simplistic model that imports the .wlnet model instead of making you sit and wait for the full project notebook to evaluate. Since some of the Import/Export [WLNet][24] functionality is still "experimental", I hit some snags creating a CDF for use with our free CDF Player (my apologies to those who haven't made the jump to Mathematica yet!)
Hope you all enjoy!
[1]: https://www.wolfram.com/language/11/improved-machine-learning/?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[2]: https://www.wolfram.com/language/11/improved-machine-learning/create-a-legendary-creature-recognizer.html?product=language&lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[3]: http://reference.wolfram.com/language/ref/Classify.html?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[4]: http://reference.wolfram.com/language/ref/FindFaces.html?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAA6zAAAAJDEwNTRhMDhjLTU4MTQtNGUxYy05NGZmLWE2MGQwMTNhOWYwZg.jpg&userId=1161398
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAA5kAAAAJGQwNjFmMzRiLTIyNzQtNGE0Ni1iOGVhLWJmMmVkZjdjMTFiNg.jpg&userId=1161398
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAAtPAAAAJGQ0Y2M3NGFjLTM3ZDEtNDU2Yy04NDRiLWNlMzQ3ZDUwZWE5OA.jpg&userId=1161398
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAA18AAAAJDYxZWZmYTEzLTViYWEtNDVmZC05NThiLTZiYzQ2MGY0NTQ3OA.jpg&userId=1161398
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAA8XAAAAJDkzMzcyNTQyLTU2YmEtNDkxYi04Zjk1LWY3NjQ3MWJlY2VhMA.jpg&userId=1161398
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIAAQDGAAgAAQAAAAAAAAxKAAAAJDExODhmNzczLTQyNjMtNGY4MC04ZTA2LTgzMTY2MTczYzMxMg.jpg&userId=1161398
[11]: http://reference.wolfram.com/language/ref/NetTrain?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAAuBAAAAJDhiMmNhMjU2LTJhOWYtNGE3OC04NWFlLTczNWViMDYzNzU1Yg.jpg&userId=1161398
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAAvlAAAAJDI3NGYyMDAxLWQyNWQtNGM5NC1iZjNjLTU5NDdhYzdmZmY0YQ.jpg&userId=1161398
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAA7eAAAAJGRkMTBkYWY1LTNiMGUtNDA5NS05YTk2LTg5MGIwYzVlNzE3Zg.jpg&userId=1161398
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAA8KAAAAJGViNzc1MDU4LTUzZGUtNGY1ZC1hMWY3LTlkYzExNzVjMGRiZA.jpg&userId=1161398
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAApWAAAAJDdhMGQ4YjI3LTg5OGEtNGE3Yi1hODY2LTIzMzE2Njc5NjIyOA.jpg&userId=1161398
[17]: http://reference.wolfram.com/language/ref/NetEncoder.html?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAA1QAAAAJDFiNDRlNWYyLWI5ZDAtNDA0Zi1hYTdmLTBlMGQ4MzBmNjFmNw.jpg&userId=1161398
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAAnJAAAAJDdmMmExMzI5LWQwM2MtNDE3My1iNDBlLTJkODc2YmIxMTVkMw.jpg&userId=1161398
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AAIABADGAAgAAQAAAAAAAAtuAAAAJDMxNzIyOTljLWFkYTUtNDBkMC1iNWIyLTZjODNhYmZiMzE0Yg.jpg&userId=1161398
[21]: https://amoeba.wolfram.com/index.php/s/blYdcZCDLJvRbPq
[22]: https://amoeba.wolfram.com/index.php/s/M9ZuFlCQ7xjj2Cp
[23]: https://amoeba.wolfram.com/index.php/s/IwEnxaioz28IwrI
[24]: http://reference.wolfram.com/language/ref/format/WLNet.html?lipi=urn:li:page:d_flagship3_pulse_read;Dgg9GItqQQ%2bBZdXo2Z7y8A==

Sam Tone, 2017-12-11T23:40:19Z

I’m okay with making name tags with Mathematica
http://community.wolfram.com/groups/-/m/t/1244333
*This post was originally published at [Ultimaker][1].*
----------
Pioneer Chris Hanusa led a Mathematica and 3D printing workshop where he shared resources to help attendees get up and running with the program and 3D printing. By the end of the information-rich workshop, attendees had learned how to use Mathematica to create several 3D-printable objects, including name tags.
In October, at my invitation, Assistant Professor Chris Hanusa of the Math Department at Queens College led an Introduction to 3D Printing with Mathematica workshop for educators at [NYU’s ITP][2] in New York City. Chris is a mathematician and a designer who creates beautiful objects inspired by math (you can see his work on his website [hanusadesign][3]). Chris creates his objects and shares the math behind them on his site. He has also written several tutorials on his [blog mathzorro.com][4] about how to use Mathematica to create 3D objects. There is a great deal to learn about Mathematica before one can start designing printable 3D objects, and during the workshop Chris worked to provide the attendees with enough information to be just a little bit dangerous. Without question, Mathematica is the kind of application that one needs to see in action, and then spend quite a bit of time processing the information and doing further exploration on one’s own.
Chris’s teaching style encourages independence. He shows his college students how to rely on Mathematica’s extensive documentation, and he provides well-scaffolded tutorials to follow so that his students develop their understanding and confidence.
[Mathematica][5] is not a CAD package, nor was it created with 3D printing in mind. However, one can use Mathematica’s programming language to create printable 3D objects, essentially watertight mesh files, and export them as STLs or OBJs. Like other programming languages (Processing, OpenSCAD, OpenGL) that can create 3D objects with code, you don’t have the ability to drag and drop shapes. Basically, you create statements that generate nodes or vertices in 3D space. By specifying coordinates, you can then apply rotate, scale, and translate transformations. You can use parametric curves, vector functions, and trigonometry to create your geometric objects. Remember when you asked in Trig class, "When will I ever use this?" Well, you’ll use your trig in Mathematica to create 3D printable objects. Math class would have been very different for me if I had access to both Mathematica and 3D printing back when I was in high school.
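As a tiny illustration of that trig-to-geometry idea (my sketch, not from the workshop), cosine and sine are enough to trace a helix, which a Tube then thickens into a printable solid:

```mathematica
(* a helix traced with cosine and sine, thickened into a printable tube *)
helix = ParametricPlot3D[{Cos[t], Sin[t], t/6}, {t, 0, 6 Pi},
   PlotStyle -> Tube[0.15]];

(* export as an STL mesh for printing; the filename is illustrative *)
Export["helix.stl", helix]
```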
What I like about Mathematica (and Processing and OpenSCAD) is the algorithmic approach to generating objects. While Processing uses a sketch metaphor and OpenSCAD employs an editing window, Mathematica uses notebooks. For each of these options, programmers start with a general idea of a task for their computers to perform. Programmers then flesh out their general ideas into complete, unambiguous, step-by-step procedures to carry out their tasks. Such procedures are called algorithms. An algorithm is not the same thing as a program. A program is written in some particular programming language, like C, Java, Python, or the Wolfram Language. An algorithm is more like the idea behind the program. An algorithm can be expressed in any language, including English.
Once you know what you want your program to do, it is often helpful to write a description of the task or tasks. That description can then be used as an outline for your algorithm. As programmers refine and elaborate on their descriptions, gradually adding more details, they eventually end up with a complete algorithm that can be translated directly into a programming language. This method is called stepwise refinement, and it is a type of top-down design. As you proceed through the stages of stepwise refinement, you write out descriptions of your algorithm in pseudo-code: informal instructions that imitate the structure of your program without complete details or the perfect syntax of actual code. Pseudo-code is where you figure out what kinds of data your program needs and what kind of data it returns. This is also the step where you think about the best ways to solve the problems you're going to run into during the project, and where you get to figure out solutions to try before even starting to write code.
Think of programming as a process of logical problem-solving. Your two big challenges are a) learning the syntax, and b) applying your logical problem-solving skills to an unfamiliar domain. When I taught programming at [Saint Ann’s School][6] in Brooklyn, NY, I told my students who were learning a new language to break things down into four areas:
- Code-reading—be able to look at code and figure out what it’s doing. Code-reading means two things:
  - Being able to understand what a particular line of code is doing—understanding the syntax of the program.
  - Understanding the control flow of the program: when the program executes a line of code, what will it execute next?
- Pseudo-code—This is where you do a lot of thinking and not a lot of
writing code. Because most projects start with a vague idea, or an
English description, this step helps turn your project into something
that is approachable as a program. It forces you to define and
explain what you're trying to do very precisely.
- Code-writing—It can be intimidating to try and figure out how to
express what you're trying to do with an unfamiliar syntax of a
programming language. However, if you've done a good job with the
pseudo-code, writing your code should be in some ways the simplest
step. You’re performing a translation from the precise and clear
concepts you've figured out to the syntax of the programming language
in question. As you write the code, you'll figure out some issues
that your original design or pseudo-code didn't address. You'll
improvise, double back, and sometimes change your entire design. But
the more you separate the simple writing of code from the hard work
of figuring out what you want the code to do, the happier you will be
in the long run.
- Debugging—This is the process of testing and fixing the problems in a
program that you've already written. Unfortunately, this is the step
that will generally take most of your time. While occasionally you
will be able to write a program that's bug-free, generally your
program will either not work immediately or it will do something
completely unexpected. Learning techniques for figuring out what's
wrong (and even more important, solving the problem) is important.
The more code you write and the longer your programs become, the
harder problems become to track down and deal with.
Start small, modify, test and build incrementally. When you work in Mathematica, instead of modifying code that works, make a copy and modify the copy. This way you’ll have a record of what you started with that you can always return to at a later date.
Mathematica is also capable of nested commands. But ease into this. Until you know how each command works (what information a command takes and what it returns) you may want to consider separating the commands out. As you become more experienced, nest away with confidence.
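For example, the separated and nested forms below do the same work; the first makes each intermediate result visible, while the second assumes you already know what each command takes and returns:

```mathematica
(* step by step: compute Pi numerically, then round the result *)
approx = N[Pi];
Round[approx]

(* the same computation with the commands nested *)
Round[N[Pi]]
```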
During the workshop, Chris pointed out a few things that were helpful to keep in mind:
- Mathematica is case sensitive.
- All built-in Mathematica functions are spelled out in full, capitalized, and follow CamelCase rules.
- It is important to distinguish between parentheses (), brackets [], and braces {}:
  - Parentheses (): used to group mathematical expressions, such as (3+4)/(5+7).
  - Brackets []: used when calling functions, such as N[Pi].
  - Braces {}: used when making lists, such as {i,1,20}.
- To calculate an expression, use Shift-Return.
- To find the options of a given function, highlight it in the notebook
and then press Command+Shift+F.
- A semicolon can be used to suppress output.
- Mathematica has four types of equals: =, ==, :=, and ===.
- Assignment: To define a variable to store it in memory, use =. For example, to define z to be 3, write z=3.
- Test for equality: Use == to check for equality. For example, 1-1==0 will evaluate to True and 1==0 will evaluate to False.
- Set Delay: Use := when you want the right-hand side to be evaluated when the function is called rather than when it is assigned. (This is advanced.)
- SameQ: Use === to test whether two expressions are identically the same.
- Adding comments to your notebook will help you remember what your intentions were when you look back on your notebook months later. Comments are also invaluable for other people navigating through your notebook. To write a sentence, create a new text cell by clicking below a cell. When the cursor turns horizontal, type Option-7, or right-click and navigate to Insert New Cell > Text.
- Use the documentation. If you are having trouble with a certain
function, use the ? command to ask for help. For example enter ?
Table and the output will be a yellow box with a quick synopsis of
the command. For more detailed information, click the blue >> at the
bottom right of this yellow box. This will open the Documentation
Center which gives examples of using the command in action, available
options for this command, and anything else you might want to know
about the command.
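The four kinds of equals from the list above, in one quick sketch:

```mathematica
z = 3;            (* assignment: z now stores 3 *)
1 - 1 == 0        (* equality test: evaluates to True *)
f[x_] := x + z    (* set delay: the right side is evaluated each time f is called *)
f[2] === 5        (* SameQ: True, since f[2] evaluates to exactly 5 *)
```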
The following three statements represent a sphere, a cuboid and a cone:
Sphere[{0, 0, 0}, .28]
Cuboid[{-.05, -.05, .26}, {.05, .05, .35}]
Cone[{{0, 0, 0}, {0, 0, -1}}, .3]
You can also combine them using a list:
Graphics3D[{
Sphere[{0, 0, 0}, .28],
Cuboid[{-.05, -.05, .26}, {.05, .05, .35}],
Cone[{{0, 0, 0}, {0, 0, -1}}, .3]
}]
They are combined, but if you press Shift+Return, I suspect you may not get what you expect.
In order to see all the objects combined, you have to wrap each object in Graphics3D[] and pass the results to Show[]:
sphere = Graphics3D[Sphere[{0, 0, 0}, .28]]
cuboid = Graphics3D[Cuboid[{-.05, -.05, .26}, {.05, .05, .35}]]
cone = Graphics3D[Cone[{{0, 0, 0}, {0, 0, -1}}, .3]]
Show[sphere, cuboid, cone]
Like other programming languages, there's often more than one way to do the same thing:
shapes = {
Sphere[{0, 0, 0}, .28],
Cuboid[{-.05, -.05, .26}, {.05, .05, .35}],
Cone[{{0, 0, 0}, {0, 0, -1}}, .3]}
Map[Graphics3D, shapes]
Show[Map[Graphics3D, shapes]]
Graphics3D[Table[Sphere[{i, 0, 0}, .2], {i, 0, 10}]]
To get the workshop attendees started, Chris created a [Mathematica Basics Crash Course][7] and a [3D Graphics in Mathematica][8] notebook.
If you're new to Mathematica, open Chris’s notebooks and execute each command by placing your cursor at the end of the line and pressing Shift+RETURN. Not everything will make sense right away, and that’s okay. Mathematica is packed with information and functionality. At the beginning just try to appreciate what it is doing, and then marvel at what might be possible when you have more experience.
Mathematica has a large number (195) of built-in polyhedra that you can create, export and print. They are accessible using the command PolyhedronData[]. How do I know how many polyhedra Mathematica can represent? Execute the following statement:
Length[PolyhedronData[All]]
To find out the properties associated with PolyhedronData[], execute the following command:
PolyhedronData["Properties"]
And here is a handy statement to explore all the polyhedra:
Manipulate[
Column[{PolyhedronData[g], PolyhedronData[g, p]}], {g,
PolyhedronData[All]}, {p,
Complement @@ PolyhedronData /@ {"Properties", "Classes"}}]
Here's how to create a 3D object that you can view and then export:
myShape = Graphics3D[PolyhedronData["Icosahedron", "Faces"]]
![enter image description here][9]
Save your notebook. Then, to export the object as an STL file:
Export[NotebookDirectory[] <> "myFile.stl", myShape]
And then to see the STL:
Import[NotebookDirectory[] <> "myFile.stl"]
# Back to name tags
Name tags or nameplates often seem to me to be the Hello World of 3D printing. They are generally pretty simple to create with a CAD package and they provide an easy introduction to 3D space. There is nothing inherently wrong with creating name tags or nameplates, but I think 3D printing is capable of so much more, and I try to advocate for finding the real potential for 3D printing in education. Yet here I am, encouraging you to use Mathematica to create a name tag. Why? Because I think it is helpful to start off with a familiar object in an unfamiliar context. Remember, Mathematica won’t let you click and drag. You are going to have to build a name tag from the ground up, and in the process, you’re going to become familiar with some of Mathematica’s mesh commands.
At its simplest, a name tag needs a base and then a top. Remember that an STL file is a watertight mesh. That means that where the base and the top meet, you will need to eliminate the top surface of the base and the bottom surface of the top. Let's start with some text:
Text[Style["Ultimaker", Bold, FontFamily -> "Futura", FontSize -> 50]]
![enter image description here][10]
You now need to convert this text to a 2D mesh. To do that you use the command DiscretizeGraphics[]:
meshText2d = DiscretizeGraphics[
Text[Style["Ultimaker", Bold, FontFamily -> "Futura",
FontSize -> 50]], _Text]
![enter image description here][11]
To make the 2D mesh 3D use RegionProduct[] with the mesh and a vertical line:
RegionProduct[meshText2d, Line[{{0.}, {5.}}]]
![enter image description here][12]
Next you’ll create a 2D mesh of a polygon to act as the base:
base = DiscretizeGraphics[Graphics[{RegularPolygon[8]}]]
![enter image description here][13]
To see the two meshes together:
Show[{RegionResize[base, 1.3], RegionResize[meshText2d, 1]}]
![enter image description here][14]
Next you need to remove the surfaces where the two meshes intersect (RegionDifference[]) and make the base big enough to support the text (RegionResize[]):
polygon = RegionResize[base, 1.3];
text = RegionResize[meshText2d, 1];
intersection = RegionDifference[polygon, text]
![enter image description here][15]
Now you need to create boundaries from the two regions so that you can create walls:
bdrypolygon = RegionBoundary[polygon]
bdrytext = RegionBoundary[text]
![enter image description here][16]
![enter image description here][17]
Now you need to build the actual mesh. You'll need to create three levels: the floor, the top of the base, and the top of the text (you must use floating-point numbers):
level0 = 0.;
level1 = 0.1;
level2 = 0.15;
Now it’s time to put all the parts together:
Show[{
RegionProduct[polygon, Point[{{level0}}]],
RegionProduct[bdrypolygon, Line[{{level0}, {level1}}]],
RegionProduct[intersection, Point[{{level1}}]],
RegionProduct[bdrytext, Line[{{level1}, {level2}}]],
RegionProduct[text, Point[{{level2}}]]
}]
![enter image description here][18]
Export and import. You can use % as shorthand for the last output:
Export[NotebookDirectory[] <> "nametag.stl", %]
Import[NotebookDirectory[] <> "nametag.stl"]
![enter image description here][19]
If the export doesn’t work, it could be that your notebook hasn’t been saved yet. Save the notebook, then export and import.
But what if you want to use a logo instead of text? No problem. Start with an image, convert it to a 2D mesh, resize it, convert to 3D, create a base, eliminate the surface where the meshes intersect, create boundaries, establish level values, and then put them all together.
Find a black and white image online, copy it, and then create a variable to hold it. Use the semicolon to suppress the output:
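For example, after pasting the image into the notebook you can select it and assign it to a variable; or, if you saved the image to disk instead, Import[] works too. A minimal sketch (the variable name cat and the file name cat.png are illustrative assumptions):

(* assumes the image was saved as cat.png next to your notebook *)
cat = Import[NotebookDirectory[] <> "cat.png"];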
![enter image description here][20]
To convert a bitmap to a mesh, first negate the colors, then use ImageMesh[] instead of DiscretizeGraphics[]:
catMesh = ImageMesh[ColorNegate[cat]]
![enter image description here][21]
Next, you need to scale the image:
scaledCat = RegionResize[catMesh, {{-.8, .8}, {-.6, .6}}]
![enter image description here][22]
Let's use a disk for the base. You'll need to convert it to a mesh and make it slightly larger than 1 unit. (Remember, you just scaled your image to be slightly smaller than the unit length.)
base = DiscretizeGraphics[Graphics[{Disk[{0, 0}, 1.1]}]]
![enter image description here][23]
Let’s make a hole in the base:
hole = DiscretizeGraphics[Graphics[{Disk[{0, .9}, .1]}]]
To make sure everything fits:
Show[{base, hole, scaledCat}, PlotRange -> All]
![enter image description here][24]
Now subtract the hole from the base:
base = RegionDifference[base, hole]
![enter image description here][25]
Now find the intersection:
intersection = RegionDifference[base, scaledCat]
![enter image description here][26]
Now you need to get the boundary of the 2D mesh:
bdryCat = RegionBoundary[scaledCat]
bdryBase = RegionBoundary[base]
![enter image description here][27]
![enter image description here][28]
Create the levels:
level0 = 0.;
level1 = 0.15;
level2 = 0.25;
And now to put it all together:
Show[{
RegionProduct[base, Point[{{level0}}]],
RegionProduct[bdryBase, Line[{{level0}, {level1}}]],
RegionProduct[intersection, Point[{{level1}}]],
RegionProduct[bdryCat, Line[{{level1}, {level2}}]],
RegionProduct[scaledCat, Point[{{level2}}]]
}]
![enter image description here][29]
Export and print!
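As with the text name tag, you can export the assembled mesh using % (the file name catTag.stl here is just an example):

Export[NotebookDirectory[] <> "catTag.stl", %]
Import[NotebookDirectory[] <> "catTag.stl"]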
There you have it: a name tag in Mathematica. Not only did you get to learn a little about Mathematica, but you also got to see how meshes from two shapes are constructed. Where do Mathematica and 3D printing fit in the curriculum? Math class is an obvious answer, but it's also well suited for a programming class, and with access to computers, patience, and time, why not bring Mathematica and 3D printing to the art studio?
Here’s a challenge: Try [Mathematica][30] for 15 days and see what you can do with it. Upload your 3D models to [Youmagine][31] and tag them #MathematicaAnd3DPrinting.
[1]: https://ultimaker.com/en/blog/51553-im-okay-with-making-name-tags-with-mathematica
[2]: https://tisch.nyu.edu/itp
[3]: http://hanusadesign.com/
[4]: http://blog.mathzorro.com/
[5]: https://www.wolfram.com/mathematica/
[6]: http://saintannsny.org/
[7]: https://qcpages.qc.cuny.edu/~chanusa/mathematica/Basics.nb
[8]: https://qcpages.qc.cuny.edu/~chanusa/mathematica/GraphicsObjects.nb
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2084polyhedra.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=text.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=01_mathematica_text.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=02_mathematica_3D.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=312203_polygon.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=713504_base_text_together.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=672905_intersection.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=326906_boundary_base.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=943207_boundary_text.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=08_mesh.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=09_stl.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10_cat.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11_remove_color.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12_resize.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=488713_base.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14_show.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=719215_hole.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16_intersection.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=735417_boundary_cat.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=668918_boundary_base.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19_mesh.png&userId=20103
[30]: https://www.wolfram.com/mathematica/trial/
[31]: http://youmagine.com/

Lizabeth Arum, 2017-12-12