
Code puzzles: turning docs into educational games

Teaching programming and assessing learning progress is often a very custom task. I wanted to create a completely automated, "practically" infinite stream of random puzzles that guide a learner towards improving programming skills. I think the major problem is content creation. To test whether the learner knows a programming concept, an exercise needs to be designed wisely. And it is better to have a randomized set of such exercises to test the knowledge reliably and exclude guessing, cheating, and so on. Creating such educational materials is often very tedious, time-consuming, and manual. Exactly like creating good documentation. I will explain one simple idea of using docs to make an educational game. This is just a bare-bones prototype so the inner workings are easy to follow (try it out & share: https://wolfr.am/bughunter ). Please comment with feedback on how we can develop this idea further.


Introduction: efficient use of resources

The docs hold a great wealth and depth of information and should be explored beyond their regular usage. The painstaking, time-consuming manual effort of creating good programming documentation should be used to its fullest potential. Automated game play would be a novel take on docs. We can use existing code examples in docs to randomly pull pieces of code and make programming exercises automatically. Being able to read code and find bugs is, in my experience, one of the most enlightening practices. The goal of the game linked above is to find a defect in the input code (a bug) and fix it. Hence, the "bug hunter". There are just 2 possible outcomes of a single game cycle, and after each you can "try again":

[screenshots of the two possible game outcomes]

Core game code: making puzzles

Wolfram Language (WL) documentation is one of the best I've seen. It has pages and pages of examples, starting from simple ones and going through all the details of usage. Moreover, the docs are written in WL itself, and furthermore, WL can access the docs and even has internal self-knowledge of its structure via WolframLanguageData. For instance, this is how you can show a relationship community graph for symbols related to GatherBy:

WolframLanguageData["GatherBy", "RelationshipCommunityGraph"]

[relationship community graph for GatherBy]

We can use WolframLanguageData to access docs examples and then drop some parts of the code. The puzzle is then for the learner to find what is missing. For the sake of clarity in designing a small working prototype, let's limit the tested WL functions and corresponding docs pages to a small number. So out of ~5000 (and we just released a new addition):

WolframLanguageData[] // Length

4838

built-in symbols, I just take 30:

functions = {"Append", "Apply", "Array", "Cases", "Delete", "DeleteCases", "Drop", "Except", 
"Flatten", "FlattenAt", "Fold", "Inner", "Insert", "Join", "ListConvolve", "Map", "MapThread", 
 "Nest", "Outer", "Partition", "Prepend", "ReplacePart", "Reverse", "RotateLeft", "RotateRight", 
"Select", "Sort", "Split", "Thread", "Transpose"};

functions // Length
30

that are listed on a very old but neat animated page of an essential core-language collection. I will also add some "sugar syntax" to the pool of potentially removable parts of code:

sugar = {"@@", "@", "/@", "@@@", "#", "^", "&"};

So, for instance, out of the following example in the docs we could remove a small part to make a puzzle:

[docs example with a small part of the input code removed]

Here is an example of "sugar syntax" removal, which would be harder for novice programmers to solve:

[docs example with a piece of "sugar syntax" removed]

The next step is to define a function that checks whether a string is a built-in symbol (any of the ~5000 functions) or one of the "sugar syntax" pieces we defined above:

ClearAll[ExampleHeads];
ExampleHeads[e_]:=
Select[
    Cases[e,_String, Infinity],
    (NameQ["System`"<>#]||MemberQ[sugar,#])&&#=!="Input"&
]
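
For instance, applied to a raw box expression resembling a docs input cell (the expression here is just a hypothetical illustration, not pulled from the docs):

ExampleHeads[RowBox[{"Map", "[", "f", ",", "list", "]"}]]

(* -> {"Map"} *)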

The next function essentially makes a single quiz question. First it randomly picks a function from the list of 30 symbols we defined. Then it goes to the "Basic Examples" section of that symbol's doc page, picks a random example, and removes a random part of it:

ranquiz[]:=Module[
    {ranfun=RandomChoice[functions],ranexa,ranhead},
    ranexa=RandomChoice[WolframLanguageData[ranfun,"DocumentationBasicExamples"]][[-2;;-1]];
    ranhead=RandomChoice[ExampleHeads[ranexa[[1]]]];
    {
       ReplacePart[#,Position[#,ranhead]->""]&@ranexa[[1]],
       ranexa[[2]],
       ranhead,
       ranfun
    }
]
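
For a quick local preview of a generated puzzle (this mirrors the rendering used in the deployed form below):

quizloc = ranquiz[];
Rasterize@Grid[{
    {"In[1]:=", quizloc[[1]]},
    {"Out[1]=", quizloc[[2]]}
   }, Alignment -> Left]

(* the removed piece and the function it came from, i.e. the expected answer *)
quizloc[[3 ;; 4]]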

Now we will define a few simple variables and tools.

Image variables

I keep marveling at how convenient it is that the Mathematica front end can make images part of the code. This makes notebooks a great IDE:

[notebook screenshot: image variables defined with pictures embedded directly in the code]
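
The post embeds the pictures directly in the input cells; a rough stand-in using imported files (the file names here are hypothetical) would look like:

imgs  = {Import["bug-right.png"], Import["bug-wrong.png"]};  (* shown for correct / incorrect answers *)
title = Import["bughunter-title.png"];                       (* banner used in the form title *)
logo  = Import["wolfram-logo.png"];                           (* footer logo *)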

Databin for tracking stats

It is important to have statistics for your learning game: to understand how to improve it and where the education process should go. Wolfram Data Drop is an amazing tool for these purposes.


We define the databin as

bin = CreateDatabin[<|"Name" -> "BugHunter"|>]
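
Later, the collected records can be pulled back for analysis, for example by converting the databin to a Dataset (a sketch, using the bin ID that appears in the code below):

Dataset[Databin["kd3hO19q"]]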

Deploy game to the web

To make an actual application usable by everyone with internet access, I will use the Wolfram Development Platform and the Wolfram Cloud. First I define a function that will build the "result of the game" web page. It checks whether the answer is right or wrong and returns a differently designed page accordingly.

quiz[answer_String,check_String,fun_String]:=
(
DatabinAdd[Databin["kd3hO19q"],{answer,check,fun}];
Grid[{
    {If[answer===check,
       Grid[{{Style["Right! You got the bug!",40,Darker@Red,FontFamily->"Chalkduster"]},{First[imgs]}}],
       Grid[{{Style["Wrong! The bug got you!",40,Darker@Red,FontFamily->"Chalkduster"]},{Last[imgs]}}]
    ]},

    {Row[
    {Hyperlink["Try again","https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter"],
    "|",
    hyperlink["Documentation","http://reference.wolfram.com/language/ref/"<>fun<>".html"],
    "|",
    hyperlink["Fun hint","http://reference.wolfram.com/legacy/flash/animations/"<>fun<>".html"]},
    Spacer[10]
    ]},
    {Style["===================================================="]},
    {hyperlink["An Elementary Introduction to the Wolfram Language","https://www.wolfram.com/language/elementary-introduction"]},
    {hyperlink["Fast introduction for programmers","http://www.wolfram.com/language/fast-introduction-for-programmers/en"]},
    {logo}
}]
)
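
Note the lowercase hyperlink used above: it is a small styling helper whose definition is not shown in the post. A minimal sketch of such a helper (the particular styling is an assumption) could be:

hyperlink[label_, url_] := Hyperlink[Style[label, Darker@Red, FontFamily -> "Ayuthaya"], url]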

This function is used inside a CloudDeploy[... FormFunction[...] ...] construct to actually deploy the application. FormFunction builds a query form, a web user interface that poses the question and collects the user's answer. Note that Delayed is used as a wrapper for FormFunction so that the random choices are made anew on each page load.

 CloudDeploy[Delayed[
    quizloc=ranquiz[];
    FormFunction[
       {{"code",None} -> "String",
       {"x",None}-><|
         "Input"->StringRiffle[quizloc[[3;;4]],","],
         "Interpreter"->DelimitedSequence["String"],
         "Control"->Function[Annotation[InputField[##],{"class"->"sr-only"},"HTMLAttrs"]]|>}, 
       quiz[#code,#x[[1]],#x[[2]]]&,
       AppearanceRules-> <|
         "Title" -> Grid[{{title}},Alignment->Center],
         "MetaTitle"->"BUG HUNTER",
         "Description"-> Grid[{
         {Style["Type the missing part of input code",15, Darker@Red,FontFamily->"Ayuthaya"]},
         {Rasterize@Grid[{
          {"In[1]:=",quizloc[[1]]},
          {"Out[1]=",quizloc[[2]]}},Alignment->Left]}
         }]
        |>]],
    "bughunter",
    Permissions->"Public"
]

The result of the deployment is a cloud object at a URL:

CloudObject[https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter]

with the short version:

URLShorten["https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter", "bughunter"]

https://wolfr.am/bughunter

And we are done! You can go to the URL above and play.

Further thoughts

Here are some key points and further thoughts.

Advantages:

  • Automation of content: NO new manual resource development; reuse existing code bases.
  • Automation of testing: NO manual labor of grading.
  • Quality of testing: NO multiple choice, NO guessing.
  • Quality of grading: almost 100% exact detection of mistakes and correct solutions.
  • Fighting cheating: the easily identifiable "find the missing code part" question type makes it easier to spot and block requests for help on friendly forums (such as this one).
  • Almost infinite variability of examples if the whole docs system is engaged.
  • Wide range from very easy to very hard examples (removing multiple functions and syntax pieces can make this really hard).

Improvements:

  • Flexible scoring system based on function usage frequencies.
  • Optional placeholder as a hint for where the code is missing (see the sketch after this list).
  • Using the network of related functions (see above) to move smoothly through topical domains.
  • Using function usage frequencies to serve easier or harder exercises based on test progress.
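
For example, the optional placeholder hint could be a one-line change inside ranquiz: instead of deleting the chosen piece, replace it with a visible marker (a sketch; the marker character is arbitrary):

ReplacePart[#, Position[#, ranhead] -> "\[FilledSquare]"] & @ ranexa[[1]]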

Please comment with your own thoughts and games and code!

POSTED BY: Vitaliy Kaurov
11 Replies
Posted 8 years ago

I've made educational games for nearly twenty years, games that share most of the properties you list, Vitaliy, as advantages of Bug Hunter. Here are my observations about Bug Hunter:

  1. Eye candy does not make games better. Zombies, space ships, etc. are fun to make and pleasing to look at, but they do not make a game fun or playable. (My experience is with high school teens; I may be wrong when it comes to young children.) A good game has a balance of intellectual challenge, chance, and playability. This is the opposite of what most people believe about games. As you develop Bug Hunter, I recommend steering away from the artificial player-versus-bug analogy and just let the player demonstrate mastery.
  2. Humans love scores. If I'm playing a game, I want to know whether I beat my previous score, bested my buddy, or even made it into the top ten on the leaderboard. In fact, that may be my primary reason to play. Perhaps that's coming for Bug Hunter. The one-and-done nature of the game right now seemed lacking.
  3. Mining the docs for your game content is a brilliant idea. After I played seven rounds, I felt like the formula isn’t quite right yet. For example, one question had “MapThread” missing and I incorrectly tried “Riffle.” That’s it. I lost. I looked at the MapThread[] docs, but I didn’t feel like I could use it or even identify it next time. Perhaps the game could show what my attempt would output so I could compare it to the correct output. Even better, perhaps the game could allow me to keep trying multiple times, each time showing me what my output would be (and perhaps lowering my score).
  4. I do agree that there should be some sort of indication where the missing piece goes in the string, maybe a colored rectangle or some other graphic.

As I look back on what I have written, it seems a little negative. I don't mean it as such. I think the game is a great idea with vast potential. I'm looking forward to the next iteration.

POSTED BY: Mark Greenberg

@Mark thanks a lot for the consideration and comments! Here are a few responses:

Eye candy does not make games better. Zombies, space ships, etc. are fun to make and pleasing to look at, but they do not make a game fun or playable. (My experience is with high school teens; I may be wrong when it comes to young children.) A good game has a balance of intellectual challenge, chance, and playability. This is the opposite of what most people believe about games. As you develop Bug Hunter, I recommend steering away from the artificial player-versus-bug analogy and just let the player demonstrate mastery.

This is just a prototype. What I was going for with the thing you call "eye candy" is a hint of storytelling. I mostly agree with your comment, but in the sense that good storytelling is not essential; it just adds to the excitement. I think if wisely executed it could add to the desire to play more. A good example is http://codemancergame.com written by an avid WL user. Of course there is no story in a picture of a bug. But it is a placeholder for it ;-)

Humans love scores. If I'm playing a game, I want to know whether I beat my previous score, bested my buddy, or even made it into the top ten on the leaderboard. In fact, that may be my primary reason to play. Perhaps that's coming for Bug Hunter. The one-and-done nature of the game right now seemed lacking.

Yes, gamification is the true essence of these things. It should be in the game. I mentioned that at the end, among the improvement points: "Flexible scoring system based on function usage frequencies."

3 & 4

Yes, agree absolutely, thanks for the feedback!

POSTED BY: Vitaliy Kaurov

This is great! I'm an adult beginner myself, with no formal training in programming. What I continue to find to be the most challenging aspect of learning the WL is the syntax. You have to get up to speed with the syntax, and quickly, or much of the docs and most of the great examples shared here won't be comprehensible. This is also why, as much as I enjoyed working through the exercises in SW's new EIWL textbook, I feel that it still places too much emphasis on covering functions and not enough (and not soon enough) on syntax. I've tried out BugHunter a few times and haven't encountered an example yet that removed the 'sugar'. So I agree that allowing a user to pick different areas or themes for testing (for example, 'quiz me on syntax', 'quiz me on functions related to network analysis', etc.) would be a good choice for an initial enhancement.

Thank you for making this!

POSTED BY: Arno Bosse

Great points, @Arno, I will take these into account when improving the app. And yes, "sugar syntax" cases are a bit rare; I will work on that too.

POSTED BY: Vitaliy Kaurov

Hello @Vitaliy Kaurov

I enjoyed playing your Bug Hunter web app. The code and the thought process behind this application are brilliant. I did not know the Legacy Animations documentation pages existed. It's a very nice and intuitive way of presenting it to users.

I got a similar idea at the Wolfram Summer School 2016 and built a simpler prototype. Link: http://community.wolfram.com/groups/-/m/t/886715 . It's called Infinite Coding Problems Generator in the Wolfram Programming Language. I used templates and loaded a CSV file that contains questions from the EIWL book.

Your web app is so much fun, and it's giving me new ideas for improving my prototype. These educational applications have tremendous potential and give students' learning a real boost. I'd like to see your app grow as big as http://challenges.wolfram.com.

Thank you

Try my web app here: https://wolfr.am/e0t5Zn50

POSTED BY: Manjunath Babu

This is some great work @Manjunath Babu! Could you briefly explain how you handled the correctness check? Was it a verbatim check of the code? In the Wolfram Language, different versions of the code often yield the correct result; this is why we call it a multi-paradigm language. For example:

data = RandomReal[1, {100, 2}];
ListPlot[data]
Graphics[Point[data]]

Were you taking that into account?

POSTED BY: Vitaliy Kaurov

Hello @Vitaliy Kaurov ,

I haven't taken Random Functions into account yet.

The JSON template looks something like this:

[
    {
        "Number": "11.2",
        "Question": "Make a single string of the whole alphabet, in upper case.",
        "Answer": "`Function`[StringJoin[Alphabet[\"`Alphabet`\"]]]",
        "Template": "Make a single string of the all `Alphabet` alphabets, using `Function`.",
        "Data": {"Function": ["ToUpperCase", "ToLowerCase"], "Alphabet": ["French", "Russian", "Italian", "English", "German"]}
    },
    {
        "Number": "1.3",
        "Question": "Multiply the whole numbers from 1 to 5.",
        "Answer": "Times @@ Range[1, `Number`]",
        "Template": "Multiply the whole numbers from 1 to `Number`.",
        "Data": {"Number": "RandomInteger[{5,10}]"}
    },
    {
        "Number": "4.1",
        "Question": "Make a bar chart of `Number`.",
        "Answer": "`Function`[`Number`]",
        "Template": "Make a `Function` of `Number`.",
        "Data": {"Function": ["BarChart3D", "BarChart", "PieChart", "PieChart3D", "ListLinePlot"], "Number": "RandomInteger[50, 4]"}
    }
]

To check for correctness, I execute the original solution expression and execute the user-provided answer expression. If these two match, then it's correct.

Since the Wolfram Language is a symbolic language, it just compares the fully evaluated forms of both expressions.

However, the problem with this approach is that the prototype doesn't work with random functions, since a random function produces different values in the evaluated result each time.
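
A minimal sketch of the check described above (the helper name correctQ is hypothetical):

(* evaluate both the reference answer and the user's answer and compare the results *)
correctQ[userCode_String, solutionCode_String] :=
  ToExpression[userCode] === ToExpression[solutionCode]

correctQ["Times @@ Range[1, 5]", "5!"]   (* -> True, both evaluate to 120 *)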

POSTED BY: Manjunath Babu

I think there could also be some problem with sorting functions. If the goal is to return a list of elements independent of their order, some approaches may return them in a different order. Then SameQ will not match.
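
For example (a small illustration; sorting before comparison is just one possible workaround):

{3, 1, 2} === {1, 2, 3}                 (* False: same elements, different order *)
Sort[{3, 1, 2}] === Sort[{1, 2, 3}]     (* True after canonical ordering *)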

POSTED BY: Vitaliy Kaurov
Posted 8 years ago

The idea is fresh and new. However, I am not sure whether kids would really love to play it. The illustrated examples all appear to be low-level errors, which could be detected and assisted by the WL grammar color system easily.

Kids nowadays are very, very smart. That means they would also get bored very quickly. If we want these kinds of projects to work, we should design them to be really playful. Maybe we should ask and test with kids.

POSTED BY: Frederick Wu

@Frederick Wu thanks for taking a look and the comments.

However, I am not sure whether kids would really love to play it.

I was actually thinking about targeting adults who are a bit above the beginner level. A few folks I tested with were adults and they enjoyed playing it. Some said that the bugs were too scary though ;-)

The illustrated examples all appear to be low-level errors, which could be detected and assisted by the WL grammar color system easily.

If I understood what you mean by "grammar color system" correctly, then even the examples I showed in the post go undetected by it; there are no colors in the front end highlighting these:

[screenshots of puzzle code where the front end highlighting gives no hint of the missing part]

Moreover, a few testers suggested I use an optional indicator to show exactly where the code is missing, as a hint to help with the solution. I also think the cases that do trigger the "grammar color system" are quite good, because learners need to develop a good habit of understanding how code highlighting works to catch errors during a real programming workflow.

Kids nowadays are very, very smart. That means they would also get bored very quickly. If we want these kinds of projects to work, we should design them to be really playful. Maybe we should ask and test with kids.

Yes, if you mean young kids, they would probably need another type of game, or at least some better game-play dressing for this idea. I'd love to hear your ideas on how to make this work. If you come up with anything, please share.

POSTED BY: Vitaliy Kaurov

Reserved for analytics

POSTED BY: Vitaliy Kaurov