The approaches to natural language programming are described here.
Approach #1: Brute Force Crowd Source. This is the method used in Amazon's Alexa, Apple's Siri, Wolfram's Alpha, Microsoft's Cortana, Google's Home, etc. In all these cases, a programmer imagines a question or command that a user will give the machine, and then writes specific code to answer that specific question ("Alexa, what is the temperature outside?") or carry out that particular command ("Alexa, turn on the living room lights"). Get enough imaginative programmers to write enough routines, et voilà! Apparently intelligent machines that actually exist and work and learn and grow, today.
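A minimal sketch of what Approach #1 amounts to, in Python. The handler names and canned replies are hypothetical, not taken from any real assistant's code; the point is that each anticipated utterance maps to a hand-written routine, so the apparent intelligence is really a large lookup table.

```python
# Hypothetical sketch of Approach #1: one hand-written handler per
# question or command a programmer anticipated in advance.
def temperature_handler():
    return "It is 72 degrees outside."

def lights_handler():
    return "Turning on the living room lights."

# The "brute force crowd source": enough programmers, enough entries.
HANDLERS = {
    "what is the temperature outside": temperature_handler,
    "turn on the living room lights": lights_handler,
}

def respond(utterance):
    # Normalize the utterance and look it up; anything not anticipated
    # by some programmer simply has no answer.
    key = utterance.lower().strip("?!. ")
    handler = HANDLERS.get(key)
    if handler is None:
        return "Sorry, I don't know how to do that."
    return handler()
```

Anything outside the table falls through to the apology, which is exactly the limitation of this approach: coverage grows only as fast as programmers imagine new utterances.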
Approach #2: Dynamically-Generated-User-Tweaked Code. This is essentially described here.
If the programmer is happy with the generated code, (s)he can approve it, and it needn't be saved because it will generate correctly each time before compiling - a label would be attached to the high-level NLP program to tell the compiler that it compiles correctly. If the generated code isn't right, though (or isn't complete), that label will not be attached to the NLP code, and the support code will need to be saved as part of the program instead. Some of that support code could still be auto-generated initially - creating the loop and setting up the count, for example - while leaving the programmer to fill in the content of the loop manually.
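A rough sketch of the workflow just described, in Python. Everything here - the `generate_scaffold` function, the `NlpProgram` class, and the `compiles_correctly` flag standing in for the "label" - is hypothetical, invented to illustrate the idea of auto-generating the loop and count while leaving the body to the programmer.

```python
# Hypothetical sketch of Approach #2: generate scaffolding from a
# natural-language request, leaving the loop body for the programmer.
def generate_scaffold(request, count):
    # Auto-generate the loop and set up the count; the body is a stub
    # the programmer fills in manually.
    lines = [
        f"# Generated from: {request!r}",
        f"for i in range({count}):",
        "    pass  # TODO: programmer fills in the loop body",
    ]
    return "\n".join(lines)

class NlpProgram:
    def __init__(self, source):
        self.source = source
        # Stands in for the label that tells the compiler the code
        # regenerates correctly, so support code need not be saved.
        self.compiles_correctly = False

    def approve(self):
        # The programmer approves the generated code: attach the label.
        self.compiles_correctly = True
```

On this model, an approved program stores only the high-level NLP source plus the label; an unapproved one must also carry its hand-tweaked support code.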
Approach #3 is the one where you build AGI first, so that it can solve all the programming problems itself.
What have Wolfram|Alpha programmers said about Approach #1, Approach #2, and Approach #3 as described above?