Enemy Mine Issue: Bonus not working


The bonus “Use arguments with move statements” in Enemy Mine is not working for me (at least for Python and CoffeeScript programs), though I am not sure why.

There is a “short-code” goal with Lines Of Code: humans=8 on the level, but even though I can win the level with 5 statements, the bonus shows as failed on Run and also on final submit. I tried exactly eight statements and also more than eight (in case the test was somehow inverted), but neither activated the bonus.

Haunted Kithmaze has a similar bonus goal, defined the same way, but it seems to work just fine for me. This suggests the problem lies in this level somewhere and not with the goal testing code, but I just don’t see any difference in the way the goals are defined for the two levels.

While we’re on the topic, I think the bonus goal should probably be renamed because it is possible to solve in 9 statements while actually using arguments in some of the statements. I would suggest having the bonus name make explicit that the programmer is trying to solve with under a certain number of statements (similar to Haunted Kithmaze) since that is really the value of using arguments in this case anyway.


Post your code. It’s finicky but does work.



# Use arguments with move statements to move farther.


# Use arguments with move statements to move farther.

Neither of these gets the bonus.


Remove the 1’s and it will work.


Wow, thanks, you’re right. Never occurred to me to try that and it certainly makes me curious how the game is counting statements and checking the condition.

Given that this level is clearly designed for novices, I wonder whether the way the bonus works is likely to confuse or frustrate them. Even if it hadn’t miscounted my statements, there is still potential for problems because it isn’t really testing what it’s asking for: “use arguments” is not the same as having fewer than 8 statements. It’s true that you can’t solve in fewer than 8 statements without using arguments, but a user might also add say() or some other neutral statements that push them over eight, and the phrasing of the bonus condition won’t make it clear what they actually did wrong. My feeling is that until the game engine can provide a more nuanced and reliable way to check that the lesson of the level has been learned, the level probably shouldn’t include the bonus condition.

By the way, I just noticed that you’re the famous Kevin Holland! I have been enjoying your levels the last few weeks, both for playing and peeking behind the scenes to see how they were constructed. Thanks for contributing them.



Yes, the statement counting is incredibly buggy. I think there are plans to eventually overhaul it, but I’m not sure it’s a priority at the moment. :slight_smile:


You’re right. I’ll remove the bonus for now, since we can’t make it work naturally until I do a lot more work on the statement counter.


I believe that the counter is actually counting statements, not lines.

So self.moveRight(1) counts as 2 statements, the 1 and the moveRight().
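For anyone curious how a tally like that could come about, here’s a rough sketch (purely hypothetical, not CodeCombat’s actual counter) using Python’s ast module: if you count each function call and each literal as a separate node, self.moveRight(1) comes out as 2 while self.moveRight() comes out as 1.

```python
import ast

def count_nodes(source):
    """Hypothetical stand-in for an 'expression' counter: tally each
    function call and each literal constant in the parsed source."""
    tree = ast.parse(source)
    count = 0
    for node in ast.walk(tree):
        # The call itself counts once, and each literal argument
        # counts once, so self.moveRight(1) totals 2.
        if isinstance(node, (ast.Call, ast.Constant)):
            count += 1
    return count

print(count_nodes("self.moveRight(1)"))  # 2: the call plus the literal 1
print(count_nodes("self.moveRight()"))   # 1: just the call
```

That would explain why dropping the 1’s brings the count back under the limit even though the number of lines is unchanged.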


Ah, interesting. Though in that case I’d say the counter is counting “expressions” not “statements.”

Different languages are likely to have different ideas about exactly what constitutes a statement. And I don’t know if it’s the case here, but if the count is being performed after transcoding, things are going to be even more confusing for everyone.

Whatever you use to measure program complexity, if it is going to be scored or rewarded, it needs to be something that the programmer can measure the same way that the referee does, or it will be more a source of frustration than instruction.

Probably, for the purposes of the game, lines of (non-comment) code is the best thing to measure, just because it is so simple for the player to understand and check. That is dissatisfying in many ways, of course: SLOC is a generally poor measure of complexity and can be gamed by writing qualitatively worse, harder-to-understand code, which is not what you want in a game about learning to code.
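To illustrate how simple that measure is to compute and verify, here’s a minimal sketch (assuming, for simplicity, that comments are only the full-line # kind):

```python
def sloc(source):
    """Count non-blank lines that aren't full-line # comments.
    Inline comments and docstrings are deliberately ignored here."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

program = """\
# Use arguments with move statements to move farther.
self.moveRight(3)
self.moveDown()
"""
print(sloc(program))  # 2: the comment and blank lines don't count
```

A player can arrive at the same number just by eyeballing their editor, which is exactly the property a scored measure needs.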

Perhaps some measure of expressions, operations, machine instructions, execution speed, memory footprint, or whatever could be worked into a general score that is displayed in the interface after a compilation or run of the code. If this were consistently displayed, I think players might get enough of a feel for it that you could then base rewards on it, and build some interesting levels that challenge the player to come up with the simplest solution they can. I think this measured complexity feedback, shown consistently and properly gamified, has real potential to help players become better programmers.