Wednesday, 23 September 2015

Rock Paper Scissors AI | Random Algorithm

Introduction

Hello and welcome to part two of our series on AI strategies for the classic game of Rock Paper Scissors. This post will walk through the high-level requirements for the AI strategy, refine them into Gherkin-format acceptance criteria, make those criteria executable using Cucumber, and then implement the requirements using TDD in Java. By the end, we should have a fully implemented, fully tested AI strategy for use in our Rock Paper Scissors game.

The first AI strategy we are going to implement is a random algorithm. It will be pseudo-random rather than truly random, but for a Rock Paper Scissors game this doesn't really matter. The algorithm will choose between rock, paper, and scissors with roughly equal frequency.

The idea behind starting our AI with a pseudo-random algorithm is two-fold. First, it's one of the easiest algorithms to grasp: all we are going to do is select a random move to play. Second, no strategy can do better than break even, in expectation, against an opponent that selects its moves uniformly at random. This gives us a good fallback in case our later, cleverer algorithms do not perform very well.

Formulating the Requirements

We already have a fairly intuitive grasp of what we need our random algorithm to do. We formulated the basic requirement in the introduction above: the algorithm needs to choose to play rock, paper, or scissors seemingly at random.

Formulating the Acceptance Criteria

Fair enough, but that requirement isn't particularly easy to work with, and it gives us no clear way of knowing when we're finished. As stated in the introduction, we are going to use the Gherkin language to articulate the acceptance criteria for our requirements, so that we can more easily tell when we have met them.
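The criteria come out something like the following. This is only a sketch: the step wording and the 100-move count are assumptions drawn from the discussion below, not the original feature text.

Feature: Random algorithm

  Scenario: The algorithm produces every possible move
    Given a random algorithm
    When I ask it to generate 100 moves
    Then the moves produced should include rock, paper and scissors

  Scenario: The algorithm produces an even spread of moves
    Given a random algorithm
    When I ask it to generate 100 moves
    Then rock, paper and scissors should each appear roughly a third of the time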

OK, so the first scenario... fair enough. If we run the algorithm enough times, we should see every possible move come out. I can't imagine producing 100 moves and not getting paper at least once (the odds of that are (2/3)^100, roughly two in a billion billion). In fact, if we didn't, there's probably something wrong with our algorithm. Or we've got really, really unlucky.

The second scenario makes sense from a business perspective: we want to make sure that our algorithm plays random moves, and if it is playing randomly then there should be a fairly even spread of moves. However, a random algorithm will not always produce an even distribution, which means a test asserting one will fail intermittently. Intermittently failing tests are the bane of any automated acceptance suite, so much so that they aren't worth having around. But how bad would the problem actually be?

Running a couple of quick simulations of what our test might look like shows that when generating a random number between 1 and 3 over 1,000 iterations, the number of times any particular number comes out can vary from roughly 280 to 380. Increasing the number of iterations to 1,000,000 gives us tighter numbers, between about 331,000 and 335,000, but that test takes longer to run and still doesn't guarantee we won't see random failures.
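A simulation along those lines is only a few lines of Java. This is a sketch rather than the exact code behind the figures above; the counts will differ on every run, which is precisely the problem.

import java.util.Random;

// Rough simulation of the flaky distribution check: count how often each of
// the three possible values comes out of 1000 random picks.
public class DistributionSimulation {

    public static void main(String[] args) {
        Random random = new Random();
        int iterations = 1000;
        int[] counts = new int[3];

        for (int i = 0; i < iterations; i++) {
            counts[random.nextInt(3)]++; // 0, 1 or 2, standing in for rock, paper, scissors
        }

        System.out.printf("rock=%d, paper=%d, scissors=%d%n",
                counts[0], counts[1], counts[2]);
    }
}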

Refining the Acceptance Criteria

As we said before, a test that fails for any reason other than the code being broken isn't a test worth having, so we're going to have to do something about the acceptance criteria to make them more palatable for testing.
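The refined first criterion might read something like this (again a sketch, with the same assumed wording; only the move count changes):

Feature: Random algorithm

  Scenario: The algorithm produces every possible move
    Given a random algorithm
    When I ask it to generate 1000 moves
    Then the moves produced should include rock, paper and scissors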

What we've done here is up the number of iterations in the first acceptance criterion to 1000, as this adds very little overhead to our test and makes it much more likely to pass every time we run it.

We ran some simulations of the kinds of test we could write for the second acceptance criterion, and all of them failed at random. This led us to decide not to automate that criterion. That's not to say it isn't important, just that we can't sensibly write an automated test for it.

Implementing the Acceptance Criteria

So, let's get started with making those acceptance criteria executable. We are going to use Cucumber to write and run our acceptance tests. Fortunately, we have already formulated our acceptance criteria in Gherkin format, so all we need to do is put them into a .feature file that Cucumber will understand and write the Java step definitions behind it. You can see this work completed below.
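A sketch of what the step definitions might look like, assuming the feature file sketched above, a Move enum, a RandomAlgorithm class with a generateMove() method, and Cucumber-JVM 1.x annotations:

import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;
import cucumber.api.java.en.Then;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import static org.assertj.core.api.Assertions.assertThat;

// Hypothetical step definitions for the feature sketched above.
public class RandomAlgorithmStepDefinitions {

    private RandomAlgorithm randomAlgorithm;
    private final List<Move> generatedMoves = new ArrayList<>();

    @Given("^a random algorithm$")
    public void aRandomAlgorithm() {
        randomAlgorithm = new RandomAlgorithm(new Random());
    }

    @When("^I ask it to generate (\\d+) moves$")
    public void iAskItToGenerateMoves(int numberOfMoves) {
        for (int i = 0; i < numberOfMoves; i++) {
            generatedMoves.add(randomAlgorithm.generateMove());
        }
    }

    @Then("^the moves produced should include rock, paper and scissors$")
    public void theMovesProducedShouldIncludeAllMoves() {
        assertThat(generatedMoves).contains(Move.ROCK, Move.PAPER, Move.SCISSORS);
    }
}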

This is a very simple implementation of the requirements. We create our RandomAlgorithm, call its generateMove() method 1000 times, adding the results to a list, and then use the AssertJ library to make sure that all of the expected moves are contained within that list.

Passing the Acceptance Criteria

Now we need to make the tests pass, taking care to also fulfil the Product Owner's requirements that we haven't captured in automated tests. We already have our Java class and its API defined by our test, so now we just need to test-drive out the actual behaviour of the class using TDD.
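A minimal implementation along the lines described below, assuming a Move enum and a constructor-injected java.util.Random, might look like this:

import java.util.Random;

// Sketch of a random move generator; the Move enum and the exact class
// shape are assumptions based on the description that follows.
public class RandomAlgorithm {

    private final Random random;

    // Random is injected rather than created internally so that it can be
    // swapped or seeded later if we need to.
    public RandomAlgorithm(Random random) {
        this.random = random;
    }

    public Move generateMove() {
        // nextInt(3) returns 0, 1 or 2 with roughly equal probability,
        // which we map onto the three possible moves.
        switch (random.nextInt(3)) {
            case 0:
                return Move.ROCK;
            case 1:
                return Move.PAPER;
            default:
                return Move.SCISSORS;
        }
    }
}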

This implementation passes our acceptance criteria, so we know it will produce all of the moves when run enough times. We have delegated the responsibility for making the moves random to Java's java.util.Random class, which we pass into the constructor so that we can swap it out in the future if we need to. Ideally we would wrap Random in our own interface, which would make that swap even easier and follow the Open/Closed Principle more closely, but that is outside the scope of this post.
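One pleasant side effect of injecting the Random is that a unit test can pass in a seeded instance, which makes the behaviour under test repeatable. A sketch of such a test, assuming JUnit 4, AssertJ, and the same class names as above:

import org.junit.Test;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import static org.assertj.core.api.Assertions.assertThat;

// Illustrative unit test; the class names and the use of JUnit 4 are assumptions.
public class RandomAlgorithmTest {

    @Test
    public void producesEveryMoveWhenRunEnoughTimes() {
        // A fixed seed makes the sequence of "random" moves repeatable,
        // so this test cannot fail intermittently.
        RandomAlgorithm algorithm = new RandomAlgorithm(new Random(42L));

        List<Move> moves = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            moves.add(algorithm.generateMove());
        }

        assertThat(moves).contains(Move.ROCK, Move.PAPER, Move.SCISSORS);
    }
}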

Conclusion

So, that's it. We started with a general idea of what we wanted, fleshed it out into acceptance criteria and looked at the implications of those, moulded them to make them executable and automated, and finally implemented the code, safe in the knowledge that it meets our requirements.

You can check out the entire project on GitHub.

Stay tuned for part 3, where we try to actually implement some strategy for our AI.