Wednesday, 11 May 2016

Build Games With Agile Katas and Kablammo Software

Agile Katas have teamed up with the Kablammo Software Blog to provide you with tutorials for game creation, using our katas as the base for the game.

You can follow along now with the first game tutorial- FizzBuzz on Android using libGDX.

Friday, 16 October 2015

Rock Paper Scissors AI | Pattern Recognition Algorithm


Hello and welcome to part three of our series on AI strategies for the classic game of Rock Paper Scissors. This post will walk through the high level requirements for the AI strategy, refine them into Gherkin format acceptance criteria, make those executable using Cucumber, and then implement the requirements using TDD in Java. At the end, we should have a fully implemented, fully tested AI strategy for use in our Rock Paper Scissors game.

The second AI strategy we are going to be implementing is a pattern recognition algorithm based on Markov chains. We will be implementing this in the simplest way we can for now- by storing a list of the moves played by our opponent and querying those. This algorithm will choose between playing rock, paper, or scissors depending on what it thinks the opponent is going to play, based on what the opponent has previously played.

We are moving on to a pattern recognition algorithm at this point as we already have our random algorithm to start with and to get a feel for what our opponent's pattern is. It's also meatier than the pseudo-random algorithm, but it can start out simple, which lets us iterate on and improve it over time.

Formulating the Requirements

This will be a little bit more difficult than it was for the random algorithm. We know we want the strategy to work by looking at what moves our opponent has previously played, and then base our next move on what we think they're going to play next. Our algorithm needs to keep track of the state of the game, so we need to be able to feed it that data. Then it needs to use this history to decide its next move, based on recognising patterns in the data it has collected. OK, but what do we do if there is no history? We know we already have a strategy which employs a pseudo-random algorithm to choose moves, so we could just use that. But then the two strategies become conflated, and if we ever want to measure each strategy's performance on its own, or combine strategies into a kind of meta-strategy, that won't work very well. Let's defer this decision until later. In the meantime, if we don't have enough history to make an accurate prediction then we'll just use good ol' rock, as nothing beats that.

Formulating the Acceptance Criteria

As before, we are going to need to massage our requirements a little bit to get them into a format that we can use for automating the acceptance criteria. 
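Here's a sketch of what those might look like (the exact step wording is illustrative, but the shape is what matters):

```gherkin
Feature: Pattern Recognition Algorithm

  Scenario: Beat an opponent who looks like they will play rock
    Given a pattern recognition algorithm
    And it looks like the opponent will play rock
    Then the algorithm should play paper

  Scenario: Beat an opponent who looks like they will play paper
    Given a pattern recognition algorithm
    And it looks like the opponent will play paper
    Then the algorithm should play scissors

  Scenario: Beat an opponent who looks like they will play scissors
    Given a pattern recognition algorithm
    And it looks like the opponent will play scissors
    Then the algorithm should play rock
```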

Hey look, we've jumped straight into acceptance criteria marked up ready for Cucumber to execute. The tiny additional syntax shouldn't make any difference when reading the requirements at this stage, and it saves us time converting it later.

You may notice that the acceptance criteria are quite high level- we just say that if it looks like the opponent is going to play a move, we will play the move that beats them. This serves two purposes- the first is that our requirements are readable and obvious to anyone, without requiring any technical knowledge. The second is that it hides the complexity away in the step definitions, so if "it looks like the opponent will play x" changes meaning, only the implementation of the requirement changes rather than the requirement itself.

We also don't have the massive headache of testing for randomness for this algorithm- it should behave the same way every time for a given input, which is going to make testing much easier.

Understanding the Acceptance Criteria

So whilst the acceptance criteria might be clear and easy to test, that doesn't help us if we don't know what they actually mean. We have a very woolly "it looks like the opponent will play a move" which... well, it isn't really that descriptive. What does it mean for it to 'look like' the opponent is going to play a particular move? How do we know?

Well, fortunately our product owner has done a little bit of prior reading and knows about Rock Paper Scissors AIs that already exist and how some of them work. She has stated that we probably want to look into Markov chains, building up a model of the moves the opponent has played and working out the probability that they will play a particular move on this turn. You can see this kind of thing in action and play against the AI on the New York Times Rock Paper Scissors page.

Hopefully we have a good enough idea of what we're trying to build now. Looking at what the computer is thinking on the New York Times page showed a really nice way to iterate on the complexity of our algorithm- their computer checks the last 4 moves, then the last 3, then 2, and finally 1. We could build our algorithm up the other way- start by checking single moves and then move on from there.

Refining the Acceptance Criteria

Ok, let's think. We need to change our acceptance criteria so it is obvious we are building up complexity. We're also going to restructure our acceptance criteria to move the action onto a when step, which we really ought to have done earlier. This gives us a good separation of arrange, act, assert in our tests, which makes them easier to reason about.

Oh, another thought. If our Product Owner is on board, we can structure the increasing complexity as 'difficulty' for the Rock Paper Scissors game that our algorithm is going to be put in. An AI that only takes into account a single move is going to be easier to beat than an AI that takes into account two moves, and so on. We can use this to hide our implementation in the feature file by wrapping it up in 'difficulty levels', and we can work on 'beginner' first. So, what does this look like?
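Something like this- a sketch, assuming a Difficulty concept with a beginner level to start with, and with the action now living on the when step:

```gherkin
Feature: Pattern Recognition Algorithm

  Scenario Outline: Beat the opponent's predicted move on beginner difficulty
    Given a beginner difficulty pattern recognition algorithm
    And it looks like the opponent will play <predicted>
    When the algorithm generates its next move
    Then the algorithm should play <counter>

    Examples:
      | predicted | counter  |
      | rock      | paper    |
      | paper     | scissors |
      | scissors  | rock     |
```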

Looks good to me.  By introducing the concept of difficulty levels, we have managed to split the story into chunks that deliver increased complexity over time, rather than a lot of complexity at once. Even if the Product Owner doesn't go on to use the different levels, the benefit from building up the complexity over time outweighs the slight increase in development overhead it will take to write the code. We have also taken this opportunity to move the action of the acceptance criteria to the when step, giving us a better flow for our tests.

Implementing the Acceptance Criteria

So let's jump into implementing the acceptance criteria. We'll start by writing out the step definitions for what we think we would like our API to be. We know we need to feed the algorithm the move history, and it needs to tell us what move it wants to use next.
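A sketch of what that might look like- the PatternRecognitionAlgorithm API (a constructor taking a Difficulty, addOpponentMove(), and generateMove()), the Move and Difficulty enums, and the Moves helper are all names I've assumed here:

```java
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

import static org.assertj.core.api.Assertions.assertThat;

public class PatternRecognitionAlgorithmStepdefs {

    private Difficulty difficulty;
    private PatternRecognitionAlgorithm algorithm;
    private Move generatedMove;

    @Given("^a (.*) difficulty pattern recognition algorithm$")
    public void aDifficultyPatternRecognitionAlgorithm(String level) {
        // The difficulty comes straight from the feature file, so new
        // levels won't need new step definitions.
        difficulty = Difficulty.valueOf(level.toUpperCase());
        algorithm = new PatternRecognitionAlgorithm(difficulty);
    }

    @Given("^it looks like the opponent will play (.*)$")
    public void itLooksLikeTheOpponentWillPlay(String move) {
        // Feed in a pre-generated history one move at a time, just as the
        // algorithm would receive it in a real game.
        Move predicted = Move.valueOf(move.toUpperCase());
        for (Move played : Moves.historyFor(difficulty, predicted)) {
            algorithm.addOpponentMove(played);
        }
    }

    @When("^the algorithm generates its next move$")
    public void theAlgorithmGeneratesItsNextMove() {
        generatedMove = algorithm.generateMove();
    }

    @Then("^the algorithm should play (.*)$")
    public void theAlgorithmShouldPlay(String move) {
        assertThat(generatedMove).isEqualTo(Move.valueOf(move.toUpperCase()));
    }
}
```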

Bit more to talk about for the step definitions this time, as I've structured the code a bit differently from when we did our Random Algorithm. This time we create our algorithm with a difficulty enum that we pass in from our feature file, so we can build on the difficulty levels without writing new step definitions.

We then go on to feed our algorithm a pre-generated history of moves played by our opponent, so it has some data to work from. This currently resides in a test helper class called Moves, which is effectively just a wrapper around a map from difficulty to a list of moves acting as the history.
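A minimal version of that helper might look something like this- I've keyed it on both the difficulty and the move we expect to be predicted, but the exact shape is an implementation detail:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Test helper: canned opponent histories that should lead the algorithm
// to predict a particular move at a particular difficulty.
public class Moves {

    private static final Map<Move, List<Move>> BEGINNER_HISTORIES = new HashMap<>();

    static {
        BEGINNER_HISTORIES.put(Move.ROCK,
                Arrays.asList(Move.ROCK, Move.ROCK, Move.ROCK));
        BEGINNER_HISTORIES.put(Move.PAPER,
                Arrays.asList(Move.PAPER, Move.PAPER, Move.PAPER));
        BEGINNER_HISTORIES.put(Move.SCISSORS,
                Arrays.asList(Move.SCISSORS, Move.SCISSORS, Move.SCISSORS));
    }

    public static List<Move> historyFor(Difficulty difficulty, Move predicted) {
        // Only beginner exists for now- more difficulties mean more histories.
        return BEGINNER_HISTORIES.get(predicted);
    }
}
```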

We get this list of moves back from the helper class and feed them into the algorithm one by one- just as it would receive them if we were playing the game for real (with our current understanding of the requirements).

Finally we ask our algorithm to generate the next move it wants to play, and check that this move is the move that we expect it to play based on the history.

Passing the Acceptance Criteria

Now comes the fun part- making our acceptance criteria tests pass.
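Here's roughly what a first pass looks like, under the same assumed API as the step definitions above:

```java
import java.util.ArrayList;
import java.util.List;

public class PatternRecognitionAlgorithm {

    // Held on to for when we implement more difficulty levels.
    private final Difficulty difficulty;
    private final List<Move> opponentHistory = new ArrayList<>();

    public PatternRecognitionAlgorithm(Difficulty difficulty) {
        this.difficulty = difficulty;
    }

    public void addOpponentMove(Move move) {
        opponentHistory.add(move);
    }

    public Move generateMove() {
        // No history to work from? Good ol' rock it is.
        if (opponentHistory.isEmpty()) {
            return Move.ROCK;
        }
        // "Predict" that the opponent will simply repeat their last move,
        // and play the counter to it. No pattern recognition in sight.
        Move lastMove = opponentHistory.get(opponentHistory.size() - 1);
        return counterTo(lastMove);
    }

    private Move counterTo(Move move) {
        switch (move) {
            case ROCK:
                return Move.PAPER;
            case PAPER:
                return Move.SCISSORS;
            default:
                return Move.ROCK;
        }
    }
}
```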

This implementation passes our acceptance criteria... but it's not quite right. This is before any refactoring has taken place, so it is a bit of a mess- fair enough. But the main problem is that the algorithm is incomplete and still passes our acceptance criteria. Clearly something is wrong here, and the issue lies in our test cases- they all tested the same thing, just with different moves, and our naive implementation has passed them all at the same time. Revising the test cases for moves results in tests like this:
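In the sketched Moves helper from earlier, that could mean histories where the opponent's most frequent move is deliberately not the move they played last, so a lazy counter-the-last-move implementation gets caught out:

```java
// The revised canned histories in the Moves test helper.
static {
    // rock is most frequent, but scissors was played last
    BEGINNER_HISTORIES.put(Move.ROCK,
            Arrays.asList(Move.ROCK, Move.PAPER, Move.ROCK, Move.ROCK, Move.SCISSORS));
    // paper is most frequent, but rock was played last
    BEGINNER_HISTORIES.put(Move.PAPER,
            Arrays.asList(Move.PAPER, Move.SCISSORS, Move.PAPER, Move.PAPER, Move.ROCK));
    // scissors is most frequent, but paper was played last
    BEGINNER_HISTORIES.put(Move.SCISSORS,
            Arrays.asList(Move.SCISSORS, Move.ROCK, Move.SCISSORS, Move.SCISSORS, Move.PAPER));
}
```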

That should cover all of our bases- the naive implementation no longer passes the acceptance criteria, so we have to actually check frequency to get our algorithm to work properly- which we can go on to do now.
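Here's a sketch of the shape generateMove() ends up in- frequency counting in its rawest form (the real thing was messier, as you're about to hear):

```java
// generateMove() in PatternRecognitionAlgorithm, now with actual frequency
// counting. This also needs java.util.HashMap and java.util.Map imported.
public Move generateMove() {
    // No history to work from? Good ol' rock it is.
    if (opponentHistory.isEmpty()) {
        return Move.ROCK;
    }
    // Count how often the opponent has played each move so far.
    Map<Move, Integer> frequencies = new HashMap<>();
    for (Move move : opponentHistory) {
        Integer count = frequencies.get(move);
        frequencies.put(move, count == null ? 1 : count + 1);
    }
    // Predict the most frequently played move and play its counter.
    Move predicted = null;
    int highest = 0;
    for (Map.Entry<Move, Integer> entry : frequencies.entrySet()) {
        // Ties fall to whichever entry the map happens to hand us first-
        // we have no acceptance criteria saying what should happen here.
        if (entry.getValue() > highest) {
            highest = entry.getValue();
            predicted = entry.getKey();
        }
    }
    return counterTo(predicted);
}
```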

Alrighty. I believe that is what is called "a big mess". Yep, definitely not the prettiest or most understandable code ever written- and certainly not the most efficient. The important thing is whether the algorithm is correct or not- it passes all of our acceptance criteria, which is everything we know about the requirements currently. Something did come up when I was testing different scenarios, though: what happens when there's a tie? We don't have any acceptance criteria for that scenario, so... let's leave that as an exercise for the reader, along with making the algorithm more readable, more efficient, and more maintainable. Don't forget, we still need to implement difficulty so we can build this up into a fully fledged Rock Paper Scissors winning machine!


So, that's that. All done, coding over. I am starting to get concerned that the TDD showcased here isn't so much Red, Green, Refactor as it is Red, Green... That'll Do. I'm going to have to rein that in a bit, especially if we're going to be building on previous work going forward.

As always, you can check out the entire project on GitHub.

Stay tuned for part 4, where we build on this simple pattern recognition algorithm and turn it into something more. 

Wednesday, 23 September 2015

Rock Paper Scissors AI | Random Algorithm


Hello and welcome to part two of our series on AI strategies for the classic game of Rock Paper Scissors. This post will walk through the high level requirements for the AI strategy, refine them into Gherkin format acceptance criteria, make those executable using Cucumber, and then implement the requirements using TDD in Java. At the end, we should have a fully implemented, fully tested AI strategy for use in our Rock Paper Scissors game.

The first AI strategy we are going to be implementing is a random algorithm. This will be pseudo-random rather than truly random, but for a Rock Paper Scissors game, this doesn’t really matter. This algorithm will choose between playing rock, paper, or scissors at a roughly equivalent frequency. 

The idea behind designing a pseudo-random algorithm for our AI to begin with is two-fold. For starters, it's one of the easiest algorithms to grasp- all we are going to do is select a random move to play. Secondly, it is theoretically impossible to reliably beat an opponent that selects their moves uniformly at random- in the long run, the best any strategy can expect against it is a draw. This gives us a good fallback in case our newer algorithms do not perform very well.

Formulating the Requirements

We already have a fairly intuitive grasp of what we need our random algorithm to do. We've formulated our basic requirements in the introduction above- that the algorithm needs to choose to play rock, paper, or scissors seemingly randomly. 

Formulating the Acceptance Criteria

Fair enough, but those requirements aren't particularly friendly for us to work with, and they don't tell us when we're finished. As stated in the introduction, we are going to use the Gherkin language to articulate the acceptance criteria for our requirements, so we can more easily know when we have met them correctly.
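Here's a sketch of where that landed (the step wording is illustrative):

```gherkin
Feature: Random Algorithm

  Scenario: Playing all possible moves
    Given a random algorithm
    When the algorithm generates 100 moves
    Then rock, paper, and scissors should all have been played

  Scenario: Playing moves with an even distribution
    Given a random algorithm
    When the algorithm generates 100 moves
    Then rock, paper, and scissors should each have been played a roughly equal number of times
```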

Ok so the first scenario... fair enough. If we run it enough times, we should get all of the possible moves out. I can't imagine producing 100 moves and not getting paper at least once. In fact, if we didn't, there's probably something wrong with our algorithm. Or we've got really, really unlucky.

The second scenario makes sense from a business perspective- we want to make sure that our algorithm plays random moves, and if it is playing randomly then there should be a fairly even distribution of moves. However, a random algorithm is not always going to produce an even distribution of moves, which will cause our tests to fail at random in turn. Randomly failing tests are the bane of any automated test suite, so much so that they aren't worth having around. But what would the problems actually be?

Running a couple of quick simulations of what our test might look like shows that when generating a random number between 1 and 3 over 1000 iterations, the number of times a particular number comes out can vary from around 280 to 380. Increasing the number of iterations to 1,000,000 gives us tighter numbers of between 331,000 and 335,000- but that test takes longer to run and still doesn't guarantee we won't see random failures.
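The simulations were nothing fancy- something along these lines:

```java
import java.util.Random;

// Quick simulation: how uneven can a uniform choice between three moves
// actually look over a fixed number of iterations?
public class DistributionSimulation {

    public static void main(String[] args) {
        int iterations = 1000;
        Random random = new Random();
        int[] counts = new int[3];
        for (int i = 0; i < iterations; i++) {
            counts[random.nextInt(3)]++;
        }
        System.out.printf("rock: %d, paper: %d, scissors: %d%n",
                counts[0], counts[1], counts[2]);
    }
}
```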

Refining the Acceptance Criteria

Like we said before, a test that fails for a reason other than the code being broken isn't a test worth having, so we're going to have to do something about the acceptance criteria to make them more palatable for testing.
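The refined version looks something like this:

```gherkin
Feature: Random Algorithm

  # The even-distribution criterion is deliberately not automated here-
  # any automated version of it fails at random, so we verify it by hand.

  Scenario: Playing all possible moves
    Given a random algorithm
    When the algorithm generates 1000 moves
    Then rock, paper, and scissors should all have been played
```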

What we've done here is up the number of iterations in the first acceptance criterion to 1000, as this adds very little overhead to our test and makes it much more likely to pass every time we run it.

We ran some simulations of the kinds of tests we could write for the second acceptance criterion, and all of them failed at random. This led us to decide not to automate that set of acceptance criteria. That's not to say it's not important, just that we can't really write an automated test for it.

Implementing the Acceptance Criteria

So, let's get started with making those acceptance criteria executable. We are going to be using Cucumber to write and run our acceptance tests. Fortunately, we have already formulated our acceptance criteria in Gherkin format, so all we need to do is put it into a .feature file that Cucumber will understand, and write the implementation of the steps in Java code behind it. You can see this work completed below.
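The feature file is the refined one from above; here's a sketch of the step definitions behind it (the Move enum and the RandomAlgorithm constructor taking a java.util.Random are the API I've assumed- the latter is a choice we'll justify in a moment):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

import static org.assertj.core.api.Assertions.assertThat;

public class RandomAlgorithmStepdefs {

    private RandomAlgorithm algorithm;
    private final List<Move> generatedMoves = new ArrayList<>();

    @Given("^a random algorithm$")
    public void aRandomAlgorithm() {
        algorithm = new RandomAlgorithm(new Random());
    }

    @When("^the algorithm generates (\\d+) moves$")
    public void theAlgorithmGeneratesMoves(int count) {
        for (int i = 0; i < count; i++) {
            generatedMoves.add(algorithm.generateMove());
        }
    }

    @Then("^rock, paper, and scissors should all have been played$")
    public void allMovesShouldHaveBeenPlayed() {
        // AssertJ's contains() checks the list holds every given value,
        // in any order, alongside any number of other entries.
        assertThat(generatedMoves).contains(Move.ROCK, Move.PAPER, Move.SCISSORS);
    }
}
```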

Very simple implementation of the requirements. We create our RandomAlgorithm, call its generateMove() method 1000 times, add the results to a list, and make sure that all of the moves we pass in are contained within that list using the AssertJ library.

Passing the Acceptance Criteria

Now we need to make the tests pass, taking care to make sure that we also fulfil the Product Owner's requirements that we haven't captured in automated tests. We already have our Java class and its API defined by our test- so now we just need to test drive the actual behaviour of our class using TDD.
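The class itself ends up tiny- a sketch:

```java
import java.util.Random;

public class RandomAlgorithm {

    private final Random random;

    // Random is injected rather than created internally, so it can be
    // swapped out later if we need to.
    public RandomAlgorithm(Random random) {
        this.random = random;
    }

    public Move generateMove() {
        // Pick uniformly from ROCK, PAPER, SCISSORS.
        Move[] moves = Move.values();
        return moves[random.nextInt(moves.length)];
    }
}
```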

This implementation has passed our acceptance criteria- so we know it will produce all of the moves when run enough times. We have delegated the responsibility of ensuring the moves produced are random to Java's Random class, which we pass in to the constructor so we can change it in the future if we need to. Ideally we would wrap the Random class in our own interface so we can change it more easily and follow the Open/Closed Principle more closely, but that is outside the scope of this post.


So, that's it. We've started with a general idea of what we want, fleshed it out into our acceptance criteria and looked at the implications of that, moulded it to make it executable and automated, and finally implemented the code, safe in the knowledge that it meets our requirements.

You can check out the entire project on GitHub.

Stay tuned for part 3, where we try to actually implement some strategy for our AI. 

Thursday, 17 September 2015

Rock Paper Scissors AI | Introduction

Welcome to part one of our series on AI strategies for the classic game of Rock Paper Scissors.

Part one is an introduction to the parts that will follow, laying out what content to expect and how it will be structured. It is also a nice placeholder to link to from the Rock Paper Scissors kata, from which people can select the areas they are most interested in.


We will present the algorithms in the following structure:
  •  High level description of the algorithm we will be employing.
  •  Gherkin format acceptance criteria describing the behaviour of the algorithm.
  •  Java implementation of the algorithm that meets the acceptance criteria.


Wednesday, 2 September 2015

Connect Four Kata Run Through in Java | Part One

Hello and welcome to the first kata run through on the Agile Katas blog! In this post, I will be running through, or performing, the first story in the Connect Four Kata, using Java as my programming language of choice. If you want to follow along with the kata, you can find it over at Agile Katas Connect Four Kata.

Getting Started

First things first- I am using the project template that I use for all katas that I perform. This is a simple Gradle project with a few choice dependencies included- namely Cucumber-Java, AssertJ, and JUnit. I don't tend to use anything else for simple katas, but I may include other dependencies depending on the story and functionality I am working on. I am also using IntelliJ IDEA as my IDE of choice.

The First Story

Because we are using BDD, our first test is the acceptance criteria of our first story. We create a feature file that includes this information, and we have some executable documentation so we know when our task is finished. For Connect Four, this will look something like this:
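Something like this- a sketch, assuming the first story is simply about dropping tokens into columns (the exact wording on Agile Katas may differ):

```gherkin
Feature: Placing Tokens

  Scenario: Placing a token in an empty column
    Given an empty grid
    When a red token is placed in column 1
    Then the token should be at position 1,1

  Scenario: Placing a token on top of another token
    Given an empty grid
    And a red token is placed in column 1
    When a yellow token is placed in column 1
    Then the token should be at position 1,2
```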

The Stepdefs that go along with this feature file will look like this to begin with:
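Cucumber will suggest skeletons along these lines, with every step pending until we fill it in:

```java
import cucumber.api.PendingException;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class PlacingTokensStepdefs {

    @Given("^an empty grid$")
    public void anEmptyGrid() {
        throw new PendingException();
    }

    @When("^a (red|yellow) token is placed in column (\\d+)$")
    public void aTokenIsPlacedInColumn(String colour, int column) {
        throw new PendingException();
    }

    @Then("^the token should be at position (\\d+),(\\d+)$")
    public void theTokenShouldBeAtPosition(int column, int row) {
        throw new PendingException();
    }
}
```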

The implementation of these Stepdefs will change considerably depending on how you decide to tackle the problem. You may want to do a little bit of upfront design at this point, choosing what solution you want to use from analysing the requirements at present and possibly in the future, or by what you want to practice with. I have decided to go with an approach that utilises a single class to begin with- the Grid. This means my filled-in Stepdefs file looks like the following:
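A sketch of those filled-in Stepdefs- placeToken() and tokenAt() are my guesses at a sensible Grid API:

```java
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

import static org.assertj.core.api.Assertions.assertThat;

public class PlacingTokensStepdefs {

    private Grid grid;
    private String lastColour;

    @Given("^an empty grid$")
    public void anEmptyGrid() {
        grid = new Grid();
    }

    @When("^a (red|yellow) token is placed in column (\\d+)$")
    public void aTokenIsPlacedInColumn(String colour, int column) {
        lastColour = colour;
        grid.placeToken(colour, column);
    }

    @Then("^the token should be at position (\\d+),(\\d+)$")
    public void theTokenShouldBeAtPosition(int column, int row) {
        // The most recently placed token should have landed here.
        assertThat(grid.tokenAt(column, row)).isEqualTo(lastColour);
    }
}
```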

The Grid class doesn't contain anything except a (possibly definitely fluid) API at this moment, and it looks like this:
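Just enough to compile against:

```java
public class Grid {

    public void placeToken(String colour, int column) {
        // Not implemented yet- driven out by the acceptance tests.
    }

    public String tokenAt(int column, int row) {
        return null;
    }
}
```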

Usually this is where I would start to really test drive the code out using TDD, but I'm going to try something new this time and only use the BDD tests to ensure the behaviour is as expected. The idea is that if you work entirely from the BDD, any changes you make to the implementation of the feature shouldn't cause the tests to break, making them less fragile and brittle. This should work for simple classes and simple cases, so this Connect Four kata should be a good test of how well it does here- and if we decide to scale the kata up into a bigger project, we'll be able to see how well pure behaviour driven development scales.

After a very quick implementation, I end up with the following Grid class:
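A sketch of that quick and dirty version:

```java
import java.util.HashMap;
import java.util.Map;

public class Grid {

    // Quick and dirty: tokens keyed by "column,row", plus a running count
    // of how many tokens each column already holds.
    private final Map<String, String> tokens = new HashMap<>();
    private final Map<Integer, Integer> columnHeights = new HashMap<>();

    public void placeToken(String colour, int column) {
        Integer height = columnHeights.get(column);
        int row = (height == null ? 0 : height) + 1;
        columnHeights.put(column, row);
        tokens.put(column + "," + row, colour);
    }

    public String tokenAt(int column, int row) {
        return tokens.get(column + "," + row);
    }
}
```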

This implementation isn't the best code- it's quick and dirty but it gets the job done. Part of this is because our first instinct should be to get the tests to go green- once we have met the behaviour we have delivered the value for our customer in the short-term, and have something that we can deliver now and improve upon later. This is easier to do when the quick and dirty work only takes 5 minutes like it did here. The other reason is because I have a fairly strong feeling that the user story we just implemented didn't have all of the information we needed for our real solution- so getting it to pass and moving on before trying to refactor it nicely is a good idea- the implementation is probably (definitely) going to change in the next story.


And that's effectively it for the first story. At this point we can go back and refactor any of the code that we're not happy with, safe in the knowledge that any changes we make are covered by our acceptance criteria tests- and as long as they stay green, we know our product owner will still be happy with the behaviour. I won't be performing any refactoring at this point, as I have a sneaking suspicion that this implementation won't stand up to future requirements, and refactoring effort will be better spent after implementing those.

Stay tuned for part two, where we implement the second story and give the first lot of code a much-needed refactor. 

Thursday, 27 August 2015

New Kata | Connect Four

The newest kata in the Agile Katas family is Connect Four! The first feature is focused on getting a minimum viable game to be able to play- nothing fancy. All that is involved is the ability to place tokens in the Connect Four grid, and a way to check if there are any winners.

[Image: the less-than-inspirational image for the Connect Four kata]
We have some ideas for where we want to take this kata, and if you're thinking that it's exactly the same way we take all of our other katas (add a basic AI to play against, add variations to rules) then you would be... right!

Now, does that sound interesting or what? So go on, arm yourself with your programming language of choice and have a go at the Agile Katas Connect Four Kata.

Stay tuned.

Tuesday, 25 August 2015

Getting Started

I suppose first things first is Hello, and welcome to our blog! And when I say our, of course I mean my, because there is no we at Agile Katas, not really. But we can stick with the fiction: we are Kablammo Software, producing a multitude of ill-conceived games and applications with no cost-benefit analysis, and you are the programmers lucky enough to implement the features. I am Sean O'Toole, writer and maintainer of Agile Katas, creator of the katas, and managing director of the real Kablammo Software- which doesn't mean anything at all.

I wanted to do a Getting Started style post on its own to begin with, so I could get a feel for how this thing works- what the format is like, and how to make it look at least somewhat presentable. Writing about the act of starting a blog also means I can keep future posts relatively clear of cruft and focused on whatever their actual subject is.

The kind of things you're going to find here are similar to what you'd find on our social media platforms currently- posts about new katas, new solutions, and updated features for existing katas. But the blogging platform also gives me the flexibility to go into more depth about other aspects of creating the katas- I have more than 140 characters to play with here! As such, there will also be posts that dig deeper into the solutions to katas, the way katas are chosen, and the changing paradigms you'll see in different katas as I have learnt from mistakes and worked with people smarter than myself.

Another key area I want to post about is supporting material for some of the more "difficult" features in the katas- stuff that might not be so obvious and can't really be explained by the business in the fiction we have at Agile Katas. The idea that sparked the creation of the blog in the first place is the second feature of Rock Paper Scissors, where we add a rudimentary AI to play against.

Now, the first time I ever imagined writing any kind of artificial intelligence (and I use that term extremely loosely here), I was at a complete loss. Where would I even begin? Well, that's not completely true- you begin at Google, of course. I wanted some kind of reference for the kata itself though, and whilst there is a lot of existing reading around the web, I didn't want to link anywhere I didn't have control over, as links rot with time, and I might not always have time to find alternatives.

Thus the idea of starting a blog to amalgamate that knowledge (or a very limited subset of it) became a reality... with this post, I suppose. So, if you can follow this kind of thing, then please do- it's really cool to see that people are enjoying the work you put out.

I wanted something cool to say at the end that I could sign all of my posts with like... "Peace Out" or something, but nothing is coming to me.

Stay tuned!

That'll do for now.