Last week, I released Empty Black, my 2D shooter/puzzler/platformer. In this article, I’ll describe how I made the player movement deft and intuitive. Play the game before you read on, so you’ll know what I’m talking about.
My general approach was to change something, then try it out. I took the ideas for adjustments from several sources.
One. I examined the parameters affecting the movement of the player characters in other 2D platformers. Is the floor slippery? What is the ratio of sideways movement to jump height? Does the character accelerate as it moves? Is the character’s jump height affected by the length of time the player holds the jump button? Is the character slowed when it bumps into a moveable object?
Two. I examined the unusual behaviours of the player characters in other 2D platformers. Super Meat Boy makes the character leap away from the wall automatically when wall-jumping. Spelunky lets the character pull himself up and over ledges. In Castlevania, the character can do an extra jump while in mid-air.
Three. I got people to play test. Kemal told me that the character movement should be effortless. Specifically, if the character hits a wall near the top, it should slip up and over. Ricky told me it was weird that the player had no control over the height of the character’s jump. He showed me how, when hopping over an obstacle in a room with a low ceiling, he bumped his head. Ricky also pointed out the jarring effect of the initial slow down of the character when it lands after a jump. Everyone told me that airborne movement was too sensitive. Everyone told me that wall-jumping was too finicky.
Four. I read pieces written by programmers about their character movement algorithms. These pieces were mostly confined to short comments, rather than in-depth analyses. Hence, this article.
To the algorithm.
The short version: a pile of hacks.
The long version:
The player presses the jump key. The first question is: can the character jump? Which means: is the character in contact with anything that can be used as a place to jump from?
Empty Black uses Box2D to control the physics of the game world. Any movements are subject to Box2D’s models of the forces of friction and gravity. Further, Box2D handles the reactions of objects that collide: bounces, shoves, spins and slides. The game can interrogate Box2D and ask what objects a particular object is touching. If the character is currently touching something solid, the character may jump.
Except, it’s not quite that simple. As well as leaping from the ground, the character can wall-jump. This means landing and clinging onto a wall, then jumping again, away from the wall.
This technique is frequently used by the player to get the character up a narrow shaft. They jump the character back and forth between the vertical, parallel walls of the shaft, ascending with each jump.
This ability is bounded. The player may jump the character out from a wall and land it on the same wall. At this point, it may not jump again. If the player tries to make it jump, the character will fall. This bound is there to improve the gameplay. It is easier to design fun levels if I can rely on the unscalable nature of a solitary wall.
The bound makes it harder to decide if the character has a solid footing. The character should be able to jump from the ground as many times as the player likes. But it should not be able to jump from the same wall twice in a row.
Fortunately, Box2D has a metaphysical object that complements the corporeal walls, enemies and bullets: the sensor. This spiritual creature has no physical presence. It has a location in the world and it registers collisions. The programmer can interrogate it about such collisions, just as with physical objects.
What I did was to attach a wide, short sensor to the bottom of the character. It would look like this, if you could see it:
Notice how the sensor overlaps the ground.
Now, instead of asking the character object about collisions, the game asks the character’s bottom sensor.
How does that help? It doesn’t. But I can attach two more sensors to the character, one on each of its sides.
This means that the game can ask each sensor if it is touching anything. If the bottom sensor is touching some solid footing, jumps are always allowed. If only a side sensor is touching a solid footing, the game must investigate further.
Jumps will be allowed in all cases but two.
One. The character lands against the wall it has just jumped from.
The game keeps a record of the last sensor that registered the solid footing for a jump. If that was a side sensor, and the same sensor registers the current solid footing, the jump is not allowed.
If the player loses control during a wall-jump ascent, the character will begin to fall. The player might manage to press jump as it hits a wall on its way down. It is possible that this wall will be the one they most recently jumped from. If it is, the jump would be prevented. However, there is an exception that permits the jump if the character is lower than it was when it jumped the time before. This exception allows the player to recover from their mistake. It makes the controls more forgiving.
Two. The character lands and sticks to the wall. The player continues to press the direction key that holds the character against the wall. They press jump again. The jump is not allowed. If it were, the character would slide up the wall like this:
To stop this, the game only allows the jump if the player is not pressing the character into the wall.
There is an exception. If the character is near the top of a wall, it may jump. This lets it slip up and over the wall.
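The whole jump-permission decision can be sketched as one function. This is an illustration in Python, not the game’s actual code: the arguments are hypothetical stand-ins for facts the real game reads out of Box2D’s sensors, and y increases upward here.

```python
def can_jump(on_ground, wall_side, last_jump_side, y, last_jump_y,
             pressing_into_wall, near_wall_top):
    # Solid footing underneath always permits a jump.
    if on_ground:
        return True
    # No wall contact either: the character is airborne with nothing to push off.
    if wall_side is None:
        return False
    # Case one: same wall as the last jump. Allowed only if the character
    # is now lower than where it last jumped from - the recovery exception.
    if wall_side == last_jump_side and y >= last_jump_y:
        return False
    # Case two: the player is pressing the character into the wall. Allowed
    # only near the top of the wall, so the character can slip up and over.
    if pressing_into_wall and not near_wall_top:
        return False
    return True
```

For example, a character stuck to the left wall it just jumped from may not jump again at the same height, but may jump if it has fallen below its previous take-off point.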
The game now knows if the character is allowed to jump. To enact the jump, upward force is applied for a single instant. The character’s velocity is high when it first starts moving. It is progressively attenuated by gravity. Some distance into the air, all the character’s momentum will be gone and it will begin falling.
If the player releases the jump button before the character has reached its peak in the jump, the character will immediately begin falling. I stole this idea from Super Meat Boy. The effect is that of a glass ceiling being inserted above the character’s head. This allows the player to control the height of the jump. It stops Ricky getting a sore head.
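The impulse and the glass ceiling can be sketched like this. The constants and frame-step function are invented for illustration; in the game, the impulse is applied through Box2D and gravity is Box2D’s own.

```python
JUMP_IMPULSE = 10.0   # hypothetical upward velocity applied for a single instant
GRAVITY = -30.0       # hypothetical downward acceleration

def start_jump(char):
    # Upward force applied for a single instant: set the vertical
    # velocity once, then let gravity attenuate it each frame.
    char.vy = JUMP_IMPULSE

def on_jump_released(char):
    # The glass ceiling: if the button is released while the character
    # is still rising, kill the upward velocity so it begins falling.
    if char.vy > 0:
        char.vy = 0.0

def step(char, dt):
    # One frame of simulation.
    char.vy += GRAVITY * dt
    char.y += char.vy * dt
```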
The magnitude of the force of the jump is usually constant. However, it is increased in two situations.
First, when the character is carrying a crate. This stops the weighted character’s jumps turning into lame little bunny hops.
Second, when the character is falling faster than usual. Imagine the character is wall-jumping and, because of a player fumble, it hits the next wall at a point lower than the point it jumped from. To help the player recover, their next attempt at a jump from a wall will lift the character with a force greater than normal. The extra force is proportional to the disparity between the normal and actual falling speed.
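The two boosts can be sketched as a small function. The base impulse and scaling constants are made up for illustration; only the shape of the rule comes from the game.

```python
BASE_JUMP_IMPULSE = 10.0   # hypothetical normal jump strength
CRATE_BONUS = 4.0          # extra lift to counteract a carried crate's weight
RECOVERY_SCALE = 0.5       # extra lift per unit of excess falling speed
NORMAL_FALL_SPEED = 6.0    # hypothetical usual falling speed

def jump_impulse(carrying_crate, fall_speed):
    impulse = BASE_JUMP_IMPULSE
    # Carrying a crate: boost the jump so it isn't a lame little bunny hop.
    if carrying_crate:
        impulse += CRATE_BONUS
    # Falling faster than usual (a fumbled wall-jump): boost in proportion
    # to how far the fall exceeds the normal falling speed.
    if fall_speed > NORMAL_FALL_SPEED:
        impulse += RECOVERY_SCALE * (fall_speed - NORMAL_FALL_SPEED)
    return impulse
```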
Now: sideways movement.
The player presses the left arrow key. What happens?
A leftward force is continually applied until the player releases the key. The magnitude of the force depends on how fast the character is currently moving. If it is at top speed, no force is applied. If it is stationary, a large force is applied. The idea is to get the character to top speed as fast as possible, then hold it at that speed. This makes the movement more predictable. It also solves Ricky’s second problem. The character immediately regains top speed after it lands a jump.
The player releases the key that moves the character left. What happens?
The character immediately stops. There is no slipperiness. Thus, the character is easier to control.
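Both sideways rules can be sketched together: a force that scales with the gap between current speed and top speed, and a dead stop on key release. The constants are hypothetical.

```python
TOP_SPEED = 8.0   # hypothetical maximum sideways speed
MAX_FORCE = 50.0  # hypothetical force applied when stationary

def sideways_force(vx, direction):
    # direction is -1 (left), +1 (right) or 0 (no key held).
    if direction == 0:
        return 0.0
    # Large force when stationary, shrinking to zero at top speed,
    # so the character reaches top speed fast and then holds it.
    speed_toward = vx * direction
    deficit = max(0.0, TOP_SPEED - speed_toward)
    return direction * MAX_FORCE * (deficit / TOP_SPEED)

def on_move_key_released(char):
    # No slipperiness: stop dead the instant the key is released.
    char.vx = 0.0
```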
When I was designing the character movement, it was hard to keep the code tidy. I tried to find the smallest number of rules that required the least subsequent modification with hacks.
One. I tried to eliminate the slipperiness of the floor by setting the friction very high. But this had many undesired consequences. It was hard to get the player moving sideways at top speed without also making airborne movement too sensitive. Crates could no longer be shoved, and mis-throws would leave them perched on ledges. Instead, I added code to immediately stop the character upon release of the move key.
Two. Slipping up and over a ledge was just a matter of the player pressing the jump key near the top of a wall. But if the character was only halfway up a wall, such a jump would mean it slid upwards and got stranded. I could have left that behaviour as it was. But that would have made wall-jumping harder. I could have automatically jumped the character away from the wall. But that seemed too nannyish. So, I prevented jumps when the player was pressing into the wall but was not near the top of it.
In both cases, it was worth slightly modifying a generally correct behaviour with an ugly but small hack that had no repercussions.
The general approach was to try stuff to explore the options. But the goal was a few broad rules. I did this by only deciding on a permanent change after a careful check of the ramifications. And I sometimes entirely remade the rules, as with the addition of the sensors to the character.
Here is a pocket-sized summary for you.
Examine the overall behaviours and specific parameters of other games. Get as much player feedback as you can. Make many changes. Tidy up after yourself.
A couple of Sundays ago, I wrote an implementation of Dijkstra’s algorithm in Clojure. The core algorithm came to twenty-five lines. I banged out the code as I sat in a coffee shop with some other people from the Recurse Center. I ran my program on a data set that has two hundred nodes in a densely interconnected graph. The program produced the shortest paths from a start node to all other nodes in the graph in about 200 milliseconds.
I closed my laptop, finished my peanut butter, banana and honey sandwich, said goodbye to my friends and spent the rest of the afternoon wandering around the Lower East Side in the dusty sunlight.
By Monday evening, my life had begun falling apart.
Dijkstra’s algorithm is a way to find the shortest route from one node to another in a graph. If the cities in Britain were the nodes and the roads were the connections between the nodes, Dijkstra could be used to plan the shortest route from London to Edinburgh. And plan is the key word. The algorithm does reconnaissance. It does not go on a road trip.
Imagine you are the algorithm. You examine all the cities directly connected to London. That is, all the cities connected to London by a single stretch of road. You record the distance between London and each city. You note that you have now explored London. You focus on the city that is the closest to London by road. Let us say Brighton. You examine each of the cities directly connected to Brighton. You ignore London because you have already explored it. For each city, you record the distance from London to Brighton to the city. You note that you have explored Brighton. You now focus on the next unexplored city nearest to London. You continue like this until your destination comes up as the next city to focus on. When this happens, you have found the shortest distance to Edinburgh.
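The exploration above can be written down as a short sketch. This is in Python rather than my Clojure, and it uses the slow approach my coffee-shop version used: a linear scan to find the next city. The cities and distances are invented.

```python
def dijkstra(graph, start):
    # Best known distance from the start to each city seen so far.
    dist = {start: 0}
    unexplored = set(graph)
    while unexplored:
        # Focus on the unexplored city nearest to the start.
        # This linear scan is the slow part.
        reachable = [c for c in unexplored if c in dist]
        if not reachable:
            break
        city = min(reachable, key=dist.get)
        unexplored.remove(city)  # note that you have explored it
        # Record improved distances to each directly connected city.
        for neighbour, road in graph[city].items():
            new_d = dist[city] + road
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
    return dist

# A toy road map; the distances are made up.
roads = {
    "London": {"Brighton": 47, "Birmingham": 101},
    "Brighton": {"London": 47, "Birmingham": 140},
    "Birmingham": {"London": 101, "Brighton": 140, "Edinburgh": 250},
    "Edinburgh": {"Birmingham": 250},
}
```

Running `dijkstra(roads, "London")` plans the shortest distance from London to every other city without ever going on the road trip.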
On Monday, at the Recurse Center morning check-in meeting, I blithely reported that I was going to spend the next couple of days implementing a Fibonacci heap in Clojure.
I had read in the Dijkstra Wikipedia article that two men, Michael Fredman and Robert Tarjan, had reduced the running time of Dijkstra by inventing the Fibonacci heap and using it for node selection. Dijkstra’s algorithm still did the route planning. The Fibonacci heap just helped out by quickly finding the city to explore next.
This search is a time-consuming procedure. You must go through the list of all the unexplored cities in Britain and find the one for which you have noted the shortest distance from London. When I had implemented Dijkstra in the coffee shop, my code had just gone through the whole list and returned the one with the shortest distance. This was slow.
The Fibonacci heap solves this problem in a different way. It orders the cities by their distance from London. Therefore, when you want to get the next city to explore, you just take the first one.
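Python’s built-in `heapq` makes a handy stand-in for the idea, even though it is a simpler structure than a Fibonacci heap. The cities and distances below are invented.

```python
import heapq

# (distance-from-London, city) pairs, pushed in arbitrary order.
frontier = []
for entry in [(101, "Birmingham"), (47, "Brighton"), (351, "Edinburgh")]:
    heapq.heappush(frontier, entry)

# The next city to explore is simply the first one.
nearest = heapq.heappop(frontier)
```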
To explain, I must set aside the cities and roads metaphor. To my great regret, I cannot take up a new metaphor as a replacement. Heaps are not quite like family trees, nor quite like green trees that grow. So, instead, a figure:
This is an ordinary heap. The Fibonacci heap is a little more complicated, but the idea is the same. Each circle - each node - has zero or more child nodes. Further, each node includes a numerical annotation. This is its sorting value, or key. The nodes are connected into a tree where the one with the lowest key is at the root.
I started by producing diagrams showing examples of the core Fibonacci heap operations being applied to imaginary Fibonacci heaps. With the help of multiple reads through the Wikipedia article, I drew out each operation.
By the end of Monday, I had a forest of trees drawn out. I had mastered the algorithm and I could confidently explain it to Vera and Pepijn.
That night, as I was lying in bed, waiting to fall asleep, I realised that I was thinking about Fibonacci heaps. My downfall had begun.
I came into the Recurse Center on Tuesday and spent another day just writing in my notebook. This time, I produced rough blocks of pseudocode for the core algorithms.
In the morning, I had taken the G train to the C train to school. I’d been reading How to Solve It, a classic mathematical text that lays out an informal method for solving any problem. I had also been half thinking about Introduction to Algorithms, an exploration of algorithms for manipulating lists, trees, heaps and graphs.
It is one of the most glorious feelings to look back on a period and realise that it is suffused with what you were working on at the time. To realise that your work seeped into your journeys, your conversations, your relationships. I don’t mean that you looked up at the trees rattling in the wind and saw upended Fibonacci heaps. It’s much simpler than that. You were thinking about your work as you stood on the pinchingly hot pavement outside The Sunburnt Cow, as you discussed with your dad how the brush strokes for the sandstone in Roadway with Underpass somehow suggest the sparkliness of the sun. Your work was just gently present with you, providing a back story, or an invisible friend.
Thus, the next two weeks were steeped in maths and heaps. I thought about them in the shower. I thought about them on the G train. I thought about them in this bar that I and some other Recursers ended up in following an attempt to go to a bossa nova night after I had been disabused of the notion that bossa nova is drum-heavy salsa-ish music by a few minutes plugged into James’s iPod as we waited for the train and had discovered that it is actually much lighter and more deft and sounds like a skittering barbershop quartet.
I began writing code. Immediately, I discovered that tree structures are more complicated in languages like Clojure that have immutable state. This is because changing a node requires rebuilding large parts of the tree. Imagine you want to give a new child to a node that is somewhere far down the tree. You must duplicate all the nodes and branches that comprise the ancestry of the parent node. This is time-consuming and takes up a lot of memory.
Pepijn pointed me towards a solution: a concept called the zipper. This structure expresses a tree as the node you are currently focused on, and the rest of the tree as viewed from the current node. For example, for this tree:
a zipper focused on node J would look like this:
Recall the node that wanted a new child. Now that the tree is modelled as a zipper, giving birth is much easier. I am going to create a new zipper that is an amalgamation of the reusable parts of the pre-child zipper and the new pieces I must create to represent the modification. The time and memory required depend on how much of the original I can use. In this case, I can reuse the path and the left and right contexts. So, the only new thing I must create is a new focus that consists of the new child, L, attached to its parent, J:
If you want to learn more about zippers, I recommend reading this excellent article, from which I adapted the previous example.
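A tiny zipper can be sketched in a few functions. This is an illustration in Python rather than Clojure’s `clojure.zip`: a tree node is a `(value, children)` pair, and a zipper is the focused node plus the context needed to rebuild the rest of the tree. The node names follow the J-and-L example above.

```python
def zipper(tree):
    return (tree, None)  # focus on the root; no context above it

def down(z, i):
    # Move focus to the i-th child, remembering the parent's value,
    # the left and right siblings, and the context above.
    (value, children), ctx = z
    return (children[i], (value, children[:i], children[i + 1:], ctx))

def up(z):
    # Rebuild the parent node from the focus and its context.
    focus, (value, left, right, ctx) = z
    return ((value, left + (focus,) + right), ctx)

def insert_child(z, child):
    # Give the focused node a new child. Only the focus is rebuilt;
    # the path and the left and right contexts are reused as-is.
    (value, children), ctx = z
    return ((value, children + (child,)), ctx)

def root(z):
    # Zip all the way back up to recover the whole tree.
    while z[1] is not None:
        z = up(z)
    return z[0]

tree = ("F", (("J", ()), ("K", ())))
z = down(zipper(tree), 0)        # focus on J
z = insert_child(z, ("L", ()))   # J gains the new child, L
```

Zipping back up with `root(z)` yields the tree with L attached under J, without touching K’s subtree.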
On with the story.
Vera and I spent the next two days buried in code. By the time the Recurse Center demo day came, we had nothing finished that we could show. We could only talk about the way the Fibonacci heap algorithm works.
Vera explained that a Fibonacci heap is not one heap (tree), but a list of self-contained sub-heaps (trees). She said there is a minimum pointer which indicates the sub-heap root node with the smallest sorting value. I can’t remember if she used a metaphor of cities on a map, and if she explained that, in this metaphor, the keys on the nodes are the distances of the nodes from the starting city.
She explained the core Fibonacci heap operations.
She said that merge is the way that a new node is added to the Fibonacci heap. The node is inserted as the root of a new sub-heap in the list of sub-heaps. If the new node’s sorting value is smaller than the node currently indicated by the minimum pointer, the pointer is updated.
She said that find min takes the pointer to the smallest root node, follows it and returns the node itself.
She said that extract min takes the pointer to the smallest root node, follows it and removes the node from the sub-heap to which it belongs.
She showed how extract min tidies up by going through the former children of the node and making each one into a new stand-alone heap in the list of sub-heaps.
She showed how extract min tidies up further by consolidating the list of sub-heaps, combining pairs of sub-heaps that have the same number of direct descendants.
She explained how the decrease key operation works. She said that the first step is simply reducing the sorting value of a node. She said that, if this reduction makes the key smaller than the key of the node’s parent, the node must be removed from the sub-heap and made into the root of a new sub-heap. She also said that the minimum pointer might need to be updated to point at a new minimum root node.
She explained that, if any node loses more than one child to a decrease key operation, it, too, must be made into a new sub-heap.
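The operations Vera described can be sketched in miniature. This is a simplified illustration in Python, not our Clojure code: real Fibonacci heaps also track marks for the cascading cuts of decrease key, which this sketch leaves out. A node is a dict with a key and a list of children; the heap is a list of sub-heap roots plus the minimum pointer.

```python
def node(key):
    return {"key": key, "children": []}

def make_heap():
    return {"roots": [], "min": None}

def merge(heap, n):
    # The node becomes the root of a new sub-heap in the list,
    # and the minimum pointer is updated if its key is smaller.
    heap["roots"].append(n)
    if heap["min"] is None or n["key"] < heap["min"]["key"]:
        heap["min"] = n

def find_min(heap):
    # Follow the minimum pointer and return the node itself.
    return heap["min"]

def extract_min(heap):
    smallest = heap["min"]
    heap["roots"].remove(smallest)
    # Each former child becomes a stand-alone sub-heap.
    heap["roots"].extend(smallest["children"])
    # Consolidate: combine sub-heaps that have the same number of
    # direct descendants, until all the degrees are distinct.
    by_degree = {}
    for root in heap["roots"]:
        while True:
            other = by_degree.pop(len(root["children"]), None)
            if other is None:
                break
            if other["key"] < root["key"]:
                root, other = other, root
            root["children"].append(other)  # larger key becomes a child
        by_degree[len(root["children"])] = root
    heap["roots"] = list(by_degree.values())
    heap["min"] = min(heap["roots"], key=lambda r: r["key"], default=None)
    return smallest
```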
The day following, a Sunday, I spent the afternoon sat in a coffee shop in a beam of sunlight, finding some respite from the turmoil. I wrote code that was isolated from the main problem. It produces the Fibonacci heap corresponding to a terse definition of a tree structure. This made writing tests much easier, because it made it easier to produce Fibonacci heaps that reflect a scenario to be tested.
On Monday and Tuesday of week two, Vera and I dived back into the chaos and implemented decrease key, thus completing the Fibonacci heap. We worked on Saturday - demo day - to wire it into my implementation of Dijkstra’s algorithm. Slowly, a horrible realisation dawned upon us. A Fibonacci heap written in a language with immutable state is, as far as we could tell then, and as far as I can tell now, incompatible with Dijkstra’s algorithm. In fact, it is incompatible with any algorithm that relies on a data structure that is distinct from the Fibonacci heap, but that also shares the same information. Pepijn has written on his blog about a generalised version of this difficulty, naming it the double index problem.
In Dijkstra with Fibonacci heap, there are two data structures. First, the Fibonacci heap. Second, the graph of nodes that tells you which node is connected to which (fuck it we’re going back to the map metaphor) city. Examining the neighbour of the city you are currently focusing on may necessitate decreasing the best route distance you have stored for the neighbour. But, this is not so easy. When you get the list of neighbouring cities, you are looking at the graph data structure, which means you have graph nodes in your hands. Though these graph nodes represent the same cities as the nodes in the Fibonacci heap, they are distinct entities inside the computer. Thus, to call decrease key (distance) on a city (heap node), you must get hold of the node that represents the city in the heap. This is a time-consuming operation that involves searching through the heap.
I’m going to stage a little mock Q&A between an imaginary version of you and the actual version of me.
You: Why is this not a problem in languages with mutable values?
Me: Because the nodes in the graph can just contain a pointer to their counterpart in the Fibonacci heap.
You: What is a pointer?
Me: It’s like an arrow that gives you a direct, quick route to a computer’s representation of something.
You: Why could you not use such pointers?
Me: We could, but they would not have the desired effect.
You: Why not?
Me: Because, in an immutable language, when you change a piece of data, you get a whole new copy back.
You: Gosh, it’s annoying how you won’t go into any detail unless I ask. Why is the whole new copy thing a problem?
Me: Sorry. It’s a problem because when you update a node with a new distance, you copy that node as part of the change. Other pieces of data may have pointers to the node, but they now only have pointers to the old version. In a mutable language, you would have just updated a shared piece of data and not copied anything. Thus, all pointers to that data would have remained valid.
You: I see.
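The conversation above can be boiled down to a toy demonstration. This is an illustration in Python, using dict copying to simulate immutability; the city and distances are invented.

```python
# Mutable style: the graph node points straight at the heap node.
heap_node = {"city": "Brighton", "distance": 47}
graph_node = {"city": "Brighton", "heap_entry": heap_node}

heap_node["distance"] = 40  # decrease key: update the shared data in place
after_mutable = graph_node["heap_entry"]["distance"]  # the pointer is still valid

# Immutable style: every change builds a whole new copy.
heap_v1 = {"city": "Brighton", "distance": 47}
graph_node_2 = {"city": "Brighton", "heap_entry": heap_v1}

heap_v2 = {**heap_v1, "distance": 40}  # decrease key yields a new copy
after_immutable = graph_node_2["heap_entry"]["distance"]  # the pointer is stale
```

In the mutable version, the graph node sees the new distance; in the immutable version, it still sees the old one, because its pointer aims at the superseded copy.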
We implemented a find operation for the Fibonacci heap: given a node, it locates that node inside the heap. This operation is antithetical to the point of the Fibonacci heap. When we used it, it was so time-consuming that the original version of Dijkstra’s algorithm ran twice as fast as the version with the Fibonacci heap.
Showing these results at the Recurse Center demo day was kind of disappointing and kind of funny. However, both these feelings were tempered - no, obliterated - by the warm, relieved afterglow of finishing the Fibonacci heap.
In March, I posted What I’ve been working on. In it, I showed four images of Empty Black, the game I was making. At that point, I was still planning for it to be driven by musical puzzles where you played tunes to open doors. Since then, it has turned into a game of three parts: deft jumping, tactile combat and block-throwing puzzles.
Watch the trailer:
The Recurse Center is a three month long programmers’ retreat that I have been attending in New York City. The following is a testimonial about my experience.
There are eight elements of the Recurse Center.
First, it is unusually supportive and safe. You can ask a question to clarify something you feel you ought to know, because you will get a gentle, illuminating answer. You can write a piece of code that you worry is shitty, then shape it into something beautiful with a fellow Recurser. You are isolated from all the people whose opinion might matter to you: your friends, your family, potential employers, the internet. In short, there are no negative consequences to showing your weaknesses.
Second, it is structured. If you feel awkward in social situations, you find that you always have a place. When you program on RC days, there is always a desk to sit at. At the social gatherings, you discover that everyone at RC is kind and inclusive. No one is ever left standing on their own.
Third, RC is an uncontrollable situation. You are guided towards the things that it is important for you to work on. This invisible hand is the aggregate of the projects that other people are working on, the fellow students who walk up and offer to work with you on your project, the subjects covered in the RC library, the languages your fellow students discuss at lunch, the juicy problem your deskmates are wrestling with, and the gentle guidance of the faculty. This invisible hand plainly shows you what you have been avoiding learning, what you thought was too hard, what you didn’t know you needed to know, what you didn’t know interested you.
Fourth, it is a place where programming is the most important thing in the world. Imagine Florence in the fifteenth century, except, instead of painting, everyone is inventing how to program, and instead of being surrounded by Donatello and Ghiberti and Botticelli and Raphael, you are working with the startlingly sharp programmers who no one has heard of, yet. The fact that it is socially acceptable to think about programming and talk about programming and work on programming means that programming is uppermost in your mind. Which means that you get better at it very fast. (This element was copped from Paul Graham’s essay on aesthetic taste: paulgraham.com/taste.html)
Fifth, there are almost no constraints on what you work on. Your project doesn’t have to make money, doesn’t have to build your portfolio of open source code, doesn’t have to be useful, doesn’t have to appeal to some particular community, doesn’t have to be cool, doesn’t have to result in something commensurate with the effort you put in. There is one constraint: work at the edge of your programming capabilities. Which is to say: work on something that makes you a better programmer.
Sixth, there are people who are better than you and people who are worse than you. Even if you are the most inexperienced programmer in the whole of RC, you certainly know more than others about a particular operating system. Even if you are the most experienced programmer, you certainly know less than others about a particular language.
Seventh, you get to talk to and work with people who have truly brilliant minds. Some are fellow students at RC. Some are drafted in as speakers or co-programmers. All are your peers.
Eighth, and most importantly, RC is an expression of the faculty: Sonali, Nick, Dave, Alan and Tom. They are the people you’d want teaching you because they explain things clearly and they know a lot. They are the people you’d want to be friends with because they are nurturing and fun and funny. They are the people you’d want to have with you if you got into trouble because they would impose themselves on the situation and start fixing it. In short, they examine their environment and make it better.
Having David Nolen explain the ClojureScript compiler was one of the intellectual highlights of my life.
The hours at RC feel precious.
This is the fastest period of learning in my life.
I’m coming back.
I am spending the summer in New York City. I am attending the Recurse Center, a three month programme where the aim is to make yourself a better programmer. As a student, I spend most of my time sitting in a room with other clever people, either collaborating or working alone on open source projects. I am creating Isla, a programming language for children, and an accompanying environment for using Isla to write old school text adventure games.
I love this shot. Even though Travis Bickle is supposedly God’s lonely man, we know him as the subject of a Hollywood film. But, earlier in the film, in this shot, he is just another unremarkable guy, alone in the early morning, in the distance, swigging from a bottle of whiskey.
And I love how, in the “Are you talking to me?” scene, you can hear the everyday noise of people outside as Travis Bickle goes mad inside:
Tool and Mogwai made me realise that musicians are allowed to do anything they want.
But Tool had a stronger effect because they were my first real exposure to metal, which meant they were able to crystallise the most consistently important concept in my musical taste: beauty is harsh.
I first heard Mogwai in 1997, when I was about sixteen. I was lying in bed in the dark listening to The Breezeblock, Mary Anne Hobbs’s late night music programme on Radio One. She played Like Herod, a twelve-minute track from Mogwai’s first album, Young Team. I hadn’t heard music like that before: instrumental by default, symphonically structured, spoken word, moods rather than songs, occasional vocals that were accents, rather than scaffolding, and incongruous shifts in instrumentation and tone from section to section. I thought that Mogwai had, somehow, invented all this stuff. It wasn’t until 2002 that I realised that Slint had already got most of the way there by 1991.
I first heard Tool in 1998, when I was seventeen. My friend, Harry, lent me their 1996 album, Aenima, and I took it home and played it through the speakers built into the monitor of my Mac. I played it a lot over the next five or six years on my CD Walkman.
Aenima took me much further than Young Team. It was the first piece of modern music in which I heard non-standard time signatures. It was the first record I heard that combined anger and sadness and melody into beauty. It was the first record I heard that had an overarching theme. The first record I heard that had continuity between songs. It made me consciously seek out weird, extreme music, music that would broaden my horizons and maybe give my brain more versions of that moment in Third Eye when Maynard James Keenan sings, “So good to see you, I missed you so much”: the joyous/agonising high of a sound that is simultaneously sad and beautiful, melodic and abrasive.
Most importantly, it was the record that made me fully aware of the fact that music doesn’t just come from some obscured, instinctual, idiot savant place in the brain. It is intentional art, just like novels and films and paintings. It is - can be - a series of conscious decisions, some of which the musician is unsure of. This is excellently illustrated in Third Eye by the two moments when Maynard James Keenan sings, “Prying open my third eye.” The first time, it stops the song with the long, arrhythmic pauses between repetitions. The second time, it is in parallel with a polyrhythmic drum beat, and repeated many more times, and totally cathartic.
Fourteen years later, poor Harry still hasn’t had his CD back.