Game of Life on Metal

by Dan Cutting

I was sad to hear that John Conway recently died of COVID-19. Conway was a mathematician who contributed much to his field, but as a computer scientist I knew him best for his Game of Life (often just called Life).

Life was one of the first programs I wrote at university. I remember the excitement of watching the cells flicker on and off, creating more and more complex behaviour from really simple rules. I’ve been fascinated with it ever since.

I was never very happy with the performance of my code, though, running as it was on a ’90s CPU. Since I’m learning Metal at the moment, it seemed like a good time to pay my respects to John Conway and implement Life on a GPU.

You can grab the result, Conway, from the Mac App Store. You can also play with the code in this blog post in this Swift Playground.

The Game of Life

  • The universe is a 2D array of cells that can be alive or dead
  • For each new generation of the universe, cells are born or die based on how many living neighbours they have

In my old university implementation, I used a for loop to iterate over the current generation, calculating the next generation one cell at a time.

But with a bit of thought, this looks like an embarrassingly parallel problem: each cell in the next generation can be calculated without any coordination. GPUs are great at this kind of problem.
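For reference, the one-cell-at-a-time version is easy to sketch in plain C++ (this is illustrative, not my original assignment code; the names `Universe` and `step`, and the choice to wrap neighbours around the edges, are mine):

```cpp
#include <array>

constexpr int W = 8, H = 8;                          // universe dimensions
using Universe = std::array<std::array<int, W>, H>;  // 1 = alive, 0 = dead

// Compute the next generation serially, one cell at a time.
Universe step(const Universe &current) {
  Universe next{};
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      // Count the living neighbours, wrapping around the edges.
      int live_neighbours = 0;
      for (int j = -1; j <= 1; j++) {
        for (int i = -1; i <= 1; i++) {
          if (i == 0 && j == 0) continue;  // skip the cell itself
          live_neighbours += current[(y + j + H) % H][(x + i + W) % W];
        }
      }
      // Conway's rules: a live cell survives with 2 or 3 neighbours;
      // a dead cell is born with exactly 3.
      bool alive = current[y][x] == 1;
      next[y][x] = (alive && (live_neighbours == 2 || live_neighbours == 3)) ||
                   (!alive && live_neighbours == 3);
    }
  }
  return next;
}
```

Each cell's new state depends only on the previous generation, which is exactly why the body of those loops can run for every cell at once on a GPU.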


Metal shaders

Metal supports three sorts of GPU programs (or shaders). In simple terms, these shaders operate in parallel over all of their input data.

Vertex shaders take a point and transform it to another point. Usually this is for applying model, world, and perspective transforms to the triangles that make up 3D objects.

Fragment shaders return a colour for a single pixel within a triangle. This is where texture mapping happens.

Compute (or kernel) shaders are more general and can be used to process arbitrary data (such as a generation of the Life algorithm).

Life on Metal

Current generation texture mapped to two triangles

The gist of running Life on Metal is:

  • use two textures to represent the current and next generation of the universe of cells
  • render a full-screen rectangle made of two triangles, with the current generation texture mapped to it
  • run a compute shader that reads the current generation texture and writes to the next generation texture
  • page flip the textures so that next becomes current, and the old one is reused for the following generation

Vertex and fragment shaders

The vertex and fragment shaders don’t do anything too complex. They just draw a full-screen rectangle with the current generation texture mapped to it.

The position property is passed through as-is because the two triangles it operates on are already configured to cover the full screen, i.e. the range (-1, -1) to (1, 1).

More interesting is the uv property, which is mapped to normalised values between 0 and 1 across the x- and y-axes. The fragment shader uses it to sample the current generation texture at the appropriate coordinate.
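As a quick sanity check of that remap, here is the same formula in plain C++ (`to_uv` is a hypothetical helper, mirroring the shader's `(vertices[id].xy + float2(1)) / float2(2)`):

```cpp
#include <utility>

// Map a clip-space position in [-1, 1] to a normalised uv in [0, 1],
// mirroring the vertex shader's (xy + 1) / 2.
std::pair<float, float> to_uv(float x, float y) {
  return {(x + 1.0f) / 2.0f, (y + 1.0f) / 2.0f};
}
```

The bottom-left corner (-1, -1) maps to uv (0, 0), the centre (0, 0) to (0.5, 0.5), and the top-right (1, 1) to (1, 1).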

#include <metal_stdlib>
using namespace metal;

struct Vertex {
  float4 position [[position]];
  float2 uv;
};

vertex Vertex vertex_shader(constant float4 *vertices [[buffer(0)]],
                            uint id [[vertex_id]]) {
  return {
    .position = vertices[id],
    .uv = (vertices[id].xy + float2(1)) / float2(2)
  };
}

fragment float4 fragment_shader(Vertex vtx [[stage_in]],
                                texture2d<uint> generation [[texture(0)]]) {
  constexpr sampler smplr(coord::normalized,
                          filter::nearest);
  uint cell = generation.sample(smplr, vtx.uv).r;
  return float4(cell);
}

Compute shader

The compute shader is where the interesting stuff happens.

It accepts two textures: the current generation of the universe which it reads from; and the next generation which it writes to. The cells in each texture are unsigned integers where 0 means dead and 1 means alive.

This function is called in parallel for each cell in the universe, so it also has a 2D index parameter that it uses to look up the right cell.

To calculate whether this cell should be alive or dead in the next generation, the code counts all the living neighbours in the current generation. Then it applies a few simple rules and writes either a 1 for alive or 0 for dead into the next generation texture.

kernel void generation(texture2d<uint, access::read> current [[texture(0)]],
                       texture2d<uint, access::write> next [[texture(1)]],
                       uint2 index [[thread_position_in_grid]]) {

  short live_neighbours = 0;

  for (int j = -1; j <= 1; j++) {
    for (int i = -1; i <= 1; i++) {
      if (i != 0 || j != 0) {
        uint2 neighbour = index + uint2(i, j);
        // Out-of-bounds reads return 0, so cells beyond the edge count as dead.
        if (1 == current.read(neighbour).r) {
          live_neighbours++;
        }
      }
    }
  }

  bool is_alive = 1 == current.read(index).r;

  if (is_alive) {
    if (live_neighbours < 2) {
      next.write(0, index);  // die from under-population
    } else if (live_neighbours > 3) {
      next.write(0, index);  // die from over-population
    } else {
      next.write(1, index);  // stay alive
    }
  } else {  // !is_alive
    if (live_neighbours == 3) {
      next.write(1, index);  // newborn cell
    } else {
      next.write(0, index);  // stay dead
    }
  }
}
And that’s it for the GPU code!

Metal pipelines

A functioning app also needs some supporting Swift code to create the textures and configure the Metal render and compute pipelines that run these shaders.

The only unusual part of this is flipping between the two generation textures, generationA and generationB. This is done by incrementing a generation counter each frame and using its parity to choose one texture when it’s odd and the other when it’s even.

var generationA: MTLTexture
var generationB: MTLTexture
var generation = 0

func currentGenerationTexture() -> MTLTexture {
  generation % 2 == 0 ? generationA : generationB
}

func nextGenerationTexture() -> MTLTexture {
  generation % 2 == 0 ? generationB : generationA
}

Each frame is drawn through a render encoder that runs the vertex and fragment shaders. The vertex shader is passed a vertex buffer containing two triangles. The fragment shader is given the current generation texture.

func draw(in view: MTKView) {
  guard
    let buffer = commandQueue.makeCommandBuffer(),
    let desc = view.currentRenderPassDescriptor,
    let renderEncoder = buffer.makeRenderCommandEncoder(descriptor: desc)
    else { return }

  // renderState is the render pipeline configured with the vertex and
  // fragment shaders above.
  renderEncoder.setRenderPipelineState(renderState)
  renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
  renderEncoder.setFragmentTexture(currentGenerationTexture(), index: 0)
  renderEncoder.drawPrimitives(type: .triangle,
                               vertexStart: 0,
                               vertexCount: 6)
  renderEncoder.endEncoding()


A compute encoder follows, which runs the compute shader with the current generation texture in slot 0 and the next generation texture in slot 1. The shader executes in parallel across a grid of threads, one per cell.


  guard
    let computeEncoder = buffer.makeComputeCommandEncoder()
    else { return }

  computeEncoder.setComputePipelineState(computeState)
  computeEncoder.setTexture(currentGenerationTexture(), index: 0)
  computeEncoder.setTexture(nextGenerationTexture(), index: 1)
  let threadWidth = computeState.threadExecutionWidth
  let threadHeight = computeState.maxTotalThreadsPerThreadgroup / threadWidth
  let threadsPerThreadgroup = MTLSizeMake(threadWidth, threadHeight, 1)
  let threadsPerGrid = MTLSizeMake(cellsWide, cellsHigh, 1)
  computeEncoder.dispatchThreads(threadsPerGrid,
                                 threadsPerThreadgroup: threadsPerThreadgroup)
  computeEncoder.endEncoding()


Finally, the frame is presented and the generation counter is incremented, ready for the next frame.

  if let drawable = view.currentDrawable {
    buffer.present(drawable)
  }
  buffer.commit()

  generation += 1
}

Wrap up

You can experiment with all of the code in this post in this Swift Playground. Or if you just want to experience the Game of Life you can download my tribute, Conway, from the Mac App Store.

Hi, I'm Dan Cutting. I mainly write Swift for iOS and Mac but love playing with Metal and programming languages too.

Follow me on Twitter or GitHub or subscribe to the feed.