vwoo096’s post about video game music, particularly the part about Scape, got me thinking about the future of video game design. Scape is an iPad app with a “living environment” of sounds made possible by recent consumer hardware. The sounds independently “think”, change and react to each other, to time and to the user, so the performance is different every time it is played; there’s a video embed in that post with an example. vwoo096 points out the technological potential for extra dimensions of depth and emotional immersion in video game music.
I think that Scape points out the potential for many other video game elements, not just music, which will be explored more in the future. After all, what does it mean to be “next gen”? Right now we’re just seeing improvements to the same features we already have – realistic graphics are a bit more realistic, large game worlds are a bit bigger, fast loading times are a bit faster, and so on. These sorts of quantitative improvements have diminishing returns as we fast approach the limits of human perception, and the difference between each generation is less perceptible than the last.
So where to next? Instead of improving the way the game is presented (e.g. photorealism), we can apply technological advances to the experience itself, and this is where the Scape example comes in. Music tracks in video games right now are like Lego bricks (pardon the analogy). They are wholly separate works that developers can easily “snap in place” into a game wherever they want. Sure, they can tie into the atmosphere and story, but this only aids the game experience; it is not actually part of the game experience itself. They are not directly part of the gameplay or the programming of the experience.
For example, if we took Call of Duty and replaced the music, we wouldn’t think of it as another game, we would just think of it as Call of Duty with different music. The complex artificial intelligence and interactivity that make video games unique aren’t extended to these sorts of elements, which are as static and passive as those in film. Scape’s musical experience is a counterexample that applies advanced processing power to influence the content itself, not just how it is rendered (realistic sound reproduction is already ubiquitous in consumer technology anyway).
But reactive colours, shapes and sound aren’t the limit of what we can achieve with this approach. Procedural generation is another video game design idea that is becoming more tied into the content of the gameplay experience. This is when content is generated by an algorithm in real-time, rather than handcrafted in advance. The algorithm is still created, designed and guided by human creators, but it produces different results each time it runs.
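To make that concrete, here’s a minimal sketch in Python of what “an algorithm designed by humans that produces different results each run” looks like. The cave-carving approach (a “drunkard’s walk”) is a classic toy example of procedural level generation, not any particular game’s code:

```python
import random

def generate_cave(seed, width=20, height=10, steps=60):
    """Carve a small cave map with a random walk ("drunkard's walk").

    The algorithm is fixed by the designer; only the seed varies per
    session, so every play-through gets a different but valid layout.
    """
    rng = random.Random(seed)          # per-session randomness
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2     # start carving from the centre
    for _ in range(steps):
        grid[y][x] = "."               # carve out a floor tile
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)   # stay inside the border walls
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

# The same seed always reproduces the same map; a new seed gives a new one.
for row in generate_cave(seed=42):
    print(row)
```

The designer controls the rules (step count, map size, movement options); the seed controls which of the many possible layouts a given session gets.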
A close analogy is how each session of a multiplayer game generates its own story. The developers have not determined the story beforehand, they can only influence it indirectly through the game’s rules. In a similar way, games with procedural generation are programmed to generate different content every session.
This is not a new technique, but early uses were usually motivated by technological limitations or a desire to lower development costs, and the result had very little impact on the game experience. Examples include randomising the locations of loot, or making enemy models all slightly different to enhance realism. With advances in processing power, games are beginning to implement far more sophisticated algorithms that affect the experience as a whole.
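That shallow, early form of the technique is easy to sketch: the level itself stays handcrafted and only the item positions change per session. This is an illustrative toy, not code from any real game:

```python
import random

# A tiny handcrafted level: "#" walls, "." floor tiles where loot may appear.
LEVEL = [
    "##########",
    "#........#",
    "#..####..#",
    "#........#",
    "##########",
]

def place_loot(level, count, seed):
    """Pick `count` distinct floor tiles for loot, varying by seed.

    The level design is fixed by hand; only item positions are random,
    so the randomness barely touches the overall experience.
    """
    rng = random.Random(seed)
    floor = [(x, y) for y, row in enumerate(level)
                    for x, ch in enumerate(row) if ch == "."]
    return rng.sample(floor, count)

print(place_loot(LEVEL, 3, seed=7))
```

Compare this with carving the whole level procedurally: the same machinery (a seeded random number generator), but applied to a far smaller slice of the content.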
A few selected examples of procedural generation as an integral part of the game:
- Dwarf Fortress generates terrain, history, cultures, art and literature for an area the size of Earth at the start of each play-through. This acts as a new playing space and backstory.
- Left 4 Dead’s “AI Director” adjusts enemy and item placement and the pacing of each level every time you play, based on how the players are performing.
- No Man’s Sky generates an entire universe to explore and interact with. The player can traverse any location they see, whether in space, in the air, on land or at sea. Every single planet is uniquely generated on demand, including wildlife, ruins, and more.
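Generating “on demand” at that scale typically relies on deterministic seeding: hash a location’s coordinates into a seed, and the same place regenerates identically whenever anyone revisits it, so the universe never needs to be stored. No Man’s Sky’s actual algorithms aren’t public; the attribute names below are invented purely for illustration:

```python
import hashlib
import random

def planet_at(x, y, z, universe_seed=0):
    """Generate a planet's attributes on demand from its coordinates.

    Hashing (coordinates, universe seed) into a seed means the same
    planet reappears identically whenever it is revisited, so an
    effectively unlimited universe occupies no storage at all.
    Attribute names are hypothetical, invented for this sketch.
    """
    key = f"{universe_seed}:{x},{y},{z}".encode()
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "radius_km": rng.randint(1_000, 12_000),
        "atmosphere": rng.choice(["none", "thin", "thick", "toxic"]),
        "has_ruins": rng.random() < 0.2,
        "species": rng.randint(0, 30),
    }

# Revisiting the same coordinates always yields the same planet.
print(planet_at(12, -3, 99))
```

The design choice worth noting is that nothing is saved between visits: determinism substitutes for storage, which is what makes a universe-sized playing space feasible at all.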
The advantages of using procedural generation in this way are numerous: the games have more replay value, every experience is unique, outcomes are unpredictable, there is a heightened sense that each session is your game, and worlds can be huge and varied. The exploration in No Man’s Sky feels more genuine and exciting because you are actually discovering a new place that nobody else has seen before. Critics say that generated content has no single artistic vision, favours quantity over quality and doesn’t guide the player, but those criticisms mostly apply to older, cruder algorithms or to cases where the technique is done badly.
Now, I’m not saying that procedural generation by itself is “the future of gaming”. There are situations where it does and doesn’t fit. But this technique and the dynamic music of Scape could become small parts of a larger toolset in a new paradigm. Much like how the shift from 2D space to 3D space in games stimulated new techniques and genres, a similar thing could happen when the focus of video game technology shifts to a “meta-space”. Here, elements like music, story, level design, etc. inherently integrate with the complex programming and interactivity of the experience in intelligent, exciting and self-reflexive ways. The possibilities are endless.