
Blog 518: Adventures In Markov Chains

I’ve done a reasonable bit of work with procedural terrain generation of two different kinds recently (well, when I say “recently”…) for Project Y4, hopefully yielding techniques that can be re-applied to different projects in the future. But while I’ve lusted after the most hilarious procedural generation technique of them all for a long time, it’s only in the last couple of weeks that I’ve delved into the wonderful world of Markov chains…

House of Chains

Basically, Markov chains are a way of generating sequences. Big whoop? The key is that they generate plausible (well, fairly plausible) sequences, based on a given sample set of sequences. Take, for example, a blog such as this one — you can feed it into the system, the sample sequences being the sentences that comprise it. From that sample, you construct the chains themselves: discrete subsequences that follow on from another element. The word “From” that began the last sentence would be assigned the chain “that sample”; “that” is given “sample, you”; and so on. You can vary the lengths of the chains to vary your output: shorter chains will produce output that is more random (lol so random haha xx), while longer chains will have more recognisable elements from the sample but will, on the whole, make more sense (only ever making as much sense as your sample data, natch).
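To make that concrete, here’s a minimal C# sketch of building such a table of word-keyed, two-word chains. The names and structure here are illustrative assumptions rather than my actual implementation:

    // Minimal sketch: each word maps to the list of two-word chains that follow it.
    using System.Collections.Generic;

    var chains = new Dictionary<string, List<string[]>>();
    string[] words = "From that sample, you construct the chains themselves".Split(' ');

    for (int i = 0; i + 2 < words.Length; i++)
    {
        if (!chains.TryGetValue(words[i], out var continuations))
        {
            continuations = new List<string[]>();
            chains[words[i]] = continuations;
        }
        // The two words that follow words[i] become one chain attached to it.
        continuations.Add(new[] { words[i + 1], words[i + 2] });
    }
    // chains["From"] now holds { "that", "sample," } -- matching the example above.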

It’s a bit strange to get your head round at first, but once you do — like all these things — it makes perfect sense but becomes impossible to explain. With all these chains in hand, you pick one to start from and just run with it. Take a random word and its chain, plop them down. Take the end of that chain as the word for the next, plop that chain down. Rinse and repeat for as long as you want or until you hit a word that has no chain.

Ideally, your sample will be large enough that each word, each token, ends up with more than one chain attached to it, from which you can pick randomly, or else you’d end up with the same output sequences all the time. Which would be much less fun.
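In code, the walk might look something like this — again a rough sketch, assuming a chain table shaped like the one in the previous snippet:

    // Sketch of the generation walk: start somewhere random, keep plopping down
    // randomly chosen chains until we hit the cap or a word with no chain attached.
    using System;
    using System.Collections.Generic;

    static class MarkovWalk
    {
        public static string Generate(Dictionary<string, List<string[]>> chains, int maxWords)
        {
            var random = new Random();
            var keys = new List<string>(chains.Keys);
            string current = keys[random.Next(keys.Count)]; // a random starting word
            var output = new List<string> { current };

            while (output.Count < maxWords && chains.TryGetValue(current, out var continuations))
            {
                // Pick one of this word's chains at random and plop it down...
                string[] chain = continuations[random.Next(continuations.Count)];
                output.AddRange(chain);
                // ...and the end of that chain seeds the next lookup.
                current = chain[chain.Length - 1];
            }
            return string.Join(" ", output);
        }
    }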

Words don’t come easy…

The Easy Application

So that’s the easy application — taking sentences and then producing different sentences that can still function. It’s what spammers do to try and bypass spam filters, and it’s very probably how bots like the legendary Horse_eBooks function (though I can only guess what it uses as its samples). And that’s the best bit right there — generated output is often unintentionally hilarious. There’s something about those bizarre combinations of words that don’t quite sit right that gives me the giggles. Like the written version of computer-generated faces’ Uncanny Valley, except funny rather than creepy.

Another common application of Markov chains is on a slightly smaller scale, as a name generator. You can easily change your sentence processor to look at the character level instead, and feed it a load of syllables that you like instead of a passage of text. Or, hell, you can still pass it a few paragraphs and watch it invent English-sounding words.
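A character-level sketch of the same idea might look like this — the sample names below are made up purely for illustration, and the chains are just a single following character:

    // Character-level variant: build chains of single characters from sample names,
    // then walk them to invent a new name-like string. (Illustrative sketch only.)
    using System;
    using System.Collections.Generic;

    var charChains = new Dictionary<char, List<char>>();
    foreach (string name in new[] { "aldric", "beren", "caldor", "derin" })
    {
        for (int i = 0; i + 1 < name.Length; i++)
        {
            if (!charChains.TryGetValue(name[i], out var next))
                charChains[name[i]] = next = new List<char>();
            next.Add(name[i + 1]);
        }
    }

    var rng = new Random();
    char c = "abcd"[rng.Next(4)]; // start from one of the sample initials
    string invented = c.ToString();
    while (invented.Length < 8 && charChains.TryGetValue(c, out var options))
    {
        c = options[rng.Next(options.Count)];
        invented += c;
    }
    Console.WriteLine(invented);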

Gregg, though, linked me to what has become my favourite application of this hilarity: Garkov. It’s Garfield comic strips, filled with Markov-chain-generated dialogue drawn from transcripts of the existing comics. Obviously the output is pretty variable depending on how the generator runs, but a couple of refreshes will always guarantee something appropriately ridiculous.

Stop being silly, Garfield.

The Plan Is So Generic

So, Markov chains are awesome and endless fun. I wrote my implementation in C#, but I’ve got some plans for the future that mean I wanted to do it a little bit differently.

From all the examples I’ve scouted, most people build their generators around one thing: either words or characters. Fair enough, those are your common usages.

I decided that I wanted to make my generator able to process any kind of sequence. Words? Sure. Characters? Deffo. Numbers? Aye, if you fancy it. Plot elements? Wait a–

My ultimate plan is to somehow render storylines into sequences of plot elements. Say, get quest -> search for Macguffin -> find Macguffin -> betrayal -> revenge -> rescue princess -> antagonist somehow survived, door open for the sequel… And so on. Maybe a little bit finer-grained than that, but you get the picture. Generate a plot from the basic elements (let’s face it, you can render any plot down into such tangible bites), tie that up to some procedurally generated terrain… See where I’m going with this?

It’s soooo generic.

It made the implementation a lot of mind-bending fun, to be sure. The chains are stored in a gloriously generic Dictionary<List<T>, List<List<T>>> (with T being the kind of sequence element — String, Char, PlotElement), and with a custom comparator function the job is done. Remember: reference types are hashed by their references rather than their contents, so two lists with the same contents would, by default, register as different keys and the whole thing would fall down (thank you, Stack Overflow!). I’ve also put a few twists in; usually, you stick to a single token length, but I’ve made mine count back from a maximum token length so that there’s more chance of it finding a token to continue from.
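For the curious, the comparator in question would look roughly like this — a hedged sketch rather than my exact code, but it shows how two lists with the same contents can be made to land on the same dictionary key:

    // Sketch of a content-equality comparer so List<T> keys compare by contents,
    // not by reference. (Assumed names; not the actual Project Y4 code.)
    using System.Collections.Generic;
    using System.Linq;

    class SequenceComparer<T> : IEqualityComparer<List<T>>
    {
        public bool Equals(List<T> a, List<T> b)
        {
            if (ReferenceEquals(a, b)) return true;
            if (a == null || b == null) return false;
            return a.SequenceEqual(b); // leans on T having a solid Equals
        }

        public int GetHashCode(List<T> list)
        {
            // Combine element hashes so equal-content lists produce equal hash codes.
            unchecked
            {
                int hash = 17;
                foreach (T item in list)
                    hash = hash * 31 + (item == null ? 0 : item.GetHashCode());
                return hash;
            }
        }
    }

    // Then the chain table's keys compare by contents rather than by reference:
    // var chains = new Dictionary<List<string>, List<List<string>>>(new SequenceComparer<string>());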

It’s beautiful, really. As long as the sequence element type has a solid Equals function (and a GetHashCode to match, so the dictionary keys behave), I can set this chain generator on it.

The possibilities are, quite literally, endless.

P.S. WordPress offers an export-blog-to-XML function. I wonder…

6 thoughts on “Blog 518: Adventures In Markov Chains”

    1. Well, every time I’ve done procedurally generated game levels and stuff, they feel a bit soulless. I figure there must be some way to create a procedural plot to go with them to add the missing spice and, well, what is a plot if not a sequence of items in some reasonable order? “What could possibly go wrong?”


      1. Makes me wish I’d found this blog post a week or two ago, before I wrote my Markov comic generator. My program is intended to handle only text, but on the other hand it is written in Python, which uses dynamic typing; I haven’t tried using non-strings yet.


      2. I have an unhealthy obsession with generics so this was pretty much inevitable. I think it might have become less effective as a Markov generator in the transition from just being string-based, though; I confused myself a bit along the way so I’m not convinced the logic is 100% correct anymore. Theoretically all you need is to be able to compare equality between sequences, so once you abstract that out you’re laughing. Theoretically. If Python is clever enough, you might not need to make any changes at all!

