a moveable beast

Idle ideation. Coarse cognition.

A Lesson in Creative Problem Solving


Last night, I was wondering how many possible ways there were to partition a set into blocks of consecutive elements, i.e. while preserving the order of its elements. Each such partition generates what’s called a partition topology. For instance:

{{0123}}        # One big lump
{{0}{1}{2}{3}}  # Totally divided
{{0}{1}{23}}
{{0}{12}{3}}
{{0}{123}}
{{01}{2}{3}}
.
.
.
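
For concreteness, here is a small brute-force sketch in Python (mine, not part of the original reasoning; the function name and labels are made up) that simply lists every such order-preserving partition of a four-element set:

def ordered_partitions(elems):
    # Yield every partition of elems into consecutive, non-empty blocks.
    if not elems:
        yield []
        return
    # Choose how many elements go into the first block, then recurse on the rest.
    for size in range(1, len(elems) + 1):
        for tail in ordered_partitions(elems[size:]):
            yield [elems[:size]] + tail

for p in ordered_partitions("0123"):
    print("{" + "".join("{" + block + "}" for block in p) + "}")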

The first idea that came to mind was a recursive calculation, but it didn’t lead to a closed form.

My second plan of attack was an iterative process, but its summation left out possibilities that I found difficult to capture with further sums.

Then I thought, Rather than divide up a set, why don’t I eliminate the “spaces” between elements of a totally divided [separated, discrete] set instead? I reasoned that there are n-1 “spaces” between n elements, and I need only determine the number of ways to merge adjacent elements.

But then I thought, Well, that’s the same as if I’d started with one big [indiscrete] lump and made “cuts” in all those places; the number of possible cuts is the same as the number of possible merges.

Suddenly, it was clear that the inverse of a “merge” was a “cut,” and vice versa—like a switch. Anything in the universe that is somehow “switch-like” is equivalent to a bit. Hence, a particular set of cuts or merges can be perfectly represented by a string of bits.

For n elements, with n-1 “spaces” between them, a string of n-1 bits describes a unique arrangement of cuts or merges. Therefore the number of possible arrangements is simply 2^(n-1). That’s it: the closed form I was looking for. No recursion or iteration necessary.
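
As a sanity check (a sketch of my own, not part of the original reasoning; names are made up), each (n-1)-bit string can be mapped to its set of cuts and back to a partition, and the count for n = 4 does come out to 2^3 = 8:

from itertools import product

def partitions_from_bits(elems):
    # Bit i set to 1 means "cut" after element i; 0 means "merge" across that space.
    n = len(elems)
    result = []
    for bits in product([0, 1], repeat=n - 1):
        blocks, current = [], [elems[0]]
        for i, cut in enumerate(bits):
            if cut:
                blocks.append(current)        # a cut closes the current block
                current = [elems[i + 1]]
            else:
                current.append(elems[i + 1])  # a merge extends it
        blocks.append(current)
        result.append(blocks)
    return result

assert len(partitions_from_bits("0123")) == 2 ** (4 - 1)  # 2^(n-1) = 8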

The lesson I took from this is that there are two more good tactics I should try when solving a problem. One tactic is to try solving it in the opposite direction—inverting it, reversing it, flipping it on its head. The second tactic is to move the focus from the object of study to the unspoken parts in between—from foreground to background.

Through study and observation, I’ve come to conclude that the “creative sauce” is rooted in our ability to generate analogies. You take the object under consideration, identify verbs for what that object “does,” then think of other objects to which those verbs apply. This is a bit like how a graph database works, where you have some relation (e.g. “is the parent of”), and performing a query with that relation returns all pairs of nodes for which the relation holds true.
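
As a loose illustration of that relation-query idea (a toy sketch of my own, not any particular graph database’s API; the triples are invented), a query just filters all stored node pairs by the relation:

# A tiny "graph" stored as (relation, subject, object) triples.
triples = {
    ("is the parent of", "alice", "bob"),
    ("is the parent of", "alice", "carol"),
    ("works with", "bob", "carol"),
}

def query(relation):
    # Return every (subject, object) pair for which the relation holds.
    return [(s, o) for (r, s, o) in triples if r == relation]

print(query("is the parent of"))
# e.g. [('alice', 'bob'), ('alice', 'carol')] in some order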

Working strictly within a given problem space, we get stuck thinking the same ideas, as our brains latch on to objects as singular symbols, which reliably light up the same isolated pathways after years of training. Analogies, by comparison, deeply connect many different concepts at once, and the richer the analogies we mine, the more ideas we can unearth.

This notion is essential to mathematics, made rigorous as isomorphisms, homeomorphisms, diffeomorphisms, and so on, all stemming from the idea that two seemingly unrelated things can have the same “shape” or underlying structure. Since shape, or structure, is the direct result of an object’s symmetries (i.e. all the ways you can change that object while some property remains invariant), it makes sense that identifying all the ways in which two disparate objects behave similarly is necessarily the same as identifying invariant properties among those objects. And invariance is the stuff solutions are made of.

About the author: Failed network engineer. Lame programmer. Armchair mathematician. Suspected member of Homo sapiens.