# Getting a sense for how big design spaces can be

I was recently reading Design Beyond Human Abilities. It has some neat points about how to define design; what all the definitions seem to share is the idea of activities you do before building something. Furthermore, the author lists three strategies for how you go about design. He calls them three "metadesign spaces" -- in each space there are non-overlapping designs you can come up with using the assumptions of that metaspace.

The work comes originally from Adrian Thompson and others. I'll characterize the three metadesign spaces by the terms "Abstraction", "Iteration", and "Evolution". In the first, abstraction, the key assumption is that there is a tractable inverse model. That is, you can look at an example of the thing you want to build, or get sufficient data about it, and then work backwards to produce an abstract plan -- a blueprint, say -- from which more or less any competent craftsman in the field can go off and build a pretty near exact replica of the original thing.

A lot of things fall under this category (hence, blueprints), and some software is among them. The idealized "waterfall" methodology works okay here -- you do your design up front, gathering all the requirements for the final thing someone wants into a formal plan, then execute.
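To make the tractable-inverse-model idea concrete, here's a toy sketch of my own (not an example from the book): a voltage divider. Given the output you want, you can work backwards in closed form to a complete plan that any builder can execute.

```python
# Toy contrast for the "tractable inverse model" case (my example, not
# from the book): a voltage divider. Given the output we want, we can
# work backwards to a complete plan in closed form -- no iteration needed.

def divider_plan(v_in, v_out_desired, r_top=10_000):
    # Invert v_out = v_in * r_bottom / (r_top + r_bottom) for r_bottom.
    ratio = v_out_desired / v_in
    r_bottom = r_top * ratio / (1 - ratio)
    return {"r_top_ohms": r_top, "r_bottom_ohms": r_bottom}

plan = divider_plan(v_in=5.0, v_out_desired=3.3)
# The returned dict is the "blueprint": execute it exactly and you get
# the desired 3.3V output.
print(plan)
```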

The second metadesign space, iteration, has the assumption that the inverse model isn't tractable, but at least the forward model is. That is, you can't just look at the final output and invert your way to a full plan that can be executed without much thought to again reach the final output. You can, however, look at some existing model and predict forwards what the final output ought to look like, and what minor tweaks will do to that final output. Sometimes the fastest way is to just build the thing, or a small part of the thing, rather than trying to predict everything ahead of time; still, prediction is what guides the changes, and it's possible because the forward model is tractable.

In essence, it's the agile idea (though the paper takes time to note it did not originate with the agile movement and is even older than software itself) of think a little, build and test or predict a little, reflect, and repeat. It's how you go from wanting a supersonic warplane like the Blackbird while only having a good design for a Wright Brothers propeller plane. It takes a lot of work, a lot of inspired ideas like "what if the frame is made out of metal instead of wood?", and if you could diff the final designs over the decades, they'd probably share little more than the idea of flying. But how else would you do it?
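The think-a-little, tweak-a-little loop can be sketched as a toy hill climb (my own illustration, not the paper's): we can't solve for the design that hits the target directly, but a cheap forward model lets us predict which small tweaks improve things before committing to them.

```python
import random

def forward_model(design):
    # Hypothetical performance predictor: a function of the design
    # parameters we can evaluate forwards but not analytically invert.
    # Lower is better; the (unknown-to-the-designer) optimum is (1.3, -0.7).
    x, y = design
    return (x - 1.3) ** 2 + (y + 0.7) ** 2

def iterate(design, steps=2000, tweak=0.05, seed=0):
    rng = random.Random(seed)
    best, best_score = design, forward_model(design)
    for _ in range(steps):
        candidate = tuple(v + rng.uniform(-tweak, tweak) for v in best)
        score = forward_model(candidate)  # predict before committing
        if score < best_score:            # keep tweaks the model favors
            best, best_score = candidate, score
    return best, best_score

design, score = iterate((0.0, 0.0))
print(design, score)  # should land near (1.3, -0.7)
```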

The third metadesign space, evolution, is for when neither the forward nor the inverse model is tractable. The paper highlights an evolved FPGA circuit that could not be beaten by human design, nor for a long time even understood by humans in the sense of how it works. (Of course we could understand that it did work, and very well, on one particular physical FPGA.)
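A minimal evolutionary sketch (my toy, not Thompson's actual FPGA experiment): fitness is a black box we can only evaluate by "building and testing" each candidate -- we neither predict forwards from a tweak nor work backwards from the goal, yet selection plus variation still finds good designs.

```python
import random

rng = random.Random(42)
GENOME_LEN = 40
# Hidden target standing in for "whatever the environment rewards";
# the "designer" never sees it, only the fitness scores.
_target = [rng.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Black-box evaluation: build the design, test it, get a score.
    return sum(g == t for g, t in zip(genome, _target))

def evolve(pop_size=30, generations=200, mut_rate=0.02):
    pop = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(GENOME_LEN)
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [1 - g if rng.random() < mut_rate else g
                     for g in child]              # per-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "/", GENOME_LEN)
```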

The third space is interesting because it can be much larger than the other two, especially when we add in another observation the author makes. That is, when humans do design, we are constrained by our human values. We naturally seek out designs that are more modular, easier for others to understand, have a sense of 'beauty', are simple, and so forth. Our designs, whether done with abstraction or with iteration, tend to "canalize" -- our human constraints are like little offshoots that eventually converge into a central canal from which you can't escape. Other designs, completely out of the water even, are possible which lack those things but nevertheless can produce a better final result, as judged by the gap between what we want and what we get. Evolutionary techniques are a window into such designs.

I think other "machine learning" techniques offer another window. GANs, gradient descent in general, GOFAI, etc. Is the space of such designs bigger? Perhaps. Even evolution has its own canalization of a sort, largely due to its slowness. Evolution has designed many kinds of birds, yes, but given many lifetimes of the universe, it would never produce something like the Blackbird. It would have to produce a more advanced optimization process, namely a generally intelligent creature, who could apply techniques from a different metadesign space to reach the Blackbird. Deep neural nets are very impressive these days, but can they "learn" what causality is, or are the higher rungs of the ladder of causation (as Judea Pearl puts it) inaccessible to them?

Well, whether it's bigger or not, it's certainly big and there, inaccessible to our other approaches. Now consider the idea of mind design space -- the types of minds that exhibit general intelligence. Humanity might someday build one. The design of such a thing might involve a unique and critical insight, or several -- some deterministic algorithm that, when implemented, gives rise to a general intelligence. (Metaspace 1.) Or it might instead be designed with a copy-and-tweak approach, taking the human brain design as a base model. (Metaspace 2.) Or perhaps we'll get there with more work on evolutionary algorithms, GANs, and other weak-AI machine learning advances. (Metaspace 3.)

My point here is that the design could come from any of these spaces, and once we have one, perhaps it'll be smart enough to explore designs in the other kinds of spaces. Thus the design space of minds is truly vast, encompassing possibilities in each of the three metaspaces, unlike something like a Blackbird, which is rather inaccessible to both the first and third metaspaces.

#### Posted on 2019-11-29 by Jach
