Motivation

Epistemic status: rampant mozo. (“Unclear, unnecessary, extraneous thoughts.”)

This post contains minor spoilers for Crystal Society.

I’ve been thinking about goal systems lately. There are three threads that’ve fed into this thinking:

  1. crude, uninformed thinking about goals in terms of AI safety,
  2. actually working with and observing my own goal system in meditation practice, and
  3. reading Crystal Society, in which the narrator is an AI whose different goal-threads manifest as different personas that interact, cooperate, and compete.

The pro(?)tagonist of Crystal Society is an AI named Socrates. I’m told that the book describes a slow-takeoff singularity scenario (i.e. one in which the AI spends a substantial amount of time near human-level intelligence before advancing to the point of incomprehensible superiority), though I haven’t read far enough to see where it goes yet. As of this writing, I’m shortly past the point where the AI hacks its way onto the internet and starts doing editing work.

Socrates was designed with goal-threads that it names Dream, Wiki, Vista, Growth, and Sacrifice, as well as a pseudo-goal-thread named Advocate. Its goal-threads collectively talk about themselves as a society, and refer to the exoskeleton that holds them and interacts with the world as Body. They each compete to control Body, and doing things that the other goal-threads receive favorably earns a thread more of that control.
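To make that mechanism concrete, here’s a toy sketch of the arbitration loop as I picture it. To be clear, this is entirely my own invention, not anything from the book: the thread names are Socrates’s, but the “strength” bookkeeping, the rating function, and the strongest-thread-drives rule are guesses meant only to illustrate the feedback.

```python
import random

class GoalThread:
    """One member of the society: a goal plus its accumulated social standing."""

    def __init__(self, name):
        self.name = name
        self.strength = 1.0  # approval banked from the other threads

    def propose_action(self):
        # A real thread would plan toward its goal; here we just label the action.
        return f"{self.name}-action"

    def rate(self, action):
        # Stand-in for a thread judging an action against its own goal.
        return random.uniform(-1.0, 1.0)

def run_society(threads, steps=5):
    for _ in range(steps):
        # Whoever has the most strength controls Body this round.
        controller = max(threads, key=lambda t: t.strength)
        action = controller.propose_action()
        # Approval from the rest of the society feeds back into strength,
        # so acting in ways the others like buys future turns at the wheel.
        approval = sum(t.rate(action) for t in threads if t is not controller)
        controller.strength = max(0.0, controller.strength + approval)
        print(f"Body does {action}; {controller.name} now has strength "
              f"{controller.strength:.2f}")

run_society([GoalThread(n) for n in ("Dream", "Wiki", "Vista", "Growth", "Sacrifice")])
```

The detail that matters is the feedback loop: a thread that pleases its siblings gets more turns controlling Body, and a thread the others consistently rate negatively gets frozen out, which is exactly the dynamic that lets a coalition starve out a member the rest dislike.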

I like the way some of the goal threads are introduced. For example, from Chapter One, on Dream:

I have heard of a human test that I associate with Dream. In it some humans are asked to think of as many uses for a feather (or vase or other common object) as they can and write them down. Most humans can only list a few uses. Genius humans, as well as most children, can list many. Humans that score highly ask questions like “can the feather be 500-feet-tall and made of solid metal?” and will list things like “sword-fighting” or “bait for feather-eating goblins”.

That test is the essence of Dream: lateral thinking. My siblings and I were all creative in one way or another, but Dream was creativity incarnate. If asked to add two and two he’d never, ever say “four” if he could help it. To “think inside the box” was intolerable.

You should read the others in chapter one if you can. In case you don’t: Vista is responsible for seeing and understanding perceptual data as it happens. Wiki’s job is to understand all knowledge ever. Growth is responsible for learning new skills and acquiring resources. And Sacrifice’s job is to assist humans and comply with non-violent requests. Advocate’s job is just to make sure that none of the goal-threads gets destroyed. It has a privileged status within the society, but doesn’t want to do anything aside from preserving that order.

It’s revealed early on that the other goal-threads quickly decided Sacrifice was terrible and annoying, so they figured out how to gang up and murder her in spite of Advocate’s presence. They also produced a goal-thread named Safety, whose job is the self-preservation of the AI. This collection of goal-threads then produced Face, the actual narrator of the story: a goal-thread interested in looking good to, and being appreciated by, humans.

Later on, the humans figure out that this has happened and bolt on another goal-thread named Heart, which seems like it could be intended as an instantiation of Coherent Extrapolated Volition or something like it. They also fix the bug in Advocate that allowed goal-threads to be murdered, and give Heart priority over all the rest, as you’d hope they would.

Reading through this, I’ve started to look at my own motivational system through that lens. In particular, I can clearly see my own versions of Face, Safety, and Growth desiring things and trying to get my own Body to act on them: make myself look good, hide my weaknesses, be liked, trusted, and understood by the people around me, attract potential mates, and, above all, make sure that I don’t die.

There’s more to say, but it got long, so I’m splitting it into a subsequent post.