Oversimplifications in HtN

Ian Lance Taylor ian at airs.com
Tue Aug 31 18:18:47 UTC 1999


   Date: Mon, 30 Aug 1999 15:51:47 -0400
   From: "Eric S. Raymond" <esr at thyrsus.com>

   I will argue that I am (necessarily) simplifying, but not
   *over*-simplifying.  I offer a precise definition: a model is
   oversimplified when it is unable to predictively capture features
   of the behavior under discussion.

That definition omits correctness.  I think a model is oversimplified
if it does not correspond to reality.  The fact that it is possible to
construct a simple predictive model does not imply that the model is
correct.  It may merely be successful in a limited domain, or for a
limited time.

For example, to consider a perhaps controversial issue, it's fairly
easy to construct a successful predictive model which shows that the
greater economic success of white males in U.S. society is caused by
their genetically greater intelligence.  In fact, this explanation has
been advanced seriously.  However, there have also been persuasive
counter-arguments by a number of people.  The fact that this argument
predicts successfully does not imply that it corresponds to reality.

   > ``Human beings have an innate drive to compete for social status; it's
   > wired in by our evolutionary history.''
   > 
   > Arguably true, but a drastic simplification.  Human beings have many
   > innate drives at that level: for food, sex, love, children, shelter,
   > etc.  Moreover, human consciousness permits us to override our innate
   > drives, for better or for worse.  Hence there are hermits, masochists,
   > drug addicts, etc.

   Sure.  But I never claimed that the drive to play status games was a 
   *universal* explanation of the behavior of human beings, or even of
   the behavior of hackers.  You can't oversimplify my argument and then
   use that to claim I'm oversimplifying! :-) :-)

   Your other criticisms have similar flaws.  Not that I'm dismissing
   them, mind you -- they're intelligent and intelligently put.  But to
   make your general case that I'm over-simplifying human behavior, you
   have to deal with my models in full generality, not just by focusing
   on one feature that you object to.

In the ``Homesteading the Noosphere'' paper, you don't offer any other
explanations for the behaviour you are describing.  I don't find the
explanation in that paper convincing, because it appears to argue that
the behaviour results from one particular drive, when I believe that
any reasonable description of human behaviour must account for many
drives, and for a wide range of motivations.

Hence, when you assign all aspects of the behaviour you are
describing to a competition for status, I think you are
oversimplifying.  No doubt you are correctly describing the motivation
of some people.  However, I do not think you are correctly describing
the motivation of most people.

   > ``For examined in this way, it is quite clear that the society of
   > open-source hackers is in fact a gift culture. Within it, there is no
   > serious shortage of the `survival necessities' -- disk space, network
   > bandwidth, computing power. Software is freely shared. This abundance
   > creates a situation in which the only available measure of competitive
   > success is reputation among one's peers.''
   > 
   > Again, arguably true, but I believe this is of marginal relevance for
   > many people.  I've given away plenty of software, but insofar as I
   > care about competitive success, it sure doesn't have anything to do
   > with what other hackers think of me.  It has a lot more to do with
   > what my family and friends think of me, and very few of them are
   > hackers or have more than a vague understanding of the meaning of free
   > software, or for that matter of software in general.

   Did you miss the point about reputation incentives unconsciously shaping
   behavior, even when they are not part of the player's conscious agenda?
   The fact is, you use and obey conventions that are designed to maintain 
   the reputation game -- I've seen you do it.  You're *in* that game.
   You play by its rules.

Of course I play the reputation game.  That's a consciously chosen
strategy to make people take me seriously, so I don't have to
laboriously qualify myself every time I enter a new discussion about
software.  On the net, nobody knows you're not a dog; it helps to have
something to point to.  That's why I named my UUCP program after
myself: it was an intentional marketing tactic (I did solicit other
names, but the only one I liked--GNUUCP--was already being used by
John Gilmore's program).

Somewhat similarly, at my current company I plan to do what I can to
get the product mentioned in the trade press, and to win awards.
That's not because I care about the trade press, which I don't even
read, or because I care about awards, which I think are PR nonsense.
It's a means to an end, the end being increasing sales of the product.
The reputation game is also a means to an end, the end being
simplifying my communication across the essentially anonymous medium
of the net.

This is also why I work hard to give attributions to other people.  It
costs me nothing, and it may smooth their life in some way.  Perhaps
it will even boost their status, and perhaps they even care--I have
indeed met some people who do, and I'm not claiming that your paper
does not describe anybody.

   The fact that you don't consciously experience the reputation-game 
   incentive is interesting, but not surprising to me.  I don't normally
   experience it consciously myself.  Nevertheless, I play the game because
   that's what I've *learned to do* in order to function in the culture.

Yes, but my reading of your paper is that you are claiming that my
primary motivation is the ``reputation-game.''  In the above paragraph
you appear to be suggesting that you understand my motivations better
than I do myself.  I'm a fairly introspective person, and I think I
have some understanding of my own motivations.  If you want to
maintain this argument in full seriousness, I think your theory is
getting perhaps a trifle close to being unfalsifiable.

   The real clincher here is that the customs we observe have features for
   which there doesn't seem to be a sufficient explanation other than the
   reputation game.  To falsify my model, you'd have to at least propose
   an alternative that explains the three taboos described in the paper.

   Or you'd have to deny those taboos have force, and then explain why
   so many people clearly think they do.

Quoting from your paper, the three taboos in question are:

   * There is strong social pressure against forking projects. It does
     not happen except under plea of dire necessity, with much public
     self-justification, and with a renaming.

   * Distributing changes to a project without the cooperation of the
     moderators is frowned upon, except in special cases like
     essentially trivial porting fixes.

   * Removing a person's name from a project history, credits or
     maintainer list is absolutely not done without the person's
     explicit consent.

I'll note in passing that some members of the Debian project appear
to consider it acceptable to break the second taboo (for more than
merely porting fixes), but there has certainly been some outcry about
that, so it can still be considered a taboo.

It's fairly easy for me to construct explanations for these customs.
One significant question is whether you will find the explanations
acceptable.  Since I think that arguments based on evolutionary
psychology are questionable at best, I'm not inclined to present any
such arguments.  Will you accept arguments based on other premises?  I
personally think my explanations are about as strong as yours; you may
well disagree.

I concede in advance that my arguments below are going to be sketchy,
since like everybody else I am somewhat time limited.

Clearly some people enjoy hacking free software.  Let's suppose that
some of their main motivations are 1) the pleasure of making a
beautiful object, and 2) the desire to help others.  (I don't think
it's hard to construct an evolutionarily based argument for these
motivations, but I won't bother).

Also, I will suppose that many people have a secondary motivation of
avoiding ill-will, and require a strong reason to upset others.  They
will not do so merely because they can.


First, the anti-fork taboo.

Medium-sized programming projects, on the order of gcc or emacs,
cannot be written by a single person (except perhaps for a very few
exceptional people whom we need not consider here).  One
characteristic of a beautiful computer program is that it actually
functions in some more or less useful fashion.  Therefore, people who
want to construct a beautiful medium-sized program must work as part
of a team.

Also, note that medium-sized programming projects are rarely if ever
finished, and thus are ordinarily expected to become more functional,
and hence more beautiful, over time.

One characteristic of a contentious fork is that it can be expected to
reduce the set of people working on one branch of the fork.  This in
turn means that the program will not develop as rapidly, and will
therefore approach beauty more slowly.  That violates motivation 1.

(In this regard it's interesting to note that one of the main
arguments for the egcs/gcc fork within the gcc developer community was
that it would actually increase the number of developers, because so
many gcc developers had given up working with Richard Kenner.  Also, I
believe that a common argument for a fork is that it will permit the
program to become more functional and/or more beautiful in some
specific fashion, thus presenting the fork as a disagreement about
aesthetics.)

Another characteristic of a contentious fork is that it confuses the
user community.  That violates motivation 2.

Also, a contentious fork by definition causes ill-will.

Therefore, a contentious fork is inherently undesirable.


Second, the taboo on distributing changes without the cooperation of
the moderators (or, as I prefer to say, the maintainers).

This is actually a small-scale version of a fork.  Having different
versions of code means that the program does not always behave the
same way.  This confuses the user community, violating motivation 2.

If the changes are distributed against the explicit wishes of the
maintainers, it will generate ill-will.  The maintainers, being human,
will be less likely to cooperate in the future.  Besides the general
desire to avoid ill-will, this means that it will be harder to
incorporate changes into the distributed version in the future,
violating motivation 1.


Third, the taboo on removing a person's name from a project history.

This is simply a form of plagiarism, which we may define as presenting
somebody else's work as your own.  Most hackers, indeed most educated
people, were taught in elementary school that plagiarism is bad.  This
is a social value which was drilled into many of us at an early age,
along the lines of ``don't fart in public.''  Most of us would avoid
plagiarism, and in particular we would avoid it if we were likely to
be caught (and, when working with free software, it is quite easy to
catch plagiarism).

In other words, plagiarism is a broad cultural taboo, which does not
need to be justified in terms of a free software reputation game.


Do I think that the explanations I've given above are the correct
ones?  No, I don't.  I don't think there are any simple single correct
explanations.  I don't think human behaviour works that way.  Human
behaviour is complex, and requires complex explanations.  I think my
arguments are only part of any correct and complete explanation.

That's why I think your paper is an over-simplification.  I don't
think your explanations are completely wrong.  In fact, for some
people they may be completely correct.  However, I do think that there
are many more motivations than you are considering.


   > Somewhat related to this, you sometimes use the term ``hacker tribe.''
   > I don't know if such a thing actually exists in any meaningful sense;
   > if it does, I certainly don't consider myself to be a part of it.

   OK, let me explain what I mean by that.

OK, I get it.  I do think ``hacker culture'' is a better term.

Frankly, perhaps what I object to is your self-characterization as
``public advocate for the hacker tribe'' (from ``Take My Job,
Please!'').  Based on your public writings, I don't think that you
advocate my interests or beliefs.  Therefore I do not want to be
classed as part of the group for which you are a public advocate.  (I
understand that you may think that you do advocate my interests or
beliefs; I presume you will grant me the right to respectfully
disagree.)


   > You later use a dog as an example of animal territoriality, but dogs
   > are a heavily domesticated species.  Some dogs are indeed guard dogs,
   > which have been bred to protect human property boundaries.  Others are
   > not, and do not.

   You're missing a subtle point here.  Teleologically, we breed dogs "to
   protect human property boundaries".  But in biological truth we can't
   do that -- human property being mediated the way it is, we'd have to
   breed them for intelligence and verbal capacity before we could even
   try.  You've never seen a dog defend a stock portfolio....

   So instead, what we actually breed for in guard dogs is a *stronger
   territorial response*...

   You're a bright guy.  I'm sure you'll see the punch line here :-).

I was perhaps unclear.  Here I'm responding to this quote from your
paper:

``Anybody who has ever owned a dog who barked when strangers came near
its owner's property has experienced the essential continuity between
animal territoriality and human property.''

I'm arguing that the existence of dogs which protect human property is
not an argument for any sort of continuity between animal
territoriality and human property.  That is like saying that the
existence of guide dogs for the blind shows an essential continuity
between dogs and humans in the desire to help others less capable
than ourselves.

   > In general, I think you have a tendency to use the style of arguments
   > described as ``evolutionary psychology'' or ``sociobiology.''  These
   > arguments, while interesting and useful, tend to drastically simplify
   > the range of human behaviour and motivations.

   Guilty as charged.  Animal ethology and evolutionary biology are my
   main frame of theoretical reference.  Using them does simplify things.
   Whether it *over*-simplifies them is another question altogether.

   Personally, the more I study animal social models, the more I think 
   human beings flatter themselves by overestimating the complexity and
   novelty of their own social behavior.

   Nor, by the way, do I consider that a cynical statement.  The kingdom of
   life is a big and wonderful place.  We separate ourselves from it too much.

I certainly agree that in many respects humans are just like other
animals.  It is for precisely this reason that I am a vegetarian.

However, I also believe that there is a quantum leap between animal
behaviour and human behaviour.  Humans have mental capacities which no
other animals have, including highly complex self-reflection and
self-deception.  These capacities make simple explanations of complex
behaviour automatically suspect.

In general, I think that evolutionary psychology and sociobiology
suffer from ``just-so syndrome,'' by which I mean a tendency to
explain behaviour by constructing ``just-so stories,'' a la Kipling.
There is a tendency to confuse a plausible explanation with a correct
explanation.  This underestimates the complexity of human behaviour.
It is, in fact, an over-simplification.

I will expand on this slightly.  John Maynard Smith and others have
developed the notion of an evolutionarily stable strategy, which is a
collection of different animal behaviours which is sustainable in a
sufficiently large population.  It is possible to present these
strategies in mathematical form, it is possible to predict the
percentage of individuals who should follow particular behaviours,
and it is possible to show that the strategies occur in animal
populations at the predicted levels.  The correspondence of the
predicted and observed percentages provides some evidence (though not
by itself proof) that the explanation of the behaviour is correct.
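
To make that concrete, here is a minimal sketch of the standard
textbook example, the hawk-dove game.  The payoff numbers V and C
below are illustrative values chosen for the example, not
measurements from any real population.  With V the value of a
contested resource and C > V the cost of an escalated fight, the
predicted stable fraction of ``hawk'' behaviour is V/C, and at
exactly that fraction the two behaviours pay equally well:

    # Hawk-dove game: the textbook evolutionarily stable strategy.
    # V = value of the contested resource, C = cost of an escalated
    # fight, with C > V.  Values here are illustrative only.
    V, C = 4.0, 10.0

    def payoff(actor, opponent):
        # Payoff to `actor' from a single contest with `opponent'.
        if actor == "hawk":
            return (V - C) / 2 if opponent == "hawk" else V
        return 0.0 if opponent == "hawk" else V / 2

    def expected(strategy, p_hawk):
        # Expected payoff of a pure strategy in a population in which
        # a fraction p_hawk of individuals play ``hawk''.
        return (p_hawk * payoff(strategy, "hawk")
                + (1 - p_hawk) * payoff(strategy, "dove"))

    p = V / C   # predicted stable fraction of hawks: 0.4
    print(expected("hawk", p), expected("dove", p))   # equal: ~1.2 each

If hawks are rarer than V/C, playing hawk pays better, and vice
versa, so the population is pushed back toward the predicted mix.  It
is that self-correcting property which makes the observed percentages
testable against the model.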

To my knowledge, this type of argument has never been made
successfully for complex human behaviour.  Why not?  Because as humans
learn about the argument, or invent it for themselves, they are able
to reflect upon their own behaviour and adjust it according to their
perceived advantage.  Thus human behaviour automatically introduces
second-order effects, and indeed, as people reflect further, we find
third-order effects, fourth-order effects, etc.  Human behaviour can
thus be described as a chaotic system, in which the output conditions
cannot be reliably predicted from the input conditions.

Ian


