The Road to GNU

Richard Stallman describes the experiences that prepared him to fight for a free software world.

Around April 1983, Stallman wrote an introduction to The Happy Hacker - A Dictionary of Computer Slang. In it, he talks about his experiences at the MIT AI Lab and the Lisp Machine Wars. He also details how Emacs was developed. The text below is an updated version from late 1983, shortly after the initial announcement of GNU.


Machine Room Folk Dance, Thursday at 8 PM
Come Celebrate the Joy of Programming,
with the World's Most Enjoyable Computers.
There were only five of us dancing, but we had a good time.

My first experience with computers was with manuals for various languages that I borrowed from counselors at camp. I would write programs on paper just because of the fascination of the concept of programming. I had to strain to think of what the programs should do, because I had nothing to supply me with a goal except that I wanted to program. I wrote programs to add up the cubes of a table of numbers in several assembler languages at various times.

The first actual computers I met were IBM 360's, at the IBM New York Scientific Center, when I was a student in high school. There I quickly developed interest in language design, operating systems and text editors. Hired for the summer to write a boring numerical analysis program in Fortran, I surprised my boss by finishing it after a couple of weeks and spent the rest of the summer writing a text editor in APL.

I also quickly manifested a lack of proper reverence for authority. The whole center had been denied access to the IBM computer in the building, and we had to use slow telephone connections to the Cambridge Scientific Center. One day an IBM executive came to tell us about the work various IBM scientific centers were doing, and finished with, “Of course you all know the important work being done here.” I asked him, “If our work is so important, why can't we use the computer in this building any more?” After the meeting, my friends told me they had wanted to say such a thing but were afraid of reprisals! Why? Certainly nothing happened to me as a result. They seem to have learned the habit of cowering before authority even when not actually threatened. How very nice for authority. I decided not to learn this particular lesson.

The Artificial Intelligence Lab

The New York Scientific Center closed down just in time for me to move away to college. I found that the Cambridge Scientific Center wasn't interested in me, which was very lucky for me, because it spared me from remaining ignorant of the far superior non-IBM computers, especially the PDP-10 and PDP-11 from Digital. Awake now to the fact that all computers were not equal fun, I sniffed around for the most enjoyable ones, and found them at the MIT Artificial Intelligence Lab. There a bunch of people who termed themselves “hackers” had created their own timesharing system, the Incompatible Timesharing System, designed specifically to facilitate hacking. ITS and all the utility programs (including the debugging program DDT which was also the “shell” called HACTRN) were maintained right there. I came by looking for documentation of their system (how naive of me). I left without any documentation since it didn't exist, but with a summer job instead. I had been hired by an engineer/administrator, Russell Noftsker—ironically, the same man who was later to play a primary role in the lab's ruin. The job became permanent and lasts to this day.

Once I showed I was competent, I had free rein of the entire operating system, an opportunity to learn and be productive that few labs and no company would have given me. The hackers' attitude was, “If you can do a good job, go right ahead—whoever you are.”

With the AI lab as a comparison, I came to see how little freedom and how many unnecessary difficulties people had elsewhere. At IBM, and at Harvard, power was very unequally distributed. A few people gave orders and the rest (if they were not me) took them. Professors would have their own terminals, which were usually idle, while the rest of us often could not work because there were too few shared terminals. People would ask, “Are you authorized to do this?” rather than, “Do you know how to do this? Is it constructive?” They would rather have a job done by an authorized moron than by an unknown genius. I ceased to frequent Harvard's computer lab because MIT was so much better. (I was majoring in physics; there was no need for a natural hacker to take formal classes in computers, as hacking challenging programs among good hackers is better training.)

The AI lab attitude was different. We had a tradition of breaking open the door of any professor who dared to lock up a terminal in his office. They would come back to an open door and a note saying, “Please don't make us waste our time unlocking this terminal.” The terminals are there to be used, and they are wasted if they are idle. We extended the same attitude to computer time. The PDP-10 executes 300,000 instructions every second. If no user asks for them, it spends them on counting how long it has had nothing useful to do. It's better for them to be used by anyone at all for any constructive purpose, than to be wasted. So we allowed “tourists”—guest users—as long as they did not get in the way. We encouraged them to learn about the system, looking for the few who would become hackers and join us. There are at least two lab staff members and one MIT professor who got started this way.

I found that the computer systems reflected these differences in attitudes between organizations. For example, most computer systems are designed with security features that allow a few people to tell everyone else what they can and can't do. The few have the power and nobody can challenge it. We hackers called this “fascism” because such computer systems really have the social organization of totalitarian police states.

In order to prevent the users from turning off the security, a fortress must be erected around the system programs. Every possible avenue through the walls has to be guarded, or the downtrodden masses will sneak through. It turns out to be impossible for the computer to distinguish between sneaking through the walls and many other activities that people frequently need to do in order to do their jobs. Since maintaining security is more important than getting work done, all such activities are forbidden. The result is that you must frequently ask one of the elite to do something for you that you aren't allowed to do. If he doesn't like you or anything about you, or if he wants a bribe, he can make your job twice as hard as it really ought to be with hardly any effort.

It's taken for granted that only the elite will be allowed to modify or install any system programs lest the underlings sneak in a “trojan horse” to turn off the security. (This restriction is enforced using “file protection.”) Just the opposite of the AI lab, where a tourist working on system programs meant he was starting to make himself useful and become a hacker. Their way, fewer people can contribute to improving the system, and the users learn a fatalistic, despairing attitude toward system deficiencies. They learn the mental outlook of a slave.

At a place like Digital Equipment, even the people whose job it is to improve the system have to contend with so much bureaucracy that their effectiveness and morale are halved. As Robert Townsend said in “Up the Organization,” most institutions demoralize their workers and waste their potential by hindering them from doing their jobs well. Security and privileges are the way it is done on a computer system.

Most people accept such regimes because they expect jobs to be onerous and hope for nothing from their jobs except money. But for the hackers, hacking was more than “just” a job, it was a way of life. The original hackers made sure they would have no such problems by omitting security and file protection from the design of the system. Users of our system were free men, asked to behave responsibly. Instead of an elite of power, we had an elite of knowledge, composed of whoever was motivated to learn. Since nobody could dominate others on our machine, the lab ran as an anarchy. The visible success of this converted me to anarchism [1]. To most people, “anarchy” means “wasteful, destructive disorder,” but to an anarchist like me it means voluntary organization as needed, with emphasis on goals, not rules, and no insistence on uniformity for uniformity's sake. Anarchism does not mean advocating a dog-eat-dog jungle. American society is already a dog-eat-dog jungle, and its rules maintain it that way. We wish to replace these rules with a concern for constructive cooperation.

The file protection on most computer systems means that great attention is paid to how you can restrict who can do what to your files. Users are taught to expect that file protection is all that stands between them and having their work destroyed every day. We hackers, who lived happily for years without file protection and did not feel we were missing anything, called their attitude “paranoia.” It was extremely useful that everything in the system was accessible; this meant a bug could not hide in a file you were not allowed to fix.

We carried these attitudes into programming language design as well. Consider the “structured programming” movement, with its “ban the GOTO” platform. These people said, “All you programmers except we few are [in]competent. We know how you should program. We will design languages that force you to program that way, then we will force you to use them.” We hackers felt that a more appropriate way to improve programming languages was to identify and provide constructs that were easier to use; to help the user write good programs rather than hassle him if he might be writing a bad one. And we provided the facilities so that users could create their own constructs if they did not like the ones we provided.

Philosophy Manifest in the Lab's Achievements

The AI lab attitudes are an intrinsic part of my best-known work, the EMACS full-screen editor (to which Guy Steele and others also contributed). Nowadays full-screen editors (“word processing” programs) are common, and are found on every home computer. In 1973, display terminals were more expensive than printers, so most people still used printing terminals, and those who had display terminals usually used them as if they were printing terminals (that is, as “glass TTYs”). The AI lab had displays but no screen editor yet.

EMACS is unusual among screen editors because it is powerful and extensible. EMACS contains its own programming facility which I used to provide commands that other editors don't have, and which users use to provide any commands they want which I didn't give them. Users can make libraries of commands and share them, and when they do a good job, the libraries become part of the standard EMACS system just by being included in the manual.

Many other editors have had “macro” facilities. EMACS goes further, in two ways. The first ingredient is a programming language for writing editor commands, completely separate from the usual editing language. Because it does not have to be an editing language, it can be a much better programming language, good for writing complicated programs. The second ingredient is to make no distinction between the implementor and the user. Nearly all the “built in” commands of EMACS are written just like user extensions. Each user can replace them or change them for himself.

The development of EMACS followed a path that illustrates the nature of the lab. When I came to the lab, the editor was TECO, a printing-terminal editor with some more programming facilities than other editors. The user would type a command string of many commands, and then TECO would execute it. On a display terminal, TECO knew how to redisplay the text of the file after each command string. The natural way to provide screen editing was to add it to TECO and adapt the existing redisplay mechanism.

Originally, the screen editor was just one of TECO's commands. Its power was very limited, and if you needed to do anything fancy, such as save the file on disk or search for a string, you would exit from the screen editor and use regular TECO for a while. Then a user suggested that I provide a couple of screen-editor commands that the user could hook up to a saved TECO command string or “macro.” In implementing this, I discovered that it was just as easy to let the user replace any of the screen editor's commands with a saved TECO command string.
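The rebinding mechanism described above can be sketched in modern terms: every command, built-in or user-supplied, lives in one dispatch table, so a user macro can replace any built-in on equal footing. This Python sketch is purely illustrative; the names `MiniEditor`, `bind`, and `dispatch` are hypothetical and are not drawn from TECO or EMACS.

```python
# Illustrative sketch: a single dispatch table holds every editor command,
# so user-supplied routines can replace built-ins with no special privilege.

class MiniEditor:
    def __init__(self):
        self.text = ""
        # Built-in commands are registered the same way user macros are.
        self.commands = {
            "insert": lambda ed, arg: setattr(ed, "text", ed.text + arg),
            "clear": lambda ed, arg: setattr(ed, "text", ""),
        }

    def bind(self, name, fn):
        """Attach or replace a command; built-ins get no special status."""
        self.commands[name] = fn

    def dispatch(self, name, arg=""):
        self.commands[name](self, arg)

ed = MiniEditor()
ed.dispatch("insert", "hello")

# A "user macro" replacing the built-in insert command:
ed.bind("insert", lambda ed, arg: setattr(ed, "text", ed.text + arg.upper()))
ed.dispatch("insert", " world")
print(ed.text)  # hello WORLD
```

Because replacement and definition go through the same table, there is no implementor/user distinction: exactly the property that let the screen editor's commands be hooked up to saved TECO command strings.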

This touched off an explosion. Everybody and his brother was writing his own collection of redefined screen-editor commands, a command for everything he typically liked to do. People would pass them around and improve them, making them more powerful and more general. The collections of redefinitions gradually became system programs in their own right. Their scope increased, so that there was less and less reason ever to use TECO for actual editing. It became just a programming language for writing editors. We started to categorize it mentally as a programming language rather than as an editor with programming as an extra feature, and this meant comparing it with other programming languages instead of other editors. The result was a demand for many features that other programming languages had. I improved TECO in this way while other hackers used the new features to improve their editors written in TECO.

After about two years of this wild evolution, Guy Steele decided it was time to write one editor that would combine the best ideas of all the rest. We started together, but he soon drifted off to his other interests. I called the editor EMACS, for “editing macros.” Besides, I wanted the name of the new editor to have a single-letter abbreviation, and “E” was one of the letters not already in use.

Thus, the standard EMACS command language was the result of years of experimentation by many user-maintainers on their own editors, something possible only because of extensibility and the AI lab's attitude of encouraging users to add to the system. On the fateful day when I gave users the power to redefine their own screen editors, I didn't know that it would lead to an earth-shaking new editor. I was following the AI lab heuristic that it is always good to give the user more power. AI lab attitudes then encouraged users to use the power and to share what they produced thereby.

I worked on EMACS for about five years, distributing it to everyone free with the condition that they give back all extensions they made, so as to help EMACS improve. I called this arrangement the “EMACS commune.” As I shared, it was their duty to share, to work with each other rather than against. EMACS is now used at all the best university computer science departments and lots of other places. It's also been imitated about ten times. Sad to say, many of these imitations lack the real essence of EMACS, which is its extensibility; they are “ersatz EMACSes” which imitate the superficial appearance only.

Nowadays EMACS users hardly ever edit with TECO, and most don't even know TECO. In fact, I've forgotten how to edit with TECO. I got so used to thinking in terms of programming with TECO that on a few rare occasions when I needed to edit with it I was at a loss for a minute or so. The reflexes were all gone.

I've noticed that one sign that an editor improvement is a valuable one is when, after using it for a couple of weeks, I forget how to do without it. This proves it must have required a great effort to keep in practice to do things the old way.

I don't think that anything like EMACS could have been developed commercially. Businesses have the wrong attitudes. The primary axiom of the commercial world toward users is that they are incompetent, and that if they have any control over their system they will mess it up. The primary goal is to give them nothing specific to complain about, not to give them a means of helping themselves. This is the same as why the FDA would rather kill a thousand people by keeping drugs off the market than one person by releasing a drug by mistake. The secondary goal is to give managers power over users, because it's the managers who decide which system to buy, not the users. If a corporate editor has any means for extensibility, they will probably let your manager decide things for you and give you no control at all. For both of these reasons, a company would never have designed an editor with which users could experiment as MIT users did, and they would not have been able to build on the results of the experiments to produce an EMACS. In addition, the company would not like to give you the source code, and without that, it is much harder to write extensions.

What's Your Printer's Name?

When I was installing a new typeface for the EMACS manual on a laser printer system at the lab, I noticed that the initialization menu included a slot for changing the printer's name, which appeared on the cover sheet of each user's output. (This feature was important if you had more than one printer and wanted to know which one had produced your output.) Our printer had the cutesy and meaningless name “Tremont.” It was my duty as a hacker to replace it with something more fun. I chose “Kafka,” to bring up disturbing associations. (Did you hear about the man who woke up as a laser printer one morning?)

For the next few days, other hackers kept talking about the new name, and suggesting additional amusing names (“Treemunch,” “Thesiscrunch,” “Cthulhu,” …). I tried each name for a few days, while collecting more suggestions. It was great fun for just about everyone. The one exception was a professor who told me that I was not authorized to do this, and that I should stop. I replied that I knew first-hand that people were having fun as a result, and therefore I ought to continue, at least as long as the suggestions held up. Finally, I told him, in stern and official terms, that he was not authorized to say that hacking was unauthorized.

The poor guy didn't let it end there. He said, “If you think renaming the printer is so much fun, why don't you rename the PDP-10's?” This was a truly brilliant idea, for which I remain grateful. The next day, the DM PDP-10 (home of Zork) was called “Dungeon Modelling” instead of “Dynamic Modelling”; the ML PDP-10 (used for research in mathematics and in medical decision making) was called “Medical Liability” instead of “Math Lab”; the MC PDP-10 was “Maximum Confusion” instead of “MACSYMA Consortium”; and the AI PDP-10 was called “Anarchists International” instead of “Artificial Intelligence.” I didn't hear any more complaints.

The Lab Betrayed

There is still an institution named the MIT Artificial Intelligence Lab, and I still work there, but its old virtues are gone. It was dealt a murderous blow by a spin-off company, and this has changed its nature fundamentally and (I believe) permanently.

For years, only we at the AI lab, and a few other labs, appreciated the best in software. When we spoke of the virtues of Lisp, other programmers laughed at us, though with little knowledge of what they were talking about. We ignored them and went on with our work. They said we were in an ivory tower.

Then parts of the “real world” realized that we had been right all along about Lisp. Great commercial interest in Lisp appeared. This was the beginning of the end.

The AI lab had just developed a computer called the Lisp machine, a personal computer with a large virtual address space so that it could run very large Lisp programs. Now people wanted the machine to be produced commercially so that everyone else could have them. The inventor of the Lisp machine, arch-hacker Richard Greenblatt, made plans for an unconventional hacker company which would grow slowly but steadily, not use hype, and be less gluttonous and ruthless than your standard American corporation. His goal was to provide an alternative way of supporting hackers and hacking and to provide the world with Lisp machines and good software, rather than simply to maximize profits. This meant doing without most outside investment, since investors would insist on conventional methods. This company is Lisp Machines Incorporated, generally called LMI.

Other people on the Lisp machine project believed this would not work, and criticized Greenblatt's lack of business experience. In response, Greenblatt brought in his friend Noftsker, who had left the lab for industry some years before. Noftsker was considered experienced in business. He quickly demonstrated the correctness of this impression with a most businesslike stab in the back: he and the other hackers dropped Greenblatt to form another company. Their plan was to seek large amounts of investment, grow as rapidly as possible, make a big splash, and the devil take anybody or anything drowned in it. Though the hackers would only get a small fraction of the fortunes the company planned to make, even that much would make them rich! They didn't even have to work any harder. They just had to stop cooperating with others as they had used to.

This resulted in two competing Lisp machine companies: Greenblatt's LMI and Noftsker's Symbolics (generally called “Slime” or “Bolix” around the AI lab). All the hackers of the AI lab were associated with one or the other, except me because even LMI involved moral compromises I didn't want to make. For example, Greenblatt is against proprietary operating system software but approves of proprietary applications software; I don't want to refuse to share either kind of program.[2]

Symbolics proceeded directly to get millions of dollars of investment and persistently hire away everyone at MIT not welded down. Greenblatt had envisioned people working part time at LMI and part time at the AI lab, in order to minimize the trauma to the lab. Symbolics made accusations of conflict of interest, forcing the LMI people to leave MIT as well. Suddenly I was the last hacker, and one person was not enough. The lab was dying.

I strongly suspect that the destruction of the AI lab was a deliberate act. Once a businessman gets a golden egg, he kills the goose to make sure he has a monopoly.

It is painful for me to bring back the memories of this time. The people remaining at the lab were the professors, students and non-hacker researchers, who did not know how to maintain the system or the hardware, or want to know. Machines began to break and never be fixed [3]; sometimes they just got thrown out. Needed changes to software could not be made. The non-hackers reacted to this by turning to commercial systems, bringing with them fascism and license agreements. I used to wander through the lab, through the rooms so empty at night where they used to be full, and think, “Oh, my poor AI lab, you are dying and I can't save you.” Everyone expected that if more hackers were trained, Symbolics would hire them away, so it didn't even seem worth trying. The lab administration made no effort to rally us, and the MIT administration acted as money-grubbing as a profit-making company, further demoralizing people.

In the past, hackers had gone from time to time, but new ones had been trained to replace them by the ones who remained. Now the whole culture was wiped out, there was not enough left to provide a model for a new person, and no greatness to draw the best people here. For example, hackers used to eat dinner together (usually Chinese) every day. No one person was there every day, but you could count on finding other people to eat with at dinner time. Now this practice disintegrated, and when people could no longer expect to find others to eat with, they would not plan to show up hungry at the usual times, thus compounding the effect.

The whole AI lab used to have one common phone number and a public address system. (The phone's extension was 6765, and we answered it “6765,” or “Fibonacci of 20,” since 6765 is the 20th Fibonacci number.) It was easy to call and reach anyone and everyone. Now most of the people and terminals have moved to other floors where 6765 does not reach, and the 9th floor, the lab's original heart, is filling up with machines. This change is further reducing the lab's social cohesion. Now I can't even call up and find out if anyone is hungry and nobody can get in touch with me on the phone.
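The phone-number joke checks out: with F(1) = F(2) = 1, the 20th Fibonacci number is indeed 6765. A quick sketch to verify the arithmetic:

```python
# Verify that 6765 is the 20th Fibonacci number (with F(1) = F(2) = 1).
def fib(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print(fib(20))  # 6765
```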

Thus I lost all at once my social network, my opportunity to pursue my career in an upright fashion, and most of what I had helped to build. I felt that I was the last survivor of an extinct tribe, doomed to spend my life among uncomprehending strangers. There was not much chance of building a new lab with the AI lab's good qualities if an existing and previously healthy one could not survive the pressure. The computer industry would not be disposed to let me share with other hackers as the golden rule requires. I began looking for a new career that would not involve computers, but didn't expect to find one, and saw no future except to work on accounting programs or other things that no hacker (including me) would be interested in. It would be a pointless life, but at least I would not have the shame of refusing to share with other hackers if they would not want what I was doing. I wasn't sure this was better than a more direct form of suicide.

For about a year there were LMI, Symbolics, and the remains of the AI lab. The Lisp machine operating system was shared by all three. From time to time the Symbolics hackers would respond to a bug report by saying, “This cannot be fixed on the current system. Wait for our new machine.” This was to make the new machine sound like more of an improvement. It was great fun for me to announce, shortly thereafter, that I had already fixed the bug.

War Breaks Out

But things were to get worse, because LMI was not the failure that Symbolics had vocally predicted. It was making and selling Lisp machines, and selling them for a lot less than Symbolics, which had a giant investment to recoup and so many salaries to pay. After about a year, Symbolics realized that its well-advertised inevitable triumph would not happen without more violent measures. Their plan: to end the three-way sharing of software improvements. Since LMI was much smaller, they expected that LMI would be unable to keep up with them. (The AI lab was no longer considered a significant contributor.)

Symbolics demanded that the AI lab submit to new terms: to use improvements made by Symbolics but not share them with LMI [4]. This demand was announced, Newspeak style, as a great act of generosity. Actually, even allowing MIT to continue using their improvements was simply another tactic, designed to lock the lab in, so it would provide bug reports and demos for them and buy from them alone. This is not an unusual motivation. Many companies donate computers they make to MIT for just this reason. But usually they try to gain MIT's cooperation by generosity rather than by cracking the whip.

Symbolics doubtless expected the lab to cave in immediately and switch entirely to their brand of software. But I refused to capitulate, refused to be conscripted into helping Symbolics against LMI. LMI was more worthy of my aid. No longer allowed to remain neutral, I would fight against those who forced me to fight [5].

Instead of using the improvements from Symbolics, I made similar improvements to the last shared system. Most of the lab's users continued to use the MIT system; some through dislike of Symbolics, some because they considered it technically superior, and some because they were more free to change it. For the past year and a half I've been doing this, keeping the MIT system just as good and sometimes better. Since LMI gets to use all the improvements I make, LMI too has a system just as good. The main result of Symbolics's refusal to share was a lot of hassles for the users due to incompatibilities between the two systems.

Generally I let Symbolics design a new feature, then look over their documentation and implement something mostly compatible. I could improve the system just as well without paying attention to them, but this would be a bad strategy. They could copy my improvements verbatim and spend their time on additional improvements. Or they could ignore my design and implement something similar but incompatible, making trouble for all the users. Just as in a bicycle race it is much less work if you are right behind the other guy. As one man racing against a large team, I need this advantage. I can easily dart out in front, but that is not an efficient use of my energy.

Symbolics fights back by threatening lawsuits (though they have not filed one) and by trying to get me fired. Rumor has it they read my computer mail several times a day looking for something to accuse me of; once they were caught and it backfired against them. (It is against my principles to stop them with security measures that punish everyone.) They think it is bad if anyone gets something for nothing; better that something should go to waste than that it benefit their competition equally with them. This is the kind of divisiveness that has paralyzed our country.

By working against Symbolics this way, I not only escape having to submit to their terms, I also help bring about justice and the punishment they deserve for destroying the old AI lab. Initially I hoped also to provide a nucleus of self-sufficiency to revitalize the lab. But nobody joined me; everyone sticks to his research now.

Where Do I Go Now?

Symbolics never did achieve superiority in software, but their new, faster machine was ready sooner than LMI's new, faster machine. Now they have delivered many of these to MIT, and my users are switching to them. Using the MIT system version on those machines is not practical because the machines are too different.

The loss of users makes it hard for me to verify that my new software really works. But with luck I will be able to hang on just long enough to keep Symbolics from winning in the end. LMI has just begun deliveries. Soon they'll be very successful and supporting system development themselves, and Symbolics will be stuck with lean and aggressive competition. Once LMI is able to go on without my help, the eventual punishment of Symbolics will be fully arranged. Then I can stop work on Lisp machines. I have set Thanksgiving of this year as the time to stop.

And once I've arranged the punishment of the wrongdoers, it is time for me to begin rebuilding what they destroyed.

It cannot be rebuilt at the AI lab. MIT attempts to license anything useful that is done here; to stay here and keep sharing is a struggle in itself. And being surrounded by Symbolics machines and semicompetent sell-outs is no fun anyway. I need to make a fresh start in life, and the first step is to move away from the ruins of the past. Therefore, I am going to quit.

It cannot be rebuilt by working on Lisp machines. MIT claims to own the Lisp machine software, so it can only be shared secretly. (LMI is an exception; they have a contract with MIT.) Such underground cooperation is better than none at all, but it cannot produce a new way of life. That requires open, public, widespread cooperation. It seemed righter to work on the Lisp machine system than to let Symbolics win by default, but it is not a good way to live any longer than necessary. For the same reason, I cannot work for LMI, even though they are willing to let my work be partly public. I can make compromises in fighting a war, but when it comes to building something good such compromise is useless, since it would make whatever I build fail to be good.

Instead I have chosen an ambitious project that strikes at the root of the way that the commercial, hostile way of life is maintained. I am going to write GNU, a complete replacement for the Unix software system (kernel, compilers, utilities and documentation), to be given away free to everyone.

GNU will make it easy for hackers to decide to live by sharing and cooperation. Making use of a computer requires a software system. Now, with no free software systems available, it is a tremendous sacrifice to refuse to use owned software. But once a desirable software system is available free, that pressure will be forever lifted. Hackers will be free to share.

I start on Thanksgiving. I'm asking computer manufacturers for donations to the cause, but I'm going to do it even if I have to work as a waiter. Already other programmers who miss the old ways are rallying to the cause. Join in and help! And maybe the old spirit of the AI lab will live again.

Good Hacking
Richard M Stallman
The Happy Hacker

Footnotes

  1. I loved the AI Lab's anarchistic way of life and manner of operation, but that's not really being a whole-hog anarchist. I didn't call for abolishing the state and its many useful activities, and the possibility of making society's decisions in a democratic way. See Why we need a state.
  2. The AI lab was neutral between the two companies; I was content to be part of that neutrality.
  3. The AI lab PDP-10 broke in February 1982, and was never repaired.
  4. Symbolics issued its ultimatum on March 16, 1982, by coincidence my birthday. I thought of that as the day when Symbolics attacked the AI Lab and LMI, aiming to subjugate the former to destroy the latter.
  5. Ironically, outright conflict pulled me out of my despair, by showing me something positive to strive for. I was no longer lost with no direction to advance in. A struggle had fallen on me, out of the blue—an aggression whose defeat was worth exerting the utmost of my ability.