> >> Well, I got a segfault.
> >
> > At what stage of the process? Right at the beginning or in the
> > middle?
>
> This is memory from two days ago, but I believe it was just after
> typing the command. That is, it didn't sit there for ten minutes and
> *then* segfault, if that's what you mean. Now I'm having doubts about
> whether it was "move" or "resize" that I was running. Sorry for the
> vagueness. I'm willing to try to reproduce the entire scenario, if I
> ever get my data back to do so!

I'd really like to know in detail what happened with your hard disk. A core dump right at the beginning is another clue that GNU Parted did not wreck your data. But I can only guess anyway.

> >> That's an extremely good idea, thank you very much. I've been saying
> >> for years that lack of documentation is Linux's problem, not a lack
> >> of software. Linux would be far better off with more documentation
> >> and *less* software.

Could you elaborate on which areas badly need documentation? I'm interested.

> I knew you were going to ask this :-) I can help proofread and suggest
> idiot-proofing for any documents you write, and perhaps fill out some
> sections if you write a bullet-point summary. But I don't know enough
> about the field to make real contributions. The same goes for the code
> itself, naturally.

That's okay; parts of the document do not require special skill, either. Some time, stubbornness and the will to contribute will suffice. As for the code, it's simple mark-up. I don't think you will have problems understanding it.

> I might say though, I have *severe* philosophical problems with the
> "if you don't like it, you can fix it yourself" approach. I can feel
> an essay coming on, but this isn't the place.

Maybe you could nevertheless give a short summary of your objections?

> I have a DVD-RW. But my disk is 160GB, so that's 40 DVDs.

I tend to back up only personal data.
My system can be restored with a bit of effort, so only my personal data (documents, images, code, maybe videos, though I could leave those out as well) needs to be saved.

> Unless you get that one rsync command wrong, somehow.

A quick check with "du" and some random spot checks usually tell me whether it worked.

> I hope you understand that your point of view on what's an acceptable
> backup strategy is a little skewed compared to the average user?

I'm aware of this fact. rsync is also quite complicated.

> Mine is too; I've been using Linux for ten years now. But when I
> started, I told myself that within ten years Linux would be as
> generally useful as any other operating system. It's badly missed
> that deadline, and I'm starting to consider other options now, like
> Macintosh. I've got much less tolerance for these kinds of solutions
> than I used to.

It depends on what type of user you are. I know another person with similar problems: not belonging to the group that hardly does anything a computer is meant for (the Microsoft Office, Outlook and Internet Explorer types), and not to the group of geeky hackers either. This person also considered switching to a Macintosh. I believe you, too, have an especially hard time finding the right solution for you.

> I actually disagree with the long-assumed axiom that it's impossible
> to write software which doesn't have bugs. True, I've never seen it
> done, but I think that there's no incentive for anyone to really
> try. Code would have to be reviewed and double-reviewed. There would
> have to be a formal testing procedure with every reported use case
> accounted for. Releases would have to be a *long* way apart. New
> features would be treated with absolute suspicion. Above all: it would
> be a lot of work.

That's correct. And you forgot one very important thing: proofs. Both hard and time-consuming.

> I know that you don't have enough time for the current development
> model, let alone trying to make parted bulletproof.
> What I'm saying is, in that case, maybe the effort is misdirected?
> Maybe there are too many projects, and those projects are focusing
> too much on "fun" features, rather than bulletproofing?

I agree.

> Microsoft is a monopoly; smaller businesses don't have it so easy.

Ah, yes, the monopoly thing. I forgot. Bad example. But you yourself give a good one later on: Partition Magic and its Amazon reviews.

> I've been reading this in the GNU manifesto for ten years. What would
> you suggest?

I don't know, and I'm probably not the right person to organize something like this. I just work on projects and help people. I'm not good at charging people money or at marketing, either.

> Who is going to provide personalised support for less than €100?

I would.

> This business just doesn't exist. I think it'd be a great idea if
> there was a phone line you could just call any time, and they take you
> through the process of doing whatever it is you're trying to do, with
> people who actually understand what you're trying to do, and pay a
> reasonable fee for it. But there simply is no such thing within the
> reach of consumer-level users.

You're getting this kind of support (and first-hand, too) for free right now.

> After reading the reviews on Amazon for Partition Magic and Partition
> Commander, I have to agree with you. They're all crap. Somehow, this
> doesn't make me feel any better.

In my opinion, GNU Parted should adopt mutt's slogan: "All partitioning programs suck. This one just sucks less." I strongly believe GNU Parted is, despite its current weaknesses, the best partitioning tool you can use today.

> I consider a segfault a very, very serious problem.

But especially for GNU Parted, it isn't. GNU Parted tries to offer non-destructive operations; that is, at any point in time the data must be in a consistent state.
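If you ever hit the segfault again, a core dump plus a backtrace is the most useful thing you could send me. A hypothetical session could look like the following; only the ulimit step actually runs here, since the reproduction and the gdb session need a real crash (the device name is invented):

```shell
#!/bin/sh
# Allow the kernel to write a core file in this shell session.
ulimit -c unlimited
echo "core file size limit: $(ulimit -c)"

# Then reproduce the crash and load the dump, for example:
#   parted /dev/sdb print    (device name invented)
#   gdb parted core
#   (gdb) bt                 backtrace to attach to the bug report
```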
And commands failing right at the start with a core dump won't do any harm either (unless the core dump does not occur every time; in that case something fishy is going on). With Microsoft Windows (especially the 9x series), core dumps were fatal: they'd crash the whole system in a lot of cases, just because Microsoft failed to take advantage of the Intel processors' process-management capabilities. But today every process runs in its own sandbox and can hardly disturb other processes with a core dump.

> have bugs in: but segfaults really should never happen. You should be
> checking the return values from each function call and sanity checking
> many times over in an application like this.

GNU Parted actually has a lot of sanity checking:

  [sky@wintermute ~/parted]% grep -r ASSERT libparted parted | wc -l
  720

> I haven't seen the code, and I haven't tested every feature. I believe
> you when you say you take reliability seriously. But I do find it hard
> to reconcile that with a tool which has a known segfault in it. That's
> product recall territory to me.
>
> That's going out and finding the people who have incorporated parted
> into Live CDs and asking them to please stop. That's banner headlines
> on the home page, *pleading* with people not to use it. Regardless of
> the severity of the actual problem, segfaults just smell bad.

Obviously our views on core dumps differ considerably. In my opinion, and my experience bears this out, the real bugs are those that sleep deep in the code and only show their ugly face in special situations (where they do a lot of harm).

> >> This is why, if you can suggest any commercial software that can
> >> trace through ext2 filesystem structures, I'd rather use that than
> >> any free software at this point.

As far as I know, there's not really a commercial market for ext2 products. Partition Magic does support it, and there are some commercial file system drivers and access programs for MS Windows, but that's all.

> Ah. Yes, parted is in an extremely sensitive area.
> That's why it deserves an extra, extra tight software development
> process. One that's not going to be at all fun to work on, as a
> programmer, working in your spare time. That's why I'm skeptical that
> parted, and to a lesser extent data recovery, is a "core competency"
> of free software, if you see what I mean.

It sucks less, and my plan is to make it "good". That will start with the major release 1.7 and continue throughout the year. GNU Parted's development was quite stagnant until I took over, and I have already received feedback from people who are relieved that it is being resumed now.

> Linux, or at least the core components thereof, is almost as sensitive
> - but nowadays it's largely IBM and Novell and people like that
> who are responsible for it. Actually, I'm horrified that there's a
> development model that accepts patches from anyone in the world. For
> a wifi driver, fine, but for an IDE driver? I really want people who
> contribute to that stuff to be thoroughly vetted. That's one reason
> why I'm hoping HURD comes to fruition. It's easier to separate out
> those components that need to be properly hardened.

I like the Hurd, too. But do note that the BSDs (especially NetBSD and OpenBSD) fit your position: better slow and safe development than hacky code. The latter is obviously the case with Linux.

> Absolutely. Unfortunately, like you say, open source programmers do
> have a strong tendency to spread themselves too thin.

In my side project "JACK Rack", I'm working with musicians right now.

> OK. It might be worth mentioning that in the FAQ - i.e. "if you
> encounter this bug, don't worry: it should not cause any destruction,
> no matter what other commands you've been running in parted
> beforehand".

I added this passage.