January 2002, Issue 74       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors  |  Search

Table of Contents:

Linux Gazette Staff and The Answer Gang

Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm],
This page maintained by the Editor of Linux Gazette,

Copyright © 1996-2002 Specialized Systems Consultants, Inc.

The Mailbag

HELP WANTED : Article Ideas

Send tech-support questions, Tips, answers and article ideas to The Answer Gang <>. Other mail (including questions or comments about the Gazette itself) should go to <>. All material sent to either of these addresses will be considered for publication in the next issue. Please send answers to the original querent too, so that s/he can get the answer without waiting for the next issue.

Unanswered questions might appear here. Questions with answers--or answers only--appear in The Answer Gang, 2-Cent Tips, or here, depending on their content. There is no guarantee that questions will ever be answered, especially if not related to Linux.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.


Mon, 26 Nov 2001 22:18:58
Philippe (philippe341 from

[with a bow to our translator Frank Rudolf] Any reader out there inclined to help out, please mail Philippe, and copy us here in The Answer Gang. -- Heather

----- Forwarded message from philippe -----

salut je recherche des personnes qui connaissent dbman, j'ai quelques problemes a installer les modifications, je souhaite également créer un forum sur ce logiciel

Hi, I am looking for people who know dbman. I have some problems installing the patches. I would also like to create a forum about this software.

dial-up and DSL

Wed, 12 Dec 2001 19:53:24 +0800
Henry Jesus S. Lastimosa Jr. (henryjl from
Karl-Heinz gave this a shot but any of our readers with more experience in this regard are welcome to join in the fray, or even write up a longer article for the Gazette. -- Heather


I wonder if you can answer this question; it really keeps bugging me. At present my company is connecting to the Internet via DSL. Is there a way I can configure my Linux box with a dial-up account from an ISP, in case my DSL goes down?

It goes this way: I'll set up my Linux box with a DSL connection using IP masquerading and fetchmail (for e-mail). If under any circumstances my DSL goes down, I have to connect to an ISP which serves as a backup for my DSL. How can this be done? Or can this be done at all?


thanks ,

henry lastimosa

I'm not familiar with DSL -- I assume it will use an ethernet adapter for the network connection. Basically nothing much changes if it's PPPoE or similar.

You can check the DSL connection by pinging relevant machines outside or checking device status (ifconfig, cat /proc/***).

If this goes down you can/should disable the default routing over the DSL and start up a ppp connection to your ISP. This will give you a new IP number and a working ppp device. pppd will set the default routing for that ppp device.

If your box were standalone and this were only for the local machine, that's it. But you have masquerading, and maybe firewall rules, set for the IP number from DSL -- which now won't work due to the IP number change.

You've got to set up the firewall/forwarding/masquerading rules again for the new IP number (probably anew every time, if it's a dynamic IP, as usual with dial-up). After that it should work like before. You can even leave the DSL device active (but not as the default route) and check whether it's online again. Then change back to DSL.

Precisely how to set up the forwarding/masquerading for this, I would be interested in myself -- especially automatic adaptation to dynamic IPs.
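The steps described above can be sketched as a small watchdog script. This is a dry-run sketch under stated assumptions, not a tested recipe: it assumes a 2.4 kernel with iptables, and the interface names (eth0 for DSL, ppp0 for the modem), the pppd peer name "isp-backup", and the test address are all placeholders to adjust for your own site.

```shell
#!/bin/sh
# Dry-run sketch of a DSL-to-dialup failover.  RUN=echo prints each
# privileged command instead of executing it; set RUN= (empty) and run
# as root to do it for real.
RUN=echo
TEST_HOST=198.51.100.1          # some reliable host beyond the DSL link

dsl_up() {
    # One ping with a short timeout; the exit status says whether DSL works.
    $RUN ping -c 1 -w 5 "$TEST_HOST"
}

fail_over_to_ppp() {
    $RUN route del default               # drop the dead DSL default route
    $RUN pppd call isp-backup updetach   # dial out; pppd sets the new default route
    # Masquerading by *interface* instead of by IP number means the rule
    # keeps working when the dial-up address changes -- no rewriting needed:
    $RUN iptables -t nat -F POSTROUTING
    $RUN iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
}

back_to_dsl() {
    $RUN poff isp-backup                       # hang up the backup line
    $RUN route add default gw 192.0.2.1 eth0   # restore the DSL gateway
    $RUN iptables -t nat -F POSTROUTING
    $RUN iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}

if dsl_up >/dev/null; then
    echo "DSL looks fine"
else
    echo "DSL down, switching to dial-up"
    fail_over_to_ppp
fi
```

Run from cron every few minutes (with RUN cleared), something along these lines approximates automatic failover. Note that masquerading on the outgoing interface rather than on an address is what sidesteps the dynamic-IP problem: no rule ever mentions the IP number.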


ssc, "Linux@Gazette" Request for assistance.

Mon, 10 Dec 2001 17:33:49 +0800
k.s. Teo (quality from
This reader clarified the initial email so I merged the letters. Anyone who works in real estate, manages their properties using free software, and feels inclined to tell us what you're using, please let us know. It'd make a really great article! -- Heather

Dear Editors,

To all Editors, should any of the Editors come across some application software on "Property Maintenance" please let us know.

We are referring to application software to manage the maintenance of a high-rise residential complex and its compound (gardening, parking lot allocation, electrical replacement, refuse disposal, building maintenance, sports facility booking by residents, swimming pool, etc.). (Apartments are owner-occupied.)

We do not want custom-programmed software, and would prefer existing and tested application software.

We appreciate your assistance.

Yours sincerely,
K.S. Teo
Hotel Quality Source Co.


Comment on Dennis Field article. Why Linux is not winning the battle of the desktops.

Thu, 6 Dec 2001 08:34:37 -0800 (PST)
Javier Isassi (j_isassi from

Greetings, fellow Linux lovers.

The following comments are in regard to an article published in your December issue of the Linux Gazette entitled "Why Linux is not winning the battle of the desktops".

Let me start by saying: There's no such battle.

<Wry look> That pretty much sums up my take on the whole thing. As soon as I saw that article, I figured that it was going to draw a fair bit of flamage; I'm pleasantly surprised to see that the responses have been generally well-reasoned.

Besides - a rout is not a battle. <grin> We're not battling anyone, just taking a pleasant little walk in the park. If outdated businesses happen to fall by the wayside because they've stepped on their own shoelaces, why, <insert innocent look here> what do we have to do with it? <blink, blink>

-- Ben Okopnik

Oh, the battle exists, but only in the minds of the mainstream media who invented it. For them Linux won't "win" until there's no longer a need for an underdog OS to support. -- Jim Dennis

Moreover, the article was focused on one particular distro. If it were me, I would choose one of the major distros that I thought came from a big enough company to provide the basic features I needed to support the type of hardware I intended to run it on, then add the applications for the ecommerce (or whatever it happened to be) part of it later. I don't see any reason why the author was bound to use the same distro as had been chosen to run on the desktop machines in the business office environment.

Also, in the case of somewhat specialized hardware such as a laptop, as mentioned here in the past, there are a few web sites which cover Linux on laptops pretty thoroughly - he didn't mention looking at those sites to iron out the difficulties.

Back in the days of RH 4.2, I recall having trouble installing to a desktop 486 machine I had. I tried Debian and RH without success. Then I went to Slackware and was able to get it installed. Those were the early days of hardware auto-detection and automated installs. At the time, Slackware was still very much a manual install, and so avoided the problems that the other distros were encountering. What I'm trying to say is that instead of banging one's head against the wall with one distro, it pays to try others. It was more work, but I had a functional Linux box, which included X.

-- John Karns

While the developers of the multi-flavored Linux arena are working towards making Linux easier to run and configure, it is accepted, well understood, and furthermore ADVERTISED that Linux is not the choice of the neophyte trying to learn how to use a computer (AKA the Windows user).

Furthermore the subsequent remarks towards making Linux a more "friendly" OS are also off the mark. Let's mention a few.

"Make Linux idiot proof"

There's already an idiot-proof OS. It's called Mac OS, not Windows. It's robust and more secure than Linux and Windows put together. The drawback: you can't jack with it. The main reason Linux exists: "An OS that you can jack with."

Or, to quote a UNIX old hand, Doug Gwyn:

"UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things."

However, specialist distros of Linux, designed to do only one thing well, do exist (routers are very popular variants, as are rescue disks). Companies sell special equipment for special purposes, which sometimes has a free OS under the hood. For instance, the thinkNIC ( is a bookend PC with no hard disk, designed primarily for playing solitaire and web surfing. People who can't spell "OS" can't tell it's Linux; they just know they have to stuff its CD in there when they turn it on. -- Heather

"Give Linux users better customer service"

I worked in the customer service dept at Dell Computer for over 3 years. The number one reason people called could be nailed in one single sentence: "I was jacking with my system and things went wrong, can you change my diapers and fix my system?"

What kind of numbskull would pour money into supporting a staff to hear about customers rebuilding their kernels, or installing modules they code and compile themselves? What is it that you are supporting? Coding? Linking and running? Unlike the Wintel world, where you have "parties" (vendors) providing you with software, there are no "parties" in the OSC (Open Source Community), because NOBODY is paying for it.

First define the customers, then you can define the service. Companies that couldn't do the first, went early to the "dot bomb." There are companies making okay money by selling "professional services" aka rebuilding things and coding. Ship a pretty darn good product and excellent manual, and you still get calls, but more of them will be off the far ends of the bell curve... asking to do things that are complex, or completely beyond the scope (ok so now that I have Linux you guys can help me build my own TiVo before my 90 days are up?) or people who think that "ordinary" things like making sure the monitor is on are non-obvious and should have been in the book. Honest. I've been there too! (4+ years in MSwin and antivirus tech support.)

However, the same team that can, as you put it, change diapers may not be terribly good at wreaking deep kernel magic, and vice versa.

But I wouldn't say NOBODY is paying for things; We could hardly have so many boxed products in their third or fourth major revision, if that were the case. Imagine telling folks back in '94 that Linux was going to be on endcaps at Fry's, taking up half aisles, and random PCI cards would proudly stamp themselves "linux compatible". Hah! They'd have sent for the little white men. -- Heather

Anybody who believes that because they dished out 40 bucks at Staples for a copy of Mandrake they are "entitled" to ANYTHING soon realizes otherwise.

Entitled to keep the manual inside that box on the shelf and read it until it is happily dog-eared. If you're the sort who understands things without needing manuals, you don't need boxed Linux anyway. If you're not sure where your A: is (oh! the floppy! why didn' ya SAY so!) then that "90 days install support" may be valuable in helping you use the quickstart guides.

It's the job of the folks who design the box to set the expectations of the customer who will pick up and buy that box. -- Heather

To recap. Linux off the shelf is a poor example of a vanilla robust desktop OS. And proud of it.

We're not vanilla. We're mint chocolate chip, the other favorite flavor. Strawberries cost extra, low fat options available, etc. -- Heather

If all you want to do is browse the web and read your email, get an iMac. If all you want is for someone else to read your email and browse your system, get Windows with Outlook. For anything else... Linux.


Javier Isassi.

(no subject)

Mon, 3 Dec 2001 14:39:22 +0100
Ian Carr-de Avelon (ian from

In LG-73 Mr Field again argues that, to win the battle of the desktop, Linux "vendors" need to provide a much higher level of support. The battle for my desktop was won by Linux years ago, but it may well be that the battle for Mr Field's desktop is not worth winning at the moment.

There is a famous quote (anyone know from whom?) that "users must be made to believe that it is not the administrator's job to make them happy; it is the administrator's job to make sure the system works. Then the system will work and the users will be happy most of the time. If users believe that the administrator has to make them happy, they will never be happy and the system will never work." This is not about whether users have a right to happiness; it is just a practical point that if the technically able staff in an organisation don't have the status to refuse to attempt to deliver what they know they cannot deliver, they will deliver nothing.

I wonder whether Mr Field's book shop sells books in foreign languages. If he sold a book in Russian and the client could not read it, because they didn't know the language, would he as "vendor" feel that he was failing to provide customer support? How could he expect to sell books to customers who could not read at all? Obviously he could not; he relies on schools, parents, and the customers themselves putting in a huge effort to be able to use the products he sells. Maybe he should make use of his bookstore to purchase some books on Linux and take the time to learn Linux at a realistic rate. I'm not against Linux users helping each other for free, nor am I against people who need assistance paying a company for it if they can afford to. However, when Mr Field suggests that, since what he paid for the distribution could never finance the open-ended unlimited support he would like, they could at least encourage their knowledgeable users to spend 10 hours sorting him out for a chance at a $5 hat, we see what kind of person we are dealing with. Maybe he should start offering $5 hats to customers who will give free Russian lessons so he can sell books in Russian.

If you believe that a knowledgeable person could solve your problems in 10 hours, and that that would be a good use of their time, please pay them for those 10 hours. If someone is prepared to give 10 hours to making Linux better, please let them decide for themselves what they will do in those 10 hours. If Linux can be difficult to install, that may put some people off, but I can't see Linux users working 10 hours for a baseball cap as a way to encourage people to become Linux users. Linux users of the world unite; you have nothing to lose but the chance of a $5 baseball cap.

When efforts are going to be made, it is only reasonable that those providing the resources decide what they should be used for.



... to which Mike replied, and Ian responded ...

In LG-73 Mr Field again argues that, to win the battle of the desktop, Linux "vendors" need to provide a much higher level of support. The battle for my desktop was won by Linux years ago, but it may well be that the battle for Mr Field's desktop is not worth winning at the moment.

There are two sides to this issue,

No, there are many sides to the issue, because XFree86-GNU-Linux is not a simple vendor-and-client product. Mr Field's basic argument is exactly that he paid Linux for a CD and it didn't work out, so Linux should get its act together. We all understand that there is a whole series of groups here: open source developers (Linus, the FSF, the LDP, XFree86), the distribution, the satisfied users, and dissatisfied non-users like Mr Field. Each has their own motivations, and it can't be accepted that we all go down together at the battle of Mr Field's desktop. (Actually laptop, but let's keep this clean.)

However, I think what Dennis is saying is that a higher level of vendor support is necessary for Linux to be a viable alternative in many retail and other workplace situations.

I accept this, but the response to the article has to be a) how people in the situation can realistically use Linux as it is, and b) consideration by knowledgeable people of how the resources which can be made available can best be put to use. If we allow the complaint to undermine our confidence in Linux, as a system we have proven in use ourselves, and accept that we should apply ourselves not as we do, but as Mr Field thinks best, then we will have allowed Mr Field to become toxic to us.

This is also known as "enterprise-level" support, and any company that switches a vital component of their business (such as their inventory system) to a new application will make sure the support is available, either from the vendor or in-house.

I have no problem with this, but I don't expect enterprises to get this level of support for the price of a Linux CD or a hat. The initial problem relates to getting Linux installed on a single specific PC. Do you think that if the distribution sent someone round and made Linux work on this PC, Mr Field would soon have his inventory system working under Linux? My guess is that he would run straight into another problem, and another. Solving problems, and accepting that this modem or that scanner does not work and will have to wait for a development, or for you to learn more, is the reality of using Linux. It may even be that if the installation goes too easily, you have lost an important chance to learn and have gained an unrealistic expectation of how things will go with the whole system.

"Not worth winning": perhaps, perhaps not. It may not be the vendor's "responsibility" to provide the support; but on the other hand, if they want those customers, they will provide the support.

If the vendor says "we will provide support", they have a duty to do that (however you quantify support), but it can't be accepted that Linux users have a responsibility to provide the support which a vendor promised. If the cost of a Linux CD plus the cost of the support Mr Field needs is an attractive one to Mr Field's employers, let the vendor make the sale and Linux can advance; but let's not have high-maintenance users and vendors using us all to meet unrealistic expectations for a baseball hat or two.

Giving up on those customers means they will be stuck with a commercial OS that only works at all for them simply because they happen to be included in the OS company's marketing target. If the OS company decides his business (and that of everybody like him) is insufficiently significant to their [the OS company's] bottom line, the next version of the OS may be incompatible with what he needs, and then he'll be up the river.

Whoever produces the software they use, it takes effort. The fact that a commercial organisation (two, if we count the Linux vendor) can benefit is not in itself sufficient reason to work 10 hours for a baseball cap, IMHO.

Yours Ian

Mike made an effort to forward the conversation to Dennis, the thread continued, and some of the conversation never made it to me. But here's the tail end of it... -- Heather

Until such time as we can get all the people who are currently running their small businesses and home offices with Windows to take several years of graduate courses in Linux, there is no point in even trying to compete with Microsoft.

Either they learn enough to use Linux as it is available now, or Linux has to be out-of-the-box ready, or they can't use it. I'm not saying how it should be, or that it would be nice if it was. Learning to use Linux is something it is easy to give pointers to. Making Linux more out-of-the-box ready is generally more difficult, and there are several ways of going about it.

If you go in the direction of writing clever scripts which detect the hardware and set the configuration, then SuSE and Red Hat are about as good as you can get with the resources anyone has available. If they are not good enough for you, maybe you will get lucky with the next release, or the same release on a different PC, but there are no miracle distributions just around the corner.

You suggest that users could sort themselves out if there was a web forum. In fact there is lots of help on the Internet: databases of laptops running Linux, and almost every package has its own web site and mailing list. I recently installed Slackware 8.0 on a Tulip PC and found problems like the address in Netscape being displayed black on black. I worked out a way around it and emailed XFree86. In order to get the information to someone who may be able to use it, and to avoid every distribution which has the same XFree86 version having someone reinvent the same wheel, I had to understand quite a lot about how the Linux system operates just to make a decent bug report.

The other way to make Linux out-of-the-box is to supply preinstalled systems, even with remote administration, or to be a Linux-based ASP and let the customer use your Linux via the Internet.

But I guess Red Hat and SUSE and Caldera don't care about selling to the small business market.

These are top companies at what they do. Would you write off Ford because their cars take 20 hours (personal tuition) to learn to drive?


The only thing I don't understand is why IBM provides all that information about their products. Surely IBM's customers could just figure it out for themselves if their computer doesn't work?

IBM has all that information to hand, and the costs of putting it onto the net are less than having someone pick up the phone to say "hello, this is IBM; anybody who knows anything is too busy to talk right now."

Yours Ian

What must Linux vendors do?

Mon, 3 Dec 2001 10:58:23 -0800
Dan Wilder (The Answer Gang)

[ ... ] if the technically able staff in an organisation don't have the status to refuse to attempt to deliver what they know they cannot deliver, they will deliver nothing.

This is elegantly put, and certainly true of situations far beyond the intended context of the discussion. I like it!

-- Dan Wilder

Link Update Request

Thu, 6 Dec 2001 11:15:23 -0800
StuffIt Web Evangelist (evangelist from

Hello there,

During a recent surf of your site, we noticed that at the following URL(s) you offer users help on how to handle downloaded files such as .zip, .rar, etc.

Hmm, are you sure you have the right people? I went there and I didn't see a Linux Gazette mirror site. -- Heather

We'd like you to consider including a link to StuffIt, or even replacing your existing recommendations with one for StuffIt. <>;



The competitors are not "free" but shareware, meaning your users will get nagged to purchase every single time they download a file from the Internet. With StuffIt, unregistered users are only nagged when they create archives, NEVER when they open them.

StuffIt is the only product available on all the platforms your users may use. (Available for Windows, Macintosh, Linux, and Solaris.)

StuffIt handles more formats <>; than any competing product and is the only product which handles the popular .sit format, which means your users have a better chance of accessing a file with StuffIt than with any other utility.

As the number one compression utility in the retail channel for Windows, StuffIt has proven itself as the compression utility of choice where it counts, on the street.

So do your users a favor and refer them to StuffIt <>;, in your FAQ's, and on any pages that offer .zip, .sit, or other supported file types for download.

If that sounds good but you're wondering what might be in it for you, we have an answer! If you respond to this email to let us know that you have added a link to StuffIt to your web site, we will gladly offer you a choice of a free registered copy of StuffIt on any platform you would like - OR - a free t-shirt (black) that says ".sit happens!". (T-shirts are in limited supply, so act quickly if you want one!)

Please let us know if you have any questions and especially if you'd like to collect on some free software or logoware.


Eric Kopf
StuffIt Web Evangelist

We don't offer .zip or .zit files, only .tar.gz. -- Mike

Aren't you supposed to use "squeeze" for that last one? Or does "pop" provide the same functionality? -- Ben Okopnik

We don't offer .rar either, and Info-ZIP is free enough for most of our users.

I regret to note that I have trouble using Aladdin's "StuffIt for Linux" to reliably unpack .sit files meant for Macs. (I was trying to get at some PICT resources that fit a theme I'm messing with; I wanted to see if GIMP would load them. All but the text files unpacked to zero bytes in length.) I assume that the Linux version is allowed to fall behind the Mac version, and it shows. It just doesn't win points for me if Aladdin's app doesn't work with their own Stuff :(

As for free. "only nagged when they create" isn't very free. Most shareware I have encountered never nagged anyone at all except in the documentation. (Including the about box, of course, so you know how to get ahold of the author.) Most Linux utilities don't even need a postcard. For some of our, ahem, more evangelistic types, free means we know how it works under the hood (academic papers ok, code preferred), and for the more vehement among those, it includes the right to make derivatives that stay free in the same sense. You really have to be careful about the difference between "0 dollars and no sales tax" and "freedom of assembly" :) around here.

I don't think we have any serious all-in-one decompressor libraries... and why should we? The individual ones work fine, and we have lots of shiny front ends for the itty bitty command line apps or to call our .so APIs. mc is my personal favorite, but some of my friends like GUItar. -- Heather

Your "Cleaning up the MBR" instructions

Thu, 13 Dec 2001 10:21:54 -0800
Ben Okopnik (The Answer Gang)

Hi Ben,

I have a laptop that was turned into a doorstop when I tried to reinstall the original image after experimenting with Mandrake 8.1 (which really needs more of a machine than that laptop is). Every attempt at fdisk seemed to work, but attempting to boot the machine froze with "LI" and a blinking cursor on the screen.

I tried your instructions using Tom's root-boot, and got nowhere but an error message stating that /dev/zero was an invalid option for if in dd (I'm sorry, I had already tried the assembler version before I thought of the fact you might like the actual text of the error. . .duh!!).

No big deal, although I would have been curious to see the error. If it does say something like that, however, it's possible that "dd" is somewhat broken in Tom's rootboot; several of the "adaptations" of programs (most of them seem to have been rewritten in "lua") are, to some degree. For instance, the "chroot" in Tom's doesn't let me spawn a shell, which I consider broken behavior.

However, it's not a problem: any method by which you can write 512 nulls to the beginning of "/dev/hda" will do.

# If you just don't care about what's on the HD...
# (each pass of the loop quadruples the string, so $x$x is 512 "\0"s)
x="\0"; for n in 1 2 3 4; do x=$x$x$x$x; done; printf $x$x > /dev/hda

# A nicer way to do it: stage the nulls in a scratch file, then write
# exactly one 512-byte block so nothing past the MBR can be touched
x="\0"; for n in 1 2 3 4; do x=$x$x$x$x; done; printf $x$x > nada
dd if=nada of=/dev/hda bs=512 count=1
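Before aiming that printf trick at a real disk, it's easy to convince yourself it emits exactly 512 nulls by counting the bytes in a scratch file first (nada here is just a temporary file, as in the second recipe above):

```shell
#!/bin/sh
# Sanity check: build the same string as in the recipe and count the bytes.
# Each pass of the loop quadruples the string, so after four passes $x
# holds 256 copies of "\0", and printf $x$x emits 512 NUL bytes.
x="\0"
for n in 1 2 3 4; do x=$x$x$x$x; done
printf $x$x > nada
wc -c < nada        # reports 512
```

Only once that reads 512 would you point the output at /dev/hda (or feed nada to dd as above).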

Anyhow, your DOS-based "debug" method appears to have worked. . .I was able to put a bootable DOS partition on the box again. Thanks for having alternatives; you might want to dig into the Linux solution a little further. FYI, this is a Toshiba 7000CT pII-266 with 4GB HDD and 64M in case you were wondering. Thanks very much for having this resource "out there!"

You're welcome, Dan. I get fairly regular mail thanking me for this one, which is certainly nice; it's even better to get one with a bug report included. Thanks!

what now?

Sat, 8 Dec 2001 15:46:01 -0800
Thomas P. Rowland (thomas.p.rowland from


You've been around the block a couple of times. I've been Linuxing since '94 (Slackware).

Anyhow, how can the Linux community stem the tide? Volunteer time to local schools to build networks? Online tutorials? I don't know the answer. But I'd like to help.

I don't believe that this is a "Linux" problem. Linux has been a solution for some, may be the solution for many, and offers hope for everyone.

I don't think of the situation as an inrushing tide to be stemmed. However, if I accept that analogy, then we are not on the shore; we are riding our own waves. Since we have already set sail, a mere tide will not sink us. Other currents may run the S.S. Penguin aground, a gale may capsize us, or we might find ourselves becalmed (resting on our laurels?) and adrift.

As for how we can make Linux a better solution for a broader range of users, that's a bigger question. I would hate to sound like a communist but one slogan that comes to mind is:

From each as he or she is able, to each as he or she needs.

No single effort will do. This is not about defeating Microsoft, nor even about undermining commercial and proprietary software as an industry. It's about providing alternatives.

So, what can each of us do? I can contribute through technical support writing, by teaching, and through informed advocacy. Linus, Alan Cox, et al contribute through coding (and project management, and technical vision). The KDE and GNOME teams contribute through a different level of coding (user-space application frameworks rather than core kernel work). The FSF provides the tool chain and the utility set that fit between the kernel and the application space.

Perhaps you could help wire up your school. However, that is not a Linux effort. You should not volunteer with your local school board specifically to push a Linux agenda. First it should be "your" school, in the sense that you are involved in it. If, from the vantage of understanding *its* needs, you believe that Linux is the best available solution to some of their problems, then you can propose it.

If you can create an online tutorial, that's great. Better yet if you can improve an existing one.

For example there is the GBDirect-sponsored "Open Source Training" effort at: which offers curricula for the professional trainer under licensing terms that are very close to the Linux Documentation Project (LDP) free documentation license. (In other words, we are all granted a royalty-free license to copy, modify and present the materials, though publication/distribution of derivative works must be approved by the author.)

There is a whole section of the dmoz ( and Google's ) directory devoted to training:

... so there's already a body of work to which we can contribute.

Of course, online training only works for people who are exceptionally self-motivated. It also requires persistence and a special mindset. Let's face it: most people can't benefit as readily by simply "reading up on it" as through more interactive means. A good instructor can teach more, and more quickly, than most people would learn on their own.

Otherwise the LDP ( ) and a computer with a 'net connection would be all anyone needed. (Arguably that's all that most of us needed to get started; but the point is that it's not enough to attract many other people to Linux).

So, those who are comfortable with public presentation and excel in the materials might contribute by teaching.

Linux and other open source systems (such as FreeBSD and its ilk) are grass roots projects. They are the reaction of some programmers to the state of the industry. A true grass roots movement is not about grandstanding. It's about regular people doing what is right for them.

(This is not to say that Linux and the "open source movement" face no real threats. The SSSCA, DMCA, and UCITA laws certainly pose great risks to fundamental liberties for programmers and users of all software. I wish I could claim that this was just an American problem --- but it isn't. These (proposed) laws are evidence that the U.S. legislature has been almost completely subverted by commercial interests, and that only the barest whisper of lip service to our constitution and our Bill of Rights remains. It remains to be seen how far the injustice will go and what measures may be necessary to stem that tide.)


PS Very good article on the briar patch!

Paul Rowland Architecture and Engineering

Thanks -- Jim Dennis

Copying linux to a new disk

Thu, 20 Dec 2001 16:30:05 +0800
Gregory J Smith (greg.smith from

G'Day from Australia!

Love your Gazette. I have a couple of Linux systems at home.

[his question, trimmed like an xmas tree.]

Cheers, Merry Xmas

Please ignore my question sent previously - followed your advice and found info in a mini-HOWTO. Will try soon and post some question about it. Fingers crossed.

Greg Smith

Thanks, Greg, we hope that HOWTO works out for you. But if not, let us know! -- Heather

Free software appreciation

Mon, 31 Dec 2001 09:16:26 -0800
Bryan Henderson (bryanh from

Mike Orr writes in the December issue about one of the dangers every free software developer faces: lack of appreciation from users. His point is a good one, but the article was inspired by the resignation of Christoph Pfisterer from the Fink project, which doesn't really illustrate the point.

Mike writes, "A developer is resigning from a free software project because of the unappreciative demands of its users." I know that issue pretty well, and it interests me, so I read the resignation letter and the references linked from the letter, and I discovered that this is not a case of unappreciative users.

This is a case of an arrogant developer who doesn't appreciate the situation of his users. Two of his references for why he is resigning are bug reports that look pretty polite and appreciative to me, but Pfisterer flames the user for being too lazy and stupid to solve the problem himself. He also seems to take personal offense at the suggestion that his work may be defective.

There's nothing the user community can do to keep a prima donna like this working on free software.

The other references have to do with beneficiaries of Fink not giving sufficient credit to the people who worked on Fink. But those appear to be genuine misunderstandings and disagreements over how much credit Fink deserves.

From the facts available, I believe Pfisterer is new to supporting software used by the masses, and in time he will mellow and start contributing to free software again.


Unsubscribe to newsletters?

Thu, 20 Dec 2001 09:02:24 -0800
anonymous (address withheld)

Please take me off the mailing list for your newsletters or tell me how I can unsubscribe.

Go to and you will have an opportunity to unsubscribe. If you don't remember your password, there's a section where you can have it mailed back to you.

-- Mike Orr

Re: A querry

Sun, 2 Dec 2001 10:28:25 -0800
dinesh (dinesh from

Dear Sir,

Can you help me if I have a query pertaining to Linux? How can I ask questions? If there is a forum or something, kindly let me know.

See The Answer Gang FAQ at -- Mike Orr


Thu, 27 Dec 2001 14:45:22 -0800
Mike, Ben, and Chris (Linux Gazette Editors)

The latest TAG FAQ and KB are up. A big round of applause to Ben Okopnik and Chris Gianakopoulos for bringing these up to date!!!

-- Mike Orr

<twisting toe shyly in the sand> Shucks. 'Twern't nothin'... err, I lie. It was a hell of a lot of work, and a BIG chunk of it done by Chris this month while I was dealing with Real Life and wrestling with the various relevant meta-issues involved in the production. YAY, Chris!

<Grin> All made worthwhile by seeing the result, though - and it's going to get even bigger, and be a better resource for the community. Mike, whose oversight is just as much of a contribution as any, deserves a big hand too.

Good to be working on this with both of you guys. -- Ben

Thanks for that recognition! It's fun to be part of the Linux Gazette. I also thank everyone for the encouragement that you all have given me for the past two years with respect to Linux stuff.

Have a good set of holidays -- all of you! -- Chris G.

This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 74 of Linux Gazette January 2002
HTML script maintained by Heather Stern of Starshine Technical Services,

More 2¢ Tips!

Send Linux Tips and Tricks to

Setting up ipchains when using ftp: Problem Solved

Fri, 21 Dec 2001 22:55:37 -0600
Chris Gianakopoulos (The Answer Gang)

Hello Gang,

I figured out why my ftp client, on my Windows95 machine, did not appear to work using my Linux machine with IP masquerading. I had to type the following command on my Linux machine that was doing the masquerading:

insmod ip_masq_ftp

I found this information at the URL:

It had all kinds of other stuff for using ipchains.
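For the record, the other common protocol helpers on a 2.2 kernel can be loaded the same way. A sketch (module names as shipped with stock 2.2 kernels; run as root, and adjust the list to the protocols you actually use):

```shell
# Load IP masquerading protocol helpers on a 2.2.x kernel.
insmod ip_masq_ftp        # active-mode FTP through masquerading
insmod ip_masq_irc        # IRC DCC transfers
insmod ip_masq_raudio     # RealAudio
```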

Installing tulip.o in 6.2 (Question #8 - Dec)

Mon, 10 Dec 2001 15:08:34 -0700
Jeff Craig (craig from

I've actually had direct experience with this problem. Newer Linksys cards don't work with the kernel module that was included in the 2.2 kernel tree. I was helping friends install Linux on their machines, and had to do some scrambling of my own.

What I did to solve the problem was to download the latest 2.4 tree onto their Windows partitions, then perform the Debian install, unpack the tree to /usr/src/linux and recompile (a person should always compile their own kernel, IMO). The card worked beautifully after that.

[LG 72] 2c Tips #3

Sat, 15 Dec 2001 03:13:24 -0500
Greg Messer (greg from

I think Carlos needs to use:

force user = someuser
force group = somegroup

in his smb.conf file on a per share basis

That way any Samba user who has access to that share can write to any other user's files.
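In context, a hypothetical share section of smb.conf might look like this (the share name and path are made-up examples):

```
# Hypothetical smb.conf share section; path and names are examples.
[shared]
   comment = Group work area
   path = /home/shared
   writable = yes
   force user = someuser
   force group = somegroup
```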

Recovering from MySQL table problems

Thu, 13 Dec 2001 09:26:05 -0800
Mike Orr (Linux Gazette Editor)

Somebody on another list had a problem with MySQL losing tables. Since the answer is good for troubleshooting various MySQL table problems, I'm submitting it as a 2-Cent Tip.

I've never seen MySQL lose tables without a specific DROP command. First, be sure you're looking in the correct database:

  1. Look in the MySQL data directory (maybe /var/lib/mysql). There should be one subdirectory for each database, containing three files for each table (tablename.MYD, tablename.MYI, tablename.frm). Do the file sizes look plausible or are they "really small"?
  2. Check file ownership/permissions. The user the MySQL server is running under must have read/write access to all data files, and read/write/execute access for directories.
    cd /var/lib/mysql
    chown -R mysql.mysql /var/lib/mysql
    	# Or 'nobody' or whoever the MySQL server runs as.
    chmod -R u+rwX /var/lib/mysql
    	# Or 'ug+rwX' or 'ugo+rwX' for less security.
    mysqladmin -u root -pPASSWORD flush-tables
    Something on your system may have reset the ownership to root.root. If MySQL doesn't have read access, I think it will say the table doesn't exist.
  3. Do a MySQLdb query of "SELECT DATABASE();". Does it return the correct name?
  4. Use the 'mysql' interactive utility. Do "USE mydatabase;", "SHOW DATABASES;", "SHOW TABLES;", etc. If it can't find the tables, nothing else in MySQL will.
  5. Do you have two copies of MySQL installed and two data directories? Maybe it's looking in the wrong directory. Run "mysqld --help" and it will tell you where it thinks the data directory is.
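Steps 1 and 2 can be partly automated. A small sketch (the data directory path is only the usual default; pass your own if it differs):

```shell
# check_myisam: list suspiciously small MyISAM data files (under 1 KB)
# beneath a MySQL data directory -- a quick hint that tables were
# damaged, or that the server is pointed at the wrong directory.
check_myisam() {
    dir=${1:-/var/lib/mysql}
    find "$dir" -name '*.MYD' -size -1k 2>/dev/null
}
```

Run `check_myisam` with no argument to inspect /var/lib/mysql, or give it the data directory that "mysqld --help" reports.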

passwd disabling

Fri, 30 Nov 2001 17:11:42 -0700
Eric Larson (thelarsons from

I recently read an article from your site: "SysAdmin: User Administration: Disabling Accounts-From Glenn Jonsson on 05 Aug 1998"

It spoke of placing an * in the password field of the /etc/passwd file. This doesn't restrict the account on my system (Solaris 8). Could you have meant placing the * as the first character in the password field of /etc/shadow?

thanks for any feedback


Definitely. That trick only works when placed in the password field which is actually going to be used ... and since most Linux systems now support shadow files, that means /etc/shadow. In 1998 shadow files were a bit less common. -- Heather

Re: HTML/CSS question

Mon, 3 Dec 2001 10:20:24 -0500 (EST)
Larry Kollar (lkollar from

I am currently trying to write HTML which will insert page breaks for printing, which is [CSS2 and] not implemented in Mozilla.

Is anyone aware of any solutions to this using HTML/CSS1?

I don't think so, but if your HTML qualifies as well-formed XML, you could use XSLT (XSL Transformations) to transform it into something that can be printed. The W3C spec at does a pretty good job of describing the language.

If your source is valid (i.e. passes through an SGML parser without complaints from the parser), you can use DSSSL to convert it to a printable format. The beginnings of some how-to docs are at

If I had to do this, I would use Sablotron (a free XSLT processor from and write a stylesheet to transform XHTML to groff for printing. It's not as convenient as printing directly from Mozilla, but much more flexible and easier to control.

Hope this helps,

-- Larry "Dirt Road" Kollar

Linux equivalent for Active Directory?

Wed, 5 Dec 2001 16:01:59 -0500
Rick Holbert (holbert.13 from


Take a look at the latest version of Samba. Samba makes a linux box look like an NT file and print server. The latest beta version of Samba has Active Directory support.

The Samba url is

Good Luck! Rick

Browse email

Wed, 5 Dec 2001 16:35:04 -0500 (EST)
Chuck Peters (cp from

Mark E. Nosal asked:

I've been asked to provide our LAN clients with web access to their email. Our present NOS is, dare I say it, NT4 w/Exchange 5.5.

I refuse to install IIS to use OWA (with the exception of being fired, that is). I've downloaded Apache for Wintel, printed all the "how-to's" and plan to be enlightened.

I've been to (per the advice of another). They offer IMAP & POP3 webmail access.

The problem is I haven't any Apache knowledge, and limited mail knowledge in general. I used your search engine (in addition to other Linux-based sites) but I haven't found what I need. Would you please clue me in so I may tackle this task and hopefully justify bringing Linux in-house. One small step for penguin......

We use IMP here at CCIL at If you use Debian, it simplifies the install process. We did have a problem when the last security update of IMP broke it; we just set it up on another box until we had time to fix it a couple of days later. CCIL is a non-profit freenet and the tech work is all volunteer anyway; we've had a part-time paid Executive Director as of two months ago.


There are lots of webmail apps; Debian definitely makes some of them easier to install (aeromail comes to mind). Most distros come with Apache set up all right for a single domain... a lot of webmail apps are Perl-based or PHP-based. If you don't like IMP and its fellow apps in The Horde, you could try SquirrelMail ( or Phorecast ( both of which have been updated recently... or type "webmail" into the search gadget at Freshmeat and see what suits your fancy.

For a recent client of mine, his tastes were simple and we found ourselves very happy with OpenWebMail. However, it doesn't do IMAP, just POP. -- Heather

Sophisticated excluding backup

Sat, 10 Nov 2001 23:38:47 +0100
Matthias Posseldt (The Answer Gang)

In issue 72 (November 2001) we published Ben's 2c Tip about sophisticated excluding backups (

... in which he comments to Matthias:

- and, heck, since you're putting yours up, I might as well add mine to the list.

Arggh, just figured out a major/minor/whatever bug in the date string. Here comes a fixed version.

Ciao, Matthias

See attached

See attached

Kernel versions

Tue, 6 Nov 2001 01:03:25 -0800
Mike Orr & Heather Stern (Linux Gazette Editors)

Do not use kernel 2.4.11, especially on SuSE. Instead, use any earlier or later version. -- Mike

2.4.11 had a nasty error which Linus almost immediately regretted... many of the 2.4.x series have had significant improvements while occasionally mangling something rather ordinary. (For example, loop.c, needed for loopback mounting, didn't work in 2.4.14... I check my fresh-cut CDs that way, argh. It appears that unnecessary "deactivate_page" lines were the culprit. I can't say I discovered that on my own, but removing them seemed to work, anyway.)

The kernel maintainers are still fussing over having a working virtual memory handler - Andrea Arcangeli wrote a new one, which Linus accepted, while Alan Cox and Rik van Riel worked toward improving (some might say repairing) the original VM. Although Alan eventually agreed that Andrea's design is okay, the new VM's very recent vintage and limited comments in the code still have a few people favoring Rik's VM, and Rik continuing to improve it. Keep watch at the current "Kernel Traffic" summaries

... if the linux-kernel mailing list itself is too much to wade through. As of press time the current kernel of the 2.4 series is 2.4.17, with some 18-pre's already posted. -- Heather

Printing big text

Wed, 21 Nov 2001 12:06:58 -0500
Ben Okopnik (The Answer Gang)
Julio Cartaya (

OK, so Answer Gang discussions get me thinking - even if it's a question I asked first. :) Heck, in some circles, thinking's not only acceptable, people actually do it regularly! And nobody laughs at'em, either.

Anyway... my question was "how do you print a sign ('Welcome!', for example) big enough to cover a sheet of paper without using a GUI?" In effect, I wanted some utility that would work like this:

printbig -size 1024x768 'Welcome!'

Well, the closest thing was a TeX solution by Karl-Heinz... great stuff for those that know TeX (which I find obscure, complex, and just Too Darn Big for the occasional dinky little "fancy printing" jobs I need to do), but I was looking for something simpler still. Then, I remembered a set of tools that came with a tarball I'd downloaded a while ago, "libungif-4.1.0" (I would imagine it's been through a few versions since then, but it worked for me).

echo 'Welcome!'|text2gif -c 128 0 0|gifrsize -s 12 > welcome.gif

This gives a rather blocky-looking output, with the text magnified 12X (think of the Courier font at about 150 points or so) and a red foreground (the color is optionally set by the "-c R G B" switch.) For much more flexibility in conversion - anti-aliasing, blurring, drawing boxes around the text, convolving, embossing, and many, many other options, try using "convert" (part of the ImageMagick utilities) after the "text2gif" has done its job:

echo 'Welcome!'|text2gif|convert -monochrome -geometry 800x200 gif:- welcome.jpg

This one gives a beautiful "lace fringe" effect to a softly rendered black-and-white picture of the text, as if the letters were covered in snow and edged with frost. Note that "convert" has also changed the format into JPG; this is a much faster output option than GIFs.

Ben Okopnik

Perhaps this could help: the file attached, poster.tgz, contains the sources for a program that allows you to use a regular printer to print arbitrarily large posters, assuming the starting picture has sufficient detail.

Best wishes, Julio

I repackaged it so all files were at the same level, rather than making you all have to open a second tarball. DOS and MSwin readers can use his pre-compiled executable. -- Heather

Print Info

Thu, 6 Dec 2001 20:38:08 -0500 (COT)
John Karns, Heather Stern (The Answer Gang)

We have just switched our network from a Novell server to a SuSE Linux server. However, one of the most missed features was the ability to receive a pop-up indicating that a print job sent to the network printer had successfully completed.

We would like to do the following:

  1. Notify the workstation when a print job, sent to the network printer, arrives.
  2. Print a type or cover page identifying the origin of the print job. (We have many a stack of papers on the printer waiting for the owner!)

Alan Whiteman

You don't mention any specifics about how you're handling your print requests, etc. Assuming that you're using Samba and that you're running MSW clients, you can run WinPopup on the client, and send a msg to it using smbclient with the appropriate command line option - see the smbclient man page. Sorry I can't give specifics, as I really haven't set up Samba to do much printing. It would probably involve writing one or two bash or perl scripts. -- John Karns
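For the WinPopup half, the one-liner is roughly this (the NetBIOS machine name is a made-up example, and the Windows client must actually be running WinPopup to see the message):

```shell
# Send a WinPopup message to a Windows machine via Samba's smbclient.
# smbclient -M reads the message text from standard input.
echo "Your print job has finished." | smbclient -M WORKSTATION1
```

A small script wrapping this could be hooked into the print accounting step Heather describes below.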

The sheets announcing which user owns the print job are called "burst pages" in the UNIX world. In 'lpr' you would take "sh" out of the printcap entry, and (if you like these separators after the print job) maybe add "hl". For the notification you'd have to abuse the print accounting system, I think... have that shell script send email; that'd be the easiest. But there are other print spooling systems, all of them much newer. I'd look at a lot of stuff at before working too hard. -- Heather

OT: PC XT Keyboards

Thu, 6 Dec 2001 20:40:38 -0500 (COT)
John Karns, Ben Okopnik (The Answer Gang)

Mike Orr asked:

PS. How do you get Linux to leave Num Lock on by default? I have it set on in the BIOS startup, but Linux turns it off.

I believe it's specific to your distro. On SuSE, there is a parm in /etc/rc.config to handle it. -- John Karns

"setleds" is what I've used in the past. -- Ben Okopnik

Re: Setting up IP Masquerading

Fri, 7 Dec 2001 10:14:33 -0800
Mike Orr (Linux Gazette Editor)
Can somebody who uses DHCP modify this script so that it can be used in both static and dynamic situations? -- Mike

If you can't get your IP Masquerading working, try this "simple" script. If it works from the command line, put it in your boot sequence somewhere or reference it in your startup scripts (see "man init").

Remember to set the variables at the top of the script.

It works on kernels 2.4 and 2.2 only, using iptables on 2.4 and ipchains on 2.2. Your kernel must have the appropriate firewall/masquerading/forwarding compilation options enabled.

It tries to allow all connections initiated by the internal network, while prohibiting connections to the internal network from outside. This is minimal security; you can add iptables/ipchains commands to block certain ports on the gateway if you wish.

For FTP, IRC, RealAudio, etc, you may have to load additional modules.

This script assumes you have a static IP. If you have a dynamic IP (DHCP), you'll need to determine your current public IP and plug it in. You can run ifconfig to see the "inet addr:" manually, or modify this script to automatically determine the current IP.

See the iptables/ipchains manual pages for more information, and the firewalling/masquerading HOWTOs.

The 'xx' function displays each command line as it's run.

See attached
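As a minimal sketch of the 2.4 (iptables) half described above - not Mike's attached script, just the same idea - assuming eth0 is the public interface and eth1 the LAN:

```shell
#!/bin/sh
# Minimal masquerading sketch for a 2.4 kernel. EXT, INT and LAN are
# assumptions -- set them for your own network before running.
EXT=eth0                 # interface with the public (static) IP
INT=eth1                 # internal LAN interface
LAN=192.168.1.0/24       # internal network

# Enable packet forwarding.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic leaving via the public interface.
iptables -t nat -A POSTROUTING -o $EXT -s $LAN -j MASQUERADE

# Allow connections initiated from inside; block new ones from outside.
iptables -A FORWARD -i $INT -s $LAN -j ACCEPT
iptables -A FORWARD -i $EXT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i $EXT -j DROP
```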

List tweaks

Tue, 4 Dec 2001 09:05:26 -0800
Dan Wilder (The Answer Gang)

Chuck Peters asked:

We are using Mailman for our freenet support, CCIL Help Desk Team <>, and often the users reply to only the individual who originally answered the question. As much as I don't want to munge the header with a Reply-To, it would be better than our problem of users not replying to the list.

I took a quick look at the msg_footer and Python's string formatting rules, but it's not giving me the clues to figure out how you are changing the Reply-To to the list and the user, or the header containing "Original question from: user". How did you do that?

A wrapper. I'd threatened to post details, and since you ask, I'll do so.

It was a quick hack. Improvements and generalizations happily accepted.

The list begins by delivering to a procmail recipe. In /etc/aliases:

"|/usr/bin/procmail -m /etc/procmailrcs/linux-questions-only"

Because of the location and ownership of the procmailrc, mail is delivered as the user which owns the procmail recipe /etc/procmailrcs/linux-questions-only. In our case we have it owned by "list" which has permission to write to the temporary directory /var/lib/mailman/tmp/.

After several procmail recipes irrelevant to the present thread, the final delivering recipe says:

| /usr/lib/mailman/localbin/

If you don't need procmail and you can deal with Sendmail's smrsh, or if you're using exim, postfix, qmail, mmdf, etc., you could deliver directly to the wrapper script via /etc/aliases.


See attached

and then,

See attached

The data file /var/lib/mailman/localdata/linux-questions-only is generated by a script run from a cron job:


/usr/lib/mailman/bin/list_members linux-questions-only >/var/lib/mailman/localdata/linux-questions-only

The membership of the list doesn't change very fast, so we run this nightly.

An' that's it.

-- Dan Wilder

linux software

Fri, 2 Nov 2001 19:21:43 -0600
dwane boyle (crystalgroup from

My question: can Linux run on an RS/6000 IBM workstation?

Yes. That is a PowerPC architecture. Check distributions which offer PowerPC support for more details, but I've definitely seen it mentioned for Debian, Yellow Dog Linux, and Rock Linux.

-- Heather Stern

Tux the Penguin

Fri, 21 Dec 2001 09:24:01 -0800
Mike Orr, Ben Okopnik, and Heather Stern (Linux Gazette Editors)

Hardy Boehm asked:
This may be a stupid question which already was answered a million times, but I was unable to find an answer on the net.

When I gave her a stuffed Tux as a present, my girlfriend asked me what its sex is.

Can you help me on this???

<patiently> It's obvious. Geek, of course. -- Ben Okopnik

Four out of five sexist computer nerds surveyed agree Tux is male. -- Mike Orr

That might refer to Linus' original comment that penguins are happy because they have just stuffed themselves full of herring or have been hanging out with lady penguins. We only know that Tux is stuffed full of herring, but we can assume Tux hangs out with lady penguins. -- Heather

ftp macro variables

Sun, 16 Dec 2001 16:07:57 -0500
Faber Fedor (The Answer Gang)

jonesrf1 asked:
I am trying to write an ftp macro to run automatically from .netrc. The macro is named init, as in

macdef init

The macro should get the current date as in

!pre=`date '+%m%d'`

Is that ! supposed to be there?

and use that date to retrieve a set of files as in

cd /var/spool/fax mget pre*

where the files are named 1215somethingorother

I can't get the variable pre to be recognized by mget; mget uses "pre*" literally instead of "1215*" (i.e., the current date).

I would think you'd need to do

mget $pre*

Any ideas? Any place to find help on ftp macros? I have tried a web search.

I always use the expect programming language ( when I need to do an "ftp macro".
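Another way around the expansion problem, without learning expect: have the shell build the ftp session, so $pre is expanded before ftp ever sees it. The host, user, and password below are made up:

```shell
#!/bin/sh
# Fetch today's fax files by generating the ftp commands in the shell.
pre=$(date '+%m%d')        # e.g. 1215 on December 15th

# -n suppresses auto-login so the "user" command below takes effect;
# "prompt" turns off per-file confirmation for mget.
ftp -n ftp.example.com <<EOF
user faxuser secretpassword
cd /var/spool/fax
prompt
mget ${pre}*
quit
EOF
```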

Help... (Gnome)

29 Nov 2001 23:41:15 +0000
mike martin (The Answer Gang)

I don't know where to start. I have used (and been frustrated by) Windows for a long time. Linux seems to be a blessing from above. However, the practical matter is that some things don't work as advertised. There are so many, I don't know where to begin. Let's start with the Gnome Calendar. I am running Red Hat 6.0 and using the Gnome desktop. I have read the instructions about the Calendar application, but when I set an appointment it never notifies me of its passing. I leave the user logged in and the application running and minimized on the desktop. The date and time of the appointment comes and goes and nothing happens. Additionally, I don't know where to look for further help. Can you suggest something?

Thank you... Larry Gilson

First off, RH6 is really old (two and a half years). I can't really comment on gnomecal, but you may want to upgrade GNOME (it's worth it) and try Evolution; you can upgrade GNOME fairly painlessly from there as well.

Windows Shares

30 Oct 2001 15:05:31 +0000
mike martin (The Answer Gang)

I am new to Linux and need to get a network involving a Windows2000 box up and running.

I have a windows share which has the "everybody full control" permission set on a windows box on my network.

I can "see" the share on my linux box and can read all data in the share as a normal user. However as a normal user I am totally unable to write to the windows share. I do have write access as root :)

I have tried using mount with the -o rw options also the chown, chgrp and chmod commands. All meet with failure. The mounted share just will not allow me to alter its permissions so that as a normal user I can write to it.

Do you have any suggestions? I would really appreciate any assistance you can give; this problem has been driving me batty for weeks!

Best Regards

I know that when I was using Samba with NT, if you put uid=(any user's uid) in the mount options, that user will be able to write. You may be able to make it work using gid as well - I never had a chance to try it out.
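As a sketch of that (the share name, mount point and uid are examples, and smbmount syntax varied a bit between Samba versions):

```shell
# Mount a Windows share so that local uid 500 owns the files and can
# write to them. Run as root; substitute your own share and user.
mount -t smbfs -o username=myuser,uid=500,gid=500,rw \
    //W2KBOX/share /mnt/winshare
```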

linux telnet question

Wed, 14 Nov 2001 12:35:02 -0800
Dan Wilder, Heather Stern, John Karns (The Answer Gang)

votecrosby asked:

I have a problem that occurs with telnet on my Linux machines. The only fix for it I've found is to reload the OS. Telnet will work fine for a few months, and then the same problem recurs. The issue is that when I try to telnet into the machines, I get the first part of the prompt

Red Hat Linux release 6.0 (Hedwig)
Kernel 2.2.5-15 on an i586

followed by:

/usr/bin/login: no such file or directory

Of course, that directory doesn't exist when telnet is working either, so I can't see what the problem is. I have a hacker that's been plaguing me, someone in Korea, and I am pretty certain that he's responsible for this issue, but thus far I haven't been able to keep him out or keep telnet running. Any suggestions on how to make it work again without reloading the OS would be appreciated.

My first suggestion would be to turn off telnetd permanently. The thing's a horrible security risk, and nobody should use it any more except within a network containing only trusted hosts.

Instead, use OpenSSH ( which may be available as .rpms for your Red Hat, someplace.

Get OpenSSH-2.9.9p1 or later.

If not available, you can build it from source. You'll need to build OpenSSL and zlib first, as openssh depends on libraries from these.

There's a W*ndows openssh client:

which I have not personally tried. It requires the cygwin.dll libraries, which are a pretty fair-sized download. There's also a small open-source standalone ssh client, putty.exe,

-- Dan Wilder

It's certainly worth your while to download putty's scp program too. Even if you continue to use telnet in some places, putty is a better telnet client than the one that comes with MSwin. -- Heather

If someone has cracked your system and messed with /usr/bin/login (it's a binary file rather than a directory - on my SuSE 7.1 system, it's /bin/login), then it would be worthwhile, even mandatory, to reload the OS. There's no way to tell to what degree your system has been compromised, or what kinds of trojan horse binaries may have been planted.

If you're going to stick with RH6.0, then after re-installing you should visit the RH site and apply all the RPM updates issued for security fixes. After that, install a firewall and/or some security programs such as Tripwire, PortSentry, etc. Consult the security HOWTO(s) for more info.

-- John Karns

Also, is well worth an extended visit. -- Heather

Implementation of a little ToDo list

Sat, Nov 03, 2001 at 12:37:20PM +0100
Matthias Arndt (The Answer Gang)

Many users want to keep a little reminder information for themselves.

Take me for example. Sometimes I want to remind myself of installing a software package, compiling some code, playing a particular game or simply to do my homework.

What I want is a little reminder display at login.

I'm working most of the time in X, so I put the following line in my .xinitrc file BEFORE launching the window manager.

test -f ~/.ToDo && xmessage -center -file ~/.ToDo -buttons Discard:0,Keep:1 && rm ~/.ToDo

This line checks whether the reminder file ($HOME/.ToDo) exists. If it does, the file is displayed with the xmessage command, centered on the screen, giving the choice of either discarding it or keeping it. If I want to keep it, I click on "Keep"; if not, the rm command will remove it.

To be able to edit the file, I use two methods. First of all, I have a shortcut in my window manager's menu to my favourite editor loading the ToDo file.

Second I have the following lines at the very end of my .xinitrc file:

if [ ! -f ~/.ToDo ]; then
    xmessage "Create TODO list?" -center -buttons yes:0,no:1 && xjed ~/.ToDo
fi

This block asks me at session end whether I want to create a TODO file, but only when the file doesn't already exist. Substitute xjed with your favourite text editor.

Using the console? Simply put the following line in your .profile or .bash_profile file:

test -f ~/.ToDo && cat ~/.ToDo

This will simply print the ToDo file on your console at login. With a little more shell programming you can delete the ToDo file at logout as well.
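One way to handle that logout half, for bash users, is a small function called from ~/.bash_logout (the function name is mine, not a standard tool):

```shell
# show_and_discard: print a reminder file one last time, then delete it.
# With no argument it uses ~/.ToDo, matching the tip above.
show_and_discard() {
    file=${1:-$HOME/.ToDo}
    if [ -f "$file" ]; then
        echo "Outstanding reminders:"
        cat "$file"
        rm "$file"
    fi
}
```

Put `show_and_discard` on a line of its own in ~/.bash_logout and the list gets one last showing before it goes away.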

Experiment a while with these - it's a nifty feature and you don't need any extra software, just the standard packages that come with all Linux distros.

bind: Address already in use

Thu, 20 Dec 2001 16:35:50 -0500
Faber Fedor (The Answer Gang)

Harjit Gill asked:

I am having a bit of a problem with suse linux 7.2. My problem is on the xconsole I get an error message stating the below:

inetd[838] smtp/tcp (2): bind: Address already in use

The process inetd (process id 838) tried to run some SMTP protocol program (which also uses TCP), but the address that the SMTP program wants is already in use by someone else.

My guess is you're running an email program like sendmail and also running another SMTP program (read: mail) from inside of inetd. Check what's uncommented in /etc/inetd.conf, cross-reference that with /etc/services, and see if anything uses port 25 (which is listed in /etc/services).
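Two quick checks along those lines (run as root; netstat's -p option needs root to show program names):

```shell
# Who is already listening on the SMTP port?
netstat -tlnp | grep ':25 '

# Is an smtp service still enabled in inetd's configuration?
grep -v '^#' /etc/inetd.conf | grep smtp
```

If both sendmail and an inetd smtp line show up, comment out the inetd line and send inetd a HUP, or stop the standalone daemon - one or the other.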

Setting up a web-based archive for a mailing list

Tue, 06 Nov 2001 11:01:13 +0200 (EET)
Peter Georgiev (peterg from

Hiya everyone at the Gazette,

Great job again with Issue 72. I especially liked "PDF Service with Samba" by John Bright.

Well I'd like to comment on "Setting Up a Web-based Archive for a Mailing List" by Lawrence Teo.

Let's assume we've already set the mailing list as described in the previous article -- "A Quick and Easy Way to Set Up a Mailing List" and also compiled and installed hypermail. So we're at item 2.2. -- Creating a dummy account, which IMHO has some drawbacks.

Well, suppose our project has about 20 researchers enlisted in the mailing list. They also want to share file attachments via e-mail, e.g. drawing charts, spreadsheets, tarballs of source code, whatever. So our mail traffic is pretty high. It will soon result in a dummy-user mbox several hundred MB in size, which will keep growing. Hypermail has to parse the whole mbox to re-index the archive. On a P200 with 128 MB RAM it takes 30 seconds to parse a 5 MB mbox and 2 minutes to parse a 25 MB mbox. Suppose you have a 500 MB mbox and cron starts hypermail every 2 minutes -- despite hypermail's locking mechanism you will soon end up with an endless queue of hypermail processes waiting to be executed, or, if you switch locking off, even bring the box to its knees.

All the above may be a bit far from a real-world situation, and I haven't tested it thoroughly. However, there is a way around it, and it's actually easier to set up.

What we have to do is as follows:

  1. /path/to/hypermail -v > /path/to/projarch.conf
    This command will dump a sample config file for hypermail which we'll have to edit. It's pretty self-explanatory so I won't discuss it in detail. However look at the "mbox =" option. It sets the mbox to read messages in from. Giving this option a value of NONE will set hypermail to read messages from standard input.
  2. Open /etc/aliases in your favorite editor and create an alias for projarch (this we shall use for our archiving purposes)

    projarch: "|/path/to/hypermail -c /path/to/projarch.conf"
    This will pipe each incoming message for the projarch alias into hypermail. Save /etc/aliases and issue the

    newaliases

    command. Do not forget to set the output directory for hypermail archives somewhere under the web server document root (option "dir =" in /path/to/projarch.conf). Create the output directory, e.g.

    and give the user sendmail runs under (usually user mail) write access to it.

    chown mail:apache /var/www/html/projarch; chmod 750 /var/www/html/projarch
    Pay attention to possible values of the "dir =" option in the config file (man hmrc). Using substitution cookies, you can tell hypermail to archive messages in different directories by the date they were received.
  3. Test hypermail by sending a message to your mailing list. If sendmail bounces it back with an error message like:

    sh: hypermail not available for sendmail programs
    554 5.0.0 |"/path/to/hypermail"... Service unavailable
    it means sendmail uses smrsh (the sendmail restricted shell) to execute binaries. In this case, do the following:

    ln -s /path/to/hypermail /etc/smrsh/hypermail;
    Then restart sendmail

    /etc/init.d/sendmail restart
    Test hypermail again by sending a message to the mailing list and pointing your web browser at the archive directory under your web server's document root.
    It should be all set up.
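For reference, the steps above can be condensed into a short shell sketch. All paths here are examples (your hypermail binary, config location, and web root will differ), and every step needs root:

```shell
# Sketch of the per-message hypermail archive setup described above.
# All paths are examples; adjust them for your system. Run as root.

# 1. Dump a sample config, then edit it: set "mbox = NONE" and
#    "dir = /var/www/html/projarch" in /etc/projarch.conf.
/usr/local/bin/hypermail -v > /etc/projarch.conf

# 2. Create the archive directory under the web server document root
#    and give the user sendmail runs under (usually "mail") write access.
mkdir -p /var/www/html/projarch
chown mail:apache /var/www/html/projarch
chmod 750 /var/www/html/projarch

# 3. If sendmail uses smrsh, make the hypermail binary visible to it.
ln -s /usr/local/bin/hypermail /etc/smrsh/hypermail

# 4. Add the archiving alias and rebuild the alias database.
echo 'projarch: "|/usr/local/bin/hypermail -c /etc/projarch.conf"' >> /etc/aliases
newaliases
```

After that, a message sent to projarch should appear in the web archive immediately.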

With this setup of hypermail we do not have to create a dummy user -- hence there is no multi-megabyte mbox to parse. We process messages one by one as they arrive and update the web archive that very instant, so we need no cron job and no extra Apache setup.

Needless to say, you will need root access to the system -- but you needed that in the first place to set up the mailing list. Note that your paths may differ from the above examples depending on the distribution you use, which is well explained in the original article.

Hope this helps,

Boot Screen

Mon, 10 Dec 2001 17:34:25 +0530
Sayamindu Dasgupta (unmadindu from

Joseph Adamo asked:

I just bought Linux-Mandrake 8.0 and I have it dual-booted with my Windows 2000. Linux has a boot-up screen menu whose default is Linux. I would like to know how to change the default so I can make it Windows 2000 or DOS 6.22, etc.


Here's what to do:

Log in as root and open /etc/lilo.conf in your favourite text editor. You'll find a line like this:

default=linux

Just change it to dos (or whatever the label of your Windows entry is) and you're done. Oops, I forgot: run

lilo -v

after saving the changes to your file, and if some idiotic Windows antivirus complains about a changed MBR after that, don't pay any attention to it.
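In context, the relevant part of /etc/lilo.conf might look something like this. This is only a sketch -- the device names, kernel path and labels are examples from a typical dual-boot install, so check them against your own file:

```
# /etc/lilo.conf (fragment) -- device names and labels are examples
default=windows        # was "default=linux"; must match a label below

image=/boot/vmlinuz
    label=linux
    root=/dev/hda5
    read-only

other=/dev/hda1
    label=windows
```

Remember to run lilo -v afterwards, or the change never reaches the boot sector.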


Of course, if you have such an antivirus program, you may want to temporarily disable it, or otherwise advise it that you are deliberately updating the MBR. Otherwise you risk getting it put back the way it was... -- Heather

whitepaper on CFS?

Thu, 13 Dec 2001 19:11:20 +0100 (MET)
Karl-Heinz Herrmann (The Answer Gang)

moka asked:

I wonder if one can dig up a sort of whitepaper on crypto filesystems (also AES, perhaps).

AES (Advanced Encryption Standard) is the new encryption standard succeeding DES; the US government finally decided to use the Rijndael algorithm. It is available under a "free" license and as open source.

Put "AES" into Google; the third link from the top is the official US government site announcing Rijndael as the chosen AES algorithm, along with details on the algorithm, links to source and executables, as well as links to the Rijndael developers and more material.

I have been unable to point a friend who is interested in such security issues to a document that addresses not the technical details, but the whys and in broad terms hows

On the Crypto File system for Linux:

Put "crypto file system" into the search field, and the 4th link from the top will be one which seems to be exactly what you are looking for -- not very hard, though.


If you would at least use a search engine first you would be more welcome.

Linux Journal WNN Tech Tips

Running an X program on a remote display

Use ssh -n to run an X program from one computer on another.

For example,

ssh -n frodo gimp &

will run the GIMP on the host frodo, but display locally.

Using ssh for this is much easier and more secure than setting up remote X display access manually with xhost and the DISPLAY variable. (If X11 forwarding isn't enabled by default at your site, add the -X option to the ssh command line.)

Replicating a Debian system

How many times have you installed some cool software on one of the systems at your office, gotten used to running it, then one day tried to run it from a different system only to find it wasn't there?

Now there's an answer. Jablicator for Debian:

automatically builds a package file based on your current software load. Apt-get that package on all your other hosts, and they'll keep in sync.

Color inkjet printers

Color inkjet printers vary widely in their support under Linux. Vendors make these family-oriented units as dumb as possible to keep the cost down. (Think of a color inkjet printer as an in-home display unit to sell you color inkjet cartridges.) As with a Winmodem, all the decisions get made in the driver, and some vendors offer decent drivers for Linux while others don't.

You might find the same printer gives you photo-quality prints from a proprietary OS and a faded, blurry image under Linux. Visit

for up-to-date reports on printers and drivers, so you don't get stuck taking your printer back.

For business or even home office use, a reconditioned laser printer with network interface is less hassle than a parallel port inkjet and much cheaper per page. Unless you really want color.

Your Editor had to replace his color printer recently, and I got an Epson Stylus C80 based on the evaluations of the Linux Printing site. It works great from the Gimp with the Gimp Print driver, once I realized the latest Debian Gimp package is "gimp1.2" rather than "gimp". Still not working with LPRng/Ghostscript, but that's a configuration issue rather than a capability issue. My current Debian Ghostscript works fine with my laser printer but doesn't contain the Gimp Print driver for the C80. I tried installing a binary version of Ghostscript with that driver, but that screwed up my LPRng configuration and my other printing. So I can't print directly from Netscape. For now, I'm just opening pictures a second time in the Gimp, which is time-consuming but it works. -Iron.

How to include attachments when forwarding mail from mutt

Mutt doesn't forward messages with MIME attachments by default. To give yourself the ability to include MIME attachments when forwarding a message, set mime_fwd in .muttrc. In our humble opinion this is the most useful setting; it allows you not to include attachments by default but to include them when you want.

set mime_fwd=ask-no

This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 74 of Linux Gazette January 2002
HTML script maintained by Heather Stern of Starshine Technical Services,

(?) The Answer Gang (!)

By Jim Dennis, Ben Okopnik, Dan Wilder, Breen, Chris, and the Gang, the Editors of Linux Gazette... and You!
Send questions (or interesting answers) to

There is no guarantee that your questions here will ever be answered. Readers at confidential sites must provide permission to publish. However, you can be published anonymously - just let us know!

TAG Member bios | FAQ | Knowledge base


¶: Greetings From Heather Stern
(?)Control-Left = go left one word doesn't work in X
(?)Hi Gazzete (Squid)
(?)DHCP to DNS
(?)printing the timestamp of a given file
(?)SQL on the internet
(!)Getting volume label for CD
(?)linux book
(?)random crashes - how to prepare bug report?

(¶) Greetings from Heather Stern

Hi folks. I've been having such fun this season. The only thing sad for me is, I still haven't gotten around to updating my workstation. I did update my laptop tho. Debian Testing is coming along nicely.

Okay, I'll make the Peeve of the Month quick. First a big hand to most of our querents for using real subject lines! Some of you still need to work on it tho. However, abuse of Quoted Printable when you only have plain English messages jumps back into number one. Our foreign messages are up, so maybe half the people who did this really had a romance language to defend.

We've got some very good general information this month which I hope you'll find tasty.

Before I take on this year's "New Year's Resolution" (21" diagonal sound good?) I suppose I'd better finish setting up last year's... I've got a color inkjet here, a nice little Epson Stylus. Of course, if I want it to work under most circumstances I have to recompile Ghostscript with gimp-print extensions, which means adding a half dozen -devel rpms, and... and... you know, this is a real pain. I don't even see that one of the fancier print environments would help. Aaaargh.

And to think I was ragging on word processors last year. I have to say they've gotten much better. They crash less often than Netscape (well, ok, that isn't saying much for some folks, but I got NS to be pretty stable a while ago. Leaving JS off seems to help a lot). Documents are getting to be kinda usable. I saw a freshmeat app pushing to be a desktop publishing program. What I really wonder is when someone is going to write the "obvious" wrapper around the GIMP or ImageMagick to do all those old "Print Shop Deluxe" kind of things in a fairly slick way. Of course I'm bucking for The GIMP, because it's supposed to make my color printer happy...

Well, enjoy your bit of the bubbly, try not to blow up anything when you set off your OpenGL firecrackers, and don't get run over, it's bad for your health. I won't be at LWE New York, I've been travelling way too much lately, but if you're going, consider writing a show review for the Gazette, okay?

See ya!

(?) Control-Left = go left one word doesn't work in X

From Jay Christnach

Answered By Ben Okopnik, Dan Wilder, John Karns, Mike Orr, Karl-Heinz Herrman

I already spent hours trying to fix this annoying problem:

I don't even know if this normally works, but pressing the control and left-arrow keys simultaneously should move the cursor one word back.

(!) [Ben]
Nope, this doesn't normally work - because there's no such thing as "normally". The kind of functionality you're talking about is specific to a given piece of software, or, in several window managers, might even be a sequence that is caught and handled by the WM itself.
(!) [John K] In the case of many versions of fvwm2, ctrl-arrow key combos move the mouse cursor. However, it seems that it no longer holds true as of fvwm2 ver 2.3.31 or so (or maybe it was changed by SuSE).
(!) [Ben] When you ask this kind of a question, you always need to specify which application you're using. In Unix, one of the guiding principles is "don't set policy; provide mechanisms." Unlike other OS's GUIs, there's no single common interface (unless the window manager - KDE and Gnome are good examples - enforces one.)

(?) Is this a problem in the xkb symbols? Is this a functionality that has to be provided by the applications and they simply don't have this shortcut? I don't know anymore where to look to fix this. Thanks for your help.

(!) [Dan] This is functionality that has to be provided by the application.

(?) Thanks for answering and trying to help

Well I asked a friend if this keyboard shortcut would work on his Linux box (Mandrake KDE) and he tried several applications and found that even xvi provides this kb shortcut.

(!) [Ben] Err... Jay? Did you read our answers? Like, the content, not just the envelope? I'll repeat it again, just in case Dan's one-line statement and my longer explanation weren't clear:

It's application specific.

There's no magic file, or download, or anything else that you can install that will make that combination work in every editor. Whatever the author of that piece of software decided to put in as the "jump-word" key combo, that's what you get.
To correct your misconception, above: it's not "even xvi provides". The correct version reads "xvi is at least one editor that provides". What "xvi" provides bears no relation to what an author of another editor might use.

(?) I asked him to send me a copy of his /usr/lib/X11/xkb directory. I suspected there was a missing keyboard symbol in my xkb config (I hacked it to be able to use dead circumflexes and diaereses for my Swiss-French keyboard; those were missing in the files which came with my Debian distro). I use the Gnome desktop (Ximian) and the sawfish window manager. I'm pretty sure that AbiWord usually is able to handle the Ctrl-cursor thing. (It is nearly a copy of MS Word.)

(!) [Ben] Huh? That makes no sense. It's written for a different OS... with a different programming interface... everything, except the types of files that it can open is different from MS Word... and you expect the keystrokes to be the same? They might be - it's not an unusual key combo for the job - but expecting it is just plain silly.

(?) Also, in most text widgets I am able to select the entire line with Shift-Home or Shift-End, which is consistent with Ctrl-Shift-cursor for selecting words, and I think this is an accepted standard, or at least should be.

(!) [Ben] Ah, there's the problem: "accepted standard or at least should be." I knew there had to be a root cause of all this somewhere, and I'm glad we discovered it so early - it could get really bad if left to grow and spread unchecked. Here, let me excise that for you...
"Accepted standard" begs the question of "accepted by whom?" "By me" is not a valid answer; neither is "by MS Windows users." "Should be" according to you is obviously not a "should be" according to software authors. Since you're not one (that's a guess, but a fairly informed one), you don't get to decide what "should be". If the editors that exist don't suit you, you're always welcome to write one of your own - including whatever keystrokes you decide it should have.
(!) [Mike] This is a bit harsh. The reason KDE and Gnome exist is because ppl see the importance of adhering to cross-platform user-interface standards.
There is a standard for word processors/text editors (regarding how they treat the arrow keys and select/cut/paste operations) that was originally set by MacWrite years and years back, because ppl who tried it found it very intuitive to use and remember.
(!) [Ben] <wince> OK, here's a seemingly minor niggle that's got a hidden kicker to it: the definition of the word "standard". As you're using it here, it means "what a lot of folks have been using for a while". What it means to me is "a defined set of specifications." Conflating the two leads to... well, MS Windows is an example. The querent's original assumption is another.
In a way, I find myself agreeing with a minor premise of Jay's: I would like it if there was such a thing as an "editor keystroke standard" - to be exact, if there were several of them, each one a well-thought out, coherent, non-internally-conflicting set of keystrokes. Then, you could have a "flagship" implementation for each - Emacs, vi, MSWord, whatever - and all the other editors could then use, say, a library that simply eliminated the whole bloody job of writing a command parser. Now, throw in a couple of editors like the old "PE3" from DOS (gosh, I loved that thing! I miss it...) where you could actually modify the "keydefs" file any way you wanted to - including building macros to be assigned to specific key combos - and you'd have the world covered.
All that... yeah, sure... BUT.
I'm not a software developer. I don't consider myself as having the right to moan and groan about the issue without being able to make a material contribution - which, again, would only become a contribution in the full sense of the word if it passed the "community acceptance test". The only thing I can do, IM!NSHO, is to put in the time testing the available editors (I've installed and run every editor available with Debian, other than obvious clones, plus a number of others) to see how well they suit me. If they don't, I don't use them - but I don't complain about them, either; they obviously suit other people to a tee.
Had the querent asked STL "I'm looking for an editor that has the same keystrokes as the ETAOINSHRDLU editor - do you folks know of any?", I could have probably found something that would help him - and would have been glad to; I like being able to help people. As it was, I found the fact that he completely ignored my and Dan's original responses, and the attitude of "well, real editors all have this!", irritating.
BTW - I wasn't aware that it was MacWrite that used those keydefs originally. Interesting nybble of info.
(!) [Mike] It has been widely duplicated in MS Word and practically all Mac and Windows word processors and text editors ever since. Even the DOS edit command recognized the sense of this scheme and was compatible with at least part of it (shift-arrow extends the selection, ctrl-arrow moves by words, shift-ctrl-arrow does both). However, part of the paradigm (ctrl-Z/X/C/V for undo/cut/copy/paste) was adopted by everything except DOS edit and MS Works. (Of course, Mac had to use the clover key ["command key"] because there was no ctrl key on the Mac keyboard at the time, a stupid unnecessary attempt to improve on standards without offering anything better, and some programs like Netscape 4 use alt instead of ctrl, but modifier-key exceptions are easy enough to learn.)
(!) [Ben] <grin> Control-Meta-Hyper-Super-Shift-Top-Front-X? According to The Jargon File, all of the above were modifiers - at the same time - on the LISP machines' keyboards at MIT (does it surprise anyone that this influenced the design of Emacs?) "Ten-finger typist", indeed...
(!) [Mike] In the Unix world, applying this standard wholesale is a bit difficult. It's fine for graphical programs that imitate Windows/Mac programs. But vi and emacs have existing standards that conflict with these. Also, ctrl-C is very commonly used in Unix to mean "abort this program".
Also on Unix, you have the problem that when logging in under various circumstances, the terminal type gets out of sync and the non-typewriter keys become inaccessible (insert/delete, pgup/pgdown, and sometimes even backspace). Thus, you must have alphabetic or ctrl-letter keys to perform these actions as an emergency fallback.
Also, vi and emacs typists will say they are more efficient because they never have to take their hands off the typewriter keys.
(!) [Ben] If your editor you write now survives the process of acceptance by the Linux community - i.e., a significant number of folks start using it - then, ta-daa! You've just become one of the folks who decide what "should be". See how easy that was?
<sigh> Pardon me if I sound a bit acerbic... but, over time, I've grown rather tired of people who are perfectly willing to use the software that other people have spent thousands of hours writing - and complain about it. To me, that smacks of - uh, no, defines - ingratitude.
(!) [Mike] This is certainly correct, not just for the current situation but in general. However, what's really happening here is a clash of worldviews, which cause two topics that don't have anything to do with each other to conflict.
JAY: All programs should stick to the established Windows/Mac standard re the arrow keys, a standard that has proved itself valuable.
BEN: Don't you realize that any change you suggest to a program requires HOURS OF WORK by UNPAID VOLUNTEERS? Why is it their obligation to code things to your specifications?
MIKE: The issue that's falling off the table is, is the Windows/Mac arrow-key standard a good one we should generally adopt, working around conflicts with existing applications as much as feasible? I say yes.
(!) [Ben] If I had any say, my input would be "yes, as one of the standards". One of the reasons I really like using the editor in Midnight Commander is that it follows that set of keydefs pretty closely. Now that I've had to grit my teeth and really learn to work with "vi" ("VIM", actually), I find that I like the functionality - and learning only a small subset of the keystrokes (plus being able to look up all the others via the help facility) is highly feasible. Those are the two that I've settled on, and they cover the entire range of what I need in editors.
Pretty-text editors (word processors) are an entirely different kettle of fish. I've found that 99.9% of the time, I don't need them; in Windows, I used to use them because Notepad was so bad (although GTEdit came very close to Unix functionality), but with Linux, I have choices. The one-in-a-thousand times when I do need that - making up a sign with large lettering, for example[1] - either HTML (yechhh) or KWord suffice. I'll be the first to admit that fancy WP stuff is still not a Known Science under Linux.
[1] This seems like such an obvious lacuna that I wonder: is it me? Am I missing something obvious? There must be some quickie LaTeX thing you can whip up, or something of the sort; I just can't believe that a gap like that would exist in Unix, where a part of the philosophy seems to be "small tools that will roll into and eventually fill every crack". E.g. - I want to print a sign on an 8.5x11 sheet that says "Welcome!" in letters large enough to pretty much cover the sheet. Can anyone think of a simple way, using Unix-native (i.e., not fancy modern GUI) tools?
(!) [K.-H.] This requires you to type everything in vi ;-) cut-paste with mouse is surely a fancy GUI method, isn't it?
(!) [Ben] Nope; I've got "gpm" running. :) Seriously - I meant exactly the type of solution you're suggesting, and I thank you for relieving my sense of frustration. I just knew that there had to be something of the sort - although I could wish that it was easier, something like
echo 'Welcome!'|makebig --pagesize A4 --stretch-percent 90x90|lpr
(!) [K.-H.] Would be nice yes, but even TeX has some idea what a scientific paper should look like. One has to "switch off" lots of things to get something out of the normal scope.
(!) [Ben] I would imagine that a knowledgeable TeXnician could write a macro that could work that way. I don't know that I want to get into TeX in that much detail (my previous forays into it left me covered in cold sweat), but I'll play around with the bits that you've suggested.
(!) [K.-H.] A TeX macro, even one which chooses the font size automatically, is certainly possible. On the other hand, this is possible with plain PostScript.
Have a look at
That's a Perl script which uses a PostScript template for creating CD labels. On the back side, the PostScript itself scales the font size down if the lines would otherwise be too long. It should be possible to go that way with much more direct control -- but I've never learned PostScript as a programming language; it never appealed to me as a convenient one ;-) . But it seems to be Turing complete, and I know at least one PostScript file which prints a Mandelbrot picture -- by calculating it. Takes ages on your stock 66MHz printer, if it comes out at all.
(!) [Ben] Thank you again!
(!) [K.-H.] Hmm..... sorry no oneliner. At least not if you would like the comments. Will require any standard TeX installation (like tetex 0.X, 1.X), dvips should be included with tetex, gv would be nice but gs alone will do.
You need file HugeTexttestTeX.tex containing:

See attached HugeTexttestTeX.tex.txt

then run:
  > tex HugeTexttestTeX
This is TeX, Version 3.14159 (C version 6.1)
Babel <v3.6h> and hyphenation patterns for american, german, ngerman,
loaded.[1] )
Output written on HugeTexttestTeX.dvi (1 page, 268 bytes).
Transcript written on HugeTexttestTeX.log.

  > dvips -T 11in,8.5in  HugeTexttestTeX
This is dvipsk 5.58f Copyright 1986, 1994 Radical Eye Software
' TeX output 2001.11.20:1824' ->
<><8r.enc><>. [1]

  >  gv HugeTexttestTeX
The vertical spacing/centering caused me a little trouble there. What's actually happening in the line starting with the "$" is:
Anyway -- a nicely centered "Welcome!" on a landscape letter page. How to get rid of the page number is left as an exercise; I would recommend the TeXbook by Donald E. Knuth for the details.
One could also increase the letter spacing in TeX so it would exactly fill the line instead of adding space left and right of the text -- that's definitely beyond any M$ Word I know of. QuarkXPress has very good control of things like that, though.....
$\vcenter to \vsize{\vfil\hbox to \hsize{\Myfont W e l c o m e !}\vfil}$
The spaces help in Word too, but they won't stretch as far as here; adding some more spaces will be necessary, and they will never add up to exactly the same line width.....
Try that instead:

See attached portrait-large-text.tex.txt

this time it's not landscape so you can just use:
tex file[.tex]
dvips -t letter file[.dvi]
gv file[.ps]
One could also become very fancy and write a TeX macro which calculates the width of a given text and scales it to the page width by increasing the font size.
Also in LaTeX there are nice scaling/rotating features which make more sophisticated stuff possible. Using a GUI drawing program to make little eps files which are then scaled comes to my mind.
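As a sketch of that LaTeX route -- a minimal example using the standard graphicx package (included in any teTeX installation), not a polished macro -- scaling one word to fill the line might look like:

```latex
% Minimal LaTeX sketch: stretch "Welcome!" to the full text width,
% centered vertically, with no page number. Needs only graphicx.
\documentclass{article}
\usepackage{graphicx}
\pagestyle{empty}          % suppress the page number
\begin{document}
\vspace*{\fill}
\noindent\resizebox{\textwidth}{!}{Welcome!}
\vspace*{\fill}
\end{document}
```

Run latex on it and then dvips -t letter (or -t landscape) as above; \resizebox{\textwidth}{!}{...} scales its argument to the line width while preserving the aspect ratio.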
(!) [Mike] Of course, we'll have to compromise with ctrl-C and ctrl-Z, but emacs (for instance) already makes its own compromises in that regard. (ctrl-Z it emulates; ctrl-C it hijacks for another purpose, but provides a related command "ctrl-X ctrl-C" that does a safe exit).
(!) [Ben] If that sounds like I'm saying that you have to earn the right to complain, you're right. Only the fishermen who bring home the fish get braggin' rights; only those who've put in the effort get to grouse about the results. Anything else is whining.
Here is something you can do to contribute instead of complaining, even if you're not a programmer. Join a list (if one exists) for a given piece of software and put your dearest wish on the "wish list" - there usually is one - and if the author likes your idea, it just might get implemented. If you find an actual bug in the software and report it in detail, most authors would be grateful.
(!) [Mike] Ben is right. Many distributions have the README files in a standard place (/usr/share/doc/PACKAGE/* on Debian, /usr/doc/packages/PACKAGE/* on SuSE). Look at the READMEs for the offending programs and find the place to report wishlist items. It may be a mailing list or a bug tracking system. You can also see whether anybody else has also requested the same thing. If you know enough programming to provide a patch, so much the better. If you don't, do you know enough programming to provide even a few technical details? Those details make the maintainer's job easier, and may even convince them to provide the enhancement if they wouldn't otherwise.

(?) No, I am no programmer. But I know what it takes to write a program. I have some knowledge of programming and wrote a few small programs. Also I am not really complaining, I only thought this thing wouldn't work on my computer whereas it works on other machines which are configured differently.

(!) [Ben] That's why both Dan and I said "application-specific", right off the bat. It's not you, it's not your computer, and your friends can't do it any better. :)

(?) I really would like to contribute to the development, debugging, and enhancing of Linux apps. Unfortunately, my wife already complains that I spend too much time in front of the screen, and I don't have the time to do better because of my studies.

(!) [Ben] As Mike and I have mentioned, there are many other ways to contribute - some of which take only a little time and effort. Sending in a detailed bug report, or adding your favorite item to a wishlist - which may just be the request that tips the scales - are all good things. Writing up and sending in an article about your battle with the different key-handling mechanisms, even though it was a frustrating and eventually bootless experience, would be another good thing.
(!) [Mike] Yes, that would be a very good article. Would you like to write up your experiences, Jay, and contrast the keystroke handling of various Unix applications with non-Unix ones, and explain how the differences impact the usability of each system?
Let us know if you want to, so we can hold off publishing the Answer Gang thread that's been accumulating. We also can send you a tarball of the existing messages if that would help provide material for the article.

(?) I think however that it would be a good idea to have a standard for keybindings.

(!) [Ben] As I'd said previously, I agree - with the caveat that it should not be _a_ standard, but rather a choice of standards, plus an implementation that lets you build your own.

(?) The people contributing to the Gnome project are discussing about it on their mailing list and I hope that if they find a good compromise that developers will accept that standard (not only for Gnome-Apps) . Thanks again for all of your answers.

(!) [Ben] Yes, you too can participate. Complaints about how things "should be", without a significant contribution of your own, are... tacky.

(?) If you had a clue I would be very thankful.

(!) [Ben] We have lots of clues - for which I'm certainly very thankful. In fact, we often have to employ a clue-by-four to drive them home; there are plenty of times that several of us have found that to be necessary...

(?) Oh yes, I forgot: the Linux Gazette is by far the best Linux magazine, compared to the magazines I can find in bookshops in lu. I'm considering downloading every issue automatically with wget from now on.

(!) [Mike] Thanks. You can also use the FTP files; then you only have to download one file per issue (plus the base-new file).

(?) This will also be the last time I will bother you with my word-jumping problem. I solved the problem by trying another window-manager, I now use enlightenment and the ctrl-cursor combo now works in x-emacs, lyx, mozilla, abiword and probably many other apps. You're still right that it of course is application dependent as long as you consider the window-manager as an application.

(!) [Ben] Or even if you don't. All that a WM can do, in that regard, is either intercept keystrokes before they get to the app or not; it cannot make an application accept keystrokes that it was not programmed to accept, or make it perform any functions on those keystrokes that were not programmed in. <Checking several apps> It works for me, in several of the apps that you named, under "icewm" (my usual WM) and "twm" (the "baseline" WM - does the minimum necessary to be a WM and nothing more) - but not in a number of other apps ("xedit", "gvim", "flipbook", etc.) It seems that most widgets and toolkits, especially the newer ones, do indeed support the selection method, but, again, it's a per-application thing.
Obviously, whatever WM you were using before was intercepting your "Ctrl-cursor" keystrokes (which would prevent them from being seen by the application). Clearly, "Enlightenment" doesn't do that, at least not by default - I'm not very familiar with it, but I seem to remember a configuration panel in it which allows you to capture specific key combos.

(?) If you like I will try different WMs and report which ones do that trick. This feels much better now :-)


(!) [Ben] That would be great - especially if you could dig a little bit into the configuration dialogs and see if the "intercept mechanism" can be enabled or disabled. In "icewm", for example, I can completely disable "keystroke grabbing" by tapping the scroll lock key, even though I have several "Ctrl-Alt-" combos defined in my "keys" file.
(!) [K.-H.] That's neat. I was just getting used to kde2 (coming along with SuSE per default) when I found out that I can switch off most key grabs but not one specific key grab -- Ctrl-Tab. It does some Windows-like switching between app windows.
(!) [Ben] Yep; that kind of behavior (defaults that can't be disabled, tons of "pre-made decisions" of that sort that are either difficult or impossible to change, etc.), plus the fact that it is a huge resource hog, are the things that completely turned me off KDE/KDE2. I'm sure that some people love it; in my opinion, it comes closest to the feel of the MSWindows GUI, more so with every release. Me, I want a WM to do the basics, give me just a touch of pretty stuff (window ornamentation, toolbar clock, APM display, etc.) with the ability to turn it all off if I want to - and have a reasonably small memory and CPU footprint. Over the years, I've tried pretty much every major WM, and none of the others suit me quite as well. Besides, Marko Macek (the author) has been reading my mind :) - when I first started using "icewm", I had a few grumbles about some of the features (or the lack of them), and he's fixed every one.
(!) [K.-H.] Now I'm an xemacs user and want that key for switching the buffers in xemacs so fvwm2 is my window manager again ;-) There I can define the grabs I want and switch off any of the default ones if necessary.
I'm aware that I could switch xemacs to a different key, but the mere fact that I could find no option to switch that grab off at all was enough to get me "unfriendly" with kde.
(!) [Ben] Yep. Give 'em their due, though: they certainly have quite a large number of people enthralled, and a number of the "K suite" apps are rather nice.
(!) [K.-H.] icewm's feature to toggle them is quite nice. Maybe I'll have a look at that one sometime soon.
(!) [Ben] <rubbing hands> The subversion of the innocents continues apace. My plan for world domination will soon be complete...

(?) Hi Gazette (Squid)

From Cybernetica Aduanal

Answered By Thomas Adam


(?) Hi Gazette: Hi again, I'm writing to you for help with Squid. I need to control access to this program. I have been using an ACL (access control list) with the action set to "deny", so that the hosts on a list I made are kept out of Squid. But they keep on accessing it. What can I do?

(!) [Thomas] It's odd timing, actually, since in my LWM article next month I'll be writing about the use of Squid and SquidGuard :-)
How is your proxy server set up? Are you using an external filtering program, such as SquidGuard, to filter your ACLs too?
If you are just using "/etc/squid.conf" then you need to make sure that you do the following under the appropriate section:
  1. Under the ACL section, you should have defined your ACL in a format such as this:

    acl aclname acltype "file"
    making sure that "file" contains one URL per line.
    This tells squid that "file" contains the websites to which you want to either allow or deny access.
    But there is one final step to implement this...
  2. Having defined the ACL we now need to tell Squid what to do with it, thus:

    http_access deny "aclname" localhost (or IP address)
    What the above says is that it will deny access to "aclname" if the request comes from the localhost (or a suitable other IP address specified).
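Putting the two steps together, a hypothetical squid.conf fragment might look like the following. "blocked-sites" and the file path are made-up names, and "dstdomain" is just one common acltype ("url_regex" is another); check your squid.conf comments for the types your version supports.

```
# Step 1: define the ACL from a file of sites, one per line
acl blocked-sites dstdomain "/etc/squid/blocked-sites.txt"

# Step 2: tell Squid what to do with it
http_access deny blocked-sites
http_access allow localhost
```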
I hope this helps to solve your problem.
Should you get stuck, let me (us) know and we'll see what we can do :-)
--Thomas Adam

(?) proxy

From Dadi

Answered By Jim Dennis, Mike Orr


I connect to the web through a LAN, and recently the only way I can do that is through a proxy (which is also the gateway) on port 8080. What do I need to set up (can I?) to get this working with lynx, ftp and any browser? I can configure Konqueror and any browser with the proxy and the port, but I need a general connection (one that will work with any program, without any further setting in the program). If I have, say, an antivirus that updates over the web, it won't work. I hope I was clear enough.

Thanks in advance, Dadi

(!) [JimD]
Under UNIX and Linux most HTTP-capable programs (such as lynx, wget and curl) will honor the value of the http_proxy environment variable (curl might require that to be HTTP_PROXY, I'm not sure). So adding settings like the following to your .*profile/.login or env scripts:

export http_proxy HTTP_PROXY
Should work for most browsers and other HTTP capable programs. I have no idea whether your anti-virus update package would honor this environment setting.
Some sites use transparent proxying. You can read about one approach to providing this at:
Transparent Proxying with Squid Mini-HOWTO:
(!) [Iron] Do set both HTTP_PROXY and http_proxy. Some programs use one and other programs use the other. I think lynx uses the lower-case version.
There's also FTP_PROXY and ftp_proxy if your proxy server provides that (squid does).
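Putting both pieces of advice together, a profile fragment might look like this. The gateway address below is a placeholder; substitute your own proxy host and port.

```shell
# Placeholder gateway address -- substitute your own proxy host and port.
http_proxy=""
HTTP_PROXY="$http_proxy"    # some programs want the upper-case name
ftp_proxy="$http_proxy"     # squid proxies FTP URLs too
FTP_PROXY="$ftp_proxy"
export http_proxy HTTP_PROXY ftp_proxy FTP_PROXY
```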

(?) DHCP to DNS

From Michael Majetich

Answered By Heather Stern

(?) Hello again!

I posted the question below a week ago.

(!) I remind you (or inform you, if you've never read the top of our web pages) that The Answer Gang does not guarantee that we will or can answer you.
If any amongst the Gang do, though, we also can't guarantee that it will be in a timely manner. The longest wait for anyone who HAS gotten an answer was months ... I think it might have been over a year ... because back in issue 36 Jim went through his entire backlog. We weren't the "Gang" yet, and a full backlog check isn't likely to ever happen again.
Luckily for you, you caught me in a good mood, even though you make the annoying assumption that we'd give you instant feedback. Often that wouldn't be enough - in fact it would encourage me to shuffle your mail away, since as an editor here I see ALL the TAG mail and have to go through it in much detail later - but I see an opportunity to get some useful data to everyone else out there too. So you win the "Answer Gang" lotto and I'll give it a shot. You get a slight roasting for free.
Sadly for you, I still use Bind8, so what little I know about syncing dhcp with bind (DHCP's not my specialty anyway, and my own network presently uses static IPs) might not be as useful. Not that this sort of lack of knowledge has ever stopped me before :)

(?) Can anybody at least point me someplace to get the answer

(!) [HOWTO use search engines effectively]
However in going to the Google! Linux area ( and make sure NOT to put the slash at the end) ... giving it the keywords: bind9 dhcp
...the second item might be useful, as he's discussing doing something like that and gives some parts as a study example:
[FAQ item # infinity]
Q. I need a fast answer for my problem <foo>, and I didn't find it here. Time's running out for me! What am I gonna DO?
A. If you need a timely reply from someone who specifically knows a topic, I recommend hiring a paid consultant on that topic.
Generically, you may be able to find them by visiting the Consultants HOWTO in the LinuxDoc project:
That howto is maintained by the folks at LinuxPorts; they also have searches into it at:
That's pretty decent for finding individuals as well as companies of varying size, just in case you have any prejudice against mega-consultant houses.
Specifically, any companies who commercially maintain code related to the programs you are using, may offer "professional services". It is worth checking their sites for further documentation first though.
[Now for the good stuff, answers from the real world, that might be able to lead you in the right direction. Though the direction you eventually choose may not be where you were heading when you began.]

(?) I have a "Mixed" network of linux and microcrap so this would be a big help. I would rather not use fixed IPs.

(!) I can certainly sympathize with that; another possibility is to use network address translation (sometimes called NAT) and the private, reserved address ranges. Under RFC 1918 (I used but there are mirrors everywhere) these ranges are: - (192.168/16 prefix)

readable by the rest of us less netmask-aligned sorts as 192.168.0.* through 192.168.255.*.

Although a lot of people wimp out and only use 192.168.0.* or 192.168.1.*, you can do some pretty cool stuff by using more of them, or avoid possible collisions with other nets coming in by using a third octet value other than 1 or 0. - (172.16/12 prefix)

For some reason a lot of people forget about this one entirely.
And the possibly infamous "10 net": - (10/8 prefix)

where again, a wide tendency to use 0 or 1 sometimes leads people to unnecessary collisions.
That is, you could use static IPs for machines which are not going to move around a lot, without having to request more from your provider. This does not negate some good uses for DHCP though:
  1. It's much easier to renumber a zillion MSwin boxes if they all just use DHCP (though renumbering is far less often needed with private addresses). Note that if they all come on at the same time, there will be a broadcast storm while the busy server hands out everybody's addresses.
  2. You may want laptops to use only a limited range of IP addresses. Your laptop users may not be up to setting their static IP settings to match your office, then changing them again for the other sites they'll be visiting.
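[As a sketch of point 2, a dhcpd.conf subnet declaration can hand laptops leases only from a small pool; every address below is made up, so adjust to your own network. -- Heather]

```
subnet netmask {
    # Roaming laptops draw leases from this limited range only;
    # fixed machines keep their static addresses outside it.
    option routers;
}
```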
However, you do not really need to tie bind to dhcp unless the machines need to be addressable in the DNS by name -- in other words, if they are providing services of some sort. Most companies of any notable size think it's a bad idea to let their individual desktops be addressed by the outside world anyway. But the inside world... well, you could be using split DNS, I suppose. That is, the DNS your inside folk see for your domain is a more complete version, which is not shown to the outside world. Outsiders only see the usual obvious things like your web server, a few mail servers, and of course, an outside-world nameserver or three.

(?) This is my first post. I assume that this question has been asked 1000 times already, but I can't find a reasonable answer on the web.

How do you get dhcpd to update BIND9? I am running SuSE 7.3 with both servers on the same machine. In dhcpd.conf I've made the ddns-hostname (tried both name and IP), domain, and update-style (ad-hoc) entries. In BIND I've allowed updates from localnet and localhost. Nothing happens. Both start with no errors that I can see. What am I missing?

-- Mike Majetich

(!) Bind (aka named) and DHCP are maintained by the Internet Software Consortium.
They have consulting. They point at a book, "DHCP" by Ted Lemon and Ralph E. Droms.
Although I will mention that the OpenBSD folk also heartily recommend "DNS and Bind" by Paul Albitz and Cricket Liu, as being an excellent intro to the topic.
In my personal involvements in the community, I also know that Nominum did a bunch of coding in the Bind9 project. They're a big commercial creature, and it so happens they are one of the entities offered at ISC as your possible consultant:
The other one -- if you're in Europe somewhere it's probably closer to you -- is Mind:
If you go "the enterprise route" then purchasing your support contract through ISC supports their efforts, bandwidth use, etc. towards these really rather cool projects.
For just general DNS questions I find that the very best web-based resource is "Ask Mr. DNS" - although Acme Byte and Wire was bought up, the new owners have graciously allowed him to continue doing that, and the archives stay online:
Cool, he's got a category just for dynamic updates such as you're asking after...
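[I'll note that the usual shape of a keyed dhcpd-to-BIND9 update setup looks roughly like the fragment below. Every name, domain and the secret are made up, and the exact statements vary between dhcpd and BIND versions, so treat this as a sketch to compare against the real docs, not gospel. -- Heather]

```
# dhcpd.conf
ddns-update-style interim;
key mykey { algorithm hmac-md5; secret "c2VjcmV0IGtleQ=="; }
zone { primary; key mykey; }

# named.conf
key mykey { algorithm hmac-md5; secret "c2VjcmV0IGtleQ=="; };
zone "" {
        type master;
        file "";
        allow-update { key mykey; };
};
```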
Best of luck, happy holidays. If things work for you, please feel free to let us know, or even to write up an article for us. If you did that, then the next time someone asks this sort of thing, we can point them at your successful efforts :)

(?) printing the timestamp of a given file

From Matthias Arndt

Answered By Dan Wilder

Dear Editor,

I was looking for a solution to extract the timestamp of a file with plain shell methods. So I browsed through a book and found the command cut.

ls -l <some filename> | cut -b44-55

does the job pretty well and simply prints the timestamp of the given file. It works with GNU ls. If the output of your ls -l command differs, you'll need to adjust the positions after the -b switch. It even works with a list of filenames, but then it prints only the timestamps, not the corresponding filenames.


(!) However "ls" produces dates in two different formats, according to the age of the file:
ls -l host.conf
-rw-r--r--    1 root     root           26 Sep 25  1995 host.conf

ls -l passwd
-rw-r--r--    1 root     root         1179 Nov 12 17:10 passwd

(?) The fields are correct in their width. The output should be ok in any case as long as ls doesn't format the long version with other field widths.

(!) OK, but annoyingly different in its format, according to the age of the file. That works for human readers, but causes complications for machine readers.
I use this sort of thing in web page scripting, where the output will be parsed by other programs, and it saves me time in writing those other programs.
(!) A slight elaboration allows consistent formatted output. As a shell script:

date -d "$(/bin/ls -l $1 | cut -b44-55)" +"%b-%d-%Y"
I use "/bin/ls" to avoid the likelihood that "ls" may be an alias.

(?) I guess in most cases ls will not point to a much different binary. But it is more consistent.

(!) Also works for me, who normally makes "ls" an alias for something or other that changes with time.
mylittledatescript host.conf

mylittledatescript passwd

(?) Nifty! The whole script is much better than my quick'n'dirty solution. But it doesn't work on my machine.

$ ./filedate filedate
date: ungültiges Datum 'Dez 6 22:22'

which translates to "invalid date Dec ..." and indicates an error

Very strange....

(!) Araugh. "date" seems to change what it'll accept as input with each version ... hopefully it'll stabilize some day.

date --version
date (GNU sh-utils) 2.0.11
My little script will no doubt need modification if your version of date is different from the above.
(!) The "date" command does a great job of formatting a wide range of inputs. The "+" string to the date command offers many different output formats. See "man date".
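For what it's worth, GNU date can also read a file's timestamp directly with its "-r" option, which sidesteps the ls-parsing (and the input-format lottery) entirely -- assuming you have a reasonably recent GNU date. A sketch:

```shell
# filedate: print a file's modification date in a fixed format,
# using GNU date's -r option instead of parsing "ls -l" output.
filedate () {
    date -r "$1" +"%b-%d-%Y"
}

# Demonstration on a scratch file with a known timestamp:
touch -d "2001-11-12 17:10" /tmp/example-file
filedate /tmp/example-file
```

Since -r reads the timestamp via stat(2), it is immune to both the two-format ls output and locale issues in parsing.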

(?) I didn't have the idea to pipe the output through date as well because the simple date field from ls is sufficient for me. This adds new perspectives however.

(!) A variation allows printing the Discordian date of a file using "ddate" which is much fussier about its input than "date":
ddate $(date -d "$(/bin/ls -l $1 | cut -b44-55)" +"%d %m %Y")

mylittledatescript passwd
Sweetmorn, The Aftermath 24, 3167 YOLD
"ddate" also has formatting options.

(?) This notation does not make any sense to me. I don't need it now. But for someone who likes this sort of output, it might be handy.

many thanks,

(!) A distraction, joke or diversion:

(?) SQL on the internet

From Fabiano Bonin

Answered By Jim Dennis

I have a Linux box connected to the Internet, and an NT box on my intranet.

My NT box is running SQL Server (port 1433), and I want people outside to be able to access this port through the Linux box.

Example: in the SQL Server client, I put the address of my Linux box (its real IP) and the connection is forwarded to my local NT box.

Is there some way to do this?

(!) First, please realize that this is a reckless way to expose your database server. If you accomplish this, you will be wholly dependent on the SQL server's own robustness for the integrity of your data.
At first it sounds like you want a port forwarder. With IP Masquerading it's possible for you to "hide" your NT box on an RFC 1918 reserved IP address (such as any from the 192.168/16 block of class C nets) behind a Linux box (which naturally has both an internal address and some sort of DRIP -- directly routable IP). You'd then configure any of several port forwarding utilities to simply forward packets that arrive on the DRIP's TCP port 1433 to the internal NT box's port 1433.
Normally, the port forwarder would only change the destination IP address. The source (return) address would remain unmodified, so the NT box would attempt to route response packets as normal. The Linux box, NATurally, would be configured as the default router for the NT box, so its return packets would be routed appropriately after they arrive at the Linux system.
NATurally, the Linux box must be configured to do routing, usually with a command like:
'echo 1 > /proc/sys/net/ipv4/ip_forward'
... though many distributions may hide the ugly details by offering some friendlier interface.
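For instance, on a 2.4 kernel the forwarding itself might be sketched with iptables like so. The internal address is made up, and $PUBLIC_IP stands in for whatever directly routable address your Linux box holds:

```shell
# Turn on routing, then rewrite inbound port-1433 traffic
# to the internal NT box. $PUBLIC_IP is a placeholder.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -p tcp -d "$PUBLIC_IP" --dport 1433 \
         -j DNAT --to-destination
```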
This all sounds easy enough. However you have also said that you want to configure the MS SQL Server to simply accept addresses that appear to be from the Linux gateway. In the example I gave, the Linux gateway is transparent (more like a router). So the SQLServer connections "appear" to come from some public address on the Internet. Arguably this is what most people would prefer, since they can then configure the SQLServer to selectively allow or deny access to specific blocks of public IP addresses. (Also, it's easier that way).
You could write a proxy. This sort of proxy could be written in Perl, Python, C, Java or just about any language that offers lower-level access than awk and the shell. It would accept connections on the DRIP/interface TCP port 1433, initiate new connections on the internal IP address, and relay the application-level data from one to the other and vice versa. It could be blocking (only one connection at a time) or non-blocking (handling multiple concurrent connections). If it was written to be called via inetd, then one child/proxy process would be started for each connection (and the code would be much simpler, though the latency and overhead would be higher). If it was written to run "standalone" it could use any of several models of threading and/or forking (process spawning) to handle concurrent connections, lower latency and (possibly) lower its memory footprint.
The disadvantage of writing a proxy is that you might have to know a bit about the application's protocol. In particular it might be that the MS SQL Server networking protocol uses additional "ephemeral" or "negotiated" TCP ports. In other words, there might be traffic on ports other than the TCP 1433 port. I don't know the details.
It's possible that a simple "plug-gw" proxy might work (plug-gw was part of the TIS, Trusted Information Systems, FWTK, firewall toolkit). TIS was eventually absorbed by McAfee Associates (later Network Assoc. Inc). Although the sources are freely available *for non-commercial and internal use*, TIS FWTK is not "free software" (no derivative works, limitations on re-distribution, consultants are not allowed to install it for customers, etc).
However, there are tools like plug-gw. The most notable is probably the Juniper FWTK from Obtuse Systems ( ). That is currently distributed under a BSDish license.
I don't know much about the MS SQL Server or the net/wire protocol that it uses. However, there is a free (GPL) package by David Muse called SQLRelay ( ) which incorporates quite a bit of knowledge about it and various other SQL servers. SQLRelay is probably overkill for what you want, but it might give you the information you need, and a small subset of its features might do the trick for you.

(?) pseudo-chroot

From Faber Fedor

Answered By Mike "Iron" Orr, Heather Stern

Hi guys (and Heather)!

Is there a way to chroot a user such that they can't travel out of their home dir, but without having to copy a bunch of binaries to their home dirs?

I'd like to restrict my users to not being able to see into /bin, /etc, and most importantly /home/httpd without jumping through hoops.

(!) [Iron] For /home/httpd, set the ownership and permissions so the webserver process has read access, the person who maintains the content has read/write access, and nobody else has any access.
The standard Debian setup is for the webserver to run as user 'www-data', group 'www-data', and the HTML directory (/var/www) is:
drwxrwsr-x   11 root     www-data     1024 Nov 12 17:25 /var/www/
Unix comes with a catchall user 'nobody', group 'nogroup', for processes that shouldn't have any privileges. But in that case, you'd either have to make /home/httpd world-readable (which is what you said you don't want), or owned by 'nobody' or group 'nogroup' (which is bad because 'nobody' should never own any files, although some sysadmins disagree).
Chroot requires you to copy the binaries, as you say.
The 'bind' filesystem in recent 2.4 kernels allows you to make a directory appear to be in two different locations, and the shadow location can be inside a chroot jail. Or so some documentation I saw a few months ago said. That may or may not be more convenient than copying binaries and shared libraries.
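If that documentation is right, the idea would be sketched like this, with a made-up jail path (root privileges and a 2.4 kernel required):

```shell
# Make the real /bin visible a second time, inside the jail.
mount --bind /bin /home/jail/bin
```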
Why don't you want your users reading files in /bin and /etc? Normally it's only a few sensitive files that need to be protected (those containing passwords). For each case, you'll need to think of a strategy that allows the user to do their work without being able to read the password. For programs, make the file world-executable but not world-readable (mode -rwxrwx--x).
To prevent users from listing the files in /bin (to discover commands they didn't know existed), but still allow them to run or stat programs whose names they know, make the directory itself world-executable but not world-readable (mode drwxrwx--x).
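The directory trick above, sketched on a scratch directory rather than the real /bin (the paths here are made up):

```shell
# Execute-without-read: others may enter the directory and run a
# program whose name they know, but cannot list the contents.
mkdir -p /tmp/demo/bin
touch /tmp/demo/bin/known-tool
chmod 771 /tmp/demo/bin              # directory: drwxrwx--x
chmod 771 /tmp/demo/bin/known-tool   # program: runnable, not readable
```

Test it from another (unprivileged) account: "ls /tmp/demo/bin" should fail, while "/tmp/demo/bin/known-tool" can still be invoked by name.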
To prevent users from reading sensitive files in /etc, arrange to have the program run as a different user or group, and give only that user/group access to the configuration file(s). But that means making the program setuid or setgid, so that it will run under its own permissions rather than the user's, but set[ug]id itself is a security risk. Relatively speaking, setgid is safer than setuid.
As an alternative to setuid, you can arrange for the programs to run via 'sudo' or 'super', two proxy programs that do something like suid but in a safer and more configurable way.
It's probably a bad idea to make /etc non-world-readable. Numerous standard programs would break.
(!) [Heather] Hmm, if most of your users don't need to access the web server, and you aren't offering home-based web access ... then you could simply run the web daemon in a more complete than usual chroot, and only give members of the webmaster team accounts within its jail. You'd need more than one ssh running ... one per jail, and one for the top ... but it can be very effective.
Each "child jail" can have much more limited /etc contents and a seriously stripped binaries tree, as well as only having the user accounts that match its purpose. The top can house your syslog, as there's an easy option for reading multiple /dev/log nodes. I think you get up to 19 extras.
The trick works especially well if combined with some of the recent "chroot as a one way trip" patches being offered out there. These nearly always prevent double chrooting, so you'll need to tweak the trapped daemons to be ok with not being able to chroot any further. The patches keep changing too, so I haven't settled on a preferred one yet.

(!) Getting volume label for CD

Useful scripts and tidbits from Ernesto Hernandez-Novich, Michael Blum, Richard A. Bray

Answered By Ben Okopnik, Mike Ellis

It can be argued that there are some dangers in posting code blocks which are not actually correct. However, I think the thought processes revealed in deciding which tricks to use or not while reading data "closer to the metal" than shells normally go is valuable in and of itself. -- Heather

Greetings from Venezuela.

Someone asked that on a mailing list I subscribe to; I gave the short-short answer, which happens to be in the CD-ROM HOWTO at Later I answered with code that gives you the label and some more... <g>

Check out and feel free to reproduce the code sample at

(!) [Ben] Good stuff. Thank you! I've modified it a tiny bit by adding
die "Usage: ", $0 =~ m{([^/]*)$}, " <iso_file|cd_device>\n"
        unless @ARGV && -e $ARGV[0];
at the beginning - just in case I forget how to use it - and modified the "open" to check the return value in case of problems:
open CD, $ARGV[0] or die "Can't open $ARGV[0]: $!\n";
It's great otherwise - I've already got it stowed away as "iso9660info" in my "/usr/local/bin". :)

(?) [Ernesto] If your spanish is rusty, the paragraph above the Perl code reads more or less like:

'Nevertheless, before someone asks "How can I find out who prepared the CD? When? For what company? Does it belong to a multiple-CD set? Which one in the set is it?", and since I know that isn't in the HOWTO, allow me to present a small fragment of (hopefully useful) code. BTW, the comments along the Volume Descriptor are nothing but the appropriate mkisofs options needed to fill in the values while creating the ISO image.'

If that sounds harsh, it is because someone suggested that I didn't know jack about the ISO-9660 filesystem and was quoting HOWTOs to get credit <g> (go figure). And so I made a pun at the end of the message, but it only works in Spanish.

(!) [Ben] Didn't sound harsh to me - I certainly give (and get!) credit for quoting HOWTOs. The trick is knowing which ones to quote, and which part. Besides, why does it matter where you got the answer as long as it's right?

(?) [Ernesto] BTW, feel free to announce the Venezuelan Linux User's Group mailing list in future installments of LinuxGazette. It's especially well suited for Spanish-speaking Linux users, who can subscribe to the l-linux mailing list; we have our archives available for browsing in complete with a search form working over three years' worth of messages.

Keep up the good work!

(!) [Heather] Thanks. We are definitely seeing an increase in Spanish requests and I'm sure our readers will find your list handy.
---- He certainly wasn't the only reader helping out...

(?) [Michael Blum] I just came across, in your November issue, a question on reading the volume label from a CD. If it's in ISO 9660 format, which includes the Joliet-type CD your reader was burning, it's actually pretty easy to write a command-line tool to read the label.

Here's a bash shell script:

See attached blum-rd_label.bash.txt

Note that the parameter is the device file for the CD, e.g. /dev/hdc, and that the CD does not have to be mounted. You need to be 'root' to run the script.

Here's a C program to do the same thing. I've used this program under both Linux & IBM's AIX.

See attached blum-rd_label.c.txt

The only real advantage of the C program is that when compiled the executable can be made suid to root, allowing you to run the program as a non-root user. Just as with the shell script the parameter is the device file for the CD, and the CD does not have to be mounted.

Hope you find this useful! Thanks for your publication - I've learned a lot from it over the years.

(?) [Richard A. Bray] I finally broke down and read the iso9660 format instead of sleeping the other night.

Here are the basic commands to get the data. I will clean it up later to make sure there is a disk in the drive first, and that no errors have occurred. It should run dd only once to load the CD header into a file, then report the results out of that.

I don't know what formats will be compatible with this, but it seemed to work fine on all of my Windoze CDs and even my Red Hat install CD. I guess I will have to check and make sure that it will work with UDF format someday.

[root@winserver bin]# cat cdinfo

See attached

(!) [Ben] <wince> This is not a good idea. You're hitting the hardware device over and over when you could do it all in one read:
# Make sure that a block device was specified
[ -b "$1" ] || { printf "Usage: ${0##*/} <cd_device>\n"; exit 1; }

# Read the entire header
data=`dd if=$1 ibs=863 skip=32769 count=1 2>/dev/null`
Now you can let your CD go back to sleep, and extract whatever pieces you wanted from the variable:
echo "FSTYPE: ${data:0:5}"
echo "OSTYPE: ${data:6:32}"
This also lets you cut out the temporary variables.
(!) [Mike] Ben's suggestions got me wondering - did all those clever tricks really work? Unfortunately not, because the CD header format includes a lot of NUL characters (ASCII 0) which bash treats as "end of variable".
(!) [Ben]

ben@Baldur:~$ a="`dd if=/dev/hdc ibs=1 skip=32808 count=863 2>/dev/null`"
ben@Baldur:~$ expr length "$a"
Works for me, Mike. The problem may be that you're not quoting the string - or, quoting the individual chunks (not quoting them is what I use to get rid of the extra whitespace.) I didn't experiment with this all that much, but I tested the solution that I suggested, at least for the first few variables:

data="`dd if=/dev/hdc ibs=1024 skip=32 count=1 2>/dev/null`"

echo "FSTYPE    :" ${data:1:5}
echo "OSTYPE    :" ${data:8:32}
echo "CDNAME    :" ${data:40:32}

provides the output:
ben@Baldur:~$ ./cdinf
FSTYPE    : CD00
(!) [Mike] Here's my version of the CD volume label extractor... the handling of non-UTC timezones is wrong, but otherwise it seems to work OK...

See attached ellis-cdlabel_extractor.bash.txt

(!) [Ben] <gazes admiringly at the data = dd stuff piped through tr line> That is a cute trick, though. <stuffing it away in my own toolbox> Thanks!
An even cheaper way to fold that whitespace: don't quote the variable. "bash" will swallow anything that is defined as the first two characters of $IFS - and that happens to be spaces and tabs.
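A two-line illustration of that whitespace folding, with a made-up string:

```shell
data="CD001      LINUX   "
echo $data      # unquoted: word-split, runs of spaces/tabs folded
echo "$data"    # quoted: whitespace preserved verbatim
```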
(!) [Mike] One problem with eating spaces. I need those for the offsets to work. :)
(!) [Ben] That's why you only do that when printing out the individual variables, not for the entire string. The program flow is "get string -> grab chunks via offsets -> print w/o spaces."

(?) Now all I am missing is the cd serial number that Windoze generates. I can't seem to find how to compute that. I may just checksum the first 32K of the drive and use that.

(!) [Ben] I seem to vaguely remember Windows showing some weird number. Are you sure it's not stored in the CD header itself? Note that I'm not saying that it is; I'm just wondering.

(?) OK. Here is my current version of the script. I added error checking to properly return errors if no media or of wrong type.

Thanks to Ben and Mike.

See attached

(!) [Ben]
dd if=$1 bs=1 skip=32768 count=2048 >/tmp/cdinfo$$ 2>/dev/null
[ ... ]
data=`cat /tmp/cdinfo$$ |tr '[\\000-\\037]' '.*'`
(!) [Ben] Why'd you go and do that? :) The file creation is completely unnecessary, and will leave junk in "/tmp" if your script crashes for any reason. If you want to use that mechanism, simply do it on the fly, like Mike did:

data=`dd if=$1 bs=1024 skip=32 count=1 2>/dev/null|tr '[\000-\037]' '.'`

(?) Well, that is because the pipe to tr will always set $? to 0. Then I wouldn't be able to test for failure of dd. Sorry, but that's the rub.

(!) [Ben] [ -z "$data" ] && { printf "Oops, read failed.\n"; exit; }
:) I think this would be even better. What we really care about is that we have data in $data, right? Best to test the end result - although intermediate tests, in addition to the final one, certainly don't hurt.

(?) If I want to use tr to trap for weird characters, then I will have to store the data somewhere. I suppose it is possible for it to crash before reaching the rm -f /tmp/cdinfo$$ line but, if that does happen I probably have something seriously wrong with tr.

I suppose I could stuff the data in a variable from dd and then echo it to tr, that would work wouldn't it?

(!) [Ben] Well, Mike's contention was that you would lose anything past a null when just assigning it that way. I didn't do any rigorous testing, but I'm willing to believe - "\0"s being the way strings are normally terminated. The one header that I tested didn't chop off short, but it may not have contained any nulls.
BTW, Mike - that "tr" function could stand a bit of twiddling. :) The extra '\'s in your "first list" convert backslashes to '.'s; the '*' in your "second list", as the second character, has the "truncated second list" effect - i.e., all matches other than backslashes will be converted to asterisks. That's probably not what you wanted.
(!) [Mike] Well spotted! Guess that's what you get for lazy quoting (well, it doesn't usually cause any nasty problems!)
(!) [Ben] Thanks! Just a matter of clean code. Although printing out an unquoted "$data" has a very interesting result: it shows the header with all the control chars converted to stars... and immediately followed by a listing of the current dir. Why is only the last asterisk interpolated? <shrug> These are the questions that try men's souls.
I usually try to make sure that my code doesn't do anything that I didn't tell it to do, like hanging out in seedy bars with suspicious characters and drinking till all hours. Gotta watch that stuff, or - bam! - it'll grab your credit card and be buying drinks all around.
As well, since all of the data is in the first K, it's not necessary to grab a 2K block; and since the numbers divide neatly by 1024, it's more effective to have "dd" reading it a K - rather than a byte - at a time.
(!) [Mike] Also a good point, although I'd go one stage further: since the CD block size is standardised as 2K, it's probably clearest (and quickest...?) to use

data=`dd if=$1 bs=2048 skip=16 count=1 2>/dev/null|tr '[\000-\037]' '.'`
although I concede that it does read a lot more than is strictly necessary.

(?) Yes... The drivers "probably" optimize the command, but it would be better to use the correct size blocks.
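For reference, the ISO 9660 Primary Volume Descriptor sits in sector 16 (counting in 2048-byte sectors); its first byte is a type code and bytes 1-5 spell "CD001". A sketch of a sanity check built on the same dd idiom (the device path is illustrative):

```shell
# Read sector 16 and look for the ISO 9660 magic string.  The tr strips
# the nulls so the command substitution keeps the rest of the data intact.
dev=${1:-/dev/cdrom}              # illustrative device path
pvd=$(dd if="$dev" bs=2048 skip=16 count=1 2>/dev/null | tr -d '\000')
case "$pvd" in
    ?CD001*) echo "ISO 9660 filesystem detected on $dev" ;;
    *)       echo "no ISO 9660 signature found on $dev" ;;
esac
```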

Thanks for the tr tips. I've never used tr before. I guess I'll have to actually read the man page.

(!) [Ben] <grin> "When you have learned to snatch the error code from the trap frame, grasshoppa, it will be time for you to leave." Good luck with your coding.

(?) linux book

From P.Sreekanth Reddy

Answered By Thomas Adam, Karl-Heinz Herrmann, Mike "Iron" Orr, Chuck Peters

Dear sir,

I am new to the Linux operating system; in fact, I am new to the computer field. I know about Windows. Please suggest one good, basic book which teaches easily about the Linux operating system, and a book on operating system concepts.

thank you, sir

P.Sreekanth Reddy


(!) [Thomas Adam] Hello,
This is a very common question among people new to Linux. But this question is very broad. There is a whole plethora of books to choose from.
The one which got me started was called "Running Linux", published by the leading publisher of Linux material, O'Reilly. Take a look at their website to give you more information.
If you could perhaps be a little more specific as to what you think you will be using Linux for, then maybe we here at TAG can tailor our answers to suit your needs.
In the meantime, I hope that helps,
(!) [K.-H.] This is a rather broad question. Since you say you have some Windows experience, I assume you've got a computer; you probably need "Linux" itself and some idea of what to do with it.
Do you already have a Linux distribution? Did you already install it and want help with "what now"? At what step do you need help?
There are lots of books on Unix in general and Linux specifically. If you are looking for downloadable and/or online versions have a look at:
especially the Linux System Administrator's Guide. That one helped me a lot at the beginning.
There are several companies or groups putting together packs (distributions) of kernel, OS programs and application programs which can be easily installed. Some have their own manuals (printed or online) for their specific setup.
Tell us more and we could help you more on your specific problem....
(!) [Chuck] A basic book designed to help Windows users become productive on Linux ASAP is Everyday Linux. It's available online at
I should note I am a bit biased as I know the authors well and I own the domain.
(!) [Heather] One of my local LUGs received a copy of "A 12 Step Guide To Curing Your Windows Addiction", which was given away as a door prize; I thought it was pretty decent. Since you say you know about Windows, it may help more than some books which assume you have a bit more computer knowledge already.
Way back in the dusty ages when I didn't know UNIX, I learned most of the good stuff to get me up to speed in Mark Sobell's books. "A Practical Guide To The Linux System" should help you get a little more hands-on experience.
Of course, Jim did co-author a book, "Linux System Administration" pubbed under the New Riders imprint... it's split half and half, theory and practical matters, but as some of the intended audience are execs and other managerial sorts who may not deal with the nuts and bolts, maybe it will help you too.
(!) [Iron] Go to the new Linux Gazette Knowledge Base and scroll down to "How can I get help on Linux?" There are a few books listed.
Jim Dennis has also mentioned books in his Answer Gang answers, and the "Greetings from Heather Stern" entry (the first entry in each The Answer Gang column) also occasionally mentions books. I would point you to a specific URL, but searching for "linux books jim dennis" brings up 24 pages of entries in the search engine, so it would take a while to evaluate all the pages.
(!) [Ben] You know - I'm just loving this. This is exactly how I foresaw this resource being used, a simple place we could point querents.
Major-league case of warmfuzzies here, as I go back to pounding the topic list...

(?) random crashes - how to prepare bug report?

From N.P.Strickland

Answered By Thomas Adam, Mike Ellis, Ben Okopnik, Huibert Alblas


My Linux machine is crashing randomly once every couple of days - it freezes up and will not respond to anything (including ctrl-alt-del, or ping from another machine) except the on/off switch. The load on the machine is light, and the work it is doing is not particularly unusual.

1) Can anyone suggest how I could gather useful information about what is going on?

I put a line like this in /etc/syslog.conf:

*.debug;mail.none;authpriv.none;cron.none /var/log/messages

As far as I understand it, this should get all possible debugging information out of syslogd, although I'm not completely clear whether any more could be squeezed out of klogd. In any case, I'm not getting any messages around the time of a crash. I've also turned on all the logging options that I can find in the processes that I am running, without any helpful effect.
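(One trick worth noting here: if the box dies before syslogd can flush its buffers to disk, the tail end of the log is lost no matter how verbose it is. Forwarding kernel messages to a second machine sidesteps that; the hostname "loghost" below is an assumption.)

```shell
# On the crashing machine, add to /etc/syslog.conf (the field separator
# must be a TAB):
#   kern.*	@loghost
# then restart syslogd.  On the receiving machine "loghost", syslogd must
# be started with remote reception enabled:
#   syslogd -r
```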

(!) [Thomas] Have you added any memory to your machine recently? This has been known to "crash" machines randomly.
What programs do you have running by default? Perhaps you could send me (us) the output of the "pstree" command so that we can see which process is linked to what.
(!) [Mike] Quite right, Thomas. If you have two or more memory modules (DIMMs probably) in your machine, try removing one of them if you can. If the fault appears to go away, try putting the module back in and see if it re-appears. If the fault never goes away, replace the first module, remove another, and try again.
As you're running a 2.4 kernel, make sure you have plenty of swap. Sadly the 2.4 kernels aren't as good as the older 2.2 at making maximum use of swap, with the result that you are now strongly recommended to... look at if you need help. I haven't heard tales of this causing random lock-ups, but you never know!
(!) [Halb] Yes, the early 2.4 kernels had 'some' trouble with swap space. But at the time of 2.4.9 a completely new (built from scratch) VM was introduced by Andrea Arcangeli, and incorporated by Linus as of 2.4.10.
You can read a good story on:
It is an interesting, not too long story.
(!) [JimD]
However, if you're using the new tmpfs, it might be wise to err on the side of generosity when allocating swap space. Using tmpfs, your /tmp (and/or /var/tmp or other designated directories) can be sharing space with your swap (kernel VM paging).
Still, one or two swap partitions of 127Mb should be plenty for most situations. I still like to keep my swap partitions smaller than 127Mb (the historical limit was 128, but cylinder boundaries usually round "up"). I also recommend putting one swap partition on each physical drive (spindle) to allow the kernel to balance the load across them (small performance gain, but negligible cost on modern hard disks).

(?) 2) If I can get any usable information about the problem, does anyone know where I should send it?

(!) [Thomas] Here, to both me and the rest of TAG.

(?) If I knew that it was a kernel problem, I'd try the linux-kernel mailing list. But that looks pretty intimidating, so I'd want to be sure I knew what I was talking about first! Also, I guess that some kind of hardware problem is more likely.

(!) [Thomas] I'm still hedging my bets on memory... if it is a kernel problem, then you could try re-compiling with the latest stable release.

(?) I'm using Red Hat 7.2, which includes the 2.4.7-10 kernel, on a machine with an Intel Pentium 4 CPU running at 1.5 GHz and 512M of RAM. Crashes occur even when I am not running X and no users are logged on. The main process that I am running is the Jakarta Tomcat web server, which runs a Java servlet, which runs the symbolic mathematics program Maple as an external process. As far as I can tell from the logs, when the last crash occurred, there had been no request to the web server for some time. It's just possible that a request triggered the crash, which prevented the request from being logged, but I doubt it.

Thanks in advance for any suggestions.

Neil Strickland

(!) [Thomas] I might also suggest that you run "strace" on the processes you think might be crashing. That will then tell you where and how... if nothing else.
(!) [Ben] I'm pretty much of the same mind as Thomas on this one; Linux is pretty much bullet-proof, what tends to cause crashes of this sort is hardware - and that critical path doesn't include too many things, particularly when the key word is "random". Memory would be the first thing I'd suspect (and would test by replacement); the hard drive would be the second. I've heard of wonky motherboards causing problems, but have never experienced it myself. I've seen a power supply cause funky behavior before - even though that was on a non-Linux system, it would be much the same - and... that's pretty much it.
"strace", in my opinion, is not something you can run on a production system. It's great for troubleshooting, but running a web server under it? I just tried running "thttpd" under it, and it took approximately 30 seconds just to connect to the localhost - and about 15 more to cd into a directory. Not feasible.
(!) [Thomas] Hmm, perhaps I wasn't too clear on that point. What I meant was that he should run strace on only the one process which he thinks might be causing the crash. Hence the reason why I initially asked for his "pstree" output.
But I agree, strace is not that good when trying to analyse a "labour intensive" program such as a webserver, but then I fail to see why one would want to run "strace" on such a program anyway... after all, Apache is stable enough :-)

(?) Thanks again for all your help.

(!) [Mike & Ben] You're welcome.

(?) Memory would be the first thing I'd suspect (and would test by replacement);

I downloaded memtest86 (from and ran through its default tests twice (that took about 40 minutes - I haven't yet tried the additional tests, which are supposed to take four or five hours, altogether). Nothing came up. Do you think that's reliable, or would you test by replacement anyway?

(!) [Mike] The problem may be an intermittent fault: if the tests take 40 minutes and the machine usually runs for (say) 4 days, you've effectively given it less than a 1% chance of finding the problem [40/(4*24*60)]. I'd still seriously consider a test by replacement and/or removal of DIMMs.
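Mike's back-of-envelope figure checks out; anyone who wants to rerun it with their own uptime numbers can do so in one line:

```shell
# 40 minutes of memory testing against ~4 days (5760 minutes) of typical
# uptime between crashes: the test window covers well under 1% of it.
awk 'BEGIN { printf "%.2f%%\n", 100 * 40 / (4 * 24 * 60) }'   # prints 0.69%
```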
(!) [Ben] My rule of memory testing, for many years now, has been "a minimum of 24 hours - 48 is better - and hit it with freeze spray at the end." For a system that needs to be up and running, however, "shotgunning" (wholesale replacement of suspect hardware) is what offers the highest chance of quick resolution.

(?) the hard drive would be the second

I've seen a power supply cause funky behavior before

These don't sound like easy things to test :-( . Do you have any suggestions?

(!) [Mike] They aren't, sadly. Testing by replacement is really the best option for these sorts of problems, but beware, we had a machine here with a dodgy PSU recently which cost us a lot more than a new PSU )-: By the time we'd tracked down the problem we had...
The whole lot had to be disposed of because we had used the faulty PSU with them, and the fault was that it generated occasional over-volt spikes during power-up. These potentially weakened any or all of the other components in the system rendering them unsuitable for mission-critical applications (we actually purchased a cheap case, marked all the bits as suspect and built them into a gash machine for playing with).
In your case, try cloning the hard-drive and replacing that. You can use dd to clone the drive - dd if=/dev/current_hard_disc of=/dev/new_hard_disc bs=4096 - assuming the hard-drives are the same size. Don't use the partitions, though - /dev/hda and /dev/hdc will work, /dev/hda1 and /dev/hdc1 won't since the partition table and MBR won't be copied. Using the raw devices will also copy any other partitions if you've got them.
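After a clone like that, it can be worth a read-back comparison before trusting the new drive; a sketch, again with illustrative device names:

```shell
# Byte-for-byte read-back check of the clone.  Device names are
# illustrative; if the new disc is larger than the old, cmp will stop
# with an EOF message at the end of the smaller device, which still
# shows that the copied region matches.
src=/dev/hda
dst=/dev/hdc
if [ -e "$src" ] && [ -e "$dst" ]; then
    cmp "$src" "$dst" && echo "clone verified" || echo "devices differ"
fi
```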
<Ding/> One bright idea that has just occurred to me - are you using any external devices? If, for example, you've got an external SCSI scanner on the same chain as your internal SCSI discs, a dodgy connection or termination could potentially cause random crashes. It might also be worthwhile checking any USB or FireWire devices you've got connected. I doubt serial or parallel devices would cause a problem, but it might be worth checking just in case. Internal connections are also suspect - a CD-ROM drive on the same IDE chain as your boot disc might cause problems: you might even like to remove it completely if you don't use it often. Any PCI cards are also candidates for suspicion - make sure they're all plugged in fully.
Let us know how you get on!
(!) [Ben] Unfortunately, all my best suggestions come down to the above two. I used to look for noise in power supply output with an oscilloscope - interestingly enough, it was a fairly reliable method of sussing out the problematic ones - but I suspect that it's not a common skill today. There are a number of HDD testers out there, all hiding behind the innocuous guise of disk performance measurement tools... but Professor Moriarty is not fooled. :)
Seriously, if running one of those (e.g., "bonnie++") for a few hours doesn't make your HDD fall over and lie there twitching, you're probably all right on that score.

This page edited and maintained by the Editors of Linux Gazette Copyright © 2002
Published in issue 74 of Linux Gazette January 2002
HTML script maintained by Heather Stern of Starshine Technical Services,

"Linux Gazette...making Linux just a little more fun!"

News Bytes


Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release.

 January 2002 Linux Journal

[issue 93 cover image] The December issue of Linux Journal is on newsstands now. This issue focuses on networking, has an interview with Costa Rica's Minister of Technology (they use Linux!), and has that great picture of Linux-on-a-wristwatch on the cover (it's a prototype). Click here to view the table of contents, or here to subscribe.

All articles through December 1999 are available for public reading at Recent articles are available on-line for subscribers only at

Legislation and More Legislation

 Sklyarov's Charges Dropped

Good news for all those following the story of Russian programmer Dmitry Sklyarov. It looks like Dmitry will not have to face charges under the DMCA for speaking publicly in the US about software to circumvent Adobe e-book encryption. The press release from the US Attorney's Office can be found here. Basically, the agreement means that charges against Dmitry will be postponed for one year, or until the case against Elcomsoft (Dmitry's employers during the development of the contested technology) concludes, whichever is longer. During that time, Dmitry can return to Russia (as he did, happy news for both himself and his family). He will be "prohibited from violating any laws" (aren't we all!), and will have to testify truthfully in the US case against Elcomsoft (to do otherwise would be perjury). If he fulfills these obligations, then at the end of the deferment the charges will be dropped permanently.

Although this development is welcome, and has made headlines throughout the computer press (e.g. in The Register, in Wired and in Planet PDF) as well as in the mainstream press (e.g. in USA Today), this story is far from over. Richard Stallman was quick to comment on the initial news (which was somewhat confused: it appeared the charges had been unconditionally dropped), cautioning that the DMCA was still a real threat to freedom. He also made a renewed call for active resistance and protest against the DMCA and its supporters. Later, under the impression that a plea bargain had been made, Stallman was quite critical of Dmitry, accusing him of being a defector. Following clarifications, Stallman apologised for his earlier comments (which many felt were unwarranted, though well intentioned). Indeed, there seemed to be quite a lot of confusion surrounding the whole affair, apparently due to some unclear statements from the US Attorney's Office. Some clarifying statements from Dmitry, his employer, and the defence team can be found here.

At the end of the whole episode, what has come out as the most important point is that the DMCA is still there. The US DOJ case against Elcomsoft should be a crucial test of the legality and applicability of this law, but as RMS keeps pointing out, it is important to follow every avenue and opportunity available in the fight for freedom (hopefully that is not too melodramatic!). The Electronic Frontier Foundation have an excellent page of resources on the Sklyarov case (and other DMCA related matters). Be sure to keep informed.

In other news, reported by The Register, it appears that copyright-enforcement happy Adobe is in hot water itself. A judge issued an injunction for Adobe to stop selling InDesign, its Quark-killer program, pending trial. Trio Systems has sued Adobe, claiming Adobe illegally used Trio's code in InDesign.

 MS Links

Christian Loweth mailed us a link to his website: The Microsoft Collection. This site contains quotes and links from many sources which address Microsoft's role in respect to monopoly activities, consumer privacy, legal issues, internet, systems interoperability, web standards, corporate ethics and more. This is probably of some interest given the legal negotiations Microsoft is involved in at the moment.

The recent Microsoft Antitrust settlement is still a bone of contention. Nine dissenting states and certain industry groupings are holding out for more punitive conditions, such as forcing MS to opensource Internet Explorer. Among the involved industry figures is Red Hat CEO Matthew Szulik who recently testified before the United States Senate Judiciary Committee on the settlement. He argued that the 9 dissident states' remedies were more appropriate and potentially effective than the current arrangement. The Register has given a lengthy analysis of the various remedies, but as John Lettice vividly wrote, the dissenting states `...probably are just flinging themselves in front of a speeding train'. Certainly, Microsoft is pulling no punches in defending the position of the original settlement.

An interesting commentary on the proposed settlement can be found in Lawrence Lessig's testimony before the Senate Committee hearing. Lessig's main focus is on the inadequate enforcement provisions. He also makes the point that Microsoft is not the only enemy of competition out there (very true) and he even has some kind words on the .NET strategy. This is worth reading.

There was also a Slashdot discussion of these issues which included a useful link to some Linuxplanet advice for those who want to register their opinion on this matter (there is a 60 day comment period from Nov. 28).

A more recent cause of concern regarding Microsoft's intentions is the patent claim that it has been granted for a `Digital rights management operating system'. This is an operating system which has certain features to make it easier to protect `rights-managed data'. For example (taken from the patent abstract), if you are running a trusted program using such data, no untrusted programs will be allowed to run. There are various other features along the same general idea. This story was reported by The Register, following the publication of the patent claim. Operation of the scheme would require a database of the particulars of users' PCs:

"the content provider would have to maintain a registry of each subscriber's DRMOS identity or delegate that function to a trusted third party,"

Seth Johnson of the Committee for Independent Technology (C-FIT) posted a very bleak assessment of the situation to the software patents mailing list (also here). The MS DRMOS is seen as a large part of an overall movement to deprive the public of the power to work with and control information, with the ultimate aim of rendering them nothing more than passive consumers. This contribution builds on an earlier (and also pessimistic) article by David Winer which speculated on the nature of the deal done between Microsoft and the DOJ. Certainly, a patent on a DRMOS is worrying, particularly with legislation like the SSSCA doing the rounds which could make such technology mandatory.

Linux Links

LinuxFocus articles:

The Duke of URL has a review of the Pogo Linux Altura Athlon XP Workstation. Sadly, this is the Duke's last article, because the site is going down. Another victim of the it's-so-much-work-and-I'm-not-getting-paid-for-it syndrome. We'll miss the "concise and accurate information on Linux hardware and software" on the site. For now, the archives are available. Contact the Duke (Pat) if you want to make a $$ contribution toward putting the archive on CD-ROM, or if you can donate webspace to host the archive.

Google's relaunched usenet archive received recent press both in an article in Wired and in a story on The Register. In particular there is a Google archive of historic announcements including Linus and his pet project, Tim Berners-Lee's announcement of what would become WWW, Microsoft's first mention in the media, and so on. Good nostalgia, especially at this time of year.

NewsForge have a story on Ximian's release of Evolution 1.0. Also covers the release of Ximian's proprietary MS Exchange client for Linux. Although some may have qualms about Ximian releasing such a proprietary extension, there are compelling reasons for this course of action, not least of which is staying in business! In any case, it should be a good asset to Linux users who are forced to operate in an MS Exchange environment. Story also covered here and here. have a review of Linux on Playstation 2 (courtesy Slashdot).

O'Reilly Net have some pieces which might be of interest, including

The following links found on Linux Weekly News are worth checking out:

The Register have the following links

Newsforge recently took a look at whether one of the biggest problems with Linux usability is that the people teaching newbies are just too good. Interesting reading. Also at The Register.

Slashdot have the following links worth noting

Linux Journal article on perceptions of Linux among undergraduate sysadmin students.

Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

Consumer Electronics Show (CEA)
January 1-11, 2002
Las Vegas, NV

Bioinformatics Technology Conference (O'Reilly)
January 28-31, 2002
Tucson, AZ

COMNET Conference & Expo (IDG)
January 28-31, 2002
Washington, DC

LinuxWorld Conference & Expo (IDG)
January 30 - February 1, 2002
New York, NY

The Tenth Annual Python Conference ("Python10")
February 4-7, 2002
Alexandria, Virginia

Australian Linux Conference
February 6-9, 2002
Brisbane, Australia

Internet Appliance Workshop
February 19-21, 2002
San Jose, CA

Internet World Wireless East (Penton)
February 20-22, 2002
New York, NY

Intel Developer Forum (Key3Media)
February 25-28, 2002
San Francisco, CA

COMDEX (Key3Media)
March 5-7, 2002
Chicago, IL

BioIT World Conference & Expo (IDG)
March 12-14, 2002
Boston, MA

Embedded Systems Conference (CMP)
March 12-16, 2002
San Francisco, CA

CeBIT (Hannover Fairs)
March 14-22, 2002
Hannover, Germany

COMDEX (Key3Media)
March 19-21, 2002
Vancouver, BC

March 19-21, 2002
Washington, DC

SANS 2002 (SANS Institute)
April 7-9, 2002
Orlando, FL

LinuxWorld Conference & Expo Malaysia (IDG)
April 9-11, 2002

LinuxWorld Conference & Expo Dublin (IDG)
April 9-11, 2002
Dublin, Ireland

Internet World Spring (Penton)
April 22-24, 2002
Los Angeles, CA

O'Reilly Emerging Technology Conference (O'Reilly)
April 22-25, 2002
Santa Clara, CA

Software Development Conference & Expo (CMP)
April 22-26, 2002
San Jose, CA

Federal Open Source Conference & Expo (IDG)
April 24-26, 2002
Washington, DC

Networld + Interop (Key3Media)
May 7-9, 2002
Las Vegas, NV

Strictly e-Business Solutions Expo (Cygnus Expositions)
May 8-9, 2002
Minneapolis, MN

PC Expo (CMP)
June 25-27, 2002
New York, NY

USENIX Security Symposium (USENIX)
August 5-9, 2002
San Francisco, CA

LinuxWorld Conference & Expo (IDG)
August 12-15, 2002
San Francisco, CA

LinuxWorld Conference & Expo Australia (IDG)
August 14 - 16, 2002

Communications Design Conference (CMP)
September 23-26, 2002
San Jose, California

Software Development Conference (CMP)
November 18-22, 2002
Boston, MA

News in General

 Linux and Viruses ran a recent article reporting that Linux would be the next virus target (in the mould of the various email worms currently circulating the Windows world). It featured quotes from representatives of Trend Micro and McAfee which were surely well intentioned but at times sounded a little suspect. For example, did you know that `In fact it's probably easier to write a virus for Linux because it's open source and the code is available.' As Don Marti commented in his Aspiring to Crudeness newsletter: `How many damn "The Linux viruses are coming! Virus checkers are still relevant!" articles are we going to have to read until even the Mainstream Media starts ignoring the anti-virus vendors?.' Don also links to a good article by Rick Moen explaining why Linux is not such a likely target as some people believe. Roaring penguin also have a page covering various myths regarding Linux and viruses, which specifically addresses points raised in the article.

Perhaps the most imminent impact of viruses on Linux lies in the fact that if the current rash of virus outbreaks continues, it seems likely that many more security conscious customers will seek alternatives to the current market leaders. Secure (or at least more secure) software is bad news for anti-virus software makers.

 Quake 2 Source Code Released Under the GPL

John Carmack has released the sources to the fabled action shoot-em-up game: Quake 2. From Carmack's .plan file at id Software (
`However, all in all this may spur the development of many new (free) Linux games and may encourage some hackers who are not "just" coders (musicians, graphics artists, and others) to create new games by creating, compiling and plugging in new data sets.'
Fine sentiments indeed.

 Practical PostgreSQL PDF Now Available

Command Prompt, a provider of custom Linux and PostgreSQL development and managed services, has announced the pre-production release of Practical PostgreSQL. Practical PostgreSQL is a publication co-produced by Command Prompt, Inc. and O'Reilly & Associates covering the PostgreSQL ORDBMS. You may retrieve a pre-production PDF from the following URL:

 Special Event of Linux User Club India

Gitesh Trivedi mailed to point out that the Linux User Group of India is arranging an event for its users. The subject of the day is System Administration on Linux. It will be held on 13th January 2002, 10.00 A.M. to 6.00 P.M., at 26, Jagganathpark, Nr. Malav Talav, Jivarajpark, Ahmedabad-380051, Gujarat, INDIA. Further details here.

Distro News


Linux Today report the availability of Mandrake Linux 8.1 for Intel Itanium Architecture. The Itanium 64-bit architecture is a high-performance platform commonly used for servers.

The Register recently reviewed Mandrake 8.1, from the point of view of ease of install, and found it "easier than Win-XP". Overall, a very positive endorsement of the distro (particularly following the ordeal which ensued during an earlier Red Hat install).


SuSE Linux has announced a version of "SuSE Linux Firewall on CD" available for "Virtual Private Networks" (VPN).

SuSE Linux has announced the availability of SuSE Linux 7.3 for Sun Microsystems' SPARC architecture. The new version is available for download. SuSE provides Linux Kernel 2.2.20 for deployment in Sun4c and Sun4m series 32-bit machines and Kernel 2.4.14 for Sun4u series 64-bit systems. Among other features, Kernel 2.4.14 offers an extended range of drivers and USB support for new UltraSPARC models. SuSE Linux 7.3 for SPARC is based on the program library glibc 2.2.4 and includes XFree86 4.1.0.

 Yellow Dog have a report on Yellow Dog Linux and future directions the distribution could take (courtesy Linux Today).

Software and Product News

 Opera 6.0 for Linux Technology Preview

Opera Software has released Opera 6.0 for Linux, Technology Preview 2 (TP) for download with new features, including the ability to display non-Roman characters, a completely new and customizable user interface, as well as a range of different improvements that increase the speed and enjoyment of Linux users' browsing sessions.

 Kohan: Immortal Sovereigns Now Available for Linux

TimeGate Studios and Loki Software have announced that the fantasy and real-time strategy game, Kohan: Immortal Sovereigns, shipped for the Linux platform on Saturday, August 25.

Kohan has an MSRP of $49.95 (USD) and is now available for order from the Loki webstore. A listing of resellers is also available. Linux gamers are welcome to sample Kohan by downloading the free demo at

Proving again that good taste is no substitute for good gameplay, developer Running With Scissors announced that they will join forces with Loki Software to bring the long-awaited Linux version of POSTAL PLUS to Windows-weary gameplayers everywhere.


Jim Watkins mailed to draw our attention to OpenFly: an open source game engine for a flight simulator toolkit that is Linux-compatible. He says "this looks like an awesome project and would be Linux's first true Combat Flight Simulator".

 McObject Linux-based Benchmark Paper

McObject has released a new white paper (PDF): "Main Memory vs. RAM-Disk Databases: A Linux-based Comparison". This paper addresses the performance and availability implications of different approaches to database management in embedded systems running on Linux. It looks at databases running in embedded applications on hard disks, on RAM-disks, and in memory-only operation.

McObject's benchmark tests the company's MMDB against a widely used embedded database, which is used in both traditional (disk-based) and RAM-disk modes. Deployment on RAM-disk boosts the traditional database's performance by as much as 74 percent, but still lags the memory-only database in this test (performed on Red Hat Linux version 6.2).


VMware has announced the launch of VMware Workstation 3.0. VMware Workstation enables multiple operating systems to run on physical computers in secure, transportable, high-performance virtual computers. Workstation 3.0 provides support for the latest operating systems including Microsoft Windows XP and the latest Linux distributions, supports additional peripheral devices, and provides significant enhancements in networking and overall performance.

 Tommy Hilfiger is Dressing Up Linux and Other IBM News

IBM have announced that Tommy Hilfiger has turned to IBM and Linux for an e-business infrastructure designed to expand the company's reach to its specialty retailers, factories and employees.

Tommy Hilfiger is creating three innovative new web portals using IBM eServer xSeries running Linux, IBM eServer iSeries running Java, DB2 Universal Database and a suite of software products from IBM Business Partner eOneGroup.

IBM has started shipping its first Eclipse-based tool for Linux -- the WebSphere Studio Application Developer for Linux beta. This follows IBM's earlier announced strategy, when it donated $40 million of software -- codenamed Eclipse -- to the new independent open-source community. Developers working on WebSphere Studio and other Eclipse-based tools use a common, easy-to-use interface that provides a consistent "look and feel," regardless of vendor, which cuts training costs for customers. Eclipse will also enable customers to integrate business processes used to create electronic-business applications, such as those for Web services. 150 software vendors, including IBM, Red Hat, TogetherSoft and others are already working together on Eclipse software. Downloads here.

As part of an initiative to stimulate the development of new Linux solutions specifically for the small and medium business market, IBM is announcing a "virtual Linux server" for independent software vendors. The eServer iSeries Linux "Test Drive" uses IBM's mainframe-inspired partitioning technology to give software vendors internet access to their own iSeries server, where they can write, port and test Linux applications for eServer iSeries. IBM believes Linux running on eServer iSeries is a combination that can reduce cost and complexity by consolidating onto a single, easy-to-manage, mainframe-class server.

 Project Management Software for Linux

The project-management software AUX RDP for Linux has been developed by SYSI GmbH Software Systeme. AUX RDP is a multiuser tool for planning and controlling schedules, resources, costs, results and risks, with numerous text and graphic reports. Additionally, AUX RDP includes a generator that automatically creates a Web-based project information system for the intranet or Internet. AUX RDP is available as shareware and can be downloaded at

 Linux System Administration Course

Training etc wish to publicise their Linux system administration course. This course equips participants with the tools to ensure the well-being of a Linux system. Lab sessions include the installation, troubleshooting, and maintenance of a Linux system.

 Texas Instruments, RidgeRun and DSP

Extending a joint commitment to enable the rapid development of real-time applications, Texas Instruments and RidgeRun have announced the availability of an end-to-end embedded Linux development suite for TI's new system-level digital signal processors (DSPs). The combination of the RidgeRun DSPLinux operating system and Board Support Package (BSP) with TI's power-efficient, programmable DSPs should "reduce cost, power consumption and board space by 40 percent for designers of real-time embedded applications".

Copyright © 2002, Michael Conry and the Editors of Linux Gazette.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Micro web server: how to save CPU time and hard disk space

By Matthias Arndt


A personal web server: today, almost every Linux user has one. Some folks really do serve content with them; others use them for development of PHP or CGI programs. Others, like me, just have one to read documentation via the browser and to play with. I decided that running the Apache web server is overkill for my personal applications. I currently have access to a CGI- and PHP-capable provider, so I don't need support for these on my own machine -- just plain serving of files, without having to run a huge Apache binary in the background.

As a result, I decided to drop running my own Apache web server in favor of having a simple micro web server that only answers requests when there are any. It saves me some disk space and RAM, although that wasn't really a significant factor since my computer has plenty of capacity. Mostly I wanted to play around with new software and nifty small but usable solutions.

What exactly do I want to do with my web server?

Just a few ordinary things, nothing involving PHP or CGI:

This leads to another important requirement: the web server must support at least some sort of directory indexing. That is, if the final URL component is a directory, redirect to that directory (add the final slash), then serve up the index.html in that directory. (The redirect is important so that relative links on the page will function correctly.) This could also be done with automated scripts run as cron jobs, but I prefer a simple built-in solution. It doesn't have to be as complex as the Apache indexing function, although that one is very nice indeed.

In short: I can use almost any web server that supports the HTTP protocol, but it doesn't need many fancy features.

Do I need extensive configuration?

In fact, no. All of this can be accomplished by symlinking external pieces into the web server's root directory. No need for "Alias" directives or other complicated options. Just the web server root and I'm happy. Perhaps also customizing the port the web server listens on.

But nothing more. A simple command line like this one should be sufficient for my purpose: "binary /path/to/webserver/root".

Standalone server or called via TCP wrapper?

I decided to use a TCP wrapper solution: the web server binary only gets called when there really is a request. No need to mess around with init scripts -- just a simple line in /etc/inetd.conf and off we go.

However, such a solution does not perform very well. If you expect more than a few sporadic accesses to your server, go for a standalone server that runs all the time.


Besides a few really awkward solutions (there are web servers written in Java, bash or awk out there), I decided to go for a compilable solution.

I found a web server called micro_httpd at This one is written in plain C, is just around 150 lines of code, and does exactly what I want: runnable from a TCP wrapper, no CGI or PHP, plain serving of files with indexing capability.

I just added a few more MIME types in the code and it worked out of the box.

Grab the sources of micro_httpd and unpack them.

  1. tar xvzf micro_httpd.tar.gz
  2. cd micro_httpd
  3. rework the source file if needed; tweak the #define directives to suit your needs
  4. make
  5. su -c "make install"
And now you should have a binary called micro_httpd in /usr/local/sbin/.

Become root and edit /etc/inetd.conf with your favorite editor. Add a line

http    stream  tcp     nowait  wwwrun  /usr/sbin/tcpd  /usr/local/sbin/micro_httpd /var/httpd/wwwroot/
to it and restart the Internet super-server inetd.

On my SuSE 7.2 Linux, I type "/etc/init.d/inetd restart" as root.

Make sure to substitute "/var/httpd/wwwroot/" in the example above with the correct path to your new document root.

Substitute wwwrun with any valid user account -- preferably one that has almost no rights on the system, for security reasons.

Now try it out: put a few HTML files in your new WWW root and make them readable by the specified user account. Then point your favorite browser to http://localhost/. You should get either an automated index or your index.html file.

Got this far? Great, your small and micro web server is up and running.

Note: The TCP wrapper logs all connections to the server in /var/log/messages. But don't expect a complete Apache-style log from it -- just plain lines like this:

micro_httpd[886]: connect from x.x.x.x (x.x.x.x)
However, with knowledge of the HTTP protocol and the code, it should be possible to code a more advanced logging facility. I leave that one up to you.
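Even without touching the code, a rough tally of who has been connecting can be pulled out of the syslog with standard tools. This is a sketch only; `count_connects` is a made-up helper name, and it assumes "connect from ..." lines like the one shown above:

```shell
# Sketch: tally connections per client host from the TCP wrapper's syslog
# entries ("connect from ..." lines as shown above). count_connects is a
# hypothetical helper; pass it the log file to scan.
count_connects() {
  grep -o 'connect from [^ ]*' "$1" | awk '{print $3}' | sort | uniq -c | sort -rn
}
```

Run it as `count_connects /var/log/messages` (as a user allowed to read the log); the busiest client comes out on top.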

In general, any web server that can be run from inetd can be set up like this one. So look around on Freshmeat.


If your needs are as simple as that, it takes a few minutes to switch from Apache to such a minimalistic solution.

It works pretty well, although I'm aware that this solution will fail if there are too many requests. For a simple personal web server without heavy traffic, it should be sufficient.

At least I'm a bit happier now. Decide for yourself -- perhaps such a solution would suit your needs as well?

[There's also Tux, a micro web server in a Linux kernel module. It works similarly to micro_httpd, and can chain to a bulkier web server for URLs it can't handle (e.g., CGI scripts). But note that Tux and micro_httpd serve different niches. Tux is for high-traffic sites that serve lots of simple files (e.g., images) and must keep per-request overhead low to avoid overloading the system. micro_httpd via inetd is for sites with light web traffic, where the greater overhead of running a separate process for each request is overshadowed by there being no overhead at all when there are no requests. Of course, both micro_httpd and Tux serve a third niche: nifty small usable solutions you can play with. Or as LG contributing editor Dan Wilder would say, "small sharp tools that each do one thing well in the honorable UNIX toolbox tradition."

For more information about Tux, see Red Hat's Tux 2.1 manual. I thought Tux was in the standard kernel but I can't find it in 2.4.17, so you'll have to look around for it. -Iron.]

Matthias Arndt

I'm a Linux enthusiast from northern Germany. I like plain old fifties rock'n'roll music, writing stories and publishing in the Linux Gazette, of course. Currently I'm studying computer science in conjunction with economics.

Copyright © 2002, Matthias Arndt.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Fil & Lil

By ESC Technologies






Who are Fil & Lil Tux?
They are evolving characters! Fil Tux is a Linux Zealot trying to indoctrinate Lil Tux.

When did Fil & Lil Tux begin?
Concept was created December 18, 2001. The first cartoon appeared on December 26, 2001.

Would you like to use Fil & Lil on your website? Go for it! All we ask is that you link back to their home page:

ESC's Lora Heiny writes:

The first cartoon is a play on the marketing of operating systems and how users react. Windows 95/98 is archaic and its users are in the dark about OS options. Windows XP is primary and attracts basic users. Linux users see things as black and white, such as, "there should be Linux on every desktop".

Here's a little brief on the characters in the cartoon:


Fil:
Age: 39
Eats: Fish & chips
Favorite TV show: Hawaii Five-0
Favorite cartoon: Batman
Favorite comedian: Groucho Marx
Favorite Marx quote: "You know I could just rent you out as a decoy for duck hunters?"
Lil:
Age: 29
Eats: bird seed
Pets: AJ & Gracie (fictitious dogs on
Lil wants to know why people use Linux and what Linux is all about.
Lil's quote: We're not cartoonists. We were just sitting around the table,
Fil started making jokes, and I started writing them down.

Now to more serious stuff:

Fil and Lil are combinations of people we know, customers, distributors, and manufacturers. ESC Technologies operates computer information websites, in addition to being a system builder and component supplier. We like Linux and thought the community needed a chuckle or two.

Layne Heiny, VP ESC Technologies R&D, comes up with most of the jokes and draws Fil & Lil. Loren Heiny, Founder, also comes up with jokes and funny scenarios.

Lora Heiny, General partner ESC Technologies, draws the background and layout for the cartoon. I edit and delete the REALLY bad jokes.

Copyright © 2002, ESC Technologies.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Installing Software from Source
What Do I Do with this file.tar.gz Thing?

By Ben Okopnik

The other day, I decided to download "cuyo" (see Mike Orr's review in this issue), a new game that had been mentioned on the Answer Gang admin list. When I went to the website, however, I found only a source tarball instead of a package - even though the e-mail had mentioned an available Debian archive. No big deal, I thought - I've done this before...

[The cuyo .deb is in the Debian Unstable distribution. But this article applies to any program you want to install that's either not in your distribution, or where the distribution's version is old or inadequate. -Iron.]
For those who don't know, a tarball is a "tar"red and usually "gzip"ped list of source files that can be compiled to produce a program; the filename of a tarball is usually either "progfile-1.23.tgz" or "progfile-1.23.tar.gz", with "progfile" being the name of the program and "1.23" (obviously, the numbers can be almost anything) standing for the version. When you install a package - whether RPM, DEB, or whatever your distro uses - you're simply placing the libraries, documentation, and the precompiled binary or binaries in the directories where they belong. Compiling the source is the part that normally gets done for you by the package maintainer.

After downloading the tarball to my "/home/ben/TGZs" subdirectory, one I'd created specifically for storing downloaded tarballs, I put a copy of it in "/tmp", where I would be compiling the sources. Note that some folks prefer to do this in "~/tmp", a directory under their home; the reasoning is that "/tmp" usually gets wiped on bootup, and a compile that goes REALLY wrong could lock your machine... which would require a reboot (oops!). I can't fault their thinking, but I continue to be the dangerous daredevil that I am - I trust my Linux. :)

The file was called "cuyo-1.03.tar.gz" - so, the appropriate bit of magic which turns it back into useful files is

tar xvzf cuyo-1.03.tar.gz

This created a directory called "cuyo-1.03" right there in "/tmp".

(OK, so that's not exactly how I did it; I looked inside the tarball with Midnight Commander, opened "/tmp" in my second pane, and hit "F5" to copy out the compressed directory. I'm spelling it out here for those folks who want to or have to do it manually.)

Note that some program authors are not that "polite" about making up their tarballs: sometimes, untarring one dumps the entire list of files in the current directory. What a mess, especially if it's in your home directory! Several dozen files intermixed with yours; a bunch of directories, too - and it gets much worse if some of these have the same name as your files (not that yours will be overwritten, but it's still a mess) or your directories (stuff will get dumped into them which you would then have to fish out.) How rude! This is why I like to peek into tarballs and copy, instead of just wholesale untarring. For those who don't use Midnight Commander or another file manager that's capable of looking inside a tarball, just do

tar tvf <filename>

This will show you the contents of it - and if everything isn't prefixed with a directory name, beware! Well, not really: all you have to do is create a directory (if you make it the same as the tarball "progname", you won't lose track of what it is, later) and untar the file inside it.

mkdir rudeprogram-6.66
tar xvf rudeprogram-6.66.tgz -C rudeprogram-6.66

Now, all of the files from the "rudeprogram" tarball will be extracted to the specified subdirectory.
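The peek-then-extract dance above can be wrapped in a small helper. This is a sketch of the idea only - `safe_untar` is a hypothetical name, not part of any standard toolkit - and it assumes a gzipped tarball:

```shell
# Sketch: extract a tarball, containing it in its own directory if the
# author "rudely" packed loose files at the top level. safe_untar is a
# hypothetical helper name.
safe_untar() {
  tarball=$1
  # count distinct top-level entries in the archive listing
  tops=$(tar tzf "$tarball" | cut -d/ -f1 | sort -u | wc -l)
  if [ "$tops" -eq 1 ]; then
    tar xzf "$tarball"                      # polite: has its own directory
  else
    dir=$(basename "$tarball" .tar.gz)      # rude: make a directory for it
    dir=$(basename "$dir" .tgz)
    mkdir -p "$dir" && tar xzf "$tarball" -C "$dir"
  fi
}
```

Either way, the extracted files end up neatly under one directory named after the tarball.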

Fortunately, the author of "cuyo" is a polite fellow (as most authors are), and "cuyo" was tarred up in a subdir of its own. Inside it, there was the list of files, including the ones that you should always check out prior to starting operations: "README" and "INSTALL". The first is usually the author's instructions, recommendations, etc. The second is fairly standard - it's a file that explains the operation of "configure", an extremely cool program usually created by "autoconf" that will check out your system and correctly (well, usually) set up the Makefile, which is what we need to compile the program. The huge advantage of this is that, if the author was careful in writing his program, "configure" will create the correct Makefile on any version of Unix - and perhaps even other OSs.

Allow me to digress for a moment: some programs are so simple that they do not require "configure", and simply come with a Makefile (these may be capitalized or all lower-case). Others are simpler yet - all you see is a single "progfile.c", or "". With these, compilation consists of simply running "make" in the first case, or "cc progfile.c -o progfile" in the second.

Anyway - I ran "configure" in the "cuyo" subdir; it chewed on my system for a while, which is its job, and built me a Makefile. Wasn't that nice of it? :) There was a bit of a problem, though: "configure" prints out messages as it runs, and warns you in case of failures (usually by stopping and printing an error.) The message that it gave me - without stopping, however - was

checking the Qt meta object compiler... (cached) failure 
configure: warning: Your Qt installation seems to be broken!

Hmm. Well, it built the makefile, anyway. Usually, the non-fatal errors just mean that you won't get some of the features of the program, but it will still compile. Well, let's try it.

I then ran "make" - just by typing "make" on the command line - which by default reads the "Makefile" (or "makefile") and follows the commands specified in the target "all", and, ...

Ooops. It failed.

It was at this point that I decided to write this article. I hadn't been thinking of doing that; I actually had lots of work to do this month - but I believe that installing from tarballs is a skill that is necessary for any Linux user, and my thought here was to document the process, including troubleshooting installations that go wrong. It's something I struggled with in my early Linux days, and I'd like to save others at least a bit of that pain. :)

So. We go bravely on. When I say that it failed, exactly what did I see? Well, a "make" should run without errors. Sometimes - often - you'll get warnings, which are not the same thing; your libraries may be slightly different, or perhaps your compiler is a bit more strict about declarations - but these are usually not fatal. The errors that drop you out of a compile without finishing it - those are the ones that we have to fix. So, here's what it all looked like:

Baldur:/tmp/cuyo-1.03$ make
make all-recursive 
make[1]: Entering directory `/tmp/cuyo-1.03'
Making all in src 
make[2]: Entering directory `/tmp/cuyo-1.03/src' 
c++ -DHAVE_CONFIG_H -I. -I. -I.. -DPKGDATADIR=\"/usr/local/share/cuyo\" 
	-Wall -ansi -pedantic -c bildchenptr.cpp
In file included from bildchenptr.h:21, 
	from bildchenptr.cpp:18: 
inkompatibel.h:13: qglobal.h: No such file or directory
make[2]: *** [bildchenptr.o] Error 1 
make[2]: Leaving directory `/tmp/cuyo-1.03/src'
make[1]: *** [all-recursive] Error 1 
make[1]: Leaving directory `/tmp/cuyo-1.03'
make: *** [all-recursive-am] Error 2 

The error begins at the line that starts with "In file included...", and ends with (at least the part we want) "...qglobal.h: No such file or directory". OK - we're missing a header file. I took a quick look through the source tree of "cuyo", just to make sure that the author didn't forget to include one of his own files (yeah, it happens) - nope. Must be one of mine - that is, his program must be looking for a file that comes with a library which I need to have on my system for his program to compile. Hmm. Which one? Whichever one contains "qglobal.h", of course.

On my system, I have set up several scripts to help me with standard installation tasks. One of these is "pkgf" - it finds whatever file I'm looking for in the entire Debian distro, and tells me in what package that file exists (this is not the same as "dpkg -S <file>", which does that for installed packages only.) If you use Debian, you can get the same functionality by downloading the current Contents-<arch>.gz file from <> and "zgrep"ping through it for the name of the file - or, just go to <> and use their search utility. The point is to find which package contains "qglobal.h" and install it.

pkgf "qglobal.h" 
usr/include/qt/qglobal.h 	devel/libqt-dev 
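
Without such a script, the lookup can be sketched with zgrep over the per-architecture Contents file from a Debian mirror. The file name below is an assumption (your architecture and download location may differ):

```shell
# Sketch: find which Debian package ships qglobal.h by searching a
# downloaded Contents file. Contents-i386.gz is assumed here; fetch the
# one matching your architecture from your mirror first.
zgrep '/qglobal\.h' Contents-i386.gz
```

Each matching line pairs a file path with the section/package that provides it.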

Well, well - it looks like I have a choice of packages. OK, "libqt3-dev" looks like the latest thing:

apt-get install libqt3-dev

The installation went fairly quickly, and... I got the same error when I re-ran "make". And so would you. So, don't do that. The thing to remember here (and I knew that I would get the error - I did this to make a point) is that you already ran "./configure": the old (broken) values are still in the Makefile, as well as in several other files. So, rather than wasting time trying to find out where they may be:

ben@Baldur:/tmp/cuyo-1.03$ cd ..
ben@Baldur:/tmp$ rm -rf cuyo-1.03 
ben@Baldur:/tmp$ tar xvzf ~/TGZs/cuyo-1.03.tar.gz -C . 
ben@Baldur:/tmp$ cd cuyo-1.03

In other words, I just blew away the entire "cuyo" directory and replaced it with a fresh copy of the source. This is a good rule of thumb in general: when in doubt, go back to the original sources. Believe it or not, I learned that trick from a boat mechanic who did extraordinarily good work. The way Kenny phrased it was "whack it back to the stuff that you know is good, then build it up from there." I've never seen his advice go wrong; admittedly, clients tend to scream when you tell them that you have to throw away the piece of garbage software that they have right now and replace from the ground up... but after a while, the word spreads: "Hey, this guy's work is good." You might lose some jobs that way - I know I do - but, like Kenny, I'm not willing to have my name on a piece of garbage.

I know, I know - I'm talking about things that are more generalized than just a plain old tarball install. The thing is, the philosophy of how you do things has to come from somewhere - and it's best if you figure out how you're going to do things before you actually do them, overall methodology as well as job specifics. OK, so, back to the main question - did it work or not???

ben@Baldur:/tmp/cuyo-1.03$ ./configure
<No errors>
ben@Baldur:/tmp/cuyo-1.03$ make
<lots of output elided> 
make[2]: Leaving directory `/tmp/cuyo-1.03/src'
Making all in data 
make[2]: Entering directory `/tmp/cuyo-1.03/data'
make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/tmp/cuyo-1.03/data'
Making all in docs 
make[2]: Entering directory `/tmp/cuyo-1.03/docs'
make[2]: Nothing to be done for `all'. 
make[2]: Leaving directory `/tmp/cuyo-1.03/docs' 
make[2]: Entering directory `/tmp/cuyo-1.03' 
make[2]: Nothing to be done for `all-am'. 
make[2]: Leaving directory `/tmp/cuyo-1.03' 
make[1]: Leaving directory `/tmp/cuyo-1.03' 

Ta-daaa!!! No errors - and when I enter the "cuyo-1.03/src" directory, there's a very nice-looking executable called "cuyo" sitting in there. At this point, if I wanted to continue the installation (rather than just testing the game to see if I like it), I would type

make install

This would read the Makefile and execute all the commands under the "install" target, which would most likely install the executable[s], the man pages, and the documentation. However, I tend to play with the program first, and see if I like it - most tarball makefiles do not include an "uninstall" target (which I think is a shame; one would make tarball packages almost as easy to install and remove as, say, RPMs or DEBs.)

To recap the entire tarball install:

1) Check if it contains a directory or just (how rude!) scattered files
2) Untar into a directory under "/tmp" or "~/tmp"
3) Run "configure" if it exists.
4) Run "make", or "cc" if it's just a plain single "file.c" or ""
5) Run "make install" if the result is what you wanted.
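Under the assumption of a polite tarball and the standard configure/make sequence, the recap can be sketched as a single helper. `build_from_tarball` and its arguments are hypothetical names for illustration, not anything standard:

```shell
# Sketch: steps 1-4 of the recap in one hypothetical function. Step 5
# (make install) is left for after you've tried the result.
build_from_tarball() {
  tarball=$1
  dest=${2:-/tmp}                          # where to unpack; /tmp by default
  tar tzf "$tarball" | head                # 1: peek inside first
  mkdir -p "$dest"
  tar xzf "$tarball" -C "$dest"            # 2: untar under $dest
  dir=$dest/$(basename "$tarball" .tar.gz)
  cd "$dir" || return 1
  if [ -x ./configure ]; then ./configure; fi   # 3: run configure if present
  make                                     # 4: compile
}
```

After a successful run, `su -c "make install"` in the build directory completes step 5, if the result is what you wanted.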

That's pretty much it. Note that I did not discuss security anywhere in here (do you really trust the author of this tarball or package? You're not logged in as root while playing with that binary, right?), nor many of the other topics that pertain to system administration; these are very important and highly pertinent, but outside the scope of this short article. The wise system administrator - and that, my dear home Linux user, is you; there isn't anyone else for your machine! - will read much, think deeply, and consider wisely.

Good luck, and may all your dependencies end up being resolved ones. :)

Ben Okopnik

A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building networks and hacking on hardware and software whenever he runs out of cruising money. He's been playing and working with computers since the Elder Days (anybody remember the Elf II?), and isn't about to stop any time soon.

Copyright © 2002, Ben Okopnik.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

The Foolish Things We Do With Our Computers

By Mike "Iron" Orr

The submissions to this column have slowed down. There's only one this month.

Video Memory

By John Joe

I read "The Foolish Things We Do To Our Computers" and I have a story of my own.

I have a Trident 9680 display card, bought in 1996. Recently, random parts of the screen were blurred in both M$ Windows and Debian/Linux. When I did a screen capture, some pixels' values were wrong; this made me think the monitor was OK. After a refresh, the blurred areas might be clear, or they might not. Finally I decided to buy a new old card, a fake S3, and the screen was OK. The S3 card has 1M of memory, so I tried to add 1M from the 9680 card. I used a screwdriver to pull the memory off the 9680 and failed - I'd never added display memory before, and I feared I might destroy it. The fake S3 gave XFree86 difficulty when probed, so I plugged the 9680 back in. This time the screen was OK!

I guess that when the screwdriver touched the display memory on the 9680, the static charge on it was discharged.

Mike Orr

Mike ("Iron") is the Editor of Linux Gazette. You can read what he has to say in the Back Page column in this issue. He has been a Linux enthusiast since 1991 and a Debian user since 1995. He is SSC's web technical coordinator, which means he gets to write a lot of Python scripts. Non-computer interests include Ska/Oi! music and the international language Esperanto. The nickname Iron was given to him in college--short for Iron Orr, hahaha.

Copyright © 2002, Mike "Iron" Orr.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

LG's Funniest Moments, part 1

By Mike "Iron" Orr

To begin the new year, I'm starting a new series "LG's Funniest Moments". It's a look back at the most hilarious quotes and images from the back issues, as well as a kind of rough timeline of LG milestones. (Maybe next year we'll make a real timeline....) This month, we'll look at issues #1 - 11, July 1995 - September 1996.

John Fisk -- LG's illustrious founder, editor of issues #1 - 8, originator of the Weekend Mechanic column, and all-around swell guy -- introduced the two LG slogans/goals/mission statements all the way back in issue #8 or earlier: Making Linux just a little bit more fun, and Sharing ideas and discoveries.

In issue #1, John explained "So what is the Linux Gazette?" He answered, "Primarily writings, ramblings, and other stuff... and then playing around with things until they either worked or broke (fortunately, mostly the former :-), Linux finally began to make sense to me. If you're in the same boat... keep paddling!"

Issue #2 showed the first hint of the LG FTP files, although it took several issues to find a host for the archive. Also, commenting on the formats LG would be available in, John remarks, "And if ANYONE broaches the subject of a PostScript file... No."

Typical of John's writing style is this quote:

Whereas UN*X barely belches when you innocently type in
	cd / ; rm -rf *
just because your smart aleck roommate told you it'd help clean out some unnecessary files.

Issue #3 saw the inauguration of the first LG FAQ, the first Mailbag and the first 2-Cent Tips. A few quotes:

Issue #5 says LG is now freely available for mirroring. It also gives an indication of how much reader participation was already going into building the Gazette: "Well, I've had so much mail recently, and so many great suggestions and such that this month's LG is "dedicated" to those of you that have written."

Issue #9 was the issue that John Fisk turned LG over to SSC, and Marjorie Richardson became the editor, a position she would hold for almost three years. (I'm LG's third editor.) During that time, she gave herself successive promotions, from mere Editor to Overseer, and finally to Ruler of the Gazette. (By the way, now is the time to give SSC a bit of recognition. People wondered whether LG would remain free. Five years later, it still is.)

Issue #9 also shows the inauguration of the TWDT files. John Fisk published LG with the entire issue in one HTML file. Margie found it more convenient to put each article in a separate file. However, LG's most-requested feature immediately became a return to the one-file format. Margie finally threw up her hands and said, "OK, here's The Whole Damn Thing", and TWDT became a parallel file in each issue. Nowadays, we don't say "damn" because of a controversy that erupted a couple years later, but you'll have to wait till a future "LG's Funniest Moments" to read about that. So we euphemize it to "TWDT" and "the all-in-one file".

Margie's first issue also introduced much of the artwork and formatting styles that LG is still using. However, the logo was different. It looked like this: [old LG logo]

Issue #11, October 1996, has nice binder rings on the left side of the title page, some of the common artwork icons we occasionally recycle (the penguin reading the newspaper, the Weekend Mechanic looking under the hood of his car), John Fisk's first Weekend Mechanic column and Michael Hammel's first Graphics Muse column. #11 also started what has become a Halloween tradition: changing the slogan from "Making Linux just a little more fun!" to "Making Linux just a little less scary!", with a jack-o'lantern image.

Not to be missed are Rick Bronson's thumbs-up signature, and John R Potter's 2-Cent Tip that begins, " I thought you might be interested in my favorite vi trick, which is not a vi trick at all."

Next month, I'll look at the next ten issues or so. See you then.

Mike Orr

Mike ("Iron") is the Editor of Linux Gazette. You can read what he has to say in the Back Page column in this issue. He has been a Linux enthusiast since 1991 and a Debian user since 1995. He is SSC's web technical coordinator, which means he gets to write a lot of Python scripts. Non-computer interests include Ska/Oi! music and the international language Esperanto. The nickname Iron was given to him in college--short for Iron Orr, hahaha.

Copyright © 2002, Mike "Iron" Orr.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

The Cute Game 'Cuyo'

By Mike "Iron" Orr

A new game appeared in Debian this month, and it's so cute I have to write about it. Here's the Debian description:

Description: Tetris-like game with very impressive effects. Cuyo, named after a Spanish possessive pronoun, shares with Tetris that things fall down and how to navigate them. When enough "of the same type" come "together", they explode. The goal of each level is to blow special "stones" away, you start with. But what "of the same type" and "together" means, varies with the levels. If you hear someone shout that a dragon is always burning his elephants, so that he is not able to blow the volcano away, there a good chances to find Cuyo on his screen. WARNING: It is known to successfully get many people away from more important things to do.

Level 1 screenshots


The object is to join like colors together. They don't all have to be in a straight line, they just have to be next to each other.

When you get enough like colors joined, they disappear with a "poof"!

You also get these gray little Pac-Man ghosties, of unknown purpose.

When one side gets the bottom row taken out, the bottom row slides over from the other side. In the previous panel, the grass has already slid right, which is why there are two layers of grass. Now in the next panel, the second row of grass has slid up, and a row of ghosties is sliding right, to go underneath it.

Level 2 screenshots

The same principles apply here but the theme is different.
[screenshot] [screenshot] [screenshot]

Other comments

Get the package from Debian Unstable or download the source from the original site: Poor Ben tried to install from source and found he was missing some Qt library files, as you can read about in his article.

Ben wasn't happy about the fact that the window isn't resizable. This caused him problems when an 800x600 window took over his 800x600 screen and he couldn't reach for his taskbar. To me, the problem is insignificant since my screen is 1152x864.

I hope the next version has even more cool themes!

Mike Orr

Mike ("Iron") is the Editor of Linux Gazette. You can read what he has to say in the Back Page column in this issue. He has been a Linux enthusiast since 1991 and a Debian user since 1995. He is SSC's web technical coordinator, which means he gets to write a lot of Python scripts. Non-computer interests include Ska/Oi! music and the international language Esperanto. The nickname Iron was given to him in college--short for Iron Orr, hahaha.

Copyright © 2002, Mike "Iron" Orr.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"


By Jon "Sir Flakey" Harsem


Jon "SirFlakey" Harsem

Jon is the creator of the Qubism cartoon strip and current Editor-in-Chief of the CORE News Site. Somewhere along the early stages of his life he picked up a pencil and started drawing on the wallpaper. Now his cartoons appear 5 days a week on-line, go figure. He confesses to owning a Mac but swears it is for "personal use".

Copyright © 2002, Jon "Sir Flakey" Harsem.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Writing Documentation, Part II: LaTeX with latex2html

By Christoph Spiel


Let me first define what LaTeX is and what its primary goals are. LaTeX is a huge add-on macro package for the TeX typesetting system developed by Prof. Donald E. Knuth. If we are not overly picky, we mean ``TeX plus all LaTeX macros'' when we say ``LaTeX system'' or just ``LaTeX''. LaTeX itself was written by Leslie Lamport, who found TeX to be very powerful, but too difficult for everyday use. Therefore he modeled LaTeX after the Scribe system. Scribe puts its emphasis on the logical structure of a document instead of the physical markup. (For readers proficient in HTML: tag <em> is an example of logical markup, and tag <i> is the corresponding physical markup.)

LaTeX -- like plain TeX -- allows a normal computer user to typeset documents with production-ready quality. The intent is that a LaTeX author prepares articles or even books on her local computer, then walks over to the printer shop with a diskette to have the document printed on a high-resolution phototypesetter, and finally has it bound as a book (... ships the book off to all bookstores in the alpha-quadrant, makes millions from it, and two years later wins the Intergalactic Pulitzer Prize. -- OK, this is a bit of a stretch).

In the next sections I will give a very brief introduction to LaTeX, but I would like to recommend the Not So Short Introduction to LaTeX to everyone who wants to learn LaTeX. The 95-page document is available for free on the Net. Please see ``Further Reading'' for details.

LaTeX is installed by most current Linux distributions. You can check whether it is available on your machine by running

    latex --version

at the command line. My system responds with

    TeX (Web2C 7.3.1) 3.14159
    kpathsea version 3.3.1
    Copyright (C) 1999 D.E. Knuth.
    Kpathsea is copyright (C) 1999 Free Software Foundation, Inc.
    There is NO warranty.  Redistribution of this software is
    covered by the terms of both the TeX copyright and
    the GNU General Public License.
    For more information about these matters, see the files
    named COPYING and the TeX source.
    Primary author of TeX: D.E. Knuth.
    Kpathsea written by Karl Berry and others.

Overall Document Structure

Here is an example of a very short, yet complete LaTeX document:

    \documentclass{article}            % preamble
    \pagestyle{empty}

    \begin{document}                   % body
    Here comes the text.
    \end{document}

Every LaTeX document consists of a preamble and a body. The preamble reaches from the definition of the document's class, \documentclass[options]{class}, up to, but excluding \begin{document}. The body is everything from \begin{document} to \end{document}.

The preamble in the example features only one command, \pagestyle{empty}, which instructs LaTeX to omit all page decorations such as running heads or page numbers. The percent signs introduce comments that extend to the ends of the respective lines.


Paragraphs are separated by one or more blank lines. The number of blank lines does not influence the output; one is as good as many. The same holds true for spaces (which separate words, but didn't you know that?): one hundred spaces produce the same output as a single space. Newlines, that is, line terminators, count as spaces, and so do tab characters.

If we apply these simple rules to the three different versions of two paragraphs that follow, we conclude that they all will be typeset the same. I have added line numbers at the beginning of each line to point out empty lines, which separate the paragraphs. The numbers are not part of the text.

Version A
    1    I am a short sentence in the first paragraph.
    2
    3    I'm the only sentence in the second paragraph.
Version B
    1    I am a short sentence
    2    in the first paragraph.
    3
    4    I'm the
    5    only sentence
    6    in the second
    7    paragraph.
Version C
    1    I   am   a   short    sentence   in   the  first paragraph.
    2
    3
    4    I'm the only sentence
    5        in the
    6            second paragraph.
Special Characters
Most non-alphanumeric characters carry a special meaning inside LaTeX. This is one of the features that appall LaTeX beginners. However, after some time, the user becomes alert to their particular behavior.

I have collected the few most important special characters along with the ways to insert them literally into a text.

``\'' (backslash)

Introduce a command, like ``\dots'' or ``\/''.

Note that ``\\'' does not insert a single backslash character into the text, as many C programmers might assume right now. The control sequence ``\\'' inserts a line break, whereas a literal backslash is produced by ``$\backslash$''. To maximize the confusion, ``\ ''--that is, a backslash followed by a blank space--is a command, too! It inserts a so-called control space, a space (more precisely: exactly one space) that is never eaten up like ordinary spaces as explained in section ``Paragraphs''.

``{'' and ``}'' (curly braces)

Group arguments together.

You get literal curly braces by quoting them with a backslash like this ``\{'' and ``\}''.

``%'' (percent sign)

Start a comment that reaches to the end of the line.

Comments extend up to and include the newline character at the end of a line. Thus LaTeX comments differ from one-line comments in most programming languages, which exclude the newline character. For the user this means he can mask a newline by ending a line with a comment.

    Hessenberg-Triangular % <- note space directly in front of the %-sign
    Reduction

is equivalent to

    Hessenberg-Triangular Reduction

To typeset a literal percent sign, use ``\%''.

``~'' (tilde)

Make an unbreakable space, like ``&nbsp;'' in HTML.
``$'' (dollar sign)

Switch to math mode and back.

A sequence like ``$math$'' is typeset inline in mathematical typesetting mode. To get a literal dollar sign, use ``\$''.

The following table summarizes all ASCII characters that are treated specially by LaTeX. The rightmost column of the table suggests one or more possible equivalent sequences to get the plain ASCII character into the text. As can be guessed from the entries for caret and twiddle, \charcode_number inserts the ASCII character with the decimal index code_number into a document.

ASCII characters that are special for LaTeX. The right column denotes the strings (in LaTeX) which produce the ASCII characters in the middle column.
sharp # \#
dollar $ \$
percent % \%
ampersand & \&
multiplication sign * * or $*$
minus sign - $-$
less-than sign < $<$
greater-than sign > $>$
backslash \ $\backslash$
caret ^ \char94
underscore _ \_
curly braces {, } \{, \}
vertical bar | $|$
twiddle ~ \char126
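As a small sketch combining several entries from the table, a line that needs many of these characters literally could be written as:

```latex
We pay \$10 (50\% off) for item \#3 \& friends in group \{A\}.
```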
LaTeX commands usually start with a backslash character ``\'' and either extend from the backslash to the next non-letter character (kind 1) or consist of exactly one non-alphanumeric character (kind 2). So ``\raggedleft'' and ``\makebox'' are commands of kind 1 whereas ``\\'' and ``\"'' are commands of kind 2. Arguments are passed to commands within curly braces ``{'', ``}''. Empty arguments can be omitted.


    \raggedleft{}                      % no argument
    \raggedleft                        % same as above
    \makebox{Text inside of a box.}    % single argument
    \parbox{160pt}{This text is
    typeset inside of a box.}          % two arguments

The number of arguments passed to a command is fixed. However, some commands accept optional parameters. These are passed within square brackets (``['', ``]'') and usually precede the arguments just as the options precede the arguments in most UN*X utility programs.


    \parbox[t]{10cm}{I am a top-aligned
    paragraph.} % one option, two arguments

Here t is the optional parameter.

Spaces that follow a type 1 command name without arguments (like the second ``\raggedleft'' above) are ``eaten''; they are not passed on to the output.

Environments are pairs in the form

    \begin{environment-name}
    Text within the environment.
    \end{environment-name}


An environment changes the appearance of the text within it. Environments control the alignment, the width of the margins and many other things. Some predefined environments are: center, description, enumerate, flushleft, flushright, itemize, list, minipage, quotation, quote, tabbing, table, tabular, verbatim, and verse.

Environments do nest. For example, to get a quotation typeset flush against the right margin, combine the flushright environment and the quotation environment:

    \begin{flushright}
        \begin{quotation}
            Letters are things,     \\
            not pictures of things. \\
            -- Eric Gill
        \end{quotation}
    \end{flushright}

An environment only affects text inside of it; it encapsulates all changes, like a different indentation occurring within the environment. (Well -- unless you happen to change a global variable, but I won't tell you how to do that, so you're safe.)


LaTeX knows three or four heading levels, depending on the document class. Class article has three section levels, whereas classes book and report feature chapters as a fourth and topmost heading level.

    \chapter{heading}          % only for classes book and report
    \section{heading}
    \subsection{heading}
    \subsubsection{heading}

Note that as in POD, discussed in Part I, sectioning commands act as separators. They do not group together text with a start marker and an end marker, but their mere appearance groups the text. This will be different in DocBook, as I shall show in next month's article.


LaTeX ships with three kinds of list-generating environments: itemize, enumerate, and description.

They correspond to unnumbered lists, numbered lists, and definition lists in HTML, or =item *, =item 1, =item term lists in POD.

The items themselves are introduced with ``\item''. An item can consist of more than one paragraph.

For description lists the optional parameter given to ``\item'' as in ``\item[term]'' specifies the term. The text following ``\item[term]'' is term's definition.


Itemized List
    What emacs can do for you:
    \begin{itemize}
        \item Cut and paste blocks of text
        \item Fill or justify paragraphs
        \item Spell check documents
    \end{itemize}
Enumerated List
    Starting emacs for the first time
    \begin{enumerate}
        \item Start emacs from the command line:
        \texttt{\$ emacs}
        emacs will show you its startup screen and soon switch to a
        buffer called \texttt{*scratch*}.
        \item Hold down the Control~key and press~H.  You see a prompt
        at the bottom of the screen (or emacs window).
        \texttt{C-h (Type ? for further options)-}
        \item Press~T to start the emacs tutorial.
    \end{enumerate}
Description List
    Some emacs commands:
    \begin{description}
        \item[C-x C-c] Quit emacs.
        \item[C-x f] Open a file.
        \item[C-x r k]
            Kill rectangle defined by mark and point, that is, by the
            active region.
    \end{description}


All cross references need two parts: a pointer (the link) and a pointee (the anchor). Anchors in LaTeX are inserted with \label{anchor-name}. Every anchor is located in a particular section and on a particular page. These two pieces of information are retrieved with \ref{anchor-name} and \pageref{anchor-name} at any place in the document.

Example use of \ref:

    As has been pointed out in section~\ref{section:setup} `Setup', ...

Example use of \pageref:

    The steel used in the sample chamber is alloyed with Ti (0.5\%),
    Cr (0.1\%), and Mn (0.1\%).\label{definition:chamber-alloy}
    The sample chamber is made of stainless steel (see
    page~\pageref{definition:chamber-alloy} for the exact
    metallurgical composition), ...

Defining Your Own Commands and Environments

One of the major advantages of the LaTeX typesetting system is to allow the user to define her own commands and environments. Say you want to mark up all replaceable parameters in the description of a UN*X utility, like in

    cd directory

to be rendered as, for example,

cd directory

Here, cd is the utility's name, and directory is the replaceable parameter.

Often utility names are typeset in bold face, and replaceable parameters in italics. Thus, a good solution would be to write

    \utilityname{cd} \replaceable{directory}

where \utilityname and \replaceable switch fonts to bold face and italics respectively. With the help of \utilityname and \replaceable we can consistently mark up further command lines:

    \utilityname{pushd} \replaceable{directory}
    \utilityname{ls} \replaceable{directory}

To define a new LaTeX command, use

\newcommand{command-name}[number-of-arguments ]{command-sequence}

where command-name is the new command's name, number-of-arguments is the number of arguments the new command takes (it defaults to 0 if omitted), and command-sequence are the LaTeX commands to execute when command-name is called.

For our example, define \utilityname and \replaceable as:

    \newcommand{\utilityname}[1]{\textbf{#1}}
    \newcommand{\replaceable}[1]{\textit{#1}}
The predefined commands \textbf and \textit switch fonts to text bold face (as opposed to math bold face) and text italic. Arguments are referred to by #digit, where digit takes on values from 1 to 9.
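To illustrate a command with more than one argument (the name \utilitycall is made up for this sketch), we could mark up a utility together with its parameter in one go:

```latex
\newcommand{\utilitycall}[2]{\textbf{#1} \textit{#2}}
```

It would then be called as \utilitycall{cd}{directory}.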

To give you an impression of the usefulness of our newly defined commands, suppose we would like to generate an index entry for each utility that is mentioned in the text. Command \index{term} puts term in the index. We only need to modify the definition of \utilityname to

    \newcommand{\utilityname}[1]{\textbf{#1}\index{#1}}
and are done. (For the curious: index levels are separated with exclamation marks. So, we probably would prefer \index{utility!#1}, as it neatly groups all utilities together. See the documentation of makeindex for details.)

New environments are defined with

\newenvironment{environment-name}[number-of-arguments ]{starting-sequence}{ending-sequence }

the only difference being that \newenvironment requires two command sequences: one to open the environment, starting-sequence, and one to close it, ending-sequence. Continuing the example of a quotation typeset flush against the right margin, we define our own quotation environment:

    \newenvironment{myquotation}% Note: "%" masks newline
    {\begin{flushright}\begin{quotation}}%
    {\end{quotation}\end{flushright}}

which is then used like this:

    \begin{myquotation}
        Letters are things,     \\
        not pictures of things. \\
        -- Eric Gill
    \end{myquotation}

Neither commands nor environments can be defined multiple times with \newcommand or \newenvironment. These commands serve only first-time definition. Redefinitions are done with \renewcommand and \renewenvironment, which take the same arguments as their first-time cousins.

Inline Markup

LaTeX offers an extremely rich set of inline markup. I restrict the discussion to the same inline markup changes I discussed for Perl's plain old documentation format: emphasis and italics, bold face, and typewriter (code) font.

Emphasis and Italics
\textit{argument} -- Typeset argument in text italics.

\emph{argument} -- Emphasize argument. The default configuration switches to and from italics depending on the current font setting. If the current font is upright, \emph uses italics; if the current font is italics, it uses an upright font. This way the emphasized parts of text always stand out.

Why have \textit and \emph at the same time? The commands express different requests. \textit unconditionally demands that the argument be typeset in an italics font. Period. \emph, on the other hand, asks for its argument to be emphasized, however that emphasis may look. The default uses an italics font as explained above, but \emph can be redefined to use a bold font, underlining, or anything else the writer imagines for her preferred method of emphasizing. The command name emph always captures the concept of emphasis and hides the implementation.
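A minimal sketch of such a redefinition, making \emph use bold face instead of italics:

```latex
\renewcommand{\emph}[1]{\textbf{#1}}
```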

Bold Face
\textbf{argument} -- Typeset argument in text bold face.

Based on \textbf, we can define our own logical markup commands, for example:

    \newcommand{\warning}[1]{\textbf{#1}}
Typewriter Font
\texttt{argument} -- Typeset argument in text typewriter font.

As with \textbf, \texttt can be wrapped into user-defined commands:

    \newcommand{\filename}[1]{\texttt{#1}}

LaTeX Tool Chain

LaTeX files usually carry the extension tex. LaTeX translates these tex files into so-called device-independent (dvi) files. dvi files are a binary representation of the source. They can be previewed with dvisvga on the console (given the terminal supports high-resolution graphics) or, for example, with xdvi under the X11 windowing system. Often dvi files are converted to PostScript with the dvips tool. If Portable Document Format is desired, pdflatex transforms tex files into pdf files in a single step.


So far so good. LaTeX makes wonderful-looking PostScript documents, and its pdf sibling does the same but outputs Portable Document Format files. Didn't we say we want HTML, too? Sure we did! But LaTeX cannot help us here; we need another tool: latex2html. This tool transforms a LaTeX source file into a set of html files that are properly linked together according to the source file's structure.

latex2html has a home page where it is available for download; it can also be obtained from one of its many mirrors. To see whether it is installed on your Linux system, try

    latex2html --version

and you should get an answer like

    This is LaTeX2HTML Version 2K.1beta (1.57)
    by Nikos Drakos, Computer Based Learning Unit, University of Leeds.

What do I have to change to make my LaTeX document translatable with latex2html? -- Good news: almost nothing! Just ensure that the packages html and makeidx are referenced in the document's preamble, that is, at least add

    \usepackage{html}
    \usepackage{makeidx}
to it. Now file my_document.tex can be translated to HTML with the call

    latex2html my_document.tex

References Revisited

latex2html takes care of almost all issues that arise when a LaTeX file is translated into a set of html files. However, references to other parts in the document or other documents are conceptually different in printed documentation and HTML. Consider the LaTeX snippet

    In the following, we summarize the findings
    using a cylindrical coordinate system.  See
    page~\pageref{definition:coordinate-system}
    for the definition of the coordinate system.

where LaTeX dutifully replaces \pageref{definition:coordinate-system} with the page number on which \label{definition:coordinate-system}, the anchor of the page reference, occurs. Where is the problem? First, a set of html pages has no rigid notion of a ``page number''. Second, latex2html does replace \pageref{definition:coordinate-system} with a hyperlink to the spot where \label{definition:coordinate-system} is rendered. The link is a dark square for graphical browsers or the marker ``[*]'' for text browsers. But the whole construct looks awkward, almost distracting, and this is not latex2html's fault:

In the following, we will summarize the findings using a cylindrical coordinate system. See page  [*] for the definition of the coordinate system.

Latex2html needs our help! The paragraph containing the reference ought to be rephrased for the on-screen version, for example to:

    In the following, we will summarize the
    findings using a <a>cylindrical coordinate
    system</a>.
where I have indicated the hyperlink with HTML anchor tags. To allow for two different versions depending on the output format, latex2html defines the \hyperref command.

\hyperref[reference-type]{text for html version}{pre-reference text for LaTeX version}{post-reference text for LaTeX version}{label}

The optional parameter reference-type selects the counter the reference refers to:

``ref''
Cross reference to a section number like \ref does. The reference text is the section number (``4'', ``1.5.2'', etc.).
``page'' or ``pageref''
Reference to a page number like \pageref does. The reference text is a page number (``25'', ``xxiii'', etc.).

Rewritten with \hyperref our example looks like this

    In the following, we will summarize the
    findings using a \hyperref[pageref]%
    {cylindrical coordinate system}% for HTML
    {cylindrical coordinate system.  See page~}% for LaTeX
    { for the definition of the coordinate system}% trailing text for LaTeX
    {definition:coordinate-system}.% label the reference refers to

LaTeX renders it to

In the following, we will summarize the findings using a cylindrical coordinate system. See page 97 for the definition of the coordinate system.

and latex2html produces

In the following, we will summarize the findings using a cylindrical coordinate system.

from it.


A problem related to the one we just encountered with references arises when hyperlinks come into play. In the HTML version of the document hyperlinks are essential; in the printed version, they are of little use: compare ``Click here'' with ``Press your pencil against this letter''. Sometimes, however, the author really wants to include the target of the hyperlink, a universal resource locator (URL), in the printed text. latex2html defines two commands that cater to exactly these needs.

\htmladdnormallink{link text}{universal resource locator}

\htmladdnormallinkfoot{link text}{universal resource locator}

Both commands generate the hyperlink <a href = "universal resource locator">link text</a> in the HTML version. The first only renders link text in the LaTeX version, suppressing universal resource locator completely. The second adds a footnote containing universal resource locator. The typical usage of these commands is

The text of this article can be downloaded from our \htmladdnormallink{web site}{}.


The text of this article can be downloaded from our \htmladdnormallinkfoot{web site}{}.

where the LaTeX result of the first looks like this

The text of this article can be downloaded from our web site.

For the second, web site gets a footnote marker, and a footnote with the URL is placed at the bottom of the page. In the HTML output, both show up as

The text of this article can be downloaded from our web site.

Format Specific Commands

As a last resort, several commands and environments enable the writer to divert her text between the LaTeX and HTML versions of the document, among them the environments latexonly and htmlonly.

I recommend using output diversion only if no more specialized latex2html command or environment can produce the desired markup, since splitting always requires keeping both branches in sync.

latex2html Pros and Cons

Pros:
  • Completely configurable through user-defined LaTeX commands and environments
  • Extremely high-quality printed output
  • Handles tables and graphics (not shown in this article)

Cons:
  • ``Impedance mismatch'' between LaTeX and HTML not completely compensated by latex2html
  • Flat learning curve of LaTeX

Further Reading

Next month: DocBook

Christoph Spiel

Chris runs an Open Source Software consulting company in Upper Bavaria, Germany. Despite being trained as a physicist -- he holds a PhD in physics from Munich University of Technology -- his main interests revolve around numerics, heterogeneous programming environments, and software engineering. He can be reached at

Copyright © 2002, Christoph Spiel.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Linux Socket Programming In C++

By Rob Tougher


1. Introduction
2. Overview of Client-Server Communications
3. Implementing a Simple Server and Client
3.1 Server - establishing a listening socket
3.2 Client - connecting to the server
3.3 Server - Accepting the client's connection attempt
3.4 Client and Server - sending and receiving data
4. Compiling and Testing Our Client and Server
4.1 File list
4.2 Compile and test
5. Conclusion

1. Introduction

Sockets are a mechanism for exchanging data between processes. These processes can either be on the same machine, or on different machines connected via a network. Once a socket connection is established, data can be sent in both directions until one of the endpoints closes the connection.

I needed to use sockets for a project I was working on, so I developed and refined a few C++ classes to encapsulate the raw socket API calls. Generally, the application requesting the data is called the client, and the application servicing the request is called the server. I created two primary classes, ClientSocket and ServerSocket, that the client and server could use to exchange data.

The goal of this article is to teach you how to use the ClientSocket and ServerSocket classes in your own applications. We will first briefly discuss client-server communications, and then we will develop a simple example server and client that utilize these two classes.

2. Overview of Client-Server Communications

Before we go jumping into code, we should briefly go over the set of steps in a typical client-server connection. The following table outlines these steps:

Server                                       Client
1. Establish a listening socket and wait
   for connections from clients.
                                             2. Create a client socket and attempt
                                                to connect to the server.
3. Accept the client's connection attempt.
4. Send and receive data.                    4. Send and receive data.
5. Close the connection.                     5. Close the connection.

That's basically it. First, the server creates a listening socket, and waits for connection attempts from clients. The client creates a socket on its side, and attempts to connect with the server. The server then accepts the connection, and data exchange can begin. Once all data has been passed through the socket connection, either endpoint can close the connection.

3. Implementing a Simple Server and Client

Now it's time to dig into the code. In the following section we will create both a client and a server that perform all of the steps outlined above in the overview. We will implement these operations in the order they typically happen - i.e. first we'll create the server portion that listens to the socket, next we'll create the client portion that connects to the server, and so on. All of the code in this section can be found in simple_server_main.cpp and simple_client_main.cpp.

If you would rather just examine and experiment with the source code yourself, jump to this section. It lists the files in the project, and discusses how to compile and test them.

3.1 Server - establishing a listening socket

The first thing we need to do is create a simple server that listens for incoming requests from clients. Here is the code required to establish a server socket:

listing 1 : creating a server socket ( part of simple_server_main.cpp )
#include "ServerSocket.h"
#include "SocketException.h"
#include <iostream>
#include <string>

int main ( int argc, char* argv[] )
{
  try
    {
      // Create the server socket
      ServerSocket server ( 30000 );

      // rest of code -
      // accept connection, handle request, etc...
    }
  catch ( SocketException& e )
    {
      std::cout << "Exception was caught:" << e.description() << "\nExiting.\n";
    }

  return 0;
}

That's all there is to it. The constructor for the ServerSocket class calls the necessary socket APIs to set up the listener socket. It hides the details from you, so all you have to do is create an instance of this class to begin listening on a local port.

Notice the try/catch block. The ServerSocket and ClientSocket classes use the exception-handling feature of C++. If a class method fails for any reason, it throws an exception of type SocketException, which is defined in SocketException.h. Not handling this exception results in program termination, so it is best to handle it. You can get the text of the error by calling SocketException's description() method as shown above.

3.2 Client - connecting to the server

The second step in a typical client-server connection is the client's responsibility - to attempt to connect to the server. This code is similar to the server code you just saw:

listing 2 : creating a client socket ( part of simple_client_main.cpp )
#include "ClientSocket.h"
#include "SocketException.h"
#include <iostream>
#include <string>

int main ( int argc, char* argv[] )
{
  try
    {
      // Create the client socket
      ClientSocket client_socket ( "localhost", 30000 );

      // rest of code -
      // send request, retrieve reply, etc...
    }
  catch ( SocketException& e )
    {
      std::cout << "Exception was caught:" << e.description() << "\n";
    }

  return 0;
}

By simply creating an instance of the ClientSocket class, you create a Linux socket and connect it to the host and port you pass to the constructor. Like the ServerSocket class, if the constructor fails for any reason, an exception is thrown.

3.3 Server - accepting the client's connection attempt

The next step of the client-server connection occurs within the server. It is the responsibility of the server to accept the client's connection attempt, which opens up a channel of communication between the two socket endpoints.

We have to add this functionality to our simple server. Here is the updated version:

listing 3 : accepting a client connection ( part of simple_server_main.cpp )
#include "ServerSocket.h"
#include "SocketException.h"
#include <iostream>
#include <string>

int main ( int argc, char* argv[] )
{
  try
    {
      // Create the socket
      ServerSocket server ( 30000 );

      while ( true )
	{
	  ServerSocket new_sock;
	  server.accept ( new_sock );

	  // rest of code -
	  // read request, send reply, etc...
	}
    }
  catch ( SocketException& e )
    {
      std::cout << "Exception was caught:" << e.description() << "\nExiting.\n";
    }

  return 0;
}

Accepting a connection just requires a call to the accept method. This method accepts the connection attempt, and fills new_sock with the socket information about the connection. We'll see how new_sock is used in the next section.

3.4 Client and Server - sending and receiving data

Now that the server has accepted the client's connection request, it is time to send data back and forth over the socket connection.

An advanced feature of C++ is the ability to overload operators - that is, to give an operator a new meaning for your own types. In the ClientSocket and ServerSocket classes I overloaded the << and >> operators so that, when used, they write data to and read data from the socket. Here is the updated version of the simple server:

listing 4 : a simple implementation of a server ( simple_server_main.cpp )
#include "ServerSocket.h"
#include "SocketException.h"
#include <iostream>
#include <string>

int main ( int argc, char* argv[] )
{
  try
    {
      // Create the socket
      ServerSocket server ( 30000 );

      while ( true )
	{
	  ServerSocket new_sock;
	  server.accept ( new_sock );

	  try
	    {
	      while ( true )
		{
		  std::string data;
		  new_sock >> data;
		  new_sock << data;
		}
	    }
	  catch ( SocketException& ) {}
	}
    }
  catch ( SocketException& e )
    {
      std::cout << "Exception was caught:" << e.description() << "\nExiting.\n";
    }

  return 0;
}

The new_sock variable contains all of our socket information, so we use it to exchange data with the client. The line "new_sock >> data;" should be read as "read data from new_sock, and place that data in our string variable 'data'." Similarly, the next line sends the data in 'data' back through the socket to the client.

If you're paying attention, you'll notice that what we've created here is an echo server. Every piece of data that gets sent from the client to the server gets sent back to the client as is. We can write the client so that it sends a piece of data, and then prints out the server's response:

listing 5 : a simple implementation of a client ( simple_client_main.cpp )
#include "ClientSocket.h"
#include "SocketException.h"
#include <iostream>
#include <string>

int main ( int argc, char* argv[] )
{
  try
    {
      ClientSocket client_socket ( "localhost", 30000 );

      std::string reply;

      try
	{
	  client_socket << "Test message.";
	  client_socket >> reply;
	}
      catch ( SocketException& ) {}

      std::cout << "We received this response from the server:\n\"" << reply << "\"\n";
    }
  catch ( SocketException& e )
    {
      std::cout << "Exception was caught:" << e.description() << "\n";
    }

  return 0;
}

We send the string "Test message." to the server, read the server's response, and print it to standard output.

4. Compiling and Testing Our Client And Server

Now that we've gone over the basic usage of the ClientSocket and ServerSocket classes, we can build the whole project and test it.

4.1 File list

The following files make up our example:

Makefile - the Makefile for this project
Socket.h, Socket.cpp - the Socket class, which implements the raw socket API calls.
SocketException.h - the SocketException class
simple_server_main.cpp - main file for the server
ServerSocket.h, ServerSocket.cpp - the ServerSocket class
simple_client_main.cpp - main file for the client
ClientSocket.h, ClientSocket.cpp - the ClientSocket class

4.2 Compile and Test

Compiling is simple. First save all of the project files into one subdirectory, then type the following at your command prompt:

prompt$ cd directory_you_just_created
prompt$ make

This will compile all of the files in the project and create the simple_server and simple_client executables. To test them, run the server in one command prompt, and then run the client in another:

first prompt:
prompt$ ./simple_server

second prompt:
prompt$ ./simple_client
We received this response from the server:
"Test message."

The client will send data to the server, read the response, and print it to standard output as shown above. You can run the client as many times as you want - the server will respond to each request.

5. Conclusion

Sockets are a simple and efficient way to send data between processes. In this article we've gone over socket communications, and developed an example server and client. You should now be able to add socket communications to your applications!

Rob Tougher

Rob is a C++ software engineer in the NYC area. When not coding on his favorite platform, you can find Rob strolling on the beach with his girlfriend, Nicole, and their dog, Halley.

Copyright © 2002, Rob Tougher.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

Play with the Lovely Netcat:
Reinvent /usr/bin/yes

By zhaoway

Netcat and Yescat

The first, but secondary, purpose of this article is to introduce you to this nifty networking tool: /usr/bin/netcat, readily available in Debian GNU/Linux under the package name netcat. (The drill: apt-get install netcat and you're done.) There is very well written companion documentation by the anonymous software author, from which my fellow Debian developers produced a nicely formatted Unix manual page. Reading the companion documentation is a really interesting experience. It will almost certainly remind the gentle reader that there truly is this kind of creature called the Unix guru living somewhere out at large. That kind of hackish feeling - think of it: insisting on, and succeeding in, remaining anonymous after writing such an excellent piece of software. Only a true Unix guru could do that!

Since the netcat documentation is of such excellent quality, I will not duplicate it here. (However, I recommend you read it before reading this article.) For those of you with little patience: netcat can forward a data stream from stdin to a TCP or UDP socket, and from a TCP or UDP socket to stdout, just like the cat program forwards data from stdin to stdout. According to unconfirmed sources, that's the origin of the program's name.
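Stripped to its essence, each direction of netcat is just a copy loop between two file descriptors. The little helper below is my own illustration of that idea (copy_stream is not a real netcat function); it shovels bytes from one descriptor to another until end-of-file:

```cpp
// Illustration of the cat/netcat idea: copy bytes from one file
// descriptor to another until EOF.  In netcat, "from" or "to" would be
// a connected TCP or UDP socket instead of a pipe or terminal.
#include <unistd.h>

// Returns total bytes copied, or -1 on error.
ssize_t copy_stream ( int from, int to )
{
  char buf [ 4096 ];
  ssize_t total = 0;
  ssize_t n;
  while ( ( n = read ( from, buf, sizeof buf ) ) > 0 )
    {
      if ( write ( to, buf, n ) != n )
        return -1;
      total += n;
    }
  return n < 0 ? -1 : total;
}
```

Point stdin at one end and a socket at the other, run a loop like this in each direction, and you have the heart of a netcat.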

The second, but primary, purpose of this article is to show you how tedious and clueless an article author (like me) can be when introducing a piece of software that has no graphical user interface or interactive help system. Ya know, I would simply go crazy if I couldn't capture a screenshot or two!

So here we introduce the nutty yescat, for a purpose which will show itself later: /usr/bin/yes. It has quietly lain in a corner of /usr/bin for so long that nearly none of us latecomers to the Linux world ever noticed it on any of our Linux systems. Its origin remains a mystery, yet it is as omnipresent as /sbin/init! What does it do? Let's see with our own eyes:

zw@q ~ % yes
y
y
y
...

Isn't it wonderful? ;-) (Press ctrl-C to stop the y's; otherwise they'll march down the screen forever.) It can even say no, too!

zw@q ~ % yes no
no
no
no
...

In the following sections we will develop two companion utilities with which we will eventually reinvent /usr/bin/yes, with help from /usr/bin/netcat of course! Let's start the journey now!

Hub and cable

The hub (hub.c) and cable (cable.c) utilities are certainly inspired by netcat, which can forward a data stream from a socket to stdout and from stdin to a socket. Did I forget to recommend the netcat companion documentation to you? ;-) Hub is designed to act like a server, and cable to act like a client. Instead of forwarding data between stdin/stdout and a socket, hub and cable forward and multiplex data from one socket to any number of other sockets. That's where the names come from: they work just like an Ethernet hub and cable. Let's see a screenshot. Yeah, a screenshot! ;-)

zw@q ~ % ./hub
lullaby internetworks lab: (server alike) hub $Revision: 1.2 $
Copyright (C) 2001  zhaoway <>

Usage: hub [hub buffer size] [tcp port number] [number of hub ports]

o hub buffer size is in bytes. for example 10240.
o tcp port number is at least 1024 so i do not need to be root.
o number of hub ports is at least 2. happy.
zw@q ~ %

Hub listens on a TCP port, simulating a many-port Ethernet hub. Data coming in on one hub port are forwarded to all the other hub ports. You can test hub alone, without cable, by using netcat. Note: nc is short for netcat.

  1. Launch hub in the console A: ConA % ./hub 10240 10000 2
  2. From console B, connect a netcat: ConB % nc localhost 10000
  3. From console C, connect another netcat: ConC % nc localhost 10000
  4. Now you can type in ConC and read the output in ConB, and vice versa.

Then there is cable:

zw@q ~ % ./cable
lullaby internetworks lab: (client alike) cable $Revision: 1.2 $
Copyright (C) 2001  zhaoway <>

Usage: cable [cable buffer size] [1st ip] [1st port] [2nd ip] [2nd port] ..

o cable buffer size is in bytes. for example 10240.
o ports should be listening or connection attempts will fail.
o number of ip addr and port pairs is at least 2.
zw@q ~ %

Cable is more or less like a shared coaxial Ethernet bus cable. It forwards and multiplexes data between listening socket daemons. Let's test it too.

  1. Launch a netcat daemon in ConA: ConA % nc -l -p 10000
  2. Launch another netcat daemon in ConB: ConB % nc -l -p 10001
  3. Arrange the cable: ConC % ./cable 10240 127.0.0.1 10000 127.0.0.1 10001
  4. Now you can type in ConA and read the output from ConB, and vice versa.

There are some interesting techniques used in developing hub and cable, notably the select() function call. But for now, we will stay on course to reinvent /usr/bin/yes first. ;-)
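For the curious, the select() pattern works roughly like this. The sketch below is my guess at the technique, not the actual hub.c source: wait until any hub port has data, then forward whatever arrives to every other port. The function name hub_round and the 10240-byte buffer are illustrative assumptions.

```cpp
// Hedged sketch of a select()-based hub: multiplex data among a set of
// connected client sockets, forwarding each incoming chunk to all the
// other sockets (like an Ethernet hub repeating frames to every port).
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Handle one round of traffic among the connected hub ports.
void hub_round ( const std::vector<int>& clients )
{
  fd_set readable;
  FD_ZERO ( &readable );
  int maxfd = -1;
  for ( int fd : clients )          // watch every hub port for input
    {
      FD_SET ( fd, &readable );
      if ( fd > maxfd ) maxfd = fd;
    }
  if ( select ( maxfd + 1, &readable, 0, 0, 0 ) <= 0 )
    return;                         // error or interrupted: skip round
  char buf [ 10240 ];
  for ( int fd : clients )
    {
      if ( ! FD_ISSET ( fd, &readable ) )
        continue;
      ssize_t n = recv ( fd, buf, sizeof buf, 0 );
      if ( n <= 0 )
        continue;                   // peer closed or recv error
      for ( int other : clients )   // forward to all *other* ports
        if ( other != fd )
          send ( other, buf, n, 0 );
    }
}
```

Run in a loop, with a listening socket that accept()s new clients into the vector, this is essentially the whole hub.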

Reinvent the wheel

It's not a very easy task to reinvent /usr/bin/yes using netcat, hub, and cable. I can only give a cheating answer, and that's why I need the buffer size command line argument. But anyway, let's begin!

The main idea is as follows. First we set up a three-port hub; then we use cable to connect two hub ports together; after that we can use netcat to echo any character into the remaining free hub port. It looks like the following diagram:

            |            cable
           \|/        ,---------,
            |         |         |
            V         V         V
        ,--[ ]-------[ ]-------[ ]--.
        |   A         B         C   |
        |       three-port hub      |
        '---------------------------'

Because of the nature of the hub, data sent in on port A will be forwarded to ports B and C. Since ports B and C are connected by a cable, the data coming out of the hub go right back in, where they are multiplexed and forwarded to port A again, circulating in the cable loop for eternity. Eventually port A receives infinite copies of the original data sent in.

Let's construct the device.

  1. In ConA, we launch the three-port hub: ConA % ./hub 10240 10000 3
  2. In ConB, we loop the cable: ConB % ./cable 10240 127.0.0.1 10000 127.0.0.1 10000

Now that we have finished constructing our device, we will use netcat to finally complete our reinvention of /usr/bin/yes.

ConC % echo "y" | nc localhost 10000

The tricky exercise left for the reader: what happens if we change the buffer size of both cable and hub from 10240 to 1? You can try it and see for yourself.

Have fun and good luck!


zhaoway lives in Nanjing, China. He divides his time among his beautiful girlfriend, his old Pentium computer, and pure mathematics. He wants to marry now, which means he needs money, i.e., a job. Feel free to help him into the sweet cage of marriage by providing him a job opportunity. He would be very thankful! He is also a volunteer member of the Debian GNU/Linux project.

Copyright © 2002, zhaoway.
Copying license
Published in Issue 74 of Linux Gazette, January 2002

"Linux Gazette...making Linux just a little more fun!"

The Back Page

Wacko Freshmeat Entry of the Month


Contributed By Jim Dennis

pyDDR 0.2.5
by theGREENzebra - Saturday, December 22nd 2001 00:39 EST

About: PyDDR is a clone of DDR ("Dance Dance Revolution") written in Python. The idea of DDR is simple. There's a mat with four directional arrows, and the game scrolls arrows up the screen to the beat while playing a song. When the arrows reach the top of the screen (not sooner and not later), the player hits the corresponding arrow on the pad, and given that it's hit on time with the beat, points are scored. Based on how well the dance is put together, s/he is graded at the end of the song.

Changes: PyDDR now has working DDR mat support. STEP files can now contain starting/ending markers to shorten a full-length MP3 into a DDR-length song without modifying the file, and song and group names are also displayed at the top of the playfield. A few bugfixes and improvements were made regarding fonts, misses, and combos.

This is a game written in Python 2.1 and using the Pygame package (which is a set of bindings between Python and the SDL game-development libraries).

The thing that's wacky is that it's intended to be used with one of those DDR "dance mats." These are little floor mats with four arrows arranged in a cross pattern (like old-fashioned cursor keys before the advent of the "inverted T" cursor/arrows on PC keyboards). You can "dance" on the mat, providing "step" input (timing and direction of foot placement) for the game. It then awards points based on how closely you follow the dance steps (which it displays and scrolls to the tempo of some MPEG-encoded music).

You might have seen video games where kids dance for a high score. I know that I saw lots of these in Japan, where it's apparently *very* popular.

I suppose this is the most exciting non-violent, completely G-rated fun that's available for kids on the 'net.

(Maybe the fact that *I* think it's "wacky" reveals too much about me!)

Not The Answer Gang


Answered By Huibert Alblas, Ben Okopnik, Iron, Don Marti,

Huibert Alblas asks:
Ext3 and ext2 are compatible filesystems; you can mount ext3 filesystems with an "only ext2" kernel, _but_ it has to be cleanly unmounted (damn, what is the correct past tense for what I want to express?)

(!) [Ben]

"Has to have been cleanly unmounted." English can get very funky sometimes... OTOH, Spanish isn't much better. Hey, Mike! Does Esperanto suck just as much with tenses, or (being a designed language) did they actually do something with this mess?

(!) [Iron]

It would be the same in Esperanto. (But see below.)

Ext2 kaj ext3 estas fajl-sistemoj kunlaborivaj. Oni povas mauxnti ext3-an fajlsistemon en koro "nur" ext2-a, *sed* gxi devas esti pure malmauxntita.

(!) [Ben]

Hey, that looks like code I've been writing lately! :) I don't think I've ever seen written Esperanto before, other than single words or so - my memory says I have but can't provide written proof. This is cool.

(!) [Iron]

But it would be more natural to transform the sentence:

... *if* it has been cleanly unmounted == 
	oni povos ... *se* gxi estos pure malmauxntita.
	(-os : both clauses in future tense because of the "if")

... *only if* == *nur se*

... *if* one unmounted it cleanly first ==
	*se* oni jam malmauxntos gxin pure
	(literally: "already will mount")
There's no way around the fact that "has to be cleanly unmounted" requires three verbs, with the last one being a past passive participle. What Esperanto gives you is a complete set of active and passive participles for all tenses.
mauxnti   = to mount (pronounced "mount-ee")
mauxntas  = I/you/we/they mount, (s)he mounts
mauxntis  = mounted
mauxntos  = will mount
mauxntu   = mount!  (imperative)
mauxntus  = would mount  (subjunctive, as in:
	If I had mounted ext3, my files wouldn't be ruined.
	Se mi mauxntus ext3'on, miaj fajloj ne estus ruinitaj.

	If I had been accustomed to mounting ext3, my files wouldn't be ruined.
	Se mi kutimus mauxnti ext3'on, miaj fajloj ne estus ruinitaj.
	(kutimi = to do something habitually)

It's easier to explain the participles with "prezidi" (to preside):

prezidanto   = president  (he-who-is-presiding)
prezidinto   = former president   (he-who-was-presiding)
prezidonto   = president-elect    (he-who-will-preside)

prezidato    = subject   (he-who-is-presided-over)
prezidito    = former subject   (he-who-was-presided-over)
prezidoto    = future subject   (he-who-will-be-presided-over)

Not officially a part of Esperanto, but you can get away with:

prezidunto   = (subjunctive: he-who-would-be-president [but he's not])
preziduto    = (subjunctive: he-who-would-be-presided-over [but he's not])

When you want to get away from tense:

prezidento   = President (no tense affiliation; a separate word root
                          ...but most verbs don't have an -ent counterpart)

gxi devas esti malmauxntita == it must be unmounted (it must have been unmounted)

li devas esti malmauxntinta gxin ==
	he must have unmounted it
	he was obligated to have unmounted it

li devus esti malmauxntinta gxin ==
	he should have unmounted it   (subjunctive: but he didn't)

Unofficially, you can combine "esti malmauxntinta" into one verb: 
	malmauxntinti     (to have unmounted something)

	gxi devas malmauxntiti    (it must have been unmounted)
collapsing three verbs into two.

Or even:
	malmauxntintis    (had unmounted something)

So these are equivalent:
	li estas malmauxntinta gxin
	li malmauxntintas gxin
	== he has unmounted it.

	li estis malmauxntinta gxin
	li malmauxntintis gxin
	== he had unmounted it.
But one normally tries to keep the verbs as simple as possible, and not use participles unless necessary. English and Spanish habitually say "is doing", "was doing" when the participle isn't necessary: this is *not* done in Esperanto. Although if you do it, it's not "incorrect", just weird.

The unofficial forms aren't in the grammar books or used by the great writers, so they aren't recommended for academic/professional use, but because they are logical extensions of the grammar system, they aren't "wrong" per se. If enough people use them, eventually they will be acknowledged in the Plena Vortaro (Complete Dictionary, literally "full word-collection").

(!) [Ben]

The implications behind all of that are fascinating, "great writers" and "academic/professional use" particularly. Any estimates on how many Esperanto speakers there are in the world?

(!) [Iron]

The only number I heard was that it's the same size as the smallest countries in the United Nations. I forget which those were. I suppose we can say, a bit smaller than Liechtenstein. How big is Liechtenstein now?

The difference is that Esperantists are scattered all over the world rather than being concentrated in one country. So, for instance, you can take an around-the-world trip and stay only at Esperanto-speaking lodgings using the Pasporta Servo ("passport service"). This gets you the inside scoop on a country whose language you don't know, even if the hosts don't understand your language.

"Great writers" was an exaggeration. I meant the most respected Esperanto writers and translators. E-o's creator L L Zamenhof translated the Bible and Hamlet himself before introducing the language, and wrote numerous original poems and proverbs. (The regularities of the language make finding rhyming and metric pairs relatively easy.)

"Famous" original works in Esperanto include _Metropoliteno_ by Vladimir Varankin, written in the 1920s about the building of the Berlin and Moscow subways. (The author was either a loyal Soviet or submitted to Soviet censorship rules, so you have to ignore the propaganda-speak in it.) _Mr Tot Acxetas Mil Okulojn_ (Mr Tot Buys a Thousand Eyes) is a humorous look at a travelling salesman, with comments about the invasion of privacy (Carnivore, PGP back doors - I *knew* we could tie this to Linux somehow!). There is _Kredu Min, Sinjorino!_ (Believe Me, Ma'am!), etc. Also the infamous _Knedu Min, Sinjorino!_ (Knead Me, Ma'am!), a dictionary of "taboo and insulting expressions", whose title is a satire of the previous book.

Most Esperanto books, however, are translations. But whereas most translations to English come from the top five big languages, translations to Esperanto come from a wide variety of small languages. Hungary and Bulgaria were centers for Esperanto translation and academia during part of the 20th century, and there was also significant activity in England and Germany before WWII. In the late 20th century, China produced a significant number of children's books and translations of Chinese literature, due to government sponsorship of Esperanto. (The way the government is now sponsoring Linux projects.) Japan produces a science-fiction anthology series _Sferoj_ ("spheres", but also a pun: "sferoj => science-fiction-pieces" analogous to "negxeroj => snowflakes [units of snow]") containing sf from many countries, sometimes translated, sometimes original. Brazil, Finland and the Netherlands have translators doing their own national works and also works from many other countries. There are also works that have been overlooked in English translation; e.g., _Lirikaj Perloj de Al-Andalus_ (_Lyric Pearls of Al-Andalus_), "Spanish and Jewish lyric poetry from Spain during the Golden Age of Islam". And of course, the Koran is available, as well as Kempis' _Imitation of Christ_, Confucian and Buddhist texts and apologetics, Spinoza, Hillel, Descartes, etc.

An Esperanto bookstore in Emeryville, California, with several hundred titles:
My Esperanto page:
A variety of information:
The Linux Esperanto-HOWTO (in Esperanto):

In another thread...

(!) [Iron]

Actually, around that time, my LG connections did put me in touch with a Linux Esperantist in Vietnam. The only other Linux Esperantist I know of.

(!) [Ben]

Linux. Esperanto. In Vietnam.

Tell me, Mike - don't you ever get out of that rut and do anything out of the ordinary? I mean, all that sounds so... well... *common*. <grin>

(!) [Don]

The Bay Area is crawling with them. I'm one of the few local Linux freaks I know who can't at least tell people how to reinstall LILO in Esperanto.

It even starts to get on people's nerves.

Two weeks later, a letter from

Estimata samideano Majk!
/* Miaopinie Vi ne konas min, cxar mi ne skribis al Vi antawe... Mallong-dire mi estas 36-jara programmisto el Rusio (urbo Volgograd) kaj krome la Linux-sxatanto */
Mi deziregas gratuli Vin al la NovJar-festo kaj deziri al Vi bonan farton, sukcesan kreadon kaj privatan felicxon!!!

Mi ankaw volas sekvi Vian konsilon pri plezur-faro al homoj, do mi informas ke konstante legas artikolojn de la *gazette* rusigitajn far Sergeo Skorohxodov (dissendolisto comp.soft.linux.gazette en SUBSCRIBE.RU) kaj opinias tiun La Bona Afero! Unufraze: estu tiel plu!

Amike, Dmitrij W. Vronskij (aka dww[RU])

Esteemed fellow-Esperantist (= member-of-the-same-idea) Mike!
I don't think you know me, since I haven't written to you before... To make it short, I'm a 36-year-old programmer in Volgograd, Russia, and also a big Linux fan.
I'd really like to wish you a Happy New Year, and hope things go well in your personal affairs.
I also want to follow your advice about doing good for people (lit.: doing pleasure to people), so let me tell you that I constantly read the articles of the Gazette russified by Sergej Skorohodov (mailing list comp.soft.linux.gazette at SUBSCRIBE.RU) and think it's a Good Thing! In a phrase: keep on truckin'!
Friendlily, Dmitrij W Vronskij (aka dww[RU])
"Geniulo inventas, talentulo efikigas, stultulo uzas kaj ne dankas"
--Kozma Prutkov, fabela rusa filosofiulo
"A genius invents, a talented person produces, a stupid person uses but doesn't thank."
--Kozma Prutkov, fabled Russian philosopher

More on Ben's reputation

Answered By Ben Okopnik, Iron, Guy Milliron, Thomas Adam, Chris Gianakopoulos

(!) [Ben]

Heh. In my PC hardware classes, lo these many years past, I used to destroy my students' MBRs for fun. Or wipe their CMOS... or crunch the DBR... or even make loops in the File Allocation Table, making DOS/Win loop infinitely as it tried to read, say, IO.SYS. All quickly fixable.

(!) [Iron]

I knew it, I just knew it. Never trust anybody who wears dark sunglasses, you never know what they're hiding. I knew that Ben Okopnik character was going to be trouble. Heather, call the FBI.

(!) [Ben]

<shrug> No need to call them; I already offered to corrupt their machines a long time ago (for a very reasonable fee, even!), but they told me they were running Wind*ws and were well served in that area.

If you have any contacts at the CIA, however, I'd be grateful.

(!) [Iron]

You don't already have contacts??? I thought for sure some of your KGB cronies must be double agents.

(!) [Ben]

They won't *share*. <pout>

(!) [Guy]

*laugh* Reminds me of some DOS-based FidoNet software, Opus. In the manual, under requirements:

  1. Sunglasses
  2. A Nerf Bat
Both were completely optional, yet highly recommended.

I can't believe I started in FidoNet when it was a mere 1000 nodes and left when it was just about to crest 32,000 nodes.

Your mouse has moved. Windows must be restarted
for the change to take effect. Reboot now? [ OK ]

In another thread...

(!) [Thomas]

Dear TAG,
Just thought I'd wish you all a merry christmas and a happy new year!!

I'd just like to apologise for my "attitude" while answering some of the questions posted here. I have been under a lot of duress, and a heavy workload has made me irritable.

But as of next year, I'll be my usual cheery self :-) <Ben....stop sniggering> :)

(!) [Chris]

No way do you have "attitude"! You are easy going.

(!) [Iron]

[Who never noticed Thomas being non-cheerful about anything.]

I guess he'll have to try harder if he wants people to talk about him like we talk about Ben.

(!) [Ben]

'Ey! I resemble that remark!

Uninstalling Linux

Answered By Iron, Ben Okopnik, Mike Martin

How do you remove linux from the hard drive completely?

(!) [Iron]

Go to the LG search engine and search for "uninstalling" or "uninstall". You'll find several items. Here's one of the better ones:

(Ben, we need an "uninstalling Linux" entry in the TAG FAQ.)

(!) [Ben]

It's already there:

(!) [Mike]

Not to be too stroppy - but do we? I would see this as more a question for whatever windows equivalents there are to the answer gang.

(!) [Iron]

It comes down to being a responsible OS. Linux has gained lots of brownie points by being the OS that's compatible with more systems than any other, accesses a wider variety of filesystems and network protocols, has a less buggy compiler and more sysadmin/developer support tools, etc. In essence, it's the one that saves the day for sysadmins/developers trying to work around the shortcomings in other systems. Do we want to lose this good PR by not recognizing that uninstalling Linux is just as legitimate as installing it, and that people may have good reasons to? Perhaps they're a newbie who tried Linux out and got lost. Perhaps they inherited a computer with Linux on it. Whatever. It's about making Linux into a system that "plays nice with others". Or more correctly, enhancing the already-good job Linux does with this. It's about being a responsible OS.

Now think about what help The Borg gives you if you want to uninstall it to install Linux. Is there any documentation in the Windoze manuals for this? What about documentation on how to set up Windoze so that it can share the system with Linux? Of course not. Nobody in their right mind would want to uninstall The Borg. It has all the features consumers are demanding, and it's "innovative". After all, The Borg had Plug-n-Play first!

Thus, it's a feather in Linux's cap to make sure the "uninstalling Linux" entry is prominently displayed near the top of the FAQ. It shows that we're confident enough in the OS to help you uninstall it if you want to. (You'll be back...) It gives newbies a safety valve in case they need to uninstall Linux someday, they'll know where to look. And finally again, it's a feature Windows *doesn't* have.

(!) [Ben]

Uninstalling Linux works out to pretty much the same thing as uninstalling Wind*ws - and Microsoft does indeed have an entry in their Knowledge Base that describes how to do that (I found the link at Dell, while searching for serial port loopback info. Go figure.) In reality, we're providing instruction for either one. Hmm, there's a different way of looking at it...

I definitely agree with the above logic if not the fine details.


(!) [Ben]

Hello!!! Your questions!!! have lots of randomly scattered exclamation!!! points!!!, so they must!!! be very!!! important!!!!!!!!!!!!!! Thank you!!!!!! for letting us know!!!!!!!!!!!


Wow, that's really exciting. Is there a reason that you're telling us about this? I'm sure that if you wanted help, you would have provided a list of exactly which errors you got (preferably by copying and pasting rather than retyping), in which kernel version, which module(s), etc. As it is - well, my neighbor's favorite goldfish died a month ago, so I'm fresh out of sympathy. <shrug> I guess that you *are* the only one with this problem... at least you're the only one who _knows_ about any part of this that's a problem. The rest of us are completely in the dark, due to lack of information.

5. No FTP I connect to the web thru a LAN! It works!!!

Wow. More excitement. Now, if we only knew which particular "it" that refers to... Web connection? FTP? Pouring milk into your breakfast cereal without spilling any? Tune in for our next exciting episode, when our mysterious guest reveals all!

Tux trivia

Answered By Iron

When I gave her a stuffed Tux as a present, my girlfriend asked me what its sex is.

(!) [Iron]

Four out of five sexist computer nerds surveyed agree Tux is male.

World of Spam

From: supercow

YES. What aliens says IS truthpwd We REOPENEDpwd So visit us at XXXXX

Subject: Completely FREE to download, join the revolution, NapsterPorn!

Below is the result of your feedback form. It was submitted by TheNapsterOfPorn@XXXXX on Sunday, December 2, 2001

Dear Sir or Madam: Imagine a place just like napster, but with people trading porn instead of music?

We have recently visited your site: We thought there was substantial potential for making revenue for you by placing banners or advertising on your site if you have a reasonable flow of traffic.
[The LG copying page??? High traffic? -Iron.]

We operate on the pay per click method and checks are issued on the 5th of each month. Pay per Click means each time a surfer sees the banner ad on your site and clicks though to the advertised site you are paid for the click.

Advertising on your site increases the importance and prestige of your site.

[It does? Are you sure about that? -Iron.]

Do You Suspect Your Spouse Is Having A Cyber Affair On Your Computer While You Are Away? Have You Ever Lost Hours Of Hard Work... Just Because Your Computer Crashed? Do You Wonder What Your Kids Or Employees REALLY DO Online?

Introducing... XXXXX -- Secret Keystroke Recorder & Backup Utility

  • Monitor All Day..All Night In Complete Secrecy!
  • Pasword Protected Activity Logs!
  • Completely Undetectable To The End User!
  • XXXXX can record start-up/shut down time of your computer
  • XXXXX can record windows captions of programs used.
  • Records Chat Room And Instant Messaging Conversations!
  • Record time stamp at time interval you specified.

You received this email because you signed up at one of Vertical Mails websites or you signed up with a party that has contracted with Vertical Mail.

I have visited your website today and noticed that you have a great site that would work really well selling Evidence Eliminator.

You make a STAGGERING conversion rate, PLUS 10% of earnings of referred webmasters and an INCREDIBLE Webmaster loyalty and retention performance to bring you the World's premium cash payout. There is nothing better.

This is the World's best and best-selling program. Nothing converts better. That's official - EE rules. Try it and see how amazing it converts. You will not be disappointed.

Webmasters ranging from Adult, to Basic Home Sites, are taking AMAZING earnings, many Associates are making in excess of 100-200K $ (US) a year.

Recently the company upgraded the commissions from 30% to an AMAZING 50% of their top-selling and World-Famous product, now allowing even the very newest of Webmasters to generate very generous earnings.

You have been selected as a potential candidate for a free listing in the 2002 Edition of the International Executive Guild Registry Please accept our congratulations for this coveted honor As this edition is so important in view of the new millennium, the International Executive Guild Registry will be published in two different formats; the searchable CD-ROM and the Online Registry.

Since inclusion can be considered recognition of your career position and professionalism, each candidate is evaluated in keeping with high standards of individual achievement. In light of this, the International Executive Guild thinks that you may make an interesting biographical subject.

Russian Joke of the Month

A newspaper boy in Soviet Russia announces his wares:

  • There's no more "Truth"! (Pravda)
  • "Soviet Russia" is completely sold out!
  • All that's left is "Labor" for three kopecks!
--Ben Okopnik

Happy Linuxing!

Mike ("Iron") Orr
Editor, Linux Gazette,

Copyright © 2002, the Editors of Linux Gazette.
Copying license
Published in Issue 74 of Linux Gazette, January 2002