Monday, September 11, 2006

But it'll still run Linux!

I was talking with a friend yesterday as we carried a few computers out to his car - some old systems we were going to put together into a cluster of sorts, though quite a heterogeneous one at that. One of them - an early Mac with a 68000 processor - he was debating getting rid of. That system, however, will still run Linux quite well.

We mused, however, that the Linux folks will always - or nearly always, at least - be able to cry "But it'll still run Linux!", since even the latest kernel series (2.6) still runs on a ton of old hardware. Yes, you may have to build it specially, but you can usually find a distribution that supports the machine and then build up to the latest kernel if you desire. And the memory requirements of the kernel are still modest enough to suit older processors and computers. That doesn't necessarily mean you could run the latest and greatest distro that comes out - likely not. But with some crafting, most software - even the X Window System - is still fully usable on older hardware.

So, when will it ever be said, "But it'll still run Windows!"? I doubt you'll ever hear that said of old hardware and have it mean whatever Microsoft is just releasing.

So if you have some old hardware to play with, then check out the competition. Learn a bit more (or a lot more) about computers and their uses. Linux is coming around. It'll extend the life of older computers for quite a long time. (My Sun IPC is still supported by the 2.6 series. It was built in 1991.) And perhaps you'll find that you like it well enough to run it on your main computer.

But, if nothing else, it'll give you or your kid something fun to do, teach the better side of computing, and show you just how safe a computer can really be.

(c)2006

Sunday, September 10, 2006

The Silver Bullet for software?

A while back I read an article posted on Rebel Science's website entitled The Silver Bullet. At the time I didn't have a blog, so I tried posting it to Slashdot, only to get it rejected. Anyhow...here's my little talk on it below. Hopefully someone will find some benefit in the discussion here.

Rebel Science has an article entitled The Silver Bullet: Why Software Is Bad and What We Can Do to Fix It (http://www.rebelscience.org/Cosas/Reliability.htm), in which the author argues that Brooks' No Silver Bullet thesis is wrong.

Abstract from the article: "There is something fundamentally wrong with the way we create software. Contrary to conventional wisdom, unreliability is not an essential characteristic of complex software programs. In this article, I will propose a silver bullet solution to the software reliability and productivity crisis. The solution will require a radical change in the way we program our computers. I will argue that the main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis of software construction(*). I will argue further that moving to a signal-based, synchronous(**) software model will not only result in an improvement of several orders of magnitude in productivity, but also in programs that are guaranteed free of defects, regardless of their complexity."

I, for one, do not agree with him - at least not with his conclusion - as he seems to miss several factors. First, hardware IC designers do deal with algorithmic methods in hardware (he asserts the opposite). Second, and more importantly, there are factors in the software environment that are simply not present in the hardware environment. Third, event-based programming is just as complex as algorithmic programming; for that I shall simply point to concurrent programming models (which are basically event-based models) as the example.
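
To make that third point concrete, here's a small sketch of my own (it is not from the article, and the handler/counter names are just mine for illustration): two concurrent "handlers" in C react to work items by updating shared state, and without explicit coordination the result is nondeterministic. Coordinating signals and events carries the same kind of reasoning burden that algorithms do.

    /* Compile with: gcc -std=c99 -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each "handler" reacts to a million work items by updating shared state. */
    static void *handler(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* remove the lock/unlock pair and    */
            counter++;                   /* the final count varies run to run  */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, handler, NULL);
        pthread_create(&b, NULL, handler, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }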

To expand on that second point: the environment in which hardware runs is decidedly (and irrevocably) governed by the laws of physics, and can be determined very easily. The IC paths are fixed once, when the circuit is laid out on the board, and will never change short of physical damage of some sort (e.g. heat, a screwdriver falling on it, etc.). So the hardware folks can, from the physical aspects alone, guarantee that the hardware will run the same way every time. In the case of componentization (e.g. ISA, PCI, USB, etc.), everything is set to a specific standard based on the same principles, and thus can be just as well guaranteed until it reaches the software (e.g. driver) level, at which point it enters the software environment.

However, the software environment is not guided by any laws or principles that are set in stone. The software environment is what you make it to be. It is constructed first by the operating system, and then modified by every program and hardware driver the operating system executes. Thus, from an environmental viewpoint (even apart from the algorithmic vs. event-based question), the software environment can never be known for certain. Software is designed and tested in known environments where that environment can be controlled (e.g. which OS, hardware, hardware drivers, and software are used and run, and in what order). In the real world, however - when software is distributed to customers and users - its environment is largely unknown. One customer could have one hardware configuration and another a completely different one (e.g. an ATI video card vs. an nVidia video card); moreover, two customers could have identical hardware but completely different software environments (e.g. Linux vs. Windows, or even Windows 2000 vs. Windows XP vs. Windows Vista) that may or may not resemble what the software was tested in.

It is for the environmental reasons alone that Brooks was correct in his assertion that there is no silver bullet - even if he did not realize or vocalize that reason. That does not mean, however, that unreliability is a must for software.

Software fails for numerous reasons: physical failures (hardware); environmental failures (OS, drivers, libraries, etc.); design flaws; bad programming practices (e.g. lack of error checking); security holes (e.g. buffer overflows); and many others. While physical failures cannot be prevented, their impact - like that of all the others - can at least be minimized by programmers checking for errors. It is the very lack of such checking that allows a single non-fatal error to cause a catastrophic failure.
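
To illustrate that last point, here's a small C sketch of my own (the config-loading scenario and file names are purely hypothetical): the same non-fatal condition - a missing file - either crashes the program or is handled gracefully, depending entirely on whether the return value is checked.

    #include <stdio.h>
    #include <stdlib.h>

    /* Unchecked: fopen() failing is a non-fatal condition, but handing the
     * resulting NULL to fgets() is undefined behaviour and will typically
     * crash the whole program. */
    static void load_config_unchecked(const char *path)
    {
        char line[256];
        FILE *f = fopen(path, "r");
        fgets(line, sizeof line, f);   /* boom if f == NULL */
        fclose(f);
    }

    /* Checked: the same failure is caught at the earliest moment and the
     * program degrades gracefully instead of dying. */
    static int load_config_checked(const char *path)
    {
        char line[256];
        FILE *f = fopen(path, "r");
        if (f == NULL) {
            fprintf(stderr, "warning: cannot open %s, using defaults\n", path);
            return -1;                 /* non-fatal: the caller falls back */
        }
        if (fgets(line, sizeof line, f) == NULL)
            fprintf(stderr, "warning: %s is empty\n", path);
        fclose(f);
        return 0;
    }

    int main(int argc, char **argv)
    {
        (void)argv;
        const char *path = "no-such-file.conf";  /* hypothetical missing file */
        if (argc > 1)
            load_config_unchecked(path);   /* run with any argument to see the crash */
        else
            load_config_checked(path);
        return 0;
    }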

Hardware designers can trust that their environment will never change outside of certain parameters, and they do their best to choose parameters that cover what most people will experience (e.g. 0 to 200 degrees centigrade). Software programmers and engineers, however, must not rely on the environment being what they assumed it would be - every function call (aside from the extreme basics: addition, subtraction, comparison) must be scrutinized, and its error codes - all of them - handled as well as the program can manage at the earliest possible moment of failure. Software can be made reliable.
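
And one final sketch of my own on handling every error code at the earliest possible moment (again, the file-saving scenario is just a hypothetical example): every call in the sequence is checked, and the first failure is reported and dealt with right away, rather than letting later calls operate on bad state.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Write a buffer to disk, checking every call and reporting the first
     * failure immediately; returns 0 on success, -1 on error. */
    static int save_data(const char *path, const char *data)
    {
        size_t len = strlen(data);

        FILE *f = fopen(path, "w");
        if (f == NULL) {
            fprintf(stderr, "open %s: %s\n", path, strerror(errno));
            return -1;
        }
        if (fwrite(data, 1, len, f) != len) {
            fprintf(stderr, "write %s: %s\n", path, strerror(errno));
            fclose(f);
            return -1;
        }
        if (fclose(f) != 0) {   /* even fclose() can fail, e.g. on a full disk */
            fprintf(stderr, "close %s: %s\n", path, strerror(errno));
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* "example.txt" is just a placeholder path for this sketch. */
        return save_data("example.txt", "hello, world\n") == 0 ? 0 : 1;
    }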

(c)2006