
Frankenstein or The Modern Prometheus

I never thought that I would see such a thing in my life. It only happens in stories, right? I was completely wrong, and it is not the first time I was ...

Especially in large-scale software, it is pivotal that it be designed with care. Otherwise, only the original designer could perhaps come to the rescue. I was poking through one such piece of software, and found how much of a design problem is there. There are deadlocks, exceptions, and other stuff... I've been debugging software written by others on and off for quite some time, so I do have an uncanny knack for spotting vital defects in software design...

This leads me to talk about asynchronous processing. What is it? Where the heck did it come from? What is it used for? And a host of other questions your wandering mind might have.

Most computer architectures are sequential in nature. They simply perform one instruction at a time, in sequence. It was found early in the history of computing that saturating the CPU is a rarity rather than the common case, and that I/O processing is essential to make computing exciting. Think about one case where you have to solve a bunch of partial differential equations (if you know what those are :-), or to run probabilistic sampling over the bits of memory to find out the probability that your memory has more one bits than zero bits. These are not really very interesting unless you can convey the result, in one form or another, to your audience. So I/O processing became a vital part of computing. But I/O devices can be far slower than your CPU, and you really cannot waste CPU time just waiting for the I/O to finish and hand you an intermediate or final result. So what do we do?

Well, device interrupt technology came to rescue us. We send a command or two to a device, and then we can go ahead with our business until we get a notice that the request is done or that there was a device error. Now, how and when is the device going to let the CPU know that a request is done? The answer to "when" is: whenever the device is ready. So if the CPU keeps working sequentially, it has to be stopped from what it is doing to look at the result of the request. This is where asynchronous processing comes into play, since the CPU must halt what it is doing and start handling the interruption. The "how" is the interrupt mechanism that carries notifications from external devices to the CPU.
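To make the "submit a request, keep working, check later" idea concrete, here is a minimal sketch from user space using POSIX AIO. It is only an illustration of the pattern, not any particular product's code; the file name is just a placeholder, and the busy loop stands in for real work. On older glibc you may need to link with -lrt.

/* Start a read, keep the CPU busy, and only later collect the result --
 * the same "submit, continue, get notified" idea as a device interrupt,
 * surfaced through a library API. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/etc/hosts", O_RDONLY);        /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    /* The "go ahead and do our business" part: any useful computation
     * can run here while the request is in flight. */
    long busy = 0;
    while (aio_error(&cb) == EINPROGRESS)
        busy++;                                    /* stand-in for real work */

    ssize_t n = aio_return(&cb);                   /* collect the result */
    printf("read %zd bytes after %ld spins of other work\n", n, busy);

    close(fd);
    return 0;
}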

Now suppose we have a situation where the CPU should stop executing our current computation, because we are depending on a result from the device. In that case, we should try to use the CPU for something else. Here the OS puts the current task to sleep and picks another task to run. In this way the computer's resources are shared by different computations, and we achieve a pseudo-parallelism.
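A small sketch of that "put the waiting task to sleep, run another one" idea, assuming POSIX threads: a pipe stands in for a slow device, one thread blocks in read() and goes to sleep, and the OS hands the CPU to the other thread until the data arrives. Build with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pipefd[2];

static void *waiter(void *arg)
{
    char byte;
    (void)arg;
    /* This read() blocks: the thread sleeps and costs no CPU time. */
    read(pipefd[0], &byte, 1);
    printf("waiter: woke up, got '%c'\n", byte);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pipe(pipefd);
    pthread_create(&t, NULL, waiter, NULL);

    /* While the waiter sleeps, this thread gets the CPU. */
    long sum = 0;
    for (long i = 0; i < 50000000; i++)
        sum += i;                        /* stand-in for useful work */
    printf("main: did some work, sum=%ld\n", sum);

    write(pipefd[1], "x", 1);            /* the "device" completes the request */
    pthread_join(t, NULL);
    return 0;
}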

Today these concepts are available to programmers who are not directly involved with OS development. Threading is one example. If we have two or more completely disjoint computations, we can run each in its own thread, and that is easy enough! But if there are dependencies among the threads, we have to be very careful. That is what brings up race conditions, deadlocks, data corruption, and so on.
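Here is a minimal sketch of why dependent threads need that care: two threads bump the same counter. With the mutex the result is always 2,000,000; comment the lock/unlock pair out and the increments race, and the final count comes up short. Again this is just an illustration, not code from the software I was debugging.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);       /* remove this pair to watch  */
        counter++;                       /* the race corrupt the count */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}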

When the design is not well thought out, the software becomes very difficult to understand...

 

Posted on Sunday, March 18, 2007 at 06:31PM by Prokash Sinha
