Short Circuit - No Disassemble!

Remember the dialogue from that excellent and funny movie!

When working with proprietary code, it is sometimes absolutely necessary to disassemble pieces of code to understand some funky behavior. One example is working with Windows internals. One can't get far without disassembling some parts of the kernel implementation. The Windows debugger is good at that, and the Microsoft team is not afraid to encourage us to do it, provided, of course, that we know what we are doing!

So for someone not too familiar with the x86 family of code, this is a good starting point, just in case you might need it sometime. I have heard way too many times, "Hey, we are engineers, we make things!" But hardly ever will you hear, "Hey, we break things too!" Sometimes breaking little stuff gives you confidence, along with frustration and a craving to know more.

Before we go ahead, it is worth mentioning that Windows also supports other architectures. And in case you are completely new to delving deep into assembly language, while quite familiar with a higher-level language like C/C++, the first thing to read is "Just Enough Assembly Language to Get By" ( http://www.microsoft.com/msj/0298/hood0298.aspx ). It is a two-part article, and enough to make one curious. It did for me, though I was not new to assembly language.

On the other hand, you might learn by directly using assembly language to build little applications. So my suggestion is to use both methods.

Our discussion here is a bit different. It is about the GNU assembler (as), named GAS, and it is a bit more arcane than normal assembly language programming: the embedded (inline) assembly feature that lets you put assembly inside C code.

First, compatibility-wise, some inline assembly does not quite work from one platform to another. Be particularly watchful if you happen to test some of it on FreeBSD 7.2 or earlier. Remember that we are discussing inline assembly in a C file. GCC, the compiler, has no notion of assembly syntax, so it does not try to parse anything inside such an instruction; it passes it on to the assembler, which knows what to do with it.

Next, we might ask why we need it. There are times when you need access to register-transfer-level instructions for various reasons: kernel programming, interrupt handling, raw performance, etc.

Examples:

asm("statements");     

is the general structure of a simple inline assembly statement.

asm("nop") ; asm("cli"); asm("sti");  are some of the simple statements.

In case you have a variable named asm in global scope, you may want to use __asm__ instead to avoid a name conflict.

Extended inline assembly is where this feature really shines! If you have a look at the Linux or XNU kernel, you will find quite a few examples of its use and its power.

The form of an extended inline assembly statement is -

asm ("assembly statement" : "output constraint(s)" : "input constraint(s)" : "clobbered constraint(s)" );

In simple inline assembly, none of the constraints are needed, which tells us they are optional. So any of the above constraints can be absent in an extended inline assembly instruction. Also, within any constraint clause we can have more than one constraint.

The most important part is the clobber list. GCC does not know or care which registers your statement uses as targets of operations, so if we want to preserve a consistent machine state, we need to tell the compiler which registers are being clobbered, so that GCC can produce code that saves them before they are used in the inline statement.
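
Putting the pieces together, here is a minimal sketch (x86, AT&T syntax, built with GCC; the variable names are mine) of an extended inline assembly statement with one output, one input, and a clobber:

#include <stdio.h>

int main(void)
{
    int src = 41, dst;

    /* Copy src into dst, then increment it.
       "=r" (dst) : output operand, any general-purpose register.
       "r"  (src) : input operand.
       "cc"       : incl modifies the condition codes (flags),
                    so we list them as clobbered. */
    __asm__ ("movl %1, %0\n\t"
             "incl %0"
             : "=r" (dst)
             : "r" (src)
             : "cc");

    printf("dst = %d\n", dst);   /* prints dst = 42 */
    return 0;
}

Note that GCC substitutes %0 and %1 with the registers it picked for dst and src; we never name the registers ourselves.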

Posted on Friday, February 27, 2015 at 10:13AM by Prokash Sinha

Monkeys on Trees V

Placeholder

Posted on Friday, February 27, 2015 at 10:12AM by Prokash Sinha

Monkeys on Trees IV

Placeholder

Posted on Friday, February 27, 2015 at 10:11AM by Prokash Sinha

Monkeys on Trees III

Amortization is an averaging technique over a sequence of operations, as opposed to taking the worst case of individual operations. Here the operations are from the ADT point of view, not from the intrinsic cost of machine-level operations. Moreover, in a macro-coded machine, the intrinsic cost of a machine-level operation is itself an ADT operation at a finer granularity!

It is the right time to draw an analogy between Discrete Dynamic Programming (DDP) and amortized analysis! In DDP we want to avoid any duplication of work: if we have already computed something, we would like to reuse the result again and again. Think about Fibonacci sequence computation, for example. In amortized analysis we want to avoid adding unnecessary or somewhat absurd complexity; we look at the big picture (i.e., the whole sequence of operations) and average it out, and this is not a probabilistic average.
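
To make the DDP side concrete, here is a minimal sketch in C of memoized Fibonacci (the function and array names are mine); each value is computed once and reused ever after:

#include <stdio.h>

/* fib(92) is the largest Fibonacci number that fits in a long long. */
static long long memo[93];

long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)                 /* already computed: reuse it */
        return memo[n];
    return memo[n] = fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("%lld\n", fib(50));        /* 12586269025 */
    return 0;
}

Without the memo table the same call would take exponential time; with it, linear.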

There are three main amortization techniques in use: aggregate, accounting, and potential. There is no restriction to those three; any other method is fine as long as it makes real sense for the underlying analysis. As we know, the analysis does not impact the implementation, and it should not.

At this point it is enough to know that the aggregate method assigns the same average cost to each operation type, and is intuitive in nature. One remark deserves mention here: the distinction is between the total average cost and per-operation-type costs. In the accounting and potential methods, individual operation types can be charged differently in the averaging process. Please take a look at a standard textbook on algorithms.
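
The classic aggregate-method example is a dynamic array that doubles its capacity when full. In this minimal sketch in C (the names are mine), n pushes copy at most n + n/2 + n/4 + ... < 2n elements in total, so each push is O(1) amortized even though an individual push can cost O(n):

#include <stdlib.h>

typedef struct {
    int *data;
    size_t used, cap;
} vec;

/* Append x, doubling the capacity when the array is full. */
void vec_push(vec *v, int x)
{
    if (v->used == v->cap) {
        v->cap = v->cap ? 2 * v->cap : 1;   /* double (or start at 1) */
        v->data = realloc(v->data, v->cap * sizeof *v->data);
        if (!v->data)
            abort();                        /* out of memory */
    }
    v->data[v->used++] = x;
}

int main(void)
{
    vec v = {0};                   /* empty vector */
    for (int i = 0; i < 1000; i++)
        vec_push(&v, i);           /* 1000 pushes, O(1) amortized each */
    free(v.data);
    return 0;
}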

We will emphasize the potential method, since it is one of the main analysis types for online algorithms. It also has interplays with probabilistic algorithms. A probabilistic algorithm has to tackle a probabilistic bound as well as a complexity bound. Many such algorithms are best viewed in a limiting sense, so it is better to think in terms of limit theorems, limiting probabilities, etc.
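
For reference, here is the potential method in its standard textbook form (the usual definition, not anything specific to this post): choose a potential function Φ mapping each data-structure state D_i to a number, and define the amortized cost of the i-th operation as

amortized_i = actual_i + Φ(D_i) − Φ(D_{i−1})

Summing over the whole sequence, the Φ terms telescope, so as long as Φ(D_n) ≥ Φ(D_0), the total amortized cost is an upper bound on the total actual cost.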

Posted on Wednesday, May 7, 2014 at 05:05PM by Prokash Sinha

Monkeys on Trees II

So what is so important about MTF (move-to-front)?

Well, it has been used widely in operating systems code, and it is a fairly simple and intuitive way to make an LRU linked list. The idea is to move the accessed node to the front. The C code is fairly simple: create a linked list with keys (or items), access random nodes of the list, and on each access (for whatever reason other than taking the node out of the list and deleting it), move the node up to the front. I am leaving it as an exercise :), but a small sketch follows.
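
Here is a minimal sketch in C of the move-to-front step (the names are mine; consider it a hint rather than the full exercise, and the driver loop stays yours):

#include <stdlib.h>

struct node {
    int key;
    struct node *next;
};

/* Search for key; on a hit, unlink the node and splice it in at the
   head (move-to-front), so recently accessed keys stay cheap to find. */
struct node *mtf_find(struct node **head, int key)
{
    struct node *prev = NULL, *cur = *head;

    while (cur && cur->key != key) {
        prev = cur;
        cur = cur->next;
    }
    if (cur && prev) {              /* found, and not already at the front */
        prev->next = cur->next;     /* unlink */
        cur->next = *head;          /* splice in at the head */
        *head = cur;
    }
    return cur;                     /* NULL if the key is not in the list */
}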

Before we get into some basic foundations for the analysis, let's discuss a bit more about what we are up against. Let's take an offline example, meaning we have a priori knowledge of the sequence of operations. Now let's say there is an adversary who always accesses the last item on the list, but we just don't know it. In that case, every access the adversary makes forces us to traverse the whole list. So for a list with N nodes, we get O(N^2) for N operations. And surely we are not achieving any perf gain! This is the absolute worst case; in worst-case analysis, this is the upper bound.

The first wave of analyses that made sense came from the idea of giving a reasonable estimate. Now we need to discuss what they are and the underlying ideas behind them, before we get into online algorithm analysis.

Most of the original analyses were along these lines: given N operations, what is the worst case, what is the best case? Then came average-case analysis, which depends on probability. As an example, let's say N is 100. Then there are 100 factorial (100!) permutations, and under a uniform distribution any one particular sequence of 100 operations has probability 1/100!, so the pathological sequence behind the worst-case analysis above is rare. Hence the analyses started to make more sense. It is also clear that for small N, say 10, it is much more likely that an absolutely pathetic sequence of operations occurs that makes MTF look bad.

But a better approach is to use amortized analysis, because the above analyses may not make much sense either!

So what we know so far is that worst-case analysis is pessimistic, stressing the absolute worst case, and not very practical. And probabilistic analysis has to be based on probability metrics/parameters that are, at best, assumptions. One more thing: analysis is not part of crafting the code from an algorithm; it is a technique that gives an estimate of time and/or memory complexity, two important resources of a computing device!

Posted on Saturday, May 3, 2014 at 08:27AM by Prokash Sinha