Monkeys on Trees III

Amortization is an averaging technique applied over a sequence of operations, as opposed to taking the worst case of each individual operation. Here the operations are seen from the ADT point of view, not in terms of the intrinsic cost of machine-level operations. Moreover, in a macro-coded machine, a machine-level operation is itself an ADT operation at a finer granularity, with its own intrinsic cost!

It is the right time to draw an analogy between Discrete Dynamic Programming (DDP) and amortized analysis! In DDP we want to avoid any duplication of work: if we have already computed something, we would like to reuse the result again and again. Think about Fibonacci sequence computation, for example. In amortized analysis we want to avoid adding unnecessary or somewhat absurd complexity; we look at the big picture (i.e. the whole sequence of operations) and average the costs out, though not as a probabilistic average.
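
As a quick illustration of the DDP idea, here is a minimal C sketch of my own (not from any particular source): a memoized Fibonacci where every value is computed once and then reused.

#include <stdio.h>

/* Memoized Fibonacci: each fib(n) is computed exactly once and then
 * reused, which is the DDP idea of never duplicating work. */
static long long memo[93];   /* fib(92) is the largest value that fits in 64 bits */

static long long fib(int n)
{
    if (n < 2)
        return n;
    if (memo[n] != 0)        /* already computed earlier: just reuse it */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("fib(50) = %lld\n", fib(50));   /* runs in O(n), not O(2^n) */
    return 0;
}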

There are three main types of amortization techniques in use: aggregate, accounting, and potential. There is no restriction against using methods other than these three, as long as they make real sense for the underlying analysis. As we know, the analysis does not impact the implementation, and it should not.

At this point it is enough to know that the aggregate method assigns the same average cost to every operation and is intuitive in nature. One remark deserves mention here: in the aggregate method the total cost is simply divided evenly over all operations, whereas in the accounting and potential methods different operation types can be charged different costs in the averaging process. Please take a look at a standard textbook on algorithms.

We will emphasize the potential method, since it is one of the main analysis types for online algorithms. It also has interplays with probabilistic algorithms. A probabilistic algorithm has to tackle a probabilistic bound as well as a complexity bound. A lot of such algorithms are best understood in a limiting sense, so it is better to think in terms of limit theorems, limiting probabilities, etc...
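
To make the potential method a little more concrete before we use it, here is a toy C sketch of my own (a growable array with doubling); the potential argument lives in the comments. It is a sketch under the usual textbook assumptions, not a production container.

#include <stdlib.h>

/* Growable array whose push is O(1) amortized.
 * Potential function: Phi = 2*used - capacity.
 * A cheap push costs 1 and raises Phi by 2, so its amortized cost is 3.
 * A push that doubles copies `used` elements, but the doubling drops Phi
 * by roughly `used`, so its amortized cost is also O(1).  Summing over a
 * whole sequence, N pushes cost O(N) in total. */
struct vec {
    int    *data;
    size_t  used;
    size_t  capacity;
};

static int vec_push(struct vec *v, int x)
{
    if (v->used == v->capacity) {
        size_t ncap = v->capacity ? v->capacity * 2 : 1;
        int *p = realloc(v->data, ncap * sizeof *p);
        if (p == NULL)
            return -1;                 /* out of memory */
        v->data = p;
        v->capacity = ncap;
    }
    v->data[v->used++] = x;
    return 0;
}

int main(void)
{
    struct vec v = {0};
    for (int i = 0; i < 1000; i++)
        vec_push(&v, i);
    free(v.data);
    return 0;
}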

Posted on Wednesday, May 7, 2014 at 05:05PM by Prokash Sinha

Monkeys on Trees II

So what is so important about MTF ?

Well, MTF has been used widely in operating-systems-related code, and it is a fairly simple and intuitive way to make an LRU linked list. The idea is to move the accessed node to the front. The C code is fairly simple: create a linked list with keys (or items), access random nodes of the list, and as you access a node (for whatever reason other than taking it out of the list and deleting it), move it up to the front. Leaving it as an exercise :), though a rough sketch follows for the impatient.
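
For those who would rather peek than do the exercise, here is one possible minimal sketch (names and structure are mine, and it assumes integer keys); it just unlinks the accessed node and splices it in at the head:

#include <stddef.h>

struct node {
    int          key;
    struct node *next;
};

/* Search for key; if found, move that node to the front (MTF).
 * Returns the (possibly new) head of the list. */
static struct node *mtf_access(struct node *head, int key)
{
    struct node *prev = NULL, *cur = head;

    while (cur != NULL && cur->key != key) {
        prev = cur;
        cur  = cur->next;
    }
    if (cur == NULL || prev == NULL)   /* not found, or already at the front */
        return head;

    prev->next = cur->next;            /* unlink the node */
    cur->next  = head;                 /* splice it in at the front */
    return cur;
}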

Before we get into some basic foundations for the analysis, let's discuss a bit more about what we are up against. Let's take an offline example, meaning that we have a priori knowledge of the sequence of operations. Now let's say there is an adversary who always accesses the last item on the list, but we just don't know that. In that case, every time we access the node named by our adversary, we have to traverse the whole list. So for a list with N nodes, we pay O(N^2) for N operations. And surely, we are not achieving any perf gain! This is the absolute worst case; in worst case analysis, this is the upper bound.

The first wave of analyses that made sense came from the idea of giving a reasonable estimate... Now we need to discuss what they are and the underlying ideas behind them, before we get into online algorithm analysis.

Most of the original analyses were based on: given N operations, what is the worst case, what is the best case, something along this line. Then came average-case analysis, which happens to depend on probability. As an example, let's say N is 100. Then there are 100 factorial (100!) permutations of the accesses, and under a uniform distribution any one such sequence has probability 1/100!, so the single worst-case sequence above is rare. Hence the attempt to make more sense out of the analysis. It is also clear that for small N, say 10, it is far more likely that an absolutely pathetic sequence of operations occurs and makes MTF look bad.

But a better approach is to use amortized analysis, because the above analysis may not make much sense either !!!

So what we know so far is that worst case analysis is pessimistic, stressing the absolute worst case, and not very practical. And probabilistic analysis has to be based on some probability metrics/parameters that are at best assumptions. What else? Any analysis is not part of crafting the code from an algorithm; it is a technique that gives an estimate of time and/or memory complexity, the two important resources of a computing device!

Posted on Saturday, May 3, 2014 at 08:27AM by Prokash Sinha

Monkeys on Trees I

A refresher -

A binary tree is a kind of tree in which a node can have at most two subtrees: a left subtree and a right subtree. This is going to be our first topic in this series...

Its use became important mainly due to the dynamic structures that are needed for searching, among other operations like Add, Delete, etc. If we keep records (aka structures or objects) in a linear fashion, then given a key (which is nothing but an ID of a record), we need to find out whether it exists or not. If it does, we may return it, delete it, move it up to the root, etc. The operations depend on what we want to do with it.

We can see that if we use the keys to keep the tree in sorted order, then in the good case we can search for an item in the tree in O(logN), since the maximum height of the tree is O(logN) when it is fairly balanced. This is when the BST (binary search tree) comes in handy.
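
A minimal BST search in C (my own toy declaration, integer keys assumed) shows where that O(logN) comes from: every comparison throws away one of the two subtrees.

struct tnode {
    int           key;
    struct tnode *left;
    struct tnode *right;
};

/* Each step discards one subtree, so the cost is at most the height of
 * the tree: O(logN) when fairly balanced, O(N) when it degenerates
 * into a linked list. */
static struct tnode *bst_search(struct tnode *root, int key)
{
    while (root != NULL && root->key != key)
        root = (key < root->key) ? root->left : root->right;
    return root;                       /* NULL means not found */
}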

From the search and/or other operational perspective, there are 3 fundamental tree traversals: Inorder, Preorder, and Postorder. The fundamental assumption here is that a node has at most two children. When we say Inorder, we mean: operate on the left subtree, then operate on the node, then operate on the right subtree. In data structure and algorithm books you will find this called visiting a node instead of operating on it, but they are the same thing. This is where the operations take place.

Preorder traversal is: operate on the node, then on the left subtree, then on the right subtree. Postorder is defined as: operate on the left subtree, then on the right subtree, then on the node.

Note that in a BST the keys are arranged so that an Inorder traversal visits them in non-decreasing order. So if you traverse the nodes in Inorder, you will get the sorted list of keys. Hence the name, Inorder.
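
Here are the three traversals in C, using the same toy node shape as the search sketch above (re-declared so this piece stands on its own); the "operate on" step is just a printf here:

#include <stdio.h>

struct tnode {
    int           key;
    struct tnode *left;
    struct tnode *right;
};

static void inorder(struct tnode *n)   /* left, node, right: sorted order for a BST */
{
    if (n == NULL) return;
    inorder(n->left);
    printf("%d ", n->key);
    inorder(n->right);
}

static void preorder(struct tnode *n)  /* node, left, right */
{
    if (n == NULL) return;
    printf("%d ", n->key);
    preorder(n->left);
    preorder(n->right);
}

static void postorder(struct tnode *n) /* left, right, node */
{
    if (n == NULL) return;
    postorder(n->left);
    postorder(n->right);
    printf("%d ", n->key);
}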

BFS is breadth-first order, i.e. level-order: draw a simple BST and you will see the nodes being operated on one level at a time. DFS is depth-first, and the Inorder, Preorder, and Postorder traversals above are all depth-first techniques.

DFS is naturally recursive, so it can use the underlying call stack (or an explicit stack), while BFS is naturally iterative, so it uses a queue to enqueue and dequeue nodes and operate on them in the prescribed order.
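
And a level-order (BFS) sketch with an explicit queue, using the same toy node shape; the fixed-size array queue is an assumption made just to keep the example short:

#include <stdio.h>

struct tnode {
    int           key;
    struct tnode *left;
    struct tnode *right;
};

/* Breadth-first (level-order) traversal: nodes are dequeued in the order
 * they were discovered, one level at a time. */
static void bfs(struct tnode *root)
{
    struct tnode *queue[1024];         /* assumes at most 1024 nodes */
    int head = 0, tail = 0;

    if (root != NULL)
        queue[tail++] = root;
    while (head < tail) {
        struct tnode *n = queue[head++];
        printf("%d ", n->key);
        if (n->left  != NULL) queue[tail++] = n->left;
        if (n->right != NULL) queue[tail++] = n->right;
    }
}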

Before we go into those traversals, and the related problems that can be solved with them (using examples, of course), we want to have some introduction to the analysis techniques. Basic analysis of these is well laid out in many books. What is new here is online algorithm analysis. So ...

For now, take a look at a linked list: a single link following the next element, until it reaches a null node. When the list is a bit large and we want to optimize the access pattern using a heuristic like "move the most recently accessed node to the front, hoping it will be accessed again soon", we get into a debate about how good the approach is!! If we know the sequence of accesses offline, the analysis is different from when there is no predetermination of the sequence of operations. This is where competitive cost analysis comes into play.

MTF (move to front) in a linked list first came under the radar of analysis when it was quite popular in some systems-level programming and a proper analysis was required. AFAIK, this kicked off a new branch of analysis called competitive cost analysis. And it was started by Bob Tarjan and Dan Sleator, back in the 80s.

Posted on Saturday, April 26, 2014 at 09:20AM by Prokash Sinha

Monkeys on Trees !

First, my apology for not being active for a couple of months! Lately, jumping around like a monkey from one place to another and burning really big holes in my pocket, I had to back off from writing...

I've always thought about writing something about the Tree ADT and its various incarnations. First, it is a very common data structure; second, my mother told me that I have a monkey's bone in my hip, because in my school days I seemed to jump around everywhere except my study desk. Ah, that gives me enough confidence to write something about trees...

Trees were considered mainly because of applying an order relation to find data in O(logN). Basically the binary search tree (BST) is based on an implicit order relation, and it expects the container to be fairly balanced. In reality, a sequence of data may cause the binary tree to be quite unbalanced, so there are many reincarnations based on different needs. The area of BSTs has been under active research for a long time.

Due to distributed data processing for search, social media, etc., another thing that came to be useful is competitive cost analysis for online algorithms. This is now a fundamental basis for the analysis of a lot of data structures, including the BST and its variants.

The idea behind this analysis technique is to have some solid upper bound to compare against for online inputs. Here an object could be thought of as an element wrapped inside a data structure object; in the case of a BST, inside a BST node. For distributed processing of queries, social media trolling, etc., the elements are random, and there is no a priori knowledge of the elements. Think about a random infinite sequence of elements. Now apply your best strategy for your implementation, and analyze it using the competitive cost analysis paradigm.

A FRIENDLY WARNING - This set of notes is going to be rather long, and perhaps boring, but I hope it is going to highlight some key points !!!

The tree structure, with its implementations & analysis, is really a subset of graph algorithms, which in turn are a subset of Discrete Mathematics. For example, given a connected graph and a vertex of the graph, creating a tree rooted at that vertex is one example of what I mean by trees being a subset of graphs.

Way back when I was in graduate school pursuing Discrete Math and Algorithms, I was lucky enough to have both my advisors with Erdos number one. A person has Erdos number one if he or she co-authored a paper with Paul Erdos, a legend in the field of discrete math. So I got some exposure to thinking from a high enough level to encompass related algorithms.

I intend to cover the following topics about Tree, and some more as I go along -

1) Creating random trees, using C programming and data structures.

2) Tree traversals (inorder, preorder, postorder), BFS, DFS.

3) Parameters and attributes of a tree, and why they are important.

4) Random sets of operations - insertion, deletion, rotations, balancing, etc.

5) Kinds of trees, and why they are needed - Red-Black, AVL, among others.

6) Different transformations - these are used to solve tree-related algorithms.

7) Offline and online analysis - worst case, average, amortized, and competitive analysis.

8) etc.

 

 

 

Posted on Sunday, March 23, 2014 at 11:04AM by Prokash Sinha

parshing county, Nevada - continues

So what is parsing???

Definition - A parsing function is a function that tries to parse through a bit stream. This is a recursive definition!

At its basic tenet, parsing is a step that takes a stream and tries to interpret it. So, for example, you just got a buffer full of bits, and one program (i.e., a parsing function) says: well, it is a bit stream, or a byte stream, or a char stream. These are almost always NULL parsers. Some have an underlying, albeit very small, assumption, and some don't have any at all! Stop for a moment and see which ones have those assumptions!! What do I mean here? Well, at its lowest level a bit is an identifiable unit, but for our purposes it is more interesting to interpret it as part of a larger thing: a nibble, a byte, a char, a number. So there are those assumptions. These may not be interesting while we talk about parsing, but they are another fundamental point we need to understand. Also, getting the bigger picture almost always moves our understanding a step forward. For example, a parser usually deals with a token! So what is a token? It definitely has some assumptions, and I will leave it at that... But the main points are alphabet / separator / terminator, etc. Those I will discuss soon.

Now let's say that, given a stream of chars, a parsing function parses it and says it is a sentence in the English language. And yet another takes a stream of chars and says it is a valid arithmetic expression. What is common here is an underlying rule that the function is trying to recognize. The rule is an English grammar in the first case, and an algebra rule in the second case.

If you ask "Is there a possibility that a given stream can be interpreted under two different rules ... ?", then you are right at the theory behind parsing - Automata Theory. But we are not going there, not yet at least...

Definition of a Parser - A parser is a program that implements a rule. A NULL parser doesn't implement any rule. A rule is usually a specification to recognize something like: a valid IP address; a valid statement of a computer language; etc.

When a specification for a rule is written down, it is usually done through one of a wide variety of techniques. One such powerful and robust technique is the regular expression. A regular expression recognizes a regular language over a defined alphabet. For the definitions of alphabet and regular languages, please see wiki.
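
As a taste of what a regular expression buys you, here is a hedged sketch using the POSIX <regex.h> interface (available on most Unix-like systems). The pattern simply encodes the naive one-to-three-digits-per-part IP form discussed just below, so it happily accepts 999.999.999.999 too.

#include <regex.h>
#include <stdio.h>

/* Sketch only: matches the naive one-to-three-digits-per-part form,
 * so it accepts 001.02.3.4 and also 999.999.999.999. */
static int looks_like_ipv4(const char *s)
{
    regex_t re;
    int rc;

    if (regcomp(&re, "^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$",
                REG_EXTENDED | REG_NOSUB) != 0)
        return 0;
    rc = regexec(&re, s, 0, NULL, 0);
    regfree(&re);
    return rc == 0;
}

int main(void)
{
    printf("%d %d\n", looks_like_ipv4("001.02.3.4"), looks_like_ipv4("1.2.3"));
    return 0;
}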

As a very simple example, assume that we need to parse a string buffer and find out whether it is a valid IP address or not. What is a valid IP address, you may ask!... A valid IP address has the form nnn.nnn.nnn.nnn in IPv4, where each of the four parts, separated by the dot (.) character, is one to three digits from the set {0, 1, ..., 9}. A leading zero in any of those parts could be allowed, for simplicity. So 001.02.3.4 could be a valid IP address.

 

Now write a C program to see for yourself. This usually takes a bit of thought to program correctly! I would try it first as if I had never seen regular languages and regular expressions. Then we will go about defining a regular expression...

 

ip address  == [0-9][0-9][0-9].[0-9][0-9][0-9].[0-9][0-9][0-9].[0-9][0-9][0-9]    (each [0-9][0-9][0-9] group standing for one to three digits)

 

As you can see, leading zeros are allowed! Now we can refine it if we want to. Then we will write a program to test whether the specification and the implementation are correct or not.
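
Here is one possible hand-rolled attempt in C, written as if I had never seen a regular expression (the function name is mine, and it implements only the naive spec above, so it does no range check on the parts):

#include <ctype.h>

/* Accepts one-to-three digits, then a dot, four times (no dot after the
 * last group).  Leading zeros are allowed and no range check is done,
 * exactly as in the naive spec above. */
static int is_naive_ipv4(const char *s)
{
    for (int part = 0; part < 4; part++) {
        int digits = 0;
        while (isdigit((unsigned char)*s)) {
            s++;
            digits++;
        }
        if (digits < 1 || digits > 3)
            return 0;
        if (part < 3) {
            if (*s != '.')
                return 0;
            s++;                      /* skip the dot */
        }
    }
    return *s == '\0';                /* nothing may follow the last group */
}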

 

History - When regular expressions and parsing became a branch of science and widely popular, not many applications of the technique were found beyond compilers and tool writers. Then came the internet, and parsing became almost essential in WWW technology. The result is that a lot of languages now provide some kind of support for defining a regular expression over the underlying regular language, and for parsing information based on the definition of the expression. To name a few: Perl, Python, C++, C#, and Java.

 

More ...

Posted on Sunday, June 23, 2013 at 09:23AM by Prokash Sinha