## Monkeys on Trees II

So what is so important about MTF?

Well, MTF has been used widely in operating-systems-related code, as a fairly simple and intuitive way to build an LRU linked list. The idea is to move the accessed node to the front. The C code is fairly simple: create a linked list with keys (or items), access a random node of the list, and as you access it (for whatever reason other than taking it out of the list to delete it), move the node up to the front. Leaving it as an exercise :).

Before we get into some basic foundations for the analysis, let's discuss a bit more about what we are up against. Let's take an offline example, meaning that we have a priori knowledge of the sequence of operations. Now let's say there is an adversary who always accesses the last item on the list, but we just don't know it. In that case, every time we access the node given by our adversary, we have to traverse the whole list. So for a list with N nodes, we pay O(N^2) for N operations. And surely, we are not achieving any perf gain! This is the absolute worst case. In worst case analysis, this is the upper bound.

The first wave of analyses that made sense came from the idea of giving a reasonable estimate. Now we need to discuss what they are and the underlying ideas behind them, before we get into online algorithm analysis.

Most of the original analyses were along this line: given N operations, what is the worst case, what is the best case. Then came average-case analysis, which depends on probability. As an example, let's say N is 100. Then there are 100 factorial (100!) permutations, and under a uniform distribution any one such sequence of 100 operations has probability 1/100!, so the worst-case sequence above is rare. Hence the analysis started making sense. It is also clear that for small N, say 10, it is quite likely that an absolutely pathetic sequence of operations can happen that makes MTF look bad.

But a better approach is amortized analysis, because the above analyses may not make much sense either!

So what we know so far: worst case analysis is pessimistic, stressing the absolute worst case, and not very practical. And probabilistic analysis has to rest on some probability metrics/parameters that are at best assumptions. What else? No analysis is part of crafting the code from an algorithm; it is a technique that gives an estimate of time and/or memory complexity, two important resources of a computing device!

## Monkeys on Trees I

A refresher -

A binary tree is a kind of tree in which every node can have at most two subtrees: a left subtree and a right subtree. This is going to be our first topic in this series...

Its use became important mainly due to the dynamic structures needed for searching, among other operations like Add, Delete, etc. If we keep records (aka structures or objects) in a linear fashion, then given a key (which is nothing but an ID of a record), we need to find whether it exists or not. If it does, we return it, delete it, move it up to the root, etc. The operations depend on what we want to do with it.

We can see that if we use keys to build the tree in sorted order, then assuming the best case, we can search for an item in the tree in O(logN), since the height of the tree is O(logN) when it is fairly balanced. This is where the BST (binary search tree) comes in handy.

From the search and/or other operational perspective, there are 3 fundamental tree traversals: Inorder, Preorder, and Postorder. The fundamental assumption here is that a node has at most two children. When we say Inorder, we mean: operate on the left child, then operate on the node, then operate on the right child. In data structures and algorithms books you will find this called visiting, instead of operating on, a node, but they are the same. This is where the operations take place.

Preorder traversal is: operate on the node, then on the left subtree, then on the right subtree. Postorder is defined as: operate on the left, then on the right, then on the node.

Note that in a BST the keys are usually kept in non-decreasing sorted order. So if you traverse the nodes in Inorder, you will get the sorted list of keys. Hence the name, Inorder.

BFS is breadth-first order: nodes are visited level by level, which is why it is also called level-order traversal. DFS, on the other hand, is depth-first: it follows each branch all the way down before backing up, and the three traversals above (preorder, inorder, postorder) are all depth-first; they differ only in when the node itself is visited relative to its subtrees.

DFS is inherently recursive, so it can use the underlying call stack, while BFS is inherently iterative: it uses a queue to push and pop nodes so they are operated on in the prescribed order.

Before we go into those traversals, and the related problems that can be solved with them (using examples, of course), we want some introduction to the analysis techniques. The basic analysis of these is well laid out in many books. What is new here is online algorithm analysis. So...

For now, take a look at a linked list: a single link following the next element until it reaches a null node. When it is a bit large and we want to optimize the access pattern using a heuristic like "move the recently accessed node to the front, hoping it will be accessed again soon", we get into a debate about how good the approach is! If we know the offline sequence of accesses, the analysis is different than when there is no predetermination of the sequence of operations. This is where competitive cost analysis came into play.

MTF (move to front) in a linked list first came under the radar of analysis when it was quite popular in some systems-level programming, and a proper analysis was required. AFAIK, this kicked off a new branch of analysis called competitive cost analysis. It was started by Dan Sleator and Bob Tarjan, back in the 80s.

## Monkeys on Trees !

First, my apologies for not being active for a couple of months! Lately, jumping around like a monkey from one place to another and burning really big holes in my pocket, I had to back off from writing...

I've always thought about writing something about the Tree ADT and its various incarnations. First, it is a very common data structure; second, my mother told me that I have a monkey's bone in my hip, because in my school days I did seem to jump around many places except my study desk. Ah, that gives me enough confidence to write something about trees...

Trees were considered mainly for applying an order relation to find data in O(logN). Basically the binary search tree (BST) is based on an implicit order relation, and expects the container to be fairly balanced. In reality, a sequence of data may cause the binary tree to be quite unbalanced, so there are many reincarnations based on different needs. The area of BSTs has been under active research for a long time.

Due to distributed data processing for search, social media, etc., another thing that came to be useful is competitive cost analysis for online algorithms. This is now a fundamental basis for the analysis of a lot of data structures, including the BST and its variants.

The idea behind this analysis technique is to have a solid upper bound to compare against for online objects. Here an object can be thought of as an element wrapped inside a data structure object; in the case of a BST, inside a BST node. For distributed processing of queries, social media trolling, etc., the elements are random, and there is no a priori knowledge of the elements. Think about a random infinite sequence of elements. Now apply your best strategy in your implementation, and analyze it using the competitive cost analysis paradigm.

A FRIENDLY WARNING - This set of notes is going to be rather long, and perhaps boring, but I hope it is going to highlight some key points!

The tree structure and its implementations & analysis is really a subset of graph algorithms, which in turn is a subset of discrete mathematics. For example, given a connected graph and a vertex of the graph, creating a tree rooted at that vertex is one example of what I mean by trees being a subset of graphs.

Way back when I was in graduate school pursuing discrete math and algorithms, I was lucky enough to have both my advisors with Erdos number one. A person has Erdos number one if they co-authored a paper with Paul Erdos, a legend in the field of discrete math. So I got some exposure to thinking from a high enough level to encompass related algorithms.

I intend to cover the following topics about trees, and some more as I go along -

1) Creating random trees, using C programming and data structures.

2) Tree traversals (inorder, preorder, postorder), BFS, DFS.

3) Parameters and attributes of a Tree, why they are important etc.

4) Random set of operations - insertion, deletion, rotations, balancing, etc.

5) Kinds of trees, and why they are needed - Red-Black, AVL, among others.

6) Different transformations - these are to solve tree related algorithms.

7) Offline and online analysis - worst case, average, amortized, and competitive analysis.

8) etc.

## parshing county, Nevada - continues

So what is parsing?

Definition - A parsing function is a function that tries to parse through a bit stream. This is a recursive definition!

At its most basic tenet, parsing is a step that takes a stream and tries to interpret it. So, for example, you just got a buffer full of bits, and one program (i.e. a parsing function) says: well, it is a bit stream, or a byte stream, or a char stream. These are almost always NULL parsers. Some have an underlying, albeit very small, assumption, and some don't have any at all! Stop for a moment and see which ones have those assumptions! What do I mean here? Well, at its lowest level a bit is an identifiable unit. But for our purposes, it is more interesting to interpret it as part of a larger thing: a nibble, a byte, a char, a number. So there are those assumptions. They may not be interesting while we talk about parsing, but they are another fundamental assumption that we need to understand. Also, getting the bigger picture almost always moves our understanding a step forward. For example, a parser usually deals with a token! So what is a token? It definitely has some assumptions, and I will leave it at that... But the main points are the alphabet / separator / terminator, etc. Those I will discuss soon.

Now let's say that, given a stream of chars, a parsing function parses it and says it is a sentence of the English language. And yet another one took a stream of chars and says it is a valid arithmetic expression. What is common here is an underlying rule that the function is trying to recognize. The rule is an English grammar in the first case, and an algebra rule in the second.

If you ask, "Is there a possibility that a given stream can be interpreted under two different rules...?", then you are right into the theory behind parsing - automata theory. But we are not going there, not yet at least...

Definition of a Parser - A parser is a program that implements a rule. A NULL parser doesn't implement any rule. A rule is usually a specification to recognize something, like a valid IP address, or a valid statement of a computer language.

A specification for a rule is usually expressed through a wide variety of techniques. One powerful and robust technique is the regular expression. A regular expression recognizes a regular language over a defined alphabet. For the definitions of alphabet and regular languages, please see the wiki.

As a very simple example, assume that we need to parse a string buffer and find out whether it is a valid IP address or not. What is a valid IP address, you may ask!... A valid IPv4 address is of the form nnn.nnn.nnn.nnn, where n is any digit from the set {0, 1, ... 9}. For simplicity, a leading zero in any of the four parts, separated by the dot (.) character, could be allowed. So 001.02.3.4 could be a valid IP address.

Now write a C program to see for yourself. This usually takes a bit of thought to program correctly! I would try it first as if I had never seen regular languages and regular expressions. Then we will go about defining a regular expression...

ip address == [0..9]{1,3} . [0..9]{1,3} . [0..9]{1,3} . [0..9]{1,3}, where {1,3} means one to three digits

As you can see, leading zeros are allowed! Now we can refine it if we want to. Then we will write a program to test whether the specification and implementation are correct or not.

History - When regular expressions and parsing became a branch of science and widely popular, not many applications of the technique were found beyond compiler and tool writers. Then came the internet, and parsing became almost essential in WWW technology. The result is that a lot of languages now provide some kind of support for defining regular expressions over an underlying regular language, and for parsing information based on the definition of the expression. To name a few: Perl, Python, C++, C#, and Java.

More ...

## It's parshing county, Nevada. Wild wild west

It's parsing, and not much different from what I said in the title. Yes, it is the wild wild west. As you go along doing your day-to-day business, no matter how hard you try not to think about it, you eventually end up with stuff that either feels like it is at your fingertips, or feels far out!

Parsing is one such thing when you try to program it. I've seen many different mistakes, and it is not intuitive for a lot of people (including me). So what are parsing, pattern matching, etc.?

A bit of history first, if you don't happen to know it. Long, long back, computers were very specific to one type of computation or another. So there were computers for scientific computation, and some were for massive data processing and business computations. Then a generalization effort brought them together into universal (sort of) computers where you can do many different types of computations. Currently, that is simply taken for granted.

Now, for tackling interesting and somewhat big problems, there were already a few languages suitable for specific uses, instead of the almost machine-level language Assembler. For example, Fortran for scientific computation, Cobol for business computation. There were others that came, did their service, and went away. Since some of these languages were designed to solve specific types of problems, they were a bit awkward for other types. Moreover, the I/O systems were thought out to have block-level interfaces (especially to optimize system performance), which were directly exposed to the programs.

A major shift was the idea that I/O should be represented as a stream of something, letting the program interpret it the way it wants. The most successful product of this effort is the birth of the C language, where the stream of something is a stream of characters, and the encoding for a character is ASCII. The computer needs bits and bytes - not characters, not a word, not some formula... nothing.

So, given a character stream, what should I interpret it as, and how? This is where parsing comes into play. Basically what it means is: I, the program, want to interpret this stream this way, because I know I can get meaningful information out of it.

No wonder it is parshing county, Nevada, around 1850! The wild wild west, where everyone has her own rule and interpretation!!

Parsing, to me, is still a very elegant art - the Art of Programming. It is mostly used to parse streams for computer language statement validation/interpretation, and it has a rich foundation in computer science. Scanning a stream, giving it a meaningful representation, and then parsing it is the heart of computer languages and lots of other things. To appreciate the beauty, and its non-intuitive nature, just try to code up a program that parses a stream of text and finds out whether there is a valid C program statement in it. To be more specific -

int main(){;;;;; return 0);} - Is it a valid C program? And insert a dot ( . ) somewhere to see what it says. Very soon we will see that if-then-else, switch, etc. are going to make the program a living mess. And there are two questions we will have to answer: (1) Is it interpreting correctly? (2) Is it efficient?

So there has to be a science behind it too! If not, then "why not?" would be the question for inquiring minds.

More...