Switching in general ...

As a programmer, curiosity often runs at real-time priority, swapping everything else out so I can stay focused on it. I also heard in my earlier days that curiosity is good but being overly curious is not necessarily good. I did not and still don't know exactly what that means, but I guess it boils down to "need to know...".

In this case, it was not like that. It is just that I wanted to play around with a few platforms: Linux, Mac OS X, Windows. They all became part of my experience profile over the years. In some areas I'm at beginner level; in some others (well, shamelessly) I know enough to do damage :)

One of my machines (my day-to-day bread earner) runs Windows 7 x64. It runs a firewall, and it has Avast anti-virus. And I see surprising behavior. About a year ago it was infected with some virus that was eating up random characters from my editor buffers. All kinds of editors were affected. Having left security-related programming (needless to say, it is a huge area these days) a couple of years ago, I was somewhat hapless and hopeless. I have an antivirus that scans the whole system every night, and I have a firewall that blocks, or is supposed to block, worms and other stuff. Right? Well, that is not entirely true. For a certain kind of developer it is essential to be logged in as root/admin, and that comes into play when it boils down to digging your own grave.

Though I would not count myself as an expert, I had a funny feeling that one type of platform (Windows, Mac OS X, Linux) is not good enough to keep things going. I have to have all of them: first for curiosity, second for redundancy. Curiosity I explained; as for the other, I need to have things up and running. I should be able to keep the internet up, and I don't really like to be blamed when my kids can't submit their homework (these days a lot of it is computer based) in time. I simply cannot convince them why I cannot help them. Though there were times I had to take another machine, supposedly not infected, to get the files and have them printed at Kinko's.

Being in the security area, though not involved in every sphere of it, the current state of affairs always bugs me. So what I did in the past was to read a lot of books (yes, curiosity) and try to understand the scope and/or vastness that one has to conquer. It's huge. And the major steps are --

 

  • prevention
  • detection/analysis
  • recovery/mitigation

In my case, I tried to prevent, maybe not totally, by switching off the computers or changing to a non-administrative login. But I cannot switch off the machines; it is just like turning off the heater in a New England winter. I'm not yet totally sure if I could avoid all the problems by logging in as non-admin, but again, there are times when I need to be admin.

 

Before we go into the details of the three major steps, it is essential to understand their scope at large, since we are all (un)fortunately in a connected world. Our lifestyle has changed. Our economy, our survival, and our prospects all depend on so-called high tech. Make no mistake, there are plenty of examples floating around the internet that will give anyone an idea of the stakes, their value (material or not), and the stakeholders' risk.

 

For me, at a very personal level, I saw that it randomly chewed up lots of files. You would not see them, not even in the recycle bin. I use a lot of utilities that are handy, and I got used to them over the years. I've seen Mac OS X block my work like crazy. Same with Linux too. It's very time consuming, even if you turn on Wireshark/Ethereal to see what is going on, and that is just one part of what could go wrong. There are other areas that could be infected/compromised, and I'm doomed to the phrase "I don't know what I don't know"...

 

to be continued ...

 

Posted on Tuesday, August 7, 2012 at 07:10PM by Prokash Sinha

Stable Marriage Algorithms!

Before I continue on patterns of x64 assembly language, I have to take another pause. Yes, you can see that the heading changed again...

I was stuck digging out one problem we thought could be totally transient. This is where simple, crisp design is so important when it comes to kernel software. Until we can wrap our heads around a problem, we should always look for a way to decompose it. The reason behind that is reasoning itself. Some transient problems are so hard that disambiguating them, or rather making them consistently reproducible, is very hard.

I was trying to find this transient behavior in a software component that sits somewhere in the storage stack to enhance usability. Since we cannot trust the file system, I was using block-level I/O to reproduce it and, if possible, make it deterministic. This was a requirement...

So I started out by opening a target disk, then writing, reading, and comparing the data, synchronously and asynchronously. For sync I/O we know what to do: just do the I/O one sector at a time using the Windows API. For async we have a few choices: threads, overlapped I/O, etc. I picked overlapped I/O: wait until all the payloads are written, then read, and compare...

How do we compare? And how do we make it as deterministic as possible?

 

The first one was not that tough. We had prefabricated data, basically a sequence starting from wherever we want, and each sector gets a specific pattern in double-word buckets. The first sector gets the first number, the second sector the second number, and so on. So comparing is easy, both from the buffers and by looking at the raw disk.
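A minimal sketch of that pattern scheme, assuming 512-byte sectors and 32-bit (double-word) buckets; the names here are mine, not the original test tool's:

```c
#include <stdint.h>
#include <stddef.h>

#define SECTOR_SIZE 512
#define DWORDS_PER_SECTOR (SECTOR_SIZE / sizeof(uint32_t))

/* Fill one sector's buffer: every double word carries the sector's
   sequence number, so the raw disk is self-describing. */
static void fill_sector(uint32_t *buf, uint32_t seq)
{
    for (size_t i = 0; i < DWORDS_PER_SECTOR; i++)
        buf[i] = seq;
}

/* Return 1 if the sector read back still carries its expected number. */
static int check_sector(const uint32_t *buf, uint32_t seq)
{
    for (size_t i = 0; i < DWORDS_PER_SECTOR; i++)
        if (buf[i] != seq)
            return 0;
    return 1;
}
```

Because every double word of a sector carries that sector's sequence number, a mismatch pinpoints both the damaged sector and the offset within it, whether you look at the read buffer or the raw disk.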

As for getting a deterministic, repeatable reproduction of the transient bug, I simply could not do that. I had payloads with a random number of sectors at random starting addresses: one sector at a time, multiple sectors at a time, clustered sectors, etc., and they could be used for both sync and async. For example, if I want clustered, it breaks a payload of n sectors into 1, 2, 4, 8, 16, ... plus the remainder, so that 1 + 2 + 4 + ... + residue = n sectors, and you can do both sync and async among those payloads.
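The clustered decomposition can be sketched like this (a hedged illustration; the function name and interface are my own, not the original tool's):

```c
#include <stddef.h>

/* Break a payload of n sectors into chunks of 1, 2, 4, 8, ...
   plus a final residue, so the chunk sizes sum back to n.
   Returns the number of chunks written into out[]. */
static size_t cluster_split(size_t n, size_t *out, size_t max_chunks)
{
    size_t count = 0, chunk = 1;
    while (n > 0 && count < max_chunks) {
        size_t take = (chunk <= n) ? chunk : n;  /* last chunk is the residue */
        out[count++] = take;
        n -= take;
        chunk *= 2;
    }
    return count;
}
```

For example, a payload of 11 sectors splits into chunks of 1, 2, 4, and a residue of 4, which sum back to 11; each chunk can then be issued as a sync or async I/O.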

While we found some other bugs that were easy to fix, the one transient behavior remained unsolved.

What I know, and it repeats itself, is that testing alone will not get all the bugs out of a component. It's a blend of design/design/design, simplify/simplify/simplify, structure/structure/structure the primitives... then reason/reason/reason while the development process goes on...

 

By the way, did I mention that I read the book Science of Debugging? And I was carrying a paper called "Needle in Haystack" and another monograph named "Stable Marriage and its relation to other ...".

The last two were to show respect, and the hope that the family understands why I'm working long hours while 90% of the people around this small city are in bed :)

 

-pro

 

Note that it is a good testing mechanism, since the component under test, being a kernel module, needed to be bombarded with random I/O at high speed. Moreover, while I was doing just that, it helped me debug both the target component and the test component; each helped the other. If you want more detail, I'd be happy to explain later.

 

Posted on Friday, June 15, 2012 at 08:40PM by Prokash Sinha

Things to remember when in emergency!

I'm sure we all know that emergency means different things to different people and in different situations. Emergency preparedness when debugging can be handy, I think. And it matters more precisely because an emergency is, by definition, not a regular matter, and being non-regular categorically implies that we will not remember the details dearly...

When it comes to x64 assembly instructions, they are quite a bit different from x86.

So here are some, just to warm up -

 

// function returning void, taking two args
void function_return_void_takes_2_arg ( ULONG fristArg, ULONG secondArg)
{
    //chk if 1st arg is zero
    if (fristArg == 0 ){
        return;
    }
    //chk if 2nd arg is non-zero
    if (secondArg != 0 ){
        return;
    }
}

 

### Notice how the code was chewed up - no code emitted for the if logic, different from x86

### Look at how the args' offsets on the stack are stuffed in

 

_TEXT SEGMENT

fristArg$ = 16

secondArg$ = 24

function_return_void_takes_2_arg PROC

; 23   : {

$LN5:

mov DWORD PTR [rsp+16], edx

mov DWORD PTR [rsp+8], ecx

push rdi

; 24   : //chk if 1st arg is zero

; 25   : if (fristArg == 0 ){

; 26   : return;

; 27   : }

; 28   : //chk if 2nd arg is non-zero

; 29   : if (secondArg != 0 ){

; 30   : return;

; 31   : }

; 32   : } 

pop rdi

ret 0

//void f takes 4 args
void f_void_take_4_arg( ULONG First, ULONG Second, ULONG Third, ULONG Fourth )
{
    int i, *ptr_i;
    // init i
    i = 0;
    //take the addr of i
    ptr_i = &i;
}

### Notice how the args are passed through registers (the 1st 4 args usually)

_TEXT SEGMENT

### Note how the offsets are being stored in these *$ vars, where * is the variable name

### Also watch how the offsets took the stack push into account. Compare to x86-generated code too

i$ = 36

ptr_i$ = 56

First$ = 80

Second$ = 88

Third$ = 96

Fourth$ = 104

f_void_take_4_arg PROC

; 37   : {

$LN3:

### Watch this: the callee is spilling the register args into the stack slots the caller reserved (the shadow/home space) - the caller itself did not push them!

mov DWORD PTR [rsp+32], r9d  # 4th arg

mov DWORD PTR [rsp+24], r8d

mov DWORD PTR [rsp+16], edx

mov DWORD PTR [rsp+8], ecx  # 1st arg

push rdi   ## This push was taken into account when *$ vars for respective offsets are defined

sub rsp, 64 ; 00000040H ## Another fine point: this is not just local variable space, it also covers the shadow space for calls further down the path

mov rdi, rsp

mov rcx, 16

mov eax, -858993460 ; ccccccccH

rep stosd

mov ecx, DWORD PTR [rsp+80]

 

; 38   : int i, *ptr_i;

; 39   : // init i

; 40   : i = 0;

mov DWORD PTR i$[rsp], 0

; 41   : //take the addr of 

; 42   : ptr_i = &i;

lea rax, QWORD PTR i$[rsp] ## get the address of i. lea does not evaluate the indirection, just adds the offset.

mov QWORD PTR ptr_i$[rsp], rax  ### put into the ptr_i 

; 43   : 

; 44   : }

mov rcx, rsp

lea rdx, OFFSET FLAT:f_void_take_4_arg$rtcFrameData

call _RTC_CheckStackVars

add rsp, 64 ; 00000040H  ## undo the sub above so we can get back to the register we pushed

pop rdi

ret 0

f_void_take_4_arg ENDP  ## end of procedure

_TEXT ENDS  ## end of this code seg

 

Next I will delve into some common pattern blocks: while, do-while, for, switch, if-then-else, etc. Then I will also cover some normal stuff like structure access and procedure calls, so that we will have a handy reference when we have to look at disassembled systems code.

 

Enjoy!

 

Posted on Friday, April 27, 2012 at 09:59PM by Prokash Sinha

Pi day !

Today, the 14th of March, is Pi (3.14...) day! Does that mean it's an irrational day? Perhaps not...

There is an interesting constructive approach to showing that irrational numbers exist. IIRC, Cantor had a constructive method to show that the real number system has numbers beyond the rationals (the integers happen to be the degenerate case of the rationals, since the denominator is trivially 1).

Mathematical computation has long been engaged in producing ever larger decimal expansions of Pi. While that quest for longer and longer expansions goes on, one thing to note is that we expect never to find a finite block of digits that repeats forever. So in whatever expansion we have so far, there should never be a finite block of the decimal expansion that repeats itself indefinitely. If we ever got into that situation, Pi would no longer qualify as an interesting, irrational number! It would degenerate into a rational number.
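Why would a forever-repeating block disqualify Pi? Because any eventually repeating decimal is a ratio of integers: a repeating block of k digits with value r contributes r / (10^k - 1). A tiny sketch of that denominator (the specific numbers below are just my illustration):

```c
#include <stdint.h>

/* Denominator contributed by a repeating block of k decimal digits:
   10^k - 1 (e.g. 999999 for a 6-digit block). */
static int64_t repeating_block_denominator(int k)
{
    int64_t d = 1;
    while (k-- > 0)
        d *= 10;
    return d - 1;
}
```

For instance 0.142857142857... = 142857/999999, which reduces to 1/7, a plain rational number; so Pi's expansion must never settle into such a block.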

Dedekind also had a way to construct the real numbers, irrationals included! Following a similar approach, there is a newer number system called the surreal numbers. It has significant implications for combinatorial games and artificial intelligence. In most board games, the players' alternating moves create different positions. In certain kinds of games these positions have game values, and those values can dictate the search strategy...

 

 

Posted on Wednesday, March 14, 2012 at 05:30PM by Prokash Sinha

Memory on the High Lane!

Flash memory technology has now reached a state where it is becoming a de facto standard as secondary storage, replacing the hard disk, for lots of devices. One of the main reasons is speed, due to the lack of seek time; there are no rotating disks. Another is reliability. So as of 2011/2012 we see a lot of devices using SSDs made of flash memory. The solid state disk (SSD) is becoming more popular every day.

From a systems point of view, one of the interesting features of an SSD is that it can serve multiple I/Os concurrently, depending on the packaging and the FTL (flash translation layer). And this opens up relatively new ways of building systems. In particular, file systems, host device drivers, and flash firmware design have become very hot as well as very sophisticated areas.

Traditionally, file systems were designed with the HDD (hard disk drive) in mind, and many of them do not necessarily perform well on SSD-based storage media. Some of the old file systems are supported by providing a host device driver and flash firmware that, to some extent, take care of the underlying mechanics of flash.

So for the systems designer/programmer, understanding the underlying attributes and applying the right technological alternatives is very challenging. In a following series of notes I will try to chalk out some of the pros and cons of HDD-based storage subsystems. Then I will try to introduce the main design alternatives being considered for file systems.

Traditional file systems (FS) were meant to represent the sectors of an HDD in a structured and intuitive way. Structured, meaning that from the user's point of view we only need to understand directories and files in a hierarchical fashion. And intuitive, since we all know office/home cabinets and filing systems. Then, to avoid excessive file corruption, journaling (log-based) file systems became popular. A journaling file system writes a log record about a transaction before committing the data to the physical storage that backs the file system. The idea behind this is the ALL-or-NOTHING paradigm. Loosely speaking, a transaction may involve (a) metadata change(s) and (b) committing the data to the backing storage. So, to avoid excessive amounts of incoherent user data, the ALL-or-NOTHING paradigm became essential.

If the system fails after committing the log but before committing the actual data, the log can be used to recover the transaction and commit the data; in that case the ALL state for that transaction is achieved. But if the failure occurs while the log itself is being committed, the system can always fall back to NOTHING and retry the transaction from the start, if possible.
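A toy sketch of that ALL-or-NOTHING rule (the record layout and names are mine, not any particular file system's): at recovery, a transaction is replayed only if its commit mark made it into the log.

```c
/* Toy redo log: each record is (committed?, target slot, new value). */
struct log_rec { int committed; int slot; int value; };

/* Recovery: apply only records whose commit mark reached the log.
   An uncommitted record is ignored -> NOTHING; a committed one is
   re-applied -> ALL. Returns the number of records replayed. */
static int recover(int *data, const struct log_rec *log, int nrec)
{
    int replayed = 0;
    for (int i = 0; i < nrec; i++) {
        if (log[i].committed) {
            data[log[i].slot] = log[i].value;
            replayed++;
        }
    }
    return replayed;
}
```

Re-applying a committed record is idempotent here, so replaying the log twice after a crash is harmless; an uncommitted record simply never touches the data.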

Now, since it is only interesting to keep the last few records (few being a relative word here), the log does not need to grow continuously. Hence the log is usually circular in nature, meaning old records are eventually overwritten.
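A circular log can be as simple as a wrapping write cursor; a minimal sketch (the capacity and names are my choices for illustration):

```c
#include <stddef.h>

#define LOG_SLOTS 8  /* capacity chosen for the sketch */

/* Circular log: the write cursor wraps, silently reclaiming the
   oldest record once the buffer is full. */
struct circ_log {
    int rec[LOG_SLOTS];
    size_t next;   /* total records ever appended */
};

static void log_append(struct circ_log *l, int r)
{
    l->rec[l->next % LOG_SLOTS] = r;
    l->next++;
}

/* Oldest record still available (only the last LOG_SLOTS survive). */
static int log_oldest(const struct circ_log *l)
{
    size_t start = (l->next <= LOG_SLOTS) ? 0 : l->next - LOG_SLOTS;
    return l->rec[start % LOG_SLOTS];
}
```

After appending more than LOG_SLOTS records, the earliest ones are gone for good, which is exactly the trade-off the circular design accepts.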

SSDs are mostly NAND based. NAND has random-access ability, so it differs from HDD in terms of seek time, latency, etc. Flash memory also uses out-of-place writes, meaning old valid data cannot be overwritten in place; new data has to be written to a still-fresh location. So the mapping of logical to physical sectors changes all the time. As an example, if we write some data to a sector once, we cannot reuse that sector before erasing its block, which sets all the bits of the sector back to 1. When a sector has all bits set to 1 it is in a write-once state; once data is written, we cannot overwrite it without erasing again. So every once in a while someone or something has to move the good data of a sector to a different place and then erase. Flash uses block erase, where a block consists of one or more sectors. This is one reason random writes can slow performance: not because read access becomes slower, but because the underlying system needs to garbage collect, erasing and compacting valid data into blocks of NAND storage.
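The out-of-place remapping is the heart of the FTL; here is a deliberately tiny sketch (no garbage collection, and all names and sizes are my assumptions):

```c
#include <stdint.h>

#define NUM_SECTORS 16
#define INVALID 0xFFFFu

/* Toy FTL: every logical write goes to a fresh physical sector
   (out-of-place), and the map is updated; the old physical sector
   becomes garbage awaiting block erase. */
struct ftl {
    uint16_t map[NUM_SECTORS];  /* logical -> physical */
    uint16_t next_free;         /* next fresh physical sector */
};

static void ftl_init(struct ftl *f)
{
    for (int i = 0; i < NUM_SECTORS; i++)
        f->map[i] = INVALID;
    f->next_free = 0;
}

/* Returns the physical sector the data landed on. */
static uint16_t ftl_write(struct ftl *f, uint16_t logical)
{
    uint16_t phys = f->next_free++;  /* never overwrite in place */
    f->map[logical] = phys;          /* remap; old location is now garbage */
    return phys;
}
```

Writing the same logical sector twice lands on two different physical sectors; the first copy becomes garbage that a later block erase reclaims, which is exactly the background work that makes random writes expensive.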

These and other attributes open up many alternative approaches to system design and to the underlying software techniques. For example, many data structures now have interesting applications in these systems, and log-structured file systems (LSFS) are becoming a bit more practical and important.

An LSFS is purely log structured, meaning the whole file system is the log.

 

Finally, I will try to emphasize some of the analytical methods and associated data structures that are suitable for SSD-based I/O subsystems.

Posted on Friday, January 13, 2012 at 03:55PM by Prokash Sinha