I do know how important proper communication is, as I said earlier. Most people learn to communicate persuasively and intelligently, and that is how objectives get achieved. But in computer communication between processes, there is hardly any room for being persuasive; or rather, persuasive means correct.
In my previous note, I mentioned how shared memory can be used for interprocess communication, but all the synchronization needed for correct communication is on you. That work can be delegated to the underlying system by using other methods. On Windows, one such method is the named pipe. It is a FIFO, and a bit more.
The objective of my experiment was to have a send/receive loop between a server and multiple client applications. Essentially, the server has a dedicated pair of channels between itself and each specific client. So if 10 clients try to communicate with the server, there will be 10 pairs of IPC channels. Each pair has two unidirectional channels, one going in each direction. The choice of a pair of unidirectional channels instead of one bidirectional channel is to simplify implementation and debugging. By nature, an application gets a handle to an opened instance from the system, and that's it. To make a sane channel, you and I have to wrap this handle as an element of an abstract channel. The abstract channel can carry lots of additional information: state, messages received or sent, last used time, and so on and so forth. The channels really form a cross-bar switch: the read end of one is the write end of its partner, and vice versa.
I started out with such an abstract channel precisely because I knew I would need it for debugging and performance work. One problem, and it is always the problem when communicating processes are to sustain a conversation, is THE DEADLOCK. This is mainly due to failing to foresee the flaws in one's assumptions.
My assumption was that if (1) send and receive are blocking calls, (2) they start in alternating (i.e. ping-pong) fashion, and (3) the IPC channel is flawless, then I don't need any synchronization. Theoretically this is true. We can argue by stepping through the scenarios that satisfy the above assumptions and show that no synchronization is needed...
Now, since I implemented the IPC myself, how could I prove the channels are somewhat (if not totally) flawless? This is why I started out with the abstract channel. But assumption (1) turned out to be false. Blocking here is from the local system's point of view; it is not blocking or synchronous with respect to the other end. My assumption had been that the call would block until the local end-point knows how to satisfy it, the end-point being the end-point of the channel instance.
The problem is that if you just take the above assumptions and try out the IPC as stated, it would appear to work. In my case I was testing some of my own software, which does not necessarily deliver the message to the local end of the IPC channel right away, depending on the load of the system, making the send side essentially asynchronous. A correct implementation would keep a pending queue with an aging algorithm to take messages off the queue and send them; then we would not see the deadlock in ping-pong style communication. Finding that out was a side-effect, in a good way, of chasing down why it was deadlocking...
The net result of this note: as long as the assumptions (the persuasiveness) are clearly understood, communication is fun; otherwise you face big challenges finding the flaws in the assumption(s).
Really, I'm a strong believer in communication, in every form: written, spoken, signs and signals. It is the input to every facet of learning. And who does not want to learn?
Communicating processes means two or more processes that exchange information. The simplest example is two processes talking to each other. The medium is, of course, an information bus, and this bus could be almost anything: processes talk over wireless, over wired networks, over physical media other than what we call a network today. The most fundamental aspect of communication is signal processing, which lives in the physics and engineering domain. But our topic here is digital communication, particularly on Windows systems.
To achieve good / effective / reliable communication, the following steps are important -
- Choose the information bus: network, physical media, storage systems, etc.
- Get a way to programmatically talk: transmit junk back and forth to see that the bus is active, in raw form.
- Devise protocol(s) for good / effective / reliable communication.
- Incrementally implement.
- Test & debug.
In order to achieve a quick (well, fairly quick I would say!) implementation, I took the simplest approach first: the shared memory technique, in user-level programming. In a hurry, I will have a few bad pointer references, and staying at user level saves a lot of pain and agony, if you understand what I mean. Shared memory on a Windows system goes through the file-mapping paradigm. So the information bus I selected is shared memory.
To keep things simple, I picked a pair of such information buses, one for each direction. Here we have a choice: either communicate from one end to the other which bus will be used in which direction, or make an a priori assumption about it. For shared memory, I just took the a priori assumption by giving explicit names: client shared memory and server shared memory. The naming is purely based on who writes to the bus, so the server shared memory bus is for the server to write and the client to read. Things are simple. Why? Because by the time the client or server starts, everything about the bus is already in place. They just blindly communicate with each other, the way we talk without ever thinking about the atmosphere (or the ether) that acts as our bus.
Once the bus is in place and in raw mode, the two sides can talk, no matter how nonsensical the exchange is :). This is just to see that information is getting across. Albeit both sides have a bad experience in terms of learning anything, since it is like talking rubbish: no real communication at all.
Now the next step is to carry the payload across with meaningful context. Remember the old days, when the long-distance trunk lines were very noisy: we used to lose context and kept asking "what? what? please repeat", etc.
So this meaningful context is really the protocol. In this particular protocol we had the following -
1) A payload would be processed only once by the destination since that payload was conveyed only once.
2) The payload with associated protocol information is called message. Each message is atomically processed.
3) Every message has to be processed.
Now, since a message is processed only once, consuming it means it is gone from the pipe once processed. In our case the reader will atomically read, and if it is indeed a message from the writer, it will atomically erase it. On the writer side, it will make sure that the pipe/bus holds no message (it is a single-message channel) and then atomically write the whole message.
The payload is wrapped with header information such as: client id, sequence number, message length, etc. The message proper sits inside the total payload.
Following these simple rules, it is fairly easy to cook up a baseline information bus for communicating processes.
So what are the challenges, and how significant are they???
Significance is subjective, hence fairly easy to answer! It just depends on what you are dealing with and what the impacts are. So it could be anything from nothing to enormous; it all depends on how these things affect us!
I was reading a book, "Exploring Randomness" by Gregory J. Chaitin, that I bought a few years back. If you read and understand the concepts behind it, you will surely appreciate the vastness of this... But to touch on some areas ...
-- Types: virus / worms / bots / etc.
-- Delivery techniques: over the internet / plug-in device / device firmware itself / etc.
-- Language they speak: VB / Java / scripts / C / assembler / many others
-- Dialects of the language: calling conventions / name decorations / etc.
-- Dialects at the machine level: different compilers generate different code.
-- Their idols: many, since they love to emulate their idols
-- Capability: from simply killing a program, to making a system unstable, to taking down the internet, etc.
-- Stealthiness: again, from totally visible to totally stealthy.
There are other traits that anyone can find online. But imagine one thing: given a binary file, sometimes it is essential to find out under which compiler it was generated, and many disassemblers might be needed to capture the essence when analyzing such an infected module...
Now if I just take the dialects of C, there are at least three different compilers (and more if you try open source): Watcom, Borland, and Microsoft. Then there are quite a few calling conventions: _cdecl, Pascal, _stdcall, _fastcall. Also, for C++, there is for example the register convention for passing the THIS pointer. ONE QUESTION WOULD BE: given a binary, how do I disassemble it correctly and uniformly? YET ANOTHER QUESTION IS: how do I know what language and what dialect was used?
An observation: just a couple of months ago I was debugging a BugCheck in the Windows kernel, and I saw it say the image is corrupted. Two questions: What got corrupted? How did it get corrupted? By dumping the nearby disassembly, it was clear that code was corrupted. Well, at least the Windbg disassembly was showing the corrupted places. Note that in the past I played around with disassembly a bit to see how it behaves if I hand it arbitrary addresses in the .text segment. Say foo() is a function; I say u foo, where foo is the starting address, then try foo+1, foo+2, etc. You will see that the disassembler blissfully tries to interpret whatever you give it. NOW YOU SEE how easy it is to get a corrupted code segment, and the result is known.
But when the .text is properly aligned and not corrupted, we always want to be able to say - ah, this is the dialect of that language being used here. THAT WOULD LEAD US to the right mental make-up: look for known patterns, then try to analyze the rest...
A cautious reader might have one question - why did I mention the book about randomness? What, if anything, has it got to do with this discussion? Well, if you are really serious about it, I would recommend yet another book - Malicious Cryptography by Dr. Adam L. Young & Dr. Moti Yung. I'm sure it will enlighten a lot of readers about the depth of this topic!!!
Now I hope I was able to convey what we are up against ...
So, for preventive measures, we should ask one thing... How do we prevent ourselves from being infected by a virus? Well, we take a vaccine, and it may not cover all the newly formed viruses to come. That is why some people get infected with a new virus even after taking the vaccines. Then the question boils down to the chances of getting infected. So all the precautions, plus general good health, are what matter sometimes too. In this case it makes sense as well: precaution alone is not enough; general good health is also required...
But the software platforms I mentioned earlier may not have good health. This is a relatively new area - measuring the good health of a platform. And what do we even mean by good health here? I certainly don't have a definite answer, but it bugs me too!
Today's antivirus technology starts vaccinating others once some infection is detected and analyzed. So some systems will be compromised first; then others (being the lucky ones) will probably get a vaccine. What about keeping systems in good health in the first place? This is a fundamental problem!! What puts a platform in good health???
I can take up regular exercise and eat the right stuff, and within six months I have a better goodness index when it comes to health... What is the equivalent of this for platforms??? I don't know, but it is a good thing to think about, IMO.
Just to get a taste of the things we are up against, think about the scenario I have with my Windows or Mac X machine. At random times the system is very unresponsive. What do I do? Run some sniffer to capture what is going on? By the time I get around to it - starting the sniffer, perf tools, process lists, etc. - I see normal behavior and conclude, oh well, it was some lockup of the system. But how could I rationalize that, right in front of me, lots of files got deleted... Would I delete them myself? Never, ever. And I can't simply forget that doubt.
Basically, I don't know if I can trust this machine! Ah, that is real trouble then. Can it take some important info and exfiltrate it? Lots of questions ...
Naturally, we know there are problems. But we are not at all sure what they all are! We don't know what they could do to you and me today, or tomorrow, or a couple of years down the road, in case you happen to have the machine around for a while...
On the other side of the fence, curious people abound. And there is the problem: curious people will keep coming up with newer and newer techniques to fool the health of the system, and there will be a lot of people involved in eradicating those techniques or keeping them from becoming widespread.
Take, for example, forensic analysis of a compromised disk. It is by itself a major topic of discussion - a whole new branch of science, IMO. The natural question would be: how could something be planted inside the disk and go unnoticed? Unless the analysis tool knows every corner of how the disk system works, it will fail to detect a planted virus... Hmm, this is really ...
Before ending this note, I would like to mention that the last few years have shown enough evidence that this is becoming increasingly challenging. And for that reason, a widespread emphasis on tackling these situations is growing every day. There are now lots of courses, books, and jobs being created in this area...
Huge challenge ahead!!!
As a programmer, curiosity often runs at real-time priority, meaning it swaps everything else out and keeps me focused on the curiosity. I also heard in my earlier days that curiosity is good, but being overly curious is not necessarily good. I did not and still don't know exactly what that means, but I guess it is basically saying "need to know...".
In this case, it was not like that. It is just that I wanted to play around with a few technologies: Linux, Mac X, Windows. They all became part of my experience profile over the years. In some areas I'm at starter level; in some others (well, shamelessly) I know enough to do damage :)
One of my machines (my day-to-day bread earner) is Windows 7, x64. It runs the firewall, and it has Avast antivirus. And I see surprising behavior. About a year ago, it was infected with some virus that was eating random characters from my editor buffers. All kinds of editors were affected. Having left security-related programming (needless to say, it is a huge area these days) a couple of years ago, I was kind of hapless and hopeless. I have an antivirus that runs every night to scan the whole system, and I have a firewall that blocks, or is supposed to block, worms and other stuff. Right? -- Well, that is not entirely true. For a certain kind of developer, it is essential to be logged in as root/admin, and that can come into play when it boils down to digging your own grave.
Though I would not count myself an expert, I had a funny feeling that one type of platform (Windows, Mac X, Linux) alone is not good enough to keep things going. I have to have all of them: first for curiosity, second for redundancy. Curiosity I explained; as for the other, I need to have things up and running. I should be able to keep the internet up, and I don't really like being blamed for my kids not submitting their homework (these days a lot of it is computer based) in time. I simply can not convince them why I can not help them. Though there were times I had to take another machine, supposedly not infected, to get the files and have them printed at Kinko's.
Being in the security area, though not involved in every sphere of it, the current state of affairs always bugs me. So what I did in the past is read a lot of books - yes, curiosity - and tried to understand the scope and/or vastness that one has to conquer. It's huge. And the major steps are --
In my case, I tried to prevent, maybe not totally, by switching off the computers or changing to a non-administrative login. But I can not switch off the machines; that is just like turning off the heater in a New England winter. I'm not yet totally sure if I could avoid all the problems by logging in as non-admin, and again, there are times when I need to be admin.
Before we go into the details of the three major steps, it is essential to understand their scope at large, since we are all (un)fortunately in the connected world. Our lifestyle has changed. Our economy, our survival, and our prospects all depend on so-called high tech. Make no mistake, there are plenty of examples floating around the internet that will give anyone an idea about the stakes, their value (material or not), and the stakeholders' risk.
For me, at a very personal level, I saw that it randomly chewed up lots of files. You would not see them, not even in the recycle bin. I use a lot of utilities that are handy and that I got used to over the years. I've seen Mac X block my work like crazy; same with Linux too. It's very time consuming, and even if you turn on Wireshark/Ethereal to see what is going on, the network is just one part of what could go wrong. There are other areas that could be infected/compromised, and I'm doomed to the phrase "I don't know what I don't know"...
to be continued ...