Latency, a Performance Metric you can never ignore!

One of the least-considered performance metrics for an application is latency. It "is a time delay between the moment something is initiated, and the moment one of its effects begins or becomes detectable".

Bing and Google agree: slow pages lose users. Another interesting stat is that Amazon found every 100ms of latency cost them 1% in sales. A broker could lose $4 million in revenue per millisecond if their electronic trading platform is 5 milliseconds behind the competition.

I think it's a crime to lose that 1% of sales when there are many proven techniques to bring down latency. Let us see how.

While you could write a full book on the sources of latency and strategies to reduce it, this is my humble attempt to simplify the list:
  • Hardware-Level Latency - things to consider include the OS, processors, memory, local storage I/O & network I/O such as SAN/NFS.
  • Transmission-Infrastructure-Level Latency - another hurdle is the internet infrastructure itself: DNS, network links, router infrastructure, TCP overheads. In other words, the existing World Wide Web itself is a bottleneck, and the bulk of the latency lies in the information round trip between the browser and the server; the fewer round trips, the better.
  • Software-Application-Level Latency - latency from UI scripts, the encryption/decryption stack, the serialization stack, image processing, bugs/exceptions, etc.
  • Software Dependencies - internal messaging-system latency, the file system, the database, frameworks (while they add to developer productivity, they can add performance overhead), web servers (e.g., IIS, Apache), proxies/caches (e.g., Squid, memcached), VPNs, firewalls, SSL, external services (e.g., payment gateways).
This list is in no way exhaustive and depends on the application in question.
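Before attacking any of these sources, it pays to measure. Here is a minimal, illustrative Python sketch (the helper name and workload are my own, not from any particular tool); percentiles matter more than the mean, because tail latency is what your slowest requests actually experience:

```python
import time
import statistics

def measure_latency_ms(op, runs=200):
    """Time an operation repeatedly and report percentile latencies in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50": samples[len(samples) // 2],
        "p99": samples[min(runs - 1, int(runs * 0.99))],
        "max": samples[-1],
        "mean": statistics.fmean(samples),
    }

# A stand-in workload; swap in a real DB call or HTTP round trip.
stats = measure_latency_ms(lambda: sum(range(10_000)))
print(stats)
```

If the p99 is many multiples of the p50, the users hitting the tail are the ones you are losing.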

If countering latency were a process, it would be an infinite loop: latency has to be countered at each and every step, for the lifetime of the application.

A brief list of ways to counter latency:
  • Keep up with Moore's law - get better hardware! This is not always feasible and only helps up to a limit; good hardware will not go the whole way.
  • Reduce round trips to counter the transmission latency.
  • Scale out as early as possible. Use an Active/Active approach, which lets all the servers serve users in parallel, rather than the other way round with an Active/Passive approach. Scaling out early in the application's evolution means fewer costly re-architectures down the road.
  • ACID is dead, long live ACID! The CAP theorem rejects a pillar of old-style database design: a distributed, asynchronous system like the Web cannot simultaneously guarantee consistency, availability, and partition tolerance. What we can do is design systems that are inherently partitioned and loosely coupled and that use eventual consistency. With eventual consistency you can make an update to one partition and return; you don't have to coordinate a transaction across multiple database servers, which is what gives a system higher and more variable latency.
  • Use the Pareto principle, and cache more than 80% of user requests. Besides improving latency, this reduces overall costs! E.g., reverse proxies, memcached.
  • Use network-latency-beating strategies like TCP offload engines (TOE), FastTCP, SSL offloading, optimized firewalls, CDNs, etc.
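The caching point deserves a concrete illustration. Below is a minimal in-process sketch in Python; a real deployment would use memcached or a reverse proxy, and the names here (ttl_cache, render_page) are invented for the example:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60.0):
    """A tiny in-process cache with expiry, in the spirit of memcached.
    Repeat requests are served from memory, skipping the slow backend,
    which is exactly where the latency (and cost) win comes from."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]          # cache hit: no backend round trip
            value = fn(*args)          # cache miss: pay the latency once
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30.0)
def render_page(page_id):
    time.sleep(0.05)                   # stand-in for a slow DB/template step
    return f"<html>page {page_id}</html>"

render_page(1)   # slow: populates the cache
render_page(1)   # fast: served from memory
```

Per Pareto, if 80% of requests hit a handful of pages, caching just those pages removes the backend round trip from the vast majority of traffic.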
To say that I have oversimplified the latency problem with this blog post is an understatement. What I have done, though, is to provide a signpost that points the way to better performance.

Did this post benefit you? Do you want us to write on anything specific?


Bugs in Google Suggest

We all know that Google Suggest is a really cool feature of the Google search engine. But recently I noticed an interesting bug in it.
Let’s look at it.

I was trying to search for a C# namespace on Google. As I typed "C# System.IO.", Google suggested the correct namespaces beginning with "System.IO", but instead of showing "C# System.IO.Text" it showed "C System.IO.Text". Whenever we try to search for C# help, Suggest shows results for C. The interesting part is that when you select the first suggestion and search for it, it shows the expected output even though the search keyword was "C System.IO.Text".

The above image shows everything I said. Another oddity is that the count shown in Google Suggest always differs from the actual result count, and it never shows "C#" in suggestions. The following image is even more interesting.

I was trying to search for "C# and c difference" but Google suggested "c and c difference". It's funny, isn't it?

Let's see what Google says about it.

Google's help on search says:
"punctuation is ignored (that is, you can't search for @#$%^&*()=+[]\ and other special characters) while performing the search".

Then what I described above is not a bug. BUT Google also says:
"Punctuation in popular terms that have particular meanings, like [ C++ ] or [ C# ] (both are names of programming languages), are not ignored."

Then why did Google Suggest miss this exception case?
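My guess at a mechanism (an assumption on my part, not anything Google has confirmed): in a URL, the "#" character starts the fragment, which browsers never send to the server, so a partial query that isn't URL-encoded would lose everything from "#" onwards. A quick Python illustration:

```python
from urllib.parse import urlsplit, quote_plus

# "#" begins the URL fragment, and the fragment is never sent to the server.
parts = urlsplit("http://www.google.com/search?q=C# System.IO.Text")
print(parts.query)     # the "#" and everything after it are gone from the query
print(parts.fragment)  # ...they ended up in the fragment instead

# Properly percent-encoded, "C#" survives as "C%23":
print(quote_plus("C# System.IO.Text"))
```

If Suggest's client-side code ever sends the partial query without encoding it, the server would see "C" where the user typed "C#", which matches the behaviour above.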


A Case Study on Female Managers breaking stereotypes

Meera of BusinessWorld provided an exhilarating Case Study yet again in this week's BusinessWorld Issue dated 06 July 2009.

To give you a brief summary: this case is about a male manager's (MM) expectations of a bright, young, capable female employee (FM) who wants to take a break to concentrate on her family. The MM thinks the FM would not "retain her drive to be a brilliant manager" if she ceased to contribute to the organization in the way he believes she should (that is, by not taking the break she wanted)! The MM is then confronted by another intelligent lady who brings about a change in his mindset through an honest, healthy & practical debate. The MM's change of mindset is complete in spite of opposition from other parts of the organization intent on maintaining the status quo.

The case was an exhilarating read because of the following reasons:
  • The case's topic is breaking stereotypes about women at work, a subject dear to my heart. So my views could be biased, but disclaimers apart...
  • The case lucidly explains the mindset and dilemmas of a male manager who manages a woman who wants to take a long break from work to cater to the needs of her family.
  • The Case also lucidly captures the perspectives of women involved - their drive to contribute, their dilemmas about Work-Life Balance, experiences of women wanting to take breaks which are viewed negatively within the organizations, etc.
  • Above all, the case beautifully captures how a lady can bring about a positive difference in the thinking process of a male manager by
    • being straightforward, logical and practical to the problem at hand
    • consciously grounding her points of view in reality with practical examples
    • probing assumptions, rephrasing/clarifying them objectively & providing evidence to make her point
    • providing practical solutions to potential organizational expectations

  • This is a case study not just on common Gender Bias, but is also on bringing about organizational change through changing people's mindsets, another issue quite dear to my heart.
You can find the case in this week's BusinessWorld (issue dated 06 July 2009), or find a softcopy of the case attached here. Find the analysis for this case study here and here.

Let me know what you think about this case study by leaving a comment.


IIS Compression Features Will Save You Money!

Compression has been a feature of IIS for quite some time, but many of us seem completely oblivious to its existence, despite its performance gains and the savings from lower bandwidth! For starters, bandwidth is costlier than CPU.

With IIS 6, people experienced compression ratios upwards of 4x, which translates directly into bandwidth savings and, of course, faster page loads for the end user. Things, as we will see, have only gotten better with IIS 7.

Let's get our hands dirty and learn something, and leave this small (read: business) talk to lazy people, shall we?

With IIS 6, back up the IIS metabase before you do anything. The metabase is a sensitive, even dangerous, yet powerful file, and it's always good to have a configuration backup handy just in case.

The actual Compression configuration can be done as follows:
From the IIS snap-in,
- Right-click on the Web Sites node and click on Properties
- Select the Service tab
- Enable Compress application files
- Enable Compress static files
- Change Temporary Directory to the folder that you created above, or leave it at its default (make sure the destination directory has the right security access)
- Set the max size of the temp folder to something the hard drive can handle, e.g., 1000
- Save and close the Web Site Properties dialog

To change the IIS metabase directly (again, please be careful with this file):
- Open the metabase located at C:\Windows\system32\inetsrv\metabase.xml in Notepad
- Search for IIsCompressionScheme
- There should be two of them: one for deflate and one for gzip. These are the two compression schemes IIS supports.
- First, add aspx, asmx, php and any other extensions you need to the list in HcScriptFileExtensions. Make sure to follow the existing format carefully; an extra space will keep this from working correctly. Do this for both deflate and gzip.
- HcDynamicCompressionLevel has a default value of 0. This means that even if you did everything else right, compression for dynamic content is at the lowest level. The valid range is 0 to 10.
- Reset IIS
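For orientation, the gzip entry in metabase.xml ends up looking roughly like this (abridged sketch; the attribute names are the real ones discussed above, but the values shown, including the compression level and extension list, are illustrative):

```xml
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcCompressionDll="%windir%\system32\inetsrv\gzip.dll"
    HcDoDynamicCompression="TRUE"
    HcDoStaticCompression="TRUE"
    HcDynamicCompressionLevel="4"
    HcScriptFileExtensions="asp
        dll
        exe
        aspx
        asmx
        php"
    />
```

Note how each extension in HcScriptFileExtensions sits on its own line; as mentioned, a stray space here will silently break compression.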

And you have the compression up & running on your IIS!

The differences between IIS 7 & IIS 6 regarding the Compression feature:
Instead of per-extension compression as in IIS 6, compression in IIS 7 is per MIME type. Additionally, it's easier to enable or disable compression per server, site, folder or file in IIS 7, and it's configurable from IIS Manager, Appcmd.exe, web.config and all the programming APIs.
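As a sketch, enabling both kinds of compression from a site's web.config in IIS 7 uses the urlCompression section:

```xml
<configuration>
  <system.webServer>
    <!-- Enable compression of static and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```

Which MIME types actually get compressed is governed separately by the httpCompression section at the server level.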

Additionally, IIS 7 gives the cool ability to have compression automatically stopped when the CPU on the server is above a set threshold. This means that when CPU is freely available, compression will be applied, but when CPU isn't available, it is temporarily disabled so that it won't overburden the server.

The trade-offs of using the compression feature with IIS:
  • As you increase the compression level (0 to 10), time to first byte and CPU utilization increase while transactions per second nose-dive.
  • Static files are compressed once and cached, so you could use a higher compression level with them.
  • With dynamic content you have a lot more to consider. If your server isn't CPU-heavy and you actively administer it, crank the level up as far as it will go. If you're worried that you'll forget about compression when the server gets busier, and you want a safe setting you can set and forget, leave it at the default of 0, or move up to 4.
  • Make sure you don't compress already-compressed file types like JPG, GIF, EXE and ZIP. Their native formats already compress them, and extra attempts to compress them use up valuable system resources for little or no benefit.
Leave a comment on your impressions of this article, so that we can do a follow-up post if needed.


Essential Blogs and Google Reader for The Dummy

To keep growing with technology, reading blogs should be a habit for professionals. Here I have listed some useful blogs for IT professionals.

Read on and benefit.

Bink.nu

Bink.nu watches Microsoft like a hawk and is a leading source for Microsoft news and downloads. It was started in 1999 by Steven Bink, an IT professional from Amsterdam, The Netherlands. Starting with just a static HTML site, it has grown into a fully featured interactive web community.

Neowin.net

Neowin.net has technology news about Microsoft, Linux and Mac. Its Microsoft news is much the same as Bink.nu's content. It also covers new software with descriptions, making it a good site for watching betas and new releases, and it's equally good for gamers. Neowin launched in October 2000, founded by Steven Parker & Marcel Klum.


MSDN: Visual C# Headlines

It's a good website for C# developers; here you can read interesting facts about C#. The site is part of MSDN and often has sample C# code for new features. Posts are infrequent, but it's a must-read for all C# developers.


Scott Hanselman's Computer Zen

This blog is very useful for ASP.NET developers. It is created and maintained by Scott Hanselman, a Microsoft employee, who aims to spread good information about developing software, usually on the Microsoft stack. The blog covers most of the latest features in ASP.NET, and he writes solutions for most common ASP.NET problems.


Tech Crunch IT

TechCrunchIT (TCIT) is the newest blog in the TechCrunch network. TCIT is dedicated to obsessively profiling products and companies in the Enterprise Technology space. TCIT aims to promote an understanding of emerging and existing Enterprise technologies and to analyze their commercial, social, and consumer impacts.


Those are some essential blogs for IT professionals. Now let's look at Google Reader.

Google Reader

Google Reader shows you all of your favorite sites in one convenient place. It's like a personalized inbox for the entire web. Millions of sites publish feeds with their latest updates, and its integrated feed search makes it easy to find new content that interests you. It's a very good RSS reader; for me, it's the number-one web RSS reader. Go visit Google Reader and subscribe to all the RSS URLs above. I personally suggest you use Google Reader; it will build a new habit of reading blogs. If adding the URLs one by one feels tedious, download this and import it into Google Reader.


Happy reading!


The Semantic Applications - User Experience 2.0?

People have been thinking about making applications more user-friendly. One area of huge potential is making applications understand human language and its true meaning, also called the semantics of the language. For example, the word "press" could mean any of the following depending on context: journalism, a printing press, a machine that shapes material by the application of pressure, or even a weight-training exercise! A machine or piece of software cannot understand this outright.
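To see why this is hard, here is a deliberately naive Python sketch that picks a sense of "press" by keyword overlap with the surrounding context. The sense inventories are invented for illustration, and real semantic systems go far beyond this:

```python
# Toy word-sense disambiguation by keyword overlap. This only
# illustrates why context is required to pick a meaning.
SENSES = {
    "journalism":     {"reporter", "newspaper", "media", "headline"},
    "printing press": {"ink", "paper", "print", "typeface"},
    "machine press":  {"hydraulic", "metal", "force", "stamp"},
    "exercise":       {"bench", "barbell", "gym", "shoulder"},
}

def disambiguate(word, context):
    """Pick the sense whose keywords overlap the context the most."""
    context_words = set(context.lower().split())
    return max(SENSES, key=lambda s: len(SENSES[s] & context_words))

print(disambiguate("press", "the reporter filed the story with the newspaper"))
print(disambiguate("press", "three sets of bench press at the gym"))
```

Keyword overlap collapses as soon as the context is subtle or figurative, which is exactly the gap the services below try to close.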

This problem was thought to be a hard-to-crack research project, if not an unsolvable one. Recent developments, which are what this post is all about, are changing exactly that:

Open Calais, a Thomson Reuters initiative, wants to make the Web's content accessible, interoperable & valuable. Open Calais is a set of web services that automatically creates rich semantic metadata for any content you provide. The automated service can recognize people, places, companies and events in unstructured text, which goes a long way in machine learning and natural-language processing. Try a few showcase apps and find out for yourself.

Meet Siri, your very own online virtual personal assistant. It's a new way to interact with the web. Like a secretary or a personal assistant, Siri attempts to get things done. "You can ask Siri to find a romantic place for dinner, and get reservations for Saturday night. You can discover things to do over the weekend, get tickets to the movies, or call a cab. You don't have to search through a bunch of web pages, following links and hunting down facts. Siri does all the work, giving you the information you need at your fingertips." Science fiction? I hate to disappoint you, but check it out for yourself.

The most popular of them all, Wolfram|Alpha, a computational knowledge engine that draws on multiple sources to answer user queries directly, needs little introduction. It has already made waves in the news even though it is in the early stages of its evolution.

Tired of SOAP? Any information from any data source, available to any existing application in real time: that is a mouthful! But Metatomix promises exactly that: "a common semantic understanding of your information across the enterprise, providing you a 360° view of your business information".

Hunch literally means premonition! True to its name, Hunch tries to predict what the user really wants. Consumer research has consistently shown that users struggle to answer complex queries and often do not even know what they want. You can ask Hunch questions like 'what computer should I buy' or 'where should I holiday', and it will give you a good recommendation rather than a long list of places or computer manufacturers.

Semantics is a fast-growing market and is no longer merely a field of research. Semantic capabilities will be an expectation of future computer systems; systems without them will be left to bite the dust! But these are still early days, so keep your eyes wide open.


How do the world's best web applications scale?

I have put together my thoughts in a slide deck about making better websites: proven technology patterns that improve website performance by at least 200%!

The slide-deck can be found here.

In the days to come you can expect detailed implementations and explanations of each of these patterns. You can also expect discussions & opinions about:
  • Strategies relating to performance engineering
  • Technology Trends
  • Interesting Facts
  • ecSoftware (the company that Anto and I work with)
  • Open Source Technology
  • Solutions to everyday technical problems
If you are interested in other topics, please drop us a line through the comments section and we would consider taking them up.


Why Blog?

"Why Blog?" People ask.

Blogging is many different things to many different people. Highly successful companies like Microsoft & Google encourage their employees to blog, to help establish thought leadership in their respective fields of interest. This helps them market their products better.

Many freelancers blog to earn their living. Quite a few people blog as a full-time job. Many others do it as a hobby. Still others do it to keep a record or journal of some sort.

This blog TechTalk does not fit into any of the above categories!

Anto and I believe we are amongst the geekiest (read: smartest) people. This blog is a really good way to express ourselves and form opinions within our company and its ecosystem.

More than anything else Anto and I believe that
"We cannot live for ourselves alone. Our lives are connected by a thousand invisible threads, and along these sympathetic fibers, our actions run as causes and return to us as results." Having learnt quite a lot from blogs across the internet, we feel a responsibility to teach others what we have learnt.

We hope you will benefit.
