Thursday, May 14, 2009

Most Used Practices to Speed Up Your Website

Minimize HTTP Requests



tag: content



80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.



One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.




Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.




CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.




Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. In addition, image-map navigation is not accessible, so it's not recommended.




Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.



Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.






Use a Content Delivery Network


tag: server



The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?


As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.


Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.


A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.


Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, Mirror Image Internet, or Limelight Networks. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.





Add an Expires or a Cache-Control Header



tag: server



There are two aspects to this rule:



  • For static components: implement a "Never expire" policy by setting a far future Expires header

  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests



Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.


Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.


      Expires: Thu, 15 Apr 2010 20:00:00 GMT


If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.


      ExpiresDefault "access plus 10 years"


Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.


Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.





Gzip Components



tag: server



The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.


Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.


      Accept-Encoding: gzip, deflate

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.


      Content-Encoding: gzip

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.


Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.


There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.


Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.


Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.





Put Stylesheets at the Top



tag: css



While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.


Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.


The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, are worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.





Put Scripts at the Bottom



tag: javascript



The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.


In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.


An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.





Avoid CSS Expressions



tag: css



CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They're supported in Internet Explorer, starting with version 5. As an example, the background color could be set to alternate every hour using CSS expressions.


      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );


As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.


The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.


One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
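
As a rough sketch of the event-handler alternative (the wiring below is illustrative, not taken from the article), the same alternating background can be set once at load time and again on resize, instead of on every mouse movement:

// Hypothetical sketch: apply the style from event handlers instead of
// letting a CSS expression re-evaluate it on every mouse move.
function setAlternatingBackground() {
    var color = (new Date()).getHours() % 2 ? "#B8D4FF" : "#F08A00";
    document.body.style.backgroundColor = color;
}
window.onload = setAlternatingBackground;
window.onresize = setAlternatingBackground;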





Make JavaScript and CSS External



tag: javascript, css



Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?



Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. Inlining reduces the number of HTTP requests but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.



The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.



Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!.
Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.



For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.
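
A minimal sketch of the dynamic download step (the file names here are assumptions, not real components) might look like this:

// Hypothetical sketch: the front page inlines its CSS and JavaScript,
// then fetches the external copies after onload so that subsequent
// pages find them already in the browser's cache.
window.onload = function () {
    var head = document.getElementsByTagName("head")[0];

    var css = document.createElement("link");
    css.rel = "stylesheet";
    css.type = "text/css";
    css.href = "/css/site_2.0.6.css";   // assumed external stylesheet
    head.appendChild(css);

    var js = document.createElement("script");
    js.src = "/js/site_2.0.6.js";       // assumed external script
    head.appendChild(js);
};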






Reduce DNS Lookups



tag: content



The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to look up the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.


DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.


Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)


When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.


Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.






Minify JavaScript and CSS



tag: javascript, css



Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI Compressor can also minify CSS.
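
As a small, made-up illustration of what minification strips out (the function is hypothetical, not from any library), here is the same JavaScript before and after:

// Before minification: comments and white space intact.
function addRowTotal(row) {
    // Sum the numeric cells of a table row.
    var total = 0;
    for (var i = 0; i < row.cells.length; i++) {
        total += parseFloat(row.cells[i].innerHTML) || 0;
    }
    return total;
}

// After minification (one line, comments and white space removed):
// function addRowTotal(b){var a=0;for(var c=0;c<b.cells.length;c++){a+=parseFloat(b.cells[c].innerHTML)||0}return a}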


Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.



In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.






Avoid Redirects



tag: content



Redirects are accomplished using the 301 and 302 status codes. Here's an example of the HTTP headers in a 301 response:


      HTTP/1.1 301 Moved Permanently
Location: http://example.com/newuri
Content-Type: text/html


The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.


The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.


One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.


Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.






Remove Duplicate Scripts



tag: javascript



It hurts performance to include the same JavaScript file twice in one page. This isn't as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.


Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.


In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.


One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.


      <script type="text/javascript" src="menu_1.0.17.js"></script>

An alternative in PHP would be to create a function called insertScript.


      <?php insertScript("menu.js") ?>

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
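
The same kind of guard can also be sketched on the client side (this is an illustration only, not the PHP approach above): a small loader that remembers which script URLs have already been inserted.

// Hypothetical sketch: refuse to insert the same script URL twice.
var insertedScripts = {};
function insertScriptOnce(src) {
    if (insertedScripts[src]) {
        return;                               // already on the page, skip it
    }
    insertedScripts[src] = true;
    var script = document.createElement("script");
    script.src = src;
    document.getElementsByTagName("head")[0].appendChild(script);
}

insertScriptOnce("menu_1.0.17.js");
insertScriptOnce("menu_1.0.17.js");           // the second call does nothing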






Configure ETags



tag: server



Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for a "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component's ETag using the ETag response header.


      HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Content-Length: 12195


Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.

      GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
If-None-Match: "10c24bc-4ab-457e1c1f"
HTTP/1.1 304 Not Modified


The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none






Make Ajax Cacheable



tag: content



One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It's important to remember that "asynchronous" does not imply "instantaneous".



To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Some of the other rules also apply to Ajax, such as gzipping components, reducing DNS lookups, minifying JavaScript, avoiding redirects, and configuring ETags.




Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.
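
A minimal sketch of that URL scheme (the endpoint and the renderAddressBook function are assumptions for illustration) could look like this:

// Hypothetical sketch: the last-modified timestamp is part of the URL,
// so an unchanged address book is answered from the browser cache.
var lastModified = 1190241612;   // supplied by the server, for example
var xhr = new XMLHttpRequest();
xhr.open("GET", "/addressbook?user=me&t=" + lastModified, true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        renderAddressBook(xhr.responseText);   // assumed rendering function
    }
};
xhr.send(null);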



Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.







Flush the Buffer Early



tag: server




When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page.
During this time, the browser is idle as it waits for the data to arrive.
In PHP you have the function flush().
It allows you to send your partially ready HTML response to the browser so that
the browser can start fetching components while your backend is busy with the rest of the HTML page.
The benefit is mainly seen on busy backends or light frontends.




A good place to consider flushing is right after the HEAD because the HTML for the head is
usually easier to produce and it allows you to include any CSS and JavaScript
files for the browser to start fetching in parallel while the backend is still processing.

Example:


 
... <!-- css, js -->
</head>
<?php flush(); ?>
<body>
... <!-- content -->


Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.







Use GET for AJAX Requests



tag: server




The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process:
sending the headers first, then sending data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies).
The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.


An interesting side effect is that POST without actually posting any data behaves like GET.
Based on the HTTP specs, GET is meant for retrieving information, so it
makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.
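
As a small illustrative sketch (the URL and the showResults function are made up), a data-retrieval Ajax call issued with GET simply carries its parameters in the query string:

// Hypothetical sketch: a GET request with parameters in the URL, which
// stays in a single packet as long as the URL and cookies are small.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/mail/search?q=" + encodeURIComponent("expense report"), true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        showResults(xhr.responseText);   // assumed display function
    }
};
xhr.send(null);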









Post-load Components



tag: content




You can take a closer look at your page and ask yourself: "What's absolutely required in order to render the page initially?".
The rest of the content and components can wait.



JavaScript is an ideal candidate for splitting before and after the onload event. For example,
if you have JavaScript code and libraries that do drag and drop and animations, those can wait,
because dragging elements on the page comes after the initial rendering.
Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.
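
As one possible sketch for the below-the-fold case (the data-src attribute is an assumed convention, not a standard), images can carry their real URL in a custom attribute and be filled in after onload:

// Hypothetical sketch: post-load images below the fold once the
// initial rendering is done.
window.onload = function () {
    var images = document.getElementsByTagName("img");
    for (var i = 0; i < images.length; i++) {
        var realSrc = images[i].getAttribute("data-src");   // assumed convention
        if (realSrc) {
            images[i].src = realSrc;
        }
    }
};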



Tools to help you out in your effort: YUI Image Loader allows you to delay images
below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly.
For an example in the wild take a look at Yahoo! Home Page with Firebug's Net Panel turned on.



It's good when the performance goals are in line with other
web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can
improve the user experience but you have to make sure the page works even without JavaScript. So after you've made sure the page
works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.






Preload Components



tag: content




Preload may look like the opposite of post-load, but it actually has a different goal.
By preloading components you can take advantage of the time the browser is idle and request components
(like images, styles and scripts) you'll need in the future.
This way when the user visits the next page, you could have most of the components already in
the cache and your page will load much faster for the user.
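
As a minimal sketch of the idea (the sprite URL is made up), the simplest form requests a component for the next page as soon as the current page has loaded:

// Hypothetical sketch: once the current page has loaded, quietly request
// an image the next page will need so it is already in the cache.
window.onload = function () {
    var img = new Image();
    img.src = "/images/results_sprite.png";   // assumed component of the next page
};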



There are actually several types of preloading:



  • Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components.
    Check google.com for an example of how a sprite image is requested onload. This sprite image is
    not needed on the google.com homepage, but it is needed on the consecutive search result page.

  • Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly.
    On search.yahoo.com you can see how some extra components are requested
    after you start typing in the input box.

  • Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear:
    "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a
    full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some
    components before you even launched the redesign. Your old site can use the time the browser is idle and request images and scripts
    that will be used by the new site.








Reduce the Number of DOM Elements



tag: content




A complex page means more bytes to download and it also means slower DOM access in JavaScript. It makes a difference
whether you loop through 500 or 5000 DOM elements on the page when, for example, you want to add an event handler.



A high number of DOM elements can be a symptom that there's something that should be improved with the markup
of the page without necessarily removing content.
Are you using nested tables for layout purposes?
Are you throwing in more <div>s only to fix layout issues?
Maybe there's a better and more semantically correct way to do your markup.



A great help with layouts are the YUI CSS utilities:
grids.css can help you with the overall layout, fonts.css and reset.css
can help you strip away the browser's default formatting.
This is a chance to start fresh and think about your markup,
for example use <div>s only when it makes sense semantically, and not because it renders a new line.



The number of DOM elements is easy to test, just type in Firebug's console:

document.getElementsByTagName('*').length



And how many DOM elements are too many? Check other similar pages that have good markup.
For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).







Split Components Across Domains



tag: content




Splitting components allows you to maximize parallel downloads. Make sure you're using
no more than two to four domains because of the DNS lookup penalty.
For example, you can host your HTML and dynamic content
on www.example.org
and split static components between static1.example.org and static2.example.org.



For more information check
"Maximizing Parallel Downloads in the Carpool Lane" by Tenni Theurer and Patty Chi.






Minimize the Number of iframes



tag: content




Iframes allow an HTML document to be inserted in the parent document.
It's important to understand how iframes work so they can be used effectively.



<iframe> pros:



  • Helps with slow third-party content like badges and ads

  • Security sandbox

  • Download scripts in parallel



<iframe> cons:



  • Costly even if blank

  • Blocks page onload

  • Non-semantic







No 404s



tag: content




HTTP requests are expensive so making an HTTP request and getting a useless response (i.e. 404 Not Found)
is totally unnecessary and will slow down the user experience without any benefit.




Some sites have helpful 404 pages ("Did you mean X?"), which is great for the user
experience but also wastes server resources (database lookups, etc.).
Particularly bad is when the link to an external JavaScript file is wrong and the result is a 404.
First, this download will block parallel downloads. Next, the browser may try to parse
the 404 response body as if it were JavaScript code, trying to find something usable in it.










Reduce Cookie Size


tag: cookie




HTTP cookies are used for a variety of reasons such as authentication and personalization.
Information about cookies is exchanged in the HTTP headers between web servers and browsers.
It's important to keep the size of cookies as low as possible to minimize the impact on the user's response time.





For more information check
"When the Cookie Crumbles" by Tenni Theurer and Patty Chi.
The take-home of this research:



  • Eliminate unnecessary cookies

  • Keep cookie sizes as low as possible to minimize the impact on the user response time

  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected

  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time









Use Cookie-Free Domains for Components


tag: cookie




When the browser makes a request for a static image and sends cookies together with the request,
the server doesn't have any use for those cookies. So they only create network traffic for no good
reason. You should make sure static components are requested with cookie-free requests. Create
a subdomain and host all your static components there.



If your domain is www.example.org, you can host your static components
on static.example.org. However, if you've already set cookies on the top-level domain
example.org as opposed to www.example.org, then all the requests to
static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static
components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com,
Amazon uses images-amazon.com and so on.



Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache
the components that are requested with cookies.
On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact.
Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it's best to use the
www subdomain and
write the cookies to that subdomain.






Minimize DOM Access



tag: javascript




Accessing DOM elements with JavaScript is slow, so in order to have a more responsive page, you should (a short sketch follows this list):



  • Cache references to accessed elements

  • Update nodes "offline" and then add them to the tree

  • Avoid fixing layout with JavaScript
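
A brief sketch of the first two points (the element id and the data are only examples):

// Hypothetical sketch: cache the reference once and build new nodes
// "offline" in a document fragment before touching the live tree.
var items = ["First", "Second", "Third"];          // example data
var list = document.getElementById("results");     // cached once, reused below
var fragment = document.createDocumentFragment();
for (var i = 0; i < items.length; i++) {
    var li = document.createElement("li");
    li.appendChild(document.createTextNode(items[i]));
    fragment.appendChild(li);
}
list.appendChild(fragment);                        // a single update to the live DOM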



For more information check the YUI theatre's
"High Performance Ajax Applications"
by Julien Lecomte.







Develop Smart Event Handlers



tag: javascript




Sometimes pages feel less responsive because of too many event handlers attached to different
elements of the DOM tree which are then executed too often. That's why using event delegation is a good approach.
If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of
one handler for each button. Events bubble up so you'll be able to catch the event and figure out which button it originated from.
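
A rough sketch of event delegation (the element id and the handleButtonClick function are assumptions):

// Hypothetical sketch: one click handler on the wrapper div instead of
// ten handlers, one per button.
var wrapper = document.getElementById("button-wrapper");
wrapper.onclick = function (e) {
    e = e || window.event;                   // IE passes the event via window.event
    var target = e.target || e.srcElement;   // the element the click originated from
    if (target.nodeName.toLowerCase() === "button") {
        handleButtonClick(target.id);        // assumed application function
    }
};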



You also don't need to wait for the onload event in order to start doing something with the DOM tree.
Often all you need is the element you want to access to be available in the tree. You don't have to wait for all images to be downloaded.

DOMContentLoaded is the event you might consider using instead of onload, but until it's available in all browsers, you
can use the YUI Event utility, which has an onAvailable method.
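
A small sketch of using DOMContentLoaded where it is available, with onload as a fallback (initPage stands in for whatever initialization you need):

// Hypothetical sketch: start working with the DOM as soon as it has been
// parsed, without waiting for every image to finish downloading.
function initPage() {
    // assumed initialization work that only needs the DOM tree
}
if (document.addEventListener) {
    document.addEventListener("DOMContentLoaded", initPage, false);
} else {
    window.onload = initPage;   // fallback for browsers without the event
}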




For more information check the YUI theatre's
"High Performance Ajax Applications"
by Julien Lecomte.










Choose <link> over @import


tag: css




One of the previous best practices states that CSS should be at the top in order to allow for
progressive rendering.



In IE @import behaves the same as using <link> at the bottom of the page, so it's best not to use it.







Avoid Filters



tag: css




The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7.
The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded.
It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.



The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 images instead, which are fine in IE.
If you absolutely need AlphaImageLoader, use the underscore hack _filter so as not to penalize your IE7+ users.






Optimize Images



tag: images




After a designer is done with creating the images for your web page, there are still some things you can try before you
FTP those images to your web server.



  • You can check the GIFs and see if they are using a palette size corresponding
    to the number of colors in the image. Using imagemagick it's easy to check using


    identify -verbose image.gif


    When you see an image using 4 colors and 256 color "slots" in the palette, there is room for improvement.


  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is.
    Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past.
    The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don't
    support variable transparency either.
    So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations).
    This simple imagemagick command results in totally safe-to-use
    PNGs:

    convert image.gif image.png


    "All we are saying is: Give PiNG a Chance!"


  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:


    pngcrush image.png -rem alla -reduce -brute result.png


  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize
    and remove comments and other useless information (such as EXIF information) from your images.


    jpegtran -copy none -optimize -perfect src.jpg dest.jpg







Optimize CSS Sprites



tag: images




  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.

  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.

  • "Be mobile-friendly" and don't leave big gaps between the images in a sprite. This doesn't affect the file size as much
    but requires less memory for the user agent to decompress the image into a pixel map.
    A 100x100 image is 10 thousand pixels, whereas a 1000x1000 image is 1 million pixels.






Don't Scale Images in HTML



tag: images





Don't use a bigger image than you need just because you can set the width and height in HTML.
If you need

<img width="100" height="100" src="mycat.jpg" alt="My Cat" />


then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.






Make favicon.ico Small and Cacheable



tag: images




The favicon.ico is an image that stays in the root of your server.
It's a necessary evil because, even if you don't care about it, the
browser will still request it, so it's better not to respond with a 404 Not Found.
Also since it's on the same server, cookies are sent every time it's requested.
This image also interferes with the download sequence, for example in IE when you request
extra components in the onload, the favicon will be downloaded before these extra components.



So to mitigate the drawbacks of having a favicon.ico make sure:



  • It's small, preferably under 1K.

  • Set an Expires header as far in the future as you feel comfortable (since you cannot rename the file if you decide to change it).
    You can probably safely set the Expires header a few months in the future.
    You can check the last modified date of your current favicon.ico to make an informed decision.



Imagemagick can help you create small favicons.







Keep Components under 25K



tag: mobile



This restriction is related to the fact that iPhone won't cache components bigger than 25K.
Note that this is the uncompressed size. This is where minification is important
because gzip alone may not be sufficient.



For more information check "Performance Research, Part 5: iPhone Cacheability - Making it Stick" by Wayne Shea and Tenni Theurer.






Pack Components into a Multipart Document



tag: mobile




Packing components into a multipart document is like an email with attachments:
it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive).
When you use this technique, first check if the user agent supports it (iPhone does not).

How To Save Traffic With Apache's mod_deflate

In this tutorial I will describe how to install and configure mod_deflate on an Apache2 web server. mod_deflate allows Apache2 to compress files and deliver them to clients (e.g. browsers) that can handle compressed content which most modern browsers do. With mod_deflate, you can compress HTML, text or XML files to approx. 20 - 30% of their original sizes, thus saving you server traffic and making your modem users happier.

Compressing files causes a slightly higher load on the server, but in my experience this is compensated by the fact that the clients' connection times to your server decrease a lot. For example, a modem user that needed seven seconds to download an uncompressed HTML file might now only need two seconds for the same, but compressed file.

By using mod_deflate you don't have to be afraid that you exclude users with older browsers that cannot handle compressed content. The browser negotiates with the server before any file is transferred, and if the browser does not have the capability to handle compressed content, the server delivers the files uncompressed.

mod_deflate has replaced Apache 1.3's mod_gzip in Apache2. If you want to serve compressed files with Apache 1.3, take a look at this tutorial: mod_gzip - serving compressed content by the Apache webserver




1 Enable mod_deflate


If you have Apache2 installed, mod_deflate should also already be installed on your system. Now we have to enable it. On Linux with Apache 2.2 installed, we can do it by adding the following line to httpd.conf:


SetOutputFilter DEFLATE


Then restart Apache2:


/etc/init.d/httpd restart


On other distributions you might have to edit Apache2's configuration manually to enable mod_deflate. You might have to add a line like this to the LoadModule section:






LoadModule deflate_module /usr/lib/apache2/modules/mod_deflate.so


Make sure you adjust the path to mod_deflate.so, and restart Apache2 afterwards.



2 Configure mod_deflate


The compression of files can be configured in one of two ways: either explicit exclusion of files by extension or explicit inclusion of files by MIME type. You can enable mod_deflate for your whole Apache2 server, or just for specific virtual sites. Depending on this, either open your Apache2's global server configuration section now, or just the vhost configuration section where you want to enable mod_deflate.



2.1 Explicit Inclusion Of Files By MIME Type


If you want to compress HTML, text, and XML files only, add this line to your configuration:






AddOutputFilterByType DEFLATE text/html text/plain text/xml


This is the configuration I'm using because I don't want to compress images or PDF files or already compressed files such as zip files.



2.2 Explicit Exclusion Of Files By Extension


If you want to compress all file types and exclude just a few, you would add something like this to your configuration (instead of the line from section 2.1):






SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ \
no-gzip dont-vary
SetEnvIfNoCase Request_URI \
\.(?:exe|t?gz|zip|bz2|sit|rar)$ \
no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary


This would compress all files except images (gif, jpg, and png), already compressed files (like zip and tar.gz) and PDF files which makes sense because you do not gain much by compressing these file types.



2.3 Further Configuration Directives


Regardless of whether you use the configuration from section 2.1 or 2.2, you should add these lines to your configuration:






BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html


These lines are for some older browsers that do not support compression of files other than HTML documents.


The configuration is now finished, and you must now restart Apache2. On Debian, you do it like this:


/etc/init.d/apache2 restart


To learn about further configuration directives, take a look at Apache Module mod_deflate.



3 Testing


To test our compression, we add a few directives to our mod_deflate configuration that log the compression ratio of delivered files. Open your mod_deflate configuration and add the following lines:






DeflateFilterNote Input input_info
DeflateFilterNote Output output_info
DeflateFilterNote Ratio ratio_info
LogFormat '"%r" %{output_info}n/%{input_info}n (%{ratio_info}n%%)' deflate
CustomLog /var/log/apache2/deflate_log deflate


Make sure you replace /var/log/apache2 with your Apache2's log directory. This could be /var/log/httpd, /var/log/httpd2, etc.


Then restart Apache2. On Debian, do it like this:


/etc/init.d/apache2 restart


Now whenever a file is requested this will be logged in /var/log/apache2/deflate_log (or to whatever file you changed it to). A typical log line looks like this:






"GET /info.php HTTP/1.1" 7621/45430 (16%)


You see that the file info.php was requested and delivered. Its original size was 45430 bytes, and it was compressed to 7621 bytes, or 16% of its original size! This is a great result, and if your web site consists mostly of HTML, text, and XML files, mod_deflate will save you a lot of traffic, and for users with a low-bandwidth connection your site will load much faster.


If you don't need the logging after your tests anymore, you can undo the changes from section 3 and restart Apache2.


Tuesday, May 12, 2009

Over Rs 50,000 crore spent on Lok Sabha poll campaign

Though attempts were made to check the spiralling amounts of money spent during the campaign for Lok Sabha polls, neither the political parties nor the Election Commission have attained the desired results in the run-up to the 2009 Parliamentary elections.

According to rough estimates, the actual cost of running the campaign has crossed the staggering sum of Rs 50,000 crore.

According to insiders in the Congress and the BJP, both parties spent over Rs 20 crore in merely ferrying their leaders across India during the campaign. While the Congress had hired 21 helicopters and 18 executive jets, the BJP had hired 21 helicopters and 14 jets. These parties had to shell out Rs 80,000 per hour on each helicopter, plus the landing charges.

Prime Minister Manmohan Singh, Congress president Sonia Gandhi and Bharatiya Janata Party's prime ministerial candidate L K Advani were allowed to use Air Force planes to travel to their election rallies across India.

The political parties have reportedly spent Rs 3000 crore on advertisement campaigns. The Congress spent Rs one crore just to buy the rights to the superhit song Jai Ho, but it failed to click with the masses.

Even smaller parties like the Samajwadi Party and the Rashtriya Janata Dal also spent a considerable amount of money on the campaign trail.

The Election Commission's budget for the 2009 elections is Rs 1,300 crore, which includes the conduct of polls plus transportation and movement of security forces. The state governments and other government agencies have earmarked Rs 700 crore for photo identity cards, electronic voting machines and setting up polling booths.




Mayawati moves SC against Varun verdict

The Uttar Pradesh government today moved the Supreme Court challenging the advisory board's decision to revoke charges against BJP leader Varun Gandhi under the stringent NSA for his alleged hate speeches.

The advisory board on May 8 held that it neither found "plausible and convincing" grounds for the National Security Act being invoked against Varun nor was it satisfied by the explanation given by the Pilibhit District Magistrate.

29-year-old Varun, who is BJP's Lok Sabha candidate from Pilibhit, is currently on parole following a Supreme Court order after remaining in jail for nearly three weeks.

He was released from Etah jail on April 16. His parole expires on May 14. Varun was let off by a three-member Advisory board headed by senior judge of the Lucknow bench of Allahabad High Court Justice Pradeep Kant which went into the maintainability of Varun's detention by the UP government under the NSA imposed on March 29.

Microsoft may lay off more if warranted: Ballmer

Microsoft, which has announced laying off 5,000 employees including 55 in India, said on Tuesday it may look at more layoffs if the economic downturn dramatically worsens again.

"Presuming the economy hopefully stays as bad as it is and doesn't get dramatically worse, we will finish our plan, but if it gets dramatically worse again, we will look at things again," Microsoft Corporation CEO Steve Ballmer, told reporters in Mumbai.

The Redmond-based company had announced in January it would axe 5,000 jobs globally amid the ongoing slowdown.

It announced on Monday that it was slashing one per cent of its 5,500-strong Indian workforce, amounting to 55 layoffs, in a bid to realign its business in the country.

It added that it would continue to hire and create employment opportunities in line with the recovery and growth of the Indian economy.

"We had said that we would lay-off about 5,000 people. We are still filling other jobs. We are mostly through that process globally and there is still some work to do," Ballmer said.

"There are areas where we are continuing to add people. As I said, these are global additions, so it is a little hard to separate our work globally from our work in India," he added.

Ballmer said Microsoft is the second largest foreign IT employer in India and he doesn't see a change in that.

In the second round of job cuts effected on May 5, the software major said it would lay off 3,000 employees. In January, Microsoft had laid off 1,350-1,400 people, largely in the US.

The Bill Gates-led firm said it would make strategic investments, which are best suited to the current economic environment.

Friday, May 8, 2009

vSphere 4: Forerunner to a Data Center Revolution?



"The cloud" is a term that serves as a catchall for a variety of technology offerings that have Internet hosting as their common bond. Yet these services come in an unlimited variety of shapes and sizes. As the cloud begins to take on a less-wispy form, its potential is becoming clear. VMware imagines that it might one day function as a robust, complex virtual data center, with its own OS at the center.



Cloud computing has been a central subject and strategy for IT vendors of every sort, but the actual meaning of "cloud" remains hazy.

For Web-based information aggregators like Google (Nasdaq: GOOG) and Yahoo (Nasdaq: YHOO), the cloud offers a mechanism for delivering advertising-driven content and services.

For Software as a Service (SaaS) vendors, including Salesforce.com (NYSE: CRM), cloud infrastructures offer a highly efficient platform for hosting business applications and processes.

For service providers, the cloud provides the means for supporting emerging and yet-to-be-defined business and consumer offerings.

Anyone assuming there is much cloud commonality among IT vendors would be largely incorrect. Not surprisingly, many storage vendors see the cloud as a way of supporting storage- and data-centric consumer and business services.

Multi-platform systems vendors tend to define at least part of their cloud value propositions according to the capabilities of proprietary server platforms.

However, there are some similarities in x86/64 server-based cloud-specific products, largely because those systems commonly leverage virtualization technologies from vendors such as VMware (NYSE: VMW), Microsoft (Nasdaq: MSFT) and Citrix (Nasdaq: CTXS).

This last point provides the context for VMware's vSphere 4, which is essentially designed to drive forward the company's cloud computing strategy.

Driving Force

VMware has a different cloud vision: rather than seeing the cloud as a mechanism for simply delivering new or emerging service offerings, it imagines the enterprise data center as a highly flexible, scalable and changeable environment in which virtualization plays the central role in aggregating, integrating, managing and provisioning enormous pools of processor, server, storage and networking assets.

The company's view of the cloud appears to have struck a chord among its server vendor partners, whose executives offered their support on video or in person at the recent vSphere launch event at VMware headquarters in Palo Alto, Calif.

Cisco's (Nasdaq: CSCO) John Chambers, Dell's (Nasdaq: DELL) Michael Dell, EMC's (NYSE: EMC) Joe Tucci, HP's (NYSE: HPQ) James Munton and Intel's (Nasdaq: INTC) Pat Gelsinger were palpably enthusiastic about vSphere, and with good reason. If the effort succeeds to the extent that VMware and others expect, it will provide the driving force behind next-generation data centers.

Why? Because vSphere 4 is not just about cloud computing. While x86/64-based solutions have led server volume sales for several years, their overall performance and utilization have tended to suffer in comparison to Unix and mainframe systems.

Virtualization has helped to correct the traditionally woeful system utilization of x86/64 servers. Indeed, without virtualization, x86/64-based technologies would not be sustainable data center solutions.

Ushering In a New Age

VMware produced some eye-opening vSphere performance metrics -- including sustained 300,000 IOPS and up to 9,000 transactions per second on single systems -- that suggest a fundamental shift in performance that will allow x86/64 systems to fully inhabit every corner of the enterprise.

This, combined with other new features, including VMware Fault Tolerance, makes the platform eminently suitable for supporting business-critical applications and what VMware CEO Paul Maritz calls the "Big Computer" and the "21st Century Mainframe."

In other words, VMware considers vSphere 4 the key to ushering in an age in which highly virtualized, highly integrated industry standard x86/64 systems take over the jobs currently held by legacy enterprise systems.

Is this scenario remotely possible? Perhaps so. One could point to the emergence of x86/64 as the platform of choice in the vast majority of supercomputing installations -- a market once dominated by proprietary systems and technologies -- as an example of what is possible with innovative x86/64 development.

Is vSphere 4, then, poised to initiate the coming data center revolution?

Not quite. Though highly powerful and flexible, VMware's new offering is a work in progress, even if it is definitely several steps ahead of the company's previous offerings.

That said, if VMware delivers as promised on its product road map, vSphere 4 could become the platform to beat in x86/64 virtualization, and it will play an elemental role in how the company's customers and partners design, develop and deploy 21st century cloud computing data centers.

Gadgets on the Run: Keeping Tabs on Moving IT Assets

As enterprises deploy growing ranks of mobile, remote and telecommuting employees, keeping track of the many mobile devices they use has introduced new headaches for IT managers. Mobile asset management is a part of the larger IT asset management strategy and should not be viewed as a separate type of asset management program.




It's no secret that the business world is going mobile. Companies are beginning to move to a virtual environment, especially sales teams, who have to be on the road more often. The trend of telecommuting is also starting to catch on. With this in mind, IT managers are faced with a new headache: managing what they can't see. Companies are struggling to create and understand a mobile IT strategy.


IT asset managers are faced with the challenge of managing mobile "moving assets" in a variety of scenarios, including but not limited to:


  • geographically dispersed offices (including multi-national organizations)

  • a larger number of mobile devices than ever before

  • a growing trend toward telecommuting

  • virtualization technologies and ASP (hosted) services


While mobile and remote workforces are nothing new, moving assets can cost companies millions (even billions) of dollars when not accounted for, and they can put organizations at even greater risk. While physical hardware usually has a shelf life of three to five years, mobile assets have a much shorter lifespan.


Special Attention



There are a number of activities that enterprise IT staffs need to pay special attention to if they want to effectively manage these moving assets:


  1. Define your mobile IT strategy. The tendency is to start with the technology and work backwards to try to solve the business problem. Accounting for mobile assets within your organization's regular ITAM (IT asset management) program should be the norm. However, mobile assets are likely the most difficult to track and manage, and they are the type of asset most often left unaccounted for.

  2. Identify the right tool set for managing mobile IT. There are many solutions out there, from scanning tagged hardware to automated software asset management, but finding the right one is imperative. Ideally, whatever type of solution you use, integrate it with other systems, such as your financial systems, to provide better trend and cost analysis, geographic mapping and more robust reporting, as well as to enable better ROI (return on investment) tracking against monies spent. Some tools that are helpful for managing mobile assets:


    • Automated solutions. Like IT asset management in general, mobile asset management needs discovery and ongoing management. From tracking hardware -- such as PCs, Macs, laptops, PDAs, smart phones and many other moving assets -- to tracking the software and user licenses sitting within that hardware -- from the MS Office suite to Adobe (Nasdaq: ADBE) products -- an automated asset tracking system can eliminate costly and time-consuming site visits. In the case of moving assets, an automated ITAM tracking system with a discovery feature is extremely useful for reporting the enterprise software and hardware on each moving asset without doing a physical "roll call."

    • RFID. For the large enterprise, RFID can be extremely useful in tracking hardware -- from PDAs (personal digital assistants) to laptops to servers. When the information from the tagging and scanning technology is integrated into the whole ITAM program, it can quickly deliver a powerful ROI.


  3. Deal with leased and inactive mobile devices. How often do you realize that one of your employees has three BlackBerrys or two laptops? (Of course, he or she is only using one, while the other is propping the office door open.) A simple solution is to run a quarterly report for missing leased equipment -- from laptops to mobile PDAs -- and review which computers have not connected to the network within 30 days or more; a rough sketch of such a report follows this list. Assuming that you're following ITAM best practices, you'll be able to quickly find your missing and inactive items, either to redeploy them within your organization or to retire the asset (instead of letting it become a dust catcher).

  4. Manage software licenses -- reduce cost and decrease your risk. Unmanaged licenses on mobile assets allow vendors to pull in millions of dollars during vendor audits. Software is usually licensed by usage and tracked via discovery. As with the devices themselves, unused licenses should be identified through usage analysis. By proactively managing software licenses and usage on mobile assets, you can reduce both risk and cost.

  5. Get in step with the greening of technology. While green technology does exist, it is in its infancy and it is expensive. However, IT disposal -- especially of mobile devices -- can feed a company's green program (if one exists) or even charitable donations (there are plenty of programs that accept cell phones, laptops, PDAs and the like). Fundamentally, disposing of your mobile assets responsibly can be looked upon as a way of "doing good."
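
To make points 2 through 4 more concrete, here is a minimal sketch of the quarterly inactive-asset and unused-license reports described above. It assumes a hypothetical inventory kept in a SQLite database with a devices table (serial, assigned_user, device_type, last_checkin as an ISO timestamp) and a license_usage table (software, installed, actively_used); the database name, schema and 30-day threshold are illustrative assumptions, not features of any particular ITAM product.

# Minimal sketch: flag devices that have not checked in for 30+ days and
# software titles with more installs than active users. The schema, database
# name and threshold are illustrative assumptions, not any vendor's API.
import sqlite3
from datetime import datetime, timedelta

INACTIVITY_DAYS = 30

def inactive_devices(conn, days=INACTIVITY_DAYS):
    # Devices whose last network check-in is older than `days` days.
    # last_checkin is assumed to be stored as an ISO-8601 string, so
    # lexicographic comparison matches chronological order.
    cutoff = (datetime.utcnow() - timedelta(days=days)).isoformat()
    return conn.execute(
        "SELECT serial, assigned_user, device_type, last_checkin "
        "FROM devices WHERE last_checkin < ? ORDER BY last_checkin",
        (cutoff,),
    ).fetchall()

def unused_licenses(conn):
    # Software titles with more installs than active users (reclaim candidates).
    return conn.execute(
        "SELECT software, installed, actively_used, "
        "installed - actively_used AS surplus "
        "FROM license_usage WHERE installed > actively_used "
        "ORDER BY surplus DESC"
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect("itam_inventory.db")  # hypothetical inventory database
    for serial, user, dev_type, last_seen in inactive_devices(conn):
        print(f"INACTIVE: {dev_type} {serial} ({user}), last seen {last_seen}")
    for software, installed, used, surplus in unused_licenses(conn):
        print(f"UNUSED: {software}: {surplus} of {installed} installs not in use")

Run quarterly, a report along these lines is usually enough to surface the door-stop laptop and the license surplus before a vendor audit does.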



Bigger Picture


If nothing else, remember these three things:


  1. Effective management of mobile assets is a part of the larger IT asset management strategy and should not be viewed as a separate type of asset management program.

  2. One major advantage of properly managing mobile devices -- as well as fixed assets in general -- is the ability to show a quick return on investment in terms of dollars saved and business gaps closed across the organization, especially in multi-location and multinational companies. This always scores points with senior executives and may put some dollars back in your budget, because you can build a real case on savings with real dollars. Mobile devices will certainly account for a share of the savings from retaining, recovering, redeploying or retiring "moving" IT assets.

  3. Because mobile devices are not always top of mind with IT staff or senior management, and are rarely (if ever) tracked properly, it's imperative to create policies and processes -- and to enforce them.

Tata, SBI among world's most reputable firms

The Tata Group has been named the world's eleventh most reputable company, according to a study compiled by the United States-based Reputation Institute.

Not just that: the Tata Group, whose Global Pulse score was put at 80.89, has been ranked above global giants like Google, Microsoft, General Electric, Toyota, Coca-Cola, Intel and Unilever.

The Reputation Institute's Global Pulse is a measure of corporate reputation calculated by averaging perceptions of four main indicators -- trust, esteem, admiration, and good feeling -- obtained from a representative sample of at least 100 respondents in the companies' home countries. The Global Pulse scores are on a scale of 0 to 100.
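
As a rough illustration of the arithmetic, a Pulse score is simply the mean of the four indicator scores, each on a 0-100 scale. The indicator values below are invented, chosen only so that the average matches the Tata Group's reported 80.89.

# Rough illustration of averaging four 0-100 reputation indicators into a
# single Pulse score. The individual indicator values below are invented.
def pulse_score(trust, esteem, admiration, good_feeling):
    return round((trust + esteem + admiration + good_feeling) / 4, 2)

print(pulse_score(82.0, 80.0, 81.0, 80.56))  # prints 80.89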

Tata Group is one of India's largest industrial conglomerates and runs more than 98 firms.

For the record, Italian confectioner Ferrero has been ranked the world's most reputable business entity.

Thursday, May 7, 2009

Is Barack Obama right about Bangalore?

Before anyone in India gets hot under the collar about US President Barack Obama's tax proposals, because they might seem targeted at job creation in 'Bangalore,' it is important to understand what he is trying to do. For, on any rational basis, it is hard to be critical.

American companies that invest abroad have been tax-exempt on the profits from such businesses until they bring the profits back into the US; however, they have been allowed to claim a set-off on the expenses related to such investment.

This has been an open invitation to invest overseas and not in the home market, especially if the money is routed through tax havens so that the firms pay no tax on their profits anywhere. Mr Obama has called this a 'scam,' a term to which American businessmen have taken umbrage, but it is hard to think of it in any other terms.

The figures trotted out, showing that effective tax rates on such investments have been in the range of just 2-3 per cent, support the president's drive to raise the effective level of tax on such corporate activity at a time when he is running a gigantic deficit and needs money for other programmes.

Monday, May 4, 2009

Dhoni to lead Team India in T20 World Cup

Mahendra Singh Dhoni will lead a 15-member Indian squad in the ICC Twenty20 World Championship in England next month.

Virender Sehwag has been named the vice captain for the series.

Apart from Dhoni and Sehwag, paceman R P Singh was on Monday rewarded with a recall for his consistent performance in the ongoing IPL, while fellow speedster Munaf Patel was dropped from India's 15-member squad for next month's Twenty20 World Cup in England.

Wicketkeeper-batsman Dinesh Karthik, who figured in the side that played the last Twenty20 in New Zealand at the end of February, has also been dropped. Hard-hitting batsman Robin Uthappa was omitted as well.

India won the inaugural edition of the championship in South Africa in 2007.

The squad includes five specialist batsmen, five specialist pacers, two spinners, two all-rounders and a wicket-keeper in Dhoni.

The selectors had earlier announced a list of 30 probables for the event, to be held in England from June 5 to 21, and today pruned it to the final 15. All participating teams have to submit the final squad by Tuesday as per the ICC rules.

As expected, there are no major surprises in the squad, which was picked by the selection panel headed by former India captain Krishnamachari Srikkanth via teleconference.

Among others who could not make it to the final 15 are Tamil Nadu opener M Vijay, the Mumbai trio of Ajinkya Rahane, Dhawal Kulkarni and Abhishek Nayar, Tamil Nadu batsman S Badrinath, Delhi batsman Virat Kohli, the Bengal duo of Manoj Tiwary and Wriddhiman Saha, Haryana leggie Amit Mishra, Tamil Nadu pacer L Balaji and Madhya Pradesh stumper Naman Ojha.

Tamil Nadu off-spinner R Ashwin, who was named as back-up for Harbhajan Singh, also failed to survive the pruning exercise.

Youngsters Abhishek Raut, who plays for the Rajasthan Royals, and Bangalore's Shrivats Goswami also could not make the cut despite their decent showing in the Indian Premier League.

Joginder Sharma, who is remembered for his last over in the inaugural edition of the Twenty20 World Cup, could not find a place in the 30-member list of probables, along with Piyush Chawla and S Sreesanth.

Sreesanth is recovering from a back injury and has not played competitive cricket for the last few months.

Sachin Tendulkar had opted out of the Twenty20 World Cup in 2007 and continues to stay away from the shortest format of the game despite his good form in recent times.

Squad: M S Dhoni (c), Virender Sehwag (v-c), Gautam Gambhir, Suresh Raina, Yuvraj Singh, Yusuf Pathan, Rohit Sharma, Harbhajan Singh, Zaheer Khan, Ishant Sharma, Praveen Kumar, R P Singh, Ravindra Jadeja, Pragyan Ojha and Irfan Pathan.

Sunday, May 3, 2009

India orders 250,000 OLPC laptops

The '$10 laptop', touted as the world's cheapest and developed in India, has been given a quiet burial, with the government placing an order for 250,000 XO laptops from the Nicholas Negroponte-led One Laptop Per Child (OLPC) Foundation.

The $10 'laptop' had turned into a major bone of contention, with the global IT industry and experts blasting the device, which was earlier projected as a challenger to the OLPC project's $100 laptop.

Meanwhile, Satish Jha, OLPC India president and CEO, was quoted in the media as saying that the OLPC XO laptops "have been ordered for 1,500 schools (throughout the country) and the deliveries will begin in June."

Now let's take a look at this scheme and the key points the government has ignored:

This looks like one of the biggest frauds waiting to happen.

1. What is the point in procuring laptops for schoolkids when primary education does not need any computer or laptop?

2. If laptops are bought at all, which operating system will they run? Who will pay the license fees?

3. What applications will be available on the laptops, and what value will they add?

4. If procured, how will distribution take place? Why only 1,500 schools, and what about the thousands of other schools?

5. We don't give adequate funds for primary education in rural areas. Why waste funds on unnecessary laptops in urban areas?

This looks like just another way to pocket commissions. Fraud, plain and simple.

Saturday, May 2, 2009

Harshad Mehta Reborn: Nirmal Kotecha and His Techniques

The man who follows Harshad Mehta's technique: Nirmal Kotecha

The son of an LIC agent who also ran a medical shop in Kochi, Nirmal Kotecha hasn't done too badly for himself. At 32, he is estimated to be worth about Rs 500 crore (Rs 5 billion).

His acquaintances have many descriptions for him: a 'genius', a man wildly passionate about the stock markets, someone in a tearing hurry to make money.

Last week, he earned another sobriquet -- 'the mastermind and the main beneficiary of the Pyramid Saimira forgery' -- from the capital market regulator.

The April 23 order by the regulator said Kotecha had masterminded the forgery of a Securities and Exchange Board of India letter ordering directors of Pyramid, which runs a chain of theatres, to make an open offer to shareholders.

Since the forgery was aimed at manipulating the company's share price, Sebi barred 230 people and entities from trading.

The regulator also suspects that Kotecha used several front companies to trade in various stocks, and that there are indications of massive fund rotation among these front entities.

In a way, this wasn't a surprise. Kotecha has had the distinction of being on Sebi's watch list in at least two other price manipulation cases -- Atlanta Ltd and SEL Manufacturing.

The modus operandi in both cases was similar to that in Pyramid: as Sebi discovered, Kotecha bought stakes in the companies at throwaway prices from the promoters, rigged the prices and then dumped the stock.

This may sound boringly familiar; many operators do just this in the Indian stock markets, but Kotecha did it with a finesse that would have made his self-confessed inspiration, Harshad Mehta, proud. He started investing in the market in 1993 at the age of 16, when Harshad Mehta's scam came to light, and the young Kotecha was one of his ardent admirers.

By the time Mehta's cookie crumbled, Kotecha was deeply into investing, and he became a sub-broker at the Kochi Stock Exchange at the age of 18. He made his really big money during the technology boom in 2000.

He shifted to the Mumbai market soon after.

In the process, he opened many companies -- Skyz Financial Consultant and Kotecha Capital were just two of them.

As Sebi found, he was also using a large number of front accounts, including those of his relatives, to manipulate the securities market and to route the funds through several layers -- a reason the regulator has requested the Reserve Bank of India, Financial Intelligence Unit and the income tax department to look into possible money laundering.

All through, Kotecha seems to have used his early contacts with many promoters of small gems and jewellery companies well (he had invested in many of their IPOs as well). No wonder 43 of the 230 entities in the Sebi order belong to the gems and jewellery sector.

Though he has been known to be dealing in small IPOs to make mega bucks, many agree he perhaps went too far this time.

First, he forged a Sebi letter in which the regulator directed the promoters to make an open offer for Pyramid Saimira -- potential market-moving information.

Then he and his partners in the Pyramid case also planted a fake company secretary and gave this person's number to journalists who were sent the forged letter. When journalists called for confirmation, this person impersonating the company secretary claimed that Pyramid had indeed received such a letter from the regulator!

So when the Pyramid stock price surged after the forged letter became public, Kotecha went for the kill and reduced his holding from 24 per cent to just 0.24 per cent in three months, making a massive profit.

Of late, Kotecha had shifted his attention to private equity as well. For example, his PE firm Kotecha Capital picked up a 49 per cent stake in the Bangalore-based US Pizza.

Although the exact valuation of the deal isn't known, media reports said Kotecha will invest over Rs 500 crore (Rs 5 billion), including debt, as the fast food chain plans to expand at a furious pace.

But many people say it would be a mistake to write off Kotecha. After all, Sebi's order is only an interim one, and Kotecha will obviously challenge it. He was let off on two earlier occasions. Will it be third time lucky for the market player?

What is Swine Flu?

Swine influenza (also called swine flu, hog flu, and pig flu) refers to influenza caused by those strains of influenza virus that usually infect pigs and are called swine influenza virus (SIV). Swine influenza is common in pigs in the midwestern United States (and occasionally in other states), Mexico, Canada, South America, Europe (including the United Kingdom, Sweden, and Italy), Kenya, Mainland China, Taiwan, Japan and other parts of eastern Asia.

Transmission of SIV from pigs to humans is not common. When it results in human influenza, it is called zoonotic swine flu. People who work with pigs, especially people with intense exposures, are at risk of catching swine flu. However, only about fifty such transmissions have been recorded since the mid-20th Century, when identification of influenza subtypes became possible. (Importantly, eating pork does not pose a risk of infection.) Rarely, these strains of swine flu can pass from human to human. In humans, the symptoms of swine flu are similar to those of influenza and of influenza-like illness in general, namely chills, fever, sore throat, muscle pains, severe headache, coughing, weakness and general discomfort.

The 2009 flu outbreak in humans that is widely known as "swine flu" is due to a new strain of influenza A virus subtype H1N1 that was produced by reassortment from one strain of human influenza virus, one strain of avian influenza virus, and two separate strains of SIV. The origin of this new strain is unknown, and the World Organization for Animal Health (OIE) reports that this strain has not been isolated in pigs.[2] It passes with apparent ease from human to human, an ability attributed to an as-yet unidentified mutation.[3] This 2009 H1N1 strain causes the normal symptoms of influenza, such as fever, coughing and headache.