
Chrome's Omnibox, debugging web applications and web statistics

If you use the latest version of Google's Chrome browser, you may have seen this setting:

[screenshot of the setting in Chrome's options]
A couple of days ago, I decided to turn it on. This way, I can get instant results when I search via the omnibox. Since I never go to Google's home page, this is the only way for me to get the benefits of instant search. So I turned it on and promptly forgot about it. The fact is, I never actually thought about how it worked. So how does it work? Every character you type is instantly sent, as a search query, to your search provider. So far, that's not a big deal. That's intuitive. However, what's not so intuitive is that once Chrome detects you are typing a URL, it starts sending those requests to the web server for that URL. So say you want to type "http://localhost/myapplication/pageAmTesting.aspx?Id=500"; the last few requests Chrome will send are:
  • http://localhost/myapplication/pageAmTesting.asp
  • http://localhost/myapplication/pageAmTesting.aspx
  • http://localhost/myapplication/pageAmTesting.aspx?Id
  • http://localhost/myapplication/pageAmTesting.aspx?Id=5
  • http://localhost/myapplication/pageAmTesting.aspx?Id=50
  • http://localhost/myapplication/pageAmTesting.aspx?Id=500
The problem is that I routinely debug web applications by attaching Visual Studio to the browser and stepping through my code. Naturally, I expect only one request to be sent (and thus trapped and debugged via Visual Studio), and I expect that request to have a query string parameter (Id) with the value 500. But with that setting enabled in Chrome, I get all these extra requests that mess up my debugging session. Some of these extra requests have no query string parameter at all (#1 and #2 in the list above); some have incomplete values for it (#3 and #4 in the list).
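To make the failure mode concrete, here's a minimal sketch of what a code-behind for a page like pageAmTesting.aspx might look like. The class name and the guard are purely illustrative (they aren't from my actual project), but they show why the partial requests above either throw or load the wrong record, and one way to shrug them off.

```csharp
// Hypothetical code-behind for pageAmTesting.aspx (names are illustrative only).
using System;
using System.Web.UI;

public partial class PageAmTesting : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The optimistic version:
        //   int id = int.Parse(Request.QueryString["Id"]);
        // Preview requests like "...?Id" throw (the value is null/empty),
        // and "...?Id=5" parses fine but loads the wrong record.

        int id;
        if (!int.TryParse(Request.QueryString["Id"], out id))
        {
            // Bail out quietly on a partial/preview request instead of throwing.
            Response.StatusCode = 400;
            Response.End();
            return;
        }

        // ... normal page logic for a complete request with Id=500 ...
    }
}
```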

Once I realized the problem, the fix was easy (turn off the setting). But, as is my nature, I started wondering just how much this seemingly innocuous behavior of Chrome could affect the web. Now, I am not a web statistics guru, but it seems to me that this could seriously skew web statistics (upwards). Then I thought to myself, "Tundey, you are not smarter than Google. Surely they know about this and have taken it into consideration..." But have they? The answer is yes... and no. They have, because they added at least one extra HTTP request header to all those extraneous requests: the "X-Purpose" header is set to "preview" for preview requests. OK, so they thought about it. And I figure they've probably updated Google Analytics to account for the header (i.e. if a request has that header set, ignore it since it's not a user-generated request). And perhaps there's some standards body in the web analytics space that they submitted this behavior to and got their major competitors (WebTrends, etc.) to adopt it. But what about other web usage? There are other areas of the web where this could screw things up:
  • lots of unnecessary requests putting semi-useless load on servers all over the world (because the responses from those preview requests are used just for the search result listing page...i.e. only a minor portion of the entire data returned is used)
  • lots of angst for developers when their applications keep throwing unusual exceptions (in the example above, each of those preview requests will likely trigger an exception in the web application since the expected query string is missing)
  • lots of 404 errors as some of those preview requests included incomplete page names (and thus the pages will not be found)
  • what about sites that use GET requests to perform actions? Yes, it's stupid to perform POST-style actions using GET, but I bet you some sites do it. Those sites had better hope Chrome doesn't send preview requests their way
So what's the solution? Here are a couple of ideas:

  • Once Chrome detects that the text being typed is a URL, don't issue preview requests. After all, if I am in the process of typing "http://localhost/myapplication/pageAmTesting.aspx", chances are I know exactly where I want to go and don't necessarily need a preview.
  • Once Chrome detects that the text being typed is a URL, continue to send the requests to the user's search provider (like it does for other non-URL text).

I did some searching on the "X-Purpose" header, and it looks like it's not a Chrome-specific header at all; it's also used by Safari's "Top Sites" feature.
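Since the header isn't Chrome-specific, checking for it on the server is a cheap defensive measure no matter which browser sends the preview. This is just a sketch against a classic ASP.NET Global.asax; the X-Purpose header is the one discussed above, but what you do when you see it (skip your own hit counters, short-circuit anything with side effects, etc.) is entirely up to your application.

```csharp
// Global.asax.cs - sketch of flagging preview requests so the rest of the app can ignore them.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Chrome Instant (and Safari Top Sites) send "X-Purpose: preview"
        // on the speculative requests described above.
        string purpose = Request.Headers["X-Purpose"];
        bool isPreview = string.Equals(purpose, "preview", StringComparison.OrdinalIgnoreCase);

        if (isPreview)
        {
            // Stash a flag for downstream code; it can then skip logging,
            // analytics, or anything else that shouldn't run for a preview.
            Context.Items["IsPreviewRequest"] = true;
        }
    }
}
```

Pages (or an analytics/logging module) can then check Context.Items["IsPreviewRequest"] and quietly skip whatever shouldn't be triggered by a request nobody actually made.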

