Wednesday, February 23, 2011

Is Chrome intentionally trying to mess up web development?

A couple of weeks ago I posted about the weird behavior of Chrome's omnibox. Today, I am posting about how Chrome has bungled a very simple feature found in all web browsers. Since the very early days of the web, browsers have had an option for viewing the source code of the currently rendered page. Typically the menu item is called "View Source" or something similar. You click it and you get the HTML source for the page you're looking at. Very simple, right? Well, Chrome does it differently. In the spirit of complicating simple things, clicking "View Source" in Chrome doesn't just give you the HTML of the current page. Oh no! It makes another request to the web server and shows you the HTML from that fresh response, which may not match what's actually rendered in front of you. Yep. I am not sure how that's better than just showing the HTML of the already-rendered page, but that's what Chrome does. So in addition to using the omnibox to complicate web development, Chrome turns a trivial "View Source" into an extra round-trip to the server. BTW, who uses that option the most? Developers. The very same people who invoke it precisely because something in the currently rendered page looks wrong. And God help you if you have a debugger attached to the web server process (as developers typically do): that extra request hits your breakpoints too.
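If you want to see this for yourself, watch your server's request log while you load a page and then hit "View Source". Below is a minimal sketch of an ASP.NET HttpModule that logs every incoming request; the module name and log format are my own inventions, and this is just one way to observe the behavior, not anything Chrome documents.

    // Hypothetical logging module; names and format are assumptions.
    using System;
    using System.Web;

    public class RequestLogModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += delegate(object sender, EventArgs e)
            {
                HttpRequest req = ((HttpApplication)sender).Request;

                // Load a page, then hit "View Source": two entries for the
                // same URL means the browser went to the server twice.
                System.Diagnostics.Trace.WriteLine(string.Format(
                    "{0:HH:mm:ss.fff} {1} {2}",
                    DateTime.Now, req.HttpMethod, req.RawUrl));
            };
        }

        public void Dispose() { }
    }

Register the module in web.config (under httpModules, or modules on IIS 7's integrated pipeline) and the double hit shows up immediately.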

Why am I so sure that Google messed this up? First, all the problems associated with their implementation of the omnibox that I described in my previous post apply here also. Second, Google "view source chrome" and you'll get a bunch of results describing this odd behavior. If that many people are having problems with this simple command, perhaps they ought to fix it?

Thursday, February 17, 2011

Does InfoPath (still) suck?

A couple of years ago, I wrote a blog post titled "InfoPath & SharePoint (Part 1)". Back then I had just started working on a project using InfoPath 2007. So, as you might expect, the post wasn't very complimentary to InfoPath (or SharePoint). In fact, I said:
InfoPath sucks and SharePoint is the most expensive piece of crap ever. InfoPath, as a development environment, has absolutely no redeeming value. It's worthless.... (more)
Since then my opinion of InfoPath has changed slightly. It still suffers from all the flaws I pointed out in that post. However, I think that when used right, InfoPath can be an OK tool. It's well suited for designing one-off forms, not for anything that requires complex logic or multiple iterations (which is what most software development requires). Alas, most CTOs fall so in love with its point-and-click simplicity and its integration with SharePoint that they try to use it to replace more mature technologies like ASP.NET. What do you get? A horrible development environment that's absolutely not suited for software development, and highly paid software developers designing InfoPath forms. See this link for an example of how InfoPath makes something very simple and basic very complicated.

With the 2010 version, there have been many nice changes to InfoPath. But it still makes me chuckle that a Google search for "InfoPath sucks" turns up my blog post.

Tuesday, February 15, 2011

Chrome's Omnibox, debugging web applications and web statistics

If you use the latest version of Google's Chrome browser, you may have seen this setting:

[Screenshot: the Instant search setting in Chrome's options]

A couple of days ago, I decided to turn it on. This way, I can get instant results when I search via the omnibox. Since I never go to Google's home page, this is the only way for me to get the benefits of instant search. So I turned it on and promptly forgot about it. Fact is, I never actually thought about how it worked. Why? Because the way it works is that every character you type is instantly sent, as a search query, to your search provider. So far, that's not a big deal. That's intuitive. However, what's not so intuitive is that once Chrome detects you are typing a URL, it starts sending those requests to the web server for that URL instead. So say you want to type in "http://localhost/myapplication/pageAmTesting.aspx?Id=500", the last few requests Chrome will send are:
  • http://localhost/myapplication/pageAmTesting.asp
  • http://localhost/myapplication/pageAmTesting.aspx
  • http://localhost/myapplication/pageAmTesting.aspx?Id
  • http://localhost/myapplication/pageAmTesting.aspx?Id=5
  • http://localhost/myapplication/pageAmTesting.aspx?Id=50
  • http://localhost/myapplication/pageAmTesting.aspx?Id=500
The problem is that I routinely debug web applications by attaching Visual Studio to the web server process and stepping through my code. Naturally, I am expecting only one request to be sent (and thus trapped and debugged via Visual Studio). I also expect that request to have a query string parameter (Id) with the value 500. But with that setting enabled in Chrome, I get all these extra requests, and they mess up my debugging session. Some of those extra requests have no query string at all (#1 and #2 in the list above); one has the parameter but no value (#3); and some have incomplete values (#4 and #5).
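To make the failure mode concrete, here's a rough sketch of what a code-behind for a page like pageAmTesting.aspx might look like. The class and member names are my own assumptions (the post doesn't show any code); the point is just that code which assumes Id is always present blows up on the preview requests, while a defensive parse does not.

    // Hypothetical code-behind; names are assumptions, not from the post.
    using System;
    using System.Web.UI;

    public partial class PageAmTesting : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string raw = Request.QueryString["Id"];

            // The naive version throws on the preview requests:
            //   int id = int.Parse(raw);
            // raw is null for requests #1-#3 above, so int.Parse throws an
            // ArgumentNullException, and the partial values 5 and 50 (#4, #5)
            // would be accepted as if they were real ids.

            int id;
            if (!int.TryParse(raw, out id))
            {
                // Missing or malformed Id: fail fast instead of throwing
                // halfway through the page's logic.
                Response.StatusCode = 400;
                Response.End();
                return;
            }

            // Normal processing with the complete value (500 in the example).
        }
    }

This doesn't make the preview requests go away, of course; it just keeps them from surfacing as unexplained exceptions in the debugger.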

Once I realized the problem, the fix was easy (turn off the setting). But, as is my nature, I started wondering just how much this seemingly innocuous behavior of Chrome could affect the web. Now I am not a web statistics guru, but it seems to me that this could seriously skew web statistics (upwards). Then I thought to myself "Tundey, you are not smarter than Google. Surely they know about this and have taken it into consideration..." But have they? The answer is yes... and no. They have, because they added at least one extra HTTP request header to all those extraneous requests: the "X-Purpose" header is set to "preview" for preview requests. OK, so they thought about it. And I figure they've probably updated Google Analytics to account for the header (i.e. if a request has that header set, ignore it since it's not a user-generated request). And perhaps there's some standards body in the web analytics space that they submitted this behavior to and got their major competitors (WebTrends etc.) to adopt the standard. But what about other web usage? There are other areas of the web where this could screw things up:
  • lots of unnecessary requests putting semi-useless load on servers all over the world (because the responses from those preview requests are used just for the search result listing page...i.e. only a minor portion of the entire data returned is used)
  • lots of angst for developers when their applications keep throwing unusual exceptions (in the example above, each of those preview requests will likely trigger an exception in the web application since the expected query string is missing)
  • lots of 404 errors as some of those preview requests included incomplete page names (and thus the pages will not be found)
  • what about sites that use GET requests to perform actions? Yes, it's stupid to perform POST-style actions using GET, but I bet you some sites do it (a quick sketch of that anti-pattern follows this list). Those sites better hope Chrome doesn't send preview requests their way
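As an illustration of that last bullet, here's a hypothetical sketch of the anti-pattern. None of these names come from the post; it's just a destructive action wired to a plain GET, which a speculative preview request would happily trigger.

    // Hypothetical handler showing the GET-with-side-effects anti-pattern.
    using System.Web;

    public class DeleteItemHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            int id = int.Parse(context.Request.QueryString["id"]);

            // Because the delete runs on a plain GET, a preview request to
            // /DeleteItem.ashx?id=500 fires it before the user ever presses
            // Enter.
            // ItemRepository.Delete(id);   // imagine a real delete here

            context.Response.Write("Deleted item " + id);
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }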
So what's the solution? Here are a couple of ideas:

  • Once Chrome detects that the text being typed is a URL, don't issue preview requests. After all, if I am in the process of typing "http://localhost/myapplication/pageAmTesting.aspx", chances are I know exactly where I want to go and don't necessarily need a preview.
  • Once Chrome detects that the text being typed is a URL, keep sending the partial text to the user's search provider (as it does for non-URL text) instead of firing requests at the target web server.

I did some searching on the "X-Purpose" header and it looks like it's not a Chrome-specific header at all. It's also used by Safari's "Top Sites" feature.
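Until the analytics packages all account for it, one server-side stopgap is to check the header yourself. Here's a rough Global.asax sketch; LogPageView is a stand-in for whatever stats or logging a site actually does, and the only thing taken from the browsers here is the "X-Purpose: preview" header itself.

    // Hypothetical Global.asax.cs: skip stats logging for preview requests.
    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string purpose = Request.Headers["X-Purpose"];

            // Chrome Instant and Safari Top Sites mark their speculative
            // requests with "X-Purpose: preview".
            bool isPreview = string.Equals(
                purpose, "preview", StringComparison.OrdinalIgnoreCase);

            if (!isPreview)
            {
                // Only count real, user-initiated page views.
                LogPageView(Request.RawUrl);
            }
        }

        private static void LogPageView(string url)
        {
            // Stand-in for the site's actual analytics or logging.
            System.Diagnostics.Trace.WriteLine("Page view: " + url);
        }
    }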

