Add jQuery to any page for some DOM-fu

If you are web-development inclined and use a JavaScript development console such as Firebug, you might wish that every page had jQuery, an excellent JavaScript library, for you to manipulate as you see fit. Well, it seems that Karl Swedberg over at Learning jQuery has come up with a neat bookmarklet to get the job done:

jQuerify

Head over to Better, Stronger, Safer jQuerify Bookmarklet for more details. To add this bookmarklet, simply drag it to your bookmarks toolbar. When you click it, it loads jQuery and adds it to the page you are currently viewing. Once jQuery is loaded, open Firebug and have at it with your superior DOM-fu. Here's what you might try:

var $ = jQuery; // Set $ to be jQuery

alert( $('a').length ); // Count anchor tags on the page

$('a').css('font-weight', 'bold'); // Make all links bold

$('object, embed').remove(); // Remove all the annoying flash crap

$.post( 'index.php', {data: 'value'} ); // Do a POST request from the current page
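
Under the hood, a bookmarklet like this boils down to injecting a <script> tag that points at a hosted copy of jQuery and waiting for it to load. Here is a minimal sketch of the idea (minus the javascript: prefix and the minification); the URL and polling details are illustrative, and Karl's bookmarklet is considerably more careful about caching and $ conflicts:

(function () {
    // Inject a script tag pointing at a hosted copy of jQuery
    var script = document.createElement('script');
    script.src = 'http://code.jquery.com/jquery-latest.min.js';
    document.getElementsByTagName('head')[0].appendChild(script);

    // Poll until jQuery has finished loading, then announce it
    var timer = setInterval(function () {
        if (window.jQuery) {
            clearInterval(timer);
            alert('jQuery ' + jQuery.fn.jquery + ' is ready');
        }
    }, 100);
})();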

The uses for this technique are nearly infinite: extracting data out of a page, re-styling it, automating form submissions, etc. I started using this method about a year ago, but only discovered the bookmarklet now. Trust me: once you try it, you won't be able to live without it.
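
To give a concrete example of extracting data and automating a form submission, here is the kind of thing you can type straight into the Firebug console once jQuery is on the page (the field name "q" is hypothetical; adjust it to whatever form you are actually looking at):

// Extract data: collect the href of every link on the page
var links = $('a').map(function () { return this.href; }).get();
console.log(links.join('\n'));

// Automate a form: fill in a hypothetical field named "q" and submit the first form
$('form:first input[name=q]').val('jquery');
$('form:first').submit();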

Update: It appears that John Resig has also published a jQuerify bookmarklet. I guess this technique is pretty popular among JavaScript developers.

Announcing E-mail Herald (a.k.a. Stop Spamming Yourself Framework)

It is amazing how much information I need on a daily basis. Here is my informational morning routine:

  1. Check the weather.
  2. See what meetings I have today (thanks, Google Calendar!).
  3. Check out the Woot Deal of the Day (silly, I know, but it's fun).
  4. Check on various nightly scripts and see if they ran properly. This is about a half dozen scripts that send confirmations to my work or my personal e-mail.

The whole procedure takes quite a while for what it really is: getting tiny, bite-sized pieces of information. I decided enough was enough and that I wanted to automate the whole process. That's why I created E-mail Herald, a simple framework for extracting those bite-sized pieces of information from various internet services and canning them into digest e-mails (heralds). Here's a sample herald I got this morning:

Weather: 38-52 deg. F, Partly Cloudy

-----

Woot: Perfect Pullup - $14.99

-----

Calendar Igor Partola:
  --No events--

-----

Calendar Igor Partola NIS@BU:
  * Veteran's Day

-----

Cron Jobs:
  * bu:maps:****** - success
  * bu:maps:****** - success
  * bu:maps:******** - success
  * bu:*** - success
  * netstore - success - 2009/11/11 03:19:12 [3967] sent 1704120805 bytes  received 1443 bytes  1481201.43 bytes/sec
  * bu:*******-******* - success - 0 applicants e-mailed

Pretty nifty, huh? This is the kind of functionality that I Want Sandy used to have before it closed down. The problem with Sandy was that she e-mailed you at 5 am and you couldn't change it. Also, you couldn't collect information from other services such as Google Calendar. E-mail Herald does it all, and if it doesn't do something you want it to do, write a plugin.

Announcing Disc ID DB

Disc ID DB is a new pet project of mine. It allows application developers to identify content on optical media. The whole project is open-sourced under the MIT License and is available for download from SourceForge. Currently only TV series content can be identified, but improvements are underway.

Just use JSON (in most cases)

While there has been plenty of debate in the whole XML vs. JSON argument, I'd like to express a sincere IMHO in favor of just using JSON whenever possible. It has several key real-life advantages that I take advantage of on a practically daily basis. XML has its place and is leaps and bounds better than CSV or, god forbid, fixed-width column files, but JSON is often the more practical solution, and here's why:

JSON Describes Objects

XML can do this as well, but consider the following case:

<movies>
    <movie>
        <title>Austin Powers: International Man of Mystery</title>
        <actors>
            <actor>Mike Myers</actor>
            <actor>Elizabeth Hurley</actor>
        </actors>
    </movie>
</movies>

While this structure seems fairly self-explanatory, consider the following: does the object "movie" have a property called <title> which is a string, or does it have an array of strings of type <title>? Looking at the XML, we can certainly infer that the <title> node is only present once in each movie, but an XML parser would not know that, and you would have to hint to it that <title> is a single property while <actor> is part of an array.

The problem can be solved if we make <title> into an attribute of <movie>: <movie title="Titanic">. However, the same cannot be done for <actors> as it is an array and attributes cannot be arrays.

Consider the JSON version:

{ "movies": [
                {
                     "title": "Austin Powers: International Man of Mystery",
                     "actors": [ "Mike Myers", "Elizabeth Hurley" ]
                }
            ]
}

The structure here is also apparent to us, but the ambiguity is removed: "title" is a string and "actors" is an array of strings.

JSON Takes Less Code to Manipulate

In a JavaScript/LAMP environment, JSON takes a lot less code to get from interpreting the data structures to actually using the data. Because of the built-in ambiguity of XML, you generally need to instantiate some XML object native to your environment and then extract the data out of it. For example, PHP includes 14 different frameworks for dealing with XML. Typical PHP code to encode our "movies" data structure as XML looks like this:

$xml = new SimpleXMLElement('<movies></movies>');

// $movies is a native PHP array of associative arrays, e.g.
// array(array('title' => '...', 'actors' => array('...', '...')))
foreach ($movies as $movie_data) {
    $movie = $xml->addChild('movie');
    $movie->addChild('title', $movie_data['title']);
    $actors = $movie->addChild('actors');
    foreach ($movie_data['actors'] as $name) {
        $actors->addChild('actor', $name);
    }
}

echo $xml->asXML();

This seems pretty straightforward, but it is a lot of code to write. Now consider this:

echo json_encode( $movies ); // $movies is the same native PHP array as above

Notice that not only do you write less code and make fewer method calls, but you are also dealing with native PHP structures, in this case arrays. This reduces the amount of time it takes to go from your data to the data exchange format.

On the client side, newer JavaScript engines include a built-in JSON decoder, and where they do not, you can fall back to a JSON parsing library. JavaScript's eval() will also interpret JSON, but it is insecure.
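
For example, assuming the "movies" JSON from above arrives as a string from an XMLHttpRequest response, decoding it on the client looks roughly like this (variable names are for illustration only; a library such as json2.js can supply JSON.parse in older browsers):

// `text` holds the raw JSON string, e.g. from xhr.responseText
var data;
if (window.JSON && window.JSON.parse) {
    data = JSON.parse(text);           // native parser in newer browsers
} else {
    data = eval('(' + text + ')');     // last resort; only acceptable if you fully trust the source
}
alert(data.movies[0].title);           // "Austin Powers: International Man of Mystery"
alert(data.movies[0].actors.length);   // 2 - no guessing about what is an array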

Sometimes You Want XML Anyways

XML still has some key advantages over JSON: it can carry type information, it can be transformed using XSL, and it is more widely supported. The XSL functionality is often particularly desirable since it can efficiently transform an XML document into any other format, for example HTML. This kind of thing is not easily possible with JSON in the general case because it carries less data about the objects it is encapsulating. In BU Maps, I use JSON as the data exchange format, but transform the data into XML and then into HTML because it is simpler than writing the JavaScript code to create XHTML nodes.

Using a hiking GPS to update an urban map

Last year I took over BU Maps as the primary developer. This application uses the Google Maps API and a custom database of locations to provide the BU community with up-to-date location information about anything related to the University: from colleges and departments to local restaurants and bike racks. Here is how the BU Maps database was originally updated:

  1. A team of "updaters" would print out high-zoom images of the hybrid Google Maps tiles (by taking screenshots of their viewport).
  2. They would go out into the field and collect the data, i.e., mark the appropriate locations on the paper map printouts.
  3. They would come back and process the printouts: zooming in on the marked part of the map in the BU Maps admin system and clicking on the same spot as the one marked on the paper map.

As you can imagine, this process was tedious, long and boring. After doing a small set of updates to "get the feel" of the process, I decided to look for a better way. Enter the Garmin GPSMAP 60Cx: a consumer-grade hiking GPS receiver priced at a reasonable $240 (as of July 2009). The device was built to provide precise and accurate position info in canyons, where other GPS receivers would struggle. It can be loaded with custom routable maps, for example from the OpenStreetMap project, to ease navigation for the updaters. It comes with a USB cable and is supported by the GPSBabel project, so waypoint data can easily be loaded to and unloaded from the device. The 60Cx can also average multiple position readings to establish the current location more precisely. Finally, it has a connector for an external antenna, so I was able to connect an amplified external Gilsson antenna, which boosts precision to just under 10 ft. That level of precision is great considering that Boston is a big city with tall buildings.

After we acquired the 60Cx, I wrote a couple of scripts on top of GPSBabel to load data from the BU Maps database onto the device and vice versa. Now the process of updating BU Maps locations is simple:

  1. I load the 60Cx with BU Maps locations as waypoints. Each waypoint has a unique ID within the 60Cx, which is important for this whole operation.
  2. I give the device to an updater. He goes out into the field and looks for whatever locations I loaded onto the 60Cx.
  3. Any new items get added to the 60Cx with a unique temporary ID and additional info is collected. This process is still manual but not nearly as time consuming as getting the positioning information.
  4. I load the data from the device back into the database, updating existing locations and adding new ones.
  5. If any additional information was collected (e.g. phone number, pictures, etc.), it is added through the admin system.

Most recently we used this method to accurately update the locations of all the bike racks on and around BU. Using a GPS receiver is a lot quicker than the previous, paper-based process: it took the updater five hours to get updated positioning info for over 80 locations around the geographically spread-out BU campuses. That's five hours from the beginning of data collection to the completion of database updates.
