Maiden voyage of the Garmin Foretrex 401

July 10, 2011

I finally got a chance to try out the new Garmin wrist-mounted GPS unit that my friend Stefan gave me.

Pretty sweet – it does a good job of tracking, even in the foothills of the Sierras, where GPS signals come and go.

I wish I could directly embed the map of the hike, but that requires embed markup which isn’t allowed by wordpress.com – oh well. You can still click here to view it on the garmin.com site.


The web is an endless series of edge cases

December 17, 2009

Recently I’d been exchanging emails with Jimmy Lin at CMU. Jimmy has written up some great Hadoop info, and provided some useful classes for working with the ClueWeb09 dataset.

In one of his emails, he said:

However, what I’ve learned is that whenever you’re working with web-scale collections, it exposes bugs in otherwise seemingly solid code.  Sometimes it’s not bugs, but rather obscure corner cases that don’t happen for the most part.  Screwy data is inevitable…

I borrowed his “screwy data is inevitable” line for the talk I gave at December’s ACM data mining SIG event, and added a comment about this being the reason for having to write super-defensive code when implementing anything that touched the web.

Later that same week, I was debugging a weird problem with my Elastic MapReduce web crawling job for the Public Terabyte Dataset project. At some point during one of the steps, I was getting LeaseExpiredExceptions in the logs, and the job was failing. I posted details to the Hadoop list, and got one response from Jason Venner about a similar problem he’d run into.

Is it possible that this is occurring in a task that is being killed by the framework? Sometimes there is a little lag between the time the tracker ‘kills a task’ and the task fully dies, so you could be getting into a situation like that, where the task is in the process of dying but the last write is still in progress.
I see this situation happen when the task tracker machine is heavily loaded. In one case there was a 15 minute lag between the timestamp in the tracker for killing task XYZ, and the task actually going away.

It took me a while to work this out, as I had to merge the tracker and task logs by time to actually see the pattern. The host machines were under very heavy I/O pressure, and may have been paging also. The code and configuration issues that triggered this have been resolved, so I don’t see it anymore.

This led me down the path of increasing the size of my master instance (I was incorrectly using m1.small with a 50 server cluster), increasing the number of tasktracker.http.threads from 20 to 100, etc. All good things, but nothing that fixed the problem.

However, Jason’s email about merging multiple logs by timestamp led me to go through all of the logs in more detail. And that led me to the realization that the step just prior to the one where I was seeing a LeaseExpiredException had actually died quite suddenly. I then checked the local logs I wrote out, and saw that this happened right after a statement about parsing an “unusual” file from stanford.edu: http://library.stanford.edu/depts/ssrg/polisci/NGO_files/oledata.mso

The server returns “text/plain” for this file, when in fact it’s a Microsoft Office document. I filter out everything that’s not plain text or HTML, which lets me exclude a bunch of huge Microsoft-specific parse support jars from my Hadoop job jar. When you’re repeatedly pushing jars to S3 via a thin DSL connection, saving 20MB is significant.

But since the server lies like a rug in this case, I pass it on through to the Tika AutoDetectParser. And that in turn correctly figures out that it’s a Microsoft Office document, and makes a call to a non-existent method. Which throws a NoSuchMethodError (not an Exception!). Since it’s an Error, this flies right on by all of the exception catch blocks, and kills the job.

Looks like I need to get better at following my own advice – a bit of defensive programming would have saved me endless hours of debugging and config-thrashing.
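A bit of that defensive programming might look like the following sketch. Note that parseDocument() is a stand-in of my own, not Tika’s actual API – the point is just that only a catch of Throwable will trap an Error like NoSuchMethodError:

```java
public class DefensiveParse {

    // Stand-in for the real Tika parse call; here it simulates Tika hitting
    // a method that's missing because the parser jar wasn't bundled.
    static String parseDocument(byte[] content) {
        throw new NoSuchMethodError("org.apache.poi.SomeClass.someMethod");
    }

    static String safeParse(byte[] content) {
        try {
            return parseDocument(content);
        } catch (Exception e) {
            // Handles ordinary parse failures...
            return null;
        } catch (Throwable t) {
            // ...but only catching Throwable also traps Errors like
            // NoSuchMethodError, which would otherwise kill the whole task.
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(safeParse(new byte[0]) == null ? "survived" : "parsed");
    }
}
```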


Why fetching web pages doesn’t map well to map-reduce

December 12, 2009

While working on Bixo, I spent a fair amount of time trying to figure out how to avoid the multi-threaded complexity and memory-usage issues of the FetcherBuffer class that I wound up writing.

The FetcherBuffer takes care of setting up queues of URLs to be politely fetched, with one queue for each unique <IP address>+<crawl delay> combination. Then a queue of these queues is managed by the FetcherQueueMgr, which works with a thread pool to provide groups of URLs to be fetched by an available thread, when enough time has gone by since the last request to be considered polite.

But this approach means that in the reducer phase of a map-reduce job you have to create these queues, and then wait in the completion phase of the operation until all of them have been processed. Running multiple threads creates complexity and memory issues due to native memory stack space requirements, and having in-memory queues of URLs creates additional memory pressure.
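As a rough sketch, the queue structure described above might look something like this – hypothetical names, the real FetcherBuffer/FetcherQueueMgr classes are more involved:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

public class PoliteQueues {

    static class FetchQueue {
        final long crawlDelayMs;
        long nextFetchTime = 0;               // earliest time we can politely fetch
        final Queue<String> urls = new ArrayDeque<String>();

        FetchQueue(long crawlDelayMs) {
            this.crawlDelayMs = crawlDelayMs;
        }

        // Hand out a URL only if enough time has passed since the last request.
        String poll(long now) {
            if (now < nextFetchTime || urls.isEmpty()) {
                return null;
            }
            nextFetchTime = now + crawlDelayMs;
            return urls.remove();
        }
    }

    // One queue per unique <IP address>+<crawl delay> combination.
    final Map<String, FetchQueue> queues = new HashMap<String, FetchQueue>();

    void add(String ip, long crawlDelayMs, String url) {
        String key = ip + "/" + crawlDelayMs;
        FetchQueue queue = queues.get(key);
        if (queue == null) {
            queue = new FetchQueue(crawlDelayMs);
            queues.put(key, queue);
        }
        queue.urls.add(url);
    }
}
```

The memory pressure comes from all of those per-server queues (and their URLs) having to live in RAM at once, on top of the per-thread stack space.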

So why can’t we just use Hadoop’s map-reduce support to handle all of this for us?

The key problem is that MR works well when each operation on a key/value pair is independent of any other key/value, and there are no external resource constraints.

But neither of those is true, especially during polite fetching.

For example, let’s say you implemented a mapper that created groups of 10 URLs, where each group was for the same server. You could easily process these groups in a reducer operation. This approach has two major problems, however.

First, you can’t control the interval between when groups for the same server would be processed. So you can wind up hitting a server to fetch URLs from a second group before enough time has expired to be considered polite, or worse yet you could have multiple threads hitting the same server at the same time.

Second, the maximum amount of parallelization would be equal to the number of reducers, which typically is something close to the number of cores (servers * cores/server). So on a 10 server cluster w/dual cores, you’d have 20 threads active. But since most of the time during a fetch is spent waiting for the server to respond, you’re getting very low utilization of your available hardware & bandwidth. In Bixo, for example, a typical configuration is 300 threads/reducer.

Much of web crawling/mining maps well to a Hadoop map-reduce architecture, but fetching web pages unfortunately is a square peg in a round hole.


Using WordPress for web site but keeping mail separate

November 19, 2009

I use WordPress.com to host a number of web sites, and for simple stuff it’s great.

But I ran into a problem with keeping email separate, so I thought I’d share what I learned.

Here’s the background. I wanted to have http://bixolabs.com and http://www.bixolabs.com both wind up at the web site being hosted by WordPress.com. But I wanted to keep my email separate, versus using the GMail-only approach supported by WordPress.

According to WordPress documentation, you can’t do this. They say:

Changing the name servers will make any previously setup custom DNS records such as A, CNAME, or MX records stop working, and we do not have an option for you to create custom DNS records here. If you already have email configured on your domain, you must either switch to Custom Email with Google Apps or you can use a subdomain instead which doesn’t require changing the name servers.

This meant that I couldn’t just change my name server to WordPress, as they don’t support any customization.

But if I keep my own DNS configuration, then all I can do is use a CNAME record to map a subdomain to WordPress. And you can’t treat “www” as a subdomain.

So my first attempt was to configure my DNS record as follows:

  • www -> [URL redirect] -> http://bixolabs.com
  • @ -> [CNAME] -> bixolabs.wordpress.com
  • @ -> [MX] -> <my hoster’s mail server IP address>

This worked pretty well. http://www.bixolabs.com got redirected to bixolabs.com, and bixolabs.com mapped to the bixolabs site at WordPress.com.

But the http://www.bixolabs.com redirect was a temp redirect (HTTP 302 status) not a permanent redirect (HTTP 301 status), so I was losing some SEO “juice” due to how Google and others interpret temp vs. perm redirects.

I fixed this by having my hoster set up their Apache server to do a permanent redirect, and changing the entry for www to point to the Apache server’s IP address.

But there was a bigger, hidden problem. Occasionally people would complain about getting email bounces, when they tried to reply to one of my emails. The reply-to address in my email would be ken@bixolabs.com, but the To: field in their reply would be set to ken@lb.wordpress.com.

Eventually I figured out the problem. It’s technically not valid to have both a CNAME and an MX DNS entry for the same domain (or sub-domain, I assume). If a mail client does a lookup on the reply-to domain, bixolabs.com has the canonical address of “lb.wordpress.com”, since the CNAME entry overrides the MX entry.

The fix for this involved three steps. First, I changed the MX entry in my DNS setup to use “mail”, not “@”. Then I changed my email client reply-to address to use mail.bixolabs.com, not just bixolabs.com. And finally, my hoster had to configure their mail server to recognize mail.bixolabs.com as a valid domain, not just bixolabs.com.
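Putting it all together, the final DNS setup looked something like this (the record targets here are illustrative – yours would point at your own hoster):

  • www -> [301 redirect via Apache] -> http://bixolabs.com
  • @ -> [CNAME] -> bixolabs.wordpress.com
  • mail -> [MX] -> <my hoster’s mail server>

With the MX record moved off of “@”, the CNAME no longer shadows it, and replies to ken@mail.bixolabs.com get delivered.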

 


Wikipedia Love

November 16, 2009

Normally we wait until the end of the year to figure out our charitable donations, but I’ve been using Wikipedia so much over the past few days that I felt like I needed to donate today.

The WordPress Business Model

September 11, 2009

I think I finally understand how hosted WordPress makes money 🙂

I recently set up a web site for my dad’s consulting business, at KruglerEngineeringGroup.com. I used the WordPress hosted service, and a flexible, business-oriented theme called Vigilance.

But I needed to tweak the colors to get a solid background with white-on-blue text. It was pretty easy (using Firebug) to figure out the CSS changes required, and by editing these in the WordPress Custom CSS form I got the look I wanted – so the hook was set. Now I just needed to pay for the $14.97/year “upgrade” to be able to save and use the custom CSS.

Which I gladly did, since it would be way more expensive for me in time and hassle to try to set this up in my own WordPress environment.

Step 2 was connecting his existing KruglerEngineeringGroup.com domain to the WordPress site. A few clicks on the WordPress.com site, another modest yearly payment of $9.97 (where do they get these amounts?), and we were almost all set. The one minor difficulty was in handling the “www” subdomain. WordPress says that if you want this to work, you need to change the domain name servers to use their name servers. But the current domain needs to use a specific email server (MX record).

So the solution was to create two DNS entries in the current name server config. One was the standard WordPress entry for subdomains, where you create a CNAME record that maps “@” to kruglerengineeringgroup.wordpress.com. The second entry mapped “www” as a URL redirect to http://kruglerengineeringgroup.com. Once that propagated, everything worked as planned. A few hours of my time, and $24.94/year to WordPress.


Yet another great git error message – expected sha/ref, got ‘

April 14, 2009

I’d been working away on the Bixo project, and pushing changes to GitHub without any problems.

Then I made the mistake of pulling in a new branch, versus creating the branch.

% git checkout origin cfetcher
% git pull

This merged the remote branch into my local master branch, with bizarre results. After a few attempts at trying to back it out, I blew away my local directory and just re-cloned the remote cfetcher branch, since that’s where I’d be working for the next few days. Unfortunately when I cloned it, I did:

% git clone git://github.com/emi/bixo.git

That created a clone using the GitHub “Public Clone URL”, not the “Your Clone URL”, which is git@github.com:emi/bixo.git. Oops.

Everything worked, though, until I wanted to push back some changes:

% git push
fatal: protocol error: expected sha/ref, got '
*********'

You can't push to git://github.com/user/repo.git
Use git@github.com:user/repo.git

*********'

Expected sha/ref? The error message actually had all of the info I needed, just not in an obvious format. For example, a good message would have said:

You can't push to git://github.com/emi/bixo.git
Update the url for the "origin" remote in your .git/config file to use git@github.com:emi/bixo.git

Eventually the Supercharged git-daemon blog post at GitHub cleared things up for me. I edited the URL entry in my .git/config file, and all is (once again) well.

[remote "origin"]
    url = git@github.com:emi/bixo.git
    fetch = +refs/heads/*:refs/remotes/origin/*