Monthly Archives: May 2006

Being news reader friendly

Andrew Grumet makes an interesting point about using CSS in blog posts. I often find myself making style declarations inline when I want to position images, etc., in my blog posts. I like his idea.

On a related theme, I often see broken images when viewing RSS feeds in news readers — once upon a time I had the very same problem in my feed. The reason: relative URLs.

For example, let's take the image of myself. If you looked at the HTML for this particular image you would see something like this: <img src="/images/me.jpg"/>. That works fine in a browser, but try it in a news reader and … broken image. For the image to display in a news reader the link should be <img src="http://simonbuckle.com/images/me.jpg"/>; notice the inclusion of the full domain name. Perhaps that's another idea for a WordPress plugin: for each blog post, prepend the domain name to any relative URLs to avoid things like broken image links.
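A minimal sketch of what such a plugin might do, in Python rather than PHP for brevity. The base URL and the regex-based approach are my assumptions; a real plugin would hook into WordPress's content filters and probably use a proper HTML parser.

```python
import re
from urllib.parse import urljoin

BASE_URL = "http://simonbuckle.com/"  # assumption: the blog's base URL

def absolutize(html: str, base: str = BASE_URL) -> str:
    """Rewrite relative src/href attributes to absolute URLs."""
    def fix(match):
        attr, url = match.group(1), match.group(2)
        return '%s="%s"' % (attr, urljoin(base, url))
    # Only touch URLs that don't already start with http:// or https://
    return re.sub(r'\b(src|href)="(?!https?://)([^"]*)"', fix, html)

print(absolutize('<img src="/images/me.jpg"/>'))
# → <img src="http://simonbuckle.com/images/me.jpg"/>
```

Absolute URLs pass through untouched, so running the filter over an already-correct post is harmless.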

In summary: check how your feed looks in a news reader! Your readers will thank you or, at the very least, you won’t look like a dumbass.

Blog list

For no reason other than to fill in 5 minutes while I take a break from what I was doing, I thought I'd list the RSS feeds that are currently in my toolbar:

  • Joel On Software
  • Philip Greenspun’s Weblog
  • Microsoft Team RSS Blog
  • Paul Graham: Essays
  • Microsoft Research News and Headlines
  • Marc’s Voice
  • Official Google Blog
  • El Blog de Google México
  • Tim Berners Lee

Right. Back to what I was doing previously.

Amazon web services

Last night I attended a talk given by Jeff Barr, Amazon's web service evangelist. He talked mostly about Amazon's web service offerings, as you would expect from a web services evangelist, such as the Mechanical Turk, Alexa and S3, which stands for Simple Storage Service; I had wondered what it meant.

He gave several examples of businesses that use Amazon’s web services to make money: CastingWords being the most interesting example. They transcribe podcasts but use Amazon’s Mechanical Turk to get the work done. Interesting idea. Probably cheaper than outsourcing to India!

At the end of the presentation somebody in the audience asked him what proportion of web service requests were made via SOAP versus REST. Apparently 80% of Amazon’s web service requests are made using REST! I was quite surprised by this. I thought SOAP would have been used a lot more. Then again perhaps it’s not so surprising if you have ever tried to read the SOAP specification. I have mentioned this before. From a development point of view, making a request via REST is certainly much less work!
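To see why REST is less work, consider that a REST request is nothing more than a URL fetched with HTTP GET. The endpoint and parameter names below are made up for illustration; they are not Amazon's actual API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameters, for illustration only.
params = {"Operation": "ItemSearch", "Keywords": "python"}
url = "http://api.example.com/search?" + urlencode(params)

# That's the whole request: no XML envelope, no WSDL, no SOAP toolkit.
# You could paste the URL into a browser and read the response.
print(url)
```

The SOAP equivalent of the same call would require constructing an XML envelope and usually a generated client stub, which goes some way to explaining that 80% figure.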

Overall the talk was interesting, especially some of the ways in which people are using Amazon’s web services. You can read more about Amazon’s web services on their blog.

Profanity is for people who don’t know the words

While doing a search on Google for the word fuckwit I was amused to find out what came top of the list. I believe this is a good example of a Google bomb.

If you are unsure what the word fuckwit means, then check out the definition.

By the way, the title of this post was something I heard Charles Barkley say on TV during an interview many many years ago. At least I am pretty sure it was him. Kind of ironic coming from a professional sportsman and profound too!

Map upgrade

I have just completed the upgrade of my map application to use version 2 of the Google Maps API. I had a few issues but there don't appear to be any problems now. Permalinks generated previously should still work. Unlike Windows Vista, I decided to maintain backwards compatibility!

In the new API Google provides a method to automatically calculate the distance between two points. I wrote code to do this in the previous version, which I have now removed, so if you are using a permalink generated with version 1.0 there may be a slight difference in the total distance calculated for the route.
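Small discrepancies like that are expected: the standard way to compute the distance between two lat/long points is the haversine formula, and results vary slightly with the Earth radius you plug in. Below is a sketch of that calculation in Python; this is not the code that was removed from the map application, just an illustration of the kind of computation involved.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# London to Paris, as the crow flies: roughly 344 km
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))
```

Two implementations that choose different Earth radii (or a spherical versus an ellipsoidal model) will disagree by a fraction of a percent, which is exactly the "slight difference" a v1.0 permalink might show.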

Any problems, please leave a comment.

Comment spam

Well, I appear to have the comment spam situation under control. I installed Spam Karma 2 as Miles suggested and so far so good. I am still getting spam comments but they all appear to be getting caught. I refuse to turn comments off or to make people register as I think this discourages people from leaving comments. I quite like this idea, although I am not sure how well it would work if spammers specifically target your site because it has lots of traffic, as suggested in some of the comments. Still, it may be worth a try.

Another option would be to cut the spammer out at the transport level: if a machine with a blacklisted IP address tries to make a connection, just drop it. This would avoid the need for fancy filters that try to figure out statistically whether something is spam or not.
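The idea boils down to a membership check at connection time, before any content filtering runs. A hypothetical sketch, using example addresses from the RFC 5737 documentation ranges; in practice this check would live in a firewall rule rather than in application code.

```python
# Hypothetical blacklist; these are documentation-range example addresses.
BLACKLIST = {"192.0.2.13", "198.51.100.7"}

def should_drop(peer_ip: str) -> bool:
    """Decide at connect time whether to drop a peer outright.

    Cheap set lookup: no parsing of the request body, no statistical
    spam scoring -- the connection never reaches the blog software.
    """
    return peer_ip in BLACKLIST

print(should_drop("192.0.2.13"))   # blacklisted peer
print(should_drop("203.0.113.1"))  # unknown peer, allowed through
```

The firewall equivalent would be something like iptables' `-s <address> -j DROP`, which rejects the packet before the web server even sees it.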

The battle is won but the war is not over! Ding ding. Round 2.