Archive for the ‘High Geekery’ Category

The Quest for Feed Bliss

July 18, 2009

I’ve recently switched to Google Reader for all of my feed reading needs. This is the latest iteration in a long line of trying to find the perfect feed reading experience. Here’s what “perfect” means to me in this context:

  • Readily available so that I can polish off a few items whenever I have a spare minute
  • Enables me to clear out a batch of unread items easily
  • Fast
  • Navigable by keyboard for faster reading
  • Native applications for whatever platform I’m on plus a Web application backend
  • Sync between work, home, and phone

I presently subscribe to 250 feeds, so the primary consideration is staying on top of them. There is a real cognitive weight to having 1,593 unread items, and I strongly dislike declaring “feed bankruptcy.” So I have spent the last few years testing different options.

For most of that time, Bloglines was my go-to solution. It was fast and fairly efficient. But I was never satisfied because it was Web-based, lacked decent keyboard navigation, and required an Internet connection to access at all. I tried Google Reader when it first came out but it left me cold. Since I spent my working life on a Windows XP machine, I resigned myself to a Web-based application.

Then I got a Mac at work and suddenly all of the great Mac OS X feed reading applications were available. I again tried all of the ones I had evaluated at home: NetNewsWire, NewsFire, Shrook, and some others that I can’t remember now. I settled on NetNewsWire because of the NewsGator syncing, the native iPhone application, and decent keyboard navigation. I still wasn’t completely happy with the setup because the NewsGator Web application is terrible: no keyboard navigation, slower than you’d think possible, and hard to mark items as read.

As I said earlier, Google Reader is my current solution and I think it’s going to stick this time. The Web application has matured substantially since I looked at it four years ago. It lacks a native Mac OS X application but I found a way around that earlier this week, which I chronicled in this Super User answer:

  1. Download Fluid.app.
  2. Save this PNG image (or this higher-resolution one) to your Desktop.
  3. Open Fluid.app and use the Google Reader URL, name, and newly-saved icon.
  4. Launch the Google Reader application from your Applications folder.
  5. Buy Byline or use the really good mobile version of Google Reader (you can save it to your Home screen to boot).

This setup is very fast, feels native (Fluid.app even displays the unread item count as a badge on the Dock icon), syncs between all environments, has great keyboard navigation, and is always available. I’ve gotten my total unread item count down to 8 and kept it in double digits for the last week, something I haven’t done since I started feed reading.

It’s refreshing to have that load off my mind.

Curse You, URL Shortening Services!

June 19, 2009

I now have a horse in the URL shortening drama. My Meme Obfuscation Machine doesn’t work for tweets. Try as I might, I just can’t get something by Twitter’s automatic URL shortening. Seriously, what’s the fun in Rickrolling someone with a carefully-crafted, seductive URL when it gets turned into bit.ly/NauRm?

Lessons of a First-Time WWDC Attendee

June 13, 2009

In the interest of contributing to the wealth of tips on WWDC, I’d like to share what I learned this week about the event itself—I can’t talk about the session material since it’s under a non-disclosure agreement.

  1. Don’t lose your badge. I didn’t, thankfully, but the attachment of the badge to the lanyard is very precarious. Everything—everything—revolves around that badge and there’s security everywhere. They will balk if they can’t see the full badge.
  2. There is no Apple-provided dinner except for the Bash. From the original Web site, it seemed like Apple would provide dinner daily, but that was emphatically not the case. The Bash food, incidentally, was excellent. I was stuffed from the sushi, hot dogs, pizza, Chinese, pasta, cookies, and quiescent confections.
  3. You can leave on Friday. I booked my return flight for Saturday morning thinking that sessions would run as normal on Friday and I didn’t want to rush around dealing with luggage and transportation to the airport. Turns out, the last session ended a little past 2 o’clock and they have a luggage holding station at Moscone West. I could have easily left that day. There’s a lot to see in San Francisco, of course, but I was ready to go home.
  4. Don’t miss Stump the Experts. I didn’t learn anything at all from the session but it was hilarious. This was the 20th Stump the Experts event and it made me feel nostalgic even though this was my first time attending.
  5. The labs run concurrently with the sessions. There were many great sessions that conflicted with one another, but most of the good labs also conflicted with those great sessions. The best bet, I found, was to skip a Q&A here and there to make use of the session interstitials. Even so, I missed several opportunities. If the videos came out in a timely manner, I’d say to attend the sessions only for the Q&A (or to ask your own questions) and focus on the labs. You can watch the video at your leisure, but you’re never going to get that kind of face time with an Apple engineer otherwise.
  6. The WiFi access was excellent. I consistently got five bars throughout Moscone West during the entire conference. I also was able to connect via VPN at will. I’m not sure why the online accounts I read had WiFi trouble in the past, but Apple appears to have gotten its act together.
  7. Complaining about the lines is an effective icebreaker. WWDC, for me, was a series of lines: lines for the sessions, lines for the labs, lines for the urinals, lines for the sinks, lines for the food. Witty observations about this led to many interesting conversations with line neighbors. Not that you need an icebreaker: I never had any trouble striking up a conversation with anyone and the bonhomie was palpable throughout.
  8. Use the elevator. There’s an elevator near the stairs that was almost never used. If you’re on the third floor after a Presidio session and you want to go to a lab, your best bet is to skip the line for the escalators entirely and go straight for the elevators. I generally rode it alone; I have no idea why so few people took it.
  9. Plan on getting in line for the Keynote by 8 o’clock. I waited until 9 AM to mosey down to Moscone and the line had already wrapped around nearly back to the main entrance off Howard. By 9:45, we had barely moved. I ended up getting seated in the overflow room, which had quite a nice view of the Keynote, at about 10:20 AM and missed the hardware announcements entirely.
  10. The Interface Design consultation is by appointment and they fill up quickly. I was planning on having an Apple engineer give my iPhone application a once-over, but I didn’t realize you had to reserve a spot so they were gone by the time I got down there. If I were doing it again, I would make this action my top priority.

Was WWDC worth it? Big time. It was hard being away from my family—video conferencing via iChat helped considerably—but I learned so much and got direct answers to my questions that I can recommend it without reservation. Plus, I got a developer’s preview of Snow Leopard that is wonderful. iPhone OS 3.0 and Snow Leopard are going to be great, people. Make sure you upgrade when they become available.

Redmond, Start Your Pricing Guns

June 11, 2009

One of the most exciting aspects of the WWDC keynote announcements was the pricing of Snow Leopard: $29, or $49 for a five-user Family Pack. I’ve purchased every version of Mac OS X for $129 since the original 10.0 (except 10.1, obviously), only occasionally catching a break by buying a new Macintosh.

Every version was worth it, mind you, but it still felt like an ongoing cost of owning a Mac. (I must here disclaim any sense of entitlement: I know that previous versions of Mac OS X continue to work after the new ones come out and I have taken that route for non-essential computers. This feeling arose from my inner cheapskate more than any sense of deserving something for nothing.) Every new version required a careful calculation of benefits and review of features for ancillary machines.

But I don’t have to think twice at a $29 (or $49) price point. On this point, David Pogue has it right. But his reasons for the pricing barely scratch the surface. I paraphrase his four listed reasons as follows:

  1. This release doesn’t have enough features to justify $129.
  2. They want to get this out to a lot of people.
  3. They want to embarrass Microsoft with this ridiculous value of the release.
  4. The lower the price, the less likely people are to even blink at upgrading.

There’s a lot more to it than that, though. 10.6 requires an Intel machine. If you’ve got an Intel machine already, it’s likely that you’re running 10.5 and that you’d gladly pay $29 just to recover 6 GB of disk space, to say nothing of the slew of new features. If you’re running Tiger on an Intel machine, you have to shell out $169 for the Mac OS X Box Set. And if you’re not on an Intel machine, you cannot upgrade to 10.6 (or, presumably, to any future release). So this release cycle effectively tells those still on Tiger or the PowerPC platform that their days of being supported by Apple are nearly over.

Finally, if 10.6 is truly laying the groundwork for future plans, then Apple has an interest in having as many developers making use of its new technologies as possible. But historically developers will not migrate to these new systems until a critical mass of users have made the move: supporting two disparate versions of a feature is expensive for small developers and they won’t do it unless there’s an absolutely compelling reason. Pricing 10.6 at this level will induce a substantial number of consumers to upgrade. On the iPhone, I can imagine that 3.0-only applications will come about soon because the upgrade friction is minimal there.

With a solid base of applications using 10.6 features, Apple can sell future hardware in a way that Microsoft-based vendors cannot. With the gigahertz arms race faded, hardware vendors are competing on multiple cores, multiple CPUs, and RAM. But consumers quickly discover that all of this extra hardware encounters diminishing returns on the software that they use—either the software can’t make use of memory above 4GB or these extra cores are mostly idle. 10.6’s promise is that it makes using these hardware features seamless to the developer through mechanisms like Grand Central Dispatch, OpenCL, and completing the transition to 64-bit.

These strike me as more substantive reasons for the pricing than Pogue’s facile ones. I believe 10.7 will resume the $129 price cycle as people catch up to the Intel/Leopard transition and Apple wants the third-party applications to be there waiting to sell the hardware’s value.

Email Fun

February 27, 2009

In speaking with a co-worker, I mentioned a couple email tips that he hadn’t heard. Thinking that others may be in the same boat, I offer them here:

  • Gmail: you can put periods throughout the username and Google will ignore them. So “bbrown” can be “b.brown,” “bbr.own,” or even “b.b.r.o.w.n.” and the emails will come through.
  • Gmail: you can append a plus sign and additional text to the username and Google will also ignore that text. “bbrown+specialdeal,” “bbrown+spam,” and “bbrown+yahoo” all get to their proper final destination. This and the other tip plus Gmail’s filters enable you to create disposable email addresses without preplanning.
  • Mailinator is the king of throwaway email addresses. In a form, enter something@mailinator.com and you can access that username’s messages through the Mailinator Web site. Anyone else can access the email too, so this isn’t really useful for anything besides anonymous emailing. Some sites have caught on and check for the mailinator domain name, but there are plenty of aliases available (you can even point your own domain’s MX record there).
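Taken together, the two Gmail rules amount to a simple normalization: drop everything from the plus sign on, then drop the periods. Here is a sketch of that idea (the GmailAddress class and its Canonicalize method are my own illustration, not any real API):

```csharp
using System;

static class GmailAddress
{
    // Reduce a Gmail address to its canonical delivery form:
    // strip any "+tag" suffix, then remove periods from the username.
    public static string Canonicalize(string address)
    {
        int at = address.IndexOf('@');
        string user = address.Substring(0, at);
        string domain = address.Substring(at + 1);

        int plus = user.IndexOf('+');
        if (plus >= 0)
            user = user.Substring(0, plus);  // "bbrown+spam" -> "bbrown"

        user = user.Replace(".", "");        // "b.b.r.o.w.n" -> "bbrown"
        return user + "@" + domain;
    }

    static void Main()
    {
        // Both variants land in the same inbox.
        Console.WriteLine(Canonicalize("b.brown+specialdeal@gmail.com")); // bbrown@gmail.com
        Console.WriteLine(Canonicalize("b.b.r.o.w.n@gmail.com"));        // bbrown@gmail.com
    }
}
```

This is also why the plus trick pairs so well with filters: the “+tag” survives in the To: header even though delivery ignores it, so you can filter on it.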

WebException and the HttpWebResponse

February 21, 2009

The following code is used to make a request and get the results:

using System.IO;
using System.Net;

// Fetch a page and read the response body.
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://bbrown.info/");
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
StreamReader reader = new StreamReader(resp.GetResponseStream());
string contents = reader.ReadToEnd();
resp.Close();

contents will contain the HTML of this blog if the server gives a 200 OK response. Anything else will throw a WebException. You can wrap the snippet above in a try-catch to handle a non-200, but the exception is thrown in the GetResponse call so you get nothing from the actual response. 404? May as well be a 500.

Today I discovered that WebException itself has two useful properties: Response and Status. That Response is the same object as resp above, so you can extract the server’s response in the catch block.

This whole behavior of HttpWebRequest is counterintuitive in the sense that a non-200 is not an exceptional circumstance; I would have expected the response to be accessible and the status code to be populated.
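Putting those two properties to work, the catch block can recover both the status code and the body of the error page. A sketch, assuming a URL that returns a non-200 (the hypothetical /no-such-page path is just for illustration):

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://bbrown.info/no-such-page");
        try
        {
            using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
            using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
            {
                string contents = reader.ReadToEnd();  // the 200 OK path
            }
        }
        catch (WebException ex)
        {
            // Only ProtocolError carries a server response; timeouts and
            // DNS failures leave ex.Response null.
            if (ex.Status == WebExceptionStatus.ProtocolError && ex.Response != null)
            {
                using (HttpWebResponse errorResp = (HttpWebResponse)ex.Response)
                using (StreamReader reader = new StreamReader(errorResp.GetResponseStream()))
                {
                    Console.WriteLine((int)errorResp.StatusCode);  // e.g. 404
                    string errorBody = reader.ReadToEnd();         // the server's error page
                }
            }
        }
    }
}
```

Note the Status check: for connection-level failures there is no server response at all, so guarding on WebExceptionStatus.ProtocolError keeps you from dereferencing a null Response.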

Ch-Ch-Ch-CheckBot

September 30, 2008

Yesterday I released my latest project at work. I call it CheckBot and it is a Windows service that pulls down messages from a third-party service, checks them for domain names, and replies with whether those domain names are available. I built it using a plugin architecture, so adding third-party services is a breeze.

The first plugin was Twitter. A Twitter user just has to follow domaincheck and then send that bot account a domain name through the direct messaging system. Within seconds, CheckBot will respond with its availability and include a link to register it on GoDaddy.com if it is available.
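I can’t share the CheckBot source, but a plugin architecture like this usually boils down to one small contract that the host service drives uniformly. The names below (IMessageSource, InboundMessage, Host) and the trivial availability stub are illustrative only, not CheckBot’s real types:

```csharp
using System.Collections.Generic;

// Hypothetical plugin contract: each third-party service (Twitter, etc.)
// implements this, and the host never needs to know service details.
public interface IMessageSource
{
    IEnumerable<InboundMessage> FetchMessages();
    void Reply(InboundMessage original, string text);
}

public class InboundMessage
{
    public string Sender;
    public string Text;  // expected to contain a domain name
}

public static class Host
{
    // The host loop: poll every plugin, answer every message.
    public static void Run(IEnumerable<IMessageSource> plugins)
    {
        foreach (IMessageSource plugin in plugins)
            foreach (InboundMessage msg in plugin.FetchMessages())
                plugin.Reply(msg, IsAvailable(msg.Text)
                    ? msg.Text + " is available!"
                    : msg.Text + " is taken.");
    }

    // Stand-in for the real WHOIS/registrar availability check.
    static bool IsAvailable(string domain) { return domain.Length > 20; }
}
```

The appeal of this shape is that adding a new service means writing one class; the polling, parsing, and reply plumbing in the host never changes.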

I am very proud of this application because I did it fairly quickly and I like the simplicity of the design. There was only one bug that came up during testing and it was both minor and quickly resolved. This sort of thing is exactly the reason why I love my job and the Gadgets Team I lead.

[The views expressed on this website/weblog are mine alone and do not necessarily reflect the views of Go Daddy Software, Inc.]

Thoughts on Android

September 30, 2008

Last week saw the introduction of the first Android phone, the T-Mobile G1. I’ve been following Android’s progress with interest because it seems to be the most compelling competitor to Apple’s iPhone so far.

This video by Engadget really helped me to understand the phone and operating system in a way that all of the specs and press releases have not. This particular video was better than most of the other ones I’ve come across because the phone’s operator was quite familiar with its features.

Here are the things from the video that I really liked:

  • That little drop-down notification panel that appears and disappears after a few seconds. It can apparently also be recalled at any time. It’s especially handy for background processes and applications, neither of which is possible on the iPhone.
  • The compass rose on the Google Maps application. This is a third-party integration, like a plugin or Greasemonkey script, that provides additional functionality not originally conceived by the app developers. This sort of customization is impossible on the iPhone and could be the basis for a much richer experience.
  • The Street View responds to movement on all axes by changing the view accordingly. This is pretty sophisticated positional analysis. Like the compass rose, it appears that Android can tell the application not only the phone’s coordinates but also its orientation on all axes. That’s not available on the iPhone and could be very useful.

That being said, I believe that Android is doomed to failure. First, it has forsaken multitouch ubiquity. After the pioneering efforts of Jeff Han, Apple and Microsoft have clearly embraced multitouch as the user interface of the future. By not requiring hardware manufacturers to support multitouch (or touch at all, really), Google has seriously limited application developers. If a developer wants to do a multitouch application once Android supports that, he is either limited to a subset of the customer base or he has to make it degrade gracefully on phones that don’t support multitouch—neither is an appealing option.

And openness has proven time and again to not be a huge selling point to the average consumer. There are already open mobile operating systems but people are clamoring for iPhones. There’s a certain abstract benefit to openness that is hard to communicate to users. The freetards may whine but the average person just looks at the iPhone and drools. They don’t particularly care that their phone isn’t open. Why? Because most phones in the past never were.

I predict that Android will linger forever on cheaper phones where a free operating system could make for increased profits. The Big Three—Apple, Microsoft, and Nokia—will barely notice its share and Google will mostly abandon the project due to its corporate ADD.

A Hard Decision

September 27, 2008

I’ve been a consumer of the Facebook.NET framework for a few months now as I’ve developed several Facebook applications in ASP.NET. I chose that framework instead of the official Microsoft one because it seemed more logical and straightforward.

Honestly, I’m not at all certain why I originally chose one over the other. I read all of the blog commentary about each, I looked over the source, and I checked out the sample applications included. Facebook.NET struck me as elegantly designed, well conceived, and actively developed. Sure, Microsoft had commissioned and paid for the development and maintenance of the Facebook Developer Toolkit, which meant it was more likely to be around in the future. This possible objection was easily dismissed since Facebook.NET was open source and could be extended privately as long as need be.

What I couldn’t have foreseen were sweeping changes by Facebook to the underlying API within six months and a complete abandonment of the open-source project by its sole maintainer. Facebook has made a lot of mistakes in handling the transition but there’s precious little that I, as a third-party application developer, can do about that. So my sole responsibility is to keep up with updates to the framework and alter my code to accommodate the new (or changed) functionality.

Faced with a framework that isn’t getting updates, the responsibility expands considerably. One must either abandon the abandoned framework to search for greener pastures or one must take up the mantle of leadership by forking the project. Neither is a path to be chosen lightly for each entails considerable pain.

The choice was made easier for me by the fact that the Facebook Developer Toolkit was just as inactive at the time. I tried corresponding with the Facebook.NET maintainer and even succeeded a couple times: I would much rather have been a developer on a project than the man responsible. In the end, it became clear that the maintainer had moved on to other projects and that I was going to have to fork.

The result is fb.net. I largely brought it up to parity with the API changes in a span of two days but then I got distracted by work, family, and other projects myself. As it stands, there’s just a little more to go and then I can make a release candidate.

My only hope is that I can get this framework ready for a full release and then start looking to build a community that can assist in its maintenance. The Facebook.NET maintainer got it off to a good start; now it’s my turn to finish the job.

Wiki Wild Wiki Wiki Wild

September 18, 2008

My team just got moved to another group within Go Daddy on Monday. We had put a lot of information up on the old group’s SharePoint Wiki and needed to put it somewhere. The new group didn’t have a unified Wiki, leaving it up to each team. I hadn’t really looked at the Wiki world in a while and I imagined that I’d need to go with something like MediaWiki.

Then I remembered that Jeff Atwood had written favorably about a .NET Wiki called ScrewTurn. A cursory investigation indicated that it was pretty damn awesome!

In no time, I had a solid Wiki system up and running. It’s very easy to install and configure, and it appears to follow MediaWiki markup syntax, which is a big plus. I replaced its authentication system with Active Directory integration through an easily-installed plugin—enabling any employee to log in and edit pages.

The hardest part was migrating the content from SharePoint. It was brutal, tedious work but you only have to do it once. If you’re an employee and reading this on our network, you’re welcome to check out my handiwork. (If you need any help setting up your own ScrewTurn Wiki, let me know.)

[The views expressed on this website/weblog are mine alone and do not necessarily reflect the views of Go Daddy Software, Inc.]