Archive for July, 2009

I jinxed the Astros

Thursday, July 30th, 2009

Okay, so I was all upbeat about the Astros & then I went to Vegas — and subsequently the shit hit the fan.  Well, not quite, but it’s not good.  The ‘stros turned in a poor performance in a series against the hapless Mets, and our ace, Roy Oswalt, left the Cubs series with an injury.  (At least they won the game).  Setup man LaTroy Hawkins is now out, too.  Oh, and all-star Lance Berkman (the Puma – rawr!) is still on the DL.  Wandy Rodriguez continues to impress this season, though.  I’m still hoping for the best.  (This is why we’re called “fans”).

Astros sweep Cards, move into 2nd place in NL Central

Thursday, July 23rd, 2009

Timely hitting, great pitching, come-from-behind drama — I could get used to this.  I’ve been watching the Astros all season (which is sometimes painful), so this recent turn of events is quite welcome.  We’ll see if they can keep things going, but they’ve looked good against the Dodgers and Cardinals now.  Go Astros!

Microsoft and the GPL

Thursday, July 23rd, 2009

Microsoft recently released several paravirtualization drivers under the GPL (version 2).  People are making way too big of a deal about this.  (I suppose that’s to be expected from “Linux Magazine”).  There are two primary reasons that Microsoft chose the GPL:  maintenance of their own code, and proliferation.  This is not an attack on Linux.  This is not a trick on the GPL.  This is not Microsoft experimenting with Linux.  This is not a patch to the Linux kernel.  (“Linux Magazine”, indeed).  Modules are not the kernel proper, people!  My nVidia driver is no more “a kernel patch” than are Microsoft’s paravirt drivers.  The difference is that Microsoft’s drivers will ship with the overall kernel tree and get built with it, but so do drivers for arcane capture cards from 1994.

This is a practical move given the realities of how Linux is structured and distributed, and it’s comical (if not annoying) to see people who are supposedly Linux advocates completely misunderstand and mischaracterize what’s going on.  Look, the way the Linux kernel is structured, many of the most useful APIs are exported ONLY to GPL-declared code.  That means if Microsoft were to declare its module under any other license, it could not use a ton of high-level APIs, including basic stuff like, say, the entire devfs API or any of the IOMMU APIs.  There are numerous other examples.  This means that Microsoft would be forced to implement their own versions of these APIs, based on low-level constructs in the kernel that are subject to frequent change.  That is a maintenance nightmare, and Microsoft would have to be insane to pursue that strategy.
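For the curious, here is a minimal sketch (my own illustration, not Microsoft’s actual code) of why the license declaration matters.  Symbols the kernel exports with EXPORT_SYMBOL_GPL can only be resolved by modules that declare a GPL-compatible license via MODULE_LICENSE; a module that declares anything else is treated as proprietary and simply can’t link against them.

/* demo.c: a hypothetical out-of-tree module skeleton, for illustration only */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>

/* Without the "GPL" declaration below, the kernel marks the module as
   proprietary and refuses to resolve any symbol exported with
   EXPORT_SYMBOL_GPL, which is exactly the situation Microsoft avoided. */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustrative module skeleton, not a real driver");

static int __init demo_init(void)
{
        printk(KERN_INFO "demo: loaded\n");
        return 0;
}

static void __exit demo_exit(void)
{
        printk(KERN_INFO "demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);

Stuck on the wrong side of that license check, a vendor has to rebuild the missing functionality on top of whatever non-GPL exports remain, which is exactly the maintenance trap I’m describing above.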

The other major reason to use the GPL for their drivers is that, without it, Microsoft’s drivers won’t ship with the base Linux kernel + drivers distribution.  Microsoft wants to get these drivers out to as many people as possible so that Hyper-V’s paravirtualized features “just work” with as many Linux installations as possible.  This move by them increases those odds, so it’s a smart business decision.  It’s no different than Intel wanting their drivers to ship with the kernel.

I understand that Linux enthusiasts are (justifiably) leery of Microsoft, but making up crazy theories does not exactly make you look like a rational, reasoned critic.  Rather, it makes it easier for Microsoft to publicly discount any and all claims that the Linux community may ever have regarding Microsoft’s tactics, because they can point to previous nutty behavior.  Acting “shocked” that Microsoft would pursue its business interests is the juvenile equivalent of rolling one’s eyes.  And, claiming (even in jest) that this is a first step to Microsoft using a Linux kernel inside a Microsoft product does, indeed, count as nutty.  I’m reading this crap all over the Internet — it’s not funny, it’s not clever, and it makes OSS people look like idiots.

Amazon “super-saver shipping’s” bizarrely variable latency

Tuesday, July 21st, 2009

Generally, I love Amazon’s free super-saver shipping.  Amazon’s customer service is great (I’ve had to return things a few times, and they were always good about it), and the prices generally are tough to beat.  What I don’t get is the reasoning behind (and the implementation of) the following clause regarding super-saver shipping:

Please note that your order may take an additional 3 to 5 business days to ship out from our fulfillment center(s).

I always use super-saver shipping.  For the most part, I don’t need stuff in a hurry, and it’s not worth spending $5 to $10 on shipping to get it faster.  And actually, most of the time stuff winds up shipping out a day or two after I order it, not 3 to 5 business days.  However, I do notice that on more expensive orders ($200 or more, usually), the delay is always in that 3 to 5 business-day range.  How do they actually implement that?  Do they have some sort of daily queue, and cheapskates like me get put in the back every day, and if they don’t get to you, they don’t get to you?  And if so, how come cheap stuff gets shipped out quickly, but expensive stuff takes longer — different warehouses with different queues?  I mean, hey, it’s free, I’m not complaining — but I am curious how this works.  And I’m also curious how this is cheaper.  It sure seems like it’s just an artificial, added delay to my order, and I’m tempted to believe this is meant as a way to push me into paying for shipping next time.  (They have my order history, obviously — do they look at that and try to profile me?)

This might all seem like crazy conspiracy theorizing, but marketing really does work like that.  Wal-Mart pioneered this sort of stuff — they watch inventory closely, they watch what gets bought together, and they micromanage sales and discounting to maximize correlated sales of higher-margin items.  Supermarkets do this on an inter-trip (rather than intra-trip) per-person basis with “loyalty cards”.  People are regularly either being actively steered in their shopping habits, or marketers are looking to take advantage of the correlated habits they observe.  I’m going to keep buying stuff on Amazon with free shipping either way, but I do wonder if any such advanced techniques are in use here.

Adam Savage’s data-roaming & billing disaster

Monday, July 20th, 2009

You’ve probably heard about the $11,000 bill Adam Savage (of Mythbusters fame) got from AT&T after taking his mobile broadband card to Canada.  AT&T alleges he racked up 9 GB of data traffic while in Canada and proceeded to bill him at 1.5 (or .015, seeing as how their rep didn’t know the difference) cents per kilobyte.  Somehow, that comes out to $11,000, except, it doesn’t.  The data alone is either $135,000 (1.5 cents per kilobyte at 9 million KB) or $1,350 (.015 cents per kilobyte).  $11,000 is not an option there.  Maybe they meant .15 cents per kilobyte?  That’d make $13,500, which is at least in the ballpark.  Regardless, it’s an insane bill, but this math does not add up.
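To make the arithmetic concrete, here’s a throwaway C snippet (my own, using the “9 million KB” figure and the candidate per-kilobyte rates quoted above) that runs each rate:

#include <stdio.h>

int main(void)
{
    const double kilobytes = 9.0e6;                      /* "9 million KB", roughly 9 GB */
    const double rates_cents_per_kb[] = { 1.5, 0.15, 0.015 };
    int i;

    for (i = 0; i < 3; i++) {
        /* cents per KB, times KB, divided by 100 to get dollars */
        double bill_dollars = kilobytes * rates_cents_per_kb[i] / 100.0;
        printf("%6.3f cents/KB -> $%.2f\n", rates_cents_per_kb[i], bill_dollars);
    }
    return 0;
}

That prints $135000.00, $13500.00, and $1350.00; an $11,000 total doesn’t fall out of any of those rates.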

I bring this up not because the rate is insane (it is), but because there have been a few different takes on this story.  One that made me think was from Leo Laporte (the Tech Guy, formerly of TechTV, who now has his own radio show/podcast).  I like Leo, but basically his take is that though the rates are crazy, Adam needs to come clean & say that he really did use 9GB of traffic.  (Adam claims that it was just a few hours of surfing).  Now, it’s possible that Adam really was downloading several movies (as in, 9+), or that he was there for quite a long time, or that his machine is (unknowingly) part of some botnet.  Casual surfing does not add up to 9GB, even over several days.  However, given that AT&T can’t seem to do math (see previous analysis), I don’t think it’s reasonable to give AT&T the benefit of the doubt about Adam’s bill.

As we go forward to the metered-Internet future (or at least, attempts thereof), can we actually trust the accounting of the telcos?  I’ve been hit with mysterious bills before from PacBell (allegedly I called Germany for 10 hours in one month, which was patently ridiculous), and the willingness of these folks to back down from huge bills suggests, to me, that they aren’t all that confident of their billing accounting.  Part of it also is probably that they know how ridiculous their rates are, but you don’t just walk away from $11,000 if you believe you really have provided $11,000 worth of service.  (Yeah yeah, cost of bad PR versus getting the money back, I get that).  Where is AT&T’s statement about Adam’s usage (not privacy-violating details, but just a bland “this is how we make sure our billing is accurate” type statement) that would assure the rest of us that we aren’t getting screwed?

It seems to me that there needs to be regulation in the presentation of bills to customers (especially when you can rack up $11k in one month!) to standardize these things, and there needs to be industry-wide regulation to ensure that the bills presented to customers are accurate.  This means penalties when the bills aren’t accurate.  (If I don’t pay my bills on time my credit gets damaged — what are the consequences for firms that issue erroneous bills?)  For example, when Comcast incorrectly billed me for three months after screwing up the bill-coding when I took back their DVR, there should have been some consequence for them.  As it is, quasi-monopolies (granted by local franchising boards) and multi-year contracts (and penalties) effectively hold the customer hostage and separate these businesses from any market-driven consequences.  In the absence of those, I think regulation is a good idea here.

iTunes update kills syncing with Palm Pre

Thursday, July 16th, 2009

Apple’s latest update to iTunes (8.2.1) has apparently “fixed the glitch” wherein a Palm Pre could effectively mimic the iPod interface and convince iTunes to let it sync as an iPod does.  In other words, it used to work, and now it doesn’t.  What an epic fail for interoperability.  A buddy of mine (who has a Pre and wanted to see what the update does on his non-primary Mac) watched his log during the update and noticed that the new iTunes replaces USB .kexts.  Laaaaame.  It’s shady enough to break support for the Pre at the app level, but mucking around in the driver stack to do it?  Weak.  Presumably the Windows version is doing the same thing, since I believe there is a USB driver there, too, to detect that it’s an iPod & prevent Windows from automatically mounting the filesystem.

I like a lot of Apple products and generally prefer OS X over anything else (nice mix of power-user options, orthogonal configuration choices, and “it just works”), but this is both stupid (wouldn’t you want everyone to standardize on your portal — iTunes?) and plainly irritating to users.  How many iPhone/iPod sales have they just protected with this move, versus how much ill will did they just stir up?  I get that Apple is a company for the mass-consumer market, which doesn’t care about this and just buys iEverything.  It’s not a company for geeks like me, no matter how much I dig the geek-ish options in OS X and the automatic inclusion of all my handy unix tools.  I think they know that people like me won’t buy Apple stuff (again) if they do crap like this, or if they do stupid things like eliminate non-reflective LCD screens in their lineup.  (The clock is ticking on that one — I’ll probably get/build a PC for my next computer to upgrade from my 2006 iMac if they don’t offer a sub-$2000/non-crappy antiglare option in the next year or so).  They just don’t care, because I’m not statistically significant compared to people who will just buy Apple stuff because it’s hip.  It’s frustrating, but it’s not surprising.  Microsoft does much the same with its pricing/feature/partner maneuvering, which is similarly annoying.

Yeah, yeah, yeah, I hear you out there, OSS people.  Don’t talk to me about Linux until you’ve got something that’s configurable without jacking around in an ever-changing layout of .conf files and doesn’t require non-stop incompatible updating.  FreeBSD is awesome & much better on the configuration/consistency front, but unfortunately has craptastic desktop hardware support for gadgets like cameras.  Neither of them has an Exchange 2007 client (crossover-office is cool, but still crashes occasionally).  Long story short:  it sucks, but I guess I’m just glad I’m not a Pre user right now.

TiVo seeks $1B in damages against EchoStar

Tuesday, July 14th, 2009

Apparently it’s come out that TiVo is seeking 1 billion dollars in damages in its ongoing suit against EchoStar (parent of Dish Network) for violating their patents on low-cost DVR manufacturing.  Though this may seem like a lot, it comes after separate rulings for $90 million and $190 million judgments against EchoStar in this same suit.  Over the past several years, EchoStar has attempted to avoid TiVo’s patent-licensing demands by implementing what they claim are non-infringing workaround solutions.  However, the court has ruled that these workarounds are transparently similar to TiVo’s patent, and has even taken the somewhat unusual step of demanding that EchoStar consult the court before implementing any further “workarounds”.  This latest demand by TiVo appears to be an attempt to punish EchoStar for what looks like willful game-playing with the court and deception regarding their “non-infringing” solutions.

This case is fascinating to me for several reasons.  Primarily, it’s sort of a lightning rod for intellectual-property discussions.  In computer-science circles, this seems to be the big one that anti-software-patent folks point to.  First, I hear people claim that this is solely a software patent, and since they don’t believe software is patentable, they don’t believe TiVo’s case has any merit.  Even if software weren’t patentable, that argument fails here because there’s a large hardware component (compression hardware-assists — MPEG2 encoders) that is part of the overall system described in the patent, thus this is not solely a software solution.

The other argument is that what TiVo did was obvious and therefore their patent is invalid.  “It’s a VCR using digital parts — obvious!”  Except, combining the digital parts in such a way as to keep the price cheap is a key component of this patent.  Using a software-only solution to construct a DVR is, even now, cost-prohibitive.  (Good ones have hardware encoders — that’s the point).  Also, anyone who’s actually used TiVo software knows that they pioneered tons of stuff that nobody had done with a VCR, such as the “peek-ahead” method of fast-forwarding.  (If you fast-forward and press play, the recording actually starts slightly BEFORE the point you saw when you pressed play — a feature that is not feasible with a tape-based VCR).  Lots of DVRs have that now, but I’m pretty sure TiVo did it first.  So, TiVo pursuing its intellectual property is entirely reasonable.  And when the other side is a maliciously bad actor who consistently turned down reasonable licensing deals and instead made transparently-infringing “workarounds”, well, that’s exactly why the tort system has punitive damages.

As to whether or not software should be patentable, I’m fairly firmly planted in the “pro” camp.  We need patent reform to weed out useless patents (such as the string of “on the Internet!” patents of the late-90s/early 2000s, where you could take any algorithm, slap “on the Internet!” on the end of it, and get a patent.  See “one click” from Amazon).  However, we increasingly live in a digital world.  Most things, at some level, are largely AD/DA converters with microcontrollers.  Since we already model complex systems in software (and develop predictive models all the time), the leap to making a physical, real-world-controlling manifestation is frequently a matter of having the right servos, AD/DA converters, and so on.  In other words, if everything eventually becomes controllable primarily through software and we eliminate software patents, does that mean that nothing is (or should be) patentable?  Whether they come out and acknowledge it or not, that seems to be the logical conclusion of the anti-software-patent argument.

Another race, another mediocre result for Dale Jr.

Monday, July 13th, 2009

The 2009 NASCAR season is quickly approaching its playoff system, known as “the chase for the cup”.  Planted firmly in 21st place overall and 367 points behind the last-place chase-qualifying driver at the moment (Matt Kenseth), Dale Earnhardt Jr. has been a huge disappointment for many fans (including myself) this year.  (Current standings are available here).  When Dale Jr. moved from his stepmother’s DEI race team to the big-money, top-performing Hendrick Motorsports team in 2008, expectations were set astronomically high.  Dale Jr.’s Hendrick teammates are current three-peat Cup champion Jimmie Johnson, 4-time Cup champion Jeff Gordon, and current-season Cup-series wins-leader Mark Martin.  One could make the argument that expectations after his move might’ve been too high.  Still, after changing crew chiefs this year and having had a full season with Hendrick to get things right, it’s reasonable to expect him to be at least competing for a spot in the Chase.  Given that there are so many drivers between him and 12th place and that the points format does not overwhelmingly favor winners, even an unlikely win-streak will not save his Chase aspirations at this point.  Saturday’s result was especially disappointing, because Dale Jr. didn’t really ever look like he was in the race, despite the fact that he won at Chicago last year with Hendrick.

NASCAR’s biggest star turning in such a mediocre season is a story in and of itself.  However, what it means for the sport (specifically the track owners) is another story, and it will be interesting to see how it unfolds this year.  Dale Jr.’s popularity is something of a money machine, and it keeps people coming to the races.  If you look at his fans, they’re mostly a mix of old-school NASCAR fans (mostly inherited from his legendary father) and younger Southern fans.  Stereotypes aside, these are the people that pay money to go to the races, eat the food, and stay in the hotels (okay, okay, some may be camping in parking lots — the RV-based fans you see on TV are actually the extreme minority).  Though Jeff Gordon’s success and national popularity helped NASCAR become a national, mainstream sport, Dale Jr. is the draw for people who actually go to the races (which are still predominantly located throughout the South).  When I went to Daytona for the (then-Pepsi) 400 a couple summers ago, Dale Jr. fans outnumbered Jeff Gordon fans by at least 4:1.  This, despite the fact that Dale Jr. was winless that season.

However, with the current economy, I’ve noticed that NASCAR (like all sports) is drawing fewer fans.  The rumor is that NASCAR is doing everything it believes it reasonably can to stimulate lagging attendance, but I haven’t seen that reflected in ticket prices.  Even a crappy seat is going to run you $50 with fees.  A good seat will be five times that or more.  Many race tracks hold over 100,000 fans.  (Daytona holds 167,000.  Talladega holds 175,000).  With so many seats available (many of which are empty these days), the economics of this don’t make sense.  The entire back-stretch of Daytona was empty for the (now-Coke Zero) 400 this year.  The 10s (that’s right, 10s) of fans who were there appeared to have sneaked into that portion of the track, because they kept moving around (likely dodging security).

The risk here is that the local fanbase dries up and doesn’t go anymore — or at least, doesn’t go in such large numbers.  I love NASCAR, but I am not one of the people who goes just to watch “my guy”.  For most fans, however, that’s what they do.  And if their guy is Dale Jr. (as it seems to be for most of the people in attendance), you have to wonder if they’re going to continue shelling out $400+ in tickets, food, and gas to take the family.  This is terrible for the local track owners and the surrounding businesses in what are usually fairly rural towns.  (Talladega has a population of 80,000 people — less than half the capacity of the track).  I don’t think that NASCAR is about to dry up, but it seems like NASCAR may have been riding on the popularity of one driver to fill large tracks with fans who were willing to overpay for tickets.

Unless Dale Jr. starts at least looking like a contender, NASCAR is going to have to do something much more drastic about ticket prices.  If they don’t, tracks may go bankrupt, and NASCAR will have to decide whether they want to bail/buy out those tracks, shorten the season, or have new races at existing (financially solvent) tracks.  In the end, that may be better for the sport — having 4 races a year at one location versus 2, for example.  Also, it’s not clear that the length of the season is ideal.  NASCAR may generate more interest by having a shorter season (and possibly eliminating the Chase, which I am not a fan of).  Hopefully they won’t nix the New Hampshire races, though, since those are about the only ones I can reasonably go to anymore.  🙁

MySQL/Hibernate query caches demystified

Thursday, July 9th, 2009

Having trouble with MySQL performance, particularly with Hibernate’s query cache?  This guy’s blog has tons of architectural information on memory management in the Hibernate query cache.

Google Chrome OS – the next big thing?

Thursday, July 9th, 2009

So, Google announced their development (and impending release) of their so-called “Chrome OS”.  Because Google has a huge amount of resources and has a track record of delivering at least some revolutionary stuff (search + adwords, obviously, but Android and their locality-aware search stuff are promising), this is getting quite a lot of press.  Given that this is not their first foray into Linux distributions, I think some of the novelty here might be a bit overstated.  Google’s done a Linux distribution before – this is not groundbreaking.  Yes, it’s been for different target markets (server w/ no UI, or handheld with a very customized, simplified UI), but still, let’s not pretend Google is going out on a limb when they have, in fact, been comfortably seated on this branch for a while now.  The interesting, new aspect to me is that this could become ubiquitous on a large class of PC-ish hardware (netbooks), which could seriously impact the overall ecosystem of hardware vendors (targeting/supporting Chrome OS) and developers (moving to Chrome OS from… probably other Linux distributions).

Will developers stop working on Ubuntu and hack away on Chrome OS?  Well, a lot of the open-source community seemed to really covet Mac OS from the beginning (specifically the closed-source Aqua interface), and Ubuntu never really seemed to adequately whittle down the configuration GUI tools into simple, usable applications the way Apple has with OS X.  (Yes, you can do everything at the command line — that’s not the point.  If that were the end of it, Linux would have 50% market share by now).  So if Chrome OS is sufficiently slick and finally puts some of the “just plain works” polish on Linux, who knows.  It will be interesting to see how Google “welcomes” the open-source community, particularly if/when that community tries to tell Google how to handle its own operating system, or if outside contributors manage to introduce incompatible/problematic “enhancements”.

Or, maybe it’ll be completely locked down & just a glorified enhancement to Android, specifically (and only) for a subset of netbook hardware.  Preliminary announcements from Google suggest that’s a distinct possibility.  I’m skeptical of the Chrome OS project — the “the Web is the computer” marketing sounds all too familiar — but I’m also undeniably interested in how this could change the landscape of desktop computing, particularly as it pertains to the existing players in desktop operating systems.