Posts Tagged ‘software’

Snow Leopard Exchange Bug Confirmed

Monday, August 31st, 2009

After the official launch Friday, several people within my company are now reporting the same bug I experienced with Exchange support in Snow Leopard.  Email folders and calendar events just disappear from the server.  This looks very nasty.

MacOS 10.6 “Snow Leopard” Review

Friday, August 28th, 2009

I’m a member of the Apple Developer Connection, which gives me access to Apple’s “developer preview” releases of MacOS.  The latest release, launched today, is version 10.6 (dubbed “Snow Leopard”).  I’ve been running the last developer preview (build 10A432) for two weeks now.  Though developer previews are subject to Apple’s NDA, the non-disclosure terms do not apply to publicly-available programs and information.  Fortunately, a friend of mine found that his fresh install of the retail disk that arrived from Apple today is also build 10A432 (that is, the publicly available disk is identical to my installation).  Because my experiences reflect use of (what is now) a publicly-available product, I can share them now.

Though this is not a comprehensive review, these are my perspectives as an end user.  Overall, I like the responsiveness, the disk-space efficiency, and (as a geek) the promise of improved capabilities through the rearchitected (“refined”) kernel.  However, I had severe data-loss issues with Exchange support, enough that I would strongly advise caution before using it in a critical setting, and I hit a couple of troubling kernel panics.

Installation went smoothly and took about an hour.  I ran the installer from within my pre-existing 10.5.8 installation on my Core 2 Duo Late 2006 iMac, which has 4GB of RAM.  I disliked that the installation of Rosetta (Apple’s mechanism for emulating PowerPC execution so that you can run older PowerPC applications on Intel hardware) is disabled by default.  I thought they could do something more intelligent, like look in your /Applications folder and check for PowerPC applications.  It’s a minor issue, however — I had read that this was a known change, so I enabled the optional install.  More serious to me was that the installation deleted Xcode.  I suppose this, too, was an optional installation, but it seems rather stupid.  Xcode is Apple’s own software and a fundamental part of the development environment.  If you’re upgrading the rest of the OS and preserving settings and so on, why on earth was Xcode passed over?  I reinstalled it after I discovered it was gone, but finding my basic tools (gcc et al.) missing was disturbing.  The installation initially did seem to reclaim about 6GB of disk space, but after reinstalling Xcode, the difference was negligible.  Allegedly the major source of reclamation is removing unused printer drivers and installing them on demand instead, but since I had already removed them, it makes sense that I missed out on the big disk-space savings.  I also installed 10.6 on my late-2006 Macbook Pro, which roams more often, so I’d left the printer drivers on it.  The disk savings were closer to the claimed 6GB on that machine.  Regardless, both worked fine with my networked Brother laser printer after the upgrade.  The installer no longer gives installation-type choices (“upgrade in place”, “archive and install”, “erase and install”) — it just does an upgrade.  Apparently you *can* “erase and install” manually if you erase your disk before booting the DVD, but that’s not the method I chose.

After installation, the first thing I did was configure Mail to use my work email account, which is hosted on an Exchange 2007 server.  Exchange support is probably the big user-noticeable feature added in 10.6, which is otherwise mostly a rearchitecting of the kernel with very few user-visible changes.  I was very excited about it.  For those who don’t know, Exchange integrates calendaring, a user database, and email.  All information is stored on the server and fetched by the client, so you can have multiple clients connected to one server, all sharing a synchronized “view” of your account’s email, calendar, and contacts.  It’s a Microsoft standard, and frankly it’s annoying to use because there are so few non-Microsoft clients.  I had been using the (Rosetta-requiring) Entourage 2004 client, which is functional but slow, with a terrible interface.  Setting up my account was trivial.  Mail quickly started fetching my mail, and before long my calendar was synchronized with iCal as well.  Exchange contacts were accessible as expected in Mail, and things seemed to “just work” — until they didn’t.

About three days in, disaster struck.  Somehow, one of the clients (either my iMac or Macbook Pro) got out of sync and deleted several folders’ worth of email from the server.  I lost about 3000 emails, possibly permanently.  (I’m in the process of finding out from our IT group how difficult it might be to restore them, since I know our email is archived for Sarbanes-Oxley compliance.)  Most of these were not critical emails, but this was extremely troubling.  I immediately disabled my Exchange account on my 10.6 clients.  Apple’s bug-reporting tool has been broken for weeks, so I’ve no idea if they’re making progress on this issue.  Worse, a backup could not save me — it’s a synchronization bug that wipes out your emails on the server.  I haven’t seen anyone else report this issue.  Obviously I looked in the trash (on all my clients) and saw nothing — this clearly seems to be some sort of bug in the synchronization mechanism Apple is using.  I have perhaps 15,000 emails on my account, and I do wonder if Apple’s testing has overlooked cases like this.  I also have about 20 filtering rules (for mailing lists), which may have complicated the issue, and I always have several clients connected to the server simultaneously (usually my Outlook 2007 client at work, my iPod Touch, and my laptop and iMac at home).  Regardless, I’ve been doing this with Outlook and Entourage for a long time and have never encountered anything like it, so I’m definitely chalking this up to some bug that Mail either has or triggers in the other clients.  The take-away is that users should be extremely cautious about using Mail with their Exchange accounts until Apple releases an update that mentions improved Exchange support.  I certainly won’t be using that feature until then.

Other than Exchange support (which is unusable for me now), the changes are subtle.  There’s a new look to context menus for items in the Dock, which was the only interface change I noticed.  Window decorations and the Finder look identical to 10.5.  Boot Camp adds support for reading HFS+ volumes from Windows, which is nice.  The other major differences are all under the hood.  I did not do any benchmarking, but threading and context switching seem much, much faster.  (I hate reviews that say useless crap about how something “feels snappier” — how is that quantified, exactly?  It always seems to be in the reviewer’s head.)  Specific things I noticed as improved were the responsiveness of Exposé and the Dashboard.  The Finder has been rewritten in Cocoa, which may also explain its speed improvements.  And surprisingly, Firefox 3.5.2 is much more responsive on 10.6 than on 10.5.  Opening new tabs is far quicker than before, and I get far fewer “spinning beachball” pauses when I’ve got lots of tabs open in Safari or Firefox.  These are perhaps the effects of scheduler and memory-allocation improvements in MacOS — areas that have traditionally performed poorly, and which the marketing material for 10.6 seems to address (things like the “Grand Central” multicore dispatch mechanism, which suggests heavy work on the scheduler).  10.6 also has support for a 64-bit kernel and extensions, but my hardware does not have a 64-bit implementation of EFI and therefore is not compatible with Apple’s 64-bit kernel.  (It’s possible to run a 64-bit kernel on 32-bit EFI, but Apple isn’t supporting it, which is fine.  Even on machines where a 64-bit kernel is supported, it likely does not make sense to enable it by default until drivers and 3rd-party extensions have a formal, rigorous qualification process.)  Likewise, my graphics hardware does not support Apple’s hardware-accelerated h.264 playback, so I can’t comment on that, either.  Unlike the 64-bit kernel, I do hope Apple aggressively rolls out expanded hardware support for this acceleration to other models.  It may well be that Apple will support these features on all new Macs, which would be great.  For now, they are of limited (or nonexistent) value for upgraders.
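
For the curious, Grand Central Dispatch is exposed to applications as a C-level API (libdispatch).  Below is a minimal sketch of the thread-pool-backed parallelism it offers; this is my own illustration, not code from Apple’s frameworks, and it assumes a blocks-aware compiler on 10.6:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* A shared, priority-based queue backed by a system-managed thread pool. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();

        /* Fan eight independent work items out across the available cores.
         * Each block captures the current value of i by value. */
        for (int i = 0; i < 8; i++) {
            dispatch_group_async(group, q, ^{
                printf("work item %d running on a pooled thread\n", i);
            });
        }

        /* Wait for all queued items to finish before exiting. */
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);
        return 0;
    }

The appeal is that the system, not the application, decides how many threads to spin up for the hardware at hand, which is presumably where the scheduler work comes in.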

Unfortunately, the architectural and performance improvements aren’t all ideal, either.  When using Safari on a guest account I was testing, I managed to trigger a kernel panic (!) twice using the “Top Sites” feature, which I normally disable on my own account.  Unfortunately, this feature is enabled by default.  Thinking it was a fluke, I tried again, and boom, panic() again.  Apparently Apple has not taken the advice from my blog (that is, “don’t panic()”).  🙂  Again, I haven’t seen other reports of this issue, but I have not had stability problems on this hardware in the 2.5 years I’ve had it, including some 3D graphics use.  The panic log showed that the failure occurred in com.apple.ATIRadeonX1000, so this may be a graphics-driver issue.  I say “may” because any other driver in the system could have corrupted that driver’s memory earlier (via an errant DMA, for example) and caused the eventual fault.  It’s tough to say.  So, it would seem that a few corner-case bugs still require addressing.  In the meantime, I’m disabling “Top Sites” everywhere and will be on the lookout for an update.  I’m also going to be wary of 3D graphics.

So to summarize, these are the Pros and Cons from my experience — on newer hardware, I’d think that the “Pros” would also include hardware acceleration for h.264, but perhaps an update will come with that.

Pros:

  • Improved application responsiveness
  • Disk space reclaimed
  • HFS+ read support from Windows (via the Boot Camp driver)

Cons:

  • Potentially catastrophic Exchange support
  • Mysterious kernel panics that seem graphics-related (using Safari’s Top Sites feature)
  • Features that seem tightly restricted to a small-ish subset of installed hardware (h.264 hardware acceleration, OpenCL support)

On the whole, it seems a fairly solid point-zero release (10.5.0 had its share of troubles).  The only potentially glaring problem is if the Exchange issue is encountered by many more people, given that it is the primary end-user feature of this release.  I’m hopeful for an update on this issue and perhaps hardware acceleration for h.264 as well.

Spyware on Digsby

Monday, August 17th, 2009

I recently discovered that my former IM client, Digsby, was leasing out my computer for computational work without my explicit consent.  Since I don’t like the idea of unknowingly donating my computer (least of all my work computer) to God-knows-who for God-knows-what project, I ditched it.  (This is against the IT policy of every major company I’ve ever seen — even benign things like folding@home are forbidden, because they suck up the company’s electricity, and companies want their resources used to make money.)  I went back to Pidgin in the interim, which has a craptastic interface but is functional in a Soviet-automobile sort of way.  It works with AIM, Yahoo, and Jabber, and supports my company proxy — which is the bare minimum of functionality for me.  However, I discovered that Trillian has released a new IM client, Astra, which I’m now using and love so far.  As I use Astra more, I’ll follow up on it.

The Digsby folks were rather non-specific in their description of what the software does, offering feel-good examples like cancer research.  And yes, it’s true that this was disclosed in a blog post quite a while ago.  The problem is that I shouldn’t have to read some developer blog post to know that your software isn’t nefariously using my resources once I install it.  It’s true that any time you install any software on your computer, some level of trust is required.  And yes, Digsby is free — but they also pitch themselves as some sort of fast-moving community of software developers, so it was never clear that they were leveraging their users for anything more than testing and community-building.  Seeing as they present themselves as the Facebook/Myspace of IM, and seeing as those other services are free (without nefarious strings attached), they have an obligation to prominently disclose behavior that very clearly deviates from a standard agreement between users and software publishers.  Burying this information, with nonspecific descriptions, in a click-through EULA that nobody reads (and whose legal basis has not been evaluated in court) does not count.  At the very least, it’s unethical.

I’m sure some people are fine with what Digsby did and don’t mind running the so-called “research module”.  The problem is that Digsby’s explanations of the research module don’t hold up to scrutiny.  They first explain what grid computing is, then they give some examples of grid computing.  The clear intention is to leave the reader believing that your computer will be doing “things like” cancer research.  Hell, they call their distributed-computing client a “research module”.  The intent to deceive is clear.  From their description:

There are numerous research projects that require a massive amount of computing power to complete.  One option is to run these on a supercomputer but there are very few of these in the world and renting time on them is very expensive.  Another option is to break the problem up into many little pieces so each of the little pieces can run in parallel on thousands or even hundreds of thousands of regular computers.  This is called Grid Computing.

A few examples of popular grid computing projects are: Help Conquer Cancer, Discovering Dengue Drugs, FightAIDS@Home, and The Clean Energy Project.  Besides these non-profit projects, there are many commercial applications for grid computing such as pharmaceutical drug discovery, economic forecasting, and seismic analysis.

Now that you have an understanding of grid computing, let’s go over how this fits into Digsby.  We are testing a revenue model that conducts research similar to the projects mentioned above while your computer is idle.  Unlike the installer revenue model above, which is commonly seen in many products, this is much more unique so we’d like to clarify what it does and how it works.

[2 paragraphs removed]

The idea is to make this both a revenue model and a feature!  Some of the research Digsby conducts may be for non-profit projects like the ones mentioned above and some may be for paid projects, which will help us keep Digsby completely free.  So, using this module keeps Digsby free and contributes to research projects that will make the world a better place. [emphasis mine]

So, first they explain that they’re doing grid computing with their “research module”.  Then they give some examples of grid computing, all of which sound great and all of which happen to be research.  “Research”, “research module” — the average person would read this and conclude that this thing is doing AIDS research in the background, and that Digsby is being paid for it.  Except those projects don’t work that way.  They take unpaid volunteers who install their clients on computers, typically at universities.  They don’t pay people.

So who does pay for computationally intensive work?  Though it could be something relatively harmless, I have no way of knowing, because Digsby will not disclose who their customers are (no, not you; the people actually paying for their users’ compute power), nor what, specifically, their applications are.  Digsby has given you some great-sounding “examples” of research, but they haven’t told you about the ones you probably wouldn’t be so enthusiastic about.  Those might include data-mining and analysis, or integer factorization for breaking encryption.  Possible customers include telecommunications companies, the NSA, the CIA, or foreign governments.  Those are all customers that would pay.  A more harmless example of for-pay distributed computing would be a security firm such as RSA wanting to test a new algorithm against a distributed attack, but again, the problem is we have no way of knowing who it is.  And because many of these applications would be fairly secret, the Digsby developers themselves probably don’t know, which raises all sorts of concerns.


Microsoft and the GPL

Thursday, July 23rd, 2009

Microsoft recently released several paravirtualization drivers under the GPL (version 2).  People are making way too big of a deal about this.  (I suppose that’s to be expected from “Linux Magazine”).  There are two primary reasons that Microsoft chose the GPL:  maintenance of their own code, and proliferation.  This is not an attack on Linux.  This is not a trick on the GPL.  This is not Microsoft experimenting with Linux.  This is not a patch to the Linux kernel.  (“Linux Magazine”, indeed).  Modules are not the kernel proper, people!  My nVidia driver is no more “a kernel patch” than are Microsoft’s paravirt drivers.  The difference is that Microsoft’s drivers will ship with the overall kernel tree and get built with it, but so do drivers for arcane capture cards from 1994.

This is a practical move given the realities of how Linux is structured and distributed, and it’s comical (if not annoying) to see people who are supposedly Linux advocates completely misunderstand and mischaracterize what’s going on.  Look, the way the Linux kernel is structured, almost all of the useful APIs are exported ONLY to GPL-declared code (symbols exported via EXPORT_SYMBOL_GPL can only be linked by modules declaring a GPL-compatible license).  That means if Microsoft were to declare its module under any other license, it could not use a ton of high-level APIs, including basic stuff like, say, the entire devfs API or any of the IOMMU APIs.  There are numerous other examples.  Microsoft would be forced to implement its own versions of these APIs on top of low-level kernel constructs that are subject to frequent change.  That is a maintenance nightmare, and Microsoft would have to be insane to pursue such a strategy.
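
To make the licensing mechanics concrete, here is a minimal sketch of a kernel module’s license declaration.  This is my own illustration, not Microsoft’s driver code; the module does nothing interesting, but the MODULE_LICENSE line is what gates access to the GPL-only exports:

    #include <linux/module.h>
    #include <linux/init.h>

    static int __init demo_init(void)
    {
        printk(KERN_INFO "license-demo: loaded\n");
        return 0;
    }

    static void __exit demo_exit(void)
    {
        printk(KERN_INFO "license-demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);

    /* Declaring "GPL" here is what lets the loader resolve symbols the
     * kernel exports with EXPORT_SYMBOL_GPL.  Declare anything else and
     * those symbols are off-limits, and loading the module taints the
     * kernel. */
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal license-declaration demo");

Built against a kernel tree with the usual obj-m Makefile, this loads with insmod like any other module; change the license string and watch how much of the API surface disappears.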

The other major reason to use the GPL is that, without it, Microsoft’s drivers won’t ship with the mainline kernel tree and its bundled drivers.  Microsoft wants to get these drivers out to as many people as possible so that Hyper-V’s paravirtualized features “just work” with as many Linux installations as possible.  This move increases those odds, so it’s a smart business decision.  It’s no different from Intel wanting its drivers to ship with the kernel.

I understand that Linux enthusiasts are (justifiably) leery of Microsoft, but making up crazy theories does not exactly make you look like a rational, reasoned critic.  Rather, it makes it easier for Microsoft to publicly discount any and all claims that the Linux community may ever have regarding Microsoft’s tactics, because they can point to previous nutty behavior.  Acting “shocked” that Microsoft would pursue its business interests is the juvenile equivalent of rolling one’s eyes.  And, claiming (even in jest) that this is a first step to Microsoft using a Linux kernel inside a Microsoft product does, indeed, count as nutty.  I’m reading this crap all over the Internet — it’s not funny, it’s not clever, and it makes OSS people look like idiots.

iTunes update kills syncing with Palm Pre

Thursday, July 16th, 2009

Apple’s latest update to iTunes (8.2.1) has apparently “fixed the glitch” wherein a Palm Pre could mimic the iPod interface and convince iTunes to let it sync as an iPod does.  In other words, it used to work, and now it doesn’t.  What an epic fail for interoperability.  A buddy of mine (who has a Pre and wanted to see what the update does on his non-primary Mac) watched his log during the update and noticed that the new iTunes replaces USB .kexts.  Laaaaame.  It’s shady enough to break support for the Pre at the app level, but mucking around in the driver stack to do it?  Weak.  Presumably the Windows version does the same thing, since I believe there is a USB driver there, too, that detects an iPod and prevents Windows from automatically mounting its filesystem.

I like a lot of Apple products and generally prefer OS X over anything else (a nice mix of power-user options, orthogonal configuration choices, and “it just works”), but this is both stupid (wouldn’t you want everyone to standardize on your portal, iTunes?) and plainly irritating to users.  How many iPhone/iPod sales did they just protect with this move, and how much ill will did they just stir up?  I get that Apple is a company for the mass-consumer market, which doesn’t care about this and just buys iEverything.  It’s not a company for geeks like me, no matter how much I dig the geek-ish options in OS X and the automatic inclusion of all my handy unix tools.  I think they know that people like me won’t buy Apple stuff (again) if they do crap like this, or if they do stupid things like eliminate non-reflective LCD screens from their lineup.  (The clock is ticking on that one — I’ll probably get or build a PC to upgrade from my 2006 iMac if they don’t offer a sub-$2000, non-crappy antiglare option in the next year or so.)  They just don’t care, because I’m not statistically significant compared to the people who will buy Apple stuff simply because it’s hip.  It’s frustrating, but it’s not surprising.  Microsoft does much the same with its pricing, feature, and partner maneuvering, which is similarly annoying.

Yeah, yeah, yeah, I hear you out there, OSS people.  Don’t talk to me about Linux until you’ve got something that’s configurable without jacking around in an ever-changing layout of .conf files and that doesn’t require non-stop incompatible updates.  FreeBSD is awesome and much better on the configuration/consistency front, but unfortunately has craptastic desktop hardware support for gadgets like cameras.  Neither of them has an Exchange 2007 client (CrossOver Office is cool, but still crashes occasionally).  Long story short: it sucks, but I guess I’m just glad I’m not a Pre user right now.

TiVo seeks $1B in damages against EchoStar

Tuesday, July 14th, 2009

Apparently it has come out that TiVo is seeking 1 billion dollars in damages in its ongoing suit against EchoStar (parent of Dish Network) for violating TiVo’s patents on low-cost DVR manufacturing.  Though this may seem like a lot, it comes after separate $90 million and $190 million judgments against EchoStar in the same suit.  Over the past several years, EchoStar has attempted to avoid TiVo’s patent-licensing demands by implementing what it claims are non-infringing workarounds.  However, the court has ruled that these workarounds were transparently similar to TiVo’s patented design, and it has even taken the somewhat unusual step of demanding that EchoStar consult the court before implementing any further “workarounds”.  This latest demand by TiVo seems to be an attempt to punish EchoStar for what looks like willful game-playing with the court and deception regarding its “non-infringing” solutions.

This case is fascinating to me for several reasons.  Primarily, it’s a lightning rod for intellectual-property discussions.  In computer-science circles, this seems to be the big case that anti-software-patent folks point to.  First, I hear people claim that this is solely a software patent, and since they don’t believe software is patentable, they don’t believe TiVo’s case has any merit.  Even if software weren’t patentable, that argument fails here: there’s a large hardware component (compression hardware assists, i.e., MPEG2 encoders) in the overall system described in the patent, so this is not solely a software solution.

The other argument is that what TiVo did was obvious and therefore its patent is invalid.  “It’s a VCR using digital parts — obvious!”  Except combining the digital parts in a way that keeps the price low is a key component of this patent.  Using a software-only solution to construct a DVR is, even now, cost-prohibitive.  (Good ones have hardware encoders — that’s the point.)  Also, anyone who has actually used TiVo software knows that they pioneered tons of things nobody had done with a VCR, such as the “peek-ahead” method of fast-forwarding.  (If you fast-forward and press play, playback actually starts slightly BEFORE the point you saw when you pressed play — this feature is not feasible with a tape-based VCR.)  Lots of DVRs have that now, but I’m pretty sure TiVo did it first.  So, TiVo pursuing its intellectual property is entirely reasonable.  And when the other side is a maliciously bad actor who consistently turned down reasonable licensing deals and instead made transparently-infringing “workarounds”, well, that’s exactly why the tort system has punitive damages.
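
To illustrate why peek-ahead is trivial for a DVR but impossible for tape, here is a rough sketch of the idea (my own invention, not TiVo’s actual algorithm): random access into the recorded stream lets you resume behind the displayed position by an offset that compensates for viewer reaction time at the current scan speed.

    #include <stdio.h>

    /* Hypothetical "peek-ahead" correction.  When the viewer presses play
     * during fast-forward, resume slightly BEFORE the frame they saw,
     * scaled by the scan speed.  The 0.5 s reaction time below is a
     * made-up constant for illustration. */
    static double resume_position(double displayed_pos, double scan_speed,
                                  double reaction_time)
    {
        double overshoot = scan_speed * reaction_time; /* seconds overshot */
        double pos = displayed_pos - overshoot;
        return pos < 0.0 ? 0.0 : pos;
    }

    int main(void)
    {
        /* Pressing play at the 600 s mark while scanning at 30x resumes
         * playback 15 s earlier.  A tape deck can't do this: the tape is
         * physically already past that point. */
        printf("resume at %.1f s\n", resume_position(600.0, 30.0, 0.5));
        return 0;
    }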

As to whether software should be patentable, I’m fairly firmly planted in the “pro” camp.  We need patent reform to weed out useless patents, such as the string of “on the Internet!” patents of the late ’90s and early 2000s, where you could take any algorithm, slap “on the Internet!” on the end of it, and get a patent (see “one click” from Amazon).  However, we increasingly live in a digital world.  Most things, at some level, are largely AD/DA converters with microcontrollers.  Since we already model complex systems in software (and develop predictive models all the time), the leap to a physical, real-world-controlling manifestation is frequently a matter of having the right servos, AD/DA converters, and so on.  In other words, if everything eventually becomes controllable primarily through software and we eliminate software patents, does that mean that nothing is (or should be) patentable?  Whether they acknowledge it or not, that seems to be the logical conclusion of the anti-software-patent argument.

mysql hibernate query caches demystified

Thursday, July 9th, 2009

Having trouble with MySQL performance, particularly with the Hibernate query cache?  This guy’s blog has tons of architectural information on memory management in Hibernate’s query cache.

Google Chrome OS – the next big thing?

Thursday, July 9th, 2009

So, Google announced the development (and impending release) of their so-called “Chrome OS”.  Because Google has huge resources and a track record of delivering at least some revolutionary stuff (search and AdWords, obviously, but Android and their locality-aware search are promising), this is getting quite a lot of press.  Given that this is not their first foray into Linux distributions, I think some of the novelty here is overstated.  Google has done Linux distributions before – this is not groundbreaking.  Yes, those were for different target markets (servers with no UI, or handhelds with a very customized, simplified UI), but still, let’s not pretend Google is going out on a limb when they have, in fact, been comfortably seated on this branch for a while now.  The interesting, new aspect to me is that this could become ubiquitous on a large class of PC-ish hardware (netbooks), which could seriously impact the overall ecosystem of hardware vendors (targeting/supporting Chrome OS) and developers (moving to Chrome OS from, probably, other Linux distributions).

Will developers stop working on Ubuntu and hack away on Chrome OS?  Well, a lot of the open-source community seemed to covet Mac OS from the beginning (specifically the closed-source Aqua interface), and Ubuntu never really managed to whittle its configuration GUIs down into simple, usable applications the way Apple has with OS X.  (Yes, you can do everything at the command line — that’s not the point.  If that were the end of it, Linux would have 50% market share by now.)  So if Chrome OS is sufficiently slick and finally puts some “just plain works” polish on Linux, who knows.  How Google “welcomes” the open-source community will be interesting to watch, particularly if and when community members attempt to tell Google how to handle its own operating system, or if they manage to introduce incompatible or problematic “enhancements”.

Or, maybe it’ll be completely locked down & just a glorified enhancement to Android, specifically (and only) for a subset of netbook hardware.  Preliminary announcements from Google suggest that’s a distinct possibility.  I’m skeptical of the Chrome OS project — the “the Web is the computer” marketing sounds all too familiar — but I’m also undeniably interested in how this could change the landscape of desktop computing, particularly as it pertains to the existing players in desktop operating systems.