Indeed I did, but that just introduces a whole different set of problems, starting with storing and parsing the 40 GB of data needed! Then there are issues with dividing it up by graticule (which the data's not geared towards), and then you either have to break the shapes into primitives to derive their area (which is doable, but I'd have had to learn more than I was willing to) or else take a sampling approach (which is basically what I did).
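For anyone curious what the sampling approach looks like in practice: the idea is to throw random points into a graticule cell and count how many land inside the shape, which estimates the covered fraction without ever computing polygon areas exactly. This is a minimal sketch of that idea, not my actual code; the polygon, cell coordinates, and sample count are all made up for illustration, and real data would need spherical-area corrections this ignores.

```python
import random

def point_in_polygon(x, y, poly):
    """Classic ray-casting test: is (x, y) inside the polygon?"""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle on each edge the horizontal ray from (x, y) crosses
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def sampled_fraction(poly, lon0, lat0, samples=10000, seed=42):
    """Estimate what fraction of the 1x1 degree cell at (lon0, lat0)
    is covered by poly, by Monte Carlo sampling."""
    rng = random.Random(seed)
    hits = sum(
        point_in_polygon(lon0 + rng.random(), lat0 + rng.random(), poly)
        for _ in range(samples)
    )
    return hits / samples

# Hypothetical shape: a triangle covering half the cell at (0, 0)
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(sampled_fraction(tri, 0.0, 0.0))  # close to 0.5
```

More samples buy more accuracy (the error shrinks roughly with the square root of the sample count), which is the trade-off that made it "good enough" rather than exact.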
You're right, though: it's definitely the superior solution, and one I'd look at for a future version. But a "good enough" solution today trumps a perfect one at some future point, in my opinion!
Thanks for the thought, though. If you happen to know your way around processing that kind of data, feel free to share any tips!