I’m always curious about computer performance. Can’t help it, must be in my blood. So when I heard that Perl hashes were fast, I had to do some research.

From what I’ve gleaned so far, hashes are best when used for random access, as opposed to sequential access. For example, if at any given time any member of the collection might be searched for, a hash will perform well. On the other hand, if you are going to read values sequentially, one after the other, an array will perform well.
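To make the contrast concrete, here’s a minimal sketch (the data and key names are made up for illustration): a hash answers “is this key here?” in roughly constant time, while the array equivalent is a linear scan.

```perl
use strict;
use warnings;

# Build the same data two ways: a flat key/value array and a hash.
my @pairs = map { ("key$_", $_) } 1 .. 10_000;
my %hash  = @pairs;

# Random access: the hash jumps straight to the bucket for this key.
my $value = $hash{'key7500'};

# The array has no index by key, so we must scan pair by pair.
my $found;
for (my $i = 0; $i < @pairs; $i += 2) {
    if ($pairs[$i] eq 'key7500') {
        $found = $pairs[$i + 1];
        last;
    }
}

print "hash: $value, scan: $found\n";
```

For sequential access the tables turn: walking `@pairs` in order touches contiguous elements cheaply, whereas iterating a hash visits keys in no useful order.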

I’ve been thinking about this because, unfortunately, mod_dbd combined with mod_rewrite is still a ways away, and I’d like to keep an active database of all my URL schemes. Maybe hashes are the way to go?
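In the meantime, one way to get hash-style lookups without a live database is an on-disk hash file. Here’s a hedged sketch using Perl’s core SDBM_File module (the filename `rewrites` and the URL pairs are assumptions for illustration):

```perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;

# Tie a hash to an on-disk SDBM file, creating it if needed.
tie my %map, 'SDBM_File', 'rewrites', O_RDWR | O_CREAT, 0644
    or die "cannot tie rewrites: $!";

# Store a URL mapping, then look it up by key -- random access,
# just like an in-memory hash, but persistent across processes.
$map{'/old/page'} = '/new/page';
my $target = $map{'/old/page'};

untie %map;
print "$target\n";
```

mod_rewrite can consult map files like this via its `RewriteMap` directive (e.g. a `dbm:` map type), though the file has to be in a DBM format your Apache build understands, so check your setup before relying on this.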

I’m wondering how this would compare to a database for random access. I bet a hash would be faster, but I really don’t know. I do know that in my experience, because of the way I prefer to manage data, it’s actually easier for me to manage a database than it is to manage several hash files. Just my opinion though, I’m sure others prefer hash files.

UPDATE October 26, 2007: Is there any similarity between one-way encryption hashes and hash files used as databases?

Related:

Building a Better Hash by Dan Schmidt

Re: Why are hashes so much faster? - perlmonks.org

Even better:

Shift, Pop, Unshift and Push with Impunity! - also at perlmonks.org
