I’ve been giving some deeper thought to the multiple-hash table I described in my last post, and I’ve figured out where the problem is. All the usual operations can still be performed in constant time for any specific table, as usual… the issue is how big that constant is. For n-dimensional tables, the number of operations for find increases linearly with n, but the operations required for delete increase factorially. For n=2, add requires three actions, but for n=3, add takes seven, and n=4 causes add to take twenty-five, unless I’ve mucked up my math.
This is, of course, bad. It would only be acceptable if you could magically prepopulate the hashtable and never modify it afterward, and that’s exceedingly unlikely. I think I need to find a better way of storing the indices. What’s causing the factorial growth, I suspect, is that I was originally planning to store the sub-hashtables (say, for n=2, the tables one dimension deep) as independent of each other, relying on encapsulation. Didn’t I specifically mention that the annoying part about classic multidimensional arrays is that they’re based on encapsulation?!
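As a sanity check on those numbers: 3, 7, and 25 are exactly n! + 1 for n = 2, 3, 4. Here’s a toy sketch (the class name and design are mine, purely for illustration, and may not match the actual table from the last post) of one way that cost can arise: if lookups are allowed to supply the n keys in any order, and each key ordering gets its own independent index entry, then every add has to write one entry per permutation of the keys, plus one canonical store.

```python
from itertools import permutations

class MultiHashTable:
    """Hypothetical n-dimensional hash table, lookup by keys in any order.

    Stores one index entry per permutation of the keys, so add costs
    n! index writes plus 1 canonical write: 3 for n=2, 7 for n=3,
    25 for n=4 -- the same counts as in the post."""

    def __init__(self):
        self._store = {}   # canonical storage: sorted key tuple -> value
        self._index = {}   # every permutation of the keys -> canonical tuple

    def add(self, keys, value):
        canonical = tuple(sorted(keys))
        self._store[canonical] = value       # 1 action
        for perm in permutations(keys):      # n! actions
            self._index[perm] = canonical

    def find(self, keys):
        # One hash lookup; hashing an n-tuple is linear in n,
        # consistent with find growing linearly with the dimension.
        return self._store[self._index[tuple(keys)]]
```

A usage note: `t.add(("a", "b", "c"), 42)` makes `t.find(("c", "a", "b"))` return 42, at the price of six index writes. Delete would have to undo all of those entries too, which is where a design like this falls apart as n grows.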
Back to the drawing board.