I'm not a big fan of the big-array-to-hold-more-than-we-think-we-will-ever-need model. I agree with Jeff W. that RAM is not limiting these days; however, it is possible that some day these models will live on a network (web), and the data will have to be marshalled between the client computer and the model server. Those oversized data structures would surely slow things down.
If we use a pointer, the user can define any size of array they want; however, this model also falls victim to the network, since a custom marshalling routine would have to be written to handle embedded pointers to data (see the sketch below). So I may have talked myself out of this one too.
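To make the pointer problem concrete, here's a minimal sketch in C. The names plant_record and send_bytes are mine, purely for illustration: a struct holding a pointer can't just be shipped as raw bytes, because only the address would travel, not the data it points to.

    #include <stddef.h>

    struct plant_record {
        int     n_points;
        double *points;    /* heap data; only the address lives in the struct */
    };

    /* Naive approach: ships the pointer VALUE, which is meaningless
       on the receiving machine -- the pointed-to data never leaves. */
    void send_naive(const struct plant_record *r,
                    void (*send_bytes)(const void *, size_t))
    {
        send_bytes(r, sizeof *r);   /* WRONG: sends a raw address */
    }

    /* Custom marshalling: send the count, then follow the pointer
       and send the data it actually refers to. */
    void send_marshalled(const struct plant_record *r,
                         void (*send_bytes)(const void *, size_t))
    {
        send_bytes(&r->n_points, sizeof r->n_points);
        send_bytes(r->points, (size_t)r->n_points * sizeof *r->points);
    }

Every pointer-bearing structure would need a hand-written routine like send_marshalled, which is exactly the maintenance burden I'm worried about.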
A compromise would be to look at the requirements of each variable/data structure. For instance, the __sp_info structure is species-specific. Since we can define the valid species a priori (via our species list), we already have a good idea of the size requirements of this structure. Other variables may yield to the same analysis.
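A rough sketch of what that compromise could look like, assuming a hypothetical MAX_SPECIES bound taken from the species list (the member fields are made up too; only the __sp_info name comes from our code):

    /* Upper bound known a priori from the species list. */
    #define MAX_SPECIES 64

    struct __sp_info {
        unsigned n_species;                /* actual count in use */
        double   max_sdi[MAX_SPECIES];     /* hypothetical per-species fields */
        double   crown_ratio[MAX_SPECIES];
    };

Because the structure stays flat (no embedded pointers), it can be marshalled as plain bytes, and it's only as large as the species list justifies rather than sized for everything we might ever need.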