That is what I was trying to explain several pages ago. Hydrologic modeling of stream flow and river rise is very easy in a single-source, single-channel problem where flow stays mostly within the channel. They can tell you quite accurately how high it will go, when it will crest, and for how long. When the flow goes outside the channel, it becomes more difficult to model because you don't really know how fast the water moves when flowing through the woods, over roads, around houses, etc. When you add multiple flow sources (dam releases, tributary watersheds, etc.), it gets much harder, because then you have to look at changes to those flow sources and whether they act as barriers to downstream flow. Then when the tributaries start going out of their banks, it gets even harder to model, because you have no idea where that water is flowing, how fast it is flowing, or even how much water is actually flowing once you get beyond bank-full status. Then when you start stacking 10-20" of rain in about a 12-18 hour period on top of that, you have to add overland sheet flow to the modeling as another source of input, but also as a barrier to flow from the other sources.
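To give a feel for why in-channel flow is the easy case, here is a minimal sketch using Manning's equation, the standard open-channel flow formula. All of the channel dimensions and roughness coefficients below are invented for illustration; the point is that once water leaves a surveyed channel and runs through trees and houses, the roughness value alone is so uncertain that the computed flow can swing by a factor of three or more.

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * sqrt(S)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical rectangular channel: 30 m wide, flowing 2 m deep.
width, depth = 30.0, 2.0
area = width * depth                  # flow area, m^2
radius = area / (width + 2 * depth)   # hydraulic radius = area / wetted perimeter
slope = 0.0005                        # assumed channel bed slope

# Same geometry, two roughness assumptions (typical handbook ranges):
q_channel = manning_discharge(n=0.035, area=area, hydraulic_radius=radius, slope=slope)  # maintained earthen channel
q_woods   = manning_discharge(n=0.10,  area=area, hydraulic_radius=radius, slope=slope)  # overbank flow through dense trees

# Since Q is proportional to 1/n, the wooded-floodplain estimate is only
# 35% of the in-channel estimate for identical geometry and slope.
```

Inside a surveyed channel, n is known to a couple of decimal places and the answer is tight; out in the floodplain, n is a guess, and so is everything downstream of it.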
As you can tell, that is a huge amount of very complicated modeling to do. And you have probably heard the phrase garbage in/garbage out? Well, they have very detailed channel profiles for most of the watersheds to assist in the modeling. When the stream goes out of its banks, that profile kind of goes out the window, depending on where and how extensively it leaves the channel. They also have at most 2-3 gauges per tributary to give them approximate elevation and flow measurements. So if even one of those gauges gets a chunk of debris stuck in it or is ripped loose by debris, then you have even less to work with. It very quickly becomes impossible to model accurately. So what do you do? You look at it empirically. What happened last time the stream got to this elevation? What did it look like last time we had this much rain? What about the last time we released this much water from the dam?
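That empirical approach is essentially a rating-curve lookup: match today's gauge reading against past observations. A minimal sketch, with entirely made-up stage/discharge pairs, shows both the method and its weakness: once the stage climbs above anything in the historical record, the answer is an extrapolation with no observations behind it.

```python
# Hypothetical stage-discharge rating curve built from past observations.
# Pairs are (gauge stage in ft, observed discharge in cfs); all values invented.
RATING_CURVE = [
    (2.0, 500.0),
    (5.0, 2_200.0),
    (8.0, 6_000.0),
    (11.0, 14_000.0),   # roughly the highest stage ever observed here
]

def estimate_discharge(stage_ft):
    """Linearly interpolate discharge from past observations.

    Returns (discharge_cfs, extrapolated). Above the highest observed
    stage there is no empirical basis, so the caller gets a warning flag.
    """
    stages = [s for s, _ in RATING_CURVE]
    if stage_ft <= stages[0]:
        return RATING_CURVE[0][1], False
    if stage_ft > stages[-1]:
        # Beyond the record: extend the last segment's slope and flag it.
        (s0, q0), (s1, q1) = RATING_CURVE[-2], RATING_CURVE[-1]
        slope = (q1 - q0) / (s1 - s0)
        return q1 + slope * (stage_ft - s1), True
    for (s0, q0), (s1, q1) in zip(RATING_CURVE, RATING_CURVE[1:]):
        if stage_ft <= s1:
            frac = (stage_ft - s0) / (s1 - s0)
            return q0 + frac * (q1 - q0), False

q_mid, flag_mid = estimate_discharge(9.5)   # within the observed record
q_hi, flag_hi = estimate_discharge(16.0)    # far above any observed crest
```

When the flag comes back true, the number printed is a straight-line guess past the edge of the data, which is exactly the situation the forecasters were in.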
Keep in mind that for the 1979 floods, the rain was intensely focused on a small area around Alvin and Friendswood. For 1994, the very heavy rain was tightly centered on the area north and east of Houston. For Allison, the very heavy rain fell mostly on the near east side of town. Tax Day was heavily focused on the Cypress and Waller areas. What I am getting at is that in each of the comparable flood events, the heaviest of the record rains was localized to 1 or 2 watersheds. In this event, pretty much every watershed in the area received at or above record rainfall amounts over 12-24 hour periods, at the same time or in close succession, and over the entire watershed, not just a portion of it. The rainfall amounts were staggering, but the real killer was the areal extent of those totals. That all combined to set up a system for which there was no analog, no comparable storm to base a guesstimate off of, and no empirical basis for creating a model on the fly. All of their forecasts were guesses based on inadequate data and non-comparable previous flood events. And they were wrong in many cases. They guessed high on the Brazos flood height by nearly 4 feet initially. They guessed low on the Kingwood floods by several feet at least.
It wasn't malicious, it wasn't incompetence, it was science at the limits of data availability and modeling skill. It was what happens when you put in insufficient data from an event outside the calibrated limits of a model and ask it for a prediction. You have no idea if that prediction has any forecasting skill because the model has never been asked to go that far out of typical limits before. You just know it is the only one you have and you are out of time to go back and try again. What do you do with it?
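The "outside the calibrated limits" problem can itself be made concrete. A minimal sketch, with an invented calibration envelope, shows the kind of sanity check a modeler can run: is each input inside the range the model was ever tuned against? If not, the model will still emit a number, but its skill on those inputs is simply unknown.

```python
# Hypothetical calibration envelope: the range of inputs the model was
# actually tuned and verified against. All numbers are invented.
CALIBRATED_RANGE = {
    "rain_12hr_in": (0.0, 8.0),            # heaviest 12-hour rainfall ever calibrated
    "upstream_release_cfs": (0.0, 20_000.0),
    "tributary_stage_ft": (0.0, 12.0),
}

def outside_calibration(inputs):
    """Return the names of inputs that fall outside the calibrated envelope."""
    return sorted(
        name for name, value in inputs.items()
        if not (CALIBRATED_RANGE[name][0] <= value <= CALIBRATED_RANGE[name][1])
    )

# An event like the one described: rainfall and tributary stages far beyond
# anything in the calibration record, dam releases still within it.
event = {
    "rain_12hr_in": 18.0,
    "upstream_release_cfs": 13_000.0,
    "tributary_stage_ft": 15.5,
}
flags = outside_calibration(event)  # forecast skill is unknown for flagged inputs
```

The check doesn't make the forecast any better; it just tells you honestly which parts of the prediction rest on nothing. That was the position they were in: flags raised everywhere, and no time to do anything about it.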