Gold Star ntxVol said:
American Hardwood said:
Gold Star ntxVol said:
American Hardwood said:
The question for me is whether the dam can hold up if it ends up going over the top. I'm not enough of an engineer to run calculations but it would seem that the lateral force at the very top is going to increase dramatically even with a few meters of water running over the top.
It won't go over the top of the dam. They should have a spillway off to the side so, if the level gets high enough, it spills over there, uncontrolled, until the level drops back down below the spillway level. It should never go over the top of the dam itself.
How much flow can the spillways handle? Every system still has a maximum design flow. Would be curious what that number is.
I don't know but it's a lot.
Lake Texoma was designed by the Corps of Engineers such that it should only go over the spillway once every 100 years. It's gone over 4 times since it was built. In 2015 it set a record at almost 6 feet above the spillway.
That's not the Red River in the background.
As stated, "once in 100 years" isn't really once every 100 years. It's a (very) long-term average, a probability. You could have 4 events in 20 years and then none for 400, or none for 200, then a cluster in a short span, then none for another 200. The events are effectively random, so clustering and long quiet stretches are to be expected. What you won't get is one like clockwork every 100 years, because that wouldn't be random.
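A quick way to see it: treat "once in 100 years" as a 1% chance every year, independent from year to year, and do the arithmetic. The ~80-year record length below is my assumption, purely for illustration. A sketch in Python:

```python
from math import comb

def p_at_least(k, n, p):
    """Chance of at least k hits in n independent years with per-year probability p."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

p = 0.01  # "once in 100 years" read as a 1% annual chance (assumption)

print(f"no overtopping at all in 100 yrs: {1 - p_at_least(1, 100, p):.2f}")  # ~0.37
print(f"two or more in 100 yrs:           {p_at_least(2, 100, p):.2f}")      # ~0.26
print(f"four or more in 80 yrs:           {p_at_least(4, 80, p):.4f}")       # ~0.009
```

Roughly a third of 100-year windows see nothing at all, about a quarter see two or more, and four overtoppings in ~80 years would be a sub-1% outcome if the 1% figure were right. That's either a rare cluster or a hint the estimate is low, which is the next point.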
Then there's the "once in a hundred years" part itself. How do we know the probability of a large-scale event, or its long-term average, when we have little or no history of it happening? Like earthquakes: how do we know the probability of large quakes in earthquake-prone regions without ever having recorded one there, and without a long, detailed record of geologic activity? The answer is the distribution of smaller events and their occurrence rates over time. The relationship between event size and rate of occurrence tells us how common the larger, unseen events should be, and even gives an idea of the maximum event size. From there we can build probabilities for the larger events, because they follow a distribution. The problem is that it's an estimated distribution, still based on limited observations. Estimate it wrong, and a once-in-100-years event could really be something that happens about once every 50 years, or every 25.
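Something like the earthquake case can be sketched in a few lines: fit a straight line to the log of how often each size of event shows up, then read off how often the big, unrecorded size "should" happen. Everything here (the catalog counts, the magnitudes, the 10% perturbation) is made up for illustration; the point is how hard the extrapolated return period leans on the fitted slope.

```python
import numpy as np

# Hypothetical catalog: events per year at or above each magnitude (made-up numbers).
mags     = np.array([3.0, 3.5, 4.0, 4.5, 5.0])
n_per_yr = np.array([120.0, 40.0, 13.0, 4.0, 1.3])

# Fit log10(N) = a - b*M to the well-observed small/moderate events.
slope, a = np.polyfit(mags, np.log10(n_per_yr), 1)
b = -slope

def return_period(m, b_val):
    """Years between events of magnitude >= m under the fitted straight line."""
    return 1.0 / 10 ** (a - b_val * m)

print(f"fitted b ~ {b:.2f}")
print(f"M7 return period from the fit:  {return_period(7.0, b):.0f} yrs")        # ~70 with these numbers
# Crude sensitivity check: nudge the slope 10% lower, keep the intercept.
print(f"same fit with b 10% lower:      {return_period(7.0, 0.9 * b):.0f} yrs")  # ~15
```

A slope that's off by a modest amount moves "once in ~70 years" to "once in ~15," which is exactly the 100-versus-50-versus-25 problem above.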
The probability also changes with a changing landscape. As development increases, runoff and groundwater recharge patterns change, and areas can flood on less rain because the water isn't absorbed into the ground the way it used to be. That makes rare flood events more common, because flooding no longer requires rare weather to happen.
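For a feel of the development effect, here's a back-of-the-envelope sketch using a simple runoff coefficient C (the fraction of rainfall that runs off, rational-method style). The coefficients and the flood threshold are assumptions, picked only to show the shape of the effect.

```python
# Runoff treated as a fraction C of rainfall intensity.
flood_threshold = 10.0        # mm/hr of runoff that overwhelms local drainage (assumed)
c_before, c_after = 0.2, 0.7  # rough coefficients: open ground vs. heavily paved (assumed)

def rain_needed_to_flood(c):
    """Rainfall intensity (mm/hr) whose runoff c * i reaches the flood threshold."""
    return flood_threshold / c

print(f"before development: {rain_needed_to_flood(c_before):.0f} mm/hr")  # 50 mm/hr, a rare downpour
print(f"after development:  {rain_needed_to_flood(c_after):.0f} mm/hr")   # ~14 mm/hr, routine heavy rain
```

Same ground, same drainage, but the storm that floods it goes from rare to routine, which quietly shortens the real return period without the weather changing at all.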
ETA: Since the maximum probable event was mentioned: if you incorrectly estimate the occurrence-rate-versus-magnitude distribution, not only can large events happen more often than you estimate, the magnitude of the largest probable event goes up as well. As magnitude increases, probability decreases and approaches zero, but with smaller events undercounted, the estimated right tail of the distribution sits slightly below the true one, and the true probability of an event only becomes negligible at a higher magnitude than you expect.
These distributions are also based on relatively short-term observations of random, non-uniformly distributed events. If you're observing during a lull, which is more likely than observing during a cluster of activity because lulls last longer, then you're undercounting. The drawback to these kinds of estimates is that the observation period may not be long enough to amount to a random sample, even when it spans what feels like a very long time, because human lifespans are nothing compared to weather and geologic variability. What seems like a long observation period really isn't. You need to understand the variability of the distribution to judge the likelihood that your observation window landed on a lull, a period of high activity, or something closer to average. Basically: are you at the mean, or regressing to it?
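To put a number on the lull problem, here's a quick Monte Carlo sketch. Assume the truth is a 50-year event (2% per year) and all you get is an 80-year record; both numbers are made up for illustration. A big fraction of possible records either see nothing at all or see so little that a naive estimate calls it an 80-plus-year event.

```python
import random

random.seed(1)
true_annual_p = 1 / 50     # the event is really a 50-year event (assumed)
record_len    = 80         # years of observation available to us (assumed)
trials        = 100_000

zero_events = 0
looks_rarer_than_80yr = 0
for _ in range(trials):
    hits = sum(random.random() < true_annual_p for _ in range(record_len))
    if hits == 0:
        zero_events += 1                   # nothing to estimate a rate from at all
    elif record_len / hits >= 80:          # naive "years per event" estimate
        looks_rarer_than_80yr += 1

print(f"records with zero events:                   {zero_events / trials:.0%}")            # ~20%
print(f"records where it looks like an 80+ yr event: {looks_rarer_than_80yr / trials:.0%}")  # ~32%
```

With these made-up numbers, roughly half of the possible 80-year windows would make a genuine 50-year event look rarer than it is, or invisible entirely, which is the depressed right tail from the ETA above.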