Much has changed in the seven years since I wrote an article called "Seven rules for successful simulation" [1]. Computers are now faster and more powerful and hold more data in ever-smaller space.
Engineers right out of school with little or no real-world experience have the opportunity to focus on world-class problems. As for software, new programs seem to pop up on a daily basis, touting more advanced capabilities than ever before. Some are so advanced that they seem to run themselves. Simulations that once took weeks to compute an answer now take mere hours.
With all the computer power and sophisticated software, what responsibility is the user taking? What role does the seasoned engineer play in this new age? Generally, the engineer with 30 to 40 years of experience prefers to do complex calculations by hand. That leaves the young computer jockeys of today to sit at the helm of the many robust simulation packages. But, because the software is handling many of the constraints and boundary conditions that govern the outcome of a solution, how much confidence can we have in the results?
Read almost any major engineering magazine and you'll see advertisements that make all sorts of claims about software capability: "non-linear" when addressing the physics, "automatic" and "user-friendly" when discussing the model design itself. Many of the non-linear solutions require all sorts of coefficients that normally come from experiments or well-known solutions. These coefficients may not be completely accurate for the problem you are working on.
Achieving accuracy
What process or method should we use to ensure accurate solutions? When performing numerical modeling, seven basic rules have withstood the test of time:
- No result can be more accurate than the input conditions. Not too long ago I was in a technical review where the engineers were touting the complexity and accuracy of their solution. When asked about critical boundary conditions, it became quite clear that important factors that would affect the solution were no more than estimates. Sometimes we become so proud of our calculation that we forget the error and uncertainty of the input data. Many models require calculation of boundary conditions, either by hand or by some other software package; sometimes these boundary conditions are provided by measurements. A model or simulation can't be any more accurate than the input data. One of the worst mistakes in a simulation is evaluating the results to four significant digits when the input data are limited to only one significant digit; the sketch following this list illustrates the point. Always evaluate the uncertainty of all aspects of the problem you are working on. (See CP's ongoing series by Dr. Gooddata; Part 1.)
- Nothing beats experience. It's important to define the model that best fits the physical situation. Many young folks are computer wizards who can develop a model and quickly get results. That's great, but it's a good idea to have as much gray hair around the problem as possible. The experienced folks might not know all the details of the computer modeling, but they have been there and done it in the field. They have an intuitive feel for the behavior of a problem that others don't have. This is important for success. A few years ago I was in a meeting where a group was presenting simulation results on a structural dynamics problem. A well-respected senior engineer told the group their results were wrong. When asked specifically what the problem was, he said, "I don't know what the actual problem is, but I have never seen the numbers you calculated fall in that range before." The young guns just brushed the comments off, but the senior engineer asked them to solve a problem he had done by hand that was a known solution in the industry. As it turned out, their program left out a gravitational acceleration term, throwing off all the results by a factor of 32.2. That senior engineer saved the team an embarrassing mistake; it's a lesson those young guns will never forget.
- Take it easy on the problem size. Don't try to model the world. With computers as powerful as they are today, the trend is to make the model as big as the computer can handle. This isn't the best strategy. Take the problem in increments and strategic bite sizes. Similarly, stay away from non-linear analysis until things are tuned up and debugged with linear analysis. Then introduce complexity in small steps. As a friend once said, "Just because you can eat more doesn't mean you should." This step also ties to Rule 2, because with all the computer power now available we can test more effectively and faster to make sure our approach is correct.
- Always check the model in detail. I remember discussing a structural dynamics problem with an engineer who had a doctorate in mechanical engineering. The analysis was producing an incredibly unrealistic deflection, and the experienced plant folks questioned it. The PhD's response was a long stretch of theoretical mumbo-jumbo about how this could occur. As it turned out, the only problem was a bug in the input data. You analytical types should keep your egos in check when questioned by experienced design engineers and operating plant personnel who have lived with the situation. Don't always trust those beautiful color plots. Assume everything in the model is wrong until all of it is proven correct. It's always wise to perform hand calculations to double-check certain aspects of the problem.
- Define a model that best depicts the physical situation. Don't rush this important step. Good thinking upfront in defining the model will save valuable time on the overall project. It's important that the boundary conditions of the problem are gathered from a reliable source and that these conditions can be verified through a known solution. If the exact model design isn't clear, take a macro-model approach: use a large, simplistic model to yield rough solutions that give some idea of what the detailed model should look like. After several trial attempts, a detailed model can then be developed with reasonable certainty.
- Use commercial software that has the theory to back it up. Good developers of commercial code aren't afraid to publish a theory manual. I've invariably found that the quality of the manual correlates with the performance of the software. Commercial codes that have had a poor theory manual, or that lacked one altogether, have performed poorly. Ignore any hype that the code is so user-friendly that the engineer doesn't need detailed knowledge of the theory. For you P.E.s (Professional Engineers) out there, watch it: you're responsible for the design or analysis, not the software vendor. Is the bat liable if a baseball player hits a foul ball or strikes out? Don't ruin your career because you trusted a cheap piece of software that's relatively unproven or isn't backed by a detailed theory manual. Just because some software is the latest thing on the market and claims to have all the latest advances doesn't make it the best product to use.
- Perform hand calculations to check the approach. For most problems you can run hand calculations on a test case to check your approach and give you confidence in your solution. After completing the detailed analysis and checking, identify the governing aspects of the problem and develop test cases that can be calculated by hand to validate your approach. It's also important to run sensitivity studies on the governing parameters to evaluate how accurately those parameters must be known to get a meaningful solution (see the sketch after this list).
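To make Rules 1 and 7 concrete, here is a minimal sketch, in Python with purely hypothetical numbers, of the kind of hand check and sensitivity sweep I have in mind: a simply supported beam deflection that can be verified by hand, followed by a plus-or-minus 10% sweep on the poorly known inputs to see how many digits of the answer actually mean anything.

```python
# Hypothetical hand check: midspan deflection of a simply supported beam
# under a uniform load, delta = 5*w*L^4 / (384*E*I). All numbers are made
# up for illustration; they are not from the article.
import itertools

def midspan_deflection(w, L, E, I):
    """Maximum deflection of a simply supported beam under uniform load w."""
    return 5.0 * w * L**4 / (384.0 * E * I)

# Nominal inputs (assumed values)
w = 50.0      # lb/in distributed load -- known to roughly one significant digit
L = 120.0     # in, span
E = 29.0e6    # psi, steel modulus
I = 10.0      # in^4, section moment of inertia

print(f"Hand-check deflection: {midspan_deflection(w, L, E, I):.4f} in")

# Sensitivity sweep: vary the poorly known inputs (w and E here) by +/-10%
cases = [midspan_deflection(w * fw, L, E * fe, I)
         for fw, fe in itertools.product((0.9, 1.0, 1.1), repeat=2)]
print(f"Spread over input uncertainty: {min(cases):.4f} to {max(cases):.4f} in")
# The spread wipes out everything past the second significant digit --
# reporting four digits would imply accuracy the inputs cannot support.
```

The point isn't the beam formula; it's that a five-minute check like this tells you which digits of the simulation output you're entitled to believe.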
The last part of simulation almost always leads to assessment. Sometimes this is as big a problem as the simulation itself. Take, for example, a structural analysis where the stresses must be classified before a code assessment can be performed. Often a process called stress linearization is used to compare the stresses to the code. Stress linearization, while it sounds sophisticated, is nothing more than a translator from Finite Element Analysis (FEA) results to the code (a rough sketch of the idea appears below). Unfortunately, almost all stress linearization routines carry errors of their own, and the results depend on the cross-sections the user chooses; so be careful, and refer back to Rule 2.
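For those who haven't run into it, linearization reduces the through-thickness stress distribution along a chosen cut into equivalent membrane and bending components that the code rules can assess. Below is a minimal sketch of one common formulation, in Python with a made-up stress profile; actual packages integrate each stress tensor component along a user-selected stress classification line, which is exactly where the user dependence creeps in.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration of y over x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]) / 2.0))

def linearize(x, sigma):
    """Membrane and bending components of a through-thickness stress profile
    (one common formulation; x runs from one surface to the other)."""
    t = x[-1] - x[0]                                   # wall thickness
    membrane = trapezoid(sigma, x) / t                 # average across the section
    bending = (6.0 / t**2) * trapezoid(sigma * (t / 2.0 - (x - x[0])), x)
    return membrane, bending

# Hypothetical through-wall stress profile from an FEA cut, psi
x = np.linspace(0.0, 1.0, 21)                          # position through a 1-in. wall
sigma = 12000.0 - 18000.0 * x + 5000.0 * x**2          # made-up distribution

m, b = linearize(x, sigma)
print(f"Membrane: {m:.0f} psi  Bending: {b:.0f} psi  "
      f"Surfaces: {m + b:.0f} / {m - b:.0f} psi")
# A different cut location gives different membrane and bending values --
# which is why Rule 2 (experience) matters when classifying stresses.
```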
FEA exploded in the 1980s and Computational Fluid Dynamics (CFD) grew rapidly in the 1990s. Now there are integrated design packages that will automatically perform CFD and FEA and require little knowledge from the user. Almost all of this analysis is non-linear and highly dependent on boundary conditions, convergence algorithms, model definition and equation parameters. I can give 10 inexperienced engineers a CFD problem and get 10 different answers, because one thing is certain: these packages will yield answers. The reviewer must be aware of this when assessing the results. Having some benchmark comparisons often is helpful. Data are merely data unless you are able to interpret those data correctly.
A bright future
Don't be put off by the concerns I've raised. There are good software tools out there and there are good people using them. The future is exciting. For instance, today we can perform non-linear structural analysis and assess local plasticity, but usually we don't. I believe that one day almost all structural analysis will be non-linear and the model will automatically account for local plasticity. In CFD, the tools are gaining speed and efficiency, and more data and information are available to tune the solvers toward a better solution. Maybe someday the software programs and computers will be smart enough to replace seasoned engineers with common sense and experience (ha ha!), but that day isn't today. Until then, you're better off sticking to the seven rules for successful simulation.
Reference
1. Knight, C., "Seven rules for successful simulation," Hydrocarbon Proc., p. 61 (Dec. 2001).
Cliff Knight, P.E., is president of KnightHawk Engineering, Houston. E-mail him at [email protected].