Flexibility and transparency in the engineering process are key. In practice, this boils down to three practical steps.
1 – Be a freak about documentation
Many engineers suffer from a special variety of the ‘Not Invented Here’ syndrome: models, predictions and measurements that did not involve the engineer herself are not fully trusted – and therefore not used. Engineers like to have full control over the complete process and, in the end, only fully trust analysis results based on their own data and their own analysis scripts. Now that measurements need to be outsourced to whoever can physically be present to perform them (at a supplier, test track, etc.), there is no alternative to working with data from other people. Two key aspects make this work: clear documentation and objective data quality indicators.
The handover of data to a colleague needs to be accompanied by clear documentation of all performed steps and observed anomalies. For measurements, our most successful clients switched from old-fashioned Excel lists to CAD environments or the DIRAC-PREPARE module to bring the transparency of the process up to the required level. They add structured photos and notes of the actual situation, so that any differences between ‘practice and theory’ can also be checked by people who were not present at the measurement. ‘Visual documentation’ simply prevents misunderstandings, and currently allows VIBES to prepare measurements in our offices while partners perform the actual measurements all over the globe.
2 – Use objective quality indicators
Especially for measured data, objective and independent quality indicators are crucial. The ultimate check for the whole process is to evaluate the end result with an independent cross-check. When differences are observed, the documentation of step 1 is essential to finding the cause. In practice, VIBES uses a few approaches, depending on the project at hand:
- The virtual point reciprocity check is the perfect example: two independent sets of virtual point responses (not just sensor responses!) and corresponding forces should give the same result – in other words, exciting virtual point DoF i and observing DoF j should yield the same FRF as the reverse.
- When available, we compare measured results with simulated data from finite element models and try to truly understand any differences that appear (so: never simply blame ‘the lack of damping’ or ‘the sensor mass’ to sweep potential mistakes under the carpet).
- For a blocked-force characterization, we use a similar approach: every analysis includes at least a few independent validation measurements (according to ISO 20270) to verify the analysis procedure.
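The reciprocity check above can be quantified per FRF pair with a coherence-style criterion that equals 1 when Y_ij and Y_ji match exactly. The sketch below is a minimal illustration of that idea in NumPy; the function name and the exact metric are our assumptions here, not the DIRAC implementation.

```python
import numpy as np

def reciprocity(Y):
    """Coherence-style reciprocity criterion for virtual point FRFs.

    Y : complex array of shape (n_freq, n_dof, n_dof) holding the
        virtual point FRF matrix per frequency line.
    Returns an array of the same shape with values in [0, 1]:
    1 means Y_ij == Y_ji (perfect reciprocity), 0 means Y_ij == -Y_ji.
    """
    Yt = np.swapaxes(Y, -1, -2)               # transposed FRF matrix
    num = np.abs(Y + Yt) ** 2                 # |Y_ij + Y_ji|^2
    den = 2.0 * (np.abs(Y) ** 2 + np.abs(Yt) ** 2)
    with np.errstate(invalid="ignore", divide="ignore"):
        # Where both FRFs are zero there is nothing to compare; report 1.
        c = np.where(den > 0, num / den, 1.0)
    return c
```

Plotting this criterion over frequency for each off-diagonal pair gives a quick, objective view of where the virtual point transformation can be trusted.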
3 – Have a back-up strategy (the modular approach)
Ideally, OEMs make accurate NVH predictions at the very start of a new development project and continuously optimize designs, knowing the impact on NVH. The reality is different, though: all models are wrong, and only when the first prototype is available do you know how far off you are. If the availability of these prototypes is also uncertain, things become increasingly challenging.
That is why there should be an alternative, or ‘back-up plan’, for each component: if simulations and measurements are seamlessly combined, we decrease the dependency on either full simulations or full prototype measurements. We mix ‘n match with the information we have, while continuously updating our predictions once more reliable information becomes available. Two things form the basis:
- a modular approach to combine different modeling strategies (for example using virtual points);
- “the Cloud” for global availability, access and version control of component and vehicle models.
Is a subsystem available but the full vehicle not yet? Measure it & combine the (more reliable!) data with simulations of the rest of the vehicle! Facing a delay with a specific component? Use a simulated model instead! Who knew ‘agile’ would be relevant for NVH departments 😊
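Mixing measured and simulated component models in this way comes down to coupling their admittance (FRF) matrices at the interface DoFs. A standard way to do this is Lagrange-multiplier frequency-based substructuring (LM-FBS); below is a minimal single-frequency sketch with illustrative names, not a production implementation. Y would hold the block-diagonal FRFs of the components, whether each block was measured or simulated.

```python
import numpy as np

def lmfbs_couple(Y, B):
    """LM-FBS coupling of substructure admittances at one frequency line.

    Y : (n, n) complex admittance (FRF) matrix of the uncoupled
        components, block-diagonal; each block may come from a
        measurement or from a finite element simulation.
    B : (m, n) signed Boolean matrix enforcing compatibility between
        paired interface DoFs (e.g. virtual point DoFs).
    Returns the (n, n) admittance of the coupled assembly:
        Y - Y B^T (B Y B^T)^-1 B Y
    """
    YBt = Y @ B.T                                  # Y B^T
    BY = B @ Y                                     # B Y
    return Y - YBt @ np.linalg.solve(B @ YBt, BY)  # subtract interface term
```

As a sanity check, coupling two grounded springs with flexibilities 1/k1 and 1/k2 via B = [[1, -1]] yields 1/(k1 + k2) in every entry of the result, as expected for two springs acting in parallel at the shared node. Swapping a measured subsystem model for a simulated one only changes the corresponding block of Y; B and the coupling step stay the same, which is exactly what makes the approach modular.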