Imagine you’re tasked with characterizing the noise in your system. Say you’re developing a new automotive component, like a steering gear or a compressor, and you want to know what sound it will make when you put it into a new vehicle that hasn’t been built yet.

Would you rather: run a few tests, get lost in the data management, and emerge a few weeks later with results you’re pretty sure make sense… Or let your software keep track of all the little details and get reliable predictions of the sound in the vehicle while the steering gear is still hot on the test bench? If you chose the latter, let me show you how to use SOURCE: the VIBES solution for source characterization and component-based transfer path analysis.

How to: Load and check FRF data

So let’s open SOURCE and create a new project for the analysis of our steering gear. To begin with, we load the FRF model that we already measured with DIRAC before the operational tests. Since we want to do an in-situ blocked force characterization of our component, we need the matrix in which the input forces are all transformed to the virtual points at the interface of the steering gear.
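If you’re curious what that transformation does under the hood: the FRFs measured at the individual impact positions around the interface are projected onto the rigid degrees of freedom of each virtual point. Here is a rough numpy sketch of the force-side projection, assuming `R_f` is the interface mode matrix built from the impact positions and directions – the names are illustrative and not SOURCE’s or DIRAC’s API, and the real implementation handles weighting and quality checks that this leaves out:

```python
import numpy as np

def forces_to_virtual_point(Y, R_f):
    """Project the force (input) side of a measured FRF matrix onto virtual point loads.

    Y   : (n_freq, n_channels, n_impacts) measured FRFs (sensor responses per impact)
    R_f : (n_impacts, n_vp_dof) force/moment interface modes built from the impact geometry

    Returns (n_freq, n_channels, n_vp_dof) FRFs with virtual point forces/moments as inputs.
    """
    T_f = np.linalg.pinv(R_f)   # (n_vp_dof, n_impacts): maps measured impacts to VP loads
    return Y @ T_f.T            # broadcasts the projection over the frequency axis
```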

In this tab you can plot individual FRFs, as well as the complex mode indicator functions of the whole matrix. This is a valuable tool, as it gives you an idea of how many degrees of freedom are relevant to model your system. Here, for example, the singular values contain quite a bit of noise in the lower frequency range. This is something that could become relevant when we assemble the steering gear into a virtual prototype.
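If you’d like a feel for what the complex mode indicator functions show: they are essentially the singular values of the FRF matrix at every frequency line, and the number of significant singular values hints at how many interface degrees of freedom really contribute. A minimal numpy sketch, assuming `Y` is the stacked frequency-by-outputs-by-inputs FRF array from above:

```python
import numpy as np

def cmif(Y):
    """Complex mode indicator functions: singular values of the FRF matrix per frequency line.

    Y : (n_freq, n_outputs, n_inputs) complex FRF matrix
    Returns (n_freq, min(n_outputs, n_inputs)) singular values, largest first.
    """
    return np.linalg.svd(Y, compute_uv=False)
```

Noisy, non-decaying singular values at the low end of the frequency range are exactly the pattern we just spotted in the plot.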

How to: Use master channels for bookkeeping

Next, we have to set up the rules for the bookkeeping of all the channels. In SOURCE, this is done by connecting the channels of each dataset to so-called master channels. SOURCE will then perform all calculations based on their settings, so the master channels are essential for every analysis we will do. Since our FRF measurement was imported directly from DIRAC, setting up the master channels is easy: SOURCE recognizes which channels belong to indicator sensors, validation sensors, and forces. On top of that, it also recognizes which virtual point each channel belongs to.
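To give an idea of what such a mapping boils down to, here is a purely illustrative Python sketch; the field names and structure are made up for this example and are not SOURCE’s internal format:

```python
from dataclasses import dataclass

@dataclass
class MasterChannel:
    name: str            # e.g. "VP1 force z"
    role: str            # "indicator", "validation" or "force"
    virtual_point: str   # e.g. "VP1", or "" if not tied to a virtual point

# Each dataset stores a mapping from its own channel names to the shared
# master channels, so later calculations always pull consistent data.
channel_map = {
    "acc_12_z": MasterChannel("VP1 indicator z", "indicator", "VP1"),
    "mic_cabin": MasterChannel("cabin microphone", "validation", ""),
}
```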

How to: Handle operational data

Now that this is set up, let’s also load our operational data, map it to the master channels, and quickly check that the mapping was done correctly. By the way, this was all the bookkeeping we had to do. That means we don’t have to worry about it anymore and can start looking into our operational data, while in the background SOURCE makes sure that no channels get mixed up.

We can use the tracking channel to cut out the segments in which the steering gear was actually in operation. We can then combine the segments into a sequence. This sequence only contains data from when the steering wheel was turned at constant speed and in the same direction.
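If you were to do this cut yourself, the logic is straightforward: take the tracking channel (for example the steering speed), mask the samples where it exceeds some operating threshold, and keep the contiguous stretches that are long enough. A rough numpy sketch with made-up threshold values:

```python
import numpy as np

def cut_segments(tracking, fs, threshold=5.0, min_duration=1.0):
    """Return (start, stop) sample indices where the tracking channel exceeds a threshold.

    tracking     : 1-D array, e.g. steering speed in deg/s
    fs           : sampling rate in Hz
    threshold    : level above which the component counts as "in operation"
    min_duration : minimum segment length in seconds
    """
    active = tracking > threshold
    # Find rising and falling edges of the "in operation" mask
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    stops = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        stops = np.r_[stops, active.size]
    return [(a, b) for a, b in zip(starts, stops) if (b - a) / fs >= min_duration]
```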

Since we’re using this data to identify blocked forces, we’re going to run some quick sanity checks on it. For this, we make use of the fact that the channels of the sequence were automatically linked to the master channels and to the virtual points. This means we can look at the operational consistency of the measured data. Here, the structure starts to move more flexibly in the higher frequency range, but in our frequency range of interest the data looks very consistent with respect to the virtual point motion. This looks like a good measurement, so let’s calculate some blocked forces.
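For those wondering what “consistency with respect to the virtual point motion” means in practice: one common way to quantify it is to project the measured interface responses onto the rigid virtual point motion, expand back to the sensors, and compare the result with the original spectra, a coherence-style criterion. A simplified sketch, assuming `R_u` contains the displacement interface modes from the sensor geometry:

```python
import numpy as np

def vp_consistency(U, R_u):
    """Per-frequency consistency of measured responses with rigid virtual point motion.

    U   : (n_freq, n_sensors) operational spectra of the interface sensors
    R_u : (n_sensors, n_vp_dof) displacement interface modes around the virtual point

    Returns values between 0 and 1: close to 1 means the sensors move like a rigid
    interface, lower values indicate flexible behaviour or measurement problems.
    """
    # Project onto the virtual point DOFs and expand back to the sensors
    U_filtered = (R_u @ np.linalg.pinv(R_u) @ U.T).T
    num = np.abs(np.sum(np.conj(U_filtered) * U, axis=1)) ** 2
    den = np.sum(np.abs(U_filtered) ** 2, axis=1) * np.sum(np.abs(U) ** 2, axis=1)
    return num / den
```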

How to: Perform an on-board validation

For this, we jump into Analyze and create a new in-situ analysis. We select our sequence of operational data and the Force-VP matrix as our inputs. As you can see, SOURCE has already selected which channels are used as indicators, which for validation, and which as forces. Next, check the settings for the analysis and hit Calculate.
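Under the hood, in-situ blocked force identification is a matrix inversion problem: for every frequency line, the operational spectra at the indicator sensors are “divided” by the FRFs from the virtual point forces to those same indicators. A bare-bones numpy version of that step, leaving out regularization and the other refinements a production implementation would use:

```python
import numpy as np

def blocked_forces(Y_ind, U_ind):
    """In-situ blocked forces via a least-squares matrix inversion per frequency line.

    Y_ind : (n_freq, n_indicators, n_vp_dof) FRFs from virtual point forces to indicators
    U_ind : (n_freq, n_indicators) operational indicator spectra
    Returns (n_freq, n_vp_dof) blocked force spectra.
    """
    return np.einsum('fij,fj->fi', np.linalg.pinv(Y_ind), U_ind)
```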

The results we now obtain are: the FFT of our operational data, the blocked forces at the virtual points, and the predicted response at the validation points using these blocked forces. So let’s do a quick on-board validation and compare these predictions to the actual measurement: select all channels of one of the validation sensors and compute the norm, zoom into our frequency range of interest, and maybe also average over all time blocks… and I would say these look like great results for an on-board validation. If you want to use this plot for your report, you can of course make it as pretty as you’d like by editing the legends and changing the style of the curves and the graph. But let’s leave that for later and instead do something really cool: virtually assemble this steering gear into a vehicle.
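The on-board validation itself is then just a comparison: predict the validation responses from the blocked forces and the FRFs to the validation sensors, and overlay them with what was actually measured. Continuing the blocked-force sketch from above (all array names are again assumptions):

```python
import numpy as np

def onboard_validation(Y_val, f_bl, U_val):
    """Compare predicted and measured responses at the validation sensors.

    Y_val : (n_freq, n_validation, n_vp_dof) FRFs from virtual point forces to validation sensors
    f_bl  : (n_freq, n_vp_dof) blocked force spectra
    U_val : (n_freq, n_validation) measured validation spectra
    Returns the norm over the validation channels for prediction and measurement.
    """
    U_pred = np.einsum('fij,fj->fi', Y_val, f_bl)
    return np.linalg.norm(U_pred, axis=1), np.linalg.norm(U_val, axis=1)
```

Averaging over the time blocks, as done in the plot, would simply be an average of these norms over the blocks of the sequence.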

How to: Listen to a virtual acoustic prototype

For this, we browse through our database and load an FRF measurement of a vehicle. And just like before, we quickly map the blocked force channels here. To hear what this steering gear would sound like in the cabin, we also have to add the microphone channels to the master channels. Let’s also keep track of our test assemblies so that they don’t get mixed up in the calculations.

Then we jump back into our analysis, select the vehicle FRF as an additional input, and tick the time-domain conversion for our TPA synthesis. SOURCE will now take the blocked forces that we calculated on the test bench and apply them to the noise transfer functions of our vehicle. The result is a prediction of the sound that the steering gear we just measured would make in this virtual assembly.
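Conceptually, this synthesis multiplies the blocked force spectra with the vehicle’s noise transfer functions to get the response at the cabin microphone, and the time-domain conversion turns the block-wise spectra back into an audible signal. A simplified single-block sketch, assuming one-sided spectra and an `H_ntf` array holding the vehicle FRFs from the virtual point forces to the microphone; a real implementation would overlap-add successive blocks:

```python
import numpy as np

def synthesize_block(H_ntf, f_bl, n_fft):
    """Predict one time block of cabin sound pressure from blocked forces.

    H_ntf : (n_freq, n_vp_dof) noise transfer functions (VP forces -> microphone)
    f_bl  : (n_freq, n_vp_dof) blocked force spectrum of this time block
    n_fft : FFT length used for the operational spectra
    Returns a real-valued time block of the predicted sound pressure.
    """
    P = np.sum(H_ntf * f_bl, axis=1)   # frequency-domain response at the microphone
    return np.fft.irfft(P, n=n_fft)    # back to the time domain for listening
```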

