
Christopher Kjærulff

Let's talk digital signal processing

It's no secret; there's no denying it. Digital signal processing, also known simply as DSP, is a big part of the Dynaudio line-up. You'll find sophisticated DSP chips in everything from our new Music family to our firmware-upgradable Focus XD series, and in our professional products such as the LYD monitors and the 18S subwoofer, with presets that provide a perfect match with our speakers.

The manipulation of numbers

But what is DSP, how does it work, and what happens to the music when processed? I wanted to find out so I approached our Chief Technology Officer, Jan Abildgaard Pedersen, to learn more about the technology.

[Christopher Kjærulff, Content Manager] Hi Jan, thanks for taking the time to sit down with me. So, we use DSP in some of our products such as the Xeo series, Focus XD series, and in many of our professional products, but what is it? Can you, in your words, describe what DSP is? 

[Jan Abildgaard Pedersen, CTO] No problem. It’s my pleasure. Digital signal processing – more commonly known as DSP – is the numerical manipulation of signals, and we do it in a computer of sorts: a microprocessor specifically developed to do complex calculations extremely quickly.

Ask the Expert with Jan Abildgaard Pedersen who talks about digital signal processing

We work with a huge amount of data and complexity, and the regular household computer just cannot process it, so that’s where DSP comes into play: it’s a microprocessor that excels at doing multiplications and additions in nanoseconds.

High-resolution audio is sampled 192,000 times each second, and we work in stereo, so we get two numbers every 1/192,000th of a second. All that information is processed in the DSP: calculated, filtered, manipulated – and it offers us complete control over the process and a sea of opportunities.
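As a back-of-envelope sketch of the data volumes Jan describes, here are those figures in code (the 24-bit word length is an illustrative assumption; the interview only mentions the sample rate and stereo channels):

```python
# Back-of-envelope numbers for high-resolution stereo audio.
# Figures from the text: 192,000 samples per second, two channels.
SAMPLE_RATE = 192_000   # samples per second, per channel
CHANNELS = 2            # stereo
BITS_PER_SAMPLE = 24    # a common high-resolution word length (assumption)

samples_per_second = SAMPLE_RATE * CHANNELS
bits_per_second = samples_per_second * BITS_PER_SAMPLE

print(samples_per_second)      # 384,000 numbers arrive every second
print(bits_per_second / 1e6)   # roughly 9.2 megabits per second
```

Every one of those 384,000 numbers per second has to be multiplied, added and filtered in real time, which is exactly the workload a DSP chip is built for.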

However, there’s a saying in our business: “the DSP engineer does not know the problem but has the solution. The acoustician knows the problem but can’t imagine the solution, so he doesn’t ask the DSP engineer.”

You need to master the interdisciplinary field of DSP and acoustics. If you don’t, you might develop solutions for problems that don’t exist or neglect to fix a problem because you didn’t know the solution existed.

Because we focus on this interdisciplinarity, we can use the possibilities in digital signal processing to enhance the quality of our sound:

  • by manipulating signals to offset irregularities in the speaker
  • by improving certain frequencies
  • by preventing loss of information in the signal path
  • by continuously making better loudspeakers that reproduce sound as close to the original as possible.

Replicating a famous piece of art - could you do it?

Obviously, it’s a huge benefit to be able to offset irregularities, prevent loss of information, and so on. But could you expand a little more on the benefits of doing so?

If I ask you to paint a replica of a famous painting, you might be able to reproduce some of the essentials: shapes, colors, etc. But it would be difficult to reproduce the artist’s brushstroke, the small details, the right color variations: the things that add clarity.

But, if I asked you to copy a sequence of 20 numbers, you could do so without losing any information in the process.

What we are doing in the analog domain somewhat resembles making a replica of a complex painting whereas working with a sequence of numbers is what we do in DSP.

We transmit zeroes and ones: voltage or no voltage. When we started with DSP, a one was anything above 2.5V, and a zero anything below – the maximum was 5V. If we transmit a one, we would send a 5V signal through the cable and receive, let’s say, 4.1V due to resistance in the cable – a significant loss, but still very clearly a one. And the receiver wouldn’t resend it as a 4.1V signal; because it recognizes it as a one, it sends 5V. The receiver only needs to differentiate between zeroes and ones.
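A toy model of this regeneration shows why losses in the cable never accumulate. The 5V and 2.5V figures come from the interview; the 18% cable loss is an illustrative assumption:

```python
# Toy model of digital signal regeneration: the receiver only has to
# decide "one or zero", so cable losses never accumulate.
HIGH = 5.0        # volts transmitted for a logical one
THRESHOLD = 2.5   # anything above this is read as a one

def cable(voltage, loss=0.18):
    """Attenuate the signal, as a lossy cable would (18% is an assumption)."""
    return voltage * (1 - loss)

def regenerate(voltage):
    """Decide one/zero, then retransmit at full strength."""
    bit = 1 if voltage > THRESHOLD else 0
    return bit, HIGH if bit else 0.0

received = cable(HIGH)            # about 4.1 V arrives...
bit, resent = regenerate(received)
print(bit, resent)                # ...but a clean 5.0 V is sent onward
```

However many hops the signal makes, each receiver re-sends a perfect 5V one, which is why there is no loss of information.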

There is no loss of information.

With DSP, we can do things that are not possible with traditional technology.

Jan Abildgaard Pedersen, CTO

The Hilbert transform

Also, DSP offers control. We have much more control over the process and transmission, as we don’t have any component tolerances to worry about; the components are numbers in a program. They don’t change. If we produce 1,000 loudspeakers with DSP – they’ll all be the same.

This allows us to work with very steep curves as we are free from the side effects of component tolerances. The predictability of DSP is unique, and that predictability means we can work with higher order crossover designs without any problems at all. 

Another benefit is opportunity. With DSP, we can do things that are not possible with traditional technology. In an analog crossover with regular electronic components, you cannot change power input at different frequencies without altering the phase of the loudspeaker, and the phase shift will be different at different frequencies.

If you have recorded two voices – a male and a female – singing simultaneously and play them back through a loudspeaker with phase problems, then one of the voices would be delayed relative to the other, because the phase shift differs at their different frequencies.

In analog systems, there is a one-to-one relationship between power as a function of frequency and phase alteration as a function of frequency – the Hilbert transform describes this correlation. Needless to say, we aren’t interested in this, and we can’t do anything about it in analog systems, but we can in digital ones.

In a DSP solution, we can make what is called a linear-phase filter that introduces a constant time delay at all frequencies. This means that when you press play, the whole signal might be delayed by 20 milliseconds, which you wouldn’t notice. A time delay that varied between frequencies, on the other hand, you would notice.
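The classic way to get linear phase is a symmetric FIR filter: every frequency then comes out delayed by the same (N − 1)/2 samples. A minimal sketch, using a simple 5-tap moving average as the example filter (my choice, not one of Dynaudio's designs):

```python
import cmath

# A symmetric FIR filter has linear phase: every frequency is delayed
# by the same (N - 1) / 2 samples. Example: a 5-tap moving average.
taps = [0.2] * 5                    # symmetric about the middle tap
group_delay = (len(taps) - 1) / 2   # = 2 samples, at every frequency

def phase_at(omega):
    """Phase of the filter's frequency response at angular frequency omega."""
    H = sum(h * cmath.exp(-1j * omega * n) for n, h in enumerate(taps))
    return cmath.phase(H)

# phase(omega) / omega gives the delay in samples; it is the same
# at both (low) test frequencies.
for omega in (0.1, 0.2):
    print(-phase_at(omega) / omega)   # approximately 2 samples each time
```

Because the delay in samples is identical at every frequency, the two recorded voices in Jan's example stay perfectly aligned; the whole recording just starts a couple of samples later.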

We can manipulate with power and phase as much as we want with DSP as all combinations are possible.

Harmonic distortion occurs naturally in instruments, and we don’t want the loudspeaker to add another layer.

Jan Abildgaard Pedersen, CTO

Finally, I think it’s important to mention nonlinear distortion. For a loudspeaker to be linear, it would have to increase volume and voltage by the same factor. However, a passive loudspeaker does not work like this – it might only increase the volume by a factor of 1.8 even though we doubled the voltage. So there’s clearly a nonlinear relationship between input and output – and this leads to harmonic distortion.

Harmonic distortion occurs naturally in instruments, and we don’t want the loudspeaker to add another layer. So, we can introduce a nonlinear counter distortion that cancels out what the loudspeaker produces on its own. 

For example, take the standard parabola Y = X². When X doubles, Y quadruples – a good example of nonlinearity. Say the loudspeaker behaves like Y = X², where X is the input to the speaker. But the input actually comes from our streaming device – call that A. We can then make a box that changes A to X in a clever way: if the output X of the box is the square root of A, then Y = X² = A, because the square and the square root cancel each other out.
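Jan's square-root example can be written out directly (the function names are mine, purely for illustration):

```python
import math

# Jan's example: a loudspeaker that squares its input (Y = X^2) can be
# linearised by a pre-distortion stage that takes the square root.

def speaker(x):
    """Toy nonlinear loudspeaker: output quadruples when input doubles."""
    return x ** 2

def predistort(a):
    """The 'clever box': output the square root of the source signal A."""
    return math.sqrt(a)

for a in (1.0, 2.0, 4.0):
    y = speaker(predistort(a))
    print(a, y)   # y equals a: the square and the square root cancel out
```

The chain as a whole is now linear even though neither stage is, which is the essence of nonlinear counter distortion.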

It's all about the interdisciplinarity between DSP and acoustics

With all of these benefits, I guess there are also areas where you have to be ‘razor sharp’ and avoid pitfalls? What are some of the most common pitfalls when we talk digital signal processing?

At a very general level, you have to be very aware of the interdisciplinarity between DSP and acoustics. As I said earlier: the DSP engineer has the solution but doesn’t know the problem. The acoustician knows the problem but can’t imagine the solution.

Almost all classical DSP is one-dimensional: each signal depends on only one variable, time. In acoustics, few things can be described as one-dimensional: the driver at very low frequencies could be one, as the surface moves like a single piston – but as soon as the frequency climbs, the surface movement becomes irregular and no longer depends only on time.

Also, it gets really ‘bad’ when we put the driver into a cabinet in a room. The cabinet – and the room – quickly becomes large compared to the wavelength, and everything has to be measured in wavelengths. At 20 Hz, we have a wavelength of a little more than 17 meters.

For the room to be compact in relation to the wavelength, our rule of thumb says it has to be less than 1/8 of the wavelength – so, in this case, the room mustn’t be deeper than about 2.1 meters at 20 Hz. So, it’s only in a few cases that we can treat acoustics as one-dimensional.
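The rule-of-thumb arithmetic behind those figures is simple enough to check (the speed of sound in air is my added constant; the interview only gives the results):

```python
# Rule-of-thumb from the text: a room is "compact" relative to a
# wavelength only when it is smaller than 1/8 of that wavelength.
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumption)

def wavelength(frequency_hz):
    """Wavelength in meters for a given frequency."""
    return SPEED_OF_SOUND / frequency_hz

lam = wavelength(20)    # a little more than 17 m at 20 Hz
max_depth = lam / 8     # about 2.1 m: the "compact room" limit
print(round(lam, 2), round(max_depth, 2))
```

At 20 Hz almost no real listening room passes the 1/8 test, which is why treating a room as one-dimensional breaks down at low frequencies.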

Unfortunately, in some cases we have seen well-made DSP solutions that only work in a one-dimensional world – not the three-dimensional one the loudspeaker is in. Yes, the graphs they produce are beautiful, but they sound horrible.

For example, if your algorithm is designed to manipulate images of buildings to make them equally tall, and you feed it a two-dimensional picture in which the buildings, due to a lack of depth, merely seem equally tall, then your algorithm won’t work as intended. Here, we only moved from three to two dimensions. Imagine how bad it can be when we skip from three dimensions to one.

The industry had to 'cheat' a little...

What you’re talking about here, in my mind, also explains why DSP has had opponents in the audiophile world. But is this really the reason for DSP having a bad name in some circles?

It is definitely one of the main causes, but you have to realize that DSP – the theory behind it – was developed some 50-60 years ago, and it wasn’t until the emergence of the CD that people started taking an interest.

To make it work, the industry had to ‘cheat’ a little, because the DSPs available weren’t powerful enough to carry out the complex calculations needed. One solution was to use less precise numbers: a CD is 16-bit, so what if we went even lower and used 8-bit? Yes, you raised the noise floor, but the slow DSPs could keep up with the calculations.

For example, with a standard CD, we get a number for the left and right channel 44,100 times every second, but what if you downsampled to 5,000 times a second? You get roughly nine times less information, and the crossover filter’s length could also be reduced to a ninth. That would mean the DSP workload is reduced by a factor of 9² = 81, which gives you so much more time to do your calculations.
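Written out, the early-DSP ‘cheat’ comes down to two factors of roughly nine that multiply:

```python
# The early-DSP 'cheat' in numbers: downsampling CD audio from
# 44,100 to 5,000 samples per second.
cd_rate = 44_100
cheap_rate = 5_000

ratio = cd_rate / cheap_rate    # about 8.8, "roughly nine" in the text

# Fewer samples to process each second AND a proportionally shorter
# filter, so the workload falls by roughly the ratio squared:
workload_reduction = round(ratio) ** 2
print(round(ratio), workload_reduction)   # 9 and 81
```

An 81-fold reduction was the difference between a 1980s DSP choking and coping, at the price of audible compromises that took the technology years to live down.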

Basically, the false start gave DSP a bad reputation in some circles because the technology was not ripe – the machinery simply was not ready. Now, it is. Processing power has multiplied numerous times due to Moore’s law, and I don’t foresee that changing anytime soon.

Room adaptation could be the future

Talking about Moore’s Law is a nice hook for my next question, where do you see DSP taking us in the future?

As I said earlier, we have full precision, full control, no limitations, and a world of opportunities with DSP. We are working hard on opening up the opportunities DSP presents. Of course, it’s about using DSP to continuously improve the quality of sound reproduction. But, we are also exploring how DSP can change and redefine the listening experience – without compromising audio quality.

My main field – room adaptation – is one of those opportunities we are pursuing. We want a speaker that is designed and tuned in our listening rooms to sound exactly the same in any other room, and that’s where room adaptation comes into the picture.

To do this, our algorithms and the way we work with acoustics are paramount. You need to make measurements in the customer's listening room to figure out placement, dimensions, etc., so your algorithms have the right data to work with – and you need several measurements: one just doesn’t cut it if you want a solution that doesn’t just produce a nice graph but actually sounds great.

We are also researching how we can get technology to adapt to human needs and not vice versa. Especially the possibility of sound zones: classical music playing by the sofa and rock by the armchair – in the same room, from the same loudspeaker system, without affecting each other! With DSP, this is theoretically possible.

Ask a question? Of course, you can!

Do you have a question about digital signal processing that you’re itching to have answered? If you do, it’s time to find Dynaudio on Facebook and write down all of the questions you have. We will select as many questions as we can fit in an episode of Ask The Expert and release the video with Jan’s answers as soon as possible on our website, YouTube, and Facebook. So, make sure to connect with us on Facebook and subscribe to our YouTube channel.