Across Acoustics

New Research Roundup: Music as Noise, Instruments as Dynamic Sound Sources, and Modeling Piano Soundboards

ASA Publications' Office

This episode highlights three recent articles from the field of musical acoustics. First, we talk to Elvira Brattico (Aarhus University) about her research into what causes a person to experience music as noise. Next, Stefan Weinzierl (Technical University of Berlin) discusses how a musician's movement during a performance will impact the notes listeners hear. Finally, Pablo Miranda Valiente (University of Southampton) discusses his work to develop a model that shows the impact a piano soundboard has on the note played.

Associated papers:
- Giulio Carraturo, Marina Kliuchko, and Elvira Brattico. "Loud and unwanted: Individual differences in the tolerance for exposure to music." J. Acoust. Soc. Am. 155, 3274–3282 (2024). https://doi.org/10.1121/10.0025924.
- David Ackermann, Fabian Brinkmann, and Stefan Weinzierl. "Musical instruments as dynamic sound sources." J. Acoust. Soc. Am. 155, 2302–2313 (2024). https://doi.org/10.1121/10.0025463.
- Pablo Miranda Valiente, Giacomo Squicciarini, and David J. Thompson. "Influence of soundboard modelling approaches on piano string vibration." J. Acoust. Soc. Am. 155, 3213–3232 (2024). https://doi.org/10.1121/10.0025925.


Read more from The Journal of the Acoustical Society of America (JASA).
Learn more about Acoustical Society of America Publications.

Music Credit: Min 2019 by minwbu from Pixabay. 

 


Kat Setzer  00:06

Welcome to Across Acoustics, the official podcast of the Acoustical Society of America's publications office. On this podcast, we will highlight research from our four publications. I'm your host, Kat Setzer, editorial associate for the ASA.

 

Kat Setzer  00:25

Today, we're going to try a little different format for our episode. We'll actually be sharing three different research articles from the area of musical acoustics that published recently in JASA. The first article has to do with something I'm sure many of us have experienced at some point or another: loud music. I'm talking with Elvira Brattico about her recent forum article, "Loud and unwanted: Individual differences in the tolerance for exposure to music." Thank you for taking the time to speak with me, Elvira. How are you?

 

Elvira Brattico  00:50

Yeah, fine. Thank you. How are you?

 

Kat Setzer  00:52

I'm pretty good.

 

Elvira Brattico  00:53

Yes. Thanks for inviting me here.

 

Kat Setzer  00:56

Oh, yeah, you're more than welcome. So first, just tell us a bit about your research background.

 

Elvira Brattico  01:00

Yeah, thanks for asking. I have a very distinctly interdisciplinary background, because when I was younger, I studied philosophy, musicology, music performance, psychoacoustics, and psychology. And then I landed in a PhD in cognitive neuroscience. Currently, I'm principal investigator of the learning team at the Center for Music in the Brain at Aarhus University in Denmark. I also work in Italy as a second affiliation. My research closely reflects this background, because I use multiple methods, from neuroimaging to psychology, psychoacoustics, and computational musicology, to study how the brain is changed when we are doing musical activities, which can go from listening to performing and learning to play an instrument. It's a very fascinating topic, because we really spend a lot of time interacting with music, both as musicians and as listeners in everyday life. Another line of research is about individual differences in audiation: how our responses to sounds vary according to personality, even genetics, and cognitive abilities, because all of this can affect how we perceive, memorize, and feel the sounds of the environment.

 

Kat Setzer  02:15

Very cool. So in this article, you talk about how music, which is often seen as pleasurable for listeners, can frequently become noise in other scenarios. What distinguishes when music is perceived in a negative versus a positive way? And what's the impact on the listener as a result?

 

Elvira Brattico  02:44

So we know from several studies, such as questionnaire studies, that music is considered by people one of the strongest sources of pleasure in everyday life. This is, of course, in contrast with the fact that music can be perceived as noise, as something negative. But indeed, it happens. Music can annoy us when we hear it unintentionally, for instance in a shopping mall, or when you're trying to concentrate, working or studying, and a very loud brass band with mistuned instruments passes by in the street. And by the way, this really happened to me earlier this year. But music can also become annoying when it does not match expectations based on our culture. According to a very influential theory, the predictive coding theory of music, developed actually in Denmark, we like and derive pleasure from music that optimally matches our expectations, but not too much. We want to be able to predict the sounds that are coming next, but we also like to be stimulated, to have predictions that introduce some originality, so that we continuously update our predictions through learning, incorporating new things into our models. But when we can't predict at all, when the sounds are too complicated and don't match anything we have heard before, then we are confused, and that kind of situation is not really positive for us either. So on one end we have music that is too simplified and corny, such as, for example, an earworm, and on the other end we have music that is too complicated, too hard to understand. At these two extremes we have a negative experience.

 

Elvira Brattico  04:47

So, in sum, if we consider all the reasons why music can become annoying, can become noise: first, we have acoustic reasons. We can have music that is too loud, but also too rough or inharmonic. These features are unpleasant because our peripheral and central auditory nervous systems are programmed to respond in a certain way to sounds that might be related to a danger, either a physical danger, such as a very loud sound, which can actually damage our ear, or sounds that are rough, such as a very dangerous animal, a tiger or whatever, which could be a danger to our life. These are the acoustic reasons why music can become annoying. But then there are also cognitive reasons, related to what I said before: we tend to like music that optimally matches expectations based on what we have heard in our life. When it matches our expectations too much or too little, we don't like it, and it can become really annoying. And finally, music can be perceived as noise when it is unwanted, when it does not match our subjective goals, what we want to do with music, what we are doing, what kind of task we are trying to achieve. In that sense, music can come between us and our goals, and in that case it's also annoying.

 

Kat Setzer  06:32

Interesting, like when you're at a restaurant trying to talk to your friends, and then the music is really turned up super loud. 

 

Elvira Brattico  06:38

Yeah, exactly. Exactly.

 

Kat Setzer  06:40

 Yeah. So why do so many public spaces use background music?

 

Elvira Brattico  06:45

Well, this is also very interesting to consider, because I would point to maybe three main reasons. We know from a large body of research in music psychology and neuroscience that music can induce emotions, emotions that we can even measure with neuroimaging or psychophysiological devices. We can see real changes in our central nervous system, in our autonomic nervous system, in our body. And these emotions can be positive ones: music can induce happiness, can calm us down, can be really soothing. It can downregulate our arousal, our stress, by acting on the autonomic nervous system, and particularly on the HPA axis. When music is able to do that, it's really functional for the shopping experience. In a public space such as a shopping mall, we are exposed to background music which makes us feel relaxed and in a good mood. We stop worrying, we feel elated, we think that we have time to shop, and we want to spend money. So that's the point.

 

Elvira Brattico  08:08

Another reason is that, as [names] and colleagues wrote a couple of years ago in a review paper, music is actually persuasive and motivating, because it directly activates our reward system, working through dopaminergic neurotransmission. And when music activates this part of the brain, we want to move, but we also feel more motivated to do the things we desire. We are not apathetic; we want something, and this desiring attitude is exactly what shop owners want in us. In restaurants, too, it's really useful for certain purposes.

 

Elvira Brattico  09:01

And the third reason relates to the social function of music, to identifying oneself with a group. In certain venues, for example coffee shops with a street style, you can hear background music from specific genres that younger generations like, such as rap or pop. In this case, music acts as a flag, to attract those people who identify with that particular subculture or musical genre. They want to go there, they want to feel that they belong to the subculture, and the music is functional to that. I've focused mainly on the shopping situation, but just as an example; I think these kinds of reasons apply to several public places.

 

Kat Setzer  09:59

Yeah, yeah, that all makes sense, because I was thinking, too, like at the gym, you know, they always play very high-energy music. And people like to listen to music when they're working out to keep them motivated and moving along, which works in the same way as encouraging shopping and eating.

 

Elvira Brattico  10:15

Yeah, well, with music used at the gym, it works in the same way: essentially keeping the physiological arousal very high, and also keeping the motivational drive going. So again, it acts on the reward system. And that is really useful for reducing the sense of fatigue and keeping going when you are working out. Yeah, exactly.

 

Kat Setzer  10:43

Oh, that's really cool. So listen to music when you're working out. 

 

Elvira Brattico  10:46

Yeah, exactly. Yeah, it works. Yeah.

 

Kat Setzer  10:49

So what about music being used in public spaces with a collective focus, like a church or a sports stadium?

 

Elvira Brattico  10:56

Yeah, it's nice that you asked this, because I believe the mechanisms are a bit different from the previous examples. On top of the previous mechanisms, there are a few more, and particularly one more which is very important: in those kinds of places with a collective focus, music has the function of a social glue. Maybe it's actually one of the functions of music since the very beginnings of Homo sapiens society, or of our ancestors, because some evolutionary psychologists have suggested that in the old times music was really important for social cohesion, for bonding with members of the group, and even for affiliation between the caregiver and the infant. Because you synchronize, and then you actually activate the brain in the same way: the neuronal oscillations entrain across two or more individuals in a group, and music helps in this entrainment. And this helps us feel part of a united group. Still nowadays, in many social occasions, when we gather to celebrate rites of passage, like a birth or a death, and when we want to feel strong emotions together and participate in special events, such as in a stadium or even in a theater, we really need to have music. Without music, the experience would be less emotional, less unifying. Or, to put it bluntly, I would say it wouldn't really make any sense. Music is really part of the experience in public spaces with a collective focus.

 

Kat Setzer  12:55

Interesting, interesting. So different people have different preferences in how they listen to music. What do we know about these different preferences? In particular, why do some folks like loud music and others hate it?

 

Elvira Brattico  13:04

Yeah, this is a really interesting question, because we actually know quite a lot about the relationship between different preferences and individual differences, as we say in psychology. It seems that personality is what really matters here, because some personality traits have been associated with preferences for specific kinds of music. This is research that has been conducted very much in the UK; I would like to mention mainly Peter [name], but there are also other colleagues who have concentrated on this question. It seems that certain personalities tend to like certain kinds of music, I wouldn't say specific genres, but groups of genres that share structural features. For example, there are personalities that are strong in novelty seeking, or the openness trait, who are fascinated by music that is sophisticated and complex. That could be classical, but also jazz or world music. And then there are more agreeable personalities that might be more interested in music that is less complex, more pop rock, or more energetic and upbeat. So this kind of research exists, and there are actually a few studies relating personality traits to musical preferences.

 

Elvira Brattico  14:50

And there is also another line of research coming from Finland, from our colleague [Sophie Sadie Kaleo], who has identified certain strategies that we use implicitly to pick the music that might best regulate our mood. We tend to select music according to our emotional goals, and we tend to have certain strategies that predominate over others. I would like to mention two of the seven that have been identified, which are related to the preference for loud music by some people and the hatred of it by others. One strategy is called "discharge." This is the strategy where people use music to release or vent anger or sadness through music that expresses those emotions, like when you want to regulate your mood by listening to death metal, for example, which is really, really loud. Then there is another strategy that is also related to negative music, music that can be sad or angry, but it's the opposite strategy, which has been termed "solace," because it's related to wanting to feel accepted, to feel understood, by listening to music that resonates with your negative feelings. When you feel troubled, you want to listen to songs that are also troubled, but in order to seek understanding, not to amplify the feelings or burst out; it's more about feeling accepted in these emotions. And interestingly, the first strategy I mentioned, discharge, seems to be unhealthy, in the sense that it can actually amplify the negative mood rather than positively downregulate it, whereas solace is a healthier strategy. But in both cases, you want to listen to very loud music.

 

Kat Setzer  17:05

Okay. Yeah, sort of switching gears a little bit, what factors can affect an individual's noise sensitivity?

 

Elvira Brattico  17:12

Yes, this is related to noise in general. But we have research conducted in different countries around the world identifying factors of sensitivity to noise. People who have very strong sensitivity to noise probably have had grandparents or parents with the same level of sensitivity, because we know that it's a heritable trait. And we think that it's also related to genetics, so there might be some polygenic factor that accounts for this particular kind of sensitivity to unwanted sounds. Interestingly, in the research that we conducted a few years back, we didn't find an effect of culture or geographical location. This trait of sensitivity to noise seems to be consistent across cultures, because we studied Finnish individuals, southern Italians, and Germans, as far as I remember, and we didn't find any effect of nationality. We found the same amount of sensitivity to noise in southern Italians, who are always bombarded by noise, but also in Finland, with its very quiet environment, we found the same amount of sensitivity. Which is not good news for those people who live in southern Italy.

 

Kat Setzer  18:52

Is there any relationship between a person's hearing ability and their noise sensitivity? 

 

Elvira Brattico  18:56

Well, not necessarily. There are some clinical conditions, like hyperacusis, a condition in which sounds that are considered innocuous become intolerable, painful, and unpleasant. Hyperacusis can be caused by excessive exposure to loud noises, and people with hyperacusis actually experience discomfort from loudness at very low average levels, 16 to 18 decibels lower than normal. But the research so far has not found any significant correlation between hearing loss and hyperacusis. So we can say that, on one hand, hyperacusis often occurs in conjunction with hearing loss, but on the other hand, hearing loss is not essential for having hyperacusis. This kind of tendency is also present in non-clinical sensitivities, such as the noise sensitivity that we mentioned earlier. Noise sensitivity is not connected to an individual's hearing ability, just as it's not connected to culture, as I said earlier.

 

Kat Setzer  20:16

Okay, and so you talked a little bit about hyperacusis. What about clinical noise sensitivity? What is considered a clinical level of noise sensitivity, and what sort of diagnoses fall under this umbrella? How's it different from regular noise sensitivity?

 

Elvira Brattico  20:31

Yeah, thanks for these questions. It's very important to specify that the terminology and definitions have differed a lot between the research literature and the clinical literature, and unfortunately, at the current state of the art, it's really difficult to compare and interpret the studies with consistency. We can observe that nonclinical noise sensitivity is not determined just by the intensity level of perceived sounds, but by the subjective perception of a noisy environment, which is not related to the experience of pain or any other physical reaction. In contrast, there is the concept of clinical noise sensitivity, as in hyperacusis or phonophobia. In these cases, we know that hyperacusis, as I said before, causes actual physical discomfort and even pain from sounds that are not so loud, or, in the case of phonophobia, even fear of sounds. And in these cases, the diagnosis and treatment pathways are more identifiable, more specific. In the case of noise sensitivity, because it is subclinical, we can say, and because it is not related to a strong physical reaction as in hyperacusis or phonophobia, there is no diagnosis or treatment specific to it. And also, as I said, the literature is still quite discrepant, still not consistent, unfortunately.

 

Kat Setzer  22:20

Okay, okay. As we sort of mentioned, unwanted noise is certainly annoying, but what are the other negative impacts of noise exposure?

 

Elvira Brattico  22:28

Yeah, well, quite many, unfortunately. Noise is not just annoying. In fact, we described in our article that the harmful effects of noise are often significantly underestimated, because beyond the annoyance, continuous exposure to loud noise can really pose health risks. These have actually been studied, though maybe not enough considering the political impact of these studies, because continuous exposure to loud noise can cause cardiovascular disease, hypertension, and sleep disturbance, and even, with continuous exposure, some cognitive issues, especially at a developmental age. Also, listening to loud sounds, even recreational ones, like at a pop concert, may result in temporary threshold shifts, a temporary reduction in hearing sensitivity due to some kind of cochlear dysfunction that affects the inner and outer hair cells in the cochlea. These temporary threshold shifts may actually predispose an individual to eventual permanent hearing loss. And all these facts are massively underestimated in political decisions and regulations.

 

Kat Setzer  24:01

Okay, yeah. So what can be done both on the individual scale and on the larger scale to reduce music as noise?

 

Elvira Brattico  24:08

Well, there are several practical strategies to reduce the negative effects of music as noise. First of all, as I already mentioned, there is a need to change the political climate a bit, by raising public awareness and educating regulatory bodies to recognize the risks associated with loud music. In this sense, some practical strategies can include, for example, encouraging more and more the use of earplugs in particularly noisy environments, such as at the opera or at concerts. It could also be appropriate to establish legal sound limits for personal listening devices. This is implemented in some devices, but it could be done like in France, where the maximum output is set at 100 decibels, so that you can't go over that at all; it's not just that you are warned. Additionally, we suggest discussing the possibility of lowering the current threshold for daily noise exposure applied to workplaces, which is now 85 dB, especially where background music is always played, such as in shopping malls or gyms. There are people working there, and sometimes they are exposed to very loud music all the time, and that's really not healthy for them. This new threshold might be adjusted considering that the ideal sound level for normal conversation is actually between 55 and 65 decibels, and indeed, 55 decibels has already been proposed as the ideal sound level in shopping malls. I also think it could be important to have specific regulations for how long music can be played at a very loud level, for example, no more than 50 minutes of music over 80 dB in shopping malls, gyms, theater halls, and so on. At the same time, raising awareness about the risks associated with loud music could benefit individuals. People, especially at a young age, should really know better what they're doing with their ears.
And I always remind my students at the university that if you lose the hair cells in your inner ear, they are lost forever. So it's really important to raise awareness about this, because we know that nearly 40% of young people between the ages of 12 and 35, in countries such as Italy, Denmark, or the US, are exposed to potentially damaging sound levels in places such as nightclubs, discos, bars, or concerts. We know this, and we really need to do something about it, so that we don't destroy our hearing system before...
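The workplace limits discussed here are usually paired with an equal-energy trading rule: with the 3 dB exchange rate common in European occupational regulations, every 3 dB above the reference level halves the permissible daily exposure time. A minimal sketch, assuming the 85 dB / 8-hour reference and 3 dB exchange rate (the function name is our own, for illustration):

```python
def permissible_hours(level_db, reference_db=85.0, reference_hours=8.0,
                      exchange_rate_db=3.0):
    """Permissible daily exposure time at a given sound level,
    using the equal-energy (exchange-rate) rule."""
    return reference_hours / 2 ** ((level_db - reference_db) / exchange_rate_db)

print(permissible_hours(85))   # 8.0 hours at the limit itself
print(permissible_hours(88))   # 4.0 hours: 3 dB more halves the time
print(permissible_hours(100))  # 0.25 hours (15 minutes)
```

The same arithmetic shows why a lower threshold matters: dropping the reference to 80 dB would cut the permissible time at any given level by a factor of about three.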

 

Kat Setzer  27:25

Right. 

 

Elvira Brattico  27:26

Yeah.

 

Kat Setzer  27:27

Yeah. Do you have any other closing thoughts?

 

Elvira Brattico  27:30

No, I'd just like to remind everyone about the positive effects of music, because, as we've actually studied, music is really beneficial for our brain, for cognitive stimulation, emotional regulation, and so on and so forth. But it has to be used in a clever way, so that it doesn't become annoying and damaging.

 

Kat Setzer  27:54

Right, right. Well, I really appreciate you taking the time to speak with me. This was such an interesting article to read and discuss. I'm very much a person who struggles to concentrate with any background noise, even music, so it hit close to home. It was also fascinating to learn what factors impact a person's noise tolerance, and how much our experience of sound can vary. Thank you again, and I hope you have a great day.

 

Elvira Brattico  28:17

Thank you. Same to you.

 

Kat Setzer  28:19

A lot of current research regarding the sound generated by musical instruments uses setups that have a stationary instrument. In real life, though, musicians move their instruments around quite a bit while playing. Today I'm talking with Stefan Weinzierl about his article, "Musical instruments as dynamic sound sources," which appeared in the April issue of JASA and discusses some of the effects this movement has on what we hear. Thanks for taking the time to speak with me today, Stefan, how are you?

 

Stefan Weinzierl  28:43

Fine, thank you.

 

Kat Setzer  00:25

So first, tell us a bit about your research background.

 

Stefan Weinzierl  28:47

Well, I'm head of the Audio Communication Group at TU Berlin. And together with my colleague David Ackermann, who has done most of the data acquisition and analysis work for the current study, we're covering a broad range of topics in communication acoustics, including room acoustics, music acoustics, and also virtual acoustic reality. 

 

Kat Setzer  29:10

Yeah, I was going to say, we see quite a few publications out of your group in JASA. So what is directivity, and how does it affect the sound created by musical instruments?

 

Stefan Weinzierl  29:19

Well, directivity is a basic property of any sound source, both electroacoustic sound sources-- loudspeakers, sound reinforcement systems-- but also natural acoustic sources, such as the human voice or a musical instrument.

 

Kat Setzer  29:34

Okay, how has the directivity of musical instruments been studied previously? 

 

Stefan Weinzierl  29:39

Well, traditionally, the directivity of musical instruments has been treated in the same way as that of loudspeakers, by placing microphones at different directions from the source and measuring to what extent the sound radiation varies with the angle relative to a certain reference axis.

 

Kat Setzer  29:59

Okay, okay. But the problem you mentioned in the article is that a lot of this research has used setups where the musical instrument stays still, while most musicians move around when they play. What do we know about how a musician's movement of an instrument will affect the instrument's directivity?

 

Stefan Weinzierl  30:15

Well, as every concertgoer knows, the movement of musicians on stage is an important creative element. These movements convey expressive gestures, which are related to musical emotions, but also to phrasing-- that is, when a musical phrase begins, when it has its climax, and when it ends. And what we showed is that these gestures are communicated not only visually, but also acoustically, since when a musical instrument moves, its directivity moves with it. This causes a variation in the timbre of the instrument at a certain listener position, since the spectral composition, and therefore the timbre, of the sound changes depending on the direction from which we see and hear the instrument. Another effect is that the room acoustical response is also different depending on how the directivity is oriented in the room.

 

Kat Setzer  31:19

I guess that all makes sense, when you think about it. It's a little intuitive. So in this study, you're concerned with note-dependent directivity. How does the particular note that's being played affect directivity?

 

Stefan Weinzierl  31:30

Well, this effect is maybe not as obvious as the motion-induced variation, but everybody who has played a recorder or a flute or a clarinet knows that the different notes and pitches are reached by opening and closing the tone holes. For example, if all holes are closed, the sound is radiated through the open end of the tube-- through the bell of a clarinet, for example. But if some holes are open, a significant part of the sound is radiated through these open holes. So along with the pitch, the sound radiation characteristic changes, and the directivity changes with it.

 

Kat Setzer  32:13

Oh, that's really nifty. 

 

Stefan Weinzierl  32:14

This effect is quite strong for woodwind instruments, but it also occurs, for example, for string instruments, since for them, too, the vibration of the resonance body, which does the sound radiation, changes with the note being played.

 

Kat Setzer  32:32

That's really nifty. How did you study the impact of musicians' movements on note-dependent directivity in this work?

 

Stefan Weinzierl  32:39

Well, we analyzed it in two steps. First, we measured the static directivity-- of an instrument that is not moved-- in the traditional way, in the anechoic chamber, but for every note that the musical instrument can play. Then, in a second step, we measured the movements of different musicians, along with their instruments, with a motion tracking system on stage in a typical concert situation. Then both of these data sets were combined: we applied the movements from the motion tracking system to the static directivity and simulated how, for example, the timbre of an instrument would change at a certain point in the audience.
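Combining the two data sets amounts to rotating the measured static directivity by the tracked orientation before reading off the gain toward the listener. A toy one-dimensional sketch of that idea (azimuth only, nearest-sample lookup; the actual study uses full spherical, per-note directivities, and all names and values here are made up for illustration):

```python
def radiated_gain(directivity_ring, instrument_yaw_deg, listener_azimuth_deg):
    """Gain toward a fixed listener after rotating a static directivity
    pattern (gains sampled evenly around a circle) by the instrument's
    tracked yaw angle."""
    n = len(directivity_ring)                      # samples around the circle
    relative = (listener_azimuth_deg - instrument_yaw_deg) % 360
    index = int(round(relative / (360 / n))) % n   # nearest stored sample
    return directivity_ring[index]

# Illustrative pattern: strongest radiation at 0 degrees, weakest behind.
ring = [1.0, 0.8, 0.5, 0.3, 0.2, 0.3, 0.5, 0.8, 1.0, 0.8, 0.5, 0.3]  # 12 x 30 deg

print(radiated_gain(ring, instrument_yaw_deg=0, listener_azimuth_deg=0))   # 1.0
print(radiated_gain(ring, instrument_yaw_deg=60, listener_azimuth_deg=0))  # 0.5
```

As the tracked yaw changes over time, the listener-direction gain changes with it, which is the mechanism behind the timbre variation described above.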

 

Kat Setzer  33:24

Okay. Okay. So let's get into the simulations a little bit. You use simulations to consider playing and how it would occur outside of an anechoic chamber. How did these simulations work?

 

Stefan Weinzierl  33:35

Well, as you said, first, we described the variations that would occur in the anechoic case, because this is how we first measured the static directivity. But then in a second step, we used a room acoustical simulation to analyze the situation in a typical concert hall. And as we expected, the sound reflections that occur in a concert hall tend to smoothen these variations a little bit, so they are not so big anymore as they are in the anechoic case.

 

Kat Setzer  34:07

That's really interesting. Okay, so what is the directivity index, and how did you use it in your study?

 

Stefan Weinzierl  34:13

Well, I’d say the directivity index is the most basic indicator of the directivity of a sound source. It describes to what extent the sound source is omnidirectional and to what extent it is directed towards a certain direction. In our study, it was a first step to show the variation of the directivity, both with movement and with the played notes, because we could see that the directivity index differs depending on the note played, and it also varies with the movements of the musicians.
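The directivity index can be written as DI = 10·log10 of the on-axis intensity divided by the intensity averaged over all measurement directions. A minimal sketch, assuming equal-area sampling of the measurement sphere and made-up pressure values (the function name is ours):

```python
import math

def directivity_index_db(on_axis_pressure, all_pressures):
    """Directivity index in dB: on-axis intensity relative to the mean
    intensity over all measured directions (intensity ~ pressure squared)."""
    on_axis_intensity = on_axis_pressure ** 2
    mean_intensity = sum(p ** 2 for p in all_pressures) / len(all_pressures)
    return 10 * math.log10(on_axis_intensity / mean_intensity)

# An omnidirectional source radiates equally everywhere: DI = 0 dB.
print(directivity_index_db(1.0, [1.0] * 32))  # 0.0

# A source beaming toward the reference axis gives a positive DI.
print(directivity_index_db(1.0, [1.0] * 4 + [0.1] * 28))
```

A per-note, per-orientation version of this quantity is what varies as the musician plays and moves.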

 

Kat Setzer  34:51

Okay, what spectral effects did you observe in these experiments? 

 

Stefan Weinzierl  34:55

Well, the directivity of almost every sound source gets more complex with higher frequencies. At low frequencies, there is no strong directivity at all, and as a result, there's not much variation when the sound source is moved or when different notes are played. But at higher frequencies, the directivity gets more and more complex, and so do the variations related to the two effects we spoke about.

 

Kat Setzer  35:26

Okay. And then you were also concerned with room acoustic effects, as you mentioned. Why? What did you end up finding in your study?

 

Stefan Weinzierl  35:33

Well, as I said, the change in timbre is one effect of a time-varying directivity. But a second effect is that the room acoustic response also changes, depending on whether the sound radiation is mainly directed towards highly absorbing surfaces or towards a highly reflective part of the room. For example, the audience is usually the most absorbing surface, so if the instrument plays towards the audience, a lot of absorption occurs, whereas if it plays towards a lateral wall or the ceiling, there is much more reflection going on. So the room acoustical behavior is also time-varying if the directivity changes. And both effects in combination produce a clearly audible variation. Of course, we tried to describe it quantitatively, as precisely as possible, as variations in sound pressure level, in spectral composition, and in room acoustical parameters. But it's another question whether all these variations are audible. And as we could see from our data, and as has also been shown by other groups, these variations are clearly audible. So every listener can hear the difference between a static and a time-varying directivity. 

 

Kat Setzer  36:55

Okay, what is the effect of this time variation on the listener in the concert hall? 

 

Stefan Weinzierl  37:00

Well, as I said, the spectral content varies, and the room acoustical response varies as well. Exactly what sonic qualities are affected by these effects is a question worth investigating in future studies. What we would expect is a kind of liveliness that you experience in a concert hall when you hear a musician with all the movements and behavior of their instrument. For example, if you listen to a virtual acoustic reality, where the sound of such an instrument is produced by an auralization algorithm, I would expect a loss of liveliness and presence. And that's why I think it is worth thinking about how these effects can also be simulated in virtual acoustic realities when we simulate musical instruments playing.

 

Kat Setzer  37:56

Oh, yeah, yeah, totally. You kind of already got into this, but what's the potential impact of this expanded understanding of directivity? 

 

Stefan Weinzierl  38:04

Well, I see different consequences. One is that most of us are used to images of the directivity of musical instruments. But all of these images-- which show, for example, that a singer has their main direction of radiation a little below the horizontal axis-- are only valid for a certain note and a certain position of the sound source, whereas in reality, all of this varies continuously during a musical or spoken performance. That's something we should be aware of. As I said, if we want to create virtual realities with sound sources, we should find a way to really render these variations. And also in room acoustical design, if we think about concert halls, I think we should have a clear understanding of how an orchestra or a soloist radiates into the room, because that might change the way we design, for example, the orientation of reflecting surfaces around the stage. 

 

Kat Setzer  38:18

Okay, okay. Yeah, it sounds like there are quite a few impacts from understanding this little bit of change in how a sound moves through the space, essentially. 

 

Stefan Weinzierl  39:28

Absolutely.

 

Kat Setzer  39:29

Yeah. You know, I mentioned this in our preliminary call, it never really occurred to me how directivity of a sound can vary, even with a particular note played, although it makes sense now that you've explained it, and it's so interesting how much of an impact it has. Thank you again for sharing your research with us. I really appreciate it. 

 

Stefan Weinzierl  39:45

Thanks, Kat. 

 

Kat Setzer  39:46

And have a good day.

 

Stefan Weinzierl  39:48

Thanks for the opportunity. Have a good day, too.

 

Kat Setzer  39:51

Most folks probably know a bit about how pianos work: You press a key, which causes a hammer to hit a string, which results in a musical note... but there's a lot more that affects the sound that comes out. I'm talking with Pablo Miranda Valiente about his attempts to model the influence of the piano soundboard, which he wrote about in the recent JASA article, "Influence of soundboard modelling approaches on piano string vibration." Thanks for taking the time to speak with me today, Pablo. How are you?

 

Pablo Miranda Valiente  40:15

Fine, thanks. Thanks for having me.

 

Kat Setzer  40:17

So first, just tell us a bit about your research background.

 

Pablo Miranda Valiente  40:21

I started getting involved in piano acoustics in 2014-2015, while I was doing my master's degree at the Institute of Sound and Vibration Research at the University of Southampton. During my MSc project, I researched the sound radiation of two upright pianos that were made using different woods: one soundboard was made of solid wood, while the other was made of laminated wood. The results were not particularly conclusive, but for me the experience was very positive and unforgettable, and I always wanted to do more research. After my MSc, I spent some years in industry back in Chile. In 2020, I was awarded a scholarship from the National Agency of Research and Development, and I started my PhD back at the ISVR in 2021 to continue my research in piano acoustics. So far, I've been mostly focused on understanding which variables or assumptions can play a role when modeling the overall sound generated by a piano. I'm currently in the final year of my PhD, and during both my MSc and my PhD, I've been super lucky to have been supervised by my coauthors on this paper, Dr. Giacomo Squicciarini and Professor David Thompson. Their guidance and patience have made my research and my stay here very enjoyable.

 

Kat Setzer  41:50

So I imagine, like I said, most folks probably are familiar with the basic setup of a piano. And your research has to do with the strings and the soundboard that they connect to. Can you explain why understanding these components of the instrument is important to understanding the sound it produces?

 

Pablo Miranda Valiente  42:05

Well, the sound that we hear when we play the piano is the result of the complex interaction between the hammer, the strings, and the soundboard. The way the hammer interacts with the string, for instance, has been a matter of interest for decades. The basics of that interaction-- where the hammer hits the string, the physical characteristics of the hammer, how it compresses-- determine the manner in which the string is excited and the frequency content of the resulting vibration transmitted to the soundboard. Then the interaction between the string and the bridge of the instrument generates vibration in other directions, which influences the decay of the sound that we hear from the piano. The string itself presents complicated, nonlinear phenomena that enhance, for instance, the longitudinal vibration at high frequencies. All these different phenomena are related; they are transmitted to the soundboard by means of a force, and the soundboard then contributes to the overall sound, providing the lower-frequency content. 

 

Kat Setzer  43:10

Okay, got it. So how does the bridge where the string and soundboard connect impact the understanding of the relationship between string vibration and soundboard vibration? 

 

Pablo Miranda Valiente  43:10

The bridge is the physical connection between the string and the soundboard. When the string is connected to the bridge, the string is divided into two parts: the speaking length, which corresponds to the fundamental frequency of the string, and the duplex scaling segment, which in some pianos is muted in the lower ranges and left to vibrate in the others. 

 

Kat Setzer  43:43

Oh, interesting. 

 

Pablo Miranda Valiente  43:45

Here, you can hear the effect of the duplex scaling segment on the acceleration predicted by one of the models I used, and how it can be muted using a distributed set of springs and dampers. In the first audio, the duplex scaling segment is not muted, so this segment of the string is able to vibrate freely. [piano note plays]

 

Pablo Miranda Valiente  44:15

In the second audio file, we add a distributed set of dampers-- imagine wrapping a piece of cloth along the string-- with a damping coefficient of one. [piano note plays]

 

Pablo Miranda Valiente  44:36

The third audio is the same process, but using a coefficient of two. [piano note plays]

 

Pablo Miranda Valiente  44:45

And the final audio is the muted string using a coefficient of four. [piano note plays]

 

Pablo Miranda Valiente  44:57

So it’s up to you to decide which one is more appropriate. Actually, for us, as long as the resonance, or the ringing effect, is sufficiently diminished, that’s enough. So, apart from dividing the string and connecting it to the soundboard, the bridge is responsible for coupling the strings in the unison-- remember that each note has one, two, or three strings, the unison group, which are supposed to have the same frequency. And the bridge can also connect these strings with other strings that are very far away, which enhances the effect of sympathetic vibration in the piano. Apart from that, like I said before, it also produces the coupling between the string’s directions of vibration: not only the vibration perpendicular to the soundboard, but also the longitudinal vibration and the other transverse vibration, which is parallel to the soundboard. All of this contributes to a longer decay of the piano string and to the richness of the tone.
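The muting effect demonstrated in the audio examples can be sketched numerically. The toy finite-difference model below (my own illustration, not the paper's model) applies a distributed damper over the last quarter of an ideal string, standing in for the cloth on the duplex segment; raising the damping coefficient shortens the ring:

```python
import numpy as np

def string_decay(damping_coeff, n_steps=2000):
    """Leapfrog finite-difference sketch of an ideal string with a
    distributed damper over its last quarter (a stand-in for muting
    the duplex segment; illustrative values throughout). Returns the
    RMS displacement after n_steps, which shrinks as damping grows."""
    n, c = 100, 1.0                            # grid points, wave speed
    dx = 1.0 / n
    dt = 0.5 * dx / c                          # CFL-stable time step
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.sin(np.pi * x)                      # initial string shape
    y_prev = y.copy()                          # zero initial velocity
    sigma = np.zeros(n + 1)
    sigma[3 * n // 4:] = damping_coeff         # damper on last quarter
    for _ in range(n_steps):
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2
        a = sigma * dt / 2.0                   # centered damping term
        y_next = (2 * y - (1 - a) * y_prev + (c * dt)**2 * lap) / (1 + a)
        y_next[0] = y_next[-1] = 0.0           # pinned ends
        y_prev, y = y, y_next
    return float(np.sqrt(np.mean(y**2)))
```

Running it with coefficients 0, 1, and 4 reproduces the qualitative trend in the audio clips: the higher the coefficient, the faster the segment's vibration dies away.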

 

Kat Setzer  45:58

So that's interesting. So the vibration of the string, because of the bridge, ends up affecting other strings and aspects of the piano?

 

Pablo Miranda Valiente  46:05

Yes, yes, exactly. 

 

Kat Setzer  46:08

Okay, very cool. So describe some of the previous models of string and soundboard vibrations. What are their strengths and limitations?

 

Pablo Miranda Valiente  46:15

There exist several different models for piano strings, from simple models, such as the ones I used in this study, to geometrically exact models that consider string nonlinearities and many details of the string. These models are normally solved using numerical schemes. The effect of damping, which is related to the way the tone-- the vibration-- decays, is introduced by the connection with the soundboard, and in the simpler models it is sometimes represented by extra terms in the string equation. So you account for the damping effect of the soundboard on the string by means of external damping terms. In piano models, it's most common to consider the first transverse motion of a simply supported string, perpendicular to the soundboard, and to use the vertical component of the string tension as the force transmitted to the soundboard. This, however, implies that there is no influence of the soundboard on the string. Different authors have proposed models that include nonlinear phenomena, but that is out of the scope of this paper; in our models there is only transverse motion. And in the models that include the soundboard, the vibration is coupled. 

 

In the case of the soundboard, there have been numerous approaches. For instance, the soundboard can be modeled analytically as a plate, with complex finite element models, or with linear filters-- you compute the vibration of the string and then apply a filter to get the influence of the soundboard. Each assumption has different limitations, and the models also differ in the directions of vibration they include: some include one direction, others two, and so on. Then, for computing the sound generation, typical models are used, for instance the Rayleigh integral or more complex numerical methods.
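For readers unfamiliar with the Rayleigh integral mentioned above, a discretized version is short enough to sketch here (my own illustration, not the paper's code): the radiated pressure is a sum over surface elements of the plate's normal velocity, each weighted by a spherical-wave term.

```python
import numpy as np

def rayleigh_pressure(v, points, areas, obs, freq, rho=1.21, c=343.0):
    """Discrete Rayleigh integral for a baffled plate:
    p(obs) = (j*omega*rho / 2*pi) * sum_i v_i * exp(-j*k*R_i)/R_i * dS_i,
    where v are normal velocities at surface `points` with element
    `areas`, and R_i is the distance from each element to `obs`."""
    omega = 2.0 * np.pi * freq
    k = omega / c
    R = np.linalg.norm(points - obs, axis=1)
    return (1j * omega * rho / (2.0 * np.pi)
            * np.sum(v * np.exp(-1j * k * R) / R * areas))

# Sanity check: a single small element behaves like a baffled monopole
# of strength Q = v*dS, with |p| = omega * rho * Q / (2*pi*R).
p = rayleigh_pressure(v=np.array([0.01]),
                      points=np.array([[0.0, 0.0, 0.0]]),
                      areas=np.array([1e-4]),
                      obs=np.array([0.0, 0.0, 1.0]),
                      freq=440.0)
```

In practice the velocities `v` would come from the string-soundboard model; the values above are placeholders for the single-element check.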

 

Kat Setzer  48:12

Okay, okay. So what was the goal of the model you developed?

 

Pablo Miranda Valiente  48:16

My main goal was to understand how different modeling assumptions for the soundboard can affect the interaction with the string, and how this influences the vibration transmitted to the soundboard. My PhD as a whole is about understanding which variables play a role, and what the effects of the different assumptions taken in the model are. In this case, it was focused on how the modeling of the soundboard affects the string. 

 

Kat Setzer  48:47

Okay. So you considered four different models in this. Can you describe these for us?

 

Pablo Miranda Valiente  48:53

Yes. From the most complex to the simplest: first, I used a finite element soundboard model, which is a complete model of the soundboard considering all the complexities of the geometry. Then, based on this finite element model, I used a reduced model, which contains only a few modal properties of that soundboard. Next, I used an even more simplified approach, a Kelvin-Voigt model, which only represents the main trend, or level, of the soundboard response. And finally, I used a string which is simply supported, with no soundboard at all-- just a string pinned at its ends. Later, I modified this simply supported string to include the damping introduced by the connection with an FE soundboard. So those are the modeling approaches I took. 

 

Kat Setzer  49:44

Okay, okay. Okay. Yeah. So did your models consider the degradation of the soundboard with age and environment at all? And if so, how?

 

Pablo Miranda Valiente  49:52

Well, it is very complicated to model the degradation and structural damage of a wooden soundboard in finite elements, as we would first need detailed scanning techniques to identify them, which would imply a huge amount of detail. So the answer is basically no, but we were keen to use a piano soundboard that is very well maintained and stored in ideal conditions. The measurements on this piano and the development of the FE model were challenging, but I think we did an okay job.

 

Kat Setzer  50:29

Okay. Yeah, it makes sense that a new piano or a new soundboard would be much easier to model than one that's, you know, warped over time. 

 

Pablo Miranda Valiente  50:37

Yes. 

 

Kat Setzer  50:39

So was there anything in the models to account for the types of wood or other materials used for creating the soundboard? 

 

Pablo Miranda Valiente  50:43

Well, in our models-- even our complex FE model-- we model the soundboard as an orthotropic material, since wood has different material properties depending on the direction of the grain. On the other hand, we model the bridge and the ribs of the instrument as isotropic, so with a single set of properties. Those were the main considerations we had regarding wood. We did not consider other parts of the instrument; we just considered the soundboard, the bridges, and the ribs, which are the stiffeners. 

 

Kat Setzer  51:24

Okay. So what aspects of the string and soundboard dynamics did you consider when validating the models, and how did you test these different variables? 

 

Pablo Miranda Valiente  51:32

Well, for the string models, I used classical stiff-string equations that can account for the inharmonicity of the partials, which occurs in reality. The damping of the string, which is related to the decay of the vibration, was estimated from physical measurements of real strings. The main starting point for my models is the fact that the soundboard mobility is orders of magnitude smaller than that of the string: the string can move much more easily than a huge piece of wood like the soundboard. So, when the string is connected to the soundboard-- in this case, using the different models-- the frequency response can show where the soundboard plays a role in the overall response, and in which zones the response is dominated by the string alone. We saw that the soundboard can be influential only in the range where there is not a very significant mobility mismatch, that is, where the string and the soundboard are comparably flexible. So we realized that in that region the soundboard can influence the response. The main validation of the soundboard models is made through comparison with the FE model, which was tuned and developed to match the measured frequency response of the soundboard. So basically, we started from the basic physics between the strings and the soundboard, then developed an FE model based on measurements we took, and went from there.
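The mobility mismatch described here can be illustrated with two textbook formulas: the characteristic mobility of an ideal string, 1/sqrt(T*mu), and the driving-point mobility of an infinite thin plate, 1/(8*sqrt(B*m'')). The numbers below are rough, made-up values for a mid-range piano string and a spruce-like board, not the paper's measured data:

```python
import math

def string_mobility(tension, mass_per_length):
    """Characteristic mobility of an ideal string: the inverse of its
    wave impedance sqrt(T * mu)."""
    return 1.0 / math.sqrt(tension * mass_per_length)

def plate_mobility(E, h, rho, nu=0.3):
    """Driving-point mobility of an infinite thin plate,
    Y = 1 / (8 * sqrt(B * m'')), with bending stiffness
    B = E*h^3 / (12*(1 - nu^2)) and mass per area m'' = rho*h.
    Real-valued and frequency independent."""
    B = E * h**3 / (12.0 * (1.0 - nu**2))
    return 1.0 / (8.0 * math.sqrt(B * rho * h))

# Illustrative values: T = 700 N, mu = 6 g/m for the string;
# E = 10 GPa, h = 8 mm, rho = 400 kg/m^3 for the board.
Ys = string_mobility(tension=700.0, mass_per_length=0.006)
Yp = plate_mobility(E=10e9, h=0.008, rho=400.0)
print(Ys / Yp)  # roughly a hundredfold mismatch
```

Even with these crude numbers, the string's mobility comes out around two orders of magnitude above the board's, which is why the soundboard only matters where the mismatch narrows.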

 

Kat Setzer  53:16

So how did the models end up performing? 

 

Pablo Miranda Valiente  53:18

The models performed more or less as expected. We were expecting that none of the models that included a soundboard would show very significant differences, because of the level of the mobility: the string is far more mobile than the soundboard. On the other hand, we were expecting the dynamic models-- the models that include the vibration properties of the soundboard-- to show better agreement with each other than with the model that only represents the main trend, the Kelvin-Voigt model. We saw this, but we also saw in our results that this very simple model, which only represents the trend, does not give very dissimilar results. For instance, you can now hear the force transmitted to the soundboard for all the models that contain a soundboard, and you can hear that they sound very similar. [Three piano notes play in succession.]

 

Pablo Miranda Valiente  54:43

So the first audio corresponds to the force transmitted from the string to the FE soundboard, the most complex soundboard model. The second audio is the force transmitted to the reduced-model soundboard, which contains the properties of only five vibration modes of the soundboard. And the third audio is the force transmitted when the soundboard is represented by a spring and a damper, so it only represents the trend of the response-- just the level, not any vibration modes. So yes, it's up to you; to me, they sound very, very similar. And this is a good result, because it shows that we don't need a very complex soundboard model to represent the interaction with the string.

 

Kat Setzer  55:33

Okay, okay. Yeah, I agree. They sound very similar. What is the key takeaway from this study, then?

 

Pablo Miranda Valiente  55:39

The key takeaway is that a complex soundboard model is not really needed to accurately represent the physical phenomena of the string-soundboard coupling and the resulting vibration transmitted to the soundboard. Using simpler models can save us a lot of computational time.

 

Kat Setzer  55:58

Yeah, what are the next steps for your research? 

 

Pablo Miranda Valiente  56:01

Well, during my PhD we plan to write about the extension of the current modeling to a full three-dimensional model, to account for vibration in different directions, also including nonlinear phenomena, and to finalize the model with sound generation. We are also interested in how the different kinds of sympathetic vibration affect the vibration and the subsequent piano sound-- for instance, the sympathetic vibration that can occur with the use of the una corda pedal-- and whether the vibration of the ringing segment can perhaps affect the overall tone to some extent. And of course there is the use of the sustain pedal. We're considering studying these phenomena, and hopefully we'll find something very interesting. 

 

Kat Setzer  56:53

Yeah, that does sound potentially very interesting, and I hope to get to see some of those articles. It is really funny how sometimes a simpler model can ultimately be better. Thank you again for your insight into how sound is created by this much-loved musical instrument, and I wish you the best on your future research.

 

Pablo Miranda Valiente  57:10

Thank you very much. Thank you for this opportunity and for this interview. 

 

Kat Setzer  57:13

You're welcome.

 

Kat Setzer  57:17

Thank you for tuning into Across Acoustics. If you'd like to hear more interviews from our authors about their research, please subscribe and find us on your preferred podcast platform.