Across Acoustics

Student Paper Competition: A Small Ship-Like Structure, Underwater Micronavigation, and Rotorcraft Noise

ASA Publications' Office

This episode, we talk to a few of the latest round of POMA Student Paper Competition winners from the 186th ASA Meeting in Ottawa about their exciting research endeavors:
- Using a small-scale ship-like structure to test noise mitigation techniques for shipping noise
- Modeling spatial coherence in underwater sonar
- Understanding the noise created by rotorcraft

Make sure to keep an ear out for our next episode, which will include interviews with the remaining two winners!

Associated papers:
- Marc-André Guy, Kamal Kesour, Olivier Robin, Stéphane Gagnon, Julien St-Jacques, Mathis Vulliez, Raphael Tremblay, Jean-Christophe Gauthier Marquis. "Effectiveness of standard mitigation technologies at reducing ships’ machinery noise using a small-scale ship-like structure." Proc. Mtgs. Acoust. 54, 070001 (2024). https://doi.org/10.1121/2.0001912

- Kyle S. Dalton, Thomas E. Blanford, Daniel C. Brown. "Bistatic spatial coherence for micronavigation of a downward-looking synthetic aperture sonar." Proc. Mtgs. Acoust. 54, 070002 (2024). https://doi.org/10.1121/2.0001924

- Ze Feng Gan, Vitor Tumelero Valente, Kenneth Steven Brentner, Eric Greenwood. "Time-varying broadband noise of multirotor aircraft." Proc. Mtgs. Acoust. 54, 040006 (2024). https://doi.org/10.1121/2.0001946

Learn more about entering the POMA Student Paper Competition for the Fall 2024 virtual meeting.

Read more from Proceedings of Meetings on Acoustics (POMA).

Learn more about Acoustical Society of America Publications.

 
Music Credit: Min 2019 by minwbu from Pixabay. https://pixabay.com/





Kat Setzer  00:06

Welcome to Across Acoustics, the official podcast of the Acoustical Society of America's publications office. On this podcast, we will highlight research from our four publications. I'm your host, Kat Setzer, Editorial Associate for the ASA. 

 

Kat Setzer  00:24

It's time for the next round of POMA Student Paper Competition winners, this time from the Ottawa meeting. First up, I'm talking to Marc-Andre Guy about his article, "Effectiveness of standard mitigation technologies at reducing ships' machinery noise using a small-scale ship-like structure." Congrats on the award and thanks for taking the time to speak with me today. How are you?

 

Marc-Andre Guy  00:42

Yeah, thank you. How are you?

 

Kat Setzer  00:44

I'm good. So first, tell us a bit about your research background.

 

Marc-Andre Guy  00:49

Yeah. So I'm a master's student in mechanical engineering at the University of Sherbrooke, in the province of Quebec, and I'm working on underwater radiated noise coming from ships. So my main research goal is to reduce ships' noise to improve the quality of life of marine mammals.

 

Kat Setzer  01:07

Okay, well, that sounds pretty important. So your research has to do with reducing noise created by ships. Why is this noise of particular concern?

 

Marc-Andre Guy  01:15

Well, first of all, we need to know that more than 80%, some sources will say 90%, of all goods on the planet are transported by ships. And so the more goods we consume and the more the economy grows, well, the more ships we have on our oceans and rivers. And this is correlated with an increase in ambient noise in the oceans. This was well illustrated during the COVID pandemic: since the economy was slowing down, we observed a reduction in noise in the oceans. So the two quantities are well correlated, and it is well documented in the literature. This increase in noise is quite detrimental and is considered noise pollution for marine mammals, because they rely on sound to communicate, to navigate, and to locate prey and predators. So when a ship is passing by, it masks their ability to do that, and it can also create significant physiological stress and sometimes even death, because marine mammals are sometimes hit by ships. So there is a need to reduce this noise. Yeah.

 

Kat Setzer  02:23

Yeah, yeah. I don't think I even realized exactly how much shipping is involved in our commerce. 

 

Marc-Andre Guy  02:28

Yeah, yeah. It's also quite invisible. We don't always realize it, but most of the shirts that we wear, the shoes that we put on our feet, the gasoline that we put in our cars, it does not travel by plane or by road; it comes by ship. So it is quite important, and most of the economy relies on that. And yeah, it is quite invisible. We don't always realize it.

 

Kat Setzer  02:51

So what are the current obstacles engineers face when trying to reduce machinery noise in ships?

 

Marc-Andre Guy  02:56

The main obstacle is that to reduce this noise, well, you need to install several noise-reduction technologies on ships, and implementing those technologies requires quite massive investments. It is hundreds of thousands of dollars, sometimes millions of dollars, to install those technologies, and the sooner the better. The design phase is the best time to think about the acoustics; when the ship is already constructed and it is a retrofit, well, costs will skyrocket. So one of the main challenges in reducing machinery noise is actually quantifying the performance of those technologies. Because let's say that you are a ship owner and you want to install a million-dollar solution on your ship. Well, you need certain guarantees and incentives to do so, and right now, for most standard technologies, which have been installed on ships for more than 50 years, we don't really know their concrete performance at reducing underwater radiated noise. We know that those technologies work well, for example, inside a machinery room, but in the water, we are pretty much clueless about their effectiveness. So if we don't know how many dB we can gain with a particular technology, well, it's not very likely that it will be installed on ships.

 

Marc-Andre Guy  04:17

And why is that? It is because quantifying those technologies is quite hard: it is complex to go on a ship, to instrument the ship, to deploy hydrophones in the water. It is a big engineering challenge. It is also hard to isolate the effectiveness of one technology on a single machine, because when you go inside an engine room, you have all the machines emitting noise, and you cannot shut down one machine to do a measurement and isolate its contribution; even when ships are waiting in port, all the machines are still running, otherwise there is a blackout on the ship. And isolating the contribution of a single ship in the water is another challenge, because when you deploy hydrophones, you measure, yes, the contribution of the ship nearby, but also the contribution of all the other ships some kilometers away, because sound travels very well over long distances in water. So it is quite hard to isolate and quantify the performance of those technologies at reducing machinery noise.

 

Kat Setzer  05:26

Okay, so lots and lots of obstacles it sounds like. 

 

Marc-Andre Guy  05:29

Yeah. 

 

Kat Setzer  05:31

So what was the goal of this study?

 

Marc-Andre Guy  05:34

Yeah. So now that you know how much of a challenge it is to do on-ship testing, the idea of my research project was to develop a smaller-scale platform, to do testing in a controlled environment and at a smaller scale, where it is easier to control all our testing parameters. We can also test several technologies in far less time than actually going on a ship, and it is cheaper to do this type of testing. When I started my master's degree, the project I'm working on was already going; it grew out of a partnership between the University of Sherbrooke and Innovation Maritime in Rimouski. They had worked on this project for more than a year before I arrived, doing all the design, the CAD files, and the machining, so they built the actual testing platform. When I arrived in Rimouski to do my testing, it was ready to be instrumented and tested. This platform is a small-scale section of an actual ship, so it can reproduce the structural dynamic behavior observed in an engine room. We use the same thicknesses and the same materials that are present inside an engine room, and we can reproduce vibroacoustic phenomena inside the platform using a speaker and a vibration source. And so we have a small-scale system that we can deploy in a water basin at Rimouski, a controlled environment where we can do testing and quantify the performance of several technologies.

 

Kat Setzer  07:08

Oh, okay, this platform sounds really cool. So tell us about your experimental setup.

 

Marc-Andre Guy  07:14

Yeah, the experimental setup is a platform that we can deploy in the basin. On this platform, we can fix noise sources and vibration sources to reproduce specific signals emitted by machinery. We also have the opportunity to put in several sensors, so microphones, accelerometers, force sensors, to measure the response of the structure, and inside the basin we deployed hydrophones to measure what noise is coming from the platform. And we can test several technologies: we can install a particular technology, do some testing, modify some parameters, vary the mass added to the platform, and compare the measurements with and without the technology to quantify its performance. So we have a big toy to play with.

 

Kat Setzer  08:03

That sounds like fun.

 

Marc-Andre Guy  08:05

Yeah. 

 

Kat Setzer  08:05

So you're concerned with the acoustic response of the water basin where you performed the experiments. Why? 

 

Marc-Andre Guy  08:09

Yeah, because the basin is not anechoic. It is made of concrete, so it is highly reverberant; there are a lot of sound reflections inside the basin. We wanted to understand its acoustic response because above a certain frequency, we can assume that sound pressure levels inside the basin are uniform, so the pressure is independent of spatial position. But below a specific frequency, a critical frequency, we observe modes: an alternation of pressure minima and pressure maxima depending on frequency. So we wanted to understand this phenomenon and find that frequency, to ensure that our testing was not biased by these effects. Because let's say that you do some testing without a technology at a certain position. You put your hydrophones, let's say, at the center of the basin, and you do measurements. Next, you install a noise-reduction technology on the platform and repeat your measurements, but your hydrophones are not at exactly the same position. Let's say that you measure a 10 dB reduction. Well, how do you know if the reduction is actually due to the technology, or to the fact that one time the hydrophones were at a pressure maximum and the other time at a pressure minimum? This was the kind of question we asked ourselves and wanted to clarify to ensure that our measurements were not biased. And we actually found that this frequency was about 600 hertz. Considering that machinery noise mainly sits in the low-frequency part of the spectrum, below this frequency, we were quite likely to observe modal behavior inside our basin. This told us that we needed a precise hydrophone positioning system to ensure repeatability in our measurements, so that if we did testing, let's say, in January, and we repeated those tests in April, we were sure that we were at the exact same position inside the basin, and that this parameter did not vary through our experiments. So this is why we needed to characterize the acoustic behavior of the basin before putting the platform in it.
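To make the modal behavior Guy describes concrete, here is a minimal sketch of the mode frequencies of an idealized rectangular basin. The dimensions, sound speed, and rigid-wall assumption below are illustrative guesses, not the actual Rimouski basin parameters.

```python
# Minimal sketch: mode frequencies of an idealized rigid-walled rectangular
# water basin. Dimensions and sound speed are assumed for illustration only.
import itertools

C_WATER = 1480.0              # speed of sound in water, m/s (assumed)
LX, LY, LZ = 10.0, 5.0, 3.0   # hypothetical basin dimensions, m

def mode_frequency(nx, ny, nz):
    """Frequency of the (nx, ny, nz) mode of a rigid-walled rectangular cavity."""
    return (C_WATER / 2.0) * ((nx / LX) ** 2 + (ny / LY) ** 2 + (nz / LZ) ** 2) ** 0.5

# Below a few hundred hertz the modes are sparse, so sound pressure depends
# strongly on position -- hence the need for precise hydrophone placement.
modes = sorted(
    (mode_frequency(nx, ny, nz), (nx, ny, nz))
    for nx, ny, nz in itertools.product(range(4), repeat=3)
    if (nx, ny, nz) != (0, 0, 0)
)
for f, idx in modes[:8]:
    print(f"mode {idx}: {f:6.1f} Hz")
```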

 

Kat Setzer  08:22

So what technologies did you test with your setup?

 

Marc-Andre Guy  10:01

Yeah, so the main goal of the project was to test standard technologies, technologies already used on ships. We tested several; I will limit myself to two today, the technologies I presented in my POMA paper. First of all, we tested elastic mounts. Those are dampers used to isolate and decouple the machine from the structure. Normally, when you have a machine inside an engine room, if it is bolted directly onto the hull, well, you now have a big, massive speaker emitting sound into our oceans. So the goal is to decouple the machines from the hull, and so we install elastic mounts between the machines and the structure. This was the first technology we wanted to test, to see the effect on vibrations.

 

Marc-Andre Guy  11:14

And the second technology that we tested was mineral wool. This technology is mainly used to reduce the airborne contribution, and you can see it used to insulate studios, for example, or to treat the acoustics of places where music is played. So those two technologies were what we installed on our platform, to see the effect on both the airborne contribution and the structure-borne transmission path.

 

Kat Setzer  11:45

Okay, very cool. How did they end up doing at reducing ship machinery noise? 

 

Marc-Andre Guy  11:50

Yeah, for the elastic mounts, we obtained up to 20 dB reductions, which is quite massive. And those were URN reductions, underwater radiated noise reductions, so the reduction we actually observed inside the basin, which is what we are interested in in the first place. We also observed that in the lower-frequency part of the spectrum, we amplified noise. So there, installing elastic mounts did not solve our problem but rather amplified it, because we were near the resonance zone of this technology. This tells us that when you use this technology, you need to ensure that the resonant frequency of your system is far from the lowest frequency at which your machine will operate; otherwise, you will amplify noise rather than reduce it. But when that is the case, we observed up to 20 dB reductions, which is quite significant.
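As a rough illustration of why elastic mounts amplify near resonance and only isolate well above it, here is a minimal single-degree-of-freedom transmissibility sketch; the mount natural frequency and damping ratio are assumed values, not parameters measured in the study.

```python
# Minimal sketch: force transmissibility of a damped spring-mass isolator,
# the textbook model behind elastic mounts. Parameters are assumed.
import numpy as np

f_n = 10.0   # mount natural frequency, Hz (assumed)
zeta = 0.05  # damping ratio (assumed)

def transmissibility(f):
    """Ratio of transmitted to applied force amplitude at frequency f."""
    r = f / f_n
    return np.sqrt((1 + (2 * zeta * r) ** 2) / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

for f in [5.0, 10.0, 14.1, 30.0, 100.0]:
    print(f"{f:6.1f} Hz: {20 * np.log10(transmissibility(f)):+6.1f} dB")
# Output shows amplification near f_n and isolation only above sqrt(2)*f_n,
# which is why the mount resonance must sit well below the machine's
# lowest operating frequency.
```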

 

Marc-Andre Guy  12:45

And for the second technology, mineral wool, we obtained up to 15 dB reductions inside the basin, and we also obtained good reductions inside the platform. This was quite interesting, because the project's end goal is to reduce noise and improve the quality of life of marine mammals. But let's not forget that there are also people working inside an engine room all day long, wearing earplugs, who cannot hear themselves because noise levels are too high. So if we can reduce noise both inside the platform, so inside an engine room, and also in the water, well, installing this technology has the potential for a double benefit. So this was quite interesting.

 

Marc-Andre Guy  13:24

The main challenge with this technology was that in the lower-frequency part of the spectrum, below about 160 hertz, performance was not very good. This is explained by the thickness of the material we used, because to be effective, you need a thickness of about a quarter of the wavelength that you want to absorb. So let's say that you want to absorb 100 hertz. Well, in air, the wavelength is about 3.4 meters, so you would need to install mineral wool about 85 centimeters thick to insulate a 100-hertz frequency. Imagine installing that inside an engine room, right? It is way too costly, and it would take way too much space, so it is impossible to do. That's why we are more limited in the lower-frequency part of the spectrum with this technology.
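The quarter-wavelength rule is easy to check numerically; this minimal sketch assumes a sound speed of 343 m/s in air.

```python
# Minimal sketch: quarter-wavelength rule of thumb for porous absorbers.
C_AIR = 343.0  # speed of sound in air, m/s (assumed standard conditions)

def absorber_thickness(f_hz):
    """Approximate absorber thickness (m) needed to be effective at f_hz."""
    return (C_AIR / f_hz) / 4.0

for f in [100.0, 160.0, 500.0, 1000.0]:
    print(f"{f:6.0f} Hz -> ~{absorber_thickness(f) * 100:4.0f} cm of mineral wool")
# 100 Hz calls for roughly 86 cm of material, impractical in an engine room;
# by 1000 Hz, less than 10 cm suffices, matching the low-frequency limit above.
```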

 

Kat Setzer  14:12

Okay, okay. So what was the most exciting, interesting or surprising aspect of this research for you?

 

Marc-Andre Guy  14:19

Yeah, I would say the most exciting and gratifying part of the project was working in a team. As I mentioned, before I arrived at the university, a team had already been working on the design of the platform for more than a year. So when I arrived, I had the opportunity to work on a platform that was already machined, to instrument the platform, do the testing, and analyze all the data. I had the chance to work with several people; we had a big multidisciplinary team. I worked with electrical and mechanical engineers, technicians, and also physicists. By putting all our expertise together, we contributed to the project, realized all our objectives, and answered several questions that we set out to answer with this project. And we had a lot of fun together. So I would say that was the most exciting part of the project.

 

Kat Setzer  15:11

Yeah, that does sound like a lot of fun and very satisfying, like you said. So what are the next steps in this research?

 

Marc-Andre Guy  15:18

Yeah, so for the next steps, we want to build a numerical model of the platform. We want to model the actual structure of the platform and the basin environment, conduct numerical simulations, and compare the results to the experimental data that we measured with hydrophones and all our sensors. With a calibrated model, we could vary several parameters of the platform and see the effect on underwater radiated noise.

 

Marc-Andre Guy  15:47

We also now have a big tool that we can play with and use to test other technologies. In this particular research project, I tested standard technologies such as mineral wool and elastic mounts, but we also have another project running that will test metamaterials. Those are structural materials designed to absorb specific frequencies. For machinery noise, we have a lot of tonalities: when a machine is running at a fixed RPM, for example, it emits specific frequencies. So if you know at what frequencies your machine is emitting sound, you can target those frequencies and design a system to absorb specifically those frequencies. We are currently working on a project to design such technologies, and we now have the tool and the platform to test their effectiveness. So yeah, the developed platform and methodology can be reused in the future to test other technologies.

 

Marc-Andre Guy  16:47

And another question we want to answer is, which transmission path contributes the most to URN? In short, we want to answer the question: is it vibrations or airborne noise that contributes more to URN, so that we can better select appropriate technologies for reducing underwater radiated noise.

 

Kat Setzer  17:05

Okay, yeah, it sounds like there are lots of opportunities for research, and also hopefully a very big impact from your research.

 

Marc-Andre Guy  17:13

Yeah.

 

Kat Setzer  17:13

I have to say, this research platform and the, I guess, the "small-scale ship-like structure," as you say in your title, just sounds really cool, and hopefully it will help with the development and, more importantly, the actual implementation of these noise-mitigation strategies you discussed. I wish you the best of luck in your research. And once again, congratulations.

 

Marc-Andre Guy  17:31

Yeah, thank you. I'd also like to thank our financial partners: Transport Canada through the Quiet Vessel Initiative, Davie Shipyard, and Mitacs, who supported the project, and Mechanum Inc. for providing some equipment for testing mineral wool. So thank you.

 

Kat Setzer  18:15

Our next Student Paper Competition winner we've actually had on the podcast before, after the meeting in Denver, Colorado. Kyle Dalton's latest award is for his paper, "Bistatic spatial coherence for micronavigation of a downward-looking synthetic aperture sonar." Welcome back, Kyle, and congratulations. How are you?

 

Kyle Dalton  18:33

I'm good. Thanks again for having me.

 

Kat Setzer  18:36

You're welcome. So first, just tell us about your research background. 

 

Kyle Dalton  18:39

Yeah. So I am going into the fifth year of my PhD in the acoustics program at Penn State. My research is in underwater acoustics, primarily with a type of sonar called synthetic aperture sonar, or SAS. SAS systems are generally used to make very high-resolution images of things that don't move, so they're useful for very detailed surveys of the sea floor, or exploring shipwrecks, or looking at smaller stationary objects in great detail. More specifically, the SAS I use is built to look for unexploded ordnance at old military testing and training sites so they can be cleaned up. My earlier research looked at modeling objects that exhibit some of the same acoustic phenomena as unexploded ordnance, and then coming up with a technique to form higher-quality imagery of those objects. I also did some machine learning work with some collaborators at Penn State, looking at how we can better classify unexploded ordnance in images made from synthetic aperture sonar data. And then my current research looks at some of the underlying physics that goes into getting a highly accurate positional estimate for your sonar when you're trying to make an image using a SAS. Knowing exactly where the sonar is is incredibly important.

 

Kat Setzer  19:48

So what is coherence and how is it used in underwater acoustics?

 

Kyle Dalton  19:52

Generally speaking, coherence is a measure of similarity, usually with regards to the phase or fluctuations of the signals you're comparing. You might hear coherent used in coherent addition, or coherent processing, where you're aligning the phase of your signals in a beneficial way. Some people look at spectral coherence, which is how similar the frequency content of two signals is. Or folks will talk about a coherent reflection, effectively an echo that is very similar to the signal that originally was sent out. 

 

Kyle Dalton  20:21

I'm specifically interested in spatial coherence. If we have two observations of some phenomenon made at two different points in space, spatial coherence looks at how similar those two measurements are. You can think of it like going to a sporting event with some friends. If you and one friend have seats right next to each other, then your views of the game will probably be very similar; they're highly coherent. But maybe your other friend got a seat in the upper deck, way on the other side of the stadium, and even though you're watching the same game, their view will look very different from yours. So coherence gets used in underwater acoustics in so many ways. Looking at coherence can be really useful if you want to study how the underwater environment changes, or if you want to see if a specific underwater scene has changed since the last time you looked at it. Coherence-based techniques have been used for tracking icebergs and measuring currents. It also comes into play with synthetic aperture imaging, which is effectively a coherent combination of many observations of a scene to make one high-resolution picture of that scene. And then, more related to what I'm doing now, you can do coherence-based micronavigation to help figure out where your sonar is.

 

Kat Setzer  21:31

Okay. So what is micronavigation, and what does it have to do with spatial coherence?

 

Kyle Dalton  21:36

So micronavigation is a process where you use a sonar, usually one that's not specifically built for navigation, to get a really precise estimate of your location relative to a previous location. The positioning tools that normally come to mind for this sort of task are accelerometers and gyroscopes, and those are great, but they aren't always accurate enough. If the conditions are right, with micronavigation it's possible to get a positional estimate with millimeter-scale accuracy. And micronavigation is really built on coherence, specifically the coherence of a sequence of pings. So let's say our sonar sends out a ping to observe the sea floor, and then, from exactly the same location, we send out another ping to observe the sea floor again. Our two observations should look very, very similar; they should be highly coherent, because we're looking at exactly the same section of the sea floor from exactly the same point in space both times. But what if we were to move some in between pings? Well, now our two observations are at different points in space, and because of this spatial offset, the observations will be less coherent. So if we have an accurate model for how the spatial coherence changes as the distance between our observations changes, we can start to figure out how far we've moved.
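The core idea, comparing two pings and reading off the offset where they best agree, can be sketched in a few lines. This toy one-dimensional example, with a made-up reflectivity sequence and sample shift, is only an illustration of the principle, not the paper's algorithm.

```python
# Toy sketch of coherence-based displacement estimation between two pings.
import numpy as np

rng = np.random.default_rng(0)
seafloor = rng.standard_normal(2048)         # toy seafloor reflectivity
shift = 7                                    # true inter-ping offset, samples

ping_a = seafloor[500:1500]                  # first observation
ping_b = seafloor[500 + shift:1500 + shift]  # second observation after moving

# The peak of the normalized cross-correlation estimates the displacement,
# and the peak height is the coherence of the two observations.
lags = np.arange(-20, 21)
corr = [np.corrcoef(ping_a, np.roll(ping_b, lag))[0, 1] for lag in lags]
best = lags[int(np.argmax(corr))]
print(f"estimated shift: {best} samples, peak coherence: {max(corr):.2f}")
```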

 

Kat Setzer  22:51

Okay, got it. Got it. So how is spatial coherence typically modeled, and what are the challenges with these methods?

 

Kyle Dalton  22:58

So there's a classic quote: "All models are wrong, but some are useful." And I think that's especially true in spatial coherence modeling. There are quite a few models out there, all of which make their own simplifications and assumptions based on the application at hand. The model I'm using is built around the van Cittert-Zernike theorem, which was originally developed to study the spatial coherence of light. And the math works out to this really concise but very powerful form that relates the change in spatial coherence to the distribution of source intensity. So one of the first challenges was taking the math that works for light and making it work for sonar. Thankfully for me, there have been acousticians who've laid the foundations for using the van Cittert-Zernike theorem underwater. But then the next challenge was analyzing the assumptions and simplifications of that foundational work and figuring out what parts would work for my project and what parts I needed to redevelop in a more accurate way.
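For readers who want a feel for the theorem, here is a minimal numerical illustration of the van Cittert-Zernike idea: the coherence between two receivers traces out the Fourier transform of the source's intensity distribution. Every parameter here is invented for the demo.

```python
# Toy van Cittert-Zernike demo: spatial coherence across two receivers viewing
# an extended incoherent source. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
wavelength, z = 0.01, 100.0           # wavelength and range to source, m (assumed)
k = 2 * np.pi / wavelength
src_x = np.linspace(-5.0, 5.0, 400)   # uniform 10-m incoherent source

def field_at(x_rcv, phases):
    """Sum of random-phase point sources at a receiver (Fresnel approximation)."""
    r = z + (src_x - x_rcv) ** 2 / (2 * z)
    return np.sum(np.exp(1j * (k * r + phases)))

for dx in [0.0, 0.02, 0.05, 0.1]:     # receiver separations, m
    acc, p1, p2 = 0j, 0.0, 0.0
    for _ in range(300):              # average over source realizations
        phases = rng.uniform(0, 2 * np.pi, src_x.size)
        e1, e2 = field_at(0.0, phases), field_at(dx, phases)
        acc += e1 * np.conj(e2)
        p1 += abs(e1) ** 2
        p2 += abs(e2) ** 2
    print(f"separation {dx:4.2f} m -> |coherence| ~ {abs(acc) / np.sqrt(p1 * p2):.2f}")
# Coherence falls off as the Fourier transform (a sinc) of the uniform source,
# reaching its first zero near dx = wavelength * z / L = 0.1 m here.
```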

 

Kat Setzer  23:55

Okay, so you already started talking about your model a little bit, but what was your goal for your model?

 

Kyle Dalton  24:01

Yeah, my goal was really to adapt and extend the previous work on modeling spatial coherence for the types of sonars that I'm interested in. I work with downward-looking SAS systems that are relatively large, that operate over a large range of frequencies, and that work in very shallow water, usually less than five meters in depth. And my application breaks many of the assumptions that have normally gone into modeling spatial coherence. So this work really aims to undo some of those assumptions and make a model that's more appropriate for my broadband, bistatic, downward-looking sonar.

 

Kat Setzer  24:35

Okay, got it. So how did you develop and verify your model?

 

Kyle Dalton  24:40

Development was sort of this iterative back and forth between reading the existing literature and doing a bunch of math on a whiteboard.

 

Kat Setzer  24:47

Ha.

 

Kyle Dalton  24:48

Yeah, it was figuring out what assumptions are built into the previous work and deciding if I can make that assumption as well, or deciding if I need to carry on with a more complicated form. Another big part of development was taking the more general mathematical form of the model and making sure that I was including the physical mechanisms and the sonar parameters behind each variable. For example, in the POMA, there's an important equation that essentially boils down to one variable, and it looks super simple, until you realize that that one variable is built from many other variables that capture things like the position of the sonar relative to the sea floor, the sonar's beam patterns, the amplitude and the phase of the signal as it propagates, and the roughness and material properties of the sea floor. 

 

Kyle Dalton  25:32

And then to verify, I started by checking that the units of the model were consistent and that they made physical sense. Then I made sure that I could apply the same assumptions and simplifications that other models had made and arrive at the same result as those other models. In the POMA, I showed that, in some simplified test cases, my model matches the output of a previous model and successfully replicates the results of some peer-reviewed papers.

 

Kat Setzer  25:59

Okay, okay, so how well did your model work for shallow-water, downward-looking SAS applications?

 

Kyle Dalton  26:04

So, admittedly, I haven't gotten all the way there just yet.

 

Kat Setzer  26:07

Okay…

 

Kyle Dalton  26:08

Yeah. Baby steps. The POMA just looks at the theory of how the model should work and then gives some initial comparisons to other models and simulations. But this summer, I've been working through some experimental data from an in-air sonar and seeing how it compares with the model and writing up the results. And so far, everything looks promising, but I think it'll probably be up to someone after me to really be the judge of how well this model works when you take it out to the water and apply it to a real downward-looking SAS.

 

Kat Setzer  26:36

Okay, okay. So what are the next steps for this research, whether for you or for the next person?

 

Kyle Dalton  26:42

Yeah, I've talked a lot about removing assumptions, and there are still a few assumptions in this model that could theoretically be removed. For example, my model assumes a homogeneous seafloor, which does not account for things like rocks or seashells or other individual objects that scatter sound, which are definitely present in many real-world environments. My model also assumes that you only get scattering from the top of the seafloor, and while that's a good place to start, it's not entirely true. For the sonars I work with, you can also get returns from the volume of sediment, sort of underground within the seafloor. I laid out the math for a volumetric model in the POMA, but I haven't validated it yet. And like I mentioned, since writing the POMA, I've done some experimental validation, but nothing yet in the field, in a real underwater environment. There's another grad student in my lab group who's just getting started approaching spatial coherence from a more experimental angle, and I'm super excited to see how her data and her analysis support my model, and, maybe just as importantly, how her work proves it wrong.

 

Kat Setzer  27:42

Yeah, right. So what was the most interesting, surprising or exciting aspect of the research?

 

Kyle Dalton  27:49

Yeah, I know we always talk about how interdisciplinary acoustics is, but I was kind of surprised by just how diverse and interdisciplinary the spatial coherence literature is. You know, it's been a really fun topic to read about and to work in. The papers at the core of this project were written in the 1930s to study light, and there are folks in astrophysics looking at the coherence of distant stars. There's work looking at the coherence of radio signals as they get distorted in the atmosphere. There's spatial coherence work in the biomedical field. It's all over the place, and it's been really cool to see how this one principle can get applied in so many ways.

 

Kat Setzer  28:28

Yeah, that is really cool. It's interesting that it has so many diverse uses, despite being a somewhat simple principle, I suppose.

 

Kyle Dalton  28:35

Yeah, it really just comes down to taking the general math and making it specific to your application. 

 

Kat Setzer  28:42

Yeah. Well, thank you for introducing me to the world of micronavigation. I wish you the best of luck on your future research and finishing up your PhD, and congratulations again.

 

Kyle Dalton  28:52

Yeah. Thank you so much.

 

Kat Setzer  28:54

Our next student paper winner, Ted Gan, takes us to the skies with his paper, "Time-varying broadband noise of multirotor aircraft." Congrats on the award, and thanks for taking the time to speak with me today, Ted! How are you?

 

Ted Gan  29:05

I'm very good. Thank you. And thanks for this opportunity to appear on the podcast.

 

Kat Setzer  29:10

You are very welcome. So just first, tell us a bit about your research background.

 

Ted Gan  29:15

I'm a PhD candidate at Penn State in aerospace engineering, and my PhD research, broadly speaking, is on rotorcraft aeroacoustics. "Rotorcraft" refers to aircraft with rotors, so this includes helicopters, propeller airplanes, and UAVs, more commonly known as drones. But this is broadly applicable to any kind of spinning rotor, including ceiling fans or washing machines. It's all rotors. And in "aeroacoustics," the "aero" just means air. So when we talk about aeroacoustics, we're talking about sound generated by air, in contrast to, let's say, noise from structural vibrations.

 

Kat Setzer  30:01

Okay, okay. So your paper has to do with time-varying broadband noise from multirotor aircraft, like drones, as you mentioned. What is time-varying broadband noise? How does it differ from other noise?

 

Ted Gan  30:13

Okay, I'll answer this question in two parts, because both parts are important. The first question is, what is broadband noise? And the second question is, what is time-varying noise? So I'll start with the first. Broadband noise is just any sound that encompasses a broad range of frequencies. Perhaps the best example of this is white noise; this is like radio static, or you might use a white noise machine for sleep. White noise is often characterized as sounding soft or rough or fuzzy, and this applies to any general kind of broadband noise.

 

Ted Gan  30:54

And then the second part of this question is, what is time-varying noise? Time-varying noise really means time-varying sound levels; sound levels are what our brains actually interpret as hearing sound. So, for example, helicopter rotors generate time-varying noise, like that characteristic chopping or buzzing sound. That's because the sound levels vary with the passage of the blades, at what we call the blade passage frequency. And the technical term "modulation frequency" is the frequency at which the sound levels vary with time.

 

Ted Gan  31:35

And it's really important not to confuse the modulation frequency with the sound frequency. For example, let's consider a pure tone with a sound frequency of 1000 hertz. Humans can hear that pretty well. 1000 hertz means that the air pressure at your eardrum is varying at 1000 cycles per second; that's what a hertz is. However, our brain is not going to interpret that as 1000 clicks per second. Instead, we just hear a constant sound level. So that would be an example of what is NOT time-varying noise; we call that time-invariant noise.

 

Ted Gan  32:15

But in contrast, we could have a 1000-hertz tone with a modulation frequency of 10 hertz. That means that you'll hear the sound kind of like this: 10 cycles, or clicks, or beats per second. And this exemplifies how sound frequencies are different from modulation frequencies. In fact, take me speaking right now: if you count, "One Mississippi, two Mississippi," I'm saying about five syllables per second. But you can't hear a five-hertz sound; that's below the range of human hearing. So you can't hear a five-hertz sound frequency, but you can hear a five-hertz modulation frequency, because you can hear me speak. And this really models the broadband noise modulation of a rotor.
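A quick way to make this distinction concrete is to synthesize the example: this minimal sketch builds a 1000 Hz tone whose level rises and falls ten times per second.

```python
# Minimal sketch: sound frequency (1000 Hz carrier) vs. modulation
# frequency (10 Hz envelope).
import numpy as np

fs = 8000                                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)                # one second of time
carrier = np.sin(2 * np.pi * 1000 * t)       # 1000 Hz sound frequency
envelope = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))  # 10 Hz modulation
signal = envelope * carrier
# The ear hears a 1000 Hz pitch whose loudness beats 10 times per second --
# a 10 Hz modulation frequency, not a 10 Hz tone.
print(f"samples: {signal.size}, carrier: 1000 Hz, modulation: 10 Hz")
```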

 

Ted Gan  33:03

So when you have a helicopter rotor, the broadband noise frequencies are several thousand hertz, because these are the high frequencies of turbulence. But the blade passage frequency is a lot lower, on the order of, let's say, tens of hertz, and that's really what you're hearing when you hear a helicopter chop or buzz. Your brain can't make out 5000 cycles per second, but it really locks onto the 10 cycles per second. So time-varying broadband noise is important for human perception of sound. That being said, most aircraft noise research doesn't typically consider the time variation of broadband noise, so that's what my research wants to address.

 

Kat Setzer  33:44

Interesting. Interesting. What were the goals for this research?

 

Ted Gan  33:48

So the overarching goal of our research is to design aircraft, and their operations, to not only sound quieter but also be less disruptive and less annoying. This is really important because even if all the aviation noise regulations are met, aviation noise has significant public health consequences. This includes, for example, increased risk of cardiovascular conditions and diseases like hypertension, and also disruption of concentration, learning, and sleep, which is especially important for children, let's say if an aircraft flies over a school.

 

Ted Gan  34:34

And these harmful health effects depend a lot on how we perceive noise. Human perception of sound has been shown to depend significantly on the time variation of noise levels, in contrast to just time-averaged noise levels, which are normally what we quote in decibels. Current aircraft noise certification metrics don't always fully capture the human physiological and psychological response to sound. What this means is that meeting noise regulations doesn't necessarily correlate strongly with public acceptance, and public acceptance of aviation noise is very important, because it is going to limit how frequent and how widespread flight operations are. This is pretty evident in current urban operations of helicopters in big cities like New York City or Los Angeles. And one could argue, okay, maybe these people are just complaining, they're being unreasonable. But that's not the case, because of all the health effects we've discussed. And even if it were the case, and they were just complaining for the sake of complaining, well, at the end of the day, aviation is meant to serve the public, so we obviously have to take into account what the communities want.

 

Kat Setzer  35:58

Right, right. That all makes a lot of sense. So you did both outdoor and indoor measurements. Why? Can you describe both of these setups?

 

Ted Gan  36:06

So outdoor measurements consist of flight tests, and of course, these are representative of real-life, realistic operations. However, the issue with outdoor measurements is that they can be quite challenging to study because there are simply too many factors to consider and control for. For example, atmospheric conditions can change pretty substantially within a single flight test, or between repeated flights that follow the exact same flight profiles. And even if we could control the weather and the atmosphere were perfectly calm, one challenge with these multirotor aircraft is that they have many rotors, as well as possibly wings or other control surfaces like flaps or ailerons. What this means is that the flight controller can fly the same flight or maneuver, or respond to atmospheric disturbances, in numerous ways; there are numerous combinations of rotor rotation speeds that will achieve the same flight. And this has huge implications for noise, because it means that repeated flights following the same flight profile can generate very different noise characteristics, and this variation in noise between flights really must be characterized and accurately quantified if we have any hope of certifying the noise of these aircraft. So that's where the indoor tests come in. They're very useful because we simply have more control over what's going on, and this helps us really understand the underlying fundamental physics so that we can improve our designs. Our indoor tests take place in an anechoic chamber at Penn State; these are specialized facilities whose walls absorb rather than reflect sound, which is a better representation of outdoor sound propagation.

 

Kat Setzer  38:00

Okay, okay. So how did you analyze the noise to take into account the time variation?

 

Ted Gan  38:06

We did this by using metrics that have good time resolution. I know that sounds like an overly simplified answer, but it's really the core essence of it. Typically, when acousticians study sound, we'll describe it using time-averaged sound levels in decibels. For example, let's say the background noise of the room I'm in right now is, I don't know, 30 decibels. Technically speaking, if I really want to be precise, I need to state the duration over which I calculated those 30 decibels: for example, I've recorded this room for two minutes, and over those two minutes, on average, the sound level is 30 decibels. But even most scientific journals don't always report this, and for good reason: in many applications, say room acoustics, you often care about the average noise levels; you're not really that concerned with their time variation. But when we talk about aircraft rotor noise, this is not the case. We actually do need that time resolution. So instead, the main acoustic signal processing method I use is called envelope analysis. Envelope analysis gives a good sense of the total sound levels with good time resolution. The sacrifice is that you don't really look much at the exact sound frequencies. But for my application, I'm actually not too concerned with the exact sound frequencies, because I know it's broadband noise; it's over a broad range. Instead, the frequencies I really care about are the modulation frequencies at which the sound levels vary, and envelope analysis is really a good method for this purpose.
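One common way to implement envelope analysis is with the Hilbert transform. The sketch below applies it to toy modulated noise; it assumes a typical pipeline and is not a reproduction of the paper's actual processing.

```python
# Minimal envelope-analysis sketch: recover the modulation frequency of
# amplitude-modulated broadband noise via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

# Toy "rotor broadband noise": wideband noise modulated at 10 Hz.
signal = (1 + 0.5 * np.sin(2 * np.pi * 10 * t)) * rng.standard_normal(t.size)

envelope = np.abs(hilbert(signal))                    # instantaneous amplitude
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(f"dominant modulation frequency: {freqs[int(np.argmax(spectrum))]:.1f} Hz")
# Prints ~10 Hz: the envelope spectrum exposes how the level varies with time,
# while deliberately ignoring the exact broadband sound frequencies.
```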

 

Kat Setzer  39:55

Oh, okay, okay. So is the effect of broadband noise modulation significant for multirotor aircraft in flight?

 

Ted Gan  40:03

Yeah, we found that it was, and this in itself was perhaps a novel and perhaps surprising discovery. It's not obvious that these time-varying broadband noise levels will be significant in flight, because UAVs typically have many rotors; you might see a hobby camera UAV with four, six, or eight rotors. And what becomes tricky when you have that many rotors is that you have to be really careful when you consider the constructive and destructive interference of sound waves.

 

Ted Gan  40:36

So for example, if you have just two sound waves, which models the sound of two rotors: if, let's say, at where you're standing, where your ears are, the two waves are perfectly in phase, then you get what we call constructive interference, the sum of the waves doubles, and you get much louder noise. But you could also have the opposite effect: they could be perfectly out of phase, and in that case the two waves actually sum to zero, and this is what we call destructive interference. It becomes more complicated when we have more than two waves, even though the same physics principles apply. It's highly unlikely, given all the conditions, that they'll add perfectly in phase or perfectly out of phase; the true answer will lie somewhere in the middle. And it's hard to say, without doing the experiment or the simulation, where in the middle you're going to end up. Computational predictions in the literature suggested that when you have so many rotors, the phases will add up such that the total sound amplitude isn't going to be that high. That's a plausible conclusion, but we couldn't find any experimental confirmation of this in the literature. So that's what we set out to do with our outdoor and indoor tests.
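The arithmetic of that "middle ground" is easy to sketch. The toy model below sums unit-amplitude waves with random phases; it is an illustration of the interference argument, not the paper's analysis.

```python
# Toy sketch of constructive/destructive interference for multiple rotors
# modeled as unit-amplitude tones at one frequency.
import numpy as np

rng = np.random.default_rng(2)
n_rotors = 4

print(f"two waves in phase:     amplitude {abs(np.exp(1j * 0) + np.exp(1j * 0)):.2f}")
print(f"two waves out of phase: amplitude {abs(np.exp(1j * 0) + np.exp(1j * np.pi)):.2f}")

# With uniformly random phases, the summed amplitude is rarely 0 or n_rotors;
# on average it lands somewhere in the middle.
amps = [abs(np.sum(np.exp(1j * rng.uniform(0, 2 * np.pi, n_rotors))))
        for _ in range(10_000)]
print(f"mean amplitude of {n_rotors} random-phase waves: {np.mean(amps):.2f}"
      f" (coherent sum would be {n_rotors})")
```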

 

Kat Setzer  41:57

Okay, so then you ended up wanting to understand why multi-rotor modulation is significant in flight, contrary to the predictions in the literature. So how did you go about doing this, and what did you find?

 

Ted Gan  42:08

So in terms of the how, we conducted a UAV flight test, we measured noise, and then I analyzed the noise measurements using the envelope analysis techniques I mentioned earlier. Of course, we aren't the first people in the world to measure noise during a UAV flight test; this really goes to show that sometimes you can learn a lot just by processing the data in a different way. And when I did that, we found that the amplitude of the time-varying broadband noise levels was large in flight. So then we asked, why is that?

 

Ted Gan 42:44

And for that, we turned to the anechoic chamber, because we can control more of the variables there. What we found was that the phase between the sound waves generated by different rotors was essentially uniformly random with time. That means all phases are equally likely, so the rotors are rarely going to add perfectly in phase, but it's just as unlikely that they're going to add perfectly out of phase and cancel each other out. So again, as I said earlier, we know the outcome is, on average, going to be somewhere in the middle. The hard part is determining whether that middle ground is going to be large or small. And when we did the processing, this middle ground still turned out to be a significant amplitude of time-varying broadband noise levels.

 

Kat Setzer  43:35

So what were the key takeaways of this research?

 

Ted Gan  43:38

So the main takeaway is that when you have a multirotor aircraft like a quadcopter UAV, its noise is really strongly affected by the phase offset between the rotor blades of different rotors. If the blade phase offset between rotors fluctuates with time, then the noise is also going to fluctuate with time. And we know the blade phase offsets fluctuate with time because the rotation speeds of the different rotors vary with time in order to control and stabilize the aircraft. However, even though we know the importance of these rotor speed and phase offset variations, we typically don't consider them in rotorcraft noise analysis. This is because, especially in high-fidelity aerodynamic simulations, it's only practical to set the rotation speed to be perfectly constant with time. The issue with that is that you lose the accuracy you get in the experiment; namely, you're not going to capture these time-varying noise levels. However, a more positive takeaway is that we can use this to our advantage: we can phase the rotor blades to have a constant offset with time, and we've done it at Penn State. We've designed controllers that keep these phase offsets between rotor blades as constant as possible, and we can choose the offset values to get the noise to cancel out as much as possible. This process is called synchrophasing in the literature, and it has been proven to reduce noise in experiments. However, people have only considered synchrophasing for tonal noise, typically not for broadband noise time variation. So that was a novel contribution of our work.
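The cancellation idea behind synchrophasing can be shown with a toy calculation. The even phase spacing below is an illustrative choice for identical rotors heard at a single point, not the controller design described in the interview.

```python
# Toy synchrophasing sketch: lock the relative phases of identical rotor tones
# and choose offsets so the tones cancel at the observer.
import numpy as np

n_rotors = 4

def summed_amplitude(phases):
    """Amplitude of unit-strength tones at one frequency with given phases."""
    return abs(np.sum(np.exp(1j * np.asarray(phases))))

in_phase = summed_amplitude([0.0] * n_rotors)
evenly_spaced = summed_amplitude(2 * np.pi * np.arange(n_rotors) / n_rotors)
print(f"in-phase rotors:      amplitude {in_phase:.2f}")
print(f"synchrophased rotors: amplitude {evenly_spaced:.2f}")
# Evenly spaced phase offsets drive the summed tone to zero at this observer --
# the kind of cancellation a synchrophasing controller aims for.
```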

 

Kat Setzer  45:34

Yeah, the synchrophasing sounds really cool and like a very interesting solution. So what was the most interesting, surprising, or exciting aspect of this research for you?

 

Ted Gan  45:43

Personally, it was quite surprising to me that these very small fluctuations in rotor rotation speed can substantially affect the noise amplitudes we hear. For example, in our experiments, we have a pretty typical flight controller, I would say, and we set the rotors to spin at 637 hertz. However, because no controller in the world is perfect, even if you tell the controller to spin at 637 hertz, in reality it's going to fluctuate by plus or minus one or two hertz, so let's say 635 to 640 hertz. That's less than 0.5% of what we told it. So if you told me, okay, your motor works to 99.5% accuracy, that sounds pretty impressive, right? And to be honest, it's not that we have a bad controller or a good controller; I think this is a pretty representative number. You would be hard-pressed to find controllers that maintain rotation speed to within less than 0.1 hertz. But that being said, these small fluctuations in rotor rotation speed really affect how the noise from multiple rotors adds up, not because it's such a big fluctuation, again, it's less than 0.5%, but because these small fluctuations can really affect the phasing between rotors, and that's going to really affect the noise. So that 0.5% matters, and that kind of surprised me.
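To see why a one-hertz mismatch matters, consider two unit-amplitude rotor tones drifting in relative phase. This toy sketch borrows the 637 Hz figure from the interview; everything else is illustrative.

```python
# Toy sketch: a 1 Hz mismatch between two rotors sweeps their relative phase
# through a full cycle every second, so the combined level beats at 1 Hz.
import numpy as np

f1, f2 = 637.0, 638.0              # rotor tone frequencies, Hz
for ti in np.linspace(0.0, 1.0, 5):
    phase_diff = 2 * np.pi * (f2 - f1) * ti        # relative phase drift
    amp = abs(np.exp(1j * 0.0) + np.exp(1j * phase_diff))
    print(f"t = {ti:4.2f} s: phase diff {phase_diff:4.2f} rad, amplitude {amp:.2f}")
# Even though 1 Hz is under 0.2% of 637 Hz, the summed amplitude swings
# between 2 (fully constructive) and 0 (fully destructive) once per second.
```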

 

Kat Setzer  47:18

Yeah, it's counterintuitive that such a tiny, tiny difference could greatly impact, like you said, the phasing so that there's more noise. So what are the next steps in this research?

 

Ted Gan  47:29

So next, we really want to work on better characterizing, one, the variations caused by the controller, and two, how those variations propagate to affect the noise variation. By propagating uncertainties in, let's say, rotor rotation speed caused by the flight controller throughout the noise calculations, we can characterize the variation of the noise quite accurately. This is really important for noise certification of multirotor aircraft, because it's difficult to certify noise if it's going to vary every time you fly the same flight. So if we can precisely, quantitatively characterize that variation, it would be a big help, not only to aviation regulators but also to aircraft designers and operators, so we can all work together to reduce noise and keep our communities quiet.

 

Kat Setzer  48:24

Yeah, well, it sounds like an exciting endeavor. It is really interesting to learn how rotorcraft noise can be perceived so differently from traditional plane noise, and it certainly seems like something we need to know more about as rotorcraft become more prevalent. I wish you the best of luck on your research, and once again, congratulations.

 

Ted Gan  48:44

Thanks for having me.

 

Kat Setzer  48:47

Before we wrap up this episode, I'd like to share a couple of messages with our listeners. First, if you liked what you heard in this episode, please text it or email it to someone who may enjoy it as well.

 

Kat Setzer  48:56

Second, for any students or mentors listening around the time this episode is airing, we're actually holding another Student Paper Competition, this time for the Fall 2024 virtual meeting. So, students, if you're presenting or have presented, depending on when you're listening to this episode, now's the time to submit your POMA. We're accepting papers from all of the technical areas represented by the ASA. Not only will you get the respect of your peers, you'll win $300 and, perhaps the greatest reward of all, the opportunity to appear on this podcast. And if you don't win, this is a great opportunity to boost your CV or resume with an editor-reviewed proceedings paper. We'll include a link to the submission information in the show notes for this episode.

 

Kat Setzer  49:37

Thank you for tuning into Across Acoustics. If you'd like to hear more interviews from our authors about their research, please subscribe and find us on your preferred podcast platform.