
Sunday, May 12, 2024

Whistle While You Wheel

Shortly after Marika and I got our car, we installed a roof rack to carry our cargo box for camping trips. The first time we drove on the highway with it, we suddenly heard a clear, crisp note from above our heads. As our speed changed, the note would suddenly shift to a new pitch, just as clear. The roof rack has a channel running down the center of its length, with a rubber seal on top that opens on each end. I realized it was acting exactly like a flute – Air blown across one end of an open tube was resonating at a specific frequency. I had hoped to make a simulation of this at the time, but I couldn't get my head around the equations involved.

Some time later, I found this wonderful interactive simulator, and tried adapting that to this situation, but now the obstacle was introducing the pipe geometry. Finally this week I looked again, and was saved by Daniel Schroeder, who introduced me to the HTML5 simulations I've shown here. Schroeder's demo shows how a steady stream past a barrier can create vortices, but we want a tube with an opening on top. Here's the boundary I came up with:

Now we can see what happens to air moving left to right. The simulation used here keeps track of the density and velocity of air at each point. To get the next state, it uses the Lattice Boltzmann method, which involves two steps: collision and streaming. The collision step changes the velocity in each cell to push the system toward equilibrium, and the streaming step uses those velocities to shift the density between cells. You can find my adaptation of Schroeder's code here. I found that this setup would pretty quickly reach equilibrium, but to get sound we need oscillation. Adding a little bit of noise allowed it to settle into an oscillating pattern. Looking at the full map we can only really see the transients (though those are pretty nifty):
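For a feel of how those two steps look in code, here's a minimal sketch of a D2Q9 lattice update in Python. The grid size, relaxation time, and inflow speed are placeholder values, and the tube walls and inflow forcing are left out, so this is the bare method rather than my actual setup:

```python
import numpy as np

NX, NY = 200, 80    # lattice dimensions (placeholder values)
TAU = 0.6           # relaxation time; sets the fluid's viscosity

# D2Q9 lattice: nine velocity directions and their weights
ex = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
ey = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, ux, uy):
    """Equilibrium distribution toward which each cell relaxes."""
    eu = ex[:, None, None] * ux + ey[:, None, None] * uy
    u2 = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*u2)

# uniform flow to the right, with a little density noise to seed oscillation
rho = np.ones((NY, NX)) + 0.01 * np.random.randn(NY, NX)
f = equilibrium(rho, np.full((NY, NX), 0.1), np.zeros((NY, NX)))

for _ in range(1000):
    # macroscopic density and velocity in each cell
    rho = f.sum(axis=0)
    ux = (ex[:, None, None] * f).sum(axis=0) / rho
    uy = (ey[:, None, None] * f).sum(axis=0) / rho
    # collision: push each cell toward its local equilibrium
    f += (equilibrium(rho, ux, uy) - f) / TAU
    # streaming: shift each distribution one cell along its direction
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], ey[i], axis=0), ex[i], axis=1)
```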

If we instead focus in on the opening of the tube, we can see the density oscillating around a central value, which is exactly what we need to get sound:

Now on to the pitch question: We can measure the relative strengths of the frequencies using an amplitude spectral density, and see how it changes for different speeds.

Sharp peaks indicate a clear note, and we can see a few here, separated by the different speeds. This is exactly what we experience in the car, and it's pretty cool to see it show up in such a pared down model – One of the things I love about physics!
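If you want to try the same measurement on your own time series, here's a sketch of the estimate; the function and variable names are mine, not from the simulation code:

```python
import numpy as np

def asd(series, dt):
    """One-sided amplitude spectral density of a uniformly sampled series."""
    n = len(series)
    spectrum = np.fft.rfft(series - np.mean(series))   # remove the DC offset
    freqs = np.fft.rfftfreq(n, d=dt)
    # standard one-sided normalization: units of amplitude / sqrt(Hz)
    return freqs, np.abs(spectrum) * np.sqrt(2 * dt / n)
```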

Sunday, March 10, 2024

An Honest PUC

Back in January, my brother Nate got several BirdWeather PUCs, small outdoor devices that listen for bird songs, and identify the species responsible, similar to the popular Merlin phone app. He gave one to our parents, one to Marika and me, and kept one for himself. If you follow those links, you can see each PUC uploads its observations to a central server, which can be queried through an API. Nate used this to set up a daily email giving the previous day's species counts for each of our stations, and I was curious if I could apply some of the data analysis techniques I've learned to the results.

The first thing I wanted to try was identifying trends in the time of day each bird is heard. To do this, we can use an idea called data folding. This is similar to the idea of a Fourier transform, where we're dealing with periodic data, but we're only interested in a single period: 1 day. If we split our months of data into single-day segments and stack them on top of each other, we can get better statistics about when each bird is heard. There are too many individual species to look at all of them, so I considered finding a way to group them. One idea was to use their taxonomy, but depending on the level I chose, I'd get either one big group, or a group for every bird. Going back to the better statistics idea, I decided to just plot the ones with more than 200 total detections:

This is called a violin plot, often used to show statistical distributions like these. The end caps show the max/min values, and the bulges show the more frequent times. You can see that most peak around dawn hours, but a few are heard throughout the day.
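Here's a minimal sketch of that folding step feeding matplotlib's violinplot. The `detections` list of (species, datetime) pairs is a stand-in I made up for whatever format you pull from the API:

```python
from collections import defaultdict
import matplotlib.pyplot as plt

def fold_by_day(detections, min_count=200):
    """Fold (species, datetime) pairs onto one day, keeping only the time of day."""
    hours = defaultdict(list)
    for species, t in detections:
        hours[species].append(t.hour + t.minute / 60)
    # better statistics: keep only species heard often enough
    return {s: h for s, h in hours.items() if len(h) >= min_count}

def plot_violins(folded):
    fig, ax = plt.subplots()
    ax.violinplot(list(folded.values()), showextrema=True)
    ax.set_xticks(range(1, len(folded) + 1))
    ax.set_xticklabels(folded.keys(), rotation=45)
    ax.set_ylabel("Hour of day")
    plt.show()

# usage: plot_violins(fold_by_day(detections)), with detections from the API
```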

In the daily emails Nate set up, I noticed that for a long time I only saw the Carolina wren show up in Florida, but then it started popping up in Massachusetts as well. I wondered if I was seeing a spring migration, so I got the data from all 3 of our stations, and looked at the number of wrens for each day:

This seems to show it was just my imagination: The wrens are much more frequent in Florida overall, and there doesn't appear to be a trend toward MA over time.
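The counting behind that plot is simple enough to sketch too; again, the detection format here is a made-up stand-in:

```python
from collections import Counter

def daily_counts(detections, species="Carolina Wren"):
    """Count detections of one species per station per calendar day."""
    counts = Counter()
    for station, name, t in detections:   # t is a datetime
        if name == species:
            counts[(station, t.date())] += 1
    return counts
```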

In the first plot I showed, you may have noticed the "Engine" entry. It turns out the PUC has several non-bird sounds it recognizes, and it picked up the highway traffic next door to us. I was curious to see what other non-bird detections it had made. The reported data gets a "confidence" rank based on both how close the sound was to the model, and how likely it is to hear that sound in the location. I split up the detections below by those rankings:

It's reassuring to see the few "gun" detections don't rise above uncertain, though the siren counts are quite high (and I'm assuming that refers to the emergency-vehicle type, not luring-sailors-to-doom type). Seeing these makes me curious what other sounds it can recognize. Thanks for this cool gift, Nate!

[Edit: I noticed this post is getting lots of traffic from the BirdWeather community. Maybe someone *cough*Nate*cough* shared this post there, but anyway, here's my code!]

[Edit 2: I've made a JavaScript tool that can fetch the latest data from a given station and plot histograms like the ones above.]

Sunday, April 23, 2023

A Rainbow of Random

Last week I was talking to Steve and Nate (my father and brother), and they mentioned using white noise generators to help with sleeping. That made me think of a meeting I was in earlier in the week where someone mentioned that LIGO and LISA typically have red noise contaminating the measurements. That made me wonder: What does red noise (or other colors of noise) sound like?

First off, we need to talk about what it means for noise to have a particular color. In the context of light, color tells us the frequency: red light has a lower frequency than blue light, and white light is a mixture of all frequencies. We can apply the same principle to frequencies of sound: more low frequencies is redder, and more high frequencies is bluer. We figure out the "color" of a dataset by taking its Fourier transform and looking at how the power is distributed across frequencies. White noise has an equal amount of power at all frequencies.

That's the theory, but how do we go about generating different noise colors? White noise is easy: Given some sampling rate, we generate uniformly distributed random numbers. Because these numbers are uncorrelated, they don't favor any frequency, and give a flat spectrum. We can "redden" that noise by giving each sample some dependence on the previous. This page suggests mixing each sample with part of the previous:

\[ x_i = r\,x_{i-1} + (1 - r)\,w_i \]

where x is the series of samples, w a series of white samples, and r a tuning parameter between 0 and 1, with 0 giving white noise, and 1 giving constant values. For blue noise, we want to do the opposite: sequential values should be as different as possible. To do that we can follow the method here: For each blue point we want to generate, we get several white points and pick the one furthest from a previous sample.
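Here's a sketch of both recipes in Python. The uniform white samples and the eight-candidate default for blue noise are my own choices, not taken from the pages linked above:

```python
import numpy as np

rng = np.random.default_rng()

def red_noise(n, r):
    """Mix each white sample with part of the previous output sample."""
    w = rng.uniform(-1, 1, n)
    x = np.empty(n)
    x[0] = w[0]
    for i in range(1, n):
        x[i] = r * x[i - 1] + (1 - r) * w[i]
    return x

def blue_noise(n, lag=1, candidates=8):
    """Of several white candidates, keep the one furthest from the sample `lag` ago."""
    x = list(rng.uniform(-1, 1, lag))
    for _ in range(n - lag):
        c = rng.uniform(-1, 1, candidates)
        x.append(c[np.argmax(np.abs(c - x[-lag]))])
    return np.array(x)
```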

We still haven't gotten to what these different noises sound like though. For that, we can use JavaScript's AudioContext, and pipe these sequences of values through your speakers. Below, you'll find a slider to control the color of the noise – For red noise, it represents the r from above, and for blue noise it represents how many samples ago to avoid. That technique isn't an exact mirror of the red-noise one, so there isn't a continuous transition for values above zero (the white noise point). In the window above the slider, you can see the Fourier transform of the sound, with the frequency going left to right. Maybe not the most soothing sounds, but I hope it can give you some insight into the sort of problems scientists have to deal with.
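If you'd rather experiment outside the browser, a stand-in for AudioContext is to write the samples to a WAV file with Python's standard library and play it in anything – a sketch, assuming the input is a float array in [-1, 1]:

```python
import wave
import numpy as np

def save_wav(samples, filename="noise.wav", rate=44100):
    """Write a float array in [-1, 1] as 16-bit mono PCM."""
    data = (np.clip(samples, -1, 1) * 32767).astype(np.int16)
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)       # mono
        f.setsampwidth(2)       # 2 bytes per sample = 16 bits
        f.setframerate(rate)
        f.writeframes(data.tobytes())

save_wav(np.random.uniform(-1, 1, 44100))   # one second of white noise
```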

Saturday, July 18, 2020

Vroom!

This morning for breakfast, we got some pastries from a favorite bakery. On the way home, another car honked its horn as it passed, and the changing pitch reminded me of something I've often wanted to try: I wondered whether I could simulate the sound of a vehicle moving by an observer, by calculating the Doppler shift at each point.

The Doppler-shifted frequency is given by

\[ f = \frac{f_0}{1 - \hat{r}\cdot\vec{v}/c} \]

where v is the velocity of the vehicle, c the speed of sound, f0 the original frequency, and r̂ the unit vector pointing from the vehicle to the observer. The dot product with r̂ means we only take the part of the velocity that is along the line of sight between the vehicle and the observer. That dot product is what leads to the changing pitch: as the vehicle approaches and then passes, the angle between its velocity and the line of sight grows, so the line-of-sight velocity shrinks and the pitch falls.
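As a sketch of how to evaluate that along a straight drive-by (the horn frequency, speed, and closest-approach distance are made-up numbers, not values from my code):

```python
import numpy as np

c, f0, v, d = 343.0, 400.0, 25.0, 10.0    # speed of sound, horn, car speed, offset
t = np.linspace(-5, 5, 1001)              # closest approach at t = 0
x = v * t                                 # vehicle position along the road
r = np.sqrt(x**2 + d**2)                  # distance from vehicle to observer
v_los = -v * x / r                        # line-of-sight part of the velocity
f = f0 / (1 - v_los / c)                  # high while approaching, low receding
```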

Sunday, February 2, 2020

Slow, Sarcastic Clap

I mentioned looking forward to coming back to Cardio Drumming at the gym where Marika and I go, but sadly they no longer offer it. Instead, they offer something called Pound Fitness that we've been trying out, which involves using weighted drumsticks without the ball drums. It's got rhythm on my mind, and made me think of another idea I've wondered about for a while: When large groups clap along to music, why does it seem like there are so many who are off-beat?

I suppose the uncharitable answer is that they're bad at keeping time, but I wondered whether the relatively slow speed of sound had anything to do with it. If we imagine the beat as a pulse that leaves the stage at 343 m/s, each audience member will clap when that pulse reaches them. That starts a new beat-pulse that will be out of sync when it reaches other audience members.

The equation for the error is fairly simple:

\[ \Delta t = \frac{2R\sin(\theta/2)}{c} \]

where R is the distance from the stage to the audience members, c is the speed of sound, and θ is the angle between the audience members as seen from the stage. The numerator, 2R sin(θ/2), is just the straight-line distance between the two listeners, so Δt is how long one clap takes to reach the other.
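To get a sense of scale, here's a quick evaluation with made-up but plausible numbers: two listeners 30 m from the stage, a quarter circle apart.

```python
import math

R, c = 30.0, 343.0                      # meters from stage, speed of sound (m/s)
theta = math.radians(90)
dt = 2 * R * math.sin(theta / 2) / c    # time for one clap to reach the other
print(f"{dt * 1000:.0f} ms")            # ~124 ms: an audible offset at most tempos
```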

Saturday, August 24, 2019

Strung Out

This is another idea from my list, and I have absolutely no memory of the source, aside from a casual interest in computer-generated sound and music: The Karplus-Strong algorithm. This is a technique for generating the sound of a plucked string entirely electronically – no actual strings required! Before we get into the algorithm though, we should go over some background:

What is sound?
Sound is a wave made up of pockets of higher- and lower-pressure air (or other substances). These pockets of pressure excite your eardrums in different ways that your brain interprets as sound. A musical note has a pitch, which is a specific frequency of high/low pressure changes.

How does a computer make sound?
A computer speaker uses magnets to change electricity into sound:
Image by Svjo, own work, CC BY-SA 3.0.
The yellow coil is a wire that the computer can run current through. This creates a magnetic field, which moves the magnet (2) in the middle of the coil. The magnet is attached to a diaphragm (4) which compresses and expands the air in front of it to make sound waves.

What do we put into the speaker to get a note?
This is where the algorithm comes in. A pitch is a specific frequency, so as long as we repeat whatever we're putting into the speaker, we'll get some kind of note. If you remember your math classes, a sine wave might come to mind:
Unfortunately, as any Physics Lab instructor can tell you, pure sine tones aren't the most melodious to listen to:

Not surprisingly, it sounds a bit like a telephone tone, another form of computer-generated sound, designed to be heard by a computer.
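For reference, generating that pure tone takes only a couple of lines:

```python
import numpy as np

rate = 44100                          # samples per second
t = np.arange(rate) / rate            # one second of time stamps
tone = np.sin(2 * np.pi * 440 * t)    # a 440 Hz (concert A) sine wave
```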

Musical instruments sound different even when playing the same pitch because of the harmonics they include. There's a base frequency, like the sine wave above, but then there are many others that add color to the sound. This is where the Karplus-Strong algorithm comes in. We want a way to systematically create a range of frequencies associated with a single base frequency.

I don't want to get too deep into how the algorithm works – for that you can read the Wikipedia page I linked above, or take a look at the code I used to make these sounds. I will, however, try to summarize the concept.

We start with a list of values – mostly zeros, but we initialize things with some random noise at the beginning:
We then take the first value and output it. At the same time, we combine it with the second:
We put the combined value at the back, and move everything forward:
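Putting those steps together, here's a minimal sketch of the loop in Python. The decay constant and the size of the initial noise burst are my own choices, not necessarily what my code uses:

```python
from collections import deque
import numpy as np

def karplus_strong(freq=220.0, rate=44100, duration=2.0, decay=0.996):
    n = int(rate / freq)                 # buffer length sets the pitch
    init = np.zeros(n)
    init[: n // 4] = np.random.uniform(-1, 1, n // 4)   # noise at the beginning
    buf = deque(init)
    out = np.empty(int(rate * duration))
    for i in range(len(out)):
        out[i] = buf[0]
        first = buf.popleft()
        # combine the outgoing value with its neighbor, put it at the back
        buf.append(decay * 0.5 * (first + buf[0]))
    return out
```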
It's a simple method, but I was stunned by the quality of the sound I got out of it:

I also plotted some frames of the buffer to see what was going on:
You can see the burst of noise at the beginning, and then a damped sinusoidal wave.

I was curious what kinds of tones I could get by playing around with the filtering pattern. First I tried simply extending from 2 bins to 5:

Then I decided to go nuts and try a sinusoid filter:

The sound is even worse than the pure sine, but the waveform is really interesting:

It's a little difficult to see, but the initial noise manages to continue propagating through, as the sine pattern gradually asserts itself.

The thing I love most about Physics is how simple models can still give realistic results, and this is a perfect example. I wrote the code above in less than an hour, with no specialized knowledge about sound synthesis, yet the result sounds exactly like a plucked string. As always, I encourage the tech-minded among you to look at the code, and come up with some interesting sounds for yourself!

Saturday, March 30, 2019

"Feed Me, Seymour!"

The third observing run for Advanced LIGO/Virgo starts Monday! That means I've been on many telecons making sure everything is prepared for the big moment. Unfortunately, when you have dozens of people on a call, someone will invariably forget to mute their microphone, leading to echoes, or worse, feedback.

Feedback happens when an output, like speakers, gets fed back into an input, like a microphone. Typically, the point of a speaker is to amplify its input, so the output gets louder, which goes into the input, and comes out louder, and goes into the input... Eventually, the signal reaches the limits of either the input or output system, and things stabilize (or break). I was curious though, why feedback has its characteristic sound. White noise is so-called because it's an equal distribution across all frequencies, just like white light. Feedback has a specific tone because of the properties of microphones and speakers.
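As a toy model of that runaway loop (made up for illustration, not anything from the telecon system), we can watch noise circulating through a gain stage and a frequency response settle onto the response's peak:

```python
import numpy as np

rate = 44100
freqs = np.fft.rfftfreq(rate, 1 / rate)
response = np.exp(-((freqs - 5000) / 2000) ** 2)   # made-up mic+speaker response
signal = 1e-3 * np.random.randn(rate)              # a little ambient noise

for _ in range(20):
    # each pass through the loop: filter by the response, amplify, then clip
    spectrum = np.fft.rfft(signal) * response * 3.0
    signal = np.clip(np.fft.irfft(spectrum, n=rate), -1, 1)   # hardware limits
```

After a few passes, everything left in the loop sits near the 5 kHz peak of the response – the same mechanism that picks out the squeal's pitch.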

The sounds we hear are rarely single-frequency – They're usually spread over a range with different intensities. This is how you can tell the difference between two instruments playing the same note. Different shapes and materials have different harmonics.

Microphones on computers are designed to pick up human speech, which is mostly in the range 150 Hz to 4 kHz:
The x-axis of this plot is frequency, and the y-axis is intensity in decibels (dB). This is the typical unit used for measuring sound intensity, but it's a bit confusing. For one thing, it has the uncommon deci- prefix, meaning it is 1/10th of a bel (named after Alexander Graham Bell). It's also defined on a logarithmic scale:
\[ L = 20\log_{10}\!\left(\frac{p_\mathrm{rms}}{p_\mathrm{ref}}\right)\,\mathrm{dB} \]

where L is the loudness of the sound, p_rms is the average pressure of the sound wave, and p_ref is a reference pressure. Every 20 decibels corresponds to a pressure 10 times greater (every 10 dB is a factor of 10 in power, which goes as the pressure squared).

Computer microphones are designed to pick up the same range of frequencies:
Notice though, there's a slight uptick around 5 kHz. We can compare that to the output of computer speakers,
This also rises around 5 kHz, which leads to the typical high pitch – For reference, the highest note on a piano is around 4 kHz.

Of course, understanding how it works doesn't make it any less irritating!