Sunday, May 24, 2020

Roader Coaster

There's a road near us with a bump, and almost every time we go over it, I get that stomach-dropping feeling, just like on a roller coaster. As with most things I dislike (as well as things I enjoy or am curious about), I wondered if I could model what's going on!

My assumption was that in going over the bump, the car was accelerating at some significant fraction of gravity. Given the car's velocity and the shape of the bump, I should be able to find the acceleration. The first thing I thought of was centripetal acceleration:

a = v^2 / r

This is the acceleration required to move in a circle, where v is the velocity and r is the radius of the circle. We have a curve, not a circle, though, so how can we get a radius? There's an idea called an osculating circle, which touches a point on a curve and has a radius chosen so that it matches the curve (at least on an infinitesimal scale) at that point. Before we can calculate that, though, we need to define our bump.

I figured a Gaussian curve would make a nice smooth bump. Unfortunately, combining that equation with the osculating circle makes for a huge mess, so I figured I'd let Python deal with it:
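Since the original code isn't shown here, this is a minimal sketch of the idea: a Gaussian bump 75 cm high, with a width σ = 50 m that is my own guess (the post doesn't give one). The osculating circle's radius comes from the standard curvature formula R = (1 + y'^2)^(3/2) / |y''|, with the derivatives taken numerically:

```python
import numpy as np

H, SIGMA = 0.75, 50.0   # bump height (m) and assumed width (m)

def bump(x):
    """Gaussian bump profile y(x)."""
    return H * np.exp(-x**2 / (2 * SIGMA**2))

def curvature_radius(x, dx=1e-3):
    """Radius of the osculating circle at x, using central differences."""
    yp = (bump(x + dx) - bump(x - dx)) / (2 * dx)
    ypp = (bump(x + dx) - 2 * bump(x) + bump(x - dx)) / dx**2
    return (1 + yp**2) ** 1.5 / abs(ypp)

v = 13.4                           # 30 mph in m/s
a = v**2 / curvature_radius(0.0)   # centripetal acceleration at the crest
```

At the crest the radius works out to σ²/h, so a narrower bump (smaller σ) gives a much stronger kick.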
The point moves at a constant velocity along the curve – this is a bit more complicated than it sounds. We need to take into account both the x- and y-movement:

vx = v / sqrt(1 + (dy/dx)^2)
vy = vx · dy/dx

Since we're already doing things numerically, we can choose our points to be evenly spaced along the curve, rather than the more typical spacing along x.
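The even spacing can be sketched like this (again with my assumed 75 cm × 50 m Gaussian): at each time step, the x-step is shrunk by the local slope so that the speed along the curve stays fixed.

```python
import numpy as np

def constant_speed_path(v=13.4, dt=0.01, t_max=20.0,
                        h=0.75, sigma=50.0, x0=-150.0):
    """March a point along y = h*exp(-x^2 / (2 sigma^2)) at constant
    speed v by scaling each x-step: dx/dt = v / sqrt(1 + y'(x)^2)."""
    y = lambda x: h * np.exp(-x**2 / (2 * sigma**2))
    yprime = lambda x: -x / sigma**2 * y(x)
    xs = [x0]
    for _ in range(int(t_max / dt)):
        x = xs[-1]
        xs.append(x + v / np.sqrt(1 + yprime(x)**2) * dt)
    xs = np.array(xs)
    return xs, y(xs)
```

The distance between consecutive points is then v·dt everywhere along the curve, instead of the points bunching up wherever the curve is steep.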

After making that fancy animation, though, I realized it's not the centripetal acceleration we're interested in, but only the y-direction, since that is what will cancel gravity. Calibrating the bump to 75 cm high and the car's speed along the road surface to 30 mph gives the y-acceleration over the bump. The maximum is just over 0.5% of gravity, so I'm underestimating either the size of the bump or the sensitivity of my gut!
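A rough version of that final calculation, for anyone who wants to play with the numbers: the vertical acceleration is just the second time-derivative of the height while moving along the curve at constant speed. The width σ = 50 m is my assumption, chosen because a long, gentle rise is what lands in the half-percent-of-g range quoted above.

```python
import numpy as np

G = 9.81                 # m/s^2
V = 13.4                 # 30 mph in m/s
H, SIGMA = 0.75, 50.0    # bump height (m) and assumed width (m)

def y_of(x):
    return H * np.exp(-x**2 / (2 * SIGMA**2))

def yprime(x):
    return -x / SIGMA**2 * y_of(x)

# march over the bump at constant speed and record the height vs. time
dt, x = 0.01, -200.0
heights = []
for _ in range(3000):
    heights.append(y_of(x))
    x += V / np.sqrt(1 + yprime(x)**2) * dt

a_y = np.diff(heights, 2) / dt**2   # second time-derivative of height
print(f"max downward y-acceleration: {abs(a_y.min()) / G:.2%} of g")
```

With these assumed numbers the maximum downward acceleration comes out just over half a percent of g, at the crest of the bump.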

Sunday, May 17, 2020

Filling in the Holes

Following up on some previous posts about gravitational wave detections, this week I have some questions from Papou:

How many black hole collisions are we currently aware of?
Last year, the LIGO/Virgo Collaboration released the first Gravitational-Wave Transient Catalog, covering all the detections from the first and second observing runs. That includes 10 binary black hole (BBH) events, and 1 binary neutron star (BNS) merger. The third observing run is split in two pieces. Results from the first half, O3a, are available on GraceDb, and include 37 BBHs, 6 BNSs, 5 neutron star black hole (NSBH) mergers, and 4 events that fall in the "mass gap".

The mass gap represents a range of masses where we have never seen a black hole or neutron star. This image shows a summary of the compact objects observed by LIGO and electromagnetic astronomers as of the end of O2:
via Northwestern
The empty space between 2 and 5 solar masses is the mass gap. It is unknown what kind of bodies were involved in those mass gap collisions.

What happens to "Matter and Antimatter" when two black holes collide?
Black holes are made of matter. When matter and antimatter combine they make energy, but as with most physical processes, this is reversible: energy can create matter/antimatter pairs. Since the universe contains residual heat energy from the Big Bang, these pairs are constantly forming and annihilating back into energy in space. If one of these pairs forms near a black hole, though, one of the particles can fall in while its partner escapes, carrying energy away. This process is called Hawking radiation, and it can lead to a black hole slowly losing mass.

Are photons escaping due to the collision energy?
In the case of black holes, no. Light can't escape a black hole's event horizon, and when two collide, their horizons merge. Neutron star collisions, however, can release photons in the form of a gamma-ray burst (GRB). In an earlier post, I mentioned LIGO's detection of GW170817, which showed a correlation between a binary neutron star merger and a GRB.

Are the combining gravities simply arithmetic additions or does the total gravity grow in multiples?
Neither! Mass and energy are connected through E = mc^2, so when energy is released in the form of gravitational waves, part of it comes from the masses of the black holes. Despite the extreme sensitivity needed to detect gravitational waves, they carry an enormous amount of energy. The first detection, GW150914, lost about 3 suns' worth of mass-energy. I tried to find a way to put that number in perspective, like "X billion nuclear bombs", but it's so huge that it dwarfs even a measure like that. Spacetime is exceptionally stiff stuff, and wrinkling it even a little takes an amount of energy we don't normally encounter.
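To put a rough number on that, we can run three solar masses through E = mc^2 and compare against the Tsar Bomba, the largest nuclear device ever tested, at roughly 2.1 × 10^17 J:

```python
M_SUN = 1.989e30       # kg
C = 2.998e8            # speed of light, m/s
TSAR_BOMBA = 2.1e17    # J, yield of a ~50-megaton bomb

e_gw = 3 * M_SUN * C**2        # mass-energy radiated by GW150914
bombs = e_gw / TSAR_BOMBA
print(f"{e_gw:.1e} J, or about {bombs:.1e} of the largest bombs")
```

That's around 10^30 bombs – billions of billions of billions – which is why "X billion nuclear bombs" fails as a yardstick.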

Thanks for more great questions, Papou!

Sunday, May 10, 2020

Spare Brain

[Speaking of brains, I recently made an ebook version of Steve's cancer blog, available on Google and Apple! /shameless plug]

This week's post isn't exactly about Physics, but instead an interesting technique that can be used for data analysis, and one I've used to make a card game. First some background: During my first semester at Swarthmore, I took a class on Jane Austen's novels. In one of the books, the characters played a card game called Piquet, which I had never heard of. I looked it up, and invited my friend Kevin to try playing it, which turned into a fierce tournament spanning many years. One summer, I developed an Apache Tomcat version of the game that could be played in your browser against various computer players, which I called WebPiquet. It's no longer functional, but all the code is here.

Skip ahead ~10 years, and I've become a fan of the blog AI Weirdness. It's written by a research scientist, Janelle Shane, who applies a program called a neural network to various tasks, and shows the results. You may have heard about neural nets in the news recently, thanks to GPT-2. They work by simulating a set of neurons that are activated to different degrees by inputs; the neurons' activations are then combined to produce a set of outputs.

As the name "neural network" suggests, the neurons are laid out in a network:
Moving from left to right, each neuron in one layer connects to the ones in the next. Each connection carries a weight that determines how strongly one neuron's activation affects the next.

The key to getting a useful neural net is training. They're such a general tool that to get them to work on a specific job, you need a large amount of training data. By feeding the network inputs and tuning the weights until its outputs match the expected results, we can try to find the best set of weights for the job.
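As an illustration of that training loop (a toy example, not Pyquet's actual code), here's a single artificial neuron learning a logical OR by repeatedly nudging its weights toward the training targets:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# training data: the four input patterns of logical OR and their targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 1.0])

w = np.zeros(2)    # connection weights
b = 0.0            # bias
lr = 1.0           # learning rate

for _ in range(2000):
    y = sigmoid(X @ w + b)            # forward pass: weighted sum -> activation
    err = y - t                       # how far each output is from its target
    w -= lr * (X.T @ err) / len(t)    # tune the weights toward the targets
    b -= lr * err.mean()

print(np.round(sigmoid(X @ w + b)))   # learned OR: [0. 1. 1. 1.]
```

A real network stacks many of these neurons in layers and tunes all the weights the same way, by following the gradient of the error.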

The downside of neural nets is that without proper design and training, their results can be nonsense. Shane's blog features many of these, and there are Twitter accounts like @normalcatpics that feature particularly funny and/or nightmarish examples.

Getting back to cards, I thought it would be interesting to make a Piquet bot that took advantage of neural networks. I wrote an entirely new version in Python, which I naturally named Pyquet. I've only trained it for 10 or so hours, so some of the decisions it makes are still pretty bonkers, but the training has been slowly improving it:
I alternated between making it play against itself and a bot that simply made random choices. You can see above that the score against the random bot is trending upward.

If you'd like to try it yourself, there are instructions and references at the link above. Feel free to post questions (or high scores!) below.