12 Days IX: Sad boys in snow

After some years, I sat down and took the time to read through Kii Kanna’s manga. The aesthetic was really my thing. The character designs were a bullseye for me, and coupled with the fashion, they baited me hard. It’s the kind of thing that resembles the feel of, say, Hyouka or High Speed. But what really got me after the aesthetic was the subject: the concerns and goings-on of the transient man in his late 20s. As I’ve gotten older, it’s gotten more difficult to find characters that I can really identify with, because the typical character is a high schooler. It’s doubly hard to find such characters paired so perfectly with my aesthetic. So powerful is this combination that it’s already left significant impressions in my life.

The 4th annual π day anime and mathematics post: Traveling teacher time

Korosensei is fast, but can he do better?

All together now,

Korosensei can move at Mach 20 and takes advantage of this fact to make frequent trips around the world. Obviously, in the interest of efficiency, he often takes the opportunity to make multiple stops per trip. But what is the most efficient way for him to do so?

We can think about this using graphs. Suppose all the possible destinations are vertices, and edges between the vertices carry costs, representing the cost of traveling from some destination $u$ to another destination $v$. At first glance, it might make sense to use physical distance as the cost, but that is not necessarily a measure of the “best” way to get there. For instance, taking a bus or a train between two cities may get you to the same place, but the actual cost depends on the path that’s taken, and you may choose to include other criteria like the price of tickets.
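To make this a bit more concrete, here’s a minimal sketch of how we might encode such a graph as a cost matrix. The destinations and the numbers are entirely made up for illustration.

```python
# A toy cost matrix: cost[u][v] is the cost of traveling from destination u
# to destination v. Destinations and costs are made up for illustration.
destinations = ["school", "Tokyo", "Hawaii", "Paris"]

cost = [
    # school  Tokyo  Hawaii  Paris
    [0,       2,     7,      9],   # from school
    [2,       0,     6,      8],   # from Tokyo
    [7,       6,     0,      5],   # from Hawaii
    [9,       8,     5,      0],   # from Paris
]
```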

So the problem is this: Korosensei has a list of places he wants to hit up during lunch hour. What is the most cost-effective way of getting to each of those places exactly once and getting back to school? This is what’s known as the Traveling Salesman problem (TSP).

As it turns out, this problem, like many graph theory problems, is NP-hard (its decision version is NP-complete), meaning it’s quite computationally hard and taxing for computers to solve. Formally, if we’re given a graph $G = (V,E)$ and a cost function $d: V \times V \to \mathbb R$, the problem is to find a cycle that visits every vertex exactly once with the smallest possible cost. Usually, if we have $n$ vertices, then $G$ is the complete graph on $n$ vertices, that is, a graph where every vertex has an edge to every other vertex. Otherwise, it may not even be possible to find such a cycle (and deciding whether such a cycle exists is itself an NP-complete problem, Hamiltonian cycle).

It’s called the Traveling Salesman problem because the idea is that if you’re a traveling salesman, you’d want to find that least-cost cycle because it’d be the most cost-effective way to hit up all the cities you need to go to in order to sell whatever it is that you’re selling. Of course, the hardness of the problem means that our poor salesman or Korosensei is stuck having to check the cost of every cycle. How many are there? Well, since we’re dealing with a complete graph, we can get to any vertex from any other vertex, so every permutation of the vertices gives us a cycle, and there are $n!$ permutations. This means that as $n$ gets larger, the difficulty of solving this problem by brute force grows factorially, which is even worse than exponentially.
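To see just how hopeless this brute-force approach is, here’s a sketch in Python that checks every permutation, using the toy cost matrix from above (again, my own made-up example):

```python
from itertools import permutations

def brute_force_tsp(cost):
    """Try every ordering of the vertices and keep the cheapest cycle.

    cost[u][v] is the cost of the edge from u to v; we fix vertex 0
    (the school) as the start and end point, so we only permute the rest.
    """
    n = len(cost)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        total = sum(cost[tour[i]][tour[i + 1]] for i in range(n))
        if total < best_cost:
            best_tour, best_cost = tour, total
    return best_tour, best_cost

print(brute_force_tsp(cost))  # ((0, 1, 2, 3, 0), 22) with the toy matrix above
```

Fixing the start vertex cuts the count down to $(n-1)!$ permutations, but that’s still hopeless once $n$ gets into the twenties.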

One way we can attempt to make NP-complete problems feasible to solve is to loosen our demands a bit. Right now, we want the absolute least-cost cycle, but what if it’s not necessary for us to get the least-cost cycle? What if we’re okay with getting something that’s within a factor of 2 or 3 or $f(n)$ for some function $f$ that depends on the size of the input? These are called approximation algorithms, in that they approximate the optimal solution by some factor. Unfortunately, this doesn’t work for TSP, at least not unless P = NP.

Why? Because if I have a feasible algorithm for approximating TSP within a factor of $\rho$, then I’ve got an efficient algorithm for solving Hamiltonian cycle, which I mentioned was also NP-complete. Suppose I take any old graph $H$ with $n$ vertices and turn it into a complete graph, with all my original edges having cost 1 and any new edges I add having cost $\rho \cdot n + 1$. I’ll run my approximation algorithm on this and get a cycle for Korosensei to use. But, based on the cost of the cycle I get, I can tell if my original graph had a Hamiltonian cycle. If it did, then the optimal tour costs exactly $n$, so my $\rho$-approximation returns a cycle of cost at most $\rho \cdot n$; such a cycle can’t afford any of the expensive new edges, so it only uses edges of my original graph and is itself a Hamiltonian cycle of $H$. Otherwise, every cycle has to use at least one of the new expensive edges I added, and so has cost at least
$$(\rho \cdot n + 1) + (n-1) = \rho \cdot n + n > \rho \cdot n,$$
which means whatever my approximation algorithm returns costs more than $\rho \cdot n$. So just by looking at the returned cost, I can decide whether the original graph had a Hamiltonian cycle, which shouldn’t be possible to do efficiently.
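Here’s a sketch of that construction in Python, assuming the original graph $H$ is given as a list of edges on vertices $0,\dots,n-1$ (the function name and representation are my own, just for illustration):

```python
def hamiltonian_to_tsp(n, edges, rho):
    """Build the TSP cost matrix used in the hardness argument.

    Edges of the original graph H get cost 1; every missing edge gets
    cost rho * n + 1. A rho-approximation for TSP on this instance
    returns a tour of cost at most rho * n exactly when H has a
    Hamiltonian cycle.
    """
    expensive = rho * n + 1
    cost = [[expensive] * n for _ in range(n)]
    for u, v in edges:
        cost[u][v] = cost[v][u] = 1
    for v in range(n):
        cost[v][v] = 0
    return cost
```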

Now this is a bit discouraging, but Korosensei would encourage us to keep on thinking about how to assassinate the hell out of this problem. In the most general case, the cost function $d$ has no constraints, and in the way that I initially motivated Korosensei’s problem, $d$ can be arbitrary, with costs adjusted to his needs. However, some of the attempts to make TSP more computationally feasible involve making reasonable restrictions on the cost function. This is another fairly standard approach to making computationally hard problems easier to deal with: figure out some special cases of the problem that are still useful but simplify or restrict it a bit.

For example, in most cases, it’s a reasonable expectation that if you’re going from $u$ to $v$, it’s best to go there directly rather than stopping at $w$ on your way there. This is what we call the triangle inequality:
$$d(u,v) \leq d(u,w) + d(w,v).$$
It’s called the triangle inequality because of what I just described: draw a triangle out of $u$, $v$, and $w$, and the direct distance between any two of them should be no longer than the route through the third point.
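As a quick aside, checking whether a given cost matrix actually satisfies this is a short loop. A sketch, using the same matrix convention as the earlier examples:

```python
from itertools import product

def satisfies_triangle_inequality(cost):
    """Return True if cost[u][v] <= cost[u][w] + cost[w][v] for all u, v, w."""
    n = len(cost)
    return all(cost[u][v] <= cost[u][w] + cost[w][v]
               for u, v, w in product(range(n), repeat=3))

print(satisfies_triangle_inequality(cost))  # True for the toy matrix above
```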

I say should because for an arbitrary cost function, this isn’t always the case. One example is flight pricing. Sometimes, it’s more expensive to fly from $u$ to $w$ than it is to fly from $u$ to $v$, even if the flight stops over at $w$. Anyhow, Korosensei doesn’t have to deal with silly things like flight pricing and so we impose this reasonable triangle inequality condition on his cost function. Does it help? It turns out it does and we can get an approximation that’ll be at most twice the cost of the optimal solution.

How? Well, first we find a minimum spanning tree. A spanning tree of a graph is a tree (a graph that contains no cycles) that connects all of the vertices of the graph together. The minimum spanning tree (MST) is the spanning tree with the least weight out of all the spanning trees. The nice thing about MSTs is that we know how to find them efficiently, with classic algorithms like Prim’s or Kruskal’s.

Once we have our MST $T$ of our graph $G$, we double up every edge in $T$ to get a closed walk $T’$. This isn’t quite the cycle we want yet, since it travels along all the same edges twice and visits vertices more than once. But we can get the cycle we really want by skipping every repeat visit to a vertex and going straight to the next new one, and we can do this because the triangle inequality tells us that taking a more direct route won’t increase the cost. So if $C$ is the cycle we end up with, then we have
$$cost(C) \leq cost(T’) \leq 2 \cdot cost(T).$$

Now, to compare to the cost of our actual optimal solution $C^*$, we notice that taking out some edge $e$ of $C^*$ turns it into a path through all the vertices, which is in particular a spanning tree. It’s not necessarily a minimum one, so we’ve got
$$cost(T) \leq cost(C^* \setminus \{ e \}) \leq cost(C^*).$$
Putting that all together, we’ve got
$$cost(C) \leq cost(T’) \leq 2 \cdot cost(T) \leq 2 \cdot cost(C^*)$$
and there’s our 2-approximation.
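Here’s a sketch of the whole double-the-MST-and-shortcut procedure in Python, using Prim’s algorithm for the MST and a preorder walk of the tree for the shortcutting (variable names and structure are my own; it assumes a symmetric cost matrix satisfying the triangle inequality):

```python
import heapq

def mst_2_approx_tsp(cost):
    """2-approximation for metric TSP: build an MST, walk it, shortcut repeats."""
    n = len(cost)

    # Prim's algorithm: grow an MST rooted at vertex 0.
    children = [[] for _ in range(n)]
    in_tree = [False] * n
    heap = [(0, 0, 0)]  # (edge cost, vertex, parent)
    while heap:
        c, v, parent = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        if v != 0:
            children[parent].append(v)
        for w in range(n):
            if not in_tree[w]:
                heapq.heappush(heap, (cost[v][w], w, v))

    # A preorder walk of the tree is exactly the doubled-edge tour with
    # every repeat visit shortcut away.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    tour.append(0)  # and back to the school

    total = sum(cost[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
    return tour, total

print(mst_2_approx_tsp(cost))  # ([0, 1, 2, 3, 0], 22) with the toy matrix above
```

On the toy matrix this happens to find the optimal tour; in general the guarantee is only that it’s within a factor of 2.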

Given how fast Korosensei is, this is probably good enough for him and is a decent tradeoff between the time it takes to solve the problem and his actual travel time.

12 Days V: I don’t really want anyone else to know about this side of you

宮村くん!! | もちゃこ

Two high school students inadvertently discover a different side of the other than they usually present to the rest of their classmates, and now they’re both in on the secret. At first, it’s just fun as they discover more and more about what the other person is really like. This all very naturally segues into a cute romance as the relationship grows. It’s all very nice and fuzzy and funny and adorable, mostly. It’s really easy to forget that the dorky Miyamura we’ve gotten to know is someone who, when we first meet him, has given up on other people. So occasionally, we’re treated to some delving into his past, which serves as a reminder of how far he’s come. Like everything else in this manga, it’s not heavy-handed, and it all comes together naturally. You really can’t help but feel glad that he’s managed to find a bit of happiness now.

12 Days XI: Sad Oreki in school

「夏空と屋上」/「たかな」

Iris Zero is Hyouka, except the Oreki Houtarou here has been bullied since childhood and loves cakes. He gets by on keeping as low a profile as possible, until he’s dragged out of his shell by a familiarly meddlesome cute girl. This was what I turned to when Hyouka finished airing last year. Unfortunately, I wasn’t prepared for the mangaka to fall ill and go on hiatus for a year and a half. Thank goodness it’s back as of this October so it can continue to feed me while I wait in vain for KyoAni to animate more Hyouka.

Suugaku Girl supplementary handout for chapter 2: Prime numbers

She doesn't seem that excited

So last time, Tetra was being enlightened by MC-kun about definitions. This actually arises from MC-kun using prime numbers as a motivating example.

Primes are mega important in mathematics and even more important today. The entire branch of mathematics called number theory is, in large part, about studying the properties of prime numbers. They’re so useful that we’ve done stuff like extend the notion of prime elements to algebraic structures called rings or apply analytic techniques to learn more about them, but we’ll stick with elementary number theory for now.

Now, for hundreds of years, we’d been studying number theory only because it’s cool and mathematicians love prime numbers. Last time, I mentioned some examples of math preceding useful applications. Well, number theory is a really good example of that, because in the 70s, we found a use for it, which is its main use today, in cryptography. There have been some new techniques using some algebra as well, but for the most part, modern cryptography relies on the hardness of factoring large numbers into their prime factors. Neat!

Okay, so we’re back to the original question that MC-kun tries to get Tetra to answer, which is, what is a prime number?

Definition. An integer $p$ is prime if and only if $p\geq 2$ and the only positive divisors of $p$ are 1 and itself.
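That definition translates pretty directly into code. A quick sketch using trial division, which is perfectly fine for small numbers:

```python
def is_prime(p):
    """Check primality straight from the definition, by trial division."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:      # a composite p has a divisor at most sqrt(p)
        if p % d == 0:
            return False
        d += 1
    return True

print([n for n in range(1, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```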

MC-kun explains that the motivation for excluding 1 from the definition of a prime number is that we want to be able to say that we can write every number as a unique product of prime numbers. This is very useful, because now we know we can break down every number like this, and we can tell numbers apart because they’re guaranteed to have a unique representation. This is called unique prime factorization.

Theorem. Let $a \geq 2$ be an integer. Then we can write $a = p_1p_2\cdots p_k$ for some primes $p_1,\dots,p_k$. This representation is unique up to changing the order of the factors.

We can show the existence part by strong induction on $a$. For $a=2$, it’s pretty obvious: 2 is prime, so it’s its own decomposition. So let’s say that every integer $2 \leq k \lt a$ can be decomposed like this. If $a$ itself is already a prime, then it’s just its own prime decomposition. Otherwise, we can factor $a=cd$ for some integers $c$ and $d$ with $1 \lt c,d \lt a$. But both $c$ and $d$ are less than $a$, which means they can each be written as a product of primes, so we just split them up into their primes and multiply them all together to get $a$. Tada. (Uniqueness takes a bit more work; the key fact is that if a prime divides a product, it has to divide one of the factors.)
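The existence half of that proof is basically an algorithm: keep splitting off the smallest factor you can find. A sketch:

```python
def prime_factors(a):
    """Return the prime factorization of an integer a >= 2 as a sorted list."""
    factors = []
    d = 2
    while d * d <= a:
        while a % d == 0:   # split off d as many times as it divides a
            factors.append(d)
            a //= d
        d += 1
    if a > 1:               # whatever is left over is itself prime
        factors.append(a)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```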

As a sort of side note, I mentioned before that primes are so useful that we wanted to be able to extend the idea of prime elements into rings. Well, it turns out that for certain rings, it isn’t necessarily true that elements will always have a unique representation when decomposed into primes. This is something that comes up in algebraic number theory, which is so named because it involves algebraic structures and techniques. It was developed while we were trying to figure out whether Fermat’s Last Theorem was actually true (the eventual proof needed this and other fun mathematical inventions from the last century, which implies that Fermat was full of shit when he said he had a proof).

So at the end of the chapter, after Tetra gets her chair kicked over by the megane math girl, we’re treated to a note, a sort of coda to the chapter, mentioning that there are infinitely many primes. How do we know this?

Suppose that there are only finitely many primes. Then we can just list all of the prime numbers, like on Wikipedia or something. So we’ve got our list of primes $p_1,p_2,\dots,p_k$. So let’s make a number like $N=1+p_1\cdots p_k$. Well, that number is just a regular old number, so we can break it down into its prime factors. We already know all the primes, so it has to be divisible by one of them, let’s say $p_i$.

Now we want to consider the greatest common divisor of two numbers, which is just the largest number that divides both of them. We’ll denote this by $\gcd(a,b)$. So since $p_i$ is a factor of $N$, we’ve got $\gcd(N,p_i)=p_i$. But we can also write $N = m \cdot p_i + 1$, where $m$ is the product of all the other primes on the list, which gives us $p_i=\gcd(N,p_i)=\gcd(p_i,1)=1$ by a lemma that says that for $a=qb+r$, we have $\gcd(a,b)=\gcd(b,r)$. This means that we have $p_i=1$, which is a contradiction, since 1 isn’t a prime number, and so I guess there are actually infinitely many primes.
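You can watch this argument play out numerically: take any finite list of primes, form $N$, and check that none of them divide it. A small sketch (using the standard library’s gcd and prod):

```python
from math import gcd, prod

primes = [2, 3, 5, 7, 11, 13]   # pretend this were the complete list of primes
N = 1 + prod(primes)            # N = 30031

# No prime on the list divides N, so its prime factors must be new primes.
print(all(gcd(N, p) == 1 for p in primes))  # True
```

Note that $N$ itself doesn’t have to be prime (here $30031 = 59 \times 509$); the point is just that its prime factors can’t already be on the list.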

So the nice thing is that we won’t run out of prime numbers anytime soon, which is very useful because as we get more and more computing power, we’ll have to increase the size of the keys we use in our cryptosystems. Luckily, because factoring is so hard, we don’t need to increase that size very much before we’re safe for a while. Or at least until we develop practical quantum computers.