# Kenji the quantum computer, Part 2

*He will use this knowledge in less than 24 hours.*

So back when I was moving in at the start of the school year, I decided to try out my BD player on my hot new Dell monitor. Since I didn’t want to take out my Kara no Kyoukai BDs from their box, I opted to crack open the cheap flimsy Summer Wars case and throw in the disc. I hadn’t actually tried playing it yet anyway. I skipped the OZ introduction and went straight to the opening. First of all, I was disappointed that Funimation set the credits in goddamn Arial. Seriously, what? I find it difficult to believe they didn’t already have Helvetica from one of the countless projects they could’ve used it on.

More importantly, Funimation translated the chapter title of the book that Kenji was reading on the train. It turns out he was reading about Shor’s Algorithm. Shor’s Algorithm is the quantum algorithm that can factor integers in polynomial time. I alluded to it when I talked about Summer Wars and cryptography a while back and now’s as good a time as ever to get into it, since, you know, this means Kenji really is a quantum computer and all.

In quantum computing, instead of dealing with bits that are 1s and 0s, we deal with quantum bits, or qubits, that are denoted $|1\rangle$ and $|0\rangle$. These are really just quantum states. Quantum states have interesting properties, the most famous of which is being in superposition. Essentially, what this means is that we can have qubits in a state that represents 1 and 0 simultaneously.

This turns out to be a very neat property to have. If we’ve got $n$ qubits, then we’ve got like $2^n$ inputs simultaneously. That’s great, right? We can compute things exponentially fast now! Well, it turns out it isn’t that simple.

Suppose that we do end up running an algorithm or something on our $2^n$ states. What happens when we want an answer? Well, we measure the state. The problem with that is that on measuring a quantum state, you won’t get all $2^n$ results. You’ll only get one of those results with a certain probability. What’s more is that once you measure a quantum state, you’re stuck with whatever it gave you and you can’t go back and check for another result.

That is obviously no good, because we want not just any answer, but, you know, the right answer. So it looks like approaching our problem from the naive interpretation doesn’t seem to work. And in fact, that immediately tells us that quantum computers aren’t a magic bullet that can help us solve SAT in a reasonable amount of time.

What we can do is manipulate our resultant quantum state so that the actual correct result we want is the one with very, very high probability and reduce the probability of getting any of the other results to near-zero. Of course, the problem is that we don’t know which result is the correct one to begin with. It’s actually this exact hitch that prevents us from efficiently solving general NP-complete problems.

Luckily, if we want to factor some numbers, we can take advantage of this. We can reduce the problem of factoring any old number into prime factors to the problem of order finding, which I’d mentioned before. For an integer $N$ and an integer $k$ that shares no factors with $N$, there’s another integer $m$ such that $k^m=1\bmod N$ and we call $m$ the order of $k$ in the integers modulo $N$.

So suppose we have $k\bmod N$ with order $m$ and we decide to keep on taking powers of it, like $k\bmod N,k^2\bmod N, k^3\bmod N$ and so on. Eventually, we’ll get up to $k^m\bmod N$, at which point, that’s just $1$ and we start our cycle over again. The thing to note here is that this sequence is finite and periodic, which narrows down what we have to search through.
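To make the cycling concrete, here’s a brute-force order finder, a sketch of my own (not part of Shor’s algorithm; it takes exponentially many steps in the number of digits of $N$, which is exactly the problem the quantum part gets around):

```python
# Brute-force order finding: keep multiplying by k mod N until we hit 1 again.
def order(k, N):
    """Return the smallest m >= 1 with k**m = 1 (mod N).

    Assumes k and N share no factors, so such an m exists."""
    value = k % N
    m = 1
    while value != 1:
        value = (value * k) % N
        m += 1
    return m

# The powers of 2 mod 15 cycle with period 4: 2, 4, 8, 1, 2, 4, 8, 1, ...
print(order(2, 15))                          # 4
print([pow(2, i, 15) for i in range(1, 9)])  # [2, 4, 8, 1, 2, 4, 8, 1]
```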

So what do we do? We create a quantum state for our period that’s in a superposition over all the powers of $k$ in $\bmod N$. We don’t know what our period is (that’s what we’re trying to find), so we just choose a large enough one that our order will be in the period. Now, in order to find this order, we do some magic with a Quantum Fourier Transform.

What?

So I mentioned at the beginning that when we look at a quantum state in superposition, we can think of it as returning one of the possible states with a certain amount of probability. How the probabilities for these quantum states work is actually more complicated and it’s from this that we can massage an answer out of our magic quantum algorithm.

Quantum states have something called amplitudes, which are kind of like probabilities, except that amplitudes can be basically any complex number as long as all of the amplitudes $a_i$ for a quantum state satisfy $\sum|a_i|^2=1$. If we’re just measuring a quantum state, this really doesn’t change anything: we just get state $|i\rangle$ back with probability $|a_i|^2$. But when we’re manipulating quantum states, there’s more to the amplitudes to consider.

If we think about amplitudes in terms of waves, we can see that we can cancel waves out. For instance, the states $\alpha|1\rangle$ and $-\alpha|1\rangle$ have the same probability if we were to measure them (since $|\alpha|^2=|-\alpha|^2$), but if we put them together and add them, they’d cancel each other out. What’s more, since amplitudes are complex numbers, we can think of them as having a direction (like a vector on the complex plane) and we end up with weird stuff like having things partially cancel out.
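Here’s a toy illustration of that using nothing but Python’s built-in complex numbers (the particular amplitude $\alpha$ is just one I picked for the example):

```python
# An amplitude is a complex number; measurement only sees its squared magnitude.
alpha = (1 + 1j) / 2          # a complex amplitude with |alpha|^2 = 1/2

prob = abs(alpha) ** 2        # measurement probability from an amplitude
prob_neg = abs(-alpha) ** 2   # the sign is invisible to measurement...

print(prob, prob_neg)         # both are about 0.5
print(alpha + (-alpha))       # ...but the amplitudes themselves cancel: 0j
```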

So just like a regular Fourier transform extracts some information about frequencies from a sequence, we can use the Quantum Fourier Transform to extract some information about all of this amplitude stuff that we can’t see just by measuring the state. And this is exactly how we can manipulate our state so that it increases the probability of the answer we want when we go to measure it and depresses the probabilities of all the other answers.

Now after rambling about amplitudes and Fourier transforms and orders, we should probably step back and see what we’ve done in perspective. What we have is a way to find the order $m$ of an element $k \bmod N$. Well, actually, we don’t even have that quite yet. What we get out of all of that quantum stuff is just some information about $m$ that we have to manipulate further, except we do that with plain old number theory instead of quantum computation. But what we’ve kind of stumbled through is a vague description of Shor’s algorithm.

Like I mentioned in the first Summer Wars post, this is the only way we know of so far that lets us factor large numbers (especially the ones that are used in cryptography) within a feasible amount of time. Again, Kenji (or at least his brain) is basically a quantum computer. Coincidentally, I played around a bit with what the implications could be if our brains were quantum computers in an earlier post on Kaiba.

But even if Kenji isn’t a quantum computer, he’s goddamn smart for a high school student. The theory behind the computing part of quantum computing is really just a ton of linear algebra, so an understanding of quantum mechanics isn’t really necessary (which is great, because I don’t know any of that crap). The quantum computing course that I took in my undergrad only had second year linear algebra as a formal prerequisite, but that was mainly to allow students from different departments to take it and also implies that you’ve got first and second year math down at the very least.

# Suugaku Girl supplementary handout for chapter 2: On definitions

*An introduction to mathematical writing*

In chapter 2 of Suugaku Girl, we’re introduced to the third component of the little love triangle that’s forming. Tetra is the underclassman that the main character is tutoring and she’s one of the many people who think they might like math but school eventually beats that silly thought out of them.

Anyone who’s taken a real math class will know that by the end of it, your notes are essentially a giant list of definitions, theorems, and proofs along with an example thrown in once in a while. By a real math class, I mean a math class that’s actually concerned about reasoning about how we get our theorems instead of focusing on how to use them for practical applications. I did my undergrad at a school where math has its own faculty and where there’s a lot of rivalry between the various faculties. Everyone in math often jokes about how an example suffices as a proof for engineers and their ilk.

> I checked your textbook for a proof and it said that we’ve done enough examples for it to be plausible. Must have been written by engineers.
>
> — Vanderburg, PMATH 340 (F09)

Here, Tetra falls into the same trap when the main character asks her to define what prime numbers are. So MC-kun has to explain how definitions work. First of all, definitions have to be precise, just like theorems. It’s pretty easy to lose marks for misstating theorems and definitions by leaving off an edge case for 0 or 1 or something silly like that. Of course, that leads Tetra to question why these things need to be so precise and arbitrary.

One of the things I realized about math is that it’s all about trying to do things with the definitions you start out with. You can kind of see that in how the number system is built up. We can start with plain old numbers that we use to count things. And then we can add things to it. Like, what happens if we have less of a thing, how would we represent that? Tada, we’ve got negative numbers. Okay, now what if we have a part of a thing? Now we’ve got fractions. And so on and so forth. We realize that not every number can be written as a fraction and suddenly we’ve defined real numbers.

I’m sure we’ve all wondered why complex numbers work out the way they do. It’s because we’ve defined everything to work out like that. Some guy tried to take the square root of a negative number and found that it didn’t work out very well, so we defined the square root of -1 to be $i$. Now we have this $i$ thing, what can we do with it? Well, we can just start writing crazy things like $3 + i + 4i^2 + 5i^3 + 9i^4 + 2i^5 + 6i^6$, but then we realize that it just becomes $2-2i$. Well, okay that’s kind of neat.
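We can even check that simplification with Python’s built-in complex type, where $i$ is written `1j`:

```python
# Verifying that 3 + i + 4i^2 + 5i^3 + 9i^4 + 2i^5 + 6i^6 collapses to 2 - 2i.
i = 1j
z = 3 + i + 4 * i**2 + 5 * i**3 + 9 * i**4 + 2 * i**5 + 6 * i**6
print(z)  # (2-2j)
```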

But now we know that all complex numbers can be written as $a+bi$ with $a,b\in\mathbb{R}$. So someone somewhere along the line must’ve been like, what would happen if we tried to graph these things? So we treat $a$ as the $x$ component and $b$ as the $y$ component and it turns out we can think of complex numbers as other structures like a vector or just a 2-tuple or something. And suddenly, this gives us a way to compare complex numbers, by taking the length of the vector that they define. And now that we have vectors, we can do some weird geometry stuff with them. We can think of these things as the length of the vector and the angle that they form. And then you can go crazy and talk about roots of unity or what multiplication of complex numbers might mean.

The point is that all of mathematics is built up like this. You start off with some definitions or premises and you go nuts. Of course, you don’t have to use the same definitions as everyone else. The problem with doing that is you might not end up with anything interesting. Like, what if we did something nuts and defined 1 to be a prime number like so many people do when they leave off that condition? Well, not much would change; we’d probably just rephrase all of our theorems to exclude 1.

On the other hand, sometimes when we play around with definitions, we do end up with something interesting. For instance, we can define something called the extended complex numbers, which is just the set $\mathbb{C}\cup\{\infty\}$. Yep, we just say okay, infinity is a number now, deal with it. So what can we do now?

Well, we can divide by 0 now.

I imagine there might be a few people who might flip out at this notion, but yes, since $\infty$ is included in our number system, we can define $\frac{1}{0}=\infty$. Of course, we can’t do everything — $0\times\infty$ and $\infty-\infty$ still don’t mean anything. But if you’ve been paying attention, you might be going, okay well, we can divide by 0, but what else can we do? Dividing by 0 is kinda meaningless if there’s nothing new we can do.

As it turns out, the extended complex numbers define a very different geometric object. If we remember from the example of the complex numbers, we can basically define every complex number as a vector over a two-dimensional plane. Here, with the extended complex numbers, we can identify every number with a point on something called the Riemann sphere. And like the complex plane, this sphere lets us create weird relationships between numbers and angles and stuff. This turns out to have interesting properties in complex analysis and quantum mechanics.

So yes, definitions are often arbitrary. Why? Because it’s just useful and interesting that way. You could argue that it’s because nature forces us to define things a certain way. Kind of like, of course you can’t take a square root of a negative number, you just can’t do it! What happens, though, is that we always seem to end up finding useful things that line up with our mathematics rather than inventing our mathematics to do useful things with. After all, mathematicians had been playing around with imaginary numbers for a couple of centuries before electromagnetism was even discovered.

# Suugaku Girl supplementary handout for chapter 1: Sequences

*That really isn't enough terms to identify as Fibonacci*

Suugaku Girl is a manga about people who really like math. The great thing about it is that it contains a substantial amount of math. This is also great because it will give me plenty to blog about.

The first chapter of Suugaku Girl is about the main character’s initial meeting with Milka, our socially retarded genius meganekko. She just starts spouting out numbers and, for whatever reason, he feels compelled to guess what comes next. And that is apparently the beginning of this love story.

In the chapter, we’re introduced to a few sequences, some of which are famous and some of which you might not have considered a sequence. It’ll probably help to understand what a sequence is beyond just being a bunch of numbers in a prescribed order.

Formally, we define a sequence as a function $a:S\to R$. The set $S$ is basically our index and is $\{1,2,\dots,n\}$ if $a$ is a finite sequence or the set of natural numbers $\mathbb{N}$ if we’re dealing with an infinite sequence. Normally, functions are written as $a(n)$, but, as was alluded to before, we refer to terms by index as in $a_n$.

However, $R$ can be any set. In the case of Suugaku Girl, it seems to be sticking with integers, but we can have sequences of bitstrings, vectors, complex numbers, functions, or whatever. What this also means is that a sequence doesn’t necessarily need to have a “pattern”, but can really be any ordered list of numbers (or functions or vectors or…).

Milka also brings up the idea of infinite sequences. A lot of the time, people will try to “solve” a sequence by completing it when they’re given only the first few terms. But, like Milka suggests, what they’re doing in that case is assuming that the rest of the sequence goes on as initially implied. Really, we can define any sequence we like with any behaviour we like. Again, remember that a sequence can be anything we want it to be. In fact, the sequence that Milka defines using the digits of π is kind of like that in that it’s completely arbitrary and doesn’t really have the kind of sequence definitions we’re used to seeing.

For finite sequences, we can just list all of the terms of our sequence like $(1,1,1,1,1,1000000,1)$. Obviously, we can’t do that for an infinite sequence. Luckily, a lot of the time we define sequences that have some sort of useful pattern that we can represent in a succinct way. Sometimes, like the digits of π sequence, this is harder.

We can try to formally define all of the sequences that were given in the manga. For instance, the Fibonacci sequence $(a_n)_{n=1}^\infty$ is commonly defined as $a_n = a_{n-2} + a_{n-1}$, but we have to give the first two terms $a_1 = 1, a_2 = 1$. The second sequence (which we’ll call $(b_n)_{n=1}^\infty$) takes a bit more work to define. We’ll need to define $p_n$ to be the $n$th prime number and then we can define $b_n = p_n \times p_{n+1}$. The third sequence $(c_n)_{n=1}^\infty$ is easy, it’s just $c_n = n^n$. And we can formalize the last one, which I’ll call $(d_n)_{n=1}^\infty$, just like the first two with a few more words. We let $\pi = q_1q_2q_3\cdots$ be the decimal expansion of π. Then $d_n = 2\cdot q_n$.
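For fun, here’s a sketch of all four sequences in Python. The helper names (`is_prime`, `primes`) are mine, not the manga’s, and I’ve hard-coded the first few digits of π rather than computing them:

```python
from itertools import count, islice
from math import isqrt

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def primes():
    return (n for n in count(2) if is_prime(n))

# (a_n): Fibonacci, a_1 = a_2 = 1, a_n = a_{n-2} + a_{n-1}
def fib():
    a, b_ = 1, 1
    while True:
        yield a
        a, b_ = b_, a + b_

# (b_n): products of consecutive primes, b_n = p_n * p_{n+1}
def b():
    ps = primes()
    p = next(ps)
    for q in ps:
        yield p * q
        p = q

# (c_n): c_n = n^n
c = (n ** n for n in count(1))

# (d_n): doubled digits of pi, d_n = 2 * q_n (digits hard-coded)
PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6]
d = (2 * q for q in PI_DIGITS)

print(list(islice(fib(), 6)))  # [1, 1, 2, 3, 5, 8]
print(list(islice(b(), 4)))    # [6, 15, 35, 77]
print(list(islice(c, 4)))      # [1, 4, 27, 256]
print(list(d))                 # [6, 2, 8, 2, 10, 18, 4, 12]
```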

So that’s all fine, but what exactly are sequences used for? I’m pretty sure everyone’s learned about arithmetic and geometric sequences in grade school. Obviously, we can study sequences and their behaviour on their own. We can talk about whether they increase or decrease or how fast they grow or whether they converge. Apart from that, I don’t remember seeing sequences used for much beyond studying sequences themselves until analysis.

Analysis is basically the field of pure math that formalizes the concepts that we’re introduced to in calculus and generalizes them to more abstract spaces. Limits are a fundamental idea in calculus and analysis and these are defined by how a sequence converges. And this is where those weird sequences of vectors or functions come into play, since we can talk about the convergence of a sequence of vectors or a sequence of functions in these other spaces that we want to do analysis in.

That’s probably the easiest example of an “application” of sequences. For myself, over the last few months I’ve read about automatic sequences, which are sequences that can be generated by a deterministic finite automaton. This gives us a way to relate automata theory to number theory and algebra. For instance, once we have k-automatic sequences, we can talk about k-regular sequences and come up with power series with respect to certain rings and fields and bla bla bla.

If you ever want to find out what crazy sequence you might have part of, check out the Online Encyclopedia of Integer Sequences.

# The Future Gadget Lab vs. Kolmogorov complexity

In Steins;Gate 12, the Future Gadget Lab is faced with a problem. They need to send a copy of a person’s memory, or about 3.24 TB, to the past, but they can only send 36 bytes. What is a MADDO SCIENTISTO to do? What they end up doing is sending that data to the Large Hadron Collider to compress with a black hole. Problem solved! Or is it?

Obviously, using a black hole to compress data is silly, since a black hole would probably just destroy the data. In fact, it’s probably unnecessary, since if it were physically possible to achieve that kind of compression, you’d be able to do it on any computer because of the Church-Turing thesis. The only thing that would change is how quickly the algorithm would run. Luckily, in theoretical computer science, when dealing with the question of whether something is possible or not, we don’t care how long it’ll take. So, is there a compression algorithm that can give us the desired result?

Basically, what we’re doing when we’re compressing stuff is we’re trying to rewrite a bunch of zeroes and ones, or strings, into a smaller bunch of zeroes and ones that mean the same thing. In general, this is impossible to do, since there are fewer strings of zeroes and ones when you have fewer digits, so you can’t arbitrarily stuff, say, eight bits of information into seven bits. What you can do is create a way to describe the zeroes and ones in a way that is smaller than what you start out with.

For instance, we can describe strings of zeroes and ones in plain English. It’s shorter to write “one hundred zeroes followed by one hundred ones” than writing

00000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000
11111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111

The obvious problem with our compression scheme so far is that there are some strings that are not very easy to describe and it might be longer to describe in English than it would be to just write the bits themselves.
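The same idea shows up with a real compressor. A quick Python sketch with `zlib`: the very regular string from above shrinks dramatically, while a random string of the same length basically doesn’t compress at all:

```python
import os
import zlib

# A highly regular string has a short description...
regular = b"0" * 100 + b"1" * 100
# ...while random bytes have no pattern to exploit.
random_ish = os.urandom(200)

print(len(zlib.compress(regular)))     # far smaller than 200 bytes
print(len(zlib.compress(random_ish)))  # about 200 bytes, or even a bit more
```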

Again, it appears to be difficult to create a compression scheme that works for any old string. It’s much easier to create such an algorithm when you know something about the kind of string you want to compress, like if it’s a certain size or if it has any patterns or something. In the case of the FG lab, we know they won’t need to worry about compressing strings of length 288 or less. Unfortunately, there isn’t much more we know about the strings they want to compress other than the fact that they’re huge.

Actually, there’s another problem, which is that they need a way to decompress the string they’ve sent. If I give you a description of the string in English, it’s not going to do you any good unless you already know English. So in addition to sending a compressed string, they need to send information about the compression scheme they used, all in 36 bytes.

In information theory, the length of the shortest description of a string $x$ is its Kolmogorov complexity, which we’ll denote $C(x)$. Formally, $C(x)$ is defined in terms of encodings of Turing machines that generate $x$, but it’s pretty easy to argue that the choice of how we describe $x$ doesn’t matter up to a constant term.

We know that $C(x) \leq |x| + O(1)$ (where $|x|$ is the length of $x$), since we can always just write out $x$. We also know that for every length $n$, we can always find a string $x$ such that $C(x) \geq |x|$ since there are $2^n$ strings of length $n$ but only $2^n - 1$ descriptions shorter than $n$ bits. These strings that can’t be compressed are defined to be random.
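The counting behind that claim is simple enough to spell out in a few lines of Python ($n = 8$ is just my arbitrary pick):

```python
# Any lossless compressor maps strings to descriptions injectively, but there
# aren't enough short descriptions to go around: pigeonhole does the rest.
n = 8
strings = 2 ** n                         # 256 strings of length n
shorter = sum(2 ** i for i in range(n))  # 2^0 + ... + 2^(n-1) = 2^n - 1 = 255
print(strings, shorter)  # 256 255 -- at least one string can't be shortened
```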

We can quantify this idea of compressibility by saying that a string $x$ is $c$-compressible if $C(x) \leq |x| - c$. And so, we can argue about how likely it is that we have a string that can be compressed to the degree that we want. For strings of length $n$, we have that at least $2^n - 2^{n-c+1} + 1$ of them are incompressible by $c$.

We have some numbers, so let’s play around with them. Remember from earlier that the approximate size of Okarin’s memories is 3.24 TB, or $3.24 \times 2^{40} \times 8$ bits, while the limit we can send is 36 $\times$ 8 = 288 bits. If we want to squeeze Okarin’s memories into a text message, we’ll have
$$c = 3.24 \times 2^{40} \times 8 - 288 = 2.849934139166592 \times 10^{13}.$$
WolframAlpha helpfully tells me that this is about 1.4 times the number of red blood cells in the human body.

We can calculate the probability that we have a compressible string by figuring out the probability that it’s incompressible, which we can do with our lower bound on the number of incompressible strings we have. Taking $n = 3.24 \times 2^{40} \times 8$, the probability is given by
$$1 - \frac{2^n - 2^{n-c+1} + 1}{2^n} = \frac{1}{2^{c-1}} - \frac{1}{2^n}.$$
Plugging our numbers in, we get
$$\frac{1}{2^{3.24 \times 2^{40} \times 8 - 288 - 1}} - \frac{1}{2^{3.24 \times 8 \times 2^{40}}}.$$

And unfortunately, after this step, WolframAlpha kind of refused to try to do the computation. I gave it a shot on my laptop in Python but all that happened was CPU usage went up to 100% for a few minutes and it started eating all of my memory. I kind of wanted to keep using my computer instead of waiting it out for a MemoryError exception, so I interrupted it. Doing some further estimation, I threw $2^{2^{40}}$ at WolframAlpha, since it’s kind of in the same neighbourhood in magnitude as the number we’re trying to get. Apparently, it’s a number that’s about 300 billion digits long, so one in a number that size is the kind of odds you’re looking at.
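Actually, we don’t need bignums (or WolframAlpha) to size up $2^{c-1}$, since the number of decimal digits of $2^k$ is just $\lfloor k \log_{10} 2 \rfloor + 1$. A quick sketch:

```python
# Sizing up 2^(c-1) with logarithms instead of actually computing it.
from math import floor, log10

c = 3.24 * 2**40 * 8 - 288          # the compression factor from above
digits = floor((c - 1) * log10(2)) + 1
print(digits)                        # roughly 8.6 trillion digits
```

So the actual denominator has around 8.6 trillion digits, which makes the odds even more hopeless than the 300-billion-digit stand-in suggests.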

I guess this is another way of saying there’s no chance in hell that what the FG Lab tried to do would ever work. And remember that we’re not talking about any particular compression algorithms or technologies here. We can’t come up with a better algorithm or scheme. We can’t use faster computers. We can’t use quantum computers (Holevo’s theorem says that we can’t, in general, use fewer qubits than classical bits for the same string). From an information theoretic standpoint, it is impossible.

But I’m pretty sure everyone already could’ve guessed that anyway.

# The (1st annual?) π day anime and mathematics lecture

Happy $\pi$ day. We’ll begin with an obligatory video.

One of the reasons I enjoyed Summer Wars so much is because the main character’s superpower is math. Well, okay, you say, he’s really good at math, but so what? A lot of people complain about the implausibility of OZ, but those of us with a basic understanding of cryptography and number theory will have been drawn to Kenji’s quick problem solving work with an eyebrow raised. So let’s talk about why Kenji is a wizard.

*Kenji doing some friggin mathematics*

We’ll start with modular arithmetic, which Kenji mentions to Natsuki on the train ride to Ueda. When we divide numbers, we often end up with remainders. Suppose we divide some integer $k$ by $N$ and we get a remainder of $r$. Then we say that $k$ and $r$ are equivalent $\bmod{N}$ and we denote that by $k = r \bmod{N}$. Because of how division works, for any integer $k$, $r$ will be some number from $0$ to $N-1$. It turns out a lot of arithmetic operations work the same way in modular arithmetic: adding, subtracting, and multiplying numbers and then taking the modulus of the result will give you the same number as adding, subtracting, and multiplying the moduli of the numbers you started out with.
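A quick sanity check of that property in Python (the particular numbers are just ones I picked):

```python
# Reducing mod N before or after +, -, and * gives the same result.
N = 12
a, b = 35, 58

assert (a + b) % N == ((a % N) + (b % N)) % N
assert (a - b) % N == ((a % N) - (b % N)) % N
assert (a * b) % N == ((a % N) * (b % N)) % N
print((a * b) % N)  # 35 * 58 = 2030, and 2030 = 2 (mod 12)
```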

However, division doesn’t work as we would expect it to. So we have to think about division (or the equivalent operation) differently. Instead of thinking of division as splitting a group of stuff into smaller groups, we’ll think of it as multiplying by an inverse. What’s an inverse? Well, we can try thinking of it in terms of addition. It’s pretty intuitive that subtraction is the opposite of addition. If we have some integer $k$, then the additive inverse of $k$ is $-k$. When we add $k$ and $-k$, we get $0$, the additive identity. The identity is just the special number that we can add to anything and get that same thing back unchanged ($n+0 = n$). In the same way, if we multiply $k$ by its inverse, $k^{-1}$, then we’ll get $1$, since $k \times 1$ is just $k$ again. What this means is that the inverse of $k \bmod{N}$ is just some other number $j$ from $0$ to $N-1$ such that $j\cdot k = 1 \bmod{N}$ and it’s just multiplication again.
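Here’s what that looks like in Python. The brute-force loop (the helper name `inverse` is mine) shows what an inverse means, and since Python 3.8 the built-in `pow` will also compute one directly when you hand it a negative exponent:

```python
# Finding the multiplicative inverse of k mod N by exhaustive search.
def inverse(k, N):
    for j in range(1, N):
        if (j * k) % N == 1:
            return j
    raise ValueError(f"{k} has no inverse mod {N}")

print(inverse(3, 7))  # 5, since 3 * 5 = 15 = 1 (mod 7)
print(pow(3, -1, 7))  # same answer from the built-in
```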

Now, the problem with this is that it’s not guaranteed that there’s always an inverse hanging around in $\bmod{N}$. In particular, if $k$ and $N$ share any divisors greater than 1, then $k$ won’t have an inverse $\bmod{N}$. This is interesting because it also tells us that if we consider integers mod a prime number $P$, then every nonzero integer $\bmod{P}$ has an inverse, since $P$ doesn’t share any divisors with any of the integers from $1$ to $P-1$. We call these things that have inverses units. So if we have a unit $k$, then $k^m$ is also a unit, for any integer $m$. We even have a funny function $\phi$ defined such that $\phi(n)$ is the number of units in $\bmod{n}$.

*Love Machine solicits help*
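A small sketch of units and $\phi$ in Python (the helper name `units` is mine): $k$ is a unit mod $N$ exactly when $\gcd(k, N) = 1$, and $\phi(N)$ just counts them.

```python
from math import gcd

# The units mod N are the residues sharing no factor with N.
def units(N):
    return [k for k in range(1, N) if gcd(k, N) == 1]

print(units(12))      # [1, 5, 7, 11], so phi(12) = 4
print(len(units(7)))  # 6 = 7 - 1, as expected for a prime
```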

So taking everything we’ve learned, we can set up a cryptosystem! The one we’ll be looking at is called RSA, after the guys who invented it. We have Bob who wants to securely send a message $M$ to Alice. Alice chooses two prime numbers $p$ and $q$ and figures out $m = pq$. She also goes and figures out $\phi(m)$, which happens to be $(p-1)(q-1)$. Finally, she picks some integer $k$, a unit in $\bmod{\phi(m)}$. She lets everyone know $m$ and $k$, but she keeps $p$, $q$, and $\phi(m)$ secret.

So Bob wants to send $M$, which is just his message conveniently in number form. He makes $M$ into a number between $0$ and $m$, and if $M$ is too big, he can just break it up into chunks. Bob figures out the smallest $b$ such that $b = M^k \bmod{m}$ and sends $b$ over to Alice. Now, since Alice has $k$ and $\phi(m)$, she can also find $k^{-1}$ pretty easily. Once she has that, she can get the original message by figuring out $b^{k^{-1}} = (M^k)^{k^{-1}} = M \bmod{m}$, since $kk^{-1} = 1 \bmod \phi(m)$.
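Here’s the whole scheme run end-to-end in Python with comically small numbers (the standard textbook example $p = 61$, $q = 53$; real RSA also needs padding and primes hundreds of digits long, so this is just the number theory):

```python
# Alice's setup
p, q = 61, 53
m = p * q                # 3233, published
phi = (p - 1) * (q - 1)  # 3120, kept secret
k = 17                   # published; a unit mod phi

# Bob encrypts his message M
M = 65                   # a number between 0 and m
b = pow(M, k, m)         # Bob sends b = M^k mod m

# Alice decrypts with the inverse of k mod phi(m)
k_inv = pow(k, -1, phi)  # k * k_inv = 1 (mod phi)
print(pow(b, k_inv, m))  # 65 -- Alice recovers M
```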

The interesting thing here is that all of the information is out there for someone to encrypt a message to send to Alice, but no one is able to decrypt it. Well, they’re able to decrypt it if they know what $p$ and $q$ are, since once they’ve got that, they can get $\phi(m)$. But it turns out getting $p$ and $q$ from $m$ (which Alice just throws up on the interwebs) is really hard. And it really works for reals, because RSA is pretty widely deployed for things like keeping your credit card information safe while you send it through the tubes to Amazon.

*A conveniently displayed ciphertext*

Let’s go back and think about units some more. Of course, there are only $N$ numbers in the integers $\bmod{N}$, so there’s a point at which $k^m$ is just $1$ again and the powers start over. If $m$ is the smallest such power with $k^m = 1 \bmod{N}$, we say that $m$ is the order of $k$. But why do we care about finding the order of $k$?

It turns out finding the order of elements is very, very similar to factoring an integer into primes and other related problems, like discrete logarithms. If we can find orders of elements, it won’t be too hard to figure out how to factor a number. In this case, the eavesdropper wants to figure out what $p$ and $q$ are, so they’ll want to factor $m$. And it turns out a lot of other public-key cryptosystems are based on the difficulty of related problems, like the discrete logarithm (which is what elliptic curve cryptography relies on).

How hard could it be? Well, we could just check every possibility, which doesn’t seem that bad for a number like 48, but once we get into numbers that are hundreds of digits long, that might start to suck. It turns out the fastest known algorithms for order finding take approximately $e^{O((\log N \log \log N)^{\frac{1}{2}})}$ steps. Current key lengths for RSA are at least 1024 bits, which would give us about $4.4 \times 10^{29}$ operations. Assuming three trillion operations per second (3 GHz), it’d take a PC about 4.7 billion years. Sure, you could just throw more powerful computers at it, but they’d just double the key size and suddenly, you’d need to do $10^{44}$ operations.

*It's a lot easier to write math than typeset it*

Well, that’s not entirely true. One of the breakthroughs in quantum computing was coming up with a fast algorithm for factoring. It turns out quantum order finding takes $O((\log N)^2 \log \log N \log \log \log N)$ steps, which, for a 1024-bit key, works out to somewhere in the tens of millions of operations. Doubling the key size to 2048 bits only increases that by a factor of about five. Unfortunately (or fortunately, because we’d be pretty screwed if someone could easily break RSA right now), we haven’t built any quantum computers that large yet, nor are we capable of doing so anytime soon.

tl;dr – Kenji is a quantum computer.