Isaac Beh

Odd & Distinct Partitions

Maths, Discrete

I was reviewing my Anki flashcards from my second year discrete maths course (MATH2302) and rediscovered this fact for myself: "For any integer, the number of odd partitions is the same as the number of distinct partitions." I found this strange; it was clear from the generating functions, but I didn't have any intuition on why this would be the case.

To give some background, a partition is a way to write an integer as a sum of positive integers where we ignore ordering (e.g. the partitions of 4 are 1+1+1+1 , 1+1+2 , 1+3 , 2+2 and 4 itself). We call the elements of the sum "parts". Odd partitions are partitions where all parts are odd, and distinct partitions have distinct parts (e.g. for 4 , the odd partitions are 1+1+1+1 and 1+3 ; the distinct partitions are 1+3 and 4 ). I thought about rearranging the Young diagram geometrically (which can often provide intuition, such as why distinct odd partitions are in bijection with self-conjugate partitions), but had no success. I let the problem lie at the back of my mind for a bit, and came up with a nice bijection unexpectedly one morning that clarified the connection.

The Bijection

I won't state the bijection formally, but I'll give you enough of a description that you could easily turn one partition into another.

Start with an odd partition. Whilst the partition still has two parts of equal magnitude, we repeatedly form a new partition by combining those two parts into a single part of twice the size. For example, if we have the partition 5+3+3+3+3+3+1+1 , we would undertake the process: \begin{align*}&5+3+3+3+3+3+1+1\\\Rightarrow\quad&5+3+3+3+3+3+2\\\Rightarrow\quad&6+5+3+3+3+2\\\Rightarrow\quad&6+6+5+3+2\\\Rightarrow\quad&12+5+3+2\end{align*}
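The merging procedure can be sketched in a few lines of Python (the function name odd_to_distinct is my own; the order in which equal pairs get merged turns out not to matter):

```python
from collections import Counter

def odd_to_distinct(parts):
    """Repeatedly merge two equal parts into one part of twice the
    size, until all parts are distinct."""
    parts = list(parts)
    while True:
        counts = Counter(parts)
        # find any part that occurs at least twice
        dup = next((p for p, c in counts.items() if c >= 2), None)
        if dup is None:
            return sorted(parts, reverse=True)
        parts.remove(dup)
        parts.remove(dup)
        parts.append(2 * dup)

print(odd_to_distinct([5, 3, 3, 3, 3, 3, 1, 1]))  # [12, 5, 3, 2]
```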

Obviously, this provides us with a distinct partition (we only terminate our procedure once we have a distinct partition). More surprisingly, it won't map any two different odd partitions to the same partition (i.e. this process is injective). We can see this by shifting our perspective on this procedure.

Consider a part i with multiplicity m(i) (i.e. the part i occurs m(i) times within our partition). Then take the binary representation of m(i) , and for each "1 " in this representation, include a part whose binary representation is that of i padded on the right with 0 's, where the number of 0 's equals the position of that 1 in the representation of m(i) (starting at 0, counting right-to-left).

That was a mouthful, so let's walk through this step by step. I'm going to use a subscript 2 to indicate a binary number, so 5=101_2 . Consider 3=11_2 from our previous example, which has multiplicity m(3)=5=101_2 . There are two 1 's in the binary representation of 5 ; let's first consider the left-most 1 . It occurs in position 2 , as we start counting right-to-left from 0, thus we pad 3=11_2 with two 0 's, getting 1100_2=12 . The second (and last) 1 occurs in the zeroth position, so we pad 3 with no zeros, leaving it as it is. This means that our transformation will have parts 12 and 3 , just as we saw earlier. In fact, these are the exact parts we got from combining 3 's together. If we apply the 0 -padding process to all the other parts in our odd partition, we will get the same distinct partition as before, so these are really equivalent processes.
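This binary perspective is even more direct to code up, since bit-shifting i left by k positions is exactly "padding with k zeros". A minimal sketch (again with a made-up function name):

```python
from collections import Counter

def odd_to_distinct_binary(parts):
    """Map each part i with multiplicity m(i) to the parts i * 2**k,
    one for every bit k that is set in the binary form of m(i)."""
    out = []
    for i, m in Counter(parts).items():
        k = 0
        while m:
            if m & 1:
                out.append(i << k)  # pad i with k zeros in binary
            m >>= 1
            k += 1
    return sorted(out, reverse=True)

print(odd_to_distinct_binary([5, 3, 3, 3, 3, 3, 1, 1]))  # [12, 5, 3, 2]
```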

Looking at this process from this new perspective, we see why odd partitions are key. In binary, odd parts will end with a 1 , so by padding with 0 's on the right, we can determine what is padding and what was the original part. Thus, we can also take any distinct partition and turn it back into an odd partition. For any part i in a distinct partition, let n be the number of 0 's that are at the right of the binary representation (we could have n=0 ). If we let j be the part i but with these zeros removed, then we include 2^n copies of j in our odd partition. For example, if we have i=12=1100_2 , then we have n=2 and so we include 2^2=4 copies of j=11_2=3 in our odd partition.
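The reverse direction is just as short to sketch (function name mine): strip the trailing zeros off each part, and emit the appropriate number of copies of what remains.

```python
def distinct_to_odd(parts):
    """For each part, strip its n trailing binary zeros to get an odd
    part j, then include 2**n copies of j."""
    out = []
    for i in parts:
        n = 0
        while i % 2 == 0:
            i //= 2
            n += 1
        out.extend([i] * (1 << n))  # 2**n copies of the odd part
    return sorted(out, reverse=True)

print(distinct_to_odd([12, 5, 3, 2]))  # [5, 3, 3, 3, 3, 3, 1, 1]
```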

This reverse function works for any distinct partition and provides the same odd partition that we started with. Thus, our original transformation is a bijection (i.e. for any odd partition we can find a respective distinct partition and vice-versa) and the total number of each kind of partition is equal for any integer. I find this is such a neat way to show this, especially how it highlights why we are interested in odd partitions, rather than even ones.
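If you want to convince yourself of the bijection without the formal argument, a brute-force check for small integers works: enumerate all partitions, apply the forward map to each odd partition, and confirm that the images are exactly the distinct partitions with no collisions. A self-contained sketch (the helper names are my own):

```python
from collections import Counter

def partitions(n, max_part=None):
    """Generate all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def forward(parts):
    """Odd -> distinct via the binary-of-multiplicity rule."""
    out = []
    for i, m in Counter(parts).items():
        k = 0
        while m:
            if m & 1:
                out.append(i << k)
            m >>= 1
            k += 1
    return tuple(sorted(out, reverse=True))

for n in range(1, 16):
    odd = [p for p in partitions(n) if all(x % 2 for x in p)]
    distinct = {p for p in partitions(n) if len(set(p)) == len(p)}
    images = {forward(p) for p in odd}
    # injective (no collisions) and surjective onto distinct partitions
    assert len(images) == len(odd) and images == distinct
print("bijection verified for n = 1..15")
```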

Generating Functions

While I think this is the most informative way of elucidating this fact, it would not be the first tool I reach for to prove it (or similar statements). Identifying these bijections is difficult, and proving that they are actually bijections is ugly. Instead, we tend to use generating functions, one of the most powerful tools in combinatorics.

A generating function is just a polynomial or formal power series, with each coefficient being the result of some counting problem. For example, we will let p_\text{odd}(x)=a_0+a_1x+a_2x^2+\cdots be the generating function for odd partitions, with a_n being the number of odd partitions of the integer n . We don't care about the convergence of such a series — we don't care about substituting in values for x — but we do care about when these series are equal. If we let p_\text{distinct}(x) be the generating function for distinct partitions (defined similarly to p_\text{odd} ), then to show that the number of odd partitions is the same as the number of distinct partitions (for every integer), we just need to show that p_\text{odd}=p_\text{distinct} . This gives us access to many tools in algebra that we can work with.

First, let's actually find representations of p_\text{odd} and p_\text{distinct} . Starting with p_\text{odd} , we'll find the coefficient of x^n . Notice that if we do have a partition n=i_1+\cdots+i_k , then we also have x^n=x^{i_1}\cdots x^{i_k} , and if part i has multiplicity m(i) , then: \overbrace{x^{i}\cdots x^{i}}^\text{\inlinemath{m(i)} times}=x^{m(i)\cdot i} So for this partition, we have x^n=x^{m(i_1)\cdot i_1}\cdots x^{m(i_k)\cdot i_k} where each i_j is unique. For any partition we just need to choose what m(1),m(2),\ldots are. It's even easier for odd partitions, as m(2)=m(4)=\cdots=0 . We can represent these choices as a product: p_\text{odd}(x)=\overbrace{\left(x^0+x^1+x^{2\cdot 1}+x^{3\cdot 1}+\cdots\right)}^{\text{choosing \inlinemath{m(1)}}}\overbrace{\left(x^0+x^3+x^{2\cdot 3}+x^{3\cdot 3}+\cdots\right)}^{\text{choosing \inlinemath{m(3)}}}\overbrace{\left(x^0+x^5+x^{2\cdot 5}+x^{3\cdot 5}+\cdots\right)}^{\text{choosing \inlinemath{m(5)}}}\cdots When we expand this product, each term will consist of some element from the first bracket times something from the second bracket and so on. This means that the terms are of the form: x^{m(1)\cdot 1}x^{m(3)\cdot 3}x^{m(5)\cdot 5}\cdots = x^{m(1)\cdot 1+m(3)\cdot 3 + m(5)\cdot 5+\cdots} Thus the coefficient of x^n will be the number of terms such that n=m(1)\cdot 1+m(3)\cdot 3 + m(5)\cdot 5+\cdots , which is the same thing as the number of odd partitions of n . Isn't that so cool!
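We can watch this expansion happen numerically by truncating everything at some degree (here N=10 , my own arbitrary choice) and multiplying coefficient lists:

```python
def mul_truncated(a, b, N):
    """Multiply two coefficient lists modulo x**(N+1)."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    out[i + j] += ai * bj
    return out

N = 10
# product over odd i of (x^0 + x^i + x^{2i} + ...), truncated at degree N
p_odd = [1] + [0] * N
for i in range(1, N + 1, 2):
    geometric = [1 if e % i == 0 else 0 for e in range(N + 1)]
    p_odd = mul_truncated(p_odd, geometric, N)

# coefficient of x^n = number of odd partitions of n (with 1 for n = 0)
print(p_odd)  # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
```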

We can do a similar thing for p_\text{distinct} . This time, each m(1),m(2),\ldots is either 0 or 1 , so: p_\text{distinct}(x)=\overbrace{\left(x^0+x^1\right)}^\text{choosing \inlinemath{m(1)}}\overbrace{\left(x^0+x^2\right)}^\text{choosing \inlinemath{m(2)}}\overbrace{\left(x^0+x^3\right)}^\text{choosing \inlinemath{m(3)}}\cdots

Given that these are just power series, we can use typical algebra to simplify them. Thus, we get:\begin{align*}p_\text{odd}(x) &= \prod_{j=1}^\infty\sum_{m=0}^\infty x^{m\cdot (2j-1)}\\ &= \prod_{j=1}^\infty\sum_{m=0}^\infty \left(x^{2j-1}\right)^m\\ &= \prod_{j=1}^\infty \frac{1}{1-x^{2j-1}}\end{align*} We can take this further by cleverly multiplying the numerator and denominator by 1-x^{2j} . Note that we are being a bit playful with how we rearrange and reindex these products, but just repeat the mantra "these are formal series; convergence is meaningless" and all will be alright: \begin{align*}p_\text{odd}(x) &= \prod_{j=1}^\infty \frac{1}{1-x^{2j-1}}\\ &= \prod_{j=1}^\infty \frac{1-x^{2j}}{\left(1-x^{2j-1}\right)\left(1-x^{2j}\right)}\\ &= \frac{\prod_{j=1}^\infty\left(1-x^{2j}\right)}{\prod_{j=1}^\infty\left(1-x^{2j-1}\right)\left(1-x^{2j}\right)}\\ &= \frac{\prod_{j=1}^\infty\left(1-x^{2j}\right)}{\prod_{j=1}^\infty\left(1-x^j\right)}\\ &= \frac{\prod_{j=1}^\infty\left(1-x^j\right)\left(1+x^j\right)}{\prod_{j=1}^\infty\left(1-x^j\right)}\\ &= \prod_{j=1}^\infty\frac{\left(1-x^j\right)\left(1+x^j\right)}{\left(1-x^j\right)}\\ &= \prod_{j=1}^\infty\left(1+x^j\right)\\ &= p_\text{distinct}(x)\end{align*} And we're done!
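The identity can also be sanity-checked numerically: build both products truncated at the same degree (my choice of N=30 is arbitrary) and compare coefficient lists term by term.

```python
def mul_truncated(a, b, N):
    """Multiply two coefficient lists modulo x**(N+1)."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    out[i + j] += ai * bj
    return out

N = 30

# prod over odd i of 1/(1 - x^i), i.e. (x^0 + x^i + x^{2i} + ...)
p_odd = [1] + [0] * N
for i in range(1, N + 1, 2):
    geometric = [1 if e % i == 0 else 0 for e in range(N + 1)]
    p_odd = mul_truncated(p_odd, geometric, N)

# prod over all j of (1 + x^j)
p_distinct = [1] + [0] * N
for j in range(1, N + 1):
    binomial = [1 if e in (0, j) else 0 for e in range(N + 1)]
    p_distinct = mul_truncated(p_distinct, binomial, N)

assert p_odd == p_distinct
print("p_odd and p_distinct agree up to degree", N)
```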

Using generating functions like this almost feels a bit magical. This definitely doesn't give you the same intuition, but just look at that power! The proof has no edge cases and no little details to check (say, that our process is well defined); it's just straightforward computation.

This is the start of a common trend within the mathematics I have learnt: you start with some more tedious technique (in this case creating bijections; but also consider the \varepsilon -\delta definition of the limit), and then you learn about some stronger tools (e.g. generating functions; L'Hôpital's rule). Sometimes these provide intuition, but sometimes you need to remove one layer of abstraction to gain it (e.g. the bijection discussed here; going back to the \varepsilon -\delta definition to argue about absolute continuity). I'm hoping that as I continue to improve my skills, I'll gain more intuition about the tools themselves, so that I can think in terms of them instead of needing to peel away layers of abstraction (in the same way that I don't think of multiplication as repeated addition anymore). However, I may be overly optimistic; at least currently, generating functions are as clear as a brick wall.