Today, to divert myself, I tried to find a new proof of the Basel identity $\boxed{\sum_{j=1}^\infty\frac{1}{j^2}=\frac{\pi^2}{6}}$. I came up with the following, which essentially interprets the identity as the *invariance of the trace* under a change of basis. The final part is a bit dirty, but maybe someone can spot a way to simplify it.

**Q1.** Is it already known? If not, this post is just meant to share it :-)

**Q2.** Do you know proofs of other identities based on a similar idea, namely viewing both sides as the trace of something?

**Proof.** Let $X:=\{f\in L^2([-1,1])\mid f(-x)=-f(x)\}$ be the space of odd functions, which has the Hilbert basis $\{e_j(x):=\sin(j\pi x)\}_{j=1}^\infty$.
Call $S:X\to X$ the operator given by $S(f):=-\iint f$, where $\int$ denotes the mean-zero primitive. We have $S(e_j)=\frac{e_j}{\pi^2 j^2}$, so $S$ is positive and symmetric with square root $T(e_j):=\frac{e_j}{\pi j}$.

Let $X_n$ be the linear span of $\{x,x^3,\dots,x^{2n-1}\}$. Since $\{x,x^3,\dots\}$ spans a dense subset of $X$ (proof: approximate any $f\in X$ with a polynomial $p$, then take $\frac{p(x)-p(-x)}{2}$), the Gram-Schmidt algorithm gives a Hilbert basis of polynomials $\{p_j\}$ (which happen to be the odd Legendre polynomials). Now

$$\begin{aligned} \lambda&:=\sum_{j=1}^\infty\frac{1}{\pi^2 j^2} =\sum_{j=1}^\infty\langle Se_j,e_j\rangle =\sum_{j=1}^\infty\|Te_j\|^2 =\sum_{j=1}^\infty\sum_{k=1}^\infty|\langle Te_j,p_k\rangle|^2 \\ &=\sum_{k=1}^\infty\sum_{j=1}^\infty|\langle e_j,Tp_k\rangle|^2 =\sum_{k=1}^\infty\|Tp_k\|^2 =\lim_{n\to\infty}\sum_{k=1}^n\langle Sp_k,p_k\rangle. \end{aligned} $$

Since $\{p_1,\dots,p_n\}$ is an orthonormal basis of $X_n$, this expresses the fact that $\lambda$ is the limit of the trace $\lambda_n$ of $S_n:=\Pi_nS:X_n\to X_n$, where $\Pi_n:X\to X_n$ is the orthogonal projection.

Using the basis $x,x^3,\dots,x^{2n-1}$, we compute $$ S_n(x^{2j-1})=-\frac{x^{2j+1}}{2j(2j+1)}+c_jx\text{ for }1\le j<n,\quad S_n(x^{2n-1})=-\frac{\Pi_n(x^{2n+1})}{2n(2n+1)}+c_nx. $$ Finally, it is easy to check that $$\Pi_n(x^{2n+1})=x^{2n+1}-\frac{(2n+1)!}{(4n+2)!}\frac{d^{2n+1}}{dx^{2n+1}}(x^2-1)^{2n+1}$$ and thus the coefficient of $x^{2n-1}$ in $\Pi_n(x^{2n+1})$ is $(2n+1)\frac{(2n+1)!}{(4n+2)!}\frac{(4n)!}{(2n-1)!}\sim\frac{n}{2}$. Thus, by the definition of the trace, $\lambda_n=c_1+O(\frac{1}{n})$ and we get $\lambda=c_1=\frac{1}{6}$.
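As a sanity check, both $c_1=\frac16$ and the convergence of the partial sums can be verified by machine; here is a small sketch assuming sympy is available (the helper `P` is my name for the mean-zero primitive):

```python
import math
import sympy as sp

x = sp.symbols('x')

def P(f):
    """Mean-zero primitive on [-1, 1]."""
    F = sp.integrate(f, x)
    return F - sp.Rational(1, 2) * sp.integrate(F, (x, -1, 1))

# S(f) = -iint f with mean-zero primitives; the x-coefficient of S(x) is c_1.
Sx = sp.expand(-P(P(x)))
print(Sx)  # -x**3/6 + x/6, so c_1 = 1/6

# S(e_1) = e_1 / pi^2 for e_1 = sin(pi x):
e1 = sp.sin(sp.pi * x)
print(sp.simplify(-P(P(e1)) - e1 / sp.pi**2))  # 0

# Partial sums of lambda = sum 1/(pi^2 j^2) approach 1/6:
lam = sum(1 / (math.pi**2 * j**2) for j in range(1, 5001))
print(lam)  # ~ 0.16665
```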

During a process, I generate a number of bits: zeros and ones. I then compute a z-score based on how many more or fewer ones than zeros I get.

Here are my notes on how I compute the z-score:

    z = ((K - np) ± 0.5) / √(npq)
    z = ((510 - 500) - 0.5) / √250
    z = 0.600833

Here's a list of z-scores I get:

[1.0697392397288346, -1.6923126540744842, -1.9887761847152696, -1.0005644159126512, 0.20011288318253023, 0.4669300607592372, -1.3909080645896854, 0.6102207672356169, 0.669513473363774, 0.9659770040045595]

Assuming that I create the same number of bits every time, is there a way I can combine these z-scores to create an overall z-score? Would summing them be sufficient? It doesn't seem to be that simple; I'm not even sure it can be done.
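For what it's worth, the standard tool here is Stouffer's method: sum the $k$ scores and divide by $\sqrt{k}$, since a sum of $k$ independent standard normal scores has variance $k$. The mismatch in the combined-bits example in this post comes entirely from the $\pm 0.5$ continuity correction; without it, Stouffer's combination of equal-sized samples matches the z-score of the concatenated bits exactly. A sketch (the two 8-bit strings are the ones from the example further down):

```python
import math

def z_score(bits, correction=0.0):
    # z = ((K - n*p) -/+ correction) / sqrt(n*p*q) with p = q = 1/2
    n, k = len(bits), sum(bits)
    return (k - n / 2 - math.copysign(correction, k - n / 2)) / math.sqrt(n / 4)

def stouffer(zs):
    # Stouffer's method: a sum of k independent N(0,1) scores has variance k,
    # so dividing by sqrt(k) restores a standard normal.
    return sum(zs) / math.sqrt(len(zs))

b1 = [0, 0, 1, 0, 1, 0, 0, 0]
b2 = [0, 1, 1, 0, 1, 1, 1, 0]
print(z_score(b1, correction=0.5))           # -1.0606..., the corrected score
# Without the continuity correction, Stouffer's combination reproduces
# the z-score of the concatenated bits exactly (equal sample sizes):
print(stouffer([z_score(b1), z_score(b2)]))  # -0.5
print(z_score(b1 + b2))                      # -0.5
```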

I can see it isn't as easy as summing the scores, because I computed the z-score of the combined bit list:

    bits 00101000                       score -1.0606601717798212
    bits 01101110                       score  0.35355339059327373
    bits (combined) 0010100001101110    score -0.25

Denote an integer partition of $n$ by $\lambda=(\lambda_1\geq\lambda_2\geq\dots\geq\lambda_k)$ where $\lambda_k>0$. Also recall the $q$-analogue of an integer $n$ given by $[n]_q=\frac{1-q^n}{1-q}$. Further, let $$[n]_q!=[n]_q[n-1]_q\cdots[2]_q[1]_q \qquad \text{and} \qquad [0]_q!=1.$$ If $\lambda=(\lambda_1\geq\lambda_2\geq\dots\geq\lambda_k)\vdash n$, define $$a(\lambda):=[\lambda_k]_q!\prod_{j=1}^{k-1}\,\,[\lambda_j-\lambda_{j+1}]_q! \qquad \text{and} \qquad b(\lambda)=\prod_{j=1}^k[\lambda_j]_q.$$

**Question.** The following appears to be true. Is it?
$$\prod_{\lambda\vdash n}a(\lambda)=\prod_{\lambda\vdash n}b(\lambda).$$
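A quick symbolic check for small $n$ (this only tests the identity on small cases, it does not prove it); a sketch assuming sympy:

```python
from sympy import symbols, expand
from sympy.utilities.iterables import partitions

q = symbols('q')

def qint(n):   # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**i for i in range(n))

def qfact(n):  # [n]_q!, with [0]_q! = 1
    r = 1
    for i in range(1, n + 1):
        r *= qint(i)
    return r

def a(lam):    # [lam_k]_q! * prod of [lam_j - lam_{j+1}]_q!
    r = qfact(lam[-1])
    for j in range(len(lam) - 1):
        r *= qfact(lam[j] - lam[j + 1])
    return r

def b(lam):    # prod of [lam_j]_q
    r = 1
    for part in lam:
        r *= qint(part)
    return r

for n in range(1, 7):
    A = B = 1
    for d in partitions(n):  # dicts {part: multiplicity}, consumed immediately
        lam = sorted((k for k, m in d.items() for _ in range(m)), reverse=True)
        A *= a(lam)
        B *= b(lam)
    print(n, expand(A - B) == 0)  # True for each n checked
```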

Is the sieve of Eratosthenes actually used anywhere? I like the idea of this sieve, but it seems too inefficient to use in a computer program.
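For context, the sieve is in fact very efficient in practice, running in $O(n \log\log n)$ time, and it is the standard way to enumerate all primes up to a bound; a minimal sketch:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            # multiples below p*p were already crossed off by smaller primes
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```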

Let us define three functions $f(z)$, $g(z)$, and $g_x(z)$ only in the strip $0<a<1$, where $z=a+it$. They are analytic and continuous in this region.

If we take $z_0$ to be one of the zeros of $g(z)$, we know:

$g(z_0)=0$ and $g(1-z_0)=0$.

On the other hand, we also know:

1) $\lim_{x \rightarrow \infty}g_x(z)=g(z)$

2) $g_x(z)=h_x(z)+x^{1-z}-(x-1)^{1-z}$ and $g_x(1-z)=h_x(1-z)+x^z-(x-1)^z$

3) $\lim_{z \rightarrow z_0}\big(\lim_{x \rightarrow \infty}h_x(z)\big)=0$, or $\lim_{(z,x) \rightarrow (z_0,\infty)}h_x(z)=0$

4) $\lim_{z \rightarrow z_0}\big(\lim_{x \rightarrow \infty}h_x(1-z)\big)=0$, or $\lim_{(z,x) \rightarrow (z_0,\infty)}h_x(1-z)=0$

5) $\lim_{x \rightarrow \infty}\frac {d} {dx}h_x(z)=0$ for $0<a<1$

6) $\lim_{x \rightarrow \infty}\frac {d} {dx}h_x(1-z)=0$ for $0<a<1$

7) $|f(z_0)|$ is finite and non-zero.

Can we then go a little further with the following limit? Can we even find its value?

$$ {|f(z_0) |} ={|\lim_{z \rightarrow z_0}\frac {g(z)} {g(1-z)}|} =? $$

This is a restated version of my original very broad question.

Let $P$ be a probability measure on an interval $[a,b]$ ($-\infty<a<b<\infty$) that is dominated by Lebesgue measure. Let $\langle f, g \rangle_P=\int_{[a,b]} f \cdot g \, dP$ be the inner product for integrable functions associated with $P$, and let $\| \cdot \|_{L_2(P)}$ denote the induced $L_2$-norm. Let $f: [a,b] \rightarrow \mathbb{R}$ be integrable with $f \in C^r[a,b]$ for some $r \leq \infty$. Consider approximating $f$ by a trigonometric polynomial $a_0+\sum_{k=1}^K [a_k \cos(kx) + b_k \sin(kx)]$. Let $\pi_{K,P}(f)$ denote the projection of $f$ onto the space of $K$-th order trigonometric polynomials with respect to $\langle \cdot, \cdot \rangle_P$, and consider the approximation error $\| f-\pi_{K,P}(f) \|_{L_2(P)}$. Assume $f$ is not a trigonometric polynomial itself.

I've seen an upper bound of this error that looks like $C K^{-r}$ when $r < \infty$ and is derived from the $L_\infty$ error (https://www.springer.com/us/book/9783540506270). Is there a lower bound for the $L_2(P)$ error, ideally with the same structure as the upper bound (maybe under mild conditions)? Also, what would be the upper bound and lower bound if $r=\infty$? I'm particularly interested in the case where $f(x)=x$.

You may take $[a,b]$ to be any interval you want for convenience.

I would be very happy if you know of any results of this kind, perhaps under similar settings (e.g. a different notion of smoothness for $f$ in terms of Sobolev spaces).

**Original question:**

Let $P$ be a probability measure on an interval $[a,b]$. Let $\langle f, g \rangle_P=\int_{[a,b]} f \cdot g \, dP$ be the inner product for integrable functions associated with $P$, and let $\| \cdot \|_{L_2(P)}$ denote the induced $L_2$-norm. Let $f: [a,b] \rightarrow \mathbb{R}$ be integrable. Let $\phi_1,\phi_2,\ldots: [a,b] \rightarrow \mathbb{R}$ be a basis; I want to use $\sum_{k=1}^K \beta_k \phi_k$ to approximate $f$ for some $\beta_k \in \mathbb{R}$. In particular, let $\pi_{K,P}(f)$ denote the projection of $f$ onto $\mathrm{Span}\{\phi_1,\ldots,\phi_K\}$ with respect to $\langle \cdot, \cdot \rangle_P$. Consider the approximation error $\| f-\pi_{K,P}(f) \|_{L_2(P)}$.

I know there are results on the upper bound of this error. But is there a more accurate estimate (not just an upper bound)? Or is there a lower bound? If so, can you provide a reference?

Here I always assume $f$ is not a linear combination of $\phi_1,\phi_2,\ldots$, i.e. $\pi_{K,P}(f) \neq f$ for any $K<\infty$ and any $P$, so there might be a nontrivial lower bound.

You may consider simplified/restricted versions of this problem. For example, you may take $\phi_k=x^{k-1}$ or $\phi_{2k-1}=\cos((k-1)x), \phi_{2k}(x)=\sin(kx)$ (trigonometric polynomial); you may take $[a,b]=[0,1]$ or $[a,b]=[0,2\pi]$.

You may add other assumptions on $f$ or $P$ as long as they are not too restrictive. For example, assume $f$ is continuously differentiable up to some order or infinitely differentiable; or, assume $P$ is dominated by Lebesgue measure.

Edit 1: Let me try to be more specific. Is there a lower bound on the approximation error when $f(x)=x$ and I use a trigonometric polynomial (with both $\sin$ and $\cos$ series) to approximate? You may assume $[a,b]=[0,2\pi],[0,\pi],[-\pi,\pi]$ or whatever finite interval you want. The probability measure $P$ has no special property. If necessary, you may assume $P$ is dominated by Lebesgue measure.

Edit 2: Just want to emphasize that I only care about the behavior on a finite interval $[a,b]$, not the whole real line, so if I use a trigonometric polynomial for approximation, I feel $f(x)=x$ not being periodic should not make the approximation behave super badly.
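For the special case $f(x)=x$ with $P$ the uniform measure on $[-\pi,\pi]$, everything is explicit: by oddness only the sine terms survive, the Fourier coefficients are $b_k=2(-1)^{k+1}/k$, and the squared $L_2(P)$ error is $\frac12\sum_{k>K}4/k^2\approx 2/K$. So the error decays only like $K^{-1/2}$ even though $f$ is smooth, precisely because $f$ is not periodic on the interval. A numerical sketch (the uniform measure and the grid quadrature are my assumptions):

```python
import numpy as np

# Project f(x) = x onto trigonometric polynomials of order K under the
# uniform probability measure on [-pi, pi]; by oddness only sine terms survive.
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
f = x
for K in [10, 40, 160]:
    proj = np.zeros_like(x)
    for k in range(1, K + 1):
        b_k = (f * np.sin(k * x)).sum() * dx / np.pi  # Fourier sine coefficient
        proj += b_k * np.sin(k * x)
    err = np.sqrt(((f - proj) ** 2).sum() * dx / (2 * np.pi))  # L2(P) error
    print(K, err, err * np.sqrt(K))  # last column tends to sqrt(2) ~ 1.414
```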

Consider the geodesic flow on $X = \Gamma \backslash \text{PSL}(2,\mathbf{R})$, the unit tangent bundle of a hyperbolic surface, where $\Gamma$ is a lattice.

I have heard that, for any real number $\alpha \in [1,3]$, there exists an orbit of the geodesic flow whose closure in $X$ has Hausdorff dimension $\alpha$. Where can I find a proof of this?

A hypergraph is a pair $H=(V,E)$ where $V\neq \emptyset$ is a set and $E\subseteq{\cal P}(V)$ is a collection of subsets of $V$. We say two hypergraphs $H_i=(V_i, E_i)$ for $i=1,2$ are *isomorphic* if there is a bijection $f:V_1\to V_2$ such that $f(e_1) \in E_2$ for all $e_1\in E_1$, and $f^{-1}(e_2) \in E_1$ for all $e_2\in E_2$.

If $G$ is a group, denote by $\text{Sub}(G)$ the collection of the subgroups of $G$.

Are there non-isomorphic groups $G,H$ such that the hypergraphs $(G, \text{Sub}(G))$ and $(H, \text{Sub}(H))$ are isomorphic?

Under what conditions on $a(x)$ and the domain $D$ can the spectral gap of the elliptic operator $\nabla \cdot(a(x)\nabla)$ on $D$ be controlled?

The boundary condition is Dirichlet: the solution vanishes on the boundary. Assume that $D$ is the unit ball in $\mathbb{R}^{d}$. Since the eigenvalues of this operator are countable and nonnegative, the spectral gap is the difference between its smallest nonzero eigenvalue and zero.
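One can at least experiment numerically in the one-dimensional analogue; here is a finite-difference sketch for $-\frac{d}{dx}\big(a(x)\frac{d}{dx}\big)$ on $(0,1)$ with Dirichlet conditions (the coefficient $a(x)=1+\frac12\sin 2\pi x$ is just a hypothetical test case; for $a\equiv 1$ the gap is the first Dirichlet eigenvalue $\pi^2$):

```python
import numpy as np

def smallest_eigenvalue(a_func, n=400):
    """Smallest Dirichlet eigenvalue of -(a(x) u')' on (0, 1), conservative FD scheme."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    a_mid = a_func(0.5 * (x[:-1] + x[1:]))  # coefficient at cell midpoints
    main = (a_mid[:-1] + a_mid[1:]) / h**2  # diagonal, interior nodes 1..n-1
    off = -a_mid[1:-1] / h**2               # coupling between neighbouring nodes
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[0]         # eigvalsh returns ascending eigenvalues

# For a = 1 this should be close to pi^2 ~ 9.8696:
print(smallest_eigenvalue(lambda x: np.ones_like(x)))
# A hypothetical variable coefficient:
print(smallest_eigenvalue(lambda x: 1 + 0.5 * np.sin(2 * np.pi * x)))
```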

Let's first define what we mean by *depth of a subgroup*.

Let $G$ be a finite group and $H$ a subgroup. Let $(V_i)_{i \in I}$ and $(W_j)_{j \in J}$ be the irreducible complex representations of $G$ and $H$ (up to isomorphism). Consider the bipartite graph $\mathcal{G}$ whose vertices are these representations, with $d_{ij} := \langle V_i\vert_H,W_j \rangle$ edges between $V_i$ and $W_j$. Let $\mathcal{G}_0$ be the connected component of $\mathcal{G}$ containing the trivial representation $V_0$ of $G$. Note that $\mathcal{G}_0$ can be called the principal block of the decomposition matrix, or the principal graph. Note that $\Vert \mathcal{G}_0 \Vert^2 = |G:H|$.

**Definition**: The *depth* of $H \subset G$ is the distance between $V_0$ and a farthest vertex in $\mathcal{G}_0$.

*Alternative definition* (after Noah): *depth* is the maximum number of applications of induction $\mathrm{Ind}_H^G$ or restriction $\mathrm{Res}_H$ from $V_0$ that generate a new irreducible component (by Frobenius reciprocity).

Note that the depth of $H \subset G$ is $2$ if and only if $H$ is a normal subgroup.

The principal graph for $\{e\} \subset S_3$, where the starry vertex is $V_0$:

The principal graph of $\langle (1,2)(3,4) \rangle \subset A_4$ (depth $3$):

The principal graph for $A_4 \subset A_5$ (depth $5$):
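The depth can be read off as the eccentricity of $V_0$ in the bipartite graph, so these examples are easy to recompute by machine. A sketch that recovers the $A_4 \subset A_5$ example (the restriction-multiplicity matrix below encodes what I believe are the standard branching rules for $A_4\subset A_5$ and should be double-checked):

```python
from collections import deque

def depth(d, v0=0):
    """Eccentricity of the G-irrep v0 in the bipartite graph with biadjacency matrix d."""
    nG, nH = len(d), len(d[0])
    dist = {('G', v0): 0}
    queue = deque([('G', v0)])
    while queue:
        side, i = queue.popleft()
        if side == 'G':
            nbrs = [('H', j) for j in range(nH) if d[i][j] > 0]
        else:
            nbrs = [('G', k) for k in range(nG) if d[k][i] > 0]
        for v in nbrs:
            if v not in dist:
                dist[v] = dist[(side, i)] + 1
                queue.append(v)
    return max(dist.values())

# Assumed multiplicities <Res_H V_i, W_j> for A4 < A5;
# rows: A5-irreps of dims 1, 3, 3', 4, 5; columns: A4-irreps of dims 1, 1', 1'', 3.
d_A5_A4 = [
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
]
print(depth(d_A5_A4))  # 5, matching the depth stated above
```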

If $H \subset G$ is a maximal subgroup of depth $2$, then it is easy to see that $|G:H|$ is a prime number.

Let $I_n$ be the set of indices of *maximal* subgroups of depth $n$ in finite groups. Then $I_2 = \mathbb{P}$.

In order to see what $I_n$ looks like for $n>2$, we computed the beginning of these sets, more precisely, we computed the subsets $E_n \subset I_n$ restricted to $|G:H| \le 100$, $|G| < 10^7$ and $n \le 7$. The results are the following (see full computation and code below):

- $E_2=\{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, \dots \}$,
- $E_3=\emptyset$,
- $E_4=\{3, 4, 5, 7, 8, 9, 10, 11, 13, 15, 16, 17, 19, 23, 25, 27, 28, 29, 31, 32, 36, 37, \dots \}$,
- $E_5=\{ 5, 6, 8, 9, 10, 11, 12, 14, 15, 17, 18, 20, 21, 24, 26, 28, 30, 32, 33, 35, 36, \dots \}$,
- $E_6=\{ 4, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 21, 24, 25, 26, 27, 28, 30, 32, 35, 36, 38, \dots\}$,
- $E_7=\{11, 13, 25, 31, 36, 40, 45, 49, 57, 64, 81, 100\}$.

*Surprisingly* $E_3=\emptyset$, which leads one to wonder whether $I_3 = \emptyset$ also, in other words:

**Question**: Is there a maximal subgroup of depth $3$?

For people interested in subfactor (planar algebra) theory, the question extends as follows:

*Bonus question*: Is there an irreducible maximal subfactor of depth $3$ and integral index?

**Computation**

**Code** (the first function is due to Jack Schmidt, see here)

I have a question regarding the Goormaghtigh conjecture on the Diophantine equation $$\frac{x^m-1}{x-1}=\frac{y^n-1}{y-1}.$$

Suppose that a positive integer $N$ is given. How many integer solutions are there to the equation $$\frac{x^m-1}{x-1}=N=\frac{y^n-1}{y-1},$$ with $x$ and $y$ prime powers?

Observe that I am not asking for a solution of the Goormaghtigh conjecture in the case that $x$ and $y$ are prime powers; I am asking whether one can bound the number of solutions by a very slowly growing function of $N$ when $x$ and $y$ are prime powers. [Not sure what I mean by "slowly growing"; just interested to know what is known in this case.]
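A brute-force experiment is at least easy: enumerating all values $\frac{x^m-1}{x-1}$ with $x\ge 2$, $m\ge 3$ up to $10^7$ and looking for collisions turns up only the two classical solutions $N=31$ and $N=8191$ (this is a hedged numerical sketch, not a statement about what is proved):

```python
from collections import defaultdict

BOUND = 10**7
bases = defaultdict(set)  # N -> set of bases x with (x^m - 1)/(x - 1) = N, m >= 3
for x in range(2, int(BOUND**0.5) + 2):  # m = 3 already forces x^2 < BOUND
    v = 1 + x + x * x  # the m = 3 value
    while v <= BOUND:
        bases[v].add(x)
        v = v * x + 1  # extend the base-x repunit by one digit (increments m)

collisions = {N: xs for N, xs in bases.items() if len(xs) >= 2}
print(sorted(collisions.items()))  # the only collisions found: N = 31 and N = 8191
```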

It is now well known that Yitang Zhang's work on the Jacobian conjecture collapsed because his advisor's earlier work contains unjustified claims. I am wondering what specifically is unclear about his paper. From fellow researchers I have heard that his paper is unreadable, and that the last claim in his paper on the Jacobian conjecture may even be wrong. What is the consensus?

Consider irrational numbers $a_1,\dots,a_n$ that are linearly independent over $\mathbb{Q}$. Now construct the set $$ S=\{m_1a_1+\cdots+m_na_n:m_1,\dots,m_n\in\mathbb{Z}\}. $$ By an argument similar to Dirichlet's approximation theorem, one can prove that for every $\epsilon>0$ there exists an $n$-tuple $(m_1,\dots,m_n)$ with $\left|\sum_{k=1}^n m_ka_k\right|<\epsilon$ (therefore, any metric ball centered at $0$ with arbitrarily small radius intersects $S$).

Can we say something about the density of this set in some interval, say $[0,\delta]$? I'm sorry if this is something simple; I could not see a way to tackle it.
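As a numerical illustration (with the hypothetical choice $a_1=\sqrt2$, $a_2=\sqrt3$), the smallest positive element of $S$ with coefficients bounded by $M$ keeps shrinking as $M$ grows; this is consistent with density, since $S$ is a subgroup of $(\mathbb R,+)$ and a subgroup of $\mathbb R$ is either cyclic or dense:

```python
import math

def smallest_positive(M):
    """Smallest positive |m1*sqrt(2) + m2*sqrt(3)| with |m1|, |m2| <= M."""
    s2, s3 = math.sqrt(2), math.sqrt(3)
    best = float('inf')
    for m1 in range(-M, M + 1):
        for m2 in range(-M, M + 1):
            v = abs(m1 * s2 + m2 * s3)
            if 0 < v < best:  # the value is exactly 0 only at (m1, m2) = (0, 0)
                best = v
    return best

for M in [10, 50, 200]:
    print(M, smallest_positive(M))  # shrinks as M grows
```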

Given a vector $(v_1,\dots,v_n)\in\mathbb Z^n$, let $D$ be the discrepancy of the fractional parts $\big(\{\frac{mv_1}p\},\{\frac{mv_2}p\},\dots,\{\frac{mv_n}p\}\big)$, where $p$ is a prime and $m\in\{1,\dots,p\}$. Then we know that we can find an $m$ such that $\big(\{\frac{mv_1}p\},\{\frac{mv_2}p\},\dots,\{\frac{mv_n}p\}\big)\in\mathcal I_1\times\dots\times\mathcal I_n$, where the intervals $\mathcal I_i\subseteq(0,1)$ satisfy $\prod_{i=1}^n|\mathcal I_i|\geq D$; in particular $$|\mathcal I_1|=\dots=|\mathcal I_n|=D^{1/n}+\epsilon$$ is possible for any $\epsilon>0$.

If $(v_1,\dots,v_n)=(a_1,b_1)\otimes(a_2,b_2)\otimes\dots\otimes(a_t,b_t)$ (note $n=2^t$), where each pair $a_i,b_j$ is coprime, each pair $a_i,a_j$ is coprime, and each pair $b_i,b_j$ is coprime, with $$p^{1/n}+1<a_i,b_j<2p^{1/n},$$ then is there an $m\in\mathbb Z$ such that $$|\mathcal I_1|=\dots=|\mathcal I_n|=D^{1/2t}+\epsilon$$ is possible for any $\epsilon>0$ (even though $n=2^t$, we only have $2t$ degrees of freedom for the tensor product sequence)?

Note that the discrepancy of the tensor product sequence is at most $p^{-2t/n}$.

If $|\mathcal I_1|=\dots=|\mathcal I_n|=p^{-1/n}+\epsilon$ is possible, then we can meet the Dirichlet pigeonhole bound in *Difference between Dirichlet Pigeonhole and Exponential sums bound in particular situation?*.

Are there any good survey articles in symplectic and contact geometry which focus on the "big picture", i.e. how this discipline fits into the mathematical world?

In the symplectic case: I am looking for surveys which explain the differences between the group of symplectic diffeomorphisms (and its Hamiltonian subgroup) and the group of volume-preserving diffeomorphisms, through rigidity considerations. In addition, I would like to understand the motivation behind the use of generating functions to treat fixed-point problems.

In the contact case: I would like to read about the link between contact geometry and simple Lie algebras, the physical motivation behind the Reeb flow, and the motivation behind the concept of prequantization spaces. In addition, I would like to understand the motivation behind the study of fixed points up to Reeb flow: studying fixed points in odd-dimensional spaces is apparently not interesting (why?).

I understand that there are sets of 7 points on a circle that can be fully shattered using triangles. But it is not clear to me why triangles cannot shatter 8 points.

Is there an intuitive way of arriving at the conclusion that they can't shatter 8 points? Is there a simple explanation that doesn't use advanced geometric tools?

The state space of a Boolean algebra is a Choquet simplex, but not all Choquet simplices can be viewed as state spaces of Boolean algebras. Is it known precisely which Choquet simplices arise as state spaces of Boolean algebras?

I am looking for a reference for Eisenstein series for discrete subgroups of $SL(2,\mathbb C)$, in particular, finite index subgroups of $SL(2,\mathcal O_K)$ where $K$ is an imaginary quadratic field.

Much work has been done on discrete subgroups of $SL(2,\mathbb R)$, and similarly on Eisenstein series for $SL(2,\mathcal O_K)$ itself, but I have not been able to locate this particular case.

Thinking of $\mathbb {CP^1}$ as the sphere $S^2\subset\mathbb R^3$, we can define a circle on it to be a subset obtained as a hyperplane section of $S^2$ inside $\mathbb R^3$. This notion is known to be invariant under the complex automorphism group $PSL_2(\mathbb C)$ of $\mathbb {CP^1}$.

Suppose $n\ge 3$ is an integer. Then, it is known that the complement of $n$ distinct points $\{z_1,\ldots,z_n\}$ on $\mathbb {CP^1}$ carries a unique complete hyperbolic conformal metric of finite area, call it $g$. It is also known that if $\gamma$ is a simple closed curve on $X = \mathbb {CP^1}-\{z_1,\ldots,z_n\}$ which is homotopically non-trivial, then it is homotopic to a unique simple closed geodesic for $g$. Let us continue to denote this by $\gamma$. **Is it then true that $\gamma\subset \mathbb {CP^1}$ is a circle in the sense of the previous paragraph?**

Consider two morphisms $T\to Z$ and $Y\to Z$ of varieties over an algebraically closed field $k$, where $Z$ is an affine space. If $Y\to Z$ is flat, is it always true that the fiber product $T\times_Z Y$ is a complete intersection in $T\times_k Y$?

The motivation comes from an argument of Knop in his paper On the Set of Orbits for a Borel Subgroup. Let $G$ be a reductive group with Lie algebra $\mathfrak{g}$, and $X$ be a spherical variety on which $G$ acts. In the proof of Lemma 6.5, where $Y=\mathfrak{t}$ is a Cartan subalgebra of $\mathfrak{g}$, $Z=\mathfrak{t}/W$ is the quotient by the Weyl group, and $T=T^\ast X$ is the cotangent bundle over $X$, the above statement is claimed for $$T^\ast X\times_{\mathfrak{t}/W}\mathfrak{t}\subset T^\ast X\times_k \mathfrak{t}.$$ EDIT: I have edited to include a flatness assumption to avoid simple counterexamples, as pointed out by @Alexander Braverman.

I would like to understand this point a bit better. Is this a standard argument, and if so is there a good reference?