Math Overflow Recent Questions

most recent 30 from mathoverflow.net 2018-04-18T18:40:18Z

Expected value of determinant of simple infinite random matrix

Fri, 04/06/2018 - 04:13

Suppose we have a matrix $A \in \mathbb{R}^{n\times n}$ where

$$A_{ij} = \begin{cases} 1 & \text{with probability} \quad p\\ 0 &\text{with probability} \quad1-p\end{cases}$$

I would like to know the following expected value

$$\lim_{n \rightarrow \infty} \mathbb E(| \det (A) |)$$

i.e., the asymptotic behavior as $n$ becomes large.

What I tried so far

It feels like this has been done already, but after searching for quite a while without success I performed simulations, and it looks like

$$\mathbb E(|\det(A)|) \propto e^{n f(p) }$$

where $f$ is some function of the probability $p$.

I would be very happy if someone knows the result or a good reference where I could look it up.
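A minimal Monte Carlo sketch in Python (my own parameter choices, not the original simulations) for estimating $\mathbb E(|\det(A)|)$ and the ratio $\log \mathbb E(|\det(A)|)/n$, which the ansatz above predicts should level off:

import numpy as np

# Estimate E|det(A)| for A with i.i.d. Bernoulli(p) entries; report log E|det(A)| / n.
rng = np.random.default_rng(0)
p, trials = 0.5, 2000
for n in range(2, 19, 4):
    vals = []
    for _ in range(trials):
        A = (rng.random((n, n)) < p).astype(float)
        sign, logabsdet = np.linalg.slogdet(A)   # log-determinant avoids overflow
        vals.append(0.0 if sign == 0 else np.exp(logabsdet))
    est = np.mean(vals)
    print(n, est, np.log(est) / n if est > 0 else float("nan"))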

Do algebraic elements form a subring [duplicate]

Fri, 04/06/2018 - 03:49


It is known that the following statements are true:

(i) If $R,S$ are commutative rings and $R$ is a subring of $S$, then all of the elements of $S$ integral over $R$ form a subring of $S$.

(ii) If $K/F$ is a field extension then all of the elements of $K$ algebraic over $F$ form a subfield of $K$.

Can these propositions be generalized to the following statement?

If $R,S$ are commutative rings and $R$ is a subring of $S$, then all of the elements of $S$ algebraic over $R$ form a subring of $S$.

At first I wanted to prove it using the strategy of (ii); however, I found that this may only work when the element is integral rather than merely algebraic. Now I am not sure whether the statement is true. Can anybody give a proof or counterexample?

Sets of points avoiding small angles

Thu, 04/05/2018 - 15:58

(1) $\mathbb{R}^2$.

I'd like to place $n$ points in the plane so that the smallest angle they determine is as large as possible. In a sense, such a point set is in very general position, not only avoiding three collinear points, but also avoiding near-collinearities.

Define the smallest angle of a set $S$ of points to be the smallest angle of any triangle formed by three points in $S$. So the $n=4$ and $n=5$ point sets shown below have smallest angles $45^\circ$ and $36^\circ$ respectively.

Q1. What is the maximum of the smallest angle determined by any set $S$ of $n$ points, the maximum over all $S$? Is $S$ the vertices of a regular $n$-gon?

Update. Answered Yes by fedja with a nice proof in the comments.

(2) $\mathbb{R}^3$ (Added).

In 3D, the optimal arrangement seems to be akin to packing points on a sphere, e.g., the Tammes problem or the Thomson problem. Below shows the smallest angle realized by the $12$ vertices of the icosahedron: smallest angle $\approx 31.7^\circ$.

Q2. The same question in $\mathbb{R}^3$, and in $\mathbb{R}^d$, $d>3$.

Likely this question has been studied, in which case pointers to the literature would be appreciated.
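As a sanity check on the quoted values, here is a small brute-force Python sketch (my addition, naive enumeration over all triples) computing the smallest angle of a point set for the square, the regular pentagon and the icosahedron:

import numpy as np
from itertools import combinations

def smallest_angle(points):
    # smallest angle (in degrees) over all triangles formed by three points of the set
    best = 180.0
    for a, b, c in combinations(points, 3):
        for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
            u, v = q - p, r - p
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            best = min(best, np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return best

for n in (4, 5):
    ngon = [np.array([np.cos(2 * np.pi * k / n), np.sin(2 * np.pi * k / n)]) for k in range(n)]
    print(n, smallest_angle(ngon))        # expect about 45 and 36 degrees

phi = (1 + np.sqrt(5)) / 2                # icosahedron: (0, ±1, ±phi) and cyclic permutations
ico = [np.array(v) for s1 in (1, -1) for s2 in (1, -1)
       for v in ((0, s1, s2 * phi), (s1, s2 * phi, 0), (s1 * phi, 0, s2))]
print(len(ico), smallest_angle(ico))      # 12 vertices, roughly 31.7 degrees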

Affiliation when invited professor

Thu, 04/05/2018 - 12:10

I am a PhD student at one university and an invited professor at another; that is, I do not have a permanent position at the second one. Now I need to indicate an affiliation in a journal paper, but I do not know whether or not to indicate the university where I am an invited professor.

What is the etiquette for the affiliation to indicate in such a case: to indicate both, or only the first one?

$\det(I-K(z)+\varepsilon(z,x)) $ versus $\det(I-K(z))$

Thu, 04/05/2018 - 10:58

First let me ask the general question that might interest others dealing with determinantal formulas. We are trying to compare the following two quantities

$$C_{\varepsilon} := \oint \det(I-K(z)+\varepsilon(z)) \frac{dz}{z}$$

and

$$C := \oint \det(I-K(z)) \frac{dz}{z}$$

where the contour must contain the origin, and $I, K(z), \varepsilon(z)$ are $n\times n$ matrices. The entries of $K(z)$ have poles with respect to $z$, whereas the entries of $\varepsilon(z)$ are analytic.

A third condition is the following. By the residue theorem we have $\oint F(\varepsilon(z)) \frac{dz}{z}=F(\varepsilon(0))$ for analytic $F$. We assume that $\varepsilon(0)$ is the zero matrix. So if for example $F=\det$ we obtain

$$\oint \det(\varepsilon(z)) \frac{dz}{z}=0.$$

Then given this condition we ask:

Q: Ideally $C_{\varepsilon(z)}$ is close to $C$.

This is asking too much since the determinant will have all its terms coupled. So maybe we can say something interesting as $(\varepsilon)_{i,j}\to 0$.
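Since the question is about how the contour integral of a determinant reacts to an analytic perturbation vanishing at the origin, here is a toy numerical sketch in Python (an artificial $2\times 2$ kernel of my own, not the kernel below) that compares $C_{\varepsilon}$ and $C$ as the perturbation is scaled down:

import numpy as np

def contour_avg(f, npts=4096):
    # (1/2 pi i) \oint_{|z|=1} f(z) dz/z equals the mean of f over the unit circle
    theta = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    return np.mean([f(np.exp(1j * t)) for t in theta])

def K(z):
    # toy kernel whose entries have poles at z = 0.5 and z = -0.4, inside the contour
    return np.array([[0.2 / (z - 0.5), 0.3], [0.1, 0.2 / (z + 0.4)]])

I2 = np.eye(2)
C = contour_avg(lambda z: np.linalg.det(I2 - K(z)))
for scale in (1e-1, 1e-2, 1e-3):
    eps = lambda z, s=scale: s * z * I2          # analytic in z, with eps(0) = 0
    C_eps = contour_avg(lambda z: np.linalg.det(I2 - K(z) + eps(z)))
    print(scale, abs(C_eps - C))                 # difference shrinks with the perturbation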

Our particular case

For example, a typical entry for K is of the form

$$\left(\oint_{|w|=R}-\oint_{|w|=\delta}\right) e^{t(w-1)}\left(\frac{1-w}{w}\right)^{q} \frac{1}{w^{n}(1-w)^{l-n}-z^{l}} dw,$$

for some constants $l>0,q>l-n>0$, large $R$ and small $\delta$. The $R,\delta$ are picked so that the zeros of $w^{n}(1-w)^{l-n}-z^{l}$ (i.e., the poles of the integrand) are contained in the annulus $A(0,\delta,R)$. The matrix $\varepsilon(z,x)$, on the other hand, has similar entries

$$\left(\oint_{|w|=R}-\oint_{|w|=\delta}\right) e^{t(w-1)} \left(\frac{1-w}{w^{-x}}\right)^{q} \frac{1}{w^{n}(1-w)^{l-n}-z^{l}} dw,$$

with the exception of the factor $w^{-x}$, and so as $x\to \infty$ (even just for large enough $x$) the contribution of the contour $\oint_{|w|=\delta}$ disappears and we are left with a quantity that is analytic in $z$. And so we obtain the third condition above:

$$\varepsilon(0) = \left(\oint_{|w|=R} e^{t(w-1)} \left(\frac{1-w}{w^{-x}}\right)^{q_{i,j}} \frac{1}{w^{n}(1-w)^{l-n}} dw \right)_{i,j}=0.$$

Q: So ideally we have $C_{x}\to C$ for large enough x. But again that might be asking too much.

Attempts

  1. Expanding the determinant (using Jacobi's formula) gives terms where $\varepsilon(z)$ and $I-K$ are coupled.

  2. For both integrals, expanding in $z$ as a geometric series:

$$\frac{1}{1-\frac{z^{l}}{(w^{n}(1-w)^{l-n})}}$$

by picking the z-contour small enough so that $\left|\frac{z^{l}}{(w^{n}(1-w)^{l-n})}\right|<1$. This gave for the first integral

$$\sum_{k_{1}+ \dots +k_{n}=0} \det[F(x,k)]$$

where $F(x,k_{i})$ is of the form

$$\left(\oint_{|w|=R}-\oint_{|w|=\delta}\right) e^{t(w-1)}\frac{(1-w)^{q+(n-l)k_{i}} }{w^{-x+q+nk_{i}}} dw.$$

Then I will take its difference with that of the other integral.

On the number of Eulerian orderings

Thu, 04/05/2018 - 10:38

This post is a sequel of Eulerian ordering of the integers modulo n.
Let us recall the definition of an Eulerian ordering:

Let $n>1$ be an integer. Consider the set $C_n := \{0,1, \dots , n-1\}$.

An Eulerian ordering of $C_n$ is an ordering $r_1, \dots, r_n$ of its elements such that:
$$\forall i \le n \ \forall j<i \ \exists k < i \text{ with } \frac{n}{gcd(n,r_k-r_i)} \text{ prime and } \frac{gcd(n,r_k-r_i)}{gcd(n,r_j-r_i)} \text{ integer.}$$

For the motivation of this notion coming from algebraic combinatorics, we refer to the previous post.

Question: How many Eulerian orderings of $C_n$ are there?

Let $a_n$ be the number of Eulerian orderings of $C_n$.

  • If $n$ is not square-free then $a_n = 0$ (exercise).
  • If $n$ is square-free then $a_n>0$ by this answer, providing an example involving mixed base.
  • If $p$ is a prime number, then $a_p = p!$.
  • If $n=2p$ with $p>2$ prime, then $a_n = n!/(\frac{p+1}{2})$. See Prop. 2 below, due to @user44191.
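These small cases can be checked directly; the following naive Python sketch (my addition) tests the definition literally and counts the Eulerian orderings for the first few $n$:

from itertools import permutations
from math import gcd

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

def is_eulerian(order, n):
    # test the defining condition for every pair j < i
    for i in range(len(order)):
        for j in range(i):
            dj = gcd(n, abs(order[j] - order[i]))
            if not any(is_prime(n // gcd(n, abs(order[k] - order[i])))
                       and gcd(n, abs(order[k] - order[i])) % dj == 0
                       for k in range(i)):
                return False
    return True

def a(n):
    return sum(is_eulerian(order, n) for order in permutations(range(n)))

for n in (3, 4, 5, 6):
    print(n, a(n))   # compare with the list above: a_3 = 6, a_4 = 0, a_5 = 120, a_6 = 6!/2 = 360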

Definition: Let $g(n,m)$ be the number of ways of filling an $n \times m$ grid such that each newly filled box (the first excepted) is co-linear (vertically or horizontally) with a previously filled box.

Proposition 1: The number of Eulerian orderings of $C_n$, with $n=pq$ and $p \neq q$ primes, is $g(p,q)$.
Proof: The grid corresponds to the decomposition of the cyclic group $C_{pq} \simeq C_p \times C_q$. The fact that two boxes are co-linear (vertically or horizontally) corresponds to $gcd(n,r_k-r_i)$ prime, which is equivalent to $\frac{n}{gcd(n,r_k-r_i)}$ prime (because $n=pq$). So the first condition for an ordering to be Eulerian corresponds exactly to the above way of filling the grid. Finally, $\forall j<i$, if $gcd(n,r_j-r_i)$ is prime then take $k=j$, else $gcd(n,r_j-r_i) = 1$ so the above $r_k$ works. $\square$

Intermediate problem: Find a formula for $g(n,m)$.

The following result is due to @user44191 (see its first comment):

Proposition 2: $g(2,m) = 2 (2m)!/(m+1)$.
Proof: Consider the $2 \times m$ grid. We first count the number of ways for filling $\ell \le m$ horizontally co-linear boxes (below $m=7$ and $\ell = 4$):

$$\substack{ \displaystyle{◻◻◻◻◻◻◻} \cr \displaystyle{◼◻◼◼◼◻◻} } $$

The number is $2 \times \frac{m!}{(m-\ell)!}$. Next we can choose among exactly $\ell$ boxes which are vertically co-linear with a previously filled box.

$$\substack{ \displaystyle{◻◻◻◼◻◻◻} \cr \displaystyle{◼◻◼◼◼◻◻} } $$

Finally, any other box is (vertically or horizontally) co-linear to a previous one, so we can finish by any of the $(2m-\ell - 1)!$ ways. It follows that:

$$g(2,m) = 2 \sum_{\ell=1}^m \frac{m!}{(m-\ell)!} \ell (2m-\ell-1)!$$

In fact, as observed by @user44191, this search on WolframAlpha provides the formula $2\Gamma(2m+1)/(m+1)$, which is equal to $2 (2m)!/(m+1)$. $\square$

Remark: We could try to extend the above approach to a formula for $g(n,m)$, possibly recursive, but it already seems tricky just for $g(3,m)$.
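For small parameters, the definition of $g(n,m)$ can also be checked by brute force; the following naive Python sketch (my addition) enumerates all orderings of the boxes and compares $g(2,m)$ with the closed form of Proposition 2:

from itertools import permutations
from math import factorial

def g(n, m):
    # count orderings of the n*m boxes in which every new box (after the first)
    # shares a row or a column with a previously filled box
    cells = [(i, j) for i in range(n) for j in range(m)]
    count = 0
    for order in permutations(cells):
        ok = True
        for idx in range(1, len(order)):
            i, j = order[idx]
            if not any(i == a or j == b for a, b in order[:idx]):
                ok = False
                break
        if ok:
            count += 1
    return count

for m in (1, 2, 3):
    print(m, g(2, m), 2 * factorial(2 * m) // (m + 1))   # brute force vs. 2(2m)!/(m+1)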

fake and weak cardinals

Thu, 04/05/2018 - 07:52

Suppose $\lambda$ is a successor of a singular cardinal. We will say $\lambda$ is fake if there is a transitive set $M$ with $\lambda \subseteq M$ satisfying $\mathrm{ZFC}^-$ (ZFC without powerset) in which there is a largest $M$-cardinal $\kappa < \lambda$ which is regular in $M$. We will say $\lambda$ is weak if we can find such $M$ and $\kappa$ such that $M \models \kappa^{<\kappa} = \kappa$.

Question: If $\lambda$ is a fake successor of a singular, is it also weak?

Some motivation: To obtain some properties around singular cardinals of high consistency strength, one often creates weak successors of singulars using Prikry-type forcing. But to obtain other such properties, one needs to use successors of singulars that are not weak. These two methods are in tension. In practice, the examples of fake successors of singulars are also weak, since the witnesses may be taken from inner models satisfying GCH. But I am wondering if there is a deeper explanation.

Remark: If $\kappa$ is supercompact and $\mathrm{cf}(\mu)<\kappa<\mu$, then $\mu^+$ is not weak. Using Radin forcing, we can produce a model with many measurable cardinals in which every successor of a singular is weak.

Kan condition for bar construction

Thu, 04/05/2018 - 02:05

Let $T$ be a monad on a concrete category $\mathcal{C}$, and $A$ an algebra over $T$. The bar construction is a simplicial object in the category $\mathcal{C}^T$ of algebras which we can think of as a sort of "resolution" of $A$. Some of the arrows look like the following diagram: $$ \cdots\ TTA \rightrightarrows TA \to A $$ (I unfortunately cannot draw more arrows here. See the link above for a better picture.)

Now, is such a simplicial object a Kan complex, or at least a quasicategory? Is there a filling condition for horns, in general? If not, what would be a counterexample?

Any reference would also be welcome.

How many ways are there to form a complete graph as the union of triangles?

Wed, 04/04/2018 - 18:38

Consider the complete graph on $n$ vertices. How many ways are there to form this graph as a union of triangles?

The triangles are distinct but may overlap.
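Under one possible reading (an assumption on my part: a "way" is a set of distinct triangles whose edges together cover every edge of $K_n$), the count can be brute-forced for small $n$; a short Python sketch:

from itertools import combinations

def count_triangle_covers(n):
    # sets of triangles of K_n whose edge union is the whole edge set
    all_edges = set(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    count = 0
    for k in range(1, len(triangles) + 1):
        for subset in combinations(triangles, k):
            covered = set()
            for a, b, c in subset:
                covered.update({(a, b), (a, c), (b, c)})
            if covered == all_edges:
                count += 1
    return count

for n in (3, 4, 5):
    print(n, count_triangle_covers(n))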

What is the scope of validity of Kunneth formula for de Rham?

Wed, 04/04/2018 - 09:07

In books like Bott-Tu, and in all the PDF texts I have found on the internet, the Kunneth formula for manifolds $M$ and $N$ and their de Rham cohomology $$ H^{\bullet}_{dR}(M \times N) \simeq H^{\bullet}_{dR}(M) \otimes H^{\bullet}_{dR}(N)$$ is proved under various finiteness hypotheses: one of the two manifolds is compact, or has finite-dimensional de Rham cohomology, or admits a finite good cover.

On the other hand, for singular cohomology of topological spaces $X$ and $Y$ and a PID $A$ (let's say $\mathbb{Z}$) there is the following more general Kunneth formula: $$H^n(X \times Y;A) \simeq \Big(\sum_{i+j=n} H^i(X;A)\otimes H^j(Y;A)\Big)\oplus\Big(\sum_{p+q=n+1} Tor(H^p(X;A),H^q(Y;A))\Big)$$

If $X$ and $Y$ are manifolds and we take $A = \mathbb{R}$, then $H^p(X;\mathbb{R})$ and $H^q(Y;\mathbb{R})$ are vector spaces, so the $Tor$ part vanishes, and using the de Rham theorem we end up with a Kunneth formula for de Rham cohomology without any finiteness hypothesis.

But Bott-Tu (p. 108) give an explicit counterexample to the Kunneth formula when both manifolds have infinite-dimensional cohomology, and write "that some sort of finiteness hypothesis is necessary for Kunneth and Leray-Hirsch to hold".

So what is wrong with the Kunneth formula for de Rham cohomology "deduced" from the Kunneth formula for singular cohomology above?

And what is the real scope of the Kunneth formula for de Rham cohomology, i.e., the minimal hypotheses for the formula to hold?

Rate of convergence of a test statistic towards a Gaussian random variable

Wed, 04/04/2018 - 08:52

This is a follow-up question to "Rate of convergence of $\frac{1}{\sqrt{n\ln n}}(\sum_{k=1}^n 1/\sqrt{X_k}-2n)$, $X_i$ i.i.d. uniform on $[0,1]$?". My motivation is to construct a statistic whose rate of convergence to a Gaussian will be very slow and as such explore types of convergence which are not encapsulated by the Berry-Esseen theorem. We therefore define the following statistic:
\begin{equation} S_n := \frac{\left(\sum\limits_{k=1}^n f^{-1}(X_k) - \frac{3}{2} e \cdot n\right)}{e\cdot \sqrt{n \log(\log(n))}} \end{equation}
where $X_k$ are i.i.d. uniformly distributed in $(0,1)$ and the function $f$ is defined as follows:
\begin{equation} f(x) := \frac{e^2}{2} \cdot \frac{1+\log(x)}{x^2 \log(x)^2} 1_{x \ge e} \end{equation}
Now, the probability density of $f^{-1}(X)$ is as follows:
\begin{eqnarray} \rho_{f^{-1}(X)}(z) &=& \int\limits_0^1 \delta(z - f^{-1}(x)) dx =-\int\limits_{e}^\infty \delta(z-u) f^{'}(u) du= -f^{'}(z) 1_{z \ge e}\\ &=& \frac{e^2}{2} \cdot \frac{2+3 \log(z)+2 \log(z)^2}{z^3 \log(z)^3}1_{z \ge e} \end{eqnarray}
From this we readily get the moments:
\begin{eqnarray} E\left[ f^{-1}(X) \right] = \frac{3}{2} e\\ E\left[ (f^{-1}(X))^2 \right] = \infty \end{eqnarray}
We also get the characteristic function. It reads:
\begin{eqnarray} \kappa_{f^{-1}(X)}(k) = e^{\imath k e}+ \imath k \frac{e}{2} e^{\imath k e}-k^2 \frac{e^2}{2}\cdot \int\limits_0^\infty (-\imath k)^\delta \cdot \Gamma(-\delta,-\imath e k) d \delta \end{eqnarray}
for $0<k<1$.

Note: The last integral on the right hand side is for me hard to crack. However numerical computations suggest that: \begin{equation} \lim_{k\rightarrow 0} \frac{1}{\log(\log(1/k))} \cdot \int\limits_0^\infty (-\imath k)^\delta \cdot \Gamma(-\delta,-\imath e k) d \delta = 1 \end{equation} Indeed by using the integral representation of the Gamma function along with integration by parts we quickly establish the following identity: \begin{eqnarray} (-\imath k)^\delta \cdot \Gamma(-\delta,-\imath e k) = \frac{e^{-\delta}}{\delta} + (-\imath k)^\delta \cdot \Gamma(-\delta) + \sum\limits_{n=1}^\infty \frac{(\imath k)^n}{n!}\cdot \frac{e^{n-\delta}}{\delta-n} \end{eqnarray}

Now clearly \begin{eqnarray} &&\int\limits_0^\infty (-\imath k)^\delta \cdot \Gamma(-\delta,-\imath e k) d \delta =\\ && \int\limits_0^\infty \left( \frac{e^{-\delta}}{\delta} + (-\imath k)^\delta \cdot \Gamma(-\delta) \right) d\delta + O(k)\\ &&= \int\limits_0^\infty \left( \frac{e^{-\delta}}{\delta} - \frac{(-\imath k)^\delta}{\delta} \right) d\delta + \int\limits_0^\infty (-\imath k)^\delta \left(\Gamma(-\delta)+\frac{1}{\delta}\right) d\delta + O(k)\\ &&= \left.\left( Ei(-\delta) - Ei(-A \delta)\right)\right|_0^\infty+ \int\limits_0^\infty (-\imath k)^\delta \left(\Gamma(-\delta)+\frac{1}{\delta}\right) d\delta + O(k)\\ &&= \log(-A) + \int\limits_0^\infty (-\imath k)^\delta \left(\Gamma(-\delta)+\frac{1}{\delta}\right) d\delta + O(k) \end{eqnarray} where $A=-\log(-\imath k)= \imath \pi/2 - \log(k)$. Now we have checked numericaly that the integral in the middle above decays monotonically when $k\rightarrow 0$. Since now $\log(-A) = \log(-\imath \pi/2 + \log(k))= \log(-\imath \pi/2-\log(1/k)) \rightarrow \log(-\log(1/k)) = -\imath \pi/2 + \log(\log(1/k)) \rightarrow \log(\log(1/k))$ when $k\rightarrow 0$ the claim is established.

Now we check that our test statistic is properly normalized.

Define $c_n:=\sqrt{n\log(\log(n))}$. Indeed we have: \begin{eqnarray} &&\log\left( \kappa_{S_n}(k)\right) =\\ && -\imath k \frac{3}{2} \frac{n}{c_n} + n \log\left[ \kappa_{f^{-1}(X)}(\frac{k}{e c_n})\right] \\ &&= -\imath k \frac{3}{2} \frac{n}{c_n} + n \log\left[ e^{\imath \frac{k}{c_n}}(1+\imath \frac{k}{2 c_n}) - \frac{1}{2} \frac{k^2}{c_n^2} \log(1+\log(c_n)-\log(k))\right]\\ &&= -\imath k \frac{3}{2} \frac{n}{c_n} + n \log\left[ 1+\imath \frac{3}{2} \frac{k}{c_n} - \frac{k^2}{c_n^2} + O(\frac{k^3}{c_n^3}) - \frac{1}{2} \frac{k^2}{c_n^2} \log(1+\log(c_n)-\log(k)) \right]\\ &&= -\imath k \frac{3}{2} \frac{n}{c_n} + \imath k \frac{3}{2} \frac{n}{c_n} + \left(\frac{1}{8} - \frac{1}{2} \log(1+\log(c_n)-\log(k))\right) \frac{k^2}{c_n^2} n + O(\frac{k^3}{c_n^3})\\ &&= \left(\frac{1}{8} - \frac{1}{2} \log(1+\log(c_n)-\log(k))\right) \frac{k^2}{c_n^2} n + O(\frac{k^3}{c_n^3}) \end{eqnarray} Now for the statistic to be properly normalized we have to have: \begin{equation} \lim_{n\rightarrow \infty} \frac{n}{c_n^2} \log(\log(c_n)) = 1 \end{equation} which is indeed the case as one can readily check by plugging the definition of $c_n$ into the lhs.

Now, I carried out a Monte Carlo simulation, computed the sample cumulative distribution function (CDF) of our statistic, and plotted it along with the CDF of a standardized Gaussian distribution, the former and the latter being plotted in blue and purple respectively. Here I took $n=5,10,15$ and in each case I used $m=1000$ realizations. The figures are below:

CDFs at $n=5$

CDFs at $n=10$

CDFs at $n=15$

I have used the following Mathematica code to produce those figures:

m = 1000; n = 15; delta = 1/10;
bins = Table[-5 + delta/2 + j delta, {j, 1, (10 - delta)/delta}];
limD = CDF[NormalDistribution[0, 1], bins];            (* reference Gaussian CDF on the grid *)
X = RandomReal[{0, 1}, {m, n}];                         (* m samples of (X_1, ..., X_n) *)
x =.;
(* invert f numerically: solve f(x) == X[[i, j]] with x > E for every sample *)
{t0, Y} = Timing[(x /. Map[First[NSolve[(E^2 (1 + Log[x]))/(2 x^2 Log[x]^2) == # && x > E, x, Reals]] &, X, {2}])];
ll = (Total[#] & /@ Y - 3/2 E n)/(E Sqrt[n Log[Log[n]]]);   (* the statistic S_n *)
emp = EmpiricalDistribution[ll];
DD = CDF[emp, bins];
pl = ListPlot[Transpose[{bins, #}] & /@ {DD, limD}, ImageSize -> 800,
   LabelStyle -> {15, FontFamily -> "Arial"}, BaseStyle -> {15, FontFamily -> "Bold"},
   PlotLabel -> "n=" <> ToString[n]];
Export["LimitBehavior1_n_" <> ToString[n] <> ".jpg", pl, "JPEG"];
Import["LimitBehavior1_n_" <> ToString[n] <> ".jpg"]

Having said all this, my question is the following: what is the rate of convergence of our statistic towards a Gaussian? To be specific, we are asking about the behavior of the supremum norm of the difference of the CDFs for large values of $n$.

Loomis-Whitney versus Gagliardo inequalities

Wed, 04/04/2018 - 01:26

When searching for a reference, I discovered a curious fact about the Wikipedia page concerning the Loomis-Whitney Inequality (LWI). This page, which exists only in an English version, states that the LWI is $$\int_{{\mathbb R}^n}\prod_{i=1}^ng_i(\hat x_i)\,dx\le\prod_{i=1}^n\|g_i\|_{L^{n-1}({\mathbb R}^{n-1})},$$ for all measurable functions $g_i\ge0$ over ${\mathbb R}^{n-1}$. The notation $\hat x_i$ for a vector $x\in{\mathbb R}^n$ means that the coordinate $x_i$ is dropped.

It seems to me that Loomis and Whitney (1949) did not establish this inequality, which is actually due to Gagliardo (1958). They proved instead that if $E$ is a measurable domain of ${\mathbb R}^n$ and $E_i$ denotes its projection under $x\mapsto \hat x_i$, then $$\mu_n(E)^{n-1}\le\prod_{i=1}^n\mu_{n-1}(E_i).$$ The latter inequality can be viewed as a consequence of the Gagliardo Inequality (GI). But to me, they are not equivalent to each other.

Is there a way to prove the GI, starting from the original LWI?

Although I am a contributor to the French Wikipedia, I don't have the rights to modify an English page. Who could correct it? In particular, it seems important to give full credit to Gagliardo. His inequality is the best starting point in the proof of the Sobolev inequalities.

Kernel of evaluation map into field of quotients

Tue, 04/03/2018 - 19:59

Let $R$ be an integral domain and for $a \in R$ denote by $\text{eval}_a: R[X] \to R$ evaluation at $a$. It's well-known (and easy to see) that $$\ker(\text{eval}_a)=(X-a).$$ The next more complicated thing in this setting is to evaluate at an element $q$ of the quotient field $K$ of $R$: $\text{eval}_q: R[X] \to K,\,f \mapsto f(q)$.

Question 1: Is there an explicit description of the generators of $\ker(\text{eval}_q)$ ?

In particular, I wonder if

$$\ker(\text{eval}_q) =(\,bX-a \mid q=\frac{a}{b};\,a,b \in R\,)\qquad ? $$

I could solve the following special cases:

  1. If $q=a\in R$ then $\ker(\text{eval}_q)=(X-a)$

  2. If $q=1/b$ then $\ker(\text{eval}_q)=(bX-1)$

  3. If $R$ is a GCD domain and $q=\frac{a}{b}$ with $a,b$ coprime, then $\ker(\text{eval}_q)=(bX-a)$

In view of 3., I wonder whether the GCD assumption is really needed:

Question 2: If $a, b\in R$ are coprime, i.e. $(a,b)=R$, is $\ker(\text{eval}_q)=(bX-a)$ for $q=\frac{a}{b}$ ?

For a proof of 3., note that $bX-a\in R[X]$ is irreducible and hence prime (since $R$ is a GCD domain, $R[X]$ is also a GCD domain, and irreducible elements in a GCD domain are prime). If $f \in R[X]$ vanishes at $q$, write $f=(X-q)h$ for some $h \in K[X]$. By clearing denominators, there is $r \in R$ and $\tilde{h}\in R[X]$ such that $rf =(bX-a)\tilde{h} \in (bX-a)$. Since $(bX-a)$ is prime and $r \not\in (bX-a)$ we finally obtain $f \in (bX-a)$.

Remark: I have asked the question on math.SE but didn't get any reply: https://math.stackexchange.com/questions/2718227/kernel-of-evaluation-map-into-field-of-quotients

Maximal abelian (Cartan) subalgebras of Lie algebras over $\mathbb{C}$

Tue, 04/03/2018 - 15:46

Let $\mathfrak g$ be the Lie algebra of a compact connected Lie group $G$. Let $\mathfrak g_{\mathbb{C}}$ be the complexification of $\mathfrak g$ and let $\mathfrak h \subset \mathfrak g_{\mathbb{C}}$ be a Lie subalgebra satisfying $\mathfrak h + \overline{\mathfrak h} = \mathfrak g_{\mathbb{C}}$. Suppose that $\mathfrak a \subset \mathfrak h$ is a maximal abelian Lie subalgebra of $\mathfrak h$. Does it hold that $\mathfrak a + \overline{\mathfrak a}$ is a maximal abelian subalgebra of $\mathfrak g_{\mathbb{C}}$?

Edit: A nontrivial example:

Suppose that $G$ is an even-dimensional compact Lie group and suppose that it is endowed with a left-invariant complex structure (*). Take $\mathfrak h$ to be the set of all left-invariant vector fields that annihilate every local holomorphic function on $G$.

(*): This kind of complex structure always exists. In Proposition 2.5 of 1 there is a detailed characterization and in section 5.1 of 2 there is an easy construction.

Find the number of ordered triples (a,b,c) such that abc=108. Or Number of ways of arranging n objects into r groups

Tue, 04/03/2018 - 09:44

The question is to find the number of ordered triples $(a,b,c)$ such that $abc=108$. I know that $108 = 2^2 \cdot 3^3$, which is $2\cdot 2\cdot 3\cdot 3\cdot 3$. My approach to the problem is to find in how many ways these numbers can be distributed into 3 different groups. As two of the numbers can be 1, the problem becomes: in how many ways can 7 objects be arranged into 3 groups?

Correct me if I am wrong at any point and if not please help me with the solution. Any other approach will also be appreciated.

Thanks in Advance
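For reference, the count can be checked by brute force and compared with a stars-and-bars count that distributes the exponents of $2^2\cdot 3^3$ independently among the three factors (a short Python sketch, my addition, not part of the question):

from math import comb

N = 108
# ordered triples (a, b, c) of positive integers with a*b*c = N
brute = sum(1 for a in range(1, N + 1) for b in range(1, N + 1) if N % (a * b) == 0)
# distribute the exponent 2 of the prime 2 and the exponent 3 of the prime 3 over three slots
stars_and_bars = comb(2 + 2, 2) * comb(3 + 2, 2)
print(brute, stars_and_bars)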

Inverse of matrix with blocks of ones

Tue, 04/03/2018 - 09:43

Consider a real matrix of the form: $$\begin{pmatrix}a_{11}{1}_{r_{1}\times r_{1}}+b_{1}I_{r_{1}} & a_{12}{1}_{r_{1}\times r_{2}} & a_{13}{1}_{r_{1}\times r_{3}} \\ a_{12}{1}_{r_{2}\times r_{1}} & a_{22}{1}_{r_{2}\times r_{2}}+b_{2}I_{r_{2}} & a_{23}{1}_{r_{2}\times r_{3}} \\ a_{13}{1}_{r_{3}\times r_{1}} & a_{23}{1}_{r_{3}\times r_{2}} & a_{33}{1}_{r_{3}\times r_{3}} +b_{3}I_{r_{3}} \end{pmatrix}$$ The $r_i$ are growing linearly in the matrix size $n$, the $a_{ij}$ are bounded in absolute value as $n$ grows, while the $b_i$ are bounded and bounded away from $0$. The matrices $1_{r_i \times r_j}$ are matrices of all ones of the appropriate dimensions, and $I_{r_i}$ are identity matrices.

Are there techniques to analyze the inverse of such a matrix? In the case of two diagonal blocks, we can compute that the inverse is $$\begin{pmatrix}-\frac{b_{1}^{-1}}{r_{1}}{1}_{r_{1}\times r_{1}}+b_{1}^{-1}I_{r_{1}}+O(1/n^{2}) & O(1/n^{2})\\ O(1/n^{2}) & -\frac{b_{2}^{-1}}{r_{2}}{1}_{r_{2}\times r_{2}}+b_{2}^{-1}I_{r_{2}}+O(1/n^{2}) \end{pmatrix}$$ Is there an easy way to see why this should generalize?
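A quick numerical sketch in Python (arbitrary toy values for the $a_{ij}$, $b_i$, $r_i$; my addition) that builds the two-block matrix and compares its exact inverse with the approximation displayed above:

import numpy as np

r = [40, 60]
a = [[0.7, -0.3], [-0.3, 1.1]]
b = [2.0, 0.5]

n = sum(r)
off = np.concatenate(([0], np.cumsum(r)))
M = np.zeros((n, n))
for i in range(2):
    for j in range(2):
        M[off[i]:off[i+1], off[j]:off[j+1]] = a[i][j]           # a_ij * ones block
    M[off[i]:off[i+1], off[i]:off[i+1]] += b[i] * np.eye(r[i])  # + b_i * identity

Minv = np.linalg.inv(M)
approx = np.zeros_like(M)
for i in range(2):
    approx[off[i]:off[i+1], off[i]:off[i+1]] = -np.ones((r[i], r[i])) / (b[i] * r[i]) + np.eye(r[i]) / b[i]
print(np.max(np.abs(Minv - approx)))   # largest entrywise error; expected to be O(1/n^2)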

A Proof for Goldbach's Conjecture

Tue, 04/03/2018 - 09:42

I need help to revise and edit my proof of Goldbach's Conjecture. I have started it but am unable to ensure it is correct. Contact me for the proof.

Convex hull of the Stiefel manifold with non-negativity constraints

Tue, 04/03/2018 - 09:24

Consider the Stiefel manifold

$$\mathrm{St}(n,k) :=\{X \in \mathbb{R}^{n\times k} : X^TX = I_k\},$$

where $I_k$ is the $k \times k$ identity matrix. It is well known that

$$\mathrm{conv}(\mathrm{St}(n,k))= \{X \in \mathbb{R}^{n\times k} : \|X\|_2 \leq 1\},$$

where $\|\cdot\|_2$ is the induced operator $2$-norm.

Question: Is there a characterization for the convex hull of the Stiefel manifold with non-negativity constraints:

$$\mathrm{conv}(\mathrm{St}(n,k) \cap \mathbb{R}^{n\times k}_+),$$

where $\mathbb{R}^{n\times k}_+$ is the set of all $n \times k$ matrices with non-negative elements?

Supersingular Primes of an Elliptic Curve over $\mathbb{Q}$

Tue, 04/03/2018 - 09:03

My question is regarding Elkies' paper on "The existence of infinitely many supersingular primes for every elliptic curve over $\mathbb{Q}$".

In the section "Nuts and Bolts", Elkies has the following proposition:

Proposition. Modulo $\ell$, $P_\ell(X)$ and $P_{4\ell}(X)$ factor into $(X-12^3)R(X)^2$ and $(X-12^3)S(X)^2$ for some polynomials $R(X)$ and $S(X)$ respectively.

Here $P_D(X)$ refers to the Hilbert class (or ring class) polynomial for the imaginary quadratic order $O_D$ of discriminant $-D$, and $\ell$ is a prime congruent to $-1$ mod $4$. He goes on to prove the following lemmas:

Lemma 1. $P_\ell(12^3) \equiv P_{4\ell}(12^3) \equiv 0$ mod $\ell$.

The proof of Lemma 1 is easy to understand. Next, he says that the proofs of the $P_{\ell}$ part and the $P_{4\ell}$ part of the proposition proceed in the same way (by proving Lemma 2), and he does it for $P_\ell$. However, it is unclear to me if Lemma 2 can be proven similarly for $P_{4\ell}$, and I will explain it below.

Lemma 2. Let $D = \ell$ or $4\ell$. If $x_0$ is any root of $P_D(X)$, then there exists a unique prime $\lambda_0$ lying above $\ell$ in the splitting field $K_D$ of $P_D$ such that $x_0 \equiv 12^3$ mod $\lambda_0$.

Proof of Lemma 2. The existence claim in Lemma 2 follows from Lemma 1. For uniqueness, he assumes for a contradiction that there exists another prime $\lambda_1$ lying above $\ell$ such that $x_0 \equiv 12^3$ mod $\lambda_1$. Since some $\sigma \in Gal(K_D/\mathbb{Q})$ carries $\lambda_1$ to $\lambda_0$, we obtain another root $x_1 = \sigma(x_0) \neq x_0$ of $P_D$ such that $x_1 \equiv 12^3$ mod $\lambda_0$ (as well).

If $E_0$ and $E_1$ are elliptic curves of $j$-invariants $x_0$ and $x_1$, then $E_0$ and $E_1$ both reduce (mod $\lambda_0$) to elliptic curves which are isomorphic to the reduction of \begin{equation} \mathscr{E} \colon Y^2=X^3-X \end{equation} modulo $\ell$, which we shall denote by $\mathscr{E}_\ell$.

Some facts about $\mathscr{E}_\ell$:

  1. $\mathscr{E}_\ell$ is supersingular, whose Frobenius $\ell^{\text{th}}$-power isogeny $F$ satisfies $F^2 = [-\ell]$. Since $ker(1+F) \supseteq \ker[2]$, $\mathscr{E}_\ell$ has an endomorphism $\frac{1+F}{2}$.
  2. $\mathscr{E}$ has CM by $\sqrt{-1}$, given by $(x,y) \mapsto (-x,iy)$. We shall denote the reduction of this isogeny modulo $\ell$ by $I$.
  3. $(IF)^2 = [-\ell]$.
  4. $End(\mathscr{E}_\ell) = \mathbb{Z}[I,\frac{1+F}{2}] = \mathbb{Z} \oplus \mathbb{Z}I \oplus \mathbb{Z}\frac{1+F}{2} \oplus \mathbb{Z}\frac{I+IF}{2}$.

Therefore, we get a degree-preserving embedding \begin{equation} \iota \colon Hom(E_0,E_1) \hookrightarrow End(\mathscr{E}_\ell). \end{equation}

Where the proof diverges for the cases $D=\ell$ and $D=4\ell$:

For $D=\ell$: Elkies says that it can be shown that there exists a $\mathbb{Z}$-basis $\lbrace \psi_1,\psi_2 \rbrace$ for $Hom(E_0,E_1)$ such that $\deg(\psi_i) \leq \frac{1+\ell}{4}$ for each $i$ (this is okay for me --- I've managed to obtain a sharper upper bound of $\frac{\ell}{6}$). For each $i$, let $\theta_i = \iota(\psi_i)$, and Fact #4 above allows us to write $\theta_i = a+bI+c\frac{1+F}{2}+d\frac{I+IF}{2}$ for some $a,b,c,d \in \mathbb{Z}$. Then \begin{equation} \frac{\ell}{6} \geq \deg(\theta_i) = (a+bI+c\frac{1+F}{2}+d\frac{I+IF}{2})(a-bI-c\frac{1-F}{2}-d\frac{I+IF}{2}) = (a+\frac{c}{2})^2 + (b+\frac{d}{2})^2 + (c^2+d^2)\frac{\ell}{4} \geq (c^2+d^2)\frac{\ell}{4}, \end{equation} which forces $c=d=0$. Thus, for each $i=1,2$, $\theta_i \in \mathbb{Z}[I]$. In other words, $\iota$ is the following embedding \begin{equation} \iota \colon Hom(E_0,E_1) \hookrightarrow \mathbb{Z}[I] \subseteq End(\mathscr{E}_\ell). \end{equation} It can be shown that the image of $\iota$ is a rank $2$ lattice whose period parallelogram has Lebesgue area $\frac{\sqrt{\ell}}{2}$, but all the sublattices of $\mathbb{Z}[I]$ have unit parallelograms of integral area --- which culminates in a contradiction.

For $D=4\ell$: Elkies doesn't do this case in his paper, but I've managed to show that there exists a $\mathbb{Z}$-basis $\lbrace \psi_1,\psi_2 \rbrace$ for $Hom(E_0,E_1)$ such that $\deg(\psi_i) \leq \frac{4\ell}{6} = \frac{2\ell}{3}$ for each $i$. Once again, for each $i$, let $\theta_i = \iota(\psi_i)$, and write $\theta_i = a+bI+c\frac{1+F}{2}+d\frac{I+IF}{2}$ for some $a,b,c,d \in \mathbb{Z}$. By the same computation, we get \begin{equation} \frac{2\ell}{3} \geq \deg(\theta_i) = (a+\frac{c}{2})^2 + (b+\frac{d}{2})^2 + (c^2+d^2)\frac{\ell}{4} \geq (c^2+d^2)\frac{\ell}{4}, \end{equation} which implies $c^2+d^2 \leq \frac{8}{3}$. This is where the proof seems to fall apart for $D=4\ell$, since $c^2+d^2 \leq \frac{8}{3}$ does not imply $c=d=0$ --- and hence does not produce a contradiction. (Note that there are clearly solutions $(a,b,c,d)$ such that $c \neq 0$ or $d \neq 0$, e.g. $(0,0,1,1)$.)

What I have tried so far: (1) I don't believe that I can improve my upper bound of $\frac{2\ell}{3}$ any further (if this is possible, please enlighten me). (2) I tried to change the $\mathbb{Z}$-basis $\lbrace 1,I,\frac{1+F}{2},\frac{I+IF}{2} \rbrace$ for $End(\mathscr{E}_\ell)$ and (possibly) get some quadratic form which works, but all of them (so far) arrive at the same inequality $c^2+d^2 \leq \frac{8}{3}$ (i.e. no contradiction).

I'm aware that Elkies has proven a more general version of the proposition (at the start) in his PhD thesis, but I feel like that proof is out of reach for me at the moment. Therefore, I'm hoping that someone who has worked out this proof for the specific case $D=4\ell$ can enlighten me on this issue. Thank you.

Universal enveloping algebra and the algebra of invariant differential operators

Tue, 04/03/2018 - 08:53

Let $G$ be a Lie group and $\mathfrak{g}$ be its Lie algebra. Then $\mathfrak{g}$ may be interpreted as the Lie algebra of right (equivalently left) invariant vector fields. Let $\mathcal{U}(\mathfrak{g})$ be its enveloping algebra.

Why may $\mathcal{U}(\mathfrak{g})$ be interpreted as the algebra of all right invariant differential operators?

In other words: as there are no algebra relations within $\mathcal{U}(\mathfrak{g})$ except $XY-YX=[X,Y]$, the same should be true for invariant differential operators; however, is this obvious?
