Recent MathOverflow Questions

Measure of distance between two curves described by their curvature

Math Overflow Recent Questions - Sat, 12/09/2017 - 05:48

I have two curves both of which start at $(0,0)$. They satisfy the following equations:

$ \begin{align*} \frac{dx}{d\theta} &= -\frac{\sin \theta}{\kappa(\theta)} \\ \frac{dy}{d\theta} &= \text{ }\frac{\cos \theta}{\kappa(\theta)}\\ \kappa(\theta) &= \cos^2 \theta \left[(2-x) \cos \theta - (1+y)\sin \theta\right] \end{align*} $

Angle $\theta$ is the angle of the outward normal to the curve. The curve is parametrized by $\theta$, which runs from $\theta_0$ to $0$, where $-\pi/4 < \theta_0 < 0$.

While the two curves, $\mathcal{C}_1$ and $\mathcal{C}_2$, satisfy the same DEs and start at the origin, their initial data differs in their angle $\theta_0$. Say, $\theta_0^{\mathcal{C}_1} < \theta_0^{\mathcal{C}_2}$.

The picture below is perhaps more helpful to understand what $\theta$ is and also how the curves look.

I am interested in understanding how far apart the curves are when $\theta = 0$. Obviously, this is very easy to do numerically. But are there any analytical methods that I can exploit to understand the difference between their $x$-coordinates as well as their $y$-coordinates?
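For reference, here is a minimal numerical sketch of the comparison (my own illustration, not part of the question); the two $\theta_0$ values and the solver tolerances are arbitrary choices inside $(-\pi/4, 0)$.

```python
# Integrate the ODE system from theta_0 to 0 with initial point (0, 0) and
# compare the endpoints of the two curves.  The theta_0 values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(theta, state):
    x, y = state
    kappa = np.cos(theta)**2 * ((2 - x) * np.cos(theta) - (1 + y) * np.sin(theta))
    return [-np.sin(theta) / kappa, np.cos(theta) / kappa]

def endpoint(theta0):
    sol = solve_ivp(rhs, (theta0, 0.0), [0.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

x1, y1 = endpoint(-0.6)   # curve C_1
x2, y2 = endpoint(-0.4)   # curve C_2, so that theta_0^{C_1} < theta_0^{C_2}
print(x1 - x2, y1 - y2)   # differences of the x- and y-coordinates at theta = 0
```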


Why does Fermat's last theorem invalidate the existence of many real algebraic numbers? [on hold]

Math Overflow Recent Questions - Sat, 12/09/2017 - 05:38

Why does Fermat's last theorem immediately imply the non-existence of many alleged real algebraic numbers in mathematics and science of the following form: $$\sqrt[p]{q},$$ where $p > 2$ and $q$ are prime numbers?

A long time back, I was shocked to discover this fact, and strangely the rigorous proof is so elementary that one may wonder about it, or whether one has truly missed something that leads to this rare conclusion. So I thought there must be some historical proofs of their true existence that I may not be aware of.

Thank you for your help.

Gorenstein projective modules for Cluster-tilted algebras

Math Overflow Recent Questions - Sat, 12/09/2017 - 05:27

Cluster-tilted algebras are 1-Gorenstein. Is it known which of those algebras are representation-finite and which are CM-finite (that is, have only finitely many Gorenstein projective modules)? More generally, for which 1-Gorenstein algebras is it known that they are CM-finite? Is a classification possible?

Integration of the Poisson summation formula

Math Overflow Recent Questions - Sat, 12/09/2017 - 04:53

Consider a rapidly decreasing function $f(x)$ such that $\int_0^{\infty} f(x)\,dx=0$ and, for $x$ near zero, $f(x)=O(x^a)$ (with $a>0$). I calculated the integral of the following sum (which appears in the Poisson summation formula) and found (writing $F(x)=\int_0^{x} f(t)\, dt$): $$\int_0^{\infty}\sum_{n=1}^{\infty} f(nx)\, dx= -\frac{1}{2}\int_0^{\infty} \frac{1}{x}F(x)\, dx = -\frac{1}{2}\int_0^{\infty} \ln(x) f(x)\, dx$$

(See my previous post, "Interchange of sum and integral (on a 'Poisson summation')", for why the integral on the left is well defined and how the above result is obtained using the Poisson summation formula.)

Is there any known reference in the literature with a similar result (using the Poisson summation formula or not)?
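As an aside, here is a small numerical harness (my own sketch, not from the question) for comparing the three quantities in the display above on one concrete test function satisfying the stated hypotheses; the test function $f(x)=xe^{-x}-\tfrac12 x^2e^{-x}$, the truncation window, and the series cutoff are arbitrary choices of mine.

```python
# Evaluate the three integrals numerically for one test function and print them
# side by side.  f is rapidly decreasing, f(x) = O(x) near 0, and integrates to 0.
import numpy as np
from scipy.integrate import quad

def f(x):
    return x * np.exp(-x) - 0.5 * x**2 * np.exp(-x)

def F(x):
    # F(x) = \int_0^x f(t) dt
    return quad(f, 0, x)[0]

def series(x, cutoff=40.0):
    # partial sum of sum_{n >= 1} f(n x); dropped terms have n*x > cutoff
    n = np.arange(1, int(cutoff / x) + 2)
    return f(n * x).sum()

eps, big = 1e-3, 60.0   # outside [eps, big] all three integrands are negligible for this f

lhs = quad(series, eps, big, limit=200)[0]                        # integral of the sum
mid = -0.5 * quad(lambda x: F(x) / x, eps, big, limit=200)[0]     # -(1/2) * integral of F(x)/x
rhs = -0.5 * quad(lambda x: np.log(x) * f(x), eps, big)[0]        # -(1/2) * integral of ln(x) f(x)
print(lhs, mid, rhs)
```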

How to evaluate this integral? [on hold]

Math Overflow Recent Questions - Sat, 12/09/2017 - 04:36

How to evaluate this integral: $$\int_0^1 \int_0^1 \cdots \int_0^1\frac{x_{1}^2+x_{2}^2+\cdots+x_{n}^2}{x_{1}+x_{2}+\cdots+x_{n}}\,dx_{1}\, dx_{2}\cdots \, dx_{n}=?$$

I make use of the integral identity $$\int_{0}^{+\infty }e^{-t(x_{1}+x_{2}+\cdots +x_{n})}\,dt=\frac{1}{x_{1}+x_{2}+\cdots +x_{n}}$$ and then reverse the order of integration between the time and space variables. For $n=1$ this gives $$\int_{0}^{\infty }dt\int_{0}^{1}x^{2}e^{-tx}\,dx=\int_{0}^{\infty }\frac{2 - e^{-t}(2 + 2t+t^2)}{t^3}\,dt=\int_{0}^{1}x\,dx=\frac{1}{2},$$ and in general $$\int_0^1 \int_0^1 \cdots \int_0^1\frac{x_{1}^2+x_{2}^2+\cdots+x_{n}^2}{x_{1}+x_{2}+\cdots+x_{n}}\,dx_{1}\, dx_{2}\cdots \, dx_{n}=n\int_{0}^{+\infty }\frac{2 - e^{-t}\left ( 2 + 2t+t^2 \right )}{t^3}\left ( \frac{1-e^{-t}}{t} \right )^{n-1}dt.$$
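Here is a quick numerical sketch (my own check, not from the question) comparing the reduced one-dimensional integral above with a direct Monte Carlo estimate of the $n$-dimensional integral; the small-$t$ Taylor cutoff and the sample size are arbitrary choices.

```python
# Compare n * int_0^inf [(2 - e^{-t}(2+2t+t^2))/t^3] * ((1-e^{-t})/t)^{n-1} dt
# with a Monte Carlo estimate of the original n-dimensional integral.
import numpy as np
from scipy.integrate import quad

def inner(t):
    # \int_0^1 x^2 e^{-tx} dx, with a short Taylor expansion near t = 0 to avoid
    # catastrophic cancellation in the closed form
    if t < 1e-2:
        return 1/3 - t/4 + t**2/10
    return (2 - np.exp(-t) * (2 + 2*t + t**2)) / t**3

def reduced_integral(n):
    integrand = lambda t: n * inner(t) * (-np.expm1(-t) / t)**(n - 1)
    return quad(integrand, 0, np.inf)[0]

def monte_carlo(n, samples=200_000, seed=0):
    x = np.random.default_rng(seed).random((samples, n))
    return np.mean((x**2).sum(axis=1) / x.sum(axis=1))

for n in (1, 2, 3, 5):
    print(n, reduced_integral(n), monte_carlo(n))
```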

time-dependent Hille-Yosida theorem?

Math Overflow Recent Questions - Sat, 12/09/2017 - 03:10

Is there an existence theorem for linear time-dependent differential operator equations that reduces in case of constant coefficients to the Hille-Yosida theorem?

Why can't this polynomial vanish except when $x+y=0,xy= 0$? [on hold]

Math Overflow Recent Questions - Sat, 12/09/2017 - 03:08

Show that for any $x, y \in \mathbb R$ with $x + y \neq 0,xy\neq 0$

$$p(x,y) := x^6-2 x^5 y+2 x^5-x^4 y^2-2 x^4 y+x^4+4 x^3 y^3+2 x^3 y-x^2 y^4-4 x^2 y^3-4 x^2 y^2+2 x^2 y-2 x y^5+6 x y^4+2 x y^3+y^6-2 y^5-y^4-2 y^3+y^2 \neq 0$$

I'm sorry, I forgot the condition $xy\neq 0$. Now I think it holds?
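A quick heuristic check (my own sketch, not a proof): sample random points with $x+y$ and $xy$ bounded away from zero and look at the values $p$ takes.

```python
# Evaluate p on random points with |x + y| and |xy| bounded away from zero and
# report the smallest value and the smallest absolute value found in the sample.
import numpy as np

def p(x, y):
    return (x**6 - 2*x**5*y + 2*x**5 - x**4*y**2 - 2*x**4*y + x**4
            + 4*x**3*y**3 + 2*x**3*y - x**2*y**4 - 4*x**2*y**3 - 4*x**2*y**2
            + 2*x**2*y - 2*x*y**5 + 6*x*y**4 + 2*x*y**3
            + y**6 - 2*y**5 - y**4 - 2*y**3 + y**2)

rng = np.random.default_rng(0)
x, y = rng.uniform(-5, 5, size=(2, 1_000_000))
mask = (np.abs(x + y) > 1e-3) & (np.abs(x * y) > 1e-3)
vals = p(x[mask], y[mask])
print(vals.min(), np.abs(vals).min())
```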

Hypercontractive inequality for random walks on sets

Math Overflow Recent Questions - Sat, 12/09/2017 - 02:09

Let $k<N$ be natural numbers. In this question we consider the graph whose vertices are the size-$k$ subsets of a size-$N$ universe. Consider the following random walk on this graph:

Starting from a set $R$, pick $t$ elements of $R$ uniformly at random and then pick a uniformly random size-$k$ set $S$ that contains those $t$ elements ($t$ is a parameter; note that the size of the intersection of $S$ and $R$ may be larger than $t$).
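A minimal simulation of one step of this walk (my own sketch; the parameter names $N$, $k$, $t$ follow the question, and the uniform superset is drawn by completing the $t$ kept elements with $k-t$ elements chosen uniformly from the rest of the universe):

```python
# One step of the walk: keep t random elements of R, then complete them to a
# uniformly random size-k subset of {0, ..., N-1} containing those t elements.
import random

def step(R, N, k, t):
    kept = set(random.sample(sorted(R), t))          # t elements of R, uniformly at random
    others = [v for v in range(N) if v not in kept]
    S = kept | set(random.sample(others, k - t))     # uniform size-k superset of `kept`
    return S

R = set(range(5))                                    # an arbitrary starting vertex with k = 5
print(step(R, N=20, k=5, t=2))
```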

This model is studied in the association schemes literature and has an elegant spectral analysis.

My question is whether one can prove a hypercontractive inequality in this model (equivalently, a log-Sobolev constant), similar to what has been proven for closely related models.

Number of iterations required for a transposition cipher to yield the original input

Math Overflow Recent Questions - Fri, 12/08/2017 - 18:11

I have asked this question on another site but received no response; I am hoping someone here can help.

Suppose a function $f$, representing what I call a "dynamic transposition cipher" taking one string of text $str$ as input, is defined so that it outputs another string of text $res$ which is the transposed characters of $str$ according to the following algorithm:

  1. Let $i_1 = 1$, $i_2 = $ the number of characters in $str$, and $res = ""$ (empty string)
  2. While $i_2 - i_1 > 1$:

    a. Append the character $str[i_1]$ to $res$

    b. Append the character $str[i_2]$ to $res$

    c. Increment $i_1$ by $1$

    d. Decrement $i_2$ by $1$

  3. If $i_1 = i_2$, then append the character $str[i_1]$ to $res$; otherwise, append the characters $str[i_1]$ followed by $str[i_2]$ to $res$.

  4. Return $res$

Assume that string indices begin at one, i.e. "ABC"$[1] = $ "A" and that each character of $str$ is unique.

For example, $f($"ABCDEF"$)$ would equal "AFBECD", and $f($"AFBECD"$)$ would equal "ADFCBE". We can keep iterating $f$, yielding "AEDBFC", "ACEFDB", and finally "ABCDEF", which was our original input. All in all, for a string $str$ of six characters it takes six iterations, counting the original string, to return to the original input (similarly, for a string of five characters it takes four).

Let $g(n)$ represent the number of iterations (counting the original string, as above) required to transpose a string of $n$ characters back into itself according to $f$.

Computing these values for strings of $2$ through $10$ characters, there is unusual variation in the number of iterations required: $g(2) = 2$, $g(3) = 3$, $g(4) = 4$, $g(5) = 4$, $g(6) = 6$, $g(7) = 7$, $g(8) = 5$, $g(9) = 5$, and $g(10) = 10$. I have calculated these values up to $g(30)$; for example, $g(28) = 21$, $g(29) = 10$, and $g(30) = 30$.
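For reference, a short sketch (my own implementation of the algorithm above, using 0-based indices internally) that reproduces these values by iterating $f$ until the string returns to itself, counting the original string as the first iteration:

```python
import string

def f(s):
    # the "dynamic transposition cipher" described in steps 1-4 above
    i1, i2 = 0, len(s) - 1            # 0-based counterparts of i_1 and i_2
    res = []
    while i2 - i1 > 1:
        res.append(s[i1]); res.append(s[i2])
        i1 += 1; i2 -= 1
    if i1 == i2:
        res.append(s[i1])
    else:
        res.append(s[i1]); res.append(s[i2])
    return "".join(res)

def g(n):
    start = string.printable[:n]      # any string of n distinct characters
    s, count = f(start), 2            # after one application; the original string counts as the first iteration
    while s != start:
        s, count = f(s), count + 1
    return count

print(f("ABCDEF"))                    # AFBECD
print([g(n) for n in range(2, 11)])   # [2, 3, 4, 4, 6, 7, 5, 5, 10]
```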

Plotting these discrete points, we can find several linear functions which collectively fit some of the points: $y = x$ fits some, $y = 0.5x + 1.5$ fits others, and $y = 0.25x + 2.5$ fits still others, but there are many more data pairs not accounted for. I am not noticing much of a pattern that holds consistently besides $y=x$ for some of the points.

What would be a formula for $g(n)$, valid for all integers $n > 0$?

What are the matrices preserving the L1-norm?

Math Overflow Recent Questions - Fri, 12/08/2017 - 15:36

I am inspired by unitary matrices, which preserve the L2-norm of all vectors, in particular of the unit-norm vectors. I then noticed that the L1-norm of probability vectors is preserved by matrices whose columns are probability vectors (column-stochastic matrices). This got me thinking: what are the matrices preserving the L1-norm of arbitrary real vectors of unit L1-norm? So basically we extend a probability vector to also allow a sign; ignoring the signs, this should still be a probability vector, and then we ask for the corresponding structure-preserving matrices.

It is already clear that the columns of such a matrix should be this 'extended' kind of probability vector, because we can multiply the matrix by a standard basis vector, which has L1-norm 1. But not all such matrices preserve the L1-norm; take, for example, the matrix

$$ M = \frac{1}{2} \left(\begin{matrix} 1 & 1\\ 1 & -1 \end{matrix}\right) $$


and the vector

$$ x = \left( \begin{matrix} 0.3 \\ -0.7 \end{matrix} \right) $$

Then we have

$$ Mx = \left(\begin{matrix} -0.2 \\ 0.5 \end{matrix}\right) $$

which fails the test.
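A small numerical illustration (my own sketch): the matrix $M$ above shrinks the L1-norm of $x$, while a signed permutation matrix, for instance, preserves the L1-norm of every vector.

```python
import numpy as np

M = 0.5 * np.array([[1.0,  1.0],
                    [1.0, -1.0]])
x = np.array([0.3, -0.7])
print(np.abs(x).sum(), np.abs(M @ x).sum())      # 1.0 vs 0.7: M fails the test

P = np.array([[0.0, -1.0],                        # a signed permutation matrix
              [1.0,  0.0]])
for v in np.random.default_rng(0).normal(size=(5, 2)):
    v /= np.abs(v).sum()                          # normalize to unit L1-norm
    print(np.abs(P @ v).sum())                    # stays equal to 1 (up to rounding)
```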

Is this function positive?

Math Overflow Recent Questions - Wed, 12/06/2017 - 15:22

Could someone tell me if my argument is correct? Let $\rho_1:[0,1]\to [0,1]$ and $J:\mathbb R\to \mathbb R^+$. I have a system of two coupled PDEs, and I proved that its solution $(u_0(t, r), u_1(t, r))$ exists and is unique in $C([0 ,\tilde t]\times [0,1], \mathbb R)$ and can be written implicitly as \begin{align} (0) \;u_0(t,r)=e^{-\int_0^{t}\int_{0}^1J({r-r'})u_1(s, r')dr'ds}> 0, \end{align} while \begin{align}\label{1} (1)\;u_1(t,r)=e^{-t}\rho_1(r)+\int_0^tds\;e^{-({t-s})}\int_0^1dr'J(r-r')u_1(s,r')u_0(s,r). \end{align}

I would like to prove that $u_0(t, r)>0$ and $u_1(t,r)\geq 0$ for every $(t,r)\in [0, \tilde t]\times [0,1]$. I proved it in the following way

By (0) it is obvious that $u_0({t, r})>0$ for all $({t, r})\in [0, \tilde t]\times [0,1]$. To prove that the same property holds for the function $u_1({\cdot, \cdot})$, define \begin{align} A:=\{r\in[0,1] : \rho_1(r)=0\}\quad B:=\{r\in[0,1] : \rho_1(r)>0\} \end{align} and the time \begin{align}\nonumber t^*:=\inf\{t\in (0, \tilde t]: u_1(t, r^*)\neq 0\text{ for some $r^*\in A$}\text{ or } u_1(t, r^*)=0 \text{ for some $r^*\in B$}\}, \end{align} with the convention that the infimum of the empty set is $\tilde t+1$.

If $t^*>\tilde t$ the proof follows trivially. Indeed assuming $t^*>\tilde t$ we have that for every $s\in (0, \tilde t]$ fixed, $u_1({s, r})=0$ for all $r\in A$ and $u_1({s, r})\neq 0$ for all $r\in B$.

Suppose by contradiction that there exists $\bar r\in B$ such that $u_1({s, \bar r})<0$. Since $\bar r\in B$ we have that $u_1({0, \bar r})>0$; the continuity of the function $u_1({\cdot, \cdot})$ in the first variable and the intermediate value theorem allow us to conclude that there exists $s^*\in (0,s)$ such that $u_1({s^*, \bar r})=0$. It follows that $t^*\leq s^*<s\leq \tilde t$, and this contradicts the assumption $t^*> \tilde t$.

Consequently, when $t^*>\tilde t$, we can conclude that $u_1(t, r)\geq 0$ for every $(t,r)\in [0, \tilde t]\times [0,1]$.

Suppose $t^*\leq \tilde t$. We have two possibilities: \begin{align} (a)\;\exists r^*\in A: u_1(t^*, r^*)\neq 0,\qquad (b)\;\exists r^*\in B: u_1(t^*, r^*)=0. \end{align} Suppose by contradiction that (b) holds, then $u_1(t^*, r^*)=0$ and $u_1(0, r^*)>0$. By evaluating (1) in $(t^*, r^*)$ we get a contradiction.

If (a) holds we have that $u_1(t^*, r^*)\neq 0$, $\rho_1(r^*)=0$ and $u_1(s, r)\geq 0$ for every $(s, r)\in (0, t^*)\times [0,1]$.

By (1) we get that $u_1(t^*, r^*)>0$ and consequently we can conclude that $u_1({t, r})\geq 0$ for every $({t, r})\in [0,t^*]\times[0,1]$. Iterating the same procedure in the interval $[t^*,\tilde t]$ it is possible to show that $u_1(t,r)\geq 0$ for every $(t,r) \in [0,\tilde t]\times [0,1]$.

Is that correct?
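For what it is worth, here is a rough numerical sketch (my own illustration, not part of the argument) that iterates the implicit formulas (0) and (1) on a grid for one arbitrary choice of $J$ and $\rho_1$ and inspects the signs of $u_0$ and $u_1$; it assumes the fixed-point iteration converges on $[0,\tilde t]$ for these parameters.

```python
import numpy as np
from scipy.integrate import trapezoid

n_t, n_r, t_tilde = 60, 50, 0.3
t = np.linspace(0.0, t_tilde, n_t)
r = np.linspace(0.0, 1.0, n_r)
J = lambda d: np.exp(-d**2)                       # arbitrary positive kernel
rho1 = 0.5 * (1 + np.sin(2 * np.pi * r))          # arbitrary rho_1 with values in [0, 1]
K = J(r[:, None] - r[None, :])                    # K[i, j] = J(r_i - r_j)

u1 = np.exp(-t)[:, None] * rho1[None, :]          # initial guess: first term of (1)
for _ in range(50):
    # inner[s, r] = \int_0^1 J(r - r') u_1(s, r') dr'
    inner = trapezoid(u1[:, None, :] * K[None, :, :], r, axis=2)
    # (0): u_0(t, r) = exp( - \int_0^t inner(s, r) ds )
    u0 = np.exp(-np.array([trapezoid(inner[:i + 1], t[:i + 1], axis=0) for i in range(n_t)]))
    # (1): u_1(t, r) = e^{-t} rho_1(r) + \int_0^t e^{-(t - s)} inner(s, r) u_0(s, r) ds
    integrand = inner * u0
    new_u1 = np.exp(-t)[:, None] * rho1[None, :]
    for i in range(n_t):
        w = np.exp(-(t[i] - t[:i + 1]))[:, None]
        new_u1[i] += trapezoid(w * integrand[:i + 1], t[:i + 1], axis=0)
    u1 = new_u1

print(u0.min(), u1.min())                         # both come out nonnegative here
```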

Relation between commutator length and stable commutator length in free groups

Math Overflow Recent Questions - Wed, 12/06/2017 - 10:14

In Bardakov, Algebra and Logic, Vol. 39, No. 4, 2000, I have found the following (page 225):

We pronounce the validity of the following:

Conjecture. For every element $z$ in the derived subgroup of a free non-Abelian group $F$ and for any natural $m$, $$ \mathrm{cl}(z^m) \geq (m+1/2)\,\mathrm{cl}(z). $$

Here $\mathrm{cl}$ denotes the commutator length of an element (i.e. the minimal number of commutators needed to express it as a product of commutators).

This inequality is not true, and $z = [a, b]$ may be a counterexample. However, I believe that there may be a typo, and that it should rather read $$ \mathrm{cl}(z^m) \geq \frac{m+1}{2} \cdot \mathrm{cl}(z). $$

Unfortunately, I could not find it in any other paper or book (including Calegari's "scl"), and the proof in Bardakov is unclear to me.

Do you know of any paper with a proof of the above inequality? Or maybe a counterexample? Or does anybody have any clue as to why Bardakov did not prove this inequality?

History of "natural transformations"

Math Overflow Recent Questions - Wed, 12/06/2017 - 09:48

It is often claimed that the notion of natural transformation existed in the mathematical vocabulary long before it had a definition (see, for example, Peter Freyd, Abelian Categories, p. 2). Eilenberg and Mac Lane, in "General theory of natural equivalences", Trans. AMS 58 (1945), pp. 231-294, discovered the fact that this notion could be mathematically defined.

However, Ralf Krömer casts doubt on the above claim due to lack of evidence (see Ralf Krömer, Tool and Object: A History and Philosophy of Category Theory, p. 70).

My question is: can you supply evidence of the use of the notion of natural transformation prior to the 1945 paper of Eilenberg and Mac Lane cited above?

Schur property for a sum of Banach spaces

Math Overflow Recent Questions - Wed, 12/06/2017 - 09:46

Suppose we have two Banach spaces $X$ and $Y$, each of them having the Schur property (weakly convergent sequences are norm convergent). Does it follow that $X+Y$ has the Schur property? Note that this is trivially true when the sum is direct.

Any proof (or disproof), or references will be appreciated.

Variety of locally residually nilpotent groups

Math Overflow Recent Questions - Wed, 12/06/2017 - 09:02

Does there exist a variety of groups in which all finitely generated groups are residually nilpotent, and which contains some finitely generated group that is not nilpotent? That is, can a variety be locally residually nilpotent but not locally nilpotent?

Note that a consequence of Theorem 4.5 of [Traustason, Gunnar. Milnor groups and (virtual) nilpotence. J. Group Theory 8 (2005), no. 2, 203–221. MR2126730] is that no such variety can be metabelian (since the varieties $\mathcal{A}_p \mathcal{A}$ and $\mathcal{A} \mathcal{A}_p$ contain finite non-nilpotent groups). It appears the difficulty is that residual nilpotence does not pass to quotients in general.

integral involving hypergeometric function of matrix argument

Math Overflow Recent Questions - Wed, 12/06/2017 - 08:54

This conjecture comes from an observation on simulations of the matrix variate noncentral Beta distribution (similar to this observation, but I am opening a new question because I am not yet sure it is exactly the same).

Let $p \geq 1$ be an integer, $a,b > \frac{p-1}{2}$, $\Theta$ a positive scalar $p \times p$-matrix $\Theta = \text{diag}(\theta, \ldots, \theta)$, and $U$ a symmetric $p \times p$-matrix satisfying $0 < U < I_p$. The conjecture is: $$ \int_{S >0} {\det(S)}^{a+b-\frac12(p+1)} \exp\left(-\mathrm{tr}\left(\frac{S}{2}\right)\right) {}_0\!F_1\left(b, \frac{1}{2}\Theta S^\frac12 U S^\frac12\right)\textrm{d}S \\ = 2^{a+b}\Gamma_p(a+b)\,{}_1\!F_1(a+b, b, \Theta U). $$ According to this paper by Constantine (page 1280), the integral on the LHS is difficult to evaluate unless $\Theta$ is a scalar matrix. This suggests that the integral can be simplified, but I cannot find the result in the literature.

Do you have a reference for this result, or a proof?

Base of topology [on hold]

Math Overflow Recent Questions - Wed, 12/06/2017 - 08:25

Let $X$ be a nonempty set and $p:X\times X\rightarrow\mathbb{R}^+$ be a function satisfying the following conditions for all $x,y,z\in X$: \begin{align} &1)\enspace p(x,y)=0\implies x=y \\ &2)\enspace p(x,y)=p(y,x) \\ &3)\enspace p(x,z)\leq p(x,y)+p(y,z) \end{align} Then the pair $(X,p)$ is said to be a metric-like space.

I would like to show that each metric-like $p$ on $X$ generates a topology $\tau_p$ on $X$ whose base is the family of open balls $$B(x,\varepsilon)=\{y\in X:|p(x,y)-p(x,x)|<\varepsilon\}.$$

Thank you.

Identification of cohomology sheaf in the definition of the Kodaira-Spencer morphism for abelian schemes

Math Overflow Recent Questions - Wed, 12/06/2017 - 08:18

Let $p:A \to S$ be a projective abelian scheme, where $S$ is some smooth scheme over a base field $k$. Then we have the Kodaira-Spencer morphism $$ \kappa : T_{S/k} \to R^1p_*T_{A/S} $$ where $T_{S/k}$ (resp. $T_{A/S}$) denotes the dual module of $\Omega^1_{S/k}$ (resp. $\Omega^1_{A/S}$).

Let $\text{Lie}_SA$ be the $\mathcal{O}_S$-dual of $p_*\Omega^1_{A/S}$. If I didn't misunderstand it, in Faltings-Chai, page 80, one identifies $R^1p_*T_{A/S}$ with $$ \text{Lie}_SA \otimes_{\mathcal{O}_S} R^1p_*\mathcal{O}_A $$ and I recall that $R^1p_*\mathcal{O}_A$ is naturally isomorphic to $\text{Lie}_SA^t$, where $A^t\to S$ denotes the dual abelian scheme.

The authors seem to give no justification for the isomorphism $R^1p_*T_{A/S} \cong \text{Lie}_SA \otimes R^1p_*\mathcal{O}_A$. How to prove it?

What is $G_2(2^m)$, and how is it embedded in $\Gamma L_6(2^m)$?

Math Overflow Recent Questions - Wed, 12/06/2017 - 08:10

I am trying to understand the classification of doubly transitive groups, specifically the nonsolvable affine case. Dixon and Mortimer (p.244) says there are three infinite families, one of which is $\mathbb{F}_{2^m}^6 \rtimes G_2(2^m) \leq \mathbb{F}_{2^m}^6 \rtimes \Gamma L_6(2^m)$.

What exactly is the group $G_2(2^m)$, and how is it embedded in $\Gamma L_6(2^m)$?

Ideally, the answer would say something like, "It's the subgroup of $\Gamma L_6(2^m)$ that preserves $X$." I am looking for a reference that provides such a description, written in contemporary English, and preferably at an introductory level.

So far, I have found the following:

  • Dixon and Mortimer points me to Hering 1974. Hering points me to "Dickson, 1915", but there is no such entry in the bibliography. MathSciNet lists six research articles by Dickson in 1915. None of them appear relevant.

  • This question is relevant and has a long list of references, but they are aimed at proving that the list in Dixon and Mortimer is complete. I want something that just describes the groups in that list.

  • This paper by Cooperstein talks about $G_2(2^m)$ as a subgroup of $Sp_6(2^m)$, but it doesn't explicitly describe the embedding. Instead, it points me to this paper by Tits and Borel (in French) and this earlier paper by Cooperstein, but the latter is quite technical and does not obviously contain what I need. I would like something at an introductory level.

  • Wikipedia references this paper by Dickson, which first introduced $G_2(2^m)$ in 1905. Maybe Hering was trying to cite this one instead. In any case, the language is extremely outdated. I'm looking for something more understandable.

sum of certain decomposable elements

Math Overflow Recent Questions - Wed, 12/06/2017 - 07:57

Let $V$ be a vector space of dimension $m$ over any field and let $\ell\leq m$ be a positive integer. Let $\omega_1,\ldots,\omega_r \in\bigwedge^\ell V$ be linearly independent, completely decomposable vectors such that their sum $\omega=\omega_1+\cdots+\omega_r$ is again completely decomposable. Is it true then that $\omega'=\omega_1+\cdots+\omega_j$ is completely decomposable for every $j\leq r$?

This might be a simple problem. I have been trying to prove it, but I can neither prove it nor produce a counterexample. Any help or reference would be appreciated.
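As a computational aside (my own sketch, not part of the question): in the simplest case $\ell = 2$, a $2$-vector written as an antisymmetric matrix $uv^T - vu^T$ is completely decomposable iff that matrix has rank at most $2$, so candidate configurations can be tested numerically. The construction below keeps all factors inside a common $3$-dimensional subspace, which forces every partial sum to be decomposable; the helpers can be reused on other candidate configurations.

```python
import numpy as np

def wedge(u, v):
    # the 2-vector u ^ v, represented as the antisymmetric matrix u v^T - v u^T
    return np.outer(u, v) - np.outer(v, u)

def decomposable(w, tol=1e-9):
    # a 2-vector is completely decomposable iff its antisymmetric matrix has rank <= 2
    return np.linalg.matrix_rank(w, tol=tol) <= 2

m = 6
a, b, c = np.random.default_rng(1).normal(size=(3, m))   # span a 3-dimensional subspace
omegas = [wedge(a, b), wedge(a, c), wedge(b, c)]          # decomposable and linearly independent

total = sum(omegas)
partials = [sum(omegas[:j + 1]) for j in range(len(omegas))]
print(decomposable(total), [decomposable(w) for w in partials])
```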

