Recent MathOverflow Questions

Positive matrices diagonalised by orthogonal matrices which are also involutions

Math Overflow Recent Questions - Wed, 06/27/2018 - 00:53

Let $A$ be a positive definite matrix. Then $A$ is diagonalized by an orthogonal matrix $P$.

I want to know when this matrix $P$ is also an involution, i.e. $P^2 = I$.

If there is any characterization of such $A$, kindly share it.

Thank you.
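Not an answer, but here is a small numerical sketch for generating examples (the construction and names are my own, not from the question): any symmetric orthogonal matrix is an involution, so one can build such a $P$ and then a positive definite $A$ that it diagonalizes.

```python
# Minimal numerical sketch (my own construction, not from the question):
# P = Q S Q^T with Q orthogonal and S a diagonal sign matrix is a symmetric
# orthogonal matrix, hence an involution, and A = P D P with D positive
# diagonal is positive definite and is diagonalized by P.
import numpy as np

rng = np.random.default_rng(0)
n = 4

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
S = np.diag(rng.choice([-1.0, 1.0], size=n))       # diagonal signs
P = Q @ S @ Q.T                                    # symmetric orthogonal => involution

D = np.diag(rng.uniform(1.0, 5.0, size=n))         # positive eigenvalues
A = P @ D @ P                                      # candidate positive definite matrix

print(np.allclose(P @ P, np.eye(n)))               # True: P^2 = I
print(np.allclose(P.T @ A @ P, D))                 # True: P diagonalizes A
print(np.all(np.linalg.eigvalsh(A) > 0))           # True: A is positive definite
```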

When $\Big[ uv \Big]_{x\,:=\,0}^{x\,:=\,1}$ and $\int_{x\,:=\,0}^{x\,:=\,1} v\,du$ are infinite but $\int_{x\,:=\,0}^{x\,:=\,1}u\,dv$ is finite

Math Overflow Recent Questions - Wed, 06/27/2018 - 00:03

I asked this question on math.stackexchange.com and, getting no good answers, offered a bounty; I still didn't get any answer that was reasonably complete (though the system automatically awarded the bounty). So maybe I can get something better here.

I have encountered a simple problem in probability where I would not have expected to find conditional convergence lurking about, but there it is. So I wonder:

  • Is any insight about probability to be drawn from the fact that infinity minus infinity appears here?
  • In particular, do the two separate terms that evaluate to infinity have some probabilistic interpretation?
  • Is any insight about analysis or anything else to be found here?
  • Generally in what situations does one write $\displaystyle \int_{x\,:=\,0}^{x\,:=\,1} u\,dv = \Big[ uv \Big]_{x\,:=\,0}^{x\,:=\,1} - \int_{x\,:=\,0}^{x\,:=\,1} v\,du$ and find that those last two terms are both infinite even though the first one converges absolutely?
  • In particular, are any instances of this particularly notable or worth knowing about?

The probability problem

Suppose $X_1,\ldots,X_6$ are independent random variables each having the same exponential distribution with expected value $\mu,$ so that $$ \Pr( X_1 > x) = e^{-x/\mu} \quad\text{for } x\ge0. $$ It is desired to find this expected value $$ \operatorname E(\max\{\,X_1,\ldots,X_6\,\}) = \mu\left( 1 + \frac 1 2 + \frac 13 + \frac 1 4 + \frac 1 5 + \frac 1 6 \right) = 2.45\mu. \tag 1 $$

One fairly routine way to show this goes like this: $$ \Pr(\min\{\,X_1,\ldots,X_6\,\} > x) = \big( \Pr(X_1>x) \big)^6 = e^{-6x/\mu}\quad \text{for }x\ge0, $$ and therefore $$ \operatorname E(\min) = \frac \mu 6. $$ Let $X_{(1)}, \ldots, X_{(6)}$ be the order statistics, i.e. $X_1,\ldots,X_6$ sorted into increasing order. Then we have $$ \operatorname E(X_{(1)}) = \frac \mu 6, \quad\text{and } \operatorname E(X_{(2)} - X_{(1)}) = \frac \mu 5 $$ because, by the memoryless property, that difference is the minimum of five exponentially distributed random variables. And so on through the last one.
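For readers who want to see $(1)$ numerically, here is a quick Monte Carlo check (a sketch of my own, assuming $\mu = 1$; it is not part of the question's argument):

```python
# Quick Monte Carlo check of (1), assuming mu = 1 (my own sketch).
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0
samples = rng.exponential(scale=mu, size=(1_000_000, 6))
print(samples.max(axis=1).mean())                    # approximately 2.45
print(mu * sum(1 / k for k in range(1, 7)))          # 2.45 exactly
```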

No conditional convergence appears above.

But suppose instead we just reduce it to evaluation of an integral.

\begin{align} & \Pr(\max \le x) = \Pr(X_{(6)} \le x) = \Pr( \text{all of }X_1,\ldots,X_6 \text{ are} \le x) = \left( 1 - e^{-x/\mu} \right)^6 \text{ for } x\ge0. \\[10pt] & \text{Hence for measurable sets $A\subseteq[0,+\infty)$ we have } \Pr(\max\in A) = \int_A f(x)\, dx \\[10pt] & \text{where } f(x) = \frac d {dx} \left( 1 - e^{-x/\mu} \right)^6 = 6\left( 1- e^{-x/\mu} \right)^5 ( e^{-x/\mu}) \frac 1 \mu. \end{align}

So here's our integral: $$ \operatorname E(\max) = \int_0^\infty xf(x)\, dx. $$ No suggestion of conditional convergence, right?

\begin{align} \operatorname E(\max) = \int_0^\infty xf(x)\, dx & = \int_0^\infty x 6\left( 1- e^{-x/\mu} \right)^5 ( e^{-x/\mu}) \, \frac {dx} \mu \\[10pt] & = \mu \int_0^\infty s 6( 1-e^{-s})^5 e^{-s} \, ds \\[10pt] & = \mu \int s\, dt = \mu st - \mu\int t\,ds \\[10pt] & = \mu s(1-e^{-s})^6 - \mu \int (1-e^{-s})^6 \, ds. \end{align}

(Here the integration by parts takes $t = (1-e^{-s})^6$, so that $dt = 6(1-e^{-s})^5 e^{-s}\,ds$.) Now a substitution: \begin{align} r & = 1-e^{-s} \\[6pt] s & = -\log(1-r) \\[6pt] ds & = \frac{dr}{1-r} \end{align}

Our integral becomes \begin{align} & \mu ( - r^6 \log(1-r) ) - \mu \int \frac{r^6}{1-r} \, dr \\[10pt] = {} & \mu ( - r^6 \log(1-r) ) - \mu \int \left( -r^5 - r^4 - r^3 - r^2 - r - 1 + \frac 1 {1-r} \right) \, dr \end{align}

Now the temptation is to write $$ \require{cancel} \xcancel{\left[ \mu \left( -r^6 \log_e(1-r) \right) \vphantom{\frac11} \right]_0^1} - \xcancel{\mu \int_0^1 \left( -r^5-r^4-r^3-r^2 - r -1 + \frac 1 {1-r} \right) \, dr }. $$ The problem is that this is infinity minus infinity, so we have conditional convergence.

So suppose we write it like this: \begin{align} & \left[ \mu \left( -r^6 \log(1-r) \right) - \mu \int \left( -r^5-r^4-r^3-r^2 - r -1 + \frac 1 {1-r} \right) \, dr \right]_0^1 \\ & \text{(The above is not standard notation, as far as I know.)} \\[10pt] = {} & \mu \left[ (1-r^6) \log_e (1-r) + \left( \frac{r^6} 6 + \frac{r^5} 5 + \frac{r^4} 4 + \frac {r^3} 3 + \frac{r^2} 2 + r \right) \right]_0^1 \end{align}

After we use L'Hôpital's rule to evaluate the first term, this ends up being just what we see in $(1).$
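As a purely numerical sanity check (my own sketch, not part of the argument; it assumes $\mu = 1$), one can evaluate both forms of the integral and compare with $2.45$:

```python
# Numerical sanity check of the computation above, taking mu = 1.
# Both the s-form and the r-form of the integral should equal 2.45.
import numpy as np
from scipy.integrate import quad

f_s = lambda s: 6 * s * (1 - np.exp(-s))**5 * np.exp(-s)
print(quad(f_s, 0, np.inf)[0])                       # ~ 2.45

# After the substitution r = 1 - e^{-s}:  E(max)/mu = -6 * int_0^1 r^5 log(1-r) dr
f_r = lambda r: -6 * r**5 * np.log1p(-r)
print(quad(f_r, 0, 1)[0])                            # ~ 2.45
```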

Maybe I'll post my own answer if I am so inspired, but other answers may provide valuable alternative points of view. (I don't have an answer to post yet.)

Postscript:

Where I've seen something similar before is in attempts to prove that if $\Pr(X\ge0) = 1$ and $f$ is the p.d.f. and $F$ the c.d.f. of $X$, then

$$ \int_0^\infty xf(x)\, dx = \int_0^\infty (1-F(x))\,dx. $$

If you write

$$ \int(1-F(x))\,dx = \int u\,dx = xu - \int x\, du = \text{etc.,} $$

then you get infinity minus infinity. But you can do this:

\begin{align} & \int_0^\infty xf(x)\, dx = \int_0^\infty \left( \int_0^x f(x)\,dy \right) \, dx \\[10pt] = {} & \int_0^\infty \left( \int_y^\infty f(x) \,dx\right) \, dy \\[10pt] = {} & \int_0^\infty (1-F(y))\,dy. \end{align}

Tonelli's theorem is applicable since the function being integrated is everywhere non-negative, so that justifies the change in the order of integration.
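A tiny numerical illustration of the identity, using an exponential distribution with mean 2 (my own example, not from the question):

```python
# Numerical illustration of the tail-integral identity for X exponential
# with mean 2, so f(x) = e^{-x/2}/2 and 1 - F(x) = e^{-x/2}.
import numpy as np
from scipy.integrate import quad

mean = 2.0
f = lambda x: np.exp(-x / mean) / mean
tail = lambda x: np.exp(-x / mean)              # 1 - F(x)

print(quad(lambda x: x * f(x), 0, np.inf)[0])   # ~ 2.0
print(quad(tail, 0, np.inf)[0])                 # ~ 2.0
```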

A system of homogeneous linear equations

Math Overflow Recent Questions - Tue, 06/26/2018 - 23:56

This is the "real-life" (but slightly more technical) version of a question I have asked recently.

For a prime $p>10$, let $\mathcal L_X$, $\mathcal L_Y$, and $\mathcal L_Z$ denote the pencils of all those lines in $\mathbb F_p^2$ parallel to the lines $$ X:=\{(x,0)\colon x\in\mathbb F_p \}, \ Y:=\{(0,y)\colon y\in\mathbb F_p \}, \ Z:=\{(z,z)\colon z\in\mathbb F_p \}, $$ respectively; thus, $|\mathcal L_X|=|\mathcal L_Y|=|\mathcal L_Z|=p$. Write $$ \chi(x,y) := \omega^x,\quad (x,y)\in\mathbb F_p^2, $$ where $\omega$ is a fixed primitive root of unity of degree $p$. Given a set $S\subseteq\mathbb F_p^2$, with every element $s\in S$ associate a formal variable $x_s$, and consider the system of homogeneous linear equations \begin{gather*} \sum_{s\in S\cap\ell} x_s = 0,\quad \ell\in\mathcal L_X\cup\mathcal L_Y, \\ \sum_{s\in S\cap\ell} \chi(s)\,x_s=0, \quad \ell \in \mathcal L_Z; \end{gather*} notice that there are $3p$ equations and $|S|$ variables. Does there exist a set $S\subseteq\mathbb F_p^2$ of size $|S|<3p$ for which this system has a solution such that the set $\{s\in S\colon x_s\ne 0\}$ meets every line in $\mathbb F_p^2$?
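Not an answer, but for experimenting with small $p$ here is a sketch that assembles the $3p\times|S|$ complex coefficient matrix of the system (my own code; the row ordering and the choice $\omega = e^{2\pi i/p}$ are assumptions), so that its null space can be inspected numerically:

```python
# Build the coefficient matrix of the system for a candidate set S in F_p^2.
# Rows: horizontal lines y = c (parallel to X), vertical lines x = c
# (parallel to Y), and lines y - x = c (parallel to Z); the L_Z rows carry
# the weights chi(x, y) = omega^x with omega = exp(2*pi*i/p).
import numpy as np

def system_matrix(p, S):
    S = list(S)
    omega = np.exp(2j * np.pi / p)
    M = np.zeros((3 * p, len(S)), dtype=complex)
    for j, (x, y) in enumerate(S):
        M[y % p, j] = 1.0                         # the line of L_X through (x, y)
        M[p + x % p, j] = 1.0                     # the line of L_Y through (x, y)
        M[2 * p + (y - x) % p, j] = omega ** x    # the line of L_Z through (x, y)
    return M

# Example usage with S = F_p^2 (far larger than 3p; only to exercise the code).
p = 11
S = [(x, y) for x in range(p) for y in range(p)]
M = system_matrix(p, S)
print(M.shape, np.linalg.matrix_rank(M))
```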

Is $O_{X}(D)$ a reflexive Banach space?

Math Overflow Recent Questions - Tue, 06/26/2018 - 22:50

Let $X$ be a Riemann surface, $D$ a divisor on $X$, and $O_{X}(D)$ the line bundle associated with $D$. Let the metric on $O_{X}(D)$ be given by $$ |1_{O_{X}(D)}(P)|=G(P,D)^2, $$ where $G(P,D)$ is the Green's function associated with $D$ and $P$, which vanishes to first order on the diagonal.

My question is whether $O_{X}(D)$, equipped with this metric, is a reflexive Banach space over $\mathbb{C}$.

In what sense are bibundles called generalized morphisms?

Math Overflow Recent Questions - Tue, 06/26/2018 - 22:46

Definition : Let $\mathcal{G}$ and $\mathcal{H}$ be Lie groupoids. A bibundle from $\mathcal{G}$ to $\mathcal{H}$ is a manifold $P$ together with two maps $a_L:P\rightarrow \mathcal{G}_0,a_R:P\rightarrow \mathcal{H}_0$ such that

  1. there is a left action of $\mathcal{G}$ on $P$ with respect to an anchor $a_L$ and a right action of $\mathcal{H}$ on $P$ with respect to an anchor $a_R$.

  2. $a_L:P\rightarrow \mathcal{G}_0$ is a principal $\mathcal{H}$-bundle.

  3. $a_R$ is $\mathcal{G}$-invariant.

  4. the actions of $\mathcal{G}$ and $\mathcal{H}$ commute.

I am trying to understand in what sense these are called generalized morphisms between Lie groupoids.

There is already a notion of a generalized morphism between Lie groupoids, from Ieke Moerdijk's article Orbifolds as groupoids.

Definition : A generalized morphism from a Lie groupoid $\mathcal{G}$ to a Lie groupoid $\mathcal{H}$ is a morphism of Lie groupoids $\mathcal{G}'\rightarrow \mathcal{H}$, where $\mathcal{G}'$ is a Lie groupoid Morita equivalent to $\mathcal{G}$.

The name generalized morphism seems reasonable for this, but I do not see in what sense a bibundle is said to be a generalized morphism.

Any comments that help me understand in what sense bibundles are called generalized morphisms are welcome.

Geometry problem [on hold]

Math Overflow Recent Questions - Tue, 06/26/2018 - 22:36

Hello, please help with this geometry problem, and if possible check it with a dynamic-geometry application. Given a point inside an acute angle, one must choose two points, one on each side of the angle, so that the perimeter of the triangle they form with the given point is as small as possible. Please check which of the following two constructions gives these points: (1) drop perpendiculars from the given point to the sides of the angle and join the feet of these perpendiculars to each other; or (2) extend each of these perpendiculars beyond the corresponding side by the same distance (that is, reflect the point across each side), join the two endpoints, and take the sought points to be the intersections of this connecting segment with the sides of the angle. Thank you very much!
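Not a full solution, but the two constructions can be compared numerically for a concrete acute angle and interior point. The sketch below (my own code; the angle and the point are arbitrary choices) uses the standard reflection idea: joining the reflections of the point across the two sides gives the minimal perimeter, which corresponds to the second construction, while the feet-of-perpendiculars construction gives a larger perimeter in this example.

```python
# Compare the two constructions for one concrete acute angle and point
# (my own example).  The sides of the angle are rays from the origin along
# the unit vectors u1 and u2; M is the given interior point.
import numpy as np

def reflect(point, direction):
    """Reflect `point` across the line through the origin along unit vector `direction`."""
    return 2 * np.dot(point, direction) * direction - point

def perimeter(M, A, B):
    return np.linalg.norm(M - A) + np.linalg.norm(A - B) + np.linalg.norm(B - M)

u1 = np.array([1.0, 0.0])                       # first side of the angle
u2 = np.array([np.cos(1.0), np.sin(1.0)])       # second side, angle of about 57 degrees
M = np.array([0.8, 0.3])                        # point inside the angle

# Second construction (reflection): the minimal perimeter equals the distance
# between the two reflections of M, since any inscribed path M-A-B-M unfolds
# into a path from one reflection to the other.
M1, M2 = reflect(M, u1), reflect(M, u2)
print(np.linalg.norm(M1 - M2))                  # about 1.438: the minimal perimeter

# First construction (feet of the perpendiculars), for comparison.
F1 = np.dot(M, u1) * u1
F2 = np.dot(M, u2) * u2
print(perimeter(M, F1, F2))                     # about 1.530: larger, so not optimal
```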

Can we write each integer $n>3$ as $p+F_kL_m$ with $p$ an odd prime and $k$ and $m$ positive integers?

Math Overflow Recent Questions - Tue, 06/26/2018 - 21:49

Recall that the Fibonacci numbers $F_0,F_1,\ldots$ are defined by $$F_0=0,\ F_1=1,\ \text{and}\ F_{n+1}=F_n+F_{n-1}\ (n=1,2,3,\ldots),$$ and the Lucas numbers $L_0,L_1,\ldots$ are given by $$L_0=2,\ L_1=1,\ \text{and}\ L_{n+1}=L_n+L_{n-1}\ (n=1,2,3,\ldots).$$ It is well known that $$F_n=\frac{1}{\sqrt5}\bigg(\left(\frac{1+\sqrt{5}}2\right)^n-\left(\frac{1-\sqrt{5}}2\right)^n\bigg)$$ and $$L_n=\left(\frac{1+\sqrt{5}}2\right)^n+\left(\frac{1-\sqrt{5}}2\right)^n$$ for all $n=0,1,2,\ldots$.

Here I ask a question about a new kind of representation involving primes, Fibonacci numbers, and Lucas numbers.

QUESTION: Can we write each integer $n>3$ as $p+F_kL_m$ with $p$ an odd prime and $k$ and $m$ positive integers? Is this supported by heuristic arguments?

I conjecture that any integer $n>3$ can be written as $p+F_kL_m$, where $p$ is an odd prime, and $k$ and $m$ are positive integers. I have verified this for all $n=4,\ldots,3\times10^9$. For the number of ways to write a positive integer $n$ as $p+F_kL_m$ with $p$ an odd prime, $k>1$ and $m\ge1$, see http://oeis.org/A316141. For example, $$5=3+F_3L_1\ \text{and}\ 17=3+F_3L_4=5+F_4L_3=11+F_3L_2=13+F_2L_3.$$

Do heuristic arguments support the above conjecture?
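For readers who want to reproduce such counts for small $n$, here is a short sketch of my own (it requires sympy; it allows $k\ge1$ and deduplicates by the value of the triple $(p, F_k, L_m)$, whereas the OEIS entry cited above imposes $k>1$):

```python
# Count representations n = p + F_k * L_m with p an odd prime and k, m >= 1.
from sympy import isprime

def representations(n, max_index=90):
    F, L = [0, 1], [2, 1]                       # F_0, F_1 and L_0, L_1
    while len(F) <= max_index:
        F.append(F[-1] + F[-2])
        L.append(L[-1] + L[-2])
    reps = set()
    for k in range(1, max_index + 1):
        if F[k] >= n:
            break                               # then F_k * L_m >= n for all m >= 1
        for m in range(1, max_index + 1):
            prod = F[k] * L[m]
            if prod >= n:
                break
            p = n - prod
            if p > 2 and isprime(p):
                reps.add((p, F[k], L[m]))
    return sorted(reps)

print(representations(17))
# [(3, 2, 7), (5, 3, 4), (11, 2, 3), (13, 1, 4)], matching the example above
```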
