Define $\sigma(n)=\sum_{d\mid n} d$. It is a result of Robin that if the Riemann Hypothesis is false, then there exist constants $0<\beta<1/2$ and $C>0$ such that

$$\sigma(n)\geq e^{\gamma}n\log\log n + \frac{Cn \log\log n}{(\log n)^{\beta}}$$

for infinitely many $n$.

However, Robin's paper (which I found at https://mathoverflow.net/a/84285/135597) is written in French, which I cannot understand.

So could someone kindly post the proof, or its main ideas, in English?

https://drive.google.com/file/d/1QJU6-MX5X0CJWesQTCcL_Ws5p_UQgtFW/view?usp=sharing

This is the article I am reading and trying to implement; the formula in question is on page 22.

The issue I have is that this function changes and adds a new integration dimension for every new path (row) I have in my data.

So if I have 2 paths, it is a 2-dimensional integral; if 3, it is 3-dimensional; and if 3000, it is 3000-dimensional.

You can assume there is a table of $N$ rows where each column represents $x_i$, $h_j-t_y$, and so on.

Can you write me some sample code that, by appending a new dimension to a list, will calculate the numeric value of this integral (probably using the Monte Carlo method)?
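Since the actual formula from page 22 is not reproduced here, the following is only a minimal sketch of the kind of code I have in mind, with a placeholder integrand and box-shaped integration limits as assumptions: each `(low, high)` pair appended to the `bounds` list adds one integration dimension.

```python
import numpy as np

def monte_carlo_integrate(integrand, bounds, n_samples=200_000, seed=0):
    """Estimate the integral of `integrand` over the box given by `bounds`.

    bounds : list of (low, high) pairs, one per dimension; appending a new
             pair adds one integration dimension (one path/row of the table).
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)          # shape (d, 2)
    low, high = bounds[:, 0], bounds[:, 1]
    d = len(bounds)
    # Uniform samples in the d-dimensional box, one row per sample point.
    x = rng.uniform(low, high, size=(n_samples, d))
    volume = np.prod(high - low)
    values = integrand(x)                             # vectorized over rows
    return volume * values.mean()

# Placeholder integrand (NOT the formula from the paper): the product of the
# coordinates, whose exact integral over [0,1]^d is (1/2)^d.
def example_integrand(x):
    return np.prod(x, axis=1)

bounds = [(0.0, 1.0)]          # 1 path -> 1-dimensional integral
bounds.append((0.0, 1.0))      # 2 paths -> 2-dimensional
bounds.append((0.0, 1.0))      # 3 paths -> 3-dimensional
estimate = monte_carlo_integrate(example_integrand, bounds)
print(estimate)                # close to (1/2)**3 = 0.125
```

Plain Monte Carlo converges like $1/\sqrt{n}$ independently of the dimension, which is why it remains usable even for a 3000-dimensional integral, though for high accuracy a variance-reduction or quasi-Monte Carlo scheme may be needed.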

For $p \in \mathbb{R}$, consider the following problem: \begin{equation} \label{1} \begin{cases} \operatorname{div}(a \nabla u ) = p\delta_{x_0} \quad \text{in } \Omega \\ u=0 \quad \text{on } \partial \Omega ; \end{cases} \end{equation} under the assumption that $a \in L^\infty$ is constant in some neighbourhood of $x_0$, i.e. $a(\mathbf{x})= a_0 \text{Id}$ for $\mathbf{x} \in B=B_{r_0}(x_0)$, $a_0 \in \mathbb{R}$, we can look for a solution in the form $$ u(x) = \psi(x) + K(x-x_0), $$ where $K(\cdot)$ is the fundamental solution (up to the constants $a_0,p$) of the Laplace operator and $\psi \in H^1(\Omega)$ satisfies a classical, well-posed Neumann problem with data depending on $K|_{\Omega \setminus B}$. Note that the solution $u$ is not globally regular, since it inherits the singularity of $K$ at $x_0$.

Nevertheless, we can set up a control problem "away" from $x_0$ with the number $p$ as control and the quadratic tracking cost functional $$ \min_{p} \left( \frac{1}{2} \| u(p) - u_{d} \|_{0, \Omega \setminus B}^2 + \frac{1}{2} |p|^2 \right), $$ for some desired state $u_d \in L^2$, $u(p)$ being the solution of the above problem (in the above sense!) corresponding to the control $p$.

I see some problems arising while trying to formulate go-to results like necessary optimality conditions: it is not clear what should be a suitable adjoint problem, since a weak formulation is only available for $\psi=\psi_p$, but the state $u$ also depends on $K=K_p$, making $u(p)$ not a trivial translation of $\psi$. Moreover, the choice of the $L^2(\Omega \setminus B)$ norm in the optimization was made to somehow regularize $u$, on the other hand:

- Is the control problem still meaningful, as we are trying, in principle, to approximate a globally *a priori* chosen desired state while taking into account only the behavior away from a fixed point?
- Working with integrals over $\Omega \setminus B$ rather than $\Omega$ gives rise to unwanted boundary terms in integrations by parts.

Are there any references for optimal control problems of this kind?

**Note:** I know that it is possible to set up a global weak formulation for this type of Dirac-source problems (see reference) using sharp functional analysis results on weighted spaces, but this is not known to be possible for larger classes of operators, like those I have to deal with in my research. Therefore, this is a model example and the "split" solution is most likely the only option.

**Reference:** Allendes, Alejandro, et al. "An a posteriori error analysis for an optimal control problem with point sources." ESAIM: Mathematical Modelling and Numerical Analysis 52.5 (2018): 1617-1650.

Please give a closed immersion of schemes $f: Z\to X$ such that $f(Z)\subset U$ with $U$ a proper open subset of $X$.

$\DeclareMathOperator\Res{Res}$I have been reading Kontsevich and Soibelman's "Airy structures and symplectic geometry of topological recursion" (https://arxiv.org/abs/1701.09137) and am having trouble understanding their Section 6.3. I think the section is meant to explain the connection between the deformation of a disc $\mathbb{D}_t$ embedded in a symplectic surface $S$ and differential forms $\eta \in \mathbb{C}((z))dz$, $\Res_z(\eta) = 0$ such that \begin{equation} \forall n \geq 1, \Res_z (yx^n dx) = 0, \qquad \forall n \geq 0, \Res_z (y^2 x^ndx) = 0 \end{equation} where $x = z^2, y = z - \eta/(2zdz)$. Unfortunately, the explanation in the section is too brief for me.

Here's what I think is going on:

We choose coordinates $x$, $y$ on $S$ such that the symplectic form is $\omega = dx \wedge dy$, the foliation is given by $x = \mathrm{const}$, and the disc $\mathbb{D}_t \subset S$ is embedded by $x = z^2$, $y = z$.

If the disc is deformed to $\mathbb{D}'_t$, then we can represent the deformation by a vector field $v \in T_S|_{\mathbb{D}_t}$ pointing along the direction of the foliation, i.e., proportional to $\partial_y$. I'm not sure how this is defined, really.

Contracting $v$ with the symplectic form gives a 1-form $\eta = \omega(v,\cdot) \propto dx$.

Somehow the pull-back of this form to the disc $\mathbb{D}_t$ will satisfy the residue constraints, and the converse is also true. This is the part where I really fail to make the connection.

I feel like this type of construction is well known in a certain area of study, which is why the explanations are so brief. But the paper cites no references for me to look into further.

Could someone please explain to me in more detail what is going on, or suggest good references that will help fill the gap?

I'm encountering this inequality in a dimensionality reduction problem. The simplified form looks as follows:

Consider positive integers $a_1$, $a_2$, $b_1$ and $b_2$ where $a_1>b_1$ and $a_2>b_2$. Prove that

$$ \frac{a_1a_2-b_1b_2}{a_1a_2-1}\geq\frac{(a_1-b_1)(a_2-b_2)}{(a_1-1)(a_2-1)} $$

The inequality seems very trivial and easy, but I am struggling to prove it. While I could prove the special cases, namely (1) $a_1=a_2=a$, which reduces to

$$ (a-1)[(b_1+b_2-2)a-(2b_1b_2-b_1-b_2)]\geq0 $$

$$ \iff a\geq \max(b_1,b_2)\geq\frac{b_1(b_2-1)+b_2(b_1-1)}{(b_2-1)+(b_1-1)}, $$

and (2) $b_1=b_2=b$, which reduces to

$$ (a_1a_2+b)(a_1+a_2)\geq a_1a_2(2b+2), $$

I cannot verify the general case where $a_1\neq a_2$ and $b_1 \neq b_2$. If someone could provide guidance, a reference to similar inequalities in the literature, or any idea toward a solution, I would be very thankful.
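As a sanity check (not a proof), a brute-force search over small values finds no counterexample; `cleared_difference` is the inequality after multiplying out the denominators, which are positive since $a_1, a_2 \ge 2$:

```python
from itertools import product

def cleared_difference(a1, a2, b1, b2):
    # LHS - RHS after clearing the positive denominators (a1*a2 - 1) and
    # (a1 - 1)*(a2 - 1); the inequality holds iff this is nonnegative.
    return (a1*a2 - b1*b2)*(a1 - 1)*(a2 - 1) - (a1 - b1)*(a2 - b2)*(a1*a2 - 1)

violations = [
    (a1, a2, b1, b2)
    for a1, a2 in product(range(2, 12), repeat=2)
    for b1 in range(1, a1)
    for b2 in range(1, a2)
    if cleared_difference(a1, a2, b1, b2) < 0
]
print(len(violations))  # 0: no counterexample with a1, a2 <= 11
```

Note that $b_1=b_2=1$ gives equality, so any proof has to be tight there.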

Let $\kappa>0$ be a cardinal, and let $[\kappa]^{<\kappa}$ denote the collection of subsets of $\kappa$ having cardinality strictly less than $\kappa$. Is it consistent that $$|[\kappa]^{<\kappa}| > \kappa$$ for all cardinals $\kappa>\aleph_0$?

Suppose $\{a(n)\}_{n\ge 1}$ is a bounded complex sequence. Let $\phi(s)=\sum_{n\ge 1} \frac{a(n)}{n^s}$. Obviously, the Dirichlet series $\phi(s)$ is absolutely convergent for $\Re(s)>1$. I would like to know the minimal value $\alpha$ such that the Dirichlet series $\phi(s)$ can have a meromorphic extension to the half-plane $\Re(s)>\alpha$. A related question is whether there is some condition that ensures the Dirichlet series $\phi(s)$ has a meromorphic extension to the half-plane $\Re(s)>1/2$.

Let's consider $H_k∶\ \left\{\begin{matrix}\mathbb{R}^2\rightarrow\mathbb{R}^2\\(x,y)\longmapsto(kx,ky)\\\end{matrix}\right.\ $

It is a homothety of $\mathbb{R}^2$ with center $(0,0)$ (a kind of zoom).

**Now I want to find :**
$$
S=\left\{\mathcal{C}\in\mathcal{P}(\mathbb{R}^2)\ |\ \forall\ k\in\mathbb{R}^\ast,H_k(\mathcal{C})=\mathcal{C}\right\}
$$

Where $\mathcal{P}(\mathbb{R}^2)$ denotes the power set of $\mathbb{R}^2$, i.e. *the set of all sets of $\mathbb{R}^2$ vectors*, which is **kind of** the set of all 2D curves.

($\mathbb{R}^\ast$ is $\mathbb{R}-\{0\}$.)

So my problem is equivalent to finding all the sets of 2D points that are unchanged by every $H_k$ transformation with non-zero scalar $k$.

**Note:** Maybe I could define $S_k=\left\{\mathcal{C}\in\mathcal{P}(\mathbb{R}^2)\ |\ H_k(\mathcal{C})=\mathcal{C}\right\}$, so that $$S=\bigcap_{k\in\mathbb{R}^\ast}S_k$$

I intuitively think that $S$ would be something like :

$$Intuition=L\cup\left\{\emptyset,\mathbb{R}^2\right\}$$ where $L$ is the set of all lines of $\mathbb{R}^2$ that contain $(0,0)$: $$L\ =\ \left\{\left\{(x,mx)\mid x\in\mathbb{R}\right\}\mid m\in\mathbb{R}\right\}\cup\left\{\left\{(0,y)\mid y\in\mathbb{R}\right\}\right\}$$

I know that the $Intuition$ set is included in $S$, but I don't know whether I have missed some solutions.

I would like to compute $S$ precisely to see whether my intuition was right, but I have no idea how to do it.

If you have any idea, I would really appreciate some help!

**EDIT:** I just noticed that I posted this on the wrong Stack Exchange... Sorry about that; the question is now on *Mathematics*.

I am interested in the following sequence: $$ T_n = \sum\limits^{n-1}_{k=0} \binom{n}{k} T_{k}, \qquad T_0 = C \in \mathbb{N}. $$ I would like to express it as a function of $n$, but none of the methods I have tried works.

Asymptotically, I can tell that $T_n = \mathcal{O}(2^{n^2/2})$. One method that failed was to view $T_n$ as the $n$-th term of a series, but those terms grow too fast for it to work.

Do you know how to solve it, or do you have an intuition regarding how it might be solved?

Thank you.
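For reference, the recurrence is easy to compute directly; with $T_0 = 1$ the values it produces ($1, 1, 3, 13, 75, 541, \dots$) are the ordered Bell (Fubini) numbers, OEIS A000670, and since the recurrence is linear in the initial value, $T_n = C \cdot \mathrm{A000670}(n)$ in general:

```python
from math import comb

def T(n_max, C=1):
    """Compute T_0 .. T_{n_max} directly from the recurrence
    T_n = sum_{k=0}^{n-1} C(n, k) * T_k, with T_0 = C."""
    t = [C]
    for n in range(1, n_max + 1):
        t.append(sum(comb(n, k) * t[k] for k in range(n)))
    return t

print(T(5))  # [1, 1, 3, 13, 75, 541] -- for C = 1 these are the
             # ordered Bell (Fubini) numbers, OEIS A000670
```

The Fubini numbers have no elementary closed form in $n$, but their asymptotics are well understood, which may be what matters here.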

In some paper the authors make use of the following inequality without further explanation: Let $x\in\mathbb{R}^n$ with $x_1\le\cdots\le x_n$ and $\alpha\in[0,1]^n$ with $\sum_{i=1}^n \alpha_i=N\in\{1,2,\ldots,n\}$. Then $$\sum_{i=1}^n\alpha_i x_i\ge\sum_{i=1}^N x_i.$$

While I have already found a (quite lengthy) bare-hands proof, I wonder whether this inequality is just (a variant of) some commonly known inequality that I am simply unaware of. Any hints?
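A quick randomized sanity check (not a proof): the admissible $\alpha$ form a polytope whose vertices are the $0/1$ vectors with exactly $N$ ones, so averaging random vertices samples valid $\alpha$ with entries in $[0,1]$ and sum exactly $N$:

```python
import random

def random_alpha(n, N, mixtures=20, rng=random):
    """Sample alpha in [0,1]^n with sum N by averaging `mixtures` random
    0/1 indicator vectors that each have exactly N ones (the vertices of
    the constraint polytope)."""
    alpha = [0.0] * n
    for _ in range(mixtures):
        for i in rng.sample(range(n), N):
            alpha[i] += 1.0 / mixtures
    return alpha

def check(trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(2, 10)
        N = rng.randint(1, n)
        x = sorted(rng.uniform(-5, 5) for _ in range(n))  # x_1 <= ... <= x_n
        alpha = random_alpha(n, N, rng=rng)
        lhs = sum(a * xi for a, xi in zip(alpha, x))
        rhs = sum(x[:N])  # sum of the N smallest entries
        if lhs < rhs - 1e-9:  # small tolerance for float rounding
            return False
    return True

print(check())  # True: no violation found
```

This also suggests a proof route: the left side is linear in $\alpha$, so its minimum over the polytope is attained at a vertex, and every vertex picks $N$ of the $x_i$, whose sum is at least the sum of the $N$ smallest.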

Let's suppose that I have a tree with $n$ nodes. The root of my tree does not change over time: it stays the same. However, the rest of the nodes (parent nodes included) change their positions all the time. In this link there is a clear example. I want to count the different possibilities while keeping node 1 as the root. In this case there are 16 possible trees.

How can I compute the number of possible trees (combinations of nodes) while keeping the same root across the different trees?

I found that it is possible to apply the Prüfer code, which means that there are $n^{n-2}$ possible trees. Please don't hesitate if you have any other suggestion.
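A small sanity check of the Prüfer count (using labels $0,\dots,n-1$ for convenience): decoding every Prüfer sequence for $n = 4$ yields exactly $n^{n-2} = 16$ distinct labeled trees, and declaring a fixed node to be the root does not change the underlying trees, so the count stays 16:

```python
from itertools import product

def prufer_to_tree(seq, n):
    """Decode a Prüfer sequence (labels 0..n-1) into the edge set of the
    labeled tree it encodes."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        # attach v to the smallest current leaf
        leaf = min(i for i in range(n) if degree[i] == 1)
        edges.append(tuple(sorted((leaf, v))))
        degree[leaf] -= 1
        degree[v] -= 1
    # two nodes of degree 1 remain; join them
    u, w = (i for i in range(n) if degree[i] == 1)
    edges.append(tuple(sorted((u, w))))
    return frozenset(edges)

n = 4
trees = {prufer_to_tree(seq, n) for seq in product(range(n), repeat=n - 2)}
print(len(trees), n ** (n - 2))  # 16 16
```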

It is well known that $E[X|X+Y]$ is Gaussian if both $X$ and $Y$ are, and the result can be derived using standard density arguments. However, how can one prove it by resorting only to optimization arguments in order to argue that $$ \min_{Z \in L^2(\sigma(X+Y))}E[(Z-X)^2] = \min_{Z \in N}E[(Z-X)^2], $$ where $N$ is the affine subspace of $L^2(\sigma(X+Y))$ spanned by Gaussian random variables?

**Intuition/Sketch:**
Here is how my train of thought goes:

- Since the space of Gaussian random variables is closed under addition and scalar multiplication, it is a linear subspace of $L^2(X+Y)$. Moreover, since the limit of a sequence of Gaussians is Gaussian, $N$ is a closed linear subspace of the Hilbert space $L^2(X+Y)$.
- Therefore, the projection $$ P_N:x \mapsto \operatorname{argmin}_{w \in N}E[(w-x)^2], $$ is well-defined and single-valued.
- Therefore $L^2(X+Y)\cong N \oplus N^{\perp}$, with the projection onto the first coordinate given by $P_N$.
- The triangle inequality then implies that if $Z \in L^2(X+Y)$, then its first two moments are well-defined and $$ E[(Z-X)^2]\leq E[(P_N(Z)-X)^2] + E[(P_{N^{\perp}}(Z))^2], $$ with equality holding if and only if $Z \in N$.
- Hence, if $X$ is Gaussian, then so must be the minimizer of $E[(\cdot-X)^2]$.

However, this argument doesn't really use the properties of $N$, so it feels like something is missing...

Does anybody have a reference for invariants of configurations of linear subspaces in projective space?

In particular, I would be curious to see an explicit expression for the invariant functions of sets of 4 lines in $P^3$, under the action of $PGL(3)$.

Let $M$ be the Dieudonne module of a p-divisible group $G_0$ over $k$, and let a lift of $G_0$ to $A$ be a p-divisible group $G$ over $A$ such that $G \otimes_A k \simeq G_0$. Let $\omega_G$ be the sheaf of invariant differentials of the p-divisible group $G$. Weinstein states in his notes on The Geometry of Lubin Tate Spaces (bottom of pg 10):

Even though the module $M$ (and the endomorphism $F$) only depend on $G_0$, the line spanned by $\omega_G$ in $M$ really does depend on the lift $G$. Any lift of $G_0$ gives rise via its invariant differential to a line $Fil \subset M$ having the property that the image of $Fil$ spans $M/FM$. Different lifts could (and indeed do) give rise to different lines in $M$ having this property. **These ideas were made precise by Fontaine: the set of lifts of $G_0$ is canonically the same as the set of lines $Fil \subset M$ whose image spans $M/FM$.**

I looked through *groupes p-divisibles sur les corps locaux* and could not find this result, nor could I find in Demazure-Gabriel why $G$ and $G'$ are isogenous lifts of $G_0$ iff $\omega_G$ and $\omega_{G'}$ define the same $k$-line in $M/FM$. (I assume that the set of lifts of $G_0$ in the statement above is taken up to isogeny, though I may be mistaken, there must be some equivalence relation!) I am desperate to have a reference for this result, as I'm perplexed attempting to rederive it.

Let $Q$ be a quiver, and let $d=(d_i)$ be a dimension vector. We can consider Rep($Q,d$), the affine space consisting of representations of $Q$ with dimension vector $d$. The general linear group $GL(d)= \prod_i GL(d_i)$ acts naturally on Rep($Q,d$) in such a way that its orbits are the isomorphism classes of representations of $Q$ with dimension vector $d$.

Let $X$ be such a representation, and consider the Zariski closure of the orbit corresponding to the representations isomorphic to $X$.

There is a simple argument, given in Kirillov's book "Quiver representations and quiver varieties" which shows that if we have a filtration of $X$ as $$X=X_r > X_{r-1} > \dots > X_0 = 0$$ then the orbit corresponding to $\bigoplus X_i/X_{i-1}$ is contained in the Zariski closure of the orbit of $X$.

Kirillov also gives an argument, citing Kempf, that if the orbit corresponding to $Z$ is closed and is contained in the closure of the orbit corresponding to $X$, then the converse holds, i.e., $Z$ is the direct sum of the subquotients of a filtration of $X$.

My question is: what if some orbit corresponding to $Z$ is contained in the closure of the orbit corresponding to $X$, but the orbit corresponding to $Z$ is not closed? Is $Z$ necessarily isomorphic to the direct sum of the subquotients of some filtration of $X$?

Let $x(t),t\in [1,\infty)$ be a nondecreasing positive function satisfying the following inequality: $$ x'(t) \le \int_t^{+\infty} x(s)\frac{k(s)}{s^2}\,ds, $$ for any $t \ge 1$, where $k(t),t\in [1,\infty)$ is a nonincreasing positive function such that $$ \int_1^{+\infty}\frac{k(s)}{s}\,ds <\infty. $$

Can we prove that $x(t)$ is a bounded function?

Let $R$ be a $p$-torsion-free ring which is integrally closed in $R[1/p]$, and let $S$ be a finite étale extension of $R[1/p]$.

Is it true that the integral closure $S^+$ of $R$ in $S$ is flat over $R$?

**Remark 1:** It suffices to show that $S^+/p$ is flat over $R/p$, but I don't know how to see this.

**Remark 2:** I am mostly interested in the situation when $R$ itself is non-noetherian but $R[1/p]$ is. However, I don't know whether this result holds even in the noetherian case.

If it helps, feel free to add extra conditions on the pair $(R, R[1/p])$. For example, in my main case of interest $R$ is $p$-adically complete and $R[1/p]$ is regular, though I am not sure how useful that is.

Let $k$ be a field. Is it true that for any smooth irreducible projective $k$-variety $X$ and a dense open set $U\subset X$, for any zero-cycle on $X$ one can find an irreducible curve containing its support and meeting $U$?

Let $A$ be a square matrix in characteristic $p > 0$ of size $(1 + p^0 + p + \cdots + p^i)$, where $i \geq 0$.

Suppose further that the $(m,n)$-component $a_{m,n}$ of the matrix $A$ is defined as follows$\colon$ \begin{align}\label{MATRIX} & a_{1,1} = a_{2,1} = \cdots = a_{1 + p^0 + \cdots + p^k + 1,\,1} = 1 \quad \text{for } 0 \leq k < i, \notag \\ & a_{m, m-1} = 1 \quad \text{for } 2 \le m \le 1 + p^0 + \cdots + p^i \text{ with } m \not= 1 + p^0 + p + \cdots + p^k + 1 \text{ for any } 0 \leq k < i, \notag \\ & a_{1 + p^0 + \cdots + p^k + 1,\, 1 + p^0 + p + \cdots + p^k + p^{k + 1}} = 1 \quad \text{for } 0 \leq k < i, \notag \\ & a_{m,n} = 0 \quad \text{otherwise.} \end{align} For example, in the case $p = 2$ and $i = 2$ it is written as follows$\colon$

\begin{pmatrix}\label{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}

**Q.** Consider a column vector $v$ of length $1 + p^0 + p^1 + \cdots + p^i$ satisfying $(A^e - I)v = 0$ for some $e \ge 1$, and suppose the first entry of $v$ is *not* zero. Must the minimal such $e$ be $p^{i+1}$? Moreover, must $A^{p^{i + 1}} = I$ hold?
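For what it's worth, a quick computation for the $p = 2$, $i = 2$ example above (hardcoding the $8\times 8$ matrix) is consistent with both claims in this one case: the order of $A$ in $GL_8(\mathbb{F}_2)$ is $8 = p^{i+1}$, and the minimal $e$ with $(A^e - I)v = 0$ for $v = e_1$ is also $8$.

```python
import numpy as np

# The 8x8 example matrix for p = 2, i = 2 given above.
A = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
], dtype=np.int64)

p, i = 2, 2
n = A.shape[0]
I_n = np.eye(n, dtype=np.int64)

# Order of A in GL_n(F_p): the smallest e >= 1 with A^e = I (mod p).
M, order = A % p, 1
while not np.array_equal(M, I_n):
    M, order = (M @ A) % p, order + 1

# Minimal e with (A^e - I) v = 0 (mod p) for v = e_1 (first entry nonzero).
v = np.zeros(n, dtype=np.int64)
v[0] = 1
M, e = A % p, 1
while not np.array_equal((M @ v) % p, v):
    M, e = (M @ A) % p, e + 1

print(order, e, p ** (i + 1))  # 8 8 8
```

Of course this only checks a single instance, not the general claim.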