I am reading *Integrals of Nonlinear Equations of
Evolution and Solitary Waves* by Peter Lax and I'm having a hard time. The methods are pioneering, of course, but Lax does not bother much with precise explanations: he sticks to the PDE *Weltanschauung* and simply differentiates and integrates all the way through. This is perfectly fine, as long as his results are groundbreaking and he is interested in one very specific PDE (KdV on $\mathbb R$, in this case) where his ideas work.

However, things soon break down if one tries to apply Lax's ideas to other examples; this is especially the case for PDEs on bounded domains, where boundary conditions matter and make it tricky to pin down the precise meaning of a commutator.

So I was wondering whether there is some good reference about an abstract approach to the Lax pair idea. I'm ideally thinking of something along the lines of:

Let $H$ be a complex Hilbert space, let $D_1,D_2$ be subspaces of $H$ that are densely and compactly embedded in $H$, let $\mathbb R_+\ni t\mapsto L(t)\in {\mathcal L}(D_1,H)$ be a $C^1$-family such that each $L(t)$ is self-adjoint as an operator on $H$ with domain $D_1$, and let $\mathbb R_+\ni t\mapsto P(t)\in {\mathcal L}(D_2,H)$ be a $C^1$-family of operators that generate an invertible evolution family on $H$ ...

and so on. Basically, what I'm looking for is a precise translation of the idea of Lax pairs to Hilbert space theory.
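For what it's worth, the finite-dimensional caricature of the setup is easy to verify numerically: if $L_0$ is Hermitian and $P$ is skew-Hermitian, then $L(t) = e^{tP} L_0 e^{-tP}$ solves the Lax equation $\dot L = [P, L]$ and has $t$-independent spectrum. A minimal sketch assuming NumPy/SciPy (the matrices below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# L0: Hermitian ("self-adjoint") initial operator; P: skew-Hermitian generator.
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
L0 = (M + M.conj().T) / 2
N = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
P = (N - N.conj().T) / 2

def L(t):
    # L(t) = e^{tP} L0 e^{-tP} solves the Lax equation dL/dt = [P, L].
    U = expm(t * P)  # unitary, since P is skew-Hermitian
    return U @ L0 @ np.linalg.inv(U)

# Isospectrality: the eigenvalues of L(t) do not depend on t.
eigs0 = np.sort(np.linalg.eigvalsh(L0))
eigs1 = np.sort(np.linalg.eigvalsh(L(1.7)))
print(np.max(np.abs(eigs0 - eigs1)))  # close to machine precision
```

An abstract Hilbert-space formulation would have to say in what sense this conjugation picture survives when $L(t)$ and $P(t)$ are unbounded with different domains.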

**Note:** This question has a 1-categorical and an $\infty$-categorical version. I am interested in the $\infty$-categorical one, so this is the version I write below, but an answer for the 1-categorical version would be interesting too.

Given an $\infty$-category $\mathcal{C}$ and a collection of simplicial sets $\mathcal{I}$, there is a notion of "freely adjoining to $\mathcal{C}$ all colimits of shape $\mathcal{I}$". This is the full subcategory $P_{\mathcal{I}}(\mathcal{C})$ of presheaves on $\mathcal{C}$ generated from the image of the Yoneda embedding under $\mathcal{I}$-colimits (HTT 5.3.6). Now assume that $\mathcal{I}$ is not arbitrary, but is the collection of all $I$ such that $I$-colimits commute in spaces with all $K$-limits, for every $K$ in some collection $\mathcal{K}$. E.g.

1) $\mathcal{K}$ is the collection of finite simplicial sets and $\mathcal{I}$ is the collection of filtered simplicial sets.

2) $\mathcal{K}$ is the collection of finite discrete simplicial sets and $\mathcal{I}$ is the collection of sifted simplicial sets.

Assuming $\mathcal{C}$ admits all $K^{op}$-colimits for all $K\in \mathcal{K}$, in these two examples it is known that $P_{\mathcal{I}}(\mathcal{C})$ can be described as the full subcategory $P^{\mathcal{K}}(\mathcal{C})$ of the $\infty$-category of presheaves spanned by the ones which preserve $\mathcal{K}$-limits (for (1) HTT 5.3.5.4 and for (2) HTT 5.5.8.15). The proofs of these two facts seem to be quite different, so I am wondering

**Question**: Under the above conditions on $\mathcal{I}$, $\mathcal{K}$ and $\mathcal{C}$, does $P_{\mathcal{I}}(\mathcal{C})$ always coincide with $P^{\mathcal{K}}(\mathcal{C})$?

It is easy to see that we have an inclusion of $P_{\mathcal{I}}(\mathcal{C})$ into $P^{\mathcal{K}}(\mathcal{C})$, but the converse seems more complicated. I also have a creepy feeling that the words "sound doctrine" may come up...

I am looking for a reference for a result from convex geometry that I suspect has already been proven. The result seems geometrically obvious, but I couldn't find a similar result in Peter Gruber's book, nor could I prove it succinctly with my limited knowledge of convex geometry!

For each $\epsilon > 0$ and $A \subseteq \mathbb{R}^d$ we define $$ B_\epsilon(A) = \{ x \in \mathbb{R}^d : \inf_{y \in A} |x-y| < \epsilon \},$$ where $|\cdot|$ is the usual Euclidean norm.

Let $P$ be a proper, compact convex body in $\mathbb{R}^d$ and let $m$ denote Lebesgue measure. Setting aside measurability concerns, we need the inequality $$ m(B_\epsilon(\partial P) \cap P ) \le m(B_\epsilon(\partial P) \cap P^C ) $$ for each $\epsilon > 0$. As the title suggests, this is just the statement that there is more volume close to the boundary of a proper, compact convex body on the outside than on the inside.

For our application it suffices to consider the case where $P$ is a polytope.
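In the polytope case one can at least sanity-check the claim in closed form for a unit square in $\mathbb{R}^2$ (for $\epsilon < 1/2$): the inner collar has area $1-(1-2\epsilon)^2 = 4\epsilon - 4\epsilon^2$, while the outer collar has area $4\epsilon + \pi\epsilon^2$ (four side strips plus four quarter-disc corners), so the inequality holds with room to spare. A short standard-library script:

```python
import math

def collar_areas(eps, side=1.0):
    """Inner/outer area of the eps-neighborhood of the boundary of a square."""
    assert 0 < eps < side / 2
    inner = side**2 - (side - 2 * eps) ** 2       # square minus shrunken square
    outer = 4 * side * eps + math.pi * eps**2     # side strips + corner quarter-discs
    return inner, outer

for eps in [0.01, 0.1, 0.2, 0.4]:
    inner, outer = collar_areas(eps)
    print(f"eps={eps}: inner={inner:.4f} <= outer={outer:.4f}: {inner <= outer}")
```

Of course this only illustrates the statement for one polytope; it says nothing about the general argument being sought.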

I am reading a paper in differential geometry, Hitchin's *Langlands duality and $G_2$ spectral curves* (see the end of page 8 in the arXiv version), where $f: E \rightarrow F$ is a morphism of holomorphic vector bundles on a Riemann surface and the kernel bundle $K$ is considered. A standard argument shows that $K$ is a vector bundle too. In the paper, the nature of the map also ensures that $K$ is not the zero bundle.

Anyway, the author considers a proper divisor $D$ on the Riemann surface "where $K$ is null". Does this make any sense?

(I study algebraic geometry, so maybe there is some kind of "differential" approach I am missing here, I don't know...)

Suppose we have a seed $(x,y,B)$, where $B$ is a skew-symmetrizable matrix, $x = \{x_1,\ldots,x_n\}$, and $y$ is an $n$-tuple of elements in $\operatorname{Trop}\{x_{n+1},\ldots,x_m\}$.

If there are finitely many cluster variables in the cluster algebra generated by this seed, then why are there only finitely many $y$-variables? (This is equivalent to asking why the extended part of $\tilde{B}$ stays bounded if there are only finitely many cluster variables.) It is mentioned in the proof of Corollary 5.1.6 of https://arxiv.org/pdf/1707.07190.pdf, which makes me think it is trivial.

If there are finitely many cluster variables, it is clear why the mutation equivalence class of $B$ is finite: otherwise there would be a mutation-equivalent matrix $B'$ with $|b'_{ij} b'_{ji}| \geq 4$ for some $i,j$, and then alternately mutating at $i$ and $j$ would generate infinitely many cluster variables.

I am not sure that this is appropriate for MO, so if not, please delete this.

This is inspired by David Hansen's question where he asks about mathematics done during WWII. I would like to ask the opposite question:

what are some examples of mathematical research interrupted by a war?

Everyone is aware of the terrible damage inflicted by the war on the Polish mathematical school. The dramatic fates of Stefan Banach (who lived in very difficult conditions during WWII and died soon after it), Juliusz Schauder (murdered by the Gestapo), Józef Marcinkiewicz (murdered at Katyn) and of many others have had a deep influence on the consciousness of mathematicians in Central Europe (including Russia and, I believe, not only there).

When I was a student, an idea was popular in the Soviet Union that war advances science. I must confess, I am a partisan of the opposite view: war kills science. I would be grateful to people here who would share their knowledge and give illustrations (I mean any war, not necessarily WWII).

Let $P_1, \ldots, P_m$, $Q_1, \ldots, Q_k \in \mathbb{C}[x_0,\ldots,x_n]$ be linear homogeneous polynomials.

Assume that for every $i$ and for every $j$ the polynomial $x_0^2$ belongs to the ideal $\langle P_i, Q_j \rangle$.

Is it true that the rank of $\{P_1, \ldots, P_m \}$ (i.e., the dimension of its span in the vector space of all linear polynomials in $\mathbb{C}[x_0,\ldots,x_n]$) or the rank of $\{Q_1, \ldots, Q_k \}$ is less than some constant?

(I think the answer to this question can help to solve this question.)

UPD: Since Zach Teitler has answered the question for $x_0^2$, I would like to ask the same question for an arbitrary homogeneous quadratic polynomial.
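Membership of $x_0^2$ in $\langle P_i, Q_j \rangle$ is mechanically checkable via Gröbner bases, which at least makes it easy to test candidate families. A sketch assuming SymPy (the family below is a hypothetical illustration, not a proposed extremal example):

```python
from sympy import symbols, groebner

x0, x1, x2, x3 = symbols('x0:4')

def in_ideal(f, gens):
    """Ideal membership test: reduce f modulo a Groebner basis of <gens>."""
    G = groebner(gens, x0, x1, x2, x3, order='lex')
    _, remainder = G.reduce(f)
    return remainder == 0

# An illustrative family where the hypothesis holds trivially because
# every P_i is a multiple of x0 (so the rank of {P_i} is 1):
P = [x0, 2*x0]
Q = [x1, x2, x0 + x3]
print(all(in_ideal(x0**2, [p, q]) for p in P for q in Q))  # True
```

This does not address the bound itself, but it gives a quick way to probe proposed counterexample families.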

I believe that there is a conjecture that for any smooth projective variety $X$ over a number field $K$, its Chow groups $CH^i(X)$ (or at least $CH^i(X)\otimes_{\mathbf Z} \mathbf Q$) are finitely generated. Is it called Beilinson's Conjecture? What is the best reference for this conjecture? Is it known in any non-trivial case?

I am getting a negative variance for a set of data. I know this should not be the case but I cannot figure out what I am doing wrong.

I keep a tally of the scores and a tally of the squared scores as follows.

The tallies are initialized as

    tally  = 0   -- running sum of the scores
    tally2 = 0   -- running sum of the squared scores

Whenever an event that results in a score takes place, the new score is added to the tallies:

    tally  = tally + score
    tally2 = tally2 + score*score

At the end I calculate the mean and variance as

    mean     = tally / number of events
    variance = tally2 / number of events - mean*mean

I know that the variance should never be negative, but for some reason I am getting a negative value. Any input on what could cause my variance to be negative would be appreciated. Thanks.
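The likely culprit is floating-point cancellation: `tally2 / n` and `mean*mean` can be two huge, nearly equal numbers (e.g. when the scores have a large offset relative to their spread), and their difference then drowns in rounding error and can come out slightly negative. A numerically stable alternative is Welford's one-pass algorithm; a minimal Python sketch:

```python
class RunningStats:
    """Welford's online algorithm: one pass, no catastrophic cancellation."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def add(self, score):
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    def variance(self):
        # Population variance; divide by (n - 1) instead for the sample variance.
        return self.m2 / self.n if self.n else 0.0

# Data with a large offset, where the naive two-tally formula can go negative:
data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]
stats = RunningStats()
for score in data:
    stats.add(score)
print(stats.variance())  # 22.5, and m2 is nonnegative by construction
```

If you must keep the two-tally scheme, subtracting an approximate mean from every score before tallying achieves a similar effect.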

In homotopy theory, the *mapping cone* of a continuous map $f\colon X \to Y$ is the homotopy pushout over the following span:

$$ \require{AMScd} \begin{CD} X @>{f}>> Y\\ @VVV \\ \{*\} \end{CD} $$

I.e., it is universal among all squares of the form $$ \begin{CD} X @>{f}>> Y\\ @VVV @VVV\\ \{*\}@>>> Z \end{CD} $$ where the square commutes up to homotopy.

But what is a good name for such an object $Z$? Normally, I would call it a **cocone**, but I would rather not use the word *cone* to mean two different things.

*Square* and *cospan* are possibilities, but they seem a bit too general: I want to refer specifically to cocones for the first diagram.

Is there a good alternative word?

This problem is motivated from one of my pattern mining research projects. Any helpful suggestions will be highly appreciated and **acknowledged**.

Consider an $n \times n$ correlation matrix $A$ such that all the off-diagonal entries lie in $[-1,0]$. (**Note**: a correlation matrix is a positive semi-definite symmetric matrix with diagonal entries $1$ and all off-diagonal entries in $[-1,1]$.)

Let $\alpha_i = \frac{\sum_{j=1,j \neq i}^{n}|A_{ij}|}{n-1}$ denote the mean of the magnitudes of the off-diagonal entries in the $i$-th column.

Let $v_{min} = [v_1,v_2,...,v_n]^T$ be the unit eigenvector corresponding to the least eigenvalue $\lambda_{min}$ of $A$, and let $v_k$ be the entry of minimum magnitude in $v_{min}$.

**Then empirically, I am observing that $\alpha_k$ is also minimum among all $\alpha_i$'s.**

I am wondering whether this is indeed true and can be proved, or whether there is a counterexample where it breaks.

**My attempt so far:** I was trying the following approach, but it doesn't lead me very far.

$\lambda_{min} = v_{min}^{T}Av_{min}$.

On expanding, we get

$\lambda_{min} = 1 + \sum_{i=1}^{n}\sum_{j=1,j \neq i}^{n} v_i v_j A_{ij}$.

Since $\lambda_{min}$ is the least eigenvalue, we would want $\sum_{i=1}^{n}\sum_{j=1,j \neq i}^{n} v_i v_j A_{ij}$ to be minimized.

It can be shown that for any correlation matrix whose off-diagonal entries are all $\leq 0$, the entries of $v_{min}$ are all of the same sign. Therefore, $v_i v_j A_{ij} = - v_i v_j |A_{ij}|\leq 0$ for all $i,j \in [1,n]$, $i \neq j$.

Now, for any two columns $A_{:i}$ and $A_{:j}$ such that $|A_{mi}| \geq |A_{mj}|$ for all $m \in [1,n]$, $m \neq i,j$, we have $\alpha_i \geq \alpha_j$. It is also easy to show that $|v_i| \geq |v_j|$ must hold in order to minimize $\lambda_{min}$.

However, the above is a very special case, where column $A_{:j}$ is entrywise smaller than or equal to column $A_{:i}$ in magnitude. For other, general cases it doesn't seem to tell us anything more.
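The observation is easy to stress-test numerically. A sketch assuming NumPy, on one small matrix of the required form (my own illustrative choice):

```python
import numpy as np

# A 3x3 correlation matrix with all off-diagonal entries in [-1, 0].
A = np.array([[ 1.0, -0.1, -0.2],
              [-0.1,  1.0, -0.3],
              [-0.2, -0.3,  1.0]])
assert np.all(np.linalg.eigvalsh(A) >= 0)  # positive semi-definite

n = A.shape[0]
# alpha_i: mean magnitude of the off-diagonal entries in column i.
alpha = (np.abs(A).sum(axis=0) - 1.0) / (n - 1)

# Unit eigenvector for the least eigenvalue (eigh returns ascending order).
eigvals, eigvecs = np.linalg.eigh(A)
v_min = eigvecs[:, 0]

k = int(np.argmin(np.abs(v_min)))
print(k == int(np.argmin(alpha)))  # True for this example
```

Replacing the fixed matrix by randomly generated correlation matrices with nonpositive off-diagonal entries would be the natural way to hunt for a counterexample.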

Cantor's Attic is a really great website for the various descriptions of large finite numbers, large countable ordinals, and large cardinal axioms.

However, after looking through the archives of the website, I have found that the following cardinals were originally included but never given a definition:

- Grand reflection cardinals
- Universe cardinals
- Weak universe cardinals

The universe cardinals and weak universe cardinals were replaced by the worldly cardinals in the same spot, so it makes sense that the term "universe" was renamed to "worldly". However, that doesn't explain what the "weak universe" cardinals are.

The grand reflection cardinals were created and never replaced. They still remain on the upper attic today, although hidden by the page's code: you can see the link there, but it points to nothing.

So what are these cardinals? Does anybody know? The best person I could think of to answer this would be @JoelDavidHamkins himself, who was the one to put these on Cantor's Attic.

Suppose $A,B\in M_n(\mathbb C)$ are self-adjoint. Does there exist a constant $C>0$, depending only on $n$, such that $$ |A+iB| \leq C(|A| + |B|)? $$

If $A$ and $B$ are positive, then this was answered in the affirmative in this question. The argument given there does not obviously extend to this situation.

This is a follow-up to normal form for some finite groups, extending the small groups library.

Not being very familiar with group theory, I wonder whether it is possible to check efficiently whether a group (given as a permutation group) is isomorphic to a generalized symmetric group.

Initial computer experiments indicate that the parameter $m$ in $\mathbb Z_m\wr\mathfrak S_n$ might be half the index of the derived subgroup in the group.

From a practical point of view, I am trying to do this with GAP.
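For what it's worth, the relationship between $m$ and the index of the derived subgroup can be sanity-checked outside GAP as well. The sketch below (assuming SymPy's permutation groups) builds $\mathbb Z_3\wr\mathfrak S_2$, of order $18$, on six points with generators of my own choosing, and finds derived-subgroup index $6 = 2m$:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Z_3 wr S_2 acting on {0,...,5}: a 3-cycle on the first block of three
# points, plus the swap of the two blocks.
a = Permutation([1, 2, 0, 3, 4, 5])   # rotate the first block
s = Permutation([3, 4, 5, 0, 1, 2])   # exchange the two blocks
G = PermutationGroup([a, s])

print(G.order())  # 18 = 3^2 * 2
index = G.order() // G.derived_subgroup().order()
print(index)      # 6 = 2m with m = 3
```

The abelianization of $\mathbb Z_m\wr\mathfrak S_n$ for $n\ge 2$ is $\mathbb Z_m\times\mathbb Z_2$, which is consistent with the index always being $2m$; in GAP the analogous check would use `DerivedSubgroup` and `Index`.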

Let $f \colon U \to \mathbb R^n$ ($U$ open in $\mathbb R^n$) be of class $C^1$ and assume furthermore that $f$ is injective. The theorem of the invariance of domain tells us that $f(U)$ is open.

Is it possible to show that $f(U)$ is open without using the invariance of domain theorem, but instead using that $f$ is continuously differentiable?

Denote the standard Gaussian probability measure on $\mathbb R^n$ by $\gamma$. We partition $\mathbb R^n$ into two sets $A$ and $A^c$ such that $\gamma(A) = \gamma(A^c) = 1/2$.

Denote by $\gamma_{A}$ the Gaussian measure restricted to $A$ and normalized so that it is a probability measure. Similarly, define $\gamma_{A^c}$ to be the Gaussian measure restricted to $A^c$ and normalized.

My question is the following:

What is the optimal $A$ such that $\gamma_A$ and $\gamma_{A^c}$ are the farthest apart; i.e., solving

$$\arg\max_{A} W_2(\gamma_A, \gamma_{A^c}),$$

where $W_2$ is the 2-Wasserstein distance?

Possible generalization: Instead of constructing $\gamma_A$ and $\gamma_{A^c}$ as above, we could start with any two probability measures $\gamma_1$ and $\gamma_2$ such that $\gamma = \frac{\gamma_1 + \gamma_2}{2}$ and find $\arg \max_{\gamma_1, \gamma_2} W_2(\gamma_1, \gamma_2)$.

Finding upper bounds on the $W_2$ distance is also of interest. A natural conjecture, inspired by the Gaussian isoperimetric inequality, would be that $A$ should be a half-space. Counterexamples to this are also welcome!
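In dimension one the question can be probed numerically, since on $\mathbb R$ the $W_2$ distance is the $L^2$ distance between quantile functions. The sketch below (assuming NumPy/SciPy) compares the half-line split $A = (-\infty, 0)$ with the symmetric split $A = \{|x| < \Phi^{-1}(3/4)\}$, using the explicit quantile functions of the normalized restrictions:

```python
import numpy as np
from scipy.stats import norm

def w2_from_quantiles(q1, q2, n=200000):
    """W2 on the line = L2 distance between quantile functions (midpoint rule)."""
    u = (np.arange(n) + 0.5) / n
    return np.sqrt(np.mean((q1(u) - q2(u)) ** 2))

# Half-line split A = (-inf, 0): the two restricted normals are the
# negative and positive half-normals.
w2_half = w2_from_quantiles(lambda u: norm.ppf(u / 2),
                            lambda u: norm.ppf((1 + u) / 2))

# Symmetric split A = {|x| < c} with gamma(A) = 1/2, i.e. c = norm.ppf(0.75):
# gamma_A is the central piece, gamma_{A^c} the two tails.
q_inner = lambda u: norm.ppf(0.25 + 0.5 * u)
q_outer = lambda u: np.where(u < 0.5, norm.ppf(u / 2), norm.ppf((1 + u) / 2))
w2_sym = w2_from_quantiles(q_inner, q_outer)

print(w2_half > w2_sym)  # True: the half-space split separates more here
```

This is only a two-candidate comparison in one dimension, but it is consistent with the half-space conjecture.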

Let $X = \operatorname{Spec}(A)$ be an integral affine scheme endowed with an action of a finite group $G$ of order $n$.

Consider the fixed-point scheme $X^G$.

Assume $n$ is invertible in $A$. Is there a result out there describing the irreducible components of $X^G$?

I tried several easy examples (permutations of coordinates in affine $d$-space), and every time I found that $X^G$ was irreducible.

I'm looking for a scanned version of Thurston's famous notes (as they were circa 1980).

I am having real difficulty finding the original version, since the electronic (TeX) version is now everywhere on the web.

Any help appreciated!

As a bonus question in an exam we were asked to find compact metric spaces $X,Y$ and $Z$ such that $d_{GH}(X,Y)=d_{GH}(X,Z)=d_{GH}(Y,Z)>0$.

The proposed answer is to take $\{0\},\{-1,1\}$ and $\{-1,0,1\}$, and the distances can be easily calculated by trying all appropriate correspondences and calculating distortions.

However, I proposed the following three sets (there are only two radii, and they satisfy $R>r$):

Set $X$ is the big closed ball of radius $R$.

Set $Y$ is the small closed ball of radius $r$.

Set $Z$ is a closed ball of radius $r$ together with two perpendicular line segments of length $2R$ that intersect at the center of the ball.

The conjecture is that all distances are $R-r$. To achieve this distance, simply place the figures concentrically. (I think we may need $\frac{r}{R}$ to be large.)

In order to get lower bounds for the distances involving $Y$ one simply uses the bound $d_{GH}(A,B)\geq |\text{Diam}(A)-\text{Diam}(B)|/2$.

But I am stuck calculating the distance between $X$ and $Z$. One approach is to argue by contradiction using distortions. If we take $x_1$ and $x_2$ diametrically opposite, then if $z_1\sim x_1$ and $x_2\sim z_2$ we must have $d(z_1,z_2)>r$, and so at least one of $z_1$ and $z_2$ is outside the small ball, but I'm stuck after this.

Let $a = (a_1, \cdots, a_n), b = (b_1, \cdots, b_n), c = (c_1, \cdots, c_n) \in \mathbb{R}^n$ with $a_1 \geq \cdots \geq a_n, b_1 \geq \cdots \geq b_n, 0 < c_1 \leq \cdots \leq c_n$.

In addition, assume that $\sum_{i=1}^k b_i \leq \sum_{i=1}^k a_i$ for all $k \in \{1, \cdots, n-1\}$ and $\sum_{i=1}^n b_i = \sum_{i=1}^n a_i$.

Let $A := \{(a_{\sigma(1)}, \cdots, a_{\sigma(n)}) \,|\, \sigma \in S_n\}$ and let $K_A$ be the convex hull of $A$, i.e. the unique minimal convex set containing $A$.

For $b \in K_A$, does $$\sum_{i=1}^n c_i b_i \geq \sum_{i=1}^n c_i a_i$$ hold?
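The inequality is easy to stress-test numerically: by Birkhoff's theorem every product $Da$ with $D$ doubly stochastic lies in $K_A$, so one can sample such points and compare the two dot products. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_doubly_stochastic(n, k=20):
    """Average of k random permutation matrices: doubly stochastic by Birkhoff."""
    D = np.zeros((n, n))
    for _ in range(k):
        D += np.eye(n)[rng.permutation(n)]
    return D / k

n = 6
a = np.sort(rng.standard_normal(n))[::-1]           # decreasing
c = np.sort(np.abs(rng.standard_normal(n)) + 0.1)   # increasing, positive

ok = True
for _ in range(1000):
    b = random_doubly_stochastic(n) @ a             # a point of K_A
    ok &= (c @ b >= c @ a - 1e-12)                  # tolerance for rounding
print(ok)  # no counterexample found in this run
```

The experiment is consistent with the heuristic that the minimum of the linear functional $b \mapsto \sum_i c_i b_i$ over $K_A$ is attained at a vertex, i.e. at a permutation of $a$.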