Consider the following statement, which I suspect is false as written:

Let $E,F,G$ be (Hausdorff) topological vector spaces (over $\mathbb{R}$), let $\varphi\colon E\times F\to G$ be continuous and bilinear, and let $(x_i)_{i\in I}$ and $(y_j)_{j\in J}$ be summable families in $E$ and $F$ respectively with $\sum_{i\in I} x_i = x$ and $\sum_{j\in J} y_j = y$. Then $(\varphi(x_i,y_j))_{(i,j)\in I\times J}$ is summable with $\sum_{(i,j)\in I\times J} \varphi(x_i, y_j) = \varphi(x,y)$.

(Lest there be any doubt, “$(z_k)_{k\in K}$ summable with $\sum_{k\in K} z_k = z$” means that for every neighborhood $V$ of $z$ there is a finite $K_0\subseteq K$ such that for any finite $K_0 \subseteq K_1 \subseteq K$ we have $\sum_{k\in K_1} z_k \in V$.)

I am interested both in “nice” counterexamples to the statement above and in strengthenings of the hypotheses which would make it true—or basically any information regarding variations of this statement (I know essentially nothing except for the pretty much trivial fact that when $E,F,G$ are finite-dimensional it is correct). Since the rules of MO are to ask one specific question, and since I am mostly interested in counterexamples, let me ask:

**Question:** Is there a counterexample to the above statement with “nice” spaces $E,F,G$ (e.g., locally convex, complete, metrizable… or even Banach spaces)?

—but again, any information concerning it is welcome.

**Comments** (added 2018-01-04):

It is clear that, under the hypotheses of the statement above, $\sum_{(i,j)\in I_1\times J_1} \varphi(x_i, y_j)$ converges to $\varphi(x,y)$ where $I_1$, $J_1$ range over the finite subsets of $I$ and $J$ respectively; what is to be proven is that $\sum_{(i,j)\in K_1} \varphi(x_i, y_j)$ converges to $\varphi(x,y)$ where $K_1$ ranges over the finite subsets of $I\times J$. The subtlety, of course, is that $K_1$ can fail to be a rectangle.

The following result is found in Seth Warner's book *Topological Rings* (1993), Theorem 10.15: if $E,F,G$ are Hausdorff commutative topological groups, $\varphi\colon E\times F\to G$ is continuous and $\mathbb{Z}$-bilinear, and $(x_i)_{i\in I}$ and $(y_j)_{j\in J}$ are summable families in $E$ and $F$ respectively with $\sum_{i\in I} x_i = x$ and $\sum_{j\in J} y_j = y$, then, **provided** $(\varphi(x_i,y_j))_{(i,j)\in I\times J}$ is summable, its sum is $\sum_{(i,j)\in I\times J} \varphi(x_i, y_j) = \varphi(x,y)$. So the crucial question is the summability of $(\varphi(x_i,y_j))$, not the equality of its sum with $\varphi(x,y)$. Even with the very weak hypothesis that $E,F,G$ are commutative topological groups, I still don't have a counterexample!

The following possibly related result is found in Kamal Kant Jha's 1972 paper “Analysis of Bounded Sets in Topological Tensor Products” (Corollary 3.3): if $(x_i)$ is a totally summable family in a locally convex space $E$ [meaning that there exists $L\subseteq E$ closed, absolutely convex and bounded, such that $\{x_i\}\subseteq L$ and $\sum_i p_L(x_i) < +\infty$ for $p_L$ the gauge of $L$] and ditto for $(y_j)$ in $F$, then $(x_i \otimes y_j)$ is totally summable in $E \mathbin{\otimes_\varepsilon} F$.

[Reposted from math.stackexchange]

Consider a monoid $M$ acting on a set $X$, where $M$ is the full transformation monoid on some set $A$.

Say that $B\subseteq A$ *fixes* $x\in X$ iff, for all $m\in M$, if $m(b) = b$ for all $b\in B$, then $mx=x$.

Say that $B\subseteq A$ *pins down* $x\in X$ iff, for all $m,n\in M$, if $m(b)=n(b)$ for all $b\in B$, then $mx=nx$.

[Apologies if there's more standard terminology for these notions]

Question 1: If $B$ fixes $x$, does it follow that $B$ pins down $x$?

Question 2: If $B$ and $B'$ both pin down $x$, does it follow that $B\cap B'$ pins down $x$?

Is a collection of reciprocals of monic reducible quadratic polynomials, that is, of functions of the form

$$ \{ \left( (x-a_i)(x-b_i) \right)^{-1} \}_{i=1}^{k}, $$

linearly independent over a finite field? The analogous statement for reciprocals of linear polynomials can be seen from the invertibility of Cauchy matrices.

Edit: To address ABX's very nice observation/obstruction, assume that $k$ is very small compared to the characteristic of the field.
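For what it's worth, the question can be probed mechanically over a small prime field. The sketch below (the prime $p=101$ and the pairs $(a_i,b_i)$ are my own illustrative choices, not from the question) clears the common denominator $D(x)=\prod_i (x-a_i)(x-b_i)$, which turns linear independence of the reciprocals into linear independence of the polynomials $D(x)/((x-a_i)(x-b_i))$, and then computes the rank of their coefficient matrix mod $p$:

```python
# Brute-force independence check over F_p (a sketch, not a proof of anything).

def poly_mul(f, g, p):
    """Multiply two polynomials (ascending coefficient lists) mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % p
    return h

def rank_mod_p(rows, p):
    """Rank of a matrix over F_p by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(rows) and col < ncols:
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(v * inv) % p for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                c = rows[r][col]
                rows[r] = [(v - c * w) % p for v, w in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

def independent(pairs, p):
    """Are the functions 1/((x-a)(x-b)), for (a,b) in pairs, independent over F_p?"""
    quads = [poly_mul([-a % p, 1], [-b % p, 1], p) for a, b in pairs]  # (x-a)(x-b)
    rows = []
    for i in range(len(quads)):
        f = [1]
        for j, q in enumerate(quads):
            if j != i:
                f = poly_mul(f, q, p)  # D(x) / ((x-a_i)(x-b_i))
        rows.append(f)
    width = max(len(r) for r in rows)
    rows = [r + [0] * (width - len(r)) for r in rows]
    return rank_mod_p(rows, p) == len(rows)

# k = 3 quadratics over F_101, with k small compared to the characteristic:
print(independent([(1, 2), (3, 4), (5, 6)], 101))   # True
print(independent([(1, 2), (1, 2)], 101))           # False (a repeated quadratic)
```

This only tests particular instances, of course, but it makes it easy to hunt for small-characteristic obstructions of the kind ABX pointed out.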

Where can one find a proof of the Lefschetz fixed-point theorem for the Frobenius map on elliptic curves over the algebraic closure of $\mathbb{F}_{p}$?

This would immediately follow if their cohomologies (for the sheaf of regular functions) were Weil cohomologies. But the proof of this is also hard to find.

Yet there are references to this fact in connection with the use of the Picard–Fuchs equation and counting rational points on such curves.

The Catalan numbers $C_n$ count both

- the Dyck paths of length $2n$, and
- the ways to associate $n$ repeated applications of a binary operation.

We call the latter *magma expressions*; we will explain below.

**Dyck paths, and their lattice structure**

A *Dyck path of length $2n$* is a sequence of $n$ up-and-right strokes and $n$ down-and-right strokes, all having equal length, such that the sequence begins and ends on the same horizontal line and never passes below it. A picture of the five length-6 Dyck paths is shown here:

There is an order relation on the set of length-$2n$ Dyck paths, given by comparing their heights; we call it the *height order*, though in the title of the post, we called it "Dyck order". For $n=3$ it gives the following lattice:

For any $n$, one obtains a poset structure on the set of length-$2n$ Dyck paths using height order, and in fact this poset is always a Heyting algebra (it represents the subobject classifier for the topos of presheaves on the twisted arrow category of $\mathbb{N}$, the free monoid on one generator; see this mathoverflow question).

**Magma expressions and the "exponential evaluation order"**

A set with a binary operation, say •, is called a magma. By a *magma expression of length $n$*, we mean a way to associate $n$ repeated applications of the operation. Here are the five magma expressions of length 3:

It is well-known that the set of length-$n$ magma expressions has the same cardinality as the set of length-$2n$ Dyck paths: they are representations of the $n$th Catalan number.

An ordered magma is a magma whose underlying set is equipped with a partial order, and whose operation preserves the order in both variables. Given an ordered magma $(A,$•$,\leq)$, and magma expressions $E(a_1,\ldots,a_n)$ and $F(a_1,\ldots,a_n)$, write $E\leq F$ if the inequality holds for every choice of $a_1,\ldots,a_n\in A$. Call this the *evaluation order*.

Let $P=\mathbb{N}_{\geq 2}$ be the set of natural numbers with cardinality at least 2, the *logarithmically positive* natural numbers. This is an ordered magma, using the usual $\leq$-order, because if $2\leq a\leq b$ and $2\leq c\leq d$ then $a^c\leq b^d$.

**Question:** Is the exponential evaluation order on length-$n$ expressions in the ordered magma $(P,$^$,\leq)$ isomorphic to the height order on length-$2n$ Dyck paths?

I know of no *a priori* reason to think the answer to the above question should be affirmative. A categorical approach might be to think of the elements of $P$ as inhabited sets, choose an arbitrary element of each, use them to define functions between the various Hom-sets (e.g., a map $\mathsf{Hom}(c,\mathsf{Hom}(b,a))\to\mathsf{Hom}(\mathsf{Hom}(c,b),a)$) in a recursively definable way, and hope to prove they are injective. However, while defining these maps seems to be doable in an ad hoc manner, I don't see how to generalize it. I also don't see how one should use the assumption that the sets are not just inhabited but have cardinality at least two.

However, despite the fact that I don't know where to look for a proof, I do have evidence to present in favor of an affirmative answer to the above question.

**Evidence that the orders agree**

It is easy to check that for $n=3$, these two orders do agree:

    A = a^(b^(c^d))
          |
    B = a^((b^c)^d)
         / \
        C   D          where C = (a^b)^(c^d)  and  D = (a^(b^c))^d
         \ /
          E = ((a^b)^c)^d

(with $A := A(a,b,c,d)$, etc., denoting the evaluated expressions.)

This can be seen by taking logs of each expression. (To see that C and D are incomparable: use $a=b=c=2$ and $d$ large to obtain $C>D$; and use $a=b=d=2$ and $c$ large to obtain $D>C$.) Thus the evaluation order on length-3 expressions in $(P,$^$,\leq)$ agrees with the height order on length-$6$ Dyck paths.
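The incomparability witnesses for C and D can be checked mechanically with exact integer arithmetic; here is a quick sketch (the particular values $d=10$, $c=10$ standing in for "large" are my own choice):

```python
# The five parenthesizations of a^b^c^d, as honest big-integer towers.
A = lambda a, b, c, d: a ** (b ** (c ** d))
B = lambda a, b, c, d: a ** ((b ** c) ** d)
C = lambda a, b, c, d: (a ** b) ** (c ** d)
D = lambda a, b, c, d: (a ** (b ** c)) ** d
E = lambda a, b, c, d: ((a ** b) ** c) ** d

# a = b = c = 2 and d large gives C > D ...
print(C(2, 2, 2, 10) > D(2, 2, 2, 10))   # True
# ... while a = b = d = 2 and c large gives D > C, so C and D are incomparable.
print(D(2, 2, 10, 2) > C(2, 2, 10, 2))   # True
# The chain A >= B >= C >= E also holds at a sample point:
print(A(2, 2, 2, 3) >= B(2, 2, 2, 3) >= C(2, 2, 2, 3) >= E(2, 2, 2, 3))  # True
```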

(Note that the answer to the question would be negative if we were to use $\mathbb{N}$ or $\mathbb{N}_{\geq 1}$ rather than $P=\mathbb{N}_{\geq2}$ as in the stated question. Indeed, with $a=c=d=2$ and $b=1$, we would have $A(a,b,c,d)=2\leq 16=E(a,b,c,d)$.)

It is even easier to see that the orders agree in the case of $n=0,1$, each of which has only one element, and the case of $n=2$, where the order $(a^b)^c\leq a^{(b^c)}$ not-too-surprisingly matches that of length-4 Dyck paths:

              /\
    /\/\  ≤  /  \

Indeed, the order-isomorphism for $n=2$ is not too surprising, because there are only two possible partial orders on a set with two elements. However, according to the OEIS, there are 1338193159771 different partial orders on a set with $C_4=14$ elements. So it would certainly be surprising if the evaluation order for length-4 expressions in $(P,$^$,\leq)$ were to match the height order for length-8 Dyck paths. But after some tedious calculations, I have convinced myself that these two orders in fact *do agree* for $n=4$! Of course, this could just be a coincidence, but it is certainly a striking one.

**Thoughts?**

Let $A$ and $B$ be complex $4\times 4$ matrices. Assume both are Hermitian, and that they are linearly independent.

Must there exist a nonzero real linear combination $aA + bB$ which has a repeated eigenvalue?

Not long ago, the Puzzle Corner of the magazine *MIT Technology Review* asked for a set of $N$ dice that are non-transitive in the sense that there is a cyclic ordering on them in which each die beats the next die in the cyclic order. I had not seen this particular question about non-transitive dice before, but it is not a terribly difficult puzzle; you can view one solution here if you are curious. My question here is not about the puzzle per se, but about a curious kind of matrix product that arose while I was trying to solve it.

For simplicity, assume that all numbers on all dice are distinct. Suppose we have two dice $A$ and $B$ with respective numbers $A_1, A_2, A_3, \ldots$ and $B_1, B_2, B_3, \ldots$ and assume without loss of generality that $A_1 > A_2 > A_3 > \cdots$ and $B_1 > B_2 > B_3 > \cdots$. Then we may record the relationship between the dice by a 0-1 matrix $M$ whose $(i,j)$ entry $M_{ij}$ is given by $$M_{ij} = \cases{1, &if $A_i>B_j$;\cr 0, &if $A_i<B_j$.}$$ Note that if $A_i>B_j$ then $A_i>B_{j+1}$ and $A_{i-1}>B_j$. This means that the "1" entries in $M$ form a Young diagram in the upper right-hand corner of $M$ (except that the rows are right-justified rather than left-justified).

Now let us consider a third die $C$, and let $M'$ denote the 0-1 matrix that records the relationship between $B$ and $C$. It is natural to ask:

What relationships necessarily hold between $A$ and $C$?

This question is readily answered. If $A_i>B_j$ and $B_j>C_k$ then necessarily $A_i>C_k$. Conversely, if there is no $B_j$ such that both $A_i>B_j$ and $B_j>C_k$ then there is no necessary relationship between $A_i$ and $C_k$. Therefore we can compute the necessary relationships between $A$ and $C$ by computing $M\boxtimes M'$, where by $\boxtimes$ I mean the matrix product defined by $$(M\boxtimes M')_{ik} = \max_j M_{ij}M'_{jk}.$$ Equivalently, $\boxtimes$ is matrix multiplication on Boolean matrices, with scalar multiplication replaced by AND and scalar addition replaced by (inclusive) OR.

In this language, the existence of non-transitive dice is equivalent to the existence of $M$ and $M'$, each with more 1's than 0's, such that $M\boxtimes M'$ has more 0's than 1's.

My question is:

Does $\boxtimes$ have any interesting properties? Does it show up elsewhere in mathematics?

One can think of $\boxtimes$ as an operation on Young diagrams or on lattice paths, but I do not recall encountering this operation before.
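To make the setup concrete, here is a small sketch computing $M$, $M'$ and $M\boxtimes M'$ for one classical non-transitive triple (these particular dice are my own illustrative choice, not the magazine's; they have repeated faces within a die, but no ties ever occur across dice, so the strict-inequality definition of $M$ still applies):

```python
def relation_matrix(die_a, die_b):
    """M[i][j] = 1 iff the i-th largest face of die_a beats the j-th largest face of die_b."""
    a = sorted(die_a, reverse=True)
    b = sorted(die_b, reverse=True)
    return [[1 if x > y else 0 for y in b] for x in a]

def boxtimes(M, Mp):
    """Boolean matrix product: AND in place of scalar product, OR in place of scalar sum."""
    return [[max(M[i][j] * Mp[j][k] for j in range(len(Mp)))
             for k in range(len(Mp[0]))] for i in range(len(M))]

ones = lambda M: sum(sum(row) for row in M)

A = [2, 2, 4, 4, 9, 9]   # beats B on 20 of 36 face pairs
B = [1, 1, 6, 6, 8, 8]   # beats C on 20 of 36 face pairs
C = [3, 3, 5, 5, 7, 7]   # beats A on 20 of 36 face pairs

M  = relation_matrix(A, B)
Mp = relation_matrix(B, C)
MM = boxtimes(M, Mp)     # the relations between A and C forced by transitivity

print(ones(M), ones(Mp), ones(MM))   # 20 20 12
```

So $M$ and $M'$ each have more 1's than 0's, while $M\boxtimes M'$ has only 12 ones out of 36, exactly the phenomenon described above.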

Let $X$ be a nonsingular projective variety over an infinite field $k$. How does one prove that there are no projective objects admitting a surjective map to the structure sheaf $\mathcal{O}_X$, in either $\mathrm{Qch}(X)$ or $\mathrm{Coh}(X)$? Here $\mathrm{Coh}(X)$ is the category of coherent sheaves on $X$, and $\mathrm{Qch}(X)$ that of quasi-coherent sheaves.

Let $y(x) = \sum_k y_k x^k$ be a formal power series. Assume that $y$ is invertible, so that $y^{-1}$ exists, and assume moreover that both $y$ and $y^{-1}$ are Borel summable. Then my question is:

**Question:**
Is there a function $f$ satisfying $B(f+f^{-1})=B(f)+B(f^{-1})$, where $B$ is the Borel transform?

Suppose I have a list of polynomials and I need to sort it in ascending order (not just by the degree of the polynomial, but by comparing all the terms of the polynomials).

How can I achieve this?

say I have

$$3x^2 + x + 4 $$
$$3x^2 + 2x + 4 $$
$$1x^2 + x + 4 $$
$$2x^2 + 2x + 4 $$

Now I want the final list to be

$$1x^2 + x + 4 $$
$$2x^2 + 2x + 4 $$
$$3x^2 + x + 4 $$
$$3x^2 + 2x + 4 $$
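One simple way to get exactly this ordering (a sketch, assuming all polynomials have the same degree): represent each polynomial by its coefficient tuple in descending degree, and sort; tuples compare lexicographically, which means "compare leading coefficients first, then the next terms, and so on":

```python
# Each tuple holds the coefficients (x^2, x, constant) of one polynomial.
polys = [
    (3, 1, 4),  # 3x^2 +  x + 4
    (3, 2, 4),  # 3x^2 + 2x + 4
    (1, 1, 4),  #  x^2 +  x + 4
    (2, 2, 4),  # 2x^2 + 2x + 4
]

polys.sort()  # lexicographic tuple comparison: leading term first, then the rest
print(polys)  # [(1, 1, 4), (2, 2, 4), (3, 1, 4), (3, 2, 4)]
```

For polynomials of different degrees you would first compare by degree (or left-pad the shorter coefficient tuples with zeros) before sorting.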

Given an elliptic curve group with a generator $G$, where $G$ has prime order $p$, and a point $P=aG$ for some unknown $a$: is it possible to efficiently calculate $Q=a^{-1}G$ without a discrete log operation?

With a discrete log, the problem is simple: first calculate $a$, then $a^{-1} = a^{p-2} \bmod p$.

But I can't reduce a diffie-hellman problem to this to break it. Nor do I have the background to prove it directly (I have a background in NP-complete problems).

I see that the possibility of this operation would break a tiny subset of shared secrets, but this should be negligible. So unless I'm wrong, the existence of such an algorithm isn't inconsistent with the original proof.
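For concreteness, here is a toy sketch of the discrete-log route, written multiplicatively in a small group of prime order standing in for $\langle G\rangle$ (the moduli $q=467$, $p=233$ and the secret $a=57$ are arbitrary illustrative choices; the brute-force discrete log is the exponentially expensive step that the question asks to avoid):

```python
q = 467                        # 467 is prime, and 466 = 2 * 233
p = 233                        # prime order of the subgroup playing the role of <G>
g = pow(2, (q - 1) // p, q)    # g = 4; its order divides the prime p and g != 1, so it is exactly p
a = 57
P = pow(g, a, q)               # the public point, "aG" written multiplicatively

# step 1: discrete log by brute force (no known shortcut in general)
dlog = next(x for x in range(1, p) if pow(g, x, q) == P)

# step 2: invert the exponent modulo the group order, via Fermat's little theorem
a_inv = pow(dlog, p - 2, p)

Q = pow(g, a_inv, q)           # "a^{-1} G"
print(pow(Q, a, q) == g)       # "aQ = G", confirming Q = a^{-1} G: True
```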

This is a tetration question about finding an indefinite integral. I am not sure where to start, so any help would be appreciated.

$$ I= \int \ln(x)^{\ln(x)^{\ln(x)^{\cdot^{\cdot^{\cdot^{\ln(x)}}}}}} dx $$

MSE was not able to answer so far, so I thought this might be more appropriate here. (https://math.stackexchange.com/questions/2585791/indefinite-integral-of-lnx-lnx-lnx-lnx-for-an)
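One can at least evaluate the integrand numerically: the classical convergence condition for an infinite tower $b^{b^{\cdots}}$ is $e^{-e}\le b\le e^{1/e}$, which for $b=\ln x$ means roughly $x\in[1.07,\,4.24]$. Below is a sketch (the interval $[1.5,3]$ and the midpoint rule are my own choices) that computes the tower by fixed-point iteration and integrates it numerically; no closed form is claimed:

```python
import math

def tower(x, iters=200):
    """Evaluate ln(x)^ln(x)^... by iterating t -> ln(x)^t from t = 1."""
    t, b = 1.0, math.log(x)
    for _ in range(iters):
        t = b ** t
    return t

# midpoint rule on [1.5, 3.0], safely inside the convergence range
N = 1000
lo, hi = 1.5, 3.0
h = (hi - lo) / N
I = h * sum(tower(lo + (i + 0.5) * h) for i in range(N))
print(I)
```

The fixed-point iteration converges geometrically here because the derivative of $t\mapsto b^t$ at the fixed point has modulus below $1$ for these bases.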

This is a generalization of this question.

Let $P_1, \ldots, P_m$, $Q_1, \ldots, Q_k \in \mathbb{C}[x_0,\ldots,x_n]$ be linear homogeneous polynomials, and let $f_1, \ldots, f_s$ be irreducible homogeneous polynomials of degree $2$.

Assume that for every $i$ and for every $j$ the ideal $\langle P_i, Q_j \rangle$ contains some $f_l$.

Assume also that the rank of $\{f_1, \ldots, f_s \}$ (in the vector space of all homogeneous quadratic polynomials in $\mathbb{C}[x_0,\ldots,x_n]$) is equal to some constant $c$.

**Question:** Is it true that the rank of $\{P_1, \ldots, P_m \}$ or the rank of $\{Q_1, \ldots, Q_k \}$ is less than some constant (i.e., some function of $c$)?

I can affirmatively answer this question if $s$ (the number of quadratic polynomials) is bounded by a constant:

Consider those polynomials in $\{f_1, \ldots, f_s\}$ that belong to $\langle P_1, Q_j \rangle$ for some $j$. W.l.o.g. we can assume that this set is $\{f_1, f_2, \ldots, f_{s'} \}$ for some $s' \le s$.

Consider $M_i:= f_i \cap P_1$ (meaning the intersection of the zero sets of $f_i$ and $P_1$) for some $i \le s'$.

This set is the zero locus of a quadratic form in the hyperplane $P_1$, of codimension $1$ there (it cannot be all of $P_1$, since $f_i$ is irreducible). For some $j$ the intersection $Q_j \cap f_i$ must contain a subspace of codimension $2$. Hence $M_i$ is the union of one or two subspaces of codimension $2$. So there exist at most $2s'$ subspaces of codimension $2$ such that every $Q_j$ must contain at least one of them. Now it is not difficult to see that the rank of $\{Q_1, \ldots, Q_k\}$ is bounded by $4s' \le 4 s$. A similar argument works for the rank of $\{P_1, \ldots, P_m\}$.

In my research on linear algebra and optimization, I have come across the following problem repeatedly:

Given constant matrices $C\in\mathbb{R}^{k \times k}$ and $X\in\mathbb{R}^{n \times n}$, $$\min_{A\in\mathbb{R}^{n\times k}, B\in\mathbb{R}^{k \times n}} \| X - A C B X \|_F$$

where $C$ may be singular and $k \leq n$ ($k,n$ are constant). We minimize the norm over $A, B$ of the fixed (possibly rectangular) dimensions above, with no additional constraints.

If $C$ were absent (replaced with the unit matrix), this could be solved analytically via low-rank matrix approximation ($AB$ can be viewed as the rank factorization of the approximating matrix), but can anyone tell me if an analytical solution is available in the presence of singular/rectangular $C$ matrices? Perhaps good approximations with appropriate bounds on the error? The relation to the problem without $C$?

I am unable to find an analytical solution and a numerical/iterative solution would not be very informative on the role of the $ C $ in the solution for the optimal $A, B$, which is my goal. I thank all helpers and appreciate all assistance.
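For reference, here is a quick numerical check (a sketch with numpy, for a random invertible $X$) of the $C=I$ case mentioned above: the optimum is attained by projecting onto the top-$k$ left singular vectors of $X$, and the minimal error equals the tail of $X$'s singular values, by Eckart–Young. No claim is made here about general singular $C$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.standard_normal((n, n))   # generically invertible

U, s, Vt = np.linalg.svd(X)
A = U[:, :k]        # n x k
B = U[:, :k].T      # k x n, so that A @ B projects onto the top-k left singular vectors

err = np.linalg.norm(X - A @ B @ X)       # Frobenius norm of the residual
tail = np.sqrt((s[k:] ** 2).sum())        # sqrt of the sum of squared tail singular values
print(np.isclose(err, tail))              # True
```

A natural numerical baseline for the full problem would then be to compare any candidate $(A,B)$ against this $C=I$ optimum, since $ACB X$ is always a rank-$\le k$ perturbation of $X$.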

Let $n \in \mathbb{N}$ with $n > 1$ and $n < 2^k$. How can one prove the following statement:

$n$ appears in Pascal's triangle at most $2k - 2$ times?

I have no idea how to solve this.
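A brute-force check of the claimed bound for small $n$ (not a proof, just a sketch; it uses that $\binom{m}{j}=n$ with $j\ge 1$ forces $m\le n$, and takes $k$ to be the least integer with $n<2^k$):

```python
from math import comb

def occurrences(n):
    """Number of times n appears as a binomial coefficient C(m, j)."""
    count = 0
    for m in range(n + 1):        # C(m, j) = n with j >= 1 forces m <= n
        for j in range(m + 1):
            if comb(m, j) == n:
                count += 1
    return count

# n.bit_length() is the least k with n < 2^k
for n in range(2, 100):
    assert occurrences(n) <= 2 * n.bit_length() - 2

print(occurrences(120))   # 120 = C(10,3) = C(10,7) = C(16,2) = C(16,14) = C(120,1) = C(120,119)
```

For instance $n=120$ appears 6 times, comfortably within the bound $2k-2=12$ for $k=7$.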

In "Riemannian Geometry, Peter Petersen, GTM 171, Third Edition", page 430, there is Anderson's lemma: for each $n\geq 2$ there is an $\varepsilon(n)>0$ such that any complete Ricci-flat manifold $(M,g)$ that satisfies $$\operatorname{Vol}(B(p,r))\geq(1-\varepsilon)\omega_n r^n$$ for some $p\in M$ is isometric to Euclidean space.

I have a problem with the proof in this book.

The book proves it by contradiction: for each $i$, it constructs a complete Ricci-flat manifold $(M_i,g_i)$ with $\operatorname{Vol}(B(p_i,r))\geq(1-\frac{1}{i})\omega_n r^n$, such that $(M_i,g_i)$ is not isometric to Euclidean space.

But the book says that for all $r>0$, the $C^{1,\alpha}$ harmonic norm of $(M_i,g_i)$ at scale $r$ is nonzero. After scaling the metric suitably, it assumes that the $C^{1,\alpha}$ harmonic norm of $(M_i,\bar{g_i})$ is less than $1$ and that the pointed norm has a positive lower bound. I think this is impossible, but I do not know how to correct it.

It is true that if $(M_i,g_i)$ is not isometric to Euclidean space, then we can find $r_i$ such that the $C^{1,\alpha}$ harmonic norm of $(M_i,g_i)$ at scale $r_i$ is nonzero. But we do not know that it is uniformly bounded.

**Remark** Crossposted in MSE.

Let $M$ be a (say smooth) manifold. From the short exact sequence of groups $0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}_2 \to 0$ (where the first map is multiplication by $2$) one obtains a long exact sequence in cohomology. In particular one obtains the connecting map $\beta\colon H^2(M,\mathbb{Z}_2) \to H^3(M,\mathbb{Z})$, called the Bockstein homomorphism. Define $W_3(M):=\beta(w_2(M))$, where $w_2(M)$ is the second Stiefel–Whitney class. The class $W_3(M)$ is called the *third integral* Stiefel–Whitney class.

On the other hand, there is another class in $H^3(M,\mathbb{Z})$ which at first sight has nothing to do with $W_3(M)$: the *Dixmier–Douady* class, defined in terms of bundles of simple $C^*$-algebras.

It turns out that these two classes coincide: this is proved in this paper by Plymen (see Theorem 2.8). However, the proof relies on another result, for which the author gives a precise reference:

Marry, P., *Variétés spinorielles. Géométrie riemannienne en dimension 4*, Séminaire Arthur Besse, CEDIC, Paris, 1981

however, I was unable to find it (and even if I could, unfortunately I don't speak French). So:

I would like to understand why $W_3(M)=\delta(M)$, and in particular to understand the last two lines of case (i) in the proof of Theorem 2.8 of the above paper.

EDIT: The relevant bundle for defining $\delta(M)$ is the (even part of the) complex Clifford bundle of the tangent bundle. Recall that the complex Clifford algebras are isomorphic either to $M_{2^n}(\mathbb{C})$ or to $M_{2^n}(\mathbb{C}) \oplus M_{2^n}(\mathbb{C})$; thus the even part of the Clifford algebra is always a simple algebra.

This problem appeared as Problem 6 of the 1988 IMO (https://artofproblemsolving.com/wiki/index.php?title=1988_IMO_Problems/Problem_6), and was recently popularised by Numberphile: https://youtu.be/Y30VF3cSIYQ.

Let $a$ and $b$ be positive integers such that $ab + 1$ divides $a^{2} + b^{2}$. Show that $\frac {a^{2} + b^{2}}{ab + 1}$ is the square of an integer.

The elementary proof is well known and based on infinite descent using Vieta jumping (https://en.wikipedia.org/wiki/Vieta_jumping).
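As a quick empirical sanity check of the statement (not a proof of anything, just a brute-force scan over small $a,b$):

```python
from math import isqrt

# Whenever ab + 1 divides a^2 + b^2, the quotient should be a perfect square.
for a in range(1, 200):
    for b in range(1, 200):
        num, den = a * a + b * b, a * b + 1
        if num % den == 0:
            quot = num // den
            assert isqrt(quot) ** 2 == quot
print("all quotients for a, b < 200 are perfect squares")
```

Typical hits are pairs like $(a,b)=(8,2)$, where $(64+4)/(16+1)=4=2^2$.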

My question is:

What are non-elementary ways of solving it? Which mathematical structure is useful in creating the context for this problem? Is there an insight of the type where, once the problem is put into the right algebraic context, the solution is obvious*?

*What I have in mind is: notice, for example, how most of the steps in Euler's proof of Fermat's theorem on sums of two squares (https://en.wikipedia.org/wiki/Proofs_of_Fermat%27s_theorem_on_sums_of_two_squares#Euler.27s_proof_by_infinite_descent) become trivial once reinterpreted in the ring $\mathbb Z [i]$, the infinite descent itself being replaced by Euclidean division. Is something similar applicable here?

In a quadratic program (QP), do linear equality constraints always reduce the norm of the minimizer? Specifically, let $P \succ 0$, $A \in \mathsf{M}_{m\times n}$ and $q\in\mathbb{R}^n$. Define

$$x^* := \arg\min_x\,\tfrac{1}{2} x^\mathsf{T} P x - q^\mathsf{T}x$$

and

\begin{align} x_c^* &:= \arg\min_x \, \tfrac{1}{2} x^\mathsf{T} P x - q^\mathsf{T}x\\ &\quad\,\,\,\operatorname{subject to} \,\,Ax=0. \end{align}

Intuitively $\|x_c^*\| \leq \|x^*\|$, if not in the standard $\ell^2$ norm then in the norm $\|x\|_P = \langle Px,x \rangle^{1/2}$ induced by $P$ (or maybe $P^{-1}$), because I'd think that the solution $x_c^*$ is the metric projection, in the $\|\cdot\|_P$ metric, of $x^*$ onto $\ker A$, a closed convex set, and such a projection is a contraction.

Nevertheless, I'm having trouble showing this. Boyd and Vandenberghe [p. 546] tell us $x_c^* = (I - P^{-1}A^\mathsf{T}(AP^{-1}A^\mathsf{T})^{-1}A)P^{-1}q$ (this is what the KKT conditions give) while $x^* = P^{-1}q$. Hence it suffices to show the operator $I - P^{-1}A^\mathsf{T}(AP^{-1}A^\mathsf{T})^{-1}A$ is a contraction under some metric.

Unfortunately, I just sampled a random $A$ and a positive definite $P$, and the above operator is not a contraction in the $\ell^2$-norm in general.

Questions:

- is $\|x_c^*\|_2 \leq \|x^*\|_2$ in general?
- if not, is this true under a different norm such as $\|\cdot\|_P$?

If possible, a bound not involving $A$ would be helpful.
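A quick numerical sketch of the projection intuition (random instance, numpy): the KKT solution is feasible and its $P$-norm never exceeds that of the unconstrained minimizer, consistent with $x_c^*$ being the $P$-orthogonal projection of $x^*$ onto $\ker A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
R = rng.standard_normal((n, n))
P = R @ R.T + n * np.eye(n)          # P > 0
A = rng.standard_normal((m, n))      # full row rank (generically)
q = rng.standard_normal(n)

Pinv = np.linalg.inv(P)
x_star = Pinv @ q                                        # unconstrained minimizer
K = Pinv @ A.T @ np.linalg.inv(A @ Pinv @ A.T) @ A       # oblique projector onto range(Pinv A^T)
x_c = (np.eye(n) - K) @ x_star                           # KKT solution of the constrained QP

P_norm = lambda x: np.sqrt(x @ P @ x)
print(np.allclose(A @ x_c, 0))                           # feasibility: True
print(P_norm(x_c) <= P_norm(x_star) + 1e-12)             # P-norm contraction: True
```

The operator $I-K$ here is idempotent and self-adjoint in the $P$-inner product, which is exactly why the $\|\cdot\|_P$ inequality holds on every sample, while the plain $\ell^2$ inequality can fail.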

*There's already a question about the same topic but I think its aim is different.*

Classical (non-quantum) gauge theory is a completely rigorous mathematical theory. It can be phrased in completely differential-geometric terms, where the main players are bundles with connections on a manifold.

I think I have a basic understanding of what gauge theory is about and what various words mean in this context (Yang–Mills, potential, energy, etc.). However, I have still not managed to figure out what "instanton" means in this context.

**What is an Instanton?**

Is it something special to Yang–Mills theory? Is it something special to quantum gauge theory? Are there any mathematical interpretations/applications of instantons?