Recent MathOverflow Questions

Are there any non-asymptotic bounds for the minimum empirical risk vs theoretical risk?

Math Overflow Recent Questions - Wed, 04/18/2018 - 10:13

I'm trying to see if there are any bounds on the difference between $f_{ERM}$ and $f^{*}$. For now, define $\mathcal{F}$ to be a function class.

Let $P$ be a probability measure and $\hat{P_n}$ be the empirical measure on a set $S\subset \mathbb{R}^2$.

Let $$f^{*} = \arg \min_{f\in \mathcal{F}} R_P(f)$$ and

$$\hat{f^{*}} = \arg \min_{f\in \mathcal{F}} R_{\hat{P_n}}(f)$$ where $R_P$ denotes the classification risk with respect to the measure $P$ (and similarly for $\hat{P_n}$). So $R_P(f)=E_{(x,y)\sim P}[L(f(x),y)]$ and $R_{\hat{P_n}}(f)=\frac{1}{n}\sum_{i=1}^n L(f(x_i),y_i)$, where the $(x_i,y_i)$ are i.i.d. samples drawn from the distribution $P$.

Is there anything we know about $| R_{\hat{P_n}} (\hat{f^{*}}) - R_{P}(f^{*}) |$?

I thought this was related to ERM stability bounds, except those bounds are for $| R_{P} (\hat{f^{*}}) - R_{P}(f^{*}) |$.
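For intuition, the gap can be observed in a toy model (entirely my own construction, not from any reference): a finite class of threshold classifiers $f_t(x)=\mathbf{1}[x>t]$ under label noise, where the true risk has the closed form $0.1+0.8\,|t-0.3|$ and is minimized at $t=0.3$.

```python
import random

def true_risk(t, p_flip=0.1, t_star=0.3):
    # closed-form risk of f_t(x) = 1[x > t] when x ~ U[0,1], the clean label
    # is 1[x > t_star], and each label is flipped independently with prob p_flip
    return p_flip + (1 - 2 * p_flip) * abs(t - t_star)

def min_empirical_risk(n, rng, p_flip=0.1, t_star=0.3):
    grid = [i / 100 for i in range(101)]   # finite function class of thresholds
    data = []
    for _ in range(n):
        x = rng.random()
        y = int((x > t_star) != (rng.random() < p_flip))
        data.append((x, y))
    def emp(t):
        return sum(int(x > t) != y for x, y in data) / n
    return min(emp(t) for t in grid)
```

Comparing `min_empirical_risk(n, rng)` against the minimum true risk $0.1$ exhibits exactly the scalar quantity the question asks to bound, for this toy class.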

Is there a name for sequences of integers reduced to their lowest prime divisors?

Math Overflow Recent Questions - Wed, 04/18/2018 - 09:46

When trying to obtain the value of Jacobsthal's function for some $n$, that is, to find the largest sequence of consecutive numbers that are all coprime to $n$, one approach (and the only direct approach that I know of) is to exhaust all of the possible sequences of consecutive numbers coprime to $n$ by trial. When doing this by hand, it is helpful to represent every integer (except $1, 0$ and $-1$) by its lowest prime divisor. For example, the sequence $2,3,4,5,6,7,8,9,10$ would be equivalent to $2,3,2,5,2,7,2,3,2$. This representation has its advantages, as it represents more than one sequence of consecutive integers. For example, $2,3,2,5,2,7,2,3,2$ is also the respective representation of $212,213,214,215,216,217,218,219,220$. This representation enables the exhaustive approach for determining the value of Jacobsthal's function for some $n$.
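The representation itself is straightforward to compute; a minimal sketch (function names are my own):

```python
def lowest_prime_divisor(n):
    # smallest prime factor of an integer n >= 2, by trial division
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def lpd_representation(seq):
    # map each integer in seq (|k| >= 2) to its lowest prime divisor
    return [lowest_prime_divisor(abs(k)) for k in seq]
```

Both example sequences from the paragraph above collapse to the same representation `[2, 3, 2, 5, 2, 7, 2, 3, 2]`.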

But the thing is, I can't find a specific name for this representation of integers (or for this type of sequence in a more context-free setting). Is there a referable/favoured name for this representation? Usually I have to make up my own dummy name to be able to say anything about these sequences, which I would like to avoid if possible.

For example, I would like to conjecture that in any such sequence over the first $n$ prime numbers that is longer than $2p_{n-1} -1$, there exists a reflection point at some prime numbers which, when multiplied together, are larger than ${p_n}^2$. This is just another way of conjecturing that a prime number exists in all intervals of length $2p_{n-1}$ bounded above by ${p_n}^2$. That's an open question, but I would like to read more about this sort of approach, and about related attempts to identify how these sequences are distributed with respect to primorial numbers and squared primes.

Modular tensor category associated to an even integral lattice and the lattice automorphism

Math Overflow Recent Questions - Wed, 04/18/2018 - 09:44

Let $(L,\langle -,-\rangle)$ be an even integral lattice, and let $(A,q)$ be the associated discriminant form: $$ A=L^*/L, \quad q(a)=e^{\pi i \langle a,a\rangle}. $$ This in turn determines a modular tensor category $\mathcal{C}(A,q)$. (This is also the category of modules of the lattice VOA $V_L$ constructed from $L$.)

We then naturally have a group homomorphism $O(L)\to Aut(\mathcal{C}(A,q))$.

So it seems natural to study the obstructions to extending $\mathcal{C}(A,q)$ by $O(L)$ or any of its subgroups in the sense of [ENO, arXiv:0909.3140].

Does anybody know of any references concerning this point? I would be happy with any partial results, or any starting point for me to explore the literature.

(Also, does the extension theory of ENO appear somewhere in the theory of lattice VOA?)

Lattices whose top is the join of their join-irreducibles, such that a random element is almost surely greater than any given join-irreducible

Math Overflow Recent Questions - Wed, 04/18/2018 - 08:01

Let $(L,\leq,0,1)$ be a lattice, and let's denote by $JI(L)$ the set of its join-irreducibles (i.e. elements that are not the least upper bound of two other elements). We suppose that $\sup JI(L)=1$, and that $\mathbb P$ is a probability measure on $L$ such that ideals and filters are measurable and singletons have probability $0$. Let's call such a lattice a "reach-irreducible lattice" (RIL). If the cardinality of $L$ is that of the continuum, we can say "continuum reach-irreducible lattice" (CRIL).

Does there exist, in any $L$ that is a continuum reach-irreducible lattice, some $g\in JI(L)$ such that $\mathbb P(\left\{x\in L:\, g\leq x\right\})<1$?


Related discussions on the MSE are and

The first link is an imperfect attempt to fit the motivation of this question, which is explained in the second link. If the answer to this question is yes, then a weak version of the Frankl Conjecture ("WFC"), which is also an open problem, is true (see the MSE links).

Note that in both links, the lattices that I built assuming that WFC is wrong are not exactly CRILs, but up to small details one can easily obtain CRILs from them.

Note also that the cardinality condition is only there to fit with WFC; it would be nice indeed if the answer to the question were yes for any RIL. And if it is not, that might also give some useful information, so I would also be happy to read answers and comments about general RILs.

Neural Network derivative of derivative

Math Overflow Recent Questions - Wed, 04/18/2018 - 04:35

I have a normal feed-forward NN: $$N\,=\,\sigma(W_2\,\sigma(W_1x+b_1)+b_2)$$ A penalty term is computed using the derivative of the network w.r.t. the inputs: $L = f(\frac{\partial N}{ \partial x})$, which is, if I am right, $$L = f(\sigma'(out_2)*W_2*\sigma'(out_1)*W_1)$$ if we call $out_1$ and $out_2$ the outputs of the NN layers before the sigmoid. Now I need to compute the derivative of $L$ w.r.t. the weights $W_2$ and $W_1$, to let my network learn. Since $$\partial L/ \partial W = \frac{\partial L }{ \partial (\partial N/ \partial x)}\Big(\frac {\partial N }{ \partial x}(x, W_1, W_2)\Big)*\frac {\partial(\partial N/ \partial x)}{ \partial W}(W),$$ where calculating the first factor is easy, I need the second factor.

I read some lecture slides and found $\partial (AXB) = A\,\partial (X)\,B = BA\,\partial X$. I agree with the first equality, since the sigmoid derivatives and the other weight matrix are constants w.r.t. the weight I am differentiating with respect to, but the second equality doesn't hold for me (and in fact I couldn't prove it), just by considering matrix dimensions. Here $x$ is $241\times 1$ (a column vector), $W_1$ is $20\times 241$, and $W_2$ is $20\times 20$.
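Before tackling the second derivative, the first-order formula $\partial N/\partial x = \mathrm{diag}(\sigma'(out_2))\,W_2\,\mathrm{diag}(\sigma'(out_1))\,W_1$ can be sanity-checked against finite differences on a tiny network (a sketch with made-up small dimensions instead of $241/20$):

```python
import math

def sigma(v):
    return [1 / (1 + math.exp(-t)) for t in v]

def dsigma(v):
    return [s * (1 - s) for s in sigma(v)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def forward(x, W1, b1, W2, b2):
    o1 = [a + b for a, b in zip(matvec(W1, x), b1)]           # pre-activation, layer 1
    o2 = [a + b for a, b in zip(matvec(W2, sigma(o1)), b2)]   # pre-activation, layer 2
    return sigma(o2), o1, o2

def jacobian(x, W1, b1, W2, b2):
    # analytic dN/dx = diag(sigma'(o2)) W2 diag(sigma'(o1)) W1
    _, o1, o2 = forward(x, W1, b1, W2, b2)
    d1, d2 = dsigma(o1), dsigma(o2)
    return [[d2[i] * sum(W2[i][k] * d1[k] * W1[k][j] for k in range(len(d1)))
             for j in range(len(x))] for i in range(len(d2))]
```

The second-derivative tensor (derivative of this Jacobian w.r.t. $W_1$, $W_2$) is deliberately omitted here; the check only confirms the first step of the derivation.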

Please help! Thanks!

Lagrange Multipliers for two constraints, degenerate case

Math Overflow Recent Questions - Wed, 04/18/2018 - 03:37

To optimize $f(x,y,z)$ subject to $g(x,y,z)=h(x,y,z)=0$, we use the Lagrange Multiplier method and solve \begin{equation*} \nabla f=\lambda \nabla g+\mu\nabla h,\quad g=0,\quad h=0. \end{equation*} Geometrically, $\nabla f$ must lie on the normal plane spanned by $\nabla g$ and $\nabla h$. However, it can happen that $\nabla g$ is parallel to $\nabla h$ at certain points, and hence they cannot span the normal plane. In this case, does $\nabla f$ have to be parallel to $\nabla g$ to be a critical point? If yes, how to explain it? If no, can it happen that some critical points are missing? Thanks.
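One concrete degenerate configuration (my own illustration, not from the question): with $g=z$ and $h=z-x^2$, the constraint set is the $y$-axis, along which $\nabla g$ and $\nabla h$ are parallel, and for $f=x$ the gradient $\nabla f=(1,0,0)$ lies outside their span even though $f$ is constant (hence extremal) on the whole constraint set. A quick numeric check:

```python
def grad(F, p, eps=1e-6):
    # central-difference gradient of F : R^3 -> R at the point p
    g = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += eps
        q2[i] -= eps
        g.append((F(q1) - F(q2)) / (2 * eps))
    return g

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

g = lambda p: p[2]               # constraint g = z
h = lambda p: p[2] - p[0] ** 2   # constraint h = z - x^2
f = lambda p: p[0]               # objective  f = x

p0 = [0.0, 1.0, 0.0]             # a point on {g = h = 0}, which is the y-axis
gg, gh, gf = grad(g, p0), grad(h, p0), grad(f, p0)
degenerate = max(abs(c) for c in cross(gg, gh)) < 1e-6   # grad g parallel to grad h
```

Here every combination $\lambda\nabla g+\mu\nabla h$ has zero first coordinate, so the multiplier equation cannot produce $\nabla f=(1,0,0)$ at any point of the constraint set.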

Why do we specify the degree of elements in algebra? [migrated]

Math Overflow Recent Questions - Wed, 04/18/2018 - 03:03

In algebra (especially algebraic topology where I have seen it) when we have some sort of ring like $R[x]$ we are quick to specify that $x$ has degree $n$. I understand the importance of this but isn't this just the same as the ring $R[x^n]$?

I would find the latter notation more compact and clear, given that in early mathematical education $R[x]$ is always assumed to have $x$ in degree $1$. In fact, it then seems to me that only a single definition of $R[x]$ needs to be given.

I am interested in the historical or good reasons for the definition above. This notation is really common in cohomology, for example.

On reasonable asymptotic estimates for some integral involving the logarithm of the Riemann zeta function

Math Overflow Recent Questions - Wed, 04/18/2018 - 00:33


Let $$I(T) = \int_{-T}^{T} \frac{\log|\zeta(\frac{1}{2} + it)|}{\frac{1}{4}+t^2}\,\mathrm{d}t$$

where $\zeta$ denotes the Riemann zeta function.

What are the reasonable asymptotic estimates for $I(T)$ ?

Here is what I think:

It is well-known that

$$\int_{-T}^{T} \log \Big|\zeta\Big(\frac{1}{2} + it\Big)\Big|\mathrm{d}t \ll T\log T$$ Therefore, by a dyadic decomposition, it follows from the above that

$$I(T)=\int_{-T}^{T} \frac{\log |\zeta(\frac{1}{2} + it)|}{\frac{1}{4}+ t^2}\mathrm{d}t \ll\frac{\log T}{T}$$

REMARK: A similar application of dyadic decomposition can be found on

Binary cube root of a given positive, integer square matrix

Math Overflow Recent Questions - Tue, 04/17/2018 - 22:32

How can I find all binary matrices $A$ that are solutions of the matrix equation

$$A^3 = B$$

for some given positive integer square matrix $B$? Is there a way to characterize all the solutions?
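For small $n$, the solution set can at least be explored by brute force over all $2^{n^2}$ binary matrices (a sketch assuming, as the title suggests, that the equation is $A^3=B$; function names are my own):

```python
import itertools

def matmul(A, B):
    # product of two square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def binary_cube_roots(B):
    # exhaustively search all n x n 0/1 matrices A with A^3 == B
    n = len(B)
    sols = []
    for bits in itertools.product((0, 1), repeat=n * n):
        A = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        if matmul(matmul(A, A), A) == B:
            sols.append(A)
    return sols
```

This is only feasible for tiny $n$; it is meant for experimenting with the structure of the solution set, not as an answer to the characterization question.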

Computing remainders modulo $\prod_{i\in S} (x-x_i)$ fast using FFT

Math Overflow Recent Questions - Tue, 04/17/2018 - 21:00

Note: Originally asked on Math StackExchange here, without an answer. Figured I should try here, since this is a more research-level question.

I am trying to implement a fast polynomial multipoint evaluation algorithm via FFT (e.g., the one described in Chapter 10.1 of "Modern Computer Algebra," 3rd edition). I should mention that the polynomial being evaluated is the formal derivative $N'(x)$ of: $$N(x) = (x-x_1)(x-x_2)\cdots(x-x_k)$$

Specifically, we're evaluating $N'(x)$ at points $x_1, x_2, \dots, x_k$, where $k$ is a power of two.

I'm looking for any mathematical tricks that will lead to practical improvements in the multipoint evaluation algorithm. AFAICT, the main bottleneck will be computing remainders after division by $(x - x_i)\cdots(x-x_j)$.

As a refresher, at any "node" in the multipoint evaluation tree, we have a subset of points $x_i, \dots, x_j, x_{j+1}, \dots, x_\ell$, a previous remainder $R(\cdot)$, and we want to compute: \begin{align*} R(x) &\bmod (x - x_i)\cdots(x-x_j)\\ R(x) &\bmod (x - x_{j+1})\cdots(x-x_\ell) \end{align*} Furthermore, the dividend $R$ has degree $\le 2n-1$ while the divisor has degree $n$. Initially $n = k$, so it's a power of two, and then $n$ gets halved as we go down the multipoint evaluation tree.

I am aware there is a $O(n\log{n})$ algorithm based on FFT for computing remainders. For example, the algorithm described in Chapter 9.1 of "Modern Computer Algebra," 3rd edition first computes the quotient by computing a modular inverse and then computes the remainder.

Is there any way to speed up this division algorithm given our particular setting:

  • We do not need the quotient, only the remainder.
  • We only need to divide by $(x - x_i)\cdots(x-x_j)$, for some $i,j$ where $1 \le i,j \le k$.
  • $x_i$ can be an $\ell$th root of unity ($\ell > k$) but not necessarily the $i$th one (or $i-c$th one)
  • $k$ is a power of two or can be adjusted as needed
  • We're doing a multipoint evaluation on $N'(x)$

The only mathematical trick I could find was in Todd Mateer's "Fast Fourier Transform Algorithms with Applications" PhD thesis (pdf), in Sec 7.5, pg 194.
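As a baseline for experiments, the subproduct-tree / remainder-tree structure of the algorithm can be sketched in a few lines, with schoolbook multiplication and division standing in for the FFT-based subroutines (so each step is quadratic rather than $O(n\log n)$; polynomials are coefficient lists, lowest degree first, and the function names are my own):

```python
def poly_mul(a, b):
    # schoolbook product; the FFT-based algorithm replaces this step
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_mod(a, m):
    # remainder of a modulo a monic divisor m (schoolbook long division;
    # the Newton-iteration/FFT division replaces this step)
    a = a[:]
    d = len(m) - 1
    while len(a) - 1 >= d:
        c = a[-1]
        shift = len(a) - 1 - d
        for i in range(len(m)):
            a[shift + i] -= c * m[i]
        a.pop()
    return a

def multipoint_eval(f, pts):
    # remainder tree: reduce f modulo the left/right subproducts, recurse
    if len(pts) == 1:
        r = poly_mod(f, [-pts[0], 1])
        return [r[0] if r else 0]
    mid = len(pts) // 2
    left, right = pts[:mid], pts[mid:]
    ml = [1]
    for x0 in left:
        ml = poly_mul(ml, [-x0, 1])
    mr = [1]
    for x0 in right:
        mr = poly_mul(mr, [-x0, 1])
    return (multipoint_eval(poly_mod(f, ml), left)
            + multipoint_eval(poly_mod(f, mr), right))
```

In the real algorithm the subproducts $(x-x_i)\cdots(x-x_j)$ are precomputed once for the whole tree rather than rebuilt at each node; this sketch only shows where the divisions the question asks about occur.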

Multivariable Gaussian polynomials

Math Overflow Recent Questions - Tue, 04/17/2018 - 18:56

Let the (homogeneous) Gaussian numbers be defined as $[n]_{x,y} = \frac{x^n-y^n}{x-y}$, and define Gaussian factorials as $[n]_{x,y}! = [n]_{x,y}[n-1]_{x,y}\dots [1]_{x,y}$, and the Gaussian binomials as $ {n \brack m}_{x,y} = \frac{[n]_{x,y}!}{[m]_{x,y}![n-m]_{x,y}!}$. (Letting $x=q$ and $y=1$ or $x=q$ and $y=q^{-1}$ recovers the more standard definitions.)

The coefficient of $x^ay^{nm-a}$ in $ {n+m \brack m}_{x,y}$ is equal to the number of multiset partitions of $\{1^a, 2^{nm-a}\}$ into $m$ parts of size $n$. (Ordering the parts by how many $1$'s they have and arranging things into a rectangle, this counts partitions of size $a$ fitting in an $n \times m$ rectangle.)

This interpretation has a natural analog with more variables. We can define the homogeneous multivariate Gaussian polynomials to be the polynomials $p_{n,m}(x_1,x_2,\dots x_k)$ where the coefficient of $x_1^{a_1}x_2^{a_2}\dots x_k^{a_k}$ is equal to the number of multiset partitions of $\{1^{a_1},2^{a_2}, \dots, k^{a_k} \}$ into $m$ parts of size $n$.

For example $p_{2,3}(x,y,z) = $


$$x^6 + x^5y +x^5z+$$

$$2x^4y^2 + 2x^4yz + 2x^4z^2+$$

$$2x^3y^3 + 3x^3y^2z + 3x^3yz^2 +2x^3z^3+$$

$$2x^2y^4 + 3x^2y^3z + 5x^2y^2z^2 + 3x^2yz^3 +2x^2z^4+$$

$$xy^5 + \hspace{.2cm} 2xy^4z + \hspace{.2cm} 3xy^3z^2 +\hspace{.2cm} 3xy^2z^3 +\hspace{.2cm} 2xyz^4 + \hspace{.2cm} xz^5$$

$$y^6 +\hspace{.35cm} y^5z + \hspace{.35cm} 2y^4z^2 +\hspace{.35cm} 2y^3z^3+\hspace{.35cm}2y^2z^4+\hspace{.35cm} yz^5+ \hspace{.35cm}z^6$$

Where, for example, the coefficient $5$ of $x^2y^2z^2$ counts the five multiset partitions $\{\{1,1\},\{2,2\},\{3,3\}\}$, $\{\{1,1\},\{2,3\},\{2,3\}\}$, $\{\{1,2\},\{1,2\},\{3,3\}\}$, $\{\{1,2\},\{1,3\},\{2,3\}\}$, and $\{\{1,3\},\{1,3\},\{2,2\}\}$ of $\{1^2,2^2,3^2\}$ into $3$ parts of size $2$.
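These coefficients can be checked directly by counting multiset partitions. A brute-force sketch (my own code): repeatedly split off the part containing the smallest remaining element, requiring parts to appear in non-decreasing lexicographic order so that each partition is counted exactly once.

```python
import itertools
from functools import lru_cache

@lru_cache(maxsize=None)
def count_partitions(ms, n, lower=()):
    # number of multiset partitions of the sorted tuple ms into parts of
    # size n, where each emitted part must be lex >= the previous one
    if not ms:
        return 1
    rest = ms[1:]
    total = 0
    seen = set()
    for idx in itertools.combinations(range(len(rest)), n - 1):
        part = (ms[0],) + tuple(rest[i] for i in idx)
        if part in seen or part < lower:
            continue
        seen.add(part)
        chosen = set(idx)
        remaining = tuple(x for i, x in enumerate(rest) if i not in chosen)
        total += count_partitions(remaining, n, part)
    return total
```

For instance `count_partitions((1, 1, 2, 2, 3, 3), 2)` recovers the coefficient of $x^2y^2z^2$ in $p_{2,3}$.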

These showed up for me doing some calculations in a representation theoretic context, and I can say some things about them from that perspective. For example, I can see they satisfy a "higher rank" version of the unimodality property for the coefficients for Gaussian polynomials.

Have these been studied before? Are there combinatorial proofs of these unimodality type results in the literature? Do these polynomials have interpretations say in terms of subspaces of finite dimensional vector spaces over finite fields? Do they have simpler formulas like the original one I gave in the two variable case?

EDIT: After thinking more and fixing some sage code that had been misleading me, these polynomials $p_{n,m}(x_1, \dots, x_k)$ are just the characters of $Sym^m(Sym^n(\mathbb{C}^k))$. Oddly enough that's not the original representation theoretic context I was looking at, which was more convoluted to state, but it's probably related by something like Howe duality. Anyway I'm still interested in what's known about them combinatorially.

Covering theory with the compact-open topology

Math Overflow Recent Questions - Tue, 04/17/2018 - 17:18

In the following, all spaces $C^0(X,Y)$ are spaces of base-point-preserving maps with the compact-open topology. Furthermore, all spaces I consider in the following are locally pathwise connected.

Under which assumptions can we prove the following statement:

If $Y,X,\widetilde{X}$ are pointed topological spaces, $p: \widetilde{X} \rightarrow X$ is a covering, $\widetilde{X}$ is connected and $Y$ is simply connected, then the map

$ p^* : C^0(Y, \widetilde{X}) \rightarrow C^0(Y,X) $

is a homeomorphism?

I can prove that the map is bijective by standard covering theory and that it is continuous by the definition of the compact open topology (this only uses $p$ continuous).

I am mainly interested in finding sufficient assumptions on $X, \widetilde{X},Y$ to make this statement true, in particular, if it holds for locally compact Hausdorff spaces, but more general statements seem interesting as well.

Edit: Added the assumption that all spaces are locally path-connected, to ensure that the universal lifting theorem applies.

Edit2: I think I can prove this under the following additional assumptions:

$Y$ is compact, $X$ and $ \widetilde{X}$ are metric spaces, such that:

$\varepsilon := \inf _ { \{x,y \in \widetilde{X} : x \ne y \land p(x)=p(y) \} }(d_{ \widetilde{X} }(x,y)) > 0 $

and $p$ induces the metric on $X$, meaning

$d_X(x,y)= \inf_ { \{ \tilde x , \tilde y \in \widetilde{X} : p(\tilde x) =x \land p( \tilde y ) =y \}} (d_{\widetilde{X}} ( \tilde x, \tilde y)) $

Sketch of Proof: The compact open topology is metrizable with the metric

$d_{C^0(Y,X)} ( f,g) = \sup_{y \in Y } ( d_X (f(y), g(y)))$

and the same for $\widetilde{X}$. Now for $\delta < \varepsilon /2 $, we should get:

$\forall f \in C^0(Y,\widetilde{X}): p^*( B _ \delta (f)) = B_ \delta (p \circ f)$

This implies that $p^*$ is open. Hence $p^*$ is a homeomorphism.

I am not 100% sure whether this proof holds, and I do not know how to remove the infimum assumption.

If the deck transformation group acts transitively and preserves the metric, the infimum condition should hold, I guess.

And I am new to this forum: should I have posted everything under Edit2 as an answer rather than as an edit?

Weight, Index, and Congruence Subgroup of Classical Jacobi Theta Functions

Math Overflow Recent Questions - Tue, 04/17/2018 - 15:15

On the very first page in the Introduction of Eichler and Zagier's text on Jacobi forms, they mention that the theta function

$$\Theta_{x_{0}}(\tau, z) = \sum_{x \in \mathbb{Z}^{N}} q^{Q(x)} y^{B(x, x_{0})}$$

is a holomorphic Jacobi form of weight $N/2$ and index $Q(x_{0})$ for some congruence subgroup of $SL_{2}(\mathbb{Z})$. In this formula, $Q: \mathbb{Z}^{N} \to \mathbb{Z}$ is a positive-definite quadratic form, $B(x, x_{0}) = \frac{1}{2}(Q(x + x_{0}) - Q(x) - Q(x_{0}))$ is the associated bilinear form, and $x_{0}$ is some lattice vector. Finally, $q = e^{2 \pi i \tau}$ and $y = e^{2 \pi i z}$, and these are conventions I'd like to stay with for my purposes.

I would like to apply this to the example of the four classical Jacobi theta functions, but I haven't found any conclusive references, and those references I have found, all seem to use different conventions. I believe the Jacobi theta functions are given for my definition above of $q$ and $y$ by

$$\vartheta_{1}(\tau, z) = - \sum_{n \in \mathbb{Z}} q^{\frac{1}{2}(n + \frac{1}{2})^{2}}(-y)^{n+\frac{1}{2}}$$

$$\vartheta_{2}(\tau, z) = \sum_{n \in \mathbb{Z}} q^{\frac{1}{2}(n + \frac{1}{2})^{2}} y^{n + \frac{1}{2}}$$

$$\vartheta_{3}(\tau, z) = \sum_{n \in \mathbb{Z}} q^{n^{2}/2} y^{n}$$

$$\vartheta_{4}(\tau, z) = \sum_{n \in \mathbb{Z}} q^{n^{2}/2} (-y)^{n}$$

My question simply is: what are the weight and index of each of these theta functions, and with respect to which congruence subgroup? Is there a nice reference?

I'm struggling to even reconcile these functions with the general theta series above from Eichler and Zagier. For example, $n^{2}/2$ is not an integer-valued quadratic form, and even if it were, $n$ has the wrong coefficient to be the corresponding bilinear form. This leads me to worry that I'm using the wrong conventions for $q$ and $y$. Moreover, don't the factors of $(-1)^{n+1/2}$ and $(-1)^{n}$ in $\vartheta_{1}$ and $\vartheta_{4}$ respectively, prevent us from putting these in the general form of $\Theta_{x_{0}}(\tau, z)$ above?

expected length of a largest cycle in regular graph

Math Overflow Recent Questions - Tue, 04/17/2018 - 09:58

Consider a simple random regular graph $G_d(n)$ with $d=2$ (that is, I select a $2$-regular graph from the set of all $2$-regular graphs on $n$ vertices, uniformly at random).

It is clear that this graph consists of (non-intersecting) cycles only; and asymptotically a.s., it does not contain a cycle of length $n$.

My question is: Can we characterize the length of its largest cycle?
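Empirically, the distribution of the largest cycle is easy to probe via the configuration model: pair up the $2n$ half-edges uniformly at random and reject non-simple outcomes, which (conditioned on simplicity) yields a uniform $2$-regular graph. A sketch (function names are my own):

```python
import random
from collections import defaultdict

def sample_2_regular(n, rng):
    # configuration model: pair the 2n half-edges uniformly at random and
    # reject pairings with self-loops or multi-edges; conditioned on being
    # simple, the result is a uniform 2-regular graph on n vertices
    while True:
        stubs = [v for v in range(n) for _ in range(2)]
        rng.shuffle(stubs)
        edges = list(zip(stubs[::2], stubs[1::2]))
        if any(u == v for u, v in edges):
            continue
        if len({frozenset(e) for e in edges}) < len(edges):
            continue
        return edges

def largest_cycle(n, edges):
    # the connected components of a 2-regular graph are exactly its cycles,
    # so the largest cycle is the largest component
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        size, stack = 0, [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best
```

Averaging `largest_cycle` over many samples gives a numeric view of the quantity the question asks to characterize.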

The Sudoku game: Solver-Spoiler variation

Math Overflow Recent Questions - Tue, 04/17/2018 - 08:34

Consider the Sudoku Solver-Spoiler game, a natural variation of the Sudoku game recently appearing in the question Who wins two-player Sudoku? posted by user PyRulez. In that game, the players attempt to trap each other in a position that cannot be extended without explicitly violating the Sudoku condition.

In this game variation, we have two players, the Solver and the Spoiler, who place numbers on a Sudoku board, obeying the rule that they must never explicitly violate the Sudoku condition: that is, the numbers must never occur twice in any row, column or sub-board square.
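The legality rule can be made precise as a predicate on partial positions (a minimal sketch; the dict-of-placed-values board representation is my own):

```python
def is_legal(board, n, row, col, val):
    # board: dict (r, c) -> placed value on an n^2 x n^2 grid.
    # True iff placing val at (row, col) leaves the Sudoku condition
    # unviolated: val occurs in no row, column, or n x n sub-board twice.
    N = n * n
    for c in range(N):
        if board.get((row, c)) == val:
            return False
    for r in range(N):
        if board.get((r, col)) == val:
            return False
    br, bc = (row // n) * n, (col // n) * n
    for r in range(br, br + n):
        for c in range(bc, bc + n):
            if board.get((r, c)) == val:
                return False
    return True
```

A position is terminal for the Spoiler's purposes when some empty cell admits no legal value yet the board is not complete.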

The Solver aims to build a global Sudoku solution on the board, and the Spoiler aims to prevent this.

Question. Who wins the Sudoku Solver-Spoiler game? What is the winning strategy? Does it matter who goes first?

Of course, the Solver wins the trivial $1^2\times 1^2$ board. And in my recent blog post, Infinite Sudoku and the Sudoku game, I considered the Solver-Spoiler game on the infinite Sudoku boards $\kappa^2\times\kappa^2$ and proved that the Solver can always win in the infinite case, even when the Spoiler is allowed to go first and to make many moves at once on every turn.

On (nontrivial) finite boards, however, the advantage seems to lie with the Spoiler, since in practice it seems easy to spoil a solution on such a board. Can someone describe an explicit winning strategy for the Spoiler? It seems that one will want to almost-but-not-quite complete a row, for example, and then play the missing number in the corresponding column or subsquare to spoil the solution. But things get complicated if there are two missing squares in the almost-completed row, and I haven't managed to push the argument through.

If indeed the Spoiler has a winning strategy on the $n^2\times n^2$ board, for $n\geq 2$, how quickly can she win?

Update. It has come to my attention that people sometimes consider asymmetric Sudoku boards, of size $(nm)\times(mn)$, an $m\times n$ array of rectangular sub-boards of size $n\times m$. I wanted to mention that the argument on my blog does not work in the infinite asymmetric case $(\kappa\times\lambda)\times(\lambda\times\kappa)$, since as Christopher King pointed out, with only $\lambda$ many moves in suitable rectangles, one can rule out a particular symbol for a given row, and so it is no longer true that every position of size less than the number of labels can be extended to a full solution.

Question. Who wins the infinite asymmetric Sudoku Solver-Spoiler game?

It seems likely to me that if $\lambda<\kappa$ and $\kappa$ is infinite, then the Spoiler can win the $(\kappa\times\lambda)\times(\lambda\times\kappa)$ game.

Intersecting a set of positive semidefinite matrices with a hyperplane

Math Overflow Recent Questions - Mon, 04/16/2018 - 16:57

Consider the real space $H_n(\mathbb{C})$ of $n \times n$ Hermitian matrices with the inner product $\langle A, B\rangle = \text{trace}(AB)$. Define the set

$$\mathcal{S} = \text{hull} \{ x x^* : \|x\|_\infty \leq 1 \}$$

where hull is the convex hull, and let $\mathcal{H}$ be a hyperplane. Is it true that $$ \mathcal{S} \cap \mathcal{H} = \text{hull} \{ x x^* : \|x\|_\infty \leq 1, x x^* \in \mathcal{H} \}?$$ That is, are all extreme points of the set $\mathcal{S} \cap \mathcal{H}$ rank one?

It might be useful to consider the hyperplanes $\mathcal{H} = \{X : \text{trace}(X) = k\}$ for some $k$, and indeed the property is true when $k=n$ (see the comments on Dima's answer below), but I am not able to show this for general $k$.

Note that this is different from some similar questions which ask about the extreme points of the set of positive semidefinite matrices with the magnitudes of all entries upper bounded by $1$. Contrary to such cases, here we already know that $\mathcal{S}$ is a convex hull of rank one matrices.

I first tried, unsuccessfully, to construct a counterexample, but it appears very difficult in general to show that a given matrix is an extreme point of $\mathcal{S} \cap \mathcal{H}$. I do not have any intuition about whether this property is actually true or not; I would appreciate any ideas about proving or disproving the above.

Why is $S$ not homeomorphic to the unit sphere of $\ell^2$?

Math Overflow Recent Questions - Mon, 04/16/2018 - 13:57

Consider the $\ell^2$ complex Hilbert space.

Let $m\in \mathbb{N}^*$ be a fixed number, and set $$ S=\left\{ x=(x_n)_n\in \ell^2\ :\ \sum_{n=1}^m \frac{|x_n|^2}{n^2}=1\right\}.$$

I want to show that $S$ is not homeomorphic to $$ S(0,1)=\left\{ x=(x_n)_n\in \ell^2\ :\ \sum_{n=1}^\infty |x_n|^2=1\right\}.$$

An upper bound for a vector with given norm 1 and norm 2

Math Overflow Recent Questions - Mon, 04/16/2018 - 12:44

Suppose $X = (x_1, \ldots , x_n)$ is given and we know that the $x_i$ are nonnegative, $\sum_{i=1}^n x_i = n$ and $\sum_{i=1}^n x_i^2 = m$. Just from this information, is it possible to find a vector that majorizes $X$? The meaning of majorization can be found here:

Certainly, $(n , 0 , \ldots , 0)$ is an answer, but I want a nontrivial vector. In particular, I'm looking for nontrivial known results.
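For experimenting with candidate vectors, the majorization order itself is straightforward to check (a sketch; whether a candidate is compatible with the constraints $\sum x_i = n$, $\sum x_i^2 = m$ has to be verified separately):

```python
def majorizes(a, b):
    # True iff a majorizes b: equal lengths and totals, and every prefix
    # sum of the decreasing rearrangement of a dominates that of b
    a, b = sorted(a, reverse=True), sorted(b, reverse=True)
    if len(a) != len(b) or abs(sum(a) - sum(b)) > 1e-9:
        return False
    sa = sb = 0.0
    for x, y in zip(a, b):
        sa += x
        sb += y
        if sa < sb - 1e-9:
            return False
    return True
```

For example, `majorizes([3, 0, 0], [1, 1, 1])` holds, matching the trivial answer $(n,0,\ldots,0)$ above for $n=3$.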


A question about Second-Order ZF and the Axiom of Choice

Math Overflow Recent Questions - Sun, 04/15/2018 - 12:02

This question follows Noah Schweber's excellent answer to a corresponding question regarding second-order $ZFC$ and the continuum hypothesis:

Simply put, it seems that $ZFC_2$ "decides" $CH$ in a certain sense that can be made formally precise, although second-order logic is so limited as to make it impossible for us to determine which way it is decided. This seems, if my understanding is correct, to be related to $ZFC_2$ being categorical if large cardinals are excluded.

Does a similar situation exist for $ZF_2$ and $AC$?

This paper seems to show that it does, and that $ZF_2$ is likewise categorical if large cardinals are excluded. Hence we would have the situation analogous to $ZFC_2$ and $CH$: $AC$ is likewise "decided" in $ZF_2$, but we don't know which way it is decided.

Is this analogy correct?

Poincare duality for mixed motives

Math Overflow Recent Questions - Sun, 04/15/2018 - 12:01

Suppose $k$ is a field of characteristic zero (and we assume it is a number field if necessary). If $U$ is a smooth quasi-projective variety over $k$, then there is Poincare duality, \begin{equation} H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})^{\vee} \simeq H^{2d-n}(U_{\overline{k}},\mathbb{Q}_\ell)(d),~d=\text{dim}\,U \end{equation} where $H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})$ is the compactly supported etale cohomology group.

Question 1: is it expected that there is a mixed motive $h^n_c(U)$ whose $\ell$-adic realization is $H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})$? I have not found references about "compactly supported mixed motives", could anyone give references on this?

Question 2: is it expected that there is a Poincare duality of the form \begin{equation} h^n_c(U) \simeq h^{2d-n}(U)(d) \end{equation} Does anyone know any references?

