I tried to compute the Bredon cohomology of $\mathbb{S}^\sigma$, where $\sigma$ is the sign representation of $\mathbb{Z}/2$, following the first chapter and the first construction of cohomology in Bredon's "Equivariant cohomology theories". Could somebody please verify it, or at least the result?

Throughout $G$ means $\mathbb{Z}/2$.

So I assume the following: the $G$-CW structure is the obvious one, given by two points with trivial action as 0-cells and two arcs swapped by the action as 1-cells. I am using a simple coefficient system $\mathcal{L}$ on it, that is, my functor from the "cellular category" to abelian groups factors through some coefficient system $M$. Here $M$ consists of two groups, $M(*)$ (a trivial $G$-module) and $M(G)$ (a $G$-module), together with an equivariant map $\epsilon:M(*)\rightarrow M(G)$.

$C^0(\mathbb{S}^\sigma;\mathcal{L})$ consists of the functions $f:\{e^0_1,e^0_2\}\rightarrow M(*)$. Since the action of $G$ is trivial on 0-cells, the induced action on 0-cochains is also trivial, therefore $C^0(\mathbb{S}^\sigma;\mathcal{L})=C^0_G(\mathbb{S}^\sigma;\mathcal{L})=M(*)^2$. $C^1(\mathbb{S}^\sigma;\mathcal{L})$ consists of the functions $f:\{e^1_1,e^1_2\}\rightarrow M(G)$. The action on 1-cells is non-trivial (even free), so $C^1_G(\mathbb{S}^\sigma;\mathcal{L})$ consists of the equivariant $f$'s; such an $f$ is determined by its value on a single cell, so $C^1_G(\mathbb{S}^\sigma;\mathcal{L})=M(G)$.

The only non-trivial differential is $\delta :C^0\rightarrow C^1$, given by $(\delta f)(\tau)=\pm\epsilon(f(e^0_1))\mp\epsilon(f(e^0_2))$, where $\tau$ is either of the two 1-cells.

So $H^0_G(\mathbb{S}^\sigma;\mathcal{L})=M(*)$ and $H^1_G(\mathbb{S}^\sigma;\mathcal{L})=M(G)/M(G)^G$, but for the latter I have to assume that $\epsilon$ is an isomorphism onto $M(G)^G$.
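If I am reading the construction correctly, the whole computation is just the two-term complex (this summary is mine, so worth double-checking against Bredon):

$$0 \longrightarrow M(*)^{2} \xrightarrow{\ \delta\ } M(G) \longrightarrow 0, \qquad \delta(a,b)=\epsilon(a)-\epsilon(b),$$

so that $H^0_G=\ker\delta=\{(a,b)\ :\ \epsilon(a)=\epsilon(b)\}$ and $H^1_G=M(G)/\epsilon(M(*))$ (the image of $\delta$ is $\epsilon(M(*))$ since $\epsilon$ is a homomorphism). This agrees with the stated answer exactly when $\epsilon$ is injective with image $M(G)^G$.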

If this is not a MathOverflow question, I can also ask it on Math.StackExchange.

Let $v(t)$ be a negative real function, and consider the following integral, with $f(t)$ complex-valued and rapidly decreasing at infinity:

$$A= \int\limits_{0}^\infty \frac{1}{x} \int\limits_{x}^\infty \Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) \overline{f(t)} + \overline{\Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) }f(t) dt dx$$

Provided the integral is well defined, it is not hard to see that $A$ is always positive (recall $v(t)$ is negative by hypothesis): by an integration by parts we directly have

$$A= \int\limits_{0}^\infty \frac{1}{x} \Big[v(t) f(t) \overline{f(t)} \Big]_{x}^\infty dx = - \int\limits_{0}^\infty \frac{1}{x} v(x) f(x) \overline{f(x)}\, dx$$
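For completeness, the integration by parts is just the observation that the inner integrand is an exact derivative:

$$\Big(v f' + \tfrac{1}{2} v' f\Big)\overline{f} + \overline{\Big(v f' + \tfrac{1}{2} v' f\Big)}\, f = v\big(f'\overline{f} + f\overline{f'}\big) + v'\,|f|^2 = \frac{d}{dt}\Big(v(t)\,|f(t)|^2\Big),$$

which, together with the decay assumption $v(t)|f(t)|^2\to 0$ at infinity, gives the boundary-term expression.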

Now, can we say something about the sign of the same expression when the functions are replaced by their cosine transforms? (We write $\mathcal{F_c}$ for the cosine transform.)

$$B=\int\limits_{0}^\infty \frac{1}{x} \int\limits_{x}^\infty \mathcal{F_c}\Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) \overline{\mathcal{F_c}(f(t))} + \overline{\mathcal{F_c}\Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) } \mathcal{F_c}(f(t)) dt dx$$

I do not manage to use Parseval's equality. Maybe we cannot say anything about the sign of $B$? Any idea how to investigate this sign, or conditions on $v$ and $f$ under which it is also positive?

(Note that for the integral $A$ to converge we need: $$\int\limits_{0}^\infty \Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) \overline{f(t)} + \overline{\Big(v(t)f'(t) + \frac{1}{2} v'(t)f(t) \Big) }f(t)\, dt =0$$)

How does pre-exponentiating the zeta function by $1/\ln x$ change the function in terms of its properties? How are the zeros changed, if at all?

For $X$ a reasonable topological space denote by $Sh(X)$ the derived (stable $\infty$-) category of sheaves of vector spaces (over a fixed field $k$) on $X$.

Consider the filtered diagram whose objects are $Sh(U)$ for all open subsets $0 \in U \subset \mathbb{R}^n$, and whose morphisms are the restriction functors $j^*:Sh(V) \to Sh(U)$ for every inclusion $j : U \to V$.

**Question: What is the ($\infty$-)colimit of this diagram?**

In slightly less precise terms, I'm looking for the stalk at $0$ of the sheaf of categories on $\mathbb{R}^n$ which assigns to any open set the category of sheaves on that open set.

I have a hunch that this colimit is equivalent to the category of $\mathbb{R}_{>0}$-equivariant sheaves on $\mathbb{R}^n$ but I have no idea how to prove this...

I would like to understand the following optimization problem. Let $F(t,x)$ be a continuous function defined on $[0,1]\times [0,1]$, increasing in $t$ and convex in $x$ (I have in mind $F(t,x)=x(t-\frac{x}{2})$). The goal is to solve: \begin{align*} \min_{\alpha \in \mathcal{C}_A} \int_0^1F(t,x_t)dt \end{align*} subject to: \begin{align*} x_t \in \arg \max_{x \in [0,1]} F(t,x)-\alpha(x)1\!\!1_{x\neq 0}, \end{align*} where $\mathcal{C}_A$ is the class of differentiable functions such that: \begin{align*} \int_0^1\alpha(x_t)dt &\leq A,\\ \forall x\in [0,1],\quad 0 \leq \alpha(x) & \leq K \text{ and } \vert \alpha'(x)\vert \leq K, \end{align*} with $K> 0$ a fixed constant.

What annoys me the most are the boundedness conditions on the function $\alpha$ and its derivative. Would you have any hint or reference that would help me get started?
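Not an answer, but for building intuition it may help to discretize the problem and evaluate candidate menus numerically. Below is a minimal sketch with the example $F(t,x)=x(t-\frac{x}{2})$ and a hypothetical linear candidate $\alpha(x)=cx$ (the name `best_response` and the value of `c` are my own choices, not part of the problem):

```python
import numpy as np

def F(t, x):
    return x * (t - x / 2)

ts = np.linspace(0, 1, 201)   # grid for t
xs = np.linspace(0, 1, 201)   # grid for x

def best_response(alpha_vals, t):
    """x_t = argmax_x F(t, x) - alpha(x) * 1[x != 0], computed on the grid."""
    payoff = F(t, xs) - alpha_vals
    payoff[0] = F(t, 0.0)     # the penalty alpha(x) is not paid at x = 0
    return xs[np.argmax(payoff)]

c = 0.1                       # hypothetical slope; must satisfy c <= K
alpha_vals = c * xs           # alpha(x) = c * x
x_t = np.array([best_response(alpha_vals, t) for t in ts])

objective = np.mean(F(ts, x_t))   # approximates the integral of F(t, x_t) over [0,1]
budget = np.mean(c * x_t)         # approximates the integral of alpha(x_t)
```

For this candidate the inner maximizer is $x_t=\max(t-c,0)$, which the grid search reproduces; scanning over candidate $\alpha$'s in $\mathcal{C}_A$ then gives a feel for when the budget and derivative bounds bind.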

I'm trying to see if there are any bounds on the difference between $f_{ERM}$ and $f^{*}$. For now, define $\mathcal{F}$ to be a function class.

Let $P$ be a probability measure and $\hat{P_n}$ be the empirical measure on a set $S\subset \mathbb{R}^2$.

Let $$f^{*} = \arg \min_{f\in \mathcal{F}} R_P(f)$$ and

$$\hat{f^{*}} = \arg \min_{f\in \mathcal{F}} R_{\hat{P}_n}(f)$$ where $R_P$ denotes the classification risk with respect to the measure $P$ (and similarly for $\hat{P}_n$). So $R_P(f)=E_{(x,y)\sim P}[L(f(x),y)]$ and $R_{\hat{P}_n}(f)=\frac{1}{n}\sum_{i=1}^n L(f(x_i),y_i)$, where $(x_i,y_i)$ are i.i.d. samples drawn from the distribution $P$.

Is there anything we know about $\| R_{\hat{P}_n} (\hat{f^{*}}) - R_{P}(f^{*}) \|$?

I thought this was related to ERM stability bounds, except those bounds are for $\| R_{P} (\hat{f^{*}}) - R_{P}(f^{*}) \|$.
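Not a bound, but the quantity can at least be probed numerically. A minimal simulation sketch with threshold classifiers and 0/1 loss (the distribution and the function class are toy choices of mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_risk(theta, x, y):
    """0/1 risk of the threshold classifier f(x) = 1[x > theta] on a sample."""
    return np.mean((x > theta).astype(int) != y)

def sample(n, noise=0.1):
    """x ~ U[0,1], y = 1[x > 0.5] with labels flipped with probability `noise`."""
    x = rng.random(n)
    y = (x > 0.5).astype(int)
    flip = rng.random(n) < noise
    y[flip] = 1 - y[flip]
    return x, y

thetas = np.linspace(0, 1, 101)
x, y = sample(500)
emp = np.array([empirical_risk(t, x, y) for t in thetas])
theta_hat = thetas[np.argmin(emp)]   # the ERM solution over the class

# Here f* is the threshold 0.5 with R_P(f*) = 0.1 (the label-noise rate),
# so the quantity in question is:
gap = abs(emp.min() - 0.1)
```

Since $R_{\hat{P}_n}(\hat{f^*})\le R_{\hat{P}_n}(f^*)$ and $R_P(\hat{f^*})\ge R_P(f^*)$, the quantity is bounded in absolute value by $\sup_{f\in\mathcal{F}}|R_{\hat{P}_n}(f)-R_P(f)|$, so standard uniform-convergence (VC/Rademacher) bounds apply to it directly.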

When trying to obtain the value of Jacobsthal's function for some $n$ (that is, to find the largest sequence of consecutive numbers that are all coprime to $n$), one approach, and the only direct approach that I know of, is to exhaust all of the possible sequences of consecutive numbers coprime to $n$ by trial. When doing this, it's helpful (when working by hand) to represent all integers (except $1, 0$ and $-1$) by their lowest prime divisor. For example, the sequence $2,3,4,5,6,7,8,9,10$ would be equivalent to $2,3,2,5,2,7,2,3,2$. This representation has its advantages, as it represents more than one sequence of consecutive integers. For example, $2,3,2,5,2,7,2,3,2$ is also the respective representation of $212,213,214,215,216,217,218,219,220$. This representation enables the exhaustive approach for determining the value of Jacobsthal's function for some $n$.
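For experimenting, the representation and the resulting exhaustive computation are easy to code. A small sketch (the function names are mine; `jacobsthal` uses the standard definition of $g(n)$ as the least $m$ such that any $m$ consecutive integers contain one coprime to $n$):

```python
from math import gcd

def lowest_prime_divisor(k):
    """Smallest prime factor of k (for k >= 2), by trial division."""
    d = 2
    while d * d <= k:
        if k % d == 0:
            return d
        d += 1
    return k

def representation(seq):
    """Replace each integer by its lowest prime divisor."""
    return [lowest_prime_divisor(k) for k in seq]

def jacobsthal(n):
    """g(n): longest run of integers each sharing a factor with n, plus one.
    The coprimality pattern is periodic mod n, so two periods suffice."""
    best = run = 0
    for k in range(1, 2 * n + 1):
        if gcd(k, n) == 1:
            run = 0
        else:
            run += 1
            best = max(best, run)
    return best + 1

# the two sequences from the text share one representation
print(representation(range(2, 11)))      # [2, 3, 2, 5, 2, 7, 2, 3, 2]
print(representation(range(212, 221)))   # [2, 3, 2, 5, 2, 7, 2, 3, 2]
print(jacobsthal(2 * 3 * 5))             # 6: e.g. 24..28 all share a factor with 30
```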

But the thing is, I can't find a specific name for this representation of integers (or for this type of sequence in a more context-free setting). **Is there a referable/favoured name for this representation?** Usually I have to make up my own dummy name for them to be able to say anything about them, which I would like to avoid if possible.

For example, I would like to conjecture that for any **unknown name** of the first $n$ prime numbers that is larger than $2p_{n-1} -1$, there exists a reflection point in the **unknown name** of some prime numbers which, when multiplied together, are larger than ${p_n}^2$. This is just another way of conjecturing that a prime number exists in every interval of length $2p_{n-1}$ bounded above by ${p_n}^2$. That's an open question, but I would like to read more about this sort of approach, and related attempts to identify how these **unknown names** are distributed with respect to primorial numbers and squared primes.

Let $(L,\langle -,-\rangle)$ be an even integral lattice, and let $(A,q)$ be the associated discriminant form: $$ A=L^*/L, \quad q(a)=e^{\pi i \langle a,a\rangle}. $$ This in turn determines a modular tensor category $\mathcal{C}(A,q)$. (This is also the category of modules of the lattice VOA $V_L$ constructed from $L$.)

We then naturally have a group homomorphism $O(L)\to Aut(\mathcal{C}(A,q))$.

So it seems natural to study the obstructions to extending $\mathcal{C}(A,q)$ by $O(L)$ or any of its subgroups in the sense of [ENO, arXiv:0909.3140].

Does anybody know of any references concerning this point? I would be happy with any partial results, or any starting point for me to explore the literature.

(Also, does the extension theory of ENO appear somewhere in the theory of lattice VOA?)

To optimize $f(x,y,z)$ subject to $g(x,y,z)=h(x,y,z)=0$, we use the Lagrange multiplier method and solve \begin{equation*} \nabla f=\lambda \nabla g+\mu\nabla h,\quad g=0,\quad h=0. \end{equation*} Geometrically, $\nabla f$ must lie in the normal plane spanned by $\nabla g$ and $\nabla h$. However, it can happen that $\nabla g$ is parallel to $\nabla h$ at certain points, and hence they do not span the normal plane. In this case, must $\nabla f$ be parallel to $\nabla g$ at a critical point? If yes, how can this be explained? If no, can it happen that some critical points are missed? Thanks.
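A quick toy example (my own, so worth double-checking) suggests that critical points can indeed be missed when $\nabla g \parallel \nabla h$:

$$f=x,\qquad g=z,\qquad h=z-x^2.$$

The constraint set $g=h=0$ is the $y$-axis, on which $f\equiv 0$, so every point of it is a constrained extremum. But on the $y$-axis we have $\nabla g=(0,0,1)=\nabla h$, while $\nabla f=(1,0,0)$ does not lie on their common line, so the system $\nabla f=\lambda\nabla g+\mu\nabla h$ has no solution there. Points where the gradients fail to be linearly independent (where the usual constraint qualification fails) therefore have to be examined separately, in addition to the solutions of the Lagrange system.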

In algebra (especially algebraic topology, where I have seen it), when we have some sort of ring like $R[x]$ we are quick to specify that $x$ has degree $n$. I understand the importance of this, but isn't this just the same as the ring $R[x^n]$?

I would find the latter notion more compact and clear, given that in early mathematical education $R[x]$ is always assumed to have $x$ in degree $1$. In fact, it then seems to me that only a single definition of $R[x]$ needs to be given.

I am interested in the historical or good reasons for defining it as above. This notion is really common in cohomology, for example.

Let

$$I(T) = \int_{-T}^{T} \frac{\log\big|\zeta\big(\frac{1}{2} + it\big)\big|}{\frac{1}{4}+t^2}\,\mathrm{d}t$$

where $\zeta$ denotes the Riemann zeta function.

What are reasonable asymptotic estimates for $I(T)$?

*Here is what I think:*

It is well-known that

$$\int_{-T}^{T} \log \Big|\zeta\Big(\frac{1}{2} + it\Big)\Big|\mathrm{d}t \ll T\log T.$$ Therefore, by a dyadic decomposition, it follows from the above that

$$I(T)=\int_{-T}^{T} \frac{\log |\zeta(\frac{1}{2} + it)|}{\frac{1}{4}+ t^2}\mathrm{d}t \ll\frac{\log T}{T}$$

REMARK: A similar application of dyadic decomposition can be found at https://mathoverflow.net/a/285215/123305

How can I find all binary matrices $A$ that are solutions of the matrix equation

$$A^3=B$$

for some given positive integer square matrix $B$? Is there a way to characterize all the solutions?
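For small $n$ one can at least enumerate solutions by brute force over all $2^{n^2}$ binary matrices. A sanity-check sketch (names are mine), not a characterization:

```python
import itertools
import numpy as np

def binary_cube_roots(B):
    """All 0/1 matrices A with A^3 = B (integer matrix arithmetic).
    Exhaustive search over 2^(n^2) candidates: only feasible for small n."""
    n = B.shape[0]
    roots = []
    for bits in itertools.product((0, 1), repeat=n * n):
        A = np.array(bits, dtype=np.int64).reshape(n, n)
        if np.array_equal(A @ A @ A, B):
            roots.append(A)
    return roots

A = np.array([[1, 1], [1, 0]])
B = A @ A @ A                  # [[3, 2], [2, 1]], entrywise positive
roots = binary_cube_roots(B)
assert any(np.array_equal(R, A) for R in roots)
```

Recall that $(A^3)_{ij}$ counts length-3 walks from $i$ to $j$ in the digraph of $A$, so necessary conditions on $B$ (e.g. its diagonal entries count closed walks) can prune the search for larger $n$.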

*Note: Originally asked on Math StackExchange here, without an answer. Figured I should try here, since this is a more research-level question.*

I am trying to implement a fast polynomial *multipoint evaluation* algorithm via FFT (e.g., the one described in Chapter 10.1 of "Modern Computer Algebra," 3rd edition).
I should mention that the polynomial being evaluated is the formal derivative $N'(x)$ of: $$N(x) = (x-x_1)(x-x_2)\cdots(x-x_k)$$

Specifically, we're evaluating $N'(x)$ at points $x_1, x_2, \dots, x_k$, where $k$ is a power of two.

I'm looking for *any mathematical tricks* that will lead to *practical* improvements in the multipoint evaluation algorithm.
AFAICT, the main bottleneck will be computing remainders after division by $(x - x_i)\cdots(x-x_j)$.

As a refresher, at any "node" in the multipoint evaluation tree, we have a subset of points $x_i, \dots, x_j, x_{j+1}, \dots, x_\ell$, a previous remainder $R(\cdot)$, and we want to compute: \begin{align*} R(x) &\bmod (x - x_i)\cdots(x-x_j)\\ R(x) &\bmod (x - x_{j+1})\cdots(x-x_\ell) \end{align*} Furthermore, the dividend $R$ has degree $\le 2n-1$ while the divisor has degree $n$. Initially $n = k$, so it's a power of two, and then $n$ gets halved as we go down the multipoint evaluation tree.
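For reference, here is a minimal schoolbook (not FFT-based) sketch of the remainder tree just described, applied to $N'(x)$; the fast version replaces `poly_mul`/`poly_mod` with their $O(n\log n)$ FFT counterparts, but the tree structure is the same. All names are mine; note that $N(x)$ itself is the root of the subproduct tree, so its derivative comes for free:

```python
def poly_mul(p, q):
    """Product of coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_mod(p, m):
    """Remainder of p modulo a *monic* m, by schoolbook division."""
    p = list(p)
    dm = len(m) - 1
    for i in range(len(p) - 1, dm - 1, -1):
        c = p[i]
        if c:
            for j in range(dm + 1):
                p[i - dm + j] -= c * m[j]
    return p[:dm]

def subproduct_tree(points):
    """tree[0] holds the leaves (x - x_i); tree[-1][0] is the full product N(x)."""
    tree = [[[-x, 1] for x in points]]
    while len(tree[-1]) > 1:
        lvl = tree[-1]
        tree.append([poly_mul(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return tree

def multipoint_eval(f, points):
    """Evaluate f (coefficient list) at all points by descending the tree."""
    tree = subproduct_tree(points)
    rems = [poly_mod(f, tree[-1][0])]
    for lvl in reversed(tree[:-1]):   # node i has children 2i and 2i+1
        rems = [poly_mod(r, lvl[2 * i + s]) for i, r in enumerate(rems) for s in (0, 1)]
    return [r[0] for r in rems]

# N'(x_i) = prod_{j != i} (x_i - x_j): evaluate the derivative of N at its roots
pts = [1, 2, 3, 4]
N = subproduct_tree(pts)[-1][0]
dN = [i * c for i, c in enumerate(N)][1:]   # formal derivative
print(multipoint_eval(dN, pts))             # [-6, 2, -2, 6]
```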

I am aware there is an $O(n\log{n})$ algorithm based on FFT for computing remainders. For example, the algorithm described in Chapter 9.1 of "Modern Computer Algebra," 3rd edition first computes the quotient by computing a modular inverse and then computes the remainder.

**Is there any way to speed up this division algorithm** given our particular setting:

- We do not need the quotient, only the remainder.
- We only need to divide by $(x - x_i)\cdots(x-x_j)$, for some $i,j$ where $1 \le i,j \le k$.
- $x_i$ can be an $\ell$th root of unity ($\ell > k$), but **not necessarily** the $i$th one (or the $(i-c)$th one)
- $k$ is a power of two, or can be adjusted as needed
- We're doing a multipoint evaluation on $N'(x)$

The only mathematical trick I could find was in Todd Mateer's *"Fast Fourier Transform Algorithms with Applications"* PhD thesis (pdf), in Sec 7.5, pg 194.

Let the (homogeneous) Gaussian numbers be defined as $[n]_{x,y} = \frac{x^n-y^n}{x-y}$, and define Gaussian factorials as $[n]_{x,y}! = [n]_{x,y}[n-1]_{x,y}\dots [1]_{x,y}$, and the Gaussian binomials as $ {n \brack m}_{x,y} = \frac{[n]_{x,y}!}{[m]_{x,y}![n-m]_{x,y}!}$. (Letting $x=q$ and $y=1$ or $x=q$ and $y=q^{-1}$ recovers the more standard definitions.)

The coefficient of $x^ay^{nm-a}$ in $ {n+m \brack m}_{x,y}$ is equal to the number of multiset partitions of $\{1^a, 2^{nm-a}\}$ into $m$ parts of size $n$. (Ordering the parts by how many $1$'s they have and arranging things into a rectangle this is counting partitions of size $a$ fitting in an $n \times m$ rectangle).

This interpretation has a natural analog with more variables. We can define the homogeneous multivariate Gaussian polynomials to be the polynomials $p_{n,m}(x_1,x_2,\dots x_k)$ where the coefficient of $x_1^{a_1}x_2^{a_2}\dots x_k^{a_k}$ is equal to the number of multiset partitions of $\{1^{a_1},2^{a_2}, \dots, k^{a_k} \}$ into $m$ parts of size $n$.

For example $p_{2,3}(x,y,z) = $

$$x^6+$$

$$x^5y +x^5z+$$

$$2x^4y^2 + 2x^4yz + 2x^4z^2+$$

$$2x^3y^3 + 3x^3y^2z + 3x^3yz^2 +2x^3z^3+$$

$$2x^2y^4 + 3x^2y^3z + 5x^2y^2z^2 + 3x^2yz^3 +2x^2z^4+$$

$$xy^5 + \hspace{.2cm} 2xy^4z + \hspace{.2cm} 3xy^3z^2 +\hspace{.2cm} 3xy^2z^3 +\hspace{.2cm} 2xyz^4 + \hspace{.2cm} xz^5+$$

$$y^6 +\hspace{.35cm} y^5z + \hspace{.35cm} 2y^4z^2 +\hspace{.35cm} 2y^3z^3+\hspace{.35cm}2y^2z^4+\hspace{.35cm} yz^5+ \hspace{.35cm}z^6$$

Where, for example, the coefficient $5$ of $x^2y^2z^2$ counts the five multiset partitions $\{\{1,1\},\{2,2\},\{3,3\}\}$, $\{\{1,1\},\{2,3\},\{2,3\}\}$, $\{\{1,2\},\{1,2\},\{3,3\}\}$, $\{\{1,2\},\{1,3\},\{2,3\}\}$, and $\{\{1,3\},\{1,3\},\{2,2\}\}$ of $\{1^2,2^2,3^2\}$ into $3$ parts of size $2$.
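These coefficients are straightforward to enumerate by brute force, which at least makes the example above easy to check (a sketch; the names are mine):

```python
from itertools import combinations_with_replacement
from collections import Counter

def gaussian_multiset_coeffs(n, m, k):
    """Coefficients of p_{n,m}(x_1..x_k): maps (a_1,...,a_k) to the number of
    multiset partitions of {1^{a_1},...,k^{a_k}} into m parts of size n."""
    parts = list(combinations_with_replacement(range(1, k + 1), n))  # size-n multisets
    coeffs = Counter()
    for choice in combinations_with_replacement(parts, m):  # a multiset of m parts
        content = Counter()
        for part in choice:
            content.update(part)
        coeffs[tuple(content.get(i, 0) for i in range(1, k + 1))] += 1
    return coeffs

p23 = gaussian_multiset_coeffs(2, 3, 3)
print(p23[(2, 2, 2)])   # 5, the x^2 y^2 z^2 coefficient above
print(p23[(4, 2, 0)])   # 2, matching the 2 x^4 y^2 term
```

This also makes it easy to compare the data against the character of $Sym^m(Sym^n(\mathbb{C}^k))$ in a computer algebra system.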

These showed up for me doing some calculations in a representation theoretic context, and I can say some things about them from that perspective. For example, I can see they satisfy a "higher rank" version of the unimodality property for the coefficients for Gaussian polynomials.

Have these been studied before? Are there combinatorial proofs of these unimodality type results in the literature? Do these polynomials have interpretations say in terms of subspaces of finite dimensional vector spaces over finite fields? Do they have simpler formulas like the original one I gave in the two variable case?

**EDIT:** After thinking more and fixing some sage code that had been misleading me, these polynomials $p_{n,m}(x_1, \dots, x_k)$ are just the characters of $Sym^m(Sym^n(\mathbb{C}^k))$. Oddly enough that's not the original representation theoretic context I was looking at, which was more convoluted to state, but it's probably related by something like Howe duality. Anyway I'm still interested in what's known about them combinatorially.

Consider a simple random regular graph $G_d(n)$ with $d=2$ (that is, I select a $2$-regular graph from the set of all $2$-regular graphs on $n$ vertices, uniformly at random).

It is clear that this graph consists of (non-intersecting) cycles only; and asymptotically almost surely it does not contain a cycle of length $n$.

My question is: Can we characterize the length of its largest cycle?

Consider the $\ell^2$ complex Hilbert space.

Let $m\in \mathbb{N}^*$ be a fixed number, and set $$ S=\left\{ x=(x_n)_n\in \ell^2\ :\ \sum_{n=1}^m \frac{|x_n|^2}{n^2}=1\right\}.$$

I want to show that $S$ is not homeomorphic to the unit sphere $$ S(0,1)=\left\{ x=(x_n)_n\in \ell^2\ :\ \sum_{n=1}^\infty |x_n|^2=1\right\}.$$

Suppose $X = (x_1, \ldots , x_n)$ is given and we know that the $x_i$'s are nonnegative, $\sum_{i=1}^n x_i = n$ and $\sum_{i=1}^n x_i^2 = m$. Just from this information, is it possible to find a vector that majorizes $X$? The meaning of majorization can be found here: https://en.wikipedia.org/wiki/Majorization.

Certainly, $(n , 0 , \ldots , 0)$ is an answer, but I want a nontrivial vector. In particular, I'm looking for some nontrivial known results.

Thanks.

This question follows Noah Schweber's excellent answer to a corresponding question regarding second-order $ZFC$ and the continuum hypothesis: https://mathoverflow.net/a/78083/24611

Simply put, it seems that $ZFC_2$ "decides" $CH$ in a certain sense that can be made formally precise, although second-order logic is so limited as to make it impossible for us to determine which way it is decided. This seems, if my understanding is correct, to be related to $ZFC_2$ being categorical if large cardinals are excluded.

Does a similar situation exist for $ZF_2$ and $AC$?

This paper seems to show that it does, and that $ZF_2$ is likewise categorical if large cardinals are excluded. Hence, this would mean we have the situation analogous to that of $ZFC_2$ and $CH$: $AC$ is likewise "decided" in $ZF_2$, but we don't know which way it is decided.

Is this analogy correct?

Suppose $k$ is a field of characteristic zero (and we assume it is a number field if necessary). If $U$ is a smooth quasi-projective variety over $k$, then there is Poincaré duality, \begin{equation} H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})^{\vee} \simeq H^{2d-n}(U_{\overline{k}},\mathbb{Q}_\ell)(d),~d=\dim U, \end{equation} where $H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})$ is the compactly supported étale cohomology group.

**Question 1**: is it expected that there is a **mixed motive** $h^n_c(U)$ whose $\ell$-adic realization is $H^n_{c}(U_{\overline{k}},\mathbb{Q}_{\ell})$? I have not found references about "compactly supported mixed motives", could anyone give references on this?

**Question 2**: is it expected that there is a Poincaré duality of the form
\begin{equation}
h^n_c(U) \simeq h^{2d-n}(U)(d)
\end{equation}
Does anyone know any references?

Every totally disconnected separable metric space of dimension $n$ homeomorphically embeds into $C\times \mathbb R ^n$.

Is something like this known? $X$ is *totally disconnected* means that every point in $X$ is equal to the intersection of all clopen sets containing the point. $C$ is the Cantor set.

It is known that there are totally disconnected spaces of arbitrary dimension. But what about just $n=1$? How might we prove $X$ embeds into $C\times [0,1]$?

We know there is a one-to-one continuous mapping $f:X\to C$ such that the quasi-components of $X$ are the point inverses of $f$. It seems like the $C$-coordinate of the embedding should be determined by $f$, but we need another mapping $g:X\to [0,1]$ to determine the second coordinate. At this moment it is unclear how to define $g$. The Erdős space has a norm $\|\;\|$, and the mapping $g$ does something like $$\frac{1}{1+\|x\|}.$$ So maybe just fix any $x'\in X$ and let $$g(x)=\frac{1}{1+d(x,x')}\;\;?$$

But I think this would require that *all* metric balls around $x'$ have zero-dimensional boundary. With Erdős space this occurs, but it is not clear that it always happens with totally disconnected spaces.