Let $X$ be a suspension spectrum whose $BP$-homology is infinitely generated ($BP_*(X) = \Sigma^d BP_*/I$, where $I$ has the form $I=(v_0^{i_0}, \dots , v_n^{i_n})$ and is such that the homology is a $BP_*(BP)$-comodule).

Let $C_nX$ be the fiber of the map $X \to L_nX$ and let $\Sigma C_n X$ be its cofiber.

What can be said about the natural map $$BP_*(\Sigma C_n X) \to BP_*(\Sigma C_{n-1} X) ? $$

My guess would be that it is always injective, but I'm not entirely sure about it.

Thanks

We consider the ODE

$$ icv' + v'' + v(1-|v|^2) = 0 \quad \text{in } \mathbb{R} $$

(arising from the nonlinear Schrödinger equation), where $c \in \mathbb{R}$. In the article, the author claims that:

" By **standard results on smooth dependence on the parameters for an ODE**, $c\mapsto v=v(c)$ and $c\mapsto 1-|v(c)|^{2}$
are smooth with values in any Sobolev space $W^{s,p}(\mathbb{R})$ and have exponential decay "

What is the author referring to? I cannot find any theorem/result which gives me that.

I have derived the following theorem.

An odd integer $N=6p+5$ is a prime number iff neither of the two Diophantine equations

$6x^2-1+(6x-1)y=p$

$6x^2-1+(6x+1)y=p$

has a solution.

An odd integer $N=6p+7$ is a prime number iff neither of the two Diophantine equations

$6x^2-1-2x+(6x-1)y=p$

$6x^2-1+2x+(6x+1)y=p$

has a solution.

$x=1,2,3,\ldots$; $y=0,1,2,\ldots$; $p=0,1,2,\ldots$

Note: All primes (except 2 and 3) are in one of two forms

$6p+5$ or $6p+7$

The proposed theorem can be formulated as a "matrix sieve": positive integers which appear in neither of the two 2-dimensional arrays:

$$P1=\begin{bmatrix} 5 & 10 & 15 & 20 & \cdots \\ 23 & 34 & 45 & 56 & \cdots \\ 53 & 70 & 87 & 104 & \cdots \\ 95 & 118 & 141 & 164 & \cdots \\ 149 & 178 & 207 & 236 & \cdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix} \qquad P2=\begin{bmatrix} 5 & 12 & 19 & 26 & \cdots \\ 23 & 36 & 49 & 62 & \cdots \\ 53 & 72 & 91 & 110 & \cdots \\ 95 & 120 & 145 & 170 & \cdots \\ 149 & 180 & 211 & 242 & \cdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}$$

$$P1(i,j)=6i^2-1+(6i-1)(j-1)$$

$$P2(i,j)=6i^2-1+(6i+1)(j-1)$$

are the indices $p$ of primes in the sequence $S1(p)=6p+5$. Positive integers which appear in neither of the two 2-dimensional arrays

$$P3=\begin{bmatrix} 3 & 8 & 13 & 18 & \cdots \\ 19 & 30 & 41 & 52 & \cdots \\ 47 & 64 & 81 & 98 & \cdots \\ 87 & 110 & 133 & 156 & \cdots \\ 139 & 168 & 197 & 226 & \cdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix} \qquad P4=\begin{bmatrix} 7 & 14 & 21 & 28 & \cdots \\ 27 & 40 & 53 & 66 & \cdots \\ 59 & 78 & 97 & 116 & \cdots \\ 103 & 128 & 153 & 178 & \cdots \\ 159 & 190 & 221 & 252 & \cdots \\ \vdots & \vdots & \vdots & \vdots \end{bmatrix}$$

$$P3(i,j)=6i^2-1-2i+(6i-1)(j-1)$$

$$P4(i,j)=6i^2-1+2i+(6i+1)(j-1)$$

are the indices $p$ of primes in the sequence $S2(p)=6p+7$.
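For what it is worth, the sieve is easy to test numerically. Below is a small Python sketch (the names `is_prime` and `sieved` are mine) that checks both parts of the claim for the first few hundred values of $p$:

```python
def is_prime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sieved(p, t):
    """True if p appears in the arrays for N = 6p+5 (t=0) or N = 6p+7 (t=1),
    i.e. p = 6x^2-1-2xt+(6x-1)y or p = 6x^2-1+2xt+(6x+1)y with x >= 1, y >= 0."""
    x = 1
    while 6 * x * x - 1 - 2 * x * t <= p:
        b1 = 6 * x * x - 1 - 2 * x * t   # base of the (6x-1)-step progression
        b2 = 6 * x * x - 1 + 2 * x * t   # base of the (6x+1)-step progression
        if (p - b1) % (6 * x - 1) == 0:
            return True
        if b2 <= p and (p - b2) % (6 * x + 1) == 0:
            return True
        x += 1
    return False

# both parts of the claim, for p = 0..499
assert all(is_prime(6 * p + 5) == (not sieved(p, 0)) for p in range(500))
assert all(is_prime(6 * p + 7) == (not sieved(p, 1)) for p in range(500))
```

The identity behind the first array, for instance, is $6p+5=(6x-1)\,(6(x+y)+1)$ when $p=6x^2-1+(6x-1)y$, so membership in the arrays is exactly compositeness of $N$.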

Is the proposed "matrix sieve" useful for number theory?

Let $\mathcal P$ be a unit intensity Poisson point process on $\mathbb R^d$. Fix $r>0$ and let $W_t = \bigcup_{s \leq t} \mathbb B(B_s,r)$ be the Brownian sausage around a Brownian motion $B_t$. Run the process until the time $\tau = \inf \{ t \colon W_t \cap \mathcal P \neq \emptyset\}$ that the sausage hits a point in $\mathcal P$.

Now, let $\mathcal P'$ be an independent unit intensity Poisson point process. Define the set $$\mathcal P'' = (\mathcal P - W_\tau) \cup (\mathcal P' \cap W_\tau).$$ So we are taking out the point that $W_\tau$ hit and putting back in $W_\tau \cap \mathcal P'$.

Is $\mathcal P''$ a unit intensity Poisson point process?

For a certain project I am currently working on, I need to be able to find PA cuts in nonstandard models of PA, in desirable intervals. For example, I wonder if the following is true, where $\newcommand\PA{\text{PA}}\PA_k$ refers to the $\Sigma_k$ fragment of $\PA$.

**Question.** If $M$ is a model of $\PA$ in which $\PA_{k-1}$ is consistent, but $\PA_k$ is not (so $k$ is nonstandard), then is there a $\PA$ cut in $M$ above $k$ in which $\PA_k$ is consistent?

That is, I want to cut $M$ below the first proof of a contradiction in $\PA_k$, but above $k$, and have the cut satisfy $\PA+\text{Con}(\PA_k)$.

Alternatively, is there some other $\Sigma_1$ property of $k$, other than $\neg\text{Con}(\PA_k)$, such that I can always find a $\PA$ cut in $M$ between $k$ and the witness of that property? Kameryn Williams suggested that the Paris-Harrington result may provide this, since it is designed to ensure $\PA$ cuts below the corresponding PH-Ramsey number. I would need, however, that one can always end-extend the model so as to make the $\Sigma_1$ property true. Does the PH construction have both these features?

With the consistency statements, for example, for any nonstandard $k$ in any model $M$ of $\PA$, there is always an end-extension of $M$ to a model of $\PA$ with $\neg\text{Con}(\PA_k)$.

I am considering investigating a variation of the cops and robbers game where the robber is an "invisible evader": their location is unknown until one of the cops is at a node adjacent to the robber.

I was just wondering whether this has been previously studied, and if so, how much. It would be really helpful if I could be directed to such a study.

Thank you.

The classical trace theorem assumes a $C^m$ boundary (see Adams, R. A. and Fournier, J. J. F., Sobolev Spaces, Academic Press, 2003), but in practice we often encounter polygonal domains, which have non-smooth points.

So is there a trace theorem for strongly Lipschitz boundaries, or even weaker boundary conditions? And can we generalize the trace theorem to smooth manifolds with corners?

Consider the sequence in $\Bbb N\times\Bbb N$:

$(0,0),(1,1),(1,2),(2,1),(1,3),(2,2),(3,1),(1,4),(2,3),(3,2),(4,1),...,(1,2k-2),(2,2k-3),...,(k-1,k),(k,k-1),...,(2k-3,2),(2k-2,1),(1,2k-1),(2,2k-2),...,(k,k),...(2k-2,2),(2k-1,1),...$

Question $1$: To define an Abelian group structure on $\Bbb N$ that is not finitely generated and is isomorphic to $(\Bbb Q,+)$, I need to know: what is the rule of this sequence?

Consider the sequence in $\Bbb N\times\Bbb N$:

$(1,1),(1,2),(2,1),(1,3),(2,2),(3,1),(1,4),(2,3),(3,2),(4,1),...,(1,2k-2),(2,2k-3),...,(k-1,k),(k,k-1),...,(2k-3,2),(2k-2,1),(1,2k-1),(2,2k-2),...,(k,k),...(2k-2,2),(2k-1,1),...$

Question $2$: To define an Abelian group structure on $\Bbb N$ that is not finitely generated and is isomorphic to $(\Bbb Q\setminus\{0\},\times)$, I need to know: what is the rule of this sequence?
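For what it is worth, the visible pattern is the standard anti-diagonal (Cantor) enumeration of $\Bbb N\times\Bbb N$: pairs are listed by increasing $i+j$, and within each diagonal by increasing first coordinate. A small Python sketch (the name `diagonal_pairs` is mine) generating the second sequence; for the first sequence, prepend $(0,0)$:

```python
def diagonal_pairs(limit):
    """First `limit` pairs (i, j) with i, j >= 1, listed along anti-diagonals
    i + j = 2, 3, ...; within each diagonal the first coordinate increases."""
    out = []
    s = 2                      # current diagonal: i + j = s
    while len(out) < limit:
        for i in range(1, s):
            out.append((i, s - i))
            if len(out) == limit:
                break
        s += 1
    return out

print(diagonal_pairs(10))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]
```

The diagonal $i+j=2k$ contributes exactly the displayed block $(1,2k-1),(2,2k-2),\dots,(k,k),\dots,(2k-1,1)$.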

Thanks in advance.

A salesman is employed by a large corporation. He has $n$ cities to visit, connected by roads, forming a graph. But as travel takes a lot of time, he has to pick hotels between visits. He cannot take any hotel he wishes; rather, there are precisely $m$ hotels where he may rest.

He has to plan his travel in such a way that after visiting $p$ cities, he has to visit a hotel. We may generalise this to say he has to visit $q$ different hotels. (Maybe it will be a travelling celebrity problem?)

So basically he has a graph $G(V,E)$, where $V$ are the nodes and $E$ the edges, and two sets of nodes: $C$ (cities) and $H$ (hotels) with $C \cup H = V$, $|C|=n$, $|H|=m$. Find a path in the graph starting at one of the $C$ nodes, ending at a different $C$ node, and forming the pattern $(p,q)$: $p$ nodes from set $C$, then $q$ nodes from set $H$, then repeat. The path need not visit every $H$ element and may visit some $H$ elements many times, but it has to visit every $C$ node exactly once.

So it is like finding a Hamiltonian path but with "rests".
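Since the problem is a Hamiltonian-path variant, the naive approach is exponential backtracking. Here is a minimal Python sketch under one reading of the rules (the names `find_tour` and `extend` are mine; the walk stops as soon as every city has been visited):

```python
def find_tour(adj, kind, p, q, start):
    """Backtracking search (exponential worst case) for a walk starting at
    city `start` that visits every city exactly once, alternating blocks of
    p cities and q hotels; hotels may be revisited."""
    cities = {v for v in adj if kind[v] == 'C'}

    def extend(path, visited, mode, cnt):
        if visited == cities:
            return list(path)
        last = path[-1]
        if mode == 'C' and cnt < p:          # keep filling the city block
            options = [(v, 'C', cnt + 1) for v in adj[last]
                       if kind[v] == 'C' and v not in visited]
        elif mode == 'C':                    # city block full -> start hotels
            options = [(v, 'H', 1) for v in adj[last] if kind[v] == 'H']
        elif cnt < q:                        # keep filling the hotel block
            options = [(v, 'H', cnt + 1) for v in adj[last] if kind[v] == 'H']
        else:                                # hotel block full -> new city block
            options = [(v, 'C', 1) for v in adj[last]
                       if kind[v] == 'C' and v not in visited]
        for v, m, c in options:
            path.append(v)
            found = extend(path, (visited | {v}) if m == 'C' else visited, m, c)
            if found:
                return found
            path.pop()
        return None

    return extend([start], {start}, 'C', 1)

# toy instance: cities A, B joined through a single hotel, pattern (1, 1)
adj = {'A': {'H1'}, 'H1': {'A', 'B'}, 'B': {'H1'}}
kind = {'A': 'C', 'B': 'C', 'H1': 'H'}
print(find_tour(adj, kind, 1, 1, 'A'))   # ['A', 'H1', 'B']
```

For real instances one would want pruning or a reduction to a TSP-like formulation, e.g. contracting each hotel block into a shortest $q$-hotel connector between consecutive city blocks.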

- Does this problem have a name, or is it something new?
- In what cases does it have a solution? It probably depends both on the numbers $p$, $q$ and on where the nodes are located in the graph.
- How can we find the shortest path, ignoring hotel costs?
- What is a way to find an optimal solution (cheapest travel) if every hotel cost is the same?
- What is the optimal solution when costs of hotels are different?

Suppose $F$ is a totally real field of degree $d$. Is there an explicit way (like theta series or so) to construct automorphic forms on $\operatorname{GSp}(2d)$ from Hilbert modular forms on ${\rm GL}_2(F)$?

This question asks: if $f,g \in k[x,y]$ are two algebraically dependent polynomials over an arbitrary field $k$, is it true that there exists a polynomial $h \in k[x,y]$ such that $f,g \in k[h]$, namely, $f=u(h)$ and $g=v(h)$ for some $u(t),v(t) \in k[t]$? The answer is positive.

Is it possible to replace the field $k$ by an integral domain $D$? Namely: If $f,g \in D[x,y]$ are two algebraically dependent polynomials over an arbitrary integral domain $D$, is it true that there exists a polynomial $h \in D[x,y]$ such that $f,g \in D[h]$?

Denote the field of fractions of $D$ by $Q(D)$. It is clear that if $f,g \in D[x,y] \subset Q(D)[x,y]$ are two algebraically dependent polynomials over $D$, then from the above question there exists a polynomial $h \in Q(D)[x,y]$ such that $f,g \in Q(D)[h]$, namely, $f=u(h)$ and $g=v(h)$ for some $u(t),v(t) \in Q(D)[t]$.

I do not see why, for example, $D[x][y] \ni f=u(h)=u_mh^m+\cdots+u_1h+u_0$ should imply that $h \in D[x,y]$ and $u_j \in D$ (changing variables does not seem to help, namely when the leading term is $cy^l$ with $c \in Q(D)$).

Any comments are welcome.

I've heard about two ways mathematicians describe Feynman diagrams:

They can be seen as "string diagrams" describing various types of arrows (and/or composition operations on them) in a monoidal closed category.

They are combinatorial tools that allow one to give formulas for the asymptotic expansion of integrals of the form:

$$ \int_{\mathbb{R}^n} g(x) e^{-S(x)/\hbar} $$

when $\hbar \rightarrow 0$, in terms of asymptotic expansions of $g$ and $S$ around $0$ (with $S$ having a unique minimum at $0$ and increasing quickly enough at $\infty$, and often with a very simple $g$, like a product of linear forms), as well as some variations of this idea, or the slightly more subtle "oscillating integral" version of it, with $e^{-i S(x)/\hbar}$.

My question is: *is there a relation between the two?*

I guess what I would like to see is a "high level" proof of the kind of formula appearing in the second point, in terms of monoidal categories, which would explain the link between the terms of the expansion and arrows in a monoidal category... But maybe there is another way to understand it...

Let $Q$ be a wild quiver without oriented cycles and let $V$ be an indecomposable representation of $Q$. Assume that $V_i\neq 0$ for each vertex $i$ of $Q$. The base field $k$ is algebraically closed. If $V$ is not a Schur representation, $\operatorname{End}V$ is a local $k$-algebra different from $k$, so there is a non-trivial nilpotent endomorphism. What I would like to know is the following: does there exist a nilpotent endomorphism $\phi$ such that $\phi$ is nonzero at $V_i$ for each vertex $i$?

I have a large body of examples in which this question has a positive answer; however, I still can't prove it in full generality... Am I missing some key example?

This question arose while solving problems on a homework sheet for a course on quiver representations; however, it does not help in any way with that problem, which was simply to prove that $V$ is indecomposable iff $\operatorname{End}V$ is local.

Edit: I forgot the key word; I want $\phi$ to be nilpotent.

Is it decidable whether a finite group presentation is diagrammatically aspherical (that is, there is no reduced spherical diagram over the presentation)? Probably not, but I cannot find a reference.

Consider $f(x)$, a rapidly decreasing function such that $\int_0^{\infty} f(x)\,dx=0$ and, for $x$ near zero, $f(x)=O(x^a)$ (with $a>0$). Can we interchange the sum and the integral and write: $$\int_0^{\infty}\sum_{n=1}^{\infty} f(nx)\,dx= \sum_{n=1}^{\infty} \int_0^{\infty} f(nx)\,dx =0\ ?$$

Note that, as $\int_0^{\infty} f(x)\,dx=0$, the Poisson summation formula (thanks to the behaviour of $f(x)$ at zero) ensures that $\sum_{n=1}^{\infty} f(nx) = O(x^a)$ as $x \to 0$, so the integral on the left in the above expression is well defined.

We cannot apply the classical interchange theorems directly, since $\sum_{n=1}^{\infty} |f(nx)| = O(\frac{1}{x})$, and even though the sum converges pointwise, the partial sums $|\sum_{n=1}^{N} f(nx)|$ cannot be bounded near zero uniformly in $N$ (even if the complete sum is absolutely integrable). So is there a way to show that we can interchange, or that the two sides are different, and if so, how can we show it?

Let us define polynomials $P_n^{(a)}(x)$ as follows:

$P_n^{(a)}(x)=\frac{1}{2}\left(\left(x-\sqrt{x^2+a}\right)^n+\left(x+\sqrt{x^2+a}\right)^n\right),$

where $x$ and $n$ are nonnegative integers, and $a$ is an integer.

We can also define these polynomials by the recurrence relation:

$P_0^{(a)}(x)=1$

$P_1^{(a)}(x)=x$

$P_{n+1}^{(a)}(x)=2xP_n^{(a)}(x)+aP_{n-1}^{(a)}(x)$

Note that $T_n(x)=P_n^{(-1)}(x)$, where $T_n(x)$ is the Chebyshev polynomial of the first kind.

Next, let us formulate the following claim:

Let $a \in \mathbb{Z}$, $n \in \mathbb{N}$, $n \ge 3$ and $\operatorname{gcd}(a,n)=1$. Then $n$ is prime if and only if $P_n^{(a)}(x) \equiv x^n \pmod{n}$.

You can run this test here.
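For readers who want to reproduce the test: the congruence $P_n^{(a)}(x)\equiv x^n \pmod n$ can be checked coefficient-wise directly from the recurrence. A small Python sketch (naming mine) verifying the claim for $3\le n<60$:

```python
from math import gcd

def P_mod(n, a):
    """Coefficient list of P_n^{(a)}(x) mod n (c[i] = coefficient of x^i),
    computed from the recurrence P_{k+1} = 2x P_k + a P_{k-1}."""
    p0, p1 = [1], [0, 1]                      # P_0 = 1, P_1 = x
    for _ in range(n - 1):
        nxt = [0] + [2 * c % n for c in p1]   # 2x * P_k ...
        for i, c in enumerate(p0):            # ... + a * P_{k-1}
            nxt[i] = (nxt[i] + a * c) % n
        p0, p1 = p1, nxt
    return p1

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(3, 60):
    a = next(b for b in range(2, n + 2) if gcd(b, n) == 1)
    assert (P_mod(n, a) == [0] * n + [1]) == is_prime(n)
```

One direction is easy: expanding $P_n^{(a)}(x)=\sum_j \binom{n}{2j}x^{n-2j}(x^2+a)^j$, the binomials $\binom{n}{2j}$ vanish mod an odd prime $n$ for $0<2j<n$, leaving $x^n$. The converse is the part that would need proof.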

The AKS test goes like this:

Input: integer $n>1$.

1. If $n=a^b$ for some $a \in \mathbb{N}$ and $b>1$, output composite.

2. Find the smallest $r$ such that $\operatorname{ord}_r(n)>(\log_2 n)^2$.

3. If $1 < \operatorname{gcd}(a,n) < n$ for some $a \le r$, output composite.

4. If $n \le r$, output prime.

5. For $a=1$ to $\left\lfloor \sqrt{\varphi(r)} \log_2(n) \right\rfloor$: if $(x+a)^n \not\equiv x^n+a \pmod {x^r-1,n}$, output composite.

6. Output prime.

Under the assumption that the claim given above is correct, can we change step 5 into:

For $a=1$ to $\left\lfloor \sqrt{\varphi(r)} \log_2(n) \right\rfloor$: if $P_n^{(a)}(x) \not\equiv x^n \pmod {x^r-1,n}$, output composite.

and still have a correct algorithm?

You can run this modified version here.

Let $\mathcal{B}(F)$ be the algebra of all bounded linear operators on a complex Hilbert space $F$, and let ${\bf S} = (S_1,\cdots, S_d)\in \mathcal{B}(F)^d$. We define $$W({\bf S})=\{(\langle S_1 y\; ,\;y\rangle,\cdots,\langle S_d y ,\;y\rangle):y \in F,\;\;\|y\|=1\}.$$ It is well known that the cases in which $W({\bf S})$ is convex are the following:

$(1)$ ${\bf S} = (S_1,\cdots,S_d)$ is a $d$-tuple of commuting normal operators.

$(2)$ ${\bf S} = (S_{1},\cdots,S_{d})$ is a $d$-tuple of Toeplitz operators.

$(3)$ ${\bf S} = (S_1,\cdots,S_d)$ is a commuting $d$-tuple of operators on a two-dimensional Hilbert space.

However, one can ask the natural question: if the operators $S_k$ are commuting and non-normal on a complex Hilbert space $F$ with $\dim(F)\geq3$, must $W({\bf S})$ fail to be convex?

My goal is to construct ${\bf S} = (S_1,\cdots,S_d)$ such that the operators $S_k$ are commuting but $W({\bf S})$ is not convex.

Consider the matrices $$ S_1 = \left[\begin{array}{ccc} 0&1&0 \\ 0& 0&0 \\0&0&0 \end{array}\right] \ \ \textrm{and} \ \ S_2 = \left[\begin{array}{ccc} 0&0&0 \\ 0& 0&0 \\0&1&0 \end{array}\right]. $$ Clearly, $S_1S_2=S_2S_1$. Moreover, we get $$ W(S_1,S_2)=\{(b\overline{a},b\overline{c});\;(a,b,c) \in \mathbb{C}^3\;\;\hbox{and}\;|a|^2+|b|^2+|c|^2=1\}. $$ But I am facing difficulties in showing whether $W(S_1,S_2)$ is convex or not in this case.
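One cheap way to explore $W(S_1,S_2)$ is to sample random unit vectors; this gives only numerical evidence, not a proof, but it can suggest whether convexity fails. A Python sketch (the name `sample_W` is mine):

```python
import math
import random

def sample_W(n):
    """Sample n points (<S1 y, y>, <S2 y, y>) = (b*conj(a), b*conj(c))
    for random unit vectors y = (a, b, c) in C^3 (Gaussian, then normalized)."""
    pts = []
    for _ in range(n):
        z = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
        r = math.sqrt(sum(abs(w) ** 2 for w in z))
        a, b, c = (w / r for w in z)
        pts.append((b * a.conjugate(), b * c.conjugate()))
    return pts

# a priori bound: |z1| + |z2| = |b|(|a| + |c|) <= 1/sqrt(2) on the unit sphere,
# so every sampled point must respect it
assert all(abs(z1) + abs(z2) <= 1 / math.sqrt(2) + 1e-9
           for z1, z2 in sample_W(10000))
```

Plotting, say, $(|z_1|,|z_2|)$ for the samples, and checking whether midpoints of sampled pairs appear attainable, may hint at the answer before attempting a proof.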

Thank you for your help.

Let $G$ be a $p$-adic Lie group, $\text{Lie}(G)$ its Lie algebra.

Is there any reasonable notion of exponential map $\text{exp} : \text{Lie}(G)\to G$?

My problem is the following. Given cumulative distribution functions $F$ and $G$ with densities $f,g$ (for example on $[0,1]$), what can I say about the monotonicity of $F(x)(1-G(x))$? More specifically, I would like to conclude that $F(1-G)$ is increasing for low enough $x$ and decreasing for high enough $x$. I feel this should be true under quite general regularity conditions, but I could prove it only in the case of log-concave densities (though it is true in many non-log-concave examples, such as the Pareto distribution). Maybe I am missing the obvious, but are there more general regularity conditions that ensure the result?

All I could do is the following reasoning, proving that, if the cdfs are strictly increasing and the densities are continuous, there is an interval arbitrarily close to $0$ on which the product is increasing.

Indeed, if they are strictly increasing, then $F(0)(1-G(0))=0$ and $F(x)(1-G(x))>0$ for $x>0$, so by Lagrange's mean value theorem, for any $x>0$ there is a point $\zeta \in (0,x)$ such that $(F(1-G))'(\zeta)=\frac{F(x)(1-G(x))}{x}>0$; hence by continuity there exists an interval around $\zeta$ in which $F(1-G)$ is strictly increasing.
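As a sanity check, the expected rise-then-fall shape is easy to verify numerically for explicit cdfs. A small Python sketch (the name `unimodal` is mine) with $F(x)=x$ and $G(x)=x^2$ on $[0,1]$ (density $2x$, which is log-concave):

```python
def unimodal(vals, tol=1e-12):
    """True if the sequence increases up to its maximum and decreases after."""
    peak = max(range(len(vals)), key=vals.__getitem__)
    return (all(vals[i] <= vals[i + 1] + tol for i in range(peak)) and
            all(vals[i] >= vals[i + 1] - tol for i in range(peak, len(vals) - 1)))

xs = [i / 1000 for i in range(1, 1000)]
# F(x) = x, G(x) = x^2 on [0, 1]:  F(x)(1 - G(x)) = x - x^3
prod = [x * (1 - x * x) for x in xs]
assert unimodal(prod)   # rises up to x = 1/sqrt(3), then falls
```

Swapping in candidate non-log-concave densities for $f,g$ in this harness is a quick way to hunt for a counterexample to the general conjecture.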

Let $A_{n,k}$, $k=1,\ldots,n$, be a sequence of $n\times n$ upper triangular matrices, where $A_{n,1}=I_n$ and, for $2\leq k\leq n$, $A_{n,k}$ is a regularly shifted and scaled version of an $n\times n$ matrix $P_{n,k}$ with $0,1$ entries, to be specified below. *An example illustrating this sequence for $n=7$ is given at the end of the question.*

Now define $B_n=A_{n,1}+\cdots+A_{n,n}$, and note that this process always yields an upper triangular matrix.

Informally: initialize the matrix $B_n=[b_{i,j}]$ as the identity. Let $k$ range from $2$ to $n$ and increment each entry $b_{vk-f,\,vk}$ by $1/k$ for $f=0,\ldots,k-1$, doing this for $v=1,2,\ldots,\lfloor n/k \rfloor$.

For the case $n=7$, one obtains the matrix as the sum of the matrices at the end of this question: $$ B_{7}=\begin{bmatrix} 1 & 1/2 &1/3 & 1/4 & 1/5 &1/6 &1/7 \\ 0 & 3/2 &1/3 & 1/4 &1/5 &1/6 &1/7 \\ 0 & 0 & 4/3 & 3/4 &1/5 &1/6 &1/7 \\ 0 & 0 &0 & 7/4 &1/5 &1/2 &1/7 \\ 0 & 0 &0 & 0 &6/5 &1 &1/7 \\ 0 & 0 &0 & 0 &0 &2 & 1/7 \\ 0 & 0 &0 & 0 &0 & 0 & 8/7 \end{bmatrix}, $$ whose eigenvalues are on the diagonal. The nonzero entries of $B_n$, which occur for $i\leq j$, are given by $$ b_{i,j}= \sum_{d\mid j} \mathbb{1}\left\{j-i +1 \leq d \right\} d^{-1}. \qquad\qquad (1) $$ (The divisor $d=j$ is always included, so this single expression also covers the diagonal entries.) It is also clear that the matrix $B_n$ matches $B_{n-1}$ in its upper left $(n-1)\times (n-1)$ submatrix, which provides a nice "bordered" recurrence to obtain $B_n$ from $B_{n-1}$ by augmentation. Hence

$$ B_{6}=\begin{bmatrix} 1 & 1/2 &1/3 & 1/4 & 1/5 &1/6 \\ 0 & 3/2 &1/3 & 1/4 &1/5 &1/6 \\ 0 & 0 & 4/3 & 3/4 &1/5 &1/6 \\ 0 & 0 &0 & 7/4 &1/5 &1/2 \\ 0 & 0 &0 & 0 &6/5 &1 \\ 0 & 0 &0 & 0 &0 &2 \end{bmatrix}. $$

Clearly every $B_n$ has all its eigenvalues positive, with a unique minimum eigenvalue $1$. All the entries on and above the main diagonal are nonzero but typically somewhat small.
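The informal construction is straightforward to implement exactly with rationals, which makes it easy to experiment with $C_n=B_n^TB_n$ numerically. A Python sketch (the name `build_B` is mine) that reproduces the displayed $B_7$:

```python
from fractions import Fraction

def build_B(n):
    """B_n via the informal rule: start from I_n and, for each k = 2..n and
    v = 1..floor(n/k), add 1/k to the entries b_{vk-f, vk}, f = 0..k-1."""
    B = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(2, n + 1):
        for v in range(1, n // k + 1):
            col = v * k - 1                  # 0-based index of column vk
            for f in range(k):
                B[col - f][col] += Fraction(1, k)
    return B

B7 = build_B(7)
# first row 1, 1/2, ..., 1/7 and diagonal entries 7/4, 2, 8/7 as displayed
assert [B7[0][j] for j in range(7)] == [Fraction(1, j + 1) for j in range(7)]
assert B7[3][3] == Fraction(7, 4) and B7[5][5] == 2 and B7[6][6] == Fraction(8, 7)
```

From here one can form $C_n=B_n^TB_n$ exactly, or convert to floats and feed an eigen-solver, to watch how the smallest eigenvalue of $C_n$ behaves as $n$ grows.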

**Question:**
What can be said about eigenvalues (especially lower bounds) and eigenvectors of the sequence of matrices $C_n:=B_n^T B_n$?

Does the bordered recurrence help in determining bounds on the eigenvalues of $C_n$? Any references to similar problems [structurally similar w.r.t. the recurrence, even if not number theoretic] are appreciated.

**Example for $n=7$:**

$$A_{7,1}=I_7,$$

$$A_{7,2}=\begin{bmatrix} 0 & 1/2 &0 & 0 &0 & 0 & 0\\ 0 & 1/2 &0 & 0 &0 & 0 & 0\\ 0 & 0 &0 & 1/2 &0 & 0 & 0\\ 0 & 0 &0 & 1/2 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 1/2 & 0\\ 0 & 0 &0 & 0 &0 & 1/2 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0 \end{bmatrix},$$

$$A_{7,3}=\begin{bmatrix} 0 & 0 &1/3 & 0 &0 & 0 & 0\\ 0 & 0 &1/3 & 0 &0 & 0 & 0\\ 0 & 0 &1/3 & 0 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 1/3 & 0\\ 0 & 0 &0 & 0 &0 & 1/3 & 0\\ 0 & 0 &0 & 0 &0 & 1/3 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0 \end{bmatrix},$$

$$A_{7,4}=\begin{bmatrix} 0 & 0 &0 & 1/4 &0 & 0 & 0\\ 0 & 0 &0 & 1/4 &0 & 0 & 0\\ 0 & 0 &0 & 1/4 &0 & 0 & 0\\ 0 & 0 &0 & 1/4 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0 \end{bmatrix},$$

$$A_{7,5}=\begin{bmatrix} 0 & 0 &0 & 0 &1/5 & 0 & 0\\ 0 & 0 &0 & 0 &1/5 & 0 & 0\\ 0 & 0 &0 & 0 &1/5 & 0 & 0\\ 0 & 0 &0 & 0 &1/5 & 0 & 0\\ 0 & 0 &0 & 0 &1/5 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0 \end{bmatrix},$$

$$A_{7,6}=\begin{bmatrix} 0 & 0 &0 & 0& 0 &1/6 & 0 \\ 0 & 0 &0 & 0 &0 &1/6 & 0 \\ 0 & 0 &0 & 0 &0 &1/6 & 0 \\ 0 & 0 &0 & 0 &0 &1/6 & 0 \\ 0 & 0 &0 & 0 &0 &1/6 & 0 \\ 0 & 0 &0 & 0 &0 & 1/6 & 0\\ 0 & 0 &0 & 0 &0 & 0 & 0 \end{bmatrix},$$

and

$$A_{7,7}=\begin{bmatrix} 0 & 0 &0 & 0& 0 &0 &1/7 \\ 0 & 0 &0 & 0 &0 &0 &1/7 \\ 0 & 0 &0 & 0 &0 &0 &1/7 \\ 0 & 0 &0 & 0 &0 &0 &1/7 \\ 0 & 0 &0 & 0 &0 &0 &1/7 \\ 0 & 0 &0 & 0 &0 &0 & 1/7 \\ 0 & 0 &0 & 0 &0 & 0 & 1/7 \end{bmatrix}.$$