Let $S_1(H)$ be the space of trace-class operators on a separable Hilbert space $H$. Let $T:S_1(H) \rightarrow S_1(H)$ be a bounded linear operator. We can then define a bounded operator $T^*:B(H) \rightarrow B(H)$ by

$$tr(T^*(A)\,\eta)=tr(AT(\eta)) \quad \text{for all } A \in B(H) \, , \, \eta \in S_1(H).$$
I have seen older papers (starting in the 1970s) where people do not seem to be aware that the operator $T$ is completely positive if and only if $T^*$ is completely positive. This prompted me to spend some time proving it. Since then, I have run into a couple of more recent articles claiming this is true but without any proof or references (they're mostly physics articles). Does anyone know if a proof of this has been published anywhere? I have looked a great deal and cannot find one.
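Not a proof, but the finite-dimensional analogue of the statement is easy to illustrate numerically via Choi matrices (a map is completely positive iff its Choi matrix is positive semidefinite). A sketch, taking $T$ in Kraus form and $T^*$ its adjoint with respect to the trace pairing above; all names here are illustrative:

```python
import numpy as np

def choi(T, d):
    # Choi matrix C(T) = sum_{i,j} E_ij (x) T(E_ij); T is CP iff C(T) is PSD
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C[i * d:(i + 1) * d, j * d:(j + 1) * d] = T(E)
    return C

rng = np.random.default_rng(0)
d = 2
Ks = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
      for _ in range(2)]

T      = lambda r: sum(K @ r @ K.conj().T for K in Ks)  # CP map in Kraus form
T_star = lambda a: sum(K.conj().T @ a @ K for K in Ks)  # trace-pairing adjoint

# both Choi matrices are PSD (up to rounding), illustrating the equivalence:
assert np.linalg.eigvalsh(choi(T, d)).min() > -1e-9
assert np.linalg.eigvalsh(choi(T_star, d)).min() > -1e-9

# the transpose map is not CP, and its trace-pairing adjoint (itself) is not either:
Tr = lambda r: r.T
assert np.linalg.eigvalsh(choi(Tr, d)).min() < 0
```

In infinite dimensions this is of course only a sanity check, not a substitute for the sought reference.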

Given $v_1,\dots,v_m\in\Bbb F_q[x]^n_d$ (length-$n$ vectors of polynomials of degree at most $d$ in $\Bbb F_q[x]$), consider $L(v_1,\dots,v_m)=\{v\in\Bbb F_q[x]^n_d:v=\sum_{i=1}^m\alpha_i v_i,\ \alpha_i\in\Bbb F_q,\ \prod_{i=1}^m\alpha_i\neq0\}$.

If $v\in L(v_1,\dots,v_m)$, let $\deg(v)$ be the smallest degree among the polynomial coordinates of $v$.

Consider the question:

'Given $v_1,\dots,v_m\in\Bbb F_q[x]^n_d$, what is the least value of $\deg(v)$ over all $v\in L(v_1,\dots,v_m)$?'

Is there an $O((n\cdot d\cdot m \cdot \log q)^c)$ algorithm for this?

Has this been studied?

I asked the same question on MSE one week ago, but it has not received any answers.
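For what it's worth, the definition can be brute-forced for tiny parameters. This is exponential in $m$, not the polynomial-time algorithm asked about; $q$ prime is assumed, polynomials are coefficient lists with $c[k]$ the coefficient of $x^k$, and $\deg(0)=-1$ by convention:

```python
from itertools import product

def min_degree(vs, q):
    # vs: m vectors, each a list of n polynomials over F_q (q prime),
    # each polynomial a list of d+1 coefficients, lowest power first
    def deg(c):
        nz = [i for i, a in enumerate(c) if a % q]
        return max(nz) if nz else -1          # convention: deg(0) = -1
    m, n, d1 = len(vs), len(vs[0]), len(vs[0][0])
    best = None
    for alphas in product(range(1, q), repeat=m):   # all alpha_i nonzero
        v = [[sum(a * vs[i][j][k] for i, a in enumerate(alphas)) % q
              for k in range(d1)] for j in range(n)]
        dv = min(deg(c) for c in v)           # deg(v): smallest coordinate degree
        best = dv if best is None else min(best, dv)
    return best
```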

**Background.** Let $G=SL(2,\mathbb{R})$, let $K=SO(2)$, and let $\Gamma$ be a lattice in $G$, e.g. $SL(2,\mathbb{Z})$. Let $\phi \in L^1(G)$ be $\mathfrak{Z}$-finite, and let the Poincaré series $P_\phi(g)$ be defined by
$$ P_\phi(g):= \sum _{\gamma \in \Gamma} \phi(\gamma g).$$
Theorem 6.1 in Borel's *Automorphic Forms on $SL(2,\mathbb{R})$* says: If $\phi$ is $K$-finite on the right, then the series converges absolutely and locally uniformly, belongs to $L^1(\Gamma \backslash G)$, and represents an automorphic form for $\Gamma$; and if $\phi$ is $K$-finite on the left, then the series converges absolutely and is bounded on $G$.

Here is an example of an integrable function that is $K$-finite on both sides and $\mathfrak{Z}$-finite. For the rest of this post, let's switch to working in the unit disc model, instead of in the upper-half plane, using the transformation $T= \begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix}$, which sends $i$ to $0$. So, from now on, when I write $G$, I mean $T.SL(2,\mathbb{R}).T^{-1} = SU(1,1)$; similarly for $K$ and $\Gamma$. Let $j(g,w):=cw+d$, where $c$ and $d$ are the bottom-left and bottom-right entries of the matrix $g$, respectively, and let $\varphi_n(w):=w^n$, with $w$ in the unit disc. Then $$ \phi_{m,n}(g) := j(g^{-1},0)^{-m} \varphi_n ( g^{-1}.0 ) \qquad (n,m \in \mathbb{Z}, \, n \geq 0, \, m \geq 4 )$$ has left $K$-type $m$ and right $K$-type $-m-2n$. In fact, $\phi_{m,n}$ is a basis element of a (holomorphic) discrete series representation of $G$ (acting on the right) in $L^2(G)$. And if $P_{\phi_{m,n}}(g)$ is not identically zero, then $P_{\phi_{m,n}}(g)$ is a basis element of a discrete series representation of $G$ (acting on the right) in $L^2(\Gamma \backslash G)$. The left and right $K$-types describe the characters by which $K$ acts on the left and the right, respectively; and saying that a function is $\mathfrak{Z}$-finite means that the function is annihilated by a non-constant polynomial in the Casimir operator. I could elaborate on the representation theory here, but I think this question is really about lattices.

Now denote $P_\phi(g)$ by $^L \! P_\phi(g)$, and define $$ ^R \! P_{\phi}(g):= \sum _{\gamma \in \Gamma} \phi(g \gamma).$$ (The superscripts L and R stand for averaging over the left and right action of the lattice, respectively.) Then the mirror image of Theorem 6.1 in Borel's book holds.

**Question.**

True, or false? $$ ^L \! P_{\phi_{m,n}}(g) := \sum _{\gamma \in \Gamma} \phi_{m,n}(\gamma g) \not\equiv 0 \quad \Longleftrightarrow \quad ^R \! P_{\phi_{m,n}}(g) := \sum _{\gamma \in \Gamma} \phi_{m,n}(g \gamma) \not\equiv 0 $$

I think it should be true, but I cannot prove it. If one of the series is non-zero at some $g \in G$, shouldn't it be possible to find some $g' \in G$ where the other series is non-zero?

**Observations and ideas.** Note that $^L \! P_{\phi_{m,n}}(g) \in L^1(\Gamma \backslash G)$, with right $K$-type $-m-2n$, while $ ^R \! P_{\phi_{m,n}}(g) \in L^1(G / \Gamma)$, with left $K$-type $m$, so the situation is not entirely symmetrical.

Here's how the Poincaré series defined in the Background section are related to another kind of Poincaré series. Let $\varphi$ be a bounded holomorphic function in the unit disc, and let $m \geq 4$ be an integer. Then the Poincaré series $$ p_{m,\varphi}(w) := \sum _{\gamma \in \Gamma} j(\gamma,w)^{-m} \varphi (\gamma.w) $$ converges absolutely and locally uniformly and defines a holomorphic automorphic form for $\Gamma$. (This is Theorem 6.2 in Borel's book. By the way, these are not the same Poincaré series as the Poincaré series at infinity in e.g. Iwaniec's book on automorphic forms.) We have $$ j(g^{-1},0)^{-m} p_{m,\varphi_n}(g^{-1}.0) = \ ^R \! P_{\phi_{m,n}}(g)$$ where $\varphi_n$ is as in the Background section. On the other hand, I can't obtain any relationship between $p_{m,\varphi_n}$ and $^L \! P_{\phi_{m,n}}(g)$.

Anyway, I think that the answer to this question will have more to do with lattices (or maybe just discrete subgroups) than with automorphic forms. I'm aware of the relationship between dimensions of spaces of cusp forms and multiplicities of discrete series representations in $L^2(\Gamma \backslash G)$ or $L^2(G / \Gamma)$, but it doesn't help here. Could we use Borel's density theorem somehow? ($\Gamma$ is Zariski-dense in $G$.) Could we use the fact that $G$ is unimodular? (A necessary condition for the existence of lattices.) Am I overthinking this?

Suppose $X$ is a countably infinite CW complex which satisfies the following property: for every $k$-cell $e$, the number of $(k+1)$-cells incident to $e$ is at most $c_k$, where the latter is some number depending only on $k$. Let $X_k$ be the set of $k$-cells.

Let $\ell^2_k(X)$ be the set of functions $a_k : X_k \to \Bbb R$ such that the series $$ \sum_{e \in X_k} a_k(e)^2 $$ converges (this implicitly makes use of the counting measure on $X_k$). The incidence bound then implies that the coboundary operator $$ \delta: \ell^2_k(X) \to \ell^2_{k+1}(X) $$ is well defined (it is given by the same formula that arises when defining the cellular cochain complex of $X$).

When $\dim X =1$ this construction was introduced by Dodziuk and Kendall in

Dodziuk, J. and Kendall, W. S., Combinatorial Laplacians and isoperimetric inequality. In: From local times to global geometry, control and physics (Coventry, 1984/85), 68–74, Pitman Res. Notes Math. Ser. 150, Longman Sci. Tech., Harlow, 1986.
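For orientation, in the one-dimensional (graph) case the coboundary is just the difference operator along oriented edges; a minimal sketch, with vertices as $0$-cells and oriented edges $(u,v)$ as $1$-cells:

```python
def coboundary(a, edges):
    # cellular cochain formula in dimension 1: (delta a)(u, v) = a(v) - a(u)
    return {e: a[e[1]] - a[e[0]] for e in edges}

def sq_norm(f):
    # squared l^2 norm with respect to the counting measure
    return sum(x * x for x in f.values())
```

Since $(a(v)-a(u))^2 \le 2(a(u)^2+a(v)^2)$ and each vertex meets a bounded number $c_0$ of edges, one gets $\|\delta a\|^2 \le 2c_0\|a\|^2$; this is the boundedness hidden in "$\delta$ is defined" above.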

**Questions**

Has this construction been investigated in the generality described above?

How is the cohomology of this complex related to the usual cellular cohomology of $X$?

Is there a set of reasonable conditions on $X$ which guarantee that this cohomology is finite dimensional?

How does the above relate to other notions of $L^2$-cohomology?

I hope this is a suitable MO question. In a research project, my collaborator and I came across some combinatorial expressions. I tested a few cases by computer, and the pattern suggested the following identity for fixed integers $K\geq n>0$.

$$\dfrac{K!}{n!K^{K-n}}\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}} \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}=\displaystyle {K-1\choose n-1}.$$

We tried to think of a proof but failed. One could move the factors $K!$ and $n!$ to the right-hand side and rewrite the RHS, or move $K!$ into the summation to form multinomial coefficients like $K\choose k_1,k_2,\dotsc,k_n$. We don't know which is better.
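For instance, the identity can be verified exactly in small cases with rational arithmetic (a sanity check only, not a proof):

```python
from fractions import Fraction
from math import comb, factorial

def compositions(K, n):
    # all (k_1, ..., k_n) with k_i >= 1 and k_1 + ... + k_n = K
    if n == 1:
        yield (K,)
        return
    for k in range(1, K - n + 2):
        for rest in compositions(K - k, n - 1):
            yield (k,) + rest

def lhs(K, n):
    total = Fraction(0)
    for ks in compositions(K, n):
        term = Fraction(1)
        for k in ks:
            # Fraction handles k = 1, where k^(k-2) = 1/k
            term *= Fraction(k) ** (k - 2) / factorial(k - 1)
        total += term
    return Fraction(factorial(K), factorial(n) * K ** (K - n)) * total

# check against binomial(K-1, n-1) for all 1 <= n <= K <= 7
for K in range(1, 8):
    for n in range(1, K + 1):
        assert lhs(K, n) == comb(K - 1, n - 1)
```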

The questions are:

- Anyone knows a proof for this identity?
- In fact the expression that appears in our work is $\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}} \sigma_p(k_1,\dotsc,k_n) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}$, where $p$ is a fixed integer and $\sigma_p(\dotsc)$ is the $p$-th elementary symmetric polynomial. The equation in the beginning simplifies this expression for $p=0,1$. Is there a similar identity for general $p$?

I hope you are well. Here is my problem.

Let $\{s_0,\,s_1,\ldots,\,s_T\}$ be a sequence of discrete random variables and denote $S_t=s_0+s_1+\cdots+s_t$, with $S_0=0$ and $S_T\leq M$, where $M$ and $T$ are large positive integers.

For all $t\in\{1,\ldots,\,T\}$, suppose that

$s_t|\{S_{t-1}=u_{t-1}\}\sim\text{Binomial}(M-u_{t-1},\,p_t)$, with

$\text{logit}(p_t)=\beta_0+\beta_1\cdot u_{t-1}$,

where $\beta_0\in\mathbb{R}$ and $\beta_1\in\mathbb{R}$ are known and fixed.

I would like to compute $\mathbb{P}(S_{T}=m)$, with $m$ sufficiently large and $m\in\{0,\,1,\ldots,\,M\}$.

Conditionally on $M$, $\beta_0$, and $\beta_1$, the following recursive formula allows me to obtain the probability distribution of $S_T$. For all $k_T\in\{0,\,1,\ldots,\,M\}$,

$\displaystyle\mathbb{P}(S_T=k_T)=\sum_{k_{T-1}=0}^{k_T}\mathbb{P}(s_T=k_T-k_{T-1}|\{S_{T-1}=k_{T-1}\})\cdot\mathbb{P}(S_{T-1}=k_{T-1})$, with

$\displaystyle\mathbb{P}(S_{T-1}=k_{T-1})=\sum_{k_{T-2}=0}^{k_{T-1}}\mathbb{P}(s_{T-1}=k_{T-1}-k_{T-2}|\{S_{T-2}=k_{T-2}\})\cdot\mathbb{P}(S_{T-2}=k_{T-2})$, with

$\qquad\qquad\vdots$

$\displaystyle\mathbb{P}(S_{2}=k_{2})=\sum_{k_{1}=0}^{k_{2}}\mathbb{P}(s_{2}=k_{2}-k_{1}|\{S_{1}=k_{1}\})\cdot\mathbb{P}(S_{1}=k_{1})$, with

$\displaystyle\mathbb{P}(S_1=k_1)={M\choose k_1}\cdot p_1^{k_1}\cdot(1-p_1)^{M-k_1}$.

However, I noted that this recursive formula is inefficient for computing $\mathbb{P}(S_{T}=m)$, even in statistical software such as ${\tt R}$.
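For reference, the recursion can be carried out as a single forward pass over $t$, reusing each distribution $\mathbb{P}(S_{t-1}=\cdot)$ instead of re-expanding the nested sums; a sketch (cost $O(TM^2)$, names illustrative):

```python
from math import comb, exp

def forward_dist(M, T, beta0, beta1):
    # dist[u] = P(S_t = u); start from S_0 = 0
    dist = [0.0] * (M + 1)
    dist[0] = 1.0
    for t in range(1, T + 1):
        new = [0.0] * (M + 1)
        for u, pu in enumerate(dist):
            if pu == 0.0:
                continue
            p = 1.0 / (1.0 + exp(-(beta0 + beta1 * u)))  # logit(p_t) = b0 + b1*u
            n = M - u
            for j in range(n + 1):                       # Binomial(M - u, p) step
                new[u + j] += pu * comb(n, j) * p**j * (1 - p)**(n - j)
        dist = new
    return dist  # dist[m] = P(S_T = m)
```

This computes the whole distribution of $S_T$ in one pass, so all values $m$ come at the cost of a single run.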

**Question:** Is there a more efficient method to compute $\mathbb{P}(S_{T}=m)$, for instance a method based on any extension of the Central Limit Theorem? If so, where can I find this method? Thanks a lot.

Let $X$ be a degree $d$ hypersurface in $\mathbb{P}^n$. For $(d,n)=(3,4),(4,5)$, or $(5,6)$, Coskun and Starr proved in *Rational curves on smooth cubic hypersurfaces* that the Kontsevich space $\overline{\mathcal{M}}_{0,0}(X,e)$ of rational curves has two irreducible components ($e>1$):

1) $e$ to 1 covers of a line

2) the closure of the smooth rational curves

I'm confused about a detail in the proof that these are the only two components. The proposed proof is to specialize to a chain of lines, and then argue that the chain of lines is a smooth point of the Kontsevich space. (And they take care that if you start with a curve that's not a cover of a line, you don't degenerate it into a cover of a line.)

Why is it clear that the space of chains of lines (even restricted to those that are generically injective) is irreducible without some monodromy argument? Otherwise, even in the first case of conics on a cubic threefold, I worry that there might be multiple components of conics, where for each component we can specialize a general element to a chain of lines corresponding to a smooth point of the Kontsevich space, but elements of different components specialize to different chains of lines.

(Edit: I should mention that when $d=n-1$, as above, we would expect there to be finitely many lines through each point of the hypersurface.)

Let $H$ be an $r \times r$ grid. Suppose that at most $r/10^5$ vertices of this grid are colored red. For every vertex $v \in V(H)$, let $B_i(v)$ be the ball of radius $i$ centered at $v$ (or, for simplicity, you can take the $(2i+1) \times (2i+1)$ sub-grid centered at $v$). Let $C_i(v) = B_i(v) \setminus B_{i/2}(v)$, for $i \in I:=\{2^k\colon k\in\mathbb{Z},\ 1\leq k\leq \log_2 r \}$; that is, $C_i(v)$ is the annulus of radius $i$ around $v$. Is the following claim true: there exists a vertex $v \in V(H)$ such that for every $i \in I$, the number of red vertices in $C_i(v)$ is at most $i$?
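For small grids the claim can at least be tested directly from the definitions; a brute-force sketch (Chebyshev distance is used for the ball, matching the sub-grid formulation, and all names are mine):

```python
def good_vertex_exists(r, reds):
    # reds: set of (x, y) red vertices in the r x r grid
    powers = [2 ** k for k in range(1, r.bit_length()) if 2 ** k <= r]
    def cheb(u, v):
        return max(abs(u[0] - v[0]), abs(u[1] - v[1]))
    for x in range(r):
        for y in range(r):
            # check the annuli C_i(v) = B_i(v) \ B_{i/2}(v) for all i in I
            if all(sum(1 for p in reds if i // 2 < cheb((x, y), p) <= i) <= i
                   for i in powers):
                return True
    return False
```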

Let $X$ be an integral projective surface and let $P$ be its only singular point. Suppose the normalisation $\overline{X}$ is smooth and that $P_1$ and $P_2$ are the two points lying over $P$. Blow up $\overline{X}$ at $P_1$ and $P_2$; we get two $\mathbb{P}^1$'s as exceptional divisors on $\operatorname{Bl}\overline{X}$. Now suppose $Z$ is a projective surface such that there is a map $\operatorname{Bl}\overline{X}\rightarrow Z$ which identifies these two $\mathbb{P}^1$'s. Naturally there will be a morphism $Z\rightarrow X$ which contracts the identified $\mathbb{P}^1$ to $P$. A general theorem says that this morphism is a blow-up along some ideal. What is this ideal?

Let $I$ be the ideal of the point $P$. It can't be a power of $I$, because that would give the same blow-up as $I$. Can anyone provide a reference for this kind of example? That would be really helpful.

$\newcommand{\GLp}{\operatorname{GL}_n^+}$ $\newcommand{\SLs}{\operatorname{SL}^s}$ $\newcommand{\dist}{\operatorname{dist}}$ $\newcommand{\Sig}{\Sigma}$ $\newcommand{\id}{\text{Id}}$ $\newcommand{\SOn}{\operatorname{SO}_n}$ $\newcommand{\SOtwo}{\operatorname{SO}_2}$ $\newcommand{\GLtwo}{\operatorname{GL}_2^+}$

I am trying to find the **Euclidean distance** between the set of matrices of constant determinant and $\SOn$, i.e., calculating
$$
F(s)= \min_{A \in \GLp,\det A=s} \dist^2(A,\SOn).
$$

Since the problem is $\SOn$-invariant, we can effectively work with the SVD; using geometric reasoning, we can reduce the problem to diagonal matrices with **at most two distinct values** among their entries:

Indeed, denote by $\SLs$ the submanifold of matrices with determinant $s$; Let $\Sig \in \SLs$ be a closest matrix to $\SOn$. By orthogonal invariance, we can assume $\Sig$ is positive diagonal. Then its unique closest matrix in $\SOn$ is the identity. Consider the minimizing geodesic between $I,\Sig$: $$ \alpha(t) =\id+t(\Sig-\id). $$ Since a minimizing geodesic to a submanifold is orthogonal to it, we have $$\dot \alpha(1) \in (T_{\Sig}SL^{s})^{\perp}=(T_{(\sqrt[n]s)^{-1}\Sig}SL^{1})^{\perp}=\big((\sqrt[n]s)^{-1}\Sig T_{\id}SL^{1}\big)^{\perp}=\big(\Sig \text{tr}^{-1}(0)\big)^{\perp}.$$

Since $\Sig^{-1} \in \big(\Sig \text{tr}^{-1}(0)\big)^{\perp}$, and $\Sig^{-1}$ in fact spans $\big(\Sig \text{tr}^{-1}(0)\big)^{\perp}$ (indeed $\langle \Sig^{-1},\Sig X\rangle=\operatorname{tr}(X)=0$ for every trace-free $X$, and this orthogonal complement is one-dimensional), we deduce

$$ \Sig-\id=\dot \alpha(1)=\lambda \Sig^{-1}$$ for some $\lambda \in \mathbb{R}$, i.e

$$ \sigma_i-1=\frac{\lambda}{\sigma_i} \Rightarrow \sigma_i^2-\sigma_i-\lambda=0.$$ Denote the roots of this equation by $a,b$. We just proved $\{\sigma_1,\dots,\sigma_n\} \subseteq \{a,b \}$.

*So, we are naturally led to the following optimization problem:*

$$ F(s)=\min_{a,b \in \mathbb{R}^+,a^kb^{n-k}=s,0 \le k \le n, k \in \mathbb{N}} k(a-1)^2+(n-k)(b-1)^2. \tag{1}$$

I solved some special case (see below), but I don't see a good way to solve the general problem. One approach I thought of is to solve the problem for each $k$ separately, and then compare the results to find the best value for $k$. This seems very unpleasant in general (the constraint is non-linear in $a,b$ for generic $k,n$ and the expressions become hard to compare).
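As a crude numerical baseline for $(1)$, one can scan $a$ on a grid for each $k$ and solve the constraint for $b$; the grid range for $a$ is a heuristic assumption of this sketch:

```python
def F_numeric(n, s, grid=20000, a_max=5.0):
    # start from k = 0 (or k = n): the conformal bound n*(s^(1/n) - 1)^2
    best = n * (s ** (1.0 / n) - 1.0) ** 2
    for k in range(1, n):
        for i in range(1, grid):
            a = a_max * i / grid                  # a in (0, a_max]
            b = (s / a ** k) ** (1.0 / (n - k))   # enforce a^k b^(n-k) = s
            best = min(best, k * (a - 1.0) ** 2 + (n - k) * (b - 1.0) ** 2)
    return best
```

For $n=2$ this reproduces the closed form given in the partial results below, e.g. $F(0.1)\approx 1-2\cdot 0.1=0.8$ and $F(1)=0$.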

**Partial results so far:**

- By letting $k=0$ (or $k=n$) we get $F(s) \le n(\sqrt[n]s-1)^2$. This bound can always be realized by a conformal matrix.
- In dimension $2$, a *phase transition* occurs (a detailed analysis can be found in the next section). It can be proved that

$$F(s) = \begin{cases} 2(\sqrt{s}-1)^2, & \text{ if }\, s \ge \frac{1}{4} \\ 1-2s, & \text{ if }\, s \le \frac{1}{4} \end{cases}$$

In other words, for $A \in \GLtwo$,
$$
\dist^2(A,\SOtwo) \ge \begin{cases}
2(\sqrt{\det A}-1)^2, & \text{ if }\, \det A \ge \frac{1}{4} \\
1-2\det A, & \text{ if }\, \det A \le \frac{1}{4}
\end{cases}.
$$
When $\det A \ge \frac{1}{4}$, equality holds if and only if $A$ is **conformal**. When $\det A < \frac{1}{4}$, equality does **not** hold when $A$ is conformal. In fact, the closest matrices to $\SOtwo$ in the class of matrices with a given determinant $s=\det A$ (up to left and right compositions with elements of $\SOtwo$) are

$$ \begin{pmatrix} \frac{1}{2} + \frac{\sqrt{1-4\det A}}{2} & 0 \\\ 0 & \frac{1}{2} - \frac{\sqrt{1-4\det A}}{2} \end{pmatrix}, \begin{pmatrix} \frac{1}{2} - \frac{\sqrt{1-4\det A}}{2} & 0 \\\ 0 & \frac{1}{2} + \frac{\sqrt{1-4\det A}}{2} \end{pmatrix} $$

when $\det A < \frac{1}{4}$, and

$$ \begin{pmatrix} \sqrt{\det A} & 0 \\\ 0 & \sqrt{\det A} \end{pmatrix} $$

when $\det A \ge \frac{1}{4}$.

- Analysis of the case when $n$ is even and $n=2k$:

**Claim:**

$$ \text{Let } \, \,f(s)=\min_{a,b \in \mathbb{R}^+,a^{\frac{n}{2}}b^{\frac{n}{2}}=s} \frac{n}{2} \big( (a-1)^2+(b-1)^2 \big). \tag{2}$$ Then $$F(s) \le f(s) = \begin{cases} n(\sqrt[n]s-1)^2, & \text{ if }\, s^{\frac{2}{n}} \ge \frac{1}{4} \\ \frac{n}{2}(1-2s^{\frac{2}{n}}), & \text{ if }\, s^{\frac{2}{n}} \le \frac{1}{4} \end{cases}$$

Expressing the constraint as $g(a,b)=ab-s^{\frac{2}{n}}=0$ and using Lagrange's multiplier method, we see that there exists a $\lambda$ such that

$$ (2(a-1),2(b-1))=\lambda \nabla g(a,b)=\lambda(b,a)$$ so $a-1=\frac{b}{2}\lambda,b-1=\frac{a}{2}\lambda$.

Summing, we get $$ (a+b)-2=\frac{\lambda}{2}(a+b) \Rightarrow (a+b) (1-\frac{\lambda}{2})=2.$$ This implies $\lambda \neq 2$, so we divide and obtain $$ a+b=\frac{4}{2-\lambda} \Rightarrow a=\frac{4}{2-\lambda}-b. \tag{3}$$ So, $$a-1=\frac{4}{2-\lambda}-b-1=\frac{b}{2}\lambda \Rightarrow b(\frac{2+\lambda}{2})=\frac{2+\lambda}{2-\lambda} .$$

If $\lambda \neq -2$, then $b=\frac{2}{2-\lambda}$, which together with equation $(3)$ imply $a=b$.

Suppose $\lambda=-2$. Then $a=1-b$, so $s^{\frac{2}{n}}=ab=b(1-b)$. Since $a=1-b$, $b$, and $s$ are positive, we must have $0<b<1$ and $0<s^{\frac{2}{n}}\le\frac{1}{4}$ (since $\max_{0<b<1} b(1-b)=\frac{1}{4}$).

In that case, $$ \frac{n}{2} \big( (a-1)^2+(b-1)^2 \big) =\frac{n}{2} \big( b^2+(b-1)^2 \big)=\frac{n}{2} \big( 1-2b(1-b) \big)=\frac{n}{2}(1-2s^{\frac{2}{n}}).$$

Since $$\frac{n}{2}(1-2s^{\frac{2}{n}}) \le n(\sqrt[n]s-1)^2,$$ with equality iff $s^{\frac{2}{n}}=\frac{1}{4}$, we are done.

The conclusion in the $2$-dimensional case is immediate.

Is there a known way or software to find integer eigenvectors for an integer matrix with integer eigenvalues?

In particular, I have a large real symmetric matrix with only a small number of distinct eigenvalues. I want to know if it is possible to find an eigenbasis such that each eigenvector contains only -1, 0, and 1 entries.
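One exact route, assuming the eigenvalues are known integers: compute the rational nullspace of $A-\lambda I$ (e.g. with sympy) and clear denominators, which always yields integer eigenvectors, though not necessarily ones with entries in $\{-1,0,1\}$. A sketch (function names are mine):

```python
import math
from sympy import Matrix

def integer_eigenvectors(A, lam):
    # exact rational nullspace of (A - lam*I), scaled to primitive integer vectors
    M = Matrix(A) - lam * Matrix.eye(len(A))
    out = []
    for v in M.nullspace():
        lcm = 1
        for x in v:
            lcm = lcm * x.q // math.gcd(lcm, x.q)      # clear denominators
        w = [int(x * lcm) for x in v]
        g = math.gcd(*(abs(c) for c in w)) or 1        # make primitive
        out.append([c // g for c in w])
    return out
```

For the $\{-1,0,1\}$ question one would still have to search the integer kernel lattice for such short vectors (e.g. via LLL-reduced bases); whether they exist depends on the matrix.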

Let the operator vec($A$) unroll all the elements of $A$ into a single column vector in column-major order. Then, the elements of vec($A^T$) are a permutation of the elements of vec($A$). If I want to write this permutation as a matrix-vector product, I get

vec($A^T)$ = $P$ vec($A$).

I'm looking for a common name and/or symbol for this "vec transposition permutation matrix" $P$.
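For concreteness, here is the matrix built directly from the definition (column-major vec), together with the defining check; the names are mine:

```python
import numpy as np

def vec(A):
    # column-major (Fortran-order) vectorization
    return A.reshape(-1, order="F")

def transpose_permutation(m, n):
    # P with P @ vec(A) == vec(A.T) for every m x n matrix A
    P = np.zeros((m * n, m * n), dtype=int)
    for i in range(m):
        for j in range(n):
            # vec(A) puts A[i, j] at index j*m + i; vec(A.T) puts it at i*n + j
            P[i * n + j, j * m + i] = 1
    return P

A = np.arange(6).reshape(2, 3)
assert (transpose_permutation(2, 3) @ vec(A) == vec(A.T)).all()
```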

We can take a fundamental class $S^3 \rightarrow K(\mathbb{Z}, 3)$ and consider the homotopy-theoretic fiber $\bar{S}^3$ of this map. We then obtain the homotopy fiber sequence $\Omega^2\bar{S}^3\rightarrow \Omega^2S^3 \rightarrow \Omega^2K(\mathbb{Z},3)$. Of course, $\Omega^2K(\mathbb{Z},3) = K(\mathbb{Z},1) = S^1$, so we obtain a fibration $\phi:\Omega^2S^3 \rightarrow S^1$ with fiber $\Omega^2\bar{S}^3$.

Now, consider the double suspension map $E^2: S^1 \rightarrow \Omega^2S^3$ and its homotopy fiber called $W(1)$. What are the compositions $\phi\circ E^2: S^1\rightarrow S^1$ and $E^2\circ\phi: \Omega^2S^3\rightarrow\Omega^2S^3$? Is it true that $W(1)$ is homotopy equivalent to $\Omega^2\bar{S}^3$?

I'm just reading some analysis leisurely after not having done any in a long time (specifically "Real Analysis for Graduates" by Richard Bass) and I've been trying to answer one of his exercises.

I am tasked with finding an example of a set $X$ and a monotone class $\cal{M}$ consisting of subsets of $X$ such that $\emptyset,X\in \cal{M}$ but that $\cal{M}$ is not a $\sigma$-algebra.

I think I've come up with a suitable example but could do with some verification as to whether I've understood it correctly. Let us consider the sets $X = \overline{B_0(1)}$ and $X_n = B_0 (1 - 1/n)$ where $B_0(r)$ represents the open ball centred at the origin, of radius $r$. If we construct the set $$\mathcal{M} = \bigcup_{n=1}^{\infty} {\{ X_n\} } = \left\{\emptyset,B_0\left(\frac{1}{2}\right),B_0\left(\frac{2}{3}\right),B_0\left(\frac{3}{4}\right),\ldots,B_0\left(\frac{k-1}{k}\right),\ldots\right\}$$ then this must be a monotone class because $X_1 = B_0(0)= \emptyset$ and $\lim_{n\to\infty} X_n=B_0(1)$ which is the union of all the $X_n$ (since the sets $X_n$ form an increasing sequence).

Even though this satisfies one condition for a $\sigma$-algebra (I think), in that it is closed under taking countable unions and intersections, it does not satisfy the condition for $\cal{M}$ to even be an algebra (and therefore it cannot be a $\sigma$-algebra): if we choose, say, $X_2 = B_0(\frac{1}{2})\in\cal{M}$, then $X_2^c=X\setminus B_0(\frac{1}{2})$, which is not one of the members of $\cal{M}$.

I have the following problem: a matrix $C\in \mathbb{R}^{2N\times 2N}$, where $C=\epsilon A+D$.

$\epsilon A=(C-C')/2$ is skew-symmetric, with "block" anti-diagonal structure of size $4$.

$D=(C+C')/2$ is diagonal, with "block" diagonal structure of size $2$:

$D=\operatorname{diag}\big(0,\; 0,\; \alpha,\; \alpha,\; 2\alpha,\; 2\alpha,\; \dots,\; (N-1)\alpha,\; (N-1)\alpha\big)$

And

$A$ is block tridiagonal with $2\times 2$ blocks: the diagonal blocks vanish, and for $j=1,\dots,N-1$ the blocks coupling the $j$-th and $(j+1)$-st pairs of indices are $$A_{j,j+1}=A_{j+1,j}=\sqrt{j}\,\beta\begin{pmatrix}0&-1\\1&0\end{pmatrix};$$ for example, the leading $4\times 4$ corner is $$\begin{pmatrix} 0 & 0 & 0 & -\beta \\ 0 & 0 & \beta & 0 \\ 0 & -\beta & 0 & 0 \\ \beta & 0 & 0 & 0 \end{pmatrix}.$$

Here $\alpha,\beta$ are constants of order 1.

I want to expand the inverse of $C$ in powers of $\epsilon\ll1$, by writing

$C^{-1}=(D+\epsilon A)^{-1}$.

Note that both $D$ and $A$ are of rank $2N-2$.

Due to the rank-deficiency of $D$, I cannot simply invert $D$ and write down the obvious Taylor series.

Does anyone have any ideas on how to proceed?

The difficulty seems to be due to differently sized "block"-wise structures of $A$ and $D$.

For example, if instead the matrix $A$ were "block"-anti-diagonal of size $2$ (the same block size as $D$), it seems like the Taylor series gives the right result if I replace $D^{-1}$ with the pseudo-inverse and only invert on the non-singular subspace.

The singularity category of a Gorenstein algebra is equivalent to the stable category of Gorenstein projectives.

Is, more generally, the singularity category of an arbitrary finite dimensional algebra $A$ (or a more general ring) equivalent to the stable category of $\Omega^{\infty}(\operatorname{mod}\text{-}A)$? This is probably too easy to be true in general, but are there non-Gorenstein algebras where it is true?

Is this true when the syzygy dimension (see the question "Syzygy dimension for commutative algebra" for the definition) is finite, and thus, for example, for representation-finite algebras?

I would like to ask about the equivalence between the following two definitions for a $C^1$ domain. In the book Vector Analysis Versus Vector Calculus, we have:

Definition 8.2.1: Let $\mathbb{H}^k=\{(t_1,\ldots,t_k)\in\mathbb{R}^k:\,t_k\geq 0\}$. Let $2\leq k\leq n$ and $M\subseteq\mathbb{R}^n$, $M\neq\emptyset$, be given. Then $M$ is said to be a regular $k$-surface with boundary of class $C^p$ if for every $x\in M$ there exists an injective mapping $$ \varphi: \mathbb{A}\subseteq\mathbb{H}^k\rightarrow M\subseteq\mathbb{R}^n $$ of class $C^p$ in $\mathbb{A}$ such that $x\in\varphi(\mathbb{A})$ and the following hold:

For every relatively open subset $\mathbb{B}\subseteq \mathbb{A}$, $\varphi(\mathbb{B})$ is relatively open in $M$.

For every $t\in\mathbb{A}$, the set $\{\partial_{t_1}\varphi (t),\ldots,\partial_{t_k}\varphi(t)\}$ is linearly independent.

In the book Partial Differential Equations by Evans, we have:

Definition in Appendix C: Let $U\subseteq \mathbb{R}^n$ be open and bounded. We say that $\partial U$ is $C^1$ if for each point $x^0\in\partial U$ there exist $r>0$ and a $C^1$ function $\gamma:\,\mathbb{R}^{n-1}\rightarrow\mathbb{R}$ such that - upon relabeling and reorienting the coordinate axes if necessary - we have $$U\cap B(x^0,r)=\{x\in B(x^0,r):\,x_n>\gamma(x_1,\ldots,x_{n-1})\}.$$

My questions:

Are both definitions equivalent (when $k=n$ and $p=1$)?

The second definition can be extended to a Lipschitz domain, usually used in PDE. Is there a version of definition 1 for "Lipschitz surfaces with boundary"?

In his notes *Torsion-Free Modules*, Matlis defined an *h-local ring* to be a ring in which

(1) each nonzero prime ideal is contained in a unique maximal ideal, and

(2) each nonzero element is contained in only finitely many maximal ideals.

Matlis has the blanket condition that all rings are integral domains. I am interested in general commutative rings, Noetherian if necessary, which satisfy condition (2) and also satisfy a weaker version of (1):

(1') each nonzero prime ideal is contained in only finitely many maximal ideals.

Is there a name for this more general class of rings? Any references would be appreciated. Thanks.

I have some problems with the definition of the pullback vector bundle. Say $F : M \rightarrow N$ is a $C^{\infty}$ map between smooth manifolds, and $\pi : E \rightarrow N$ is a vector bundle over $N$. We define $F^*E$, the pullback vector bundle of $E$, by saying:

- $(F^*E)_p = E_{F(p)}$ be the fiber over $p \in M$
- $F^*E = \bigsqcup_p \, (F^*E)_p$
- $\tilde{\pi} : F^*E \rightarrow M$, with $\tilde{\pi} = F^*(\pi) = \pi \circ F$

With these definitions, $\tilde{\pi}: F^*E \rightarrow M$ is a vector bundle.

Now, the problem is in the definition of $\tilde{\pi}$. I expect that:

$$\tilde{\pi}(F^*E)_p = p$$
but
$$\tilde{\pi}(F^*E)_p = \tilde{\pi} E_{F(p)} = \pi \circ F(E_{F(p)}) \neq p = F^{-1} \circ \pi (E_{F(p)})$$
Where did I go wrong?

Thanks

Let $\mathfrak{g}$ be a finite-dimensional real compact Lie algebra and $\mathfrak{t}\subset \mathfrak{g}$ a maximal abelian subalgebra. Let $\Delta(\mathfrak{g}_\mathbb{C},\mathfrak{t}_\mathbb{C})\subset \mathfrak{t}_\mathbb{C}^\ast$ be the associated root system (where $_\mathbb{C}$ denotes complexification). Let now $\mathfrak{g}'\subset \mathfrak{g}$ be a compact subalgebra with the property that $\mathfrak{t}':=\mathfrak{g}'\cap \mathfrak{t}$ is maximal abelian in $\mathfrak{g}'$. Then the root system $\Delta(\mathfrak{g}'_\mathbb{C},\mathfrak{t}'_\mathbb{C})\subset {\mathfrak{t}'_\mathbb{C}}^\ast$ can be regarded as a subset of $\mathfrak{t}_\mathbb{C}^\ast$ by identifying ${\mathfrak{t}'_\mathbb{C}}^\ast$ with the elements in $\mathfrak{t}_\mathbb{C}^\ast$ that are supported on $\mathfrak{t}'_\mathbb{C}$.

**How large is the intersection $\;\Delta(\mathfrak{g}'_\mathbb{C},\mathfrak{t}'_\mathbb{C})\cap\Delta(\mathfrak{g}_\mathbb{C},\mathfrak{t}_\mathbb{C})$ compared to $\Delta(\mathfrak{g}'_\mathbb{C},\mathfrak{t}'_\mathbb{C})$ ?**

**Edit: The initial question was whether one has $\;\Delta(\mathfrak{g}'_\mathbb{C},\mathfrak{t}'_\mathbb{C})\subset\Delta(\mathfrak{g}_\mathbb{C},\mathfrak{t}_\mathbb{C})$, but this is rarely true, see the comment by Jeffrey. However, it still seems to me that the intersection should not be empty, at least, and the number of elements in it should be about half the number of elements in the smaller root system.**

Background: The original setting where my question comes from is the following. A compact connected Lie group $G$ acts smoothly on a smooth manifold $M$. Let $T\subset G$ be a maximal torus in $G$, which then also acts on $M$, and let $\mathfrak{t}$ be its Lie algebra. For a point $p$ in $M$ with stabilizer group $G_p$ and associated stabilizer algebra $\mathfrak{g}_p$, it is easy to show that $\mathfrak{t}_p:=\mathfrak{t}\cap \mathfrak{g}_p$, which is the Lie algebra of the stabilizer group $T_p$ of $p$ with respect to the $T$-action, is maximal abelian in $\mathfrak{g}_p$ (i.e., the identity component of $T_p$ is a maximal torus in $G_p$). I would like to know under which circumstances the root system $\Delta({\mathfrak{g}_p}_\mathbb{C},{\mathfrak{t}_p}_\mathbb{C})$ is contained in $\Delta(\mathfrak{g}_\mathbb{C},\mathfrak{t}_\mathbb{C})$.