Find the equation of the cone whose vertex is $(0, 0, 0)$ and whose generating curve is the intersection of $X^2 + Y^2 + Z^2 = a^2$ and $X + Y + Z = 1$.

Let $A$ be a ring and $A_0$ a subring of $A$, equipped with an injective endomorphism $f : A_0\to A_0$, such that $A$ is the direct limit:

$$A = \varinjlim_{f^n : A_0\to A_0, n\ge 0} A_0.$$

Let $I\subset A$ be an ideal. Does there exist an ideal $I_0\subset A_0$ such that $I = \varinjlim I_0A_0$?

My guess is $I_0 := I\cap A_0$, where the intersection takes place in $A$, and $A_0$ is the copy of $A_0$ in $A$ given by the injective map $A_0\to\varinjlim A_0$.

Let $A, B$ be Banach algebras, let $I$ be a closed two-sided ideal of $A$, and let $J$ be a closed two-sided ideal of $B$. Is the relation $A\hat{\otimes}B/I\hat{\otimes}J\cong A/I\hat{\otimes}B/J$ true? (I mean topological isomorphism and the projective tensor product.) If it isn't true in general, are there conditions that make it true?

I have a set of 14 linear equations in 14 unknowns, but they are in non-homogeneous form.

Can I convert the system to homogeneous form simply by rearranging so that each of the 14 equations is equal to 0?

e.g. x + y = z becomes x + y - z = 0

Or will this change the solution?
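A minimal numerical sketch of the distinction (using a hypothetical 2×2 system in place of the 14×14 one): moving every term to one side does not change the solution set, but it is not the same as making the system homogeneous.

```python
import numpy as np

# Hypothetical 2x2 stand-in for the 14x14 system: A v = b.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([3.0, 1.0])
v = np.linalg.solve(A, b)

# "Rearranging so each equation equals 0" just moves b across:
# A v - b = 0 has exactly the same solution set as A v = b.
assert np.allclose(A @ v - b, 0.0)

# A genuinely homogeneous system is A v = 0 (constant terms dropped);
# here its only solution is v = 0 -- a different solution set.
assert np.allclose(np.linalg.solve(A, np.zeros(2)), 0.0)
assert not np.allclose(v, 0.0)
```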

I have what is in essence a basic analysis question.

To make working out a certain example a bit easier, I found that I need to establish the existence of a function $f\in C^\infty(\mathbb{R})$ with the following properties:

- $f$ is increasing
- $f(x)=0$ for all $x\leq 0$
- $f(x)=1$ for all $x\geq 5$
- $\frac{f(x)}{x}<1$ for all $x>0$
- For any $y>-1$, the function $F(x,y)=\frac{f(x)}{f(x+f(x)y)}$, which is immediately well-defined and smooth for $x>0$, can be extended to a smooth function on $\{(x,y)\mid y>-1\}$.

Finding a function $f$ that satisfies the first four conditions is relatively simple by tweaking one that looks like $e^{-\frac{1}{x^2}}$. So the main question is whether an $f$ exists such that condition 5 is also satisfied. After some discussion it seems like it should be possible, even setting $F(x,y)=1$ for $x\leq 0$, but I have not been able to convince myself fully yet. The idea is to rewrite the denominator as $f(x(1+\frac{f(x)}{x}y))$ and use the fact that $\frac{f(x)}{x}$ vanishes to order $n$ at $x=0$ to deduce $n$-fold differentiability of $F(x,y)$. Then, since $f^{(n)}(0)=0$ for arbitrary $n$, we deduce smoothness.
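For the first four conditions, here is a numerical sketch of one standard construction (the cutoff and the sampled checks are my choices; condition 5 is of course the real issue):

```python
import numpy as np

def g(x):
    # Standard smooth cutoff: e^{-1/x} for x > 0, and 0 for x <= 0.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-300)), 0.0)

def f(x):
    # Smooth step: 0 for x <= 0, 1 for x >= 5, increasing in between.
    x = np.asarray(x, dtype=float)
    return g(x) / (g(x) + g(5.0 - x))

xs = np.linspace(-1.0, 6.0, 1401)
ys = f(xs)
assert np.all(ys[xs <= 0] == 0)               # condition 2
assert np.all(ys[xs >= 5] == 1)               # condition 3
assert np.all(np.diff(ys) >= -1e-12)          # condition 1 (sampled)
assert np.all(ys[xs > 0] / xs[xs > 0] < 1)    # condition 4 (sampled)
```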

Any insights would be appreciated of course!

(I apologize for the long question, which has no mathematical content. Just looking for the right reference.)

In their celebrated paper [ST1971] introducing iterated forcing, Solovay and Tennenbaum showed the following theorem:

Theorem A: If $(B_\alpha: \alpha< \omega_1)$ is an increasing sequence of complete subalgebras of the Boolean algebra $B_{\omega_1}$, and $\bigcup_{\alpha<\lambda} B_\alpha$ is dense in $B_\lambda$ for each limit ordinal $\lambda\le \omega_1$, and if all $B_\alpha $ ($\alpha<\omega_1$) satisfy the ccc (countable chain condition), then also $B_{\omega_1} $ will satisfy the ccc.

This theorem is used to show that the finite support iteration of ccc forcings is again ccc. (Theorem 6.3 in [ST1971])

An essential fragment of the proof given in the paper is due to Silver. (The authors write that Silver's version is "quite a bit simpler than [our] original proof".)

Essentially the same proof shows the following theorem:

Theorem B: Let $\kappa $ be regular uncountable. Let $(P_\alpha:\alpha <\kappa)$ be an iteration of forcing notions with direct limit $P_\kappa$, and assume that the set of stages $\delta<\kappa$ where $P_\delta$ is the direct limit of the previous forcings is stationary in $\kappa$. If all $P_\alpha$ satisfy the $\kappa$-cc, then so does $P_\kappa$.

(In Jech's book, Theorem A is 16.9/16.10, and Theorem B is 16.30. The latter theorem has a 3-line proof, starting with "Exactly as the proof of 16.9.")

**Question**: To whom should Theorem B be credited?

- To Solovay-Tennenbaum, whose unpublished original proof of Theorem A *most likely* also showed Theorem B?
- Or to Silver, whose proof of Theorem A *definitely* can easily be generalized to a proof of Theorem B?
- Or to "Silver/Solovay-Tennenbaum", or "Silver's proof in [ST]"?
- Or just to Solovay? Solovay's remarks in [K2011] indicate that the proof of the iteration theorem is his. But I think it is customary (at least in mathematics) not to divide credit between the coauthors of a paper, unless such a division is explicitly mentioned in the paper.
- Or to somebody else, who first explicitly formulated Theorem B?

(I am asking because I want to add a remark to a proof of a variant of Theorem B; the proof will be a variant of the S/S-T proof.)

[K2011]: Akihiro Kanamori: *Historical remarks on Suslin's Problem*, in: Juliette Kennedy and Roman Kossak, editors, Set Theory, Arithmetic and Foundations of Mathematics: Theorems, Philosophies, Lecture Notes in Logic, volume 36, 1-12. Association for Symbolic Logic, 2011. (MR2882649. http://math.bu.edu/people/aki/18.pdf )

[ST1971]:
Solovay, R. M.; Tennenbaum, S.
*Iterated Cohen extensions and Souslin's problem*.
Ann. of Math. (2) 94 (1971), 201–245.
(MR0294139, DOI:10.2307/1970860)

It is well known that Sklyanin algebras are Koszul, but is an explicit description of the dual algebra $\mathrm{Ext}_A(k,k)$ known? (I mean in terms of generators and relations.)

I would like to know if there exists a uniform asymptotic approximation of the Whittaker function $W_{\kappa,i\mu}(x)$ for $\kappa<0$, $x >0$, and $\mu \to +\infty$. The case $\kappa \ge 0$ is treated in [1], but I was not able to find the asymptotics for negative $\kappa$ in the literature.

[1] T. M. Dunster, Anal. Appl. 1 (2003), 199.

Let $\sigma_1,\ldots,\sigma_M$ be i.i.d. random vectors in $\mathbb{R}^d$, and for notational convenience let $\Sigma=(\sigma_1,\ldots,\sigma_M)$. I am interested in understanding $$ \gamma(\Sigma) = \min_{\lambda\in\Delta_M} \Big\|\sum_{i=1}^M \lambda_i \sigma_i\Big\|_2, $$ where $\Delta_M=\{\lambda \in\mathbb{R}_+^M: \sum_i \lambda_i=1\}$ is the probability simplex in $\mathbb{R}^M$. I am primarily interested in the cases where the distribution is the standard Gaussian or the uniform probability measure on the hypercube $\{-1,+1\}^d$.

Here is what I know:

- If we consider continuous distributions, the sigmas are linearly independent with probability 1 when $M\leq d$, thus this quantity should be strictly positive in this regime. In the discrete case, the latter claim should still hold with high probability.
- For the discrete case, the function $\gamma(\cdot)$ is Lipschitz with respect to the Hamming distance, so it concentrates around its mean.
- Similarly, for the Gaussian case one can prove that $\gamma(\cdot)$ is Lipschitz with respect to the Euclidean norm (more precisely, the Frobenius norm of $\Sigma$ as a matrix), so it concentrates around its mean.

By the last two observations, I am now mostly interested in understanding $\mathbb{E}_{\Sigma}[\gamma(\Sigma)]$ as a function of $M$. Clearly, for $M=1$ and my distributions of interest, $\mathbb{E}[\gamma]\approx\sqrt{d}$ (exactly $\sqrt{d}$ for the hypercube), and I believe that for $M>d$, $\mathbb{E}[\gamma]\approx0$ (although I don't have a proof).

My question is how to compute (or lower bound) this expectation as a function of $M$. Connections with the literature are also welcome. As a final comment, I tried to lower bound the expectation using the Khintchine inequality, but the minimum in between seems to ruin the approach.

PS: $\gamma(\Sigma)$ represents the largest possible (origin centered) ball not touching the simplex generated by the vectors $\sigma_1,\ldots,\sigma_M$; which is similar, but not equivalent to the inner radius of the (symmetrized) convex hull. So better suggestions for a title are also welcome.
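For experiments, $\gamma(\Sigma)$ is a small convex quadratic program over the simplex; here is a Monte Carlo sketch (function and parameter names are mine) using SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

def gamma(S):
    """min over the probability simplex of ||S @ lam||_2; S is d x M."""
    d, M = S.shape
    Q = S.T @ S  # squared objective is lam' Q lam
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(lambda lam: lam @ Q @ lam, np.full(M, 1.0 / M),
                   jac=lambda lam: 2.0 * Q @ lam,
                   bounds=[(0.0, 1.0)] * M, constraints=cons,
                   method='SLSQP')
    return float(np.sqrt(max(res.fun, 0.0)))

rng = np.random.default_rng(0)
d = 20
for M in (1, 10, 40):
    est = np.mean([gamma(rng.standard_normal((d, M))) for _ in range(20)])
    print(M, est)  # decreases as M grows; roughly sqrt(d) at M = 1
```

For large instances a dedicated QP solver would be preferable, but this suffices to plot the empirical mean against $M$.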

The $N\times N$ determinant $$D(a,\vec{b})=\det\left( \frac{(2N+a+b_j-i-j)!}{(N-j)!(N+a-i)!}\right)$$ has the nice form $$D(a,\vec{b})=\prod_{j=1}^N\frac{(N+a+b_j-j)!}{(N+a-j)!}\prod_{i=j+1}^N\frac{(b_j-b_i+i-j)}{(i-j)}.$$

I would like to know if the generalization where $a$ is allowed to vary with $i$ has a nice expression as well, $$D(\vec{a},\vec{b})=\det\left( \frac{(2N+a_i+b_j-i-j)!}{(N-j)!(N+a_i-i)!}\right)=?$$

I know Krattenthaler has this great paper about determinants, but I was not able to find help there.
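The closed form can be sanity-checked for small $N$ with exact rational arithmetic (helper names are mine; the pairwise factor is taken in the form $(b_j - b_i + i - j)/(i - j)$, which is the version matching the determinant in small cases):

```python
import math
from fractions import Fraction
from itertools import permutations

def leibniz_det(M):
    # Exact determinant via the Leibniz formula (fine for small N).
    n = len(M)
    total = Fraction(0)
    for p in permutations(range(n)):
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = Fraction((-1) ** inv)
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

def D(a, b):
    # Entry (i, j): (2N+a+b_j-i-j)! / ((N-j)! (N+a-i)!), 1-based indices.
    N = len(b)
    return leibniz_det([[Fraction(
        math.factorial(2 * N + a + b[j - 1] - i - j),
        math.factorial(N - j) * math.factorial(N + a - i))
        for j in range(1, N + 1)] for i in range(1, N + 1)])

def closed_form(a, b):
    N = len(b)
    out = Fraction(1)
    for j in range(1, N + 1):
        out *= Fraction(math.factorial(N + a + b[j - 1] - j),
                        math.factorial(N + a - j))
        for i in range(j + 1, N + 1):
            out *= Fraction(b[j - 1] - b[i - 1] + i - j, i - j)
    return out

for a, b in [(0, [1, 0]), (2, [3, 1, 0]), (1, [4, 2, 1, 0])]:
    assert D(a, b) == closed_form(a, b)
```

The same harness makes it easy to experiment with the $\vec{a}$-dependent generalization before guessing a product formula.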

Let $X$ be a compact Kähler manifold of dimension $n$ with a given Kähler metric $\omega$. Let $L$ be a hermitian holomorphic line bundle on $X$ whose metric is positive. Let $x_0\in X$.

I would like to construct a section $s\in H^0(X,L)$, so that $s(x_0)=0$ and $ds(x_0)=\alpha\neq 0$ for given $\alpha$. Moreover, I want to control the $L^2$ norm of $s$ by some function of $|\alpha|$.

Here $ds$ is the differential of $s$, since $x_0$ is a zero of $s$, $ds(x_0):T_{x_0}X \rightarrow L_{x_0}$ is intrinsically defined.

The question is similar to a special case of the Ohsawa-Takegoshi theorem, where the prescribed information is only $s(x_0)$. If there is a good solution to my question, then I would like to know whether it is possible to replace the single point $x_0$ by a more general analytic subset of $X$.

Consider a continuous distribution $F$ with density $f$. The (differential) Shannon entropy of $f$ is

$h(f)=-\int f(x)\log f(x) dx$.

In the literature on differential entropy estimation, analyzing the performance of an estimator oftentimes relies on a functional Taylor expansion (sometimes termed a "von Mises expansion"). For two densities $f$ and $g$, the expansion of $h(f)$ about $g$ reads (see, e.g., https://www.cs.cmu.edu/~aarti/Class/10704_Spring15/lecs/lec5.pdf)

$h(f)=h(g)+\int \big(\log g(x) +1\big)\big(g(x)-f(x)\big)dx + O\big(||f-g||_2^2\big)$

However, I couldn't fully figure out under what assumptions on $f$ this expansion is valid. I would very much appreciate a clarification on the minimal assumptions on $f$ for the above to hold true.

In particular, does the expansion apply when $f=p\ast\gamma$, where $p$ is a compactly supported density and $\gamma$ is a Gaussian (i.e., $f$ is a convolution with a Gaussian), and $g$ is some density estimator of $f$ (say, a kernel density estimator)?
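For intuition, the first-order expansion and its quadratic remainder can be checked numerically in a toy Gaussian case (the densities, grid, and step sizes here are my choices, not from the notes):

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 200001)
trapz = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
phi = lambda t, s: np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
H = lambda s: 0.5 * np.log(2 * np.pi * np.e * s**2)  # entropy of N(0, s^2)

def remainder(sf, sg):
    # h(f) - [h(g) + int (log g + 1)(g - f)] for f = N(0,sf^2), g = N(0,sg^2)
    f, g = phi(x, sf), phi(x, sg)
    return H(sf) - H(sg) - trapz((np.log(g) + 1.0) * (g - f))

errs = [abs(remainder(1.0, 1.0 + eps)) for eps in (0.1, 0.05, 0.025)]
# Halving the perturbation cuts the remainder by about 4: it is O(||f - g||^2).
assert errs[0] / errs[1] > 3 and errs[1] / errs[2] > 3
```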

My homework: Let $P \in \mathbb{R}^{n \times n}$ with $P > 0$ (entrywise). Show that the dominant eigenvalue satisfies $\lambda(P)= \min \{\lambda \in \mathbb{R}_{\geq 0} \mid Px \leq \lambda x \text{ for some nonzero } x \geq 0\}=\min \Gamma(P)$.

My proof: Perron's theorem says that if $P \in \mathbb{R}^{n \times n}$ with $P > 0$, then $P$ has a dominant eigenvalue $\lambda(P)>0$ with a corresponding eigenvector $v>0$. Therefore, $\lambda(P)\in \Gamma(P)$. Furthermore, $Pv\leq \lambda v$ for any $\lambda\in \Gamma(P)$. So $\lambda(P) v\leq \lambda v$, and therefore $\lambda(P)\leq\lambda$. Hence, $\lambda(P) = \min_{\lambda\in\Gamma(P)} \lambda$.

My main concern is this sentence: Furthermore, $Pv\leq \lambda v$ for any $\lambda\in \Gamma(P)$.

In the question, we define $\Gamma(P)$ by the condition $Px \leq \lambda x \text{ for some nonzero } x \geq 0$. Picking an arbitrary $\lambda\in \Gamma(P)$, I don't know whether I can replace $x$ by $v$, where $v$ is a positive dominant eigenvector. In other words, does "for some nonzero $x\geq 0$" mean that any nonnegative $x$ works?
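A numerical illustration of the Collatz-Wielandt-type statement (a sketch; it does not settle the logical point about "for some $x$", but shows both inclusions at work on a random positive matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(0.5, 2.0, size=(4, 4))  # entrywise positive matrix

# Dominant (Perron) eigenvalue and a positive eigenvector.
eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
lam_P = eigvals[k].real
v = np.abs(eigvecs[:, k].real)
assert np.allclose(P @ v, lam_P * v) and np.all(v > 0)

# lam_P lies in Gamma(P), witnessed by v itself (P v = lam_P v).
# Conversely, for any positive witness x, the smallest lambda with
# P x <= lambda x is max_i (P x)_i / x_i, and it never drops below lam_P.
for _ in range(100):
    xw = rng.uniform(0.1, 1.0, 4)
    lam = np.max((P @ xw) / xw)
    assert lam_P <= lam + 1e-9
```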

Let $X$ be a locally compact Hausdorff space and suppose that $X$ can be written as the disjoint union of countably many non-empty closed subsets. Is at least one of the subsets clopen?

Let $X$ be a (possibly infinite) set. We consider a subset $H$ of the set $\{0,1\}^X$ of functions $X\to\{0,1\}$. Given a finite subset $B\subset X$, we denote by $H_B$ the set of restrictions to $B$ of elements of $H$, that is, the image of $H$ under the restriction map $\{0,1\}^X\to\{0,1\}^B$. If $H_B=\{0,1\}^B$ (or equivalently if $|H_B| = 2^{|B|}$), we say that $H$ *shatters* $B$.

(definition provided here on page 69)

Consider $X,B,H$ as above. Fix an element $c_1\in B$. Given $h\in\{0,1\}^X$, define its "brother" $\hat{h}\in\{0,1\}^X$ as equal to $1-h(c_1)$ on $c_1$ and equal to $h$ elsewhere. Define $H'$ as the set of $h\in H$ such that there exists $g\in H$ such that $h$ and $\hat{g}$ coincide on $B$.

(definition is on page 75)

I am considering the proof of Sauer's lemma here, from page 74. It is proven by induction, and the proof consists mostly of showing that

$$|H_A|\le|\{B \subseteq A : H \text{ shatters }B \}| \ \ \ \ \ (1)$$

I want to add an additional requirement, under which

$$|H_A| = |\{B \subseteq A : H \text{ shatters }B \}| \ \ \ \ \ (2)$$

holds whenever the requirement is satisfied; and when it is not satisfied, I want to prove that

$$|H_A| < |\{B \subseteq A : H \text{ shatters }B \}| \ \ \ \ \ (3)$$

Am I correct in assuming that the requirement needed for (2) is that $H' = H$ for every $H'$ defined as above with respect to the corresponding subsets $B$? If so, is it correct to say that the strict inequality '<' in (3) holds if and only if there exists some element of $A$ which is classified as 0, or as 1, by **all** functions in $H$? And that when for every element $x$ of $A$ there exist $h_1, h_2 \in H$ with $h_1(x) = 0$ and $h_2(x) = 1$, we have the equality '=' in (2)?

I am asking this as part of finding an answer to that question of mine.
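Inequality (1) is easy to test exhaustively for tiny $X$, which may help in probing when equality holds; here is a brute-force sketch (names are mine) checking it with $A = X$ for random classes $H$:

```python
import random
from itertools import chain, combinations

def shatters(H, B):
    # H: set of tuples (functions X -> {0,1}); B: tuple of coordinates.
    return len({tuple(h[i] for i in B) for h in H}) == 2 ** len(B)

def subsets(A):
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

random.seed(0)
X = list(range(4))
all_fns = [tuple((m >> i) & 1 for i in X) for m in range(2 ** len(X))]
for _ in range(200):
    H = set(random.sample(all_fns, random.randrange(1, len(all_fns) + 1)))
    shattered = sum(1 for B in subsets(X) if shatters(H, B))
    assert len(H) <= shattered  # inequality (1), with A = X
```

Printing the cases where `len(H) == shattered` versus `len(H) < shattered` gives concrete data against which to test the conjectured characterization.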

I am thinking about advanced texts similar to Pólya's *How to Solve It*. Quite a few good articles of this kind are published under the philosophy of mathematics, but that generally dwells in a very different domain.

Let $X$ be a scheme which is smooth and quasi-projective over $\operatorname{Spec} \mathbf{Z}[1/N]$, and let $\ell$ be a prime dividing $N$ (hence invertible on $X$). Then there is a regulator map $$ H^i_{\mathrm{mot}}(X, \mathbf{Q}(j)) \otimes \mathbf{Q}_\ell \to H^i_{\mathrm{et}}(X, \mathbf{Q}_\ell(j))$$ for any integers $i \ge 0$ and $j \in \mathbf{Z}$, where the left-hand side is motivic cohomology and the right-hand side is étale cohomology.

Is it expected that this map should always be injective? If so, is there a published reference where this conjecture is written down explicitly? I'm principally interested in the case when $j < i \le 2j$.

(EDIT. I found a version for projective varieties, avoiding the awkward boundary case $i = 2j$, in Bloch and Kato's article in the Grothendieck Festschrift. I'm still keen to know of any references treating the non-projective case.)

The branching laws for $SO(n-1)$ as a subgroup of $SO(n)$ are well known and easy to find. See, for example, the Wikipedia article:

https://en.wikipedia.org/wiki/Restricted_representation#CITEREFMurnaghan1938

I am having trouble finding the next most complicated example, i.e. $$ SO(n-2) \times SO(2) \subseteq SO(n). $$ Where can a description of these branching laws be found?

Edit: In particular I would like to know examples of representations of $SO(n-2) \times SO(2)$ which appear with multiplicity $1$ in any branching.

I'm working on this series:

$$\lim_{i\rightarrow\infty}\sum_{l=0}^{i-1}\left|\sum_{k=0}^l(-1)^{k+1}\binom{p+q+i+2}{k}\binom{p+q+l-k+1}{p+q} \frac{(p+1+l-k)^i}{(p+q+i+1)!}\right|$$

where $(p,q)$ are fixed non-negative integers. Computationally, it does decrease:

```
g[p_, q_, i_] := Sum[Abs[Sum[(-1)^(k + 1) Binomial[p + q + i + 2, k] Binomial[p + q + l - k + 1, p + q] (p + 1 + l - k)^i/(p + q + i + 1)!, {k, 0, l}]], {l, 0, i - 1}]
f[i_] := N[g[1, 0, i]]
Array[f, 50]
```

Output:

```
{0.666667, 0.875, 0.341667, 0.161111, 0.129365, 0.123289, 0.0864446, 0.0859799, 0.065676, 0.0656577, 0.0521962, 0.0521958, 0.04278, 0.04278, 0.0358899, 0.0358899, 0.0306693, 0.0306693, 0.0266021, 0.0266021, 0.0233605, 0.0233605, 0.0207277, 0.0207277, 0.0185547, 0.0185547, 0.0167366, 0.0167366, 0.0151971, 0.0151971, 0.0138799, 0.0138799, 0.0127427, 0.0127427, 0.0117526, 0.0117526, 0.0108844, 0.0108844, 0.0101181, 0.0101181, 0.0094377, 0.0094377, 0.00883025, 0.00883025, 0.00828528, 0.00828528, 0.00779415, 0.00779415, 0.00734969, 0.00734969}
```

But I don't know what technique might be useful to prove it. I tried to use asymptotic theory, abusing Stirling's formula in the hope of getting an expression asymptotic to a function that goes to zero, but I'm having no luck.
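For cross-checking the numerics (and exact evaluation in rational arithmetic, free of floating-point cancellation in the alternating inner sum), the same quantity can be computed in Python (a sketch mirroring the Mathematica definition):

```python
from fractions import Fraction
from math import comb, factorial

def g(p, q, i):
    # Outer sum of absolute values of the alternating inner sums,
    # computed exactly with Fractions.
    total = Fraction(0)
    for l in range(i):
        inner = sum(
            Fraction((-1) ** (k + 1)
                     * comb(p + q + i + 2, k)
                     * comb(p + q + l - k + 1, p + q)
                     * (p + 1 + l - k) ** i,
                     factorial(p + q + i + 1))
            for k in range(l + 1))
        total += abs(inner)
    return total

print([float(g(1, 0, i)) for i in range(1, 5)])
```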

Context: From this article, I'm trying to estimate the sum

$$\sum_{l=0}^{i-1}|A_{p,q}(l,i-l)-A_{p-1,q+1}(l+1,i-l-1)|$$

Is there an example of an ordinary and simple abelian variety $A$ over an algebraically closed field $K$ (of characteristic $p>0$) such that ${\rm End}(A)$ is not commutative? Note that the answer is no if $K=\overline{\bf F}_p$ (in that case ${\rm End}(A)_{\bf Q}$ is a CM field). My question is for other fields.