Let $f=\frac{1}{|x|}$ for $x\in\mathbb{R}^3$ and $\Omega=[-b,b]^3$. How can one construct a quadrature scheme to evaluate $$ \int_\Omega fg\,dx, $$ where $g$ is a smooth function?

I know there exists a transformation, the Duffy transformation, which eliminates the singularity at $x=0$ through its Jacobian. Indeed, let $$\begin{aligned}T&=\{(x_1,x_2,x_3);0\leq x_1\leq 1,\,0\leq x_2\leq x_1,\,0\leq x_3\leq x_1\}\\ \hat{T}&=\{(\hat{x}_1,\hat{x}_2,\hat{x}_3);0\leq \hat{x}_1\leq 1,\,0\leq\hat{x}_2\leq1,\,0\leq\hat{x}_3\leq 1\}\end{aligned}$$

Define $F:\hat{T}\rightarrow T$ by $$ F(\hat{x}_1,\hat{x}_2,\hat{x}_3)=(\hat{x}_1,\hat{x}_1\hat{x}_2,\hat{x}_1\hat{x}_3). $$ Then $$ \begin{aligned} \int_{T}\frac{1}{|x|}g\,dx&=\int_{T}\frac{g(x)}{\sqrt{x_1^2+x_2^2+x_3^2}}\,dx =\int_{\hat{T}}\frac{g(\hat{x}_1,\hat{x}_1\hat{x}_2,\hat{x}_1\hat{x}_3)}{\sqrt{\hat{x}_1^2+\hat{x}_1^2\hat{x}_2^2+\hat{x}_1^2\hat{x}_3^2}}\,\hat{x}_1^2\,d\hat{x}\\ &=\int_{\hat{T}}\frac{\hat{x}_1\,g(\hat{x}_1,\hat{x}_1\hat{x}_2,\hat{x}_1\hat{x}_3)}{\sqrt{1+\hat{x}_2^2+\hat{x}_3^2}}\,d\hat{x}, \end{aligned} $$ which can be handled by a standard quadrature scheme.
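The transformed integral above can be checked numerically with a tensor-product Gauss–Legendre rule, since the Duffy integrand is smooth. Here is a minimal sketch (Python/NumPy; the test case $g\equiv 1$ and the use of the symmetry of $[0,1]^3$ are my own choices for illustration, not part of the question):

```python
import numpy as np

def duffy_quadrature(g, n=20):
    """Integrate g(x)/|x| over the pyramid T = {0 <= x2, x3 <= x1 <= 1}
    via the Duffy map F(y) = (y1, y1*y2, y1*y3), using an n^3-point
    tensor-product Gauss-Legendre rule on the unit cube."""
    t, w = np.polynomial.legendre.leggauss(n)
    t, w = 0.5 * (t + 1.0), 0.5 * w            # map nodes/weights to [0, 1]
    Y1, Y2, Y3 = np.meshgrid(t, t, t, indexing="ij")
    W = w[:, None, None] * w[None, :, None] * w[None, None, :]
    # transformed integrand: y1 * g(y1, y1*y2, y1*y3) / sqrt(1 + y2^2 + y3^2)
    vals = Y1 * g(Y1, Y1 * Y2, Y1 * Y3) / np.sqrt(1.0 + Y2**2 + Y3**2)
    return float(np.sum(W * vals))

# For symmetric g the cube [0,1]^3 splits into three pyramids congruent to T
# (according to which coordinate is largest), so the cube integral is 3x the
# T-integral. With g = 1 this gives the box integral of 1/|x|, roughly 1.19.
g = lambda a, b, c: np.ones_like(a)
approx = 3.0 * duffy_quadrature(g, n=20)
print(approx)
```

Because the singularity has been removed, the rule converges spectrally; the values for $n=10$ and $n=30$ already agree to many digits.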

On the other hand, one can also avoid the singularity by omitting an $\varepsilon$-cube containing the origin.

However, these approaches are not enough for me when I want to study the convergence rate of numerical integration on $\Omega$. I hope the error can be bounded by $\Vert f\Vert_{0,2,\Omega}\cdot \Vert g\Vert_{1,2,\Omega}$ or $\Vert f\Vert_{1,1,\Omega}\cdot \Vert g\Vert_{1,2,\Omega}$.

Is there any other way to directly integrate an $L^2$ function, or a function with a singularity at a vertex, with the error bounded by an $L^2$ norm?

Thank you very much.

Suppose $(X,d)$ is a metric space. Let $B(u, R) = \{x \in X : d(u, x) \leq R\}$, i.e., a closed ball of radius $R$ in this metric space.

Suppose I have a family of balls $\{B(u_i, R_i)\}$. Then, what can we say about the smallest ball containing the intersection of the balls in the family? What is the smallest possible radius? How does the smallest possible radius change as a function of the center of the ball enclosing the intersection? What is the choice of center that minimizes the radius? Is it possible to give explicit characterizations (formulae) for each of these questions?

Note, I'm interested mainly in the case of two balls with nontrivial intersection, and in finite families of balls, but I would also be interested in (countably or uncountably) infinite families. I'm also most interested in the case $X = \mathbf{R}^n$ with the usual topology.

Perhaps this question is well understood? Any references could also be useful!
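For intuition in the two-ball planar case, the questions about the radius as a function of the center can be explored numerically. Below is a small sketch (Python/NumPy; the specific configuration, the boundary-sampling approach, and the restriction of candidate centers to the segment $[c_1,c_2]$ — valid here by symmetry — are all my own assumptions for illustration):

```python
import numpy as np

# Two congruent disks in R^2 whose intersection is a lens.
c1, r1 = np.array([0.0, 0.0]), 1.0
c2, r2 = np.array([1.2, 0.0]), 1.0

# Sample the lens boundary: the arc of each circle lying inside the other disk.
t = np.linspace(0.0, 2.0 * np.pi, 20001)
arcs = []
for c, r, c_other, r_other in [(c1, r1, c2, r2), (c2, r2, c1, r1)]:
    pts = c + r * np.stack([np.cos(t), np.sin(t)], axis=1)
    arcs.append(pts[np.linalg.norm(pts - c_other, axis=1) <= r_other])
boundary = np.vstack(arcs)

# Enclosing radius as a function of the center, restricted (by symmetry of
# this configuration) to the segment between c1 and c2.
def enclosing_radius(x):
    return np.linalg.norm(boundary - x, axis=1).max()

lam = np.linspace(0.0, 1.0, 1001)
radii = np.array([enclosing_radius(c1 + s * (c2 - c1)) for s in lam])
best = int(radii.argmin())
print(lam[best], radii[best])
```

In this symmetric example the optimal center is the midpoint and the optimal radius is the half-chord $\sqrt{r_1^2 - (d/2)^2} = 0.8$, strictly smaller than either input radius.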

Let $f\in C^\infty$ have bounded derivatives, i.e. $$ \sup_{x\in\mathbb{R}}|f^{(p)}(x)| = B_p < \infty$$ for every $p\ge 1$.

I would like to find a proof or a counterexample for the following conjecture:

For every such $f$ there exists $C_f>0$ such that $B_p \le (p+1)^{C_f p}$ for all $p\ge1$

That is, if all derivatives of a function are bounded, the bound cannot grow too quickly.

I am also curious about the converse statement

If $f$ is not constant and $\lim_{x\to\pm\infty}f(x)=0$, there exist $c_f, c_f'>0$ such that $B_p \ge c_f'p^{c_f p}$

That is, if all derivatives of a non-constant function are bounded, the bound cannot grow too slowly. Note that polynomials and sine functions are not counterexamples, due to the requirements that $f$ be non-constant and that $\lim_{x\to\pm\infty}f(x)=0$.
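The conjectured upper bound is easy to probe numerically for specific functions. The sketch below (Python with SymPy/NumPy; the choice $f(x)=e^{-x^2}$ and the constant $C_f=1$ are my own, purely for illustration) estimates $B_p$ by maximizing $|f^{(p)}|$ on a grid and checks $B_p \le (p+1)^{C_f p}$ for small $p$:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-x**2)                        # example with all derivatives bounded
grid = np.linspace(-8.0, 8.0, 20001)     # |f^(p)| decays fast, so a finite grid suffices

B = {}
for p in range(1, 11):
    fp = sp.lambdify(x, sp.diff(f, x, p), 'numpy')
    B[p] = float(np.max(np.abs(fp(grid))))

# Check the conjectured bound with C_f = 1; for the Gaussian, B_p grows
# only roughly like p^(p/2), so this holds with a lot of room to spare.
for p, Bp in B.items():
    assert Bp <= (p + 1) ** p, (p, Bp)
print(B)
```

This is of course only a sanity check for one function, not evidence for the general conjecture.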

On the Wikipedia page of the tetrahedron, there is a list of eight symmetry groups for a (possibly irregular) $3$-simplex (with unmarked faces). There is also a list on the 5-cell page, but it doesn't seem complete.

This seems elementary, but I cannot find a source to cite. Also, is there already a general classification of the symmetry groups of unmarked (possibly irregular) $d$-simplices (or an algorithm for enumerating them)?

I believe this is equivalent to asking about the symmetries of edge-colored complete graphs. I find a lot of literature on those with transitive groups, but not on the general case.

A function is called an involution if it is its own inverse, that is, $f^2(x)=x$. This implies that $f^3(x)=f(x)$, $f^4(x)=x$, and in general $f^{2k}(x)=x$ for $k\in\mathbb{Z}$. Does the converse hold, that is, is every function $f$ such that $f^{2k}(x)=x$ an involution?
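If the condition is read as "$f^{2k}(x)=x$ for *some* $k$", the converse fails: any 4-cycle satisfies $f^4=\mathrm{id}$ (so $f^{2k}=\mathrm{id}$ whenever $4\mid 2k$) without being an involution. A quick check in Python (the example permutation is mine):

```python
# f is the 4-cycle 0 -> 1 -> 2 -> 3 -> 0 on {0, 1, 2, 3}
f = {0: 1, 1: 2, 2: 3, 3: 0}.get

def iterate(f, n, x):
    """Apply f to x a total of n times."""
    for _ in range(n):
        x = f(x)
    return x

assert all(iterate(f, 4, x) == x for x in range(4))   # f^4 = id
assert any(iterate(f, 2, x) != x for x in range(4))   # but f^2 != id
```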

I am reading a paper where they prove Donsker's invariance principle for a sequence of dependent RVs. They do the following steps, which I can't follow so well. I won't write out the precise definitions since I don't think they are needed; what I don't understand is the logic behind some steps.

Suppose $W_n(t)=n^{-\frac{1}{2}}\sum\limits_{k=1}^{[nt]}X_k$, where $t\in [0,1]$, is the sequence we want to show converges weakly to $W$, where $W$ is our Brownian motion.

This paper looks at a different sequence $M_n$ and shows that $M_n$ converges weakly to $W$. It then says that since for all $T>0$, $\left\Vert\sup\limits_{t\in[0,T]}\left|W_n(t)-M_n(t)\right|\right\Vert_{\mathcal{L}^2}\to 0$, it follows that $W_n\to W$ weakly.

This last step is the one I am struggling to understand. How does the assumption on $M_n$ ensure that $W_n\to W$? It appears to only show the convergence of the FDDs (by Slutsky's theorem). Or does this also show tightness somehow? Or am I completely mistaken?

Thanks!

Given $n$ "data points" in $d$-dimensional Euclidean space

$$\mathbf{x}_j \in \mathbb{R}^d, \text{ for } j \in \{1,\dots,n\}$$

how does one find the smallest integer $m$ such that there exist $m$ "centre points"

$$\mathbf{c}_i \in \mathbb{R}^d, \text{ for } i \in \{1,\dots,m\}$$

where for each data point the closest centre is within a given radius $r$

$$r^2 \ge \min^{m}_{i=1} \|\mathbf{x}_j - \mathbf{c}_i\|^2, \forall j \in \{1,\dots,n\} $$

?

I think this is similar to or the same as the geometric set cover problem. In my case, the centre locations are *not* drawn from a discrete set but rather they are free points in $\mathbb{R}^d$.

I have a "bottom-up" merge-based heuristic that seems to work well in 2D, but would like to know if any known algorithms exist.
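For comparison with such heuristics, here is a minimal greedy baseline (Python/NumPy; my own sketch, not a known optimal algorithm): repeatedly place a centre at the first still-uncovered data point and discard everything within radius $r$. It always produces a feasible $m$, generally larger than the optimum.

```python
import numpy as np

def greedy_cover(X, r):
    """Return centres such that every point of X lies within distance r
    of some centre. Greedy: centre a ball at the first uncovered point."""
    X = np.asarray(X, dtype=float)
    uncovered = np.ones(len(X), dtype=bool)
    centres = []
    while uncovered.any():
        c = X[np.argmax(uncovered)]            # first still-uncovered point
        centres.append(c)
        uncovered &= np.linalg.norm(X - c, axis=1) > r
    return np.array(centres)

rng = np.random.default_rng(0)
X = rng.random((200, 2))                       # 200 points in the unit square
C = greedy_cover(X, r=0.2)
m = len(C)                                     # feasible, not necessarily minimal
print(m)
```

A slightly better variant recentres each ball at the centroid of the points it captures; for provable guarantees one would look at the literature on covering with disks of fixed radius, which (like geometric set cover) is NP-hard in general.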

One of the most salient aspects of the discipline of number theory is that from a very small number of definitions, entities and axioms one is led to an extraordinary wealth and diversity of theorems, relations and problems--some of which can be easily stated yet are so complex that they take centuries of concerted efforts by the best mathematicians to find a proof (Fermat's Last Theorem, ...), or have resisted such efforts (Goldbach's Conjecture, distribution of primes, ...), or lead to mathematical entities of astounding complexity, or required extraordinary collective effort, or have been characterized by Paul Erdős as "Mathematics is not ready for such problems" (Collatz Conjecture).

(Of course all branches of mathematics have this property to some extent, but number theory seems the prototypical case: it has attracted some of the most thorough efforts at axiomatization (Russell and Whitehead), Gödel's work is most relevant to the foundations of number theory (more than, say, topology), and there has been a great deal of work on quantifying complexity for number theory (more so than for differential equations, say).)

How do professional mathematicians best understand the foundational source of the complexity in number theory? I don't think answers such as "once relations are non-linear things get complicated" or their ilk are intellectually satisfying. One can refer to Gödel's Theorem to "explain" that number theory is so complicated that no finite axiomatization will capture all its theorems, but should we consider this theorem as, in some sense, the *source* of such complexity?

This is not an "opinion-based" question (though admittedly it may be more appropriate for metamathematics or philosophy of mathematics): I'm seeking theorems and concepts that professional mathematicians (particularly number theorists) recognize as being central to our understanding of the source of the breadth and complexity of number theory. Why isn't number theory trivial?

What references, especially books, have been devoted to specifically addressing the *source* of the deep roots of the diversity and complexity of number theory?

By contrast, I think physicists can point to a number of sources of the extraordinary wealth and variety of physical phenomena: It is because certain forces (somehow) act on fundamentally different length and time scales. The rules of quantum mechanics govern the interaction of a small number of subatomic particles. At sufficiently larger scales, though, quantum mechanics effectively "shuts off" and classical mechanics dominates, including in most statistical mechanics, where it is the large number of particles that is relevant. At yet larger scales (e.g., celestial dynamics), we ignore quantum effects. Yes, physicists are trying to unify the laws so that even the quantum mechanics that describes the interactions of quarks is somehow unified with gravitation, which governs at the largest scales... but the fact that these forces have different natural scales leads to qualitatively different classes of phenomena at the different scales, and hence the complexity and variety of physical phenomena.

A *(convex) polytope* is the convex hull of a finite number of points in Euclidean space (this is the so-called "vertex description"). Alternatively, it can be defined to be a **bounded** polyhedron (this is the "facet description").

Here a *(convex) polyhedron* is the intersection of a finite number of closed half-spaces.

My question is whether we can get a vertex description of polyhedra, and not just polytopes: namely, is there some reasonable way to topologically extend Euclidean space so that all polyhedra are convex hulls of finite sets of points in this larger space?

In Appendix C of his book in progress *Spectral Algebraic Geometry*, Lurie defines the **unseparated derived category** $\check{{\cal D}}({\cal A})$ (see Definition C.5.8.2 loc. cit.) associated to a Grothendieck abelian category. The unseparated derived category $\check{{\cal D}}({\cal A})$ is a stable presentable $\infty$-category equipped with a natural t-structure which is compatible with filtered colimits and whose heart is identified with ${\cal A}$. It is related to the usual derived $\infty$-category ${\cal D}({\cal A})$ by a t-exact functor $\check{{\cal D}}({\cal A}) \to {\cal D}({\cal A})$ which exhibits ${\cal D}({\cal A})$ as the "left separation" of $\check{{\cal D}}({\cal A})$. In addition, $\check{{\cal D}}({\cal A})$ itself enjoys the following universal property (see Theorem C.5.8.8 and Corollary C.5.8.9 of loc. cit.): if ${\cal C}$ is any other stable presentable $\infty$-category equipped with a $t$-structure which is compatible with filtered colimits, then restriction along the inclusion of the heart ${\cal A} \to \check{{\cal D}}({\cal A})$ induces an equivalence
$$ {\rm LFun^{{\rm t-ex}}}(\check{{\cal D}}({\cal A}),{\cal C}) \stackrel{\simeq}{\to} {\rm LFun^{{\rm ex}}}({\cal A},{\cal C}^{\heartsuit}) $$
where on the left hand side we have colimit-preserving t-exact functors $\check{{\cal D}}({\cal A}) \to {\cal C}$ and on the right hand side we have colimit-preserving exact functors ${\cal A} \to {\cal C}^{\heartsuit}$. This is indeed a very satisfying universal characterization of $\check{{\cal D}}({\cal A})$ together with its t-structure. However, for various reasons it can be useful to have a universal characterization of $\check{{\cal D}}({\cal A})$ *without* the t-structure. To see how this might go, note that t-exact colimit-preserving functors $\check{{\cal D}}({\cal A}) \to {\cal C}$ send short exact sequences in ${\cal A}$ to cofiber sequences in ${\cal C}$ and filtered colimits in ${\cal A}$ to filtered colimits in ${\cal C}$. One may hence consider the possibility that restriction along ${\cal A} \to \check{{\cal D}}({\cal A})$ induces an equivalence between *all* colimit-preserving functors $\check{{\cal D}}({\cal A}) \to {\cal C}$ on the one hand and all functors ${\cal A} \to {\cal C}$ which preserve filtered colimits and send short exact sequences in ${\cal A}$ to cofiber sequences in ${\cal C}$ on the other.

Question 1: Is this true?

A positive answer to Question 1 would imply that $\check{{\cal D}}({\cal A})$ admits the following explicit description: we would be able to identify it with the $\infty$-category of presheaves of spectra ${\cal A}^{{\rm op}} \to {\rm Sp}$ which send filtered colimits in ${\cal A}$ to filtered limits and send short exact sequences in ${\cal A}$ to fiber sequences of spectra.

As a variant of Question 1, one may hope that the connective part $\check{{\cal D}}({\cal A})_{\geq 0}$ enjoys the same universal characterization when ${\cal C}$ is now replaced with a Grothendieck prestable $\infty$-category, or maybe even any presentable $\infty$-category. Such a characterization would imply that $\check{{\cal D}}({\cal A})_{\geq 0}$ can be identified with the $\infty$-category of presheaves of **spaces** ${\cal A}^{op} \to {\cal S}$ which send filtered colimits to cofiltered limits and short exact sequences to fiber sequences of spaces.

Question 2: Is this true?

**Remarks:**
1) A positive answer to Question 2 would imply a positive answer to Question 1 since $\check{\cal D}({\cal A}) \simeq {\rm Sp}(\check{\cal D}({\cal A})_{\geq 0})$ is the stabilization of $\check{\cal D}({\cal A})_{\geq 0}$.

2) If ${\cal A} = {\rm Ind}(A_0)$ with $A_0$ an abelian category with enough projectives in which every object has finite projective dimension, then $\check{\cal D}({\cal A}) \simeq {\cal D}({\cal A})$ (Proposition C.5.8.12 in loc. cit.) and can also be described using complexes of projective objects. In this case $\check{\cal D}({\cal A})_{\geq 0} \simeq {\cal P}_{\Sigma}((A_0)_{{\rm proj}})$ is the $\infty$-category obtained from $(A_0)_{{\rm proj}}$ by freely adding sifted colimits. In particular, $\check{\cal D}({\cal A})_{\geq 0}$ admits a universal characterization as a presentable $\infty$-category and $\check{\cal D}({\cal A}) \simeq {\rm Sp}(\check{\cal D}({\cal A})_{\geq 0})$ admits a universal characterization as a stable presentable $\infty$-category (without the t-structure). However, even in this case, I don't know how to deduce from this particular universal characterization the other universal characterization I'm interested in (the missing part is to show that a coproduct-preserving functor $(A_0)_{{\rm proj}} \to {\cal C}$ extends in an essentially unique way to a functor $A_0 \to {\cal C}$ which sends short exact sequences to cofiber sequences).

Let $X$ be $\mathbb{C}\mathbb{P}^1$ with $n\geq 3$ points deleted. If I understand correctly, the universal covering of $X$ is isomorphic to the upper half plane as a complex analytic space.

**Q.** How can one describe the group of deck transformations of the universal covering as a subgroup of $\mathrm{PGL}(2,\mathbb{R})$?

I expect this should be a standard material. A reference would be helpful.

Given a group $G$ and a normal subgroup $N$ of $G$, is there an action of $G$ on $N$ such that, whenever $g,h$ are distinct members of the same $N$-coset, we have $g\cdot n\neq h\cdot n$? If not, then can this be done in the case where $G$ is abelian?

Take for granted that we may select a set of coset representatives for the N-cosets to use as "origins" for each N-coset. I'm working on some abstract analysis/descriptive set theory ideas, and got stuck on this thought because the algebra got a little too far out of my element. If it can't be done, a counterexample would be GREATLY appreciated.

Okay. Gotta update this question: In this setting, the groups are all either countably infinite or of size continuum.

Also it's important to actually be able to describe the action.

I am simply asking if it is possible to have $S^k=T^k$ for $S\neq T$, where $S,T$ are operators on analytic functions and $k$ is a positive integer.
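Yes, this is possible. One illustration (the specific example is mine): the operator $(Tf)(z) = f(iz)$ on analytic functions satisfies $T^4 = \mathrm{id} = \mathrm{id}^4$ while $T \neq \mathrm{id}$. The same phenomenon in a finite-dimensional stand-in, checked with NumPy using rotation by $\pi/2$:

```python
import numpy as np

I = np.eye(2)
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # rotation by pi/2

R4 = np.linalg.matrix_power(R, 4)
assert np.allclose(R4, np.linalg.matrix_power(I, 4))   # R^4 = I^4
assert not np.allclose(R, I)                           # yet R != I
```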

When we deal with a usual *category* $\mathbf A$, we require that *Ob*$(\mathbf A)$ (i.e., the collection of objects of the category $\mathbf A$) and *Mor*$(\mathbf A)$ (i.e., the collection of morphisms between objects in the category $\mathbf A$) each form a class (*proper* or not).

Then if we deal with a *metacategory* $\mathbf A$, we say that *Ob*$(\mathbf A)$ is a *conglomerate*, and so is *Mor*$(\mathbf A)$.

To form *conglomerates* we require:

- every class is a *conglomerate*,
- for every "property" *P*, one can form the *conglomerate* of all classes with property *P*,
- *conglomerates* are closed under analogues of the usual set-theoretic constructions, and
- the *Axiom of Choice for Conglomerates*; namely, for each surjection between *conglomerates* $f:X\rightarrow Y$, there is an injection $g:Y\rightarrow X$ with $f\circ g=\mathrm{id}_Y$.

So the question is: can we continue this hierarchy further to form the *metacategory of all metacategories* and so on?

I'm trying to figure out the time-complexity of the problem I describe below, which I call the semi-ordered Eulerian tour problem or the SOET problem. Either finding an efficient algorithm for this problem or proving that it is NP-hard would settle this question for me. Below I describe the problem, give two examples of instances of the problem, and explain what I have realized so far. To be precise, my questions are:

What is the time-complexity of this problem?

Is this problem NP-hard?

If it exists, what is an algorithm that solves this problem in polynomial time in terms of both $n$ and $k$? (see below)

**The SOET problem**

Given a 4-regular graph $G=(V,E)$, i.e. a graph where each vertex has degree 4, the problem is to find a semi-ordered Eulerian tour (defined below). Note that the graph is not assumed to be simple, so multi-edges and self-loops are allowed.

*Definition (semi-ordered Eulerian tour):*
Let $S\subseteq V$ be a subset of the vertices in $G$ and let $s=s_1s_2\dots s_k$ be a permutation of $S$. An Eulerian tour $U$ is a semi-ordered Eulerian tour with respect to $S$ if $U=As_1Bs_2\dots s_kXs_1Ys_2\dots s_kZ$ for some $s$, where $A,B,\dots,X,Y,\dots,Z$ are words with letters in $V\setminus S$. That is, $U$ traverses the vertices of $S$ once in some order and then again in the same order. To be clear, which specific $s$ satisfies this is unimportant.

Note that since $G$ is 4-regular an Eulerian tour always exists and furthermore it will pass each vertex exactly twice.

**Examples**

Here I give two examples of 4-regular graphs. In the first one, $G_1$, there is a SOET with respect to $\{a,b,c,d\}$, since there is an Eulerian tour $U=abcdaebced$. In the second example, $G_2$, there is none.
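The defining condition is easy to verify mechanically for a candidate tour: restrict $U$ to the letters of $S$ and test whether the resulting word of length $2k$ is some permutation $s$ written twice. A small Python sketch of my own, applied to the example tour above:

```python
def is_soet(tour, S):
    """Check whether the vertex sequence `tour` is semi-ordered w.r.t. S:
    its restriction to S must read s_1...s_k s_1...s_k for some
    permutation s of S."""
    sub = [v for v in tour if v in S]
    k = len(S)
    if len(sub) != 2 * k:
        return False                 # each S-vertex must appear exactly twice
    return sub[:k] == sub[k:] and set(sub[:k]) == set(S)

assert is_soet("abcdaebced", set("abcd"))        # the Eulerian tour of G_1
assert not is_soet("abcdaecbed", set("abcd"))    # second pass reordered
```

Of course the hard part of the problem is searching over Eulerian tours, not this check.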

**Current status**

I have realized that this problem is at least fixed-parameter tractable in terms of $k$, i.e. there is an algorithm with time-complexity $\mathcal{O}(f(k)p(n))$, where $k=|S|$, $n=|V|$ and $p$ is some polynomial. This can be seen by mapping the problem to $k!\,3^k$ edge-disjoint paths problems (DPPs), where one tries to find paths between pairs of vertices in $S$. This has to be done for all permutations of the vertices in $S$, which accounts for the factor $k!$. Furthermore, each vertex has to be split into two vertices connected by edges in one of three ways (vertically, horizontally or diagonally) to actually give an Eulerian tour; this accounts for the factor $3^k$. Note that this does not necessarily exclude the possibility of an algorithm which is also polynomial in $k$.

*Note regarding duplicate:*

*I have now also asked the same question on Theoretical Computer Science since after two weeks of no response here I thought that this might be the better place for this question.*

For infinite matrices, Jaffard's theorem states that if $(A(k,l))_{k,l\in \mathbb{Z}}$ is invertible (as an operator on $\ell^2(\mathbb{Z})$) and satisfies $$ |A(k,l)| \leq C (1+|k-l|)^{-r}, $$ for some $C>0$, then $$ |A^{-1}(k,l)| \leq C' (1+|k-l|)^{-r}, $$ for all $k,l\in \mathbb{Z}$ and for some $C'>0$. That is, if the initial matrix exhibits polynomial off-diagonal decay, so does its inverse.

I am wondering if there is a complementary version for finite matrices. Note that for finite matrices such a result would be interesting if $C'$ can be bounded in terms of $C$ and $r$.

This paper (https://www.math.umd.edu/~jjb/general_all_pq_revised.pdf) claims that $C'$ depends only on $r$, $C$, and some parameter $b$ such that $1>b\geq \|I-A\|_2$. However, there seem to be some critical typos in the surrounding discussion, and they refer (cf. p. 5 of the preprint linked) to Jaffard's paper for a proof that $C'$ depends only on certain constants (see. loc. cit. for which). The latter paper is in French, so I couldn't verify this.

So I have three questions:

- Does any one have a pointer to a version of Jaffard's theorem for finite matrices?
- Or alternatively, is there a clear argument to bound $C'$ in terms of $C,r,$ and possibly $b$?
- Supposing that there is indeed such a bound, can it be generalized to settings where $\|I-A\|_2 \geq 1$?
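Pending a pointer to a rigorous finite-dimensional statement, the behaviour can at least be probed numerically. The sketch below (Python/NumPy; the matrix family, the sizes, and the perturbation strength $0.2$ are all my own choices) builds matrices with off-diagonal decay of exponent $r$ and $\|I-A\|_2<1$, and measures the empirical constant $C' = \max_{k,l} |A^{-1}(k,l)|\,(1+|k-l|)^{r}$ across sizes:

```python
import numpy as np

C, r = 0.2, 3.0
results = {}
for n in (20, 40, 80):
    k = np.arange(n)
    D = np.abs(k[:, None] - k[None, :]).astype(float)
    A = np.eye(n) + C * (1.0 + D) ** (-r)            # polynomial off-diagonal decay
    assert np.linalg.norm(A - np.eye(n), 2) < 1.0    # so A = I + E is invertible
    Ainv = np.linalg.inv(A)
    # empirical decay constant: smallest C' with |A^-1(k,l)| <= C'(1+|k-l|)^-r
    results[n] = float(np.max(np.abs(Ainv) * (1.0 + D) ** r))
print(results)
```

In this experiment the empirical $C'$ stays bounded as $n$ grows, consistent with a size-independent bound in terms of $C$, $r$, and $\|I-A\|_2$; it is of course no substitute for a proof.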

In 1930, Émile Borel, the father of measure theory together with his student Lebesgue and a world-class expert in probability theory, published a short note *Sur les probabilités universellement négligeables* (*On universally negligible probabilities*) in Comptes rendus hebdomadaires des séances de l'Académie des Sciences, 190, pp. 537-40. Here it is:

http://gallica.bnf.fr/ark:/12148/bpt6k3143v.f539

According to this question

and to the best of my knowledge, this note has never been translated into any other language. I would be happy to translate it entirely upon request, despite my poor English.

Borel is concerned with Cournot's principle. As the bridge between the mathematical theory of probability and the real world of experience, Borel considers Cournot's principle to be the most important and fundamental principle of probability theory: he used to call it the *fundamental law of randomness* or the *unique law of randomness*. Hence, Borel seeks a quantitative version of Cournot's principle. He starts like this:

*We know that, in the applications of the calculus of probability, when the probability becomes extremely close to unity, it can and must be practically confounded with certainty. Carnot principle, the irreversibility of many phenomena, are well-known examples in which the theoretical probability equals practical certainty. However, we may not have, at least to the best of my knowledge, sufficiently specified from which limits a probability becomes universally negligible, that is negligible in the widest limits of time and space that we can humanly conceive, negligible in our whole universe.*

and concludes:

*The conclusion that must be drawn is that the probabilities that can be expressed by a number smaller than ${10^{ - 1000}}$ are not only negligible in the common practice of life, but universally negligible, that is they must be treated as rigorously equal to zero* [emphasized by Borel]

This *Cournot-Borel principle*

$\left\{ \begin{array}{ll} p \in \left[ 0, 10^{-1000} \right] \;\;\;\;\,\;\Rightarrow p = 0 & \quad \text{Borel-supracosmic probabilities}\\ p \in \left[ 1 - 10^{-1000}, 1 \right] \Rightarrow p = 1 & \quad \text{Borel-supercosmic probabilities} \end{array} \right.$

implies that there are only discrete probability measures/distributions in *every probabilistic question regarding our universe*.

Indeed, consider for instance a cumulative distribution function $F\left( x \right):\mathbb{R} \to \left[ {0,1} \right]$. Suppose $F\left( x \right)$ is left-continuous at some point ${x_0}$, that is $F\left( x \right)$ is continuous at ${x_0}$ since it is right-continuous by definition:

$\forall \varepsilon > 0\;\exists \eta > 0,\forall x,\; x_0 - \eta < x < x_0 \Rightarrow \left| F\left( x \right) - F\left( x_0 \right) \right| = F\left( x_0 \right) - F\left( x \right) = \text{Prob}\left( y \in \left( x, x_0 \right] \right) = \mu \left( \left( x, x_0 \right] \right) < \varepsilon $

In particular, by the Cournot-Borel principle

$\forall\, 10^{-1000} > \varepsilon > 0\;\exists \eta > 0,\forall x,\; x_0 - \eta < x < x_0 \Rightarrow \mu \left( \left( x, x_0 \right] \right) = 0$

Hence, either $F\left( x \right)$ is constant or it is discontinuous: $F\left( x \right)$ is nothing but a discrete cumulative distribution function or cumulative mass function.

Hence, following Borel, at least two different mathematical theories of probability would coexist: the mathematical, *metaphysical*, continuous one that relies heavily on measure theory, and the *scientific*, physical, discrete one where measure theory is almost irrelevant.

This *Borel-Cournot discrete theory of probability* is not necessarily inconsistent or trivial, because (formerly) continuous r.v.'s now have discrete probability measures. By construction and definition, it constitutes another potential answer or proposal to Hilbert's sixth problem or program. We can also talk about a *quantum theory of (classical and quantum?) probability* (not the theory of quantum probability) with Borel *probabilistic quanta* $b = 10^{-1000}$, analogous to the energy quanta in QM.

Has something like this theory ever been developed?

I have the following problem; if I can get some help, I would greatly appreciate it. I am trying to replicate a particular research paper and came across a problem:

Suppose $X$ is a birth-death process (representing population size) that evolves by:

$X \to X+1$ if a birth occurs, with rate $\mu$

$X \to X-1$ if a death occurs, with rate $\theta$

Suppose $T_A$ is the first passage time of a BD process from state $A$ to state $0$, and $T_B$ is the first passage time of another BD process from state $B$ to state $0$. They are independent.

I need to find $P(T_A < T_B)$, that is, the probability that a population of size $A$ goes to $0$ before a population of size $B$.

By definition:

$T_A = T_{A,A-1} + T_{A-1,A-2} + ... + T_{1,0}$

where $T_{i,i-1}$ represents the first passage time from state $i$ to state $i-1$.

I read some articles online that mentioned that if $T = T_A - T_B$, then the CDF defined by:

$G_T(t) := P(T \le t)$

is what I need. The paper suggested obtaining the CDF by inverse Laplace transform and evaluating it at $0$. It first suggested finding the Laplace transform of $T$, which is given by

$L[T](s) = E(e^{-sT}) = L[T_A](s)\, L[T_B](-s)$

Then it suggested taking the Laplace transform of $G_T(t)$, i.e. $L[G_T(t)]$.

Then, $L[G_T(t)](s) = \frac{L[T](s)}{s} = \frac{L[T_A](s)\, L[T_B](-s)}{s}$.

Then the paper suggests taking the inverse transform of the above, evaluated at $0$, to get $P(T_A < T_B)$.

Question:

1) Given the Laplace transform of the CDF, which is $L[G_T(t)]$, I want to use the inverse Laplace transform to obtain $G_T(t)$ evaluated at $0$. But, by definition, the inverse-Laplace algorithms in Python are all one-sided, on $[0,\infty)$. My random variable $T$ is the difference of two first passage times, $T = T_A - T_B$. Won't it take negative values?

2) In the paper, it says they shift the random variable $X$ under study by a constant $c$ such that $P(X + c > 0)$ is approximately $1$, then invert the corresponding one-sided Laplace transform. How would I do that in this context?
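Whatever inversion route is taken, $P(T_A < T_B)$ can be cross-checked by direct Monte Carlo simulation. A sketch in pure Python (my own; I assume constant rates with $\mu < \theta$, so state $0$ is reached almost surely — if the rates are per-capita, the loop changes accordingly):

```python
import random

def first_passage_time(start, mu, theta, rng):
    """Time for a birth-death chain (birth rate mu, death rate theta,
    with mu < theta) started at `start` to first hit 0."""
    t, x = 0.0, start
    while x > 0:
        t += rng.expovariate(mu + theta)               # exponential holding time
        x += 1 if rng.random() < mu / (mu + theta) else -1
    return t

def prob_A_hits_zero_first(A, B, mu, theta, trials=20000, seed=1):
    """Monte Carlo estimate of P(T_A < T_B) for independent copies."""
    rng = random.Random(seed)
    wins = sum(
        first_passage_time(A, mu, theta, rng) < first_passage_time(B, mu, theta, rng)
        for _ in range(trials)
    )
    return wins / trials

p = prob_A_hits_zero_first(A=2, B=8, mu=0.3, theta=1.0)
print(p)   # well above 1/2: the smaller population tends to die out first
```

This gives an independent estimate to validate the numerically inverted CDF at $0$.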

Thanks

In geometry, a kissing number is defined as the number of non-overlapping unit spheres that can be arranged such that they each touch another given unit sphere.

Let $\tau_n$ be the kissing number in $n$ dimension.

Kabatiansky-Levenshtein (1978) proved the following *asymptotic* upper bound: $$\tau_n \le 2^{0.401n(1+o(1))} = (1.32\dots)^{n(1+o(1))}$$

**Question**: What is the smallest $\alpha$ such that $\tau_n \le \alpha^n$, for all $n$?

By a volume argument, we can prove that $\tau_n \le \frac{\operatorname{Vol}(B(3))-\operatorname{Vol}(B(1))}{\operatorname{Vol}(B(1))}=3^n-1$, so $\alpha \le 3$.

Now $\tau_2 = 6$, so $\alpha \ge \sqrt 6 \simeq 2.45$. Moreover, for $n \le 24$, $\tau_n^{1/n} \le \sqrt 6$. Is it true that $\alpha = \sqrt 6$?
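For the dimensions where $\tau_n$ is known exactly ($n = 1, 2, 3, 4, 8, 24$), the quantity $\tau_n^{1/n}$ is indeed maximized at $n=2$, where it equals $\sqrt 6$. A quick check (Python; the table of exact values is the standard one):

```python
# Exactly known kissing numbers (dimensions 1, 2, 3, 4, 8 and 24).
tau = {1: 2, 2: 6, 3: 12, 4: 24, 8: 240, 24: 196560}

roots = {n: t ** (1.0 / n) for n, t in tau.items()}
assert max(roots, key=roots.get) == 2                    # maximized in dimension 2
assert all(v <= 6 ** 0.5 + 1e-12 for v in roots.values())
print(roots)
```

For the remaining dimensions only upper and lower bounds on $\tau_n$ are known, so this check uses only the exactly determined values.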

This post is motivated by arXiv:1710.00285, Section 5.

Given a cardinal number $\aleph_\alpha$, is it known whether or not $\aleph_\alpha=\beth_\alpha$ is independent of ZFC?

One could define $\mathrm{CH}(\aleph_\alpha)$ as the statement $\aleph_\alpha=\beth_\alpha$. For which cardinals is it true that $\mathrm{ZFC}\models\mathrm{CH}(\kappa)$?

Clearly, $\mathrm{ZFC}\models\mathrm{CH}(\aleph_0)$, and $\mathrm{CH}(\aleph_1)$ is equivalent to $\mathrm{CH}$ (and is thus independent of ZFC). Because $\mathrm{GCH}$ is independent of ZFC, $\neg\mathrm{CH}(\kappa)$ cannot be proven for any cardinal $\kappa$.

If $\mathrm{ZFC}\models\mathrm{CH}(\aleph_{\alpha+1})$, then $\aleph_{\alpha+1}=\beth_{\alpha+1}$. Thus, if we assume $\aleph_\alpha<\beth_\alpha$, we get a contradiction: $\aleph_\alpha<\beth_\alpha$ implies $\aleph_{\alpha+1}\leq\beth_\alpha<\beth_{\alpha+1}$. So, if $\mathrm{CH}(\kappa^+)$ is provable, then $\mathrm{CH}(\kappa)$ is also provable. Thus, for cardinals $\kappa<\aleph_\omega$, $\mathrm{CH}(\kappa)$ is independent of ZFC.

It is known that $\alpha\leq\aleph_\alpha$, and thus if $\beth_\kappa=\kappa$ then $\aleph_\kappa=\kappa$ (i.e. all $\beth$-fixed points are also $\aleph$-fixed points). Therefore $\beth_\kappa=\aleph_\kappa=\kappa$. So, for all $\beth$-fixed points $\kappa$, $\mathrm{CH}(\kappa)$ is provable from ZFC.

Are there any other known $\kappa$ with $\mathrm{CH}(\kappa)$ independent of ZFC? Is $\exists\kappa\neq\aleph_0(\mathrm{CH}(\kappa))$ independent of ZFC? (If so, $\beth$ fixed points are independent of ZFC as well.)

Edit: Of course the existence of $\beth$-fixed points is not independent of ZFC; they provably exist, since $\alpha\mapsto\beth_\alpha$ is a normal function. So ZFC actually proves the existence of cardinals $\kappa$ larger than $\aleph_0$ which satisfy $\mathrm{CH}(\kappa)$.