Let $\mathcal{C}$ be a simplicial category, such that for any two objects $X, Y\in\mathcal{C}$, $\text{Hom}_{\mathcal{C}}(X,Y)$ is a simplicial commutative monoid. Is the simplicial nerve $\text{N}(\mathcal{C})$ an $(\infty, 1)$-category?

If for any two objects $X,Y\in \mathcal{C}$, $\text{Hom}_{\mathcal{C}}(X,Y)$ were a simplicial abelian group, this would be true as a consequence of Prop. 1.1.5.10 in J. Lurie's Higher Topos Theory.

Thanks

This is the continuation of part 1, where all useful definitions and notations are given. J. D. Hamkins answered question 1 in that first part, proving that there can be no injection of the proper class On into the proper class W, so that we now know all about the six injection possibilities between On, W and V. Part 2 is about the answer to my question "Bijective-equivalent collections of proper classes in set theory" given by Ali Enayat on 14/03/2013. He proved that there exists a model of NBG where the proper class P(On) is such that: (i) evidently On injects in P(On), which injects in P(P(On))=V; (ii) but V=P(P(On)) does not inject in P(On), which itself does not inject in On, so that the Proper Cardinal level P(On)* is distinct from both On* and V*; (iii) moreover there can be no injection from P(On) into W, because we could chain it with an injection from On into P(On) and obtain an injection from On into W, which is impossible. So W*, which is already distinct from the distinct Proper Cardinal levels On* and V*, is also distinct from P(On)*. Thus Ali Enayat's model of NBG provides a case with (at least) four distinct Proper Cardinal levels.

Question 2: Is it possible to build an injection from W into P(On)?

Concerning surjections, we have that: (i) V surjects onto P(On), which itself surjects onto On; (ii) V surjects onto W, which itself surjects onto On.

Question 3: Is it possible to build a surjection from P(On) onto W, or a surjection of W onto P(On), so that On, P(On), W and V are linearly ordered by surjection?

Question 4: Is it possible to have more than four distinct Proper Cardinal levels in NBG ?

Gérard Lang

Let $f(x) = x^m+\sum_{j=0}^{m-1}f_{m-j}x^j\in P[x]$ be a **monic** polynomial over a field $P$ and let $f(x) = (x-\alpha_1)\cdot\ldots\cdot(x-\alpha_m)$ be a factorization of $f$ over an extension field $Q$ of $P$.

Then it is quite natural to consider a value (called the discriminant) $$\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)^2,$$ whose being zero or not determines whether or not the polynomial has any multiple roots (in any extension field of $P$).

But, when $f$ is not necessarily monic, say $f = f_0x^m+\sum_{j=0}^{m-1}f_{m-j}x^j $, a usual definition of the discriminant reads $$ f_0^{2m-2}\prod_{1\leq i<j\leq m}(\alpha_i-\alpha_j)^2.$$

So, my question is:

**What is the essence of the factor** $f_0^{2m-2}$?
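One way to make the question concrete: with the factor $f_0^{2m-2}$, the product over root differences becomes a polynomial in the coefficients $f_0,\ldots,f_m$ (it equals $\pm\operatorname{Res}(f,f')/f_0$), so the discriminant is defined without reference to the roots. A small numeric illustration (the quadratic is just an arbitrary example; for $ax^2+bx+c$ the formula gives exactly $b^2-4ac$):

```python
import numpy as np
from itertools import combinations

# Example non-monic polynomial f = 3x^2 + 5x + 1  (f0 = 3, m = 2)
coeffs = [3, 5, 1]
f0, m = coeffs[0], len(coeffs) - 1

roots = np.roots(coeffs)

# The usual definition of the discriminant, including the f0^(2m-2) factor
disc = f0 ** (2 * m - 2) * np.prod(
    [(roots[i] - roots[j]) ** 2 for i, j in combinations(range(m), 2)]
)

# For a quadratic ax^2 + bx + c this is b^2 - 4ac = 25 - 12 = 13;
# without the factor f0^2 = 9 we would get 13/9, not a polynomial in a, b, c.
print(disc.real)  # prints approximately 13.0
```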

Let $X$ be a compact Kähler manifold and $alb \colon X \to \mathrm{Alb}(X)$ be the Albanese morphism. I am interested in a number of questions about relations between the geometry of $X$ and the geometry of $alb$. There is a number of more or less obvious considerations (such as: $alb$ induces isomorphisms on $H^1$ and $Pic^0$; its fibers contain each point together with its rationally connected component, etc.). However, I can't find even partial answers to plenty of natural questions. For example, here are the simplest:

When is the Albanese map surjective?

And when is it injective?

When is the image smooth? I've heard that it can be singular, though I am able neither to construct a counterexample nor to find one in the literature.

I suspect that no simple answers can be given to these questions, but I'd be glad to hear any necessary and/or sufficient conditions on $X$ for the questions above.

As I have mentioned, these are only the simplest questions and I am interested in any non-trivial results on Albanese mappings.

Let $\mathcal{Q}$ be an irreducible quadric in $\mathbb{P}^n(k)$, with $n \geq 2$ and $k$ a finite field. Let $K_0(V_k)$ be the Grothendieck ring of $k$-varieties. It is well known (it appears) that the class $[\mathcal{Q}]$ in $K_0(V_k)$ is contained in $\mathbb{Z}[\mathbb{L}]$, where $\mathbb{L} = [\mathbb{A}^1(k)]$. My question is: what is an easy (elementary) way to prove this rigorously? The more proofs the better!

Let $M_{2,4}(\mathbb{R})$ be the set of real $2\times4$-matrices of rank $2$. For any $A\in M_{2,4}(\mathbb{R})$ and $1\leq i<j\leq 4$, let $p_{ij}$ be the corresponding $2\times 2$-minor of $A$. The image $K_{2,4}$ of the Plücker map $$ \mathcal{P}: M_{2,4}(\mathbb{R})\to\mathbb{R}^6,\;\;\; \mathcal{P}(A)=(p_{12}, p_{13}, p_{14}, p_{23},p_{24},p_{34}) $$ is the affine cone over the Klein quadric $Gr_2(\mathbb{R}^4)\hookrightarrow\mathbb{P}^5(\mathbb{R})$ given by the famous Plücker relation $p_{12}p_{34} - p_{13}p_{24} + p_{14}p_{23} = 0$.

Now, I am interested in finding "nice" (continuous/smooth/analytic...) functions $f:\mathbb{R}\to\mathbb{R}$, such that the map $F(x_1,\ldots,x_6) := (f(x_1),\ldots,f(x_6))$ preserves $K_{2,4}$.

I have the following partial example: let $K^0_{2,4} = \mathcal{P}(M^0_{2,4}(\mathbb{R}))$, where $M^0_{2,4}(\mathbb{R})\subset M_{2,4}(\mathbb{R})$ is the set of matrices that annihilate the column vector $(1,1,1,1)^t$. Then for $f(x)=A\sin(\alpha x)$ and $f(x)=A\sinh(\alpha x)$ we have $F(K^0_{2,4})\subset K_{2,4}$. It would also be nice to have other examples of functions with this last property.
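For what it's worth, the claimed invariance under $f(x)=\sin(\alpha x)$ is easy to test numerically; here is a small sanity check (a sketch, with randomly generated matrices annihilating $(1,1,1,1)^t$, and $\alpha=0.7$ an arbitrary choice):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def plucker(A):
    """The six 2x2 minors of a 2x4 matrix A, in the order
    (p12, p13, p14, p23, p24, p34)."""
    return np.array([np.linalg.det(A[:, [i, j]])
                     for i, j in combinations(range(4), 2)])

def klein(p):
    """Left-hand side of the Plucker relation p12 p34 - p13 p24 + p14 p23."""
    return p[0] * p[5] - p[1] * p[4] + p[2] * p[3]

for _ in range(100):
    A = rng.standard_normal((2, 4))
    A -= A.mean(axis=1, keepdims=True)   # rows now annihilate (1,1,1,1)^t
    p = plucker(A)
    assert abs(klein(p)) < 1e-9                  # true on all of K_{2,4}
    assert abs(klein(np.sin(0.7 * p))) < 1e-9    # the claimed invariance
print("sin preserves the Plucker relation on K^0_{2,4} in all trials")
```

Behind the check is the observation that on $K^0_{2,4}$ the coordinates satisfy $p_{12}=-(p_{13}+p_{14})$, $p_{24}=-(p_{13}+p_{14}+p_{23})$, $p_{34}=p_{13}+p_{23}$, and the trigonometric identity $\sin(x+y)\sin(x+z)=\sin x\,\sin(x+y+z)+\sin y\,\sin z$ then makes the transformed relation vanish identically.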

This question follows up on Bound on queries to a tree with unusual probabilities, where @fedja was able to disprove my conjecture under only constraints (1-4) below. I restate the relevant facts here for simplicity.

Consider a tree $\mathcal{T}(r)=(V,E)$ rooted at $r \in V$ and of maximal depth $n$. Let $\kappa_r:V\rightarrow[0,1]$ be such that

- $\sum_{v \in V} \kappa_r(v)^2 = 1$,
- $\kappa_r(r) = 0$,
- for $v \neq r$, $\kappa_r(v) = \sum_{c \leftarrow v} \kappa_r(c)$ where $c\leftarrow v$ means that $c$ is a child of $v$, and
- (no longer included)

Let $P(b,v)$ be the shortest path connecting $b$ to $v$ through $\mathcal{T}(r)$ and $L(v)$ be the set of all leaves in the subtree rooted at $v$. We now add the following constraints:

- For any two leaves $l_0,l_1 \in L(b)$ with most recent common ancestor $b \in V$, $\sum_{x \in P(b,l_0)} \kappa_r(x) = \sum_{x \in P(b,l_1)}\kappa_r(x)$.
- (Probably unnecessary) We know $\eta \in \left[\frac{1}{|L(r)|},n\right]$ such that $\eta \sum_{x \in L(r)}\kappa_r(x) = \sum_{x \in P(r,l_0)}\kappa_r(x)$, where $P(r,l_0)$ is the shortest path from $r$ to $l_0$.

We consider an algorithm that seeks to find a leaf of the tree by the following process,

- sample a random vertex $v$ with probability $\kappa_r(v)^2$
- let $v$ be a new root and repeat the process on the subtree $\mathcal{T}(v)$ with probabilities assigned by an updated function $\kappa_v$.

Constraints (1-5) apply to any tree/root, so a new function $\kappa_v$ must also be consistent with them; however, it need not be the same function. My conjecture, previously disproven by @fedja, is that under constraints (1-4) one requires something slightly looser than $\log(|V|)$ samples to find a leaf. At least naively, constraint (5) removes the possibility of @fedja's example, since amplitudes appear somewhat inversely proportional to depth. Constraint (6) adds no real mathematical content, but it is of algorithmic interest, so I would like a bound in terms of $\eta$. I can already prove that the algorithm runs in something less than $O\left(|L(r)|\log(n)\right)$ expected steps, but this estimate still seems loose.
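For intuition, here is a quick simulation on one concrete family satisfying the constraints: a complete binary tree with $\kappa$ constant on each level ($\kappa(v)\propto 2^{h-e}$ for a vertex at distance $e$ from the current root of a height-$h$ subtree), which meets (1)-(5) by symmetry after normalization. This is only a sanity check of the sampling process on my own example, not evidence for the general bound:

```python
import random

def steps_to_leaf(h, rng=random):
    """Samples needed to hit a leaf of a complete binary tree of height h.
    With kappa(v) proportional to 2^(h-e) at distance e from the root, and
    2^e such vertices, the sampled depth e has P(e) ~ 2^(-e), truncated at h."""
    steps = 0
    while h > 0:
        u = rng.random() * (1 - 0.5 ** h)   # normalize the truncated tail
        e, acc = 1, 0.5
        while acc < u:                      # invert the CDF of P(e) ~ 2^(-e)
            e += 1
            acc += 0.5 ** e
        h -= e                              # the sampled vertex is the new root
        steps += 1
    return steps

n = 20                                       # tree height, |V| = 2^(n+1) - 1
trials = [steps_to_leaf(n) for _ in range(10000)]
print(sum(trials) / len(trials))             # empirically on the order of n/2
```

Each step consumes a roughly geometric amount of depth (mean about $2$), so for this symmetric family the empirical sample count scales like $n/2 = O(\log|V|)$, consistent with the conjectured behavior.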

In the book Fibre Bundles by Husemoller, universal G-bundles are introduced as bundles over a homotopy type $BG$, for which the cofunctor $[-,BG]\rightarrow k_G(-)$ is a natural isomorphism.

In contrast, tom Dieck defines universal G-bundles as those whose total space EG is terminal in a suitable homotopy category, i.e. for every numberable free G-space $E$ there is a G-map $E\rightarrow EG$, unique up to G-homotopy.

How does the first definition imply the second? My aim is to pin down a unique homotopy type, in order to show that the total space of any universal bundle in the first sense is contractible, without restricting to CW complexes.

I have already unsuccessfully asked this question here https://math.stackexchange.com/questions/2416706/total-spaces-of-universal-principal-g-bundles

Let $P$ be a convex polygon (or any convex body in $\mathbb{R}^2$)
with perimeter of length $1$. Call a chord $c$ of $P$ *perimeter-halving*
if half the perimeter lies to one side of $c$
(and so half to the other side).
Here are three convex polygons with many perimeter-halving chords drawn. (Perimeter-halvings play a role in folding convex polygons to convex polyhedra.)

Define the *perimeter-halvings center* for $P$ to be a point $x$
that minimizes the maximum distance $\delta$ of any perimeter-halving chord to $x$.
So the perimeter-halving chords all nearly pass through $x$.

**Q1**. Does the perimeter-halvings center of $P$ coincide with the centroid of $P$? Or is it located at some other natural center?

Center of gravity marked. $\delta = 0.035$.

**Q2**. Which shapes achieve the extremes of $\delta$?

Clearly any centrally symmetric shape achieves $\delta=0$. Does any other shape realize $\delta=0$? Which shapes have the worst (largest) $\delta$?

And just out of curiosity, I would be interested to learn what are the elegant spirograph/astroid-like envelope curves visible in the figures.

Let $P\ne Q$ be an arbitrary pair of primes, $M=PQ$.

$S$ = the sum of all $m<M$ coprime to $M$ such that the equation $Px+Qy=m\ (1)$ has a solution in natural numbers.

$s$ = the sum of all $m<M$ for which this equation is not solvable.

Is the assertion $S-s=\dfrac{(P^2-1)(Q^2-1)}{6}$ correct?
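The identity can be checked by brute force for small prime pairs; here is a quick sketch (reading "natural numbers" as including zero, i.e. $x,y\ge 0$ — with $x,y\ge 1$ the stated formula already fails for $P=2$, $Q=3$):

```python
from math import gcd

def check(P, Q):
    """Brute-force S - s for the pair (P, Q), with x, y >= 0,
    and compare with (P^2 - 1)(Q^2 - 1)/6."""
    M = P * Q
    solvable = {P * x + Q * y
                for x in range(M // P + 1)
                for y in range(M // Q + 1)
                if P * x + Q * y < M}
    S = sum(m for m in range(1, M) if gcd(m, M) == 1 and m in solvable)
    s = sum(m for m in range(1, M) if m not in solvable)
    return S - s == (P * P - 1) * (Q * Q - 1) // 6

assert all(check(P, Q) for P, Q in [(2, 3), (2, 5), (3, 5), (3, 7), (5, 7), (5, 11)])
print("S - s = (P^2-1)(Q^2-1)/6 for all tested pairs")
```

Note that any $m<M$ not coprime to $M$ is automatically representable (it is a multiple of $P$ or of $Q$), so the unsolvable $m$ in the definition of $s$ are all coprime to $M$.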

It is known that the group $C(\Bbb R)$ is isomorphic to $U(1)$ (trivially), and that $C(\Bbb Q)$ is isomorphic to $\Bbb Z^r\times E(\Bbb Q)$, where all possible $E(\Bbb Q)$ are known. Are there similar results for other fields? I am especially interested in the case where $K$ is a number field, or the $p$-adic numbers (both $\Bbb Z_p$ and $\Bbb Q_p$).

Let $K$ be a number field and let $g$ be an integer. Let $\mathcal{A}(K,g)$ be the set of absolutely simple $g$-dimensional abelian varieties over $K$. Is the set $\{\mathrm{End}^0(A_{\mathbb{\overline{Q}}}):A\in \mathcal{A}(K,g)\}$ of division algebras a finite set?

Let $\mathbb{N}$ be the standard model of the natural numbers. Any statement in the language of arithmetic can be translated into a statement in the language of set theory by asking whether it is true of $\mathbb{N}$.

Let's say that a statement in arithmetic is "extraneous" if it is independent of PA. For example, ZFC proves Con(PA), which is extraneous.

My question is: is there a set of statements $S$ (in the language of set theory) such that $S$ proves no extraneous statements, and $S+PA=ZFC$ (or perhaps $S+PA \vdash ZFC$)?

Edit: We can also consider the same question, but with PA replaced with the set of arithmetical statements provable in ZFC.

Consider the function $$G(a)=E(Y^{a+1}) [E(Y^{a})]^{-1} - E(Z Y),$$ where $a$ is a real scalar, $Y$ and $Z$ are non-negative continuous random variables with expectation equal to one and finite second moments, $0<\mathrm{Corr}(Z,Y)< 1$, and $G'(a)>0$. I am trying to show that $G(a)=0$ has a solution. By substituting second-order Taylor-series approximations for the two expectations in which $a$ appears, and solving the resulting quadratic equation, I find a value $a_0$ that provides an approximate solution to $G(a)=0$. From this, I can write $$G(a_0)=D,$$ where $D$ is the difference between $E(Y^{a+1}) [E(Y^{a})]^{-1}$ and its Taylor-series-based approximation.

I am wondering whether it is valid to invoke the inverse function theorem and conclude, possibly under some additional assumptions, that an exact solution to $G(a)=0$ exists. And, if not, whether there is some other way of showing that such a solution exists. (I thought I could use the intermediate value theorem to this effect, but my attempts have failed; although it is clear that $a=0$ makes the function negative, I have not been able to show that some value of $a$ makes it positive.)

Any suggestions/comments would be greatly appreciated.
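As a data point: in at least one concrete case the intermediate value theorem does go through. If $Y$ is lognormal with $E(Y)=1$ (a hypothetical choice, not forced by the problem), then $E(Y^{a+1})/E(Y^a)=e^{a\sigma^2}$ exactly, so $G(a)=e^{a\sigma^2}-E(ZY)$ is continuous, strictly increasing, and crosses zero at $a=\log E(ZY)/\sigma^2$, using $E(ZY)=1+\mathrm{Cov}(Z,Y)>1$ under positive correlation:

```python
import math

sigma2 = 0.25        # variance parameter of log Y (assumed value)
EZY = 1.2            # E(ZY) = 1 + Cov(Z,Y) > 1 under positive correlation

def G(a):
    # For lognormal Y with E(Y) = 1, E(Y^{a+1}) / E(Y^a) = exp(a * sigma2)
    return math.exp(a * sigma2) - EZY

a0 = math.log(EZY) / sigma2   # exact root in this lognormal case
print(a0, G(a0))

# G(0) = 1 - EZY < 0 and G(a) -> infinity as a -> infinity, so the
# intermediate value theorem also locates the root without the closed form.
assert abs(G(a0)) < 1e-12 and G(0) < 0
```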

For $k\ge1$, $j\ge1$, let $$e_k(j)=\sum_{1\le i_1<\cdots<i_k\le j}i_1\cdots i_k.$$ We know that $e_k(j)$ is a polynomial in $j$ with coefficients depending on $k$. Is there an explicit formula for the coefficients?
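For what it's worth, the values $e_k(j)$ match unsigned Stirling numbers of the first kind, $e_k(j)=c(j+1,\,j+1-k)$, since $(x+1)(x+2)\cdots(x+j)=\sum_{k=0}^{j} e_k(j)\,x^{j-k}$ is a shifted rising factorial. A quick brute-force check of this identification:

```python
from itertools import combinations
from math import prod

def e(k, j):
    """Elementary symmetric polynomial e_k evaluated at 1, 2, ..., j."""
    return sum(prod(c) for c in combinations(range(1, j + 1), k))

def stirling1_unsigned(n, k):
    """Unsigned Stirling numbers of the first kind via the standard
    recurrence c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1_unsigned(n - 1, k - 1) + (n - 1) * stirling1_unsigned(n - 1, k)

for j in range(1, 8):
    for k in range(1, j + 1):
        assert e(k, j) == stirling1_unsigned(j + 1, j + 1 - k)
print("e_k(j) = c(j+1, j+1-k) for all tested k, j")
```

For example, $e_2(3)=1\cdot2+1\cdot3+2\cdot3=11=c(4,2)$.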

*Remark : I've found a rather trivial answer for this question and so very likely the premise of paralleling it with the Zsigmondy theorem is wrong, so this question might better be retracted. I'll give it a certain time, looking at reactions in the discussion on the MO meta site.*

I'm (recreationally) studying properties of the generalized Collatz problem ("gCp" for short) for $3x+\rho$, and I am looking for help on the question of the existence of cycles in gCp's with composite $\rho$ as parameter, in relation to cycles in gCp's with the prime factors of $\rho$ as parameters.

To explain the problem formally, I'd like to use here the compact "Syracuse"-notation of the gCp in the following way:
$$ T_\rho(a;[A]) := {3a+\rho \over 2^A} \qquad \qquad \text{where } A=v_2(3a+\rho)
$$
with an odd positive integer parameter $\rho$. By that definition the transformation $b=T_\rho(a;[A])$ acts on odd numbers $a,b$ only.

For an iterated transformation of $N$ steps I write simply $a_{N+1}=T_\rho(a_1;[A_1,A_2,...,A_N]) $. I denote $S$ as the sum of exponents $A_k$ and to make the following formulae short, I define $E_{N,S} = [A_1,A_2,...,A_N]$ as the vector for a fixed set of exponents $A_k$ having length $N$ and sum $S$.

I'm interested in the question on existence of cycles in $T_\rho()$ for some composite $\rho$ in relation to that in $T_\varphi()$ where $\varphi$ is a primefactor of $\rho$ .

For example, let $\varphi, \sigma \in \Bbb P$ be prime and $\rho=\varphi \cdot \sigma$ be composite. Let some vector $E_1 = E_{N_1,S_1}$ be such that $a_{N_1+1} = T_\varphi(a_1;E_1)=a_1$, i.e. a cycle.

It is easy to derive that then $b_1 = T_\rho(b_1;E_1)$ with $b_1= (\rho/\varphi) \cdot a_1 = \sigma a_1$ is also a cycle, now in $T_\rho()$.

Of course the analogue is true for a cycle in $T_\sigma()$ in relation to $T_\rho()$, with another set of exponents $E_2 := E_{N_2,S_2}$.

- a) So we have first, by some simple analysis, that a generalized Collatz problem gCp with $T_\rho()$ and a composite parameter $\rho$ admits the same cycles as the gCp's with the prime factors of $\rho$ as parameters (where only the elements $a_k$ are rescaled by some factor, as said above).
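The rescaling in a) is easy to check numerically: with $b=\sigma a$ one gets $3b+\rho=\sigma(3a+\varphi)$, so the 2-adic valuations, and hence the exponent vectors, agree along the whole orbit. A small sketch (using the cycle $1=T_5(1;[3])$ of $T_5()$ as a concrete instance, with $\sigma=7$):

```python
def v2(n):
    """2-adic valuation of a positive integer."""
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a

def T(rho, a):
    """One Syracuse step a -> (3a + rho) / 2^A with A = v2(3a + rho),
    returned together with the exponent A."""
    n = 3 * a + rho
    A = v2(n)
    return n >> A, A

# phi = 5 has the cycle 1 -> 1 with exponent vector [3]:  (3*1 + 5)/2^3 = 1
phi, sigma = 5, 7
rho = phi * sigma
assert T(phi, 1) == (1, 3)

# the lifted value b = sigma * 1 traces the same exponent vector under T_rho
assert T(rho, sigma * 1) == (sigma * 1, 3)   # b = 7 is a fixed point of T_35
print("the cycle of T_5 lifts to a cycle of T_35")
```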

The two interesting parts are the following which I'd like to understand/to prove.

b) The gCp of a composite parameter $\rho$ seems *in general* to always have additional cycles besides those of the gCp's defined by $\rho$'s prime factors.

This reminds me of the theorem of Zsigmondy about the prime factorization of Mersenne numbers (where he proves the existence of the then so-called "primitive prime factors"). Possibly it can be proven the same way, but I could not follow his proof. Let's call those additional cycles, analogously, "primitive cycles".

c) Different from the "general" case: when the prime factor $\sigma=3$ is involved, the composition $\rho=\varphi \cdot 3$ with that prime factor seems *not* to lead to such additional "primitive cycles" in $T_{\varphi \cdot 3}()$; this is again in analogy to Zsigmondy's observation on the Mersenne number $M_6 = M_{2 \cdot 3}$, which has no "primitive" factors.

Of course I make such statements only based on heuristics and on the basic *assumption that the Collatz conjecture for $T_1()$ is true*, i.e. that $T_1()$ has only the trivial cycle $1=T_1(1;[2,2,2,...,2])$.

So my question asks for help in proving the observations in b) and c) (under the assumption that the Collatz conjecture is true). Perhaps it would already be sufficient to make Zsigmondy's derivation transparent enough.

*Remark: for some examples, and possibly for a better explanation of the elementary analysis of this, you might look at the short treatise at my webspace (instead of the term "primitive cycles" I've used "unexplained cycles" there).*

Does somebody have experience with Jänich's book on linear algebra at American universities?

Would you recommend that book for advanced courses on linear algebra, or is it too easy/too difficult for that purpose?

Way back in elementary school, we used remainders in division problems. For example, $5\div2=2$ r $1$ means "$5$ divided by $2$ is $2$ with a remainder of $1$." It's commonly understood that, in this context, $2$ r $1$ is a pair of numbers, with $2$ being the quotient and $1$ being the remainder. However, the way the equation is written makes it look like $2$ r $1$ is one number. This annoyed me for the longest time, but now I've been able to come up with a new number system that actually makes sense of quantities like $2$ r $1$ as numbers in their own right. I call them "remainder numbers."

**Remainder Numbers 1.0: A prototype**

A remainder number is of the form $a_1$ r $a_2$, where $a_1$ and $a_2$ are both integers. Addition and subtraction of remainder numbers are done component-wise, and multiplication and division are defined as follows:

For any three integers $a$, $b_1$, and $b_2$: $a\times(b_1$ r $b_2)=ab_1+b_2$.

For any two integers $a$ and $b$: $a\div b=\lfloor\frac{a}{b}\rfloor$ r $(a$ mod $b)$.

(In this convention, fractions represent standard division, and $\div$ represents remainder number division. Also, $a$ mod $b$ is defined to be between $0$ and $b$, including $0$ but not $b$, allowing $a$ mod $b$ to be negative when $b$ is negative.)

Let's look at some examples of multiplication and division:

$5\div2=\lfloor\frac52\rfloor$ r $(5$ mod $2)=2$ r $1$

$2\times(2$ r $1)=2\cdot2+1=5$

$5\div-3=\lfloor-\frac53\rfloor$ r $(5$ mod $-3)=-2$ r $-1$

$-3\times(-2$ r $-1)=(-3\cdot-2)-1=5$

As you can see, multiplication "undoes" division, which is a very nice property to have. Unfortunately, this number system has a glaring problem: multiplication and division aren't closed over the remainder numbers. Let's fix that.
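Before fixing it, it's worth noting that Python's `divmod` already implements exactly the floor/mod convention above (remainder taking the sign of the divisor), so version 1.0 can be sketched in a few lines (a toy model, with a pair `(q, r)` standing for $q$ r $r$):

```python
def rdiv(a, b):
    """a / b as a remainder number: floor quotient and a remainder with
    the sign of b -- exactly Python's divmod convention."""
    return divmod(a, b)

def rmul(a, qr):
    """a * (q r r) = a*q + r, which undoes rdiv."""
    q, r = qr
    return a * q + r

assert rdiv(5, 2) == (2, 1)          # 5 / 2 = 2 r 1
assert rmul(2, (2, 1)) == 5          # 2 * (2 r 1) = 5
assert rdiv(5, -3) == (-2, -1)       # 5 / -3 = -2 r -1
assert rmul(-3, (-2, -1)) == 5       # -3 * (-2 r -1) = 5

# multiplication undoes division for every pair of integers (b nonzero):
assert all(rmul(b, rdiv(a, b)) == a
           for a in range(-20, 21) for b in range(-7, 8) if b != 0)
```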

**Remainder Numbers 2.0: Improving Multiplication and Division**

Fixing multiplication is pretty simple:

$(a_1$ r $a_2)\times(b_1$ r $b_2)=(a_1$ r $a_2)\times b_1+b_2=a_1b_1+a_2+b_2$

This derivation isn't exactly rigorous, but it's perfectly consistent with what multiplication represents in the remainder numbers, and it's a pretty reasonable definition. To multiply two remainder numbers, simply multiply the integer parts and add the remainders.

Fixing division is more difficult, so I'll build up a comprehensive definition case by case.

The first case is $a\div(b_1$ r $b_2)$. Since I want multiplication to undo division, this is essentially asking, "What number, when multiplied by $b_1$ r $b_2$, results in $a$?" Because of the way multiplication is defined, this is also equivalent to asking, "What number, after multiplying by $b_1$ and adding $b_2$, results in $a$?" This effectively reduces the problem to $(a-b_2)\div b_1$, which is already defined.

Since the divisor's remainder can easily be removed, the only real case left to consider is when the dividend has a non-zero remainder: $(a_1$ r $a_2)\div b$. Unfortunately, this reveals another glaring problem, which is that multiplication only returns integers.

**Remainder Numbers 3.0: Final Version**

To make division closed, I'll have to drastically update the definition of remainder numbers. From now on, a remainder number can be defined recursively as $a_1$ r $a_2$, where $a_1$ is an integer and $a_2$ is a remainder number. Alternatively, remainder numbers can be embedded as sequences of integers. Basically, think of a remainder number as a decimal expansion, but without a specified number base. Addition and subtraction are still done component-wise, and the definitions of multiplication and division are extended as follows:

Recursive notation: $(a_1$ r $a_2$ r $a_3)\times(b_1$ r $b_2$ r $b_3)=(a_1b_1+a_2+b_2)$ r $(a_3+b_3)$, where $a_3$ and $b_3$ are remainder numbers.

Sequence notation: $(a_1,a_2,a_3,a_4,\ldots)\times(b_1,b_2,b_3,b_4,\ldots)=(a_1b_1+a_2+b_2,a_3+b_3,a_4+b_4,\ldots)$

I'll skip a few steps in deriving the division formula, but here it is:

Recursive notation: $(a_1$ r $a_2)\div(b_1$ r $b_2$ r $b_3)=\lfloor\frac{a_1-b_2}{b_1}\rfloor$ r $[(a_1-b_2)$ mod $b_1]$ r $(a_2-b_3)$

Sequence notation:

$(a_1,a_2,a_3,\ldots)\div(b_1,b_2,b_3,b_4,\ldots)=(\lfloor\frac{a_1-b_2}{b_1}\rfloor,(a_1-b_2)$ mod $b_1,a_2-b_3,a_3-b_4,\ldots)$

**Analysis and Final Thoughts**

Admittedly, this number system looks incredibly ugly. Multiplication isn't associative, it doesn't distribute over addition, it's not one-to-one, and it doesn't even have an identity element. Also, the number system has zero divisors. (But hey, at least multiplication undoes division and is commutative, so I guess it has that going for it.) So, yeah, it's pretty clear why we use rational numbers instead of remainder numbers. Having said that, though, do you see any interesting properties or potential applications of this number system, or is it just the useless mess that it appears to be? Also, if you can find a way to modify this number system to make it more mathematically pleasing, that would be nice.

I am currently learning about optimization algorithms.

Quite often, the following assumption is used:

The Hessian of the objective function is uniformly bounded; that is, there exists $\kappa > 0$ such that, for all $x \in \mathbb{R}^n$,

$\| \nabla_{x x} f(x) \| \leq \kappa.$

I wonder, is there any intuitive explanation behind this assumption? Something that could help me understand its importance.
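One standard intuition (a sketch, assuming $f$ is twice continuously differentiable): the bound makes the gradient $\kappa$-Lipschitz, which yields the descent lemma underlying most step-size choices. By Taylor's theorem with integral remainder,

$$ f(y) = f(x) + \nabla f(x)^{\top}(y-x) + \int_0^1 (1-t)\,(y-x)^{\top}\nabla_{xx} f\big(x+t(y-x)\big)(y-x)\,dt, $$

and bounding the Hessian term by $\kappa\|y-x\|^2$ (with $\int_0^1(1-t)\,dt = \tfrac12$) gives

$$ f(y) \le f(x) + \nabla f(x)^{\top}(y-x) + \tfrac{\kappa}{2}\,\|y-x\|^2. $$

In particular, gradient descent with step size $1/\kappa$ decreases $f$ by at least $\tfrac{1}{2\kappa}\|\nabla f(x)\|^2$ per iteration, which is the usual starting point of convergence proofs.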

Given a differential graded algebra $(A_\bullet,d)$, is there a well-defined notion of the K-injective, K-projective, or K-flat dimension of a differential graded module, or even of the category of differential graded modules?

Moreover, if there is a well-defined notion, when is each of them finite? For example, does the Koszul complex $K^\bullet_R(M;f_1,\ldots,f_k)$ have computable K-dimensions?