I've been pondering this weakened version of the finding primes problem for a while:

Is there an algorithm which given $k$ outputs a prime $p > 2^k$ in time $F(\log_2(p))$?

This differs from the ordinary finding primes problem, which I'll state as:

Is there an algorithm which given $k$ outputs a prime $p > 2^k$ in time $F(k)$?

Equivalently, the weak version asks for a deterministic algorithm to find a prime $p > 2^k$ which runs in time $F(k)$ on infinitely many inputs $k$. Another formulation of the weak problem is to determine the values of $\displaystyle\liminf_{p\rightarrow\infty}{\frac{L(p)}{\log_2(\log_2(p))}}$ and $\displaystyle\liminf_{p\rightarrow\infty}{\frac{L(p)}{\log_2(p)}}$, where $p$ ranges over primes and $L$ is the Levin complexity.

I consider both problems to be defined with respect to the class of models of computation possessing a time-translation to a single-tape Turing machine satisfying $T' \in T^{1+o(1)} \cdot S^{O(1)}$, where $T$ is the time used in the other model and $S$ is the space. This class includes the various RAM models and multi-tape Turing machines. However, the equivalence is sharper than a polynomial time-translation, and we can witness a specific value of the time exponent with an algorithm satisfying $S \in T^{o(1)}$ in any model in the class. For example, searching an interval $[2^n, 2^n + 2^{\epsilon \cdot n})$ for a prime can be accomplished in time $2^{\epsilon \cdot n + o(n)}$ in all of these models because the AKS test is simultaneously in subexponential time and subexponential space. But we won't be able to place the AKS test itself in a particular time class like $n^{6+o(1)}$ unless we can adapt it to simultaneously use space $n^{o(1)}$; until then, the best we can say is that its runtime is $n^{O(1)}$ in every model.

If either finding primes problem can be solved in polynomial time, assuming model-invariance won't make it any harder to prove that, and (suspending disbelief) it can only help to prove a lower bound on the exponent of exponential-time algorithms. It's plausible that it would present an obstruction to improving the exponential upper bound — for example, an algorithm that finds a prime $p > 2^k$ using $p^{\frac{1}{3}}$ time and $p^{\frac{1}{3}}$ space on some particular machine wouldn't qualify as an improvement under model-invariance. That's because the time exponent doesn't translate when the space is exponential — $\frac{1}{3}$ becomes $\frac{b+1}{3}$ after a $T' = T \cdot S^b$ time-translation. The exponential-time algorithms I refer to below are all in subexponential space anyway, so this issue doesn't seem to actually come into play.

For the ordinary version, all we know is $F(k) \in 2^{0.525 \cdot k + o(k)}$, and I haven't been able to find any better bound for the weak version.
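For concreteness, the trivial baseline is $F(k) \in 2^{k+o(k)}$: Bertrand's postulate guarantees a prime in $[2^k, 2^{k+1})$, so scanning that interval with any polynomial-time primality test succeeds, and the $0.525$ exponent comes from the Baker–Harman–Pintz prime-gap bound shrinking the search window. A minimal Python sketch of the scan (my own illustration; I use a deterministic Miller–Rabin base set valid for $n < 3.3 \times 10^{24}$ rather than AKS, since we only need some polynomial-time test):

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin; the base set below is known
    to be correct for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def find_prime_above(k: int) -> int:
    """Scan (2^k, 2^(k+1)); Bertrand's postulate guarantees a hit."""
    n = 2**k + 1
    while not is_prime(n):
        n += 2
    return n
```

This runs in time $2^{k+o(k)}$ in the worst case; the open question is how far below $2^{0.525 k}$ the scan length can provably be pushed, even just infinitely often.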

A more recent result in this area is Rubinstein's theorem, which says $\text{FACTORING} \in \text{DTIME}(2^{\frac{n}{3} + o(n)})$. My understanding from the polymath page is that we don't know whether having factoring for free can help us find primes. Can it help us find primes infinitely often?

An attractive aspect of the finding primes problem is that there are many conjectures which imply improvements. For example, the Riemann hypothesis puts $F(k) \in 2^{\frac{k}{2} + o(k)}$, Cramér's conjecture implies $F(k) \in k^{O(1)}$, and $\text{P}=\text{NP}$ implies $F(k) \in k^{O(1)}$. For my version of the problem, we additionally have that infinitely many Mersenne or Fermat primes would give $F(k) \in k^{O(1)}$, infinitely many $n^2+1$ primes would give $F(k) \in 2^{\frac{k}{2}+o(k)}$, and Bunyakovsky's conjecture would give $F(k) \in 2^{o(k)}$. Some other conjectures have similar implications. A world where we can't find primes infinitely often would be a very weird place, even weirder than one where we just can't find primes!

That is my main motivation for studying this problem: to try to understand what that world would be like.

I'm looking for more information about finding primes infinitely often. In particular, is there any better time bound than what is known for the ordinary version? I haven't found it discussed in the polymath threads or elsewhere, and I haven't identified any open problems that any improvement implies, despite all the open problems that imply improvements. So for all I know, there's a simple and provably fast algorithm that I just can't think of myself.

There seem to be many ways to obtain a 1-category out of a 2-category:

- Dumb truncation. $\delta: 2\text{-Cat} \to \text{Cat}$ sends a 2-category $\cal K$ to the 1-category obtained by forgetting the 2-cells. This only works for strict 2-categories.
- Core truncation. $c : 2\text{-Cat} \to \text{Cat}$ sends a 2-category $\cal K$ to the 1-category obtained by taking isomorphism classes of 1-cells. Apparently this also works for bicategories.
- Geometric truncation. $\pi_{0*} : 2\text{-Cat} \to \text{Cat}$ sends a 2-category $\cal K$ to the 1-category obtained by applying the $\pi_0$ functor hom-wise (so it takes connected components of ${\cal K}(x,y)$).

Under which conditions is each of ${\cal K}^\delta$, ${\cal K}^c$, $\pi_{0*}{\cal K}$ a co/complete 1-category?

Non-strictness is not a big deal; I'm fine with strict 2-categories.

Let $M$ be a closed connected smooth manifold and let ${\rm Diff}^r(M)$ be the group of $C^r$-diffeomorphisms equipped with the compact-open $C^r$-topology. I am looking for a reference to the fact that the natural inclusion ${\rm Diff}^r(M)\to {\rm Diff}^1(M)$ is a homotopy equivalence for $1\leq r\leq \infty$.

Generalizing from 1-category theory, there's a simple definition of a "naive complex" in a stable $\infty$-category. Restricting to bounded, positively graded chain complexes, such a complex is a sequence of maps

$$ A_n \xrightarrow{d_n} A_{n-1} \xrightarrow{d_{n-1}} \ldots \xrightarrow{d_1} A_0 $$

with the property that $d_k d_{k+1} \simeq 0$. If I've not made any errors, then at first blush they seem to have a reasonable theory, along with a nice "realization" given by iteratively taking homotopy cofibers:

$$ |A| = \mathrm{cofib}(\mathrm{cofib}(\ldots) \to A_0) $$

that parallels taking the total complex of a bicomplex. This realization can also be given by the colimit of the diagram

$$ A_n \rightrightarrows^{d_n}_0 A_{n-1} \rightrightarrows \ldots \rightrightarrows^{d_1}_0 A_0$$

However, naive complexes don't seem to be studied. Instead, in Higher Algebra, Lurie proves a Dold–Kan correspondence for stable $\infty$-categories, asserting equivalences between the $\infty$-categories of:

- Simplicial objects
- Filtered objects (i.e. arbitrary $\mathbb{Z}_{\geq 0}$-indexed diagrams)
- Upper-triangular arrays with zeroes along the diagonal, in which every square is a pushout

and it is this last sort of thing that Lurie calls a complex. (Specifically, a $\mathbb{Z}_{\geq 0}$-complex)

It is only when passing to the homotopy category that Lurie says anything about a correspondence between simplicial objects and naive complexes.

So my question is whether this is an oversight; i.e.

Is the $\infty$-category of simplicial objects in a stable $\infty$-category equivalent to the $\infty$-category of (unbounded, positively graded) naive complexes?

and if the answer is no, what goes wrong?

**Problem.** Is there a prime number $p$ (preferably $p\le 3$) and an infinite set $A\subset\mathbb N$ such that for any distinct numbers $a,b\in A$ the Diophantine equation $x^p+ax=y^p+by$ has no positive integer solutions?
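Not an answer, but candidate pairs $(a,b)$ can at least be screened by brute force: for fixed $p$, search for positive solutions of $x^p+ax=y^p+by$ up to a bound. A throwaway Python sketch (the bound $N$ and the tested pairs below are arbitrary choices of mine, not from the problem):

```python
def has_solution(p: int, a: int, b: int, N: int = 200) -> bool:
    """Search for positive integers x, y <= N with x^p + a*x == y^p + b*y."""
    rhs_values = {y**p + b * y for y in range(1, N + 1)}
    return any(x**p + a * x in rhs_values for x in range(1, N + 1))
```

For instance, $p=2$, $a=1$, $b=4$ admits the solution $x=3$, $y=2$ (both sides equal $12$), while a descent on $(2x+1)^2=(2y+2)^2-3$ shows $p=2$, $a=1$, $b=2$ has no positive solutions at all.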

Fix an unweighted, weakly connected digraph $\Gamma$, possibly with loops, and of bounded degree.

Call $p$ a *common $n$-ancestor* of $q_0,q_1$ if there are $n$-paths (directed paths of length exactly $n$) from $p$ to each $q_i$, and a *last common ancestor* if intuitively there are no common ancestors in between, i.e. for every common $k$-ancestor $p' \ne p$ of $q_0,q_1$, the shortest path from $p$ to $p'$ has length $> n-k$.
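To make the definitions concrete, here is a throwaway Python sketch computing common $n$-ancestors directly from the definition (the four-node digraph is my own toy example, not from the question):

```python
# toy digraph as adjacency lists: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, 3 -> 0
graph = {0: [1, 2], 1: [3], 2: [3], 3: [0]}

def n_reachable(p, n):
    """Nodes reachable from p by directed paths of length exactly n."""
    frontier = {p}
    for _ in range(n):
        frontier = {w for v in frontier for w in graph[v]}
    return frontier

def common_n_ancestors(q0, q1, n):
    """All nodes p admitting n-paths from p to both q0 and q1."""
    return {p for p in graph
            if q0 in n_reachable(p, n) and q1 in n_reachable(p, n)}
```

In this example node $0$ is the unique common $1$-ancestor of $1$ and $2$, and following the cycle shows $1$ and $2$ are each common $3$-ancestors of the pair $\{1,2\}$.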

It is not hard to see that last common ancestors must be somewhat constrained: for example, if $p$ is a last common ancestor of $q_0$ and $q_1$ then every pair of shortest $n$-paths from $p$ to the respective $q_i$ must be disjoint; what's more, any path leading between two such respective $n$-paths must be of at least a certain length, depending on which points it connects between.

The question then is: how much further, and under what circumstances, can we constrain the possible last common ancestors of a pair of nodes $q_i$? I'm particularly interested in when we can infer some smallish upper bound on $n$, and especially when both $q_i$ have arcs pointing to a common target node (which may itself be one of the $q_i$, since loops are allowed).

Some properties which might make the problem easier, in roughly increasing order of how much I'd prefer to avoid them:

the graph

- has a loop at every node;
- is strongly connected;
- is undirected;
- has polynomial growth;
- is finite (but still "large" compared with the size of bounds I'm interested in).

Not sure how best to tag this one.

A couple of years ago, I taught an undergraduate class introducing various aspects of classical geometry, learning the (beautiful!) subject as I went. I found one thing that really bothered me: the definition of lines in the upper half-plane model of the hyperbolic plane felt rather *ad hoc*. Although there is a nice explanation in terms of differential geometry, the algebraist in me was hoping that they could be alternatively explained through Galois theory. I gave this a shot and was unable to get it to come together, and (as an obvious novice in the subject) I was hoping someone could point me to where this has already been considered.

$\newcommand{\Re}{\mathrm{Re}} \newcommand{\Im}{\mathrm{Im}} \newcommand{\R}{\mathbb R} \newcommand{\C}{\mathbb C} \newcommand{\RP}{\R\mathrm P} \newcommand{\CP}{\C \mathrm P} \newcommand{\Q}{\mathbb Q} \DeclareMathOperator{\Gal}{Gal}$First, a few relevant facts about the upper half-plane model:

- Lines take one of two forms: they are the (nonempty) complex solution sets to $\Re(z) = b$ or $z \bar z - b(z + \bar z) + c = 0$ for $b$ and $c$ real.
- Lines in the upper half-plane are determined by their pair of incidences on the horizon line $\RP^1$.
- Isometries of the upper half-plane fix the horizon $\RP^1$ and are also determined by their behavior on it.

A common feature of the above facts is the governance of the reals: everything in sight is Galois-invariant for $\Gal(\C/\R)$. Accordingly, rather than privilege the upper half-plane, I would like to interpret this model as the quotient of $\CP^1$ by the Galois action of $\Gal(\C/\R)$. In particular, isometries of this object are then inherited from those of $\CP^1$ which fix $\RP^1$: these are the expected real linear fractional transformations.

In an effort to recover the formulas for the lines, let's pick incidence points $p$ and $q$ in $\RP^1$ and go looking for a quadratic Galois-invariant expression that contains these points in its zero locus. (Forgive me for working away from infinity in what follows.) We might first write down $(z - p)(z - q)$—and this obviously does not have the expected solution set in $\CP^1$, as it is a polynomial and hence has finitely many zeroes. The next thing to try, knowing that $\bar z$ appears in the expected equation, is averaging this polynomial with its Galois conjugate: $$\frac{(z - p)(z - q) + (\bar z - p)(\bar z - q)}{2} = \Re(z^2) - (p+q)\Re(z) + pq.$$ While this does have an infinite solution set, it again is not the expected result. Instead, though, if we tweak the original expression by shuffling the Galois action through, we arrive at $$0 = \frac{(z-p)(\bar z - q) + (\bar z - p)(z - q)}{2} = z \bar z - c(z + \bar z) + (c^2 - r^2)$$ for $c = (p+q)/2$ and $|r| = |p - q|/2$.
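As a quick sanity check of that last identity (a throwaway numeric verification of my own, not part of the derivation), both sides can be compared at random points $z \in \C$, $p, q \in \R$:

```python
import random

def lhs(z: complex, p: float, q: float) -> complex:
    """Galois-shuffled average: ((z-p)(zbar-q) + (zbar-p)(z-q)) / 2."""
    zb = z.conjugate()
    return ((z - p) * (zb - q) + (zb - p) * (z - q)) / 2

def rhs(z: complex, p: float, q: float) -> complex:
    """Expected circle equation: z*zbar - c(z + zbar) + (c^2 - r^2)."""
    c = (p + q) / 2
    r = abs(p - q) / 2
    return z * z.conjugate() - c * (z + z.conjugate()) + (c**2 - r**2)

random.seed(0)
checks = [
    (complex(random.uniform(-5, 5), random.uniform(-5, 5)),
     random.uniform(-5, 5), random.uniform(-5, 5))
    for _ in range(100)
]
assert all(abs(lhs(z, p, q) - rhs(z, p, q)) < 1e-9 for z, p, q in checks)
```

The agreement is exact up to floating-point error, matching the expansion $(z-p)(\bar z - q) + (\bar z - p)(z - q) = 2z\bar z - (p+q)(z+\bar z) + 2pq$.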

At this point, I'm excited to have guessed the right answer, but I'm unable to conceptualize why I should have jumped right here from the start. Accordingly, my question is a bit nebulous:

**Question:** Is there a conceptual reason why it's reasonable to have shuffled the Galois conjugation action through the expression for the real quadratic with roots $p$ and $q$? Is there a purely Galois perspective on why this equation deserves to be called anything like a "line" in $\CP^1\;//\;\Gal(\C/\R)$?

**Question:** What happens if $\C/\R$ is replaced with a quadratic extension like $E/\Q$ or $E_p / \Q_p$? Is there a fruitful theory of plane geometry here? (What about larger extensions?)

I previously asked a version of this question on math.stackexchange, and while I learned something, I don't think it got properly answered.

I have a problem: suppose a bounded function $u(x, y) \in C^{2}(\mathbb{R}^{2})$ is a solution of $u_{xx}-2u_{xy}+\beta u_{yy}+2\beta u_{x}-\beta^{2} u_{y}=0$. Is it true that $u\not\equiv\operatorname{const}$ when a) $\beta>5$, b) $\beta<-5$?

I reduced the equation to canonical form but have no other ideas. Do you have one?

EDIT

I get the following canonical form: $(\beta-1)u_{rr}+(\beta^{2}-1)u_{tt}+(2\beta-\beta^{2})u_{r}-2\beta\sqrt{\beta^{2}-1}\,u_{t}=0$ in coordinates $r=x+y$, $t=-\sqrt{\beta^{2}-1}\,x$.

It is known that $\alpha$-Hölder regularity imposes restrictions on how badly behaved a graph can be in the sense of dimension. To put it precisely, let $I=[0,1]$ and $f:[0,1] \to \mathbb{R}$. Then, if $f$ is $\alpha$-Hölder, its graph $ \Gamma_f = \{(x,f(x)):x\in I\}$ has Hausdorff dimension at most $2-\alpha$. Equivalently, if $\operatorname{dim}_H(\Gamma_f) > 2 - \alpha$, then $f$ can't be $\alpha$-Hölder.

The previous example is an instance of the following principle: if the graph of $f$ is bad enough in the sense of dimension, then the regularity of $f$ is bounded above. By "bounded above" I mean "less than $C^\alpha(I)$" as in the previous example.

Are there reverse implications in the spirit of "upper bound on dimension of $\Gamma_f$ implies some regularity of $f$"? I'm not necessarily talking about being $C^\alpha$, it can be some weaker notion of regularity: being in some Sobolev space, agreeing with a $C^\alpha$ function on some big set, being approximable in a particularly good/efficient way, etc...

How should I go about proving that $K_{3,3}$ is not planar using the Jordan curve theorem?

I would like to know some of the open problems on integrals, series, and number theory that carry rewards. Also, is there any site where one can earn small monetary rewards by solving mathematical problems?

Can it be shown, on the assumption that $ZF$ is consistent, that there is a model of $ZF$ in which the reals cannot be well-ordered but there does exist a set of reals which is not Lebesgue measurable?

Let $\overline {\mathcal{M}}_{g_{i}, n_{i}}, \ i \in \{1, 2\},$ be the moduli stack of pointed stable curves of type $(g_{i}, n_{i})$ over a finite field $\mathbb{F}_{p}$. For any algebraic stack $\mathcal{X}$, we write $|\mathcal{X}|$ for the set of points of $\mathcal{X}$ (cf. the Stacks Project, Properties of Algebraic Stacks, Definition 4.2). My question is as follows:

Is the natural surjective map $$|\overline {\mathcal{M}}_{g_{1}, n_{1}}\times_{\mathbb{F}_{p}}\overline {\mathcal{M}}_{g_{2}, n_{2}}| \rightarrow |\overline {\mathcal{M}}_{g_{1}, n_{1}}|\times_{|\operatorname{Spec} \mathbb{F}_{p}|}|\overline {\mathcal{M}}_{g_{2}, n_{2}}|$$ a homeomorphism?

I know that the morphism between the set of points of the fiber product of two algebraic stacks and the fiber product of the sets of points of two algebraic stacks is not a homeomorphism in general. Does the special case mentioned above hold?

I am writing a paper on the topological structure of the Golomb space (defined here) and arrived at the following question:

**Question 1.** Is it true that for a number $a\in\mathbb N$ the equation $x^2+x=a$ has an integer solution $x$ if and only if for any number $b\in\mathbb N$ coprime with $a$, the equation $x^2+x=a \mod b$ has a solution $x$ (i.e., a solution in the ring $\mathbb Z/b\mathbb Z$)?

In fact, I need a more general fact.

**Question $2^n$.** Is it true that for any number $n\ge 0$ and any number $a\in\mathbb N$ the equation $(x^2+x)^{2^n}=a$ has an integer solution $x$ if and only if for any number $p\in\mathbb N$ coprime with $a$, the equation $(x^2+x)^{2^n}=a \mod p$ has a solution $x$ (i.e., a solution in the ring $\mathbb Z/p\mathbb Z$)?

We can also ask a more general

**Problem.** For which monic polynomials $f\in\mathbb Z[x]$ does the following local-to-global principle hold:

$(*)$: for every $a\in\mathbb N$, the equation $f(x)=a$ has a solution in $\mathbb Z$ if and only if for any $b\in\mathbb N$ relatively prime to $a$, the equation $f(x)=a \mod b$ has a solution in the ring $\mathbb Z/b\mathbb Z$?
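For small cases, the principle in Question 1 can at least be probed by brute force: compare solvability of $x^2+x=a$ over $\mathbb Z$ with solvability modulo every $b$ coprime to $a$ up to a cutoff. A Python sketch (the cutoffs $B$ and `bound` are arbitrary choices of mine):

```python
from math import gcd

def solvable_mod(f, a, b):
    """Does f(x) = a have a solution in Z/bZ?"""
    return any(f(x) % b == a % b for x in range(b))

def solvable_in_Z(f, a, bound=10**3):
    """Does f(x) = a have an integer solution with |x| <= bound?"""
    return any(f(x) == a for x in range(-bound, bound + 1))

def locally_solvable(f, a, B=50):
    """Check solvability mod every b <= B coprime to a."""
    return all(solvable_mod(f, a, b)
               for b in range(2, B + 1) if gcd(a, b) == 1)

f = lambda x: x**2 + x
```

For example $a=6$ is solvable both globally ($x=2$) and locally, while $a=5$ fails already modulo $2$ (since $x^2+x=x(x+1)$ is always even), consistent with the "only if" direction.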

**Remark.** I have edited the second question a little bit.

Let $G$ be a finite group. I've seen three options discussed for making $G$-cell complexes: in increasing generality, one might allow $X_n$ to be constructed from $X_{n-1}$ by attaching cells of the form

1. $G/H\times D^n$, where $D^n$ has trivial $G$-action, or
2. $G/H\times D(V)$, where $D(V)$ is the unit disk of a $G$-representation $V$, or
3. $G\times_H D(V)$, where $D(V)$ is the unit disk of an $H$-representation $V$

(The subgroup $H$ and representation $V$ are allowed to vary for different cells. Cells of type 1 are what are used in the standard definition of $G$-CW complex.)

Megan Shulman, on p.47 of her thesis (link goes directly there), says regarding standard $G$-CW complexes that

If we are interested in putting CW structures on spaces found in nature, this is a very restrictive definition...

However, I seem to remember that even for a $G$-cell complex $X$ constructed with cells of type 3, we can always rearrange (triangulate?) somehow to give $X$ a standard $G$-CW complex structure. Is that true or false? Which papers look into this?

Ferland and Lewis, on p.23 of their paper (link goes directly there), say that cells of type 3

... are of interest because they arise naturally from equivariant Morse theory (see, for example, [**21**]). Further, if $G$ is a finite abelian
group, then the usual Schubert cell structure on Grassmannian manifolds generalizes in an obvious way to a generalized $G$-cell structure on the Grassmannian manifold $G(V, k)$ of $k$-planes in some $G$-representation $V$ (see Chapter 7).

[**21**] A. G. Wasserman, Equivariant differential topology, Topology 8 (1969), 127–150.

However, skimming that section of Wasserman's paper (link goes directly there), I must admit I don't see where anything like cells of type 3 shows up, much less a demonstration of their usefulness. Could anyone explain their relationship to equivariant Morse theory?

Also, in their Chapter 7 (link goes directly there; look at the bottom of p.83 & top of p.84 in particular), I'm not sure I see where cells of type 3 are necessary, because it seems like $W$ is inherently a $G$-representation, and hence they are only obtaining cells of type 2. Could anyone clarify this?

So to summarize, I'd greatly appreciate any of the following:

- simple / natural examples that illustrate the additional flexibility of each generalized cell:
  - spaces which can easily be seen to have a cell structure when cells of type 3 are allowed, but not when only cells of type 2 are allowed
  - spaces which can easily be seen to have a cell structure when cells of type 2 are allowed, but not when only cells of type 1 are allowed
- any other reasons why cells of type 2 and 3 are reasonable
- a citation / proof / counterexample for this half-remembered "fact" that cells of type 1 suffice

Let $C$ be a small category: we define its *nerve* $(N(C)_k)_k$ as the following simplicial set: $N(C)_0=Ob(C)$ (the set of objects), $N(C)_1=Mor(C)$ (the set of all morphisms) and $N(C)_k$ is the set of all $k$-tuples of composable morphisms $(f_1,...,f_k)$. This is equipped with the face maps $d_i$ defined by $d_i(f_1,...,f_k):=(f_1,...,f_{i-1},f_i \circ f_{i+1},f_{i+2},...,f_k)$, except for $d_0$ and $d_k$, which omit the first and the last morphism respectively. Degeneracies act as follows: $s_j(f_1,...,f_k):=(f_1,...,f_{j-1},id,f_j,...,f_k)$.

When $G$ is a group then $G$ can be viewed as a small category with only one object and the set of morphisms being $G$ itself. Thus everything is composable and $N(G)_k=G^k$.
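As a sanity check on the indexing conventions above, the simplicial identity $d_i d_j = d_{j-1} d_i$ for $i<j$ can be verified by brute force for $N(G)$ with a small group, say $\mathbb Z/3$ written additively (a throwaway sketch of mine; composition is addition mod 3):

```python
from itertools import product

N = 3  # work in the group Z/3; composition of morphisms = addition mod 3

def face(i, t):
    """i-th face map on N(G)_k = G^k: d_0 drops the first entry,
    d_k drops the last, and d_i composes the i-th and (i+1)-st maps."""
    k = len(t)
    if i == 0:
        return t[1:]
    if i == k:
        return t[:-1]
    return t[:i - 1] + ((t[i - 1] + t[i]) % N,) + t[i + 1:]

# verify d_i d_j = d_{j-1} d_i for all i < j on every 3-simplex
ok = all(
    face(i, face(j, t)) == face(j - 1, face(i, t))
    for t in product(range(N), repeat=3)
    for j in range(1, 4)
    for i in range(j)
)
```

All $27 \cdot 6$ instances of the identity check out, so the conventions above really do define a simplicial set.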

One thus arrives at a well-defined simplicial set. Therefore one can speak about its geometric realization $|N(G)|$: there are in fact two possible versions, the standard one, sometimes called the "thin" realization, and another version, called the "thick" realization, which is denoted by $\| N(G) \|$ and which is defined using only the relations involving the face maps. See also this discussion.

I was told that this geometric realization is in fact a (possible version of) the classifying space $BG$. I wonder why this is true. This question is motivated by this discussion. My original motivation was the statement that group cohomology coincides with the (singular, for example) cohomology of the classifying space; with the help of the cited discussion I was able to understand directly why that is true (without invoking the fact that $BG$ is the geometric realization of $N(G)$), so I'm posting this question as a separate topic. So my question is:

Why is the geometric realization of $N(G)$ (homotopy equivalent to) $BG$?

Some ideas can be found in Hatcher's book "Algebraic Topology"; however, if I understand correctly, the face maps $d_i$ from that book would act rather as $(g_1,...,g_k) \mapsto (g_1,...,g_{i-1},g_{i+1},...,g_k)$, which is not the same as for $N(G)$.

EDIT: The classifying space is not unique up to homeomorphism, but it is unique up to homotopy: the definition most familiar to me is the abstract definition as the quotient of a contractible principal $G$-bundle $EG$. There is also a specific *construction* due to Milnor using the infinite join, which is also familiar to me.

EDIT 2: If I understand the answer below correctly: each element of $\mathcal{E}G_n$, being of the form $(g_0,g_1,...,g_n)$, may be presented in the form $(g_0',g_0'g_1',...,g_0'g_1'...g_n')$. Solving for $g_0',...,g_n'$ gives $g_0'=g_0,g_1'=g_0^{-1}g_1,...,g_n'=g_{n-1}^{-1}g_n$. Thus we obtain a bijective map $\alpha:\mathcal{E}G_n \to \mathcal{E}G_n$ given by $\alpha(g_0,...,g_n)=(g_0,g_0^{-1}g_1,...,g_{n-1}^{-1}g_n)$. Let $d_i,s_j$ be the face maps and degeneracies which you described (omitting the $i$-th vertex and repeating) and $d_i',s_j'$ be the faces and degeneracies as in the definition of the nerve of the category. Then $\alpha s_j=s_j' \alpha$ and $\alpha d_i=d_i' \alpha$ for all $j$ and for all $i$ *except* $i=0$, where we obtain a difference in the zeroth coordinate: $(\alpha d_0)(g_0,...,g_n)=(g_1,...)$ and $(d'_0 \alpha)(g_0,...,g_n)=(g_0^{-1}g_1,...)$ (the remaining entries are the same). Therefore if we pass to the quotient we get an equality. This suggests that there is something correct in this procedure; however, I'm a little bit confused by the following: $\mathcal{E}G$ is a simplicial set, and its geometric realization is just a topological space. The bar notation which you are using refers to $EG/G$, which is also a topological space. However, you described face and degeneracy maps for $EG/G$ which look like the relevant maps for $NG$, but the former is just a topological space while the latter is an (abstract) simplicial set.

In the 2010 paper Quantum Money from Knots Farhi, Gosset, Hassidim, Lutomirski, and Shor give a doubly stochastic Markov chain acting on grid diagrams. Transitions in the Markov chain are permutations of the configuration space of grid diagrams, given by random Cromwell moves. They imply that the security of their quantum money system is dependent on the rapid *polynomial-time* mixing of their Markov chain. They comment that, at least at the time it was written, even for the unknot it wasn't known if there were two equivalent grid diagrams requiring a superpolynomial number of moves to go from one to the other.

In the 2013 paper A polynomial upper bound on Reidemeister moves Lackenby shows that the number of Reidemeister moves needed to untangle a diagram of the unknot with $c$ crossings is, at most, $(236c)^{11}$. From other comments, it appears that Lackenby has extended the above to show that arbitrary knots can be converted to one another with a polynomial number of Reidemeister moves.

Is Lackenby's result strong enough to show, or lend credence to, Farhi, Gosset, Hassidim, Lutomirski, and Shor's conjectured polynomial-time mixing?

I envision Lackenby's result as putting a polynomial upper bound on the diameter/God's number of the graph of Reidemeister moves - similar to the graph of Cromwell moves that can be randomly walked with Farhi, Gosset, Hassidim, Lutomirski, and Shor's Markov chain.

I am trying to apply the Girsanov formula and Doob's optional sampling theorem to obtain an asymptotic form of the first passage density of an fBm process with drift, but the answer I am getting seems strange. Is there a way to verify the correctness of these computations without relying on numerical simulations, perhaps?

**Some known results**

**The Girsanov formula for fBm**
Let $B^H \left(t\right)$ denote fractional Brownian motion with mean $0$ and variance $t^{2H}$, for all $H \in \left(0, 1\right)$, defined on $\left(\Omega, \mathcal{F}, \mathbb{P}\right)$. Let $\mu$ be a scalar. Define a new probability measure $\mathbb{Q}$ on $\left(\Omega, \mathcal{F}\right)$ via the Radon–Nikodym derivative with respect to $\mathbb{P}$ \begin{equation}
\frac{\mbox{d}{\mathbb{Q}}}{\mbox{d}{\mathbb{P}}} = \exp\left\{-\mu M_T - \frac{1}{2}{\mu}^2 \langle M, M\rangle_T \right\}\label{girsanovRadonNikodymDerivativeWithInnerProduct}
\end{equation}
where
\begin{equation}
M_T = \frac{1}{2 H \Gamma \left(\frac{3}{2} - H\right) \Gamma \left(H + \frac{1}{2}\right)} \int_0^T \left(s\left(T - s\right)\right)^{\frac{1}{2} - H} \mbox{d}{{B_s}^H}.
\end{equation}
The process $M_T$ is a martingale with independent increments, zero mean and variance function $c^2 T^{2 - 2H}$ where
\begin{equation}\label{c}
c = \sqrt{\frac{\Gamma\left(\frac{3}{2} - H\right)}{2 H \left(2 - 2H\right)\Gamma\left(H + \frac{1}{2}\right)\Gamma\left(2 - 2H\right)}}.
\end{equation}
Then the process defined, for all $t \in \left[0, T\right]$, by $B^H \left(t\right) + \mu t$ is the standard $\mathbb{Q}$-fractional Brownian motion on $\left[0, T\right]$. In other words, under probability measure $\mathbb{Q}$,
$B^H \left(t\right)$ restricted to $t \in \left[0, T\right]$ is distributed as an arithmetic fractional Brownian motion with drift $\mu$.

A proof of the Girsanov formula for fractional Brownian motion can be found in Norros's paper, where the term *fundamental martingale* is also coined for the process $M_t$. It's noteworthy that, using the variance of $M_t$, the Radon–Nikodym derivative can also be rewritten as \begin{equation} \frac{\mbox{d}{\mathbb{Q}}}{\mbox{d}{\mathbb{P}}} = \exp\left\{-\mu M_T - \frac{1}{2}{\mu}^2 c^2 T^{2 - 2H} \right\}.\label{girsanovRadonNikodymDerivative} \end{equation}

**Corollary**
Let $X\left(t\right) = b B^H \left(t\right)$ be an arithmetic fractional Brownian motion with volatility $b$, for all $H \in \left(0, 1\right)$, defined on $\left(\Omega, \mathcal{F}, \mathbb{P}\right)$. Let $a$ be a scalar. Define a new probability measure $\mathbb{Q}$ on $\left(\Omega, \mathcal{F}\right)$ via the Radon–Nikodym derivative with respect to $\mathbb{P}$
\begin{equation}
\frac{\mbox{d}{\mathbb{Q}}}{\mbox{d}{\mathbb{P}}} = \exp\left\{-\frac{a}{b} M_T - \frac{1}{2}\frac{a^2}{b^2} c^2 T^{2 - 2H}\right\}.\label{scaledGirsanovRadonNikodymDerivative}
\end{equation}
Then the process defined, for all $t \in \left[0, T\right]$, by $Z \left(t\right) = X \left(t\right) + a t$ is an arithmetic $\mathbb{Q}$-fractional Brownian motion process on $\left[0, T\right]$ with volatility $b$. In other words, under probability measure $\mathbb{Q}$, $X\left(t\right)$ restricted to $t \in \left[0, T\right]$ is distributed as an arithmetic fractional Brownian motion with drift $a$ and volatility $b$.

**Proposition**
Let $B^H \left(t\right)$ denote scaled fractional Brownian motion with mean $0$ and variance $b^2 t^{2H}$, for all $H \in \left(0, 1\right)$, with respect to measure $\mathbb{P}$. Define $\tau_k = \inf \left\{t \ge 0 : B^H \left(t\right) = k\right\}$ for $k > 0$. Then the conditional mean and variance of $M_t$ given $B^H \left(t\right)$ are
\begin{equation}
E\left(M_t | B^H \left(t\right) = k \right) = {t^{1-2H} k \over b}
\end{equation}
and
\begin{equation}
\hbox{Var}\left(M_t | B^H \left(t\right) = k\right) = t^{2-2H}\left(c^2-1\right).
\end{equation}

**Proof**
Both $B^H \left(t\right)$ and $M_t$ have mean zero, the variance of $B^H \left(t\right)$ is $b^2 t^{2H}$, the variance of $M_t$ is $c^2 t^{2 - 2H}$, and their covariance is $bt$; this can be derived similarly to Proposition 3.2 in Norros's paper. Hence the correlation coefficient $\rho$ between $M_t$ and $B^H \left(t\right)$ is $1/c$. Therefore, using elementary results for the bivariate normal distribution, we find
\begin{eqnarray}
E\left(M_t | B^H \left(t\right) = k \right) &=& {\rho \sigma_{M_t}\over \sigma_{B^H \left(t\right)}}k\\\nonumber
&=& {t^{1-2H} k \over b}\nonumber
\end{eqnarray}
and
\begin{eqnarray}
\hbox{Var}\left(M_t | B^H \left(t\right) = k\right) &=& {\sigma^2}_{M_t}\left(1 - \rho^2\right) \\\nonumber
&=& t^{2-2H}\left(c^2-1\right).\nonumber
\end{eqnarray}

**The Derivation Attempt**
By Doob's optional sampling theorem (justified by the uniform integrability of the martingale
\begin{equation}
\exp\left\{\frac{a}{b} M_T - \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_T\right\}\nonumber
\end{equation}
on $\left[0, T\right]$
and the fact that $\left\{\tau_k \le T \right\} \in \mathcal{F}_{\tau_k}
\cap \mathcal{F}_{T} = \mathcal{F}_{\tau_k \bigwedge T} \subseteq \mathcal{F}_T$ ),
\begin{equation}
\mathrm{E}\left[\left.\exp\left\{\frac{a}{b} M_T - \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_T\right\} \right| \mathcal{F}_{\tau_k \bigwedge T}\right]
= \exp\left\{\frac{a}{b} M_{\tau_k \bigwedge T} - \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_{\tau_k \bigwedge T}\right\}.
\end{equation}
Therefore,
\begin{eqnarray}
\mathbb{P}^{a,T}\left[\tau_k \in (t,t+dt)\right] &=& \mathbb{E}^{a,T} \left[ 1_{\left\{\tau_k \in (t,t+dt)\right\}} \right]\\
\nonumber
&=& \mathbb{E}\left[\exp\left\{\frac{a}{b} M_T - \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_T\right\}1_{\left\{\tau_k \in (t,t+dt)\right\}}\right]
\end{eqnarray}
by the Corollary above,
\begin{eqnarray}
\nonumber
&=& \mathbb{E}\left[\left.\mathbb{E}\left[\exp\left\{\frac{a}{b} M_{\tau_k \bigwedge T}
- \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_{\tau_k \bigwedge T}\right\} \right|{\cal F}^{B^H}_{\tau_k\wedge T}\right]1_{\left\{\tau_k \le T\right\}}\right]\\
\nonumber
&=& \mathbb{E}\left[\exp\left\{\frac{a}{b} M_{\tau_k} - \frac{1}{2}\frac{a^2}{b^2} \langle M, M\rangle_{\tau_k}\right\}1_{\left\{\tau_k \le T\right\}}\right]\\
\end{eqnarray}
by Doob's optional sampling theorem,
\begin{eqnarray}
\nonumber
&=& \mathbb{E}\left[\exp\left\{\frac{a k}{b^2}{\tau_k}^{1-2H} - \frac{1}{2}\frac{a^2}{b^2} {\tau_k}^{2-2H}\left(c^2 - 1\right)\right\}1_{\left\{\tau_k \le T\right\}}\right]\\
\end{eqnarray}
by the Proposition above,
\begin{eqnarray}
\nonumber
&=& \int_0^T \exp\left[\frac{a k t^{1-2H}}{b^2} - \frac{1}{2}\frac{a^2 t^{2 - 2H}}{b^2} \left(c^2 - 1\right)\right] \mathrm{P}\left[\tau_k \in \mbox{d}{t}\right].
\end{eqnarray}
On the other hand, $\left\{B^H \left(t\right) + a t\right\}_{t \in \left[0, T\right]}$ is a scaled fractional Brownian motion under $\mathbb{P}^{a,T}$, so
\begin{equation}
\mathbb{P}^{a,T} \left[\tau_k \le T\right] = \mathbb{P}\left[\hat{\tau_k} \le T\right],
\end{equation}
where $\hat{\tau_k}$ is the first hitting time of the level $k$ of the scaled fractional Brownian motion
with drift $a$. Using Molchan's asymptotic formula for the first passage density of fBm without drift, it follows immediately that the long-time form of the first passage time density for fBm with drift is given by
\begin{eqnarray}
f\left(t\right) &=& \exp\left[\frac{a k t^{1-2H}}{b^2} - \frac{1}{2}\frac{a^2 t^{2 - 2H}}{b^2} \left(c^2 - 1\right)\right] t^{H-2}.\label{firstPassageDensityArithmeticFbm}
\end{eqnarray}

One of the things which really worries me about this asymptotic density formula is that it seems $c^2 < 1$ for most values of $H$.
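One consistency check that avoids path simulation entirely is to evaluate $c^2$ from its closed form: at $H=1/2$ the fundamental martingale reduces to ordinary Brownian motion, so the formula should give exactly $c=1$; and since the Proposition's conditional variance is $t^{2-2H}(c^2-1)$, nonnegativity forces $c^2 \ge 1$. A short Python check (my own sketch, just transcribing the formula for $c$ above):

```python
from math import gamma, isclose

def c_squared(H: float) -> float:
    """c^2 from the variance of the fundamental martingale M_t."""
    return gamma(1.5 - H) / (
        2 * H * (2 - 2 * H) * gamma(H + 0.5) * gamma(2 - 2 * H)
    )

# H = 1/2 is ordinary Brownian motion, where c should be exactly 1
assert isclose(c_squared(0.5), 1.0)

# the conditional variance t^(2-2H) (c^2 - 1) must be nonnegative
for H in (0.1, 0.25, 0.4, 0.6, 0.75, 0.9):
    assert c_squared(H) >= 1.0
```

Evaluating at a grid of $H$ values is a quick way to see on which side of $1$ the constant actually falls before trusting the sign of the exponent in the density formula.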

Is there a way to verify the correctness of these computations without numerical simulations, perhaps?

I'm trying to read the proof of the Lefschetz hyperplane theorem from Griffiths-Harris. They prove the theorem (on pages 156-157) using the Kodaira vanishing theorem. I have a basic question regarding their strategy of proof.

They begin by noting that we have Hodge decompositions for both the Kähler manifold $M$ and the positive hypersurface $V$, and so, they claim that to prove the statement about the restriction map $H^r(M,\mathbb C)\to H^r(V,\mathbb C)$, it is enough to prove the corresponding statement for the restriction maps $H^q(M,\Omega^p_M)\to H^q(V,\Omega_V^p)$.

It is not clear to me why this is sufficient, i.e., why the restriction map from the Dolbeault cohomology of $M$ to that of $V$, and the restriction map on de Rham cohomology are compatible with the Hodge decompositions of $M$ and $V$. Could someone please explain why this is the case? I think this would follow if the restrictions of harmonic forms on $M$ are harmonic on $V$, but I don't see how to prove that either.

For the $A_m$-singularity, it can be viewed as the singular part of $\mathbb{C}^2/\mathbb{Z}_m$. The action of $\mathbb{Z}_m$ on $\mathbb{C}^2$ is defined as follows: $$ \bar{1} \cdot (z,w) = (z e^{\frac{2\pi i}{m}}, w e^{\frac{-2\pi i}{m}}), $$ where $\bar{1} \in \mathbb{Z}_m$. For $m=2$, in other words $(z,w) \sim (-z,-w)$, there is a resolution $\mathcal{O}(-2)$ of $\mathbb{C}^2/\mathbb{Z}_2$ with the holomorphic map $\pi:\mathcal{O}(-2) \rightarrow \mathbb{C}^2/\mathbb{Z}_2$ defined as $$ \pi:(z, \xi) \mapsto [z\sqrt{\xi}, \sqrt{\xi}] $$ on $\mathcal{O}(-2)|_{U_1}$, where $U_1 = \{[z,w] \in \mathbb{CP}^1\mid w\not = 0\}$, and $$ \pi:(w, \eta) \mapsto [\sqrt{\eta}, w\sqrt{\eta}] $$ on $U_2 = \{[z,w] \in \mathbb{CP}^1\mid z\not = 0\}$ similarly.

However, for $m=3$ I have no idea how to write down the analogous holomorphic map and resolution. Is there any reference that points out the resolution of the $A_3$-singularity?