Is there any way to reconstruct a topological space from the category of its continuous self maps (possibly under some assumptions)?

How can we tell whether a category is the category of continuous self maps of some topological space?

Are there at least existing theorems or frameworks for questions related to these?

Let $k$ be a field (of possibly positive characteristic), let $U_n$ denote the space of all $n \times n$ unipotent upper triangular matrices over $k$, and let $G$ be an algebraic subgroup of $U_n$ (hence a unipotent algebraic group itself). Then each $X \in \text{Lie}(G)$ (thought of as a member of $\text{Lie}(U_n)$, i.e. a strictly upper triangular $n \times n$ matrix) is nilpotent, so it makes sense to define

$\text{exp}(X) = 1 + X + X^2/2! + \dots + X^{n-1}/(n-1)!$

(This definition makes sense even in characteristic $p > 0$ so long as $p \geq n$, i.e. so that $p$ never divides $1!, 2!, \dots, (n-1)!$). We can also define, for $g \in G$, since $g-1$ is nilpotent,

$\log(g) = (g-1) - (g-1)^2/2 + (g-1)^3/3 - \dots \pm (g-1)^{n-1}/(n-1) $

Obviously $\text{exp}$ and $\log$ define maps from $\text{Lie}(U_n)$ to $U_n$ and back to $\text{Lie}(U_n)$, and are bijective, being inverses of one another.
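For what it's worth, here is a quick sanity check (my own, in exact rational arithmetic, so characteristic $0$ only) that the truncated series really are mutually inverse on strictly upper triangular matrices:

```python
from fractions import Fraction

N = 4  # matrix size; X^N = 0 for strictly upper triangular X

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(N)] for i in range(N)]

def scale(A, c):
    return [[c * A[i][j] for j in range(N)] for i in range(N)]

I = [[Fraction(int(i == j)) for j in range(N)] for i in range(N)]

def expm(X):
    # exp(X) = 1 + X + X^2/2! + ... + X^{N-1}/(N-1)!
    result, power, fact = I, I, 1
    for k in range(1, N):
        power = matmul(power, X)
        fact *= k
        result = add(result, scale(power, Fraction(1, fact)))
    return result

def logm(g):
    # log(g) = (g-1) - (g-1)^2/2 + ... +/- (g-1)^{N-1}/(N-1)
    A = add(g, scale(I, -1))
    result = [[Fraction(0)] * N for _ in range(N)]
    power = I
    for k in range(1, N):
        power = matmul(power, A)
        result = add(result, scale(power, Fraction((-1) ** (k + 1), k)))
    return result

# a strictly upper triangular test matrix
X = [[Fraction(0), Fraction(2), Fraction(-1), Fraction(3)],
     [Fraction(0), Fraction(0), Fraction(5), Fraction(1)],
     [Fraction(0), Fraction(0), Fraction(0), Fraction(-2)],
     [Fraction(0), Fraction(0), Fraction(0), Fraction(0)]]

assert logm(expm(X)) == X
```

The truncations are exact here: since $X^N = 0$ (and $(g-1)^N = 0$), all omitted series terms vanish, so the formal identity $\log(\exp X) = X$ holds on the nose.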

**My Question**: If $g \in G$, is $\log(g) \in \text{Lie}(G)$? Or, equivalently, for $X \in \text{Lie}(G)$, is $\text{exp}(X) \in G$?

I feel like there should be an obvious proof of this, but I don't see it. If $G$ were a Lie group, the Lie algebra of $G$ would often just be defined to be all $X$ such that $e^{tX} \in G$ for all $t \in \mathbb{R}$, and so for Lie groups $\text{exp}$ maps from $\text{Lie}(G)$ to $G$ simply by definition. In the algebraic group context this definition no longer makes sense in general, and even when it does, it is not used in the literature (so far as I've seen). So I tried using each of the following equivalent definitions of $\text{Lie}(G)$, with no success:

$\text{Lie}(G) = \text{Dist}_1^+(G)$ (distributions of order no greater than $1$ without constant term)

$\text{Lie}(G) = $ the subspace of $\text{Lie}(U_n) = \text{Dist}_1^+(U_n)$ which kills $I = (\text{defining polynomials of $G$})$

$\text{Lie}(G) = \{M \in \text{Lie}(U_n): 1 + \tau M \in G(k[\tau]) \}$ where $\tau^2 = 0$

$\text{Lie}(G) = \{M \in \text{Lie}(U_n) : 1 + \tau M \text{ satisfies the defining polynomials of } G \}$, again where $\tau^2 = 0$

$\text{Lie}(G) = $ left invariant derivations on the Hopf algebra of $G$

It is certainly believable on its face; we have that $\text{Lie}(G) \stackrel{\text{exp}}{\longrightarrow} U_n \stackrel{\log}{\longrightarrow} \text{Lie}(G)$ composes to the identity, and similarly for $G \stackrel{\log}{\longrightarrow} \text{Lie}(U_n) \stackrel{\text{exp}}{\longrightarrow} G$, but I don't see why $\log(G) \subset \text{Lie}(G)$ or $\text{exp}(\text{Lie}(G)) \subset G$ along the way.

If it makes a difference, I'm actually only interested in the case where the defining polynomials of $G$ have integer (perhaps mod $p$) coefficients.

Thanks in advance for any help.

**EDIT**: Here's a more basic question, one which might help answer the above.

Suppose $k = \mathbb{R}$. Then $G$ is also a Lie group, and it is customary to define

$\text{Lie}(G) = \{ X \in \text{Lie}(U_n): e^{tX} \in G \text{ for all } t \in \mathbb{R} \}$

Can someone explain, or point me to a reference explaining, why this definition is equivalent to any of the above definitions for $\text{Lie}(G)$ as an algebraic group?

I'm attempting to solve a simple Dirichlet problem for the fractional Laplacian with boundary conditions:

$r^{+}(\nabla^s) v = f$

where $0 \leq s \leq 1/2$, $v$ is zero outside of $[0,1]$, $r^{+}$ restricts a function to $[0,1]$, and $f:[0,1] \rightarrow \mathbb{R}$ with $f(t) = t^{-s}$ or more generally, $f(t) = t^{k}$ for some fixed value $k$.

Most references I can find concern themselves with proving the regularity of solutions to the fractional Laplacian equation. Is there a simple way to solve this equation? I've looked in references such as https://arxiv.org/pdf/1712.01196.pdf, which purport to solve these equations, but I could not find a place in the text where a method for solving such equations is given.

Suppose $(M,g)$ is a two-dimensional Riemannian manifold. Let $\gamma:(-\delta,\delta)\to M$ be a geodesic segment and suppose that $\gamma(0)$ is not conjugate to any other point of $\gamma$. Is it true that there always exists a solution to the Jacobi equation along $\gamma$ that vanishes nowhere on $\gamma$?

Thanks,

Given a $3SAT$ formula $\phi$ in $n$ variables and a $\mu\in(0,1/2)$, what is the smallest degree of a polynomial $f\in\mathbb Z[x_1,\dots,x_n]$ such that for every $(a_1,\dots,a_n)\in\{0,1\}^n$ and every $\delta$ with $|\delta|<\mu$: if $\phi(a_1,\dots,a_n)=1$ then $|f(a_1\pm\delta,\dots,a_n\pm\delta)|>\frac12$, and if $\phi(a_1,\dots,a_n)=0$ then $|f(a_1\pm\delta,\dots,a_n\pm\delta)|<\frac12$?

We know the degree cannot exceed $n$.

Do there exist connected proper smooth $\mathbb{C}$-schemes $X_i$ ($\forall i\in \mathbb{Z}_{>0}$) with $\mathrm{dim}_{\mathbb{C}}X_i=i$ such that $X_i$ admits an immersion into $X_{i+1}$ and any connected proper smooth $\mathbb{C}$-scheme admits an immersion into $X_j$ for some $j\in \mathbb{Z}_{>0}$? If this is not possible for $\mathbb{Z}_{>0}$, is this possible for $\mathbb{Z}_{>n}$ for some $n$?

Let $\Omega$ be a bounded open connected set in $\mathbb{R}^n$ with $C^1$ boundary and let $0<\alpha<1$. Then there exists a real number $\sigma_0>0$ and a dimensional constant $C>0$ such that $$||Du||_{L^\infty(\Omega)}\leq \sigma^\alpha [|Du|]_{\alpha,\Omega}+\frac{C}{\sigma}||u||_{L^\infty(\Omega)}$$ and $$[u]_{\alpha,\Omega}\leq \sigma[|Du|]_{\alpha,\Omega}+\frac{C}{\sigma^\alpha}||u||_{L^\infty(\Omega)}$$ hold for all $0<\sigma<\sigma_0$ and for all $u\in C^{1,\alpha}(\bar\Omega)$. Here $||u||_{C^{1,\alpha}}=||u||_{L^\infty(\Omega)}+||Du||_{L^\infty(\Omega)}+[|Du|]_{\alpha}$ and $[u]_\alpha=\sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^\alpha}$.

N.B. I have proved the above results for balls, and then for domains with $C^2$ boundary. I can't proceed for domains with $C^1$ boundary. Any help will be greatly appreciated.

We work over an algebraically closed field. Suppose $X\subset \mathbf{P}^n$ is an integral projective curve and $\pi:X\to Y$ is a linear projection that identifies two distinct points $p,q\in X$ to a point $y\in Y$ and is an isomorphism elsewhere.

I want to show that $h^0(Y,\mathscr{O}_Y(n)) = h^0(X,\mathscr{O}_X(n))-1$ for all $n\ge 1$.

Here is my proof: The natural map $H^0(Y,\mathscr{O}_Y(n))
\xrightarrow{f} H^0(X,\pi^*\mathscr{O}_Y(n))$ is injective and
$\pi^*\mathscr{O}_Y(n) = \mathscr{O}_X(n)$. Consider
$H^0(X,\mathscr{O}_X(n)) \xrightarrow{g} k$ that maps $\sigma$ to
$\sigma(p)-\sigma(q)$, where $\sigma(p)$ is the image of $\sigma_p$ in
$k$ under a choice of isomorphism
$(\mathscr{O}_X(n))_p/m_p(\mathscr{O}_X(n))_p \cong k$, and the same
for $q$. All such choices change $\sigma(p)$ (and $\sigma(q)$) by a
nonzero scalar. Since $\mathscr{O}_X(n)$ is very ample for $n\ge 1$,
we may find a section that vanishes at $p$ but not at $q$. This means
that $g$ is surjective for $n\ge 1$, and clearly the image of $f$ is
in the kernel of $g$. Thus we see that $h^0(Y,\mathscr{O}_Y(n))\le
h^0(\mathscr{O}_X(n))-1$. To prove equality, consider the sequence
$$0\to \mathscr{O}_Y \to \pi_* \mathscr{O}_X \to k(y) \to 0.$$ The
sequence is exact on the left because $\pi$ is surjective, and the
cokernel is supported on $y$, and it has length $1$, thus is $k(y)$.
Twisting by $\mathscr{O}_Y(n)$, and taking cohomology, we see that
$$h^0(Y,(\pi_*\mathscr{O}_X)\otimes
\mathscr{O}_Y(n))-h^0(Y,\mathscr{O}_Y(n))\le 1.$$ However, by
projection formula $$\pi_*(\mathscr{O}_X(n)) \cong
\pi_*(\mathscr{O}_X\otimes \pi^*\mathscr{O}_Y(n)) \cong
(\pi_*\mathscr{O}_X)\otimes \mathscr{O}_Y(n)$$ and since $\pi$ is
finite we have $R^i\pi_* = 0$ for $i>0$ and

$$H^0(X,\mathscr{O}_X(n))\cong H^0(Y,\pi_*\mathscr{O}_X(n))\cong
H^0(Y,(\pi_*\mathscr{O}_X)\otimes \mathscr{O}_Y(n)).$$ Therefore we
have equality.

I have two questions: (1) Is the proof correct? I feel a little strange about it because I seem to draw weird consequences from this. (2) Is there a direct way to show that any section of $H^0(X,\mathscr{O}_X(n))$ that agrees on $p$ and $q$ must come from a section in $H^0(Y,\mathscr{O}_Y(n))$? What's the best way to analyse $H^0(Y,\mathscr{F})\to H^0(X,\pi^*\mathscr{F})$ in this situation?

On an *n* x *n* chessboard a white knight sits in the top left corner, and a black knight in the bottom right corner. Starting with White, the two knights take turns moving at random, with equal probability, to any of the (up to eight) available cells.

What is the expected total number of moves that will occur before one of the knights lands on the cell occupied by the other?
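For what it's worth, here is a quick Monte Carlo sketch of the quantity I'm asking about (the board size $n=4$ and the trial count are my own illustrative choices):

```python
import random

def knight_moves(x, y, n):
    # all legal knight destinations from (x, y) on an n x n board
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(x + dx, y + dy) for dx, dy in deltas
            if 0 <= x + dx < n and 0 <= y + dy < n]

def game_length(n, rng):
    # white starts top-left, black bottom-right; white moves first;
    # stop when the mover lands on the cell occupied by the other knight
    white, black = (0, 0), (n - 1, n - 1)
    moves = 0
    while True:
        white = rng.choice(knight_moves(*white, n))
        moves += 1
        if white == black:
            return moves
        black = rng.choice(knight_moves(*black, n))
        moves += 1
        if black == white:
            return moves

rng = random.Random(0)
n, trials = 4, 2000
estimate = sum(game_length(n, rng) for _ in range(trials)) / trials
print(f"estimated expected number of moves on a {n}x{n} board: {estimate:.1f}")
```

One amusing parity observation: both corners have the same color, and a knight changes color on every move, so a capture can only ever happen on Black's move.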

Here's what I'm doing in a simulation program to produce a 2D image with random values such that nearby pixels have positively correlated values:

1) Generate the initial 2D image of $N\times N$ random values.

2) Convolve the image with a Gaussian $e^{-(x^2+y^2)/\sigma^2}$.

Now I'm curious about the probability distribution of the value of any given pixel, and how that distribution depends on the $\sigma$ of the Gaussian.

The extreme cases $\sigma\to 0$ and $\sigma\to +\infty$ are clear; I'm curious about small positive values of $\sigma$.
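Since each smoothed pixel is a fixed linear combination $\sum_w w X_w$ of i.i.d. input pixels, for Gaussian inputs it is exactly Gaussian with variance $\sum_w w^2$ (and approximately Gaussian otherwise, by the central limit theorem). A small numpy sketch of this, under my own simplifying assumptions of periodic boundaries and a normalized kernel (the unnormalized kernel in step 2 only rescales everything):

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 128, 2.0

# normalized Gaussian kernel on the periodic N x N grid
x = np.arange(N)
d = np.minimum(x, N - x).astype(float)   # periodic distance to pixel 0
g = np.exp(-(d[:, None] ** 2 + d[None, :] ** 2) / sigma ** 2)
g /= g.sum()

# step 1: iid unit-variance pixels; step 2: circular convolution via FFT
noise = rng.standard_normal((N, N))
smoothed = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.fft.fft2(g)))

# each output pixel is sum_w w * X_w with iid X_w, so its variance is sum of w^2
predicted_std = np.sqrt((g ** 2).sum())
print("predicted std:", predicted_std)
print("empirical std:", smoothed.std())
```

For the kernel $e^{-r^2/\sigma^2}$ this gives, in the continuum approximation, a variance of roughly $1/(2\pi\sigma^2)$ times the input variance, which interpolates between the two extreme cases.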

Can the Green function for the fractional Laplacian operator be estimated from above and below? $$ \left\{\begin{aligned} (-\Delta_x)^{s} G(x, y)+ G(x, y)&= \delta_{x}(y) &&\text{in } \Omega \\ G(x,y) & =0 &&\text{ in } \mathbb{R}^N\setminus \Omega \end{aligned} \right.$$ Here $N\geq 2s$ and $s\in (0, 1)$.

Recently, I read a little portion of homotopy theory from Bredon's 'Topology and Geometry' and found that I like it enough to want to continue reading material in Algebraic Topology.

A little digging around on the internet told me that books like the one by Peter May and Tammo tom Dieck are second texts, and that one would do well to start with Hatcher/Bredon/Massey.

Considering that I have only four months in which to know much of the material at the level of Tammo tom Dieck's book, I was wondering if there is any significant disadvantage to working from such a text, rather than an apparently more elementary text such as Hatcher.

To summarize:

1. What, if any, are the significant advantages of studying algebraic topology from the non-categorical viewpoint before reading a categorical approach to it?
2. Does the categorical approach, as done in tom Dieck, subsume the non-categorical approach in terms of the results provable?

I know some category theory from MacLane's book, and learnt point-set topology from Munkres. My background in algebra is comprised of the sections on groups, rings, fields and Galois theory from 'Abstract Algebra' by Dummit and Foote and some part of modules from Herstein's 'Topics in Algebra'.

In algebraic geometry, we have something called Weil cohomology theories, which formalize the notion of a "good" cohomology theory of smooth projective varieties. I believe that for $l$-adic cohomology, we have a functorial construction of $l$-adic homotopy type. In general, given an arbitrary Weil cohomology theory is there a more-or-less formal construction of a (pro-)homotopy type having the corresponding cohomology groups?

I believe that my question should have nothing to do with motives, since I am happy to work with one cohomology theory at a time, but maybe I am wrong.

I was reading the following example from the book *Methods in Nonlinear Analysis* (Zhang, Springer) on page 10: First, everything was fine:

Example 2. Let $X = C^1(\overline \Omega, \mathbb R^N)$, $Y = \mathbb R^1$. Suppose that $g \in C^2(\overline \Omega \times \mathbb R^N,\mathbb R^1)$. Define $$ f(u) = \frac 1 2 \int_{\Omega } |\nabla u|^2 + \int_\Omega g(x, u(x)) $$ for $u \in X$. By definition, we have $$ f'(u) \cdot \varphi = \int_\Omega [\nabla u(x)\nabla \varphi (x) + g'_u(x, u(x))\varphi (x)]dx , $$ and $$ f''(u)(\varphi , \psi) = \int_\Omega [\nabla \psi(x)\nabla \varphi (x) + g''_{uu}(x, u(x))\varphi (x)\psi(x)]dx . $$ With some additional growth conditions on $g''_{uu}:$ $$ |g''_{uu}(x, u)| \le a(1 + |u|^{4/(n-2)} ), \ \ \ a>0,\ \forall u \in \mathbb R^N , $$ $f$ is twice differentiable in $H^1_0(\Omega ,\mathbb R^N )$.

Then, suddenly, I got totally lost:

As an operator from $H^1_0(\Omega ,\mathbb R^N )$ into itself, \begin{equation} f''(u) = id + (-\Delta)^{-1} g''_{uu}(\cdot , u(\cdot )) \end{equation} is self-adjoint, or equivalently, the operator $-\Delta+g''_{uu}(x, u(x))\cdot\ $ defined on $L^2$ is self-adjoint with domain $H^2 \cap H^1_0(\Omega ,\mathbb R^N )$.

I just don't understand anything about the second part of the example. For example, where does the $(-\Delta)^{-1}$ in the last equation come from? I know it comes from integration by parts, but shouldn't $f''(u)$ be an integral?

Please help make it clear. Thanks in advance.

At the time of writing, the first definition of a $ (p, q) $-tensor on the Wikipedia page is as follows.

**Definition.** A $ (p, q) $-tensor is an assignment of a multidimensional array $$ T^{i_1\dots i_p}_{j_{1}\dots j_{q}}[\mathbf{f}] $$
to each basis $\mathbf{f}$ of an $n$-dimensional vector space such that, if we apply the change of basis
$\mathbf{f}\mapsto \mathbf{f}\cdot R $
then the multidimensional array obeys the transformation law
$$
T^{i'_1\dots i'_p}_{j'_1\dots j'_q}[\mathbf{f} \cdot R] = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} T^{i_1\dots i_p}_{j_1\dots j_q}[\mathbf{f}] R^{j_1}_{j'_1}\cdots R^{j_q}_{j'_q} .
$$

This is a standard definition I remember reading in textbooks during my undergraduate degree. To me, it also seems far too confusing. To understand a $ (p, q) $-tensor as an element of $$ \text{Hom}(\underbrace{V^* \otimes\dots\otimes V^*}_{p} \otimes \underbrace{V \otimes\dots\otimes V}_{q}, \mathbb{K}) $$ one only has to understand the tensor product of vector spaces (which is easy to define in terms of bases). To then recover the description as a multidimensional array one also has to understand dual bases, but these too can be easily explained constructively.
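To make the connection between the two definitions concrete, here is a small numerical illustration (my own, for the $(1,1)$ case): the transformation law is exactly what makes the pairing of components basis-independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
T = rng.standard_normal((n, n))   # components T^i_j in some basis f
R = rng.standard_normal((n, n))   # an (illustrative) change of basis f -> f.R
Rinv = np.linalg.inv(R)

# the transformation law for a (1,1)-tensor, with explicit indices
T_new = np.einsum('ai,ij,jb->ab', Rinv, T, R)

# components of a covector and a vector in the old and new bases
omega = rng.standard_normal(n)
v = rng.standard_normal(n)
omega_new = omega @ R             # covector components transform with R
v_new = Rinv @ v                  # vector components transform with R^{-1}

# the scalar omega_i T^i_j v^j does not depend on the basis --
# this is exactly what the coherence condition guarantees
assert np.isclose(omega_new @ T_new @ v_new, omega @ T @ v)
```

The factors of $R$ and $R^{-1}$ cancel in the pairing, which is why the array-plus-transformation-law data defines a well-defined multilinear map.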

**Question.** Why would anyone give the standard definition?

I initially thought the answer lay in applied mathematics. However linear maps are omnipresent in applied mathematics and I have never seen a linear map defined as a function on bases that satisfies coherence with respect to base change. Furthermore I feel the consensus would be that this is a bad definition from a pedagogical point of view (I certainly think it is). So why is the analogous definition of $ (p, q) $-tensors standard?

I consider $k$-ary strings of the form $a_1 \cdots a_n$ where $a_i \in \{0,\ldots,k-1\}$ for $1\le i \le n$. A necklace is the lexicographically smallest representative of an equivalence class, where two strings are equivalent if one is a rotation of the other (wiki). The number of all necklaces can be counted using Polya's enumeration theorem.

Now, let $1 \le j \le n$ be fixed. What is the number of necklaces with the additional constraint $a_j = k-1$? Is there a closed-form expression for this number?
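For small $n$ and $k$ the quantity can be computed by brute force (my own sketch, enumerating minimal-rotation representatives and cross-checking the total against Burnside/Polya):

```python
from itertools import product
from math import gcd

def necklaces_with_fixed_digit(n, k, j):
    """Count necklaces (lexicographically smallest rotation representatives)
    of k-ary strings of length n with the constraint a_j = k-1 (j is 1-based)."""
    reps = {min(s[i:] + s[:i] for i in range(n))
            for s in product(range(k), repeat=n)}
    return sum(1 for s in reps if s[j - 1] == k - 1)

def total_necklaces(n, k):
    # Burnside / Polya: (1/n) * sum over d | n of phi(d) * k^(n/d)
    def phi(m):
        return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)
    return sum(phi(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print(total_necklaces(4, 2))  # 6 binary necklaces of length 4
print([necklaces_with_fixed_digit(4, 2, j) for j in range(1, 5)])
```

Already for $n=4$, $k=2$ the count depends on $j$ (the constrained position of the minimal representative), which is what makes me doubt a simple symmetric formula.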

It is known that if one assumes the Riemann Hypothesis, then there exists $k>0$ such that for $x\geq 3$, $|\theta(x)-x|<k\sqrt{x}\log^2 x$, where $\theta$ is the Chebyshev theta function. Is the converse known to be true? Or something close to it? If so, can a reference be provided?

If $f:[0,1]\to [0,1]$ is given by

$$ f(x)= \begin{cases} 2x & \mbox{ if } x\in [0,1/3)\\ 2x-\frac{2}{3} & \mbox{ if } x\in [1/3,1/2)\\ 2x-\frac{1}{3} & \mbox{ if } x\in [1/2,2/3)\\ 2x-1 & \mbox{ if } x\in [2/3,1] \end{cases} $$ show that $f$ is ergodic with respect to Lebesgue measure.

**Idea:** The technique is to show that any invariant set of positive measure must have full measure.

A very common approach is to partition the domain, choose a density point of the invariant set, and use the Lebesgue Density Theorem together with the bounded distortion property

\begin{align*} \dfrac{m(f^k(E_1))}{m(f^k(E_2))}=\dfrac{m(E_1)}{m(E_2)}. \end{align*}

Below is the graph of the iterate $f^5(x)$. We see that the image of each branch subinterval has length $2/3$. The main problem is that for some intervals $E_1$, $f^k(E_1)$ is not all of $(0,1)$ at first. I think that when we iterate $n$ times for very large $n$, we will have a graph with virtually no breaks, with linear branches. This would justify that every invariant set of positive measure has measure $1$. But this is only my intuition; I could not justify it formally. Can anyone give a tip?
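Not a proof, of course, but here is a quick empirical sanity check (my own) that Lebesgue measure is invariant and that orbit averages look like space averages:

```python
import random

def f(x):
    # the piecewise-linear map from the problem statement
    if x < 1/3:
        return 2*x
    elif x < 1/2:
        return 2*x - 2/3
    elif x < 2/3:
        return 2*x - 1/3
    else:
        return 2*x - 1

# Lebesgue measure is f-invariant: every point has two preimages, one in
# each of two branches, each contracted by the slope 1/2.  So iterates of
# a uniform starting point stay uniformly distributed, and if f is ergodic
# the time average of an observable should match its space average.
rng = random.Random(0)
hits = total = 0
for _ in range(2000):
    x = rng.random()
    for _ in range(60):
        x = f(x)
        hits += x < 0.5
        total += 1
print("fraction of orbit points in [0, 1/2):", hits / total)  # should be near 0.5
```

This only tests invariance and mixing-like behaviour statistically; the actual proof still needs the density-point and distortion argument above.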

I have been reading the article *A characterization of multiplicative linear functionals in Banach algebras* and got stuck in the middle of the proof of Theorem 1.2 on page 217.
In the third line from the bottom, they say that the function $f_{a,b}:\mathbb{C}\longrightarrow\mathbb{C}$ is Lipschitz and entire, *hence it is affine*. Can anyone tell me why it is affine, or suggest a reference for the result that a Lipschitz entire function must be affine?

Does there exist a countable set of connected proper smooth $\mathbb{C}$-schemes such that any connected proper smooth $\mathbb{C}$-scheme admits a $\mathbb{C}$-immersion into one of them?