Let $X$ be a partially ordered set. A subset $S\subseteq X$ is called Scott-open if and only if it is:

- Upward-closed: $x\in S$ and $x\le y$ implies $y\in S$;
- Inaccessible by directed suprema: if $D\subseteq X$ is directed, $\sup D$ exists, and $\sup D\in S$, then $D\cap S\neq\emptyset$.

Scott-open sets are closed under finite intersections and arbitrary unions, and the topology they define is called the Scott topology.
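On a finite poset the definition can be checked directly by brute force. Here is a small sanity check in Python on the diamond lattice $\bot < a, b < \top$ (a hypothetical example of mine, not from the question; `leq` encodes the order relation):

```python
from itertools import combinations

# Diamond lattice: bot < a, b < top, with a and b incomparable.
elems = ['bot', 'a', 'b', 'top']
leq = {(x, x) for x in elems} | {('bot', 'a'), ('bot', 'b'),
                                 ('bot', 'top'), ('a', 'top'), ('b', 'top')}

def upward_closed(S):
    # x in S and x <= y implies y in S
    return all(y in S for x in S for y in elems if (x, y) in leq)

def is_directed(D):
    # every pair of elements of D has an upper bound inside D
    return len(D) > 0 and all(any((x, z) in leq and (y, z) in leq for z in D)
                              for x in D for y in D)

def sup(D):
    ubs = [u for u in elems if all((x, u) in leq for x in D)]
    least = [u for u in ubs if all((u, v) in leq for v in ubs)]
    return least[0] if least else None

def scott_open(S):
    # both conditions of the definition, checked by brute force
    if not upward_closed(S):
        return False
    return all(not (is_directed(D) and sup(D) in S and not (set(D) & S))
               for r in range(1, len(elems) + 1)
               for D in combinations(elems, r))
```

(In a finite poset every directed set has a maximum, so Scott-open reduces to upward-closed; the check is only illustrative of the two conditions.)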

However, the class of all Scott-open sets can be hard to work with directly. What would be a basis or subbasis that one could use instead?

I'm mostly interested in the case where $X$ is a complete lattice.

Some time ago, I read somewhere (it should be in Titchmarsh) that, if $N(\sigma, T)$ denotes the number of zeros of the Riemann zeta function $\zeta(s)$ with $\Re(s)\geq \sigma>1/2$ up to height $T>0$, then a result of Ingham says $$N(\sigma, T)\ll T^{\frac{3(1-\sigma)}{2-\sigma}}\log^{5}T.$$

Where can I find a proof of this result? A Google search didn't yield much.

The way divisors on elliptic curves are motivated in cryptography is to say that they are a convenient way to represent rational functions, by keeping track of the multiplicities of the zeros and poles of a rational function. These form an abelian group, since multiplication of rational functions corresponds to addition of the multiplicities of their zeros and poles. One then defines the principal divisors as those degree-zero divisors which actually correspond to a rational function, and Pic$^0$(C)=Div$^0$(C)/Princ(C), where Div$^0$(C) is the subgroup of degree-zero divisors of $C$.

I can easily visualize what Princ(C) looks like (its elements come from rational functions), but I'm having a hard time visualizing what the non-principal divisors in Div$^0$(C) look like. Clearly, they do not come from rational functions, by definition, so what are they?

Also, is the Abel–Jacobi divisor $[P] - [0]$ a principal divisor on an elliptic curve? (Methinks not, but how do I prove it?)

Note: I'm not a mathematician (more like an applied cryptographer) and I'm assuming this site is not just for experts to impress other experts, but also to answer dumb questions that lesser mortals might have :-)

Consider the subgroup $G_{\lambda}$ of $SL_2(\mathbb R)$ generated by $N_{\lambda} = \begin{bmatrix} 1 & \lambda \\ 0 & 1 \end{bmatrix}$ and $S = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, where $\lambda>0$. Then it is known, by a theorem of Hecke, that $G_{\lambda}$ is discrete if and only if $\lambda \geq 2$ or $\lambda=2 \cos(\frac{\pi}{n})$ where $n \geq 3$ is an integer. These groups are named after Hecke, who used them to study modular forms.

Is there a generalization of this fact to more general groups? For example, consider a subgroup of $SL_n(\mathbb R)$ generated by several elementary matrices and some involutions: when is it discrete?

What about $p$-adic analogues?

I am wondering why this property exists, or whether I have discovered a new property of prime numbers.

Let's do a sample here:

1,2,3,5,7,11,13,17

You obtain the gaps by subtracting each number from the one to its right:

1,1,2,2,4,2,4

You then obtain the gaps of the gaps, following the same procedure:

0,1,0,2,-2,2
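The procedure above can be written out in a few lines of Python (a sketch, with helper names of my own choosing). One observation that may be relevant: the sum of the gaps-of-gaps telescopes, so it always equals the last gap minus the first gap.

```python
def gaps(seq):
    """Subtract each number from the one to its right."""
    return [b - a for a, b in zip(seq, seq[1:])]

nums = [1, 2, 3, 5, 7, 11, 13, 17]
g1 = gaps(nums)   # the gaps
g2 = gaps(g1)     # the gaps of the gaps
# The sum of g2 telescopes: it always equals g1[-1] - g1[0].
```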

If I follow this type of procedure, why does the sum always tend back to 1? Here is more data illustrating this. Sum up each row and you'll notice the numbers cancel each other out. The **bold** number immediately to the right of each row is how many steps it took before it cancelled out.

1 **1 or 2 if you start with 1**

0, 2, -2 **3**

2, -2 **2**

2, 2, -4 **3**

4, -2, -2 **3**

2, 2, 0, -4 **4**

4, -2, -2 **3**

4, -2, 2, 2, -4, -2 **6**

2, -2 **2**

2, 10, -10, 2, -4 **5**

8, -8 **2**

4, 0, -2, 2, 0, -4 **6**

8, -8 **2**

2, -2 **2**

10, 0, -8, -2 **4**

2, 2, -4 **3**

8, -4, 0, 0, -4 **5**

4, -2, -2 **3**

8, 4, -10, -2 **4**

2, 10, -8, 4, -8 **5**

2, 2, 2, -2, 0, -2, 2, 2, -4, 4, 2, -8 **12**

8, -8 **2**

4, -2, 2, 2, -4, -2 **6**

2, 8, -4, -4, 4, -4, 2, 6, -10 **9**

16, -12, 4, -4, 0, -4 **6**

4, 4, -4, 0, -4 **5**

4, 0, -2, -2 **4**

10, -2, -8 **3**

2, 2, 0, -4 **4**

10, -8, 2, 2, 2, -2, 2, -2, -2, 0, -2, 4, -2, -2, 4, -4, 10, -4, 2, -10 **20**

8, -8 **2**

2, -2 **2**

8, 4, -10, -2 **4**

2, 10, -10, -2 **4**

Here is what I've discovered:

- The running total will never go under 1, ever.
- Trends can lead into a long bout of 2s, resulting in 20+ steps before it zeros out.
- Every running total starts with a positive even number.
- So far, no number that is 6 greater than the current largest number is ever introduced. For example, if 4 is the largest number, 10 can be introduced, but not 12. I'm not 100% sure about this; I'll need a more powerful device to test this theory.
- We can attempt to predict the numbers that are added or subtracted. Maybe AI could recognize a pattern if trained on the zeroing-out data.

So using the above example

1,2,3,5,7,11,13,17

Can we use these rules to find the next prime? Well, yes, possibly:

The gaps of gaps is: 0,1,0,2,-2,2

So we have a sum of 3, but we want to get back to 1. This early in the sequence, there is a high probability that the next gap of gaps is a $-2$. To achieve that, you need to look at:

1,1,2,2,4,2,4

This would then require the next gap to be a 2,

resulting in:

1,1,2,2,4,2,4,2

which would then mean the next prime could be 19. Then we can test whether it is, and yes, it is.

Now, of course, it is entirely possible that the $-2$ wasn't the correct answer, but we have a range of possible numbers:

All even numbers from: -2 to 8

The reason for 8 has to do with 2 being the current largest introduced number and adding 6 to it. I'm not sure about this rule, however. But I know it doesn't seem to explode and suddenly jump by thousands; it's a bit predictable.

Any thoughts of why this property exists? Is it simply a bounded descent path?

I am currently writing some retry loop logic for an external API call, but was really hoping there was a quick way to figure out this math. I am embarrassingly bad at anything beyond adding, subtracting, multiplying, and dividing, so forgive my question here...

What I am trying to figure out is this: I want a total of 10 minutes in milliseconds (600000 ms) before exiting the loop. I start with a base amount of time, say 6000 ms. What I am doing is a loop that multiplies the base time by the loop counter: 6000 * 1, 6000 * 2, 6000 * 3, ...

What I would ideally like to do is provide the value of 600000ms and have the retry loop work it all out from there. Can someone help me with what this might look like?

In other words, I currently start with the base amount from which to multiply but would instead like to be able to start with the total time in ms and have that worked out.

I know there is no work shown on this but I don't honestly know where to start. If someone could even nudge me in the right direction it would be appreciated.
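As a nudge: if each wait is base × attempt number, the total waited is base × (1 + 2 + … + n) = base × n(n + 1)/2. So given the total budget and a chosen number of attempts, you can solve for the base. A minimal Python sketch (the function name and the choice of 10 attempts are my own assumptions):

```python
def retry_delays(total_ms, attempts):
    """Delays base*1, base*2, ..., base*n that together sum to total_ms."""
    # 1 + 2 + ... + n == n * (n + 1) / 2, so:
    base = total_ms / (attempts * (attempts + 1) / 2)
    return [base * i for i in range(1, attempts + 1)]

delays = retry_delays(600_000, 10)  # 10 attempts summing to 10 minutes
```

With 10 attempts the base comes out to 600000/55 ≈ 10909 ms; pick the number of attempts you want and the base follows.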

I am interested in the classifying space $BG$ of a finite group $G$.

A real representation $V$ of $G$ of dimension $r$ defines a real vector bundle over $BG$ of rank $r$. If the determinant of this representation is trivial, then this bundle is orientable and a choice of orientation determines an Euler class $e(V) \in H^r(BG,\mathbb{Z})$.

Do these classes generate $H^*(BG,\mathbb{Z})$ as a ring?

This is easy to show when $G$ is abelian. In that case we need only consider $G = \mathbb{Z}_n$ and the cohomology ring is generated in degree 2 by the Euler class of the $2\pi/n$-rotation representation.

More generally, for any $G$, the map $H^1(BG,SO(2)) \to H^2(BG,\mathbb{Z})$ is an isomorphism, since the latter is torsion and the map is a Bockstein-type operation. We can consider an element of $H^1(BG,SO(2))$ as a homomorphism $G \to SO(2)$, giving a 2-dimensional representation, and it is easy to see from the definition of the Bockstein that the above map sends it to the Euler class of this representation.

Say we are solving a tree decomposition problem: given a graph $G = (V, E)$, we try to find a chordal graph $H$ such that $V(H) = V(G)$, $E(G) \subseteq E(H)$, and the maximum clique in $H$ is as small as possible among all such $H$.

This problem is known to be NP-hard. For other/better definitions of tree decomposition and treewidth, see Wikipedia.

Is there a way to reduce treewidth to some other NP problem, especially Maximum Independent Set, 3-SAT, or similar ones?

I found a lot of results of the sort "if a graph has bounded treewidth, then problem XXX has a polynomial solution", but nothing like an algorithm to recover the treewidth from a solved instance of another NP problem.

In Kohnen and Skoruppa's 1989 Inventiones paper, page 549, the operator $V_N: J_{k,1}^\text{cusp} \longrightarrow J_{k,N}^\text{cusp}$ is defined by the action $$ \sum_{\substack{D<0,r \in \mathbb{Z}\\D\equiv r^2(4)}} c(D,r)\, e\left( \frac{r^2-D}{4}\tau+rz\right) \longmapsto \sum_{\substack{D<0,r \in \mathbb{Z}\\D\equiv r^2(4N)}} \left(\sum_{\substack{d|(r,N)\\D\equiv r^2(4Nd)}} d^{k-1} c\left(\frac{D}{d^2},\frac{r}{d}\right) \right) \,e\left( \frac{r^2-D}{4N}\tau+rz\right) $$

Its adjoint (wrt Petersson inner product) $V_N^*: J_{k,N}^\text{cusp} \longrightarrow J_{k,1}^\text{cusp}$ is then given by $$ \sum_{\substack{D<0,r \in \mathbb{Z}\\D\equiv r^2(4N)}} c(D,r)\, e\left( \frac{r^2-D}{4N}\tau+rz\right) \longmapsto \sum_{\substack{D<0,r \in \mathbb{Z}\\D\equiv r^2(4)}} \left(\sum_{d|N} d^{k-2} \sum_{\substack{s(2d)\\s^2\equiv D(4d)}} c\left(\frac{N^2}{d^2}D,\frac{N}{d}s\right) \right) \,e\left( \frac{r^2-D}{4}\tau+rz\right) $$

Then, in Proposition (ii) [pages 549-550], the action of $V_N^*V_N$ is given by $$ V_N^*V_N = \sum_{t|N}\Psi(t)t^{k-2}T\left( \frac{N}{t}\right), $$ where $\Psi(t)=t\prod_{p|t}\left(1+\frac{1}{p}\right)$ and $T(n)$ denotes the Hecke operator on $J_{k,1}^\text{cusp}.$

I am not able to follow how the formula for $V_N^*V_N$ is derived, and I am also unsure about the action of $T(n)$. I first applied $V_N$, and then, writing its coefficients as $c(D',r')$, applied $V_N^*$, but I cannot proceed further. Any help?

The Lyndon basis for free Lie algebras is well known in the literature.

My question is:

what is the analogous combinatorial model for free Lie superalgebras? What is the super analogue of Lyndon words?
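For concreteness, in the classical (non-super) case the Lyndon words over an ordered alphabet can be enumerated by Duval's algorithm; a short Python sketch:

```python
def lyndon_words(alphabet, n):
    """Yield all Lyndon words of length <= n over an ordered alphabet (Duval's algorithm)."""
    w = [-1]
    while w:
        w[-1] += 1                      # increment the last letter
        yield ''.join(alphabet[i] for i in w)
        m = len(w)
        while len(w) < n:               # extend periodically up to length n
            w.append(w[-m])
        while w and w[-1] == len(alphabet) - 1:
            w.pop()                     # discard trailing maximal letters
```

I would be interested in whatever plays this role in the super setting.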

Kindly share your thoughts.

Thank you.

I'm faced with the following problem on primes. Does someone have any clue? Is it (a reformulation of) an open problem?

Let $d$ be a positive integer, $d\geq 2$. By Dirichlet's theorem, there is an infinite set $\mathcal{P}$ of primes congruent to $1$ modulo $d$.

Consider the set $N$ of integers $n=1+kd$ where all prime divisors of $k$ belong to $\mathcal{P}$.

Could we expect that $N$ contains infinitely many primes? (we need at least $d$ to be even).
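A quick numerical experiment shows what small members of $N$ look like. This sketch (function names are mine) takes $\mathcal{P}$ to be the set of all primes $\equiv 1 \pmod d$:

```python
def is_prime(n):
    return n >= 2 and all(n % p for p in range(2, int(n**0.5) + 1))

def prime_factors(k):
    """Set of prime divisors of k (trial division)."""
    fs, p = set(), 2
    while p * p <= k:
        while k % p == 0:
            fs.add(p)
            k //= p
        p += 1
    if k > 1:
        fs.add(k)
    return fs

def primes_in_N(d, limit):
    """Primes n = 1 + k*d <= limit where every prime factor of k is = 1 (mod d)."""
    return [1 + k * d for k in range(1, (limit - 1) // d + 1)
            if all(p % d == 1 for p in prime_factors(k)) and is_prime(1 + k * d)]
```

For $d=4$, for instance, $k=13$ gives the prime $n = 1 + 52 = 53$.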

Assume you have a finite, discrete probability distribution for a joint random variable $(X,Y)$ such that $P(X=i,Y=j) = p_{i,j}$ for $i \in \{1, \dots, |X|\},j \in \{1, \dots, |Y|\}$. The marginal distributions are given by $P(X=i) = p_i = \sum_{j=1}^{|Y|} p_{i,j}$ and similarly $P(Y=j) = q_j = \sum_{i=1}^{|X|} p_{i,j}$ for the other marginal.

**I would like to get a "good" upper bound in terms of the mutual information for the following expression:**
$$
\sum_{i,j} (p_{i,j} - p_i q_j) \log(p_i) \log(q_j).
$$

Now, I can do $$ \sum_{i,j} (p_{i,j} - p_i q_j) \log(p_i) \log(q_j)\\ \leq |\sum_{i,j} (p_{i,j} - p_i q_j) \log(p_i) \log(q_j)| \\ \leq \sum_{i,j} | p_{i,j} - p_i q_j|| \log(p_i)|| \log(q_j)| \\ \leq \sum_{i,j} | p_{i,j} - p_i q_j |\max_{i,j}|\log(p_i)|| \log(q_j)| \\ = ||P_{XY} - P_{X}P_{Y}||_1|\log( \min_i p_i)|| \log( \min_j q_j)| \\ \leq \sqrt{2I(X:Y)}|\log( \min_i p_i)|| \log( \min_j q_j)| $$ where $I(X:Y)$ denotes the mutual information and where I used the triangle inequality and Pinsker's inequality. Note that I can assume without loss of generality that $p_{i},q_j > 0$ for all $i,j$ since zero terms simply disappear from the original sum (taking $0\log0=0$).
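For what it's worth, the Pinsker step in the chain above can be sanity-checked numerically. Here is a small Python check on a hypothetical $2\times 2$ joint distribution of my own choosing (logarithms in nats):

```python
import math

# A hypothetical 2x2 joint distribution p_{i,j}.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {i: sum(v for (a, _), v in p.items() if a == i) for i in (0, 1)}
qy = {j: sum(v for (_, b), v in p.items() if b == j) for j in (0, 1)}

# Mutual information I(X:Y) = KL(P_XY || P_X P_Y), in nats.
I = sum(v * math.log(v / (px[i] * qy[j])) for (i, j), v in p.items())
# L1 distance between the joint and the product of the marginals.
l1 = sum(abs(v - px[i] * qy[j]) for (i, j), v in p.items())
# Pinsker: l1 <= sqrt(2 * I)
```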

However, this bound is not good enough for my purposes. I need a bound that cannot be made arbitrarily large simply by decreasing the smallest (nonzero) marginal probability. **Instead, I'm looking for a bound of the form $M\sqrt{I(X:Y)}\log(|X|)\log(|Y|)$ for some $M \in \mathbb{N}$.**

I'm trying to explain linear algebra to some programmers with computer science backgrounds. They took a course on it long ago, but don't seem to remember much. They can follow basic formalism, but none of it seems to stick when they don't see anything in "real life" to relate the concepts to. Do people know any good examples of applications in computer science using basic linear algebra concepts? Some things I was thinking of include:

- Eigenvalues and eigenspaces
- Image and kernel
- Determinants
- Dual space
- Transpose
- Inner products
- Singular value decomposition
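One concrete hook for the eigenvalue bullet (my suggestion, not from the list above) is PageRank-style ranking: the ranking vector is the dominant eigenvector of a stochastic matrix, and programmers can compute it themselves by power iteration:

```python
def power_iteration(A, iters=100):
    """Dominant eigenvector of a column-stochastic matrix A (PageRank-style)."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(abs(x) for x in w)
        v = [x / s for x in w]   # renormalize each step
    return v

# Two-page toy web; the stationary distribution is (2/3, 1/3).
rank = power_iteration([[0.9, 0.2], [0.1, 0.8]])
```

The convergence rate is governed by the second eigenvalue, which also makes a nice segue into the spectral-gap story.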

This is a question that I came across a few days ago. Although it is not particularly like a research problem, it arose from my study of the zero distribution of a class of elementary transcendental functions. I don't think it is easy to deal with: I've been thinking about it for a few days, and all my attempts have failed.

If $f(x)=x^2-x-\ln{x}-\ln{a}$ and $f(x_{1})=f(x_{2})=0$ with $0<x_{1}<x_{2}$, I conjecture that the roots $x_{1},x_{2}$ satisfy $$\dfrac{3}{2a+1}<x_{1}x_{2}<\dfrac{\ln{a}}{a-1},\qquad a>1.$$

**This is my attempt**

Since $$x^2_{1}-x_{1}-\ln{x_{1}}=x^2_{2}-x_{2}-\ln{x_{2}}=\ln{a},$$ let $x_{2}=tx_{1}$ with $t>1$; then we have $$x^2_{1}-x_{1}-\ln{x_{1}}=t^2x^2_{1}-tx_{1}-\ln{t}-\ln{x_{1}},$$ $$x_{1}=\dfrac{t-1+\sqrt{(t-1)^2+4\ln{t}\cdot(t^2-1)}}{2(t^2-1)}=f(t),$$ then $$x_{2}x_{1}=t(x_{1})^2=t\left(\dfrac{t-1+\sqrt{(t-1)^2+4\ln{t}\cdot(t^2-1)}}{2(t^2-1)}\right)^2=t(f(t))^2$$ and $$a=e^{x^2_{1}-x_{1}-\ln{x_{1}}}=e^{f^2(t)-f(t)-\ln{f(t)}},$$ so it must be proven that $$\dfrac{3}{2e^{f^2(t)-f(t)-\ln{f(t)}}+1}\le t(f(t))^2<\dfrac{f^2(t)-f(t)-\ln{f(t)}}{e^{f^2(t)-f(t)-\ln{f(t)}}-1},\qquad t>1.$$ Next thing I know, it's pretty complicated.
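A numerical spot check of the conjectured bounds is easy, if of course not a proof. A Python sketch using bisection (the bracketing endpoints are my assumptions, valid since $f \to +\infty$ both as $x \to 0^+$ and as $x \to +\infty$, while $f(1) = -\ln a < 0$):

```python
import math

def roots_product(a):
    """Product x1*x2 of the two roots of f(x) = x^2 - x - ln x - ln a, for a > 1."""
    f = lambda x: x * x - x - math.log(x) - math.log(a)
    def bisect(lo, hi):  # assumes f changes sign on [lo, hi]
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    x1 = bisect(1e-12, 1.0)         # f(0+) = +inf, f(1) = -ln a < 0
    x2 = bisect(1.0, max(10.0, a))  # f -> +inf as x -> inf
    return x1 * x2
```

For the values of $a$ I tried, the product indeed lands strictly between $3/(2a+1)$ and $\ln a/(a-1)$.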

Partially ordered sets (*posets*) are important objects in combinatorics (with basic connections to extremal combinatorics and to algebraic combinatorics) and also in other areas of mathematics. They are also related to *sorting* and to other questions in the theory of computing. I am asking for a list of open questions and conjectures about posets.

Let $\mathcal{U}(N)$ be the unitary group.

It is well known that $$ \int_{\mathcal{U}(N)} U_{ij} U^\dagger_{nm} \,dU=\delta_{im}\delta_{jn}\frac{1}{N},$$ where $dU$ is the Haar measure.

More complicated averages can also be computed, such as $$\int_{\mathcal{U}(N)} |U_{11}|^2|U_{12}|^2 \,dU = \frac{1}{N(N+1)}.$$
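As a quick numerical aside on that second integral: for $N=2$ the first row of a Haar-random unitary is uniform on the unit sphere in $\mathbb{C}^2$, so $t = |U_{11}|^2$ is uniform on $[0,1]$ and $|U_{12}|^2 = 1 - t$. A Monte Carlo sketch of mine exploiting this:

```python
import random

random.seed(0)
samples = 200_000
# For N = 2, |U11|^2 is uniform on [0, 1] and |U12|^2 = 1 - |U11|^2,
# so the integral is the mean of t * (1 - t) with t uniform.
est = sum(t * (1 - t) for t in (random.random() for _ in range(samples))) / samples
# Exact value: 1/(N(N+1)) = 1/6 for N = 2.
```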

Now, let $R_\lambda$ be an irreducible representation of $\mathcal{U}(N)$ which is different from the fundamental one. Then, the main orthogonality still holds, $$ \int_{\mathcal{U}(N)} [R_\lambda(U)]_{ij}[R_\lambda(U^\dagger)]_{nm} \, dU = \delta_{im} \delta_{jn} \frac{1}{d_\lambda(N)},$$ with the denominator replaced by the dimension of the irrep.

My question is: are there calculations of such integrals involving more matrix elements? Like $$\int_{\mathcal{U}(N)} \Bigl|[R_\lambda(U)]_{11}\Bigr|^2\Bigl|[R_\lambda(U)]_{12}\Bigr|^2 dU=\text{?}$$

Let $S$ be affine. A bit of background: let us think of $S$-group schemes as abelian sheaves over a given site (etale, Zariski, fppf, etc.). When we take the cokernel of a morphism $\phi$ in this category: $$A \xrightarrow{\phi} B \xrightarrow{\beta} B/\phi(A)$$ it will often only be a presheaf, and we must sheafify it with respect to the given site in order to obtain a sheaf.

Is there an example of a cokernel where (a) we must work over the fppf site for the sheafification of the cokernel to still give an exact sequence (the sequence is not exact as sheaves when we sheafify over the etale or Zariski site), and (b) the kernel is not $\mu_d$?

The only examples satisfying (a) that I, and those around me, can come up with are the following:

- $\mu_{d} \to \mu_{n} \xrightarrow{z \mapsto z^d} \mu_{n/d}$; if $d \notin \mathcal{O}_S^*$
- $\mu_{d} \to \mathbb{G}_{m} \xrightarrow{z \mapsto z^d} \mathbb{G}_{m}$; if $d \notin \mathcal{O}_S^*$
- $\mu_{d}(k) \to SL_d(k) \to PGL_d(k)$; if char $k \mid d$

But these feel essentially the same. Are there any different examples?

Edit: Jason Starr answered this in the comments.

Given a vector $x \in \mathbb{R}^{n}_{0+}$ such that $x = \sum^{k}_{i=1} \alpha_{i}v_{i}$, where the vectors $(v_{1},...,v_{k}) \in \mathbb{R}^{n}_{0+}$ form an independent set, $k < n$, and $\alpha_{i} > 0$, it can be seen by a simple combinatorial argument that for at least one vector $v_{i}$, $\frac{||x - \alpha_{i}v_{i}||_{2}}{||x||_{2}} \geq \frac{1}{k}$ (assume the opposite; then the maximum value of $\frac{||x - \alpha_{i}v_{i}||_{2}}{||x||_{2}}$ is less than $\frac{1}{k}$, but this implies that $||x - \sum^{k}_{i=1} \alpha_{i}v_{i}||_{2} > 0$, which is a contradiction).

Say we have an additional vector $y \in \mathbb{R}^{n}_{0+}$ such that $(v_{1},...,v_{k},y)$ is an independent set. What can be said about $\frac{||x - \beta y||_{2}}{||x||_{2}}$ where $\beta$ is the result of minimizing $||x-y\beta||_{2}$ subject to $\beta \geq 0$? If $(v_{1},...,v_{k},y)$ is orthogonal then $\frac{||x - \beta y||_{2}}{||x||_{2}} = 1$, but bounds in the absence of orthogonality don't seem as obvious. If we impose the additional requirement that $x - y\beta \geq 0$ element-wise are there stronger conclusions that can be drawn?
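For concreteness, the one-dimensional constrained minimizer has a closed form, $\beta = \max(0, \langle x,y\rangle / \langle y,y\rangle)$, so the ratio in question is easy to experiment with. A small Python sketch (function names are mine):

```python
import math

def nn_coeff(x, y):
    """beta >= 0 minimizing ||x - beta*y||_2 (closed form of 1-d nonnegative least squares)."""
    dot = sum(a * b for a, b in zip(x, y))
    return max(0.0, dot / sum(b * b for b in y))

def residual_ratio(x, y):
    """||x - beta*y||_2 / ||x||_2 for the optimal nonnegative beta."""
    beta = nn_coeff(x, y)
    num = math.sqrt(sum((a - beta * b) ** 2 for a, b in zip(x, y)))
    return num / math.sqrt(sum(a * a for a in x))
```

In the orthogonal case the dot product vanishes, so $\beta = 0$ and the ratio is $1$, consistent with the remark above.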

P.S. Apologies if the title I chose for this question is not perfectly representative of what I actually asked, I was unable to determine if there is a common technical term for this sort of question.

EDIT: Removed silly equivocation about the case when $(v_{1},...,v_{k},y)$ is orthogonal.

EDIT 2: Further research revealed a flaw in the reasoning presented here. I'm in the process of reformulating the question and examining if I can come to any conclusions myself in the meantime. I may repost at a later date if I can't make further progress, but as written here this question is ill-posed.

First, some definitions:

Let A be a shorthand for an infinite sequence ( a(1), a(2), a(3), ... ) of positive integers, likewise B is an infinite sequence of positive integers ( b(1), b(2), b(3) ,...), and C is an increasing infinite sequence of positive integers ( c(0), c(1), c(2), c(3), ...). Call c(0) the initial value of C.

We say that the pair A,B is a recursion pattern for C if for each positive integer n we have c(n) = a(n) * c(n-1) + b(n). Given a recursion pattern A,B, and an initial value for C, we can reconstruct all the sequence C.

Clearly, for any increasing infinite sequence C we can find a pair A,B which is a recursion pattern for C, simply by taking all the a(n) to be equal to 1, and defining the b's as required. If the sequence C is increasing quickly, there will be many pairs A,B which will be recursion patterns for C.
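This trivial pattern with all $a(n) = 1$ can be written down directly; a small Python sketch (names are mine):

```python
def trivial_pattern(C):
    """Recursion pattern with a(n) = 1, so b(n) = c(n) - c(n-1)."""
    A = [1] * (len(C) - 1)
    B = [C[i] - C[i - 1] for i in range(1, len(C))]
    return A, B

def reconstruct(A, B, c0):
    """Rebuild the sequence from a pattern and an initial value via c(n) = a(n)*c(n-1) + b(n)."""
    C = [c0]
    for a, b in zip(A, B):
        C.append(a * C[-1] + b)
    return C
```

Note that with this trivial pattern a different initial value just shifts every term by a constant, which is why the interesting question below concerns general patterns.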

In particular, for any increasing sequence of primes, we can find recursion patterns, many of them if the sequence grows quickly.

Now the question: can a pair A,B be a recursion pattern for more than one sequence of primes? In other words, given A and B as a recursion pattern, is it possible that there is more than one initial value c(0) which leads to an infinite sequence of primes?

Comments: If the answer is yes, and we can prove that, I would expect the proof to be quite difficult. Maybe a conditional proof would be possible. On the other hand, it may be elementary to show that the answer is no. This question was written up by Moshe Newman.