Math Overflow Recent Questions


Finite group with a character having one nonzero absolute value

Tue, 08/07/2018 - 15:19

Let $G$ be a finite group. Assume that $\chi$ is a complex irreducible character of $G$ of degree $n\geq 2$, with the property that for each element $g\in G$ either $\chi(g)=0$ or $|\chi(g)|=n$.

  1. Does it necessarily follow that $\chi$ is imprimitive, i.e., induced from a character of a subgroup?

The only such examples I could think of are given by extraspecial groups.

  2. Is there a classification of all the finite groups with this property?

The questions are partly motivated by this post: Finite groups with a character having very few nonzero values?

Derivative of complex matrix pseudo inverse with respect to real and imaginary components

Tue, 08/07/2018 - 01:34

I have a complex non-square matrix $\mathbf{Y}\in\mathbb{C}^{n \times m}$ whose (generalized) inverse I compute using the Moore-Penrose pseudoinverse, $\mathbf{Z}=\mathbf{Y^+}$.

I am interested in evaluating the derivatives of the real and imaginary components of $\mathbf{Z}$ with respect to the real and imaginary components of $\mathbf{Y}$,

\begin{equation} \left[\begin{array}{c c } \frac{\partial \Re({Z_{ij}})}{\partial \Re(Y_{st})} & \frac{\partial \Re({Z_{ij}})}{\partial \Im(Y_{st})} \\[0.25cm] \frac{\partial \Im({Z_{ij}})}{\partial \Re(Y_{st})} & \frac{\partial \Im({Z_{ij}})}{\partial \Im(Y_{st})} \end{array}\right]. \end{equation}

From (Hjørungnes 2011) I am aware that the complex derivative of $\mathbf{Y^+}$ takes the form, \begin{equation} \frac{d}{dY_{st}} \mathbf{Y^+} = -\mathbf{Y^+} \left(\frac{d}{dY_{st}} \mathbf{Y}\right)\mathbf{Y^+} + \mathbf{Y^+} \mathbf{Y^{+H}} \left(\frac{d}{dY_{st}} \mathbf{Y}\right)^\mathbf{H} (\mathbf{I}-\mathbf{Y}\mathbf{Y^+}) + (\mathbf{I}-\mathbf{Y^+}\mathbf{Y}) \left(\frac{d}{dY_{st}} \mathbf{Y}\right)^\mathbf{H}\mathbf{Y^{+H}} \mathbf{Y^+}. \end{equation}

My question is whether I can then apply the Cauchy-Riemann equations to get the derivatives with respect to the real and imaginary components:

\begin{equation} \left[\begin{array}{c c } \Re\left(\frac{\partial{Z_{ij}}}{\partial {Y_{st}} }\right) & -\Im\left(\frac{\partial{Z_{ij}}}{\partial {Y_{st}} }\right) \\[0.25cm] \Im\left(\frac{\partial{Z_{ij}}}{\partial {Y_{st}} }\right) & \Re\left(\frac{\partial{Z_{ij}}}{\partial {Y_{st}} }\right) \end{array}\right ] = \left[\begin{array}{c c } \frac{\partial \Re({Z_{ij}})}{\partial \Re(Y_{st})} & \frac{\partial \Re({Z_{ij}})}{\partial \Im(Y_{st})} \\[0.25cm] \frac{\partial \Im({Z_{ij}})}{\partial \Re(Y_{st})} & \frac{\partial \Im({Z_{ij}})}{\partial \Im(Y_{st})} \end{array}\right ]. \end{equation}

I am aware that their application requires the function in question to be analytic, but I do not know whether this is satisfied in the case of a complex pseudoinverse. If not, how else may I go about evaluating these derivatives?

Reference: Hjørungnes, Are. Complex-Valued Matrix Derivatives: With Applications in Signal Processing and Communications. 1st edition, Cambridge University Press, New York, NY, USA, 2011. ISBN 9780521192644.
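For a numerical sanity check of whichever identity one ends up using, a finite-difference sketch along the following lines may help (this is not from the reference; it assumes NumPy, and the routine and its names are illustrative only):

    import numpy as np

    def numeric_block(Y, s, t, i, j, eps=1e-6):
        # Central-difference estimate of the 2x2 block
        # [[dRe(Z_ij)/dRe(Y_st), dRe(Z_ij)/dIm(Y_st)],
        #  [dIm(Z_ij)/dRe(Y_st), dIm(Z_ij)/dIm(Y_st)]], where Z = pinv(Y).
        E = np.zeros_like(Y)
        E[s, t] = 1.0
        Z = np.linalg.pinv
        d_re = (Z(Y + eps * E)[i, j] - Z(Y - eps * E)[i, j]) / (2 * eps)
        d_im = (Z(Y + 1j * eps * E)[i, j] - Z(Y - 1j * eps * E)[i, j]) / (2 * eps)
        return np.array([[d_re.real, d_im.real],
                         [d_re.imag, d_im.imag]])

    # Example: a random 4x3 complex matrix; Z = pinv(Y) is 3x4.
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
    print(numeric_block(Y, s=0, t=1, i=2, j=0))

Comparing this block with the analytic formula above, split into real and imaginary parts, shows directly whether the Cauchy-Riemann pattern in the last display holds at a given point.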

Can this criterion indicate the randomness of some numbers? [on hold]

Mon, 08/06/2018 - 17:10

John Derbyshire, in his book Prime Obsession, says on page 366 (Chapter 3, note 10):

"Here is an example of e turning up unexpectedly. Select a random number between 0 and 1. Now select another and add it to the first. Keep doing this, piling on random numbers.
How many random numbers, on average, do you need to make the total greater than 1?
Answer: 2.71828…."

Question: Is the converse of the above statement true?

Explanation:

Select a number between 0 and 1. Now select another and add it to the first. Keep doing this, piling on numbers. If we find that, on average, we need 2.71828…. numbers to make the total greater than 1, are these numbers random? Here, random numbers are numbers that occur in a sequence such that two conditions are met: (1) the values are uniformly distributed over a defined interval or set, and (2) it is impossible to predict future values based on past or present ones. (https://whatis.techtarget.com/definition/random-numbers)

Analytically: suppose we have a set of numbers $D = \{d_1, d_2, d_3, \ldots, d_n\}$ and we know nothing about $d_k$ except that it lies in the interval $(0,1)$, $k = 1, 2, 3, \ldots, n$. Take the subsets $\{d_1, d_2, \ldots, d_{x_1}\}$, $\{d_{x_1+1}, d_{x_1+2}, \ldots, d_{x_2}\}$, ..., $\{d_{x_j+1}, d_{x_j+2}, \ldots, d_{x_n} = d_n\}$, such that $x_1, x_2, \ldots, x_n$ are the minimal numbers of consecutive elements of $D$ required for their sum to exceed 1, i.e. $d_1 + d_2 + d_3 + \cdots + d_{x_1} > 1$, $d_{x_1+1} + d_{x_1+2} + \cdots + d_{x_2} > 1$, ...

Now we define the sequence $a(1) = x_1$, $a(2) = x_2$, ..., $a(n) = x_n$.

Finally we find $\lim_{n\to\infty} \frac{a(1) + \cdots + a(n)}{n}$.

If this limit exists and is equal to $e = 2.718281828459045235360287471\ldots$,

can we claim that the elements of $D$ are random numbers? And if the limit differs from $e$, can we conclude that they are not random?
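As an aside, the quoted fact itself is easy to reproduce numerically; here is a minimal Monte Carlo sketch (assuming NumPy; the trial count is arbitrary). It only illustrates the forward direction and says nothing about the converse asked above.

    import numpy as np

    def average_count(trials=10**5, seed=0):
        # Average number of Uniform(0,1) draws needed for the running sum to exceed 1.
        rng = np.random.default_rng(seed)
        counts = np.empty(trials, dtype=int)
        for t in range(trials):
            total, n = 0.0, 0
            while total <= 1.0:
                total += rng.random()
                n += 1
            counts[t] = n
        return counts.mean()

    print(average_count())  # approximately 2.718...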

Function is $L^p$-integrable for $p >1$ [Kähler Geometry]

Mon, 08/06/2018 - 16:24

I am reading through a proof in W. Ding and G. Tian's 1992 paper on the generalised Futaki invariant. To provide context, we are looking for obstructions to the existence of Kähler--Einstein metrics with positive scalar curvature. The Futaki invariant provides an example of such an obstruction. My confusion is not in the Kähler geometry, but in some standard Riemannian Geometry/Geometric Analysis.

Suppose that $X$ is a compact $\mathbb{Q}$-Fano variety, i.e., there is an ample line bundle $L \longrightarrow X$ which restricts to the pluri-anticanonical line bundle $K_X^{-m}$ over the regular part of $X$ for some $m$. Let $\omega$ be an admissible Kähler metric on $X$ which represents $\frac{1}{m} c_1(L)$, where $c_1(L)$ denotes the first Chern class of $L$. Note that by $\omega$ being admissible, we mean that it is given by the pullback of the Fubini-Study metric on $\mathbb{CP}^n$, i.e., $$\omega = \frac{\alpha}{m} \phi_m^{\ast} \left( \omega_{\text{FS}} + \frac{\sqrt{-1}}{2\pi} \partial \overline{\partial} \psi \right),$$ $\psi \in \mathscr{C}^{\infty}(\mathbb{CP}^n, \mathbb{R})$ and $\phi_m$ is the Kodaira embedding furnished from the global sections of $L \to X$.

Now let $\pi : \widetilde{X} \longrightarrow X$ be a smooth resolution of $X$, with $\pi$ given simply by projection. It is clear that the support of the cohomology class $\text{Ric}(\widetilde{\omega}) - \pi^{\ast}\omega$ is contained in the exceptional divisors of $\widetilde{X}$. Here $\widetilde{\omega}$ denotes a Kähler metric on $\widetilde{X}$ such that $\pi^{\ast} \omega \leq \widetilde{\omega}$. Let $E_1, \ldots, E_{\ell}$ be the exceptional divisors; then $$\text{Ric}(\widetilde{\omega}) - \pi^{\ast} \omega = \sum_{k=1}^{\ell} \alpha_k C_1([E_k]),$$ where $C_1([E_k])$ denotes the Poincaré dual to $E_k$ in $\widetilde{X}$.

For each $k$, let $\| \cdot \|_k$ denote a Hermitian metric on the line bundle $[E_k]$ and $S_k$ a section of $[E_k]$ whose zero locus is exactly $E_k$. Then in the sense of distributions, $$\text{Ric}(\widetilde{\omega}) - \pi^{\ast} \omega = \frac{\sqrt{-1}}{2\pi} \partial \overline{\partial} \left( - \sum_{k=1}^{\ell} \alpha_k \log \| S_k \|_k^2 + \varphi \right),$$ where $\varphi \in \mathscr{C}^{\infty}(\widetilde{X}, \mathbb{R})$.

Continuing the proof, we obtain a function $f$ of the form $$f = - \sum_{k=1}^{\ell} \alpha_k \log \| S_k \|_k^2 + \varphi + \log \left( \frac{\widetilde{\omega}^n}{\pi^{\ast} \omega^n} \right) + \text{constant}$$ on the regular part of $X$.

Question: How does one show that $f \in L^p(X, \omega^n)$, i.e., $$\int_X \left| f \right|^p \omega^n < \infty,$$ where $p > 1$?

Of course, $\varphi$ is smooth and $X$ is compact, so that term is no concern. However, I cannot control the logarithm terms; is there something blindingly obvious that I am missing?

Is this conjecture strictly weaker than P=NP?

Sun, 08/05/2018 - 21:27

My three computability questions are related to the following group theory question (first asked by Bridson in 1996):

For which real $\alpha\ge 2$ is the function $n^\alpha$ equivalent to the Dehn function of a finitely presented group (i.e., which numbers belong to the isoperimetric spectrum)?

Clearly $\alpha\ge 1$ for every $\alpha$ in the isoperimetric spectrum, and by Gromov's theorem, the isoperimetric spectrum does not contain numbers from $(1,2)$.

In what follows, all functions are bounded by polynomials, so two functions $f(n), g(n)$ are equivalent if $af(n)<g(n)<bf(n)$ for some positive $a, b$. It is not necessary to know what the Dehn function of a group is (it is an important asymptotic invariant of a group). In this paper, we showed that $\alpha\ge 4$ belongs to the isoperimetric spectrum if $\alpha$ can be computed by a non-deterministic Turing machine in time at most $2^{c2^m}$ for some $c>0$. Recently Olshanskii proved the same statement for all $\alpha\ge 2$ (the paper will appear in the Journal of Combinatorial Algebra). On the other hand, if $\alpha$ is in the isoperimetric spectrum, then $\alpha$ can be computed in time at most $2^{2^{c2^{m}}}$ for some $c>0$. If P=NP, then one can reduce the number of 2's to two and bring the upper bound down to the lower bound, completing the description of the isoperimetric spectrum. But the proof in our paper (Corollary 1.4) would also give two 2's if the following seemingly weaker conjecture holds.

Conjecture. Let $T(n)$ be the time function of a non-deterministic Turing machine which is between $n^2$ and $n^k$ for some $k$. Then there is a deterministic Turing machine $M$ computing a function $T'(n)$ which is equivalent to $T(n)$ and having time function at most $T(n)^c$ for some constant $c$ (depending on $T$). (For the definition of the time function see this question).

Question Is the conjecture strictly weaker than P=NP?

Properties of a "research announcement"

Sun, 08/05/2018 - 15:35

Some mathematics journals publish "research announcements", a class of publication that before today I had not heard of. An example is Electronic Research Announcements in Mathematical Sciences.

I presume that after publishing a research announcement in such a journal presenting a particular result, one can subsequently publish a full research paper on the same result, generally in a different journal. Obviously, it is not ordinarily the case that journals will knowingly allow the same result to be published twice. Therefore, research announcements must have some defining properties which make this practice acceptable.

My question is the following: what are the properties of the research announcement which allow the subsequent publication of the full research paper describing the same result?

Such a question may be of importance, for example, to a journal editor who is handling a submission that describes a result which has previously been published as a research announcement. Perhaps the answer is simple: that the research announcement must contain no proofs. But perhaps the convention is more subtle than this; I'm not sure.

I am aware that the practice of publishing research announcements is not widespread, and I am not interested for the purposes of this question in discussing whether anyone ought to publish a research announcement in any particular situation.

I believe that this question is best suited to mathoverflow.net, rather than (for example) to academia.stackexchange.com, as I am asking specifically about publication practice in mathematics.

Three theorems on the number of nonzero coefficients of a polynomial

Sun, 08/05/2018 - 12:40

The number of positive real roots of a polynomial with real coefficients is strictly smaller than the number of nonzero coefficients of the polynomial. This is an immediate corollary of Descartes' rule of signs.
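For illustration (a routine example, not from the original post): the polynomial $x^7 - 5x^4 + 2x - 3$ has four nonzero coefficients, hence at most three positive real roots; Descartes' rule of signs gives the same bound here, since the sign pattern $+,-,+,-$ has three changes.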

For a polynomial with complex coefficients of degree at most $p-1$, with $p$ prime, the number of distinct roots of the polynomial which are $p$-th roots of unity is strictly smaller than the number of nonzero coefficients of the polynomial. As observed by Tao, this is equivalent to an uncertainty inequality for prime-order groups.

For a polynomial over a field of zero characteristic, the multiplicity of any of its non-zero roots is strictly smaller than the number of nonzero coefficients of the polynomial. To my knowledge, this first appeared in a paper by Brindza.

Are these results reducible to each other? (Well, the ground fields are not quite the same, and yet...) Are there any other similar results known? And, ultimately, is there any "common parent" from which all these results can be derived?

Primality test for specific class of Proth numbers

Sun, 08/05/2018 - 10:54

Can you provide a proof or a counterexample for the following claim:

Let $P_m(x)=2^{-m}\cdot \left(\left(x-\sqrt{x^2-4}\right)^{m}+\left(x+\sqrt{x^2-4}\right)^{m}\right)$

Let $N=k\cdot 2^n+1$ such that $n>2$ , $0< k <2^n$ and

$\begin{cases} k \equiv 1,7 \pmod{30} &\text{with } n \equiv 0 \pmod{4}, \text{ or} \\ k \equiv 11,23 \pmod{30} &\text{with } n \equiv 1 \pmod{4}, \text{ or} \\ k \equiv 13,19 \pmod{30} &\text{with } n \equiv 2 \pmod{4}, \text{ or} \\ k \equiv 17,29 \pmod{30} &\text{with } n \equiv 3 \pmod{4} \end{cases}$

Let $S_i=S_{i-1}^2-2$ with $S_0=P_k(8)$; then $N$ is prime iff $S_{n-2} \equiv 0 \pmod N$.

You can run this test here. A list of Proth primes sorted by coefficient $k$ can be found here. I have tested this claim for many random values of $k$ and $n$, and there were no counterexamples.

Note that for $k=1$ we have Inkeri's primality test for Fermat numbers. Reference: K. Inkeri, Tests for primality, Ann. Acad. Sci. Fenn. Ser. A I 279 (1960), 1-19.
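For anyone who wants to experiment, here is a minimal Python sketch of the claimed test (it merely mirrors the statement above and is not a proof; it uses the fact that $P_k(8)$ equals the Lucas sequence value $V_k(8,1)$, which satisfies $V_0=2$, $V_1=8$, $V_{j+1}=8V_j-V_{j-1}$):

    def lucas_V(k, P, N):
        # V_k(P, 1) mod N by a binary ladder, using V_{2j} = V_j^2 - 2 and
        # V_{2j+1} = V_j * V_{j+1} - P (valid since Q = 1).
        v0, v1 = 2 % N, P % N
        for bit in bin(k)[2:]:
            if bit == '1':
                v0, v1 = (v0 * v1 - P) % N, (v1 * v1 - 2) % N
            else:
                v0, v1 = (v0 * v0 - 2) % N, (v0 * v1 - P) % N
        return v0

    def claimed_test(k, n):
        # N = k*2^n + 1; S_0 = P_k(8) mod N, S_i = S_{i-1}^2 - 2 mod N.
        # The claim: N is prime iff S_{n-2} == 0 (mod N).
        N = k * 2**n + 1
        s = lucas_V(k, 8, N)
        for _ in range(n - 2):
            s = (s * s - 2) % N
        return s == 0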

Atiyah-Patodi-Singer for manifolds with cusps

Sun, 08/05/2018 - 10:03

Dear Colleagues and Friends,

Please let me know if you are aware of any references to the following question.

The classical result of Atiyah, Patodi and Singer tells us that if $W$ is a compact oriented Riemannian 4-manifold with boundary $M$ and, moreover, if we assume that near $M$ the metric is isometric to a product, then $$ \operatorname{sign}(W)= \frac{1}{3} \int_W p_1 - \eta(M),$$ where $p_1$ is the differential form representing the first Pontryagin class of $W$, and $\eta$ is the eta-invariant of $M$.

What about the case when both $W$ and $M$ are hyperbolic manifolds and are allowed to have cusps? Or, say, $W$ and $M$ are Riemannian as above, with infinite ends of finite volume, on which the metric is isometric to a product? (which will be the case if both are hyperbolic with cusps - I'm sure that this is not a very general setting :-))

Any information will be appreciated. Please excuse my ignorance as a differential geometer.

The tensor product of two bounded operators

Sun, 08/05/2018 - 09:58

Let $E$, $F$ be two complex Hilbert spaces and $\mathcal{L}(E)$ (resp. $\mathcal{L}(F)$) be the algebra of all bounded linear operators on $E$ (resp. $F$).

The algebraic tensor product of $E$ and $F$ is given by $$E \otimes F:=\left\{\xi=\sum_{i=1}^dv_i\otimes w_i:\;d\in \mathbb{N}^*,\;\;v_i\in E,\;\;w_i\in F \right\}.$$

In $E \otimes F$, we define $$ \langle \xi,\eta\rangle=\sum_{i=1}^n\sum_{j=1}^m \langle x_i,z_j\rangle_1\langle y_i ,w_j\rangle_2, $$ for $\xi=\displaystyle\sum_{i=1}^nx_i\otimes y_i\in E \otimes F$ and $\eta=\displaystyle\sum_{j=1}^mz_j\otimes w_j\in E \otimes F$.

The above sesquilinear form is an inner product in $E \otimes F$.

It is well known that $(E \otimes F,\langle\cdot,\cdot\rangle)$ is not a complete space. Let $E \widehat{\otimes} F$ be the completion of $E \otimes F$ under the inner product $\langle\cdot,\cdot\rangle$.

If $T\in \mathcal{L}(E)$ and $S\in \mathcal{L}(F)$, then the tensor product of $T$ and $S$ is denoted $T\otimes S$ and defined as $$\big(T\otimes S\big)\bigg(\sum_{k=1}^d x_k\otimes y_k\bigg)=\sum_{k=1}^dTx_k \otimes Sy_k,\;\;\forall\,\sum_{k=1}^d x_k\otimes y_k\in E \otimes F,$$ which lies in $\mathcal{L}(E \otimes F)$. The extension of $T\otimes S$ over the Hilbert space $E \widehat{\otimes} F$, denoted by $T \widehat{\otimes} S$, is the tensor product of $T$ and $S$ on the tensor product space, which lies in $\mathcal{L}(E\widehat{\otimes}F)$.

Let $\operatorname{Im} (X)$ and $\overline{\operatorname{Im} (X)}$ denote respectively the range of an operator $X$ and the closure of its range.

Let $T,M\in \mathcal{L}(E)$ and $S,N\in \mathcal{L}(F)$ be such that

  • $\operatorname{Im} (T)\subseteq \overline{\operatorname{Im} (M)}$.

  • $\operatorname{Im} (S)\subseteq\overline{\operatorname{Im} (N)}$.

I want to prove that $$\operatorname{Im}(T \widehat{\otimes} S)\subseteq \overline{\operatorname{Im}(M \widehat{\otimes} N)}.$$

Note that I show that $$\overline{\operatorname{Im} (M)}\otimes\overline{\operatorname{Im} (N)}\subseteq\overline{\operatorname{Im}(M \otimes N)}.$$

Existence of Solution, System of Equations

Sun, 08/05/2018 - 08:57

Suppose $P(\lambda, i)$ is the probability that a Poisson random variable with mean $\lambda$ is equal to $i$, i.e. $P(\lambda, i) = \frac{\lambda^i}{e^{\lambda}i!}$.

I think the following system of equations always has a solution in $x$ and $y$, non-negative real numbers, for any $\alpha>0$ and $k\in \mathbb{N}_+$:

\begin{cases} \alpha=\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i) \\ \alpha=\sum_{i=0}^{\infty}P(x, i)\cdot P(y, k+i+1) \end{cases}

provided the necessary condition $\alpha\leq P(k+1, k+1)$ holds. It is easy to prove that this is indeed a necessary condition, equivalent to the condition that $\alpha=P(\lambda,k+1)$ has a solution. It is also easy to see that the solution $y$ of the system is smaller than or equal to $\lambda$, the largest solution of the equation $\alpha=P(\lambda,k+1)$. Experiments show that for each fixed $\alpha$ and $k$ there is a solution, but I did not manage to prove it analytically.

Is there any analogue of the mean value theorem for multidimensional functions? Any suggestions for proof directions will be appreciated.
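For what it is worth, the kind of experiment mentioned above can be sketched as follows (truncating the infinite sums and using SciPy's root finder; the truncation length, starting point, and the choice of $\alpha$ and $k$ are ad hoc):

    import numpy as np
    from scipy.optimize import fsolve
    from scipy.stats import poisson

    def residuals(v, alpha, k, imax=200):
        # Truncated versions of the two series, minus alpha.
        x, y = np.abs(v)            # keep the iterates nonnegative
        i = np.arange(imax)
        f1 = np.sum(poisson.pmf(i, x) * poisson.pmf(k + i, y))
        f2 = np.sum(poisson.pmf(i, x) * poisson.pmf(k + i + 1, y))
        return [f1 - alpha, f2 - alpha]

    alpha, k = 0.1, 2               # note alpha <= P(k+1, k+1) must hold
    x, y = np.abs(fsolve(residuals, x0=[1.0, 1.0], args=(alpha, k)))
    print(x, y, residuals([x, y], alpha, k))  # residuals near zero indicate a solution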

Is there a Thom isomorphism for equivariant K-groups in algebraic geometry, not necessarily over the complex numbers?

Sun, 08/05/2018 - 08:53

In Chriss and Ginzburg's fantastic book 'Representation Theory and Complex Geometry', they use the following Thom isomorphism:

If $\pi:E\rightarrow X$ is a $G$-equivariant affine linear bundle, then $\pi^{*}: K^{G}(X)\rightarrow K^{G}(E)$ is an isomorphism.

It seems that the Thom isomorphism and cellular fibration can provide a lot of information about the equivariant K-theory of flag varieties and Steinberg varieties.

In the book, the injectivity of the map is proved by using specialization, which seems (to me) to be a topological method.

Questions:

(1) Is there an algebraic proof of this isomorphism?

(2) Does this still hold over other algebraically closed fields?

Convolution of sheaves on R

Sun, 08/05/2018 - 08:50

I am trying to understand a basic computation of convolution. Throughout, $R$ is the real line as a topological group and $k$ is some base field. I would like to understand the computation of the convolution

$k_{(a,b)} \star k_{[0,\infty)}$.

I believe this convolution should be $k_{[a,\infty)}[-1]$, but I am unable to prove this.

In general, the stalk at $t \in R$ is (by Beck-Chevalley) the compactly supported cohomology of $k_{(a,b)} \boxtimes k_{[0,\infty)}$ restricted to the line $\{t_1 + t_2 = t\} \subset R^2$. In what follows, $A<B$ are real numbers:

  • When $t>b$, the stalk is $R\Gamma_c(k_{(A,B)}) \simeq k[-1]$, the reduced cohomology of the circle, as one is computing the compactly supported cohomology of an open interval.
  • When $a<t\leq b$, the stalk is $R\Gamma_c(k_{[A,B)})$. By the short exact sequence $k_{(-\infty,A)} \to k_{(-\infty,B)} \to k_{[A,B)}$, the stalk is thus zero. (The map $k_{(-\infty,A)} \to k_{(-\infty,B)}$ induces an isomorphism on compactly supported cohomology.)
  • When $t=a$, the stalk is the compactly supported cohomology of $R$ with respect to a skyscraper sheaf. The skyscraper sheaf is flabby, so we conclude that the stalk at $t=a$ is a copy of $k$ in degree 0.
  • When $t<a$, the stalk is zero.

Is this computation of stalks correct? If not, I would very much like to know what I am doing wrong.

A condition on vector fields for uniqueness of (measure-valued solutions to) continuity equation

Sun, 08/05/2018 - 08:37

I recently got interested in this paper, where the authors analyze the question of uniqueness of measure-valued solutions to the continuity equation. Although the paper is well written in my opinion, there is something that I can "understand" but not quite feel, i.e. I lack some intuition behind it and cannot get it completely. Here is the main theorem of the cited paper:

Although it is not important for this post, here is the Cauchy problem (1.1) (to be understood in the distributional sense): $$ \partial_t \mu_t + \text{div} (b\mu_t) = 0, \qquad \mu_0 = \overline{\mu} $$ for some given measure $\overline{\mu}$ on $\mathbb R^d$. My question focuses on the assumptions; more precisely, it is the following:

Q. What is condition (ii) really saying? What is this (rather mysterious) sequence $(V_k)_k$? Can someone provide some intuition behind it?

As I said above, I can understand the paper, i.e. I see how this condition is used in the proof of the theorem and I see how it is verified in some examples: they usually choose $b_k$ to be the mollified vector fields (with some fixed convolution kernel) and then they choose either $V_k \equiv 1$ or, in the case $d=1$, $V_k = 1/b_k^2$. I am completely lacking intuition behind these choices and I cannot figure out what this $V_k$ should stand for.

Some comments.

It is easily seen that the quantity $\langle A \xi, \xi \rangle$, where $A$ is a matrix and $\xi \in \mathbb R^d$, depends only on the symmetric part $A^{sym}:=\frac{1}{2}(A + A^T)$ of the matrix $A$. All in all, recalling the definition of the Rayleigh quotient, condition (ii) is saying something like $$ \Lambda_k(x) V_k \le C V_k - \langle b_k, \nabla V_k \rangle, $$ where $\Lambda_k(x)$ is the maximal eigenvalue of the symmetric part of $\mathcal B_k(x)$. However, this does not seem like much progress to me... I am still missing the meaning of the formula.

Thanks.

A simple quadratic integer optimization problem

Sun, 08/05/2018 - 08:00

Consider the following optimization problem in positive integers $n_1, n_2, n_3$.

$$\begin{array}{ll} \text{maximize} & n_1(n_2+n_3)\\ \text{subject to} & n_1+n_2+n_3 = N\end{array}$$

If $n_1, n_2, n_3$ were reals, the solution would be $n_1 = \frac N2$ and $n_2 = n_3$. However, in my problem, $n_1, n_2, n_3$ are positive integers. Please help me solve this quadratic integer optimization problem.
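A small brute-force sketch (a sanity check rather than a solution method) makes the structure visible: since $n_2 + n_3 = N - n_1$, the objective equals $n_1(N - n_1)$ and depends only on $n_1$.

    def best_split(N):
        # Objective n1*(n2 + n3) = n1*(N - n1); requiring n2, n3 >= 1 forces 1 <= n1 <= N - 2.
        n1 = max(range(1, N - 1), key=lambda m: m * (N - m))
        return n1, N - n1          # optimal n1 and the total n2 + n3

    for N in (7, 10, 13):
        print(N, best_split(N))    # n1 lands at N // 2 or (N + 1) // 2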

Inverse and Composite Functions [on hold]

Sun, 08/05/2018 - 06:39

The function $g$ is such that $g(x) = kx^2$, where $k$ is a constant. (b) Given that $fg(2) = 12$, work out the value of $k$. I have no idea how to solve this; any help? Thanks :)

How to compute the asymptotic of a summation which involves binomial coefficients?

Sun, 08/05/2018 - 06:23

Let $v_1,v_2 \in \{0,1\}^n$. Denote $v_1v_2=((v_1)_1 (v_2)_1, \ldots, (v_1)_n (v_2)_n)$ and $|v|=\sum v_{i}$. \begin{align} {\scriptsize f(v_1, v_2) = \sum_{x_1=0}^{|v_1|} \sum_{x_2=0}^{|v_2|} \sum_{d=0}^{|v_1 v_2|} \frac{1}{2^{|v_1|+|v_2|-|v_1 v_2|}} \biggl| {|v_1| - |v_1 v_2| \choose x_1 - d} {|v_2| - |v_1 v_2| \choose x_2 - d} - {|v_1| - |v_1 v_2| \choose x_1 + 1 - d} {|v_2| - |v_1 v_2| \choose x_2 + 1 - d} \biggr|.} \end{align} I want to estimate $f(v_1, v_2)$ when $|v_1|, |v_2| \to \infty$.

As a first step, using $\binom{a}{x+1} = \frac{a-x}{x+1}\binom{a}{x}$, I obtain \begin{align} { \scriptsize f(v_1, v_2) = \sum_{x_1=0}^{|v_1|} \sum_{x_2=0}^{|v_2|} \sum_{d=0}^{|v_1v_2|} \frac{1}{2^{|v_1|+|v_2|-|v_1v_2|}} \biggl| \left( 1- \frac{(|v_1|-|v_1v_2|-x_1+d)(|v_2|-|v_1v_2|-x_2+d)}{(x_1+1-d)(x_2+1-d)} \right) {|v_1| - |v_1v_2| \choose x_1 - d} {|v_2| - |v_1v_2| \choose x_2 - d} \biggr|. } \end{align}

How can one estimate $f(v_1,v_2)$? Thank you very much.
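As a side note, $f$ depends on $v_1, v_2$ only through $a=|v_1|$, $b=|v_2|$ and $c=|v_1 v_2|$, so the growth can at least be explored numerically before attempting an asymptotic analysis. A minimal sketch (assuming SciPy, whose comb returns 0 for out-of-range arguments, matching the convention for the binomials above):

    from scipy.special import comb

    def f(a, b, c):
        # a = |v1|, b = |v2|, c = |v1 v2|.
        total = 0.0
        for x1 in range(a + 1):
            for x2 in range(b + 1):
                for d in range(c + 1):
                    t = (comb(a - c, x1 - d) * comb(b - c, x2 - d)
                         - comb(a - c, x1 + 1 - d) * comb(b - c, x2 + 1 - d))
                    total += abs(t)
        return total / 2 ** (a + b - c)

    for a in (5, 10, 20, 40):
        print(a, f(a, a, a // 2))   # growth pattern as |v_1|, |v_2| increase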

Transform a function with a homothecy

Sun, 08/05/2018 - 05:44

My question is the following. If I have a function $y(x)$, what is the expression that describes how it is transformed under a general homothetic transformation?

I need this because I'm trying to prove that, given a particular solution of a homogeneous equation, if I apply a homothetic transformation to this particular solution, I obtain a new one.
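For concreteness, here is the standard special case (assuming the homothety is centered at the origin with ratio $\lambda \neq 0$): the point $(x_0, y(x_0))$ is sent to $(\lambda x_0, \lambda y(x_0))$, so the image of the graph of $y$ is again a graph, namely of $$\tilde y(x) = \lambda\, y\!\left(\frac{x}{\lambda}\right).$$ For a homothety centered at a general point, one conjugates this by the translation moving the center to the origin.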

Thanks!

Vector-valued integration

Sun, 08/05/2018 - 04:54

Let $\Omega$ be a locally compact space, $L^1(H)$ the trace-class operators on the Hilbert space $H$, and $\mu$ a positive bounded Radon measure on $\Omega$.

Let $\phi:\Omega\to L^1(H)$ be a Borel measurable function. Does the following hold?

$$\sup_{u\in B(H),\ \|u\|\leq1}\left|\int_\Omega \operatorname{tr}(\phi(t)u)\,d\mu(t)\right|=\int_\Omega \operatorname{tr}(|\phi(t)|)\,d\mu(t)$$

Is stability of an equilibrium point preserved by a permutation matrix (symmetry)?

Sun, 08/05/2018 - 03:25

Given the following differential equations:

\begin{equation} \begin{aligned} \dot{x}_1 &= f_1(x_1,\ldots,x_n) \\ \vdots \\ \dot{x}_n &= f_n(x_1,\ldots,x_n) \end{aligned} \end{equation}

In a compact way: $$\dot{\hat{x}} = \hat{F}(\hat{x})$$

Let $\Psi\subset S_n$ be a group, where $S_n$ is the symmetric group.

Suppose the following key property holds: $$\hat{F}(P_\sigma \hat{x})=P_\sigma \hat{F}(\hat{x}), \ \ \ \ \ \forall \sigma\in\Psi$$ where $P_\sigma$ is the permutation matrix corresponding to $\sigma\in \Psi$.

For example: \begin{equation} \begin{aligned} \dot{x}_1 &= x_1(x_1-1)(x_1+1) +x_2 \\ \dot{x}_2 &= x_2(x_2-1)(x_2+1) +x_1 + x_3 \\ \dot{x}_3 &= x_3(x_3-1)(x_3+1) +x_2\end{aligned} \end{equation}

In this case, we can choose $\sigma = (13)\in \Psi = \{(),(13)\}$, with the corresponding permutation matrix $$P_\sigma = \begin{bmatrix}0 & 0 & 1\\0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.$$ It is easy to see that $\hat{F}(P_\sigma \hat{x})=P_\sigma \hat{F}(\hat{x})$; a numerical check of this equivariance is sketched below.
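For what it is worth, the equivariance in this example can be checked numerically with a few lines (assuming NumPy; this is only a sanity check of the key property, not of the stability question):

    import numpy as np

    def F(x):
        # Example vector field from above.
        x1, x2, x3 = x
        return np.array([
            x1 * (x1 - 1) * (x1 + 1) + x2,
            x2 * (x2 - 1) * (x2 + 1) + x1 + x3,
            x3 * (x3 - 1) * (x3 + 1) + x2,
        ])

    # Permutation matrix for sigma = (13).
    P = np.array([[0, 0, 1],
                  [0, 1, 0],
                  [1, 0, 0]], dtype=float)

    # Check F(P x) = P F(x) at random points.
    rng = np.random.default_rng(0)
    for _ in range(5):
        x = rng.normal(size=3)
        assert np.allclose(F(P @ x), P @ F(x))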

My question:

If $\hat{x}^*$ is a stable equilibrium point for $\dot{\hat{x}} = \hat{F}(\hat{x})$, can I say $P_\sigma \hat{x}^*$ is also a stable equilibrium point, for all $\sigma\in \Psi$?

Note: It is easy to see that $P_\sigma \hat{x}^*$ is also an equilibrium point.
