Columns: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Convergence in distribution of empirical cdf almost surely.
Given i.i.d. random variables $\{X_n\}$ , define the empirical cdf $$ \hat{F}_n(\omega, x) = \frac{1}{n} \sum_{i = 1}^n \mathbf{1}_{X_i(\omega) \leq x}$$ where $\omega \in \Omega$ and $x \in \mathbb{R}$ . Show that $\hat{F}_n(\cdot, \cdot) \overset{d}{\to} F$ as $n \to \infty$ , i.e. $$ P\left(\left\{\omega : \lim_{n \to \infty} \hat{F}_n(\omega, x) = F(x)\text{ for every } x \in C(F)\right\}\right)=1$$ I know that each $\hat{F}_n(\cdot, x)$ is a random variable, and each $\hat{F}_n(\omega, \cdot)$ is a cdf. Furthermore, for fixed $x \in \mathbb{R}$ , we have $\hat{F}_n(\cdot, x) \overset{as}{\to} F$ by the Strong Law of Large Numbers; thus, each $$ P\left(\left\{\omega : \lim_{n \to \infty} \hat{F}_n(\omega, x) = F(x)\right\}\right) := P(A_x) = 1. $$ The problem is extending this to almost all $x$ . If I only had to deal with countably many $x$ , then I could just take a countable union of $A_x^c$ and obtain the result. It has been hinted that the problem can be reduced to this case,
While the existing answer provides a valid route to proving the desired result via the Glivenko-Cantelli theorem, I would like to offer a simpler approach that avoids some of the technicalities involved in the theorem's proof. You have already shown (using Kolmogorov's SLLN) that given $x \in \mathbb R,$ there exists $A_x \subseteq \Omega$ with $\mathbb P \left (A_x \right ) = 1,$ such that for all $\omega \in A_x,$ $\lim\limits_{n \to \infty} \widehat {F_n} (\omega, x) = F(x).$ Let $A = \bigcap\limits_{x \in \mathbb Q} A_x.$ Since $\mathbb Q$ is countable, it follows that $\mathbb P (A) = 1.$ So $\lim\limits_{n \to \infty} \widehat {F_n} (\omega, x) = F(x),$ for all $\omega \in A$ and for all $x \in \mathbb Q.$ Now let $x_0 \in C(F)$ and choose $s,t \in \mathbb Q,$ such that $s \lt x_0 \lt t.$ Since $x \mapsto \widehat {F_n} (\omega, x)$ is a cdf for each $\omega \in \Omega,$ it follows that $\widehat {F_n} (\omega, s) \leq \widehat {F_n} (\omega, x_0) \leq \widehat {F_n} (\omega, t).$
|probability|probability-theory|probability-distributions|random-variables|probability-limit-theorems|
0
Fairly rigorous multivariable calculus books
I'm looking for recommendations for a multivariable calculus book at a somewhat sophisticated level; somewhere between Stewart's Calculus and Munkres' Analysis on Manifolds . I'll have a background in single variable calculus and the typical material from a basic "proofs" class (set theory, logic, proof techniques, some topics in discrete math). This will be my first formal exposure to multivariable calculus beyond some reading I've done for fun. Note that, although I'll have some mathematical maturity and some background in proof-writing, I'll have learned single variable calculus from Stewart, obviously not a very rigorous book. Let me know if you think it's really necessary that I read a more sophisticated calculus text (like Spivak's Calculus ) before moving on to multivariable calculus at the level that I'm describing. I think a book like Spivak's Calculus on Manifolds or Apostol's Calculus, Vol. 2 would be what I'm looking for. Of these two, I think I'd slightly prefer using Spiv
I have tried several calculus books over the past years, and indeed Spivak and Apostol sometimes tend to be overwhelming as an introduction to analysis. I would like to recommend "Vector Calculus, Linear Algebra and Differential Forms - A Unified Approach", as it is neither a formal analysis course nor a purely computational book. If you are interested in geometry and manifolds, this can be a good choice for you, and it also works as a transition text toward a more rigorous treatment of calculus.
|multivariable-calculus|book-recommendation|
0
Solve the IVP $y'+2y = \frac{1}{1+x^2}$
Find the solution of the DE $$y'+2y = \frac{1}{1+x^2}\,\,\,\,\,\,\forall x \in \mathbb R$$ satisfying $y(0) = a$ where $a \in \mathbb R$ is a constant. My attempt: Since it's a linear ODE, the integrating factor (I.F.) is $ e^{\int 2\,dx} = e^{2x}$ . And the solution is $$ ye^{2x} = \int \frac{e^{2x}}{1+x^2} dx$$ I'm facing trouble in solving the integral. I tried using some online integral calculators but the solutions over there tend to include imaginary expressions. I'm not sure if my approach was incorrect or if I'm missing something while solving the ODE. Edit: The question further required us to find the value $$\lim_{x \to \infty} y_a(x)$$ . Can we find the limit for $$y(x) = e^{-2x}\int \frac{e^{2x}}{1+x^2} dx + e^{-2x}C$$ where C is the constant of integration? I'm not sure how to use the initial value in this case. The solution is: $$\lim_{x \to \infty} y_a(x) = 0\,\,\,\,,\forall\,a\in\mathbb R$$ Note: The question was asked in a maths competition where the syllabus d
Hint You've already found that $$e^{2 x} y(x) = \int_0^x \frac{e^{2 t} \,dt}{1 + t^2} + C$$ for some $C$ . The integrand has no closed-form antiderivative in terms of elementary functions---the exponential integral function $\operatorname{Ei}$ or its equivalent is necessary---but we don't need to evaluate the integral to compute the limit. Evaluating both sides at $x = 0$ gives $C = a$ , hence $$y(x) = \frac{a + \displaystyle \int_0^x \frac{e^{2 t} \,dt}{1 + t^2}}{e^{2 x}} .$$ The denominator approaches $+\infty$ as $x \to +\infty$ , so if we can show that the numerator also approaches $+\infty$ as $x \to +\infty$ , we can apply l'Hôpital's Rule to compute the limit. For $t > 0$ , we have $e^{2 t} = 1 + 2 t + 2 t^2 + \cdots > 1 + t^2$ , so the numerator of our formula for $y$ satisfies $$a + \displaystyle \int_0^x \frac{e^{2 t} \,dt}{1 + t^2} > a + \int_0^x \,dt = a + x .$$ Since $\lim_{x \to +\infty} (a + x) = +\infty$ , comparison gives that the numerator satisfies $\lim_{x \to +\inf
|calculus|integration|ordinary-differential-equations|indefinite-integrals|
0
Determine the probability that the distance between the roots of the equation $x^2+mx+n=0$ is not greater than $1$.
Suppose that $m,n$ are real numbers randomly chosen from $[0,1]$ . Determine the probability that the distance between the roots of the equation $x^2+mx+n=0$ is not greater than $1$ . What I tried: Here the total measure $= $ the area of the square formed by $[0,1]\times[0,1]$ , which is $1$ . And here $A(\alpha,f(\alpha))$ and $B(\beta,f(\beta)),$ where $f(x)=x^2+mx+n$ . Here $\alpha+\beta=-m$ and $\alpha\beta=n$ . Then we have to find $\displaystyle \sqrt{(\beta-\alpha)^2+(f(\beta)-f(\alpha))^2}\leq 1$ $\displaystyle \sqrt{(\beta-\alpha)^2+(\beta^2-\alpha^2)^2+m^2(\beta-\alpha)^2}\leq 1$ $\displaystyle |\beta-\alpha|\sqrt{1-m+m^2}\leq 1$ $\displaystyle \sqrt{m^2-4n}\sqrt{1+m+m^2}\leq 1$ How do I find the favourable cases? Please have a look at this problem, thanks.
The roots of $X^2+mX+n$ are $$ \alpha_{\pm}=\frac{-m\pm\sqrt{m^2-4n}}{2} $$ so $\alpha_+-\alpha_-=\sqrt{m^2-4n}\in\mathbf{C}$ . The probability you want to compute is $$ \begin{aligned} \mathbf{P}(|\alpha_+-\alpha_-|\leqslant 1) &= \mathbf{P}(|m^2-4n|\leqslant 1) \\ &=\mathbf{P}\left(\frac{m^2-1}{4}\leqslant n\leqslant\frac{m^2+1}{4}\right) \\ &=\int_0^1\mathbf{P}\left(\frac{t^2-1}{4}\leqslant n\leqslant\frac{t^2+1}{4}\right)dt \\ &=\int_0^1 \frac{t^2+1}{4}dt \,(\text{because } t^2-1\leqslant 0) \\ &=\frac{1}{3}. \end{aligned} $$
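A quick Monte Carlo sanity check of the value $1/3$ (my own addition, not part of the derivation above; it assumes NumPy is available):

```python
import numpy as np

# |alpha_+ - alpha_-| = sqrt(|m^2 - 4n|), so the event is |m^2 - 4n| <= 1.
rng = np.random.default_rng(0)
m, n = rng.random(10**6), rng.random(10**6)
print(np.mean(np.abs(m**2 - 4*n) <= 1))   # ~ 0.3333
```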
|probability|
1
Are $\{z\in \mathbb{C}:|z^2-3|<1\}$ and $\{z\in \mathbb{C}:|z^2-1|<3\}$ complex domains?
I got stuck on the following problem: Which of the following sets are domains in the complex plane: $$D_1=\{z\in \mathbb{C}:|z^2-3|<1\}$$ $$D_2=\{z\in \mathbb{C}:|z^2-1|<3\}$$ These sets are domains if they are arcwise connected open non-empty subsets of the complex plane. I first tried to sketch these regions to get some intuition about the problem. I tried the following: If we let $z=re^{i\theta}$ , then $z^2=r^2e^{2i\theta}=r^2(\cos2\theta+i\sin2\theta)$ and our inequality becomes: $$|z^2-3|^2=|r^2(\cos2\theta+i\sin2\theta)-3|^2=(r^4\cos^22\theta-6r^2\cos2\theta+9)+r^4\sin^22\theta=r^2(r^2-6\cos2\theta)+9<1$$ If we let $\theta=0$ , then this reads $r^2(r^2-6)<-8$ , which (since $r>0$ ) holds exactly when $\sqrt2<r<2$ . And of course if z is a real number and $z\in(-2,-\sqrt2)$ then $|z^2-3|<1$ . So if $\theta=0$ and z lies in $(-2,-\sqrt2)\cup(\sqrt2,2)$ it satisfies the inequality. But what if $\theta\neq0$ ? I have tried a lot, but only got so far. I think that the first one is not a domain, since we will get two distinct open sets U and V such that their
Proving that $D_1$ and $D_2$ are open is not so hard, for instance by noting that for any $a \in \mathbb{C}$ the map $\mathbb{C} \to \mathbb{R} : z \mapsto |z^2 - a|$ is continuous. But I don't think you struggle with this? For $D_1$ your intuition is correct! To prove it is not connected it suffices of course to find two members that cannot be connected by a path. For this it suffices to look at $\sqrt{3}$ and $-\sqrt{3}$ . If we could connect them by a path $\gamma : [0,1]\to \mathbb{C}$ lying completely in $D_1$ , then that would mean that $\gamma$ intersects the imaginary axis. (Why? It's intuitive but to make it rigorous: apply the intermediate value theorem to a specific function ...) However no purely imaginary number $ai$ is in $D_1$ , a contradiction. For $D_2$ you can show that you can connect any two elements $z_1, z_2 \in D_2$ by first taking a straight line path from $z_1$ to $0$ and then from $0$ to $z_2$ . You can prove that this total path will lie completely in $D_2$
|complex-analysis|complex-numbers|
1
Why are variables not considered part of the lexicon of LR($\sigma$)?
I am reading through Mathematical Logic by Chiswell and Hodges and just finished section 5.3, which introduces LR, the Language of Relations, for first-order logic. In this section the following notions are defined: A first-order signature Variables A parsing tree for terms of LR( $\sigma$ ) A parsing tree for formulas of LR( $\sigma$ ) Compositional definitions for each of the above trees mentioned. My question is on exercise 5.3.1, which asks: Suppose $\sigma$ is a signature. What are the symbols in the lexicon of LR( $\sigma$ )? The answer in the back of the book states that the lexicon consists of the symbols in $\sigma$ together with the twelve symbols ' $\lnot$ ' ' $\land$ ' ' $\lor$ ' ' $\to$ ' ' $\leftrightarrow$ ' ' $\bot$ ' ' $=$ ' ' $\forall$ ' ' $\exists$ ' ' $($ ' ' $)$ ' ' $,$ '. This is what I had gotten as my answer too, based on the details of the above notions, except I also inlcuded the variables (which in section 5.3 are defined as the infinitely many symbols $x_0$
Remarkably, this is the only textbook I am aware of that treats propositional calculus in a formal language setting; probably because propositional calculus is often the very initial subject with which teaching of logic begins. Let us review the relevant definition from that part: Definition 3.1.1 For each signature $\sigma$ : (a) The lexicon of $\mathrm{LP}(\sigma)$ is the set of symbols consisting of the truth function symbols (3.1), the parentheses (3.2) and the symbols in $\sigma$ . (b) An expression of $\mathrm{LP}(\sigma)$ is a string of one or more symbols from the lexicon of $\mathrm{LP}(\sigma)$ . The length of the expression is the number of occurrences of symbols in it. (Often the same symbol will occur more than once.) Lexicon (or, “vocabulary”) of a formal language $\mathcal{L}$ is composed of two segments: a logical lexicon and a non-logical lexicon. Non-logical lexicon is also called signature , for it is the distinctive part of the language. Indeed, it is common among m
|logic|first-order-logic|
1
Jay Cummings real analysis, question 4.14 - can it be proved without Cauchy-Schwarz?
Here is the question: Prove that if $a_k>0$ for all k and $\sum\limits_{k=1}^{\infty}{a_k}^2$ converges, then $\sum\limits_{k=1}^{\infty}\frac{a_k}{k}$ converges. There is an already existing answer in this forum which provides a solution using Cauchy-Schwarz - Use Cauchy-Schwarz inequality to show series convergence However, I am asking a different question. My understanding is that the questions in the book can be solved using the knowledge acquired from the book so far . Therefore I am curious whether this question can be solved without Cauchy-Schwarz. Here is what I currently have in terms of trying to solve it. Suppose $\exists n \in \mathbb{N}$ such that $a_k^2 \ge (a_k - \frac{1}{k})^2$ $\forall k \ge n$ . In that case, we have something like this: $$ a_k^2 \ge (a_k - \frac{1}{k})^2 \\ a_k^2 \ge a_k^2 + \frac{1}{k^2} - \frac{2a_k}{k} \\ a_k \ge \frac{1}{2k} $$ And also: $$ L = \sum\limits_{k=n}^{\infty}{a_k}^2 \ge \sum\limits_{k=n}^{\infty}(a_k - \frac{1}{k})^2 > 0 $$ Therefore
We can use the fact that if the series $\sum\limits_{n=1}^{\infty}a_{n}^{2}$ and $\sum\limits_{n=1}^{\infty}b_{n}^{2}$ converge, then $\sum\limits_{n=1}^{\infty}|a_{n}b_{n}|$ also converges $\quad(1)$. This can be obtained from the inequality $|a_{n}b_{n}|\leqslant \frac{1}{2}(a_{n}^{2}+b_{n}^{2})$ . Now, using $(1)$ and taking $b_n=\frac{1}{n}$ , we obtain the desired convergence of $\sum\limits_{n=1}^{\infty}\frac{a_{n}}{n}$ .
|sequences-and-series|convergence-divergence|
1
Regarding Loss function of binary logistic regression using the sigmoid function
I have the following likelihood function: $L(w)=\frac{1}{n}\sum_{t}\log(p(y_{t}/x_{t};\omega))$ and the following probability density: $p(y_{t} = 1/x_{t};\omega) = \sigma(w^{T}x_{t})$ $p(y_{t} = 0/x_{t};\omega) = (1 - \sigma(w^{T}x_{t}))$ so, $p(y_{t}/x_{t};\omega)$ is binary. What I saw in texts is that: $L(\omega)=\sum_{t}\log(p(y_{t}/x_{t};\omega)) =$ $=\sum_{t}[y_{t}\log(\sigma(w^{T}x_{t})) + (1-y_{t})\log(1 - \sigma(w^{T}x_{t}))]$ How did the $y_{t}$ and $(1 - y_{t})$ terms come out of the $\log()$ ?
Note that since $y_t$ is binary, we can write the density function compactly as $$p(y_t|x_t; w) = \sigma (w^Tx_t)^{y_t}(1-\sigma (w^Tx_t))^{1-y_t}$$ Hence, if we take logarithm, we can bring down the $y_t$ and $1-y_t$ . \begin{align}\log (p(y_t|x_t; w)) &= \log (\sigma (w^Tx_t)^{y_t}(1-\sigma (w^Tx_t))^{1-y_t}) \\ &= \log (\sigma (w^Tx_t)^{y_t}) + \log(1-\sigma (w^Tx_t))^{1-y_t}) \\ &=y_t\log (\sigma (w^Tx_t)) + (1-y_t)\log(1-\sigma (w^Tx_t)))\end{align}
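A small numerical illustration of this identity (my own addition, not from the original answer; it assumes NumPy and uses randomly generated data):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=3)
x = rng.normal(size=(5, 3))
y = rng.integers(0, 2, size=5)                            # binary labels y_t
p = sigmoid(x @ w)                                        # sigma(w^T x_t)

compact   = y * np.log(p) + (1 - y) * np.log(1 - p)       # exponents pulled out of the log
piecewise = np.where(y == 1, np.log(p), np.log(1 - p))    # the original case-by-case density
print(np.allclose(compact, piecewise))                    # True
```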
|machine-learning|logistic-regression|
1
Make the dynamic system associated with the differential equation $x'=t(x^2-1)$
I am trying to make the dynamic system associated with this differential equation via its phase space. It is a simple differential equation since it has separable variables. In class we have seen how to build a dynamic system from an autonomous differential equation, and it is clearly seen that this equation is not autonomous. I have tried to proceed in the same way as if it were autonomous, but it doesn't work. Could someone give me a clue to solve this exercise?
Let $(u,v) = (t,x)$ , then we have the system $$ \begin{aligned} \dot{u} &= 1 \\ \dot{v} &= u(v^2-1). \end{aligned} $$ We can then use a phase plane plotter to visualize the $uv$ -plane. Alternatively, we could do this by hand by plotting vectors with horizontal component $1$ and vertical component $t(x^2-1)$ in the $tx$ -plane.
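A small sketch of sampling that direction field numerically (my own illustration, assuming NumPy; feeding the same arrays to matplotlib's quiver would give the phase portrait described above):

```python
import numpy as np

# Direction field of (u', v') = (1, u(v^2 - 1)), i.e. x' = t(x^2 - 1) with (u, v) = (t, x).
u, v = np.meshgrid(np.linspace(-2, 2, 3), np.linspace(-2, 2, 3))
du = np.ones_like(u)
dv = u * (v**2 - 1)
for uu, vv, dvv in zip(u.ravel(), v.ravel(), dv.ravel()):
    print(f"at (t, x) = ({uu:+.1f}, {vv:+.1f}) the field is (1, {dvv:+.2f})")
```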
|ordinary-differential-equations|
0
Logistic regression notation confusion
I am studying logistic regression but I am confused about why we can do this: $$P(y=1|x;\theta) = h_\theta(x)$$ $$P(y=0|x;\theta) = 1- h_\theta(x)$$ how these two become: $$P(y|x_i\theta) = h(x)^y (1-h(x))^{1-y}$$
This is because $y$ is binary. Observe this term: $$P(y|x_i\theta) = h(x)^y (1-h(x))^{1-y}$$ When $y=1$ , $$P(y=1|x_i,\theta) = h(x)^1 (1-h(x))^{1-1}=h(x)(1-h(x))^0=h(x)$$ When $y=0$ , $$P(y=0|x_i,\theta) = h(x)^0 (1-h(x))^{1-0}=h(x)^0(1-h(x))^1=1-h(x)$$
|linear-algebra|logistic-regression|
1
Why is this function almost Lipschitz?
We are still in the saga of solving the 2002 qualifier. This question 6b has stumped me and I am mostly clueless about it: Say $f:\mathbb{R}\rightarrow \mathbb{R}$ is bounded with a finite constant $B$ such that: $$\frac{|f(x+y)+f(x-y)-2f(x)|}{|y|}\leq B$$ Prove there exists $M(\lVert f \rVert_\infty, B)$ such that for all $x\not=y$ : $$|f(x)-f(y)|\leq M |x-y|\left(1+\ln_+(\frac{1}{|x-y|})\right)$$ Where $\ln_+(x)=\max \{0,\ln(x)\}$ Intuitively this means that away from $y=x$ , $f(x+y)\rightarrow f(x)$ linearly. Close to $x$ , it is still true $f(x+y)\rightarrow f(x)$ but it is slightly perturbed by $\ln_+$ . Here are a couple of facts which have gotten me nowhere: Fact 0. The inequality in $B$ would be an approximation for $f''(x)$ if it were divided by $y^2$ instead of $y$ . This is particularly useless, because we have no regularity associated with $f$ . Even if we did $|f''(x)|\leq \lim M/|y|=\infty$ so this observation cannot be of any help. Fact 1. $\lim_{y\rightarrow 0} |f(x-y)-
Set $K = \Vert f \Vert_\infty$ . In order to simplify the notation a bit we fix $y \in \Bbb R$ and consider the function $$ g: \Bbb R \to \Bbb R , \, g(u) = f(y+u) - f(y) \, . $$ Note that $\Vert g \Vert_\infty \le 2K$ . We will prove that there is a constant $M = M(K, B)$ such that $$ \tag{$*$} |g(u)| \le M |u| \left(1+ \log^+ \frac {1}{|u|} \right) $$ for all $u \ne 0$ . Then $$ |f(x) - f(y)| = |g(x-y)| \le M |x-y| \left(1+ \log^+ \frac {1}{|x-y|} \right) $$ for all $x \ne y$ . Proof of $(*)$ : For $|u| \ge 1$ we have $$ |g(u)| \le 2K \le 2K|u| \, , $$ which is the desired linear upper bound $(*)$ with $M=2K$ . Now we consider the case $0 < |u| < 1$ . We have $$ \begin{align} 2 |g(u)| - |g(2u)| &\le |g(2u) - 2g(u)| \\ &= | f(y+2u) + f(y) - 2 f(y+u)| \\ &\le B |u| \, , \end{align} $$ using the given inequality with $\tilde x = y+u$ and $\tilde y = u$ . It follows that $$ |g(u)| \le \frac 12 |g(2u)| + \frac{B|u|}{2} \, . $$ We can apply this repeatedly to $2u, 4u, 8u, \ldots$ : $$ \begin{align} |g(u)|
|real-analysis|calculus|inequality|functional-equations|
1
Computing $h(h(x))$ where $h (x) = \lfloor 5x - 2 \rfloor$
In Velleman's "Calculus: a Rigorous Course," Example 9 from Section 1.3 tasks us with computing $ h(h(x)) $ , where $ h(x) = \lfloor 5x - 2 \rfloor $ . My initial solution: \begin{align*} h(\lfloor 5x - 2 \rfloor) &= \lfloor 5(\lfloor 5x \rfloor - 2) - 2 \rfloor \\ &= \lfloor 5\lfloor 5x \rfloor - 10 \rfloor - 2 \\ &= \lfloor 5\lfloor 5x \rfloor \rfloor - 12 \end{align*} However, the provided solution from the book is: $ 5\lfloor 5x \rfloor - 12 $ How can I get the correct solution ?
You almost got it! $\begin{align*} h(\lfloor 5x - 2 \rfloor) &= \lfloor 5(\lfloor 5x \rfloor - 2) - 2 \rfloor \\ &= \lfloor 5\lfloor 5x \rfloor - 10 \rfloor - 2 \\ &= \lfloor 5\lfloor 5x \rfloor \rfloor - 12 \end{align*}$ Then, since $\lfloor 5x \rfloor $ is always an integer, $5 \times \lfloor 5x \rfloor $ is also an integer, so the outer floor is indeed not needed. So we get $h(h(x))= 5\lfloor 5x \rfloor - 12$ . So we can say $\lfloor a\lfloor b \rfloor \rfloor = a \times \lfloor b \rfloor $ when $a$ is an integer.
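A quick brute-force confirmation of the identity on random inputs (my own check, standard library only):

```python
import math, random

# Check h(h(x)) == 5*floor(5x) - 12 for h(x) = floor(5x - 2).
h = lambda x: math.floor(5 * x - 2)
random.seed(0)
for _ in range(100_000):
    x = random.uniform(-100, 100)
    assert h(h(x)) == 5 * math.floor(5 * x) - 12
print("identity holds on all sampled points")
```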
|functions|solution-verification|ceiling-and-floor-functions|
1
Papa Rudin $6.16$ theorem.
There is the theorem: Suppose $1\leq p \lt \infty $ , $\mu$ is a $\sigma$ -finite positive measure on $X$ , and $\phi$ is a bounded linear functional on $L^{p}(\mu)$ . Then there is a unique $g \in L^{q}(\mu)$ , where $q$ is the exponent conjugate to $p$ , such that $$\phi(f) = \int_{X} fg \ d\mu \ \ (f \in L^{p}(\mu)). $$ Moreover, if $\Phi$ and $g$ are related as mentioned above, we have $$ ||\phi|| = ||g||_{q} . $$ There is the proof: The uniqueness of $g$ is clear, for if $g$ and $g’$ satisfy the relation with $\phi$ , then the integral of $g-g’$ over any measurable set $E$ of finite measure is $0$ ( as we see by taking $\chi_{E}$ for $f$ ), and the $\sigma$ -finiteness of $\mu$ implies therefore that $g - g’ = 0$ a.e. Next, if the relation between $g$ and $\phi$ holds, Hölder‘s inequality implies $$ ||\phi|| \leq ||g||_{q} . $$ I don’t understand how does Hölder’s inequality imply the last inequality. Any help would be appreciated.
One definition of the operator norm is $\|\phi\| = \sup_{\|f\|_p\leq 1} |\phi(f)|$ . So by Holder, if $\|f\|_p \leq 1$ , $$|\phi(f)| \leq \int |fg|\,d\mu \leq \|f\|_p\|g\|_q \leq \|g\|_q,$$ and taking the sup over all such $f$ gives $\|\phi\| \leq \|g\|_q$ .
|real-analysis|functional-analysis|analysis|measure-theory|normed-spaces|
1
Papa Rudin $6.16$ theorem.
There is the theorem: Suppose $1\leq p \lt \infty $ , $\mu$ is a $\sigma$ -finite positive measure on $X$ , and $\phi$ is a bounded linear functional on $L^{p}(\mu)$ . Then there is a unique $g \in L^{q}(\mu)$ , where $q$ is the exponent conjugate to $p$ , such that $$\phi(f) = \int_{X} fg \ d\mu \ \ (f \in L^{p}(\mu)). $$ Moreover, if $\Phi$ and $g$ are related as mentioned above, we have $$ ||\phi|| = ||g||_{q} . $$ There is the proof: The uniqueness of $g$ is clear, for if $g$ and $g’$ satisfy the relation with $\phi$ , then the integral of $g-g’$ over any measurable set $E$ of finite measure is $0$ ( as we see by taking $\chi_{E}$ for $f$ ), and the $\sigma$ -finiteness of $\mu$ implies therefore that $g - g’ = 0$ a.e. Next, if the relation between $g$ and $\phi$ holds, Hölder‘s inequality implies $$ ||\phi|| \leq ||g||_{q} . $$ I don’t understand how does Hölder’s inequality imply the last inequality. Any help would be appreciated.
Holder's inequality states that if $f\in L^p(\mu)$ and $g\in L^q(\mu)$ , then $fg\in L^1(\mu)$ and we have $$ \|fg\|_1\leq\|f\|_p\|g\|_q $$ Also note that $$ |\phi(f)|=\left|\int_Xfg\,d\mu\right|\leq\int_X|fg|\,d\mu=\|fg\|_1 $$ Depending on your definition of the norm of $\phi$ , we take here $$ \|\phi\|=\sup_{\|f\|_p=1}|\phi(f)|\leq\sup_{\|f\|_p=1}\|f\|_p\|g\|_q=\|g\|_q $$ This gives you your inequality.
|real-analysis|functional-analysis|analysis|measure-theory|normed-spaces|
0
Papa Rudin $6.16$ theorem.
There is the theorem: Suppose $1\leq p \lt \infty $ , $\mu$ is a $\sigma$ -finite positive measure on $X$ , and $\phi$ is a bounded linear functional on $L^{p}(\mu)$ . Then there is a unique $g \in L^{q}(\mu)$ , where $q$ is the exponent conjugate to $p$ , such that $$\phi(f) = \int_{X} fg \ d\mu \ \ (f \in L^{p}(\mu)). $$ Moreover, if $\Phi$ and $g$ are related as mentioned above, we have $$ ||\phi|| = ||g||_{q} . $$ There is the proof: The uniqueness of $g$ is clear, for if $g$ and $g’$ satisfy the relation with $\phi$ , then the integral of $g-g’$ over any measurable set $E$ of finite measure is $0$ ( as we see by taking $\chi_{E}$ for $f$ ), and the $\sigma$ -finiteness of $\mu$ implies therefore that $g - g’ = 0$ a.e. Next, if the relation between $g$ and $\phi$ holds, Hölder‘s inequality implies $$ ||\phi|| \leq ||g||_{q} . $$ I don’t understand how does Hölder’s inequality imply the last inequality. Any help would be appreciated.
If $\phi:E\to F$ and $\Vert\phi(x)\Vert_F\le K\Vert x\Vert_E$ then by definition we have $\Vert\phi\Vert\le K$ . Here $E=L^p(\mu)$ , and $F=\mathbb R$ . For all $f\in L^p(\mu)$ , $\phi(f)=\int_Xfg\,d\mu$ , so by Hölders inequality, $$ \vert\phi(f)\vert\le\Vert g\Vert_q\Vert f\Vert_p, $$ hence $\Vert\phi\Vert\le\Vert g\Vert_q$ (here $\Vert g\Vert_q=K$ from my notation above).
|real-analysis|functional-analysis|analysis|measure-theory|normed-spaces|
0
Parametric area of a region bounded by two curves
Let $S(\epsilon)$ be the area of the region bounded by $y=e^x$ and $y=x+1+\epsilon$ , where $\epsilon$ is a small positive number. When $\epsilon\to0,$ we have $$S(\epsilon)=S_0+\epsilon^\alpha S_1+\dots,\alpha>0$$ Find $S_0, S_1$ and $\alpha$ . For starters, this is how the graphs of $y=e^x$ and $y=x+1$ look: Let's take $\epsilon=0.5$ , then we would have the following graphs: So for us to find the area of the region bounded by the two graphs, we would need to calculate the following integral $$\int _a ^b (x+1+\epsilon-e^x)dx$$ We have to determine the limits as well, so we have to solve $$x+1+\epsilon=e^x$$ $$e^x-x=1+\epsilon$$ $$1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dots - x = 1+\epsilon $$ $$\epsilon=\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dots,$$ which I am not sure how to use in order to find $x$ in terms of $\epsilon$ and $e$ . How do I continue from here? I am not sure what the sum given in the problem has to do with the integral and how we should calculate the first two terms. What are $S_0,
In fact, the area $S(\varepsilon)$ can be computed exactly. Let's determine (the abscissae of) the points of intersection between the two curves $y = e^x$ and $y = x + 1 + \varepsilon$ in the first place. They satisfy the relation $e^x = x + 1 + \varepsilon$ , which is a transcendental equation, but it can solved formally with the help of Lambert W function . Indeed, one has : $$ \begin{align} e^x &= x + 1 + \varepsilon \\ 1 &= e^{-x}(x+1+\varepsilon) \\ -e^{-(1+\varepsilon)} &= -e^{-(x+1+\varepsilon)}(x+1+\varepsilon) \\ W(-e^{-(1+\varepsilon)}) &= -(x+1+\varepsilon) \\ x &= -\left(1 + \varepsilon + W(-e^{-(1+\varepsilon)})\right) \end{align} $$ The Lambert function possesses two real branches within the domain $[-1/e;0]$ , namely $W_{-1}$ and $W_0$ , in such a way that $$ \begin{cases} a = -\left(1 + \varepsilon + W_{-1}(-e^{-(1+\varepsilon)})\right) \\ b = -\left(1 + \varepsilon + W_0(-e^{-(1+\varepsilon)})\right) \end{cases} $$ It is to be noted that $$ \lim_{\varepsilon\to0} a = \
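A quick numerical check of the Lambert-W expressions for the intersection abscissae (my own addition, assuming SciPy is available):

```python
import numpy as np
from scipy.special import lambertw

eps = 0.5
a = -(1 + eps + lambertw(-np.exp(-(1 + eps)), k=-1)).real   # branch W_{-1}
b = -(1 + eps + lambertw(-np.exp(-(1 + eps)), k=0)).real    # branch W_0
for pt in (a, b):
    print(pt, np.exp(pt) - (pt + 1 + eps))   # the residual e^x - (x + 1 + eps) should be ~ 0
```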
|integration|definite-integrals|area|
0
Existence of Solutions of mixed Linear-Quadratic Equation System modulo $p$
I need some help on the existence of solutions of an equation system modulo some prime $p$ . The equation system has three parameters $n$ , $N$ and $p$ and the equations are of the form $$ \sum_{i=1}^n x_i^d = 1 \bmod p \qquad \text{for $d = 1, \dots, N$.} $$ The $x_i$ are elements of $\mathbb{Z}_p$ . A trivial solution to this equation system is a solution of the form $x_i = 0$ for all $i \neq k$ and $x_k = 1$ where $k \in \{1,\dots, n\}$ is a randomly chosen index. As @Peter pointed out, for $p=11$ , $n=3$ and $N=2$ there is a non-trivial solution $x_1 = 2$ , $x_2 = 4$ and $x_3 = 6$ . In the meantime, I also found a lot of other examples for $n=3$ , $N=2$ and different $p$ . I am interested in the following: Is there a criterion on $n$ , $N$ and $p$ to easily decide if there is a non-trivial solution? And, assume I challenge you and give you $p$ , $n$ and $N$ for which a non-trivial solution exists, how efficiently can you find such a non-trivial solution? (In dependence of $n$ , $N$ and
This is not a mathematical answer, but merely a program for brute-forcing. Hopefully, someone will use this to find a pattern and explain it mathematically. I am adding it as an answer, as it's too long for a comment. Python code:

```python
from itertools import product
from itertools import chain

def run(p, n, tryToRemoveDuplicates=True):
    l = product(*([list(range(p))] * n))
    l_new = []
    for k in l:
        if sum(k) % p == 1 and sum([a * a for a in k]) % p == 1:
            l_new.append(tuple(sorted(list(k))))
            if not tryToRemoveDuplicates:
                print(k)
    if tryToRemoveDuplicates:
        l = set(l_new)
        for k in l:
            print(k)

# examples
print("For p=5, n=7")
run(5, 7)
print("For p=5, n=11")
run(5, 11, tryToRemoveDuplicates=False)
```

The $\text{run}$ function takes as parameters $p$ and $n$ and an optional boolean. If the boolean value is True, the program will try to remove duplicates (but setting it to False is better for larger $n$ ). For $p=5$ , $n=7$ , we get:

(1, 2, 3, 3, 4, 4, 4)
(0, 0, 0, 2, 3, 3, 3)
(1, 1, 2, 2, 2, 4, 4)
(0, 0, 0, 0, 3, 4, 4)
(0, 0, 2,
|elementary-number-theory|modular-arithmetic|combinatorial-number-theory|
0
Prove that $g(x) = \sum_{n=0}^{+\infty}\frac{1}{2^n+x^2}$ ($x\in\mathbb{R}$) is differentiable and check whether $g'(x)$ is continuous.
The function $g(x)$ is a function series, so it is differentiable when $g'(x)$ converges uniformly. So I should just check uniform convergence of $g'(x)$ by using the Weierstrass M-test: $$g'(x) = \left(\sum_{n=0}^{+\infty}\frac{1}{2^n+x^2}\right)' = \sum_{n=0}^{+\infty}\left(\frac{1}{2^n+x^2}\right)',$$ then $$\left|-\frac{2x}{(2^n+x^2)^2}\right| = \frac{2|x|}{(2^n+x^2)^2} \leq \frac{2|x|}{(2^n)^2} = \frac{2|x|}{4^n}.$$ But now I can't find a sequence that is bigger than $\frac{2|x|}{4^n}$ to use. For checking whether function $g'(x)$ is continuous or not, I think I will use the same argument: If $g''(x)$ converges uniformly, then $g'(x)$ is differentiable $\Longrightarrow$ continuous. Am I solving this problem in a correct way? Any help would be much appreciated.
Note $$|g'_n(x)|=\left|-\frac{2x}{(2^n+x^2)^2}\right| = \frac{2|x|}{2^n+x^2}\cdot\frac{1}{2^n+x^2}.$$ Since $$ \frac{2|x|}{2^n+x^2}\le\frac{1+x^2}{2^n+x^2}\le1, \frac{1}{2^n+x^2}\le\frac1{2^n}$$ you have $$|g'_n(x)|=\left|-\frac{2x}{(2^n+x^2)^2}\right| = \frac{2|x|}{2^n+x^2}\cdot\frac{1}{2^n+x^2}\le\frac1{2^n}.$$ Now you can conclude that $\sum g_n'(x)$ converges uniformly.
|calculus|analysis|functions|
1
Why do we get a connected 2-regular graph?
In reading " PUBLIC-KEY CRYPTOSYSTEM BASED ON ISOGENIES " by Rostovtsev and Stolbunov, they claim on page 8 that the set $U=\{E_i(\mathbb{F}_p)\}$ of elliptic curves with a specific prime $l$ form a "branchless cycle". For some context, $U$ is a set of elliptic curves, each one being a uniquely determined by a $j$ -invariant. $l$ is a prime such that the Kronecker symbol $\left(\frac{D_\pi}{l}\right)=1$ , and $D_\pi$ is the Frobenius discriminant (which is common among all the elliptic curves in $U$ because they are all isogenous by Tate's theorem). Kohel (Theorem 2 in paper) showed that if an elliptic curve has $D_\pi$ and $l$ satisfying $\left(\frac{D_\pi}{l}\right)=1$ , then there are exactly two $l$ -degree isogenies from $E$ . Now this implies $U$ is $2$ -regular, but why does the following statement hold: It is practically determined that, when $\#U$ is prime, all the elements of $U$ form a single isogeny cycle. If $\#U=7$ can't we have two disjoint cycles of size $3$ and $4$ ? A
Answered on MO (MathOverflow) https://mathoverflow.net/questions/467175/why-do-we-get-a-connected-2-regular-graph/467743#467743 . The construction makes $U$ the Cayley graph associated to an abelian group $G$ and a pair $\pm g$ of elements of $G$ . In such a graph all the components are cycles of the same length, namely the order of $g$ , call it $|g|$ . Thus $|g|$ is a factor of $|G|$ (this is also part of Lagrange's theorem). In particular if $|G|$ is prime and $g$ is not the identity element then $|g| = |G|$ and the graph is a single $|G|$ -cycle. See the MO answer for further information.
|graph-theory|elliptic-curves|cryptography|
1
Is a polynomial monotone when the first derivative has only imaginary roots?
I have a polynomial over a specific range. The first derivative has only two imaginary roots and no real roots. The first derivative is positive at the lower bound and at the upper bound. Does that mean that the function is monotone (increasing) over the range?
Yes. The first derivative of a polynomial is itself a polynomial, so it is defined and continuous on the whole real line. If it has no real roots, it is never zero on the range, and by the intermediate value theorem it cannot change sign there: it is either positive throughout or negative throughout the range. Since it is positive at the lower and upper bounds, it is positive on the entire range, so the function is strictly increasing (in particular monotone) there. To determine monotonicity in general you would check for critical points (where the derivative is zero or undefined) inside the range, but for a polynomial whose derivative has only imaginary roots there are none.
|polynomials|complex-numbers|roots|irreducible-polynomials|
0
Why does the equation $x^{p+1}=y$ have exactly $p+1$ solutions in $F_{p^2}^*$ for $y \in F_p^*$?
In some combinatorics book I found the following remarks about multiplicative groups of finite fields: Let p be an odd prime number and $q=p^2$ . Then $F_p^*$ can be identified with the elements $x$ of $F_q^*$ that satisfy $x^{p-1}=1$ . Furthermore, for fixed $y \in F_p^*$ the polynomial $y=x^{p+1}$ has exactly p+1 solutions in $F_q^*$ . I understand that the key to all of these is the decomposition $q-1=(p+1)(p-1)$ . Nevertheless one question still remains: Why the mentioned polynomial has exactly $p+1$ solutions? I know that the zeros of $x^{q-1}-1$ are exactly the elements of $F_q^*$ , but I can't finish the argument.
Define $\phi: F_q^\times \to F_q^\times$ by $\phi(x)=x^{p+1}$ . This is a group homomorphism. Then $\phi(x)^{p-1}=x^{(p+1)(p-1)}=x^{q-1}=1$ and so $\operatorname{im}{\phi} \subseteq \{ x \in F_q^\times : x^{p-1} =1 \}$ . Therefore, $|\operatorname{im}{\phi}| \le p-1$ . On the other hand, $\ker \phi = \{ x \in F_q^\times : x^{p+1} =1 \}$ and so $|\ker \phi| \le p+1$ . Therefore, $|\operatorname{im}{\phi}| \ge p-1$ because $|\operatorname{im}{\phi}|\cdot |\ker \phi| = |F_q^\times|$ . Thus, $|\operatorname{im}{\phi}| = p-1$ and so $\operatorname{im}{\phi} = \{ x \in F_q^\times : x^{p-1} =1 \} = F_p^\times$ . Finally, $|\ker \phi| = p+1$ , and we are done. This proof does not use that $F_q^\times$ is cyclic. It uses only that polynomials of degree $d$ have at most $d$ roots in a field and that $g^n=1$ in every group of order $n$ .
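A brute-force confirmation for the smallest odd prime, $p=3$ (my own addition; it models $F_9$ as $F_3[i]/(i^2+1)$, which works because $x^2+1$ is irreducible mod $3$):

```python
from itertools import product

p = 3

def mul(u, v):                      # (a + b i)(c + d i) with i^2 = -1
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

units = [u for u in product(range(p), repeat=2) if u != (0, 0)]
for y in range(1, p):               # y in F_p^*
    sols = [u for u in units if power(u, p + 1) == (y, 0)]
    print(f"y = {y}: {len(sols)} solutions")   # prints p + 1 = 4 for each y
```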
|abstract-algebra|group-theory|finite-groups|
0
what is the unit of this rate of change?
I'm given a table that shows the percent of Canadian households that used e-mail from year A to year B. Then what is the unit of the average rate of change of the percent of households using e-mails from year A to year B? Is it percent per year ? I'm fine with dollars per year, miles per hour and so forth, but percent per year seems weird to me.
The unit of the average rate of change of the percent of households using e-mail from year A to year B would indeed be percent per year. Here's why: When we talk about the rate of change, we're essentially talking about how much something changes over time. In this case, the percentage of households using e-mail is changing over a period of time from year A to year B. Percent is a unit of measurement that represents a portion of 100. So, when we say "percent per year," we're expressing the change in percentage over the course of one year. For example, if the percentage of households using e-mail increased from $30\%$ in year A to $40\%$ in year B over a span of 5 years, the average rate of change would be $(40\% - 30\%) / 5\text{ years} = 2\%$ per year. This means that, on average, the percentage of households using e-mail increased by $2\%$ each year during that time period.
|functions|elementary-functions|slope|
0
How does $e^{i x}$ produce rotation around the imaginary unit circle?
Euler’s formula states that $e^{i x} = \cos(x) + i \sin(x)$. I can see from the MacLaurin Expansion that this is indeed true; however, I don’t intuitively understand how raising $e$ to the power of $ix$ produces rotation. Can anyone give me an intuitive understanding?
A geometric approach that appeals to Euler's formula is explored in detail by Needham (1997). By re-expressing $e^{i \theta}$ as a power series $e^{x} = 1 + x +\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$ , we have: $$e^{i \theta} = 1 + (i \theta) +\frac{(i \theta)^2}{2!}+\frac{(i \theta)^3}{3!}+\cdots$$ $$e^{i \theta} = 1 + i \theta -\frac{\theta^2}{2!}-\frac{i \theta^3}{3!}+\cdots$$ "This series is just as meaningful as the series for $e^x$ , but instead of the terms all having the same direction, here each term makes a right angle with the previous one, producing a kind of spiral." (Needham, 1997) If we visualize complex numbers as vectors, then we are adding real vectors and imaginary vectors that get smaller at every term of the power series. What might not be obvious is that the path it takes as $\theta$ changes is actually a circle. So, we must show two things: The sum of the vectors has a length of 1 The angle formed by the sum of this vect
|complex-numbers|exponential-function|
0
Deciding a surface in Stokes Theorem
Compute $\int_C (y+z,z+x,x+y) d\vec{r}$ , where $C$ is the intersection of the cylinder $x^2 +y^2 = 2y$ and the plane $y = z$ . Is it true that all I can do is apply Stokes' Theorem: since $C$ is a closed curve, I need to choose a surface $S$ bounded by $C$ such that the orientations of $C$ and $S$ are compatible? Since the curl of $F$ is $0$ , then the integral is $0$ . My confusion is how to choose the surface $S$ in this question. Can I choose $S$ to be $\langle r\cos(\theta),1-r\sin(\theta),1-r\sin(\theta) \rangle$ , where $0\leq r \leq1$ and $0 \leq \theta \leq 2\pi$ ?
There are two approaches here to show that the integral is $0$ , which are more or less equivalent: The vector field $(y+z, x+z, x+y)$ is the gradient field of $f(x,y,z) := xy + xz + yz$ . By the gradient theorem, integrating this vector field along any curve from point $P$ to point $Q$ will give $f(Q)-f(P)$ . In particular, a closed curve such as the curve $C$ in the problem is a curve that starts and ends at the same point, and so we get $f(P) - f(P) = 0$ . By Stokes' theorem, the line integral of $(y+z, x+z, x+y)$ around $C$ is equal to the integral of the curl of $(y+z, x+z, x+y)$ across any surface whose boundary is $C$ . Here, the curl is $0$ , and since the vector field and its curl are defined on all of $\mathbb R^3$ , it does not really matter which surface with boundary $C$ we pick; we'll get $0$ . The surface parameterized by $\langle r \cos \theta, 1 - r \sin\theta, r\sin\theta\rangle$ , as pointed out in the comments, is not quite right: it lies in the plane $y+z=1$ , not
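A numerical sanity check of the circulation (my own addition, assuming NumPy), using the parametrization $r(t)=(\cos t,\,1+\sin t,\,1+\sin t)$ of $C$:

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 20001)
x, y, z = np.cos(t), 1 + np.sin(t), 1 + np.sin(t)
dx, dy, dz = -np.sin(t), np.cos(t), np.cos(t)          # derivatives of the parametrization
integrand = (y + z)*dx + (z + x)*dy + (x + y)*dz       # F . r'(t)
dt = t[1] - t[0]
print(np.sum(integrand[:-1]) * dt)                     # ~ 0, as expected for a conservative field
```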
|calculus|integration|analysis|multivariable-calculus|vector-analysis|
1
Is a polynomial monotone when the first derivative has only imaginary roots?
I have a polynomial over a specific range. The first derivative has only two imaginary roots and no real roots. The first derivative is positive at the lower bound and at the upper bound. Does that mean that the function is monotone (increasing) over the range?
Consider $f(x):=2x^3-9x^2+12x$ . Then $f(x)$ has just one real zero, at $x=0$ , the other two being complex. Also, its gradient $f'(x)=6(x^2-3x+2)=6(x-1)(x-2)$ is $12$ at $x=0$ and $x=3$ , while it has a maximum at $x=1$ and a minimum at $x=2$ . So it is not monotone in the range $0$ to $3$ , while having a positive gradient at both ends of this range. Answer to revised question: The derivative of a polynomial is also a polynomial. If this latter polynomial has only two imaginary roots and no real root, then it is either a quadratic or (if multiple roots are allowed) a power of a quadratic. In the real domain, a quadratic function that never takes the value zero is either always positive or always negative. So, in this case, the gradient is always positive. That is, the original polynomial is strictly increasing (and so monotone) everywhere.
|polynomials|complex-numbers|roots|irreducible-polynomials|
0
A locally compact Hausdorff space that is also a group in which the operation is continuous is a topological group
I was reading the book Introduction to Topological Groups by Taqdir Husain, where he gives a proof of this theorem, but I had a problem understanding one part. The proof goes like this: It remains to show that the inversion mapping $x\to x^{-1}$ is continuous. So let $U$ be an open neighborhood of $e$ . We wish to show that there exists a compact neighborhood $C$ of $e$ such that $C^{-1}\subseteq U$ . Suppose this is not possible, i.e. $C^{-1}\setminus U\neq\varnothing$ for every compact neighborhood $C$ of $e$ . Now define the following family $$ \mathscr{F}=\{C^{-1}\setminus U:C\text{ is a compact neighborhood of }e\} $$ The preceding lemma proves that $C^{-1}$ is compact for $C$ compact, so the family $\mathscr{F}$ is a family of compact sets. He then claims that $\mathscr{F}$ has the finite intersection property. This last claim is not clear to me, because I don't see why, for example, two elements of $\mathscr{F}$ , say $A^{-1}\setminus U$ and $B^{-1}\setminus U$ , must have elements in common
Here is a proof that pairwise intersections are nonempty. The finite intersection property in general follows by induction. Recall that the standing assumption is that for every compact neighborhood $C$ of $\{1\}$ , $C^{-1}\setminus U$ is nonempty. Consider two compact neighborhoods $A, B$ of $1$ . Take $C=A\cap B$ . This is again a compact neighborhood of $1$ . Then $$ (A^{-1}\setminus U)\cap (B^{-1}\setminus U)= (A^{-1} \cap B^{-1}) \setminus U = (A\cap B)^{-1} \setminus U= C^{-1}\setminus U\ne \emptyset, $$ according to the standing assumption. qed Consider also reading the proof that every locally compact Hausdorff paratopological group is a topological group given by Alex Ravsky here , in the unnumbered Proposition in his answer.
|general-topology|topological-groups|
1
Mathematical coincidences concerning the numbers $\pi$, $e$ and $163$
Something similar to this has probably been posted, but since I can't find any at the moment I will post it here. There are many numerical expressions to do with $\pi$, $e$ and $163$ ( Wikipedia has many of these ). The following are some of the approximations I have discovered when trying out different operations using the three numbers on my calculator: $$e^\pi - \pi^{1-e} \approx 23$$ $$\sqrt[e]{\pi} \approx \dfrac{\pi+1}e$$ $$\sqrt{\pi+e+163} \approx 13$$ $$\sqrt[3]{163}-\sqrt[3]{\pi} \approx 4 $$ $$\sqrt{163}-\sqrt{\pi}\approx11$$ $$\dfrac{\sqrt{163}}{\sqrt[3]e} \approx 6+\pi $$ $$\dfrac{\pi}{2e} \approx \dfrac1{\sqrt3}$$ $$\sqrt[3]{\dfrac{\pi^3}{\sqrt[3]e}+\dfrac{e^3}{\sqrt[3]{\pi}}}\approx 3.3 \,\text{(my favourite)}$$ $$ e^\pi-2(4\pi-1)\approx0$$ $$ \dfrac{\pi}e\left(e^{\sqrt[3]{\pi}}\right)\approx5$$ EDIT : Inspired by @Raffaele's approximation I find that if $$x=\frac{163}{e}+\frac{e}{163}+\frac{\pi}{163}-e^{\pi}$$ then $\sin x \approx 0.6$, $\cos x \approx 0.8$ and $\tan x \
$$ \frac{1}{\pi} + \frac{4}{\pi^2 - 4} \approx 1 $$
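A one-line numerical check of this approximation (my own addition, standard library only):

```python
import math
print(1/math.pi + 4/(math.pi**2 - 4))   # 0.99978..., indeed close to 1
```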
|exponential-function|recreational-mathematics|pi|
0
Use Lagrange Multipliers to find max and min of $x+2y$ subject to $x+y+z=1$ and $y^2+z^2=4$
Using Lagrange Multipliers, determine the maximum and minimum of the function $f(x,y,z) = x + 2y$ subject to the constraints $x + y + z = 1$ and $y^2 + z^2 = 4$: Justify that the points you have found give the maximum and minimum of $f$. So, $$ \nabla f = (λ_1)\nabla g_1 + (λ_2)\nabla g_2 $$ I get to this point $$ (1,2,0) = λ_1(1,1,1) + λ_2(0,2y,2z) $$ Where do I go from here to find the critical points etc.?
I get to this point $$ (1,2,0) = λ_1(1,1,1) + λ_2(0,2y,2z) $$ Componentwise we have $1=\lambda_1$ , $2=\lambda_1+2\lambda_2y$ , $0=\lambda_1+2\lambda_2z$ . Hence, $\lambda_1=1$ , $y=\frac1{2\lambda_2}$ , $z=-\frac1{2\lambda_2}$ , i.e., $z=-y$ . From the constraint $y^2+z^2=4$ and $z=-y$ , we have $y=\pm\sqrt2$ ; from the constraint $x+y+z=1$ and $z=-y$ , we have $x=1.$ Conclusion: $x+2y=1\pm2\sqrt2$ are the extreme values. Since the constraint set is compact (an ellipse) and $f$ is continuous, $f$ attains its maximum and minimum on it; as these are the only critical values, $1+2\sqrt2$ is the maximum and $1-2\sqrt2$ is the minimum.
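A quick numerical confirmation (my own addition, assuming NumPy): on the constraint set one may take $y=2\cos t$, $z=2\sin t$, $x=1-y-z$, so that $f=x+2y=1+2\cos t-2\sin t$.

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 100001)
f = 1 + 2*np.cos(t) - 2*np.sin(t)       # f = x + 2y along the constraint curve
print(f.max(), 1 + 2*np.sqrt(2))        # both ~  3.8284
print(f.min(), 1 - 2*np.sqrt(2))        # both ~ -1.8284
```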
|multivariable-calculus|optimization|lagrange-multiplier|
0
Is the Galois group of polynomial invariant under iterating?
I tried to find the Galois group of the polynomial $p(x) = x^5 + 2x + 2$ and I managed after a while, using the fact that it has only $1$ real root and the others are complex, so they correspond to a transposition and the real root to a cycle of length $5$ . Combining everything gives that the Galois group is going to be $S_5$ . Now, my question is: what if I shift my polynomial by $1$ , i.e. $p(x+1) = (x+1)^5 + 2(x+1) + 2$ ? Is the Galois group still $S_5$ ? I checked in a program: yes, it is. Actually, I checked that the Galois group of $p(x+a)=(x+a)^5 + 2(x+a) + 2$ where $1\le a\le 35$ is still $S_5$ . But I didn't see the reason. So, my question is: pick an arbitrary polynomial $f(x)$ with a certain Galois group; is the Galois group of the shifted polynomial $f(x+a)$ , $a\in \mathbb Z$ , the same?
Let the Galois group of the polynomial $p(x)$ over $\mathbb Q$ be $G$ . Shifting the variable by any $c\in\mathbb Q$ , i.e. passing from $p(x)$ to $p(x+c)$ , does not change anything: the roots of $p(x+c)$ are exactly $\beta-c$ , where $\beta$ runs over the roots of $p(x)$ , so the two polynomials generate the same splitting field over $\mathbb Q$ , and hence they have the same Galois group $G$ .
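A numerical illustration of the key fact that the roots simply translate (my own addition, assuming NumPy):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

p = P([2, 2, 0, 0, 0, 1])          # x^5 + 2x + 2, coefficients in increasing degree
a = 1
shifted = p(P([a, 1]))             # composition gives p(x + a)
r1 = np.sort_complex(p.roots() - a)
r2 = np.sort_complex(shifted.roots())
print(np.allclose(r1, r2))         # True: roots of p(x+a) are the roots of p(x) shifted by -a
```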
|abstract-algebra|galois-theory|
0
How does the Fourier transform show the frequency extent of $f(x)$?
The Fourier transform of a function is given by $$f(ξ) = \int_{-∞}^{∞} f(x) e^{-2πixξ}dx$$ The paper I was reading from says that the test function $e^{2πixξ}$ is periodic with period $2\pi/ξ$ and it continues by saying ("so integrating $f$ against this test function gives information about the extent to which this frequency occurs in $f$ "). My question is: how does integrating our original function against this "test function" give us information about the extent to which that frequency occurs in $f$ ?
Actually, the definition of the Fourier transform, namely $$ \hat{f}(\xi) = \int_\Bbb{R} f(x)e^{-2\pi ix\xi} \,\mathrm{d}x, $$ corresponds to a (hermitian) inner product $\langle f, e^{2\pi ix\xi} \rangle$ , where $$ \langle \phi,\psi \rangle := \int_\Bbb{R} \phi(x)\overline{\psi(x)} \,\mathrm{d}x. $$ In the case of Fourier series, the trigonometric "monomials" $e^{2\pi ix\xi_n}$ , with $\xi_n = \frac{n}{T}$ , form an orthonormal basis of the Lebesgue space $L^2([-\frac{T}{2},\frac{T}{2}])$ , which is a functional vector space for recall, in such a way that the expression $$ f(x) = \sum_n c_n e^{2\pi ix\xi_n} $$ is nothing else than the projection of $f$ onto the Fourier basis. In consequence, the coefficient $c_n = \langle f, e^{2\pi ix\xi_n} \rangle$ "measures" how much $f$ "contains" the sinusoid $e^{2\pi ix\xi_n}$ as its coordinate in that particular basis. It is to be noted that the Taylor series are actually constructed in the same manner with respect to a polynomial basis. Also,
|fourier-analysis|fourier-transform|
0
Steenrod squares and higher cup products for differential forms?
I am physicist, so I am sorry if I am not too rigorous in the following. I have two (closely related I guess) questions: Let me consider a triangulated manifold $M$ and its simplicial cohomology. Here the Steenrod square is an operation $Sq^q: H^p (M,Z_2) \to H^{p+q}(M,Z_2)$ . My manifold is smooth and I have also a differential structure on it: is there an analogous operation also on the de Rham cohomology? (eventually by considering forms in $H_{dR}^p (M)$ mod 2 for example) Now, perhaps more important for me, the Steenrod squares can be written on a element $\alpha_p \in H^p(M,Z_2)$ by introducing the so called higher cup products, so that $Sq^q \alpha_p = \alpha_p \cup_{p-q} \alpha_p$ (where $\cup_0 = \cup$ is the standard cup product and $\cup_p$ actually makes sense for $\alpha_p \in H^p(M,G)$ for some Abelian group $G$ ). Is there a generalization of these products also for differential forms? In my understanding these higher cup products basically come from the fact that the st
Steenrod squares are defined as classes in $H^\bullet(M,\mathbb{Z}/2)$ because they are derived from the Bockstein homomorphism, which is induced by the short exact sequence $$\mathbb{Z}\xrightarrow{\cdot 2}\mathbb{Z}\to\mathbb{Z}/2$$ Now, for the induced sequence on cohomology to be interesting in any way, it is crucial that $\mathbb{Z}$ has non-trivial maximal (or principal) ideals, in this case $2\mathbb{Z}$ , the image of multiplication by $2$ . If you change the coefficients to $\mathbb{R}$ , you have no non-trivial maximal ideals. Any ring that is obtained as a quotient of $\mathbb{R}$ by an ideal of $\mathbb{R}$ is either $0$ , or $\mathbb{R}$ itself, in which case the map is an isomorphism. There is no interesting "middle ground" like when you take coefficients in a ring that is not a field. So there can be no construction like Steenrod squares for de Rham cohomology. The ground ring is a field which precludes the required properties. As for your second question: Steenrod squar
|differential-geometry|algebraic-topology|homology-cohomology|simplicial-complex|de-rham-cohomology|
0
Mathematical Meaning of Antiderivatives
I'm largely a self-taught highschooler in basic Calculus and I'm utterly confused regarding what Indefinite integrals (or antiderivatives) do mean geometrically (if they really do), physically or mathematically at all (in the intuitive level). My exact confusion is what relation do they have with $x$ ? For example, the value of the derivative at $x$ is the slope (physical meaning) of the tangent at $x$ similarly, what relation does the value of the antiderivative at $x$ have with it? Do Indefinite integrals have anything to do with area under the graph and if yes, from where to where? Also how does the constant add to the geometrical significance of the antiderivatives?
Jair is right - keep reading and it will become clear. But you probably knew that =) To tide you over temporarily: consider some continuous function $f(x)$ , like a polynomial. And choose one with a bunch of distinct roots, like a $5$ th or $10$ th degree polynomial so it looks like a wiggly earthworm over some interval. At this point you know $f'(x)$ has the interpretation of the slope of the tangent at $x$ . But $f'(x)$ is also just a function you could plot - another wiggly plot - forgetting that it represents the slopes of its parent function $f(x)$ . And you can repeat this process until no more derivatives are possible ( $n$ derivatives for an $n$ -th order polynomial). So you can generate a linked set of functions, each of which describes the tangents of its parent function, and each of which can also be described by its own derivative. Now just imagine that process in reverse. Knowing that a function $f'(x)$ describes the tangents of some parent $f(x)$ , you can see that the "s
|calculus|integration|indefinite-integrals|intuition|
1
Why is the $\nabla g\neq0$ condition needed for the method of Lagrange multipliers?
Based on my Calculus textbook, the method of Lagrange multipliers is stated as follows: Suppose that $f(x,y,z)$ and $g(x,y,z)$ are differentiable and $\nabla g \ne \mathbf 0$ when $g(x,y,z) = 0$ . To find the local extremum values of $f$ subject to the constraint $g(x,y,z) = 0$ , find the values of $x,y,z$ and $\lambda$ simultaneously satisfying the equations $\nabla f = \lambda \nabla g$ and $g(x,y,z) = 0$ . My questions are: Why do we need $\nabla g \ne \mathbf 0$ in the assumption? (Is there something related to the implicit function theorem?) What happens if $\nabla g = \mathbf 0$ ?
Let me consider again some of the examples discussed in the other answer, but working out the details of the calculations. Considering the optimisation problem $\max_{x,y} (y-x^2)$ subject to the constraint $x+y=0$ . Graphically, the solution is straightforward, and we can also just substitute $y=-x$ into the cost, which then reads $-x-x^2$ , and easily find out that it maximises at $x=-1/2$ . Same answer is obtained via Lagrange's multipliers: computing the gradients of cost and constraint we have $$\begin{pmatrix}-2x\\1\end{pmatrix} = \lambda \begin{pmatrix}1\\1\end{pmatrix},$$ which has the only solution $x=-1/2$ . Let's consider now the completely equivalent problem $\max_{x,y}(y-x^2)$ subject to $(x+y)^2=0$ . The constraint is clearly equivalent, and thus so is the solution, however, if we now follow the Lagrange multipliers scheme we get $$\begin{pmatrix}-2x\\1\end{pmatrix} = 2\lambda(x+y)\begin{pmatrix}1\\1\end{pmatrix}.$$ But this system now doesn't have any solution in the fea
|calculus|lagrange-multiplier|
0
Simplifying a binomial sum for bridge deals with specific voids
While trying to get an expression for the number of deals from a generalised bridge deck with nobody being void in any suit I encountered the following subproblem. From a generalised bridge deck with $r$ ranks instead of just $13$ deal four hands of $r$ cards each. How many deals $A(r)$ are there where
- South is void in $\diamondsuit$ and $\heartsuit$
- West is void in $\clubsuit$ and $\heartsuit$
- North is void in $\clubsuit$ and $\diamondsuit$
- East is unrestricted, though all players may be void in other suits?

$A(r)$ is the coefficient of $(wxyz)^r$ in $((w+z)(x+z)(y+z)(w+x+y+z))^r$ where $wxyz$ correspond to $\clubsuit\diamondsuit\heartsuit\spadesuit$ respectively. I solved this as follows:
- South picks $a$ spades, West picks $b$ spades and North picks $c$ spades in one of $\binom r{a,b,c,r-a-b-c}$ ways
- East gives $r-a$ clubs, $r-b$ diamonds and $r-c$ hearts to South, West, North respectively in one of $\binom ra\binom rb\binom rc$ ways; the rest of the deal is forced

Thus $$A(r)=\sum_{
The following holonomic proof follows Peter Paule and Carsten Schneider's 2024 report Creative Telescoping for Hypergeometric Double Sums , and was generated using their software . Denote the inner sum $$f(r,a)=\sum_{b=0}^a\binom ab^2\binom{r+b}a=\sum_{b=0}^as(r,a,b)$$ and compute two recurrences for it:

```
In[1]:= f
Out[5]= -(1 + a)^2 f[a] - (3 + 2 a) (1 + 2 r) f[1 + a] + (2 + a)^2 f[2 + a] == 0

In[7]:= {r0, r1, r2} = FunctionExpand[{sd, (sd /. r -> r + 1), (sd /. a -> a + 1)}/sd];
        prehook = Gosper[sd, {b, 0, a}, Parameterized -> {r0, r1, r2}];
        hook = ReleaseHold[prehook[[1, 1, 1]]] == 0 /.
          {DisplayForm[SubscriptBox["F", "0"]][b] -> f[a],
           DisplayForm[SubscriptBox["F", "1"]][b] -> f[r + 1, a],
           DisplayForm[SubscriptBox["F", "2"]][b] -> f[a + 1]}
Out[7]= (-1 - a^2 - 2 r + 2 a r - 2 r^2) f[a] - (1 + a)^2 f[1 + a] + 2 (1 + r)^2 f[1 + r, a] == 0
```

$$(a+1)^2f(r,a)+(2a+3)(2r+1)f(r,a+1)-(a+2)^2f(r,a+2)=0$$ $$2(r+1)^2f(r+1,a)-(a^2-2ar+2r^2+2r+1)f(r,a)-(a+1)^2f(r,a+1)=0$$ The first recurrence has cer
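A small Python spot-check of the two recurrences for the inner sum (my own addition, not part of the Mathematica derivation; it only needs math.comb):

```python
from math import comb

def f(r, a):
    return sum(comb(a, b)**2 * comb(r + b, a) for b in range(a + 1))

ok = True
for r in range(1, 8):
    for a in range(0, 8):
        ok &= ((a+1)**2*f(r, a) + (2*a+3)*(2*r+1)*f(r, a+1) - (a+2)**2*f(r, a+2) == 0)
        ok &= (2*(r+1)**2*f(r+1, a) - (a*a - 2*a*r + 2*r*r + 2*r + 1)*f(r, a) - (a+1)**2*f(r, a+1) == 0)
print(ok)   # True
```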
|combinatorics|summation|binomial-coefficients|combinatorial-proofs|card-games|
1
Finding the minimum value of $\frac{a^2+b^4+c^2}{abc}$
Find the minimum value of $\dfrac{a^2+b^4+c^2}{abc}$ where $a,b,c$ belong to the positive numbers. I tried to solve the numerator separately and then to manipulate it using $abc$ , but I am not getting the desired result.
It's evident that $$f(a,b,c):=\frac{a^2+b^4+c^2}{abc} \ge 0$$ The infimum $0$ cannot be reached. We can choose $(a,b,c) = (L^3,L,L^3)$ and make $L \to +\infty$ : $$f(L^3,L,L^3) = \frac{2L^6 + L^4}{L^7}\xrightarrow{L\to +\infty} 0$$
|algebra-precalculus|inequality|
0
The Connection between Fourier transform and Fourier series (derivation and Intuition)
The way the Fourier transform $$f(ξ) = \int_{-∞}^{∞} f(x) e^{-2πixξ}dx$$ describes the frequency content of a function is derived from the Fourier series by taking the $c_{n}$ of the Fourier series and letting the period go to infinity, so $c_n$ is the analogue of $f(ξ)$ , but $c_{n}$ is for discrete frequencies while $f(ξ)$ is for a continuum of frequencies. That is what I know, and that is how all the books I have read derive the Fourier transform. But when I discuss this connection between Fourier series and Fourier transform with many users, they just say the connection is not totally rigorous and that the Fourier transform is not defined by taking a limit of the Fourier series. I don't know what their point really is, but I don't see what's wrong with what I am saying; all the books that I read derive it the same way by the same procedure, so what is the objection, then? I've read in a paper concerning this part that by integrating our $f(x)$ with this periodic function $e^{-2πixξ}$ it
This question seems related to your other questions The correct way of looking at Fourier transform and How does the Fourier transform show the frequency extent of $f(x)$ ? . A function $f(x)$ can be approximated on the interval $-\frac{P}{2}<x<\frac{P}{2}$ by the exponential Fourier series $$f(x)=\sum\limits_{n=-\infty}^\infty c_n\, e^{i 2 \pi \frac{n}{P} x}\tag{1}$$ where $$c_n=\frac{1}{P} \int\limits_{-\frac{P}{2}}^{\frac{P}{2}} f(x)\, e^{-i 2 \pi \frac{n}{P} x} \, dx\tag{2}$$ The Fourier transform of the function $f(x)$ can be defined as $$F(\omega)=\mathcal{F}_{x}[f(x)](\omega)=\lim\limits_{P\to\infty}\left(\int\limits_{-\frac{P}{2}}^{\frac{P}{2}} f(x)\, e^{-i 2 \pi \omega x} \, dx\right)\tag{3}$$ which is somewhat analogous to formula (2) above (the exact relationship is clarified further below). Also the Fourier series for $f(x)$ and the inverse Fourier transform $$f(x)=\mathcal{F}^{-1}_{\omega}[F(\omega)](x)=\int\limits_{-\infty}^{\infty} F(\omega)\, e^{2 \pi i x \omega} \, d\omega\tag{4}$$ both r
|fourier-analysis|fourier-series|fourier-transform|
0
Expectation value of repeated dice throws
We throw a dice, if we throw a 6, then we throw again (any number of times). Let $X$ be the sum of all thrown numbers. Find $\mathbb{E} (X)$ . I know that if we were just throwing without repeating throws then it would be: $\mathbb{E} (Y) = 1*P(1) + 2*P(2) + 3*P(3) + ... + 6*P(6) = \frac{7}{2}$ Now if we would get to throw only once after a six throw, then it's ( $\mathbb{E} (Z)$ ): Let $\mathbb{E}(Y')$ be the expectation value of the second throw after a six was thrown. $\mathbb{E}(Y') = 1*P_{Y'}(1) + 2*P_{Y'}(2) + ... + 6*P_{Y'}(6)$ $ = 1*\frac{1}{6^2} + ... + 6*\frac{1}{6^2} = \frac{\frac{7}{2}}{6}$ $\mathbb{E} (Z) = \mathbb{E}(Y) + \mathbb{E}(Y')$ But how to calculate the repeated throws after a six is thrown? Is it the sum up to infinity? $$\mathbb{E}(X) = \sum_{n=1}^{\infty} \sum_{i=1}^{6} i\cdot \frac{1}{6^n}$$ How can I evaluate this double sum?
To find the sum: $$ \sum^{\infty}_{n=1}\sum_{i=1}^6\frac{i}{6^n}= 21\sum_{n=1}^{\infty}6^{-n}={21\over5}=4.2 $$ This is because $$\sum_{n=1}^\infty6^{-n}=\sum_{n=1}^\infty\left({1\over6}\right)^n = {1\over5}$$
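For readers who prefer an empirical check, here is a small Monte Carlo sketch (my addition; it assumes a fair six-sided die and the re-throw-on-6 rule from the question). The estimate should hover near $21/5 = 4.2$:

```python
import random

# Monte Carlo estimate of E[X] for the "re-throw whenever a 6 is rolled" game.
def play(rng):
    total = 0
    while True:
        roll = rng.randint(1, 6)
        total += roll
        if roll != 6:          # stop as soon as a non-6 is rolled
            return total

rng = random.Random(0)
trials = 200_000
print(sum(play(rng) for _ in range(trials)) / trials)   # close to 4.2
```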
|statistics|expected-value|
0
find minimum value of $x^4-x$ without using calculus
as title, is there a way to find the minimum value of $x^4-x$ without using calculus? By calculus it's easy as $(x^4-x)'=4x^3-1$ , so we got $x=\frac1{\sqrt[3]4}$ , then we get the min value. But without calculus, is it possible? I tried to use AM-GM but to no avail...
Once you know the minimum must be negative [as $x^4-x=x(x^3-1)<0$ when $x\in (0, 1)$ ], it isn't hard to use AM-GM. Let this minimum be $-m$ . $$x^4+m=x^4+3\times\frac13m \geqslant 4\sqrt[4]{x^4m^3/(3^3)}=4\sqrt[4]{\frac{m^3}{3^3}}|x| \geqslant 4\sqrt[4]{\frac{m^3}{27}}x$$ Hence if we select $m$ s.t. $m^3=\frac{27}{4^4}$ , we get $x^4-x \geqslant -m$ and hence the minimum. P.S. It is simple to see that equality is possible.
|algebra-precalculus|
0
Prove that if a smooth manifold $M$ is contractible then every vector bundle over $M$ is trivial
I've seen that this can be proved by using that if two functions are homotopic then the pullbacks of such functions are isomorphic, but the only "easy" proof of this I found is in Hatcher's vector bundles book and I don't find this proof very clear. This was left to me as a homework exercise and all we've seen of vector bundles are the basic definitions, constrictions by cocycles and that every vector bundle has a riemmanian metric so I don't know how to proceed with only this, any help would be appreciated
You might also try Hussemoller's book "Fiber Bundles" , Chapter 2, Corollary 4.8. Usually one proves, in the same breath, that if $f,g : X \to Y$ are two homotopic maps and if $B$ is a vector bundle over $Y$ then $f^*(B)$ , $g^*(B)$ are isomorphic vector bundles over $X$ . In Hussemoller's book this is is Chapter 2, Theorem 4.7.
|differential-geometry|riemannian-geometry|differential-topology|homotopy-theory|vector-bundles|
1
Inequality regarding Matrix Norm and Inverse Matrix
Currently, I'm stuck to one of a statement in a paper . Following is a brief summary of the paper regarding my question. (although the topic of the paper is mainly statistics, the question purely relies on mathematics) For $\lambda>0,\beta\in\mathbb R^p,n , let $B_{X,\Sigma} := \lambda^2\beta^T(S_X+\lambda I)^{-1}\Sigma(S_X+\lambda I)^{-1}\beta$ where $S_X= X^TX/n$ . Under these facts, show the following inequality, where $C_1,C_2>0$ are constants: $$\begin{aligned} |B_{X_1,\Sigma_1}- B_{X_2,\Sigma_2}| &\le C_1\Vert S_{X_1}-S_{X_2}\Vert_{op} + C_2\Vert\Sigma_1-\Sigma_2 \Vert_{op} \\ &\le \frac{C_1}{n}(\Vert X_1\Vert_{op} + \Vert X_2\Vert_{op} )\cdot\Vert X_1-X_2\Vert_{op} + C_2\Vert\Sigma_1-\Sigma_2 \Vert_{op} \end{aligned}$$ The second inequality is straightforward by "adding and subtracting" some terms $X_1^TX_2$ and applying $\Vert A+B\Vert_{op} \le \Vert A \Vert_{op} +\Vert B \Vert_{op}$ and $\Vert AB \Vert_{op} \le \Vert A \Vert_{op}\Vert B \Vert_{op}$ . The thing is the first ine
To simplify notation, you are trying to bound an expression of the form $$ \lambda^2\big|\beta^TA_1^{-1}F_1A_1^{-1}\beta-\beta^TA_2^{-1}F_2A_2^{-1}\beta\big|, $$ where $F_j=\Sigma_j$ and $A_j=S_{X_j}+\lambda I$ . Note that $(S_{X_j}+\lambda I)\geq\lambda I$ , so $\|(S_{X_j}+\lambda I)^{-1}\|\leq\lambda^{-1}$ and so $\|A_j^{-1}\beta\|\leq\lambda^{-1}\,\|\beta\|$ . Also $\|\beta^T A_j^{-1}\|\leq\lambda^{-1}\,\|\beta\|$ , $\def\abajo{\\[0.2cm]}$ and \begin{align} \|A_1^{-1}-A_2^{-1}\|&=\|A_2^{-1}(A_2-A_1)A_1^{-1}\|\leq\|A_2^{-1}\|\,\|A_1^{-1}\|\,\|A_2-A_1\|\abajo &\leq\lambda^{-2}\,\|A_2-A_1\|. \end{align} Let $L=\big\|A_1^{-1}F_1A_1^{-1}-A_2^{-1}F_2A_2^{-1}\big\|$ . You have \begin{align} L &\leq\big\|A_1^{-1}F_1A_1^{-1}-A_1^{-1}F_1A_2^{-1}\big\| +\big\|A_1^{-1}F_1A_2^{-1}-A_1^{-1}F_2A_2^{-1}\big\|\abajo &\qquad\qquad+\big\|A_1^{-1}F_2A_2^{-1}-A_2^{-1}F_2A_2^{-1}\big\|\abajo &=\big\|(A_1^{-1}F_1)(A_1^{-1}-A_2^{-1})\big\| +\big\|A_1^{-1}(F_1-F_2)A_2^{-1}\big|+\big\|(A_1^{-1}-A_2^{-1})F_2A
|linear-algebra|matrices|operator-theory|normed-spaces|self-learning|
1
find minimum value of $x^4-x$ without using calculus
as title, is there a way to find the minimum value of $x^4-x$ without using calculus? By calculus it's easy as $(x^4-x)'=4x^3-1$ , so we got $x=\frac1{\sqrt[3]4}$ , then we get the min value. But without calculus, is it possible? I tried to use AM-GM but to no avail...
That the minimum is negative and thus the minimizer inside the interval $0<x<1$ was already established. Use AM-GM in the form $$\sqrt[4]{a^3b}\le\frac{3a+b}4.$$ Setting $3a=1-x^3$ and $b=x^3$ gives the right side as constant $\frac14$ and the left side as $(x(1-x^3)/3)^{3/4}$ . Thus the minimum of the function is $$ -\frac3{4\sqrt[3]4} $$ at $x^3=(1-x^3)/3$ , that is, $$x=\frac1{\sqrt[3]4}.$$
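A brute-force numeric check of this closed form (my addition, purely illustrative): scanning a fine grid should reproduce the minimizer $4^{-1/3}\approx 0.63$ and the minimum $-3/(4\sqrt[3]{4})\approx -0.4725$.

```python
# Grid search for the minimum of x^4 - x, compared with the AM-GM answer.
xs = [i / 10000 for i in range(-20000, 20001)]          # grid on [-2, 2]
best = min(xs, key=lambda x: x**4 - x)
print(best, best**4 - best)                              # ~0.63, ~-0.4724
print(4**(-1/3), -3 / (4 * 4**(1/3)))                    # exact values for comparison
```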
|algebra-precalculus|
1
why cant i integrate the derivative of a cylinders volume to find it again?
For a cylinder, the volume formula is given by $V = \pi r^2 H$ . To find the differential $dV$, we differentiate $V$ with respect to $r$ and $H$, yielding $dV = 2 \pi r H \, dr + \pi r^2 \, dH$ . Upon integrating $dV$ again, we obtain $2 \pi r^2 H$ . However, I'm confused about where the factor of two originates. Additionally, in calculus, it's taught that to derive the formula for the volume of a cylinder, we perform a double integral of $da$ over the cylinder's surface area $A$ and $dh$ over the height $H$. This suggests that $V = \iint 2 \pi r \, dr \, dh$ , or equivalently $V = \int \pi r^2 \, dH$ , which only includes the second term of the earlier $dV$ expression. Does this imply that the first term is zero?
When you take that derivative (as you did correctly) $$ dV=\partial_rV\,dr+\partial_HV\,dH=2\pi rH\,dr+\pi r^2\,dH $$ you end up with a differential form that asks for a line integral along a curve. You will not get $V=\pi r^2H$ back when you integrate the two terms separately from $0$ to $r$ and $0$ to $H$ and add the integrals up. The correct way of integrating is to choose a curve $\gamma$ that has one parameter and starts at $(0,0)$ and ends at $(r,H)\,.$ Such a curve is $$ \gamma:t\mapsto (tr,tH)\,. $$ Performing the line integral very pedantically gives \begin{align} \int_\gamma dV&=\int_0^1 2\pi\,\gamma_1(t)\,\gamma_2(t)\,\dot\gamma_1(t)\,dt+\pi\,\gamma_1^2(t)\,\dot\gamma_2(t)\,dt\\[2mm] &=\int_0^12\pi\,(t\,r)\,(t\,H)\,r\,dt+\pi (t\,r)^2\,H\,dt=\tfrac23\pi r^2H+\tfrac13\pi r^2H=\pi r^2H \end{align} as it should.
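If it helps, here is a short symbolic verification of that line integral (my addition, using SymPy; the symbols are just placeholders for the final radius and height):

```python
import sympy as sp

# Integrate dV = 2*pi*r*H dr + pi*r^2 dH along gamma(t) = (t*r, t*H), t in [0, 1].
r, H, t = sp.symbols('r H t', positive=True)
g1, g2 = t * r, t * H
integrand = 2 * sp.pi * g1 * g2 * sp.diff(g1, t) + sp.pi * g1**2 * sp.diff(g2, t)
print(sp.integrate(integrand, (t, 0, 1)))   # pi*H*r**2
```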
|calculus|geometry|volume|
0
Duals in tensor-categories.
Assume $(\mathcal{C},\otimes,\mathbf{1},\phi,\psi)$ (let's denote this as $\mathcal{C}$ ) is a tensor-category in the sense of Deligne/J.S. Milne ( https://www.jmilne.org/math/xnotes/tc2018.pdf ). Assume furthmore that for each object $X$ in $\mathcal{C}$ , we have an adjunction $- \otimes X \dashv \underline{\text{Hom}}(X,-)$ . Let's define the dual of an object $X$ as $X^{\vee} := \underline{\text{Hom}}(X,\mathbf{1})$ . It is claimed in ibid. that by diagram $1.6.6$ we get the following where $\text{ev}_{X}:X^{\vee} \otimes X \to \mathbf{1}$ , and we say that ${}^{t}f:Y^{\vee} \to X^{\vee}$ is induced from a morphism $f$ , and is the unique morphism so that the diagram above commutes. How do we actually get this unique arrow? My guess is that by $1.6.5$ in ibid., we have $\text{Hom}(Y^{\vee} \otimes X,\mathbf{1}) \cong \text{Hom}(Y^{\vee},X^{\vee})$ . So any morphism $Y^{\vee} \otimes X \to \mathbf{1}$ corresponds to a morphism ${}^{t}f:Y^{\vee} \to X^{\vee}$ . But why is the induced
By definition, $X^{\vee}$ represents the (contravariant) functor $F = \text{Hom}(- \otimes X, \mathbf{1})$ . This means that there is a natural isomorphism $\eta : \text{Hom}(-,X^{\vee}) \to F$ . By the Yoneda Lemma, $\eta$ is determined by $\eta_{X^{\vee}}(\text{id}_{X^{\vee}}) = \text{ev}_X$ . Namely, for any $\varphi : T \to X^{\vee}$ we have $$\eta_T(\varphi) = F(\varphi)(\text{ev}_X) = \text{ev}_X \circ (\varphi \otimes \text{id}_X).$$ Given $f : X \to Y$ , note that $\text{ev}_Y \circ (\text{id}_{Y^{\vee}} \otimes f) \in F(Y^{\vee})$ . Since $\eta_{Y^{\vee}}$ is a bijection, there is a unique $\varphi : Y^{\vee} \to X^{\vee}$ such that $\text{ev}_Y \circ (\text{id}_{Y^{\vee}} \otimes f) = \text{ev}_X \circ (\varphi \otimes \text{id}_X)$ . This $\varphi$ is what the authors call ${}^t\!f$ .
|category-theory|
0
Check an equality relating the divergence of a matrix and gradient of a vector field
Going through a fluid mechanics book, I encountered this expression: $$\nabla \cdot (uu^T) = u \cdot \nabla u, $$ whereby $u$ is supposed to be the velocity vector field. I can not make sense of this. Considering $u$ as a vector, $uu^T$ will be a 3 by 3 matrix. For the left-hand side of the equation above I took the divergence by taking the divergence of the three column vectors of the matrix. For the right-hand side I do not know what the gradient of a vector field is defined and what the dot product there means. Can somebody explain all the terms and operations in the equation and prove it holds ? Thanks.
In terms of Cartesian coordinates and using the Einstein summation convention (sum over repeated indices): $$\nabla \cdot(\mathbf{u}\mathbf{u}^T) = \partial_i(u_iu_j) = (\partial_iu_i)u_j + u_i \partial_i u_j= (\nabla \cdot \mathbf{u})\mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{u}$$ If the flow is incompressible, we have $\nabla \cdot \mathbf{u} = 0$ and we get $$\nabla \cdot(\mathbf{u}\mathbf{u}^T) =\mathbf{u} \cdot \nabla \mathbf{u}$$
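A symbolic check of the index computation (my addition, not part of the original answer), using SymPy with a generic three-component field:

```python
import sympy as sp

# Verify div(u u^T) = (div u) u + (u . grad) u componentwise for a generic 3D field.
x, y, z = sp.symbols('x y z')
X = (x, y, z)
u = sp.Matrix([sp.Function(f'u{i}')(x, y, z) for i in range(3)])

div_uuT = sp.Matrix([sum(sp.diff(u[i] * u[j], X[i]) for i in range(3)) for j in range(3)])
rhs = sum(sp.diff(u[i], X[i]) for i in range(3)) * u \
      + sp.Matrix([sum(u[i] * sp.diff(u[j], X[i]) for i in range(3)) for j in range(3)])

print(sp.simplify(div_uuT - rhs))   # zero vector
```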
|real-analysis|
0
Factorisation of continuous maps
I'm studying general topology and a question has come to my mind. I am referring to the class of theorems that in algebra go by the name of "homomorphism theorems". In my topology course, we have seen some results alike, but it lacks the general result on when, given two continuous maps with the same domain, there exists a continuous map which composed with the first gives the second. Is there a necessary and sufficient condition for that existence? I know from set theory that for such a map to exist it's necessary that, if the first map has equal values on two arguments, so does the second. But there should be a second condition, involving the topological structure, assuring that the existing maps (possibly more than one, if the first map is not surjective) are continuous. I add a statement highlighting the condition (?) which I'm looking for: Let $f\colon X\to Y$ and $g\colon X\to Z$ be two continuous functions having the same domain (not only as sets, but as topological spaces). The
Let $X$ be a compact Hausdorff space, [EDIT: $Y$ and $Z$ Hausdorff], $f: X \to Y$ and $g: X \to Z$ continuous such that $f(x_1) = f(x_2)$ implies $g(x_1) = g(x_2)$ . This condition lets us define $h: f(X) \to Z$ by $h(y) = g(x)$ where $f(x) = y$ . Thus $h \circ f = g$ . The question is whether $h$ is continuous. If not, there is a net $y_\alpha$ in $f(X)$ converging to some $y\in f(X)$ with $h(y_\alpha)$ not converging to $h(y)$ . Taking a subnet, by compactness of $X$ we may assume $y_\alpha = f(x_\alpha)$ with $x_\alpha$ converging to some $x \in X$ . By continuity of $f$ , $y = \lim_\alpha y_\alpha = \lim_\alpha f(x_\alpha) = f(x)$ . So $h(y) = g(x)$ . But by continuity of $g$ , $g(x) = \lim_\alpha g(x_\alpha) = \lim_\alpha h(y_\alpha)$ .
|general-topology|geometry|quotient-spaces|
0
$f(x + iy) = u(x,y) + iv(x,y)$ holomorphic implies that $F(x,y) = (u(x,y), v(x,y))$ is differentiable
My question comes from section 1.6 of Complex Analysis (4th edition) by Serge Lang: Let $U$ be an open subset of $\mathbb{C}$ and let \begin{align*} f(x + iy) = u(x,y) + i v(x,y), \qquad x + iy \in U \end{align*} be a complex-valued function on $U$ . Lang argues that if $f$ is holomorphic at some point $z = x + iy$ , then the real vector field associated with $f$ is differentiable at $(x,y)$ . However, I don't quite follow the argument. The relevant passage is below: At a fixed $z$ , let $f'(z) = a + ib$ . Let $w = h + ik$ , with $h,k$ real. Suppose that $$ f(z + w) - f(z) = f'(z) w + \sigma(w) w, $$ where $$ \lim_{w \to 0} \sigma(w) = 0. $$ Then $$ f'(z) w = (a + ib)(h + ik) = ah - bk + i(ak + bh). $$ On the other hand, let $$ F: U \to \mathbb{R}^2 $$ be the map such that $$ F(x,y) = \big( u(x,y), v(x,y) \big). $$ We call $F$ the (real) vector field associated with $f$ . Then $$ F(x + h, y + k) - F(x,y) = (ah - bk, bh + ak) + \sigma_1(h,k) h + \sigma_2(h,k) k, \hspace{1cm} (1) $$ wher
$\sigma_1, \sigma_2$ are not the functions as you defined them in your proof. Let us write $$\sigma(w) = \sigma(h,k) = \bar \sigma_1(h,k) + i \bar \sigma_2(h,k)$$ with real-valued functions $\bar \sigma_1, \bar \sigma_2$ . Then we get \begin{align*} & F(x + h, y + k) - F(x,y) \\[3pt] = \; & \big(ah - bk, ak + bh \big) + \big( h \bar \sigma_1(h,k) - k \bar \sigma_2(h,k), k \bar \sigma_1(h,k) + h \bar \sigma_2(h,k) \big) \\ = \; & \big(ah - bk, ak + bh \big) + \big(\bar \sigma_1(h,k),\bar \sigma_2(h,k)\big)h + \big(-\bar \sigma_2(h,k), \bar \sigma_1(h,k)\big)k . \end{align*} Now take $\sigma_1(h,k) = \big(\bar \sigma_1(h,k),\bar \sigma_2(h,k)\big)$ and $\sigma_2(h,k) = \big(-\bar \sigma_2(h,k), \bar \sigma_1(h,k)\big)$ . These are functions with values in $\mathbb R^2$ tending to $0$ as $(h,k)$ tends to $0$ .
|complex-analysis|complex-numbers|cauchy-riemann-equations|
1
Relationship between $\zeta(3)$ and ordinary logarithm function $\text{log}(\text{x})$
While working on another problem, I came up with the following expression. This involved many manual definite integral evaluations and not at all elegant. So I am not going into the details of the derivation. $$ \zeta(3) = \frac{\pi^2}{96} \lim_{\text{N} \to \infty} \left[1444 + 15232\text{log}(2)+3456\text{log}(3)+\left\{\sum\limits_{\text{k}=5}^{\text{N}-1}[88+256\text{log}(2)+32\text{k}^2 + 384\text{k}^2\text{log}(\text{k})]\right\} + 32\text{N}^2 -64(\text{N}-3)\text{N}^2\text{log}(\text{N}) - 64(\text{N}-2)(\text{N}+1)^2\text{log}(\text{N}+1)-(2\text{N}+7)(2\text{N}-1)^3\text{log}(2\text{N}-1)+(2\text{N}-5)(2\text{N}+3)^3\text{log}(2\text{N}+3)\right] $$ Notes: This seems like a known result, as Wolfram Cloud (guessing same as Mathematica underneath) gives the exact expression $\zeta(3)$ when you type in the right hand side of the expression above. Limit[(Pi^2/96)(1444+15232Log[2]+3456Log[3]+Sum[88+32k^2+256Log[2]+384k^2Log[k],{k,5,N-1}]+32N^2-64(N-3)N^2Log[N]-64(N-2)(N+1)^2Log[N+
Edit : A general result that is already known to others can be found in this : thread on MO I am posting an answer that attempts to answer my question 1 partially. I already made some simplifications by reaching Edit4 (within the body of my question) after I posted my original question. Thanks for many useful suggestions in the comments section. Here, I will try to list a couple of additional relations in the same fashion. I am posting an answwer only because there seems to be some pattern emerging. I need to spend more time on this to establish a more generalized result. $$ \zeta(1) = -2 \pi^{0} \lim\limits_{\text{N} \to \infty} \left [ \left\{\sum\limits_{\text{k}=1}^{\text{N}}\text{k}^{0}\text{log}(\text{k})\right\} - \left\{\text{log}(\text{N})\sum\limits_{\text{k}=1}^{\text{N}}\text{k}^{0}\right\}+\text{N}-\text{log}(\text{N})\right] $$ $$ \zeta(3) = 4 \pi^{2} \lim\limits_{\text{N} \to \infty} \left [ \left\{\sum\limits_{\text{k}=1}^{\text{N}}\text{k}^{2}\text{log}(\text{k})\right
|logarithms|zeta-functions|
1
How to find the derivative of a matrix if Y=AXBX^TC
How do I find the matrix derivative $dY/dX$, where $Y=AXBX^TC$ and $X$ is $p \times q$, if the dimensions of $X$ are unspecified? Determine the dimensions of the other matrices from the expression; $A$, $B$, and $C$ are constant matrices. Add the dimension to the identity matrix.
$ \def\k{\otimes} \def\h{\odot} \def\o{{\tt1}} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\E{{\cal E}} \def\F{{\cal F}} \def\G{{\cal G}} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\vc#1{\op{vec}\LR{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\c#1{\color{red}{#1}} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} \def\Eij{E_{ij}} \def\Eji{E_{ji}} \def\Xij{X_{ij}} $ First, recall that the component-wise self-gradients of $X$ are $$\eqalign{ \grad X\Xij &= \Eij \qquad\qquad \grad {X^T}\Xij &= \Eij^T = \Eji \\ }$$ where $\Eij$ is the so-called Single Entry matrix whose components are all equal to $0$ except for the $(i,j)$ component which equals $\o$ . This can be used to compute the component-wise gradient of $Y$ $$\eqalign{ Y &= AXBX^TC \\ \grad Y{\Xij} &= A\gradLR{X}{\Xij}BX^TC + AXB\gradLR{X^T}{\Xij}C \\ &= A\Eij BX^TC + AXB\Eji C \\ }$$ The full matrix-by-matrix gradient is a
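A finite-difference spot check of the component formula $\partial Y/\partial X_{ij}=A E_{ij} B X^T C + A X B E_{ji} C$ (my addition; the matrix sizes below are arbitrary):

```python
import numpy as np

# Compare the analytic gradient of Y = A X B X^T C in the direction E_ij
# with a central finite difference.
rng = np.random.default_rng(0)
n, m = 3, 4
A = rng.normal(size=(2, n)); B = rng.normal(size=(m, m))
C = rng.normal(size=(n, 5)); X = rng.normal(size=(n, m))

def Y(X):
    return A @ X @ B @ X.T @ C

i, j, eps = 1, 2, 1e-6
E = np.zeros((n, m)); E[i, j] = 1.0                       # single-entry matrix E_ij
numeric = (Y(X + eps * E) - Y(X - eps * E)) / (2 * eps)
analytic = A @ E @ B @ X.T @ C + A @ X @ B @ E.T @ C      # E.T is E_ji
print(np.max(np.abs(numeric - analytic)))                 # ~1e-9
```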
|matrices|derivatives|
0
Uniqueness and continuous dependence on the data of Heat equation.
Let two smooth $v_1$ and $v_2$ both satisfy the system $$\partial_t{v}-\Delta v=f \quad \text{in} \quad U \times (0,\infty), $$ $$v = g \quad \text{on} \quad \partial U \times (0,\infty),$$ for some fixed given smooth $f: \bar{U}\times (0,\infty) \rightarrow \mathbb{R}$ and $g: \partial U \times (0,\infty).$ $U$ is open, bounded and $U \subset \mathbb{R}^n.$ Show that $$\sup_{x \in U} |v_1(t, x) − v_2(t, x)| \rightarrow 0,$$ as $t \rightarrow \infty.$ This is my work: Let $ u =v_1 -v_2,$ it is sufficient to prove $\sup_{x \in U} |u(x,t)| \rightarrow 0,$ as $t \rightarrow \infty. (1)$ $u$ obeys the system $$\partial_t{u}-\Delta u=0 \quad \text{in} \quad U \times (0,\infty), $$ $$u = 0 \quad \text{on} \quad \partial U \times (0,\infty).$$ Multiply both sides by $u.|u|^{2(m-1)},$ note that $\partial_t(|u|^{2m})=2m\partial_tu.u.|u|^{2(m-1)}$ then $$\dfrac{1}{2m}\partial_t\int_{U}|u|^{2m}dx=\int_{U}\Delta u.u.|u|^{2(m-1)}dx$$ Apply integration by part for the RHS, we get $$\dfrac{1}{2m}\par
Yes, as mentioned in the comments, the energy integral is useful here. Let $u$ be defined as in the question. Define $$E(t)=\int_{U}{u(t,x)}^2~\mathrm d^m x$$ Note $E$ is bounded below by $0$ . Now, observe $$\dot E(t)=\int_U 2 ~u(t,x)~\partial_tu(t,x)~\mathrm d^m x \\ =2\int_U (u ~\Delta u)(t,x)\mathrm d^mx \\ =2\int_U \big(u ~\nabla\cdot( \nabla u)\big)(t,x)\mathrm d^mx$$ Recall the generalized integration by parts: $$\int_U \phi~\nabla\cdot v~\mathrm d\mu^m=\int_{\partial U}n\cdot \phi v~\mathrm d\mu^{n-1}-\int_{U}v\cdot \nabla\phi~\mathrm d\mu^m$$ Taking in our case $\phi=u$ and $v=\nabla u$ , we get $$\dot E(t)=2\int_U \big(u ~\nabla\cdot( \nabla u)\big)(t,x)\mathrm d^mx \\ =2\int_{\partial U} \big( n\cdot (u\nabla u)\big)(t,x)\mathrm d^m x-2\int_U |\nabla u|^2(t,x)\mathrm d^m x$$ The first integral is zero due to the assumptions on the boundary data of $u$ , and therefore we obtain $$\dot E(t)=-2\int_U|\nabla u|^2(t,x)\mathrm d^m x$$ Poincare's inequality implies $$\dot E(
|analysis|partial-differential-equations|heat-equation|gronwall-type-inequality|
1
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
There are other good answers here, I wanted to offer a perspective that I think might appeal to a computer scientist: in a proof by induction, the inductive step is often presented as a way to build up $P(n+1)$ from $P(n)$ , but it is often better to think of the inductive step as breaking $P(n+1)$ down into smaller, more manageable subparts. Concretely, in this situation, imagine you're trying to program a function that takes in a planar graph $G$ , and returns a 6-coloring of $G$ . If I were naively to try to create a recursive algorithm to do this, it might look something like this: def 6color(graph): #Base case: if there's only one vertex, color that vertex if len(graph.vertices) == 1: return {graph.vertices[0]: 0} coloring = {} #Remove an arbitrary vertex to get a smaller graph, then recurse v = some_vertex(graph) smaller_graph = graph.remove(v) smaller_coloring = 6color(smaller_graph) #For vertices other than v, we can lift the coloring from the subgraph for vertices u != v: colo
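Since the code above is cut off, here is a self-contained sketch of the same recursive idea (my own wording and function names, not the course's): strip a vertex of degree at most five, colour the rest, then give the stripped vertex a colour its neighbours do not use.

```python
def six_color(adj):
    """Recursively 6-colour a graph given as {vertex: set of neighbours}.
    Assumes the graph (and every induced subgraph) has a vertex of degree <= 5,
    which holds for planar graphs."""
    if len(adj) == 1:
        (v,) = adj
        return {v: 0}
    v = min(adj, key=lambda u: len(adj[u]))                # vertex of minimum degree
    smaller = {u: adj[u] - {v} for u in adj if u != v}     # induced subgraph G - v
    coloring = six_color(smaller)
    used = {coloring[u] for u in adj[v]}                   # at most 5 colours used
    coloring[v] = next(c for c in range(6) if c not in used)
    return coloring

# Tiny usage example: a 4-cycle with one diagonal.
g = {1: {2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {1, 2, 3}}
print(six_color(g))
```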
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
0
Uniqueness of a linear map for which $Tv_{k} = w_{k}$
I am reading Sheldon Axler's masterpiece Linear Algebra Done Right (Fourth Edition). If you thought you knew linear algebra, read this - you'll learn more in the first 50 pages than the entirety of a typical 300+ level course. Anyways, I am having a bit of trouble following one of his results. He states and proves the following. 3.4 linear map lemma Suppose $v_{1}, \ldots, v_{n}$ is a basis of $V$ and $w_{1}, \ldots, w_{n} \in W$ . Then there exists a unique linear map $T: V \rightarrow W$ such that $Tv_{k} = w_{k}$ for each $k = 1, 2, \ldots, n$ . Proof In the interest of brevity, I will skip the first part of the proof where he shows that the function $$T(c_{1}v_{1} + \ldots + c_{n}v_{n}) = c_{1}w_{1} + c_{2}w_{2} + \cdots + c_{n}w_{n}$$ is indeed a linear map from $V$ to $W$ for which $Tv_{k} = w_{k}$ for each $k = 1, \ldots, n$ . The next part is what confuses me. To prove uniqueness, now suppose that $T \in \mathcal{L}(V, W)$ and that $Tv_{k} = w_{k}$ for each $k = 1, \ldots, n$ .
The point is that because $\{v_1, v_2, \ldots, v_n \}$ is a basis, there's only one way to express any element of $V$ as a linear combination of the $v_i$ (that follows from linear independence), and each element of $V$ can be expressed as such a linear combination (that follows because the $v_i$ span $V$ ). So for any element $v \in V$ , express $v= \sum c_iv_i$ . We now know that $Tv=\sum c_i w_i$ , and we can come up with such a combination for all $v \in V$ .
|linear-algebra|linear-transformations|
0
Is "If ⊨□φ then ⊨φ" true in modal logic?
(Under propositional modal logic, system K) There's seems to be an obvious counterexample: A model with one world w s.t. $\phi$ is not true in $\phi$ and $\lnot R(w,w)$ . But consider this argument: Assuming $\vDash\square\phi$ , take an arbitrary model $M$ and construct a new model $M^*$ s.t. for any $w$ in $M$ , add a world x s.t. $R(z,x)$ . Since $M^*,x \vDash \square\phi$ , $M^*,w \vDash\phi$ . And since $\langle M,w\rangle$ and $\langle M^*,w\rangle$ is bisimilar, $M,w \vDash\phi$ . Because $M$ and $w$ was arbitrary, $\vDash\phi$ . I don't think this argument is right because we don't know if $\langle M,w\rangle$ and $\langle M^*,w\rangle$ is bisimilar, since we don't know if adding an $x$ would change how $M^*, w$ satisfies sentential variables. Intuitively, this claim holds when $\phi$ is a modal tautology. But I'm not sure if that's always the case given that $\vDash\square\phi$ . I suspect it isn't. Update: The counterexample does not work because: since $\phi$ is any wff, we
We notice that $$\Box\psi\rightarrow\psi$$ is not a theorem of $\mathbf{K}$ , hence $$\not\vdash_{\mathbf{K}}\Box\psi\rightarrow\psi$$ However, $$\vdash\Box\psi\implies\vdash\psi$$ equivalently, $$\Vdash\Box\psi\implies\Vdash\psi$$ is a metatheorem of $\mathbf{K}$ . We may intuitively see this noticing that $\Box\psi$ says that $\psi$ holds in all worlds, therefore, the question of reflexivity has already been resolved. Indeed, the proof system presented in James Garson's Modal Logic for Philosophers includes it as an inference rule. Johan van Benthem gives an interesting proof of it in his Modal Logic for Open Minds . I quote it here for completeness: Proof. Suppose that $\psi$ is not provable. Then by completeness, there is a counter-model $\mathcal{M}$ with a world $w$ where $\neg\psi$ holds. Now here is a semantic trick that is used a lot in modal logic. Take any new world $v$ , add it to $\mathcal{M}$ and put just one extra $R$ -link, from $v$ to $w$ : The atomic valuation at $v$
|logic|modal-logic|
1
Investigate whether the polynomial $q(x) = 2x^5 - 78x^3 + 39x + 21$ is irreducible in $\mathbb{F}_{13}[x]$.
Investigate whether the polynomial $q(x) = 2x^5 - 78x^3 + 39x + 21$ is irreducible in $\mathbb{F}_{13}[x]$ . Solution : In $\mathbb{F}_{13}[x]$ , $q(x) = 2x^5 + 8 = 2(x^5 + 4)$ . This polynomial has a root in $\mathbb{F}_{13}: p(a) = 0 \in \mathbb{F}_{13} $ if and only if $a^5 = -4 \in \mathbb{F}_{13}$ . Since $\mathbb{F}_{13}^*$ is cyclic of order $13 - 1 = 12$ , and $\text{gcd}(12, 5) = 1$ , every congruence of the form $x^5 \equiv b \mod 13$ has a solution. I have an old math exam question with the solution included, but there are certain steps of the solution I don't understand. Questions: Since the polynomial has a zero point it means that the polynomial is reducible, since the polynomial can be written as a product of factors although how do I know that one of the factors must be a unit? Why do we look at the order of $\mathbb{F}_{13}^*$ instead of at $\mathbb{F}_{13}$ and how does knowing the order of $\mathbb{F}_{13}^*$ and that the $\text{gcd}(12,5)=1$ lead to the conclusion t
The claim is that it decomposes into two smaller polynomials neither of which is a unit, as a factor $x - a$ is not a unit in $\mathbf{F}_{13}[x]$ . To answer your second question, think about what happens when we take $k$ -th powers of elements in $\mathbf{F}_{13}^*$ with $\text{gcd}(k, 12) = 1$ . Hint: when do we have that $\left(a^k\right)^n = 1$ (take maybe some generator of $\mathbf{F}_{13}^*$ )?
|abstract-algebra|irreducible-polynomials|
0
Question About Half-Open Cubes on $\mathbb{R}^d$
I am self-studying real analysis. I encountered the following assertion without a proof: For each positive integer $k$ , let $\mathcal{C}_k$ be the collection of all cubes of the form \begin{align*} \{(x_1,\dots,x_d):j_i2^{-k} \leq x_i < (j_i+1)2^{-k} \text{ for } i=1,\dots,d\} \end{align*} where $j_1,\dots,j_d$ are arbitrary integers. Then (1) each $\mathcal{C}_k$ is a countable partition of $\mathbb{R}^d$ , and (2) if $k_1 < k_2$ , then each cube in $\mathcal{C}_{k_2}$ is included in some cube in $\mathcal{C}_{k_1}$ . I tried a couple of examples, for instance, I tried $k=1$ and $j_i=i$ , $k=1$ and $j_1=0$ , $j_2=-5$ , $j_3=10$ , and so on. It does show that the claim is correct. However, I am really having a hard time proving it rigorously. I really appreciate it if someone could help me out! I do not have the definition of a partition of $\mathbb{R}^d$ . But I think it means a collection of nonempty disjoint sets whose union is $\mathbb{R}^d$ . Reference: Lemma 1.4.2 from Measure Theory by Donald Cohn
Frame challenge: this is not the kind of place in a real analysis course where putting in the effort required for a "formal proof" is a useful way to spend your time. The picture in the plane is good enough. Save your insistence on rigor for times that matter: the epsilons and deltas. If you must fill in the details here I would start by showing that each cube in each of the partitions is a translate of a cube with "lower left corner" at the origin. That should get you the countability and the partition property (which you state correctly) in part (1) and the inclusion in part (2).
|real-analysis|analysis|measure-theory|elementary-set-theory|problem-solving|
0
Prove that $g(x) = \sum_{n=0}^{+\infty}\frac{1}{2^n+x^2}$ ($x\in\mathbb{R}$) is differentiable and check whether $g'(x)$ is continuous.
The function $g(x)$ is a function series, so it is differentiable when $g'(x)$ converges uniformly. So I should just check uniform convergence of $g'(x)$ by using the Weierstrass M-test: $$g'(x) = \left(\sum_{n=0}^{+\infty}\frac{1}{2^n+x^2}\right)' = \sum_{n=0}^{+\infty}\left(\frac{1}{2^n+x^2}\right)',$$ then $$\left|-\frac{2x}{(2^n+x^2)^2}\right| = \frac{2|x|}{(2^n+x^2)^2} \leq \frac{2|x|}{(2^n)^2} = \frac{2|x|}{4^n}.$$ But now I can't find a sequence that is bigger than $\frac{2|x|}{4^n}$ to use. For checking whether function $g'(x)$ is continuous or not, I think I will use the same argument: If $g''(x)$ converges uniformly, then $g'(x)$ is differentiable $\Longrightarrow$ continuous. Am I solving this problem in a correct way? Any help would be much appreciated.
We can prove that the function $g(x)$ is differentiable infinitely many times. It is convenient to consider $$h(x)=\sum_{n=0}^\infty {1\over 2^n+x},\quad x>-1$$ since the differentiation of each term is much simpler and $g(x)=h(x^2).$ It suffices to show that $h$ is differentiable infinitely many times. However $$a_n(x):={d^k\over dx^k}(2^n+x)^{-1}=(-1)^k {k!\over (2^n+x)^{k+1}}$$ Hence $|a_n(x)|\le k!\,2^{-(k+1)n},$ which implies that the series $$(-1)^kk!\sum_{n=0}^\infty {1\over (2^n+x)^{k+1}}$$ is uniformly convergent for $x\ge -1/2.$ Thus $h\in C^\infty[0,\infty).$
|calculus|analysis|functions|
0
Proof that $\sin(z-w)=\sin(z) \cos(w)-\sin(w) \cos(z)$
Note that \begin{aligned} \sin(z-w) &= \frac{e^{i(z-w)}-e^{-i(z-w)}}{2i} \\ &= \frac{e^{iz}e^{-iw}-e^{-iz}e^{iw}+e^{iz}e^{iw}-e^{iz}e^{iw}}{2i} \\ &= \frac{e^{iw}(e^{iz}-e^{-iz})}{2i}-\frac{e^{iz}(e^{iw}-e^{-iw})}{2i} \\ &= e^{iw}\sin(z)-e^{iz}\sin(w) \\ &= 2(\cos(w) \sin(z)-\cos(z) \sin(w))+e^{-iz}\sin(w)-e^{iw} \sin(z) \end{aligned} I have omitted to add $e^{-iw} \sin(z)-e^{-iw} \sin (z)$ The idea was to achieve that $e^{-iz} \sin(w)-e^{iw} \sin (z) = -\sin (z-w)$ to clear this value and easily obtain the equality, but using the sin parity is not given, any suggestions?
After your first formula block, you could just take the real part, as the left side is a real function. Or you could resolve the exponentials into the Euler formula $e^{iw}=\cos w+i\sin w$ and then recognize that the resulting two $i\sin w\sin z$ terms cancel. You went one step too far into this direction.
|complex-analysis|
0
Prove that the following function is one-to-one
Define a function $g$ from the set of real numbers to $S$ by the following formula: $$ g(x) = \frac12\biggl( \frac x{1+|x|} \biggr) + \frac12,\quad x\in\mathbb{R}. $$ Prove that $g$ is a one-to-one correspondence. (It is possible to prove this statement either with calculus or without it.) What conclusion can you draw from this fact? My question is that what is the conclusion we can draw after we decide that it is a one-to-one correspondence? I would prove its one-to-one correspondence through its graph, which is one-to-one in that no two $x$ 's are mapped to the same $y$ .
First recall $|x|=\sqrt{x^2}$, so $$g(x)=\frac{x}{2+2\sqrt{x^2}}+\frac{1}{2}.$$ Using the quotient rule, for $x\neq0$, $$g'(x)=\frac{2+2\sqrt{x^2}-x\left(\frac{2x}{\sqrt{x^2}}\right)}{(2+2\sqrt{x^2})^2}=\frac{2+2|x|-2|x|}{(2+2|x|)^2}=\frac{2}{(2+2|x|)^2}=\frac{1}{2(1+|x|)^2}.$$ Since the numerator and denominator are both positive, $g'(x)>0$ for every $x\neq0$ (and directly from the definition $g'(0)=\frac12>0$ as well), so $g$ is strictly increasing and therefore one to one.
|algebra-precalculus|functions|discrete-mathematics|
0
Tricky Algebraic Reduction
So I'm trying to work my way through Ernst Kummer's De Numeris Complexis , and I've reached a point where I keep stumbling over something that should be very, very simple. After almost an hour of playing around with this, I have been unable to solve it, and so appeal to the community for help. The basic proposition can be boiled down to what follows: Let's say that $a,b$ are some numbers such that $1+a+b=0$ , and consider the product $p = (ax+by)(ay+bx)$ where $x$ and $y$ are independent variables. The claim is that $p$ can then be expressed as a form in $x,y$ wherein neither $a$ nor $b$ appear. What is this form, and how does one find it? EDIT: To give some idea of my own failed attempts, one idea I had was that you write out the product as follows: $$(x^2+y^2)ab + xy(a^2+b^2).$$ Then you use that $$0=0^2=(1+a+b)^2 = 1 + 2(a+b)+ 2 ab + a^2+b^2$$ to get that $$a^2+b^2=-1-2(a+b)-2ab.$$ I then inserted that into $(x^2+y^2)ab + xy(a^2+b^2)$ to get $$(x^2+y^2)ab - xy(1 + 2(a+b)+ 2 ab).$$ S
This can't be correct. If $a=0, b=-1$ , then $p=xy$ . If $a=b=-\frac 12$ , then $p=\frac 14(x+y)^2$ . If $p$ could be expressed independently of $a, b$ , these expressions would have to be equal, but they're not.
|abstract-algebra|
1
Expectation value of repeated dice throws
We throw a dice, if we throw a 6, then we throw again (any number of times). Let $X$ be the sum of all thrown numbers. Find $\mathbb{E} (X)$ . I know that if we were just throwing without repeating throws then it would be: $\mathbb{E} (Y) = 1*P(1) + 2*P(2) + 3*P(3) + ... + 6*P(6) = \frac{7}{2}$ Now if we would get to throw only once after a six throw, then it's ( $\mathbb{E} (Z)$ ): Let $\mathbb{E}(Y')$ be the expectation value of the second throw after a six was thrown. $\mathbb{E}(Y') = 1*P_{Y'}(1) + 2*P_{Y'}(2) + ... + 6*P_{Y'}(6)$ $ = 1*\frac{1}{6^2} + ... + 6*\frac{1}{6^2} = \frac{\frac{7}{2}}{6}$ $\mathbb{E} (Z) = \mathbb{E}(Y) + \mathbb{E}(Y')$ But how to calculate the repeated throws after a six is thrown? Is it the sum up to infinity? $$\mathbb{E}(X) = \sum_{n=1}^{\infty} \sum_{i=1}^{6} i\cdot \frac{1}{6^n}$$ How can I evaluate this double sum?
You can adapt your first expression to the "rethrow 6s" case with $$\mathbb{E}[Y] = 1\times P(1) + 2\times P(2) + 3\times P(3) + \cdots + (6+\mathbb{E}[Y]) \times P(6) $$ which will give you $(1-P(6))\times \mathbb{E} [Y] = \frac72$ and so $\mathbb{E} [Y] = \frac{21}{5}$ . JMoravitz's comment says much the same thing, and also suggests you think about $\sum\limits_{n=1}^{\infty} \sum\limits_{i=1}^{6} i\cdot \frac{1}{6^n} = \sum\limits_{i=1}^{6} i\cdot \sum\limits_{n=1}^{\infty} \frac{1}{6^n} = 21 \cdot \frac15= \frac{21}{5}$ .
|statistics|expected-value|
1
Factorisation of continuous maps
I'm studying general topology and a question has come to my mind. I am referring to the class of theorems that in algebra go by the name of "homomorphism theorems". In my topology course, we have seen some results alike, but it lacks the general result on when, given two continuous maps with the same domain, there exists a continuous map which composed with the first gives the second. Is there a necessary and sufficient condition for that existence? I know from set theory that for such a map to exist it's necessary that, if the first map has equal values on two arguments, so does the second. But there should be a second condition, involving the topological structure, assuring that the existing maps (possibly more than one, if the first map is not surjective) are continuous. I add a statement highlighting the condition (?) which I'm looking for: Let $f\colon X\to Y$ and $g\colon X\to Z$ be two continuous functions having the same domain (not only as sets, but as topological spaces). The
You suggest a theorem of the form: If $X$ , $Y$ , and $Z$ are structures, $f\colon X\to Y$ and $g\colon X\to Z$ are morphisms (respecting the structure), then provided that blah holds, there exists a morphism $h\colon Y\to Z$ such that $g=h\circ f$ . In groups, this is not a standard theorem (in the way that, say, the Noether Homomorphism Theorems are standard). Best I can come up with would be something like: Proposition. If $f\colon G\to H$ and $g\colon G\to K$ are group homomorphisms, $\ker(f)\subseteq\ker(g)$ , and $f$ is surjective, then there exists a group homomorphism $h\colon H\to K$ such that $g=h\circ f$ . (The condition $\ker(f)\subseteq\ker(g)$ is required because this is what ensures that $f(x)=f(x')$ implies $g(x)=g(x')$ , which is required if $g=h\circ f$ . Surjectivity of $f$ is also required, as for example the inclusion $f\colon\mathbb{Z}\hookrightarrow\mathbb{Q}$ and the identity map $g\colon\mathbb{Z}\to\mathbb{Z}$ cannot be factored into a map $\mathbb{Z}\hookrightar
|general-topology|geometry|quotient-spaces|
1
How do you prove $\lvert x - y \rvert < 1$ then $\lvert x\rvert<\lvert y\rvert +1 $?
How do you prove $\lvert x - y \rvert < 1$ then $\lvert x\rvert < \lvert y\rvert + 1$ ? I know this proof has the form of the triangle inequality, but I can't seem to figure it out. This is from Kenneth Ross 17.1
$|x| = |x-y+y| \leq |x-y|+|y|\stackrel{|x-y|<1}{<}|y|+1$. More generally, this trick shows $|x|-|y|\leq |x-y|$ and by symmetry, $|y|-|x|\leq |x-y|$ and therefore, $||x|-|y||\leq |x-y|$ which is a common way to prove that absolute value is a (uniformly) continuous function.
|analysis|absolute-value|triangle-inequality|
0
Proof that $\sin(z-w)=\sin(z) \cos(w)-\sin(w) \cos(z)$
Note that \begin{aligned} \sin(z-w) &= \frac{e^{i(z-w)}-e^{-i(z-w)}}{2i} \\ &= \frac{e^{iz}e^{-iw}-e^{-iz}e^{iw}+e^{iz}e^{iw}-e^{iz}e^{iw}}{2i} \\ &= \frac{e^{iw}(e^{iz}-e^{-iz})}{2i}-\frac{e^{iz}(e^{iw}-e^{-iw})}{2i} \\ &= e^{iw}\sin(z)-e^{iz}\sin(w) \\ &= 2(\cos(w) \sin(z)-\cos(z) \sin(w))+e^{-iz}\sin(w)-e^{iw} \sin(z) \end{aligned} I have omitted to add $e^{-iw} \sin(z)-e^{-iw} \sin (z)$ The idea was to achieve that $e^{-iz} \sin(w)-e^{iw} \sin (z) = -\sin (z-w)$ to clear this value and easily obtain the equality, but using the sin parity is not given, any suggestions?
Just expand $e^{iw}$ and $e^{iz}$ and the imaginary terms will cancel out. $\begin{aligned} \sin(z-w) &= \frac{e^{i(z-w)}-e^{-i(z-w)}}{2i} \\ &=...\\ &= e^{iw}\sin(z)-e^{iz}\sin(w) \\ &= (\cos(w)+i\sin(w))\sin(z)-(\cos(z)+i\sin(z))\sin(w) \\ &= \cos(w)\sin(z)-\cos(z)\sin(w)+i(\sin(w)\sin(z)-\sin(z)\sin(w)) \\ &= \cos(w)\sin(z)-\cos(z)\sin(w) \\ \end{aligned} $
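For anyone who wants an independent sanity check of the identity (my addition), a quick numerical test at random complex points:

```python
import cmath, random

# Spot-check sin(z - w) = sin(z)cos(w) - cos(z)sin(w) for complex z, w.
random.seed(0)
for _ in range(5):
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    w = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    lhs = cmath.sin(z - w)
    rhs = cmath.sin(z) * cmath.cos(w) - cmath.cos(z) * cmath.sin(w)
    print(abs(lhs - rhs))   # ~1e-15, i.e. zero up to rounding
```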
|complex-analysis|
0
Prove that the following function is one-to-one
Define a function $g$ from the set of real numbers to $S$ by the following formula: $$ g(x) = \frac12\biggl( \frac x{1+|x|} \biggr) + \frac12,\quad x\in\mathbb{R}. $$ Prove that $g$ is a one-to-one correspondence. (It is possible to prove this statement either with calculus or without it.) What conclusion can you draw from this fact? My question is that what is the conclusion we can draw after we decide that it is a one-to-one correspondence? I would prove its one-to-one correspondence through its graph, which is one-to-one in that no two $x$ 's are mapped to the same $y$ .
To prove that the function is a one-to-one correspondence, we have to prove that it is injective ($g(x)=g(y)$ forces $x=y$) and surjective onto $S$ (for this formula the range is the open interval $(0,1)$). Injectivity: suppose $x$ and $y$ are real numbers with $$ \frac{1}{2}\left(\frac{x}{1+|x|} \right) + \frac{1}{2} = \frac{1}{2}\left(\frac{y}{1+|y|} \right) + \frac{1}{2}. $$ Simplifying the above expression, $$ \frac{x}{1+|x|} = \frac{y}{1+|y|}, \qquad x + x|y| = y + y|x|. $$ Since $\frac{x}{1+|x|}$ has the same sign as $x$, the equality forces $x$ and $y$ to have the same sign. If both are $\geq 0$ then $|x|=x$, $|y|=y$ and $x+xy=y+yx$ gives $x=y$; if both are negative then $|x|=-x$, $|y|=-y$ and $x-xy=y-yx$ again gives $x=y$. This shows that the function is injective. Surjection: given $y$, solve $$ y = \frac{1}{2} \left( \frac{x}{1 + |x|} \right) + \frac{1}{2}, \qquad 2y - 1 = \frac{x}{1+|x|}. $$ For $\tfrac12 \leq y < 1$ take $x = \frac{2y-1}{2-2y} \geq 0$, and for $0 < y < \tfrac12$ take $x = \frac{2y-1}{2y} < 0$; in both cases $g(x)=y$, so every $y \in (0,1)$ is attained, which proves surjectivity onto $(0,1)$. As a consequence of proving the injectivity and surjectivity of the function. We can now
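As a quick check of the piecewise inverse used above (my addition, purely illustrative):

```python
# g and the piecewise inverse obtained by solving y = g(x) for x >= 0 and x < 0.
def g(x):
    return 0.5 * (x / (1 + abs(x))) + 0.5

def g_inv(y):                      # assumes 0 < y < 1
    return (2*y - 1) / (2 - 2*y) if y >= 0.5 else (2*y - 1) / (2*y)

for x in [-10.0, -1.5, -0.3, 0.0, 0.7, 4.2]:
    print(x, g(x), g_inv(g(x)))    # last column reproduces x
```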
|algebra-precalculus|functions|discrete-mathematics|
0
Proof that $\sin(z-w)=\sin(z) \cos(w)-\sin(w) \cos(z)$
Note that \begin{aligned} \sin(z-w) &= \frac{e^{i(z-w)}-e^{-i(z-w)}}{2i} \\ &= \frac{e^{iz}e^{-iw}-e^{-iz}e^{iw}+e^{iz}e^{iw}-e^{iz}e^{iw}}{2i} \\ &= \frac{e^{iw}(e^{iz}-e^{-iz})}{2i}-\frac{e^{iz}(e^{iw}-e^{-iw})}{2i} \\ &= e^{iw}\sin(z)-e^{iz}\sin(w) \\ &= 2(\cos(w) \sin(z)-\cos(z) \sin(w))+e^{-iz}\sin(w)-e^{iw} \sin(z) \end{aligned} I have omitted to add $e^{-iw} \sin(z)-e^{-iw} \sin (z)$ The idea was to achieve that $e^{-iz} \sin(w)-e^{iw} \sin (z) = -\sin (z-w)$ to clear this value and easily obtain the equality, but using the sin parity is not given, any suggestions?
You also have \begin{aligned} \sin(z-w) &= \frac{e^{i(z-w)}-e^{-i(z-w)}}{2i} \\ &= \frac{e^{iz}e^{-iw} - e^{-iz}e^{iw} + e^{-iz}e^{-iw} - e^{-iz}e^{-iw}}{2i} \\ &= \frac{e^{-iw}(e^{iz} - e^{-iz})}{2i} - \frac{e^{-iz}(e^{iw} - e^{-iw})}{2i} \\ &= e^{-iw}\sin(z)-e^{-iz}\sin(w). \end{aligned} Therefore \begin{aligned} \sin(z-w) &= \frac12\sin(z-w) + \frac12\sin(z-w) \\ &= \frac12\left(e^{iw}\sin(z)-e^{iz}\sin(w)\right) + \frac12\left(e^{-iw}\sin(z) - e^{-iz}\sin(w)\right) \\ &= \frac{e^{iw} + e^{-iw}}{2}\sin(z) - \frac{e^{iz} + e^{-iz}}{2}\sin(w) \\ &= \cos(w)\sin(z) - \cos(z)\sin(w). \end{aligned} But the other answers seem like less work. Less symmetric, but quicker, you already have \begin{aligned} \sin(z-w) &= e^{iw}\sin(z)-e^{iz}\sin(w) \\ &= 2(\cos(w) \sin(z)-\cos(z) \sin(w)) + \underbrace{e^{-iz}\sin(w)-e^{iw} \sin(z)}_{-\sin(z - w)}. \end{aligned} Add $\sin(z - w)$ to both sides: $$ 2\sin(z - w) = 2(\cos(w) \sin(z)-\cos(z) \sin(w)), $$ then divide by $2$ .
|complex-analysis|
1
Representation of $V$ as $\mathbb{C}^{2}$
Let $V$ be a finite-dimensional complex inner product space and suppose that there is an operator (a matrix) $A$ on $V$ that satisfies the following anti-commutation relations: $$AA + AA = 0$$ $$A^{*}A + AA^{*} = I,$$ where $I$ is the identity matrix and $A^{*}$ is the adjoint of $A$ . In this case, one can show that $V$ has an even dimension and that $A$ has a representation as a $2 \times 2$ matrix: $$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \tag{1}\label{1} $$ This is done in Simon's book . The basic idea is that $V$ has an orthogonal direct sum decomposition $V = \operatorname{Ker}(A) \oplus \operatorname{Ker}(A^{*})$ and, thus, every vector $\phi \in V$ can be seen as a vector: $$\phi = \begin{pmatrix} \varphi \\ \psi \end{pmatrix} \tag{2}\label{2}$$ with $\varphi \in \operatorname{Ker}(A)$ and $\psi \in \operatorname{Ker}(A^{*})$ . In this case, $A$ acts as a $2 \times 2$ matrix on these vectors. I have some questions regarding some terminology and concepts written on Sim
As you note on MO, algebras , as well as groups , have representations. One way to view Simon's claim is that we are taking a $*$ -representation of the $*$ -algebra (i.e., algebra with complex-anti-linear anti-involution) $\mathcal A = \mathbb C\langle A, A^*\rangle/(A^2 = 0, A^{*\,2} = 0, A^* A + A A^* = 1)$ —where $A$ and $A^*$ are treated as formal symbols, subject only to the requirement that the involution on $\mathcal A$ takes $A$ to $A^*$ and the further relations by which we quotient—in the sense that it is acting on an inner-product space (not just an abstract vector space) $V$ by a (unital) homomorphism $\pi : \mathcal A \to \operatorname{End}(V)$ such that $\pi(x^*)$ equals $\pi(x)^*$ for all $x \in \mathcal A$ (which, in this case, is equivalent to just requiring that $\pi(A^*)$ equals $\pi(A)^*$ ). This is what he means by referring to “a representation of (II.6.1–2)”, not a representation of $\operatorname{End}(V)$ . (Simon leaves out the relation $A^{*\,2} = 0 \iff \{A^
|linear-algebra|abstract-algebra|analysis|vector-spaces|representation-theory|
1
What even is a lower and upper triangular matrix in the context of Jacobi method?
I cannot understand how the hell L and U works in terms of the Jacobi Method. \begin{equation} x_{n+1} = D^{-1}(b-(L+U)x_n) \end{equation} Take this matrix system as an example: \begin{equation} \underbrace{\begin{pmatrix} 3 & -1 & 1 \\ 3 & 6 & 2 \\ 3 & 3 & 7 \end{pmatrix}}_A \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}}_x = \underbrace{\begin{pmatrix} 1 \\ 0 \\ 4 \end{pmatrix}}_b \end{equation} As I understand lower triangular is \begin{equation} \begin{pmatrix} 0 & 0 & 0 \\ 3 & 0 & 0 \\ 3 & 3 & 0 \end{pmatrix} \end{equation} as, you know, it is lower. embed and the upper triangular as \begin{equation} \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix} \end{equation} with the same reason. but looking at my professors notes, Jacobi Method is applied like this: \begin{align*} x_1^{(n+1)} = \frac{1}{3}(1-(-x_2^{(n)} + x_3^{(n)})) \\ x_2^{(n+1)} = \frac{1}{6}(0-(3x_1^{(n)} + 2x_3^{(n)})) \\ x_3^{(n+1)} = \frac{1}{7}(4-(3x_1^{(n)} + 3x_2^{(n)})) \\ \end{align
You have switched from using subscripts to index the sequence of vectors (in $x_{n+1} = D^{-1} (b - (L + U) x_n)$ , expressing the $(n+1)$ st vector $x_{n+1}$ in terms of the $n$ th vector $x_n$ ) to using superscripts to index the sequence of vectors and subscripts to denote the entries of vectors (in the equations that give the entries of the $(n+1)$ st vector, now called $x^{(n+1)}$ , in terms of the $n$ th, now called $x^{(n)}$ ). Using your latter notation to write $x^{(n)} = \begin{pmatrix} x^{(n)}_1 \\ x^{(n)}_2 \\ x^{(n)}_3 \end{pmatrix}$ , we have, with $L = \begin{pmatrix} 0 & 0 & 0 \\ 3 & 0 & 0 \\ 3 & 3 & 0 \end{pmatrix}$ and $U = \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}$ as you wrote, that $$ U x^{(n)} = \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x^{(n)}_1 \\ x^{(n)}_2 \\ x^{(n)}_3 \end{pmatrix} = \begin{pmatrix} -x^{(n)}_2 + x^{(n)}_3 \\ 2 x^{(n)}_3 \\ 0 \end{pmatrix} $$ and $$ L x^{(n)} = \begin{pmatrix} 0
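To make the two notations concrete, here is a short numerical run of the Jacobi iteration $x^{(n+1)} = D^{-1}\big(b - (L+U)x^{(n)}\big)$ for this example (my addition; 25 iterations is an arbitrary choice):

```python
import numpy as np

A = np.array([[3., -1., 1.], [3., 6., 2.], [3., 3., 7.]])
b = np.array([1., 0., 4.])

D = np.diag(np.diag(A))          # diagonal part
L = np.tril(A, k=-1)             # strictly lower triangular part
U = np.triu(A, k=1)              # strictly upper triangular part

x = np.zeros(3)
for _ in range(25):
    x = np.linalg.solve(D, b - (L + U) @ x)

print(x)                         # Jacobi iterate after 25 steps
print(np.linalg.solve(A, b))     # exact solution, for comparison
```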
|linear-algebra|matrices|
1
Navigating Consecutive Integer Quadratic Roots:
Prove that if $a$ , $b$ and $c$ are consecutive positive integers then the roots of the quadratic equation of the form $ax^2 + bx + c = 0$ has Complex roots My attempt: In my attempt to grapple with this intriguing problem, I embarked on a journey through the realms of quadratic equations and consecutive integers. At first glance, the task seemed daunting, but I was determined to unravel its mysteries. The discriminant of a quadratic equation determines the nature of its roots: positive for two real roots, zero for repeated roots, and negative for complex roots. So in this question I had to explore the scenario where the roots are complex. First, I considered the relationship between the coefficients $a$ , $b$ , and $c$ . Since they are consecutive positive integers, we can express them as (a), (a + 1), and (a + 2), respectively. Substituting these values into the discriminant formula, I obtained: $D = (a + 1)^2 - 4a(a + 2)$ Expanding and simplifying this expression, I arrived at: $D =
You were almost there! When $a,b,c$ are consecutive positive integers the trinomial can be written $nx^2+(n+1)x+(n+2)=0$. The discriminant in this case would be $D=(n+1)^2-4n(n+2)=-3n^2-6n+1$. We know that $n$ is a positive integer, so $-3n^2<0$ and $-6n+1<0$. We can also say that when $n\ge 1$, $D \le -8<0$. So indeed the trinomial has $2$ complex roots. If you are still not convinced, we can let $n+1=y$ or $n+2=y$ and the discriminant is still negative.
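A quick symbolic confirmation that the discriminant is negative for every positive integer $n$ (my addition):

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)
D = sp.expand((n + 1)**2 - 4*n*(n + 2))
print(D)                                       # -3*n**2 - 6*n + 1
print([D.subs(n, k) for k in range(1, 6)])     # [-8, -23, -44, -71, -104], all negative
```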
|algebra-precalculus|complex-numbers|quadratics|
0
Antiderivative of a linear matrix expression
Let $f:\mathbb{R}^{n\times m}\rightarrow \mathbb {R}$ be a function that takes an $n\times m$ matrix $X$ and maps it to the real line. Suppose that the derivative of $f$ with respect to one element $X_{ij}$ is $$ \frac{df}{dX_{ij}}=a^\top XE1_jb_i $$ where $a$ is an $n\times 1$ vector, $E$ is an $m\times m$ positive definite matrix, $b$ is an $n\times 1$ vector, and $1_j$ is the $j$ th standard basis vector of $\mathbb{R}^m$ . My question is: What is $f$ ? Does such a $f$ even exist? If $a=b$ , I believe that the answer is $$f=\frac{1}{2}a^\top X E X^\top a+\text{cte},$$ but I can't seem to generalize this result. Similarly, if $m=n=1$ it is trivial to find $f$ .
A better way to write the desired derivative is $$\eqalign{ \def\b{\beta} \def\a{\alpha} \def\h{\tfrac12} \frac{\partial f}{\partial X_{ij}} &= \big(ba^TXE\big)_{ij} \\ }$$ Consider the function $$\eqalign{ f &= \h{\rm Trace}\big(ba^TXEX^T\big) \quad\implies\quad \frac{\partial f}{\partial X_{ij}} &= \h\big(ba^TXE + ab^TXE\big)_{ij} \\ }$$ Then a derivative of the desired form occurs if $\,b=a,\,$ as you noted. Also as you noted, if $\,n={\tt1},\,$ then it is easy to find a function which produces the desired derivative because $i$ gets fixed at $i={\tt1},\,$ the vectors $(a,b)\,$ collapse to scalars $(\a,\b)\,$ and $X$ to a row vector $x^T$ $$\eqalign{ f &= \h{\rm Trace}\big(\a\b\,x^TEx\big) \quad\implies\quad \frac{\partial f}{\partial x_{j}^T} &= \big(\b\a\,x^TE\big)_{j} \\ }$$ If you replace the outer product $ba^T$ by a general matrix $A,\,$ then $$\eqalign{ f &= \h{\rm Trace}\big(AXEX^T\big) \quad\implies\quad \frac{\partial f}{\partial X_{ij}} &= \h\big(AXE + A^TXE^T\big)_{ij} \
|calculus|linear-algebra|integration|matrix-equations|matrix-calculus|
0
Investigate whether the polynomial $q(x) = 2x^5 - 78x^3 + 39x + 21$ is irreducible in $\mathbb{F}_{13}[x]$.
Investigate whether the polynomial $q(x) = 2x^5 - 78x^3 + 39x + 21$ is irreducible in $\mathbb{F}_{13}[x]$ . Solution : In $\mathbb{F}_{13}[x]$ , $q(x) = 2x^5 + 8 = 2(x^5 + 4)$ . This polynomial has a root in $\mathbb{F}_{13}: p(a) = 0 \in \mathbb{F}_{13} $ if and only if $a^5 = -4 \in \mathbb{F}_{13}$ . Since $\mathbb{F}_{13}^*$ is cyclic of order $13 - 1 = 12$ , and $\text{gcd}(12, 5) = 1$ , every congruence of the form $x^5 \equiv b \mod 13$ has a solution. I have an old math exam question with the solution included, but there are certain steps of the solution I don't understand. Questions: Since the polynomial has a zero point it means that the polynomial is reducible, since the polynomial can be written as a product of factors although how do I know that one of the factors must be a unit? Why do we look at the order of $\mathbb{F}_{13}^*$ instead of at $\mathbb{F}_{13}$ and how does knowing the order of $\mathbb{F}_{13}^*$ and that the $\text{gcd}(12,5)=1$ lead to the conclusion t
$q(x) = 2x^5 - 78x^3 + 39x + 21$ is equal to $f(x)=2x^5+8=2(x^5+4)$ in $\Bbb F_{13}[x]$, and $x=3$ satisfies $x^5=-4=9$ modulo $13$, because $3^5=3^3\cdot 3^2=1\cdot 9=9$ modulo $13$. Then $q(x)$ is not irreducible in $\Bbb F_{13}[x]$. On the other hand, dividing over the integers, $$\dfrac{x^5+4}{x-3}=x^4+3x^3+9x^2+27x+81+\dfrac{247}{x-3}$$ so, because $27\equiv1$, $81\equiv3$ and $247\equiv0\pmod{13}$, the quotient in $\Bbb F_{13}[x]$ is $$h(x)=x^4+3x^3+9x^2+x+3$$ We can verify that $h(a)\ne0$ for all elements $a$ of $\Bbb F_{13}$, so $h(x)$ has no linear factor. I don't look at the possibility of two quadratic factors because the question was about the irreducibility of $q(x)$ and the answer is no: $q(x)$ is reducible and we have $$q(x)=2(x-3)(x^4+3x^3+9x^2+x+3)\in\Bbb F_{13}[x]$$
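A machine check of this factorisation over $\mathbb F_{13}$ (my addition, using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
q = 2*x**5 - 78*x**3 + 39*x + 21
print(sp.factor(q, modulus=13))                            # factorisation of q over F_13
print([a for a in range(13) if q.subs(x, a) % 13 == 0])    # roots mod 13 -> [3]
```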
|abstract-algebra|irreducible-polynomials|
0
How many distinct subsets of the set $S=\{1,8,9,39,52,91\}$ have odd sums?
How many distinct subsets of the set $S=\{1,8,9,39,52,91\}$ have odd sums? Let, $O = $ Odd, and $E = $ Even. I figured, that only $\text{odd}+\text{even}=\text{odd}$, so I divided up the problem into 5 cases: Case 1: ${O}$ Case 2: ${O,E}$ Case 3: ${O,O,O}$ and ${O,E,E}$ Case 4: ${O,O,O,E}$ Case 5: ${O,O,O,E,E}$ and I found $28$ such subsets, but that's incorrect.
We can use parity to approach this problem. Note that in this set $\{1,8,9,39,52,91\}$ , four elements are odd ($1,9,39,91$) and two are even ($8,52$). Let's list the ways in which we will sum up to an odd number: $$[odd]$$ $$[odd,odd,odd]$$ $$[odd,even]$$ $$[odd,even,even]$$ $$[odd,odd,odd,even]$$ $$[odd,odd,odd,even,even]$$ In these cases, we have $\binom{4}{1}+\binom{4}{3}+\binom{4}{1} \binom{2}{1}+\binom{4}{1} \binom{2}{2}+\binom{4}{3} \binom{2}{1}+\binom{4}{3} \binom{2}{2}$ , respectively. Like everyone who answered, I also get $\boxed{32}$ .
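The count can also be confirmed by brute force (my addition):

```python
from itertools import combinations

S = [1, 8, 9, 39, 52, 91]
odd_sum_subsets = sum(1 for r in range(1, len(S) + 1)
                        for c in combinations(S, r)
                        if sum(c) % 2 == 1)
print(odd_sum_subsets)   # 32
```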
|combinatorics|multisets|
0
Math History Reference Request: Open-Access Introduction Textbooks, Recently Published from $2015-2022$
As of recent, I have deeply read through Stephen Hawking’s “A Brief History of Time”, Thomas Levenson’s “The Hunt for Vulcan”, and Howard Eve’s “Foundations and Fundamental Concepts of Mathematics”. The thing is, I didn’t consider math history to be at all interesting until I have was enthralled in the tidbits of history presented throughout these books, and thus I reached out to a math history professor at my college for further reading material. However, the books he had told me to get is on the extremely-costly for a college student who cannot afford to buy books on her own outside given scholarship money (Book one , two , and three ) $^{[1]}$ . However, I have had extremely good luck in the past from self-studying open-access materials like the Combinatorics and Abstract Algebra open-access textbooks. Thus, I am wondering if there exists other Open-Access text books of Math History like the above - preferably recently published, as I have noticed/told that newer published books ten
Recent Open Access Books: (2020) Making up Numbers, A History of Invention in Mathematics Standalone, $2000-2020$ Open Access E-Books: Mathematics and Its History History of Mathematics by Burton A History of Mathematics by Florian Cajori A Short Account of the History of Mathematics by W. W. Rouse Ball History of Modern Mathematics by David Eugene Smith The number-system of algebra treated theoretically and historically Book Archives: Online Historical Maths Textbooks from the 16th to 19th Centuries Online Books Page - Math History
|reference-request|math-history|
0
Existence of atlas with certain properties
I have a question about the following definition from Gilkey's book, "Invariance Theory, the Heat Equation, and the Atiyah-Singer Index Theorem" (p. 31): (Let $M$ be a closed manifold.) Definition: An atlas $\mathcal{U}$ is a collection $\{U_i,h_i,\phi_i\}$ where the $U_i$ are a finite open cover of $M$ , $(U_i,h_i)$ is a coordinate chart, and the $\phi_i$ are a partition of unity subordinate to the cover $U_i$ . We assume that any two points of $M$ belong to at least one of the $U_i$ . We assume that for any pair of indices $i$ and $j$ there exists a coordinate chart $(O_{ij},h_{ij})$ so that $\bar U_i\cup\bar U_j\subseteq O_{ij}$ . The first condition I think follows from Hausdorffness of $M$ , so we can put enough charts into the collection (which might be rather large) to make this happen. Question: How do we know that a collection $\mathcal{U}$ exists that satisfies the second condition, $\bar U_i\cup\bar U_j\subseteq O_{ij}$ ?
Let $M$ be a smooth compact connected $n$ -dimensional manifold. Let $g$ be a Riemannian metric on $M$ , $d$ the corresponding Riemannian distance function on $M$ . Let $R$ denote the injectivity radius of $(M,g)$ and $r:= R/6$ . Then the products $B(x,R)\times B(y,R)$ , $(x,y)\in M^2=M\times M$ , form an open cover of $M^2$ . We let $\{x_1,...,x_N\}$ denote a finite subset of $M$ such that $$ M^2= \bigcup_{1\le i, j\le N} B(x_i,r)\times B(x_j,r) $$ Thus, for every pair of points $x, y\in M$ there exist $i, j$ such that $x\in B(x_i,r)$ , $y\in B(x_j,r)$ . Set $U_{ij}:= B(x_i,r)\cup B(x_j,r)$ , $1\le i\le j\le N$ . Then we clearly have that any two points in $M$ belong to one of the subsets $U_{ij}$ . Next, consider $U_{ij}, U_{kl}$ . We will need to find an open subset $O_{ijkl}\subset M$ containing the closure of $U_{ij}\cup U_{kl}$ such that $O_{ijkl}$ is diffeomorphic to an open subset of $\mathbb R^n$ . There are several cases which may occur, I will analyze just two and leave you
|general-topology|differential-geometry|manifolds|
1
Significance of $\sigma$-finite measures
From Wikipedia : The class of $\sigma$-finite measures has some very convenient properties; $\sigma$-finiteness can be compared in this respect to separability of topological spaces. Some theorems in analysis require σ-finiteness as a hypothesis. For example, both the Radon–Nikodym theorem and Fubini's theorem are invalid without an assumption of $\sigma$-finiteness (or something similar) on the measures involved. Though measures which are not σ-finite are sometimes regarded as pathological, ... I was wondering what makes $\sigma$-finite measures so natural to mathematicians ( they often think of them in the first place when it comes to measures , while I as a layman don't have that instinct), well-behaved (as opposite to "pathological") and important (appearing in conditions in many theorems such as Radon-Nikodym, Lebesgue decomposition and Fubini's Theorems)? In what sense/respect, can $\sigma$-finiteness be compared to separability of topological spaces? For example, are most or all
$\sigma$-finiteness can be compared in this respect to separability of topological spaces. In a separable topological space a countable dense set can be used to approximate any element of the topological space, in the sense that any neighbourhood of any element contains a member of the countable dense set. This also allows you to construct, in a metric space, a sequence of the countable dense set which converges to your desired element. In the same sense, you can construct using a "sigma-finite set" of measurable sets $E_n$ a "measure-approximation" of $A$ by utilizing cuts of $E_n$ with respect to $A$ . We will use below the two most common characterizations for $\sigma$ -finiteness, that of a countable increasing monotone sets and that of the countable partitioning sets (countable pairwise disjoint cover): let $(\Omega,\mathcal{A},\mu)$ be a $\sigma$ -finite measure space and let $A\in\mathcal{A}$ , then: $$ \begin{align*} &E_n\uparrow\Omega &\Rightarrow \mu(A) = \lim_{n\to\infty} \mu(E_n\
|general-topology|measure-theory|intuition|
0
Is the notion "If a polynomial has small coefficients (relative to the exponent), then it has small roots" true?
Basically I'm trying to find good starting values for algorithms that determine the roots of a polynomial (e.g. Newton's method). Obviously we are trying to get as close to the root as we can, but how can we estimate where the roots of a polynomial lie? Is an argument like: "If the coefficients are relatively small compared to the degree of the polynomial, then the magnitude of the roots is somewhere near the coefficients" correct? Are there counterexamples of polynomials with very small coefficients and very large roots?
A bit late to the party, but I always find it insightful to look at the easiest or best-known answers. In this case look at the standard roots for the quadratic equation $ax^2+bx+c$ : $$ x = \frac{-b\pm \sqrt{b^2-4ac}}{2a} $$ From this we can see that whenever $a$ , the coefficient of the quadratic term, is very small compared to $b$ , there will be a very large negative root, which is a simple counterexample to your statement. To provide some insight into your overarching question of finding relatively good initial guesses, you might want to find an $x_0$ s.th. $f(x_0)f''(x_0)>0$ , which can be done relatively easily for polynomial functions. See Darboux's theorem for more details.
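A quick numerical illustration of this counterexample (my own addition, not part of the original answer), assuming NumPy is available: a tiny leading coefficient sends one root of the quadratic off to roughly $-b/a$.

import numpy as np

# Quadratic a*x^2 + b*x + c with a tiny leading coefficient:
# one root stays near -c/b, the other blows up like -b/a.
a, b, c = 1e-6, 1.0, 1.0
print(np.roots([a, b, c]))   # approximately [-999999.0, -1.0]
print(-b / a)                # the large root is close to -b/a = -1e6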
|polynomials|numerical-methods|roots|
0
Find all matrices that commute with $\left(\begin{smallmatrix}2&3\\1&4\end{smallmatrix}\right)$
Find all $2\times 2$ matrices that commute with $$\left( \begin{array}{cc} 2 & 3 \\ 1 & 4 \end{array} \right)$$ My progress: I know that a square matrix commutes with itself, the identity matrix of that order, the null matrix of that order and any scalar matrix of that order. The answer has been given as: $$\left( \begin{array}{cc} m & 3n \\ n & m+2n \end{array} \right)$$ I don't understand how they're getting that form. Can someone please explain?
A somewhat more insightful approach is to think in terms of the eigendecomposition of the given matrix. Namely, if $A$ is a $2\times 2$ diagonalisable non-degenerate matrix (which yours is), then $[A,B]=0$ iff $B$ is also diagonalisable in the same eigenbasis. Case in point, your matrix decomposes as $$A = \begin{pmatrix}2&3\\1&4\end{pmatrix} = \frac14\begin{pmatrix}1&3\\1 & -1 \end{pmatrix} \begin{pmatrix}5&0\\0&1\end{pmatrix} \begin{pmatrix}1&3\\1 & -1 \end{pmatrix}.$$ It immediately follows that $[A,B]=0$ iff $B$ decomposes as $$B = \frac14\begin{pmatrix}1&3\\1 & -1 \end{pmatrix} \begin{pmatrix}a&0\\0&b\end{pmatrix} \begin{pmatrix}1&3\\1 & -1 \end{pmatrix} =\frac14\begin{pmatrix}a+3b & 3a-3b \\ a-b & 3a+b\end{pmatrix}$$ for any $a,b$ . You can then get the parametrisation via $n,m$ by a simple change of variables. See e.g. this post for the generalisation of these statements about finding commutants from eigendecompositions.
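As a numerical sanity check of the decomposition above (my own script, not part of the original answer; note the convenient coincidence that $P^{-1}=P/4$ for this eigenvector matrix):

import numpy as np

A = np.array([[2., 3.], [1., 4.]])
P = np.array([[1., 3.], [1., -1.]])       # columns are eigenvectors for 5 and 1
D = np.diag([5., 1.])
print(np.allclose(A, 0.25 * P @ D @ P))   # True, since P^{-1} = P/4 here

# Any B that is diagonal in the same eigenbasis commutes with A.
a, b = 2.7, -0.3
B = 0.25 * P @ np.diag([a, b]) @ P
print(np.allclose(A @ B, B @ A))          # True
print(B)                                  # has the form [[m, 3n], [n, m + 2n]]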
|linear-algebra|matrices|
0
Upper bound for series with $\arcsin$, $\arccos$ and $\arctan$
Just one doubt. Suppose that I have these three series: $$\sum_{n=1}^\infty\left[\arcsin(p(x))\right]^n, \quad \sum_{n=1}^\infty \left[\frac{1}{2\pi}\arccos(q(x))\right]^n, \quad \sum_{n=1}^\infty \left[\frac{1}{2\pi}\arctan\left(g(x)\right)\right]^n$$ Suppose that the three inverse trigonometric functions are defined on their domains. I am interested in an upper bound for each series. Look at the red colours. $$\left|\left[\arcsin(p(x))\right]^n\right| \color{red}{=}\color{red}{or\, \leq}\left|\arcsin(p(x))\right| ^n\leq \left(\frac{\pi}2\right)^n, \quad \forall n\in\Bbb N$$ $$\left|\left[\frac{1}{2\pi}\arccos(q(x))\right]^n\right| \color{red}{\leq} \left|\frac{1}{2\pi}\arccos(q(x))\right|^n \leq \left(\frac{1}{2\pi}\cdot \pi\right)^n=\left(\frac 12\right)^n, \quad \forall n\in \Bbb N$$ What about $\arctan$ ? I know that the codomain of $\arctan$ is $]-\pi/2,\pi/2[$ . How is the arctangent upper bound obtained in this case? Like that of the arccosine, since both are odd functions?
Um... Assuming $g$ , $p$ , and $q$ are real-valued... Since the trig functions don't have arguments that depend on $n$ , these are three geometric series: $\sum_{n=1}^\infty \left( f(x) \right)^n = \frac{f(x)}{1-f(x)}$ (when $-1 < f(x) < 1$ ). So the arcsine series diverges when $\arcsin p(x) \in [-\pi/2, -1] \cup [1, \pi/2]$ (i.e., when $p(x) \in [-1,\sin(-1)] \cup [\sin(1), 1]$ ) and otherwise converges, the arccosine series always converges ( $[0/2\pi, \pi/2\pi] \subseteq (-1,1)$ ), and the arctangent series always converges ( $[(-\pi/2)/2\pi, (\pi/2)/2\pi] \subseteq (-1,1)$ ).
|sequences-and-series|upper-lower-bounds|inverse-function|
1
All positive integers $n$ such that $\sum\limits_{k=1}^{n}\frac{2^{k-1}}{a_k^2}=1$?
How can we find all positive integers $n$ for which there exist positive integers $a_1,a_2,\cdots,a_n$ such that $\sum\limits_{k=1}^{n}\frac{2^{k-1}}{a_k^2}=1$ ? My attempt: For any odd $n=2k+1$ , I guess that the solution will be of this form: $a_1=a_2=2,a_{2k+1}=4^k,$ and $ a_{2j+1}=a_{2j+2}=4a_{2j−1}, 1\le j<k$ . For $n$ even, things get a little bit tricky. For example for $n=2, \frac{1}{a_1^2}+\frac{2}{a_2^2}=1$ this implies that $a_1,a_2>1$ but then $a_1,a_2≥2$ yields $\frac{1}{a_1^2}+\frac{2}{a_2^2}≤\frac34<1$ . There is no solution. For $n=4$ , we can have the following solution: $\frac{1}{6^2}+\frac{2}{6^2}+\frac{4}{12^2}+\frac{8}{3^2}=1.$ Thank you for your help.
There are solutions for $n\ge 3$ . This can be proven by induction. Base Case: For $n=3$ , we have the solution $\dfrac14+\dfrac24+\dfrac4{16} = 1$ . Induction Step: Suppose $n$ is odd and there are $a_1,a_2,\dots,a_n$ positive integers such that $$\frac1{a_1^2}+\frac2{a_2^2}+\dots+\frac{2^{n-1}}{a_n^2} = 1$$ For $n+1$ , we have $$\begin{aligned} \frac1{(3a_1)^2}+\frac2{(3a_2)^2}+\dots+\frac{2^{n-1}}{(3a_n)^2}+\frac{2^n}{(3\cdot2^{(n-3)/2})^2} &=\\ \frac19\left(\frac1{a_1^2}+\frac2{a_2^2}+\dots+\frac{2^{n-1}}{a_n^2}+\frac{2^n}{2^{n-3}}\right) &=\\ \frac19(1+8) &= 1 \end{aligned}$$ and for $n+2$ we have $$\begin{aligned} \frac1{(2a_1)^2}+\frac2{(2a_2)^2}+\dots+\frac{2^{n-1}}{(2a_n)^2}+\frac{2^n}{(2^{(n+1)/2})^2}+\frac{2^{n+1}}{(2^{(n+3)/2})^2} &=\\ \frac14\left(\frac1{a_1^2}+\frac2{a_2^2}+\dots+\frac{2^{n-1}}{a_n^2}\right)+\frac{2^n}{2^{n+1}}+\frac{2^{n+1}}{2^{n+3}}&=\\ \frac14+\frac12+\frac14 &= 1\\ \end{aligned}$$ thus concluding the induction. $\square$
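A quick exact-arithmetic check of the base case and both induction steps (my own script; the helper check is a name I am introducing, not something from the answer):

from fractions import Fraction

def check(a):
    # exact value of sum_{k=1}^{n} 2^(k-1) / a_k^2
    return sum(Fraction(2**k, ak**2) for k, ak in enumerate(a))

a3 = [2, 2, 4]                                   # base case n = 3
assert check(a3) == 1

n = len(a3)
a4 = [3 * ak for ak in a3] + [3 * 2**((n - 3) // 2)]                 # n -> n + 1 step
a5 = [2 * ak for ak in a3] + [2**((n + 1) // 2), 2**((n + 3) // 2)]  # n -> n + 2 step
assert check(a4) == 1 and check(a5) == 1
print(a3, a4, a5)   # [2, 2, 4] [6, 6, 12, 3] [4, 4, 8, 4, 8]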
|number-theory|
0
If $f(0)=0,f(1)=1$,find all $a$ such that $\exists \xi\in (0,1)$ such that $f(\xi)+a=f'(\xi)$
Let $V$ be the set of all continuous functions $f : [0,1] \to \mathbb{R} $ , differentiable on $(0,1)$ , with the property that $f(0) = 0$ and $f(1)= 1$ . Determine all $a\in \mathbb{R}$ such that for every $f\in V$ , there exists some $\xi\in (0,1)$ such that $$f(\xi)+a=f'(\xi)$$ I have tried some functions like $x$ and $x^2$ , and find that $a$ must be in $(0,1)$ , but I'm not sure if this is the answer. I also try to construct the function $g(x)=e^{-x}f(x)$ , and then $e^{\xi}g'(\xi)=a$ . But I don't know how to keep going. Could someone give me some hints?
We want to reduce set of different possibilities for $a$ , so it makes sense to try to find a function s.t. $f'(x) - f(x)$ has low number of possible values. If it has just one, then $f'(x) - f(x) = a$ , which has solution $f_0(x) = c e^x + d$ . Substituting $f(0) = 0$ , $f(1) = 1$ , we get $f_0(x) = \frac{e^x}{e - 1} - \frac{1}{e - 1}$ . This gives $a = \frac{1}{e - 1}$ . Now, the question is if for any $f$ satisfying given conditions, $f(\xi) + a = f'(\xi)$ has solution for $\xi \in (0, 1)$ . Let's write $f(x) = g(x) + f_0(x)$ , then $g(0) = g(1) = 0$ and if $f(\xi) + a = f'(\xi)$ is equivalent to $g(\xi) = g'(\xi)$ . If $g$ is zero on $(0, 1)$ , then for $\xi$ in this interval $g(\xi) = 0 = g'(\xi)$ . Otherwise, take some interval $(a, b)$ s.t. $g(a) = g(b) = 0$ but $g(x) \neq 0$ for $x \in (a, b)$ . Wlog assume $g(x) > 0$ for $x \in (a, b)$ . $g(\xi) = g'(\xi)$ for positive $g$ is equivalent to $(\ln g(\xi))' = 1$ . And as $\ln g(a + \epsilon)$ is very large negative number, $\ln g
|functions|continuity|
0
Non-decreasing functions satisfying functional inequalities $f(1+ax)\leq a f(1+x)$ and $f(xy)\leq f(x)+f(y)$
Can we exactly determine a class of non-decreasing functions defined on the set of non-negative real numbers satisfying: $$f(1+ax)\leq a f(1+x)$$ and $$f(xy)\leq f(x)+f(y)$$ for any $x,y\in [0,\infty)$ , assuming that $a\in[0,1)$ can vary? For example, log fulfills the second inequality, but not the first one... Looking for any kind of input or advice. (or a book to look up some results on this topic)
We establish the following claim. CLAIM : Let $f:\mathbb R_{\ge 0}\rightarrow\mathbb R$ be a function. Then $f$ satisfies all of the following conditions if and only if $f(x)=0$ for all $x\ge 0$ . The function $f$ is non-decreasing. For all $a\in[0,1)$ we have $f(1+ax)\le af(1+x)$ . For all $x,y\ge 0$ we have $f(xy)\le f(x)+f(y)$ . First, notice that the all-zero function satisfies all three conditions. Conversely, assume that all three conditions hold for $f$ . By the second condition for $a=0$ we have $f(1)\le 0$ , so by the first condition we have $f(x)\le 0$ for $x\le 1$ . Assume that $f(0)<0$ ; then we get a contradiction with the third condition for $x=y=0$ , so we have $f(0)=0$ . By the first condition we have $f(x)=0$ for $0\le x\le 1$ . Also, by the first condition we have $f(x)\ge 0$ for $x>1$ . Now, assume there exists $v>0$ such that $f(1+v)>0$ . By the third condition for $x=y=1+v$ we have $f((1+v)^2)\le 2f(1+v)$ . But for $x=(1+v)^2-1$ and $a=v/x$ we have $$af(1+x)\le \frac{
|functions|functional-inequalities|
0
How do I interpret the intersection of a variety with a "non-closed hyperplane?"
I am trying to understand Vakil's statement and proof of Bertini's theorem, which has been updated since many of the questions related to it were posted on this website (for what it's worth, I'm not entirely sure how to interpret the original statement either). I guess this question leads into the more general question of "how do incidence varieties work?" For context, this is the statement of Bertini's theorem in Vakil (Theorem 13.4.2, December 31, 2022 Draft): Suppose $X$ is a smooth subvariety of $\mathbb{P}_k^n$ of (pure) dimension $d$ . Then there is a nonempty (= dense) open subset of dual projective space ${\mathbb{P}_k^n}^\vee$ such that for every point $p = [H] \in U$ , $H$ doesn't contain any component of $X$ , and the scheme $H \cap X$ is smooth over $\kappa(p)$ of (pure) dimension $d - 1$ . (1) What is $H \cap X$ for a point $p = [H] \in {\mathbb{P}_k^n}^\vee$ , and why is it a scheme over $\kappa(p)$ (in particular, how do we make sense of this if $k$ is not algebraically
I eventually figured this out but forgot to post an answer. In the statement of the theorem, $H \cap X$ refers to the closed subscheme of $X \times_k \operatorname{Spec}{\kappa(p)}$ cut out by the equation $a_0x_0 + \cdots + a_nx_n = 0$ ; the $a_i$ correspond to "constant functions" i.e. elements of $\kappa(p)$ when considered on appropriate affine charts. More cleanly, for $p \in {\mathbb{P}_k^n}^\vee$ a point, $$\require{AMScd} \begin{CD} H \cap X @>>> X \times_k \operatorname{Spec}{\kappa(p)} @>>> \operatorname{Spec}{\kappa(p)};\\ @VVV @VVV @VVV \\ {} I @>>> X \times_k {\mathbb{P}_k^n}^\vee @>>> {\mathbb{P}_k^n}^\vee;\\ {} @VVV @VVV \\ {} @. X @>>> k; \end{CD}$$ where every square is Cartesian. In particular, when we say that " $H$ does not contain any irreducible components of $X$ ," we are saying that $H$ (as a subscheme of $\mathbb{P}_{\kappa(p)}^n$ ) does not contain any irreducible component of $X \times_k \operatorname{Spec}{\kappa(p)}$ .
|algebraic-geometry|schemes|projective-schemes|
1
If $f(0)=0,f(1)=1$,find all $a$ such that $\exists \xi\in (0,1)$ such that $f(\xi)+a=f'(\xi)$
Let $V$ be the set of all continuous functions $f : [0,1] \to \mathbb{R} $ , differentiable on $(0,1)$ , with the property that $f(0) = 0$ and $f(1)= 1$ . Determine all $a\in \mathbb{R}$ such that for every $f\in V$ , there exists some $\xi\in (0,1)$ such that $$f(\xi)+a=f'(\xi)$$ I have tried some functions like $x$ and $x^2$ , and find that $a$ must be in $(0,1)$ , but I'm not sure if this is the answer. I also try to construct the function $g(x)=e^{-x}f(x)$ , and then $e^{\xi}g'(\xi)=a$ . But I don't know how to keep going. Could someone give me some hints?
This is problem 7 from last year's IMC (2023), as can be found here . I know, because I composed this problem :] For these kinds of problems, you often want to use Rolle's Theorem, which states that if $a<b$ and $f$ is continuous on the interval $[a,b]$ and differentiable on $(a,b)$ , satisfying $f(a) = f(b)$ , then there exists some $\xi \in (a,b)$ such that $f'(\xi) = 0$ . By varying the function $f$ that you apply this to, you can prove statements that one does not immediately recognise as being a corollary of Rolle's theorem. Let me give you an example. Problem: Let $f : \mathbb{R} \to \mathbb{R}$ be differentiable such that $f(0) = f(1) = 0$ . Prove that for some $\xi \in (0,1)$ it holds that $f'(\xi) = f(\xi)$ . Solution: It is not clear that this is a result of Rolle's theorem, since both $f$ and $f'$ appear in the statement of the problem, but it really is, albeit with some trickery. Namely, let $g(x) = e^{-x}f(x)$ . Then also $g(0) = g(1) = 0$ , so we may apply Rolle's theorem to
|functions|continuity|
1
Proof that a specific exponential integral converges (Admissibility of complex Morlet wavelet)
As part of a proof of the admissibility of the complex Morlet wavelet, I am trying to show that the following integral is positive and finite $$ 0 < \int_{-\infty }^\infty{\frac{\left|\hat{\psi}(\omega)\right|^2}{|\omega |}}d\omega < \infty $$ where $\sigma > 0$ . Is anyone willing to give any guidance, or provide an explicit solution for the integral? Alternatively, I believe that one of the niceties of the Morlet wavelet is that it suffices to show that the following condition holds instead (for admissibility): $$ \hat{\psi}(0)=0\implies\int^\infty_{-\infty}\psi(t)dt=0 $$ Where the Fourier transform $\hat{\psi}(\omega)=(e^{\sigma\omega}-1)e^{-\frac{1}{2}(\sigma^2+\omega^2)}$ . If anyone is interested, the actual admissibility criterion is that $0 < C_\psi < \infty$ , where: $$ C_\psi = \int_{-\infty }^\infty{\frac{\left|\hat{\psi}(\omega)\right|^2}{|\omega |}}d\omega = \int_{-\infty }^\infty{\frac{\left|c_\sigma(e^{\sigma \omega}-1)e^{-\frac{1}{2}(\sigma^2 + \omega^2)}\right|^2}{|\omega |}}d\omega $$
The inverse Fourier transform of $$\hat{\psi}(\omega)=(e^{\sigma\omega}-1)\, e^{-\frac{1}{2}(\sigma^2+\omega^2)}\tag{1}$$ is $$\psi(t)=\mathcal{F}_{\omega}^{-1}\left[\hat{\psi}(\omega)]\right](t)=\frac{1}{\sqrt{2 \pi}} \int\limits_{-\infty}^{\infty} \left(e^{\sigma \omega}-1\right) e^{-\frac{1}{2} \left(\sigma^2+\omega^2\right)} e^{-i t \omega} \, d\omega\\=e^{-\frac{1}{2} t (t+2 i \sigma )}-e^{-\frac{\sigma^2}{2}-\frac{t^2}{2}}\tag{2}$$ and $$\int\limits_{-\infty}^{\infty} \psi(t) \, dt=\int\limits_{-\infty}^{\infty} \left(e^{-\frac{1}{2} t (t+2 i \sigma)}-e^{-\frac{\sigma^2}{2}-\frac{t^2}{2}}\right) \, dt=0\tag{3}.$$ With respect to the admissibility criterion, Mathematica gives the result $$C_\psi=\int\limits_{-\infty}^\infty{\frac{\left|\hat{\psi}(\omega)\right|^2}{|\omega|}}d\omega=\int\limits_{-\infty}^{\infty} \frac{\left|\left(e^{\sigma \omega}-1\right)\, e^{-\frac{1}{2} \left(\sigma^2+\omega^2\right)}\right|^2}{|\omega|} \, d\omega\\=-e^{-\sigma^2} \sigma^2 \left(\, _2F_2\left
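As a numerical companion to $(2)$ and $(3)$ (my own script, not part of the original answer; it only checks that the reconstructed $\psi$ integrates to approximately zero):

import numpy as np

sigma = 1.7
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
psi = np.exp(-0.5 * t * (t + 2j * sigma)) - np.exp(-0.5 * sigma**2 - 0.5 * t**2)
print(abs(np.sum(psi) * dt))   # ~ 0, consistent with equation (3)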
|integration|indefinite-integrals|fourier-transform|wavelets|
0
How to find the global minimum of a convex function?
The Problem The gradient descent algorithm finds a minimum of a convex function, but it does not guarantee that the found minimum is the global one. I don't know if it's possible to find a minimum of a function by its derivative (assuming no, then what would be the reason for using the gradient descent algorithm). Just for clarification, the function in this whole question context is assumed to be multidimensional. An Alternate IDEA An alternate idea came to mind when trying to solve this problem (please don't judge strictly as I am not a professional in this sphere): Let's have a plane compared to which the given function is convex (I am almost sure this is a wrong formulation, but I hope you understand what I wanted to say). If the plane crosses with the function, let's move it down (close to the bottom) with some steps (step size can initially be taken very big, unlike the gradient descent step) until it does not cross. Then, move the plane up, downsize the step size two times, and
If you're applying the gradient descent algorithm on a convex function, then the solution it returns is the global solution. See this answer that proves this notion. Page $7$ of this paper formally proves that a local minimum of a convex function is also a global minimum. Here's the transcription of that linked proof: Theorem 1 (Local Minimum is also a Global Minimum) Let $f:\mathbb{R}^d\rightarrow\mathbb{R}$ be convex. If $x^∗$ is a local minimum of $f$ over a convex set $D$ , then $x^∗$ is also a global minimum of $f$ over the convex set $D$ . Proof: Since $D$ is a convex set, for any $y$ , $y − x^∗$ is a feasible direction. Since $x^∗$ is a local minimum, for any $y ∈ D$ , we can choose a small enough $\alpha > 0$ , such that $$f (x^∗) ≤ f (x^∗ + \alpha(y − x^∗))$$ Furthermore, since $f$ is convex, we have $$f (x^∗ + α(y − x^∗)) = f (αy + (1 − α)x^∗) ≤ αf (y) + (1 − α)f (x^∗)$$ Combining these, we have $$f (x^∗) ≤ αf (y) + (1 − α)f (x^∗)$$ which implies that $f (x^∗) ≤ f (y)$ . Since $y$ i
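As a small illustration of the quoted fact (my own sketch, assuming a smooth convex objective): gradient descent on a least-squares problem, whose unique stationary point is the global minimizer.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

x = np.zeros(3)
step = 0.5 / np.linalg.norm(A, 2) ** 2        # safe step size for this objective
for _ in range(5000):
    x -= step * 2 * A.T @ (A @ x - b)         # gradient of ||Ax - b||^2

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_star, atol=1e-6))      # True: GD reached the global minimum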
|calculus|optimization|algorithms|maxima-minima|gradient-descent|
1
Application of Dirichlet theorem and Dirchlet density
I'm reading Serre's A course in Arithmetic and I have the following question about Proposition 14 in Chapter VI. It uses this version of Dirichlet's theorem: Let $m\ge 1$ , $(a,m) = 1$ . Let $P_a$ be the set of prime numbers such that $p\equiv a \mod m$ . The set $P_a$ has density $1/\phi(m)$ . Proposition 14 Let $a$ be an integer which is not a square. The set of prime numbers $p$ such that $\bigl(\frac{a}{p}\bigr) = 1$ has density $\frac{1}{2}$ . His proof goes like this: WLOG $a$ is square free. Let $m=4|a|$ and $\chi_a$ be the unique character $\mathbb{Z}/m\mathbb{Z}^\times$ such that $\chi_a(p) = \bigl(\frac{a}{p}\bigr)$ for all prime numbers $p$ not dividing $m$ . Note that $\bigl(\frac{a}{p}\bigr) = 1$ iff $p \in \ker\chi_a$ . Using Dirichlet's theorem, $[\mathbb{Z}/m\mathbb{Z}^\times: \ker \chi_a]$ is equal to the density of the primes satisfying this condition. $\square$ I have three questions about this: Why can we assume a is square free? How do we apply Dirichlet? In partic
Why can you take $a$ squarefree: just look at an example, say $a = 45 = 3^2 \cdot 5$ . The value of $(\frac{45}{p})$ equals the value of $(\frac{5}{p})$ as long as $p \not= 3$ . Since $\chi_a$ is a nontrivial quadratic character, it takes two values and those values are given by congruence conditions : the kernel of $\chi_a$ is a subgroup of $(\mathbf Z/m\mathbf Z)^\times$ with index 2, so the kernel has size $\varphi(m)/2$ . The primes in each unit class have density $1/\varphi(m)$ , so the primes that reduce mod $m$ to a congruence class in $\ker \chi_a$ have density $(\varphi(m)/2)/\varphi(m) = 1/2$ .
|number-theory|algebraic-number-theory|dirichlet-character|
1
Hensel's "proof" of transcendence of e
When introducing $p$ -adics, Kurt Hensel produced an incorrect proof of the transcendence of $e$ : see https://mathoverflow.net/q/416296 for more details. The problem is that the proof relies on the "universality" of series, that is, the assumption that the series $$\sum_{n=1}^\infty\frac{p^n}{n!},$$ which converges in the $p$ -adics, would converge to a number with similar properties as $e^p\in\mathbb R$ . (This is false.) My question is: is there a more dramatic way to apply this "proof" idea to show an obviously false result? For instance, using it to show the transcendence of a number that is actually algebraic.
In Example 6.2 here is a series of rational numbers converging to $0$ in $\mathbf R$ and to $1$ in $\mathbf Q_p$ for whichever prime $p$ you want. Example 9.5 there is an infinite series of rational numbers that converges to $8/7$ in $\mathbf R$ and to $-8/7$ in $\mathbf Q_3$ and $\mathbf Q_5$ . Remark 9.7 gives an example of an infinite series of rational numbers that converges to $3$ in $\mathbf R$ and to $-3$ in $\mathbf Q_2$ .
|p-adic-number-theory|
0
Formal definition of $m$-th digit of positive integer
Let $b$ be a base, that is, an integer greater than or equal to $2$ , and let $n$ be a positive integer that has $d$ digits in base $b$ . What is the formal definition of the function that takes a positive integer $m$ less than or equal to $d$ , and outputs the $m$ -th digit of $n$ , starting from the left? I know what it is, intuitively, but I want a formal definition.
$\displaystyle f(n, m)=\left \lfloor \frac{n}{b^{d-m}} \right \rfloor \bmod b$
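A direct Python rendering of this definition (my own illustration; len_in_base is a helper name I am introducing, not something from the question):

def len_in_base(n, b):
    # number of digits d of n in base b
    d = 0
    while n > 0:
        n //= b
        d += 1
    return d

def digit(n, m, b=10):
    # m-th digit of n in base b, counted from the left, via floor(n / b^(d-m)) mod b
    d = len_in_base(n, b)
    return (n // b ** (d - m)) % b

print(digit(90125, 2))         # 0
print(digit(0b101101, 3, 2))   # 1  (third digit from the left of 101101)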
|elementary-number-theory|definition|
1
When is the initial form of a principal ideal generated by the initial form of the original ideal's generator?
$ \DeclareMathOperator{\init}{in} \DeclareMathOperator{\gr}{gr} \newcommand{\calO}{\mathcal{O}} $ Let $(R,\mathfrak{m})$ be a Noetherian local ring and $\gr_{\mathfrak{m}}(R)=\bigoplus_{i=0}^\infty\mathfrak{m}^i/\mathfrak{m}^{i+1}$ be the associated graded ring. For $f\in R$ , the initial form of $f$ is $\init(f)=f+\mathfrak{m}^{i+1}\in\mathfrak{m}^i/\mathfrak{m}^{i+1}$ where $f\in\mathfrak{m}^i$ but $f\notin\mathfrak{m}^{i+1}$ . For an ideal $I\subset R$ , let $\init(I)$ be the ideal of $\gr_{\mathfrak{m}}(R)$ generated by the initial forms of elements of $I$ . In general, if $f_1,\dots,f_k$ generate $I$ , then $\init(I)$ is not necessarily generated by $\init(f_1),\dots,\init(f_k)$ (see for example here ). In Eisenbud's Commutative Algebra he gives an exercise (5.2) to find a local ring $R$ and $f\in R$ where $(\init(f))\subsetneq\init((f))$ . However in the "Tangent Cones" section of Milne's notes on algebraic geometry, he states that in the setting where we view $f$ as an element o
Let $\newcommand{\In}{\operatorname{in}}\In(f)=f+\newcommand{\m}{\mathfrak m}\m^{p+1}$ . Then $\In(\langle f\rangle)=\langle \In(f)\rangle$ if and only if $\m^n\cap \langle f\rangle=\m^{n-p}\langle f\rangle$ for all $n\geq 1$ . This is a special case of a more general theorem (Theorem 1.1) proved by Valabrega and Valla in their paper Form Rings and Regular Sequences ( Project Euclid link ). The proof of sufficiency is easy: all we have to do is notice that the $n$ -th homogeneous component of $\In(\langle f\rangle)$ is $\m^n\cap \langle f\rangle+\m^{n+1}/\m^{n+1}$ and that of $\langle \In(f)\rangle$ is $\m^{n-p}f+\m^{n+1}/\m^{n+1}$ .
|algebraic-geometry|commutative-algebra|local-rings|
1
Exponential distribution with random variable as parameter
Let $X$ be a random variable that is distributed $\exp(L)$ , where $L$ is a random variable distributed $\exp(\lambda)$ . I need to find $f_X(x)$ . My first thought was maybe to use the rule of multiplication, but it makes it even harder; next I thought about total probability, but I can't figure out how to calculate this. Any hint?
$P(X \le x)=EP(X \le x|L)=E(1-e^{-Lx})=1-\int_0^{\infty} \lambda e^{-xy } e^{-\lambda y}dy=1-\frac {\lambda} {\lambda +x}$ . Differentiation gives $f_X(x)=\frac {\lambda} {(\lambda +x)^{2}}, \ 0<x<\infty$ .
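A Monte Carlo sanity check of this mixture CDF (my own script; lam plays the role of $\lambda$):

import numpy as np

rng = np.random.default_rng(0)
lam, N = 2.0, 10**6
L = rng.exponential(scale=1 / lam, size=N)   # L ~ Exp(lam)
X = rng.exponential(scale=1 / L)             # X | L ~ Exp(L)

for x in (0.5, 1.0, 3.0):
    print(x, np.mean(X <= x), 1 - lam / (lam + x))   # empirical vs. closed-form CDF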
|probability|probability-distributions|
0
How can I show that the quotient function to the real projective space is closed?
As I have to demonstrate that this space is a Hausdorff space, I obviously hope to find a proof that does not use this fact.
I'm assuming that by quotient function you mean the one from $S^n \rightarrow \mathbb{R}P^n$ ? Regardless, it makes no difference to the outcome. The way that the quotient function/topology is defined gives you the result. Edit: As David pointed out, quotient maps are not always closed. However, due to the specifics of how $\mathbb{R}P^n$ is defined using the quotient, it actually is in this case. But then this is just restating the question. My comments about the CW complex still stand. If you are trying to show that $\mathbb{R}P^n$ is Hausdorff, you can construct a CW complex structure for it since any CW complex is Hausdorff. This may be simply moving the goal posts in your case though if you haven't seen this result yet. When I have time, I can find a link to another post on this site showing CW complexes are Hausdorff; there's bound to be at least one.
|general-topology|projective-space|
0
A variation of an exercice from Chapter 16 (Counting and Choosing) of Liebeck's book
The rules of a lottery are as follows: You select 10 numbers between 1 and 50. On lottery night, the celebrity mathematician Richard Thomas chooses at random 6 'correct' numbers. If your 10 numbers include all 6 correct ones, you win. How many ways are there to win the lottery? From my understanding, the requested number is given by $$\binom{10}{6}\times \binom{44}{4}$$ Am I right? Thank you very much for you help. For the sake of completeness, here is the original statement taken from the book : The rules of a lottery are as follows: You select 10 numbers between 1 and 50. On lottery night, celebrity mathematician Richard Thomas chooses at random 6 “correct” numbers. If your 10 numbers include all 6 correct ones, you win. Work out your chance of winning the lottery.
The number of ways Thomas can choose $6$ winning numbers is $^{50} C_6= w$ . The number of possible number combinations you can end up with is $^{50} C_{10} = b$ . Of the $b$ possibilities, you must have bought one whose numbers include Thomas' winning set of $6$ ; there are $w$ possibilities for that winning set.
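For a quick numerical companion to this counting (my own computation), the chance of winning can be evaluated in the two equivalent ways suggested by $w$ and $b$:

from math import comb

w = comb(50, 6)     # ways Thomas can pick the 6 "correct" numbers
b = comb(50, 10)    # possible 10-number tickets

p_fixed_ticket = comb(10, 6) / w     # fix your ticket, count favourable draws
p_fixed_draw = comb(44, 4) / b       # fix the draw, count tickets containing it
print(p_fixed_ticket, p_fixed_draw)  # both about 1.32e-5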
|combinatorics|solution-verification|lotteries|
0
Construction of a space with fundamental group $\pi(S)=\mathbb{Z}/n\mathbb{Z}$ (with obstacles!)
I've been reading the algebraic topology book by Massey and I've been trying to do the next exercise, from chapter 4, about van Kampen's theorem: Construct for every integer $n>2$ a space such that its fundamental group is cyclic and of order $n$ . From what I have seen we can do this exercise fairly easily with covering maps. The problem is that the exercise appears before the chapter that covers covering maps. I asked my teacher and he told me that you can do this exercise by considering an $n$ -polygon where each edge is identified as the same and we consider the same orientation. For example the projective plane is a $2$ -polygon where each edge is identified as the same and we follow the same orientation. When we consider an $n$ -polygon we are obviously not dealing with a surface. How can we use van Kampen's theorem to prove that the group is (I assume) a group with one generator and relations that is isomorphic to $\mathbb{Z}/n\mathbb{Z}$ ? Could you please elaborate on how to do it?
Let's call $X$ the space, the quotiented polygon. The trick to computing this homotopy group is the following: let $*$ be a point in the center of the polygon, let $U$ be a neighbourhood around it which doesn't touch the boundary, and $V=X\setminus\{*\}$ . Now these two open sets satisfy the hypothesis of Seifert-Van Kampen, and each open set involved has the following homotopy type: $U$ is contractible, $U \cap V = U\setminus \{*\} \cong \mathbb{S}^1$ , and the most difficult to see is $V$ , but to figure it out, you can see that before the identifications, $V$ has the $\mathbb{S}^1$ as a deformation retract, and after the identifications this is also an $\mathbb{S}^1$ ; to see this check that you are partitioning the circle into equal pieces and pasting one after the other, this is just another $\mathbb{S}^1$ . However the important point is that in the inclusion of $U \cap V$ in $V$ after doing the different deformation retractions this is not an inclusion anymore but can be represented as wi
|abstract-algebra|general-topology|algebraic-topology|
0
Can anyone explain how the complex matrix representation of a quaternions is constructed?
I am reading some properties of quaternionic matrices and I am unable to understand how we can get such a matrix representation. Please help in this regard.
I know this is an old question, but I'd like to share a (hopefully) more basic method of deriving the matrices. Using the notation $U,I,J,K$ for the quaternions, we know $I$ is a root of $X^2+1=0$ , so $I$ has an eigenvector $v$ with eigenvalue $i$ . Now let $w=Jv$ (we want the quaternions to live in $GL_2(\mathbb{C})$ so we hope that span $\{v,w\}$ is fixed under $Q_8$ ). $Iw=IJv=-JIv=-Jiv=-iw$ so $w$ is an eigenvector of $I$ with eigenvalue $-i$ . In particular, it must be linearly independent from $v$ . A quick check gives us: $Jw=J^2v=-v$ $Kv=IJv=Iw=-iw$ $Kw=IJw=-Iv=-iv$ So the matrices of $U,I,J,K$ with basis $\{v,w\}$ are: $$U=\begin{pmatrix}1&0\\0&1\end{pmatrix}\ I=\begin{pmatrix}i&0\\0&-i\end{pmatrix}\ J=\begin{pmatrix}0&-1\\1&0\end{pmatrix}\ K=\begin{pmatrix}0&-i\\-i&0\end{pmatrix}$$
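A short verification that these four matrices satisfy the quaternion relations (my own check, not part of the original answer):

import numpy as np

U = np.eye(2, dtype=complex)
I = np.array([[1j, 0], [0, -1j]])
J = np.array([[0, -1], [1, 0]], dtype=complex)
K = np.array([[0, -1j], [-1j, 0]])

assert np.allclose(I @ I, -U) and np.allclose(J @ J, -U) and np.allclose(K @ K, -U)
assert np.allclose(I @ J, K) and np.allclose(J @ I, -K)
print("quaternion relations hold")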
|linear-algebra|complex-analysis|quaternions|
0
Volume of intersecting oblique cylinders
The problem: Two oblique circular cylinders of equal height $h$ have a circle of radius $a$ as a common lower base and their upper bases are tangent to each other. Find the common volume. I have a solution, below, but I arrived at it through an analogy, so I'm asking how others would solve this problem. This is problem 31 of Additional Problems of Chapter 7 of Simmons Calculus, and it is appearing around where many first semester courses in calculus might end. My initial approaches led to integrals a reader is not yet ready to solve in the book, so I kept looking til I found something. The picture shows my approach. The sketch has sections of the intersecting regions shaded, they are like discs sliding past each other. Each overlapping area is symmetric, and each symmetric half is reminiscent of a rising or setting sun, which prompted the idea that a solution can be had by treating the volume similarly, since the rise of the sun at a constant rate over the horizon is equivalent to half
This is not an easier solution, but it is provided as a way to confirm that your solution is correct. For a given height $h_0 \in [0, h]$ above the base, the cross-sectional area common to the two cylinders is given by the intersection of two circles of radius $a$ and distance between the centers $$d(h_0) = 2a \cdot \frac{h_0}{h},$$ since when $h_0 = 0$ , the bases coincide, and when $h_0 = h$ , the bases are tangent and thus their centers are separated by twice the radius, which is $2a$ . The area of this "lens" shape as a function of $d$ can be calculated via elementary trigonometry: the angle $\theta$ between the line joining the two centers and a ray from one center to an intersection point on the boundary satisfies $$\cos \theta = \frac{d}{2a},$$ thus the area is $$A(d) = 4 \cdot \frac{1}{2} a^2 \theta - 2 \cdot \frac{d}{2} a \sin \theta = 2a^2 \cos^{-1} \frac{d}{2a} - \frac{d}{2} \sqrt{4a^2 - d^2}.$$ Written as a function of $h_0$ , this is $$A(h_0) = 2a^2 \left(\cos^{-1} \frac{h
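A numerical check of the lens-area formula above (my own addition; the value $\tfrac{4}{3}a^2h$ printed for comparison is my own evaluation of the resulting integral, not something stated in the truncated answer):

import numpy as np
from scipy.integrate import quad

a, h = 1.3, 2.7

def lens_area(h0):
    d = 2 * a * h0 / h
    c = np.clip(d / (2 * a), -1.0, 1.0)   # guard against rounding at h0 = h
    return 2 * a**2 * np.arccos(c) - (d / 2) * np.sqrt(max(4 * a**2 - d**2, 0.0))

V, _ = quad(lens_area, 0, h)
print(V, 4 * a**2 * h / 3)   # the two values agree closely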
|calculus|geometry|
1
A question about a proof of Proth's Theorem
This theorem is The number $N=2^n\cdot k+1$ with $k<2^n$ is prime if and only if there exists $a$ with $a^{(N-1)/2}\equiv -1\mod N$ This proof is $\Longrightarrow$ : If $N$ is prime, let $a$ be a primitive root of $N$ . Fermat's little theorem gives $a^{N-1}\equiv 1\mod N$ , hence $a^{(N-1)/2}\equiv \pm1\mod N$ . But since $a$ is a primitive root , we cannot have $a^{(N-1)/2}\equiv 1\mod N$ $\Longleftarrow$ : Now suppose there exists $a$ with $a^{(N-1)/2}\equiv -1\mod N$ . Let $s$ be the order of $a$ modulo $N$ , in other words the smallest positive integer with $a^s\equiv 1\mod N$ . Since $a^{N-1}\equiv 1\mod N$ , we have $s\mid N-1$ . If we write $s=2^m\cdot p$ with odd $p$ , we cannot have $m<n$ because then we would have $a^{(N-1)/2}\equiv 1\mod N$ because $s$ would divide $(N-1)/2$ . Hence $s$ is at least $2^n$ . If $q$ is a prime factor of $N$ , we have $a^{q-1}\equiv 1\mod q$ because $a$ and $q$ must be coprime, and $s$ must be the order of $a$ modulo $q$ , otherwise $s$ would not be the
Here's how I would rewrite the part of the referenced proof which you question . . . Assume $N=2^n\cdot k+1$ with $k<2^n$ . We can assume $k$ is odd since that would only increase $n$ and decrease $k$ . Now suppose $a$ is such that $a^{(N-1)/2}\equiv -1\;(\text{mod}\;N)$ . Let $q$ be a prime factor of $N$ , and let $s$ be the order of $a$ mod $q$ . Then $a^{q-1}\equiv 1\;(\text{mod}\;q)$ , hence $s{\,\mid\,}(q-1)$ . But from $$ \left\lbrace \begin{align*} a^{(N-1)/2}&\equiv -1\;(\text{mod}\;N)\\[4pt] a^{N-1}&\equiv 1\;(\text{mod}\;N)\\[4pt] \end{align*} \right. $$ we get $$ \left\lbrace \begin{align*} a^{(N-1)/2}&\equiv -1\;(\text{mod}\;q)\\[4pt] a^{N-1}&\equiv 1\;(\text{mod}\;q)\\[4pt] \end{align*} \right. $$ hence $s{\,\mid\,}(N-1)$ but $s{\,\not\mid\,}\bigl((N-1)/2\bigr)$ . It follows that $2^n{\,\mid\,}s$ . To explain the above claim, write $s=2^mj$ , where $j$ is odd. Then from $s{\,\mid\,}(N-1)$ we get \begin{align*} & (2^mj){\,\mid\,}(2^nk) \\[4pt] \implies\;& 2^m{\,\mid\,}2^n \;\text{
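The criterion itself is easy to try out numerically; here is a rough probabilistic sketch of a Proth check (my own code, using only modular exponentiation; proth_test is a hypothetical helper name):

from random import randrange

def proth_test(N, trials=20):
    # N = k*2^n + 1 with odd k < 2^n; a witness a with a^((N-1)/2) = -1 (mod N)
    # proves N prime by Proth's theorem (for prime N, half of all a are witnesses).
    for _ in range(trials):
        a = randrange(2, N - 1)
        if pow(a, (N - 1) // 2, N) == N - 1:
            return True      # witness found: N is prime
    return None              # inconclusive: no witness found in `trials` tries

print(proth_test(13))             # 13 = 3*2^2 + 1 is prime -> True
print(proth_test(3 * 2**7 + 1))   # 385 = 5*7*11 is composite -> None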
|elementary-number-theory|primitive-roots|
1
The "turning-point fraction" of a random sample from a discrete distribution must have expectation less than 2/3?
A sequence of reals $x_1,...,x_n$ is said to have a turning point at index-value $i$ ( $1\lt i\lt n$ ) iff $x_{i-1}\lt x_{i}\gt x_{i+1}$ or $x_{i-1}\gt x_{i}\lt x_{i+1}$ . The number of turning points in the sequence is denoted $T(x_1,...,x_n)$ , and we define the turning-point fraction as $$R(x_1,...,x_n)={\text{number of turning points}\over\text{number of potential turning points}}={T(x_1,...,x_n)\over n-2}$$ so $0\le R(x_1,...,x_n)\le 1.$ If $X_1,...,X_n$ are random variables, we define the corresponding r.v.s $T_n=T(X_1,...,X_n)$ and $R_n=R(X_1,...,X_n).$ Conjecture: If $X_1,...,X_n$ are i.i.d. r.v.s with any discrete distribution, then $E[R_n]\lt{2\over 3}$ . (It's easy to show that $E[R_n]={2\over 3}$ when the $X_i$ are i.i.d. with any continuous distribution.) Supposing the $X_i$ are i.i.d. with a discrete distribution having p.m.f. $p()$ and c.d.f. $F()$ , we have the following: $$\begin{align*}E[R_n] &={1\over n-2}E\left[ \sum_{i=2}^{n-1}\mathbb{1}_{(X_{i-1} X_{i+1}) \text{ o
We can use the linearity of expectation argument in the discrete case as well, we just have to be a little bit more careful. For each $i$ with $1 < i < n$ , there are three cases: The values of $X_{i-1}, X_i, X_{i+1}$ are all distinct. Conditioned on this case, there is a $\frac23$ probability that $X_i$ is a turning point: by symmetry, all six orderings of $X_{i-1}, X_i, X_{i+1}$ are equally likely, and four of them lead to a turning point at $X_i$ . Two of the values $X_{i-1}, X_i, X_{i+1}$ are equal, but different from the third. Conditioned on this case, there is a $\frac13$ probability that $X_i$ is a turning point: it is a turning point if and only if it is the "odd one out" of the three, and by symmetry, each of $X_{i-1}, X_i, X_{i+1}$ is equally likely to be the odd one out. All three of the values $X_{i-1}, X_i, X_{i+1}$ are equal. In this case, it is impossible for $X_i$ to be a turning point. If the probabilities of the three cases are $p_1, p_2, p_3$ with $p_1 + p_2 + p_3 = 1$ , the
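A simulation backing this case analysis (my own script): for an i.i.d. sample from a small discrete distribution, the empirical $E[R_n]$ should match $\tfrac23 p_1 + \tfrac13 p_2$.

import numpy as np

rng = np.random.default_rng(0)
vals, probs = np.array([0, 1, 2]), np.array([0.5, 0.3, 0.2])
n, reps = 200, 2000

R = np.empty(reps)
for r in range(reps):
    x = rng.choice(vals, size=n, p=probs)
    mid, left, right = x[1:-1], x[:-2], x[2:]
    turning = ((mid > left) & (mid > right)) | ((mid < left) & (mid < right))
    R[r] = turning.mean()

p_all_equal = np.sum(probs**3)
p_all_distinct = 6 * probs[0] * probs[1] * probs[2]
p_two_equal = 1 - p_all_equal - p_all_distinct
print(R.mean(), 2/3 * p_all_distinct + 1/3 * p_two_equal)   # both about 0.34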
|probability|statistics|random-variables|expected-value|time-series|
1
Operator norm of the sum of two positive semidefinite matrices
Consider two positive semidefinite and symmetric matrices, namely $\bf{A}$ and $\bf{B}$ . Denote $\Vert\cdot\Vert$ as the operator norm. If we have another positive semidefinite matrix $\bf{B}^*$ that satisfies $\Vert\bf{B}^*\Vert\geqslant\Vert\bf{B}\Vert$ , can we get $$\Vert\bf{A}+\bf{B}\Vert\leqslant\Vert\bf{A}+\bf{B}^*\Vert$$ ? I have worked out that for general $\bf{A}$ , $\bf{B}$ that is not positive semidefinite, we cannot get the target result since the eigenvalues may be negative. However, I cannot prove the case for positive semidefinite matrices. This problem is important for me because I am trying to use a concentration method to control operator norm of summation of dependent matrices. If we have independent $\mathbf{X}_1,...,\mathbf{X}_n\in\mathbb{R}^{m_1\times m_2}$ , we can use basic concentration inequalities (such as matrix Bernstein and matrix Hoeffding) to control $\Vert\sum_{i}\mathbf{X}_i\Vert$ , which fails in the case when $\mathbf{X}_i$ are dependent. Suppose w
If $B^\ast \geq B$ , then the result is true since then $A + B \leq A + B^\ast$ , which implies $\|A + B\| \leq \|A + B^\ast\|$ . If you’re only assuming $\|B^\ast\| \geq \|B\|$ , then the result is not necessarily true. Just choose $A = \begin{pmatrix} 0 & 0\\0 & 2 \end{pmatrix}$ , $B = \begin{pmatrix} 0 & 0\\0 & 1 \end{pmatrix}$ , and $B^\ast = \begin{pmatrix} 2 & 0\\0 & 0 \end{pmatrix}$ , say. Then $\|B^\ast\| = 2 > 1 = \|B\|$ but $\|A + B\| = 3 > 2 = \|A + B^\ast\|$ .
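Verifying the counterexample numerically (my own check; the operator norm of a symmetric PSD matrix is its largest eigenvalue):

import numpy as np

A = np.diag([0., 2.])
B = np.diag([0., 1.])
Bstar = np.diag([2., 0.])

op = lambda M: np.linalg.norm(M, 2)        # spectral norm
print(op(Bstar) >= op(B))                  # True: 2 >= 1
print(op(A + B), op(A + Bstar))            # 3.0 2.0, so ||A+B|| > ||A+B*||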
|matrices|eigenvalues-eigenvectors|positive-semidefinite|
1
find the complex limit
I want to compute $$\lim_{z \to 2e^{i\pi /3}} \frac{z^3+8}{z^4+4z^2+16}$$ Note that if we replace the value $$(2e^{i\pi /3})^3+8=-8+8=0$$ and $$(2e^{i\pi /3})^4+4(2e^{i\pi /3})^2+16=16(e^{i 4\pi /3}+e^{i 2\pi /3}+1)=16(-1+1)=0$$ So it is a case of $\frac{0}{0}$ of course my goal is not to use L'Hopital yet, but it should be possible to simplify the expression of the denominator, any suggestion to factor this? thanks for that
There is an algorithm to find the GCD of two polynomials with rational coefficients. $$ \left( x^{4} + 4 x^{2} + 16 \right) $$ $$ \left( x^{3} + 8 \right) $$ $$ \left( x^{4} + 4 x^{2} + 16 \right) = \left( x^{3} + 8 \right) \cdot \color{magenta}{ \left( x \right) } + \left( 4 x^{2} - 8 x + 16 \right) $$ $$ \left( x^{3} + 8 \right) = \left( 4 x^{2} - 8 x + 16 \right) \cdot \color{magenta}{ \left( \frac{ x + 2 }{ 4 } \right) } + \left( 0 \right) $$ $$ \frac{ 0}{1} $$ $$ \frac{ 1}{0} $$ $$ \color{magenta}{ \left( x \right) } \Longrightarrow \Longrightarrow \frac{ \left( x \right) }{ \left( 1 \right) } $$ $$ \color{magenta}{ \left( \frac{ x + 2 }{ 4 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ x^{2} + 2 x + 4 }{ 4 } \right) }{ \left( \frac{ x + 2 }{ 4 } \right) } $$ $$ \left( x^{2} + 2 x + 4 \right) \left( \frac{ 1}{4 } \right) - \left( x + 2 \right) \left( \frac{ x }{ 4 } \right) = \left( 1 \right) $$ $$ \mbox{confirming GCD} = \color{blue}{ \left( x^{2} - 2 x + 4 \rig
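For readers who prefer a computer-algebra cross-check of this GCD computation and of the resulting limit, here is a short SymPy script (my own addition; the limit value it prints, $\tfrac38-\tfrac{\sqrt3}{8}i$, is my own evaluation):

import sympy as sp

z = sp.symbols('z')
num = z**3 + 8
den = z**4 + 4*z**2 + 16

print(sp.gcd(num, den))                     # z**2 - 2*z + 4
simplified = sp.cancel(num / den)           # (z + 2)/(z**2 + 2*z + 4)
z0 = 1 + sp.sqrt(3) * sp.I                  # = 2*exp(I*pi/3)
print(sp.simplify(simplified.subs(z, z0)))  # 3/8 - sqrt(3)*I/8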
|complex-analysis|
1
Prove the following integral inequality: $\int_{0}^{1}f(g(x))dx\le\int_{0}^{1}f(x)dx+\int_{0}^{1}g(x)dx$
Suppose $f(x)$ and $g(x)$ are continuous functions from $[0,1]\rightarrow [0,1]$, and $f$ is monotone increasing; then how do we prove the following inequality: $$\int_{0}^{1}f(g(x))dx\le\int_{0}^{1}f(x)dx+\int_{0}^{1}g(x)dx$$
For equality, we get: $$\int_{0}^{u} f(x)dx = 0 \Rightarrow f(x) = 0, \forall x \in (0, u)$$ $$\int_{u}^{1} f(u)dx = \int_{u}^{1} f(x)dx \Rightarrow f(x) = f(u), \forall x \in [u, 1)$$ for $f$ to be continuous, we need $u \in \{0, 1\}$ , in other words, $f$ is constant. We quickly get that $g$ is constant zero. As for Vincent's question, there is no equality in the last inequality because $x^2$ isn't constant. Proving a better bound would be more interesting. As pointed out by Clement, the second inequality is wrong and the corollary as a whole seems quite weak.
|real-analysis|integration|integral-inequality|
0
Integral of rational trigonometric function
I would need some help trying to evaluate the following integral: $$ I = \frac{1}{\pi}\int_0^{\pi} \frac{1-\cos \left(2n u\right)}{2 \cos \left(u\right)-x}\mathrm{d}u, $$ where $|x|>2$ and $n=1,2,3,4,\dots$ . However I have no idea how to proceed.
Define the function $\mathcal{I}{\left(n,a\right)}$ by the integral $$\mathcal{I}{\left(n,a\right)}:=\int_{0}^{\pi}\mathrm{d}\varphi\,\frac{1-\cos{\left(2n\varphi\right)}}{1-a\cos{\left(\varphi\right)}};~~~\small{n\in\mathbb{N}\land a\in(-1,1)}.$$ Note: the original integral from the OP in terms of the function above would then be $$I=-\frac{1}{\pi\,x}\mathcal{I}{\left(n,\frac{2}{x}\right)};~~~\small{|x|>2}.$$ Consider the following Fourier cosine series: $$\sum_{k=0}^{\infty}p^{k}\cos{\left(k\varphi\right)}=\frac{1-p\cos{\left(\varphi\right)}}{1-2p\cos{\left(\varphi\right)}+p^{2}};~~~\small{|p|<1}.$$ $$\implies-1+2\sum_{k=0}^{\infty}p^{k}\cos{\left(k\varphi\right)}=\frac{1-p^{2}}{1-2p\cos{\left(\varphi\right)}+p^{2}}.$$ With the help of this Fourier expansion, we can then derive the following integration formula with relative ease: for any $m\in\mathbb{N}\land p\in(-1,1)$ , $$\begin{align} \int_{0}^{\pi}\mathrm{d}\varphi\,\frac{\left(1-p^{2}\right)\cos{\left(m\varphi\right)}}{1-2p\cos{\left
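A quick numerical check of the Fourier cosine series used above (my own script, for one arbitrary choice of $p$ and $\varphi$ with $|p|<1$):

import numpy as np

p, phi = 0.37, 1.1
k = np.arange(0, 400)
lhs = np.sum(p**k * np.cos(k * phi))
rhs = (1 - p * np.cos(phi)) / (1 - 2 * p * np.cos(phi) + p**2)
print(lhs, rhs)   # agree closely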
|trigonometric-integrals|
1
Solving a convex problem with quasiconvexity with CVXPY?
I have a question regarding quasiconvexity and its usage in CVXPY. I have the following optimization problem. \begin{equation*} \begin{aligned} \min_{x} \quad & \sqrt x\\ \textrm{subject to:} \quad & 1 \leq x \leq 2\\ \end{aligned} \end{equation*} The solution is trivial $x^* = 1, \ p^* = 1$ but I want to let CVXPY solve this problem for me. Since $\sqrt x$ is quasiconvex the problem is DQCP conform. And CVXPY should be able to solve it. But using the following code: import cvxpy as cp x = cp.Variable() objective = cp.Minimize(cp.sqrt(x)) constraints = [x**3 = 1] problem = cp.Problem(objective, constraints) problem.solve(qcp=True) print("Optimal value of x:", x.value) print("Optimal objective value:", objective.value) I get: Optimal value of $x: 1.4250764776108187$ Optimal objective value: $1.193765671147742$ Which is pretty far away from the real optimal value. How come that the solution is so bad? Is that problem ill-conditioned? Can someone enlighten me as to why the solution is so
Keep in mind that when solving nonlinear problems, the type of solver you use is incredibly important, as solvers often deploy different solving techniques. Often, free, open-source solvers return more sub-optimal results than commercial solvers, due to the amount of resources commercial solvers have behind them. For example here's the input I used: import cvxpy as cp x = cp.Variable() objective = cp.Minimize(cp.sqrt(x)) constraints = [1 and got the following answer from the Gurobi solver: Set parameter Username Academic license - for non-commercial use only - expires 2024-11-16 Optimal value of x: 1.0 Optimal objective value: 1.0 Using the ECOS solver, it returns: Optimal value of x: 1.425076477610819 Optimal objective value: 1.193765671147742 Using the SCS solver, it returns: Optimal value of x: 1.2630186133331835 Optimal objective value: 1.1238410089212725 Using the CPLEX solver, it returns: Optimal value of x: 1.0 Optimal objective value: 1.0 Observe that CPLEX and Gurobi were the only solv
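For reference, a solver is selected in CVXPY by passing it to solve; a minimal sketch (my own, assuming the chosen solver is installed and, where applicable, licensed; the constraint is written directly as $1\le x\le 2$ from the problem statement):

import cvxpy as cp

x = cp.Variable()
objective = cp.Minimize(cp.sqrt(x))
constraints = [x >= 1, x <= 2]
problem = cp.Problem(objective, constraints)

# e.g. solver=cp.GUROBI, cp.SCS, ... depending on what is installed
problem.solve(qcp=True, solver=cp.ECOS)
print(x.value, problem.value)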
|convex-optimization|nonlinear-optimization|numerical-optimization|cvx|cvxpy|
0