| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
How to Prove the Binomial Series by Differentiation?
|
(a) Let $g(x) = \sum_{n=0}^\infty \binom{k}{n} x^n$ . Differentiate this series to show that $$g'(x) = \frac{kg(x)}{1+x}, \quad -1 < x < 1.$$ (b) Let $h(x) = (1+x)^{-k} g(x)$ and show that $h'(x) = 0$ . (c) Deduce that $g(x) = (1+x)^k$ . For part (a), I'm getting $g'(x) = g(x) \frac{n}{x}$ . How did they get $g'(x) = \frac{kg(x)}{x+1}$ ?
|
\begin{align*} g(x) &= \sum_{n=0}^\infty \binom{k}{n} x^n \\ \implies g'(x) &= \sum_{n=1}^\infty n \binom{k}{n} x^{n-1}. \end{align*} Note that, as you've done, you can't pull $n$ out the sum here. $n$ is a dummy variable; $g'(x)$ cannot depend on $n$ . Instead, note \begin{align*} (1+x) g'(x) &= \sum_{n=1}^\infty \left(n \binom{k}{n} x^{n-1} + n \binom{k}{n} x^n\right) \\ &= \sum_{n=1}^\infty \left(\frac{n k!}{(k-n)! n!} x^{n-1} + \frac{nk!}{(k-n)! n!} x^n\right) \\ &= k \sum_{n=1}^\infty \left(\binom{k-1}{n-1} x^{n-1} + \binom{k-1}{n-1} x^n\right) \\ &= k \sum_{n=0}^\infty \left( \binom{k-1}{n} x^n + \binom{k-1}{n-1} x^n\right) \end{align*} by reindexing the first term. But by Pascal's rule , we obtain $$(1+x)g'(x) = k \sum_{n=0}^\infty \binom kn x^n = kg(x).$$
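For anyone who wants a quick numeric sanity check of the identity $(1+x)g'(x)=kg(x)$ and of the conclusion $g(x)=(1+x)^k$, here is a small sketch; the sample values `k = 2.5`, `x = 0.3` and the truncation level `N` are arbitrary choices, not part of the original exercise.

```python
# Truncate the binomial series and compare (1+x) g'(x) with k g(x).
def binom(k, n):
    """Generalized binomial coefficient C(k, n) for real k."""
    c = 1.0
    for i in range(n):
        c *= (k - i) / (i + 1)
    return c

k, x, N = 2.5, 0.3, 80   # assumed test values; |x| < 1 so the series converges

g  = sum(binom(k, n) * x**n for n in range(N))
gp = sum(n * binom(k, n) * x**(n - 1) for n in range(1, N))

print((1 + x) * gp, k * g)   # both approximately k * g(x)
print(g, (1 + x)**k)         # g(x) approximately (1+x)^k, as in part (c)
```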
|
|calculus|sequences-and-series|binomial-theorem|
| 1
|
Time change for Brownian Motion
|
It is well known that any Itô local martingale of the form $dX_t=Y_sdB_s$ with the initial condition that $X_0=0$ and $Y_s$ being adapted, continuous, and locally square integrable can be reparametrized as a Brownian motion by defining: $$\tau_t:= \inf\{s \geq 0: \int_0^s Y_u^2du = t\}$$ And we have that the process $X_{\tau_t}$ is a Brownian motion (with a suitable change of filtration). Is it possible to do the same where $Y_s$ exhibits finite time blowup? Specifically if $\lim_{t \to \infty}\tau_t < \infty$ with positive probability. Does the same method work? The context of this problem is that my lecturer vaguely remarked about the existence of such theorems and I am trying to find a reference for this. My guess is that this is provable with the Levy characterization of Brownian motion, but I have yet to make too much progress on that. So I am asking to see if anyone knows of such a theorem and if so could they please provide a source (I am not asking help to prove this for myself and I jus
|
This is a tricky issue because in order to even define stochastic integration at time $t=T$ $$M_{T}=\int_{0}^{T} Y_{s}dB_{s},$$ we need to have existence of the expected quadratic variation, i.e. $E[\langle M\rangle_{T}]<\infty$. For example, in Shreve-Karatzas problem 4.11, section 3.4, the integral $\int^{1}_{0}X_{s}dB_{s}$ is undefined on the event $$E:=\left\{ \int^{1}_{0}X_{s}^{2}ds=+\infty \right\}.$$ So instead, in order to apply the Dubins-Schwarz theorem for local martingales ( Time-Changed Brownian Motion ), Theorem 1: Any continuous local martingale $X$ with $X_0=0$ is a continuous time-change of standard Brownian motion (possibly under an enlargement of the probability space), we need to truncate in some way, e.g. $X_{s}:=Y_{s}\wedge M$ .
|
|reference-request|stochastic-processes|
| 1
|
Local trivialization of $\mathcal O(-1)$, proposition 2.2.6, complex geometry by Huybrechts
|
I was reading Complex Geometry by Daniel Huybrechts. On page 68, section 2.2 we have a proposition of holomorphic line bundle over $\mathbb P^n$ , Proposition 2.2.6: The projection $\pi:\mathcal O(-1)\rightarrow\mathbb P^n$ is given by projecting to the first factor. Let $\{U_i\}_{i=0}^n$ be an open covering of $\mathbb P^n$ . A canonical trivialization of $\mathcal O(-1)$ over $U_i$ is given by, $$\psi_i:\pi^{-1}(U_i)\rightarrow U_i\times\mathbb C,\quad(\ell,z)\mapsto(\ell,z_i)$$ The transition maps $\psi_{ij}(\ell):\mathbb C\rightarrow\mathbb C$ are given by $w\mapsto \frac{z_i}{z_j}\cdot w$ , where $\ell=(z_0:\cdots,z_n)$ . Suppose we have $(\ell,z^*)$ where $\ell$ belongs to $U_i$ and $z^*\in\mathbb C\setminus\{0\}$ . In this scenario, I assumed that if we map $(\ell,z^*)$ using $\psi_j^{-1}$ , it would look like this: $(\ell,z_0,\cdots,z_{j-1},z^*,z_{j+1},\cdots,z_n)$ , inserting $z^*$ at position $j$ . However, if this option doesn't hold, what alternatives should we consider? Be
|
Your $\psi_j^{-1}$ is not quite right. We have $$ \psi_j^{-1}: U_j\times {\mathbb C}\to \pi^{-1}(U_j);\ (\ell, w)\to (\ell, z), $$ where $\ell = (z_0:\dots:z_j:\dots:z_n)$ with $z_j\neq 0$ by $\ell\in U_j$ , and $$ z=\frac{w}{z_j}(z_0, \cdots, z_j, \cdots, z_n) = \Big(\frac{z_0}{z_j}w,\cdots, w,\cdots, \frac{z_n}{z_j}w\Big). $$ So $z\in {\mathbb C}^{n+1}$ is the unique vector on the line $\ell$ whose $j$ th component is $w$ . We do this by the multiplication of a suitable scale. Also note that we changed the homogeneous coordinates using $:$ in $\ell$ to ordinary coordinates using $,$ in $z$ . Then we see that $$ \psi_{ij}=\psi_i\psi_j^{-1}: (\ell, w)\to (\ell, z)\to \Big(\ell, \frac{z_i}{z_j}w\Big), $$ since that is the $i$ th component of $z$ . That is why Huybrechts writes $$ \psi_{ij}(\ell)(w)=\frac{z_i}{z_j}w, $$ where $\ell=(z_0:\dots:z_n)\in U_i\cap U_j$ . Let me add a bit about sections. I don't think the $1$ you chose would work. A local section $$s_i: U_i\to \pi^{-1}(U)\overs
|
|algebraic-geometry|reference-request|complex-geometry|online-resources|line-bundles|
| 0
|
On the logarithm of a matrix
|
While teaching a course on ODE, I needed to introduce the notion of matrix logarithms. I intend to define it as follows. Definition (Matrix Logarithm) Let $A\in GL_n(\mathbb{C})$ . We define A) Unipotent case: When $A$ is unipotent, i.e. $A=I+N$ , where $N\in M_n(\mathbb{C})$ is nilpotent, we define $\ln A\in M_n(\mathbb{C})$ as $$ \ln A=\ln(I+N):=\sum_{k=1}^{n}\frac{(-1)^{k+1}}{k}N^k. $$ B) Diagonalizable case: When $A$ is diagonalizable, i.e. $A= PDP^{-1}$ , where $P\in GL_n(\mathbb{C})$ and $D:=\operatorname*{diag}(\lambda_1,\ldots,\lambda_n)$ with $\lambda_1,\ldots,\lambda_n\in \mathbb{C}\setminus\{0\}$ , we define $\ln A\in M_n(\mathbb{C})$ as $$ \ln A:= P\operatorname*{diag}(\ln\lambda_1,\ldots,\ln\lambda_n)P^{-1}, $$ where $\ln \lambda_i = \ln |\lambda_i|+i\arg \lambda_i$ , for all $i=1,\ldots,n$ . C) Invertible case: When $A\in GL_n(\mathbb{C})$ , we define $$ \ln A:=\ln D + \ln (I+D^{-1}N), $$ where $D, N\in M_n(\mathbb{C})$ are diagonalizable and nilpotent respectively, $A=D+
|
The question reduces to prove that $\ln(D)(D^{-1}N)^k = (D^{-1}N)^k\ln(D)$ for every $k$ , so proving that $\ln(D)$ , $D^{-1}$ and $N$ all commute is sufficient. If $D=P\Sigma P^{-1}$ where $\Sigma$ is diagonal, then $$\ln(D)D^{-1}=P\ln(\Sigma)P^{-1}P\Sigma^{-1}P^{-1} \\= P\ln(\Sigma)\Sigma^{-1}P^{-1} \\=P\Sigma^{-1}\ln(\Sigma)P^{-1} \\=P\Sigma^{-1}P^{-1}P\ln(\Sigma)P^{-1} =D^{-1}\ln(D).$$ Moreover, $DN=ND$ by hypothesis, so $$\Sigma P^{-1}NP = P^{-1}NP \Sigma$$ meaning that $P^{-1}NP$ is block diagonal with blocks corresponding to the eigenspaces of $\Sigma$ , that are the same as those of $\ln(\Sigma)$ , so $$\ln(\Sigma) P^{-1}NP = P^{-1}NP \ln(\Sigma)\implies \ln(D)N = N\ln(D).$$
|
|linear-algebra|matrices|logarithms|matrix-analysis|
| 0
|
Let $S_1, S_2, \dots , S_m$ be distinct subsets of $\{1, 2, \dots , n\}$ such that $|S_i \cap S_j | = 1$ for all $i \ne j$. Prove that $m \le n$.
|
Let $S_1, S_2, \dots , S_m$ be distinct subsets of $\{1, 2, \dots , n\}$ such that $|S_i \cap S_j | = 1$ for all $i \ne j$ . Prove that $m \le n$ . I got this problem from the double counting handout ( here ). Progress: Well define $X_i$ as the set of sets which contain $i$ as an element. So note that $X_1+\dots +X_n=|S_1|+|S_2|+\dots+|S_m|$ . Also if $\{S_i\}=\{a_1,\dots,a_k\}$ then we have $(X_{a_1}-1)+(X_{a_2}-1)+\dots+(X_{a_k}-1)=m-1$ as every element $a_j$ is there in $X_{a_j}$ sets including $S_i$ . So we get that $X_{a_1}+X_{a_2}+\dots+X_{a_k}-|S_i|=m-1$ . Well, let's write it as $X_{a_1}+X_{a_2}+\dots+X_{a_k}=m-1+ |S_i|$ . And now sum, so we get RHS as $m(m-1)+|S_1|+\dots+|S_m|= m(m-1)+X_1+\dots +X_n$ . And then $a_i$ appears in $X_{a_i}$ sets. So we have LHS as $(X_1)^2+\dots (X_n)^2$ . So we have $(X_1)^2+\dots (X_n)^2= m(m-1)+X_1+\dots +X_n\implies (X_1)(X_1-1)+\dots (X_n)(X_n-1)= m(m-1)$ Any solutions?
|
I can't think of any combinatorial proof of this; I doubt that there is a simple one which does not somehow interpret the following argument. Assume that $|S_i|>1$ for all $i$ . If not (say $|S_1|=1$ ), the family forms a sunflower, $(S_i\setminus S_1)$ partitions $[n]$ , and the proposition follows trivially. Assuming now that $|S_i|>1$ , form the incidence matrix of the family and call it $M_{n\times m}$ . Clearly for $u\in\{0,1\}^n$ interpreted as a subset $U\subseteq [n]$ , $(uM)_i$ counts $|S_i\cap U|$ . Then it follows that for $V=M^TM$ , $$V_{ij} = |S_i\cap S_j|.$$ It is easy to verify that the columns of $V$ are all linearly independent, and so the rank of $V$ is $m$ . But since $\mathsf{rank}(V)\leq\mathsf{rank}(M)\leq n$ , the proposition follows.
|
|combinatorics|contest-math|combinatorial-proofs|extremal-combinatorics|combinatorial-designs|
| 0
|
Why is $P_{\rho}$ a probability measure on the Borel subsets of $H\ $?
|
I am going through a paper on Operator Probability Theory by Stan Gudder. The author introduced the notion of probability distribution of self-adjoint operators on a Hilbert space where the self-adjoint operators are thought of as complex valued random variables relative to a fixed state. Let $A \in \mathcal S (H)$ (self-adjoint operator) and $\rho$ be a state on $H.$ Let $P^A$ be the spectral measure corresponding to the self-adjoint operator $A.$ Then for a Borel subset $\Delta \subseteq \sigma (A)$ (spectrum of $A$ ) we define $$P_{\rho} (A \in \Delta) = \text{tr} \left (\rho P^A (\Delta) \right ).$$ So the expectation of $A$ is given as $:$ $$E_{\rho} (A) = \int_{\sigma (A)} \lambda\ \text{tr} \left (\rho P^A (d \lambda) \right ) = \text{tr} (\rho A).$$ But I don't understand why $P_{\rho}$ is a valid probability measure. Also I don't follow why $E_{\rho} (A)$ evaluates to $\text {tr} (\rho A).$ Any suggestion in this regard would be warmly appreciated. Thanks for your time.
|
Over the last few days I have been thinking about your question (so thank you for posting it, it has been fun to think about), and I have multiple times begun typing up an answer, only to realise that I was missing some detail or simply had the wrong idea. I think that I finally am satisfied with my answer, so hopefully you will be as well. In Gert K. Pedersen's book "Analysis Now", we find in Proposition 4.6.11 that a functional $\varphi$ on $\mathcal B(H)$ is given by $\varphi(T)=\mathrm{tr}(ST)$ for some trace class operator $S$ if and only if $\varphi$ is $\sigma$ -weakly continuous. Proposition 4.6.14 then says that this is the case if and only if $\varphi$ is weakly continuous on the bounded subsets of $\mathcal B(H)$ . Now let, as in your case, $\rho$ be a positive trace class operator with unit trace, and define a functional $\varphi$ by $$ \varphi(T)=\mathrm{tr}(\rho T),\quad (T\in\mathcal B(H)). $$ By the above we get that $\varphi$ is weakly continuous on bounded subsets of $\m
|
|probability-theory|probability-distributions|operator-theory|expected-value|operator-algebras|
| 0
|
If $A$ is symmetric positive definite then so is $2D-A$
|
I wish to show the following: For any positive definite symmetric tridiagonal matrix $A$ ; $$A=\left[\begin{array}{ccccc}a_1 & c_1 & & & \\ c_1 & a_2 & c_2 & & \mathbf{0} \\ & \ddots & \ddots & \ddots & \\ \mathbf{0} & & c_{n-2} & a_{n-1} & c_{n-1} \\ & & & c_{n-1} & a_n\end{array}\right]$$ we have that $2D-A$ is positive definite as well. Here $D$ is the diagonal part of $A$ . This is coming from the context of the convergence of the Jacobi method. In particular, this is from an exam. The question gives [Hint: At the last step, you may find it useful to consider two vectors $x=\left(x_1, x_2, \ldots, x_n\right)$ and $y=\left((-1) x_1,(-1)^2 x_2, \ldots,(-1)^n x_n\right)$ .] I really do not see how to use the hint. I have tried the following: I have calculated the other product of all possible combinations $x,x$ ; $x,y$ , ... but nothing seemed to give the above matrix. I have calculated $x^T A x$ and $y^T Ay$ to only get $$ x^T A x +y^T Ay=\sum_{k=1}^n a_k x^2_k $$ which doesn't seem to he
|
As you said, $D=\operatorname{diag}(A)$ . Then $2D-A$ is the matrix obtained by flipping the signs of all off-diagonal entries while the diagonal keeps the same entries. Let $x$ be an arbitrary nonzero vector and $y$ be the one in the hint. Then by positive definiteness of $A$ we have $$y^TAy=\sum^n_{k=1}a_kx_k^2-\sum^{n-1}_{k=1}2c_kx_kx_{k+1}>0.$$ Similarly, you can see $$x^T(2D-A)x=\sum^n_{k=1}a_kx_k^2-\sum^{n-1}_{k=1}2c_kx_kx_{k+1}=y^TAy>0.$$ So $2D-A$ is positive definite.
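A quick numeric illustration of the claim (the size, the random entries, and the shift used below are arbitrary test data, not part of the exam problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Build a random symmetric tridiagonal matrix and shift it to be positive definite.
a = rng.normal(size=n)
c = rng.normal(size=n - 1)
A = np.diag(a) + np.diag(c, 1) + np.diag(c, -1)
A += (abs(np.linalg.eigvalsh(A).min()) + 0.1) * np.eye(n)   # force A > 0

D = np.diag(np.diag(A))
print(np.linalg.eigvalsh(A).min() > 0)          # True: A is positive definite
print(np.linalg.eigvalsh(2 * D - A).min() > 0)  # True: so is 2D - A
```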
|
|linear-algebra|numerical-methods|
| 1
|
System of differential equations using matrix methods
|
I'm trying to solve these differential equations but I'm having great difficulty. $$3x'+y'+5x-y=2e^{-t} + 4e^{-3t}$$ $$x'+4y'-2x+7y=-3e^{-t}+5e^{-3t}$$ $$x=y=0,\quad t=0$$ (I would be appreciative of any help grasping an intuitive understanding as to why as well as how.) My initial idea was to eliminate $y'$ from one equation, and $x'$ from the other. This got it into a form I was familiar with solving. However, I had little success getting the answer from there.
|
Using the method as summarised in this note (by @Chappers ), you can start by writing the equations in matrix form $$ \begin{pmatrix} 3 & 1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} -5 & 1 \\ 2 & -7 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 2 \\ -3 \end{pmatrix}e^{-t} + \begin{pmatrix} 4 \\ 5 \end{pmatrix}e^{-3t} $$ Multiply by $\begin{pmatrix}3 & 1 \\ 1 & 4 \end{pmatrix}^{-1} = \frac{1}{11}\begin{pmatrix} 4 & -1 \\ -1 & 3 \end{pmatrix}$ to get $$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{-t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}e^{-3t} $$ Solving the unforced equation we find that $\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$ has eigenvalues $\lambda_1 = -3$ and $\lambda_2 = -1$ and corresponding eigenvectors $v_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 1 \\ 1 \end{p
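As a sanity check of the reduction above (not part of the original answer), the matrix products and the eigen-decomposition can be verified numerically; numpy is used here purely for illustration:

```python
import numpy as np

M  = np.array([[3.0, 1.0], [1.0, 4.0]])
K  = np.array([[-5.0, 1.0], [2.0, -7.0]])
f1 = np.array([2.0, -3.0])   # coefficient of e^{-t}
f3 = np.array([4.0, 5.0])    # coefficient of e^{-3t}

Minv = np.linalg.inv(M)
print(Minv @ K)    # [[-2, 1], [1, -2]]
print(Minv @ f1)   # [ 1, -1]
print(Minv @ f3)   # [ 1,  1]

# Eigen-decomposition of the reduced matrix, as used in the answer.
w, V = np.linalg.eig(Minv @ K)
print(w)           # eigenvalues -1 and -3 (in some order)
```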
|
|partial-differential-equations|
| 1
|
How does $(k+1)!(k+2)(k+1)$ simplify to $(k+2)!(k+1)$
|
If $$n!=n(n-1)!$$ then $$(k+1)!= (k+1)k(k-1)!$$ and $$(k+2)!$$ would be $$(k+2)(k+1)k(k-1)!$$ or $$(k+2)(k+1)!$$ but what does the extra (k+1) do to make it (k+2)!(k+1)
|
$$5! = 5\times4\times3\times2\times1$$ But then: $$4! = 4\times3\times2\times1\implies 5 !=5\times 4!$$ So by the same logic $$(k+2)!=(k+2)(k+1)!$$
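If it helps, the identity can also be checked numerically for a few values of $k$ (purely illustrative):

```python
from math import factorial

# Verify (k+1)! * (k+2) * (k+1) == (k+2)! * (k+1) for small k.
for k in range(1, 6):
    lhs = factorial(k + 1) * (k + 2) * (k + 1)
    rhs = factorial(k + 2) * (k + 1)
    print(k, lhs == rhs)   # True for every k
```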
|
|algebra-precalculus|induction|factorial|
| 1
|
Behavior of function $\sum_{j = n}^\infty \frac{\sin^2((2j-1) \pi x)}{(2j-1)^2}$
|
For a positive integer $n$ , define the function $$ F_n(x) = n^2 \sum_{j = n}^\infty \frac{\sin^2((2j-1) \pi x)}{(2j-1)^2}. $$ I am trying to understand the behavior of $F_n(x)$ in the following sense. For a positive exponent $\alpha$ , I would like to compute the limit $$ L_\alpha = \lim_{n \to \infty} F_n(1/n^\alpha). $$ Based on plotting in Mathematica, it appears that $L_\alpha$ has the following behavior: it seems to be $0$ for $\alpha > 2$ , infinite for $\alpha < 2$ , and some finite number for $\alpha = 2$ . I tried to simplify the summation but I am having difficulty passing to the limit.
|
This will be a partial answer, since I can't establish what happens when $\alpha < 1$ , and because I haven't been terribly rigorous in passing from the sum to the integral below. Perhaps someone else can fill in the gaps. Assume $\alpha \geq 1$ . Then: \begin{align} L_\alpha &= \lim_{n\rightarrow\infty} n^2\sum_{j=n}^\infty {\left[\frac{\sin\bigl((2j-1)\pi/n^\alpha\bigr)}{2j-1}\right]}^2\\ &=\lim_{n\rightarrow\infty} \frac{n}{2}\sum_{j=n}^\infty \frac{2}{n}{\left[\frac{\sin\left(\frac{\pi}{n^{\alpha-1}}\,\frac{2j-1}{n}\right)}{\frac{2j-1}{n}}\right]}^2 \qquad\text{Riemann sum with $x = (2j-1)/n$}\\ &\rightarrow\lim_{n\rightarrow\infty} \frac{n}{2}\int_2^\infty {\left[\frac{\sin\left(\frac{\pi}{n^{\alpha-1}}\,x\right)}{x}\right]}^2 dx\qquad\;\;\,\text{Change of variables $x = y n^{\alpha-1}$}\\ &=\lim_{n\rightarrow\infty} \frac{n^{2-\alpha}}{2}\int_{2/n^{\alpha-1}}^\infty {\left[\frac{\sin\left(\pi\,y\right)}{y}\right]}^2 dy \qquad\;\;\;(1)\\ \end{align} There are now two cases to consider:
|
|limits|inequality|asymptotics|trigonometric-series|
| 0
|
Determine whether the complex power series converges at a point
|
I need to determine if a series $$\sum\limits_{n=1}^{\infty} \frac{(z-1+i)^{2n-1}}{5^n(n+1)\ln^3(n+1)}$$ converges at the point $z_1 = -1$ . After substituting the point, I got: $$ \sum\limits_{n=1}^{\infty} \frac{(i - 2)^{2n-1}}{5^n(n+1)\ln^3(n+1)} $$ And I do not know what to do next. I have heard that it is necessary to split this series into two, one with real coefficients, the other with complex ones. But here I don't see any way to break up this series like that. Can I investigate for convergence here using the root test ( $\lim\limits_{n \to \infty} \sqrt[n]{|a_n|}$ )? $$\lim\limits_{n \to \infty} \sqrt[n]{|a_n|} = \lim\limits_{n \to \infty} \sqrt[n]{\left|\frac{(i-2)^{2n-1}}{5^n(n+1)\ln^3(n+1)}\right|} = \lim\limits_{n \to \infty} \sqrt[n]{\left| \frac{(i-2)^{2n}}{(i-2)5^n} \right|} = \lim\limits_{n \to \infty} \frac{|i-2|^2}{5\sqrt{5}} = \lim\limits_{n \to \infty} \frac{5}{5\sqrt{5}} = \frac{1}{\sqrt{5}}$$ Therefore it converges.
|
$\mid-1-(1-i)\mid=\sqrt5.$ Thus we need a radius of convergence $\ge\sqrt 5.$ By Cauchy-Hadamard, $$r=\frac1{\limsup\mid a_n\mid ^{1/n}}=\limsup \mid5^n(n+1)\ln^3(n+1)\mid ^{\frac 1{2n-1}}=\sqrt5$$ $\therefore $ we're on the boundary. Something more needs to be done, and you have done that, with the root test. The series converges at $z=-1.$
|
|complex-analysis|complex-numbers|power-series|
| 0
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
I think what's confusing you is that the theorem is indeed wrong, and the proof is missing a step! The theorem that you're asked to prove is "any graph can be colored in at most 6 colors." But that theorem is false; not all graphs are 6-colorable. What they meant to write is that any planar graph can be colored in at most 6 colors. So you're not expected to use a property of planar graphs to prove a general property of graphs, you're expected to use one property of planar graphs to prove another property of planar graphs. Likewise, the proof relies on the unstated fact that the induced graph G′ is planar. I think this fact is clear enough that it's OK to state it without proof (all edges of G′ are edges of G, so if you draw G with no edges crossing, then the corresponding drawing of G′ also has no edges crossing), but I think the proof does need to actually state this fact. Once those are fixed, the theorem and proof are both good.
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
Appropriate way to allocate study time for a multi-subject, weighted scoring exam.
|
I'm preparing for a government exam that consists of three tests, each contributing to a final score. The structure and scoring of the tests are as follows: Test 1: Comprises 20 questions across 6 subjects (equal weight). Score is calculated as (Correct Answers * 100) / 20. Test 2: Contains 50 questions across 5 subjects (varying weights). Score is calculated as (Correct Answers per subject * Weight per subject). The score is obtained by summing all the results together. Test 3: Scored out of 100 points. The exam allows me to compete for three different positions, each with different subject weights for Test 2: Position 1 (highest pay, highest priority): Weights are 3, 3, 2, 1, 1. Position 2 (second priority): All weights are 2. Position 3 (third priority): Weights are 2, 2, 1, 3, 2. The final scores for each position are calculated as follows: Position 1: Test 1 (20%), Test 2 (50%), Test 3 (20%). An additional 10 points are available, but I don't qualify for these. Positions 2 and 3:
|
So let's list out the weight of each of the $12$ subjects for the three positions (given as percentages). $$ \begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Subjects \rightarrow & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12\\ \hline Position 1 & 3.3 & 3.3 & 3.3 & 3.3 & 3.3 & 3.3 & 15 & 15 & 10 & 5 & 5 & 20 \\ \hline Position 2 & 4.16 & 4.16 & 4.16 & 4.16 & 4.16 & 4.16 & 11 & 11 & 11 & 11 & 11 & 20 \\ \hline Position 3 & 4.16 & 4.16 & 4.16 & 4.16 & 4.16 & 4.16 & 11 & 11 & 5.5 & 16.5 & 11 & 20 \\ \hline Average & 3.87 & 3.87 & 3.87 & 3.87 & 3.87 & 3.87 & 12.3 & 12.3 & 8.83 & 10.83 & 9 & 20 \\ \hline \end{array} $$ Let's assume you have equal amount of time available to study for each test (i.e. they are all on the same day). Let's also assume that your current knowledge and studying efficiency is uniform across all $12$ subjects, and also that each subject has the same difficulty. If so, then to study for a particular position you need to allocate your study time accordingly based
|
|probability|statistics|
| 1
|
Questions on sequences and modular forms
|
Let me apologize ahead of time since I am not at all well versed in the theory of modular forms. I have seen some nice examples where modular forms are used to study certain interesting numbers. For example, one might like to study the sequence $(h_n)_{n \in \mathbb{Z}}$ where $h_n$ is the class number of $\mathbb{Q}(\sqrt{n})$ . Now, if $q = e^{2\pi i z}$ , we could consider $F(z) = \sum_{n \in \mathbb{Z}} h_n q^n$ (I'm not even sure whether or not this converges), which is most likely not the Fourier expansion of some modular form. On the other hand, for $r \in \mathbb{Z}$ with $r \geq 2$ , define \begin{align*} H(r,N) = \begin{cases} 0 & \text{ if } N \not\equiv 0,1 \mod4 \\ \zeta(1 -2r) & \text{ if } N = 0 \\ L(1-r, \chi_D)\sum_{d \mid n} \mu(d)\chi_D(d)d^{r-1}\sigma_{2r-1}(n/d) & \text{ if } (-1)^rN = Dn^2 \end{cases} \end{align*} where $D$ is the fundamental discriminant of a quadratic field, and $\sigma_k(m) = \sum_{d \mid m} d^k$ . Then by the class number formula, certain valu
|
It's a very natural question! If you know about Eisenstein series, you might spot that the third case of $H(r,N)$ is very similar to the Eisenstein series of weight $2r$ and character $\chi_D$ . If you don't, then I would certainly recommend looking them up; they are very classical examples of modular forms. In this case, the modular form you write down is related to an Eisenstein series by the Shimura correspondence , which was one of the original motivations for studying half-integer weight modular forms. See for example Theorem 1 here for a similar statement about cusp forms. I'm sure no one wrote down $F_r (z)$ without knowing what they were looking for! This is also a good question. In general, you have to be lucky - or there has to be a good reason for it. In some cases, you want the coefficients $a_n$ to be multiplicative in the sense that $a_{mn} = a_m a_n$ for $(m,n)=1$ . An example of this is the Ramanujan $\Delta$ -function, whose coefficients are related to the partition function.
|
|number-theory|reference-request|quadratic-forms|modular-forms|
| 1
|
Very difficult Bayes, smokers problem
|
50% of people don't smoke, 20% are light smokers & 30% are heavy smokers. Heavy smokers are twice as likely to die prematurely as light smokers, light smokers are twice as likely to die prematurely as nonsmokers. What's the probability of being a heavy smoker given person died prematurely? I'm sure this would be a Bayes' theorem problem, but I can't figure out how to find the probability of deaths.
|
This is a more general solution than my shortcut one above. Notation: "Heavy Smoker" = $HS$ , "Light Smoker" = $LS$ , "Non Smoker" = $NS$ , "Smoker" = $S$ "Dead" = $D$ . The question asks to find $P(HS|D)$ . Applying Bayes:: $$P(HS|D)=\frac{P(D|HS) * P(HS)}{P(D)}\tag{1}$$ From given information: $$P(D|LS)=2P(D|NS) $$ $$P(D|HS) = 2P(D|LS) = 4P(D|NS)$$ $$P(HS) = 0.3$$ $$P(LS) =0.2$$ $$P(NS) = 0.5$$ $$P(S) = 0.5$$ Continuing, we need $P(D)$ . Since smoker status partitions the space: $$P(D) = P(D|HS)P(HS) + P(D|LS)P(LS) + P(D|NS)P(NS)$$ Substitute in: $$P(D) = 4P(D|NS)0.3 + 2P(D|NS)0.2 + P(D|NS)0.5$$ $$P(D) = 2.1P(D|NS)\tag{2}$$ Plug (2) and other known information into (1): $$P(HS|D) = \frac{4P(D|NS) * 0.3}{2.1P(D|NS)}$$ $$P(HS|D) = \frac{1.2P(D|NS)}{2.1P(D|NS)}$$ $$P(HS|D) = \frac{1.2}{2.1}$$ $$P(HS|D) = \frac{12}{21}$$ $$P(HS|D) = \frac{4}{7}$$
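A small numeric check of the same computation (the value chosen below for $P(D\mid NS)$ is an arbitrary placeholder; it cancels, exactly as in the algebra above):

```python
# Numeric check of P(HS | D) = 4/7 for the smoker problem.
p = 0.01                                   # P(D | NS), arbitrary placeholder
rates  = {"NS": p, "LS": 2 * p, "HS": 4 * p}   # premature-death rates
priors = {"NS": 0.5, "LS": 0.2, "HS": 0.3}     # population fractions

P_D = sum(rates[s] * priors[s] for s in priors)
print(rates["HS"] * priors["HS"] / P_D)    # 0.5714... = 4/7
```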
|
|probability|problem-solving|bayesian|bayes-theorem|
| 0
|
Sum of angles in a $1$-by-$3$ rectangle
|
This problem was in a competition for a job. It seems simple BUT the challenge is you cannot use trigonometry. Let there be 3 squares with side length of $\ell$ arranged in such a way that it forms a rectangle with length $3\ell$ and width $\ell$ . So $ABCD$ , $CEHD$ , and $EFGH$ are squares. (This is notation because it's easier to tell you this way.) If $\alpha = m(\angle AED)$ and $\beta = m(\angle AFD)$ , then what is $\alpha + \beta$ ? I tried to use angles of quadrilaterals but so far I didn't find anything useful. Have fun solving it! Non-OP edit : added diagram to confirm construct or clarify
|
$$\alpha = \widehat{AED} = \widehat{DFH}$$ $$\beta + \alpha = \widehat{AFD} + \widehat{DFH} = \widehat{AFH}$$ $$S_{\triangle{AFH}} = \frac12 FG.AH = \frac12 AF.HF \sin{\widehat{AFH}}$$ $$l^2 = \frac12 \sqrt{10}l.\sqrt2l.\sin{\widehat{AFH}}$$ $$\widehat{AFH} = \alpha + \beta = \arcsin{\frac{1}{\sqrt5}} = \arccos{\frac{2}{\sqrt5}} = \arctan{\frac{1}{2}}$$
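For a quick numeric confirmation, one can place the squares on coordinates; the labelling below (with $A=(0,1)$ , $D=(1,1)$ , $E=(2,0)$ , $F=(3,0)$ and side length $1$) is an assumption about the missing diagram, so adjust it if your configuration differs:

```python
from math import atan2, atan, isclose

def angle(P, Q, R):
    """Angle at vertex Q between rays Q->P and Q->R."""
    v1 = (P[0] - Q[0], P[1] - Q[1])
    v2 = (R[0] - Q[0], R[1] - Q[1])
    return abs(atan2(v1[0] * v2[1] - v1[1] * v2[0], v1[0] * v2[0] + v1[1] * v2[1]))

# Assumed coordinates for the 1-by-3 rectangle made of unit squares.
A, D, E, F = (0, 1), (1, 1), (2, 0), (3, 0)

alpha = angle(A, E, D)
beta  = angle(A, F, D)
print(isclose(alpha + beta, atan(1 / 2)))   # True: alpha + beta = arctan(1/2)
```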
|
|geometry|angle|quadrilateral|
| 0
|
Prove that $f(x)$ is irreducible in $\mathbb{Z}$ with $f(b)$ a prime, $f(b-1) \neq 0$ and $\Re(\alpha_i) < b -1/2$
|
I need some help with a lemma I need to prove. First I will provide some background with previous lemmas that I already have been able to prove. Maybe these lemmas are needed to prove the last lemma. Given $f(x)\in \mathbb{Z}[X]$ and $\Omega =\{f(z)|z\in \mathbb{Z}\}$ , if $\Omega$ contains infinitely many prime numbers, then $f$ is irreducible in $\mathbb{Z}[X]$ . Let $f \in \mathbb{Z}[x]$ . If $\alpha$ is a complex root of $f$ , then so is the conjugate $\bar{\alpha}$ . Let $f(x) = a_mx^m + ... + a_1x + a_0$ be a polynomial with coefficients $a_i$ in $\mathbb{Z}$ for $i = 0,...,m$ . Suppose that $\alpha_1, ..., \alpha_m$ are roots of $f$ and $\Re e(\alpha_i) < b$ for all $i$ , where $b$ is a fixed number. Then the polynomial $f(x + b)$ has no missing coefficients and all the coefficients have the same sign. So I have already been able to prove these previous lemmas. I am however struggling to prove the following lemma: Let $f(x) \in \mathbb{Z}[x]$ be a polynomial with roots $\alpha_1, \alpha_2,
|
For the sake of contradiction assume $f(x)=g(x)h(x)$ is a non-trivial factorization. Then from $f(b)$ being a prime we can assume (without loss of generality) that $|g(b)|=1$ . Now let $g(x)=a\prod (x-\beta_i)$ where $\beta_i$ are its roots (so in particular they are also roots of $f$ ). Since $f(b-1)=g(b-1)h(b-1)\neq 0$ we have $g(b-1)\neq 0$ . The condition $\Re e(\beta_i) < b-\frac12$ implies $\beta_i$ is closer to $b-1$ than it is to $b$ , or in other words $|b-1-\beta_i| < |b-\beta_i|$ . Hence $$ |g(b-1)|=|a|\prod |b-1-\beta_i| < |a|\prod |b-\beta_i| = |g(b)| = 1. $$ So $|g(b-1)| < 1$ and yet $|g(b-1)|\neq 0$ , impossible as it is a non-negative integer.
|
|linear-algebra|abstract-algebra|algebra-precalculus|polynomials|irreducible-polynomials|
| 1
|
If you lived in a 4-torus, what would the doughnut hole look like from the inside?
|
I'm not just curious; it refers to general relativity. Specifically, would the hole in the torus' center look to us like a sphere, one you cannot enter because you always slip across the side and go around it instead of through?
|
One way to "visualize" the 4-torus is by $$T^4=S^1\times S^1\times S^1\times S^1$$ This is an ordered set of four angles. We have the notion of Betti numbers and we often say that " $b_n$ counts the number of $n$ -dimensional holes." For $T^4$ , it turns out that Betti numbers are $b_0=1,\ b_1=4,\ b_2=6,\ b_3=4,\ b_4=1$ . The binomial sequence gives the pattern! Contrast this with the Betti numbers for $T^2$ which are $b_0=1,\ b_1=2,\ b_2=1$ . From the representation above of $T^4$ as the product of four circles, you can clearly see the four one-dimensional holes. How do we see the six two-dimensional holes? Select two of the four circles: there are six ways to do this. Then look at the 2-torus $S^1\times S^1$ formed by that pair.
|
|general-topology|
| 0
|
How does $(k+1)!(k+2)(k+1)$ simplify to $(k+2)!(k+1)$
|
If $$n!=n(n-1)!$$ then $$(k+1)!= (k+1)k(k-1)!$$ and $$(k+2)!$$ would be $$(k+2)(k+1)k(k-1)!$$ or $$(k+2)(k+1)!$$ but what does the extra (k+1) do to make it (k+2)!(k+1)
|
$(k+1)!(k+2) = (k+2)!$ , as explained in your question $(k+1)!(k+2)(k+1) = (k+2)!(k+1)$ follows, by multiplying both sides by $(k+1)$ . You can always multiply both sides of a true equation by the same thing to obtain another true equation. You can also do anything else to both sides of an equation.
|
|algebra-precalculus|induction|factorial|
| 0
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
The missing step that I see in this proof is reasoning for why the subgraph, which is formed specifically by removing a node of degree at most 5, is itself a planar graph. Assuming that you have not omitted anything relevant from the official problem and solution, and that the given solution is in fact correct, I can think of only one explanation for this: The stipulation that a planar graph has a node of degree at most 5 is a proven property of planar graphs, not part of the definition of planar graphs, and can therefore be assumed true for any planar graph. In other words, the constraint of being able to draw the graph with no crossing edges is sufficient to prove the existence of a node of degree at most 5. For the purpose of the task, this property is stated as a given, with its proof being outside the task's scope. It seems obvious to me that, with the definition of planar graphs being solely about the lack of crossing edges, any subgraph of a planar graph must also be a planar graph, so that stipulation being a derived rath
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
Bounding the solution of a logarithmic equation
|
Given a small number $\varepsilon >0$ and a constant $1/3\le \alpha\le 1$ , I am looking for the smallest possible number $x^*$ such that for all real $x\ge \max\{x^*,3\}$ , we have $$\frac{x}{(\log x)^{\alpha}} \ge \varepsilon^{-2\alpha}. \tag1$$ Equivalently, the quantity I am looking for is (an upper bound of) the solution of the above equation with the $``\ge" $ symbol replaced by $``="$ , but my understanding is that the equation is not tractable (WolframAlpha is also clueless), and I am not sure of how to find bounds for the solution. For instance, when I look for candidates of the form $x^*\equiv\varepsilon^\beta$ , I end up having to solve $(\log\varepsilon)^{-\alpha}\ge\beta^\alpha\varepsilon^{-2\alpha-\beta}$ for $\beta$ , which does not seem easier. Since for the values of $x$ and $\alpha$ I'm considering, we have $(\log x)^{-\alpha}\ge(\log x)^{-1}$ , I decided instead to let $M\equiv\varepsilon^{-2\alpha} $ and look for the solution of the easier equation $$ \frac{x}{\log x} \ge
|
Let $y := x^{1/\alpha}$ . The condition is written as: for all $y^\alpha \ge \max(x^\ast, 3)$ , $$\frac{y^\alpha}{(\alpha \ln y)^\alpha} \ge \varepsilon^{-2\alpha}$$ or $$\frac{y}{\ln y} \ge \frac{\alpha}{\varepsilon^2}. \tag{1}$$ Note that $\frac{y}{\ln y} \ge \frac{3^{1/\alpha}}{\ln 3^{1/\alpha}}$ on $y \ge 3^{1/\alpha}$ . If $\frac{\alpha}{\varepsilon^2}\le \frac{3^{1/\alpha}}{\ln 3^{1/\alpha}}$ , then (1) is true for all $y\ge 3^{1/\alpha}$ (thus, $x^\ast = -\infty$ ). In the following, assume that $\frac{\alpha}{\varepsilon^2} > \frac{3^{1/\alpha}}{\ln 3^{1/\alpha}}$ (in this case, $x^\ast > 3$ ). The condition (1) is equivalent to $$y \ge - \frac{\alpha}{\varepsilon^2} W_{-1}\left(-\frac{\varepsilon^2}{\alpha}\right)$$ where $W_{-1}(\cdot)$ is the second branch of the Lambert W function. Thus, $$x^\ast = \left(- \frac{\alpha}{\varepsilon^2} W_{-1}\left(-\frac{\varepsilon^2}{\alpha}\right)\right)^{\alpha}.$$
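A quick numeric verification of the closed form (the sample values of $\alpha$ and $\varepsilon$ are arbitrary; `scipy.special.lambertw` with branch `k=-1` plays the role of $W_{-1}$):

```python
import numpy as np
from scipy.special import lambertw

alpha, eps = 0.5, 0.05            # assumed sample values

c = alpha / eps**2
y = -c * lambertw(-1 / c, k=-1).real   # larger root of y / ln y = c
x_star = y**alpha

# x* should satisfy x / (log x)^alpha = eps^(-2*alpha) with equality.
print(x_star / np.log(x_star)**alpha, eps**(-2 * alpha))
```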
|
|real-analysis|inequality|logarithms|lambert-w|
| 1
|
Suppose $f : \mathbb{R} \to \mathbb{R}$ is a continuous function such that $f(\mathbb{Q}) \subseteq \mathbb{N}$. Show that $f$ is constant.
|
Suppose $f : \mathbb{R} \to \mathbb{R}$ is a continuous function such that $f(\mathbb{Q}) \subseteq \mathbb{N}$ . Show that $f$ is constant. --Skip this section if you want-- As this site disallows effortless questions, I'm adding my solution to this problem without going into too much detail: Suppose $\exists \ r \in \mathbb{R} \backslash \mathbb{Q}$ such that $f(r) \notin \mathbb{N}$ . Pick a sequence of rational numbers $\{q_i\}_{i \in \mathbb{N}} \to r$ . By sequential definition of continuity, $\{f(q_i)\}_{i \in \mathbb{N}} \to f(r) \implies f(r) \in \mathbb{N}$ (which can be shown using the definition of convergence). Now that we know $f(\mathbb{R}) \subseteq \mathbb{N}$ , pick $x, y \in \mathbb{R}$ $(x<y)$ with $f(x)\neq f(y)$ and observe that by the intermediate value theorem (IVT), $\exists \ z \in (x,y)$ such that $f(z) \in \text{img}(f) \cap (\mathbb{R} \backslash \mathbb{N}) = \emptyset$ , a contradiction. This gives us that $f$ is constant. I request someone to help me solve this without using the definition of sequ
|
The function $f:[0,1]\to\mathbb R$ is uniformly continuous, so there exists $\delta_0>0$ , such that when $x,y\in[0,1]$ and $|x-y|<\delta_0$ , we have $$|f(x)-f(y)|<1.\tag{$*$}$$ Choose $$0=x_0<x_1<\cdots<x_N=1$$ such that $$\max_{1\leq k\leq N}(x_k-x_{k-1})<\delta_0.$$ So when $x,y\in[x_{k-1},x_k]\cap\mathbb Q$ , by $(*)$ and the fact that $f(x),f(y)\in\mathbb N$ , we have $f(x)=f(y)$ . i.e. $$f(x)\equiv C_k,\quad \forall x\in[x_{k-1},x_k]\cap\mathbb Q.$$ For $x_0\in[x_{k-1},x_k]\cap\mathbb{R\setminus Q},$ choose a rational sequence $\{r_n\}\subset[x_{k-1},x_k]$ , such that $r_n\to x_0(n\to\infty)$ , by continuity of $f$ , $$f(x_0)=\lim_{n\to\infty}f(r_n)=C_k$$ This implies $$f(x)\equiv C_k,\quad \forall x\in[x_{k-1},x_k].$$ Hence (note that $C_{k}=f(x_k)=C_{k+1},k=1,2,\cdots,N-1$ ) $$f(x)\equiv f(0),\quad \forall x\in[0,1].$$ Similarly, we can conclude that $$f(x)\equiv D_n,\quad \forall x\in[n-1,n],n\in\mathbb Z.$$ Due to $$D_{n}=f(n)=D_{n+1},\quad \forall n\in\mathbb Z,$$ we have $$f(x)\equiv f(0),\quad \forall x\in\mathbb R.$$
|
|real-analysis|analysis|alternative-proof|
| 0
|
Is it possible to find a function $u_0 \in H_0^{1}(\Omega)$ such that $I(u_0)<0$?
|
Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with smooth boundary $\partial \Omega$ and consider the functional $I:H_0^{1}(\Omega) \to \mathbb{R}$ defined by $$I(u)=\frac{1}{2}\lVert u \rVert_{H_0^{1}(\Omega)}^2+\int_{\Omega} u \cos(u)dx-\int_{\Omega} \sin(u)dx, $$ does there exist some $u_0$ such that $I(u_0)<0$ ? Context of the problem: I was able to prove the existence of a weak solution of the partial differential equation $$-\Delta u+u=u\sin(u)\,\,\,\text{in}\,\,\, \Omega\\ u=0\,\,\,\text{on}\,\,\, \partial \Omega $$ by finding $v$ such that $$I(v)=\inf_{u \in H_0^1{(\Omega})} I(u) $$ I want to show that the weak solution found is not trivial; to do this, it is enough to find some point in which $I$ is negative.
|
No, such a function $u_0$ does not exist. Indeed, we have $$ I(u) \ge \int_{\Omega} \frac{|u(x)|^2}{2} + u(x)\cos(u(x)) - \sin(u(x)) dx, $$ so it is enough to show that $$ f(x) := \frac{x^2}2 + x\cos(x) - \sin(x) \ge 0 $$ for all $x\in\Bbb R$ in order to prove that $I(u) \ge 0$ for all $u \in H^1_0(\Omega)$ . Since $f'(x) = x - x\sin(x)$ , the critical points of $f$ are exactly at $x = 0$ and $x = \frac{(4n+1)\pi}2$ for all $n\in \Bbb Z$ . You can then show that the global minimum of $f$ is at $x=0$ where $f(0) = 0$ since $f$ is increasing on $[0,\infty)$ and decreasing on $(-\infty,0]$ . This shows that $f \ge 0$ on $\Bbb R$ and hence $I \ge 0$ on $H^1_0(\Omega)$ .
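A quick numeric check of the pointwise inequality $f\ge 0$ over a sample window (the grid below is arbitrary):

```python
import numpy as np

# Evaluate f(x) = x^2/2 + x cos x - sin x on a dense grid and locate its minimum.
x = np.linspace(-30, 30, 200001)
f = x**2 / 2 + x * np.cos(x) - np.sin(x)
print(f.min(), x[f.argmin()])   # minimum ~ 0, attained at x = 0
```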
|
|sobolev-spaces|elliptic-equations|
| 1
|
How does $(k+1)!(k+2)(k+1)$ simplify to $(k+2)!(k+1)$
|
If $$n!=n(n-1)!$$ then $$(k+1)!= (k+1)k(k-1)!$$ and $$(k+2)!$$ would be $$(k+2)(k+1)k(k-1)!$$ or $$(k+2)(k+1)!$$ but what does the extra (k+1) do to make it (k+2)!(k+1)
|
$(k+2)! = (k+1)!(k+2)$ , so $(k+1)!(k+2)(k+1) = (k+2)!(k+1)$ .
|
|algebra-precalculus|induction|factorial|
| 0
|
Integers that are sums of two squares
|
It is easy to decide if a given integer $n$ is the sum of two squares, and in fact there is a simple formula (based on the prime factorization) to compute the number of ways that $n$ can be written as the sum of two squares (see for example Wikipedia ). But I cannot find a discussion of how two different representations will be related. In other words, suppose that we find a representation $a^2+b^2=n$ . Is there a way to find another pair $(c,d)$ from $(a,b)$ so that $n=c^2+d^2$ ? I am thinking of something similar to the solutions of $ax-by=1$ for relatively prime $a,b$ . It is easy to see that all other solutions will be of the form $(x+bk, y+ak)$ , where $k$ is any integer. Is there something similar for the sum of two squares?
|
It's easy, if factorization in the Gaussian integers is easy. E.g., from $65=8^2+1^2$ you get $65=(8+i)(8-i)$ . Now if you can factor $8+i=(2-i)(3+2i)$ , whence $8-i=(2+i)(3-2i)$ , then you can calculate $(2+i)(3+2i)=4+7i$ , yielding $65=4^2+7^2$ .
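The same computation can be reproduced with ordinary complex arithmetic (just an illustration of the example in the answer):

```python
# Gaussian-integer factorization of 65 = 8^2 + 1^2, recombined to give 4^2 + 7^2.
a = (2 - 1j) * (3 + 2j)     # = 8 + 1j
b = (2 + 1j) * (3 + 2j)     # = 4 + 7j
print(a, abs(a)**2)         # norm ~ 65
print(b, abs(b)**2)         # norm ~ 65
```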
|
|elementary-number-theory|sums-of-squares|
| 1
|
Unbounded operator whose spectrum is the entire complex plane?
|
The question is simple: how to find an unbounded operator $T:H\to H$ where $H$ is a Hilbert space such that $\text{Sp} T = \mathbb C$ ? This seems a very basic thing, but I have not found an example in the literature. In some proofs, we need to consider this case separately. This example should be quite important.
|
Here is an elementary example. Let $a_n$ be a sequence dense in the complex plane. For example $a_n$ is an enumeration of $\mathbb{Q}\oplus i\mathbb{Q}.$ Consider the operator $T$ on $\ell^2(\mathbb{N})$ given by $$T\{x_n\}=\{a_nx_n\}$$ with domain $$D(T)=\left\{\{x_n\}\,:\, \sum |a_nx_n|^2<\infty\right\}.$$ Then $T$ is closed and the spectrum is equal to the entire complex plane. Another example: let $$(Tf)(x,y)=(x+iy)f(x,y)$$ act on $L^2(\mathbb{R}^2)$ with domain $$D(T)=\left\{f\,:\, \iint (x^2+y^2)|f(x,y)|^2 \,dx\,dy<\infty\right\}.$$
|
|functional-analysis|spectral-theory|
| 0
|
Unbounded operator whose spectrum is the entire complex plane?
|
The question is simple: how to find an unbounded operator $T:H\to H$ where $H$ is a Hilbert space such that $\text{Sp} T = \mathbb C$ ? This seems a very basic thing, but I have not found an example in the literature. In some proofs, we need to consider this case separately. This example should be quite important.
|
Consider $L^2(\mathbb C,dA(z))$ with $dA(z)$ the area measure and take $$ Tf(z)=zf(z). $$ Recall that the spectrum is the (essential) range of the function defining a multiplication operator.
|
|functional-analysis|spectral-theory|
| 0
|
Impossible Rubik's cube position (2 corners swapped!)
|
Left side, white up, Red, green, white corner swapped with Red, blue, white corner Right side, white up, Red, blue, white corner swapped with Red, green, white corner Right, Down, Back view of cube. By Transposing the corners, it is possible to get all four corners in the correct orientation but the edges are at right angles to their correct sides. (View white up, left side.) Transposed corners, Right, Down, Back view. Same transpose but after U', Right top view Again, Same transpose but after U', Left Down view Solving this 8x8 cube, I ran into this impossible position. The white, red, green corner is swapped with the white, red, blue corner. Note: It is impossible to cheat on this simulator... So, the question is, "Is it really an impossible position? Or is the simulator flawed?"
|
The simulator is not flawed. This position is possible, but only on cubes 4x4 and up. One way to fix your cube is to perform the two edge swap parity case that only arises in larger cubes, like the picture below, taken from here . After you do that, your cube should be in a state that is solvable with regular 3x3 techniques.
|
|algorithms|algorithmic-game-theory|rubiks-cube|analysis-of-algorithms|algorithmic-randomness|
| 0
|
Having trouble with finding the order of $z = 0$ for $(e^{z^2} - 1 - z^2)\sin^3z$
|
My attempt revolves around trying to represent $(e^{z^2} - 1 - z^2)\sin^3z$ as $f(z) = z^na(z)$ where $a(z)$ is analytic at $z = 0$ and $a(0)\neq 0$ . $$e^{z^2} - 1 - z^2 = \frac{z^4}{2!} + \frac{z^6}{3!}+ \ldots$$ So, we have $$\left(\frac{z^4}{2!} + \frac{z^6}{3!}+ \ldots\right)\left(z - \frac{z^3}{3!} + \frac{z^5}{5!} - \ldots\right)^3$$ but I'm not seeing how to get this into the form $f(z) = z^na(z)$ , assuming it's even possible to do this via this route. Any tips to help me progress? Thank you
|
The order of a zero is additive under multiplication of functions. Since $e^{z^2}-1-z^2$ has a zero of order 4 at $z=0$ and $\sin^3z$ has a zero of order 3, $(e^{z^2}-1-z^2)\sin^3z$ has a zero of order 7. In case you want to know why the orders add, let $f,g$ be two analytic functions having a zero at $0$ (WLOG) with orders $m,n$ . Then we can write $f(z)=z^mf_1(z),g(z)=z^ng_1(z)$ such that $f_1,g_1$ are analytic and are not vanishing at $0$ . Then $fg(z)=z^{m+n}f_1g_1(z)$ with $f_1g_1(0)\ne0$ . So the order of the zero of $fg$ is $m+n$ . This idea can be easily generalised to any finite product of functions.
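A one-line symbolic check of the order, purely for illustration (sympy is assumed to be available):

```python
from sympy import symbols, exp, sin, series

z = symbols('z')
f = (exp(z**2) - 1 - z**2) * sin(z)**3
print(series(f, z, 0, 9))   # z**7/2 + O(z**9): the zero at 0 has order 7
```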
|
|complex-analysis|
| 0
|
The set of irrationals in $[0,1] \times [0,1]$ has measure zero in $\Bbb{R}^2$?
|
I'm trying to prove that the set of discontinuities $D$ of Thomae's Function has measure zero using the complement of $D$ , which is the set of points where $f$ is continuous, namely $([0,1] \cap \mathbb{I}) \times ([0,1] \cap \mathbb{I})$ . I know that $[0,1] \cap \mathbb{I}$ does not have measure zero in $\mathbb{R}$ . However, does $([0,1]\cap \mathbb{I})^2$ have measure zero in $\mathbb{R}^2$ ?
|
Presuming we talk about the Lebesgue measure: Note that $\mu([0,1] \cap \mathbb{Q})=0$ , so, using $\mathbb{R}=\mathbb{Q} \cup \mathbb{I}$ , $$1= \mu([0,1])= \mu([0,1] \cap \mathbb{R} )= \mu([0,1] \cap \mathbb{Q}) + \mu([0,1] \cap \mathbb{I}) = \mu([0,1] \cap \mathbb{I}) $$ So it follows that: $\mu^2(([0,1] \cap \mathbb{I})^2)= (\mu([0,1] \cap \mathbb{I}))(\mu([0,1] \cap \mathbb{I})) = 1 \neq 0$ .
|
|measure-theory|multivariable-calculus|
| 1
|
Solve differential equation by change of variables
|
I need help with this exercise. Solve the differential equation $$y(2t^{2}\sqrt{y}+2)\,\mathrm dt + t(t^{2}\sqrt y +2) \,\mathrm dy=0$$ using $u = t^{2}\sqrt y$ . My steps: $$\frac{\mathrm du}{\mathrm dt} = 2t \sqrt y + t^{2}\frac{1}{2 \sqrt y} y'$$ And $y= \frac{u}{t^2}$ gives $$y'= \frac{1}{t^2}\times (- \sqrt u + \frac{1}{2 \sqrt u})$$ When I substitute this back into the equation, the solution does not come out to be a real number; I have tried this exercise in several different ways. Thanks if you can give me the solution. Also, I would appreciate recommendations for interesting books or material on solving differential equations by change of variables.
|
Your first step is correct: $$ \frac{du}{dt}=2t\sqrt{y}+\frac{t^2y'}{2\sqrt{y}}. \tag{1} $$ Now, let's substitute $$ y'=-\frac{y(2t^2\sqrt{y}+2)}{t(t^2\sqrt{y}+2)} \tag{2} $$ in $(1)$ : $$ \frac{du}{dt}=2t\sqrt{y}-\frac{t\sqrt{y}(2t^2\sqrt{y}+2)}{2(t^2\sqrt{y}+2)}. \tag{3} $$ Substituting $\sqrt{y}=\frac{u}{t^2}$ in $(3)$ , we obtain $$ \frac{du}{dt}=\frac{2u}{t}-\frac{u(2u+2)}{2t(u+2)}=\frac{u(u+3)}{t(u+2)}. \tag{4} $$ Equation $(4)$ is a separable ODE. You should have no difficulty in solving it. Remark. Using the fact that the original differential equation is exact, one can find its solution in a less convoluted way. The result is $$ \frac{2}{3}t^3y^{3/2}+2ty=C. \tag{5} $$
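A small symbolic check of the exactness remark and of the potential in $(5)$ (not part of the original answer; sympy is used purely for illustration):

```python
from sympy import symbols, sqrt, diff, simplify, Rational

t, y = symbols('t y', positive=True)

M = y * (2 * t**2 * sqrt(y) + 2)          # coefficient of dt
N = t * (t**2 * sqrt(y) + 2)              # coefficient of dy
F = Rational(2, 3) * t**3 * y**Rational(3, 2) + 2 * t * y   # candidate potential (5)

print(simplify(diff(F, t) - M))   # 0, so dF/dt = M
print(simplify(diff(F, y) - N))   # 0, so dF/dy = N: the equation is exact
```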
|
|ordinary-differential-equations|
| 0
|
Partial sum of Fourier series
|
I found the coefficients of the Fourier series. The partial sum of the Fourier series is defined on the interval $[0,2\pi]$ by $$S_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N}[ a_n \cos(nx) + b_n \sin(nx)],$$ where $$a_0 =\frac{1}{\pi} \int_{0}^{2\pi} f(x)\, dx, $$ $$a_n = \frac{1}{\pi} \int_{0}^{2\pi} f(x) \cos(nx)\, dx,$$ $$b_n =\frac{1}{\pi} \int_{0}^{2\pi} f(x) \sin(nx)\, dx.$$ Is there any way to put all the coefficients in the partial sum and to derive the function? I struggled after I put everything in one equation to see if it can be simplified more. Your help is greatly appreciated!
|
I found that the above question might be a classic issue. Its digital form is called compressed sensing . Here's a link , and page 6 introduces this problem.
|
|analysis|fourier-series|
| 0
|
Estimating Exponents
|
What's the best way to estimate exponents by hand? Say for example $(1.07)^{10}$ $\sim2$ , or like $(1.07)^{15}$ , or $(1.05)^{15}$ . Is there any standard way of doing these calculations?
|
One popular method is using the $\textbf{rule of 72}$ for estimating an investment's doubling time. It's a simple way to approximate the effect of compounding interest. According to this rule, you divide 72 by the annual interest rate to find how many years it will take for your investment to double. This method can be loosely adapted to estimate the growth of numbers slightly above 1 raised to high powers. For a more general approach, especially for numbers that don't conveniently fit the doubling scenario, you can use the linear approximation method from calculus, which is essentially the first term of the Taylor series expansion. For small $x,(1+x)^n \approx 1+n x$ , where $x$ is the rate of growth and $n$ is the number of periods. Applying the Linear Approximation Method: For $(1.07)^{10}$ , we approximate it as $1+10 \cdot 0.07=1.7$ . This is an underestimate because it doesn't account for compounding beyond the first period. For $(1.07)^{15}$ , it's $1+15 \cdot 0.07=2
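To see how these estimates compare with the exact values, here is a tiny script (illustrative only; the rule-of-72 column converts the heuristic doubling time into a growth factor):

```python
# Compare linear approximation, rule-of-72 estimate, and exact value.
for base, n in [(1.07, 10), (1.07, 15), (1.05, 15)]:
    r = base - 1
    linear  = 1 + n * r                    # first-order (binomial) approximation
    rule_72 = 2 ** (n * r * 100 / 72)      # doubling-time heuristic
    print(base, n, round(linear, 3), round(rule_72, 3), round(base**n, 3))
```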
|
|compound-interest|
| 0
|
Change of Variables and Expectation of Random Variable
|
Suppose we know a random variable X, its probability density function, and its expectation E(X). We know that when we calculate E(AX+B) we get AE(X)+B. But when I did a change of variables, assumed Y=AX+B, and computed its probability density function, it came out to be 1/A times the pdf of X. So, according to me, when we calculate the expectation of Y, i.e. E(AX+B), we have to integrate y times the probability density of Y, which gives the expectation as E(X)+B/A.
|
This equation is wrong: $$\mathfrak{f}_y(y)=\frac{d}{dy}(f_y(y))=\frac{d}{dy}\left(f_x\left(\frac{y-B}{A}\right)\right)=f_x(x)\cdot \frac{1}{A}$$ You accidentally turned $(y-B)/A$ into $x$ in the last step. Instead, it should be $$\mathfrak{f}_y(y)=\frac{d}{dy}(f_y(y))=\frac{d}{dy}\left(f_x\left(\frac{y-B}{A}\right)\right)=\mathfrak{f}_x\left(\frac{y-B}{A}\right)\cdot \frac{1}{A}$$ Thus, we have $$ \begin{align*} \mathbb{E}[Y] &= \int y\mathfrak{f}_y(y)dy \\ &= \int y \mathfrak{f}_x\left(\frac{y-B}{A}\right) \cdot \frac{1}{A}dy \\ &= \int (Ax+B) \mathfrak{f}_x(x)dx \\ &= A \int x\mathfrak{f}_x(x)dx + B\int \mathfrak{f}_x(x)dx \\ &= A\mathbb{E}[X] + B\cdot 1 \\ &= A\mathbb{E}[X]+B \end{align*} $$ as expected. Notice that the $1/A$ goes away when changing the integral from $dy$ to $dx$ because $x=(y-B)/A$ , so $dx=(1/A)\cdot dy$ .
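If a numeric sanity check helps, a quick Monte Carlo simulation confirms $E[AX+B]=A\,E[X]+B$ (the distribution and constants below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = 3.0, -2.0
X = rng.exponential(scale=1.5, size=1_000_000)   # any distribution works

print((A * X + B).mean())      # ~ A*E[X] + B = 3*1.5 - 2 = 2.5
print(A * X.mean() + B)        # matches the simulated mean of Y
```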
|
|probability|probability-distributions|random-variables|expected-value|
| 0
|
Why are local models of blow-ups compatible by gluing with transition functions in the normal bundle?
|
I am trying to understand the blow-up along a submanifold as explained in Huybrechts "Complex Geometry An Introduction", p. 99, Example 2.5.2. For background, for $m \leq n$ , we see $\mathbb C^m \subset \mathbb C^n$ as $\{ z_{m+1} = \ldots = z_n = 0 \}$ and define $$ Bl_{\mathbb C^m} (\mathbb C^n) := \{ (x,z): z_i x_j = z_j x_i, \forall i, j = m+1, \ldots, n \} \subset \mathbb P^{n-m-1} \times \mathbb C^n $$ Let $U, V \subset \mathbb C^n$ be two open sets with a biholomorphic map $\phi: U \rightarrow V$ with $\phi(U \cap \mathbb C^m) = V \cap \mathbb C^m$ . I am trying to understand Huybrechts' explanation of how to glue together $Bl_{\mathbb C^m \cap U} (U)$ and $Bl_{\mathbb C^m \cap V}(V)$ using $\phi$ . Firstly, writing the coordinate in $U$ as $z=(z_1, \ldots, z_n)$ and $\phi=(\phi^1, \ldots, \phi^n)$ , he defines the complex numbers $\phi_{k,j}$ by the equation $$ \phi^k = \sum_{j=m+1}^n z_j \phi_{k,j} $$ This seems a bit nonsensical to me, as $\phi^k$ is simply a function of $z$ , i
|
First of all, in the equations for $k\geq m+1$ , $$ \phi^k(z) = \sum_{j=m+1}^n z_j \phi_{k,j}(z), \tag{1} $$ the author didn't mean that $\phi_{k, j}(z)$ are numbers. (His omitting $(z)$ seems to be the source of the confusion.) They are actually functions of $(z_1,\dots,z_n)$ , and their existence is guaranteed by the condition that $\phi(U\cap {\mathbb C}^m) = V\cap {\mathbb C}^m$ . Basically, when $z_j=0$ for all $j\geq m+1$ we have $\phi^k=0$ for all $k\geq m+1$ , so we have such a form. (See also the proof of his 2.4.7.) The $\phi_{k,j}$ are different from your linearized version $\frac{\partial \phi^k}{\partial z_j}$ , although they are the same on $U\cap {\mathbb C}^m$ , where $z_{m+1}=\dots=z_n=0$ . For we can differentiate (1) to see \begin{align*} \frac{\partial \phi^k}{\partial z_j} &= \frac{\partial}{\partial z_j}\Big(\sum_{l=m+1}^n \phi_{k,l}(z) z_l\Big)\\ &= \phi_{k, j}(z) + \sum_{l=m+1}^n \frac{\partial \phi_{k,l}}{\partial z_j} z_l. \end{align*} Therefore, on $U\cap {\m
|
|algebraic-geometry|complex-geometry|submanifold|blowup|
| 1
|
Expected value of the exponential of a stopping time
|
Problem Let $a>0$ and $B$ be a standard $\mathbb{R}$ -valued Brownian motion. Define the stopping time $S_a:=\inf\{t\geq 0\ \vert \left\lvert B_t\right\rvert = a\}$ . Compute $\mathbb{E}\left[e^{-\frac{\lambda^2}{2}S_a}\right]$ for $\lambda\in\mathbb{R}$ . My (edited) attempt I noticed that the expression I needed to compute looks similiar to the martingale $Z=(e^{\lambda B_t - \frac{1}{2}\lambda^2t})_{t\geq 0}$ . Consider the bounded stopping time $S_a \land n\leq n$ for some $n\in\mathbb{N}$ . Doob’s optional sampling theorem yields: \begin{align*} \mathbb{E}\left[Z_{S_a\land n}\right] = \mathbb{E}\left[Z_0\right] = \mathbb{E}\left[e^{\lambda B_0 - \frac{1}{2}\lambda^20}\right] = 1\ \text{(a.s.)} \end{align*} We now have $\lim_{n\to\infty} Z_{S_a\land n} = Z_{S_a}$ (a.s.) and: $$\left\lvert Z_{S_a\land n}\right\rvert = e^{\lambda B_{S_a\land n} - \frac{1}{2}\lambda^2\left(S_a\land n\right)} \leq e^{\lambda B_{S_a\land n}} \leq e^{\left\lvert \lambda a\right\rvert}\ \text{(a.s.)}$$ Le
|
All good. Perhaps, you can add more details on the "by symmetry" Independence of $T$ and $B_T$ i.e. we have $B_t\stackrel{d}{=}-B_{t}$ and $S_{a}$ is only a function of $|B_{t}|$ , which is symmetric already. One can also use pde methods to find the law for the exit time double barrier stopping time density function . Here is also the more general formula for exit time from (a,b) Distribution of first exit time of Brownian motion , where symmetry doesn't apply and instead one has to use the strong Markov property.
|
|stochastic-processes|expected-value|brownian-motion|martingales|stopping-times|
| 1
|
Show cosine/sine is periodic.
|
This question is in the spirit of this question I asked earlier. Let $f, g:\mathbb{R}\to\mathbb{R}$ be differentiable functions such that $f(0)=0$ , $g(0)=1$ , $f'(x)=g(x)$ , and $g'(x)=-f(x)$ . Is it possible to show that both $f$ and $g$ are periodic just from these conditions (without using the fact that $f(x)=\sin(x)$ and $g(x)=\cos(x)$ are the unique solutions of this pair of conditions)?
|
Yes. You have $0=f'f+g'g=\frac12(f^2+g^2)'$ . So latter is constant and you get $f^2+g^2=1$ . This also gives you $-1\leq f,g\leq1$ . Next you get the addition formulas. Let $h(x)=f(x+y)$ . We have $h''(x)=-h(x)$ , $h(0)=f(y)$ , $h'(0)=g (y)$ . Knowing that this IVP has unique solution, one checks that $f(y)g (x)+g (y)f (x)$ is a solution, hence the solution and so $$ f(x+y)=f (x)g (y)+g(x)f (y). $$ And differentiating we get $$ g (x+y)=g (x)g (y)-f(x)f(y). $$ Note that $g>0$ on some interval around $0$ . If $g$ is never $0$ , this would imply that $f$ is always increasing and bounded, so $\lim_{x \to\infty}g(x)=0$ . From $f(2x)=2f(x)g (x)$ this leads to the contradiction that $f\to0$ . It follows that there exists $b>0$ with $g(b)=0$ . Using the continuity of $g$ , that $g(0)=1$ , and taking the infimum of all positive zeroes, we may assume that $b$ is the least positive zero. We have $f(b)^2=1$ . As $f(0)=0$ and $f$ is increasing up to $b$ , $f(b)=1$ . It follows that $$ f(2b)=2f(b)g
|
|real-analysis|calculus|analysis|trigonometry|
| 0
|
$x^x$ graph freaks out
|
So I was goofing around in Desmos when I plugged in the function $f(x)=x^x$ and noticed that Desmos shows only the positive part of the graph. Why does this happen? The function has real values. Can you explain why it doesn't work?
|
Desmos doesn't work with imaginary numbers, and (for example) the expression $(-1)^{0.5}$ is imaginary. For the value to be real, the exponent must be an integer when $x$ is negative, and that set of points is not continuous, so perhaps Desmos "decides" not to show it.
|
|functions|desmos|
| 0
|
Bayes' Theorem with cases
|
Exactly $\frac{1}{5}$ of the people that live on earth have a condition. There are two tests for this condition, the Z1 test and the Z2 test. When a person goes to a doctor to test for this condition, with probability $\frac{2}{3}$ the doctor conducts Z1 on them and with probability $\frac{1}{3}$ the doctor conducts Z2 on them. When Z1 is done, the outcome is as follows: If the person has the condition, the result is positive with probability $\frac{3}{4}$ . If the person does not have the condition, the result is positive with probability $\frac{1}{4}$ . When Z2 is done, the outcome is as follows: If the person has the condition, the result is positive with probability 1. If the person does not have the condition, the result is positive with probability $\frac{1}{2}$ . A person is picked uniformly at random and sent to a doctor to test for this condition. The result comes out positive. What is the probability that the person has the condition? Let A= Person has the condition and B= The test
|
With $A$ = diseased, $B$ = tests positive $P(A|B) = \Large\frac{P(A \cap B)}{P(B)} = \frac{P(B|A)\cdot P(A)} {P(B|A)\cdot P(A) + P(B|A^c)\cdot P(A^c)}$ For Z1 $P(A|B) =\Large\frac{\frac34\frac15}{\frac34\frac15 + \frac14\frac45}$ Compute similarly for $Z_2$ , and then apply the law of total probability to combine the results. I suppose you can carry on by yourself now ?
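Carrying the suggested computation all the way through, here is an illustrative sketch (it conditions on which test was administered and combines the joint probabilities):

```python
# Combine the Z1 and Z2 cases using the given probabilities.
pA = 1 / 5                                  # P(has the condition)
tests = {"Z1": (2 / 3, 3 / 4, 1 / 4),       # (P(test), P(+|A), P(+|not A))
         "Z2": (1 / 3, 1.0, 1 / 2)}

num = sum(pt * pos_A * pA for pt, pos_A, _ in tests.values())          # P(A and +)
den = num + sum(pt * pos_nA * (1 - pA) for pt, _, pos_nA in tests.values())  # P(+)
print(num / den)                            # 5/13 ~ 0.3846
```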
|
|probability|discrete-mathematics|probability-distributions|solution-verification|bayes-theorem|
| 1
|
Prove that the isotopy generated by a time-dependent symplectic vector field is a symplectomorphism
|
Let $M$ be a compact and connected smooth manifold. Suppose $X_t$ is a time-dependent symplectic vector field and let $\phi_t$ be the isotopy generated by $X_t$ . Prove that $\phi_t ∈ Symp(M, \omega)$ for all $t$ . I know that: Given a symplectic form $\omega$ on $M$ , we denote by $Symp(M, \omega)$ the space of symplectomorphisms of $(M, \omega)$ , that is, diffeomorphisms $\Phi : M \to M$ such that $\Phi^∗\omega = \omega$ A vector field $X ∈ \scr X(M)$ , is called symplectic if $i_X\omega$ is closed as a 1-form on M. A time-dependent vector field $X_t$ is called symplectic if $X_t$ is a symplectic vector field for all $t$ Also: My try: I have to prove that $\phi_t$ is smooth and $\phi_t^∗\omega = \omega$ . Since it is an isotopy it is already a smooth map $M \to M$ by definition so I guess I just have to prove that $\phi_t^∗\omega = \omega$ . So I tried doing this $\phi_t^*\omega(Y,Z)=\omega (\phi_t(X),\phi_t(Y))$ but it does not seem to go anywhere. Moreover it feels wrong that $\phi_t
|
Since $\phi_0=Id$ , to show $\phi_t^*\omega = \omega$ , we only need $$ \frac{d}{dt}\phi_t^*\omega = 0. $$ But this by definition is Lie derivative. So by the Cartan formula , \begin{align*} \frac{d}{dt}\phi_t^*\omega &= L_{X_t}\omega = (di_{X_t} + i_{X_t}d)\omega = di_{X_t}\omega + i_{X_t} d\omega = 0 + 0 = 0, \end{align*} since $X_t$ is a symplectic vector field and $\omega$ is a symplectic form.
|
|differential-geometry|manifolds|differential-forms|symplectic-geometry|exterior-algebra|
| 0
|
$x^x$ graph freaks out
|
So I was goofing around in desmos when I plugged the function $f(x)=x^x$ And noticed that desmos shows only the positive part of the graph. Why does this happen. The function has real values. Can you explain why it doesn't work?
|
I suppose it is a convention not to show the negative part of the graph (though the function is real valued at integral points on the negative side of $x$ ) to avoid the disastrous case of fractional negative indices as $x$ is also in the exponent. For example, At $x=-2.3$ , $f(x)=(-2.3)^{-2.3}$ which can be evaluated as the reciprocal of $-2.3$ raised to $2.3$ or the tenth root of $\frac{-1}{2.3}$ raised to $23$ i.e. $\sqrt[10]{(\frac{-1}{2.3})^{23}}$ which is an eventh root under a negative expression and hence is complex. The thing gets interesting here as the expression can also be written as the hundredth root of $\frac{-1}{2.3}$ raised to $230$ i.e. $\sqrt[100]{(\frac{-1}{2.3})^{230}}$ which is a real valued expression indeed. The same expression cannot be complex and real at the same time. Due to this apparent contradiction, we conventionally don't allow negative exponents to muddle up with negative bases together. i.e. You won't get the graph of $a^x$ (where, $a$ is -ve) in con
|
|functions|desmos|
| 0
|
Estimating Exponents
|
What's the best way to estimate exponents by hand? Say for example $(1.07)^{10}$ $\sim2$ , or like $(1.07)^{15}$ , or $(1.05)^{15}$ . Is there any standard way of doing these calculations?
|
I wanted to elaborate on my comments in the answer provided by @Cuteshrek in case it would be useful. Let nonnegative integer $n$ be given and define $f(x)=(1+x)^n$ . Then the $N$ th order Maclaurin series expansion of $f(x)$ can be shown to be $$\sum_{i=0}^N {n \choose i} x^i.$$ Note that when $N \geq n$ , $$\sum_{i=0}^N {n \choose i} x^i=\sum_{i=0}^n {n \choose i} x^i=(1+x)^n=f(x),$$ where we have used the binomial theorem $(y+x)^n = \sum_{i=0}^n {n \choose i} y^{n-i}x^i$ with $y=1$ . This makes sense because $f$ is a $n$ th degree polynomial. Moreover, since $x>0$ in our context, the summand is always positive for all integers $0 \leq i \leq n$ . Thus, for nonnegative integers $N < n$ , the $N$ th order Maclaurin series expansion for $f(x)$ can be seen as an underapproximation of $(1+x)^n$ that becomes increasingly more accurate as $N$ increases. The binomial approximation $(1+x)^n \approx 1+nx$ notably corresponds to the $N=1$ Maclaurin series expansion. Since ${n \choose i}=\frac{n^{\u
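As a quick numerical illustration (my own numbers, tied to the example in the question): for $(1.07)^{10}$ we have $n=10$ , $x=0.07$ , and the partial sums give $$1+10(0.07)=1.7,\qquad 1+0.7+\binom{10}{2}(0.07)^2=1.9205,\qquad 1+0.7+0.2205+\binom{10}{3}(0.07)^3\approx 1.9617,$$ while the true value is $(1.07)^{10}\approx 1.9672$ .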
|
|compound-interest|
| 0
|
Ambiguity textbook exercise involving $\sqrt{-144}$
|
Consider the following question; the whole exercise is dedicated to determining the square root of negative numbers, after introducing the complex numbers. E.g. $$\sqrt{-144}$$ Solutions for the whole exercise only give one solution for each problem, the above being $12i$ . However, this is proving to be rather confusing. From my understanding, we know that $i$ is a number such that when squared, $i^2=-1$ . However, this does not mean that $\sqrt{-1}=i$ , as $-1$ has 2 roots that are indistinguishable. Defining $\sqrt{-1}=i$ goes bad really quickly. Hence for the problem above, the answer should be $\pm12i$ ; to avoid confusion, we cannot say that it is $12i$ . Unlike for positive real numbers, where the principal square root is defined, there is no such counterpart in the complex plane. Hence when we write $\sqrt{-144}$ , it must be $\pm12i$ . So are the answers incorrect? By defining $i^2=-1$ , how can we extend this, such that we can use it for such problems?
|
There is always one principal root. The radical " $\sqrt{}$ " always indicates the positive root if the roots are real. e.g. $\sqrt {144} = 12.$ It does not equal $-12.$ Yet, $(-12)^2 = 144$ Or, we could say $x^2 = 144$ has two solutions $x = \pm 12.$ How does this work with principal roots? If we take the square roots of both sides: $\sqrt {x^2} = \sqrt {144}\\ |x| = 12\\ x = \pm 12$ This concept of principal roots does not create a contradiction. So, moving on to imaginary numbers... $i$ is the principal root. $\sqrt {-1} = i$ However, $-i$ is also a root, just not the principal root. $x^2 = -1 \implies x = \pm i$ In the problem at hand $-12i$ is a solution to $x^2 + 144 = 0$ $\sqrt {-144} = 12i$ has one solution... the principal root.
|
|algebra-precalculus|complex-numbers|definition|arithmetic|radicals|
| 0
|
Open Set Containing a Single Axis of the 2-D Real Plane
|
Let $A \subset \mathbb{R}^2$ be the set containing only the " $x$ -axis" of the real 2-D plane (that is $A = \{(x, y) : x \in \mathbb{R}, y = 0\}$ ). Then, any open set $E$ containing $A$ must contain a set $B$ of the form $$B = \bigcup_{x \in \mathbb{R}} \{x\}\times\{(-r_x, r_x)\}.$$ It was stated in the solution to a problem I had read that since the number of points along the $x$ -axis is uncountable, there must exist a countable sequence of distinct numbers along the axis, call it $\{x_k\}_{k \in \mathbb{N}}$ , such that $r_{x_k} > 1/N$ for large enough $N \in \mathbb{N}$ and all $k \in \mathbb{N}$ . Why is it the case that such a countable sequence of distinct numbers must exist? I thought of trying to construct the countable sequence in the following manner: pick any number $x_1$ along the $x$ -axis, by the Archimedean property of the real numbers $r_{x_1} > 0$ implies $ r_{x_1} > 1/N_1$ for some $N_1 \in \mathbb{N}$ . Subsequently choose a second number $x_2$ along the $x$ -axis
|
For each integer $N \geq 1$ , let $B_N = \left\{x \in \mathbb{R} : r_x > \frac{1}{N}\right\}$ . Using the Archimedean property (as you did in your post) we have that for each $x\in \mathbb{R}$ there is an $N$ such that $r_x > 1/N$ , so $\mathbb{R} = \bigcup_{N=1}^{\infty}B_N$ . We know that $\mathbb{R}$ is uncountable, while the union of countably many finite sets is countable, so there must be some $N$ such that $B_N$ is infinite; choosing any sequence from $B_N$ will give you what you want. By the way, the union of countably many countable sets is also countable, so the same argument here also shows that there is some $N$ where $B_N$ is uncountable, so we can get much more than just a sequence, but it sounds like a sequence is all you needed.
|
|real-analysis|general-topology|measure-theory|
| 1
|
Is this a correct approach to calculating $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$?
|
We have just started covering the limit of sequences and I've stumbled upon this limit in our uni's exercises: $$\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$$ I've considered solving it using the fact that $\lim_{n\rightarrow \infty} {\sqrt[n]{a}}=1$ for $a>0$ . And since we're dealing with natural numbers, with the exception of $n=1$ , the expression $\ln(n)$ should be $>0$ , right? So is it correct to assume that $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}=1$ using this thought process?
|
To solve this limit, it's helpful to employ L'Hôpital's Rule, but in a form applicable to sequences and involving logarithms, due to the indeterminate form that arises. The form we'll use involves taking the logarithm of the sequence and then applying L'Hôpital's Rule: First, recognize that directly applying L'Hôpital's Rule to the original sequence isn't straightforward. We need to manipulate the expression into a form that allows us to apply the rule. Convert the limit into an exponent of $e$ to facilitate the use of L'Hôpital's Rule: $$ \lim _{n \rightarrow \infty} \sqrt[n]{\ln (n)} = \lim _{n \rightarrow \infty} e^{\frac{\ln(\ln(n))}{n}} $$ This step involves understanding that $\sqrt[n]{x} = x^{\frac{1}{n}} = e^{\frac{\ln(x)}{n}}$ . Now, consider the exponent separately: $$ \lim _{n \rightarrow \infty} \frac{\ln(\ln(n))}{n} $$ This limit appears to be of the form $\frac{\infty}{\infty}$ as $n \rightarrow \infty$ , allowing us to apply L'Hôpital's Rule. Apply L'Hôpital's Rule by di
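Filling in the remaining step (my completion of the computation): differentiating numerator and denominator with respect to $n$ gives $$\lim_{n\to\infty}\frac{\ln(\ln(n))}{n}=\lim_{n\to\infty}\frac{1/(n\ln(n))}{1}=0,$$ so $\lim_{n\to\infty}\sqrt[n]{\ln(n)}=e^{0}=1$ .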
|
|calculus|limits|limits-without-lhopital|
| 0
|
Are linearly ordered topological spaces well-based?
|
A linearly-ordered topological space or LOTS is one whose topology admits a basis generated by open intervals of a total ordering of its points. A well-based space is one which admits a local basis of each point that is totally ordered by set inclusion. It appeared to me that the former implied the latter, and I attempted to prove it via the following: for any point $p$ in a LOTS take (possibly transfinite) sequences $a:\Gamma\rightarrow[-\infty,p)$ and $b:\Gamma\rightarrow(p,\infty]$ for some ordinal $\Gamma$ which are monotonic and surjective with $a_0=-\infty$ and $b_0=\infty$ . Then for any open interval $(c,d)$ containing $p$ there is an ordinal $\beta$ such that $c\le a_\beta<p<b_\beta\le d$ , so $\{(a_\alpha,b_\alpha)\mid\alpha\in\Gamma\}$ is a local basis of $p$ . However, I got a response that a LOTS can be not well-based if there exists a point with different cofinalities on the left and the right, giving the example of $\omega_1+1+\omega*$ with the order topology (where $+$ denotes order co
|
(This proves the same thing as Jakobian's answer, but presented in the more general context of partially ordered sets.) Some generalities about posets Let $(P, \le)$ be a partially ordered set. A subset $A\subseteq P$ is a chain if any two of its elements are comparable. A subset $A\subseteq P$ is cofinal if every element of $P$ is less than or equal to some element of $A$ . If $P$ is totally ordered, there is a well-ordered cofinal subset of some cardinality $\kappa$ . The smallest such cardinal is the cofinality of $P$ and is a regular cardinal (or $1$ if the poset has a maximum element) and there is a strictly increasing (transfinite) sequence $(x_\alpha)_{\alpha<\kappa}$ cofinal in $P$ . In a general (non-totally ordered) poset $P$ there may not exist a cofinal chain. If $P$ has a cofinal chain, one says that $P$ has true cofinality (Jech, Set Theory, p. 461). In that case, as shown in In a poset with a cofinal chain, does every cofinal subset admit a cofinal chain? , every cofinal subset $
|
|general-topology|order-theory|
| 0
|
Is this a correct approach to calculating $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$?
|
We have just started covering the limit of sequences and I've stumbled upon this limit in our uni's exercises: $$\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$$ I've considered solving it using the fact that $\lim_{n\rightarrow \infty} {\sqrt[n]{a}}=1$ for $a>0$ . And since we're dealing with natural numbers, with the exception of $n=1$ , the expression $\ln(n)$ should be $>0$ , right? So is it correct to assume that $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}=1$ using this thought process?
|
Method of L'H $\hat{\text{o}}$ pital rule is introduced above by Zuko. I can state another method, which still require some basic knowledge of calculus. I personally would seldomly use L'H $\hat{\text{o}}$ pital rule on limit of sequence as originally this should be a discrete limit instead of continuous stuff, though the result is same. The calculus result we need to use is $$1-\dfrac{1}{n}=\dfrac{n-1}{n}\le\ln(n)\le n-1$$ This is easy to prove by considering their difference and check the monotonicity and $\cdots$ . With this fact, we have $$\sqrt[n]{1-\dfrac{1}{n}}\le\sqrt[n]{\ln(n)}\le\sqrt[n]{n-1}$$ As $n\to+\infty$ , RHS tends to one (can be done by binomial theorem), and LHS tends to one as $1-\dfrac{1}{n}$ is strictly less than one. So any power-rooting a number smaller than one will make it tends to 1. Then by sandwich theorem, we get the required limit is $1$ as $n\to+\infty$ .
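To make the left-hand limit precise (a small supplement of my own): for $n\ge 2$ , $$1-\frac1n\le\left(1-\frac1n\right)^{1/n}\le 1,$$ since $0<1-\frac1n<1$ and $0<\frac1n\le1$ , so the left-hand side also tends to $1$ and the sandwich closes.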
|
|calculus|limits|limits-without-lhopital|
| 1
|
Expectation of the indicator function
|
Define: For $n \geq 0$ , write $X_n=(n+1) \mathbb{1}_{[n+1,+\infty[}$ , $\mathcal{F}_n=\sigma(\{1\},\{2\}, \ldots,\{n\},[n+1,+\infty[)$ , and $\forall k \in \mathbb{N}^*, \mathbb{P}(\{k\})=\frac{1}{k}-\frac{1}{k+1}$ . I am trying to prove that it's a martingale. We have that $$X_{n+1}=(n+2) \mathbb{1}_{[n+2,+\infty[}$$ $$\mathbb{E}\left[X_{n+1} \mid \mathcal{F}_n\right]=(n+2) \mathbb{E}\left[\mathbb{1}_{[n+2,+\infty[} \mid \mathcal{F}_n\right]$$ And because $\{k\}$ for $k \geq n+2$ is independent from $\mathcal{F}_n$ $$\mathbb{E}\left[\mathbb{1}_{[n+2,+\infty[} \mid \mathcal{F}_n\right]=\mathbb{P}([n+2,+\infty[)$$ $$\mathbb{P}([n+2,+\infty[)=\sum_{k=n+2}^{\infty}\left(\frac{1}{k}-\frac{1}{k+1}\right)$$ $$=\frac{1}{n+2}-\frac{1}{n+3}+\frac{1}{n+3}-\frac{1}{n+4}+\ldots$$ $$=\frac{1}{n+2}$$ $$\mathbb{E}\left[X_{n+1} \mid \mathcal{F}_n\right]=(n+2) \times \frac{1}{n+2}=1$$ But this is supposed to be equal to $X_n$ . What am I doing wrong?
|
$\{k\}$ is not independent of $\mathcal F_n$ . Since $\mathcal F_n$ is generated by a partition, $$ \mathbb E\left[X_{n+1}\mid\mathcal F_n\right]=\sum_{k=1}^n \mathbb E\left[X_{n+1}\mathbf{1}_{\{k\}}\right]\frac 1{\mathbb P(\{k\})}\mathbf{1}_{\{k\}}+\mathbb E\left[X_{n+1}\mathbf{1}_{[n+1,\infty)}\right]\frac 1{\mathbb P([n+1,\infty))}\mathbf{1}_{[n+1,\infty)}. $$ Only the last term remains and since $[n+2,\infty)\subset [n+1,\infty)$ , we have $$ \mathbb E\left[X_{n+1}\mathbf{1}_{[n+1,\infty)}\right]\frac 1{\mathbb P([n+1,\infty))}=(n+2)\frac{\mathbb P([n+2,\infty))}{\mathbb P([n+1,\infty))}, $$ which, after simplification, gives the result.
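Spelling out that last simplification (not written in the answer, but it is the final step): $\mathbb P([k,\infty))=\frac1k$ , so on the event $[n+1,\infty)$ , $$\mathbb E\left[X_{n+1}\mid\mathcal F_n\right]=(n+2)\frac{\mathbb P([n+2,\infty))}{\mathbb P([n+1,\infty))}\mathbf 1_{[n+1,\infty)}=(n+2)\cdot\frac{1/(n+2)}{1/(n+1)}\,\mathbf 1_{[n+1,\infty)}=(n+1)\mathbf 1_{[n+1,\infty)}=X_n,$$ as required.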
|
|conditional-expectation|martingales|characteristic-functions|
| 0
|
Estimating Exponents
|
What's the best way to estimate exponents by hand? Say for example $(1.07)^{10}$ $\sim2$ , or like $(1.07)^{15}$ , or $(1.05)^{15}$ . Is there any standard way of doing these calculations?
|
$(1+x)^{n}$ when $x$ is very small: I substitute $m = nx$ , and then the starting expression is $(1+x)^{\frac{m}{x}}$ , which is approximately $e^{m}$ , so it is about $e^{nx}$ . This is hard to use alone; merge it with the other methods.
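For instance (numbers added for concreteness): with $x=0.07$ and $n=10$ this gives $(1.07)^{10}\approx e^{0.7}\approx 2.01$ , close to the true value $1.967$ ; with $x=0.05$ and $n=15$ , $(1.05)^{15}\approx e^{0.75}\approx 2.12$ , versus the true $2.079$ .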
|
|compound-interest|
| 0
|
Center position of an orthogonal rectangle that has a side or corner touching a circumference
|
I need to find how distant the center of an orthogonal rectangle is from the center of a circle, given a specific angle. The dimensions of the rectangle are proportional to the circle radius, so they use a 0-1 range, where 1 is the radius of the circle. The objective is to get the position of the center of that rectangle, knowing its size, so that when an angle is given one of its corner (or sides) is exactly on the circumference. To clarify, here are three image examples. Here is a rectangle that touches the circle on one of its sides: The same rectangle, with a different angle, now touching the circle at one of its corners: Another rectangle, with the same angle as the last, showing a different position for its center: The above is just an example, the final purpose is to get an arbitrary function , no matter the angle: I'm actually trying to put a given text as close as possible to the circle. The bounding rectangle of the text is known (its size is proportional to the circle radius
|
I see that your problem is not so much a mere geometrically driven one, it rather is more about programming some geometric display (of given visual elements). However I cannot understand, why you try to refer with the angle directly to the rectangle's midpoint. The geometric relevant point should be the contact point, i.e. the respective corner instead. Thus, assuming the circle itself is origin centered, that contact point $P$ ought have coordinates $$P\left(r\cdot\cos(\varphi );\ r\cdot\sin(\varphi )\right)$$ If your program however asks for the rectangle's midpoint $M$ instead, that one then could be given as $$M\left(r\cdot\cos(\varphi )+\frac w2\cdot\text{sgn}(\cos(\varphi ));\ r\cdot\sin(\varphi )+\frac h2\cdot\text{sgn}(\sin(\varphi ))\right)$$ where $$\begin{array}{cl} r & \text{radius of circle}\\ w & \text{width of rectangle}\\ h & \text{height of rectangle}\\ \text{sgn} & \text{signum function, returning just the sign value}\ \in \{+1, -1\} \end{array}$$ --- rk
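If it helps to see the formula as code, here is a minimal sketch (my own illustration, not from the answer above: the circle is centered at the origin, $\text{sgn}(0)$ is treated as $+1$ , and the function name and argument order are arbitrary):

```python
import math

def rect_center(r, w, h, phi):
    # Center (cx, cy) of an axis-aligned w-by-h rectangle whose contact
    # corner sits on the circle of radius r (centered at the origin) at
    # angle phi, following the formula above; sgn(0) is taken as +1.
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    cx = r * math.cos(phi) + (w / 2) * sgn(math.cos(phi))
    cy = r * math.sin(phi) + (h / 2) * sgn(math.sin(phi))
    return cx, cy

# Example: unit circle, rectangle of size 0.4 x 0.2, angle 30 degrees.
print(rect_center(1.0, 0.4, 0.2, math.radians(30)))
```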
|
|geometry|trigonometry|circles|rectangles|collision-detection|
| 0
|
What is the Laplace transform of (f(t))^n?
|
Well, I was doing some Laplace transform problems and came across one interesting problem: the Laplace transform of $f(t)^n$ . I attempted to solve it using the definition of the Laplace transform, but it got complicated. Is there a way to find it using some other method?
|
There isn't a general formula for the Laplace transform of $f(t)^n$ without knowing more specifics about $f(t)$ . The Laplace transform, by its definition, involves an integral that can become significantly more complicated when dealing with powers or compositions of functions. For specific functions and powers, there might be techniques or properties that could simplify the process, such as: Linear Properties: If $f(t)$ is a linear combination of simpler functions whose Laplace transforms are known, you can use the linearity of the Laplace transform to simplify the problem. Convolution Theorem: If $f(t)^n$ can be interpreted as a convolution of $f(t)$ with itself $n-1$ times, then you can use the convolution theorem, which states that the Laplace transform of a convolution is the product of the Laplace transforms. Differentiation and Integration Properties: The differentiation and integration properties of Laplace transforms can be used for functions that are derivatives or integrals
|
|calculus|integration|
| 0
|
Find the number of solutions of $2^x+3^x+4^x-5^x=0$ (without using graphical calculator)
|
Find the number of solutions of $$2^x+3^x+4^x-5^x=0$$ The answer is given as $1$ . I tried to take the derivative of the function $$f(x)=2^x+3^x+4^x-5^x\\f'(x)=\ln2 (2^x)+\ln3(3^x)+\ln4(4^x)-\ln5(5^x)$$ but I could not conclude whether it is $\gt0$ or $\lt0$ . Plotting $f'(x)$ in Desmos shows it is +ve for certain values of $x$ and -ve for others, so the function is not monotonic. I have also plotted $f(x)$ in Desmos and found it to be first increasing and then decreasing. But how can this be solved without using a graphical calculator?
|
Consider the equation $$\begin{align} 2^x+3^x +4^x &=5^x\\ \left({2\over 5}\right)^x+\left({3\over 5}\right)^x+\left({4\over 5}\right)^x &=1 \tag 1 \end{align}$$ Now consider the function $$\begin{align} g(x) &= \left({2\over 5}\right)^x+\left({3\over 5}\right)^x+\left({4\over 5}\right)^x \\ g'(x) &= \ln\left({2\over 5}\right)\left({2\over 5}\right)^x+\ln\left({3\over 5}\right)\left({3\over 5}\right)^x+\ln\left({4\over 5}\right)\left({4\over 5}\right)^x \end{align}$$ Now all $\ln\left({2\over 5}\right),\ln\left({3\over 5}\right),\ln\left({4\over 5}\right)$ terms are $-ve$ and $k^x;\;\forall k\in \mathbb R$ terms are $+ve$ so $g'(x)\lt0$ hence decreasing function also for $\begin{cases}x \to -\infty & g(x)\to \infty\\x\to\infty & g(x)\to 0 \end{cases}$ so it will cut the line $y=k ,\; k\in \mathbb R^+$ only once.
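As a sanity check (numbers added, not part of the original argument): $g(2)=\frac{4+9+16}{25}=\frac{29}{25}>1$ while $g(3)=\frac{8+27+64}{125}=\frac{99}{125}<1$ , so the unique solution of $g(x)=1$ , i.e. of the original equation, lies in $(2,3)$ .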
|
|calculus|derivatives|contest-math|exponential-function|
| 1
|
Number Theory (Q4 BMO 1996)
|
Define $q(n)=\left\lfloor\frac{n}{\lfloor\sqrt{n}\rfloor}\right\rfloor$ for $n \in \mathbb{N}$ . Determine with proof all positive integers $n$ for which $q(n)>q(n+1)$ . Claim: Let $m\in \mathbb{N}$ , then for all $n=m^2-1$ with $m>1$ , $q(n)>q(n+1)$ . Proof: Observe that since $(m-1)^2 < m^2-1 < m^2$ , we have $\lfloor\sqrt{m^2-1}\rfloor=m-1$ and hence $q(m^2-1)=m+1$ . Also $q(m^2)=m$ . Therefore $n=m^2-1$ is a solution. I need help with proving that it's the only solution. I thought that since $q(n)$ is increasing between $m^2$ and $(m+1)^2$ there cannot be another solution.
|
Rough idea: If neither $n$ nor $n+1$ is a perfect square, then $\lfloor \sqrt{n}\rfloor$ and $\lfloor \sqrt{n+1} \rfloor$ are equal. Let their common value be $k$ . Now $\lfloor\frac{n}{k}\rfloor\leq \lfloor\frac{n+1}{k}\rfloor$ , so such values of $n$ can’t give $q(n)>q(n+1)$ . Hence $n=m^2$ or $n=m^2-1$ . You can check the case $n=m^2$ and show that it is impossible.
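Spelling out that last check (my own addition): if $n=m^2$ with $m\ge 2$ , then $q(m^2)=m$ while $q(m^2+1)=\left\lfloor\frac{m^2+1}{m}\right\rfloor=m$ , since $m^2+1<m^2+m$ ; so $q(n)>q(n+1)$ fails there, and $n=m^2-1$ with $m>1$ gives the only solutions.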
|
|elementary-number-theory|contest-math|
| 1
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
The specific flaw in your mock-proof (which I love, by the way) is that the property "there exists someone with at most 2 friends" is not stable under taking subsets. (Put differently, if we define a school to be a group in which there is someone with at most 2 friends, $G'$ doesn't need to be a subschool of $G$ ; e.g. if you had a school with one loner and three mutual friends, then removed the loner, you would no longer have a "school".) On the other hand, the property of being a planar graph is stable under taking subsets, which is a key point in the inductive step of the proof you are questioning (which really ought to contain the observation that $G'$ is planar when it goes to apply the inductive hypothesis).
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
Generator of the joint process $(X_t,Y_t)$ where $Y_t= e^{-t}W(e^{2t})$ and $X_t = \int^t_0 Y_sds$.
|
Let $(W_t)_{t\geq 0}$ be a standard one-dimensional Brownian motion and let $$ Y_t := e^{-t}W(e^{2t}), \qquad X_t := \int^t_0 Y_s ds $$ Show that the joint process $(X_t,Y_t)$ is Markovian and find the generator of the process. This is an exercise that I came across while reading a book. I can show the first part where the joint process is indeed Markovian. However, I don't know how to find the generator $$ L[f](0,x) := \lim_{t \rightarrow 0}\frac{E_{(0,x)}[f(X_t,Y_t)] -f(0,x)}{t}. $$ for $f$ sufficiently regular. My first thought is to use Ito, but I only know the form of $f(t,X_t)=...$ not the form $f(X_t,Y_t)=...$ (if there is any) and my second thought is to use the joint density but don't know how to proceed to find the joint density for now.
|
To find the generator I will look for the two-dimensional SDE that is solved by $(X_t,Y_t)\,.$ It is not hard to see that $$\tag{1} B_t:=\frac{1}{\sqrt2}\int_0^te^{-s}\,dW_{e^{2s}} $$ is a continuous martingale with quadratic variation $t$ (note that $d\langle W_{e^{2\cdot}}\rangle_s=2e^{2s}\,ds$ , so without the factor $\frac{1}{\sqrt2}$ the quadratic variation would be $2t$ ). Therefore it is a Brownian motion that satisfies $$\tag{2} Y_t=e^{-t}\,W_{e^{2t}}=e^{-t}\,W_1+\sqrt2\int_0^te^{u-t}\,dB_u\,. $$ It is easy to see that the SDE that is solved by $Y_t$ is the one of an Ornstein-Uhlenbeck process: $$\tag{3} dY_t=-Y_t\,dt+\sqrt2\,dB_t\,. $$ By definition, $$\tag{4} dX_t=Y_t\,dt\,. $$ The system we were looking for is (3) and (4). This can be written in matrix/vector form as $$\tag{6} d\mathbf{Z}_t=\underbrace{\pmatrix{0&1\\0&-1}}_{\textstyle=:\mathbf{b}}\,\mathbf{Z}_t\;dt+\underbrace{\pmatrix{0&0\\\sqrt2&0}}_{\textstyle=:\boldsymbol{\sigma}}\,d\mathbf{B}_t $$ where $$\tag{7} \mathbf{Z}_t=\pmatrix{X_t\\Y_t}\,,\quad \mathbf{B}_t=\pmatrix{B_t\\C_t} $$ and $C_t$ is a dummy Brownian motion independent of $B\,.$ Since $$\tag{8} \boldsymbol{\sigma
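For completeness, a sketch of where this calculation is heading (my own addition, using $\mathbf b$ and $\boldsymbol\sigma$ above): the generator of $d\mathbf Z_t=\mathbf b\mathbf Z_t\,dt+\boldsymbol\sigma\,d\mathbf B_t$ is $\sum_i(\mathbf b\mathbf z)_i\partial_i+\frac12\sum_{i,j}(\boldsymbol\sigma\boldsymbol\sigma^{\top})_{ij}\partial_i\partial_j$ , and here $\boldsymbol\sigma\boldsymbol\sigma^{\top}=\pmatrix{0&0\\0&2}$ , so one gets $$Lf(x,y)=y\,\frac{\partial f}{\partial x}-y\,\frac{\partial f}{\partial y}+\frac{\partial^2 f}{\partial y^2}\,.$$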
|
|probability-theory|stochastic-processes|stochastic-calculus|brownian-motion|
| 0
|
proof about trigonometrical functions and their relation to the exponential function
|
Let $w$ be from $\mathbb{C}$ with $|w| = 1$ . I need to show that there exists exactly one $t$ from $[0,2\pi)$ such that $w = e^{it}$ . My approach would be to rewrite $w$ as $a+bi$ and $e^{it}$ as $\cos(t) + i \sin(t)$ , and then set up two equations so that $a = \cos(t)$ and $b = \sin(t)$ and then argue with the inverse functions of cos and sin. But how can I prove with those equations that there is exactly one $t$ which fulfills the requirements? (When I look at the unit circle it's obvious, but that's not a solid proof.) Furthermore I need to make the proof more universal by finding all $z$ from $\mathbb{C}$ to fulfill the equation $w = e^z$ for any $w$ from $\mathbb{C}$ . Help would be really appreciated.
|
First of all you already stated yourself the restriction to $[0;\ 2\pi)$ , i.e. all mentioned functions $\exp(it)$ , $\sin(t)$ , and $\cos(t)$ clearly are $2\pi$ -periodic. Now divide that range into the open quadrants $(0;\ \pi/2)$ , $(\pi/2;\ \pi)$ , $(\pi;\ 3\pi/2)$ , and $(3\pi/2;\ 2\pi)$ . On each of those the functions $\sin(x)$ and $\cos(x)$ clearly are monotone (as can be seen from their respective derivatives). Moreover the sign combination of $(\cos(t),\sin(t))$ within each of those sub-ranges clearly is unique: $++$ , $-+$ , $--$ , $+-$ . Therefore the parameter $t$ indeed is specified uniquely for every point on the unit circle. (Without your above restriction you clearly would obtain just $t+2\pi\,n$ instead, for any $n\in\mathbb{Z}$ .) --- rk
|
|analysis|trigonometry|exponential-function|
| 1
|
Is addition by a specific nonzero natural number a term function in this structure?
|
Consider the structure $(\mathbb{N};+,\times,0)$ . I know that every nonzero natural number $k$ is definable by a first-order formula in that structure, and hence, so is the unary function $x+k$ . However, I want to know if the function $x+k$ is a term function in that structure. I strongly suspect it is not, but I want to see the rigorous proof that it is not. I apologize if my question is too elementary or pedantic, but I still want to see the proof.
|
It is a term function if and only if $k=0$ . Let $t(x)$ be a uni-variate term function. If $t$ has length $1$ then it is either $x$ or $0$ . The first case is the term function for $x \mapsto x+0$ . Otherwise, $t$ is $t_0 + t_1$ or $t_0 \times t_1$ for term functions $t_0(x), t_1(x)$ . In both cases, if $t_0(x), t_1(x)$ are always multiples of $x$ then so is $t(x)$ ; by induction every term function has this property. But if $k \neq 0$ then we can take some $x$ such that $x \not\mid x+k$ , so $x+k$ is not a term function.
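A tiny concrete instance (my own, to make the last step explicit): the term $x\times x+x$ defines $x^2+x=x(x+1)$ , which is a multiple of $x$ at every point, e.g. it takes the value $6$ at $x=2$ ; whereas for $k=1$ the function $x+1$ takes the value $3$ at $x=2$ and $2\nmid 3$ , so no term can define it.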
|
|model-theory|natural-numbers|universal-algebra|
| 1
|
Is it true that $\lim_{n \to \infty } \frac{\int_\delta^ \frac{\pi}{2} \cos^n (x )dx}{\int_0^ \frac{\pi}{2} \cos^n (x )dx}= 0$
|
I was trying to solve this problem: Suppose $f$ is a Riemann integrable function and $|f(t) − f(0)| \le M t$ for some positive constant $M$ on $[0,\frac \pi 2]$ . It follows that $$\lim_{n \to \infty } \frac{\int_0^\frac{\pi}{2} f(x)\cos^n (x )dx}{\int_0^\frac{\pi}{2} \cos^n (x )dx} =f(0)$$ and I tried to use squeeze theorem Since $\cos(x)$ is monotone decreasing on $(0,\frac{\pi}{2})$ then for all $\varepsilon >0 \ \exists \delta >0, \ N \in \mathbb{N} $ such that for all $\delta $\cos^n (x ) $$\frac{\int_0^\frac{\pi}{2} f(x)\cos^n (x )dx}{\int_0^\frac{\pi}{2} \cos^n (x )dx}= \frac{\int_0^\delta f(x)\cos^n (x )dx +\int_\delta^ \frac{\pi}{2} f(x)\cos^n (x )dx }{\int_0^\delta \cos^n (x )dx +\int_\delta^ \frac{\pi}{2} \cos^n (x )dx} $$ $$\frac{\int_0^\frac{\pi}{2} f(x)\cos^n (x )dx}{\int_0^\frac{\pi}{2} \cos^n (x )dx} \le (f(0)+M \delta ) + \frac{M\pi}{2}\cdot \frac{\int_\delta^ \frac{\pi}{2} \cos^n (x )dx}{\int_0^ \frac{\pi}{2} \cos^n (x )dx}$$ The last part $\frac{\int_\delta^ \frac{\p
|
It just needs some inequality techniques $$ \cos^n(1)(1-\delta) \leq \int_{\delta}^{\frac{\pi}{2}}\cos^n(x)\leq \frac{\pi}{2}\cos^n(\delta)\quad \int_{0}^{\delta}\cos^n(x)\geq \frac{\delta}{2}\cos^n (\frac{\delta}{2})$$ $$\frac{\int_{\delta}^{\frac{\pi}{2}}\cos^n(x)}{\int_{0}^{\frac{\pi}{2}}\cos^n(x)} \leq \frac{\frac{\pi}{2}\cos^n(\delta)}{\cos^n(1)(1-\delta)+\frac{\delta}{2}\cos^n (\frac{\delta}{2})}=\frac{\frac{\pi}{2}(\frac{\cos(\delta)}{\cos(\frac{\delta}{2})})^n}{(1-\delta)(\frac{\cos(1)}{\cos(\frac{\delta}{2})})^n+\frac{\delta}{2}} \longrightarrow 0\quad \text{as}\quad n\to \infty $$
|
|real-analysis|integration|limits|definite-integrals|
| 0
|
Estimation of $\int_{|z| = 1 + \varepsilon} \frac{|\mathrm{d}z|}{|z - 1|}$ when $\varepsilon \to 0^+$
|
Consider the following family of integrals for $\varepsilon > 0$ : $$I(\varepsilon) := \int_{|z| = 1 + \varepsilon} \frac{|\mathrm{d}z|}{|z - 1|} = \int_0^{2\pi} \frac{(1 + \varepsilon)\,\mathrm{d}\theta}{|(1+\varepsilon)e^{i\theta} - 1|}$$ Question : The objective would be to show the following behaviour: $$\liminf_{\varepsilon \to 0^+}\, I(\varepsilon) < +\infty$$ However, is this true? The first red flag I'm seeing is that the integral $\displaystyle\int_0^{2\pi} \frac{\mathrm{d}\theta}{|e^{i\theta} - 1|}$ , which would be a limit-candidate for $I(\varepsilon)$ , is infinite, as we have $|e^{i\theta} - 1| = 2 \sin(\theta/2)$ on $[0, 2\pi]$ and: $$\int_{\delta}^{\pi - \delta} \frac{\mathrm{d}\theta}{\sin \theta} = \left[\frac{1}{2} \ln\left(\frac{1 - \cos \theta}{1 + \cos \theta}\right)\right]_\delta^{\pi - \delta} = \ln\left(\frac{1 + \cos \delta}{1 - \cos \delta}\right) \xrightarrow[\delta \to 0^+]{} +\infty$$ but this of course does not eliminate the possibility for the $I(\varepsilon)$ s to
|
It seems like this appraoch to OP's question won't work because the integrals diverge to $+\infty$ (as I expected/feared). Indeed, we have: $$\begin{split} \int_0^{2\pi} \frac{\mathrm{d}\theta}{|(1+\varepsilon)e^{i\theta} - 1|} &= \int_0^{2\pi} \frac{\mathrm{d}\theta}{\sqrt{\left((1+\varepsilon)\cos \theta - 1\right)^2 + (1 + \varepsilon)^2\sin^2 \theta}}\\ &\geq \int_0^{2\pi} \frac{\mathrm{d}\theta}{\sqrt{-2(1+\varepsilon)\cos \theta + 1 + (1 + \varepsilon)^2}}\\ &\geq \int_0^{2\pi} \frac{\mathrm{d}\theta}{\sqrt{-2(1+\varepsilon)\cos \theta + (1 + \varepsilon) + (1 + \varepsilon)^2}}\\ &\geq \frac{1}{\sqrt{1 + \varepsilon}}\int_0^{2\pi} \frac{\mathrm{d}\theta}{\sqrt{-2\cos \theta + 1 + (1 + \varepsilon)}}\\ &\geq \frac{1}{\sqrt{1 + \varepsilon}}\int_0^{2\pi} \frac{\mathrm{d}\theta}{\sqrt{4\sin^2\left(\frac{\theta}{2}\right) + \varepsilon}}\end{split}$$ Yet, thanks to the monotone convergence theorem applied to any decreasing subsequence $(\varepsilon_n)_n$ tending to $0^+$ when $n \to
|
|calculus|integration|convergence-divergence|
| 1
|
Geometric Applications of Calculus: find the equation of the tangent
|
I'm not able to solve this problem: Find the equation of the tangent to the function $f(x) = \frac{5}{x} + 2x$ which passes through the point $P=(0,-4)$ . Also give the coordinates of the tangent point $B$ and the angle of intersection between $t$ and $f$ . I have already taken the derivative of the function: $f'(x) = -\frac{5}{x^2} +2$ but when I plugged $P=(0,-4)$ it did not work because I got $f'(0) = -\frac{5}{0^2} +2$ . Then I got error because I can't divide by zero. Thank you for helping!!
|
The tangent line to the point $(a, 5/a+2a)$ has the slope $f’(a)=-5/a^2+2$ . So the tangent line equation is $$y= f’(a)(x-a)+f(a)$$ $$y= (-5/a^2+2)(x-a)+ 5/a+2a$$ $$y= (-5/a^2+2)x+10/a.$$ Put $(0,-4)$ in this: $$-4=10/a.$$ So $$a=-2.5.$$ The coordinates of the intersection point are $$(-2.5, -7).$$ The tangent line equation is then $$y=1.2x-4.$$
|
|calculus|
| 1
|
Do exponentials in semigroups only have formal meaning?
|
Let $L$ be a linear differential operator and consider the PDE $$\begin{cases} u_t + Lu = 0, \quad x \in \mathbb{R}^n\\ u = f, \quad t = 0\end{cases} \tag{1}$$ It is known that we may construct a continuous semigroup $S(t)$ whose infinitesimal generator is $-L$ , and thus for all $t > 0$ , $u(t) = S(t)f$ for sufficiently nice $f$ . What I am confused about is how one arrives at the often seen expression $S(t) = e^{Lt}$ . For example, on its own it is not clear what it means to apply $e^{Lt}$ to a function $f$ , and it seems what is actually meant is to apply $S(t)$ to $f$ . I know that this is inspired by the fact that in (1) we replace $L$ by a constant matrix $A$ , then the resulting ODE has a solution $e^{-At}$ . I also know that some meaning can be given to the exponential of a matrix either by an infinite series (if $A$ is bounded) or by using the functional calculus (if $A$ is self-adjoint). However in neither of these two cases do I see why $S(t) = e^{Lt}$ . Is there any justifi
|
Time evolution has an exponential type of property. For example, suppose you have a state vector $x_1$ at $t=t_1$ , and you want to know what the state vector will evolve to become at some $t_2 > t_1$ . Then you can symbolically write $$ x_2 = S(t_2,t_1)x_1 $$ This assumes, of course, that $x_2$ is unique, which would be the case for any well-defined time evolution problem. You can see that there must be an exponential type of description because $$ x_3=S(t_3,t_2)S(t_2,t_1)x_1, $$ which implies that $$ S(t_3,t_2)S(t_2,t_1)=S(t_3,t_1). $$ If the system $S$ is time-independent, then $S$ will depend only on the differences of the two arguments, which leads to a time-invariant formulation where $S(t'',t')=\mathscr{S}(t''-t')$ , meaning that the evolution depends only on the difference between the two arguments, and not the arguments themselves. The end result is a simple exponential property: $$ \mathscr{S}(t_b)\mathscr{S}(t_a)=\mathscr{S}(t_b+t_a). $$ In other words, time evolution has a
|
|functional-analysis|ordinary-differential-equations|partial-differential-equations|operator-theory|semigroup-of-operators|
| 1
|
Globally-generated property of vector bundle and the tautological bundle on its projective bundle?
|
Here is my question which seems trivial: Let $X$ be a projective manifold with a vector bundle $\scr{E}$ on it. If $\mathscr{O}_{\mathbb{P}(\mathscr{E})}(1)$ is generated by global sections, whether the $\scr{E}$ generated by global sections or not? If not, is there some easy counterexamples? In which special case we have this property? Here $\mathbb{P}(-)$ is in the sense of Grothendieck. Note that we let $f:\mathbb{P}(\mathscr{E})\to X$ , then $f_*\mathscr{O}_{\mathbb{P}(\mathscr{E})}(1)\cong\scr{E}$ . If we have $0\to\mathscr{K}\to\mathscr{O}^{\oplus N}\to\mathscr{O}_{\mathbb{P}(\mathscr{E})}(1)\to0$ , then we have $$0\to f_*\mathscr{K}\to f_*\mathscr{O}^{\oplus N}=\mathscr{O}^{\oplus N}\to f_*\mathscr{O}_{\mathbb{P}(\mathscr{E})}(1)=\mathscr{E}\to R^1f_*\mathscr{K}\to0.$$ We don't know whether $R^1f_*\mathscr{K}$ is zero or not. Thank you for your help!
|
First, note that $$ H^0(X, \mathcal{E}) \cong H^0(X, f_*\mathcal{O}(1)) \cong H^0(\mathbb{P}(\mathcal{E}), \mathcal{O}(1)). $$ Next, observe that the evaluation morphism for $\mathcal{O}(1)$ factors as $$ H^0(\mathbb{P}(\mathcal{E}), \mathcal{O}(1)) = H^0(X, \mathcal{E}) \otimes \mathcal{O}_{\mathbb{P}(\mathcal{E})} \to f^*\mathcal{E} \twoheadrightarrow \mathcal{O}(1) $$ through the evaluation morphism for $\mathcal{E}$ and the tautologocal morphism (which is surjective). Now, if $\mathcal{E}$ is globally generated, its evaluation morphism is surjective, hence so is the evaluation morphism for $\mathcal{O}(1)$ . On the other hand, if $\mathcal{E}$ is not globally generated, its evaluation morphism is not surjective, hence there is a point $x \in X$ and a vector $0 \ne e \in \mathcal{E}_x$ which is not in the image of the evaluation morphism of $\mathcal{E}$ . Then it is clear that the evaluation morphism of $\mathcal{O}(1)$ is not surjective at the point $[e] \in \mathbb{P}(\mathcal{E}
|
|algebraic-geometry|
| 1
|
Geometry problem from BMO2 2001 with angle bisector of a triangle
|
I was attempting the 2001 BMO 2 and was unable to solve question 3. The question was: A triangle ABC has $\measuredangle ACB > \measuredangle ABC$ . The internal bisector of $\measuredangle BAC$ meets BC at D. The point E on AB is such that $\measuredangle EDB = 90◦$ . The point F on AC is such that $\measuredangle BED = \measuredangle DEF$ . Show that $\measuredangle BAD = \measuredangle FDC$ . My progress: I first started out labelling $\measuredangle BAD = \measuredangle DAC = \alpha$ and $\measuredangle ABC = \beta$ . I then did some angle chasing but found nothing interesting like a cyclic quadrilateral or parallel lines. However, I did find that FE was perpendicular to AB. I then tried to create some cyclic quadrilaterals. I chose to look at the $\triangle EFA$ first, so I drew the perpendicular from F to AD and set its foot as X. This way we have AEXF as a cyclic quadrilateral with diameter AF. Using angles in the same segment we see $\measuredangle XFE = \measuredangle XEF = \a
|
See that $D$ is the $A-$ excenter of $\triangle AEF$ wherefrom $\widehat{EDF}=90^\circ-\dfrac{\widehat{BAC}}2$ , i.e. $\widehat{CDF}=\dfrac{\widehat{BAC}}2$ .
|
|contest-math|euclidean-geometry|triangles|
| 0
|
Infinite Series $\sum_{n=1}^\infty\frac{H_n}{n^22^n}$
|
How can I prove that $$\sum_{n=1}^{\infty}\frac{H_n}{n^2 2^n}=\zeta(3)-\frac{1}{2}\log(2)\zeta(2).$$ Can anyone help me please?
|
It would appear this one can be related to the first integral in the three log integrals you posted. The one with the upper limit of 1/2. $$\displaystyle \sum_{n=1}^{\infty}\frac{H_{n}}{n^{2}}x^{n}=\int_{0}^{x}\frac{Li_{2}(t)+1/2log^{2}(1-t)}{t}dt$$ .....[1] By letting $\displaystyle x=1/2$ and integrating we get: $$\displaystyle \int_{0}^{x}\frac{Li_{2}(t)+\frac{1}{2}\log^{2}(1-t)}{t}dt=\left [-Li_{3}(1-t)+Li_{3}(t)+Li_{2}(1-t)\log(1-t)+\frac{1}{2}\log(t)\log^{2}(1-t)\right ]_{0}^{\frac{1}{2}}$$ Note that $$\displaystyle Li_{3}(1/2)=\frac{7}{8}\zeta(3)+\frac{1}{6}\log^{3}(2)-\frac{{\pi}^{2}}{12}log(2)$$ and $$\displaystyle Li_{2}(1/2)=\frac{{\pi}^{2}}{12}-\frac{log^{2}(2)}{2}$$ So, we arrive at: $$\displaystyle =\boxed{\zeta(3)-\frac{{\pi}^{2}}{12}log(2)}$$ The integral in [1] can be derived by beginning with: $$\displaystyle \sum_{n=1}^{\infty}\frac{H_{n}}{n}x^{n}=\int_{0}^{x}\sum_{n=1}^{\infty}H_{n}t^{n-1}dt$$ $$\displaystyle =-\int_{0}^{x}\frac{log(1-t)}{t}dt-\int_{0}^{x}\frac{log(
|
|real-analysis|sequences-and-series|closed-form|zeta-functions|harmonic-numbers|
| 0
|
Mathematica gives: $\int_{0}^{\infty}{\cos(x^n)-\cos(x^{2n})\over x}\cdot{\ln{x}}\mathrm dx={12\gamma^2-\pi^2\over 2(4n)^2}$
|
How do we show that the given result by Mathematica is correct? $$\int_{0}^{\infty}{\cos(x^n)-\cos(x^{2n})\over x}\cdot{\ln{x}}\mathrm dx={12\gamma^2-\pi^2\over 2(4n)^2}\tag1$$ $n>0$ Where $\gamma=0.577216...$ I would try substitution, because it may help to simplify the problem into a manage integral to deal with. $u=x^n$ $du=nx^{n-1}dx.$ $${1\over n}\int_{0}^{\infty}{\cos(u)-\cos(u^2)\over u^{1\over n}}\cdot{\ln{u^{1\over n}}}{\mathrm dx\over u^{n-1\over n}}={12\gamma^2-\pi^2\over 2(4n)^2}$$ Simplified to $${1\over n^2}\int_{0}^{\infty}{\cos(u)-\cos(u^2)\over u}\cdot{\ln{u}}\mathrm du={12\gamma^2-\pi^2\over 2(4n)^2}$$ We can remove $\ln{u}$ by doing another substitution $v=\ln{u}$ $udv=du$ $${1\over n^2}\int_{-\infty}^{\infty}{\cos(e^v)-\cos(e^{2v})\over e^v}\cdot{v}\cdot{e^v}\mathrm du={12\gamma^2-\pi^2\over 2(4n)^2}$$ Then we finally simplified to $$={1\over n^2}\int_{-\infty}^{\infty}v\cos(e^v)\mathrm dv -{1\over n^2}\int_{-\infty}^{\infty}v\cos(e^{2v})\mathrm dv$$ At this stage I
|
Comment 1: Laplace $$\displaystyle{\int\limits_0^\infty {{y^a} \cdot {e^{ - x \cdot y}}dy} = \frac{{\Gamma \left( {1 + a} \right)}}{{{x^{1 + a}}}}}$$ and $$\displaystyle{\int\limits_0^\infty {\cos y \cdot {e^{ - x \cdot y}}dy} = \frac{x}{{1 + {x^2}}}}$$ transformations (considered known). Comment 2: From here http://mathworld.wolfram.com/GammaFunction.html (relations 39 and 35) we know $$\displaystyle{\Gamma \left( {1 + a} \right) = a \cdot \Gamma \left( a \right)}$$ as well as $$\displaystyle{\frac{1}{{\Gamma \left( {1 + 2 \cdot m \cdot z} \right)}} = 1 + 2m\gamma \cdot z + \frac{{{m^2}}}{3}\left( {6{\gamma ^2} - {\pi ^2}} \right){z^2} + \frac{{2{m^3}}}{3}\left( {2{\gamma ^3} - \gamma {\pi ^2} + 4\zeta \left( 3 \right)} \right){z^3} + ..}$$ Comment 3: Because $$\displaystyle{\Gamma \left( z \right) \cdot \Gamma \left( {1 - z} \right) = \frac{\pi }{{\sin \pi z}}}$$ (relationship 42 above) it follows (with Taylor analysis) that $$\displaystyle{\Gamma \left( {1 + m \cdot z} \right)\Gamma
|
|calculus|integration|definite-integrals|improper-integrals|euler-mascheroni-constant|
| 0
|
Justify $\zeta(3)=2\int_0^1 \left(Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x}\right))\log \Gamma(x)dx$
|
I don't know if this approach to get a formula involving the Apéry constant was in the literature. This idea was a simple idea few minutes ago, when I was studying the answers in this site Math Stack Exchange for the question Integral $\int_0^1 \log \Gamma(x)\cos (2\pi n x)\, dx=\frac{1}{4n}$ . One has that since Wolfram Alpha said that $$\sum_{k=1}^\infty\frac{\cos (2 \pi k x)}{k^2}=\frac{Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x})}{2},$$ where $Li_s(z)$ is the polylogarithm function . Then using the dominated convergence theorem we should have then $$\zeta(3)=2\int_0^1 \left(Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x}\right))\log \Gamma(x)dx .$$ Question. Please can you justify all these claims to provide us this nice exercise for this site Mathematics Stack Exchange? I say justify the closed-form for the series involving the cosine function (if you find a reference in this site, only is required add it) and after jusfity how one uses the dominated convergence theorem. Thanks in advance. With r
|
Comment : from here http://functions.wolfram.com/ZetaFuncti ... owAll.html We know that $$\displaystyle{L{i_2}\left( z \right) + L{i_2}\left( {\frac{1}{z}} \right) = - \frac{1}{2}{\log ^2}\left( { - z} \right) - \frac{{{\pi ^2}}}{6}} ,$$ Consequently $$\displaystyle{L{i_2}\left( {{e^{ - 2i\pi x}}} \right) + L{i_2}\left( {{e^{2i\pi x}}} \right) = - \frac{1}{2}{\log ^2}\left( { - {e^{2i\pi x}}} \right) - \frac{{{\pi ^2}}}{6}}$$ $$\displaystyle{S = 2\int\limits_0^1 {\left( {L{i_2}\left( {{e^{ - 2i\pi x}}} \right) + L{i_2}\left( {{e^{2i\pi x}}} \right)} \right)\log \left( {\Gamma \left( x \right)} \right)dx} = }$$ $$\displaystyle{2\int\limits_0^1 {\left( { - \frac{1}{2}{{\log }^2}\left( { - {e^{2i\pi x}}} \right) - \frac{{{\pi ^2}}}{6}} \right)\log \left( {\Gamma \left( x \right)} \right)dx} = }$$ $$\displaystyle{ = \int\limits_0^1 {\left( { - {{\log }^2}\left( {{e^{i\pi \left( {2x - 1} \right)}}} \right) - \frac{{{\pi ^2}}}{3}} \right)\log \left( {\Gamma \left( x \right)} \right)dx} = \in
|
|integration|sequences-and-series|definite-integrals|lebesgue-integral|polylogarithm|
| 0
|
Justify $\zeta(3)=2\int_0^1 \left(Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x}\right))\log \Gamma(x)dx$
|
I don't know if this approach to get a formula involving the Apéry constant was in the literature. This idea was a simple idea few minutes ago, when I was studying the answers in this site Math Stack Exchange for the question Integral $\int_0^1 \log \Gamma(x)\cos (2\pi n x)\, dx=\frac{1}{4n}$ . One has that since Wolfram Alpha said that $$\sum_{k=1}^\infty\frac{\cos (2 \pi k x)}{k^2}=\frac{Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x})}{2},$$ where $Li_s(z)$ is the polylogarithm function . Then using the dominated convergence theorem we should have then $$\zeta(3)=2\int_0^1 \left(Li_2(e^{-2\pi i x})+Li_2(e^{2\pi i x}\right))\log \Gamma(x)dx .$$ Question. Please can you justify all these claims to provide us this nice exercise for this site Mathematics Stack Exchange? I say justify the closed-form for the series involving the cosine function (if you find a reference in this site, only is required add it) and after jusfity how one uses the dominated convergence theorem. Thanks in advance. With r
|
With completely Fourier ( somewhat similar ) we get the following: One can easily verify $(\dagger)$ identity $$\displaystyle{\sum_{n=1}^\infty\frac{\cos (2 \pi n x)}{n^2}=\frac{{\rm Li}_2(e^{-2\pi i x})+{\rm Li}_2(e^{2\pi i x})}{2}}$$ Then we have sequentially: $$\displaystyle{\begin{aligned} \int_0^1 \bigg({\rm Li}_2 \left(e^{-2\pi i x} \right)+{\rm Li}_2 \left(e^{2\pi i x} \right) \bigg)\log \Gamma(x) \; {\rm d}x &= 2\int_{0}^{1} \log \Gamma(x) \sum_{n=1}^{\infty} \frac{\cos 2 \pi n x}{n^2} \, {\rm d}x\\ &= 2\sum_{n=1}^{\infty} \frac{1}{n^2} \int_{0}^{1}\cos 2 n \pi x \log \Gamma(x) \, {\rm d}x\\ &\overset{(*)}{=} 2 \sum_{n=1}^{\infty} \frac{1}{4n^3} \\ &= \frac{\zeta(3)}{2} \end{aligned}}$$ $\dagger)$ It is left as an exercise to the reader. Of course, the sum of polylogarithms falls into something elementary, since for example it is true $$\displaystyle{\sum_{n=1}^{\infty} \frac{\sin nx}{n} = \frac{\pi-x}{2} \quad , \quad x \in (0, 2\pi)}$$ where with proper manipulation and integ
|
|integration|sequences-and-series|definite-integrals|lebesgue-integral|polylogarithm|
| 0
|
Separability for the collection of all non-empty compact subsets of $\mathbb{R}^2$ with the Hausdorff metric
|
Let $X$ be the collection of all non-empty compact subsets of $\mathbb{R}^2$ , which has the Euclidean metric. Let $(X,d)$ be a metric space, where $d$ is the Hausdorff metric. Is $X$ separable? Furthermore, let $Y$ be the collection of all finite subsets of $X$ . Let $(Y,d)$ be a metric space, where $d$ is the Hausdorff metric. Is $Y$ separable?
|
Let us suppose, more generally, that $(M, \rho)$ is a separable metric space, with countable and dense subspace $N$ . Let $X$ be the set of non-empty compact subsets of $M$ , $Y \subseteq X$ be the set of non-empty finite subsets of $M$ , and $Z \subseteq Y$ be the non-empty finite subsets of $N$ (note that $Z$ is countable). Let $d$ be the Hausdorff metric. I claim that $Z$ is dense in $Y$ , $Y$ is dense in $X$ , hence $Z$ is dense in $X$ , making both $X$ and $Y$ separable. Fix $A \in X$ and $\varepsilon > 0$ . Since $A$ is compact, it is totally bounded, and so we may find an $\varepsilon$ -net, i.e. a finite set $a_1, \ldots, a_n \in A$ such that $$A \subseteq \bigcup_{i=1}^n B(a_i; \varepsilon). \tag{1}$$ In particular, the finite set $A' = \{a_1, \ldots, a_n\}$ is a subset of $A$ , hence $$\sup_{x \in A'} \inf_{y \in A} \rho(x, y) = 0.$$ Moreover, $(1)$ implies that \begin{align*} \sup_{x \in A} \inf_{y \in A'} \rho(x, y) &\le \sup_{x \in \bigcup_{i=1}^nB(a_i; \varepsilon)} \inf_
|
|real-analysis|general-topology|metric-spaces|separable-spaces|hausdorff-distance|
| 1
|
Evaluate $\int_0^{\frac{\pi}{2}}\frac{x^2}{1+\cos^2 x}dx$
|
Evaluate the following integral $$\int_0^{\frac{\pi}{2}}\frac{x^2}{1+\cos^2 x}dx$$ This function does not have an elementary anti-derivative. How can we solve this?
|
Entry 1 If $\displaystyle{\left| a \right| > 1}$ then $$\displaystyle{\int\limits_0^\pi {\frac{{{x^2}}}{{{e^{ix}} - a}}dx} = - \frac{{{\pi ^3}}}{{3a}} - \frac{{2\pi }}{a}L{i_2}\left( { - \frac{1}{a}} \right) + i\left( { - \frac{{{\pi ^2}}}{a}\log \left( {1 + \frac{1}{a}} \right) - \frac{2}{a}L{i_3}\left( { - \frac{1}{a}} \right) + \frac{2}{a}L{i_3}\left( {\frac{1}{a}} \right)} \right)}$$ because $$\displaystyle{\int\limits_0^\pi {\frac{{{x^2}}}{{{e^{ix}} - a}}dx} = - \frac{1}{a}\int\limits_0^\pi {\frac{{{x^2}}}{{1 - \frac{{{e^{ix}}}}{a}}}dx} = - \frac{1}{a}\int\limits_0^\pi {{x^2}\sum\limits_{n = 0}^\infty {\frac{{{e^{ixn}}}}{{{a^n}}}} dx} = - \frac{1}{a}\int\limits_0^\pi {{x^2}dx} - \frac{1}{a}\sum\limits_{n = 1}^\infty {\frac{1}{{{a^n}}}\int\limits_0^\pi {{x^2}{e^{ixn}}dx} } = - \frac{{{\pi ^3}}}{{3a}} - }$$ $$\displaystyle{ - \frac{1}{a}\sum\limits_{n = 0}^\infty {\frac{1}{{{a^n}}}\left( { - i\frac{{{{\left( { - 1} \right)}^n}{\pi ^2}}}{n} + \frac{{2{{\left( { - 1} \right)}^n}\pi }}
|
|real-analysis|integration|definite-integrals|contour-integration|
| 0
|
Closed form of $\mathscr{R}=\int_0^{\pi/2}\sin^2x\,\ln\big(\sin^2(\tan x)\big)\,\,dx$
|
Inspired by Mr. Olivier Oloa in this question . Does the following integral admit a closed form? \begin{align} \mathscr{R}=\int_0^{\Large\frac{\pi}{2}}\sin^2x\,\ln\big(\sin^2(\tan x)\big)\,\,dx \end{align} It will be my last question before I take a long break from my activity on Mathematics StackExchange. So, please be nice. No more downvotes for no reason because this is a challenge problem . Edit : I am also interested in knowing the numerical value of $\mathscr{R}$ to the precision of at least $50$ digits. If you use Mathematica to find its numerical value, please share your method & the code.
|
After the substitution $\tan{x}\mapsto x$ , we get \begin{align} \mathscr{R} =&\int^\infty_0\frac{x^2}{(1+x^2)^2}\ln(\sin^2{x})\ {\rm d}x\\ =&\Re\int^\infty_{-\infty}\frac{x^2\ln(1-e^{i2x})}{(1+x^2)^2}{\rm d}x-\ln{2}\underbrace{\int^\infty_{-\infty}\frac{x^2}{(1+x^2)^2}{\rm d}x}_{\frac{\pi}{2}} \end{align} Even though the function $\displaystyle f(z)=\frac{z^2\ln(1-e^{i2z})}{(1+z^2)^2}$ has infinitely many branch points at $z=n\pi$ , once we close the contour along the upper half of $|R|$ and make semicircular bumps around the branch points, one may check (by letting $z=n\pi+ \epsilon e^{i\theta}$ ) that the contribution along the bumps vanishes. The integral along the big arc also tends to $0$ . Hence \begin{align} \mathscr{R} =&2\pi i{\rm Res}(f,i)-\frac{\pi}{2}\ln{2}\\ \end{align} Using WolframAlpha to compute the residue and simplify terms, $$\mathscr{R}=\frac{\pi}{e^2-1}-\pi-\frac{\pi}{2}\ln{2}+\frac{\pi}{2}\ln(e^2-1)$$
|
|calculus|real-analysis|integration|closed-form|
| 0
|
Is this a correct approach to calculating $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$?
|
We have just started covering the limit of sequences and I've stumbled upon this limit in our uni's exercises: $$\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$$ I've considered solving it using the fact that $\lim_{n\rightarrow \infty} {\sqrt[n]{a}}=1$ for $a>0$ . And since we're dealing with natural numbers, with the exception of $n=1$ , the expression $\ln(n)$ should be $>0$ , right? So is it correct to assume that $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}=1$ using this thought process?
|
Since we have $\dfrac{\ln(n)}n\to 0$ , for any given $\varepsilon>0$ there exists $n_0$ such that for $n\ge n_0$ we have $\dfrac{\ln(n)}{n}<\varepsilon$ . Therefore $1\le\ln(n)\le n\varepsilon\le 1+n\varepsilon\le(1+\varepsilon)^n\quad$ by binomial expansion, and taking the $n$ -th root you get $\quad 1\le\sqrt[n]{\ln(n)}\le 1+\varepsilon$ .
|
|calculus|limits|limits-without-lhopital|
| 0
|
Closed form of $\mathscr{R}=\int_0^{\pi/2}\sin^2x\,\ln\big(\sin^2(\tan x)\big)\,\,dx$
|
Inspired by Mr. Olivier Oloa in this question . Does the following integral admit a closed form? \begin{align} \mathscr{R}=\int_0^{\Large\frac{\pi}{2}}\sin^2x\,\ln\big(\sin^2(\tan x)\big)\,\,dx \end{align} It will be my last question before I take a long break from my activity on Mathematics StackExchange. So, please be nice. No more downvotes for no reason because this is a challenge problem . Edit : I am also interested in knowing the numerical value of $\mathscr{R}$ to the precision of at least $50$ digits. If you use Mathematica to find its numerical value, please share your method & the code.
|
$$I = \int_{0}^{\frac{\pi}{2}} \sin^2(x) \ln(\sin^2(\tan(x))) \, dx \\ = \underbrace{2 \int_{0}^{\infty} \frac{t^2}{(1 + t^2)^2} \ln(\sin(t)) \, dt}_{\text{Let } t = \tan(x)} \\$$ $$= 2 \int_{0}^{\infty} \frac{t^2}{(1 + t^2)^2} \left(-\ln(2) - \sum_{n=1}^{\infty} \frac{\cos(2nt)}{n}\right) \, dt$$ $$= -2 \ln(2) \int_{0}^{\infty} \frac{t^2}{(1 + t^2)^2} \, dt - 2 \sum_{n=1}^{\infty} \frac{1}{n} \int_{0}^{\infty} \frac{t^2 \cos(2nt)}{(1 + t^2)^2} \, dt$$ $$= -\frac{\pi}{2} \ln(2) - \frac{1}{2} \sum_{n=1}^{\infty} e^{-2n} \left(\pi - 2\pi n\right) = -\frac{\pi}{2} \ln(2) + \frac{\pi}{e^2 - 1} - \frac{\pi}{2} \left(2 - \ln(e^2 - 1)\right)$$
|
|calculus|real-analysis|integration|closed-form|
| 0
|
Compact set and total boundedness, confusion with definition
|
In Conway's Functions of one complex variable I the following definition of total boundedness is given: Let $(X, d)$ be a metric space; For every $\epsilon>0$ there are a finite number of points $x_1, ..., x_n$ in $X$ s.t. $X = \cup_{k=1}^{n}B(x_k;\epsilon).$ Then we have the following theorem: $X$ is compact iff $X$ is complete and is total bounded. My confusion comes from the definition of total boundedness using $=$ and not $\subset$ . For example if we take the metric space $([0, 1], d)$ , the theorem above implies there is a finite union of open sets that is equal to our metric space. If that is the case can someone show me an example of the construction of such a finite collection of open sets? I am confused because any union of open balls is an open set so I do not understand how a finite union of open sets can equal our metric space (I know our set [0, 1] is open and closed). For me the $\subset$ operator makes sense but not the $=$ . I am pretty sure I am missunderstanding som
|
When considering metric space $(X, d)$ all balls are contained in $X$ , as they are defined by $B(x, r) = \{y \in X: d(x, y) . In the case of $[0, 1]$ , you could write $$ [0, 1] = \bigcup_k B(k/n, 1/n) = [0, 1/n) \cup (0, 2/n) \cup (1/n, 3/n) \cup \ldots (1-2/n, 1) \cup (1 - 1/n, 1] $$
|
|real-analysis|complex-analysis|
| 1
|
Roll until 5 or 6 is obtained on die without mid-game cash out
|
This question is from QuantGuide(Busted 6 II): Suppose you play a game where you continually roll a die until you obtain either a 5 or a 6. If you receive a 5, then you cash out the sum of all of your previous rolls (excluding the 5). If you receive a 6, then you receive no payout. You do not have the decision to cash out mid-game. What is your expected payout? My Approach: In the case when we can't cash out mid-game the expected value will be 2.5. Now in the case of stopping midgame, we calculate the expected value at each stage of the throw. For the $i^{th}$ throw the expected value will be: \begin{equation} \frac{2}{3}^i(2.5i)+\frac{2}{3}^{i-1}(2.5(i-1))\frac{1}{6} \end{equation} The 2.5 value is due to at each throw the average value of the dice roll will be $\frac{1+2+3+4}{4}$ . The value I am getting is 2.59 but it doesn't pass. It would be great to get some help.
|
The game ends if you roll a $5$ or $6$ , with probability $\frac13$ . The expected number of trials until a trial with probability $\frac13$ succeeds is $3$ . So you expect to roll $3$ times, $2$ of which count towards the payout, each of which has an average of $2.5$ . The probability that you actually get that payout is $\frac12$ . So the expected payout is $2\cdot2.5\cdot\frac12=2.5$ .
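The same number also falls out of a direct sum (a cross-check I am adding, not part of the argument above): the game ends at roll $n$ with probability $\left(\frac23\right)^{n-1}\frac13$ , it ends on a $5$ with probability $\frac12$ , and the $n-1$ counted rolls average $2.5$ each, so $$\mathbb E[\text{payout}]=\sum_{n\ge1}\left(\frac23\right)^{n-1}\frac13\cdot\frac12\cdot 2.5\,(n-1)=\frac{2.5}{2}\big(\mathbb E[N]-1\big)=\frac{2.5}{2}\cdot 2=2.5,$$ where $N$ is the (geometric) number of rolls, with mean $3$ .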
|
|probability|probability-theory|expected-value|
| 0
|
Why does finding only one particular solution yield the general solution?
|
I am currently working with linear differential equations, but I lack some very foundational understanding for why they are solved certain ways. Suppose we have a simple equation, such as: $y'-2y=8x$ To find the general solution, we need to find the homogenous solution to the equation and one particular solution. I understand why, becuase of linearity, if $y_c$ is a solution to the homogenous equation, then $y_c+y_p$ will be a solution to the non-homogenous equation. But how come we only need to find one particular solution? Won't the one we choose change our answer? One particular solution to the equation above would be $y=-4x-2$ . But we could easily find another particular solution using the integrating factor $e^{-2x}$ . That would yield a particular solution: $y=-2e^{-2x}(2x+1)+C$ . So now we have two possible solutions that look widely different to me: $y=C_1e^{-2x}-4x-2$ $y=C_1e^{-2x}-2e^{-2x}(2x+1)+C_2$ I guess my questions are: Are both of these the same general solution? Why
|
If you think about second order linear ODEs, like $$\ddot{y}-3\dot{y}+2y=2$$ then it is possible for combining a homogenous solution with two different particular solutions to give two different answers. E.g. take $y_c=e^x$ as the homogenus solution and $y_1=1$ and $y_2=1+e^{2x}$ as the particular solutions. Using $y_1$ we get $$y=1+Ae^x$$ and using $y_2$ we get $$y=1+e^{2x}+Ce^x$$ These are two genuinely different answers. But the reason this is able to happen is that neither of them are full answers. You'll notice that if both of these are solutions, then their difference $y=e^{2x}$ must be a solution to the homogenous equation, and one that we missed to start with. If we include this, to get the full solution to the homogenous equation $$y_c=Ae^x+Be^{2x}$$ then we get the same answer whether we use $y_1$ or $y_2$ : $y_1$ gives us $$y=1+Ae^x+Be^{2x}$$ and $y_2$ gives us $$y=1+e^{2x}+Ce^x+De^{2x}=1+Ce^x+(D+1)e^{2x}$$ which you can see are the same. What's happened here is that as long
|
|ordinary-differential-equations|
| 1
|
Limit of sequence of sets
|
I have a sequence of sets $(S_k)$ . What could be the condition for $S_\infty$ (the one taken at infinity) to be subset of, or be equal to, the limit $\limsup S_k$ of the sequence? Eg: $S_k = \{x \in \mathbb{R}: x/k = 0\}$ . With this, $S_k = \{0\}$ for all $k$ and the sequence converges to this set, but $S_\infty = \mathbb{R}$ .
|
The condition is that $\lim \sup 1_{S_k}(x) = 1_{S_\infty}(x)$ where $1_{S_k}(x)$ is the indicator function of $S_k$
|
|sequences-and-series|convergence-divergence|
| 0
|
Minimum record of exponentials getting broken finitely/infinitely many times
|
Let us consider the following scenario: we have a sequence $(X_n)_{n\geq 1}$ of independent random variables, where for every integer $n\geq 1$ , $X_n$ is exponentially distributed with parameter $\lambda_n > 0$ . Let us define the random variable $m_n = \min \{X_1, \ldots, X_n\}$ for any integer $n\geq 1$ . We need to show that if $\sum\limits_{n=1}^{\infty}\lambda_n = \lambda < \infty$ , then, with probability 1, the minimum record gets broken only finitely many times, and if that same $\lambda$ is $\infty$ , then, with probability 1, the record gets broken infinitely many times. My attempt: It is known that $m_n$ is also exponential with parameter $\lambda_1 + \ldots + \lambda_n$ (for any positive integer $n$ ). Thus, one can easily see that $m_n \xrightarrow{d}\text{EXP}(\lambda)$ , if $\lambda < \infty$ , and $m_n \xrightarrow{d} 0$ if $\lambda=\infty$ . Now, let $N_n$ be the number of times the minimum record is broken up to step $n$ , i.e. $$N_n = \sum_{i=2}^n \mathbb{I}_{\{m_i = X_i\}}=\sum_{i=2}^
|
I think your work is already in the right direction. By the (first) Borel–Cantelli lemma (which doesn’t require independence), the probability of infinitely many records being set is $0$ if the sum of the probabilities of those events is finite. And indeed \begin{eqnarray*} \sum_i\mathbb P(X_i\lt m_{i-1}) &=& \sum_i\frac{\lambda_i}{\lambda_1+\cdots+\lambda_i} \\ &\le& \sum_i\frac{\lambda_i}{\lambda_1} \\ &=& \frac\lambda{\lambda_1} \\ &\lt& \infty\;. \end{eqnarray*} For the other result, we can’t use the second Borel–Cantelli lemma (at least I don’t see how to) because that does require independence. Instead, consider the events $B_n$ that the $X_i$ never go below $\frac1n$ . Their probability is $0$ : \begin{eqnarray*} \mathbb P(B_n) &=& \prod_i\mathbb P\left(X_i\ge\frac1n\right) \\ &=& \prod_i\exp\left(-\lambda_i\cdot\frac1n\right) \\ &=& \exp\left(-\sum_i\lambda_i\cdot\frac1n\right) \\ &=&0\;. \end{eqnarray*} Thus, the probability of the union of these countably many events is also
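A minimal simulation sketch of both regimes, assuming numpy; the choices $\lambda_n = 1/n^2$ (summable) versus $\lambda_n = 1$ (divergent sum) and the horizon $n$ are illustrative:

```python
# Count how often the running minimum of X_1, X_2, ... is strictly beaten.
import numpy as np

rng = np.random.default_rng(0)

def count_records(lams):
    x = rng.exponential(1.0 / lams)          # X_i ~ Exp(lambda_i), mean 1/lambda_i
    running_min = np.minimum.accumulate(x)
    return int(np.sum(x[1:] < running_min[:-1]))

n = 100_000
print(count_records(1.0 / np.arange(1, n + 1) ** 2))   # summable case: typically only a couple of records
print(count_records(np.ones(n)))                        # divergent case: grows like log n (around a dozen here)
```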
|
|probability-theory|maxima-minima|exponential-distribution|poisson-process|
| 0
|
Roll until 5 or 6 is obtained on die without mid-game cash out
|
This question is from QuantGuide(Busted 6 II): Suppose you play a game where you continually roll a die until you obtain either a 5 or a 6. If you receive a 5, then you cash out the sum of all of your previous rolls (excluding the 5). If you receive a 6, then you receive no payout. You do not have the decision to cash out mid-game. What is your expected payout? My Approach: In the case when we can't cash out mid-game the expected value will be 2.5. Now in the case of stopping midgame, we calculate the expected value at each stage of the throw. For the $i^{th}$ throw the expected value will be: \begin{equation} \frac{2}{3}^i(2.5i)+\frac{2}{3}^{i-1}(2.5(i-1))\frac{1}{6} \end{equation} The 2.5 value is due to at each throw the average value of the dice roll will be $\frac{1+2+3+4}{4}$ . The value I am getting is 2.59 but it doesn't pass. It would be great to get some help.
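As a worked restatement of the intended decomposition (a sketch, not from the original post): the payout is $S\cdot\mathbb{1}\{\text{stop on }5\}$, where $S$ is the sum of the non-stopping rolls; the stopping face is independent of $S$, so $\mathbb E[\text{payout}]=\tfrac12\,\mathbb E[S]$.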
|
Ignore for a second that there's only a $\frac12$ chance of getting the payout. Then consider the first roll. There's a $\frac13$ chance of getting 5 or 6, so you end with $0$ and a $\frac23$ chance of getting any other roll, which puts you back in the same position except with on average $\frac52$ added to your final score. So for this 'game' we have $$\mathbb{E}S=\frac13\cdot0+\frac23(\frac52+\mathbb{E}S)$$ Which tells us $$\frac13\mathbb{E}S=\frac53$$ so $\mathbb{E}S=5$ . The answer for your game will just be half of this.
|
|probability|probability-theory|expected-value|
| 0
|
Demonstrating Density Using the Stone-Weierstrass Theorem
|
In exploring the dense subsets of $C\left(I_n\right)$ , I've been particularly focused on the application of a foundational result: $\textbf{Stone-Weierstrass Theorem}$ : This theorem posits that for any compact set $K$ in $\mathbb{R}^n$ and an algebra $\mathcal{A}$ of continuous real-valued functions on this set, if $\mathcal{A}$ separates the points and includes constant functions, then $\mathcal{A}$ is dense in $C(K)$ . Our interest lies in the functions of the form: $$ G(x)=\sum_{k=1}^M \beta_k \prod_{j=1}^{N_k} \varphi\left(w_{j k}^T x+\theta_{j k}\right) \tag{1} $$ where $w_{j k} \in \mathbb{R}^n, \beta_k, \theta_{j k} \in \mathbb{R}$ , and $x \in I_n$ , under the $\textbf{valid property}$ that $\textbf{they form a dense subset in $C\left(I_n\right)$}$ using a nonconstant activation function $\varphi$ . My approach for proving it depends on identifying the set $\mathcal{U}$ , given by the finite sums of products of the specified type, as an algebra of real continuous functions on $I_n$ t
|
Your approach to applying the Stone-Weierstrass Theorem through the lens of neural network-like functions is indeed intriguing. One aspect that might further illuminate your proof is considering the richness of the space generated by your function $G(x)$ under different choices of the activation function $\varphi$ . Specifically, the role of $\varphi$ in ensuring the algebra $\mathcal{V}$ not only separates points but also universally approximates any function in $C\left(I_n\right)$ could be crucial. Have you explored how the specific properties of $\varphi$ , such as smoothness or boundedness, impact the density of $\mathcal{V}$ within $C\left(I_n\right)$ ? This angle might reveal deeper connections between the algebraic structure of $\mathcal{V}$ and its approximation capabilities.
|
|functional-analysis|measure-theory|compactness|approximation-theory|dense-subspaces|
| 0
|
On volumes of rectangles; Hunter
|
I have some questions regarding two basic results (but not so trivial to prove) in measure theory. It concerns the volumes of rectangles that cover another rectangle. The notes I'm reading are from here by Hunter. But I have summarized all the necessary details in this post. My question concerns the proposition, but it uses the definition below and the lemma. Definition 2.1. An $n$ -dimensional, closed rectangle with sides oriented parallel to the coordinate axes, or rectangle for short, is a subset $R\subset \mathbb R^n$ of the form $$R=[a_1,b_1]\times[a_2,b_2]\times\cdots\times[a_n,b_n]$$ where $-\infty < a_i \le b_i < \infty$ for $i=1,\ldots,n$ . The volume $\mu(R)$ of $R$ is $$\mu(R)=(b_1-a_1)(b_2-a_2)\ldots (b_n-a_n).$$ We say two rectangles are almost disjoint if they intersect at most along their boundaries. Lemma 2.5. Suppose that $$R=I_{1} \times I_{2} \times \cdots \times I_{n}$$ is an $n$ -dimensional rectangle where each closed, bounded interval $I_{i} \subset \mathbb{R}$ is an almost disjoint uni
|
Upon closer thought, I think I have figured out the answers to my questions: $R$ is an almost disjoint union of rectangles $\{R_1,\ldots,R_N\}$ , each which is in turn an almost disjoint union of rectangles from the collection $\left\{S_{j_{1} j_{2} \ldots j_{n}}: 1 \leq j_{i} \leq N_{i} \text { for } 1 \leq i \leq n\right\}$ . So the union of the rectangles $\{R_1,\ldots,R_N\}$ contains all such $S_{j_{1} j_{2} \ldots j_{n}}$ exactly once. This follows from the remark that discarding overlaps can only reduce the sum of the volumes. Here it's best to draw a picture, e.g. if $(R\cap R_1)\cap(R\cap R_2)\neq\emptyset$ , then discard this part. The author means the overlaps only add extra volume to the sum of volumes, hence discarding them reduces the sum of the volumes. It helps drawing a picture.
|
|measure-theory|
| 1
|
The need for independence in random sums when using law of total expectation
|
Let $(X_n : n\in \mathbb{N})$ be a sequence of i.i.d. random variables with mean $\mu$ and variance $\sigma^2$ . Let $S_0 = 0$ and $S_n = \sum _{i=1}^{n}X_i$ for $n \geq 1$ . Let $N$ be a non-negative integer-valued random variable. The question I am working on asks to assume further that $N$ is independent of the random variables $X_i$ , and to show that $\mathbb{E}(S_N)=\mu\mathbb{E}(N) $ . I know how to show this in the standard way using total expectation, by conditioning on $N$ : \begin{align*}\mathbb{E}(S_N)&=\sum_{n=0}^\infty \mathbb{E}(S_N|N=n)\mathbb{P}(N=n)\\ &=\sum_{n=0}^\infty \mathbb{E}(S_n)\mathbb{P}(N=n)\\ &=\sum_{n=0}^\infty n\mu \mathbb{P}(N=n) \\ &= \mu\mathbb{E}(N). \end{align*} My question is the following: Do we need $N$ to be independent of the random variables $X_i$ for the above argument to follow? I do not see where I used the independence of $N$ in the above working. On trying to find an answer to this, I found the following document concerning the proof of Wa
|
By the law of total expectation, the general formula is given by $$\color{blue}{E[S_N] = E[N \,E[X | N]]},$$ with $X_i \sim X, i=1,2,\dots .$ This can be simplified to $$E[S_N] = E[N]\mu,$$ when $N$ and $X$ are independent, as $E[X | N]= E[X]=\mu$ .
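A small simulation sketch of the identity, assuming numpy; the choices $N\sim\text{Poisson}(4)$ and $X_i\sim\text{Exp}(1)$ (so $\mu=1$) are illustrative, and the second block lets $N$ "peek" at $X_2$ to show one way the identity can fail without independence:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, mu = 100_000, 1.0

# Independent case: draw N first, then N fresh values of X.
totals = []
for _ in range(trials):
    n = rng.poisson(4)
    totals.append(rng.exponential(mu, size=n).sum())
print(np.mean(totals), "vs", mu * 4)            # the two numbers should be close

# Dependent case: N = 2 exactly when X_2 is large, otherwise N = 1.
totals, ns = [], []
for _ in range(trials):
    x1, x2 = rng.exponential(mu, size=2)
    n = 2 if x2 > 1.0 else 1
    totals.append(x1 + (x2 if n == 2 else 0.0))
    ns.append(n)
print(np.mean(totals), "vs", mu * np.mean(ns))  # these noticeably disagree
```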
|
|probability|stochastic-processes|
| 0
|
Distance between triangle incenter and vertices
|
after many researches on the subject, I can't find any convincing argument anywhere, so I come to you about this problem which has been brought by some of my high school students. Let $ABC$ be a random triangle and $I$ his incenter. It is known that $AB=c$ , $AC=b$ and $BC=a$ . I'm looking for a clean way to express the distance AI only from $a$ $b$ and $c$ parameters (no angles). When I searched on the internet, I found the formula : $$ AI^{2}=\frac{p-a}{p}bc$$ Where $p=\frac{a+b+c}{2}$ is the semi-perimeter. The formula is quite nice, but I can't find a proof. I tried with law of cosines, heron's formula, but I can't quite catch the idea which will bring me this particular formula. Any idea ?
|
We can use angle relations and convert the formulas to one in terms of $a$, $b$ and $c$. We can easily see that: $AI=\frac r{\sin\frac{\alpha}2}\quad(1)$ $BI=\frac r{\sin\frac{\beta}2}\quad(2)$ $CI=\frac r{\sin\frac{\gamma}2}\quad(3)$ We also know: $r=(p-a)\tan\frac{\alpha}2\quad(4)$ where $p=\frac{a+b+c}2$. (1)&(4) give: $AI=\frac{p-a}{\cos\frac{\alpha}2}\quad(5)$ We also know: $\cos\frac{\alpha}2=\sqrt{\frac{p(p-a)}{bc}}\quad(6)$ (5)&(6) give: $AI=\frac{\sqrt{bc(p-a)}}{\sqrt p}\Rightarrow AI^2=\frac{p-a}p bc\quad(7)$ If you want to prove for example formula (6), we have: $a^2=b^2+c^2-2(bc)\cos\alpha\Righ
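A quick numeric sanity check of formula (7), assuming numpy; the vertex coordinates below are an arbitrary non-degenerate triangle, and the incenter is computed from the standard side-length-weighted average of the vertices:

```python
import numpy as np

A = np.array([0.0, 0.0])
B = np.array([4.0, 0.0])
C = np.array([1.0, 3.0])

a = np.linalg.norm(B - C)   # side opposite A
b = np.linalg.norm(A - C)   # side opposite B
c = np.linalg.norm(A - B)   # side opposite C
p = (a + b + c) / 2

# incenter as the side-length-weighted average of the vertices
I = (a * A + b * B + c * C) / (a + b + c)

print(np.linalg.norm(A - I) ** 2)   # AI^2 from coordinates
print((p - a) / p * b * c)          # the closed form (7); the two agree
```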
|
|geometry|euclidean-geometry|triangles|triangle-centres|
| 0
|
Expectation of the squared conditional expectation
|
I am considering the expectation of the squared expectation, as asked here but with no answer and so wanted to get the communities thoughts. Since $E[Y|X]$ is not independent with itself then the first line of the equation below should be true. I then go on to try to derive the answer using the definitions but would be grateful if someone could point out where I am going wrong. $$\begin{align} E[E(Y|X)^2] & \neq E[E(Y|X)] \cdot E[E(Y|X)] \\[10pt] & = \int_{x \in A_x} E(Y|X)^2 f_{X}(x) dx \\[10pt] & = \int_{x \in A_x} \Big( \int_{y \in A_y} y f_{Y|X}(y|x) dy \Big)^2 f_{X}(x) dx \\[10pt] & = \int_{x \in A_x} \int_{y \in A_y} y \ f_{Y|X}(y|x)f_{X}(x) \ dx \ dy \ \Big( \int_{y \in A_y} y \ f_{Y|X}(y|x) \ dy \Big) \\[10pt] & = \int_{y \in A_y} y \left\{ \int_{x \in A_x} f(y,x)\ dx \right\} \ dy \ \Big( \int_{y \in A_y} y \ f_{Y|X}(y|x) \ dy \Big) \\[10pt] & = \int_{y \in A_y} y f_{Y}(y) \ dy \ \Big( \int_{y \in A_y} y \ f_{Y|X}(y|x) \ dy \Big) \\[10pt] & = E[Y] \cdot E[Y|X] \end{align}$$ In
|
You are going wrong on the fourth line. You cannot distribute the second conditional expectation outside the scope of the outer integral. Both are conditioned on the same variable $(x)$ . $\quad\begin{align}\int_\Bbb R\left(\int_\Bbb R y f_{\small Y\mid X}(y\mid x)\,\mathrm dy\right)^2f_{\small X}(x)\,\mathrm d x &=\int_\Bbb R f_{\small X}(x)\int_\Bbb R yf_{\small Y\mid X}(y\mid x)\int_\Bbb R uf_{\small Y\mid X}(u\mid x)\,\mathrm d u\,\mathrm d y\,\mathrm d x\\[1ex]&= \iint_{\Bbb R^2} y f_{\small X,Y}(x,y)\int_\Bbb R uf_{\small Y\mid X}(u\mid x)\,\mathrm d u\,\mathrm d (x,y)\\[2ex] &= \mathbb E(Y\,\mathbb E(Y\mid X))\end{align}$
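A small Monte Carlo illustration of the gap between $E[(E(Y|X))^2]$ and $E[Y]^2$, assuming numpy; the model $Y=X+Z$ with $X,Z$ independent standard normals (so $E(Y|X)=X$) is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.standard_normal(n)
y = x + rng.standard_normal(n)

cond_exp = x                     # E[Y | X] = X for this model
print(np.mean(cond_exp ** 2))    # close to 1  (= E[(E[Y|X])^2])
print(np.mean(y) ** 2)           # close to 0  (= E[Y]^2), so the two are not equal
```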
|
|probability-theory|conditional-probability|conditional-expectation|
| 0
|
Why does finding only one particular solution yield the general solution?
|
I am currently working with linear differential equations, but I lack some very foundational understanding for why they are solved certain ways. Suppose we have a simple equation, such as: $y'-2y=8x$ To find the general solution, we need to find the homogeneous solution to the equation and one particular solution. I understand why, because of linearity, if $y_c$ is a solution to the homogeneous equation, then $y_c+y_p$ will be a solution to the non-homogeneous equation. But how come we only need to find one particular solution? Won't the one we choose change our answer? One particular solution to the equation above would be $y=-4x-2$ . But we could easily find another particular solution using the integrating factor $e^{-2x}$ . That would yield a particular solution: $y=-2e^{-2x}(2x+1)+C$ . So now we have two possible solutions that look widely different to me: $y=C_1e^{-2x}-4x-2$ $y=C_1e^{-2x}-2e^{-2x}(2x+1)+C_2$ I guess my questions are: Are both of these the same general solution? Why
|
If you apply the integrating factor to get $$ (e^{-2x}y)'=e^{-2x}(8x), $$ then the integration preserves that factor on the left side, so you get $$ e^{-2x}y(x)=-2e^{-2x}(2x+1)+C $$ Isolating the solution gives $$ y(x)=-2(2x+1)+Ce^{2x}, $$ so that you gain nothing new over the solution from the first method.
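A short symbolic check of this computation, assuming sympy; it also confirms that the corrected "second" particular solution $-2(2x+1)$ is literally the same as $-4x-2$:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x) - 2*y(x), 8*x), y(x))
print(sol)   # typically Eq(y(x), C1*exp(2*x) - 4*x - 2), up to how sympy arranges terms

# the two "different-looking" particular solutions differ by nothing at all here
print(sp.simplify((-2*(2*x + 1)) - (-4*x - 2)))   # 0
```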
|
|ordinary-differential-equations|
| 0
|
is it true that a certain induction of an H-irrep $\psi$ is irreducible iff there's a G-conjugacy class on which $\chi_\psi$ is nonconstant
|
Assume $H\subset G$ is a subgroup of index $2$ . Let a G-conjugacy class mean the elements conjugate to a fixed h in G. Prove or disprove whether the induced representation $Ind_H^G (\psi)$ of an H-irrep $\psi$ is irreducible iff there's a G-conjugacy class on which $\chi_\psi$ is nonconstant. By definition, the induced representation acts on a direct sum of two vector spaces that are isomorphic to the two cosets of H, say $g_i H$ for $i=1,2$ . Normally, characters are constant on conjugacy classes. If $\chi_\psi$ is not constant on a conjugacy class, I'm not sure how this relates to the irreducibility of $\psi$ . I know that for each $g\in G$ and each $i, \exists h_i\in H, j(i)\in \{1,2\}$ so that $g g_i = g_{j(i)} h_i$ and $g\cdot \sum_{i=1}^2 g_i v_i = \sum_{i=1}^2 g_{j(i)} \psi(h_i) v_i$ , but again I'm not sure how to proceed from here.
|
We proceed by contrapositive. Suppose that $\psi^G:=Ind_H^G(\psi)$ is constant over $\text{cl}_G(h)$ for every $h\in H$ . Let $g\in G\setminus H$ . By Frobenius reciprocity it holds that $[\psi^G,\psi^G]=[(\psi^G)_H,\psi]$ . Let $h\in H$ . Then $$(\psi^G)_H(h)=\psi(h)+\psi(ghg^{-1})=\psi(h)+\psi(h)=2\psi(h).$$ Thus $(\psi^G)_H=2\psi$ and $[\psi^G,\psi^G]=2$ , so $\psi^G$ is not irreducible. On the other hand, if $\psi^G$ is not irreducible then $1\not=[\psi^G,\psi^G]=[(\psi^G)_H,\psi]\leq2$ , since $\psi^G(1)=2$ . Then $(\psi^G)_H=2\psi$ . Let $h\in H$ and let $g\in G$ . Then, $$2\psi(h^g)=(\psi^G)_H(h^g)=\psi^G(h^g)=\psi^G(h)=(\psi^G)_H(h)=2\psi(h),$$ as $\psi^G$ is a class function. Then $\psi(h^g)=\psi(h)$ and we are done.
|
|abstract-algebra|group-theory|representation-theory|irreducible-representation|
| 0
|
If the set of vectors $U$ is linearly independent in a subspace $X$, then can vectors be removed from $U$ to create a basis for $X$?
|
I believe the answer is False, but only based on an intuitive feeling right now as I cannot manage to definitively prove why, ie, I've been trying to find a counterexample but can't. Also, does the fact that $U$ is a "subspace" rather than a vector-space change anything? (I don't think it does). My current reasoning is: By the definition of a basis, we should only consider spanning sets. But if it were also linearly independent, then the set would need to be minimally spanning, thus we cannot remove any vectors.
|
If $U$ spans $X$ then $U$ is a basis already. If $U$ does not span $X$ then you won't get a basis by removing any vectors. Counterexample: $ U = \{e_1\}$ and $X = \mathbb{R}^2$ .
|
|linear-algebra|vector-spaces|
| 0
|
How many ways are there to choose 8 coins from 100 identical pennies and 100 identical nickels?
|
The answer is $9$ ways. There are $i$ pennies and $8-i$ nickels. $i$ can be $0,1,...,8$ . Why is the answer not ${8 \choose 0}+{8 \choose 1}+{8 \choose 2}+{8 \choose 3}+{8 \choose 4}+{8 \choose 5}+{8 \choose 6}+{8 \choose 7}+{8 \choose 8}$ ?
|
The pennies are identical. So are the nickels. That means it doesn't matter which pennies or nickels are chosen. All that matters is how many of each are chosen. We can focus on the number of pennies (or nickels); the number of the other type will be automatically determined. Of the $8$ coins, $0$ , $1$ , $2$ , ..., or $8$ can be pennies. So $9$ ways. You are treating the situation as if the coins are arranged in sequence and it matters which positions are occupied by each kind of coin (which is clearly not what is asked). Meaning, if there are $i$ pennies, they can occupy any $i$ of the $8$ positions, so ${8 \choose i}$ ways. Again, this is wrong and will result in a way bigger number than the correct answer.
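A tiny enumeration check, assuming Python's itertools: since only the multiset of coin types matters, counting size-$8$ multisets drawn from two types gives $9$:

```python
from itertools import combinations_with_replacement

selections = list(combinations_with_replacement(["penny", "nickel"], 8))
print(len(selections))   # 9, matching "0, 1, ..., or 8 pennies"
```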
|
|combinatorics|
| 0
|
Showing that the set of open disks $U:=\{B((0,r+1/r),r):r>1\}$ forms an open cover over the upper half-plane
|
I am trying to demonstrate that the set of open disks $$U:=\{B((0,r+1/r),r):r>1\}$$ forms an open cover over the upper half-plane. I have tried to show that for every $(x,y)$ in the upper half-plane, there exists an $r$ such that $d((x,y),(0,r+1/r)) < r$ , but it seems the harder I try, the more confused I get. I am not so good with formal proofs, so any help would be appreciated!
|
We want to show $f(r)=r^2-\left( x_0^2+(y_0-r-\frac{1}{r})^2 \right)=2(r+\frac{1}{r})y_0-x_0^2-y_0^2-\frac{1}{r^2}-2>0$ for sufficiently large $r$ . This is almost obvious, since the term $2(r+\frac{1}{r})y_0$ can be made arbitrarily large (recall $y_0>0$ on the upper half-plane), while the other terms are bounded.
|
|real-analysis|
| 1
|
What is the empty tensor product of vector spaces?
|
The tensor product of a space with itself once is $V^{\otimes1}$ , but what is $V^{\otimes0}$ ? Since it is an empty tensor product, it is - a fortiori - an empty product. So I'm looking for a " $1$ " of some sort, just not sure what that would mean in this context. "If I take the tensor product of a vector space with itself zero times, I would get ...", and I am guessing here, but is it the underlying field, $\mathbb{F}$ ?
|
You are correct. In any context where one has a notion of "product" and a "unit" for that product, the proper convention is that the "empty product" equals the "unit". If we are working with vector spaces over a field $F$ and the corresponding tensor product $\otimes_F$ , the corresponding unit is the $F$ -vector space $F$ itself. Indeed, there is a canonical isomorphism $V\otimes_FF\rightarrow V$ for any $F$ -vector space $V$ given by $v\otimes\lambda\mapsto\lambda v$ (and similarly in the other order). Thus, the empty tensor product is $V^{\otimes0}=F$ .
|
|tensor-products|tensors|multilinear-algebra|
| 0
|
$\det(\bar B)=0$ for $\bar B$ columned matrix with only $2$ non-null entries equal to $1$ and $-1$?
|
Reading about graph theory, I am presented with the case in which a square incidence submatrix of a directed graph has, in each column, only 2 non-zero entries (from the definition of the incidence matrix it is known that there will only be 2 entries for each column then these entries are $1$ and $−1$ ). Here is some example incidence matrix. $$B= \begin{pmatrix}1&0&1&0\\0&-1&-1&-1\\0&0&0&1\\-1&1&0&0 \end{pmatrix} $$ that is the incidence matrix for an oriented graph, now the submatrix of the type of interest would be: $$B_{(3,4)} = \bar B = \begin{pmatrix}1&0&1\\0&-1&-1\\-1&1&0 \end{pmatrix}$$ We delete row $3$ and column $4$ of $B.$ Therefore, each column has a sum equal to $0$ . 1. This is what I understand by column having sum equal to $0$ : For column $3$ of $\bar B=\begin{pmatrix}1\\-1\\0\end{pmatrix}=$ $$\sum_{i=1}^3 B_{(i,3)}=1+(-1)+0=\color{red}{0}$$ 2. I calculate the determinant by looking for a lower triangular matrix: $$det(\bar B)=\begin{vmatrix}1&0&1\\0&-1&-1\\-1&1&0 \en
|
A square matrix $M\in M_{n\times n}(\mathbb{R})$ has determinant $0$ if and only if its row vectors (or, equivalently, its column vectors) are linearly dependent. If we denote these vectors by $v_1,v_2,\ldots, v_n$ we have that $$1\cdot v_1+1\cdot v_2+\cdots+1\cdot v_n=0,$$ that is, a non-trivial linear combination of $v_1,\ldots,v_n$ that is equal to $0$ . For this reason, $v_1,\ldots,v_n$ are linearly dependent and $\det(M)=0$ .
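A quick numeric confirmation for the example matrix $\bar B$, assuming numpy:

```python
import numpy as np

# every column holds exactly one 1 and one -1, so the rows sum to the zero vector
B_bar = np.array([[ 1,  0,  1],
                  [ 0, -1, -1],
                  [-1,  1,  0]])

print(B_bar.sum(axis=0))      # v1 + v2 + v3 = [0 0 0]
print(np.linalg.det(B_bar))   # 0.0, up to floating-point round-off
```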
|
|linear-algebra|matrices|graph-theory|determinant|
| 1
|
A conditional probability problem with no joint probability
|
There are 4 different events, $A$ , $B$ , $C$ , and $D$ , and their probabilities are 0.3, 0.4, 0.6, and 0.7, respectively, but we don't know the joint probability of any of them. Additionally, we already know that exactly 2 of those 4 events must occur. Is the information given above enough to calculate $P(B|¬A)$ ? Thanks.
|
$ \textbf {Example I}$ : draw an integer uniformly from $\{1, \cdots, 10\}$ . Define our events as $$A_1=\{1,2,3\}\quad B_1=\{7,8,9,10\} \quad C_1=\{1,2,3,4,5,6\}\quad D_1=\{4,5,6,7,8,9,10\}$$ We see that, as desired, each draw belongs to exactly two of your events, and we see that $P(B_1\,|\,A_1^c)=\frac 47$ $ \textbf {Example II}$ : Same process. Now define $$A_2=\{1,2,3\}\quad B_2=\{1,2,3,4\} \quad C_2=\{5,6,7,8,9,10\}\quad D_2=\{4,5,6,7,8,9,10\}$$ Again, one can verify that each draw is in exactly two events. This time, however, $P(B_2\,|\,A_2^c)=\frac 17$ So it is not determined.
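A direct enumeration of both examples, assuming plain Python with the fractions module; it reproduces $4/7$ and $1/7$:

```python
from fractions import Fraction

omega = range(1, 11)   # uniform draw from {1,...,10}

def cond_prob(B, A):
    """P(B | complement of A) under the uniform measure on omega."""
    not_A = [w for w in omega if w not in A]
    return Fraction(sum(1 for w in not_A if w in B), len(not_A))

A1, B1 = {1, 2, 3}, {7, 8, 9, 10}
A2, B2 = {1, 2, 3}, {1, 2, 3, 4}
print(cond_prob(B1, A1))   # 4/7
print(cond_prob(B2, A2))   # 1/7
```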
|
|probability|conditional-probability|
| 1
|
Uniqueness in Seifert-van Kampen theorem in Munkres' Topology
|
Theorem 70.1 (Seifert-van Kampen theorem). Let $X=U \cup V$ , where $U$ and $V$ are open in $X$ ; assume $U, V$ , and $U \cap V$ are path connected; let $x_0 \in U \cap V$ . Let $H$ be a group, and let $ \phi_1: \pi_1\left(U, x_0\right) \longrightarrow H \ \text { and } \ \phi_2: \pi_1\left(V, x_0\right) \longrightarrow H $ be homomorphisms. Let $i_1, i_2, j_1, j_2$ be the homomorphisms indicated in the following diagram, each induced by inclusion. If $\phi_1 \circ i_1=\phi_2 \circ i_2$ , then there is a unique homomorphism $\Phi: \pi_1\left(X, x_0\right) \rightarrow H$ such that $\Phi \circ j_1=\phi_1$ and $\Phi \circ j_2=\phi_2$ . This theorem says that if $\phi_1$ and $\phi_2$ are arbitrary homomorphisms that are "compatible on $U \cap V$ ," then they induce a homomorphism of $\pi_1\left(X, x_0\right)$ into $H$ . Proof. Uniqueness is easy. Theorem 59.1 tells us that $\pi_1\left(X, x_0\right)$ is generated by the images of $j_1$ and $j_2$ . The value of $\Phi$ on the generator $j_1\l
|
This is the obvious consequence of the property that $\Phi \circ j_k = \phi_k$ for $k = 1, 2$ . $\pi_1(X,x_0)$ is generated by the set $\Gamma = j_1(\pi_1(U,x_0)) \cup j_2(\pi_1(V,x_0))$ . For each $g_1 \in \pi_1(U,x_0)$ we have $\Phi(j_1(g_1)) = (\Phi \circ j_1)(g_1) = \phi_1(g_1)$ and for each $g_2 \in \pi_1(V,x_0)$ we have $\Phi(j_2(g_2)) = (\Phi \circ j_2)(g_2) = \phi_2(g_2)$ . Hence $\Phi \mid_\Gamma$ is uniquely determined by $\phi_1$ and $\phi_2$ , and since a homomorphism is determined by its values on a generating set, $\Phi$ itself is unique.
|
|algebraic-topology|homotopy-theory|group-homomorphism|
| 0
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
I think your confusion is ultimately about how you prove a universal sentence (each object of type X has property P) by induction. To prove a universal sentence, you consider an arbitrary object O of type X. Then you do some arguing, aiming to end up with the conclusion that object O has property P. To use an assumption which has the form of a universal sentence in your proof, you simply feed it an object of type X of your choice and it allows you to jump to the conclusion that the object has property P. The difference is that previously, you were forced to consider an arbitrary object, whereas here you get to choose which object you apply the assumption to. Back to the proof. You are trying to prove that each object of type X with cardinality $n+1$ has property P, under the assumption that this holds for all objects of type X of cardinality $n$ . Very well, consider an arbitrary object O of type X with cardinality $n+1$ . This is how the proof must start (unless you are going for a pr
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
Confused by the notation in Steele's Stochastic Calculus
|
I recently started Steele's Stochastic Calculus text and was going through the first chapter. While making a case for why $\tau$ is finite, he introduces a superscript $d$ on the variables without any prior mention of it. For anybody who has studied the text before: could you perhaps offer some clarity on what this denotes and means in this context? I would be extremely grateful for any responses.
|
The superscript $d$ on $\tau$ means the standard thing; i.e. $\tau^d$ is $\tau$ to the power of $d$ .
|
|stochastic-calculus|
| 1
|
Countable, self-similar total orders
|
A total order $I$ is said to be weakly self-similar if there exists a proper subset $J \subsetneq I$ together with a bijective, order-preserving function $f:I \to J$ (that is, $J$ is isomorphic to $I$ ). Analogously, $I$ is said to be strongly self-similar if there exists an element $i \in I$ such that one of the total orders $$I_{< i}, \quad I_{\ge i}$$ is isomorphic to $I$ . Such sets are defined as you imagine, e.g. $I_{< i} = \{\, j \in I : j < i \,\}$ . Do there exist self-similar total orders on a countable set, whether in a weak or strong sense?
|
As noted in the comments, the question is quite trivial: $\omega$ provides an example for both questions. Regarding a more stringent definition, that is $I_{< i}$ and $I_{\ge i}$ being both isomorphic to $I$ , one can consider $\mathbb{Q} \cap [0,1)$ and splitting at any value $q \in \mathbb{Q} \cap (0,1)$ , e.g. $q = 1/2$ . As a side note, observe that such countable total orders cannot define a "fractal behavior": if we define $T^0 I := I_{< i}$ and $T^1 I := I_{\ge i}$ as the two orders in which $I$ is split, 'most' of the sets $$ T^{\textbf{c}} I := \bigcap_{n=1}^{\infty} T^{c_n} \ldots T^{c_1} I$$ for $\textbf{c} := (c_1, \ldots, c_n, \ldots ) \in \{0,1\}^{\mathbb{N}}$ will be empty, since there are uncountably many such intersections. This is what misled me...
|
|logic|set-theory|order-theory|fractals|
| 0
|
Are all 2-connected graphs planar?
|
I know that all trees are planar , and so now I'm wondering whether 2-connected graphs are necessarily planar. I would imagine that this is true given that all 2-connected graphs have an ear decomposition (Theorem 4.10 in [ 1 ]), and that every ear decomposition I have seen is planar, like the following example (Figure 4.3, from [ 1 ]): So to be sure, I naturally tried to prove that ear decompositions are planar, but I'm uneasy about my proof: Proof . Let $G$ be a 2-connected graph with an ear decomposition $E = G_0, G_1, \ldots, G_k$ , where $G_0$ is a cycle and $G_i$ are ears for $1 \leq i \leq k$ . We proceed by induction on the number of ears, with the base case being $k = 1$ ( $E = G_0, G_1$ ), which is obviously* planar. So, we assume the statement is true for any graph with an ear decomposition having $k - 1$ ears, and we let $G$ have $k$ ears. Let $P = G_k\setminus \{u, v\}$ , where $G_k$ is the "outer" ear of $G$ and $u, v$ are the end-vertices of $G_k$ , and note that $G - P$
|
No. For any fixed $k\geq 1$ , let $n\gg k$ , let $F,H$ be two copies of a complete graph on $n$ vertices. Identify a copy of $K_k$ in $F$ and another one in $H$ , and merge them together, akin to a $k$ -clique-sum keeping all edges. The graph has connectivity exactly $k$ and is non-planar. E.g. with $n=5,k=2$ , the following graph is $2$ -connected, not $3$ -connected, and non-planar because it contains $K_5$ . The issue with your proof is that "outer" is not defined if the graph is non-planar.
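A small sketch of the counterexample with $n=5$, $k=2$, assuming networkx; two copies of $K_5$ glued along an edge are $2$-connected but not planar:

```python
import networkx as nx

F = nx.complete_graph(5)                                   # vertices 0..4
H = nx.relabel_nodes(nx.complete_graph(5),
                     {0: 0, 1: 1, 2: 5, 3: 6, 4: 7})       # shares the edge {0, 1} with F
G = nx.compose(F, H)                                       # glue the two copies together

print(nx.node_connectivity(G))     # 2
print(nx.check_planarity(G)[0])    # False, since G contains K5
```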
|
|graph-theory|solution-verification|planar-graphs|graph-connectivity|
| 1
|
Posterior probability of Wiener process within interval
|
Suppose that we have a Wiener process $x(t)$. Also we know that the random variable $x(h)$ ($h>0$) follows the $\mathcal{N}(0,h)$ normal distribution. Assume that we observe the stochastic process $x(t)$ at $t=h$; the exact value is unknown but we know that $|x(h)| < \rho$. I believe that the posterior probability distribution of $x(h)$ conditioned on $|x(h)| < \rho$ will not be $\mathcal{N}(0,h)$ anymore. In this case, we need to use Bayes' rule to calculate the posterior probability: \begin{equation} f(x(h) \mid |x(h)| < \rho) = \frac{f(|x(h)| < \rho \mid x(h))\, f(x(h))}{f(|x(h)| < \rho)}. \end{equation} $f(x(h))$ is a normal distribution. $f(|x(h)| < \rho)$ may be calculated by the Kolmogorov backward equation. But I do not know how to specify the boundary conditions. In addition, I do not know how to compute $f(|x(h)| < \rho)$. Some approximations can be made, such as $x(h)$ conditioned on $|x(h)| < \rho$ follows a uniform distribution on $[-\rho,\rho]$ or a truncated normal distribution on $[-\rho,\rho]$. I want to know how to exactly calculate the distribution $f(x(h) \mid |x(h)| < \rho)$ even though it may be computa
|
$$\begin{aligned}P(B_h\leq x \,\big|\, |B_h|<\rho) &= \frac{P(-\rho < B_h\leq x)}{P(|B_h|<\rho)} \\ &= \frac{\Phi\!\left(x/\sqrt h\right)-\Phi\!\left(-\rho/\sqrt h\right)}{\Phi\!\left(\rho/\sqrt h\right)-\Phi\!\left(-\rho/\sqrt h\right)}, \qquad |x|\leq \rho, \end{aligned}$$ where $\Phi$ denotes the standard normal CDF. The derivative wrt $x$ within the $\rho$ -radius ball is: $$\frac{d}{dx}P(B_h\leq x \,\big|\, |B_h|<\rho) = \frac{\varphi\!\left(x/\sqrt h\right)/\sqrt h}{\Phi\!\left(\rho/\sqrt h\right)-\Phi\!\left(-\rho/\sqrt h\right)},$$ i.e. the density of a $\mathcal N(0,h)$ distribution truncated to $[-\rho,\rho]$, with $\varphi$ the standard normal density.
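A Monte Carlo check of the truncated-normal description, assuming numpy and scipy; $h=1$, $\rho=0.5$ and $x=0.2$ are arbitrary illustrative parameters:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
h, rho, x = 1.0, 0.5, 0.2

samples = rng.normal(0.0, np.sqrt(h), size=2_000_000)
kept = samples[np.abs(samples) < rho]          # condition on |B_h| < rho

empirical = np.mean(kept <= x)
theoretical = (norm.cdf(x / np.sqrt(h)) - norm.cdf(-rho / np.sqrt(h))) / (
    norm.cdf(rho / np.sqrt(h)) - norm.cdf(-rho / np.sqrt(h))
)
print(empirical, theoretical)   # the two values agree to a few decimal places
```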
|
|probability|conditional-probability|brownian-motion|
| 0
|
Torsion elements of $SL_3(\mathbb{F}_p[x])$? (Quick question)
|
Is every element of $SL_3(\mathbb{F}_p[x])$ a torsion element? Here are my thoughts: First of all, the group is noncommutative, so a torsion element is an element of finite order. I'm thinking of examples of torsion elements: Any generator (since a generator p times is the identity.) Any element that has a rotation matrix as one of its factors. And there are a bunch more examples. Then I can't think of any element that does not have torsion in this group. Could someone please refer me to a result that makes this more precise? Your help will be very much appreciated!
|
There are elements of infinite order in this group. Consider the matrix $$ A=\left(\begin{array}{cc}1&x\\ x&1+x^2\end{array}\right)\in SL_2(\Bbb{F}_p[x]). $$ I claim that the order of $A$ is infinite. Many ways to see this. See the comments by KCd below this post for a very compact argument. Another explicit proof goes as follows. Let $K$ be an algebraic closure of the prime field, $K=\overline{\Bbb{F}_p}$ . The characteristic polynomial of $A$ is $$ \chi_A(T)=T^2-(2+x^2)T+1. $$ This has no zeros in $K$ . For if $\lambda$ is a root of $\chi_A(T)$ , then $1/\lambda$ is the other. But their sum $\lambda+1/\lambda=2+x^2$ is transcendental over the prime field. Hence the eigenvalues of $A$ cannot be roots of unity (those are all contained in $K$ ). Consequently $A^M\neq I_2$ for all integers $M>0$ . QED It is trivial to extend $A$ to a $3\times3$ matrix in $SL_3(\Bbb{F}_p[x])$ by adding a $1$ along the diagonal. The resulting matrix must have infinite order as well. My first solution (see
|
|abstract-algebra|group-theory|algebraic-geometry|geometric-group-theory|torsion-groups|
| 0
|
Weak equivalence of filtered Colimit
|
Given a model category $C$ , I have two functors $F,G:\mathbb{N}\rightarrow C$ , where we view $\mathbb{N}$ as the sequence category. Question: Given a natural transformation $J:F\rightarrow G$ , and supposing $J$ is a pointwise weak equivalence, when will the induced map on colimits be a weak equivalence? More specifically, in the notes A primer on homotopy colimits by Daniel Dugger, page 10 ( https://pages.uoregon.edu/ddugger/hocolim.pdf ), I want to know what theorem enables us to conclude from the picture that $|X|\rightarrow |Y|$ is a weak equivalence.
|
In general, colimits preserve levelwise weak equivalences when the colimit is also a homotopy colimit . In this case, it is the fact that you are taking a sequential colimit of cofibrations between cofibrant objects in the Kan model structure on simplicial sets that gives you that the colimit preserves levelwise weak equivalences (see e.g. Proposition 17.9.1. in Hirschhorn's Model Categories and Their Localizations for a proof). Alternatively, we can note that such a diagram is Reedy cofibrant, so that the strict colimit also models the homotopy colimit of the diagram because in Hirschhorn's terminology the poset category $\mathbb{N}$ has fibrant constants. (Definition 15.10.1, and see Theorem 15.10.8 for the reason this is important.)
|
|category-theory|simplicial-stuff|model-categories|
| 0
|
Limit notation (for taking multiple limits at once)
|
This is probably a silly question, but I am interested in looking at limits of multi-variable functions, such as \begin{equation}\lim_{x_1\to\infty}\lim_{x_2\to\infty}\cdots\lim_{x_m\to\infty}f(x_1,\cdots,x_m).\end{equation} Is it notationally acceptable to simply write the above as \begin{equation}\lim_{\substack{x_i\to\infty\\i=1,\cdots,m}}f(x_1,\cdots,x_m)?\end{equation} This notation would be slightly less space-consuming which is why I thought it would be a good idea but I'm not sure if this is standard notation.
|
The notation you suggest might be ambiguous, since the limits might not commute, i.e. we might have (in the case $m=2$ here) $$ \lim_{x_1\to\infty}\lim_{x_2\to\infty}f(x_1,x_2)\neq\lim_{x_2\to\infty}\lim_{x_1\to\infty}f(x_1,x_2). $$ To give an example, consider $$ f(x_1,x_2)=\bigg(1+\frac{1}{x_1}\bigg)^{x_2}, $$ with $x_1,x_2>0$ . Then $$ \lim_{x_1\to\infty}\lim_{x_2\to\infty}f(x_1,x_2)=\infty\neq1=\lim_{x_2\to\infty}\lim_{x_1\to\infty}f(x_1,x_2). $$
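A symbolic confirmation of this example, assuming sympy; the two orders of iterated limits give different values:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
f = (1 + 1/x1) ** x2

print(sp.limit(sp.limit(f, x2, sp.oo), x1, sp.oo))   # oo
print(sp.limit(sp.limit(f, x1, sp.oo), x2, sp.oo))   # 1
```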
|
|functional-analysis|algebra-precalculus|limits|multivariable-calculus|notation|
| 0
|