| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
why is $\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!}$ = 8 GRE subject problem
|
$$\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!} = 8 $$ Hi, this is from the GRE subject practice test. I know the answer but don't understand why it is so. My guess is $\sum_{n=0}^\infty \frac{x^n}{n!}= e^x$ so $e^{3\log2}= e^{\log(2^3)}=8$. I tried $\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!} = \sum_{n=1}^\infty \frac{(3\log2)^{n-1}}{n!}$ but wasn't able to separate 1 from $(3\log2)^{n-1}$ to get something like $\sum_{n=0}^\infty \frac{x^n}{n!}$
|
It strongly appears that somewhere, something got copied incorrectly. @Jochen gave the correct answer to the question as it stands currently: $$\sum_{n=0}^\infty\frac{(3\ln 2)^n}{(n+1)!} = \frac{7}{3\ln 2}.$$ However, if the denominator in the original series is $n!$ instead of $(n+1)!$, the 8 listed as the answer in OP's question becomes correct: $$\sum_{n=0}^\infty\frac{(3\ln 2)^n}{n!} = \sum_{n=0}^\infty\frac{x^n}{n!} =e^{\ln 2^3}=8$$ using $x=3\ln 2 = \ln 8.$ Note: Throughout this question and these answers, $\log y$ is interpreted as $\log y = \ln y = \log_e y$.
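For completeness, here is a one-line check of the $(n+1)!$ value (an added note, not part of the quoted answers): $$\sum_{n=0}^\infty\frac{x^n}{(n+1)!}=\frac1x\sum_{n=0}^\infty\frac{x^{n+1}}{(n+1)!}=\frac{e^x-1}{x},\qquad\text{so with }x=3\ln 2:\quad \frac{e^{3\ln 2}-1}{3\ln 2}=\frac{7}{3\ln 2}.$$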
|
|real-analysis|calculus|
| 0
|
Least square derivatives
|
Let $X_1, \ldots, X_N \in \mathbb{R}^p$ and $Y_1, \ldots, Y_N \in \mathbb{R}$ . Define $$ X=\left[\begin{array}{c} X_1^{\top} \\ \vdots \\ X_N^{\top} \end{array}\right] \in \mathbb{R}^{N \times p}, \quad Y=\left[\begin{array}{c} Y_1 \\ \vdots \\ Y_N \end{array}\right] \in \mathbb{R}^N $$ Let $$ \ell_i(\theta)=\frac{1}{2}\left(X_i^{\top} \theta-Y_i\right)^2 \quad \text { for } i=1, \ldots, N, \quad \mathcal{L}(\theta)=\frac{1}{2}\|X \theta-Y\|^2 . $$ Show (a) $\nabla_\theta \ell_i(\theta)=\left(X_i^{\top} \theta-Y_i\right) X_i$ and (b) $\nabla_\theta \mathcal{L}(\theta)=X^{\top}(X \theta-Y)$ . Hint. For part (a), start by computing $\frac{\partial}{\partial \theta_j} \ell_i(\theta)$ . For part (b), use the fact that $$ M v=\sum_{i=1}^N M_{:, i} v_i \in \mathbb{R}^p $$ for any $M \in \mathbb{R}^{p \times N}, v \in \mathbb{R}^N$ , where $M_{:, i}$ is the $i$ th column of $M$ for $i=1, \ldots, N$ . My Preliminary Progress: For part (a): I started with the definition of $\ell_i(\theta)$ and
|
$ \def\R#1{{\mathbb R}^{#1}} \def\L{{\cal L}} \def\t{\theta} \def\k{\otimes} \def\h{\tfrac12\:} \def\o{{\tt1}} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\frob#1{\left\| #1 \right\|_F} \def\qiq{\quad\implies\quad} \def\p{\partial} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\gradLR#1#2{\LR{\grad{#1}{#2}}} \def\c#1{\color{red}{#1}} $ Here are a few things that I find very helpful for Matrix Calculus problems. The Frobenius Product is denoted by a colon and will help you avoid silly transposition errors. It has the following properties $$\eqalign{ A,B&\in\R{m\times n}\qquad X\in\R{m\times p}\qquad Y\in\R{p\times n} \\ A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \trace{A^TB} \\ B:B &= \frob{B}^2 \qquad \{ {\rm Frobenius\;norm} \}\\ A:B &= B:A \;=\; B^T:A^T \\ \LR{XY}:B &= X:\LR{BY^T} \;=\; Y:\LR{X^TB} \\ A:B &= I_m\!:\LR{BA^T} \;=\; I_n:\LR{A^TB} \\ }$$ Next is the differential of $\L$ , which is denoted as ${d\L}$ and defined in terms of th
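The answer above is cut off, so here is a hedged sketch (my addition, not the original author's text) of how the Frobenius product finishes part (b): write $\mathcal L=\tfrac12\,(X\theta-Y):(X\theta-Y)$, so that $$d\mathcal L=(X\theta-Y):(X\,d\theta)=X^T(X\theta-Y):d\theta \quad\Longrightarrow\quad \nabla_\theta\mathcal L=X^T(X\theta-Y),$$ using the property $(XY):B = Y:(X^TB)$ listed above.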
|
|linear-algebra|derivatives|least-squares|gradient-descent|
| 0
|
A curve intersected by a straight line having constant harmonic mean segments
|
It is known that if the product of two line segments OA, OB drawn from any point O to a curve is a constant, then the curve is a circle (black). The product is the square of the geometric mean. However, what is another curve (green) if the harmonic mean of OA, OB is a constant? Thanks for all suggested insights, or the curve itself if already known.
|
I have understood that you would like to find a type of curve such that, whatever the pole $O$ we have, for any secant line issued from $O$ : $$\text{HarmMean(OA,OB)=constant}$$ What I am giving below is a solution restricted to the case of a fixed pole (taken as the origin) ; nevertheless, it is a "piece of the puzzle" helping in the direction of finding either a general solution... or a proof that such a general type of curve doesn't exist. The particular solution I propose (a self-invariant curve with respect to its center)is in fact the union of two curves with resp. polar equations : $$\begin{cases}r_1(t)&=&\frac{\tan(t)}{1+\tan(t)}&&& (0 \le t \le 3 \pi/4 \ \ \text{blue curve})\\ r_2(t)&=& \frac{\tan(t)}{\tan(t)-1}&=&-\frac{r_1(t)}{2r_1(t)-1} &(\pi/4 \le t \le \pi\ \ \text{red curve})\end{cases}$$ (the two line asymptotes are not part of the curve).
|
|geometry|locus|
| 0
|
Distance of a point from a line measured parallel to a plane
|
Determine the distance of the point $(3,8,2)$ from the line $\frac{x-1}{2}=\frac{y-3}{4}=\frac{z-2}{3}$ measured parallel to the plane $3x+2y-2z+15=0$. I am getting two different answers using two different approaches: The first approach is by determining the parametric point on the line $(2r+1,4r+3,3r+2)$. Then according to the condition the vector $(2r+1-3,4r+3-8,3r+2-2)$ will be perpendicular to the normal of the plane. Taking the dot product, we get $r=2$, so the point is $(5,11,8)$ and the distance comes out to be $7$. But the second approach is by first calculating the perpendicular distance $d$ from the point to the line, which comes out to be $\sqrt{\frac{265}{29}}$. And if we let $\theta$ be the angle between the line and the inclined line parallel to the plane, the required distance would be $\frac{d}{\sin \theta}$. Now, by observation $\theta$ is actually the angle between the plane and the vector $2i+4j+3k$ since the vector is parallel to the line and the inclined line is
|
The first approach is a good one. The problem in the second approach (I've checked all your calculations and get the same results ) has been pointed out by @Bob Dobbs: You reason as if you were in one and the same plane when you have non-coplanar vectors. To make this clear, let's take a simpler similar situation: Let $D=(-1,0,0)$ , the line $l:x=y=z$ and the plane $\mathcal P:z=0$ where a normal vector is $\vec{n}=(0,0,1)$ . The "distance of the point $D$ from the line $l$ parallel to $\mathcal P$ " is of course $1$ . On the other hand, if we were to follow your second approach, we would calculate the distance $d$ from D to its orthogonal projection $E=\frac13(-1,-1,-1)$ on $l$ , which is $\frac13\sqrt6$ . And then we'd look to use the angle $\theta=\widehat{EOD}$ . But there, with the evidence of the figure, we would not make your mistake, which my explanations have enabled you to understand, I hope.
|
|linear-algebra|vector-spaces|analytic-geometry|coordinate-systems|plane-geometry|
| 0
|
log and poisson-like integral
|
Here is a fun looking one some may enjoy. Show that: $$\int_{0}^{1}\log\left(\frac{x^{2}+2x\cos(a)+1}{x^{2}-2x\cos(a)+1}\right)\cdot \frac{1}{x}dx=\frac{\pi^{2}}{2}-\pi a$$
|
You may write \begin{align*} I := \int_{0}^{1} \log \left( \frac{x^{2} + 2x\cos\alpha + 1}{x^{2} - 2x\cos\alpha + 1} \right) \cdot \frac{dx}{x} &= \int_{-1}^{1} \log (x^{2} + 2x\cos\alpha + 1) \cdot \frac{dx}{x} \\ &= 2 \Re \int_{-1}^{1} \log (1 + x e^{i\alpha}) \cdot \frac{dx}{x}. \end{align*} Now shifting the contour to the upper semicircular arc $\displaystyle x = e^{i(\pi - t)}$ $\displaystyle (0 \leq t \leq \pi)$, we have \begin{align*} I &= 2 \Re \int_{0}^{\pi} -i \log (1 - e^{i(\alpha-t)}) \, dt = \int_{\alpha-\pi}^{\alpha} 2 \Im \log (1 - e^{it}) \, dt. \end{align*} By noting that \begin{equation*} 2 \Im \log (1 - e^{it}) = \begin{cases} t + \pi, & -\pi < t < 0 \\ t - \pi, & 0 < t < \pi \end{cases} \end{equation*} for $\displaystyle 0 < \alpha < \pi$ we obtain the desired result. For general $\displaystyle \alpha$ , we can also prove that \begin{equation*} \int_{0}^{1} \log \left( \frac{x^{2} + 2x\cos\alpha + 1}{x^{2} - 2x\cos\alpha + 1} \right) \cdot \frac{dx}{x} = \frac{\pi^{2}}{2} - \pi |\alpha| \quad \text{for } |\alpha| \leq \pi \text{ and extended
|
|integration|definite-integrals|
| 0
|
Integral $\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1} \mathrm{d}x=0$
|
Inspired by some of the greats on this site, I've been trying to improve my residue skills. I've come across the integral $$\int_0^\infty \frac{x^n - 2x + 1}{x^{2n} - 1} \mathrm{d}x=0$$ where $n$ is a positive integer that is at least $2$ . With non-complex methods, I know that the integral is $0$ . But I know that it can be done with residue theorem. The trouble comes in choosing a contour. We're probably going to do some pie-slice contour, perhaps small enough to avoid any of the $2n$ th roots of unity, and it's clear that the outer-circle vanishes. But I'm having trouble getting the cancellation for the integral. Can you help? (Also, do you have a book reference for collections of calculations of integrals with the residue theorem that might have similar examples?)
|
\begin{align*} \int_{0}^{\infty} \frac{x^n-2x+1}{x^{2n}-1} \, dx &= \int_{0}^{1} \frac{x^n-2x+1}{x^{2n}-1} \, dx - \int_{0}^{1} \frac{x^{n-2}-2x^{2n-3}+x^{2n-2}}{x^{2n}-1} \, dx \\ &= \frac{1}{2n} \left( \int_{0}^{1} \frac{x^{\frac{1}{2n}+\frac{1}{2}}-2x^{\frac{1}{n}}+x^{\frac{1}{2n}}}{x-1} \, \frac{dx}{x} - \int_{0}^{1} \frac{x^{\frac{1}{2}-\frac{1}{2n}}-2x^{1-\frac{1}{n}}+x^{1-\frac{1}{2n}}}{x-1} \, \frac{dx}{x} \right). \end{align*} But the duplication formula for the digamma function \begin{equation*} \psi_{0}(z) + \psi_{0}(z+\tfrac{1}{2}) - 2 \psi_{0}(2z) = -\log 4 \end{equation*} shows that \begin{equation*} \int_{0}^{1} \frac{x^{z} + x^{z+\frac{1}{2}} - 2 x^{2z}}{x - 1} \, \frac{dx}{x} = -\log 4. \end{equation*} Plugging this to the formula above, we obtain the desired conclusion.
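A step worth making explicit (an added note): the displayed integral identity follows from the standard digamma representation $$\psi_{0}(z)=-\gamma+\int_{0}^{1}\frac{1-t^{z-1}}{1-t}\,dt \quad\Longrightarrow\quad \int_{0}^{1}\frac{t^{b-1}-t^{a-1}}{t-1}\,dt=\psi_{0}(b)-\psi_{0}(a),$$ applied to the pairs $(a,b)=(2z,z)$ and $(a,b)=(2z,z+\tfrac{1}{2})$ and combined with the duplication formula.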
|
|integration|complex-analysis|improper-integrals|residue-calculus|
| 0
|
Allegedly: the existence of a natural number and successors does not imply, without the Axiom of Infinity, the existence of an infinite set.
|
The Claim: From a conversation on Twitter, from someone whom I shall keep anonymous (pronouns he/him though), it was claimed: [T]he existence of natural numbers and the fact that given a natural number $n$, there is always a successor $(n+1)$, do not imply the existence of an infinite set. You need an extra axiom for that. It was clarified that he meant the Axiom of Infinity. The Question: Is the claim true? Why or why not? Context: I like how, if true, it goes against the idea that, if you just keep adding one to something, you'll get something infinite. This is beyond me. Searching for an answer online led to some interesting finds, like this. To add context, then, I'm studying for a PhD in Group Theory. I have no experience with this sort of foundational question. I'm looking for an explanation/refutation. To get some idea of my experience with playing around with axioms, see: What is the minimum number of axioms necessary to define a (not necessarily commutative) ring (that doe
|
One way of thinking of this is in terms of Peano Arithmetic (PA). The theory PA includes, of course, the successor axiom, and therefore a way of generating arbitrarily large numbers. However, PA does not prove the existence of an infinite set. In fact, PA is equiconsistent with the theory ZFC with the axiom of infinity replaced by its negation; see wiki on this .
|
|set-theory|infinity|axioms|foundations|peano-axioms|
| 1
|
Why $[\mathbb{F}_{p}(\alpha): \mathbb{F}_{p^n}] = p$?
|
I was reading the second answer of the following question here Why $x^{p^n}-x+1$ is irreducible in ${\mathbb{F}_p}$ only when $n=1$ or $n=p=2$ : Prove that $f(X) = X^{p^n} - X + 1$ is irreducible over $\mathbb F_{p}$ if and only if either $n = 1$ or $n = p = 2.$ And the book gave the following hint: Note that if $\alpha$ is a root, then so is $\alpha + a$ for any $a \in \mathbb F_{p^n}.$ Show that this implies $\mathbb F_{p}(\alpha)$ contains $\mathbb F_{p^n}$ and that $[\mathbb F_p(\alpha) : \mathbb F_{p^n}] = p$ Here is the answer I am referring to: I have another solution that might be easier to follow. Let $\alpha$ be a root of $q(x)=x^{p^n}-x+1$. Note that $\alpha + a$ is also a root of $q(x)$ for all $a \in \mathbb{F}_{p^n}$. Consider the cyclic multiplicative group $\mathbb{F}_{p^n}^{\times} = \mathbb{F}_{p}(\theta)$ for some generator $\theta$, then $\alpha + \theta$ and $\alpha$ are roots of $q(x)$, so they belong to $\mathbb{F}_{p}(\alpha)$ which shows that $\theta \in \mathbb{F
|
$\mathbb{F}_p(\alpha)$ is a splitting field of the polynomial $q$ (when $q$ is irreducible). I'm not quite sure, but I think that from general finite field theory we know that an irreducible polynomial over a finite field either has no root in a given finite extension or splits completely in it. So in fact, $\mathbb{F}_p(\alpha)$ is a splitting field for $q$ (when $q$ is irreducible). Here, $\sigma$ is $F^n$ where $F$ denotes the Frobenius map. It is an endomorphism since $F$ is an endomorphism, as the characteristic of $\mathbb{F}_p(\alpha)$ is $p$. $H$ fixes $\mathbb{F}_{p^n}$ because $\mathbb{F}_{p^n}$ is the splitting field of $X^{p^n}-X$ over $\mathbb{F}_p$. To get that $[\mathbb{F}_p(\alpha):\mathbb{F}_{p^n}] = p$, we want to prove that $H$ is the Galois group of $\mathbb{F}_p(\alpha)$ over $\mathbb{F}_{p^n}$. It is clear from what we have said that $H$ is a subset of the Galois group. Let $\tau$ be an $\mathbb{F}_{p^n}$-automorphism of $\mathbb{F}_p(\alpha)$. Again by general finite field theory, we k
|
|abstract-algebra|field-theory|galois-theory|finite-fields|extension-field|
| 0
|
Prime factor wanted of the huge number $\sum_{j=1}^{10} j!^{j!}$
|
What is the smallest prime factor of $$\sum_{j=1}^{10} j!^{j!}$$ ? Trial : This number has $23\ 804\ 069$ digits , so if it were prime it would be a record prime. I do not think however that this number is prime. According to my calculations , it has no prime factor below $10^{10}$ . To establish its compositeness I search the smallest prime factor. Motivation The fast growing function $$f(n):=\sum_{j=1}^n j!^{j!}$$ cannot be a prime number for any positive integer $n$ except from $n=2$ (which gives a very small prime) and $n=10$ (which might give a prime). The reason is that $13\mid f(n)$ holds for every integer $n\ge 12$ and for the other cases , prime factors can easily be found. This makes this case interesting since it completes the primality search.
|
$2198910029959$ is a prime factor (that's about $2 \times 10^{12}$ ). I'm about $90\%$ sure it's the smallest one - the other $10\%$ is if there's a bug in my code or I accidentally forgot to check some of the smaller primes. You can find my code here . I'm sure it could have been written to be faster/better or there's a better language to do it in :). I observed that the integers are all about the right size that we can get away with using $64$ - and occasionally $128$ -bit arithmetic and I just thought it would be fun to write a C program to search for factors. It checks about $1.5 \times 10^{6}$ primes per second on my laptop, and I searched several ranges in parallel overnight. I checked a number of its calculations against a more naïvely written Python program, as well as the final prime factor it produces, and that all checked out.
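A quick way to double-check the claimed factor is to reduce the whole sum modulo it; here is a minimal Python sketch (my own addition, it only verifies divisibility and does not search for factors):

```python
# Check whether a candidate prime p divides sum_{j=1}^{10} (j!)^(j!),
# working entirely modulo p; the exponents j! are at most 10! = 3628800.
def divides_f10(p):
    total, fact = 0, 1
    for j in range(1, 11):
        fact *= j                          # fact = j!
        total = (total + pow(fact % p, fact, p)) % p
    return total == 0

print(divides_f10(2198910029959))          # True exactly when p divides the sum
```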
|
|elementary-number-theory|recreational-mathematics|factorial|prime-factorization|
| 1
|
Finding vector equation of a line
|
Show that the equation of a straight line passing through the point with position vector $\vec{b}$ and perpendicular to the line $\vec{r}=\vec{a}+\mu \vec{c}$ is of the form $\vec{r}=\vec{b}+\beta \vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ . How to derive the vector parallel to the required line? I get that this vector must be perpendicular to $\vec{c}$ but I can't derive the $\vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ form.
|
Let $H$ be the orthogonal projection of the point $B$ on the line $A+\Bbb R\vec c$ . $$H=A+k\vec c\text{ and }\overrightarrow{BH}\perp\vec c$$ hence $0=\overrightarrow{BH}\cdot\vec c=(\overrightarrow{BA}+k\vec c)\cdot\vec c$ , i.e. $$k\|\vec c\|^2=-\overrightarrow{BA}\cdot\vec c.$$ A vector `parallel to the line' $(BH)$ is therefore: $$\begin{align}\|\vec c\|^2\overrightarrow{BH}&=\|\vec c\|^2(\overrightarrow{BA}+k\vec c)\\&=(\vec c\cdot\vec c)\overrightarrow{BA}-(\vec c\cdot\overrightarrow{BA})\vec c\\&=\vec c\times(\overrightarrow{BA}\times\vec c). \end{align}$$
|
|vector-spaces|inner-products|coordinate-systems|cross-product|
| 0
|
How to show the quadratic form of the following matrix converges to zero?
|
Suppose we have an $l\times l$ real matrix $M=(X'X)^{-1}X'\Sigma X (X'X)^{-1}$, where $X$ is an $n\times l$ real matrix and $\Sigma$ is an $l\times l$ real symmetric and positive definite matrix. $X'X$ is invertible and its minimum eigenvalue $\lambda_{min}(X'X)$ diverges to $\infty$ as $n\rightarrow \infty$. How to show that $t'Mt\rightarrow 0$ for any vector $t$ with unit norm, i.e., $t't=1$? I know how to show the claim when $M=(X'X)^{-1}$: in this case, $t'Mt\leq \lambda_{max}((X'X)^{-1})t't=\frac{1}{\lambda_{min}(X'X)}\rightarrow 0$. It seems that a similar argument does not apply now because we can no longer single out the eigenvalue of $X'X$ or $(X'X)^{-1}$.
|
In the proof, for a vector $x$, we let $\|x\|$ denote the length of $x$, so $x'x=\|x\|^2$. Note that for any matrix $D$ and any vector $x$ such that $Dx$ is defined, $x'D'Dx=(Dx)'Dx=\|Dx\|^2$. We can write $$\Sigma=T'T$$ for some $l\times l$ matrix $T$. Define $$A=X(X'X)^{-1}$$ and $$B=TA.$$ Note that $$B'B=A'T'TA=(X'X)^{-1}X'\Sigma X(X'X)^{-1}=M$$ and $$A'A=[(X'X)^{-1}X'][X(X'X)^{-1}]=(X'X)^{-1}.$$ Then for $t$ with $t't=1$, we have \begin{align*} t'Mt & =(Bt)'(Bt)=\|Bt\|^2\leqslant \|T\|^2 \|At\|^2 = \|T\|^2(At)'(At) \\ & = \|T\|^2(t'A'At) = \|T\|^2 (t'(X'X)^{-1}t).\end{align*} Since $T$ does not depend on $n$, the rest of the proof is handled by the case you presented in the question. EDIT: For fixed $r,s$, let $\mathcal{M}_{r,s}$ denote the space of all $r\times s$ matrices with entries in $\mathbb{R}$. Everything we say below is for $\mathbb{R}$, but it also holds for $\mathbb{C}$, except instead of the transpose, we need the conjugate transpose. Let's look at 1. trace
|
|eigenvalues-eigenvectors|quadratic-forms|symmetric-matrices|positive-definite|
| 1
|
Geometric question on finding the length of the tangent
|
Consider a circle $ C $ with radius $ r $ and a point $P $ outside the circle. Construct two tangents from $P$ to the circle, touching the circle at points $ A $ and $ B $ . Let $ O $ be the center of the circle. Question: If $ OP = 2r $ , find the exact length of the tangent segment $ PA $ (from point $ P $ to point $A $ . My approach: Now, let me give this problem a try. Given that the area of triangle OAB is $3r^2$ and OP is twice the length of the radius $2r$ , I need to determine the length of the tangent segment PA. First, I denoted the length of PA as $x$ . Since triangle $OAB$ is isosceles, we know that $OA = OB$ . Using the Pythagorean theorem, I expressed OB in terms of r: $OB^2$ = $OP^2$ - $BP^2$ $OB^2$ = $2r^2$ - $r^2$ $OB^2$ = $4r^2$ - $r^2$ $OB^2$ = $3r^2$ Next, I find the area of triangle OAB: $[Area_{OAB} = \frac{1}{2} \times PA \times OB]$ $[3r^2 = \frac{1}{2} \times x \times \sqrt{3r^2}]$ Solving for $(x)$ : $[6r^2 = x \times r \sqrt{3}]$ $[x = \frac{6r^2}{r \sqrt{3}}
|
This is a rather odd question. Since we know that $PA$ is tangent to the circle we immediately know that $OA$ is a radius. Of course, this means it has length $r$ , and also that $PAO$ is a right-angle. Additionally, you state that $OP$ has length $2r$ . This trivially allows the Pythagorean theorem to be used to solve for $x$ . $$ \begin{aligned} r^2 + x^2 & = (2r)^2 \\ r^2 + x^2 & = 4r^2 \\ x^2 & = 3r^2 \\ x & = \sqrt{3} r \end{aligned} $$ The area information is not used. I didn't check if it is consistent with everything else. It may just be a misdirect, or the problem may be overspecified in some incorrect way, but you can check this easily enough yourself.
|
|geometry|tangent-line|
| 0
|
What is the empty tensor product of vector spaces?
|
The tensor product of a space with itself once is $V^{\otimes1}$ , but what is $V^{\otimes0}$ ? Since it is an empty tensor product, it is - a fortiori - an empty product. So I'm looking for a " $1$ " of some sort, just not sure what that would mean in this context. "If I take the tensor product of a vector space with itself zero times, I would get ...", and I am guessing here, but is it the underlying field, $\mathbb{F}$ ?
|
A tensor product of spaces corresponds to a Cartesian product of bases: if $\forall j: V_j = \operatorname{span} {\hat V}_j$ , then $\bigotimes_j V_j = \operatorname {span} \prod_j {\hat V}_j$ (up to isomorphism). So you may equivalently look for the nullary Cartesian product: conventionally, this is a singleton of the 0-tuple. Therefore, the nullary tensor product is a one-dimensional space, isomorphic to the underlying field. This answer is somewhat flawed in that it relies on assigning a basis to each vector space, which is an arbitrary choice (and one not always possible to make in the absence of a choice axiom). But it’s a useful intuition to have.
|
|tensor-products|tensors|multilinear-algebra|
| 0
|
Improper Integral $\int_0^1\frac{\arcsin^2(x^2)}{\sqrt{1-x^2}}dx$
|
$$I=\int_0^1\frac{\arcsin^2(x^2)}{\sqrt{1-x^2}}dx\stackrel?=\frac{5}{24}\pi^3-\frac{\pi}2\log^2 2-2\pi\chi_2\left(\frac1{\sqrt 2}\right)$$ This result seems numerically correct to me. Can we prove that the equality is exact?
|
$$\displaystyle{\int\limits_{0}^{1}{\frac{\left( \arcsin x^{2} \right)^{2}}{\sqrt{1-x^{2}}}dx}=\frac{1}{2}\int\limits_{0}^{1}{\frac{\left( \arcsin x \right)^{2}}{\sqrt{x}\sqrt{1-x}}dx}=\frac{1}{2}\int\limits_{0}^{\frac{\pi }{2}}{\frac{x^{2}\cos x}{\sqrt{\sin x}\sqrt{1-\sin x}}dx}=\frac{\sqrt{2}}{2}\int\limits_{0}^{\frac{\pi }{2}}{\frac{\left( \frac{\pi }{2}-x \right)^{2}\cos \frac{x}{2}}{\sqrt{1-2\sin ^{2}\frac{x}{2}}}dx}}$$ $$\displaystyle{=\int\limits_{0}^{\frac{\pi }{2}}{\left( \frac{\pi }{2}-2\arcsin \left( \frac{\sin x}{\sqrt{2}} \right) \right)^{2}dx}=\left( \frac{\pi }{2} \right)^{2}\int\limits_{0}^{\frac{\pi }{2}}{dx}-2\pi \int\limits_{0}^{\frac{\pi }{2}}{\arcsin \left( \frac{\sin x}{\sqrt{2}} \right)dx}+4\int\limits_{0}^{\frac{\pi }{2}}{\arcsin ^{2}\left( \frac{\sin x}{\sqrt{2}} \right)dx}}$$ $$\displaystyle{=\frac{\pi ^{3}}{8}-2\pi \underbrace{\int\limits_{0}^{\frac{\pi }{2}}{\arcsin \left( \frac{\sin x}{\sqrt{2}} \right)dx}}_{I_{1}}+4\underbrace{\int\limits_{0}^{\frac{\pi }{
|
|integration|definite-integrals|improper-integrals|special-functions|closed-form|
| 1
|
The dual $(L^\infty)^{*}$ is not $L^1$ by constructing example
|
The problem statement is the same as this post: $L^{\infty *}$ is not isomorphic to $L^1$ . Let $L^\infty = L^\infty(m)$ , where $m$ is Lebesgue measure on $I=[0,1]$ . Show that there is a bounded linear functional $G \neq 0$ on $L^\infty$ that is $0$ on $C(I)$ , and that therefore there is no $g∈L^1(m)$ that satisfies $G(f) = \int_I fg$ for every $f \in L^\infty$ . Thus $(L^\infty)^{*} \neq L^1$ . I have no problem in constructing such $\lambda$ and then deduce that it cannot have an integral representation by an integrable function $g$ . My question is that I don't think this conclusion is strong enough to conclude that the two spaces are not isomorphic as it was answered in that post. The accepted answer there stated that $L^1$ is a $\textit{subspace}$ of $(L^\infty)^{*}$ be considering the embedding map $\lambda :L^1 \to (L^\infty)^{*} :h \mapsto \lambda_h$ , where $$ \lambda_h(f) : = \int hf \enspace ,\forall f \in L^\infty. $$ The functional $G$ constructed above is an example su
|
I'll give a proof here, which uses the Hahn-Banach theorem and fills in the holes in the answer of the post mentioned above. We consider the collection $\{H_x\}_{x\in I}$ of functionals on $L^\infty$, where each $H_x$ is the Hahn-Banach extension of the functional $G_x: C(I) \to \mathbb{R}$ defined by $$ G_x(f) = f(x) \enspace \forall x \in I, f\in C(I). $$ Let $ 1 >\delta >0$ be given. For each $x\neq y$, we can always find an $f_{xy} \in C(I)$ such that $||f_{xy}||_\infty = 1$ and $|H_x(f_{xy}) - H_y(f_{xy})| = |f_{xy}(x) - f_{xy}(y)| \geq \delta >0$. Thus, $||H_x - H_y|| \geq \delta$ for all $x\neq y$. Then $\{H_x\}_x \subseteq (L^\infty)^{*}$ is an uncountable $\delta$-separated family, hence $(L^\infty)^{*}$ cannot be separable, which in turn implies that $(L^\infty)^{*}$ cannot be isometrically isomorphic to $L^1$, since the latter is separable.
|
|real-analysis|functional-analysis|lp-spaces|dual-spaces|riesz-representation-theorem|
| 0
|
$(ab)c + a(bc) = 2 b (ac) \implies^? x(yz) = (xy)z$?
|
Consider some unital commutative algebra $A$ such that for all its elements we have $$(ab)c + a(bc) = 2 b (ac) $$ Does this imply the algebra is associative ? or in symbols : $$(ab)c + a(bc) = 2 b (ac) \implies^? x(yz) = (xy)z$$
|
The following shows that the algebra is associative whenever the multiplication is distributive and 6 is not a zero-divisor. Taking $a=b$ gives $(aa)c=a(ac)$ . Next taking $a=b=x+y$ and $c=z$ in this new relation and using commutativity and distributivity gives $$ (x^2+y^2+2xy)z = (x+y)(xz+yz) = x(xz) + y(yz) + x(yz)+ y(xz), $$ so that $$ x(yz)+y(xz) = 2(xy)z. $$ Finally, we have $$ 2b(ac) - a(bc) = (ab)c = (ba)c = 2a(bc) - b(ac), $$ so that $$ 3a(bc) = 3b(ac). $$ Thus, as 6 is a not a zero divisor, we get $x(yz)=y(xz)$ , which combined with the earlier relation gives $2x(yz)=2(xy)z$ , and hence $x(yz)=(xy)z$ .
|
|abstract-algebra|commutative-algebra|associativity|nonassociative-algebras|
| 1
|
Integration of hypergeometric function on complex plane
|
I have come across an integral that involves a hypergeometric function, which can be expressed as follows: $$I = \int_0^1 x^{1/2}(1-x)^{\epsilon-1} {_{2}F_1}(\frac{1}{2}+\epsilon,1+\epsilon;\frac{3}{2};x) dx.$$ Here, $\epsilon$ is a small complex quantity where $|\epsilon|\ll1$ . I found an integral formula in "Table of Integrals, Series, and Products," ET II 399(4), as follows: $$\int_0^1 x^{\gamma-1}(1-x)^{\rho-1}F(\alpha,\beta,;\gamma;x)dx=\frac{\Gamma(\gamma)\Gamma(\rho)\Gamma(\gamma+\rho-\alpha-\beta)}{\Gamma(\gamma+\rho-\alpha)\Gamma(\gamma+\rho-\beta)}$$ , for $Re \ \gamma\gt0, Re\ \rho\gt0, Re\ (\gamma +\rho -\alpha - \beta)\gt0$ . As one can see, for my case, $\alpha=1/2+\epsilon, \beta=1+\epsilon, \gamma=3/2, \rho=\epsilon$ , and if assume $Re\ (\epsilon) \gt 0$ (this is not necessarily true), the third condition can not be satisfied as $Re\ (\gamma +\rho -\alpha - \beta)= Re\ (-\epsilon)\lt0$ . I have a question regarding the integral $I$ . Does the given case imply that $I$
|
For the antiderivative $$I = \int\sqrt x\,\,(1-x)^{\epsilon-1} \, _2F_1\left(\frac{1}{2}+\epsilon,1+\epsilon;\frac{3}{2};x\right)\, dx$$ $$ _2F_1\left(\frac{1}{2}+\epsilon,1+\epsilon;\frac{3}{2};x\right)=\frac{\left(1+\sqrt{x}\right)^{2 \epsilon }-\left(1-\sqrt{x}\right)^{2 \epsilon } } {4 \,\epsilon\, \sqrt{x}\, (1-x)^{2 \epsilon } }$$ $$I=\frac 1{4 \,\epsilon}\int \frac{\left(1+\sqrt{x}\right)^{2 \epsilon }-\left(1-\sqrt{x}\right)^{2 \epsilon } } {(1-x)^{\epsilon +1}}\,dx$$ $$I=\frac{x}{4 \epsilon }\Big(F_1\left(2;1+\epsilon ,1-\epsilon ;3;\sqrt{x},-\sqrt{x}\right)-F_1\left(2;1-\epsilon ,1+\epsilon ;3;\sqrt{x},-\sqrt{x}\right)\Big)$$ where appear Appell hypergeometric functions of two variables. The definite integral tends to infinity when $x\to 1^-$
|
|complex-analysis|definite-integrals|special-functions|contour-integration|hypergeometric-function|
| 1
|
A gambler's ruin problem with winning size of 3
|
I play a game where I have a $25$ % chance of winning \$ $3$ and a $75$ % chance of losing \$ $1$ . Currently, I have \$ $5000$ . I will stop playing once I either earn \$ $20000$ or lose all of my \$ $5000$ . When I stop, what is the probability of having lost all of my \$ $5000$ ? The winning is not $1$ , so I think I cannot use the formula $$\frac{1-(q/p)^i}{1-(q/p)^N}$$ where $N$ is the winning amount of money, i.e. \$ $5000$ and $i$ is the money we start with. What should be changed in this formula to calculate the probability of bankruptcy?
|
I have little knowledge of random walks, but it strikes me that if we look at it play by play, the expected gain for success $=\frac14\cdot 3 = \frac34$ \$ and the expected loss for failure $=\frac34\cdot1 = \frac34$ \$. In other words, the expected gain/loss is linear, thus by the usual formula, P(go bankrupt) = $\dfrac{(20000-5000)}{20000} = 0.75$
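To spell out the "linear"/fair-game argument above (an added sketch; it ignores the small overshoot past \$ $20000$ that a final win of \$ $3$ can cause): each play has expected change $\frac14\cdot 3-\frac34\cdot 1=0$, so the expected bankroll at the stopping time equals the starting \$ $5000$. Writing $p$ for the probability of reaching the target, $$20000\,p+0\cdot(1-p)\approx 5000 \quad\Longrightarrow\quad p\approx\tfrac14,\qquad P(\text{go bankrupt})=1-p\approx\tfrac34.$$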
|
|probability|stochastic-processes|
| 0
|
Solving $4^x = \log_2(x) + \sqrt{x-1} + 14$
|
Solve in $\mathbb{R}$ the following equation: $4^x = \log_2(x) + \sqrt{x-1} + 14.$ My approach: I noticed that $x=2$ satisfies the equation, then I investigated the intervals $[1,2)$ and $(2,\infty)$ , but it didn't lead me to a solution. Any help is appreciated.
|
Define $g(x)=4^x-\log_2(x)-\sqrt{x-1}-14$ . We have that $g'(x)=\log(4)4^x-\dfrac{1}{\log(2)x}-\dfrac{1}{2\sqrt{x-1}}$ . For $x\geq 2$ , it follows that $$g'(x)\geq 4^2-\dfrac{1}{\log(2)}-\dfrac{1}{2}>0,$$ so $g$ is strictly increasing and cannot have any more roots on $(2,\infty)$ . As $\log_2(x)+\sqrt{x-1}+14\geq 14$ in $[1,2)$ , there cannot be any roots before $\log_4(14)\approx 1.90$ , in particular before $\dfrac{3}{2}$ . If there were a root on $\left(\dfrac{3}{2},2\right)$ , by Rolle's Theorem there would exist some $c\in \left(\dfrac{3}{2},2\right)$ such that $g'(c)=0$ . But in $\left(\dfrac{3}{2},2\right)$ we have $$g'(x)\geq 4^{3/2}-\dfrac{1}{\log(2)}-\dfrac{1}{2\sqrt{0.5}}>0,$$ so there cannot be any roots on $\left(\dfrac{3}{2},2\right)$ and we conclude the only root of $g$ is $2$ .
|
|algebra-precalculus|inequality|logarithms|exponential-function|
| 0
|
Find the sum of $\sum_{n = 1}^{\infty} (-1)^n \frac{(n+1)^2}{n!}$
|
I need to find the sum of $\sum_{n = 1}^{\infty} (-1)^n \frac{(n+1)^2}{n!}$. We can see that $$\frac{(2n+1)^2}{(2n)!} - \frac{(2n+2)^2}{(2n+1)!} = \frac{(2n+1)^3 - (2n+2)^2}{(2n + 1)!} = \frac{8n^3+8n^2-2n-3}{(2n + 1)!}.$$ Notice that the inequalities hold for large enough $n$. Then the sum converges because the sum of $\frac{1}{n^2}$ converges and is the upper bound for our sum. However, I cannot calculate what the sum converges to. If you have any ideas, please share.
|
$$\frac{(n+1)^2}{n!}=\frac{n(n-1)+3n+1}{n!}=\frac{1}{(n-2)!}+3\frac{1}{(n-1)!}+\frac{1}{n!}$$ (for $n=1$ the $\frac{1}{(n-2)!}$ term is absent, since $n(n-1)=0$). Therefore: $$\sum_{n = 1}^{\infty} (-1)^n \frac{(n+1)^2}{n!}=\sum_{n = 1}^{\infty}\frac{(-1)^n}{(n-2)!}+3\sum_{n = 1}^{\infty}\frac{(-1)^n}{(n-1)!}+\sum_{n = 1}^{\infty}\frac{(-1)^n}{n!} =\sum_{n = 1}^{\infty}\frac{(-1)^{n-2}}{(n-2)!}-3\sum_{n = 1}^{\infty}\frac{(-1)^{n-1}}{(n-1)!}+\sum_{n = 1}^{\infty}\frac{(-1)^n}{n!}=e^{-1}-3e^{-1}+(e^{-1}-1)=-e^{-1}-1$$
|
|sequences-and-series|summation|
| 1
|
Expectation of a random variable that takes points in the natural numbers
|
Let $X : \Omega \to \mathbb{N} $ be a r.v. Then $$ \mathbb{E} \{ X \} = \sum_{n=0}^{\infty} P(X > n) $$ Attempt: $$ \mathbb{E} \{ X \} = \sum_j j P(X = j ) = \sum_j j(1 - P(X > j) - P(X < j))$$ But here it gets complicated. Am I on the right track?
|
In $$ E(X)=\sum_{j=0}^∞jP(X=j)=\sum_{j=0}^∞j(P(X\ge j)−P(X\ge j+1))$$ $$ \sum_{j=0}^∞j(P(X\ge j)−P(X\ge j+1))=P(X\ge 1) - P(X\ge 2)+2P(X\ge 2)-2P(X\ge 3)+3P(X\ge 3)-3P(X\ge 4)+... $$ $$ \sum_{j=0}^∞j(P(X\ge j)−P(X\ge j+1))=\sum_{j=1}^∞P(X\ge j)$$ which is $$ \sum_{j=0}^∞P(X\gt j)$$
|
|probability|
| 0
|
Why is maximum number of joints of 6 lines is 4?
|
The following is considered in Larry Guth's Polynomial Methods in Combinatorics, page 14. Let $L$ be a set of lines in $\mathbb R^3$. A point $x$ which lies in some set of three non-co-planar lines of $L$ is called a joint of $L$. Suppose $L$ has $6$ lines. Then, why is it that $L$ has at most $4$ joints? This has been my approach so far: (1) Note that the tetrahedron has 6 edges and 4 vertices. (2) If we take our $L$ to be the set of 6 lines containing each of the six edges of a tetrahedron, we get that each of its vertices is a joint, as the three lines intersecting any vertex are non-co-planar. (3) Now, I want to argue that if one wants to maximize the number of joints possible for any set of six lines, the configuration of a tetrahedron is the best possible one. The problem is that I don't know why or how to prove (3). Any suggestions will be really helpful :)
|
Let $G = (V,E)$ be the graph whose vertices are the joints and whose edges are the six lines; we are interested in the vertices which fulfill $\deg(v) \geq 3$. We notice that $|E| = 6$ from the assignment, and your earlier observation that the tetrahedron produces $4$ joints gives a lower bound. By the handshaking lemma, $\sum_{v \in V} \deg(v) = 2 |E| = 12$. Since every joint has degree at least $3$, there can be at most $12/3 = 4$ joints. Therefore the tetrahedron also gives an upper bound, and $4$ is the maximum.
|
|geometry|euclidean-geometry|affine-geometry|
| 1
|
Commutator Subgroup of Thompson's Group F
|
I am looking for a proof that the commutator subgroup of F is simple. I have found lots of articles about the commutator subgroup in general but haven't found anything directly involving F, and am struggling applying the general case to F. Does anybody know of any articles that have this proof explicitly? Thanks :)
|
The book Introduction to Thompson’s group $F$ by Jose Burillo has an explicit and detailed proof, see Theorem $3.3.1$ . See also the following post and the links given there: Commutator Subgroup of Thompson's Group $F$
|
|group-theory|derived-subgroup|
| 0
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
As rufflewind said, your intuition of inductive proofs as recursive programs is actually a good one and is made precise by proof checkers like Coq, Lean or Agda. I won't go quite there, but would like to highlight some concrete things that would be different. One thing that is important when representing proofs by functions is that you have the right type signature. In your attempt, that would have been is6Colorable: Graph -> Bool or (just different syntax) is6Colorable: ∀ (g: Graph): ∃ (b: Bool) But this is not the right type for such a proof! This type has trivial instantiations like is6Colorable(g) = True which obviously doesn't actually prove anything about 6-colourability. Instead, the proof (specifically, a constructive proof) should give what the theorem asserts; in this case, is6Colourable: ∀ (g: Graph): ∃ (c: Colouring): isColouring(g,c) ∧ |c| ≤ 6 Ok, but what would the implementation look like? Well, for a start, actually quite like yours: is6Colourable(g): if g.size ==
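A rough Python sketch of how that recursive program looks in an ordinary language (a hedged illustration only, not the answer's own code; a graph is assumed to be a dict mapping each vertex to its set of neighbours, and planarity is assumed so that a vertex of degree at most 5 always exists):

```python
def six_colouring(g):
    if len(g) == 1:                                  # base case P(1): a single vertex
        return {next(iter(g)): 0}
    v = min(g, key=lambda u: len(g[u]))              # planar graph => some vertex has degree <= 5
    g_minus_v = {u: nbrs - {v} for u, nbrs in g.items() if u != v}
    colouring = six_colouring(g_minus_v)             # inductive hypothesis P(n)
    blocked = {colouring[u] for u in g[v]}           # at most 5 colours are blocked at v
    colouring[v] = min(set(range(6)) - blocked)      # so one of the 6 colours is still free
    return colouring
```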
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
QuantGuide Busted 6 II
|
This question is from QuantGuide (Busted 6 II): Suppose you play a game where you continually roll a die until you obtain either a 5 or a 6. If you receive a 5, then you cash out the sum of all of your previous rolls (excluding the 5). If you receive a 6, then you receive no payout. You have the decision to cash out mid-game. What is your expected payout following the optimal strategy? My Approach: First I look into the case when we can't cash out mid-game. The expected value is 2.5 in this case. Now for the additional option of stopping midgame, we calculate the expected value at each stage of the dice throw. For the $i^{th}$ throw the expected value will be: \begin{equation} \left(\frac{2}{3}\right)^i(2.5i)+\left(\frac{2}{3}\right)^{i-1}(2.5(i-1))\frac{1}{6} \end{equation} The 2.5 value is due to each throw having the average value of the dice roll to be $\frac{1+2+3+4}{4}$ (the first term is for when all the throws till now don't have 5 or 6 and the second term is for the case of landing with a 5 in the $i^{t
|
The optimal strategy clearly takes the form of playing until you’ve obtained some threshold value and then cashing out, so we need to determine the threshold. At the last value under the threshold, you know you’re going to cash out if you don’t roll a $5$ or $6$ . So the threshold is determined by the condition that a single roll will decrease the expected payout. If your current sum is $s$ , the expected payout if you cash out after the next roll is $$ \frac16\cdot0+\frac16\cdot s+\frac46\cdot\left(s+\frac{1+4}2\right)\;. $$ This is equal to $s$ for $s=10$ . So if you have a sum of $10$ , it doesn’t matter whether you roll once more or not, if you have less you should continue, and if you have more you should cash out. The additional payout $a_s$ you expect to gain when you have a sum of $s$ is \begin{eqnarray*} a_s &=& \frac16\cdot(-s)+\frac16\cdot0+\frac16\sum_{k=1}^4(k+a_{s+k}) \\ &=& \frac{10-s+\sum_{k=1}^4a_{s+k}}6\;, \end{eqnarray*} with the “final values” $a_k=0$ for $k\ge10$ .
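A small numeric sketch of this recursion (added here for convenience; it just evaluates the formula above with exact fractions, using the stated final values $a_k=0$ for $k\ge 10$):

```python
from fractions import Fraction

# a[s] = additional expected payout when the current sum is s
a = {s: Fraction(0) for s in range(10, 14)}        # final values: a_s = 0 for s >= 10
for s in range(9, -1, -1):
    # a_s = (10 - s + a_{s+1} + a_{s+2} + a_{s+3} + a_{s+4}) / 6
    a[s] = (10 - s + sum(a[s + k] for k in range(1, 5))) / 6

print(a[0], float(a[0]))                           # expected payout starting from a sum of 0
```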
|
|probability-theory|expected-value|
| 0
|
Find the sum of $\sum_{n = 1}^{\infty} (-1)^n \frac{(n+1)^2}{n!}$
|
I need to find the sum of $\sum_{n = 1}^{\infty} (-1)^n \frac{(n+1)^2}{n!}$. We can see that $$\frac{(2n+1)^2}{(2n)!} - \frac{(2n+2)^2}{(2n+1)!} = \frac{(2n+1)^3 - (2n+2)^2}{(2n + 1)!} = \frac{8n^3+8n^2-2n-3}{(2n + 1)!}.$$ Notice that the inequalities hold for large enough $n$. Then the sum converges because the sum of $\frac{1}{n^2}$ converges and is the upper bound for our sum. However, I cannot calculate what the sum converges to. If you have any ideas, please share.
|
$\sum_{n=0}^{\infty}\left(-1\right)^{n}\ \frac{\left(n+1\right)^{2}}{n!}$ When solving such a problem, in my opinion, one should consider the expansion, $$e^{x}=\frac{1}{0!}+\frac{x}{1!}+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+.....$$ In this question, we will mostly use the conditions $x=1$ and $x=-1$ $$e=\sum_{n=0}^{\infty}\frac{1}{n!}$$ and $$e^{-1}=\sum_{n=0}^{\infty}\left(-1\right)^{n}\ \frac{1}{n!}$$ Now, coming back to the question, $$\sum_{n=1}^{\infty}\left(-1\right)^{n}\ \frac{\left(n+1\right)^{2}}{n!}$$ The above question can be simplified thus, $$\sum_{n=1}^{\infty}\left(-1\right)^{n}\ \frac{\left(n^{2}+2n+1\right)}{n!}$$ Further, $$\sum_{n=1}^{\infty}\left(-1\right)^{n}\ \frac{\left(n\left(n-1\right)+3n+1\right)}{n!}$$ Now, the pattern is quite clear. $\sum_{n=2}^{\infty}\left(-1\right)^{n}\ \frac{1}{\left(n-2\right)!}+\sum_{n=1}^{\infty}\left(-1\right)^{n}\ \frac{3}{\left(n-1\right)!}+\sum_{n=1}^{\infty}\left(-1\right)^{n}\ \frac{1}{n!}$ * Notice the second term here. We can re
|
|sequences-and-series|summation|
| 0
|
log and poisson-like integral
|
Here is a fun looking one some may enjoy. Show that: $$\int_{0}^{1}\log\left(\frac{x^{2}+2x\cos(a)+1}{x^{2}-2x\cos(a)+1}\right)\cdot \frac{1}{x}dx=\frac{\pi^{2}}{2}-\pi a$$
|
\begin{align} &\int_{0}^{1}\ln\frac{x^{2}+2x\cos a+1}{x^{2}-2x\cos a+1}\cdot \frac{1}{x}\ dx\\ =& \int_{0}^{1}\int_a^{\pi/2} \left(\frac{2\sin t\ }{x^{2}-2x\cos t+1}+ \frac{2\sin t\ }{x^{2}+2x\cos t+1}\right) dt \ dx\\ =& \int_{0}^{1}\int_a^{\pi/2} \frac{4\sin t\ (1+x^2)}{x^4-2x^2\cos2t +1}dt \ dx\\ =& \int_a^{\pi/2} 2\tan^{-1}\frac{x-\frac1x}{2\sin t}\bigg|_0^1\ dt = \int_a^{\pi/2}\pi \ dt =\frac{\pi^{2}}{2}-\pi a \\ \end{align}
|
|integration|definite-integrals|
| 0
|
Spectral sequence with two non zero rows
|
I'm trying to solve exercise 5.2.2 of Weibel's Introduction to Homological Algebra: if a spectral sequence converging to $H$ has $E_{p,q}^2=0$ except for $q=0,1$, then there is a long exact sequence $$\cdots \to H_{p+1} \to E_{p+1,0}^2 \overset{d}{\to} E_{p-1,1}^2 \to H_p \to E_{p,0}^2 \overset{d}{\to} E_{p-2,1}^2 \to H_{p-1} \to \cdots$$ Let $p\in\mathbb{N}$. Since $\{E_{p,q}^r\}$ converges to $H$ there is a finite filtration $0=F_{-1}H_p \subseteq \ldots \subseteq F_pH_p=H_p$ and we have the following short exact sequences $$0 \to F_{p-1}H_p \to H_p \to E_{p,0}^2 \to 0$$ $$0 \to F_{p-2}H_p \to F_{p-1}H_p \to E_{p-1,1}^2 \to 0$$ $$0 \to \ker(d_{p,0}^2) \to E_{p,0}^2 \overset{d}{\to} E_{p-2,1}^2 \to 0$$ My problem is that I don't know if the fact that $F_{p-1}H_p$ surjects onto $E_{p-1,1}^2$ is enough to conclude and if not, I can't find any short exact sequence of the form $$0 \to K_1 \to E_{p-1,1}^2 \to K_2 \to 0$$ Can anyone help me please?
|
Ultimately, the filtration comes from the $E^\infty$ page. So let's attempt to calculate explicitly what this page is. Well, from the $E^2$ page we can only immediately infer the $E^3$ page - so let's focus on that first. Obviously if $q\neq 0,1$ we have that $E^3_{p,q}=0$ ; the only subquotient of $0$ is $0$ . If $q=0$ , $E^3_{p,0}$ is the quotient of $\ker(E^2_{p,0}\to E^2_{p-2,1})$ by $\mathrm{im}(E^2_{p+2,-1}=0\to E^2_{p,0})=0$ i.e. it is just $\ker(E^2_{p,0}\to E^2_{p-2,1})$ . If $q=1$ , $E^3_{p,1}$ is the quotient of $\ker(E^2_{p,1}\to0=E^2_{p-2,2})=E^2_{p,1}$ by $\mathrm{im}(E^2_{p+2,0}\to E^2_{p,1})$ i.e. just $E^2_{p,1}/\mathrm{im}(E^2_{p+2,0}\to E^2_{p,1})$ . Putting it all together, this tells us there are exact sequences: $$0\to E^3_{p+2,0}\to E^2_{p+2,0}\to E^2_{p,1}\to E^3_{p,1}\to0$$ Can we calculate the $E^4$ page? Yes, and it's actually very easy. The $E^3$ -differentials have vertical grading $2$ , and we know $E^3$ vanishes outside of the $0,1$ rows so all differenti
|
|homology-cohomology|exact-sequence|spectral-sequences|
| 1
|
In complex geometry, is an holomorphic function continuous by definition?
|
I am attending a course on Introduction to Complex Geometry, and the definition they have given me of a holomorphic function between complex manifolds is as follows: Let $X,Y$ be complex manifolds. A map $f: X \rightarrow Y$ is $\textbf{holomorphic}$ if 1. f is continuous and 2. $\forall p\in X$ , there exist charts $(U,\phi)$ in $X$ , $(V,\psi)$ in $Y$ so that $\psi \circ f \circ \phi^{-1} : \phi(U\cap f^{-1}(V)) \rightarrow \psi(V)$ is holomorphic. My question is: Is the first hypothesis (f continuous) necessary? Does not the second one imply the first one? This question also applies to Differential Geometry in the real case. Does anyone know if there is a counterexample if the first hypothesis is removed? Thanks.
|
Let $f : A \rightarrow \Bbb{C}$ be a complex function, with $A \subseteq \Bbb{C}$ . Usually, to even say that $f$ is holomorphic, we require that $A$ be an open subset of $\Bbb{C}$ , just like with differentiability of real functions. It's not entirely obvious why, but I think it's standard. So, in order to be able to say $\psi \circ f \circ \phi^{-1}$ is continuous, we need its domain to be open. This requires $f$ to be continuous, because we need $f^{-1}(V)$ to be open.
|
|differential-geometry|differential-topology|complex-geometry|
| 1
|
Need help with proof of property related to group rings.
|
Suppose $G = \langle g \rangle$ is a cyclic group of order $n$ , $k$ is an algebraically closed field, and $w \in k$ a primitive $n$ th root of unity. I want to show that in the group algebra $k[G]$ , the elements $$e_1 = \frac{1}{n}\sum_{k=1}^n g^k \quad \text{and} \quad e_j = \frac{1}{n}\sum_{k=1}^{n} (w^{j} g)^k \quad (j = 2, ..., n)$$ satisfy $e_i \cdot e_j = \delta_{ij} e_i$ and $\sum_i e_i = \mathbb{1}_{k[G]}$ in $k[G]$ . The only thing I have been able to prove is $e_1^2 = e_1$ , but I don't know what to do for the rest.
|
For notational convenience I will write $$ e_i = \frac{1}{n}\sum_{r=0}^{n-1} (\omega^ig)^r $$ so that "my" $\{e_0,\ldots,e_{n-1}\}$ is "your" $\{e_1,\ldots,e_n\}$. We can compute $e_ie_j$ by looking at the coefficient of each $g^k$, which is $$ n^{-2}\sum_{r+s=k} \omega^{ri+sj} = n^{-2}\sum_{r+s=k}\omega^{ri+si}\omega^{sj-si}=n^{-2}\omega^{ik}\sum_{s=0}^{n-1}(\omega^{j-i})^s$$ (the sum runs over $r+s\equiv k \pmod n$). Since $\omega^{j-i}$ is an $n$th root of unity, this last sum is $0$ when $j\neq i$ and is $n$ when $j=i$. Similarly we can compute $\sum e_i$. The coefficient of $g^k$ is then $$ n^{-1}\sum_{t=0}^{n-1} \omega^{tk} = n^{-1}\sum_{t=0}^{n-1}(\omega^k)^t $$ This last sum is $0$ when $k\neq0$ and $n$ when $k=0$.
|
|abstract-algebra|group-theory|ring-theory|finite-groups|
| 1
|
Are compact and Hausdorff subsets closed if the topology is determined by them?
|
Let $(X, \mathcal{T})$ be a topological space such that $$\mathcal{T} = \{U \subset X : \forall C \in P(\mathcal{T}): U \cap C \in \mathcal{T}|C\},$$ where $P(\mathcal{T})$ is the set of all compact-and-Hausdorff subsets of $X$ and $\mathcal{T}|C$ denotes the subspace topology in $C$ . Are the sets in $P(\mathcal{T})$ closed?
|
I believe not. Consider a converging sequence with two limit points. Then the infinite compact Hausdorff subsets are given by cofinal subsequences with a single limit point, which are not closed in the space. But they determine the topology: sets here are open if they avoid both limit points, or they contain a final subsequence.
|
|general-topology|
| 1
|
$a_0=0,a_{n+1}^3=a_n^2-8.$ How to prove the series $\sum |a_{n+1}-a_n|$ converges?
|
Suppose that there is a sequence defined as below: $$a_0=0,\quad a_{n+1}^3=a_n^2-8$$ How to prove that the series $\sum_{n=0}^{+\infty}|a_{n+1}-a_n|$ converges? I know that $a_n$ converges to a fixed point, but the question is how to estimate how fast $a_n$ converges to the fixed point. From a comment: $a_n$ converges to the root of the equation $x^3−x^2+8=0$, which is approximately equal to $-1.716$, and $a_n$ converges to it in an alternating fashion.
|
$a_0=0$ , $a_1=-2$ , and the function $$f:[-2,0]\to[-2,0],\quad x\mapsto\sqrt[3]{x^2-8}$$ satisfies $$f'(x)=\frac{2x}{3\left|x^2-8\right|^{2/3}}\in[f'(-2),f'(0)]=[r,0],\quad r=-\frac{\sqrt[3]4}3\in(-1,0).$$ By the mean value theorem, $$a_{n+2}-a_{n+1}=r_n(a_{n+1}-a_n),\quad r_n\in(r,0),$$ hence the series $\sum(a_{n+1}-a_n)$ is alternating, and absolutely convergent.
|
|sequences-and-series|limits|
| 0
|
Find Number of unique durations that can be created given a list of durations and an upper bound
|
Lets say we are given a list of durations (5s, 10s, 10s, 15s, 15s, 15s, 25s, 30s....) and we want to find a list of unique durations that can be created using this list of single durations. for example if we have the original list of (5, 5, 15, 25) we can create the following durations: 5 -> using one 5 element 10 -> using two 5 elements 15 -> using one 15 element 20 -> using one 5 element and one 15 element 25 -> using two 5 elements and one 15 element OR using the 25 element 30 -> using 25 and 5 35 -> using 25 and two 5s 40 -> using 25 and 15 45 -> using 25, 15 and 5 50 -> using all elements As a bonus I want to limit the number of elements used as well as set an upper limit. For example I want to use a max of 2 elements, and an upper bound for the total duration of 37. This should eliminate the options of 40, 45, and 50 because they are above the limit, and it should eliminate the option of 35, because it uses more than 2 elements. Anyone know of a way to approach this?
|
You can recursively calculate the answer for each prefix of the list: when adding an entry $x$ to the list, the set of sums becomes $S \mapsto S \cup \{x + a | a \in S\}$ (and you start with the singleton set $\{0\}$ when the input list is empty). Keeping $S$ as a sorted list, this can be done in $O(mn \log(n))$ time, where $m$ is the length of the input list and $n$ is the length of the output list (which a priori is $O(2^m)$ ). If you want to restrict the number of summands to $M$ , then keep a list of pairs (sum, number of summands), which changes the complexity to $O(mMn \log(Mn))$ . If you want to put an upper bound on the sums, then prune the set $S$ after each update. This will not worsen the time complexity, although if the upper bound $B$ is small enough then you can replace the sorted list with a array of booleans of length $\sim B$ (or $MB$ if you want both restrictions) to change the complexity to $O(mB)$ (or $O(mMB)$ ).
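A minimal Python sketch of this prefix recursion (my illustration; the function and variable names are made up), tracking (sum, number of summands) pairs so that both bonus restrictions can be applied:

```python
def achievable_durations(durations, max_elems=None, upper_bound=None):
    states = {(0, 0)}                          # (sum, number of summands); start from the empty sum
    for x in durations:
        new_states = set()
        for s, k in states:
            t, m = s + x, k + 1
            if upper_bound is not None and t > upper_bound:
                continue                       # prune sums above the bound
            if max_elems is not None and m > max_elems:
                continue                       # prune states using too many elements
            new_states.add((t, m))
        states |= new_states
    return sorted({s for s, k in states if s > 0})

print(achievable_durations([5, 5, 15, 25]))                               # 5, 10, ..., 50
print(achievable_durations([5, 5, 15, 25], max_elems=2, upper_bound=37))  # 5, 10, 15, 20, 25, 30
```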
|
|combinatorics|algorithms|computer-science|
| 0
|
Proving $\lim_{x\to\infty}xa^{x}=0$ in Elementary Ways
|
I wish to prove the following limit without using L'Hopital's rule or other known limits: $$\lim_{x\to\infty}xa^{x}=0$$ where $0<a<1$. I wanted to do so using this sequence limit (which I know how to prove): $$\lim_{n\to\infty}na^{n}=0$$ I would appreciate knowing if the following argument is valid (this is only the essence of it): Let's denote for each $x>1$: $n_x=\lfloor x\rfloor$. We thus have for all $x>1$: $$0\le xa^{x}\le (n_{x}+1)a^{x}\le (n_{x}+1)a^{n_x}=n_{x}a^{n_x}+a^{n_x}$$ We can now use the fact that if $\lim_{k\to\infty}x_k=\infty$, then $\lim_{k\to\infty}n_{x_k}=\infty$, the limits $\lim_{n\to\infty}a^n=\lim_{n\to\infty}na^n=0$, and the squeeze theorem to get the desired result. Of course we are using here properties of real exponents, which is ok for this discussion. I would appreciate any feedback regarding this argument's validity. Thanks a lot in advance!!
|
I think your proof sketch is solid. An alternative route could be using elementary calculus (if that's elementary enough for you) to show that $xa^x$ is monotonically decreasing for $x>-\frac1{\ln a}$, which means that for any $x$ larger than $1-\frac1{\ln a}$ we have $0<xa^{x}\le n_{x}a^{n_x}$, again letting the squeeze theorem do the rest of the job.
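Explicitly (an added line, consistent with the claim above): $$\frac{d}{dx}\left(xa^{x}\right)=a^{x}\left(1+x\ln a\right)<0\quad\text{for }x>-\frac{1}{\ln a},$$ since $\ln a<0$ when $0<a<1$.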
|
|calculus|limits|limits-without-lhopital|
| 0
|
Miller-Rabin Primality Test-Witnesses and Liars - Implementing in Python
|
I have been studying the Miller-Rabin Primality Test, and am interested in implementing a code in Python to count witnesses and liars. In the basic code to determine if a number is probably prime or composite, I would like to incorporate 1) and 2) below to better understand witnesses compared to liars for n values: $1)$ $a$ to be tested for all values of $a$ with $1<a<n$, not random as it is currently coded. $2)$ Then for every $a$ in $1)$ above, a count of how many of those $a$'s are witnesses and how many are non-witnesses (liars). My ultimate goal is to use this, I'm sure with more modifications to the code, to compare to the Theorem: If n is odd, composite, and n>9, then at least 75% of the elements in $(\mathbb Z/n\mathbb Z)^\times$ are Miller-Rabin witnesses. The Python code I am using is as follows:
from random import randrange

def probably_prime(n, k):
    """Return True if n passes k rounds of the Miller-Rabin primality test
    (and is probably prime). Return False if n is proved to be composit
|
A simple code that returns a list of liars or 'None' if n is prime is:

# Miller-Rabin-Test of n, base a
def miller_rabin(n, a):
    d = n1 = n-1
    s = 0
    while d & 1 == 0:
        d >>= 1
        s += 1
    b = pow(a, d, n)
    if b == 1 or b == n1:
        return True
    while s > 1:
        s -= 1
        b = (b*b) % n
        if b == n1:
            return True
    return False

# perform Miller-Rabin-Test of n with bases 1 < a < n-1
def get_liars(n):
    result = []
    for a in range(2, n-1):
        if miller_rabin(n, a):
            if len(result) > n//4:
                return None  # n is prime
            result += [a]
    return result

# main program for testing
if __name__ == "__main__":
    while True:
        n = int(input("n = "))
        if n == 0:
            break
        if n

The output for $n = 65$ would be 65 is composite, 4 liars: [8, 18, 47, 57] The function get_liars returns None if the test has been passed with more than $n/4$ bases because $n$ is certainly a prime then.
|
|cryptography|python|primality-test|
| 0
|
Find loop passing through two points with length $L\pi$
|
Problem: Find a nice simple closed curve other than circle which passes through the points $(0,0)$ and $(1,0)$ on the Cartesian plane and whose length is $L\pi$ . If the given condition is not the loop length but the loop area, it is easy to find a nice curve. The ellipse $$(2x-1)^2+\frac{y^2}{4A^2}=1$$ has the area $A\pi .$ Can someone solve the problem above? Thanks in advance!
|
I missed the simple answer, so this is not really an answer now! But $$(x-\tfrac12)^2+(y-\tfrac{\sqrt{L^2-1}}2)^2=\tfrac{L^2}4$$
|
|conic-sections|area|curves|arc-length|
| 0
|
Solve $dy/dx = \sin(x+y)$
|
Let $u(x)=x+y(x)$ , which implies $\frac{du(x)}{dx}=1+\frac{dy(x)}{dx}$ . The equation becomes separable : $$ \frac{du}{\sin(u)+1}=dx \iff \tan(u)-\frac{1}{\cos(u)}=x+C $$ This last equation seems to be a transcendental equation with no direct solution. However, the solutions are $y=-x-\pi /2+2k\pi$ , or $\sin(x+y)-1=(x+C)\cos(x+y)$ or $y=-x-2\arctan(1+\frac{2}{x+C})+2k\pi$ . Except for the first solution, I cannot manage to find those.
|
The key is the half-angle identity $$ \tan \frac{\theta}{2} = \frac{1-\cos\theta}{\sin\theta}. $$ Substituting $\theta\rightarrow \pi/2 - u$ gives $$ \tan\left(\frac{u}{2} -\frac{\pi}{4}\right) = \frac{\sin u - 1}{\cos u} = \tan u - \frac{1}{\cos u}, $$ which is what you got from variable separation. Doing the algebra then leads to the solution $$ y = 2\tan^{-1}(x + C) - x + \frac{\pi}{2} + 2k\pi. $$
|
|ordinary-differential-equations|
| 1
|
Is a Hamiltonian isotopy real-valued or manifold-valued?
|
I have looked everywhere and I can't find a clear definition of Hamiltonian isotopy. I have these definitions in my lecture notes but they are rather confusing. According to the last definition, a Hamiltonian isotopy $\phi_t$ (the one they give in the definition of Hamiltonian symplectomorphism) is a map $M \to M$, otherwise the equality $\Phi=\phi_1$ would not make sense, right? Since a symplectomorphism is a diffeomorphism $\Phi : M → M$. Moreover, since the isotopy $\phi_t: M \to M$ is a flow, it is by definition the identity on $M$ at $t=0$, so more reason for it to be a map $M → M$. On the other hand, it says that a Hamiltonian isotopy is the isotopy generated by a Hamiltonian vector field $X_t$, so $\phi_t$ plays the role of $H_t$ in the definition above in the next-to-last paragraph and it is a map $M \to \Bbb R$. I ran into trouble when I was trying to prove that Ham(M, ω) is a subgroup of Symp(M, ω). I started by taking $\Phi,\Psi\in Ham(M,\omega)$, so by definition there exist Hamiltonia
|
I think your confusion stems from abuse of terminology. There are several different things in symplectic geometry which all have the adjective "Hamiltonian": Time-dependent Hamiltonian functions on a manifold $M$ , which are smooth functions $$ H: M\times [0,1]\to {\mathbb R}. $$ Such $H$ can be regarded as a homotopy between the functions $H(\cdot, 0)$ and $H(\cdot, 1)$ . Time-dependent Hamiltonian vector fields , denoted $X_t$ , which are maps $$ M\times [0,1]\to TM, (p,t)\mapsto X_t(p)\in T_pM $$ (satisfying further properties). Time-dependent Hamiltonian maps (which is suboptimal) or Hamiltonian isotopies (which is better), which are maps $$ G: M\times [0,1]\to M $$ (satisfying further properties). Such a map is an isotopy between the identity map $G(\cdot, 0)$ and the diffeomorphism $G(\cdot, 1)$ (the time 1 map of the isotopy $G$ ). Hamiltonian maps or Hamiltonian symplectomorphisms , which are maps $$ \Phi: M\to M $$ for which there exists a Hamiltonian isotopy $G$ such that $\P
|
|differential-geometry|smooth-manifolds|differential-forms|symplectic-geometry|
| 1
|
Sequences of bounded variation have convergent series
|
A sequence $\{a_n\}$ has bounded variation if $\sum_{k=1}^\infty |a_{k+1}-a_k|$ converges. I am trying to prove that, if $\{a_n\}$ has bounded variation, then $\sum_{k=1}^\infty a_k$ converges. This is for an exercise from Kosmala's real analysis textbook (7.4, problem 20(c)). It's in the section which covers alternating series, although I don't see a way to relate this to an alternating series. My best guess is that it is meant to be somehow related to absolute convergence of the bounded variation condition. I've tried looking at the Cauchy condition, $$\left|\sum_{k=m}^n a_k\right| < \epsilon,$$ and thought of how one could try to relate this to the bounded variation condition. Maybe $$ \left|\sum_{k=m}^n (a_k-a_{k+1}+a_{k+1}) \right| = \left|\sum_{k=m}^n (a_k-a_{k+1})+\sum_{k=m}^n a_{k+1}\right|$$ $$\le \sum_m^n |a_{k+1}-a_k|+\left|\sum_m^n a_{k+1}\right|$$ The first term after the inequality must go to zero eventually, but this looks like a complete dead-end to me. Note: my question is no
|
You cannot prove it, since it is false. For instance, the harmonic series ( $\sum_{n=1}^\infty\frac1n$ ) diverges, in spite of the fact that the sequence $\left(\frac1n\right)_{n\in\Bbb N}$ is a sequence of bounded variation.
|
|sequences-and-series|convergence-divergence|bounded-variation|
| 1
|
Calculate the volume of solid
|
Calculate the volume of the solid consisting of the cylinder $x^2+y^2\leq 4, 0 \leq z \leq 2$ and the cone $x^2+y^2\leq z^2, 2\leq z \leq 5.$ I tried to draw the figure in GeoGebra and I'm trying to use cylindrical coordinates but I could not. My attempt: put $x=r\cos(\theta), y=r \sin(\theta)$ and $z=z.$ We have $r^2\leq z^2,$ then $r\leq z$. Furthermore, $0\leq r\leq 2$ and $0\leq \theta \leq 2\pi.$ Therefore the volume is $\int\int\int 1\,dz\,dx\,dy =\int_{0}^{2}\int_{0}^{2\pi}\int_{r}^{5}r\, dz\,dr\,d\theta = \dfrac{44\pi}{3}$ The answer is $47\pi$ but if I do as above, I can't get it. Can anyone help me, please?
|
Yes, the answer is $47\pi$ . This is so because the volume of the cylinder is $$\int_0^{2\pi}\int_0^2\int_0^2\rho\,\mathrm d\rho\,\mathrm dz\,\mathrm d\theta=8\pi,$$ whereas the volume of the cone is $$\int_0^{2\pi}\int_2^5\int_0^z\rho\,\mathrm d\rho\,\mathrm dz\,\mathrm d\theta=\int_0^{2\pi}\int_2^5\frac{z^2}{2}\,\mathrm dz\,\mathrm d\theta=39\pi,$$ and $8\pi+39\pi=47\pi$.
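As a quick sanity check on the arithmetic, both triple integrals can be evaluated symbolically (a minimal sketch, assuming SymPy is available):

```python
from sympy import symbols, integrate, pi

rho, z, theta = symbols('rho z theta', positive=True)

# cylinder: 0 <= rho <= 2, 0 <= z <= 2
V_cyl = integrate(rho, (rho, 0, 2), (z, 0, 2), (theta, 0, 2*pi))
# cone: 0 <= rho <= z, 2 <= z <= 5
V_cone = integrate(rho, (rho, 0, z), (z, 2, 5), (theta, 0, 2*pi))

print(V_cyl, V_cone, V_cyl + V_cone)   # 8*pi 39*pi 47*pi
```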
|
|calculus|integration|multivariable-calculus|multiple-integral|
| 1
|
If I have N children, what percentage of my chromosomes will be passed on?
|
If I have one child half my chromosomes are passed on. If I have two children somewhere between half and all my chromosomes will be passed on. From a statistical perspective what is the average number of chromosomes inherited with N children? N
|
Ignoring any biological details, and taking the math question at face value, this is a fairly standard problem. The trick is to think about the probability that a chromosome is not passed on to any of the children. As you state, if you have $M$ chromosomes and $1$ child, the probability that any given chromosome is passed on is $1/2$ , and the expected value of the total number passed on is $M/2$ . Of course, for just one child this expected value is also known to be the exact count. The other trivial case is no children, where the probability that a chromosome is passed on is $0$ and the expected value for the total is also $0$ . If you have $2$ children the probability that a given chromosome is passed on to neither child is $1/4$ . Therefore, the probability that a given chromosome is inherited by at least one child is $3/4$ and the expected value for the total is simply $3M/4$ . This pattern continues for $3$ children where the probability that a chromosome is not inherited by any
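Under the same simplifying model used above (each chromosome is passed to each child independently with probability $1/2$, ignoring crossover), the expected fraction passed on with $N$ children is $1-2^{-N}$. A minimal simulation sketch in Python, with helper names of my own choosing, that matches this formula:

```python
import random

def expected_fraction(n_children):
    # P(a given chromosome reaches at least one child) = 1 - (1/2)**n_children
    return 1 - 0.5 ** n_children

def simulated_fraction(n_children, n_chromosomes=46, trials=5000):
    passed = 0
    for _ in range(trials):
        for _ in range(n_chromosomes):
            # the chromosome survives if at least one child inherits it
            if any(random.random() < 0.5 for _ in range(n_children)):
                passed += 1
    return passed / (trials * n_chromosomes)

for n in range(1, 6):
    print(n, expected_fraction(n), round(simulated_fraction(n), 3))
```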
|
|statistics|
| 0
|
Finite graphs where every maximal clique is of even size is not EC$_\Delta$
|
For the question below I will use notations used in Enderton's book. Let $\mathcal{L}$ be a language with $=$ , $\forall$ and $R$ a binary relation symbol. A model $\mathfrak{A}$ is called a graph if $\mathfrak{A}$ satisfies $\forall x (\neg x R x)$ (interpreted as no vertex is connected to itself via an edge) and $\forall x \forall y (xRy \to yRx)$ (interpreted as undirected graphs). A clique in a graph $\mathfrak{A}$ is a set $\{v_1, v_2, \ldots v_n\} \subseteq |\mathfrak{A}|$ of $n$ distinct elements such that for all $i\neq j$ we have that $(v_i,v_j) \in R^{\mathfrak{A}}$ . (interpreted as a clique of size $n$ is $n$ distinct vertices such that each vertex is connected to the others via an edge). With these definitions I am asked to prove that the class of finite graphs where every maximal clique is of even size is not EC $_\Delta$ in the sense defined in Enderton's logic book. That is to say that there is no set of wff $\Sigma$ (finite or infinite) in the language $\mathcal{L}$ such that
|
It is a fact that if an $\mathrm{EC}_\Delta$ class (more commonly called an elementary class, or a first-order axiomatizable class, or the class of models of a first-order theory) contains arbitrarily large finite structures, then it contains an infinite structure. Two proofs are given in the answers to this question, and you can find many more explanations by searching for "compactness arbitrarily large finite models" on this site. Now it follows immediately that the class of finite graphs such that every maximal clique has even size is not $\mathrm{EC}_\Delta$ , since this class contains arbitrarily large graphs (e.g. the finite complete graphs of even size) but no infinite graphs.
|
|logic|model-theory|
| 1
|
Classical Nullstellensatz implies Hilbert Nullstellensatz
|
In artin's Algebra book (1st edition) the following theorems are stated: (Hilbert's Nullstellensatz) In $\mathbb{C}[x_1,...,x_n]$ maximal ideals are of the form $\langle x_1-a_1,...,x_n-a_n\rangle$ . (Classical Nullstellensatz) $f_1,...,f_n\in \mathbb{C}[x_1,...,x_n] $ . If $g=0$ in the variety defined by zeros of $f_1,...,f_n$ , then there is a power $g^m\in \langle f_1,...,f_n\rangle$ He then asks us to prove that the Hilbert Nullstellensatz is a consequence of the Classical Nullstellensatz. I think I was able to prove this, but it is not exactly pretty, I am looking for nicer solutions and for any possible flaw in my argument. Take $\mathcal{M}$ a maximal ideal and suppose $\mathcal{M}\not= \langle x_1-a_1,...,x_n-a_n\rangle$ for any $(a_1,...,a_n)\in \mathbb{C}^n$ . Because $\mathcal{M}$ is maximal and $\langle x_1-a_1,...,x_n-a_n\rangle$ are proper ideals of our ring this means that: $$\mathcal{M}\subset \langle x_1-a_1,...,x_n-a_n\rangle \quad \text{cannot hold.} $$ In other word
|
First we record a corollary of the classical theorem. (The proof is behind the spoiler block, in case you want to prove it yourself.) Corollary: If $I, J \subset \mathbb{C}[x_1, \dots, x_n]$ are ideals where $Z(I) = Z(J)$ as subsets of $\mathbb{C}^n$ , then $\sqrt{I} = \sqrt{J}$ . Proof. Let $f \in \sqrt{I}$ . Then, for some $n$ , $f^n$ vanishes along $Z(I) = Z(J)$ , so for some $m$ , $(f^n)^m \in J$ by the theorem, so $f \in \sqrt{J}$ . Hence, $\sqrt{I} \subseteq \sqrt{J}$ and we conclude by symmetry. Now, if $\mathcal{M} \subset \mathbb{C}[x_1, \dots, x_n]$ is a maximal ideal. Then $Z(\mathcal{M}) \subset \mathbb{C}^n$ is nonempty by the corollary, so we can find some $(a_1, \dots, a_n) \in Z(\mathcal{M})$ . It then follows that $\mathcal{M} \subseteq (x_1 - a_1, \dots, x_n - a_n)$ , which implies the desired result. EDIT: Let $I \subset \mathbb{C}[x_1, \dots, x_n]$ be an ideal. Then we define the zero locus of $I$ to be $$Z(I) = \{a \in \mathbb{C}^n \;|\; f(a) = 0\text{ for all }f\t
|
|algebraic-geometry|solution-verification|ring-theory|
| 1
|
Show that $\operatorname{rank}(A+B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$
|
I know about the fact that $\operatorname{rank}(A+B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$, where $A$ and $B$ are $m \times n$ matrices. But somehow, I don't find this as intuitive as the multiplication version of this fact. The rank of $A$ plus the rank of $B$ could well be more than the number of columns of $A+B$! How can I prove that this really is true?
|
$\newcommand{\rank}{{\rm rank}\;{}}$ We already have some very good answers to this question, but I would like to add one more using an approach based on partitioned matrices. For a partitioned matrix $$M = \begin{bmatrix}A&B \\ C&D\end{bmatrix},$$ let us define the invertible operations $${\sf rowSwap}(M) = \begin{bmatrix}C&D \\ A&B\end{bmatrix},$$ and $$ {\sf rowSynth}_{X}(M) {}={} \begin{bmatrix} A & B \\ C + XA & D+XB \end{bmatrix}. $$ It is easy to see that ${\sf rowSwap}^{-1} = {\sf rowSwap}$ and ${\sf rowSynth}_{X}^{-1} = {\sf rowSynth}_{-X}$ . These operations do not affect the rank of the original matrix. Now, we start with the observation that $$\rank \begin{bmatrix}A & 0 \\ B & B\end{bmatrix} = \rank A + \rank B.$$ Then, by applying the above elementary operations $$ \begin{bmatrix}A & 0 \\ B & B\end{bmatrix} \overset{{\sf rowSwap}}{\longrightarrow} \begin{bmatrix}B & B \\ A & 0\end{bmatrix} \overset{{\sf rowSynth}_{I}}{\longrightarrow} \begin{bmatrix}B & B \\ A+B & B\end{bm
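Not a proof, of course, but the inequality is easy to stress-test on random low-rank integer matrices (a small sketch, assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    m, n = rng.integers(2, 8, size=2)
    # build A and B from thin factors so their ranks are genuinely small
    A = rng.integers(-2, 3, (m, 2)) @ rng.integers(-2, 3, (2, n))
    B = rng.integers(-2, 3, (m, 3)) @ rng.integers(-2, 3, (3, n))
    assert (np.linalg.matrix_rank(A + B)
            <= np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))
print("no counterexample found")
```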
|
|linear-algebra|matrices|inequality|matrix-rank|
| 0
|
Requirements for a Markov chain to converge to its stationary distribution.
|
I have seen, in two places, different requirements for a Markov chain to converge to its stationary/invariant distribution: irreducibility and aperiodicity (as mentioned here), and irreducibility and recurrence. Is it true that under irreducibility and recurrence the stationary distribution is also the limiting distribution? And how does recurrence relate to aperiodicity; is one requirement stronger than the other?
|
For an example, see my comment on the discrete time Markov chain (DTMC) with $S=\{1,2\}$ and $P_{12}=P_{21}=1$ , which is irreducible, recurrent, and periodic and has a solution $\pi = (1/2,1/2)$ to $\pi = \pi P$ . Here is my favorite steady state theorem: For a finite or countably infinite state space $S$ , define a "probability vector" as a nonnegative vector $(\pi_i)_{i\in S}$ that satisfies $\sum_{i \in S} \pi_i=1$ . Theorem: Suppose $\{X_t\}_{t=0}^{\infty}$ is a DTMC with finite or countably infinite state space $S$ and transition probability matrix $P=(P_{ij})$ . Suppose the DTMC is irreducible and there is a probability vector $(\pi_i)_{i\in S}$ that satisfies $$ \pi_j = \sum_{i\in S} \pi_i P_{ij} \quad \forall j \in S \quad \mbox{(stationary equations)}$$ Then a) We must have $\pi_i>0$ for all $i \in S$ . b) Regardless of the initial condition $X_0\in S$ , we have for all $i \in S$ : $$ \lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=0}^{T}1_{\{X_t=i\}} = \pi_i \quad \mbox{(with pr
|
|probability|stochastic-processes|markov-chains|markov-process|stationary-processes|
| 0
|
Why do k arithmetic right/left bit shifts divide by $2^k$/multiply by $2^k$ in two's complement, rigorously?
|
I want to understand the semantics of rights bit shifts x>>k in two's complement properly, in particular why do right bit shifts of size $k$ approximately divide $x$ by $2^k$ (and ideally making this "approx" precise and also handling the arithmetic left bit shift too). So the representation of a number in two complement is for a $N$ bit length word: $$ x = -a_{N-1} 2^{N-1} + \sum^{N-2}_{i=0} a_{i} 2^i = neg + pos $$ e.g., as a intentionally selected running example $-6$ is $1010_2$ which is $-8 + 0 + 2 + 0$ . I'm thinking of bit slots [-8, 4, 2, 1] [-2^3, 2^2, 2^1, 2^0] for each bit. When doing an arithmetic right bit shift we get $1101_2$ which ends up being $-3 = -8 + 4 + 0 + 1$ in twos complement. When the most significant bit (MSB) is $0$ the mathematics seems trivial. If you move the bits to the right it's a division by two because every number is divided by two (maybe floor since we lose the final bit sometimes if it's 1). With negative numbers using 2s complement it seems more
|
Since it seems that you haven't understood how 2-complement works, I'll explain it first, then I'll explain how right shift works for 2-complement negative numbers. How 2-complement works Let $N$ be the number of binary digits used for storing a number. For every $n\in\mathbb{Z},N\in\mathbb{N}^+$ there exists a unique $k \in \mathbb{Z}$ and a unique finite sequence $(d_i)$ , $d_i\in\{0,1\}$ , $0\leq i\leq N-1$ such that $$n = 2^N k+ \sum_{i = 0}^{N-1}{2^id_i}$$ This can simply be proven inductively. Now you see that $d_i$ is the $i$ -th digit of the binary system, which are stored in the memory. $k$ , however, is not stored, hence it must be chosen. For unsigned integers, the choice is naturally $k = 0$ , leading to $$n = \sum_{i = 0}^{N-1}{2^id_i}$$ A consequence is that $n$ can only represented for $0 \leq n \leq 2^N-1$ . For signed integers, this is clearly problematic because there isn't any negative integer at all. We also want a way to represent them in such a way that their diff
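Python's built-in right shift on negative integers is arithmetic, which makes it a convenient illustration of the floor-division behaviour described above:

```python
# For Python integers, x >> k is defined to equal floor(x / 2**k),
# matching the two's-complement arithmetic shift described above.
for x in (-6, -5, 5, 6, -1):
    for k in (1, 2, 3):
        assert (x >> k) == (x // 2 ** k)      # // rounds toward -infinity
        print(f"{x:>3} >> {k} = {x >> k:>4}   floor({x}/{2 ** k}) = {x // 2 ** k:>4}")
```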
|
|computer-science|binary|binary-operations|
| 0
|
Seifert-van Kampen theorem, classical version
|
This is from Munkres' Topology page 431: Theorem 70.2 (Seifert-van Kampen theorem, classical version). Assume the hypotheses of the preceding theorem. Let $$ j : \pi_1(U, x_0) * \pi_1(V, x_0) \longrightarrow \pi_1(X, x_0) $$ be the homomorphism of the free product that extends the homomorphisms $j_1$ and $j_2$ induced by inclusion. Then $j$ is surjective, and its kernel is the least normal subgroup $N$ of the free product that contains all elements represented by words of the form $$ (i_{1}(g))^{-1} i_{2}(g), $$ for $g \in \pi_{1}(U \cap V, x_{0}).$ and by the hypotheses of the preceding theorem it means that $X$ is a topological space and $U$ , $V$ open subsets of $X$ such that $X = U \cup V$ and $x_0 \in U \cap V$ ; and that $j_1:\pi_1(U,x_0)\to\pi_1(X,x_0)$ and $ j_2:\pi_1(V,x_0)\to\pi_1(X,x_0)$ are inclusion-induced homomorphisms. I read the proof and understood it but something is really bothering me and that is how do we know that the homomorphism $j$ mentioned in the theorem even exists?
|
It tells you: $j_1:U\subset X$ and $j_2:V\subset X$ induce maps on $\pi_1$ , $\pi_1(U)\to\pi_1(X)$ and $\pi_1(V)\to\pi_1(X)$ , and these are genuine homomorphisms, and if you have two groups $A,B$ and a third group $C$ and homomorphisms $A\to C,B\to C$ then there is a unique associated homomorphism $A\ast B\to C$ . In this case with $A=\pi_1(U),B=\pi_1(V),C=\pi_1(X)$ , there is a unique homomorphism $j:\pi_1(U)\ast\pi_1(V)\to\pi_1(X)$ associated to $j_1,j_2$ .
|
|algebraic-topology|homotopy-theory|group-homomorphism|fundamental-groups|
| 0
|
Non-trivial exemple of Hölder continuous function.
|
I've seen the following concept appear quite often in mathematics: A function $f:I\subset \mathbb{R}\to\mathbb{R}$ is said to be Hölder continuous if there are constants $\alpha$ and $M$ such that $$|f(x)-f(y)|\leq M|x-y|^\alpha$$ for all $x,y\in I$. What are some examples of Hölder continuous functions?
|
Another example is $\arcsin x$ , which is $1/2$-Hölder continuous on $[-1,1]$ .
|
|real-analysis|
| 0
|
Equal number of finite and infinite subsets implies amorphous
|
We work in $\sf ZF$ . An amorphous set is a set that cannot be partitioned into $2$ disjoint infinite sets. If $A$ is an amorphous set then it has an equal number of finite subsets and infinite subsets, with the bijection $x \mapsto x^c$ . Is the opposite also true? That is, given a nonempty set $x$ with an equal number of infinite and finite subsets, is it amorphous? I showed that it is enough to show that $2^A$ is Dedekind finite: define $B$ as the set of finite subsets of $A$ and $C$ as the set of cofinite subsets of $A$ . If there exists a bijection $f$ from $B$ to the set of infinite subsets of $A$ , then define $g : B \cup C \to \mathcal P (A)$ as $x \mapsto f(x)$ if $x \in B$ and $x^c$ otherwise. $g$ is a bijection, so if $\mathcal P (A)$ is Dedekind finite, $B \cup C = \mathcal P (A)$ .
|
Referring to Theorem to 4.21, Proposition 4.22 In L. Halbeisen's book as explained in the comment: Let $A$ be an infinite set. Let $\text{fin}(A)$ be the finite subsets of $A$ , $\text{cof}(A)$ the cofinite subsets of $A$ . Write $\mathcal P(A)=\text{fin}(A)\sqcup\text{cof}(A)\sqcup \text{binf}(A)$ , where $\text{binf}(A)$ is the rest. If $|\mathcal P(A)\setminus \text{fin}(A)|=|\text{fin}(A)|$ then $|\mathcal P(A)|=2|\text{fin}(A)|$ . Now assume $A$ is not amorphous (so $\text{binf}(A)\ne\emptyset$ ) and $|\mathcal P(A)\setminus \text{fin}(A)|=|\text{fin}(A)|$ . Let $\sigma\colon\mathcal P(A)\setminus \text{fin}(A)\to\text{fin}(A)$ be a bijection. Let $\tau\colon \mathcal P(A)\setminus\text{cof}(A)\to\text{fin}(A)$ be defined by $$ \tau(x)=\begin{cases} \sigma(A\setminus x)&x\in\text{fin}(A)\\ \sigma(x)&x\in\text{binf}(A) \end{cases} $$ Then $\tau$ is a bijection. Let $u\in\text{binf}(A)$ . Then $\tau(u)\in\text{fin}(A)$ , so $\tau(u)\not\in\tau(\text{fin}(A))$ . Therefore $\tau\rvert
|
|set-theory|axiom-of-choice|
| 1
|
Round-robin tournament with 4 contestants per match and duplicates allowed
|
I need an algorithm that will calculate 4-contestant groups that ensure each contestant plays against each other contestant at least once, but more than once is allowed. This is different from a Steiner quadruple system, where they're only allowed to meet once. I tried the circle method for pairwise matches but it doesn't translate well to quadruple matches. I'm looking for something slightly more optimized than just looping over the list of contestants. Thanks for the help!
|
Here is a method for 28 players in twelve rounds. There are 12 spare players in each of the first five rounds; they can be paired up as well. It is key here that $28/4=7$ is prime, or at least coprime to $6$ . Divide the players into four groups of seven, players in each group numbered $1$ to $7$ . In the first five rounds, players play within each group $1234,1235,1236,1237,4567$ . In the next seven rounds, group A players each have a table they stay at. Group B advances one table each round, Group C advances two tables each round, and Group D advances three tables each round.
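The construction can be checked mechanically. Below is a small script (my own sketch; it reads the construction as four groups A, B, C, D of players numbered 0 to 6, with B, C, D advancing 1, 2, 3 tables per round) confirming that all $\binom{28}{2}=378$ pairs meet at least once in the twelve rounds:

```python
from itertools import combinations

groups = {g: [f"{g}{i}" for i in range(7)] for g in "ABCD"}
players = [p for ps in groups.values() for p in ps]

rounds = []
# five within-group rounds (0-based versions of 1234, 1235, 1236, 1237, 4567)
for quad in ([0, 1, 2, 3], [0, 1, 2, 4], [0, 1, 2, 5], [0, 1, 2, 6], [3, 4, 5, 6]):
    rounds.append([[groups[g][i] for i in quad] for g in "ABCD"])
# seven rotating rounds: B advances 1 table, C 2 tables, D 3 tables per round
for r in range(7):
    rounds.append([[groups["A"][t], groups["B"][(t + r) % 7],
                    groups["C"][(t + 2 * r) % 7], groups["D"][(t + 3 * r) % 7]]
                   for t in range(7)])

covered = {frozenset(p) for rnd in rounds for table in rnd for p in combinations(table, 2)}
missing = [p for p in combinations(players, 2) if frozenset(p) not in covered]
print(len(rounds), "rounds,", len(missing), "uncovered pairs")   # 12 rounds, 0 uncovered pairs
```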
|
|combinatorics|
| 0
|
Finding and proving a closed formula for $\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}$
|
I want to find and prove a closed formula for the following sum $$\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}=\sqrt{1+\frac{1}{1^2}+\frac{1}{2^2}}+\sqrt{1+\frac{1}{2^2}+\frac{1}{3^2}}+\dots +\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$$ I have found a closed formula but I have problems with proving it and proving my steps in between. First I simplified the sum. To do that I calculated the radicands. After that I calculated the sum for $k=1$ to $k=4$ to see a pattern. Lastly I concluded from the pattern the closed formula: $$s_k=\frac{(k+1)^2-1}{(k+1)}=(k+1)-\frac{1}{(k+1)}$$ With $s_k$ I denote the sum up to $k$ . This closed formula agrees with the given solutions. But now I want to prove it and my steps in between. Simplifying the sum I first calculated the following three radicands: \begin{alignat*}{3} &1+\frac{1}{1^2}+\frac{1}{2^2}&&=\frac{9}{4}&&&=\frac{3^2}{2^2} \newline\newline &1+\frac{1}{2^2}+\frac{1}{3^2}&&=\frac{49}{36}&&&=\frac{7^2}{6^2} \newline\newline &1+\frac{1}{3^2}+
|
Note that: $$\frac{n^2+n+1}{n^2+n}=\frac{n(n+2)}{n+1}-\frac{(n-1)(n+1)}{n}$$ Therefore: $$\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}} =\sum_{n=1}^k\frac{n^2+n+1}{n^2+n}=\sum_{n=1}^k \left({\frac{n(n+2)}{n+1}-\frac{(n-1)(n+1)}{n}}\right)=\frac{k(k+2)}{k+1}=k+1-\frac{1}{k+1}$$
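A quick numerical confirmation of the closed form (plain Python, floating point only):

```python
from math import sqrt

def s(k):
    return sum(sqrt(1 + 1 / n**2 + 1 / (n + 1)**2) for n in range(1, k + 1))

for k in (1, 2, 5, 10, 100):
    print(k, s(k), (k + 1) - 1 / (k + 1))   # the two columns agree up to rounding
```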
|
|algebra-precalculus|summation|telescopic-series|
| 1
|
Prove ⌈a/b⌉ ≤ a/b + (b-1)/b
|
For integers $a, b > 0$ , prove $⌈a/b⌉ ≤ (a + (b-1))/b$ . RHS $= a/b + (b-1)/b $ , where $ (b-1)/b $ is in $[0,1)$ . If $a/b$ is an integer, the inequality holds true as we are adding a non-negative term. If $a/b$ is not an integer, $⌈a/b⌉ < a/b + 1$ -- Equation 1. How to demonstrate that switching the $1$ with the smaller number $(b-1)/b$ leads to the $<$ transforming to $≤$ in Equation 1? Similarly, prove $⌊a/b⌋ ≥ (a - (b-1))/b$
|
$\lceil a/b \rceil$ is strictly less than $a/b+1$ and $b$ is positive, so that $$ b \left\lceil \frac ab \right\rceil < a + b \, . $$ The expressions on the left and on the right are both integers, and two distinct integers differ by at least $1$ . It follows that $$ b \left\lceil \frac ab \right\rceil \le a + b - 1 \, , $$ which is the desired inequality.
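Both this inequality and the companion floor inequality from the question are easy to sanity-check over a small range (plain Python):

```python
for a in range(1, 200):
    for b in range(1, 50):
        c = -(-a // b)                     # integer ceiling of a/b
        f = a // b                         # integer floor of a/b
        assert c <= (a + b - 1) / b
        assert f >= (a - b + 1) / b
print("both inequalities hold for all tested a, b")
```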
|
|inequality|proof-writing|ceiling-and-floor-functions|
| 0
|
When is the center of group contained in the derived subgroup
|
Let $N$ be a group. Assume that $N$ is torsion-free, finitely generated and nilpotent. I read somewhere that $$ Z(N) \subset [N,N] \iff N \text{ cannot be written as a direct product of groups } N = A \times B \text{ where }A \text{ is non-trivial abelian.}$$ One implication is clear to me: $\implies$ . I prove it by contraposition: assume $N$ can be decomposed as $A\times B$ where $A$ is non-trivial abelian. Then it is clear that $$ Z(N) = Z(A) \times Z(B) = A\times Z(B).$$ On the other hand, we have that $$ [N,N] = [A,A] \times [B,B] = \{1\} \times [B,B].$$ If we had that $Z(N) \subset [N,N]$ , we would need that $A\subset \{1\}$ , which is clearly not possible as $A$ was non-trivial. So by contraposition, we have proven the first implication. Now my problem is with the other implication. I don't know how I can prove the converse. I don't even know how to go about doing that. My gut suggests contraposition again, but then I have to use the assumption that $Z(N) \not\subset [N,N]$ to
|
This claim does not hold. A counterexample (even in the realm of finitely generated, torsion-free and nilpotent groups) is that of the Heisenberg group $H$ of upper triangular $3\times 3$ -matrices with entries in $2\mathbb{Z}$ . It can easily be seen that in this case $$ Z(H) \not\subset [H,H].$$ Yet, it is easy to check that $H$ does not have any abelian factor. The claim can however be proven for groups that are nilpotent, torsion-free and radicable. This has to do with Mal'cev completions and the correspondence between Mal'cev completions and rational Lie algebras.
|
|abstract-algebra|group-theory|direct-product|derived-subgroup|
| 1
|
Finding and proving a closed formula for $\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}$
|
I want to find and prove a closed formula for the following sum $$\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}=\sqrt{1+\frac{1}{1^2}+\frac{1}{2^2}}+\sqrt{1+\frac{1}{2^2}+\frac{1}{3^2}}+\dots +\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$$ I have found a closed formula but I have problems with proving it and proving my steps in between. First I simplified the sum. To do that I calculated the radicands. After that I calculated the sum for $k=1$ to $k=4$ to see a pattern. Lastly I concluded from the pattern the closed formula: $$s_k=\frac{(k+1)^2-1}{(k+1)}=(k+1)-\frac{1}{(k+1)}$$ With $s_k$ I denote the sum up to $k$ . This closed formula agrees with the given solutions. But now I want to prove it and my steps in between. Simplifying the sum I first calculated the following three radicands: \begin{alignat*}{3} &1+\frac{1}{1^2}+\frac{1}{2^2}&&=\frac{9}{4}&&&=\frac{3^2}{2^2} \newline\newline &1+\frac{1}{2^2}+\frac{1}{3^2}&&=\frac{49}{36}&&&=\frac{7^2}{6^2} \newline\newline &1+\frac{1}{3^2}+
|
$$1+\dfrac1{(n+a)^2}+\dfrac1{(n+b)^2}=\cdots=\dfrac{(n^2+(a+b)n+ab)^2+2n^2+2n(a+b)+a^2+b^2}{(n+a)^2(n+b)^2}$$ Comparing the numerator with $$(n^2+cn+d)^2=n^4+2cn^3+n^2(c^2+2d)+2n(cd)+d^2$$ Equating the constants and the coefficients of $n^3,n^2,n$ $c=a+b$ and $c^2+2d=(a+b)^2+2=c^2+2\implies d=1$ and $(a+b)(1+ab)=cd\iff c(1+ab)=c\iff ab=0$ $\implies$ either $a=0\implies c=b$ and $a^2+b^2+a^2b^2=d^2=1\implies b=\pm 1$ $\implies (n^2\pm n+1)^2=n^4\pm2n^3+3n^2\pm2n+1$ Similarly if $b=0$ Now if $g(n)=\dfrac{n^2+n+1}{n(n+1)}=1+\dfrac{n+1-n}{n(n+1)}=1+f(n)-f(n+1)$ where $f(m)=\dfrac1m$ $$\sum_{n=1}^kg(n)=\sum_{n=1}^k1+\sum_{n=1}^k\underbrace{(f(n)-f(n+1))}_{\text{ Telescoping series}}=k + f(1)-f(k+1)$$
|
|algebra-precalculus|summation|telescopic-series|
| 0
|
Problem about Fourier transform being integrable
|
I am currently reading a paper and the author makes the following claim: If $f \in L^1(\mathbb{R})$ is a continuous, even, and nonnegative function such that $\hat{f}(\alpha) \leq 0$ for $|\alpha| \geq 1$ , then $\hat{f} \in L^1(\mathbb{R})$ . He claims that this can be shown by approximation of the identity. I am not very comfortable with approximation of the identity. I would be extremely grateful if a proof could be outlined.
|
It suffices to show that the integral $$\int\limits_{|t|\ge 1} [-\widehat{f}(t)]\,dt$$ is convergent. To this end we will show that the integrals $$\int\limits_{|t|\ge 1} [-\widehat{f}(t)]e^{-a t^2}\,dt$$ are bounded with respect to $a>0.$ As the function $\widehat{f}$ is continuous the latter is equivalent to the boundedness of the integrals $$\int\limits_{-\infty}^\infty \widehat{f}(t)e^{-a t^2}\,dt$$ We have $$\int\limits_{-\infty}^\infty \widehat{f}(t)e^{-at^2}\,dt =\int\limits_{-\infty}^\infty\left ( \int\limits_{-\infty}^\infty f(x)e^{-2\pi it x}e^{-at^2}\,dx\right )\,dt\\ \int\limits_{-\infty}^\infty f(x)\left (\int\limits_{-\infty}^\infty e^{-at^2}e^{-2\pi itx}\,dt\right )\,dx\ = \sqrt{\pi\over a}\int\limits_{-\infty}^\infty f(x)e^{-\pi^2x^2/a}\,dx $$ The change of integration was justified as the function $f(x)e^{-2\pi itx} e^{-at^2}$ is absolutely integrable over $\mathbb{R}^2.$ The last integral can be split into $$ \sqrt{\pi\over a}\int\limits_{|x|\le 1} f(x)e^{-\pi^2x^2/a}
|
|fourier-analysis|
| 0
|
$Ham(M, \omega)$ acts transitively on $(M,\omega)$
|
Let $M$ be a compact and connected smooth manifold with a symplectic form $\omega$ . $Ham(M, \omega)$ denotes the space of hamiltonian symplectomorphisms of $(M,\omega)$ . I have the following statement in my lecture notes: Using Darboux’s theorem one can show that the action of $Ham(M, \omega)$ on M is transitive, that is: for any pair of points $p, q \in M$ , there exists $\Phi \in Ham(M, \omega)$ such that $\Phi(p) = q$ . The idea is that Darboux's theorem to go from local to global, i.e. to show that points that are close to each other in the symplectic manifold can be mapped to each other via a Hamiltonian diffeomorphism. ...which I unsuccesfully tried to prove. How is it done? I know that a symplectomorphism of $(M, \omega)$ is a diffeomorphism $\Phi : M \to M$ such that $\Phi ^∗\omega = \omega$ . $\Phi$ is Hamiltonian if there exists a Hamiltonian isotopy $\phi_t$ such that $\Phi=\phi_1$ . And that $ Ham(M, \omega)$ is a normal subgroup of $Symp(M, \omega)$ (the space of symplec
|
Translations of $\mathbb{R}^{2n}$ are generated by constant vector fields, i.e. the flow $x\mapsto x+ty$ is generated by the vector field $x\mapsto y$ (under the natural identification $T(\mathbb{R}^{2n})\cong \mathbb{R}^{2n}\times \mathbb{R}^{2n}$ ). For a constant vector field $X=\sum_{i,j}^nX^i \partial_{q^i}+Y_j\partial_{p_j}$ we have $\iota_{X}\omega=\sum X^idp_i-\sum Y_j dq^j$ and hence $d\iota_{X}\omega_{std}=0$ . By the closedness of $\omega_{std}$ , $\mathcal{L}_{X}\omega_{std}=0$ and hence the flow $x\mapsto x+ty$ is a symplectomorphism. There are several ways to see that this flow is Hamiltonian. First off, we can write a Hamiltonian $H_y(p,q)=\sum X^ip_i-Y_jq^j=\omega(X,(p,q))$ . But also, the Poincare lemma tells us that $\iota_X\omega$ is exact, i.e. $\iota_X\omega_{std}=dH$ for some function $H$ and hence there exists a Hamiltonian for this flow. This tells us that $Ham(\mathbb{R}^{2n},\omega_{std})$ acts transitively on $\mathbb{R}^{2n}$ since the time one flow of $x\ma
|
|differential-geometry|smooth-manifolds|differential-forms|symplectic-geometry|
| 0
|
Prove an integral inequality with squared integrals
|
Given $f, g$ integrable, prove that $$\left(\int_0^1 f(t) \ \mathrm{d}t\right)^2 + \left(\int_0^1 g(t) \ \mathrm{d}t\right)^2 \leq \left(\int_0^1 \sqrt{f^2(t) + g^2(t)} \ \mathrm{d}t\right)^2$$ I think that this exercise could be solved by applying the Hölder inequality, $$\left(\int_a^b f^p(t) \ \mathrm{d}t\right)^{1/p} + \left(\int_a^b g^p(t) \ \mathrm{d}t\right)^{1/p} \geq \left(\int_a^b (f(t) + g(t))^p \ \mathrm{d}t\right)^{1/p}$$ But I don't see how exactly. If you have any ideas, please share.
|
Consider the curve $\gamma: [0, 1] \to \mathbb{R}^2$ parametrized as $\gamma(t)=(x(t), y(t))$ with $$x(t) = \int_0^t f(s)\,ds \quad \text{ and } \quad y(t)=\int_0^t g(s)\,ds$$ The arclength of $\gamma$ is given by $$l(\gamma) = \int_0^1\sqrt{f(t)^2 + g(t)^2}\,dt$$ while the length of the line segment joining $\gamma(0)$ and $\gamma(1)$ is just $$\sqrt{\left(\int_0^1 f(t)\,dt\right)^2 + \left(\int_0^1 g(t)\,dt\right)^2}$$ The inequality follows since the line segment is the shortest path between any two points in $\mathbb{R}^2$ .
|
|definite-integrals|integral-inequality|holder-inequality|
| 1
|
Rightmost digit of $ \left \lfloor \frac{10^{20000}}{10^{100}+3} \right\rfloor $
|
How could I find $$ 0 \leq a \leq 9 $$ such that $$ \left \lfloor \frac{10^{20000}}{10^{100}+3} \right\rfloor \equiv a \mod 10 $$ ?
|
NOTE: I just stumbled upon this (very) old question, and I think I might have a new (and hopefully, easier) perspective. This is not supposed to be entirely a new chain of thought, but I think that it might be a tad bit easier to understand. $$\begin{align}10^{20000}\pmod{10^{100}+3}&\equiv10^{20000}-10^{19900}\times(10^{100}+3)\\ &\equiv-3\times10^{19900}\\ &\equiv-3\times10^{19900}+3\times10^{19800}\times(10^{100}+3)\\ &\equiv(-3)^2\times10^{19800} \end{align}$$ As we can guess (and prove, using induction) from this method, for any whole number $n\leq200$ : $$10^{20000}\pmod{10^{100}+3}\equiv(-3)^n\times10^{20000-100n}$$ So, putting in $n=200$ : $$\begin{align}10^{20000}\pmod{10^{100}+3}&\equiv(-3)^{200}\times10^{20000-100\times200}\\ &\equiv9^{100}\end{align}$$ Now, $9^{100} < 10^{100}+3$ , so, it is the remainder that we get when we divide $10^{20000}$ by $10^{100}+3$ . Furthermore, we also know that: $$10^{20000}=(10^{100}+3)\times\left \lfloor \frac{10^{20000}}{10^{100}+3} \right\rfloor+\text{r
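Since Python integers have arbitrary precision, the whole computation can also be done directly, which makes a nice check of the modular reasoning above (a two-line sketch):

```python
q = 10 ** 20000 // (10 ** 100 + 3)                       # exact floor, about 19,900 digits
print(q % 10)                                            # the rightmost digit asked for
print(10 ** 20000 - (10 ** 100 + 3) * q == 9 ** 100)     # the remainder really is 9**100
```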
|
|arithmetic|
| 0
|
A sequence with the convergence of a.s. and $L^1$convergence imply its conditional expectation a.s. convergence
|
If a sequence of random variables $X_n$ is defined on the space $(\Omega,\mathscr{F},\mathbb{P})$ , such that $X_n$ converges to $ X$ a.s. and $X_n$ converges to $ X$ in $L^1$ , is it true that for any sub $\sigma$ -field $\ \mathscr{G},\mathbb{E}[X_n|\mathscr{G}]\to\mathbb{E}[X|\mathscr{G}] \quad\mathbb{P}$ -a.s. ? I want to use the Dominated Convergence Theorem to solve this problem. Since $X_n$ converges to $ X$ $\mathbb{P}$ -a.s. , all we need to do is to derive that $X_n$ can be bounded by an integrable random variable. How can we deduce this from $L^1$ convergence? I have no idea.
|
As mentioned here A dominated convergence theorem for the conditional expectation $E( \cdot \mid \mathcal{F}_n)$ where $\mathcal{F}_n$ loses information over time Given a sequence $(\mathcal{F}_n)_n$ of $\sigma$ -Algebras with $\mathcal{F}_{n+1} \subset \mathcal{F}_n$ and defining $\mathcal{F}_\infty := \bigcap_n \mathcal{F}_n$ . If $Y_n \to Y_\infty$ a.s. and $|Y_n| \leq Z \in L^1$ then $$ E(Y_n \mid \mathcal{F}_n) \to E(Y_\infty \mid \mathcal{F}_\infty) \text{ a.s.} \tag{*}$$ However, here we don't have the bound $|Y_n| \leq Z$ . And indeed there is a counterexample for it Does almost sure convergence and $L^1$-convergence imply almost sure convergence of the conditional expectation? . The main idea is Take $Y_{n}$ the typographic-sequence converging to zero in $L^1$ but not a.s.. Consider iid $\xi_{n}$ with $P(\xi_{n}=2^{n})=2^{-n}$ and $P(\xi_{n}=0)=1-2^{-n}$ . Then by Borel-Cantelli the $Z_{n}=\prod_k \xi_k$ is zero for $k\geq N(\omega)$ . It also satisfies $E[Z_{n}]=1$ Set $\math
|
|probability|probability-theory|stochastic-processes|
| 1
|
Why is maximum number of joints of 6 lines is 4?
|
The following is considered in Lary Guth's Polynomial Methods in Combinatorics, page 14. Let $L$ be a set of lines in $\mathbb R^3$ . A point $x$ which lies in some set of three non-co-planar lines of $L$ is called a joint of $L$ . Suppose $L$ has $6$ lines. Then, why is it that $L$ has at most $4$ joints? This has been my approach so far: Note that the tetrahedron has 6 edges and 4 vertices. If we take our $L$ to be the set of 6 lines containing each of the six edges of a tetrahedron, we get that each of its vertex is a joint as the three lines intersecting any vertex are non-co-planar. Now, I want to argue that if one wants to maximize the number of joints possible for any set of six lines, the configuration of a tetrahedron is the best possible one. The problem is that I don't know why or how to prove 3. Any suggestions will be really helpful :)
|
Note that $2$ intersecting lines define a plane. Thus, a joint is formed by $3$ intersecting planes. Let's add one more plane to these three. Note that $4$ planes have at most $6$ distinct lines as intersections so $4$ is the minimum number of planes to contain $6$ intersecting lines that satisfy the requirement that no three of them are co-planar. Finally, there are $4$ ways to pick $3$ planes out of the set of $4$ so we can have at most $4$ joints. Tetrahedron is the shape with $4$ planes and $6$ lines.
|
|geometry|euclidean-geometry|affine-geometry|
| 0
|
Is there always an automorphism distinct from identity in a simple module?
|
Let $M$ be a simple module over a unital ring $R$ . If $|M| \leq 2$ , then $M$ has only one automorphism (the identity). I'm wondering whether for $|M| > 2$ , there always exists an automorphism distinct from identity. I showed that this is true for commutative $R$ . In fact in this case for any nonzero elements $x,y \in M$ , there is an automorphism that maps $x$ to $y$ . In the case of noncommutative $R$ , I wasn't able to prove the existence of a nontrivial automorphism or find a counterexample. I'm mostly interested in the case when $M$ and $R$ are finite . I showed that for any counterexample one would have that $|R| \geq |M|^2$ . However I'm too inexperienced with modules to find a counterexample if there is one. If this is in fact also true for noncommutative $R$ , I would appreciate a reference to this result.
|
It turns out it is not always the case for simple modules over noncommutative rings. One counterexample is the simple module $\mathbb{Z}_2 \times \mathbb{Z}_2 = \Big\{\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \Big\}$ over the ring of $2 \times 2$ matrices over $\mathbb{Z}_2$ . Let $A = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}$ . Since $A\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and both $A\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $A\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ are distinct from $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ , there is no automorphism that maps $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ to either $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ or $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ . Thus an automorphism can only map $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ to itself. However an automorphism of a simple module is determined by its valu
|
|ring-theory|commutative-algebra|modules|noncommutative-algebra|
| 0
|
Prove that $\sum_{i=1}^{k} \lambda_i f(g_i x) \geq 0$ holds for all $x \in G$, then $\sum_{i=1}^{k} \lambda_i \geq 0.$
|
Problem statement: Let $f(x) \geq 0$ be a nonzero, bounded, real function on an Abelian group $G$ , $g_1, \ldots, g_k$ are given elements of $G$ , and $\lambda_1, \ldots, \lambda_k$ are real numbers. Prove that if $$\sum_{i=1}^{k} \lambda_i f(g_i \cdot x) \geq 0$$ holds for all $x \in G$ , then $$\sum_{i=1}^{k} \lambda_i \geq 0.$$ My failed attempt We can suppose that $f(g_{1}) \geq 0$ . Denote by $A_{n}$ the set of those elements that can be written in the form $g_{1}^{\alpha_{1}}, \ldots, g_{k}^{\alpha_{k}}$ , where the maximum absolute value of the numbers $\alpha_{1}, \ldots, \alpha_{k}$ is $n$ , where $n > 0$ is an integer. Denote by $S(H)$ the sum $\sum_{x \in H} f(x)$ where $H$ is a finite set. In $$\lim_{n \to \infty} \frac{S(A_{n+1}) - S(A_{n-1})}{S(A_{n})} = 0$$ holds, since if for some $\epsilon > 0$ and for all $n > 0$ , $$\frac{S(A_{n+1}) - S(A_{n-1})}{S(A_{n})} > \epsilon$$ . would hold, then $$S(A_{n+1}) > S(A_{n-1}) + \epsilon S(A_{n}) \geq (1 + \epsilon) S(A_{n-1})$$ a
|
Indeed $$ \lim\limits_{n\rightarrow +\infty}\frac{S(A_{n+1})-S(A_{n-1})}{S(A_n)}=0. $$ If it were not the case, we would have $S(A_{2n+1})\geqslant (1+\varepsilon)^nS(A_1)$ for an infinite number of $n$ , and for some fixed $\varepsilon>0$ as you wrote, which is in contradiction with the upper bound $$ S(A_n)\leqslant\|f\|_{\infty}\#A_n\leqslant\|f\|_{\infty}(2n+1)^k. $$ We have $$ \sum_{i=1}^k\lambda_iS(g_iA_n)=\sum_{x\in A_n}\sum_{i=1}^k\lambda_i f(g_ix)\geqslant 0 $$ by hypothesis therefore $$ \sum_{i=1}^k\lambda_i\geqslant\frac{1}{S(A_n)}\sum_{i=1}^k\lambda_i(S(A_n)-S(g_iA_n)). $$ Note that $|S(A_n)-S(g_iA_n)|\leqslant S(A_{n+1})-S(A_{n-1})$ so $$ \sum_{i=1}^k\lambda_i\geqslant -\frac{1}{S(A_n)}\sum_{i=1}^k|\lambda_i|(S(A_{n+1})-S(A_{n-1})). $$ Taking the limit $n\rightarrow +\infty$ gives you the desired inequality $$ \sum_{i=1}^k\lambda_i\geqslant 0. $$
|
|real-analysis|group-theory|abelian-groups|
| 1
|
Finding and proving a closed formula for $\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}$
|
I want to find and prove a closed formula for the following sum $$\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}=\sqrt{1+\frac{1}{1^2}+\frac{1}{2^2}}+\sqrt{1+\frac{1}{2^2}+\frac{1}{3^2}}+\dots +\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$$ I have found a closed formula but I have problems with proving it and proving my steps in between. First I simplified the sum. To do that I calculated the radicands. After that I calculated the sum for $k=1$ to $k=4$ to see a pattern. Lastly I concluded from the pattern the closed formula: $$s_k=\frac{(k+1)^2-1}{(k+1)}=(k+1)-\frac{1}{(k+1)}$$ With $s_k$ I denote the sum up to $k$ . This closed formula agrees with the given solutions. But now I want to prove it and my steps in between. Simplifying the sum I first calculated the following three radicands: \begin{alignat*}{3} &1+\frac{1}{1^2}+\frac{1}{2^2}&&=\frac{9}{4}&&&=\frac{3^2}{2^2} \newline\newline &1+\frac{1}{2^2}+\frac{1}{3^2}&&=\frac{49}{36}&&&=\frac{7^2}{6^2} \newline\newline &1+\frac{1}{3^2}+
|
$$S :=\sum_{n=1}^k\sqrt{1+\frac{1}{n^2}+\frac{1}{(n+1)^2}}$$ $$= \sum_{n=1}^k\sqrt{\frac{n^2(n+1)^2+(n+1)^2+n^2}{n^2(n+1)^2}}$$ $$= \sum_{n=1}^k{\frac{\sqrt{n^4+2n^3+n^2+n^2+2n+1+n^2}}{n(n+1)}}$$ $$= \sum_{n=1}^k{\frac{\sqrt{n^4+2n^3+3n^2+2n+1}}{n(n+1)}}$$ The radicand has no rational roots, but maybe we can factor it into two quadratics. $$n^4+2n^3+3n^2+2n+1 = (n^2 + an + b)(n^2 + cn + d)$$ $$n^4+2n^3+3n^2+2n+1 = n^4 + cn^3 + dn^2 + an^3 + acn^2 + adn + bn^2 + bcn + bd$$ $$n^4+2n^3+3n^2+2n+1 = n^4 + (a + c)n^3 + (ac + b + d)n^2 + (ad + bc)n + bd$$ Matching up the coefficients gives the system of equations: $$a + c = 2 \tag{1}$$ $$ac + b + d = 3\tag{2}$$ $$ad + bc = 2 \tag{3}$$ $$bd = 1 \tag{4}$$ The only integer solutions to (4) are $b = d = 1$ and $b = d = -1$ . If $b = d = 1$ , then (1) and (3) both become $a + c = 2$ , and (2) becomes $ac = 1$ . If $b = d = -1$ , then (3) becomes $a + c = -2$ , which contradicts equation (1), $a + c = 2$ . So we must have $b = d = 1$ . From (1), $c
|
|algebra-precalculus|summation|telescopic-series|
| 0
|
On functions of a self-adjoint operator of the form $U^{-1} A U$
|
I found in Reed Simon that, since $\mathcal{F}$ (Fourier Transform) is unitary in $L^2(\mathbb{R}^n)$ and the self-adjoint operator (in a suitable domain) $-\Delta = H_{0}$ can be expressed as $H_{0} = \mathcal{F}^{-1}\lambda^2 \mathcal{F}$ , then $f(H_0) = \mathcal{F}^{-1}f(\lambda^2) \mathcal{F}$ for every bounded measurable function $f$ . Now, I was wondering how to prove this using functional calculus: I suppose this follows from a more general result like $f(U^{-1}AU) = U^{-1}f(A)U$ but I couldn't find the precise statement of this in the "mare magnum" of Reed Simon. I would greatly appreciate any reference and suggestion. Thank you in advance.
|
Note that powers of $H_0$ satisfy $H_0^k = (\mathcal{F}^{-1}\lambda^2\mathcal{F})^k = \mathcal{F}^{-1}(\lambda^2)^k\mathcal{F}$ . In consequence, if $f$ admits a power expansion (like Taylor or Laurent series), i.e. $f(x) = \sum_k a_kx^k$ , then you have : $$ f(H_0) = \sum_k a_k H_0^k = \sum_k a_k \mathcal{F}^{-1}(\lambda^2)^k\mathcal{F} = \mathcal{F}^{-1}f(\lambda^2)\mathcal{F} $$
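A finite-dimensional sanity check of the conjugation identity for a polynomial $f$ (my own sketch with NumPy; it only illustrates the algebra behind the power-series argument, not the full Borel functional calculus):

```python
import numpy as np

rng = np.random.default_rng(1)
# random unitary U (via QR) and a real diagonal A as finite-dimensional stand-ins
U, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
A = np.diag(rng.normal(size=5))

def f(M):
    # any polynomial works term by term, mirroring the argument above
    return M @ M @ M - 2 * M + 3 * np.eye(5)

lhs = f(np.conj(U.T) @ A @ U)      # f(U^{-1} A U), with U^{-1} = U*
rhs = np.conj(U.T) @ f(A) @ U      # U^{-1} f(A) U
print(np.allclose(lhs, rhs))       # True
```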
|
|operator-theory|self-adjoint-operators|functional-calculus|
| 0
|
Round-robin tournament with 4 contestants per match and duplicates allowed
|
I need an algorithm that will calculate 4-contestant groups that ensure each contestant plays against each other contestant at least once, but more than once is allowed. This is different from a Steiner quadruple system, where they're only allowed to meet once. I tried the circle method for pairwise matches but it doesn't translate well to quadruple matches. I'm looking for something slightly more optimized than just looping over the list of contestants. Thanks for the help!
|
You are looking for a $(v,4,2)$ -covering design, where $v$ is the number of people. In general, a $(v,k,t)$ -covering design is a collection of $k$ -element subsets of a $v$ -element set with the property that every $t$ -element subset is contained in some $k$ -subset in the collection. Dan Gordon maintains a database of the best known covering designs for parameters up to a certain size. Here is the data for $t=2$ : https://ljcr.dmgordon.org/cover/table.html#t=2 The $k=4$ column of that table gives the answer to your question. If you click on the entry in the $v^\text{th}$ row, you will see the best known $(v,4,2)$ -covering design, so the best known possible answer to your question, for up to 99 people.
|
|combinatorics|
| 1
|
Show that $\sum_{n=1}^{+\infty}\frac{1}{(n\cdot\sinh(n\pi))^2} = \frac{2}{3}\sum_{n=1}^{+\infty}\frac{(-1)^{n-1}}{(2n-1)^2} - \frac{11\pi^2}{180}$
|
What I do so far \begin{align*} \text{Show that} \quad &\sum_{n=1}^{+\infty}\frac{1}{(n\cdot\sinh(n\pi))^2} = \frac{2}{3}\sum_{n=1}^{+\infty}\frac{(-1)^{n-1}}{(2n-1)^2} - \frac{11\pi^2}{180} \\ \text{Lemma 1 } &\sum_{n = - \infty }^\infty \frac{1}{{z + n}} = \frac{\pi }{{\tan (\pi z)}} \\ \text{Lemma 2 } &\frac{1}{{\sinh^2(\pi z)}} = \frac{1}{{\pi^2 z^2}} + \frac{4z^2}{{\pi^2}}\sum_{k=1}^\infty \frac{1}{{(z^2 + k^2)^2}} - \frac{2}{{\pi^2}}\sum_{k=1}^\infty \frac{1}{{z^2 + k^2}} \\ &\text{Because:} \nonumber \\ &\frac{\pi }{{\tan (\pi z)}} = \sum_{k = - \infty }^\infty \frac{1}{{z + k}} \Rightarrow \left( \frac{\pi }{{\tan (\pi z)}} \right)' = -\sum_{k = - \infty }^\infty \frac{1}{{(z + k)^2}} \nonumber \\ &\Rightarrow \boxed{\frac{\pi^2}{\sin^2(\pi z)}} = \sum_{k = - \infty }^\infty \frac{1}{{(z + k)^2}} \Rightarrow \nonumber \\ &\Rightarrow \frac{\pi^2}{\sin^2(\pi iz)} = \sum_{k = - \infty }^\infty \frac{1}{{(iz + k)^2}} \Rightarrow \frac{\pi^2}{\sinh^2(\pi z)} = \sum_{k = - \infty }^
|
Take $$\frac{\pi}{x} \coth(\pi x)= \frac{1}{x^2} + \frac{2}{x^2 + 1^2} + \frac{2}{x^2 + 2^2} + \ldots,$$ differentiate with respect to $x$ and divide across by $-x$ to obtain $$\frac{\pi}{x^3} \coth(\pi x) + \frac{\pi^2}{x^2 \sinh^2(\pi x)}= \frac{2}{x^4} + \frac{4}{(x^2 + 1^2)^2} + \frac{4}{(x^2 + 2^2)^2} + \ldots$$ and thence $$\pi \sum_{1}^{\infty} \frac{\coth(\pi n)}{n^3} + \pi^2 \sum_{1}^{\infty} \frac{1}{(n \sinh(\pi n))^2} = 2\zeta(4) + 4 \sum_{1}^{\infty} \sum_{1}^{\infty} \frac{1}{(m^2 + n^2)^2}$$ Now $$\sum_{1}^{\infty} \sum_{1}^{\infty} \frac{1}{(m^2 + n^2)^s} = \zeta(s) \left( \frac{1}{1^s} - \frac{1}{3^s} + \frac{1}{5^s} - \ldots \right) - \zeta(2s)$$ \begin{align*} \pi \sum_{1}^{\infty} \frac{\coth(\pi n)}{n^3} &= \sum_{1}^{\infty} \frac{1}{n^4} + 2 \sum_{1}^{\infty} \sum_{1}^{\infty} \frac{1}{n^2(n^2 + m^2)} \\ &= \sum \frac{1}{n^4} + 2 \sum \sum \frac{1}{m^2} \left( \frac{1}{n^2} - \frac{1}{m^2 + n^2} \right) \\ &= \sum \frac{1}{n^4} + 2 \sum \frac{1}{m^2} \sum \frac{1}
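For the record, the identity in the question can also be confirmed numerically; both sides come out to roughly $0.0075012$ (plain Python, truncating both series):

```python
from math import sinh, pi

lhs = sum(1 / (n * sinh(n * pi)) ** 2 for n in range(1, 50))
rhs = 2 / 3 * sum((-1) ** (n - 1) / (2 * n - 1) ** 2 for n in range(1, 1_000_000)) \
      - 11 * pi ** 2 / 180
print(lhs, rhs)   # both approximately 0.0075012
```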
|
|calculus|sequences-and-series|summation|
| 0
|
Why $[\mathbb{F}_{p}(\alpha): \mathbb{F}_{p^n}] = p$?
|
I was reading the second answer of the following question here Why $x^{p^n}-x+1$ is irreducible in ${\mathbb{F}_p}$ only when $n=1$ or $n=p=2$ : Prove that $f(X) = X^{p^n} - X + 1$ is irreducible over $\mathbb F_{p}$ if and only if either $n = 1$ or $n = p = 2.$ And the book gave the following hint: Note that if $\alpha$ is a root, then so is $\alpha + a$ for any $a \in \mathbb F_{p^n}.$ Show that this implies $\mathbb F_{p}(\alpha)$ contains $\mathbb F_{p^n}$ and that $[\mathbb F_p(\alpha) : \mathbb F_{p^n}] = p$ Here is the answer I am referring to: I have another solution that might be easier to follow. Let $\alpha$ be a root of $q(x)=x^{p^n}-x+1$ . Note that $\alpha + a$ is also a root of $q(x)$ for all $a \in \mathbb{F}_{p^n}$ . Consider cyclic muplicative group $\mathbb{F}_{p^n}^{\times} = \mathbb{F}_{p}(\theta)$ for some generator $\theta$ , then $\alpha + \theta$ and $\alpha$ are roots of $q(x)$ , so they belong to $\mathbb{F}_{p}(\alpha)$ which shows that $\theta \in \mathbb{F
|
Your questions are ok in the sense that the answers to both your questions are negative: It is not always true that $\Bbb{F}_p(\alpha)$ would be a splitting field, nor is it always true that we would have $[\Bbb{F}_p(\alpha):\Bbb{F}_{p^n}]=p$ . Simply because it is often possible to choose the zero $\alpha$ in such a way that $\Bbb{F}_p(\alpha)$ does not contain the field $\Bbb{F}_{p^n}$ as a subfield. Meaning that the degree $[\Bbb{F}_p(\alpha):\Bbb{F}_{p^n}]$ is non-sensical! The hint is correct: if $\alpha$ is any root of $f(x)$ , all the roots are of the form $\alpha+z$ , where $z\in\Bbb{F}_{p^n}$ . This implies that the splitting field $E$ must contain $\Bbb{F}_{p^n}$ and one of the zeros $\alpha$ . Obviously $\alpha\notin\Bbb{F}_{p^n}$ , so $E=\Bbb{F}_{p^n}(\alpha)$ . It is also correct that $[E:\Bbb{F}_{p^n}]=p$ . This is because the Galois group $Gal(\Bbb{F}_{p^n}(\alpha)/\Bbb{F}_{p^n})$ is generated by the Frobenius automorphism $F:z\mapsto z^{p^n}$ . As the old answers explai
|
|abstract-algebra|field-theory|galois-theory|finite-fields|extension-field|
| 1
|
Evaluate $\sum\limits_{n=1}^{+\infty} \frac{\left( \frac{3-\sqrt{5}}{2} \right)^{n}}{n^{3}}$
|
Evaluate $$\sum\limits_{n=1}^{+ \infty} \frac{ \left( \frac{3-\sqrt{5}}{2} \right)^{n} }{n^{3}}$$ We can use Fourier series to calculate this sum, because it converges. Also, we know that $\frac{3-\sqrt{5}}{2} = \frac{1}{\varphi^{2}}$ where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. What is special about this number $\frac{3-\sqrt{5}}{2}$ ? Do we know anything else about it? Thank you for your answer, but do we know anything else that avoids the trilogarithm function?
|
The result is $$\frac{2}{15} \left(6\zeta(3) + \pi^2 \log\left(\frac{-1 + \sqrt{5}}{2}\right) - 5 \left[\log\left(\frac{-1 + \sqrt{5}}{2}\right)\right]^3\right)$$ where $\zeta(s)$ is the Riemann zeta function. This is an immediate consequence of a formula due to Spence to the effect that if $$ \phi(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^3} $$ $|x| \leq \frac{1}{2}$ , then $$ \phi\left(\frac{x}{x - 1}\right) + \phi(x) + \phi(1 - x) - \phi(1) = \frac{\pi^2}{6} \log(1 - x) + \frac{1}{6} \left[\log(1 - x)\right]^2 \left[\log(1 - x) - 3 \log(x)\right] $$ See W. Spence, "An essay on the theory of the various orders of logarithmic transcendents" (1809), p. 28. The formula was discovered independently by Ramanujan. In the Journal of the London Mathematical Society, v. 3 (1928), p. 217, G. N. Watson gives an elementary proof based upon the obvious relation $$ \phi(x) = \frac{x}{2} \int_{0}^{1} \frac{\log(1 - u)^2}{1 - x + xu} \, du $$ Expressing the left-hand side of Spence's formula as the sum
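A quick numerical check of the quoted closed form (plain Python; both series are simply truncated):

```python
from math import log, pi, sqrt

x = (3 - sqrt(5)) / 2                                 # = 1/phi^2
series = sum(x ** n / n ** 3 for n in range(1, 200))
zeta3 = sum(1 / n ** 3 for n in range(1, 200_000))    # Apery's constant, good to ~10 digits
L = log((-1 + sqrt(5)) / 2)
closed = 2 / 15 * (6 * zeta3 + pi ** 2 * L - 5 * L ** 3)
print(series, closed)                                 # both approximately 0.40268
```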
|
|sequences-and-series|
| 0
|
Odds of winning in my club's annual raffle when having two tickets
|
This problem should be simple to solve for you guys, but not for me. My Gun Club holds an annual raffle. There are 15 prizes, you can only win one prize per ticket. I purchased 2 tickets and there were 3500 tickets sold. What are the odds of winning? This is the most tickets they have ever printed. My argument is that they hurt the odds of winning by printing/selling too many tickets.
|
The probability of winning (at least one ticket) is $$ 1-\mathbb P(X= 0)=1- \frac{\binom{2}{0}\binom{3500-2}{15}}{\binom{3500}{15}}=1- \frac{3485 \times 3484}{3500 \times 3499} $$ with $X \sim \mathcal {HG} (N=3500, m=2, n=15)$ . The probability when $N$ is unknown is $$P(N)=1- \frac{(N-15)(N-16)}{N (N-1)}$$ This function is decreasing in $N$ , as $P(N-1) > P(N)$ , so the odds of winning $$\text{Odds}(N)=\frac{P(N)}{1-P(N)}=\frac{1}{\frac{1}{P(N)}-1}$$ are also decreasing in $N$ , and thus they do hurt the odds by printing/selling too many tickets.
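Evaluated numerically (plain Python, using math.comb), the two expressions agree and give a winning probability of about $0.00855$, i.e. odds of roughly 1 in 117:

```python
from math import comb

p_win = 1 - comb(3498, 15) / comb(3500, 15)
print(p_win)                                # ~0.008554
print(1 - 3485 * 3484 / (3500 * 3499))      # same value via the simplified formula above
```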
|
|probability|
| 0
|
$S = \{\frac{1}{2015}, \frac{2}{2015},...,\frac{2014}{2015}, \frac{2015}{2016}\}.$ Replace any $a, b$ by $a + b - 5ab$ $2014$ times. Find new $S.$
|
Here's the full problem statement: $S = \{\frac{1}{2015}, \frac{2}{2015},...,\frac{2014}{2015}, \frac{2015}{2016}\}.$ Remove any numbers $a, b$ from $S$ and replace them by $a + b - 5ab.$ Repeat this process $2014$ times and $S = \{\frac{m}{n}\}$ . Find $\frac{m}{n}$ (reduced). Someone posted this problem in a Facebook group and it was supposed to be solvable by a middle school student. I honestly have no idea where to start other than recognizing that the process removes one element at each iteration, so it makes sense that there is only $1$ element left after $2014$ iterations. I'd love to hear some guidance. Thank you!
|
If $a$ or $b$ is $1/5$ , then $a+b-5ab=1/5$ . In other words, as long as $1/5 \in S$ and $S$ is finite, we don't care what else is in $S$ . By the end of your game, the only remaining number will be $1/5$ . To finish, just note $1/5=403/2015 \in S$ .
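The whole game can be simulated with exact rational arithmetic; whatever random order the moves are made in, the surviving number is always $1/5 = 403/2015$ (a small sketch using only the standard library):

```python
from fractions import Fraction
import random

S = [Fraction(i, 2015) for i in range(1, 2015)] + [Fraction(2015, 2016)]
while len(S) > 1:
    a = S.pop(random.randrange(len(S)))
    b = S.pop(random.randrange(len(S)))
    S.append(a + b - 5 * a * b)
print(S[0])   # always 1/5
```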
|
|number-theory|elementary-number-theory|
| 1
|
Can a (bounded) linear least-squares problem include a scale factor in its solution?
|
I have a system of equations. $$ \small \begin{aligned} (1 - p_0)u_0 + (q_0 - 1)v_0 + u_1 - v_1 = r_0 - c t_0 \\\\ (1 - p_1)u_1 + (q_1 - 1)v_1 + u_0 - v_0 = r_1 - c t_1 \\\\ (1 - p_2)u_2 + (q_2 - 1)v_2 + u_3 - v_3 + u_4 - v_4 = r_2 - c t_2 \\\\ (1 - p_3)u_3 + (q_3 - 1)v_3 + u_2 - v_2 + u_4 - v_4 = r_3 - c t_3 \\\\ (1 - p_4)u_4 + (q_4 - 1)v_4 + u_2 - v_2 + u_3 - v_3 = r_4 - c t_4 \end{aligned} $$ Expressed in matrix form, the coefficient matrix can always be formulated as two adjacent block-diagonal matrices. $$ \scriptsize \begin{bmatrix} 1 - p_0 & 1 & 0 & 0 & 0 & q_0 - 1 & -1 & 0 & 0 & 0 \\\\ 1 & 1 - p_1 & 0 & 0 & 0 & -1 & q_1 - 1 & 0 & 0 & 0 \\\\ 0 & 0 & 1 - p_2 & 1 & 1 & 0 & 0 & q_2 - 1 & -1 & -1 \\\\ 0 & 0 & 1 & 1 - p_3 & 1 & 0 & 0 & -1 & q_3 - 1 & -1 \\\\ 0 & 0 & 1 & 1 & 1 - p_4 & 0 & 0 & -1 & -1 & q_4 - 1 \end{bmatrix} \begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ u_3 \\\\ u_4 \\\\ v_0 \\\\ v_1 \\\\ v_2 \\\\ v_3 \\\\ v_4 \end{bmatrix} =\begin{bmatrix} r_0 - c t_0 \\\\ r_1 - c t_1 \\
|
a possible arrangement to determine the optimal $c$ . $ \cases{ p_{12} = (p_1, \ p_2)'\\ p_{35} = (p_3, \ p_4, \ p_5)'\\ q_{12} = (q_1, \ q_2)'\\ q_{35} = (q_3, \ q_4, \ q_5)'\\ P_{12}=\unicode{x1D7D9}_2-\ulcorner p_{12}\lrcorner\\ P_{35}=\unicode{x1D7D9}_3-\ulcorner p_{35}\lrcorner\\ Q_{12}=\unicode{x1D7D9}_2-\ulcorner q_{12}\lrcorner\\ Q_{35}=\unicode{x1D7D9}_3-\ulcorner q_{35}\lrcorner\\ u_{12} = (u_1, \ u_2)'\\ u_{35} = (u_3, \ u_4, \ u_5)'\\ v_{12} = (v_1, \ v_2)'\\ v_{35} = (v_3, \ v_4, \ v_5)'\\ r_{12} = (r_1, \ r_2)'\\ r_{35} = (r_3, \ r_4, \ r_5)'\\ t_{12} = (t_1, \ t_2)'\\ t_{35} = (t_3, \ t_4, \ t_5)'\\ } $ Now to minimize $L(U,V,c)$ $$ L(U,V,c)=\|P_{12}u_{12}-Q_{12}v_{12}+c r_{12}-t_{12}\|^2+\|P_{25}u_{25}-Q_{25}v_{25}+c r_{25}-t_{25}\|^2 $$ the stationary conditions are $$ \cases{ \frac{\partial L}{\partial u_{12}}=P'_{12}(P_{12}u_{12}-Q_{12}v_{12}+c r_{12}-t_{12})=0\\ \frac{\partial L}{\partial v_{12}}=Q'_{12}(P_{12}u_{12}-Q_{12}v_{12}+c r_{12}-t_{12})=0 } $$ $$ \cases{ \
|
|linear-algebra|optimization|regression|least-squares|linear-regression|
| 1
|
Weak derivative zero implies constant function
|
Let $u\in W^{1,p}(U)$ such that $Du=0$ a.e. on $U$. I have to prove that $u$ is constant a.e. on $U$. Take $(\rho_{\varepsilon})_{\varepsilon>0}$ mollifiers. I know that $D(u\ast\rho_{\varepsilon})=Du\ast\rho_{\varepsilon}$, so $u\ast\rho_{\varepsilon}(x)=c $ for every $x\in U$, since it is a smooth function. How can I conclude?
|
Suppose $u, \text{weak } u' \in L^p_{loc}(U)$ (this is the general definition of $W^{1,p}(U))$ . By $u'=0$ a.e., we know after mollification, $u_{\epsilon}' = C_{\epsilon}$ a.e. in $\Omega_{\epsilon}$ . Since $u_{\epsilon} \to u$ in $L^p_{loc}$ , then for any compact subset $W$ of $\Omega$ , we know the $L^p$ convergence on $W$ . Hence $C_{\epsilon} \to u$ on any compact subset. This immediately shows that $C_{\epsilon}$ is a bounded sequence and by Bolzano-Weierstrass, $C_\epsilon$ admits a convergent subsequence. Fix this convergent subsequence, and WLOG assume $C_{\epsilon} \to C$ as a sequence of real number. On $W$ , $\vert\vert u-C \vert\vert_p \leq \vert\vert u-C_\epsilon \vert\vert_p + \vert\vert C_\epsilon - C\vert\vert_p$ , the right hand side tends to $0$ by $L^p_{loc}$ convergence of $u_{\epsilon}$ and $W$ being of finite measure. Then $u=C$ a.e. on $W$ . If here we assume $U$ is a bounded open domain, then by $\bar{\Omega}_\epsilon$ compact in $U$ and increasing to $U$ , w
|
|functional-analysis|partial-differential-equations|weak-derivatives|
| 0
|
How to prove that Lebesgue measure is translation invariant
|
Can someone please explain: Assume for each $x \in \mathbb{R}$ and $A \subseteq \mathbb{R}$, that $x + A = \big\{ x + a \mid a \in A \big\}$. $A$ and $x + A$ are Borel sets for all $x \in \mathbb{R}$ . Then, if $\lambda$ is the Lebesgue measure on $\mathcal B$ , how can it be proven to be translation invariant? So far all I have gotten is that $\lambda (A) = \lambda ( x + A )$ , for all Borel sets $A$ and for all $x \in \mathbb{R}$.
|
For translation invariance: Let $E \subset \Bbb{R}$ be Lebesgue measurable. Then if the measure is infinite the case is obvious, so suppose its finite and $$\lambda(E):=\inf\{\sum_{n \in \Bbb{N}}l((a_n,b_n)): E \subset \cup_{n \in \Bbb{N}}(a_n,b_n): l((a_n,b_n))=b_n-a_n\},$$ exists. So we have that $$E \subset \bigcup_{n \in \Bbb{N}}(a_n,b_n).$$ Then for any $x \in \Bbb{R}$ , one has (and this is clear to see set theoretically): $$E+x \subset \bigcup_{n \in \Bbb{N}}(a_n+x,b_n+x).$$ But then, \begin{align} l((a_n+x,b_n+x))&=b_n+x-(a_n+x)\\ &=b_n+x-a_n-x\\ &=b_n-a_n\\ &=l((a_n,b_n)). \end{align} Thus the lengths of the intervals covering $E$ and $E+x$ are the same so taking an infimum yield the same result forcing $\lambda(E)=\lambda(E+x)$ as needed. Also, this follows if you cover $E$ by unions of $(a_n,b_n]$ or $[a_n,b_n]$ or $[a_n,b_n)$ as these all have same Lebesgue measure as singletons have Lebesgue measure zero.
|
|real-analysis|measure-theory|lebesgue-measure|
| 0
|
Theorem 7.32 in Apostol's MATHEMATICAL ANALYSIS, 2nd ed: How to establish differentiability?
|
Here is Theorem 7.32 in the book Mathematical Analysis - A Modern Approach to Advanced Calculus by Tom M. Apostol, 2nd edition: Let $\alpha$ be of bounded variation on $[a, b]$ and assume that $f \in R(\alpha)$ on $[a, b]$ . Define $F$ by the equation $$ F(x) = \int_a^x f \, d \alpha, \qquad \mbox{ if } x \in [a, b]. $$ Then we have: i) $F$ is of bounded variation on $[a, b]$ . ii) Every point of continuity of $\alpha$ is also a point of continuity of $F$ . iii) If $\alpha \nearrow$ on $[a, b]$ , the derivative $F^\prime(x)$ exists at each point $x$ in $(a, b)$ where $\alpha^\prime(x)$ exists and where $f$ is continuous. For such $x$ , we have $$ F^\prime(x) = f(x) \alpha^\prime(x). $$ And, here is Apostol's proof of this theorem. If suffices to assume that $\alpha \nearrow$ on $[a, b]$ . If $x \neq y$ , Theorem 7.30 implies that $$ F(y) - F(x) = \int_x^y f \, d \alpha = c \big[ \alpha(y) - \alpha(x) \big], $$ where $m \leq c \leq M$ (in the notation of Theorem 7.30). Statements (i) an
|
If $m = \inf \{ f(x) : x \in [a,b] \}$ and $M = \sup \{ f(x) : x \in [a,b] \}$ , and $f$ is continuous, what would you conclude about $c$ as $y \to x$ ? Can you see why it is unnecessary to specify that $c$ is some function of $x$ and $y$ ?
|
|real-analysis|integration|analysis|definite-integrals|riemann-integration|
| 0
|
Odds of winning in my club's annual raffle when having two tickets
|
This problem should be simple to solve for you guys, but not for me. My Gun Club holds an annual raffle. There are 15 prizes, you can only win one prize per ticket. I purchased 2 tickets and there were 3500 tickets sold. What are the odds of winning? This is the most tickets they have ever printed. My argument is that they hurt the odds of winning by printing/selling too many tickets.
|
You don't need the actual numbers to see that for a fixed number of prizes, the more tickets sold the less likely each ticket is to win. That is true no matter how many tickets one person happens to buy, or whether or not tickets can get multiple prizes. The more tickets they sell the less each one is worth (from the odds point of view) and the more money the Gun Club nets after buying the prizes. If they print "too many" tickets, sales may drop because buyers may balk when they think of the value of each ticket. In your particular problem, if there were $15$ prizes and just $15$ tickets, then every ticket would be a winner. With thousands of tickets there are still just $15$ winners. The odds that you hold one of those go down as the number of tickets grows.
|
|probability|
| 0
|
Transforming an arc-based point together with other points to target considering angle
|
I have a point on an arc and other points related to it. I now want to move the point on the arc to a given target and rotate everything so that the arc-based point has a vertical angle at the new position (as if the arc-center were now just below the target, on the y-axis). Example: I have an arc-center $R(0;-233.7)$ , radius $250$ and arc-point $P(9.9;16.1)$ . That point is at about $2.27°$ of the arc. The other point is $Q(0;0)$ , so $P$ and $Q$ are a "shape". Now if I move $P$ to target position $T(0;16.3)$ in a way so that the center $R$ would be on the y-axis, where is $Q$ in that case? This is not just vector movement but we need to rotate as well, which gives me headaches. Edit: Maybe the single problem is easier in the context. The complete problem is that I have a shape defined by 3 arcs. Here you see it with three different colors for each arc (the blue one is quite small at $0°$ and $90°$ ). I can calculate the shapepoints. In fact I am calculating a quarter of the sh
|
where is $Q$ in that case? When you move $P$ to $T$ , then $R$ moves to $R'$ where $\vec{RR'}=\vec{PT}$ implies $R'_x=R_x+T_x-P_x=-9.9$ and $R'_y=R_y+T_y-P_y=-233.5$ . The equation of the circle whose center is $R'$ with radius $250$ is given by $$(x+9.9)^2+(y+233.5)^2=250^2$$ So, the slope of the tangent line at $T$ is given by $$\frac{-T_x-9.9}{T_y+233.5}=-\frac{99}{2498}$$ So, the angle $\alpha$ between the tangent line and the $x$ -axis is given by $$\alpha\approx 177.7304^\circ$$ Therefore, we can get $Q''$ by rotating $Q'(-9.9,0.2)$ by $180^\circ-177.7304^\circ=2.2696^\circ$ around $T$ . Since $$\begin{pmatrix}Q''_x-T_x\\Q''_y-T_y\end{pmatrix}=\begin{pmatrix} \cos(2.2696^\circ) & -\sin(2.2696^\circ) \\ \sin(2.2696^\circ) & \cos(2.2696^\circ) \end{pmatrix}\begin{pmatrix} Q'_x-T_x \\ Q'_y-T_y\end{pmatrix}$$ we have $$\begin{align}Q''_x&=T_x+(Q'_x-T_x)\cos(2.2696^\circ)-(Q'_y-T_y)\sin(2.2696^\circ) \\\\&\approx -9.2547\end{align}$$ and $$\begin{align}Q''_y&=T_y+(Q'_x-T_x)\sin(2.2696
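The same computation, written out as a small Python script (my own sketch, with the rotation taken counter-clockwise about $T$); it reproduces the angle of about $2.27^\circ$ and $Q''_x \approx -9.2547$ found above:

```python
from math import atan2, degrees, sin, cos

R, P, Q, T = (0.0, -233.7), (9.9, 16.1), (0.0, 0.0), (0.0, 16.3)

# translate so that P lands on T (R and Q move by the same vector)
dx, dy = T[0] - P[0], T[1] - P[1]
R1 = (R[0] + dx, R[1] + dy)                 # new arc centre R' = (-9.9, -233.5)
Q1 = (Q[0] + dx, Q[1] + dy)                 # translated Q'   = (-9.9, 0.2)

# rotate about T so that R' ends up directly below T on the y-axis
theta = atan2(T[0] - R1[0], T[1] - R1[1])   # angle of R'->T measured from vertical
c, s = cos(theta), sin(theta)
Q2 = (T[0] + (Q1[0] - T[0]) * c - (Q1[1] - T[1]) * s,
      T[1] + (Q1[0] - T[0]) * s + (Q1[1] - T[1]) * c)

print(round(degrees(theta), 4), [round(v, 4) for v in Q2])   # ~2.2695 [-9.2547, -0.1794]
```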
|
|geometry|
| 1
|
Orthogonal block-matrix
|
Let $$M=\begin{bmatrix}A & C \\ 0 & B\end{bmatrix}\in \mathbb R^{m\times n}$$ be a block matrix. (a) If $M$ is orthogonal and $C=0$ , are $A$ and $B$ orthogonal? (b) Suppose $A$ is orthogonal and $B=cI$ , for some $0<c<1$ . Is $M$ orthogonal for any square matrix $C$ ? (Here $I$ denotes the identity matrix.) So far: It was pretty straight-forward to show that the answer is affirmative in (a), but I am stuck on (b). My gut tells me the answer is no, and I was given a hint to separately check the cases when $C$ is invertible, and when it is not, but I still have a hard time even getting started. I tried to play around with having $A=I$ to make $M$ diagonal, but it didn't help much.
|
Recall an orthogonal matrix is a matrix $M$ satisfying $M^TM=MM^T=I$ . So let's try on your case. $$M^T=\begin{pmatrix}A^T & 0\\ C^T & B^T\end{pmatrix}=\begin{pmatrix}A^T & 0\\ C^T & cI\end{pmatrix}$$ So we have $$M^TM=\begin{pmatrix}A^T & 0\\ C^T & cI\end{pmatrix}\begin{pmatrix}A & C\\ 0 & cI\end{pmatrix}=\begin{pmatrix}A^TA & A^TC\\ C^TA & C^TC+c^2I\end{pmatrix}$$ On the other hand, we have $$MM^T=\begin{pmatrix}A & C\\ 0 & cI\end{pmatrix}\begin{pmatrix}A^T & 0\\ C^T & cI\end{pmatrix}=\begin{pmatrix}AA^T+CC^T & cC\\ cC^T & c^2I\end{pmatrix}$$ Comparing the top-left blocks with the identity and using $A^TA=AA^T=I$ , we get $CC^T=\textbf{O}$ . Now we show $C=0$ . This is because for any vector $v$ , we have $$\left\langle CC^Tv,v\right\rangle=\left\langle C^Tv,C^Tv\right\rangle=\|C^Tv\|^2=0\implies C^Tv=0\quad\forall v$$ This means $C=0$ . So $M=\begin{pmatrix}A & 0\\ 0 & cI\end{pmatrix}$ , which means it has eigenvalue of $c$ . But $c$ is in $(0,1)$ , and a real eigenvalue of an orthogonal matrix can only be $\pm1$ , so it is impossible for $M$ to be orthogonal. Edit: Thanks for user1551'
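A small numerical illustration (NumPy; the matrices below are arbitrary examples, not from the post): even with $A$ orthogonal, $M$ fails to be orthogonal both for a random $C$ and for $C=0$ , because of the $c^2I$ block.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
c = 0.5

A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal A
for C in (rng.standard_normal((n, n)), np.zeros((n, n))):
    M = np.block([[A, C], [np.zeros((n, n)), c * np.eye(n)]])
    # deviation from orthogonality; nonzero even for C = 0 because of the c^2 I block
    print(np.linalg.norm(M.T @ M - np.eye(2 * n)))
```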
|
|linear-algebra|matrices|linear-transformations|
| 1
|
Are Strong- and weak Operator topologies on separable Hilbert spaces sequential?
|
If I am not mistaken, the norm operator topology should make the set of bounded operators into a sequential space, since the norm defines a metric. I was wondering if the Weak and Strong Operator topologies also turn the bounded operators into a sequential space, i.e. is a sequentially closed set in those topologies automatically closed? If that makes things easier, I am only interested in bounded operators on a separable Hilbert space. Best Lev
|
I’ll first prove that the strong operator topology is not sequential on $B(l^2)$ . We consider the following subset of $B(l^2)$ : $$A = \{np_{V^\perp}: V \subset l^2 \, \mathrm{finite \, dimensional \, subspace}, \, n \in \mathbb{N}_+, \, n \geq \dim(V)\}$$ It is not closed in the strong operator topology. Indeed, let $\lambda$ be the net of nontrivial finite dimensional subspaces of $l^2$ , ordered by inclusion, then it is not hard to see that $\lim_{V \in \lambda} \dim(V)p_{V^\perp} = 0$ in the strong operator topology. Indeed, for any $h \in l^2$ , as long as $V \supset \mathrm{span}\{h\}$ , we have $\dim(V)p_{V^\perp}(h) = 0$ . But $0 \notin A$ . However, I claim that $A$ is sequentially closed. Indeed, let $(a_m) _{m \in \mathbb{N}} \subset A$ be a sequence converging strongly to $a \in B(l^2)$ . By uniform boundedness principle, there exists $C > 0$ s.t. $\|a_m\| \leq C$ for all $m$ . But $\|np_{V^\perp}\| = n$ , so this means all $a_m$ must be of the form $a_m = n_mp_{V_m^\perp}
|
|functional-analysis|operator-theory|hilbert-spaces|
| 1
|
Solving the integral equation associated with Laplace transform
|
I am clueless on how to solve this integral equation: $\lambda\int_{0}^{\infty}f(x)\exp\{-\lambda x\}dx=\sqrt{2\lambda}$ , where function $f$ is a non-negative measurable function. And the result is $f(x)=\sqrt{\frac{2}{\pi x}}$ . Any help would be appreciated. I originally want to solve it through differentiating both sides, but I am not sure how to do that.
|
Hint . Compare your integral equation with the definition of the Gamma function , $$ \Gamma(z)=\int_0^{\infty}t^{z-1}e^{-t}\,dt\qquad(\Re(z)>0), \tag{1} $$ from which follows $$ \int_0^{\infty}x^{z-1}e^{-\lambda x}\,dx=\frac{\Gamma(z)}{\lambda^z}. \tag{2} $$
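Following the hint with $z=\tfrac12$ (so $\Gamma(\tfrac12)=\sqrt\pi$ ), the candidate $f(x)=\sqrt{\tfrac{2}{\pi x}}$ can also be checked symbolically; a minimal SymPy sketch (illustrative only):

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
f = sp.sqrt(2 / (sp.pi * x))

lhs = lam * sp.integrate(f * sp.exp(-lam * x), (x, 0, sp.oo))
print(sp.simplify(lhs))                      # should print sqrt(2)*sqrt(lambda)
print(sp.simplify(lhs - sp.sqrt(2 * lam)))   # should print 0
```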
|
|integration|analysis|laplace-transform|
| 1
|
Which vector space axiom(s) is (are) going to fail if we take $\mathbb{C}$ as our set of vectors and $\mathbb{Z}$ as our set of scalars?
|
Here is the standard definition of vector space: Let $F$ be a field, and let $X$ be a non-empty set such that (A0) for each pair $x, y$ of elements of $X$ , there exists a unique element $x + y$ in $X$ ; (M0) for each element $\alpha \in F$ and for each element $x \in X$ , there exists a unique element $\alpha x$ in $X$ ; (A1) for any elements $x, y, z$ in $X$ , we have $(x+y)+z = x+(y+z)$ ; (A2) there exists a unique element $\mathbf{0}$ in $X$ such that $x + \mathbf{0} = x$ for every element $x \in X$ ; (A3) for each element $x$ in $X$ , there exists an element (in fact a unique element) $-x$ in $X$ such that $x+(-x) = \mathbf{0}$ ; (A4) for any elements $x$ and $y$ in $X$ , we have $x+y = y+x$ ; (M1) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha + \beta) x = \alpha x + \beta x$ ; (M2) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha \beta) x = \alpha (\beta x)$ ; (M3) for any element $\alp
|
There aren't really any problems. This is the $\Bbb Z$ -module structure of $\Bbb C$ , which is equivalent to the abelian group structure of $\Bbb C$ . We just don't call $R$ -modules vector spaces unless $R$ is a field.
|
|linear-algebra|abstract-algebra|vector-spaces|
| 0
|
Suppose that a function is continuous on $\mathbb{R}$ and differentiable on $\mathbb{R}\setminus\{c\}$, and that $\lim_{x\to c} f'(x)$ exists. Prove $f$ is differentiable at $c$
|
Suppose that $f: \mathbb{R} \to \mathbb{R}$ is continuous and differentiable on $\mathbb{R}\setminus\{c\}$ and that $\lim_{x\to c} f'(x)$ exists. Prove using the mean value theorem that $f$ is differentiable at $c$ and that $\lim_{x\to c}f'(x) = f'(c)$ . My thoughts so far: Consider $f$ restricted to $[a,b]$ with $a<c<b$ ; then by the MVT, $\exists\alpha,\beta\in (a,b)$ s.t. $$f'(\alpha) = \frac{f(c)-f(a)}{c-a}, \quad f'(\beta) = \frac{f(b)-f(c)}{b-c}$$ Can I go anywhere from here, or do I need to apply the MVT in another way?
|
Put $f'\left(x\right)\to A$ as $x\to c$ . Then for any positive $\epsilon$ , there will be a positive $\delta$ such that $$0<\left|x-c\right|<\delta$$ implies that $$\left|f'\left(x\right)-A\right|<\epsilon.$$ Note that for such $x\neq c$ , we have $$ \frac{f\left(c\right)-f\left(x\right)}{c-x}=f'\left(\alpha_x\right)$$ for an $\alpha_x$ between $x$ and $c$ . Then since $$\left|\alpha_x-c\right|<\delta,$$ we have $$\left|\frac{f\left(c\right)-f\left(x\right)}{c-x}-A\right|<\epsilon.$$ Then the proposition follows from the definition of a derivative.
|
|real-analysis|
| 0
|
Which vector space axiom(s) is (are) going to fail if we take $\mathbb{C}$ as our set of vectors and $\mathbb{Z}$ as our set of scalars?
|
Here is the standard definition of vector space: Let $F$ be a field, and let $X$ be a non-empty set such that (A0) for each pair $x, y$ of elements of $X$ , there exists a unique element $x + y$ in $X$ ; (M0) for each element $\alpha \in F$ and for each element $x \in X$ , there exists a unique element $\alpha x$ in $X$ ; (A1) for any elements $x, y, z$ in $X$ , we have $(x+y)+z = x+(y+z)$ ; (A2) there exists a unique element $\mathbf{0}$ in $X$ such that $x + \mathbf{0} = x$ for every element $x \in X$ ; (A3) for each element $x$ in $X$ , there exists an element (in fact a unique element) $-x$ in $X$ such that $x+(-x) = \mathbf{0}$ ; (A4) for any elements $x$ and $y$ in $X$ , we have $x+y = y+x$ ; (M1) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha + \beta) x = \alpha x + \beta x$ ; (M2) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha \beta) x = \alpha (\beta x)$ ; (M3) for any element $\alp
|
None fail. What you have is what is called a (unital) (left) module. More generally, if $R$ is any ring, then a left $R$ -module is a set $M$ , together with a binary operation $+\colon M\times M\to M$ (usually written in infix notation, so the result of applying $+$ to the pair $(m,n)$ is denote $m+n$ ), and a function $\cdot\colon R\times M\to M$ such that: $+$ is associative: for all $x,y,z\in M$ , $(x+y)+z = x+(y+z)$ . $+$ is commutative: for all $x,y\in M$ , $x+y=y+x$ . $+$ has an identity: there exists an element $0\in M$ such that for all $x\in M$ , $0+x=x+0=x$ . Existence of additive inverses: for all $x\in M$ there exists $y\in M$ such that $x+y=y+x=0$ . Associativity of $\cdot$ : for all $\alpha,\beta\in R$ and all $x\in M$ , we have $(\alpha\beta)\cdot x = \alpha\cdot(\beta\cdot x)$ . Left distributivity of $\cdot$ : for all $\alpha\in R$ and $x,y\in M$ , $\alpha\cdot(x+y) = (\alpha\cdot x) + (\alpha\cdot y)$ . Right distributivity of $\cdot$ : for all $\alpha,\beta\in R$ an
|
|linear-algebra|abstract-algebra|vector-spaces|
| 0
|
Which vector space axiom(s) is (are) going to fail if we take $\mathbb{C}$ as our set of vectors and $\mathbb{Z}$ as our set of scalars?
|
Here is the standard definition of vector space: Let $F$ be a field, and let $X$ be a non-empty set such that (A0) for each pair $x, y$ of elements of $X$ , there exists a unique element $x + y$ in $X$ ; (M0) for each element $\alpha \in F$ and for each element $x \in X$ , there exists a unique element $\alpha x$ in $X$ ; (A1) for any elements $x, y, z$ in $X$ , we have $(x+y)+z = x+(y+z)$ ; (A2) there exists a unique element $\mathbf{0}$ in $X$ such that $x + \mathbf{0} = x$ for every element $x \in X$ ; (A3) for each element $x$ in $X$ , there exists an element (in fact a unique element) $-x$ in $X$ such that $x+(-x) = \mathbf{0}$ ; (A4) for any elements $x$ and $y$ in $X$ , we have $x+y = y+x$ ; (M1) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha + \beta) x = \alpha x + \beta x$ ; (M2) for any elements $\alpha$ and $\beta$ in $F$ and for any element $x$ in $X$ , we have $(\alpha \beta) x = \alpha (\beta x)$ ; (M3) for any element $\alp
|
There are no problems, except for that fact that $\mathbb Z$ is not a field. That is, there are no problems until you start attempting to apply theorems which require the scalars to form a field. For example, here is a theorem about vector spaces: For any vector space $X$ over any field $F$ , and for any $x \in X$ , the operations of addition and scalar multiplication on $X$ restrict to a vector space structure on the set $Fx = \{\alpha x \mid \alpha \in F\}$ . If you apply this to $x=1 \in \mathbb C$ using $F=\mathbb Z$ then you get $Fx=\mathbb Z$ which is definitely not a vector space (no matter what field you choose for scalars). What went wrong? Well, $F=\mathbb Z$ is not a field, that's what went wrong. If that's not convincing enough there are plenty of deeper theorems where things go wrong, for example theorems about linear independence and bases. Here's an example: If $X$ is a vector space over a field $F$ , and if there exists finitely many elements $x_1,...,x_K \in X$ such th
|
|linear-algebra|abstract-algebra|vector-spaces|
| 0
|
Limit of lacunar power series in $1^-$.
|
Let $\sigma:\mathbb{N}\longrightarrow\mathbb{N}$ be strictly increasing, and consider the power series $$ S_{\sigma}(x)=\sum_{n=0}^{+\infty}(-1)^nx^{\sigma(n)}. $$ Can any real number in $[0,1]$ be obtained as the limit $\lim\limits_{x\rightarrow 1^-}S_{\sigma}(x)$ for some $\sigma$ ? According to this answer, the limit always is $\frac{1}{2}$ when $\sigma$ is a polynomial, WolframAlpha suggests that the limit is also $\frac{1}{2}$ with $\sigma(n)=n\log n$ (think of $\sigma(n)$ as the $n$ -th prime number). Therefore my question can also be : Is the limit $\lim\limits_{x\rightarrow 1^-}S_{\sigma}(x)$ always $\frac{1}{2}$ ? if not, can any rational number in $[0,1]$ be obtained this way for some $\sigma$ ?
|
We show that if $\sigma(n)$ is exponential then the series tends towards $1/2$ but retains an oscillation about this value and does not completely converge. Let $\sigma(n)=3^n$ and sum from $n=0$ to $n=\infty$ , thus we study $f(x)=\sum\limits_{n=0}^\infty (-1)^n x^{3^n}$ as $x\to1^-$ . The following graph shows a plot of this function from Microsoft Excel where data were obtained for $x=1-3^{-k}$ with $k$ from $2$ to $12$ in increments of $1/4$ (we will see shortly why increments as large as $1$ would not work): One might suppose the sinusoidal variation might be due to roundoff error, but the operation is not particularly ill-conditioned and the $1-x$ values explored exceed $10^{-6}$ even at the right end of the graph (largest $k$ , smallest $1-x$ explored). What is really going on? Below is a graph showing how the absolute values of the summation terms decrease with increasing $n$ for various $k$ values. It becomes evident that the transition from $\pm1$ to $0$ in the terms tends to a consta
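The oscillation is easy to reproduce without a spreadsheet; a short Python sketch evaluating the partial sums at $x=1-3^{-k}$ (illustrative, mirroring the data described above):

```python
# reproduce f(x) = sum_{n>=0} (-1)^n x^(3^n) evaluated at x = 1 - 3^(-k)
def f(x, terms=80):
    s, sign, p = 0.0, 1.0, x          # p holds x^(3^n), starting at n = 0
    for _ in range(terms):
        s += sign * p
        sign = -sign
        p = p ** 3                    # x^(3^(n+1)) = (x^(3^n))^3
    return s

for k in [2, 2.5, 3, 3.5, 4, 4.5, 5, 8, 8.5, 9]:
    x = 1 - 3.0 ** (-k)
    print(f"k={k:4}: f(x) = {f(x):.6f}")   # hovers near 0.5 but keeps oscillating in k
```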
|
|real-analysis|limits|power-series|analytic-number-theory|lacunary-series|
| 0
|
coin flipping competition
|
A, B, C are flipping a coin independently until they get a head (the same experiment "flip until a head" is repeated by these 3 people). Let X, Y, Z stand for the number of flips they need, respectively. Find the probability $P(X<Y<Z)$ . My solution is that they form a permutation like XYZ or XZY etc. Each combination has the same probability, so $P(X<Y<Z)$ is equal to 1/6. But if we consider a 2nd solution which sums the joint distribution $P(X=x, Y=y, Z=z) = (1/2)^{x+y+z}$ over the domain $1\le x<y<z$ , the answer is 1/21. So what's wrong with the first solution? Or is there any method without the summation? Thanks!
|
The first solution does not account for ties. What if they all found a head on the first try?
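A quick sanity check of how much probability the ties actually carry (an illustrative Monte Carlo, not part of the original argument): the six strict orderings together only account for $6\cdot\frac1{21}=\frac27$ , so ties carry the remaining $\frac57$ .

```python
import random

random.seed(1)

def flips_until_head():
    n = 1
    while random.random() < 0.5:   # tails with probability 1/2, keep flipping
        n += 1
    return n

N = 200_000
strict = ties = 0
for _ in range(N):
    x, y, z = flips_until_head(), flips_until_head(), flips_until_head()
    if x < y < z:
        strict += 1
    if len({x, y, z}) < 3:
        ties += 1

print(strict / N)   # about 1/21 = 0.0476
print(ties / N)     # about 5/7  = 0.714
```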
|
|probability|binomial-distribution|combinatorial-game-theory|
| 0
|
Is a symmetric monoidal category ("tensor-category" in P. Deligne & J.S. Milne's vocabulary) neccessarily locally small?
|
Let $(\mathcal{C},\otimes,\mathbf{1},\phi,\psi)$ (I will denote this by just $(\mathcal{C},\otimes)$ ) be a tensor-category (in P. Deligne & J.S. Milne's vocabulary, see https://www.jmilne.org/math/xnotes/tc2018.pdf ), or we could call it a symmetric monoidal category. Furthermore, assume that $(\mathcal{C},\otimes)$ is closed , by which we mean that for every object $B$ in $(\mathcal{C},\otimes)$ we have that $- \otimes B \dashv \underline{\text{Hom}}(B,-)$ , i.e. that $\underline{\text{Hom}}(B,-)$ is right-adjoint to the functor $- \otimes B$ . Is it the case that $(\mathcal{C},\otimes)$ is neccessarily locally small? The reason for asking, is the proof of proposition $3.2$ here: https://ncatlab.org/nlab/show/internal+hom#ClosedMonoidalCategory It seems like they use yoneda:s embedding in the proof, which assumes that $(\mathcal{C},\otimes)$ is locally small, but the proof only assumes closed symmetric monoidal.
|
No. Any presheaf category on a category with finite products has finite products and is therefore symmetric monoidal. But if the original category is locally small, then the presheaf category is locally small if and only if the original category is (essentially) small.
|
|category-theory|monoidal-categories|
| 0
|
Showing a family in a separable Hilbert space is a basis.
|
Let $\{e_n\}_{n \in \Bbb{N}}$ be an orthonormal basis for separable Hilbert space $H$ . Suppose $\{f_n\}_{n \in \Bbb{N}}$ is an orthonormal set such that $$\sum_n \vert \vert e_n - f_n \vert \vert < 1.$$ Prove $\{f_n\}$ is a basis for $H$ . My Thoughts, By Bessel's inequality, one has $$\sum_n \vert \langle f_n,e_n \rangle \vert \leq \vert \vert f_n \vert \vert^2.$$ where the RHS is just $1$ ? since the $f_n$ form an orthonormal set. This was the only exercise that gave me trouble today..Wait by Parsevals I get equality. So I need to show closure of $\text{span}\{f_1,f_2,...\}$ is all of $H$ ? i.e., the span of the $f_n$ is dense in $H$ .
|
Define the linear map $$ T:H\to H,T(e_i)=f_i $$ The inequality tells you that $\vert \vert T-I \vert \vert<1$ , in particular $T=I-(I-T)$ is invertible, hence $(f_i)$ is a basis and in fact an orthonormal basis. To see that $\vert \vert T-I \vert \vert<1$ , pick an element $x=(x_n)_{n\geq 1}$ of the unit sphere of $H$ expanded in the basis $(e_n)_{n\geq 1}$ , then: $$ \vert\vert (T-I)x \vert \vert \leq \sum_{n\geq 1} \vert x_n\vert \vert \vert e_n-f_n \vert\vert \leq \vert \vert x\vert \vert \sqrt{\sum_{n\geq 1} \vert \vert e_n-f_n \vert\vert ^{2} } \leq \sum_{n\geq 1} \vert \vert e_n-f_n \vert\vert < 1 $$
|
|real-analysis|measure-theory|
| 0
|
Solving $(1+x^{2})y''+y=0$ using power series
|
I've been trying to work on a solution for this equation $$ (1+x^{2})y''+y=0 $$ using power series around $x_{0}=0$ . So far, I have reached the following recursion relation: $$ a_{n+2}=\frac{-a_{n}(n^{2}-n+1)}{(n+2)(n+1)} $$ but I have been struggling to find a generic form for the coefficients. $a_{0}$ and $a_{1}$ are free since there are no initial conditions to be applied. Any thoughts on how to write this solution? Edit: coefficients need to be real.
|
With $$a_n=\frac{\left(-1\right)^{\frac{n}{2}}\cdot 2^{n}\cdot \Gamma \! \left(\frac{n}{2}-\frac{1}{4}-\frac{i\cdot \sqrt{3}}{4}\right)\cdot \Gamma \! \left(\frac{n}{2}-\frac{1}{4}+\frac{i\cdot \sqrt{3}}{4}\right)\cdot \left(\left\{\begin{array}{cc} \frac{a_{0}}{\Gamma \left(-\frac{1}{4}-\frac{i\cdot \sqrt{3}}{4}\right)\cdot \Gamma \left(-\frac{1}{4}+\frac{i\cdot \sqrt{3}}{4}\right)} & n::\mathit{even} \\ \frac{-\frac{i}{2}\cdot a_{1}}{\Gamma \left(\frac{1}{4}-\frac{i\cdot \sqrt{3}}{4}\right)\cdot \Gamma \left(\frac{1}{4}+\frac{i\cdot \sqrt{3}}{4}\right)} & n::\mathit{odd} \end{array}\right.\right)}{\Gamma \! \left(n+1\right)}$$ for $n > 1$ we get $$y(x)=a_0+a_1 x-\frac{1}{2}a_0 x^2-\frac{1}{6}a_1 x^3+\frac{1}{8}a_0 x^4+\frac{7}{120}a_1 x^5+...$$
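If a closed form is not essential, the recursion already determines the real solution numerically; here is a short check of the partial sums against a direct ODE solver (SciPy), with the example values $a_0=1$ , $a_1=0$ :

```python
import numpy as np
from scipy.integrate import solve_ivp

a0, a1 = 1.0, 0.0                      # the two free coefficients (example values)

# build coefficients from a_{n+2} = -a_n (n^2 - n + 1) / ((n+2)(n+1))
N = 40
a = [a0, a1]
for n in range(N - 2):
    a.append(-a[n] * (n * n - n + 1) / ((n + 2) * (n + 1)))

def series(x):
    return sum(c * x ** k for k, c in enumerate(a))

# solve (1 + x^2) y'' + y = 0 directly as a first-order system
def rhs(x, u):
    y, yp = u
    return [yp, -y / (1 + x * x)]

sol = solve_ivp(rhs, (0, 0.5), [a0, a1], dense_output=True, rtol=1e-10, atol=1e-12)
for x in (0.1, 0.3, 0.5):
    print(x, series(x), sol.sol(x)[0])   # the two columns should agree closely
```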
|
|ordinary-differential-equations|power-series|
| 0
|
If $f(X)=AX-XA$ is diagonalizable, show that $A$ is diagonalizable
|
Let $f:M_n(F)\rightarrow M_n(F), X\mapsto AX-XA$ . If $f$ is diagonalizable, I want to show that $A$ is diagonalizable. I'd prefer to avoid Jordan Blocks. I know that $f$ is diagonalizable if and only if: its minimal polynomial is square-free, or there exist $d$ linearly independent eigenvectors where $d = \dim M_n(F)$ , or the characteristic polynomial of $f$ factors into linear terms and each geometric multiplicity equals the corresponding algebraic multiplicity.
|
I assume $\mathbb F$ is algebraically closed (or at least a splitting field for $A$ ) and $\text{char }\mathbb F \neq 2$ . Let $\lambda$ be an arbitrary eigenvalue for $A$ and $B:= \big(A-\lambda I\big)$ . We need to show $\lambda$ is semi-simple $\iff \dim \ker B \leq \dim \ker B^2$ is met with equality . We can re-write $f$ as $f\big(X\big) = AX - XA = AX - XA - \lambda I X - X (-\lambda I) = BX - XB$ Now suppose for contradiction that $\dim \ker B \lt \dim \ker B^2$ i.e. there is some $\mathbf x \in \ker B^2$ but not in the nullspace of $B$ and some corresponding $\mathbf y^T$ is the left nullspace of $B^2$ but not in the left nullspace of $B$ and set $X:=\mathbf {xy}^T$ (i.) if $X \in \ker f$ $\mathbf 0 = f\Big(f\big(X\Big)\Big)= f^2\big(X\big) = B^2X + XB^2 - 2BXB=-2BXB = -2\big(B\mathbf x\big)\big(\mathbf y^TB\big)$ which contradicts $\mathbf x,\mathbf y^T$ not being in the right and left nullspaces of $B$ respectively (ii.) if $X \not\in \ker f \implies X \not\in \ker f^3$ since
|
|linear-algebra|matrices|diagonalization|
| 0
|
jump condition of a Green's function
|
Find the Green's function for the BVP $$y''-\frac1xy'=0 \ \ ; \ \ y(0)=y(1)=0$$ Clearly the operator is not self-adjoint, so the equivalent self-adjoint equation is $$\left(\frac{y'}{x}\right)'=0$$ Therefore the Green's function could be taken as $$G(x,t)=\begin{cases}A+Bx^2 & \text{if} \ \ 0\leq x\leq t\\ C+Dx^2 & \text{if} \ \ t\leq x\leq 1\end{cases}$$ Now boundary conditions give $A=C+D=0$ . Giving $$G(x,t)=\begin{cases}Bx^2 & \text{if} \ \ 0\leq x\leq t\\ C(1-x^2) & \text{if} \ \ t\leq x\leq 1\end{cases}$$ Continuity of $G$ at $x=t$ gives $Bt^2=C(1-t^2)$ . Also the jump discontinuity of $\displaystyle\frac{\partial G}{\partial x}$ at $x=t$ gives $$-2Ct-2Bt=t\implies B+C=-\frac12$$ Therefore $C=-\frac{t^2}2$ and $B=\frac{t^2-1}2$ . Hence $$G(x,t)=-\begin{cases}\dfrac{(1-t^2)x^2}2 & \text{if} \ \ 0\leq x\leq t\\ \dfrac{t^2(1-x^2)}2 & \text{if} \ \ t\leq x\leq 1\end{cases}$$ which is clearly wrong since the correct answer is $$G(x,t)=-\begin{cases}\dfrac{(1-t^2)x^2}{2t} & \text{if} \ \ 0\leq x\leq t\\ \dfrac{t(1-x^2)}{2} & \text{if} \ \ t\leq x\leq 1\end{cases}$$ I checked that taking the jump $1$ instead of $t$ would give the correct answer, but here in self-adjoint form the coefficient is $\displaystyle\frac1x$ , whose reciprocal is wha
|
In order that the solution to the non-homogeneous ODE $$ y''-\frac{1}{x}y'=f(x), \qquad y(0)=y(1)=0 \tag{1} $$ be given by $$ y(x)=\int_0^1G(x,t)f(t)\,dt, \tag{2} $$ the Green's function $G(x,t)$ must be the solution to $$ \left(\frac{\partial^2}{\partial x^2}-\frac{1}{x}\frac{\partial}{\partial x}\right)G(x,t)=\delta(x-t), \qquad G(0,t)=G(1,t)=0 \qquad(0<t<1). \tag{3} $$ Eq. $(3)$ can be rewritten as $$ \frac{\partial}{\partial x}\left(\frac{1}{x}\frac{\partial}{\partial x}G(x,t)\right)=\frac{1}{x}\delta(x-t), \tag{4} $$ which implies $$ \lim_{\epsilon\to 0^{+}}\int_{t-\epsilon}^{t+\epsilon}\frac{\partial}{\partial x}\left(\frac{1}{x}\frac{\partial}{\partial x}G(x,t)\right)dx =\lim_{\epsilon\to 0^{+}}\int_{t-\epsilon}^{t+\epsilon}\frac{1}{x}\delta(x-t)dx =\frac{1}{t} $$ $$ \implies \lim_{\epsilon\to 0^{+}}\left.\frac{1}{x}\frac{\partial}{\partial x}G(x,t)\right|_{t-\epsilon}^{t+\epsilon}=\frac{1}{t} \implies \lim_{\epsilon\to 0^{+}}\left.\frac{\partial}{\partial x}G(x,t)\right|_{t-\epsilon}^{t+\epsilo
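As a numerical cross-check of the Green's function quoted as correct in the question, one can test it against an inhomogeneity with a known solution; the sketch below uses the illustrative choice $f(x)=1$ , for which $y''-\frac1xy'=1$ , $y(0)=y(1)=0$ has the closed form $y(x)=\tfrac12x^2\ln x$ :

```python
import numpy as np
from scipy.integrate import quad

def G(x, t):
    # Green's function quoted as correct in the question
    if x <= t:
        return -(1 - t**2) * x**2 / (2 * t)
    return -t * (1 - x**2) / 2

def y_green(x, f=lambda t: 1.0):
    val, _ = quad(lambda t: G(x, t) * f(t), 0, 1, points=[x])
    return val

def y_exact(x):
    return 0.5 * x**2 * np.log(x)

for x in (0.2, 0.5, 0.8):
    print(x, y_green(x), y_exact(x))   # the two columns should agree
```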
|
|ordinary-differential-equations|greens-function|
| 1
|
Find a group of order 5784 that does not have a normal subgroup of index 12
|
This is the second part of a two-part question. In the first part I was asked to prove that any group of order $5784 = 2^3 \cdot 3 \cdot 241$ has subgroups of the following indexes: $3, 6, 8, 12,$ and $24$ . I proved that part by showing: $G$ has subgroups of orders $2, 2^2, 2^3, 3, 241$ (by Sylow's theorems). Moreover, the subgroup of order $241$ , let's call it $H$ , is normal, since by Sylow's third theorem it is easy to see it is the only subgroup of that order. Therefore for any other subgroup $K$ of $G$ , $HK$ is a subgroup as well (since $HK = KH$ due to $H$ being normal). But the order of $HK$ is $o(HK) = \frac{o(H) \cdot o(K)}{o(H \cap K)} = o(H) \cdot o(K)$ since necessarily $o(H\cap K) = 1$ , due to the order of $H$ being prime and no other subgroup having order that is a multiple of that prime ( $241$ ). Therefore we have subgroups of orders $241$ , $241 \cdot 2$ , $241 \cdot 2^2$ , $241 \cdot 2^3$ , $241 \cdot 3$ , which give us the subgroups of required indexes ( $24, 12, 6, 3, 8$ respectively) as well, QED. N
|
Posting the answer I arrived at with the kind help of @SteveD in the comment thread above: Consider the group $G = S_4 \times C_{241}$ . As the direct product of $S_4$ (the symmetric group on 4 elements, of order 4! = 24) and the finite cyclic group of order 241, this is a group of order 5784, as needed. Assume it has a normal subgroup N of index 12. Then N has order 5784/12 = 482. Note that 482 = 241 * 2, which is of the form $p \cdot q$ for two primes. Therefore (by Cauchy's theorem) it has a subgroup of order 241. Further, this must be one and the same group $C_{241}$ , since there is only a single subgroup of this order in $G$ (by Sylow's third theorem, which constrains the number of Sylow subgroups of a given order; in this case the conditions are easily seen to imply a single subgroup). So we have $N \unlhd G$ , $C_{241} \unlhd G$ and $C_{241} \unlhd N$ , therefore by the third isomorphism theorem we have $ N / C_{241} \unlhd G / C_{241}$ . But $G / C_{241} \sim
|
|group-theory|sylow-theory|
| 1
|
Guaranteed graph labyrinth solving sequence
|
Starting from a vertex of an unknown, finite, strongly connected directed graph, we want to 'get out' (reach the vertex of the labyrinth called 'end'). Each vertex has two exits (edge which goes from vertex in question to an other one), one exit is labeled 'a', the other exit is labeled 'b'. We have limitless 'memory' but, we don't recognize when we arrive at the same vertex again, so at each step we can only pick if we go exit a or exit b, or we recognize when we have entered the exit vertex. Show that there is an algorithm to get out of any maze! Write the algorithm. If n is its input, then its output is a sequence 'a', 'b' that exits any maze with at most n vertices. I got this assignment (math student) in a course to do with algorithms. I don't believe the actual code outputting the 'a', 'b' sequence is particularly difficult once the structure of the function is found mathematically. I've had multiple ideas, it is clear, that were we to find a sequence that would guarantee a visit
|
There are only finitely many possible labyrinths on $n$ vertices, where the data defining a labyrinth includes all of the edges, the start vertex, and the target vertex. Say there are $M$ labyrinths, and name them $L_1,L_2,\dots,L_M$ . Given a labyrinth $L$ and an arbitrary vertex $v$ of $L$ , let $\newcommand\sol{\text{sol}}\sol(L,v)$ be a sequence of $a$ and $b$ which, starting from $v$ , takes you to the exit of $L$ . Let $w_1=\sol(L_1,v_1)$ , where $v_1$ is the start vertex of $L_1$ . The first part of the solution path will be $w_1$ . Let $v_2$ be the vertex you end up on after following the path $w_1$ on the labyrinth $L_2$ . Then, let $w_2=\sol(L_2,v_2)$ . The second portion of the solution path is $w_2$ ; as long as the true labyrinth is $L_2$ , then we will be done after completing $w_1+w_2$ . Let $v_3$ be the vertex you end up on after following the concatenation of paths $w_1+w_2$ on the labyrinth $L_3$ . Then, let $w_3=\sol(L_3,v_3)$ . The third portion of the solution path
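A brute-force Python sketch of exactly this construction (only feasible for very small $n$ ; the encoding of a labyrinth as two edge maps plus a start and an end vertex is one possible choice, not prescribed by the problem):

```python
from collections import deque
from itertools import product

def solve_one(a, b, start, end):
    """Shortest word over {'a','b'} leading from start to end (BFS), or None."""
    seen, q = {start}, deque([(start, "")])
    while q:
        v, w = q.popleft()
        if v == end:
            return w
        for letter, nxt in (("a", a[v]), ("b", b[v])):
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, w + letter))
    return None

def follow(a, b, v, word, end):
    """Walk `word` starting at v, but stop as soon as the exit is reached."""
    for letter in word:
        if v == end:
            return v
        v = a[v] if letter == "a" else b[v]
    return v

def universal_word(n):
    word = ""
    for k in range(1, n + 1):
        verts = range(k)
        # a labyrinth on k vertices = the two edge maps plus a start and an end vertex
        for a, b, start, end in product(product(verts, repeat=k),
                                        product(verts, repeat=k),
                                        verts, verts):
            v = follow(a, b, start, word, end)   # where the word built so far leaves us
            tail = solve_one(a, b, v, end)
            if tail is not None:                 # skip labyrinths whose exit is unreachable
                word += tail
    return word

print(universal_word(2))
```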
|
|combinatorics|graph-theory|algorithms|recursive-algorithms|tiling|
| 1
|
Figuring out if $\lim_{(x,y)\to(0,0)}\frac{-x^6y^1(x^2+1)}{(x^6+y^2)\sqrt{x^2+y^2}}$ exists
|
I need to find out if the limit exists. $$\lim_{(x,y)\to(0,0)}\frac{-x^6y^1(x^2+1)}{(x^6+y^2)\sqrt{x^2+y^2}}$$ First, I approached the limit from $y=0$ , and the result was $\frac{0}{x^7}$ . Then, I approached from $x = 0$ , and the result was $\frac{0}{y^3}$ . This made me assume that the limit does not exist. However, Wolfram Alpha calculated the limit as zero. What is the solution to this question?
|
Since this term has some $(x^2+y^2)$ factor, it may be inspiring to use polar coordinates to investigate its behaviour first. Under polar coordinates, the limit becomes $$\lim_{r\to0}\dfrac{r^7\cos^6\theta\sin\theta(1+r^2\cos^2\theta)}{r^2(r^4\cos^6\theta+\sin^2\theta)\cdot r}=\lim_{r\to0}r^4\times\left(\dfrac{\cos^6\theta\sin\theta(1+r^2\cos^2\theta)}{r^4\cos^6\theta+\sin^2\theta}\right)$$ By AM-GM the denominator satisfies $r^4\cos^6\theta+\sin^2\theta\ge 2r^2|\cos\theta|^3|\sin\theta|$ , so the whole expression is bounded in absolute value by $\dfrac{r^2(1+r^2)}{2}$ (and it is $0$ whenever $\cos\theta\sin\theta=0$ ), so sandwich theorem!
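A quick numerical sanity check (illustrative only): evaluating along the curve $y=x^3$ , where $x^6$ and $y^2$ are balanced, the values still behave like $-x^2/2$ and tend to $0$ .

```python
def g(x, y):
    return -x**6 * y * (x**2 + 1) / ((x**6 + y**2) * (x**2 + y**2) ** 0.5)

for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, g(x, x**3))   # roughly -x^2/2 near the origin, so it tends to 0
```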
|
|limits|multivariable-calculus|
| 0
|
For what $n$ is there an injective homomorphism from $\mathbb{Z_n} \to S_7$
|
So I came across this problem: For what $n$ is there an injective homomorphism from $\mathbb{Z}_n\to S_7$ ? I have the solution but the solution doesn't make much sense to me. It said that $n=1,2,3,4,5,6,7,10,12$ . Also I'm wondering why $n$ can't be $8$ ? A homomorphism $\phi$ is just a mapping such that for all $a,b\in \mathbb{Z}_n$ $\phi(ab)=\phi(a)\phi(b)$ Since we can always map the elements of $\mathbb{Z}_8$ injectively to $S_7$ as the order difference is quite large I would assume that the property $\phi(ab)=\phi(a)\phi(b)$ is the property that doesn't hold for $\mathbb{Z}_8$ . But I am not exactly sure how to explicitly show this, please help!!
|
For $n=1,\dots,7$ , clearly $S_7$ contains $n$ -cycles: each of them is isomorphic to $\mathbb Z_n$ . For $n=10$ , for example the subgroup $\langle (12)(34567)\rangle$ is isomorphic to $\mathbb Z_{10}$ . Likewise, for $n=12$ , for example the subgroup $\langle (123)(4567)\rangle$ is isomorphic to $\mathbb Z_{12}$ . Can you see now why $S_7$ hasn't got subgroups isomorphic to $\mathbb Z_n$ for $n=8,9,11$ ?
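Since an injective homomorphism $\mathbb Z_n\to S_7$ is the same thing as an element of order $n$ in $S_7$ , the admissible $n$ are exactly the least common multiples of the partitions of $7$ ; a short Python enumeration (illustrative):

```python
from math import lcm

def partitions(n, largest=None):
    """All partitions of n as tuples with parts in decreasing order."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

orders = sorted({lcm(*p) for p in partitions(7)})
print(orders)   # [1, 2, 3, 4, 5, 6, 7, 10, 12]
```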
|
|group-theory|group-homomorphism|
| 0
|
Proof for Vector Spaces $V = U + W$ with $U \cap W = {\mathbf{0}}$
|
I am going through question in my text book regarding proof. I have done that but I think I am making mistake of proving that $u_1=u_2$ and $w_1=w_2$ by using this same statement that I have to prove. Am I wrong or I have done following proof correctly. Kindly help Question: If $U$ and $W$ are subspaces of vector space $V$ such that $V = U + W$ and $U \cap W = {\mathbf{0}}$ , then prove that every vector in $V$ has a unique representation of the form $\mathbf{u} + \mathbf{w}$ , where $\mathbf{u} \in U$ and $\mathbf{w} \in W$ .( $V$ is called the direct sum of $U$ and $W$ , and is written $V = U \oplus W$ . ) Proof: Existence: For any $\mathbf{v} \in V$ , since $V = U + W$ , $\exists \mathbf{u} \in U, \mathbf{w} \in W$ such that $\mathbf{v} = \mathbf{u} + \mathbf{w}$ . Uniqueness: Suppose $\mathbf{v} = \mathbf{u}_1 + \mathbf{w}_1 = \mathbf{u}_2 + \mathbf{w}_2$ , where $\mathbf{u}_1, \mathbf{u}_2 \in U$ and $\mathbf{w}_1, \mathbf{w}_2 \in W$ . Then, $\mathbf{u}_1 - \mathbf{u}_2 = \mathbf
|
Your proof is completely correct. There is nothing to add a priori, but nevertheless I regret the absence of drawings and examples because of the close link between linear algebra and elementary geometry. It is understandable that in a textbook, the teacher avoids drawings in order to focus his readers on reasoning independently of the drawing, which are only supports that are not absolutely necessary. On the other hand, it is strongly recommended that the student of the same book multiply the drawings and look for examples in his knowledge. Here, it's simple: we can take for example the vector space over $\mathbb R$ $$V:=(\mathbb R^2,+,.)$$ $$U:=\mathbb R (1,0)$$ $$W:=\mathbb R (0,1)$$ What you have shown then allows you to assert that any element $(x,y)$ of $\mathbb R^2$ has a unique representation of the form $u+w$ , with $u\in U$ and $w\in W$ , which you will no doubt recognize: $$\text{Denoting } \vec i=(1,0) , \vec j=(0,1), \forall \vec v\in V, \exists !(x,y)\in \mathbb R^2: \vec
|
|linear-algebra|solution-verification|vector-spaces|
| 1
|
What is meant when mathematicians or engineers say we cannot solve nonlinear systems?
|
I was watching a video on "system identification" in control theory, in which the creator says that we don't have solutions to nonlinear systems. And I have heard this many times in many contexts, related to control problems or nonlinear odes, etc. I think I am reacting to these kinds of blanket statements, and I would like to understand more precisely what is meant. But I wanted to understand precisely what is meant that we can't solve nonlinear systems? Indeed, there are probably hundreds of questions on Math SE regarding numerical solutions to nonlinear systems. There are many algorithms for numerically solving different types of nonlinear systems of equations, including Newton's method, sequential quadratic programming, BFGS, Broyden's method, etc. All of these methods have their own limitations, such as positive definiteness, the existence of hessians, and so forth. Now in a linear ode or linear system of equations, we can get the solution for the system pretty easily, even for la
|
Your question is indeed overly broad. As asked, the answer is probably that the unsolvability refers to the absence of answers that are essentially formulas of some kind. The numerical methods you refer to don't count as "solutions" with this definition.
|
|numerical-methods|nonlinear-system|nonlinear-dynamics|
| 0
|
Understanding when probability distributions are in the exponential family.
|
I'm starting to study Generalized Linear Models and I need help understanding how to show that a distribution is part of the exponential family. I know that in general, a distribution is a member of the exponential family if it can take on the following form. $$p(x | \eta) = h(x) \exp(\eta \pi(x) - A(\eta))$$ I get that the basic idea is to take the exponential of the logarithm of the distribution then try to get things to match up. When I look at some examples like Poisson distribution, I kind of get it, but I'm still left with a lot of questions. Given $$p(x | \lambda) = (\lambda^x e^{-\lambda}) / x!,$$ it can be rewritten as follows $$p(x | \lambda) = 1/x! \exp(x \log(\lambda) - \lambda)$$ I understand most of this. $\log(\lambda^x) = x \log(\lambda)$ and $\log(e^{-\lambda})$ simplifies to $-\lambda$ . But I don't understand why doesn't the $(1/x!)$ become $-log(1/x!)$ . I also don't understand how to assign the different values to their respective parts $\eta = \log(\lambda)$ $T(x)
|
Suppose $X$ is a member of an exponential family. A standard form for the pdf/pmf of a member of an exponential family is $$f_X(x|\eta) = h(x)\exp\bigl(T(x)\eta-A(\eta)\bigr).$$ We can rewrite this as $$h(x)\cdot e^{T(x)\eta}\cdot e^{-A(\eta)}.$$ This is how we'll assign the different parts: By writing the pdf/pmf as a product of one term that depends only on $x$ , one that depends only on $\eta$ , and one that depends on both (in a very specific way). Note that $h(x)$ is already on the outside, as is the $1/x!$ , so we aren't going to need to do anything with that. However, the $\exp(T(x)\eta)=\lambda^x$ , so we have to re-jigger the $\lambda^x$ as $$\lambda^x = \exp(\log(\lambda^x)).$$ This is why we did have to do something with the $\lambda^x$ term that we didn't have to do to the $h(x)$ . And similarly, we have to rewrite the $e^{-A(\eta)}$ . In this case, it's already going to appear as $e^{-A(\eta)}=e^{-\color{red}{\text{some junk}}},$ so we aren't going to have to do much to $A
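To see the bookkeeping concretely, one can verify numerically that the Poisson pmf and the exponential-family form agree with $h(x)=1/x!$ , $T(x)=x$ , $\eta=\log\lambda$ , $A(\eta)=e^\eta=\lambda$ (a small illustrative script, not from the original answer):

```python
import math

lam = 2.3
eta = math.log(lam)          # natural parameter
A = math.exp(eta)            # log-partition function: A(eta) = e^eta = lambda

for x in range(6):
    poisson = lam**x * math.exp(-lam) / math.factorial(x)
    exp_family = (1 / math.factorial(x)) * math.exp(eta * x - A)
    print(x, poisson, exp_family)   # identical up to floating point
```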
|
|probability|probability-theory|probability-distributions|logarithms|logistic-regression|
| 1
|
Find $f(x)$ so that volume of revolution on $[a,b]$ is $b^3-ab^2$
|
This is Additional Problem $26$ in Chapter $7$ of Simmons Calculus: "A solid is generated by revolving about the $x$ -axis the area bounded by a curve $y=f(x)$ , and the lines $x=a$ and $x=b$ . Its volume is $\pi(b^3-b^2a)$ for all $b>a$ . Find $f(x)$ ." Some false starts: Since $V=\pi b^2(b-a)$ the impulse is to see this expression as the volume of a cylinder with radius $b$ , but $f(x)=b$ would yield the correct volume for only intervals $[a,b]$ and not for any other $[a,b']$ . So $f(x)$ is not constant and the volume is not a cylinder. If we say that $g(x)=[f(x)]^2$ and $\int g(x)=G(x)$ then $G(b)-G(a)=b^2(b)-b^2(a)$ and it would seem that $G(x)=b^2x$ but then $g(x)=b^2$ and $f(x)=b$ which is incorrect. So the simple assignments to $G(x)$ aren't right, but I also see no way to disentangle $a$ from $b$ . Rewriting slightly with the disc method we have $\int_{a}^{b} [f(x)]^2dx=b^3\frac{b}{b}-b^3\frac{a}{b}$ , implying the volume grows linearly from $0$ to $b^3$ as $a$ goes from $0$ to
|
So we want a function $f(x)$ such that $\displaystyle\int_{a}^{b} [f(x)]^2dx=b^3-b^2a$ for every $b>a$ , and $a$ is fixed (I'll assume $a>0$ ). Define $F(x)=\displaystyle\int_{a}^{x} [f(t)]^2dt$ . By the Fundamental Theorem of Calculus we will have $F'(x)=f(x)^2$ . But by the above equality we have $F(x)=x^3-x^2a$ , and so $F'(x)=3x^2-2ax$ . Thus (assuming we want a continuous function), $$f(x)^2=3x^2-2ax\implies f(x)=\sqrt{3x^2-2ax}\space\text{ or } f(x)=-\sqrt{3x^2-2ax}$$ Both functions are well defined as the only roots of $3x^2-2ax$ are $0$ and $\dfrac{2a}{3}<a$ , so $3x^2-2ax\geq 0$ for all $x\geq a$ .
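A short symbolic confirmation (SymPy, illustrative) that $f(x)=\sqrt{3x^2-2ax}$ produces the required volume $\pi(b^3-ab^2)$ :

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
f = sp.sqrt(3 * x**2 - 2 * a * x)

V = sp.pi * sp.integrate(f**2, (x, a, b))       # disc method: pi * integral of f^2
print(sp.simplify(V - sp.pi * (b**3 - a * b**2)))   # should print 0
```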
|
|calculus|solid-of-revolution|
| 1
|
Improving my way of showing $\sin^212^\circ+\sin^221^\circ+\sin^239^\circ+\sin^248^\circ=1+\sin^29^\circ+\sin^218^\circ$
|
This problem is from 1904 and was given to students studying for the Cambridge and Oxford entry examinations. My solution is presented below, but I am of the opinion that it can be improved. All ideas welcome. Show that $$\sin^{2}{12^{\circ}}+\sin^{2}{21^{\circ}}+\sin^{2}{39^{\circ}}+\sin^{2}{48^{\circ}}=1+\sin^{2}{9^{\circ}}+\sin^{2}{18^{\circ}}$$ A solution $$\begin{align} \sin^{2}{12^{\circ}}=\sin^{2}{(30^{\circ}-18^{\circ})} &=(\sin{30^{\circ}}\cos{18^{\circ}}-\cos{30^{\circ}}\sin{18^{\circ}})^{2} \tag1\\ &=\left(\frac{1}{2}\cos{18^{\circ}}-\frac{\sqrt{3}}{2}\sin{18^{\circ}}\right)^{2} \tag2\\ &=\frac{1}{4}\cos^{2}{18^{\circ}}+\frac{3}{4}\sin^{2}{18^{\circ}}-\frac{\sqrt{3}}{2}\cos{18^{\circ}}\sin{18^{\circ}} \tag3 \\ \\ \\ \sin^{2}{48^{\circ}} &=\sin^{2}{(30^{\circ}+18^{\circ})} \tag4 \\ &= (\sin{30^{\circ}}\cos{18^{\circ}}+\cos{30^{\circ}}\sin{18^{\circ}})^{2} \tag5 \\ &=\left(\frac{1}{2}\cos{18^{\circ}}+\frac{\sqrt{3}}{2}\sin{18^{\circ}}\right)^{2} \tag6 \\ &=\frac{1}{4}\cos^{2}{
|
Well, I think the most obvious improvement would be to establish a general identity and then use it in the specific cases, rather than to redo the same computation four times: $$\begin{align} \sin^2 (x+y) + \sin^2 (x-y) &= (\sin x \cos y + \cos x \sin y)^2 + (\sin x \cos y - \cos x \sin y)^2 \\ &= 2 \left( \sin^2 x \cos^2 y + \cos^2 x \sin^2 y \right) \tag{1} \\ \end{align}$$ so that with $x = 30^\circ$ , $$\sin^2 (30^\circ + y) + \sin^2 (30^\circ - y) = \frac{\cos^2 y + 3 \sin^2 y}{2} = \frac{1}{2} + \sin^2 y. \tag{2}$$ Now substituting $y = 9^\circ$ and $y = 18^\circ$ , we immediately find the LHS of the claimed identity equals $$1 + \sin^2 9^\circ + \sin^2 18^\circ.$$
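A quick numerical confirmation of both identity $(2)$ and the original equality (plain Python):

```python
from math import sin, radians

s2 = lambda deg: sin(radians(deg)) ** 2

lhs = s2(12) + s2(21) + s2(39) + s2(48)
rhs = 1 + s2(9) + s2(18)
print(lhs, rhs, abs(lhs - rhs) < 1e-12)   # True

# identity (2): sin^2(30+y) + sin^2(30-y) = 1/2 + sin^2(y)
for y in (9, 18, 25):
    print(abs(s2(30 + y) + s2(30 - y) - (0.5 + s2(y))) < 1e-12)
```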
|
|trigonometry|
| 1
|
How can I evaluate the limit of $\theta$ in the expression $f(x+h)-f(x)=h f'(x+\theta h)$?
|
I'm working on a problem which asks to calculate the value of $\lim_{h\rightarrow 0} \theta$ ,where $\theta$ comes from the mean value theorem $f(x+h)-f(x)=hf'(x+\theta h)$ ,and $f$ is first order continuous differentiable. It's quite easy if $f$ is second order continuous differentiable by comparing the coefficient of Taylor expansion of $f(x+h)$ at $x$ , from which I can obtain $\lim\limits_{h\to 0} \theta =\frac{1}{2}$ . However, when I come to the first order case, I've got totally no idea about whether "pull out" $\theta$ from $f$ or construct a counter-example. So my question is does this conclusion still work in the first order case? How can prove it?
|
The conclusion is not always true if $f'$ is continuous at $x$ but not differentiable at $x$ . For a counterexample, consider $f(x) = 2x \sqrt{|x|}$ at $x=0$ . The derivative is $f'(x) = 3\sqrt{|x|}$ . For every real $h$ , the function $\theta(h)$ must satisfy $$ f(0+h)-f(0) = h f'(0+h \theta(h)) $$ $$ 2h\sqrt{|h|} = 3h \sqrt{|h\theta(h)|} $$ $$ \theta(h) = \frac 49 $$ $$ \lim_{h \to 0} \theta(h) = \frac 49 \neq \frac 12 $$ (Also as the question you linked in comments implies, the conclusion is not necessarily true if $f''(x) = 0$ .)
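A numerical illustration of this counterexample (the root-finding is just a sanity check, not part of the argument): solving $f(0+h)-f(0)=h\,f'(\theta h)$ for $\theta$ at several $h$ returns $4/9$ each time.

```python
from scipy.optimize import brentq

f = lambda x: 2 * x * abs(x) ** 0.5       # f(x) = 2x sqrt(|x|)
fp = lambda x: 3 * abs(x) ** 0.5          # f'(x) = 3 sqrt(|x|)

for h in (0.1, 0.01, 0.001):
    theta = brentq(lambda t: f(h) - f(0) - h * fp(t * h), 1e-12, 1 - 1e-12)
    print(h, theta)   # always 4/9 = 0.444..., never 1/2
```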
|
|calculus|mean-value-theorem|
| 1
|