| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
In how many ways can we distribute $~B~$ white and $~B~$ black balls into $~U~$ distinguishable urns such that each urn has at least 1 ball?
|
This is a self-answered question, inspired by this recently posted question. $\underline{\text{The Question}}$ Assuming that balls of the same color are indistinguishable, in how many ways can we distribute $~B~$ white and $~B~$ black balls into $~U~$ distinguishable urns so that each urn has at least 1 ball? See my answer.
|
Corrected a typo. In the computation of $~f(n),~$ the first factor should be $~\displaystyle \binom{U}{n},~$ rather than $~\displaystyle \binom{B}{n}.~$ There may be a more elegant way of attacking this problem than the one I will use. I considered inclusion-exclusion, but found it to be messy, so I settled for the direct approach, which yields a computation that (unfortunately) I am only able to express as a summation. $\underline{\text{Preliminary Result}}$ PR-1: For $~r \in \Bbb{Z}_{\geq 0},~ s,t \in \Bbb{Z}^+~$ such that $~r \leq s \leq t,~$ the number of solutions to $x_1 + x_2 + \cdots + x_s = t$, where $x_1, x_2, \ldots, x_s \in \Bbb{Z}_{\geq 0}$ and (for $~r > 0~$) $x_1, x_2, \ldots, x_r \in \Bbb{Z}^+$, is $~\displaystyle \binom{[t - r] + [s - 1]}{s-1}.~$ Proof: Employ the change of variables $~y_i = x_i - 1 ~:~ 1 \leq i \leq r~$ and $~y_i = x_i ~:~ i > r.$ Then there is a bijection between the set of solutions to the above enumeration problem, and the set of solutions to the
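PR-1 can be sanity-checked by exhaustive enumeration. A minimal sketch (the helper name `count_solutions` is mine, and the test ranges are arbitrary small values):

```python
from itertools import product
from math import comb

def count_solutions(t: int, s: int, r: int) -> int:
    """Count tuples (x_1, ..., x_s) of nonnegative integers summing to t,
    with the first r coordinates required to be positive."""
    return sum(
        1
        for xs in product(range(t + 1), repeat=s)
        if sum(xs) == t and all(x >= 1 for x in xs[:r])
    )

# check PR-1 for all small (t, s, r) with r <= s <= t
for t in range(1, 7):
    for s in range(1, t + 1):
        for r in range(0, s + 1):
            assert count_solutions(t, s, r) == comb((t - r) + (s - 1), s - 1)
```

The case $r=0$ recovers ordinary stars and bars, and $r=s$ recovers the count of positive solutions $\binom{t-1}{s-1}$.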
|
|combinations|
| 0
|
An attempt at approximating the logarithm function $\ln(x)$: could it be extended to big numbers?
|
An attempt at approximating the logarithm function $\ln(x)$: could it be extended to big numbers? PS: Thanks everyone for your comments and interesting answers showing how the logarithm function is currently calculated numerically, but so far nobody has answered the question I am actually asking, which is related to the formula \eqref{Eq. 1}: Is it correctly calculated? Could a formula for the logarithm of large numbers be found with it? Here by "big/large numbers" I mean the same sense in which Stirling's approximation formula approximates the factorial function at large values. Intro: In a previous question I found that the following approximation could be used: $$\ln\left(1+e^x\right)\approx \frac{x}{1-e^{-\frac{x}{\ln(2)}}},\ (x\neq 0) \quad \Rightarrow \quad \dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln\left(x^{\ln(2)}\right)} \approx \frac{x}{x-1}$$ And later I noted that I could do the following: $$\dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln(2)} \approx \frac{x\ln\lef
|
Hastings's classic "Approximations for Digital Computers", published in 1954, contains a number of nice approximations for $\log_{10}$. Here is the most accurate one. For $\dfrac1{\sqrt{10}} \le x \le \sqrt{10}$, let $z=\dfrac{x-1}{x+1}$. Then $\log_{10}(x) \approx c_1z+c_3z^3+ c_5z^5+c_7z^7 +c_9z^9$ with $$c_1 = 0.868591718,\quad c_3 = 0.289335524,\quad c_5 = 0.177522071,\quad c_7 = 0.094376476,\quad c_9 = 0.191337714,$$ with an error less than $1.5\times10^{-7}$. A more recent collection is "Computer Approximations" by Hart, Cheney, Lawson, Maehly, Mesztenyi, Rice, Thacher, and Witzgall, published in 1968. This has a number of quite accurate approximations to log.
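The quoted polynomial is easy to transcribe and check numerically. A sketch, assuming the coefficients exactly as printed above (the function name `hastings_log10` is mine):

```python
import math

# Hastings's degree-9 minimax coefficients c1, c3, c5, c7, c9 for log10
# on [1/sqrt(10), sqrt(10)], transcribed from the answer above.
C = (0.868591718, 0.289335524, 0.177522071, 0.094376476, 0.191337714)

def hastings_log10(x: float) -> float:
    z = (x - 1.0) / (x + 1.0)
    z2 = z * z
    # Horner evaluation of c1*z + c3*z^3 + c5*z^5 + c7*z^7 + c9*z^9
    acc = 0.0
    for c in reversed(C):
        acc = acc * z2 + c
    return acc * z

lo, hi = 1 / math.sqrt(10), math.sqrt(10)
worst = max(
    abs(hastings_log10(x) - math.log10(x))
    for x in (lo + i * (hi - lo) / 1000 for i in range(1001))
)
assert worst < 2e-7  # consistent with the quoted 1.5e-7 bound
```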
|
|real-analysis|combinatorics|convergence-divergence|solution-verification|pochhammer-symbol|
| 0
|
Formal definition of $m$-th digit of positive integer
|
Let $b$ be a base, that is, an integer greater than or equal to $2$ , and let $n$ be a positive integer that has $d$ digits in base $b$ . What is the formal definition of the function that takes a positive integer $m$ less than or equal to $d$ , and outputs the $m$ -th digit of $n$ , starting from the left? I know what it is, intuitively, but I want a formal definition.
|
It's well established that for any natural $n$ and any natural base $b$ there is a distinct set of digits so that $n$ may be expressed with those digits in base $b$ . So to say "let $f(m)$ be the $m$ th digit from the left" is perfectly well-defined. What makes it informal is the "from the left". As you are assuming that the number of digits, $d$ , is known, though, we can formally state this as the digit coefficient of $b^{d-m}$ . If that feels too obfuscated we can be a bit more explanatory and state that $f(m) = a_{d-m}$ where $a_0, \ldots, a_{d-1}$ are the unique digits $a_k\in \mathbb N$ , $a_k < b$ , such that $n = \sum_{k=0}^{d-1}a_k b^k$ . If you want something more computational ... well, Robert Shore really has a good answer to that. $f(m)=\lfloor \frac n{b^{d-m}}\rfloor \% b$ where $\lfloor\ \rfloor$ is the floor (greatest integer) function and $\%$ is the remainder operator.
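The closed form at the end can be sketched directly; the helper names `len_in_base` and `digit_from_left` are mine:

```python
def len_in_base(n: int, b: int) -> int:
    """Number of digits d of n in base b (n >= 1)."""
    d = 0
    while n > 0:
        n //= b
        d += 1
    return d

def digit_from_left(n: int, m: int, b: int = 10) -> int:
    """m-th digit of n in base b, counting from the left (1-indexed):
    floor(n / b^(d-m)) mod b."""
    d = len_in_base(n, b)
    return (n // b ** (d - m)) % b

assert [digit_from_left(2024, m) for m in (1, 2, 3, 4)] == [2, 0, 2, 4]
assert digit_from_left(0b10110, 2, b=2) == 0  # binary digits of 22: 1,0,1,1,0
```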
|
|elementary-number-theory|definition|
| 0
|
How to determine the value of $\displaystyle f(x) = \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n$?
|
How to determine the value of $\displaystyle f(x) = \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n$ ? No context, this is just a curiosity of mine. Yes, I am aware there is no reason to believe a random power series will have a closed form in terms of well-established functions, but also I have no way to know if that is the case here, so that is why I'm asking. Do you know this power series or any method I could use to determine its value? In my research I've found out about the polylogarithm, which is defined as $$\mathrm{Li}_s(x) = \sum_{n=1}^\infty\frac{x^n}{n^s} = \frac1{\Gamma(s)}\int_0^\infty\frac{t^{s-1}}{e^t/x-1}dt$$ This caught my attention because $$\begin{aligned} f(x) &= \sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n\\ &= x\sum_{n=1}^\infty\frac1{\sqrt n}\frac{x^{n-1}}{(n-1)!}\\ &= x\sum_{n=1}^\infty\frac1{\sqrt n}\mathcal L^{-1}\left\{\frac1{x^n}\right\}\\ &= x\mathcal L^{-1}\left\{\sum_{n=1}^\infty\frac1{\sqrt n}\frac1{x^n}\right\}\\ &= x\mathcal L^{-1}\left\{\mathrm{Li}_{1/2}\left(\frac
|
In my opinion, the integral representation alone is too little for a post, but in order to put some meat on the bones, we can find the full asymptotics using this formula. $$S(x)=\sum_{n=1}^\infty\frac{\sqrt n}{n!}x^n=x\sum_{n=0}^\infty \frac{x^n}{n!\sqrt {n+1}}=x\sum_{n=0}^\infty \frac{x^n}{n!}\frac1{\sqrt\pi}\int_0^\infty t^{-1/2}e^{-(n+1)t}dt$$ Performing the summation first, $$S(x)=\frac x{\sqrt\pi}\int_0^\infty e^{xe^{-t}-t}\frac{dt}{\sqrt t}$$ The asymptotics as $x\to0$ is evident. Expanding $e^{xe^{-t}}$ near $x=0$ , $$S(x)\sim\frac x{\sqrt\pi}\int_0^\infty e^{-t}\big(1+xe^{-t}+\frac{x^2}{2!}e^{-2t}+...\big)\frac{dt}{\sqrt t}$$ which, of course, after integration simply coincides with the initial sum. As $x\to\infty$ , $$S(x)\overset{s=e^{-t}}{=}\frac x{\sqrt\pi}\int_0^1\frac{e^{xt}}{\sqrt{\ln\frac1t}}dt\overset{t=1-s}{=}\frac{xe^x}{\sqrt\pi}\int_0^1\frac{e^{-xs}}{\sqrt{\ln\frac1{1-s}}}ds\overset{xs=t}{=}\frac{e^x}{\sqrt\pi}\int_0^x\frac{e^{-t}}{\sqrt{\ln\frac1{1-\frac tx}}}dt$$ The in
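The integral representation can be cross-checked numerically against the defining series; substituting $t=u^2$ removes the endpoint singularity and leaves an even, rapidly decaying integrand on which the trapezoid rule converges very fast. A sketch (function names mine, grid parameters arbitrary):

```python
import math

def S_series(x: float, nmax: int = 60) -> float:
    # partial sum of sum_{n>=1} sqrt(n)/n! x^n (tail negligible for small x)
    total, fact = 0.0, 1.0
    for n in range(1, nmax + 1):
        fact *= n
        total += math.sqrt(n) / fact * x ** n
    return total

def S_integral(x: float) -> float:
    # S(x) = x/sqrt(pi) * int_0^inf e^{x e^{-t} - t} dt/sqrt(t);
    # with t = u^2 this becomes 2x/sqrt(pi) * int_0^inf e^{x e^{-u^2} - u^2} du
    h, U = 1e-3, 8.0
    n = int(U / h)
    f = lambda u: math.exp(x * math.exp(-u * u) - u * u)
    total = 0.5 * (f(0.0) + f(U)) + sum(f(i * h) for i in range(1, n))
    return 2.0 * x / math.sqrt(math.pi) * total * h

for x in (0.5, 1.0, 2.0):
    assert abs(S_series(x) - S_integral(x)) < 1e-8
```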
|
|power-series|generating-functions|laplace-transform|closed-form|polylogarithm|
| 0
|
Can we make this subspace $\aleph_0$-dimensional?
|
Let $X$ be a compact Hausdorff space and $A\subseteq X$ a subspace of $X$ . Is it possible for the space $\{f\vert_A:f\in\mathcal{C}(X,\mathbb{R})\}$ to be $\aleph_0$ -dimensional as a $\mathbb{R}$ -vector subspace of $\mathcal{C}(X,\mathbb{R})$ ? I know that in this case $\mathcal{C}(X,\mathbb{R})$ is, at least, $\mathfrak{c}$ -dimensional as an infinite-dimensional Banach algebra. But I don't know if we can restrict it to make it $\aleph_0$ -dimensional somehow. Thanks!
|
I claim that your space is the same as $\mathcal C(\bar A, \Bbb R)$ . In particular the restriction map $f \mapsto f|_A$ from $\mathcal C(\bar A, \Bbb R)$ to your space is an isomorphism. The restriction map is a well-defined map to your space because if $f: \bar A \to \Bbb R$ is continuous then it extends to $X$ by the Tietze extension theorem. It's clearly surjective, as the restriction to $A$ of a continuous function on $X$ equals the restriction to $A$ of that function's restriction to $\bar A$ . It's injective, because two continuous functions to $\Bbb R$ that agree on $A$ must agree on $\bar A$ . It follows that your space cannot be $\aleph_0$ -dimensional, for the same reason $\mathcal C(X, \Bbb R)$ cannot be.
|
|general-topology|functional-analysis|continuity|banach-spaces|topological-vector-spaces|
| 1
|
Find the maximum number of possible real roots of the equation $a x^4+ b x^3+ x^2+ x+1=0$, where $a \ne 0$.
|
The question goes on like this: "Let $a$ and $b$ be real numbers such that $a \ne 0$ . Then the maximum number of possible real roots of the equation $ax^4+bx^3+x^2+x+1=0$ is equal to" My attempt: First I differentiated with respect to $x$ , but because of the variables $a$ and $b$ , I can't draw any direct conclusion. Also, by writing $-ax^4-bx^3=x^2+x+1$ , I could draw the graph of the RHS but not of the LHS. Please help with this problem in the shortest way possible.
|
I will try for a longer answer a bit later, but my first step would be to "depress" the quartic to remove the $x^{3}$ term using the transformation $x'=x+\frac{b}{4a}$ . From there, writing the depressed quartic as $x^{4}+cx^{2}+dx+e$ , I would try the quartic discriminant $\Delta=16c^{4}e-4c^{3}d^{2}-128c^{2}e^{2}+144cd^{2}e-27d^{4}+256e^{3}$
|
|calculus|algebra-precalculus|polynomials|
| 0
|
Find the maximum number of possible real roots of the equation $a x^4+ b x^3+ x^2+ x+1=0$, where $a \ne 0$.
|
The question goes on like this: "Let $a$ and $b$ be real numbers such that $a \ne 0$ . Then the maximum number of possible real roots of the equation $ax^4+bx^3+x^2+x+1=0$ is equal to" My attempt: First I differentiated with respect to $x$ , but because of the variables $a$ and $b$ , I can't draw any direct conclusion. Also, by writing $-ax^4-bx^3=x^2+x+1$ , I could draw the graph of the RHS but not of the LHS. Please help with this problem in the shortest way possible.
|
$x=0$ cannot be a root, so we can equivalently investigate the roots of its reciprocal polynomial $p(x)=x^4+x^3+x^2+bx+a$ . Now $p''(x) = 12x^2+6x+2$ doesn't have real roots, so at best $p'(x)$ has one real root, which means $p(x)$ can have at most two real roots. Now it suffices to note that $x^4+x^3+x^2+x-1$ is negative at the origin, so two real roots is clearly possible.
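The conclusion (exactly two real roots for $b=1$, $a=-1$) can be confirmed by counting sign changes of $p$ on an interval containing all real roots; Cauchy's bound gives $|x| \le 1 + \max|c_i| = 2$ here. A sketch:

```python
# Count sign changes of p(x) = x^4 + x^3 + x^2 + x - 1 on [-2, 2],
# which contains all real roots by Cauchy's root bound.
def p(x: float) -> float:
    return x**4 + x**3 + x**2 + x - 1

xs = [-2 + i * 4 / 4000 for i in range(4001)]
signs = [p(x) > 0 for x in xs]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
assert changes == 2  # one root in (-2, -1), one in (0, 1)
```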
|
|calculus|algebra-precalculus|polynomials|
| 1
|
Question about convergence in stochastic integration
|
In Hui-Hsiung Kuo's Introduction to Stochastic Integration, lemma 4.3.3, equations (4.3.11) says that since $f$ is assumed to be bounded, we have \begin{equation} \int_a^b |f(t,\cdot)-f(t-n^{-1}\tau,\cdot)|^2\ dt\to 0,\quad\text{almost surely}, \end{equation} as $n\to\infty$ . I have question about how this convergence holds, do we need to require that $f$ is left continuous?
|
No: here we only assume $f$ is bounded and in $L^{2}$ ; he just skipped a few steps. In particular, he skipped the proof that $$g(y):=\left(\int_{\Bbb R} |f(x-y)-f(x)|^{p}dx\right)^{1/p}$$ is a continuous function of $y$ . As the author mentioned, this proof is actually based on the original proof by Itô. In Øksendal's SDE book, in "Construction of the Itô Integral", he goes over the same proof. The particular step you are studying was about proving that if $g_{n}(t):=f\ast \psi_{n}(t)$ , where $\psi_{n}(t)$ is an approximation-to-the-identity mollifier, then $$E\left[\int |f(s)-g_{n}(s)|^{2}ds\right]\to 0.$$ And the logic is indeed just that $f\ast \psi_{n}\stackrel{L^{p}}{\to} f$ . For the $p=2$ case see Approximate $L^2$ function by convolving with mollifiers. For general $L^{p}$ see mollifiers. The logic is: Upper bound $$\|f-f\ast \psi_{n}\|_{p}^{p}\leq \int \|f(\cdot-y)-f\|_{p}\psi_{n}(y)dy\leq \int g(\epsilon y)\psi(y)dy,$$ where $g(y):=\|f(\cdot-y)-f\|_{p}$ . Prove that $g(y)$ is continuous and bound
|
|probability|stochastic-calculus|
| 1
|
Two definitions for $\mathcal F_{\infty}$
|
I have just started learning about Brownian motion $(B_t)_{t \geq 0}$ . The book I'm following defines a filtration $(\mathcal F_t)_{t \geq 0}$ by $\mathcal F_t := \sigma(B_r, 0 \leq r \leq t)$ . I think it's fairly clear that this is in fact a filtration. What I'm confused about is that the author defines $\mathcal F_{\infty} := \sigma(B_t, t \geq 0)$ . I was expecting a different definition, which I thought was standard, call it $\mathcal F_{\infty}' := \sigma\left( \bigcup_{t \geq 0} \mathcal F_t\right)$ . Do we in fact have $\mathcal F_{\infty} = \mathcal F_{\infty}'$ ? I think it should be true, but I have no idea how to show it. By definition, we have \begin{align*} \mathcal F_{\infty} = \sigma \left( \bigcup_{t \geq 0} \{ B_t^{-1}(A) \colon A \in \mathcal B(\mathbb R^d) \} \right), \end{align*} and on the other hand, \begin{align*} \mathcal F_{\infty}' = \sigma\left( \bigcup_{t \geq 0} \sigma\left( \bigcup_{s \leq t} \{ B_s^{-1}(A) \colon A \in \mathcal B(\mathbb R^d) \} \ri
|
$\mathcal F_{\infty} \subseteq \mathcal F'_{\infty}$ because the RHS is a $\sigma$-field and each $B_t$ is measurable with respect to it. $\mathcal F'_{\infty} \subseteq \mathcal F_{\infty}$ because each $\mathcal F_t$ is contained in $\mathcal F_{\infty}$ .
|
|real-analysis|probability-theory|measure-theory|brownian-motion|filtrations|
| 0
|
If $y=\frac{\sin kx}{1+\cos kx }$ where $k$ is a positive integer, show that $\sin kx\frac{d^2y}{dx^2}=k^2y^2$
|
If $$y=\frac{\sin kx}{1+\cos kx }$$ where $k$ is a positive integer, show that $$\sin kx\frac{d^2y}{dx^2}=k^2y^2$$ My attempt: $$y+y\cos kx=\sin kx$$ $$\frac{dy}{dx}+\left(\frac{dy}{dx}\cos kx -yk\sin(kx)\right)=k\cos(kx)$$ $$\frac{d^2y}{dx^2}+\frac{d^2y}{dx^2}\cos(kx)-k\frac{dy}{dx}\sin(kx)-yk^2\cos(kx)+\frac{dy}{dx}k\sin(kx)=-k^2\sin(kx)$$ $$\frac{d^2y}{dx^2}(1+\cos(kx))=-k^2\sin(kx)+yk^2\cos(kx)$$ $$\frac{d^2y}{dx^2}=\frac{yk^2\cos(kx)-k^2\sin(kx)}{1+\cos(kx)}$$ $$\sin(kx)\frac{d^2y}{dx^2}=\frac{\sin^2kx}{1+\cos(kx)}\cdot\frac{k^2\cos(kx)}{1+\cos(kx)}-\frac{k^2\sin^2(kx)}{1+\cos(kx)}$$ Not sure where I went wrong; I can't get $k^2y^2$ .
|
I think you have a sign error early on. Upon fixing that, you might get something correct that seems wrong until you apply a half angle identity. You might use it at the end of your proof, but starting things off with the substitution saves time. $y=\frac{\sin kx}{1+ \cos kx}=\frac{\sqrt{1-\cos kx}}{\sqrt{1+\cos kx}}\cdot \frac{\sqrt{1+\cos kx}}{\sqrt{1+\cos kx}}=\tan (kx/2)$ Prove $\sin kx \cdot y''=k^2y^2$ $dy/dx=(k/2)\sec^2(kx/2)$ $d^2y/dx^2=(k^2/4)\cdot 2 \cdot \sec^2{(kx/2)}\tan(kx/2)=(k^2/2)\sec^2{(kx/2)}\tan(kx/2)$ $\sin (kx) y'' =2\sin(kx/2)\cos (kx/2) (k^2/2)\sec^2(kx/2)\tan(kx/2)=k^2\tan^2(kx/2)=k^2y^2$
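A quick finite-difference check of the identity $\sin(kx)\,y'' = k^2 y^2$ (sample points, $k$, and step size are arbitrary):

```python
import math

k = 3.0
y = lambda x: math.sin(k * x) / (1.0 + math.cos(k * x))  # = tan(kx/2)

def second_derivative(f, x: float, h: float = 1e-4) -> float:
    # central finite difference for f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

for x in (0.2, 0.5, 0.9):
    lhs = math.sin(k * x) * second_derivative(y, x)
    rhs = (k * y(x)) ** 2  # k^2 y^2
    assert abs(lhs - rhs) < 1e-5 * max(1.0, abs(rhs))
```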
|
|calculus|derivatives|
| 0
|
If there is a bijection from a subset $S$ of a group $G$ onto $X$ then $F(X)$ is isomorphic to $\langle S \rangle$, where $F(X)$ is the free group on $X$
|
Let $\phi: G \to F(X)$ be a group homomorphism suppose that $\phi$ maps a subset $S$ of $G$ bijectively onto $X$ . Then $F(X) $ is isomorphic to $\langle S\rangle$ , where $F(X)$ free group with basis $X$ . Idea of the proof : $\phi:S\to X$ is a bijection . $\phi^{-1}:X\to S\subset G$ is injective . Then by definition of free group there exist an unique group homomorphism $\bar{\phi}^{-1}: F(X)\to G$ which is also injective. I am unable to understand why the induced homomorphism is also injective?
|
Because the inverse of a bijective homomorphism is a homomorphism, and a homomorphism is injective if and only if $\ker \phi^{-1} = \{ e \}$ . If $\phi^{-1}(x)=e$ , then $x= \phi (e) =e$ , so the kernel of the inverse of an injection is a singleton, which implies injectivity of the inverse.
|
|group-theory|free-groups|
| 0
|
If there is a bijection from a subset $S$ of a group $G$ onto $X$ then $F(X)$ is isomorphic to $\langle S \rangle$, where $F(X)$ is the free group on $X$
|
Let $\phi: G \to F(X)$ be a group homomorphism suppose that $\phi$ maps a subset $S$ of $G$ bijectively onto $X$ . Then $F(X) $ is isomorphic to $\langle S\rangle$ , where $F(X)$ free group with basis $X$ . Idea of the proof : $\phi:S\to X$ is a bijection . $\phi^{-1}:X\to S\subset G$ is injective . Then by definition of free group there exist an unique group homomorphism $\bar{\phi}^{-1}: F(X)\to G$ which is also injective. I am unable to understand why the induced homomorphism is also injective?
|
The idea with that proof seems to be noting that $\phi \circ \bar{\phi}^{-1}$ is the identity $F(X) \to F(X)$ . This is just because each element of $X$ maps to itself and the unique group homomorphism that does that is the identity.
|
|group-theory|free-groups|
| 1
|
Question regarding formula for range of quadratic function
|
While reading through my textbook I saw 2 formulas for the range of quadratic functions, as follows: $$\text{When } a > 0 \text{ the range is } \left[\frac{-D}{4a}, \infty\right)$$ $$\text{When } a < 0 \text{ the range is } \left(-\infty, \frac{-D}{4a}\right]$$ Quite confused where these formulas come from; could anyone point me in the right direction?
|
When $a>0$ , it is an upward parabola, so the $y$ coordinate of the vertex is the minimum value and the range is $\left[-\frac{D}{4a},\infty\right)$ . When $a<0$ , it is a downward parabola, so the $y$ coordinate of the vertex is the maximum value and the range is $\left(-\infty,-\frac{D}{4a}\right]$ .
|
|functions|quadratics|
| 0
|
Calculating $\mathbb{E}[2\sin (\pi Z)|\cos (\pi Z)]$ when $Z$ is uniform on $[0,2]$
|
I am trying to calculate the following conditional expectation. Let Z be a uniformly distributed random variable on the closed interval $[0, 2]$ . Define $X = \cos(\pi Z)$ and $Y = 2\sin(\pi Z)$ . Calculate $\mathbb{E}[Y|X] = \mathbb{E}[Y|\sigma(X)]$ . I have tried multiple approaches, but don't know how to proceed since I am only familiar with the standard techniques for calculating the conditional expectation, i.e. when Y is independent of X, Y is measurable with respect to $\sigma(X)$ or if a joint density $f(x,y)$ exists. The first two parts don't apply here and since X and Y aren't independent I don't know how to calculate the joint density. Any help is greatly appreciated. Thank you very much!
|
I have an upcoming exam, so I wanted to tackle this question as an exercise. We use the second approach suggested by the accepted answer. Let $X=\cos(\pi Z)$ and $Y=\sin(\pi Z)$ where $Z \sim \text{Uniform}([0,2])$ (dropping the factor $2$ from the original $Y$ , which does not affect whether the conditional expectation vanishes). Let $h(X)$ be a version of the conditional expectation $\mathbb{E}[Y|X]$ . Let $u : \mathbb{R} \rightarrow \mathbb{R}$ be bounded and measurable (or non-negative and measurable, it does not matter for this approach). It must hold that $$ \mathbb{E}[Y u(X)] = \mathbb{E}[h(X) u(X)]$$ With this in mind, we calculate: $$ \begin{align*} \mathbb{E}[Yu(X)] &= \frac{1} {2\pi}\int_0^{2\pi}\sin(x)u(\cos(x)) \; dx \\\\ &=\frac{1}{2\pi}\left( \int_0^{\pi}\sin(x)u(\cos(x)) \;dx + \int_{\pi}^{2\pi}\sin(x)u(\cos(x)) \;dx \right) \\\\ &=\frac{1}{2\pi}\left( \int_0^{\pi}\sin(x)u(\cos(x)) \;dx + \int_0^\pi\sin(2\pi - x)u(\cos(2\pi - x)) \;dx \right) \\\\ &=\frac{1}{2\pi}\left( \int_0^{\pi}\sin(x)u(\cos(x)) \;dx + \int_0^\pi-\sin(x)u(\cos(x)) \;dx \right) \\\\ &=0 \end{align*} $$ Since this holds for
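The key computation, $\mathbb{E}[Y u(X)] = 0$ for every test function $u$, can be checked by quadrature; the midpoint rule even cancels the two half-intervals exactly by the symmetry $z \mapsto 2-z$. A sketch (the function name `expectation` is mine, test functions arbitrary):

```python
import math

def expectation(u, n: int = 20000) -> float:
    """Midpoint-rule approximation of E[ sin(pi Z) u(cos(pi Z)) ] for Z ~ U[0, 2]."""
    h = 2.0 / n
    return sum(
        math.sin(math.pi * z) * u(math.cos(math.pi * z))
        for z in (h * (i + 0.5) for i in range(n))
    ) * h / 2.0

for u in (lambda x: x * x, lambda x: math.exp(x), lambda x: abs(x)):
    assert abs(expectation(u)) < 1e-9
```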
|
|probability-theory|conditional-probability|conditional-expectation|density-function|
| 0
|
Find the maximum value of $m^2+n^2$ if $(m^2-mn-n^2)^2=1$
|
Given that the integers $m$ and $n$ in the set $A=\left\{1,2,3,\ldots,2024\right\}$ satisfy $(m^2-mn-n^2)^2=1$ , find the maximum possible value of $m^2+n^2$ . My effort: We have $m^2-mn-n^2=\pm 1$ . Case $1$ : If $m^2-mn-n^2=1 \Rightarrow m^2-mn-(n^2+1)=0$ . Now the discriminant is $$D=n^2+4(n^2+1)=k^2, k \in \mathbb{Z}$$ $$ \implies 5n^2+4=k^2$$ I am not able to proceed now. Same problem with Case $2$ .
|
Continuing from your relaxed equation, $m^2 - mn - n^2 = \pm 1$ , i.e. $m^2 - mn - n^2 \pm 1 = 0$ , we get $m = \frac{n \pm \sqrt {5n^2 \pm 4}}{2}$ . As a property, $x$ is a Fibonacci number if and only if at least one of $5x^2 + 4$ and $5x^2 - 4$ is a perfect square. So, for a valid $m$ to exist, $5n^2 \pm 4$ must be a perfect square, and thus according to the property, $n$ must be a Fibonacci number. What are the possible values of $n$ if it's a Fibonacci number? $1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597$ (since all of these lie in the range $1,2,\ldots,2024$ ). Putting $n = 1597$ into the solution gives $m = 2584$ , which isn't possible since $m$ lies outside the permissible range, i.e. $(1,2,\ldots,2024)$ . Putting $n = 987$ into the solution gives $m = 1597$ , and these are the maximum values of $n$ and $m$ that you can achieve, since you began with the best values of $n$ . Hence, the answer to your question is $m^2 + n^2 = 1597^2 + 987^2$ .
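The search can be automated: for each $n$, the quadratic $m^2 - mn - (n^2 \pm 1) = 0$ has at most one positive root $m = (n + \sqrt{5n^2 \pm 4})/2$ (the other root is $\le 0$, since the product of the roots is $-(n^2 \pm 1)$), so looping over $n$ enumerates all solutions in range. A sketch:

```python
from math import isqrt

# For each n, solve m^2 - m n - (n^2 ± 1) = 0 for the nonnegative root m,
# keeping pairs with both m and n in {1, ..., 2024}.
best = 0
for n in range(1, 2025):
    for d in (4, -4):
        s = 5 * n * n + d
        r = isqrt(s)
        if r * r == s and (n + r) % 2 == 0:
            m = (n + r) // 2
            if 1 <= m <= 2024:
                assert (m * m - m * n - n * n) ** 2 == 1  # sanity check
                best = max(best, m * m + n * n)

assert best == 1597**2 + 987**2  # 3524578
```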
|
|elementary-number-theory|contest-math|quadratics|
| 0
|
A condition for matrices to commute
|
Recently found an exercise for high school students as follows: Let $A,B$ be $2\times 2$ matrices with real entries and suppose $A^2=B^2$ , $\operatorname{tr}(A)\neq0$ , $\operatorname{tr}(B)\neq0$ . Show that $AB=BA$ . Since it only ask the case for $2\times 2$ matrix and hence we can simply bash out all the equations. My problem. Does this hold for $n\times n$ matrices, or does it have a generalization?
|
Here is a counterexample in $\mathbb{R}^4$ (or $F^4$ for any field $F$ with characteristic different from 2) \begin{equation} A: (x, y, z, t)\mapsto (x, y, z, -t) \end{equation} \begin{equation} B: (x, y, z, t)\mapsto (z, t, x, y) \end{equation} Then $A^2=B^2=I$ but $A B \not = B A$ because \begin{equation} A B: (x, y, z, t)\mapsto (z, t, x, -y) \end{equation} \begin{equation} B A: (x, y, z, t)\mapsto (z, -t, x, y) \end{equation} For a more general example, let $U$ and $V$ be non-collinear column vectors in $F^n$ such that $V^T U = -2$ . Let $A = I + U V^T$ and $B = A^T = I + V U^T$ . Then $A^2 = B^2 = I$ and $A B \not = B A$ . One also has $\text{tr} A = \text{tr} B = n - 2$ , so if $n\not = 2$ , the non-zero trace assumption is satisfied.
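The explicit $4\times 4$ example can be verified mechanically; a sketch treating $A$ and $B$ as maps on 4-tuples:

```python
# The two linear involutions on R^4 from the counterexample, acting on 4-tuples.
A = lambda v: (v[0], v[1], v[2], -v[3])
B = lambda v: (v[2], v[3], v[0], v[1])

vs = [(1, 2, 3, 4), (0, -1, 0, 2)]
for v in vs:
    assert A(A(v)) == v and B(B(v)) == v  # A^2 = B^2 = I
assert A(B(vs[0])) != B(A(vs[0]))         # AB != BA

# AB and BA agree with the formulas in the answer:
assert A(B((1, 2, 3, 4))) == (3, 4, 1, -2)   # (z, t, x, -y)
assert B(A((1, 2, 3, 4))) == (3, -4, 1, 2)   # (z, -t, x, y)
```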
|
|linear-algebra|matrices|
| 0
|
Divisors Sum Related Interesting Approximate Relation
|
Working on the topic of efficient divisor-sum calculation, I accidentally discovered one interesting relation which is accurate up to order $10^{-17}$ . $$\sum_{i=1}^{\infty}{\frac{\sigma(i)}{e^{i}}}\approx\frac{\pi^2}{6}-\frac{1}{2}+\frac{1}{24}$$ To make things clearer, look at the numbers below: $$\sum_{i=1}^{\infty}{\frac{\sigma(i)}{e^{i}}}=1.1866007335148928206...$$ $$\frac{\pi^2}{6}-\frac{1}{2}+\frac{1}{24}=1.1866007335148931031...$$ I just would like to share this nice relation, check whether you know some paper about this, and ask whether there are some similar known relations for other number-theoretic functions :) Just to clarify, this is not the only relation, but one of many; for example: $$\sum_{i=1}^{\infty}{\frac{\sigma(i)}{\sqrt{e^i}}}\approx\frac{2\pi^2}{3}-1+\frac{1}{24}$$ EDITED: According to @Greg Martin's comment, I just realized that this is an example of a Lambert series. Let's assume now $s>0$ . So the general rule is $$\sum_{i=1}^{\infty}{\frac{\sigma(i)}{e^{si}}}=\sum_{i=1}^{\infty}{\frac{i}{e^{si}-1}}$$
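The claimed relation sits below double precision (the discrepancy is of order $10^{-17}$), so a floating-point check should agree to machine accuracy. A sketch using a divisor-sum sieve (cutoff $N$ arbitrary; the terms $\sigma(i)e^{-i}$ decay far below machine epsilon long before it):

```python
import math

N = 200

# divisor sums sigma(1..N) via a sieve
sigma = [0] * (N + 1)
for d in range(1, N + 1):
    for mult in range(d, N + 1, d):
        sigma[mult] += d

lhs = sum(sigma[i] * math.exp(-i) for i in range(1, N + 1))
rhs = math.pi**2 / 6 - 0.5 + 1.0 / 24
assert abs(lhs - rhs) < 1e-12
```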
|
Using, as you did, Euler–Maclaurin summation, the formula (in the limit $n\to\infty$ ) is a bit more complex. For example, the expansion around $s=0$ leads to $$\sum_{i=1}^{\infty}{\frac{i}{e^{si}-1}}=\frac{\pi^2}{6s^2}-\frac{1}{2s}+\frac{1}{24}+$$ $$\frac{s^9}{632282112}\left(1-\frac{29332259 s^2}{62107500}+\frac{8569 s^4}{117000}-\frac{19774139 s^6}{2705040000}+O\left(s^8\right) \right)$$ Edit: After doing the Euler–Maclaurin expansion, assuming $s>0$ and $n \to \infty$ , what is left is $$\color{blue}{\frac{\text{Li}_2\left(e^{-s}\right)}{s^2}-\frac{\log (1-e^{-s})}{s}+\frac{\sum_{k=0}^7 e^{k s}\, P_k(s)}{1209600 \left(e^s-1\right)^8}}$$ where the polynomials are of degree $7$ . Starting from the constant term, the lists of coefficients are $$\left( \begin{array}{cc} k & \text{coefficients of } P_k(s) \\ 0 & \{-504000\} \\ 1 & \{3528000,100800,-5040,-1680,200,40,-7,-1\} \\ 2 & \{-10584000,-604800,20160,0,1600,960,-392,-120\} \\ 3 & \{17640000,1512000,-25200,15120,-3800,600,-1715,-119
|
|real-analysis|number-theory|approximation|divisor-sum|
| 1
|
Question regarding formula for range of quadratic function
|
While reading through my textbook I saw 2 formulas for the range of quadratic functions, as follows: $$\text{When } a > 0 \text{ the range is } \left[\frac{-D}{4a}, \infty\right)$$ $$\text{When } a < 0 \text{ the range is } \left(-\infty, \frac{-D}{4a}\right]$$ Quite confused where these formulas come from; could anyone point me in the right direction?
|
Write, assuming $a\neq0$ , $$ax^2+bx+c=a\left(x^2+\frac bax+\frac ca\right)=a\left(x+\frac{b}{2a}\right)^2+\frac{4ac-b^2}{4a}=aP(x)-\frac{D}{4a}$$ where $P(x)$ is a function that has range $[0,\infty)$ . When $a>0$ , the range of $aP(x)$ is also $[0,\infty)$ (why?) and then the range of $aP(x)-D/4a$ is $\left[-\frac{D}{4a},\infty\right)$ (why?). Similarly, when $a<0$ , the range of $aP(x)$ is $(-\infty,0]$ (why?) and then the range of $aP(x)-D/4a$ is $\left(-\infty,-\frac{D}{4a}\right]$ (why?). Hope this helps. :)
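The completed-square formula can be spot-checked numerically for a sample $a>0$ (the coefficients are arbitrary):

```python
# Numeric check: the minimum of a x^2 + b x + c (a > 0) equals -D/(4a), D = b^2 - 4ac.
a, b, c = 2.0, -3.0, 1.0
q = lambda x: a * x * x + b * x + c
D = b * b - 4 * a * c
sampled_min = min(q(-5 + i * 1e-3) for i in range(10001))  # grid on [-5, 5]
assert abs(sampled_min - (-D / (4 * a))) < 1e-6
```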
|
|functions|quadratics|
| 1
|
Primitive element of a finite field whose powers do not lie inside the prime subfield
|
Let $p$ be a prime, and consider the finite field $\mathbb{F}_p$ . Fix any $n\ge1$ , and consider the field extension $\mathbb{F}_{p^n}/\mathbb{F}_p$ . If $\alpha\in\mathbb{F}_{p^n}$ is a multiplicative generator, then $\alpha$ is also a primitive element, that is, $\mathbb{F}_{p^n}=\mathbb{F}_p(\alpha)$ , and further, the set $\{1,\alpha,\ldots,\alpha^{n-1}\}$ is an $\mathbb{F}_p$ -linear basis of $\mathbb{F}_{p^n}$ . Therefore, we can conclude that the elements $\alpha,\ldots,\alpha^{n-1}\not\in\mathbb{F}_p$ . In the above statement, I could conclude 'non-membership' in $\mathbb{F}_p$ by using linear independence over $\mathbb{F}_p$ . Is a converse of this true, that is, must 'non-membership' imply linear independence over $\mathbb{F}_p$ ? Specifically, for $n,m\ge1$ , if $\alpha\in\mathbb{F}_{p^n}$ is a multiplicative generator such that the elements $\alpha,\ldots,\alpha^{m-1}\not\in\mathbb{F}_p$ , then must $m\le n$ ?
|
Let $p=2$ and $n=3$ , and let $\alpha$ be a generator of the multiplicative group, so that $\alpha$ has order $7$ . Then none of $\alpha,\alpha^2,\alpha^3,\alpha^4,\alpha^5,\alpha^6$ lie in $\mathbb{F}_2$ . Yet $7>3$ .
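The example is small enough to verify by machine, representing $\mathbb{F}_8 = \mathbb{F}_2[x]/(x^3+x+1)$ with elements as 3-bit integers (the helper name `gf8_mul` is mine):

```python
# F_8 = F_2[x]/(x^3 + x + 1); alpha = x, encoded as the bitmask 0b010.
MOD = 0b1011  # x^3 + x + 1

def gf8_mul(a: int, b: int) -> int:
    """Carry-less multiplication in F_8 with reduction by x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

alpha = 0b010
powers = [1]
for _ in range(7):
    powers.append(gf8_mul(powers[-1], alpha))

# alpha generates the multiplicative group of order 7 ...
assert powers[7] == 1 and len(set(powers[:7])) == 7
# ... and none of alpha^1, ..., alpha^6 lies in the prime field F_2 = {0, 1}
assert all(p not in (0, 1) for p in powers[1:7])
```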
|
|field-theory|galois-theory|finite-fields|extension-field|galois-extensions|
| 1
|
prove a holomorphic function is injective
|
Let $f: D(0,1)\to\mathbb{C}$ be holomorphic s.t $\Re (f'(z))>0\quad\forall z\in D(0,1)$ . Prove that $f$ is injective. My attempt: Since $f'(z)=\dfrac{\partial u}{\partial x}+i\dfrac{\partial v}{\partial x}=\dfrac{\partial v}{\partial y}-i\dfrac{\partial u}{\partial y}$ (Cauchy Riemann), we have $\dfrac{\partial u}{\partial x}>0$ and $\dfrac{\partial v}{\partial y}>0$ . I don't know how to continue. Could someone help me? Thanks in advance
|
Following kobe's idea, I give the proof as follows. For $z\ne w$ , by the Newton–Leibniz formula, we have $$ \int_0^1 f'(tz+(1-t)w)\, dt=\frac{f(tz+(1-t)w)}{z-w}\bigg|_{0}^{1} =\frac{f(z)-f(w)}{z-w}.$$ If $f(z)=f(w)$ , then $$\int_0^1 f'(tz+(1-t)w)\, dt=0,$$ so $$\Re\left(\int_0^1 f'(tz+(1-t)w)\, dt\right)=\int_0^1 \Re(f'(tz+(1-t)w))\, dt =0.$$ This cannot happen since $\Re(f'(z))>0$ .
|
|complex-analysis|
| 0
|
Limits of generalized inverse function
|
For a nondecreasing function $F:\mathbb{R}\to\mathbb{R}$ the generalized inverse function is defined as $$F^{-}(y):=\inf\{x\in\mathbb{R}:F(x)\ge y\},$$ where $F(-\infty):=\inf\{F(x):x\in\mathbb{R}\}$ and $F(\infty):=\sup\{F(x):x\in\mathbb{R}\}$ . Is there anything that can be said about the limits $\lim\limits_{y\to F(-\infty)}F^{-}(y)$ and $\lim\limits_{y\to F(\infty)}F^{-}(y)$ ?
|
Since $F$ is nondecreasing we have that $F(x_n)$ is nonincreasing for any nonincreasing sequence $x_n$ . It is also clear that $$\tag{1} F(-\infty)=\inf\{F(x):x\in\mathbb R\}=\lim_{n\to\infty} F(x_n) $$ for a sequence $x_n$ that descends to $-\infty\,.$ By definition of $F^-\,,$ $$\tag{2} F^-(F(x))\le x\,. $$ Using the sequence from (1) it follows now that $$\tag{3} \lim_{n\to\infty}F^-(F(x_n))\le\lim_{n\to\infty}x_n=-\infty\,. $$ Therefore, $$\tag{4} \lim_{y\to F(-\infty)}F^{-}(y)=-\infty\,. $$ Conversely, it is not true that $$\tag{5} \lim_{y\to F(+\infty)}F^{-}(y)=+\infty\, $$ holds: When $F$ assumes a maximum at $x_*$ then $F(+\infty)=F(x_*)$ and $$\tag{6} F^-(F(+\infty))=F^-(F(x_*))\le x_*\,. $$ Therefore, for any sequence $y_n\uparrow F(+\infty)$ we have $F^-(y_n)\le x_*
|
|real-analysis|probability-theory|
| 0
|
Characteristic function of a random variable by Fourier transform
|
this is the characteristic function in probability theory $$\phi(u)=\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}ux}f(x)\mathrm{d}x$$ Let an asset price $S_t$ (e.g. a stock) be modeled with a geometric Brownian motion: $$\mathrm{d}S_t=rS_t\mathrm{d}t+\sigma S_t\mathrm{d}W_t$$ where $W_t$ is a Wiener process, $r$ the risk-free rate and $\sigma$ the volatility. Consider a European call option written on $S_t$ with strike $K$ and maturity $T$ . We apply the following transformation: $$x=\log{(\frac{S_0}{K})}$$ and $$y=\log{(\frac{S_T}{K})}$$ Show that the characteristic function of $y$ is given by $$\phi_y(u)=\mathrm{e}^{\mathrm{i}u(x+(r-\frac{1}{2}\sigma^2)T)-\frac{1}{2}T\sigma^2u^2}$$ Hint: You may use the fact that the characteristic function of a standard normal distribution $Z$ is given by $\phi_Z(u)=e^{-\frac{1}{2}u^2}$ . My problem is I don't even know where to start. Can anyone give some tips up to the point where I can derive this?
|
I'll admit that I'm not familiar with most if not all stochastic finance terminology. So, I'll assume that $K$ and $S_{0}$ are constants. See that for $f(x)=\log(x)$ , by Ito's Lemma, you get, \begin{align}\log(S_{t})-\log(S_{0})&=\int_{0}^{t}f'(S_{s})\,dS_{s}+\frac{1}{2}\int_{0}^{t}\sigma^{2}S_{s}^{2}\cdot f''(S_{s})\,ds\\\\ &=\int_{0}^{t}\frac{1}{S_{s}}(rS_{s}\,ds+\sigma S_{s}dW_{s})+\int_{0}^{t}\frac{1}{2}\sigma^{2}S_{s}^{2}\cdot\frac{-1}{S_{s}^{2}}\,ds\\\\ &=\int_{0}^{t}(r\,ds+\sigma dW_{s})-\frac{\sigma^{2}}{2}\int_{0}^{t}\,ds\\\\ &=\sigma W_{t}+t(r-\sigma^{2}/2) \end{align} Hence you have $\log(S_{t}/K)=\log(S_{0}/K)+\sigma W_{t}+t(r-\sigma^{2}/2)$ Thus, you have \begin{align}\mathbb{E}(\exp(iu y))&=\exp\bigg(iux+iuT(r-\frac{\sigma^{2}}{2})\bigg)\mathbb{E}(\exp(iu\sigma W_{T}))\\\\ &=\exp\bigg(iux+iuT(r-\frac{\sigma^{2}}{2})-\frac{T\sigma^{2}u^{2}}{2}\bigg)\end{align} by using the Fourier transform for the normal distribution.
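The last step, $\mathbb{E}[\exp(iu\sigma W_T)] = \exp(-\sigma^2 u^2 T/2)$, can be cross-checked by integrating against the $N(0,T)$ density of $W_T$ (parameters arbitrary, function name `char_fn` mine):

```python
import math, cmath

sigma, T, u = 0.3, 2.0, 1.7

def char_fn(u: float) -> complex:
    """Midpoint quadrature of E[exp(i u sigma W_T)] with W_T ~ N(0, T)."""
    h, L = 1e-3, 10.0  # integrate over w in [-L*sqrt(T), L*sqrt(T)]
    s = math.sqrt(T)
    total = 0.0 + 0.0j
    for i in range(int(2 * L / h)):
        w = (-L + (i + 0.5) * h) * s
        density = math.exp(-w * w / (2 * T)) / math.sqrt(2 * math.pi * T)
        total += cmath.exp(1j * u * sigma * w) * density * h * s
    return total

assert abs(char_fn(u) - cmath.exp(-sigma**2 * u**2 * T / 2)) < 1e-9
```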
|
|probability|stochastic-calculus|finance|stochastic-differential-equations|
| 1
|
Does the sum of two convex functions with global minimizers have a global minimizer?
|
If $f(x)$ and $g(x)$ are two convex functions defined in $\mathbb{R}^n \to \mathbb{R}$ . Suppose $f(x)$ has a global minimizer $x_1 \in \mathbb{R}^n$ but not necessarily the unique global minimizer. Similarly, $g(x)$ has a global minimizer $x_2 \in \mathbb{R}^n$ but not necessarily the unique global minimizer. Does there exist a global minimizer in $\mathbb{R}^n$ for the function $f(x) + g(x)$ ?
|
I would expect that the claim is true for $n=1$ , or if $f$ and $g$ are quadratic functions, or if one of $f,g$ is strongly convex. Here is a counterexample for the general case: $$ f(x,y) := \max(y,0), \quad g(x,y) := \max(e^x - y,0). $$ Both functions are convex. In addition, $f\ge0$ , $g\ge0$ , $f(0,0)=0$ , $g(0,1)=0$ . And $f$ and $g$ have global minimizers. The infimum of $f+g$ is zero: $$ f(-n,0) + g(-n,0)= e^{-n} \to 0. $$ However, zero is not attained: $f(x,y) + g(x,y)=0$ implies $y\le0$ and $e^x\le y$ , which is impossible. If one raises $f$ and $g$ to a power $p\in \mathbb N$ then one gets $(p-1)$ -times continuously differentiable functions with the same property.
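The counterexample is easy to probe numerically along the escape path $(x,y)=(-n,0)$; a sketch:

```python
import math

f = lambda x, y: max(y, 0.0)
g = lambda x, y: max(math.exp(x) - y, 0.0)

# Along (x, y) = (-n, 0) the sum tends to 0 but never reaches it:
vals = [f(-n, 0.0) + g(-n, 0.0) for n in range(1, 30)]
assert all(v > 0 for v in vals)                    # infimum 0 is never attained
assert all(a > b for a, b in zip(vals, vals[1:]))  # strictly decreasing to 0
assert vals[-1] < 1e-12
```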
|
|calculus|functions|optimization|convex-analysis|
| 1
|
If n = aaaaaaaaabcd how many of them are divisible by 45 such that $a \neq 0$ and a,b,c,d are not necessarily distinct.
|
Let $n = aaaaaaaaabcd$ be a $12$-digit number divisible by $45$ where the digits $a, b, c, d$ are not necessarily distinct and $a \neq 0$ . How many such numbers are there? My approach: Since $a \in \{1,2,3,4,5,6,7,8,9\}$ , $a$ has $9$ choices. $b,c \in \{0,1,2,3,4,5,6,7,8,9\}$ , so $b,c$ have $10$ choices each. $d \in \{0,5\}$ , therefore $d$ has only $2$ choices. Therefore the total number of $12$-digit numbers of the form $aaaaaaaaabcd$ which are divisible by $45$ is $2\cdot10\cdot10\cdot9 = 1800$ . This answer is wrong. How can I solve this?
|
$b,c\in\{0,1,2,3,4,5,6,7,8,9\}$ so $b,c$ have $10$ choices each This is where you went wrong. Consider the divisibility test for an integer by $9$ , which is that the sum of its digits must itself be a multiple of $9$ . Clearly it doesn't matter what $a$ is; there are $9$ of them in our template, and something added to itself $9$ times is always going to result in a multiple of $9$ . This means that it's down to $b,c,d$ whether or not the final number is divisible by $9$ ; i.e. we need $b+c+d=9k$ for some integer $k$ . Now I want you to think of divisibility by $5$ . What characteristic do all numbers divisible by $5$ share? If you thought "well, they all end in $0$ or $5$ ", great! That's the correct answer. And clearly you know it already, given that you've identified that $d\in\{0,5\}$ . But ask yourself this: if $b+c+d$ is a multiple of $9$ — and $d=0$ , say — what does that mean for $b$ and $c$ ? Well, it means that the sum of $b$ and $c$ itself has to be a multiple of $9$ , doesn't
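A brute-force count over all $9 \cdot 10^3$ digit choices confirms this reasoning (and that $1800$ overcounts, because $b$ and $c$ are constrained jointly rather than freely):

```python
# Brute force over all 12-digit numbers of the form aaaaaaaaabcd.
count = 0
for a in range(1, 10):
    for b in range(10):
        for c in range(10):
            for d in range(10):
                n = int(str(a) * 9 + f"{b}{c}{d}")
                if n % 45 == 0:
                    count += 1
assert count == 207
```

The tally matches the digit-sum argument: with $d=0$ one needs $b+c \in \{0, 9, 18\}$ ($12$ pairs), and with $d=5$ one needs $b+c \in \{4, 13\}$ ($11$ pairs), giving $23$ per choice of $a$ and $23 \cdot 9 = 207$ in total.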
|
|combinatorics|elementary-number-theory|
| 1
|
Multivariable Calculus - Exercise about Lagrange multipliers
|
I need your help with the following exercise: The sphere $x^{2}+y^{2}+z^{2}=4$ is made of certain material whose density is given by $\rho(x,y,z)=y^{2}+xy+2$ . I need to find the extreme values of $\rho(x,y,z)$ using the Lagrange multiplier method. If $g(x,y,z) = x^{2}+y^{2}+z^{2}-4=0$ , $\rho_{x}(x,y,z) = y$ , $\rho_{y}(x,y,z)=2y+x$ , and $\rho_{z}(z,y,z) =0$ , then $$ \begin{align}y &= 2x\lambda \tag{EQ 1} \\ 2y+x &= 2y\lambda \tag{EQ 2} \\ 0 &= 2z\lambda \tag{EQ 3} \\ x^{2}+y^{2}+z^{2}-4&= 0 \tag{EQ 4} \end{align} $$ I'm having trouble solving this system of equations because I'm getting $\lambda=0$ from the third equation. What does it mean? Or what am I doing wrong?
|
OP has derived these: $$ \begin{align}y &= 2x\lambda \tag{EQ 1} \\ 2y+x &= 2y\lambda \tag{EQ 2} \\ 0 &= 2z\lambda \tag{EQ 3} \\ x^{2}+y^{2}+z^{2}-4&= 0 \tag{EQ 4} \end{align} $$ With EQ 3 , we get [CASE 1] $\lambda = 0$ or [CASE 2] $z=0$ . [CASE 1] then gives $y=0$ , $x=0$ , $z=\pm2$ . At these points, $\rho=0+0+2=2$ . [CASE 2] gives something else. Plug EQ 1 into EQ 2 to get $2 \times 2x\lambda + x = 2 \times 2x\lambda \times \lambda$ , hence $4 \lambda + 1 = 4 \lambda^2$ , so $\lambda = [1 \pm \sqrt{2}]/2$ . With that, we then get the $x$ and $y$ values (due to the quadratic equation and the $2$ values for $\lambda$ , we get four solution points involving $\pm \sqrt{2}$ ). We then evaluate $\rho$ , which is $4$ (at all solution points) : [[ generated by Wolfram Online Tool ]] OVERVIEW : [CASE 1] is the extremum $2$ , which occurs at the top and bottom of the sphere. [CASE 2] is the extremum $4$ , which occurs symmetrically on the $XY$ plane. ERRATA : The above corresponds to $\rho=x^2+xy+2$ , which has a typo. This corresponds to t
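As a cross-check, here is a grid search (a sketch, independent of the algebra above) for the extremes of the density as stated in the question, $\rho=y^2+xy+2$ ; it finds $4\pm2\sqrt2\approx 6.83$ and $1.17$ on the equator $z=0$ , with $\rho=2$ at the poles, in line with the ERRATA's remark that the tabulated value $4$ belongs to a typo'd density:

```python
import math

# Sample the sphere x^2 + y^2 + z^2 = 4 on a parameter grid and track
# the extreme values of rho(x, y, z) = y^2 + x*y + 2 (z does not enter rho).
N = 400
lo, hi = math.inf, -math.inf
for i in range(N + 1):
    th = math.pi * i / N              # polar angle, includes the equator th = pi/2
    for j in range(2 * N):
        ph = math.pi * j / N          # azimuthal angle over [0, 2*pi)
        x = 2 * math.sin(th) * math.cos(ph)
        y = 2 * math.sin(th) * math.sin(ph)
        rho = y * y + x * y + 2
        lo, hi = min(lo, rho), max(hi, rho)
print(lo, hi)  # approximately 4 - 2*sqrt(2) and 4 + 2*sqrt(2)
```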
|
|multivariable-calculus|solution-verification|maxima-minima|lagrange-multiplier|
| 0
|
Does the order of the partial derivatives in a gradient vector matter?
|
Probably not a great question but I was thinking about how to apply multivariable calculus in real life and came up with a question. If you have a multivariable function with variables $x,y,z$ and a single output. Does the order of the partial derivatives in the gradient matter? As in, are $(\partial x, \partial y, \partial z)$ and $(\partial z, \partial x, \partial y)$ the same? If not, then what are these differences? Thank you,
|
It matters: for example, the total derivative $T_{\mathbf a}$ at $\mathbf a$ can no longer be written as $\nabla f(\mathbf a)\cdot\mathbf v$ . Some things stay the same, like the Laplacian $\nabla\cdot\nabla$ . Point is, you want to write it in Cartesian coordinates using the basis vectors $(\mathbf{\hat x},\mathbf{\hat y},\mathbf{\hat z})\newcommand{\pdv}[2]{\frac{\partial#1}{\partial #2}}$ . You want to write $$\nabla=\frac{\partial}{\partial x}\mathbf{\hat x}+\frac{\partial}{\partial y}\mathbf{\hat y}+\frac{\partial}{\partial z}\mathbf{\hat z}\equiv\left(\pdv{}{x},\pdv{}{y},\pdv{}{z}\right)$$ But writing $$\nabla=\left(\pdv{}{z},\pdv{}{y},\pdv{}{x}\right)\equiv\pdv{}{z}\mathbf{\hat x}+\pdv{}{y}\mathbf{\hat y}+\pdv{}{x}\mathbf{\hat z}$$ doesn't make sense. Hope this helps. :)
|
|multivariable-calculus|
| 0
|
How to represent $x^n$ as a sum of $P_k:= (x)(x-1)\dots(x-k+1)$?
|
Just for curiosity I want to represent $x^n$ as a sum of $P_k:= (x)(x-1)\dots(x-k+1)$ . Since $x=P_1,\ xP_n= P_{n+1} +nP_n$ , this proves that it is possible for any $x^n$ to be represented as a sum of $P_k:= (x)(x-1)\dots(x-k+1)$ where $1\le k \le n$ , $ \ n \in \mathbb{N}$ . We have $$x^n = \sum\limits_{k=1}^n C_{(n,k)} P_k$$ where $C_{(n,k)}$ is just some constant that depends on $n, \ k$ . This also gives us a way to calculate the first $C_{(n,k)}$ for $1\le n\le 6$ : $$x=P_1$$ $$x^2 = P_1+ P_2 $$ $$x^3 = P_1 + 3P_2 +P_3 $$ $$x^4=P_1 +7 P_2 + 6P_3 +P_4$$ $$x^5=P_1+15P_2 +25P_3+10P_4 +P_5$$ $$x^6=P_1+31P_2 +90P_3 +65P_4+15P_5+ P_6$$ The question is: How to represent $C_{(n,k)}$ in a "nice" closed form? I expected at first that $C_{(n,k)}$ would have some simple relation to the binomial coefficients; for $x, \ x^2, \ x^3$ I believed that such a relation could be found, but after I calculated $x^4, \ x^5, \ x^6$ it turned out that $C_{(n,k)}$ is more complicated than I initially thought. After some
|
Maybe this can be another way to solve the problem, let me know if you agree with me. You can observe that $$P_k(j)=0$$ for each $j\leq k-1$ , and that $$P_k(j)=j(j-1)\cdots (j-k+1)=\binom{j}{k}k!$$ Now we have that $$x^n=\sum_{k=1}^nC_{n,k}P_k \implies j^n=\sum_{k\leq j}C_{n,k} \binom{j}{k}k!$$ Hence you get the following linear system $$\begin{pmatrix}1 & 0 & 0 & \dots & 0 \\ \binom{2}{1}1!& \binom{2}{2}2! & 0 & \dots &0 \\ \vdots & && &\\ \binom{n}{1}1! & \binom{n}{2}2! & \dots & &\binom{n}{n}n! \end{pmatrix}C_n=\begin{pmatrix}1^n \\ 2^n \\ \vdots \\ n^n\end{pmatrix}$$ You can find the inverse of this matrix to get the values of $C_{n,k}$ . You can find here a way to invert a lower triangular matrix.
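Incidentally, these $C_{(n,k)}$ are the Stirling numbers of the second kind $S(n,k)$ , with closed form $S(n,k)=\frac{1}{k!}\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^n$ ; a quick sketch checking the recurrence $S(n,k)=kS(n-1,k)+S(n-1,k-1)$ against the table in the question:

```python
def stirling2(n, k):
    """Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    """P_k(x) = x(x-1)...(x-k+1)."""
    out = 1
    for i in range(k):
        out *= x - i
    return out

# Coefficients match the table in the question for n = 6 ...
print([stirling2(6, k) for k in range(1, 7)])  # [1, 31, 90, 65, 15, 1]

# ... and the identity x^n = sum_k C_{(n,k)} P_k(x) holds, e.g. at x = 7, n = 6:
assert sum(stirling2(6, k) * falling(7, k) for k in range(1, 7)) == 7 ** 6
```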
|
|algebra-precalculus|analysis|binomial-coefficients|closed-form|
| 0
|
Why is my answer not correct?
|
You are watering a garden. The height $h$ (in feet) of water spraying from the garden hose can be modeled by $h(x)=−0.1x^2+0.7x+3$ , where $x$ is the horizontal distance (in feet) from where you are standing. You raise the hose so that the water hits the ground $1$ foot farther from where you are standing. Write a function that models the new path of the water. We can move the curve to the right: $x\mapsto x-1$ and get $-0.1 x^2+0.9 x+2.2$ , but the solution to the problem says the correct answer is $−0.1x^2 +0.7x+4.4$ , which seems to say you should move the curve up such that the root moves to the right by $1$ foot. Can someone explain why this is the correct answer?
|
The question specifies that you raise the hose. This implies a vertical dilation or translation. Of course, there are multiple transformations which achieve the required result, but in the context of the problem I believe the writer of the problem was after a vertical translation. In a related matter - I have an issue with the absence of a domain for the function. Was there a domain specified originally?
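A small check of both candidate answers (a sketch; "hits the ground $1$ foot farther" means the positive root moves from $x=10$ to $x=11$):

```python
def h(x):        # original path
    return -0.1 * x**2 + 0.7 * x + 3

def shifted(x):  # the asker's horizontal translation x -> x - 1
    return h(x - 1)

def raised(x):   # the book's answer: only the constant term changes
    return -0.1 * x**2 + 0.7 * x + 4.4

# The water originally hits the ground at x = 10 ...
assert abs(h(10)) < 1e-9
# ... and both transformed paths hit the ground at x = 11:
assert abs(shifted(11)) < 1e-9
assert abs(raised(11)) < 1e-9
# They differ in the height at the hose, x = 0: raising the hose changes
# the launch height, a horizontal shift does not model that.
print(shifted(0), raised(0))  # 2.2 vs 4.4
```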
|
|algebra-precalculus|
| 0
|
Prove that $f = g$ almost everywhere on $\mathbb{R}$
|
Let $f$ and $g$ be functions in $L^1(\mathbb{R})$ such that $$ \int_E f \, dm = \int_E g \, dm $$ for every measurable subset $E$ of $\mathbb{R}$ . Prove that $f = g$ almost everywhere on $\mathbb{R}$ . Proof note that $$ \int_E f \, dm = \int_E g \, dm $$ then $ \int_E f \, dm -\int_E g \, dm = 0, $ and by the property of Lebesgue integrals it holds that $ \int_E (f -g) dm = 0 $ then f=g almost everywhere Is this proof correct?
|
Even if $f$ is not non-negative almost everywhere, if you have that $\int_{E}f\,dm=0$ for all measurable $E$ , then $f=0$ a.e. Consider $E_{n}=\{x:f(x)\geq \frac{1}{n}\}$ . Then if possible let $m(E_{n})>0$ . Then you have that $\int_{E_{n}}f\,dm\geq \frac{1}{n}\cdot m(E_{n})>0$ which is a contradiction. Hence $m(E_{n})=0$ for each $n$ . Hence, if $P=\bigcup_{n=1}^{\infty}E_{n}=\{x:f(x)> 0\}$ , then $m(P)=0$ by subadditivity. Similarly, if $M_{n}=\{x:f(x)\leq -\frac{1}{n}\}$ , then by the same argument, $m(M_{n})=0$ for each $n$ . Hence if $N=\bigcup_{n=1}^{\infty}M_{n}=\{x:f(x)<0\}$ , then $m(N)=0$ . Thus $m(P\cup N)=m(\{x:f(x)\neq 0\})=0$ Thus $f(x)=0$ a.e. Now apply this to $f-g$ to get that $f-g=0$ a.e. which means $f=g$ a.e.
|
|measure-theory|lebesgue-integral|
| 0
|
AB is a chord of length 2ka of a circle of radius a. The tangents to the circle at A and B meet in C if k^7 is negligible calculate the area of ABC
|
AB is a chord of length 2ka of a circle of radius a . The tangents to the circle at A and B meet in C . Show that, if k is so small compared with unity that $k^7$ is negligible, the area of the triangle ABC is $a^2k^3+\frac12 a^2k^5$ . This image is a rough idea of what I think is going on. ( O is the center of the circle, b is the length of line AC , and X is the intersection point of line OC and chord AB .) The area required can be given as $k\times a\times h$ . I first noticed that getting b in terms of a and k was possible because Triangle OAC and Triangle OBC are right-angled triangles, so: $$\frac{1}{a^2} + \frac{1}{b^2} = \frac{1}{(ka)^2}$$ $$b = \frac{ka}{\sqrt{1- k^2}}$$ Using binomial expansion to $k^4$ I got $$b = ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})$$ Using this I got h as follows: $$\sqrt{a^2 + b^2} = \sqrt{a^2-(ka)^2} +h$$ $$h = \sqrt{a^2 + [ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})]^2} -a\sqrt{1-k^2}$$ Expanding $\sqrt{1-k^2}$ $$h = \sqrt{a^2 + [ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})]^2}-a[1-\frac{k^2}{2}-\fr
|
$\square OACB$ is a kite with $\angle A=\angle B =90^\circ$ and $OA=OB=a.$ Let $\alpha=\angle AOC$ , so that $\sin\alpha=k$ (half the chord over the radius). Then $AC=BC=a\tan\alpha$ and $\angle ACB=\pi-2\alpha$ . The area wanted is then $(ACB)=\frac12\,AC\cdot BC\,\sin\angle ACB=\frac12 a^2\tan^2\alpha\sin2\alpha=a^2\sin^3\alpha\sec\alpha=\frac{a^2k^3}{\sqrt{1-k^2}}=a^2k^3+\frac12 a^2k^5+O(k^7).$
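The expansion can be checked against exact coordinates (a sketch with $a=1$ ; by symmetry the chord is taken vertical, so the tangents meet on the x-axis):

```python
import math

def triangle_area(k):
    """Exact area of ABC for radius 1 and chord length 2k, by coordinates."""
    ax = math.sqrt(1 - k * k)          # A = (ax, k), B = (ax, -k)
    cx = 1 / ax                        # tangents meet on the x-axis at C = (cx, 0)
    return 0.5 * (2 * k) * (cx - ax)   # base * height / 2; equals k^3 / sqrt(1 - k^2)

k = 0.1
approx = k**3 + 0.5 * k**5
print(triangle_area(k), approx)  # agree up to the neglected O(k^7) term
assert abs(triangle_area(k) - approx) < k**7
```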
|
|geometry|algebra-precalculus|proof-writing|binomial-theorem|
| 0
|
Prove that the function $f$ is injective
|
Define a function: $\mathbb N \times \mathbb N \to \mathbb Q$ : $$f((a,b)) = a + \frac{1}{b}.$$ Choose $(a,b), (c,d) \in \mathbb{N}\times\mathbb{N}$ . Workings: $$ f((a,b)) = a + \frac{1}{b} \\ f((c,d)) = c + \frac{1}{d} \\ $$ Suppose that $f((a,b)) = f((c,d))$ : $$ a + \frac{1}{b} = c + \frac{1}{d} \\ d(ab+1) = b(cd+1). $$ How do I make $a = c$ and $b = d$ ?
|
If (for instance) $b=1$ , then $d=1$ , because $c+\frac1d$ must then be the integer $a+1$ , forcing $\frac1d\in\Bbb N$ , i.e. $d=1$ (and likewise if $d=1$ ). If $b,d\neq1$ , then $\frac1b,\frac1d\in(0,1)$ , so $a=\lfloor f(a,b)\rfloor=\lfloor f(c,d)\rfloor=c$ , and then $\frac1b=\frac1d$ gives $b=d$ .
|
|discrete-mathematics|
| 0
|
Basic question about algebraic topology definitions
|
Trying to learn some basics of algebraic topology and wanted to ask clarifications about this sentence of Hatcher ( third page, just the beginning ;) ): 'It is true in general that two spaces X and Y are homotopy equivalent if and only if there exists a third space Z containing both X and Y as deformation retracts'. He says that the implication Question: Would this statement be more precise? If there exists a third space Z containing X' and Y' as deformation retracts such that X' is homeomorphic to X and Y' is homeomorphic to Y, then X and Y are homotopy equivalent. Trial: First some definitions: $\phi_X : X \rightarrow X'$ and $\phi_Y : Y \rightarrow Y'$ are the given homeomorphisms; $r_X(t): Z \times I \rightarrow Z$ and $r_Y(t): Z \times I \rightarrow Z$ are the deformation retractions. We need to define maps $f:X \rightarrow Y$ and $g:Y \rightarrow X$ such that $fg$ and $gf$ are homotopic to the identity. One way to define a map $X \rightarrow Y$ is to embed $X$ in $Z$ ,
|
The correct statement is in fact If there exists a third space $Z$ containing $X'$ and $Y'$ as deformation retracts such that $X'$ is homeomorphic to $X$ and $Y'$ is homeomorphic to $Y$ , then $X$ and $Y$ are homotopy equivalent. If $X$ and $Y$ are not disjoint, we can in general not expect to find such $Z$ containing $X, Y$ as genuine subspaces . The "trivial implication" is really trivial. It suffices to observe that each deformation retract of a space $Z$ is homotopy equivalent to $Z$ (see Hatcher p.3). Since "being homotopy equivalent" is a transitive relation and "homeomorphic" implies "homotopy equivalent", we are done.
|
|solution-verification|algebraic-topology|
| 1
|
Sum of all elements in a matrix with specified conditions
|
Let M be a matrix with 20 rows and 21 columns, containing the following elements $M[i][j]=i*i$ if $i=j$ $M[i][j]=min(i,j)$ if $i≠j$ for $1\leqslant i\leqslant 20, 1\leqslant j\leqslant 21$ . What is the sum of all elements in the matrix? My approach: I drew a $4\times 5$ matrix with the specified elements and figured out that there are: $i+j-1-1=7$ elements of $1$ $i+j-3-1=5$ elements of $2$ $i+j-5-1=3$ elements of $3$ $i+j-7-1=1$ elements of $4$ Plus the squared elements $(1,4,9,16)$ Now, for the $20\times21$ matrix, following the same approach, we have: $S=(39\times1+37\times2+\ldots+1\times20)+(1+4+\ldots+400)=S'+S''$ $S'=39\times1+37\times2+\ldots+1\times20=2870$ $S''=1+4+\ldots+400=\frac16\times20\times(20+1)\times(2\times20+1)=2870$ $S=2\times2870=5740$ So, my questions are: How can I calculate $S'$ without doing every operation? Is there an easier way to solve the problem?
|
I'm not aware of any special formula for this strange kind of matrix, but a closed-form sum for any $m \times n$ matrix is fairly simple to figure out. I will do this for $n > m$ , since that is what you have above, but you can simply switch these dimensions in the final formula if $m > n$ . You have already worked out most of the pattern. Let's take the $4 \times 7$ example which looks like $$ \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 4 & 2 & 2 & 2 & 2 & 2 \\ 1 & 2 & 9 & 3 & 3 & 3 & 3 \\ 1 & 2 & 3 & 16 & 4 & 4 & 4 \end{bmatrix} $$ We can quickly count that the sum for this is $80$ , but let's break it down. You have already shown that the diagonal of squares has to be treated separately and has a known formula $$ \sum_{i=1}^m i^2 = \frac{m(m+1)(2m+1)}{6} $$ We can see that everything above the diagonal follows an obvious pattern and everything below is simply reflected, but truncated because of the smaller dimension. For the elements above the diagonal it is simply the sum from
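A direct brute-force check of the $20\times21$ case in the question (a sketch):

```python
# Build the matrix entries straight from the definition and sum them.
rows, cols = 20, 21
total = sum(
    i * i if i == j else min(i, j)
    for i in range(1, rows + 1)
    for j in range(1, cols + 1)
)
print(total)  # 5740, matching S' + S'' = 2870 + 2870
```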
|
|combinatorics|matrices|summation|
| 0
|
Prove that the function $f$ is injective
|
Define a function: $\mathbb N \times \mathbb N \to \mathbb Q$ : $$f((a,b)) = a + \frac{1}{b}.$$ Choose $(a,b), (c,d) \in \mathbb{N}\times\mathbb{N}$ . Workings: $$ f((a,b)) = a + \frac{1}{b} \\ f((c,d)) = c + \frac{1}{d} \\ $$ Suppose that $f((a,b)) = f((c,d))$ : $$ a + \frac{1}{b} = c + \frac{1}{d} \\ d(ab+1) = b(cd+1). $$ How do I make $a = c$ and $b = d$ ?
|
Since $b\ge 1$ we have $a < a + \frac{1}{b} \le a+1$ . Then from $a + \frac{1}{b} = c + \frac{1}{d}$ we get $a+1=\left\lceil a + \frac{1}{b}\right\rceil=\left\lceil c + \frac{1}{d}\right\rceil=c+1$ , so $a=c$ , and then $\frac{1}{b}=\frac{1}{d}$ gives $b=d$ .
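An exhaustive check with exact rational arithmetic over a finite block (a sketch, not a proof; here $\mathbb N$ is taken to start at $1$):

```python
from fractions import Fraction

# f(a, b) = a + 1/b on a finite N x N block; injectivity means no two
# distinct pairs share a value.
N = 50
values = {}
for a in range(1, N + 1):
    for b in range(1, N + 1):
        v = a + Fraction(1, b)
        assert v not in values, (values.get(v), (a, b))
        values[v] = (a, b)
print(len(values))  # 2500 distinct values from 2500 pairs
```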
|
|discrete-mathematics|
| 0
|
Finding the distance beween two sequences
|
Let $\sum_2=\{s=(s_0s_1s_2\dots)\mid s_j=0\ \text{or} \ 1\}$ , which is the space of infinite sequences of $0$ 's and $1$ 's. We make $\sum_2$ into a metric space as follows: for two sequences $s=(000\dots)$ and $r=(1010\dots)$ , then the distance between them is $$d[s,r]=\sum_{i=0}^\infty \frac{1}{2^{2i}}=\frac{1}{1-\frac14}=\frac43$$ since the distance between two series $s, r$ $$d[s,r]=\sum_{i=0}^\infty \frac{|s_i-r_i|}{2^i}$$ is dominated by the geometric series $$\sum_{i=0}^\infty \frac{1}{2^i}=2$$ But how did $2i$ end up in the exponent, and how did that denominator turn into $1-\frac14$ ? Thanks
|
Note that $$|s_i - r_i| = \begin{cases} 1 & \text{if } 2 \mid i, \\ 0 & \text{otherwise.}\end{cases}$$ Thus, $|s_i - r_i|=1$ if and only if $i$ is a multiple of 2. Hence, we may write $$ \sum^\infty_{i=0} \frac{1}{2^i} = \sum^\infty_{i=0} \frac{1}{2^{2i}} $$ to keep the nonzero terms only. This is just a geometric series, so $$ \sum^\infty_{k=0} ar^k = \frac{a}{1 - r} $$ for $|r| < 1$ . Use $a = 1$ and $r = \frac{1}{4}$ to obtain the result.
|
|sequences-and-series|
| 0
|
Help solving a definite integral
|
I've been exploring a problem regarding dropping small circles on a large circle, and attempting to find the expected value of the covered area for a throw. The assumption is that the smaller circle's midpoint is inside the bigger circle. But I met a hurdle when integrating a function that was supposed to find the angle to their intersection. So I'm trying to solve the following definite integral: $$ \int_{R-r}^{R}\sin^{-1}\left(\frac{x^{2} + R^{2} - r^{2}}{2xR}\right){\rm d}x $$ I know that: $0 . But I'm stuck after that. Wolfram doesn't seem to have any ideas either. Any suggestions on how to proceed?
|
I assume that for you $\sin^{-1}(\cdot) = \arcsin(\cdot)$ . As a starting point this could help \begin{align} &\int_{R-r}^{R}\arcsin\left(\frac{x^2+R^2-r^2}{2xR}\right)dx\\ &= \int_{R-r}^{R}\arcsin\left(\frac{x^2+\left(R-r\right)\left(R+r\right)}{2xR}\right)dx\\ &=\int_{R-r}^{R}\arcsin\left(\frac{x}{2R} + \frac{\left(R-r\right)\left(R+r\right)}{2xR}\right)dx\\ &=\int_{R-r}^{R}\arcsin\left(\alpha\,x + \frac{\beta}{x}\right)dx \end{align} with $\alpha=\frac{1}{2R}$ and $\beta=\frac{(R-r)(R+r)}{2R}$ . For the case $\alpha = \beta = 1$ , WolframAlpha has a solution. For the case $\alpha>0$ and $\beta = 0$ , the solution of the integral is given in Gradshteyn's table of integrals (see photo below)
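The rewrite in the last line, and the fact that the $\arcsin$ argument stays in $[-1,1]$ on the integration range (it equals exactly $1$ at $x=R-r$), can be checked numerically (a sketch; $\alpha=\frac{1}{2R}$ , $\beta=\frac{(R-r)(R+r)}{2R}$ , with sample values for $R$ and $r$):

```python
R, r = 2.0, 0.7
alpha = 1 / (2 * R)
beta = (R - r) * (R + r) / (2 * R)

for i in range(1001):
    x = (R - r) + r * i / 1000                       # sample [R - r, R]
    original = (x * x + R * R - r * r) / (2 * x * R)
    rewritten = alpha * x + beta / x
    assert abs(original - rewritten) < 1e-12         # same integrand
    assert original <= 1 + 1e-12                     # arcsin is defined

left_end = ((R - r) ** 2 + R * R - r * r) / (2 * (R - r) * R)
print(left_end)  # exactly 1: the integrand starts at pi/2
```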
|
|real-analysis|integration|trigonometry|
| 0
|
Finding the distance beween two sequences
|
Let $\sum_2=\{s=(s_0s_1s_2\dots)\mid s_j=0\ \text{or} \ 1\}$ , which is the space of infinite sequences of $0$ 's and $1$ 's. We make $\sum_2$ into a metric space as follows: for two sequences $s=(000\dots)$ and $r=(1010\dots)$ , then the distance between them is $$d[s,r]=\sum_{i=0}^\infty \frac{1}{2^{2i}}=\frac{1}{1-\frac14}=\frac43$$ since the distance between two series $s, r$ $$d[s,r]=\sum_{i=0}^\infty \frac{|s_i-r_i|}{2^i}$$ is dominated by the geometric series $$\sum_{i=0}^\infty \frac{1}{2^i}=2$$ But how did $2i$ end up in the exponent, and how did that denominator turn into $1-\frac14$ ? Thanks
|
For the given $s,r$ , $$|s_i-r_i|=\left\{\begin{array}{ll} 1 & : i \text{ is even}\\0 & : \text{otherwise}.\end{array}\right.$$ So $$d[s,r]=\frac{1}{2^0}+\frac{0}{2^1}+\frac{1}{2^2}+\frac{0}{2^3}+\frac{1}{2^4}+\ldots=\sum_{i\text{ even}}\frac{1}{2^i}=\sum_{i=0}^\infty \frac{1}{2^{2i}}.$$ This is where the $2$ in the exponent comes from. Next, we note that $2^{2i}=(2^2)^i=4^i$ , so $\frac{1}{2^{2i}}=\frac{1}{4^i}$ , and $$\sum_{i=0}^\infty \frac{1}{2^{2i}}=\sum_{i=0}^\infty \frac{1}{4^i}=\frac{1}{1-1/4}.$$ This is because $\sum_{i=0}^\infty r^i=\frac{1}{1-r}$ for $-1<r<1$ .
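Numerically, the partial sums of $d[s,r]$ for these two sequences converge to $4/3$ very quickly (a sketch):

```python
# s = (0,0,0,...), r = (1,0,1,0,...): |s_i - r_i| = 1 exactly when i is even.
N = 60
d = sum((1 if i % 2 == 0 else 0) / 2 ** i for i in range(N))
print(d)  # 1.3333333333333333, i.e. 4/3 to machine precision
```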
|
|sequences-and-series|
| 1
|
Upper Bound on number of Elementary Row Operations
|
Problem: Show that an $n \times n$ matrix with real coefficients can be put into Reduced Row Echelon Form (RREF) by a sequence of at most $n^2$ elementary row operations. This feels simple, but proving this somehow eludes me. I thought about proceeding by Induction - for a given $(n+1) \times (n+1)$ matrix, consider the upper left block $n \times n$ matrix - by the Inductive Hypothesis, this can be put into RREF after at most $n^2$ steps. Now, for each non-zero element $a_i$ in the $(n+1)$ -st row lying below a leading $1$ (and also not in the rightmost position), we can make it zero by the row operation $R_{n+1} \rightarrow - a_i R_j + R_{n+1}$ , where $R_j$ is the row containing the leading $1$ . But at this point, I run into an annoying problem - what if $a_i$ lies below a column of $n$ zeroes, thus preventing it from being transformed into zero by the above described row operation? Intuitively, I would want to perform a row swap, but I don't see how to proceed from here. Any insight would be appreciated.
|
Go column by column. Processing the first column takes $n$ operations. Processing the second column takes $n$ operations. Etc.
|
|linear-algebra|
| 0
|
A variation of an exercice from Chapter 16 (Counting and Choosing) of Liebeck's book
|
The rules of a lottery are as follows: You select 10 numbers between 1 and 50. On lottery night, the celebrity mathematician Richard Thomas chooses at random 6 'correct' numbers. If your 10 numbers include all 6 correct ones, you win. How many ways are there to win the lottery? From my understanding, the requested number is given by $$\binom{10}{6}\times \binom{44}{4}$$ Am I right? Thank you very much for your help. For the sake of completeness, here is the original statement taken from the book : The rules of a lottery are as follows: You select 10 numbers between 1 and 50. On lottery night, celebrity mathematician Richard Thomas chooses at random 6 “correct” numbers. If your 10 numbers include all 6 correct ones, you win. Work out your chance of winning the lottery.
|
I post here an answer based on the comments. Different interpretations lead to different results. $\binom{6}{6} \times \binom{44}{4}$ : (see this answer ) In this interpretation, one first focuses on selecting all 6 correct numbers out of the 6 numbers drawn by the mathematician. Then, one needs to choose the remaining 4 numbers out of the 44 incorrect numbers not drawn by Richard. By multiplying these two binomial coefficients together, one counts all possible winning combinations where you've chosen all 6 correct numbers and 4 incorrect numbers out of the total pool of 50 numbers. $\binom{50}{6}\binom{44}{4}=\binom{50}{10}\binom{10}{6}$ : (thanks to users @Haris and @user469053) Firstly, one selects 6 numbers out of the 50 total numbers available. Then, one selects the remaining 4 numbers out of the 44 numbers not yet chosen; this is represented by $\binom{44}{4}$ . Alternatively, one could first choose the 10 numbers out of the 50 total numbers, represented by $\binom{50
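Both counts, and the resulting winning probability, are easy to confirm with exact integer arithmetic (a sketch):

```python
from math import comb

# The two double-counting identities discussed above:
assert comb(6, 6) * comb(44, 4) == comb(44, 4)                  # 135751 winning tickets
assert comb(50, 6) * comb(44, 4) == comb(50, 10) * comb(10, 6)  # (draw, ticket) pairs

# Chance of winning: favourable 10-number tickets over all tickets.
p = comb(44, 4) / comb(50, 10)
print(comb(44, 4), p)  # 135751 winning tickets, p about 1.32e-05
```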
|
|combinatorics|solution-verification|lotteries|
| 0
|
Is geodesic distance locally approximated by euclidean distance?
|
Let $(M,g)$ be a Riemannian manifold. Let $p\in M$ . Is it possible to find a chart $\varphi: U \to \mathbb R^d$ with $p\in U$ such that the geodesic distance $d_M$ on $U$ is close to the euclidean distance $d_E$ on $\varphi(U)$ ? More precisely, if I take $\varphi = \exp^{-1}_p: \{x\in M: d_M(x,p)\le \delta\} \to \{v\in \mathbb R^d: d_E(v,0 )\le \delta\}$ , can I find bounds $$A(p, M ,\delta) \le \frac{d_M(x,y)}{d_E(\exp^{-1}_p(x), \exp^{-1}_p(y))}\le B(p, M ,\delta)\enspace,$$ such that $A(p, M ,\delta)\small{\nearrow} \normalsize1$ and $B(p, M ,\delta)\small\searrow \normalsize1$ as $\delta\to 0$ ? Presumably, $A$ and $B$ will depend only on the curvature at $p$ and on $\delta$ . Of course, if $x=p$ , then $A=B=1$ by the definition of the exponential map. The difficult part is $x,y\neq p$ . I think I know how to derive $B$ . I would consider $\delta$ sufficiently small such that the geodesic ball is convex. A path in $\{v\in \mathbb R^d: d_E(v,0 )\le \delta\}$ from $\exp^{-1}_p(x)$ to $\exp^
|
Yes, the exponential map provides a diffeomorphism which is $(1+\epsilon)$ -bi-Lipschitz on sufficiently small balls, for any $\epsilon>0$ . This follows from the fact that the Jacobian of the exponential map at $p$ is the identity map.
|
|geometry|differential-geometry|riemannian-geometry|
| 0
|
Closed integral curve in vector field implies vector field is not conservative?
|
I believe that if we have a closed integral curve in a vector field then it is non-conservative. The idea is: say it were conservative; then we have a potential function, say $\phi(\bar{x})$ , whose gradient is the vector field, $\nabla\phi(\bar{x})$ . We also know that the gradient tangent to the curve (integral curve) points to the biggest increase/decrease of the function. But we know that the integral curve ends up at the same point since it is closed, then the $\phi$ function value should not have increased at all, this implies that $$\nabla\phi(\bar{x}) = 0$$ which is true only if the tangent vector for our parameterization is zero, i.e. $\dot{\bar{x}}(t) = 0$ , and then obviously it cannot be a closed curve. Which we also see in the following idea $$\oint\limits_{\gamma} w = \int_{a}^{b} \frac{1}{{\lambda(t)}} \left|\dot{\bar{x}}(t)\right|^2 dt = 0 \implies \dot{\bar{x}}(t) = 0$$ Is this argument correct? How could I make it more rigorous as right now it feels more intuitive? Thanks for any suggestions
|
As stated, your belief is not quite true, since point curves $x(t) = x_0$ are closed integral curves for conservative vector fields with critical points, i.e. $F(x_0) = \nabla\phi(x_0)=0$ . However, you can show that if $F$ is conservative then $\dot{x}(t)=0$ for any closed curve $x(t)$ . Indeed if $F =\nabla\phi$ then $\dot{x}(t) = F(x(t))$ implies $$|\dot{x}(t)|^2 = \nabla\phi(x(t))\cdot\dot{x}(t)=\frac{d}{dt} \phi(x(t))$$ by the chain rule. But if $x(0)=x(b)$ for some $b$ then by the fundamental theorem of calculus $$\int_0^b |\dot{x}(t)|^2 dt = \phi(x(b))-\phi(x(0)) = 0$$ And since $|\dot{x}(t)|^2 \geq 0$ , this implies $\dot{x}(t)=0$ for all $t$ .
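The underlying fact, that the line integral of a gradient field around a closed curve vanishes, is easy to see numerically (a sketch with the arbitrary potential $\phi(x,y)=x^2y+\sin y$ and the unit circle):

```python
import math

def grad_phi(x, y):
    # phi(x, y) = x^2 * y + sin(y); F = grad phi
    return (2 * x * y, x * x + math.cos(y))

# Line integral of F along the unit circle, midpoint Riemann sum.
N = 20_000
total = 0.0
for i in range(N):
    t = 2 * math.pi * (i + 0.5) / N
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # velocity of the parametrisation
    fx, fy = grad_phi(x, y)
    total += (fx * dx + fy * dy) * (2 * math.pi / N)
print(total)  # ~0 up to discretisation error
```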
|
|integration|multivariable-calculus|definite-integrals|vector-analysis|vector-fields|
| 1
|
Uniform Distribution of a mod hash function
|
I wonder if there is any efficient way to determine whether the hash values of $h(x)=3x\ mod\ 2^{64}$ are uniformly distributed for $x$ being uniformly distributed in $[0,2^{512}-1]$ ? I try to approach this by definition: for $y\in[0,2^{64}-1]$ , $P(h(x)=y)=P(x=\frac y3+k\cdot\frac{2^{64}}3)$ for a set of integers $k$ . The range of $k$ depends on $y$ by $0\leq k\leq3\cdot2^{448}-\frac{y+3}{2^{64}}$ . So when $y$ is very large or zero, the range of $k$ should differ by one. Then I conclude that this hash function is not uniformly distributed. But I think this approach is very ineffective and maybe there is even something wrong in my proof. Could anyone tell me how to approach this question more effectively? Thank you very much!
|
If $X$ is valued in $Z/bZ$ then $X$ is uniform if and only if $E(e^{2i\pi X g/b})=0$ for all $g\in Z/bZ$ except for $g\equiv 0 \ mod\ b.$ If $a$ and $b$ have no common divisor greater than $1$ , then $aX$ is uniform on $Z/bZ$ if $X$ is uniform on $Z/bZ.$ Proof: if not, there exists $g\not \equiv 0 \ mod\ b$ such that $E(e^{2i\pi aX g/b})\neq 0$ , hence $ag\equiv 0 \ mod\ b$ , which contradicts the fact that $a$ and $b$ have no common divisor greater than $1$ . Apply this to $a=3$ and $b=2^{64}.$ If $X$ is uniform on $Z/bb'Z$ , then it is uniform on $Z/bZ.$ Proof: if not, there exists an integer $g\not \equiv 0 \ mod\ b$ such that $E(e^{2i\pi X g/b})\neq 0$ ; but $e^{2i\pi Xg/b}=e^{2i\pi X(gb')/(bb')}$ with $gb'\not\equiv 0 \ mod\ bb'$ , which contradicts the fact that $E(e^{2i\pi X g'/(bb')})= 0$ for all $g'\not \equiv 0 \ mod\ bb'$ . Application: $b=2^{64},\ bb'=2^{512} .$
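Both facts are visible in a small analogue (a sketch with $b=2^8$ standing in for $2^{64}$ , and $b^2$ for $2^{512}$): multiplication by an odd $a$ permutes $Z/bZ$ , and reduction mod $b$ of a uniform variable on a range whose size is a multiple of $b$ is exactly uniform.

```python
from collections import Counter

b = 2 ** 8          # small stand-in for 2^64
a = 3

# Multiplication by 3 permutes Z/bZ, since gcd(3, 2^8) = 1 ...
assert sorted((a * x) % b for x in range(b)) == list(range(b))

# ... and h(x) = 3x mod b is exactly uniform for x uniform on [0, b^2):
counts = Counter((a * x) % b for x in range(b ** 2))
assert set(counts.values()) == {b}
print("h(x) = 3x mod 2^8 is uniform for uniform x in [0, 2^16)")
```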
|
|probability|hash-function|
| 0
|
Are $\{z\in \mathbb{C}:|z^2-3|<1\}$ and $\{z\in \mathbb{C}:|z^2-1|<3\}$ complex domains?
|
I got stuck on the following problem: Which of the following sets are domains in the complex plane: $$D_1=\{z\in \mathbb{C}:|z^2-3|<1\}$$ $$D_2=\{z\in \mathbb{C}:|z^2-1|<3\}$$ These sets are domains if they are arcwise connected open non-empty subsets of the complex plane. I first tried to sketch these regions to get some intuition about the problem. I tried the following: If we let $z=re^{i\theta}$ , then $z^2=r^2e^{2i\theta}=r^2(\cos2\theta+i\sin2\theta)$ and our inequality becomes: $$|z^2-3|^2=|r^2(\cos2\theta+i\sin2\theta)-3|^2=(r^4\cos^22\theta-6r^2\cos2\theta+9)+r^4\sin^22\theta=r^2(r^2-6\cos2\theta)+9<1$$ If we let $\theta=0$ , then $r^2(r^2-6)<-8$ , i.e. $\sqrt2<r<2$ , since $r>0$ . And of course if $z$ is a real number and $z\in(-2,-\sqrt2)$ then $|z^2-3|<1$ . So if $\theta=0$ and $z$ lies in $(-2,-\sqrt2)\cup(\sqrt2,2)$ it satisfies the inequality. But what if $\theta\neq0$ ? I have tried a lot, but only got so far. I think that the first one is not a domain, since we will get two distinct open sets U and V such that their
|
Consider the function $f:\Bbb C\rightarrow\Bbb C$ , $f(z)=z^2$ and the open sets $E_1=\{w:|w-3|<1\}$ and $E_2=\{w:|w-1|<3\}$ . a) $f^{-1}(E_1)=D_1=C_1\cup(-C_1)$ has two components (a component $C_1$ around $\sqrt3$ and its mirror image $-C_1$ around $-\sqrt3$ ), so it is not a domain. b) $f^{-1}(E_2)=D_2$ has one component, and it is open since $f$ is continuous; so it is a domain.
|
|complex-analysis|complex-numbers|
| 0
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maximum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through cosine theorem, I get the above formula. The value should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcomed.
|
Take $m = \sin \theta$ & $n=\cos \theta$ . Now put these values in the given expression and convert $\sin \theta$ & $\cos \theta$ into $\tan \left(\frac{\theta}2\right)$ using $$\sin θ= \frac{2\tan(θ/2)}{1+\tan^2(θ/2)}$$ $$\cos θ = \frac{1-\tan^2(θ/2)}{1+\tan^2(θ/2)}$$ Now take $\tan(θ/2) = y$ The above expression becomes $f(y)$ . For maxima, $\frac{\mathbb d}{\mathbb d y}f(y)=0$ . Solve for $y$ and put this value of $y$ in $f(y)$ .
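For what it's worth, differentiating $f(\theta)=\frac{5-4\sin\theta}{5-4\cos\theta}$ directly reduces the critical-point condition to $16-20(\sin\theta+\cos\theta)=0$ , i.e. $\sin\theta+\cos\theta=\frac45$ , which leads to a maximum of $\frac{25+4\sqrt{34}}{9}\approx 5.369$ ; a dense grid search (a sketch) agrees:

```python
import math

# f(t) = (5 - 4 sin t) / (5 - 4 cos t) on [0, 2*pi); dense grid search.
best = max(
    (5 - 4 * math.sin(t)) / (5 - 4 * math.cos(t))
    for t in (2 * math.pi * i / 200_000 for i in range(200_000))
)
closed_form = (25 + 4 * math.sqrt(34)) / 9   # from sin t + cos t = 4/5
print(best, closed_form)  # both about 5.3693
```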
|
|geometry|triangles|
| 0
|
Maximizing area of the triangle in a quarter circle
|
The radius of the quarter circle is $6\sqrt 5$ and we assume that $OA= 5$ and $OC=10$ . What is the maximum area of the blue triangle? Interpreting the problem statement, I believe that points $A$ and $C$ are fixed and point $B$ can move on the arc. To solve this problem, I assumed that the coordinate of $O$ is $(0,0)$ and then assigned coordinates for each vertex of the triangle: $A(5,0), C(0,10), B(x,\sqrt{180-x^2})$ where $x \in [0, 6\sqrt5]$ . Then I applied the formula for the area of the triangle given its vertices, and the problem is reduced to maximizing $$A(x)= \left|25-(\frac52\sqrt{180-x^2}+5x)\right|\quad \text{for}\quad x \in [0, 6\sqrt5]$$ Which is easy to continue and I got $50$ as the answer. I'm looking for other approaches to solve this problem. I'm particularly interested in geometric approaches.
|
Applying Heron's formula for the area of a triangle as a function of its side lengths, and finding the value of $x$ for which this value has a maximum, we obtain $AC=5\sqrt{5}$ , $AB=\sqrt{85}$ , $CB=4\sqrt{10}$ , the coordinates of the point $B$ being $x=12, y=6$ .
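The claimed optimum is easy to confirm with the shoelace formula, and a sweep over the arc shows no point does better (a sketch):

```python
import math

A, C = (5.0, 0.0), (0.0, 10.0)
R = math.sqrt(180)                     # radius 6*sqrt(5)

def area(B):
    # Shoelace formula for triangle ABC.
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    return abs(ax * (cy - by) + cx * (by - ay) + bx * (ay - cy)) / 2

print(area((12.0, 6.0)))               # 50.0
best = max(area((R * math.cos(t), R * math.sin(t)))
           for t in (math.pi / 2 * i / 100_000 for i in range(100_001)))
print(best)                            # about 50, attained near B = (12, 6)
```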
|
|geometry|euclidean-geometry|triangles|area|
| 0
|
How is the Riemann integral over the real line defined?
|
I suddenly started to wonder about this question. The main text I would refer to is Baby Rudin, and in chapter 6 about Riemann integral, only $\int_{a}^{b}f(x)dx$ , where $a, b$ are real numbers, is defined. In the exercise 8 of that chapter, it says if $\int_{a}^{b}f(x)dx$ exists for every $b>a$ where $a$ is fixed, then $\int_{a}^{\infty}f(x)dx$ is defined as the limit of $\int_{a}^{b}f(x)dx$ as $b$ approaches infinity, provided that this limit exists. And then how is $\int_{-\infty}^{\infty}f(x)dx$ defined? I remember in the course complex analysis something is defined as $\lim_{R \xrightarrow{} \infty}\int_{-R}^{R}f(x)dx$ . Is this the definition we use in default when we talk about the probability density function in elementary probability? Recall we say $\int_{-\infty}^{\infty}f(x)dx$ for a valid pdf $f$ .
|
In the PDF case in probability ... the two calculations $$\lim_{M,N\to\infty} \int_{-M}^N f(x)dx \tag1$$ and $$\lim_{R \to \infty}\int_{-R}^{R}f(x)dx \tag2$$ are equivalent in the case $f$ is nonnegative. It is only when $f$ is allowed to change sign that $(1)$ could fail to exist while $(2)$ exists. We could call $(1)$ the "improper Riemann integral" and call $(2)$ the "principal value integral".
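The standard example separating $(1)$ and $(2)$ is the odd integrand $f(x)=\frac{x}{1+x^2}$ (a sketch; the partial integrals have the closed form $\frac12\ln\frac{1+b^2}{1+a^2}$):

```python
import math

def partial(a, b):
    # integral of x/(1+x^2) from a to b: (1/2) * ln((1+b^2)/(1+a^2))
    return 0.5 * math.log((1 + b * b) / (1 + a * a))

# Symmetric limits: the principal value (2) exists and equals 0 ...
for R in (10.0, 100.0, 1000.0):
    assert abs(partial(-R, R)) < 1e-12

# ... but the independent limits in (1) do not: fix M = 1 and let N grow.
print([partial(-1.0, N) for N in (10.0, 100.0, 1000.0)])  # grows like (1/2) ln N
```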
|
|probability|integration|
| 1
|
Let $M_1$ and $M_2$ be affine subspaces of $\Bbb R^n$ with $M_1 \cap M_2 \neq \emptyset$ and $\dim(M_1 \cap M_2)=\dim(M_1)$. Then $M_1 \subset M_2$.
|
Let $M_1$ and $M_2$ be affine subspaces of $\mathbb{R}^n$ with $M_1 \cap M_2 \neq \emptyset$ and $\dim(M_1 \cap M_2)=\dim(M_1)$ . Then $M_1 \subset M_2$ . I tried to prove by contradiction and took an element in $M_1$ and assumed it was not in the intersection, but it did not work. I also tried to use the dimensions of linear subspaces to create these affine subspaces, but it did not work. Any help is appreciated.
|
Without loss of generality, assume that $M_1$ and $M_2$ are linear subspaces of $\mathbb{R}^n$ (translate both by $-p$ for some $p \in M_1 \cap M_2$ , which exists since the intersection is nonempty; this changes neither the dimensions nor the inclusion). Pick a basis of $M_1 \cap M_2$ , say $v_1, \ldots, v_k$ . But then $v_1, \ldots, v_k$ lie in particular in $M_1$ , so by the dimensionality assumption they span $M_1$ and are therefore a basis, whence $M_1 \subseteq M_2$ .
|
|linear-algebra|affine-geometry|
| 0
|
The rate of convergence of $x_n$ to $0$
|
Consider the sequence $\{x_{n}\}_{n=1}^{\infty}$ consisting of positive numbers that satisfies the following conditions: $(1).\lim_{n\to\infty}nx_n=\infty$ ; $(2).\lim_{n\to\infty}x_n=0$ . Does there exist a $d\in(0,1)$ , two positive constants $K,L$ , and an index $N_0$ such that for all $n>N_0$ the inequality $0<Kn^{-d}\le x_n\le Ln^{-d}$ holds? This outcome seems quite natural, but I am not sure if there are any counterexamples. I am attempting to rigorously prove it using a proof by contradiction.
|
The sequence $x_n = \log(n)/n$ satisfies the assumptions, but for any $d \in (0,1)$ we have $\lim_{n\rightarrow \infty} x_n/n^{-d} = 0$ , and so no such $d$ exists satisfying the required condition.
|
|real-analysis|sequences-and-series|limits|
| 0
|
For $a,b,c\in\left[\frac{1}{\sqrt{6}}, 6\right]$: $\sum_{cyc}\frac{4}{a+3b}\geq \sum_{cyc}\frac{3}{a+2b}$
|
For $a,b,c\in\left[\frac{1}{\sqrt{6}}, 6\right]$ prove that $$\frac{4}{a+3b}+\frac{4}{b+3c}+\frac{4}{c+3a}\geq\frac{3}{a+2b}+\frac{3}{b+2c}+\frac{3}{c+2a}.$$ I can't really find a way to exploit the given condition. I noticed we can substitute $a\leftarrow \frac{\sqrt{6}}{a}$ , $b\leftarrow \frac{\sqrt{6}}{b}$ , $c\leftarrow \frac{\sqrt{6}}{c}$ preserving the conditions on $a,b,c$ which leads us to the same inequality for $\frac{1}{a}$ , $\frac{1}{b}$ , $\frac{1}{c}$ where the terms are a bunch of weighted harmonic means. Any ideas would be appreciated.
|
Ok this is P4 Seniors from Romania NMO 2015 SHL. My proof is: $$\sum \frac{4}{a+3b} \geq \sum \frac{3}{a+2b} $$ $$\sum \left(\frac{4}{a+3b}- \frac{3}{a+2b}\right) \geq 0 $$ $$\sum \left(\frac{a-b}{(a+3b)(a+2b)}\right) \geq 0 $$ Since $\sum \frac{a-b}{12ab}=0$ , this is equivalent to $$\sum \left(\frac{a-b}{(a+3b)(a+2b)} - \frac{a-b}{12ab}\right) \geq 0 $$ $$\sum \frac{(a-b)(-a^2+7ab-6b^2)}{12ab(a+3b)(a+2b)}\geq 0 $$ $$\sum \frac{(a-b)^2(6b-a)}{12ab(a+3b)(a+2b)} \geq 0 $$ which is obvious
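As a sanity check (not part of the proof), a brute-force grid search over the allowed box confirms that the cyclic difference never goes negative; the grid resolution and tolerance are my choices:

```python
import math
import itertools

lo, hi = 1 / math.sqrt(6), 6.0

def diff(a, b, c):
    # sum of 4/(x+3y) - 3/(x+2y) over the cyclic pairs (a,b), (b,c), (c,a)
    cyc = [(a, b), (b, c), (c, a)]
    return sum(4 / (x + 3 * y) - 3 / (x + 2 * y) for x, y in cyc)

grid = [lo + (hi - lo) * k / 20 for k in range(21)]
worst = min(diff(a, b, c) for a, b, c in itertools.product(grid, repeat=3))
# equality holds at a = b = c, so the grid minimum is 0 (up to rounding)
```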
|
|inequality|summation|sum-of-squares-method|rearrangement-inequality|tangent-line-method|
| 0
|
Area inside the circle $x^2+y^2=4$, under $y=x\sqrt{3}$ and above $y=1$
|
I have to find the area in the circle $x^{2}+y^{2}=4$ such that it's under the line $y=x\sqrt{3}$ and above $y=1$ . Since in polar coordinates $x=r\cos(t), y=r\sin(t)$ , then if $y=1$ : $$r\sin(t)=1\Rightarrow r=\frac{1}{\sin(t)}$$ And since the radius of the circle is 2, the area should be given by $$\int_{0}^{\pi/3}\int_{1/\sin(t)}^{2}r\,dr\,dt$$ but the integral above diverges. What's wrong?
|
$y=\sqrt3 x\to\theta=\tfrac\pi 3$ , $(\sqrt3,1)\to\theta=\tfrac\pi 6$ . The given region is spanned by $\tfrac\pi 6\leq\theta\leq\tfrac\pi 3$ . As you determined, the outer radius of the region is $r_1=2$ , the inner radius is $r_2=\csc\theta$ . Hence, by area formula $$A=\int_{\tfrac\pi 6}^{\tfrac\pi 3}\tfrac12 \left(2^2-(\tfrac{1}{\sin\theta})^2\right)d\theta=\left.(2\theta+\tfrac12\cot\theta)\right\vert_{\tfrac\pi 6}^{\tfrac\pi 3}=\tfrac\pi 3-\tfrac1{\sqrt3}$$
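A numeric cross-check of this integral (midpoint rule; the step count is arbitrary):

```python
import math

a, b = math.pi / 6, math.pi / 3
n = 100_000
h = (b - a) / n
# A = integral over [pi/6, pi/3] of (1/2)(2^2 - csc^2(theta)) dtheta
A = sum(0.5 * (4 - 1 / math.sin(a + (k + 0.5) * h) ** 2) * h for k in range(n))

exact = math.pi / 3 - 1 / math.sqrt(3)
```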
|
|area|polar-coordinates|
| 0
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maximum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through the cosine theorem, I get the above formula. The value of $\dfrac{BD}{CD}$ should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcome.
|
(I'll write $x=m$ and $y=n$ ) Lagrange multipliers gives the following group of equations $$\begin{align} -\frac{4}{5-4y} + 2\lambda x = 0 \\ 4\frac{5-4x}{(5-4y)^2} + 2\lambda y = 0 \\ x^2 + y^2 = 1 \end{align} $$ Solving from first $\lambda = \frac{2}{x(5-4y)}$ and plugging it in second to get $$ (5-4x)x = -y(5-4y) $$ This is an equation of a circle: $$ \left(x-\frac{5}{8}\right)^2 + \left(y-\frac{5}{8}\right)^2 = \frac{25}{32} $$ So we're left with finding the intersections of this with the unit circle. They are $$ x = \frac{2}{5} - \sqrt{\frac{17}{50}} \\ y = \frac{2}{5} + \sqrt{\frac{17}{50}} $$ (and vice versa gives the minimum).
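A numerical confirmation of these stationary points (my check, comparing against an ad-hoc sweep of the unit circle):

```python
import math

x = 2 / 5 - math.sqrt(17 / 50)
y = 2 / 5 + math.sqrt(17 / 50)
f_max = (5 - 4 * x) / (5 - 4 * y)   # candidate maximum from the Lagrange system

# sweep the unit circle densely and take the largest sampled value
sweep = max((5 - 4 * math.cos(t)) / (5 - 4 * math.sin(t))
            for t in (2 * math.pi * k / 100_000 for k in range(100_000)))
```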
|
|geometry|triangles|
| 0
|
Proof that if $f$ is uniformly continuous then for every Cauchy sequence $(x_n)$ with $a < x_n < b$ $f(x_n)$ is also cauchy.
|
I need to show that the following statement holds true: Given $a, b \in \mathbb{R}$ , $a < b$ , $f: (a, b) \to \mathbb{R}$ , $f$ continuous. Show that $f$ uniformly continuous $\Rightarrow$ for all Cauchy sequences $(x_n)$ with $a < x_n < b$ , $(f(x_n))$ is a Cauchy sequence. I feel like I am very close to the proof but I just cannot see the last step. Can you please help me? My proof goes like this so far: $f$ is uniformly continuous $\Leftrightarrow$ $\forall \epsilon > 0 \space \exists \delta > 0: |f(x) - f(x')| < \epsilon$ for all $x, x'$ with $|x - x'| < \delta$ . Let $(x_n)_{n \in \mathbb{N}}$ be an arbitrary Cauchy sequence with $a < x_n < b$ , i.e. $\forall \epsilon > 0 \space \exists N \in \mathbb{N}: |x_n - x_m| < \epsilon$ for all $n, m \geq N$ . We need to show that $(f(x_n))$ is Cauchy. So let $\epsilon > 0$ . Search for $N \in \mathbb{N}$ such that $|f(x_n) - f(x_m)| < \epsilon$ for all $n, m \geq N$ . Because of $f$ being uniformly continuous and $x_n, x_m \in (a, b) \space \forall n, m$ there exists $\delta > 0: |f(x_n) - f(x_m)| < \epsilon$ for all $n, m$ with $|x_n - x_m| < \delta$ . This is almost exactly the definition of $(f(x_n))$ being cauchy but without the restriction
|
This question is already very old but I still want to provide an answer since I learned that this problem is actually not hard at all. Let $\epsilon > 0$ be given. Since $f: (a, b) \to \mathbb{R}$ is uniformly continuous there exists a $\delta_\epsilon > 0$ such that $|x-y| < \delta_\epsilon \Rightarrow |f(x)-f(y)| < \epsilon$ . Since $(x_n)$ is a Cauchy sequence there exists an $N_\epsilon \in \mathbb{N}$ such that $\forall n, m \geq N_\epsilon: |x_n - x_m| < \delta_\epsilon$ . Because of uniform continuity this implies $|f(x_n) - f(x_m)| < \epsilon$ and therefore $(f(x_n))$ is also Cauchy. Note that $f(x_n)$ is actually well defined because $a < x_n < b$ .
|
|real-analysis|continuity|cauchy-sequences|uniform-continuity|
| 1
|
Let $M_1$ and $M_2$ be affine subspaces of $\Bbb R^n$ with $M_1 \cap M_2 \neq \emptyset$ and $\dim(M_1 \cap M_2)=\dim(M_1)$. Then $M_1 \subset M_2$.
|
Let $M_1$ and $M_2$ be affine subspaces of $\mathbb{R}^n$ with $M_1 \cap M_2 \neq \emptyset$ and $\dim(M_1 \cap M_2)=\dim(M_1)$ . Then $M_1 \subset M_2$ . I tried to prove by contradiction and took an element in $M_1$ and assumed it was not in the intersection, but it did not work. I also tried to use the dimensions of linear subspaces to create these affine subspaces, but it did not work. Any help is appreciated.
|
Let $a\in M_1\cap M_2$ . Let $U_1$ and $U_2$ be two linear subspaces of $\mathbb R^n$ such that $$M_i=a+U_i\quad(i=1,2)$$ By definition, $\dim M_i=\dim U_i$ . On the other hand, we obviously have that $$M_1\cap M_2=a+U_1\cap U_2$$ So, by hypothesis, we have that $$\dim U_1\cap U_2=\dim U_1$$ $U_1\cap U_2$ is a linear subspace of $U_1$ . Therefore $$U_1\cap U_2=U_1$$ So $$M_1=a+U_1=a+U_1\cap U_2\subset M_2=a+U_2.\square$$
|
|linear-algebra|affine-geometry|
| 1
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maximum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through the cosine theorem, I get the above formula. The value of $\dfrac{BD}{CD}$ should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcome.
|
A geometric approach: Draw a unit circle and a circle of radius $4$ , and mark $D$ at $(5,5)$ . Then for any point $(m,n)$ on the unit circle, $(4m,4n)$ lies on the outer circle, and $5-4m$ and $5-4n$ are the horizontal and vertical projections, illustrated for point $E$ . The ratio you wish to maximize is the slope, which will be maximal at the point of tangency forming triangle $DGA$ . We already have sides $AD=\sqrt{50},AG=4$ by construction, so $DG=\sqrt{AD^2-AG^2}=\sqrt{34}$ and the angle $DAG$ is $\arctan\frac{\sqrt{34}}{4}$ which is about $55.55$ degrees, and since the angle above the x-axis is $45$ degrees by construction, the angle under the x-axis is about $\alpha=10.55$ degrees. The coordinates of $m,n$ are $(\cos{\alpha},\sin{\alpha})$ which is about $(-0.9831,-0.1831)$ , giving maximal slope of about $5.36$ (I'm fixing the negative signs mentally here, since I inadvertently made the construct in quadrant II). These values should be exactly those found by @ploosu2.
|
|geometry|triangles|
| 0
|
Property of Lipschitz Domains
|
I have been working on a research problem of mine and came across the concept of Lipschitz domains. I am curious about whether it is possible to show that there always exists a bi-Lipschitz map from $\mathcal{X} \to \mathcal{Y}$ whenever $\mathcal{X}, \mathcal{Y} \subset \mathbb{R}^d$ are Lipschitz domains. I came across a result which shows the converse, i.e, if there exists a bi-Lipschitz map between $\mathcal{X}$ and $\mathcal{Y}$ and $\mathcal{X}$ is a Lipschitz domain then $\mathcal{Y}$ is also a Lipschitz domain. From my understanding of the topic (which is relatively limited), I think that there should exist a map between Lipschitz domains. However, I haven't been able to prove this claim or find a reference that proves/disproves the claim. Any leads/references will be greatly appreciated. Thanks!
|
With the definition you quoted, the existence of a bilipschitz homeomorphism is utterly wrong. The simplest example is $X={\mathbb R}$ and $Y=(0,1)\subset \mathbb R$ . One can give more interesting examples when both domains are bounded, for instance, $X$ is the unit disk in $\mathbb R^2$ and $Y$ is an annulus in $\mathbb R^2$ bounded by two disjoint circles. In the latter case, $X, Y$ are not even homeomorphic. On the other hand, if you assume that both $X, Y$ are relatively compact Lipschitz domains in $\mathbb R^n$ and there exists a homeomorphism $\bar{X}\to \bar Y$ , then there exists also a bilipschitz homeomorphism $\bar X\to \bar Y$ , unless $n=4$ or $n=5$ , see Tukia, P.; Väisälä, J. , Lipschitz and quasiconformal approximation and extension , Ann. Acad. Sci. Fenn., Ser. A I, Math. 6, 303-342 (1981). ZBL0448.30021 . as well as Luukkainen, Jouni , Lipschitz and quasiconformal approximation of homeomorphism pairs , Topology Appl. 109, No. 1, 1-40 (2001). ZBL0964.57023 .
|
|geometric-topology|diffeomorphism|
| 1
|
finite steps to Hessenberg form and/or triangular form
|
I am learning numerical linear algebra and am curious about one thing. It is possible to reduce any matrix to Hessenberg form in finitely many steps using unitary transformations. But why is it impossible to reduce it further to upper triangular form in finitely many steps? What is the fundamental barrier?
|
If you could reduce to a triangular matrix $A = QTQ^*$ (a Schur factorization ) in a finite number of steps (involving elementary arithmetic operations and n-th roots only), this would violate the Abel-Ruffini theorem : it would allow you to exactly (neglecting roundoff errors) compute roots of arbitrary polynomials in a finite number of steps. The reason is that the diagonal entries of the triangular matrix $T$ are equal to the eigenvalues of $A$ , and for any degree- $n$ polynomial $p(z)$ you can find the roots by transforming $p$ into a corresponding $n \times n$ matrix (a companion matrix ). The Abel–Ruffini theorem says that it is impossible to find the roots of an arbitrary polynomial of degree 5 or higher in a finite number of elementary steps (there is no "quintic formula"). That tells us that it is impossible to find the Schur factorization of an arbitrary matrix (exactly) in a finite number of steps for matrices $5 \times 5$ or larger. Hence, all Schur algorithms (unlike Hessenberg reduction) must be iterative.
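The companion-matrix construction mentioned above can be sketched as follows (a toy example with known roots; the polynomial is my choice):

```python
import numpy as np

# monic cubic p(z) = z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
a = [-6.0, 11.0, -6.0]          # coefficients a0, a1, a2 of the monic polynomial

n = len(a)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)      # ones on the subdiagonal
C[:, -1] = [-c for c in a]      # last column carries the negated coefficients

# the eigenvalues of the companion matrix are exactly the roots of p
roots = sorted(np.linalg.eigvals(C).real)
```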
|
|numerical-linear-algebra|computational-mathematics|
| 1
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maxmum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through cosine theorem, I get the above formula. The value should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcomed.
|
Let $m=\sin\theta$ and $n=\cos\theta.$ Since we are after the maximal value we may assume that $m\le 0$ and $n\ge 0.$ We want to maximize $$f(\theta)={4\sin\theta-5\over 4\cos\theta-5}$$ The numerator of the derivative $f'(\theta)$ is equal $$ 4\cos\theta(4\cos\theta-5)-(4\sin\theta-5)(-4\sin\theta)\\ =16\cos^2\theta -20\cos\theta-20\sin\theta+16\sin^2\theta\\ =4[4-5(\cos\theta+\sin\theta)] $$ Therefore $f'(\theta)=0$ if $\sin\theta+\cos\theta={4\over 5}.$ By taking into account $\sin^2\theta+\cos^2\theta=1$ we get that both $\sin\theta_0$ and $\cos\theta_0$ are the solution of the quadratic equation $$5x^2-4x-{9\over 10}=0$$ Since we want $\sin\theta_0\le 0$ and $\cos\theta_0\ge 0$ then $$\cos\theta_0={4+\sqrt{34}\over 10},\quad \sin\theta_0={4-\sqrt{34}\over 10}$$
|
|geometry|triangles|
| 0
|
Concern about the Lucas primality test
|
The Lucas primality test states that a number $N$ is prime iff there exists a base $B$ , $1 < B < N$ , such that: [a] $\ B^{N-1} \equiv 1 \pmod N$ ...and... [b] $B^{(N-1)/F} \not\equiv 1 \pmod N$ for all prime factors $F$ of $N-1$ . Now we could select $B$ at random but then the test wouldn't be deterministic. In order to force it to be so we would have to try every possible base until either [a] or [b] fails. So what's the lower bound? Well for most numbers I assume it's fairly reasonable but for one class of numbers at least it's in the neighborhood of $\ N^{1/3}$ : the Carmichael numbers. In fact, the only bases that give a definitive answer for a Carmichael number are those that contain at least one of its factors. In other words, trial division would beat a Lucas primality test on these numbers! So my concern here is whether or not this fact has been taken into account (or is at least a documented caveat) in the definition of Pratt certificates and such? *** EDIT *** I guess the takeaway here is that, w
|
For Carmichael numbers there is no base $B$ such that condition [b] is fulfilled. Therefore a valid Pratt certificate cannot be created. Choosing random bases is pretty useless for this test as condition [b] is not fulfilled for each pair of prime and base but only for at least one. Example: $43$ is prime, but $11^{42/3} = 11^{14} \equiv 1 \pmod{43}$ .
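These claims are easy to verify directly (a quick check of mine, using Python's built-in three-argument `pow` for modular exponentiation):

```python
N = 43                              # prime; N - 1 = 42 = 2 * 3 * 7
factors = [2, 3, 7]

# base 11 passes condition [a] but fails condition [b] at F = 3:
a_holds = pow(11, N - 1, N) == 1
b_fails = pow(11, (N - 1) // 3, N) == 1     # 11^14 mod 43 == 1

# yet some base does certify 43 (any primitive root mod 43 works):
witness = next(B for B in range(2, N)
               if pow(B, N - 1, N) == 1
               and all(pow(B, (N - 1) // F, N) != 1 for F in factors))
```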
|
|elementary-number-theory|primality-test|
| 0
|
$T\in\mathcal K(H)$ , $(e_n)$ orthonormal sequence. Prove $Te_n \rightarrow 0$.
|
Let $H$ be a Hilbert space and $T:H\to H$ a compact operator. Let $(e_n)$ be an orthonormal sequence in H. Prove $Te_n \rightarrow 0$ . Hint: what do you know about $\langle Te_n,f\rangle$ for $f\in H$ ? I'm taking a course in Hilbert space and operators and stumbled on this problem that I can't see how to solve. Can somebody please help me solve this?
|
The hint tells you to note that $\langle Te_n,f\rangle \to 0$ , as $$ \langle Te_n,f\rangle=\langle e_n,T^*f\rangle\to 0 $$ since $e_n$ converges to $0$ weakly. So $Te_n$ converges to $0$ weakly. Now for the sake of contradiction, suppose it does not converge strongly to $0$ , then along a subsequence $$ \lim_{n'\to \infty}\|Te_{n'}\|>\delta>0. $$ But then by compactness, there is a further subsequence such that $Te_{n''}\to g$ for some $g$ strongly. But $\|g\|>\delta$ by the above, while we still have $Te_{n''}$ converges weakly to $0$ , contradicting that weak and strong limits should agree.
|
|functional-analysis|hilbert-spaces|compact-operators|
| 1
|
solutions to the equation $(2m^2-1)^2=2n^2-1$, where $m$, $n$ are positive integers
|
I'm studying the equation $(2m^2-1)^2=2n^2-1$ ( $\ast$ ), where $m$ , $n$ are positive integers. It is known (e.g. by checking with Wolfram Alpha) that $m$ can only be 1 or 2. Now I want to prove that result. This is my attempt: If a prime $p \mid \gcd(m,n)$ , then since ( $\ast$ ) is equivalent to $(2m^2-1)(m^2-1)=(n-m)(n+m)$ , $p$ divides the RHS but not the LHS, which is impossible. So $\gcd(m,n)=1$ . Reducing ( $\ast$ ) modulo 8 we find $n$ must be odd. This is as far as I got. Can someone help me?
|
There is already an elementary proof in $\textit{Number Theory: Conceptions and Problems}$ written by Titu Andreescu et al. , which is example 3.56.
|
|diophantine-equations|
| 0
|
Simplex method: finding the next vertex?
|
I am quite new to the simplex algorithm, but I have been following the explanation of this excellent video . Unfortunately, I believe that there are some cases not discussed in the video that I'd like to understand. Let me set up the problem: Assume that I have variables $x_1$ and $x_2$ (the actual case I care about is higher-dimensional, but for the ability to visualize, let's keep it 2D), as well as a number of inequalities: $$ \begin{align} - x_1 \leq 0 \tag{1} \\ - x_2 \leq 0 \tag{2} \\ 0.5x_1 + x_2 \leq 5 \tag{3} \\ x_1 + 0.5x_2 \leq 5 \tag{4} \\ \end{align} $$ The resulting feasible region should look like below: Let us reformulate that into the equalities via the introduction of slack variables $s_1,\dots,s_4$ : $$ \begin{align} s_1 &= x_1 \tag{5} \\ s_2 &= x_2 \tag{6} \\ s_3 &= 5 - 0.5x_1 - x_2 \tag{7} \\ s_4 &= 5 - x_1 - 0.5x_2 \tag{8} \\ \end{align} $$ Now assume I start from vertex 1 ( $0,0$ ). Plugging this into the equations above yields $$ \begin{align} s_1 &= 0 \\ s_2 &=
|
We like to use the term " binding " if the constraint is active for a feasible solution within a model such that its corresponding slack value is equal to zero . What the Simplex Method is doing is exchanging basic and non-basic variables in and out of the basis: it releases one binding constraint and traverses an edge of the feasible region until another constraint becomes binding. It finds this improving direction by calculating the reduced cost of introducing a non-basic variable into the basis such that a new binding constraint is achieved after releasing one. ( Pages $2-3$ ) In terms of "correct" direction, this is a topic of pivoting rules, for which the shortest explanation is: you don't really know which route is the best route, but we know of methods that prevent cycling and methods that reduce CPU runtime; all we know is that there exists an improving direction(s) per pivot, and how we handle the direction(s) determines how fast the problem will solve. Additionally, there might be multiple solutions on a feasible point, thereby maki
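For the small example in the question, the vertices the simplex method walks between can be enumerated directly (a brute-force sketch, not the simplex method itself): intersect each pair of constraint boundaries and keep the feasible points.

```python
import itertools

# constraints a.(x, y) <= b: -x <= 0, -y <= 0, 0.5x + y <= 5, x + 0.5y <= 5
A = [(-1.0, 0.0), (0.0, -1.0), (0.5, 1.0), (1.0, 0.5)]
b = [0.0, 0.0, 5.0, 5.0]

def intersect(i, j):
    # solve the 2x2 system where constraints i and j hold with equality
    (a1, a2), (c1, c2) = A[i], A[j]
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        return None
    return ((b[i] * c2 - a2 * b[j]) / det, (a1 * b[j] - b[i] * c1) / det)

def feasible(p):
    return all(a1 * p[0] + a2 * p[1] <= bi + 1e-9
               for (a1, a2), bi in zip(A, b))

vertices = set()
for i, j in itertools.combinations(range(len(A)), 2):
    p = intersect(i, j)
    if p is not None and feasible(p):
        vertices.add(p)
```

This recovers the four corners of the feasible region in the picture, including the vertex $(10/3, 10/3)$ where constraints (3) and (4) are both binding.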
|
|linear-algebra|inequality|linear-programming|polytopes|simplex-method|
| 0
|
Generalizations of fibre products
|
For maps $f:X\to Z$ and $g:Y\to Z$ of topological spaces, we can define the fibre product as $X\times_{Z}Y=\{(x,y)\in X\times Y \mid f(x)=g(y)\}$ . I was wondering if there is a generalization of this concept. More precisely, if we have $f_i:X_i\to Z$ for $1\leq i\leq n$ , then can we define a generalized fibre product as the topological space $S=\{(x_1,\dots,x_n)\in \prod_{i=1}^nX_i \mid f_i(x_i)=f_j(x_j) ~\text{for all}~ i\neq j \}$ ?
|
Obviously the space you wrote down exists, it's just a (possibly empty) subspace of a finite product, so in one sense the answer to "can we define [...]" is trivially yes. But of course this begs the question of "in what ways is this a generalization $X \times_Z Y$ ", and this in turn begs the question of what properties of the fibre product we should consider in the first place. Here's one suggestion: The fibre product $X \times_Z Y$ satisfies a universal property: Given a space $W$ and maps $\alpha\colon W \to X$ , $\beta\colon W \to Y$ such that $f \circ \alpha = g \circ \beta$ , then there exists a unique map $\gamma\colon W \to X \times_Z Y$ such that $\alpha = \pi_X \circ \gamma$ and $\beta = \pi_Y \circ \gamma$ (where $\pi_X\colon X \times_Z Y \to X$ is the projection, and likewise for $\pi_Y$ ). The space $S$ also has a property like this: Given $W$ and maps $\alpha_i\colon W \to X_i$ such that $f_i \circ \alpha_i = f_j \circ \alpha_j$ for all $i, j$ , there exists a unique map
|
|fiber-bundles|fibre-product|
| 0
|
Atiyah-MacDonald Ch. 4 exercise 20: what's the module analogue of $\sqrt{\mathfrak{a}+\mathfrak{b}} = \sqrt{\sqrt{\mathfrak{a}}+\sqrt{\mathfrak{b}}}$?
|
Atiyah-MacDonald exercises 20-23 in chapter 4 develop a theory of primary decomposition for modules, in analogy with the theory developed in the chapter for rings. Exercise 20 begins with this definition: Definition: Given a (commutative, unital) ring $A$ and an $A$ -module $M$ , and a submodule $N\subset M$ , the radical of $N$ in $M$ is $$r_M(N) = \sqrt{\operatorname{Ann} M/N}$$ It then asks us to prove analogues to the formulas in exercise 1.13 for the radical of an ideal. Formula 1.13(v) is $$\sqrt{\mathfrak{a}+\mathfrak{b}} = \sqrt{\sqrt{\mathfrak{a}}+\sqrt{\mathfrak{b}}}$$ This is true by taking radicals in the pair of inclusions $\mathfrak{a}+\mathfrak{b}\subset \sqrt{\mathfrak{a}}+\sqrt{\mathfrak{b}}$ and $\sqrt{\mathfrak{a}+\mathfrak{b}} \supset \sqrt{\mathfrak{a}}+\sqrt{\mathfrak{b}}$ , the first of which is totally obvious and the second of which is because if $x^k\in\mathfrak{a}$ and $y^\ell\in\mathfrak{b}$ then $(x+y)^{k+\ell}\in\mathfrak{a}+\mathfrak{b}$ . It seems to me
|
I don't know what the right generalization should be, but the naive analogue $r_M(N+N')=\sqrt{r_M(N)+r_M(N')}$ is not true in general: Let $A=\mathbb{Z}$ , $M=\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}$ , where $p\in\mathbb{Z}$ is prime. Let $N=\langle e_1\rangle$ , $N'=\langle e_2\rangle$ . Then $r_M(N)=(p)=r_M(N')$ , thus $\sqrt{r_M(N)+r_M(N')}=\sqrt{(p)}=(p)$ and since $M=N+N'$ we have $r_M(N+N')=(1)$ .
|
|abstract-algebra|ring-theory|commutative-algebra|modules|
| 0
|
The closed unit ball of a normed space X is compact iff X is finite-dimensional
|
I have a question about this theorem: " The closed unit ball of a normed space X is compact iff X is finite-dimensional." If I take $(\mathbb{R},+,\cdot\,,\mathbb{Q})$ , then $\mathbb{R}$ is an infinite-dimensional vector space when we take the set of rational numbers as the field. But if we take the Euclidean norm, we know from the Heine-Borel theorem that the closed unit ball in $\mathbb{R}$ is compact. This contradicts the theorem. There must be something wrong with my example, but I want to know what it is.
|
Your space is not a normed space. A normed space is a vector space over $\mathbb{R}$ , or sometimes over $\mathbb{C}$ (which is not that different from $\mathbb{R}$ in this context) with some additional structure. By definition. But not over $\mathbb{Q}$ . That would have to be an explicit assumption, $\mathbb{Q}$ behaves very differently from the other two (topological) fields. For example over $\mathbb{Q}$ no finite dimensional ball is compact, while some infinite dimensional are (but not all). While $\mathbb{C}$ is quite similar to $\mathbb{R}$ in this context. In fact, $\mathbb{C}$ -normed spaces are pretty much $\mathbb{R}$ -normed spaces of even dimension.
|
|functional-analysis|vector-spaces|normed-spaces|compactness|
| 0
|
Ring and quotient ring
|
Let $R$ be a ring and $I$ an ideal of $R$ . Can the quotient ring $R/I$ have an identity even though $R$ does not? I'm having trouble finding an example. Thank you so much.
|
Take $R = \mathbb{Z} \times 2\mathbb{Z}$ (addition and multiplication are done componentwise). Then $R$ has no identity. Define $\varphi: R \longrightarrow R$ by $(x, y) \mapsto (x, 0)$ . Then $\ker(\varphi) = 0 \times 2\mathbb{Z}$ . Then $$R/\ker(\varphi) \cong \mathbb{Z}$$ which does have an identity. The idea behind this example is that we take a ring with an identity (such as $\mathbb{Z} \times 0$ ) and we add more elements (in this case $0 \times 2\mathbb{Z}$ ) so that what used to be an identity is no longer an identity. Then we quotient out the new elements to get back to the original ring.
|
|ring-theory|
| 0
|
Let $\xi \in \mathbb{R}$. Use Cauchy's Theorem to prove that $\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi ix \xi} dx = e^{-\pi \xi^2}$.
|
It wants us to prove that $\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi ix \xi} dx = e^{-\pi \xi^2}$ by integrating $f(z) = e^{-\pi z^2}$ over the rectangle with vertices $\pm n, \pm n + i \xi$ and taking the limit $n \to \infty$ . I have tried to do this using Cauchy's Integral Formula, but it ends up with a nasty integral that results in $\text{erf}(z)$ , which I know for a fact we cannot use. Any help on how to start with this problem? I have seen similar questions but none that asked to find it by integrating over this $2n \times i \xi$ rectangle.
|
We first estimate the integral of $e^{-\pi z^2}$ over $\left[\pm n,\pm n+i\xi\right]$ . Note that for $z=\pm n+ic$ on the segment, we have $$\left | e^ {-\pi z^2} \right | =e^{\pi c^2-\pi n^2 }\leqslant e^{\pi \xi^2-\pi n^2}, $$ therefore $$\left | \int_{\pm n}^{\pm n+i\xi } e^{-\pi z^2}\mathrm d z \right | \leqslant \left | \xi \right |e^{\pi \xi^2-\pi n^2}\to 0. $$ Now fix $n$ ; it is easy to see that the integrand is analytic. Recall that $$\text{The Rectangle}=\left [ -n,n \right ] \cup \left [ -n+i\xi,n +i\xi \right ] \cup \left [ n,n+i\xi \right ] \cup \left [ -n,-n+i\xi \right ] $$ and the whole integral over the rectangle is $0$ . Take $n\to \infty$ and we see that $$\int_{-\infty+i\xi }^{\infty+i\xi} e^{-\pi z^2}\mathrm d z=\int_{-\infty }^{\infty} e^{-\pi z^2}\mathrm d z.$$ Therefore we have $$\int_{-\infty }^{\infty} e^{-\pi \left ( z+i\xi \right ) ^2}\mathrm d z=\int_{-\infty }^{\infty} e^{-\pi z^2}\mathrm d z=1.$$ Expanding $e^{-\pi(z+i\xi)^2}=e^{-\pi z^2}e^{-2\pi iz\xi}e^{\pi\xi^2}$ , take the factor $e^{\pi\xi^2}$ out of the left hand side and we are done.
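A numerical spot check of the identity at a single $\xi$ (midpoint rule; truncating at $|x|\le 6$ is more than enough given the Gaussian decay, and the imaginary part of the integrand vanishes by symmetry):

```python
import math

xi = 0.7
R, n = 6.0, 200_000
h = 2 * R / n
# real part of exp(-pi x^2) exp(-2 pi i x xi), integrated over [-R, R]
lhs = sum(math.exp(-math.pi * t * t) * math.cos(2 * math.pi * t * xi) * h
          for t in (-R + (k + 0.5) * h for k in range(n)))
rhs = math.exp(-math.pi * xi * xi)
```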
|
|calculus|complex-analysis|cauchy-integral-formula|
| 0
|
Closure of quasi-projective variety: if $X=V(I)\setminus V(J)$, must $\overline{X}=V(I)$?
|
Let $X=V(I)\setminus V(J)$ in a complex projective space $\Bbb P^n$ , where $I,J$ are ideals of complex polynomials in $n+1$ variables and $V(\cdot)$ denotes their common zeros; that is, $X$ is a locally closed set in the Zariski topology. I was told that the closure of $X$ is $V(I)$ and that $V(J)$ is equal to that closure 'less' $X$ . This seems false to me: I can only say that the closure is contained in $V(I)$ .
|
You're correct that if one writes $X=V(I)\setminus V(J)$ , it is not necessarily true that $\overline{X}=V(I)$ , but given a locally closed subset $X$ one can always choose $I,J$ so that $\overline{X}=V(I)$ : since $\overline{X}$ is closed, it is of the form $V(I')$ for some homogeneous ideal $I'$ , and then you can write $\overline{X}\setminus X = V(I')\setminus V(J+I')$ , as $V(J)\cap V(I') = V(J+I')$ . Here's a counterexample: let $X=V(xy)\setminus V(y)\subset\Bbb P^2$ . Then $\overline{X}=V(x)\neq V(xy)$ , so you can write $X=V(x)\setminus V(x,y)$ .
|
|algebraic-geometry|zariski-topology|
| 0
|
Wordle: Best Play to keep Average Down Given Five Possible Solutions
|
Suppose the following situation in Wordle. Please allow me to assume you know the rules. You play the word SALET You get SAL all green and ET both gray. You deduce that there are five possible answers: SALAD SALSA SALVO SALON SALLY (And for the sake of this discussion, please assume that only these five are possible solutions. There are some other rare words possible, but Wordle does not use rare words as solutions nor simple plurals with S on the end, etc.) Two Questions: In regular mode (where you can play any acceptable word next), if you are after the lowest average score in your play, what should be your next play? (I will withhold my answer for now.) In hard mode, (where you must play one of the five words) what should a reasonable player, one who wants to keep his average the lowest possible, average in such a situation? (My guess is 3.2. But I am not sure.)
|
At this point, you've narrowed down the final two letters to the set DNOVY . In regular mode, play any word that matches 4 of the given letters (I can't find one with all 5). For example, DOWNY, ENVOY, or SYNOD. Suppose that you play DOWNY. Then the possible outcomes all produce different letter highlighting: yellow + gray + gray + gray + gray = SALAD gray + gray + gray + gray + gray = SALSA gray + yellow + gray + gray + gray = SALVO gray + yellow + gray + yellow + gray = SALON gray + gray + gray + gray + green = SALLY So you now have enough information to find the word. Exactly 3 guesses are required: SALET, DOWNY, and then the correct word. In hard mode, don't pick SALLY, as that will give you no additional information if it isn't the correct word. Otherwise: If you guess the correct word, you win. If you get one yellow and one gray, then you know the correct word: If you picked either SALAD or SALSA, and get a yellow A, pick the other word. If you picked either SALVO or SALON, and g
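The five feedback patterns can be checked mechanically (a small script of mine using the standard Wordle coloring rules):

```python
def feedback(guess, target):
    # standard Wordle scoring: mark greens first, then yellows
    # consume the remaining multiset of target letters
    res = ["gray"] * len(guess)
    remaining = list(target)
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            res[i] = "green"
            remaining.remove(g)
    for i, g in enumerate(guess):
        if res[i] == "gray" and g in remaining:
            res[i] = "yellow"
            remaining.remove(g)
    return tuple(res)

words = ["SALAD", "SALSA", "SALVO", "SALON", "SALLY"]
patterns = {w: feedback("DOWNY", w) for w in words}
# every candidate produces a different pattern, so DOWNY always disambiguates
all_distinct = len(set(patterns.values())) == len(words)
```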
|
|probability|
| 0
|
set the limits of integration of the spherical coordinates between two paraboloids and a plane
|
Find the volume of the solid $\mathcal{S}$ enclosed laterally by the paraboloids $\mathcal{P}_1$ of equation $z = x^2 + y^2$ and $\mathcal{P}_2$ of equation $z = 3(x^2 + y^2)$ and from above by the plane $z = 1$ using spherical coordinates. How may one set the limits of integration of the spherical coordinates? Now, for a solid bounded below by the paraboloid $z=a^2(x^2+y^2)$ for $a>0$ , and above by the plane $z=1$ , the volume can be obtained and it is equal to $\dfrac{\pi}{2a^2}$ . Using this result we can get the volume of the solid $\mathcal{S}$ and obtain that $$ V(\mathcal{S})=V_{\mathcal{P}_1}-V_{\mathcal{P}_2}=\dfrac{\pi}{2 \times 1}-\dfrac{\pi}{2\times 3}=\dfrac{\pi}{3}. $$ But how can one obtain the volume of $\mathcal{S}$ using spherical coordinates?
|
Let's first consider the part of the figure in the first octant ( $x,y\geqslant 0$ ) and at the end multiply the result by $4$ . Let me consider spherical coordinates $$\begin{cases}x=r\sin\phi\cos\theta & \\ y= r \sin\phi\sin\theta & \\ z=r\cos\phi \end{cases}$$ Accordingly we have $0\leqslant r < \infty$ , $\phi \in [0,\pi]$ and $\theta \in (0, 2\pi]$ . The paraboloid $z=x^2+y^2$ gives $r=\frac{\cos\phi }{\sin^2\phi}$ and $z=3(x^2+y^2)$ gives $r=\frac{\cos\phi }{3\sin^2\phi}$ . As expected they do not depend on $\theta$ , so it takes its full range in the first octant, $\left[0, \frac{\pi}{2} \right]$ . The plane $z=1$ obviously becomes $r=\frac{1}{\cos\phi}$ . With respect to $\phi$ the volume is divided in two parts: from $\frac{\pi}{6}$ up to $\frac{\pi}{4}$ , where a ray from the origin first meets $z=3(x^2+y^2)$ and then the plane $z=1$ , i.e. we are between $r=\frac{\cos\phi }{3\sin^2\phi}$ and $r=\frac{1}{\cos\phi}$ . And from $\frac{\pi}{4}$ up to $\frac{\pi}{2}$ the volume is between $r=\frac{\cos\phi }{3\sin^2\phi}$ and $r=\frac{\cos\phi
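The decomposition described above can be integrated numerically as a check (midpoint rule, my choice of step counts; the exact value $\pi/3$ is known from the computation in the question):

```python
import math

def piece(phi_lo, phi_hi, r_in, r_out, n=20_000):
    # integral of r^2 sin(phi) dr dphi between the two radial boundaries
    h = (phi_hi - phi_lo) / n
    s = 0.0
    for k in range(n):
        phi = phi_lo + (k + 0.5) * h
        s += (r_out(phi) ** 3 - r_in(phi) ** 3) / 3 * math.sin(phi) * h
    return s

p2 = lambda phi: math.cos(phi) / (3 * math.sin(phi) ** 2)   # z = 3(x^2 + y^2)
p1 = lambda phi: math.cos(phi) / math.sin(phi) ** 2         # z = x^2 + y^2
plane = lambda phi: 1 / math.cos(phi)                        # z = 1

# theta sweeps [0, pi/2] in the first octant; multiply by 4 at the end
V = 4 * (math.pi / 2) * (piece(math.pi / 6, math.pi / 4, p2, plane)
                         + piece(math.pi / 4, math.pi / 2, p2, p1))
```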
|
|integration|multivariable-calculus|multiple-integral|spherical-coordinates|cylindrical-coordinates|
| 0
|
Ring and quotient ring
|
Let $R$ be a ring and $I$ an ideal of $R$ . Can the quotient ring $R/I$ have an identity even though $R$ does not? I'm having trouble finding an example. Thank you so much.
|
Take $R = 2\Bbb Z$ and $I = R$
|
|ring-theory|
| 0
|
$A\subset\mathbb{N},$ natural density $1/2.$ Half the members of $A$ are even, half are odd. Is $A$ an (eventual) additive basis of $\mathbb{N}?$
|
Proposition: If $A\subset\mathbb{N},\ A$ has natural density $d> \frac{1}{2},$ then $\exists\ N\in\mathbb{N}\ $ such that $\ n>N \implies \exists\ a,b\in A\ $ such that $\ a+b=n.$ Proof sketch: Since $d> \frac{1}{2},\ \exists\ N\ $ such that $\ \left\lvert A\cap \{1,2,\ldots, n\} \right\rvert > \frac{n}{2}\ \forall\ n>N.\ $ Consider any $n>N.$ By considering the $\frac{n}{2}\ $ pairs $\ (1,n-1),\ (2,n-2),\ \ldots,\ \left( n/2,n/2 \right),\ $ by the pigeonhole principle both members of one of these pairs must be in $A$ , that is, $\ x\in A\ $ and $\ n-x \in A,\ $ as desired. If we replace $d> \frac{1}{2}$ in the above proposition with $\ d = \frac{1}{2},\ $ then the set of even numbers (or the set of odd numbers) are counter-examples to the proposition. Furthermore, requiring there to be at least one even and at least one odd number in $A$ doesn't resolve this: consider $\left(2\mathbb{N}\setminus\{2^n:n\in\mathbb{N}\}\right)\cup\{7\}.$ Then no such $N$ exists. But what about if, for example, $A$ has n
|
A very straightforward counter-example exists: $\ A = (4\mathbb{N})\ \cup (4\mathbb{N}+1).\ $ Then, $A+A = (4\mathbb{N})\ \cup (4\mathbb{N}+2)\cup (4\mathbb{N}+1),\ $ which is not equal to $\mathbb{N},\ $ and has density $\frac{3}{4}.$
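A quick computational check of this counterexample (finite truncation; the range is my choice):

```python
N = 200
A = {n for n in range(N) if n % 4 in (0, 1)}     # A = 4N ∪ (4N + 1)
sums = {a + b for a in A for b in A}

# within A, half the members are even (4N) and half odd (4N + 1),
# yet every n ≡ 3 (mod 4) is missed by A + A
missed_residues = {n % 4 for n in range(3, N) if n not in sums}
```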
|
|examples-counterexamples|pigeonhole-principle|additive-combinatorics|
| 1
|
What are the eigenvalues and eigenvectors of $\operatorname{ad}x$ for non-diagonalizable $x$?
|
We know the following proposition is true. The proof together with the specification of the eigenvectors of $\operatorname{ad}x$ is here . Let $x\in \mathfrak{gl}(n,F)$ be diagonalizable with $n$ eigenvalues $a_1,\ldots,a_n$ in $F$ . The eigenvalues of $\operatorname{ad}x$ , where $\operatorname{ad}x(y):=[x,y]=xy-yx$ are precisely the $n^2$ scalars $a_i-a_j$ ( $1\leq i,j\leq n$ ). What is the result if $x$ is not diagonalizable? We know $x$ can always be transformed into the Jordan canonical form. I solved the cases of $2\times2$ and $3\times3$ Jordan canonical forms. I would like to know the general solution.
|
@Justauser has given a great answer to this that's the best way to think about it from a hands-on perspective, but there's also a machinery-heavy perspective from which this fact isn't surprising: $\operatorname{ad}$ is an algebraic representation, and algebraic representations carry semisimple (respectively, nilpotent) elements to semisimple (respectively, nilpotent) elements. Indeed, this is why the notion of the Jordan decomposition of an algebraic Lie algebra can be defined intrinsically (as opposed to an abstract Lie algebra, where we'd want to consider every element of $\operatorname{Lie}(\mathbb R)$ nilpotent and every element of $\operatorname{Lie}(\mathbb R^\times)$ semisimple, but the non-algebraic isomorphism $\operatorname{Lie}(\exp) : \operatorname{Lie}(\mathbb R) \to \operatorname{Lie}(\mathbb R^\times)$ forces us to give up on morphisms respecting this notion). The relevant result in full generality is Theorem 4.4 of Borel - Linear algebraic groups ; and the special case
|
|linear-algebra|eigenvalues-eigenvectors|lie-algebras|adjoint-action|
| 0
|
Asymptotic Gambler's Ruin Probability with Unequal Gain/Loss with Zero-Mean Payoff Distribution
|
The gambler's ruin problem with unequal gain/loss with a payoff distribution whose support is a finite subset of $\mathbb Z$ is an old problem; for example, see Feller (1968, Vol.1, Section XIV.8) and this old MSE question . The simple case of the problem where the payoff distribution takes $-1$ and $1$ with probabilities $p$ and $1-p$ , has been studied in many introductory text books (it can be easily adjusted for the case the payoff distribution takes $-1$ , $1$ , and $0$ with probabilities $p$ , $q$ , and $1-p-q>0$ ). The ruin probability for the general case is a linear combination of the roots of the following equation: $$P_X(z)=1 \Leftrightarrow M_X(t)=1$$ where $P_X(z)=\mathbb E(z^X)$ and $M_X(t)=\mathbb E(e^{tX})$ are the generating function and moment generating function of $X$ , respectively. Inspired by the heuristic method used in this answer , I guess if $\mathbb E(X)=0$ and the range of $X$ is fixed, we have $$\color{blue}{\lim_{N,M \to \infty} \mathbb P_\text{ruin}(M,N)
|
This can be proved by appealing to the optional stopping theorem for martingales. If $\ X_r\ $ is the amount the player wins in round $\ r\ ,$ $$ \color{blue}{X_r\in[-a,b\,]} $$ with probability $1$ , where $\ a\ $ and $\ b\ $ are positive, $\ \mathbb{E}\big(X_r\big)=0\ ,$ and $\ \mathbb{E}\big(\big|X_r\big| \big)>0\ ,$ then the player's wealth $\ W_r\ $ after round $\ r\ $ is given by $\ W_0=M\ $ and $$ W_r=M+\sum_{i=1}^rX_i $$ for $\ r\ge1\ ,$ provided the process hasn't terminated. Since \begin{align} \Bbb{E}\big(W_{r+1}\,\big|\,W_1,W_2,\dots,W_r\,\big)&=\Bbb{E}\big(W_r+X_{r+1}\,\big|\,W_1,W_2,\dots,W_r\,\big)\\ &=W_r\ , \end{align} the process $\ \big\{W_r\big\}\ $ is a martingale . The process stops on round $\ T\ ,$ given by $$ T=\inf\big\{\,r\ge1\,\big|\,W_r\le0\ \text{ or }\ W_r\ge N\,\big\}\ . $$ The random variable $\ T\ $ is a stopping time for $\ \big\{W_r\big\}\ ,$ and is finite with probability $1$ (will be proved later). Therefore, the optional stopping theorem tells us
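The limiting statement in the question is cut off above, but the optional-stopping conclusion is easy to test in the simplest case $X_r=\pm1$ with equal probability, where $\mathbb E(W_T)=M$ gives ruin probability $1-M/N$. A Monte Carlo sketch with arbitrary parameters:

```python
import random

random.seed(0)

def ruin_prob(M, N, trials=20_000):
    """Monte Carlo ruin probability for a fair +/-1 walk started at M,
    absorbed at 0 (ruin) or N (win)."""
    ruined = 0
    for _ in range(trials):
        w = M
        while 0 < w < N:
            w += random.choice((-1, 1))
        ruined += (w <= 0)
    return ruined / trials

est = ruin_prob(3, 10)
exact = 1 - 3 / 10   # from E[W_T] = M via optional stopping
```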
|
|real-analysis|probability|statistics|stochastic-processes|
| 1
|
Proof of one version of Cauchy-Schwarz in $\mathbb{R}^n$
|
Show that for $a,b \in \mathbb{R}$ and $x,y > 0$ that $$\frac{(a+b)^2}{x+y} \le \frac{a^2}{x} + \frac{b^2}{y}$$ and generalize this result for $a_1, a_2, \dots, a_n \in \mathbb{R}$ and $x_1, x_2, \dots, x_n > 0$ . I am not sure how to solve this problem. Specifically, I tried expanding the left-hand side first and reducing either $x$ or $y$ in the denominator. This couldn't work because the $2ab$ term was not taken care of. Then I tried to move everything to one side and show that $$\frac{(a+b)^2}{x+y} - \frac{a^2}{x} - \frac{b^2}{y} \le 0 .$$ But I just don't see how to factor the left side. Edit: This is a subpart of a question where I need to prove the C-S inequality. I cannot use the C-S inequality.
|
First prove the following - Lemma: Let $x,y \in \mathbb{R}^+$ . Then, $\dfrac{x+y}{2} \geq \sqrt{xy}$ . (This is a special case of the AM-GM Inequality for two variables.) Now, as per your work, it suffices to show that $\frac{(a+b)^2}{x+y} - \frac{a^2}{x} - \frac{b^2}{y} \leq 0$ . Combining denominators and simplifying, show that this is equivalent to proving $2abxy \leq a^2y^2 + b^2x^2$ . Conclude the proof by applying the Lemma (Absolute values may be involved).
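A quick random check of the two-variable inequality never hurts (the sample ranges below are arbitrary choices):

```python
import random

random.seed(1)
holds = True
for _ in range(10_000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    lhs = (a + b) ** 2 / (x + y)
    rhs = a ** 2 / x + b ** 2 / y
    holds = holds and lhs <= rhs + 1e-9   # small slack for rounding
```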
|
|inequality|cauchy-schwarz-inequality|
| 1
|
Is the order of sequences preserved in the limit in general Polish space?
|
Suppose that X is a Polish space with a natural order topology, that is, for any $x \in X$ , both sets $\{ y: y \ge x\}$ and $\{y: y \le x\}$ are closed. Consider two sequences $x^n$ and $y^n$ in $X$ , which converge to $x$ and $y$ , respectively. If $x^n \ge y^n$ for all $n$ , then $x \ge y$ . The proof of this result is easy if $X$ is the Euclidean space but seems quite elusive in the general (Polish) space. I would much appreciate it if anyone could enlighten me with a proof.
|
This is sort of a boring counterexample: let $X = \mathbb{R}$ be equipped with its usual topology but a different partial order $\leq$ defined by, $$a \leq b \Leftrightarrow a = b \, \mathrm{or} \, a = -r, b = r \, \mathrm{for \, some} \, r > 1$$ Then for any $x \in X$ , both $\{y: y \geq x\}$ and $\{y: y \leq x\}$ are closed because both are finite. We have $-1-\frac{1}{n} \leq 1+\frac{1}{n}$ for all $n$ , but $\lim_n (-1-\frac{1}{n}) = -1 \not\leq 1 = \lim_n (1+\frac{1}{n})$ .
|
|real-analysis|sequences-and-series|
| 0
|
How to deduce an expression of a specific conditional expression
|
The problem occurs when reading Bombardini et al., 2023, "Did US Politicians Expect the China Shock?", American Economic Review , Vol.1, pp. 174-209 . The authors define $\xi_{it}$ to be a Gaussian term obeying $N(0,2\sigma_{\xi}^2)$ ( see line 7, page 7 in the link ), and $Y_{i,t} = \mathbf{1}\{a_{t}\theta_{i}+b_{t}+\delta_{t} E[S_{i,t+1}|\mathcal{I}_{i,t}]\geq\xi_{it}\}$ ( see equation 4, page 8 ). To my understanding, $a_{t}\theta_{i}+b_{t}+\delta_{t} E[S_{i,t+1}|\mathcal{I}_{i,t}]$ can be seen as a constant once $\mathcal{I}$ is given. I do not know how, but the authors manage to show that $$-E[Y_{i,t}\xi_{i,t}|\mathcal{I}_{i,t}]=(1-Y_{i,t})\frac{\phi(a_{t}\theta_{i}+b_{t}+\delta_{t} E[S_{i,t+1}|\mathcal{I}_{i,t}])}{1-\Phi(a_{t}\theta_{i}+b_{t}+\delta_{t} E[S_{i,t+1}|\mathcal{I}_{i,t}])},$$ where $\Phi$ and $\phi$ are respectively the distribution function and density function of the standard normal distribution ( see equation (13), page 12 in the link ). Can anyone explain to me in det
|
I am not entirely sure if this proof is complete and correct, but I would like to share what I have attempted. Let $z\in\mathbb{R}$ , then it holds $$ \begin{aligned} \int_{\mathbb{R}}-\mathbb{1}_{\{z\ge x\}}x\ d\mu_{\mathcal{N}(0,1)}(x) &=\int_{\mathbb{R}}-\mathbb{1}_{\{z\ge x\}}x\phi(x)\ dx\\ &=\phi(z)\\ &=\frac{\phi(z)}{1-\Phi(z)}\int_\mathbb{R}\mathbb{1}_{\{z\lt x\}}\ d\mu_{\mathcal{N}(0,1)}(x) \end{aligned} $$ Applying Fubini's theorem for conditional expectations yields $$ \begin{aligned} \int_{\mathbb{R}}E[-\mathbb{1}_{\{Z\ge x\}}x|\mathcal{G}]\ d\mu_{\mathcal{N}(0,1)}(x) &=E[\int_{\mathbb{R}}-\mathbb{1}_{\{Z\ge x\}}x\ d\mu_{\mathcal{N}(0,1)}(x)|\mathcal{G}]\\ &=E[\int_\mathbb{R}(1-\mathbb{1}_{\{Z\ge x\}})\frac{\phi(Z)}{1-\Phi(Z)}\ d\mu_{\mathcal{N}(0,1)}(x)|\mathcal{G}]\\ &=\int_\mathbb{R}E[(1-\mathbb{1}_{\{Z\ge x\}})\frac{\phi(Z)}{1-\Phi(Z)}|\mathcal{G}]\ d\mu_{\mathcal{N}(0,1)}(x) \end{aligned} $$ for a random variable $Z$ and a $\sigma$ -algebra $\mathcal{G}$ . You then might want to choose $Z:=a_{t}\theta_{i}+b_{t}+\delta_{t} E[S_{i,t+1}|\mathcal{I}_{i,t}]$ and $\mathcal{G}:=\mathcal{I}_{i,t}$ .
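The first identity used above, $\int_{\mathbb R}-\mathbb 1_{\{z\ge x\}}\,x\ d\mu_{\mathcal N(0,1)}(x)=\phi(z)$, can be checked numerically with a plain midpoint rule (the lower cutoff and step count are arbitrary choices):

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def lhs(z, lo=-8.0, steps=200_000):
    # midpoint rule for the integral of (-x) * phi(x) over (lo, z)
    h = (z - lo) / steps
    return sum(-(lo + (k + 0.5) * h) * phi(lo + (k + 0.5) * h)
               for k in range(steps)) * h

vals = [(z, lhs(z), phi(z)) for z in (-1.0, 0.0, 0.7, 2.0)]
max_err = max(abs(l - p) for _, l, p in vals)
```

The agreement reflects the fact that $\frac{d}{dx}\phi(x)=-x\phi(x)$.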
|
|probability|statistics|stochastic-processes|economics|
| 0
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maximum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through the cosine theorem, I get the above formula. The value should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcome.
|
The maximum of the functions $f_{\pm}(x)=\frac{5\pm4\sqrt{1-x^2}}{5-4x}$ on $[-1,1]$ is asked. The numerators of their first derivatives give the equations $5\sqrt{1-x^2}=\pm(5x-4)$ for critical points, respectively. The solution of the equation belonging to $f_+$ gives the critical point $x=\frac{4+\sqrt{34}}{10}$ with the maximum value $\frac{25+4\sqrt{34}}{9}\approx 5.37>5$ on the interval $[-1,1]$ . On the other hand, the maximum of $f_-$ is $5$ at one of the boundary points, namely $x=1$ . WolframAlpha gives more details.
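A brute-force scan of the unit circle confirms the maximum value $\frac{25+4\sqrt{34}}{9}$ (the grid size is an arbitrary choice):

```python
import math

# Scan (5-4m)/(5-4n) over the unit circle m = cos t, n = sin t.
STEPS = 200_000
best = 0.0
for k in range(STEPS):
    t = 2 * math.pi * k / STEPS
    m, n = math.cos(t), math.sin(t)
    best = max(best, (5 - 4 * m) / (5 - 4 * n))

exact = (25 + 4 * math.sqrt(34)) / 9
```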
|
|geometry|triangles|
| 0
|
$\langle T,\varphi_n\rangle\rightarrow 0$ for all distributions $T$ of finite order $\implies\varphi_n\rightarrow 0$ in $\mathcal D(\mathbb{R}^{d})$.
|
Let $(\varphi_n)_{n \in \mathbb{N}} \subset \mathcal{D}(\mathbb{R}^{d})$ such that $\langle T, \varphi_n \rangle \rightarrow 0$ for all distributions $T$ of finite order. Prove that $\varphi_n \rightarrow 0$ in $\mathcal{D}(\mathbb{R}^{d})$ . My attempt: We have to prove that there exists a compact $K \subset \mathbb{R}^{d}$ such that $\operatorname{supp}(\varphi_n) \subset K$ for all $n \in \mathbb{N}$ and $|D^\alpha \varphi_n(x)|\rightarrow 0$ uniformly in $K$ for all $\alpha\in\mathbb{N}^{d}$ . Let $T \in \mathcal{D}'(\mathbb{R}^{d})$ of finite order, say $k$ . Then, there exists a compact $K_k$ and $C_{K_k}>0$ such that $|\langle T, \varphi \rangle| \leq C_{K_k}\max_{|\alpha|\leq k}\sup_{x \in K}|D^\alpha\varphi (x)|$ for all $\varphi \in C_{0}^{\infty}(K_{k})$ . But I don't know how to proceed.
|
I don't believe the statement is true. In dimension 1, if $\varphi_n(x)=\frac{x^n}{n!}\theta(x)$ where $\theta$ is some adequate bump function, for any fixed integer $k$ the sequence $(\varphi^{(k)}_n)_n$ goes uniformly to $0$ as $n\rightarrow +\infty$ (which implies convergence towards $0$ when applying any finite order distribution ) but the uniform norm of $\varphi_n^{(n)}$ is at least $1$ (which forbids convergence to $0$ in $\mathcal{D}(\mathbb{R})$ ).
|
|functional-analysis|distribution-theory|
| 0
|
The "turning-point fraction" of a random sample from a discrete distribution must have expectation less than 2/3?
|
A sequence of reals $x_1,...,x_n$ is said to have a turning point at index-value $i$ ( $1\lt i\lt n$ ) iff $x_{i-1}\lt x_{i}\gt x_{i+1}$ or $x_{i-1}\gt x_{i}\lt x_{i+1}$ . The number of turning points in the sequence is denoted $T(x_1,...,x_n)$ , and we define the turning-point fraction as $$R(x_1,...,x_n)={\text{number of turning points}\over\text{number of potential turning points}}={T(x_1,...,x_n)\over n-2}$$ so $0\le R(x_1,...,x_n)\le 1.$ If $X_1,...,X_n$ are random variables, we define the corresponding r.v.s $T_n=T(X_1,...,X_n)$ and $R_n=R(X_1,...,X_n).$ Conjecture: If $X_1,...,X_n$ are i.i.d. r.v.s with any discrete distribution, then $E[R_n]\lt{2\over 3}$ . (It's easy to show that $E[R_n]={2\over 3}$ when the $X_i$ are i.i.d. with any continuous distribution.) Supposing the $X_i$ are i.i.d. with a discrete distribution having p.m.f. $p()$ and c.d.f. $F()$ , we have the following: $$\begin{align*}E[R_n] &={1\over n-2}E\left[ \sum_{i=2}^{n-1}\mathbb{1}_{(X_{i-1}\lt X_i\gt X_{i+1}) \text{ or } (X_{i-1}\gt X_i\lt X_{i+1})}\right]\end{align*}$$
|
Requested from comments and building on Misha Lavrov's answer : This uses linearity of expectation on each of the $n-2$ triplets $X_{i-1}, X_i, X_{i+1}$ with $1\lt i\lt n$ . If $q_3= \sum\limits_x \Pr[X_i=x]^3$ then there is probability $q_3$ that all three are the same and there is no turning point at $X_i$ If $q_2 = \sum\limits_x \Pr[X_i=x]^2$ then there is probability $q_2 - q_3$ that $X_{i-1}= X_i$ and $X_{i+1}$ is distinct and there is no turning point at $X_i$ and there is probability $q_2 - q_3$ that $X_{i}= X_{i+1}$ and $X_{i-1}$ is distinct and there is no turning point at $X_i$ but there is probability $q_2 - q_3$ that $X_{i-1}= X_{i+1}$ and $X_{i}$ is distinct and there is a turning point at $X_i$ Otherwise $X_{i-1}, X_i, X_{i+1}$ are distinct, with probability $1-3q_2+2q_3$ and with probability $\frac13(1-3q_2+2q_3)$ that $X_{i-1}\lt X_i\lt X_{i+1}$ or $X_{i-1}\gt X_i\gt X_{i+1}$ and there is no turning point at $X_i$ but with probability $\frac23(1-3q_2+2q_3)$ that $X_{i-1}\lt X_i\gt X_{i+1}$ or $X_{i-1}\gt X_i\lt X_{i+1}$ and there is a turning point at $X_i$ .
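Assembling the case probabilities above, the chance of a turning point at a given interior index comes to $(q_2-q_3)+\frac23(1-3q_2+2q_3)$, which can be checked exactly by enumerating triples for a small discrete distribution (the pmf below is an arbitrary example):

```python
from itertools import product

# An arbitrary discrete distribution for the check.
pmf = {0: 0.5, 1: 0.3, 2: 0.2}

q2 = sum(p ** 2 for p in pmf.values())
q3 = sum(p ** 3 for p in pmf.values())
formula = (q2 - q3) + (2 / 3) * (1 - 3 * q2 + 2 * q3)

# Exact P(turning point at the middle of an i.i.d. triple), by enumeration.
exact = sum(pmf[a] * pmf[b] * pmf[c]
            for a, b, c in product(pmf, repeat=3)
            if (a < b > c) or (a > b < c))
```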
|
|probability|statistics|random-variables|expected-value|time-series|
| 0
|
Is it possible to write an equation that represents the root of this kind of function?
|
I have the parabola $x^2$ , and a circle whose center is the point $(3,2)$ . My goal was to figure out the minimum length of the circle's radius so that it would be tangent to the parabola. To do this, I decided to use the Pythagorean theorem to create a function that would show me the distances between $(3,2)$ and any point on the parabola: $f(x)=\sqrt{(3-x)^{2}+(2-x^2)^{2}}$ Now to figure out the minimum value of $f(x)$ , I needed to find the root of its derivative: $\begin{align} \frac{df}{dx}=\frac{1}{2\sqrt{(3-x)^{2}+(2-x^2)^{2}}}(-2(3-x)-4x(2-x^2))\\ =\frac{4x^3-6x-6}{2\sqrt{(3-x)^{2}+(2-x^2)^{2}}} \end{align}$ After this, I graphed $\frac{df}{dx}$ and saw that its root was approximately $1.567$ , which told me the location of the point on the parabola closest to $(3,2)$ . I then used the Pythagorean theorem again to find the distance between that point and $(3,2)$ , and got my answer for the radius. This is all well and good, but what if I move the center of the circle some addi
|
Say the centre is $(a,b)$ . Then, $$f(x)=\sqrt{(a-x)^2+(b-x^2)^2}$$ Minimizing $f(x)$ is the same as minimizing $f(x)^2=g(x)$ . $$g(x)=(a-x)^2+(b-x^2)^2$$ Note that $$g^\prime(x)=2(x-a)+4x(x^2-b)=4x^3-(4b-2)x-2a$$ If you wish, use the cubic formula (Cardano's formula applied to $x^3-\frac{2b-1}{2}x-\frac{a}{2}=0$ , after dividing by $4$ ) to get that a root of the above equation is $\sqrt[3]{r_+}+\sqrt[3]{r_-}$ where $$r_{\pm}=\frac{a}{4}\pm\sqrt{\frac{a^2}{16}-\frac{(2b-1)^3}{216}}$$ (This might not always be a minimum, check it manually). Set this into the expression of $f(x)$ and you are good to go. Hope this helps. :)
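Independently of any closed form, the critical point can be checked numerically; for the question's original centre $(a,b)=(3,2)$ the root of $g'$ should come out near $1.567$:

```python
def g_prime(x, a, b):
    return 4 * x ** 3 - (4 * b - 2) * x - 2 * a

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes a sign change on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a, b = 3.0, 2.0
root = bisect(lambda x: g_prime(x, a, b), 1.0, 2.0)

# Brute-force minimizer of the squared distance, for comparison.
def g(x):
    return (a - x) ** 2 + (b - x * x) ** 2

xs = [i / 50_000 - 2 for i in range(200_001)]   # grid on [-2, 2]
brute = min(xs, key=g)
```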
|
|functions|derivatives|optimization|
| 0
|
Suppose we want to prove that a property $P$ is true for every integer in $ℕ_{odd}$ = $\{1,3,5,7,9,...\}$.
|
Suppose we want to prove that a property $P$ is true for every integer in $ℕ_{odd}$ = $\{1,3,5,7,9,...\}$ . Consider the following induction mechanism: Base case: Verify the property $P(1)$ Inductive step: Prove that for all $k ≥ 1, P(k) ⇒ P(k + 1)$ (a) Why might the above mechanism not constitute a valid proof? (b) How would you modify the inductive step to obtain a valid proof? (c) Use your modified mechanism to prove that every integer $n$ ∈ $ ℕ_{odd}$ satisfies $2^n+3^n = 5m$ , where $m$ is an integer. What I did A) What I thought is that it doesn't constitute a valid proof because it is not complete. In that it should have been a strong induction for the set of all odd numbers in the set $ℕ_{odd}$ . It only verifies $P(1)$ . B) I am not sure, but to continue on from the previous one, maybe continue by changing the inductive step to constitute for all odd numbers in the set $ℕ_{odd}$ . C) I am also not sure, but most likely this is an inductive proof where base case = $2^1+3^1 = 5(
|
$$\varphi:\mathbb N\to \mathbb N_{odd},k\mapsto 2k+1$$ is a bijection, whose inverse is $$\varphi^{-1}:\mathbb N_{odd}\to \mathbb N, n\mapsto \frac{n-1}{2}$$ To prove that an assertion $P(n)$ is TRUE for all $n\in \mathbb N_{odd}$ is then to prove that $$\forall k \in \mathbb N, P'(k):=P(\varphi(k)) \text{ is TRUE},$$ which can be done by classical induction. $$P'(k)\to P'(k+1)$$ is then written $$P(2k+1)\to P(2(k+1)+1)=P(2k+1\color{green}{+2})$$ i.e. $$P(n)\to P(n\color{green}{+2})$$ as @J.W.Tanner simply and nicely explained to you.
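Part (c) is easy to sanity-check by brute force: $5 \mid 2^n+3^n$ for every odd $n$ (and, for contrast, for no even $n$ in the range tested):

```python
# 5 divides 2^n + 3^n exactly when n is odd (checked on a finite range).
odd_ok = all((2 ** n + 3 ** n) % 5 == 0 for n in range(1, 200, 2))
even_fails = all((2 ** n + 3 ** n) % 5 != 0 for n in range(2, 200, 2))
```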
|
|discrete-mathematics|induction|parity|
| 0
|
Find $\mathop{\limsup}\limits_{n\to\infty} a_n$ and $\mathop{\liminf}\limits_{n\to\infty} a_n$ if $a_n = n(2+(-1)^n)$
|
Given the sequence $a_n = n(2+(-1)^n)$ . Find $\mathop{\overline{\lim}}\limits_{n\to\infty} a_n$ and $\mathop{\underline{\lim}}\limits_{n\to\infty} a_n$ . The following is how I approach this problem. $\mathop{\lim}\limits_{n\to\infty} a_{2n} = \mathop{\lim}\limits_{n\to\infty} n(2+1) = \infty$ and $\mathop{\lim}\limits_{n\to\infty} a_{2n+1} = \mathop{\lim}\limits_{n\to\infty} n(2-1) = \infty$ Could I conclude $\mathop{\overline{\lim}}\limits_{n\to\infty} a_n = \mathop{\underline{\lim}}\limits_{n\to\infty} a_n = \infty$ ? I'm confused because $\infty$ is not a constant.
|
What's the problem with the fact that $\infty$ is not a constant? For each $n\in\Bbb N$ , $a_n\geqslant n$ , and therefore $\lim_{n\to\infty}a_n=\infty$ . So, $\limsup_{n\to\infty}a_n=\liminf_{n\to\infty}a_n=\infty$ . In general, if $l\in\Bbb R\cup\{\pm\infty\}$ , then $$\lim_{n\to\infty}a_n=l\iff\limsup_{n\to\infty}a_n=\liminf_{n\to\infty}a_n=l.$$
|
|limits|limsup-and-liminf|
| 1
|
Prove that the function $f:\mathbb R^3\to \mathbb R$ defined by $f(x,y,z)=ye^x+xz^2$ is differentiable at the point (0,-4,2) using the definition.
|
Prove that the function $f:\mathbb R^3\to \mathbb R$ defined by $f(x,y,z)=ye^x+xz^2$ is differentiable at the point $\vec a=\langle 0,-4,2\rangle$ using the definition. My attempt:- Let $\vec{h}=\langle h_1,h_2,h_3\rangle.$ Then we need to prove $$\lim_{\vec{h}\to \vec 0}\frac{f(\vec a+\vec{h})-f(\vec a)-f_x(\vec a)h_1-f_y(\vec a)h_2-f_z(\vec a)h_3}{||\vec h||}=0.$$ Consider the LHS $$\lim_{\vec{h}\to \vec 0}\frac{f(\vec a+\vec{h})-f(\vec a)-f_x(\vec a)h_1-f_y(\vec a)h_2-f_z(\vec a)h_3}{||\vec h||}=\\\lim_{\vec{h}\to \vec 0}\frac{(-4+h_2)e^{h_1}+h_1(2+h_3)^2+4-h_2}{\sqrt{h_1^2+h_2^2+h_3^2}}$$ I can use $(-4+h_2)\leq h_2$ . I don't know how to proceed after that. I used spherical coordinates. I used all the possibilities available to me. Could you help me?
|
Spherical coordinates will help you to notice that any term containing $h_1^{p_1}h_2^{p_2}h_3^{p_3}/||\vec h||$ with $p_1+p_2+p_3>1$ will tend to $0$ in the limit $\vec h\to\vec 0$ . How is this important? It means that you can skip all higher order terms, like $h_i^2$ or $h_ih_j$ . $$f(\vec a+\vec h)-f(\vec a)=(-4+h_2)e^{h_1}+h_1(2+h_3)^2+4$$ In the second term one will keep $4h_1$ and disregard higher powers. Similarly, in the first term, once you write $$e^{h_1}=1+h_1+\cal O(h_1^2)$$ Then $$f(\vec a+\vec h)-f(\vec a)=-4+h_2-4h_1+4h_1+4+\cal O(h^2)=h_2+\cal O(h^2)$$ In order that the limit is $0$ , you get $f_x=0$ , $f_y=1$ , $f_z=0$
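The partial derivatives obtained here, $(f_x,f_y,f_z)(\vec a)=(0,1,0)$, can be confirmed by central finite differences (the step size is an arbitrary choice):

```python
import math

def f(x, y, z):
    return y * math.exp(x) + x * z * z

a = (0.0, -4.0, 2.0)
h = 1e-6

def partial(i):
    # central difference in coordinate i at the point a
    lo, hi = list(a), list(a)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

grad = [partial(i) for i in range(3)]
```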
|
|multivariable-calculus|derivatives|
| 1
|
Total derivative of f(x, g(x, y)) and its approximation
|
I understand the steps to calculate the total derivative of f(x, g(x)) Related: Derivative of $f(x, g(x))$ with respect to $x$ I have three sub-questions related to calculating the total derivative of f(x, g(x, y)), (1) How do I calculate its total derivative, here's my attempt: $$ df=\Big(\frac{\partial{f}}{\partial x}+\frac{\partial{g}}{\partial x}\Big)dx+\frac{\partial{f}}{\partial y}dy $$ So applying a simple example of f(x, x+y) where g(x,y)=x+y $$ df=\Big(\frac{\partial{f}}{\partial x}+1\Big)dx+\frac{\partial{f}}{\partial y}dy $$ (2) Why do I not need to consider higher order terms? Looking at Taylor Series would it make it more accurate? (3) In terms of approximating the total derivative, is this logic correct? $$ df(x, x+y) = f(x+\Delta x, y+\Delta y) - f(x,y) \approx \Big(\frac{f(x+\Delta x, x+\Delta x + y)-f(x, x+y)}{\Delta x}+1\Big)\Delta x + \Big(\frac{f(x, x + y + \Delta y)-f(x, x+y)}{\Delta y}\Big)\Delta y $$
|
Aside: Higher Order Terms Why do I not need to consider higher order terms? Looking at Taylor Series would it make it more accurate? This is a largely-unrelated question that I will not address here; I just want to point out that it applies equally well in a single variable case like $\mathrm dy=\dfrac{\mathrm dy}{\mathrm dx}\,\mathrm dx$ . The question is probably mostly addressed by Why isn't $df=\frac{\partial f}{\partial x}\:dx+\frac{\partial f}{\partial y}\:dy$ defined to resemble a Taylor series further? and its comments/answers. Working with Differentials The General Case Set $z=f(u,v)$ , $u=x$ , and $v=g(x,y)$ . Then $\mathrm{d}z=\dfrac{\partial f}{\partial u}\mathrm{d}u+\dfrac{\partial f}{\partial v}\mathrm{d}v$ by how differentials/the multivariate chain rule works. And $\mathrm{d}v=\dfrac{\partial g}{\partial x}\mathrm{d}x+\dfrac{\partial g}{\partial y}\mathrm{d}y$ for the same reason. And then (for good measure) $\mathrm{d}u=\dfrac{\partial u}{\partial x}\mathrm{d}x+\dfrac{
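The differential formula above can be tested numerically: $F_x=f_u+f_v\,g_x$ and $F_y=f_v\,g_y$ for $F(x,y)=f(x,g(x,y))$, here with an arbitrary smooth test function $f(u,v)=uv^2$ and the question's $g(x,y)=x+y$:

```python
def f(u, v):
    return u * v ** 2       # arbitrary smooth test function

def g(x, y):
    return x + y            # the inner function from the question

def F(x, y):
    return f(x, g(x, y))

x0, y0, h = 1.3, 0.7, 1e-6

# Finite-difference partials of F.
dFdx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
dFdy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)

# Chain-rule prediction, with g_x = g_y = 1.
v0 = g(x0, y0)
f_u = v0 ** 2               # df/du at (x0, v0)
f_v = 2 * x0 * v0           # df/dv at (x0, v0)
pred_x = f_u + f_v * 1.0
pred_y = f_v * 1.0
```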
|
|calculus|multivariable-calculus|derivatives|partial-derivative|implicit-differentiation|
| 1
|
definition of (cartesian) product of sets
|
In Ronald Brown's book "Topology and Groupoids" we have, in Appendix A3, the following definition of what a (cartesian) product is: Let $\{X_{\lambda}\}_{\lambda\in L}$ be a family of sets and let $X$ be the set of all families $x=(x_{\lambda})_{\lambda\in L}$ such that $x_{\lambda}\in X_{\lambda}$ . Then $X$ is called a product of the family $(X_{\lambda})_{\lambda\in L}$ and is denoted by $$\prod_{\lambda\in L}X_{\lambda}.$$ Shouldn't the definition read instead: ... and let X be the set of all tuples x I am referring to the 2006 version of his book, further corrected in 2020 which he makes available here . There is a list of errata further down on that same page but the above section does not appear there.
|
It's not a typo. What is a "tuple"? Normally we think of a "tuple" with index set $I$ as a function with domain $I$ . So, viewed as a set of ordered pairs, a tuple $(x_i)_{i\in I}$ would be the set $$\{ (i,x_i)\mid i\in I\}.$$ For index sets that are of the form $I=\{1,2,3,\ldots,n\}$ , we often denote this as $(a_1,a_2,\ldots,a_n)$ , with the understanding that this just means " $a_1$ is the image of $1$ , $a_2$ is the image of $2$ , etc." What is a "family"? A family of sets is the result of applying the Axiom (Schema) of Replacement to a set $I$ , together with the indexing set $I$ . Recall that the Axiom (Schema) of Replacement says, roughly, that if you have a set $I$ , and you have a way of associating a set $F(i)$ to each $i\in I$ , then the collection $\{F(i)\mid i\in I\}$ is also a set. Roughly, that if you have a way of replacing each element of $I$ with a well-defined set, then the result will also be a set. So a family $\{X_i\}_{i\in I}$ means that you have an indexing $I$
|
|general-topology|
| 1
|
Is the equation of the unit circle a linear transformation?
|
Is $T$ : $R^2 \longrightarrow R^2$ a linear transformation with its range in the circle $x^2 + y^2 = 1 $ ? I am considering $T(x,y) = (x,\sqrt[2]{1-x^2})$ , so $T(a_1 +b_1, a_2 + b_2) = (a_1+b_1,\sqrt[2]{1-(a_1+b_1)^2})$ but then I am unable to expand the $\sqrt{}$ term. I feel like $T$ can be checked for linearity in a more simplified and intuitive way.
|
In case you did not understand the hint, recall that a linear transformation must preserve zero (that is, $T(\textbf{0})=\textbf{0}$ ; this is because $T(y)=T(0+y)=T(0)+T(y)$ for any $y\in\mathbb{R}^2$ ). So does your "linear transformation" preserve zero? In fact, a linear map should have some 'linear structure'; since one entry involves a square root, it is reasonable to suspect that the map is not linear.
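Both failures are easy to exhibit concretely (the sample points are arbitrary):

```python
import math

def T(x, y):
    # the map from the question; only defined for |x| <= 1
    return (x, math.sqrt(1 - x * x))

# T(0, 0) = (0, 1), so zero is not preserved.
zero_preserved = T(0.0, 0.0) == (0.0, 0.0)

# Additivity also fails on a concrete pair.
p, q = (0.6, 0.0), (0.3, 0.0)
s = (p[0] + q[0], p[1] + q[1])
additive = T(*s) == tuple(T(*p)[i] + T(*q)[i] for i in range(2))
```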
|
|linear-algebra|linear-transformations|
| 0
|
Convergence problems with numerical integration
|
I am trying to do numerical integration of a function that I know converges over a specified interval. (It is everywhere twice differentiable and strictly positive, with a long near-exponential tail that converges to zero.) But every numerical integration routine I have tried reports convergence failure for values in excess of 10 to the 10th, or thereabouts. I am looking for transforms or other techniques that will enable me to achieve convergence and still back out the value of the integral of the original function. For example, while just re-scaling the integral does not help, the log of the function integrates without difficulty. If there is a way to recover the integral of the original function from the integral of the log of the function, that would be a solution.
|
Are you trying to integrate over a semi-infinite interval, i.e. you want $\int_0^\infty f(x) dx$ ? Just making the interval really large (e.g. you mention $10^{10}$ ) is probably a bad idea, because the quadrature routine will start with too few points over that huge interval and have trouble refining. Instead, one approach is to do a coordinate transformation that maps the infinite integration domain to a finite interval, e.g. $\int_0^\infty f(x)dx = \int_0^1 f\left(\frac{t}{1-t}\right) \frac{1}{(1-t)^2} dt$ is one possible transformation, and there are others such as tanh–sinh quadrature . Another possibility is to use a quadrature scheme designed for infinite intervals. For example, if you know the asymptotic exponential decay rate (or can bound it), you can factor this out of your integrand and use a Gauss–Laguerre quadrature rule . Since you don't actually write down the integral you want to compute, it's hard to give you more advice.
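Here is a sketch of the first suggestion, the substitution $x = t/(1-t)$, applied with a plain midpoint rule to $\int_0^\infty e^{-x}\,dx = 1$; the integrand and step count are illustrative choices only:

```python
import math

def transformed_quad(f, steps=100_000):
    """Approximate the integral of f over (0, inf) via x = t/(1-t),
    using a midpoint rule on (0, 1)."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        x = t / (1 - t)
        total += f(x) / (1 - t) ** 2   # Jacobian dx/dt = 1/(1-t)^2
    return total * h

est = transformed_quad(lambda x: math.exp(-x))   # exact value is 1
```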
|
|integration|definite-integrals|numerical-methods|
| 0
|
If $m^2+n^2=1$, find the maximum of $\dfrac{5-4m}{5-4n}$
|
If $m^2+n^2=1$ , find the maximum of $\dfrac{5-4m}{5-4n}$ . The original question is to find the maximum of $\dfrac{BD}{CD}$ . By simplifying the formula through the cosine theorem, I get the above formula. The value should be equal to $\sqrt{\dfrac{5-4m}{5-4n}}$ . How to find the value? Any elegant geometric solutions are also welcome.
|
It is surprising to see no answer using quadratic equations and their properties. Let us write $$r = \frac{4m-5}{4n-5}\iff 4n=\frac{4m-5}{r}+5$$ Substitute into $m^2+n^2=1$ to get a quadratic in $m$ which becomes: $$g(m):=16m^2(r^2+1)-40m(1-r)+25(1-2r)+9r^2=0$$ and since we need $m\in \mathbb R$ we put $\text{Disc}_m(g) \geq 0$ . This means $$1600(r-1)^2-4\cdot 16(r^2+1)\cdot (9r^2-50r+25)\geq 0$$ This becomes upon expansion $$-64r^2(9r^2-50r+9)\geq 0\iff 9r^2-50r+9\le 0$$ Now note that $f(x):= 9x^2-50x+9$ has two real roots, viz $$x_0=\frac{25- 4\sqrt{34}}9$$ and $$x_1=\frac{25+4\sqrt{34}}9$$ Thus we need $(r-x_0)(r-x_1)\le 0$ which means $$r \in [x_0,x_1]$$ We conclude that $$\boxed{\max r = x_1=\frac{25+4\sqrt {34}}9}$$ $$\boxed{\min r = x_0 = \frac{25-4\sqrt{34}}9}$$
|
|geometry|triangles|
| 0
|
Does there exist other integer models that contain an exponential number of branches thats not knapsack for the branch-and-bound method?
|
During a class assignment, I was presented with the following question: Provide an integer program that has an exponential number of branches...(expunged excess) There was more to the question, but I'm primarily curious about finding more integer programs than the ones I found that fit this criterion. A few models I came up with are: $$\min z = x_{n+1}$$ Subject to: $$2x_1 + 2x_2 + \cdots + 2x_n + x_{n+1} = n$$ $$x_n\in\{0,1\}\forall n$$ when $n$ is odd, and $$\max z = x_1 + x_2 + \cdots + x_n$$ Subject to: $$x_1 + x_2 + \cdots + x_n \le n-\frac{1}{2},\exists n\in\mathbb{R}^+$$ $$x_n\in\{0,1\}\forall n$$ Both of these are single-constraint knapsack problems . I mentioned this problem to one of my peers, who said it reminded him of the Klee-Minty Cube . Therefore, I'm asking whether there are model types that are not single-constraint knapsacks and that force the branch-and-bound algorithm to take exponentially many steps, and if there aren't, why not?
|
You might want to look at Karp's 21 NP-complete problems .
|
|integer-programming|
| 0
|
Urn draws with replacement problem: number of drawn white balls till we draw black ball for the k-th time
|
In the urn we have $a$ white, $b$ red and $c$ black balls. We draw with replacement. Calculate the expected number of white balls drawn until we draw a black ball for the $k$-th time. My solution: The expected number of draws till we draw a black ball for the $k$-th time is: $$\left(\frac{a+b+c}{c} \right)^k$$ Is that correct? I don't really know how to continue. The below is obviously wrong due to comments. In the sequence of turns there are $$\left( \frac{a+b+c}{c} \right)^k-k$$ possibilities to draw white or red. Therefore $$\frac{b}{a+b+c}\left(\left(\frac{a+b+c}{c} \right)^k-k \right)$$ of them are white. Is my solution correct?
|
$\textbf{Hint}$ Following lulu's advice to use friendly nomenclature for symbols, and realising that the red balls are irrelevant, Let $w$ = # of white balls, $b$ = # of black balls, $n = w+b$ , then P(black) $=p = \dfrac{b}{n},\;\;$ P(white) $=1-p = q$ The earliest the $k_{th}$ black could come would be at position # $k$ in the queue with no white preceding it, then at # $(k+1)$ with one white preceding it, and so on up to all $w$ whites preceding it. Thus E(# of whites preceding the $k_{th}$ black) $= \dfrac{(k-1)!}{(k-1)!0!}q^0p^k*0 + \dfrac{k!}{(k-1)!1!}q^1p^k*1 +...$ (If you prefer, you could use binomial coefficients rather than permutations) You should be able to continue from here, and also condense the formula ? Letting $j$ be the counter for the number of whites before the $k_{th}$ black, the expected # of whites before the $k_{th}$ black $$ =\sum_{j=0}^w \binom{k+j-1}{j}q^jp^k*j$$
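A Monte Carlo check is reassuring here. The target value $k\,a/c$ below is the standard negative-binomial mean (expected whites per black, with reds ignored), stated as an assumption for the check rather than taken from the hint; the parameters are arbitrary:

```python
import random

random.seed(2)
a, b, c, k = 3, 2, 4, 5          # white, red, black counts; target black count
balls = ["w"] * a + ["r"] * b + ["k"] * c

def whites_before_kth_black():
    # draw with replacement until the k-th black appears
    blacks = whites = 0
    while blacks < k:
        draw = random.choice(balls)
        if draw == "k":
            blacks += 1
        elif draw == "w":
            whites += 1
    return whites

trials = 40_000
est = sum(whites_before_kth_black() for _ in range(trials)) / trials
target = k * a / c               # assumed closed form: negative-binomial mean
```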
|
|probability|
| 0
|
Textbook Recommendations: Measure Theory to Supplement Asymptotic Statistics (van der Vaart)
|
I'm a math and economics undergraduate interested in econometrics and statistics. I'm trying to put together an independent study course (or courses) that will give me a working understanding of measure and asymptotic theory. The professor I'm working with (an econometrician) recommended we read through Asymptotic Statistics (van der Vaart). It looks like a great textbook, but reading the preface the author recommends some understanding of measure theory to get a really good grasp on the proofs. I was curious if anyone had textbook recommendations for a readings course in measure theory that could supplement or precede an asymptotic theory course using van der Vaart. For an idea of my background, I've taken a probability course that used Larsen and Marx, a first real analysis course that used Cummings ("long-form" textbook, not sure how widely used it is but it's easy reading), and a year-long sequence in linear algebra. I have yet to take any courses in abstract algebra or topology, though.
|
No idea how much measure theory you need as a prerequisite. If you are only supposed to be familiar with the basic definitions and ideas, then chapters one and two of Papa Rudin will be helpful. My undergraduate course on the Lebesgue integral and measure theory is around that depth. If you need deeper results, then there is a measure theory book by Halmos.
|
|measure-theory|statistics|reference-request|asymptotics|book-recommendation|
| 0
|
Solving for $DE$ in a Geometric Puzzle
|
I hope this message finds you well. I am contacting you to seek your help in solving a fascinating geometry problem that I encountered in a recent competition. Despite my diligent attempts, I have not been able to find a solution. I am keen to acquire insights that will undoubtedly enhance my comprehension of this geometric puzzle. Problem Description In $\Delta$ $ACD$ , $AC=AD=CD$ , and $AB$ = $3$ , $BC$ = $6$ , find the value of $DE$ Approach: I tried this way to find the values of $x$ and $y$ . therefore, I can apply the sine law. I made a note of this method, and then I tried to determine the values of $x$ & $y$ but I was unable to do so... I would be extremely grateful for any help or advice in figuring out the intricacies of this problem. Thank you for your expertise and support.
|
Hint 1: $AE:EC=1:2$ Consider the ratio of areas $\Delta ABE:\Delta CBE$ . They are triangles with a common vertex, so the ratio is equal to the base ratio $AE:EC$ . On the other hand, the areas of the triangles are $$\Delta ABE=\dfrac{1}{2}(3)(BE)\sin\angle ABE$$ $$\Delta CBE=\dfrac{1}{2}(6)(BE)\sin\angle CBE$$ As you have found, the angles are the same, so the area ratio is $1:2$ . Hint 2: What is $AC$ ? This follows directly from the cosine formula on $\Delta ABC$ , where you can get $AC=\sqrt{63}$ . And therefore, you get $AE,AD$ and $\angle DAE$ . You get $DE$ as desired!
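Putting both hints together numerically: assuming (read off the figure, not stated in the text) that $\angle ABC = 120°$, one recovers $AC=\sqrt{63}$ and then $DE$ from the cosine formula on $\triangle ADE$ with $\angle DAE = 60°$:

```python
import math

AB, BC = 3.0, 6.0
# Assumption from the figure: angle ABC = 120 degrees.
AC = math.sqrt(AB ** 2 + BC ** 2 - 2 * AB * BC * math.cos(math.radians(120)))

AE = AC / 3                      # Hint 1: AE:EC = 1:2
AD = AC                          # triangle ACD is equilateral
# Cosine formula on triangle ADE with angle DAE = 60 degrees.
DE = math.sqrt(AD ** 2 + AE ** 2 - 2 * AD * AE * math.cos(math.radians(60)))
```

Under these assumptions this gives $DE = 7$.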
|
|geometry|analytic-geometry|
| 1
|
Signs as part of a number coordinate
|
Suppose we have a 1-dimensional space: a number line. We can name each element of it using its modulus and a "sign" (+/-). Is there a system which has something like additional signs that are used to name numbers? In other words: is there any property like "signness" which unites "+" and "-", and can we create a new object with such a property? I am not talking of signs of operations like (multiplication $\times$ ) (division $\div$ ) etc. that work on the same number line. They cannot be used to name a number like " $\times4$ "... as far as I understand it. I came up with a system of $n$ rays sent out from one point, so we can name each ray with some symbol and use it to name the numbers on it. Example: we have 3 rays and 3 operations that are "pulling" a result toward the corresponding $\infty$ : addition, upperSubtraction and lowerSubtraction.

    upperMinus infinity
       \
        \________ plus infinity
        /
       /
    lowerMinus infinity

We have upperMinus 4; we do plus 8 and get plus 4; we lowerMinus 8 and get lowerMinus 4.
|
The system you create is kind of neat, but your operations lack some of the basic properties we generally like to see in mathematical systems (particularly for operations called " $+$ "): No associativity, i.e. $(x + y) + z \neq x + (y + z)$ in your system. For example: $(2 + upperMinus 4) + lowerMinus 4 = lowerMinus 2$ , but $2 + (upperMinus 4 + lowerMinus 4) = 0$ . No unique inverses. Since $2$ has two different things it can be added to that make $0$ , this means that you can't have a concept of the negative of a number, which means you can't have a well-defined subtraction operation. Without these, you're going to struggle to find any useful applications for this system, as you'll be unable to do most algebraic manipulations or use basic algebraic concepts like "the sum of three numbers". A more standard approach to extending the real number line is to extend it to the complex number plane .
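The non-associativity can be reproduced in a small toy model (my own sketch; the semantics below are one assumed reading of the question, where adding $(r, m)$ pulls the current point $m$ units toward ray $r$'s infinity; under this reading the second grouping comes out as plus 2 rather than 0, but the two groupings still disagree, which is the point):

```python
# Toy model of the three-ray system: a value is (ray, magnitude), with
# rays 'plus', 'upper', 'lower' meeting at 0.
def add(x, y):
    (rx, mx), (ry, my) = x, y
    if mx == 0 or rx == ry:          # starting at 0, or already on ray ry
        return (ry, mx + my)
    if my <= mx:                     # pulled toward 0 but not past it
        return (rx, mx - my)
    return (ry, my - mx)             # pulled past 0, onto the other ray

two, upper4, lower4 = ('plus', 2), ('upper', 4), ('lower', 4)
left  = add(add(two, upper4), lower4)    # ('lower', 2)
right = add(two, add(upper4, lower4))    # not ('lower', 2)
print(left, right)
assert left != right                     # the operation is not associative
```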
|
|linear-algebra|abstract-algebra|soft-question|
| 0
|
Help with Direct comparison Test: $\sum_{n=2}^{\infty}\frac{1}{n^n}$
|
The Direct Comparison Test states that if we have two series $\sum a_n$ and $\sum b_n$ with $0 \leq a_n \leq b_n$ for all $n$ , then: if $\sum b_n$ converges, so does $\sum a_n$ ; and if $\sum a_n$ diverges, so does $\sum b_n$ . I was given the series $\sum_{n=2}^{\infty}\frac{1}{n^n}$ and I attempted to use the direct comparison test on this, but was unable to come up with a series that I could compare it to. However, this looks like a geometric series with $|r| < 1$ for $n \geq 2$ . When I asked my professor what methodology he would employ to find another series to compare this to, he told me to compare it to either a $p$ -series or a geometric series and "reason it out". How do I "reason this out"? What is the first step in the Direct Comparison Test, and is there a sequence of steps that I can follow to arrive at the right series, or is it just trial and error and knowledge of different series formats?
|
When $n\geq 2$ , $n^n\geq n^2$ , so $\dfrac{1}{n^n}\leq \dfrac{1}{n^2}$ and, as $\displaystyle\sum_{n=2}^{\infty}\dfrac{1}{n^2}$ converges (a $p$ -series with $p=2>1$ ), we deduce that $\displaystyle\sum_{n=2}^{\infty}\dfrac{1}{n^n}$ converges too.
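A quick numeric illustration of the bound (the cutoff of 50 terms is arbitrary): the partial sums of the given series sit well below those of the comparison $p$ -series.

```python
s_nn = sum(1 / n**n for n in range(2, 50))   # partial sum of the given series
s_n2 = sum(1 / n**2 for n in range(2, 50))   # partial sum of the p-series bound
print(s_nn, s_n2)                            # ≈ 0.291 vs ≈ 0.62
```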
|
|sequences-and-series|
| 0
|
Is this a valid "easy" proof that two free groups are isomorphic if and only if their rank is the same?
|
I have read in different sources that it is not possible to give a simple proof that "two free groups are isomorphic if and only if they have the same rank" using only what "a student who has just read the definition of a free group as a set of words over an alphabet" would know. See for example the answers to this question Is there a simple proof of the fact that if free groups $F(S)$ and $F(S')$ are isomorphic, then $\operatorname{card}(S)=\operatorname{card}(S')?$ . I think I have come up with such a proof, but I would like to know if it is valid. The proof goes as follows. If $A$ and $B$ have the same cardinality, we can define a bijection between letters of the $A$ alphabet and letters of the $B$ alphabet. This establishes a bijection between (reduced) words on $A$ and (reduced) words on $B$ , and hence an isomorphism between the free groups $F(A)$ and $F(B)$ . This proves the "if". Now suppose that $|A| < |B|$ . We can define a bijection between the letters of $A$ and a subset of the letters of $B$ . Put differently, we can
|
Let me write out, in full, what the "only if" direction says: If there exists an isomorphism between the free group $F(A)$ and the free group $F(B)$ then $A$ and $B$ have the same cardinality. Let's write the contrapositive: If $A$ and $B$ have different cardinality then no isomorphism exists between the free group $F(A)$ and the free group $F(B)$ . Your proof amounts to the statement that one particular homomorphism from $F(A)$ to $F(B)$ , namely the homomorphism induced by a certain choice of injection $A \hookrightarrow B$ , is not an isomorphism. Your proof that this one particular homomorphism is not an isomorphism is correct. But that does not amount to a proof that no isomorphism exists . And continuing in that vein is really not going to work, because you cannot possibly go through the list of all possible homomorphisms, testing them one at a time to be sure that none of them is an isomorphism. The standard proof of the "only if" direction is to show that the tensor product $\m
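A finite shadow of the invariant behind the standard proof can be sketched concretely (my illustration, not the tensor-product argument itself): a homomorphism from $F(A)$ to $\Bbb{Z}/2$ is freely determined by the images of the generators, so $|\operatorname{Hom}(F(A), \Bbb{Z}/2)| = 2^{|A|}$ , and this count is the same for isomorphic groups.

```python
from itertools import product

def homs_to_Z2(rank):
    # Each assignment of 0/1 to the generators extends uniquely to a
    # homomorphism F(A) -> Z/2, because F(A) is free on its generators.
    return list(product([0, 1], repeat=rank))

# Free groups of different rank admit different numbers of such maps,
# so no isomorphism between them can exist.
print(len(homs_to_Z2(3)), len(homs_to_Z2(4)))   # 8 16
```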
|
|group-theory|solution-verification|group-isomorphism|free-groups|
| 1
|
How do I solve this Diophantine equation?
|
How do I solve this Diophantine equation, $a^{2n}+b^2=c^{2n}$ , where $n$ is any positive integer $>1$ and $a,b,c\ne0$ ? I tried applying the Pythagorean-triple generating formula, but I am unable to find any integer solutions, nor am I able to prove that there are no integer solutions except $a,b,c=0$ . Thanks in advance for any help or guidance.
|
Let $k>1$ be a positive integer, and let $a$ , $b$ and $c$ be positive integers such that $$a^{2k}+b^2=c^{2k}.\tag{0}$$ Let $d:=\gcd(a,c)$ so that $d^{2k}$ divides $a^{2k}$ and $c^{2k}$ , and hence also $b^2$ . Then $d^k$ divides $b$ and so $(a,b,c)=(dA,d^kB,dC)$ for some positive integers $A$ , $B$ and $C$ with $\gcd(A,C)=1$ . It follows that also $\gcd(A,B)=\gcd(B,C)=1$ , and it is easily verified that $$A^{2k}+B^2=C^{2k}.$$ So without loss of generality $a$ , $b$ and $c$ are pairwise coprime. Then $(a^k,b,c^k)$ forms a primitive Pythagorean triple, and so either $$a^k=m^2-n^2,\qquad b=2mn,\qquad c^k=m^2+n^2,$$ or $$a^k=2mn,\qquad b=m^2-n^2,\qquad c^k=m^2+n^2,$$ for some coprime positive integers $m$ and $n$ with $mn$ even. In the first case we see that $$a^k+c^k=(m^2-n^2)+(m^2+n^2)=2m^2.$$ But by the main theorem of this article such a solution does not exist for $k\geq3$ . In the second case we see that $$a^k+c^k=(2mn)+(m^2+n^2)=(m+n)^2.$$ Again by the main theorem of that article
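A brute-force search for the smallest case $k=2$ (the search bound is arbitrary, and I am sketching this as a sanity check rather than a proof) is consistent with the answer; indeed Fermat's classical result that a difference of two nonzero fourth powers is never a nonzero square already settles $k=2$.

```python
import math

def solutions(k, cmax):
    # Search a^(2k) + b^2 = c^(2k) with 0 < a < c <= cmax.
    sols = []
    for c in range(2, cmax + 1):
        for a in range(1, c):
            d = c**(2*k) - a**(2*k)
            b = math.isqrt(d)
            if b > 0 and b * b == d:
                sols.append((a, b, c))
    return sols

print(solutions(2, 100))   # [] -- no solutions to a^4 + b^2 = c^4 found
```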
|
|algebraic-number-theory|diophantine-equations|integers|
| 1
|
What's the difference between zig-zags and helices?
|
I've been reading through the Polytope-Wiki entry on helices . To my understanding, an $n$ -gonal helix is a blend of a planar $n$ -gon $\{n\}$ with the regular linear apeirogon $\{\infty\}$ . The blend of the apeirogon with a line segment produces the only two dimensional "helix", the zig-zag. Finally, $\{\infty\}$ is a blend of itself with a point and so is a one-dimensional helix. All of these polygons have the same graph-structure. Every vertex joins two edges and every edge meets two vertices. There are no cycles, so their Hasse diagrams are isomorphic. This means that all the helices are isomorphic as abstract polytopes. As far as I can make out, their symmetry groups are isomorphic. Every symmetry of $\{\infty\}$ corresponds to a symmetry of the $n$ -gonal helix and vice versa. So how do these shapes differ? All the helices are isomorphic as abstract polygons and they've got isomorphic symmetry groups. The only way I can see to distinguish them is via "undoing" the blending. The
|
A helix is a non-planar shape. Here is a much better picture of a helix: A zig-zag is a planar shape. Therefore a helix is not a zig-zag.
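The planar/non-planar distinction can be checked numerically (my own sketch, not part of the original answer): take five vertices of a triangular helix and of a zig-zag, and compute the dimension of their affine span.

```python
import math

def affine_rank(points):
    # Gram-Schmidt on the vectors (p - points[0]); returns the dimension
    # of the affine span (2 = planar, 3 = non-planar).
    basis = []
    for p in points[1:]:
        v = [a - b for a, b in zip(p, points[0])]
        for b in basis:
            coef = sum(x*y for x, y in zip(v, b)) / sum(x*x for x in b)
            v = [x - coef*y for x, y in zip(v, b)]
        if sum(x*x for x in v) > 1e-18:
            basis.append(v)
    return len(basis)

# Vertices of a 3-gonal helix (points on a cylinder, rising in z)
# versus a planar zig-zag.
helix  = [(math.cos(2*math.pi*k/3), math.sin(2*math.pi*k/3), 0.5*k) for k in range(5)]
zigzag = [(k, k % 2, 0.0) for k in range(5)]
print(affine_rank(helix), affine_rank(zigzag))   # 3 2
```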
|
|solution-verification|polygons|polytopes|
| 0
|
What does it mean that "A $G$-torsor is like the group $G$ that forgot its identity element"?
|
A right $G$ -torsor (or a principal homogeneous space ) is defined to be a set $X$ together with a free and transitive right $G$ -action $\lhd: X\times G \to X$ . It is often said that "A $G$ -torsor is like the group $G$ that forgot its identity element". What, roughly does this last statement mean? I've had a hard time understanding it because there is no multiplication operation on $X$ .
|
This answer is heavily motivated by Torsors Made Easy by John Baez . A right $G$ -torsor is a set $X$ together with a free and transitive right $G$ -action $\lhd: X\times G \to X$ . Here free means that for each $x\in X$ , $x\lhd g = x$ implies $g=e$ , the identity element in $G$ . Transitive means that for each $x, y \in X$ there is a $g\in G$ such that $x\lhd g = y$ . Together these two properties mean that for each $x,y\in X$ there is a unique $g\in G$ , denoted by $g_{xy}$ , such that $x \lhd g_{xy} = y$ . A group has a multiplication operation $\cdot_G: G\times G \to G$ . However, we can also define a division operation by \begin{align} /_G: G\times G \to G,\\ (g, h) \mapsto g\cdot_G h^{-1}. \end{align} Because the $G$ -action on $X$ in a $G$ -torsor is transitive and free, we can define a similar division operation $/_X: X\times X \to G$ on $X$ : $$ /_X: X\times X \to G,\\ (y, x) \mapsto g_{xy}. $$ We can see that $x \lhd (y/_X x) = x \lhd g_{xy} = y$ . We can say that group elements mea
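Here is a minimal concrete model of these definitions (my own toy example, not from the Baez note): $G = \Bbb{Z}/5$ acting on a five-element set by cyclic shift. The action is free and transitive, and the "division" operation is recoverable even though $X$ itself has no multiplication and no identity element.

```python
n = 5
X = ['a', 'b', 'c', 'd', 'e']   # the torsor: no distinguished identity element

def act(x, g):
    # Right action x ◁ g: shift x forward by g places (g in Z/5).
    return X[(X.index(x) + g) % n]

def div(y, x):
    # y /_X x: the unique g with x ◁ g = y; it exists because the action
    # is transitive, and is unique because the action is free.
    return (X.index(y) - X.index(x)) % n

print(act('b', div('e', 'b')))   # 'e': x ◁ (y /_X x) = y, as in the answer
```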
|
|group-theory|
| 0
|
Find $\lim_{z \to 0} \left(z/\bar{z} \right)^2$
|
I want to calculate the limit $\lim_{z \to 0} \left(\frac{z}{\overline{z}} \right)^2$ , if we take $z=re^{i \Theta}$ note that $$\left(\frac{z}{\overline{z}} \right)^2=\left(\frac{re^{i\Theta}}{re^{-i\Theta}} \right)^2=e^{4i\Theta}$$ This may help clarify things a bit, but I still don't see the value of the limit. Any suggestions?
|
$\lim\limits_{z \to 0}\left(\frac{z}{\bar{z}}\right)^2: DNE$ (Does Not Exist) The limit exists if and only if it is the same regardless of what path you use to approach the limit point. As a corollary, if you find two or more paths that yield different values for the limit, the limit does not exist. Take $z=R+Ii$ , where $R$ and $I$ are the real and imaginary parts. Then $\bar{z}=R-Ii$ . $$\lim\limits_{z \to 0}\bigg(\frac{z}{\bar{z}}\bigg)^2=\lim\limits_{R, I \to 0}\frac{(R+Ii)^2}{(R-Ii)^2}=\lim\limits_{R, I \to 0}\frac{R^2+2RIi-I^2}{R^2-2RIi-I^2}$$ If you approach on the path $R=0$ (i.e. along the imaginary axis) you get: $$\lim\limits_{R, I \to 0}\frac{R^2+2RIi-I^2}{R^2-2RIi-I^2}=\lim\limits_{I \to 0}\frac{0+0-I^2}{0-0-I^2}=1$$ On the other hand, if you approach on the path $I=R$ (i.e. along the $\frac{\pi}{4}$ line) you get: $$\lim\limits_{R, I \to 0}\frac{R^2+2RIi-I^2}{R^2-2RIi-I^2}=\lim\limits_{I \to 0}\frac{I^2+2IIi-I^2}{I^2-2IIi-I^2}=\lim\limits_{I \to 0}\frac{2I^2i}{-2I^2i}=-1$$ The limits along
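A numeric check of the two paths (the shrinking values of $\varepsilon$ are arbitrary):

```python
for eps in (1e-1, 1e-4, 1e-8):
    z_axis = complex(0, eps)               # along the imaginary axis (R = 0)
    z_diag = complex(eps, eps)             # along the line I = R
    w_axis = (z_axis / z_axis.conjugate())**2
    w_diag = (z_diag / z_diag.conjugate())**2
    print(eps, w_axis, w_diag)             # stays at 1 on one path, -1 on the other
```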
|
|complex-analysis|
| 0
|
What does a parametric equation mean?
|
I am following the last module of Differential Calculus on Khan Academy, which deals with parametric equations. Here are the parametric equations described in the lecture. $x(t) = 5t + 10$ $y(t) = 50 - 5t^2/2$ However, I really don't understand what parametric equations really mean. How do they differ from normal equations? According to Wikipedia: "In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters." I really don't understand what this definition is trying to convey. From what I observed, if two functions share a variable, it typically gets defined as a parametric equation. But that seems to be a loose definition. Regarding my prerequisite knowledge, I have a Masters in Engineering, so I understand the formulae of calculus quite well. I just never bothered to understand some of the underlying concepts, so I am revisiting them through Khan Academy.
|
As the parameter $t$ varies the two equations tell you the position of a point $(x(t),y(t))$ as it wanders along a curve in the plane. (Try drawing a picture of that curve and marking each point with the corresponding value of $t$ . The parameter in a system like this is often named " $t$ " to suggest time.) Then you can use some calculus to find the tangents to that curve and the speed with which the point traverses it. That's probably what the Khan Academy lesson is about. The graph of a function can be thought of as a parametric curve for which the parameter is the value on the $x$ -axis. Then the graph is the curve: the set of points $(x,f(x))$ .
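Tabulating the example from the question makes this concrete (the sample values of $t$ are arbitrary):

```python
def x(t): return 5*t + 10
def y(t): return 50 - 5*t**2 / 2

# As t ("time") advances, the point (x(t), y(t)) traces a curve in the
# plane -- here a parabolic arc, like a projectile's path.
points = [(t, x(t), y(t)) for t in range(0, 5)]
for t, xt, yt in points:
    print(t, (xt, yt))   # (10, 50.0), (15, 47.5), (20, 40.0), ...
```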
|
|calculus|
| 1
|