| title | question_body | answer_body | tags | accepted |
| string | string | string | string | int64 |
|---|---|---|---|---|
Using a compass and straightedge, what is the shortest way to divide a line segment into $n$ equal parts?
|
Sometimes I help my next door neighbor's daughter with her homework. Today she had to trisect a line segment using a compass and straightedge. Admittedly, I had to look this up on the internet, and I found this helpful site. There they claim that the fewest number of "steps" necessary to trisect the segment is $4$, where by one "step" we mean any stroke of the pencil, either with the compass or straightedge. Immediately, this got me thinking about the length of other optimal constructions, which has led to the question of this post: What is the minimum number of steps necessary to construct a segment of length $\frac{1}{n}$ given a segment of length $1$? If $s(n)$ is the quantity in question, then this other helpful site shows that $s(n)\le n+6$. However, $s(2)=3$ and $s(3)=4$, so the bound is not sharp. Also, we can see that $s(mn)\le s(m)+s(n)$ by creating a segment of length $\frac{1}{mn}$ from one of length $\frac{1}{n}$. Finally, at the bottom of the first site, they hint at on
|
A possibly simpler method to divide a given line into n segments is to draw a second line of any length above and parallel to the given line such that the start of that 2nd line is perpendicular to the start of the first line. Mark out the number of random but equal length segments desired on the 2nd line and then draw concentric circles starting at the start of the 2nd line. Construct an upright line perpendicular to the end of the first line so as to intersect the last circle drawn. Draw a radial line from the origin of the circles to that intersection point. Then drop lines perpendicular to the given line from the intersection of the radial line with the constructed circle diameters. You will have divided it up into n equal lengths. Here is a drawing of the trisection of a line with some of the construction points hidden to avoid confusion. The blue line is the given line and the red line is the radial line drawn to meet the perpendicular upright from the end of the given line. Tris
|
|sequences-and-series|optimization|geometric-construction|
| 0
|
Differential Operator Maclaurin Series
|
The non-homogeneous differential equation $$Ly=R,$$ where $$ L=F(D)= \sum_i a_iD^i, $$ is solved in the way below. $$ Ly=R \Rightarrow y=\frac{R}{L}=\frac{R}{F(D)}$$ Then we expand the quotient $ \frac{1}{F(D)}$ into a Taylor series in the neighborhood of $0$ (Maclaurin). In order to expand $F(D)$ we have to give $D=\frac{d}{dx} $ a value. How can we give a differential operator a value?
|
These are derived from the differential operator polynomial. To take one example, if we are going to solve the ODE with source term $R=e^x \cos(2x)$ , then the following substitution steps are probably what you are after. \begin{align} y'' - 2y' + 5y = e^x \cos(2x)\implies (D^2-2D+5)y=e^x \cos(2x) \end{align} To find the particular solution, we can divide the operator polynomial \begin{align} y_p&=\frac{1}{D^2-2D+5}~e^x~\cos(2x)\\ &=e^x~\frac{1}{(D+1)^2-2(D+1)+5}~\cos(2x)\\ &=\color{blue}{e^x~\frac{1}{D^2+4}~\cos(2x)}\\ &=e^x \Re\left\{e^{2ix}\frac{1}{(D+2i)^2+4}\cdot 1\right\}\\ &=e^x \Re\left\{e^{2ix}\frac{1}{(D+4i)}\frac{1}{D}\cdot 1\right\}\\ &=e^x \Re\left\{e^{2ix}\frac{1}{4i(1-\frac{Di}{4})}x\right\}\\ &=e^x \Re\left\{\frac{-ie^{2ix}}{4}\left(1+\frac{Di}{4}\right)x\right\}\\ &=e^x \Re\left\{\frac{-ie^{2ix}}{4}\left(x+\frac{i}{4}\right)\right\}\\ &=\frac{x}{4}~e^x~\sin(2x)+\frac{1}{16}e^x~\cos(2x)\\ \end{align} where the second addend as a scaled source term is just a by-product of the particular solution.
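As a quick sanity check, here is a small verification (a sketch, assuming SymPy is available) that the particular solution found above does satisfy the ODE:

```python
import sympy as sp

x = sp.symbols('x')
# the particular solution obtained above
y_p = x/4*sp.exp(x)*sp.sin(2*x) + sp.Rational(1, 16)*sp.exp(x)*sp.cos(2*x)

lhs = sp.diff(y_p, x, 2) - 2*sp.diff(y_p, x) + 5*y_p
print(sp.simplify(lhs))  # exp(x)*cos(2*x), the source term
```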
|
|calculus|ordinary-differential-equations|taylor-expansion|fractional-calculus|
| 0
|
Terminology: What is a sort?
|
My current understanding is that sorts are ranges of quantification. "Many-sorted logic (MSL) allows quantification over a variety of domains (called sorts)" - Stanford Encyclopedia of Philosophy. First-order Peano arithmetic (PA) has only one sort $N$ , so it is single-sorted. If I modify PA so that quantification over $\mathrm{Prop}$ is possible, is it two-sorted? Or a single-sorted second-order theory?
|
Aside from the more philosophical examples in the Stanford Encyclopedia of Philosophy (SEP), many-sorted logic (MSL) is useful in mathematics for giving formal languages that fit well with structures like vector spaces, where our intuition is that there are two kinds of object, scalars and vectors, which combine algebraically in a controlled way: if $x$ and $y$ are scalars and $v$ and $w$ are vectors, we have a well-defined notion of what $x + y$ , $x\times y$ , $v + w$ and $x\times v$ mean, but we view terms like $x + v$ and $v \times w$ as meaningless. MSL allows us to exclude these unwanted terms from the language. It is, however, just a convenience: we could work in a single-sorted language where the universe includes both scalars and vectors and we would have predicates to distinguish the two sorts of object. However, I think you want your sort Prop to be a sort whose values range over propositions (truth-values) and you want to be able to use terms of type Prop to form formulas.
|
|logic|terminology|type-theory|
| 1
|
Weird change of variables (?) I would like to understand formally
|
In this paper (bottom of p. 4), the authors state the following (I've added all definitions below): Given the deterministic mapping $z=g_{\phi}(\epsilon, x)$ we know that: $q_\phi (z|x)\Pi_i dz_i = p(\epsilon)\Pi_i d\epsilon_i$ . Therefore, $$\int q_\phi(z|x)f(z)dz = \int p(\epsilon)f(z)d\epsilon=\int p(\epsilon)f(g_\phi(\epsilon, x))d\epsilon$$ where: $z$ is a continuous random variable from some conditional distribution: $z \sim q_\phi(z|x)$ $z$ can be expressed using a deterministic, vector-valued function $g_\phi(\epsilon, x)$ parameterized by $\phi$ using $\epsilon \sim p(\epsilon)$ . For example, for $z \sim N(\mu(x), \sigma(x)^2)$ , we could write $z = \mu(x) + \sigma(x)\epsilon$ where $\epsilon \sim N(0,1)$ . $f$ is some function. I can't derive this formally (I can intuitively see how they plugged in the equality before the "Therefore," and that it 'kind of makes sense'). I was trying to see if I could somehow prove this using change of variables, but since $\epsilon$ is its
|
I used some better notation. These are assumed: $$f_Z(z)= q_\phi(z|x)$$ $$Z = g_{\phi}(\mathcal E, x)$$ $$f_{\mathcal E}(\epsilon)= p(\epsilon)$$ where $f_Z(z)$ and $f_{\mathcal E}(\epsilon)$ denote the pdfs of random variables $Z$ and $\mathcal E$ , respectively. To avoid confusion, I here replace $f$ in the OP by $H$ . Then, $$\int q_\phi(z|x)H(z)dz=\int f_Z(z)H(z)dz=\mathbb E [H(Z)]\\ =\mathbb E \big [H(\color{blue}{g_{\phi}(\mathcal E, x)}) \big ]=\int f_{\mathcal E}(\epsilon)H(g_\phi(\epsilon, x))d\epsilon=\int p(\epsilon)H(g_\phi(\epsilon, x))d\epsilon.$$
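A quick Monte Carlo illustration of this identity (a Python/NumPy sketch, with arbitrary example values standing in for $\mu(x)$, $\sigma(x)$ and the test function $H$):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7               # example values standing in for mu(x), sigma(x)
H = lambda z: np.sin(z) + z**2     # arbitrary test function

z   = rng.normal(mu, sigma, 1_000_000)  # sample Z ~ q(z|x) directly
eps = rng.standard_normal(1_000_000)    # sample eps ~ N(0, 1)
# both estimate E[H(Z)]; they agree up to Monte Carlo error
print(H(z).mean(), H(mu + sigma*eps).mean())
```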
|
|calculus|probability|change-of-variable|
| 1
|
Is $(B_t^2 - t^2)_{t\geq 0}$ a local martingale?
|
Let $B$ be a standard $\mathbb{R}$ -valued Brownian motion. It is very easy to show that $(B_t^2 - t^2)_{t\geq 0}$ is not a martingale by checking the martingale condition. First, note that $(B_t^2 - t)_{t\geq 0}$ is a martingale. Using the martingale condition of $(B_t^2 - t)_{t\geq 0}$ yields: \begin{align*} &\mathbb{E}\left[B_t^2 - t^2\vert\mathcal{F}_s\right] = \mathbb{E}\left[B_t^2 - t\vert\mathcal{F}_s\right] + t - t^2 = B_s^2 - s + t - t^2 \overset{!}{=} B_s^2 - s^2\\ &\Leftrightarrow t - t^2 = s - s^2 \end{align*} The last equation is not always true (e.g. take $s=1$ and $t=2$ ) and thus $(B_t^2 - t^2)_{t\geq 0}$ is not a martingale. However, I have no clue how to investigate if $(B_t^2 - t^2)_{t\geq 0}$ is a local martingale.
|
If $B^2_t-t^2$ were a local martingale, then so too would $t-t^2=(B^2-t)-(B^2-t^2)$ be a (continuous) local martingale, but also a process of finite variation. Such a process must be constant in time, forcing $t-t^2=0$ for all $t>0$ , a contradiction. Therefore $B^2_t-t^2$ is not a local martingale.
|
|brownian-motion|local-martingales|
| 0
|
Is this a valid proof of the continuity of $f(x)=\sqrt{x}$ at $x=2$, using epsilon-delta definition?
|
Prove that $f(x)=\sqrt{x}$ is continuous at $x=2$ Proof: Givens: $\varepsilon > 0$ and $\delta > 0$ and $|x-2| < \delta$ . Let $0 < |x-2| < \delta$ . $$ |f(x)-f(2)| = |\sqrt{x}-\sqrt{2}| = \frac{|x-2|}{\sqrt{x}+\sqrt{2}} \le \frac{|x-2|}{\sqrt{2}} $$ Also, using the givens, $$ |x-2| < \delta $$ Then it follows that $$ |f(x)-f(2)| < \frac{\delta}{\sqrt{2}} $$ Therefore, $$ |f(x)-f(2)| < \varepsilon \quad \text{when } \delta = \sqrt{2}\,\varepsilon. $$ The reason I am unsure is because I know that delta should be minimized but I am unsure if all my steps are correct.
|
You can prove something a bit stronger with less work. For $f(x)=\sqrt{x}$ , $$\sqrt{x}-\sqrt{c}= \frac{x-c}{\sqrt{x}+\sqrt{c}}.$$ WLOG, $x>c$ . Then $x,c>1\implies \sqrt{x}-\sqrt{c} < \frac{x-c}{2} < x-c = |x-c|$ , so $|x-c| < \delta$ forces $|\sqrt{x}-\sqrt{c}| < \epsilon$ . So $\delta=\epsilon$ proves uniform continuity in the region in question $(1,\infty)$ . Uniform continuity implies continuity and $c=2$ is in that region where the function is uniformly continuous, so the function is continuous at $x=2$ .
|
|real-analysis|calculus|solution-verification|continuity|epsilon-delta|
| 0
|
Proving $\lim_{x\to\infty}xa^{x}=0$ in Elementary Ways
|
I wish to prove the following limit without using L'Hopital's rule or other known limits: $$\lim_{x\to\infty}xa^{x}=0$$ where $0<a<1$ . I wanted to do so using this sequence limit (which I know how to prove): $$\lim_{n\to\infty}na^{n}=0$$ I would appreciate knowing if the following argument is valid (this is only the essence of it): Let's denote for each $x>1$ : $n_x=\lfloor x\rfloor$ . We thus have for all $x>1$ : $$0\le xa^{x}\le (n_{x}+1)a^{x}\le (n_{x}+1)a^{n_x}=n_{x}a^{n_x}+a^{n_x}$$ We can now use the fact that if $\lim_{k\to\infty}x_k=\infty$ , then $\lim_{k\to\infty}n_{x_k}=\infty$ , the limits $\lim_{n\to\infty}a^n=0$ and $\lim_{n\to\infty}na^n=0$ , and the squeeze theorem to get the desired result. Of course we are using here properties of real exponents, which is ok for this discussion. I would appreciate any feedback regarding this argument's validity. Thanks a lot in advance!!
|
Another way could be as follows. We have $a^{-1}=1 +\delta$ with $\delta>0.$ Then for $x>2,$ in view of the binomial theorem, we get $$(1+\delta)^x\ge (1+\delta)^{n_x}\ge {n_x\choose 2 }\delta^2\ge {(x-1)(x-2)\over 2}\delta^2$$ Hence $$0\le xa^{x}={x\over (1+\delta)^{x}}\le {2x\over (x-1)(x-2)\delta^2}\xrightarrow[x\to\infty]{}0$$
|
|calculus|limits|limits-without-lhopital|
| 0
|
Handling Excess People with Indistinguishable Chairs in Circular Arrangements
|
I'm grappling with a problem involving seating arrangements in a room that features two circles of chairs. One circle consists of 11 chairs, and the other has 7, making a total of 18 chairs. The twist is considering scenarios with 20 people, focusing on the case where the chairs are indistinguishable. The challenge I face is understanding how to approach the situation where there are more people than available chairs, especially under the constraint of indistinguishable chairs. Here's the original problem statement: A total of 18 chairs are arranged in a room, forming two circles. One of the circles contains 11 chairs. Calculate the different ways twenty people can be seated in each of the following cases: Assuming the chairs are distinguishable. Assuming the chairs are indistinguishable. For the first part, with distinguishable chairs, the calculation was straightforward. For the circle of 11 chairs, selecting 11 out of 20 people and arranging them yields $\binom{20}{11} \times 10!$ ways
|
For round seating of $n$ people in $n$ chairs, the formula is $n!$ if the chairs are distinguishable (=numbered), and $\frac{n!}{n} =(n-1)!$ if indistinguishable (=unnumbered), because rotations don't change relative positions of the people. First form $3$ groups of $\binom{20}{11},\binom{9}{7}, \binom{2}{2}$ people. These groups get automatically labelled by size; call them groups $A,B,C$ . Now if the chairs are numbered, the people can be seated in $[1]:\binom{20}{11}\,11! \times\binom{9}{7}\,7!$ ways, and if unnumbered, in $[2]: \binom{20}{11}\,10! \times\binom{9}{7}\,6!$ ways.
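Evaluating the two counts above (a small Python snippet, standard library only):

```python
from math import comb, factorial

numbered   = comb(20, 11)*factorial(11) * comb(9, 7)*factorial(7)   # [1]
unnumbered = comb(20, 11)*factorial(10) * comb(9, 7)*factorial(6)   # [2]
print(numbered, unnumbered)
```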
|
|combinatorics|permutations|permutation-cycles|
| 0
|
Solve the second order equation $u^{\prime \prime}(t)=\frac{16 t\left(\beta t^3-27\right)}{81 \beta} u(t)$
|
I need to solve the second order equation $$u^{\prime \prime}(t)=\frac{16 t\left(\beta t^3-27\right)}{81 \beta} u(t)$$ or alternatively, $$u''(t) = \frac{16}{81}t^4u(t) - \frac{432}{81 \beta }tu(t)$$ $\beta > 0$ . I'm not sure how to approach this as it is second order. The equation is in normal/regular form. Wolfram gives me a series solution, but I would like something closed form. The original equation was $$9f''-10t^2f'+\left(t^4+\left(\frac{48}{\beta}-10\right)t\right)f =0 $$ and I used the integrating factor $$f(t) = \exp\left({\frac{5}{27}t^3}\right)u(t)$$ to obtain the form seen above. EDIT: Here is a simpler related case which is solved: $$9 f^{\prime \prime}-10 t^2 f^{\prime}+\left(t^4-10 t\right) f=0$$ which transforms into regular form by $$ f(t)=\exp \left(\frac{5}{27} t^3\right) u(t) $$ where $u(t)$ then satisfies the equation $$ u^{\prime \prime}-\frac{16}{81} t^4 u=0 $$ the solutions of which are $$ t^{1 / 2} I_{ \pm 1 / 6}(T) \text { and } t^{1 / 2} K_{1 / 6}(T) $$ wher
|
In the original equation, $$9 f''(t) - 10 t^2 f'(t) + \left(t^4 + \left(\frac{48}{\beta} - 10\right)t\right) f(t) = 0 ,$$ changing variables via $f(t) = t \exp\left(\frac{t^3}{27}\right) w(t)$ and $\tau = \frac{8 t^3}{27}$ gives $$\tau w''(\tau) + \left(\frac{4}{3} - \tau\right) w'(\tau) - \left(\frac{2}{3} - \frac{2}{\beta}\right) w(\tau) = 0,$$ which is Kummer's equation with parameters $\mu = \frac{2}{3} - \frac{2}{\beta}, \nu = \frac43$ , so the general solution is $$w(\tau) = c_1 M_{\frac{2}{3} - \frac{2}{\beta}, \frac43}(\tau) + c_2 U_{\frac{2}{3} - \frac{2}{\beta}, \frac43}(\tau) ,$$ where $M_{\mu, \nu} = {}_1 F_1(\mu; \nu; \,\cdot\,)$ is Kummer's confluent hypergeometric function, and $U_{\mu, \nu}$ is Tricomi's confluent hypergeometric function; for more, see the above link, as well as the NIST DLMF entry for Kummer functions . Translating back to $f(t)$ thus gives the general solution $$\boxed{f(t) = c_1 t \exp \left(\frac{t^3}{27}\right) M_{\frac{2}{3} - \frac{2}{\beta}, \frac43}\left(\frac{8 t^3}{27}\right) + c_2 t \exp \left(\frac{t^3}{27}\right) U_{\frac{2}{3} - \frac{2}{\beta}, \frac43}\left(\frac{8 t^3}{27}\right)}$$
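A numerical spot-check of the boxed solution (a Python/SciPy sketch with an arbitrary sample value of $\beta$): evaluate $f(t)=t\,e^{t^3/27}M_{\mu,\nu}(8t^3/27)$ via `hyp1f1` (SciPy's $_1F_1$) and plug it into the original ODE with finite differences.

```python
import numpy as np
from scipy.special import hyp1f1

beta = 2.0                    # arbitrary sample value
a, b = 2/3 - 2/beta, 4/3      # Kummer parameters from the answer

def f(t):
    return t*np.exp(t**3/27)*hyp1f1(a, b, 8*t**3/27)

t, h = 1.3, 1e-5              # test point, finite-difference step
f1 = (f(t + h) - f(t - h))/(2*h)
f2 = (f(t + h) - 2*f(t) + f(t - h))/h**2
print(9*f2 - 10*t**2*f1 + (t**4 + (48/beta - 10)*t)*f(t))  # ~0 up to FD error
```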
|
|calculus|ordinary-differential-equations|special-functions|bessel-functions|
| 1
|
When is there a homomorphism into $S_n$?
|
I was reading about groups of order 12 and found this: Let $G$ be a non-abelian group of order $12$. By Cauchy's theorem, it has an element, hence subgroup, $H$, of order $3$. If $H$ is not normal in $G$, then as $[G:H]=4$, there is a homomorphism $\phi:G\to S_4$. I don't understand why there is such a homomorphism and what it would be. I understand that since $[G:H]=4$, I could define the set of the left cosets of $H$ in $G$, $A=\{H, g_1H, g_2H, g_3H\}$ and the group of permutations of $A$ would be isomorphic to $S_4$. But I still don't know how I could define the homomorphism $\phi$. Any help would be greatly appreciated.
|
As pointed out in the other answer, the $G$ -action by left multiplication on the left quotient set $G/H$ yields a homomorphism $G\stackrel{\varphi}{\to} S_{G/H}\cong S_4$ . But there's more: since $H\not\unlhd G$ , we have $\tilde gH\tilde g^{-1}\ne H$ for some $\tilde g\in G$ . This suffices to get $\ker\varphi=$ $\bigcap_{g\in G}gHg^{-1}$ $=\{1\}$ , since $H$ has prime order. Therefore, actually $G$ embeds into $S_4$ or, equivalently, $S_4$ contains an isomorphic copy of $G$ .
|
|abstract-algebra|group-theory|group-homomorphism|
| 0
|
A computationally easy necessary and sufficient condition for the inverse of a line graph to exist
|
Here I have a conjecture: Let $G$ be a simple undirected graph. The necessary and sufficient condition for a graph $H$ with $L(H) = G$ to exist is that there is no vertex $u \in V(G)$ such that at least three vertices adjacent to $u$ appear in an independent set. It surely is a necessary condition: Let's assume that there is a vertex $u \in V(G)$ with three adjacent vertices in an independent set, and let $xy$ be its corresponding edge in $H$ . According to the pigeonhole principle, there are at least two edges $e_1, e_2$ whose corresponding vertices in $G$ are independent and which are adjacent to $xy$ in $H$ , with their common vertex being either $x$ or $y$ . In this case, $e_1, e_2$ are adjacent, which contradicts the independence of their corresponding vertices in $G$ .
|
Your condition is equivalent to forbidding $K_{1, 3}$ (the star with 3 branches, often called the claw) as an induced subgraph. However, this is not sufficient to fully characterize line graphs. A characterization by excluded induced subgraphs does exist, but it requires eight more forbidden subgraphs. These eight graphs provide the minimal counterexamples to your conjecture. Another (easy) characterization is that a line graph needs to admit a clique partition where each vertex belongs to exactly two cliques.
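For what it's worth, the condition in the conjecture is easy to test programmatically; a minimal Python sketch (adjacency given as a dict from vertex to neighbour set, an assumed representation):

```python
from itertools import combinations

def has_induced_claw(adj):
    """True if some vertex has 3 pairwise non-adjacent neighbours (induced K_{1,3})."""
    for u, nbrs in adj.items():
        for a, b, c in combinations(nbrs, 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return True
    return False

# K_{1,3} itself: centre 0 with leaves 1, 2, 3
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(has_induced_claw(star))  # True, so K_{1,3} is not a line graph
```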
|
|graph-theory|
| 1
|
Improving my way of showing $\sin^212^\circ+\sin^221^\circ+\sin^239^\circ+\sin^248^\circ=1+\sin^29^\circ+\sin^218^\circ$
|
This problem is from 1904 and was given to students studying for the Cambridge and Oxford entry examinations. My solution is presented below, but I am of the opinion that it can be improved. All ideas welcome. Show that $$\sin^{2}{12^{\circ}}+\sin^{2}{21^{\circ}}+\sin^{2}{39^{\circ}}+\sin^{2}{48^{\circ}}=1+\sin^{2}{9^{\circ}}+\sin^{2}{18^{\circ}}$$ A solution $$\begin{align} \sin^{2}{12^{\circ}}=\sin^{2}{(30^{\circ}-18^{\circ})} &=(\sin{30^{\circ}}\cos{18^{\circ}}-\cos{30^{\circ}}\sin{18^{\circ}})^{2} \tag1\\ &=\left(\frac{1}{2}\cos{18^{\circ}}-\frac{\sqrt{3}}{2}\sin{18^{\circ}}\right)^{2} \tag2\\ &=\frac{1}{4}\cos^{2}{18^{\circ}}+\frac{3}{4}\sin^{2}{18^{\circ}}-\frac{\sqrt{3}}{2}\cos{18^{\circ}}\sin{18^{\circ}} \tag3 \\ \\ \\ \sin^{2}{48^{\circ}} &=\sin^{2}{(30^{\circ}+18^{\circ})} \tag4 \\ &= (\sin{30^{\circ}}\cos{18^{\circ}}+\cos{30^{\circ}}\sin{18^{\circ}})^{2} \tag5 \\ &=\left(\frac{1}{2}\cos{18^{\circ}}+\frac{\sqrt{3}}{2}\sin{18^{\circ}}\right)^{2} \tag6 \\ &=\frac{1}{4}\cos^{2}{18^{\circ}}+\frac{3}{4}\sin^{2}{18^{\circ}}+\frac{\sqrt{3}}{2}\cos{18^{\circ}}\sin{18^{\circ}} \tag7 \\ \end{align}$$
|
As an alternative, using that $\sin^2 \theta = \frac{1-\cos(2\theta)}{2}$ we obtain the equivalent $$\cos 18^\circ-\cos 24^\circ+\cos 36^\circ-\cos 42^\circ-\sin 12^\circ+\sin 6^\circ=0$$ and by sum to product identities applied to $\cos 18^\circ-\cos 24^\circ$ , $\cos 36^\circ-\cos 42^\circ$ and $-\sin 12^\circ+\sin 6^\circ$ we obtain $$\require{cancel} \cancel 2\sin 21^\circ \cancel{\sin 3^\circ}+\cancel2\sin 39^\circ \cancel{\sin 3^\circ}-\cancel2\cos 9^\circ \cancel{\sin 3^\circ} =0$$ and again by sum to product identities $$2 \sin 30^\circ \cancel{\cos 9^\circ}-\cancel{\cos 9^\circ}=0$$ which is true.
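A quick numeric confirmation of the identity (Python, standard library):

```python
from math import sin, radians

lhs = sum(sin(radians(a))**2 for a in (12, 21, 39, 48))
rhs = 1 + sin(radians(9))**2 + sin(radians(18))**2
print(lhs - rhs)  # ~0, up to floating point
```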
|
|trigonometry|
| 0
|
Integrating cumulative distribution function of normal and exponential
|
Let $F(\cdot)$ be the cdf of an exponential distribution with mean 1 and $\Phi(\cdot)$ be the cdf of the standard normal distribution. I need to show that there exists some $n$ such that \begin{align*} \int_0^\infty (1- F^n(x))\Phi^n(x)~dx > 0.99. \end{align*} I have no idea how to proceed here. I'm fairly certain that it's probably not possible to find a closed form for the left hand-side, but I'm not sure if there's some trick I should be using.
|
The inequality in the OP holds for $$n\ge3.$$ I used Wolfram numerical integration to show: for $n=1$ , the integral evaluates to $0.761578$ ; for $n=2$ , to $0.980789$ ; for $n=3$ , to $1.09101$ . In the computations, I used $$\Phi(x)=\frac{1}{2}+\frac{1}{2}\text{erf}\left(\frac{x}{\sqrt{2}} \right),$$ where $\Phi$ is the cdf of the standard normal distribution and $\text{erf}$ is the error function.
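The same three values can be reproduced with SciPy (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

F   = lambda x: 1 - np.exp(-x)               # Exp(1) cdf
Phi = lambda x: 0.5 + 0.5*erf(x/np.sqrt(2))  # standard normal cdf

for n in (1, 2, 3):
    val, _ = quad(lambda x: (1 - F(x)**n)*Phi(x)**n, 0, np.inf)
    print(n, val)  # 0.761578, 0.980789, 1.09101
```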
|
|probability|probability-theory|analysis|probability-distributions|
| 0
|
Which of the following extensions are normal?
|
I want to understand how to deal with the following task: Which of the following extensions are normal? $\mathbb{Q}(i \sqrt[6]{3}) / \mathbb{Q}$ ; $\mathbb{C}(t) / \mathbb{C}\left(t^4\right)$ ; $\mathbb{R}(t) / \mathbb{R}\left(t^4\right)$ ; The first one is pretty obvious, if I am right. I know that an extension is finite and normal iff it is the splitting field of some polynomial. In the first example the polynomial is $x^6+3$ , and it is easy to calculate that every other root of that polynomial is algebraic over $\mathbb{Q}(i \sqrt[6]{3})$ . It means that $\mathbb{Q}(i \sqrt[6]{3}) / \mathbb{Q}$ is a splitting field, i.e. the extension is normal. But I don't know how to approach the 2nd and 3rd cases. I think that both extensions are finite, but I can't think of polynomials like I did in the 1st example. Can I use the exact same logic or do I need something different? Upd: as @RobertShore mentioned I didn't write the proof of the first example correctly. I meant that every other root of $x^6+3$ is contained in $\mathbb
|
The minimal polynomial of $t$ in $\Bbb C(t^4)$ or in $\Bbb R(t^4)$ is simply $p(x)=x^4-t^4$ . Moreover, the roots of $p(x)$ (in a suitable algebraic extension) are simply $\pm t, \pm it$ . This tells you that $\Bbb C(t)$ is normal over $\Bbb C(t^4)$ , but since $i \notin \Bbb R(t^4)$ , we also find that $\Bbb R(t)$ is not normal over $\Bbb R(t^4)$ . And yes, since $(i \sqrt[6]{3})^3=-i\sqrt 3$ , you are correct that the sixth roots of unity are in $\Bbb Q(i \sqrt[6]{3})$ and that field is normal over $\Bbb Q$ .
|
|galois-theory|extension-field|
| 1
|
What are the orientable prime compact three-manifolds that can be embedded in $\mathbb{R}^4$?
|
I am a physicist working in quantum field theory and it happens that I stumbled on the problem of the title: What are the orientable compact prime three-manifolds that can be embedded in $\mathbb{R}^4$? My starting point is an orientable compact three-manifold embedded in $\mathbb{R}^4$ that I can decompose into connected sums. I am intuitively assuming that this decomposition can be done inside $\mathbb{R}^4$ and that the resulting prime manifolds are also embedded in $\mathbb{R}^4$. Any information on the higher dimensional problem, that is, prime $n$-manifolds embedded in $\mathbb{R}^{n+1}$, would also be much appreciated.
|
Actually, regarding connected sums, it is much more subtle than you think. On one hand, if $M_1, M_2$ are 3-dimensional manifolds each of which embeds in $\mathbb R^4$ , then their connected sum also embeds. But the converse is false. For instance, lens spaces $L(p,q), p>1$ , do not embed smoothly in $\mathbb R^4$ , while for any coprime integers $p, q$ , with $p$ odd, the connected sum of the lens spaces $L_{p,q} \# (- L_{p,q})$ embeds smoothly in $\mathbb R^4$ . (Here the negative sign means the opposite orientation.) See my answer here and the reference therein, as well as Donald, Andrew, Embedding Seifert manifolds in $S^{4}$ , Trans. Am. Math. Soc. 367, No. 1, 559-595 (2015). ZBL1419.57046. All in all, the problem of smooth embeddings of closed 3-manifolds in $\mathbb R^4$ is wide-open. It is Problem 3.20 on Kirby's famous list of problems in 3-dimensional topology. Even in the case of Seifert manifolds, which is, arguably, the nicest class of 3-dimensional manifolds, only partial a
|
|manifolds|differential-topology|
| 0
|
Figuring out if $\lim_{(x,y)\to(0,0)}\frac{-x^6y(x^2+1)}{(x^6+y^2)\sqrt{x^2+y^2}}$ exists
|
I need to find out if the limit exists. $$\lim_{(x,y)\to(0,0)}\frac{-x^6y(x^2+1)}{(x^6+y^2)\sqrt{x^2+y^2}}$$ First, I approached the limit along $y=0$ , and the result was $\frac{0}{x^7}$ . Then, I approached along $x = 0$ , and the result was $\frac{0}{y^3}$ . This made me assume that the limit does not exist. However, Wolfram Alpha calculated the limit as zero. What is the solution to this question?
|
We have that for $x=0 \land y\neq 0$ and for $y=0 \land x\neq 0$ the expression is equal to zero; otherwise, by AM-GM (using $x^6+y^2\ge 2|x^3||y|$ and $x^2+1\le 2$ for $|x|\le 1$), $$\left|\frac{-x^6y(x^2+1)}{(x^6+y^2)\sqrt{x^2+y^2}}\right|\le \frac{2x^6|y|}{2|x^3||y|\sqrt{x^2+y^2}}=\frac{|x^3|}{\sqrt{x^2+y^2}}\le x^2 \to 0$$
|
|limits|multivariable-calculus|
| 0
|
The torus is a 2-manifold: proof of the Hausdorff axiom
|
I have to show that the torus, defined as the quotient space obtained from the square $Q$ by using the word $aba^{-1}b^{-1}$ to identify the sides, is a topological 2-manifold. I've already proved that it is locally 2-euclidean. Now I have to show that it is a Hausdorff space. I want to use the following result. Theorem. If $X$ is a compact and Hausdorff topological space and the quotient map $\pi:X\to X/_\sim$ is closed, then $X/_\sim$ is a Hausdorff space. So, if $T$ is the torus, I have $T=\pi(Q)$ where $\pi:Q\to Q/_\sim=T$ is the quotient map. $Q$ is compact because it's closed and bounded, and it's Hausdorff because it is a subspace of $\mathbb{R}^2$ , which is Hausdorff. The only thing to prove is that $\pi$ is a closed map. Let $C$ be a closed subset of $Q$ . I have to show that $\pi^{-1}(\pi(C))$ is closed for every closed subset $C$ of $Q$ . Case $1:$ If $C\cap \bigcup_i L_i= \emptyset$ , where $L_i$ is a side of $Q$ , I think that $C$ is saturated, and so $\pi^{-1}(\pi(C))=C$ . Case $
|
The proof looks correct to me (but please write out bounds on indices in the future: all your $i$ 's and $j$ 's start nowhere and end nowhere), but I think you can "streamline" the whole thing a bit: For starters, note that $B$ (and thus the $B_i$ 's) are superfluous since they are always going to be contained in one of the $A_i$ . Also you don't have to treat case 1 separately, even if it's short; it's covered by case 2 (and happens exactly when $A = \emptyset$ ), and the case distinction in the definition of the $A_i$ is also superfluous for a similar reason. With this in mind, how about something like the following (I'll also substitute some standard notation): Let $I^2 = [0, 1]^2$ be the square and let $L_1, \ldots, L_4 \subset \partial I^2$ be the four closed boundary line segments in clockwise order. Moreover, let $\varphi_i\colon L_i \overset{\cong}{\to} L_{i + 2}$ ( $i = 1, 2$ ) be the linear homeomorphisms along which the $L_i$ are identified in the quotient. If $C \subseteq I^
|
|general-topology|solution-verification|algebraic-topology|proof-writing|
| 1
|
Random vector $(X, Y)$ has a uniform distribution on the unit circle.
|
I am faced with the following problem and do not understand how to solve it: Random vector $(X, Y)$ has a uniform distribution on the unit circle. Will its components be independent? It is not very clear to me how to approach such tasks; is it necessary to look for a joint distribution function here? But it's probably clear that we need to check the definition somehow: $X,Y$ independent $\Leftrightarrow$ $\mathbb P(X \le x,\, Y \le y) = \mathbb P(X \le x)\,\mathbb P(Y \le y)$ for all $x,y$ . But how to do this is not very clear.
|
Regardless of whether we're talking about just the boundary or including the interior of the circle, it is very easy to see that the components are not independent by considering a region between the unit square and the unit circle, e.g. $A = [0.8, 1] \times [0.8, 1]$ . Clearly $P(0.8 \leq X \leq 1) > 0$ and $P(0.8 \leq Y \leq 1) > 0$ , but $P((X, Y) \in A) = 0$ .
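A Monte Carlo illustration (Python/NumPy sketch, taking "unit circle" to include the interior, i.e. the unit disk):

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(1_000_000, 2))
pts = pts[(pts**2).sum(axis=1) <= 1]        # uniform on the unit disk by rejection
x, y = pts[:, 0], pts[:, 1]

px  = np.mean((x >= 0.8) & (x <= 1))
py  = np.mean((y >= 0.8) & (y <= 1))
pxy = np.mean((x >= 0.8) & (x <= 1) & (y >= 0.8) & (y <= 1))
print(px*py, pxy)  # the product is positive, the joint probability is 0
```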
|
|probability|random-variables|uniform-distribution|
| 0
|
Computing the height of an ideal...?
|
I hope I'm not overbearing on this site. Yes, I'm still struggling. If you can, I have a question about primary decomposition that still needs help; you can find it on my page. Now I wanted to find the height of a certain ideal. In the ring $ A = K[x,y,z]/(xy,xz,z-y^2)$ I need to find the height of $I = (x,y,z)$ . Here is my reasoning up to now: Krull's height theorem states that the height of a prime ideal, such as $I$ , is not greater than the number of generators of $I$ , so it's not greater than the least number of polynomials that generate $I$ . Since in $A$ , $z=y^2$ , we can consider $I$ as having 2 generators, so the height of $I$ is at most two. So I'd need to find two proper prime ideals that are contained in $I = (x,y)$ . I really don't know how to use the conditions $xy=0$ and $xz=0$ , so I'm having second thoughts about my solution. I initially said that the height is indeed two because $(0) \subseteq (x) \subseteq I$ and I don't need to find if there are any more ideals there, bu
|
Let $P$ be a minimal prime over $J=(xy,xz,z-y^2)$ . Clearly $z-y^2\in P$ . Since $xy\in P$ , either $x\in P$ or $y\in P$ . Suppose $x\in P$ . Then we see that $J\subseteq (x,z-y^2)\subseteq P$ , and since $(x,z-y^2)$ is a prime ideal, we conclude $P=(x,z-y^2)$ . Next, suppose $x\notin P$ . Then $y,z\in P$ and a similar argument as above yields $P=(y,z)$ . So the minimal primes of $J$ are $(x,z-y^2)$ and $(y,z)$ . Now a maximal chain of prime ideals in $A$ descending from $(x,y,z)$ must end at one of the minimal prime ideals, but both minimal primes have height two, and $(x,y,z)$ has height three in $k[x,y,z]$ , so the height of $(x,y,z)$ in $A$ must be one.
|
|commutative-algebra|ideals|krull-dimension|dimension-theory-algebra|
| 1
|
Proof of correctness in odd-even merge function
|
I'm studying the odd-even merge algorithm. I've read this lecture note and I don't understand one point in his proof of correctness. Thus it remains to prove that for all $i$ , $E[i] \le D[i+1]$ , or in other words, $\textrm{max}(C_o[i+1],C_e[i]) \le \textrm{min}(C_o[i+2],C_e[i+1])$ . Equivalently, we need to prove that for all $i$ , we have $(i)$ $C_o[i+1] \le C_o[i+2]$ , $(ii)$ $C_o[i+1] \le C_e[i+1]$ , $(iii)$ $C_e[i] \le C_o[i+2]$ , and $(iv)$ $C_e[i] \le C_e[i+1]$ . Out of the four, $(i)$ , $(ii)$ , and $(iv)$ are trivial to prove. Now let's prove $(iii)$ . But I don't think $(ii)$ is trivial. Can you help me to prove that? Thanks. P.S.: I have to note that, in the base case, where only 2 values go into the merge function, the merge function is equivalent to a compare function (i.e. compare((a),(b)) = (a>b) ? (b,a) : (a,b)).
|
Let $A_o[k]$ and $B_o[l]$ be the greatest values of $A_o$ and $B_o$ such that $A_o[k]\leq C_o[i+1]$ and $B_o[l]\leq C_o[i+1]$ . We have $k+l=i+1$ . Moreover, we can suppose WLOG that $A_o[k]=C_o[i+1]$ . We immediately see that $A_e[j] \leq C_o[i+1]$ for $j < k$ and $B_e[j] \leq C_o[i+1]$ for $j < l$ . The only other $C_e$ value that can eventually be lower than $C_o[i+1]$ is $B_e[l]$ ; all the other $C_e$ values must be greater. We count at most $(k-1)+(l-1) +1 = i$ values of $C_e$ that can be lower than $C_o[i+1]$ , but $C_e[i+1]$ will necessarily be greater.
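For concreteness, here is a small recursive sketch of the odd-even merge in Python (my own formulation of the construction in the lecture note; with 0-based slices, `a[::2]` are the odd-indexed elements in the note's 1-based numbering):

```python
def oddeven_merge(a, b):
    """Recursive odd-even merge of two sorted lists a and b."""
    if len(a) + len(b) <= 2:
        return sorted(a + b)                   # base case: a single compare
    c_o = oddeven_merge(a[::2], b[::2])        # C_o: merge of odd-indexed elements
    c_e = oddeven_merge(a[1::2], b[1::2])      # C_e: merge of even-indexed elements
    out = [c_o[0]]
    for x, y in zip(c_e, c_o[1:]):             # compare-exchange interleaved pairs
        out += [min(x, y), max(x, y)]
    n = min(len(c_e), len(c_o) - 1)            # append whichever tail remains
    return out + c_e[n:] + c_o[1 + n:]

print(oddeven_merge([1, 4, 6, 9], [2, 3, 5, 7]))  # [1, 2, 3, 4, 5, 6, 7, 9]
```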
|
|algorithms|proof-explanation|
| 0
|
Compute last seven digits of $7^{5^{2024}}$
|
I recently had an exam in my number theory class and one of the questions was to compute the last seven digits of $7^{5^{2024}}$ . Usually, this is a pretty standard problem since we can just use FLT and CRT (see below for my approach). However, that we need to compute the last seven digits makes this method seem completely intractable for an exam, since we will need to compute many large powers and then find solutions of $ax + by = 1$ for very large numbers in order to run CRT. This makes me think there must be another way to think about this computation. Perhaps another way is to work $p$ -adically. Namely, I thought about the map $x \mapsto x^5$ on elements of $\mathbb{Z}_2$ and $\mathbb{Z}_5$ to try to solve for $7^{5^{2024}}$ mod $2^7$ and mod $5^7$ . However, I am still encountering large computations. Does anyone have any ideas? There is probably something simple that I have missed. Here is my approach if we only wanted the last three digits: This is the same as computing $7^{5^{2024}} \pmod{10^3}$
|
Notice $\ 7^{\Large 5^{\large\color{#c00}{N+4}}}\!\!\equiv 7^{\Large 5^{\large \color{#c00}{N}}}\!\!\pmod{\!10^{\large 7}}\ $ for $\ \color{#c00}N\ge\color{#c00} 5,\,$ so the rest is easy. Proof: $\, \ 7^{\Large 5^{\large\color{#c00}{N+4}}}\!\!\!- 7^{\Large 5^{\large \color{#c00}{N}}}\!\! = 7^{\Large 5^{\large \color{#c00}{N}}}\! (7^{\Large\color{#0a0}{ 5^{\large \color{#c00}N} (5^{\Large 4}-1)}}-1)\equiv 0\,$ by mod order $\rm\color{#0a0}{reduction}$ via below $\qquad\ \ \ \ \begin{align} 7^{\large 4}\equiv 1\!\!\!\!\pmod{\!5^{\large 2}}&\overset{(\ \ )^{\large 5^5\!}} \Longrightarrow 7^{\Large \color{#0a0}{4\cdot 5^{\Large\color{#c00}5}}}\!\!\!\equiv 1\!\!\!\!\pmod{\!5^{\large 7}},\ {\rm and}\,\ \color{#0a0}{4\cdot 5^{\large\color{#c00}{5}}\mid 5^{\color{#c00}N}(5^{\large 4}\!-\!1)}\\ 7^{\large 2}\equiv 1\!\!\!\!\pmod{\!2^{\large 4}}&\overset{(\ \ )^{\large 2^3\!}}\Longrightarrow 7^{\Large \color{#0a0}{2^4}}\!\equiv\ 1\!\!\!\pmod{\!2^{\large 7}}\,\ \ {\rm and}\quad\ \ \ \color{#0a0}{2^{\large 4}\mid 5^{\color{#c00}N}(5^{\large 4}\!-\!1)}\end{align}$
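The claimed periodicity, and the resulting answer, can be confirmed with Python's three-argument `pow` (since $2024 \equiv 8 \pmod 4$ and $8 \ge 5$, the tower reduces to $N = 8$):

```python
print(pow(7, 5**9, 10**7) == pow(7, 5**5, 10**7))  # True: period 4 once N >= 5
print(pow(7, 5**8, 10**7))  # last seven digits of 7^(5^2024)
```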
|
|elementary-number-theory|modular-arithmetic|
| 1
|
Asymptotic form of solution to biased random walk
|
Consider a continuous time biased random walk on a 1D lattice. The random walker walks with rate $k_\mathrm{R}$ to the right and with rate $k_\mathrm{L}$ to the left. The probability $p_n(t)$ of being at location $n$ at time $t$ is then described by $$\frac{\mathrm{d}}{\mathrm{d}t}p_n(t)=k_\mathrm{R}p_{n-1}(t)+k_\mathrm{L}p_{n+1}(t)-(k_\mathrm{L}+k_\mathrm{R})p_n(t).$$ The solution of this equation is $$p_n(t)=\left(\frac{k_\mathrm{R}}{k_\mathrm{L}}\right)^{\frac{n}{2}}I_n(2\sqrt{k_\mathrm{L}k_\mathrm{R}}t)\mathrm{e}^{-(k_\mathrm{L}+k_\mathrm{R})t},$$ where $I_n(t)$ is the Bessel function. On the other hand, since this process is related to drift diffusion, we expect the long time limit $t\to\infty$ to be given by $$p_n(t)\sim\frac{1}{\sqrt{4\pi Dt}}\mathrm{e}^{-\frac{(n-vt)^2}{4Dt}},$$ with the "diffusion constant" $D=(k_\mathrm{L}+k_\mathrm{R})/2$ and "drift velocity" $v=k_\mathrm{R}-k_\mathrm{L}$ . Visually, this also seems to be correct for all values I tried. Is it possible to der
|
I think that you skipped some steps when you did the approximations in the unbiased case. If you do it rigorously, the same method applies. More formally, you want to estimate large deviations at large times, so you want to scale: $$ n = ut $$ You then get: $$ \begin{align} p_n(t) &= \int_0^{2\pi}e^{ikn}\exp\left[-t\left(k_L(1-e^{ik})+k_R(1-e^{-ik})\right)\right]\frac{dk}{2\pi}\\ &= \int_0^{2\pi}\exp\left[t\left(iku-k_L(1-e^{ik})-k_R(1-e^{-ik})\right)\right]\frac{dk}{2\pi}\\ \end{align} $$ The saddle point is the solution $k$ to: $$ u=k_Re^{-ik}-k_Le^{ik} $$ The solution is purely imaginary, and it is better to rewrite it as $k=-i\kappa$ . Explicitly: $$ \kappa=\ln\left(\frac{\sqrt{u^2+4k_Rk_L}-u}{2k_L}\right) $$ but it is easier to think in terms of $\kappa$ . Therefore, the logarithmic leading order is: $$ \begin{align} \frac1t\ln p_n(t)&\to iku+k_Re^{-ik}+k_Le^{ik}-k_R-k_L\\ &= \kappa(k_Re^{-\kappa}-k_Le^\kappa)+k_Re^{-\kappa}+k_Le^\kappa-k_R-k_L \end{align} $$ The rate functi
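A numerical comparison of the exact Bessel-function solution with the Gaussian asymptote (a Python/SciPy sketch with arbitrary sample rates; `ive` is the exponentially scaled $I_n$, used here to avoid overflow):

```python
import numpy as np
from scipy.special import ive

kL, kR, t = 0.5, 1.5, 200.0              # sample rates and a large time
D, v = (kL + kR)/2, kR - kL
n = np.arange(int(v*t) - 50, int(v*t) + 50)

exact = (kR/kL)**(n/2) * ive(n, 2*np.sqrt(kL*kR)*t) \
        * np.exp(2*np.sqrt(kL*kR)*t - (kL + kR)*t)
gauss = np.exp(-(n - v*t)**2/(4*D*t)) / np.sqrt(4*np.pi*D*t)
print(np.max(np.abs(exact - gauss)))     # small compared with the peak ~0.02
```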
|
|stochastic-processes|asymptotics|brownian-motion|random-walk|bessel-functions|
| 1
|
Proposition 5.4.9. Analysis I - Terence Tao.
|
Proposition 5.4.9 (The non-negative reals are closed). Let $a_1, a_2, a_3, \ldots $ be a Cauchy sequence of non-negative rational numbers. Then $\text{LIM}_{n \to \infty}a_n$ is a non-negative real number. Tao's Proof. We argue by contradiction, and suppose that the real number $x:= \text{LIM}_{n \to \infty}a_n$ is a negative number. Then by definition of negative real number, we have $x:= \text{LIM}_{n \to \infty}b_n$ for some sequence $b_n$ which is negatively bounded away from zero, i.e., there is a negative rational $-c < 0$ such that $b_n \leq -c$ for all $n \geq 1$ . On the other hand, we have $a_n \geq 0$ for all $n \geq 1$ , by hypothesis. Thus the numbers $a_n$ and $b_n$ are never $c/2$ -close, since $|a_n - b_n| \geq c > c/2$ . Thus the sequences $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ are not eventually $c/2$ -close. Since $c/2 > 0$ , this implies that $(a_n)_{n=1}^{\infty}$ and $(b_n)_{n=1}^{\infty}$ are not equivalent. But this contradicts the fact that both these sequences have $x$ as their formal limit.
|
In Chapter 5 Tao introduces the real numbers as formal expressions $\operatorname{LIM}_{n \to \infty} a_n$ , where $(a_n)$ is a Cauchy sequence of rational numbers. Two real numbers $\operatorname{LIM}_{n \to \infty} a_n$ and $\operatorname{LIM}_{n \to \infty} b_n$ are said to be equal iff $(a_n)$ and $(b_n)$ are equivalent Cauchy sequences. This means actually that $\operatorname{LIM}_{n \to \infty} a_n$ denotes the equivalence class of $(a_n)$ with respect to the equivalence relation "equivalent Cauchy sequences". See Confusion about Tao's construction of reals . Thus $\operatorname{LIM}_{n \to \infty} a_n$ does not denote the "usual limit" of the sequence $(a_n)$ . This concept is introduced only later in Chapter 6. The limit of a sequence $(x_n)$ of real numbers is written as $\lim_{n \to \infty} x_n$ . This explains why his proof uses the second sequence $(b_n)$ . It is a different representative of the real number $x = \operatorname{LIM}_{n \to \infty} a_n$ and he shows that the
|
|real-analysis|sequences-and-series|proof-explanation|real-numbers|cauchy-sequences|
| 1
|
Linear bounded functional on Hilbert space
|
Hi guys, can someone help with this question? Let $\mathcal{H}$ be a Hilbert space and $F$ a linear bounded functional in $\mathcal{H}^∗$ such that $F \neq 0$ . Prove that $\dim(\ker F)^{\perp} = 1$ .
|
Take the field to be $\mathbb{R}$ for example. Check the proof of the Riesz representation theorem; you'll see we constructed $a$ such that $f(x) = \langle x, a \rangle$ where $a \in \ker(f)^{\perp}$ . The uniqueness in the Riesz representation theorem implies $\ker(f)^{\perp}$ must be 1-dimensional (otherwise at least 2 linearly independent vectors, after rescaling, can both represent $f$ as an inner product, which is a contradiction).
|
|functional-analysis|hilbert-spaces|
| 0
|
What is the relationship between mutual information conditioned on different variables: I(W;X|Y) vs. I(W;X|Z)
|
Let four random variables form the Markov chain $$W \longrightarrow X \longleftarrow Y \longrightarrow Z$$ such that the conditional distribution of $X$ depends only on $W$ and $Y$ and is conditionally independent of $Z$ given $Y$ . What is the relationship between the mutual informations $I(W;X|Y)$ and $I(W;X|Z)$ (e.g. is one greater than or equal to the other)? EDIT: $W$ also depends on $Y$ , $Y\rightarrow W$ (not shown in the Markov chain graph above).
|
Your DAG encodes the factorisation: $p(w,x,y,z)= p(y)~p(w\mid y)~p(x\mid w,y)~p(z\mid y)$ $$\begin{align}\mathcal I(W,X\mid Y) &= \sum_{w,x,y} p(y) p(x,w\mid y)\log\left(\dfrac{p(w,x\mid y)}{p(w\mid y)p(x\mid y)}\right)\\[1ex] &= \sum_{w,x,y} p(y)p(w\mid y)p(x\mid w,y)\log\left(\dfrac{p(x\mid w,y)}{p(x\mid y)~~~~~}\right)\\[3ex]\mathcal I(W,X\mid Z) &= \sum_{w,x,z} p(z)p(w,x\mid z)\log\left(\dfrac{p(w,x\mid z)}{p(w\mid z)p(x\mid z)}\right)\\[1ex] &~~\vdots \end{align}$$
|
|probability|conditional-probability|mutual-information|
| 0
|
Sampling from Gaussian with very large covariance matrix in block form
|
I'm interested in sampling from a Gaussian with zero-mean and covariance given by: $$ \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \cdots &\Sigma_{1,100}\\ \Sigma_{21} & \Sigma_{22} & \cdots &\Sigma_{2,100}\\ \vdots & \vdots & \cdots &\vdots \\ \Sigma_{100,1} & \Sigma_{100,2} & \cdots &\Sigma_{100,100} \end{bmatrix} $$ where $\Sigma_{ij}$ is a square matrix of dimension $p \times p$ . In words, the covariance matrix is massive and I am not able to load the entire array into memory. Are there any approaches that allow me to sample from such a Gaussian but only access a subset of the blocks at a time?
|
You can use the Cholesky–Banachiewicz and Cholesky–Crout algorithms . They allow you to calculate each row of the Cholesky decomposition while using only one row from $\Sigma$ at a time.
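A minimal sketch of the idea in Python/NumPy (the hypothetical `get_row(i)` callback stands for whatever loads row $i$ of $\Sigma$ from disk; for truly huge problems the factor `L` would itself be kept out of core):

```python
import numpy as np

def sample_gaussian_rowwise(get_row, n, rng):
    """One N(0, Sigma) draw via the Cholesky-Banachiewicz recurrence,
    touching a single row of Sigma at a time."""
    L = np.zeros((n, n))
    eta = rng.standard_normal(n)
    z = np.zeros(n)
    for i in range(n):
        row = get_row(i)
        for j in range(i + 1):
            s = row[j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s/L[j, j]
        z[i] = L[i, :i + 1] @ eta[:i + 1]   # z = L @ eta, built incrementally
    return z

rng = np.random.default_rng(0)
Sigma = np.array([[4.0, 2.0], [2.0, 3.0]])  # tiny example covariance
print(sample_gaussian_rowwise(lambda i: Sigma[i], 2, rng))
```

Since the resulting `z` equals `L @ eta` with `L` the Cholesky factor of $\Sigma$, it has covariance $LL^\top = \Sigma$ as required.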
|
|algorithms|normal-distribution|covariance|sampling|
| 1
|
branch points of arcsin
|
From the definition given by Wikipedia and Cauchy's theorem I can find the branch points of $\arcsin$ through its derivative $\displaystyle\frac{1}{\sqrt{1-x^2}}$ . Are $-1$ and $1$ simple poles of this expression? (I'm a bit confused because of the fractional power.) Also, there is a branch point at infinity. How do I find this branch point? What are the orders of all the branch points of $\arcsin$ ? From Wikipedia, I know that a simple pole of the derivative means a logarithmic branch point, so there is no order if $-1$ and $1$ are simple poles of $\displaystyle\frac{1}{\sqrt{1-x^2}}$ ?
|
I believe we can think about this problem from the perspective of the domain, which is much easier. The expression $\sqrt{1-z^2}$ in $\sin^{-1}(z) = -i\log[iz + \sqrt{1-z^2}]$ is not explicit as the input is a complex number. The true definition is $\sqrt{1-z^2} = \exp[0.5 \log(1-z^2)]$ , which directly implies that $1-z^2 \neq x$ for any real number $x \leq 0$ , as the real part of the output of the exponential function must be strictly positive. Therefore, we have $z \notin (-\infty,-1] \cup [+1,+\infty)$ , which is the branch cut of the arcsin function. In fact, all inverse trigonometric functions possess similar branch cuts due to the domain of the log function.
|
|complex-analysis|
| 0
|
Value of $\displaystyle \sum^{n}_{k=1}(-1)^{k-1}y_{k}$ in $n$ degree polynomial with roots $y_1,y_2,\cdots,y_n$
|
Let the equation $\displaystyle y^n-2ny^{n-1}+2n(n-1)y^{n-2}+ay^{n-3}+by^{n-4}+\cdots +c=0$ have $n$ real roots $y_1,y_2,y_3,\cdots ,y_n$ . Then $\displaystyle \sum^n_{k=1}(-1)^{k-1}y_k=$ ? What I try: $\displaystyle y_1+y_2+y_3+\cdots +y_n=2n$ and $\displaystyle \sum_{1\le i<j\le n}y_iy_j=2n(n-1)$ . Also $\displaystyle y_1\cdot y_2 \cdots y_n=(-1)^{n}c$ . But I did not understand how I can find the value of $\displaystyle y_1-y_2+y_3-y_4+\cdots +(-1)^{n-1}y_n$ . Please have a look, thanks.
|
Unless all the roots are the same, it's impossible to calculate $\sum_{k=1}^n (-1)^{k-1}y_k$ because the roots are indistinguishable, but the value of the expression can be changed by permutation of roots. So the question boils down to whether $y_1 = y_2 = \cdots =y_n$ . Consider the following expression $$\sum_{1\leq i<j\leq n}(y_i-y_j)^2\geq 0.$$ Since $\sum_{1\leq i<j\leq n}(y_i-y_j)^2=(n-1)\sum_{i=1}^n{y_i^2}-2\sum_{1\leq i<j\leq n}y_iy_j$ , therefore $\sum_{i=1}^n{y^2_i}\geq \dfrac{2}{n-1}\sum_{1\leq i<j\leq n}y_iy_j$ . We have $$\left(\sum_{i=1}^n{y_i}\right)^2=\sum_{i=1}^n{y^2_i}\;+2\sum_{1\leq i<j\leq n}y_iy_j\geq \dfrac{2n}{n-1}\sum_{1\leq i<j\leq n}y_iy_j.$$ On the other hand, $\left(\sum_{i=1}^n{y_i}\right)^2 = 4n^2 = \dfrac{2n}{n-1}\bigl(2n(n-1)\bigr)=\dfrac{2n}{n-1} \sum_{1\leq i<j\leq n}y_iy_j$ . Therefore $\sum_{1\leq i<j\leq n}(y_i-y_j)^2=0$ , hence $y_i =y_j$ for $1\leq i<j\leq n$ . In other words $y_1=y_2=\cdots=y_n$ . It follows straightforwardly that $y_i=2$ and $\sum_{k=1}^n(-1)^{k-1}y_k = 1-(-1)^n$ .
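A small SymPy check of the conclusion for $n=4$ (sketch): $(y-2)^n$ indeed reproduces the two given coefficients.

```python
import sympy as sp

y = sp.symbols('y')
print(sp.expand((y - 2)**4))  # y**4 - 8*y**3 + 24*y**2 - 32*y + 16: 2n = 8, 2n(n-1) = 24
```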
|
|polynomials|
| 1
|
Trig Subs issues
|
Problem: Use a trigonometric substitution to find $$I = \int_\sqrt{3}^2{\frac{\sqrt{x^2-3}}{x}}\;dx$$ Give both an exact answer (involving $\pi$ and a square root) and a decimal estimate to 3 significant digits. Here is the work I have so far. $$I = \int_\sqrt{3}^2{\frac{\sqrt{x^2-3}}{x}}\;dx$$ $$1 + \tan^2\Theta = \sec^2\Theta$$ $$\sec^2\Theta - 1 = \tan^2\Theta$$ $$x = \sqrt{3}\sec\Theta$$ $$\sqrt{(\sqrt{3}\sec\Theta)^2 - 3}$$ $$\sqrt{3(1+\tan^2\Theta)-3}$$ $$\sqrt{3\tan^2\Theta}$$ $$\sqrt{3}\tan\Theta$$ $$I = \int_\sqrt{3}^2{\frac{\sqrt{3}\tan\Theta}{\sqrt{3}\sec\Theta}}\;dx$$ $$\frac{dx}{d\Theta} = (\sqrt{3}\sec\Theta)'$$ $$dx = \sqrt{3}\tan\Theta\sec \Theta d\Theta$$ I feel like around here I may have messed up and I'm not sure if I should switch the limits of integration to $\Theta$ numbers of $0$ to $\frac{\pi}{6}$ yet or not. Any help is appreciated =) Thanks.
|
I give a method other than trig sub. Let $t=\sqrt{x^2-3}$ , then $x^2=t^2+3$ . Note $dt=\dfrac{x}{\sqrt{x^2-3}}dx$ . Hence, we have $$\int^2_{\sqrt{3}}\dfrac{\sqrt{x^2-3}}{x}~dx=\int^{1}_0\dfrac{t^2}{t^2+3}~dt=1-\int^1_0\dfrac{3}{t^2+3}~dt=1-\left[\dfrac{3}{\sqrt{3}}\arctan\left(\dfrac{t}{\sqrt{3}}\right)\right]^1_0=1-\dfrac{\pi}{2\sqrt{3}}$$
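Both the exact value and the requested decimal estimate are easy to confirm numerically (Python/SciPy sketch):

```python
import numpy as np
from scipy.integrate import quad

# clamp tiny negatives from floating point at the lower endpoint
val, _ = quad(lambda x: np.sqrt(max(x*x - 3, 0.0))/x, np.sqrt(3), 2)
print(val, 1 - np.pi/(2*np.sqrt(3)))  # both ~0.0931
```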
|
|calculus|definite-integrals|trigonometric-integrals|
| 0
|
Find the radius of the circle tangent to $x^2, \sqrt{x}, x=2$
|
I'm taking up a post that was closed for lack of context, because I'm very interested in it: Let $(a,b)$ be the center of this circle. It seems intuitive that $b=a$ , but I have not been able to prove it formally, although I know that a function and its inverse are symmetric with respect to the first bisector $y=x$ . Then let $(X,X^2)$ be the point of tangency with $y=x^2$ . I think we're going to use the formula for the distance from $(a,a)$ to the line $y-X^2=2X(x-X)$ . We obviously have the relation $r=2-a$ . The normal at $(X,X^2)$ passes through $(a,a)$ . I'm not sure if my notations are the best to elegantly solve the exercise. I hope you will share my enthusiasm for this lovely exercise that I have just discovered thanks to MSE.
|
Answer. $r\approx 0.344555975$ . More precisely, $r$ is the middle real root of the polynomial $$ 64r^5-531r^4+330r^3+1468r^2-1836r+452.$$ Solution. Let us justify the symmetry arguments first. Namely, we claim that the center $(x_0,y_0)$ of the circle belongs to the bisector $x=y$ , even when we relax the condition that the circle is tangent to the line $x=2$ . Indeed, suppose the circle is tangent to the graph $y=x^2$ at a point $(x,x^2)$ . Then the vector $(x-x_0,x^2-y_0)$ has length $r$ and is orthogonal to the tangent vector $(1,2x)$ to the graph $y=x^2$ at the point $(x,x^2)$ . It follows $x-x_0=\frac{-2rx}{\sqrt{1+4x^2}}$ and $x^2-y_0=\frac{r}{\sqrt{1+4x^2}}$ . Similarly, if the circle is tangent to the graph $x=y^2$ at a point $(y^2,y)$ then $y-y_0=\frac{-2ry}{\sqrt{1+4y^2}}$ and $y^2-x_0=\frac{r}{\sqrt{1+4y^2}}$ . Thus $$x_0=x+\frac{2rx}{\sqrt{1+4x^2}}=y^2-\frac{r}{\sqrt{1+4y^2}}$$ and $$y_0=x^2-\frac{r}{\sqrt{1+4x^2}}=y+\frac{2ry}{\sqrt{1+4y^2}}.$$ So $x_0+y_0=f(x)=f(y)$ , where
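The quoted root can be checked numerically (Python/NumPy sketch):

```python
import numpy as np

# roots of 64r^5 - 531r^4 + 330r^3 + 1468r^2 - 1836r + 452
print(np.roots([64, -531, 330, 1468, -1836, 452]))  # includes r ~ 0.344556
```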
|
|geometry|functions|analytic-geometry|
| 0
|
Standard notation for permutations
|
I have a question about the standard notation for representing properties of permutations. This is best illustrated with a concrete example: let's take a permutation on 6 elements with cycle representation $\sigma=(134)(2)(56)$ . We know that the cycle representation is unique. In the problem I'm working on, I need an abstract notation to refer to the set of cycles in the cycle representation of $\sigma$ . In this example, this set is just $\{(134),(2),(56)\}$ , but I don't know a technical name for this set and I imagine one exists. Could anyone point me to a reference which introduces standard notation of this type? So far I haven't been able to find one. There are some other properties which I would like to give a name to. For example, once the cycle representation of a permutation is written down, we can talk about things like the set of elements acted on by a particular permutation. In my example above, the first cycle acts on the elements $\{1, 3, 4\}$ , and it would be nice to h
|
Have you seen this (sort of thing)? $(134)$ can be "described" by: $$\begin {pmatrix}1&2&3&4&5&6\\3&2&4&1&5&6\end {pmatrix}.$$ Because $1\to3,2\to2,3\to4,4\to1,5\to5,6\to6.$
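If working in software helps, SymPy exposes exactly these notions (a sketch; note SymPy permutations are 0-based, so $(134)(2)(56)$ becomes $(0\,2\,3)(1)(4\,5)$):

```python
from sympy.combinatorics import Permutation

sigma = Permutation([[0, 2, 3], [4, 5]], size=6)  # (134)(2)(56), 0-based
print(sigma.cyclic_form)  # [[0, 2, 3], [4, 5]] -- the nontrivial cycles
print(sigma.support())    # [0, 2, 3, 4, 5] -- the elements actually moved
```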
|
|abstract-algebra|reference-request|permutations|notation|permutation-cycles|
| 0
|
Proof of one version of Cauchy-Schwarz in $\mathbb{R}^n$
|
Show that for $a,b \in \mathbb{R}$ and $x,y > 0$ that $$\frac{(a+b)^2}{x+y} \le \frac{a^2}{x} + \frac{b^2}{y}$$ and generalize this result for $a_1, a_2, \dots, a_n \in \mathbb{R}$ and $x_1, x_2, \dots, x_n > 0$ . I am not sure how to solve this problem. Specifically, I tried expanding the left-hand side first and reducing either $x$ or $y$ in the denominator. This couldn't work due to the $2ab$ term not being taken care of. Then I tried to move everything to one side and show that $$\frac{a^2}{x} + \frac{b^2}{y} - \frac{(a+b)^2}{x+y} \ge 0.$$ But I just don't see how to factor the left-hand side. Edit: This is a subpart of a question where I need to prove the C-S inequality. I cannot use the C-S inequality.
|
This is Sedrakyan's lemma, a variant of the Cauchy–Schwarz inequality: $$\left(\sum_{i=1}^n{a}_ib_i\right)^2\leq \sum_{i=1}^n{a_i^2} \sum_{i=1}^n{b_i^2}$$ Substitute $a_i \rightarrow \dfrac{a_i}{\sqrt{x_i}}$ , $b_i\rightarrow \sqrt{x_i}$ ; then the lemma follows.
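Since the edit rules out using Cauchy-Schwarz itself, it may help to note (a standard argument, for completeness) that the two-variable case reduces to a perfect square after clearing the positive denominators: $$\frac{(a+b)^2}{x+y} \le \frac{a^2}{x} + \frac{b^2}{y} \iff (a+b)^2xy \le \bigl(a^2y+b^2x\bigr)(x+y) \iff 2abxy \le a^2y^2+b^2x^2 \iff 0\le (ay-bx)^2,$$ and the general case follows by induction, applying the two-variable case to $\frac{(a_1+\cdots+a_{n-1})^2}{x_1+\cdots+x_{n-1}}+\frac{a_n^2}{x_n}$.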
|
|inequality|cauchy-schwarz-inequality|
| 0
|
Rotation through any angle θ
|
They say that rotation of any point $(x,y)$ through any angle $\theta$ is given by $(x \cos\theta, y \sin\theta)$. Can anybody tell me how this was derived? Please post here or send it to me by email.
|
I want to provide a very interesting method. Based on Euler's formula $$e^{i\theta} = \cos\theta+i \sin\theta,$$ multiplication by $e^{i\theta}$ can be taken as a "rotation" action. So, if the original coordinate of your point is $(x,y)$ , it can be represented by $x + iy$ . Then, the coordinate after rotation is going to be $$(x + iy) e^{i\theta}.$$ Then, you get $$[x\cos\theta - y\sin\theta] + i [x\sin\theta + y\cos\theta].$$
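A one-line numeric illustration (Python sketch): rotating $(1,0)$ by $90^\circ$ gives $(0,1)$.

```python
import cmath

z = complex(1, 0)
print(z * cmath.exp(1j*cmath.pi/2))  # ~(0+1j), i.e. the point (0, 1)
```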
|
|geometry|
| 0
|
A and B roll a fair die until a 6 occurs. Why was my calculation wrong?
|
A and B flip a fair dice until 6 occurs. I have read this post and tried to work out the expectation of flipping on A wins. But it seems something goes wrong. Following is my work: A wins iff $6$ occurs in odd round, so the EXP flippings on A wins is therefore: $$E = \frac{1}{6}*1 + (\frac{5}{6})^2*\frac{1}{6}*3+....(\frac{5}{6})^{2k}*\frac{1}{6}*(2k+1)+...$$ It is straightforward to get $E=\frac{366}{121}$ , where did I go wrong?
|
$\displaystyle\sum_{k=0}^\infty (2k+1)(\tfrac 56)^{2k}\tfrac 16+\sum_{k=1}^\infty (2k)(\tfrac 56)^{2k-1}\tfrac 16$ is the expected count for rolls until the game ends: $\mathsf E(X)$ . The first term counts rolls where the game ends on an odd roll, and the second counts rolls where the game ends on an even roll. $$\mathsf E(X)=\mathsf E(X\,\mathbf 1_{\{X\in2\Bbb N+1\}})+\mathsf E(X\,\mathbf 1_{\{X\in 2\Bbb N\}})\tag{Linearity}$$ So, you have evaluated: $\displaystyle\mathsf E(X\,\mathbf 1_{\{X\in2\Bbb N+1\}})=\sum_{k=0}^\infty (2k+1)(\tfrac 56)^{2k}\tfrac 16$ However, you want the expected count given that the game ends on an odd count. $$\begin{align}\mathsf E(X\mid X\in2\Bbb N+1) &= \dfrac{\mathsf E(X~\mathbf 1_{\{X\in2\Bbb N+1\}})}{\mathsf P(X\in2\Bbb N+1)} \\[1ex] &= \dfrac{(1/6)\sum\limits_{k=0}^\infty (2k+1)(5/6)^{2k}}{(1/6)\sum\limits_{k=0}^\infty (5/6)^{2k}}\\[1ex] &= \dfrac{366}{121}\div\dfrac6{11}\\[1ex]&= \dfrac{61}{11}\end{align}$$
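A short simulation confirms $61/11 \approx 5.545$ (Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = []
for _ in range(200_000):
    n = 1
    while rng.integers(1, 7) != 6:  # roll until a 6 appears
        n += 1
    if n % 2 == 1:                  # game ended on an odd roll: A wins
        counts.append(n)
print(np.mean(counts), 61/11)       # both ~5.545
```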
|
|probability|conditional-probability|conditional-expectation|
| 0
|
Prove $|n^z| = |n^{\Re(z)}|$
|
I was reading these lecture notes and it's claimed at remark 2.2 (bottom of page 5) that for $n \in \mathbb{N} \setminus \{0\}$ , and $z \in \mathbb{C}$ , $|n^z| = |n^{\Re(z)}|$ , but I've not been able to prove it. Now in this context we also have that $\Re(z) > 1$ , but that requirement doesn't help either. I get stuck around $$ \begin{align*} |n^z| &= |n^{x + iy}|\\\\ &= |\exp(z \text{Log}(n))|\\\\ &= |\exp(z \ln(n))|\\\\ &= |e^{x \ln(n)} (\cos(y \ln(n)) + i \sin(y \ln(n)))|, \end{align*} $$ (where $\text{Log}$ is the principal logarithm) and now because $\cos(y \ln(n)) + i \sin(y \ln(n))$ could be equal to 0, I'm not sure how to continue.
|
because $\cos(y\ln(n))+i\sin(y\ln(n))$ could be equal to 0 Why would this be true? $n$ is a positive number, so $y\ln(n)$ is a real number, and for any real number $r$ , $\cos(r)+i\sin(r)$ has magnitude $1$ . Ergo, we have: $$ \begin{align*} & |e^{x\ln(n)}(\cos(y\ln(n))+i\sin(y\ln(n)))| \\ = & |e^{x\ln(n)}| \\ = & |n^x| \\ = & |n^{\Re(x+iy)}| \\ = & |n^{\Re(z)}| \end{align*} $$
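A quick numerical illustration (Python sketch, no imports needed):

```python
n, z = 5, complex(2.3, 7.1)          # arbitrary example values
print(abs(n**z), n**z.real)          # equal: |n^z| = n^Re(z)
```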
|
|complex-analysis|
| 1
|
Is this a correct approach to calculating $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$?
|
We have just started covering the limit of sequences and I've stumbled upon this limit in our uni's exercises: $$\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}$$ I've considered solving it using the fact that $\lim_{n\rightarrow \infty} {\sqrt[n]{a}}=1$ for $a>0$ . And since we're dealing with natural numbers, with the exception of $n=1$ , the expression $\ln(n)$ should be $>0$ , right? So is it correct to assume that $\lim_{n\rightarrow \infty} {\sqrt[n]{\ln(n)}}=1$ using this thought process?
|
This paper provides a series of upper and lower bounds for the logarithmic function. They are given in the form of Padé approximants and rational approximations (have a look at Table $3$ on page $9$ of the linked paper). Let us use the second one $$\frac{3 (x-1) (x+1)}{1+4x+x^2} < \log(x).$$ Take logarithms and Taylor expand $$\log (3)-\frac{4}{x}+O\left(\frac{1}{x^2}\right) < \log(\log(x)).$$ Divide by $x$ , exponentiate using $$\sqrt[x]{\log(x)}=e^{\frac{\log(\log(x))} x }$$ and continue with Taylor series $$\color{blue}{1+\frac{\log (3)}{x}+O\left(\frac{1}{x^2}\right)} < \sqrt[x]{\log(x)}.$$ Now, use the squeeze theorem with $x \to \infty$ . Using the seventh set of bounds, we should obtain $$\sqrt[x]{\log(x)} < \color{blue}{1+\frac{\log\left(\frac{49}{10}\right)}{x}+O\left(\frac{1}{x^2}\right)}.$$
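Numerically the convergence is easy to see (Python/NumPy sketch), consistent with the squeeze above:

```python
import numpy as np

for x in (1e1, 1e2, 1e4, 1e8):
    print(x, np.log(x)**(1/x))  # 1.0869..., 1.0154..., ...: tends to 1
```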
|
|calculus|limits|limits-without-lhopital|
| 0
|
Topological equivalence and continuity on topological spaces
|
Could someone help me with this proof, please? I am aware that topologically equivalent means that the set of open sets are the same for both metrics. Let $f : (X,d) \to (Y,m)$ be a continuous function between metric spaces. If we define $$d^*(x,y) := d(x,y) + m(f(x),f(y)),$$ show that $d$ , $d^*$ are topologically equivalent and, furthermore, $d^*$ makes $f$ a uniformly continuous function. Consequently, on topological spaces, the notion of continuity is the same as uniform continuity. (I am so confused with this statement because I should prove that delta only depends on epsilon.) Until now, I have this: Let $U$ be an open set in $(X, d^*)$ . Since $d^*(x, y) = d(x, y) + m(f(x), f(y))$ , for any $x \in U$ , there exists $\epsilon > 0$ such that the open ball $B_{\epsilon}(x)$ with respect to the metric $d^*$ is contained in $U$ . Consider the open ball $B_{\epsilon}(x)$ in $(X, d^*)$ . For any $y \in B_{\epsilon}(x)$ , we have $d^*(x, y) < \epsilon$ , which implies $d(x, y) + m(f(x), f(y)) < \epsilon$ . Since
|
You're sort of messing up what you need to prove. For the first part, for example, you only proved $B_\epsilon(x)$ is contained in an open ball in $(X, d)$ , not that it is such an open ball. (Most likely it is not.) The following is a fixed version of the proof: For clarity, I'll use $B_\epsilon(x)$ to denote the $\epsilon$ -ball around $x$ in $(X, d)$ and use $B^\ast_\epsilon(x)$ to denote the $\epsilon$ -ball around $x$ in $(X, d^\ast)$ . For the first part, let $U$ be an open set in $(X, d^\ast)$ ; we need to show $U$ is also open in $(X, d)$ . That is, for any $x \in U$ , we need to show that there is some $\epsilon > 0$ s.t. $B_\epsilon(x) \subset U$ . Fix $x \in U$ . Since $U$ is open in $(X, d^\ast)$ , there is some $\delta > 0$ s.t. $B^\ast_\delta(x) \subset U$ . As $f$ is continuous, there exists $\epsilon' > 0$ s.t., whenever $d(x, y) < \epsilon'$ , we have $m(f(x), f(y)) < \delta/2$ . Let $\epsilon = \min\{\delta/2, \epsilon'\}$ . Then I claim that $B_\epsilon(x) \subset U$ , as required. Indeed, i
|
|general-topology|metric-spaces|
| 1
|
Proof that a certain function satisfying a specific equation is constant
|
I am currently doing Problem 2, Chapter 1, from the "Functional Equations and How to Solve Them" book by Christopher G. Small. It is as follows: Let $f(x)$ be a function that satisfies $f(x + y) = f(xy)$ for all positive $x$ and $y$ . Prove that $f(x)$ is constant. Here are my 2 possible solutions: 1. While the problem constrains us to positive values, if we assume it is continuous at $0$ and use a limit argument, then substituting $y = 0$ : $$f(x + y) = f(xy) \\ \Rightarrow f(x + 0) = f(0x) \Rightarrow f(x) = f(0)$$ for a certain value of $f(0)$ . This means that $f(x)$ is equal to a constant $f(0)$ and hence we have proved the proposition. 2. Define $s = x + y, p = xy$ . $x$ and $y$ are therefore solutions of the quadratic equation $t^2 - st + p = 0$ (with $t$ as the variable). Clearly for such a solution to exist, $s$ and $p$ need to satisfy $\Delta = s^2 - 4p \geq 0$ . With that, we have $f(s) = f(p)$ for all values $s$ and $p$ that satisfy $s^2 - 4p \geq 0$ . We'll call this conditio
|
Solution (1) has the hitch that we have to use $f(0)$ , which might not be available or might not exist. We would then have to conclude that $f(x)$ itself might not exist. Solution (2) looks good and it is right. My opinion is that it looks a little more complicated than necessary. My solution (3) might be something like this [[ I think this is essentially what you are trying out too, though that was a little more complex ]]: Let $y=1$ ; then $f(x + 1) = f(x)$ , and we see that $f(x)$ for large $x$ will eventually reduce to $x$ values in $(0,1]$ . E.g. $f(5.5)=f(4.5)=f(3.5)=f(2.5)=f(1.5)=f(0.5)$ . Now, let $(x+y)=n$ where $n$ is sufficiently large, like $n=10$ . We have $y=(n-x)$ . Hence $f(x + y) = f(n) = f(xy) = f(x(n-x)) = f(nx-x^2)$ . We can see that $(nx-x^2)$ takes all values from $0^+$ up to $n^2/4$ . In that range, $f(nx-x^2) = f(n)$ is constant. Hence, $f(x)$ is a constant throughout the positive real number range.
|
|solution-verification|functional-equations|
| 0
|
Is $\int_{-\infty}^{\infty}\sin(e^{x-\frac{1}{x}})dx=\frac{\pi}{2}$?
|
I didn't use Glasser's Master Theorem because it only works on even functions, and $\sin(e^x)$ is definitely not an even function. Here's what I did. $$\int_{-\infty}^{\infty}\sin(e^{x-\frac{1}{x}})dx=\int_{-\infty}^{0}\sin(e^{x-\frac{1}{x}})dx+\int_{0}^{\infty}\sin(e^{x-\frac{1}{x}})dx$$ For the first term, $u=-x$ and for the second term, $u=\frac{1}{x}$ . $$=\int_{0}^{\infty}\sin(e^{\frac{1}{x}-x})dx+\int_{0}^{\infty}\frac{\sin(e^{\frac{1}{x}-x})}{x^2}dx=\int_{0}^{\infty}\sin(e^{\frac{1}{x}-x})\left(1+\frac{1}{x^2}\right)dx$$ $$=\int_{-\infty}^{\infty}\sin(e^{-u})du=\frac{\pi}{2}$$ However, WolframAlpha says that the integral actually approximates to 0.293238... Did I do something wrong or make an illegal move? If not, does Glasser's Master Theorem work for all kinds of functions, and if so, how? Because most proofs I've seen only work for even functions, which they don't mention for some reason. But if my work is wrong though, is there a closed form of this integral or how would
|
1. It is true that Glasser's Master Theorem (GMT) is not directly applicable in this case. However, the reason has nothing to do with the integrand being even or not. In fact, GMT applies to any integrable or non-negative function on $\mathbb{R}$, as we can see from the following far-reaching generalization (which actually precedes GMT, so let's give him some credit, too!): Theorem. (Letac, 1977). [1] Let $\alpha$ be a real number and $\mu$ be a measure on $\mathbb{R}$ which is singular to the Lebesgue measure and satisfies $\int_{\mathbb{R}} \frac{\mu(\mathrm{d}\lambda)}{1+\lambda^2} < \infty$. Then the function $$ \phi(x) = x - \alpha - \lim_{\epsilon \to 0^+} \int_{\mathbb{R}} \left( \frac{1}{x+i\epsilon - \lambda} + \frac{\lambda}{1+\lambda^2} \right) \, \mu(\mathrm{d}\lambda) $$ defines a measurable function on $\mathbb{R}$ that preserves the Lebesgue measure on $\mathbb{R}$. Note that GMT corresponds to the case where $\mu$ is a sum of finitely many point masses. Returning to OP's integral,
|
|integration|definite-integrals|
| 1
|
Which value of $c$ makes $\sum^{\infty}_{n=1}\dfrac{c}{n!}$ converge to 1?
|
Find the constant $c$ such that this series converges to $1$? $$\sum^{\infty}_{n=1}\dfrac{c}{n!}$$ I tried with $1$, but I don't know if it converges exactly to $1$ because I noticed that it might converge to $e$. And I also found $c=\dfrac{1}{e-1}$, but I doubt it converges to $1$. Thanks in advance.
|
Suppose the required sum is $S := \sum_{n=1}^{\infty} \frac{c}{n!}$. Since $c$ is a constant, you have $S = c \sum_{n=1}^{\infty} \frac{1}{n!}$. Note the series converges to $e-1$, since $\sum_{n=0}^{\infty} \frac{1}{n!} = e$ and the missing $n=0$ term equals $1$. Therefore, $S = c(e-1)$. Since you want the sum to converge to $1$, use $1=c(e-1)$, which makes $c = \frac{1}{e-1}$. Edit: People pointed out that the previous answer was not thorough, so I modified it appropriately. Thanks!
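A quick numerical sanity check (a minimal sketch; 50 terms is far more than double precision needs):

```python
import math

c = 1 / (math.e - 1)
total = sum(c / math.factorial(n) for n in range(1, 50))
print(total)  # ~ 1.0
```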
|
|sequences-and-series|
| 0
|
Why is intersection of open sets not closed under countable unions?
|
I’m learning about Borel sets and I’m struggling to understand the following sentence. The intersection of every sequence of open subsets of $\mathbb{R}$ is a Borel set. However, the set of all such intersections is not the set of Borel sets (because it is not closed under countable unions). Why isn’t the set of all countable intersections closed under countable unions? Let $$(A_{1,1}\cap…\cap A_{1,n}\cap…)\cup(A_{2,1}\cap…\cap A_{2,n}\cap…)\cup…$$ be a countable union of intersections of open sets $A_{i,j}$ , with $i,j \in \mathbb{Z}^+$ . If we were to expand this, we would get an intersection of unions like $$(A_{1,1}\cup A_{2,1}\cup A_{3,1}\cup…)\cap(A_{1,2}\cup A_{2,1}\cup A_{3,1}\cup…)\cap…$$ Since each union above is a countable union of open sets, and the countable union of open sets is also an open set, we thus have the overall quantity written as an intersection of open sets. Where is the flaw in my reasoning?
|
If you actually fully expand out, you'll realize you actually have an uncountable intersection, because the index set of your intersection is actually the set of all sequences $\mathbb{N} \to \mathbb{N}$ . For an explicit example, $\mathbb{Q}$ is a countable union of countable intersections of open sets, but it is not a countable intersection of open sets itself. See the example section of https://en.wikipedia.org/wiki/G%CE%B4_set
|
|real-analysis|set-theory|borel-sets|
| 1
|
Given $G=\langle a\rangle$, $|G|=n$ and $d\mid n$, show $G$ has a unique subgroup of order $d$.
|
Given $G=\langle a\rangle$ , $|G|=n$ and $d\mid n$ , show $G$ has a unique subgroup of order $d$ . Proof: (Existence) : $|\langle a\rangle|=|a|=n$ and $ |a^\frac{n}{d}|=d$ . Then, $H_d=\langle a^\frac{n}{d}\rangle$ is a subgroup of $G$ of order $d$ . (Uniqueness) : Suppose, $H\le G$ and $|H|=d$ . Claim: $H=H_d$ . It is enough to prove one-sided set inclusion $H\subseteq H_d$ because $H\le H_d$ and $|H|=|H_d|$ implies $H=H_d$ . Choose $b\in H$ . Then $|H|=d \implies b^d=e$ and $b\in H \implies b \in G$ . Hence $b=a^k$ for some $k\in \mathbb{Z}$ . Also $e=b^d =(a^k)^d =a^{kd} $ and $|a|=n $ implies $n|kd$ . Hence $kd=nk'$ for some $k'\in \mathbb{Z}$ and $k=(\frac{n}{d})k'$ Hence $b=a^k =a^{{(\frac{n}{d})}k'}$ for some $k' \in \mathbb{Z}$ implies $b\in H_d$ . Hence $H\subseteq H_d$ and then $H=H_d$ . Note: $|G|$ : order of the group $G$ . Is the proof correct ? Is there any mistake?
|
I have the same proof for uniqueness, but there is a formula that instantly shows existence. Let $\frac{n}{d}=p$. We know that $a^p$ is in $G$ since $p \le n$, thus we know that we can form a cyclic subgroup, $\langle a^p\rangle$. Now we just need to find the order of this subgroup, but we can use: $$o(g^x)=\frac{o(G)}{\gcd{(o(G),x)}}$$ Here $x=p$ and $o(G)=n$ so, $$o(g^p)=\frac{n}{\gcd{(n,p)}}$$ Since $\frac{n}{d}=p$, we know $\gcd(n,p)=p$, thus $o(g^p)=\frac{n}{p}=d$.
|
|abstract-algebra|group-theory|solution-verification|abelian-groups|cyclic-groups|
| 0
|
How to compute the following limit? $\lim\limits_{x\to\ 0^+} {\frac{x-\lfloor x \rfloor}{x+\lfloor x \rfloor}}$
|
$$\lim\limits_{x\to\ 0^+} {\frac{x-\lfloor x \rfloor}{x+\lfloor x \rfloor}}$$ Here, $\lfloor x \rfloor$ represents the floor of $x$ . I tried using a graphing calculator (desmos) to plot the function $f(x) = \frac{x-\lfloor x \rfloor}{x+\lfloor x \rfloor}$ from the graph (see image here ), it is clear that this limit is equal to 1. But is there any analytical way to compute this limit?
|
For $0 < x < 1$, you have that $\lfloor x\rfloor =0$. Hence $$ \lim\limits_{x\to\ 0^+} {\frac{x-\lfloor x \rfloor}{x+\lfloor x \rfloor}} =\lim\limits_{x\to\ 0^+} \frac{x}{x}=1. $$
|
|calculus|limits|ceiling-and-floor-functions|indeterminate-forms|fractional-part|
| 1
|
Solving $2-4\sin 2 \theta = 0$
|
Which of the following represents zeros of $r=2-4\sin 2 \theta $ ? (Multiple choice) I don't understand this question. I figured out that $\theta$ is $30$ , and I determined that the answer is $A$ (i.e. $\pi/6$ and $5\pi/6$ ). However, the answer sheet tells me that $D$ (i.e. $\pi/12$ , $5\pi/12$ , $13\pi/12$ , $17\pi/12$ ) is correct. I don't understand how $13\pi/12$ and $17\pi/12$ are possible answer for this question.
|
$\theta$ is not 30°. Most likely your mistake is forgetting the 2 from $2\theta$. It should become the following. $$\frac12 = \sin2\theta$$ $$30° = 2\theta \text{ or } 150° = 2\theta \text{ (**)}$$ See (***) below. Then we must divide by two on both sides of each equation. $$\frac{30}{2} = 15° = \theta \text{ or } \frac{150}{2} =75° = \theta $$ Converting to radians this is $$\theta =\frac{\pi}{12}\text{ or }\frac{5\pi}{12}$$ This gives us two of the four answers in D. Note: (***) To get the other two answers in D, first we remember about coterminal angles. Let $k$ be any integer. Essentially, for the first angle, $$2\theta = 30° = \frac{\pi}{6}\text{ becomes }2\theta = 30° +360°k= \frac{\pi}{6} + 2\pi \cdot k$$ all of which point in the exact same direction, so give the same result after taking the sine, for any integer we choose to use for $k$. Likewise, for the second angle, $$2\theta = 150° = \frac{5\pi}{6}\text{ becomes }2\theta = 150° +360°k= \frac{5\pi}{6} + 2\pi \cdot k$$ Applying this to (**) fr
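A quick numerical check of choice D (a minimal sketch):

```python
import numpy as np

# r = 2 - 4 sin(2*theta) should vanish at each angle in choice D
for t in [np.pi/12, 5*np.pi/12, 13*np.pi/12, 17*np.pi/12]:
    print(round(2 - 4*np.sin(2*t), 12))  # each prints 0.0 (or -0.0)
```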
|
|algebra-precalculus|trigonometry|
| 0
|
Linear dependence condition
|
We know that to prove vectors $v_1,v_2,v_3$ linearly dependent we must find scalars $x_1,x_2,x_3$, not all equal to $0$, such that $x_1v_1+x_2v_2+x_3v_3=0$. But the doubt I have is about the other alternative condition, which says that if one of them can be expressed as a linear combination of the other $2$, then they are linearly dependent. So let's assume the $x_1,x_2,x_3$ found earlier satisfy $x_1=x_2=0$ with $x_3$ nonzero; then we can write $v_3=\frac{x_1}{-x_3}v_1+\frac{x_2}{-x_3}v_2=0v_1+0v_2$. Here we see that $v_3$ is not a linear combination of $v_1,v_2$ since the coefficients of $v_1,v_2$ are zero, hence linearly independent. But by the first condition they are linearly dependent since not all coefficients equal $0$. Isn't it contradictory? Where is the mistake I am making?
|
In your case, $v_3$ is the zero vector, so by convention (or definition), any set of vectors containing it is linearly dependent. In fact your definition of linear dependence is equivalent to the linear-combination condition. Suppose a set of vectors is not linearly independent; then we must find some $a_1,\cdots,a_n$ such that they are not all zero, and $$a_1v_1+\cdots+a_nv_n=0$$ Since they are not all zero, we can certainly find some $a_i\ne0$, and then $$v_i=-\sum_{j\ne i}\dfrac{a_j}{a_i}v_j$$
|
|linear-algebra|vector-spaces|vector-analysis|linear-independence|
| 0
|
Proof verification: ''standard'' lower estimate for positive definite quadratic form
|
Let $Q(x_1,\ldots,x_n)$ be a positive definite quadratic form. I would like to show that there exists $C>0$ such that $Q(x_1,\ldots,x_n)\geq C(x_1^2+x_2^2+\ldots x_n^2)$. Proof: Let $\vec{x}=\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}$. The quadratic form can be expressed as $Q(x_1,\ldots,x_n)=\vec{x}^\top A\vec{x}$ for a symmetric matrix $A\in\mathbb{R}^{n\times n}$ with positive eigenvalues $\lambda_1,\ldots,\lambda_n$. Moreover, there exists an orthogonal matrix $Q$ such that $$ Q(x_1,\ldots,x_n)=\vec{y}^\top D \vec{y}=\sum_{i=1}^n\lambda_i y_i^2\geq\lambda_\min\sum_{i=1}^n y_i^2\geq C\sum_{i=1}^ny_i^2=C\Vert y\Vert_2^2 $$ where $\vec{y}=Q^\top\vec{x}$, $D=Q^\top A Q=\textrm{diag}(\lambda_1,\ldots,\lambda_n)$, $\lambda_\min:=\min\{\lambda_1,\ldots,\lambda_n\}>0$, and $0 < C \le \lambda_\min$. Since $$ \Vert\vec{x}\Vert_2^2=\vec{x}^\top\vec{x}=(Q\vec{y})^\top(Q\vec{y})=\vec{y}^\top \underbrace{Q^\top Q}_{I_n}\vec{y}=\Vert\vec{y}\Vert_2^2, $$ the claim follows. Do you agree?
|
The reasoning seems fine. Here is a reorganization of the presentation of the solution. As $A$ is a positive definite symmetric matrix, we can write $A=QDQ^T$ where $D$ is a diagonal matrix with positive diagonal entries and $Q$ is an orthogonal matrix. We denote $y=Q^Tx$ . \begin{align} Q(x_1, \ldots, x_n) &= x^TAx\\&=x^T(QDQ^T)x\\&=(x^TQ)D(Q^Tx)\\&=(Q^Tx)^TD(Q^Tx) \\ &=y^TDy\\ &=\sum_{i=1}^n \lambda_i y_i^2\\ &\ge \lambda_{\min}\sum_{i=1}^n y_i^2\\ &=\lambda_{\min}\|y\|^2\\ &=\lambda_{\min}\|Q^Tx\|^2\\ &=\lambda_{\min}\|x\|^2 \end{align} Hence we can pick $C$ to be $\lambda_{\min}$ (or any positive number smaller than $\lambda_{\min})$ .
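If a numerical illustration helps: $\lambda_{\min}$ really is an admissible $C$ (a minimal NumPy sketch; the random positive definite matrix is my own example):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B @ B.T + np.eye(4)              # symmetric positive definite example
lam_min = np.linalg.eigvalsh(A).min()

for _ in range(1000):
    x = rng.normal(size=4)
    assert x @ A @ x >= lam_min * (x @ x) - 1e-9  # Q(x) >= lambda_min * ||x||^2
print("C =", lam_min)
```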
|
|real-analysis|solution-verification|quadratic-forms|symmetric-matrices|
| 1
|
Linear dependence condition
|
We know that to prove vectors $v_1,v_2,v_3$ linearly dependent we must find scalars $x_1,x_2,x_3$, not all equal to $0$, such that $x_1v_1+x_2v_2+x_3v_3=0$. But the doubt I have is about the other alternative condition, which says that if one of them can be expressed as a linear combination of the other $2$, then they are linearly dependent. So let's assume the $x_1,x_2,x_3$ found earlier satisfy $x_1=x_2=0$ with $x_3$ nonzero; then we can write $v_3=\frac{x_1}{-x_3}v_1+\frac{x_2}{-x_3}v_2=0v_1+0v_2$. Here we see that $v_3$ is not a linear combination of $v_1,v_2$ since the coefficients of $v_1,v_2$ are zero, hence linearly independent. But by the first condition they are linearly dependent since not all coefficients equal $0$. Isn't it contradictory? Where is the mistake I am making?
|
The mistake that you made is that you said " $v_3$ is not a linear combination of $v_1,v_2$ ...". Note that $v_3 = 0$ and a zero vector is always a linear combination of $v_1, v_2$ as you already wrote: $0=v_3 = 0v_1+0v_2$ . And this implies that $v_1,v_2,v_3$ form a linearly dependent set of vectors because one of them is the zero vector.
|
|linear-algebra|vector-spaces|vector-analysis|linear-independence|
| 0
|
If $f(X)=AX-XA$ is diagonalizable, show that $A$ is diagonalizable
|
Let $f:M_n(F)\rightarrow M_n(F), X\mapsto AX-XA$ . If $f$ is diagonalizable, I want to show that $A$ is diagonalizable. I'd prefer to avoid Jordan Blocks. I know that $f$ is diagonalizable if and only if: its minimal polynomial is square-free, or there exist $d$ linearly independent eigenvectors where $d = \dim M_n(F)$ , or the characteristic polynomial of $f$ factors into linear terms and each geometric multiplicity equals the corresponding algebraic multiplicity.
|
If $F$ is a splitting field for $A$ (e.g. when $F$ is algebraically closed), we may prove the statement as follows. Proof 1. (A simplified version of user8675309’s answer.) Let $\lambda\in F$ be an eigenvalue of $A$ and $v\in F^n$ be a corresponding left eigenvector. Let $B=A-\lambda I$ . Define $g:M_n(F)\to M_n(F)$ by $g(X)=BX$ . Since $A$ and $B$ commute, so do $f_A$ and $g$ . Moreover, for any vector $x\in F^n$ , we have $$ f_A(xv^T)=Axv^T-xv^TA=Axv^T-x(\lambda v^T)=Bxv^T=g(xv^T). $$ It follows that $(m(f_A))xv^T=(m(g))xv^T=m(B)xv^T$ for every polynomial $m\in F[x]$ . In particular, when $m$ is the minimal polynomial of $f$ , we have $m(B)xv^T=0$ . Since $x$ is arbitrary and $v$ is nonzero, we must have $m(B)=0$ . Hence the minimal polynomial of $B$ divides $m$ . However, as $f_A$ is diagonalisable over $F$ , $m$ is a product of distinct linear factors. Therefore the minimal polynomial of $B$ is also a product of distinct linear factors. This means $B$ is diagonalisable over $F$ . I
|
|linear-algebra|matrices|diagonalization|
| 1
|
Taylor's Theorem with Peano's Form of Remainder
|
The following form of Taylor's Theorem with minimal hypotheses is not widely popular and goes by the name of Taylor's Theorem with Peano's Form of Remainder : Taylor's Theorem with Peano's Form of Remainder : If $f$ is a function such that its $n^{\text{th}}$ derivative at $a$ (i.e. $f^{(n)}(a)$) exists then $$f(a + h) = f(a) + hf'(a) + \frac{h^{2}}{2!}f''(a) + \cdots + \frac{h^{n}}{n!}f^{(n)}(a) + o(h^{n})$$ where $o(h^{n})$ represents a function $g(h)$ with $g(h)/h^{n} \to 0$ as $h \to 0$. One of the proofs (search "Proof of Taylor's Theorem" in this blog post ) of this theorem uses repeated application of L'Hospital's Rule. And it appears that proofs of the above theorem apart from the one via L'Hospital's Rule are not well known . I have asked this question to get other proofs of this theorem which do not rely on L'Hospital's Rule and instead use simpler ideas. BTW I am also posting one proof of my own as a community wiki.
|
A slightly more efficient version of Hardy's argument, which at the same time proves Taylor's theorem with the Lagrange form of the remainder, goes as follows: Assume that $f$ is $n$ times differentiable at $a \in \mathbb R$. Then consider, for $C$ any constant, $$ G(t) = f(a+t)-\sum_{j=0}^{n-1}\frac{1}{j!}f^{(j)}(a)t^j - \frac{1}{n!}C t^n $$ Note that for each $k$, $0\leq k \leq n-1$ we have $$ G^{(k)}(t) = f^{(k)}(a+t) - \sum_{j=k}^{n-1}\frac{1}{(j-k)!}f^{(j)}(a)t^{j-k}- \frac{1}{(n-k)!}Ct^{n-k} $$ and so in particular, $G^{(k)}(0)=0$ for all $k\in \{0,1,\ldots,n-1\}$. Now for any $h\neq 0$ we may choose the constant $C$ so that $G(h)=0$ -- that is, we set $C(h)=(n!/h^n)\cdot\{f(a+h)-\sum_{j=0}^{n-1}\frac{1}{j!}f^{(j)}(a)h^j\}$. Then for this choice of $C$ we have $G(0)=G(h)$, so that there is some $h_1$ in the open interval between $0$ and $h$ with $G'(h_1)=0$. Since $G'(0)=0$, this implies there is some $h_2$ between $0$ and $h_1$ with $G''(h_2)=0$. Continuing in this way for
|
|calculus|
| 0
|
How do I find the corner shape of the bounding box of a smooth curve of constant width?
|
Given the functions $$ p(θ) = \frac{S}{2} × \frac{\cos\bigl(n × (θ - α)\bigr)}{n^2 - 1}\\ \begin{align} X(θ) = \cos(θ) × &\left(p(θ) + \frac{S}{2} + A\right) - \sin(θ) × p'(θ) - p(0)\\ Y(θ) = \sin(θ) × &\left(p(θ) + \frac{S}{2} + A\right) + \cos(θ) × p'(θ) - p\left(\frac{π}{2}\right) \end{align} $$ the curve $\bigl(X(t), Y(t)\bigr)$, $0° ≤ t < 360°$, is a smooth, regular $n$-sided polygonal Curve of Constant Width (CoCW), where $p'(θ)$ is the derivative of $p(θ)$ with respect to $θ$, $A > 0$ is the radius of the smallest osculating circle on the curve, $S + A > 0$ is the radius of the largest osculating circle on the curve, $S + 2 A$ is the total width of the curve, and $α$ is an angle parameter that rotates the curve about the origin. (If $A = 0$, the curve has sharp corners at the "vertices" and thus loses the property of being everywhere-smooth, but everything else holds. If $S = 0$, the curve becomes a circle with radius $A$.) We also define $N := n - \sin\left(\frac{π}{2} × n\right)$. A
|
I have looked at this problem, reformulated it a bit, and developed a credible solution to the envelope of curves described in the original post (OP). I work in the complex plane, so I have reformatted the equations as follows, $$ p(\theta)=\frac{S}{2}\frac{\cos (n(\theta-\alpha))}{n^2-1}\\ z(\theta)=\bigg(p(\theta)+ \frac{S}{2}+A \bigg)e^{i\theta}+i p'(\theta) e^{i\theta} $$ You'll notice that I've dropped the translation terms $p(0)+ip(\pi/2)$ as they become superfluous when the solutions for various $\alpha$ are aligned in a single square box and the origin is moved to the center. Here, as in the OP, we have taken $S=1.25, A=S/2$. To delineate the envelope of the solutions, i.e., the purple zone in the OP, I plotted the solution for 360 values of $\alpha$ on a single plot. This is shown in the first figure below. The red circle is shown for reference and comparison. The blue dashed lines indicate the straight sections of the envelope as called out in the OP. Our job is to find an analyti
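For anyone who wants to reproduce the overlay, here is a minimal matplotlib sketch of the same construction ($S=1.25$, $A=S/2$, $n=3$ as above; the sampling densities and styling are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

S, A, n = 1.25, 1.25/2, 3
theta = np.linspace(0, 2*np.pi, 1000)

def z(theta, alpha):
    p  = (S/2) * np.cos(n*(theta - alpha)) / (n**2 - 1)
    dp = -(S*n/2) * np.sin(n*(theta - alpha)) / (n**2 - 1)   # p'(theta)
    return (p + S/2 + A) * np.exp(1j*theta) + 1j * dp * np.exp(1j*theta)

# Overlay the curve for 360 values of alpha to reveal the envelope
for alpha in np.linspace(0, 2*np.pi, 360, endpoint=False):
    w = z(theta, alpha)
    plt.plot(w.real, w.imag, lw=0.2, color='purple')
plt.gca().set_aspect('equal')
plt.show()
```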
|
|curves|parametric|envelope|
| 0
|
Proof that a certain function that satisfy a specific equation is constant
|
I am currently doing Problem 2, Chapter 1, from the "Functional Equations and How to Solve Them" book by Christopher G. Small. It is as follows: Let $f(x)$ be a function that satisfies $f(x + y) = f(xy)$ for all positive $x$ and $y$. Prove that $f(x)$ is a constant. Here are my 2 possible solutions: 1. While the problem constrains us to positive values, if we assume it is continuous at $0$ and use a limit argument, then substituting $y = 0$: $$f(x + y) = f(xy) \\ \Rightarrow f(x + 0) = f(0x) \Rightarrow f(x) = f(0)$$ for a certain value of $f(0)$. This means that $f(x)$ is equal to a constant $f(0)$ and hence we have proved the proposition. 2. Define $s = x + y, p = xy$. $x$ and $y$ are therefore solutions of the quadratic equation $t^2 - st + p = 0$ (with $t$ as the variable). Clearly for such a solution to exist $s$ and $p$ need to satisfy $\Delta = s^2 - 4p \geq 0$. With that, we have $f(s) = f(p)$ for all values $s$ and $p$ that satisfy $s^2 - 4p \geq 0$. We'll call this conditio
|
Solution (1) requires $f(0)$, so it's not correct. Solution (2) is right. You don't need an arbitrary number though. We have $$f\left(2a+\dfrac{2}{a}\right) = f\left(4\right)= f\left (b\left(4-b\right)\right)$$ (take $x=2a$, $y=\frac{2}{a}$ for the first equality and $x=b$, $y=4-b$ for the second). By AM–GM, $2a+\dfrac{2}{a}$ has range $[4,+\infty)$ for $a \gt 0$, while $b(4-b)$ has range $(0,4]$ for $0 \lt b \leq 2$; hence together they cover the entire $\mathbb{R}^+$.
|
|solution-verification|functional-equations|
| 1
|
How Can I Prove this Version of Minkowski’s Theorem: $vol(C)>k2^d$ with $2k$ Lattice Points?
|
Question Prove that if $C\subseteq \mathbb{R}^d$ is convex, centrally symmetric and bounded, with $vol(C)>k2^d$ , then $C$ contains at least $2k$ lattice points (of lattice $\mathbb{Z}^d$ ). Note Minkowski’s theorem talks about $vol(C)>2^d$ and $C$ contains at least one point apart from the origin. Attempt Let $C^\prime = \frac{1}{2}C$ . Then, $vol(C^\prime)=(\frac{1}{2})^dvol(C)>k$ . I don’t know how to go from here. Induction seems to be a bad choice since if we take object at $k=1$ away from object at $k=2$ , the object we will have left will not be convex. I will greatly appreciate your help.
|
You could try like in the proof of standard Minkowski. There are 2 steps. 1. If $D$ is such that $m(D) > m(\Gamma)$ (where $\Gamma$ is any fundamental parallelotope of the lattice $L$), then $D$ does not map injectively to $V/L$; that is, there exist $d_1$, $d_2$ in $D$ such that $0\ne d_1-d_2\in L$ (the intuition is clear, the proof is a bit subtle, but standard nowadays). 1'. Generalization: if $D$ is such that $m(D) > k \cdot m(\Gamma)$, then there exist distinct $d_1$, $\ldots$, $d_{k+1} \in D$ such that $d_i-d_j \in L$ (a similar proof). 2. Take $D = \frac{1}{2} C$. Then $m(D) > k\, m(\Gamma)$, so there exist distinct $d_1$, $\ldots$, $d_{k+1} \in D$ with differences in $L$. Now $d_i = \frac{1}{2} c_i$. But $C$ is convex and symmetric so the $k$ distinct elements $\frac{c_i-c_{k+1}}{2}$ are in $C$ and in $L\backslash\{0\}$. $\bf{Added:}$ We found $c_1$, $\ldots$, $c_{k+1}$ in $C$, distinct, such that $\frac{c_i-c_j}{2}\in L$ for all $i,j$. Now, this gets us $k$ distinct no
|
|convex-geometry|combinatorial-geometry|
| 1
|
$\liminf_{n\to \infty}(P_{n+1} -P_n)>\liminf_{n\to \infty}(P_{n+1}) -\limsup_{n\to \infty}(P_n) =(n+1)\log(n+1)-n\log n>\lim_{n\to\infty}\log n$.
|
I saw a paper by Zhang Yitang that $\liminf_{n\to \infty}(P_{n+1}-P_n) < 70000000$, where $P_n$ is the $n$th prime. But by the prime number theorem, $\liminf_{n\to \infty}(P_{n+1} -P_n)>\liminf_{n\to \infty}(P_{n+1}) -\limsup_{n\to \infty}(P_n) =\lim_{n\to\infty}((n+1)\log(n+1)-n\log n)>\lim_{n\to\infty}\log n$ for sufficiently large $n$. Then how can it be less than $70000000$? Can anyone correct me? Thanks
|
Note that $\liminf P_n = \infty,$ and $\limsup P_n = \infty$, since there are infinitely many primes. So, the lower bound expression you've written is of the indeterminate $\infty - \infty$ form, which should give you pause. The prime number theorem (PNT) says (in one version) that $\lim P_n/(n \log n) = 1.$ In other words, $P_n = n \log n + \mathrm{error}_n$, where $\lim \frac{\mathrm{error}_n}{n \log n} = 0,$ i.e., the error grows slower than $n \log n$. But this error term can be large in and of itself: even under the Riemann hypothesis, we only expect to be able to control $\mathrm{error}_n$ to (roughly) the scale $\sqrt{n}$. This is waaay larger than $(n+1) \log (n+1) - n \log n \approx \log n$. This means that while the PNT is an accurate description of where the $n$th prime is, it cannot say anything about the gaps between consecutive primes, beyond speaking about them on average. But of course such an average is a very different kind of thing than a $\liminf$, which is tracking the minim
|
|elementary-number-theory|
| 0
|
Prove that every root of $P(z)$ in the closed unit disc has multiplicity at most $c \cdot \sqrt{n}$, where $c = c(M) > 0$ is constant depending on $M$
|
Problem statement: Let $P(z)$ be a polynomial of degree $n$ with complex coefficients, $P(0) = 1$, and $|P(z)| \leq M$ for $|z| \leq 1$. Prove that every root of $P(z)$ in the closed unit disc has multiplicity at most $c \cdot \sqrt{n}$, where $c = c(M) > 0$ is a constant depending only on $M$. Attempt (this is my friend's attempt, because I don't know where to start): It is sufficient to examine the multiplicity of the number 1. In fact, if we prove something for 1, then we may apply the result to the polynomial $p(z) = P(\alpha z)$ with $|\alpha| \leq 1$, and in this way, we obtain the same estimate for all roots lying in the unit disc. The idea of the solution is the following. We consider the integral $$F(P) = \int_{0}^{2\pi} \log(|P(e^{i\phi})|) \, d\phi$$ and show that it exists and is nonnegative. Then we estimate it from above, once in the neighborhood of 1 with the aid of the multiplicity of 1 and the degree of $P$, and once at other points using the condition $|P(z)| \leq
|
Here is my attempt. Let $P(z)=\sum_{k=0}^n a_kz^k$; then $a_0=P(0)=1$. Let $m+1$ be the multiplicity of the root $1$, so we have $P^{(k)}(1)=0$ for any $0\le k\le m$. Hence $\sum_{k}a_kf(k)=0$ for any polynomial $f$ of degree not exceeding $m$, since $\left\{ x(x-1)\cdots(x-k+1)\right\}_{k\le m}$ is a basis. Now we choose $f(x)=T_m(\frac{2x-n-1}{n-1})$, where $T_m(x)$ is the Chebyshev polynomial of the first kind. Then $|f(k)|\le 1$ for $1\le k \le n$. By the triangle inequality, we deduce that \begin{align} \sum_{k=1}^n |a_k|&\ge \sum_{k=1}^n |a_kf(k)|\ge\left|\sum_{k=1}^n a_kf(k)\right|=|a_0f(0)|\\&=T_m\left(1+\frac{2}{n-1}\right)\\ &=\cosh\left(m\log\left(1+\frac{2}{\sqrt{n}-1}\right)\right)\\ &>\cosh\left(\frac{2m}{\sqrt{n}}\right)\\ &>\frac{2m^2}{n} \end{align} So if we choose $c$ sufficiently large, we can ensure that when $m>c\sqrt{n}$, $\sum_{k} |a_k|$ must be very large. But I still can't figure out the relationship between $\sum |a_k|$ and $\max_{|z|=1} |P(z)|$.
|
|complex-analysis|polynomials|complex-numbers|analytic-geometry|
| 0
|
Help understanding proof for additivity of Lebesgue outer measure for open sets
|
I’m reading the proof for the following lemma regarding the additivity of the Lebesgue outer measure: Suppose $A$ and $G$ are disjoint subsets of $\mathbb{R}$ and $G$ is open. Then $$|A∪G| = |A|+|G|.$$ Proof: We can assume that $|G| < \infty$ because otherwise both $|A ∪ G|$ and $|A| + |G|$ equal $∞$. Subadditivity (see 2.8) implies that $|A ∪ G| ≤ |A| + |G|$. Thus we need to prove the inequality only in the other direction. First consider the case where $G = (a,b)$ for some $a,b ∈ \mathbb{R}$ with $a < b$. We can assume that $a,b \notin A $ (because changing a set by at most two points does not change its outer measure). Let $I_1, I_2, . . .$ be a sequence of open intervals whose union contains $A ∪ G$. For each $n ∈ \mathbb{Z}^+$, let $$J_n = I_n ∩(−∞,a), K_n = I_n ∩(a,b), L_n = I_n ∩(b,∞).$$ Then $$\mathcal{l}(I_n)= \mathcal{l}(J_n)+ \mathcal{l}(K_n)+ \mathcal{l}(L_n) $$ Now $J_1, L_1, J_2, L_2, . . .$ is a sequence of open intervals whose union contains $A$ and $K_1, K_2, . . .$ is a sequen
|
It is shown that $\sum^\infty_{n=1}{\mathcal{l}(I_n)} ≥ |A| + |G|$ for any cover of $A \cup G$ by open intervals $(I_n)$ . This implies that the infimum of the sums in the left side is also $\ge |A|+|G|$ , which means that $|A\cup G| \ge |A|+|G|$ . The reverse inequality always holds. [If each element of a set $S$ of real numbers is $\ge a$ then $\inf S \ge a$ ].
|
|measure-theory|
| 1
|
Given an $n$-element family $\mathcal{S}$ of average size $r$, is $\sum |S_i \cap S_j|\geq n\binom{r}{2}$?
|
Consider a set $X$ of size $n$ , and a size- $n$ family of sets $\mathcal{S}$ . The sets in $\mathcal{S}$ have average size $r$ , and their intersections are of size at most $k$ . I'm trying to show that the intersection graph of $\mathcal{S}$ satisfies the inequality $$|E|≥\frac{n}{k}\binom{r}{2}.$$ The intersection graph is a graph on $n$ vertices which has $(i,j)$ as an edge iff $|S_i\cap S_j|$ is nonzero. I'm considering the sum $\sum |S_i \cap S_j|$ . It's clear that $$|E|k\geq\sum |S_i \cap S_j|$$ so to complete the problem, it would be enough to show the assertion in the question. Unfortunately, I'm not sure how to proceed -- $r$ is only the average size, and I can't use any usual average-size-of-set-intersection lemmas since they're all lower bounds on the size of some set intersection, not upper bounds. Any hints on how to proceed? I feel like I'm missing something obvious, so I'd prefer a hint instead of a solution. After some consideration, it seems that the inclusion-exclus
|
Your intuition is correct. You only need to prove $\sum |S_i \cap S_j| \geq n{r \choose 2}$. I'll give you a hint as you requested. First of all, the correct summation you want is $\sum_{i<j}$, to ensure no double counting of the edges in the intersection graph. For any element $x \in X$, define $d(x)$ as the degree of $x$, or the number of sets of $\mathcal{S}$ that it is contained in. First observe that $$\sum_{x\in X}d(x) = \sum_i |S_i| = rn$$ Now notice that $$\sum_{i<j}|S_i \cap S_j| = \frac{1}{2} \left( \sum_{i, j}|S_i \cap S_j| -\sum_i |S_i| \right)$$ But note that $\sum_{i,j} |S_i\cap S_j|$ counts each element $x$ in $X$ a total of $d(x)^2$ times. Indeed, for fixed $i$, it is counted $d(x)$ times if $x\in S_i$, and $0$ otherwise. So in total it is counted $d(x)^2$ times. Thus $$\frac{1}{2} \left( \sum_{i, j}|S_i \cap S_j| -\sum_i |S_i| \right) = \frac{1}{2}\left( \sum_{x\in X}d(x)^2 - \sum_{x\in X}d(x) \right) = \sum_{x\in X} {d(x) \choose 2}$$ So you get the inequality $$ \sum_{x\in X} {d(x) \choose 2} \leq k |E|$$ Can you take it from here?
|
|combinatorics|extremal-combinatorics|
| 1
|
Property of image of measurable sets mapped by continuous injection
|
$E\subset \mathbb R^n$ is Lebesgue measurable. A continuous injection $f:E\to \mathbb R^n$ maps sets of zero measure to sets of zero measure. Prove: If $m(f(E))$ is finite, then for any $\varepsilon>0$, there exists $\delta>0$ such that for any measurable subset $Z\subset E$ with $m(Z)<\delta$, we have $m(f(Z))<\varepsilon$. I find this problem similar to the property of absolutely continuous functions, but that method does not seem to work here. I don't know how to solve this problem.
|
If not, there exist $\epsilon >0$ and sets $Z_n$ with $m(Z_n) < 2^{-n}$ and $m(f(Z_n)) \geq \epsilon$ for all $n$. Let $Z=\lim \sup Z_n$. Using injectivity of $f$ check that $f(Z)=\lim \sup f(Z_n)$. Now, $\sum_n m(Z_n) < \infty$ and this implies $m(Z)=0$. But $m(f(Z))=m(\lim \sup f(Z_n))\ge \lim \sup m(f(Z_n)) \ge \epsilon$, a contradiction. [The fact that $m(f(E)) < \infty$ is required to say that $m(\lim \sup f(Z_n))\ge \lim \sup m(f(Z_n))$. Apply Fatou's Lemma to the complements].
|
|real-analysis|measure-theory|lebesgue-measure|
| 1
|
Solving $2-4\sin 2 \theta = 0$
|
Which of the following represents zeros of $r=2-4\sin 2 \theta $ ? (Multiple choice) I don't understand this question. I figured out that $\theta$ is $30$ , and I determined that the answer is $A$ (i.e. $\pi/6$ and $5\pi/6$ ). However, the answer sheet tells me that $D$ (i.e. $\pi/12$ , $5\pi/12$ , $13\pi/12$ , $17\pi/12$ ) is correct. I don't understand how $13\pi/12$ and $17\pi/12$ are possible answer for this question.
|
As commented by Benjamin, you seem to have `missed the $2$ in $2\theta$ '. The following solution uses the notation $a\equiv b\bmod c$ , which means: $a=b+kc$ for some $k\in\Bbb Z$ . $$\begin{align}\sin(2\theta)=\frac12&\iff2\theta\equiv\pi/6\text{ or }5\pi/6\bmod{2\pi}\\ &\iff\theta\equiv\pi/12\text{ or }5\pi/12\bmod\pi\\&\iff\theta\equiv\pi/12\text{ or }\pi/12+\pi\text{ or }5\pi/12 \text{ or }5\pi/12+\pi\bmod{2\pi}. \end{align}$$
|
|algebra-precalculus|trigonometry|
| 0
|
Is there a prime number $p$ dividing $1+2!^2+3!^2+\cdots (p-1)!^2$?
|
Is there a prime number $p$ with $p \mid \sum_{j=1}^{p-1} j!^2$? I checked the primes up to $600\,000$ without finding a solution. Heuristic: If we can assume that the probability that $p$ is a solution is $\frac{1}{p}$, then there should be infinitely many solutions and therefore the desired prime should exist. But I guess this is not the case and maybe the special expression can easily be determined to be or not to be divisible by $p$. Motivation: Such a prime number would prove that $1+2!^2+3!^2+\cdots n!^2$ can be prime only for finitely many positive integers $n$ and also give an upper bound for the possible $n$. If the answer to the question is no, there is a chance that there are infinitely many such primes.
|
Let's implement $S$ in Python to make large searches possible (for whoever is interested). Let $S(m) = \sum_{j=1}^{m-1} (j!)^2$ . We want to focus our study on prime $m$ , but let's have $S$ available for any positive integer $m \geq 2$ . (There is an extensive comment in the Markdown here. It details an attempt to move directly from $S(m-1) = q_{m-1}(m-1) + r_{m-1}$ to $S(m) = q_m m + r_m$ . The result was no faster, so it is not presented. Feel free to edit and read if you are somehow interested.) The function sPlain() is a direct translation of the definition of $S$ . The function sReuse is an attempt to implement Efim Mazhnik's description of his code. Sconsecutive() evaluates $S(m) \pmod{m}$ for $m =2, 3, 4, \dots$ . SconsecutivePrimes() evaluates $S(p) \pmod{p}$ for prime integers, $p = 2, 3, 5, \dots$ . (I find that SconsecutivePrimes() runs faster as (equivalent) Mathematica code. Most likely, Mathematica tries harder to cache sieving partial results between NextPrime[] calls t
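Since the code itself is not shown here, a minimal sketch of what sPlain() and SconsecutivePrimes() could look like (the function names are from the answer; the bodies are my own reconstruction, reducing modulo $m$ at every step so the numbers stay small):

```python
import sympy

def sPlain(m):
    """S(m) = sum_{j=1}^{m-1} (j!)^2, reduced mod m."""
    total, fact = 0, 1
    for j in range(1, m):
        fact = fact * j % m              # j! mod m, built incrementally
        total = (total + fact * fact) % m
    return total

def SconsecutivePrimes(limit):
    """Print every prime p <= limit with p | S(p)."""
    p = 2
    while p <= limit:
        if sPlain(p) == 0:
            print(p)
        p = sympy.nextprime(p)

SconsecutivePrimes(10_000)   # prints nothing, consistent with the search so far
```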
|
|summation|prime-numbers|divisibility|factorial|
| 0
|
Does symmetric matrix adjoint of $A$ have all its entries equal?
|
Reading about graph theory, they said that, for the Laplacian matrix of a graph, let's call it $A$, its adjoint $Adj(A)$ has all its entries equal, since $A$ is symmetric. Here is an example of a Laplacian matrix $A$ and its adjoint matrix. $$A=\begin{pmatrix}2&-1&0&-1\\-1&3&-1&-1\\0&-1&1&0\\-1&-1&0&2 \end{pmatrix} \to Adj(A)=\begin{pmatrix} 3&3&3&3\\3&3&3&3\\3&3&3&3\\3&3&3&3 \end{pmatrix}$$ I can understand that when looking for the determinant of, for example, the entry $(3,1)$, it will be composed of the same entries as that of the determinant of its symmetric element $(1,3)$, since in both cases the entries of the main diagonal are eliminated in the same order and the remaining symmetric elements are taken $$A_{(3,1)}=\begin{pmatrix}-1&0&-1\\3&-1&-1\\-1&0&2 \end{pmatrix} \to A_{(1,3)}=\begin{pmatrix} -1&3&-1\\0&-1&0\\-1&-1&2 \end{pmatrix}$$ But I do not understand how it is that the determinants of the main diagonal are equal to each other, since there is no relationship that leads to that result. $$A_{(1,1)}=
|
The Graph Laplacian $A$ is always singular, because $Ae=0$ where $e$ denotes the vector of ones. In general, for an $n\times n$ matrix over a general field, if $\operatorname{rank}(A) < n-1$, then $\operatorname{adj}(A)=0$. If $\operatorname{rank}(A)=n-1$, then $\operatorname{adj}(A)$ is a nonzero scalar multiple of $uv^T$, where $\ker(A)=\operatorname{span}\{u\}$ and $\ker(A^T)=\operatorname{span}\{v\}$. Therefore, in your case, $\operatorname{adj}(A)$ is always a (possibly zero) scalar multiple of $ee^T$ (and this scalar multiple must be nonnegative, as every $(n-1)$-rowed principal submatrix of $A$ is an M-matrix). Hence all elements of $\operatorname{adj}(A)$ are equal to each other.
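A quick numerical check on the question's example (a minimal sketch; the adjugate is computed naively from cofactors):

```python
import numpy as np

A = np.array([[ 2, -1,  0, -1],
              [-1,  3, -1, -1],
              [ 0, -1,  1,  0],
              [-1, -1,  0,  2]], dtype=float)
n = A.shape[0]

print(A @ np.ones(n))        # zero vector: e spans ker(A)

adj = np.empty((n, n))
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)  # transposed cofactor
print(np.round(adj))         # every entry equals 3
```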
|
|linear-algebra|matrices|geometry|determinant|
| 0
|
The roots of $\tan(x)-x$ approach $\frac{\pi}2 + n\pi$
|
I wrote a proof on ProofWiki of this problem: Prove $\displaystyle \int_0^\infty \left|\dfrac d {d t} \left(\dfrac {\sin t} t\right)^n\right|\mathrm d t$ diverges for $n=1$. To fill the gaps in the proof, I need to prove the following two inequalities (also see the talk page): $$\tag1 t_n < \frac{\pi}2+n\pi$$ and $$\tag2 |\sin(t_n)|\ge|\sin(t_1)|$$ where $(t_n)_{n=1}^\infty$ are all the positive roots of $t=\tan(t)$ in increasing order. I am convinced that inequalities $(1),(2)$ hold, by looking at the intersections $(t_n,t_n)$ of the graphs $y = x$ and $y = \tan x$. $t_n$ is closer and closer to the lines $x=\frac{\pi}2 + n\pi$ which are asymptotes to the graph of $y=\tan x$. To prove $(1)$, I use the monotonicity of the function $\tan(x)-x$ on each interval $\left(n\pi,\frac{\pi}2+n\pi\right)$: $$\frac{d}{dx}[\tan(x)-x]=\sec^2(x)-1\ge0$$ so the function $\tan(x)-x$ is continuous and increasing on each interval $\left(n\pi,\frac{\pi}2+n\pi\right)$, and $\tan(x)-x<0$ at $x=n\pi$ and $$\lim_{x\to\frac{\pi}2+
|
Your proof is correct and easy to understand. As for your question, the property can also be demonstrated easily. Assume $$ \lim_{n\to\infty}\left(n+\frac12\right)\pi-t_n=\epsilon>0.\tag1 $$ Since $\left(n+\frac12\right)\pi-t_n$ is a decreasing sequence, this means: $$ \forall n: \left(n+\frac12\right)\pi-t_n>\epsilon\implies t_n<\left(n+\frac12\right)\pi-\epsilon\implies \tan t_n<\tan\left(\left(n+\frac12\right)\pi-\epsilon\right)=\cot\epsilon. $$ Here we took into account that $\tan x$ is an increasing continuous function for $\pi n < x < \left(n+\frac12\right)\pi$. In view of $t_n=\tan t_n$ the last inequality implies: $$\forall n: t_n<\cot\epsilon,$$ which is however impossible since $t_n$ is unbounded (particularly $t_n>n\pi$). This contradiction demonstrates that the assumption $\epsilon>0$ was false.
|
|real-analysis|sequences-and-series|solution-verification|improper-integrals|
| 1
|
Find a rotation matrix given some constraints on points transformation
|
I'm looking for an elegant way to find a rotation matrix between RefA and RefB, with all the points known in RefA and 6 constraints (corresponding to a rigid transformation with 6 DoF) in RefB. I set as a prerequisite that the translation was handled beforehand, and that therefore the problem has a solution. 1 - Example Given points in ref A: Point 1 (+10,+0,+0) Point 2 (+20,+5,+0) Point 3 (+20,-5,+0) What would be the rotation matrix to get them in a ref B where we have 6 constraints (to lock position with 6 DoF): Point 1' (6.7151763 4.7140706 -5.7169875) Point 2' (14.1649161 12.78572935 PT2'_Z) Point 3' (12.6957891 PT3'_Y PT3'_Z) Here the answer I would like to obtain is the following rotation matrix: [[ 0.67151763 0.1469127 0.72627869] [ 0.47140706 0.67151763 -0.57169875] [-0.57169875 0.72627869 0.38168025]] 2 - First attempts I tried by following the matrix calculation as: [[ R11 R12 R13 ] x [[ +10 +20 +20 ] = [[ +6.7151763 PT2'_X PT3'_X ] [ R21 R22 R23 ] [ +00 +05 -05 ] [ +4.7140706 12.
|
You have the initial points matrix $ P = \begin{bmatrix} 10 && 20 && 20 \\ 0 && 5 && -5 \\ 0 && 0 && 0 \end{bmatrix}$ And you partially have the rotated points matrix $ Q = \begin{bmatrix} 6.7151763 && 14.1649161 && 12.6957891 \\ 4.7140706 && 12.78572935 && Y_3 \\ -5.7169875 && Z_2 && Z_3 \end{bmatrix}$ And we want to find the rotation matrix $R$ such that $ Q = R P $ Now, the rotation matrix does not change the length of vectors or the length of their differences, therefore, $ 10^2 + 0^2 + 0^2 = 6.7151763^2 + 4.7140706^2 + (-5.7169875)^2 \tag{1}$ $ 20^2 + 5^2 + 0^2 = 14.1649161^2 + 12.78572935^2 + Z_2^2 \tag{2}$ $ 20^2 + (-5)^2 + 0^2 = 12.6957891^2 + Y_3^2 + Z_3^2 \tag{3}$ And in addition, $ (10 - 20)^2 + (0 - 5)^2 + (0 - 0)^2 = (6.7151763 - 14.1649161)^2 + (4.7140706 - 12.78572935)^2 + (-5.7169875 - Z_2)^2 \tag{4} $ $ (10 - 20)^2 + (0 - (-5) )^2 +0^2 = (6.7151763 - 12.6957891)^2 + (4.7140706 - Y_3)^2 + (-5.7169875 - Z_3)^2 \tag{5}$ $ (20 - 20)^2 + (5 - (-5) )^2 + (0 - 0)^2 = (14.1649
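If it helps, here is a minimal numerical sketch of this recipe: the missing coordinates are obtained from the pairwise-distance equations via scipy's fsolve (the initial guess below was picked by a coarse search and should be re-checked against the norm equations (2)–(3), since the system has a mirror branch), and the rotation is then recovered with the Kabsch/SVD method:

```python
import numpy as np
from scipy.optimize import fsolve

# Points in ref A (columns p1, p2, p3), from the question
P = np.array([[10.0, 20.0, 20.0],
              [ 0.0,  5.0, -5.0],
              [ 0.0,  0.0,  0.0]])

q1 = np.array([6.7151763, 4.7140706, -5.7169875])

def residuals(u):
    # Pairwise-distance equations (4), (5) and the |q2 - q3| analogue
    Z2, Y3, Z3 = u
    q2 = np.array([14.1649161, 12.78572935, Z2])
    q3 = np.array([12.6957891, Y3, Z3])
    p1, p2, p3 = P.T
    return [np.sum((q1 - q2)**2) - np.sum((p1 - p2)**2),
            np.sum((q1 - q3)**2) - np.sum((p1 - p3)**2),
            np.sum((q2 - q3)**2) - np.sum((p2 - p3)**2)]

Z2, Y3, Z3 = fsolve(residuals, x0=[-8.0, 6.0, -15.0])
Q = np.column_stack([q1,
                     [14.1649161, 12.78572935, Z2],
                     [12.6957891, Y3, Z3]])

# Kabsch: the rotation R minimizing ||R P - Q||_F, via SVD of Q P^T
U, _, Vt = np.linalg.svd(Q @ P.T)
R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
print(np.round(R, 8))
```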
|
|matrix-equations|3d|
| 0
|
What are the eigenvalues and eigenvectors of $\operatorname{ad}x$ for non-diagonalizable $x$?
|
We know the following proposition is true. The proof together with the specification of the eigenvectors of $\operatorname{ad}x$ is here . Let $x\in \operatorname{gl}(n,F)$ be diagonalizable with $n$ eigenvalues $a_1,\ldots,a_n$ in $F$ . The eigenvalues of $\text{ad }x$ , where $\operatorname{ad}x(y):=[x,y]=xy-yx$ are precisely the $n^2$ scalars $a_i-a_j$ ( $1\leq i,j\leq n$ ). What is the result if $x$ is not diagonalizable? We know $x$ can always be transformed into the Jordan canonical form. I solved the cases of $2\times2$ and $3\times3$ Jordan canonical forms. I would like to know the general solution.
|
The same conclusion holds. Use the same argument as in the linked post, if $xv=\lambda v$ and $x^t w = \mu w$ where $v$ , $w$ are nonzero (eigenvectors), then $vw^t$ is an eigenvector of $\operatorname{ad}(x)$ . Hence $\lambda -\mu$ is an eigenvalue of $\operatorname{ad}(x)$ . Now we only need to show if $x$ is nilpotent, then $\operatorname{ad}(x)$ only has eigenvalue $0$ , in other words, $\operatorname{ad}(x)$ is nilpotent (this simple fact is used in the proof of the Engel's theorem in Lie algebra): $\operatorname{L_x}(y):=xy$ and $\operatorname{R_x}(y):=yx$ are both nilpotent and commute, therefore their difference $L_x-R_x=\operatorname{ad}(x)$ is also nilpotent. To be slightly more rigorous, let $v_1, \dotsc, v_l$ (resp. $w_1, \dotsc, w_l$ ) be a linearly independent set of generalized eigenvectors of $x$ (resp. $x^t$ ), then we claim $v_iw_j^t$ is a linearly independent set of generalized eigenvectors of $\operatorname{ad}(x)$ . The linear independence follows from the independ
|
|linear-algebra|eigenvalues-eigenvectors|lie-algebras|adjoint-action|
| 1
|
How to actually find the stream function for a simple Laplace problem?
|
Let's assume we're solving a $2D$ Laplace problem, in a domain (if necessary simply connected) $\Omega \subset \mathbb{R}^2$ , with a Dirichlet boundary $\Gamma_D$ and a Neumann boundary $\Gamma_N$ : $$\Delta \mathbf{u} = 0.$$ $$\mathbf{u} = \mathbf{u_D}, \quad \text{ on }\Gamma_D$$ $$\nabla\mathbf{u}\cdot \mathbf{n} = \mathbf{u_N}, \quad \text{ on }\Gamma_N$$ We define a stream function $\Psi$ such that it satisfies $$\frac{\partial \Psi}{\partial x} = \frac{\partial \mathbf{u}}{\partial y}, \quad \frac{\partial \Psi}{\partial y} = -\frac{\partial \mathbf{u}}{\partial x}.$$ One can now argue that level sets of $\Psi$ correspond to the streamlines of $\nabla \mathbf{u}$ since $$\nabla\mathbf{u} \cdot \nabla\Psi = 0.$$ Presumably, since $\mathbf{u}$ satisfies the Laplace equation, $\nabla \mathbf{u}$ satisfies the continuity equation $\nabla \cdot (\nabla\mathbf{u}) = 0$ , so the stream function exists (here the assumption of a simply connected $\Omega$ should be necessary). Moreover, $
|
You can always just integrate the conditions for $\Psi$ . Fix some $(x_0,y_0)\in\Omega$ and let's examine the function $$\Psi(x,y):=\int_{x_0}^x\partial_yu(s,y)\,ds-\int_{y_0}^y\partial_x u(x_0,s)\,ds.$$ Then $$\partial_x\Psi(x,y)=\partial_yu(x,y)$$ and, using the div-free property of $\nabla u$ , also $$\partial_y\Psi(x,y)=\int_{x_0}^x\partial^2_{yy}u(s,y)\,ds-\partial_x u(x_0,y)=-\int_{x_0}^x\partial_{xx}^2u(s,y)\,ds-\partial_x u(x_0,y)=-\partial_xu(x,y).$$ I doubt that one is able to say something useful about Dirichlet boundary conditions for $\Psi$ (note that it is defined only up to an additive constant).
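The construction can be sanity-checked symbolically; a minimal SymPy sketch, using the harmonic example $u = x^2 - y^2$ (my own choice of example):

```python
import sympy as sp

x, y, s, x0, y0 = sp.symbols('x y s x0 y0')
u = x**2 - y**2                       # a harmonic function on R^2

# Psi(x, y) as defined in the answer, with base point (x0, y0)
Psi = (sp.integrate(sp.diff(u, y).subs(x, s), (s, x0, x))
       - sp.integrate(sp.diff(u, x).subs({x: x0, y: s}), (s, y0, y)))

print(sp.simplify(sp.diff(Psi, x) - sp.diff(u, y)))   # 0
print(sp.simplify(sp.diff(Psi, y) + sp.diff(u, x)))   # 0
```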
|
|partial-differential-equations|heat-equation|fluid-dynamics|
| 0
|
Intermediate ring between a field and an algebraic extension.
|
This is an exercise in some textbooks. Let $E$ be an algebraic extension of $F$. Suppose $R$ is a ring that contains $F$ and is contained in $E$. Prove that $R$ is a field. The trouble is really with the inverse of $r$, where $r\in R$. How to prove that $r^{-1}\in R$, in apparent lack of a characterization of $R$? It occurred to me to use the smallest field containing $R$ ($R$ is easily shown to be an integral domain), that's the field of quotients, and proving that it's $R$ itself, but I don't really know how to proceed. A not-too-weak, not-too-strong hint will be much appreciated. Beware: readers seeking only hints should beware that there is now a complete answer.
|
Let $r \in R$, $r\neq 0$. $r$ is algebraic over $F$, so $F[r] = F(r)$. $F[r]$ is the smallest ring containing $r$ and $F$, therefore $F[r]\subseteq R$. As $r^{-1}\in F(r)$, we get $r^{-1}\in F[r]\subseteq R$. Thus, $R$ is a field.
|
|abstract-algebra|field-theory|
| 0
|
About convergent sequence $f_n \to f$ in $L^p(U)$ ( Convergence in norm, passage of limit under integral etc.. ; Evans's PDE )
|
Let $U$ be a bounded, connected, open subset of $\mathbb{R}^n$. Assume $1 \le p \le \infty$. Let $f_n \to f$ be a convergent sequence in $L^p(U)$. My question is, then, Q.1. Is $\lim_{n\rightarrow\infty} \left\Vert f_n \right\Vert_p =\left\Vert f\right\Vert_p$? I found an associated post: Convergence in Lp implies convergence in Lp norms finite. From the post, I think that the norm convergence is true for $1 \le p < \infty$. And I also wonder whether this is true for $p=\infty$. Q.2. For any test function $\phi\in C^{\infty}_c(U)$, $$ \int_U f \phi dx = \lim_{n\to \infty}\int_U f_n \phi dx \tag{1}$$ ? Let's define a functional $T_{\phi}$ on $L^p(U)$ by $T_{\phi}(f):=\int_U f \phi dx $ for all $f\in L^p(U)$. If $T_{\phi}$ is a 'bounded' linear functional on $L^p(U)$, then $T_{\phi}(f_n)$ converges to $T_{\phi}(f)$. But is the boundedness of $T_{\phi}$ (including the case $p=\infty$) true? Or is there any other route to show $(1)$? This question originates from the following proof of the Poincare's inequality i
|
For question 1), this is true by the reverse triangle inequality for norms: $$\big|\,\|x\|-\|y\|\,\big|\le\|x-y\|.$$ For question 2), note the next theorem (Hölder's Inequality): Theorem 1. Let $E$ be a measurable set, $1\le p < \infty$, and $q$ the conjugate of $p$. If $f$ belongs to $L^p(E)$ and $g$ belongs to $L^{q}(E)$, then their product $f\cdot g$ is integrable over $E$ and $$ \int_E|f\cdot g| \le \|f\|_{p} \cdot \|g\|_q.$$ Now as suggested in my original question, let me show that $T_{\phi}(f) :=\int_U f \phi dx $ ($f \in L^{p}(U)$) is a 'bounded' linear functional, so that we are done. To show the boundedness of $T_{\phi}$, we need to show that there exists $M \ge 0$ such that $$ |T_{\phi}(f)| := \bigg| \int_U f \phi dx\bigg| \le M \cdot \|f\|_p $$ for all $f\in L^{p}(U)$. First note that since $\phi \in C^{\infty}_c(U)$, $\phi \in L^{q}(U)$ for all $1 \le q \le \infty$. Case 1) $1 \le p < \infty$: Note that $$\bigg| \int_U f \phi dx \bigg| \le \int_U |f\phi|dx \le \|f\|_p \cdot \|\ph
|
|real-analysis|partial-differential-equations|lebesgue-integral|
| 1
|
Moving expressions around gives different implicit differentiation
|
Here's an exercise from Thomas Calculus, I need to do an implicit differentiation: $$ x^3=\frac{2x-y}{x+3y} $$ If I enter this in Wolfram Alpha, I get: $$ y'(x) =-\frac{3 x^4 + 18 x^3 y + 27 x^2 y^2 - 7 y}{7 x} $$ But if I move things around first like this: $x^3(x+3y)=2x-y$ , then I get a completely different result: $$ y'(x)=-\frac{4x^3+9x^2y-2}{3x^3+1} $$ Which is what I got when doing it by hand. First I thought that it simply took different steps, and one expression can be simplified to another, but then I tried to 3D plot the expressions as functions of $x$ and $y$ , and I got different plots. Which makes me wonder how this is possible.
|
First, the answer provided by Wolfram Alpha in your link includes a negative sign and reads $$y'(x) = - \frac{3x^4 + 18x^3 y + 27 x^2 y^2 - 7y}{7x}. \tag{1}$$ This is a correct expression for the implicit derivative of $y$ with respect to $x$ . The second expression $$y'(x) = - \frac{4x^3 + 9x^2 y - 2}{3x^3 + 1} \tag{2}$$ you obtained by hand calculation is also correct. They are not identically equivalent for all $(x,y)$ because they are only equivalent when the original implicit relation $$x^3 = \frac{2x-y}{x+3y} \tag{3}$$ is true; that is to say, $(1)$ and $(2)$ yield the same value of $y'(x)$ for all $(x,y)$ for which $(3)$ is true. To see this, one only needs to explicitly solve $(3)$ for $y$ and perform the differentiation: $$y(x) = \frac{2x - x^4}{1 + 3x^3} \tag{4}$$ hence $$y'(x) = \frac{2 - 16x^3 - 3x^6}{(1 + 3x^3)^2}. \tag{5}$$ Then if we substitute $(4)$ into $(1)$ and simplify, we should obtain $(5)$ : $$\begin{align} y'(x) &= -\frac{1}{7x} \left(3x^4 + 18x^3 \frac{2x - x^4}
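Both claims are easy to confirm with SymPy (a minimal sketch: substitute the explicit solution $(4)$ into $(1)$ and $(2)$ and compare with the direct derivative):

```python
import sympy as sp

x = sp.symbols('x')
y = (2*x - x**4) / (1 + 3*x**3)      # explicit solution (4)

d1 = -(3*x**4 + 18*x**3*y + 27*x**2*y**2 - 7*y) / (7*x)   # expression (1)
d2 = -(4*x**3 + 9*x**2*y - 2) / (3*x**3 + 1)              # expression (2)

print(sp.simplify(d1 - sp.diff(y, x)))   # 0
print(sp.simplify(d2 - sp.diff(y, x)))   # 0
```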
|
|implicit-differentiation|
| 1
|
Proof for the curl of a curl of a vector field
|
For a vector field $\textbf{A}$, the curl of the curl is given by $$\nabla\times\left(\nabla\times\textbf{A}\right)=\nabla\left(\nabla\cdot\textbf{A}\right)-\nabla^2\textbf{A}$$ where $\nabla$ is the usual del operator and $\nabla^2$ is the vector Laplacian. How can I prove this relation? I tried the ugly/inefficient/brute-force method, by getting an expression for the LHS and the RHS for an arbitrary vector field $$\textbf{A}=\left(a(x,y,z),b(x,y,z),c(x,y,z)\right)$$ It does work (duh), but is there a more elegant way of doing this? Using matrix notation maybe? EDIT: I got very good answers, from various perspectives. I would say @Spencer's derivation is the one I was looking for, using Einstein notation - and as a physics student, this was very helpful. However, @Vectornaut's solution not only is short and elegant, but it also introduced me to a whole new range of mathematics - and as a theoretical physics student, I appreciate learning new mathematical theories and trying to see h
|
Maybe you are already familiar with the cross-product formula $\vec{a} \times (\vec{b} \times \vec{c}) = (\vec{a} \cdot \vec{c}) \vec{b} - (\vec{a} \cdot \vec{b}) \vec{c}$ , but unsure as to why you can treat $\nabla$ as a vector. We can try to justify this by using the Fourier transform. Let's define the 3D Fourier transform with the convention $$ \psi(\vec{k}) = \mathcal{F}\{\psi(\vec{x})\} \equiv \int e^{-i \vec{k} \cdot \vec{x}} \psi(\vec{x}) d^3 \vec{x} $$ This definition extends to vector-valued functions too. Under the Fourier transform, using integration by parts, we can show that differentiation becomes multiplication $$ \mathcal{F}\{\partial_\mu \psi(\vec{x})\} = i k_\mu \psi(\vec{k}) $$ Essentially, the effect is the replacement of the operator $\nabla \to i\vec{k}$ (in another convention, there might be a minus sign). For example, $$ \mathcal{F}\{\nabla \psi\} = i\vec{k} \psi \\ \mathcal{F}\{\nabla \times \vec{A}\} = (i\vec{k}) \times \vec{A} \\ \mathcal{F}\{\nabla \cdot \v
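The brute-force componentwise check the OP mentions can at least be automated; a minimal SymPy sketch (my own verification, separate from the Fourier argument above):

```python
from sympy import Function, simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
a, b, c = [Function(name)(x, y, z) for name in 'abc']
A = a*N.i + b*N.j + c*N.k

lap = lambda f: f.diff(x, 2) + f.diff(y, 2) + f.diff(z, 2)
vec_lap = lap(a)*N.i + lap(b)*N.j + lap(c)*N.k   # vector Laplacian, componentwise

res = curl(curl(A)).to_matrix(N) - (gradient(divergence(A)) - vec_lap).to_matrix(N)
print(res.applyfunc(simplify))   # zero column vector
```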
|
|multivariable-calculus|vector-analysis|vector-fields|curl|
| 0
|
"Classical" bicartesian closed category?
|
Every Heyting algebra can be thought of as a bicartesian closed category through which is also a poset. We may interpret classical logic in a Heyting algebra if we ask of their pseudocomplements to be complements, i.e: to be boolean. Can we give a similar definition with bicartesian closed categories in general and not get a preorder? That is, a "boolean bicartesian closed category". Intuitively I'd say you can't but who knows.
|
No, not in the manner you suggest. The "collapse" starts to happen the moment you allow initial objects and the exponential (and the product) to come together. That's for Cartesian closed categories . It doesn't matter whether it's bi-Cartesian or not. I showed, in some detail, where and how the cascade of collapses happens here: Decidability of bi-cartesian closed categories . All of the "negative" formulae - those expressible as $¬A$ , a.k.a. "regular" formulae - fall into a boolean lattice, where negation is defined by $¬A = A ⊃ ⊥$ , and $⊥$ corresponds to the initial object . More generally and more precisely, the unique morphism $f: A → ¬B$ that witnesses the conditional $A ⊃ ¬B$ is given by $f = ⋀{([]_{A∧B})}^{-1}$ , using the notation in the linked reply. This is true regardless of whether $A$ is negative or not. The linked reply shows that if $g: A∧B → ⊥$ , then $g = ([]_{A∧B})^{-1}$ , where $[]_{A∧B}: ⊥ → A∧B$ is the unique morphism that witnesses the conditional $⊥ ⊃ A∧B$ . Unde
|
|logic|category-theory|categorical-logic|
| 0
|
Question about an inequality during proof that $e^2$ is irrational.
|
I am reading a proof of the irrationality of $e^2$ and I am stuck on the following inequality: Let $S := -a\underbrace{\left(\frac{1}{n+1} - \frac{1}{(n+1)(n+2)} + \frac{1}{(n+1)(n+2)(n+3)} \mp ...\right)}_{S^*}$ (just because of space issues) with $a \in \mathbb{Z}$, $n \in \mathbb{N}$. The proof I am reading states that $$-\frac{a}{n} < S < 0.$$ Why is this true? My intuition tells me that $S^* < \frac{1}{n}$ since $1/n$ is already greater than the first term of $S^*$ and the terms afterwards all tend to 0 rather quickly, so it never reaches $1/n$, but I am looking for a more rigorous explanation. The same goes for $S^*$ being apparently smaller than $\tilde S$. I see that the terms which get subtracted tend to zero more quickly than the terms of $S^*$ and so the inequality could be true as far as my intuition goes, but not further. Thanks in advance for any help! (The proof I am referring to is out of "Proofs from THE BOOK" by Martin Aigner and Günter M. Ziegler in case anyone is wondering.)
|
It took me kind of long to understand this proof but I think I finally understood and want to share what I've learned at this point. Also I think the way I stated my question was kind of misleading so I apologize for that. First of all we assumed that $e^2$ is in fact rational with $e^2 = \frac{a}{b}$ where $a$ and $b$ are positive integers (since $e^2$ is also definitely positive). We can write this equation as $bn!e = an!e^{-1}$ (we also multiplied by $n!$ for some $n \in \mathbb{N})$ and insert the definition of $e$ and $e^{-1}$ via the exponential function (in the following for $e^{-1}$ and $n$ even): $$ e^{-1} = \sum^\infty_{k=0} \frac{(-1)^k}{k!} = \left( 1 - \frac{1}{1!} + \frac{1}{2!} \mp ... + \frac{1}{n!}\right) - \left(\frac{1}{(n+1)!} - \frac{1}{(n+2)!} + \frac{1}{(n+3)!} \mp ...\right). $$ Doing this the right hand side of the equation breaks up into two pieces one of which is $$ -an!\left(\frac{1}{(n+1)!} - \frac{1}{(n+2)!} + \frac{1}{(n+3)!} \mp ...\right) = -a\biggl(\un
|
|real-analysis|analysis|elementary-number-theory|irrational-numbers|
| 1
|
Solve PDE : $U^2_x + U^2_y + 1 = \frac{1}{U^2}$
|
Here is the PDE: $U^2_x + U^2_y + 1 = \frac{1}{U^2}$. I tried to solve it using the separation of variables method. Assume $U=XY$; $U_x = \dot X Y$ and $U_y = X \dot Y$, so the PDE becomes: $(\dot X Y)^2 + (X \dot Y)^2 + 1 = \frac{1}{(XY)^2}$, i.e. $(\dot X Y)^2 + (X \dot Y)^2 = \frac{1}{(XY)^2} - 1$. My goal is to make the RHS become $0$; however, I am stuck.
|
Hint. Using a Charpit procedure we have: $$ f(x,y,z,p,q) = p^2+q^2+1-\frac{1}{z^2} $$ and from $$ \frac{dp}{f_x+p f_z}=\frac{dq}{f_y+q f_z}=\frac{dz}{-p f_p-q f_q}=\frac{dx}{-f_p}=\frac{dy}{-f_q} $$ we have $$ \frac{z^3dp}{p}=\frac{z^3dq}{q}=-\frac{dz}{p^2+q^2}=-\frac{dx}{p}=-\frac{dy}{q} $$ From $$ \frac{z^3dp}{p}=\frac{z^3dq}{q}\Rightarrow p = c_1 q $$ From $$ \frac{z^3dp}{p}=-\frac{dz}{p^2+q^2}\Rightarrow \frac{dz}{z^3}=-(c_1^2+1)q dq $$ etc.
|
|partial-differential-equations|
| 0
|
Show that each of the following equations has a solution of the form $u(x,y) = f(ax+by)$ for a proper choice of constants $a,b$.
|
Find the constants for each example. (a) $u_x + 3u_y =0$ (b) $3u_x - 7u_y = 0$ (c) $2u_x + \pi u_y =0$
|
Show that each of the following equations has a solution of the form $u(x, y) = f(ax + by)$ for a proper choice of constants $a, b$. Find the constants for each example. (a) $u_x + 3u_y = 0$. (b) $3u_x − 7u_y = 0$. (c) $2u_x + \pi u_y = 0$. To show that each of the given equations has a solution of the form $u(x, y) = f(ax + by)$ for some constants $a$ and $b$, we can use the method of characteristics. The general form of a first-order linear partial differential equation is given by: $$A(x, y)u_x + B(x, y)u_y = C(x, y)$$ where $A$, $B$, and $C$ are functions of $x$ and $y$. The characteristic equation for this type of PDE is given by: $$\frac{dx}{A} = \frac{dy}{B} = \frac{du}{C}$$ Solving the characteristic equations will give us the family of characteristic curves along which the solution can be written in the form $u(x, y) = f(ax + by)$. Let's find the solutions for each of the given equations: (a) $u_x + 3u_y = 0$. Here, $A = 1$, $B = 3$, and $C = 0$. The characteristic equations become: $$\frac{dx}{
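A quick symbolic check of one valid $(a, b)$ per part (a minimal SymPy sketch; these constants are my own choices, and they are only determined up to a common scale factor):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

cases = [
    (1, 3,     f(3*x - y)),         # (a): u_x + 3 u_y = 0, with (a, b) = (3, -1)
    (3, -7,    f(-7*x - 3*y)),      # (b): 3 u_x - 7 u_y = 0, with (a, b) = (-7, -3)
    (2, sp.pi, f(sp.pi*x - 2*y)),   # (c): 2 u_x + pi u_y = 0, with (a, b) = (pi, -2)
]
for A, B, u in cases:
    print(sp.simplify(A*u.diff(x) + B*u.diff(y)))   # each prints 0
```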
|
|partial-differential-equations|
| 0
|
What is the real K-theory of the spheres?
|
In many sources one finds a computation of the complex topological K-theory of the spheres $S^n$ , but the real theory $KO(S^n)$ is usually not computed. Maybe it can be read off some more advanced stuff, but I am unable to do so. I would like to see a computation of, say $KO(S^2)$ . Can somebody give me a reference?
|
The computation of these groups is an immediate consequence of real Bott periodicity. Namely, by representability you have $$ \widetilde{KO}(S^n) \cong [S^n, BO \times \mathbb{Z}]_* \cong \begin{cases} [S^n, BO]_* = \pi_n(BO) & \text{if }n > 0 \\ \mathbb{Z} & \text{if } n = 0 \end{cases} $$ and since $\pi_n(BO) \cong \pi_{n - 1}(O)$ you get to read off that \begin{align} \widetilde{KO}(S^0) &\cong \mathbb{Z} \\ \widetilde{KO}(S^1) &\cong \mathbb{Z}/2 \\ \widetilde{KO}(S^2) &\cong \mathbb{Z}/2 \\ \widetilde{KO}(S^3) &\cong 0 \\ \widetilde{KO}(S^4) &\cong \mathbb{Z} \\ \widetilde{KO}(S^5) &\cong 0 \\ \widetilde{KO}(S^6) &\cong 0 \\ \widetilde{KO}(S^7) &\cong 0 \\ \end{align} with $\widetilde{KO}(S^{8k + i}) \cong \widetilde{KO}(S^i)$ accounting for all remaining groups. This table you can also find here , and the Wikipedia pages for topological $K$ -theory and Bott periodicity , too, are pretty useful. For a proper mathematical text, Karoubi's " $K$ -Theory―An Introduction" gives a proof
|
|topological-k-theory|
| 1
|
Expected value of $X_N$ with smallest index $N$ for which $\sum_{i=1}^N X_i$ exceeds $1$ when $X_i$ are uniformly distributed
|
From an interview book, where the answer is not so clear I believe. You keep generating $\mathcal U_{[0,1]}$ iid random variables until their sum exceeds 1, then compute the expected value of the last random variable, i.e. the one responsible for letting the sum of rvs overflow 1. My idea (not working): The $i$-th draw from $\mathcal U_{[0,1]}$ is called $X_i$, and $S_N:=\sum_{i=1}^N X_i$. I aim to compute: $$\mathbb E\left[X_{N}\right], \quad N:=\min \left\{i:\sum_{j=1}^i X_j > 1\right\}.$$ Rewrite it as: $$\mathbb E\left[X_{N}\right] = \sum_{i=2}^\infty \mathbb E\left[X_{N}|N=i\right]\mathbb P[N=i].$$ From this question I know that $\mathbb P[N=i] = (i-1)/i!$. I know that $X_N$ takes positive values between 0 and 1, so I use the expectation of the tail function: $$\mathbb E\left[X_{N}|N=i\right]=\int_0^1 \mathbb P[X_N>t|N=i]\ \text d t= 1-\int_0^1 \mathbb P[X_N\leq t|N=i]\ \text d t.$$ Now, some relabeling, using $X$ for the generic $\mathcal U_{[0,1]}$ and $Y$ for $S_{i-1}$: $$\mathbb P[X_N\
|
Another way: \begin{gather*} f_{S_{N-1}|N}(s|n)=ns^{n-2}\int_1^{1+s}dx=ns^{n-1},\quad E[S_{N-1}|N]=\frac{n}{n+1}\\ f_{S_N|N}(s|n)=\int_{s-1}^1nx^{n-2}dx=\frac{n(1-(s-1)^{n-1})}{n-1},\quad E[S_N|N]=\frac{3n+2}{2n+2}\\ E[X_N|N]=E[S_N|N]-E[S_{N-1}|N]=\frac{3n+2}{2n+2}-\frac{n}{n+1}=\frac{n+2}{2n+2}\\ E[X_N]=\sum_2^\infty\frac{n+2}{2n+2}\frac{n-1}{n!}=2-\frac{e}{2} \end{gather*}
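A Monte Carlo check of $2 - e/2 \approx 0.6409$ (a minimal sketch):

```python
import math
import random

def last_summand():
    s = 0.0
    while True:
        x = random.random()
        s += x
        if s > 1.0:
            return x   # the draw that pushed the sum past 1

trials = 10**6
est = sum(last_summand() for _ in range(trials)) / trials
print(est, 2 - math.e / 2)   # both ~ 0.6409
```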
|
|probability|random-variables|expected-value|conditional-expectation|
| 0
|
Question about solutions of linear system
|
I've just started studying linear algebra. I had a question while studying an example. I'm gonna explain the problem. The augmented matrix for a linear system in the unknowns x, y and z is $$ \left[\begin{array}{rrr|r} 1 & 0 & 3 & -1 \\ 0 & 1 & -4 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right] $$ The linear system corresponding to the augmented matrix is $$ x + 3z = -1 \\ y - 4z = 2 $$ As you all know, if you apply a parametric equation to the above equations, you can obtain a general solution like this: $$ x = -1 -3t,\, y = 2 + 4t, \,z = t$$ But from what I learned, geometrically, the solution of the linear system is the intersection of each graph. If we rewrite the above equations for $z$, $$ z = -{1 \over 3}x - {1 \over 3} \\ z = {1 \over 4}y - {1 \over 2} $$ it seems that these linear equations' graphs go through different planes (xz-plane, yz-plane) and never intersect. So my question is, how can there be solutions when the graphs drawn by the linear equations don't make an intersection?
|
I think your mistake is that you think of the solutions of the equations \begin{align} z =-\frac{1}{3}x-\frac{1}{3}\ \ \&\ \ z=\frac{1}{4}y-\frac{1}{2} \end{align} as lines in the respective planes. But they are in reality planes in $\mathbb{R}^3$ (assuming you are looking for real solutions), and since they are not parallel they intersect in a line in $\mathbb{R}^3$, namely the one you specified above, i.e. $x=-1-3t,\ y=2+4t,\ z=t$ . The reason they are planes is that the solutions to $z =-\frac{1}{3}x-\frac{1}{3}$ form the set \begin{align} \{(x,y,z)\in\mathbb{R}^3\mid z =-\frac{1}{3}x-\frac{1}{3}\}. \end{align} Intuitively, $y$ can be chosen freely, then you can make a second free choice for $x$, but then $z$ is already given by the equation above; i.e. you can make two free choices of the three variables, giving you a plane.
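A tiny numeric check (not in the original answer) that the parametric line lies on both planes:

```python
import numpy as np

# The parametric solution x = -1-3t, y = 2+4t, z = t should satisfy
# both plane equations z = -(x+1)/3 and z = y/4 - 1/2 for every t.
for t in np.linspace(-2, 2, 9):
    x, y, z = -1 - 3*t, 2 + 4*t, t
    assert np.isclose(z, -(x + 1)/3) and np.isclose(z, y/4 - 1/2)
print("the line lies on both planes")
```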
|
|linear-algebra|
| 1
|
Parametric area of a region bounded by two curves
|
Let $S(\epsilon)$ be the area of the region bounded by $y=e^x$ and $y=x+1+\epsilon$ , where $\epsilon$ is a small positive number. When $\epsilon\to0,$ we have $$S(\epsilon)=S_0+\epsilon^\alpha S_1+\dots,\alpha>0$$ Find $S_0, S_1$ and $\alpha$ . For starters, this is how the graphs of $y=e^x$ and $y=x+1$ look: Let's take $\epsilon=0.5$ , then we would have the following graphs: So for us to find the area of the region bounded by the two graphs, we would need to calculate the following integral $$\int _a ^b (x+1+\epsilon-e^x)dx$$ We have to determine the limits as well, so we have to solve $$x+1+\epsilon=e^x$$ $$e^x-x=1+\epsilon$$ $$1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dots - x = 1+\epsilon $$ $$\epsilon=\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dots,$$ which I am not sure how to use in order to find $x$ in terms of $\epsilon$ . How do I continue from here? I am not sure what the sum given in the problem has to do with the integral and how we should calculate the first two terms. What are $S_0,
|
As suggested by Claude Leibovici you can use the series reversion approach to get the first couple of terms for the series expansion of the inverse function. Let $f(x) = \epsilon$ and $g(\epsilon) = x$ be the inverse function. Using the approach presented in the link shared by Claude Leibovici you can get $g(\epsilon) = -3 + \frac{13}{2}\epsilon + \ldots$ In order to get this simply put $\epsilon = \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots = \frac{g(\epsilon)^2}{2!} + \frac{g(\epsilon)^3}{3!} + \ldots$ where you write: $g(\epsilon) = a_0 + a_1\epsilon + \ldots$ and compare powers of $\epsilon$ . This allows you to compute the integral as: $\frac{b^2-a^2}{2} + (b-a)(1+\epsilon) + e^a - e^b$ . By then using the fact that $a$ and $b$ are roots of $x+1+\epsilon = e^x$ this simplifies to $\frac{b^2-a^2}{2} + \epsilon(b-a)$ . Perhaps you could try it from here and see what answer you get, if you get stuck ask again.
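Independently of the reversion route, a small numeric experiment (a sketch using scipy root-finding and quadrature) shows that $S(\epsilon)/\epsilon^{3/2}$ stabilizes, which suggests $S_0=0$ and $\alpha=3/2$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

# The intersections of e^x and x+1+eps sit on either side of 0 (roughly +-sqrt(2*eps)).
def area(eps):
    g = lambda x: np.exp(x) - (x + 1 + eps)
    a = brentq(g, -10, -1e-12)  # left intersection
    b = brentq(g, 1e-12, 10)    # right intersection
    return quad(lambda x: x + 1 + eps - np.exp(x), a, b)[0]

for eps in [1e-2, 1e-3, 1e-4]:
    print(eps, area(eps), area(eps) / eps**1.5)  # last column levels off near a constant
```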
|
|integration|definite-integrals|area|
| 0
|
Roots of $\tan x - x$
|
The function $\tan x - x$ has exactly one root $x_n$ in the interval $(n\pi, (n + \frac{1}{2})\pi)$. Show that $$x_n = n\pi + \frac{\pi}{2} - \frac{1}{n\pi} + r_n$$ where $\lim_{n\rightarrow \infty} n r_n = 0$. I think I should try to use Taylor expansion some way, but I am not seeing how to do this.
|
Let $y_n = \pi/2-x_n + n\pi$ , then $y_n\in(0,\pi/2)$ and $$\frac{1}{\tan y_n}=\frac{\pi}{2}-y_n+n\pi\Leftrightarrow \tan y_n=\frac{1}{\frac{\pi}{2}-y_n+n\pi}$$ Since $y_n\in(0,\pi/2)$ , it follows that $$\frac{1}{n\pi}> \tan y_n > \frac{1}{(n+1/2)\pi}\\ \Rightarrow 1 > n\pi\tan y_n > \frac{1}{1+\frac{1}{2n}}$$ By the squeeze theorem we have that $\lim_{n\rightarrow\infty}n\pi \tan y_n=1$ , but we also know that $\lim_{n\rightarrow\infty}\frac{\tan y_n}{y_n} = \lim_{y\rightarrow 0}\frac{\tan y}{y} = 1$ , therefore $$\lim_{n\rightarrow +\infty} n\pi \,y_n = 1\\ \Leftrightarrow \lim_{n\rightarrow +\infty} n\pi(\pi/2 - x_n + n\pi) = 1\\ \Leftrightarrow \lim_{n\rightarrow +\infty} n\pi\left(x_n - n\pi - \frac{\pi}{2} + \frac{1}{n\pi}\right)=0 \Leftrightarrow \lim_{n\rightarrow +\infty} n\pi r_n=0$$
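A numeric illustration (not part of the answer; scipy's brentq locates $x_n$ in each interval):

```python
import numpy as np
from scipy.optimize import brentq

# Check that x_n = n*pi + pi/2 - 1/(n*pi) + r_n with n*r_n -> 0.
for n in [1, 5, 20, 100, 500]:
    lo, hi = n*np.pi + 1e-9, (n + 0.5)*np.pi - 1e-9
    xn = brentq(lambda x: np.tan(x) - x, lo, hi)
    rn = xn - n*np.pi - np.pi/2 + 1/(n*np.pi)
    print(n, n*rn)  # tends to 0
```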
|
|real-analysis|taylor-expansion|roots|
| 0
|
Random vector $(X, Y)$ has a uniform distribution on the unit circle.
|
Faced with the following problem, I do not understand how to solve this problem: Random vector $(X, Y)$ has a uniform distribution on the unit circle. Will its components be independent? It is not very clear to me how to approach such tasks; is it necessary to look for a vector distribution function here? But it's probably clear that we need to check the definition somehow: $X,Y$ independent $\Leftrightarrow$ $\mathbb P(X\le x,\,Y\le y)=\mathbb P(X\le x)\,\mathbb P(Y\le y)$ for all $x,y$. But how to do this is not very clear.
|
They are not independent. A mathematical reasoning can be: $f_{X,Y}(x,y) = \frac{1}{\pi}$ on the unit disc (given). From here we can find $$f_X(x) = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} f_{X,Y}(x,y) \, dy = \frac{2\sqrt{1-x^2}}{\pi}.$$ For independence to hold we would need $f_{X|Y} = f_X$. Now, given $Y=y$ the point lies on the chord $|x|\le\sqrt{1-y^2}$, on which $X$ is uniform: $$f_{X|Y}(x\mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{2\sqrt{1-y^2}}, \qquad |x|\le\sqrt{1-y^2}.$$ Since $f_{X|Y} \neq f_X$ (the former depends on $y$, the latter does not), $X$ and $Y$ are not independent.
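A simulation (not part of the answer) makes the dependence vivid: the joint event below is geometrically impossible, while the product of the marginal probabilities is not:

```python
import numpy as np

# Sample uniformly from the unit disc by rejection from the square.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(2_000_000, 2))
pts = pts[(pts**2).sum(axis=1) <= 1]
x, y = pts[:, 0], pts[:, 1]
# (0.8, 0.8) lies outside the disc, so the joint probability is exactly 0,
# yet both marginal probabilities are positive.
print("P(X>0.8, Y>0.8)     =", np.mean((x > 0.8) & (y > 0.8)))
print("P(X>0.8) * P(Y>0.8) =", np.mean(x > 0.8) * np.mean(y > 0.8))
```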
|
|probability|random-variables|uniform-distribution|
| 0
|
Prove Looped line is Hausdorff
|
Definition of Looped line At each point $x$ of the real line other than the origin, the basic neighborhoods of $x$ will be the usual open intervals centered at $x$ . Basic neighborhoods of the origin will be the sets $(-\epsilon,\epsilon)\cup(-\infty,-n)\cup(n,\infty)$ for all possible choices $\epsilon >0$ and $n \in \mathbb{N}$ . This gives a topology on the line. Problem . Prove the looped line is Hausdorff. I know, by the definition of a Hausdorff space, that for distinct points $x,y$ both different from zero we can find disjoint open sets. But when, for example, $x=0$, I do not see how to find an open set around $y$ disjoint from a basic neighborhood of the origin. Could you guide me to solve this problem?
|
For $x = 0$ and $y\neq 0$, WLOG suppose that $y \gt 0$; consider the interval $(\frac{y}{2},\frac{3y}{2})$ as a nhood of $y$ and $(\frac{-y}{2},\frac{y}{2}) \cup (-\infty,-\lceil\frac{3y}{2}\rceil) \cup (\lceil\frac{3y}{2}\rceil,\infty)$ as a nhood of $x$ . Clearly these two nhoods are disjoint, and based on the definition of nhood you can find open sets $V_y$ and $U_x$ contained in the $y$ -nhood and the $x$ -nhood respectively, so $U_x$ and $V_y$ are disjoint. For $y \lt 0$ you can do the same.
|
|general-topology|
| 0
|
If the limit $\lim\limits_{x\to 0}{\frac{\sin 3x}{x^3} + \frac{a}{x^2}+b}$ exists and equals $0$ then what can $a$ and $b$ be?
|
Let $$L=\lim\limits_{x\to 0}{\frac{\sin 3x}{x^3} + \frac{a}{x^2}+b}=0$$ given that $a,b \in \mathbb R$ and are finite. I tried the following approach, We know, $\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}}=1$ $$\therefore L=\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}\cdot \frac{3}{x^2}+\frac{a}{x^2}+b}$$ substituting $\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}}=1$ $$L=\lim\limits_{x\to 0}{\frac{3}{x^2}}+\frac{a}{x^2}+b$$ after some simplification $$L=\lim\limits_{x\to 0}{\frac{bx^2+a+3}{x^2}}$$ Now, for L to exist $bx^2+a+3 \to 0$ Hence, $$\boxed{a=-3}$$ and as $$L=0 \implies \boxed{b=0}$$ but this is wrong! $b \not= 0$ I tried to plot $f(x)=\frac{\sin 3x}{x^3}-\frac{3}{x^2}$ (image attached here ). Clearly from graph $b=\frac{9}{2}$ . But what is wrong with this approach?
|
You cannot just "substitute" $\lim\limits_{x \to 0} \frac{\sin(3x)}{3x} = 1$ . A proper way of doing it is using Landau notation, i.e., $$\frac{\sin(3x)}{3x} = 1 - \frac{3}{2}x^2 + O(x^4).$$ Then you get $$L = \lim\limits_{x\to 0} \frac{bx^2+a+3(1 - \frac{3}{2}x^2 + O(x^4))}{x^2} = \lim\limits_{x\to 0} \frac{bx^2+a+3 - \frac{9}{2}x^2 + O(x^4)}{x^2}.$$ Therefore you need to put $a = -3$ and $b = \frac{9}{2}$ .
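A one-line symbolic confirmation (not part of the answer, using sympy):

```python
import sympy as sp

# Confirm that a = -3, b = 9/2 make the limit equal to 0.
x = sp.symbols('x')
expr = sp.sin(3*x)/x**3 - 3/x**2 + sp.Rational(9, 2)
print(sp.limit(expr, x, 0))  # 0
```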
|
|calculus|limits|functions|trigonometry|indeterminate-forms|
| 0
|
If the limit $\lim\limits_{x\to 0}{\frac{\sin 3x}{x^3} + \frac{a}{x^2}+b}$ exists and equals $0$ then what can $a$ and $b$ be?
|
Let $$L=\lim\limits_{x\to 0}{\frac{\sin 3x}{x^3} + \frac{a}{x^2}+b}=0$$ given that $a,b \in \mathbb R$ and are finite. I tried the following approach, We know, $\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}}=1$ $$\therefore L=\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}\cdot \frac{3}{x^2}+\frac{a}{x^2}+b}$$ substituting $\lim\limits_{x\to 0}{\frac{\sin 3x}{3x}}=1$ $$L=\lim\limits_{x\to 0}{\frac{3}{x^2}}+\frac{a}{x^2}+b$$ after some simplification $$L=\lim\limits_{x\to 0}{\frac{bx^2+a+3}{x^2}}$$ Now, for L to exist $bx^2+a+3 \to 0$ Hence, $$\boxed{a=-3}$$ and as $$L=0 \implies \boxed{b=0}$$ but this is wrong! $b \not= 0$ I tried to plot $f(x)=\frac{\sin 3x}{x^3}-\frac{3}{x^2}$ (image attached here ). Clearly from graph $b=\frac{9}{2}$ . But what is wrong with this approach?
|
In a sum, you cannot apply $\sin(x)\approx x$ . The correct approach would be first looking for a common denominator: $$\dfrac{bx^3+ax+\sin(3x)}{x^3}$$ By Taylor's Theorem, we know $$\sin(3x)=3x-\dfrac{27}{6}x^3+g(x)x^4=3x-\dfrac{9}{2}x^3+g(x)x^4$$ for some function $g$ such that $g(x)\to 0$ when $x\to 0$ . Thus, the expression transforms to $$\dfrac{(b-9/2)x^3+(a+3)x+g(x)x^4}{x^3}$$ For its limit to exist we need $$a+3=0\iff \boxed{a=-3}$$ And we will have $$\lim_{x\to 0} \dfrac{bx^3+ax+\sin(3x)}{x^3}=b-\dfrac{9}{2}$$ for any $b\in\mathbb{R}$ . Thus, we will need $b=\dfrac{9}{2}$ for $L$ to be $0$ .
|
|calculus|limits|functions|trigonometry|indeterminate-forms|
| 1
|
When is it OK to take powers of the equation of a curve?
|
Isn't raising curve equations to powers "dangerous" in general? Take $$ x=y $$ if you square that, you get $$x^2 = y^2$$ The new equation contains the points described by $ x=y $ and $ x=-y $ combined, i.e., 2 lines, slopes 1 & -1, unlike the original equation that only described a single line. I'm self-studying calculus currently, reached arclengths, and I find that I have to take powers of curve equations sometimes or I cannot solve a problem. I'm always worried about unwanted solutions. Is there some easy way of telling when it's OK to take powers? The particular problem I'm struggling with currently is one where I have to get an equation in $x$ & $y$ for this parametric equation for a curve ( $k$ is a positive integer): $$ x = a\cos^k t $$ $$ y = a\sin^k t $$ Is it valid to take both sides to the power $2/k$ here? That would certainly help me solve this problem, but I'm not comfortable doing that, because I have no idea what kind of extra solutions will spawn, I'm struggling t
|
After you square, everything becomes positive, so then you can take the $k$ th root without any problem. The equation $x=y$ implies $x^2=y^2$ though of course one cannot reverse the implication.
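A quick numerical check (not from the answer; the absolute values sidestep the sign issue the answer warns about and also cover odd $k$):

```python
import numpy as np

# x = a*cos(t)**k, y = a*sin(t)**k should satisfy |x|^(2/k) + |y|^(2/k) = a^(2/k).
a, k = 2.0, 3
t = np.linspace(0, 2*np.pi, 1001)
x, y = a*np.cos(t)**k, a*np.sin(t)**k
print(np.allclose(np.abs(x)**(2/k) + np.abs(y)**(2/k), a**(2/k)))  # True
```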
|
|calculus|parametric|
| 0
|
Constraints in modulo arithmetics
|
Consider four positive numbers a, b, c, d and addition modulo N. I would like to determine the minimum number of constraints needed to express that: $a + b + c = d$ $a + b + c$ does not wrap around the modulo N I think I need at least 5 constraints, for instance: $a \le a + b$ $b \le a + b$ $c \le d$ $a + b \le d$ $a + b + c = d$ Can I do with fewer than 5 constraints? EDITS for clarification We are operating modulo N so you can assume $a, b, c, d$ are all numbers in $[0, N-1]$ . In fact, if they weren't they would be reduced to their congruent number modulo N in $[0, N-1]$ What the constraints are meant to prevent is a situation like: $$a=80, b=80, c=30, d=90, N=100$$ where you have $a + b + c = d$ modulo N but the addition wraps around N All the constraints operate modulo N, there is no access to integer arithmetics
|
You want to check that $a+b+c\equiv d \bmod n$ without wraparound, i.e. you want to check whether $a+b+c=d$ in integer arithmetic. However, the only tools you have are arithmetic modulo $n$ , and size comparison of the residues. To start with, you can assume that $d<n$, since $a,b,c,d$ can be assumed to be the standard residue class representatives, i.e. in the range $\{0,1,..,n-1\}$ . The following three constraints are necessary and sufficient to force none of the additions to cause a wraparound: $$c\le d \\ b \le d-c \\ a=d-c-b $$ Each constraint ensures that the subtraction in the next one has no wraparound, and the final constraint is equivalent to the equality we want. It is fairly obvious how to extend this to any number of variables.
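A brute-force verification (not in the answer) over a small modulus:

```python
from itertools import product

# The three modular constraints
#   c <= d,  b <= (d - c) % n,  a == (d - c - b) % n
# should hold exactly when a + b + c == d in ordinary integer arithmetic.
n = 11
for a, b, c, d in product(range(n), repeat=4):
    modular = (c <= d) and (b <= (d - c) % n) and (a == (d - c - b) % n)
    integer = (a + b + c == d)
    assert modular == integer, (a, b, c, d)
print("constraints match integer addition for n =", n)
```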
|
|modular-arithmetic|
| 1
|
Prove that a function is differentiable and its derivative function is integrable
|
Suppose $f:D \subset \mathbb{R} \to \mathbb{R}$ is continuous and satisfies the Lipschitz condition, that is $$\exists M>0, \forall x,y\in D:|f(x)-f(y)|\leq M|x-y|.$$ I want to know whether $f'(x)$ exists for all $x\in D$ and whether it is Riemann integrable on $D$ . By Lebesgue's Theorem I know that $f'(x)$ exists almost everywhere on $D$ and that $f'(x)$ is Lebesgue integrable on $D$ . That's because $f(x)$ is an absolutely continuous function. But I don't know what to do next. Hope someone can help me!
|
$D=[-1,1], f(x)=|x|$ shows that $f'(x)$ need not exist for all $x$ (there is no derivative at $0$). For $f'$ to be Riemann integrable, it has to be defined at all points (and it has to be bounded).
|
|differential|
| 0
|
Is there a 9×9 Sudoku Room Square?
|
The following is an order 9 Room square . Copying from Wikipedia: Each cell of the array is either empty or contains an unordered pair from the set of symbols. Each symbol occurs exactly once in each row and column of the array. Every unordered pair of symbols occurs in exactly one cell of the array. This square meets a few additional requirements: each $3\times3$ square has five pairs, and a few of the squares have all ten symbols. Is an order 9 Room square possible where all nine $3\times3$ squares have all ten symbols? Note: I don't think row/column permutations of this particular Room square will yield an answer; I tried a few million cases.
|
Yes, it is possible. Here is the first solution found by a simple search algorithm. \begin{array}{|ccc|ccc|ccc|}\hline 01&23&45&67&89&&&&\\ 68&79&&02&&&14&35&\\ &&&15&34&&78&06&29\\ \hline 25&&&39&07&18&&&46\\ 37&16&08&&&24&59&&\\ 49&&&&56&&03&28&17\\ \hline &04&69&&12&57&&&38\\ &58&13&&&09&26&47&\\ &&27&48&&36&&19&05\\ \hline \end{array}
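A short script (not part of the answer) that mechanically verifies the three Room-square conditions plus the extra Sudoku box condition for the grid above:

```python
from itertools import combinations

# '' marks an empty cell; symbols are the digits 0-9.
grid = [
    ["01","23","45","67","89","",  "",  "",  ""  ],
    ["68","79","",  "02","",  "",  "14","35",""  ],
    ["",  "",  "",  "15","34","",  "78","06","29"],
    ["25","",  "",  "39","07","18","",  "",  "46"],
    ["37","16","08","",  "",  "24","59","",  ""  ],
    ["49","",  "",  "",  "56","",  "03","28","17"],
    ["",  "04","69","",  "12","57","",  "",  "38"],
    ["",  "58","13","",  "",  "09","26","47",""  ],
    ["",  "",  "27","48","",  "36","",  "19","05"],
]
full = sorted("0123456789")
# each symbol occurs exactly once in every row and every column
for line in grid + [list(col) for col in zip(*grid)]:
    assert sorted("".join(line)) == full
# every unordered pair of symbols occurs in exactly one cell
pairs = sorted("".join(sorted(cell)) for row in grid for cell in row if cell)
assert pairs == ["".join(p) for p in combinations("0123456789", 2)]
# each 3x3 box contains all ten symbols
for bi in range(0, 9, 3):
    for bj in range(0, 9, 3):
        box = "".join(grid[bi+i][bj+j] for i in range(3) for j in range(3))
        assert sorted(box) == full
print("all Room-square and box conditions hold")
```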
|
|combinatorics|recreational-mathematics|latin-square|sudoku|
| 1
|
Integrating cumulative distribution function of normal and exponential
|
Let $F(\cdot)$ be the cdf of an exponential distribution with mean 1 and $\Phi(\cdot)$ be the cdf of the standard normal distribution. I need to show that there exists some $n$ such that \begin{align*} \int_0^\infty (1- F^n(x))\Phi^n(x)~dx > 0.99. \end{align*} I have no idea how to proceed here. I'm fairly certain that it's probably not possible to find a closed form for the left hand-side, but I'm not sure if there's some trick I should be using.
|
Let's define $g(x,y) = (1- F^y(x))\Phi^y(x)$ for $x,y\in \mathbb{R}^+$ ; we have: $$\frac{\partial g}{\partial y} = \Phi^y(x)\ln\left(\Phi(x) \right) - \left(F(x)\Phi(x)\right)^y\ln\left(F(x)\Phi(x) \right) $$ Let's study the function $z\mapsto h(z) =z^y\cdot \ln(z)$ for $0 < z < 1$ . The function $h(z)$ is decreasing (as $h'(z) = yz^{y-1}\ln(z) < 0$ ). Then: $$\frac{\partial g}{\partial y} = h\left( \Phi(x) \right) - h\left( F(x)\Phi(x) \right) < 0$$ because $0 < F(x)\Phi(x) < \Phi(x) < 1$ . We deduce then that $g(x,y)$ is decreasing with respect to $y$ for all $x \in \mathbb{R}$ . As $\int_0^{ +\infty}g(x,y)dx$ is a continuous function with respect to $y$ and $$0= \int_0^{ +\infty}\underbrace{g(x,+\infty )}_{=0}dx < \alpha,$$ there exists a unique solution $y^*(\alpha)$ for the equation $$\int_0^{ +\infty}g(x,y)dx = \alpha \hspace{1cm} \alpha \in \mathbb{R} ^+$$ And so, for all $0 < y < y^*(\alpha)$ (in particular, for $\alpha = 0.99$ ), we have: $$\int_0^{ +\infty}g(x,y)dx > \alpha$$
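Independently of the monotonicity argument, a direct numeric evaluation (a sketch using scipy) shows the integral growing with $n$ and passing $0.99$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# I(n) = integral over (0, inf) of (1 - F(x)^n) * Phi(x)^n dx, F the Exp(1) cdf.
def integrand(x, n):
    F = 1 - np.exp(-x)
    return (1 - F**n) * norm.cdf(x)**n

for n in [1, 5, 10, 50, 100]:
    val, _ = quad(integrand, 0, np.inf, args=(n,))
    print(n, val)  # grows roughly like log n, so it eventually exceeds 0.99
```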
|
|probability|probability-theory|analysis|probability-distributions|
| 1
|
Maximizing area of the triangle in a quarter circle
|
The radius of the quarter circle is $6\sqrt 5$ and we assume that $OA= 5$ and $OC=10$ . What is the maximum area of the blue triangle? Interpreting the problem statement, I believe that points $A$ and $C$ are fixed and point $B$ can move on the arc. To solve this problem, I assumed that the coordinate of $O$ is $(0,0)$ and then assigned coordinates for each vertex of the triangle: $A(5,0), C(0,10), B(x,\sqrt{180-x^2})$ where $x \in [0, 6\sqrt5]$ . Then I applied the formula for the area of the triangle given its vertices, and the problem is reduced to maximizing $$A(x)= \left|25-(\frac52\sqrt{180-x^2}+5x)\right|\quad \text{for}\quad x \in [0, 6\sqrt5]$$ Which is easy to continue and I got $50$ as the answer. I'm looking for other approaches to solve this problem. I'm particularly interested in geometric approaches.
|
The area of the triangle ABC is equal to its side AC multiplied by the height, divided by $2$ . The height will be maximal if we draw a radius OB such that OB is perpendicular to AC. Let H be the intersection of AC and OB. Then $$AC=\sqrt{OC^2+OA^2}=\sqrt{25+100}=5\sqrt5.$$ It is known that $$OH=\frac{OA\cdot OC}{AC}=\frac{5\cdot 10}{5\sqrt5}=2\sqrt5.$$ Then $$HB=OB-OH=6\sqrt5-2\sqrt5=4\sqrt5.$$ Then the area of the triangle ABC is $$\frac{BH\cdot AC}2=\frac{4\sqrt5\cdot 5\sqrt5}2=50.$$
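A brute-force numeric confirmation (not in the answer) of the maximum $50$:

```python
import numpy as np

# B = (r cos t, r sin t) on the quarter arc, r = 6*sqrt(5); A = (5,0), C = (0,10).
r = 6*np.sqrt(5)
t = np.linspace(0, np.pi/2, 200001)
bx, by = r*np.cos(t), r*np.sin(t)
area = 0.5*np.abs(5*(by - 10) + 10*bx)  # shoelace formula with O at the origin
print(area.max())  # ~50, attained where OB is perpendicular to AC
```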
|
|geometry|euclidean-geometry|triangles|area|
| 1
|
Maximizing area of the triangle in a quarter circle
|
The radius of the quarter circle is $6\sqrt 5$ and we assume that $OA= 5$ and $OC=10$ . What is the maximum area of the blue triangle? Interpreting the problem statement, I believe that points $A$ and $C$ are fixed and point $B$ can move on the arc. To solve this problem, I assumed that the coordinate of $O$ is $(0,0)$ and then assigned coordinates for each vertex of the triangle: $A(5,0), C(0,10), B(x,\sqrt{180-x^2})$ where $x \in [0, 6\sqrt5]$ . Then I applied the formula for the area of the triangle given its vertices, and the problem is reduced to maximizing $$A(x)= \left|25-(\frac52\sqrt{180-x^2}+5x)\right|\quad \text{for}\quad x \in [0, 6\sqrt5]$$ Which is easy to continue and I got $50$ as the answer. I'm looking for other approaches to solve this problem. I'm particularly interested in geometric approaches.
|
Just extend the quarter circle to the full circle, and call $A'$ and $C'$ the intersections of the line $AC$ with the circle. The midpoint $B$ of the arc $A'C'$ (the arc on the far side of $AC$ from $O$, which here lies in the first quadrant) is the solution, since it is the point of the circle farthest from the line $AC$.
|
|geometry|euclidean-geometry|triangles|area|
| 0
|
If $A$ is normal with $\sigma(A)\subseteq \mathbb{R}\cup\mathbb{T}$, does $\text{dim ker}(AB-BA)=\text{dim ker}(A^*B-BA^*)$?
|
This clearly holds if $A$ is self-adjoint, and also if $A$ is unitary, because then $A(\text{ker}(AB-BA))=\text{ker}(A^*B-BA^*)$ . To prove this, if $w\in\text{ker}(AB-BA)$ , then $A^*B(Aw)=A^*ABw=Bw=BA^*(Aw)$ , so $Aw\in\text{ker}(A^*B-BA^*)$ and if $v\in\text{ker}(A^*B-BA^*)$ , then $v=AA^*v$ and with $w=A^*v$ we have $ABw=ABA^*v=AA^*Bv=Bv=BAA^*v=BAw$ , so $w\in\text{ker}(AB-BA)$ . If $A$ is normal with spectrum contained in the union of the real line and the unit circle, then there are several things one can try. On the one hand, if we diagonalize $A=UDU^*$ , then $D$ can be split into the sum of two diagonal matrices, one with the real entries and $0$ 's else, and one with the entries on the unit circle and $0$ 's else, $D=D_1+D_2$ . Moreover, let $J$ be the diagonal matrix which has $1$ 's where $D$ has real entries and $0$ 's else, then $D=(D_1-J)+(D_2+J)$ , so $A=U(D_1-J)U^*+U(D_2+J)U^*=:A_1+A_2$ , where $A_1$ is self-adjoint and $A_2$ is unitary. In this case, $A^*=A_1+A_2^*$ Anoth
|
The statement is not true in general. Pick any four numbers $x,y,z,w$ on $\mathbb R\cup\mathbb T$ such that $$ \zeta=(y-z)\overline{(x-z)}(x-w)\overline{(y-w)} $$ is not a real number. E.g. when $x=1,y=i,z=-1$ and $w=0$ , we have $\zeta=2(1-i)\not\in\mathbb R$ . Let $$ A=\pmatrix{x\\ &y\\ &&z\\ &&&w}\quad\text{and}\quad B=\pmatrix{0&0&y-z&y-w\\ 0&0&x-z&x-w\\ 0&0&0&0\\ 0&0&0&0}. $$ Then $$ [A,B]=\pmatrix{0&P\\ 0&0}\quad\text{and}\quad [A^\ast,B]=\pmatrix{0&Q\\ 0&0} $$ where $$ P=\pmatrix{(x-z)(y-z)&(x-w)(y-w)\\ (x-z)(y-z)&(x-w)(y-w)}\quad\text{and}\quad Q=\pmatrix{\overline{(x-z)}(y-z)&\overline{(x-w)}(y-w)\\ (x-z)\overline{(y-z)}&(x-w)\overline{(y-w)}}. $$ Hence $\operatorname{rank}[A,B]=\operatorname{rank}(P)\le1$ but $\operatorname{rank}[A^\ast,B]=\operatorname{rank}(Q)=2$ because $\det(Q)=\zeta-\overline{\zeta}\ne0$ .
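A quick numpy verification of the counterexample (the two printed ranks should be $1$ and $2$):

```python
import numpy as np

x, y, z, w = 1, 1j, -1, 0
A = np.diag([x, y, z, w])
B = np.zeros((4, 4), dtype=complex)
B[0, 2], B[0, 3] = y - z, y - w
B[1, 2], B[1, 3] = x - z, x - w
comm = A @ B - B @ A                         # [A, B]
comm_star = A.conj().T @ B - B @ A.conj().T  # [A*, B]
print(np.linalg.matrix_rank(comm), np.linalg.matrix_rank(comm_star))  # 1 2
```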
|
|linear-algebra|matrices|matrix-rank|
| 1
|
Endomorphism of compact topological group preserving conjugacy classes on dense subset
|
Let $G$ be a compact topological group and $f:G\to G$ a continuous homomorphism. Assume there is a dense subset $X\subset G$ , such that that for all $x\in X$ , we have that $f(x)$ is conjugate to $x$ . Is it then true that $f(g)$ is conjugate to $g$ for all $g\in G$ ? If $G$ were a metric space, we could make a sequence argument. Pick some sequence $(x_n)_n \in X$ converging to $g$ . Then there are elements $(g_n)_n \in G$ such that $$f(x_n)=g_nx_ng_n^{-1}. $$ Because $G$ is compact, we can pick a converging subsequence of $(g_n)_n$ with limit $g_0$ . Then $$f(g)=g_0gg_0^{-1}.$$ I am not sure how to adapt this argument to the case where $G$ is not a metric space.
|
As Moishe Kohan suggested, one may use nets, if we add the assumption that $G$ is Hausdorff. Fix $g\in G$ . For each open $U\subset G$ containing $g$ , we may pick an element $x_U \in X \cap U$ . We obtain two nets $(x_U)_U \subset X$ and $(g_U)_U \subset G$ , such that $$f(x_U)=g_Ux_Ug_U^{-1}.$$ Using the compactness assumption, we may pass to subnets and assume that both nets converge. Clearly $(x_U)_U$ converges to $g$ . Because $f$ is continuous, $(f(x_U))_U$ converges to $f(g)$ . Let $g_0$ be the limit of $(g_U)_U$ . $G$ being Hausdorff implies that limits of nets are unique. If we prove that $(f(x_U))_U$ converges to $g_0gg_0^{-1}$ , we may conclude that $f(g)=g_0gg_0^{-1}$ . The function $$\psi:G^3\longrightarrow G,\quad\psi(g_1,g_2,g_3)=g_1g_2g_3^{-1}$$ is continuous and the limit of the net $(g_U,x_U,g_U)_U \subset G^3$ is $(g_0,g,g_0) \in G^3$ . Therefore the limit of $(f(x_U))_U$ is $g_0gg_0^{-1}$ .
|
|group-theory|compactness|topological-groups|
| 1
|
Why the polynomial $x^{p+1}=y$ has exactly $p+1$ solutions in $F_{p^2}^*$ for $y \in F_p^*$?
|
In some combinatorics book I found the following remarks about multiplicative groups of finite fields: Let p be an odd prime number and $q=p^2$ . Then $F_p^*$ can be identified with the elements $x$ of $F_q^*$ that satisfy $x^{p-1}=1$ . Furthermore, for fixed $y \in F_p^*$ the polynomial $y=x^{p+1}$ has exactly p+1 solutions in $F_q^*$ . I understand that the key to all of these is the decomposition $q-1=(p+1)(p-1)$ . Nevertheless one question still remains: why does the mentioned equation have exactly $p+1$ solutions? I know that the zeros of $x^{q-1}-1$ are exactly the elements of $F_q^*$ , but I can't finish the argument.
|
The multiplicative group of $F_{p^2}-\{0\}$ is cyclic (of order $p^2-1$ , of course), which is a basic fact in field theory. Moreover, under the inclusion $\tau: F_{p}\to F_{p^2}$ , the members of $\tau(F_p)$ are characterized by \begin{equation*} \tau(F_p)=\{x\in F_{p^2}: x^p=x\}. \end{equation*} Thus \begin{equation*} \tau(F_p-\{0\})= \{x\in F_{p^2}: x^{p-1}=1\}. \end{equation*} Let $a\in F_{p^2}-\{0\}$ generate the cyclic group $(F_{p^2}-\{0\},\cdot)$ . If $m$ is an integer s.t. $0\le m < p^2-1$ , then $y=a^m\in \tau(F_p-\{0\})$ iff $a^{m(p-1)}=1$ , iff $m(p-1)$ is a multiple of $p^2-1$ , iff $m=(p+1)k$ for some $0\le k < p-1$ . Now we study the equation $x^{p+1}=y(=a^m=a^{(p+1)k})$ . Assume $x=a^l$ ( $0\le l < p^2-1$ ). For $a^{l(p+1)}=x^{p+1}=a^{(p+1)k}$ to hold, it is necessary and sufficient that $(p+1)l-(p+1)k$ is a multiple of $p^2-1$ , which is equivalent to $l-k$ being a multiple of $p-1$ , or, $l=(p-1)N+k$ for some integer $N$ . In $\{0,1,\cdots,p^2-2\}$ , there are exactly $p+1$ elements of the form $(p-1)N+k$ , since $p^2-1=(p-1)(p+1)$ ; these give the $p+1$ solutions.
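For the concrete-minded, a small brute-force check (a sketch with $p=7$, modelling $F_{49}=F_7[s]/(s^2-3)$ since $3$ is a non-residue mod $7$):

```python
p, r = 7, 3  # r must be a quadratic non-residue mod p

def mul(a, b):
    # (a0 + a1*s)(b0 + b1*s) with s^2 = r, coefficients mod p
    return ((a[0]*b[0] + r*a[1]*b[1]) % p, (a[0]*b[1] + a[1]*b[0]) % p)

def power(a, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, a)
    return out

units = [(u, v) for u in range(p) for v in range(p) if (u, v) != (0, 0)]
for y in range(1, p):
    count = sum(1 for x in units if power(x, p + 1) == (y, 0))
    print(y, count)  # every y in F_p^* gets exactly p + 1 = 8 solutions
```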
|
|abstract-algebra|group-theory|finite-groups|
| 1
|
Understanding Skorokhod representation of a random variable with prescribed distribution function
|
The following is from Williams' Probability with Martingales section 3.12 (I paraphrase a bit): Let $F:\mathbb{R}\to[0,1]$ have properties $F$ is non-decreasing. $F$ is normalized. $F$ is right-continuous. We can construct a random variable with distribution function $F$ carried by $([0,1],\mathcal{B}[0,1],\lambda)$ where $\mathcal{B}[0,1]$ is the Borel $\sigma$ -algebra on $[0,1]$ and $\lambda$ the Lebesgue measure. Define $$X^-(\omega) := \inf\{ z:F(z)\ge\omega\} = \sup\{y:F(y)<\omega\}.$$ We show $X^-$ is a random variable with $F$ as a distribution function: $$\omega\le F(c) \implies X^-(\omega)\le c \ \ \ \ \text{ by definition}$$ $$X^-(\omega)\le c \implies \omega\le F(X^-(\omega))\le F(c) \ \ \ \ \text{ by right-continuity of $F$}$$ $$\omega\le F(c) \iff X^-(\omega)\le c \ \ \ \ \text{ by previous lines}$$ $$\lambda(X^-\le c) = F(c). \ \ \ \ \text{ by ?}$$ where $\lambda$ is the Lebesgue measure. My questions: How are the lines $$X^-(\omega)\le c \implies \omega\le F(X^-(\omega))\le F(c)$$ and $$\lambda(X^-\le c) = F(c)$$ derived?
|
Fix $u$ and $x$ . Let $u\leq F(x)$ . Then as $X^{-}(u)=\inf\{x:F(x)\geq u\}$ we have that $X^{-}(u)\leq x$ , as $x$ satisfies $F(x)\geq u$ . Now let $X^{-}(u) < x$ . Then by the property of the infimum, there exists $a$ such that $F(a)\geq u$ and $a < x$ . Thus by monotonicity, $F(x)\geq F(a)\geq u$ . But you also have that since $X^{-}(u)$ is the infimum, there exists a sequence $y_{n}$ such that $y_{n}\downarrow X^{-}(u)$ and $F(y_{n})\geq u$ . Thus, by right continuity, you have by taking limits $F(X^{-}(u))\geq u$ . Thus you have the two sided statement: $X^{-}(u)\leq x \iff u\leq F(x)$ . Now note that $X^{-}$ is a random variable, i.e. a real valued measurable function from the probability space $([0,1],\mathcal{B},\lambda)$ to $(\Bbb{R},\mathcal{B})$ . The codomain is just $\Bbb{R}$ . Note that the codomain need not come equipped with a "measure". But as $X^{-}$ is a random variable, it induces a probability distribution $\mu$ on $(\Bbb{R},\mathcal{B})$ i.e. $\mu((-\infty,t]):=P\bigg((X^{-})^{-1}((-\infty,t
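This is exactly the inverse-transform sampling used in practice; a minimal sketch for $F$ the $\mathrm{Exp}(1)$ cdf, where $X^-(u)=-\log(1-u)$:

```python
import numpy as np

# Push Lebesgue measure on [0,1] through X^-(u) = inf{z : F(z) >= u}.
rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)
x = -np.log(1 - u)  # X^-(u) for F(z) = 1 - exp(-z)
for t in [0.5, 1.0, 2.0]:
    print(t, np.mean(x <= t), 1 - np.exp(-t))  # empirical cdf vs F(t)
```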
|
|probability|probability-theory|probability-distributions|proof-explanation|
| 1
|
Minimum value when $abc+ab+4bc+9ca=144$
|
If $a,b,c$ are non-negative real numbers such that $abc+ab+4bc+9ca=144$ , find the minimum value of $a+b+c$ . I tried with Lagrange multipliers. I got the system: $bc+b+9c=ca+a+4c=ab+4b+9a$ Replacing in the condition, I found four solutions, but only one $(4,0,4)$ is non-negative. So the minimum value is $8$ . My question is, can this be done without Lagrange Multipliers?
|
[Hanno's CS solution is much more instructive, so read that. I was pleasantly surprised that this approach worked, so I posted it.] 1/ First, suppose $c$ is fixed, and we want to minimize $a+b$ . The condition is equivalent to $$ (c+1) ab + (4c) b + (9c) a = 144, $$ which we factorize (via Simon's Favorite Factoring Trick) to $$[ (c+1 ) a + 4c ] [ (c+1) b + 9c ] = 144(c+1) + 36 c^2, $$ and applying AM-GM on the LHS (both terms are non-negative), we get that $$ [ (c+1 ) a + 4c ] + [ (c+1) b + 9c ] \geq 2 \sqrt{ 144(c+1) + 36 c^2 } = 12(c+2), $$ which simplifies to (we divide by $ c+1 > 0$ ) $$ a+b \geq \frac{ -c + 24 } { c+1} , $$ or that $$ a + b +c \geq \frac{ c^2 + 24 } { c+1}. $$ Equality holds when $ (c+1) a + 4c = (c+1)b + 9c $ , or that $ a-b = \frac{5c}{c+1}$ . 2/ Show that on $ c \geq 0 $ , we have $ \frac{ c^2 + 24 } { c+1 } \geq 8 $ with equality at $ c = 4$ ; indeed $c^2+24-8(c+1)=(c-4)^2\ge 0$ . This can be done through differentiation or through clever AM-GM (if you want to avoid calculus). 3/ Hence, conclude that $a+b+c\ge 8$ , with equality at $(a,b,c)=(4,0,4)$ .
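A numeric cross-check with scipy (a sketch; the solver may need several starting points):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a+b+c subject to abc+ab+4bc+9ca = 144 and a,b,c >= 0.
con = {"type": "eq",
       "fun": lambda v: v[0]*v[1]*v[2] + v[0]*v[1] + 4*v[1]*v[2] + 9*v[2]*v[0] - 144}
results = [minimize(lambda v: v.sum(), np.array(x0, float),
                    bounds=[(0, None)]*3, constraints=[con])
           for x0 in ([1, 1, 3], [5, 1, 2], [2, 4, 1], [3, 3, 3])]
best = min((res for res in results if res.success), key=lambda res: res.fun)
print(best.fun, best.x)  # ~8.0 at approximately (4, 0, 4)
```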
|
|multivariable-calculus|inequality|optimization|quadratics|maxima-minima|
| 0
|
In how many ways can we distribute 6 white and 6 black balls in 10 different boxes such that each box has at least 1 ball?
|
The question states that 6 white and 6 black balls of the same size are distributed among 10 different urns. Balls are alike except for the colour; each urn can hold any number of balls. Find the number of different distributions of the balls so that there is at least 1 ball in each urn. Now the difficulty I am finding is that I can't simply apply any method for a direct answer, as there will be many cases which could be repeated and some which I have to add again, so I thought it wouldn't be a good method. So I am seeking a quick answer to the question with some explanation, as this question is something I can't understand much.
|
First, we divide all patterns of how the balls can be distributed into two groups: group of patterns A, when one urn has $3$ balls in it, and group of patterns B, when two urns have two balls each. Examine group A; there are the following patterns in it: $3$ white balls in a “big” urn, $2$ white, $1$ black balls, $1$ white, $2$ black, $3$ black. What is left for the other $9$ urns in these cases? $3$ white, $6$ black balls, $4$ white, $5$ black, $5$ white, $4$ black, $6$ white, $3$ black. We can calculate the number of ways to distribute these $9$ balls between $9$ urns. Each urn has exactly one ball, so the answer is $9\choose3$ in the first and fourth case and $9\choose4$ in the second and third. There are $10$ ways to choose a big urn. So the number of ways in the first group of patterns is $$10\times\left(2\times{9\choose3}+2\times{9\choose4}\right)=$$ $$=10\times(168+252)=4200.$$ Now, let us consider the group B. We choose two big urns ( $10\choose2$ ways to do that). Then there a
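A brute-force count (not in the answer) with a small dynamic program over the running (white, black) totals; it confirms the group-A figure $4200$ and prints the overall total:

```python
# dp[w][b] = number of ways to fill the urns processed so far using w white
# and b black balls, each urn receiving at least one ball.
dp = [[0]*7 for _ in range(7)]
dp[0][0] = 1
for _ in range(10):
    new = [[0]*7 for _ in range(7)]
    for w in range(7):
        for b in range(7):
            if dp[w][b]:
                for dw in range(7 - w):
                    for db in range(7 - b):
                        if dw or db:  # at least one ball in this urn
                            new[w + dw][b + db] += dp[w][b]
    dp = new
print(dp[6][6])  # 26250 = 4200 (group A) + 22050 (group B)
```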
|
|combinatorics|
| 0
|
the expectation of linear combination of chi-squared random variables with 1 degree of freedom
|
I'm calculating the problem described in the title, and found it a little bit hard; here is the problem: Suppose $X_i\sim\mathcal{N}(0,1)$ are standard normal random variables; now we need to calculate the expectation of a linear combination of the squares of standard normal variables, that is $$\mathbb{E}\left[\sum_{i=0}^{L}c_i X^2_i\right].$$ There is a paper [1] giving a more general circumstance but the result is too complicated to calculate expectation. Another paper [2] gives a subtle theorem to derive a very simple result but it is conditional on a complicated condition. Have you any idea how to solve it? [1]: Moschopoulos, P. G.; Canada, W. B. , The distribution function of a linear combination of chi-squares , Comput. Math. Appl. 10, 383-386 (1984). ZBL0576.62022 . [2]: Fleiss, J. L. , On the distribution of a linear combination of independent chi squares , J. Am. Stat. Assoc. 66, 142-144 (1971). ZBL0218.62014 .
|
$$\mathbb{E}\left[\sum_{i=0}^{L}c_i X^2_i\right] = \sum_{i=0}^{L}c_i\mathbb{E}\left[ X^2_i\right] = \sum_{i=0}^{L}c_i,$$ where the first equality comes from linearity of expectation and the second from the fact that your random variables are zero mean and variance 1.
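A two-line Monte Carlo illustration (not in the answer):

```python
import numpy as np

# E[sum c_i X_i^2] should equal sum c_i for X_i ~ N(0,1).
rng = np.random.default_rng(0)
c = np.array([0.5, 2.0, -1.0, 3.0])
X = rng.standard_normal((1_000_000, c.size))
print((X**2 @ c).mean(), c.sum())  # both ~ 4.5
```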
|
|chi-squared|
| 1
|
I cannot understand what the author wants to say. ("Analysis on Manifolds" by James R. Munkres)
|
I am reading "Analysis on Manifolds" by James R. Munkres. Lemma 23.3. Let $M$ be a manifold in $\mathbb{R}^n$ , and let $\alpha:U\to V$ be a coordinate patch on $M$ . If $U_0$ is a subset of $U$ that is open in $U$ , then the restriction of $\alpha$ to $U_0$ is also a coordinate patch on $M$ . Note that this result would not hold if we had not required $\alpha^{-1}$ to be continuous. The map $\alpha$ of Example 3 satisfies all the other conditions for a coordinate patch, but the restricted map $\alpha|U_0$ is not a coordinate patch on $M$ , because its image is not open in $M$ . The author says " if we had not required $\alpha^{-1}$ to be continuous ", then Lemma 23.3 " would not hold ". But Lemma 23.3 requires that $\alpha^{-1}:V\to U$ is continuous because Lemma 23.3 requires $\alpha$ is a coordinate patch. I cannot understand what the author wants to say.
|
Munkres's remark refers to the concept of coordinate patch introduced in the definition at the beginning of §23. A coordinate patch $\alpha$ on $M$ is defined as a continuous bijection $\alpha : U \to V$ between an open $U \subset \mathbb R^k$ and an open $V \subset M$ such that $\phantom{xx} (1)\phantom{x}$ $\alpha$ is of class $C^r$ . $\phantom{xx} (2)\phantom{x}$ $\alpha^{-1} : V \to U$ is continuous. $\phantom{xx} (3)\phantom{x}$ $D\alpha(\mathbf x)$ has rank $k$ for each $\mathbf x \in U$ . Condition $(2)$ means that $\alpha$ (which is a bijection!) is an open map . One could alternatively require that $\alpha$ is a homeomorphism satisfying $(1)$ and $(3)$ . In the definition on p.201 he generalizes this to introduce manifolds with boundary. Concerning Lemma 23.3 Munkres writes Note that this result would not hold if we had not required $\alpha^{-1}$ to be continuous. The map $\alpha$ of Example 3 satisfies all the other conditions for a coordinate patch, but the restricted map $\alp
|
|multivariable-calculus|manifolds|
| 1
|
Baby Rudin 9.9 Matrices
|
The inserted picture shows the statement about linear transformation of a basis. The statement says "Then, every A $\in L(X,Y)$ determines a set of numbers $a_{ij}$ such that $Ax_{j}=\sum_{i=1}^{m}a_{ij}y_{i}$ , $\left(1\leq j\leq n\right)$ " Note that $L\left(X,Y\right)$ is the set of all linear transformations of the vector space X into the vector space Y. From the definition of $A$ , $Ax=Y$ . However, it is not clear if there exists $A$ such that $Ax_{j}=\sum_{i=1}^{m}a_{ij}y_{i} , \left(1\leq j\leq n\right)$ . Satisfying this equality condition means the following. $$ Ax_{j}=\begin{pmatrix} a_{11} & a_{12} &...& a_{1n}\\ ...&...&...&...\\ a_{m1} &a_{m2} & ...& a_{mn} \end{pmatrix} \begin{pmatrix} x_{1j}\\ x_{2j}\\ ...\\ x_{nj} \end{pmatrix}= \begin{pmatrix}a_{11}x_{1j}+...+a_{1n}x_{nj}\\ a_{21}x_{1j}+...+a_{2n}x_{nj}\\ a_{m1}x_{1j}+...+a_{mn}x_{nj} \end{pmatrix} $$ Then, $$ \sum_{i=1}^{m}a_{ij}y_{i}=\sum_{i=1}^{m}a_{ij}\begin{pmatrix}y_{1i}\\ y_{2i}\\ ...\\ y_{mi} \end{pmatrix}= \b
|
I think you might be overcomplicating it. You don't need to write each $x_j$ as $\begin{pmatrix}x_{1j}\\...\\x_{nj}\end{pmatrix}$ . Instead, what Rudin probably means is \begin{equation*} x_j=\begin{pmatrix}0_1\\...\\1_j\\...\\0_n\end{pmatrix} \end{equation*} where every coordinate is $0$ except for the $j$ th coordinate which is $1$ . I have used subindices to indicate which coordinate the values belong to. Similarly, \begin{equation*} y_i=\begin{pmatrix}0_1\\...\\1_i\\...\\0_m\end{pmatrix} \end{equation*} This way, verifying the equality is pretty straightforward. The computation is what you've already done, except that you change the basis, that is, \begin{equation} \begin{pmatrix} a_{11} & ... & a_{1n}\newline ... & ... & ...\newline a_{m1} & ... & a_{mn} \end{pmatrix} \begin{pmatrix} 0_1 \newline ... \newline 1_j \newline ... \newline 0_n \newline \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^m a_{i1}y_i\newline ...\newline \sum_{i=1}^m a_{in}y_i \end{pmatrix} = \begin{pmatrix} \sum_
|
|linear-algebra|linear-transformations|
| 0
|
Question regarding identification of Onto and Into Functions
|
Was reading up my textbook about the identification of onto and into functions, and one of the methods given is as follows: Solve $f(x) = y$ for $x$ , say $x = g(y)$ . Now if $g(y)$ is defined for each $y \in$ codomain and $g(y) \in$ domain of $f$ for all $y \in$ codomain, then $f(x)$ is onto. I quite did not understand how this works, or what the function $g(y)$ even is; is it the inverse of the function $f(x)$ ? Could someone explain to me how this method works.
|
That is indeed quite a confusing way to put it. The idea is that for any $y \in Y$ , you can find an $x$ , written as $g(y) = x$ , such that $f(x) = y$ . Below I have written a few things out, also for injective maps (because the idea is similar) and I hope this makes it more clear. Suppose $f : X \to Y$ is a function. Then $f$ is surjective iff for each $y \in Y$ , there exists an $x \in X$ such that $f(x) = y$ . " $f$ is able to reach every point $y$ in the codomain $Y$ " $f$ is injective iff for all $x_1, x_2 \in X$ we have $f(x_1) = f(x_2)$ implies that $x_1 = x_2$ . " $f$ maps distinct elements in the domain to distinct elements in the codomain" Now suppose there exists $g : Y \to X$ such that $f(g(y)) = y$ for all $y \in Y$ . This implies that $f$ is surjective. This easily follows from the definition. The reverse is also true (actually, only if you assume the axiom of choice, but if you have never heard of this, you should not mind it. see also: Why the fact that surjective map
|
|functions|
| 1
|
Find minimum of $a^2+\cdots+e^2$ under $2$ quadratic constraints
|
Let $a$ , $b$ , $c$ , $d$ , $e$ be real numbers such that $$ab+bc+cd+de+ea=20,\\ac+bd+ce+da+eb=22.$$ Find, as the root of a polynomial, the minimum value of $a^2+b^2+c^2+d^2+e^2$ . The numeric value of this minimum seems to be about $23.2359$ . I also tried Lagrange Multipliers but it didn't work. Perhaps we could use some substitutions to eliminate the condition. Update . I have proven that the problem is equivalent to (but not having the same answer as) the problem of minimizing $$\frac{(a-b)^2+(b-c)^2+(c-d)^2+(d-e)^2+(e-a)^2}{ac+bd+ce+da+eb-ab-bc-cd-de-ea}.$$ Update . River Li's finding of the answer " $21+\sqrt5$ " enabled me to produce the following natural solution that explores the essence of this problem (at least I think it does). Here is the full process of my thinking. We are to prove that $\sum\limits_{\rm cyc}a^2\ge21+\sqrt5$ , or (by the first condition) $$(a-b)^2+(b-c)^2+(c-d)^2+(d-e)^2+(e-a)^2\ge2+2\sqrt5.\tag1$$ This, perceptually, means that the variables $a\dots$ , $e$ must n
|
Remark. Use the trick in my answer . The minimum is $21 + \sqrt{5} \approx 23.23606798$ . Proof. We have \begin{align*} &a^2 + b^2 + c^2 + d^2 + e^2 - (21 + \sqrt{5})\\[6pt] ={}& \frac{1 - \sqrt{5}}{2}(ab + bc + cd + de + ea - 20) + \frac{1 + \sqrt{5}}{2}(ac + bd + ce + da + eb -22)\\[6pt] &\qquad + \frac{3 - \sqrt{5}}{32}(b\sqrt{5} + c\sqrt{5} -2e\sqrt{5} - 2a + 3b + 3c - 2d - 2e)^2\\[6pt] &\qquad + \frac{5 + \sqrt{5}}{32}(b\sqrt{5} - c\sqrt{5} + 2a - b + c - 2d)^2\\[6pt] \ge{}& 0 \end{align*} with equality if \begin{align} &a = d = \frac15\,\sqrt {110+4\,\sqrt {135+80\,\sqrt {5}}+4\,\sqrt {5}},\\ &b = c = \frac15\,\sqrt {100-2\,\sqrt {-510+310\,\sqrt {5}}+4\,\sqrt {5}},\\ &e = -\frac{ab + bc + cd - 20}{a+d}. \end{align}
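A multistart numeric check (a sketch; scipy's default SLSQP from random starting points):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a^2+...+e^2 subject to the two cyclic constraints; target 21 + sqrt(5).
cons = [
    {"type": "eq", "fun": lambda v: v[0]*v[1] + v[1]*v[2] + v[2]*v[3] + v[3]*v[4] + v[4]*v[0] - 20},
    {"type": "eq", "fun": lambda v: v[0]*v[2] + v[1]*v[3] + v[2]*v[4] + v[3]*v[0] + v[4]*v[1] - 22},
]
rng = np.random.default_rng(0)
results = [minimize(lambda v: v @ v, rng.normal(scale=3, size=5), constraints=cons)
           for _ in range(50)]
best = min((res for res in results if res.success), key=lambda res: res.fun)
print(best.fun, 21 + np.sqrt(5))  # both ~ 23.23606798
```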
|
|inequality|optimization|
| 1
|
Is the Essential Spectrum the same as the Continuous Spectrum and Residual Spectrum
|
Given a linear operator $A:D(A)\rightarrow X$ (where $D(A)$ is a dense subset of $X$ and $X$ is a Banach space or Hilbert space), we define the spectrum to be $$ \sigma(A)=\{\lambda\in\mathbb{C}: A-\lambda \text{ is not invertible}\} $$ We note that $\sigma(A)$ will be a closed subset of $\mathbb{C}$ . Now there are a few ways that one may break up the spectrum. Firstly, we can decompose it into the discrete spectrum and the essential spectrum, that is, $\sigma(A)=\sigma_d(A)\cup \sigma_{ess}(A)$ . Here the discrete spectrum $\sigma_d(A)$ consists of the isolated eigenvalues $\lambda$ of $A$ with finite algebraic multiplicity. Then we define the essential spectrum $\sigma_{ess}(A)$ to be the complement of $\sigma_d(A)$ inside of $\sigma(A)$ . It is clear that this is a partition of the spectrum. Now there is another way in which we may break up the spectrum. We can decompose it as $\sigma(A)=\sigma_{eigenvalues}(A)\cup \sigma_{cont}(A)\cup\sigma_{res}(A)$ . In this deco
|
In general, these spectra do not coincide. In fact, assume for simplicity that $A: H \to H$ is a bounded and self-adjoint operator, where $H$ is a Hilbert space. Then on one hand, $\sigma(A) = \sigma_p(A) \cup \sigma_{\textit{cont}} (A)$ , as the residual spectrum is empty. On the other hand, we indeed have $^{(*)}$ the decomposition $\sigma(A) = \sigma_{\textit{disc}}(A) \cup \sigma_{\textit{ess}} (A)$ . For both decompositions, the union is disjoint. The inclusion $\sigma_{\textit{disc}}(A) \subset \sigma_p (A)$ (or equivalently $\sigma_{\textit{cont}} (A) \subset \sigma_{\textit{ess}} (A)$ ) always holds, since $\sigma_p(A)$ consists of all eigenvalues, whereas $\sigma_{\textit{disc}}(A)$ consists of isolated eigenvalues of finite multiplicity. However, the above inclusions are strict, a useful example being that of finite-rank operators. For instance, consider an orthonormal basis $\{w_n, n \geq 0\}$ of $H$ (w.r.t. the scalar product $\langle \cdot, \cdot \rangle_H$ ), and let $A$
|
|functional-analysis|spectral-theory|
| 0
|