Dataset schema: title (string), question_body (string), answer_body (string), tags (string), accepted (int64)
Find the maximum number of possible real roots of the equation $ax^4+bx^3+x^2+x+1=0$, where $a \ne 0$
The question goes like this: "Let $a$ and $b$ be real numbers such that $a \ne 0$. Then the maximum number of possible real roots of the equation $ax^4+bx^3+x^2+x+1=0$ is equal to ..." My attempt: First I differentiated with respect to $x$, but because of the parameters $a$ and $b$ I couldn't draw any direct conclusion. Also, by writing $-ax^4-bx^3=x^2+x+1$ I could draw the graph of the RHS but not of the LHS. Please help with the shortest way to solve this problem.
Let $\alpha,\beta,\gamma,\delta$ be the roots of $ax^4+bx^3+x^2+x+1=0$ . Then $\displaystyle \frac{1}{\alpha},\frac{1}{\beta},\frac{1}{\gamma},\frac{1}{\delta}$ are the roots of the reversed equation $\displaystyle x^4+x^3+x^2+bx+a=0$ , so by Vieta's formulas $\displaystyle\sum \frac{1}{\alpha}=-1$ and $\displaystyle\sum \frac{1}{\alpha\beta}=1$ . Then $\displaystyle \Big(\sum \frac{1}{\alpha}\Big)^2=\sum \frac{1}{\alpha^2}+2\sum \frac{1}{\alpha}\cdot \frac{1}{\beta}$ , so $\displaystyle (-1)^2=\sum \frac{1}{\alpha^2}+2(1)\Longrightarrow \sum \frac{1}{\alpha^2}=-1$ , which is impossible if all four roots are real. So not all roots are real, and since non-real roots come in conjugate pairs, at most $2$ real roots exist.
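A quick numerical sanity check of this conclusion (a sketch I added; the sampling range and tolerance are my own choices, not part of the answer):

```python
# Sample random coefficients a != 0, b and count real roots of
# a x^4 + b x^3 + x^2 + x + 1; the count should never exceed 2.
import numpy as np

rng = np.random.default_rng(0)
worst = 0
for _ in range(5000):
    a, b = rng.uniform(-10, 10, size=2)
    if abs(a) < 1e-6:
        continue
    r = np.roots([a, b, 1.0, 1.0, 1.0])
    worst = max(worst, int(np.sum(np.abs(r.imag) < 1e-7)))
print(worst)  # 2
```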
|calculus|algebra-precalculus|polynomials|
0
Is $\textit{affine space}$ the same as $\textit{quotient space}$?
From the answer to this question, I understand that affine subspace is the same as affine subset; however (despite the somewhat misleading question title), it doesn't say that affine space is the same as quotient space. Also, I found the following definition of affine space on Wikipedia: An affine space is a set $A$ together with a vector space $\overrightarrow{A}$ and a transitive and free action of the additive group of $\overrightarrow{A}$ on the set $A$. This uses some terminology, probably from group theory, that I'm not familiar with. Some other "intuitive" explanations suggest to me that affine space is actually the same as quotient space. Is it true that affine space is a synonym for quotient space?
The concepts of "affine space" and "quotient space" are by no means identical. In the linked question it was observed that the elements of a quotient space $V/V_0$ , where $V$ is a vector space and $V_0 \subset V$ is a linear subspace, are the subsets of $V$ having the form $x + V_0$ with $x \in V$ . Thus $$V/V_0 = \{ x + V_0 \mid x \in V \} .$$ Clearly $x + V_0 = y + V_0$ iff $x - y \in V_0$ . For $x \notin V_0$ the set $x + V_0$ is not a linear subspace of $V$ ; it is a translated copy of $V_0$ . The sets $x + V_0$ are called affine subspaces of $V$ . To understand the concept of an affine subspace you do not need to know the general concept of an affine space. However, each affine subspace $x + V_0$ (which is not a vector space unless $x \in V_0$ , in which case $x + V_0 = V_0$ ) can be regarded as an affine space in the sense of the Wikipedia definition. We simply define an action of $V_0$ on $x + V_0$ by $$(u, x + v) \mapsto x + v + u .$$ You can easily verify the properties required in the Wikipedia definition.
|vector-spaces|affine-varieties|
0
Is $\textit{affine space}$ the same as $\textit{quotient space}$?
From the answer to this question, I understand that affine subspace is the same as affine subset; however (despite the somewhat misleading question title), it doesn't say that affine space is the same as quotient space. Also, I found the following definition of affine space on Wikipedia: An affine space is a set $A$ together with a vector space $\overrightarrow{A}$ and a transitive and free action of the additive group of $\overrightarrow{A}$ on the set $A$. This uses some terminology, probably from group theory, that I'm not familiar with. Some other "intuitive" explanations suggest to me that affine space is actually the same as quotient space. Is it true that affine space is a synonym for quotient space?
The vocabulary "transitive and free action" refers to a group action on a set. You know vectors and points and the relation between three points $A,B,C$ : $$\vec{AB}+\vec{BC}=\vec{AC}$$ Describing an affine space as in the Wikipedia definition you refer to in the OP is an equivalent way of describing a point space where two points $A$ and $B$ define a vector $\vec{AB}$ . Let $M$ be a point and $u$ a vector. $$M+u \text{ refers to the point }M' \text{ s.t. }u=\vec{MM'}$$ In accordance with my comment, this is the image of $M$ under the translation $t_u$ by the vector $u$ . This equivalent definition in terms of the action of the additive group of vectors and the notation " $M+u$ " is very convenient to define an affine subspace of $(A,\vec A,+)$ as $$M+V$$ where $M\in A$ and $V$ is a linear subspace of $\vec A$ . $$\text{$V$ is called the direction of $M+V$}$$ Now, if you consider the equivalence relation $\mathcal R$ on $A$ defined by $$M\mathcal R N\iff N\in M+V$$ the equivalence classes are precisely the affine subspaces with direction $V$ .
|vector-spaces|affine-varieties|
1
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$ Let $u^2=\dfrac{a}{x}$ : $\Rightarrow \sqrt{u^2}-\sqrt{\dfrac{1}{u^2}}=\dfrac{a^2-1}{a} \tag{1}$ $\Rightarrow u-\dfrac{1}{u}=\dfrac{a^2-1}{a} \tag{2}$ $\Rightarrow a(u^2-1)=u(a^2-1) \tag{3}$ $\Rightarrow au^2-a^2u=a-u \tag{4}$ $\Rightarrow au(u-a)=a-u \tag{5}$ $\Rightarrow -au(a-u)=a-u \tag{6}$ $\Rightarrow -au=1 \tag{7}$ $\Rightarrow u=\dfrac{-1}{a} \tag{8}$ Therefore $\dfrac{1}{a^2}=\dfrac{a}{x} \Rightarrow x=a^3$ . However $x=\dfrac{1}{a}$ is given as another answer, but I don't see how it's possible to derive this solution from my calculations.
Now: $$ \sqrt{\frac{a}{x}} - \sqrt{\frac{x}{a}} = \frac{a^2 - 1}{a} $$ multiply by $a$: $$ a\sqrt{\frac{a}{x}} - a\sqrt{\frac{x}{a}} = a^2 - 1 $$ Set $t=a/x$: $$ a\sqrt{t} - a\sqrt{\frac{1}{t}} = a^2 - 1 $$ solve for $t$ by squaring: $$ a^2t-2a^2+a^2/t=a^4-2a^2+1 $$ $$ a^2t^2+a^2-a^4t-t=0 $$ $$ a^2t(t-a^2)-(t-a^2)=0 $$ and finally $$ (t-a^2)(a^2t-1)=0. $$ As the product is $0$, each factor gives a root: $$ t_1=a^2, \quad t_2=1/a^2. $$ Now the resubstitution $t=a/x$ yields the candidates $$ x_1=1/a, \quad x_2= a^3. $$ (Since we squared along the way, each candidate should be checked in the original equation; depending on the sign of $a$, one of the two may be extraneous.)
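Since the derivation squares the equation, it is worth testing both candidates against the original equation; a small sketch (the sample values $a=2$ and $a=-2$ are assumptions of mine):

```python
# For each sample a, test both candidates x = 1/a and x = a^3 in the
# original (unsquared) equation sqrt(a/x) - sqrt(x/a) = (a^2 - 1)/a.
from math import sqrt

for a in (2.0, -2.0):
    rhs = (a * a - 1) / a
    for x in (1 / a, a ** 3):
        lhs = sqrt(a / x) - sqrt(x / a)  # a/x, x/a > 0 when a and x share a sign
        print(f"a={a:+}, x={x:+}: lhs={lhs:+.3f}, rhs={rhs:+.3f}")
```

For $a=2$ only $x=1/a$ satisfies the unsquared equation, while for $a=-2$ only $x=a^3$ does; squaring merges the two cases.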
|algebra-precalculus|
0
Problems with integration $\int_{-\infty}^{\infty}\frac{xe^{a - x}}{(1+e^{a-x})^2}dx$
I have this integral: $\int_{-\infty}^{\infty}\frac{xe^{a - x}}{(1+e^{a-x})^2}dx$ I've tried to evaluate it: $\int_{-\infty}^{\infty}\frac{xe^{a - x}}{(1+e^{a-x})^2}dx = | a-x = t| = -\int_{-\infty}^{\infty}\frac{(a-t)e^{t}}{(1+e^{t})^2}dt$ , then I used integration by parts: Let $u = a-t$ so $du = -dt$ and let $dv = \frac{e^{t}}{(1+e^{t})^2}dt$ so $v = \int_{}^{}\frac{e^{t}}{(1+e^{t})^2}dt = |1+e^t = s| = \int \frac{1}{s^2}ds =-\frac{1}{s} = - \frac{1}{1+e^t}$ , then $-\int_{-\infty}^{\infty}\frac{(a-t)e^{t}}{(1+e^{t})^2}dt = -(-\frac{a-t}{1+e^t}|^\infty_{-\infty} - \int_{-\infty}^{\infty}\frac{dt}{1+e^t})$ and here I have a problem: $\frac{a-t}{1+e^t}|^\infty_{-\infty} = \frac{a-\infty}{1+\exp(\infty)} - \frac{a+\infty}{1+\exp(-\infty)}$ . It seems I was mistaken somewhere, but I don't understand where. I've tried to go another way and use $\exp(a-x) = \exp(a)\cdot\exp(-x)$ , and there is the same problem. QUESTION: where was I wrong? Thanks for any help.
Let $t=a-x$ . Then $\mathrm{d}t=-\mathrm{d}x$ . This will give you a minus sign. But you also need to apply your substitution to the boundaries of the integral. This gives you another minus sign. So your substituted integral becomes $$ I = \int \limits_{-\infty}^\infty (a+t) \frac{e^t}{(1+e^t)^2} \mathrm{d}t = a\underbrace{\int \limits_{-\infty}^\infty \frac{e^t}{(1+e^t)^2} \mathrm{d}t}_{I_1} + \underbrace{\int \limits_{-\infty}^\infty t \frac{e^t}{(1+e^t)^2} \mathrm{d}t}_{I_2} \tag{1} $$ First we notice that $$ f(-t) = \frac{e^{-t}}{(1+e^{-t})^2} = \frac{e^{2t}}{e^{2t}} \frac{e^{-t}}{1+2e^{-t} + e^{-2t}} = \frac{e^t}{e^{2t}+2e^t+ 1} = f(t) \tag{2} $$ So $f$ is an even function, hence the integrand $t\,f(t)$ of $I_2$ is odd and it follows immediately that $I_2 = 0$ . Now let $u = e^t+1$ so $\mathrm{d}u = e^t \mathrm{d}t$ . Then $$ I_1 = \int \limits_1^\infty \frac{1}{e^t}\frac{u-1}{u^2} \mathrm{d}u = \int \limits_1^\infty \frac{1}{u-1}\frac{u-1}{u^2} \mathrm{d}u = \int \limits_1^\infty \frac{1}{u^2} \mathrm{d}u = 1 $$ So $I = a \cdot 1 + 0 = a$ .
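A numerical cross-check of $I=a$ (a sketch; $a=1.7$ is an arbitrary sample, the kernel is rewritten via $e^t/(1+e^t)^2=1/(4\cosh^2(t/2))$ for numerical stability, and the range is truncated where the tail is negligible):

```python
import numpy as np
from scipy.integrate import quad

a = 1.7
# integrand x * e^{a-x} / (1 + e^{a-x})^2 in an overflow-free form
f = lambda x: x / (4.0 * np.cosh((a - x) / 2.0) ** 2)
val, err = quad(f, -100, 100)
print(val)  # ~ 1.7
```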
|integration|definite-integrals|
1
Evaluate $\int{\frac{1}{x^3}}$ using u-sub.
I tried solving $\int{\frac{1}{x^3}}dx$ using u-sub instead of the power rule and I got $-\frac{1}{2}x^{-2}$ instead of $\frac{x^4}{4}$ . It's very possible I've made a very simple mistake or there is something fundamentally wrong with my idea; if someone could show how/if this problem can be done with u-sub it would be greatly appreciated. My work to solve the problem is as follows: $$ \int{\frac{1}{x^3}dx} \\u=x^3 \\x=u^{1/3} \\du=3x^{2}dx \\dx=\frac{1}{3x^2}du \\\int{\frac{1}{u}\times{\frac{1}{3x^2}du}} \\\int{\frac{1}{u}\times{\frac{1}{3u^{2/3}}du}} \\\int{u^{-1}\times{\frac{1}{3}\times{u^{-2/3}}}du} \\\frac{1}{3}\int{u^{-1}\times{{u^{-2/3}}}du} \\\frac{1}{3}\int{u^{-1+(-2/3)}du} \\\frac{1}{3}\int{u^{-5/3}du} \\\frac{1}{3}\times{\frac{u^{\frac{-5}{3}+1}}{\frac{-5}{3}+1}} \\\frac{1}{3}\times{\frac{u^{\frac{-2}{3}}}{\frac{-2}{3}}} \\\frac{1}{3}\times{\frac{3}{-2}}\times{u^{\frac{-2}{3}}} \\-\frac{1}{2}\times{u^{\frac{-2}{3}}} \\-\frac{1}{2}\times{(x^{3})^{\frac{-2}{3}}} \\-\frac{1}{2}\times{x^{-2}} $$
Your answer is fine; maybe there is a typo in your textbook ($\frac{x^4}{4}$ is the antiderivative of $x^3$, not of $\frac{1}{x^3}$). Also, an easier u-sub would be letting $u=\frac{1}{x}$, then $du=-\frac{1}{x^2} dx$, and then notice $\frac{1}{x^3} = \frac{1}{x^2} \cdot \frac{1}{x}$. So the integral becomes $\int (-u)\, du = -\frac{u^2}{2}+C$, and after subbing $u$ back you get $-\frac{1}{2} \cdot \frac{1}{x^2}= -\frac{1}{2}\cdot x^{-2}$.
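A one-line symbolic check (sympy) that differentiating the result recovers the integrand:

```python
import sympy as sp

x = sp.symbols('x')
# derivative of -1/(2 x^2) should be 1/x^3
print(sp.diff(-sp.Rational(1, 2) * x**-2, x))  # x**(-3)
```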
|calculus|indefinite-integrals|substitution|
0
Determining algebraically a point of intersection.
A student I was tutoring posed the question: "I know how to solve $$e^{-x} = \ln x$$ graphically, but how do you solve this algebraically?" I have been fiddling around with it for a while and I feel like I'm missing something. I have tried various methods involving series expansions and de Moivre's theorem, but I feel like I'm on the wrong track. Any help is appreciated.
I think that it could be better to write the equation as $$e^x\,\log(x)=1$$ By inspection, the solution lies in $(1,2)$ . Expand the lhs as $$f=e^x\,\log(x)=e \sum_{n=1}^\infty a_n\,(x-1)^n$$ where the first $a_n$ (which are defined by recursion) are $$\left\{1,\frac{1}{2},\frac{1}{3},0,\frac{3}{40},-\frac{7}{144}, \frac{23}{504},-\frac{29}{720},\frac{629}{17280},-\frac{120287}{ 3628800},\frac{607337}{19958400}\right\}$$ Truncate to some order and perform a power series reversion to obtain $$x=\sum_{n=1}^\infty b_n\,\left(\frac f e\right)^n$$ where the first $b_n$ are $$\left\{1,1,-\frac{1}{2},\frac{1}{6},\frac{5}{24},-\frac{37}{60}, \frac{679}{720},-\frac{1633}{1680},\frac{1921}{4480},\frac{325837}{362880},-\frac{10627109}{3628800},\frac{949279}{190080}\right\}$$ Now, set $f=1$ . The decimal representation is $x=\color{red}{1.309}83$ For fun, using $100$ terms would give $x=\color{red}{1.309799585804150477669233}70$
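The reversion value can be cross-checked by direct root finding; a minimal sketch using the bracket $(1,2)$ from the inspection step above:

```python
import math
from scipy.optimize import brentq

# root of e^x * log(x) - 1 = 0 on (1, 2)
root = brentq(lambda x: math.exp(x) * math.log(x) - 1.0, 1.0, 2.0)
print(root)  # 1.3097995858...
```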
|algebra-precalculus|numerical-methods|
0
List of geometric theorems linked by two squares
I'm trying to create a classification of geometric theorems that relate to two squares, as a kind of organization and classification, and out of curiosity to explore. I have collected some theorems of this type, which I will put in an answer/answers. I hope you can help me expand my list. It's important to note that I'm not looking for theorems about squares in general, because that would become too extensive a list; I'm looking for theorems about a number of squares equal to exactly two. Therefore, theorems such as van Aubel's theorem are not accepted in the answers.
Here are somewhat-natural generalizations of a couple of the two-squares-joined-at-a-vertex results from OP's answer . Bottema's Theorem Let squares $\square A'B'C'D'$ and $\square A''B''C''D''$ be as shown, with $X$ the midpoint of $X'X''$ . If segment $A'A''$ remains fixed, and segment $D'D''$ keeps a constant length and inclination (in Bottema's theorem, $D'=D''$ ), then midpoint $B$ remains fixed. Moreover, the position of $C$ relative to $D$ matches that of $B$ relative to $A$ . That is, $\square ABCD$ is a parallelogram, with $|AB|$ and $|CD|$ determined by the lengths and relative inclinations of $A'A''$ and $C'C''$ . Specifically, segments $AB$ and $CD$ form congruent triangles with segments of length $|AA'|$ and $|DD'|$ (more specifically, segments perpendicular to $A'A''$ and $B'B''$ ). Likewise, segments $BC$ and $AD$ form congruent triangles with segments of length $|BB'|$ and $|CC'|$ . Finsler-Hadwiger Again, we have squares $\square A'B'C'D'$ and $\square A''B''C''D''$ , an
|geometry|euclidean-geometry|big-list|
0
How to pull back the differential form $\omega = \frac{-ydx+xdy}{\sqrt{x^2+y^2}}$ to $S^2$
Consider the stereographic projection chart on $S^2$ which doesn't include the north pole $$(X,Y)=\varphi(x,y,z)=\left(\frac{x}{1-z}, \frac{y}{1-z}\right).$$ I want to pull the 1-form $\omega = \frac{-y\,dx+x\,dy}{\sqrt{x^2+y^2}}$ back from $\mathbb{R}^2$ to $S^2$, but I am not sure about a step in the calculation. Writing $\omega = f\,dX + g\,dY$ in the coordinates $(X,Y)$ of $\mathbb{R}^2$, the definition of the pullback of a form under a smooth map $\varphi$ is $$\varphi^* \omega=(f \circ \varphi)\, d\left(X \circ \varphi\right)+(g \circ \varphi)\, d\left(Y \circ \varphi\right)$$ Then, $$ \begin{aligned} &\varphi^*\omega =\frac{\frac{-y}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(X \circ \varphi\right)+\frac{\frac{x}{1-z}}{\sqrt{\frac{x^2}{(1-z)^2}+\frac{y^2}{(1-z)^2}}} d\left(Y \circ \varphi\right) \\ & =\frac{-y}{\sqrt{x^2+y^2}} d(X \circ \varphi)+\frac{x}{\sqrt{x^2+y^2}} d(Y \circ \varphi) \\ & \end{aligned} $$ How do I compute $d(X\circ \varphi)$ and $d(Y\circ \varphi)$? Intuitively, this feels like some sort of product rule would have to occur in $$d\left(\frac{x}{1-z}\right).$$
Imho the simplest but still a bit tedious approach is to use the inverse stereographic projection : \begin{align} \pmatrix{ x\\y\\z}=\frac{1}{1+X^2+Y^2}\pmatrix{2X\\2Y\\-1+X^2+Y^2} \end{align} where $x,y,z$ are the coordinates in $\mathbb R^3$ and $X,Y$ the coordinates on $S^2\,.$ By ordinary calculus, \begin{align} dx&=2\frac{(1-X^2+Y^2)\,dX-2XY\,dY}{(1+X^2+Y^2)^2}\,,\\[2mm] dy&=2\frac{-2XY\,dX+(1+X^2-Y^2)\,dY}{(1+X^2+Y^2)^2}\,. \end{align} This should give \begin{align} -y\,dx+x\,dy&=4\frac{-Y(1-X^2+Y^2)\,dX+2XY^2\,dY-2X^2Y\,dX+ X(1+X^2-Y^2)\,dY}{(1+X^2+Y^2)^3}\\[2mm] &=4\frac{-Y\,dX-X^2Y\,dX-Y^3\,dX+XY^2\,dY+ X\,dY+X^3\,dY}{(1+X^2+Y^2)^3}\,,\\[2mm] &=4\frac{-Y(1+X^2+Y^2)\,dX+X(1+X^2+Y^2 )\,dY}{(1+X^2+Y^2)^3}\,,\\[2mm] &=4\frac{-Y\,dX+X\,dY}{(1+X^2+Y^2)^2}\,. \end{align} Using \begin{align} x^2+y^2=4\frac{X^2+Y^2}{(1+X^2+Y^2)^2} \end{align} we obtain \begin{align}\boxed{\phantom{\Bigg|} \frac{-y\,dx+x\,dy}{\sqrt{x^2+y^2}}=\frac2{1+X^2+Y^2}\frac{-Y\,dX+X\,dY}{\sqrt{X^2+Y^2}}\,.\quad} \end{align}
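The whole computation can also be verified symbolically; a minimal sympy sketch comparing the $dX$ and $dY$ coefficients of both sides of the boxed identity (before dividing by $\sqrt{x^2+y^2}$):

```python
import sympy as sp

X, Y = sp.symbols('X Y', real=True)
r2 = 1 + X**2 + Y**2
x, y = 2*X/r2, 2*Y/r2          # inverse stereographic projection (z not needed here)

# coefficients of dX and dY in -y dx + x dy
cX = -y*sp.diff(x, X) + x*sp.diff(y, X)
cY = -y*sp.diff(x, Y) + x*sp.diff(y, Y)

print(sp.simplify(cX + 4*Y/r2**2))                        # 0
print(sp.simplify(cY - 4*X/r2**2))                        # 0
print(sp.simplify(x**2 + y**2 - 4*(X**2 + Y**2)/r2**2))   # 0
```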
|calculus|geometry|analysis|manifolds|differential-topology|
0
If $u^2 \ge -\dfrac{8}{3}$, then $u \ge -\sqrt{\dfrac{8}{3}}$.
If $u^2 \ge -\dfrac{8}{3}$ , then $u \ge -\sqrt{\dfrac{8}{3}}$ . Is this the correct convention? I was confused because initially I thought the negative sign would go inside the square root, but then that would lead to imaginary numbers. Thanks.
NOTE that $\forall a \in \mathbb{R},\ a^2 \geq 0$, so your first inequality is always true: $u^2 \geq 0 > -\frac{8}{3} \implies u \in \mathbb{R}$. If you're still not convinced, we can move everything to one side and solve the quadratic inequality $u^2+ \frac{8}{3} \geq 0$, which has a negative discriminant $D=b^2-4 \cdot a \cdot c = 0^2-4 \cdot 1 \cdot \frac{8}{3}=- \frac{32}{3}$, so the trinomial is always positive.
|algebra-precalculus|
0
Form of Hypergeometric Differential Equation - possible mistake?
I'm performing a close critical study of David Nelson's Penguin Dictionary of Mathematics (4th ed., 2008). Under the entry hypergeometric differential equation , it suggests the form: $$x (1 - x) \dfrac {\mathrm d^2 \phi} {\mathrm d x^2} + [c - (a + b - 1) x] \dfrac {\mathrm d \phi} {\mathrm d x} - a b \phi = 0$$ However, everywhere else I look, I see it defined as: $$x (1 - x) \dfrac {\mathrm d^2 \phi} {\mathrm d x^2} + [c - (a + b + 1) x] \dfrac {\mathrm d \phi} {\mathrm d x} - a b \phi = 0$$ I suspect, but would like to be certain, that Nelson's presentation is in fact incorrect; alternatively it may be a variant format which is equally acceptable. Before I report on this as an actual error, can it be confirmed that it is in fact wrong?
Any homogeneous linear equation of second order with singular points $(0,1,\infty)$ $$x(1-x)f''(x) + (a + b x) f'(x) +c f(x)=0$$ is a hypergeometric differential equation, the regular solution at $(0,1)$ being the hypergeometric series $$\, _2F_1\left(-\frac{1}{2} \sqrt{b^2+2 b+4 c+1}-\frac{b}{2}-\frac{1}{2},\frac{1}{2} \sqrt{b^2+2 b+4 c+1}-\frac{b}{2}-\frac{1}{2};a;x\right).$$ Only the second standard form $$f'(x) (c-x (a+b+1))-a b f(x)+(1-x) x f''(x)=0$$ yields the simple form of the parameters $$\, _2F_1\left(a,b;c;x\right)$$ that generalizes the geometric series: Series[Hypergeometric2F1[a, b, c, x], {x, 0, 3}] $$\sum_n \frac{(a)_n (b)_n}{(c)_n\, n!} \ x^n = 1+\frac{a b x}{c}+\frac{a (a+1) b (b+1) x^2}{2 c (c+1)}+\frac{a (a+1) (a+2) b (b+1) (b+2) x^3}{6 c (c+1) (c+2)}+O\left(x^4\right)$$
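The question can also be settled numerically; a small mpmath sketch (the parameter values and the evaluation point are my own choices) plugs $f={}_2F_1(a,b;c;x)$ into both candidate forms:

```python
import mpmath as mp

a, b, c, x = 1.3, 0.7, 2.1, 0.4
f = lambda t: mp.hyp2f1(a, b, c, t)
f1 = mp.diff(f, x)       # f'(x)
f2 = mp.diff(f, x, 2)    # f''(x)

std    = x*(1 - x)*f2 + (c - (a + b + 1)*x)*f1 - a*b*f(x)   # standard form
nelson = x*(1 - x)*f2 + (c - (a + b - 1)*x)*f1 - a*b*f(x)   # Nelson's form
print(std, nelson)  # std ~ 0; nelson differs by 2*x*f'(x) != 0
```

The standard form's residual vanishes to numerical precision, while Nelson's form leaves the nonzero remainder $2x\,f'(x)$, supporting the conclusion that the dictionary's sign is an error.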
|ordinary-differential-equations|hypergeometric-function|
0
Complete Riemannian Manifold
I was trying to understand this article about the existence of complete Riemannian metrics by Nomizu and Ozeki, see The Existence of Complete Riemannian Metrics, Proceedings of the American Mathematical Society, Vol. 12, No. 6, pp. 889-891, 1961. I understand the general idea of building this function that increases indefinitely as you approach the missing points. So in the article, they define this function $r:M \rightarrow \mathbb{R}$ , where $r(x)$ is the supremum of the positive numbers $\rho$ such that the ball $\mathbb{B}(x,\rho) = \{y\in M: d(x,y)<\rho\}$ is relatively compact, and the distance function is induced by the original Riemannian metric on $M$ : $$ d(x,y) = \inf \{\int\sqrt{\langle\frac{dc}{dt},\frac{dc}{dt}\rangle}dt: c \text{ piecewise differentiable curve joining } x \text{ and } y \}. $$ We may assume $r$ is always finite; otherwise, the manifold is already complete, as I understand it. From here, we choose a differentiable function satisfying $w(x)>\frac{1}{r(x)}$ for each $x\in M$ , via partitions of unity.
The key observation is that subsets of relatively compact sets are relatively compact: if $X \subseteq Y$ and $Y$ is relatively compact, then $\overline{X} \subseteq \overline{Y}$ is a closed subset of a compact set, hence is compact. So, if $B(x, d(x, y)+a)$ is relatively compact, then $B(y, a)$ is as well, since $B(y,a) \subseteq B(x, d(x,y)+a)$ by the triangle inequality. Suppose for a contradiction that $r(x) > r(y) + d(x, y)$ . Then for some $\epsilon> 0$ we have $a = r(x)- \epsilon > r(y) + d(x, y)$ , hence $a - d(x, y) > r(y)$ . By definition of $r$ we see that $B(x, a)$ is relatively compact whereas $B(y, a-d(x, y))$ is not, a contradiction. The other direction follows by symmetry, so $|r(x)-r(y)| \le d(x,y)$ .
|metric-spaces|riemannian-geometry|
0
Name for elements from domain used in mapping
As I understand, in a function, the subset of the codomain actually mapped to is called the range. What about the domain? Is there a name for the subset of the domain actually used in a mapping?
I'm not sure if I fully understand your question, but here's an answer to what I think you're confused about. Let's use examples so that it's easier to see it "in action". Let $f(x)=\sqrt{x}$ ; the domain is $[0,+ \infty)$ , but notice that $ [0,+ \infty) \subset \mathbb{R}=(- \infty , + \infty)$ . The symbol " $\subset$ " means "subset", and it's kind of like $\leq$ but for sets! We say $A$ is a subset of $B$ if all elements of $A$ are contained in $B$. So in our example with $f(x)=\sqrt x$ the codomain is $\mathbb{R}$. The range of a function $f$ is all the $y$ values it can take. Let $g(x)=|x|$ ; the domain is $\mathbb{R}$ and the range is $[0,+ \infty)$ . You can see it geometrically in the graph of $g$ : as $x$ goes from $- \infty \to + \infty $ , $y$ only takes non-negative values. If you have any more questions or you didn't understand something, feel free to let me know and I'll be glad to help you out!
|functions|terminology|
0
Notation clarification on Allen Hatcher, section 3.1
I am reading Hatcher (alg. top.). In chapter 3, section 3.1, the universal coefficient theorem, it is argued that: In the original chain complex the homology groups are $\mathbb{Z}$ 's in dimensions 0 and 3, together with a $\mathbb{Z}_2$ in dimension 1. The homology groups of the dual cochain complex, which are called cohomology groups to emphasize the dualization, are again $\mathbb{Z}$ ’s in dimensions 0 and 3, but the $\mathbb{Z}_2$ in the 1-dimensional homology of the original complex has shifted up a dimension to become a $\mathbb{Z}_2$ in 2-dimensional cohomology. I don't quite understand where this $\mathbb{Z}_2$ comes from in the description. The confusion may be due to the fact that I don't understand what relation the "vertical equals" sign denotes. If it means "isomorphic to" or "homology of", then I don't understand how $\mathbb{Z}_2$ enters the description.
Have a look at the top sequence. The kernel of the last $0$ map $C_1\to C_0$ is of course the whole of $\mathbb{Z}$ , while the image of the $x\mapsto 2x$ map is $2\mathbb{Z}$ . The corresponding homology is the quotient of the kernel by the image, and gives us $\mathbb{Z}/2\mathbb{Z}$ , also commonly referred to as $\mathbb{Z}_2$ . The vertical equality signs are indeed isomorphisms, but not of homology: they are isomorphisms of the (co)chain groups, the entries of those sequences. For the top sequence these can be literal equalities, but for the bottom one they are isomorphisms obtained after applying $\operatorname{Hom}$ . The $x\mapsto 2x$ map also gets transformed through $\operatorname{Hom}$ , but gives the same map (with reversed arrow) after applying the isomorphism, while zero morphisms are always mapped to zero morphisms through $\operatorname{Hom}$ .
|algebraic-topology|notation|
0
Determining whether a housing allocation is in the Core
I have recently been thinking about the housing allocation problem where we have a set of players and a set of houses where players have strict preferences over the houses. I am aware of the Top Trading Cycle algorithm which can be used to assign houses to players in a Pareto Optimal, Strategy Proof, and Individually Rational way. Furthermore, this resulting allocation is group rational, i.e. in the (weak) core, meaning that no subset of players can deviate and switch their assigned houses among themselves so that all players in this subset strictly improve. This also implies that the core is always non-empty. However, what I was wondering is whether there exists an approach that allows us to efficiently determine whether a given allocation is in the (weak) core. I.e., given an allocation $f$, can we determine whether there exists a subset of players that can switch so that all of them strictly improve? I could not find much existing content on this question. Any input would be highly appreciated.
There are two ways to interpret the question. First, is an allocation in the core for the initial assignment of houses? Second, is the allocation in the core if every agent owns their assigned house? For the second question, just run the top trading cycle algorithm starting from the allocation (that seems to be your answer, too). If nothing changes, you have a core allocation. For the first question, just run the top trading cycle algorithm from the initial assignment of houses and compare. With strict preferences, there is always a unique weak core allocation; this is part of Theorem 2 of Roth, Alvin E., and Andrew Postlewaite. "Weak versus strong domination in a market with indivisible goods." Journal of Mathematical Economics 4.2 (1977): 131-137.
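For concreteness, here is a minimal sketch of the top trading cycle algorithm mentioned above (an illustration of mine, not code from the paper; `prefs` and `owner` are hypothetical inputs):

```python
def top_trading_cycles(prefs, owner):
    """prefs[i]: agent i's houses in strict preference order; owner[h]: agent owning h."""
    assignment = {}
    remaining = set(owner)                       # houses still on the market
    while remaining:
        # each active agent points at their best remaining house
        top = {owner[h]: next(x for x in prefs[owner[h]] if x in remaining)
               for h in remaining}
        # follow "owner of my top house" pointers until an agent repeats: a cycle
        i, seen = next(iter(top)), []
        while i not in seen:
            seen.append(i)
            i = owner[top[i]]
        for j in seen[seen.index(i):]:           # trade along the cycle
            assignment[j] = top[j]
            remaining.discard(top[j])
    return assignment

# toy instance: agents 1-3, houses a-c
prefs = {1: ['b', 'a', 'c'], 2: ['a', 'c', 'b'], 3: ['a', 'b', 'c']}
owner = {'a': 1, 'b': 2, 'c': 3}
print(top_trading_cycles(prefs, owner))  # assigns 1->'b', 2->'a', 3->'c'
```

Running this with each agent treated as owning their currently assigned house, and checking whether the output differs from that assignment, implements the core test described in the answer.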
|game-theory|economics|matching-theory|
0
Determinant of $n \times n$ matrix of a sort of skew symmetric matrix plus some diagonal
Given, a matrix: $$\begin{pmatrix} a & b & \ldots & b & b \\ -b & a & \ldots & b & b \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -b & \ldots & a & b \\ -b & -b & \ldots & -b & a \end{pmatrix}.$$ I need to find a determinant. So initially what I did, was I added the first column to other ones: $$\begin{pmatrix} a & b+a & \ldots & b+a & b+a \\ -b & a-b & \ldots & 0 & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -2b & \ldots & a-b & 0 \\ -b & -2b & \ldots & -2b & a-b \end{pmatrix},$$ then added the last row to the first one $$\begin{pmatrix} a-b & a-b & \ldots & a-b & 2a \\ -b & a-b & \ldots & 0 & 0 \\ \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -2b & \ldots & a-b & 0 \\ -b & -2b & \ldots & -2b & a-b\end{pmatrix},$$ then multiplied the first column by 2 and subtracted the second one $$\frac{1}{2}\cdot\begin{pmatrix} a-b & a-b & \ldots & a-b & 2a \\ -a-b & a-b & \ldots & 0 & 0 \\ \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 & -2b & \ldots & a-b & 0 \\ 0
Let $U$ be the upper triangular matrix of ones. The matrix in question can be expressed as $$ A_n=aI_n+b(U-U^T)=UB_nU^T $$ where $$ B_n=aU^{-1}(U^{-1})^T+b\left[(U^{-1})^T-U^{-1}\right]. $$ Clearly, we have $\det A_n=\det B_n$ because $\det U=1$ . It is straightforward to verify that $$ U^{-1}=\pmatrix{1&-1\\ &\ddots&\ddots\\ &&\ddots&\ddots\\ &&&\ddots&-1\\ &&&&1\\},\quad B_n=\pmatrix{2a&b-a\\ -b-a&\ddots&\ddots\\ &\ddots&\ddots&\ddots\\ &&\ddots&2a&b-a\\ &&&-b-a&a}. $$ Being a tridiagonal matrix, the determinant of $B_n$ satisfies the recurrence relation $\det B_n=2a\det B_{n-1}+(b^2-a^2)\det B_{n-2}$ with initial conditions $\det B_0=1$ and $\det B_1=a$ . (If you don’t see this, try Laplace expansion along the first row of $B_n$ .) By mathematical induction, it can be proved that $\det B_n=\frac12[(a+b)^n+(a-b)^n]$ .
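A quick numerical spot check of the closed form (the sizes and random coefficient ranges are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (2, 3, 5, 8):
    a, b = rng.uniform(-3, 3, size=2)
    # A = a I + b (U - U^T) with U the upper triangular matrix of ones
    A = a*np.eye(n) + b*(np.triu(np.ones((n, n)), 1) - np.tril(np.ones((n, n)), -1))
    print(n, np.linalg.det(A), ((a + b)**n + (a - b)**n) / 2)  # pairs agree
```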
|linear-algebra|abstract-algebra|matrices|determinant|problem-solving|
1
The selection of the direction of the auxiliary curve when applying Green's theorem to a line integral with a singular point seems to change the answer
Problem Compute $$ \oint_L\frac{x\,dy-y\,dx}{4x^2+y^2} $$ where $L$ is a circle centered at $(1, 0)$ with a radius of $R > 1$ , and the direction of $L$ is counterclockwise. Solution To bypass the singular point at $(0, 0)$ , which is inside the region $L$ surrounds, add an auxiliary curve: $$ C:4x^2+y^2 = \delta^2 $$ The integral can then be computed by applying Green's theorem. However, different choices of the direction of $C$ produce different answers. If we choose counterclockwise: $$ \oint_L\frac{x\,dy-y\,dx}{4x^2+y^2}=\oint_{L+C}-\oint_{C}=-\oint_{C}=-\pi $$ If we choose clockwise: $$ \oint_L\frac{x\,dy-y\,dx}{4x^2+y^2}=\oint_{L+C^{-1}}-\oint_{C^{-1}}=\oint_{C}=\pi $$ What did I get wrong here?
The "pitfall" here is that Green's theorem only works with positively oriented boundaries/curves, which means that you need the correct orientation for $C$ , which should be the opposite of that of $L$ (see for example this answer for the reasoning behind that: https://math.stackexchange.com/a/141869/1104384 ), because you want and need the union of $L$ and $C$ to be positively oriented. In this case, that means that $C$ must be oriented clockwise, hence you get $+ \pi$ .
|calculus|multivariable-calculus|line-integrals|
0
How to compute $\lim_{x\to 0} \frac{e^{ax}-e^{bx}}{x}$?
I'm trying to compute the following limit: $$L=\lim_{x\to 0} \frac{e^{ax}-e^{bx}}{x} \tag{1}$$ And I have to use some of the following limits for it: $$\lim_{x\to 0}(1+x)^{\frac{1}{x}}=e=\lim_{x\to \infty}\left(1+\frac{1}{x}\right)^{x}$$ I tried some substitutions, especially the first limit, but I only got to: $$L=\lim_{x\to 0} \frac{(x+1)^a-(x+1)^b}{x} \tag{2}$$ in which I tried to substitute $x$ for $x-1$ , but this yielded nothing useful. Can you give me a hint?
$$ L = \lim _{x\to 0}\left(\frac{e^{ax}-e^{bx}}{x}\right) $$ $$ L = \lim _{x\to 0}\left(\frac{e^{ax}\left(1 - e^{(b-a)x}\right)}{x}\right) $$ $$ L = \lim _{x\to 0}\left(e^{ax} \cdot \frac{1 - e^{(b-a)x}}{x}\right) $$ Notice that as $x \to 0, e^{ax}\to 1$ , so we can focus on the second part of the product: $$ L = \lim _{x\to 0}\left(\frac{1 - e^{(b-a)x}}{x}\right) $$ Now, let's rewrite $1 - e^{(b-a)x}$ to look more like the provided limits. Use the substitution $$h = (b-a)x$$ As $x \to 0, h \to 0$ $$ L = \lim _{h\to 0}\left(\frac{1 - e^h}{\frac{h}{b-a}}\right)$$ $$ L = (b-a)\lim _{h\to 0}\left(\frac{1 - e^h}{h}\right) $$ $$ L = -(b-a)\lim _{h\to 0}\left(\frac{ e^h - 1}{h}\right) $$ Let $u=e^h - 1$ . Then $e^h = 1+u$ implying $h=\ln(1+u)$ . As $h \to 0, u \to 0$ since $1+u \to 1$ $$ \lim _{h\to 0}\left(\frac{ e^h - 1}{h}\right) = \lim _{u\to 0}\left(\frac{1+u- 1}{\ln(1+u)}\right)$$ $$ = \lim _{u\to 0}\left(\frac{u}{\ln(1+u)}\right)$$ $$ = \lim _{u\to 0}\left(\frac{1}{\frac{1}{u}\ln(1+u)}\right) = \lim _{u\to 0}\left(\frac{1}{\ln\left((1+u)^{1/u}\right)}\right) = \frac{1}{\ln e} = 1 $$ Therefore $$ L = -(b-a)\cdot 1 = a-b. $$
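A quick numerical check of $L=a-b$ (the sample values $a=3$, $b=1$ are my own choice):

```python
import math

a, b = 3.0, 1.0
for x in (1e-2, 1e-4, 1e-6):
    print((math.exp(a*x) - math.exp(b*x)) / x)  # -> 2.0 = a - b
```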
|limits|exponential-function|limits-without-lhopital|
0
Request: Does notion of natural density $1$ with specified lower bound already exist
Suppose that $A\subseteq\mathbb{N}$ . Then one can look at the function $f(n)=|\{0,...,n\}\setminus A|$ ( $|S|$ denoting the cardinality of $S$ ). I am interested in the case when $f(n)\leq Cn^\alpha$ with $C>0$ and $\alpha<1$ . In this case $A$ obviously has natural density $1$ . My question is: Is there already a name for this concept, and if not, how "should" one call sets with this property? Edit: I am mainly interested in the first part of the question; the second part is only included in case someone thinks they have a good idea. It goes without saying that I will come up with my own terminology if I don't get advice here. I use sets with this property a lot in a work of mine, so I do need some useful name for it, since spelling it out each time seems bad practice. So the point of my question is to avoid the situation where I choose a name for it and then someone tells me that this concept has already been used multiple times and there already exists a standard name for it. As it is ha
I think this is a bad question, because it doesn't really matter if there is a name for sets with this specific property, as the property isn't sufficiently interesting to warrant a name being ascribed to it. I also don't see how the sets you describe are "refined notions of natural density." Looking for names of sets with particular properties seems like a fruitless activity (unless the property is sufficiently "interesting"), as it is very easy to come up with infinitely many distinct properties that sets can have, and ask, "do sets with this property have a name?" each and every time. Most properties of such sets will not have a name. So just describe the property mathematically if you don't know the name. If someone then informs you of, or you stumble across in a paper, the name that exists for that property, then you can amend your work/knowledge accordingly.
|number-theory|
0
For any positive integer $n$, we can arrange the numbers $1, 2, \cdots, n$ so that the mean of any two of them will not appear between these two numbers
Prove that, for any positive integer $n,$ we must be able to find a permutation of the numbers $1, 2, . . . , n$ such that the mean of any two numbers in the permutation will not appear between these two numbers. I hope that someone can help me. Thanks!
I am assuming you're asking: Let $n$ be a positive integer; prove that the mean, call it $m=\frac{a+b}{2}$, of any two numbers $a,b \in \{1,2,...,n\}$ satisfies $m \notin \{1,2,...,n\}$. If that is what you mean, it is clearly wrong: let $n=10$ and choose $a=1, b=9$; the mean of $a,b$ is $\frac{1+9}{2}=5$ and $5 \in \{1,2,3,...,10\}$. If I got it wrong, please edit your post or make a comment with more details and/or explanations and I'll be happy to help you further!
|algebra-precalculus|
0
MLE's for ANOVA Model
Given the ANOVA model $Y_{ij} = \mu_i + \varepsilon_{ij}, \varepsilon_{ij}\sim N(0, \sigma^2)$ , $i = 1, 2, \ldots , I, \space j = 1,2, \ldots, n_i$ , I am trying to find the MLE's $\hat\mu_1, \hat\mu_2, \ldots , \hat\mu_I, \hat\sigma^2$ . I have that the likelihood function is $L = \prod_{i=1}^{I}\prod_{j=1}^{n_i} \frac{1}{\sqrt{2\pi \sigma^2}} e^{-(Y_{ij} - \mu_i)^2 / 2\sigma^2}$ and thus the log-likelihood is $\ell = -n\log 2\pi - n\log \sigma^2 + \sum_{i=1}^{I}e^{-\frac{1}{2\sigma^2}\sum_{j=1}^{n_i}(Y_{ij}-\mu_i)^2}$ , but the equation $\frac{\partial \ell}{\partial \mu_k} = 0$ does not allow me to explicitly calculate $\hat\mu_k$ . EDIT: After a second attempt I have found $\hat\mu_k = \frac{1}{n_k}\sum_{j=1}^{n_k} Y_{kj}$ and $\hat\sigma^2 = \frac{\sum_{i=1}^{I}\sum_{j=1}^{n_i}(Y_{ij} - \mu_i)^2}{\sum_{i=1}^{I} n_i}$ , is this correct? I found that $\ell = -\frac{1}{2}\log (2\pi) \sum_{i=1}^{I}n_i - \frac{1}{2}\log (\sigma^2) \sum_{i=1}^{I}n_i - \frac{1}{2\sigma^2}\sum_{i=1}^{I}\sum_{j=1}^{n_i}(Y_{ij}-\mu_i)^2$ .
Denote $\sigma ^ 2 = \theta $ ; thus \begin{align} L(\mu, \theta) = \left( \frac{1}{(2 \pi \theta ) ^{0.5}} \right)^ {\sum_{i=1}^I n_i} \exp\Big\{ - \sum_{i=1}^I\sum_{j=1}^{n_i} (Y_{ij} - \mu_i ) ^ 2 / (2\theta) \Big\}, \end{align} \begin{align} \ell(\mu, \theta) = -\frac{1}{2}\sum_{i=1}^I n_i \ln \left( 2\pi \theta \right) - \frac{1}{2\theta}\sum_i^I \sum_j^{n_i} (Y_{ij} - \mu_i)^2 . \end{align} Therefore, \begin{align} \frac{\partial }{ \partial \theta }\ell(\mu, \theta) = -\frac{\sum_i^I n_i}{2\theta} + \frac{1}{2\theta ^ 2}\sum_i^I \sum_j^{n_i} (Y_{ij} - \mu_i)^2 = 0, \end{align} \begin{align} \hat \theta_{MLE} = \frac{\sum_i^I \sum_j^{n_i} (Y_{ij} - \hat \mu_i)^2}{\sum_{i=1}^I n_i} \end{align} where $$ \hat \mu_i = \frac{1}{n_i} \sum_{j=1}^{n_i} Y_{ij} , $$ which confirms the estimates in your edit.
|statistics|regression|
0
find square root of $x^2+x^3 $ in formal power series $k[[x,y]]$
I am trying to show that the polynomial $y-x^2-x^3$ is reducible in the formal power series ring $k[[x,y]]$ . I am attempting the question by finding a power series in $k[[x,y]]$ which is a square root of $x^2+x^3$ . In order to find the square root I wrote a general power series in $k[[x,y]]$ , $$a_{00}+a_{10}x+a_{01}y+a_{20}x^2 ...........$$ took its square and equated the coefficients to the coefficients of $x^2+x^3$ . I got the following system of equations $$a_{00}^2=0$$ $$2a_{10} a_{00}=0$$ $$2a_{20} a_{00}+{a_{10}}^2=1 $$ and so on. But this system does not have a solution. I am sure that the root does exist. What am I doing wrong?
By the generalized binomial theorem we have $$ \sqrt{x^2+x^3}=x\, (1+x)^{\frac{1}{2}}=x \sum_{i=0}^\infty \binom{\frac{1}{2}}{i}x^i, $$ where $$ \binom{\frac{1}{2}}{i}=\frac{\frac{1}{2}(\frac{1}{2}-1)(\frac{1}{2}-2) \cdots (\frac{1}{2}-i+1)}{i!} . $$ (There is no absolute value in a formal power series ring; $x(1+x)^{1/2}$ is one of the two square roots, and this works when $\operatorname{char} k \neq 2$.)
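A short sympy check that the truncated series squares back to $x^2+x^3$ (the truncation order $8$ is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
s = sp.series(x * sp.sqrt(1 + x), x, 0, 8).removeO()
# the difference contains only terms of order >= 9, i.e. truncation error
print(sp.expand(s**2 - (x**2 + x**3)))
```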
|abstract-algebra|systems-of-equations|formal-power-series|
0
Existence of an $L^2$ function.
Let $f_n \in L^2([0,1])$ be non-zero for all $n \in \Bbb{N}$ . Prove there exists a function $g \in L^2([0,1])$ such that $$\int_0^1 g(x) f_n(x) dx \neq 0 \quad \text{for all } n.$$ So first I tried contradiction, which would imply $g(x)f_n(x)$ needs to be zero almost everywhere, which would imply $g$ to be the zero function. But then again I thought maybe $g(x)=1$ on the whole unit interval could work? It's certainly in $L^2$ ...
For each $n \in \Bbb{N}$ , put $$U_n:=\{g \in L^2: \int_0^1 gf_n \neq 0\}.$$ Next, define linear functionals $L_n$ on $L^2$ via $$L_n(g):=\int gf_n dx.$$ Then $U_n=L_n^{-1}(\{0\})^c$ . And by continuity of $L_n$ , $U_n$ is open as it is the preimage of an open set. Moreover, given any scalar $t$ , \begin{align*} L_n(g+tf_n)&=L_n(g)+tL_n(f_n)\\ &=L_n(g) + t\vert \vert f_n \vert \vert^2. \end{align*} And since $\vert \vert f_n \vert \vert \neq 0$ , there exists only one value of $t$ for which $$g+tf_n \not\in U_n.$$ Thus $U_n$ is dense. Thus $(U_n)_n$ is a countable family of dense open sets in $L^2([0,1])$ , which is complete; thus by the Baire category theorem, $$\bigcap_n U_n \neq \emptyset.$$ And any $g \in \cap_n U_n$ has the property that $$\int_0^1 gf_n dx \neq 0 \quad \text{for all } n.$$
|real-analysis|integration|measure-theory|
1
Finding matrix to transform one vector to another vector
If I have an arbitrary vector $A = (a,b,c,0)$ how can I find a transformation matrix $M$ such that $M \times A = (0,1,0,0)$ ? We can assume $A$ has a magnitude of $1$ if it helps simplify the derivation process. The trivial case $A = (0,1,0,0)$ would cause $M$ to be the identity matrix. If $A = (0,-1,0,0)$ then $M$ would be a 180 degree rotation matrix about the $x$ axis. I heard of Rodrigues' rotation formula from this question but I'm not sure how it would work in a 4 by 4 matrix.
Since $B=(0,1,0,0)^T$ is very simple and $A=(a,b,c,0)^T$ is not, we can consider $$M_1=\left[\begin{array}{cccc}0&a&0&0\\0&b&0&0\\0&c&0&0\\0&0&0&0\end{array}\right]$$ which satisfies $ M_1B=A$ . This is not yet satisfactory, since we are looking for all $M$ such that $MA=B$ , and $M_1^{-1}$ does not exist. So we instead perturb $M_1$ into $$M_2=\left[\begin{array}{cccc}1&a&0&0\\0&b&0&0\\0&c&1&0\\0&0&0&1\end{array}\right].$$ Assume that $b\neq 0;$ then $\det M_2=b,$ which implies that $M_2$ is invertible. Since $M_2 B=A$ , the matrix $M_3=M_2^{-1}$ is a solution, because $M_3 A=B.$ Finally, all other solutions $M=M_3+M_4$ such that $MA=B$ are those with $M_4 A=0$ . If $R_1,R_2,R_3,R_4$ are the rows of $M_4$ then $M_4A=0$ if and only if for $i=1,2,3,4 $ we have $R_iA=0.$ Plenty of solutions... Finally, if $b=0$ , imitate the method.
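A quick numpy check of this construction (the sample unit vector is my own choice, with $b \neq 0$):

```python
import numpy as np

a, b, c = 0.6, 0.48, 0.64                # a^2 + b^2 + c^2 = 1
A = np.array([a, b, c, 0.0])
M2 = np.array([[1, a, 0, 0],
               [0, b, 0, 0],
               [0, c, 1, 0],
               [0, 0, 0, 1]], dtype=float)
M3 = np.linalg.inv(M2)                   # the answer's M3 = M2^{-1}
print(M3 @ A)                            # [0. 1. 0. 0.]
```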
|matrices|
0
Must a countable disjoint union of closed balls in $\mathbb{R}^n$ with positive radius be disconnected?
A disjoint union of open balls is of course disconnected. Here it is proved that a locally compact, connected, Hausdorff space is not a countable disjoint union of compact subsets, so a countable disjoint union of closed balls in $\mathbb{R}^n$ can't be connected and locally compact (hence cannot be open or closed connected), which rules out the connected sets for $n=1$ . But what if we just ask this disjoint union to be connected? Please forgive me if this question turns out to be trivial. Thank you in advance for any help. Edit. Actually I should have asked about something more general, like Is there a connected subset $\mathbb{R}^n$ that can be written as a countable disjoint union of closed sets in $\mathbb{R}^n$ ? If not, is there a connected subset $\mathbb{R}^n$ that can be written as a countable disjoint union of closed sets in the subspace topology? And the answer to the second question above is still negative for $n=1$ , or for open subsets of $\mathbb{R}^n$ (the link require
Example of countable family of pairwise disjoint closed sets on a plane whose union is connected Here is a relatively simple example. The other section has an older, more complicated example in $ℝ^3$ . Let $D = \{d_n : n=0,1,\dots\}$ be a dense subset of the unit sphere with $d_n \neq d_m$ for $n \neq m$ . For each $n=1,2\dots$ define the line segment $$ A_n = \{ λd_n : 2^{-n} ≤ λ ≤ 1 \}, $$ and $A_0 = \{ λd_0 : 0 ≤λ≤1 \}$ . Let $\text{Sun} = \bigcup_{n=0}^∞ A_n$ . It is a union of closed, pairwise disjoint, connected sets. We show that $\text{Sun}$ is connected. Before the proof, here is how $A_0,A_1 \dots, A_5$ might look. Proof. Let $U,V$ be open disjoint sets with $\text{Sun} = U \cup V$ . Without loss of generality assume that $U$ contains $0\in A_0$ . Since $A_0$ is connected, it must be contained in $U$ . It suffices to show that $A_1, A_2, \dots$ are in $U$ as well. First, note that there is $N$ for which the ball $B = B(0, 2^{-N})\cap \text{Sun}$ is contained in $U$ . Consequently, for each $k>N$ an endpoint of $A_k$ , namely $2^{-k}d_k$ , lies in $B \subseteq U$ , and since $A_k$ is connected, $A_k \subseteq U$ .
|real-analysis|general-topology|analysis|
1
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$
Solve $\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a}$ Let $u^2=\dfrac{a}{x}$ : $\Rightarrow \sqrt{u^2}-\sqrt{\dfrac{1}{u^2}}=\dfrac{a^2-1}{a} \tag{1}$ $\Rightarrow u-\dfrac{1}{u}=\dfrac{a^2-1}{a} \tag{2}$ $\Rightarrow a(u^2-1)=u(a^2-1) \tag{3}$ $\Rightarrow au^2-a^2u=a-u \tag{4}$ $\Rightarrow au(u-a)=a-u \tag{5}$ $\Rightarrow -au(a-u)=a-u \tag{6}$ $\Rightarrow -au=1 \tag{7}$ $\Rightarrow u=\dfrac{-1}{a} \tag{8}$ Therefore $\dfrac{1}{a^2}=\dfrac{a}{x} \Rightarrow x=a^3$ . However $x=\dfrac{1}{a}$ is given as another answer, but I don't see how it's possible to derive this solution from my calculations.
Question Summary (for Easier Reference) Solve: $$\sqrt{\dfrac{a}{x}}-\sqrt{\dfrac{x}{a}}=\dfrac{a^2-1}{a} =a-\dfrac{1}{a} \text{ for }x\text{ in terms of }a \tag{Eq. 1}$$ Solution Steps Start by observing that the roles of $\sqrt{\frac{a}{x}}$ and $a$ are very similar, and same with $-\frac{1}{a}$ and $-\sqrt{\frac{x}{a}}$ . So try this solution and check for consistency: $$\sqrt{\frac{a}{x}}=a\text{ , } \tag{Eq. 2a}$$ $$ -\sqrt{\frac{x}{a}}=-\frac{1}{a} \tag{Eq. 2b}$$ $$\tag{Eqs. 2}$$ Testing Equation 2a, namely $\sqrt{\frac{a}{x}}=a$ , and simplifying it by multiplying the left and the right by $\sqrt{x}$ : $$ \sqrt{a}=a\sqrt{x} \implies \pm\sqrt{x}=\frac{1}{\pm \sqrt{a}} \implies x=\frac{1}{a} \tag{Eqs. 3a} $$ Now test Equation 2b for consistency, namely $-\frac{1}{a}=-\sqrt{\frac{x}{a}}$ , thus: $$-\frac{1}{a}=-\sqrt{\frac{x}{a}} \implies \frac{1}{a}=\sqrt{\frac{x}{a}} \implies \frac{1}{a^2}=\frac{x}{a} \implies x=\frac{1}{a} , \tag{Eqs. 3b}$$ which is consistent with Eqs. 3a, so $x=\frac{1}{a}$ is indeed a solution.
|algebra-precalculus|
0
Joint distribution of two conditional distributions
I am trying to understand how a joint distribution is formed when two regular conditional distributions are involved that are conditional with respect to different random variables. Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and let there be the three random variables $X:(\Omega, \mathcal{A})\rightarrow (\mathcal{X}, \mathcal{F})$ , $Y:(\Omega, \mathcal{A})\rightarrow (\mathcal{Y}, \mathcal{G})$ , $Z:(\Omega, \mathcal{A})\rightarrow (\mathcal{Z}, \mathcal{H})$ . Let us consider the Markov kernels $\mathbb{P}_{Y|X}$ and $\mathbb{P}_{X|Z}$ . 1.) My question now is, if $$\int_{\mathcal{X}}\mathbb{P}_{Y|X=x}(E) \mathbb{P}_{X|Z=z_0}(dx)=\mathbb{P}_{Y, X|Z=z_0}(E)$$ holds (for some fixed $z_0$ )? On the one hand I would say no because intuitively I would assume that we would need a kernel $\mathbb{P}_{Y|X, Z}$ for that. On the other hand, for some fixed $z_0$ , $\mathbb{P}_{X|Z=z_0}$ is simply a measure on $\mathcal{X}$ and not a conditional distribution (i.e., a kernel)
If $Y$ and $Z$ are conditionally independent given $X$ , then $$\begin{align}\mathsf P_{X,Y\mid Z=z}(x,y)=\mathsf P_{Y\mid X=x}(y)\,\mathsf P_{X\mid Z=z}(x)\\\mathsf P_{Y\mid Z=z_0}\!(E)=\int_\mathcal X \mathsf P_{Y\mid X=x}(E)\,\mathsf P_{X\mid Z=z_0}\!(\mathrm dx)\end{align}$$ If that is not the case, then: $$\begin{align}\mathsf P_{X,Y\mid Z=z}(x,y)=\mathsf P_{Y\mid X=x,Z=z}(y)\,\mathsf P_{X\mid Z=z}(x)\\\mathsf P_{Y\mid Z=z_0}\!(E)=\int_\mathcal X \mathsf P_{Y\mid X=x,Z=z_0}\!(E)\,\mathsf P_{X\mid Z=z_0}\!(\mathrm dx)\end{align}$$
|probability-distributions|conditional-probability|conditional-expectation|
0
Find the inverse Laplace transform of F(s) = 1/(s+exp(-sτ)), where τ is a positive real parameter.
I'm looking for the inverse Laplace transform of $$F(s) = \frac{1}{s + e^{-s\tau}}$$ where $\tau$ is a positive real parameter. I am trying to use the general inverse formula of the Laplace transform to solve it. But then I need to find the singularities of $F(s)$, that is, the solutions of $ s + e^{-s\tau} = 0$ . Transforming the equation, I get $ \tau = \frac{\log(-s)}{(-s)}$ . It seems that the number of singularities depends on the value of the parameter $\tau$ . Then the question comes to me: how to find the residues at those possible singularities? And then how to proceed with the calculation for the general inverse formula? Many thanks in advance for your advice.
$$\mathcal{L}_s^{-1}\left[\frac{1}{s+\exp (-s \tau )}\right](t)=\sum _{m=0}^{\infty } \frac{(-t+m \tau )^m \theta (t-m \tau )}{\Gamma (1+m)}$$ where $\theta (t-m \tau )$ is the Heaviside theta function. $$\mathcal{L}_s^{-1}\left[\frac{1}{s+\exp (-s \tau )}\right](t)=\\\mathcal{M}_q^{-1}\left[\mathcal{L}_s^{-1}\left[\mathcal{M}_A\left[\frac{1}{s+A \exp (-s \tau )}\right](q)\right](t)\right](1)=\\\mathcal{M}_q^{-1}\left[\mathcal{L}_s^{-1}\left[e^{q s \tau } \pi s^{-1+q} \csc (\pi q)\right](t)\right](1)=\\\mathcal{M}_q^{-1}\left[\frac{\pi (t+q \tau )^{-q} \csc (\pi q) \theta (t+q \tau )}{\Gamma (1-q)}\right](1)=\\\sum _{m=0}^{\infty } \frac{(-t+m \tau )^m \theta (t-m \tau )}{\Gamma (1+m)}$$ where $\mathcal{M}_q^{-1}$ is the inverse Mellin transform and $\mathcal{M}_A$ is the Mellin transform.
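The series can be sanity-checked against the time-domain meaning of $F(s)$: since $(s+e^{-s\tau})F(s)=1$, the original function should satisfy the delay equation $f'(t)=-f(t-\tau)$ with $f(t)=0$ for $t<0$ and $f(0)=1$. A numerical sketch (the value $\tau=0.8$ and the test points, chosen away from the kinks at multiples of $\tau$, are assumptions of mine):

```python
from math import factorial

tau = 0.8

def f(t):
    """Evaluate the proposed series; only terms with m*tau <= t contribute."""
    if t < 0:
        return 0.0
    return sum((m*tau - t)**m / factorial(m) for m in range(int(t/tau) + 1))

h = 1e-6
for t in (0.5, 1.3, 2.7):
    # central-difference f'(t) should match -f(t - tau)
    print((f(t + h) - f(t - h)) / (2*h), -f(t - tau))
```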
|complex-analysis|partial-differential-equations|laplace-transform|residue-calculus|inverse-laplace|
0
Show that if a Mobius transformation has 3 fixed points then it is the identity map.
I have that any non trivial Mobius transformation has at most 2 fixed points since f(z)-z=0 has at most 2 roots. But I cannot deduce why it must then be the identity.
Lemma: If $f$ fixes $1,0,\infty$ , it must be the identity map. Proof: Indeed, take: $$f(z)=\frac{az+b}{cz+d}$$ Because it fixes $0$ , then $b=0$ . Because it fixes $\infty$ then, $f(z)=\frac{a}{c+d/z}$ must have $c=0$ and we have $f(z)=az/d$ . But it fixes $1$ , so $f(z)=z$ . $\square$ Theorem: If $f$ fixes any distinct three points, it must be the identity map. Proof: I will assume you know Mobius transformations are invertible. You can prove this by explicitly writing the inverse, or by realizing every Mobius transformation is associated to an invertible linear transformation. Suppose $f$ and $\tilde{f}$ fix three points $a,b,c$ . Clearly $f^{-1}$ and $\tilde{f}^{-1}$ also fix these points. Consider the map: $$g(z)=\frac{(a-c)(z-b)}{(a-b)(z-c)}$$ $g$ takes $a$ to $1$ , $b$ to $0$ and $c$ to $\infty$ . The determinant is $(b-c)(a-c)(a-b)\not=0$ by hypothesis, so it is a Mobius transformation. Define: $$h=g\circ \tilde{f}\circ f^{-1}\circ g^{-1}$$ This fixes $1,0,\infty$ , so by our lemma $h$ is the identity map, hence $\tilde{f}\circ f^{-1}$ is the identity, i.e. $\tilde{f}=f$ . Taking $\tilde{f}=\mathrm{id}$ (which certainly fixes $a,b,c$ ) shows that $f$ is the identity map. $\square$
|complex-analysis|
0
Which theorem should be used to solve this question?
My friend sent me this question and said "another Carnot theorem" is used to solve it, but I couldn't find that theorem. Can you help me? Additional explanation: $$ \widehat{ABD} = 30^{\circ} $$ $$ \widehat{DBC} = 1^{\circ} $$ $$ \widehat{ACD} = 89^{\circ} $$ $$ \widehat{BAD} = \widehat{DAC} $$ $$ x(\widehat{BCD}) = ? $$
Take $E$ , the reflection of $C$ about $AD$ ; it belongs to $AB$ and $\widehat{BED}=91^\circ$ , i.e. $O$ , the circumcenter of $\triangle BDE$ , belongs to $BC$ and $BO=OD=DE=CD$ , hence $\widehat{BCD}=\widehat{COD}=2^\circ$ . Best regards
|geometry|triangles|
0
Source of the definition of integrating a form along a curve in a manifold
Suppose that $M$ is a smooth manifold. Let $\omega$ be an $n-$ form on $M$ with compact support. Then we define $\int_M\omega$ using partitions of unity. If $M$ is covered by a single chart $h:M\to \mathbb R^n$ , then we define $\int_M\omega:= \int_{\mathbb R^n} (h^{-1})^\ast \omega$ , where $\ast$ denotes pullback. $\tag 1$ But often the following definition is stated: $\int_{\gamma} \omega := \int_{[0,1]} \gamma^\ast \omega$ , where $\gamma:[0,1]\to M$ is a smooth curve and $\omega$ is a $1$ - form on $M$ . $\tag 2$ My questions are: $(a)$ what is the source of the definition in $(2)$ ? $(b)$ Does this somehow follow from the definition in $(1)$ ? I think the answer to $(b)$ is no because taking the definition in $(1)$ to be a general definition, the term $\int_{\gamma} \omega$ makes sense iff $\gamma[0,1]$ is a $1$ - manifold but that's not the case in general: Smooth image of a $1$ - manifold is not necessarily a manifold. That brings me back to $(a)$ . I didn't find the definition $(2)$ in the references I've checked.
This notion is known as a "line integral" and you should have already encountered it in the special case that $M=\mathbb{R}^n$ in a lecture on analysis. Lee has an entire chapter on this concept, this exact definition is stated on p.289 (in the Second Edition). For the other question, yes, integrating over $[0,1]$ is a special case of your definition $(1)$ (as long as you modify it to allow $M$ to have boundary, which is something Lee does). The notation $\int_{\gamma}\omega$ is just that, a notation. It is not meant to imply that the image of the curve is a submanifold to be integrated over (and in the case that the curve is embedded, those interpretations would agree).
|integration|multivariable-calculus|differential-geometry|algebraic-topology|reference-request|
0
How to evaluate $\sum\limits_{n=3}^ \infty \frac{1}{n \ln(n)(\ln(\ln(n)))^2}$
I saw this problem: Prove that $\sum\limits_{n=3}^ \infty \frac{1}{n \ln(n)(\ln(\ln(n)))^2}$ converges. This is an easy problem that can be proved using the Cauchy condensation test twice: $$\sum_{n=3}^ \infty \frac{2^n}{2^n n\ln(2)(\ln(n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{n \ln(2)(\ln(n \ln(2)))^2} $$ and $$\sum_{n=3}^\infty \frac{2^n}{2^n (\ln(2^n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{ n^2(\ln( \ln(2)))^2} $$ which converges. I became curious: what is the value of this sum? I tried every method I know but all of them lead to nothing. Since it might be impossible to find the exact sum of this series, I want to ask for a numerical approximation of this sum.
Not a complete answer, but maybe it helps.. Here, have some derivatives: $$\frac{d}{dx}\log x = \frac{1}{x}$$ $$\frac{d}{dx}\log\log x = \frac{1}{x\log x}$$ $$\frac{d}{dx}\frac{1}{\log\log x} = \frac{-1}{x\log x(\log\log x)^2}$$ Therefore, take $$F(x) = -\frac{1}{\log\log x}$$ $$f(x) = \frac{1}{x\log x(\log\log x)^2}$$ Where, $F'(x) = f(x)$ . The sum can be re-written as: $\sum_{n=3}^\infty f(n)$ , and because $f(x)$ is continuous and monotone decreasing for $x\ge 3$ , one can use integral comparison: $$ \int_3^\infty f(x)dx\le\sum_{n=3}^\infty f(n)\le f(3) + \int_3^\infty f(x)dx $$ $$ F(\infty) - F(3)\le\sum_{n=3}^\infty f(n) \le F(\infty) - F(3) + f(3) $$ $$ \frac{1}{\log\log 3}\le\sum_{n=3}^\infty f(n) \le \frac{1}{\log\log 3} + \frac{1}{3\log 3\,(\log\log 3)^2} $$ $$ \left|\sum_{n=3}^\infty f(n) - \frac{1}{\log\log 3}\right|\le\frac{1}{3\log 3\,(\log\log 3)^2} $$ I think you can get a better approximation with arbitrary precision by continuing this calculation using the Euler-Maclaurin formula.
|real-analysis|calculus|limits|summation|numerical-methods|
0
Show that the linear functional associated with the solution of an ODE is bounded
This is an exercise in the book by B. Daya Reddy . For $f \in L^2(0, 1)$ , let $u_f$ be the solution of the ODE: $u'' + u' - 2u = f$ , $u(0) = u(1) = 0$ . Define the functional $\ell$ by $$ \langle \ell, f \rangle = \int_0^1 u_f(x) dx \ \forall f \in L^2(0, 1) $$ Show that $\ell$ is a bounded linear functional. I have shown $\ell$ is linear, but struggle to bound $\ell$ . From the ODE $$ u'' + u' - 2u = f, u(0) = u(1) = 0 $$ integrate from $0$ to $1$ for both side, we have $$ \langle \ell, f \rangle = \int_0^1 u_f(x)dx = \dfrac{u_f'(1) - u_f'(0)}{2} - \dfrac{1}{2}\int_0^1 f(x)dx $$ Therefore, $$ \vert \langle \ell, f \rangle \vert \le \dfrac{\vert u_f'(1) - u_f'(0) \vert}{2} + \dfrac{1}{2}\lVert f \rVert_2 $$ Am I on the right track ? How to bound the term $\dfrac{\vert u_f'(1) - u_f'(0) \vert}{2}$ in term of $L^2$ norm of $f$ ? Any hints are appreciated. Thanks
Multiply your ODE by $u$ throughout, then integrate from $0$ to $1$ : \begin{align} \int_0^1u''u +\int_0^1u'u-\int_0^12u^2&=\int_0^1fu. \end{align} Integrate by parts on the first term to get $[u'u]_0^1-\int_0^1|u'|^2=-\int_0^1|u'|^2$ , since $u(0)=u(1)=0$ . The second term can be integrated to give $\left[\frac{u^2}{2}\right]_0^1=0$ . Hence, rearranging gives \begin{align} \int_0^1\bigg(|u'|^2+ 2|u|^2\bigg)\,dx&=-\int_0^1fu\,dx\leq \int_0^1|f|^2\,dx+\int_0^1|u|^2\,dx, \end{align} and so rearranging gives \begin{align} \|u'\|_{L^2}^2+\|u\|_{L^2}^2&\leq\|f\|_{L^2}^2,\tag{$*$} \end{align} meaning in particular the solution map $f\mapsto u_f$ from $L^2([0,1])\to L^2([0,1])$ is bounded (with operator norm at most $1$ ). With this, \begin{align} |\langle\ell,f\rangle|&\leq\|u_f\|_{L^1([0,1])}\leq\|u_f\|_{L^2([0,1])}\leq \|f\|_{L^2([0,1])}. \end{align} The only thing left to justify is why we are able to perform all these manipulations (multiplying by $u$ and integrating by parts); this holds because for $f \in L^2$ the solution $u_f$ is regular enough (it lies in $H^2(0,1)$ , so $u_f$ and $u_f'$ are absolutely continuous) for the computations above to make sense.
|functional-analysis|ordinary-differential-equations|
1
How to evaluate $\sum\limits_{n=3}^ \infty \frac{1}{n \ln(n)(\ln(\ln(n)))^2}$
I saw this problem: Prove that $\sum\limits_{n=3}^ \infty \frac{1}{n \ln(n)(\ln(\ln(n)))^2}$ converges. This is an easy problem that can be proved using the Cauchy condensation test twice: $$\sum_{n=3}^ \infty \frac{2^n}{2^n n\ln(2)(\ln(n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{n \ln(2)(\ln(n \ln(2)))^2} $$ and $$\sum_{n=3}^\infty \frac{2^n}{2^n (\ln(2^n \ln(2)))^2}=\sum_{n=3}^\infty \frac{1}{ n^2(\ln( \ln(2)))^2} $$ which converges. I became curious: what is the value of this sum? I tried every method I know but all of them lead to nothing. Since it might be impossible to find the exact sum of this series, I want to ask for a numerical approximation of this sum.
In general we can approximate a sum $S=\sum_{i=m}^n f(i)$ by an integral $I=\int_m^n f(x)dx$ , more precisely for $n=\lfloor x\rfloor$ and $f(x)=1/(x\log(x)(\log\log(x))^2)$ we have $f(n)\geq f(x)\geq f(n+1)$ and integrating this inequality we obtain \begin{align} 0&\leq \sum_{n=n_0}^\infty f(n)-\int_{n_0}^\infty f(x)dx\leq f(n_0) \\ 0&\leq\sum_{n=n_0}^\infty f(n)-\frac 1{\log\log(n_0)}\leq f(n_0). \end{align} People like to joke that iterated log is a constant, and it is almost true. So you have to account for the term that comes from the integral to get decent bounds. For $n_0=1000$ we get $$\sum_{n=3}^\infty\frac 1{n\log(n)(\log\log(n))^2}\approx\sum_{n=3}^{n_0}\frac 1{n\log(n)(\log\log(n))^2}+\frac 1{\log\log(n_0)}=38.40678\ldots$$ with an error of magnitude less than $f(n_0)=0.0000387...$ .
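The $n_0=1000$ estimate is easy to reproduce: partial sum plus the integral tail $1/\log\log n_0$:

```python
from math import log

n0 = 1000
partial = sum(1 / (n * log(n) * log(log(n))**2) for n in range(3, n0 + 1))
print(partial + 1 / log(log(n0)))  # ~ 38.40678
```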
|real-analysis|calculus|limits|summation|numerical-methods|
0
If $R$ is a ring that is finitely generated as an additive group, then R is Noetherian
Recall that a ring $R$ is called right (left) Noetherian if every right (left) ideal $I$ of $R$ is a finitely generated $R$ -module, i.e., there exists $x_1,\ldots,x_m \in I$ such that $I=x_1R+\ldots+x_mR$ (or $I=Rx_1+\ldots+Rx_m$ ). Suppose that $R$ is finitely generated as an additive group, i.e., $R=\mathbb{Z}x_1+\ldots+\mathbb{Z}x_m$ for some $x_1,\ldots,x_m \in R$ . Is it true that every right (or left) ideal of $R$ is finitely generated as an $R$ -module?
Yes, it is true. The ring $\mathbb{Z} $ is a PID, hence noetherian. Thus an abelian group (i.e. $\mathbb{Z} $ -module) is a noetherian $\mathbb{Z} $ -module iff is finitely generated. Now, given any ring homomorphism $S\to R$ and an $R$ -module $M$ , if $M$ is noetherian as an $S$ -module, so it is as an $R$ -module. Taking $S=\mathbb{Z} $ and the unique ring homomorphism $\mathbb{Z} \to R$ , your assertion follows.
|modules|
0
Limit of lacunary power series in $1^-$.
Let $\sigma:\mathbb{N}\longrightarrow\mathbb{N}$ be strictly increasing, and consider the power series $$ S_{\sigma}(x)=\sum_{n=0}^{+\infty}(-1)^nx^{\sigma(n)}. $$ Can any real number in $[0,1]$ be obtained as the limit $\lim\limits_{x\rightarrow 1^-}S_{\sigma}(x)$ for some $\sigma$ ? According to this answer, the limit always is $\frac{1}{2}$ when $\sigma$ is a polynomial, WolframAlpha suggests that the limit is also $\frac{1}{2}$ with $\sigma(n)=n\log n$ (think of $\sigma(n)$ as the $n$ -th prime number). Therefore my question can also be : Is the limit $\lim\limits_{x\rightarrow 1^-}S_{\sigma}(x)$ always $\frac{1}{2}$ ? if not, can any rational number in $[0,1]$ be obtained this way for some $\sigma$ ?
My remark from MO ... $$\lim_{x \to 1^-}\sum_{k=0}^\infty \big(x^{10k}-x^{10k+3}\big) = \frac{3}{10} .$$ Similarly, get any rational in $(0,1)$ .
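A one-line symbolic confirmation (sympy): the sum equals $(1-x^3)/(1-x^{10})$, whose left-hand limit at $1$ is $3/10$:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((1 - x**3) / (1 - x**10), x, 1, '-'))  # 3/10
```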
|real-analysis|limits|power-series|analytic-number-theory|lacunary-series|
0
When is the digital sum applicable and when isn't it?
I am a high school student and there is something I want to ask about the application of digital sums. Let's say there is a fraction "520/7"; let 520/7 = a, so 520 = a × 7. If we now calculate the digital sums, it would be like 7 = a × 7, which means the digital sum of a should be 1 and nothing else, so the remainder of this fraction on dividing by 9 is 1. But when we calculate the answer, we see that it results in a repeating, infinite rational number, 74.285714285714... and so on, which does not have any SINGLE digital sum, as it keeps on changing as we add more and more digits. But our proof says it should be 1? So what's going on? Also, we say the digital sum of any number is the same as the remainder we get when we divide that number by 9, but is that applicable for fractions as well? Because let's say there is a number 18.225; if we divide this by 9 the remainder will be 0.225 and not 9, so this statement seems to be applicable only to integers. Am I right? I am not much aware about this topic.
As explained here , modular fractions are well-defined if the denominator is coprime to the modulus, and they obey common fraction laws, e.g. $\!\bmod 9\!:\ 520\equiv 5\!+\!2\equiv 7\,\Rightarrow\, \frac{520}7\equiv \frac{7}7\equiv 1$ . If you knew only the decimal we can still use this method to compute its value modulo $\,9\,$ by first cancelling all factors of $3$ until we reach a denominator that's $\rm\color{#0a0}{coprime}$ to the modulus $9,\,$ i.e. $\underbrace{74.\overline{\color{#0af}{285714}}_{\phantom{}}}_{\large 520/7} = \underbrace{74\!+\!\frac{\color{#0af}{285714}}{999999} = 74\!+\!\frac{10582}{\color{#0a0}{37037}}}_{\large {\rm cancel}\ 3^3}$ $\equiv 7\!+\!4\!+\!\frac{1+5+8+2}{3+7+3+7}\equiv 11\!+\!\frac{16}{20}^{\phantom{|^{|^|}}}\!\!\!\!\equiv\underbrace{ 2\!+\!\frac{\color{#c00}7}{2}\equiv 2\!-\!1}_{\large \color{#c00}7\ \equiv\ -2}$ In example $\,2\!:\ \,0.225 = \frac{225}{1000}\equiv \frac{2+2+5}{1+0+0+0}\equiv \frac{9}1\equiv \color{#0af}0,\,$ so $\,18.225\equiv 1\!+\!8\!+\!0\equiv 0\pmod 9$ .
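The modular-fraction computation for $520/7 \bmod 9$ can be checked directly with a modular inverse (`pow` with exponent $-1$ needs Python 3.8+):

```python
# 520/7 mod 9 via the inverse of 7 mod 9 (gcd(7, 9) = 1, so it exists)
print((520 * pow(7, -1, 9)) % 9)  # 1
```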
|elementary-number-theory|modular-arithmetic|
0
Prove that $\langle x, y \rangle = \overline{\langle y, x \rangle}$
Let $X$ be a normed linear space over the field $\mathbb C$ with the norm $\|\cdot \|$ . Let $x,y \in X$ . Define $\displaystyle \langle x, y \rangle =\frac{1}{4} \sum_{k =0}^{3} i^{k} \Vert x +i^k y\Vert^2$ . Prove that $\langle x, y \rangle = \overline{\langle y, x \rangle}.$ My attempt: \begin{align*} \overline{\langle y, x \rangle} &= \overline{\frac{1}{4} \sum_{k =0}^{3} i^{k} \Vert y +i^k x\Vert^2} \\ &= \overline{\frac{1}{4}\big(\Vert y +x\Vert^2+i\Vert y +i x\Vert^2-\Vert y - x\Vert^2-i\Vert y -ix\Vert^2\big)} \\ &= \frac{1}{4}\big(\overline{\Vert y +x\Vert^2}-i\overline{\Vert y +i x\Vert^2}-\overline{\Vert y - x\Vert^2}+i\overline{\Vert y -ix\Vert^2}\big). \end{align*} Can I take the complex conjugate inside the norm?
You cannot bring the complex conjugate inside the norm as $\|x+iy\|$ is not necessarily equal $\|x-iy\|.$ Instead I would use $$i^k\|x+i^ky\|^2= i^k\|i^{-k}x+y\|^2=\overline{i^{-k}\|y+i^{-k}x\|^2}$$ On summing up the terms you get the conclusion.
|functional-analysis|normed-spaces|hilbert-spaces|
0
Prove that $\langle x, y \rangle = \overline{\langle y, x \rangle}$
Let $X$ be a normed linear space over the field $\mathbb C$ with the norm $\|\cdot \|$ . Let $x,y \in X$ . Define $\displaystyle \langle x, y \rangle =\frac{1}{4} \sum_{k =0}^{3} i^{k} \Vert x +i^k y\Vert^2$ . Prove that $\langle x, y \rangle = \overline{\langle y, x \rangle}.$ My attempt: \begin{align*} \overline{\langle y, x \rangle} &= \overline{\frac{1}{4} \sum_{k =0}^{3} i^{k} \Vert y +i^k x\Vert^2} \\ &= \overline{\frac{1}{4}\big(\Vert y +x\Vert^2+i\Vert y +i x\Vert^2-\Vert y - x\Vert^2-i\Vert y -ix\Vert^2\big)} \\ &= \frac{1}{4}\big(\overline{\Vert y +x\Vert^2}-i\overline{\Vert y +i x\Vert^2}-\overline{\Vert y - x\Vert^2}+i\overline{\Vert y -ix\Vert^2}\big). \end{align*} Can I take the complex conjugate inside the norm?
Since the norm is a positive real number by definition/construction, it isn't affected by the complex conjugation. Then, it is only a matter of rearranging the terms. At the end, we have : $$ \begin{align} \overline{\langle y, x \rangle} &= \overline{\frac{1}{4} \sum_{k =0}^{3} i^{k} \|y + i^kx\|^2} \\ &= \frac{1}{4} \sum_{k =0}^{3} (-i)^{k} \|y + i^kx\|^2 \\ &= \frac{1}{4} \sum_{k =0}^{3} (-i)^{k} \|x + (-i)^ky\|^2 \\ &= \frac{1}{4} \left(\|x+y\|^2 - i\|x-iy\|^2 - \|x-y\|^2 + i\|x+iy\|^2\right) \\ &= \frac{1}{4} \left(\|x+y\|^2 + i\|x+iy\|^2 - \|x-y\|^2 - i\|x-iy\|^2\right) \\ &= \frac{1}{4} \sum_{k =0}^{3} i^{k} \|x + i^ky\|^2 \\ &= \langle x,y \rangle \end{align} $$ Formally, the rearrangement corresponds to the change of variable $k' := (4 - k) \bmod 4$, i.e. $k' \equiv -k \pmod 4$, since $(-i)^k = i^{-k}$.
|functional-analysis|normed-spaces|hilbert-spaces|
1
$n^3 \equiv n^5 \pmod{12} $?
I am proving that $$5n^3 + 7n^5 \equiv 0 \pmod{12}$$ It would suffice to show $$n^3 \equiv n^5 \pmod{12}$$ How would I go about doing that? I suppose I could just go through each $n \equiv r \pmod{12}$ with $r$ from $1$ to $11$ and show that $n^3 \equiv n^5 \pmod{12}$ for each, but that would be tedious. Surely there's a better way.
$$\begin{align}n^5-n^3&=6\binom{n+1}{3}n^2\\&=4\binom{n+1}{2}\binom{n}{2}n\end{align}$$ Since $6,4\mid n^5-n^3$ we also have that $\text{lcm}(4,6)=12\mid n^5-n^3$
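A brute-force confirmation, should you want one (a trivial Python check, not needed for the proof):

```python
# 12 | n^5 - n^3, hence 5n^3 + 7n^5 = 12n^3 + 7(n^5 - n^3) ≡ 0 (mod 12):
assert all((n**5 - n**3) % 12 == 0 for n in range(-10**4, 10**4))
assert all((5 * n**3 + 7 * n**5) % 12 == 0 for n in range(-10**4, 10**4))
```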
|elementary-number-theory|
0
Values of Lebesgue integrable function from the integral of the product of a known function.
Let's say we have three complex absolutely integrable and square integrable functions $A,B,C\in \mathbb{L}^1(\mathbb{C})\cap \mathbb{L}^2(\mathbb{C})$ such that the following holds: $$A(y) = \int_{-\infty}^\infty{B(x)C(xy)dx}$$ If we can measure or calculate $A$ and $C$ for all values, what information (if any) could we learn about the function $B$ ? I understand that the entire function $B$ can't be extracted in most cases, but is there a systematic way of describing what information can be extracted and how to do so?
There is an algorithm similar to what I was looking for in this physics paper . It uses the de-convolution theorem and polynomial series to "invert" the integral. I use the term "invert" loosely here because information is definitely lost, but for some functions it clearly produces decent approximations.
|functional-analysis|measure-theory|fourier-analysis|information-theory|
0
Is '$10$' a magical number or I am missing something?
It's a hilarious witty joke that points out how every base is ' $10$ ' in its base. Like, \begin{align} 2 &= 10\ \text{(base 2)} \\ 8 &= 10\ \text{(base 8)} \end{align} My question is if whoever invented the decimal system had chosen $9$ numbers or $11$ , or whatever, would this still be applicable? I am confused - Is $10$ a special number which we had chosen several centuries ago or am I missing a point?
To avoid confusion, the following somewhat cumbersome notation seems appropriate to me: let us write $(2:1:7)_{ten}$ instead of $217$ . It means $$(\color{red}2:\color{green}1:\color{blue}7)_{ten}=\color{red}2\times ten^2+\color{green}1\times ten^1+\color{blue}7\times ten^0$$ That's base ten. So let's look at base four, because the Martian in the joke image has only four fingers while we have ten. Very logically, in his world he will form packs of four, then bundles of packs of four, and so on. For example, $$(\color{red}1:\color{green}2:\color{blue}3)_{four}=\color{red}1\times four^2+\color{green}2\times four^1+\color{blue}3\times four^0$$ To count, in particular the four stones in the image, the Martian only needs four digits $0,1,2$ and $3$ . When he sees a package, very logically, he says: " $1$ package" and writes $$10$$ He's never heard of $4$ because he doesn't really need $4$ . Hence his question: "what is base four?"
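The Martian's bookkeeping is exactly positional base conversion; here is a small Python sketch (the function name and digit alphabet are mine, not part of the joke):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base(n: int, b: int) -> str:
    """Write n >= 0 in base 2 <= b <= 36."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

# Every base is "10" in its own base -- the joke in the question:
for b in (2, 4, 8, 10, 16):
    print(b, "->", to_base(b, b))   # always "10"
```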
|notation|number-systems|
0
Solve for all $x $ such that $16\sin^{3}x -14\cos^{3}x = \sqrt[3]{\sin x\cos^{8}x + 7\cos^{9}x}$
The original question: Find all $x$ in $\mathbb R$ such that $16\sin^3(x) -14\cos^3(x) = \sqrt[3]{\sin x\cos^8(x) + 7\cos^9(x)}$ It's a tough question I've found. I've tried using $16\tan^3(x) -14 = \sqrt[3]{\tan x + 7 }$ By inspection, $\tan x=1$ is one of the answers, but according to WA , $\tan x$ is not equal to $1$ . (Sorry, I've seen later that $x = \frac{\pi}{4}$ works, but I don't know how to find all roots of the question.) Can roots of unity solve this?
The function $f\colon \mathbb{R} \to \mathbb{R}$ , $f(a) = (2a)^3-7$ , is bijective and increasing. Since $f(a) - a = (a-1)(8a^2+8a+7)$ , the only solution of the equation $f(a) = a$ is $a=1$ ; $f(a) > a$ if $a>1$ and $f(a) < a$ if $a < 1$ . Then the equations $f(f(a)) = a$ and $f(a) = a$ are equivalent. In particular, $f^{-1} (a) = f(a)$ if and only if $a=1$ . In the original problem, $a=\tan{x}$ .
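A numeric cross-check (a Python sketch; the grid scan is a sanity check, not a proof of uniqueness):

```python
import numpy as np

# x = pi/4 (i.e. tan x = 1) satisfies the original equation:
x = np.pi / 4
lhs = 16 * np.sin(x)**3 - 14 * np.cos(x)**3
rhs = np.cbrt(np.sin(x) * np.cos(x)**8 + 7 * np.cos(x)**9)
print(lhs, rhs)                      # both equal 2**-0.5

# f(f(a)) = a has a single real solution, a = 1:
f = lambda a: (2 * a)**3 - 7
a = np.linspace(-3, 3, 600001)
g = f(f(a)) - a
print(a[:-1][np.sign(g[:-1]) != np.sign(g[1:])])   # one sign change, near 1.0
```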
|algebra-precalculus|trigonometry|complex-numbers|radicals|
0
What numbers can be written uniquely as a sum of two squares?
What numbers can be written uniquely as a sum of two squares? I was looking at sequence A125022 , which shows the numbers that can be uniquely written as a sum of two squares. Here are a few things that I noticed from the first numbers. We have $1$ , $2$ , $4$ , $8$ , $16$ , $32$ , $64$ , $128$ . It is then safe to assume that all numbers of the form $2^{s}$ can be written uniquely, where $s \in \mathbb{Z}_{+} \cup \{0\}$ . Moreover, primes of the form $4k+1$ , for example $5$ and $13$ , also appear and, interestingly enough, $5^2$ and $13^{2}$ do not. So, we could also say that $p^{s}$ has a unique representation only when $s = 0$ or $s = 1$ . If we analyze $A125022$ a bit more, we notice that $3^{2}$ , $7^{2}$ , $11^{2}$ are there, so we can conjecture that numbers of the form $q^{2}$ have a unique representation, where $q$ is a prime of the form $4k+3$ . Furthermore, for reasons I will say later, I believe $d^{2}$ , where $d$ has all of its prime factors of the form $4k+3$ , can be
Your conjecture is correct. You are missing no numbers. Any number not on your list would contain a prime power $q^e$ with $q$ of the form $4k+3$ , $e$ odd, or a prime power $p^e$ with $p$ of the form $4k+1$ with $e\ge2$ , or at least two primes $p_1$ , $p_2$ of the form $4k+1$ . In the first case $n$ cannot be written as a sum of two squares at all, and in the second and third case one readily obtains several different representations of $n$ . An essential ingredient to see this is the Brahmagupta-Fibonacci Identity . Each number on your list has actually only one representation. This can be most easily seen by looking at the prime decomposition of $n$ in the ring of Gaussian integers $\mathbb{Z}[i]$ , and considering the fact that $\mathbb{Z}[i]$ has unique prime number decomposition up to units. Choosing different units does not result in different representations. Essentially all different representations arise from combining the factors in $\mathbb{Z}[i]$ of the prime decompositions $p
|number-theory|elementary-number-theory|algebraic-number-theory|diophantine-equations|
1
Introduce a parameter to determine the value of $\int_0^1\frac{\log(1+x)}{1+x^2}dx$
How could I introduce a parameter to determine the value of $\int_0^1\frac{\log(1+x)}{1+x^2}dx$ ?
You can write the integral as $$I(\alpha)=\int_0^1 \frac{\log(1+\alpha x)}{1+x^2}\,dx$$ The target integral is the case $\alpha=1$ ; note that for $\alpha=0$ we find $I(0)=0$ , which is going to be useful later. Now we want to differentiate $I(\alpha)$ with respect to $\alpha$ , so we write: $$I'(\alpha)=\frac{\partial}{\partial\alpha}I(\alpha)=\frac{\partial}{\partial\alpha}\int_0^1 \frac{\log(1+\alpha x)}{1+x^2}\,dx=\int_0^1 \frac{\partial}{\partial\alpha}\frac{\log(1+\alpha x)}{1+x^2}\,dx$$ We can bring the partial derivative inside the integral because of the theorem of Dominated Convergence . Partial fractions then give $$I'(\alpha)=\int_0^1 \frac{x}{(1+x^2)(1+\alpha x)}\,dx=\int_0^1\frac{1}{\alpha^2+1}\cdot\frac{\alpha+x}{1+x^2}-\frac{\alpha}{(\alpha^2+1)(1+\alpha x)}\,dx$$ Integrating with respect to $x$ we obtain an expression in $\alpha$ : $$I'(\alpha)=\frac{\pi}{4}\frac{\alpha}{\alpha^2+1}+\frac{\log{2}}{2(\alpha^2+1)}-\frac{\log(1+\alpha)}{\alpha^2+1}$$ We now recover the original function by integrating this derivative: $$I(\alpha)=\int_0^\alpha I'(t)\,dt$$
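Carrying out that last integration yields the known closed form $I(1)=\frac{\pi\log 2}{8}$, which is easy to confirm numerically (a sketch using `scipy.integrate.quad`):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.log(1 + x) / (1 + x**2), 0, 1)
print(val, np.pi * np.log(2) / 8)    # both ~ 0.2721982613...
```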
|definite-integrals|
0
Given $x_1^3+x_2^3+...+x_9^3=0$. Find the maximum value of $S=x_1+x_2+...+ x_9$.
Given 9 real numbers $x_1, x_2, ... , x_9\in [-1,1]$ such that $x_1^3+x_2^3+...+x_9^3=0$ . Find the maximum value of $S=x_1+x_2+...+ x_9$ . I have tried ordering the numbers from smallest to largest and then dividing the set of integers $\{x_1, x_2, ... , x_9 \}$ into two subsets of only negative numbers and only positive numbers. In particular, $S_1 =\{ x_1, x_2,..., x_j \}$ and $S_2 =\{ x_{j+1}, x_{j+2},..., x_9 \}$ such that all the elements in $S_1$ are negative and all the elements in $S_2$ are positive. From there, letting $-(x_1^3+ x_2^3+... x_j^3)= x_{j+1}^3+ x_{j+2}^3+... x_9^3=P$ and evaluating some inequalities, I got: $maxS=\sqrt[3]{(9-j)^2P}-[\left\lfloor P \right\rfloor+\sqrt[3]{P-\left\lfloor P \right\rfloor}]$ This answer was impossibly complicated and I can't seem to find the maximum of S with respect to $j$ and $P$ . Is there a better solution to this? If not, how do I find the maximum of S?
Notice that for $-1\leq x \leq 1$ there exists a number $t$ such that $\cos(t)=x$ . We have: $$ 4\cos^3(t)-3\cos(t) = \cos(3t) \geq -1\\ \Rightarrow 4x^3 - 3x \geq -1\\ \Rightarrow x\leq \frac{4x^3+1}{3}$$ Or you can prove by showing that $4x^3-3x+1\geq0$ . Applying this to our sum, we obtain: $$\sum_{i = 1}^9 {x_i} \leq \frac{4}{3}\sum_{i = 1}^9 {x_i^3} + \frac{9}{3} = 3$$ The equal sign happens when $x_i \in \{1/2, -1\}$ and the constraint $\sum_{i = 1}^9 {x_i^3} = 0$ is satisfied. It is possible by letting: $$x_1=x_2=...=x_8=\frac{1}{2},x_9 = -1$$
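The equality case, and the fact that random feasible points never beat it, is easy to probe numerically (a Python sketch; the cube-root projection below is just one way to generate feasible points):

```python
import random

xs = [0.5] * 8 + [-1.0]
print(sum(t**3 for t in xs), sum(xs))        # 0.0 and 3.0

best = float("-inf")
for _ in range(100_000):
    v = [random.uniform(-1, 1) for _ in range(8)]
    c = -sum(t**3 for t in v)                # required cube of the 9th entry
    if -1 <= c <= 1:
        v.append(c ** (1 / 3) if c >= 0 else -((-c) ** (1 / 3)))
        best = max(best, sum(v))
print(best)                                   # never exceeds 3
```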
|inequality|a.m.-g.m.-inequality|
0
Deriving Schwarzian Action in SYK Theory
I am trying to derive the Schwarzian action for the $q=4$ SYK model following "An introduction to the SYK model" by V. Rosenhaus. I understand that we have the solution $$G(\tau_1, \tau_2) = \frac{b {\rm sgn}(\tau_1-\tau_2)}{ J ^{2\Delta}} \frac{f^\prime (\tau_1) ^\Delta f^\prime(\tau_2)^\Delta}{\vert f(\tau_1)-f(\tau_2)\vert^{2\Delta}} .\tag{3.4}$$ My problem is deriving the expansion of $G$ . Supposedly, we change coordinates from $(\tau_1, \tau_2)$ to $(\tau_+, \tau_-)$ where $\tau_+= \frac{\tau_1+\tau_2}{2}$ and $\tau_-=\tau_1-\tau_2$ . Taylor expand around $\tau_+$ $$G(\tau_1, \tau_2) = \frac{b {\rm sgn}(\tau_1-\tau_2)}{ \vert J (\tau_1-\tau_2) \vert^{2\Delta}} \left(1+ \frac{\Delta}{6} (\tau_1-\tau_2)^2 {\rm Sch}(f(\tau_+), \tau_+)+ O(\tau_1-\tau_2)^3 \right) .\tag{3.5}$$ It would be appreciated if someone could help show how this expansion is done or point to a resource where this has been detailed.
Hints: Define $$ \tau~:=~\tau_+~:=~\frac{\tau_1+\tau_2}{2}, $$ $$ 2\delta~:=~\tau_{12}~:=~\tau_1-\tau_2, $$ and the Schwarzian derivative $${\rm Sch}(f(\tau),\tau)~:=~\frac{f^{\prime\prime\prime}(\tau)}{f^{\prime}(\tau)}-\frac{3}{2}\left(\frac{f^{\prime\prime}(\tau)}{f^{\prime}(\tau)}\right)^2.$$ Then $$\begin{align} \tau_{12}^2&\frac{f^{\prime}(\tau_1)f^{\prime}(\tau_2)}{[f(\tau_1)-f(\tau_2)]^2}~=~\ldots\cr ~=~&\delta^2\frac{[f^{\prime}(\tau)+\delta f^{\prime\prime}(\tau)+\frac{\delta^2}{2} f^{\prime\prime\prime}(\tau)+O(\delta^3)][f^{\prime}(\tau)-\delta f^{\prime\prime}(\tau)+\frac{\delta^2}{2} f^{\prime\prime\prime}(\tau)+O(\delta^3)]}{[\delta f^{\prime}(\tau)+\frac{\delta^3}{6} f^{\prime\prime\prime}(\tau)+O(\delta^5)]^2}\cr ~=~&\frac{[f^{\prime}(\tau)+\frac{\delta^2}{2} f^{\prime\prime\prime}(\tau)]^2-[\delta f^{\prime\prime}(\tau)]^2+O(\delta^3)}{[f^{\prime}(\tau)+\frac{\delta^2}{6} f^{\prime\prime\prime}(\tau)+O(\delta^4)]^2}\cr ~=~&\frac{f^{\prime}(\tau)^2+\delta^2 f^{\p
|derivatives|taylor-expansion|physics|conformal-geometry|conformal-field-theory|
0
Tractable formulation of a mixed integer program
Given constant matrices $A_1\in\mathbb{R}^{1\times l}$ and $A_2\in\mathbb{R}^{1\times l}$ , and constants $b_i$ , $i=1,\dots,n$ . Consider the following mixed integer program (MIP) with decision variables $c_i\in\{0,1\}$ and $X=[x_1,\dots,x_n]\in\mathbb{R}^{l\times n}$ with $x_i\in\mathbb{R}^{l}$ for $i=1,\dots,n$ . Objective: min $\sum_{i=1}^{n} |c_iA_1x_i|$ Constraints: $A_2x_i \le c_ib_i$ ; $\;\;$ $\sum_{1}^nc_i \ge 1$ ; $\;\;$ $c_i\in\{0,1\}$ . This problem is intractable because the objective function contains the product of the decision variables $c_i$ and $x_i$ . Is it possible to derive a tractable formulation for this problem?
You can linearize the problem as follows. Introduce nonnegative decision variables $y_i$ to represent $|c_i A_1 x_i|$ , change the objective to minimizing $\sum_i y_i$ , let $M_i$ be a constant upper bound on $|A_1 x_i|$ (the smaller the valid bound, the tighter the formulation), and impose additional linear big-M constraints \begin{align} A_1 x_i - y_i &\le M_i(1-c_i) &&\text{for all $i$} \\ -A_1 x_i - y_i &\le M_i(1-c_i) &&\text{for all $i$} \end{align}
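A minimal sketch of the linearized model in PuLP, with made-up data; the box bounds on $x_i$ and the value of $M$ are assumptions (big-M requires $|A_1 x_i|$ to be bounded, which the box guarantees):

```python
import pulp

n, l = 3, 2
A1 = [1.0, -2.0]
A2 = [0.5, 1.0]
b = [1.0, 2.0, 3.0]
M = 100.0        # valid upper bound on |A1 x_i| given the box below

prob = pulp.LpProblem("linearized_mip", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{i}_{j}", -10, 10) for j in range(l)] for i in range(n)]
c = [pulp.LpVariable(f"c_{i}", cat="Binary") for i in range(n)]
y = [pulp.LpVariable(f"y_{i}", lowBound=0) for i in range(n)]

prob += pulp.lpSum(y)                         # objective: sum of |c_i A1 x_i|
for i in range(n):
    A1x = pulp.lpSum(A1[j] * x[i][j] for j in range(l))
    A2x = pulp.lpSum(A2[j] * x[i][j] for j in range(l))
    prob += A2x <= b[i] * c[i]
    prob += A1x - y[i] <= M * (1 - c[i])      # big-M absolute-value constraints
    prob += -A1x - y[i] <= M * (1 - c[i])
prob += pulp.lpSum(c) >= 1
prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```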
|optimization|convex-optimization|mixed-integer-programming|linearization|
1
Solve $y+3=3\sqrt{(y+7)^2}$
Solve $y+3=3\sqrt{(y+7)^2}$ $\Rightarrow y+3=3(y+7)$ $\Rightarrow y+3=3y+21$ $\Rightarrow 2y=-18$ $y=-9$ But $-9+3\ne3\sqrt{(-9+7)^2} \Rightarrow-6 \ne6$ How is this humanly possible? What's going on here???
$\sqrt{a^2}$ is not equal to $a$ , it’s equal to $|a|$ . So $$y+3=3\sqrt{(y+7)^2}$$ $$y+3=3|y+7|$$ We see that $y+3\ge0$ , so $y+7>0$ , and $|y+7|=y+7$ . $$y+3=3(y+7)$$ $$2y=-18$$ $$y=-9$$ But $y+3\ge0$ doesn’t hold. Hence, no solutions.
|algebra-precalculus|
1
Solve $y+3=3\sqrt{(y+7)^2}$
Solve $y+3=3\sqrt{(y+7)^2}$ $\Rightarrow y+3=3(y+7)$ $\Rightarrow y+3=3y+21$ $\Rightarrow 2y=-18$ $y=-9$ But $-9+3\ne3\sqrt{(-9+7)^2} \Rightarrow-6 \ne6$ How is this humanly possible? What's going on here???
The error that you have is very subtle. If you do not make any simplifications, the right-hand side of the equation will never be negative, that is because $\sqrt{(x+7)^2} = |x+7|$ . So actually you have a different equation that doesn't have solutions (you can check it with simple algebra)
|algebra-precalculus|
0
Natural transformations of Hom-sets “transport” natural transformations from one pair of functors to another? (Reference)
Question 1: Does anyone know a name, or have a reference, for the following lemma? $\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\F}{\mathscr{F}}$$\newcommand{\G}{\mathscr{G}}$$\newcommand{\op}{\operatorname{op}}$$\newcommand{\C}{\mathscr{C}}$ $\newcommand{\Id}{\operatorname{Id}}$$\newcommand{\Ob}{\operatorname{Ob}}$$\newcommand{\Set}{\operatorname{Set}}$$\newcommand{\eval}{\operatorname{eval}}$ Lemma: Given functors $G_1,G_2: \C \to \G$ such that there exists a natural transformation $G_1 \implies G_2$ , and functors $F_1, F_2: \C \to \F$ such that there exists a natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$ , then there exists a natural transformation $F_1 \implies F_2$ . Question 2: If the above lemma can be used to prove the Yoneda lemma, how would one do so? If the above lemma is a corollary of the Yoneda lemma, then how? Optional context: I will put a proof of this lemma in an answer below. I found the lemma when try
$\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\F}{\mathscr{F}}$$\newcommand{\G}{\mathscr{G}}$$\newcommand{\op}{\operatorname{op}}$$\newcommand{\C}{\mathscr{C}}$ $\newcommand{\Id}{\operatorname{Id}}$$\newcommand{\Ob}{\operatorname{Ob}}$ Lemma: Given functors $G_1,G_2: \C \to \G$ such that there exists a natural transformation $G_1 \implies G_2$ , and functors $F_1, F_2: \C \to \F$ such that there exists a natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$ , then there exists a natural transformation $F_1 \implies F_2$ . Proof: For every $c \in \Ob(\C)$ , let $\lambda_c$ denote the corresponding component of the natural transformation $G_1 \implies G_2$ . For every ${(c_1, c_2) \in \Ob(\C^{\op} \times \C)}$ , let $\eta_{c_1, c_2}$ denote the corresponding component of the natural transformation ${ \Hom_{\G} \circ (G_1^{\op}, G_2) \implies \Hom_{\F} \circ (F_1^{\op}, F_2) }$ . Then the claim is that the $\eta_{c,c}(\lambda_c) =: \
|reference-request|category-theory|terminology|natural-transformations|yoneda-lemma|
0
Jacobian and vectorization
Given the matrix function $h(Q) = Q^{T}AQ$ . The derivative can be obtained as $$\lim_{\epsilon \to 0} \frac{h(Q + \epsilon H)-h(Q)}{\epsilon} = H^{T} A Q + Q^{T} A H$$ Then, I saw that the Jacobian $J_h(vec(Q)) = ((AQ)^{T} \otimes I)\Pi + I \otimes Q^{T} A$ . I have some issues with obtaining this identity. Notice that $\Pi$ is the matrix s.t. $\Pi vec(X) = vec(X^{T}) \ \forall X$ . I tried this: \begin{align*} vec(H^{T} A Q + Q^{T} A H) &= vec(H^{T} A Q) + vec(Q^{T} A H)\\ &= (Q^{T} \otimes H^{T}) vec(A) + (H^{T} \otimes Q^{T}) vec(A)\\ &= (Q^{T} \otimes I) (I \otimes H^{T}) vec(A) + (H^{T} \otimes I)(I \otimes Q^{T})vec(A)\\ &= (Q^{T} \otimes H^{T}) vec(A) + (H^{T} \otimes Q^{T})vec(A)\\ &= ((Q^{T} \otimes H^{T}) + (H^{T} \otimes Q^{T})) vec(A). \end{align*} I am afraid that I am messing up something in the beginning, since there's no need for me to use $\Pi$ . Any advice is appreciated.
Using the more standard $K$ $($ instead of $\Pi)$ to denote the Commutation Matrix , an uppercase $H$ to denote your $h$ matrix, lowercase letters $(h,q)$ to denote the vectorized form of the matrices $(H,Q),\,$ and using differentials instead of limits, the calculation runs as follows $$\eqalign{ \def\p{\partial} \def\k{\otimes} \def\v{\operatorname{vec}} H &= Q^TAQ \\ dH &= dQ^TAQ \;+\; Q^TA\,dQ \\ \v(dH) &= ((AQ)^T\k I)\v(dQ^T) \;+\; (I\k Q^TA)\v(dQ) \\ dh &= (Q^TA^T\!\k I)K\,dq \;+\; (I\k Q^TA)\,dq \\ \frac{\p h}{\p q} &= (Q^TA^T\!\k I)K \;+\; (I\k Q^TA) \\ }$$
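The result checks out numerically; a numpy sketch (column-major `vec` and the commutation-matrix construction are standard, but the helper names are mine):

```python
import numpy as np

def commutation_matrix(m, n):
    """K with K @ vec(X) = vec(X.T) for X of shape (m, n), column-major vec."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[j + i * n, i + j * m] = 1
    return K

rng = np.random.default_rng(0)
n = 4
A, Q, dQ = (rng.standard_normal((n, n)) for _ in range(3))
vec = lambda M: M.reshape(-1, order="F")

J = np.kron(Q.T @ A.T, np.eye(n)) @ commutation_matrix(n, n) \
    + np.kron(np.eye(n), Q.T @ A)
dH = dQ.T @ A @ Q + Q.T @ A @ dQ
print(np.allclose(J @ vec(dQ), vec(dH)))     # True
```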
|linear-algebra|matrix-equations|matrix-calculus|
0
Eigendecomposition of the direct sum of two operator on Hilbert spaces
Let the (finite dimensional) Hilbert space $\mathcal{H}$ be the direct sum of $\mathcal{H}_A$ and $\mathcal{H}_B$ . Let $A$ be a linear operator on $\mathcal{H}_A$ and $B$ be a linear operator on $\mathcal{H}_B$ . Let $A = \sum_j \lambda_j^A |\psi_j^A\rangle \langle \psi_j^A|$ and $B = \sum_j \lambda_j^B |\psi_j^B \rangle \langle\psi_j^B|$ be the eigendecomposition of the two operators. What is the eigendecomposition of $A \oplus B$ ? From the definition it follows: $$(A \oplus B) = \sum_j \lambda_j^A |\psi_j^A\rangle \langle \psi_j^A| \oplus \sum_k \lambda_k^B |\psi_k^B \rangle \langle\psi_k^B|$$ and $$(A \oplus B) = \sum_j \sum_k (\lambda_j^A |\psi_j^A\rangle \oplus \lambda_k^B |\psi_k^B\rangle)(\langle\psi_j^A| \oplus \langle\psi_k^B|)$$ Not sure, though, if the next step is correct: $$(A \oplus B) = \sum_j \sum_k \lambda_j^A \lambda_k^B (|\psi_j^A\rangle \oplus |\psi_k^B\rangle)(\langle\psi_j^A| \oplus \langle\psi_k^B|)$$
The eigendecomposition of $A \oplus B$ is just the sum of the two eigendecompositions, that is, $$A \oplus B = \sum_j \lambda_j^A | \psi_j^A \rangle \langle \psi_j^A | + \sum_k \lambda_k^B | \psi_k^B \rangle \langle \psi_k^B |$$ Where I identified $H_A$ as a subspace of $H = H_A \oplus H_B$ by $H_A \ni h \mapsto h \oplus 0 \in H_A \oplus H_B$ and similarly identified $H_B$ as a subspace of $H = H_A \oplus H_B$ by $H_B \ni h \mapsto 0 \oplus h \in H_A \oplus H_B$ . To prove the above decomposition is the eigendecomposition, hint: Show that if $\psi_j^A$ is an eigenvector of $A$ , then $\psi_j^A \oplus 0$ is an eigenvector of $A \oplus B$ with the same eigenvalue. Similarly, if $\psi_k^B$ is an eigenvector of $B$ , then $0 \oplus \psi_k^B$ is an eigenvector of $A \oplus B$ with the same eigenvalue. Moreover, if $\{e_j\}$ and $\{f_k\}$ are orthonormal bases of $H_A$ and $H_B$ , respectively, then $\{e_j \oplus 0\}_j \cup \{0 \oplus f_k\}_k$ is an orthonormal basis of $H_A \oplus H_B$ .
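A quick numerical illustration of the hint (a numpy sketch; I take $A$ and $B$ symmetric so that `eigh` returns genuine orthonormal eigendecompositions):

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); A = A + A.T   # symmetric => orthonormal eigenbasis
B = rng.standard_normal((2, 2)); B = B + B.T
wa, Va = np.linalg.eigh(A)
wb, Vb = np.linalg.eigh(B)

M = block_diag(A, B)                           # the operator A ⊕ B
pad_a = [np.concatenate([Va[:, j], np.zeros(2)]) for j in range(3)]
pad_b = [np.concatenate([np.zeros(3), Vb[:, k]]) for k in range(2)]
for lam, v in zip(list(wa) + list(wb), pad_a + pad_b):
    assert np.allclose(M @ v, lam * v)         # padded vectors are eigenvectors
print("eigendecomposition of the direct sum verified")
```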
|linear-algebra|operator-theory|hilbert-spaces|
1
Finding value of $(a+b)^5$ using $2$ cubic equation in $a$ and $b$
If $a,b\in\mathbb{R}$ and $\displaystyle \frac{a^3+4a}{3a^2+5}=-1.$ and $\displaystyle \frac{b^3+4b}{3b^2+5}=1$ . Then $(a+b)^5=$ What I try : From the above data, we have $\displaystyle a^3+3a^2+4a+5=0\cdots (1)$ $\displaystyle b^3-3b^2+4b-5=0\cdots (2)$ Adding both , We get $\displaystyle a^3+b^3+3(a^2-b^2)+4(a+b)=0$ $(a+b)\bigg[a^2-ab+b^2+3a-3b+4\bigg]=0$ So either we get $a+b=0$ or $a^2+b^2-ab+3a-3b+4=0$ But answer is $a+b=0\Longrightarrow (a+b)^5=0$ How can I prove that other factor is non zero, Please have a look on that , thanks
First, we can prove that there is only one real value of $a$ that satisfies $\frac{a^3+4a}{3a^2+5} = -1$ . We can see this from the cubic equation $$a^3 + 3a^2 + 4a + 5 = 0.$$ The derivative of the cubic polynomial is $3a^2 + 6a + 4 = 3(a+1)^2 + 1$ , which is always positive. This means that the cubic polynomial is always increasing. After the first real root, the polynomial will always be positive and never hit $0$ again. Second, for every value of $b$ such that $\frac{b^3 + 4b}{3b^2+5} = 1$ , we have $\frac{(-b)^3 + 4(-b)}{3(-b)^2 + 5} = -1$ . In other words, every solution to $b$ 's equation is the negation of a solution to $a$ 's equation. Since there was only one possibility for $a$ , there can only be one possibility for $b$ , and $b = -a$ .
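Numerically (a short Python check of both observations):

```python
import numpy as np

ra = [r.real for r in np.roots([1, 3, 4, 5]) if abs(r.imag) < 1e-9]
rb = [r.real for r in np.roots([1, -3, 4, -5]) if abs(r.imag) < 1e-9]
print(ra, rb)          # exactly one real root each, negatives of one another
print(ra[0] + rb[0])   # ~ 0.0, hence (a + b)^5 = 0
```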
|polynomials|
0
Finding value of $(a+b)^5$ using $2$ cubic equation in $a$ and $b$
If $a,b\in\mathbb{R}$ and $\displaystyle \frac{a^3+4a}{3a^2+5}=-1.$ and $\displaystyle \frac{b^3+4b}{3b^2+5}=1$ . Then $(a+b)^5=$ What I try : From the above data, we have $\displaystyle a^3+3a^2+4a+5=0\cdots (1)$ $\displaystyle b^3-3b^2+4b-5=0\cdots (2)$ Adding both , We get $\displaystyle a^3+b^3+3(a^2-b^2)+4(a+b)=0$ $(a+b)\bigg[a^2-ab+b^2+3a-3b+4\bigg]=0$ So either we get $a+b=0$ or $a^2+b^2-ab+3a-3b+4=0$ But answer is $a+b=0\Longrightarrow (a+b)^5=0$ How can I prove that other factor is non zero, Please have a look on that , thanks
Observe that $$a^2-ab+b^2+3a-3b+4=\left(\frac{a+b}{2}\right)^2+3\left(1+\frac{a-b}{2}\right)^2+1$$
|polynomials|
1
Detail in standard measure theory I cannot seem to obtain
There is a standard result in measure/integration theory which I just cannot seem to obtain. If $f \colon X \to \mathbb{C}$ is measurable ( $X$ is any measurable space), there exist simple measurable functions $\phi_k \colon X \to \mathbb{C}$ such that $\phi_k \to f$ pointwise and all $|\phi_k| \le |f|$ . This is fine. However , it is often claimed that the $\phi_k$ can be taken such that also $|\phi_k| \le |\phi_{k+1}|$ for all $k$ . (See for instance Folland's Real Analysis , Proposition 2.10.) This I cannot seem to obtain, at least not with the simplicitly for which it is claimed to follow. It is typically stated that this can be obtained as follows (e.g., Folland): Write $f = u + iv = u^+ - u^- + i(v^+ - v^-)$ , the standard decomposition into positive and negative parts $u^\pm$ and $v^\pm$ for $u = \Re f$ and $v = \Im f$ . Pick simple measurable functions $s_k^\pm$ and $t_k^\pm$ such that $0 \le s_k^\pm \uparrow u^\pm$ and $0 \le t_k^\pm \uparrow v^\pm$ . Put $$\phi_k := s_k^+ - s
You need to also use the fact that these approximations are subordinate to the positive and negative parts of $u,v$ . Here’s a more precise version of what I mean: Lemma Let $u:X\to[-\infty,\infty]$ be a given function and $u^+,u^-$ its positive, negative parts. Suppose $0\leq s_1\leq u^+$ and $0\leq s_2\leq u^-$ are given functions. Then, the subtraction $s:=s_1-s_2$ is a well-defined function $X\to [-\infty,\infty]$ , with $s^+=s_1$ and $s^-=s_2$ . We cannot have $s_1(x)=s_2(x)=\infty$ , because that would mean $u^+(x)=u^-(x)=\infty$ , which is absurd (in fact if $u^+(x)>0$ that automatically implies $u^-(x)=0$ , and vice-versa); hence the subtraction $s:=s_1-s_2$ is well-defined. Suppose $s(x)=0$ . Then, $s_1(x)=s_2(x)$ . If this common value is strictly positive, then $u^+(x)>0$ and $u^-(x)>0$ , which as I said above is impossible by definition. Hence, $s_1(x)=s_2(x)=0$ . So, $s^+(x):=\max(s(x),0)=\max(0,0)=0=s_1(x)$ , and similarly, $s^-(x)=s_2(x)$ . Suppose $s(x)>0$ . Then, $s_1(
|integration|measure-theory|proof-explanation|simple-functions|
0
Prove Localization in CRing is Epimorphism
Given a commutative ring $R$ and a multiplicative subset $S$ of $R$ , we have the normal localization map $\lambda_S: R \rightarrow S^{-1}R$ . How does one prove that this is an epimorphism? So given some $f, g: S^{-1}R \rightarrow C$ such that $f \circ \lambda_S = g \circ \lambda_S$ , I can see that $f$ and $g$ agree on elements with denominator 1. But I need to show that they agree on all elements in $S^{-1}R$ .
How about we just use the universal property of localization. Let us denote $k = f\lambda = g\lambda : A\to C$ ; then we can verify that $k$ maps everything in $S$ to an invertible element of $C$ . Basically, for any $s\in S$ , $\lambda(s)$ is invertible in $S^{-1}A$ , so there is $t\in S^{-1}A$ such that $\lambda(s)t = 1$ , and then we get $1=f(\lambda(s)t) = f(\lambda(s))f(t) = k(s)f(t)$ . So $f(t)$ is the inverse of $k(s)$ in $C$ . Then the universal property of the localization $S^{-1}A$ shows there is a unique $h: S^{-1}A \to C$ such that $$h\lambda = k$$ But we know $f$ and $g$ both satisfy the above condition, so the uniqueness of $h$ implies $f = g$ .
|abstract-algebra|ring-theory|category-theory|
0
How many nonnegative integers $x_1, x_2, x_3, x_4$ satisfy $2x_1 + x_2 + x_3 + x_4 = n$?
Can anyone give some hints about the following question? How many nonnegative integers $x_1, x_2, x_3, x_4$ satisfy $2x_1 + x_2 + x_3 + x_4 = n$ ? Normally this kind of question uses stars and bars, but here there is a $2x_1$ term, which I don't know how to handle. Help please! PS: I think maybe we can use a recurrence relation.
I can't post my question in a new thread. Problem 1. Let $a$ be a positive integer $(a\le 5)$ . Define $\|1,1,1,a;n\|$ to be the number of non-negative integer solutions of the equation $\quad x_1+x_2+x_3+ax_4=n$ . Prove that $\|1,1,1,a;n\|=\left\lfloor\dfrac{(n+2)(n+a+2)(2n+a+1)}{12a}\right\rfloor$ Problem 2. Applying the result of Problem 1, count the number of triangles whose edges are positive integers less than or equal to $n$ . My solution for Problem 2. Let $x_1,x_2,x_3$ be the three edges of the triangle, sorted in order $1\le x_1\le x_2\le x_3\le n$ . Set $\begin{cases}x_1=y_1+1;(y_1\ge 0) \\ x_2=x_1+y_2=y_1+y_2+1;(y_2\ge 0)\\ x_3=x_2+y_3=y_1+y_2+y_3+1;(y_3\ge 0)\end{cases}$ Because $x_1+x_2>x_3$ , we have $2y_1+y_2+2>y_1+y_2+y_3+1\Rightarrow y_1\ge y_3\Leftrightarrow y_1=y_3+y_4;(y_4\ge 0)$ And $x_3=y_1+y_2+y_3+1=y_2+2y_3+y_4+1\le n$ . Therefore $y_2+y_4+y_5+2y_3=n-1$ where $y_5\ge 0$ . By Problem 1, the number of triangles is $\|1,1,1,2;n-1\|=\left\lfloor\dfrac{(n+1)(n+3)(2n+1)}{24}\right\rfloor$ Returning to Problem 1, I am trying to use gener
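Both displayed formulas are easy to verify by brute force (a Python sketch; `count` enumerates $x_4$ and applies stars and bars to the remaining three variables):

```python
def count(n, a):
    """Nonnegative solutions of x1 + x2 + x3 + a*x4 = n."""
    return sum((m + 2) * (m + 1) // 2            # stars and bars for x1+x2+x3 = m
               for x4 in range(n // a + 1)
               for m in [n - a * x4])

for a in range(1, 6):
    for n in range(40):
        assert count(n, a) == ((n + 2) * (n + a + 2) * (2 * n + a + 1)) // (12 * a)

# Triangle count, checked directly against the a = 2, n -> n - 1 specialization:
def triangles(n):
    return sum(1 for x1 in range(1, n + 1) for x2 in range(x1, n + 1)
                 for x3 in range(x2, n + 1) if x1 + x2 > x3)

assert all(triangles(n) == ((n + 1) * (n + 3) * (2 * n + 1)) // 24
           for n in range(1, 25))
print("both identities verified")
```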
|combinatorics|recurrence-relations|
0
Confusion in applying the Implicit function theorem
Consider the following equations $$\begin{cases} 2(x^2+y^2)-z^2=0\\ x+y+z-2=0\end{cases}$$ Prove that the above system of equations defines a unique function $\phi: z\mapsto (x(z),y(z))$ , from a neighborhood $U$ of $z=2$ to a neighborhood $V$ of $(1,-1)$ , and that $\phi\in C^1$ on $U$ . My idea is to use the Implicit Function Theorem. Now I have to check the conditions to apply this theorem! First, set $F(x,y,z)=2(x^2+y^2)-z^2$ and $G(x,y,z)=x+y+z-2$ . Obviously, $F,G\in C^1$ on $R^3$ , $F(1,-1,2)=G(1,-1,2)=0$ . Also, $D_zF(1,-1,2)=-4\neq 0, D_zG(1,-1,2)=1\neq 0$ . According to the Implicit Function Theorem, there exists a unique $z=f(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $F(x,y,z)=0$ , and a unique $z=g(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $G(x,y,z)=0$ . Does this imply there is a unique function $\phi: z\mapsto (x(z),y(z))$ , from a neighborhood of $z=2$ to a neighborhood $V$ of $(1,-1)$ , with $\phi\in C^1$ on $U$ ?
No, you must check that the $2\times 2$ Jacobian matrix $\dfrac{\partial (F,G)}{\partial (x,y)}$ is invertible at the given point $(1,-1,2)$ . With regard to your final question, read the statement of the Implicit Function Theorem very carefully. What specific question do you have?
|real-analysis|calculus|multivariable-calculus|implicit-function-theorem|
0
Does $A/B \cong C/D$ and $B \cong D$ imply $A \cong C$?
Say that for some group $A$ who has a normal subgroup $B$ , and for some group $C$ who has a normal subgroup $D$ , we know that $A/B$ is isomorphic to $C/D$ and that $B$ is isomorphic to $D$ . Is $A$ necessarily isomorphic to $C$ ? EDIT: What if there is a homomorphism $\sigma: A \to C$ ?
This is one everyone knows: $n\neq m\implies \Bbb Z/n\Bbb Z\not \cong \Bbb Z/m\Bbb Z, $ but $n\Bbb Z\cong m\Bbb Z.$ But you still see the mistake all the time.
|group-theory|normal-subgroups|group-isomorphism|quotient-group|
0
Technique for generating Lie point symmetries
Consider the following excerpt [screenshot omitted]. I believe that there is something wrong with this text. In particular, how is $$\Delta=0 \quad \Longrightarrow \quad V(\Delta)=0$$ completely non-trivial, given the linearity of operators? Moreover, I do not understand how this is used to find Lie point symmetries. More context For context, the text also mentions [screenshot omitted], so I assume that we plug such $V$ in $$ V(\Delta)=0$$ and then try to solve for the coefficients. But again, I do not understand why all coefficients don't simply work. Question: Why is $$\Delta=0 \quad \Longrightarrow \quad V(\Delta)=0$$ not true for any $V$ ? Moreover, can someone give me a simple example of how this technique is used to generate Lie point symmetries?
The implication $\Delta = 0 \Longrightarrow V(\Delta) = 0$ is not trivially true, despite the linearity of $V$ , because the object $\Delta$ has to be understood as a formal expression representing the differential equation for $u$ itself (when set to zero). As an element of the tangent space, the vector field $V$ is defined by its action on the space of smooth functions of $(x,u,u_x,\ldots)$ but not on the space where $\Delta$ lives, in such a way that the linearity of $V$ doesn't have to be guaranteed. Here is a (counter-)example with the differential equation $\Delta[x,u,u'] = u' + xu = 0$ , which is solved by $u(x) = Ce^{-\frac{1}{2}x^2}$ , with $C$ a constant. For example, the translation $x \to x + \varepsilon$ , represented by $g^\varepsilon = e^{\varepsilon V}$ with $V = \partial_x$ , is not a symmetry of this equation of motion because of the $x$ prefactor. Then, one has $V(\Delta) = \partial_x\Delta = u' + u$ , which doesn't vanish for the same solution $u$ .
|ordinary-differential-equations|partial-differential-equations|lie-algebras|integrable-systems|
0
Proving my IVP for a Piecewise Decay Function (Diff Eq)
Setup So... I kinda handled most of my proof but I need help with some of the stuff I just kinda went with until it worked out. The problem relates to medicine and its decay in the body. We are given that the medicine will release over a period of $b$ hours and another dose is given at time $T$ . Known Values Decay constant is 1. Each dose contains 1 gram of medicine. We know that $b=\frac{5}{4}$ & $T=\frac{5}{2}$ . We are also given that $y(0)=0$ , $y$ being the amount of medicine in the body at time $t$ . Finally we're given the simple equation $rate=rate_{in}-rate_{out}$ . Looking at our values we find that $\frac{4}{5}$ grams are released per hour over the course of $\frac{5}{4}$ hours. Using this and our value of $T$ it's possible to make a piecewise function for the release $(rate_{in})$ of the medicine, $g(t)$ . This function is "on" over the intervals $0 < t < \frac{5}{4}$ & $\frac{5}{2} < t < \frac{15}{4}$ (note that $\frac{15}{4}$ just comes from $\frac{5}{2}+\frac{5}{4}$ ) and is "off" (to explain only having decay) be
Introducing some formalism, calling the unit step as $\unicode{x1D7D9}(t)$ we have that the medicine delivery is made as $$ g(t) = \frac 1b \sum_{k=0}^n \left(\unicode{x1D7D9}(t-kT)-\unicode{x1D7D9}(t-kT-b)\right) $$ so the differential relationship reads $$ y'(t) = g(t) - \gamma y(t) $$ with Laplace transform $$ (s+\gamma) \hat y(s) = \frac 1{bs}(1-e^{-b s})\sum_{k=0}^n e^{-k T s} + y_0 $$ If $n\to\infty$ then the transformed ode reads $$ (s+\gamma) \hat y(s) = \frac 1{bs}\left(\frac{1-e^{-b s}}{1-e^{-Ts}}\right) + y_0 $$ and then $$ \hat y(s) = \frac{1}{b(s+\gamma)s}\left(\frac{1-e^{-b s}}{1-e^{-Ts}}\right)+\frac{y_0}{s+\gamma} $$ with a well defined inverse. $$ y(t) = y_0e^{-\gamma t}+\frac{1}{\gamma b}\sum_{k=0}^{\infty}\left(\left(1-e^{-\gamma (t-k T)}\right) \unicode{x1D7D9}(t-k T)-\left(1-e^{-\gamma (t-k T-b)}\right) \unicode{x1D7D9}(t-k T-b)\right) $$
|ordinary-differential-equations|solution-verification|exponential-function|
0
Proving Euler product related to Riemann zeta function
Let $\omega(n)$ denote the number of prime factors of a positive integer $n$ . Prove that \begin{equation}\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}=\frac{\zeta^2(s)}{\zeta(2s)}\end{equation} My attempt: note that $2^{\omega(n)}$ is multiplicative, since $\omega(n)$ is clearly additive. Therefore the Dirichlet series on the LHS of the question admits an Euler product as follows: \begin{align}\sum_{n=1}^{\infty}\frac{2^{\omega(n)}}{n^s}&=\prod_p\left(\sum_{k=0}^{\infty}\frac{2^{\omega(p^k)}}{p^{ks}}\right)\\&=\prod_p\left(\sum_{k=0}^{\infty}\frac{2^{k}}{p^{ks}}\right)\\&=\prod_p\left(\sum_{k=0}^{\infty}\left(\frac{2}{p^s}\right)^k\right)\\&=\prod_p\left(\frac{p^s}{p^s-2}\right)\\&=\prod_p\left(1+\frac{2}{p^s-2}\right).\end{align} I can't see how to deduce the result though. The solution in the book (Ram Murty, Problems in Analytic Number Theory) seems to follow the same outline as my attempt, but in a slightly different way which I don't quite understand. Their solution is that s
As commenters have pointed out, "the number of prime factors of $n$ " can be ambiguous: does it mean the number of distinct prime factors, or the number of prime factors counted with multiplicity. It turns out that it's currently standard in analytic number theory to use $\omega(n)$ to denote the number of distinct prime factors of $n$ , and to use $\Omega(n)$ to denote the number of prime factors of $n$ counted with multiplicity. So the obstacle is that the OP actually worked (correctly) with $\Omega(n)$ , while the intended question and solution are working with $\omega(n)$ .
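A numeric check of the intended identity, with $\omega$ computed by a sieve (a Python sketch; at $s=2$ the right-hand side is $\zeta(2)^2/\zeta(4)=5/2$):

```python
import numpy as np

N = 10**6
omega = np.zeros(N + 1, dtype=np.int64)
for p in range(2, N + 1):
    if omega[p] == 0:            # no prime has hit p yet, so p is prime
        omega[p::p] += 1         # each multiple gains one distinct prime factor

n = np.arange(1, N + 1, dtype=np.float64)
lhs = np.sum(2.0 ** omega[1:] / n**2)
print(lhs, 2.5)                  # zeta(2)^2 / zeta(4) = (pi^4/36)/(pi^4/90) = 5/2
```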
|real-analysis|number-theory|elementary-number-theory|analytic-number-theory|
1
Prove that $a * b = a + b - ab$ defines a group operation on $\Bbb R \setminus \{1\}$
So, basically I'm taking an intro into proofs class, and we're given homework to prove something in abstract algebra. Being one that hasn't yet taken an abstract algebra course I really don't know if what I'm doing is correct here. Prove: The set $\mathbb{R} \backslash \left\{ 1 \right\}$ is a group under the operation $*$, where: $$a * b = a + b - ab, \quad \forall \,\, a,b \in \mathbb{R} \backslash \left\{ 1 \right\} .$$ My proof structure: After reading about abstract algebra for a while, it looks like what I need to show is that if this set is a group, it has to satisfy associativity with the operation, and the existence of an identity and inverse element. So what I did was that I assumed that there exists an element in the set $\mathbb{R} \backslash \left\{ 1 \right\}$ such that it satisfies the identity property for the set and another element that satisfies the inverse property for all the elements in the set. However I'm having trouble trying to show that the operation is indee
First and foremost you have to prove that " $*$ " is well defined, namely that: $$a,b\ne1\Longrightarrow a+b-ab\ne1$$ or, equivalently, that: $$a+b-ab=1\Longrightarrow (a=1)\vee(b=1)$$ And this is indeed the case, as: $$a+b-ab=1\iff a(1-b)=1-b$$ which splits into the two cases: $b=1\wedge a\in\mathbb R$ , or (exclusive) $b\in\mathbb R\setminus\{1\}\wedge a=1$ once noted that, for general propositions $P$ and $Q$ , $P\oplus Q\Longrightarrow$ $P\vee Q$ .
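Once well-definedness is settled, the remaining axioms are mechanical; a randomized Python sanity check (not a proof; the identity is $0$ and the inverse of $a$ is $a/(a-1)$, which you can verify directly):

```python
import random

op = lambda a, b: a + b - a * b

for _ in range(10_000):
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    if min(abs(a - 1), abs(b - 1), abs(c - 1)) < 1e-3:
        continue                                            # stay safely inside R \ {1}
    assert abs(op(op(a, b), c) - op(a, op(b, c))) < 1e-8    # associativity
    assert abs(op(a, 0) - a) < 1e-12                        # 0 is the identity
    assert abs(op(a, a / (a - 1))) < 1e-8                   # a/(a-1) inverts a
    assert op(a, b) != 1                                    # closure
```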
|abstract-algebra|group-theory|solution-verification|proof-writing|
0
Give an interpretation where a predicate logic formula is true
Give an interpretation where $$∃x(\neg P(x) ∨ Q(x)) \to (∃xP(x) ∧ ∀x\neg Q(x))$$ is false. How does someone even begin with questions like this? I have interpreted it in my head and I kind of get it in a sense. But seems like the only thing I know is that since it is an implication, the only way it will be false is if True -> False. Can someone please help me continue? This question is part of old exams I am solving.
The main strategy for finding a counter-model for a sentence in FOL is similar to the strategy you use to factor equations in algebra. You’re looking for a way to show a formula is falsifiable similarly to how you look for numbers and roots of polynomials that produce a polynomial expression. It takes practice to learn the basics, and using straight-up model theory oftentimes confuses new learners since it has a lot of moving parts that are needed for a full semantics, but aren’t really needed to understand why a formula is or isn’t satisfiable. To that end, I prefer to use the method of semantic tableaux. There is always a way to make a counter-model based off of a semantic tableaux, but it is not always going to be the simplest. You can read about it here: https://en.m.wikipedia.org/wiki/Method_of_analytic_tableaux . I’ll provide an example with the formula in question using a system similar to what is presented on Wikipedia. The ‘X’ to the left of a formula indicates that all availa
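For this particular formula, even a blind search over one-element interpretations finds a counter-model (a Python sketch; predicates over a finite domain are encoded as tuples of booleans):

```python
from itertools import product

def falsifies(P, Q):
    dom = range(len(P))
    ante = any((not P[x]) or Q[x] for x in dom)
    cons = any(P[x] for x in dom) and all(not Q[x] for x in dom)
    return ante and not cons

# One-element domains already suffice:
hits = [(P, Q) for P in product([False, True], repeat=1)
               for Q in product([False, True], repeat=1) if falsifies(P, Q)]
print(hits[0])   # ((False,), (False,)): P and Q both empty falsifies the formula
```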
|logic|propositional-calculus|computer-science|first-order-logic|
1
Exercise 6, Section 47 of Munkres’ Elements of Algebraic Topology
I've been reading through the section on cohomology of Munkres' elements of algebraic topology book and I'm having some problems trying to solve one exercise (Exercise 6 of section 47). The exercise asks to compute the cohomology of the 5-fold dunce cap $X$ (which is defined as a pentagon with all its sides identified) with $\mathbb Z$ and $\mathbb Z/5$ coefficients, and then triangulate $X$ to find cocycles that generate the cohomology. I'm able to compute the cohomology calculating first the homology and then using the universal coefficient theorem. I got that $H^0(X,\mathbb Z)=\mathbb Z$ , $H^2(X,\mathbb Z)=\mathbb Z/5$ , and zero otherwise, and also that $H^n(X,\mathbb Z/5)$ is nonzero only if $n=0,1,2$ , in which case equals $\mathbb Z/5$ . However, I don't know how to proceed to find the cocyles that generate these groups. Any help would be appreciated.
Use the most obvious triangulation. I have left edges unlabelled for legibility; there are two vertices, $v$ and the central $w$ , five faces $\sigma_\bullet$ and five unmarked edges which run $v\to w$ , where $e_1$ is thought to be the base (the " $d_2$ " face) of $\sigma_1$ , etc. and a sixth unmarked edge $e_0$ which is the identified edge as per the green arrows. This is the " $d_1$ " face of every one of the $\sigma_\bullet$ ; I chose them oriented in this manner. Remember this is all a code for: take those simplices (maps $\Delta^k\to Y$ ) and compose them with the quotient map $Y\twoheadrightarrow X$ down to the dunce cap. This surely triangulates and allows a very easy computation of both cohomology and generating simplicial cocycles. Identifying $\hom(\Bbb Z,\Bbb Z)\cong\Bbb Z$ in the most canonical way; $\phi\sim\phi(1)$ ; we see the cochain complex is identifiable with: $$0\to\Bbb Z^2\overset{\begin{pmatrix}0&0\\-1&1\\-1&1\\-1&1\\-1&1\\-1&1\end{pmatrix}}{\longrightarrow}\Bbb
|algebraic-topology|homology-cohomology|
0
Why is the difference of consecutive primes from Fibonacci sequence divisible by $4$?
The primes represented in the Fibonacci sequence are written in the form $6n + 1$ and $6n -1$ , respectively. $$5=6\times1-1$$ $$13=6\times2+1$$ $$89=6\times15-1$$ $$233=6\times39-1$$ $$1597=6\times266+1$$ $$28657=6\times4776+1$$ $$514229=6\times85705-1$$ $$433494437=6\times 72249073-1$$ $$2971215073=6\times495202512+1$$ This is the part I don't know how to prove. I have found that $n$ is an odd number when the prime number is written in the form $6n -1$ and $n$ is an even number when the prime number can be written in the form $6n + 1$ . If the above is true then it is clear that the difference between two prime numbers from the Fibonacci sequence bigger than $4$ is divisible by $4$ . I wrote down all possible forms of the differences of two primes. $n$ and $m$ are even: $6n + 1 - (6m + 1) = 6n + 1 - 6m - 1 = 6n - 6m = 6 (n - m) = 2 \times 3 \times (n - m)$ Since $n$ and $m$ are even numbers, their difference is also even, which means that the difference of the two prime numbers in this case is divis
To expand on the comments: All Fibonacci primes $>3$ are of the form $4k+1$ . To prove this, suppose it were otherwise. That is, suppose we had an index $n$ for which $F_n\equiv 3 \pmod 4$ . We wish to show that $F_n$ is composite. But, The sequence $\{F_n\}$ is periodic $\pmod 4$ with cycle of length $6$ : $\{0,1,1,2, 3, 1\}$ . Thus, $F_n\equiv 3\pmod 4\implies n\equiv 4\pmod 6$ . In particular, $n$ must be even. Hence, $n$ is not a prime, which implies that $F_n$ is not a prime (other than $n=4$ ) since $n=2k\implies F_k\,|\,F_n$ and $F_k>1$ if $k>2$ . Note: I expected this to be a duplicate, but was unable to find a match. If somebody can produce a duplicate, I'll delete this.
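A quick computational confirmation of both facts used above (a Python sketch; `sympy.isprime` is only a convenience):

```python
from sympy import isprime

fib = [0, 1]
while len(fib) < 60:
    fib.append(fib[-1] + fib[-2])

print([f % 4 for f in fib[:13]])   # [0,1,1,2,3,1, 0,1,1,2,3,1, 0]: period 6

fib_primes = [f for f in fib if f > 3 and isprime(f)]
print(fib_primes[:6])              # 5, 13, 89, 233, 1597, 28657, ...
print(all(p % 4 == 1 for p in fib_primes))   # True: all are 4k + 1
```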
|sequences-and-series|prime-numbers|fibonacci-numbers|difference-sets|
0
Prove that sum of integrals $= n$ for argument $n \in \mathbb{N}_{>1}$
ORIGINAL QUESTION (UPDATED): I have a function $f:\mathbb{R} \rightarrow \mathbb{R}$ containing an integral that involves the floor function: $$f(x):= - \lfloor x \rfloor \int_1^x \lfloor t \rfloor x \left( -\pi t x \left( \frac{\cos \left(\pi (1-t) x \right)}{\pi (1-t) x} - \frac{\sin \left(\pi (1-t) x \right)}{\pi^2 (1-t)^2 x^2} \right) + \text{sinc}\left(\pi (1-t) x \right) \right) dt$$ with $x>1$ a real number. I can calculate numerical values of $f$ in Mathematica using NIntegrate . Heuristically, it looks as though $f$ evaluates to $x$ whenever $x$ is an integer. I want to know if this is correct, and if so, how do I prove it? Since I am only looking at integer values for the argument, make the substitution $x \rightarrow n$ with $n \in \mathbb{N}_{>1}$ . Given the discontinuities in the integrand, a logical approach is to write $$ f(n) = \\ - n \lim_{\epsilon \rightarrow 0+} \sum_{k=1}^{n-1} \int_{k + \epsilon}^{k + 1 - \epsilon} \lfloor t \rfloor n \left( -\pi t n \left( \frac{
Let $I$ be the required integral. First, I assume that $\operatorname{sinc}$ mentioned in $I$ is $\sin$ . Second, I think your approach is overcomplicated, because $I=\lim_{\epsilon\to 0+}\int_{1+\epsilon}^x \dots$ , and the integrals $\int_{1+\epsilon}^x\dots $ are proper. Third, in the final sum all summands are continuous functions, equal $0$ when $\epsilon=0$ , but $$1\cdot \frac{-(1 + \epsilon) \sin \pi n (1 + \epsilon - 1)}{(1 + \epsilon - 1)\pi n}=\frac{-(1 + \epsilon) \sin \pi n\epsilon}{\epsilon\pi n}$$ The latter expression tends to $-1$ when $\epsilon$ tends to $0$ , so the total sum indeed tends to $n$ .
|real-analysis|integration|summation|indefinite-integrals|ceiling-and-floor-functions|
1
Using Euclidean geometry, how to find $x$?
This question comes from a friend exam that I'm helping to review. I've been trying hard but can't find the answer. Using Euclidean geometry, how to find the angle $x$ ? I've been able to work out all the angles based on $x$ and $180^{\circ} $ , but then I got stuck. Here's my calculation: I named the center point as $E$ $$ \angle ABD = 180^{\circ} - 6x \\ \angle ABC = 180^{\circ} - 3x $$ $$\angle ABC = \angle ABD + \angle DBC \\ 180^{\circ} - 3x = 180^{\circ} - 6x + \angle DBC $$ $$ \begin{align} \angle DBC &= 3x \\ \angle BEC &= 180^{\circ} - 4x\\ \angle BDC &= 180^{\circ} - 5x \end{align}$$ $$ \angle DEC = 180^{\circ} - \angle ACD - \angle BDC = 180^{\circ} - (x + 180^{\circ} - 5x) = 4x $$ Would anyone be able to help me with this question?
As already shown, $BD=AB$ , also since $\widehat{DAC}=\widehat{ACB}=x$ we infer $BC\parallel AD$ , hence $\widehat{DBC}=\widehat{ADB}=3x$ . Take $E$ reflection of $D$ about $BC$ , it belongs to $AB$ and $BE=BD$ , thus, since $AB=BD=BE$ , $\widehat{ADE}=90^\circ$ , thus $\widehat{CDE}=\widehat{CED}=90^\circ-2x$ , thence the circumcenter of $\triangle CAE$ lies onto $ED$ ; with $AD=CD$ we infer that $D$ is the circumcenter of $\triangle CAE$ , wherefrom $\widehat{BAD}=\widehat{BED}=45^\circ$ , hence $3x=45^\circ, x=15^\circ$ . Best regards,
|geometry|euclidean-geometry|triangles|
0
Are "infinitesimal rotations" commutative? If so, which mathematical fact allows it?
I was reading Moysés Nussenzveig's "Basic Physics Course 1" when I came across this excerpt in chapter 11, about rotations and angular momentum, in section 11.2, vector representation of rotations: We could then think about associating a vector “θ” to a rotation through the angle θ, the direction of this vector being given by the direction of the axis. We have already seen, however (Fig. 3.12), that the quantity “θ” associated with a finite rotation, although having module, direction and sense, it would not be a vector, as the addition of quantities of this type is not commutative (cf. (3.2.5)). However, if instead of finite rotations we take rotations through infinitesimal angles δθ, we will now see that infinitesimal rotations are commutative and have a vector character. To do this, we will associate a vector with an infinitesimal rotation by the same procedure defined in Sec. 3.2 for finite rotations. I actually understand that rotations don't commute, as they can be represented by
Rotations in $\Bbb{R}^3$ are element of the Lie group $SO(3)$ . Ultimately, it is possible to write a rotation $R_\theta$ as $R_\theta = e^{\theta L}$ , where $L$ is the generator of that rotation of angle $\theta$ . As a side note, $L$ is an element of the Lie algebra associated to $SO(3)$ and can be interpreted physically as an angular momentum operator. Now, let's consider two of these rotations, namely $R_{\theta_1}$ and $R_{\theta_2}$ , with $\theta_1$ and $\theta_2$ being infinitesimal angles, hence $R_\theta = 1 + \theta L + \mathcal{O}(\theta^2)$ by Taylor expansion. In consequence, one has : $$ R_{\theta_1}R_{\theta_2} = \left(1 + \theta_1L_1 + \mathcal{O}(\theta_1^2)\right)\left(1 + \theta_2L_2 + \mathcal{O}(\theta_2^2)\right) = 1 + \theta_1L_1 + \theta_2L_2 + \mathcal{O}(\theta^2) $$ and similarly $$ R_{\theta_2}R_{\theta_1} = \left(1 + \theta_2L_2 + \mathcal{O}(\theta_2^2)\right)\left(1 + \theta_1L_1 + \mathcal{O}(\theta_1^2)\right) = 1 + \theta_1L_1 + \theta_2L_2 + \mathca
|lie-groups|lie-algebras|
0
The function $\log^+x=\max\{1, \log x\}$.
I was reading Marcinkiewicz-Zygmund (MZ) law of large numbers for random fields and came across necessary and sufficient condition $E(|X|\log^+|X|) for MZ-SSLN to hold true. I have a question about this function $\log^+|X|$ . Why don’t they just need condition without this max, that is, $E(|X|\log|X|) ?
@Shyam, you were right after all! It's unnecessary to state it for $\log^+$ instead of $\log$ , as it can be shown easily that $$ \operatorname{E}[|X|\log^+|X|]<\infty \iff \operatorname{E}[|X|\log|X|]<\infty. $$ The reason is that the function $x\mapsto x\log x$ is bounded in $(0,1)$ and it can be continuously extended to $[0,1)$ . Let $Y:=|X|$ ; then note that $$ \operatorname{E}[Y\log Y]=\int_{[0,\infty )}y\log y P_Y(dy)=\int_{[0,e)}y\log y P_Y(dy)+\int_{[e,\infty )}y\log yP_Y(dy). $$ However, as $P_Y$ is a probability measure, it follows that $$ \left| \int_{[0,e)}y\log y P_Y(dy) \right|\leqslant \sup_{y\in[0,e)}|y\log y|\cdot\Pr [0\leqslant Y< e]<\infty. $$ Therefore $\operatorname{E}[|X|\log |X|]<\infty$ if and only if $\int_{[e,\infty )}y\log yP_Y(dy)<\infty$ . A similar result shows that $\operatorname{E}[|X|\log^+|X|]<\infty$ if and only if $\int_{[e,\infty )}y\log yP_Y(dy)<\infty$ , so both conditions are equivalent. However, in probability theory we usually use $\log^+$ instead of $\log$ because $\log^+$ is positive and increasing, so it is a nicer function to apply many measure-theor
|probability|probability-limit-theorems|law-of-large-numbers|
1
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$ Here's my attempt using calculus: Fix $x$ . $P'(y) = 0 \iff y = y_{0} = x\log_5 36 \approx 2.23x.$ Also, $P'(y) > 0$ for $y > y_{0}$ and $P'(y) < 0$ for $y < y_{0}.$ Since $y \in \mathbb{N},$ $P$ reaches its smallest value when $y = 2x$ or $y = 3x$ - that is, when $P = P_1 = |36^x - 5^{2x}|$ or $P = P_2 = |36^x - 5^{3x}|.$ Both $P_1$ and $P_2$ are monotonically increasing over $[1, \infty),$ so $P_1 \geq |36 - 25| = 11$ and $P_2 \geq |36 - 125| = 89.$ Thus, $P_{min} = 11.$ I suspect there is a shorter way using number theory, but I have not been able to find it. For context, I'm not well-versed in number theory other than having some basic knowledge of divisibility and congruences, but I'm eager to pick up new bits of it as I go. I hope to hear your methods. Thank you! EDIT : I just realized that my calculus solution is not correct. $y$ could be a fraction of $x$ that gives a natural number, not necessarily a multiple of $x
Let's look at $|36^x-5^y|\pmod{180}$ . I chose $180$ because $36\cdot 5=180$ , so it seemed like an interesting modulus to try. Notice that $36^2\equiv 36\pmod{180}$ , so by induction, we can show $36^x\equiv 36\pmod{180}$ for all $x\geq 1$ . Ergo, the expression is equivalent to one of the following: $$ (36-5^y)\pmod{180}\ \ \ \ \text{ or }\ \ \ \ (5^y-36)\pmod{180} $$ depending on whether $36^x or $36^x > 5^y$ . Now, if you look at $5^y\pmod{180}$ for small values of $y$ , you will notice that $5^7\equiv 5\pmod{180}$ . With this, we can prove by induction that $5^{6+y}\equiv 5^y\pmod{180}$ for all $y\geq 1$ , meaning that the expression $5^y\pmod{180}$ is periodic with period $6$ , so if we want to find all values of the expressions $(36-5^y)\pmod{180}$ and $(5^y-36)\pmod{180}$ , it suffices to look at just $y=1,2,3,4,5,6$ : $$ \begin{array}{c|c|c} y & (36-5^y)\pmod{180} & (5^y-36)\pmod{180} \\ \hline 1 & 31 & 149 \\ 2 & 11 & 169 \\ 3 & 91 & 89 \\ 4 & 131 & 49 \\ 5 & 151 & 29 \\ 6 &
|elementary-number-theory|
0
Proving Density for Function Approximation with Hidden Layer Perceptron
I'm working on a problem related to function approximation within the $L^2\left(I_n\right)$ space of square-integrable functions: Problem Statement: Given a lemma without proof: $\textit{Lemma}$ : Let $g \in L^2\left(I_n\right)$ such that $\int_{\mathcal{H}} g(x) d x=0$ , for any half-space $\mathcal{H}:=\{x: w^T x+\theta>0\} \cap I_n$ . Then $g=0$ almost everywhere. Note that by choosing a convenient value for the parameter $\theta$ , the upper half-space may become the entire hypercube. Then, $g$ , considered before, has a zero integral $\int_{I_n} g(x) d x=0$ . The current task is to show that any function $g \in L^2\left(I_n\right)$ can be approximated by the output of a one hidden layer perceptron where the activation function $\sigma(x)$ is the Heaviside step function, defined as: $$ \sigma(x)= \begin{cases}1, & x \geq 0 \\ 0, & x < 0\end{cases} $$ Progress Made So Far: I am examining the use of a one-hidden layer perceptron with Heaviside step function activation for approximating
Here is a fully detailed proof based on your given Lemma, which I will restate below (Lemma) : Let $g \in L^2\left(I_n\right)$ such that $\int_{\mathcal{H}} g(x) d x=0$ , for any half-space $\mathcal{H}:=\{x: w^T x+\theta>0\} \cap I_n$ . Then $g=0$ almost everywhere. You didn't define it, but assuming the standard machine learning setup, we define the family of one hidden layer perceptrons as $$\mathbf F :=\Big\{ f :\mathbb R^n\to\mathbb R,\ x\mapsto \alpha\cdot\sigma(w^Tx +\theta)\mid \alpha,\theta\in\mathbb R,w\in\mathbb R^n\Big\},\tag1$$ where $\sigma \equiv \mathbf 1\{\cdot\ge0\}$ is the Heaviside step function. The goal is to show that $\mathbf F$ is dense in $L^2(I_n)$ (you didn't define it either, but I will assume that $I_n := [0,1]^n$ denotes the unit hypercube and identify elements of $\mathbf F$ with their restriction to $I_n$ ). For the sake of contradiction, assume that $\overline{\mathbf F}$ , the $L^2(I_n)$ -closure of $\mathbf F$ is not equal to $L^2(I_n)
|functional-analysis|measure-theory|hilbert-spaces|approximation-theory|neural-networks|
1
explanation required for the logic of a proof step regarding set membership, conjunction, and implication
This question is asking for an explanation of a step in the following segment of someone else's proof of a textbook exercise regarding set membership, conjunction and implication. Consider the following: $$ (x \in A \land y \in B) \implies (x \in C\land y \in D) $$ Let me check I understand the meaning. It says that if both $x \in A$ and $y \in B$ , then we can conclude that both $x \in C$ and $y \in D$ . Both clauses of the antecedent must be true in order for both clauses of the consequent to be true. The online solution guide then had the following as the next step: $$ (x\in A \implies x \in C) \land (y \in B \implies y \in D) $$ Question: I don't understand how the step was made to this statement. Can anyone explain (for a self-teaching newcomer to maths)? My Thoughts The first statement had pairs of clauses, connected by a conjunction. Both clauses had to be true in the antecedent for the consequent to be true. It so happens the consequent also has paired clauses, connected by a c
I think I understand where your confusion lies. In short, you are not paying attention to the fact that $\,x\,$ and $\,y\,$ are arbitrary. I expound below. To get your answer, first consider a different scenario than yours. Imagine that $\,x=y\,$ . In this scenario, your statement would be $$ \forall x,\qquad x\in A\ \land\ x\in B\ \ \implies\ \ x\in C\ \land\ x\in D\ \ , $$ and this does not imply that $$ \forall x,\qquad \left(x\in A \implies x\in C\right)\land\left(x\in B \implies x\in D\right)\ \ . $$ What is essential to take note of in this scenario is that all the statements are about a single (arbitrary) element $\,x\,$ -- there are no statements about $\,y\,$ . Looking at it from a different angle, the implication is in the form $$ \forall x,\qquad P(x)\ \land\ Q(x)\ \ \implies\ \ M(x)\ \land\ N(x)\ \ , $$ where $P$ , $Q$ , $M$ , and $N$ are functions that output True if $\,x\,$ is in $\,A$ , $B$ , $C$ , and $D$ respectively, and False otherwise. (Look up predicates or indicat
|elementary-set-theory|logic|
0
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$ Here's my attempt using calculus: Fix $x$ . $P'(y) = 0 \iff y = y_{0} = x\log_5 36 \approx 2.23x.$ Also, $P'(y) > 0$ for $y > y_{0}$ and $P'(y) < 0$ for $y < y_{0}.$ Since $y \in \mathbb{N},$ $P$ reaches its smallest value when $y = 2x$ or $y = 3x$ - that is, when $P = P_1 = |36^x - 5^{2x}|$ or $P = P_2 = |36^x - 5^{3x}|.$ Both $P_1$ and $P_2$ are monotonically increasing over $[1, \infty),$ so $P_1 \geq |36 - 25| = 11$ and $P_2 \geq |36 - 125| = 89.$ Thus, $P_{min} = 11.$ I suspect there is a shorter way using number theory, but I have not been able to find it. For context, I'm not well-versed in number theory other than having some basic knowledge of divisibility and congruences, but I'm eager to pick up new bits of it as I go. I hope to hear your methods. Thank you! EDIT : I just realized that my calculus solution is not correct. $y$ could be a fraction of $x$ that gives a natural number, not necessarily a multiple of $x
You have $$|36^x - 5^y|=|6^{2x}-5^y|=|(5+1)^{2x}-5^y|=\left|5^{2x}-5^y+\sum_{k=1}^{2x}\binom{2x}{k}5^{2x-k}\right|$$ It is clear that $(x,y)=(1,2)$ gives the minimum $11$ .
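A quick brute-force check over small exponents (my own addition; the bounds 30 and 90 are arbitrary) is consistent with the claimed minimum:

```python
# Brute-force check (not part of the argument above): search small exponents
# to confirm that |36**x - 5**y| never drops below 11 in this range.
best = None
for x in range(1, 30):
    for y in range(1, 90):
        p = abs(36**x - 5**y)
        if best is None or p < best[0]:
            best = (p, x, y)
print(best)   # (11, 1, 2)
```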
|elementary-number-theory|
0
A game on a rectangular board
Setup Let there be a board looking like a rectangular table. A piece is placed at any square of the board. Two players play a game. They move the piece in turns. The piece can only be moved to an adjacent square (no diagonal moves). The piece can’t be moved to a square that it has already visited (the starting square counts as visited). A player who can’t make a move loses. Who has a winning strategy: the player who makes the first move or their opponent? Motivation This question comes in continuation of this MathSE thread discussing a particular case where the starting square is in the corner of the board. It is proven there (by dividing the board into dominoes) that for an odd area board the second player wins, for an even area board the first player wins. Reasoning We can apply the dominoes argument here, too. If the board has even area, one of its sides has even length. We can divide the board into dominoes along that even side. The first player has the following winning strategy: he m
Yes, the first player wins on an odd x odd board if the piece starts on a gray square (where the board is colored like a checkerboard such that the corners are blue). To describe the first player's winning strategy, tile almost all of the board with dominoes, so that only one corner is uncovered. The first move will be to the other half of the domino containing the starting square. For all subsequent moves, the second player will move into a new domino, and the first player moves to the other square in that domino. Note that the second player can never move onto the uncovered square, because the second player always moves onto a gray square.
|combinatorial-game-theory|
1
Are "infinitesimal rotations" commutative? If so, which mathematical fact allows it?
I was reading Moysés Nussenzveig's "Basic Physics Course 1" when I came across this excerpt in chapter 11, about rotations and angular momentum, in section 11.2, vector representation of rotations: We could then think about associating a vector “θ” to a rotation through the angle θ, the direction of this vector being given by the direction of the axis. We have already seen, however (Fig. 3.12), that the quantity “θ” associated with a finite rotation, although having module, direction and sense, it would not be a vector, as the addition of quantities of this type is not commutative (cf. (3.2.5)). However, if instead of finite rotations we take rotations through infinitesimal angles δθ, we will now see that infinitesimal rotations are commutative and have a vector character. To do this, we will associate a vector with an infinitesimal rotation by the same procedure defined in Sec. 3.2 for finite rotations. I actually understand that rotations don't commute, as they can be represented by
This answer is a little loose in terms of what an infinitesimal means, but I think it provides some intuition. Consider any set of matrices that are really close to identity. In other words, each matrix can be written as $I+\delta_A A$ . If you multiply two of these together, you get: $$ \begin{align} &(I+\delta_A A)(I+\delta_B B)\\ &=I+\delta_A A+\delta_B B + \delta_A A \delta_B B \end{align} $$ As $\delta_A$ and $\delta_B$ get smaller, the last term gets smaller quadratically - since it has a product of these infinitesimals. You can think of this as a linear approximation of the product: $(I+\delta_A A)(I+\delta_B B) \approx I+\delta_A A+\delta_B B$ . It gets more and more accurate as the scale gets smaller. Thinking about it as a linear approximation, you should see how the same kind of thing shows up in many physical systems.
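A small numerical sketch of this (my own illustration; the rotation axes $x$ and $z$ are chosen arbitrarily): the commutator of two rotations by angle $t$ shrinks like $t^2$.

```python
import numpy as np

# The commutator of two small rotations shrinks quadratically with the
# rotation angle, so infinitesimal rotations commute to first order.

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

for t in (1e-1, 1e-2, 1e-3):
    comm = rot_x(t) @ rot_z(t) - rot_z(t) @ rot_x(t)
    print(f"angle = {t:.0e}   ||RxRz - RzRx|| = {np.linalg.norm(comm):.3e}")
# The norm drops ~100x each time the angle drops 10x: it is O(angle^2).
```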
|lie-groups|lie-algebras|
0
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$
$x, y \in \mathbb{N} \setminus \{0\}.$ Find the smallest value of $P = |36^x - 5^y|.$ Here's my attempt using calculus: Fix $x.$ $P'(y) = 0 \iff y = y_{0} = x\log_5 36 \approx 2.23x.$ Also, $P'(y) > 0$ for $y > y_{0}$ and $P'(y) < 0$ for $y < y_{0}.$ Since $y \in \mathbb{N},$ $P$ reaches its smallest value when $y = 2x$ or $y = 3x$ - that is, when $P = P_1 = |36^x - 5^{2x}|$ or $P = P_2 = |36^x - 5^{3x}|.$ Both $P_1$ and $P_2$ are monotonically increasing over $[1, \infty),$ so $P_1 \geq |36 - 25| = 11$ and $P_2 \geq |36 - 125| = 89.$ Thus, $P_{min} = 11.$ I suspect there is a shorter way using number theory, but I have not been able to find it. For context, I'm not well-versed in number theory other than having some basic knowledge of divisibility and congruences, but I'm eager to pick up new bits of it as I go. I hope to hear your methods. Thank you! EDIT : I just realized that my calculus solution is not correct. $y$ could be a fraction of $x$ that gives a natural number, not necessarily a multiple of $x
Inspired by Nobie Mushtak's solution... We have $$P\equiv\pm 1\pmod2$$ $$P\equiv\pm 1\pmod3$$ $$P\equiv\pm 1\pmod5$$ The smallest such $P$ is $11$ , which occurs when $x=1$ , $y=2.$ $P=1$ is not possible due to Mihailescu's theorem.
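For a quick sanity check of these congruences (my own snippet; the loop bounds are arbitrary):

```python
# Verify that |36**x - 5**y| is always +-1 modulo 2, 3 and 5, so the only
# candidates below 11 would be P = 1, which Mihailescu's theorem excludes.
for x in range(1, 6):
    for y in range(1, 12):
        p = abs(36**x - 5**y)
        assert p % 2 == 1 and p % 3 in (1, 2) and p % 5 in (1, 4)
print("residues check out; smallest admissible values are 1 and 11")
```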
|elementary-number-theory|
0
Definition of Schubert Variety
Let $V$ be a full flag, $\lambda$ a partition. Consider $$\sigma_\lambda(V) = \{ \Lambda \in G(k,n): \dim(\Lambda \cap V_{n-k+i-\lambda_i}) \geq i \}.$$ If you have another full flag $V'$ , are $\sigma_\lambda(V)$ and $\sigma_\lambda(V')$ isomorphic to each other? It seems that in intersection theory, they only care about the partition and not about the flag. Why is that? Thanks.
Yes, $GL_n$ acts on the Grassmannian by linear change of coordinates, which changes the choice of auxiliary flag. For intersection theory, it also matters that $GL_n$ is rationally connected. That is, the two subvarieties are not only isomorphic but rationally equivalent (essentially the algebraic version of being homotopic).
|algebraic-geometry|intersection-theory|schubert-calculus|
0
Is every extreme point in a compact convex set contained in a defining supporting hyperplane?
Let $K \subseteq X$ be a compact convex subset of a locally convex space $X$ . Let $k \in K$ be an extreme point. Question 1: Does there exist a supporting hyperplane of $X$ containing $k$ ? I think the answer is “yes” via some Hahn-Banach argument, although I’m a little confused about this at the moment. But what I really want to know is the following: Question 2: Suppose that $K = \cap_i H_i$ where each $H_i$ is a closed half-space. EDIT: Suppose also that for each $i$ the face $K \cap \partial(H_i)$ is not empty. end EDIT Then is $k$ contained in the boundary of some $H_i$ ? That is, assuming the answer to Question 1 is “yes”, I want to know whether I can guarantee that the supporting hyperplane can be chosen from a list of hyperplanes I already have. Notes: I’m aware that the extreme point $k$ doesn’t have to be exposed — i.e. it need not be the case that $\{k\} = K \cap Y$ for some supporting hyperplane $Y$ . But I want to know whether we have $\{k\} \subseteq K \cap Y$ for some s
The answer to question 2 is: "no" even in $X = \mathbb R$ . Indeed, $$ [0,\infty) = \bigcap_{n \in \mathbb N} [-1/n, \infty) $$ and the extreme point $0$ is not a boundary point of any $[-1/n, \infty)$ . After the edit, we need at least two dimensions, I guess: Take $X = \mathbb R^2$ and $$ K = \bigcap_{q \in \mathbb Q \cap [0,2 \pi]} \{ x \in \mathbb R^2 \mid x_1 \cos(q) + x_2 \sin(q) \le 1\}. $$ That is, we write the closed unit disk as a countable intersection of half planes. Note that the boundary of each half plane is tangent to the unit circle, so each face $K \cap \partial(H_i)$ is nonempty.
|functional-analysis|convex-analysis|compactness|convex-geometry|locally-convex-spaces|
1
Why is the difference of consecutive primes from Fibonacci sequence divisible by $4$?
The primes represented in the Fibonacci sequence are written in the form $6n + 1$ and $6n -1$ , respectively. $$5=6\times1-1$$ $$13=6\times2+1$$ $$89=6\times15-1$$ $$233=6\times39-1$$ $$1597=6\times266+1$$ $$28657=6\times4776+1$$ $$514229=6\times85705-1$$ $$433494437=6\times 72249073-1$$ $$2971215073=6\times495202512+1$$ This is the part I don't know how to prove. I have found that $n$ is an odd number when the prime number is written in the form $6n -1$ and $n$ is an even number when the prime number can be written in the form $6n + 1$ . If the above is true then it is clear that the difference between two prime numbers from the Fibonacci sequence bigger than $4$ is divisible by $4$ . I wrote down all possible forms of the differences of two primes. $n$ and $m$ are even: $6n + 1 - (6m + 1) = 6n + 1 - 6m - 1 = 6n - 6m = 6 (n - m) = 2 \times 3 \times (n - m)$ Since $n$ and $m$ are even numbers, their difference is also even, which means that the difference of the two prime numbers in this case is divis
There are plenty of relations with Fibonacci numbers, for example $$\forall n\geq 1,F_{2n}=F_{n+1}F_n+F_nF_{n-1}=F_n(F_{n+1}+F_{n-1})$$ which explains @lulu's answer. From his response, we know that $F_{2n}$ is not prime; The remainders when you do Euclidean division by $4$ are $$0,1,1,2,3,1,\color{red}{0,1,1,2,3,1,},0,1,1,2,3,1,...$$ Let's apply this last idea when you do the Euclidean division by $6$ : we obtain a periodicity of $24$ $$0,\boxed{1,1},2,3,\boxed{5},\color{blue}{2,\boxed1,3,4,\boxed{1,5}},0,\boxed{5,5},4,3,\boxed{1},\color{blue}{4,\boxed5,3,2,\boxed{5,1}}$$ Prime Fibonacci numbers necessarily have remainders $1$ or $5$ , as you said. There you have it, you have enough information to conclude that what you have said is true: $\boxed{\text{$n$ is an odd number when the prime number is written in the form $6n -1$}}$ $\boxed{\text{$n$ is an even number when the prime number can be written in the form $6n + 1$}}$
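A short script reproducing these residue patterns (my own sketch; `fib_mod` is a hypothetical helper name):

```python
# The Fibonacci sequence is periodic modulo 4 (period 6) and modulo 6
# (period 24); these are exactly the residue strings quoted above.
def fib_mod(m, count):
    a, b, out = 0, 1, []
    for _ in range(count):
        out.append(a)
        a, b = b, (a + b) % m
    return out

print(fib_mod(4, 12))  # [0, 1, 1, 2, 3, 1, 0, 1, 1, 2, 3, 1]
print(fib_mod(6, 24))  # [0, 1, 1, 2, 3, 5, 2, 1, 3, 4, 1, 5,
                       #  0, 5, 5, 4, 3, 1, 4, 5, 3, 2, 5, 1]
```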
|sequences-and-series|prime-numbers|fibonacci-numbers|difference-sets|
0
Non-Abelian groups exact sequences, right split and left-split are different?
I am learning about exact sequences that split, in the context of modules. In this context, as I understand it, sequences that split on the left are the same as sequences that split on the right. But in non Abelian groups, is there an easy example of an exact sequence $1 \rightarrow G \rightarrow H \rightarrow K \rightarrow 1$ that splits on the right and not on the left, and one that splits on the left but not on the right?
In the full category of groups, $\bf Grp,$ left split and right split aren't equivalent. If a short exact sequence is left split, then it is right split and the middle group is a direct product. There's a little example on Wikipedia of one that is right but not left split, with $S_3$ and $A_3.$ But any semidirect product that is not a direct product would work. An important example is the semidirect product $G=H\ltimes N.$ It's equivalent to having a short exact sequence $$1\to N\to G\to H\to 1$$ together with a section $H\to G$ , i.e. a homomorphism whose composite with the projection is the identity on $H.$ Such a sequence is right split by construction, but not always left split. This is also called a group extension ( $G$ is an extension of $H$ by $N$ ).
|abstract-algebra|exact-sequence|
0
How to find multiple solutions for 3 variable, 2 degree Diophantine equation?
I have the equation $x^2+xy+y^2=z^2$ to solve in natural numbers. Treating it as a quadratic in $x$ , the discriminant $D=4z^2-3y^2$ must be a perfect square (the code below uses the symmetric form $4z^2-3x^2$ and solves for $y$ ). I wrote a Python program to get solutions for $1 \le x < z < 10^2$ by enumeration.

```python
def Solution():
    A = []
    nMaximum = 10**2
    for x in range(1, nMaximum):
        dTemp1a = 3 * x**2
        for z in range(x + 1, nMaximum):
            dDiscriminant = 4 * z**2 - dTemp1a   # discriminant of the quadratic in y
            dTemp5 = int(dDiscriminant**0.5)
            if dTemp5**2 != dDiscriminant:
                continue
            dTemp6 = (-1 * x + dTemp5) / 2
            y = int(dTemp6)
            if not CheckIfExists(A, z):
                A.append([x, y, z])
    return A

def CheckIfExists(arr, z):
    bResult = False
    for s in arr:
        if s[2] == z:
            bResult = True
            break
    return bResult

a = Solution()
print(len(a))
print(a)
# [3, 5, 7], [5, 16, 19], [6, 10, 14], [7, 8, 13] ...
```

Three variable, second degree diophantine equation doesn't explain how to get other solutions when we know the first solution $(3,5,7)$ Could you give me a hint ? UPDATE asked by Shean: I need to get all solutions based on $(3,5,7)$ . See my question as the example of what I am looking for: First 30 solutions
Can't fit this into a comment so I'll make it an answer. Firstly, your question isn't clear. Three variable, second degree diophantine equation doesn't explain how to get other solutions when we know the first solution $(3,5,7)$ If you are expecting to generate ALL triples from $(3,5,7)$ then I don't think this is possible (actually it may be possible, see Will's comment). If you simply want more solutions then take the triple $(3k,5k,7k)$ for any positive integer $k$ . Now if you're wondering why formulas such as \begin{equation} x=m^2-n^2\\ y=2mn+n^2\\\tag{1} z=m^2+mn+n^2 \end{equation} where $n < m$ and $\gcd(m,n)=1$ , produce fewer results than simply plugging in values for $x$ and $y$ and checking if $x^2+xy+y^2$ is a square: it is because $(1)$ generates primitive solutions, that is $\gcd(x,y,z)=1$ . (Note: $(1)$ sometimes produces non-primitive solutions; to remedy this, simply divide $x,y,z$ by $\gcd(x,y,z)$ .) Some example code using the system $(1)$ , producing only primitive solutions:
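Since the original snippet did not survive, here is a sketch of what code based on $(1)$ might look like (my own reconstruction; the function name and the bound are arbitrary):

```python
from math import gcd

# Generate solutions of x^2 + x*y + y^2 = z^2 from system (1),
# reducing by gcd so that only primitive triples are kept.
def primitive_triples(limit):
    seen = set()
    for m in range(2, limit):
        for n in range(1, m):
            if gcd(m, n) != 1:
                continue
            x, y, z = m*m - n*n, 2*m*n + n*n, m*m + m*n + n*n
            g = gcd(gcd(x, y), z)
            x, y, z = x // g, y // g, z // g
            assert x*x + x*y + y*y == z*z
            seen.add(tuple(sorted((x, y))) + (z,))
    return sorted(seen, key=lambda t: t[2])

for t in primitive_triples(8)[:6]:
    print(t)   # (3, 5, 7) appears first, then (7, 8, 13), (5, 16, 19), ...
```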
|diophantine-equations|
0
Confusion in applying the Implicit function theorem
Consider the following equations $$\begin{cases} 2(x^2+y^2)-z^2=0\\ x+y+z-2=0\end{cases}$$ Prove that the above system of equations defines a unique function $\phi: z\mapsto (x(z),y(z))$ , from a neighborhood $U$ of $z=2$ to a neighborhood $V$ of $(1,-1)$ , and $\phi\in C^1$ on $U$ . My idea is to use the Implicit Function Theorem. Now I have to check the conditions to apply this theorem! First, let us set $F(x,y,z)=2(x^2+y^2)-z^2$ and $G(x,y,z)=x+y+z-2$ . Obviously, $F,G\in C^1$ on $R^3$ , $F(1,-1,2)=G(1,-1,2)=0$ . Also, $D_zF(1,-1,2)=-4\neq 0, D_zG(1,-1,2)=1\neq 0$ . According to the Implicit Function Theorem, there exists a unique $z=f(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $F(x,y,z)=0$ , and a unique $z=g(x,y)$ defined for $(x,y)$ near $(1,-1)$ s.t. $G(x,y,z)=0$ . Does this imply there is a unique function $\phi: z\mapsto (x(z),y(z))$ , from a neighborhood of $z=2$ to a neighborhood $V$ of $(1,-1)$ , with $\phi\in C^1$ on $U$ ?
In this particular case you can solve the system explicitly. Namely $$ z^2+4xy=2(x+y)^2=2(2-z)^2$$ Thus $$xy={1\over 4}z^2-2z+2,\ x+y=2-z$$ By the Vieta formulas, $x,y$ are the solutions of the quadratic equation $$ u^2-(2-z)u+\left ({1\over 4}z^2-2z+2\right )=0$$ The discriminant equals $4z -4.$ Therefore $$x,y ={1\over 2}\left[2-z\pm 2\sqrt{z-1}\right]$$ Taking into account the assumptions $x(2)=1$ and $y(2)=-1$ gives $$x=1-{z\over 2}+\sqrt{z-1},\ y=1-{z\over 2}-\sqrt{z-1}$$
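A quick symbolic sanity check of this closed form (my own verification, using sympy):

```python
import sympy as sp

# Plug x(z), y(z) back into both equations and check the values at z = 2.
z = sp.symbols('z', positive=True)
x = 1 - z/2 + sp.sqrt(z - 1)
y = 1 - z/2 - sp.sqrt(z - 1)

print(sp.expand(2*(x**2 + y**2) - z**2))   # 0
print(sp.expand(x + y + z - 2))            # 0
print(x.subs(z, 2), y.subs(z, 2))          # 1, -1
```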
|real-analysis|calculus|multivariable-calculus|implicit-function-theorem|
0
Uniqueness and continuous dependence on the data of Heat equation.
Let two smooth functions $v_1$ and $v_2$ both satisfy the system $$\partial_t{v}-\Delta v=f \quad \text{in} \quad U \times (0,\infty), $$ $$v = g \quad \text{on} \quad \partial U \times (0,\infty),$$ for some fixed given smooth $f: \bar{U}\times (0,\infty) \rightarrow \mathbb{R}$ and $g: \partial U \times (0,\infty) \rightarrow \mathbb{R}.$ Here $U \subset \mathbb{R}^n$ is open and bounded. Show that $$\sup_{x \in U} |v_1(t, x) − v_2(t, x)| \rightarrow 0,$$ as $t \rightarrow \infty.$ This is my work: Let $ u =v_1 -v_2;$ it is sufficient to prove $\sup_{x \in U} |u(x,t)| \rightarrow 0$ as $t \rightarrow \infty. \ (1)$ $u$ obeys the system $$\partial_t{u}-\Delta u=0 \quad \text{in} \quad U \times (0,\infty), $$ $$u = 0 \quad \text{on} \quad \partial U \times (0,\infty).$$ Multiply both sides by $u\,|u|^{2(m-1)},$ and note that $\partial_t(|u|^{2m})=2m\,(\partial_tu)\,u\,|u|^{2(m-1)}$ ; then $$\dfrac{1}{2m}\partial_t\int_{U}|u|^{2m}dx=\int_{U}(\Delta u)\, u\,|u|^{2(m-1)}dx$$ Apply integration by parts to the RHS; we get $$\dfrac{1}{2m}\par
To fix the gap $(\star)$ in my previous answer. In my other answer, I claimed that $\Vert u(t,\cdot)\Vert_2\to 0$ as $t\to\infty$ implied that $u(t,x)\to 0$ for almost all $x\in U$ . Note that the restriction to "almost all" is necessary, because $L^p$ convergence does not imply everywhere pointwise convergence, not even for a sequence of continuous functions . However, we can fix this, with the help of the following theorem: THEOREM: Rapid convergence in measure implies pointwise convergence almost everywhere. Let $(X,\Sigma,\mu)$ be a measure space and let $(f_n)_{n\in\mathbb N}$ be a sequence of measurable functions which converges in measure to the measurable function $f$ . Then, as shown by John Dawkins HERE as a consequence of the Borel-Cantelli lemma, if $$\sum_{n=1}^\infty \mu\big(\{x\in X:|f_n(x)-f(x)|>\epsilon\}\big) < \infty \quad \text{for every } \epsilon > 0,$$ then $f_n\overset{\text{p.w}}{\longrightarrow}f$ almost everywhere as $n\to\infty$ . In our case, we can consider the measure space $(U,\mathscr B(U),\mu^m)
|analysis|partial-differential-equations|heat-equation|gronwall-type-inequality|
0
Let $u$ and $w$ be complex numbers such that $|u|=5, |w|=3,$ and $|u+w|=6$. Calculate $|u+2w|$ with proof.
This came up on my homework and I don't understand how to calculate $|u+2w|$ . How do I get from $|u+w|$ to $|u+2w|$ ? I'm guessing that I have to square $|u+w|$ and then combine it with $|u|$ and $|w|$ in a way that would yield $|u+2w|$ after taking a square root, but I don't know how to get there.
We are given that $|u+w|=6$ . (eq. 1) On squaring eq. 1, we get $|u|^2+|w|^2+u\bar{w}+\bar{u}w=36$ $\implies u\bar{w}+\bar{u}w=36-25-9=2$ (eq. 2) Now, $|u+2w|^2=|u|^2+4|w|^2+2(u\bar{w}+\bar{u}w)=25+36+4=65$ Hence, $|u+2w|=\sqrt{65}$ .
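A numeric cross-check (my own addition): realizing the given moduli with a concrete angle, $36 = 25 + 9 + 30\cos t$ forces $\cos t = 1/15$ , and the computed value matches $\sqrt{65}$ .

```python
import cmath, math

# Pick concrete u, w with |u| = 5, |w| = 3 and the angle t between them
# determined by |u + w| = 6, then evaluate |u + 2w| directly.
t = math.acos(1 / 15)
u = 5 + 0j
w = 3 * cmath.exp(1j * t)

print(abs(u + w))       # 6.0
print(abs(u + 2 * w))   # 8.0622... = sqrt(65)
print(math.sqrt(65))
```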
|algebra-precalculus|complex-numbers|
1
Finding matrix to transform one vector to another vector
If I have an arbitrary vector $A = (a,b,c,0)$ how can I find a transformation matrix $M$ such that $M \times A = (0,1,0,0)$ ? We can assume $A$ has a magnitude of $1$ if it helps simplify the derivation process. The trivial case $A = (0,1,0,0)$ would cause $M$ to be the identity matrix. If $A = (0,-1,0,0)$ then $M$ would be a 180 degree rotation matrix about the $x$ axis. I heard of Rodrigues' rotation formula from this question but I'm not sure how it would work in a 4 by 4 matrix.
Since you are mentioning Rodrigues' rotation formula, you may be interested in this alternative method: Represent A and B as quaternions: $$Q_A=a*1+b*i+c*j\\ Q_B=1*i.$$ Now compute the transformation quaternion $Q_R=Q_B Q_A^{-1}$ using the Hamilton product, so that $Q_R*Q_A=Q_B$ . Next you take (one of the) real 4×4 matrix representations $M$ of left multiplication by the quaternion $Q_R$ (see Wikipedia), and by translating the quaternion equation back to a matrix equation, you immediately get $M*A=B$ as requested.
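Here is a sketch of this recipe in code (my own implementation; the left-multiplication matrix convention, the component order $(w,x,y,z)$ , and the sample vector are my choices):

```python
import numpy as np

# Map A = (a, b, c, 0) to Q_A = a + b*i + c*j; for unit A the inverse of Q_A
# is its conjugate, so Q_R = Q_B * conj(Q_A), and M is the real 4x4 matrix
# of left multiplication by Q_R.

def left_mult_matrix(q):
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def hamilton(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

A = np.array([0.6, 0.8, 0.0, 0.0])           # |A| = 1, so conj(Q_A) = Q_A^{-1}
B = np.array([0.0, 1.0, 0.0, 0.0])
conj_A = A * np.array([1, -1, -1, -1])
M = left_mult_matrix(hamilton(B, conj_A))    # matrix of q -> Q_R * q
print(M @ A)                                 # [0, 1, 0, 0] up to rounding
```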
|matrices|
0
Derive $\sin x$ expansion without using calculus
We know that $$\sin x = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots = \sum_{n\ge 0} \frac{(-1)^n}{(2n+1)!}x^{2n+1}$$ But how could we derive this without calculus? There are some approaches using $e^{ix} = \cos x + i \sin x$ , but please note I would also like to avoid that identity, since proving $e^{ix} = \cos x + i \sin x$ again requires the expansions of $\sin x$ and $\cos x$ . One approach I tried is to start with $\sin^2 x$ : let $$\sin^2 x := \sum_{n\ge 1} a_n x^{2n}$$ Note: I guess such an ansatz works as $\sin x$ is an odd function -- so $\sin x$ has only odd powers of $x$ , and $\sin^2 x$ only even powers of $x$ . Now if I can arrive at $$\sin^2 x = \sum_{n\ge 1} \frac{(-1)^{n+1} 2^{2n-1} }{(2n)!} x^{2n},$$ then via $\cos 2x = 1-2\sin^2 x$ I can get the expansion of $\cos x$ and then $\sin x$ . To derive $a_n$ , first I use $\lim_{x\rightarrow 0} \frac{\sin x}{x} = 1$ from the geometric interpretation (the arc is almost the opposite side for small angle $x$ ), to get $$a_1
"Without calculus" is a pretty big ask, given that the modern definition of sine is the power series representation you gave, which is the result of analysis. It arises from solving the differential equation $f(x)=-f''(x), f(0)=0, f'(0)=1$ . I think the best anyone will be able to do is to show you how to recover the power series of sine from these conditions. So, without any further ado: Let $f(x)=\sum_{n=0}^{\infty}a_nx^n=a_0+a_1x+a_2x^2+a_3x^3+...$ be the solution to our differential equation. Then for $f''(x)$ we have the following: $f''(x)=\sum_{n=0}^{\infty}(n+1)(n+2)a_{n+2}x^n=2a_2+6a_3x+12a_4x^2+20a_5x^3+...$ Now it's time to find the $a_n$ 's. Given that $f(0)=0$ , we may conclude that $a_0=0$ . Furthermore, since $f'(0)=1$ , we must have $a_1=1$ . From the equation $f(x)=-f''(x)$ , we know that the sum of the coefficients of each power series for each power of $x$ must be $0$ . This gives us the following infinite set of equations: $$a_0=0$$ $$a_1=1$$ $$a_0+2a_2=0\Longrightar
|sequences-and-series|algebra-precalculus|
0
why is this associative?
I'm dealing with Paul Halmos' Linear Algebra Problem Book and I've already hit a problem. The fourth exercise asks me to determine whether the following operation is associative: $$(α, β) · (γ, δ) = (αγ − βδ, αδ + βγ)$$ The answer says that it is, because: $$\big((αγ − βδ)ε − (αδ + βγ)ϕ,\ (αγ − βδ)ϕ + (αδ + βγ)ε\big) = \big(α(γε − δϕ) − β(γϕ + δε),\ α(γϕ + δε) + β(γε − δϕ)\big)$$ And the author adds: "By virtue of the associativity of the ordinary multiplication of real numbers the same eight triple products, with the same signs, occur in both these equations." The thing is that I'm not able to understand why this claim is true. I don't see "the same eight triple products with the same sign" occurring on both sides. What am I getting wrong? I tried to work through it with Latin letters: $$((a,b) \cdot (x,y)) \cdot (f,g)\\ = (ax-by,ay+bx) \cdot (f,g)\\ = (f(ax-by)-g(ay+bx),\ g(ax-by)+f(ay+bx))\\ = (afx-bfy-agy+bgx,\ agx-bgy+afy+bfx)\\ \\~\\ (a,b) \cdot ((x,y) \cdot (f,g))\\ = (a,b) \cdot (xf-yg,xg+yf)\\ = a(xf-yg)-b(xg+
It's best to look at each component, one at a time. I will also use English letters. The operation is defined as $$(a,b) \cdot (c,d) = (ac - bd, ad + bc).$$ Then $$\bigl((a,b) \cdot (c,d)\bigr) \cdot (e,f) = (ac - bd, ad + bc) \cdot (e,f).$$ The first component is $$(ac-bd)e - (ad+bc)f = ace-bde-adf-bcf. \tag{1}$$ The second component is $$(ac-bd)f + (ad+bc)e = acf - bdf + ade + bce. \tag{2}$$ Now we look at $$(a,b) \cdot \bigl((c,d) \cdot (e,f)\bigr) = (a,b) \cdot (ce - df, cf + de),$$ again by component. The first is $$a(ce - df) - b(cf+de) = ace - adf - bcf - bde. \tag{3}$$ The second is $$a(cf+de) + b(ce-df) = acf + ade + bce - bdf. \tag{4}$$ It is now easy to see that $(1) = (3)$ and $(2) = (4)$ . It so happens that this binary operation corresponds to multiplication in the field of complex numbers.
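A symbolic check of this componentwise computation (my own addition, using sympy):

```python
import sympy as sp

# Verify ((a,b)*(c,d))*(e,f) == (a,b)*((c,d)*(e,f)) component by component.
a, b, c, d, e, f = sp.symbols('a b c d e f')

def mul(p, q):
    return (p[0]*q[0] - p[1]*q[1], p[0]*q[1] + p[1]*q[0])

lhs = mul(mul((a, b), (c, d)), (e, f))
rhs = mul((a, b), mul((c, d), (e, f)))
print(sp.expand(lhs[0] - rhs[0]), sp.expand(lhs[1] - rhs[1]))   # 0 0
```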
|associativity|
1
Why does this multiplication trick work
Why does this multiplication trick work? I sort of discovered it on my own. 985 x 974: (1000 x 1000) - (15 x 1000 + 26 x 1000) + (-15)(-26) 997 x 989: (1000 x 1000) - (3 x 1000 + 11 x 1000) + (-3)(-11) 1003 x 976: (1000 x 1000) - (-3 x 1000 + 24 x 1000) + (3)(-24) 1005 x 1007: (1000 x 1000) - (-5 x 1000 - 7 x 1000) + (5)(7) Originally I thought about this looking through: https://www.splashlearn.com/blog/best-multiplication-tricks-for-kids/#37-frequently-asked-questions-faqs- (#16 Rounding off to 1000)... I couldn't make sense of how they arrived at their method. Like how they got 990016 (it didn't seem obvious). So I kept messing with the numbers and eventually created the above method on my own (to make it work). Now I just want to know why it works lol 998 x 992: (1000 x 1000) - (2 x 1000 + 8 x 1000) + (-2)(-8) Like the way I came up with it: I first changed the problem to 1000 x 992 So then I thought 1000 x 1000 - 8 x 1000 would be the same (which I checked) So then I thought if both we
You are using the distributive property, sometimes remembered via FOIL ("First, Outer, Inner, Last"): $$(a+b)\cdot (c+d) = a\cdot c + b\cdot c + a\cdot d + b\cdot d.$$ If $b$ and/or $d$ happen to be negative, then you get your earlier results. You can understand why this works by going back to first principles with multiplication: If $a,b,c$ are integers, then $(a+b)\cdot c$ can be thought of as adding $(a+b)$ to itself $c$ times, which means you add $a$ to itself $c$ times, add $b$ to itself $c$ times, and then add the results: $(a+b)\cdot c = a\cdot c + b\cdot c$ . By the way, an even cleverer application of this is to do something like $$1025 \cdot 975 = (1000+25)\cdot (1000-25) = 1000\cdot 1000 - 25\cdot 25.$$
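In symbols, the trick is $(1000+p)(1000+q) = 1000\cdot 1000 + (p+q)\cdot 1000 + pq$ with signed offsets $p,q$ ; a quick check of the examples (my own snippet):

```python
# Verify the trick on the examples from the question plus the 1025*975 case.
for a, b in [(985, 974), (997, 989), (1003, 976), (1005, 1007), (1025, 975)]:
    p, q = a - 1000, b - 1000
    assert a * b == 1000 * 1000 + (p + q) * 1000 + p * q
print("all examples check out")
```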
|algebra-precalculus|
1
Radius of Convergence of Laurent Series Confusion
Determine the largest number $R$ such that the Laurent series of $$f(z)= \dfrac{2\sin(z)}{z^2-4} + \dfrac{\cos(z)}{z-3i}$$ about $z=-2$ converges for $0 < |z+2| < R$ ? I know the Maclaurin series for sine and cosine, which are valid for all complex numbers. For $\frac{1}{z^2-4} = -\frac{0.25}{z+2} + \frac{0.25}{z-2} = \frac{-0.25}{z+2} + \frac{0.25}{-4+(z+2)} = \frac{-0.25}{z+2} - \frac{1}{4}\cdot\frac{0.25}{1-\frac{z+2}{4}}$ , which is only valid for $\left|\frac{z+2}{4}\right| < 1$ , giving $R=4$ as of now when applying the geometric series. For $\frac{1}{z-3i} = \frac{1}{z+2-(2+3i)} = \frac{-1}{2+3i}\cdot\frac{1}{1-\frac{z+2}{2+3i}}$ . When applying the geometric series this is only valid on $\left|\frac{z+2}{2+3i}\right| < 1$ , so $R = \sqrt{13}$ . Is this right?
The other poles are at $2$ and $3i.$ So compute their distances from $-2$ : we get $4$ and $\sqrt{13},$ the smaller one being $\sqrt {13}.$ Since the poles of a meromorphic function are isolated, we get $R=\sqrt {13},$ i.e. the annulus $0\lt |z+2| \lt\sqrt {13}.$
|sequences-and-series|complex-analysis|analysis|taylor-expansion|
1
But what about the other cyclic groups? Doesn't one also have to consider them?
I'm currently reading a textbook about abstract algebra. There is a proof that every subgroup of a cyclic group is cyclic. This proof, like every proof I have found on the Internet, uses the fact that all cyclic groups have the form $ \langle a\rangle=\{a^n : n \in \mathbb{Z}\}$ . But I don't think that this is true, because only cyclic groups under multiplication have this form. But what about the other cyclic groups? Doesn't one also have to consider them?
It's just notation. For each group $(G,\ast)$ (for any binary operation $\ast$ that defines a group on the set $G$ ), we may write the set $G$ under concatenation (which is the fancy term for putting symbols next to each other and it does not always denote multiplication ), via the inclusion map $\iota(g)=g$ because $$\iota(g\ast h)=\iota(g)\iota(h)=gh$$ for arbitrary $g,h\in G$ . ${}^\dagger$ Powers, multiples, etc. , depending on $\ast$ , are then simply $g^n$ for arbitrary $g\in G, n\in\Bbb Z$ . This is to save time on writing/typing. Concatenation may as well be the arbitrary notation to use for an arbitrary group, by fiat. In that case, we write $G$ instead of $(G,\ast)$ when the context is clear. $\dagger$ : Here $\iota$ is known as an isomorphism .
|abstract-algebra|group-theory|notation|proof-explanation|cyclic-groups|
0
If $x,y\in\mathbb{N},\varepsilon>0$ then are there infinitely many positive integer pairs $(n,m)$ s.t. $\vert\frac{x^n}{y^m}- 1\vert < \varepsilon?$
Proposition: If $x,y\in\mathbb{N}_{\geq2}$ then for any $\varepsilon>0,$ there are infinitely many pairs of positive integers $(n,m)$ such that $$\frac{\left\lvert y^m-x^n \right\rvert}{y^m} < \varepsilon,$$ i.e. $\displaystyle\large{\frac{x^n}{y^m}} \to 1\ $ as these pairs $(n,m) \to (\infty,\infty).$ I think this is true, and I want to prove it. For all integers $n,$ we have $$\frac{x^n}{y^{ {n\log_y x}}} = 1.$$ Therefore, we want to find integers $n$ such that $n\log_y x$ is, in some sense, extremely close to an integer. This above question can also be stated as follows. If $x,y\in\mathbb{N}_{\geq2}$ and $x>y,$ then either $\ \displaystyle\limsup_{n\to\infty} \frac{x^n}{y^{\lceil n(\log_y x)\rceil}} = 1 $ or $\ \displaystyle\liminf_{n\to\infty} \frac{x^n}{y^{\lfloor n(\log_y x)\rfloor}} = 1. $ Can we use Dirichlet's approximation theorem to prove this, or the fact that the fractional parts $\{n\alpha\},\ n\in\mathbb{N},$ are dense in $[0,1]$ for irrational $\ \alpha\ ?$ Or do we have to use other tools?
Since $$ \left|\,e^x-1\,\right|\le\frac{|x|}{1-|x|}\tag1 $$ if $|n\log(x)-m\log(y)|\le\frac\epsilon{1+\epsilon}$ , then $$ \begin{align} \left|\,\frac{x^n}{y^m}-1\,\right| &=\left|\,e^{n\log(x)-m\log(y)}-1\,\right|\tag{2a}\\ &\le\frac{|n\log(x)-m\log(y)|}{1-|n\log(x)-m\log(y)|}\tag{2b}\\[3pt] &\le\epsilon\tag{2c} \end{align} $$ Now, using Dirichlet's Approximation Theorem , we can find $n,m\in\mathbb{Z}$ , arbitrarily large, so that $$ |n\log(x)-m\log(y)|\le\min\left(\frac{\log(x)}m,\frac{\log(y)}n\right)\tag3 $$ which allows us to make $\epsilon$ as small as we want.
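To see this in action numerically (my own illustration with $x=2$ , $y=3$ ; the search bound is arbitrary):

```python
import math

# For each n pick the nearest m and track new record lows of
# |n*log(2) - m*log(3)|; at each record, 2**n / 3**m moves closer to 1.
best = float('inf')
for n in range(1, 200_000):
    m = round(n * math.log(2) / math.log(3))
    diff = abs(n * math.log(2) - m * math.log(3))
    if 0 < diff < best:
        best = diff
        ratio = math.exp(n * math.log(2) - m * math.log(3))   # = 2**n / 3**m
        print(f"n = {n:7d}  m = {m:7d}  |2^n/3^m - 1| = {abs(ratio - 1):.3e}")
```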
|number-theory|diophantine-approximation|
0
Find a condition on $a(n)$ such that $\lim_{n\rightarrow\infty}\frac{a(n)\cdot (n-a(n))}{\log\binom{n}{a(n)}}=+\infty$
I met the following problem in my research but I don't know how to deal with it: What is the condition on $a(n)$ such that $$\lim_{n\rightarrow\infty}\frac{a(n)\cdot (n-a(n))}{\log\binom{n}{a(n)}}=+\infty\,?$$ Note that $a$ may (but need not) be a function of $n$ ; thus $\binom{n}{a}$ means $\binom{n}{a(n)}$ . Example. If $a(n)=n/2$ then this condition is satisfied.
Let $s=1/n$ and $r = a/n = a \cdot s$ . We have the standard bound $$\frac{1}{a \cdot(n-a) }\log \binom{n}{a} \le \frac{n}{a \cdot (n-a) } H(r)= s \frac{H(r)}{r \cdot (1-r)} \tag 1$$ where $H(r)= -r \log r -(1-r)\log (1-r)$ is the binary entropy function. The bound is quite tight (we might add the corresponding lower bound to make it more complete). We want $(1)$ to tend to zero as $s \to 0$ . Let's assume $r$ has some limit. If $r$ tends to some value inside $(0,1)$ , then we are done. If $r \to 0$ , then the RHS of $(1)$ tends to $$ - s \log (r) = -s \log s - s \log a$$ Hence, it's enough to assume $a\ge 1$ , and we are also done. If $r \to 1$ , then the RHS of $(1)$ tends to $$ - s \log (1-r) = -s \log(1- a s) $$ Again, it's enough to prescribe $$ a \le n-1 \implies 1-r \ge s $$ Hence, it seems the desired limit is verified practically always. All we are requiring is $0 < a < n$ and that $a/n$ has some limit (but I guess this also can be relaxed).
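A numeric experiment supporting this (my own addition; `lgamma` is used to evaluate $\log\binom{n}{a}$ stably):

```python
from math import lgamma

# The ratio a*(n-a) / log C(n, a) for several shapes of a(n);
# it blows up as n grows in each case.
def log_binom(n, a):
    return lgamma(n + 1) - lgamma(a + 1) - lgamma(n - a + 1)

for n in (10**2, 10**4, 10**6):
    for a in (1, int(n**0.5), n // 2, n - 1):
        r = a * (n - a) / log_binom(n, a)
        print(f"n = {n:>7}  a = {a:>6}  ratio = {r:.1f}")
```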
|asymptotics|binomial-coefficients|
0