|title|question_body|answer_body|tags|accepted|
|string|string|string|string|int64|
|---|---|---|---|---|
Difference between $G$-principal bundle and fiber bundle with fiber $G$?
|
I'm having a hard time understanding principal bundles. I understand that a fiber bundle can be defined as a tuple $(E, B, \pi, F)$ where $E, B, F$ are topological spaces and $\pi: E\to B$ is a surjection. For a fiber bundle, for each $b\in B$ there is a neighborhood $U\ni b$ such that $\pi^{-1}(U)\subset E \simeq U\times F$ via a homeomorphism $\phi$ . Also $\text{proj}_1\circ \phi = \pi$ . I've seen multiple definitions for the principal bundle. One definition is that a principal bundle is a fiber bundle $(E, B, \pi, G)$ where $G$ is a topological group. But in addition, it seems to be important that there is a free and transitive group action of $G$ on $E$ which preserves the fibers of $\pi$ within $E$ . I have a few questions. Is the definition I've given for a principal bundle above good? I've seen many different definitions. Some of them involve quotients/orbit spaces, some do not indicate that a principal bundle is defined to have typical fiber $G$ , some involve a structure group, etc. I would
|
Disclaimer: I'm still learning, this is my stab at an answer, but correction/improvements are much appreciated. I think the key here is that the goal with a principal bundle is to attach a $G$ -torsor to each point of the base space, not to attach the group $G$ to each point of the base space. There is a bit of confusion, however, because a $G$ -torsor is always isomorphic to $G$ , so you can't attach a $G$ -torsor without kind of attaching $G$ as well. This is what makes the definitions a bit confusing. But then the question arises, why not define a principal bundle as a fiber bundle $(E, B, \pi, T)$ where the fiber $T$ is a $G$ -torsor? I think the answer here is that it is not enough for the group $G$ to act independently on/within the various fibers $E_b\subset E$ for $b\in B$ , but rather, it is necessary that the group acts continuously on the total space, i.e. globally and continuously across fibers as well. For this, we require a continuous group action of $G$ on $E$ which pres
|
|definition|fiber-bundles|principal-bundles|
| 0
|
What does a parametric equation mean?
|
I am following the last module of Differential Calculus on Khan Academy, which deals with parametric equations. Here are the parametric equations described in the lecture: $x(t) = 5t + 10$ , $y(t) = 50 - 5t^2/2$ . However, I really don't understand what parametric equations really mean. How do they differ from normal equations? According to Wikipedia: "In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters." I really don't understand what this definition is trying to convey. From what I observed, if two functions share a variable, they typically get defined as parametric equations. But that seems to be a loose definition. Regarding my prerequisite knowledge, I have a Masters in Engineering, so I understand the formulae of calculus quite well. I just never bothered to understand some of the underlying concepts, and therefore I am revisiting them through Khan Academy.
|
Late to the party, but for posterity, and deliberately (i) steering clear of technicalities and (ii) covering basics selectively: Many objects of interest, such as curves (line segments, circles, parabolas, splines, ...), surfaces (polygons, spheres, tori, ...), and regions (solid rectangles and boxes, disks, balls, ...), are sets of points . One way to describe an object is implicitly : by giving a "defining" equation or inequality, such as (in the plane) $y = mx + b$ for a non-vertical line, or $x^{2} + y^{2} = 1$ for the unit circle, or (in space) $$ (x - x_{0})^{2} + (y - y_{0})^{2} + (z - z_{0})^{2} \leq r^{2} $$ for the closed ball of radius $r$ and center $(x_{0}, y_{0}, z_{0})$ . Another way to describe an object is parametrically : by giving a mapping whose domain is an open set in a Cartesian space (or the closure of an open set) and whose image is all or part of the object. The name is self-descriptive: we are describing points of our object in terms of parameters , variables
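To make this concrete in code (a small Python sketch; the two functions are the ones from the question), each value of the parameter $t$ produces a single point $(x(t), y(t))$, and sweeping $t$ traces out the curve:

```python
# Each parameter value t yields one point (x(t), y(t)) on the curve.
def x(t): return 5 * t + 10
def y(t): return 50 - 5 * t**2 / 2

for t in range(5):
    print(t, (x(t), y(t)))   # t=0 -> (10, 50.0), t=1 -> (15, 47.5), ...
```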
|
|calculus|
| 0
|
Are conservative functors stable under pullback?
|
Let $A, B$ and $C$ be three small categories and let $f: A\to B$ and $g: C\to B$ be two functors such that $g$ is conservative. We can take the pullback as in the following picture: $\require{AMScd}$ \begin{CD} P @>{g^{*}}>> A\\ @VVV @VVV\\ C @>{g}>> B. \end{CD} My question is: is $g^{*}$ always conservative? In case the answer is no, for which $f$ is $g^{*}$ conservative? Furthermore, we can canonically rephrase the above situation in the context of $\infty$ -categories. So, let $A,B$ and $C$ be three small $\infty$ -categories and so on. My questions become: is the $\infty$ -functor $g^{*}$ always conservative? In case the answer is no, for which $\infty$ -functors $f$ is $g^{*}$ conservative?
|
I discovered that the answer is yes, because conservative functors form the right class of a factorization system in $Cat_{\infty}$ ; see Example 3.1.7 (f) in the article "Left-exact Localizations of $\infty$-Topoi I: Higher Sheaves" by Anel, Biedermann, Finster, and Joyal.
|
|category-theory|higher-category-theory|
| 0
|
What is the integral $\int_{-\pi}^{\pi} \frac{1}{2\pi}\exp{(z_1 \cos\theta + z_2 \sin\theta)}\, d\theta$?
|
What is the integral $\int_{-\pi}^{\pi} \frac{1}{2\pi}\exp{(z_1 \cos\theta + z_2 \sin\theta)}\, d\theta$ ? When $z_{1,2} \in \mathbb{R}$ then we get the modified Bessel function $I_0(\sqrt{z_1^2+z_2^2})$ . What would be the solution for complex $z_i$ ?
|
For complex $z_1, z_2$ , one has \begin{equation} I(z_1,z_2) = \int_{-\pi}^{\pi} \frac{1}{2\pi} \exp{(z_1 \cos\theta + z_2 \sin\theta)}\, d\theta. \end{equation} This integral can be recognized as related to the generating function of the Bessel functions. We can write: \begin{align} I(z_1,z_2) &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp\left(\frac{z_1}{2}(e^{i\theta} + e^{-i\theta}) + \frac{z_2}{2i}(e^{i\theta} - e^{-i\theta})\right) d\theta \\ &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp\left(\frac{z_1 - iz_2}{2} e^{i\theta} + \frac{z_1 + iz_2}{2} e^{-i\theta}\right) d\theta. \end{align} Letting $z^2 = z_1^2 + z_2^2$
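As a numerical sanity check of the claimed relation $I(z_1,z_2)=I_0(\sqrt{z_1^2+z_2^2})$ for complex arguments, here is a small mpmath sketch (the test values are arbitrary; `quad` and `besseli` both accept complex inputs, and since $I_0$ is an even function the branch of the square root does not matter):

```python
import mpmath as mp

z1, z2 = mp.mpc(1, 2), mp.mpc(0.5, -1)   # arbitrary complex test values
lhs = mp.quad(lambda t: mp.exp(z1*mp.cos(t) + z2*mp.sin(t)),
              [-mp.pi, mp.pi]) / (2 * mp.pi)
rhs = mp.besseli(0, mp.sqrt(z1**2 + z2**2))
print(lhs, rhs)   # the two values agree to working precision
```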
|
|calculus|integration|complex-analysis|complex-integration|bessel-functions|
| 0
|
Squaring a function, getting a surprising result
|
I'm doing a problem in which I have to find the minimum distance between the origin and a point on a parabola. I understand that the minimum of a (nonnegative) function and the minimum of the square of the function occur at the same point. The distance function, when one of the points is the origin, should be $d =\sqrt{x^2 + y^2}$ , making the squared function $d^2 = x^2 + y^2$ . However, when solving for $y$ so that I can make the equation solely in terms of $x$ , I get $y = \sqrt{-x^2}$ , and substituting that for $y$ in the original equation would negate that square root and result in $x^2 -x^2$ , or $0$ . Hopefully I just overlooked something, but that obviously seems wrong if I'm trying to find a distance function I can differentiate to get a minimum.
|
If you want to find the distance between a point on the parabola and the origin, all you have to do is use the Pythagorean theorem (essentially the distance equation you used) to figure out the hypotenuse connecting the points. For example, if the point on the parabola is $(3,9)$ , then the distance between $(0,0)$ and $(3,9)$ is $\sqrt{3^2+9^2}=\sqrt{90}=3\sqrt{10}$
|
|calculus|optimization|
| 0
|
Generalization of the matrix exponential
|
I've seen this post which addresses the question of exponentiating a vector. I was wondering if there's a well-defined notion of exponentiating a rank $r$ tensor? For instance, if I have a rank 3 tensor $A_{ijk}$ , can I compute something like $\mathrm{Exp}[A_{ijk}]$ ? If you know of any references/insights on this I'd highly appreciate it!
|
(Partial answer) Since exponentiation is generally defined via the series of the same name, i.e. $$e^x = \sum_{k=0}^\infty \frac{x^k}{k!},$$ the object $x$ needs to belong to a space where addition, multiplication and scalar multiplication (due to the factorial prefactor) are themselves defined. Given that tensors form vector spaces, addition and scalar multiplication are natural operations. Now, you need to define a multiplicative operation between tensors (in such a way that they now form an algebra ). The most natural way to do so corresponds to the case where multiplication is interpreted as the tensor product. If you want to deal with antisymmetric tensors only, then you may consider the exterior product instead. If they form a Lie algebra, then the Lie bracket will play the role of the multiplicative operation. But nothing prevents you from considering more "exotic" products. You can also "import" the product from another space through an isomorphism, as is traditionally done for
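For the rank-2 case, where the algebra product is ordinary matrix multiplication, the series definition can be implemented directly; a minimal NumPy/SciPy sketch (for higher-rank tensors you would swap in whichever product you chose above):

```python
# exp(a) via the power series, with matrix multiplication as the product.
import numpy as np
from scipy.linalg import expm

def exp_series(a, terms=30):
    out, power = np.eye(a.shape[0]), np.eye(a.shape[0])
    for k in range(1, terms):
        power = power @ a / k      # accumulates a^k / k!
        out = out + power
    return out

a = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(exp_series(a), expm(a)))   # True
```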
|
|linear-algebra|reference-request|exponentiation|tensors|
| 1
|
Finding a minimal integer that satisfies some conditions
|
Let $b,d,m$ be positive integers such that $d|m$ and $0\leq b<d$ . I want to find a minimal integer $h\in\frac{m}{bd}\mathbb{Z}\cap\frac{m}{b^2}\mathbb{Z}\cap\frac{m}{d^2}\mathbb{Z}$ . First, I found minimal integers for each set. They are $\frac{m/d}{(m/d,b)},\frac{m}{(m,b^2)}$ , and $\frac{m/d}{(m/d,d)}$ , where $(\ ,\ )$ means gcd. So $h$ is just a least common multiple of them. We may use the rule lcm $(A,B,C)=ABC/(A,B,C)$ . I'd like to clean things up a bit more than just leaving them as they are. However, I did not find a nice formula. Is there some nice way to represent the number $h$ ? For example, we may check that $h=m$ if $(b,d)=1$ .
|
Note that if $h$ is an integer multiple of $\frac{m}{b^2}$ and $\frac{m}{d^2}$ , then it is automatically an integer multiple of $\frac{m}{bd}$ . Indeed, we have integers $x$ and $y$ such that $h = \frac{m}{b^2}x$ and $h = \frac{m}{d^2}y$ , and multiplying both equations and taking square roots shows that $h = \frac{m}{bd} \sqrt{xy}$ . Here, $z = \sqrt{xy}$ is rational and its square is an integer, so it must also be an integer. Now, you only need to find the LCM of $\frac{m}{(m, b^2)}$ and $\frac{m}{(m, d^2)}$ . Since the GCD of the denominators is... (can you proceed from here?)
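A quick brute-force check of this reduction (a Python sketch with arbitrary small values satisfying $d\mid m$) compares the minimal positive integer in the intersection against the LCM of the per-set minima from the question:

```python
from math import gcd, lcm
from fractions import Fraction

def minimal_h(m, b, d):
    # Smallest positive integer lying in (m/(bd))Z, (m/b^2)Z and (m/d^2)Z.
    gens = [Fraction(m, b*d), Fraction(m, b*b), Fraction(m, d*d)]
    h = 1
    while not all((h / g).denominator == 1 for g in gens):
        h += 1
    return h

m, b, d = 36, 4, 6
mins = [m//d // gcd(m//d, b), m // gcd(m, b*b), m//d // gcd(m//d, d)]
print(minimal_h(m, b, d), lcm(*mins))   # both 9
```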
|
|elementary-number-theory|
| 1
|
What happened to interchange Laplace transform and one definite integral (Fubini's theorem is not satisfied)?
|
I have long been puzzled by the correct application of Fubini's theorem. Is it true that the bounds need to be constant for the integrals to be interchanged? Please correct me if not. Let's consider the following Laplace transform pair. \begin{align} \mathcal{L}\Bigl[\frac{2(\cos(at)-\cos(bt))}{t}\Bigr] = \color{red}{-}\ln\Bigl[\frac{s^2+a^2}{s^2+b^2}\Bigr]\tag{1}\label{e:1} \end{align} Since I observe that \begin{align*} \int_a^b\sin(xt) \, \mathrm{d}x = \frac{\cos(at)-\cos(bt)}{t} \end{align*} I tried to first do the Laplace transform and then the integral as follows: \begin{align*} 2\int_a^b \int_0^\infty \sin(xt)e^{-st} \,\mathrm{d}t\mathrm{d}x = 2\int_a^b \frac{x}{x^2+s^2} \, \mathrm{d}x =\ln\Bigl[\frac{s^2+b^2}{s^2+a^2}\Bigr] \end{align*} which is the negative of $\eqref{e:1}$ . I might have made silly mistakes I was not aware of. EDIT : It turned out that this is another error in Schaum's book, p. 169 (1968, Spiegel).
|
The Fubini–Tonelli theorem tells us that, if $f$ is a function on the product set $X \times Y$ , then the following hold: (Tonelli's theorem) We always have $$ \int_{X}\int_{Y} |f(x, y)| \, \mathrm{d}y\mathrm{d}x = \int_{Y} \int_{X} |f(x, y)| \, \mathrm{d}x\mathrm{d}y = \iint_{X\times Y} |f(\mathbf{z})| \, \mathrm{d}\mathbf{z} $$ regardless of whether they are finite or not. (Fubini's theorem) Moreover, if the above integrals are finite, then we also have $$ \int_{X}\int_{Y} f(x, y) \, \mathrm{d}y\mathrm{d}x = \int_{Y} \int_{X} f(x, y) \, \mathrm{d}x\mathrm{d}y = \iint_{X\times Y} f(\mathbf{z}) \, \mathrm{d}\mathbf{z}. $$ So, $f$ being bounded by a constant is neither a sufficient nor a necessary condition for Fubini's theorem to be applicable. (There are counterexamples in both directions.) In OP's case, \begin{align*} \int_{0}^{\infty} \int_{a}^{b} \left| \sin(xt)e^{-st} \right| \, \mathrm{d}x\mathrm{d}t &\leq \int_{0}^{\infty} \int_{a}^{b} e^{-st} \, \mathrm{d}x\mathrm{d}t = \frac{b-a}
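For this specific case one can also check numerically that the two iterated integrals agree; a sketch with SciPy quadrature and arbitrary test values $0<a<b$, $s>0$ (the inner integral is truncated at $t=50$, where $e^{-st}$ is negligible):

```python
from math import exp, sin, log
from scipy.integrate import quad

a, b, s = 1.0, 2.0, 0.7
inner = lambda x: quad(lambda t: sin(x*t) * exp(-s*t), 0, 50, limit=200)[0]
val = 2 * quad(inner, a, b)[0]
print(val, log((s**2 + b**2) / (s**2 + a**2)))   # both ~1.103
```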
|
|calculus|integration|laplace-transform|
| 1
|
What is the set of numbers that are roots of the exponential?
|
Usually when an equation has no roots it leads to a new set of numbers. For example, $x^2+1=0$ led to the development of complex and imaginary numbers. What is the extension of numbers that solves $$e^x=0\;?$$ Obviously this is not possible for $x\in\mathbb{C}$ . I am having a bad time looking for this, as many websites are dedicated to finding the roots of equations with exponentials. I am guessing, since $$\log(e^x)=x = \log(0) (?)=-\infty,$$ that it is somehow related to hyperreal numbers?
|
In the extended real numbers $\overline {\mathbb R}$ , the solution of $e^x=0$ is $-\infty$ . The reals can also be extended with a logarithm of zero in other ways.
|
|roots|
| 0
|
How to define the addition and multiplication of p-adic integers as infinite formal sums?
|
On page 269 in Dummit & Foote's Abstract Algebra, 3rd edition, it is stated that every element in $\mathbb{Z}_p=\varprojlim\mathbb{Z}/p^i\mathbb{Z}$ can be written uniquely as an infinite formal sum $\sum^{\infty}_{k=0}b_kp^k$ with each $b_k\in[0,p-1]$ . But I was confused about how to define the addition and multiplication on $\sum^{\infty}_{k=0}b_kp^k$ so that the ring structure is preserved. The following is my attempt: $$ \begin{cases} \displaystyle\left(\sum_{k=0}^{\infty}b_kp^k\right)+\left(\sum_{k=0}^{\infty}c_kp^k\right)=\sum_{k=0}^{\infty}\big(b_k+c_k\ \ (\mathrm{mod}\ \ p)\big)p^k \\\displaystyle\left(\sum_{k=0}^{\infty}b_kp^k\right)\cdot\left(\sum_{k=0}^{\infty}c_kp^k\right)=\sum_{k=0}^{\infty}\big(b_k\cdot c_k\ \ (\mathrm{mod}\ \ p)\big)p^k \end{cases} $$ where $\big(b_k+c_k\ \ (\mathrm{mod}\ \ p)\big)$ and $\big(b_k\cdot c_k\ \ (\mathrm{mod}\ \ p)\big)$ are in $[0,p-1]$ . But I am not sure whether it is true. Update on 2024.03.29: following the hints from the comments, I
|
It would not be easy to produce simple formulas. Roughly speaking, the connection between arithmetic and bitwise operations is complicated, as seen from computer science. Had there been such formulas, math education in elementary school could be significantly simplified, since the formulas must be able to reproduce the results for natural numbers. However, there is a very concrete way to understand and perform the operations. Given two positive integers $a=\sum_{i=0}^n a_i l^i$ and $b=\sum_{i=0}^n b_i l^i$ written in an $l$ -adic manner, where $l$ doesn't even have to be a prime, we can perform addition and multiplication for general $l$ in the same way we perform them for $l=10$ in real life and scientific investigations. Here is the key insight: there is nothing that stops us from defining $a=\sum_{i=0}^\infty a_i l^i$ and $b=\sum_{i=0}^\infty b_i l^i$ formally (i.e. as two infinite sequences $(a_0, a_1, \cdots)$ and $(b_0, b_1, \cdots)$ ), and their additions and multiplications as above. I
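Here is that schoolbook procedure as a short Python sketch: digits are stored least-significant first, and the same carrying loop works for ordinary base-$l$ integers and for truncations of the formal infinite expansions alike:

```python
# Base-l addition with carries, digits least-significant first.
def add_base_l(a_digits, b_digits, l):
    out, carry = [], 0
    for a_i, b_i in zip(a_digits, b_digits):
        carry, digit = divmod(a_i + b_i + carry, l)
        out.append(digit)
    return out   # any final carry falls outside the kept digits

print(add_base_l([7, 9, 3], [8, 2, 6], 10))   # 397 + 628 = 1025 -> [5, 2, 0]
```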
|
|abstract-algebra|group-theory|number-theory|ring-theory|p-adic-number-theory|
| 0
|
Solving the PDE through Laplace Transform Method
|
I have a particular PDE as shown below: $$ \frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial x^2} + xe^{-\gamma x} $$ with Boundary conditions as shown, $$ \nu, \gamma >0 ~~~~1)~u(0, t) = 0 ~~~~~ 2)~u(\infty, t) = 0 ~~~~ 3)~u(x, 0) = 0 $$ I was supposed to solve this using Laplace Transform thus, I go on doing my thing as shown. $$ sU - 0 = \nu \frac{d^2U}{dx^2} + \frac{xe^{-\gamma x}}{s} \implies \frac{d^2U}{dx^2} - \frac{s}{\nu}U = -\frac{x}{s\nu}e^{-\gamma x} $$ And the solution of this ODE is $$ U = c_1\exp\bigg(\sqrt{\frac{s}{\nu}}x\bigg) + c_2\exp\bigg(-\sqrt{\frac{s}{\nu}}x\bigg) + \frac{e^{-\gamma x}}{s}\bigg[ \frac{x}{s-\gamma^2\nu} - \frac{2\gamma \nu}{(s-\gamma^2\nu)^2} \bigg] $$ We already used BC3 initially. Now from, BC2 we would get, $c_1 = 0$ . And Now from BC1 we would finally get our solution in Laplace domain as, $$ U = \frac{2\gamma \nu}{s(s-\gamma^2\nu)^2}\bigg[ \exp\bigg(-\sqrt{\frac{s}{\nu}}x\bigg) - \exp(-\gamma x) \bigg] + \frac{xe^{-\gamma x}}{s
|
The solution in the $s$-domain must be inverted: $$U(x,s)=e^{-x \sqrt{\frac{s}{\nu }}} \frac{2 \gamma \nu}{s \left(s-\gamma ^2 \nu \right)^2}-\frac{2 \gamma \nu e^{-\gamma x}}{s \left(s-\gamma ^2 \nu \right)^2}+\frac{x e^{-\gamma x}}{s \left(s-\gamma ^2 \nu \right)}$$ The first summand is the most complicated, so we separate it into 2 factors: $$f(x,t)=\mathcal{L}_s^{-1}\left[e^{-x \sqrt{\frac{s}{\nu }}}\right](t)=\frac{\nu x e^{-\frac{x^2}{4 \nu t}}}{2 \sqrt{\pi } \sqrt{\nu ^3 t^3}}$$ $$g(t)=\mathcal{L}_s^{-1}\left[\frac{2 \gamma \nu }{s \left(s-\gamma ^2 \nu \right)^2}\right](t)=\frac{2 \left(e^{\gamma ^2 \nu t} \left(\gamma ^2 \nu t-1\right)+1\right)}{\gamma ^3 \nu }$$ With the convolution theorem we get $$u_1(x,t)=(f*g)(t)=\int_0^{t} f(x,\tau)\cdot g(t-\tau)\ d\tau= \int_0^{t} \frac{x e^{-\frac{x^2}{4 \nu \tau }} \left(e^{\gamma ^2 \nu (t-\tau )} \left(\gamma ^2 \nu (t-\tau )-1\right)+1\right)}{\sqrt{\pi } \gamma ^3 \sqrt{\nu ^3 \tau ^3}} d\tau$$ Mathematica is able to solve the integral $$u_1(x,t)
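The first inverse transform can be cross-checked numerically with mpmath's `invertlaplace` (a sketch; the parameter values are arbitrary):

```python
import mpmath as mp

nu, x, t = 1.3, 0.9, 2.0
F = lambda s: mp.exp(-x * mp.sqrt(s / nu))
closed = (nu * x * mp.exp(-x**2 / (4*nu*t))
          / (2 * mp.sqrt(mp.pi) * mp.sqrt(nu**3 * t**3)))
print(mp.invertlaplace(F, t, method='talbot'), closed)   # values agree
```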
|
|partial-differential-equations|laplace-transform|inverse-laplace|laplace-method|
| 1
|
Given eigenvalues and eigenvector, how to find the matrix when there are fewer eigenvectors than its rank?
|
I know of the $A = PDP^{-1}$ formula but it only works if P is invertible. My example I've spent hours on is: Find a 3x3 matrix where the only eigenvector is $ \begin{pmatrix} 1 \\ 1 \\ 0 \\ \end{pmatrix} $ for which the eigenvalue is 1. (Any constant multiplication of the eigenvector is naturally fine as well). One example is \begin{bmatrix} 1 & 0 & 1 \\ 1/2 & 1/2 & 1/2 \\ -1/2 & 1/2 & 3/2 \end{bmatrix} . But I cannot for the life of me figure out how to get to this answer.
|
In the U.S. convention, the Jordan form is upper triangular. The transformation with characteristic polynomial $(x-1)^3 $ and minimal polynomial the same is just the Jordan block $$ J= \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{array} \right) $$ That was the hard part. Now, you want to switch the eigenvector $$ e_1 = \left( \begin{array}{r} 1 \\ 0 \\ 0 \\ \end{array} \right) $$ for the required $$ v = \left( \begin{array}{r} 1 \\ 1 \\ 0 \\ \end{array} \right) $$ That is, you need to pull back $v$ to $e_1$ : you require an (invertible) matrix $R$ such that $$ \left( \begin{array}{r} 1 \\ 0 \\ 0 \\ \end{array} \right) \; = \; \; R \; \; \left( \begin{array}{r} 1 \\ 1 \\ 0 \\ \end{array} \right) $$ so write out the system. My first try was lower triangular, so it is easy to confirm the nonzero determinant: $$ R= \left( \begin{array}{rrr} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ Note that the eigenvectors of $R$ do not matter . Next $$ R^{-1}= \left( \begin{array}
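A quick NumPy check of the construction $A = R^{-1} J R$ (note this produces one valid answer, not necessarily the matrix quoted in the question):

```python
import numpy as np

J = np.array([[1., 1, 0], [0, 1, 1], [0, 0, 1]])
R = np.array([[1., 0, 0], [-1, 1, 0], [0, 0, 1]])
A = np.linalg.inv(R) @ J @ R
v = np.array([1., 1, 0])
print(A @ v)                  # [1. 1. 0.] -> v is an eigenvector for 1
print(np.linalg.eigvals(A))   # all eigenvalues equal to 1
```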
|
|linear-algebra|matrices|eigenvalues-eigenvectors|
| 0
|
For all $n \in \mathbb{N}^*$, prove that the set $\left(e^{ kx }\right)_{k \in [|1,n|]}$ is linearly independent.
|
$(\mathbb{R}^{\mathbb{R}},+, \cdot)$ is an $\mathbb{R}$ -vector space. For all $n \in \mathbb{N}^*$ , prove that the set $\left(e^{ kx }\right)_{k \in [|1,n|]}$ is linearly independent. I have already proved it by induction, but I'm now trying to prove it using isomorphisms, and here's how I've gone through it so far. Let $n \in \mathbb{N}^*$ , and let us prove that the set $\left(e^{ kx }\right)_{k \in [|1,n|]}$ is linearly independent. Consider the following morphism $$ \begin{align} f:\:(\mathbb{K}_{n}[X],+,\cdot)& \longrightarrow (f(\mathbb{K}_{n}[X]),+,\cdot) \\ &P \mapsto \tilde{P} \circ(e^{ x }) \end{align} $$ Since the set $(X^{k})_{k \in [|1,n|]}$ is already linearly independent in $\mathbb{K}_{n}[X]$ , all I have to do now is prove that $f$ is injective to deduce that $\left(e^{ kx }\right)_{k \in [|1,n|]}$ is linearly independent in $\mathbb{R}^{\mathbb{R}}$ . Now here is where I'm stuck, as I find it quite difficult to prove that $f$ is injective. I want to see if it is possible,
|
Since the set $(X^k)_{k \in [|1,n|]}$ is already linearly independent in $\mathbb{K}_n[X]$ , to show that $f$ is injective we need to prove that the kernel of $f$ is trivial, i.e., the only polynomial that maps to the zero function under $f$ is the zero polynomial. Suppose $f(P) = 0$ for some non-zero polynomial $P \in \mathbb{K}_n[X]$ . This means that $P(e^x) = 0$ for all $x \in \mathbb{R}$ . However, the exponential function $e^x$ takes infinitely many distinct values as $x$ ranges over $\mathbb{R}$ , while a non-zero polynomial can have at most a finite number of roots, which is a contradiction. Hence, the only polynomial that can map to the zero function is the zero polynomial itself, and therefore
|
|vector-spaces|vector-space-isomorphism|
| 1
|
Summing binomial coefficients
|
If $s_n=\sum_{k=0}^{n}(-4)^k\binom{n+k}{2k}$ , how do we prove $s_{n+1}+2s_n+s_{n-1}=0$ ? One of my students had this question in his exam. Honestly speaking, I couldn't get a single idea of how to even start. I know some strategies for finding binomial sums, but none of them helped. It would be great if someone could help.
|
This answer proves that $s_{n+1}+2s_n+s_{n-1}=0$ without proving that $s_n=(-1)^n(2n+1)$ . ( user 's answer is helpful.) First, we have $$\begin{align}s_{n+1}&=\sum_{k=0}^{n+1}(-4)^k\binom{n+1+k}{2k} \\\\&=\sum_{k=0}^{n+1}(-4)^k\bigg(\binom{n+k}{2k}+\binom{n+k}{2k-1}\bigg) \\\\&=\sum_{k=0}^{\color{red}{n+1}}(-4)^k\binom{n+k}{2k}+\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1} \\\\&=\sum_{k=0}^{\color{red}n}(-4)^k\binom{n+k}{2k}+\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1} \\\\&=s_n+\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1}\end{align}$$ So, we obtain $$\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1}=s_{n+1}-s_n\tag1$$ Also, we have $$\begin{align}&\sum_{k=0}^{n+1}(-4)^k\binom{n+k}{2k-1}\\\\&=\sum_{k=0}^{n+1}(-4)^k\bigg(\binom{n+k-1}{2k-1}+\binom{n+k-1}{2k-2}\bigg) \\\\&=\sum_{k=0}^{\color{red}{n+1}}(-4)^k\binom{n+k-1}{2k-1}+\sum_{k=\color{blue}0}^{n+1}(-4)^k\binom{n+k-1}{2k-2} \\\\&=\sum_{k=0}^{\color{red}n}(-4)^k\binom{n+k-1}{2k-1}+\sum_{k=\color{blue}1}^{n+1}(-4)^{k}\binom{n+k-1}{2k-2} \\\\&=\sum_{k=0}^{n}(
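Before (or instead of) a formal proof, the recurrence and the closed form $s_n=(-1)^n(2n+1)$ are easy to check numerically (Python sketch):

```python
from math import comb

s = lambda n: sum((-4)**k * comb(n + k, 2*k) for k in range(n + 1))
for n in range(1, 12):
    assert s(n + 1) + 2*s(n) + s(n - 1) == 0
    assert s(n) == (-1)**n * (2*n + 1)
print("recurrence and closed form hold for n up to 11")
```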
|
|algebra-precalculus|
| 0
|
In an art museum, there are $n$ paintings, $n \ge 33$, ...
|
In an art museum, there are $n$ paintings, $n \ge 33$ , for which a total of $15$ different colors are used, so that any two paintings have at least one common color and there are no two paintings that have exactly the same colors. Determine all possible values of $n \ge 33$ so that, however we color the paintings with the above properties, we can choose four distinct paintings $T_1$ , $T_2$ , $T_3$ and $T_4$ , so that any color that is used in both $T_1$ and $T_2$ can be found in $T_3$ or $T_4$ . I've been trying to solve this combinatorics problem for some time, but I can't think of what the result could be (probably a big number). I don't know if it will be useful, but I will put my attempts below. Let $T_1, T_2, T_3, ... , T_n$ be the $n$ sets representing the "paintings", each having at least one element and at most $15$ elements, and let those elements be $c_1,c_2,...,c_{15}$ (the $15$ colors). From the first sentence we have that $T_i \ne T_j , \forall i,j \in \{1,2,...,n\} , i \n
|
I think I have an answer that relies crucially on there being exactly $15$ colors (it won't work for $16$ colors). I call the paintings $P_1, \cdots, P_n$ . We count the number of five-tuples $$(i,j,k,l,c)$$ where $i,j,k,l$ are distinct integers in $1,2,\cdots, n$ , and $c$ is a color such that $c$ lies in $P_i$ and $P_j$ but not $P_k$ or $P_l$ . Call such a five-tuple good . On one hand, for each color $c$ , suppose there are $a$ of the $P_i$ 's that contain $c$ , and $(n - a)$ of the $P_i$ 's that don't. Then the number of good five-tuples with color $c$ is $$a(a - 1)(n - a)(n - a - 1)$$ which by AM-GM is at most $n^2(n - 2)^2 / 16$ . Summing over all the colors, the number of good five-tuples is at most $$15 n^2(n - 2)^2 / 16.$$ On the other hand, suppose that for any four distinct paintings $P_i, P_j, P_k, P_l$ , we have $P_i \cap P_j \not\subset P_k \cup P_l$ . Then there exists a color $c$ such that $(i,j,k,l,c)$ is a good five-tuple. So the number of good five-tuples is at least th
|
|combinatorics|permutations|combinations|
| 1
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given inequality:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} < \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same for all equations, so I noticed that the intervals are contradictory. So my solution is $]0,1[$ , also $\mathbb{R}\setminus\{0,1\}$ . What would have been an easier way?
|
$$\frac{x+1}{x-1} - \frac{1}{x} > 0 $$ Or $$\frac{x^2 +x - x + 1}{x(x-1)} > 0 $$ Or $$\frac{x^2+1}{x(x-1)} > 0 $$ Now $x^2+1>0$ is true for all real values of $x$. So if $x^2+1>0$ and $\frac{x^2+1}{x(x-1)} > 0$ , then $x(x-1)>0$ has to be true. Solving this by the wavy curve method, you get $x<0$ or $x>1$ . What is the wavy curve method? The wavy curve method is a manner of checking the signs of the function whose value we are comparing. If you draw a number line and plot the roots on it, here $0$ and $1$, you'll see that the function is positive to the left of $0$, negative between $0$ and $1$, and positive again to the right of $1$. The sign alternates from left to right after every root if the root can be expressed as $(x-k)^n$ where $n$ is odd; it doesn't change if $n$ is even.
|
|inequality|
| 0
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given inequality:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} < \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same for all equations, so I noticed that the intervals are contradictory. So my solution is $]0,1[$ , also $\mathbb{R}\setminus\{0,1\}$ . What would have been an easier way?
|
This might be easier. $$\frac{x+1}{x-1}>\frac 1 x$$ $$\Leftrightarrow \frac{x+1}{x-1}-\frac 1 x>0$$ $$\Leftrightarrow \frac{x^2+1}{x(x-1)}>0$$ $$\Leftrightarrow x(x-1)>0 \quad (\text{since } x^2+1>0 \text{ for all real } x),$$ hence the solution set is $$(-\infty,0)\cup (1,\infty).$$
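The same solution set can be obtained symbolically, e.g. with SymPy (a sketch; `solve_univariate_inequality` handles the excluded points $x=0$ and $x=1$ automatically):

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
sol = solve_univariate_inequality((x + 1)/(x - 1) > 1/x, x, relational=False)
print(sol)   # Union(Interval.open(-oo, 0), Interval.open(1, oo))
```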
|
|inequality|
| 0
|
Different definition of subsequences in Kelley's General Topology
|
I have a question that may seem trivial. I am studying Kelley's General Topology. I was reading Chapter 2 when I came across an unusual definition. Typically, I define a subsequence as follows: a subsequence of the sequence $(a_n)_{n \in \mathbf{N}}$ is any sequence of the form $(a_{n_k})_{k \in \mathbf{N}}$ where $(n_k)_{k \in \mathbf{N}}$ is a strictly increasing sequence of positive integers. Whereas in the book, it says that: $T$ is a subsequence of a sequence $S$ iff there is a sequence $N$ of non-negative integers such that $T = S \circ N$ (equivalently, $T_i = S_{N_i}$ for each $i$ ) and for each integer $m$ there is an integer $n$ such that $N_i \geq m$ whenever $i \geq n$ . I can deduce that these two definitions are not the same, and it may seem that one definition implies the other. Am I right or wrong? Can someone formally explain to me the difference and how these two are related? Thank you :)
|
The first definition in your question is the usual one. Kelley's definition would agree with it if $N$ were required to be a strictly increasing sequence, but Kelley's requirement is weaker than that. So any subsequence in the usual sense is also a subsequence in Kelley's sense, but the converse is not true. I think that Kelley uses his unusual definition in order to agree with the treatment of nets and subnets (a sort of generalized sequence, important in general topology) later in the book. The definition of "subnet" is the same as Kelley's definition of "subsequence" with the index set $\mathbb N$ for sequences replaced by more general ("directed") partially ordered sets.
|
|sequences-and-series|analysis|definition|
| 0
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given inequality:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} < \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same for all equations, so I noticed that the intervals are contradictory. So my solution is $]0,1[$ , also $\mathbb{R}\setminus\{0,1\}$ . What would have been an easier way?
|
I get the general idea of trying to break down the problem into a bunch of cases, and examine each one of those cases independently. However, there is no guarantee that the cases you have chosen are useful or correct. Typically, the approach that you are working through depends on rewriting the original inequality so that it is expressed in the form $$ \frac{p(x)}{q(x)} > 0, $$ where $p$ and $q$ are polynomials. The inequality then holds true whenever $p(x)$ and $q(x)$ have the same sign. In this case, $$ \frac{x+1}{x-1} > \frac{1}{x} \iff 0 < \frac{x^2+1}{x(x-1)}. $$ It is, I think, reasonably clear that $p(x) = x^2 + 1 > 0$ for all $x\in\mathbb{R}$ , which means that the inequality is satisfied if and only if $q(x) = x(x-1) > 0$ . This, in turn, requires that $x$ and $x-1$ have the same sign: if $x < 0$ and $x - 1 < 0$ , then $x < 0$ (as this is the more restrictive condition—hence $x \in (-\infty,0)$ ), and if $x > 0$ and $x-1 > 0$ , then $x > 1$ (as this is the more restrictive condition—hence $x\in (1,\infty)$ ). Since eith
|
|inequality|
| 1
|
Norms Induced by Inner Products and the Parallelogram Law
|
Let $ V $ be a normed vector space (over $\mathbb{R}$, say, for simplicity) with norm $ \lVert\cdot\rVert$. It's not hard to show that if $\lVert \cdot \rVert = \sqrt{\langle \cdot, \cdot \rangle}$ for some (real) inner product $\langle \cdot, \cdot \rangle$, then the parallelogram equality $$ 2\lVert u\rVert^2 + 2\lVert v\rVert^2 = \lVert u + v\rVert^2 + \lVert u - v\rVert^2 $$ holds for all pairs $u, v \in V$. I'm having difficulty with the converse. Assuming the parallelogram identity, I'm able to convince myself that the inner product should be $$ \langle u, v \rangle = \frac{\lVert u\rVert^2 + \lVert v\rVert^2 - \lVert u - v\rVert^2}{2} = \frac{\lVert u + v\rVert^2 - \lVert u\rVert^2 - \lVert v\rVert^2}{2} = \frac{\lVert u + v\rVert^2 - \lVert u - v\rVert^2}{4} $$ I cannot seem to get that $\langle \lambda u,v \rangle = \lambda \langle u,v \rangle$ for $\lambda \in \mathbb{R}$. How would one go about proving this?
|
Let us focus on proving the additivity of the inner product defined as follows: $\langle x,y\rangle = \frac{1}{2}(\|x\|^2 + \|y\|^2 - \|x-y\|^2) = \frac{1}{4} (\|x+y\|^2 - \|x-y\|^2)$ (the other part already has a solution posted). Additivity: 1- We want to prove that $\langle x, y+z\rangle = \langle x, y\rangle + \langle x, z\rangle$ . By definition $4\langle x, y+z\rangle = \|x+y+z\|^2 - \|x-y-z\|^2$ . We write $y'=x/2 +y , z'=x/2+z , y'' = x/2-y, z''=x/2-z$ ; then we have that: $ \| x+y+z \|^2 = \| y' +z'\|^2 = 2\| y' \|^2 + 2\|z' \|^2 - \|y'-z'\|^2$ , where the last equality comes from the parallelogram identity. $ \| x-y-z \|^2 = \| y'' +z''\|^2 = 2\| y'' \|^2 + 2\|z'' \|^2 - \|y''-z''\|^2$ , again by the parallelogram identity. 2- Because $y'-z'=y-z=-(y''-z'')$ , we have $\|y'-z'\|=\|y''-z''\|$ and we finally get that: $$4\langle x, y+z\rangle = \| x+y+z \|^2 - \| x-y-z \|^2 = 2\| y' \|^2 + 2\|z' \|^2 - \|y'-z'\|^2 - 2\| y'' \|^2 - 2\|z'' \|^2 + \|y''-z''\|^2 = 2\| y' \|^2 - 2\| y'' \|^2 + 2\|z' \|^2 - 2\|z'' \|^2$$ 3- On the other side we note that
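As a quick numerical illustration that the polarization formula does recover a genuine inner product when the norm comes from one (NumPy sketch with random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
polar = (np.linalg.norm(x + y)**2 - np.linalg.norm(x - y)**2) / 4
print(np.isclose(polar, x @ y))   # True
```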
|
|linear-algebra|functional-analysis|normed-spaces|inner-products|
| 0
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given inequality:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} < \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same for all equations, so I noticed that the intervals are contradictory. So my solution is $]0,1[$ , also $\mathbb{R}\setminus\{0,1\}$ . What would have been an easier way?
|
To escape the subcase problem, I will provide a general method here: $$\dfrac{ax+b}{cx+d}>\dfrac{ex+f}{gx+h}$$ The idea is to multiply by the square of both denominators, which are necessarily positive. Hence, we get $$(ax+b)(cx+d)(gx+h)^2>(ex+f)(gx+h)(cx+d)^2\tag{*}$$ It seems to be quite tough, but actually the two sides have some common factors, so we can factorize them: $$(*)\iff (cx+d)(gx+h)\big((ax+b)(gx+h)-(ex+f)(cx+d)\big)>0$$ Now $(ax+b)(gx+h)-(ex+f)(cx+d)$ is a quadratic polynomial, and solving a quadratic inequality is simple, hence the solution can be easily determined. For example, in this case we have $$(x+1)(x-1)x^2>x(x-1)^2\iff x(x-1)(x^2+x-x+1)>0\iff x(x-1)(x^2+1)>0$$ This time $x^2+1$ is always positive, so it reduces to $x(x-1)>0$ , which gives the solution $(-\infty,0)\cup(1,+\infty)$ .
|
|inequality|
| 0
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given inequality:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} < \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same for all equations, so I noticed that the intervals are contradictory. So my solution is $]0,1[$ , also $\mathbb{R}\setminus\{0,1\}$ . What would have been an easier way?
|
If we add and subtract $2$ at the top of the first fraction, we get $$1+\frac2{x-1}>\frac 1{x}$$ We know that when $x>1$ , the LHS is bigger than the RHS because $\frac1{x-1}>\frac1x$ . If $x<0$ then let $u=-x$ and so $$1-\frac2{u+1}>-\frac1u$$ Add $\frac2{u+1}-1$ to both sides: $$0>\frac2{u+1}-\frac1u-1=\frac{-u^2-1}{u(u+1)}$$ which reduces to gwen's case.
|
|inequality|
| 0
|
How to define the addition and multiplication of p-adic integers as infinite formal sums?
|
On page 269 in Dummit & Foote's Abstract Algebra, 3rd edition, it is stated that every element in $\mathbb{Z}_p=\varprojlim\mathbb{Z}/p^i\mathbb{Z}$ can be written uniquely as an infinite formal sum $\sum^{\infty}_{k=0}b_kp^k$ with each $b_k\in[0,p-1]$ . But I was confused about how to define the addition and multiplication on $\sum^{\infty}_{k=0}b_kp^k$ so that the ring structure is preserved. The following is my attempt: $$ \begin{cases} \displaystyle\left(\sum_{k=0}^{\infty}b_kp^k\right)+\left(\sum_{k=0}^{\infty}c_kp^k\right)=\sum_{k=0}^{\infty}\big(b_k+c_k\ \ (\mathrm{mod}\ \ p)\big)p^k \\\displaystyle\left(\sum_{k=0}^{\infty}b_kp^k\right)\cdot\left(\sum_{k=0}^{\infty}c_kp^k\right)=\sum_{k=0}^{\infty}\big(b_k\cdot c_k\ \ (\mathrm{mod}\ \ p)\big)p^k \end{cases} $$ where $\big(b_k+c_k\ \ (\mathrm{mod}\ \ p)\big)$ and $\big(b_k\cdot c_k\ \ (\mathrm{mod}\ \ p)\big)$ are in $[0,p-1]$ . But I am not sure whether it is true. Update on 2024.03.29: following the hints from the comments, I
|
You basically need to do the definition recursively rather than attempt to find a closed formula for the $k$ th "digit" that does not depend on all previous digits. Let $\displaystyle a=\sum_{i=0}^{\infty}a_ip^i$ and $\displaystyle b=\sum_{i=0}^{\infty}b_ip^i$ , we let $$a+b=\sum_{i=0}^{\infty}s_ip^i\qquad\text{and}\qquad a\times b = \sum_{i=0}^{\infty} m_ip^i,$$ and define the digits and carries recursively as follows. For the sum, we define the summand digits $s_j$ and the carries $c_j$ by: Define the $0$ th digit of the sum $s_0 = a_0+b_0\bmod p$ (the remainder when dividing by $p$ ), and the first carry by $$c_1=\left\{\begin{array}{ll} 0&\text{if }a_0+b_0\lt p,\\ 1 &\text{if }a_0+b_0\geq p. \end{array}\right.$$ Assuming you have defined the $k$ th digit of the sum $s_k$ and the $(k+1)$ st carry $c_{k+1}$ , define $s_{k+1}=a_{k+1}+b_{k+1}+c_{k+1}\bmod p$ , and $$c_{k+2} = \left\{\begin{array}{ll} 0&\text{if }a_{k+1}+b_{k+1}+c_{k+1}\lt p,\\ 1 &\text{if }a_{k+1}+b_{k+1}+c_{k+1}\geq p
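The recursion above is equivalent to doing integer arithmetic modulo $p^N$ and re-reading the digits; a compact Python sketch for truncated $p$-adic addition:

```python
def digits(x, p, N):                       # x mod p^N, least significant first
    return [(x // p**k) % p for k in range(N)]

def add(a, b, p, N):                       # a, b are digit lists of length N
    val = lambda ds: sum(d * p**k for k, d in enumerate(ds))
    return digits(val(a) + val(b), p, N)

# In Z_5, -1 = 4 + 4*5 + 4*5^2 + ...; adding 1 gives 0 digit by digit.
print(add([4, 4, 4, 4], [1, 0, 0, 0], 5, 4))   # [0, 0, 0, 0]
```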
|
|abstract-algebra|group-theory|number-theory|ring-theory|p-adic-number-theory|
| 0
|
Limit of uniformly continuous and continuous functions are equal implies both are uniformly continuous
|
Given $f:[0,\infty)\to \mathbb{R}$ a uniformly continuous function and $g:[0,\infty)\to \mathbb{R}$ a continuous function, if $$ \lim_{x \to \infty} \big(g(x) - f(x)\big) = 0,$$ is $g$ uniformly continuous on $[0, \infty)$ ? Thank you for your help!
|
Notice that $g-f$ is uniformly continuous, since $g-f$ is continuous on $[0,\infty)$ and $\lim\limits_{x\to\infty}\big(g(x)-f(x)\big)=0$ (a continuous function on $[0,\infty)$ with a finite limit at infinity is uniformly continuous). Since $f$ and $g-f$ are uniformly continuous on $[0,\infty)$ , $g=(g-f)+f$ is uniformly continuous on $[0,\infty)$ .
|
|limits|continuity|uniform-continuity|
| 0
|
Optimal strategy in one-player game
|
You are playing a one-player game with two opaque boxes. At each turn, you can choose to either "place" or "take". "Place" places \$1 from a third party into one box randomly. "Take" empties out one box randomly, and that money is yours. This game consists of 100 turns, where you must either place or take. Assuming optimal play, what is the expected payoff of this game? Note that you do not know how much money you have taken until the end of the game. It is obvious that the more you take, the less you can earn; for example, if you place at turn $1$ and then keep taking, the expected payoff is $0.5$ . I thought of doing the opposite and keep placing until the $100$ th turn, by which point we have already placed \$99. Let $X_n$ denote the amount in box $1$ at the $n$ th turn. $X_n$ follows a binomial distribution Bin( $n,0.5$ ). For box $2$ , its amount $Y_n$ is $n-X_n$ , which is also Bin( $n,0.5$ ). Hence, after the $99$ th "place", there is on average \$49.5 in each box, which you take at the $100
|
I do not as of yet see a clean way to approach this with pen and paper, but I am finding from simulation that $94$ places and $6$ takes appears to be optimal. Dirty javascript code for simulation, 100000 simulations for each strategy: results=[]; totalNumber=100; for(numberOfPlacings=0; numberOfPlacings

Having simulated each option for the number of times to perform places vs takes, we end up with very slightly less than the total amount of money placed in each case up to the $90$'s, where it becomes more interesting. $\begin{array}{c|c|c|c|c}90&91&92&93&94\\89.92338&90.83088&91.64815&92.27359&92.51002\\\hline 95&96&97&98&99\\91.99924&90.03692&84.84217&73.57936&49.50047\end{array}$ As a sanity check, this matches with what we expected for $99$ being an expected value of precisely $49.5$ and for $98$ being a value of precisely $49+\frac{1}{2}\cdot 49=73.5$ , the numbers being off a small amount due to acceptable randomness. Repeating the test specifically only for the numbers $90$ to $99$ and incre
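For reference, here is a re-implementation of that simulation in Python (a sketch, assuming each "place" and each "take" picks one of the two boxes uniformly at random, as in the question):

```python
import random

def avg_payoff(n_places, turns=100, trials=100_000):
    """Average payoff of: place for n_places turns, then take for the rest."""
    total = 0.0
    for _ in range(trials):
        boxes = [0, 0]
        for turn in range(turns):
            i = random.randrange(2)
            if turn < n_places:
                boxes[i] += 1        # "place": $1 into a random box
            else:
                total += boxes[i]    # "take": empty a random box
                boxes[i] = 0
    return total / trials

for n in range(90, 100):
    print(n, round(avg_payoff(n), 3))   # peaks near n = 94
```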
|
|probability|
| 0
|
The Laplace transform of a conditional random variable
|
Let $X$ be exponentially distributed with mean $1$ and $q \in (0,1)$ . Define the random variable $Y \triangleq (1-q)X + q$ . Now, the CCDF of $Y$ is given by $\mathbb{P}(Y>y) = e^{-\frac{y-q}{1-q}}\mathbf{1}(y\geq q) +\mathbf{1}(y<q)$ , where $\mathbf{1}(\cdot)$ is the indicator function. Let $Z>0$ be a random variable. I claim that the following statements are both valid: $$ (1) \hspace{0.5cm} \mathbb{P}(Y> Z) = {\mathbb{E}} \left[ e^{-\frac{Z-q}{1-q}}\mathbf{1}(Z\geq q)+\mathbf{1}(Z<q)\right] $$ $$ (2) \hspace{0.5cm} \mathbb{P}(Y> Z) = {\mathbb{E}} \left[ e^{-\frac{(Z|(Z \geq q))-q}{1-q}}\right]\mathbb{P}(Z\geq q) +\mathbb{P}(Z<q) $$ with the conditional random variable $(Z|(Z \geq q))$ . The title of the post is because combining $(1)$ and $(2)$ would give $$ \mathcal{L}_{(Z|(Z \geq q))}\left( \frac{1}{1-q}\right)\triangleq{\mathbb{E}} \left[ e^{-\frac{(Z|(Z \geq q))}{1-q}}\right] = \frac{{\mathbb{E}} \left[ e^{-\frac{Z}{1-q}}\mathbf{1}(Z\geq q)\right] }{\mathbb{P}(Z\geq q)}. $$ For me, both (1) and (2)
|
In view of mathematical rigor, there are two primary concerns regarding your assertions: 1. The assertion only holds if we assume $Z$ to be independent of $Y$ (and, consequently, of $X$ ). Take, for instance, $Z = Y$ . Then the stated equalities don't hold true. 2. The term "conditional random variable" lacks a precise definition in mathematical terms. Generally, for any event $E$ with $\mathbb{P}(E) > 0$ , it's not possible to "conditionalize" a random variable $Z$ (by means of processing $Z$ as a quantity) in such a way that all associated probabilistic quantities (like the distribution of $Z$ or joint distributions involving $Z$ ) are "conditionalized" in a compatible manner. For this reason, people do not bother to define the notion of conditional random variable. Rather than attempting to manipulate the random variable itself, a more natural and neat solution is to replace the probability law $\mathbb{P}$ by the conditional probability law $\mathbb{P}(\cdot \mid E)$ . In light of
|
|conditional-probability|laplace-transform|characteristic-functions|
| 1
|
How are existential quantifiers present in the internal logic of regular categories?
|
Intuitively speaking, how do existential quantifiers appear? I'm just starting to get familiar with these definitions. Top and conjunctions appear because of finite products. (Plus, I assume, something that makes them work nicely with the existential quantifiers.) But my understanding of existential quantifiers in categorical terms is by way of them being left adjoint to weakening.
|
The internal logic of a regular category is talking about the logic of its subobject fibration , $p : \mathsf{Sub}_C → C$ . I think for later purposes, it's easiest to think about this case using two related structures: The codomain fibration on $C$ is the functor $\mathsf{cod} : C^→ → C$ mapping each arrow to its codomain The subobject fibration is equivalent to the subfibration of $\mathsf{cod}$ containing only monomorphisms in the 'over' category For both of these, given the projection map $π : Γ×A → Γ$ , there is a weakening functor $π^*$ (given by pullback), with a type like $C/Γ → C/Γ×A$ or $\mathsf{Sub}_C(Γ) → \mathsf{Sub}_C(Γ×A)$ , and we want to interpret existential quantifiers as left adjoints to this. I think it's easiest to see what the regular structure does in two steps. If we start with a monomorphism $s : S \hookrightarrow Γ×A$ , we can consider it as an object of the codomain fibration. Then the left adjoint to weakening is the disjoint union functor, given by the pos
|
|logic|category-theory|categorical-logic|
| 1
|
How to solve $\min(x^2+y^2)$, $x\ge1, y\ge-2$, using the KKT conditions?
|
I'm trying to understand better optimisation problems and in particular the KKT conditions. To this end, consider the minimisation problem $\min(x^2+y^2)$ subject to $x\ge1$ and $y\ge -2$ . It's clear that the solution is $x=1, y=0$ . However, how do I get this by applying the KKT conditions? And is this a kind of problem where they are necessary and sufficient?
|
I managed to solve this in the process of writing up the question, so I figured I'd post it for future reference and in case it might help someone else. Define the Lagrangian $$L(x,y,\lambda,\mu) = x^2+y^2-\lambda(x-1)-\mu (y+2).$$ Imposing $\nabla L=0$ you get the conditions $2x=\lambda$ and $2y=\mu$ . From this we can see that: If $\mu\neq0$ , then $y=-2$ , thus $\mu=-4$ , which isn't consistent with $\mu\ge0$ . If $\mu=\lambda=0$ the only solution would be $x=y=0$ which isn't feasible. If $\mu=0$ and $\lambda\neq0$ , then $x=1$ , $\lambda=2$ . But also from $2y=\mu$ we'd get $y=0$ . The corresponding cost is $1^2+0^2=1$ , and therefore this is the solution I knew I should have got from the beginning (my initial problem is that I made some miscalculations in this step and didn't correctly get this solution as I expected). For completeness, we can also verify what the dual problem looks like. We have $$g(\lambda,\mu) \equiv \inf_{x,y} L(x,y,\lambda,\mu) = \inf_{x,y}[x(x-\lambda) + y(y-\
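The KKT solution can be confirmed with an off-the-shelf solver; a SciPy sketch (the bound constraints encode $x\ge1$, $y\ge-2$):

```python
from scipy.optimize import minimize

res = minimize(lambda v: v[0]**2 + v[1]**2, x0=[2.0, 1.0],
               bounds=[(1, None), (-2, None)])
print(res.x)   # approximately [1, 0], with objective value 1
```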
|
|optimization|convex-optimization|nonlinear-optimization|karush-kuhn-tucker|
| 0
|
Growth rate in the context of dynamical systems
|
Consider the following question (I only need help with the last part). The dynamics of a directly transmitted microparasite can be modelled by the system \begin{align*} \frac{d X}{d t} & =b N-\beta X Y-b X, \\ \frac{d Y}{d t} & =\beta X Y-(b+r) Y, \\ \frac{d Z}{d t} & =r Y-b Z, \end{align*} where $b, \beta$ and $r$ are positive constants and $X, Y$ and $Z$ are respectively the numbers of susceptible, infected and immune (i.e. infected by the parasite, but showing no further symptoms of infection) individuals in a population of size $N$ , independent of $t$ , where $N=X+Y+Z$ . Consider the possible steady states of these equations. Show that there is a threshold population size $N_c$ such that if $N<N_c$ there is no steady state with the parasite maintained in the population. Show that in this case the number of infected and immune individuals decreases to zero for all possible initial conditions. Show that for $N>N_c$ there is a possible steady state with $X=X_s<N$ and $Y=Y_s>0$ , and find expr
|
Hint: We can rewrite the linearized system in matrix form as $$\pmatrix{\tilde X\\\tilde Y}' = A \pmatrix{\tilde X\\\tilde Y} ,$$ where $$A := \pmatrix{-\beta Y_s - b & -\beta X_s \\ \beta Y_s & \beta X_s - (b + r) } ,$$ which has general solution $$\pmatrix{\tilde X\\\tilde Y} = e^{A t} \pmatrix{\tilde X_0\\\tilde Y_0} .$$ In particular, to show that the steady state is stable, it suffices to show that the real parts of the eigenvalue(s) of $A$ are negative. But the characteristic polynomial of $A$ (whose roots are the eigenvalues) can be written as $$\lambda^2 + \frac{b N}{X_s} \lambda + \beta (b + r) Y_s ,$$ and in particular its coefficients are positive, so the roots $\lambda$ have negative real part.
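A numerical spot-check of this stability claim (a sketch with arbitrary parameter values for which $N > N_c$; here the steady state $X_s = (b+r)/\beta$, $Y_s = b(N-X_s)/(b+r)$ follows from setting $dX/dt = dY/dt = 0$):

```python
import numpy as np

b, r, beta, N = 0.1, 0.4, 0.05, 20.0
Xs = (b + r) / beta                # = 10 < N, so the parasite persists
Ys = b * (N - Xs) / (b + r)
A = np.array([[-beta*Ys - b, -beta*Xs],
              [ beta*Ys,      beta*Xs - (b + r)]])
print(np.linalg.eigvals(A))        # both real parts negative -> stable
```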
|
|dynamical-systems|mathematical-biology|
| 1
|
Confusion related to curse of dimensionality in k nearest neighbor
|
I have this confusion related to the curse of dimensionality in k-nearest-neighbor search. It says that as the number of dimensions gets higher, I need to cover more space to get the same number of training examples. I didn't get what it is trying to show and how it occurs. Any clarifications?
|
The other answers are good and concise. Here is another answer which some might find useful. Let $N$ be the number of training data points we are given in a KNN classifier ML problem. Assume that the data points fall uniformly in the unit cube $[0,1]^{\text{d}}$ . Take $K$ (the number of nearest neighbors) to be such that the ratio $K/N$ is $r$ (for some $r$ ). When the training is done, we are given a test point. Center a cube $c(l)$ of side length $l$ at that point. By definition, any of our training data points can be seen as $\text{Unif}[0,1]^{d}$ , a $d$-dimensional uniform random variable. Now since $$ \mathbb{P}\Big( \text{Unif}[0,1]^{d} \in c(l) \Big) = l^d $$ we have $$ \mathbb{E}\Big[\text{no. data points in $c(l)$}\Big]=\sum^N_{i=1} \mathbb{P}\Big( \text{Unif}[0,1]^{d} \in c(l) \Big) = Nl^d. $$ If we choose the side length of the cube $l=r^{1/d}$ then $\mathbb{E}\Big[\text{no. data points in $c(l)$}\Big] = K$ . This means that for large $d$ the volume of the cube around the test
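The punchline is easy to tabulate (Python sketch): the side length $l = r^{1/d}$ needed to capture a fixed fraction $r$ of the data approaches $1$ as $d$ grows, so the "local" neighborhood stops being local:

```python
r = 0.01   # fraction of training points we want inside the cube
for d in (1, 2, 10, 100):
    print(d, round(r ** (1 / d), 3))
# 1 0.01 | 2 0.1 | 10 0.631 | 100 0.955 -> nearly the whole range per axis
```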
|
|discrete-mathematics|dimension-theory-analysis|
| 0
|
Optimal strategy in one-player game
|
You are playing a one-player game with two opaque boxes. At each turn, you can choose to either "place" or "take". "Place" places \$1 from a third party into one box randomly. "Take" empties out one box randomly, and that money is yours. This game consists of 100 turns, where you must either place or take. Assuming optimal play, what is the expected payoff of this game? Note that you do not know how much money you have taken until the end of the game. It is obvious that the more you take, the less you can earn; for example, if you place at turn $1$ and then keep taking, the expected payoff is $0.5$ . I thought of doing the opposite and keep placing until the $100$ th turn, by which point we have already placed \$99. Let $X_n$ denote the amount in box $1$ at the $n$ th turn. $X_n$ follows a binomial distribution Bin( $n,0.5$ ). For box $2$ , its amount $Y_n$ is $n-X_n$ , which is also Bin( $n,0.5$ ). Hence, after the $99$ th "place", there is on average \$49.5 in each box, which you take at the $100
|
Say you place for $N$ consecutive rounds and then take for the last $100-N$ . You will then collect $N$ , unless you are unlucky enough to take from the same box each time. In the latter case, you'd expect to get only $\frac N2$ . Thus, as a function of $N$ your expected gain is $$N\times \left(1-\frac 1{2^{100-N}}\right)$$ It is easily verified that this numerically matches (well, closely anyway) the simulation results obtained by @JMoravitz above and confirms that $N=94$ is optimal. For that $N$ I get an expected value of $92.53125$ , for example.
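Evaluating this expression for every $N$ confirms the optimum (Python sketch):

```python
# E(N) = N * (1 - 2**(N - 100)); find the maximising number of places.
best = max(range(101), key=lambda n: n * (1 - 2.0 ** (n - 100)))
print(best, best * (1 - 2.0 ** (best - 100)))   # 94 92.53125
```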
|
|probability|
| 1
|
What does it mean when a number 'y' is pseudoprime to base 'x'
|
I am a self-learner, so I don't really understand pseudoprimes to base $x$ . For example, $91$ is a pseudoprime to base $3$ ; is $91$ then also a pseudoprime to base $2$ ? Thank you, please explain. edit** I did a bit of research. For example, an odd composite integer $N$ will be called a Fermat pseudoprime to base $a$ if $\gcd(a, N) = 1$ and $a^{N−1} \equiv 1 \pmod{N}$ . My question is, what about base $2$ ? Do we use the same formula, like $2^{90}$ , and divide by $91$ ? If I don't get a remainder of $1$ , it is not a pseudoprime, right? But when I plugged it in on the calculator, the number is too big, so how can I find the remainder on a calculator?
|
For the sake of familiarity, let's suppose that you have a large number $n$ written in base ten, and want to know if it's a prime number. A simple test is to look at the last digit. If it's 0, 2, 4, 6, or 8, then $n$ is a multiple of 2 (aka “even”), so not prime. If it's 0 or 5, then $n$ is a multiple of 5, so not prime. If it's 1, 3, 7, or 9, then $n$ could be a prime. (Note that this test doesn't work for $2$ and $5$ themselves, so these have to be treated as special cases.) Of course, not all numbers with a last digit of 1, 3, 7, or 9 actually are prime. Counterexamples include 21, 27, and 33. You may have learned the divisibility rule for 9: A number is divisible by 9 if the sum of its digits is divisible by 9. This works because $\forall n\in\mathbb{N} : 10^n ≡ 1\pmod{9}$ . This sum-of-digits rule also works to check divisibility by 3, which is a factor of 9. For example, 57 has the digit sum $5 + 7 = 12$ , which is divisible by 3, and therefore 57 itself is divisible by 3 (in fac
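On the practical question of the numbers being "too big for the calculator": modular exponentiation computes $a^{n-1} \bmod n$ without ever forming the huge power. In Python this is the built-in three-argument `pow`:

```python
# Fermat test for n = 91: a**(n-1) % n via modular exponentiation.
for a in (2, 3):
    print(a, pow(a, 90, 91))   # base 3 -> 1 (pseudoprime); base 2 -> 64 (not)
```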
|
|combinatorics|discrete-mathematics|pseudoprimes|
| 0
|
Probability of collision vs. Mean Free Path
|
Background In physics, there is the concept of the " mean free path ," which is the expected value of the distance a molecule (for example) can travel before it hits another one. If they're all the same size, and the neighbor molecules are motionless, this can be calculated as $\frac{1}{\pi d^2 n_v}$ (where $d$ is the diameter of a molecule and $n_v$ is the number of molecules per volume). That all makes sense to me. What I want to determine is: what is the probability of collision for a given molecule which travels some distance $D$ ? Initial Stumblings I would expect the probability graph to look something like this: That is, as the particle travels further and further, the odds of it hitting a neighbor approach 1 (assuming infinitely large surroundings of uniform density). I'm trying to find a formula for this, but am a little stuck. Also, I'm trying to find the relationship between this and the mean free path formula, which also seems elusive. Could you help me? Update: I made some more
|
Let's assume that other particles are motionless and "located independently of each other". 1) Under this assumption, the probability of a molecule freely traveling an infinitesimally small distance $\mathrm{d}l$ is $1 - \pi d^2 n_v \mathrm{d}l$ , and this event is independent of what will happen in the remaining travelling path of that molecule. So, if $L$ denotes the distance the molecule travels before hitting another one, then $\mathbb{P}(L > l)$ , the probability of traveling a distance of $l > 0$ freely, is given by the following product integral : $$ \mathbb{P}(L > l) = \prod_{0}^{l}(1 - \pi d^2 n_v \, \mathrm{d}l) = \exp\left(-\int_{0}^{l} \pi d^2 n_v \, \mathrm{d}l\right) = e^{-\pi d^2 n_v l} = e^{-l / l_{\text{MFP}}} $$ where $l_{\text{MFP}} = \frac{1}{\pi d^2 n_v}$ stands for the mean free path. Hence $L$ is distributed according to the exponential distribution with mean $l_{\text{MFP}}$ : $$ \mathbb{E}[L] = \int_{0}^{\infty} \mathbb{P}(L > l) \, \mathrm{d}l = l_{\text{MFP}}
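In code, the resulting collision probability $\mathbb{P}(L \le D) = 1 - e^{-D/l_{\text{MFP}}}$ is one line (a sketch; the numbers below are rough air-like values and only illustrative):

```python
import math

def collision_prob(D, d, n_v):
    """P(collision within distance D) under the exponential law above."""
    mfp = 1.0 / (math.pi * d**2 * n_v)   # mean free path
    return 1.0 - math.exp(-D / mfp)

print(collision_prob(D=1e-7, d=3e-10, n_v=2.5e25))   # ~0.5
```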
|
|probability|physics|cumulative-distribution-functions|
| 1
|
Prove that the function $f(x) = \frac{1}{x^p}$ belongs to $L_{1}(1,\infty)$ if and only if $p > 1$.
|
Prove that the function $f(x) = \frac{1}{x^p}$ belongs to $L_{1}(1,\infty)$ (where $L_1$ is the space of functions that are Lebesgue integrable) if and only if $p > 1$ . proof $\Rightarrow$ Suppose that $f(x) = \frac{1}{x^p} \in L^1(1,\infty)$ . As I know, if a function $f \in L^1(E)$ , it means that $\int_{E} f \, dm < \infty$ . But in this case, if $p > 1$ , then $\int_{E} f \, dm = \infty$ as $m(1,\infty) = \infty$ , since $(1, \infty)$ is an interval, and the measure of an interval is equal to its length. And for the converse, I have the same problem; I don't know how to do it.
|
The fact that $m(1,\infty)=+\infty$ does not imply that $f$ is not integrable in $(1,\infty)$ ; that would mean, for instance, that $L^1(1,\infty)=L^1(\mathbb{R})=\emptyset$ , which clearly is not true. To prove this statement, you just need to remember the definition of improper integral (and that a non-negative function is Lebesgue integrable in $(1,+\infty)$ iff $\displaystyle\int_{1}^{\infty} f(x) dx$ exists in the Riemann sense). For $p>1$ a primitive of $\dfrac{1}{x^p}$ is $\dfrac{1}{(1-p)x^{p-1}}$ and thus $$\displaystyle\int_{1}^\infty \dfrac{1}{x^p} dx=\lim_{c\to\infty} \displaystyle\int_{1}^c \dfrac{1}{x^p} dx=\lim_{c\to\infty}\dfrac{1}{(1-p)c^{p-1}}-\dfrac{1}{1-p}=\dfrac{1}{p-1}$$ since $c^{p-1}\to \infty$ because $p>1$ , so $\dfrac{1}{x^p}\in L^1(1,\infty)$ . For $p=1$ a primitive of $\dfrac{1}{x}$ is $\ln(x)$ , so $$\displaystyle\int_{1}^\infty \dfrac{1}{x} dx=\lim_{c\to\infty} \displaystyle\int_{1}^c \dfrac{1}{x} dx=\lim_{c\to\infty} \ln(c)-\ln(1)=\infty$$ and $\dfrac{1}{x}\notin L^1(1,\infty)$ .
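The two cases can be sanity-checked with SymPy (a quick sketch; SymPy evaluates both improper integrals directly):

```python
from sympy import symbols, integrate, oo

x = symbols('x', positive=True)
print(integrate(x**(-2), (x, 1, oo)))  # p = 2 > 1: gives 1, i.e. 1/(p - 1)
print(integrate(1/x, (x, 1, oo)))      # p = 1: gives oo, not integrable
```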
|
|measure-theory|lebesgue-integral|lebesgue-measure|
| 0
|
express log-log relationship as an exponential relationship
|
I have several log(y) ~ a + b * log(x) models that I want to express as exponential relationships. I know this involves an exponential transformation, but how do I solve it? Example: Step 1: log(y) ~ 2 + 3 * log(x) Step 2: exp(log(y)) ~ exp(2 + 3 * log(x)) Step 3: exp(log(y)) ~ exp(2) + exp(3 * log(x)) what comes next for the exp(3 * log(x)) part? I'm using a natural logarithm.
|
Some logarithm properties to have in mind: $\log(ab)=\log a+\log b$ ; $\log({a\over b})=\log a -\log b$ ; $\log a^b = b \log a$ ; $\log e^a=a$ (NOTE: when I write $\log$ I am referring to the natural logarithm, a.k.a. $\ln, \log_e$ ; also $e^a=\exp(a)$ ). Back to your example: you were almost there! $\log(y)=2+3\log(x) \implies e^{\log(y)}=e^{2+3\log(x)} \implies y=e^2 \cdot e^{3\log(x)} \implies y=e^2 \cdot e^{\log(x^3)} \implies y=e^2 \cdot x^3$ (also note that $e^{a+b} = e^a \cdot e^b$ ) EDIT after a bit of playing around with equations of the form $\log(y)=a+b \cdot \log(c)$ we can see the following $\log(y)=a+b \cdot \log(c) \implies \log(y)=\log e^a + \log(c^b) \implies \log(y)=\log(e^a \cdot c^b)$ $\implies e^{\log(y)}=e^{\log(e^a \cdot c^b)} \implies y= e^a \cdot c^b$ and that is the general simplified form of those types of expressions in terms of $y$ . Although I would not recommend just taking this and substituting in your $a,b,c$ values, especially if you are still not really comfort
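A quick numeric check of the final identity $y = e^2 x^3$ (a sketch; the sample $x$ values are arbitrary):

```python
import math

for x in (0.5, 1.0, 2.0, 10.0):
    y = math.exp(2) * x**3
    # log(y) should equal 2 + 3*log(x) up to floating-point error
    print(math.isclose(math.log(y), 2 + 3 * math.log(x)))
```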
|
|logarithms|exponential-function|
| 1
|
Derivative (Jacobian) of a matrix equation
|
I have this equation: $y = e^{t(A + W)} x_0$ where $A$ is a diagonal matrix and $W$ is a symmetric matrix. I need to find $\frac{\partial y}{\partial W}$ . If $A$ and $W$ commute then I could use the fact that $e^{t(A + W)} = e^{tA} \cdot e^{tW}$ and then use the Kronecker product: $\frac{\partial y}{\partial W} = x_0^T \otimes e^{tA} \, \, vec(t e^{tW})$ But I can't derive the case where they don't commute. Any thoughts?? thanks
|
The expansion continues with nested commutators, $$e^A e^W = e^{A + W + \frac{1}{2}[A,W] + \frac{1}{12}\left([A,[A,W]] + [W,[W,A]]\right) + \cdots},$$ where the higher-order terms are rational multiples of $n$-fold nested commutators of $A$ and $W$ : the so-called Baker–Campbell–Hausdorff formula.
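A numeric sketch of the first BCH correction (using NumPy/SciPy; the matrices are random and the scale $t$ is assumed small so the truncation orders are visible):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
t = 1e-2
A = t * rng.standard_normal((3, 3))
W = t * rng.standard_normal((3, 3))

lhs = expm(A) @ expm(W)
comm = A @ W - W @ A
print(np.linalg.norm(lhs - expm(A + W)))             # error is O(t^2)
print(np.linalg.norm(lhs - expm(A + W + comm / 2)))  # error drops to O(t^3)
```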
|
|linear-algebra|algebra-precalculus|matrix-calculus|kronecker-product|
| 0
|
Find the unbiased estimator for the parameter $\sigma$.
|
TASK : Let $(X_1, X_2)$ be a random i.i.d. sample from a $N(0, \sigma ^2)$ distribution. Find the unbiased estimator for the parameter $\sigma$ . SOLUTION : The estimator $\theta$ is unbiased if $E\theta = \theta$ . In this case $\theta = \sigma ^2$ , and $\sigma^2 = \frac{1}{2}(X_1^2 + X_2^2)$ . So $E\sigma^2 = E\frac{1}{2}(X_1^2+X_2^2) = \frac{1}{2}(EX_1^2 + EX_2^2)$ , $EX_i^2 = VarX_i + (EX_i)^2 = VarX_i + 0 = \sigma^2$ , and $E\sigma^2 = \sigma^2$ , so this estimator is unbiased. QUESTION That's it? I don't understand how we choose $\sigma^2$ . Thank you for any help.
|
The idea is that you should recognize the formula for sample variance, $s^2$ , and use the guess that the sample variance is an unbiased estimator of the population variance, $\sigma^2$ . First thing to note: it's more appropriate to use different letters for the estimator versus the thing we are trying to estimate--here I'm using $s$ for the estimator and $\sigma$ for the population parameter we are trying to estimate. I'm hypothesizing that the sample variance $s^2 = \frac{1}{2}(X_1^2 + X_2^2)$ is an unbiased estimator of the population variance $\sigma^2$ . A few facts we will use: We know that $Var(X_1) = Var(X_2) = \sigma^2$ , since they are from the same population. We know that the variance formula is $Var(X_1) = E(X_1^2) - E(X_1)^2 = E(X_1^2)$ since $E(X_1) = 0$ . It's similar to show that $Var(X_2) = E(X_2^2)$ . Then, using all the information I prove unbiasedness. $$ \begin{align*} E(s^2) &= E\left(\frac{1}{2}(X_1^2 + X_2^2)\right) \\ &= \frac{1}{2}(E(X_1^2) + E(X_2^2)) \\ &= \frac{1}{2}(\sigma^2 + \sigma^2) = \sigma^2. \end{align*} $$
|
|probability|statistics|estimation|
| 0
|
An (a.s.) continuous process $(X_t)_{t\geq 0}$ is a Brownian motion if $(e^{i\lambda X_t + \frac{1}{2}\lambda^2 t})_{t\geq 0}$ is a local martingale
|
Problem Let $X=(X_t)_{t\geq0}$ be an (a.s.) continuous $\mathbb{R}$ -valued process with $X_0=0$ such that $(e^{i\lambda X_t + \frac{1}{2}\lambda^2 t})_{t\geq 0}$ is a $\mathbb{C}$ -valued local martingale for all $\lambda\in\mathbb{R}$ . Show that $X$ is a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion. Opening remark The standard definition of a Brownian motion goes as follows: A $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process $B=(B_t)_{t\geq 0}$ is called a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion if: $B_0=0$ (a.s.); $B$ is (a.s.) continuous; $\forall s<t$ , $B_t-B_s\sim N(0,t-s)$ ; $B$ has independent increments. This and this solution for related problems make use of the so-called Lévy characterisation : A $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process $B=(B_t)_{t\geq 0}$ is called a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion if: $B_0=0$ (a.s.); $B_t$ is an (a.s.) continuous martingale with quadratic variation $[B]_t=t$ .
|
By using the martingale property and taking the derivative at $\lambda=0$ , we get the martingale property for $X$ $$E[X_{t}|\mathcal{F}_{s}]=X_{s},$$ where we can take the derivative due to the dominated convergence theorem since $|e^{iX}|=1$ . Similarly, using the martingale property again and taking the derivative twice in $E[e^{i\lambda(X_{t}-X_{s})+\lambda^2(t-s)/2}]=1$ , we get $$E[(X_{t}-X_{s})^{2}]=t-s.$$ Using the martingale property for $X$ we also get $$E[(X_{t_{4}}-X_{t_{3}})(X_{t_{2}}-X_{t_{1}})]=0$$ when $t_{1}\le t_{2}\le t_{3}\le t_{4}$ . By Theorem 1 of https://almostsuremath.com/2010/01/18/quadratic-variations-and-integration-by-parts/#scn_ibp_thm1 , semimartingales are guaranteed to have quadratic variations: Theorem 1 (Quadratic Variations and Covariations) Let $X,Y$ be semimartingales. Then, there exist cadlag adapted processes $[X]$ and $[X,Y]$ satisfying the following. For any sequence $P_n$ of stochastic partitions of $\mathbb R_+$ such that, for each $t\ge 0$ , the mesh $\vert P_n^t\vert$ tends to zero in probability as $n\rightarrow\infty$ , the foll
|
|normal-distribution|expected-value|brownian-motion|independence|local-martingales|
| 0
|
Differentiating/Integrating Power Series
|
I have the following question for my math homework, and I have no clue how to solve it. I've watched videos, looked in my textbook, Quizlets, etc. Question: Find the power series representation for g centered at 0 by differentiating or integrating the power series for f (perhaps more than once). Give the interval of convergence for the resulting series. $$ g(x)=\frac{x}{(1+13x^2)^2} \text{ using } f(x)=\frac{1}{1+13x^2} $$ I know that the power series of $f(x)$ is: $$ \sum_{k=0}^{\infty}(-13)^kx^{2k} $$ I also know that the derivative of $f(x)$ is: $$ f'(x)=\frac{-26x}{(1+13x^2)^2} $$ However, I do not know how to relate these variations of $f(x)$ to $g(x)$ , so I can find the power series of $g(x)$ .
|
We already have that $$f(x)=\sum_{k=0}^{\infty}(-13)^kx^{2k}, \quad |x|<\frac{1}{\sqrt{13}}.$$ It is well known that power series are differentiable within their interval of convergence and differentiating term by term gives the power series for the derivative. In other words, $$f'(x)=\frac{-26x}{(1+13x^2)^2}=\sum_{k=1}^{\infty} 2k(-13)^kx^{2k-1}.$$ But $g(x)=\dfrac{x}{(1+13x^2)^2}=\dfrac{-f'}{26}$ , so $$g(x)=-\dfrac{1}{26}\sum_{k=1}^{\infty} 2k(-13)^kx^{2k-1}=\sum_{k=1}^{\infty} k(-13)^{k-1}x^{2k-1}, \quad |x|<\frac{1}{\sqrt{13}}.$$ By shifting the index, we finally get $$g(x)=\sum_{k=0}^{\infty} (k+1)(-13)^{k}x^{2k+1}, \quad |x|<\frac{1}{\sqrt{13}}.$$
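SymPy can confirm the first few coefficients (a quick sketch):

```python
from sympy import symbols, series

x = symbols('x')
print(series(x / (1 + 13*x**2)**2, x, 0, 8))
# Expect x - 26*x**3 + 507*x**5 - ..., i.e. (k+1)*(-13)**k for x**(2k+1)
```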
|
|calculus|sequences-and-series|convergence-divergence|power-series|
| 1
|
Finding the Probability of an Event Happening in a Sphere, where a Point's Individual Probability Depends on its Distance from the Center
|
I'm working on a pet project and I've reached a sticking point; I'll describe the problem, then I'll describe the direction I've been trying. We have a sphere of radius $R$ , filled with $N$ points. I don't think it matters, but $N$ is very high. Each point has a probability of an event $E$ occurring that depends on its distance from the sphere's center $r$ , given by the function $P_{E}(r)$ . All points are independent. If it helps, here's a little diagram I whipped up: Sphere Diagram Since we know the volume of the sphere and the number of points contained, we know the density of points, which we'll call $D$ . The distribution of points is both homogeneous and isotropic, so the density remains constant for any local volume of the sphere. What I'd like to know is, what is the probability of the event $E$ happening at least once in the sphere? So far, my approach has been to split the sphere evenly, into concentric shells $\Delta r$ apart. Find what the number of points in this band is a
|
Let $E_i$ be the event that the event happens at point $i$ . We know (if I got it right) that $$P(E)= P(\cup E_i)= 1- P(\cap E_i^c) = 1 - (1-P(E_i))^N\tag1$$ And (assuming we are speaking of a 3D sphere) $$P(E_i) = \int P(E_i | r_i) \, p(r_i) \,d r_i = \frac{3}{R^3} \int_0^R P_E(r) \, r^2 dr \tag2$$ If the sphere is $D-$ dimensional the integral is $$ \frac{D}{R^D} \int_0^R P_E(r) \, r^{D-1} dr $$ Added: in more detail Let's define the random variables $R_i$ (taking values in $[0,R]$ ): distance from origin of point $i$ ; $X_i=1$ if the event happens at point $i$ , $0$ elsewhere. What we are given is the conditional probability $P(X_i = 1 | R_i = r) = P_E(r)$ . Because $R_i$ is a continuous variable, we can write its density function $f_{R_i}(r) = \lim_{h \to 0} \frac{1}{h} P(r\le R_i < r+h)$ . We also know $f_{R_i}(r) \propto r^2 \implies f_{R_i}(r) = \frac{3}{R^3} r^2$ . Using the total law of probability: $$P(E_i)= P(X_i=1) = \int P(X_i =1 | R_i = r) f_{R_i}(r) dr $$
|
|probability|probability-distributions|
| 0
|
Integrating using substitution
|
I have the following integral: $$\int \frac{2x-\sin^{-1}x}{(1-x^2)^{1/2}} \, dx$$ I divide it into two integrals: $\dfrac{2x}{(1-x^2)^{1/2}}$ and $\dfrac{\sin^{-1}x}{(1-x^2)^{1/2}}$, and eventually get a final result of $\dfrac{2}{3(\sin^{-1}x)^{3/2}}-2(1-x^2)^{1/2}$ . However, I get a different result with an online integrator : $$-\frac 1 2 (\sin^{-1}x)^2-2(1-x^2)^{1/2}$$ Where could I have been wrong?
|
Why not just substitute $\arcsin(x)=\theta,~\sin\theta=x$ , then $$\int \frac{2x-\sin^{-1}x}{\sqrt{1-x^2}}\mathrm dx=\int\frac{2\sin\theta\cos\theta}{|\cos\theta|}\mathrm d\theta-\int\frac{\theta\cos\theta}{\sqrt{1-\sin^2\theta}}\mathrm d\theta\\=\frac{\cos\theta}{|\cos\theta|}\left(-2\cos\theta-\int\theta\mathrm d\theta\right)=\operatorname{sgn}(\cos\theta)\left(-2\cos\theta-\frac12\theta^2\right).$$
|
|integration|
| 0
|
how to prove the Lemma 5.6. of "Invitation to 3D vision"
|
In the proof of Lemma 5.6 on page 115, there are some points about which I am not quite clear, as shown in the figure. I don't quite understand the proof in the marked part of the figure. Concretely, how does one deduce formula (5.8) in detail?
|
You already know $R=e^{\hat{\omega}\theta}$ , and the proof has just shown that $\omega = \pm \frac{T}{||T||}$ . Hence $R=\exp\left(\pm\hat{T}\frac{\theta}{||T||}\right)$ . Then since for any matrix $A$ and constant $c$ , $e^{cA}$ commutes with $A$ , it follows that $R$ commutes with $\hat{T}$ . So (5.7) becomes $R^2\hat{T} = \hat{T}$ , which gives (5.8).
|
|linear-algebra|lie-algebras|rotations|matrix-exponential|skew-symmetric-matrices|
| 0
|
Is Morita equivalence preserved when adjoining a variable?
|
Let $R$ and $S$ be two rings with unity. If $R$ and $S$ are Morita equivalent (the category of $R$ modules and $S$ modules are equivalent), does this mean $R[t]$ and $S[t]$ are Morita equivalent, for some indeterminate $t$ ? Let's also assume $t$ commutes with elements in $R$ and $S$ . If not, does anyone know conditions on $R$ and $S$ such that $R[t]$ and $S[t]$ are Morita equivalent? Thanks!
|
You can write $R[t]=R\otimes_{\mathbb{Z}} \mathbb{Z}[t]$ and $S[t]=S\otimes_{\mathbb{Z}} \mathbb{Z}[t]$ , so what you are doing is extending the scalars from $\mathbb{Z}$ -algebras to $\mathbb{Z}[t]$ -algebras. And in general scalar extension preserves Morita equivalence. You even get a little bit better, namely a Morita equivalence between $R[t]$ and $S[t]$ over $\mathbb{Z}[t]$ , so your $R$ - $S$ -bimodule giving the equivalence can be taken $\mathbb{Z}[t]$ -balanced, meaning that it is not just an $R\otimes_{\mathbb{Z}} S^{op}$ -module, but an $R\otimes_{\mathbb{Z}[t]} S^{op}$ -module.
|
|abstract-algebra|category-theory|
| 1
|
Rings where finitely generated ideals are closed under countable intersection
|
Does there exist a characterization of those rings $R$ such that finitely generated left ideals are closed under countable intersection? For example, any noetherian ring has this property, since all ideals are finitely generated. On the other hand, being coherent is not quite sufficient, as this counterexample shows. More generally, I'd be interested in sufficient conditions on coherent rings which imply that they have this property.
|
Maybe there are no non-noetherian rings with such a property of left ideals. At least, it is so if we restrict ourselves to graded algebras. Namely, assume that $R$ is a graded (by non-negative integers) algebra (with 1) over a field which is finitely generated in positive degrees (like the tensor algebra $TV$ of a finite-dimensional vector space $V$ ). Then $R$ is generated by a finite-dimensional homogeneous vector space $V\subset R_{\ge 1}$ . If $R$ is non-noetherian, there is an infinitely generated left ideal $I$ . We may assume that it is (minimally) generated by a sequence $a_1, a_2, \dots$ of elements with non-decreasing degrees $d_i = \deg a_i$ . Then $I$ is the intersection of the ideals $$ I_i = (a_1, \dots, a_{i}) + (V^{d_{i+1}}) $$ for $i=1,2,\dots$ , that are obviously finitely generated.
|
|abstract-algebra|ring-theory|ideals|noetherian|coherent-rings|
| 0
|
Does a torus knot give a Seifert fibering of the 3-sphere?
|
Let $K$ be a $(p,q)$ torus knot on the torus $T_1$ . Via the map \begin{equation*} H=\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \end{equation*} $K$ becomes a $(q,p)$ torus knot on $T_2=H(T_1)$ . If we fill in both these tori to become solid tori $D_1$ and $D_2$ , then \begin{equation*} S^3 = D_1\cup_H D_2 \end{equation*} This seems to give a Seifert fibering of $S^3$ , and the comments to this question suggest that's true. But wouldn't the base orbifold be a 2-sphere with two conical points of order $p$ and $q$ , which is a "bad" orbifold? I also know that if we remove a tubular neighborhood of $K$ in $S^3$ , then we do get a Seifert fibering of the knot complement. I'd like to figure out where my understanding is breaking down. Is this an actual Seifert fibering of $S^3$ or not?
|
The base space of a Seifert fibered 3-manifold can indeed be a bad 2-orbifold, so your intuition is not breaking down. See this math overflow answer for reference including a discussion which incorporates bad base orbifolds. So yes, your description of the base orbifold associated to a $(p,q)$ torus knot is correct.
|
|manifolds|knot-theory|low-dimensional-topology|orbifolds|
| 1
|
Local extrema of trigonometric product
|
Determine the points where the function $\sin^n x \sin nx$ is maximum or minimum. Differentiating, we get that the critical points turn out to be $\frac{k\pi}{n+1}$ . But from this, how can the points of minima and maxima be distinguished when we derived only one form for the critical points? Does that mean the function will always be either at a maximum or at a minimum at these points?
|
You're overlooking an important point. (I have assumed $n$ is positive and non-zero.) For $$ x= \frac{k\pi}{n+1} $$ the value of $k$ ranges as $1,2,3,\dots$ When $k$ is odd, it generates a local maximum, and when $k$ is even, it generates a local minimum. It can be proven by finding the second-order derivative, which at these critical points equals, up to the positive factor $n\sin^{n-1}x$ , $$f''(x)= \cos\big((n+1)x\big)(1+n).$$ Plugging the value of $x$ into it gives $\cos(k\pi)(1+n)$ . If $k$ is odd, $\cos(k\pi)=-1$ ; if $k$ is even, $\cos(k\pi)=1$ . So if $k$ is odd, $f''(x)<0$ , which means $f(\frac{k\pi}{n+1})$ is the value of a local maximum, and if $k$ is even, $f''(x)>0$ , which means $f(\frac{k\pi}{n+1})$ is the value of a local minimum.
|
|multivariable-calculus|derivatives|optimization|maxima-minima|
| 0
|
how to simplify $\log_{a/b}(n)$
|
This is stemming from a programming problem, and I am trying to figure out if there is an easier way to go about the issue. I need to get $\log_{a/b}(n)$ , with $0 . Is it possible to rewrite $\log_{a/b}(n)$ as something along the lines of $\frac{\log_{a}(n_1)}{\log_b(n_2)}$ ? Yes, I know that $\log_{a/b}(n) \neq \frac{\log_{a}(n_1)}{\log_b(n_2)}$ , but I was wondering if there was some conversion that could work, possibly similar to this. I know that $\frac{\log(a)}{\log(b)} = \log_{b}(a)$ (99% sure at least), but this is different enough that I don't really think that this is a similar solution.
|
We can write $\log_{a/b} n$ as $\frac{\log_c n}{\log_c(a/b)} = \frac{\log_c n}{\log_c a - \log_c b}$ for any choice of $c$ . In particular, we could: Take $c = a$ and write $$\log_{a/b} n = \frac{\log_a n}{1 - \log_a b}.$$ Take $c = b$ and write $$\log_{a/b} n = \frac{\log_b n}{\log_b a - 1}.$$ Write $$\log_{a/b} n = \frac{\log n}{\log a - \log b}$$ for whatever happens to be your default choice of base for the logarithm, whether that's $2$ , $e$ , or $10$ . It's actually also possible to express the whole thing in terms of $\log_a n$ and $\log_b n$ , but it's a more complicated journey. Starting with taking $c=n$ , we get $$\log_{a/b} n = \frac{1}{\log_n a - \log_n b} = \frac1{\frac1{\log_a n} - \frac1{\log_b n}} = \frac1{\frac{\log_b n - \log_a n}{\log_a n \log_b n}} = \frac{\log_a n \log_b n}{\log_b n - \log_a n}.$$
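Since the question came from programming, here is a quick numeric check of these identities (a sketch; the values of $a$, $b$, $n$ are arbitrary):

```python
import math

a, b, n = 2.0, 3.0, 50.0
lhs = math.log(n, a / b)  # log of n in base a/b
print(math.isclose(lhs, math.log(n) / (math.log(a) - math.log(b))))
la, lb = math.log(n, a), math.log(n, b)
print(math.isclose(lhs, la * lb / (lb - la)))
```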
|
|logarithms|
| 1
|
How do I find the solution for $\frac{x+1}{x-1} \gt \frac 1{x}$?
|
I'm very confused on inequalities. I just came from inequalities with absolute values and tried to solve the above inequality like I did with absolute values. When I compared with my students they seemed to have a different and much simpler answer. \begin{align*} \text{Given equation:} \quad & \frac{x+1}{x-1} > \frac{1}{x} \\ \text{Case 1:} \quad & x - 1 \geq 0 \quad \Rightarrow \quad x \geq 1 \quad \Rightarrow \quad [1,\infty[ \\ & \text{Case 1.1:} \quad x > 0 \quad \Rightarrow \quad ]0,\infty[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x > x - 1 \\ & \quad \Leftrightarrow \quad x^2 > -1 \\ & \text{Case 1.2:} \quad x < 0 \quad \Rightarrow \quad ]-\infty,0[ \\ & \quad \frac{x+1}{x-1} > \frac{1}{x} \\ & \quad \Rightarrow \quad x^2 + x < x - 1 \\ & \quad \Leftrightarrow \quad x^2 < -1 \end{align*} When I wanted to do the second case I noticed it's the same in all cases, and I noticed that the intervals are contradictory, so my solution is $]0,1[$ . Also $\mathbb{R}\setminus\{1,0\}$ . What would have been an easier way?
|
If we know our strategy is to go from $\frac {x+1}{x-1} > \frac 1{x}$ to multiply both sides by $x$ and both sides by $x-1$ to get $(x+1)x=x^2 +x \;???\; x-1$ and then subtract $x$ from both sides to get $x^2 \;???\; -1$ , then we know the key points we have to consider are when $x-1$ and $x$ change signs and how often the $>$ gets flipped to $<$ and how often it gets switched back. If $x$ and $x-1$ are both less than $0$ (i.e. whenever $x<0$ ), multiplying by $x$ and multiplying by $x-1$ flips the $>$ to $<$ and back to $>$ . If $x$ and $x -1$ are both equal to or more than $0$ (i.e. whenever $x\ge 1$ ), then multiplying by $x$ and by $x-1$ will leave the $>$ sign alone. And if one is negative and the other positive (which only happens if $0 \le x < 1$ ), then multiplying by $x-1$ will flip the $>$ to $<$ but multiplying by $x$ leaves it alone. So if $x<0$ or $x \ge 1$ we have to solve $x^2 > -1$ . And if $0\le x < 1$ we have to solve $x^2 < -1$ . .... and you have to consider the cases when these terms are not defined. That is, we can not have $x = 0$ or $x-1 = 0$ .
|
|inequality|
| 0
|
Off by a negative sign in Laplace transform $\mathcal{L}\{tx^{(3)}(t)\}(s)$?
|
Let $f(s) = \mathcal{L} \{ x(t) \}(s).$ Then $$\mathcal{L}\{tx^{(3)}(t)\}(s)=-\frac{d}{ds}\mathcal{L}\{x^{(3)}(t)\}(s)=-\frac{d}{ds}[s^2f(s)]=-[2sf(s)+s^2f'(s)]$$ using the identities $$ f^{\prime}(t) \quad s F(s)-f(0) $$ and $$ t^n f(t), \quad n=1,2,3, \ldots \quad(-1)^n F^{(n)}(s). $$ Should the minus sign actually be there? In https://math.stackexchange.com/questions/4886992/what-is-the-inverse-laplace-transform-of-sfs , it was computed that $$\mathcal{L}^{-1}\left\{s F^{\prime \prime}(s)\right\}=t^2 f^{\prime}(t)+2 t f(t)$$ which does not have a negative sign. Is this correct?
|
Using differentiation under the integral and integration by parts three times, we have $$\begin{align} \int_0^\infty t x'''(t) e^{-st}\,dt&=-\frac{d}{ds}\int_0^\infty x'''(t) e^{-st}\,dt\\\\ &=-\frac{d}{ds}\left(s^3X(s)-\sum_{n=1}^3 s^{3-n}x^{(n-1)}(0)\right)\\\\ &=-3s^2X(s)-s^3X'(s)+2sx(0)+x'(0) \end{align}$$ If $x(0)=x'(0)=0$ then we have $$\int_0^\infty t x'''(t) e^{-st}\,dt=-3s^2X(s)-s^3X'(s)$$
|
|real-analysis|ordinary-differential-equations|analysis|laplace-transform|
| 1
|
Norm of an inverse operator: $\|T^{-1}\|=\|T\|^{-1}$?
|
I am a beginner in functional analysis. I have a simple question that came up while studying this subject. Let $L(X)$ denote the Banach algebra of all bounded linear operators on a Banach space $X$ . If $T\in L(X)$ is invertible, does $||T^{-1}||=||T||^{-1}$ hold? Is this result correct?
|
The equality $\|T^{-1}\|=\|T\|^{-1}$ implies that $\|Tx\|=\|T\|\,\|x\|$ for any $x.$ Indeed $$\|x\|=\|T^{-1}Tx\|\le \|T^{-1}\|\|Tx\| \\ =\|T\|^{-1}\|Tx\|$$ Thus $T$ is a multiple of an isometry. Clearly not every invertible operator is such, provided the space is at least two dimensional. For example fix $y\in X$ , $\|y\|=1.$ By the Hahn-Banach theorem there is a bounded linear functional $\varphi$ such that $\varphi(y)=2.$ Then the operator $$Tx=x+{1\over 2}\varphi(x)y$$ is invertible with $$T^{-1}x=x-{1\over 4}\varphi(x)y$$ For $x\in\ker\varphi$ we have $Tx=x$ and $Ty=2y.$ Hence $T$ is not a multiple of an isometry.
|
|functional-analysis|operator-theory|operator-algebras|
| 0
|
Why is $K \cap \mathbb{Q}^{\text{cyc}}=\mathbb{Q}$ iff $\chi_K(G_K)=\hat{\mathbb{Z}}^{\times}$?
|
I am reading David Zywina's "Elliptic curves with maximal Galois action" . For a number field $K$ , he defines $\mathbb{Q}^{\text{cyc}} \subset \overline{K}$ to be "the" cyclotomic extension of $\mathbb{Q}$ . Is $\mathbb{Q}^{\text{cyc}}$ the minimal cyclotomic extension corresponding to the conductor of $K$ obtained from Kronecker-Weber Theorem? What if $K$ isn't abelian? Further, he defines $\chi_K$ to be the cyclotomic character $\chi_K:G_K \rightarrow \hat{\mathbb{Z}}^{\times}$ where $G_K$ is the absolute Galois group of $K$ . He then claims in section $1.1$ that the assumption $K \cap \mathbb{Q}^{\text{cyc}}=\mathbb{Q}$ is equivalent to $\chi_K(G_K)=\hat{\mathbb{Z}}^{\times}$ . Why is $K \cap \mathbb{Q}^{\text{cyc}}=\mathbb{Q}$ is equivalent to $\chi_K(G_K)=\hat{\mathbb{Z}}^{\times}$ ? Any leads would be appreciated!
|
The extension $\mathbb{Q}^{\rm{cyc}}$ is defined to be $$\mathbb{Q}^{\rm{cyc}} = \bigcup_{n=1}^{\infty} \mathbb{Q}(\zeta_n)$$ where $\zeta_n$ is a primitive $n^{\rm{th}}$ root of unity. Here one obviously has to fix an isomorphism $\overline{\mathbb{Q}} \cong \overline{K}$ . One then has $\rm{Gal}(\mathbb{Q}^{\rm{cyc}}/\mathbb{Q}) = \varprojlim \, (\mathbb{Z}/n\mathbb{Z})^\times = \hat{\mathbb{Z}}^\times$ (to see this, choose your roots of unity compatibly). The second claim then follows. If $\chi_K(G_K) = H$ is a proper (necessarily open) subgroup of $\hat{\mathbb{Z}}^\times$ then $$K \cap \mathbb{Q}^{\rm{cyc}} = \overline{K}^{\chi_K^{-1}(H)} = (\mathbb{Q}^{\rm{cyc}})^H \neq \mathbb{Q}.$$
|
|number-theory|galois-theory|elliptic-curves|class-field-theory|cyclotomic-fields|
| 1
|
How to find the number of possible planes equidistant from $5$ points $3$ of which are collinear.
|
$\vec a,\vec b,\vec c,\vec d$ are four non-zero vectors and points $P(\vec a), \; Q(\vec a+\vec b+\vec c), \; R(\vec a-\vec b-\vec c), \; S(\vec d),\; T(\vec a +\vec d)$ are distinct points such that $T$ does not lie in the plane of $P$, $Q$ and $S$ . Find the number of planes equidistant from all the points $P,Q,R,S,T$ . Any three vectors are coplanar; now $P,Q,R$ are collinear as $\vec{OR}=\dfrac{2\vec{OP}-\vec{OQ}}{ 2-1}$ . So any plane parallel to this line will be equidistant from $P,Q,R$ . But I think that points $S,T$ may be aligned in space such that we get no plane which is equidistant from all five points. But the answer given is that $3$ planes are possible. I cannot visualise it at this stage. Please help.
|
$P,Q,R$ are collinear. Let $d=PQ$ , with $R\in d$ . Then there are $3$ cases to consider: Let $\mathcal P$ be the plane containing $d$ and $S$ , $T'$ the orthogonal projection of $T$ on $\mathcal P$ , $I$ the midpoint of $TT'$ and $\mathcal P_1$ the plane parallel to $\mathcal P$ passing through $I$ ; Let $\mathcal Q$ be the plane containing $d$ and $T$ , $S'$ the orthogonal projection of $S$ on $\mathcal Q$ , $J$ the midpoint of $SS'$ and $\mathcal P_2$ the plane parallel to $\mathcal Q$ passing through $J$ ; Let $e:=ST$ and $K$ the midpoint of the common perpendicular to $d$ and $e$ (see here for example) and let $\mathcal P_3$ be the plane passing through $K$ , whose direction is the one spanned by those of $d$ and $e$ . $\mathcal P_1,\mathcal P_2,\mathcal P_3$ are the $3$ planes.
|
|vector-spaces|vectors|contest-math|3d|
| 1
|
Is this a valid "easy" proof that two free groups are isomorphic if and only if their rank is the same?
|
I have read on different sources that it is not possible to give a simple proof that "two free groups are isomorphic if and only if they have the same rank" using only what "a student who has just read the definition of free group as a set of words over an alphabet" would know. See for example the answers to this question Is there a simple proof of the fact that if free groups $F(S)$ and $F(S')$ are isomorphic, then $\operatorname{card}(S)=\operatorname{card}(S')?$ . I think I have come up with such a proof, but I would like to know if it is valid. The proof goes as follows. If $A$ and $B$ have the same cardinality, we can define a bijection between letters of the $A$ alphabet and letters of the $B$ alphabet. This establishes a bijection between (reduced) words on $A$ and (reduced) words on $B$ , and the isomorphism between the free groups $F(A)$ and $F(B)$ . This proves the "if". Now suppose that $|A| < |B|$ . We can define a bijection between letters of $A$ and a subset of the letters of $B$ . Put differently, we can
|
I think you have proven that there can't be any isomorphism between $F(A)$ and $F(B)$ that maps elements of $A$ onto elements of $B$ . It's natural to think that any isomorphism must be like this, but it isn't true. You may have the intuition from just looking at elements of the free group that you can tell from the group structure alone which elements are singleton letters, and you might try to say something like "the singleton letters are those elements which aren't the product of any other elements". If true, then isomorphisms, since they preserve the group structure, would have to map letters to letters, and your proof would work. (As an aside, this is true for e.g. free monoids, and I think your argument works there.) But of course it isn't true, e.g. in $F(\{a,b\})$ , $a$ is the product of $ab$ and $b^{-1}$ , so you can't identify letters this way. Indeed, $ab$ and $b^{-1}$ also generate the free group on two elements, and the map from $F(\{a,b\})$ to itself that sends $a$ to $ab
|
|group-theory|solution-verification|group-isomorphism|free-groups|
| 0
|
Automorphism of the following rational function
|
This problem corresponds to exercise 4.39 in Silverman's book "The Arithmetic of Dynamical Systems": Let $\alpha \in \mathbb{C}^*$ , $d \geq 1$ and consider the rational function of degree $d$ $$\phi(z) = \alpha\left( \frac{z+1}{z-1} \right)^d.$$ Let $Aut(\phi) = \{ f \in PGL_2(\mathbb{C}): f \phi f^{-1} = \phi \}$ where the operation considered there is composition of functions. Then $Aut(\phi) = \{ id \}$ , i.e., the identity Möbius transformation. MY ATTEMPTS: I have tried different things. Of course the condition can also be seen as $f \phi = \phi f$ . Putting a generic $f(z) = \frac{az+b}{cz+e}$ and making equations for $a,b,c,e$ does not really help me because of the exponent $d$ that appears in $\phi$ . Evaluating at "nice" points. There are no nice points in my opinion, because when you try to make $f \phi$ nice, $\phi f$ is ugly, and nothing good arises. Observing that $\phi$ is $\alpha$ times the $d$-th power of the Möbius transformation $\psi = \frac{z+1}{z-1}$ . I could not get anything fr
|
What the exercise asks you to prove is false. The actual true statement is: Let $\alpha\in\mathbb{C}^\times$ and $n\in\mathbb{Z}_{\geq 2}$ , and consider the rational function $\varphi\left(z\right)=\alpha\left(\frac{z+1}{z-1}\right)^n$ . Then $\operatorname{Aut}\left(\varphi\right)=\left\{f\in \operatorname{PGL}_2\left(\mathbb{C}\right):f\circ \varphi\circ f^{-1}=\varphi\right\}$ satisfies $$\operatorname{Aut}\left(\varphi\right)=\begin{cases}\left\{\operatorname{id}\right\},&\alpha^2\neq \left(-1\right)^{n+1},\\\left\{\operatorname{id},z\mapsto -\frac{1}{z}\right\},&\alpha^2=\left(-1\right)^{n+1}.\end{cases}$$ Observe that this is not a problem for Silverman because the only place in the book where he makes use of this, which is Example 4.85, he has $\alpha=i$ and $n=3$ , which satisfy $\alpha^2\neq \left(-1\right)^{n+1}$ . PROOF: Take $f\left(z\right)=\frac{az+b}{cz+d}$ with $a,b,c,d\in\mathbb{C}$ and $ad\neq bc$ , and let us work-out the equation $f\circ \varphi\circ f^{-1}=\varphi
|
|number-theory|automorphism-group|mobius-transformation|
| 1
|
How to prove if the given point is always the circumcenter of the triangle?
|
In the square $ABCD$ , the points $M$ and $N$ belong to the sides $BC$ and $CD$ respectively, such that $\angle MAN = 45^\circ$ . Let $O$ be the point of intersection of the circle that goes through $C$ , $M$ and $N$ with the segment $AC$ . Is the point $O$ always the circumcenter of $\triangle MAN$ ? The image shows $\angle NAD = \theta = 30^\circ$ , but the question goes for all values of $\theta$ ranging from $0^\circ$ to $45^\circ$ . The calculator suggests that it's true, but I can't find a way to prove it. I've noticed that since $\angle MCN = 90^\circ$ , the segment $MN$ must be a diameter of the small circle, and thus $\angle MON = 90^\circ$ . If $O$ were to be the circumcenter of $\triangle MAN$ , it would obey the rule of angles at the center being twice the angle at the circumference, however I don't know if that means it is necessarily the circumcenter.
|
First of all, if $N=D$ , then $M=C$ and there are infinitely many different circles that can be drawn through $N$ and $M$ . Similarly if $N=C$ . Therefore we are going to assume that $N$ lies between points $D$ and $C$ . WLOG $AD=1$ . Put $\angle DAN=\theta$ , $\angle NMC=\alpha$ , $ND=q$ , and let $r$ be the radius of the red circle. $BM=\tan(\frac{\pi}{4}-\theta)=\frac{1-\tan(\theta)}{1+\tan(\theta)}=\frac{1-q}{1+q}$ , so $MC=1-\frac{1-q}{1+q}=\frac{2q}{1+q}$ and $CN=1-q$ . The Pythagorean theorem gives $MN^2=(1-q)^2+(\frac{2q}{1+q})^2=\frac{(1-q^2)^2+4q^2}{(1+q)^2}=\frac{(1+q^2)^2}{(1+q)^2}$ , hence $r=\frac{MN}{2}=\frac{1+q^2}{2(1+q)}$ . I'm going to switch to the complex plane now with the origin at $A$ and the real axis running along $AD$ . Note that the center $P$ of the red circle (the midpoint of $MN$ ) is $P=[\frac{1-q}{1+q}+\frac{q}{1+q}]+i[1-\frac{1-q}{2}]=\frac{1}{1+q}+i\frac{1+q}{2}$ . Next, let $M'$ be the point obtained by rotating $M$ around $P$ by $\frac{\pi}{2}$ ccw. Since $M=-re^{-i\alpha}+P$ , we get $M'=-re^{-i\alpha}e^{\fr
|
|geometry|triangles|
| 1
|
Can we reduce $\int_0^{\pi/2}\frac{\sqrt{\sin x}}{1+\cos x}\,dx$ to complete elliptic integrals?
|
This definite integral has an equivalent closed form in terms of complete elliptic integrals , $$\begin{align*} I &= \int_0^\tfrac\pi2 \frac{\sqrt{\sin x}}{1+\cos x} \, dx \\ & = 2 - \sqrt{\frac2\pi}\, \Gamma^2\left(\frac34\right) \\ &= \boxed{2 - 2\sqrt2 \, E\left(\frac1{\sqrt2}\right) + \sqrt2 \, K\left(\frac1{\sqrt2}\right)} \tag{$*$} \end{align*}$$ Q : Is there any way to algebraically reduce or transform $I$ to more readily obtain this elliptic integral form, without leaning on beta/gamma functions? Having made a similar connection recently, I'm wondering if the same can be done here. Working backward from $(*)$ , we have $$\begin{align*} I &= 2 - 2\sqrt2 \int_0^\tfrac\pi2 \sqrt{1-\frac12\sin^2t} \, dt + \sqrt2 \int_0^\tfrac\pi2 \frac{dt}{\sqrt{1-\frac12\sin^2t}} \\ &= 2 \left(1 - \int_0^\tfrac\pi2 \frac{\cos^2t}{\sqrt{1+\cos^2t}} \, dt\right) \end{align*}$$ I've tried replacing $1=\int_0^{\pi/2} f(t) \, dt$ but I'm not sure if there's a clever choice of $f(t)$ that will coo
|
$$\int_0^{\frac\pi 2}\left(\frac{\sqrt{\sin x}}{\cos x+1}+\sqrt{\sin x}\right)dx=\int_0^{\frac\pi 2}\frac{\cos x +2}{\cos x+1}\sqrt{\sin x}dx=\int_0^{\frac\pi 2}\frac{3(\sin\tfrac x2)^{\tfrac12}\cos\tfrac x2(\cos\tfrac x2)^\tfrac12+(\sin\tfrac x2)^\tfrac32(\cos x)^{-\tfrac12}\sin\tfrac x2}{\sqrt2(\cos^{\tfrac12}\tfrac x2)^2}dx =\frac{4}{\sqrt2}\frac{(\sin\tfrac x2)^{\tfrac32}}{(\cos\tfrac x2)^{\tfrac12}}\big\vert_0^{\tfrac\pi 2}=2$$ Hence, $$I=2-...$$
|
|integration|definite-integrals|trigonometric-integrals|elliptic-integrals|
| 1
|
Reference request for intuitive explanations of various PDEs
|
I am wondering if there are any books providing intuitive explanations for PDEs. I have been using Evans' PDE, but it only focuses on how to solve PDEs and some other theoretical properties. I would like to find a book that discusses how to derive certain PDEs (such as the wave equation, Laplace's equation or some more advanced ones, etc.), as well as providing intuitive interpretations for each differential operator (For example: like what $\Delta$ is trying to do in the equation $u_t= \Delta u + u_x$ ). It doesn't need to be too mathematically rigorous. My background is in mathematics, and I haven't studied much physics. So, books on physics are also great!
|
There is an absolutely amazing and very mathematically non-rigorous book by Stanley Farlow, Partial Differential Equations for Scientists and Engineers . I recommend this book to anyone who wants some intuition but is OK with skipping many mathematical steps. It could be a somewhat too extreme counterpart to Evans' book, but I would still recommend it even for mathematically mature students.
|
|partial-differential-equations|reference-request|
| 1
|
Is this expression with the Levi-Civita tensor correct?
|
I was doing a calculation about Nuclear Physics and in one step of the calculation I obtained $$ \varepsilon_{kbc} n_k n_b \tau_c $$ where $\tau_c$ is a $2\times2$ Pauli matrix and $n_k$ is the $k$-th component of a normalized vector (i.e. $n_a n_a =1$ ). To obtain the correct result it is needed that $$ \varepsilon_{kbc} n_k n_b \tau_c = 0 $$ However, I am not totally sure if this is always correct. Is there some property of the Levi-Civita tensor associated with it? I would really appreciate any help. Thank you!
|
The Levi-Civita symbol is antisymmetric under swapping one index with another, i.e. $\epsilon_{ijk} = -\epsilon_{jik}$ , so $\epsilon_{kbc}n_kn_b \tau_c = - \epsilon_{bkc}n_kn_b \tau_c$ , but also we can swap the dummy indices $k$ and $b$ : $\epsilon_{kbc}n_kn_b \tau_c = \epsilon_{bkc}n_bn_k \tau_c$ , hence $- \epsilon_{bkc}n_kn_b \tau_c = \epsilon_{bkc}n_bn_k \tau_c = \epsilon_{bkc}n_kn_b \tau_c \implies \epsilon_{kbc}n_kn_b \tau_c = 0$ (so yes, the expression is always true).
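The antisymmetric-times-symmetric cancellation is easy to verify numerically (a NumPy sketch; $n$ is a random unit vector, and only the coefficient of each $\tau_c$ is checked):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[j, i, k] = -1.0  # odd permutations

n = np.random.default_rng(1).standard_normal(3)
n /= np.linalg.norm(n)
print(np.einsum('kbc,k,b->c', eps, n, n))  # ~ [0, 0, 0]
```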
|
|linear-algebra|
| 1
|
37 and Veritasium
|
In Veritasium's new video about 37 there is brought up something interesting about its multiples: for any multiple of 37, reverse it and put a 0 between all of its digits, and the new number will be a multiple of 37. Example: $37 \rightarrow 703$ , $703=19\cdot 37$ ; $74\rightarrow 407$ , $407=11\cdot 37$ . Why does this happen? Can someone prove it?
|
I’m not sure if this is the simplest way to present a solution, but I think it works. Start with $k=10^n a_n + 10^{n-1}a_{n-1}+\dots + 10a_1+ a_0$ . Then $k' = 10^{2n}a_0+10^{2n-2} a_1+ \dots + 10^2a_{n-1}+ a_n$ where $k'$ is the integer formed by reversing the digits of $k$ and adding in $0$ s between adjacent digits. We note that $\bmod {37}: 10^3\equiv 1, 10^2\equiv 26, 10\equiv 10$ . Then, $$\bmod {37}: k \equiv \underbrace{(a_{0}+a_{3}+a_6+\dots)}_{S_1}+10(\underbrace{a_1+a_4+a_7+\dots}_{S_2})+26(\underbrace{a_2+a_5+a_8+\dots}_{S_3})\equiv 0$$ whereas, $$\bmod {37}: k'\equiv (\underbrace{a_n+a_{n-3}+a_{n-6}+\dots}_{L_1})+26(\underbrace{a_{n-1}+a_{n-4}+a_{n-7}+\dots}_{L_2})+10(\underbrace{a_{n-2}+a_{n-5}+a_{n-8}+\dots}_{L_3})$$ Case $1$ : Suppose $n$ is a multiple of $3$ . Then we get that $S_1=L_1$ , $S_2=L_3$ and $S_3=L_2$ , thereby giving $\bmod {37}: k\equiv k'$ . Case $2$ : $n$ is one more than a multiple of $3$ . Then $S_2=L_1$ , $L_2=S_1$ and $S_3=L_3$ . Then, $\bmod {37}: k'
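A brute-force check of the claim (a small Python sketch):

```python
def transform(k: int) -> int:
    # reverse the digits and interleave zeros: 37 -> 703, 74 -> 407
    return int('0'.join(str(k)[::-1]))

for k in range(37, 37 * 1000, 37):
    assert transform(k) % 37 == 0
print(transform(37), transform(74))  # 703 407
```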
|
|divisibility|
| 0
|
Ricci Equation $(\overline R(U,V)X,Y) = (R^\nabla(U,V)X,Y) - (B_U X, B_V Y) + (B_V X, B_U Y)$
|
Let $f:(M, g) \rightarrow (N, h)$ be a pseudo-Riemannian immersion, $\overline D$ the linear connection on $f^*(TN)$ induced from the Levi-Civita connection of $h$ , and $\nabla$ the connection induced on the normal bundle $NM$ (which is the orthogonal complement of $TM$ in $f^*(TN)$ ). Consider also the second fundamental form $$\mathrm{II}: TM\otimes TM\rightarrow NM$$ given by $$\mathrm{II}(U,V) = \mathcal N(\overline D_U V),$$ where $\mathcal N$ is the orthogonal projection onto $NM$ . Finally, let the tensor $B:TM\otimes NM \rightarrow TM$ be given by $g(B_U X, V) = -g(\mathrm{II}(U,V),X)$ . Then, how can I prove the Ricci Equation $$(\overline R(U,V)X,Y) = (R^\nabla(U,V)X,Y) - (B_U X, B_V Y) + (B_V X, B_U Y),$$ where $U,V$ are vector fields on $M$ , $X, Y$ are sections of $NM$ , and $\overline R$ and $R^\nabla$ are the curvatures of $\overline D$ and $\nabla$ respectively c.f. Arthur Besse’s “Einstein Manifolds” p. 38 Theorem 1.72 e)?
|
One way to prove this is using properly chosen coordinates. Fix $p \in M$ . First, choose local coordinates $x=(x^1, \dots, x^m)$ on $M$ such that $$ x(p)=0,\ g_{ij}(p)=\delta_{ij}\text{, and }\partial_kg_{ij}(p)=0,\ \forall 1 \le i,j,k \le m. $$ You can check that the Christoffel symbols (but not necessarily their derivatives) with respect to these coordinates all vanish at $p$ and that at $p$ (only!) $$ R_{ijkl} = \frac{1}{2}(-\partial^2_{ik}g_{jl} - \partial^2_{jl}g_{ik}+\partial^2_{il}g_{jk}+\partial^2_{jk}g_{il}). $$ Second, choose local coordinates $y=(y^1, \dots, y^n)$ such that $y(f(p))=0$ and $M$ is a graph near $p$ . In other words, there is a neighborhood $O$ of $p$ and a map $$h = (h^{m+1},\dots,h^n): O \rightarrow \mathbb{R}^{n-m}$$ such that if $x \in O$ , $$ f(x^1, \dots, x^m) = (x^1, \dots, x^m, h^{m+1}(x), \dots, h^n(x)). $$ Moreover, these coordinates can be chosen so that $$ \partial_kh^p(0) = 0\text{, if }1 \le k \le m\text{ and }m+1\le p\le n. $$ With respect
|
|differential-geometry|
| 0
|
Move Infinite sum inside a limit $t \to \infty$.
|
This may be simple, but I want to know if my reasoning is ok. I came across a problem whose essential set up is: let $f_k$ be a sequence of functions in $L^1(\mathbb{R})$ (Lebesgue integrable functions on $\mathbb{R}$ ). Suppose that $$\displaystyle \lim_{t \to \infty} f_k(t) = 0 \text{ for all } k \in \mathbb{N} \quad \text{ and } \quad \displaystyle \sum\limits_{k=0}^\infty f_k \in L^1(\mathbb{R})$$ We of course have that $\displaystyle \sum\limits_{k=0}^\infty \lim_{t \to \infty} f_k(t) = 0$ . Where I'm having a bit of doubt, is proving that $$\lim_{t \to \infty} \sum\limits_{k=0}^\infty f_k(t) = 0, \tag{1}$$ my argument is: since for any $n \in \mathbb{N}$ , we have that $\displaystyle\lim_{t \to \infty} \sum\limits_{k=0}^n f_k(t) = \sum\limits_{k=0}^n \lim_{t \to \infty} f_k(t) = 0,$ so $(1)$ follows trivially from this last observation. Do you see something wrong in my argument? In the best case, I'm asking an obvious question. In the worst case I'm missing something very badly..
|
The limit in (1) need not exist. Let $f_k = \chi_{[k, k + 1 / k^2]}$ be the indicator function of the interval $[k, k + 1 / k^2]$ ( $k \geq 1$ ). These are obviously Lebesgue-integrable, $\lim_{t \to \infty} f_k(t) = 0$ and $\int_{\mathbb{R}} \sum_{k = 1}^\infty f_k \mathrm{d}\mu = \frac{\pi^2}{6}$ (since the integral computes the Basel problem sum $\sum_{k = 1}^{\infty} \frac{1}{k^2}$ ), whence $f = \sum_{k = 1}^\infty f_k \in L^1(\mathbb{R})$ . But for every $r > 0$ , there exist points $x_0, x_1 > r$ (for instance, $x_1 = \lceil r \rceil + 1$ and $x_0 = \lceil r \rceil + 11 / 4$ ) such that $f(x_0) = 0$ and $f(x_1) = 1$ , whence $\lim_{t \to \infty} f(t)$ fails to converge.
|
|sequences-and-series|limits|lp-spaces|
| 1
|
Strict chromatic vector coloring of $K_n$
|
In a finite simple graph $X$ , for any $t\in\mathbb{R}$ , a vector $t$ -coloring of $G$ is a mapping $\phi_t: V(X)\longrightarrow S^m$ for some $m\in\mathbb{N}$ (where $S^m$ is the $m$ -sphere in $\mathbb{R}^{m+1}$ ) such that for any $x, y\in V(X)$ , $\langle\, \phi_t(x) \,,\, \phi_t(y) \,\rangle \leq -\dfrac{1}{t-1}$ whenever $x\sim y$ . The vector chromatic number of $G$ is the infimum among all real numbers $t\in\mathbb{R}$ such that $G$ has a vector $t$ -coloring. The definition and more details can be found in this link . Further, for any $t\in \mathbb{R}$ , a strict vector $t$ -coloring is a mapping $\psi_t:V(X) \longrightarrow S^m$ for some $m\in\mathbb{N}$ such that $\langle\, \psi_t(x) \,,\, \psi_t(y) \,\rangle = -\dfrac{1}{t-1}$ , and the strict vector chromatic number $\chi_{sv}(G)$ is defined similarly. Clearly $\chi_v(G)\leq \chi_{sv}(G)$ and in the link above it is proved that $\omega(G)$ the max clique number of $G$ is less than or equal to $\chi_v(G)$ . My question is
|
If we place the vertices of $K_n$ at the corners of an $(n-1)$ -simplex with center at the origin, then we'll see the desired inner products of $-\frac1{n-1}$ . Although this can be done in $\mathbb R^{n-1}$ , it's easier to implement in $\mathbb R^n$ . Put the first vertex at the point $$\frac{(1-n, 1, 1, 1, \dots, 1)}{\sqrt{n(n-1)}}$$ and all other vertices at cyclic shifts of this point. Now check that: The norm of this vector is actually $1$ ; in other words, the norm of $(1-n, 1, 1, 1, \dots, 1)$ is $\sqrt{n(n-1)}$ . The inner product of two of these vectors is actually $-\frac1{n-1}$ . It's enough to check that the inner product of $(1-n, 1, 1, 1, \dots, 1)$ with $(1, 1-n, 1, 1, \dots, 1)$ is $-n$ .
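A NumPy sketch of this construction (here with $n = 5$, arbitrarily):

```python
import numpy as np

n = 5
v = np.ones(n)
v[0] = 1 - n
v /= np.sqrt(n * (n - 1))
V = np.stack([np.roll(v, i) for i in range(n)])  # the n cyclic shifts

G = V @ V.T  # Gram matrix of the vertex vectors
print(np.allclose(np.diag(G), 1.0))                          # unit norms
print(np.allclose(G[~np.eye(n, dtype=bool)], -1 / (n - 1)))  # pairwise inner products
```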
|
|linear-algebra|discrete-mathematics|graph-theory|inner-products|
| 1
|
When is $\sqrt n$ in $\Bbb Q[\omega_m]$?
|
Given a positive integer $n$ and a primitive $m^{th}$ root of unity $\omega_m$ over $\Bbb Q$ , how could one determine if $\sqrt{n}$ lies in $\Bbb Q[\omega_m]$ ? In the case of $n=p>0$ being an odd prime, the question is known by some algebraic number theory. Indeed primes ramified in $\Bbb Q\left[\sqrt p\right]$ are $p$ and if $p\equiv 3$ modulo $4$ , also $2$ . If $p\equiv 1$ modulo $4$ , the inclusion $\Bbb Q\left[\sqrt p\right]\subset\Bbb Q\left[\omega_m\right]$ requires $p$ to also ramify in $\Bbb Q[\omega_m]$ , which happens iff $p|m$ . If $p\equiv 3$ modulo $4$ , then $2$ also need to ramify in $\Bbb Q[\omega_m]$ , forcing $4p|m$ . Finally $\sqrt 2\in\Bbb Q[\omega_m]$ iff $8|m$ . Conversely one verifies $p\equiv 1$ modulo $4$ implies $\sqrt{p}\in\Bbb Q[\omega_p]$ and that $p\equiv 3$ modulo $4$ implies $\sqrt{p}\in\Bbb Q[\omega_{4p}]$ . The following criteria summarize the situation: Theorem 1. Let $p>0$ be a positive prime. If $p\equiv 1$ modulo $4$ , then $\sqrt p\in\Bbb Q[\om
|
Recall that $\mathrm{Gal}(\mathbb Q^{ab}/\mathbb Q)\simeq\widehat{\mathbb Z}^\times$ , which sends $\mathrm{Frob}_p$ to $[p]$ . Now the homomorphism $\varphi\colon\mathrm{Gal}(\mathbb Q^{ab}/\mathbb Q)\to\{\pm1\}$ sends $\mathrm{Frob}_p$ to $\big(\frac np\big)$ . Thus your question is equivalent to computing the minimal $m$ such that $\varphi(1+m\widehat{\mathbb Z})=1$ . In other words, your question is equivalent to computing the kernel of $\varphi$ . For example, when $n=6$ then by quadratic reciprocity $$\big(\frac 6p\big)=\big(\frac2p\big)\big(\frac3p\big)=(-1)^{\frac{(p-1)(p-3)}8}\big(\frac p3\big),$$ so $$\ker(\varphi)=\{a\in\widehat{\mathbb Z}^\times:a\equiv1\ \mathrm{mod}\ 3,\ a\equiv1,3\ \mathrm{mod}\ 8\}\cup\{a\in\widehat{\mathbb Z}^\times:a\equiv2\ \mathrm{mod}\ 3,\ a\equiv5,7\ \mathrm{mod}\ 8\}.$$ Thus $\varphi(1+m\widehat{\mathbb Z})=1$ is equivalent to $24|m$ . Thus $\sqrt6\in\mathbb Q(\zeta_m)$ is equivalent to $24|m$ . The following is the complete characterization, which follows readily from the above and quadratic reciprocity. Let $n$ be square-free. Then $\sqrt
|
|algebraic-number-theory|cyclotomic-fields|
| 0
|
Can someone solve this limit?
|
$$\lim_{x \rightarrow 0} \frac{\sqrt{1+x}-\sqrt{1-x}}{\sqrt[3]{1+x} - \sqrt{1-x}}$$ I have no idea where I should start from
|
tl;dr: generalize, L'Hôpital, specialize; the answer falls out. If $u=1+x$ , then $-u+2=-x+1$ . So letting $2=n$ and $3=p$ , we want to solve $$\lim_{u\to\ 1} \frac{\sqrt[n]{u}-\sqrt[n]{2-u}}{\sqrt[p]{u}-\sqrt[n]{2-u}}$$ Since it's clear L'Hôpital's rule applies ( $0/0$ case), differentiate each of the numerator and denominator with respect to $u$ to get: $$=\lim_{u\to\ 1} \frac{\frac{u^{-1+1/n} + (2-u)^{-1+1/n}}{n}}{\frac{nu^{-1+1/p} + p(2-u)^{-1+1/n}}{np}}$$ after multiplying and collecting terms $$=\lim_{u\to\ 1} \frac{p(-u \sqrt[n]{2-u} - (2-u)\sqrt[n] u)}{n(u-2)\sqrt[p] u - pu \sqrt[n]{2-u}}$$ and setting back $n=2, p=3$ , we find that our original limit in question is equal to $$=\lim_{u\to\ 1} \frac{3(-(2-u)\sqrt u - u\sqrt{2-u})}{2(u-2)\sqrt[3] u - 3u\sqrt{2-u}}$$ Simply plug in $u=1$ into the above and you'll find $$=\frac 6 5$$ Hope that helps :)
|
|limits|
| 1
|
Can we reduce $\int_0^{\pi/2}\frac{\sqrt{\sin x}}{1+\cos x}\,dx$ to complete elliptic integrals?
|
This definite integral has an equivalent closed form in terms of complete elliptic integrals , $$\begin{align*} I &= \int_0^\tfrac\pi2 \frac{\sqrt{\sin x}}{1+\cos x} \, dx \\ & = 2 - \sqrt{\frac2\pi}\, \Gamma^2\left(\frac34\right) \\ &= \boxed{2 - 2\sqrt2 \, E\left(\frac1{\sqrt2}\right) + \sqrt2 \, K\left(\frac1{\sqrt2}\right)} \tag{$*$} \end{align*}$$ Q : Is there any way to algebraically reduce or transform $I$ to more readily obtain this elliptic integral form, without leaning on beta/gamma functions? Having made a similar connection recently, I'm wondering if the same can be done here. Working backward from $(*)$ , we have $$\begin{align*} I &= 2 - 2\sqrt2 \int_0^\tfrac\pi2 \sqrt{1-\frac12\sin^2t} \, dt + \sqrt2 \int_0^\tfrac\pi2 \frac{dt}{\sqrt{1-\frac12\sin^2t}} \\ &= 2 \left(1 - \int_0^\tfrac\pi2 \frac{\cos^2t}{\sqrt{1+\cos^2t}} \, dt\right) \end{align*}$$ I've tried replacing $1=\int_0^{\pi/2} f(t) \, dt$ but I'm not sure if there's a clever choice of $f(t)$ that will coo
|
Thanks to Bob's insightful observation that $$I + \int_0^\tfrac\pi2 \sqrt{\sin x} \, dx = \left[ \left(\sin x\right)^{3/2} \sec^2\frac x2 \right] \bigg|_0^\tfrac\pi2 = 2$$ we can connect the $x$ - and $t$ -integrals by substituting $\sin x=\cos^2t$ , $$\int_0^\tfrac\pi2 \sqrt{\sin x} \, dx = \int_0^\tfrac\pi2 \sqrt{\cos^2t} \frac{2\sin t\cos t}{\sqrt{1-\cos^4t}} \, dt = \int_0^\tfrac\pi2 \frac{2\cos^2t}{\sqrt{1+\cos^2t}}\,dt$$
|
|integration|definite-integrals|trigonometric-integrals|elliptic-integrals|
| 0
|
Range of a continuous operator into the range of balls
|
Let $P: H_{1} \rightarrow H_{2}$ be a bounded linear operator between two Hilbert spaces. Is it true that the following holds: $$cl(Ran(P)) = \cup_{n=1}^{\infty} cl(P(B_{1}(n))), $$ where $cl$ denotes the closure, and $B_{1}(n)$ represents the ball of radius $n$ in $H_{1}$ . The basic topological argument shows $\supset$ is true, but I am not sure about the opposite direction. Thank you in advance.
|
No. For example, take $P$ to be any compact operator on an infinite-dimensional Hilbert space with dense range (say, multiplication along the diagonal by $\frac{1}{n}$ on $l^2(\mathbb{N})$ ). Then $\mathrm{cl}(P(B(n)))$ is compact for all $n$ , whence it has empty interior. By the Baire category theorem, the RHS $\cup_n \mathrm{cl}(P(B(n)))$ then has empty interior, but the LHS $\mathrm{cl}(\mathrm{range}(P))$ is the entirety of $H$ , so they can’t be equal.
|
|functional-analysis|
| 0
|
Adjusting sizes to calculate a new weighted average, but keeping the same total size...
|
I have prices $p_i$ , sizes $s_i$ , with average weighted price $A=\frac{\sum(p_i s_i)}{\sum(s_i)}$ I want to calculate $s'_i$ , such that $\sum(s'_i) = \sum(s_i)$ to give a desired new average weighted price $B = \frac{\sum(p_i s'_i)} { \sum(s'_i)}$ What is the best way to do this? Would it be possible to solve this with a linear relationship: $s'_i = ms_i+c$ ? (EDITING IN RESPONSE TO ISCO'S ACCEPTED ANSWER) For people lacking algebra skills, this can be solved using SymPy, which confirms the accepted answer. from sympy import symbols, solve m = symbols("m") n = symbols("n") c = symbols("c") sum_p = symbols("sum_p") sum_ps = symbols("sum_ps") sum_s = symbols("sum_s") A = symbols("A") B = symbols("B") eqn1 = 1-n*c/sum_s - m eqn2 = (1/A)*(B - c*sum_p/sum_s) - m # Solve the equations solution = solve((eqn1, eqn2), (m, c)) # Print the solutions print("Solutions:") print("m =", solution[m]) print("c =", solution[c])
|
If I understand your question correctly your problem is actually rather under-specified. Trivially rearranging your equation involving $B$ you have $$ \sum (p_i s'_i) = B \sum s'_i $$ and you want $\sum s'_i$ to be equal to $\sum s_i$ therefore $$ \sum (p_i s'_i) = B \sum s_i $$ Assuming you already have all of the sizes $s_i$ , or even just their sum, the right-hand-side here is just some known number. So your problem is reduced to finding any vector $s'$ such that the dot product with the price vector $p$ is some desired scalar value. If there is only one (non-zero) price then the required size is trivial to calculate, and if there are multiple then there are infinite possible solutions. You could set $n-1$ of the sizes to any nonsense and on the last one pick the correct size to get the target scalar product. I would guess that in the real world there are some constraints, such as all prices are positive, and maybe all sizes too, depending on what you are doing. If you are working o
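For the special case of the linear relationship $s'_i = m s_i + c$ asked about in the question, the two constraints pin down $m$ and $c$; here is a sketch that solves and verifies them numerically (the prices and sizes are made up):

```python
import numpy as np

p = np.array([10.0, 12.0, 15.0, 9.0])    # prices
s = np.array([100.0, 50.0, 75.0, 25.0])  # sizes
B = 11.0                                 # desired new weighted average

n, S, P = len(s), s.sum(), p.sum()
A = p @ s / S                            # current weighted average
c = (A - B) * S / (A * n - P)            # assumes A*n != P
m = 1 - n * c / S

s_new = m * s + c
print(np.isclose(s_new.sum(), S))        # total size preserved
print(np.isclose(p @ s_new / S, B))      # new weighted average equals B
```

Note that nothing here forces the new sizes to stay non-negative; as the answer points out, real-world constraints narrow the infinite solution set.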
|
|linear-algebra|
| 1
|
Can a number be positive and negative at the same time?
|
Last week someone asked me if I could solve $3x+5 = 3x-5$ . I think he just looked up unsolved problems or something like that, but as far as I can tell it has no solution... other than if $x$ was positive and negative $5/3$ . So I told him it was unsolvable because you get $0=10$ or $0=-10$ , but I started thinking about numbers being positive and negative. Like if you graph a number divided by zero you get an asymptote approaching positive infinity from one side and negative infinity from the other, so while it's unbounded it's also approaching positive and negative infinity simultaneously (as far as I understand it at least). Ultimately, I was wondering if a number could be positive and negative at the same time. Intuitively I don't think that could exist, but I can't find anything on it when I've looked it up and I was hoping someone else would have the answer. Thank you. [Also the tags probably aren't totally accurate for the question it just seemed like a good way to get some att
|
No. Show that no integer can be both positive and negative. (Precisely how to do this may depend on your definition of "integer"!) Therefore show that no rational number can be both positive and negative. Therefore show that no real number can be both positive and negative (by approximating it by rationals: either all sufficiently good rational approximations to $x$ are positive, or they're all negative, or there are arbitrarily good rational positive and negative approximations, in which case show that the number is $0$ ). (There may be easier ways, but this has the least machinery, since it's working directly from the definitions of the relevant objects.)
|
|real-numbers|
| 1
|
Finite cover of a countable set in $[0, 1]$.
|
Problem: True or False: Let $E \subset [0, 1] \subset \mathbb{R}$ be a countable subset. Then, for any $\epsilon> 0$ , there is a finite cover of $E$ by open intervals $\{I_k\}_{k=1}^{n}$ such that $$ \sum_{k=1}^{n} m(I_k) < \epsilon. $$ This sounds like a quite easy problem, but I don't know how to solve it. This one is quite similar to the one in Folland: Let $E \subset \mathbb{R}$ be a Lebesgue measurable set and assume that there exists $0 < \alpha < 1$ such that $m(E \cap I) \leq \alpha m(I)$ for all open intervals $I$ . Then, $m(E) = 0$ . Proof: If $m(E) > 0$ , then let $O$ be an open set that contains $E$ and $O = \bigcup_{i=1}^{\infty} I_{i}$ , where $I_i$ is an open interval. Then we have $$ m(O) = \sum_{i=1}^{\infty}m(I_i) \geq \frac{1}{\alpha} \sum_{i=1}^{\infty}m(E \cap I_i) \geq \frac{1}{\alpha} m(E) $$ By regularity, we can always make $O$ such that $m(E)\leq m(O) < \frac{1}{\alpha} m(E)$ , a contradiction. Can any one help me with that?
|
False. Take $E$ to be dense in $[0, 1]$ . If a finite cover by open intervals $\{I_k\}^n_{k = 1}$ with total length $L < 1$ exists, then the complement $[0, 1] \setminus \bigcup_{k = 1}^n I_k$ contains an open interval, which by denseness contains a point of $E$ , which is absurd.
|
|real-analysis|measure-theory|lebesgue-measure|
| 0
|
What is the probability of rolling at least $n$ consecutive numbers when rolling $k$ dice?
|
I am trying to code a game that turned out to be more complicated than expected. The complication comes down to a function that attempts to calculate the chance of at least $n$ consecutive numbers out of a total of $k$ dice rolled. The function only has to work for integer values of $n$ and $k$ , and $2\le n\le k\le 6$ . I am looking for a formula I can use to calculate all possibilities and store them as variables, or a JavaScript function that can take parameters $n$ and $k$ and then do the calculation that way. I have been trying for a while now and cannot find a way to correctly calculate these probabilities to match experimental results. I have tried a lot of different JavaScript function ideas to iterate and calculate these odds, but none of my attempts works for both $n=k=2$ and $n=3, k=6$ . Example combinations that work for $n=3$ and $k=6$ are $1, 2, 3, 6, 6, 6; \; 3, 5, 2, 1, 4, 6$ and 1, 2, 3, 4, 6. Thanks for any help.
|
There are only $6^6 \approx 50,000$ possible sequences of 6 rolls. Just enumerate all of them, and count how many meet your requirements. A program to do that should finish in milliseconds. Or, precompute the answer for each $k,n$ satisfying $2 \le n \le k \le 6$ . There are only 15 pairs of values $(n,k)$ , so you can precompute the answer for each pair, and hardcode all of those in your code as a lookup table, and just look up the desired entry at runtime.
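A sketch of the brute-force count (assuming "at least $n$ consecutive numbers" means some run of $n$ consecutive face values all appearing among the $k$ dice, as in the question's examples):

```python
from itertools import product

def prob(n: int, k: int) -> float:
    hits = sum(
        any(all(v in roll for v in range(start, start + n))
            for start in range(1, 8 - n))
        for roll in product(range(1, 7), repeat=k)
    )
    return hits / 6**k

print(prob(2, 2))  # 10/36 ~ 0.2778
print(prob(3, 6))
```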
|
|probability|dice|
| 0
|
How to find both the degrees and the length of the hypotenuse in a circle if it were stretched?
|
I'm trying to calculate the degrees of an angle and the length of the hypotenuse if they were stretched/scaled horizontally or vertically. Here's a picture to demonstrate. a = width b = height c = length of hypotenuse α = degrees I want to find out if there is a possible way to cut down the math I have for calculating the stretched angle below. Converting degrees to a Vector2-like format, and applying the scale: $\ \theta=\alpha\frac{\pi}{180} $ $\ x=\cos(\theta)a $ $\ y=\sin(\theta)b $ Then convert it back to degrees: $\ \alpha=\operatorname{atan2}(y, x)\frac{180}{\pi} $ Sorry if the way I write my equations is weird, I only have a shallow understanding of writing equations. Here's an example in code: float ScaleAngle(float degrees, Vector2 size) { // Convert degrees to Vector2 and apply scale float theta = (degrees - 90) * (Mathf.PI / 180); Vector2 v = Vector2(cos(theta), sin(theta)) * size; // Convert Vector2 back to degrees float a = atan2(v.y, v.x) * (180 / Mathf.PI) + 90; return a; } I don't know
|
Your computation is correct and doesn't really simplify any further. $x$ and $y$ are the scaled horizontal and vertical values of the point on the ellipse relative to the center, and the new angle $\alpha$ is given by your formula. The value of $c$ is simply $\sqrt{x^2 + y^2} = \sqrt{a^2 \cos^2 \theta + b^2 \sin^2 \theta}$ .
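A small Python sketch of the same computation (the function name is mine, not from the question):

```python
import math

def scale_angle(alpha_deg, a, b):
    # Scale the unit direction by (a, b), then recover the new angle and length c.
    theta = math.radians(alpha_deg)
    x = a * math.cos(theta)
    y = b * math.sin(theta)
    # c = sqrt(a^2 cos^2(theta) + b^2 sin^2(theta))
    return math.degrees(math.atan2(y, x)), math.hypot(x, y)

print(scale_angle(30.0, 2.0, 1.0))  # stretched angle and c for a 2x horizontal scale
```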
|
|recreational-mathematics|programming|
| 0
|
How do you find the double partial derivative of a quotient inside of a trigonometric function?
|
All I really know is the quotient rule and the chain rule, but this problem blew up and took several pages of my notebook and is still wrong. Is there a way to solve this in less than a page? $$ z=\arctan\left(\frac{x+y}{1-xy}\right) $$ $$ \text{find: } z_{xx} $$
|
HINT I would start with noticing that \begin{align*} \arctan\left(\frac{x + y}{1 - xy}\right) & = \arctan(x) + \arctan(y) \end{align*}
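If it helps, the identity (and the resulting $z_{xx}$) can be checked symbolically; a quick sympy sketch:

```python
from sympy import symbols, atan, diff, simplify

x, y = symbols('x y')
z = atan((x + y) / (1 - x*y))

# With z = atan(x) + atan(y), the y-dependence drops out of z_xx.
z_xx = diff(z, x, 2)
print(simplify(z_xx - diff(atan(x), x, 2)))  # 0
print(diff(atan(x), x, 2))                   # -2*x/(x**2 + 1)**2
```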
|
|calculus|derivatives|partial-derivative|
| 0
|
$f(x/a)$ converges to $f(x)$ in $L^1$ as $a\rightarrow 1$ for $f\in L^1$
|
Problem: For $a > 0$ , let $(S_a f)(x) = f(x/a)$ for Lebesgue measurable functions $f$ on $\mathbb{R}$ . Then for any $f\in L_1(\mathbb{R},m)$ , $S_a f\rightarrow f$ in $L_1$ as $a\rightarrow 1$ . Can anyone check if my proof below is correct or not? Or is there a simpler way? I think the difficulty here is that $f$ is not continuous, so we don't have $S_a f\rightarrow f$ pointwise, thus nullifying any DCT related results. My proof: We are going to follow the routine of: characteristic functions to simple functions to $L^1$ functions. We first need to show the linearity of $S_a$ . Indeed, $S_a(\alpha f + \beta g) = (\alpha f+\beta g)(x/a) = \alpha f(x/a) + \beta g(x/a) = \alpha S_a(f) + \beta S_a(g)$ . For $\chi_{I}$ where $I = (b, c)$ is an open interval, it is trivial that $\int |S_a \chi_I - \chi_I| dm \rightarrow 0$ . Characteristic functions. Let $E$ be a measurable set. Note that in this case we may assume $m(E) < \infty$ . We are to prove the argument for $\chi_{E}$ . Fix $\epsilon > 0$ ,
|
There is nothing wrong with what you did; however, I would go directly to a sequence of continuous functions approximating $f$ , slightly reducing the effort. In general, if you have an integral statement that you know how to prove immediately for a nice class of functions that are dense, prove it for that class and approximate. Let $\epsilon>0$ and take $f_\epsilon$ continuous with $\|f-f_\epsilon\|_1 < \epsilon$ . Then, $$ \|f(x)-f(x/a)\|_1\leq \|f(x)-f_\epsilon(x)\|_1+\|f_\epsilon(x)-f_\epsilon(x/a)\|_1+\|f_\epsilon(x/a)-f(x/a)\|_1\\ \leq (1+a)\epsilon+\|f_\epsilon(x)-f_\epsilon(x/a)\|_1 $$ by a change of variables. Now you can finish with DCT as you noted.
|
|real-analysis|measure-theory|
| 0
|
Sum of the elements in a discrete set
|
(This may have been asked elsewhere but I cannot find it.) Take the set of positive integers $\{1,2,...\frac{n}{2} (n+1)\}$ and put them into $n$ subsets as follows: $S_{1}=\{1\}$ $S_{2}=\{2,3\}$ $S_{3}=\{4,5,6\}$ And so on so that $S_{n}$ contains exactly $n$ elements. What is the sum of the elements of $S_{n}$ in terms of $n$ ?
|
Define $Z_k := 1+2+\ldots+k = \dfrac{k(k+1)}{2}$ for $k \in \mathbb{N}$ . Notice that the largest element of $S_n$ is $Z_n$ . So the sum of elements of $S_n$ is \begin{align} Z_{Z_n} - Z_{Z_{n-1}} &= \dfrac{Z_{n}(Z_{n}+1)}{2} - \dfrac{Z_{n-1}(Z_{n-1}+1)}{2}\\ &= \dfrac{(Z_{n-1}+n)(Z_{n-1}+n+1)}{2} - \dfrac{Z_{n-1}(Z_{n-1}+1)}{2} \end{align} which simplifies to $\dfrac{n(n^2+1)}{2}$ .
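A quick brute-force check of the closed form (a minimal Python sketch):

```python
def tri(k):
    # k-th triangular number Z_k = k(k+1)/2
    return k * (k + 1) // 2

# S_n contains the integers from Z_{n-1} + 1 up to Z_n.
for n in range(1, 20):
    assert sum(range(tri(n - 1) + 1, tri(n) + 1)) == n * (n**2 + 1) // 2
print("closed form n(n^2+1)/2 verified")
```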
|
|sequences-and-series|set-theory|
| 1
|
Sigmoid-Normal Inequality
|
I conjecture that for any $\beta>0$ , \begin{align} \sigma(\beta) > \Phi(x_{\beta}) \quad\text{where}\quad x_{\beta} = \beta \phi(x_{\beta}). \end{align} $\sigma$ denotes the sigmoid function defined by $\sigma(x) = \frac{1}{1+e^{-x}}$ . $\Phi$ and $\phi$ denote the standard normal cdf and pdf, respectively. The conjecture seems to be true according to my numerical analysis, but I am unsure about a formal proof. I would appreciate it if you would share your thoughts or suggestions. Thank you.
|
We rephrase the problem as follows. Problem. Prove that, for all $\beta > 0$ , $$\frac{1}{1 + \mathrm{e}^{-\beta}} > \int_{-\infty}^{u} \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-t^2/2}\, \mathrm{d} t$$ where $u$ is a real number satisfying $$u = \beta \cdot \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^2/2}.$$ ( Note : Clearly, such $u$ exists (unique) and is positive.) $\phantom{2}$ Proof. We split into two cases. Case 1. $\beta \ge 3$ Let $f(u) := u - \beta \cdot \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^2/2}$ . We have $f'(u) > 0$ . We have $$f(\sqrt{2\ln \beta}) = \sqrt{2\ln \beta} - \frac{1}{\sqrt{2\pi}} \ge \sqrt{2\ln 3} - \frac{1}{\sqrt{2\pi}} > 0 = f(u)$$ which results in $\sqrt{2\ln \beta} \ge u$ . It suffices to prove that $$\frac{1}{1 + \mathrm{e}^{-\beta}} > \int_{-\infty}^{\sqrt{2\ln \beta}} \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-t^2/2}\, \mathrm{d} t. \tag{1}$$ We have, for all $u>0$ , $$1 - \frac{u^2 - 1}{u^3\sqrt{2\pi}}\,\mathrm{e}^{-u^2/2}\ge \int_{-\infty}^{u} \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-t^2/2}\,\mathrm{d}t.$$
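For readers who want to reproduce the numerical evidence, a sketch using scipy (solving $u = \beta\,\phi(u)$ with a bracketing root-finder; the sampling grid is arbitrary):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def conjecture_holds(beta):
    # f(t) = t - beta*phi(t) changes sign on [0, beta]:
    # f(0) = -beta*phi(0) < 0 and f(beta) = beta*(1 - phi(beta)) > 0.
    u = brentq(lambda t: t - beta * norm.pdf(t), 0.0, beta)
    return 1.0 / (1.0 + np.exp(-beta)) > norm.cdf(u)

print(all(conjecture_holds(b) for b in np.linspace(0.01, 50.0, 500)))
```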
|
|real-analysis|inequality|normal-distribution|
| 1
|
Geometric Interpretations of Complex Solutions to Simple Equations
|
My question is similar to this one but hopefully simpler. See the attached image below I created on Desmos. Now, the zeroes for the blue parabola are $x = -1$ and $x = 1$ . The zeroes for the red parabola are $+i$ and $-i$ . The geometric interpretation for the former is easily understood as the intersection of the parabola with the x axis. But I am struggling to find a geometric interpretation for the latter. Based on Angae MT's comments, I took another stab at drawing this, where the z axis is imaginary and the BLUE parabola represents the reflection that Angae is speaking of and the GREEN represents the same parabola but rotated 90 degrees. The only difference is that it is plotted in 3D with the 3rd axis being imaginary. I tried to upvote MT Angae's answer but I don't have sufficient reputation. 3D of Angae's Answer
|
I give a proof for an open upward parabola, which can be generalised to an open downward parabola too (treat it as an exercise!) Lemma. Let $f(x)=(x+h)^2+k$ be an open upward parabola such that $f(x)=0$ has roots $a\pm bi$ , where $a,b\in\mathbb{R}$ . Define $g(x):=k-(x+h)^2$ , then $g(x)=0$ has roots $a\pm b$ . Proof. By the quadratic formula, $$g(x)=0\iff x=\dfrac{2h\pm\sqrt{(-2h)^2-4(-1)(k-h^2)}}{-2}=-h\pm\sqrt{k}$$ But from $f(x)$ , we know $a+bi=\dfrac{-2h\pm\sqrt{(2h)^2-4(h^2+k)}}{2}=-h\pm\sqrt{-k}=-h\pm i\sqrt{k}$ . So we have $$\begin{cases}a=-h\\ b=\sqrt{k}\end{cases}$$ And it follows that $g(x)=0$ has roots $a\pm b$ . What is the use of this Lemma? The idea is that the graph of $g(x)$ is actually obtained by reflecting the graph of $f(x)$ along the horizontal line which touches the vertex of $y=f(x)$ . This gives the geometric meaning. For example, for your $x^2+1$ , reflect the curve $y=x^2+1$ along $y=1$ ; then it intersects the $x$ -axis at $(-1,0)$ and $(1,0)$ , so the original function has zeroes $0\pm 1\cdot i=\pm i$ .
|
|geometry|polynomials|complex-numbers|analytic-geometry|
| 1
|
What is in the image of the exponential of $\mathfrak{sl}(n,\mathbb{R})$? What do you need to get all of $\mathrm{SL}(n,\mathbb{R})$?
|
This question discusses how $\mathrm{SL}(2,\mathbb{R})$ coincides with $\pm\exp(z)$ with $z\in \mathfrak{sl}(2,\mathbb{R})$ (the real traceless matrices). Is it known what happens for $n>2$ ? Namely, one can represent all invertible matrices with determinant $1$ and trace greater than or equal to $-2$ . Also, just by providing a sign (more precisely, up to the center of the group, which is $\{ I, -I\}$ ) one is able to obtain all of $\mathrm{SL}(2,\mathbb{R})$ . Is such a result known for the general case of $\mathrm{SL}(n,\mathbb{R})$ , which characterizes the image of the exponential and what one needs to "add" in order to obtain the entire group (i.e. what is the "minimal" set $A\subseteq G$ such that $\mathrm{SL}(n,\mathbb{R}) = A \exp(\mathfrak{sl}(n,\mathbb{R}))$ )? I would also be interested in the more general question regarding classical Lie groups, although this would already be a great help towards a greater understanding of what happens in the exponential (even just for $n=3$ ).
|
I am not sure if the complete answer is known, but the interior and the exterior of the image $E$ of the exponential map are known: The interior of $E$ consists of all matrices in $SL(n,\mathbb R)$ which have no negative eigenvalues. The exterior of $E$ consists of all matrices in $A\in SL(n,\mathbb R)$ which have at least one negative eigenvalue of odd multiplicity. Thus, the boundary of $E$ consists of matrices $A\in SL(n,\mathbb R)$ such that some eigenvalues of $A$ are negative but all such eigenvalues have even multiplicity. For instance, $A=Diag(-1/2,-1/2,-2, -2, 1)$ is an example. This result is due to M.Nishikawa but his paper does not seem to be accessible. Another proof can be found in Đoković, Dragomir Ž. , The interior and the exterior of the image of the exponential map in classical Lie groups , J. Algebra 112, No. 1, 90-109, Corrigendum 115, No. 2, 521 (1988). ZBL0638.22006 . He also gives a description of the image $E$ of the exponential map for $SL(n,\mathbb C)$ but it
|
|linear-algebra|matrices|lie-groups|lie-algebras|matrix-exponential|
| 1
|
A clarification regarding the definition of uniform continuity of a function defined in a subset of $\mathbb R.$
|
Let $f:A\to \Bbb R$ where $A\subseteq \Bbb R$ . We say that $f$ is uniformly continuous on $A$ if for any $\epsilon\gt 0$ there exists $\delta(\epsilon)=\delta\gt 0$ such that for any $x_1,x_2\in A$ satisfying $|x_2-x_1|\lt \delta$ we have $|f(x_2)-f(x_1)|\lt\epsilon.$ Now, my question is: Say, for a particular $\epsilon_0\gt 0$ there exists a $\delta\gt 0$ such that for any $x_1,x_2\in A$ satisfying $|x_2-x_1|\lt \delta$ we have $|f(x_2)-f(x_1)|\lt\epsilon_0.$ But what if no two distinct points in the domain $A$ of $f$ are within a distance $\delta$ of each other, or, in other words, what if every pair of distinct points in $A$ has a distance strictly greater than $\delta$ ? Will $f$ still be uniformly continuous? My answer is "yes". This is because the definition of uniform continuity says that if any two points, say $x_1,x_2$ , have a distance between them of the required $\delta$ or even less than $\delta$ (for some choice of $\epsilon$ ), then $|f(x_2)-f(x_1)|\lt \epsilon$ must hold
|
This is true, and you can think of this even in the continuous case, because a domain may not have any accumulation point, or may even be discrete. For example, $$f:\mathbb{N}\to\mathbb{R}, f(x)=\max\{x,\pi\}$$ is continuous everywhere, because $|x-y|\ge1$ for any $x\ne y$ , so you can pick $\delta=\dfrac{1}{2}$ . And of course it is uniformly continuous also. You can somehow consider it as vacuously true in logic: since no such $x,y$ satisfy the condition, $f$ is also (uniformly) continuous.
|
|functions|solution-verification|continuity|definition|uniform-continuity|
| 0
|
An attempt at approximating the logarithm function $\ln(x)$: Could it be extended to big numbers?
|
An attempt at approximating the logarithm function $\ln(x)$ : Could it be extended to big numbers? PS: Thanks everyone for your comments and interesting answers showing how the logarithm function is currently calculated numerically, but so far nobody is answering the question I am asking, which is related to the formula \eqref{Eq. 1}: Is it correctly calculated? Could a formula for the logarithm of large numbers be found with it? Here with "big/large numbers" I mean in the same sense in which Stirling's approximation formula approximates the factorial function at large values. Intro__________ On a previous question I found that the following approximation could be used: $$\ln\left(1+e^x\right)\approx \frac{x}{1-e^{-\frac{x}{\ln(2)}}},\ (x\neq 0) \quad \Rightarrow \quad \dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln\left(x^{\ln(2)}\right)} \approx \frac{x}{x-1}$$ And later I noted that I could do the following: $$\dfrac{\ln\left(1+x^{\ln(2)}\right)}{\ln(2)} \approx \frac{x\ln\lef
|
From what I gather, you would like to find an approximation of $\ln(x)$ for big values of $x$ . If so then you can consider $$\ln(x)\approx\frac{\pi}{2M\big(1,2^{2-m}/x\big)}-m\ln(2)$$ where $M(x,y)$ is the arithmetic-geometric mean. Here are the values of $\log x$ for $x = 2,\dots,10$ calculated directly using Wolfram Mathematica: Table[N[ Log[x], 20], {x, 2, 10}] $${0.69314718055994530942, 1.0986122886681096914, \ 1.3862943611198906188, 1.6094379124341003746, 1.7917594692280550008, \ 1.9459101490553133051, 2.0794415416798359283, 2.1972245773362193828, \ 2.3025850929940456840}$$ and using the arithmetic-geometric mean: Table[N[Pi/(2 ArithmeticGeometricMean[1, 2^(2 - m)/x]) - m Log[2], 20], {x, 2, 10}] //. m -> 10 $${0.693147180559945, 1.098612288668110, 1.386294361119891, \ 1.609437912434100, 1.791759469228055, 1.945910149055313, \ 2.079441541679836, 2.197224577336219, 2.302585092994046}$$
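For completeness, here is the same computation in Python (a sketch with mpmath; m = 10 as above):

```python
from mpmath import mp, agm, pi, log, mpf

mp.dps = 25   # digits of working precision
m = 10

def ln_agm(x):
    # ln(x) ~= pi / (2 M(1, 2^(2-m)/x)) - m ln 2
    return pi / (2 * agm(1, mpf(2)**(2 - m) / x)) - m * log(2)

for x in range(2, 11):
    print(x, ln_agm(x), log(x))
```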
|
|real-analysis|combinatorics|convergence-divergence|solution-verification|pochhammer-symbol|
| 0
|
Showing that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ without substitution?
|
Show that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ I would sub, but are there alternative methods to consider?
|
$$x=\secθ+\tanθ$$ Squaring both sides $$x^2=(\secθ+\tanθ)^2$$ $$=\sec^2θ+\tan^2θ+2\secθ\tanθ$$ Now we have to use the identity $$\sec^2θ-1=\tan^2θ$$ So $$x^2= 2\sec^2θ-1+2\secθ\tanθ$$ Or $$x^2=2\secθ(\secθ+\tanθ) -1$$ Or $$x^2=yx -1$$ As $y=2\secθ$ Hence $$x^2-xy+1=0$$
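The identity can also be confirmed symbolically; a one-line sympy check:

```python
from sympy import symbols, sec, tan, trigsimp

t = symbols('theta')
x, y = sec(t) + tan(t), 2 * sec(t)
print(trigsimp(x**2 - x*y + 1))  # 0
```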
|
|trigonometry|
| 0
|
four-digit number is equal to the product of the sum of its digits multiplied by the square of the sum of the squares of its digits
|
I'm trying to find a four-digit number that is equal to the product of (the sum of its digits) multiplied by (the square of the sum of the squares of its digits). I've tried running all combinations in Python and found two solutions (2023 and 2400). However, my maths teacher gave it to me and said there was a way to solve it analytically. $\sum_{i=0}^{3}10^{3-i}a_i = (\sum_{i=0}^{3} a_i) \times \left(\sum_{i=0}^{3} a_i^2\right)^2$ The only thing I found is that, since $6^5 = 7776$ , no $a_i$ for $i \in \{0, 1, 2, 3\}$ can be greater than or equal to six, because otherwise $a_0$ would be greater than or equal to 7 and $7^5 > 10000$ .
|
Here's a solution that ends up checking fewer than $30$ cases. I don't know if it would count as an analytic solution, but it's at least partly analytic. Note that you don't need to approach this as a search through all 9000 four-digit numbers. Instead view it as a search through all sets of four digits $\{a,b,c,d\}$ , seeing if $(a+b+c+d)(a^2+b^2+c^2+d^2)^2$ is some arrangement of those digits. You established that no digit can be $7$ or higher. So not only is the number at most $9999$ , it is at most $6666$ . Suppose there is a $6$ . Even $(6+0+0+0)(6^2+0^2+0^2+0^2)^2>6666$ , so there is no $6$ either. So not only is the number at most $6666$ , it is at most $5555$ . Suppose there is a $5$ . But $(5+2+0+0)(5^2+2^2+0^2+0^2)^2>5555$ and $(5+1+1+1)(5^2+1^2+1^2+1^2)^2>5555$ , and so the only possibilities with a $5$ are $\{5,1,1,0\}$ with $(5+1+1+0)(5^2+1^2+1^2+0^2)^2=5103$ ; $\{5,1,0,0\}$ with $(5+1+0+0)(5^2+1^2+0^2+0^2)^2=4056$ ; and $\{5,0,0,0\}$ with $(5+0+0+0)(5^2+0^2+0^2+0^2)^2=3125$ .
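The same reduction is easy to mirror in code: search digit multisets rather than all 9000 numbers (a Python sketch):

```python
from itertools import combinations_with_replacement

# Digits are at most 5 by the arguments above; scan the 126 multisets.
for digits in combinations_with_replacement(range(6), 4):
    s = sum(digits)
    q = sum(d * d for d in digits)
    value = s * q * q
    # The value must be a four-digit arrangement of exactly these digits.
    if 1000 <= value <= 9999 and tuple(sorted(int(c) for c in str(value))) == digits:
        print(value)  # prints 2400 and 2023
```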
|
|elementary-number-theory|discrete-mathematics|
| 0
|
Connectedness of topological subspaces
|
I'm studying general topology and a question has come to my mind. [Definition of connected topological space and connected subset of a topological space] We have defined a connected topological space to be a space that (viz. such that its underlying set) cannot be written as the union of two non-empty and disjoint open subsets. The notion extends to subsets by means of the subspace topology, viz. a subset of a topological space is said to be connected in it if, when endowed with the subspace topology, it's a connected topological space. Does the following claim hold? If a subset of a topological space is such that every clopen of the space either contains it or is disjoint from it, then the subset is connected in the space . [Some observations] The converse can be easily proved to be true. I am aware that connected spaces can be characterised as those whose only clopens are the empty set and the whole underlying set, but I don't know how to address the problem since I suppose that clop
|
I can think of a counterexample that may further your investigation. Let $X=\mathbb{R}$ with its usual topology and let $S=\{0,1\}$ . The only clopen subsets of $X$ are $\varnothing$ and $X$ itself; the former is disjoint from $S$ and the latter contains it, so the hypothesis of the claim is satisfied. Yet $S$ , with the subspace topology, is clearly disconnected. Hence the claim is false.
|
|general-topology|geometry|
| 0
|
Prove in a C* algebra that a*a is positive
|
I am trying to find a reference to the following "obvious facts" (not sure if they are true or not, but they should have some comparable results) regarding a non-commutative $C^\ast$ algebra $A$ . (1) For $a\in A,$ let $\Phi_A$ be the set of all multiplicative linear functionals $A \to \mathbb C.$ Then the spectrum $\sigma_A(a) = \{\varphi(a): \varphi\in \Phi_A\}.$ (2) $a^\ast a$ is positive in the sense that its spectrum is a subset of the nonnegative real numbers. (3) If $a$ is positive and invertible, and $b$ is positive, then $a+b$ is positive and invertible. Is there a place where I can find the proof of these results?
|
(1) is false in general. Take $A=M_2(\mathbb C)$ ; then the only multiplicative linear functional is zero. The assertion is true for unital commutative C $^*$ -algebras, where you take $\Phi_A$ to be the nonzero multiplicative linear functionals. As you wrote it, the assertion is true for non-unital commutative C $^*$ -algebras. The proof of " $a^*a$ is positive" that I know is a bit technical. The problem one has is that the statement needs to be proven early in the theory, before functional calculus and representations are available. If $a$ is positive and invertible, then $\sigma(a)\subset[c,\infty)$ for some $c>0$ . It follows that $a\geq cI$ , since $a-cI$ is positive. Then $a+b\geq a\geq cI$ , so $a+b-cI\geq0$ , which implies that $\sigma(a+b)\subset[c,\infty)$ and so $a+b$ is invertible. As for books, this stuff will be found in all C $^*$ -algebra books (Murphy, Davidson, Kadison-Ringrose, even Conway's Functional Analysis , to mention a few; as well as older ones like Sakai and Di
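Claim (2) is easy to illustrate numerically for matrix algebras (not a proof, just a sanity check with numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# a*a is self-adjoint, and its spectrum is contained in [0, infinity).
eigs = np.linalg.eigvalsh(a.conj().T @ a)
print(eigs)                    # real and nonnegative (up to roundoff)
print(np.all(eigs >= -1e-12))  # True
```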
|
|functional-analysis|reference-request|operator-algebras|banach-algebras|
| 0
|
Solving progressive tax calculation for pre-tax income
|
Progressive Tax Rate Explanation. Progressive taxation works by taxing income within a certain bracket at different rates. For example:

| Bracket # | % tax rate within bracket | Min amount (exclusive) | Max amount (inclusive) |
|---|---|---|---|
| 1 | 10% | 0 | 50,000 |
| 2 | 20% | 50,000 | 60,000 |
| 3 | 25% | 60,000 | n/a (>60,000) |

In the example above, the taxes on 100,000 in income would be 17,000 (5,000 from bracket #1, 2,000 from bracket #2, and 10,000 from bracket #3). I'm currently computing this as the dot product of two vectors (with a "zero" bracket of 0%/0 max): $$ ([\textrm{bracket 1 rate}, \textrm{bracket 2 rate}, \textrm{bracket 3 rate}]-[\textrm{bracket 0 rate}, \textrm{bracket 1 rate}, \textrm{bracket 2 rate}]) \cdot [max(0, \textrm{income}-\textrm{bracket 0 max}), max(0, \textrm{income}-\textrm{bracket 1 max}), max(0, \textrm{income}-\textrm{bracket 2 max})] $$ So, in the example above we do: $$ \begin{align} ([0.1, 0.2, 0.25]-[0, 0.1, 0.2]) \cdot [max(0, 100000-0), max(0, 100000-50000), max(0, 100000-60000)] \\ ([0.1, 0.1,
|
Start by assuming that the pre-tax income is in the lowest bracket, in which case you need take-home/0.9 Check whether the pre-tax is under 50000. If yes, you are done. If no, the first 50000 of pre-tax generates 45000 of take-home. Subtract 45000 from the required take-home and assume you are in the second bracket, so the required pre-tax is (take-home - 45000)/0.8 + 50000 Check whether the pre-tax is under 60000. If so you are done. Otherwise go into the third bracket the same way.
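The walk-up described above translates directly into code; a Python sketch with the example's brackets (the function name is mine):

```python
def pretax_from_takehome(take_home):
    # (upper bound of bracket, tax rate within it)
    brackets = [(50_000, 0.10), (60_000, 0.20), (float("inf"), 0.25)]
    lower, net_so_far = 0.0, 0.0
    for upper, rate in brackets:
        net_in_bracket = (upper - lower) * (1 - rate)
        if take_home <= net_so_far + net_in_bracket:
            # target take-home is reached inside this bracket
            return lower + (take_home - net_so_far) / (1 - rate)
        net_so_far += net_in_bracket
        lower = upper

print(pretax_from_takehome(83_000))  # 100000.0 (17,000 of tax, as in the example)
```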
|
|linear-algebra|inner-products|
| 0
|
Twenty questions game where you have to guess two objects
|
Suppose we consider a variant of twenty questions game, where the answerer chooses two answers. On a query from the questioner, the answerer replies $0, 1$ or $2$ , depending on the number of answers that satisfy the question. The questioner queries the answerer in the same way as the original game, and wins if one guesses both answers correctly. Example . Suppose that the two answers are apples and oranges. On a query "Is it a fruit?", the answerer replies $2$ . On a query "Is it an orange?", the answerer replies $1$ , and so on. It is obvious that the game has gotten harder, but I got curious about how much it got harder . It is widely known that the answer for the original twenty questions game has $20$ bits of entropy, since the answerer replies 'Yes' or 'No' for each question. Question . How many questions would we need to correctly identify the two answers? Would there be a (constructive) clever way to obtain them? At first I mistakenly thought that the answer would be ordered, s
|
If there are $n$ objects, there are $\frac {n(n-1)}2$ pairs and you need to find one pair. Assuming you can make maximum use of each question, splitting the remaining possibilities evenly among the three answers, there are $3^{20}$ sets of answers, so we need to solve $$\frac {n(n-1)}2 =3^{20}$$ which Alpha says has $n \approx 83\ 508$ This compares with $2^{20}=1\ 048\ 576$ objects you can select one of in the original game.
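The arithmetic, for reference (a two-line Python check):

```python
import math

pairs = 3**20                            # 3,486,784,401 possible answer transcripts
n = (1 + math.sqrt(1 + 8 * pairs)) / 2   # positive root of n(n-1)/2 = 3^20
print(n)                                 # ~83508
```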
|
|game-theory|entropy|
| 1
|
Color the edges of K6 by three colors so that there is no monochromatic cycle.
|
Is it possible to color the edges of K6 by three colors so that there is no monochromatic cycle? I tried many times but always end up getting monochromatic cycle.
|
If one colour is used on at least $6$ edges there will be a cycle of that colour since there are only $6$ vertices. Hence each colour must be used $5$ times. Here is a symmetric solution: Colour 1: 01 15 52 24 43 Colour 2: 23 31 14 40 05 Colour 3: 45 53 30 02 21
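One can verify this colouring mechanically: each colour class must be acyclic, i.e. a forest, which a union-find pass detects (a short Python sketch):

```python
def is_forest(edges, n=6):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:        # this edge would close a cycle
            return False
        parent[ru] = rv
    return True

colours = [
    [(0, 1), (1, 5), (5, 2), (2, 4), (4, 3)],
    [(2, 3), (3, 1), (1, 4), (4, 0), (0, 5)],
    [(4, 5), (5, 3), (3, 0), (0, 2), (2, 1)],
]
print(all(is_forest(c) for c in colours))  # True: no monochromatic cycle
```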
|
|graph-theory|
| 0
|
Does this type of tensor appear anywhere?
|
Antisymmetric, or skew-symmetric tensors on a subset of indices are those that get multiplied by $-1$ when any of the indices from the subset are transposed. This type of tensor is widely used in physics and mathematics. Now imagine I have, say, a tensor of type $(3, 0)$ that gets multiplied by $e^{2\pi i\over3}$ when I apply a cyclic permutation to the indices. More generally, let $h$ be an element of order $k$ in the multiplicative group of the underlying field, and say there is a tensor that gets multiplied by $h$ when a certain permutation of order $k$ is applied to its indices. This property does not seem to depend on the choice of basis. Do such tensors appear anywhere in physics or mathematics? I'm sorry if this question isn't well-motivated, I'm just curious.
|
In general, the symmetric group $S_n$ acts on the space $\Bbb V^{\otimes n} = \Bbb V \otimes \cdots \otimes \Bbb V$ of $n$ -tensors on $\Bbb V$ by permutation of indices, and this observation is a natural starting point for the study of Weyl's construction, Schur functors , and related ideas, one important consequence of which is the decomposition of $\Bbb V^{\otimes n}$ into irreducible $\operatorname{GL}(\Bbb V)$ -representations. For more about this perspective, see, e.g., Fulton & Harris' Representation Theory , $\S$ 6. Suppose that $\Bbb V$ is a (finite-dimensional, complex) vector space, let $\sigma$ denote the index permutation $(123)$ , so that $(T \cdot \sigma)^{ijk} = T^{jki}$ , and denote $\zeta := e^{2 \pi i / 3}$ . In this notation we're looking for a description of the space $\Bbb S \subseteq \Bbb V^{\otimes 3}$ comprising the tensors satisfying $$T \cdot \sigma = \zeta T .$$ By construction $\Bbb S$ is a $GL(\Bbb V)$ -representation, hence it is a direct sum of irreducib
|
|linear-algebra|representation-theory|tensors|
| 1
|
Solving progressive tax calculation for pre-tax income
|
Progressive Tax Rate Explanation. Progressive taxation works by taxing income within a certain bracket at different rates. For example:

| Bracket # | % tax rate within bracket | Min amount (exclusive) | Max amount (inclusive) |
|---|---|---|---|
| 1 | 10% | 0 | 50,000 |
| 2 | 20% | 50,000 | 60,000 |
| 3 | 25% | 60,000 | n/a (>60,000) |

In the example above, the taxes on 100,000 in income would be 17,000 (5,000 from bracket #1, 2,000 from bracket #2, and 10,000 from bracket #3). I'm currently computing this as the dot product of two vectors (with a "zero" bracket of 0%/0 max): $$ ([\textrm{bracket 1 rate}, \textrm{bracket 2 rate}, \textrm{bracket 3 rate}]-[\textrm{bracket 0 rate}, \textrm{bracket 1 rate}, \textrm{bracket 2 rate}]) \cdot [max(0, \textrm{income}-\textrm{bracket 0 max}), max(0, \textrm{income}-\textrm{bracket 1 max}), max(0, \textrm{income}-\textrm{bracket 2 max})] $$ So, in the example above we do: $$ \begin{align} ([0.1, 0.2, 0.25]-[0, 0.1, 0.2]) \cdot [max(0, 100000-0), max(0, 100000-50000), max(0, 100000-60000)] \\ ([0.1, 0.1,
|
Let $I$ be the total income and $P$ be take-home pay. Then the taxes paid is $T = I - P$ , where $$T = \begin{cases} 0.1 I, & I \in [0, 50000] \\ 5000 + 0.2(I - 50000), & I \in (50000, 60000] \\ 7000 + 0.25(I - 60000), & I \in (60000, \infty). \end{cases}$$ Hence $$P = \begin{cases} 0.9 I, & I \in [0, 50000] \\ 0.8 I + 5000, & I \in (50000, 60000] \\ 0.75 I + 8000, & I \in (60000, \infty). \end{cases}$$ Now all that remains is to invert this piecewise function. We do this by solving each piece for $I$ and expressing the interval for each piece in terms of $P$ . So for instance, $$P = 0.9I, \quad I \in [0, 50000]$$ implies that $$I = \frac{10}{9}P, \quad P \in [0, 45000].$$ The other cases are handled similarly; we obtain $$I = \begin{cases} \frac{10}{9}P, & P \in [0, 45000] \\ \frac{5}{4}(P - 5000), & P \in (45000, 53000] \\ \frac{4}{3}(P - 8000), & P \in (53000, \infty). \end{cases}$$ This furnishes the total income $I$ needed to earn a take-home pay of $P$ .
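The final piecewise inverse, as a direct Python transcription:

```python
def income_from_takehome(P):
    # Inverse of the take-home function derived above.
    if P <= 45_000:
        return 10 * P / 9
    if P <= 53_000:
        return 5 * (P - 5_000) / 4
    return 4 * (P - 8_000) / 3

print(income_from_takehome(83_000))  # 100000.0
```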
|
|linear-algebra|inner-products|
| 1
|
Irreducible polynomial in $\Bbb{Z}_2[x]$
|
Suppose $2k + 1 \equiv 3 \mod 4$ in $\Bbb{Z}_{\geq 1}$ . Is the polynomial: $p_k(x) = x^{2k + 1} + x^{2k - 1} + \dots + x + 1$ irreducible in $\Bbb{Z}_2[x]$ ? I do not know whether it is true or not... ( Side note: I've had a question in a test that is solved immediately if the statement above turns out to be true (a two-part question that would be solved by proving a single statement). The question was much simpler than trying to prove/disprove the above statement, but I missed the easy way (very unfortunately and quite disappointingly after so much preparation and work). I decided to go about it my own way, brute forcing instead of trying to think outside the box.) The idea I had in mind - the statement is true: Suppose for the sake of contradiction that $p_k$ is reducible and suppose $g, f \in \Bbb{Z}_2[x]$ are such that $gf = p_k$ and $\deg(f), \deg(g) \geq 1$ . Also w.l.o.g $\deg f > \deg g$ . Denote $$ f = x^s + b_{s - 1}x^{s - 1} + \dots + b_1 x + 1 \\ g = x^r + a_{r - 1}x^{r - 1
|
Some simple (non-computer) ways of seeing the existence of non-trivial factors. In the end I prove that every primitive polynomial $\in \Bbb{F}_2[x]$ of degree $>2$ is a factor of infinitely many polynomials of this type. I reindex the polynomials and call $P_\ell(x)$ the polynomial with degree $4\ell-1$ . The geometric sum formula tells us that $$ P_\ell(x)=1+x\frac{1+x^{4\ell}}{1+x^2}. $$ So an element $\alpha\in K:=\overline{\Bbb{F}_2}$ is a root of $P_\ell(x)$ , if $\alpha\neq1$ and $$ \alpha(1+\alpha^{4\ell})=1+\alpha^2.\qquad(*) $$ Because $\alpha\notin \Bbb{F}_2$ , it is a root of unity of some order $M$ . From equation $(*)$ it thus follows that if $\alpha$ is a root of $P_\ell(x)$ , it is also a root of $P_k(x)$ for all $k\equiv \ell\pmod M$ . In other words $P_\ell(x)$ has a non-trivial common factor with all the polynomials $P_{\ell+j M}(x)$ , $j=1,2,3,\ldots$ . For example the polynomial $P_1(x)=x^3+x+1$ is a well-known irreducible. Its roots have order $M=7$ , because $7=2^3-1$ .
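The divisibility claim is easy to spot-check with sympy over $\mathbb{F}_2$ (a sketch; $P_\ell$ is built from its odd-power form):

```python
from sympy import symbols, Poly, gcd

x = symbols('x')

def P(l):
    # P_l(x) = 1 + x + x^3 + ... + x^(4l-1) over F_2
    return Poly(1 + sum(x**(2*j + 1) for j in range(2*l)), x, modulus=2)

# Roots of P_1 = x^3 + x + 1 have order 7, so P_1 should divide P_8 (8 = 1 mod 7).
print(gcd(P(1), P(8)))  # expected: x**3 + x + 1 (mod 2)
```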
|
|polynomials|finite-fields|irreducible-polynomials|
| 0
|
Smallest possible cardinality of finite set with two non-elementarily equivalent magmas which satisfy the same quasi-equations?
|
This is a natural follow-up to my previous question, here: Examples of two finite magmas which satisfy the same equations but not the same quasi-equations? . In the answer to that question, Keith Kearnes said that any two magmas on $\{0,1\}$ that satisfy the same equations are isomorphic. My question now is, is there a finite set $S$ and two binary operations $+$ and $*$ on $S$ such that the magmas $(S;+)$ and $(S;*)$ satisfy the same equations and also the same quasi-equations, but such that they are not elementarily equivalent, i.e, they do not have the same first-order theory? And if so, what is the smallest possible cardinality of $S$ ? It has to be at least $3$ , that is for sure. If the exact answer is unknown, I would like to know very good upper and lower bounds.
|
There are two magmas (in fact semilattices) of order $3$ which satisfy the same quasi-identities (universal Horn sentences) but are not elementarily equivalent. Namely, let $A=\{1,2,3\}$ and $B=\{1,2,4\}$ , both with the operation $x*y=\gcd(x,y)$ . They satisfy the same quasi-identities because $A$ and $B$ are isomorphic to subalgebras of $C\times C$ where $C=\{1,2\}=A\cap B$ . They are not elementarily equivalent because the sentence $\forall x\forall y(x*y=x\lor x*y=y)$ holds in $B$ but not in $A$ . The magma $B=(\{1,2,4\};*)$ is isomorphic to the magma $B'=(\{1,2,3\};\circ)$ where $x\circ y=\min(x,y)$ .
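The separating sentence is simple enough to check by brute force (Python; $*$ realised as gcd):

```python
from math import gcd
from itertools import product

def sentence_holds(S):
    # forall x, y:  x*y = x  or  x*y = y,  with  x*y := gcd(x, y)
    return all(gcd(x, y) in (x, y) for x, y in product(S, repeat=2))

print(sentence_holds({1, 2, 4}), sentence_holds({1, 2, 3}))  # True False
```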
|
|model-theory|universal-algebra|
| 1
|
Limit as $n\to+\infty$ of $\prod_{k=1}^{n} \frac{2k}{2k+1}$
|
I'm trying to evaluate $$\lim_{n\to+\infty} \prod_{k=1}^{n} \frac{2k}{2k+1}$$ First I notice that since $k\geq1$ it is $\frac{2k}{2k+1}>0$ for all $k\in\{1,...,n\}$ ; so $$0\leq\lim_{n\to+\infty} \prod_{k=1}^{n} \frac{2k}{2k+1}$$ Then I notice that $$\prod_{k=1}^{n} \frac{2k}{2k+1}=\exp{\ln\left(\prod_{k=1}^{n} \frac{2k}{2k+1}\right)}=\exp{\sum_{k=1}^{n}\ln\left(\frac{2k}{2k+1}\right)}=$$ $$=\exp{\sum_{k=1}^{n}\ln\left(1-\frac{1}{2k+1}\right)}$$ Since $\ln(1+x)\leq x$ for all $x>-1$ and since $\exp$ is an increasing function it follows that $$\exp{\sum_{k=1}^{n}\ln\left(1-\frac{1}{2k+1}\right)}\leq\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}$$ So $$\lim_{n\to+\infty}\prod_{k=1}^{n} \frac{2k}{2k+1}\leq\lim_{n\to+\infty}\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}$$ Since $\exp$ is a continuous function it follows that $$\lim_{n\to+\infty}\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}=\exp{\sum_{k=1}^{+\infty}-\frac{1}{2k+1}}=e^{-\infty}=0$$ So by the comparison test we deduce that the limit is $0$ . Is this correct?
|
Using the formula $\displaystyle 1+X\leq e^{X}$ , so that $\displaystyle 1-X\leq e^{-X}$ for $X>0$ , we get $$0 < \prod^{n}_{k=1}\frac{2k}{2k+1}=\prod^{n}_{k=1}\left(1-\frac{1}{2k+1}\right)\leq e^{-\sum^{n}_{k=1}\frac{1}{2k+1}}.$$ Since $\frac{1}{2k+1}>\frac{1}{2k+2}$ and $\sum_{k\ge1}\frac{1}{2k+2}$ diverges, the right-hand side tends to $0$ . So using the Squeeze Theorem, we get $$\lim_{n\to\infty}\prod^{n}_{k=1}\frac{2k}{2k+1}=0.$$
|
|real-analysis|sequences-and-series|limits|infinite-product|
| 0
|
Finding the values of $k$ for which $3\sin(\theta)+k\cos(\theta+30^\circ)=7$ has real solutions
|
Find the values of $k$ for which this equation has real solutions. $$3\sin\left(\theta \right)+k\cos\left(\theta +30^\circ\right)=7$$ I wrote the LHS into $a\ \sin\left(\theta\right)+b\cos\left(\theta\right)$ : $$\left(3-\frac{k}{2}\right)\sin\left(\theta\right)+\frac{\sqrt{3}}{2}k\cos\left(\theta\right)$$ I need to verify that this is correct and find how to proceed from here onwards.
|
The moment you come up with this equation, $$(3-\frac{k}{2})\sin\theta+\frac{\sqrt{3}}{2}k\cos\theta=7$$ one must look at the RHS. It is 7 (which is kind of big for terms involving $a\sin\theta+b\cos\theta$ ). Find the maximum value of the LHS! You will get a quadratic in $k$ . I recommend that you try it on your own once and then look at this solution. Use the inequality $$a\sin\theta+b\cos\theta\le\sqrt{a^{2}+b^{2}}$$ Further, write $$7\le\sqrt{(3-\frac{k}{2})^{2}+\frac{3k^{2}}{4}}$$ $$7\le\sqrt{9+\frac{k^{2}}{4}-3k+\frac{3k^{2}}{4}}$$ $$49\le k^{2}-3k+9$$ $$k^{2}-3k-40\ge0$$ $$(k-8)(k+5)\ge0$$ $$k\ge8\ \text{or}\ k\le-5$$ Proof of the inequality $$a\sin\theta+b\cos\theta\le\sqrt{a^{2}+b^{2}}:$$ Start by assigning $$a\sin\theta+b\cos\theta=k$$ On squaring, we get $$a^{2}\sin^{2}\theta+b^{2}\cos^{2}\theta+2ab\ \sin\theta\cos\theta=k^{2}$$ Divide by $\cos^{2}\theta$ : $$a^{2}\tan^{2}\theta+b^{2}+2ab\ \tan\theta=k^{2}\sec^{2}\theta$$ Now, on using $\sec^{2}\theta=1+\tan^{2}\theta$ and rearranging, we
|
|trigonometry|
| 0
|
Approximation of $\int_0^{\infty} e^{-bx^2}\sin(ax^2)\,dx$ when $a\gg b$
|
This is from an exercise in Migdal's "Qualitative Methods in Quantum Theory". For the case where $b\gg a$ , we can arrive at an estimate by re-writing the integral in the following way: $$ \frac{1}{\sqrt{b}}\int_0^{\infty} e^{-z^2}\sin(\frac{a}{b}z^2)dz \approx \frac{a}{b^{3/2}}\int_0^{\infty} e^{-z^2}z^2dz = a\sqrt{\frac{\pi}{16b^3}}. $$ However, I have been stuck trying to show that in the case $a\gg b$ , the integral is approximately $\sqrt{\frac{\pi}{8a}}$ . I am currently unsure how to treat the sine term, which oscillates quite fast. I considered approximating the integral's value by just considering the first half-cycle of an oscillation, but that didn't seem to bring me any closer to the known estimate. Any hints/ideas what approximations may be used to recover the given estimate when $a\gg b$ ?
|
If you use the Euler representation of the sine function, the antiderivative is an error function since $$\int e^{-bx^2}\,\sin(ax^2)\,dx=\Im \left( \int e^{-(b-i a)x^2}\,dx\right)$$ As @Travis Willse already wrote in comments $$\int_0^{\infty} e^{-bx^2}\,\sin(ax^2)\,dx=\frac{\sqrt{\pi }}{2 \sqrt[4]{a^2+b^2}}\,\sin \left(\frac{1}{2} \tan ^{-1}\left(\frac{a}{b}\right)\right)$$ Let $a=k b$ and Taylor expand for large values of $k$ $$I=\frac 1 2 \sqrt{\frac{\pi }{2 b k} }\left(1-\frac{1}{2 k}-\frac{3}{8 k^2}+O\left(\frac{1}{k^3}\right)\right)$$ Replace $k$ by $\frac a b$
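A numeric cross-check of the closed form and the large-$a$ estimate (a scipy sketch; the finite upper limit is safe because $e^{-bx^2}$ is negligible beyond it):

```python
import numpy as np
from scipy.integrate import quad

a, b = 10.0, 1.0  # a >> b

closed = np.sqrt(np.pi) / (2 * (a**2 + b**2)**0.25) * np.sin(0.5 * np.arctan(a / b))
numeric, _ = quad(lambda x: np.exp(-b * x**2) * np.sin(a * x**2), 0, 10, limit=500)

print(numeric, closed, np.sqrt(np.pi / (8 * a)))  # integral, exact value, estimate
```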
|
|integration|definite-integrals|numerical-methods|approximation|
| 1
|
Rate of change of volume when radius is increasing linearly
|
I have been trying to understand the solution of this problem for quite a while, and I get stuck at one specific line. Here's the question The radius of a sphere is increasing at the rate of $\frac{1}{\pi}$ m/s, then find change in volume of sphere when radius is 2.5m. Original Solution It's a different question, but the essence remains the same. In the solution, he differentiates $V$ wrt $t$ , in a single line. I would be grateful if someone explained that specific step in detail.
|
The idea is to use the chain rule. The given condition is equivalent to saying $\dfrac{dr}{dt}=\pi^{-1}$ , so you can get $$\dfrac{dV}{dt}=\dfrac{d}{dt}\left(\dfrac{4}{3}\pi r^3\right)=\dfrac{4\pi}{3}\dfrac{d(r^3)}{dt}=\dfrac{4\pi}{3}\dfrac{dr^3}{dr}\times\dfrac{dr}{dt}=\dfrac{4\pi}{3}(3r^2)\times\dfrac{1}{\pi}$$
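The same step, spelled out with sympy's automatic chain rule:

```python
from sympy import symbols, Function, Rational, pi, diff

t = symbols('t')
r = Function('r')(t)
V = Rational(4, 3) * pi * r**3

dVdt = diff(V, t)  # 4*pi*r(t)**2 * Derivative(r(t), t), by the chain rule
print(dVdt.subs(r.diff(t), 1/pi).subs(r, Rational(5, 2)))  # 25
```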
|
|calculus|
| 0
|
Awkward Linear Programming Problem
|
This is a linear programming problem I was given in my semester examinations. The question is attached as an image. Given its size, I couldn't type it out. LPP problem So basically we have to come up with all the inequalities for this problem given the conditions. A few of them are easy to note: Let the no. of products for models I, II and III be x, y and z respectively. Then $60x + 40y + 100z$ is the objective function. Given the minimum demand condition, we can say: $x\ge 500$ $y\ge 500$ $z\ge 375$ Given the conditions on raw materials we can compute: $2x + 3y + 5z \le 4000$ and $4x + 2y + 7z \le 6000$ From the labour statements given, we can say that if $l$ is the labour required to produce a unit of model I, then for each unit of models II and III, $\frac{l}2$ and $\frac{l}3$ labour are required respectively. The total labour capacity of the factory is $2500$ units of model I, so $2500l$ is the total labour capacity of the factory. So we get: $6x + 3y + 2z \le 15000$ Finally, fro
|
You are just asked to formulate a linear programming problem. You are not required to solve it by hand or reduce the number of variables though you are able to. Besides the number of variables, another factor to consider is the interpretability. Sometimes it is good to have more variables, say in variables $x,y,z$ so that we can tell which mathematical constraints refer to which requirement.
|
|optimization|linear-programming|operations-research|
| 0
|
Determinant of $n \times n$ matrix of a sort of skew symmetric matrix plus some diagonal
|
Given, a matrix: $$\begin{pmatrix} a & b & \ldots & b & b \\ -b & a & \ldots & b & b \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -b & \ldots & a & b \\ -b & -b & \ldots & -b & a \end{pmatrix}.$$ I need to find a determinant. So initially what I did, was I added the first column to other ones: $$\begin{pmatrix} a & b+a & \ldots & b+a & b+a \\ -b & a-b & \ldots & 0 & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -2b & \ldots & a-b & 0 \\ -b & -2b & \ldots & -2b & a-b \end{pmatrix},$$ then added the last row to the first one $$\begin{pmatrix} a-b & a-b & \ldots & a-b & 2a \\ -b & a-b & \ldots & 0 & 0 \\ \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -2b & \ldots & a-b & 0 \\ -b & -2b & \ldots & -2b & a-b\end{pmatrix},$$ then multiplied the first column by 2 and subtracted the second one $$\frac{1}{2}\cdot\begin{pmatrix} a-b & a-b & \ldots & a-b & 2a \\ -a-b & a-b & \ldots & 0 & 0 \\ \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 & -2b & \ldots & a-b & 0 \\ 0
|
We can write $$A=\begin{pmatrix} a & b & \ldots & b & b \\ -b & a & \ldots & b & b \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -b & -b & \ldots & a & b \\ -b & -b & \ldots & -b & a \end{pmatrix}=aI_n+bX$$ , where $$X=\begin{pmatrix} 0 & 1 & \ldots & 1 & 1 \\ -1 & 0 & \ldots & 1 & 1 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ -1 & -1 & \ldots & 0 & 1 \\ -1 & -1 & \ldots & -1 & 0 \end{pmatrix}$$ Now note that, if $\alpha$ is an eigenvalue of $X$ , then $a+b\alpha$ is an eigenvalue of $A$ . Can you take it forward?
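A quick numerical confirmation of the hint (a numpy sketch; the values of $n$, $a$, $b$ are arbitrary):

```python
import numpy as np

n, a, b = 5, 2.0, 3.0
X = np.triu(np.ones((n, n)), 1) - np.tril(np.ones((n, n)), -1)  # +1 above, -1 below
A = a * np.eye(n) + b * X

# If alpha runs over the eigenvalues of X, then det(A) = prod(a + b*alpha).
alphas = np.linalg.eigvals(X)
print(np.prod(a + b * alphas).real, np.linalg.det(A))  # the two numbers agree
```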
|
|linear-algebra|abstract-algebra|matrices|determinant|problem-solving|
| 0
|
Is $(e^{i \lambda B_t + \frac{1}{2}\lambda^2t})_{t\geq 0}$ a martingale?
|
Showing that $(e^{\lambda B_t - \frac{1}{2}\lambda^2t})_{t\geq 0}$ is an $\mathbb{R}$ -valued martingale. Let $B$ be a standard $\mathbb{R}$ -valued Brownian motion and let $\lambda\in\mathbb{R}$ . From $B_t-B_s$ being independent of $\mathcal{F}_s$ and $B_t-B_s \sim \mathcal{N}\left(0, t-s\right)$ follows $\mathbb{E}\left[e^{\lambda\left(B_t - B_s\right)}\vert \mathcal{F}_s\right] = \mathbb{E}\left[e^{\lambda\left(B_t - B_s\right)}\right] = e^{\frac{1}{2}\lambda^2\left(t-s\right)}$ . Because of $B_t = B_t - B_0 \sim \mathcal{N}\left(0, t\right)$ we also have $\mathbb{E}\left[e^{\lambda B_t}\right] = e^{\frac{1}{2}\lambda^2 t}$ . $(e^{\lambda B_t - \frac{1}{2} \lambda^2 t})_{t\geq 0}$ is an adapted process because $B$ is. $e^{\lambda B_t - \frac{1}{2}\lambda^2t}$ is integrable: $$\mathbb{E}\left[\left\lvert e^{\lambda B_t - \frac{1}{2}\lambda^2t}\right\rvert\right] = \mathbb{E}\left[e^{\lambda B_t}\right]\cdot e^{-\frac{1}{2}\lambda^2t} = 1 < \infty.$$ The fact that $B_s$ is $\mathcal{F}_s$ -measurable
|
$\mathbb{E}\left[e^{c\left(B_t - B_s\right)}\right] = e^{\frac{1}{2}c^2\left(t-s\right)}$ does hold for $c\in\mathbb{C}$ . The proofs are correct.
|
|complex-numbers|expected-value|brownian-motion|martingales|moment-generating-functions|
| 1
|
Showing that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ without substitution?
|
Show that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ I would sub, but are there alternative methods to consider?
|
Notice that the question wants us to prove $$x\left(y-x\right)=1$$ and we also know the standard result, $$\sec^{2}\theta=1+\tan^{2}\theta$$ $$\sec^{2}\theta-\tan^{2}\theta=1$$ $$\left(\sec\theta+\tan\theta\right)\left(\sec\theta-\tan\theta\right)=1$$ Now the LHS is exactly $x\left(y-x\right)$ . You know the line of thought used here, and thus the proof is complete.
|
|trigonometry|
| 0
|
Showing that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ without substitution?
|
Show that $x = \sec\theta + \tan\theta$ and $y = 2\sec\theta$ satisfy $x^{2}-xy+1=0$ I would sub, but are there alternative methods to consider?
|
Since $x^{2}-xy+1=0 \iff xy = x^2+1 \iff y=\frac{x^2+1}{x} = x+\frac{1}{x}$ , the curve has infinitely many parametrizations, for example $$x=t, y=t+\frac{1}{t}$$ or $$x=\tan(t), y=\tan(t)+\frac{1}{\tan(t)}$$ Whatever you choose for $x$ , there will be a corresponding function for $y$ , so you will have to do a substitution at some point to check that the specific parametrization you were given works.
|
|trigonometry|
| 0
|
The associated Lie algebra of the unital associative algebra $\mathcal{A}$ in the definition of universal enveloping algebras
|
Definition: Let $\mathfrak{g}$ be a Lie algebra. A universal enveloping algebra of $\mathfrak{g}$ is a pair $(\mathfrak{U},i)$ consisting of a unital associative algebra $\mathfrak{U}$ and a Lie morphism $i: \mathfrak{g} \rightarrow \mathfrak{U}_L$ satisfying the following universal property: for every pair $(\sigma, \mathcal{A})$ consisting of a unital associative algebra $\mathcal{A}$ and a Lie algebra morphism $\sigma: \mathfrak{g} \rightarrow \mathcal{A}_L$ there exists a unique unital algebra morphism $\tilde{\sigma}: \mathfrak{U} \rightarrow \mathcal{A}$ such that $\tilde{\sigma} \circ i= \sigma$ . In this definition appears $\mathcal{A}_L$ , the associated Lie algebra of the unital associative algebra $\mathcal{A}$ , but I don't understand how $\mathcal{A}_L$ is constructed from $\mathcal{A}$ . Is there anyone who can explain it to me?
|
For any associative algebra $A$ , its "associated Lie algebra" $A_L$ is the same set, and even vector space, as $A$ , with the Lie bracket $[a,b] := ab-ba$ . Cf. https://math.stackexchange.com/a/4245451/96384
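For concreteness, one can check numerically that this bracket is antisymmetric and satisfies the Jacobi identity (a numpy sketch with random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = (rng.normal(size=(3, 3)) for _ in range(3))

def bracket(x, y):
    return x @ y - y @ x  # the commutator [x, y]

print(np.allclose(bracket(a, b), -bracket(b, a)))        # antisymmetry
jac = (bracket(a, bracket(b, c)) + bracket(b, bracket(c, a))
       + bracket(c, bracket(a, b)))
print(np.allclose(jac, 0))                               # Jacobi identity
```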
|
|lie-algebras|
| 0
|