Dataset Viewer
Auto-converted to Parquet
title
string
question_body
string
answer_body
string
tags
string
accepted
int64
List of interesting math podcasts?
mathfactor is one I listen to. Does anyone else have a recommendation?
More or Less is a BBC Radio 4 programme about maths and statistics in the news, and there is a free podcast. It's presented by Tim Harford , the Undercover Economist from the Financial Times .
|soft-question|big-list|online-resources|
0
How can you prove that the square root of two is irrational?
I have read a few proofs that $\sqrt{2}$ is irrational. I have never, however, been able to really grasp what they were talking about. Is there a simplified proof that $\sqrt{2}$ is irrational?
You use a proof by contradiction. Basically, you suppose that $\sqrt{2}$ can be written as $\dfrac{p}{q}$ . Then you know that $2q^2=p^2$ . As squares of integers, both $q^2$ and $p^2$ have an even number of factors of two. Therefore, $2q^2$ has an odd number of factors of $2$ , which means it can't be equal to $p^2$ .
|elementary-number-theory|radicals|rationality-testing|
1
What Does it Really Mean to Have Different Kinds of Infinities?
Can someone explain to me how there can be different kinds of infinities? I was reading " The man who loved only numbers " by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. Any help would be appreciated.
Suppose no one ever taught you the names for ordinary numbers. Then suppose that you and I agreed that we would trade one bushel of corn for each of my sheep. But there's a problem, we don't know how to count the bushels or the sheep! So what do we do? We form a "bijection" between the two sets. That's just fancy language for saying you pair things up by putting one bushel next to each of the sheep. When we're done we swap. We've just proved that the number of sheep is the same as the number of bushels without actually counting. We can try doing the same thing with infinite sets. So suppose you have the set of positive integers and I have the set of rational numbers and you want to trade me one positive integer for each of my rationals. Can you do so in a way that gets all of my rational numbers? Perhaps surprisingly the answer is yes! You make the rational numbers into a big square grid with the numerator and denominators as the two coordinates. Then you start placing your "bushels" a
|elementary-set-theory|intuition|infinity|faq|
1
What Does it Really Mean to Have Different Kinds of Infinities?
Can someone explain to me how there can be different kinds of infinities? I was reading " The man who loved only numbers " by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. Any help would be appreciated.
The basic concept is thus: A 'countable' infinity is one where you can give each item in the set an integer and 'count' them (even though there are an infinite number of them) An 'uncountable' infinity defies this. You cannot assign an integer to each item in the set because you will miss items. The key to seeing this is using the 'diagonal slash' argument as originally put forward by Cantor. With a countable infinity, you can create a list of all the items in the set and assign each one a different natural number. This can be done with the naturals (obviously) and the complete range of integers (including negative numbers) and even the rational numbers (so including fractions). It cannot be done with the reals due to the diagonal slash argument: Create your list of all real numbers and assign each one an integer Create a real number with the rule that the first digit after the decimal point is different from the first digit of your first number, the second digit is different from the
|elementary-set-theory|intuition|infinity|faq|
0
What is your favorite online graphing tool?
I'm looking for a nice, quick online graphing tool. The ability to link to, or embed the output would be handy, too.
Well, I am not sure where you want to embed the graphs, but Wolfram Alpha is pretty handy for graphing. It has most of the features of Mathematica, can handle 3D functions, and fancy scaling and such. I highly recommend it.
|soft-question|math-software|
0
How are we able to calculate specific numbers in the Fibonacci Sequence?
I was reading up on the Fibonacci Sequence, $1,1,2,3,5,8,13,\ldots $ when I noticed some were able to calculate specific numbers. So far I've only figured out creating an array and counting to the value, which is incredibly simple, but I reckon I can't find any formula for calculating a Fibonacci number based on it's position. Is there a way to do this? If so, how are we able to apply these formulas to arrays?
Wikipedia has a closed-form function called " Binet's formula ". $$F\left(n\right) = {{\varphi^n-(1-\varphi)^n} \over {\sqrt 5}}$$ This is based on the Golden Ratio.
|combinatorics|generating-functions|fibonacci-numbers|
1
How are we able to calculate specific numbers in the Fibonacci Sequence?
I was reading up on the Fibonacci Sequence, $1,1,2,3,5,8,13,\ldots $ when I noticed some were able to calculate specific numbers. So far I've only figured out creating an array and counting to the value, which is incredibly simple, but I reckon I can't find any formula for calculating a Fibonacci number based on it's position. Is there a way to do this? If so, how are we able to apply these formulas to arrays?
The closed form calculation for Fibonacci sequences is known as Binet's Formula .
|combinatorics|generating-functions|fibonacci-numbers|
0
How are we able to calculate specific numbers in the Fibonacci Sequence?
I was reading up on the Fibonacci Sequence, $1,1,2,3,5,8,13,\ldots $ when I noticed some were able to calculate specific numbers. So far I've only figured out creating an array and counting to the value, which is incredibly simple, but I reckon I can't find any formula for calculating a Fibonacci number based on it's position. Is there a way to do this? If so, how are we able to apply these formulas to arrays?
You can use Binet's formula, described at http://mathworld.wolfram.com/BinetsFibonacciNumberFormula.html (see also Wikipedia for a proof: http://en.wikipedia.org/wiki/Binet_formula#Closed_form_expression )
|combinatorics|generating-functions|fibonacci-numbers|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Given (by long division): $\frac{1}{3} = 0.\bar{3}$ Multiply by 3: $3\times \left( \frac{1}{3} \right) = \left( 0.\bar{3} \right) \times 3$ Therefore: $\frac{3}{3} = 0.\bar{9}$ QED.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Indeed this is true. The underlying reason is that decimal numbers are not unique representations of the reals. (Technically, there does exist a bijection between the set of all decimal numbers and the reals, but it is not the natural/obvious one.) Here's a very simple proof: $$\begin{align} \frac13&=0.333\ldots&\hbox{(by long division)}\\ \implies0.333\ldots\times3&=0.999\ldots&\hbox{(multiplying each digit by $3$)} \end{align}$$ Then we already know $0.333\ldots\times3=1$ therefore $0.999\ldots=1$ .
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
What Does it Really Mean to Have Different Kinds of Infinities?
Can someone explain to me how there can be different kinds of infinities? I was reading " The man who loved only numbers " by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. Any help would be appreciated.
Hilbert's Hotel is a classic demonstration.
|elementary-set-theory|intuition|infinity|faq|
0
Can you recommend a decent online or software calculator?
I'm looking for an online or software calculator that can show me the history of items I typed in, much like an expensive Ti calculator. Can you recommend any?
Lots of people like to use InstaCalc which lets you do unit conversions and store intermediate calculations in variables.
|soft-question|big-list|math-software|computer-algebra-systems|
0
What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)?
In mathematics, there seem to be a lot of different types of numbers. What exactly are: Real numbers Integers Rational numbers Decimals Complex numbers Natural numbers Cardinals Ordinals And as workmad3 points out, some more advanced types of numbers (I'd never heard of) Hyper-reals Quaternions Imaginary numbers Are there any other types of classifications of a number I missed?
Natural numbers The "counting" numbers. (That is, all integers, that are one or greater). Whole numbers The Natural numbers, and zero. Integers The Whole numbers, and the negatives of the Natural numbers. Rational numbers Any number that may be expressed by any integer A divided by any integer B, where B is not zero. Irrational numbers Any number that cannot be expressed as a rational number, but is not imaginary. All irrational numbers have an infinite decimal representation. Real numbers All of the Rational and Irrational numbers. Imaginary numbers All Real numbers, multiplied by the square root of negative one. Imaginary numbers are signified by the letter i . Complex numbers Numbers composed of the sum of a Real and an Imaginary number. This includes all Real and all Imaginary numbers.
|terminology|definition|number-systems|
1
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
.999... = 1 because .999... is a concise symbolic representation of "the limit of some variable as it approaches one." Therefore, .999... = 1 for the same reason the limit of x as x approaches 1 equals 1.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Why is the matrix-defined Cross Product of two 3D vectors always orthogonal?
By matrix-defined, I mean $$\left \times\left = \left| \begin{array}{ccc} i & j & k\\ a & b & c\\ d & e & f \end{array} \right|$$ ...instead of the definition of the product of the magnitudes multiplied by the sign of their angle, in the direction orthogonal) If I try cross producting two vectors with no $k$ component, I get one with only $k$, which is expected. But why? As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.
The obvious but slightly trite answer is "because that's just how the cross-product works as an operation". If you're looking for an intuitive reason, the cross-product by definition produces a vector that is orthogonal to the two operand (input) vectors. You know that the base vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are all orthogonal, thus if your two input vectors lie on the $(x, y)$ plane (i.e. only $\mathbf{i}$ and $\mathbf{j}$ components), you know that any orthogonal vector must have only a component in the $z$ direction (multiple of $\mathbf{k}$).
|linear-algebra|matrices|inner-products|orthogonality|cross-product|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
You can visualise it by thinking about it in infinitesimals. The more $9's$ you have on the end of $0.999$, the closer you get to $1$. When you add an infinite number of $9's$ to the decimal expansion, you are infinitely close to $1$ (or an infinitesimal distance away). And this isn't a rigorous proof, just an aid to visualisation of the result.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Can you recommend a decent online or software calculator?
I'm looking for an online or software calculator that can show me the history of items I typed in, much like an expensive Ti calculator. Can you recommend any?
Google's calculator is very powerful: Google Guide Quick Reference: Google Calculator (Cheat Sheet) and your use history will be stored in your browser history.
|soft-question|big-list|math-software|computer-algebra-systems|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
What does it mean when you refer to $.99999\ldots$? Symbols don't mean anything in particular until you've defined what you mean by them . In this case the definition is that you are taking the limit of $.9$, $.99$, $.999$, $.9999$, etc. What does it mean to say that limit is $1$? Well, it means that no matter how small a number $x$ you pick, I can show you a point in that sequence such that all further numbers in the sequence are within distance $x$ of $1$. But certainly whatever number you choose your number is bigger than $10^{-k}$ for some $k$. So I can just pick my point to be the $k$th spot in the sequence. A more intuitive way of explaining the above argument is that the reason $.99999\ldots = 1$ is that their difference is zero. So let's subtract $1.0000\ldots -.99999\ldots = .00000\ldots = 0$. That is, $1.0 -.9 = .1$ $1.00-.99 = .01$ $1.000-.999=.001$, $\ldots$ $1.000\ldots -.99999\ldots = .000\ldots = 0$
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
1
What Does it Really Mean to Have Different Kinds of Infinities?
Can someone explain to me how there can be different kinds of infinities? I was reading " The man who loved only numbers " by Paul Hoffman and came across the concept of countable and uncountable infinities, but they're only words to me. Any help would be appreciated.
A countably infinite set is a set for which you can list the elements: $a_1,a_2,a_3,\ldots$ . For example, the set of all integers is countably infinite since I can list its elements as follows: $$0,1,-1,2,-2,3,-3,\ldots .$$ So is the set of rational numbers, but this is more difficult to see. Let's start with the positive rationals. Can you see the pattern in this listing? $$\frac{1}{1},\frac{1}{2},\frac{2}{1},\frac{1}{3},\frac{2}{2},\frac{3}{1},\frac{1}{4},\frac{2}{3},\frac{3}{2},\frac{4}{1},\frac{1}{5},\frac{2}{4},\ldots .$$ (Hint: Add the numerator and denominator to see a different pattern.) This listing has lots of repeats, e.g. $\dfrac{1}{1}=\dfrac{2}{2}$ and $\dfrac{1}{2}=\dfrac{2}{4}$ . That's ok since I can condense the listing by skipping over any repeats. $$\frac{1}{1},\frac{1}{2},\frac{2}{1},\frac{1}{3},\frac{3}{1},\frac{1}{4},\frac{2}{3},\frac{3}{2},\frac{4}{1},\frac{1}{5},\ldots .$$ Let's write $q_n$ for the $n$ -th element of this list. Then $0,q_1,-q_1,q_2,-q_2,q_3,-q_
|elementary-set-theory|intuition|infinity|faq|
0
Why is the matrix-defined Cross Product of two 3D vectors always orthogonal?
By matrix-defined, I mean $$\left \times\left = \left| \begin{array}{ccc} i & j & k\\ a & b & c\\ d & e & f \end{array} \right|$$ ...instead of the definition of the product of the magnitudes multiplied by the sign of their angle, in the direction orthogonal) If I try cross producting two vectors with no $k$ component, I get one with only $k$, which is expected. But why? As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.
Assuming you know the definition of orthogonal as "a is orthogonal to b iff $a\cdot b=0$ then we could calculate $(a \times b)\cdot a = a_1(a_2b_3-a_3b_2)-a_2(a_1b_3-a_3b_1)-a_3(a_1b_2-a_2b_1)=0$ and $(a \times b)\cdot b-0$, so the cross product is orthogonal to both. As Nold mentioned, if the two vectors a and b lie in the x,y plane, then the orthogonal vectors must be purely in the z direction.
|linear-algebra|matrices|inner-products|orthogonality|cross-product|
1
Can you recommend a decent online or software calculator?
I'm looking for an online or software calculator that can show me the history of items I typed in, much like an expensive Ti calculator. Can you recommend any?
Essentially the most helpful is WolframAlpha , as Ami said, you can use your browser history here too. WolframAlpha can carry out complex equations can comparisons much like a TI Calculator. Additionally they have some areas where you can see the simplification of an equation paired with charts and graphs where possible.
|soft-question|big-list|math-software|computer-algebra-systems|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Suppose this was not the case, i.e. $0.9999... \neq 1$ . Then $0.9999... (I hope we agree on that). But between two distinct real numbers, there's always another one in between, say $x=\frac{0.9999... +1}{2}$ , hence $0.9999... . The decimal representation of $x$ must have a digit somewhere that is not $9$ (otherwise $x = 0.9999...$ ). But that means it's actually smaller, $x , contradicting the definition of $x$ . Thus, the assumption that there's a number between $0.9999...$ and $1$ is false, hence they're equal.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
The cow in the field problem (intersecting circular areas)
What length of rope should be used to tie a cow to an exterior fence post of a circular field so that the cow can only graze half of the grass within that field? updated: To be clear: the cow should be tied to a post on the exterior of the field, not a post at the center of the field.
So, the area of the field is $\pi r^2$ and you want the cow to be able to graze an area equal to half of that. All you need to do is set up the equation ($r_1$ is the radius of the field, $r_2$ is the length of the rope desired): $$\frac{(\pi r_1^2)}{2} = \pi r_2^2$$ You can then simplify it down: $$\frac{r_1^2 }{2} =r_2^2$$ and then taking roots: $$r_2 =\frac{ r_1 }{\sqrt{2}}$$ So you need a rope that is equal to the radius divided by the square root of $2$, and the post can be no closer than this distance to the edge of the field.
|geometry|
0
The cow in the field problem (intersecting circular areas)
What length of rope should be used to tie a cow to an exterior fence post of a circular field so that the cow can only graze half of the grass within that field? updated: To be clear: the cow should be tied to a post on the exterior of the field, not a post at the center of the field.
Let the total area of the field = $A$. We know $A = \pi R^2$ where $R$ = the radius of the field. We want the cow to be able to graze half the area, so we solve for a length of rope $r$ such that $\pi r^2 = A / 2$. This gives: $\pi r^2 = \pi R^2 / 2$, hence $r = R / \sqrt(2)$. In words, the length of the cow's rope should be the radius of the field divided by sqrt(2).
|geometry|
0
Calculating an Angle from $2$ points in space
Given two points $p_1$ , $p_2$ around the origin $(0,0)$ in $2D$ space, how would you calculate the angle from $p_1$ to $p_2$ ? How would this change in $3D$ space?
I will assume that you mean the angle of the line from $p_1$ to $p_2$ with respect to the $x$ -axis This is the best I can do given the information you have provided. In any case, the official mathsy way would be to find the dot product between the two, and divide by the magnitude of $p_1-p_2$ and take the arccossine. $$ \begin{aligned} v &= (\text{normalized vector from } p_1 \text{ to } p_2) \\ \theta &= \arccos( v \cdot \langle1,0\rangle) \qquad\qquad\qquad\qquad (\text{dot product}) \end{aligned} $$ You can normalize a vector by dividing every term by the magnitude (length) of the entire vector. For 3D, the same thing applies: $$ \theta = \arccos( v \cdot \langle1,0,0\rangle ) \qquad\qquad (\text{dot product}) $$ You could also possibly mean the angle between the line from the origin to $p_1$ and the line from the origin to $p_2$ . You can do this with dot products, as well; but both vectors must be normalized. $$ \theta = \arccos( a \cdot b ) \qquad\qquad (\text{dot product}) $$ w
|linear-algebra|geometry|
0
Real life usage of Benford's Law
I recently discovered Benford's Law. I find it very fascinating. I'm wondering what are some of the real life uses of Benford's law. Specific examples would be great.
Forensic accountancy is a popular use, and is actually admissible as evidence in the USA.
|soft-question|big-list|statistics|applications|
1
What is an elliptic curve, and how are they used in cryptography?
I hear a lot about Elliptic Curve Cryptography these days, but I'm still not quite sure what they are or how they relate to crypto...
Here is a super nice powerpoint on the subject! http://www.math.brown.edu/~jhs/Presentations/WyomingEllipticCurve.pdf
|cryptography|elliptic-curves|
1
How do the Properties of Relations work?
This is simply not clicking for me. I'm currently learning math during the summer vacation and I'm on the chapter for relations and functions. There are five properties for a relation: Reflexive - $R \rightarrow R$ Symmetrical - $R \rightarrow S$ ; $S \rightarrow R$ Antisymmetrical - $R \rightarrow S$ && ( $R \rightarrow R$ || $S \rightarrow S$ ) Asymmetrical - $R \rightarrow S$ && !( $R \rightarrow R$ || $S \rightarrow S$ ) Transitive - if $R \rightarrow S$ && $S \rightarrow T$ , then $R \rightarrow T$ If that's not what you call the properties in English, then please let me know- I have to study it in German, unfortunately, and these are my rough translations. Continuing on, I just don't know what to do with this information practically. The examples of the book are horrible: "Is the same age as" is apparently reflexive, symmetrical and transitive. "Is related to" is also apparently reflexive, symmetrical and transitive. "Is older than" is asymmetric, antisymmetric and transitive. Th
Asymmetric means simply "not symmetric". So in the binary case, it is NOT the case that if a is related to b, b is related to a. Antisymmetric means that if a is related to b, and b is related to a, a = b. To explain your third example: "is older than" is asymmetric because if Alice is older than Bob, Bob is NOT older than Alice. "is older than" is antisymmetric since if Alice is older than Bob, and Bob is older than Alice, Alice must be Bob because someone must be older (and if this is not the case, Alice simply has two names..). "is older than" is transitive since if Alice is older than Bob, and Bob is older than Charlie, Alice is also older than Charlie. So asymmetric and antisymmetric don't cancel out because the first means it's sort of a one-way relation, whereas the second means, loosely, that if it you reverse the operands and both statements are true, the operands must be the same.
|elementary-set-theory|relations|
0
What is a real number (also rational, decimal, integer, natural, cardinal, ordinal...)?
In mathematics, there seem to be a lot of different types of numbers. What exactly are: Real numbers Integers Rational numbers Decimals Complex numbers Natural numbers Cardinals Ordinals And as workmad3 points out, some more advanced types of numbers (I'd never heard of) Hyper-reals Quaternions Imaginary numbers Are there any other types of classifications of a number I missed?
The natural numbers can be defined by Peano's Axioms (sometimes called the Peano Postulates): Zero is a number. If n is a number, the successor of n is a number. zero is not the successor of a number. Two numbers of which the successors are equal are themselves equal. (induction axiom.) If a set S of numbers contains zero and also the successor of every number in S, then every number is in S. (This definition includes 0 in the natural numbers; altering rules 1, 3, and 5 to refer to one instead of zero excludes 0 from the natural numbers. Whether or not 0 is a natural number varies in various texts.) The whole numbers are the natural numbers with the additive identity element called 0. The integers are the whole numbers and their additive inverses. The rational numbers are numbers that can be expressed as a ratio of an integer to a non-zero integer. The real numbers are the set of numbers that are limits of Cauchy sequences of rational numbers. The irrational numbers are the real number
|terminology|definition|number-systems|
0
List of Interesting Math Blogs
I have the one or other interesting Math blog in my feedreader that I follow. It would be interesting to compile a list of Math blogs that are interesting to read, and do not require research-level math skills. I'll start with my entries: Division By Zero Tanya Khovanova’s Math Blog
Gil Kalai's blog is pretty awesome. Mostly covers combinatorics.
|soft-question|big-list|online-resources|
0
Understanding Dot and Cross Product
What purposes do the Dot and Cross products serve? Do you have any clear examples of when you would use them?
The dot product can be used to find the length of a vector or the angle between two vectors. The cross product is used to find a vector which is perpendicular to the plane spanned by two vectors.
|linear-algebra|inner-products|cross-product|
0
Understanding Dot and Cross Product
What purposes do the Dot and Cross products serve? Do you have any clear examples of when you would use them?
When you deal with vectors, sometimes you say to yourself, "Darn I wish there was a function that..." was zero when two vectors are perpendicular, letting me test perpendicularness." Dot Product would let me find the angle between two vectors." Dot Product (actually gives the cosine of the angle between two normalized vectors) would let me 'project' one vector onto another, or give the length of one vector in the direction of another." Dot Product could tell me how much force is actually helping the object move, when pushing at an angle." Dot Product could tell me how much a vector field is 'spreading out'." Cross Product could give me a vector that is perpendicular to two other vectors." Cross Product could tell me how much torque a force was applying to a rotating system." Cross Product could tell me how much this vector field is 'curling' up." Cross Product There are actually a lot more uses, but the more I study vectors, the more and more I run into a situation where I need a funct
|linear-algebra|inner-products|cross-product|
1
What is an Inner Product Space?
I've learned that the dot product is just one of many possible inner product spaces . Can someone explain this concept? When is it useful to define it as something other than the dot product ?
As for the utility of inner product spaces: They're vector spaces where notions like the length of a vector and the angle between two vectors are available. In this way, they generalize $\mathbb R^n$ but preserve some of its additional structure that comes on top of it being a vector space. Familiar friends like Cauchy-Schwarz, the parallelogram rule, and orthogonality all work in inner product spaces. (Note that there is a more general class of spaces, normed spaces, where notions of length make sense always, but an inner product cannot necessarily be defined.) The dot product is the standard inner product on $\mathbb R^n$. In general, any symmetric, positive definite matrix will give you an inner product on $\mathbb C^n$. And you can have inner products on infinite dimensional vector spaces, like $$ \langle \, f, \, g \, \rangle = \int_a^b \ f(x)\overline{g(x)} \, dx$$ for $f, g$ square-integrable functions on $[a,b]$. This becomes useful, for example, in applications like Fourier se
|linear-algebra|vector-spaces|inner-products|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
\begin{align} x &= 0.999... \\ 10x &= 9.999... \\ &= 9 + 0.999... \\ &= 9 + x \\ 10x - x &= (9 + x) - x \\ (10 - 1)x &= 9 + (x - x) \\ 9x &= 9 \\ x &= 1 \end{align}
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
What are some good ways to get children excited about math?
I'm talking in the range of 10-12 years old, but this question isn't limited to only that range. Do you have any advice on cool things to show kids that might spark their interest in spending more time with math? The difficulty for some to learn math can be pretty overwhelming. Do you have any teaching techniques that you find valuable?
Graph theory! It's essentially connecting the dots, but with theorems working wonders behind the scenes for when they're old enough. Simple exercises like asking how many colors you need to color the faces or vertices of a graph are often fun (so I hear). (Also, most people won't believe the 4-color theorem.)
|big-list|education|
1
Why is the matrix-defined Cross Product of two 3D vectors always orthogonal?
By matrix-defined, I mean $$\left \times\left = \left| \begin{array}{ccc} i & j & k\\ a & b & c\\ d & e & f \end{array} \right|$$ ...instead of the definition of the product of the magnitudes multiplied by the sign of their angle, in the direction orthogonal) If I try cross producting two vectors with no $k$ component, I get one with only $k$, which is expected. But why? As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.
Note that if you replace $i$, $j$, and $k$ with $m$, $n$, and $p$, the determinant becomes the dot-product of the vector $(m, n, p)$ with the cross-product of the two original vectors. If $(m, n, p) = (a, b, c)$ or $(m, n, p) = (d, e, f)$, the determinant is zero (any matrix with two identical rows has determinant zero), so the dot product of $(a, b, c)$ or $(d, e, f)$ with the cross-product is zero. Hence, $(a, b, c)$ and $(d, e, f)$ are orthogonal to their cross-product.
|linear-algebra|matrices|inner-products|orthogonality|cross-product|
0
What are the differences between rings, groups, and fields?
Rings, groups, and fields all feel similar. What are the differences between them, both in definition and in how they are used?
You're right to think that the definitions are very similar. The main difference between groups and rings is that rings have two binary operations (usually called addition and multiplication) instead of just one binary operation. If you forget about multiplication, then a ring becomes a group with respect to addition (the identity is 0 and inverses are negatives). This group is always commutative! If you forget about addition, then a ring does not become a group with respect to multiplication. The binary operation of multiplication is associative and it does have an identity 1, but some elements like 0 do not have inverses. (This structure is called a monoid.) A commutative ring is a field when all nonzero elements have multiplicative inverses. In this case, if you forget about addition and remove 0, the remaining elements do form a group under multiplication. This group is again commutative. A division ring is a (not necessarily commutative) ring in which all nonzero elements have mul
|terminology|abstract-algebra|
0
Online resources for learning Mathematics
Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware , which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
Two good general references: Wikipedia MathWorld
|reference-request|online-resources|
0
How do you calculate the semi-minor axis of an ellipsoid?
Given the semi-major axis and a flattening factor, is it possible to calculate the semi-minor axis?
Possibly something like this. Correct me if I'm wrong. $j$ = semi-major $n$ = semi-minor $e$ = eccentricity $n = \sqrt{(j\sqrt{1 - e^{2}}) \times (j(1 - e^{2}))}$
|geometry|
1
Online resources for learning Mathematics
Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware , which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
Khan Academy, http://www.khanacademy.org/ You'll find tons of explanatory videos from various branches of mathematics; plus, each subject is explained pretty good, and the videos are easy to follow
|reference-request|online-resources|
0
Online resources for learning Mathematics
Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware , which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
A useful one for undergraduate level maths is Mathcentre . It has useful background material for people studying maths, or who need some maths background for other courses.
|reference-request|online-resources|
0
List of Interesting Math Blogs
I have the one or other interesting Math blog in my feedreader that I follow. It would be interesting to compile a list of Math blogs that are interesting to read, and do not require research-level math skills. I'll start with my entries: Division By Zero Tanya Khovanova’s Math Blog
Not a pure math blog, but it's one of the most fascinating blogs in my RSS. Futility Closet
|soft-question|big-list|online-resources|
0
What are some good ways to get children excited about math?
I'm talking in the range of 10-12 years old, but this question isn't limited to only that range. Do you have any advice on cool things to show kids that might spark their interest in spending more time with math? The difficulty for some to learn math can be pretty overwhelming. Do you have any teaching techniques that you find valuable?
If they're fairly mathematically inclined anyway, then try to get them solving interesting problems with an obvious mathematical content, if they're less mathematically inclined try to find problems where the usage of maths isn't as explicit. Problems with a very mathematical bent can be found at places like NRich , they update their problems monthly, the Stage 2 and 3 problems cover that age range. Other sources for problems could be video games (resource management based games require mathematical thinking), code breaking or programming a game (a simple driving game in Flash requires a lot of maths based problem solving). If they've convinced themselves that they can't do Maths then you basically need to smuggle the maths into your chosen activity without them realising that they're doing maths to solve the problem.
|big-list|education|
0
What are some good ways to get children excited about math?
I'm talking in the range of 10-12 years old, but this question isn't limited to only that range. Do you have any advice on cool things to show kids that might spark their interest in spending more time with math? The difficulty for some to learn math can be pretty overwhelming. Do you have any teaching techniques that you find valuable?
This really depend on how smart the kid is. I lean toward discrete math, elementary number theory related topics when talking to non-math people about math. They requires little background knowledge. There are some fun problems in discrete math, especially combinatorics. Simple probability is also nice. So are logic problem. Both topics can be used to formulate some simple puzzles. A simple number theory puzzle How many zeros are there in 20! I assume a bright 10 year old can solve it.
|big-list|education|
0
Faulty logic when summing large integers?
This is in relation to the Euler Problem $13$ from http://www.ProjectEuler.net . Work out the first ten digits of the sum of the following one-hundred $50$-digit numbers. $37107287533902102798797998220837590246510135740250$ Now, this was my thinking: I can freely discard the last fourty digits and leave the last ten. $0135740250$ And then simply sum those. This would be large enough to be stored in a $64$-bit data-type and a lot easier to compute. However, my answer isn't being accepted, so I'm forced to question my logic. However, I don't see a problem. The last fourty digits will never make a difference because they are at least a magnitude of $10$ larger than the preceding values and therefore never carry backwards into smaller areas. Is this not correct?
First you are doing it in the wrong end, second, the statement in general is still not correct. for example: 9999999999 1000000001 Say if you want the first 2 digits, you will get 10 if you discard the last 2 digit and do the sum. The right answer is 11
|arithmetic|project-euler|
0
Faulty logic when summing large integers?
This is in relation to the Euler Problem $13$ from http://www.ProjectEuler.net . Work out the first ten digits of the sum of the following one-hundred $50$-digit numbers. $37107287533902102798797998220837590246510135740250$ Now, this was my thinking: I can freely discard the last fourty digits and leave the last ten. $0135740250$ And then simply sum those. This would be large enough to be stored in a $64$-bit data-type and a lot easier to compute. However, my answer isn't being accepted, so I'm forced to question my logic. However, I don't see a problem. The last fourty digits will never make a difference because they are at least a magnitude of $10$ larger than the preceding values and therefore never carry backwards into smaller areas. Is this not correct?
If you were supposed to find the last ten digits, you could just ignore the first 40 digits of each number. However you're supposed to find the first ten digits, so that doesn't work. And you can't just ignore the last digits of each number either because those can carry over.
|arithmetic|project-euler|
1
Real world uses of Quaternions?
I've recently started reading about Quaternions, and I keep reading that for example they're used in computer graphics and mechanics calculations to calculate movement and rotation, but without real explanations of the benefits of using them. I'm wondering what exactly can be done with Quaternions that can't be done as easily (or easier) using more tradition approaches, such as with Vectors?
You can view a real-world example of quaternions in computer graphics with the open source program known as NASA WorldWind (http://worldwind.arc.nasa.gov/java/). It uses a Quaternion object to represent rotation of various geometries. The class definition itself is located in the src/gov/nasa/worldwind/geom/Quaternion.java file.
|soft-question|big-list|linear-algebra|applications|quaternions|
0
The cow in the field problem (intersecting circular areas)
What length of rope should be used to tie a cow to an exterior fence post of a circular field so that the cow can only graze half of the grass within that field? updated: To be clear: the cow should be tied to a post on the exterior of the field, not a post at the center of the field.
The field is the smaller/left circle, centered at A. The cow is tied to the post at E. The larger/right circle is the grazing radius. Let the radius of the field be R and the length of the rope be L. The grazable area is the union of a segment of the circular field and a segment of the circle defined by the rope length. (A segment of a circle is a sector of a circle less the triangle defined by the center of the circle and the endpoints of the arc.) The area of a segment of a circle of radius $R$ with central angle $t$ is $\frac{1}{2}R^2(t-\sin(t))$, where $t$ is measured in radians. In order to express the grazable area in terms of $R$ and one angle, we consider the angles ∠CED and ∠CAD (which define the segments of the circles; call these α and β for convenience) and the triangle CEF. Let $\theta$ be ∠EFC. $2\theta$ is an inscribed angle for the central angle $\beta$ over the same arc, making $\beta = 4\theta$. The sum of angles in triangle CEF is $\theta + \pi/2 +\alpha/2=\pi$ or $\
|geometry|
1
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
One argument against this is that 0.99999999... is "somewhat" less than 1. How much exactly? 1 - 0.999999... = ε (0) If the above is true, the following also must be true: 9 × (1 - 0.999999...) = ε × 9 Let's calculate: 0.999... × 9 = ─────────── 8.1 81 81 . . . ─────────── 8.999... Thus: 9 - 8.999999... = 9ε (1) But: 8.999999... = 8 + 0.99999... (2) Indeed: 8.00000000... + 0.99999999... = ──────────────── 8.99999999... Now let's see what we can deduce from (0) , (1) and (2) . 9 - 8.999999... = 9ε because of (2) 9 - 8.999999... = 9 - (8 + 0.99999...) = because of (1) = 9 - 8 - (1 - ε) because of (0) = 1 - 1 + ε = ε. Thus: 9ε = ε 8ε = 0 ε = 0 1 - 0.999999... = ε = 0 Quod erat demonstrandum. Pardon my unicode.
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
Assuming: infinite decimals are series where the terms are the digits divided by the proper power of the base the infinite geometric series $a + a \cdot r + a \cdot r^2 + a \cdot r^3 + \cdots$ has sum $\dfrac{a}{1 - r}$ as long as $|r| $$0.99999\ldots = \frac{9}{10} + \frac{9}{10^2} + \frac{9}{10^3} + \cdots$$ This is the infinite geometric series with first term $a = \frac{9}{10}$ and common ratio $r = \frac{1}{10}$, so it has sum $$\frac{\frac{9}{10}}{1 - \frac{1}{10}} = \frac{\frac{9}{10}}{\frac{9}{10}} = 1.$$
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Chased by a lion and other pursuit-evasion problems
I am looking for a reference (book or article) that poses a problem that seems to be a classic, in that I've heard it posed many times, but that I've never seen written anywhere: that of the possibility of a man in a circular pen with a lion, each with some maximum speed, avoiding capture by that lion. References to pursuit problems in general would also be appreciated, and the original source of this problem.
Here is a book on this type of problem Paul J. Nahin, Chases and escapes: the mathematics of pursuit and evasion , Princeton University Press, 2007. it is also briefly mentioned in his other book "Euler's Fabulous Formula".
|reference-request|differential-games|
0
Online resources for learning Mathematics
Not sure if this is the place for it, but there are similar posts for podcasts and blogs, so I'll post this one. I'd be interested in seeing a list of online resources for mathematics learning. As someone doing a non-maths degree in college I'd be interested in finding some resources for learning more maths online, most resources I know of tend to either assume a working knowledge of maths beyond secondary school level, or only provide a brief summary of the topic at hand. I'll start off by posting MIT Open Courseware , which is a large collection of lecture notes, assignments and multimedia for the MIT mathematics courses, although in many places it's quite incomplete.
The following reddit post has a decent list of math resources: http://www.reddit.com/r/math/comments/bqbex/lets_list_all_the_useful_free_online_math/ One site I did not see it their list that I've found very helpful: http://betterexplained.com/
|reference-request|online-resources|
0
Why is $1$ not a prime number?
Why is $1$ not considered a prime number? Or, why is the definition of prime numbers given for integers greater than $1$?
One of the whole "points" of defining primes is to be able to uniquely and finitely prime factorize every natural number. If 1 was prime, then this would be more or less impossible.
|abstract-algebra|elementary-number-theory|ring-theory|prime-numbers|terminology|
1
Real world uses of hyperbolic trigonometric functions
I covered hyperbolic trigonometric functions in a recent maths course. However I was never presented with any reasons as to why (or even if) they are useful. Is there any good examples of their uses outside academia?
If you take a rope, fix the two ends, and let it hang under the force of gravity, it will naturally form a hyperbolic cosine curve.
|soft-question|big-list|applications|hyperbolic-functions|
0
How would you describe calculus in simple terms?
I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
Calculus is a field which deals with two seemingly unrelated things. (1) the area beneath a graph and the x-axis. (2) the slope (or gradient) of a curve at different points. Part (1) is also called 'integration' and 'anti-differentiation', and part (2) is called 'differentiation'.
|soft-question|calculus|
0
How would you describe calculus in simple terms?
I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
To be very brief and succinct: Calculus is the study of how quantities change Slightly more technically, it a subject based on infinitesimals . It may be pointing out the obvious, but the Wikipedia article does actually provide a pretty decent beginners introduction to the subject. You'll generally want to start with differential calculus and move on quickly to integral calculus , followed by linking up the two (fundamental theorem of calculus) and moving on from there.
|soft-question|calculus|
0
What are some classic fallacious proofs?
If you know it, also try to include the precise reason why the proof is fallacious. To start this off, let me post the one that most people know already: Let $a = b$. Then $a^2 = ab$ $a^2 - b^2 = ab - b^2$ Factor to $(a-b)(a+b) = b(a-b)$ Then divide out $(a-b)$ to get $a+b = b$ Since $a = b$, then $b+b = b$ Therefore $2b = b$ Reduce to $2 = 1$ As @jan-gorzny pointed out, in this case, line 5 is wrong since $a = b$ implies $a-b = 0$, and so you can't divide out $(a-b)$.
Wikipedia has a long list of these: http://en.wikipedia.org/wiki/Mathematical_fallacy
|soft-question|big-list|fake-proofs|
1
List of interesting math podcasts?
mathfactor is one I listen to. Does anyone else have a recommendation?
I listen to Math Mutation Podcast . The topics are interesting and understandable by a layman.
|soft-question|big-list|online-resources|
0
How do the Properties of Relations work?
This is simply not clicking for me. I'm currently learning math during the summer vacation and I'm on the chapter for relations and functions. There are five properties for a relation: Reflexive - $R \rightarrow R$ Symmetrical - $R \rightarrow S$ ; $S \rightarrow R$ Antisymmetrical - $R \rightarrow S$ && ( $R \rightarrow R$ || $S \rightarrow S$ ) Asymmetrical - $R \rightarrow S$ && !( $R \rightarrow R$ || $S \rightarrow S$ ) Transitive - if $R \rightarrow S$ && $S \rightarrow T$ , then $R \rightarrow T$ If that's not what you call the properties in English, then please let me know- I have to study it in German, unfortunately, and these are my rough translations. Continuing on, I just don't know what to do with this information practically. The examples of the book are horrible: "Is the same age as" is apparently reflexive, symmetrical and transitive. "Is related to" is also apparently reflexive, symmetrical and transitive. "Is older than" is asymmetric, antisymmetric and transitive. Th
I'd like to change the notation of your definitions, since $R$, $S$ and $T$ would usually be used to stand for the relations themselves (and $x, y$ and $z$ would be more commonly chosen for the objects that might bear the relation to each other). Reflexive - For all $x: xRx$ Example reflexive relation: $xRy$ stands for '$x$ is a factor of $y$' (in the set of natural numbers) Symmetric - For all $x,y$: if $xRy$ then $yRx$ Example symmetric relation: $xRy$ stands for '$x$ and $y$ are $2$ metres apart' (in the set of all people in a particular room) Antisymmetric - For all $x,y$: if $xRy$ and $yRx$ then $x = y$ Example antisymmetric relation: $xRy$ stands for '$x$ is a factor of $y$' (in the set of natural numbers) Asymmetric - For all $x,y$: if $xRy$ then not $yRx$ Example asymmetric relation: $xRy$ stands for '$x$ is taller than $y$' (in the set of all people) Transitive - For all $x,y,z$: if $xRy$ and $yRz$ then $xRz$ Example transitive relation: $xRy$ stands for '$x$ is taller than $y
|elementary-set-theory|relations|
1
What is an elliptic curve, and how are they used in cryptography?
I hear a lot about Elliptic Curve Cryptography these days, but I'm still not quite sure what they are or how they relate to crypto...
The technical definition is a nonsingular projective curve of genus 1, which is an abelian variety under the group law: basially, this means that you draw the line through two points on the curve -- which can be embedded in the projective plane -- and find where that line intersects the curve again (and call that the negative of the sum). We can always put elliptic curves in the (projectivization of the) form $y^2 = x^3 - Ax + B$. So, the meaning of "abelian variety" is that you can add points on the elliptic curve, which is really useful; there isn't a way to do this for most objects in algebraic geometry. Then one can study things like the torsion points on an elliptic curve, with respect to this abelian group structure: it's a theorem that there are $m^2$ torsion points of order $m$ for instance, if you 're working in an algebraically closed field. In fact, one way to think of this is that an elliptic curve is really--algebraically and topologically--a torus if you are working over
|cryptography|elliptic-curves|
0
Why is $x^0 = 1$ except when $x = 0$?
Why is any number (other than zero) to the power of zero equal to one? Please include in your answer an explanation of why $0^0$ should be undefined.
For non-zero bases and exponents, the relation $ x^a x^b = x^{a+b} $ holds. For this to make sense with an exponent of $ 0 $ , $ x^0 $ needs to equal one. This gives you: $\displaystyle x^a \cdot 1 = x^a\cdot x^0 = x^{a+0} = x^a $ When the base is also zero, it's not possible to define a value for $0^0$ because there is no value that is consistent with all the necessary constraints. For example, $0^x = 0$ and $x^0 = 1$ for all positive $x$ , and $0^0$ can't be consistent with both of these. Another way to see that $0^0$ can't have a reasonable definition is to look at the graph of $f(x,y) = x^y$ which is discontinuous around $(0,0)$ . No chosen value for $0^0$ will avoid this discontinuity.
|definition|exponentiation|
1
Why is $x^0 = 1$ except when $x = 0$?
Why is any number (other than zero) to the power of zero equal to one? Please include in your answer an explanation of why $0^0$ should be undefined.
$$0^x = 0, \quad x^0=1$$ both are true when $x>0$. What happens when $x=0$? It is undefined because there is no way to chose one definition over the other. Some people define $0^0 = 1$ in their books, like Knuth, because $0^x$ is less 'useful' than $x^0$.
|definition|exponentiation|
0
Why is $x^0 = 1$ except when $x = 0$?
Why is any number (other than zero) to the power of zero equal to one? Please include in your answer an explanation of why $0^0$ should be undefined.
This is a question of definition, the question is "why does it make sense to define $x^0=1$ except when $x=0$?" or "How is this definition better than other definitions?" The answer is that $x^a \cdot x^b = x^{a+b}$ is an excellent formula that makes a lot of sense (multiplying $a$ times and then multiplying $b$ times is the same as multiplying $a+b$ times) and which you can prove for $a$ and $b$ positive integers. So any sensible definition of $x^a$ for numbers $a$ which aren't positive integers should still satisfy this identity . In particular, $x^0 \cdot x^b = x^{0+b} = x^b$; now if $x$ is not zero then you can cancel $x^b$ from both sides and get that $x^0 = 1$. But if $x=0$ then $x^b$ is zero and so this argument doesn't tell you anything about what you should define $x^0$ to be. A similar argument should convince you that when $x$ is not zero then $x^{-a}$ should be defined as $1/x^a$. An argument using the related identity $(x^a)^b = x^{ab}$ should convince you that $x^{1/n}$ i
|definition|exponentiation|
0
Why is $x^0 = 1$ except when $x = 0$?
Why is any number (other than zero) to the power of zero equal to one? Please include in your answer an explanation of why $0^0$ should be undefined.
Exponents are only "basically" defined under the natural numbers above zero. By this I mean, defined as "iterated multiplication" the same way multiplication is defined as iterated addition. The property $a^0 = 1$ only arises when we look at generalizing multiplication to the integers. We do this by: \begin{align} a^4 / a^3 &= (a\cdot a\cdot a\cdot a)/(a\cdot a\cdot a) = a^1\\ a^4 / a^3 &= a^{4-3} = a^1 \end{align} And using this, we can say: $$a^2 / a^3 = a^{-1} = 1/a$$ and also: \begin{align} a^2 / a^2 &= 1\\ a^2 / a^2 &= a^{2-2} = a^0 = 1 \end{align} So we say $a^0 = 1$. However, notice that these proofs don't have any meaning when $a=0$, because the whole concept/idea involves fractions, and you cannot have zero be in the denominator. When we say $2^0 = 1$, we really mean: $$ 2^{1-1} = 2^1 / 2^1 = 2/2 = 1$$ But we cannot say the same for $0^0$: $$0^{1-1} = 0^1/0^1=0/0=\text{UNDEFINED}$$
|definition|exponentiation|
0
How do I cut a square in half?
I have a square that's $10\mathrm{m} \times 10\mathrm{m}$. I want to cut it in half so that I have a square with half the area. But if I cut it from top to bottom or left to right, I don't get a square, I get a rectangle! I know the area of the small square is supposed to be $50\mathrm{m}^{2}$, so I can use my calculator to find out how long a side should be: it's $7.07106781\mathrm{m}$. But my teacher said I should be able to do this without a calculator. How am I supposed to get that number by hand?
Take a pair of compasses and draw an arc between two opposite corners, centred at another corner; then draw a diagonal that bisects the arc. If you now draw two lines from the point of intersection, parallel to the sides of the square, the biggest of the resulting squares will have half the area of the original square. Here's an illustration:
|geometry|
0
How would you describe calculus in simple terms?
I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
One of the greatest achievements of human civilization is Newton's laws of motions. The first law says that unless a force is acting then the velocity (not the position!) of objects stay constant, while the second law says that forces act by causing an acceleration (though heavy objects require more force to accellerate). However to make sense of those laws and to apply them to real life you need to understand how to move between the following three notions: Position Velocity (that is the rate of change in position) Acceleration (that is the rate of change of the velocity) Moving down that list is called "taking the derivative" while moving up that list is called "taking the integral." Calculus is the study of derivatives and integerals. In particular, if you want to figure out how objects move under some force you need to be able to integrate twice. This requires understanding a lot of calculus! In a first semester class you usually learn about derivatives and integrals of functions o
|soft-question|calculus|
0
Why is $1$ not a prime number?
Why is $1$ not considered a prime number? Or, why is the definition of prime numbers given for integers greater than $1$?
The main point of talking about prime numbers is Euclid's theorem that every positive integer can be written uniquely as a product of primes. As Justin remarks, this would break horribly if $1$ were considered prime, for example we could factor $2$ as $2\times1\times1\times1\times1\times1$. Instead we say that $1$ is not a prime, but it is the product of zero primes (see Why is $x^0 = 1$ except when $x = 0$? to understand why any prime multiplied by itself $0$ times is $1$) so Euclid's theorem works out nicely!
|abstract-algebra|elementary-number-theory|ring-theory|prime-numbers|terminology|
0
Are there any functions that are (always) continuous yet not differentiable? Or vice-versa?
It seems like functions that are continuous always seem to be differentiable, to me. I can't imagine one that is not. Are there any examples of functions that are continuous, yet not differentiable? The other way around seems a bit simpler -- a differentiable function is obviously always going to be continuous. But are there any that do not satisfy this?
It's easy to find a function which is continuous but not differentiable at a single point, e.g. $f(x) = |x|$ is continuous but not differentiable at $0$. Moreover, there are functions which are continuous but nowhere differentiable, such as the Weierstrass function. On the other hand, continuity follows from differentiability, so there are no differentiable functions which aren't also continuous. If a function is differentiable at $x$, then the limit $(f(x+h)-f(x))/h$ must exist (and be finite) as $h$ tends to 0, which means $f(x+h)$ must tend to $f(x)$ as $h$ tends to 0, which means $f$ is continuous at $x$.
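To make the $f(x) = |x|$ example concrete (a standard computation, added for clarity): the two one-sided difference quotients at $0$ disagree, $$\lim_{h\to 0^+}\frac{|h| - |0|}{h} = \lim_{h\to 0^+}\frac{h}{h} = 1, \qquad \lim_{h\to 0^-}\frac{|h| - |0|}{h} = \lim_{h\to 0^-}\frac{-h}{h} = -1,$$ so the limit defining $f'(0)$ does not exist, even though $f$ is continuous at $0$.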
|real-analysis|continuity|
1
Real world uses of Quaternions?
I've recently started reading about Quaternions, and I keep reading that for example they're used in computer graphics and mechanics calculations to calculate movement and rotation, but without real explanations of the benefits of using them. I'm wondering what exactly can be done with Quaternions that can't be done as easily (or easier) using more tradition approaches, such as with Vectors?
To understand the benefits of using quaternions you have to consider different ways to represent rotations. Here are a few ways with a summary of the pros and cons: Euler angles, rotation matrices, axis-angle, quaternions, rotors (normalized spinors). Euler angles are the best choice if you want a user to specify an orientation in an intuitive way. They are also space efficient (three numbers). However, it is more difficult to linearly interpolate values. Consider the case where you want to interpolate between 359 and 0 degrees: linearly interpolating would cause a large rotation, even though the two orientations are almost the same. Writing shortest-path interpolation is easy for one axis, but non-trivial when considering the three Euler angles together (for instance the shortest route between (240, 57, 145) and (35, -233, -270) is not immediately clear); a sketch of the one-axis case follows below. Rotation matrices specify a new frame of reference using three normalized and orthogonal vectors (Right, Up, Out, which when multiplied become the
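A minimal Python sketch of the one-axis interpolation issue mentioned above (my own illustration, not from the answer; the function names are made up): naive linear interpolation between 359° and 0° sweeps almost the whole circle, while a shortest-path version first wraps the difference into $[-180°, 180°)$.

```python
def lerp_angle_naive(a, b, t):
    """Plain linear interpolation of two angles given in degrees."""
    return a + (b - a) * t

def lerp_angle_shortest(a, b, t):
    """Interpolate along the shortest arc by wrapping the difference into [-180, 180)."""
    diff = (b - a + 180.0) % 360.0 - 180.0
    return (a + diff * t) % 360.0

# Halfway between 359 and 0 degrees:
print(lerp_angle_naive(359, 0, 0.5))     # 179.5 -- swings half way around the circle
print(lerp_angle_shortest(359, 0, 0.5))  # 359.5 -- stays next to both inputs
```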
|soft-question|big-list|linear-algebra|applications|quaternions|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
Quantum mechanics, and hence physics and everything around us, fundamentally involves complex numbers.
|soft-question|complex-numbers|education|philosophy|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
I'll start by pointing out that a whole host of things that people think of as 'real' are on shakier ground than imaginary numbers. Given that quantum mechanics predicts a fundamental limit to how granular reality is, the whole concept of real numbers is on very shaky ground, yet people accept those as fine. I'd therefore suggest that it is merely a case of familiarity - people are less familiar with complex numbers than with some other mathematical constructs. As for an actual existence outside the realms of pure maths... your best bet is to look at quantum mechanics again. This area has some fascinating results that are only possible through the use of imaginary numbers. Incidentally, fundamental particles are the place in nature that gave a 'physicality' to negative numbers (the charge of an electron is negative) well after they were accepted as normal by most people.
|soft-question|complex-numbers|education|philosophy|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
The concept of mathematical numbers and "existing" is a tricky one. What actually "exists"? Do negative numbers exist? Of course they do not. You can't have a negative number of apples. Yet, the beauty of negative numbers is that when we define them (rigorously), all of a sudden we can use them to solve problems we were never able to solve before, or we can solve them in a much simpler way. Imagine trying to do simple physics without the idea of negative numbers! But are they "real"? Do they "exist"? No, they don't. But they are tools that help us solve real-life problems. To go back to your question about complex numbers, I would say that whether they exist or not has no bearing on whether they are actually useful in solving the problems of everyday life, or in making those problems many, many times easier to solve. The math that makes your computer run involves the tool that is complex numbers, for instance.
|soft-question|complex-numbers|education|philosophy|
0
How can you prove that a function has no closed form integral?
In the past, I've come across statements along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations: addition/subtraction multiplication/division raising to powers and roots trigonometric functions exponential functions logarithmic functions which when differentiated gives the function $f(x)$ . I've heard this said about the function $f(x) = x^x$ , for example. What sort of techniques are used to prove statements like this? What is this branch of mathematics called? Merged with " How to prove that some functions don't have a primitive " by Ismael : Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions. I wonder how we can prove that kind of assertion?
It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$ , $g$ non-constant, the antiderivative of $$f(x)\exp(g(x)) \, \mathrm dx$$ can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution of $$f = h' + hg'$$ $e^{x^2}$ is another classic example of such a function with no elementary antiderivative. I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: https://ksda.ccny.cuny.edu/PostedPapers/liouv06.pdf Liouville's original paper: Liouville, J. " Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients ." J. Math. Pure Appl. 3, 523-546, 1838. Michael Spivak's book on Calculus also has a section with a discussion of this.
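As a sketch of how the criterion applies to $e^{x^2}$ (my own outline of the standard argument, not taken from the answer above): here $f = 1$ and $g = x^2$, so an elementary antiderivative exists only if some rational function $h$ satisfies $$1 = h' + 2xh.$$ If $h$ were a nonzero polynomial of degree $n$, the right-hand side would have degree $n+1 \ge 1$ and could not equal the constant $1$; and if $h$ had a pole somewhere, then $h'$ would have a pole of strictly higher order there that the term $2xh$ could not cancel, so again the right-hand side could not be constant. Hence no rational $h$ exists, and $\int e^{x^2}\,dx$ is not elementary.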
|real-analysis|calculus|integration|faq|differential-algebra|
1
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
There are geometric interpretations of imaginary numbers where they are thought of as parallelograms with a front and back, or oriented parallelograms. That interpretation requires geometric algebra but only uses real numbers. Here is a link: http://en.wikipedia.org/wiki/Geometric_algebra#Complex_numbers That doesn't have any pictures so it is admittedly not intuitive, but the answer is yes. Whether you think of imaginary numbers as square root of negative 1 or as parallelogram with a front and back, they exist.
|soft-question|complex-numbers|education|philosophy|
0
Why is $1$ not a prime number?
Why is $1$ not considered a prime number? Or, why is the definition of prime numbers given for integers greater than $1$?
It's important to understand that this is not something that can be proved: it's a definition. We choose not to regard 1 as a prime number, simply because it makes writing lots of theorems much easier. Noah gives the best example in his answer: Euclid's theorem that every positive integer can be written uniquely as a product of primes. If 1 were defined to be a prime number, then we'd have to change that theorem to: "every positive integer can be written uniquely as a product of primes, except for arbitrarily many extra factors of 1". So we choose to go with the easier path of defining 1 not to be a prime.
|abstract-algebra|elementary-number-theory|ring-theory|prime-numbers|terminology|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
You may be interested to read the MathOverflow question "Demystifying Complex Numbers," here . A teacher is asking how to motivate complex numbers to students taking complex analysis.
|soft-question|complex-numbers|education|philosophy|
0
Why is the volume of a sphere $\frac{4}{3}\pi r^3$?
I learned that the volume of a sphere is $\frac{4}{3}\pi r^3$ , but why? The $\pi$ kind of makes sense because its round like a circle, and the $r^3$ because it's 3-D, but $\frac{4}{3}$ is so random! How could somebody guess something like this for the formula?
Pappus's centroid theorem (second theorem) says that the volume of a solid formed by revolving a region about an axis is the product of the area of the region and the distance traveled by the centroid of the region when it is revolved. A sphere can be formed by revolving a semicircle about its diameter edge. The area of the semicircle is $\frac{1}{2}\pi r^2$. The centroid of the semicircle lies on its line of symmetry (the line perpendicular to the diameter edge through the center of the semicircle), at a distance of $\frac{4r}{3\pi}$ from the diameter edge (a verification is sketched below). When revolved about the diameter edge of the semicircle, the centroid travels $2\pi\cdot\frac{4r}{3\pi} = \frac{8}{3}\cdot r$, so the volume of the sphere is $\frac{1}{2}\pi r^2\cdot\frac{8}{3}\cdot r = \frac{4}{3}\pi r^3$.
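The verification left to the reader can be filled in with a short integral: placing the semicircle as $\{(x,y) : x^2 + y^2 \le r^2,\ y \ge 0\}$, the height of the centroid above the diameter edge is $$\bar{y} = \frac{1}{\tfrac{1}{2}\pi r^2}\int_{-r}^{r}\int_{0}^{\sqrt{r^2 - x^2}} y\,dy\,dx = \frac{2}{\pi r^2}\int_{-r}^{r}\frac{r^2 - x^2}{2}\,dx = \frac{1}{\pi r^2}\cdot\frac{4r^3}{3} = \frac{4r}{3\pi}.$$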
|geometry|volume|solid-geometry|spheres|
0
How would you describe calculus in simple terms?
I keep hearing about this weird type of math called calculus. I only have experience with geometry and algebra. Can you try to explain what it is to me?
Calculus is the mathematics of change. In algebra, almost nothing ever changes. Here's a comparison of some algebra vs. calc problems: algebra: car A is driving at 50 kph. How far has it gone after 6 hours? calc: car B starts at 10 kph and begins accelerating at the rate of 10 kph^2 (kilometers per hour per hour). How far has car B gone after 6 hours? (A worked version is given below.) Note how in the algebra problem nothing changes, whereas in the calc problem the speed of the car is constantly changing. calc: If a ball is rolling in a straight line at 10 fps with a diameter of 1 foot and Q is the point at the top of the ball when t=0, how fast is point Q moving at time t=4 relative to the ground? The speed of the point in relation to the ground is never the same (it's zero when it's at the bottom, 20 fps when it's at the top). Calculus lets you figure out how fast it's going exactly at a specific moment. There are two main branches of calculus, differential and integral. These problems pertain to differential calculus as th
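A worked version of the car B problem (added as an illustration, using the standard constant-acceleration formula): with initial speed $v_0 = 10$ kph and acceleration $a = 10$ kph$^2$, the distance covered after $t = 6$ hours is $$d = v_0 t + \tfrac{1}{2} a t^2 = 10\cdot 6 + \tfrac{1}{2}\cdot 10\cdot 6^2 = 60 + 180 = 240 \text{ km},$$ which is exactly what you get by integrating the speed $v(t) = 10 + 10t$ from $0$ to $6$.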
|soft-question|calculus|
0
How do I cut a square in half?
I have a square that's $10\mathrm{m} \times 10\mathrm{m}$. I want to cut it in half so that I have a square with half the area. But if I cut it from top to bottom or left to right, I don't get a square, I get a rectangle! I know the area of the small square is supposed to be $50\mathrm{m}^{2}$, so I can use my calculator to find out how long a side should be: it's $7.07106781\mathrm{m}$. But my teacher said I should be able to do this without a calculator. How am I supposed to get that number by hand?
Does this give you any ideas?
|geometry|
1
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
We will first consider the most common definition of $i$, as the square root of $-1$. When you first hear this, it sounds crazy. $0$ squared is $0$; a positive times a positive is positive and a negative times a negative is positive too. So there doesn't actually appear to be any number that we can square to get $-1$. A mathematician would collectively term $0$, negative numbers and positive numbers the real numbers. They would also define the term complex numbers as a collection of numbers that includes these real numbers. So while we have shown that no real number can square to get $-1$, we haven't even defined complex numbers at this point, so we can't rule out that one might have this property. At this point, it makes sense to ask what a mathematician means by a number. It certainly isn't what most people associate it with - an abstract representation of some kind of real-world quantity. We need to understand that it isn't uncommon for one word to have different meanings for
|soft-question|complex-numbers|education|philosophy|
0
Why is "the set of all sets" a paradox, in layman's terms?
I've heard of some other paradoxes involving sets (i.e., "the set of all sets that do not contain themselves") and I understand how paradoxes arise from them. But this one I do not understand. Why is "the set of all sets" a paradox? It seems like it would be fine, to me. There is nothing paradoxical about a set containing itself. Is it something that arises from the "rules of sets" that are involved in more rigorous set theory?
An informal explanation is Russell's Paradox. The wiki page is informative; here's the relevant quote: Let us call a set "abnormal" if it is a member of itself, and "normal" otherwise. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal". On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal". Now we consider the set of all normal sets, R. Attempting to determine whether R is normal or abnormal is impossible: If R were a normal set, it would be contained in the set of normal sets (itself), and therefore be abnormal; and if it were abnormal, it would not be contained in the set of normal sets (itself), and therefore be normal. This leads to the conclusion that R is both normal and abnormal: Russell's paradox.
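In symbols, the quoted argument is very short: let $R = \{x : x \notin x\}$ be the set of all "normal" sets. Asking whether $R$ is a member of itself gives $$R \in R \iff R \notin R,$$ which is a contradiction.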
|paradoxes|logic|set-theory|
0
What are the differences between rings, groups, and fields?
Rings, groups, and fields all feel similar. What are the differences between them, both in definition and in how they are used?
I won't explain what a ring or a group is, because that's already been done, but I'll add something else. One reason groups and rings feel similar is that they are both "algebraic structures" in the sense of universal algebra. So for instance, the operation of quotienting via a normal subgroup (for a group) and a two-sided ideal (for a ring) are basically instances of quotienting via an invariant equivalence relation in universal algebra. A field, by contrast, is not really a construction of universal algebra (because the operation $x \to x^{-1}$ is not everywhere defined) -- which is why free fields don't exist, for instance -- though they are a special case of rings.
|terminology|abstract-algebra|
0
What are some good ways to get children excited about math?
I'm talking in the range of 10-12 years old, but this question isn't limited to only that range. Do you have any advice on cool things to show kids that might spark their interest in spending more time with math? The difficulty for some to learn math can be pretty overwhelming. Do you have any teaching techniques that you find valuable?
I have found most people liked math at some point, but something happened in their learning process that made them feel so stupid that they became disenchanted with mathematics. What tends to happen is that students are presented with some mathematical result they are expected to memorize by rote, which takes all the joy out of mathematics and prevents them from approaching mathematics intuitively. So I would first try to zero in on what they don't like and what parts of mathematics they have had to take on faith. You might not have to excite them if you help them learn mathematics intuitively. As per the suggestions, I would show them how mathematical equations make pretty shapes in Processing.
|big-list|education|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
No number does "really exist" the way trees or atoms exist. In physics people however have found use for complex numbers just as they have found use for real numbers.
|soft-question|complex-numbers|education|philosophy|
0
Is it true that $0.999999999\ldots=1$?
I'm told by smart people that $$0.999999999\ldots=1$$ and I believe them, but is there a proof that explains why this is?
If you take two distinct real numbers x and y, then by the density of the real numbers there is a third real number z strictly between them. For x = 0.99999... and y = 1 you can't find such a z, and therefore 0.99999... = 1.
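One way to support the claim that no such $z$ exists (added as a supplementary calculation): for every $n$, $$1 - \underbrace{0.99\ldots9}_{n\text{ nines}} = 10^{-n},$$ so the difference $1 - 0.99999\ldots$ is smaller than every positive number. The only non-negative real number with that property is $0$, which leaves no room for a number strictly between $0.99999\ldots$ and $1$.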
|real-analysis|algebra-precalculus|real-numbers|decimal-expansion|
0
Recasting points from one vector space to another
I have a collection of 3D points in the standard $x$, $y$, $z$ vector space. Now I pick one of the points $p$ as a new origin and two other points $a$ and $b$ such that $a - p$ and $b - p$ form two vectors of a new vector space. The third vector of the space I will call $x$ and calculate that as the cross product of the first two vectors. Now I would like to recast or reevaluate each of the points in my collection in terms of the new vector space. How do I do that? (Also, if 'recasting' not the right term here, please correct me.)
What you are describing is an Affine Transformation, which is a linear transformation followed by a translation. We know this because any straight line in your original vector space is also going to be a straight line in your transformed vector space.
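A minimal NumPy sketch of the coordinate change itself (my own illustration, not part of the answer; the variable names and sample data are made up): put the three new basis vectors into the columns of a matrix, then solve a linear system for each point.

```python
import numpy as np

# Points given in the standard x, y, z coordinates (example data).
points = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])

p = np.array([0.0, 0.0, 1.0])   # chosen new origin
a = np.array([1.0, 0.0, 1.0])   # a - p is the first new basis vector
b = np.array([0.0, 1.0, 1.0])   # b - p is the second new basis vector

u = a - p
v = b - p
w = np.cross(u, v)              # third basis vector, as described in the question

B = np.column_stack([u, v, w])  # columns are the new basis vectors

# Coordinates of each point in the new frame: solve B @ c = (point - p).
new_coords = np.linalg.solve(B, (points - p).T).T
print(new_coords)
```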
|linear-algebra|vector-spaces|
0
How does the wheel paradox work?
I keep looking at this picture and its driving me crazy. How can the smaller circle travel the same distance when its circumference is less than the entire wheel?
If the two circles are fixed together, then they will travel the same distance, but at different velocities: the ratio of the radii is equal to the ratio of the speeds at which points on the two circles travel. If you tried to repeat this by putting two different-sized circles on separate tracks and making them spin with the same angular velocity while covering the same distance, you would notice that one of the circles has to slide/slip along its track in order to keep the pace.
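A quick calculation of the slip (added for concreteness): if the outer radius is $R$ and the inner radius is $r$, one full revolution carries both circles a horizontal distance of $2\pi R$, but the inner circle's circumference is only $2\pi r$, so relative to its own track it must slip by $$2\pi R - 2\pi r = 2\pi(R - r)$$ per revolution.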
|geometry|
0
Distribution of primes?
Do primes become more or less frequent as you go further out on the number line? That is, are there more or fewer primes between $1$ and $1{,}000{,}000$ than between $1{,}000{,}000$ and $2{,}000{,}000$? A proof or pointer to a proof would be appreciated.
From the Wikipedia article about the prime number theorem : Roughly speaking, the prime number theorem states that if a random number nearby some large number N is selected, the chance of it being prime is about 1 / ln(N), where ln(N) denotes the natural logarithm of N. For example, near N = 10,000, about one in nine numbers is prime, whereas near N = 1,000,000,000, only one in every 21 numbers is prime. In other words, the average gap between prime numbers near N is roughly ln(N).
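Applying the $N/\ln N$ approximation to the intervals in the question (rough figures from the formula, not exact prime counts): $$\pi(10^6) \approx \frac{10^6}{\ln 10^6} \approx 72{,}000, \qquad \pi(2\cdot 10^6) \approx \frac{2\cdot 10^6}{\ln(2\cdot 10^6)} \approx 138{,}000,$$ so the second million contains roughly $138{,}000 - 72{,}000 = 66{,}000$ primes: fewer than the first million, but only slightly fewer. Primes thin out, and they do so very slowly.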
|number-theory|prime-numbers|
1
Why is "the set of all sets" a paradox, in layman's terms?
I've heard of some other paradoxes involving sets (i.e., "the set of all sets that do not contain themselves") and I understand how paradoxes arise from them. But this one I do not understand. Why is "the set of all sets" a paradox? It seems like it would be fine, to me. There is nothing paradoxical about a set containing itself. Is it something that arises from the "rules of sets" that are involved in more rigorous set theory?
The "set of all sets" is not so much a paradox in itself as something that inevitably leads to a contradiction, namely the well-known (and referenced in the question) Russell's paradox. Given any set and a predicate applying to sets, the set of all things satisfying the predicate should be a subset of the original set. If the "set of all sets" were to exist, because self-containment and non-self-containment are valid predicates, the set of all sets not containing themselves would have to exist as a set in order for our set theory to be consistent. But this "set of all sets" cannot exist in a consistent set theory because of the Russel paradox. So the non-existence of the "set of all sets" is a consequence of the fact that presuming it's existence would lead to the contradiction described by Russel's paradox. This is in fact the origin of Russel's paradox. In his work "The Basic Laws of Arithmetic", Gottlob Frege had taken as a postulate the existence of this "set of all sets". In a let
|paradoxes|logic|set-theory|
0
Aren't constructive math proofs more "sound"?
Since constructive mathematics allows us to avoid things like Russell's Paradox, then why don't they replace traditional proofs? How do we know the "regular" kind of mathematics are free of paradox without a proof construction?
The distinction between constructive mathematics and traditional mathematics has nothing to do with Russell's Paradox. Constructive mathematics simply requires working with one fewer basic postulate that many mathematicians have believed to be sensible and on which some proofs are based, namely the law of the excluded middle (and with it unrestricted use of the Axiom of Choice, which implies excluded middle).
|proof-theory|constructive-mathematics|
0
Can you recommend a decent online or software calculator?
I'm looking for an online or software calculator that can show me the history of items I typed in, much like an expensive Ti calculator. Can you recommend any?
I use R these days. It was built to be a calculator and does the job well. Its syntax might be a bit strange but it allows you to do a lot with little typing.
|soft-question|big-list|math-software|computer-algebra-systems|
0
Aren't constructive math proofs more "sound"?
Since constructive mathematics allows us to avoid things like Russell's Paradox, then why don't they replace traditional proofs? How do we know the "regular" kind of mathematics are free of paradox without a proof construction?
A whole bunch of things in mathematics are inherently nonconstructive. For instance, invariant theory--recall the famous quote by Gordan that Hilbert's mathematics was "theology." (A quote which, I believe, was in jest.) The Hahn-Banach theorem, a fundamental tool in functional analysis (and a great tool for proving all sorts of results, like approximation results--Runge's theorem, the Stone-Weierstrass theorem, and more) relies on the axiom of choice, and is consequently nonconstructive. The fact that any proper ideal in a ring is contained in a maximal ideal is frequently used in algebra, and yet it needs the axiom of choice. The use of ultraproducts in logic (or the construction of hyperreal numbers) is inherently nonconstructive: you can't just exhibit a nonprincipal ultrafilter on the natural numbers. Basically, a lot of mathematics just doesn't work without Zorn's lemma, and this is equivalent to the axiom of choice.
|proof-theory|constructive-mathematics|
0
Is there possibly a largest prime number?
Prime numbers are numbers with no factors other than one and itself. Factors of a number are always lower or equal to than a given number; so, the larger the number is, the larger the pool of "possible factors" that number might have. So the larger the number, it seems like the less likely the number is to be a prime. Surely there must be a number where, simply, every number above it has some other factors. A "critical point" where every number larger than it simply will always have some factors other than one and itself. Has there been any research as to finding this critical point, or has it been proven not to exist? That for any $n$ there is always guaranteed to be a number higher than $n$ that has no factors other than one and itself?
Euclid's famous proof is as follows: Suppose there is a finite number of primes. Let $x$ be the product of all of these primes. Then look at $x+1$. It is clear that $x$ is coprime to $x+1$. Therefore, no nontrivial factor of $x$ is a factor of $x+1$, but every prime is a factor of $x$. By the fundamental theorem of arithmetic, $x+1$ admits a prime factorization, and by the above remark, none of these prime factors can be a factor of $x$, but $x$ is the product of all primes. This is a contradiction.
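A concrete illustration of the argument (added; it is not part of the proof itself): if the supposedly complete list of primes were $2, 3, 5, 7, 11, 13$, then $$x + 1 = 2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13 + 1 = 30031 = 59\cdot 509,$$ and both $59$ and $509$ are primes missing from the list, exactly as the contradiction predicts.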
|number-theory|prime-numbers|
1
Is there possibly a largest prime number?
Prime numbers are numbers with no factors other than one and itself. Factors of a number are always lower or equal to than a given number; so, the larger the number is, the larger the pool of "possible factors" that number might have. So the larger the number, it seems like the less likely the number is to be a prime. Surely there must be a number where, simply, every number above it has some other factors. A "critical point" where every number larger than it simply will always have some factors other than one and itself. Has there been any research as to finding this critical point, or has it been proven not to exist? That for any $n$ there is always guaranteed to be a number higher than $n$ that has no factors other than one and itself?
According to XKCD , we have the following Haiku: Top Prime's Divisors' Product (Plus one)'s factors are...? Q.E.D B@%&$ I wonder if we can edit it to make it correct
|number-theory|prime-numbers|
0
How can you prove that a function has no closed form integral?
In the past, I've come across statements along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations: addition/subtraction multiplication/division raising to powers and roots trigonometric functions exponential functions logarithmic functions which when differentiated gives the function $f(x)$ . I've heard this said about the function $f(x) = x^x$ , for example. What sort of techniques are used to prove statements like this? What is this branch of mathematics called? Merged with " How to prove that some functions don't have a primitive " by Ismael : Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions. I wonder how we can prove that kind of assertion?
Brian Conrad explains this in the following: Impossibility theorems on integration in elementary terms (archived PDF )
|real-analysis|calculus|integration|faq|differential-algebra|
0
Will this procedure generate random points uniformly distributed within a given circle? Proof?
Consider the task of generating random points uniformly distributed within a circle of a given radius $r$ that is centered at the origin. Assume that we are given a random number generator $R$ that generates a floating point number uniformly distributed in the range $[0, 1)$. Consider the following procedure: Generate a random point $p = (x, y)$ within a square of side $2r$ centered at the origin. This can be easily achieved by: a. Using the random number generator $R$ to generate two random numbers $x$ and $y$, where $x, y \in [0, 1)$, and then transforming $x$ and $y$ to the range $[0, r)$ (by multiplying each by $r$). b. Flipping a fair coin to decide whether to reflect $p$ around the $x$-axis. c. Flipping another fair coin to decide whether to reflect $p$ around the $y$-axis. Now, if $p$ happens to fall outside the given circle, discard $p$ and generate another point. Repeat the procedure until $p$ falls within the circle. Is the previous procedure correct? That is, are the random
Yes this will work; it's called rejection sampling. Even better is to generate a point in polar coordinates though: pick $\theta$ from $[0, 2\pi)$ and $r^2$ from $[0, R^2]$ (i.e. multiply $R$ by the square root of a random number in $[0, 1]$; without the square root it is non-uniform).
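A small Python sketch of both approaches (my own illustration; the function names are made up): rejection sampling as described in the question, and the polar method from this answer.

```python
import math
import random

def sample_rejection(radius):
    """Draw points in the bounding square until one lands inside the circle."""
    while True:
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return x, y

def sample_polar(radius):
    """Pick theta uniformly and r = radius * sqrt(u) so the density is uniform in area."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = radius * math.sqrt(random.random())
    return r * math.cos(theta), r * math.sin(theta)

print(sample_rejection(1.0))
print(sample_polar(1.0))
```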
|algorithms|probability-theory|
0
Mathematical subjects you wish you learned earlier
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier. Additionally I feel the same way about enumerative combinatorics. What are some less popular mathematical subjects that you think should be more popular?
I don't really think that graph theory is a "less popular mathematical subject," but I certainly wish I had been exposed to it earlier.
|soft-question|learning|
0
Mathematical subjects you wish you learned earlier
I am learning geometric algebra, and it is incredible how much it helps me understand other branches of mathematics. I wish I had been exposed to it earlier. Additionally I feel the same way about enumerative combinatorics. What are some less popular mathematical subjects that you think should be more popular?
Theory of computation, information theory and logic/foundations of mathematics are very interesting topics. I wish I had known them earlier. They are not unpopular (almost every university has a bunch of ToC people in the CS department...), but many math majors I know have never touched them. They show you the limits of mathematics, computation and communication. Logic shows there are things that can't be proved from a set of axioms even if they are true--Gödel's incompleteness theorem. There are other interesting theorems in the foundations of mathematics, like the independence of the continuum hypothesis from ZFC. Theory of computation showed me things that are not computable, and problems that take exponential time or exponential space, no matter what kind of algorithm you come up with. Information theory proves the minimum amount of information required to reconstruct some other information. It pops up in unexpected places. There is a proof that there are infinitely many primes by information theory (Sorry I can't fin
|soft-question|learning|
0
Do complex numbers really exist?
Complex numbers involve the square root of negative one, and most non-mathematicians find it hard to accept that such a number is meaningful. In contrast, they feel that real numbers have an obvious and intuitive meaning. What's the best way to explain to a non-mathematician that complex numbers are necessary and meaningful, in the same way that real numbers are? This is not a Platonic question about the reality of mathematics, or whether abstractions are as real as physical entities, but an attempt to bridge a comprehension gap that many people experience when encountering complex numbers for the first time. The wording, although provocative, is deliberately designed to match the way that many people actually ask this question.
Are real numbers "real"? It's not even computationally possible to compare two real numbers for equality! Interestingly enough, it is shown in Abstract Algebra courses that the idea of complex numbers arises naturally from the idea of real numbers - you could not say, for instance, that the real numbers are valid but the complex numbers aren't (whatever your definition of valid is...)
|soft-question|complex-numbers|education|philosophy|
0
How does the wheel paradox work?
I keep looking at this picture and its driving me crazy. How can the smaller circle travel the same distance when its circumference is less than the entire wheel?
That picture confuses things by making it look as though the red line is being "unwound" from the circle like paper towel being unwound from a roll. Our brains pick up on that, since it is a real-world example. Both circles complete a single revolution, and both travel the same distance from left to right. If these really were rolls of paper towel, the smaller roll would have to spin faster (and therefore complete more than one full revolution) in order to lay out the same length of paper towel as the larger roll. Alternatively, if the two rolls were spinning at the same rate, the free end of the strip of towel left behind by the smaller roll would also move to the right. In short, the image is a kind of optical/mental illusion, and you're not going crazy :)
|geometry|
1

Math StackExchange Curated (Parquet, CC BY-SA 4.0)

This dataset is a curated collection of Math StackExchange (MSE) Q&A pairs packaged in Parquet format.
Each sample contains a problem (title, question_body), its corresponding answer (answer_body), the original MSE tag string (tags), and a flag indicating whether the answer was accepted (accepted).

This dataset includes content derived from the Math StackExchange public data dump (CC BY-SA 4.0, © Stack Exchange Inc.).
This derived dataset is released under CC BY-SA 4.0 in accordance with the license terms.


Files

  • math_stackexchange_train.parquet
  • math_stackexchange_val.parquet
  • math_stackexchange_test.parquet

Schema

Field           Type     Description
title           string   Title of the Math StackExchange question.
question_body   string   Full body of the question (often contains LaTeX/MathJax).
answer_body     string   The selected answer included in this dataset.
tags            string   Pipe-delimited tag string, e.g. `|geometry|` or `|soft-question|calculus|`.
accepted        int64    1 if the answer was accepted; 0 otherwise.

License & Attribution

License: CC BY-SA 4.0

Required attribution:
"This dataset includes content derived from the Math StackExchange public data dump (CC BY-SA 4.0, © Stack Exchange Inc.)."

Full license: https://creativecommons.org/licenses/by-sa/4.0/


Changelog

v1.0 — Initial release (Parquet train/val/test)
