Some context for those who didn't go to CMU:
15-251 Great Theoretical Ideas in CS is the freshman spring-semester discrete math & theoretical CS course (a follow-up to the discrete math course they take freshman fall). The course goes for breadth over depth, and in this respect it serves as a solid foundation for all the theory classes students take going forward. It's a class where students learn proof techniques, collaboration, and problem solving.
The course is often praised for its rigor. The problem sets require a fair amount of work to complete, and the course staff sets a high standard for responses.
On the other hand, it commonly receives criticism for being quite intense as a freshman class. Many CMU students look back on it as one of the hardest classes they took.
I personally feel quite lucky to have taken 251. It gave me a foundation to appreciate a lot of deeper CS topics, and taking it freshman year meant I had 3 years more to put it to use.
I was just about to comment that the topics seem really intense. I got the impression that Carnegie Mellon is an excellent university with very high standards (much better than Harvard, Stanford, Berkeley, etc., which are prestigious and have lots of professors doing great research, but whose actual scientific education is good rather than extraordinary, and whose courses, from what I've heard, are not very hard).
It has happened multiple times that I googled some relatively obscure topic and found multiple excellent sources from Carnegie Mellon professors. For example, there is this excellent document [1] by Jonathan Shewchuk about the conjugate gradient method, and you will find as many as four lectures about DRAM memory on YouTube.
My friends had him, and I hear Jonathan Shewchuk is indeed an excellent teacher. However, he has never been a CMU professor. In 1994, when your link was dated, he was getting his PhD at CMU, and since 1998 he's been a professor at UC Berkeley (where I went): https://people.eecs.berkeley.edu/~jrs/
I have no doubts about CMU's excellence, but I found Berkeley's undergrad CS education to be quite good. There certainly were research professors who were bad teachers, but many, like Shewchuk, were outstanding teachers, and there were also several Teaching Professors whose job description is to focus more on teaching than research, like Dan Garcia, Paul Hilfinger, John DeNero, and Brian Harvey (since retired): https://www2.eecs.berkeley.edu/Faculty/Lists/faculty.html
I too have heard that private schools like Harvard and Stanford are not very hard once you get in, just very hard to get into (though I can't speak from experience). I haven't heard people say that about Berkeley, which is a public school, so it doesn't have shareholders among whom alumni can form a controlling majority, and it's funded more by the state than by tuition. That certainly wasn't my experience, or that of anyone I know.
I also studied EECS at Berkeley and had Shewchuk.
There's this mentality at Berkeley that the EECS classes are the hardest and most competitive in the country (and that private schools just give everyone A's), but after meeting many CMU graduates I realized that its program is similarly rigorous. The classes might not be as competitive, but I don't think that makes them worse.
I wonder what kind of retention there is. Is one day of class studying quantum computation going to be particularly useful to people when they study it 6 months or 2 years later?
I've lately studied a bit of program design when it comes to working out. A lot of effort goes into designing workout programs: making them efficient, trying to get the maximum results from the least amount of effort. There's a clear progression and a clear way of measuring progress. Every exercise, stretch, and movement is supposed to serve a particular function that best moves you toward your goal. Particular muscle sets are targeted at particular intensities for particular intervals on particular days. There's a reason for every specific decision that gets made.
You can contrast this with the aimless way a lot of people work out. They go to the gym, do whatever exercises or machines seem like a good idea at the moment, and hope it increases their fitness in some non-specific way. Lacking good metrics for measuring effectiveness, they often judge a workout by how sore it makes them the next day (an extremely poor measure of effectiveness). It's not that this approach can't improve your health; it's just extremely inefficient compared to a more goal-oriented, progression-based system.
In my experience most universities follow the aimless second approach - throwing what they can at the students, hoping some of it sticks, and judging the courses based on difficulty (this usually becomes even more obvious when professors explain their courses). I don't think I've come across any study measuring the effectiveness and metrics of different programs and progression paths.
It's a bit more than it seems, on the surface. You're not just expected to pick up what the professors lecture about, but to take it and apply it to a sampling of fairly advanced problems. You end up doing a lot of learning on your own (or in a small group) while completing the homework.
To use your gym analogy, it's like focusing on one muscle group per week. On Monday, you're shown proper form and technique for a squat, but by the following Monday, you'd better be able to squat significantly more than you could a week ago (and you'll get graded on it). Are you going to become a competitive squat weight lifter? Absolutely not. Do you now know proper form, and understand what it takes to train properly? Almost certainly.
I'd also note two things:
- Most topics covered were useful in later courses.
- A lot of the credit for this course's success can be given as much to really, really dedicated TAs as it can to the professors.
(Note: I took this course my freshman year, it was blisteringly hard, and I wouldn't have traded anything for it.)
Probably, one day of lecture on the material would be useless. But the weekly homework assignments force you to develop a better understanding. I frequently remember topics, assignments, and even specific questions from this course when they are relevant to something I'm working on now.
Why were other mathematicians at the time so nasty towards Cantor's ideas?
The quotes noted in the PDF just sound nasty and dismissive (or maybe this is not the whole story).
I think, as with many paradigm-changing and major ideas, its novelty required a change in the prevailing thinking of the epoch, which always makes people uncomfortable. According to Wikipedia, most objections were to Cantor's notion of transfinite numbers: https://en.wikipedia.org/wiki/Georg_Cantor
Seems like the notion of "completed infinity" which sort of gives infinity ontological status (? I'm no mathematician) did not sit well with some--I guess I could understand why--it definitely isn't an intuitive notion:
"Before Cantor, the notion of infinity was often taken as a useful abstraction which helped mathematicians reason about the finite world; for example the use of infinite limit cases in calculus. The infinite was deemed to have at most a potential existence, rather than an actual existence.[18] "Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already".[19] Carl Friedrich Gauss's views on the subject can be paraphrased as: 'Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics'.[20] In other words, the only access we have to the infinite is through the notion of limits, and hence, we must not treat infinite sets as if they have an existence exactly comparable to the existence of finite sets."
I guess this shows that the overall pragmatic value of your work may eventually win out against formal refutations.
It may be that a more deep-seated change has taken place within mathematics: the rise of formalism, with Frege, Russell, etc. separated mathematics from intuitions about 'reality'; a formal theorem can't be refuted by saying it's 'unrealistic', since it's a watertight consequence of the chosen axioms and deduction rules. We can attack those axioms and deduction rules as being 'unrealistic', but since Goedel shot down the attempts to define "one true" formalism, it seems that mathematicians have come to accept axioms and logics based on their own merits (e.g. whether they lead to interesting, non-trivial consequences) rather than whether they're accurate models of 'reality'.
In this relaxed view, Cantor's arguments make sense from his premises, and it's interesting to entertain the idea of his premises because they lead to such interesting results. Whether or not "completed infinities" are "real" is delegated to the philosophers, while mathematicians 'play the proof game', and change the rules if they make it more fun.
You are highly underestimating the mental leaps required to understand infinity pre-Cantor. It's not surprising at all that people found it ridiculous and got especially angry about it, because they couldn't really explain why it was wrong, despite it seeming like complete BS to them.
It's like telling a layperson (or even many mathematics students, for that matter) that 0.999999... = 1. They understand the proofs, but it's so against their intuition that there is a visceral negative reaction to it.
There is no such thing as 0.9... There is 0.9[k], for k as large as you'd like. However, no matter which k you pick, I can squirrel a number between 0.9[k] and 1, for example 0.9[k]5. Your alleged proof is wrong :p
Usually 0.9... is defined as the limit (from calculus) of the sequence 0.9, 0.99, 0.999, 0.9999, etc. So it's the unique number that the sequence gets arbitrarily close to (even if it never reaches it), which is exactly 1. With the proper definition in place, the statement becomes a lot less interesting, in my opinion, but regardless, I'm pretty sure the statement "0.9... = 1" is well-accepted in the math community.
I agree that all of the proofs you normally see in high school are flawed in one way or another because they don't properly define what "..." means in the first place.
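To make the limit definition concrete, here's a small Python sketch (my own illustration, not from the thread) showing that the partial sums 0.9, 0.99, 0.999, … get arbitrarily close to 1. It uses exact rational arithmetic so floating-point rounding can't muddy the picture:

```python
# Partial sums of 9/10 + 9/100 + ... approach 1.
# Fraction gives exact rational arithmetic, avoiding float rounding.
from fractions import Fraction

def partial_sum(n):
    """Sum of 9/10^k for k = 1..n, i.e. 0.99...9 with n nines."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in [1, 2, 5, 10]:
    gap = 1 - partial_sum(n)
    print(n, gap)  # the gap is exactly 1/10^n

# Since 1/10^n can be made smaller than any epsilon > 0 by taking n
# large enough, the limit of the sequence is exactly 1 -- which is
# what the notation 0.999... denotes.
```

Note that no individual partial sum equals 1; only the limit does, which is the distinction the parent comment is drawing.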
I may be out of my depth, but I disagree. When I see … I read “and so on”, so when my brain sees 0.9… it writes a function which first produces 0.9, then 0.99, then 0.999, and keeps going and never stops; its return type is the bottom type, to use type-theoretic jargon. I say 'produces', but the function never returns, so if there were some way to sample where the function was at… (heh, that's an ordinary ellipsis). Or rather, say it is a function which the first time returns 0.9, the next time returns 0.99 at a higher resolution, then 0.999, and so on.
If I'm wrong then I'm wrong for this reason. Mathematicians are using the symbol … ambiguously. Because in some places in math it really does just mean “and so on“ and nothing more. Or more pedantically “and keep going in such a manner following that pattern“.
If, as you say, in instances like this mathematicians take … to mean “the limit (from calculus) of the demonstrated sequence” this is news to me. Well, maybe it's not news to me but I'm not exactly buying it.
What I'm saying is that the expression could be taken to be one of any number of things (not sure if I ought to qualify that with the phrase 'mutually exclusive'): (1) the (symbolic representation of the) sequence, (2) the most recently generated term, (3) the generator (4) the limit (from calculus) of the sequence. That you say the convention is that it most assuredly is 4 is not something I'm buying into. (I'd nearly go a step further and assert that it ought not be defined this way).
Where we get into problems, I believe, is when people say nonsensical things like the sequence reaches 1. Or if they start talking about the actually completed sequence rather than a symbolic representation of it. And this, by the way, is the gripe that people have with Cantor's completed infinities. It really is an ontological claim, not a mathematical one that such things exist. It's a claim whose refutation I support. When this topic comes up on HN I invariably point[0] people towards the absolutely best book I've read on the subject. René Guénon's The Metaphysical Principles of the Infinitesimal Calculus. This book is lucid, compelling, precise. I urge anyone with even the slightest bit of interest in this topic to read it. The notion of the continuum is a very very slippery beast and I know of no other book that deals with the topic so well. Category theorists have a much better handle on the smoothness of the continuum than set theorists because they have far better machinery.
0.999... does not represent a function or a process or anything like that. It represents a fixed number: the limit of sum from k=1 to n of 9/10^k , as n approaches infinity.
You're demonstrating why so many people get confused by the 0.999... notation: they (and you) are thinking of it as the sequence (0.9, 0.99, 0.999, ...) ; but actually it doesn't represent this sequence, or a process for producing this sequence; rather, it represents the limit of this sequence, which is a different type of object: a real number.
> Mathematicians are using the symbol … ambiguously
Sure, it can mean other things in other contexts (like in my description above of a sequence), but we're talking about what it means in the case 0.j_1 j_2 j_3 ... where the j_k's are base 10 digits. In that case it has a well-defined, non-ambiguous meaning: \lim_{n\to\infty}\sum_{k=1}^n \frac{j_k}{10^k} .
> If, as you say, in instances like this mathematicians take … to mean “the limit (from calculus) of the demonstrated sequence” this is news to me. Well, maybe it's not news to me but I'm not exactly buying it.
OP is right. I have a degree in math and this is what mathematicians mean by 0.999...
> What I'm saying is that the expression could be taken to be one of any number of things
Yeah we could have had the notation mean something else. But we didn't. We have other notations for the things you're talking about. Who cares?
> That you say the convention is that it most assuredly is 4 is not something I'm buying into.
What would it take to make you believe that this is what mathematicians mean by this notation? I promise with all my heart that this is true.
> (I'd nearly go a step further and assert that it ought not be defined this way).
Plenty of stuff in math has pedagogically dubious notation, it's not really novel or philosophically interesting to claim you dislike some piece of notation or think it could have been better. It's all just convention.
> Where we get into problems, I believe, is when people say nonsensical things like the sequence reaches 1.
No mathematician is saying that. The sequence (0.9, 0.99, 0.999...) quite clearly never reaches 1, and any mathematician will agree with that. But its limit is 1.
> Or if they start talking about the actually completed sequence rather than a symbolic representation of it. And this, by the way, is the gripe that people have with Cantor's completed infinities. It really is an ontological claim, not a mathematical one that such things exist.
Well, things don't have to "exist" for us to be able to study them in mathematics, so what's the point of claims like this?
Yeah, this is an abuse of notation (it should be written as set membership, not equality), but the "=" in "0.99... = 1" is really equality, so I don't think this is a good example when talking about that.
Never? Equality always means the same thing in every case I can think of, except big-O notation. You picked the one example I'm aware of in all of mathematics where it means something different.
This notation is defined as meaning the limit as n approaches infinity of the sum from k=1 to n of 9/10^k . A sequence doesn't have to reach its limit!! For example, the limit of the sequence 1/2, 1/4, 1/8, ... is 0, even though the sequence never reaches 0. The limit of the sequence 0.9, 0.99, 0.999, ... is 1 (and this is BY DEFINITION what 0.999... means).
As much as I believe that 0.999... exists and is equal to 1, the thing where you write 1/3 as 0.3... and multiply both sides by 3 is a really bad proof -- it's pretty much circular reasoning.
For someone who doesn't understand what 0.999... means, why should we expect them to understand what 0.333... means and believe that it exists?
Because 0.33... means 1/3 for exactly the same reason that 0.99... means 1. So if you believe one, then you should already believe the other. All the intuitive (wrong) arguments against 0.99... = 1 apply to 0.33... = 1/3 just as well.
Your first sentence is probably true, but the point is that most people find the former more intuitive. It's a valid demonstration technique. Just because A entails B in the mathematical/logical sense doesn't mean everyone who believes A also believes B. No one is logically omniscient.
I'm not a math pro but I don't think those are good evidence to refute that claim. I have no idea whether the claim is valid or not, though I vaguely recall this discussion from school.
From your examples -- one's a ratio and the other is irrational. Almost as if you suggest "The set of real numbers exists, ergo 0.999999... exists"?
The battle over the question whether or not something exists in the mathematical sense has essentially been won by the formalists, who will allow any definition to claim the existence of something so long as it is consistent with the rest of mathematics.
Then, given that 0.99999... exists, what useful properties could it possibly have, if it is to be treated as a number?
So if 0.99999... exists and has the same arithmetic properties as decimal expansions of rational numbers, its value must be 1. Now whether you think it should have these properties is an entirely different question, but most mathematicians seem to like it this way.
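Assuming the usual rules of decimal arithmetic carry over to the infinite expansion (which is exactly the "same arithmetic properties" premise above), the standard algebra goes:

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \quad\Longrightarrow\quad x = 1
\end{align*}
```

The epsilon-delta limit argument mentioned elsewhere in the thread is what rigorously justifies manipulating the infinite expansion this way.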
No, I'm simply saying those are other examples of numbers with infinite decimal expansions. In fact, most numbers don't have a finite decimal expansion.
And the claim is correct. It is the statement that the limit of the partial sums 9/10 + 9/100 + ... + 9/10^n is 1, which can be rigorously proven with the epsilon-delta definition of a limit.
"0.999... = 1" isn't really a "claim", it's a definition.
Well, it's technically a claim, but one that is obviously true once we agree on what 0.999... means. Anyone who doesn't agree that 0.999... = 1 has a different definition of 0.999... from the one used in mainstream math notation.
There is no decimal expansion of 1/3. Or pi, for that matter. There are only approximations thereof, with an error margin inversely proportional to the energy we throw at generating said expansion.
If you happen to somehow build a decimal expansion of 1/3, please share it with us!
> There is no decimal expansion of 1/3. Or pi for that matter.
This is incorrect. A decimal expansion is just a type of Cauchy sequence, which by definition is infinite (a function from the natural numbers to the reals).
You are confusing the decimal expansion with its truncated approximations.
> If you happen to somehow built a decimal expansion of 1/3, please share it with us!
Sure thing! Here it is:
def oneThird(place):
    if place >= 0:
        return 0
    else:
        return 3
Convert to your favorite programming language/Turing machine/other abstract machine of choice.
oneThird doesn't return a decimal expansion, just one digit of a number. I'm still waiting for the 1/3 decimal expansion you claim exists. The best I can do is:
from itertools import count

def oneThird(place):
    if place >= 0:
        return 0
    else:
        return 3

def printOneThird():
    # x runs 1, 2, 3, ... forever, printing the digit at each place
    for x in count(1):
        print(oneThird(-x))
Unfortunately, it didn't finish before I edited this response. Perhaps you have more patience; please let me know when you have a link to the decimal expansion of 1/3.
It seems you don't understand what a sequence is. Formally, a sequence is a function whose domain is the natural numbers (or integers). A decimal expansion is simply a sequence whose codomain is the set of digits {0,1,2,3,4,5,6,7,8,9}. I gave you just such a sequence.
Are you really going to continue disputing a basic mathematical fact?
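Concretely, treating a digit function like the thread's oneThird as the expansion itself, the partial values it defines converge to 1/3 (a sketch of my own; the function name mirrors the one given above):

```python
# The digit function IS the decimal expansion: a map from places to
# digits. Its partial values converge to 1/3, computed exactly here.
from fractions import Fraction

def oneThird(place):
    """Digit of 1/3 at the given decimal place (0 = units, -1 = tenths, ...)."""
    return 0 if place >= 0 else 3

def partial_value(n):
    """Value of the first n digits after the decimal point."""
    return sum(Fraction(oneThird(-k), 10**k) for k in range(1, n + 1))

for n in [1, 3, 6]:
    print(partial_value(n), Fraction(1, 3) - partial_value(n))

# The error after n digits is exactly 1/(3 * 10^n), so the partial
# values form a Cauchy sequence whose limit is exactly 1/3.
```

Printing every digit never terminates, but the infinite object (the function) is fully specified in finitely many characters, which is the point of the parent comment.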
It is formally defined in this case. In mathematics, 0.j_1 j_2 j_3 ... where each j_k is a base-10 digit is unambiguously defined to mean \lim_{n\to\infty}\sum_{k=1}^n \frac{j_k}{10^k} .
What I am saying is that the definition should be put in place whenever an ellipsis is used. Better yet, do not use the ellipsis at all. Otherwise, at some point people get confused.
An imperfect but partial analogy is the use of the physics meanings of the words energy and power in common language.
People don't self-select into studying and teaching mathematics because they like surprises. This is true of academia in general, but very true of math.
That's the ideal. But if you've spent much time in academia, you'd know that the personality types that choose science usually do so to wallow in certainty. High school doesn't tell you much about the real history and philosophy of science. I trust everyone who added a downvote has actually spent some time inside a math department of a university (but I highly doubt that).