"The exact location of Hyperion is nominally secret but is available via internet search.[12] However, in July 2022, the Redwood Park superintendent closed the entire area around the tree, citing "devastation of the habitat surrounding Hyperion" caused by visitors. Its base was trampled by the overuse and as a result ferns no longer grow around the tree.[13]
Measures to protect the Hyperion tree were officially implemented in 2022 when the National Park Service (NPS) closed public access to its location in Redwood National Park.[14][15] Anyone who gets too close could face up to six months in jail and a $5,000 maximum fine.[13][16][17]"
My impression (as a dilettante programmer without relevant credentials) is that there isn't really any question about whether mathematical structures can be rooted in set theory, or can be expressed as extensions of set theory. Disputes about foundations of mathematics are more about how easy or elegant it is to do so. (And in fact my impression is they're mostly about subjective, aesthetic considerations of elegance rather than practical considerations of how hard it is to do something in practice, even though the discussion tends to be nominally about the practical side. Quite similar to disputes about programming languages in that respect.)
Defining first-order logic doesn't really require set theory, but it does require some conception of natural numbers. Instead of saying there's an infinite set of variables, you can have a single symbol x and a mark *, and then you can say a variable is any string consisting of x followed by any number of marks. You can do the same thing with constants, relation symbols and function symbols. This does mean there can only be countably many of each type of symbol but for foundational purposes that's fine since each proof will only mention finitely many variables.
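The "x followed by marks" scheme can be made completely mechanical. A minimal sketch in Python (purely illustrative, not part of any formal presentation): a generator that emits the unbounded stream of distinct symbols x, x*, x**, ...

```python
from itertools import islice

def fresh_variables():
    """Yield the unbounded sequence x, x*, x**, ... of variable symbols.

    Each symbol is the letter 'x' followed by some number of marks '*',
    so every symbol in the stream is distinct from every other."""
    marks = 0
    while True:
        yield "x" + "*" * marks
        marks += 1

# Take the first few symbols from the stream.
first_four = list(islice(fresh_variables(), 4))
```

Note that the generator never needs "the set of all variables" as a completed object; it only ever needs to produce one more symbol than it has so far, which is exactly the operational reading discussed above.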
Allowing uncountably many symbols can be more convenient when you apply logic in other ways, e.g. when doing model theory, but from a foundational perspective when you're doing stuff like that you're not using the "base" logic but rather using the formalized version of logic that you can define within the set theory that you defined using the base logic.
Thanks, that sharpens it then to a question about natural numbers, or at least some idea of an indefinitely extensible collection of unique elements: it seems the ordering on the numbers isn't required for a collection of variables, we just need them to be distinct.
I don't think you need set theory to define set theory (someone would have noticed that kind of circularity), but there still seems to be some sleight of hand in the usual presentations, with authors often saying in the definition of first-order logic prior to defining set theory that "there is an infinite set of variables". Then they define a membership relation, an empty set, and then "construct the natural numbers"... But I guess that's just a sloppiness or different concern in the particular presentation, and the seeming circularity is eliminable.
Maybe instead of saying at the outset that we require natural numbers, wouldn't it be enough to give an operation or algorithm which can be repeated indefinitely many times to give new symbols? This is effectively what you're illustrating with the x, x*, x**, etc.
If that's all we need then it seems perfectly clear, but this kind of operational or algorithmic aspect of the foundation of logic and mathematics isn't usually acknowledged, or at least the usual presentations don't put it this way, so I'm wondering if there's some inadequacy or incompleteness about it.
My first thought on reading your comment was to disagree and say no, we can have the exact value of 1, because we can choose our system of units and so we can make the square a unit square by fiat.
A better way to dispute the unit square diagonal argument for the existence of sqrt(2) would be to argue that squares themselves are unphysical, since all measurements are imprecise and so we can't be sure that any two physical lengths or angles are exactly the same.
But actually, this argument can also be applied to 1 and other discrete quantities. Sure, if I choose the length of some specific ruler as my unit length, then I can be sure that ruler has length 1. But if I look at any other object in the world, I can never say that other object has length exactly 1, due to the imprecision of measurements. Which makes this concept of "length exactly 1" rather limited in usefulness---in that sense, it would be fair to say the exact value of 1 doesn't exist.
Overall I think 1, and the other integers, and even rational numbers via the argument of AIPendant about egg cartons, are straightforwardly physically real as measurements of discrete quantities, but for measurements of continuous quantities I think the argument about the unit square diagonal works to show that rational numbers are no more and no less physically real than sqrt(2).
You can say it’s exactly 1 plus or minus some small epsilon and use the completeness of the reals to argue that we can always build a finer ruler and push the epsilon down further. You have a sequence (meters, decimeters, centimeters, millimeters, etc) where a_n is the resolution of measurement and 5*a_(n+1) determines your uncertainty.
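The refinement argument can be spot-checked numerically. A small Python sketch (illustrative only; it treats sqrt(2) as the idealized "true" length being measured): at each resolution a_n = 10^-n, a reading rounded to the nearest tick is off by at most half a tick, i.e. 5 * a_(n+1).

```python
import math

true_length = math.sqrt(2)  # idealized quantity being measured

for n in range(5):
    a_n = 10 ** -n                             # resolution: meters, decimeters, ...
    reading = round(true_length / a_n) * a_n   # round to the nearest tick
    uncertainty = 5 * 10 ** -(n + 1)           # half a tick, i.e. 5 * a_(n+1)
    assert abs(reading - true_length) <= uncertainty
```

Each reading is a rational number (an integer multiple of a_n); only the limit of the sequence of readings is irrational, which is the point made below.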
However, at each finite n we are still dealing with discrete quantities, i.e. integers and rationals. Even algebraic irrationals like sqrt(2) are ultimately a limit, and in my view the physicality of this limit doesn’t follow from the physicality of each individual element in the sequence. (Worse, quantum mechanics strongly suggests the sequence itself is unphysical below the Planck scale. But that’s not actually relevant - the physicality of sqrt(2) ultimately assumes a stronger view about reality than the physicality of 2 or 1/2.)
> A professor sets up a challenge between a mathematics major and an engineering major
> They were both put in a room, and at the other end were a $100 bill and a free A on a test. The professor said that every 30 seconds they could travel half the distance between themselves and the prize. The mathematician stormed off, calling it pointless. The engineer was still in. The mathematician said, “Don’t you see? You’ll never get close enough to actually reach it.” The engineer replied, “So? I’ll be close enough for all practical purposes.”
Whether you nod your head or wag your finger, you continually pass within whatever arbitrary epsilon you set around your self-disappointment at the ineffability of the limit; yet the square root of two is both well defined and exists in the universe, despite the limits of our ability to measure it.
Thankfully, it exists in nature anyhow -- just find a right angle!
One could simply define it as the ratio of the average distance between neighboring fluoride atoms and the average distance of fluoride to xenon in xenon tetrafluoride.
Consider the identity function f, which just takes an argument and returns it unchanged, and has the polymorphic type a -> a, where a is a type variable. What's the type of f(f)?
Obviously, since f(f) = f it should be a -> a as well. But to infer it without actually evaluating it, using the standard Hindley-Milner algorithm, we reason as follows: the two f's can have different specific types, so we introduce a new type variable b. The type of the first f will be a substitution instance of a -> a, the type of the second f will be a substitution instance of b -> b. We introduce a new type variable c for the return type, and solve the equation (a -> a) = ((b -> b) -> c), using unification. This gives us the substitution {a = (b -> b), c = (b -> b)}, and so we see that the return type c is b -> b.
But if we use pattern matching rather than unification, the variables in one of the two sides of the equation (a -> a) = ((b -> b) -> c) are effectively treated as constants referring to atomic types, not variables. Now if we treat the variables on the left side as constants, i.e. we treat a as a constant, we have to match b -> b to the constant a, which is impossible; the type a is atomic, the type b -> b isn't. If we treat the variables on the right side as constants, i.e. we treat b and c as constants, then we have to match a to both b -> b and c, and this means our substitution will have to make b -> b and c equal, which is impossible given that c is an atomic type and b -> b isn't.
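Both behaviours are easy to reproduce concretely. Below is a toy Python sketch (not the full Hindley-Milner algorithm W; types are encoded as nested tuples, with strings as type variables, and the unifier omits the occurs check for brevity). `unify` may bind variables on either side; `match` may only bind variables in its first argument, treating the other side's variables as constants, and fails in both directions exactly as described above.

```python
# Types: a string is a type variable; ("->", dom, cod) is a function type.

def walk(t, subst):
    """Chase a variable through the substitution until it is unbound."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 equal, or None on failure.
    Variables on *both* sides may be bound (no occurs check, for brevity)."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        subst[t1] = t2
        return subst
    if isinstance(t2, str):
        subst[t2] = t1
        return subst
    # Both are arrow types: unify domains, then codomains.
    s = unify(t1[1], t2[1], subst)
    return None if s is None else unify(t1[2], t2[2], s)

def match(pattern, term, subst=None):
    """One-sided matching: only variables in `pattern` may be bound;
    any variables in `term` are treated as atomic constants."""
    subst = dict(subst or {})
    if isinstance(pattern, str):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(term, str):
        return None  # an arrow pattern can never match an atomic constant
    s = match(pattern[1], term[1], subst)
    return None if s is None else match(pattern[2], term[2], s)

arrow = lambda d, c: ("->", d, c)

# Unification of (a -> a) with ((b -> b) -> c) succeeds:
s = unify(arrow("a", "a"), arrow(arrow("b", "b"), "c"))
# s binds both a and c to (b -> b), so the result type of f(f) is b -> b.

# Matching fails in both directions, as argued above:
fails_left = match(arrow(arrow("b", "b"), "c"), arrow("a", "a"))    # b -> b vs constant a
fails_right = match(arrow("a", "a"), arrow(arrow("b", "b"), "c"))   # a vs both b -> b and c
```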
It would surprise me if most rationalists didn't know who Jaynes was. I first heard of him via rationalists. The Sequences talk about him in adulatory tones. I think Yudkowsky would acknowledge him as one of his greatest influences.
At least in my country (the UK), people generally do not learn abstract algebra in high school. That's a university-level topic.
I think there is a definite "step up" in complexity between the structures of abstract algebra such as monoids, rings, groups and vector spaces, and monads. All of those algebraic structures are basically just sets equipped with operations satisfying some equations. Monads are endofunctors equipped with natural transformations satisfying some equations. "Endofunctor" and "natural transformation" are considerably more advanced and abstract concepts than "set" and "operation", and they are concepts that belong to category theory (so I don't see how you can read and understand the definition of a monad without that basic level of category theory).
Your time estimates also seem wildly optimistic. A common rule of thumb is that reading a maths textbook at your level takes around an hour per page. I think the definition of a monad can be compared to one page of a textbook. So I'd say it'll take on the order of hours to read and understand the definition of a monad, and that's assuming you're already entirely comfortable with the pre-requisite concepts (natural transformations, etc.)
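To be fair, the equational part of the definition can be spot-checked without any category theory. A Python sketch (illustrative; it elides the endofunctor/natural-transformation packaging entirely) using the list monad, where unit wraps a value in a singleton and bind is flatMap, checking the three monad laws on sample values:

```python
def unit(x):
    """Wrap a value in a singleton list (the list monad's 'return')."""
    return [x]

def bind(xs, f):
    """Apply f (which returns a list) to each element and concatenate."""
    return [y for x in xs for y in f(x)]

f = lambda n: [n, n + 1]
g = lambda n: [n * 2]
xs = [1, 2, 3]

# Left identity: unit(a) bound with f is just f(a).
assert bind(unit(5), f) == f(5)
# Right identity: binding with unit changes nothing.
assert bind(xs, unit) == xs
# Associativity: the grouping of successive binds doesn't matter.
assert bind(bind(xs, f), g) == bind(xs, lambda x: bind(f(x), g))
```

Of course, this only illustrates the equations for one particular monad; the conceptual step the comment describes is in seeing why these operations and laws are an instance of a general categorical definition.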
It is a particular sense of "nondeterminism", but it's not specific to functional programming; I think it's the usual one in theoretical CS as a whole. It's the same sense in which "nondeterminism" is used in P vs NP, for example.
Think of a computation as a process of changing state. At a given point in time, the computer is in a certain state, the current state. The computation can be described in terms of a function that acts on the current state.
In a deterministic computation, the function takes in the current state, and produces a single state as output which will be the state the computer enters on the next tick.
In a non-deterministic computation, the function takes in the current state and produces a set of states as output. These states are all the possible states the computer might enter on the next tick. We don't know (or just don't care) which one of these states it will enter.
You can model a non-deterministic computation as a deterministic one, by using a list `currentStates` to store the set of all possible current states of the computation. At each "tick", you do `currentStates = flatMap(nextStates, currentStates)` to "progress" the computation. In the end `currentStates` will be the set of all possible end states (and you could do some further processing to choose a specific end state, e.g. at random, if you wish, but you could also just work with the set of end states as a whole).
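A minimal runnable sketch of the scheme above, in Python (the transition function `next_states` is made up for illustration; also note a plain list can accumulate duplicate states, which you might deduplicate if you care):

```python
def flat_map(f, xs):
    """Apply f (which returns a list) to each element and concatenate results."""
    return [y for x in xs for y in f(x)]

# A toy nondeterministic step: from state n, the machine may either
# double it or add one on the next tick; we don't know which.
def next_states(n):
    return [2 * n, n + 1]

current_states = [2]          # start in state 2
for _ in range(2):            # run two nondeterministic ticks
    current_states = flat_map(next_states, current_states)

# current_states now holds every state reachable from 2 in two ticks.
```

Each tick deterministically maps the set of possible states to the next set of possible states, which is exactly the "model a non-deterministic computation as a deterministic one" trick.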
It's in this sense that "a list is a non-deterministic result", although this is really just one thing a list can represent; a list is a generic data structure which can represent all sorts of things, one of which is a non-deterministic result.
As a non-arachnophobe, I don't find spiders of any kind to be cute. But I also don't find anything about their appearance or behaviour to be unpleasant or scary. They're just aesthetically unremarkable. Similar to, say, fish---I don't think most people will look at a fish and think either "ooh, so cute" or "weird alien thing, get it away from me", they just see a fish and don't really have any emotions about it.
"The exact location of Hyperion is nominally secret but is available via internet search.[12] However, in July 2022, the Redwood Park superintendent closed the entire area around the tree, citing "devastation of the habitat surrounding Hyperion" caused by visitors. Its base was trampled by the overuse and as a result ferns no longer grow around the tree.[13]
Measures to protect the Hyperion tree were officially implemented in 2022 when the National Park Service (NPS) closed public access to its location in Redwood National Park.[14][15] Anyone who gets too close could face up to six months in jail and a $5,000 maximum fine.[13][16][17]"
reply