
Really, really have to say I disagree with (iii). Purity is growing more and more day by day. From other comments you seem to have taken a fairly non-standard definition of purity, and I'd encourage you to stick with the normal one, as it'll let you talk about more interesting things; see my further comments on (viii).

I don't think (iv) is open at all. The fact that you can encode everything as function passing is an interesting trick, but it's merely one of several (you can, e.g., also encode everything categorically or via supercombinators and graph reduction, which turns out to be more practical for lazy languages). Function application is just one really well understood semantic foundation.
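
To be concrete about the encoding trick, here's a minimal Haskell sketch of Church-style encodings (illustrative names only), where booleans and pairs are nothing but functions:

    -- Booleans as selector functions.
    true, false :: a -> a -> a
    true  t _ = t
    false _ f = f

    -- A pair is a function waiting to be handed a selector.
    pair :: a -> b -> ((a -> b -> r) -> r)
    pair x y sel = sel x y

    first :: ((a -> b -> a) -> r) -> r
    first p = p (\x _ -> x)

    -- first (pair 1 "two") evaluates to 1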

I find (v) far more subtle than you make it out to be. The standard static/dynamic types divide is silly because both sides are talking about different things. The static side wins completely because, honestly, taken at face value, it doesn't claim that much: all languages are typed, some have interesting type theories, and some compilers reject programs which are syntactically OK but violate the type theory.

(viii) is interesting. In my mind, you've conflated this with what most people would call purity---however, monads are merely an implementation of effect typing/purity. Others exist and are being experimented with. Also, composition of monads always works as desired... there's no strange edge case to it. It's merely that they don't compose like functions in a simple way, and when you examine what such a composition would entail, you learn that they ought not to.
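
To pin down what "don't compose like functions" means, here's a rough sketch (not any particular library's API): the composite of two monads is a monad exactly when you can supply a lawful distributive law, and such a law simply doesn't exist in general.

    {-# LANGUAGE RankNTypes #-}
    import Control.Monad (join)

    newtype Compose m n a = Compose { getCompose :: m (n a) }

    -- Given a distributive law  swap :: n (m x) -> m (n x)  (obeying the
    -- usual coherence laws), the composite gets a join; without one, the
    -- m and n layers stay interleaved and there is no way to untangle them.
    joinC :: (Monad m, Monad n)
          => (forall x. n (m x) -> m (n x))
          -> Compose m n (Compose m n a)
          -> Compose m n a
    joinC swap (Compose mnc) = Compose $ do
      nc <- mnc                        -- nc :: n (Compose m n a)
      nn <- swap (fmap getCompose nc)  -- nn :: n (n a), after swapping
      return (join nn)                 -- collapse the doubled n layer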

(ix) HKTs are sort of a gateway drug to type-level computation. It's remarkable how many languages just more or less let their type-level language wither and die. I'm not claiming that HKTs are a slam dunk victory, but I would be willing to bet that the evolution of languages will lead us to (a) having them universally in the next two decades and (b) wondering why everyone was so hung up on HKTs---I mean, I suppose I can understand singletons, but HKTs? Really?
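
For anyone following along, "HKT" just means being able to abstract over a type constructor rather than a concrete type. In Haskell terms (this is just Functor with the serial numbers filed off):

    -- f has kind * -> *: a type constructor, not a type.
    class Mappable f where
      mapOver :: (a -> b) -> f a -> f b

    instance Mappable [] where
      mapOver = map

    instance Mappable Maybe where
      mapOver _ Nothing  = Nothing
      mapOver f (Just x) = Just (f x)

    -- Without HKTs you cannot even write Mappable's signature down,
    -- which is why so much type-level programming starts here.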



     monads are merely an implementation of effect typing/purity. 
Yes, and it's the one Haskell has chosen. Is it the right way? Most newer languages are not following Haskell here; instead they have effects like state and exceptions 'directly'. That's what I was pointing out. And yes, composition of monads is a bit non-canonical.

As to HKTs, I agree, languages should have them. I was just pointing out the state of the art, post Haskell.


I would suggest that effects which are "directly" embedded are impure effects (w.r.t. our other thread). Essentially, without reifying them into values with types it's harder to reason about them.
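
By "reifying them into values with types" I mean something like this little sketch, where the state effect becomes an ordinary data structure you can inspect and interpret however you like:

    data StateEff s a
      = Get (s -> StateEff s a)   -- ask for the current state
      | Put s (StateEff s a)      -- replace the state, then continue
      | Done a                    -- finish with a result

    -- One interpreter among many; you could just as well log, test,
    -- or replay the same reified program.
    runState :: StateEff s a -> s -> (a, s)
    runState (Done a)   s = (a, s)
    runState (Get k)    s = runState (k s) s
    runState (Put s' k) _ = runState k s'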

A more interesting dividing line is something like computational effect typing in languages such as Bauer and Pretnar's "Eff" or McBride's "Frank".

http://www.eff-lang.org/

http://homepages.inf.ed.ac.uk/slindley/papers/frankly-draft-...
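
Very roughly, and in Haskell clothing rather than either language's own syntax, the flavour is "declare operations as data, then give them meaning with separate handlers"; the classic toy example is a choice effect:

    -- A "decide" operation, reified.
    data Choice a = Decide (Bool -> Choice a) | Pure a

    -- One handler: always take the True branch.
    alwaysTrue :: Choice a -> a
    alwaysTrue (Pure a)   = a
    alwaysTrue (Decide k) = alwaysTrue (k True)

    -- Another handler for the same program: explore both branches.
    allResults :: Choice a -> [a]
    allResults (Pure a)   = [a]
    allResults (Decide k) = allResults (k True) ++ allResults (k False)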

But yes, if you consider other "practical, modern" languages, many retain impurity.


There are very different options here.

First, approaches like Eff take pure functional computation as a basis and layer effects like call/cc or concurrency on top of that. I think it's better to go the other way: take concurrency, or, if you want to start sequential, a lambda-calculus with jumps a la call/cc or lambda-mu, as a basis, and see pure functional computation as a restriction of the base calculus.

Second, it's unclear that effects should be reflected in types for reasoning. That road leads to complicated type systems without type inference (see also dependent types). I think the following alternative is more appealing: stick with lightweight types (think Hindley-Milner), and leave reasoning to an external calculus (variants of Hoare logic).
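
As a small illustration of that division of labour (a sketch with made-up names): the type stays plain Hindley-Milner, and the behavioural claim lives outside the type system as a Hoare-style specification.

    -- The type says very little: just the shapes of the data.
    withdraw :: Int -> Int -> Maybe Int
    withdraw balance amount
      | amount <= balance = Just (balance - amount)
      | otherwise         = Nothing

    -- The reasoning is external, e.g. Hoare-style triples checked by a
    -- separate calculus rather than encoded in the type:
    --   { amount <= balance }  withdraw balance amount  { result = Just (balance - amount) }
    --   { amount >  balance }  withdraw balance amount  { result = Nothing }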


That "analytic" style you suggest is really interesting, too. I'm not sure I have a dog in the ultimate race and I'd love to see both analytic and synthetic styles flourish, but from a mere terminology point of view I'd distinguish between the two languages in means of describing them as "pure" or not.

I also really worry that external proofs will ultimately be neglected in practice. I suppose this is again a "UX of Types" thing, but I'm something of a believer that anything that you do not literally demand of the programmer will eventually be sacrificed.


I'm not totally sure what you mean by "external proofs will ultimately be neglected".


     I find (v) far more subtle than you make it out to be.
I don't want to start, or be involved in, another static-vs-dynamic typing debate. I have a strong preference for static types, but I appreciate the arguments of the other side, and contend that neither side has really convincing empirical arguments supporting their respective positions.


Eh, I just find the whole "debate" silly. The two systems aren't comparable and can and do live together side by side in every program. I think I agree that empirical evidence here is either going to be silly or impossible to achieve for a long time.

So, I guess I just tire of it. If you're making a language today you can either take advantage of information known statically or not. I think there's a relative burst of activity here as some of the more genuinely useful forms of static analysis have recently become more popular, and I personally feel it's silly not to use this information... but it's just a choice, not a religious war.


    Purity is growing more and more day by day.
What I meant was this: a language should not be pure by default; rather, it should be possible to say (and have checked by the compiler): this is pure. Quite how to set up a pragmatically viable type-and-effect system to guarantee this is unclear.

PS what is the "normal" definition of purity?


Hm, given your stance now, I'm not sure whether I properly understood what you said elsewhere on this matter.

"Purity" is a tricky technical concept that can be defined, but rather than do so explicitly I'd rather note that it necessarily requires the asserter to take a stance on what "observable" means. Haskell is clearly impure in that its laziness is implemented using mutation "behind the scenes", but it is also taken to have an "observable" semantics where such mutation is invisible and IO is merely an interesting value. This gives it purity [0] according to that degree of observability.

I think I'm having difficulty parsing your interpretation of purity because you're moving smoothly between analytic and synthetic judgements of the notion. Agda, as an example, is a system which has a synthetic notion of purity, as the "language" is thought not to contain terms which represent effects. Haskell plays a trick where it has, essentially, two sets of semantics. The first shares this property (modulo non-termination, see [0]) and is where most "programming" occurs---here we use monads as a system for constructing terms of the second semantics, IO. The second (IO) semantics is vastly impure.
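
To make the two-semantics trick concrete: in the first semantics IO actions are ordinary values you can store and rearrange without anything happening; only the term handed to the runtime via main gets the second, impure semantics.

    -- Nothing runs here; this is just a list of values.
    actions :: [IO ()]
    actions = [putStrLn "hello", putStrLn "world"]

    -- Only now, at the boundary, does the impure semantics kick in.
    main :: IO ()
    main = sequence_ (reverse actions)   -- prints "world", then "hello"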

To the degree that this trick counts as "purity in a language", purity is becoming a very popular idea. Haskell uses its type system to provide a synthetic distinction between the two choices of semantics, but if you use a language lacking that discipline then you can merely call this "isolation of side-effecting code" and then nobody will disagree with you.

If you don't want to call this "purity", and maybe you don't, then I feel you're in the minority though you aren't without an island to stand on.

Frankly, both are sort of an overloading of the term, though.

[0] Actually, it gets a lot trickier. It's well known that non-termination is an effect and it's a little slippery to define Haskell's notion of "observability" so as to no longer observe non-termination. I mean "slippery" here to say that it takes some technical finesse and that it tends to go against people's intuition a bit more harshly than the rest.


     I don't think (iv) is open at all ... Function application is just one really well understood semantic foundation.
I disagree, see my reply to "discreteevent" above. Parallel computing cannot be reduced to functions.


No, sorry, to be clear: I don't believe that "reduction to functions" is anything but a clever, or perhaps illuminating, toy, and I think (iv) is closed in the opposite way---it's certainly not true that "reduction to functions" is right.



