Why functional programming matters (1990) [pdf] (kent.ac.uk)
113 points by znt on May 6, 2015 | 139 comments


Finally read this paper a couple of days ago. Even though I thought I already understood the concepts of higher-order functions and laziness, and had used them in my everyday programming, the paper was still quite enlightening.

One of the things I hadn't considered before is how much laziness can help to build modular code. The last section of the paper, about implementing the minimax algorithm, is a great example of this. The `evaluate` function is first naively defined as a simple chain of a couple of functions that iterate over the tree of all possible game positions. Then a series of improvements is applied to it (e.g. pruning the tree to allow the algorithm to work on infinite trees, or ignoring branches that cannot yield favourable moves), but what's really nice is that, thanks to lazy evaluation, these improvements can be implemented as independent functions that are just "plugged into" the chain, without having to modify the original functions. For example, the original function that generates the whole game tree doesn't need to be changed to implement pruning.
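To make that concrete, here's a minimal Haskell sketch in the spirit of the paper (not its actual code): `reptree` and `prune` echo the paper's names, but the Int positions, the `moves` function, and the leaf-maximising "evaluation" are toy stand-ins.

    data Tree a = Node a [Tree a]

    -- Build the (potentially infinite) tree of positions reachable from p.
    reptree :: (a -> [a]) -> a -> Tree a
    reptree f p = Node p (map (reptree f) (f p))

    -- Cut the tree off at depth n; under lazy evaluation the subtrees
    -- below that depth are simply never constructed.
    prune :: Int -> Tree a -> Tree a
    prune 0 (Node a _)  = Node a []
    prune n (Node a ts) = Node a (map (prune (n - 1)) ts)

    -- Toy "game": a position is an Int with two successor positions.
    moves :: Int -> [Int]
    moves p = [2 * p, 2 * p + 1]

    -- Pruning is plugged into the pipeline without touching the tree
    -- generator or the evaluation step.
    evaluate :: Int -> Int
    evaluate = maximum . leaves . prune 3 . reptree moves
      where
        leaves (Node a []) = [a]
        leaves (Node _ ts) = concatMap leaves ts

    main :: IO ()
    main = print (evaluate 1)  -- 15

Swapping in a different pruning strategy (or dropping it) only changes that one stage of the composition; the generator and the evaluator stay untouched.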

I'd really recommend the paper to anyone who might be wondering "why all this fuss about functional programming?". It's not only very accessible, but also full of beautiful code examples solving very practical and concrete problems. I personally felt a bit sad having to go "back" to the imperative programming world in my every-day job after reading it.


Read this paper for the first time while working on an Apache project. Shortly after, I created a subreddit, and now it has almost 1,700 subscribers.

Feel free to share, vote, and comment there too :) http://www.reddit.com/r/functionalprogramming/ It's a very healthy community, with lots of interesting stuff in different programming languages.


This paper was written 25 years ago; have any studies yet justified the extravagant claims made by Hughes and other FP advocates of substantial increases in productivity, correctness, and modularity?


I agree these are just claims, but when I look at something like the following it is hard not to see the power that comes from simplicity.

  ;lazy-seq of Fibonacci numbers
  (defn fibo 
    []
    (map first (iterate (fn [[a b]] [b (+ a b)]) [1N 1N])))
You can apply this to a larger set of problems and realize that issues can be solved with less code, which yields fewer errors. Correctness is also easier to achieve with FP because, I think, you don't have the typical problems of non-FP languages (index out of range in a loop, for example).

I personally think that the following has much more impact on modern CS than the Hughes paper linked above.

http://www.erlang.org/download/armstrong_thesis_2003.pdf

This focuses on reality (operating system threading model, no shared memory across processes -in the Erlang world-, communication only via message passing, etc.) and this yields a much higher impact. If you look into Scala features you can find many that came from Erlang, or through Erlang (some of them originally from other languages).

CSP also gave a bigger kick to CS (even though it has very little to do with FP), and it seems that FP languages adopted it faster (I might be wrong on that).

http://www.usingcsp.com/cspbook.pdf


> it is hard not to see the power that comes from simplicity

Actually, it is quite hard for anyone who's not already versed in Lisp and equipped with some reasonable knowledge of basic FP constructs such as folds.

It's also important to look at the whole picture, and the author of the paper (and FP advocates in general) have this interesting tendency to completely ignore the opportunity cost of using FP and to dismiss the advantages of alternative approaches over FP.

Personally, I think laziness and immutability come at a very heavy price that explains why only the superficial aspects of FP (lambdas) have been adopted while the rest of FP remains the realm of research papers and conferences.


> the author of the paper (and FP advocates in general) have this interesting tendency to completely ignore the opportunity cost of using FP […]

Are you referring to the disadvantages of using a non-mainstream style? That's not relevant here. We're comparing the styles themselves, not their popularity.

> Personally, I think laziness and immutability come at a very heavy price that explains why only the superficial aspects of FP (lambdas) have been adopted while the rest of FP remains the realm of research papers and conferences.

I have a much simpler explanation: first class procedures are compatible with imperative programming, while immutability is not. (And laziness needs immutability to be workable.)

People don't adopt FP because old habits die hard.


> first class procedures are compatible with imperative programming, while immutability is not

But immutability is not incompatible with imperative programming, otherwise people wouldn't be writing things like Guy Steele's "Lambda: the ultimate imperative" paper [1]. And of course, monads.

> People don't adopt FP because old habits die hard.

People don't adopt FP for the same reason they don't wear hair shirts; it is presented and marketed as an "eat your vegetables" affair.

[1] http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-...


Most reasonable definitions of imperative programming include the pervasive use of mutable variables, `while` and `for` loops, procedures that have effects beyond their return values, or that plainly don't return the same values for the same arguments…

You could de-sugar your way out of those (Turing completeness and all that), but then it wouldn't be imperative programming any more. You could isolate those things in a monad, but (i) the monadic part is not functional, and (ii) the rest of the program doesn't use those things. I stand by my incompatibility claim.

Your marketing argument is better, but I don't see any other way. No matter how you cut it, FP is largely about not using certain features, and about the benefits we get from this restraint.


Haskell has been called one of the best languages for imperative programming [1]. Even in a pure language, we still NEED imperative code at some point where the program hits the real world of time and state (even in FRP a stepper is often needed!). By lifting state and sequencing into a monad, continuation, map or whatever, it is still there even if handled better.

Now, you don't need to use imperative programming everywhere, and in a functional language you have the option to easily not fall into it (well, in Haskell, most other popular FPs are way less pure).

> No matter how you cut it, FP is largely about not using certain features, and about the benefits we get from this restraint.

And that is really why FP will be niche (especially pure FP). Like very powerful static type systems that are really good at saying no...it adds to the resistance of writing code while claiming benefits further down the line. "Worse is better" wins out over and over again (why dynamic languages, imperative programming, OOP continue to be popular; we aren't Vulcans after all!).

[1] http://stackoverflow.com/questions/6622524/why-is-haskell-so...


> Like very powerful static type systems that are really good at saying no...it adds to the resistance of writing code while claiming benefits further down the line.

My experience is the exact opposite: having a compiler that is good at saying no means I correct my errors earlier. With dynamic typing, my errors are revealed later, and further from the root cause. So static typing reduces the resistance to writing correct code. It speeds me up.

Static typing would indeed add to the resistance if it required you to jump through hoops. But it doesn't. Bad type systems have significant limitations, but the good ones have very few. Unless you go crazy with Smalltalk-like hot-plug live systems (and even then, see Yi and Xmonad), static typing is permissive enough.

> we aren't Vulcans after all!

I have two theories. The first is, some of us are Vulcans. And somehow, Vulcan brains function better with static typing and functional programming. In other words, Vulcans are better at "math", and put programming into the "math" bucket. Then there are humans, who see programming and mathematics as two quite separate disciplines, have run away from math, and function better with procedural programming and dynamic typing.

My second theory is that everyone could adapt to anything, if only they would bother to learn. Anyone can think like a Vulcan.

I currently lean towards the second theory, though frankly I'm not sure which theory is closest to reality.


> Static typing would indeed add to the resistance if it required you to jump through hoops.

No arguments there. Haskell makes some interesting tradeoffs in this regard, I'm not sure they are optimal but they are definitely valid.

> Unless you go crazy with Smalltalk-like hot-plug live systems (and even then, see Yi and Xmonad), static typing is permissive enough.

The PL community is just really bad at exploring incremental type checking technology. It is totally possible (and I'm doing it with the live programming language I'm working on).

I only claimed that: if static typing feels like it is resisting too much, programmers won't like it. How fluid is programming in Haskell? Ever try using it in a REPL?

> The first is, some of us are Vulcans.

Yep. Take SPJ, the guy is brilliant and I feel like we are from different planets when we talk.

> In other words, Vulcans are better at "math", and put programming into the "math" bucket.

Some programmers like to treat programming as math, but definitely not all of them or even a majority of them.

> My second theory is that everyone could adapt to anything, if only they would bother to learn. Anyone can think like a Vulcan.

Spock was never able to convert Kirk. And I doubt we ever wanted him to.


> How fluid is programming in Haskell? Ever try using it in a REPL?

I program in Haskell using the REPL, as I do with any other language that makes it possible.


I suspect that the average coder will find Python-style iterators much easier to read.

    def fibo():
        a, b = 1, 1
        while True:
            yield a
            a, b = b, a + b


The average speaker would find Mandarin easier to read than any other language. That does not say anything about the effectiveness of Mandarin over other languages.


>I agree these are just claims, but when I look at something like the following it is hard not to see the power that comes from simplicity.

What simplicity? That's even more complex to state than the traditional imperative solution, which is also closer to how mathematicians would write the equation.


I think you are confusing two things here.

- calculating the n-th element of the Fibonacci sequence

- representing it correctly in code

The first one is (assuming the second is present, see code above):

  (nth (fibo) 10)


I can't be the only one who thinks that implementing a toy function in one line of terse code doesn't demonstrate anything useful. What about programming in the large? What about efficiency of naive implementations? What about maintainability? These are things that actually matter in software engineering.


> What about maintainability?

It's only anecdotal, but I find pure, immutable code far easier to maintain in any language. To a first approximation, a functional interface == a testable interface.


It's only anecdotal, but I feel the opposite.


Really? I'd genuinely love to see some examples.

I've joined several projects in their maintenance phase which had no tests, and introduced some as I went about debugging, adding features, etc. Since the code was often untestable, I'd make a few refactorings over and over again to allow testing, and these just-so-happen to tease apart the pure computation from the effects; in essence making the code more functional.

An obvious example is to turn implicit state into explicit parameters, eg.

    // BEFORE: `a` is read and `b` is written implicitly (globals)

    function foo(x, y) {
      b = x + y + a;
    }

    // AFTER: the actual computation is factored into a pure helper

    function foo(x, y) {
      b = foo_pure(x, y, a);
    }

    function foo_pure(x, y, z) {
      return x + y + z;
    }
This doesn't make `foo` more functional or easier to test; but testing `foo_pure` is trivial. In particular this makes debugging logic errors much easier; since there's no need for elaborate/brittle test setup (eg. setting all the right globals, creating and cleaning up files, etc.).


How about reading what I wrote before you reply?

>What about programming in the large?

less code -> less code at large -> win

>What about efficiency of naive implementations?

Like the stack-overflowing version of it that beginners always come up with in Java? What exactly are you talking about?

>What about maintainability?

What does FP, or any of this thread for that matter, have to do with maintainability? You can write unmaintainable code in any language.

These are a very limited subset of what really matters in software engineering. (I would also remove the naive implementation part.)


>less code -> less code at large

that doesn't necessarily follow


How so? Your claim sounds "obviously false". We need an example.


Scrap the scare quotes, it is obviously false. Less code in the small does imply less code in the large. It doesn't need any justification, it is a tautology: if it takes less code to write all the small things in your program, the program itself will have less code overall.

Or did I misunderstand the meaning of "less code"?


Yep, Django (or anything equivalent) will give you much more productivity in the real world than any of that functional style.


I guess you were not joking but we are not talking about web development in this thread. How does Django help my embedded kernel development?


Ok, sure, but I am sure there are related libraries for that kind of development. When talking about the real world, the majority of it is in web development or business apps.


I was impressed by both of those papers which is part of the reason I decided to focus on Elixir for now.


I understand why you would demand peer-reviewed experiments to justify productivity, or maybe even correctness. But modularity needs nothing more than a qualitative argument.

It is neither disputable nor disputed, that pure functions are more modular than procedures: their result is independent from the order of their evaluation, and their data dependencies are explicit. That makes them easier to connect to one another.

It is neither disputable nor disputed, that first class functions enable more modularity than first-order functions alone. Just see `map`, `filter`, `fold`, which separate loop structure from loop body, in a way that would require special constructs or macros in first order languages.

It is neither disputable nor disputed, that in the absence of side effects, lazy evaluation is more modular than eager evaluation. Lazy evaluation lets you cleanly separate producers from consumers, by avoiding the full evaluation of what would otherwise be infinite data structures.
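As a tiny illustration of the last two points, here is a Haskell sketch (illustrative only, with made-up names): `map` and `filter` carry the loop bodies, and laziness lets the consumer take just five elements of a conceptually infinite producer.

    -- Producer: the (conceptually infinite) list of squares.
    squares :: [Integer]
    squares = map (^ 2) [0 ..]

    -- Consumer: decides how much of the producer is actually evaluated.
    firstSquaresAbove :: Integer -> [Integer]
    firstSquaresAbove n = take 5 (filter (> n) squares)

    main :: IO ()
    main = print (firstSquaresAbove 100)  -- [121,144,169,196,225]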

You may argue that the claims of increased productivity and correctness are extravagant, but you can not reasonably challenge the massive increase in modularity.

Some claims don't need "studies". Sometimes, a compelling argument is enough.


There is still no study design that I think would convince me of those figures. The number of confounders is invariably far too high.

But there are definitions of at least the latter two which are substantially justified via an FP take on the world.

Really, though, I think FP does itself a disservice by claiming to be a "style" of computing much like OO. It's really just a back-to-the-basics take on what "is" computing and from it you can get a very well-behaved foundations of formal languages to build your CS (of whatever brand you like) atop. This has proven to be incredibly fruitful as a theory since, honestly, something like 1907.


This is a really important point that I'm not sure I've seen before in a discussion about the adoption of FP. You're completely right, but I can't imagine FP language advocates would get very far by leading with the history of type theory.


IMO, when it comes to productivity, most of the time the platform (and by platform I mean documentation, build tools, package managers, and such) is a big influence, if not the biggest.

From my experience with functional languages, they are lacking in that aspect.

I like Lisp's simplicity and I like the elegance of Haskell, but the only Lisp I know that has a workflow that kinda matches my taste is Clojure, which reeks of Java. I tried Haskell several times and I liked the language, but I can't stand cabal.

That's why I believe in Go, the language isn't eye popping, but every aspect of the workflow makes me smile and think "that's how you do it".

And by that I mean the no-versioning policy, web stuff built in to the language, and I just love the workspace structure[0].

It is transparent: no need to google to find out where external packages are installed if you want to fiddle with their source, no worries about version hell, no worries about unicode.

I've read good things about Racket in that aspect but I've yet to try it.

[0]https://golang.org/doc/code.html


Lots of people are and have been "productive" in Haskell, various Lisp dialects, O'Caml, and so on.

I worked at a startup whose Haskell backend was quite pleasant to work with and served them very well. This site itself is written in Arc, as you probably know, and Paul Graham has written several essays about the "productivity" he and his team achieved using Common Lisp.

People can work productively with software using all kinds of tools and languages. As you mention, a lot of it might come down to taste, likes and dislikes, habits, etc. Of course you're going to be more productive if you're familiar with your tools. So your experience with productivity isn't really an objective indicator for anyone else.


I didn't mean you can't be productive in functional languages. What I'm criticizing are the platforms and I don't see that as too much of a subjective matter.

For me as a user, Haskell (the platform) is inferior to, say, Go, simply because I can manage external packages that much more easily. Thus if I needed to get something done I'd rather use Go, not because I think it's a better language than Haskell, but because of the pain in dealing with package versions, building, etc.

Basically what I'm saying is that you don't see that much adoption, and thus data to endorse the productivity claims because of the platforms and not because the languages per se are bad.

pg said people don't use Lisp because it looks weird and it's not popular [0], and I think that applies to functional languages in general. I don't think you can/should solve the former, but by having friendlier/better platforms you could start solving the latter.

[0] http://www.paulgraham.com/iflisp.html


How do you personally do package management with versioning in Go? Last time I checked Haskell was better than Go here.


Go packages have no versions; if you make breaking changes you should make a new package, so you don't need to specify a version in some metafile to stop the build process from breaking.


So basically use master or fork all your dependencies to freeze versions.


Why would you freeze it? I understand that even with the Go policy on versions it can happen that a change on some external package can cause problems, but that's unlikely to happen and as you have the packages as git repos you can just git reset it back to a commit when it was stable or even better: fix the problem and contribute back.


Why does Java have Maven, Haskell have Cabal, or Python have pip? So you can grab some old code written by someone no longer at a company, build it, fix bugs, and complete your task in a reasonable amount of time.

Without version constraints it's much harder to even say "I can build this program" much less "I can fix this bug".

> that's unlikely to happen and as you have the packages as git repos you can just git reset it back to a commit when it was stable

You seem to be assuming you are the one who wrote the code. I'm speaking more of cases where someone you don't know and can't contact wrote the code. Have you ever had to reverse engineer a very large dependency chain? It is absolutely no fun :/


To be honest, while I am sympathetic to the argument you're making, I think it's orthogonal. There is a fundamental difference in the design of languages to consider and then, atop that, one about actual existing technologies and their platforms.

I agree that Go's workflow philosophy is great. I wish more languages adopted it. But that's the thing, it's easily mixed and matched. On the other hand, making something like Ruby genuinely type-theoretically typed, while an admirable and interesting goal, will probably never happen. Their worlds are just different.


> back-to-the-basics take on what "is" computing... to build your CS (of whatever brand you like) atop

I feel that we've given many years of attention to functional programming, and it's been disappointingly unfruitful in most areas of CS, such as computational complexity, cryptography, machine learning, or computer graphics. Most new ideas coming out of the FP camp are applicable only to FP itself, like monads. We've discussed that before: https://news.ycombinator.com/item?id=8563817

Part of the mismatch is that many interesting algorithms are impure and cannot be easily made pure. I'd like the "foundation" of CS to give a faithful mathematical description of what computers can and cannot do. Is that too much to ask?

But the bigger problem is that FP treats functions as black boxes, and that's just not enough to prove interesting results in most areas of CS. A better "foundation" would try to describe computational behavior and reason about it, instead of abstracting it away. I admit that's a very vague idea, but I'm not smart enough to make it concrete :-)


> I'd like the "foundation" of CS to give a faithful mathematical description of what computers can and cannot do. Is that too much to ask?

http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

> A better "foundation" would try to describe computational behavior and reason about it, instead of abstracting it away

A lot of interesting research does exactly this, and a lot of it has found its way into ML-family languages. The starting point (or "foundation") in every case is a typed lambda calculus. Machine models, like Turing machines, are just annoying to reason about and not extensible in any meaningful way.


Alas, the Church-Turing thesis doesn't say that every imperative algorithm can be translated to lambda calculus while keeping the same time and space complexity. The two main culprits are in-place mutation and O(1) array indexing.


Sorry for the glib answer. I'm not arguing for programming in the pure, untyped lambda calculus.

My point was that it's possible to systematically extend (or restrict) the lambda calculus and associated techniques. You can add effectful operations and give them semantics (e.g. as functions on stores), define a cost semantics and prove whatever you want while being completely formal.

Try extending RAMs with dynamic allocation, call stacks, or anything to make it more natural to write down a whole algorithm. Try proving something about the composition of RAM programs. I don't think you'll get very far.


Maybe we're talking about slightly different things?

There are many interesting algorithms that are usually analyzed in an imperative setting, like in-place quicksort, Knuth-Morris-Pratt, or Gaussian elimination. There doesn't seem to be any benefit to analyzing them in an FP setting, because it's not easier and doesn't yield new insights.

It seems to me that the FP approach only really shines on algorithms that are "naturally tree-like". For example, I'm a big fan of Okasaki's work on purely functional data structures, and of Blelloch's work-depth model. But let's not oversell FP techniques as the foundation of all algorithm work.


Yes, we're talking about different things. I took the parent comment to mean that talking narrowly about "FP" as programming with pure functions isn't very useful any more. It may have made sense in the 90s when few widely-used languages had h.o.f, but that fight has pretty much been won.

The field that gave rise to FP has a lot to say about how we specify languages and their properties and how we prove things about programs in general. That includes languages with all kinds of effects: state, exceptions, concurrency, etc., and, equally important, mechanisms for abstraction and modularity. There's currently no other "foundation" for computation that lets us define these ideas formally.

In discussions of FP, the contributions of the field get reduced to "programming with functions".


It doesn't even say that any imperative algorithm can be translated---merely Turing machine algorithms, which, for example, also don't happen to have O(1) array indexing.

Elsewhere mafribe mentions parallel processing... which is also clearly in another class.


I think pure FP gives a better foundation for mutation, too. It's very straightforward to use something like a pure FP language to make very clear the choice of semantics involving mutable slots.


The US Navy, 20 years ago: http://haskell.cs.yale.edu/wp-content/uploads/2011/03/Haskel...

Past that: all the correct systems I know were built using formal methods and a mix of functional languages and assembler to reduce the semantic gap.

All the highly productive programmers I know use either functional languages (or logical languages, for the same reasons) or else assembler over very tight machine models--and the one using MMIX is an exception.

I think SML makes a good case to have won the modularity front, but there I don't feel as certain.


The URL shows that, for a certain problem, that set of people could come up with the best solution in Haskell. This might just mean that whoever was programming in Ada wasn't as good. The sample size is so small that it's hard to accept it as evidence. Also, almost everybody got an A in the evaluation; if the others had gotten a C or B-, I would find it a bit more relevant.

I am with you, I also think that functional languages are the way to go and I almost exclusively use them, but I would not make this big of a jump.


The answer is no. I still expect 25 more years of claims with no further proof though. People believe what they want to believe.


Credible empirical investigations of programming language efficiency are too hard to carry out (though they could be done in principle). However, what we can do instead is look at the evolution of programming languages: what features do the latest creations have, what features are older languages retro-fitted with? What ideas have seen little uptake?

(i) Higher-order functions: Yes, slam-dunk win. Essentially all new languages have them, Java and C++ retrofit them.

(ii) Lazy evaluation order by default: mostly dead. All new languages use eager evaluation; even SPJ is sceptical that lazy evaluation should be the default. Scala has an interesting approach: eager by default, but you can switch to CBN (although not full laziness) if required. This hybrid form enables you to define control operators easily, while keeping the simplicity and efficiency of eager evaluation. Exceptions are languages with dependent types that are really theorem provers (e.g. Agda).

(iii) Purity: dead. All new languages have side-effects. The widely used FP languages like Haskell, Ocaml and F# have side-effects. Exceptions are languages with dependent types for Curry-Howard-based theorem provers (e.g. Agda), because there is no convincing Curry-Howard correspondence using state.

(iv) Everything is function application: open. This may work reasonably well in purely sequential languages, but there are theoretical reasons to believe that message passing is not function application (while function application is a special case of message passing).

(v) Types. The answer on this one depends on whether one is a proponent of statically typed languages or not, clearly an unresolved question. For dynamically typed languages, the question is moot, so the rest of the discussion is irrelevant to those who think that static typing is a bad idea.

(vi) Types (1), type inference: Slam dunk winner. All new languages have inference or wish they could have it but don't know how to do it (Scala). There are probably some hard trade-offs between expressivity of types and efficient decidability of type inference (B Pierce termed Hindley-Milner a "sweet spot"), so we might have to tolerate some type annotation being required, e.g. in local or bidirectional inference.

(vii) Types (2), algebraic data types, pattern matching: winner. Most new languages have them in some form (a small sketch appears below, after (ix)).

(viii) Types (3), monadic encapsulation of effects: open. Most new languages have not yet embraced this. Some conceptual problems not yet solved, e.g. destructors for state monads. There are also expressivity problems, e.g. not all effects can easily be expressed as monads, and composition of monads doesn't always work as desired. Moreover there are pragmatic questions about whether monadic encapsulation should be required or optional.

(ix) Types (4), higher-kinded types: open. Not all new languages have them; HKTs are generally considered useful, but not super important, cf. Rust's evolution. If monadic encapsulation of effects is used, then HKTs become highly desirable, cf. the recent Scala fork.
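As an aside, here is a small Haskell sketch of (vi) and (vii) (illustrative only): an algebraic data type, pattern matching over it, and a function whose type the compiler can infer (the signature on `area` could be omitted entirely).

    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    main :: IO ()
    main = print (map area [Circle 1, Rect 2 3])  -- [3.141592653589793,6.0]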

   --------------------
Any other suggestions?


I wish more discussions of the values of FP were this pragmatic. FP has brought some very valuable ideas into the mainstream, along with a few bad ones and many for which the jury is still out. I get a little tired of the people that see Haskell, for example, as the One True Way when it's clear it's a language design with its own set of tradeoffs, just like all the rest.


I guess there are a few people out there who would claim a "One True Way," but mostly that seems to be a stereotype?

As for me, I get a little tired of these pseudo-objective "debates" trying to decide which programming language is "better" by evaluating bullet lists, trends, and so on.

It seems pretty clear that different people can be very productive in different languages and using different techniques. Different projects and companies use different language; that's a plain fact.

For someone who's working with a happily functioning system written in, say, Haskell, it's a little weird to go on discussion forums and see people arguing over whether or not Haskell is a "productive" language.

In almost every HN thread about something related to Haskell, there'll be a few comment threads started by someone who claims that "functional programming doesn't work in the real world" or that "laziness is only good for research papers" or something like that. That's frankly flamebait by now, and I'm generally saddened by how much of the discourse about functional programming (on here, specifically) becomes this kind of groundless fight about whether or not it's possible to build real software with it.

Yes, of course Haskell, as a real-world thing instead of a perfect Platonic idea, has flaws, things that could be improved, problems one has to work around, and so on. That is inevitable and it is true for all programming systems ever!

It's always interesting to hear from people who have made good-faith efforts to use functional programming techniques and systems, and failed. Specific complaints, even feelings and vague opinions, are useful information—when they come from experience.

Anyway, that's my rant for today about the discourse on functional programming...


I think we all agree that the most important variables determining a programmer's productivity with a language are (a) how familiar/comfortable the programmer is with the language and (b) what libraries and tools are available. Language discussions take this as given. They are about what we can say about languages if we hold these factors (familiarity, ecosystem) constant. Is there still a difference? Those who participate in such debates think yes, there is.

There is another dimension to language wars: discussing languages is part of language learning, part of becoming a master. After all, teaching/discussing is one of the most efficient learning tools, and having to defend one's language helps to clarify one's own thoughts, in part by looking at it from other people's perspectives.


I'm not sure I agree with your exact formulation of what determines a programmer's productivity (and I think the notion of "productivity" is somewhat mystifying), but I somewhat agree that we can abstract away the concrete differences in programmer familiarity and pragmatic tooling.

Although even there, I'm hesitant. There's a sense in which the idea of the language as separate from its tooling is a judgment in itself, one that tends to divide the language community into camps already. For example, Smalltalk, Forth, and Common Lisp are radically "integrated" systems, in which the languages themselves can be redefined at runtime. There, it's in a way academic to talk about the formal grammar and semantics of the language in a "Platonic" sense.

But that might be a tangential point.

Probably I am uninterested in and somewhat averse to the kind of language discussion you're defining. I think that "language systems" operate in the world in complex ways, only a fraction of which are accessible through bullet-point-style comparisons of PL features.

And something about the interminability of "language wars," like the eternally recurring static-vs-dynamic skirmishes, fuels my aversion. I've heard so many pseudo-objective arguments for and against static typing, and these arguments have had so little cash value, that I've become skeptical of the whole mode of discussion.

So I'm much more interested to hear about, say, concrete successes or failures (or ongoing attempts) using static or dynamic typing (or laziness, or what have you) in concrete real-world settings.


> For example, Smalltalk, Forth, and Common Lisp are radically "integrated" systems, in which the languages themselves can be redefined at runtime. There, it's in a way academic to talk about the formal grammar and semantics of the language in a "Platonic" sense.

Not as academic as you might think. We could analyse how easily a language makes such an integration possible. I'm not sure you could have a lispy environment in C, for instance.

Once you know what kind of ecosystem a language enables, you can compare those ecosystems. (As opposed to existing ecosystems, which are highly influenced by sheer popularity and historical accidents.)


Thought experiment: if Haskell were actually the One True Way, what would you expect to be different? Would you be able to tell?

To say "every language design has its own set of tradeoffs" implies that all languages are equal, but that's clearly not true; we can all think of languages that are worse than others. I'm not at all convinced that there isn't an overall best language out there (I don't think it's Haskell, FWIW).


'To say "every language design has its own set of tradeoffs" implies that all languages are equal'

I don't think that implication is there at all. It absolutely doesn't hold for strong notions of equality - strong equality of all languages should imply that they all share a single set of tradeoffs.

It also clearly doesn't hold for "equal suitability to any purpose", since something less suitable to the purpose may be traded off for something more suitable.

I think any general notion of "equality of quality" can probably be viewed as a special case of the above reasoning - we are picking a set of attributes, of benefits and drawbacks - but we've no guarantee that they all net out the same however we are measuring them.


Is there a best human culture? Clearly some are further down the tech tree than others, but...is imperialism really inevitable?

PHP isn't a bad language if its tradeoffs work for you. But you would be hard pressed to see anything good today in BASIC from the 80s (just as we might miss things if we were all of a sudden sent 1000 years back in time). There is no best PL, but there are new PL technologies that make programmer experiences better; using a language that doesn't include at least a few of those technologies is probably going to be a handicap (compared to your peers who are using them). Haskell is a bit up the tech tree, but not incredibly so compared to other modern languages like Scala and even C#.


It's politically unacceptable to say so, but some human cultures are clearly worse than others, and yes, I expect there probably is a best culture. I wouldn't begin to guess at which one that is, but I would take people seriously if they're saying that a particular culture has got particular things right and other cultures can learn from that. The problem with imperialism was less the culture and more the exploitation and theft, which is not really relevant to the programming language case.


There is probably not a best culture (though there are definitely bad ones), just cultures that are further down the tech tree and therefore can out compete other ones. The same is true with programming languages.


I would be interested to hear what the bad ones are.


(iv) I come from an EE background, so when it comes to fundamentals I find it a help to think in terms of message passing and transmitters and receivers. It clears up some of the debate for me about which is more fundamental, objects or functions. To me they are all things (transceivers) that transmit and/or receive messages. In the case of a function the transceiver is alive for the duration of the call. In the case of an object or actor it may be alive for longer (and, if stateful, respond in ways that are harder to predict and reproduce).

But this is just a way of thinking that makes me feel comfortable because I like to have a physical model. If someone else is happy with function application as being the fundamental building block then I have no problem with that. However you said that "there are theoretical reasons to believe that message passing is not function application (while function application is a special case of message passing)." I would be interested to get more background on this. Would you have any references for this? Thanks.


To the best of my knowledge, the insight that function application (and other control structures) can be seen as special cases of message passing comes from the actor people [1, 2]. The first proper mathematisation is Milner's breakthrough encoding of lambda calculus in pi-calculus [3]. This led to fine-grained investigations into what kinds of interaction patterns correspond to what kinds of functional behaviour (CBV, CBN, call/cc etc), which in turn inspired a lot of work on types for interacting processes.

I don't remember off the top of my head who first showed that parallel computation has no 'good' encoding into functional computation (lambda-calculus). I'll try to dig out a reference and post it here if I find it.

But the upshot of all this is that message-passing is more fundamental than functions / function application.

[1] C. Hewitt, H. Baker, Actors and Continuous Functionals.

[2] C. Hewitt, Viewing Control Structures as Patterns of Passing Messages.

[3] R. Milner, Functions as Processes.


You can take this work much further in an "FP" kind of way by noting that the pi calculus is a good proof system for linear logic. Frank Pfenning has some lectures on this.


I don't fully agree with Pfenning here. The pi-calculus is not that good a proof system for linear logic: pi-calculus imposes sequentiality constraints on proofs that impede parallelism. Take for example the term

     x!<a> | y!<b> | x?(c).y?(d).P
You have to sequentialise the two possible reductions on x and y. The culprit is the pi-calculus's input prefix, which combines two operations: blocking input and scoping of the bound variables (here c and d). This sequentialisation is not true to the spirit of linear logic proofs (think e.g. proof nets).

Conversely, I don't think linear logic is a good typing system for pi-calculus for a variety of reasons: among them that linear logic does not track causality well, and because it doesn't take care of affine (at most once) interactions well.


These are good points and I'll have to consider them. I was planning on going back through Pfenning's notes again in a bit and I'll keep a more critical eye this next time. Thanks!

(Also, to be clear and with reference to other threads we've conversed on here, I think of linear logic and pi calculus, even if they aren't well-corresponding, as fantastic, unavoidable examples of non-function-application style programming.)


Thanks very much for that. I'll read them all. (I did a bit of reading around actors before including some Hewitt but didn't pick up on the message passing equivalence. Will read closely. The Milner paper looks very interesting)


"Some conceptual problems not yet solved, e.g. destructors for state monads"

I'm not sure what you're talking about here - could you elaborate?

My best guess would be that by "state monads" you mean streaming libraries like pipes and conduit, where there have indeed been some interesting questions around finalization. Calling these "state monads" is surprising enough that I seek clarification, however. Regarding those libraries, I don't know the current status of that discussion (which mostly took place a couple years back) but I do know the libraries themselves are quite useful.


Thanks for your question. Consider this stateful program in pseudo-code

    def f x = 
       let i = ref 17 in
       while i > 0 { i := i-1 }
       return i 
It is referentially transparent. The literal translation into Haskell needs the state monad, despite referential transparency.


This is a regionalized or "benign" effect. To the degree that this can be proven in Haskell's synthetic effect calculus it's OK. Since Haskell does have `unsafePerformIO` we can, with great care, introduce new rules to the type system. This is how `ST` was created

    -- we need a primitive first
    while :: Monad m => m Bool -> m a -> m ()
    while conde bodye = do
      cond <- conde
      if cond then bodye else return ()

    -- then we can make a direct, verified pure translation
    f :: a -> Int
    f x = runST $ do
      i <- newSTRef 17
      while (readSTRef i >>= \ival -> return (ival > 0)) $ do
        modifySTRef i (\ival -> ival - 1)
      ival <- readSTRef i
      return ival


I think your "while" is "when".


Oh, thanks! Good catch. Here's `while`

    while :: Monad m => m Bool -> m a -> m ()
    while conde bodye = do
      cond <- conde
      if cond then bodye >> while conde bodye else return ()


There are several candidate translations. Which you might choose depends on those constraints on the actual problem (instead of this pretend one) that make "f x = 0" not an eminently better solution.

The one that probably most closely matches the semantics of the pseudo-code would be:

    let f0 x = unsafePerformIO $ do
        iRef <- newIORef 17
        whileM_ (fmap (>0) $ readIORef iRef) $ do
            modifyIORef iRef (\ i -> i - 1)

        readIORef iRef
Better would be to let Haskell help check that the mutation is local and no references escape:

    let f1 x = runST $ do
        iRef <- newSTRef 17
        whileM_ (fmap (>0) $ readSTRef iRef) $ do
            modifySTRef iRef (\ i -> i - 1)

        readSTRef iRef
There's the more idiomatic translation, where for something this simple we wouldn't bother with State:

    let f2 x = go 17
      where
        go i | i > 0 = go (i - 1)
             | otherwise = i

The literal translation using State is:

    let f3 x = execState countdown 17
      where
        -- whileM_ is from Control.Monad.Loops; its condition must itself be
        -- monadic, hence `gets (> 0)` rather than a plain (> 0)
        countdown = whileM_ (gets (> 0)) $ modify (\ i -> i - 1)
In all cases, the function as a whole is referentially transparent.

In your function, "while i > 0 { i := i - 1 }" is clearly not referentially transparent. Attempting to transliterate in a way that preserves this expression, it is unsurprising that we wind up with expressions that depend on state. The State monad is one way of encapsulating this. Note that f3 and f2 are actually very similar. The execState function doesn't do any deep voodoo - a value of type "State s a" is a thin wrapper around a function of type "s -> (a, s)"; execState simply extracts this function, applies it to a value, and returns the second part of the output.

The actual definitions are equivalent to these (albeit slightly more complicated to keep things DRY w/ transformers):

    newtype State s a = State (s -> (a, s))

    execState (State f) s = let (_, s') = f s in s'

What I don't see, in any of this, is a problem (conceptual or practical) involving "destructors". Is your objection to the need to call execState (or its friends)?


Thanks. Do any of these renditions have the target type int -> int?


All of them.

More precisely, since the argument isn't used its type is unconstrained, and the result could be any numeric type, so they all actually have type (Num b => a -> b), but that can be used anywhere you're expecting an Int -> Int.


> The literal translation into Haskell needs the state monad, despite referential transparency.

Sure, literally translating imperative procedures into Haskell code often needs the State monad since imperative procedures gratuitously use state modifications that are logically unnecessary for their function.

But I'm not sure why any meaning should be ascribed to this beyond "translate function, not code".


Could you be more precise? The obvious way to translate this into Haskell is 'f x = 0'.


While f x = 0 is extensionally equal to the original program, it doesn't implement the algorithm in question (iteratively counting down a local reference).


Ok, so how about

    f x = g 17 x
        where g i x = if i > 0 then g (i - 1) x else i


The point of the example was to (1) use mutable state (hence force the state monad to show up in the function's type), but (2) in a way that defines a pure function.

The example is intended to point towards an expressivity gap of the monadic encapsulation approach to effects, in that one can't (as far as I'm aware -- I have not programmed in Haskell) hide state usage, even when this is extensionally OK.

It is an open research question to define type/effect systems that (a) enable programmers to hide such benign state usage, (b) preserve type inference for interesting typing systems (say F-omega which is the heart of Haskell's types), and (c) are pragmatically viable.


"The example is intended to point towards an expressivity gap of the monadic encapsulation approach to effects, in that one can't (as far as I'm aware -- I have not programmed in Haskell) hide state usage, even when this is extensionally OK."

You absolutely can do exactly that. Even without cheating (unsafePerformIO and friends), usually.


If this is always possible without cheating, then I have definitely misunderstood something about Haskell.

Could you point me towards an explanation how the hiding of stateful expressions works in Haskell in general?

Let me clarify my question (a bit). Are you saying one of the following is true?

(1) You are saying for any program M that uses state (i.e. M's type mentions the state monad in an essential way) but in a pure way, i.e. M's stateful behaviour is not visible from the outside, there is a context C[.] (not involving tricks like UnsafeIO) such that C[M] has the same behaviour as M (as seen from the outside), and C[M] has (essentially) the same type as M, except that the state monad is gone?

(2) There is a 'nice' translation function f on programs such that for any program M that uses state in a pure way (same as (1)), f(M) has the same behaviour as M (as seen from the outside), and f(M) has (essentially) the same type as M, except that the state monad is gone?


I'm not sure I understand the distinction (quite possibly just 'cause it's early).

A fruitful way to look at monads is to say that a value of type M a (where M is an instance of Monad) is a program in a language. State is a language for describing stateful computation. Reader is a language for describing parameterized computation. ST is a language for describing controlled, local mutation. IO is a language for describing arbitrary side effects.

Most of these have pure interpreters, with the most generic typically called runM (so, runState, runReader, &c). The only place cheating comes in is unsafePerformIO - interpreting IO is the job of the runtime, which does it in a more controlled way relative to the rest of your program.

The type of the return value of an interpreter, of course, doesn't tell you what kind of program you ran - it just tells you the result.


Revisiting to speak directly to the question you've asked, both 1 and 2 are true for the State monad.

For 1, to get from a value of type "State s a" to a value of type a, you simply call evalState and supply an initial state of type s.

For 2, you could look at the structure of the program and unwrap all (State s a) values to have type (s -> (a, s)), updating the various functions used to operate on (State s a) values to match.

In either case, you would not see State in the resulting type signature. In case 2, you would still need to provide an initial value of type s - you can't do much interesting with a function but apply it.

Note that the function in 2 is more theoretical than practical. You could in principle write it with Template Haskell, but since it doesn't change the behavior and you can just do 1 to change the type the same way there isn't reason to.
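To make (1) concrete, here is a minimal self-contained sketch, assuming the mtl package (names are just for illustration): the countdown loop lives in the State monad internally, but execState discharges it, so State never appears in the outer type.

    import Control.Monad (when)
    import Control.Monad.State

    -- Externally just Int -> Int; the State Int monad is used only inside.
    countdownFrom :: Int -> Int
    countdownFrom n0 = execState go n0
      where
        go :: State Int ()
        go = do
          i <- get
          when (i > 0) (put (i - 1) >> go)

    main :: IO ()
    main = print (countdownFrom 17)  -- prints 0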


Thanks. This is very interesting, but different from what I used to believe about Haskell (as you can imagine, I'm not an active Haskell programmer). I will have to revisit my ideas.

I still wonder how this would work with scope extrusion of references. Consider this little Ocaml program

    let ( impure, pure ) =
      let i = ref 0 in
      let impure x = ( i := !i + 1; x + !i ) in
      let pure x = (
        i := !i + 1;
        let res = !i + x in
        i := !i - 1;
        res - !i ) in
      ( impure, pure );;
which returns two functions called "pure" and "impure". Both are of type int -> int. The function "pure" is computing the successor function (let's ignore corner cases), but using hidden state. The function "impure" is using the same hidden state, but is clearly impure.

Is it possible to do the same in Haskell? By "the same" I mean giving "pure" the type int -> int, despite using hidden state that is used elsewhere?


It's harder to be confident about impossibility than about possibility, but I think this would require "cheating". I'm not sure if you meant it as a bug, but note that outside of a single-threaded context "pure" isn't referentially transparent - there seems to be a race condition that could lead to pure computing different values if impure is called during pure's execution.

Something with a similar form that would be more realistic to ask for would be a cache for a pure-but-expensive function and some IO operations to inspect/manipulate that cache. I believe you would need a call to unsafePerformIO, but that could be a reasonable thing to do.
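For what it's worth, here is a sketch of what that cheating could look like, mirroring the shape of your OCaml example with a top-level IORef via unsafePerformIO (illustrative only, not a recommended pattern, and ignoring the thread-safety caveat above):

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    counter :: IORef Int
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}

    -- Impure: the result depends on how often it has been called.
    impure :: Int -> IO Int
    impure x = do
      modifyIORef counter (+ 1)
      i <- readIORef counter
      return (x + i)

    -- "Pure" on the outside (Int -> Int), but it bumps and then restores
    -- the shared hidden counter while computing the successor.
    pureSucc :: Int -> Int
    pureSucc x = unsafePerformIO $ do
      modifyIORef counter (+ 1)
      i <- readIORef counter
      modifyIORef counter (subtract 1)
      j <- readIORef counter
      return (i + x - j)

    main :: IO ()
    main = print (pureSucc 41)  -- 42

This is exactly the kind of thing the type system is designed to discourage, which is why it counts as cheating.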


Thanks. I know that "pure" is not thread-safe. I'm quite happy to consider the expressivity of Haskell's monadic approach to effects in a sequential setting first -- much easier to reason about. I wonder if anybody has investigated this expressivity problem. If not, there's an interesting problem waiting to be tackled.


I don't see the problem. Pure code being automatically thread-safe is a boon. I would much rather be unable to write thread-unsafe code without noticing (and able to when I deliberately decide to).


My interest is in proving expressivity results. Such proofs are much easier in a sequential setting. Indeed, at this point I don't even have a strong intuition what exactly the expressivity limitations of Haskell's monadic effect control are.


Really, really have to say I disagree with (iii). Purity is growing more and more day by day. From other comments you seem to have taken a fairly non-standard definition of purity and I'd encourage you to stick with the normal one as it'll let you talk about more interesting things—see further comment on (viii).

I don't think (iv) is open at all. The fact that you can encode everything as function passing is an interesting trick, but it's merely one of several (you can, e.g., also encode everything categorically or via supercombinators and graph reduction, which turns out to be more practical for lazy languages). Function application is just one really well understood semantic foundation.

I find (v) far more subtle than you make it out to be. The standard static/dynamic types divide is silly because both sides are talking about different things. The static side wins completely because honestly, taken at face value, they don't claim that much. All languages are typed, some have interesting type theories, some compilers reject programs which are syntactically OK but violate the type theory.

(viii) is interesting. In my mind, you've conflated this with what most people would call purity---however, monads are merely an implementation of effect typing/purity. Others exist and are experimented with. Also, composition of monads always works as desired... there's no strange edge-case to it. It's merely that they don't compose like functions in a simple way. When you examine what they would entail you learn that they ought not to as well.

(ix) HKTs are sort of a gateway drug to type-level computation. It's remarkable how many languages just more or less let their type-level language wither and die. I'm not claiming that HKTs are a slam dunk victory, but I would be willing to bet that the evolution of languages will lead us to (a) having them universally in the next two decades and (b) wondering why everyone was so hung up on HKTs---I mean, I suppose I can understand singletons, but HKTs? Really?


     monads are merely an implementation of effect typing/purity. 
Yes, and the one Haskell has chosen. Is it the right way? Most newer languages are not following Haskell here; instead they have effects like state and exceptions 'directly'. That's what I was pointing out. And yes, composition of monads is a bit non-canonical.

As to HKTs, I agree, languages should have them. I was just pointing out the state of the art, post Haskell.


I would suggest that effects which are "directly" embedded are impure effects (w.r.t. our other thread). Essentially, without reifying them into values with types it's harder to reason about them.

A more interesting dividing line are things like computation effect typing in languages like Bauer and Pretnar's "Eff" or McBride's "Frank".

http://www.eff-lang.org/

http://homepages.inf.ed.ac.uk/slindley/papers/frankly-draft-...

But yes, if you consider other "practical, modern" languages, many retain impurity.


There are very different options.

First, approaches like Eff take pure functional computation as a basis and layer effects like call/cc or concurrency on top of that. I think it's better to go the other way: take concurrency (or, if you want to start sequential, a lambda-calculus with jumps a la call/cc or lambda-mu) as a basis, and see pure functional computation as a restriction of the base calculus.

Second, it's unclear that effects should be reflected in types for reasoning. That road leads to complicated typing systems without type inference (see also dependent types). I think the following alternative is more appealing: stick with lightweight types (think Hindley-Milner), and leave reasoning to an external calculus (variants of Hoare logic).


That "analytic" style you suggest is really interesting, too. I'm not sure I have a dog in the ultimate race and I'd love to see both analytic and synthetic styles flourish, but from a mere terminology point of view I'd distinguish between the two languages in means of describing them as "pure" or not.

I also really worry that external proofs will ultimately be neglected in practice. I suppose this is again a "UX of Types" thing, but I'm something of a believer that anything that you do not literally demand of the programmer will eventually be sacrificed.


I'm not totally sure what you mean by "external proofs will ultimately be neglected".


     I find (v) far more subtle than you make it out to be.
I don't want to start or be involved in another static-vs-dynamic typing debate. I have a strong preference for static types, but I appreciate the arguments of the other side, and contend that neither side has really convincing empirical arguments supporting their respective positions.


Eh, I just find the whole "debate" silly. The two systems aren't comparable and can and do live together side-by-side in every program. I think I agree that empirical evidence here is going to be either silly or impossible to achieve for a long time.

So, I guess I just tire of it. If you're making a language today you can either take advantage of information known statically or not. I think there's a relative burst in activity here as some of the more genuinely useful forms of static analysis have become recently more popularized and I personally feel it's silly to not use this information... but it's just a choice, not a religious war.


    Purity is growing more and more day by day.
What I meant was this: a language should not be pure by default, rather, it should be possible to say (and get checked by the compiler): this is pure. Quite how to set up a pragmatically viable type and effect system to guarantee this is unclear.

PS what is the "normal" definition of purity?


Hm, I think given your stance now I'm not sure whether I properly understood what you said elsewhere on this matter.

"Purity" is a tricky technical concept that can be defined, but rather than do so explicitly I'd rather note that it necessarily requires the asserter to take a stance on what "observable" means. Haskell is clearly impure in that its laziness is implemented using mutation "behind the scenes", but it is also taken to have an "observable" semantics where such mutation is invisible and IO is merely an interesting value. This gives it purity [0] according to that degree of observability.

I think I'm having difficulty parsing your interpretation of purity because you're moving smoothly between analytic and synthetic judgements of the notion. Agda, as an example, is a system which has a synthetic notion of purity as the "language" is thought to not contain terms which represent effects. Haskell plays a trick where it has, essentially, two sets of semantics. The first shares this property (modulo non-termination, see [0]) and is where most "programming" occurs---here we use monads as a system for constructing terms of the second semantics, IO. The second, IO, semantics are vastly impure.

To the degree that this trick is "purity in a language" then purity is becoming a very popular idea. Haskell uses its type system to provide synthetic distinction between the two choices of semantics, but if you use a language lacking that discipline then you can merely call this "isolation of side effecting code" and then nobody will disagree with you.

If you don't want to call this "purity", and maybe you don't, then I feel you're in the minority though you aren't without an island to stand on.

Frankly, both are sort of an overloading of the term, though.

[0] Actually, it gets a lot trickier. It's well known that non-termination is an effect, and it's a little slippery to define Haskell's notion of "observability" so as to no longer observe non-termination. I mean "slippery" here to say that it takes some technical finesse and that it tends to go against people's intuition a bit more harshly than the rest.
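A hedged example of why that's slippery (made up for illustration): under lazy evaluation, a divergent term is only "observed" if it is actually forced.

    loop :: Int
    loop = loop  -- a divergent ("bottom") value

    main :: IO ()
    main = print (fst (42, loop))  -- prints 42; 'loop' is never forced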


     I don't think (iv) is open at all ... Function application is just one really well understood semantic foundation.
I disagree, see my reply to "discreteevent" above. Parallel computing cannot be reduced to functions.


No, sorry, to be clear: I don't believe that "reduction to functions" is anything but a clever or perhaps illuminating toy, and I think (iv) is closed in the opposite way: it's certainly not true that "reduction to functions" is right.


Regarding purity and encapsulation of effects - I would instead say that it's just not decided if monads are the best mechanism for controlling effects, but there is still a lot of interest in some mechanism for controlling and reasoning about them.


I fully agree with that, that's why my point (viii) was: "monadic encapsulation of effects: open". But it's important to realise that (a) the existence of effects and (b) their control at the level of types are two different things.


I agree mostly with all your points.

Regarding laziness by default, although you are right, I would say the lazy modifier found in Scala/ML derivatives/C#/D and the use of sequences/ranges gets you about halfway there.


Scala doesn't have lazy evaluation of function arguments, it's got call-by-name. The difference is that lazy evaluation caches the value an argument evaluates to the first time it is needed, so it's evaluated at most once, whereas call-by-name simply passes the parameter unevaluated and the result of evaluation is not cached, so it might be computed more than once.

Consider this Scala program

     object Main {

       def tryit [ T ] ( block : => T ) : Unit = {
         println ( "----- Running 1st block -----" )
         block
         println ( "----- Running 2nd block -----" )
         block
         println ( "--------- Finished ----------" ) }

       def main ( args : Array [ String ] ) {
         var i = -1
         // the by-name block is re-evaluated on every use: it prints
         // "i = 0" the first time and "i = 1" the second time
         tryit ( { i += 1; println ( s"i = $i" ) } ) }

     }


Does scala not have lazy vals anymore?


Sorry, I had forgotten about lazy vals. Thanks for reminding me. They are cached, so properly lazy; see e.g. https://stackoverflow.com/questions/3041253/whats-the-hidden...
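For comparison, a small Haskell sketch (using Debug.Trace purely for illustration) of the sharing that lazy vals approximate: the expensive thunk is forced at most once even though it's used twice.

    import Debug.Trace (trace)

    expensive :: Integer
    expensive = trace "evaluating..." (sum [1 .. 1000000])

    main :: IO ()
    main = print (expensive + expensive)  -- "evaluating..." appears only once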


Holy shit ... is your space bar broken?


> Purity: dead. All new languages have side-effects. The widely used FP languages like Haskell, Ocaml and F# have side-effects.

You don't consider Haskell to be pure? A big win of using monads for side effects was to keep purity, right?


Before using the word "pure" it should be mandatory to define the sense in which you are using it. There are at least two senses, and people talk at cross purposes and get awfully antagonistic.

Sense (1): The return value of f can only depend on x, and not y (or anything else)

    x   = ...
    y   = ...
    ret = f x
Sense (2): A program written in the language can't interact with the outside world.
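A small, hedged Haskell illustration of the two senses (function names invented): sense (1) is a property of individual functions, sense (2) is a property of whole programs, and Haskell arguably satisfies the first but not the second.

    -- Sense (1): the result of 'f' depends only on its argument.
    f :: Int -> Int
    f x = x * 2

    -- Sense (2) clearly does not hold: the program as a whole
    -- interacts with the outside world via 'main'.
    main :: IO ()
    main = print (f 21)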


"The answer on this one depends on whether one is a proponent of statically typed languages or not, clearly an unresolved question" - I have yet to see a static-to-dynamic transpiler ;-).


Purity is not dead. Haskell 'has' side effects but they are explicitly managed. Managing side effects and relegating them to well-defined areas lives on in newer languages.


There is no need to put "has" in scare quotes. Haskell does have side effects like C, Java and Python (although maybe not quite as conveniently). If you want to look at side-effect free languages, consider Miranda, Coq, Agda.

What Haskell contributes (in a concrete implementation, the ideas go back to Moggi's theory) is the monadic encapsulation of effects, which I treat as a separate point under (viii). Monadic encapsulation of effects is not absence of effects. It's a reflection of effects at the type level.
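Concretely (a hedged sketch, not from the thread): the effect shows up in the type, so effectful and pure code stay distinguishable to the compiler.

    -- Pure: no effect visible in the type.
    pureLen :: String -> Int
    pureLen = length

    -- Effectful: the IO in the type records that reading a file is involved.
    fileLen :: FilePath -> IO Int
    fileLen path = fmap length (readFile path)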


You're using a somewhat non-standard definition of "purity", which makes your claims more confusing than they need to be. Haskell is usually referred to as pure, and has had encapsulated effects from the start. Your claims are true using your definition of "purity", but false using the more common one.


As a point of historical note, Haskell has not had encapsulated effects from the start. Monadic IO was introduced in version 1.3, roughly 6 years after version 1.0.

http://research.microsoft.com/en-us/um/people/simonpj/papers...


Thanks for the correction! I wondered about that when I was writing it but didn't have time to go double-check.


It is an absence of side effects because side effects are defined as effects that cannot be accounted for. As soon as you have a mechanism to document those effects, they are not "side effects" any more.

    hello :: IO ()
    hello = putStrLn "Hello World"

`hello` is a "pure" value of type IO (). putStrLn a "pure" function of type -> IO ()

A value of type IO () is a value like any other "pure" value, such as a value of type [Int].
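To push the point (a throwaway sketch, names invented): because IO actions are ordinary values, you can store them in a list and decide later whether and in what order to run them; nothing happens until they are sequenced from main.

    greetings :: [IO ()]
    greetings = [ putStrLn ("Hello, " ++ name) | name <- ["World", "HN"] ]

    main :: IO ()
    main = sequence_ (reverse greetings)  -- nothing is printed before this line runs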


C is "pure" too.

The statement

   puts ("hello");
is a "pure" value of type int.


No, that statement evaluates to a pure value of type int. In the process, it does something else as well that affects the visible behavior of the program.

The C equivalent of

    putStrLn "hello"
is probably actually

    void putsHello(void) {
        puts("hello");
    }
Unfortunately, function definitions aren't something you can really manipulate at runtime in C.


Here is a comment I wrote about the same issue some time ago:

https://news.ycombinator.com/item?id=9085403

Haskell IO is severely misunderstood.


Can you clarify the essential difference between Haskell's and Agda's versions of monadic I/O?

Or are you talking about divergent side-effects like bottom and exceptions?


Thanks for your question. I should have been more careful. I meant Agda used as a prover, not Agda used as a 'conventional' programming language (indeed, in years of usage, I've never executed an Agda program; I've only ever used the type checker). In other words, Agda, the proof calculus for Martin-Löf type theory.

That said, I was talking about effects in general, including non-termination, exceptions, concurrency ...


Is purity dead? Do you consider Idris a theorem prover? It's dependently typed but seems to be designed for writing operational programs rather than proofs.


I'm not familiar enough with Idris to know exactly how state is handled, but yes, Idris is a prover, since it has dependent types.

If you want to verify your programs (and for 99.9% of programs you probably don't, as you get them sufficiently correct by more lightweight techniques such as testing and code reviews), then purity is fine, since you have to track effects explicitly anyway to prove non-trivial results, and doing this in a pure functional way or, say, with auxiliary variables in a Hoare logic are kind-of/sort-of notational variants.


If "prover" just means "has dependent types" then that's a much larger class than it sounds like. I write a lot of Scala, which has dependent types, but most of the code I write is just ordinary web services.


I meant dependent types a la Agda, Coq etc that you can use for proving essentially arbitrary properties. Scala and Haskell do not have this.


I don't understand the distinction you're drawing. Idris' type level functions are easier to write than Scala's but they don't seem to be able to do anything that Scala ones can't.


Dependent types aren't only useful in languages that prove theorems, so I think a better qualification for "provers" might be termination checking.


To be clear, Haskell "has dependent types" to various degrees as well.
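For example (a hedged, GHC-flavoured sketch): with DataKinds and GADTs you get length-indexed vectors, which is dependent-type-ish even if it falls well short of Agda/Coq.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- A vector whose length is tracked in its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- The type rules out taking the head of an empty vector.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x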


Purity is not dead. My newish functional language is pure.



Can you quote the "extravagant claims" made in this paper? This is from the abstract:

> We conclude that since modularity is the key to successful programming, functional programming offers important advantages for software development

The conclusion section is more elaborate, but I don't see anything there that warrants the label "extravagant".


Probably one of the most popular reposts on HN, if you search this title.


I don't find the parts on lazy evaluation very convincing.


FP and the arguments for it have evolved in the 25 years since this paper. Laziness by default was important because it forced language and library developers to invent new purely functional constructs such as monads and functional reactive programming. Functional programming has won: lambdas are in every language, and even mainstream GUIs are getting programmed with React.js, a pure Flux architecture and Immutable-js.


Given that "anonymous function" is the same as "lambda", please use "anonymous function", because it's self-explaining.


If functional programming actually mattered we would stop having to remind ourselves why it matters.



