I am deeply impressed by the depth and breadth of this language. Algebraic data types, logic programming, mutability, all there from the get-go.
Another aspect that I love from their comparison table is that a single executable is the package manager, the LSP, and the compiler. As I understand it, the language server for Haskell has/had to do a lot of dances and re-implement things from GHC, juggling the particular GHC version and your cabal file. And maybe stack too, because I don't know which package manager is the blessed one these days. Not to shit on Haskell -- it is actually a very fine language.
However, the best feature is a bit buried and I wonder why.
How ergonomic is the integration with the rest of the JVM, from the likes of Java? AFAIK, types are erased by JVM compilers... With the concept of `regions` they have at least first-class support for imperative interaction.
Note: With the JVM you get billions worth of code from a high-quality, professional standard library, so that is a huge plus. That is why the JVM and .NET Core are IMHO the most sane choices for 90+% of projects. I think the only comparable language would be F#. I would love to see a document about Flix's limitations in the JVM interoperability story.
__EDIT__
- There is a bit of info here. Basically all values from Flix/Java have to be boxed/unboxed. https://doc.flix.dev/interoperability.html
- Records are first-class citizens.
oh my i just know you're going to love unison
I agree, somewhat, but "StringBuilder"... Hmm... Leaning towards Java a lot in this aspect. Not sure I like that. The rest does seem fine at a quick glance.
The StringBuilder example is just that -- an example that many software developers should be familiar with. The deeper idea is that in Flix one can write a pure function that internally uses mutation and imperative programming.
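For the curious, a rough sketch of what that looks like with a region; the StringBuilder function names here (empty, appendString!, toString) are recalled from the standard library and may be slightly off:
    def greeting(): String =
        region rc {
            // The StringBuilder lives in region rc and cannot escape it,
            // so greeting() is observationally pure from the outside.
            let sb = StringBuilder.empty(rc);
            StringBuilder.appendString!("Hello", sb);
            StringBuilder.appendString!(" World", sb);
            StringBuilder.toString(sb)
        }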
Plus that you regretted having `+` as a concatenation operator.
However, I believe an even better design choice would be to forgo string concatenation and instead rely entirely on string interpolation. String interpolation is a much more powerful and elegant solution to the problem of building complex strings.
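For instance, a small sketch (the bindings are just illustrative; interpolation goes through the ToString trait):
    def greet(): String =
        let name = "Alice";
        let age = 42;
        "Hello ${name}, you are ${age} years old"  // no `+` needed anywhere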
The logic programming / datalog feels a bit gimmicky on top of everything else. All the other features, I can see exactly how they'd improve the type soundness of a codebase. But logic programming is really niche and I'd almost rather it be independent of the language.
Right, it feels like a standard example of a Lispy library (Datalog), and a Prolog monad is standard teaching material. I am of the opinion that Flix is strictly worse than Idris2.
F# doesn’t have type classes (yet?) so programming with monads can be quite limited.
It would be interesting if F# skipped Haskell style monads and jumped straight to algebraic effects. They seem like a better fit for F# philosophy in any case.
Right, they have something like computation expressions, but they are not composable.
For your second point, I don't know if they could achieve that without type-level programming. This is the Pandora's box the designer of F# tried [0] not to open.
____
0. https://github.com/fsharp/fslang-suggestions/issues/243#issu...
Not in all the cases (it keeps type parameters for anonymous classes) and there are various workarounds.
Also, essentially, it's not a problem at all for a compiler, you are free to render applied type constructors as regular classes with mangled names.
The parent poster is correct. We do monomorphization, hence Flix types are unboxed. For example, a `List[Int32]` is a list of primitive integers. There is no boxing and no overhead. The upshot is that sometimes we are faster than Java (which has to do boxing). The downside is larger bytecode size-- which is less of a factor these days.
Caveat: Flix sometimes has to box values on the boundary between Flix and Java code -- e.g. when calling a Java library method that requires a java.lang.Object due to erasure in Java.
As a non-functional-programming, C-language-familiar person, the syntax looks fabulous. It seems like the first functional language I've seen that makes simple things look simple and clear.
It's kind of a bummer that "skins/themes" never caught on for programming languages. You see it once in a while; I think some compiler people at one of the FAANGs did an OCaml skin/theme/alternative syntax (Reason, or something like that). And there's stuff like Elixir that's kind of a new language but also an interface to an existing world (very cool, Valim is a brilliant guy).
But you could do it for almost anything. I would love the ability to hit a key chord in `emacs` and see things in an Algol-family presentation, or an ML family, or a Lisp.
Seems like the kind of thing that might catch on someday. Certainly the math notation in things like Haskell and TLA was a bit of a barrier to entry at one time. Very solvable problem if people care a lot.
Flix continues to impress me as one of the most thoughtfully designed languages in the ML-family. Its blend of functional, imperative, and logic paradigms, plus a polymorphic type and effect system and first-class Datalog constraints, feels genuinely unique. The strict separation of pure and impure code at the type level is a refreshing alternative to monads and makes reasoning about effects much more straightforward.
I also appreciate the “one language, no flags” philosophy and the focus on compile-time errors only, which keeps things simple and predictable. The fact that Flix manages full tail call elimination on the JVM (despite the lack of native support) is a technical feat that deserves more recognition.
Curious to hear how folks are using Flix in production (or research), especially for logic-heavy or concurrent applications. Has anyone run into pain points with the closed-world assumption or the lack of exceptions? And how do you find the Datalog integration compared to Prolog or other logic languages?
On a language semantics note: the semantics of extending/restricting polymorphic records seem to follow Leijen's approach [0] with scoped labels. That is, if you have a record e.g. r1 = { color = "yellow" }, you can extend it with r2 = { +color = "red" | r1 }, and doing r2#color will evaluate to "red"... and if you then strip the field "color" away, r3 = { -color | r2 }, then you'll get back the original record: r3#color will evaluate to "yellow". Which IMO is the sanest approach, as opposed to earlier attempts to outlaw such behaviour, preferably statically (yes, people developed astonishingly high-kinded type systems to track records' labels, just to make sure that two fields with the same label couldn't be re-added to a record).
[0] https://www.cs.ioc.ee/tfp-icfp-gpce05/tfp-proc/21num.pdf
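In Flix syntax that behaviour looks roughly like this (a minimal sketch based on the record operations described above):
    def main(): Unit \ IO =
        let r1 = { color = "yellow" };
        let r2 = { +color = "red" | r1 };  // extend: the new field shadows the old one
        let r3 = { -color | r2 };          // restrict: drops the innermost `color`
        println(r2#color);                 // prints "red"
        println(r3#color)                  // prints "yellow" -- the original field is back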
I looked at Flix a while ago and found it really interesting - so much so that I wrote an article "Flix for Java Programmers" about it. Might actually be a bit outdated by now... need to look at Flix's recent development again.
But if you're interested: https://www.reactivesystems.eu/2022/06/24/flix-for-java-prog...
The language has improved a lot in the years since the post. In particular, the effect system has been significantly extended, Java interoperability is much improved, and some syntax has been updated.
Wow what a gold mine your blog is. It’s like a more elaborate and well thought through version of thoughts that have been torturing me for years. Looking forward to reading it all.
Wow, thank you so much, that's flattering. And motivating - I shall start blogging again this month, and try to stick to a monthly cadence. Make sure to subscribe to the RSS feed or follow me on Bluesky or Mastodon, to get notified for new posts :)
C++, C#/Typescript, Dart, etc all have strong roots in that one small area in Denmark.
In general, I am curious what makes some of these places very special (Delft, INRIA, etc)?
They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Is it the water? Or something else? :)
Little nitpick. C# was created by Anders Hejlsberg who studied at DTU (Copenhagen). He also implemented Turbo Pascal. Borland was also a company founded by Danes.
In general, programming language theory is pretty strong in Denmark, with lots of other contributions.
For example, the standard graduate textbook in static program analysis (Nielson & Nielson) is also Danish. Mads Tofte made lots of contributions to Standard ML, etc.
> They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Aarhus is an outstanding university. There are a couple of dozen universities in Europe that lack the prestige of Oxbridge but offer high quality education and perform excellent research.
Lineage? Aarhus has a strong academic tradition in areas like logic, type theory, functional programming, and object oriented languages. Many influential researchers in these fields have come through there.
I also think there's a noticeable bias toward the US in how programming language research is perceived globally. Institutions like Aarhus often don't invest heavily in marketing or self-promotion, they just focus on doing solid work. It's not necessarily better or worse, but it does make it harder for their contributions to break through the layers of global attention.
Yes exactly. Aarhus had Martin-Löf, Nygaard, etc. Similarly, INRIA has had many influential researchers, as well as OCaml and Rocq. Talent (and exciting projects) attracts more talent. But that doesn't mean it doesn't exist in the US. Penn, Cornell, CMU, MIT and others have historically had very strong PL faculty. My understanding is that the nature of grants in the US doesn't give faculty the same freedom to work on what they choose as in Europe. So you get different research focuses because of that.
Flix FAQ (https://flix.dev/faq/) starts out normal, but becomes increasingly hilarious towards the end :D
Some gems:
---
Q: Wait, division by zero is zero, really?
A: Yes. But focusing on this is a bit like focusing on the color of the seats in a spacecraft.
---
Q: "This site requires JavaScript"
A: People who have criticized the website for using JavaScript: [1], [2], [3], [4], [5].
People who have offered to help refactor the site to use static html: 0.
---
Q: I was disappointed to learn that Flix has feature X instead of my favorite feature Y.
A: We are deeply sorry to have let you down.
---
Q: This is – by far – the worst syntax I have ever seen in a functional language. Semicolons, braces, symbolic soup, et al. It is like if Scala, Java and Haskell had a one night stand in the center of Chernobyl.
A: Quite an achievement, wouldn't you say?
---
Looking at their code however, I'm realizing one thing Elixir got "right", in my view, is the order of arguments in function calls.
For example, in Elixir to retrieve the value associated with a key in a map, you would write Map.get(map, key) or Map.get(map, key, default).
This feels so natural, particularly when you chain the operations using the pipe operator (|>).
In Flix it seems one needs to write Map.get(x, map), Map.insert(x, y, map). I guess it follows in the footsteps of F#.
So I am not sure what you mean? In general, if you like pipelines then you want the "subject" (here the map) to be the last argument. That is how it is in Flix.
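A small sketch of why subject-last composes nicely with `|>` and partial application (stdlib names like List.range/List.map/List.filter are recalled from memory and may not match exactly):
    def smallSquares(): List[Int32] =
        List.range(1, 10)              // the subject enters the pipeline once...
            |> List.map(x -> x * x)    // ...and every later stage takes it as its last argument
            |> List.filter(x -> x < 30)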
Sorry, I wasn't clear. Yes, you can have pipelines in Flix, F#, OCaml, to me however placing the "subject" first feels more natural, as function signatures (not necessarily in pipelines) have a symmetry not encountered otherwise:
Can't find any mentions of typeclasses though, are they supported?
Give me typeclasses and macros comparable with Scala ones and I would be happy to port my libraries (distage, izumi-reflect, BIO) to Flix and consider moving to it from Scala :3
UPD: ah, alright, they call typeclasses traits. What about macros?
UPD2: ergh, they don't support nominal inheritance even in the most harmless form of Scala traits. Typeclasses are not a replacement for interfaces, an extremely important abstraction is missing from the language (due to H-M typer perhaps), so a lot of useful things are just impossible there (or would look ugly).
Flix supports type classes (called "traits") with higher-kinded types (HKTs) and with associated types and associated effects. A Flix trait can provide a default implementation of a function, but specific trait instances can override that implementation. However, Flix has no inheritance. The upshot is that traits are a compile-time construct that is fully eliminated through monomorphization. Consequently, traits incur no runtime overhead. Even better, the Flix inliner can "see through" traits, hence aggressive closure elimination is often possible. For example, typical usage of higher-order functions or pipelining is reduced to plain loops at the bytecode level without any closure allocation or indirection.
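To make that concrete, here is a small sketch of the trait syntax (the trait, instance, and function names are made up for illustration):
    enum Shape {
        case Circle(Int32),
        case Square(Int32)
    }
    trait Area[a] {
        pub def area(x: a): Int32
    }
    instance Area[Shape] {
        pub def area(x: Shape): Int32 = match x {
            case Shape.Circle(r) => 3 * (r * r)
            case Shape.Square(w) => w * w
        }
    }
    // After monomorphization the Area.area call below is a direct call;
    // there is no vtable or dictionary lookup at runtime.
    def totalArea(l: List[a]): Int32 with Area[a] =
        List.foldLeft((acc, x) -> acc + Area.area(x), 0, l)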
Flix does not yet have macros-- and we are afraid to add them due to their real (or perceived) (ab)use in other programming languages.
We are actively looking for library authors and if you are interested, you are more than welcome to stop by our Gitter channel.
> Flix does not yet have macros-- and we are afraid to add them due to their real (or perceived) (ab)use in other programming languages.
I think the abuse is not that much of a problem. It's rather that it makes it much much harder to change the language later on because it will break macros (like it did between Scala 2 and 3, causing many people to be stuck on Scala 2 due to libraries using macros heavily).
If I might add a suggestion: add type providers to the language (like in F#). It solves a lot of the problems that macros are often used for, such as generating code from SQL DDLs, API specs, etc. (or vice versa).
> The upshot is that traits are a compile-time construct that is fully eliminated through monomorphization.
So, apparently, I can't re-implement distage for Flix.
I don't mind a little bit of overhead in exchange for a massive productivity boost. I don't even need full nominal inheritance, just literally one level of interface inheritance with dynamic dispatching :(
> their real (or perceived) (ab)use in other programming languages.
Without macros I can't re-implement things like logstage (effortless structured logging extracting context from the AST) and izumi-reflect (compile-time reflection with a tiny runtime Scala typer simulator).
The reality is that careless programmers will do bad things with any tool they happen to pick up. Using that as an excuse to reduce the power of a tool is poor form.
Another way of putting it is to point out that removing goto from a language isn't going to reduce the occurrence of spaghetti code. The average skill and care of the developers who happen to be using that language is what does that.
Not sure I agree. A simple example: If your language has null as a subtype of every type then you will have null ptr exceptions everywhere. If your language does not have a null value then you won't. The situation is not as clear cut as you suggest.
Yes, you can write spaghetti code in any language. But a good language design can help (a) reduce errors and (b) nudge the developer towards writing better code.
Sorry to hijack, but since you are involved, can you explain why tail call optimization would incur a run time perf penalty, as the docs mention? I would expect tail call optimization to be a job for the compiler, not for the runtime.
We have to emulate tail calls using trampolines. This means that in some cases we have to represent stack frames as objects on the heap. Fortunately, in the common case where a recursive function simply calls itself in tail position, we can rewrite the call to a bytecode level loop and there is no overhead.
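For example, a self-recursive tail call like the one below is compiled straight to a loop (a minimal sketch):
    def sumAcc(acc: Int32, l: List[Int32]): Int32 = match l {
        case Nil     => acc
        case x :: xs => sumAcc(acc + x, xs)  // tail call to itself: becomes a bytecode-level loop
    }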
Thanks for explaining that term. That sounds really bad indeed. Maybe this is way too technical, but representing them as stack pointers was unfeasible?
But the good news is that the common case incurs no overhead.
TCO (tail call optimization) is often confused with TCE (tail call elimination), the latter is a runtime guarantee whereas the former is a compiler's best effort attempt to statically optimize tail calls.
Thanks! So you are implying that `TCO :: Maybe TCE`?
I am trying to think of a situation where a functional language compiler does not have enough information at compile time, especially when effects are witnessed by types.
I'm not a compiler dev, but I know that many functional programming languages struggle with this in the same manner if the target platform does not support TCE itself, and therefore require trampolining.
Curious if anyone can weigh in on why Flix requires a developer to explicitly mark a function as pure. I'd imagine in almost all cases this can be derived through static analysis.
I could be wrong, but the sentence "Flix precisely tracks the purity of every expression in a program." together with some examples of function definitions without the purity/impurity annotation, gave me the impression it's optional, because the compiler can infer it on its own most of the time.
Minor nit: the semicolons! Especially in the yield examples, since there's no "return" on the last line, the disparity looks weird.
That said, I like how the syntax isn't overly functional, and not too different from what we see in mainstream languages. I'd be fine with either braces or indentation, but the semicolons have to go!
Even though it's called Effect, it has almost nothing to do with algebraic effects, which are what this language and others like OCaml 5 have; Effect TS is more like Haskell (as it came from fp-ts).
Any code agents work well with this or do we have to start thinking with our own brain again?
Seriously though, looks like a cool language and makes me sad that LLMs will probably inhibit the adoption of new languages, and wonder what we can do about it.
It's a very fun time
I have the opposite gut feeling about LLM's; I think they're going to break down the barriers to adopting new programming languages, since they'll lower the cost of porting code dramatically.
The code in a language's standard library is probably enough to train an LLM on the new syntax, and even if it isn't, agents now observe the compiler output and can in principle learn from it. Porting code from one language to another doesn't require deep creativity and is, barring API aesthetics, a perfectly well defined task. It will be one of the first programming tasks to be perfectly automated by LLM's.
We are going to have to use our brains again to start thinking about why we're doing any of the stuff we're doing, and what effects it will have on the world.
I hope so! On one hand I worry about the training corpus being so overwhelmingly biased toward certain languages that everything else will be drowned out. On the other, I think there'll be a point where we realize "reasoning" LLMs are more proficient with the same tools that we are: sound type systems, reusable libraries, concise syntax, DSLs where they make sense, etc. At that point the end game will look much more like skilled, experienced, thoughtful engineering work than the first of ten billion autocomplete attempts that happened to produce something meeting the basic requirements.
Oh, but you can just transpile it to WASM using e.g. TeaVM [0]. Just add another build step to your bundler or whatever web dev uses nowadays to build apps.
[0] https://github.com/konsoletyper/teavm
An effect system allows programmers to annotate expressions that can have certain effects, just like a type system annotates type information on them, so compilers can enforce effect rules just like they enforce type rules.
For example, for a type system:
    let a: Int // this says 'a' has the type Int
    a = 5      // compiler allows, as both 'a' and 5 are of type Int.
    a = 5.1    // disallowed, as 'a' and 5.1 are of different types.
Similarly, for an effect system:
    let a: Int
    let b: Int!Div0 // 'b' is of type Int and has the Div0 effect.
    let c: Int
    ...
    a = 1 / c // disallowed, as '/' causes the Div0 effect, which 'a' does not support.
    b = 1 / c // allowed, as both '/' and 'b' support the Div0 effect.
The effect annotations can be applied to a function just like type annotations. Callers of the function need to anticipate (or handle) the effect. E.g. let's say the above code is wrapped in a function 'compute1(..) Int!Div0'; a caller can then do:
    compute1(..) on effect(Div0) {
        // handle the Div0 effect.
    }
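For comparison, in Flix the effect appears after a `\` in the function signature; a minimal sketch (the function names are made up):
    def inc(x: Int32): Int32 = x + 1      // pure: no effect after the result type
    def greet(name: String): Unit \ IO =  // impure: performs the IO effect
        println("Hello ${name}")
    def twice(f: Int32 -> Int32 \ ef, x: Int32): Int32 \ ef =
        f(f(x))                           // effect polymorphic: inherits whatever effect f has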
The book uses Scala & ZIO but intends to be more about the concepts of Effects than the actual implementation. I'd love to do a Flix version of the book at some point. But first we are working on the TypeScript Effect version.
https://youtu.be/EHtVADr-x94
All the examples are editable, though not as text.
It looks like "effect" as in impure functions in a functional language? I.e. a new way of dealing with effects (global/hidden state mutations) in a language that makes the pure-impure distinction. I'm not entirely sure.
I thought it was going to be something like contracts or dependent types or something.
No. It is essentially resumable exceptions. You throw an exception saying “I need a MyAlgebraicType” and the effect handler catches the exception, generates the value, and returns execution to the place the exception was called from.
But entirely definable in user code, so an effect is essentially a set of possibly impure operations you can perform (like I/O or exception throwing), and a function that exhibits that effect has access to those operations. Of course the call sites then also exhibit that effect, unless they provide implementations of the effect operations.
Then we just need to wait for the functional languages to become mainstream.
I think their DIDYOUKNOW.md file in the source code is worth showing in full, as it describes the language in a more compact form:
---
# Did You Know?
## Language
Did you know that:
- Flix offers a unique combination of features, including: algebraic data types
and pattern matching, extensible records, type classes, higher-kinded types,
polymorphic effects, and first-class Datalog constraints.
- Flix has no global state. Any state must be passed around explicitly.
- Flix is one language. There are no pragmas or compiler flags to enable or
disable features.
- Flix supports type parameter elision. That is, polymorphic functions can be
written without explicitly introducing their type parameters. For example,
`def map(f: a -> b, l: List[a]): List[b]`.
- the Flix type and effect system can enforce that a function argument is pure.
- Flix supports effect polymorphism. For example, the `List.map` function is
effect polymorphic: its purity depends on the purity of its function argument.
- in Flix every declaration is private by default.
- In Flix no execution happens before `main`. There is no global state nor any
static field initializers.
- Flix supports full tail call elimination, i.e. tail calls do not grow the
stack. Flix -- being on the JVM -- emulates tail calls until Project Loom
arrives.
- Flix supports extensible records with row polymorphism.
- Flix supports string interpolation by default, e.g. "Hello ${name}". String
interpolation uses the `ToString` type class.
- Flix supports the "pipeline" operator `|>` and the Flix standard library is
designed around it.
- In Flix type variables are lowercase and types are uppercase.
- In Flix local variables and functions are lowercase whereas enum constructors
are uppercase.
- Flix supports set and map literals `Set#{1, 2, 3}` and `Map#{1 => 2, 3 => 4}`.
- Flix supports monadic do-notation with the `let*` construct.
- Flix supports "program holes" written as either `???` or as `?name`.
- Flix supports infix function applications via backticks.
- Flix compiles to JVM bytecode and runs on the Java Virtual Machine.
- Flix supports channel and process-based concurrency, including the powerful
`select` expression.
- Flix supports first-class Datalog constraints, i.e. Datalog program fragments
are values that can be passed to and returned from functions, etc.
- Flix supports compile-time checked stratified negation.
- Flix supports partial application, i.e. a function can be called with fewer
arguments than its declared formal parameters.
- the Flix type and effect system is powered by Hindley-Milner, the same core
type system that is used by OCaml, Standard ML, and Haskell.
- the Flix type and effect system is sound, i.e. if a program type checks then a
type error cannot occur at run-time. If an expression is pure then it cannot
have a side-effect.
- the Flix type and effect system supports complete type inference, i.e. if a
program is typeable then type inference will find the typing.
- The Flix "Tips and Tricks"-section https://doc.flix.dev/tipstricks/ describes
many useful smaller features of the language.
- Flix has a unique meta-programming feature that allows a higher-order
function to inspect the purity of its function argument(s).
- Flix names its float and integer types after their sizes, e.g. `Float32`,
`Float64`, `Int32` and `Int64`.
- Flix -- by design -- uses records for labelled arguments. Records are a
natural part of the type system and work for top-level, local, and
first-class functions.
- Flix -- by design -- has no implicit coercions, but provides several functions
for explicit coercions.
- Flix -- by design -- disallows unused variables and shadowed variables since
these are a frequent source of bugs.
- Flix -- by design -- disallows unused declarations. This prevents bit
rot.
- Flix -- by design -- does not support unprincipled overloading. Instead,
functions are given meaningful names, e.g. `Map.insert` and
`Map.insertWithKey`.
- Flix -- by design -- does not support variadic functions. We believe it is
better to pass an explicit array or list.
- Controversial: Flix defines division by zero to equal zero.
- Controversial: Flix defines String division as concatenation with the path
separator. For example, `"Foo" / "Bar.txt" => "Foo\Bar.txt"` on Windows.
## Standard Library
Did you know that:
- Flix has an extensive standard library with more than 2,600 functions spanning
more than 30,000 lines of code.
- the Flix Prelude, i.e. the functions which are imported by default, is kept
minimal and contains less than 20 functions.
- most higher-order functions in the Flix standard library are effect
polymorphic, i.e. they can be called with pure or impure functions.
- the Flix type and effect system enforces that equality and ordering functions
must be pure.
- the Flix standard library uses records to avoid confusion when a function
takes multiple arguments of the same type. For example, `String.contains` must
be called as `String.contains(substr = "foo", "bar")`.
- the Flix `List` module offers more than 95 functions.
- the Flix `String` module offers more than 95 functions.
- the Flix `Foldable` module offers more than 30 functions.
- the Flix standard library follows the convention of "subject-last" to enable
pipelining (`|>`).
## Ecosystem
Did you know that:
- Flix has an official Visual Studio Code extension.
- Flix has an official dark theme inspired by Monokai called "Flixify Dark".
- the Flix website (https://flix.dev/) lists the design principles behind Flix.
- Flix has an online playground available at https://play.flix.dev/
- Flix has online API documentation available at https://doc.flix.dev/
- the Flix VSCode extension uses the real Flix compiler.
- the Flix VSCode extension supports auto-complete, jump to definition, hover to
show the type and effect of an expression, find all usages, and more.
- the Flix VSCode extension has built-in snippets for type class instances. Try
`instance Eq [auto complete]`.
- the Flix VSCode extension supports semantic highlighting.
- the Flix VSCode extension has built-in "code hints" that suggests when lazy
and/or parallel evaluation is enabled or inhibited by impurity.
- Flix has a community build where Flix libraries can be included in the CI
pipeline used to build the Flix compiler.
- Flix has a nascent build system and package manager based on GitHub releases.
Today it is possible to build, package, and install Flix packages. Dependency
management is in the works.
## Compiler
Did you know that:
- Flix -- by design -- has no compiler warnings, only compiler errors. Warnings
can be ignored, but errors cannot be.
- the Flix compiler uses monomorphization hence primitive values are (almost)
never boxed.
- the Flix compiler supports incremental and parallel compilation.
- the Flix compiler has more than 28 compiler phases.
- the Flix compiler contains more than 80,000 lines of code.
- the Flix compiler has more than 13,500 manually written unit tests.
- Flix is developed by programming language researchers at Aarhus University
(Denmark) in collaboration with researchers at the University of Waterloo
(Canada), and at Eberhard Karls University of Tübingen (Germany), and by a
growing open source community.
- Several novel aspects of the Flix programming language have been described in
the research literature, including its type and effect system and support for
first-class Datalog constraints.
- Flix is funded by the Independent Research Fund Denmark, Amazon Research,
DIREC, the Stibo Foundation, and the Concordium Foundation.
- more than 50 people have contributed to the Flix compiler.
- more than 2,000 pull requests have been merged into the Flix compiler.
enum Shape {
    case Circle(Int32),
    case Square(Int32),
    case Rectangle(Int32, Int32)
}
def area(s: Shape): Int32 = match s {
    case Circle(r) => 3 * (r * r)
    case Square(w) => w * w
    case Rectangle(h, w) => h * w
}
I wonder why not this syntax:
def area(s: Shape.Circle(r)) = { 3 * (r * r) }
def area(s: Share.Square(w)) = { w * w }
def area(s: Shape.Rectangle(h, w)) = { h * w }
area(Shape.Rectangle(2, 4))
The Int32 or Int32, Int32 types are in the definition of Shape, so we can be DRY and spare ourselves the chance to mismatch the types.
We can also do without match/case, reuse the syntax of function definition and enumerate the matches in there. I think that it's called structural pattern matching.
> The Int32 or Int32, Int32 types are in the definition of Shape, so we can be DRY and spare us the chances to mismatch the types
I have to admit I don't see the distinction here in terms of DRYness--they are basically equivalent--or why the latter would somehow lead to mismatching the types--presumably if Flix has a typechecker this would be a non-issue.
I use Elixir now at work and I have used Haskell and PureScript personally and professionally, which both support analogs of both the case syntax and function-level pattern matching, and in my experience the case syntax is often the better choice even given the option to pattern match at the function level. Not that I'd complain about having both options in Flix, which would still be cool, but I don't think it's as big of a benefit as it may seem, especially when type checking is involved.
enum Shape {
case Circle(Int32),
def area(s: Shape): In32 = match s {
Not only did I have to write something that the compiler already knows, but I typed a compilation error. The second type definition is there only to make developers write it wrong. It does not add any information.
> The second type definition is there only to make developers write it wrong.
Int32 is the type of the return value for the function (https://doc.flix.dev/functions.html), which is distinct information not implied by the type being passed in (Shape), so I dispute this characterization--the fact that this type is the same type as the parameter given to all of Shape's terms is specific to this example. Furthermore I suspect it would immediately be caught by the typechecker regardless.
While in a language like Haskell you could define this function without a type definition and its type would be inferred (vs. in Flix, see https://doc.flix.dev/functions.html?highlight=inference#func...), regardless I will almost always declare the type of top-level functions (or even non-top-level functions) for clarity when reading the code. I think this is useful and important information in any case.
What if you make a typo and instead of `area` you type `areas` the second time? I also don't see how one is more DRY than the other. If anything in the second example you typed `Shape` and `area` a bunch of times, so to me it's less DRY
>> Flix is a principled effect-oriented functional, imperative, and logic programming language...
>> Why Effects? Effect systems represent the next major evolution in statically typed programming languages. By explicitly modeling side effects, effect-oriented programming enforces modularity and helps program reasoning.
Since when do side effects and functional programming go together?
In Flix all effects are tracked by the type and effect system. Hence programmers can know when a function is pure or impure. Moreover, pure functions can be implemented internally using mutation and imperative programming. For example, in Flix, one can express a sort function that is guaranteed to be pure when seen from the outside, but internally uses a quick sort (which sorts in place on an array). The type and effect system ensures that such mutable memory does not escape its lexical scope, hence such a sort function remains observationally pure as seen from the outside.
Haskell can do the same kind of thing (local mutation), using the ST monad.
Its usage is almost equivalent to using IORefs, except we can escape ST using runST to get back a pure value not in ST, which we cannot do for IO because there is no `IO a -> a`.
There's no requirement to contain ST to a single function - we can split mutation over several functions, provided each one involved returns some `ST a` and their usage is combined with >>=.
That's right. Locally scoped mutable memory in Flix is very similar to the ST Monad. The two major differences are: (a) Flix is in direct-style vs. monadic style and (b) we use a type and effect system.
Note that there is no requirement that all mutation must occur within a single function. The requirement is that once you leave the lexical scope then all mutable memory associated with that scope becomes unreachable. Mutation can certainly span multiple functions.
FP isn't really about eliminating side effects. Controlled effects are fine. That's what an effect system does.
Avoiding side effects is really just a side effect (pun intended) of older programming language technology that didn't provide any other way to control effects.
Arguably FP really is about eliminating side effects.
The research sprang out of lambda calculus, where a computation is defined in terms of functions (remember: functional programming).
Side effects can only be realized by exposing them in the runtime / std-lib etc. How one does that is a value judgement, but if a term is not idempotent, then arguably you do not have a functional programming language anymore.
You gotta ask the question: why does FP care about eliminating side effects? There are two possible answers:
1. It's just something weird that FP people like to do; or
2. It's in service of a larger goal, the ability to reason about programs.
If you take the reasonable answer---number 2---then the conclusion is that effects are not a problem so long as you can still reason about programs containing them. Linear / affine types (like Rust's borrow checker) and effect systems are different ways to accommodate effects into a language and still retain some ability to reason about programs.
No practical program can be written without effects, so they must be in a language somewhere.
> No practical program can be written without effects, so they must be in a language somewhere.
Or rather, very few. It is like programming languages that trade Turing-completeness for provability, but worse.
In theory, one could imagine a program that adds 2 matrices in a purely functional manner, and you would have to skip outputting the result to stay side-effect-free. Yet, it is running on a computer, so the program does affect its internal state, notably the RAM in which the result is visible somewhere. One could dump that memory from outside of the program/process itself to get the result of the computation. That would be quite weird, but on the other hand sometimes normal programs do something like that by communicating through shared memory.
It seems that the notion of side effects must be understood relatively to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel-attackers have a word to say about this.
(from your link:)
> Both languages allow mutation but it's up to us to use it appropriately.
This is the crux of the problem. If you add a C example to your Typescript and Scala examples, people will throw stones at you for that statement - out of instinct. The intent is to prevent accidental misuse. Mutation is "considered harmful" by some because it can be accidentally misused.
> It seems that the notion of side effects must be understood relatively to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel-attackers have a word to say about this.
Absolutely! When you really dig into it, the concept of an effect is quite ill-defined. It comes down to whatever some observer considers important. For example, from the point of view of substitution quick sort and bubble sort are equivalent but most people would argue that they are very much not.
If you start with lambda calculus you don't have effects in the first place, so there's nothing to eliminate. Lambda calculus and friends are perfectly reasonable languages for computation in the sense of calculation.
A better way to think about general-purpose functional programming is that it's a way to add effects to a calculation-oriented foundation. The challenge is to keep the expressiveness, flexibility and useful properties of lambda calculus while extending it to writing interactive, optimizable real-world programs.
To retain referential transparency, we basically need to ensure that a function provided the same arguments always returns the same result.
A simple way around this is to never give the same value to a function twice - ie, using uniqueness types, which is the approach taken by Clean. A uniqueness type, by definition, can never be used more than once, so functions which take a uniqueness type as an argument are referentially transparent.
In Haskell, you never directly call a function with side effects - you only ever bind it to `main`.
Functions with (global) side effects return a value of type `IO a`, and the behavior of IO is fully encapsulated by the monadic operations.
instance Monad IO where
    return :: a -> IO a
    (>>=)  :: IO a -> (a -> IO b) -> IO b -- aka "bind"
return lifts a pure value into IO, and bind sequences IO operations. Importantly, there cannot exist any function of type `IO a -> a` which escapes IO, as this would violate referential transparency. Since every effect must return IO, and the only thing we can do with the IO is bind it, the eventual result of running the program must be an IO value, hence `main` returns a value of type `IO ()`.
main :: IO ()
So bind encapsulates side effects, effectively using a strategy similar to Clean, where each `IO` is a synonym of some `State# RealWorld -> (# State# RealWorld, a #)`. Bind takes a value of IO as its first argument, consumes the input `State# RealWorld` value and extracts a value of type `a` - feeds this value to the next function in the sequence of binds, returning a new value of type `IO b`, which has a new `State# RealWorld`. Since `bind` enforces a linear sequencing of operations, this has the effect that each `RealWorld` is basically a unique value never used more than once - even though uniqueness types themselves are absent from Haskell.
Lisp always had side effects and mutability, and it's the canonical FP language, directly inspired by lambda calculus. To be fair, before Haskellers figured out monads, nobody even knew of any way to make a FP language that's both pure and useful.
Functional programming à la Haskell has always been about making effects controllable, explicit first-class citizens of the language. A language entirely without effects would only be useful for calculation.
The talk about "purity" and "removing side effects" has always been about shock value—sometimes as an intentional marketing technique, but most often because it's just so much easier to explain. "It's just like 'normal' programming but you can't mutate variables" is pithy and memorable; "it's a language where effects are explicitly added on top of the core and are managed separately" isn't.
// Computes the delivery date for each component.
let r = query p select (c, d) from ReadyDate(c; d)
facepalm. Select should always come last, not first, haven't we learned anything from the problems of SQL? LINQ got this right, so it should look like:
It is a fair point-- the implicit argument being that this allows `c` and `d` to be bound before they are used, and hence auto-complete can assist in the `select` clause. Nevertheless, the counter argument is that the form of a logic rule is:
Path(x, z) :- Path(x, y), Edge(y, z).
i.e. an implication from right to left. This structure matches:
query p select (x, z) from Path(x, y), Edge(y, z).
So the trilemma is:
A. Keep the logic rules and `query` construct consistent (i.e. read from right-to-left).
B. Reverse the logic rules and query construct-- thus breaking with decades of tradition established by Datalog and Prolog.
C. Keep the logic rules from right-to-left, but reverse the order of `query` making it from left-to-right.
We decided on (A), but maybe we should revisit at some point.
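For anyone who has not seen the Datalog fragment, here is a small self-contained sketch of rules plus a query (the result collection is assumed to be a Vector of tuples; older Flix versions used a different collection type):
    def paths(): Vector[(Int32, Int32)] =
        let p = #{
            Edge(1, 2). Edge(2, 3). Edge(3, 4).
            Path(x, y) :- Edge(x, y).
            Path(x, z) :- Path(x, y), Edge(y, z).
        };
        // The query reads right-to-left, just like the rules above.
        query p select (x, z) from Path(x, z)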
I appreciate the desire for consistency and being able to lean on old textbooks and documentation. A couple of considerations since this is a new language where history and precedent should (I think) be less important if it leads to clarity and improved productivity:
1. I think way more people coming to your language will be familiar with SQL and its problems than with logic programming and Horn clauses.
2. I think many people are now familiar with functional pipelines, where filters and transforms can be applied in stages, thanks to the rise of functional programming in things like LINQ and Java's Stream API. This sort of pipelining maps naturally to queries, as LINQ has shown, and even to logic programming, as µKanren has shown.
3. People don't type programs right-to-left but left-to-right. To the extent that right-to-left expressions interfere with assisting the human that's typing in various ways (like autocomplete), I would personally defer to helping the human as much as possible over historical precedent.
4. Keeping the logic fragment separate from the query fragment (option C) seems viable if you really, really want to maintain that historical precedent for some reason.
When it comes to programming language design, sometimes I feel that a design choice has a 90% chance of being good and a 10% chance of being bad. For the logic constraints (right-to-left vs. left-to-right), my confidence is only 70% that we got it right.
The JVM is a state-of-the-art virtual machine with multiple open source implementations, a large ecosystem, and a fast JIT compiler that runs on most platforms. It is hard to find another VM with the same feature set and robust tooling.
I think the problem is that it targets a VM instead of native machine architectures, not the quality of the VM. I also find the times I need to target a VM to be very limited as I'm generally writing code for a specific platform, not a cross platform application. Of course this will vary between developers.
Targeting JVM means not having to roll your own garbage collector.
And bonus, you get a huge world of third party libraries you can work with.
It's been over a decade since I worked on the JVM, and Java is not my favourite language, but I don't get some people's hate on this topic. It strikes me as immature and "vibe" based rather than founded in genuine analysis around engineering needs.
The JVM gets you a JIT and GC with almost 30 years of engineering and fine tuning behind it and millions of eyes on it for bugs or performance issues.
I strongly agree. Java and JVM bytecode may not be our "cup of tea", but it is simply unrealistic to implement a new runtime environment with comparable performance, security, robustness, and tooling. The only alternative is WASM, but it is not yet there feature-wise.
The JVM is a large and complex system with tons of configurable options. If you don't need it, why add all that cognitive overhead when you have perfectly good options that don't carry it? And the benefits you gain are very limited if you aren't integrating with other JVM-based systems.
You genuinely don’t need to think about any of the configurable options, especially if you’re running a client program. At most for server programs you just set the max memory percentage and soon you won’t have to do that.
The practical problems are slow startup time and high minimum memory usage. Since those are encountered early on in the developer experience, the reaction many have is predictable.
Which is amazing, you can fine tune the performance of the runtime to your heart's content. Or you can just leave them as-is, the default behaviour is quite reasonable too.
Very cool language. The standard library looks mostly sane, although it does have `def get(i: Int32, a: Array[a, r]): a \ r` which means that it must have some kind of runtime exception system. Not my cup of tea, but an understandable tradeoff.
Another aspect that I love from their comparison table is that a single executable is both the package manager, LSP and the compiler. As I understand, the language server for Haskell has/had to do a lot of dances and re implement things from ghc as a dance between the particular ghc version and your cabal file. And maybe stack too, because I don't know which package manager is the blessed one these days. Not to shit on Haskell -- it is actually a very fine language.
However, the best feature is a bit buried and I wonder why.
How ergonomic is the integration with the rest of the JVM, from the likes of Java? AFAIK, types are erased by JVM compilers... With the concept of `regions` they have at least first class support for imperative interaction. Note: With the JVM you get billions worth of code from a high quality professional standard library, so that is a huge plus. That is why the JVM and .net core are IMHO the most sane choices for 90+% of projects. I think the only comparable language would be F#. I would love to see a document about Flix limitations in the JVM interoperability story.
__EDIT__
- There is a bit of info here. Basically all values from Flix/Java have to be boxed/unboxed. https://doc.flix.dev/interoperability.html
- Records are first-class citizens.
oh my i just know you're going to love unison
It would be interesting if F# skipped Haskell style monads and jumped straight to algebraic effects. They seem like a better fit for F# philosophy in any case.
For your second point, I don't know if they could achieve that without type level programming. This is the Box of Pandora the designer of F# tried [0] not to open.
____
0. https://github.com/fsharp/fslang-suggestions/issues/243#issu...
Not in all the cases (it keeps type parameters for anonymous classes) and there are various workarounds.
Also, essentially, it's not a problem at all for a compiler, you are free to render applied type constructors as regular classes with mangled names.
Caveat: Flix sometimes has to box values on the boundary between Flix and Java code -- e.g. when calling a Java library methods that requires a java.lang.Object due to erasure in Java.
But you could do it for almost anything. I would love the ability to hit a key chord in `emacs` and see things in an Algol-family presentation, or an ML family, or a Lisp.
Seems like the kind of thing that might catch on someday. Certainly the math notation in things like Haskell and TLA were a bit of a barrier to entry at one time. Very solvable problem if people care a lot.
I also appreciate the “one language, no flags” philosophy and the focus on compile-time errors only, which keeps things simple and predictable. The fact that Flix manages full tail call elimination on the JVM (despite the lack of native support) is a technical feat that deserves more recognition.
Curious to hear how folks are using Flix in production (or research), especially for logic-heavy or concurrent applications. Has anyone run into pain points with the closed-world assumption or the lack of exceptions? And how do you find the Datalog integration compared to Prolog or other logic languages?
[0] https://www.cs.ioc.ee/tfp-icfp-gpce05/tfp-proc/21num.pdf
But if you're interested: https://www.reactivesystems.eu/2022/06/24/flix-for-java-prog...
The language has improved a lot in the years since the post. In particular, the effect system has been significantly extended, Java interoperability is much improved, and some syntax have been updated.
C++, C#/Typescript, Dart, etc all have strong roots in that one small area in Denmark.
In general, I am curious what makes some of these places very special (Delft, INRIA, etc)?
They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Is it the water? Or something else? :)
In general, programming language theory is pretty strong in Denmark, with lots of other contributions.
For example, the standard graduate textbook in static program analysis (Nielson & Nielson) is also Danish. Mads Tofte made lots of contributions to Standard ML, etc.
> They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Aarhus is an outstanding university. There are a couple of dozen universities in Europe that lack the prestige of Oxbridge but offer high quality education and perform excellent research.
I also think there's a noticeable bias toward the US in how programming language research is perceived globally. Institutions like Aarhus often don't invest heavily in marketing or self-promotion, they just focus on doing solid work. It's not necessarily better or worse, but it does make it harder for their contributions to break through the layers of global attention.
Some gems:
---
Q: Wait, division by zero is zero, really?
A: Yes. But focusing on this is a bit like focusing on the color of the seats in a spacecraft.
---
Q: "This site requires JavaScript"
A: People who have criticized the website for using JavaScript: [1], [2], [3], [4], [5].
People who have offered to help refactor the site to use static html: 0.
---
Q: I was disappointed to learn that Flix has feature X instead of my favorite feature Y.
A: We are deeply sorry to have let you down.
---
Q: This is – by far – the worst syntax I have ever seen in a functional language. Semicolons, braces, symbolic soup, et al. It is like if Scala, Java and Haskell had a one night stand in the center of Chernobyl.
A: Quite an achievement, wouldn't you say?
Looking at their code however, I'm realizing one thing Elixir got "right", in my view, is the order of arguments in function calls.
For example, in Elixir to retrieve the value associated with a key in a map, you would write Map.get(map, key) or Map.get(map, key, default).
This feels so natural, particularly when you chain the operations using the pipe operator (|>):
In Flix it seems one needs to write Map.get(x, map), Map.insert(x, y, map). I guess it follows in the footsteps of F#.Subject First:
Subject Last:Can't find any mentions of typeclasses though, are they supported?
Give me typeclasses and macros comparable with Scala ones and I would be happy to port my libraries (distage, izumi-reflect, BIO) to Flix and consider moving to it from Scala :3
UPD: ah, alright, they call typeclasses traits. What about macros?
UPD2: ergh, they don't support nominal inheritance even in the most harmless form of Scala traits. Typeclasses are not a replacement for interfaces, an extremely important abstraction is missing from the language (due to H-M typer perhaps), so a lot of useful things are just impossible there (or would look ugly).
Flix does not yet have macros-- and we are afraid to add them due to their real (or perceived) (ab)use in other programming languages.
We are actively looking for library authors and if you are interested, you are more than welcome to stop by our Gitter channel.
I think the abuse is not that much of a problem. It's rather that it makes it much much harder to change the language later on because it will break macros (like it did between Scala 2 and 3, causing many people to be stuck on Scala 2 due to libraries using macros heavily).
If I might add a suggestion: add type providers to the language (like in F#). It solves a lot of the problems that macros are often used for, such as generating code from SQL DDLs, API specs, etc. (or vice versa).
So, apparently, I can't re-implement distage for Flix.
I don't mind a little bit of overhead in exchange for a massive productivity boost. I don't even need full nominal inheritance, just literally one level of interface inheritance with dynamic dispatching :(
> their real (or perceived) (ab)use in other programming languages.
Without macros I can't re-implement things like logstage (effortless structured logging extracting context from AST) and izumi-reflect (compile-time refleciton with tiny runtime scala typer simulator).
Another way of putting it is to point out that removing goto from a language isn't going to reduce the occurrence of spaghetti code. The average skill and care of the developers who happen to be using that language is what does that.
Yes, you can write spaghetti code in any language. But a good language design can help (a) reduce errors and (b) nudge the developer towards writing better code.
But the good news is that the common case incurs no overhead.
I am trying to think of a situation where a functional language compiler does not have enough information at compile time, especially when effects are witnessed by types.
That said, I like how the syntax isn't overly functional, and not too different from what we see in mainstream languages. I'd be fine with either braces or indentation, but the semicolons have to go!
It's a very fun time
Seriously though, looks like a cool language and makes me sad that LLMs will probably inhibit the adoption of new languages, and wonder what we can do about it.
The code in a language's standard library is probably enough to train an LLM on the new syntax, and even if it isn't, agents now observe the compiler output and can in principle learn from it. Porting code from one language to another doesn't require deep creativity and is, barring API aesthetics, a perfectly well defined task. It will be one of the first programming tasks to be perfectly automated by LLM's.
We are going to have to use our brains again to start thinking about why we're doing any of the stuff we're doing, and what effects it will have on the world.
[0] https://github.com/konsoletyper/teavm
For example for a type system,
Similarly for example for an effect system, The effect annotations can be applied to a function just like the type annotations. Callers of the function need to anticipate (or handle) the effect. E.g. let's say the above code is wrapped in a function 'compute1(..) Int!Div0', a caller calling it can do.The book uses Scala & ZIO but intends to be more about the concepts of Effects than the actual implementation. I'd love to do a Flix version of the book at some point. But first we are working on the TypeScript Effect version.
https://youtu.be/EHtVADr-x94
All the examples are editable, though not as text.
I thought it was going to be something like contracts or dependent types or something.
Then we just need to wait for the functional languages to become mainstream.
---
# Did You Know?
## Language
Did you know that:
- Flix offers a unique combination of features, including: algebraic data types and pattern matching, extensible records, type classes, higher-kinded types, polymorphic effects, and first-class Datalog constraints.
- Flix has no global state. Any state must be passed around explicitly.
- Flix is one language. There are no pragmas or compiler flags to enable or disable features.
- Flix supports type parameter elision. That is, polymorphic functions can be written without explicitly introducing their type parameters. For example, `def map(f: a -> b, l: List[a]): List[b]`.
- the Flix type and effect system can enforce that a function argument is pure.
- Flix supports effect polymorphism. For example, the `List.map` function is effect polymorphic: its purity depends on the purity of its function argument.
- in Flix every declaration is private by default.
- In Flix no execution happens before `main`. There is no global state nor any static field initializers.
- Flix supports full tail call elimination, i.e. tail calls do not grow the stack. Flix -- being on the JVM -- emulates tail calls until Project Loom arrives.
- Flix supports extensible records with row polymorphism.
- Flix supports string interpolation by default, e.g. "Hello ${name}". String interpolation uses the `ToString` type class.
- Flix supports the "pipeline" operator `|>` and the Flix standard library is designed around it.
- In Flix type variables are lowercase and types are uppercase.
- In Flix local variables and functions are lowercase whereas enum constructors are uppercase.
- Flix supports set and map literals `Set#{1, 2, 3}` and `Map#{1 => 2, 3 => 4}`.
- Flix supports monadic do-notation with the `let*` construct.
- Flix supports "program holes" written as either `???` or as `?name`.
- Flix supports infix function applications via backticks.
- Flix compiles to JVM bytecode and runs on the Java Virtual Machine.
- Flix supports channel and process-based concurrency, including the powerful `select` expression.
- Flix supports first-class Datalog constraints, i.e. Datalog program fragments are values that can be passed to and returned from functions, etc.
- Flix supports compile-time checked stratified negation.
- Flix supports partial application, i.e. a function can be called with fewer arguments that its declared formal parameters.
- the Flix type and effect system is powered by Hindley-Milner. The same core type system that is used by OCaml, Standard ML, and Haskell.
- the Flix type and effect system is sound, i.e. if a program type checks then a type error cannot occur at run-time. If an expression is pure then it cannot have a side-effect.
- the Flix type and effect system supports complete type inference, i.e. if a program is typeable then the type inference with find the typing.
- The Flix "Tips and Tricks"-section https://doc.flix.dev/tipstricks/ describes many useful smaller features of the language.
- Flix has a unique meta-programming feature that allows a higher-order function to inspect the purity of its function argument(s).
- Flix names its float and integer types after their sizes, e.g. `Float32`, `Float64`, `Int32` and `Int64`.
- Flix -- by design -- uses records for labelled arguments. Records are a natural part of the type system and work for top-level, local, and first-class functions.
- Flix -- by design -- has no implicit coercions, but provides several functions for explicit coercions.
- Flix -- by design -- disallows unused variables and shadowed variables since these are a frequent source of bugs.
- Flix -- by design -- disallows unused declarations. This prevents bit rot.
- Flix -- by design -- does not support unprincipled overloading. Instead, functions are given meaningful names, e.g. `Map.insert` and `Map.insertWithKey`.
- Flix -- by design -- does not support variadic functions. We believe it is better to pass an explicit array or list.
- Controversial: Flix defines division by zero to equal zero.
- Controversial: Flix defines String division as concatenation with the path separator. For example, `"Foo" / "Bar.txt" => "Foo\Bar.txt"` on Windows.
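To make a few of the bullets above concrete, here is a small self-contained sketch (mine, not from the list) that combines the pipeline operator, partial application, and string interpolation; `List.map` and `List.foldLeft` are standard library functions, and the `\ IO` annotation follows current effect syntax:

```flix
def main(): Unit \ IO =
    let xs = 1 :: 2 :: 3 :: 4 :: Nil;
    let double = List.map(x -> x * 2);                                  // partial application
    let total = xs |> double |> List.foldLeft((acc, x) -> acc + x, 0);  // subject-last pipelining
    println("The total is ${total}")                                    // string interpolation via ToString
```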
## Standard Library
Did you know that:
- Flix has an extensive standard library with more than 2,600 functions spanning more than 30,000 lines of code.
- the Flix Prelude, i.e. the functions which are imported by default, is kept minimal and contains less than 20 functions.
- most higher-order functions in the Flix standard library are effect polymorphic, i.e. they can be called with pure or impure functions.
- the Flix type and effect system enforces that equality and ordering functions must be pure.
- the Flix standard library uses records to avoid confusion when a function takes multiple arguments of the same type. For example, `String.contains` must be called as `String.contains(substr = "foo", "bar")` (see the sketch after this list).
- the Flix `List` module offers more than 95 functions.
- the Flix `String` module offers more than 95 functions.
- the Flix `Foldable` module offers more than 30 functions.
- the Flix standard library follows the convention of "subject-last" to enable pipelining (`|>`).
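A small usage sketch of the two conventions just mentioned: subject-last functions compose with `|>`, and the labelled `substr` argument follows the call shape shown above. `String.toUpperCase` is assumed to be the standard library's name for upper-casing:

```flix
// Pure helper: upper-case the subject, then test for the substring.
pub def hasFoo(s: String): Bool =
    s |> String.toUpperCase |> String.contains(substr = "FOO")
```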
## Ecosystem
Did you know that:
- Flix has an official Visual Studio Code extension.
- Flix has an official dark theme inspired by Monokai called "Flixify Dark".
- the Flix website (https://flix.dev/) lists the design principles behind Flix.
- Flix has an online playground available at https://play.flix.dev/
- Flix has online API documentation available at https://doc.flix.dev/
- the Flix VSCode extension uses the real Flix compiler.
- the Flix VSCode extension supports auto-complete, jump to definition, hover to show the type and effect of an expression, find all usages, and more.
- the Flix VSCode extension has built-in snippets for type class instances. Try `instance Eq [auto complete]`.
- the Flix VSCode extension supports semantic highlighting.
- the Flix VSCode extension has built-in "code hints" that suggests when lazy and/or parallel evaluation is enabled or inhibited by impurity.
- Flix has a community build where Flix libraries can be included in the CI pipeline used to build the Flix compiler.
- Flix has a nascent build system and package manager based on GitHub releases. Today it is possible to build, package, and install Flix packages. Dependency management is in the works.
## Compiler
Did you know that:
- Flix -- by design -- has no compiler warnings, only compiler errors. Warnings can be ignored, but errors cannot be.
- the Flix compiler uses monomorphization hence primitive values are (almost) never boxed.
- the Flix compiler supports incremental and parallel compilation.
- the Flix compiler has more than 28 compiler phases.
- the Flix compiler contains more than 80,000 lines of code.
- the Flix compiler has more than 13,500 manually written unit tests.
- the performance of the Flix compiler is tracked at https://arewefast.flix.dev/
## Other
Did you know that:
- Flix is developed by programming language researchers at Aarhus University (Denmark), in collaboration with researchers at the University of Waterloo (Canada) and at Eberhard Karls University of Tübingen (Germany), and by a growing open source community.
- Several novel aspects of the Flix programming language have been described in the research literature, including its type and effect system and its support for first-class Datalog constraints.
- Flix is funded by the Independent Research Fund Denmark, Amazon Research, DIREC, the Stibo Foundation, and the Concordium Foundation.
- more than 50 people have contributed to the Flix compiler.
- more than 2,000 pull requests have been merged into the Flix compiler.
I have to admit I don't see the distinction here in terms of DRYness -- they are basically equivalent -- or why the latter would somehow lead to mismatching the types -- presumably if Flix has a typechecker this would be a non-issue.
I use Elixir at work now, and I have used Haskell and PureScript personally and professionally; both support analogs of the case syntax and of function-level pattern matching, and in my experience the case syntax is often the better choice even when function-level pattern matching is available. Not that I'd complain about having both options in Flix -- that would still be cool -- but I don't think it's as big a benefit as it may seem, especially when type checking is involved.
Int32 is the type of the return value for the function (https://doc.flix.dev/functions.html), which is distinct information not implied by the type being passed in (Shape), so I dispute this characterization--the fact that this type is the same type as the parameter given to all of Shape's terms is specific to this example. Furthermore I suspect it would immediately be caught by the typechecker regardless.
While in a language like Haskell you could define this function without a type definition and its type would be inferred (vs. in Flix, see https://doc.flix.dev/functions.html?highlight=inference#func...), regardless I will almost always declare the type of top-level functions (or even non-top-level functions) for clarity when reading the code. I think this is useful and important information in any case.
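For reference, here is a reconstruction of the kind of definition being discussed -- essentially the Shape/area example from the Flix front page (newer Flix versions may require qualifying the cases as `Shape.Circle`, etc.):

```flix
enum Shape {
    case Circle(Int32),
    case Square(Int32),
    case Rectangle(Int32, Int32)
}

// `Int32` is the return type of the function; it is only a coincidence of this
// example that every case of Shape also carries Int32 values.
def area(s: Shape): Int32 = match s {
    case Circle(r)       => 3 * (r * r)
    case Square(w)       => w * w
    case Rectangle(h, w) => h * w
}

def main(): Unit \ IO =
    println(area(Rectangle(2, 4)))
```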
You are right. That part of my argument is wrong.
I think that is multi-methods
>> Why Effects? Effect systems represent the next major evolution in statically typed programming languages. By explicitly modeling side effects, effect-oriented programming enforces modularity and helps program reasoning.
Since when do side effects and functional programming go together?
(I am one of the developers of Flix)
Its usage is almost equivalent to using `IORef`s, except we can escape ST using `runST` to get back a pure value not in ST, which we cannot do for IO because there is no `IO a -> a`.
There's no requirement to contain ST to a single function -- we can split mutation over several functions, provided each one involved returns some `ST a` and their usage is combined with `>>=`.
https://dl.acm.org/doi/pdf/10.1145/178243.178246
Note that there is no requirement that all mutation must occur within a single function. The requirement is that once you leave the lexical scope, all mutable memory associated with that scope becomes unreachable. Mutation can certainly span multiple functions.
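A sketch of that idea with a Flix region: a function that is pure from the outside but mutates a cell internally, where the cell cannot outlive the lexical scope of the region. The library names (`Ref.fresh`, `Ref.get`, `Ref.put`) and their argument order are my assumption of the current API, so treat them as illustrative:

```flix
pub def pureButMutating(): Int32 = region rc {
    // A mutable cell allocated in region `rc`; it cannot escape this block.
    let cell = Ref.fresh(rc, 0);                      // assumed API
    // Mutation is not confined to a single syntactic unit: this helper also
    // writes to the cell; what matters is that the cell stays within `rc`.
    let bump = n -> Ref.put(Ref.get(cell) + n, cell); // assumed API
    bump(20);
    bump(22);
    Ref.get(cell)  // the region ends here; the value escapes, the cell does not
}
```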
Avoiding side effects is really just a side effect (pun intended) of older programming language technology that didn't provide any other way to control effects.
The research has sprung out of lambda calculus where a computation is defined in terms of functions (remember: Functional programming).
Side effects can only be realized by exposing them in the runtime / std-lib etc. How one does that is a value judgement, but if a term is not idempotent, then you arguably do not have a functional programming language anymore.
There are two ways to look at FP's avoidance of side effects:
1. It's just something weird that FP people like to do; or
2. It's in service of a larger goal, the ability to reason about programs.
If you take the reasonable answer---number 2---then the conclusion is that effects are not a problem so long as you can still reason about programs containing them. Linear / affine types (like Rust's borrow checker) and effect systems are different ways to accommodate effects into a language and still retain some ability to reason about programs.
No practical program can be written without effects, so they must be in a language somewhere.
More here: https://noelwelsh.com/posts/what-and-why-fp/
Or rather, very few. It is like programming languages that trade Turing-completeness for provability, but worse.
In theory, one could imagine a program that adds two matrices in a purely functional manner, and you would have to skip outputting the result to stay side-effect-free. Yet it is running on a computer, so the program does affect the machine's internal state, notably the RAM in which the result is visible somewhere. One could dump that memory from outside the program/process itself to get the result of the computation. That would be quite weird, but on the other hand normal programs sometimes do something like that by communicating through shared memory.
It seems that the notion of side effects must be understood relatively to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel-attackers have a word to say about this.
(from your link:) > Both languages allow mutation but it's up to us to use it appropriately.
This is the crux of the problem. If you add a C example to your TypeScript and Scala examples, people will throw stones at you for that statement -- out of instinct. The intent is to prevent accidental misuse: mutation is "considered harmful" by some precisely because it can be accidentally misused.
Absolutely! When you really dig into it, the concept of an effect is quite ill-defined. It comes down to whatever some observer considers important. For example, from the point of view of substitution, quicksort and bubble sort are equivalent, but most people would argue that they are very much not.
The preface of https://www.proquest.com/openview/32fcc8064e57c82a696956000b... is quite interesting.
A better way to think about general-purpose functional programming is that it's a way to add effects to a calculation-oriented foundation. The challenge is to keep the expressiveness, flexibility and useful properties of lambda calculus while extending it to writing interactive, optimizable real-world programs.
A simple way around this is to never give the same value to a function twice -- i.e., using uniqueness types, which is the approach taken by Clean. A uniqueness type, by definition, can never be used more than once, so functions which take a uniqueness type as an argument are referentially transparent.
In Haskell, you never directly call a function with side effects - you only ever bind it to `main`.
Functions with (global) side effects return a value of type `IO a`, and the behavior of IO is fully encapsulated by the monadic operations.
`return` lifts a pure value into IO, and `bind` sequences IO operations. Importantly, there cannot exist any function of type `IO a -> a` which escapes IO, as this would violate referential transparency. Since every effect must return IO, and the only thing we can do with the IO is bind it, the eventual result of running the program must be an IO value, hence `main` returns a value of type `IO ()`.
So bind encapsulates side effects, effectively using a strategy similar to Clean, where each `IO a` is a synonym for some `State# RealWorld -> (# State# RealWorld, a #)`. Bind takes a value of IO as its first argument, consumes the input `State# RealWorld` value, extracts a value of type `a`, feeds this value to the next function in the sequence of binds, and returns a new value of type `IO b`, which has a new `State# RealWorld`. Since `bind` enforces a linear sequencing of operations, each `RealWorld` is effectively a unique value that is never used more than once -- even though uniqueness types themselves are absent from Haskell.
https://www.youtube.com/watch?v=RsTuy1jXQ6Y
The talk about "purity" and "removing side effects" has always been about shock value—sometimes as an intentional marketing technique, but most often because it's just so much easier to explain. "It's just like 'normal' programming but you can't mutate variables" is pithy and memorable; "it's a language where effects are explicitly added on top of the core and are managed separately" isn't.
A. Keep the logic rules and `query` construct consistent (i.e. read from right-to-left).
B. Reverse the logic rules and the query construct -- thus breaking with decades of tradition established by Datalog and Prolog.
C. Keep the logic rules right-to-left, but reverse the order of `query`, making it left-to-right.
We decided on (A), but maybe we should revisit at some point.
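For readers unfamiliar with the notation, here is a sketch (with made-up facts) of what option (A) looks like using Flix's first-class Datalog constraints; how the query result prints depends on the Flix version:

```flix
def main(): Unit \ IO =
    let db = #{
        ParentOf("Julia", "Caesar").
        ParentOf("Caesar", "Aurelia").
        // Rules read right-to-left: the head on the left holds if the body on the right holds.
        AncestorOf(x, y) :- ParentOf(x, y).
        AncestorOf(x, z) :- ParentOf(x, y), AncestorOf(y, z).
    };
    // The query construct keeps the same orientation (option A above).
    let result = query db select (x, y) from AncestorOf(x, y);
    println(result)
```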
1. I think way more people coming to your language will be familiar with SQL and its problems than with logic programming and Horn clauses.
2. I think many people are now familiar with functional pipelines, where filters and transforms can be applied in stages, thanks to the rise of functional programming in things like LINQ and Java's Stream API. This sort of pipelining maps naturally to queries, as LINQ has shown, and even to logic programming, as µKanren has shown.
3. People don't type programs right-to-left but left-to-right. To the extent that right-to-left expressions interfere with assisting the human that's typing in various ways (like autocomplete), I would personally defer to helping the human as much as possible over historical precedent.
4. Keeping the logic fragment separate from the query fragment (option C) seems viable if you really, really want to maintain that historical precedent for some reason.
My two cents. Kudos on the neat language!
I think when it comes to programming language design, sometimes I feel that a design has a 90% chance of being good and a 10% chance of being bad. For the logic constraints (right-to-left vs. left-to-right), I think my confidence is only 70% that we got it right.
I, uh, think your math might need some checking :)
And bonus, you get a huge world of third party libraries you can work with.
It's been over a decade since I worked on the JVM, and Java is not my favourite language, but I don't get some people's hate on this topic. It strikes me as immature and "vibe" based rather than founded in genuine analysis around engineering needs.
The JVM gets you a JIT and GC with almost 30 years of engineering and fine tuning behind it and millions of eyes on it for bugs or performance issues.