Made me think about this book https://webperso.info.ucl.ac.be/~pvr/book.html where the author creates a mini language and extends it in many different paradigms.
Brief discussions 4 years ago (https://news.ycombinator.com/item?id=27024804) and 7 years ago (https://news.ycombinator.com/item?id=18638290).
Programming languages are more a UI problem than a mathematical problem. Not sure how that will evolve as coding agents become the “middlemen”.
This is an interesting take, but I disagree.
Syntax is a UI problem, but programming languages are more than syntax sugar for assembly. Features like Rust's borrow-checker, type systems, even small things like name resolution and async/await provide "semi-formal"* guarantees. And LLMs are no substitute for even semi-formal guarantees; people have spent lots of money and effort trying to integrate (non-NN, rule-based) algorithms and formal methods into LLMs, some even writing LLM programming languages, because they may be the key to fixing LLMs' accuracy problem.
* Not formally defined or verified, but thoroughly defined and implemented via reliable algorithms. A true formal specification is completely unambiguous, and a true formally-proven implementation is completely accurate to its specification. A thorough language specification or reference (like https://tc39.es/ecma262/ or https://doc.rust-lang.org/nightly/reference/) is almost unambiguous, and a well-tested compiler or interpreter is almost always accurate, especially compared to LLMs.
Cases where true formal guarantees matter exist but are rare. There are many, many cases where LLMs are too inaccurate but well-tested algorithms are good enough.
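To make the "semi-formal guarantee" point concrete, here is a minimal Rust sketch (my own illustration, not anything from the comment above): the borrow checker rejects a use-after-move at compile time, which is exactly the kind of reliable, algorithmic check that an LLM reviewer can only approximate.

    fn main() {
        let data = vec![1, 2, 3];
        let moved = data; // ownership of the Vec moves to `moved`

        // The next line is rejected by the compiler if uncommented:
        // error[E0382]: borrow of moved value: `data`
        // println!("{:?}", data);

        println!("{:?}", moved); // fine: `moved` is the sole owner
    }

No test suite or review process is needed to get that guarantee; it holds for every program the compiler accepts.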
You can certainly think of them that way if you want. And, you'll be wrong, at least IMO, and in what seems to be the consensus of people who study and work on PLs.
"UI" issues, while not at all unimportant, are surface level. Semantics are fundamental. Get the semantics of your language wrong, and you are stuck. Indeed, this is what happens with most popular languages that aren't well-considered:
- lang author creates some random new lang, throws whatever feels like a good idea to them into the lang
- lang community discovers that these features interact in rather surprising ways, ways that keep tripping up newcomers to the language, and would like to change them.
- lang author, having had X more years of experience now, identifies that changing the behavior could break user programs that rely on that behavior (wittingly or un-)
- lang author decides to write a new, backwards-incompatible version, community has 20 years of turmoil before recovering (python3), if it ever does (perl6).
Speaking as an academic PL researcher: you are right that this math-centered view is common in the PL research community, but that doesn't make the opposing view "wrong". There is also plenty of work in the intersection of PL and HCI. Both perspectives are crucial, and neither field is intrinsically superior to the other, despite what some individual researchers feel.
Yes, “wrong” was a bit of a strong word, but do note that I intentionally framed it as an opinion and not as, like, objectively wrong.
UI is still important, but I do not think it really defines a language.
Let’s say I created a new Haskell compiler, and it had the best error messages in the world, a hiccup-free LSP experience, maybe an alternative syntax format that’s more familiar to mainstream devs. But it still has all the same semantics. Is that still Haskell? Why or why not?
Now let’s say I took whatever has the best tooling currently, idk what it would be, maybe TypeScript, but gave it, say, non-strict semantics. Is that still TypeScript?
If you consider the programming experience/UI to be the language along with the tools that support that language, then you don't need to ask whether using Haskell in vim is the same thing as using Haskell in an IDE that supports it really well: they aren't the same thing.
If you take it to an extreme, like with Smalltalk, you wind up with languages whose environment is considered essential, they are basically part of the language at that point.
If you haven't seen this talk, it touches on exactly this topic: "A human view of programming languages" by Amy J. Ko: https://medium.com/bits-and-behavior/my-splash-2016-keynote-...

From the abstract: "In computer science, we usually take a technical view of programming languages, defining them as precise, formal ways of specifying a computer behavior. This view shapes much of the research and development that we do on programming languages, determining the questions we ask about them, the improvements we make to them, and how we teach people to use them. But to many people (even software engineers) programming languages are not purely technical things, but socio-technical things. In this talk, I discuss several alternative views of programming languages, and how these views can reshape how we design, evolve, and use programming languages in research and practice."
The upshot is that while the "PL as math" and "PL as tool" views have been well researched, if you're a researcher getting started in this area there is plenty of fertile ground out there. We might just have this view that "PL is math" because we've done the most work there, but there are many more applications of programming languages we have not thoroughly explored enough to say they are not just as important.

We can say the same thing about math itself. For example, to pure mathematicians, math is math; to physicists, math is a tool.
Amy is great, thank you for the link!
Well it's like asking, are the wheels more important on a car, or the engine?
UI can mean many things.
To most people it is about IDE and development tool chains.
Or we can think of it from the perspective of mapping programmer’s mental model to the problems they want to solve. Interface to the Mind, so to speak.
I don’t disagree, but if you set it up as “ui vs math”, you signify certain things. And based upon your comment it did not say “they mean how the programmers mental model maps to the problem”. In fact I’d say that’s more math like than not
Interesting perspective.
I have been intentionally coding with Claude in the past few months, and I have started to think about programming from a problem-solving standpoint rather than as generating artifacts that conform to whatever languages, libraries and frameworks I happen to use to solve the problem at the time.
We can call this mental model or not. But it seems to be a different abstraction than we are talking about at PL level.
To me, LLM is part of the interface between me and the solution, that is what I mean by UI.
The real work is still in my head. The PL is just a medium.
What might a programming language designed specifically as a UI for coding agents look like?
Serious (germ of a) question.
> What might a programming language designed specifically as a UI for coding agents look like?
A bad idea, probably. LLM output needs to be reviewed very carefully; optimizing the language away from human review would probably make the process more expensive. Also, where would the training data in such a language come from?
So then a programming language designed explicitly for coding agents would need to take human review into account. What are the most efficient and concise ways to express programming concepts, then?
In the end, we circle back to lisps, once you're used to it, it's as easy for humans to parse as it is for machines to parse it. Shame LLMs struggle with special characters.
Surely lisps don't have drastically more special characters than other languages? A few more parens, sure, but fewer curly braces, commas, semicolons, etc.
Also feels like making sure the tokeniser has distinct tokens for left/right parens would be all that is required to make LLMs work with them
Don't get me wrong, they do work with lisps already; I've had plenty of success having various LLMs create and manage Clojure code, so we aren't that far off.
But I'm having way more "unbalanced parenthesis" errors than with Algol-like languages. Not sure if it's because of lack of training data, post-training or just needing special tokens in the tokenizer, but there is a notable difference today.
> once you're used to it
> shame LLMs struggle
That sounds like Stockholm syndrome more than an easy-to-use language.
Yeah, makes sense it sounds like that. But the crux is probably that most of us learned programming via Algol-like languages, like C or PHP, and only after decades of programming did we start looking into lisps.
But don't take my word for it: find the programmers around you who've been looking into lisps and ask them how they feel about it.
I don’t think that would be a real issue in practice. Coding LLMs need to be able to cope with complicated expressions in many languages. If they can produce legitimate code for other languages, they can be taught to cope with s-expressions.
edit: Lisp -> s-expressions
Until such a point where we have agents not trained on human language or programming languages, I think the answer is something that’s also really good for people as well:

- one obvious way to do things
- memory safe, thread safe, concurrency safe, cancellation safe
- locality of reasoning, no spooky action at a distance
- tests as a first class feature of the language
- a quality standard library / reliable dependency ecosystem
- can compile, type check, lint, run tests, in a single command
- can reliably mock everything, either with effects or something else, such that again we maintain locality of reasoning
The old saying that a complex system that works is made up of simple systems that work applies: a language where you can get the small systems working, tested, and then built upon. All of these things work towards minimising the context needed to iterate and shortening the feedback loop of iteration.
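A minimal Rust sketch of the "reliably mock everything" item from the list above (the `Clock`/`FixedClock` names are hypothetical, my own illustration, not from the thread): with the dependency behind a trait, reasoning about `is_expired` stays local, because a test can substitute a deterministic implementation.

    // The trait is the seam; no effect system assumed.
    trait Clock {
        fn now_ms(&self) -> u64;
    }

    // Test double with a hard-coded time.
    struct FixedClock(u64);

    impl Clock for FixedClock {
        fn now_ms(&self) -> u64 {
            self.0
        }
    }

    // Only sees the trait, so its behavior is fully determined
    // by its arguments: locality of reasoning.
    fn is_expired(clock: &dyn Clock, deadline_ms: u64) -> bool {
        clock.now_ms() > deadline_ms
    }

    fn main() {
        let clock = FixedClock(1_000);
        assert!(is_expired(&clock, 999));
        assert!(!is_expired(&clock, 1_000));
    }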
If true ASI ever materializes, JVM bytecode.
This is the correct answer. Why wouldn’t an actual AI just generate raw, fully optimized executables for production?
The same reason an actual AI wouldn't play chess by brute forcing every possible position. Intelligent systems are about reasoning, not simply computing, and that requires operating at the level of abstraction where your intelligence is most effective.
You're contradicting yourself. Raw, fully optimized executables for production would mean machine code for the target platform, not an intermediate bytecode that still requires a VM.
Not really, it would be one of the steps in the chain between design and implementation for a specific hardware platform. That is, unless that code is only ever to run on a single hardware platform, a rare occurrence outside of embedded applications.
This brings up a question I’ve had for a while:
Is it possible to create a programming language that has every possible feature all at once?
I realize there are many features that are opposed to each other. Is it possible to “simply” set a flag at compile / runtime and otherwise support everything? How big would the language’s source code be?
It seems that, in practice, no, it's not possible, based on what I've read from people much closer to programming language design and compiler work.
"In practice, the challenge of programming language design is not one of expanding a well-defined frontier, it is grappling with a neverending list of fundamental tradeoffs between mutually incompatible features.
Subtyping is tantalizingly useful but makes complete type inference incredibly difficult (and in general, provably impossible). Structural typing drastically reduces the burden of assigning a name to every uninteresting intermediate form but more or less precludes taking advantage of Haskell-style typeclasses or Rust-style traits. Dynamic dispatch substantially assists decoupling of software components but can come with a significant performance cost without a JIT. Just-in-time compilation can use runtime information to optimize code in ways that permit more flexible coding patterns, but JIT performance can be unpredictable, and the overhead of an optimizing JIT is substantial. Sophisticated metaprogramming systems can radically cut down on boilerplate and improve program concision and even readability, but they can have substantial runtime or compile-time overhead and tend to thwart automated tooling. The list goes on and on."
From https://lexi-lambda.github.io/blog/2025/05/29/a-break-from-p...
I do agree with this, but also, I don't really understand a lot of the tradeoffs, or at least to me they are false tradeoffs.
Her first example is excellent. In Haskell, we have global type inference, but we've found it to be impractical. So, by far the best practice is not to use it; at the very least, all top-level items should have type annotations.
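As an aside (mine, not the commenter's): Rust bakes this exact compromise into the language rather than leaving it as a convention, requiring signatures on items while inferring types inside bodies.

    // The signature is mandatory; the compiler will not infer it.
    fn mean(xs: &[f64]) -> f64 {
        let n = xs.len() as f64;        // local type inferred
        let sum: f64 = xs.iter().sum(); // one hint to pick the Sum impl
        sum / n
    }

    fn main() {
        println!("{}", mean(&[1.0, 2.0, 3.0])); // prints 2
    }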
The second one, structural typing: have your language support both structural types and nominal types, then? This is basically analogous to how Haskell solved this problem: add type roles. Nominal types can't convert to one another, whereas structural types can. Not that Haskell is the paragon of language-well-designed-ness, but... There might be some other part of this I'm missing, but given the obviousness of this solution, and the fact that I haven't seen it mentioned, it is just striking.
On dynamic dispatch: allow it to be customized by the user - this is done today in many cases! Problem solved. Plus, with a global optimizing compiler, if you can deal with big executable size, you can have your cake and eat it too.
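For what it's worth, here is what "let the user choose" already looks like in Rust (a sketch with made-up `Shape`/`Square` names): the same trait can be called through static dispatch (monomorphized, larger binaries) or dynamic dispatch (vtable), and the call site decides.

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Square(f64);

    impl Shape for Square {
        fn area(&self) -> f64 {
            self.0 * self.0
        }
    }

    // Static dispatch: a copy is compiled per concrete type.
    fn total_static<S: Shape>(shapes: &[S]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    // Dynamic dispatch: one compiled body, calls via a vtable.
    fn total_dyn(shapes: &[&dyn Shape]) -> f64 {
        shapes.iter().map(|s| s.area()).sum()
    }

    fn main() {
        let squares = [Square(2.0), Square(3.0)];
        println!("{}", total_static(&squares));                  // 13
        println!("{}", total_dyn(&[&squares[0], &squares[1]])); // 13
    }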
On JIT: yes, JIT can take some time; it is not free. JIT can make sense even in languages that are AOT compiled; in general it optimizes code based upon use patterns. If AOT loop unrolling makes sense in C, then I certainly think runtime optimization of fully AOT compiled code must be advantageous too. But today you can just about always figure that you can get yourself a core to do this kind of thing on; we have so many of them available and don't have the tools to easily saturate them. Or, even if you can saturate them today with N cores, you probably won't be able to on the next gen, when you have N+M cores. Sure, there's gonna have to be some overhead when switching out the code, but I really don't think that's where the mentioned overhead comes from.
Metaprogramming systems are another great example: Yes, if we keep them the way they are today, at the _very least_ we're saying that we need some kind of LSP availability to make them reasonable for tooling to interact with. Except, guess what, all languages nowadays of any reasonable community size will need LSP. Beyond that, there are lots of other ways to think about metaprogramming other than just the macros we commonly have today.
I get her feeling; balancing all of this is hard. One thing you can't really get away from here is that all of this increases language, compiler, and runtime complexity, which makes things much harder to do.
But I think that's the real tradeoff here: implementation complexity. The more you address these tradeoffs, the more complexity you add to your system, and the harder the whole thing is to think about and work on. The more constructs you add to the semantics of your language, the more difficult it is to prove the things you want about its semantics.
But, that's the whole job, I guess? I think we're way beyond the point where a tiny compiler can pick a new set of these tradeoffs and make a splash in the ecosystem.
Would love to have someone tell me how I'm wrong here.
All at once? Unlikely.
However, can a lot more "features", or better, "programming paradigms" or as I would call them "architectural styles" be accommodated together, usefully?
Absolutely yes!
Objective-S does this using a combination of two basic techniques:
1. A language defined by a metaobject protocol
2. This metaobject protocol being organized on software architectural principles
(1) makes it possible to create a variable programming language. (2) widens the scope of that variability to encompass pretty much all of programming.
What was surprising is how little actual language mechanism was required to make this happen, and how far that little bit of mechanism goes. Eye-opening!
Site for the language:
https://objective.st
However, the site doesn't really have the explanation of how 1+2 work and work together. That is explained as best I could manage within the space limitation of an academic paper in Beyond Procedure Calls as Component Glue: Connectors Deserve Metaclass Status.
https://2024.splashcon.org/details/splash-2024-Onward-papers...
ACM Digital Library (open access):
https://dl.acm.org/doi/10.1145/3689492.3690052
Alas, the PhD thesis that goes into more detail is still in the works (getting close, though!). That still being in the works also means that the site will not be updated with that information for a bit. Priorities...
Can't you program everything with C? If yes, doesn't C have all features all at once?
I would assume not. Consider ARC vs GC. With ARC the programmer manages memory and with GC the compiler manages it. People choose one or the other because of the style they're going after. Could you set a flag for it. Maybe if all libraries you download were pre-compiled perhaps.
Personally, I just prefer to know that the team that supports the tool I'm using is dedicated to "that one thing" I'm after.
The Nim language, IIRC, can use either ARC or GC, and even manual management, via its multi-paradigm memory management strategies.
Source: https://nim-lang.org/1.4.0/gc.html
You could do it. You could allocate something as garbage-collectible or not at allocation time. You probably would want them to be different types, though, so that the type system keeps track of whether you need to manually deallocate this particular thing, or whether the garbage collector will do so.
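Rust is a rough existence proof of this design (my analogy, with reference counting standing in for a tracing collector): the type of the handle records the reclamation strategy, so the two disciplines can't be mixed up silently.

    use std::rc::Rc;

    fn main() {
        // Box<T>: unique ownership, freed at a statically known point.
        let owned: Box<i32> = Box::new(41);

        // Rc<T>: shared ownership, freed when the last handle drops --
        // the closest built-in stand-in for "the runtime decides".
        let shared: Rc<i32> = Rc::new(1);
        let alias = Rc::clone(&shared); // refcount is now 2

        println!("{}", *owned + *alias); // prints 42
        // `owned` is freed here; the Rc allocation is freed once
        // both `shared` and `alias` have been dropped.
    }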
It could be done... but I'm not sure what it would gain you. It seems to me that knowing that it will do one or the other is better than having to think about which it will do in each instance.
Huh?
ARC stands for Automatic Reference Counting. The compiler manages the reference counts.
They may have been thinking of Rust, where it means atomic reference counting.
(Although the compiler inserts ref count inc and dec calls automatically, so ...)
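To illustrate the spectrum (my sketch, not the commenter's): in Rust's `Arc`, the increment is an explicit `Arc::clone` call, while the decrement is inserted by the compiler when each handle goes out of scope; Swift-style ARC inserts both automatically.

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let config = Arc::new(vec![1, 2, 3]);
        let handle = {
            let config = Arc::clone(&config); // explicit increment
            thread::spawn(move || config.iter().sum::<i32>())
        };
        println!("{}", handle.join().unwrap()); // prints 6
    } // remaining Arc dropped: compiler-inserted decrement frees the Vec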
Yes, it's called C++.
Seriously tho, not really because programming languages are designed, and design is all about carefully balancing tradeoffs. It starts with the purpose of the language -- what's it for and who will be using it? From there, features follow function. Having all the features is counterproductive because some features are only good in the absence of others, others are mutually exclusive. For example, being 1-indexed or 0-indexed. You can't be both, unless it's configurable (and that just causes bugs as Julia found out).
If you want your language to be infinitely configurable to meet the needs of everyone, then you would want it to be as small as possible, not big. Something like Lisp.
Oz is maybe what you're looking for: https://en.wikipedia.org/wiki/Oz_(programming_language)
Yes, Oz is pretty cool, though I personally find the use of FP as the base limiting.
I also highly recommend the book: https://en.wikipedia.org/wiki/Concepts,_Techniques,_and_Mode...
Academically the answer is obviously yes since they are all abstractions on the same instruction set.
> Is it possible to create a programming language that has every possible feature all at once?
Some random thoughts about this:
If languages are a tool of communication between programmers (there's that adage "primarily for humans to read, and only secondarily for computers to run") would this be a good idea?
Wouldn't each set of flags effectively define a different language? With a combinatorial explosion of flags.
An act of design is not only about what to include, but what to leave out. Some features do not interact well with others, which is why tradeoffs exist. You'd have to enforce restrictions on which flags can be used together.
You'd be designing a family of programming languages rather than a single language. Finding code in this language would tell you little, until you understood the specific flags.
Since all languages compile to the same representation on silicon (give or take a few opcodes) it would have to be a language with customizable grammar and runtime.
I for one would LOVE to make semicolons; optional and choose between {} braces/indentation per project/file, just like we can with tabs/spaces now (tabs are superior)
> Since all languages compile to the same representation on silicon (give or take a few opcodes)
I'm not sure that this is the case.
It depends how we're defining "representation", but I'd argue that languages are definitely more dissimilar here than they are the same. If you want to mix and match two compilation units with the semantics of different languages, even something as simple as a cross-language if-statement is going to be a hard bridge to cross, and even when targeting a single runtime it's easy to have a syntactic layer which doesn't efficiently map to that runtime.
That said, for the first thing they asked for (different syntactical views on the same "substrate," where I'm assuming the language has one model of how its runtime works), that's very doable.
> compilation units
Is this a universal concept?
No. It's a common enough term, but the handwavy concept I wanted to get across is that if you have code mixing and matching different syntaxes then there will necessarily be boundaries between those. Code with one syntax (if you can actually mix and match runtimes as the comment author said they want) will behave differently from "adjacent" (commonly a different file or directory, but I could imagine multiple syntaxes within a file too) code with a different syntax.
In common languages, you're usually still targeting the same runtime in different compilation units, but it's a rough description of optimization boundaries (you compile one unit at a time and stitch them together during linking). Some techniques bridge the gap and thus the language crispness (e.g., post-hoc transformations on compiled units, leading to one larger "effective" compilation unit), but you can roughly think of it as equivalent to a whole shared library.
Should be called: “The Exotic Programming Languages Zoo”
To be fair, zoos don't usually feature the boring species, like insects, worms, and other tiny invertebrates.
Good ones include an arthropod (insect+) house.
Tiny invertebrates are anything but boring (although many bore), but too small to make good showcases.
Worms aren't attractive to anyone who isn't on several government lists.
Posted this because I love how concise the implementations are. Might be a combination of using OCaml plus lexer and parser generators.
A shame there's no little language here that demonstrates linear types.