socketcluster a day ago

Even large companies are still grasping at straws when it comes to good code. Meanwhile there are articles I wrote years ago which explain clearly from first principles why the correct philosophy is "Generic core, specific shell."

I actually remember, early in my career, working for a small engineering/manufacturing prototyping firm that did its own software. There was a senior developer there who didn't speak very good English, but he kept insisting that the "Business layer" should be on top. How right he was. I couldn't have imagined how much wisdom and experience was packed into such simple, malformed sentences. Nothing else matters really. Functional vs imperative is a very minor point IMO, mostly a distraction.

  • js8 17 hours ago

    "Generic core, specific shell."

    Your advice is the opposite of "functional core, imperative shell". In FCIS, the IS is the generic part, kept simple precisely because it's usually hard to test (it deals with resources and external dependencies). By being simple, it remains reasonably testable.

    On the other hand, FC is where the business logic lives, which can be complex and specific. The reason you want it "functional" (really just another name for "composable from small blocks") is that it can be tested for validity without external dependencies.

    So the IS shields you from the technicalities of external dependencies: what kind of quirks your DB has, whether we're sending data over the network or writing to a file, whether the user inputs commands in Spanish or English, whether you display a green square or a blue triangle to indicate the report is ready, etc.

    On the other hand, FC deals with the actual business logic (what you want to do), which can be both generic and specific. These are just different types of building blocks (we call them functions) living in the FC.

    FCIS is exemplified by user-shell interaction. The user (FC) dictates the commands and interprets the output according to her "business needs", while the shell (IS) simply runs the commands without questioning their purpose. It's not the job of the IS to verify or handle user errors caused by wrong commands.

    But the user doesn't do stuff on her own; you could take her to a pub and she would tell you the same sequence of commands when facing the same situation. In that sense, the user is "functional" - independent of the actual state of the computer system, just as the return value of a mathematical function depends only on its arguments.

    Another example is MVC, where M is the FC and VC is the IS. Although it's not always exactly like that, for a variety of reasons.

    You can think of IS as a translator to a different language, understood by "the other systems", while the FC is there to supply what is actually being communicated.
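
    For instance, a minimal sketch of the split (hypothetical TypeScript; `UserDb`/`Mailer` and the field names are invented, only `getUsers`/`bulkSend` come from the article's snippet):

      type User = { email: string; expiresAt: number };
      type Email = { to: string; body: string };
      interface UserDb { getUsers(): Promise<User[]> }
      interface Mailer { bulkSend(emails: Email[]): Promise<void> }

      // FC: pure decision logic, no I/O, trivially unit-testable.
      function expiryEmails(users: User[], now: number): Email[] {
        return users
          .filter(u => u.expiresAt < now)
          .map(u => ({ to: u.email, body: "Your account has expired." }));
      }

      // IS: the only place that knows about the DB and the mail provider.
      async function sendExpiryEmails(db: UserDb, mailer: Mailer): Promise<void> {
        const users = await db.getUsers();
        await mailer.bulkSend(expiryEmails(users, Date.now()));
      }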

    • Twisol 14 hours ago

      I disagree that these two pieces of advice are opposed. I think they are orthogonal at worst, and in agreement at best.

      "Functional core, imperative shell" (FCIS) is a matter of implementing individual software components that need to engage with side-effects --- that is, they have some impact on some external resources. Rather than threading representations of the external resources throughout the implementation, FCIS tells us to expel those concerns to the boundary. This makes the bulk of the component easier to reason about, being concerned with pure values and mere descriptions of effects, and minimizes the amount of code that must deal with actual effects (i.e. turning descriptions of effects into actual effects). It's a matter of comprehensibility and testability, which I'll clumsily categorize as "verification": "Does it do what it's supposed to do?"

      "Generic core, specific shell" (GCSS) is a matter of addressing needs in context. The problems we need solved will shift over time; rather than throwing away a solution and re-solving the new problem from scratch, we'd prefer to only change the parts that need changing. GCSS tells us we shouldn't simply solve the one and only problem in front of us; we should use our eyes and ears and human brains to understand the context in which that problem exists. We should produce a generic core that can be applied to a family of related problems, and adapt that to our specific problem at any specific time using a, yes, specific shell. It's a matter of adaptability and solving the right problem, which I'll clumsily categorize as "validation": "Is what it's supposed to do what we actually need it to do?"

      Ideally, GCSS is applied recursively: a specific shell may adapt an only slightly more generic core, which then decomposes into a smaller handful of problems that are themselves implemented with GCSS. When business needs change in a way that the outermost "generic core" can't cover, odds are still good that some (or all) of its components can still be applied in solving the new top-level problem. FCIS isn't really amenable to the same recursion.

      Both verification and validation activities are necessary. One is a matter of internal consistency within the component; the other is a matter of external consistency relative to the context the component is being used in. FCIS and GCSS advise on how to address each concern in turn.

      • netdevphoenix 5 hours ago

        >GCSS tells us we shouldn't simply solve the one and only problem in front of us; we should use our eyes and ears and human brains to understand the context in which that problem exists

        This violates KISS and YAGNI, and potentially leads to over-engineered code and excessive abstraction.

        • gf000 17 minutes ago

          The context was "business"; that kind of application is developed quite differently from, say, a cool little hobby terminal emulator or whatever.

          Even though the business currently doesn't have a need to e.g. support any currency other than USD and EUR, an experienced developer will clearly see that it is unlikely to stay that way for long, so doing some preliminary preparation for generalizing currencies may well be worth the time.
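
          That preparation can be as small as routing every amount through a single money type (a hypothetical TypeScript sketch, not anything from the article):

            // Hypothetical: amounts never travel as bare numbers, so adding
            // a third currency later is a local change.
            type Currency = "USD" | "EUR";
            type Money = { amount: number; currency: Currency };

            function add(a: Money, b: Money): Money {
              if (a.currency !== b.currency) throw new Error("currency mismatch");
              return { amount: a.amount + b.amount, currency: a.currency };
            }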

        • Twisol 3 hours ago

          Everything "potentially leads" to adverse outcomes if not applied with due care and cognizance. That includes KISS and YAGNI. If you're looking for a principle you can apply in 100% of cases without consideration of context, I'm afraid you'll need to shop elsewhere.

      • js8 13 hours ago

        > I think they are orthogonal at worst, and in agreement at best.

        I have considered them being orthogonal, but then the definition of "shell" and "core" becomes problematic in this comparison. What you call the shell in GCSS is not the shell in FCIS at all; it's more like a boundary. Even there, it is questionable whether the boundary should be more specific than the core. At the core, things can be more integrated than at the boundary, and so it can have more business-specific logic.

        The definitional question is: if you take an application, where is the business logic? Is it in the "core" or not? I would say it is; literally, whatever serves the application's main purpose is its "core". And "shell" is similarly well-defined. For example, a UI without an engine implementing the actual logic is just a "shell".

        I am not disputing GP's advice as you understand it, although I feel it is perhaps a little bit simplistic if not tautological ("prefer generic building blocks where possible"), and it really muddles up what the core and shell are in the FCIS sense.

        • garethrowlands 13 hours ago

          One way to get some intuition with FCIS is to write some Haskell.

          Because Haskell programs pretty much have to be FCIS or they won't compile.

          How it plays out is...

          1. A Haskell program executes side effects (known as `IO` in Haskell). The type of the `main` program is `IO ()`, meaning it does some IO and doesn't return a value - a program is not a function

          2. A Haskell program (code with type `IO`) can call functions. But since functions are pure in Haskell, they can't call code in `IO`.

          3. This doesn't actually restrict what you can do but it does influence how you write your code. There are a variety of patterns that weren't well understood until the 1990s or later that enable it. For example, a pure Haskell function can calculate an effectful program to execute. Or it can map a pure function in a side-effecting context. Or it can pipe pure values to a side-effecting stream.

          • ynhatex 6 hours ago

            I used to write lots of Haskell before deciding it didn't meet my needs. However, the experience provided lots of long-term benefits, including an FCIS design mindset.

            Recently, I did a major python refactoring project, converting a prototype/hack/experiment into a production-quality system. The prototype heavily intermixed IO and app logic, and I needed to write unit tests for the production system. Even with fixtures and mocking, unit testing was painful and laborious, so our test coverage was lousy.

            Partitioning the core classes into pure and impure components was the big win. Unit testing became trivial and we caught lots of bugs in the original business logic. More recently, we changed the IO from files to a DB and having encapsulated the IO was also a win.

            • urxvtcd 6 hours ago

              May I ask how you model your functional code in Python, in the absence of Haskell's algebraic data types?

          • js8 8 hours ago

            I agree with the suggestion to study Haskell, I like Haskell quite a lot (although I don't write applications in it).

        • Twisol 13 hours ago

          > then the definition of the "shell" and "core" becomes problematic in this comparison

          I agree -- if you're trying to make the words "shell" and "core" mean the same things between FCIS and GCSS, or identify the same parts of a program, then there will be problems. I think FCIS and GCSS are just two different ways of analyzing a program into pieces. Just as the number 8 can be seen through addition as 3 + 5 and through multiplication as 2 * 4, a program can be analyzed in multiple ways. If you view a program through the lens of FCIS, you expect to see a broad region of the program in which side-effects don't occur and a narrow region in which they do. If you view a program through the lens of GCSS, you expect to see broad parts of the program that solve general problems, and narrower regions in which those problems are instantiated in the specific. The narrower regions are all "shell"-shaped, but that doesn't mean they are "the" shell. They have in common simply that they wrap a bulk of functionality to interface it to a larger context.

          > At the core, things can be more integrated than at the boundary, and so it can have more business-specific rules.

          I tend to disagree. Decomposition is a fundamental part of software engineering: we decompose a large problem into smaller ones, solve those, then compose those solutions into a solution to the large problem (cf. Parnas' "On the criteria to be used in decomposing systems into modules"). It is often easier to solve a more general problem than the one originally given (Polya's principle). Combining the two yields GCSS.

          A solution to each individual small problem can be construed as having its own generic core, and the principles used in composing sibling solutions constitute the specific shells that wrap them, allow them to interface, and together implement a solution to a higher-level problem.

          Because there are multiple of these "cores", each solving a decomposed part of the top-level problem, it's hard for me to see how "At the core, things can be more integrated than at the boundary".

          > The definition question is, if you take an application, where is the business logic, is it in the "core" or not?

          I don't mean to be a sophist, but I think I need a more precise meaning of "business logic" before I can answer this question. In the process of solving successively smaller (and more general) problems, we abstract away from the totality of the business problem being solved, and address smaller and less-specific aspects of that problem. It may be that each subproblem's solution is architected as an individual instance of FCIS, as is often argued for microservice architectures; or that each subproblem is purely functional and only the top-level solution is wrapped in an imperative shell; or anywhere in between. Needless to say, I think that choice is orthogonal.

          As a result, I would say that the business logic itself has been factorized and distributed across the many subproblems and their solutions, and that indeed the "specific shell"s that are responsible for specializing solutions toward the specific case of the business need may necessarily include business logic. For instance, when automating a business process, one often needs to perform a complex step A before a complex step B. While both A and B might be independently solvable, orchestrating them together is still "business logic", because they need to be performed in order according to business needs.

          (In all of this, perhaps you can see why I don't think the "core" and "shell" of FCIS should be identified with the "core" and "shell" of GCSS. Words are allowed to have contextual meanings!)

          • js8 9 hours ago

            "Words are allowed to have contextual meanings"

            Sure, but this discussion is about FCIS, that's the context, and the GP should consider that.

            " think I need a more precise meaning of "business logic" before I can answer this question"

            Well, some examples. A tax application - the tax calculation according to the law. A word processor - the layout and rendering engine. A video game - something that calculates the state of the world, according to the rules of the game.

            So a game is a good example where the core can be more specialized than the shell. You can imagine a generic UI library shared by a bunch of games, but a generic game rules engine - that's just a programming language.

            "Decomposition is a fundamental part of software engineering: we decompose a large problem into smaller ones, solve those, them compose those solutions into a solution to the large problem"

            There is a big misconception in SW engineering that the above decomposition always exists in a meaningful way. Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere. It's just a list of rules and exceptions that need to be implemented as stated. You can decompose it into "1st part of calculation" and "2nd part of calculation", but that's meaningless (unhelpful). (Similarly for the game example above, the rules only exist in the context of other rules.)

            Surprisingly many problems are like that, and that makes them kinda difficult to test.

            • Izkata 5 hours ago

              > Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere. It's just a list of rules and exceptions that need to be implemented as stated. You can decompose it into "1st part of calculation" and "2nd part of calculation", but that's meaningless (unhelpful). (Similarly for the game example above, the rules only exist in the context of other rules.)

              As someone who does this himself for taxes, you're looking only at the "specific shell" part. The generic core is the thing that does the math - spreadsheet, database, whatever. The tax rules are then imposed on top of that core.

              • js8 4 hours ago

                Well you can claim that the core is the programming language, in which we write those tax rules, but that's not a very useful distinction IMHO (for how to write programs in the language).

                • Izkata 4 hours ago

                  That's only the case where a usable generic core already exists. A great example where it didn't exist is python's "requests" library: https://requests.readthedocs.io/en/latest/

                  The example on the homepage is the "specific shell" - simple and easy to use, and by far the most common usage, but if you scroll down the table of contents on the API page (https://requests.readthedocs.io/en/latest/api/) you'll see sections titled "Lower-Level Classes" and "Lower-Lower-Level Classes" - that's the generic core, which the upper level is implemented in terms of.

                  • Twisol 3 hours ago

                    I like this example :) Another good example might be Git's distinction between "porcelain" and "plumbing"; the porcelain is implemented in terms of the plumbing, and gives a nicer* interface in terms of what people generally want to do with Git, but the plumbing is what actually does all the general, low-level stuff.

                    * opinions vary

            • Twisol 8 hours ago

              > Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere.

              Quite right! However, the tax code does change with some regularity, and we can expect that companies like Intuit should have gotten quite good by now -- even on a pure profit motive -- at making it possible to relatively quickly modify only the parts of their products that require updating to the latest tax code. To put it another way, while it might be the case that the tax code for any given year is not amenable to decomposition, all tax codes within a certain span of years might be specific instances of a more general problem. (I recall a POPL keynote some years back that argued for formalizing tax codes in terms of default logic!) By solving that general problem, you can instantiate your solution on the given year's tax code without needing to recreate the entire program from scratch.

              To be clear, I'm the one who brought subproblem decomposition into the mix, and we shouldn't tar the top-level commenter with that brush unnecessarily. Of course some problems will be un-decomposable "leaves". I believe their original point, about a specific business layer sitting on top of a more general core, still applies.

              > So a game is a good example where the core can be more specialized than the shell. You can imagine a generic UI library shared by a bunch of games, but a generic game rules engine - that's just a programming language.

              As it happens, the "ECS pattern" (Entity, Component, and System) is often considered to be a pretty good way of conceptualizing the rules of a game. An ECS framework solves the general problem (of associating components to entities and executing the systems that act over them), and a game developer adapts an ECS framework to their specific needs. The value in this arrangement is precisely that, as the game evolves and takes shape, only the logic specific to the game needs to be changed. The underlying ECS framework is on the whole just as appropriate for one game as for any other.
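
              To make the split concrete, here's a toy sketch (hypothetical TypeScript, not any real framework's API): the machinery below knows nothing about any particular game; a game supplies only component data and system functions.

                // Generic core: entities are ids; components live in per-kind maps;
                // systems are just functions run over whatever entities carry them.
                type Entity = number;

                class World {
                  private stores = new Map<string, Map<Entity, unknown>>();

                  set<T>(e: Entity, kind: string, data: T): void {
                    if (!this.stores.has(kind)) this.stores.set(kind, new Map());
                    this.stores.get(kind)!.set(e, data);
                  }

                  each<T>(kind: string, fn: (e: Entity, data: T) => void): void {
                    this.stores.get(kind)?.forEach((data, e) => fn(e, data as T));
                  }
                }

                // Specific shell: one game's rule, expressed against the generic core.
                type Position = { x: number; y: number };
                const gravity = (world: World) =>
                  world.each<Position>("position", (e, p) =>
                    world.set(e, "position", { x: p.x, y: p.y - 1 }));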

              (I could also make a broader point about game engines like Unity and Unreal, and how so many games these days take the "general" problem solved by these engines and adapt them to the "specific" problem of their particular game. In general, nobody particularly wants to make engine-level changes for each experiment during the development of a game, even though sometimes a particular concept for a game demands a new engine.)

              > Sure, but this discussion is about FCIS, that's the context, and the GP should consider that.

              I understood the original commenter as criticizing FCIS (or at least the original post, as "grasping at straws") and suggesting that GCSS is generally more appropriate. In that context, I think it's natural to interpret their use of "core" and "shell" as competing rather than concordant with FCIS.

      • bccdee 7 hours ago

        I think the "generic core" is often a SQL database or other such generic storage/analytics layer, while the "functional core" is the business logic that operates on the specific domain objects.

    • zelphirkalt 12 hours ago

      A functional core can actually be very generic. There is nothing in the functional paradigm that makes functions less generic. They are as generic or specific as you write them.

  • foofoo12 a day ago

    > Even large companies are still grasping at straws when it comes to good code

    Probably many reasons for this, but what I've seen often is that once the code base has been degraded, it's a slippery slope downhill after that.

    Adding functionality often requires more hacks. The alternative is to fix the mess, but that's not part of the task at hand.

    • stitched2gethr a day ago

      I've seen it many times. And then every task takes longer than the last one, which is what pushes teams to start rewrites. "There's never enough time to do it right, but always time to do it again."

    • motorest 14 hours ago

      > Probably many reasons for this, but what I've seen often is that once the code base has been degraded, it's a slippery slope downhill after that.

      Another factor, and perhaps the key factor, is that contrary to OP's extraordinary claim there is no such thing as objectively good code, or one single and true way of writing good code.

      The crispest definition of "good code" is that it's not obviously bad code from a specific point of view. But points of view are also subjective.

      Take for example domain-driven design. There are a myriad of books claiming it's an effective way to generate "good code". However, DDD has a strong object-oriented core, to the extent it's nearly a purist OO approach. But here we are, seeing claims that the core must be functional.

      If OP's strong opinion on "good code" is so clear and obvious, why are there such critical disagreements at such a fundamental level? Is everyone in the world wrong, and OP is the poor martyr who is cursed with being the only soul in the whole world who even knows what "good code" is?

      Let's face it: the reason there is no such thing as "good code" is that opinionated people making claims such as OP's are actually passing off "good code" claims as proxies for their own subjective and unverified personal taste. In a room full of developers, if you throw a rock in a random direction you're bound to hit one or two of these messiahs, and no two of them agree on what good code is.

      Hearing people like OP comment on "good code" is like hearing people comment on how their regional cuisine is the true definition of "good food".

      • bccdee 7 hours ago

        > However, DDD has a strong object-oriented core

        The original 2003 DDD book is very 2003 in that it is mired in object orientation to the point of frequently referencing object databases¹ as a state-of-the-art storage layer.

        However, the underlying ideas are not strongly married to object orientation and they fit quite nicely in a functional paradigm. In fact, ideas like the entity/value object distinction are rather functional in and of themselves, and well-suited to FCIS.

        [1]: https://en.wikipedia.org/wiki/Object_database

        • motorest 4 hours ago

          > The original 2003 DDD book is very 2003 in that it is mired in object orientation to the point of frequently referencing object databases¹ as a state-of-the-art storage layer.

          Irrelevant, as a) that's just your own personal and very subjective opinion, b) DDD is extensively documented as the one true way to write "good code", which means that by posting your comment you are unwittingly proving the point.

          > However, the underlying ideas are not strongly married to object orientation and they fit quite nicely in a functional paradigm.

          "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.

          The criticism of anemic domain models, which are elevated to the status of anti-pattern, is more than enough to reject any claim that functional programming is compatible with DDD.

          And that's perfectly fine. Not being DDD is not a flaw or a problem. It just means it's something other than DDD.

          But the point that this proves is that there is no one true way of producing "good code". There is no single recipe. Anyone who makes this sort of claim is either both very naive and clueless, or is invested in enforcing personal tastes and opinions as laws of nature.

      • jve 9 hours ago

        > However, DDD has a strong object-oriented core, to the extent it's nearly a purist OO approach

        Really?

        https://fsharpforfunandprofit.com/ddd/

        • JoelMcCracken 8 hours ago

          Yeah, I haven’t read Scott’s books, but my understanding of DDD is such that FP should be extremely applicable to it.

          DDD is described in terms of OOP, but really imo it makes far more sense in fp contexts.

  • quietbritishjim 10 hours ago

    > Meanwhile there are articles I wrote years ago which explain clearly from first principles why the correct philosophy is ...

    I think this is a very common mistake. You've spent years, maybe decades, writing code and now you want to magically transfer all that experience in a few succinct articles. But no advice that you give about "the correct philosophy" is going to instantly transfer enough knowledge to make all large companies write good code, if only they followed it. Instead, I'm sure it's valuable advice, but more along the lines of a fragment within a single day of learning for a diligent developer.

    A company I worked at recently had a more extreme version of this mistake. It had software written in the 1980s based on a development process by Michael Jackson (no, not that one!), a software researcher who had spent his whole career trying to come up with silly processes that were meant to fix software development once and for all; he wrote whole books about it. I remember reading a recent interview with him where he mourns that developers today are interested in new programming languages but not development methodologies. (The code base I worked on was fine by the way, given that it was 40 years old, but not really because of this Jackson stuff.)

    I'm reminded of the Joel on Software article [1] where he compares talented (naturally or through experience) developers to really talented expert chefs, and those following some methodology to people working at McDonald's.

    [1] https://www.joelonsoftware.com/2001/01/18/big-macs-vs-the-na...

    • Twisol 8 hours ago

      > But no advice that you give about "the correct philosophy" is going to instantly transfer enough knowledge to make all large companies write good code, if only they followed it.

      Good old "Programming as Theory Building". It's almost impossible to achieve this kind of transfer without already having the requisite lived experience.

      [0]: https://ratfactor.com/papers/naur1_theory_building.pdf

      • naasking 6 hours ago

        "The astronomer may speak to you of his understanding of space, but he cannot give you his understanding." ~ Kahlil Gibran ~

  • lelanthran 14 hours ago

    > the correct philosophy is "Generic core, specific shell."

    > Nothing else matters really. Functional vs imperative is a very minor point IMO, mostly a distraction.

    I'm torn on this. This really is the faster way to higher quality.

    OTOH, if more developers knew this, I wouldn't be so much faster when I create my systems for clients. I'd just be a "normal 1x dev".

    I like that I can implement features, sans AI assistance, in my LoB applications faster than devs with Claude Code can on their $FRAMEWORK.

  • ericmcer 5 hours ago

    Isn't this saying the business layer should not be on top?

    Business layers should be accessible via an explicit interface/shape that is agnostic to the layers above it. So if the org decides to move from mailchimp to some other email provider the business logic can remain untouched and you just need to write some code mapping the new provider to the business logic's interface.
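
    In code that usually comes down to the business logic owning the interface and each provider getting a thin adapter (a hypothetical sketch; the client shape is invented for illustration, not the real Mailchimp SDK):

      // Owned by the business layer; providers adapt to it, never the reverse.
      interface EmailSender {
        send(to: string, subject: string, body: string): Promise<void>;
      }

      // One thin adapter per provider; swapping providers only touches this.
      // (The client shape here is illustrative, not Mailchimp's actual API.)
      class MailchimpSender implements EmailSender {
        constructor(private client: { post(path: string, body: object): Promise<void> }) {}

        send(to: string, subject: string, body: string): Promise<void> {
          return this.client.post("/messages/send", { to, subject, body });
        }
      }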

    Maybe our visualizations are mixed up, but I always viewed things like cloud providers, libraries etc. as potentially short lived whereas the core logic could stick around forever.

  • frank_nitti a day ago

    These are great and succinct, yours and your teammate’s.

    I still find myself debating this internally, but one objective metric is how smoothly my longer PTOs go:

    The only times I haven’t received a single emergency call were when I left teammates a large and extremely specific set of shell scripts and/or executables that do exactly one thing. No configs, no args/opts (or ridiculously minimal ones), each named something like run-config-a-for-client-x-with-dataset-3.ps1, that took care of everything for one task I knew they’d need. Just double click this file when you get the new dataset, or clone/rename it and tweak line #8 if you need to run it for a new client, that kind of thing.

    Looking inside, the scripts/programs are the opposite of DRY and all the similar principles I’ve been taught (save for KISS and others similarly simplistic).

    But the result speaks for itself. The further I go down that excessively basic path, the more people can get work done without me online, and I get to enjoy PTO. Anytime I make a slick flexible utility with pretty code and docs, I get the “any chance you could hop on?” text. Put the slick stuff in the core libraries and keep the executables dumb.

    • zdc1 16 hours ago

      I see a similar problem in infra-land where people expose too many config variables for too many things, creating more cruft. Knowing what to hardcode and what to expose as a var is something a lot of devs don't seem to understand; and don't realise they don't understand.

      • frank_nitti 7 hours ago

        Oh definitely, many headaches untangling massive “variables.tf” files where the value is identical in 100% of the target environments, and would be nonsensical to change without corresponding changes in the infra config resources/modules as well.

        My favorite is when security policy mandates something like private networking and RBAC, and certain resources only have meaning in those contexts; for heaven’s sake, why are we making their basic args like “enforce_tls” or “assign_public_ip” or “enable_rbac” into variable params for the user to figure out?

    • timpieces a day ago

      Yes, I feel that when to apply certain techniques is frequently under-discussed. But I can't blame people for erring on the side of 'do everything properly', as this makes life more pleasant in teams. Although I think if you squint, the principle still applies to your example: the further you get from the 'core' of your platform/application/business/what-have-you, the less abstract you need to be.

    • chamomeal 18 hours ago

      That is pretty convincing advice!! Maybe I’ll try writing more specific scripts.

  • zelphirkalt 12 hours ago

    A non-functional core tends to become a buggy mess, with workarounds in the shell to account for those bugs, and then one needs to know about the nature of the core to use the shell correctly, and so on. A functional core lends itself very well to unit tests; writing them is almost trivial when the functional core is done right. The imperative shell is then less of an issue, because the blast radius of bugs is reduced to one usage of the core. The imperative shell should of course be kept as small as reasonably possible.
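
    For instance, a test against a pure core needs no fixtures or mocks at all (a sketch assuming a Jest-style test runner; the expiryEmails function is made up):

      // Pure core under test (inlined here for the sketch).
      const expiryEmails = (users: { email: string; expiresAt: number }[], now: number) =>
        users.filter(u => u.expiresAt < now).map(u => ({ to: u.email, body: "expired" }));

      // No database, no mail server, no mocks: values in, values out.
      test("only expired users get an email", () => {
        const users = [
          { email: "a@example.com", expiresAt: 10 },
          { email: "b@example.com", expiresAt: 99 },
        ];
        expect(expiryEmails(users, 50).map(e => e.to)).toEqual(["a@example.com"]);
      });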

  • zelphirkalt 11 hours ago

    I would even go so far as to say that large companies struggle with this even more than small ones. The number of people who need to know how to build things properly is larger than in a small company, where one knowledgeable engineer might already be sufficient. Too many cooks spoil the broth. And lots of people are cooking these days.

  • spoiler 11 hours ago

    In my head, and the way it's usually described, the generic and specific are swapped. The core handles a specific, pure problem; the shell is the generic part (whether you get the data via HTTP, a database, the filesystem, etc. is irrelevant to the core problem).

  • veqq a day ago

    > The more specific, the more brittle. The more general, the more stable. Concerns evolve/decay at different speeds, so do not couple across shearing layers. Notice how grammar/phonology (structure) changes slowly while vocabulary (functions, services) changes faster.

    ...

    > Coupling across layers invites trouble (e.g. encoding business logic with “intuitive” names reflecting transient understanding). When requirements shift (features, regulations), library maintainers introduce breaking changes or new processor architectures appear, our stable foundations, complected with faster-moving parts, still crack!

    https://alexalejandre.com/programming/coupling-language-and-...

novoreorx 11 hours ago

While I largely agree with the philosophy, the example provided is not very practical. The code snippet `getExpiredUsers(db.getUsers(), Date.now())` is unlikely to occur in real-life scenarios. No one would retrieve all users and then filter them within the program. Instead, it should be `db.getExpiredUsers(Date.now())`.

We should never be too extreme about anything; otherwise good turns into bad.

  • _flux 10 hours ago

    With a good library you could do just that, by having the functions return only queries, and then expanding them to the actual values (by interacting with the DB) after applying the filtering?

    • soulofmischief 10 hours ago

      So would you then have to do `getActualUsers(db.getUsers())` or `query(db.getUsers())`?

      Still smells like in such a case the developer avoids the complications of abstraction or OOP by making the user deal with it. That's bad API design due to putting ideology before practicality or ergonomics.

      • bulatb 8 hours ago

        You would have an API that makes the query shape, the query instance with specific values, and the execution of the query three different things. My examples here are SQLAlchemy in Python, but LINQ in C# and a bunch of others use the same idea.

        The query shape would be:

          active_users = Query(User).filter(active=True)
        
        That gives you an expression object which only encodes an intent. Then you have the option to make basic templates you can build from:

          def active_users_except(exclude):
              return active_users.filter(User.id.not_in(exclude))
        
        ...where `exclude` is any set-valued expression.

        Then at execution time, the objects representing query expressions are rendered into queries and sent to the database:

          exclude_criterion = rude_users()  # A subquery expression
          polite_active_users = load_records(
              active_users_except(exclude_criterion)
          )
        
        With SQLAlchemy, I'll usually make simple dataclasses for the query shapes because "get_something" or "select_something" names are confusing when they're not really for actions.

          @dataclass
          class ActiveUsers(QueryTemplate):
              active_if: Expression = User.active == true()
        
              @classmethod
              def excluding(cls, bad_set):
                  return cls(
                      and_(
                          User.active == true(),
                          User.id.not_in(bad_set)
                      )
                  )
        
              @property
              def query(self):
                  return Query(User).filter(self.active_if)
        
          load_records(
              ActiveUsers.excluding(select_alice | select_bob).query
          )
        • soulofmischief 8 hours ago

          This is a better story because it has consistent semantics and a specific query structure. The db.getUsers() approach is not part of a well-thought-out query structure.

        • t_mahmood 8 hours ago

          As in Django querysets. But starts to get messy with complex queries.

          • bulatb 7 hours ago

            It can. SQLAlchemy has good support for types since 2.0, which helps a lot.

      • shortrounddev2 7 hours ago

        In linq (C#)

            IEnumerable<User> getExpiredUsers(DbSet<User> users)
                => users.Where(u => u.ExpiresAt < DateTime.UtcNow);
        
        Such simple logical expressions (called expression trees) get converted to SQL queries
        • ryanrasti 5 hours ago

          This is exactly the way forward: encapsulation (the function), type safety, and dynamic/lazy query construction.

          I'm building a new project, Typegres, on this same philosophy for the modern web stack (TypeScript/PostgreSQL).

          We can take your example a step further and blur the lines between database columns and computed business logic, building the "functional core" right in the model:

            // This method compiles directly to a SQL expression
            class User extends db.User {
              isExpired() {
                return this.expiresAt.lt(now());
              }
            }
          
            const expired = await User.where((u) => u.isExpired());
          
          Here's the playground if that looks interesting: https://typegres.com/play/
  • fulafel 6 hours ago

    Depends on the programming environment: if it's an O(n) operation anyway, meaning you don't have the times indexed, and your computation is colocated with the data and the DB interface uses lazy sequences...

    (Also, real-life systems of course do things inefficiently all the time)

  • doix 10 hours ago

    > No one would retrieve all users and then filter them within the program.

    No one _should_ do that, but that's a common enough problem (that usually doesn't get found until code is running in production). I suspect with the rise of vibe coding, it's going to happen more and more.

    • regularfry 8 hours ago

      Sometimes it's forced by using the wrong database in the first place, or the wrong data structure. It can be less pain to do a bit of post-processing in the application layer than to unpick either of those.

hinkley a day ago

Bertrand Meyer suggested another way to consider this that ends up in a similar place.

For concerns of code complexity and verification, code that asks a question and code that acts on the answers should be separated. Asking can be done as pure code, and if done as such, only ever needs unit tests. The doing is the imperative part, and it requires much slower tests that are much more expensive to evolve with your changing requirements and system design.
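
As a rough sketch (hypothetical names, TypeScript to match the thread's other examples): the asking is pure and unit-testable, the doing is a thin imperative wrapper around it.

    // Ask: pure, covered entirely by fast unit tests.
    function overdueInvoiceIds(invoices: { id: string; dueAt: number }[], now: number): string[] {
      return invoices.filter(i => i.dueAt < now).map(i => i.id);
    }

    // Act: imperative, the only part that needs slower integration tests.
    async function remindOverdue(
      repo: { all(): Promise<{ id: string; dueAt: number }[]> },
      notifier: { remind(id: string): Promise<void> },
    ): Promise<void> {
      for (const id of overdueInvoiceIds(await repo.all(), Date.now())) {
        await notifier.remind(id);
      }
    }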

The one place this advice falls down is security - functions that do things without verifying preconditions are exploitable, and they are easy to accidentally expose to third-party code through the addition of subsequent features, even if initially they are unreachable. Sun biffed this way a couple of times with Java.

But for non-crosscutting concerns this advice can also be a step toward FC/IS, both in structuring the code and in acclimating devs to the paradigm, because you can start extracting pure code sections in place.

  • Jtsummers a day ago

    Command-Query Separation is the term for that. However, I find this statement odd:

    > having functions that do things without verifying preconditions are exploitable

    Why would you do this? The separation between commands and queries does not mean that executing a command must succeed. It can still fail. Put queries inside the commands (but do not return the query results, that's the job of the query itself) and branch based on the results. After executing a command which may fail, you can follow it with a query to see if it succeeded and, if not, why not.

    https://en.wikipedia.org/wiki/Command%E2%80%93query_separati...

    • jakewins 14 hours ago

      I think CQRS is something different than what’s being described here. “Query” code in CQRS can still “do stuff”: call an external database, grab locks, audit trail recording etc.

      What’s being described here is something lower level, that you keep as much code as you can as a side-effect-free “pure functional core”. That pattern is useful both for the “command” and “query” side of a CQRS system, and is not the same thing as CQRS

      • Jtsummers 5 hours ago

        If by "described here" you mean the article, yes, it is not about CQRS or CQS. I was responding to hinkley who was referencing CQS as defined by (or at least popularized by) Meyer in his Eiffel language and books on OO programming.

      • mexicocitinluez 11 hours ago

        It's not about literally doing things (ie logging) it's about the intent.

        Query and ask are synonyms and represent the same idea in this context.

        • jakewins 6 hours ago

          The fact that “query” and “ask” are synonyms in English does not make the patterns the same.

          The key design goal in this thread was to create a pure functional core, which you can “ask” things of. That pattern is useful on both the command and query side of a CQRS system, and a different thing from splitting up mutating and reading operations as CQRS proposes

          Maybe I misunderstand you though. Say you have a CQRS system that reads and writes to a database. Are you proposing the query side be implemented in pure side-effect-free functional code? How should the pure code make the network calls to the database?

        • stonemetal12 6 hours ago

          Then why the weird assertion that "command" code can only do things and not validate input?

          • jakewins 5 hours ago

            That is not something that’s necessary for all CQRS systems, but maybe is something you’ve heard for the subset that people call “Event Sourcing”? There it’s a design goal that the system only records events that are occurring, so there’s no domain level validation that can be done on the command path - the user pressed the button whether we like it or not, so to speak. Whether the event has the intended effect is worked out after the event is recorded.

            But there’s nothing in the more general idea of “separate reads from writes” that mandates “no validation on writes”

          • Jtsummers 5 hours ago

            Commands can validate their input in CQS. What they don't do, in strict CQS, is return values. They can set state which can then be queried after execution which can let you retrieve an updated result or check to see if an error occurred during execution or whatever.

    • layer8 a day ago

      In asynchronous environments, you may not be able to repeat the same query with the same result (unless you control a cache of results, which has its own issues). If some condition is determined by the command’s implementation that subsequent code is interested in (a condition that isn’t preventing the command from succeeding), it’s generally more robust for the command to return that information to the caller, who then can make use of it. But now the command is also a query.

      • hinkley 13 hours ago

        I can’t decide if this really is the biggest problem with CQS. Certainly the wiki page claims it is, and it’s a reasonable argument. For some simpler cases you could dodge it by wrapping the function pairs/tuples in a lock. Database calls are a bit sketchy, because a transaction only “fixes” the problem if you ignore the elephant in the room: measurably reduced system parallelism, since even in an MVCC database transactions aren’t free. They’re just cheaper.

        Caches always mess up computational models because they turn all reads into writes. Which makes things you could say with static analysis no longer true. I know a lot of tricks for making systems faster and I’ve hardly ever seen anyone apply most of them to systems after caching was introduced. It has one upside and dozens of downsides as bad or worse than this one.

      • Jtsummers a day ago

        > it’s generally more robust for the command to return that information to the caller, who then can make use of it. But now the command is also a query.

        You don't need the command to return anything (though it can be more efficient or convenient). It can set state indicating, "Hey, I was called, but by the time I tried to do the thing the world had changed and I couldn't. Try using a lock next time."

          if (query(?)) {
            command(x)
            result := status(x) //  ShouldHaveUsedALockError
          }
        
        The caller can still obtain a result following the command, though it does mean the caller now has to explicitly retrieve a status rather than getting it in the return value.
        • layer8 a day ago

          Where is that state stored, in an environment where the same command could be executed with the same parameters but resulting in a different status, possibly in parallel? How do you connect the particular command execution with the particular resulting status? And if you manage to do so, what is actually won over the command just returning the status?

          I’d argue that the separation makes things worse here, because it creates additional hidden state.

          Also, as I stated, this is not about error handling.

          • codebje 20 hours ago

            CQRS should really only guide you to designing separate query and command interfaces. If your processing is asynchronous then you have no choice but to have state about processing-in-flight, and your commands should return an acknowledgement of successful receipt of valid commands with a unique identifier for querying progress or results. If your processing is synchronous make your life easier by just returning the result. Purity of CQRS void-only commands is presentation fodder, not practicality.

            (One might argue that all RPC is asynchronous; all such arguments eventually lead to message buses, at-least-once delivery, and the reply-queue pattern, but maybe that's also just presentation fodder.)

    • jonahx a day ago

      > Why would you do this?

      Performance and re-use are two possible reasons.

      You may have a command sub-routine that is used by multiple higher-level commands, or even called multiple times within by a higher-level command. If the validation lives in the subroutine, that validation will be called multiple times, even when it only needs to be called once.

      So you are forced to choose either efficiency or the security of colocating validation, which makes it impossible to call the sub-routine with unvalidated input.

      • Jtsummers a day ago

        Perhaps I was unclear, to add to my comment:

        hinkley poses this as a fault in CQS, but CQS does not require your commands to always succeed. Command-Query Separation means your queries return values, but produce no effects, and your commands produce effects, but return no values. Nothing in that requires you to have a command which always succeeds or commands which don't make use of queries (queries cannot make use of commands, though). So a better question than what I originally posed:

        My "Why would you do this?" is better expanded to: Why would you use CQS in a way that makes your system less secure (or safe or whatever) when CQS doesn't actually require that?

    • hinkley a day ago

      The example in the wiki page is far more rudimentary than the ones I encountered when I was shown this concept. Trivial, in fact.

      CQS will rely on composition to do any If A Then B work, rather than entangling the two. Nothing forces composition except information hiding. So if you get your interface wrong, someone can skip over a query that is meant to short-circuit the command. I don’t think the constraint system in Eiffel is up to providing that sort of protection on its own (and the examples I was given very much assumed not). Elixir’s might end up better, but not by a transformative degree. And it remains to be seen how legible posterity will find that code.

      • Jtsummers a day ago

        That's still not really answering my question for you, which was less clear than intended. To restate it:

        > The one place this advice falls down is security - having functions that do things without verifying preconditions are exploitable

        My understanding of your comment was that "this advice" is CQS. So you're saying that CQS commands do not verify preconditions and that this is a weakness in CQS, in particular.

        Where did you get the idea that CQS commands don't verify preconditions? I've never seen anything in any discussion of it, including my (admittedly 20 years ago) study of Eiffel.

        • hinkley 16 hours ago

          And I remain confused by your confusion.

          If A then B()

          Versus

          B()

          Somewhere there’s a B without the associated query. Call it what you want, at the bottom of the tree two roads diverge. Otherwise there is no Separation in your CQS.

          ETA: once you get down to the mutation point you aren’t just dealing with immutable data. You’re moving things around, often plural.

  • hinkley 4 hours ago

    Found an example that is closer to how I learned about CQS:

    https://hemath.dev/blog/command-query-separation/

    Down at the bottom it gets into composition to make utility functions that compose several operations. Any OO system has to be careful not to expose methods that should have been private, so that’s not specific to CQS. It’s just that the opportunities to get it wrong increase and the consequences are higher.

hackthemack a day ago

I never liked encountering code that chains function calls together like this:

email.bulkSend(generateExpiryEmails(getExpiredUsers(db.getUsers(), Date.now())));

Many times it has confused my co-workers when an error creeps in: where is the error happening, and why? Of course, this could just be because I have always worked with low-effort co-workers, hard to say.

I have to wonder if programming should have kept Pascal's distinction between functions that only return one thing and procedures that go off and manipulate other things and do not give a return value.

https://docs.pascal65.org/en/latest/langref/funcproc/

  • HiPhish a day ago

    > email.bulkSend(generateExpiryEmails(getExpiredUsers(db.getUsers(), Date.now())));

    What makes it hard to reason about is that your code is one-dimensional: you have functions like `getExpiredUsers` and `generateExpiryEmails` which could be expressed as compositions of more general functions. Here is how I would have written it in JavaScript:

        const emails = db.getUsers()
            .filter(user => user.isExpired(Date.now()))  // Some property every user has
            .map(generateExpiryEmail);  // Maps a single user to a message
    
        email.bulkSend(emails);
    
    The idea is that you have small but general functions, methods and properties and then use higher-order functions and methods to compose them on the fly. This makes the code two-dimensional. The outer dimension (`filter` and `map`) tells the reader what is done (take all users, pick out only some, then turn each one into something else) while the inner dimension tells you how it is done. Note that there is no function `getExpiredUsers` that receives all users; instead there is a simple and more general `isExpired` method which is combined with `filter` to get the same result.

    In a functional language with pipes it could be written in an arguably even more elegant design:

        db.getUsers() |> filter(User.isExpired(Date.now())) |> map(generateExpiryEmail) |> email.bulkSend
    
    I also like Python's generator expressions which can express `map` and `filter` as a single expression:

        email.bulk_send(generate_expiry_email(user) for user in db.get_users() if user.is_expired(Date.now()))
    • hackthemack a day ago

      I guess I just never encounter code like this in the big enterprise code bases I have had to weed through.

      Question: if you want to send one email to expired users, another to non-expired users, and another to users that somehow have a date problem in their data...

      Do you just do the "const emails =" three different times?

      In my coding world it looks a lot like doing a SELECT * FROM users WHERE expiresAt < Date.now

      but in some cases you just grab it all, loop through it all, and do little switches to do different things based on different isExpired.

      • rahimnathwani a day ago

          If you want to do one email for expired users and another for non expired users and another email for users that somehow have a date problem in their data....
        
        Well, in that case you wouldn't want to pipe them all through generateExpiryEmail.

        But perhaps you can write a more generic function like generateExpiryEmailOrWhatever that understands the user object and contains the logic for what type of email to draft. It might need to output some flag if, for a particular user, there is no need to send an email. Then you could add a filter before the final (send) step.
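
        Something along these lines, perhaps (purely illustrative; the names and the null-as-flag convention are made up, and `db`/`email` are the objects from the snippet upthread):

          type User = { email: string; expiresAt?: number };
          type Email = { to: string; body: string };

          // Decide per user which email, if any, applies; null means "skip".
          function emailFor(user: User, now: number): Email | null {
            if (user.expiresAt === undefined) return { to: user.email, body: "Please fix your expiry date." };
            if (user.expiresAt < now) return { to: user.email, body: "Your account has expired." };
            return null;
          }

          const emails = db.getUsers()
            .map(u => emailFor(u, Date.now()))
            .filter((e): e is Email => e !== null);
          email.bulkSend(emails);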

      • solomonb a day ago

        Since we're just making up functions...

            myCoolSubroutine = do
              now <- getCurrentTime
              users <- getUsers
              forM users (sendEmail now)
        
            sendEmail now user =
              if user.expiry <= now
                then sendExpiryEmail user
                else sendNonExpiryEmail user
        
        The whole pipeline thing is a red herring IMO.
      • HiPhish a day ago

        > Question: if you want to send one email to expired users, another to non-expired users, and another to users that somehow have a date problem in their data... Do you just do the "const emails =" three different times?

        If it's just two or three cases I might actually just copy-paste the entire thing. But let's assume we have twenty or so cases. I'll use Python notation because that's what I'm most familiar with. When I write `Callable[[T, U], V]` that means `(T, U) -> V`.

        Let's first process one user at a time. We can define an enumeration for all our possible categories of user. Let's call this enumeration `UserCategory`. Then we can define a "categorization function" type which maps a user to its category:

            type UserCategorization = Callable[[User], UserCategory]
        
        I can then map each user to a tuple of category and user:

            categorized_users = map(lambda user: (categorize(user), user), db.get_users())  # type Iterable[tuple[UserCategory, User]]
        
        Now I need a mapping from user category to processing function. I'll assume we call the processing function for side effects only and that it has no return value (`None` in Python):

            type ProcessingSpec = Mapping[UserCategory, Callable[[User], None]]
        
        This mapping uses the user category to look up a function to apply to a user. We can now put it all together: map each user to a pair of the user's category and the user, then for each pair use the mapping to look up the processing function:

            def process_users(how: ProcessingSpec, categorize: UserCategorization) -> None:
                categorized_users = map(lambda user: (categorize(user), user), db.get_users())
                for category, user in categorized_users:
                    process = how[category]
                    process(user)
        
        OK, that's processing one user at a time, but what if we want to process users in batches? Meaning I want to get all expired users first, and then send a message to all of them at once instead of one at a time. We can actually reuse most of our code because of how generic it is. The main difference is that instead of using `map` we want to use some sort of `group_by` function. There is `itertools.groupby` in the Python standard library, but it's not exactly what we need, so let's write our own:

            def group_by[T, U](what: Iterable[T], key: Callable[[T], U]) -> Mapping[U, list[T]]:
                result = defaultdict(list)
                # When we try to look up a key that does not exist defaultdict will create a new
                # entry with an empty list under that key
                for x in what:
                    result[key(x)].append(x)
                return result
        
        Now we can categorize our users into batches based on their category:

            batches = group_by(db.get_users(), categorize)
        
        To process these batches we need a mapping from user category to a function which processes an iterable of users instead of just a single user.

            type BatchProcessingSpec = Mapping[UserCategory, Callable[[Iterable[User]], None]]
        
        Now we can put it all together:

            def process_batched_users(how: BatchProcessingSpec, categorize: UserCategorization) -> None:
                batches = group_by(db.get_users(), categorize)
                for category, users in batches.items():
                    process = how[category]
                    process(users)
        
        There are quite a lot of small building block functions, and if all I was doing was sending emails to users it would not make sense to write these small functions that add indirection. However, in a large application these small functions become generic building blocks that I can use in higher-order functions to define more concrete routines. The `group_by` function can be used for many other purposes with any type. The categorization function was used for both one-at-a-time and batch processing.

        I have been itching to write a functional programming book for Python. I don't mean a "here is how to do FP in Python" book, you don't need that, the documentation of the standard library is good enough. I mean a "learn how to think FP in general, and we are going to use Python because you probably already know it". Python is not a functional language, but it is good enough to teach the principles and there is value in doing things with "one hand tied behind your back". The biggest hurdle in the past to learning FP was that books normally teach FP in a functional language, so now the reader has to learn two completely new things.

        • nickpsecurity 9 hours ago

          Your post was very interesting in terms of how to translate requirements to a functional solution. You should write that book on how to do that.

  • POiNTx a day ago

    In Elixir this would be written as:

      db.getUsers()
      |> getExpiredUsers(Date.now())
      |> generateExpiryEmails()
      |> email.bulkSend()
    
    I think Elixir hits the nail on the head when it comes to finding the right balance between functional and imperative style code.
    • time4tea 17 hours ago

      Not a single person in this thread commented on the use of Date.now() and similar - surely clock.now() - you never ever want to use global time in any code, how could you test it?

      clock in this case is a thing that was supplied to the class or function. It could just be a function: () -> Instant.

      (Setting a global mock clock is too evil, so don't suggest that!)
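
      For example, a minimal TypeScript sketch of an injected clock (the names and the User shape are just illustrative):

          type User = { email: string; expiry: number };  // hypothetical shape, expiry in epoch ms
          type Clock = () => Date;                        // the clock is just a function

          function getExpiredUsers(users: User[], clock: Clock): User[] {
            const now = clock().getTime();
            return users.filter(u => u.expiry <= now);
          }

          // Production: getExpiredUsers(users, () => new Date())
          // In a test:  getExpiredUsers(users, () => new Date("2030-01-01"))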

      • POiNTx 17 hours ago

        I was just referring to how pipes make these kinds of chained function calls more readable. But on your point, I think using Date.now() is perfectly ok.

        • ruszki 8 hours ago

          > I think using Date.now() is perfectly ok.

          This is why we have tests which we need to update every 3 months, because somebody said this. This is, of course, after a ton of research went into finding out why the heck our tests suddenly broke.

          • Izkata 5 hours ago

            I would call those badly-written tests. The current date/time exists outside the system and ought to be acceptable for mocks, and in python we have things like freezegun that make it easy to control without the usual pitfalls of mocks.

            • ruszki 4 hours ago

              What are those mock pitfalls, which are avoided by freezegun which is a mock according even to them? IoC and Clocks solve the same problem. So what are the pitfalls of using those instead of this other mock?

              • Izkata 4 hours ago

                Applying it to the wrong module, a very common mistake in python due to how imports work.

        • vlovich123 13 hours ago

          What happens during a daylight savings adjustment?

          • POiNTx 12 hours ago

            You use UTC which doesn't adjust daylight savings.

      • MarkMarine 7 hours ago

        nit picking on the example code while missing the example the code was trying to demonstrate. I see why TAOCP used pseudocode

        • time4tea 3 hours ago

          Agreed! But I didn't miss the example.... I also thought it was interesting that all the various examples of declarative or applicative did Date.now(), which I see as a big thing to avoid.

    • montebicyclelo a day ago

          bulk_send(
              generate_expiry_email(user) 
              for user in db.getUsers() 
              if is_expired(user, date.now())
          )
      
      (...Just another flavour of syntax to look at)
      • whichdan a day ago

        The nice thing with the Elixir example is that you can easily `tap()` to inspect how the data looks at any point in the pipeline. You can also easily insert steps into the pipeline, or reuse pipeline steps. And due to the way modules are usually organized, it would more realistically read like this, if we were in a BulkEmails module:

          Users.all()
          |> Enum.filter(&Users.is_expired?(&1, Date.utc_today()))
          |> Enum.map(&generate_expiry_email/1)
          |> tap(&IO.inspect(&1, label: "Expiry Email"))
          |> Enum.reject(&is_nil/1)
          |> bulk_send()
        
        The nice thing here is that we can easily log to the console, and also filter out nil expiry emails. In production code, `generate_expiry_email/1` would likely return a Result (a tuple of `{:ok, email}` or `{:error, reason}`), so we could complicate this a bit further and collect the errors to send to a logger, or to update some flag in the db.

        It just becomes so easy to incrementally add functionality here.

        ---

        Quick syntax reference for anyone reading:

        - Pipelines apply the previous result as the first argument of the next function

        - The `/1` after a function name indicates the arity, since Elixir identifies functions by name and arity (the same name can exist with different arities)

        - `&fun/1` expands to `fn arg -> fun(arg) end`

        - `&fun(&1, "something")` expands to `fn arg -> fun(arg, "something") end`

      • Akronymus a day ago

        Not sure I like how the binding works for user in this example, but tbh, I don't really have any better idea.

        Writing custom monad syntax is definitely quite a nice benefit of functional languages IMO.

  • lmm 20 hours ago

    > I have to wonder if programming should have kept pascals distinction between functions that only return one thing and procedures that go off and manipulate other things and do not give a return value.

    What you want is to use a language that has higher-kinded types and monads so that functions can have both effects (even multiple distinct kinds of effects) and return values, but the distinction between the two is clear, and when composing effectful functions you have to be explicit about how they compose. (You can still say "run these three possibly-erroring functions in a pipeline and return either the successful result or an error from whichever one failed", but you have to make a deliberate choice to).

    • Warwolt 14 hours ago

      Making a distinction between pure and effectful functions doesn't require any kind of effect system though.

      Having a language where "func" defines a pure function and "proc" defines a procedure that can perform arbitrary side effects (as in any imperative language really) would still be really useful, I think

      • lmm 14 hours ago

        > Having a language where "func" defines a pure function and "proc" defines a procedure that can perform arbitrary side effects (as in any imperative language really) would still be really useful, I think

        Rust tried that in the early days; the problem is that no one can agree on exactly what side effects make a function non-pure. You pay almost all the costs of a full effect system (and even have to add an extra language keyword) but get only some of the benefits.

        • cestith 4 hours ago

          The definition I’ve used for my own projects is that anything that touches anything outside the function or in any way outlives the function is impure. It works pretty well for me. That is, no i/o, mutability of a function-local variable is okay but no touching other memory state (and that variable cannot outlive the return), the same function on the same input always produces the same output, and there’s no calling of impure code from within pure code. Notice this makes closures and currying impure unless done explicitly during function instantiation, making those things at least nominally part of the input syntactically. YMMV.
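
          A rough TypeScript illustration of that rule (hypothetical code):

              // Pure by this definition: the result depends only on the arguments,
              // and the local mutable variable never outlives the call.
              function total(prices: number[]): number {
                let sum = 0;
                for (const p of prices) sum += p;
                return sum;
              }

              // Impure: reads state that outlives the function (the clock) and does
              // I/O, so the same input can produce different observable behaviour.
              function logTotal(prices: number[]): void {
                console.log(`${new Date().toISOString()}: ${total(prices)}`);
              }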

      • stOneskull 10 hours ago

        Nim does that, and they are called exactly that (func and proc).

  • Antibabelic 9 hours ago

    Ada is a great modern language that preserves the distinction between functions and procedures that you mention.

  • MarkMarine 7 hours ago

    These chains become easy to read and understand with a small language feature like the pipe operator (Elixir) or threading macro (Clojure) that takes the output of one line and injects it into the leftmost or rightmost function parameter. For example, in Elixir:

        "go "
        |> String.duplicate(3)                # "go go go "
        |> String.upcase()                    # "GO GO GO "
        |> String.replace_suffix(" ", "!")    # "GO GO GO!"

    And in Clojure:

        ;; Nested function calls
        (map double (filter even? '(1 2 3 4)))

        ;; Using the thread-last macro
        (->> '(1 2 3 4)
             (filter even?)   ; The list is passed as the last argument
             (map double))    ; The result of filter is passed as the last argument
        ;=> (2.0 4.0)

    Things like this have been added to python via a library (Pipe) [1] and there is a proposal to add this to JavaScript [2]

    1: https://pypi.org/project/pipe/ 2: https://github.com/tc39/proposal-pipeline-operator
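
    Even without language support you can get much of the way there with a small helper. A sketch in TypeScript (a hypothetical helper, not the TC39 syntax):

        // A tiny left-to-right pipe: each function receives the previous result.
        function pipe<T>(value: T, ...fns: Array<(x: any) => any>): any {
          return fns.reduce((acc, fn) => fn(acc), value as any);
        }

        // Mirrors the Elixir example above.
        const shout = pipe(
          "go ",
          (s: string) => s.repeat(3),           // "go go go "
          (s: string) => s.toUpperCase(),       // "GO GO GO "
          (s: string) => s.replace(/ $/, "!"),  // "GO GO GO!"
        );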

    • netdevphoenix 5 hours ago

      If you get an exception, you might not know where it comes from unless you get a stack trace. Code looks nice but not practical imo

  • solid_fuel 21 hours ago

    I may have gotten nerd sniped here, but I believe all of these examples so far have some subtle errors. Using elixir syntax, I would think something like this covers most of the cases:

        expiry_date = DateTime.now!("Etc/UTC")
    
        query = 
              from u in User,
              where: 
                u.expiry_date <= ^expiry_date
                and u.expiry_email_sent == false,
              select: u
    
        MyAppRepo.all(query)
        |> Enum.map(&generate_expiry_emails(&1, expiry_date))
        |> Email.bulkSend()  # Returns {:ok, %User{}} or {:err, _reason}
        |> Enum.filter(fn 
          {:ok, _} -> true
          _ -> false
        end)
        |> Enum.map(fn {:ok, user} ->
          User.changeset(user, %{expiry_email_sent: true})
          |> Repo.update()
        end)
    
    
    Mainly a lot of these examples do the expiry filtering on the application side instead of the database side, and most would send expiry emails multiple times which may or may not be desired behavior, but definitely isn't the best behavior if you automatically rerun this job when it fails.

    ----

    Edit: I actually see a few problems with this, too, since Email.bulkSend probably shouldn't know about which user each email is for. I always see a small impedance mismatch with this sort of pipeline, since if we sent the emails individually it would be easy to wrap it in a small function that passes the user through on failure.

    If I were going to build a user contacting system like this I would probably want a separate table tracking emails sent, and I think that the email generation could be made pure, the function which actually sends email should probably update a record including a unique email_type id and a date last sent, providing an interface like: `send_email(user_query, email_id, email_template_function)`
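
    In TypeScript terms that interface could be sketched roughly like this (every name here is hypothetical):

        type User = { id: string; email: string };
        type Email = { to: string; subject: string; body: string };
        type EmailTemplate = (user: User) => Email;  // pure: data in, data out

        interface EmailStore {
          runQuery(userQuery: string): Promise<User[]>;
          recordSent(userId: string, emailTypeId: string, sentAt: Date): Promise<void>;
        }
        interface Mailer {
          send(email: Email): Promise<void>;
        }

        // Impure shell: runs the query, renders and sends each email, and records
        // what was sent so a rerun does not email the same user twice.
        async function sendEmail(
          store: EmailStore,
          mailer: Mailer,
          userQuery: string,
          emailTypeId: string,
          template: EmailTemplate,
        ): Promise<void> {
          for (const user of await store.runQuery(userQuery)) {
            await mailer.send(template(user));
            await store.recordSent(user.id, emailTypeId, new Date());
          }
        }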

  • tags2k 13 hours ago

    Since everyone's giving !opinions, in my C# DDD world you'd ideally be able to:

      _unitOfWork.Begin();
    
      var users = await _usersRepo.Load(u => u.LastLogin <= whateverDate);
      users.CheckForExpiry();
    
      _unitOfWork.Commit();
    
    That then writes the "send expiry email" commands from the aggregate, to an outbox, which a worker then picks up to send. Simple, transactional domain logic.
  • tadfisher a day ago

    That's pretty hardcore, like you want to restrict the runtime substitution of function calls with their result values? Even Haskell doesn't go that far.

    Generally you'd distinguish which function call introduces the error with the function call stack, which would include the location of each function's call-site, so maybe the "low-effort" label is accurate. But I could see a benefit in immediately knowing which functions are "pure" and "impure" in terms of manipulating non-local state. I don't think it changes any runtime behavior whatsoever, really, unless your runtime schedules function calls on an async queue and relies on the order in code for some reason.

    My verdict is, "IDK", but worth investigating!

    • hackthemack a day ago

      It has been so long since I worked on the code that had chaining functions and caused problems that I am not sure I can do justice to describing the problems.

      I vaguely remember the problem was one function returned a very structured array dealing with regex matches. But there was something wrong with the regex where once in a blue moon, it returned something odd.

      So, the chained functions did not error. It just did something weird.

      Whenever weird problems would pop up, it was always passed to me. And when I looked at it, I said, well...

      I am going to rewrite this chain into steps and debug each return. Then run through many different scenarios and that was how I figured out the regex was not quite correct.

    • mrkeen 16 hours ago

      > you want to restrict the runtime substitution of function calls with their result values?

      I don't get how you got there from parent comment.

      Pascal just went with a needless syntax split of (side-effectful) procedures and (side-effectful) functions.

  • sandeepkd 18 hours ago

    On the same page here. I read it multiple times to see if I could convince myself, but this is a bit off in terms of reading the code as it's being executed. There is a high chance of people making mistakes over time with such patterns. As usual there is a trade-off involved, and readability is the one taking the hit here.

  • fedlarm a day ago

    You could write the logic in a more straightforward, but less composable, way, so that all the logic resides in one pure function. This way you can also keep the code looping over the users only once.

    email.sendBulk(generateExpiryEmails(db.getUsers(), Date.now()));

  • shortrounddev2 7 hours ago

    It also invites exceptions as error handling instead of a monadic (result) pattern. I usually do something more like

        Result<Users> userRes = getExpiredUsers(db);
        if(isError(userRes)) {
            return userRes.error;
        }
    
        /* This probably wouldn't actually need to return a Result IRL */
        Result<Email> emailRes = generateExpireyEmails(userRes.value);
        if(isError(emailRes)) {
            return emailRes.error;
        }
    
        Result<SendResult> sendRes = sendEmails(emailRes.value);
        if(isError(sendRes)) {
            return sendRes.error;
        }
        
        return sendRes; // successful value, or just return a Unit type.
    
    This is in my "functional C++" style, but you can write pipe helpers which sort of do the same thing:

        Result<SendResult> result = pipe(getExpiredUsers(db))
            .then(generateExpireyEmails)
            .then(sendEmails)
            .result();
    
        if(isError(result)) {
            return result.error;
        }
    
    If an error result is returned by any of the functions, it terminates immediately and returns the error there. You can write this in most languages, even imperative/oop languages. In java, they have a built in class called Optional with options to treat null returns as empty:

        Optional.ofNullable(getExpiredUsers(db))
            .map(EmailService::generateExpireyEmails)
            .map(EmailService::sendEmails)
            .orElse(null);
    
    or something close to that, I haven't used java in a couple years.

    C++ also added a std::expected type in C++23:

        auto result = some_expected()
            .and_then(another_expected)
            .and_then(third_expected)
            .transform(/* ... some function here, I'm not familiar with the syntax*/);
  • sfn42 a day ago

    I would have written each statement on its own line:

    var users = db.getUsers();

    var expiredUsers = getExpiredUsers(users, Date.now());

    var expiryEmails = generateExpiryEmails(expiredUsers);

    email.bulkSend(expiryEmails);

    This is not only much easier to read, it's also easier to follow in a stack trace and it's easier to debug. IMO it's just flat out better unless you're code golfing.

    I'd also combine the first two steps by creating a DB query that just gets expired users directly rather than fetching all users and filtering them in memory:

    expiredUsers = db.getExpiredUsers(Date.now());

    Now I'm probably mostly getting zero or a few users rather than thousands or millions.

    • netdevphoenix 5 hours ago

      Took me a bit of scrolling to find this. I believe most of the other folks are functional devs or something. Five functions chained on a single line wouldn't pass code review in most .NET/Java shops.

      The rule I was raised with was: you write the code once and someone in the future (even your future self) reads it 100 times.

      You win nothing by having it all smashed together like sardines in a tin. Make it work, make it efficient and make it readable.

    • hackthemack a day ago

      Yeah. I did not mention what I would do, but what you wrote is pretty much what I prefer. I guess nobody likes it these days because it is old procedural style.

      • bccdee 21 hours ago

        There's nothing procedural about binding return values to variables, so long as you aren't mutating them. Every functional language lets you do that. That's `let ... in` in Haskell.

    • ajusa 21 hours ago

      (author here)

      This is actually closer to the way the first draft of this article was written. Unfortunately, some readability was lost to make it fit on a single page. 100% agree that a statement like this is harder to reason about and should be broken up into multiple statements or chained to be on multiple lines.

    • ahoka 11 hours ago

      But then you are creating references with larger than needed "reachability".

      • sfn42 11 hours ago

        I don't see a problem with that. This code would typically be inside its own function anyway, but regardless I think your nitpick is less important than the readability benefit.

    • codazoda 20 hours ago

      Glad to see this. This style seems like it’s out of vogue now, but I find it much, much easier to reason about.

      • rifty 17 hours ago

        I agree because it reads as it will process in the direction I normally read. But I do think one of the benefits of the function approach is that the scope isn't cluttered with staging variables.

        For these reasons, one of the things I like to do in Swift is set up a function called ƒ that takes a single closure parameter. This is super minimal because Swift doesn't require parentheses for the trailing closure. It allows me to do the above inline without cluttering the scope, while also not increasing the amount of indirection that discrete function declarations would cause.

        The above then just looks like this:

          ƒ { 
            var users = db.getUsers();
            var expiredUsers = getExpiredUsers(users, Date.now());
            var expiryEmails = generateExpiryEmails(expiredUsers);
            email.bulkSend(expiryEmails);
          }
procaryote 14 hours ago

I like the general idea, but unless you're assuming some very clever language or even more clever ORM that fixes things implicitly, wouldn't

    email.bulkSend(generateReminderEmails(getExpiredUsers(db.getUsers(), fiveDaysFromNow)));
get all users and then filter out the few that will expire in 5 days, on a code level? That doesn't sound like it would scale
  • dawidloubser 13 hours ago

    Often the answer to "some very clever language" is a lazy functional language like Haskell, where it's quite common to express problems naturally on infinite or near-infinite lists (of numbers, users, whatever...) in this way, with the language's lazy evaluation semantics effectively turning it into an efficient streaming data pipeline of sorts.

    But even if not, the example from the article is just hypothetical. `db.getUsers()` could be something that just retrieves rather efficient `[UserEmail, ExpiryTime]` pairs, and then you'd have to have a pretty enormous user base for it to not scale (a couple of million string/date pairs in memory should be no problem).

    • procaryote 3 hours ago

      Modern computers are fast enough that a lot of bad code won't outright break. That doesn't necessarily make it good code

      I fixed a performance issue at some point where a missing index meant a scan of millions of rows every login. It worked, and could log in 3 people per second or so. It was still terrible code.

  • spoiler 13 hours ago

    I think it's just a contrived example. They probably wanted to show more than a single thing composing in a very short post given it's from their Toilet series.

    Replace it with `getUsers(filters)` or even a specialised function, and it starts making more sense.

    • ajusa 7 hours ago

      (author here)

      It's exactly this - I do regret using "db" a bit now after reading all of the comments here, as it's taken away focus from the main point. But yes, the post had to fit on a single page, and I needed to pick something that most engineers would be familiar with.

      • Izkata 5 hours ago

        Kinda hints most people haven't used a good ORM, or if they have maybe just don't understand how it really works. Django looks similar to this and would have the same misunderstanding (User.objects.all()), except it actually returns a QuerySet object that would let getExpiredUsers() apply its own criteria and not actually run the query until something tries to read from the object. There's an example up above where someone shows SQLAlchemy doing the same thing.

  • bribri 13 hours ago

    I agree. They could have picked a better example. Just db.getUsers() alone should set off alarm bells as soon as you see it.

  • globular-toast 13 hours ago

    Irrelevant. The "bad" code does it too. It's talking about something specific and not "let's fix all the problems with this code".

    • procaryote 4 hours ago

      If you write one method poorly so you select too much and filter in a for loop, you just have a bad method you can fix.

      If you pick and recommend a pattern where filtering should happen separately from retrieving, all your code will be bad

      Give a man a fish / teach a man to fish, but bad.

metalrain 18 hours ago

I like the idea but the example doesn't make much sense.

In what application would you load all users into memory from database and then filter them with TypeScript functions? And that is the problem with the otherwise sound idea "Functional core, imperative shell". The shell penetrates the core.

Maybe some filters don't match the way the database is laid out, what if you have a lot of users, and how do you deal with email batching and error handling?

So you have to write the functional core with the side effect context in mind, for example using a query builder or DSL that matches the database conventions. Then weave it with the intricacies of your email sender logic: maybe you want an iterator over right-sized batches of emails to send at once, and can it send multiple batches in parallel?

  • bad_username 15 hours ago

    I am surprised by this example, for the same reason.

    Generally, performance is a top cause of abstraction leaks and the emergence of less-than-beautiful code. On an infinitely powerful machine it would be easy and advisable to program using neat abstractions, using purely "the language of" the business. Our machines are not infinitely powerful, and that is especially evident when larger data sets are involved. That's where, to achieve useful performance, you have to increasingly speak "the language of" the machine. This is inevitable, and a big part of the programmer's skill is being able to speak both "languages", to know when to speak which one, and to produce readable code regardless.

    Database programming is a prime example. There's a reason, for example, why ORMs are very messy and constitute such excellent footguns: they try to bridge this gap, but inevitably fail in important ways. And having an ORM in the example would, most likely, violate the "functional core" principle from the article.

    So it looks like the author accidentally presented a very good counterexample to their own idea. I like the idea though, and I would love to know how to resolve the issue.

  • edf13 17 hours ago

    > In what application would you load all users into memory from database and then filter them with TypeScript functions?

    You’d be surprised! I have worked on a legacy PHP service which did something very similar

sherinjosephroy 17 hours ago

I like the idea of separating your “business logic” (the functional core) from the glue code that interacts with the outside world (the imperative shell). It makes the core easier to test and reason about.

But also: the challenge is knowing where to draw the line. In real systems you’ll still have messy side-effects, transactions, performance constraints — so you might end up in a mixed bag anyway. The principle is solid, but the practical trade-offs matter.

CharlieDigital 20 hours ago

I wrote our AI agents code with a functional core + imperative shell and I have to agree: this approach yields much faster cycle times because you can run pure unit tests and it makes testing a lot easier.

We have tens of thousands of lines of code for the platform and millions of workflow runs through them with no production errors coming from the core agent runtime which manages workflow state, variables, rehydration (suspend + resume). All of the errors and fragility are at the imperative shell (usually integrations).

Some of the examples in this thread I think get it wrong.

    db.getUsers() |> filter(User.isExpired(Date.now())) |> map(generateExpiryEmail) |> email.bulkSend
This is already wrong because the call already starts with I/O; flip it and it makes a lot more sense.

What you really want is (in TS, as an example):

    bulkSend(
      userFn: () => User[],
      filterFn: (user: User) => bool,
      expiryEmailProducerFn: (user: User) => Email,
      senderFn: (email: Email) => string
    ) 
The effect of this is that the inner logic of `bulkSend` is completely decoupled from I/O and external logic. Now there's no need for mocking or integration tests because it is possible to use pure unit tests by simply swapping out the functions. I can easily unit test `bulkSend` because I don't need to mock anything or know about the inner behavior.

I chose this approach because writing integration tests with LLM calls would make the testing run too slowly (and costly!) so most of the interaction with the LLM is simply a function passed into our core where there's a lot of logic of parsing and moving variables and state around. You can see here that you no longer need mocks and no longer need to spy on calls because in the unit test, you can pass in whatever function you need and you can simply observe if the function was called correctly without a spy.

It is easier than most folks think to adopt -- even in imperative languages -- by simply getting comfortable working with functions at the interfaces of your core API. Wherever you have I/O or a parameter that would be obtained from I/O (database call), replace it with a function that returns the data instead. Now you can write a pure unit test by just passing in a function in the test.
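
For example, a pure unit test for the `bulkSend` sketch above might look like this (the types and the body of `bulkSend` are assumed here for illustration, not the actual platform code):

    type User = { id: string; expired: boolean };
    type Email = { to: string };

    // The core takes functions instead of doing I/O itself.
    function bulkSend(
      userFn: () => User[],
      filterFn: (user: User) => boolean,
      expiryEmailProducerFn: (user: User) => Email,
      senderFn: (email: Email) => string,
    ): string[] {
      return userFn().filter(filterFn).map(expiryEmailProducerFn).map(senderFn);
    }

    // No mocks, no spies: just pass plain functions and inspect plain data.
    const sent: Email[] = [];
    const results = bulkSend(
      () => [{ id: "a", expired: true }, { id: "b", expired: false }],
      user => user.expired,
      user => ({ to: user.id }),
      email => { sent.push(email); return "ok"; },
    );
    console.assert(results.length === 1 && sent[0].to === "a");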

I am very surprised how many of the devs on the team never write code that passes a function down.

  • nickpsecurity 9 hours ago

    Great examples. We were taught to pass variables, scalar or compound, into API's. Most of us were never taught to pass functions.

    Even Python examples in trainings that look functional might not be. They put the function calls in as arguments. The beginner thinks the function returns some data, that would be in a variable, and they are implicitly passing that variable. Might as well, for readability, do the function call first to pass a well-named variable instead.

    That was my experience. That plus minimizing side effects in functions. I've yet to really learn functional programming where I'd think to pass a function in an API. What are the best articles or books for us to learn that in general or in Python?

    • globular-toast 8 hours ago

      It's called dependency injection and there's loads written about it. It's a really powerful technique which is also used for dependency inversion, key for decoupling components. I really like how it enables simple tests without any mocking.

      The book Architecture Patterns in Python by Percival and Gregory is one of the few books that talks about this stuff using Python. It's available online and been posted on HN a few times before.

      • CharlieDigital an hour ago

        Agree with zdragnar; this is not traditional DI which is generally focused on injecting objects.

        The difference between the two is that when you inject an object, the receiving side must know a potentially large surface area of the object's behavior.

        When injecting a function, now the receiving side only needs to know the inputs and outputs of the singular function.

        This subtlety is what makes this approach more powerful.
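
        Roughly, in TypeScript (illustrative names only):

            type User = { id: string; expiry: number };

            // Injecting an object: the callee sees the whole surface area.
            interface UserRepository {
              getUsers(): User[];
              saveUser(user: User): void;
              deleteUser(id: string): void;
            }
            function expiredUsersFromRepo(repo: UserRepository, now: number): User[] {
              return repo.getUsers().filter(u => u.expiry <= now);
            }

            // Injecting a function: the callee only knows "give me users".
            function expiredUsers(getUsers: () => User[], now: number): User[] {
              return getUsers().filter(u => u.expiry <= now);
            }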

        • globular-toast 21 minutes ago

          I don't see the difference, but I do agree that DI is generally used to mean constructing systems. It's what you do in your main or "bootstrap" part of the program and there are frameworks to do it for you. But really it's the same thing. You're just composing functionality by passing objects (functions are objects) that satisfy an interface. It might be more acceptable to just say it's dependency inversion.

      • zdragnar 7 hours ago

        DI (generally) tends to point more towards constructing objects or systems. This would be a bit closer to a functional equivalent of the OO "template method" pattern: https://en.wikipedia.org/wiki/Template_method_pattern

        You write a concrete set of steps, but delegate the execution of the steps to the caller by invoking the supplied functions at the desired time with the desired arguments.

QuadmasterXLII 20 hours ago

I would argue that the real key is to have a distinct core and shell, and to hold the core to a much higher standard of quality than the shell. In this article, being "functional" is just serving as a proxy for code quality.

  • ccortes 18 hours ago

    > In this article, being "functional" is just serving as a proxy for code quality.

    It is not, it is being very specific about what it means and what it is referring to

ryangibb 17 hours ago

The MirageOS project [0] is a great collection of functionally pure OCaml libraries that are useful outside of unikernels. I've used the DNS library with an effectful layer for various nameserver experiments [1].

[0] https://mirage.io/

[1] https://ryan.freumh.org/eon.html

johnrob a day ago

Functions can have complexity or side effects, but not both.

  • anttiharju 14 hours ago

    All pure functions have complexity?

fsmv 7 hours ago

Google writes articles like these every week and hangs them in the bathroom. It's meant to be a quick one page tip thing. That's why the example isn't super realistic, it has to be short.

There's a link with more info at the top. I'm not sure why this one in particular made it to the front page of HN.

rockyj 13 hours ago

Interestingly, I have been harping on this for a while. Recently wrote a blog on how to separate business logic from infrastructure code and tie them together by composing functions together - https://rockyj-blogs.web.app/2025/10/25/result-monad.html

I also see that lately "code quality" is the least concern of most (even software product) companies, just ask AI to write code in a single file / module / class - then launch feature and fix if you have to. I could see that in a few years things will be extremely messy (but who can say).

smusamashah 7 hours ago

How does it fit with Tell Don't Ask https://martinfowler.com/bliki/TellDontAsk.html

Or is it that the example in the article is a bit poor?

  • crymer11 7 hours ago

    From your linked article:

    > But personally, I don't use tell-dont-ask. I do look to co-locate data and behavior, which often leads to similar results. One thing I find troubling about tell-dont-ask is that I've seen it encourage people to become GetterEradicators, seeking to get rid of all query methods.

kitd 13 hours ago

I never really got into Haskell in a big way, but one of the things I liked about the Haskell Wikibook [1] was how they presented Haskell code as being either in pure form or "do" form, and how the latter orchestrates the former, much as presented here. To a beginner like me not interested in monads etc, this was a very simple and explicit way of approaching coding in Haskell.

[1] https://en.wikibooks.org/wiki/Haskell

rcleveng a day ago

If your language supports generators, this works a lot better than making copies of the entire dataset too.
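
For instance, with TypeScript generators (a sketch reusing the article's pseudocode names; the User shape is assumed):

    type User = { email: string; expiry: number };

    // Each stage yields one item at a time instead of building an intermediate array.
    function* expired(users: Iterable<User>, now: number): Generator<User> {
      for (const u of users) if (u.expiry <= now) yield u;
    }

    function* expiryEmails(users: Iterable<User>): Generator<string> {
      for (const u of users) yield `Your subscription has expired, ${u.email}`;
    }

    // Only the expired users' emails are ever materialized, right before sending.
    email.bulkSend([...expiryEmails(expired(db.getUsers(), Date.now()))]);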

  • akshayshah a day ago

    Sometimes, sure - but sometimes, passing around a fat wrapper around a DB cursor is worse, and the code would be better off paginating and materializing each page of data in memory. As usual, it depends.

  • KlayLay a day ago

    You don't need your programming language to implement generators for you. You can implement them yourself.

foobarian 8 hours ago

This sounds to me like the old hexagonal architecture [1]

[1] https://en.wikipedia.org/wiki/Hexagonal_architecture_(softwa...

zkmon a day ago

I think it's just your way of looking at things.

What if an FCF (functional core function) calls another FCF which calls another FCF? Or do we rule out such calls?

Object orientation is only a skin-deep thing, and it boils down to functions with a call stack. The functions, in turn, boil down to a sequenced list of statements with IF and GOTO here and there. All that boils down to machine instructions.

So, at function level, it's all a tree of calls all the way down. Not just two layers of crust and core.

  • skydhash a day ago

    Functional core usually means pure functions, i.e. the return value is known if the arguments are known, and no side effects are required. All the side effects are then pushed up to the imperative shell.

    You’ll usually find that the side effects in imperative actions are tied to the dependencies (database, storage, UI, network connections). It can then be quite easy to isolate those dependencies.

    It’s OK to have several layers of core. Usually it’s quite easy to build the actual dependency tree with interfaces and have the implementations as leaves for each node. The actual benefits are very easy testing and validation, plus fast feedback because only unit tests are needed for your business logic.
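
    A minimal sketch of that split in TypeScript (reusing the article's pseudocode names; the User shape is assumed):

        type User = { email: string; expiry: number };

        // Functional core: the return value is fully determined by the arguments.
        function expiredUsers(users: User[], now: number): User[] {
          return users.filter(u => u.expiry <= now);
        }

        // Imperative shell: the dependencies (db, clock, mailer) all live here.
        async function sendExpiryEmails(): Promise<void> {
          const users = await db.getUsers();
          const emails = expiredUsers(users, Date.now()).map(generateExpiryEmail);
          await email.bulkSend(emails);
        }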

urxvtcd 15 hours ago

I have written a small system in Elixir adhering to FCIS. Not used to the approach, I was pretty slow and sometimes it felt like jumping through hoops set by myself, lol, but I loved it, the code was very clean, testable, and refactorable. Highly recommend it as an exercise, it was surprising just how much state and IO can be pushed out.

pjmlp 16 hours ago

All nice ideas that unfortunately don't get appreciated in the age of offshoring and vibe coding.

Have to ship it no matter what.

lucifer153 19 hours ago

This is the same idea as the onion architecture in "Grokking Simplicity: Taming Complex Software with Functional Thinking" by Eric Normand.

droningparrot 19 hours ago

Haskell practically encourages this style of programming. Any function that touches IO needs to wrap outputs with an appropriate monad. It becomes easier to push all IO out to the edges of your program and keep your core purely functional with no monads

  • kaashif 19 hours ago

    I wish that's what people did, some codebases I've seen are messes of monad transformer stacks the likes of which you've never seen.

    I mean, what if you want to do IO and have mutable data structures inside a do block? I'm afraid I'm going to have to prescribe you a monad transformer. Be careful of the side effects.

semiinfinitely a day ago

this looks like a post from 2007 im shocked at the date

  • mrkeen 14 hours ago

    And "I call it my billion-dollar mistake. It was the invention of the null reference in 1965" is from 2009.

    Hopefully by 2045 these ideas will have gotten a little more traction.

  • diamondtin a day ago

    I saw Gary posted his blog link on Twitter, and I really like his article. I really didn't expect it to surface at this moment (2025), referenced from a Google blog. :shrug:

  • vietvu a day ago

    Me too. Aren't we already doing this? This is one of the basics I was taught first.

postepowanieadm a day ago

Something like that was popular in perl world: functional core, oop external interface.

  • dominicrose 12 hours ago

    When you're not relying on a compiler you just have to write good code. And it's easier if the code never has to change, only grow. I know a confident Perl programmer who rarely changes his mind about anything. When he codes something he keeps it.

    I always feel like I have to "maintain" code so I usually get bored after 3k lines of code, but truth is code doesn't have to be maintained if we like it the way it is, which obviously includes all the functionality that comes with it.

svat 15 hours ago

Another good blog post that is IMO in the same vein: https://lambdaisland.com/blog/2022-03-10-mechanism-vs-policy (“Improve your code by separating mechanism from policy”). This blends harmoniously with “functional core, imperative shell”—the "mechanism" code is the "functional core", and the "policy" code is the "imperative shell"—and also a little bit with John Ousterhout's idea in A Philosophy of Software Design of "deep modules" (in this context, don't put policy stuff, i.e. arbitrary decisions, inside the module).

taeric a day ago

This works right up to the point where you try to make the code to support opening transactions functional. :D

Some things are flat out imperative in nature. Open/close/acquire/release all come to mind. Yes, the RAII pattern is nice. But it seems to imply the opposite? Functional shell over an imperative core. Indeed, the general idea of imperative assembly comes to mind as the ultimate "core" for most software.

Edit: I certainly think having some sort of affordance in place to indicate if you are in different sections is nice.

  • agentultra a day ago

    whispers in monads

    It can be done "functionally" but doesn't necessarily have to be done in an FP paradigm to use this pattern.

    There are other strategies to push resource handling to the edges of the program: pools, allocators, etc.

    • taeric a day ago

      Right, but even in those, you typically have the more imperative operations as the lower levels, no? Especially when you have things where the life cycle of what you are starting is longer than the life cycle of the code that you use to do it?

      Consider your basic point of sale terminal. They get a payment token from your provider using the chip, but they don't resolve the transaction with your card/chip still inserted. I don't know any monad trick that would let that general flow appear in a static piece of the code?

      • whstl a day ago

        > but even in those, you typically have the more imperative operations as the lower levels

        Yes, the monadic part is the functional core, and the runtime is the imperative shell.

        > Consider your basic point of sale terminal. They get a payment token from your provider using the chip, but they don't resolve the transaction with your card/chip still inserted. I don't know any monad trick that would let that general flow appear in a static piece of the code?

        What do you mean by Monad trick? That's precisely the kind of thing the IO monad exists for. If you need to fetch things on an API: IO. If you need to read/save things on a DB: IO. DB Transaction: IO.

        • taeric 21 hours ago

          I have not seen too many (any?) times where the monad trick is done in such a way that they don't combine everything in a single context wrapper and talk about the "abnormal" case where things don't complete during execution.

          Granted, in trying to find some examples that stick in my memory, I can't really find any complete examples anymore. Mayhap I'm imagining a bad one? (Very possible.)

          • whstl 20 hours ago

            You would deal with this problem in the same way you would with, say, in a REST API.

            If the transaction object is serializable you can just store it in a DB, for example. If it's some C++ pointer from some 3rd-party library that you can't really serialize and gotta keep open, you gotta keep it in memory and manage its lifetime explicitly, be it a REST web server, in Haskell or in a C++ app.

            • taeric 8 hours ago

              Right, I think a better way of stating my main assertion here is that you have to be able to partially work in a transaction. If your "shell" pretends that you can always complete the full transaction, either successfully or with a failure, then it is a brittle shell. Sometimes, you can simply make progress on an presumed open transaction.

              • whstl 4 hours ago

                It doesn’t have to pretend anything you don’t want. If you don’t want this kind of problem/failure possibility, then you have to encode those states in the type system. Functional programming can do the same things you can do in imperative, but you gotta make it when it’s not there. Just like in any other paradigm.

                • taeric 3 hours ago

                  Totally fair. As I said up thread, I am comfortable saying I am imagining bad examples here. Would be interested in reading good examples. Though I couldn't find any of the bad ideas I had in my mind, I also didn't find any good ones on a quick search.

      • garethrowlands a day ago

        I'm unclear what you're suggesting here. Are you suggesting you couldn't write a POS in Haskell, say?

        • taeric a day ago

          My idea here is that, in many domains, you will have operations that are somewhat definitionally in the imperative camp. OpenTransaction being the easy example.

          Can you implement it using functional code? Yes, just be aware that you wind up with partial states. And oftentimes you are best off explicitly not using the RAII pattern for some of these. (I have rarely seen examples where they deal with this. Creating and reconciling transactions often have to be separate pieces of code. And the reconcile code cannot, necessarily, fall back to creating a transaction if it gets a "not found" fault.)

  • garethrowlands a day ago

    > Indeed, the general idea of imperative assembly comes to mind as the ultimate "core" for most software.

    That's not what functional core, imperative shell means though. It's a given that CPUs aren't functional. The advice is for people programming in languages that have expressions - ruby, in the case of the original talk. The functional paradigm mostly assumes automatic memory management.

    • taeric a day ago

      Right, I was just using that as "at the extreme" and how it largely exists to allow you to put a functional feel on top of the imperative below it.

      I'm sympathetic to the idea, as you can see it in most instruction manuals that people are likely to consume. The vast majority of which (all of them?) are imperative in nature. There is something about the "for the humans" layer being imperative. Step by step, if you will.

      I don't know that it fully works, though. I do think you are well served being consistent in how you layer something. Where all code at a given layer should probably stick to the same styles. But knowing which should be the outer and which the inner? I'm not clear that we have to pick, here. Feel free to have more than two layers. :D

  • mrkeen 16 hours ago

    > This works right up to the point where you try to make the code to support opening transactions

    Indeed. It's all well and good to impart some kind of flavour into your code and call it functional, but transactions do not give a crap about style.

    A transaction needs to be able to 'back out' to fulfill 'all-or-nothing' semantics. Side effects are what make this impossible.

    • garethrowlands 13 hours ago

      Surely transactions are a pretty good example of where functional core / imperative shell is a good guide. You really don't want to be doing arbitrary side effects inside your transaction because those can't be backed out. Check out STM in Haskell for a good example.

      • mrkeen 12 hours ago

        > You really don't want

        And that's what this thread is filled with, and that's what I'm pushing back against.

        > the RAII pattern is nice

        > indicate if you are in different sections is nice

        Style doesn't matter, flavour doesn't matter, wants don't matter, code "quality" (whatever that means) doesn't matter, niceness doesn't matter.

        A transaction can be rolled back. If it can't, it's not a transaction.

        • taeric 8 hours ago

          I'd go a little further, though? Transactions have to be able to fail at the very last step. That could just be the "commit" stage. Everything up to that point could have been perfectly fine, but at commit it fails. More, the time of execution of that commit could be fairly far removed from the time of starting the transaction.

          To that end, any style that tries to move those two time periods closer together in code is almost doomed to have either some hard-to-reason-about code or some tough edge cases that are hard to specify.

          (Granted, I'll note that most transactions that people are dealing with on a regular basis probably do open and close rather close to each other.)

bitwize a day ago

I invented this pattern when I was working on a small ecommerce system (written in Scheme, yay!) in the early 2000s. It just became much easier to do all the pricing calculations, which were subject to market conditions and customer choices, if I broke it up into steps and verified each step as a side-effect-free, data-in-data-out function.

Of course by "invented" I mean that far smarter people than me probably invented it far earlier, kinda like how I "invented" intrusive linked lists in my mid-teens to manage the set of sprites for a game. The idea came from my head as the most natural solution to the problem. But it did happen well before the programming blogosphere started making the pattern popular.

wslh 21 hours ago

I don't really like the example (and it's from Google) because, beyond the general concept, it seems like the trigger for sending emails is calling bulkSend with Date.now() instead of the user actually triggering an email when they really expire, i.e. when user.subscriptionEndDate drops below Date.now().

itsthecourier 21 hours ago

that's nice, so should I get all the db users and then filter them in app?

  • lmm 20 hours ago

    Probably. Or better yet move the code to run where the data is so you're not moving the data around.

vivzkestrel 18 hours ago

db.getUsers()? I am sorry, what? Who in their right mind loads all users from the database and then filters for the ones with expired subscriptions in application code? Shouldn't the database query do this?