One of the most critical aspects of a Lakehouse is protecting data for security and compliance reasons, and this article completely glosses over it, which makes me really uncomfortable.
Thanks for the feedback. Bauplan actually features a few innovative points in this area, and fully Pythonic at that: Git for Data (https://docs.bauplanlabs.com/en/latest/concepts/git_for_data...) to sandbox any data change, tag it for compliance, and make it queryable; and full code and data auditability in one command (AFAIK, the only platform offering this), as every change is automatically versioned and tagged with the exact run and code that produced it (https://docs.bauplanlabs.com/en/latest/concepts/git_for_data...).
Our sandbox with public data is free for you to try, or just reach out and ask any question!
When I first quickly glanced at this heading, I read "Leakhouse" instead of "Lakehouse" :D And then I saw your comment...
> Option 2 - Hand it off to DevOps. The other option is to have data science produce prototypes that can be on Notebooks and then have a devops team whose job is to refactor those into an application that runs in production. This process makes things less fragile, but it is slow and very expensive.
I've never understood why this is so hard. Every time data science gives me a notebook it feels like I have been handed a function that says `doFeature()` and should just have to put it behind an endpoint called /do_feature, but it always takes forever and I'm never even able to articulate why. It feels like I am clueless at reading code but just this one particular kind of code.
A data scientist wants results with minimum programming effort, and efficiency be damned. Pull all the data and join it all together in a honking great data frame, use brute force to analyse it.
This isn’t necessarily what you want in a daily production environment, let alone a real-time environment.
Thanks for the comment: your frustration is the default in the industry, and it's part of the reason why Bauplan was built.
"but it always takes forever and I'm never even able to articulate why." -> there are way more factors at play than `doFeature()`, unfortunately; see for example Table 1 (https://arxiv.org/pdf/2404.13682). Even knowing which data people developed on is hard, which is why bauplan has git-for-data semantics built in: everyone works on production data, but safely and reliably, to avoid data skews.
Each computer is different, which is why bauplan adopts FaaS with isolated and fully containerized functions: you are always in the cloud, so there is no skew in the infra either.
The problem of "going to production" is still the biggest issue in the industry, and solving it is not a one-fix kind of thing, but rather the combination of good ergonomics, new abstractions, and reliable infra.
I'll do you one better. Productionizing a data science prototype is exactly the kind of grunt work AI is able to take over.
I think it's a much better result to have a data science prototype translated into a performant production version rather than a Databricks-type approach or what bauplan is proposing.
Maybe, but it would still need to work within a well defined framework. Usually the data science part is “solve the problem”, the data engineering part is “make it work reliably, fast, at scale”.
What that looks like is highly dependent upon the environment at hand, and letting AI take that over may be one of those “now you have 2 problems” things.
We are not proposing or advocating for any approach to development (I personally almost never use notebooks these days and run Bauplan with preview).
The blog, together with our marimo friends, is to showcase that you can have notebook development if you like it AND cloud scaling (which you need) without code changes, thanks to the fact that both marimo and Bauplan are basically Python (maybe a small thing, but there is nothing else in the market remotely close).
On the AI part, we agree: the fact that bauplan is just Python, including data management and infra-as-code, makes it trivial for AI to build pipelines in Bauplan, which is not something that can be said about other data platforms. If you follow our blog, we are releasing in a few weeks a full "agentic" implementation of production ETL workloads with the Bauplan API, which you may find interesting.
There have been so many "better notebook" implementations over the years that I cannot keep up. What are the promising ones? Is this "marimo" one of them, or rather a newcomer?
Marimo is very impressive. It's effectively a cross between Jupyter and https://observablehq.com/ - it adds "reactivity", which solves the issue where Jupyter cells can be run in any order which can make the behavior of a notebook unpredictable, whereas in Marimo (and Observable) updating a cell automatically triggers other dependent cells to re-execute, similar to a spreadsheet.
Marimo is pretty new (first release January 2025) but has a high rate of improvement. It's particularly good for WebAssembly stuff - that's been one of their key features almost from the start.
My notes on it so far are here: https://simonwillison.net/tags/marimo/
Thanks Simon for the kind words!
For those new to marimo, we have affordances for working with expensive (ML/AI/pyspark) notebooks too, including lazy execution that gives you guarantees on state without running automatically.
One small note: marimo was actually first launched publicly (on HN) in January 2024 [1]. Our first open-source release was in 2023 (a quiet soft launch). And we've been in development since 2022, in close consultation with Stanford scientists. We're used pretty broadly today :)
[1] https://news.ycombinator.com/item?id=38971966
> it adds "reactivity", which solves the issue where Jupyter cells can be run in any order
This is one of the key features of Jupyter to me; it encourages quick experimentation.
Once you get to a certain complexity of notebooks, I find it only serves to complicate my mental model to “experiment” out of order. It makes me far more likely to forget to “commit” an ordering change.
Jupyter notebooks do store the execution order of the cells. Just enforce a pre-commit or pre-merge hook that doesn't allow adding notebooks that have out-of-order cells.
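A minimal version of such a check could look like the sketch below (it assumes the standard `.ipynb` JSON layout, where each executed code cell carries an `execution_count`; the hook wiring itself is left out):

```python
import json


def cells_in_order(notebook: dict) -> bool:
    """Return True if the notebook's code cells were executed top to bottom."""
    counts = [
        cell["execution_count"]
        for cell in notebook.get("cells", [])
        if cell.get("cell_type") == "code"
        and cell.get("execution_count") is not None
    ]
    # Top-to-bottom execution means strictly increasing execution counts.
    return all(a < b for a, b in zip(counts, counts[1:]))
```

A pre-commit hook would then `json.load` each staged `.ipynb` file and fail the commit whenever `cells_in_order` returns False.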
marimo still allows you to run cells one at a time (and has many built-in UI elements for very rapid experimentation). But the distinction is that in marimo, running a cell runs the subtree rooted at it (or if you have enabled lazy execution, marks its descendants as stale), keeping code and outputs consistent while also facilitating very rapid experimentation. The subtree is determined by statically parsing code into a dependency graph on cells.
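That subtree model can be sketched in a few lines. The cell names and explicit edges below are hypothetical, for illustration only; marimo derives the edges itself by statically analyzing which names each cell defines and reads:

```python
# Toy dependency graph: cell -> cells that read its definitions.
graph = {
    "load_data": ["clean", "plot"],
    "clean": ["model"],
    "model": [],
    "plot": [],
}


def descendants(graph: dict, cell: str) -> set:
    """All cells that must re-run (or be marked stale) when `cell` changes."""
    seen, stack = set(), list(graph[cell])
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen
```

Editing `load_data` invalidates its whole subtree (`clean`, `model`, `plot`), while editing `model` invalidates nothing downstream, which is what keeps outputs consistent with code.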
I think it’s safe to say Observable’s inability to properly price their services made people look elsewhere. Their new offering is interesting but also ridiculously priced.
I was also wondering their pricing because Canvas seemed so cool at first. Now that I've seen your comment I checked and $900/month (includes 10 users) is indeed very high. I guess they are primarily targeting big enterprises.
Marimo is really special and solves most of the problems that you have with Jupyter. For those who are Marimo-curious, I strongly recommend checking out their YouTube channel. So much effort has gone into making these videos really great. https://youtube.com/@marimo-team?si=ZGaf8Zgq5WN3LKRg
marimo is open source and uses a reactive model which makes it fun to mix/match widgets with Python code. It even supports gamepads if you wanted to go nuts!
https://youtu.be/4fXLB5_F2rg?si=jeUj77Cte3TkQ1j-
disclaimer: I work for marimo and I made that video, but the gamepad support is awesome and really shows the flexibility
I personally really like marimo. It's very easy to use and for data analysis type tasks it seems to work a lot better than jupyter in most cases.
I am strangely unmoved by some new SaaS which is not open-source and self-hostable.
Thanks for checking out bauplan (which also supports BYOC, so I guess it is indeed hostable by you in a sense!).
We've done quite a lot of open source in our lives, at Bauplan (you can check our GitHub) and before (you can check me ;-)), so the comment seems unfair!
We understand the importance of being clear on how the platform works, and for that we have a long series of blog posts and, if you're so inclined, quite a few peer-reviewed papers in top conferences, ranging from low-level memory optimizations (https://arxiv.org/abs/2504.06151) and columnar caching (https://arxiv.org/abs/2411.08203) to novel FaaS runtimes (https://arxiv.org/pdf/2410.17465), pipeline reproducibility (https://arxiv.org/pdf/2404.13682), and more.
We are also always happy to chat about our tech choices if you're interested.
I don't think python is always the best suited language for managing models and agents, but it certainly is the most popular and has the largest choice of related libraries. "Python first" or "pythonic" invites skepticism from me.
> "you can define assets in pure python using any framework or engine you want."
sounds flexible, but what does that actually mean in practice? Are there guardrails to keep things interoperable?
> "engine-agnostic execution"
How does that hold up when switching between, say, pandas and Spark? Are dependencies and semantics actually preserved, or is it up to us to manually patch the gaps every time the backend shifts?
Spark is technically not Python; we do support PySpark with the relevant decorator, but it's a very niche use case for us.
As for all the other Python packages, including proprietary ones, the FaaS model means you can declare any package you want in one function (a node in the pipeline DAG) and any other package in another: every function is fully isolated, so you can even use pandas 1 in one node and pandas 2 in another, or update the Python interpreter only in node X.
If you're interested in containerization and FaaS abstractions, this is good deep dive: https://arxiv.org/pdf/2410.17465
If you're more the practical type, just try out a few runs in the public sandbox, which is free even though we are not yet GA.
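A toy illustration of that per-node environment model is below. The `node` decorator and its `deps` parameter are hypothetical names for this sketch, not the actual bauplan API, and real isolation happens in containers rather than in-process; the point is only that each function in the DAG carries its own environment spec:

```python
def node(deps=None):
    """Attach a per-function environment spec, as a FaaS runtime might."""
    def wrap(fn):
        fn.env = {"packages": dict(deps or {})}
        return fn
    return wrap


@node(deps={"pandas": "1.5.3"})
def legacy_features(rows):
    # Imagine pandas-1-era logic here; kept trivial for the sketch.
    return [r * 2 for r in rows]


@node(deps={"pandas": "2.2.0"})
def modern_features(rows):
    # A sibling node pinned to pandas 2, fully independent of the one above.
    return [r + 1 for r in rows]
```

Because each node declares its own packages, a runtime can materialize pandas 1 for one container and pandas 2 for another without the two environments ever meeting.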
Huge fan of Marimo - fixes so many of the annoying problems w/ notebooks
I find Marimo best for when you're trying to build something "app-like"; an interactive tool to perform a specific task. I find Jupyter Lab more appropriate for random experimentation and exploration, and documenting your learnings. Each absolutely has its place in the toolbox, and does its thing well, but for me at least, there's not much overlap between the two other than the cell-based notebook-like similarity. That similarity works well for me when migrating from exploration mode to app design mode. The familiar interface makes it easy for me to take ideas from Jupyter into Marimo to build out a proper application.
Thanks for the kind words. Many of our users have switched entirely from Jupyter to marimo for experimentation (including the scientists at Stanford's SLAC alongside whom marimo was originally designed).
I have spent a lot of time in Jupyter notebooks for experimentation and research in a past life, and marimo's reactivity, built-in affordances for working with data (table viewer, database connections, and other interactive elements), lazy execution, and persistent caching make me far more productive when working with data, regardless of whether I am making an app-like thing.
But as the original developer of marimo I am obviously biased :) Thanks for using marimo!
I just like the Jupyter Lab overall IDE-like interface. It's really well designed for general random exploration, and works well with Will McGugan's "Rich" console output library. On the other hand, it's not really well suited for building web application type stuff. It's capable of it (with a whole lotta "hackery" and jumping through hoops) but it's not really built for it the way Marimo is. Marimo just feels like the right choice once you want to build a real, repeatable, end-usery type application for day-to-day use on a specific task. The widget set seems really well designed in Marimo too, and I'm also really pleased with Marimo's use of the uv Python package manager. I fully intend to keep both Marimo and Jupyter within easy reach, as they're both really excellent at what they do.
This is exactly my impression.
Hey, founder of Bauplan here. Happy to field any questions or thoughts. Yes, marimo is great, and it's the only way to do notebook-style development within a real Python ecosystem while shipping proper production code.
Hey! Congrats on the product. Do you have any more complex examples anywhere?
I'm a data engineer and make decisions around what software we use for pipelines. A lot of examples for these types of tools showcase the simple case, which is a handy intro, but I'd love to see a real world example of Bauplan scaling to interconnected pipelines!
Hey Ben, thanks for your message.
We have people building stuff featured here (https://www.bauplanlabs.com/build-with-bauplan) as well as online (e.g. https://blog.det.life/bauplan-the-serverless-data-lakehouse-...), plus of course our examples repo on GitHub that you can check as part of the tutorial.
Our largest client is a $5BN/year company running thousands of jobs on bauplan. If you have something in mind, you can try out the public sandbox for free and come join our Slack; I'm happy to build something with you.
Not open source. DOA.
Thanks for your comment. As stated elsewhere, we understand the need for people to know how the system works, and have contributed back our ideas (and quite a bit of open source code) to the community: if you want to check our blogs and / or papers, I'm sure you'll find many interesting things.
If you're worried about data movement or secure deployment, none of that is an issue because of Iceberg + BYOC option.
Databricks and Snowflake, just to mention two players in a similar space, are not OS: did you feel that would prevent you from adopting them as well?
> Databricks and Snowflake, just to mention two players in a similar space, are not OS: did you feel that would prevent you from adopting them as well?
Yes, absolutely. Snowflake is a modern Oracle. It may survive, but it will be more of a barnacle/legacy system for big corporations. Neither is the right solution for the next generation of companies that are starting up today.
Posing the sentiment differently: Why not go open source? Follow the same model as marimo, astral etc. that are enriching the python ecosystem?
If you click on "See certifications" in the Security section [1], it resolves to an empty section.
[1] https://security.bauplanlabs.com/#resources-b2152df0-4179-48...
Mhmm, it doesn't resolve to an empty section but to the full SecureFrame monitoring page: https://security.bauplanlabs.com/#resources-b2152df0-4179-48... - it just takes a second to load. This is the entire report: https://www.loom.com/share/7cfc9c2f020645ddab2b1850b9c47619?...
Rolling a notebook out to a service rapidly is an attractive idea -- but, as mentioned, has security implications -- I can add that there are also a host of monitoring implications as well -- service quality & continuity, model quality etc.
You mean on the data side? Data access in the example (and in the real world) is mediated by a production-grade Iceberg-compatible catalog, sandboxed changes, and a full audit trail (https://docs.bauplanlabs.com/en/latest/concepts/git_for_data...). Or do you mean something else?
For reproducibility https://kedro.org/
Importantly, kedro does not run things for you, which results in a suboptimal experience because the runtime and the DSL are separated: in particular, it does not solve the problem of having K different systems with scattered logs and hard-to-integrate APIs.
If you want to dive deeper into one-line reproducibility, you can check our SIGMOD24 paper: https://arxiv.org/pdf/2404.13682. Let us know what you think!
"Data lake", "data lakehouse"...
Who comes up with these weird names for patterns? What the heck is "lake" supposed to evoke?
Yeah, terms are confusing sometimes! "Data lakehouse" is weirdly enough a "technical term". The canonical reference is from CIDR https://www.cidrdb.org/cidr2021/papers/cidr2021_paper17.pdf, but we have our own version from VLDB https://arxiv.org/pdf/2308.05368