The blog post isn’t clear: Did the LLM actually do the routing? The only screenshot showing connected traces comes after the author says they added ground fills and “tidied up the layout”
It’s amazing that this worked at all, but to be clear, this layout is actually very bad. Just look at that minimum-width trace used to carry power across the entire board and into the ESP32. Using minimum-width traces, wrapping them around the board, and running them at minimum clearance to components is a classic mistake of people (or LLMs?) who have zero understanding of PCB layout techniques beyond “draw lines until everything is connected”
It would be interesting to see if you could feed the file into an LLM and get it to produce the feedback.
I could be wrong, but that looks like autoroute to me just based on the aesthetics of it, autoroute has a bit of a "smell" that you can recognize if you pay attention. For example see the via and traces to the left of SW2. No human I know, even a total noob designing their first ever PCB, would do that.
Also, it certainly wasn't the LLM; atopile doesn't allow you to specify routing as far as I'm aware, their docs seem to tell you to route in KiCad.
> even a total noob designing their first ever PCB
As said noob, do you have any resources for basic PCB design/routing? Along the lines of a simple list of things to look out for?
I've only ever done one, and for routing I basically did the "make two ground pours, then keep clicking until everything is connected" process that others have described in this thread. Probably about the same as what I'd imagine an autorouter would produce. And it seems like it worked fine in the end. But I'm wondering what obvious things I probably missed, and what the consequences of missing them are? PCB layout articles online seem to quickly get into topics like differential pair length matching, high-frequency / RF circuits, optimizing current return paths, controlled impedance, and so on... none of which I imagine will ever be relevant to me as a hobbyist.
There are some absolute masters of PCB design on this site, I am far below that level, so take this all with a heap of salt. A lot of what follows is generally good advice but not everything is universally applicable.
Basics: learn to use your EDA software, properly configure it with your board house's capabilities, get correct footprints, read and re-read and re-re-read the datasheets for everything you use. Study other similar designs and try to understand everything they're doing and _why_.
- Place mounting holes and critical components first. Tiny boards and tiny components look bigger on-screen, zoom out to 1:1 real life scale as a sanity check!
- Use as many of the largest decoupling caps you can get. You don’t need multiple caps in different sizes; that habit comes from the old days of leaded caps, when parasitics were worse.
- For power: use planes when possible; use a trace width calculator (see the sketch after this list); always have a ground plane.
- Generally speaking, use the widest traces you can.
- There is a huge asterisk on this one, but most traces should be made as short as possible. Decoupling caps should be super close to where they're needed. This is one of the more common noob mistakes, but it can also lead you astray (making overly complex or compact PCBs on the first try.)
- Do not put capacitors or inductors close to the edges of a board; they can fail because of board flexing!
- Check clearance between parts, both for pick-and-place and for hand-soldered parts.
- Always run DRC checks (there are also secondary DRC check tool websites/downloads aside from the one in your EDA software)
- Before sending it off, manually check for obvious common blunders (forgot the ground plane, no copper pour on ground plane, dead short, forgot to drill holes, wrong units, used the wrong footprints) - manually measure a few things on your design including footprints and pad sizes and cross reference this with an independent source. Check your files in different gerber viewers and hand-trace through the copper path from one component to the next. Visually preview the PCB and ensure you're not missing any copper anywhere.
- Don’t make things as small as possible right away! Make it big, add test points and connectors, break sketchy features out into daughterboards, etc., then shrink once it works.
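About that trace width calculator: most of them implement the IPC-2221 formula, so here's a rough Python sketch of the arithmetic. The constants are the commonly published ones, but treat this as a sanity check only; a real calculator and your board house's rules take precedence.

    # Rough IPC-2221 trace width estimate (sanity-check sketch only;
    # real calculators and your fab's rules take precedence).
    def min_trace_width_mm(current_a, temp_rise_c=10.0, oz_copper=1.0,
                           external=True):
        k = 0.048 if external else 0.024            # IPC-2221 constants
        # I = k * dT^0.44 * A^0.725 -> solve for cross-section A in mil^2
        area_mil2 = (current_a / (k * temp_rise_c ** 0.44)) ** (1 / 0.725)
        thickness_mil = 1.378 * oz_copper           # 1 oz/ft^2 ~ 1.378 mil
        return area_mil2 / thickness_mil * 0.0254   # mil -> mm

    # A WiFi MCU bursting to ~0.5 A works out to ~0.12 mm at a 10 C rise,
    # and 1 A to ~0.30 mm; a minimum-width (~0.15 mm) power trace is
    # marginal at WiFi burst currents and undersized at 1 A.
    print(round(min_trace_width_mm(0.5), 2), round(min_trace_width_mm(1.0), 2))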
Beyond the basics:
- Understand your components. There are countless types of resistors and capacitors, to say nothing of the other component types. Getting more advanced, try to understand the various types, their lifespans, failure modes, heat tolerance. Pay attention to physical component sizes, if some capacitors of type X and rating Y are one volume and the others are half the volume by being half the height... why?
- Understand heat. For the most basic calculations: "With only natural convection (i.e. no airflow), and no heat sink, a typical two sided PCB with solid copper fills on both sides, needs at least 15.29 cm²/2.37 in² of area to dissipate 1 watt of power for a 40°C rise in temperature. Adding airflow can typically reduce this size requirement by up to half. To reduce board area further a heat sink will be required." - from Thermal Design By Insight, Not Hindsight by Marc Davis-Marsh. (See the quick arithmetic after this list.)
- Get a better understanding of electricity and RF in general. This really pays dividends in terms of understanding why the "rules" are what they are.
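Picking up that thermal rule of thumb: the arithmetic is trivial to script, assuming (as the quote implies) that the required area scales roughly linearly with power.

    # Board area needed per the rule of thumb quoted above: 15.29 cm^2
    # per watt for a 40 C rise (two-sided board, solid copper fills, no
    # airflow). Linear scaling with power is an assumption; rough guide only.
    def board_area_cm2(power_w, airflow=False):
        area = power_w * 15.29
        return area / 2 if airflow else area   # airflow can roughly halve it

    print(board_area_cm2(1.5))                 # ~22.9 cm^2 without airflow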
For some interesting stuff beyond the basics, or to get yourself thinking, these links are great:
https://resources.altium.com/p/2-the-extreme-importance-of-p... by Rick Hartley
https://codeinsecurity.wordpress.com/2025/01/25/proper-decou... by Graham Sutherland
The "PCB Review" threads on r/PrintedCircuitBoard are great places to learn as well.
Beyond that... well, it's like any skill, learning the theory and best practices is great but the way to really improve is to get out there and look at (and design) tons of PCBs.
Few other thoughts:
- impressive that this worked so well with LLM-generated atopile, given that atopile is about a year old!
- the hardest part of a PCB is still the routing and nonstandard parts of the design; what this did is basically "find a reference design, pick components that match the reference design, and put them on the correct nets" which is the easiest part of the process for people designing PCBs today
- much like with code, 99% of PCBs designed are fairly basic boards implementing the reference design with some small tweaks, and then there is a tiny amount of envelope-pushing designs/crazy complex stuff. Obviously you can't design some fancy PCB with complex RF with this, but give it some time and I'd bet you can probably make a lot of the basic stuff...
The video seems more clear. The LLM generated the BOM and netlist using atopile, a tool for specifying the equivalent of a schematic in code. He did the placement and routing in KiCad in the usual way, presumably by hand.
ETA: Other commenters suspect a traditional autorouter based on the poor layout quality. I agree that's also possible, and nothing in the video excludes that. It definitely wasn't the LLM, though.
That makes more sense. Going from a highly detailed set of common parts and instructions to an incomplete net list seems within the realm of LLM tasks.
I assumed the author was more experienced; I suppose this is more of an entry-level hobbyist blog. There are some very fundamental problems with routing PCBs like this that are covered in introductory materials.
> Did the LLM actually do the routing?
Good question. KiCAD once had a router, built in, or sort of built in, but it was taken out for licensing reasons. So who's doing that?
The freerouting plugin is available as a standalone program. It's pretty good. You export a DSN from KiCad and run freerouting on it. Not as simple as a button click, though.
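If you want to script it, something like this should work (the -de/-do flags are from memory for recent freerouting releases; check --help on your copy):

    # Export a Specctra DSN from KiCad (File > Export > Specctra DSN...),
    # route it headlessly, then import the resulting .ses session back
    # into the KiCad board editor.
    import subprocess

    subprocess.run(
        ["java", "-jar", "freerouting.jar",
         "-de", "board.dsn",    # input design
         "-do", "board.ses"],   # routed session file to write
        check=True,
    )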
Freerouting used to be more closely integrated with KiCAD.
So what did this project use?
Hand routing.
That's not vibing.
> It’s amazing that this worked at all, but to be clear, this layout is actually very bad. Just look at that minimum-width trace used to carry power across the entire board and into the ESP32. Using minimum-width traces, wrapping them around the board, and running them at minimum clearance to components is a classic mistake of people (or LLMs?) who have zero understanding of PCB layout techniques beyond “draw lines until everything is connected”
Is it really so implausible that these constraints could be built into the process/algorithm/agentic workflow?
No, but at that point, why even leverage a stochastic text generator? Placing hard constraints on a generative algorithm is just regular programming with more steps and greater instability.
Edit: Also, one could just look to the world of decision tree and route-finding algorithms that could probably do this task better than a language model.
IDK, modeling, constraints, simulations and stochastic processes seem like a match made in heaven.
It's like how pairing a coding agent that can run unit tests and iterate is way more powerful than code gen alone.
While I think that AI tools can be quite useful for coding, PCB design, and other tasks like that, the setup of this experiment makes it really hard for the LLM to fail.
The author's prompt is basically already a meticulous specification of the PCB, even proactively telling the LLM to avoid certain pitfalls ("GPIO19 and GPIO20 on the ESP32-S3 module are USB D- and D+ respectively. Make sure these nets are labeled correctly so that differential routing works"). If you had no prior experience building that exact thing, writing that spec would be 95% of the work.
Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
> If you had no prior experience building that exact thing, writing that spec would be 95% of the work.
Nowadays most mainstream LLMs support pre-bundled prompts. GitHub Copilot even made it a major feature and tools like Visual Studio Code have integrated support for prompt files.
https://docs.github.com/en/github-models/use-github-models/s...
Also, LLMs can generate prompt files too. I recommend you set aside 10 minutes of your time to vibe-code a prompt file for PCB generation, and then try to recreate the same project as OP. You'd be surprised.
> Anyway, I don't think the experiment is wrong, but it's also not exactly vibe-PCBing!
I don't agree. Vibecoding doesn't exactly mean naive approaches to implementations. It just means you enter higher level inputs to generate whatever you're creating.
> Also, LLMs can generate prompt files too.
Sure, but the utility of that for PCB design wasn't demonstrated in the article. This is an expert going out of his way to give the LLM a task it can't fumble (and still does, a bit).
> Sure, but the utility of that for PCB design wasn't demonstrated in the article.
Forget about the article. Try it yourself. Set aside 5 or 10 minutes to ask any LLM of your choice to generate a LLM prompt to generate PCBs. Iterate over your prompt before using it to generate your PCB. See the result for yourself.
Yeah, it's trash. Just as one would expect.
It seems that "vibe X" just means "using LLMs" now, regardless of what the original intent of the term was.
It's the era of jack of no trades, master of all.
> Initially, everything looked great. The build succeeded, all components were found and added. But when I opened KiCad… nothing was wired up.
Maybe this is pedantic, but I thought that the core point of "Vibe Coding" is that you do not look at the code. You "give in to the 'vibes'".
I don't know how to translate it into a physical hardware product exactly, but I think it would be manufacturing it without looking at it, plugging it in for your use-case and seeing if it works, then going back to the model, saying it didn't work, rinse, repeat.
I'm not saying you're wrong, or that I know better.
Yet I have to say that if you are correct, the term is no different than eating tide pods or dry swallowing cinnamon. Why tf would anyone impose such an absurd artificial constraint on themselves, on the tool, or on whatever they are trying to build? Good faith question, I promise.
Constructing detailed prompts to ultimately pair program impressive, complex outcomes is what I assumed vibe coding was. After 35 years of not being able to tell a computer to write the code for me, even getting an 80% coherent first pass of a sophisticated refactor was already radical enough.
If that's what vibe coding is, then nobody should be using that term because it might be the perfect example of "just because you can, doesn't mean you should".
> Yet I have to say that if you are correct, the term is no different than eating tide pods or dry swallowing cinnamon. Why tf would anyone impose such an absurd artificial constraint on themselves, on the tool, or on whatever they are trying to build? Good faith question, I promise.
IDK! I don't think Vibe Coding, with the definition that I understand, is a good idea.
But the term comes from here: https://x.com/karpathy/status/1886192184808149383
And the key parts are:
> "forget that the code even exists"
> "I don't read the diffs anymore"
I myself am unclear on what the "vibes" that one is giving into actually are. But terms should have meanings and my understanding from reading the original tweet is that "Vibe Coding" means something distinct from "coding using some AI to help".
Wild.
I appreciate the explanation. Off to get some cinnamon, I suppose.
That's how Karpathy defined it - it's throwing and rethrowing it back to the LLM agent until you have a result that roughly matches the goal.
It's the behaviorism of programming. (Pay no attention to the man behind the curtain).
Personally I use the term "agentic coding" when you're describing the specs to the LLM agent at a high level but still taking some minimal amount of time to review the diffs.
In this case the netlist is the code, and the pcb is the output. Idk if vibe coding has rules, but if it does that seems well within the rules.
The next stage in the `ato build` process would have caught the missing connections in the DRC check, and the LLM could have fixed it itself.
Co-author of atopile here – super excited to see this on HN!
Coincidentally, we just built an MCP server for atopile, and Claude seems to love it. It makes a big difference in usability, and also exposes our re-usable design library[0].
A bit about atopile[1]: Our core idea is to capture design intent in a knowledge graph with constraints and high-level modeling of components and interfaces. This lets us do much more than just AI integrations: we’ve built an in-house constraint solver that can automatically pick passives (resistors, capacitors, etc) based on the values you've constrained in your design.
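To give a flavour of what constraint-driven passive selection means, here's a toy Python sketch; this is just the idea, not atopile's actual solver, which works on the full design graph.

    # Toy constraint-driven passive picking: choose E24 divider resistors
    # for a 3.3 V -> 1.1 V node under a total-resistance constraint.
    # (Illustrative only -- not atopile's solver.)
    E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
           3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
    values = [v * 10 ** e for e in range(6) for v in E24]  # 1 ohm .. 9.1 Mohm

    target = 1.1 / 3.3        # divider ratio Vout/Vin
    r1, r2 = min(
        ((a, b) for a in values for b in values
         if 10e3 <= a + b <= 100e3),          # keep divider current sane
        key=lambda p: abs(p[1] / (p[0] + p[1]) - target),
    )
    print(r1, r2)             # 15000.0 7500.0 -> 7.5k/22.5k, exactly 1/3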
Currently, atopile directly generates KiCAD PCB files, so you can finish the layout (mainly the connections between reusable layout blocks). We're also generating artifacts like I2C bus trees and 3D models, with power trees and schematic generation on the roadmap.
Happy to answer questions or go into technical details!
[0] https://packages.atopile.io/ [1] https://atopile.io/
Don't you need a bypass cap on the AMS1117 LDO output for stability? The reference design uses two caps each on the input and output.
The LLM generated four caps on the LDO output. They're all placed next to each other and away from the LDO, but that seems to have been a human choice. So I can't fault the LLM there.
That said, the AMS1117 datasheet shows a tantalum cap on the output. This is presumably because the non-negligible ESR helps stabilize the regulator, though they don't say that explicitly. The LM1117 datasheet explains this better, stating that "the ESR of the output capacitor should range between 0.3 Ω to 22 Ω". (These are very similar parts, just from different manufacturers.)
The ceramic caps chosen here are probably below that, so perhaps it would ring even with correct layout. The prompt guided towards that bad choice when it said all caps should be 0603, since almost all 0603 capacitors are ceramic. The LLM was free to choose a regulator optimized for use with ceramic output caps, but it probably chose the xx1117 because it's so common.
http://www.advanced-monolithic.com/pdf/ds1117.pdf
https://www.ti.com/lit/ds/symlink/lm1117.pdf
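If you wanted to automate that particular review step, the check itself is trivial to script. The window below is the 0.3-22 Ω figure from the LM1117 datasheet quoted above; the example ESR is a ballpark MLCC value, not a specific part's spec.

    # Flag an output cap whose ESR falls outside the LM1117 stability
    # window (0.3-22 ohm per the datasheet quote above). The example ESR
    # is a ballpark figure for a 10 uF ceramic, not a real part's spec.
    ESR_MIN_OHM, ESR_MAX_OHM = 0.3, 22.0

    def output_cap_esr_ok(esr_ohm):
        return ESR_MIN_OHM <= esr_ohm <= ESR_MAX_OHM

    print(output_cap_esr_ok(0.008))   # False: ceramic is far below the window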
There are numerous serious problems with this PCB. Even skimming the data sheets or design guides for the ESP32 or LDO would reveal them.
I’m puzzled why the post calls it “surprisingly good” when it’s so bad and missing basic requirements for different parts. I guess it’s surprising that anything at all was produced, but it’s weird that the author can’t identify the basic problems with the design.
This is similar to situations where someone uses an LLM to vibe code an app until it kind of works, but then an experienced developer takes one look at the codebase and can immediately see it was not developed with any understanding of the code.
That's a very generic PCB, of which there are already hundreds on the internet, and in the datasheet of the manufacturer of the MCU ...
Similarly to how most web dev isn't exactly on the frontiers of computer science, a lot of day-to-day PCB design isn't about cutting-edge analog or radio stuff. It's just putting the same MCU or SoC on differently-shaped boards over and over again.
If you can reliably automate that, it's still a pretty big deal.
But this is more copy+paste than automating a design process.
Sure, but that's most PCBs (or even subsets of PCBs). Most PCBs are not about using new/proprietary chips with custom requirements. It's mostly connect this MCU with some other ICs, some power delivery, some connectors, reverse voltage protection etc.
This is a really poor experiment, and the conclusions do not mean much, for two reasons: 1) This PCB is extremely simple; anyone can design this in no time. 2) This type of automation is as old as EDA tools (EDA literally stands for electronic design automation); auto floorplanning and routing isn't new in the industry. Yet LLMs do not perform well in this field; the successful implementations use custom algorithms + RL. The bar to claim "design via LLMs" is very, very high.
Can’t wait for cheap vibe coded electronics to flood Amazon and burn houses down.
Hah, don't need LLMs for that.
Amazon has been hiding behind "it's a marketplace" for more than a decade. There's an insane amount of shit that should never be sold. Including, but not limited to, fake fire alarms sold as real ones. The CPSC tried going after Amazon but is stuck only going after listings once in a while. I can't imagine the deaths caused by Amazon are only in the single digits.
The traces for the power lines are extremely thin, and this device may suffer issues related to that. These devices pull a lot of power when WiFi is on, and too-thin traces aren't going to help. Depending on the device, those traces might act like a fuse (and go poof), and I can imagine there could be plenty of other issues that could lead to fires from a "vibe coded" PCB.
There is no shortage (pun intended) of dangerously bad electronics out there already. If that’s what you want to prevent, finger-wagging at AI coding for electronics isn’t going to help - but regulations and certification requirements might.
Most mass produced electronics aren't vibe-coded hallucinations. There is at least some level of attention to detail in most of it, but of course there's still plenty of dangerous crap out there.
I don't care about this one example project, but when thousands of people read about it and vibe-code their own hallucinated PCB, hopefully wasting their money is the worst thing that happens. They certainly won't be learning much if the AI does it for them. They also don't get the pride that comes from understanding. They are an imposter, and when someone asks if they made the thing, they will feel like an imposter. Nice job, noob!
I'm active in the world of amateur LED installations, and practically nobody realizes how easy it is to start a fire with a 500 watt power supply (or several of them connected together in bad ways) for their holiday lightshow. "AI" is not likely to help that and will probably make it worse.
"AI" is like the blind leading the blind, and it gives people permission to do the stupidest things. Sometimes it's right, but it's a gamble. It's not going to always give the same answer for the same question, and when it "hallucinates", a noob is unlikely to notice.
What we need is a way to verify designs. Simulate the components, simulate RF parasitic effects, check the component voltage/current/power ratings ...
Maybe _then_ we can trust LLMs to design stuff for us.
We have some of that already (design rule checks, spice simulations, etc). But in this case, the author did the layout and routing by hand not by AI, and those parts are _most_ of the work of making a PCB.
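To be fair, the connectivity half of the verification is mechanical. A toy sketch of the kind of check an ERC pass performs; the netlist structure here is hypothetical, just to show the shape of the check:

    # Toy ERC-style pass: flag nets with fewer than two pins attached --
    # exactly the "nothing was wired up" failure described in the post.
    # The netlist format here is hypothetical.
    netlist = {
        "VCC_3V3": ["U1.2", "C1.1", "C2.1"],
        "GND":     ["U1.1", "C1.2", "C2.2", "J1.5"],
        "USB_DP":  ["J1.3"],               # dangling net
    }

    for net, pins in netlist.items():
        if len(pins) < 2:
            print(f"warning: net {net!r} has only {len(pins)} connection(s)")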
Amazing. Inspires me to try out Claude Code + skidl [1] to build a custom circuit for fun.
[1] https://github.com/devbisme/skidl
Tried out Claude Code with atopile this week, and it's absolutely crazy. atopile has MCP support and Claude loves using it. The MCP gives access to the atopile design library and compiler, amongst other things.
Disclaimer: co-author of atopile here
LLMs are decent at generating text (which has loose, messy rules), but applications like PCB layout and coding, with clearly defined constraints, make even more sense for them.
Having well-established, unambiguous rules that must be followed for functionality seems to be a key predictor of AI success. The more constrained and rule-bound the domain, the better LLMs perform.
Vibe coding for software: bugs.
Vibe coding for hardware: fire.
Yeah ...
Except that placing the parts and routing the traces were left to a human - easily the two hardest / annoying steps in designing such a straightforward board.
Autorouter works. Though I don’t use it for anything complex, as the results are middling at best.
Parts placement could be automated, but you’d have to tell the tool what you wanted, and at that point you might as well just do the placement instead of describing placement requirements.
I would be happy if an AI could read the datasheet, produce footprints and schematic symbols, and perhaps 3d models. And present the component characteristics in a unified way.
I mean, some people are claiming that LLMs can do scientific research, so the above isn't too much to ask.
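The output half of that wish is scriptable today, at least. A minimal sketch that emits a (heavily simplified) KiCad s-expression footprint; the pad geometry is a made-up 0603-ish land pattern, not from any datasheet, and a real footprint also needs courtyard, silkscreen, and paste tuning:

    # Emit a bare-bones two-pad chip footprint as a KiCad s-expression.
    # Pad sizes are made-up 0603-ish numbers, not from a datasheet; real
    # footprints also need courtyard, silk, and solder-mask/paste tuning.
    def chip_footprint(name, pad_w, pad_h, pitch):
        pads = "\n".join(
            f'  (pad "{num}" smd rect (at {x:+.2f} 0) (size {pad_w} {pad_h})'
            f' (layers "F.Cu" "F.Paste" "F.Mask"))'
            for num, x in (("1", -pitch / 2), ("2", pitch / 2))
        )
        return f'(footprint "{name}" (layer "F.Cu")\n{pads}\n)'

    with open("R_0603.kicad_mod", "w") as out:
        out.write(chip_footprint("R_0603", pad_w=0.9, pad_h=0.95, pitch=1.6))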
Shouldn't this be labeled as a "Show HN"?
Awesome to see more people experimenting with AI-generated electronics. The main thing holding back physical-world innovation is the labor cost of design; I’m always blown away that someone needs to raise $50m just to design a hardware AI assistant or new robotics.
Frameworks like atopile, tscircuit (disclaimer: I’m a tscircuit lead maintainer) and JITX are critical here because they enable the LLM to output the deep knowledge it already has. The author is missing a couple of pieces to really get great output: 1) context-friendly datasheets, 2) DRC/semantic review, and 3) LLM-compatible layout methods.
The hardest to build is (3), and it's what I spend 90% of my time on. AI knows how to do spatial layout for things like flexbox or CSS grid, but doesn’t have a layout method for PCBs. Our approach w/ tscircuit is to develop new layout systems that either match templates, use new heuristic layouts (we are developing one called “pack”), or solve simple spatial constraints.
But tl;dr: it is only a matter of time before AI can output PCBs. It is not simple, but we know what works with LLMs from witnessing the evolution of AI for website generation.
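For a feel of what a "pack"-style heuristic does, here's a toy greedy rectangle packer in Python; this is just the general shape of the idea, not tscircuit's implementation, and a real PCB packer must also weigh connectivity, not just area.

    # Toy greedy packer: place the biggest parts first, scanning
    # left-to-right / top-to-bottom for the first non-overlapping spot.
    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def pack(part_sizes, board_w, step=0.5):
        placed = []                          # (x, y, w, h) tuples
        for w, h in sorted(part_sizes, key=lambda p: p[0] * p[1], reverse=True):
            x = y = 0.0
            while any(overlaps((x, y, w, h), p) for p in placed):
                x += step
                if x + w > board_w:          # row full: wrap to the next row
                    x, y = 0.0, y + step
            placed.append((x, y, w, h))
        return placed

    print(pack([(5, 5), (3, 2), (2, 2)], board_w=10))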
Has anyone tried vibe coding FPGAs?
The days of Fiverr and similar are seriously numbered. LLMs will not replace the top 10% of talent but the rest will die off over time
Could you please stop posting in this rapid-fire, inflammatory style you've been posting in for the past few days? Your comment history is a steady stream of these low-substance, sensationalist quips about divisive topics, sometimes not even bothering to finish with a full-stop (which is a hallmark of a low-effort comment).
We need that to stop if you're going to keep commenting here. HN is a place for thoughtful discussion, and it's only a place where people want to participate because others make an effort to keep the standards up. Commenting in this style is not what HN is for and it destroys what it is for. Please take a moment to read the guidelines and make an effort to observe them in future.
https://news.ycombinator.com/newsguidelines.html
While the author has very different aims and intended scope, I have been able to leverage both "traditional" (lol) LLMs and Cursor to design and program novel devices. The results have been pretty incredible to me, and all of the salty comments about the utility of AI to assist with electronics are fully missing the mark.
It's been my direct and many-times-repeated experience that o3 is an incredible electronics engineering wingman, so long as you follow good LLM hygiene; basically, verify all important assumptions, actually read the datasheets, err on the side of too much detail.
The time spent crafting prompts is the time I would spend planning and iterating on designs anyhow. Unlike a human, I don't have to pay them by the hour to patiently explain the nuances of different diodes or suggest alternative parts. o3 is remarkably good at rapidly grokking intent and making suggestions that have unblocked me.
For the camp of armchair quarterbacks on this site who demand specific "evidence" that we're not all just hallucinating the value of these tools, here are two things that happened just this week:
I was blowing my brains out troubleshooting a touch IC, the IS31SE5117A. No matter how good my reflow or how many units I tried, I could not bring up an I2C connection. Based only on the fact that Cref refused to rise above ~0.1V when it's supposed to be about 0.7V, o3 suggested it was likely that I had units from a batch that had no firmware. After going back and forth with their lead engineer for a week, I ordered a few IS32SE5117A - the automotive/medical-spec version of the same chip - and it worked immediately, prompting a product recall.
I'd managed to implement galvanic isolation on my USB connection to eliminate audio hum, but it turns out that touching a capacitive pad on a device that has no outside ground connection means that static has nowhere to go but to reboot the microcontroller. I'd been chasing my tail on this for a while, but o3 suggested that instead of isolating my whole device, I could just isolate my MIDI OUT circuit. This is one of those facepalm moments that only seems obvious in hindsight. I told my partner that abandoning weeks of effort was first very hard, and then very easy.
Finally, last night I had Cursor generate both sides of an SPI connection between two ESP32-S3s, something I had never done before. I obviously could have figured it out in 2020, but it would have taken me 1-2 weeks and it wouldn't be nearly as clean or cover as many edge cases.
My hottest take is that LLMs are already (far?) more valuable for engineering tasks than coding. That's kind of unfair because by definition, these tasks involve coding. The speed at which I've been able to iterate has been kind of nuts.
Also: any claim that people who tackle complex domains from a cold start aren't learning fundamentals is simply wrong; they're learning from a mentor with infinite patience and awareness of every part and circuit design pattern.