I made these a couple of years ago as a teaching exercise for https://minitorch.github.io/. At the time the resources for doing anything on GPUs were pretty sparse and the NVidia docs were quite challenging.
These days there are great resources for going deep on this topic. The CUDA-mode org is particularly great, both their video series and PMPP reading groups.
Slightly offtopic, but any chance you could update or re-upload code for your https://github.com/harvardnlp/DeepLatentNLP tutorial? I found the NLP latent variable models discussed there really interesting, and notebooks were excellent. However, these seem gone and the only thing left are slides?
Alternatively, any other places that discuss the same topics, including some code? I could only find equivalent discussions with code in Pyro docs and Kevin Murphy's book, volume 2. But these are more sparse as they also cover many other topics.
I'll take a look. Yeah, Pyro is the best thing to do here. But it would be nice to revisit some of these implementations.
Thank you so much!
Thanks a lot, Sasha, for creating these. I found your LLM training puzzles to be excellent as well.
Awesome! Here are all of them if anyone else is looking.
https://github.com/srush/Triton-puzzles
https://github.com/srush/tensor-puzzles
https://github.com/srush/autodiff-puzzles
https://github.com/srush/transformer-puzzles
https://github.com/srush/GPTworld
https://github.com/srush/LLM-Training-Puzzles
Thanks Sasha - this looks like a great resource. Just to be clear, would you recommend going through other, newer resources instead of this one?
Not sure if your comment is meant to discourage someone from going through this.
These still hold up, and I think they're a great first step. But they no longer get you to the goal line. Think about it more as conceptual practice, before you enter the jungle.
Got it, thank you.
Do you have links to the other great resources you are referring to?
I recently ported this to Metal for Apple Silicon computers. If you're interested in learning GPU programming on an M series Mac, I think this is a very accessible option. Thanks to Sasha for making this!
https://github.com/abeleinin/Metal-Puzzles
Wow, thank you! I've been wanting to learn about GPUs on my next flight, and this is the perfect material for that.
I think this course is also relevant for some deeper context.
https://gfxcourses.stanford.edu/cs149/fall23/lecture/datapar...
All videos should be available on YT by the end of the month.
When working on GPU code there’s really two parts to it, I feel. One is “how do I even write code for the GPU” which this tutorial seems to cover but there’s a second part which is “how do I write good code for the GPU” which seems like it would need another resource or expansion to this one.
I've always felt like the best interactive educational model for forming a good intuition on how to maximize throughput and minimize worst-case latency in a pipelined parallel dataflow system (e.g. DSPs, FPGAs, GPUs, or even distributed message-passing systems) would be some variant of the game Factorio. Specifically, one with:
1. instead of buildings, IP cores doing processing steps;
2. instead of belts, wires — which take up far less than one tile, so many can run together along one tile and many can connect to a single IP core; where each wire can move its contents at arbitrary speed (including "stopped") — but where this will have a power-use cost proportional to the wire's speed;
3. an overall goal of optimizing for rocket launches per second per power-usage watt. (Which should overall require minimizing the amount of stuff moving around across the whole base, avoiding pipeline stalls; doing as much parallel batching as possible; etc.)
(Yes, I know Shenzhen I/O exists. It's great for what it does — modelling signals and signal transformations — but it doesn't model individual packets of data as moving along wires with propagation delay, and with the potential for e.g. parallel-line interference given a bad encoding scheme, quantum tunnelling, overclocking or undervolting components, etc. I think a Factorio-variant would actually be much more flexible to implement these aspects.)
It would be nice if the puzzles natively supported C++ CUDA.
Here is a port without the visualizer:
https://twitter.com/srush_nlp/status/1719376959572980094
Here is an amazing in-browser implementation in WebGPU
https://www.answer.ai/posts/2024-09-12-gpupuzzles.html
I loved the tensor puzzles you made. I spent the morning revisiting and liking all the videos on youtube you've made. Hope for many more in the future!
Thanks so much!
Either puzzle 4 has a bug in it or I'm losing my mind. (Possible spoiler for the solution below, so don't read if you want to go in fresh)
Results in a failed assertion: but the test cell beneath it will still pass?
maybe try out[local_i, local_j] ?
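For reference, here's a sketch of how that puzzle's kernel is usually guarded, assuming the repo's Numba-style `cuda` object and an (out, a, size) signature; the exact names may differ from the notebook:

    # Hypothetical Puzzle 4-style solution sketch (names assumed, not copied
    # from the repo). The key point: more threads are launched than there are
    # positions, so out-of-range threads must be guarded out before indexing.
    def map_2D_test(cuda):
        def call(out, a, size) -> None:
            local_i = cuda.threadIdx.x
            local_j = cuda.threadIdx.y
            if local_i < size and local_j < size:
                out[local_i, local_j] = a[local_i, local_j] + 10
        return call

If the guard is missing or the indices are swapped, some writes can land in the wrong place, which can make one cell's assertion fail while a later test still appears to pass.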
So I'm used to working with lists and maps, which doesn't really track well with tackling problems on thousands of cores.
Is the usual strategy to worry less about repeating calculations and just use brute force to tackle the problem?
Is there a good resource to read about how to tackle problems in an extremely parallel way?
It’s not about brute force, but about trying to do the exact same calculation on every thread. Efficiency in numpy or on a GPU comes from avoiding “divergence”, which is what it’s called on a GPU when some threads execute different instructions than other threads in the same thread group. If one thread executes a unique instruction, all the other threads have to stall and wait for it. If all the threads execute unique blocks, the waiting becomes catastrophic and slower than single-threaded code. But if they all do the same thing, the machine will fly. Sometimes avoiding divergence means doing things that seem counter-intuitive compared to CPU single-threaded code, which is why it has a reputation for being brute force, but really it’s just a different set of efficiency tricks.
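To make that concrete, here's a toy Numba-CUDA sketch (my own names and transform, not from the tutorial). The first kernel branches per element, so threads in one warp can take different paths; the second computes both candidates and selects, which typically compiles to predicated instructions so the warp stays in lockstep:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_divergent(out, a):
        i = cuda.grid(1)
        if i < a.shape[0]:
            # Neighbouring threads can take different branches here, so the
            # warp ends up executing both paths one after the other.
            if a[i] > 0.0:
                out[i] = a[i] * 2.0
            else:
                out[i] = a[i] * 0.5

    @cuda.jit
    def scale_uniform(out, a):
        i = cuda.grid(1)
        if i < a.shape[0]:
            # Every thread does the same arithmetic and just selects a result.
            hi = a[i] * 2.0
            lo = a[i] * 0.5
            out[i] = hi if a[i] > 0.0 else lo

    a = np.random.randn(1 << 20).astype(np.float32)
    out = np.zeros_like(a)
    threads = 256
    blocks = (a.shape[0] + threads - 1) // threads
    scale_uniform[blocks, threads](out, a)

A tiny if/else like this is cheap in practice; the catastrophic case described above is when many threads follow genuinely different code paths.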
It is true that you don’t have to worry as much about repeating calculations. I think you’re referring to “rematerialization”, meaning after doing some non-trivial calculation once and using the result, throwing it away and redoing the same calculation again later on the same thread. It’s true this can sometimes be advantageous, mostly because memory access is so expensive. One load or store into VRAM can be as expensive as 10 or sometimes even 100 math instructions, so if your store & load takes 40 cycles, and recomputing something takes 25 cycles of math using registers, then recomputing can be faster.
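Roughly, the two patterns look like this in Numba CUDA (the function and the cycle numbers are purely illustrative):

    import math
    from numba import cuda

    # Store-and-reload: the intermediate f(x) makes a round trip through
    # global memory between the two kernels.
    @cuda.jit
    def pass1_store(tmp, x):
        i = cuda.grid(1)
        if i < x.shape[0]:
            tmp[i] = math.exp(x[i]) * math.sin(x[i])   # some non-trivial f(x)

    @cuda.jit
    def pass2_load(out, tmp, x):
        i = cuda.grid(1)
        if i < x.shape[0]:
            out[i] = tmp[i] + x[i] * tmp[i]

    # Rematerialized: no tmp array at all; f(x) is simply recomputed where it
    # is needed, trading a few register math instructions for the skipped
    # store and load.
    @cuda.jit
    def pass2_recompute(out, x):
        i = cuda.grid(1)
        if i < x.shape[0]:
            f = math.exp(x[i]) * math.sin(x[i])
            out[i] = f + x[i] * f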
I second the sibling recommendation to learn numpy; it’s a different way of thinking than single-threaded functional programming with lists & maps. Try writing some kind of image filter in Python both ways, and get a feel for the performance difference. If you’re familiar with Python, this is a one or two hour exercise. Last time I tried it, my numpy version was ~2 orders of magnitude faster than the lists & maps version.
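Something like this, for instance (a trivial "brighten" filter with made-up sizes; the exact speedup will vary):

    import numpy as np

    def brighten_loops(img, amount):
        # Pure-Python version: nested lists and per-pixel work.
        return [[min(255, px + amount) for px in row] for row in img]

    def brighten_numpy(img, amount):
        # Vectorized version: one expression over the whole array.
        return np.minimum(255, img.astype(np.int32) + amount).astype(np.uint8)

    h, w = 1080, 1920
    img = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)
    img_list = img.tolist()

    out_loops = brighten_loops(img_list, 40)
    out_numpy = brighten_numpy(img, 40)
    assert (np.array(out_loops, dtype=np.uint8) == out_numpy).all()

Time the two calls (e.g. with time.perf_counter) and the gap shows up immediately.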
One of the most fun ways to learn SIMD programming, in my humble opinion, is to study the shaders on ShaderToy. ShaderToy makes it super simple to write GPU code and see the result. Some of the tricks people use are very clever, but after studying them for a while and trying a few yourself, you’ll start to see themes emerge about how to organize parallel image computations.
I would recommend first learning Numpy or a similar vectorized library. If you have a good sense of those data structures (especially array broadcasting), that's a good starting point for what you can do in a GPU world.
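For example (a tiny illustration, not from the tutorial), broadcasting builds a full 2D result from a row and a column without any explicit Python loop:

    import numpy as np

    row = np.arange(4)            # shape (4,)
    col = np.arange(3)[:, None]   # shape (3, 1)

    # Both operands are broadcast to shape (3, 4), so this fills the whole
    # "outer sum" table in one vectorized expression.
    table = col * 10 + row
    print(table)
    # [[ 0  1  2  3]
    #  [10 11 12 13]
    #  [20 21 22 23]]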
Wow, it looks really interesting. I will definitely look into it.
Can I hire you to make Flash Attention a reality for V100?
Nope! Too hard for me. But it would be a great practice for someone who wants to get started in this space. There is a Triton implementation that might be a good starting place.
Looks nice and fun but the "see-through" font for the titles in the screenshots gives me some deep and primordial unease, not sure why.
https://en.wikipedia.org/wiki/Astigmatism
Maybe?
This: https://github.com/srush/GPU-Puzzles/raw/main/GPU_puzzlers_f...
Yeah, it looks bad in the readme. In the actual code it's cleaner. Font rendering is hard.
Looks bad on mobile only I think, not on desktop. That is why I did not understand.
Seems like an opportune moment to give a plug for bitcoin puzzles, namely BTC32 / the 1000 BTC Challenge [1].
pools are in dire need of cuda developers
[1] https://bitcointalk.org/index.php?topic=1306983.0
> pools are in dire need of cuda developers
Pools have money; if they need CUDA engineers, they are fully capable of hiring them at the industry rate.
Most are community-based; plus, the prize can far exceed such a rate.
> the prize can far exceed such a rate
For all the good it's done them.
https://news.ycombinator.com/item?id=41547395
Why? Wouldn't existing tools be about as good as they could be?
They are stagnating due to the exponential increase in difficulty.