How I learned Vulkan and wrote a small game engine with it (2024) (edw.is)
179 points by jakogut | 94 comments




I just want to be a bit picky and say that bike shedding means focusing on trivial matters while ignoring or being oblivious to the complicated parts. What he described sounded more like a combination of feature creep/over-engineering.

You’re risking bike shedding “bike shedding”.

My opinions of Vulkan have not changed significantly since this was posted a year ago https://news.ycombinator.com/item?id=40601605

I'm sure Vulkan is fun and wonderful for people who really want low-level control of the graphics stack, but I found it completely miserable to use. I still haven't really found a graphics API that works at the level I want and that I enjoy using; I would like to get more into graphics programming, since I do think it would be fun to build a game engine, but I will admit that even getting started with the low-level Vulkan stuff is still scary to me.

I think what I want is something like how SDL does 2D graphics, but for 3D. My understanding is that for 3D in SDL you just drop into OpenGL or something, which isn't quite what I want.

Maybe WebGPU would be something I could have fun working on.


SDL 3.0 introduced its GPU API a year or so ago, which is an abstraction layer on top of Vulkan and others; might want to check it out.

Although after writing an entire engine with it, I ended up wanting more control, more perf, and to not be limited by the lowest common denominator limits of the various backends, and just ended up switching back to a Vulkan-based engine.

However, I took a lot of learnings from the SDL GPU code, such as their approach to synchronization, which was a pattern that solved a lot of problems for me in my Vulkan engine, and made things a lot easier/nicer to work with.


I'm working with SDL GPU now, and while it's nice, it hasn't quite cracked the cross-platform nut yet. You still need to maintain and load platform-specific shaders for each incompatible ecosystem, or keep a set of "source of truth" HLSL shaders that your build system processes into platform-specific ones through a set of disparate tools that you have to download from all over the place, and that really should be one tool. I have high hopes for SDL_shadercross to one day become that tool.

I thought shaders just needed to be compiled to SPIR-V.

My comment was specifically about cross-platform. Apple operating systems don't know what SPIR-V is.

Oh, well, sure, if you're targeting Apple as a platform you're gonna have to deal with their special snowflake graphics API.

I wish Apple had made a point to support Vulkan. I know about MoltenVK and all that fun stuff, but for a time, there was a graphics API that worked on all of the major platforms: OpenGL.

Vulkan was meant to succeed OpenGL, and despite my annoyances with the API, I still think that it's nice to have an open standard for these things, but now there isn't any graphics API that works on everything.


SDL GPU is extremely disappointing in that it follows the Vulkan 1.0 model of static pipelines and rigid workflows. Using Vulkan 1.3 with a few extensions is actually far more ergonomic beyond a basic "Hello, World" than using SDL GPU.

That might exclude a lot of your user base. For example a big chunk of Android users, or Linux workstation users in enterprise settings who are on older LTS distributions.

SDL GPU doesn't properly support Android anyways due to driver issues, and I doubt anyone's playing games on enterprise workstations.

But SDL is super high level. If you want to do more than Pong, you'll hit a wall very quickly.

I just want OpenGL, it was the perfect level of abstraction. I still use it today, both at work and for personal projects.


For what it's worth my experience with Metal was that it was the closest any of the more modern APIs got to OpenGL. It's just stuck on an irrelevant OS. If they made sure you could use it on Windows & Linux I think it'd fill a pretty cool niche.

WebGPU is in many ways closer to Metal than to Vulkan. You can use the API outside of the browser too, especially in Rust.

> WebGPU is in many ways closer to Metal than to Vulkan.

If only that were true for the resource binding model ;) WebGPU BindGroups are a 1:1 mapping to the Vulkan 1.0 binding model, and it's also WebGPU's biggest design wart. Even Vulkan is moving away from that overly rigid model, so we'll probably be stuck with a WebGPU that's more restrictive than required by any of its backend APIs :/
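
For readers unfamiliar with the model: a WebGPU GPUBindGroupLayoutEntry carries essentially the same information as a Vulkan 1.0 VkDescriptorSetLayoutBinding, which is the 1:1 mapping described above. A hedged C++ sketch of the Vulkan side (assuming a valid `device`; error handling omitted):

    #include <vulkan/vulkan.h>

    // One binding slot: "binding 0 in this set is a uniform buffer,
    // visible to the vertex stage": the same facts a WebGPU
    // bind group layout entry records.
    VkDescriptorSetLayoutBinding binding{};
    binding.binding         = 0;
    binding.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    binding.descriptorCount = 1;
    binding.stageFlags      = VK_SHADER_STAGE_VERTEX_BIT;

    VkDescriptorSetLayoutCreateInfo info{};
    info.sType        = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    info.bindingCount = 1;
    info.pBindings    = &binding;

    VkDescriptorSetLayout layout = VK_NULL_HANDLE;
    vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);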


I'll check out WebGPU at some point, I guess. I've written our rendering layer in all of the major APIs (OpenGL, DX12, Vulkan and Metal) and found it very instructive to have all of them to compare at the same time because it really underscored the differences; especially maintaining all of them at the same time. We eventually decided to focus only on DX12, but I think I'll revive this "everything all at once" thing for some side projects.

As someone who has done this since DX7: what you're looking for is WebGPU, either Dawn (Google) or wgpu-native (Mozilla). WebGPU works. It's 99% there across platforms for use.

There’s another wrapper abstraction we all love and use called BGFX that is nice to work with. Slightly higher level than Vulkan or Metal but lower than OpenGL. Works on everything: consoles, fridges, phones, cars, desktops, digital signage.

My own engines have jumped back and forth between WebGPU and BGFX for the last few years.


Personally I'm not interested in the web as a platform. The APIs themselves I'm interested in, but as a target I think the web needs to die for everything that isn't a document.

I never mentioned the web as a target, rather devices. You don’t need a browser, you need a window or a surface to draw on and use C/C++/Rust/C# to write your code.

WebGPU is a standard, not necessarily for the web alone.

At no point does a browser ever enter the picture.

https://eliemichel.github.io/LearnWebGPU/index.html


You mentioned "Google" and Firefox, one of which is a browser. I clarified that I'm not interested in the web as a target, not to dismiss your entire suggestion but rather to clarify that that particular part doesn't interest me.

It sounds like the standard did itself a disservice with its name; it's more interesting the way you describe it.

Well, it started off with “all the right intentions” of providing low-level access to the GPU for browsers to expose as an alternative to WebGL (and OpenGL ES like API’s of old).

However, throw a bunch of engineers in a room…

When wgpu got mature enough, they needed a way to expose the Rust API for other needs. The C wrapper came. Then, for testing and other needs, wgpu-native. I’m not a member of either team so I can’t say why for sure, but because of those decisions we have this powerful abstraction available pretty much on anything that can draw a web page. And since it’s just exposing the buffers and things that Vulkan, Metal, etc. are already based on, it’s damned fast.

The added benefit is you get WGSL as your shading language which can translate into any and all the others.

The downside is that it provides NO WINDOW support, as that needs to be provided by the platform, i.e. you. Good news is the tests and stuff use GLFW, and it’s the same setup to get Vulkan working as it is to get WebGPU working. Make window, probe it, make surface/swap chain, start your threads.


The WebGPU spec identifies itself squarely as a web standard: "WebGPU is an API that exposes the capabilities of GPU hardware for the Web." There are no mentions of non-web applications.

It's true that you can use Dawn and wgpu from native code, but that's all outside the spec.


There is mention of desktop applications in their getting-started docs; it seems well within the intention of the maintainers to me.

https://eliemichel.github.io/LearnWebGPU/introduction.html

> Yeah, why in the world would I use a web API to develop a desktop application?

> Glad you asked, the short answer is:

    Reasonable level of abstraction

    Good performance

    Cross-platform

    Standard enough

    Future-proof

In practice SDL is used to abstract away the system-dependent parts required to set up OpenGL.

I like OpenGL ES but the support for compute shaders sucks. I hate transform feedbacks. I am in the process of trying out WebGPU now, but it doesn't have good native support everywhere like OpenGL ES 3 does.

OpenGL is designed-by-committee state-machine crap.

You don't know it yet, but what you really want is DirectX 9/10/11.


`wgpu` in Rust is an excellent middle ground, matching the abstraction level of WebGPU. More capable than OpenGL, but you don’t have to deal with things like resource barriers and layout transitions.

The reason you don’t is that it does a fair amount of bookkeeping for you at runtime, only supports a single, general queue per device, and has several other limitations that only matter when you want to max out the capabilities of the hardware.

Vulkan is miserable, but several things are improved by using a few extensions supported by almost all relevant vendors. The misery mostly pays off, but there are a couple of cases where the API asks you for a lot of detail which all major drivers then happily go ahead and ignore completely.
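
To make the trade concrete, below is a hedged sketch of the kind of raw-Vulkan image layout transition that wgpu records on your behalf (`cmd` and `image` are assumed to exist; the exact stage and access masks depend on how the image is used):

    // Transition an image from its undefined initial layout to a
    // copy-destination layout before uploading pixel data into it.
    VkImageMemoryBarrier barrier{};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask       = 0;
    barrier.dstAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED;
    barrier.newLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image;
    barrier.subresourceRange    = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};

    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &barrier);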


How easy is it to integrate wgpu if the rest of your game is developed with a language that isn't rust? (e.g. C# or C++)

Very! there are unified headers and a C library that the maintainers have written as a wrapper around the library.

https://github.com/gfx-rs/wgpu-native

https://github.com/eliemichel/WebGPU-Cpp


wgpu is the name of the Rust library, but it pretty closely follows the WebGPU spec, which you can easily use from C or C++ via Google's `dawn` library. It provides C bindings as well as a templatized C++ API.

webgpu.h is, AIUI, part of the WebGPU spec. Both Dawn (Google’s C++ implementation used in Chrome) and wgpu (Mozilla’s implementation used in Firefox) can be used as concrete implementations of those headers.

Could you say more about which extensions you’re referring to? I’ve often heard this take, but found details vague and practical comparisons hard to find.

Dynamic rendering, timeline semaphores, upcoming guaranteed optimality of general image layouts, just to name a few.

The last one has profound effects for concurrency, because it means you don’t have to serialize texture reads between SAMPLED and STORAGE.
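
For readers who haven't seen it, a hedged sketch of what dynamic rendering (core in Vulkan 1.3) looks like: no VkRenderPass or VkFramebuffer objects, the attachment is described directly at record time. `cmd`, `swapchainView`, `width` and `height` are assumed to exist:

    VkRenderingAttachmentInfo color{};
    color.sType            = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView        = swapchainView;
    color.imageLayout      = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp           = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp          = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.0f, 0.0f, 0.0f, 1.0f}};

    VkRenderingInfo rendering{};
    rendering.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    rendering.renderArea           = {{0, 0}, {width, height}};
    rendering.layerCount           = 1;
    rendering.colorAttachmentCount = 1;
    rendering.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &rendering);
    // ... bind pipeline, draw ...
    vkCmdEndRendering(cmd);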


Not the same commenter, but I’d guess: enabling some features for bindless textures, and also Vulkan 1.3 dynamic rendering to skip render pass and framebuffer juggling.

I'll definitely give wgpu a look. I don't need to make something that competes with Unreal 5 or anything, but I do think it would be neat to have my own engine.

As someone who did OpenGL programming for a very, very long time, I fully agree with you. Without OpenGL being maintained, we are missing a critical “middle” drawing API. We have the very high level game engines, and very low level things like Vulkan and Metal which are basically thin abstractions on top of GPU hardware. But we are missing that fun “draw a triangle” middle API that lets you pick up and learn 3D Graphics (as opposed to the very different “learn GPU programming” goal).

If I was a beginner looking to get a basic understanding of graphics and wanted to play around, I shouldn’t have to know or care what a “shader” is or what a vertex buffer and index buffer are and why you’d use them. These low level concepts are just unnecessary “learning cliffs” that are only useful to existing experts in the field.

Maybe unpopular opinion: only a relative handful of developers working on actually making game engines need the detailed control Vulkan gives you. They are willing to put up with the minutiae and boilerplate needed to work at that low level because they need it. Everyone else would be better off with OpenGL.


> Without OpenGL being maintained, we are missing a critical “middle” drawing API.

OpenGL still works. You can set up an old-school glBegin()-glEnd() pipeline in as few as 10 lines of code, set up a camera and vertex transform, link in GLUT for some windowing, and you have the basic triangle/strip of triangles.
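
For the skeptical, a minimal sketch of that old-school pipeline in C++ with GLUT (e.g. freeglut); identity matrices are enough since the vertices are already in clip space:

    #include <GL/glut.h>

    void display() {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);            // immediate mode: no buffers, no shaders
        glColor3f(1, 0, 0); glVertex2f(-0.5f, -0.5f);
        glColor3f(0, 1, 0); glVertex2f( 0.5f, -0.5f);
        glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("triangle");
        glutDisplayFunc(display);
        glutMainLoop();
    }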

OpenGL is a fantastic way to introduce people to basic graphics programming. The really annoying part is textures, which can be gently abstracted over. However, at some point the abstractions will start to be either insufficient in terms of descriptive power, or inefficient, or leaky, and that's when advanced courses can go into Vulkan, CPU and then GPU-accelerated ray tracing, and more.


OpenGL still exists, runs and works fine on the two platforms that matter. I think its death has been overstated quite a bit.

With that said, we decided to focus on DX12 eventually because it just made sense. I've written our platform layers targeting OpenGL, DX12, Vulkan and Metal, and once you've internalized all of these I really don't think the horribleness of the lower-level APIs is as bad as people make it out to be. They're very debuggable, very clear and well supported.


OpenGL is still being maintained, it just isn't being updated. Since OpenGL 2.0 we've had vertex and pixel shaders. As a non-AAA developer, I can't imagine anything else I'd really need.

BTW: If anyone says OpenGL is "deprecated", laugh in their face.


Apple officially deprecated GL/GLES on both macOS and iOS seven years ago, and only ever supported up to GL 4.1 (which came out in 2010), meaning it doesn't support essential "modern" features like compute shaders (DX11 had them in 2009) or bindless textures (supported since 2012 on AMD and Nvidia, and since 2015 on Intel iGPUs; a massive performance win, needed for GPU-driven rendering and ray tracing).

Apple did that to push people towards their own walled garden of APIs rather than some deficiency of the OpenGL API.

There is no "technical" solution to this, no even-better API that would make them support it, as it's a business decision as much as anything else.


OK, maybe OpenGL is not "unmaintained", but the major OS and hardware vendors have certainly handed it its hat.

If I were starting a new project, would it be unwise to just use OpenGL? It's what I'm used to, but people seem to talk about it as if it's deprecated or something.

I know it is on Apple, but let's just assume I don't care about Apple specifically.


OpenGL is fine; it has the same issues now that it had before, but none of them really come from "old age" or being deprecated in any way. It's not as debuggable and much harder to get good performance out of than the lower-level APIs, but beyond that it's still great.

Honestly, starting out with OpenGL and moving to DX12 (which gets translated to Vulkan on Linux very reliably) is not a bad plan overall; DX12 is IMO a nicer and better API than Vulkan while still retaining the qualities that makes it an appropriate one once you actually want control.

Edit:

I would like to say that I really think one ought to use DSA (Direct State Access) and generally as modern an OpenGL style as one can, though. It's easy to get bamboozled into using older APIs because a lot of tutorials do so, but you should translate those things into modern OpenGL instead; trust me, it's worth it.

Actual modern OpenGL is not as overtly about global state as the older API, so at the very least you're removing large clusters of bugs by using DSA.
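
For anyone who hasn't seen the DSA style, a short sketch (core since OpenGL 4.5; assumes a context and function loader are already set up). Each call names the object it modifies instead of mutating a global bind point:

    struct Vertex { float pos[3]; };
    Vertex verts[3] = {{{-0.5f, -0.5f, 0}}, {{0.5f, -0.5f, 0}}, {{0, 0.5f, 0}}};

    GLuint vbo, vao;
    glCreateBuffers(1, &vbo);                                  // vs. glGenBuffers + glBindBuffer
    glNamedBufferStorage(vbo, sizeof(verts), verts, 0);        // vs. glBufferData on GL_ARRAY_BUFFER

    glCreateVertexArrays(1, &vao);
    glVertexArrayVertexBuffer(vao, 0, vbo, 0, sizeof(Vertex)); // attach vbo to binding slot 0
    glEnableVertexArrayAttrib(vao, 0);
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(vao, 0, 0);                     // attribute 0 reads from slot 0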


What do you think makes DX12 a better API than Vulkan?

I've found it has fewer idiosyncrasies, is slightly less tedious in general and provides a lot of the same control, so I don't really see much of an upside to using Vulkan. I don't love the stupid OO-ness of DX12, but I haven't found it to have much of an adverse effect on performance, so I've just accepted it.

On top of that you can just use a much better shading language (HLSL) with DX12 by default, without jumping through hoops. I did set up HLSL usage in Vulkan as well, but I'm not in love with the idea of having to add decorators everywhere and using a (sort of) second-class-citizen language to do things. The mapping from HLSL to Vulkan was also good enough, but still just a mapping; it didn't always feel super straightforward.

(Edit: To spell it out properly, I initially used GLSL because I'm used to it from OpenGL and had previously written some Vulkan shaders, but the reason I didn't end up using GLSL is because it's just very, very bad in comparison to HLSL. I would maybe use some other language if everything else didn't seem so overwrought.)

I don't hate Vulkan, mind you, I just wouldn't recommend it over DX12 and I certainly just prefer using DX12. In the interest of having less translation going on for future applications/games I might switch to Vulkan, though, but still just write for Win32.


OpenGL is still the best for compatibility, in my opinion. I have been able to get my software using OpenGL to run on Linux, Windows, old and new phones, Intel integrated graphics and Nvidia. Unless you have very specific requirements, it does everything you need, and with a little care, plenty fast.

>Maybe unpopular opinion: only a relative handful of developers working on actually making game engines need the detailed control Vulkan gives you.

If you make a game instead of a game engine, you can use one of the existing engines.


For a while I've been wondering if the push to DX12 or Vulkan as the "better" APIs has been a factor in the big engines becoming a near monoculture in games development. Games are very varied in what they require; some push the limits, but many releases are more modest, yet lots of them gravitate towards full-featured, leading-edge Unreal/Unity. A lower barrier to entry for graphics programming would let them make something that's a closer fit to their requirements.

The other big push would be Epic cutting royalties until you're earning a significant amount, which would encourage studios not to hire for or allocate as many resources to in-house engines.


I don't really think it is related. Graphics aren't really the most difficult part of a modern engine, and there are high quality open-source 3rd party solutions for rendering anyway.

In fact the "engine" part itself is quite small compared to the editor, and the hardest things can be done with third-party solutions, many of them open source: physics, rendering, audio, ECS, controls, asset loading, shader conversion.

The reason people gravitate towards Unity/Unreal is because of the low barrier to entry. This caused the monoculture among hobbyists.

The reason studios are gravitating to those engines is that there is plenty of cheap labour available.


The problem is that 'something like SDL, but 3D' very quickly turns into a full-blown engine. There's just such a combinatorial explosion of different ways to do things in 3D compared to 2D that a 3D 'game engine' is either limiting or complicated.

OpenGL was designed as a way to more or less do that and it turned complicated fast.


I followed tutorials for Vulkan. I liked vk-guide, until it updated to the latest version. People said the newer SDL is so much better, but I honestly had more fun and got things done back with render passes.

I personally have just been building off of tutorials. But notwithstanding all of the boilerplate code, the enjoyability of a code base can be vastly different.

The most fun I’ve ever had coding, and still do at times, is with WebGL. I just based it off of the Mozilla tutorial and went from there. WebGLFundamentals has good articles…but to be honest I do not love their code


If you don't need 4K PBR rendering, a software renderer is a lot of fun to write.

Interesting. I wouldn't actually mind learning how to do that; any tips on how/where to get started?

Pikuma.com has a course that builds a software renderer pretty much from scratch, with all the necessary math and explanations, in a very pedagogical way. Highly recommend it.

I can highly recommend this course; I finished it. It's one of those code katas you can learn a new language with, a bit like Ray Tracing in One Weekend.


Getting a triangle on the screen is the hello world of 3D applications. Many such guides for your backend of choice. From there it becomes learning how the shaders work, internalizing projection matrices (if you're doing 3D) which takes a bit of thinking, then slowly as you build up enough abstractions turns back into a more "normal" data structures problem surrounding whatever it is you're actually building. But it's broad, be prepared for that.
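
On the projection-matrix step specifically, the classic gluPerspective-style matrix is small enough to internalize. A sketch in C++ (right-handed, OpenGL-style clip z in [-1, 1], column-major storage):

    #include <cmath>

    void perspective(float out[16], float fovy_rad, float aspect,
                     float znear, float zfar) {
        float f = 1.0f / std::tan(fovy_rad / 2.0f);   // cot(fovy / 2)
        for (int i = 0; i < 16; ++i) out[i] = 0.0f;
        out[0]  = f / aspect;
        out[5]  = f;
        out[10] = (zfar + znear) / (znear - zfar);
        out[11] = -1.0f;                              // puts -z into w for the perspective divide
        out[14] = (2.0f * zfar * znear) / (znear - zfar);
    }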

Definitely recommend starting with a more "batteries included" framework, then trying your hand at opengl, then Vulkan will at least make a bit more sense. SDL is a decent place to start.

A lot of the friction is due to the tooling and debugging, so learning how to do that earlier rather than later will be quite beneficial.


Tsoding has been live streaming the development of a software renderer as of late: https://www.youtube.com/watch?v=maSIQg8IFRI

Pikuma.com has a good one.

If you want to render 2D vs 3D there are different tradeoffs: a 3D renderer has to interpolate attributes over triangles, while a 2D renderer doesn't, and as a result can render n-gons without having to triangulate them.

I'm just going to dump some links really quick, which should get anyone started.

Getting a framebuffer on screen: https://github.com/zserge/fenster

I would recommend something like SDL if you want a more complete platform abstraction, it even supports software rendering as a context mode.

Filling solid rectangles is the obvious first step.

Loading images and copying pixels onto parts of the screen is another. I recommend just not drawing things that intersect the screen boundaries to get started. Clipping complicates things a bunch but is essential.
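
A minimal sketch of those first two steps combined, assuming a 32-bit RGBA framebuffer: fill a rectangle, clipped against the screen bounds.

    #include <cstdint>
    #include <algorithm>

    void fill_rect(uint32_t* fb, int fb_w, int fb_h,
                   int x, int y, int w, int h, uint32_t color) {
        // Clip the rectangle against the framebuffer bounds.
        int x0 = std::max(x, 0), y0 = std::max(y, 0);
        int x1 = std::min(x + w, fb_w), y1 = std::min(y + h, fb_h);
        for (int py = y0; py < y1; ++py)
            for (int px = x0; px < x1; ++px)
                fb[py * fb_w + px] = color;
    }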

Next up: ghetto text blitting https://github.com/dhepper/font8x8 I dislike how basically every rendering tutorial just skips over drawing text on screen, which is super useful for debugging.
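
A sketch of that text-blitting step using the font8x8 header linked above, assuming the layout its README demonstrates (each glyph is 8 bytes, one per row, least significant bit as the leftmost pixel). No clipping here, per the advice above:

    #include <cstdint>
    #include "font8x8_basic.h"   // from the dhepper/font8x8 repo

    void draw_char(uint32_t* fb, int fb_w, int x, int y,
                   char c, uint32_t color) {
        const char* glyph = font8x8_basic[(unsigned char)c & 0x7F];
        for (int row = 0; row < 8; ++row)
            for (int col = 0; col < 8; ++col)
                if (glyph[row] & (1 << col))
                    fb[(y + row) * fb_w + (x + col)] = color;
    }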

For drawing single pixel lines, this page has everything on Bresenham:

http://members.chello.at/easyfilter/bresenham.html
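
The integer-only core of Bresenham from that page, transcribed as a C++ sketch (no clipping):

    #include <cstdint>
    #include <cstdlib>

    void draw_line(uint32_t* fb, int fb_w,
                   int x0, int y0, int x1, int y1, uint32_t color) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                         // error term, updated incrementally
        for (;;) {
            fb[y0 * fb_w + x0] = color;
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; } // step in x
            if (e2 <= dx) { err += dx; y0 += sy; } // step in y
        }
    }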

For 2d rasterization, here's an example of 3 common approaches: https://www.mathematik.uni-marburg.de/~thormae/lectures/grap...

Scanline rasterization taught me a lot about traversing polygons; I recommend trying it even if you end up preferring a different method. Sean Barrett has a good overview: https://nothings.org/gamedev/rasterize/

Side note: analytical antialiasing is fast, but you should be careful with treating alpha as coverage; the analytic approaches tell you how much of a pixel is covered, not which parts are.

For 3d rasterization Scratchapixel is good: https://www.scratchapixel.com/lessons/3d-basic-rendering/ras...

Someone mentioned the Pikuma course which is also great, though it skips over some of the finer details such as fixed point rasterizing.

For good measure here's some classic demoscene effects for fun: https://seancode.com/demofx/

Anyway, this is just scratching the surface, being progressively able to draw more and more types of primitives is a lot of fun.


Awesome! Do you have any resources on, uhhh, "hardware accelerating" a software renderer? I.e., using SIMD (or math hardware like the vector hardware you can access with the Accelerate[0] framework on Apple devices).

[0] https://developer.apple.com/documentation/accelerate


There's the work on Larrabee by Mike Abrash and co: https://www.gamedeveloper.com/programming/sponsored-feature-...

This is a gold mine, thank you.

To this day, the best 3D API I’ve used (and I’ve tried quite a few over the years) is Apple’s SceneKit. Just the right levels of abstraction needed to get things on the screen in a productive, performant manner for most common use cases, from data visualization to games, with no cruft.

Sadly 1) Apple only, 2) soft deprecated.


SceneKit is actually just straight up deprecated now: https://developer.apple.com/documentation/scenekit/

I imagine it will still be around for a long time because Apple and a lot of large third party apps use it for simple 3D experiences. (E.g. the badges in the Apple Fitness app).

Apple wants devs to move to RealityKit, which does support non-AR 3D, but it is still pretty far from feature parity with SceneKit. Also RealityKit still has too many APIs that are either visionOS only or are available on every platform but visionOS.

Microrant: I absolutely loathe when I am told "move to new thing. Old thing is deprecated/unsupported" and the new thing is incredibly far from feature parity and usually never reaches parity, let alone exceeds it. This is not just an Apple problem.


Trying to write a ground up game engine in Metal is a very serious exercise in self-discipline. Literally everything you need is right at your finger tips with RealityKit / old SceneKit. It’s so tempting to cheat or take a few short cuts. There’s even a fully featured physics engine in there.

RealityKit is pretty cool, and it seems to be the replacement. Still Apple-only though, and I find the feedback loop slow/frustrating due to Swift.

I find SDL3 more fun and interesting, but it’s a ton of work to get going.


If you want something like SDL but for 3D, check out Raylib.

Unironically I think I can help.

Frank Luna’s D3D11 bible is probably the closest thing we’ll get to a spaced-repetition learning curriculum for 3D graphics at a level where you can do an assload with the knowledge.

No, it won’t teach you to derive things. Take Calculus I and II.

No, it won’t teach you about how light works. Take an advanced electrical engineering course on electromagnetism.

But it will teach you the nuts and bolts in an approachable way using what is by far an excellent graphics API, Direct3D 11. Even John Carmack approves.

From there on, all the Vulkan and D3D12 shit is just memory fences, buffers and queue management. Absolute trash that you shouldn’t use unless you have to.


There was XNA but it was abandoned a long time ago.

I think there are maintained community forks/reimplementations. FNA is probably something I would enjoy; that’s basically the level I want to program at.

I wonder if I can get it working with F# in Linux…


> Starting your engine development by doing a Minecraft clone with multiplayer support is probably not a good idea.

Plenty of people make minecraft-like games as their first engine. As far as voxel engines go, a minecraft clone is "hello, world."


Vulkan was one of the hardest things I've ever tried to learn. It's so unintuitive and tedious that it seemingly drains the joy out of programming. Tiny brain =(

You don't have a tiny brain. Vulkan is a low-level chip abstraction API, and is about as joyful to use as a low-level USB API. For a more fun experience with very small amounts of source code needed to get started, I'd recommend trying OpenGL (especially pre-2.0, before they introduced shaders and started down the GPU-programming path), but the industry is dead-set on killing OpenGL for some reason.

Vulkan is definitely a major pain and very difficult to learn... But once you've created an init function, a create-buffer function, a create-material function, etc. (which you do once), you can largely just ignore it and write at a higher level.

I don't like Vulkan. I keep thinking, did nobody look at this and think "there must be a better way"? But it's what we've got, and mostly it's just: learn it and write the code once.
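
As an illustration of that write-it-once layer, a hedged sketch of a create-buffer helper on top of the VulkanMemoryAllocator (VMA) library mentioned elsewhere in this thread (assumes an initialized VmaAllocator; error handling omitted):

    #include <vk_mem_alloc.h>

    struct Buffer {
        VkBuffer      buffer;
        VmaAllocation allocation;
    };

    // Wrap the ceremony once; the rest of the engine calls this.
    Buffer create_buffer(VmaAllocator allocator, VkDeviceSize size,
                         VkBufferUsageFlags usage) {
        VkBufferCreateInfo bufInfo{};
        bufInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
        bufInfo.size  = size;
        bufInfo.usage = usage;

        VmaAllocationCreateInfo allocInfo{};
        allocInfo.usage = VMA_MEMORY_USAGE_AUTO;   // let VMA pick the memory type

        Buffer b{};
        vmaCreateBuffer(allocator, &bufInfo, &allocInfo,
                        &b.buffer, &b.allocation, nullptr);
        return b;
    }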


> I'd recommend trying OpenGL

Tbh, OpenGL sucks just as much as Vulkan, just in different ways. It's time to admit that Khronos is simply terrible at designing 3D APIs ;) (probably because there are too many cooks involved)


Does anyone know why the industry is killing OpenGL?

People wanted more direct control over the GPU and memory, instead of having the drivers do that hard work.

To fix this AMD developed Mantle in 2013. This inspired others: Apple released Metal in 2014, Microsoft released DX12 in 2015, and Khronos released Vulkan in 2016 based on Mantle. They're all kind of similar (some APIs better than others IMO).

OpenGL did get some extensions to improve it too but in the end all the big engines just use the other 3.


OpenGL cannot achieve the control over modern hardware necessary to get competitive performance. Even in terms of CPU overhead it’s very limiting.

Direct3D (and Mantle) had been offering lower level access for years, Vulkan was absolutely necessary.

It’s like assembly. Most of us don’t have to bother.


When I first tried to learn Vulkan, I felt the exact same way. As I was following the various Vulkan tutorials online, I felt that I was just copying the code, without understanding any of it and internalizing the concepts. So, I decided to learn WebGPU (via the Google Dawn implementation), which has a similar "modern" API to Vulkan, but much more simplified.

The commonalities to both are:

- Instances and devices

- Shaders and programs

- Pipelines

- Bind groups (in WebGPU) and descriptor sets (in Vulkan)

- GPU memory (textures, texture views, and buffers)

- Command buffers

Once I was comfortable with WebGPU, I eventually felt restrained by its limited feature set. The restrictions of WebGPU gave me the motivation to go back to Vulkan. Now, I'm learning Vulkan again, and this time, the high-level concepts are familiar to me from WebGPU.

Some limitations of WebGPU are its lack of push constants, and the "pipeline explosion" problem (which Vulkan tries to solve with the pipeline library, dynamic state, and shader object extensions). Meanwhile, Vulkan requires you to manage synchronization explicitly with fences and semaphores, which required an additional learning curve for me, coming from WebGPU. Vulkan also does not provide an allocator (most people use the VMA library).
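
For context, the push constants WebGPU lacks are a small Vulkan fast path for per-draw data. A hedged sketch (`cmd` and `pipelineLayout` are assumed to exist, with the range declared when the pipeline layout was built):

    // Declared once when building the pipeline layout:
    VkPushConstantRange range{};
    range.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;
    range.offset     = 0;
    range.size       = 16 * sizeof(float);          // e.g. one 4x4 matrix

    // Then per draw: no buffer, no descriptor set, just bytes.
    float mvp[16] = { /* model-view-projection */ };
    vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT,
                       0, sizeof(mvp), mvp);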

SDL_GPU is another API at a similar abstraction level to WebGPU, and could also be an easier choice than Vulkan for getting started. So if you're still interested in learning graphics programming, WebGPU or SDL_GPU could be good to check out.


Exactly the reason why I haven't switched from OpenGL to Vulkan. Vulkan is just ridiculously overengineered. CUDA shows that allocating GPU memory and copying from host to device can be one-liners, yet in Vulkan it's an incredible amount of boilerplate to go through. Modern Vulkan fixes a lot of issues, like getting rid of pipelines, render passes, bindings, etc., but there is still much more to fix before it's usable.
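
The CUDA comparison, concretely: with the runtime API, a device allocation plus a host-to-device copy really is two calls (sketch; error checking omitted):

    #include <cuda_runtime.h>
    #include <vector>

    int main() {
        std::vector<float> host(1024, 1.0f);
        float* dev = nullptr;

        cudaMalloc(&dev, host.size() * sizeof(float));            // allocate on the GPU
        cudaMemcpy(dev, host.data(), host.size() * sizeof(float),
                   cudaMemcpyHostToDevice);                       // copy host -> device

        // The Vulkan equivalent: query VkPhysicalDeviceMemoryProperties,
        // pick a memory type, vkAllocateMemory, vkBindBufferMemory, then
        // either map the memory or stage the copy through a second buffer
        // and submit it to a queue.
        cudaFree(dev);
        return 0;
    }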

I think anyone who has ever looked at typical Vulkan code examples would reach the same conclusion: it's not for application/game developers.

I really hope SDL3 or wgpu can be the abstraction layer that settles all this down. I personally bet on SDL3, just because it has support from Valve, a company that has reasons to care about cross-platform gaming. But I would look into wgpu too (...if I were better at Rust, sigh).


Yep. Most of the engine and "game from scratch" tutorials on YouTube, etc., use this style of having OpenGL code strewn around the app.

With Vulkan this is borderline impossible and it becomes messy quite quickly. It's very low level. Unlike OpenGL, one really needs an abstraction layer on top, so you either gotta use a library or write your own in the end.


For wgpu, someone else mentioned in another comment that there are bindings for other languages, maybe your favorite too!

You don't have a tiny brain--programming Vulkan/DX12 sucks.

The question you need to ask is: "Do I need my graphics to be multithreaded?"

If the answer is "No"--don't use Vulkan/DX12! You wind up with all the complexity and absolutely zero of the benefits.

If performance isn't a problem, use anything else: OpenGL, DirectX 11, game engines, etc.

Once performance becomes the problem, then you can think about Vulkan/DX12.


What about new features? There are many small features that can't be used via older APIs and bigger ones like accelerated ray tracing.

(2024) Discussion at the time (625 points, 260 comments): https://news.ycombinator.com/item?id=40595741

I am fascinated with 3D/Gaming programming and watch a few YouTubers stream while they build games[1]. Honestly, it feels insanely more complicated than my wheelhouse of webapps and DevOps. As soon as you dive in, pixel shaders, compute shaders, geometry, linear algebra, partial differential equations (PDE). Brain meld.

[1] https://www.youtube.com/@tokyospliff


I love that it's becoming kind of cool to do hobby game engines. I've been working on a hobby engine for 10 years and it's been a very rewarding experience.

Note that I have retained the original title of the post, but I am not the author.

>If you haven’t done any graphics programming before, you should start with OpenGL

I remember reading NeHe OpenGL tutorials about 23 years ago. I still believe it was one of the best tutorial series about anything in the way they were structured and how each tutorial built over knowledge acquired in previous ones.



