@jplebreton yeah, this was an interesting case where they used a combination of public and private enterprises to pull it off. Capitalism and cronyism did of course corrupt things, and a bunch of people made a ton of money out of it... but now people have a nice city to live in. To me it looks like an OK way to work within the system you find yourself in, but still improve things for everyone
A good summary of the whole thing in this thread, with tons of pictures:
@jplebreton to people from an industrial city, the pictures of a green, beautiful, walkable city were entirely unbelievable. "It'd be nice if it was true" was the general sentiment. But those designs and that dream of a better city mostly survived through 30 years of political and social changes.
Interesting how achievable dreams can change the world around us, huh? :)
@jplebreton well, 10 years later it turned out they were making it. And 20 years later it had clearly worked. It's 30 years later now, and it's been a smashing success
Picture of the before/after:
one interesting case of this happened in my home town, Bilbao, in northern Spain.
In the '90s it was a polluted, ugly industrial city, but the local politicians were smart enough to realize that heavy industry was fading, so the city needed to change. They came up with a plan called "Bilbao Ría 2000" (Bilbao Estuary 2000) to reimagine the city.
When they unveiled the plan I was 8 years old, and I remember thinking "yeah, right, that's some crazy scifi crap. No way they're pulling that off"
If you want to understand the architecture of a GPU, how they operate and execute work and all that, the AMD manuals are _incredibly_ detailed. Highly recommended if you want to go down that rabbit hole... but be warned that it's a _deeeeep_ rabbit hole
If you want to go down into low-level communication with the GPU, you'll have to look at GPU driver implementations, like the Nouveau driver
And between companies, the only thing they _need_ to do is implement the DX/Vulkan/OpenGL specs. How exactly they do that is up to them, and the details vary wildly
Individual companies tend to reuse a lot of stuff from older generations of cards, and within the same generation you can generally count on things being pretty similar... but there's no guarantee that, say, Navi GPUs will have anything in common with Navi2 GPUs in terms of how they communicate with their driver. In fact, that's on purpose, so that GPU vendors have more freedom to modify and improve things under the hood
The lowest level you can go is Vulkan, which is the "no hand-holding" API. You'll have to do a loooot of stuff manually (like allocating memory on the GPU, for example), but all of that is still standardized by the Vulkan spec. You can switch your GPU from AMD to NVIDIA or Intel, and everything should still work (at least in theory...)
Basically, really low-level communication with the GPU is not something we ever do. We use the APIs provided by the GPU driver. In general, because GPUs all have different architectures, ISAs, etc, you can't count on being able to communicate with one without a driver that's specific to that GPU
In general, the driver will expose standard APIs for you to communicate with the GPU. In Linux, the driver will give you a Vulkan and an OpenGL implementation you can use. In Windows, you'll have DirectX, Vulkan and OpenGL. If you want to go lower-level than that (and basically communicate via IRQs and memory locations with the GPU) you'll need to write your own GPU driver
You can kinda solve this (the per-GPU recompilation problem) by switching to SPIR-V, a precompiled shader binary format. SPIR-V is the standard in Vulkan, and OpenGL gained support for it too (in OpenGL 4.6, via the ARB_gl_spirv extension). With SPIR-V, you compile your source to this binary representation ahead of time, and then the driver only needs to do the final translation to GPU machine code, which is a lot simpler and more straightforward. That way, compiled SPIR-V programs are GPU- and driver-independent, and you can share them around freely
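The split can be sketched with a toy model (plain Python, no real compiler; the function names are made up, and in reality stage 1 would be an offline tool like glslangValidator while stage 2 happens inside each vendor's driver):

```python
# Stage 1: the heavy GLSL -> SPIR-V compilation, done ONCE, offline,
# by a vendor-neutral compiler (toy stand-in for a tool like glslangValidator).
def glsl_to_spirv(glsl: str) -> bytes:
    return b"SPIRV\x00" + glsl.encode()

# Stage 2: the driver's part -- a much simpler SPIR-V -> GPU machine code
# translation, done per GPU at load time (toy stand-in).
def driver_translate(spirv: bytes, gpu_isa: str) -> bytes:
    assert spirv.startswith(b"SPIRV\x00"), "drivers only accept SPIR-V here"
    return gpu_isa.encode() + b":" + spirv[6:]

# One SPIR-V blob, shipped to everyone, consumed by any driver:
blob = glsl_to_spirv("void main() {}")
on_amd = driver_translate(blob, "RDNA2")
on_nvidia = driver_translate(blob, "Ada")
```

The point is that the expensive, vendor-neutral work happens once up front, and only the cheap final translation is driver-specific.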
Regarding the compilation step, you can get the compiled binary back out of the driver (via glGetProgramBinary), but it's unfortunately driver-dependent and GPU-dependent, so it's only useful as a cache, to avoid recompiling on later runs. If you wanted to ship something with precompiled shaders in OpenGL, you'd have to precompile your shaders for every possible combination of GPU and driver, and keep that updated forever. A pretty ridiculous task, unfortunately.
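That cache-only use is sketched below as a toy in plain Python. The compile function is a stand-in (in a real app it would be glCompileShader followed by glGetProgramBinary); the key part is that the GPU/driver identity has to go into the cache key, because the blob is only valid for that exact combination:

```python
import hashlib

def cache_key(vendor: str, driver_version: str, source: str) -> str:
    # A compiled blob is only valid for one exact GPU + driver combo,
    # so their identity goes into the key alongside the shader source.
    h = hashlib.sha256()
    for part in (vendor, driver_version, source):
        h.update(part.encode())
        h.update(b"\0")
    return h.hexdigest()

class ShaderBinaryCache:
    def __init__(self, compile_fn):
        self._compile = compile_fn  # stand-in for "ask the driver to compile"
        self._blobs = {}
        self.compile_count = 0      # how many real compiles we paid for

    def get(self, vendor, driver_version, source):
        key = cache_key(vendor, driver_version, source)
        if key not in self._blobs:
            self.compile_count += 1
            self._blobs[key] = self._compile(vendor, source)
        return self._blobs[key]

# Toy "driver compiler" that emits a fake vendor-specific blob.
cache = ShaderBinaryCache(lambda vendor, src: f"{vendor}-ISA:{src}".encode())

a = cache.get("AMD", "23.1", "void main() {}")
b = cache.get("AMD", "23.1", "void main() {}")    # same GPU+driver: cache hit
c = cache.get("NVIDIA", "535", "void main() {}")  # different driver: must recompile
```

On disk you'd also want to invalidate the whole cache when the user updates their driver, which the version component of the key handles for free.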
You can either do these things manually (each OS has a different API for this), or you can use a library that abstracts all this and gives you a cross-platform implementation of window handling and blitting (the "copy texture to screen" part). Examples of libraries like this are: SDL, SFML, GLFW and a bunch of others.
Everything that I've said so far is platform-independent. OpenGL defines all these things as a standard. Sadly, there are some things they didn't dare to standardize, because OSs are too different from each other. This shows up in everything related to window handling. As long as you don't need the data to be displayed on screen, you can do everything cross-platform. But once you do, window creation is a platform-dependent thing, and outputting the result of your shaders to a window is too.
Unlike other similar APIs (DX12, Metal, Vulkan...), OpenGL doesn't consume a pre-compiled version of your shaders. In OpenGL, you give the driver the source code for your shader, the driver compiles it and gives you a handle you can use to refer to the compiled shader. Then, you can send data to the GPU, and essentially tell OpenGL "grab this shader, and set these 3 textures as input, and this buffer as output. Then execute".
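The shape of that flow can be modeled as a toy object (plain Python, no real GL calls; the method names only echo glCompileShader / glLinkProgram, they aren't the actual API):

```python
class ToyGLDriver:
    """Toy model of OpenGL's handle-based shader API (no real GL calls)."""

    def __init__(self):
        self._programs = {}
        self._next = 1

    def compile_program(self, source: str) -> int:
        # Like glCompileShader + glLinkProgram: the driver compiles your
        # GLSL source at runtime and hands back an opaque integer handle,
        # never the compiled code itself.
        handle = self._next
        self._next += 1
        self._programs[handle] = source
        return handle

    def dispatch(self, handle: int, textures, output) -> str:
        # "Grab this shader, set these textures as input and this
        # buffer as output, then execute."
        if handle not in self._programs:
            raise ValueError("invalid program handle")
        return f"program {handle}: {len(textures)} textures -> {output}"

gl = ToyGLDriver()
prog = gl.compile_program("void main() { /* ... */ }")
result = gl.dispatch(prog, ["albedo", "normal", "shadow"], "backbuffer")
```

The opaque handle is the important bit: your program never sees the compiled machine code, which is exactly what keeps it driver-dependent.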
Programmer in Frostbite rendering team. Spaniard in Stockholm