Maybe.
Linux won because it worked. Hurd was stuck in research-and-development hell and never caught up.
However, Linus’s kernel was more elaborate than GNU Hurd, so it was incorporated.
Quite the opposite.
GNU Hurd was a microkernel, using lots of cutting edge research, and necessitating a lot of additional complexity in userspace. This complexity also made it very difficult to get good performance.
Linux, on the other hand, was just a bog standard Unix monolithic kernel. Once they got a libc working on it, most existing Unix userspace, including the GNU userspace, was easy to port.
Linux won because it was simple, not elaborate.
TIL. Thanks for the correction.
1. Many retro games were made for CRT TVs at 480p. Updating the graphics stack for modern TVs is valuable, even if nothing else is changed.
2. All of my old consoles only have analog A/V outputs, and my TV only has one analog A/V input. The mess of adapter cables and swapping is annoying. I want the convenience of playing on a system that I already have plugged in.
3. I don’t even still have some of the consoles that play my favorite classic games, and getting retro hardware is sometimes difficult. Especially things like N64 controllers with good joysticks.
Studios don’t need to do a full-blown remake to solve these problems. But I’m also not going to say the Crash and Spyro remakes weren’t welcome. Nintendo’s Virtual Console emulators toe this line pretty well.
But studios should still put in effort to make these classic games more accessible to modern audiences, and if that means a remake, that’s fine with me.
(I’m mostly thinking about the GameCube/PS2 generation and earlier. I don’t see much value in remakes of the Wii/PS3 generation yet.)
Zsh
No plugin manager. Zsh has a built-in plugin system (autoload) and ships with most things you want (like Git integration).
My config: http://github.com/cbarrick/dotfiles
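For illustration, here’s a minimal sketch of a plugin-manager-free ~/.zshrc along those lines. vcs_info and compinit ship with zsh; the custom function directory path and function name are assumptions, not from the linked config:

```shell
# Lazy-load custom functions from a personal directory (path is an assumption).
fpath=(~/.zsh/functions $fpath)
autoload -Uz my-widget          # hypothetical function, one file per function in fpath

# Completion system, bundled with zsh.
autoload -Uz compinit && compinit

# Git integration via the bundled vcs_info, no plugin needed.
autoload -Uz vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:git:*' formats '%b'
setopt PROMPT_SUBST
PROMPT='%~ ${vcs_info_msg_0_} %# '   # e.g. "~/src/dotfiles main %"
```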
Exactly.
My take is that the issue isn’t with tmpfiles.d, but rather the decision to use it for creating home directories.
This is a good book on how Google treats production environments at their scale.
Cattle, not pets.
Google operates on a trunk model, according to this:
The entire TotT series is pretty good.
The Internet.
Computers do a lot of things. But the Internet specifically is the aspect of the computer that revolutionized the world.
Cheating is such a hard problem.
Like, this is what leads to invasive client-side anti-cheat. Which also happens to be one of the main blockers for OS portability.
But if you make it so that the server has to constantly validate the game state, you get terrible lag.
You really have to design your game well to deter cheaters. And you have to empower server moderators to ban cheaters. This sorta implies releasing the servers so that communities can run their own instances, because these studios don’t have the resources to handle moderation themselves.
Relevant xkcd
Yeah, but I want both GPU compute and Wayland for my desktop.
Long term, I expect Vulkan to be the replacement for CUDA. ROCm isn’t going anywhere…
We just need fundamental Vulkan libraries to be developed that can replace the CUDA equivalents.
- cuFFT -> vkFFT (this definitely exists)
- cuBLAS -> vkBLAS (is anyone working on this?)
- cuDNN -> vkDNN (this definitely doesn’t exist)

At that point, adding Vulkan support to XLA (Jax and TensorFlow) or ATen (PyTorch) wouldn’t be that difficult.
Unfortunately, those of us doing scientific compute don’t have a real alternative.
ROCm just isn’t as widely supported as CUDA, and neither is Vulkan for GPGPU use cases.
AMD dropped the ball on GPGPU, and Nvidia is eating their lunch. Linux desktop users be damned.
You have no idea. Python (and Ruby) are used widely in the industry. Large parts of YouTube are written in Python, and large parts of GitHub are written in Ruby. And every major tech company is using Python in their offline data pipelines.
I know of systems critical to the modern web that are written in Python.
Dynamic typing is not a fad.
Python is older than Java, older than me. It is still going strong.
There’s a Wikipedia article on multiple encryption that talks about this, but the arguments are not that compelling to me.
The main argument is about protecting your data from flawed implementations. AES has not been broken theoretically, but a particular implementation may be broken. By stacking implementations from multiple vendors, you reduce the chance of being exposed by a vulnerability in any one of them.
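As a toy sketch of the cascading idea using the openssl CLI (the ciphers, keys, and password-based key derivation here are illustrative, not a recommendation; a real cascade would use independent implementations, not one CLI twice):

```shell
# Layer 1: AES-256-CBC; layer 2: ChaCha20. An attacker must get through
# both layers, so a flaw in one cipher/implementation alone is not enough.
printf 'attack at dawn' > pt.bin
openssl enc -aes-256-cbc -pbkdf2 -pass pass:key-one -in pt.bin     -out layer1.bin
openssl enc -chacha20    -pbkdf2 -pass pass:key-two -in layer1.bin -out ct.bin

# Decryption unwraps the layers in reverse order.
openssl enc -d -chacha20    -pbkdf2 -pass pass:key-two -in ct.bin -out l1.bin
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:key-one -in l1.bin -out out.bin
cmp pt.bin out.bin && echo 'round trip ok'
```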
That’s way overkill for most businesses. That’s like nation state level paranoia.
+1
From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.
For a concrete cap, I’d say 256 tebibytes…