I like Ardour. Unfa on YouTube made a great tutorial on how to use it.
To be fair there is so much JPEG compression on the image, you can’t see much of anything.
It isn’t misusing metric; it simply isn’t metric at all.
single master text file
Sounds like something you are using to manage your packages to me…
IANAL but it looks like they are violating Apache 2, as they are supposed to retain the license and mark any changes.
I wonder how this interacts with tiling window managers…
Try installing nvidia-dkms. It is better integrated into the kernel, so you may have better luck with it. Also make sure to read the Xorg page on the Arch Wiki if you are going to stick with Arch.
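On Arch the install is just a couple of commands (a sketch, assuming pacman and the stock linux kernel; swap in the matching headers package if you run linux-lts or similar):

```shell
# nvidia-dkms rebuilds the NVIDIA module against whatever kernel you boot,
# which is why it needs the kernel headers installed alongside it.
sudo pacman -S nvidia-dkms linux-headers
```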
Sure. If you are using an NVIDIA Optimus laptop, you should also prepend __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia to the last line when running in hybrid mode, so mpv runs on the dGPU. You should have a file at ~/.wallpaperrc that contains wallpaper_playlist: /path/to/mpv/playlist. You may want to add this script to your startup sequence via your WM/DE.
#!/bin/sh
# Read the playlist path from ~/.wallpaperrc, skipping comment lines.
WALLPAPER_PLAYLIST=$(grep -v '^[[:space:]]*#' ~/.wallpaperrc | sed -n 's/^wallpaper_playlist: //p')
# xwinwrap substitutes WID with the ID of the window it creates.
xwinwrap -g 1920x1080 -ov -- mpv -wid WID --no-osc --no-audio --loop-playlist --shuffle --playlist="$WALLPAPER_PLAYLIST"
Hope this helps!
I set mpv as the root window which worked well. I stopped using it a while back, but if you are interested, I could dig up the simple script for you (literally one or two lines iirc).
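The gist of it was something like this (a sketch from memory, not the original script; assumes X11 with xwininfo installed, and that the root window ID is the fourth field of its "Window id" line):

```shell
#!/bin/sh
# Sketch: draw a looping video directly on the X root window with mpv.
# xwininfo -root prints a line like:
#   xwininfo: Window id: 0x4ac (the root window) ...
ROOT_WID=$(xwininfo -root | awk '/Window id/ {print $4}')
mpv --wid="$ROOT_WID" --no-audio --loop-playlist /path/to/video.mp4
```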
Wow, CUPS is way better than I previously thought and I thought it was amazing!
If you have ever seen a police interrogation, you may notice the detectives ask a question and then, after either no answer or an insufficient answer, they will just look at the suspect expectantly. This is done to put psychological pressure on the suspect to answer the question. Given this, I would say so, at least in a face-to-face situation.
Online, I am not so sure. How many posts did you scroll past in the last week on Lemmy that ask a question you did not answer? How many did you answer? Even if you answered most, you would be in the minority; if most people did, we would expect far higher engagement rates on posts.
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800XT using the medium VRAM option. But that is the price you pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.
You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
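If you want to try it, launching it looks roughly like this (flags from memory and may differ between versions; the model name is just a placeholder, and models can also be downloaded from within the UI):

```shell
# Hypothetical setup for text-generation-webui (oobabooga)
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
# --listen exposes the UI on the LAN; --model picks a model from models/
python server.py --listen --model gpt2
```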
Pffsh, that’s baby mode. I use butterflies, releasing them at just the right time so the air currents shift just enough that a cosmic ray passes through the atmosphere and flips the bit I want. It is a bit trickier with error-correcting memory…
Oh no! Anyway…
Why did you mention it?! Now we can never unsee it!
Block ads; problem solved.
I am aware what fine tuning is. It is available from the train tab while the base checkpoint is loaded in both cases.
Same, I thought it was used commonly too.