Arch already just works; Manjaro breaks more (at least for the one month I tried it while getting into Linux).
Reading a filesystem isn't about which distribution you have, but about having the driver for whatever filesystem is on the disk. If the drive is FAT the defaults should work; for NTFS you might have to install the NTFS driver. I don't use Mint, but it's the Linux way, so either it's already there or you can install it. Once you have the driver, just mount it like a normal drive and you're done.
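If it helps, the manual version is roughly this (assuming the partition shows up as /dev/sdb1; yours will be different, check lsblk):
# see which device name the drive got and what filesystem is on it
lsblk -f
# mount it somewhere; the kernel picks the matching filesystem driver
sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb
# for NTFS on Mint/Debian the driver package is ntfs-3g:
# sudo apt install ntfs-3g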
For the OCR, have you tried tesseract? For printed documents it can take an image as input and generate a PDF with selectable text. I don't OCR much, but it has been useful the few times I've tried.
You might be able to have a script that feeds the scanner output into tesseract and outputs a PDF. It only works on a single image per run, so I had to make a script that handles a whole PDF by splitting it into pages and stitching them back together.
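Roughly it was something like this (a sketch from memory; the filenames are made up, and it needs poppler-utils for pdftoppm/pdfunite plus tesseract):
#!/usr/bin/env bash
# split the scanned PDF into one PNG per page (page-01.png, page-02.png, ...)
pdftoppm -png -r 300 scan.pdf page
# OCR each page into its own PDF with a selectable text layer
for img in page-*.png; do
  tesseract "$img" "${img%.png}" pdf
done
# stitch the per-page PDFs back into one document
pdfunite page-*.pdf scan-ocr.pdf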
That sounds like too much work, and what if there are clouds? We should make devices that can be calibrated while you're in a certain location, based on that location's standards, and then keep showing that location's shadow position even without the sun.
Someone already talked about the XY problem, so I’ll say this.
Why a sound notification instead of the notification content? If your notification program (dunst in my case) supports pattern matching or calling scripts based on patterns, and the script has access to the app name, notification title, contents, etc., then it's just a matter of calling whatever you want from your bash script.
And any time you want to add that functionality to something else, you add one more line with a different pattern or another condition in your script. Comparing text is a lot more reliable than audio.
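To sketch what I mean (the rule name, pattern and script path are made up; check the dunst man page for the exact keys), a rule in dunstrc can look something like this:
[build_finished]
summary = "*Build finished*"
script = ~/bin/on-build-finished.sh
If I remember right, dunst runs that script with the app name, summary, body, icon and urgency as arguments, so the script itself can decide what to play or do.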
Of course, your use case could be completely different, so maybe give some example use cases so people can suggest different ways to solve them instead of just the one you're thinking of.
Yeah sure, I’ll compile it in my OS. For any other OS, either I’m not knowledgeable about the tools available, and many of them that I am not going to spend money to acquire. If providing the binary a developer compiles for themselves would solve it, we’d not have that problem at all.
I specifically hate when programs or libraries are only available in compiled form, and then I get an error message mentioning some absolute path with usernames I've never seen before, and no way to fix it because there's no code. Turns out when people pass compiled versions to an OS they don't use themselves, they never hit those errors and assume it works fine.
I was thinking the same; smartphones definitely do everything they can to make images look sharp, so they're probably not going to stack easily.
Still, it feels like there should be something to combine the pictures into a better drawing, given that there is software to generate 3D models from smartphone pictures.
Did you solve it? Recently there was a graphics problem and downgrading mesa from 1.24 to 1.23 helped me. It was on Arch with AMD graphics, but some people said Nvidia had the problem too.
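In case it's useful, downgrading on Arch is basically pulling the older package back out of the pacman cache (the filename below is just a placeholder, use whatever older build you still have):
# list older builds still sitting in the local package cache
ls /var/cache/pacman/pkg/ | grep '^mesa-'
# install the one that worked (placeholder filename)
sudo pacman -U /var/cache/pacman/pkg/mesa-<older-version>-x86_64.pkg.tar.zst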
Lol, that’s a fun angle. They don’t need all those fields coz they just get your information the other way
Similar. But I do contribute by adding things I want to some projects I use if it’s simple enough.
And my pile of shit has like 40 stars, so maybe I have one or two other users besides me.
And how do I find a job like that?
Hi there, I did say it's easily doable, but I didn't have a script because I run things on the image manually before OCR (like the dark-mode negation I tried in this script; when doing it manually it's just one command, since I know myself whether it's dark mode or not; same for the threshold).
But here’s a one I made for you:
#!/usr/bin/env bash
# imagemagick has a cute little command for importing the screen into a file
import -colorspace gray /tmp/screenshot.png
mogrify -color-threshold "100-200" /tmp/screenshot.png
# extra magic to invert if the average pixel is dark:
# fx:mean is the average brightness normalized to 0..1, scaled here to 0..100
brightness=$(convert /tmp/screenshot.png -format '%[fx:int(mean*100)]' info:)
if (( brightness < 50 )); then
  mogrify -negate /tmp/screenshot.png
fi
# now run the OCR
text=$(tesseract /tmp/screenshot.png -)
printf '%s\n' "$text" | xclip -selection c
notify-send OCR-Screen "$text"
So the middle part is there to accommodate images in dark mode: it negates the image based on a threshold that you can change. Without that, you can just use import for the screen capture and tesseract for the OCR, and optionally pipe the result to xclip for the clipboard or notify-send for a notification.
In my use case, I have a keybind that takes a screenshot like this: import png:- | xclip -selection c -t image/png, which gives me a cursor to select part of the screen and copies it to the clipboard. I can save that as an image (through another bash script) or paste it directly into messenger applications. And when I need OCR, I just run tesseract in the terminal and copy the text from there.
Not for handwritten text, but for printed fonts, OCR is as easy as drawing a box on the screen with current technology. So I don't think we need AI things for that.
Personally I use tesseract. I have a simple bash script that, when run, lets me select a rectangle on the screen, saves that image to a temp folder, runs OCR on it, and copies the text to the clipboard. Done.
Edit: for extra flavor, you can also use notify-send to show that text in a notification so you know what the OCR produced without having to paste it.
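Roughly the whole thing is just (the temp path is arbitrary):
#!/usr/bin/env bash
# draw a rectangle on screen and save it to a temp file
import /tmp/ocr-shot.png
# OCR it and put the text on the clipboard
tesseract /tmp/ocr-shot.png - | xclip -selection c
# optional: show the result in a notification
notify-send OCR "$(xclip -selection c -o)"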
About the malware thing: won't increasing Linux use in organizations give attackers an incentive to write malware targeting Linux? It's not like we're malware-free; it's just that the average user is informed enough, and desktop Linux use is low enough, that it isn't worth targeting much.
Not much documentation. I tried to use it, but it was really hard to figure out anything.