Just wondering since I know a lot of people quietly use a screen-area-select -> tesseract OCR -> clipboard shortcut.
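
For anyone who hasn't wired one up, a minimal sketch of that pipeline, assuming maim (area screenshot), tesseract and xclip are installed; swap in your own screenshot and clipboard tools as needed:

#!/bin/bash
# Select a screen region, OCR it, and put the recognized text on the clipboard.
maim -s /tmp/ocr.png                                # interactive area select
tesseract /tmp/ocr.png /tmp/ocr >/dev/null 2>&1     # writes /tmp/ocr.txt
xclip -selection clipboard < /tmp/ocr.txt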

  • I separate subjects of interest into different Firefox windows, in different workspaces – so I have an extension set each window’s title and a startup script that parses those titles and asks the compositor to put each window in the correct workspace (which lets me restart more conveniently; see the sketch after this list).
  • I have automatically-set, different-orientation wallpapers for my 2-in-1, depending on whether I use it in portrait or landscape (kind of just for looks, but I don’t think anyone else adds a wallpaper change to their screen-rotation keybind).
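
On X11, the window-placement half of the first item could be approximated with wmctrl; this is a rough sketch, not my actual script, and the title substrings and workspace numbers are placeholders (a Wayland compositor would need its own IPC instead):

#!/bin/bash
# Move renamed Firefox windows to their workspaces at login.
# Format: "title substring:workspace number" - both are examples.
RULES=("news:0" "projects:1" "music:2")

for rule in "${RULES[@]}"; do
    title="${rule%%:*}"
    workspace="${rule##*:}"
    wmctrl -r "$title" -t "$workspace"   # move the first matching window
done
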
  • Korhaka@sopuli.xyz · 1 month ago

    Does stuff I wrote myself count?

    An Apache server with a bunch of webpages that are all configured by simple JSON files and loaded by PHP. The pages have buttons on them which, when pressed, send keypresses. So I push “Deploy Landing Gear”, Shift+Alt+F8 or some obscure-as-fuck combination no one would ever use normally gets pressed, and the game can be set to use that keybind. Most of it is for simple immediate key presses, but I made a few full macros as well.

    The HTML/PHP that runs the show is a grand total of 2018 bytes, including comments. Plus a fairly bloated 2444 byte CSS file that includes some button colour options that I never use now because I decided they look ugly. Should update some of the background images though, my sheet steel Faulcon DeLacy logo looks a bit basic.
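
    The comment doesn’t say what does the key injection on the server side, but on X11 a button handler could boil down to something this small (xdotool and the specific combination are assumptions, not the author’s setup):

    #!/bin/bash
    # Hypothetical handler behind a "Deploy Landing Gear" button,
    # assuming an X11 session with xdotool installed.
    xdotool key --clearmodifiers shift+alt+F8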

  • Eyedust@sh.itjust.works · 1 month ago

    Maybe a bit plain since I’m only at a mediocre level in my Linux journey, but I use my favorite fonts for Kitty: Recursive Mono Linear, and then for italics and comments in Neovim I use Recursive Mono Casual Italic.

    Recursive Linear is so tidy and neat, with just the lightest touch of personality. And Casual keeps that style but tweaks it ever so slightly toward a more comic look. They have sans versions of both as well, for everything else.

    I also made my own Starship prompt to match my desktop. It runs an easily reconfigurable color palette and uses color coded chevrons to denote different git statuses.

  • comfy@lemmy.ml · 1 month ago

    While I doubt the concept is unique, the script is: a keyboard shortcut will check the clipboard for a YouTube link and then show launcher options for mpv or yt-dlp, including launch arguments for lower quality format and audio only. It launches that in a terminal for easier handling when yt-dlp doesn’t work properly (much more common if using proxies, but also if a video is age-restricted or deleted).

    So when I see a yt link here, I can just copy it, hit the keyboard shortcut, and then it’s playing in my local video player.

    edit: here’s the script. It assumes xsel (clipboard access), rofi (menu creator), gnome-terminal (terminal) and notify-send (system notification on failure) are installed and working; you’ll need to replace any which don’t match your system. My DE just runs it in bash when the shortcut is entered.

    Code:
    #!/bin/bash
    
    # Menu entries offered by rofi
    ARR=()
    ARR+=("mpv full")
    ARR+=("mpv medium")
    ARR+=("yt-dlp")
    
    # Normalize whatever is on the clipboard into a canonical YouTube watch URL
    NORMAL_URL=$(xsel -ob | sed -r "s/.*(v=|\/)([a-zA-Z0-9_-]{11}).*/https:\/\/youtube.com\/watch?v=\2/")
    
    CHOICE=$(printf '%s\n' "${ARR[@]}" | rofi -dmenu -p "mpv + yt-dlp from clipboard")
    DOWNLOAD="false"
    MPV="false"
    OPTIONS=""
    
    if [ "$CHOICE" = "mpv full" ]; then
        MPV="true"
    fi
    
    if [ "$CHOICE" = "mpv medium" ]; then
        MPV="true"
        OPTIONS+="'--ytdl-format=bv*[height<721]+ba' "
    fi
    
    if [ "$CHOICE" = "yt-dlp" ]; then
        DOWNLOAD="true"
    fi
    
    # Run the chosen command in a terminal; keep the shell open and notify on failure
    if [ "$MPV" = "true" ]; then
        COMMAND="mpv $OPTIONS $NORMAL_URL"
        gnome-terminal --title "$NORMAL_URL" -- bash -c "echo $COMMAND;$COMMAND;if [ \$? -ne 0 ]; then notify-send 'yt-dlp failed' $NORMAL_URL; bash; fi;"
    elif [ "$DOWNLOAD" = "true" ]; then
        COMMAND="yt-dlp $OPTIONS $NORMAL_URL"
        gnome-terminal --title "$NORMAL_URL" -- bash -c "echo $COMMAND;$COMMAND;if [ \$? -ne 0 ]; then notify-send 'yt-dlp failed' $NORMAL_URL; bash; fi;"
    fi
    
  • faercol@lemmy.blahaj.zone · 1 month ago

    I boot into a custom EFI app (instead of systemd-boot or GRUB) to control my dual-boot; it asks a service on my Proxmox server which OS I’m supposed to boot.

    Overkill, but it allows me to control my dual-boot without a keyboard attached to the computer (it’s a Bluetooth keyboard, so I can’t really use it in GRUB anyway).

    • fool@discuss.tchncs.de (OP) · 1 month ago

      A custom EFI app? Is that like a handrolled Unified Kernel Image with some Proxmox-specific addons in it? How’d you make it?

      • faercol@lemmy.blahaj.zone · 1 month ago

        No, it’s an EFI app I developed in Rust that does a query over multicast UDP and uses the result to pick which EFI app to launch next (the Windows bootloader (yeah, I know…) or systemd-boot to start Arch).

        There’s nothing related to Proxmox itself; it’s just where I host the LXC container running the service that responds to the query.

  • tonyn@lemmy.ml · 1 month ago

    When I press Super + PrtSc, a bash script performs the following:

    Takes a screenshot of the entire desktop (import -window root) and saves it as ~/screenshot.png…

    Analyzes the screenshot to calculate the “mean brightness” value of the image. It converts the image to grayscale and determines the average pixel brightness (a value between 0 and 1, where 0 is black and 1 is white).

    Checks if the image is dark by comparing the mean brightness to a threshold of 0.2. If the mean brightness is less than 0.2 (i.e., the image is very dark), it applies a negative filter to the image (convert -negate), effectively inverting the colors (black becomes white and vice versa).

    Sends the image to a printer (lp command) named MF741C-743C for printing.
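
    Strung together, a minimal sketch of those steps might look like this (ImageMagick’s import/convert and CUPS’ lp are assumed; the threshold and printer name are taken from the description above):

    #!/bin/bash
    # Screenshot the whole desktop, invert it if it's mostly dark, then print it.
    import -window root ~/screenshot.png
    
    # Mean brightness of the grayscale image: 0 = black, 1 = white
    MEAN=$(convert ~/screenshot.png -colorspace Gray -format "%[fx:mean]" info:)
    
    # Below the 0.2 threshold, negate the image so it prints legibly
    if (( $(echo "$MEAN < 0.2" | bc -l) )); then
        convert ~/screenshot.png -negate ~/screenshot.png
    fi
    
    lp -d MF741C-743C ~/screenshot.png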

  • nycki@lemmy.world · 1 month ago

    I have Syncthing set up to copy save data between my PC and Steam Deck, but not just for emulator stuff: it’s got my entire modded Minecraft directory and my Balatro modloader in there too.

    • Eyedust@sh.itjust.works · 1 month ago

      Syncthing is great and incredibly easy to use. I have mine set to sync my Obsidian notes so I don’t have to pay for the official service.

      I have tried multiple different open source note apps that offer free local sync, but I can’t find anything I like. It frustrates me because I love open source.

      • Zeoic@lemmy.world · 1 month ago

        Take a look at the Self-Hosted LiveSync plugin for Obsidian. It requires self-hosting a sync server, but it is damn flawless. I have my phone, desktop, laptop, and work laptop all syncing through it. It syncs live too, so you can even see me typing on one device from another.
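
        (For anyone curious: the plugin typically syncs through a CouchDB instance. A minimal way to stand one up with Docker might look like the sketch below; the container name, credentials and port are placeholders, not part of the plugin.)

        #!/bin/bash
        # Stand up a CouchDB container for the plugin to sync against.
        # User, password, and port are placeholders - change them.
        docker run -d --name obsidian-couchdb \
            -e COUCHDB_USER=obsidian \
            -e COUCHDB_PASSWORD=change-me \
            -p 5984:5984 \
            -v couchdb-data:/opt/couchdb/data \
            couchdb:3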

      • aes@programming.dev · 1 month ago

        I use the same setup with Syncthing and Obsidian. The git plugin sometimes gets confused, but nothing I can’t untangle. I also use Syncthing for pictures off my phone, and ebooks onto it.

        Actually, I think I do have a setup that might qualify as unusual: I use the scheduled backup feature of Podcast Addict to get a listing of listened podcast episodes, and then I inject them into my Obsidian notes.

  • golden_zealot@lemmy.ml · 1 month ago

    I am indecisive when it comes to wallpapers, so I have a script somewhere which accepts tag words as arguments, scrapes wallhaven.cc for those words at the resolution of my setup, picks one of the matches at random, downloads it to my wallpapers folder, and sets it as my wallpaper image.

    So, for example, if you just know you want something blue, you run wallpaper blue and it grabs one and sets it. You could get a wallpaper of the sky, of a blue car, of the ocean, whatever happens to be a wallpaper that met the criteria of the word(s) supplied.
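
    Not the author’s script, but a rough sketch of the idea against wallhaven.cc’s public search API (curl, jq, wget and feh are assumed; the resolution and wallpaper directory are placeholders):

    #!/bin/bash
    # Usage: wallpaper blue sky
    QUERY="$*"
    RES="2560x1440"                     # placeholder: your display resolution
    WP_DIR="$HOME/Pictures/wallpapers"  # placeholder: your wallpapers folder
    
    # Ask the search API for one random result matching the tag words
    URL=$(curl -s "https://wallhaven.cc/api/v1/search?q=${QUERY// /+}&resolutions=${RES}&sorting=random" \
        | jq -r '.data[0].path')
    
    if [ -z "$URL" ] || [ "$URL" = "null" ]; then
        echo "No wallpaper found for: $QUERY" >&2
        exit 1
    fi
    
    wget -q "$URL" -P "$WP_DIR"
    feh --bg-fill "$WP_DIR/$(basename "$URL")"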

  • atzanteol@sh.itjust.works · 1 month ago

    I’ve got an RPi running a full-screen ‘kiosk’ view of Home Assistant that turns an external display on/off based on a motion sensor.

    So basically it’s showing current temperatures, thermostat control, etc., but I have the display turn off after X minutes of no movement and turn back on when there has been movement, so it’s only on when you’re in the room.

    • irotsoma@lemmy.blahaj.zone · 1 month ago

      I’d love to see your implementation specs, code, or any other technical details you’d like to share. I’m setting up Home Assistant, and one of the things I want it to do is replace the functions of my thermostat and add some additional features.

      I used to have a Nest Thermostat, but my furnace needed to be replaced a couple of months back and I got a Mitsubishi heat pump. Their thermostat sucks, and it isn’t compatible with Nest because it’s all wireless. I installed the WiFi add-on for the furnace so I can use the app too, but it also sucks pretty badly. Plus I miss the functionality of it turning down the heat when I’m away to save money and turning it back up before I get home.

      So I’m planning to implement my own solution and to document and open-source everything. But it’s going to be several months before I get to it due to other, more urgent projects. So I’m looking at everything available. I will definitely be setting up a small display to replace the thermostat, with motion detectors that turn on the display when you approach it to see the temperature and such, and that supplement the home/away detection.

      Anyway, I would love to see your implementation to see how you did this piece of it.

      • atzanteol@sh.itjust.works · 1 month ago

        It’s really quite simple - but works pretty well. There are 3 components:

        Kiosk service

        A simple systemd service that starts a kiosk script.

        [Unit]
        Description=Kiosk
        Wants=graphical.target
        After=graphical.target
        
        [Service]
        Environment=DISPLAY=:0.0
        Environment=XAUTHORITY=/home/pi/.Xauthority
        Type=simple
        ExecStart=/bin/bash /home/pi/kiosk.sh
        Restart=on-abort
        User=pi
        Group=pi
        
        [Install]
        WantedBy=graphical.target
        

        Kiosk script

        The script in /home/pi/kiosk.sh just starts a web browser in full-screen mode pointed at my home assistant instance:

        #!/bin/bash
        
        xset s noblank
        xset s off
        xset -dpms
        
        export DISPLAY=:0.0 
        
        echo 0 > /sys/class/backlight/rpi_backlight/bl_power
        
        LANDING_PAGE="https://homeassistant.example.com/"
        
        unclutter -idle 0.5 -root &
        
        /usr/bin/chromium-browser --noerrdialogs --disable-infobars --kiosk $LANDING_PAGE
        
        

        Display service

        I have a very simple Python/Flask service that runs and exposes an endpoint that lets you turn the display on or off. It’s called by a Home Assistant automation whenever the motion detector senses (or stops sensing) movement.

        Here’s the python - I have this started from another “kiosk.service” systemd service as well.

        #!/usr/bin/env python3
        from flask import Flask
        from flask_restful import Api, Resource
        
        # RPi touchscreen backlight power switch: write 0 to turn the panel on, 1 to turn it off
        backlight_dev = '/sys/class/backlight/rpi_backlight/bl_power'
        
        
        def turn_off_display():
            with open(backlight_dev, 'w') as dev:
                dev.write("1")
        
        
        def turn_on_display():
            with open(backlight_dev, 'w') as dev:
                dev.write("0")
        
        
        class DisplayController(Resource):
            def get(self, state):
                if state == 'off':
                    turn_off_display()
                elif state == 'on':
                    turn_on_display()
                else:
                    return {'message': f'Unknown state {state} - should be off/on'}, 500
                return {"message": "Success"}
        
        
        def init():
            turn_on_display()
        
        
        if __name__ == "__main__":
            init()
            app = Flask(__name__)
            api = Api(app)
            api.add_resource(DisplayController, '/display/<string:state>')
            app.run(debug=False, host='0.0.0.0', port=3000)
        

        You can then have the HA rest action call this with “http://pidisplay:3000/display/on” or off.
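
        A quick way to sanity-check the service from any machine on the network before wiring up the automation (hostname and port as above):

        # Turn the kiosk display off, then back on
        curl http://pidisplay:3000/display/off
        curl http://pidisplay:3000/display/on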

  • mlg@lemmy.world · 1 month ago

    I’m using XFCE with Compiz, and since I have two monitors I have a 3D octagon instead of a 3D cube desktop.

  • Rimu@piefed.social · 1 month ago

    I have an old gamer keyboard with extra programmable keys on the side, which I use for cut, copy, paste, close tab, close window, etc. Logitech provides drivers/software for Windows & Mac only.

    To make it work I have a custom monkey-patched USB driver that I compiled from source, some weird daemon that interacts with the driver, and some shell scripts on top of that. I’m not sure how, but it works, thanks to a nine-year-old YouTube video made by a guy from somewhere in Eastern Europe.