Keira Treial (researcx) [0xA2FB6A07]

Things that I've done:

blog

A script that parses Markdown files into blog posts, intended for use with Obsidian notes.
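The front-matter step could be sketched like this (a minimal sketch, not the actual script; it assumes Obsidian-style `---`-delimited properties, and the field names are illustrative):

```python
import re

def parse_note(text):
    # split an Obsidian-style note into front matter (the block between
    # the leading '---' lines) and the markdown body
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.S)
    meta = {}
    body = text
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        body = m.group(2)
    return meta, body

meta, body = parse_note("---\ntitle: Hello\ntags: blog\n---\n# Post\n")
```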

shell config generator

A JavaScript-based shell config generator.

xPoster

Front-end for the do_social_media_post.py crossposter.

do_social_media_post.py

Social media crossposter.

import json, shlex, sys
from subprocess import Popen

# usage: do_social_media_post.py <caption or image path> <site1,site2,...> [image caption]

caption = sys.argv[1]
image_caption = ""
if len(sys.argv) >= 4:
  image_caption = sys.argv[3]
social_media = sys.argv[2].split(',')

# treat the first argument as an image post when it looks like an image path
is_image = caption.lower().endswith(('.jpg', '.jpeg', '.png', '.gif'))

for site in social_media:
  if not is_image:
      if site != "instagram":
          command = "/opt/homebrew/opt/python@3.9/libexec/bin/python3 /Users/sysadmin/Documents/social_media_bots/"+site+"/posters/"+site+"_text.py " + json.dumps(caption)
          print(command)
          proc = Popen(shlex.split(command))
          proc.communicate()
      else:
          print("instagram does not support text posting")
  else:
      command = "/opt/homebrew/opt/python@3.9/libexec/bin/python3 /Users/sysadmin/Documents/social_media_bots/"+site+"/posters/"+site+"_image.py " + json.dumps(caption) + " " + json.dumps(image_caption)
      print(command)
      proc = Popen(shlex.split(command))
      proc.communicate()

researchdepartment/dotfiles-macos-yabai

macOS 14.4 configuration

proxmox-backup.sh

A script to back up Proxmox containers and host configs, with the option to automatically store them in LUKS-encrypted images. Also available for LXD (lxd-backup.sh).

#!/bin/bash -l
# set -x

# make sure you have zip installed
# running: /path/to/proxmox-backup.sh [0/1]
# use 0 for debugging, 1 to run
# gist.github.com/researcx

# configuration
hostname=$(hostname)
backups_folder="/Backups/Pi/$hostname" # needs to exist

path="/tmp/backup-$hostname/"
path_host=$path"host/" # main system (host) files
path_containers="/var/lib/vz/dump" # proxmox container backup storage

mail=1 # email a log and important server information (disk space, etc)
mailto="root"

make_encrypted=0 # luks encrypt the host files and lxd container images
encryption_passphrase="passphrase" # passphrase for luks encrypted container
path_crypt="luks/"
crypt_ext=".crypt"

days=7 # delete backups older than x days
run=$1 # whether to actually run commands (set to 0 for debugging)
wait=120 # number of seconds to wait between backup commands, helps calm server load

timestamp=$(date +%Y%m%d_%H%M%S)

# host files (this is kind of a catch-all) (good to get via history | grep "nano /etc"), space separated
host_files=("/root/.bashrc" "/root/.ssh" "/root/.bash_profile" "/root/.bash_history" "/root/.tmux.conf" "/root/.local/share/fish" "/root/Scripts" "/etc/wireguard" "/etc/logrotate.d" "/etc/profile" "/etc/netdata" "/etc/fish" "/etc/fail2ban" "/etc/ssh" "/etc/sysctl.conf" "/etc/cron.d" "/etc/cron.daily" "/etc/cron.weekly" "/etc/cron.hourly" "/etc/cron.deny" "/etc/crontab" "/var/spool/cron" "/etc/sysconfig" "/etc/fstab" "/etc/crypttab" "/etc/postfix" "/etc/hosts" "/etc/resolv.conf" "/etc/aliases" "/etc/rsyslog.d" "/etc/ufw" "/etc/pam.d" "/etc/netplan" "/etc/wpa_supplicant" "/etc/network" "/etc/networks" "/etc/apt" "/etc/apticron" "/etc/yum.repos.d" "/etc/iptables.rules" "/etc/ip6tables.rules" "/etc/iptables" "/etc/modprobe.d" "/etc/pve" "/etc/udev" "/etc/modules-load.d" "/etc/systemd" "/etc/update-motd.d" "/etc/lightdm" "/etc/groups" "/etc/passwd" "/etc/nsswitch.conf" "/etc/netatalk" "/etc/samba" "/etc/avahi" "/etc/default" "/etc/nanorc" "/etc/X11" "/etc/netconfig")

# proxmox containers, numbers, space separated
core=("100" "101" "102" "103") # dnscrypt, nginx, ldap, ircd

# log
log_file=backup-$hostname-$timestamp.log
log=$path$log_file

# make the directories
rm -r $path
rm -r $path_containers
mkdir -p $path
mkdir -p $path_host
mkdir -p $path_containers
mkdir -p $backups_folder

if [[ "$make_encrypted" == 1 ]]; then
  mkdir -p $path$path_crypt
fi

# functions
convertsecs() {
  ((h=${1}/3600))
  ((m=(${1}%3600)/60))
  ((s=${1}%60))
  printf "%02d:%02d:%02d\n" $h $m $s
}
proxmox_backup() {
  container=$1
  vzdump $container
  pct unlock $container
}
make_encrypted_container(){
  name=$1
  file=$2
  mountpoint="$2/enc/"
  size=$(du -s $file | awk '{print $1}')

  if [ "$size" -lt "65536" ]; then # cryptsetup: luks images need to be +32mb in order to be able to be formatted/opened
      size=65536
  else
      size="$(($size + 65536))" #just being safe! hopefully
  fi

  crypt_filename=$hostname-$name-$timestamp$crypt_ext
  crypt_mapper=$hostname-$name-$timestamp
  crypt_devmapper="/dev/mapper/$crypt_mapper"


  fallocate -l "$size"KB $path$path_crypt$crypt_filename

  printf '%s' "$encryption_passphrase" | cryptsetup luksFormat $path$path_crypt$crypt_filename -
  printf '%s' "$encryption_passphrase" | cryptsetup luksOpen $path$path_crypt$crypt_filename $crypt_mapper

  mkfs -t ext4 $crypt_devmapper
  mkdir -p $mountpoint
  mount $crypt_devmapper $mountpoint
}

unmount_encrypted_container(){
  name=$1
  mountpoint="$2/enc/"
  crypt_mapper=$hostname-$name-$timestamp
  crypt_devmapper="/dev/mapper/$crypt_mapper"
  umount $mountpoint
  cryptsetup luksClose $crypt_mapper
}

# clean up old backups
if [[ "$run" == 1 ]]; then
  if [[ "$make_encrypted" == 1 ]]; then
      find $backups_folder -maxdepth 1 -name "*$crypt_ext"  -type f -mtime +$days  -print -delete >> $log
  fi
  find $backups_folder -maxdepth 1 -name "*.log"  -type f -mtime +$days  -print -delete >> $log
  find $backups_folder -maxdepth 1 -name "*.tar"  -type f -mtime +$days  -print -delete >> $log
  find $backups_folder -maxdepth 1 -name "*.zip"  -type f -mtime +$days  -print -delete >> $log
fi
# start main code
START_TIME=$(date +%s)
echo "Backup:: Script start -- $timestamp" >> $log
echo "Backup:: Host: $hostname -- Date: $timestamp" >> $log
echo "Paths:: Host: $path" >> $log
echo "Paths:: Containers: $path_containers" >> $log
echo "Paths:: Backups: $backups_folder" >> $log


# host files
echo "Backup:: Backing up the following host files to $path_host" >> $log
# echo $host_files >> $log
for host_file in ${host_files[@]}; do
  echo "Backup:: Starting backup of $host_file to $path_host" >> $log
  host_file_safe=$(echo $host_file | sed 's|/|-|g')
  if [[ "$run" == 1 ]]; then
      zip -r $path_host$host_file_safe-$timestamp.zip "$host_file" >> $log
      
  fi
done
echo "Backup:: Host files successfully backed up" >> $log
if [[ "$run" == 1 ]]; then
  if [[ "$make_encrypted" == 1 ]]; then
      echo "Backup:: Making an encrypted container for host files" >> $log
      make_encrypted_container "host" $path_host
      echo "Backup:: Moving files to encrypted container" >> $log
      mv $path_host/*.zip "$path_host/enc/"
      echo "Backup:: Unmounting encrypted container" >> $log
      unmount_encrypted_container "host" $path_host
      rm -rf $path_host/*
      echo "Backup:: Successfully encrypted host backup" >> $log
  fi
fi

# containers
echo "Backup:: Backing up containers" >> $log
for container in ${core[@]}; do
  echo "Backup:: Starting backup on $container to $path_containers" >> $log
  if [[ "$run" == 1 ]]; then
      proxmox_backup $container >> $log
      sleep $wait
  fi
done
if [[ "$run" == 1 ]]; then
  if [[ "$make_encrypted" == 1 ]]; then
      echo "Backup:: Making an encrypted container for containers" >> $log
      make_encrypted_container "containers" $path_containers
      echo "Backup:: Moving files to encrypted container" >> $log
      mv $path_containers/*.tar.gz "$path_containers/enc/"
      echo "Backup:: Unmounting encrypted container" >> $log
      unmount_encrypted_container "containers" $path_containers
      rm -rf $path_containers/*
      echo "Backup:: Successfully encrypted core container backup" >> $log
  fi
  sleep $wait
fi
rsync -a --progress $log $backups_folder  >> $log
if [[ "$make_encrypted" == 1 ]]; then
  rsync -a --progress $path$path_crypt $backups_folder  >> $log
else
  rsync -a --progress $path_host $backups_folder  >> $log
  rsync -a --progress $path_containers/ $backups_folder  >> $log
fi
END_TIME=$(date +%s)
# end main code

elapsed_time=$(( $END_TIME - $START_TIME ))
echo "Backup :: Script End -- $(date +%Y%m%d_%H%M)" >> $log
echo "Elapsed Time ::  $(convertsecs $elapsed_time) "  >> $log

backup_size=`find $path -maxdepth 5 -type f -mmin -360 -exec du -ch {} + | grep total$ | awk '{print $1}'`
backup_stored=`find $path -maxdepth 5 -type f -exec du -ch {} + | grep total$ | awk '{print $1}'`
disk_remaining=`df -Ph $backups_folder | tail -1 | awk '{print $4}'`

echo -e "Subject: [$hostname] Backup Finished [$backup_size] [stored: $backup_stored | disk remaining: $disk_remaining] (took $(convertsecs $elapsed_time))\n\n$(cat $log)" > $log

if [[ "$mail" == 1 ]]; then
  sendmail -v $mailto < $log
fi

sleep $wait
rm -r $path
rm -r $path_containers

GitLab Snippet · GitHub Gist

lxd-backup.sh

#!/bin/bash
#set -x
# configuration
hostname=$(hostname)
lxc="/snap/bin/lxc"
path="/mount/backups/$hostname/"
path_host=$path"host/"
path_lxd=$path"lxd/"
path_lxd_core=$path"lxd/core/"
mail=1 # email a log and important server information (disk space, etc)
mailto="root"
make_encrypted=1 # luks encrypt the host files and lxd container images
encryption_passphrase="passphrase" # passphrase for luks encrypted container
path_crypt="luks/"
crypt_ext=".encrypted"
days=7 # delete backups older than x days
run=1 # whether to actually run commands (set to 0 for debugging)
wait=15 # amount of time to wait between running backup commands, helps calm server load
timestamp=$(date +%Y%m%d_%H%M%S)
# make the directories
/bin/mkdir -p $path
/bin/mkdir -p $path_host
/bin/mkdir -p $path_lxd
/bin/mkdir -p $path_lxd_core
if [[ "$make_encrypted" == 1 ]]; then
  /bin/mkdir -p $path$path_crypt
fi
# functions
convertsecs() {
((h=${1}/3600))
((m=(${1}%3600)/60))
((s=${1}%60))
/usr/bin/printf "%02d:%02d:%02d\n" $h $m $s
}
lxdbackup() {
  container=$1
  folder=$2
  snapshotname=$container-$timestamp.snapshot
  backupname=lxd-image-$container-$timestamp
  $lxc snapshot $container $snapshotname
  $lxc publish --force $container/$snapshotname --alias $backupname
  $lxc image export $backupname $folder$backupname
  $lxc delete $container/$snapshotname
  $lxc image delete $backupname
}
# use encrypted folders?
make_encrypted_container(){
  name=$1
  file=$2
  mountpoint="$2/enc/"
  size=$(du -s $file | awk '{print $1}')
  if [ "$size" -lt "65536" ]; then # cryptsetup: luks images need to be +32mb in order to be able to be formatted/opened
      size=65536
  else
      size="$(($size + 65536))" #just being safe! hopefully
  fi
  crypt_filename=$hostname-$name-$timestamp$crypt_ext
  crypt_mapper=$hostname-$name-$timestamp
  crypt_devmapper="/dev/mapper/$crypt_mapper"
  /usr/bin/fallocate -l "$size"KB $path$path_crypt$crypt_filename
  /usr/bin/printf '%s' "$encryption_passphrase" | /sbin/cryptsetup luksFormat $path$path_crypt$crypt_filename -
  /usr/bin/printf '%s' "$encryption_passphrase" | /sbin/cryptsetup luksOpen $path$path_crypt$crypt_filename $crypt_mapper
  /sbin/mkfs -t ext4 $crypt_devmapper
  /bin/mkdir -p $mountpoint
  /bin/mount $crypt_devmapper $mountpoint
}
unmount_encrypted_container(){
  name=$1
  mountpoint="$2/enc/"
  crypt_mapper=$hostname-$name-$timestamp
  crypt_devmapper="/dev/mapper/$crypt_mapper"
  /bin/umount $mountpoint
  /sbin/cryptsetup luksClose $crypt_mapper
}
# host files
host_files=("/root/.bashrc" "/root/.bash_profile" "/root/.bash_history" "/root/.tmux.conf" "/root/.local/share/fish" "/root/scripts" "/etc/wireguard" "/etc/logrotate.d" "/etc/profile" "/etc/netdata" "/etc/fish" "/etc/fail2ban" "/etc/ssh" "/etc/sysctl.conf" "/etc/cron.d" "/etc/cron.daily" "/etc/cron.weekly" "/etc/cron.hourly" "/etc/cron.deny" "/etc/crontab" "/var/spool/cron" "/etc/sysconfig" "/etc/fstab" "/etc/crypttab" "/etc/postfix" "/etc/hosts" "/etc/resolv.conf" "/etc/aliases" "/etc/rsyslog.d" "/etc/ufw" "/etc/pam.d" "/etc/netplan" "/etc/wpa_supplicant" "/etc/network" "/etc/networks" "/etc/apt" "/etc/apticron" "/etc/yum.repos.d")
# lxd containers (names, space separated)
core=("nginx" "mariadb" "mail")
# log
logname=backup-$hostname-$timestamp.log
log=$path$logname
# clean up old backups
if [[ "$make_encrypted" == 1 ]]; then
  find $path$path_crypt -maxdepth 1 -name "*$crypt_ext"  -type f -mtime +$days  -print -delete >> $log
fi
find $path_host -maxdepth 1 -name "*.zip"  -type f -mtime +$days  -print -delete >> $log
find $path_host -maxdepth 1 -name "*.log"  -type f -mtime +$days  -print -delete >> $log
find $path_lxd_core -maxdepth 1 -name "*.tar.gz"  -type f -mtime +$days  -print -delete >> $log
# start main code
START_TIME=$(date +%s)
echo "Backup:: Script start -- $timestamp" >> $log
echo "Backup:: Host: $hostname -- Date: $timestamp" >> $log
# host files
echo "Backup:: Backing up the following host files to $path_host" >> $log
echo "${host_files[@]}" >> $log
for host_file in ${host_files[@]}; do
  echo "Backup:: Starting backup of $host_file to $path_host" >> $log
  host_file_safe=$(echo $host_file | sed 's|/|-|g')
  if [[ "$run" == 1 ]]; then
      zip -r $path_host$host_file_safe-$timestamp.zip "$host_file" >> $log
      
  fi
done
echo "Backup:: Host files successfully backed up" >> $log
if [[ "$run" == 1 ]]; then
  if [[ "$make_encrypted" == 1 ]]; then
      echo "Backup:: Making an encrypted container for host files" >> $log
      make_encrypted_container "host" $path_host
      echo "Backup:: Moving files to encrypted container" >> $log
      /bin/mv $path_host/*.zip "$path_host/enc/"
      echo "Backup:: Unmounting encrypted container" >> $log
      unmount_encrypted_container "host" $path_host
      /bin/rm -rf $path_host
      echo "Backup:: Successfully encrypted host backup" >> $log
  fi
fi
# containers
echo "Backup:: Backing up containers" >> $log
for container in ${core[@]}; do
  echo "Backup:: Starting backup on $container to $path_lxd_core" >> $log
  if [[ "$run" == 1 ]]; then
      lxdbackup $container $path_lxd_core >> $log
      /bin/sleep $wait
  fi
done
if [[ "$run" == 1 ]]; then
  if [[ "$make_encrypted" == 1 ]]; then
      echo "Backup:: Making an encrypted container for core containers" >> $log
      make_encrypted_container "core" $path_lxd_core
      echo "Backup:: Moving files to encrypted container" >> $log
      /bin/mv $path_lxd_core/*.tar.gz "$path_lxd_core/enc/"
      echo "Backup:: Unmounting encrypted container" >> $log
      unmount_encrypted_container "core" $path_lxd_core
      /bin/rm -rf $path_lxd_core
      echo "Backup:: Successfully encrypted core container backup" >> $log
  fi
  /bin/sleep $wait
fi
END_TIME=$(date +%s)
# end main code
elapsed_time=$(( $END_TIME - $START_TIME ))
echo "Backup :: Script End -- $(date +%Y%m%d_%H%M)" >> $log
echo "Elapsed Time ::  $(convertsecs $elapsed_time) "  >> $log
backup_size=`find $path -maxdepth 5 -type f -mmin -360 -exec du -ch {} + | grep total$ | awk '{print $1}'`
backup_stored=`find $path -maxdepth 5 -type f -exec du -ch {} + | grep total$ | awk '{print $1}'`
disk_remaining=`df -Ph $path | tail -1 | awk '{print $4}'`
echo -e "Subject: [$hostname] Backup Finished [$backup_size] [stored: $backup_stored | disk remaining: $disk_remaining] (took $(convertsecs $elapsed_time))\n\n$(cat $log)" > $log
if [[ "$mail" == 1 ]]; then
  /usr/sbin/sendmail -v $mailto < $log
fi

GitLab Snippet · GitHub Gist

githubrepos.py

Periodically generates an HTML page of your GitHub repos (requires the GitHub CLI, i.e. the gh command).

import json, humanize, os, html
from datetime import datetime, timedelta

file = '/home/keira/Repositories/Public/github.html'
github_json_file = '/home/keira/Resources/github.json'

# cron every 30 minutes:
# */30 * * * * /usr/bin/gh repo list researcx -L 400 --visibility public --json name,owner,pushedAt,isFork,diskUsage,description > /home/keira/Resources/github.json
# */31 * * * * /usr/bin/python3 /home/keira/Scripts/githubrepos.py

# demo:
# https://xch.fairuse.org/~devk/list/static/files/Repositories/github.html

def human_size(filesize):
  return humanize.naturalsize(filesize)

def recent_date(unixtime):
  dt = datetime.fromtimestamp(unixtime)
  today = datetime.now()
  today_start = datetime(today.year, today.month, today.day)
  yesterday_start = datetime.now() - timedelta(days=1)

  def day_in_this_week(date):
      startday = datetime.now() - timedelta(days=today.weekday())
      if(date >= startday):
          return True
      else:
          return False

  timeformat = '%b %d, %Y'
  if day_in_this_week(dt):
      timeformat = '%A at %H:%M'
  if(dt >= yesterday_start):
      timeformat = 'Yesterday at %H:%M'
  if(dt >= today_start):
      timeformat = 'Today at %H:%M'

  return(dt.strftime(timeformat))

def shorten_text(s, n):
  if len(s) <= n:
      # string is already short enough
      return s
  # second half: half of the target size, minus the 3 dots
  n_2 = n // 2 - 3
  # first half: whatever's left
  n_1 = n - n_2 - 3
  return '{0}...{1}'.format(s[:n_1], s[-n_2:])


# the gh command used to produce github.json (see the cron entry above)
cmd = 'gh repo list researcx -L 400 --visibility public --json name,owner,pushedAt,isFork,diskUsage,description > github.json'
repo_list = ""
last_updated = ""
i = 0

with open(github_json_file, 'r') as gh_repos:
  repos = json.load(gh_repos)
#    print(gh_repos)

for repo in repos:
  i = 1 if i == 0 else 0
  #print(repo)
  iso_date = datetime.strptime(repo['pushedAt'], '%Y-%m-%dT%H:%M:%SZ')
  timestamp = int((iso_date - datetime(1970, 1, 1)).total_seconds())
  date_time = recent_date(timestamp)
  str_repo = str(repo['name'])
  path_repo = str(repo['owner']['login']) + "/" + html.escape(str(repo['name']))
  str_desc = "<br/><span class='description' title='" + html.escape(str(repo['description'])) + "'>" + shorten_text(html.escape(str(repo['description'])), 150) + "</span>"
  link_repo = '<a href="https://github.com/'+path_repo+'" class="user" target="_blank">'+str(repo['owner']['login']) + "/"+'</a><a href="https://github.com/'+path_repo+'" target="_blank">'+str_repo+'</a>'
  disk_usage = " " + human_size(repo['diskUsage'] * 1024) + " &nbsp; "
  is_fork = "<span title='Fork'>[F]</span> &nbsp; " if repo['isFork'] == True else ""
  repo_list += """
  <div class="list v"""+str(i)+""""><span style="float:right;">""" + is_fork + disk_usage + str(date_time) + """</span>""" + link_repo + str_desc + """</div>"""
  if last_updated == "":
      last_updated = timestamp

# note: rebinding the name "html" shadows the imported html module,
# which is safe only because all html.escape() calls happened above
html = """
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>repositories</title>
  <style>
      body, html {
          padding: 0px;
          margin: 0px;
          color: #dc7224;
          font-family: JetBrainsMono, Consolas, "Helvetica Neue", Helvetica, Arial, sans-serif
      }
      a {
          color: #dc7224 !important;
          text-decoration: none;
      }
      a:hover {
          color: #efefef !important;
          text-decoration: underline;
      }
      a.user {
          color: #aa7735 !important;
          text-decoration: none;
      }
      span {
          font-size: 14pt;
          margin-top: 2px;
          color: #555;
      }
      .list {
          font-size: 16pt;
          padding: 4px 8px;
          border-bottom: 1px dashed #101010;
          background: #000;
      }
      .list.v0 {
          background: #0f0f0f;
      }
      .list.v1 {
          background: #121212;
      }
      .description{
          font-size: 10pt;
          color: #555;
      }
  </style>
</head>
<body>"""+repo_list+"""
<div class="list" style="text-align: right;">
<span style="text-style: italic; font-size: 9pt;">generated by <a href="https://gist.github.com/researcx">githubrepos.py</a></span>
</div>
</body>
</html>
"""

with open(file, "w") as f:
  f.write(html)

if last_updated:
  os.utime(file, (last_updated, last_updated))

GitLab Snippet · GitHub Gist

researcx/ss13-lawgen [Content-Warning]

Space Station 13 AI Law Generator (Browser Version).

researcx/spacetopia (Spacetopia)

A role-play oriented Space Station 13 mod which adds Second Life-style features and character customisation, along with other changes heavily inspired by EVE Online and Furcadia.

Features:

  • A marketplace where users can submit various types of clothing, character details and miscellaneous other vanity items.
  • Users can buy things from the marketplace to apply to their character, items can also be gifted to other users.
  • Extensive character customization: change your skin tone and color, the detail and color of body parts, and the overall style of your body.
  • All outfits/uniforms are separated into tops, bottoms, underwear and socks (with inventory slots for each) for better customization.
  • Clothing selection using clothes from the marketplace is available on the character preference screen.
  • Customizable personal housing with item/storage persistency and the ability to change your furniture/decor.
  • The station requires little maintenance, making it well suited to semi-serious and serious roleplay.
  • In-game integration with on-site Spacetopia currency.
  • New game and web UI.
  • Ability to set up extra information/a character sheet.
  • Ability to change resolution/view distance.
  • Character preview can be rotated by clicking on it.
  • Players can decide whether to spawn with a satchel or a backpack.
  • Improved sprites for floors, air alarms, APCs and ATMs.
  • More nature sprites.
  • Gender is now a text field.
  • Closets spawn with parts of clothing rather than uniforms.
  • PvP toggle and PvP-only areas (such as exploratory).
  • No permadeath.
  • Players spawn as a civilian by default and can choose an occupation later in-game using a computer.
  • Many sprites, optimizations, fixes and features ported from D2Station.
  • Major improvements upon the API system written for D2Station V4.

More images...

weatherscript.sh

Adjusts CPU frequency based on time of day and weather (for solar-powered systems).

#!/bin/bash
LAT=53.3963308
LON=-1.5155923

JSON=$(curl -s "https://api.open-meteo.com/v1/forecast?latitude=$LAT&longitude=$LON&current_weather=true")
WEATHERCODE=$(echo $JSON | jq -r '.current_weather.weathercode') # WMO weather interpretation code
IS_DAY=$(echo $JSON | jq -r '.current_weather.is_day') # 1 if the current time step has daylight, 0 at night.

#echo -e "\033[1;33mDBG: IS_DAY=$IS_DAY\033[0m"
#echo -e "\033[1;33mDBG: WEATHERCODE=$WEATHERCODE\033[0m" # 0 for clear sky, 1 for mainly clear, 2 for partly cloudy (see https://open-meteo.com/en/docs#weathervariables)

if [ "$IS_DAY" -eq 1 ]; then
  echo -e "\033[1;32mSUCC: It is daytime!\033[0m"
  if [ "$WEATHERCODE" -gt 2 ]; then
      echo -e "\033[1;31mERR: Unsuitable weather conditions (dark)\033[0m"
      x86_energy_perf_policy --turbo-enable 0; cpupower frequency-set -u 1.5GHz > /dev/null
  else
      echo -e "\033[1;32mSUCC: Suitable weather conditions (bright)\033[0m"
      x86_energy_perf_policy --turbo-enable 1; cpupower frequency-set -u 5GHz > /dev/null
  fi
else
  echo -e "\033[1;31mERR: It is not daytime!\033[0m"
  x86_energy_perf_policy --turbo-enable 0; cpupower frequency-set -u 1GHz > /dev/null
fi

GitLab Snippet · GitHub Gist

discord_export_friends.py

Python script to back up Discord tags with user IDs (for friends lists).

#!/usr/bin/python3
import discord

token = ""

class ExportFriends(discord.Client):
  async def on_connect(self):
      # the friends list is only available on user (self-bot) accounts
      for user in self.user.friends:
          print(user.name + "#" + user.discriminator + " (" + str(user.id) + ")")

client = ExportFriends()
client.run(token, bot=False)

GitLab Snippet · GitHub Gist

researcx/square_avatar [Fork]

Square avatar generator for Twitter.

From noiob/noiob.github.io/main/hexagon.html

researcx/timg

Self-destructing image upload server in Python+Flask. Image is loaded in base64 in the browser and destroyed as soon as it is viewed (experimental).
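The delete-on-first-view core could be sketched like this (a stdlib-only sketch of the idea, not the actual timg code; the web routing and browser side are omitted):

```python
import base64
import secrets

_store = {}  # token -> raw image bytes, held only until first view

def upload(data: bytes) -> str:
    # stash the image under an unguessable token
    token = secrets.token_urlsafe(16)
    _store[token] = data
    return token

def view(token: str):
    # pop() deletes on first read -- that's the self-destruct
    data = _store.pop(token, None)
    if data is None:
        return None
    # base64 so the browser can inline it without a second request
    return base64.b64encode(data).decode()
```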

DonateBot.py

Game-server style "donate to us" announcer bot for IRC channels.

#!/usr/bin/python
from twisted.words.protocols import irc
from twisted.internet import reactor, protocol
from re import search, IGNORECASE
from random import randint
import time
import os, signal

serv_ip = "10.3.0.50"
serv_port = 6667

with open('/root/DonateBot/donate.txt') as f:
  message = f.read()

class DonateBot(irc.IRCClient):
  nickname = "DonateBot"
  chatroom = "#xch"

  def signedOn(self):
      self.join(self.chatroom)
      time.sleep(2)
      self.msg(self.chatroom, message)
      self.part(self.chatroom)
      time.sleep(2)
      self.quit()

  def quit(self, message=""):
      self.sendLine("QUIT :%s" % message)

def main():
  f = protocol.ReconnectingClientFactory()
  f.protocol = DonateBot

  reactor.connectTCP(serv_ip, serv_port, f)
  reactor.run()

if __name__ == "__main__":
  main()

GitLab SnippetGitHub Gist

researcx/pyborg-1up [Fork]

Adds server password and channel-specific trigger word support to the IRC module.

From jrabbit/pyborg-1up

researcx/conduit

A multi-network multi-channel IRC relay. Acts as a soft-link between IRC servers.

  • Allows you to configure more than two networks with more than two channels for messages to be relayed between.
  • Records users on every configured channel on every configured network.
  • Remembers all invited users with a rank which the bots will automatically attempt to promote them to.
  • Provides useful commands for administrators to manage their conduit-linked servers with.
  • When used with the Matrix IRC AppService, filters Matrix and Discord nicks and messages for clarity.
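The routing behind the multi-network relay boils down to a fan-out over linked (network, channel) groups; a hypothetical sketch (the network and channel names are made up):

```python
# each set links channels across networks; a message arriving in any
# member is copied to every other member of the same set
LINKS = [
    {("libera", "#chat"), ("oftc", "#chat"), ("rizon", "#lounge")},
]

def relay_targets(network, channel):
    # return every (network, channel) pair a message should be copied to
    src = (network, channel)
    targets = set()
    for group in LINKS:
        if src in group:
            targets |= group - {src}
    return targets
```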

researcx/weechat-confsave

A weechat config exporter which includes non-default variables. Outputs to /set commands, myweechat.md-style markdown or raw config.

#
# Copyright (c) 2020 researcx <http://linktr.ee/researcx>
# researcx.gitlab.io
# 
# Everyone is permitted to copy and distribute verbatim or modified
# copies of this license document, and changing it is allowed as long
# as the name is changed.
#
# DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
#
# 0. You just DO WHAT THE FUCK YOU WANT TO.
#

#
# confsave.py
#   Save non-default config variables to a file in various formats.
#   Note: will attempt to exclude plaintext passwords.
#
# Usage:
#   /confsave [filename] [format]
#
#   filename: target file (must not exist)
#   format: raw, markdown or commands
#
# History:
#   2020-04-01, unendingPattern <kei.trei.a52@gmail.com>
#       version 0.1: initial release
#

try:
  import weechat as w
except Exception:
  print("This script must be run under WeeChat.")
  print("Get WeeChat now at: https://weechat.org")
  quit()
from os.path import exists

SCRIPT_NAME    = "confsave"
SCRIPT_AUTHOR  = "researcx <https://linktr.ee/researcx>"
SCRIPT_LINK    = "https://github.com/researcx/weechat-confsave"
SCRIPT_VERSION = "0.1"
SCRIPT_LICENSE = "WTFPL"
SCRIPT_DESC    = "Save non-default config variables to a file in various formats."
SCRIPT_COMMAND = SCRIPT_NAME


if w.register(SCRIPT_NAME, SCRIPT_AUTHOR, SCRIPT_VERSION, SCRIPT_LICENSE, SCRIPT_DESC, "", ""):
  w.hook_command(SCRIPT_COMMAND,
          SCRIPT_DESC + "\nnote: will attempt to exclude plaintext passwords.",
          "[filename] [format]",
          "   filename: target file (must not exist)\n   format: raw, markdown or commands\n",
          "%f",
          "confsave_cmd",
          '')

def confsave_cmd(data, buffer, args):
  args = args.split(" ")
  filename_raw = args[0]
  output_format = args[1] if len(args) > 1 else ""
  acceptable_formats = ["raw", "markdown", "commands"]
  output = ""
  currentheader = ""
  lastheader = ""
      
  if not filename_raw:
      w.prnt('', 'Error: filename not specified!')
      w.command('', '/help %s' %SCRIPT_COMMAND)
      return w.WEECHAT_RC_OK

  if output_format not in acceptable_formats:
      w.prnt('', 'Error: format incorrect or not specified!')
      w.command('', '/help %s' %SCRIPT_COMMAND)
      return w.WEECHAT_RC_OK
  
  filename = w.string_eval_path_home(filename_raw, {}, {}, {})
  infolist = w.infolist_get("option", "", "")
  variable_dict = {}
  if infolist:
      while w.infolist_next(infolist):
          infolist_name = w.infolist_string(infolist, "full_name")
          infolist_default = w.infolist_string(infolist, "default_value")
          infolist_value = w.infolist_string(infolist, "value")
          infolist_type = w.infolist_string(infolist, "type")
          if infolist_value != infolist_default:
              variable_dict[infolist_name] = {}
              variable_dict[infolist_name]['main'] = infolist_name.split(".")[0]
              variable_dict[infolist_name]['name'] = infolist_name
              variable_dict[infolist_name]['value'] = infolist_value
              variable_dict[infolist_name]['type'] = infolist_type
      w.infolist_free(infolist)

  if output_format == "markdown":
      output += "## weechat configuration"
      output += "\n*automatically generated using [" + SCRIPT_NAME + ".py](" + SCRIPT_LINK + ")*"
  # w.prnt("", str(variable_dict.values())) # debug
  for config in variable_dict.values():
      if output_format == "markdown":
          currentheader = config['main']
          if not ("password" in config['name']) and ("sec.data" not in config['value']):
              if currentheader != lastheader:
                  output += "\n### " + config['main']
                  lastheader = currentheader

      if not ("password" in config['name']) and ("sec.data" not in config['value']):
          write_name = config['name']
          if config['type'] == "string":
              write_value = "\"" + config['value'] + "\""
          else:
              write_value = config['value']
          if output_format == "markdown":
              output += "\n\t/set " + write_name + " " + write_value
          if output_format == "raw":
              output += "\n" + write_name + " = " + write_value
          if output_format == "commands":
              output += "\n/set " + write_name + " " + write_value
  output += "\n"
  # w.prnt("", "\n" + output) # debug

  if exists(filename):
      w.prnt('', 'Error: target file already exists!')
      return w.WEECHAT_RC_OK

  try:
      fp = open(filename, 'w')
  except OSError:
      w.prnt('', 'Error writing to target file!')
      return w.WEECHAT_RC_OK

  # w.prnt("", "\n" + output)
  fp.write(output)
  w.prnt("", "\nSuccessfully wrote " + filename + " as " + output_format + "!")

  fp.close()

  return w.WEECHAT_RC_OK

GitLab · GitHub

researcx/lxd-tools

LXD powertool for container mass-management, migration and automation.


researcx/random-fursona

Fetch a random fursona from thisfursonadoesnotexist.com.
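The fetch amounts to picking a random seed and downloading that image. A sketch with the stdlib (the seed-numbered JPG path below is an assumption about how the site serves images, not a confirmed URL scheme):

```python
import random
import urllib.request

# assumed URL pattern -- the site's real image paths may differ
BASE = "https://thisfursonadoesnotexist.com/v2/jpgs/seed{:05d}.jpg"

def fursona_url(seed=None):
    # pick a random seed and build the image URL for it
    if seed is None:
        seed = random.randint(0, 99999)
    return BASE.format(seed)

def fetch(path, seed=None):
    # download one fursona image to `path`
    with urllib.request.urlopen(fursona_url(seed)) as resp, open(path, "wb") as out:
        out.write(resp.read())
```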

researcx/xch (incomplete)

Imageboard software.

Feature List

researcx/infinity [Fork]

"infinity" imageboard script fork with fixes, instructions for modern system installation, basic recent threads functionality and some added missing files.

researcx/pydirlist

Directory listing script based on SPKZ's + Garry's Directory Listing.

Features:

  • Lists all folders as-is without the need for a database.
  • Automatic thumbnail generation for images and videos (150x150, 720x, 1600x)
  • Embedded images, animation (.gif) and videos
  • Toggle for image grid and gallery view
  • RSS feeds for entire site and individual folders
  • 18+ notices for folders marked as NSFW
  • Ability to add hidden folders
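The database-free listing in the first bullet comes down to reading the directory on every request; a minimal sketch (the hidden-folder names here are placeholders):

```python
import os

# folder names excluded from the listing (example values)
HIDDEN = {".thumbs", "private"}

def list_dir(path):
    # return (folders, files) for one directory, read straight from
    # the filesystem -- no database involved
    folders, files = [], []
    for entry in sorted(os.scandir(path), key=lambda e: e.name.lower()):
        if entry.is_dir():
            if entry.name not in HIDDEN:
                folders.append(entry.name)
        else:
            files.append(entry.name)
    return folders, files
```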

researcx/i3-gaps-qubes [Fork]

i3-gaps patch for qubes 3.2.

researchdepartment/dotfiles-qubes3.2-i3 (get the full dotfiles for the above setup here)

From SietsevanderMolen/i3-qubes

goscraper [Fork]

Fork of the goscraper webpage-scraper which adds timeout, proxy and user-agent support.

From badoux/goscraper.
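For comparison, the same three options (timeout, proxy, user-agent) expressed with Python's stdlib urllib — a sketch of the concept, not part of the Go fork:

```python
import urllib.request

DEFAULT_UA = "Mozilla/5.0 (compatible; scraper)"  # placeholder user-agent

def build_request(url, user_agent=DEFAULT_UA):
    # attach the user-agent up front so every opener sends it
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

def make_opener(proxy=None):
    # optionally route both http and https through the given proxy
    handlers = [urllib.request.ProxyHandler({"http": proxy, "https": proxy})] if proxy else []
    return urllib.request.build_opener(*handlers)

def scrape(url, timeout=10, proxy=None, user_agent=DEFAULT_UA):
    # timeout covers connect and read, like goscraper's timeout option
    req = build_request(url, user_agent)
    return make_opener(proxy).open(req, timeout=timeout).read()
```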

void-linux-install.sh

Minimal instructions for installing Void Linux on MBR + Legacy BIOS.

# Obtain the latest Void Linux base live ISO from:
# https://voidlinux.org/download/ (plain musl version)

# Write it to a USB drive:
# sudo dd bs=4M if=void-live-x86_64-musl-20181111.iso of=/dev/sdb status=progress oflag=sync

# Switch to bash (easier to use while installing)
bash

# Set UK keymap
loadkeys uk

# Set up WiFi
wpa_passphrase <MYSSID> <key> >> /etc/wpa_supplicant/wpa_supplicant.conf
wpa_supplicant -i <device> -c /etc/wpa_supplicant/wpa_supplicant.conf -B

# Set up partitions (MBR)
parted /dev/sdX mklabel msdos
cfdisk /dev/sdX
# 1 - 1G primary partition, set the bootable flag
# 2 - 100% primary partition

# Set the filesystem for the boot partition
mkfs.ext2 /dev/sdX1

# Create the encrypted partition
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 --hash sha256 /dev/sdX2
cryptsetup luksOpen /dev/sdX2 sysroot

# Set up LVM
pvcreate /dev/mapper/sysroot
vgcreate void /dev/mapper/sysroot
lvcreate --size 2G void --name swap
lvcreate -l +100%FREE void --name root

# Set the filesystems
mkfs.xfs -i sparse=0 /dev/mapper/void-root
mkswap /dev/mapper/void-swap

# Mount the new filesystem
mount /dev/mapper/void-root /mnt
swapon /dev/mapper/void-swap
mkdir /mnt/boot
mount /dev/sdX1 /mnt/boot
for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done

# Install Void Linux
xbps-install -Sy -R https://mirrors.dotsrc.org/voidlinux/current -r /mnt base-system lvm2 cryptsetup grub nano htop tmux

# Chroot into the new Void Linux install and set permissions
chroot /mnt
bash

# Set permissions
chown root:root /
chmod 755 /

# Set root password
passwd root

# Add a new user
useradd -m -s /bin/bash -U -G wheel,users,audio,video,cdrom,input MYUSERNAME
passwd MYUSERNAME

# Configure sudoers
nano /etc/sudoers # Uncomment the line containing %wheel ALL=(ALL) ALL

# Configure timezone and default keymap
nano /etc/rc.conf
# TIMEZONE="Europe/Jersey" # (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
# KEYMAP="uk"

# Set up hostname
echo somehostname > /etc/hostname

# Set up the locale
echo "LANG=en_US.UTF-8" > /etc/locale.conf
echo "en_US.UTF-8 UTF-8" >> /etc/default/libc-locales
xbps-reconfigure -f glibc-locales # glibc images only; skip this step on musl

# UUID for disks
BOOT_UUID=$(blkid -o value -s UUID /dev/sdX1)
CRYPTD_UUID=$(blkid -o value -s UUID /dev/sdX2)

# Edit fstab
nano /etc/fstab
# <file system>	   	   <dir> <type>  <options>             <dump>  <pass>
# /dev/mapper/void-root  /     xfs     defaults              0       1
# /dev/mapper/void-swap  swap  swap    defaults              0       0
echo "UUID=${BOOT_UUID} /boot ext2 defaults 0 2" >> /etc/fstab

# Configure GRUB
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"loglevel=4 slub_debug=P page_poison=1 acpi.ec_no_wakeup=1 rd.auto=1 cryptdevice=UUID=${CRYPTD_UUID}:sysroot root=/dev/mapper/void-root resume=/dev/mapper/void-swap\"" >> /etc/default/grub
echo "GRUB_ENABLE_CRYPTODISK=y" >> /etc/default/grub

# Install grub
grub-install /dev/sdX
xbps-reconfigure -f linux4.19 # use the current known kernel version

# Copy WiFi config over to the new install
cp /etc/wpa_supplicant/wpa_supplicant.conf /mnt/etc/wpa_supplicant/wpa_supplicant.conf

# Reboot
exit
umount -R /mnt
reboot

# Quick chroot back in if needed:
cryptsetup luksOpen /dev/sdX2 sysroot
vgchange -a y void
mount /dev/mapper/void-root /mnt
mount /dev/sdX1 /mnt/boot
for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
chroot /mnt
bash

GitLab Snippet · GitHub Gist

arch-linux-install.sh

Minimal instructions for installing Arch Linux on GPT or MBR on UEFI or Legacy BIOS.

# Download an Arch ISO from https://www.archlinux.org and copy it to a USB drive:
sudo dd bs=4M if=archlinux-2019.01.01-x86_64.iso of=/dev/sdb status=progress oflag=sync
# then plug it into the device of your preference, boot it and start the installation process.

# Installation process:
# Set UK keymap
loadkeys uk

# Set up partitions (MBR)
parted /dev/sdX mklabel msdos
cfdisk /dev/sdX
# 1 - 1G primary partition, set the bootable flag
# 2 - 100% primary partition

# OR

# Set up partitions (GPT)
cgdisk /dev/sdX
# 1 - [ EFI: 100M | Legacy: 1G ] partition # Hex code: [ EFI: ef00 | Legacy: ef02 ]
# 2 - 250M partition # Hex code 8300 ( Not needed for a legacy install, if you chose not to make this partition, /dev/sdX3 becomes /dev/sdX2 in this guide! )
# 3 - 100% partition # Hex code 8300

# May need to reboot here if any of the next commands fail with an error!
# reboot

# Set the filesystems
#Legacy:
mkfs.ext2 /dev/sdX1

#OR

#EFI:
mkfs.vfat -F32 /dev/sdX1
mkfs.ext2 /dev/sdX2

# Create the encrypted partition
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 --hash sha256 /dev/sdX3
cryptsetup luksOpen /dev/sdX3 sysroot

# Set up LVM
pvcreate /dev/mapper/sysroot
vgcreate arch /dev/mapper/sysroot
lvcreate --size 2G arch --name swap
lvcreate -l +100%FREE arch --name root

# Set the filesystems
mkfs.xfs /dev/mapper/arch-root
mkswap /dev/mapper/arch-swap

# Mount the new filesystem
mount /dev/mapper/arch-root /mnt
swapon /dev/mapper/arch-swap
mkdir /mnt/boot

# Legacy:
mount /dev/sdX1 /mnt/boot

# OR

# EFI:
mount /dev/sdX2 /mnt/boot
mkdir /mnt/boot/efi
mount /dev/sdX1 /mnt/boot/efi


# Enable wifi if required
wifi-menu

# Install the system and necessary/favorable utilities, make sure to change this!
pacstrap /mnt base base-devel fish nano vim git efibootmgr grub grub-efi-x86_64 dialog wpa_supplicant lsb-release

# Set up fstab
genfstab -pU /mnt >> /mnt/etc/fstab

# Edit fstab if using an SSD: change relatime on all non-boot partitions to noatime and add discard
nano /mnt/etc/fstab

# Chroot into the newly installed arch system
arch-chroot /mnt /bin/fish

# Setup system clock (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
ln -sf /usr/share/zoneinfo/Europe/Jersey /etc/localtime
hwclock --systohc --utc

# Set hostname
echo somehostname > /etc/hostname

# Update locales
nano /etc/locale.gen # uncomment the line containing en_US.UTF-8 UTF-8
echo LANG=en_US.UTF-8 >> /etc/locale.conf
locale-gen

# Set root password
passwd

# Add a new user (remove -s /bin/fish if you want bash instead)
useradd -m -g users -G wheel -s /bin/fish MYUSERNAME
passwd MYUSERNAME

# Configure sudoers
nano /etc/sudoers
# Uncomment the line containing %wheel ALL=(ALL) ALL

# Configure mkinitcpio
nano /etc/mkinitcpio.conf
# Add 'keymap encrypt lvm2 resume' to HOOKS just before 'filesystems'

# Set up a bootloader (choose between legacy and EFI):

# Legacy (grub)
grub-install --target=i386-pc /dev/sdX

# If you encounter "WARNING: Device /dev/xxx not initialized in udev database even after waiting 10000000 microseconds." you may need to provide /run/lvm/ access to the chroot environment using: 
exit # exit out of the chroot for the time being
mkdir /mnt/hostlvm
mount --bind /run/lvm /mnt/hostlvm
arch-chroot /mnt /bin/fish
ln -s /hostlvm /run/lvm
# You will need to re-run the "grub-install" command as instructed above after this.

nano /etc/default/grub
# Set GRUB_CMDLINE_LINUX as "cryptdevice=/dev/sdX3:luks root=/dev/mapper/arch-root resume=/dev/mapper/arch-swap"
# and uncomment the line containing GRUB_ENABLE_CRYPTODISK=y
grub-mkconfig -o /boot/grub/grub.cfg

# OR

# EFI (grub)
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ArchLinux
nano /etc/default/grub
# Set GRUB_CMDLINE_LINUX as "cryptdevice=/dev/sdX3:luks root=/dev/mapper/arch-root resume=/dev/mapper/arch-swap"
# and uncomment the line containing GRUB_ENABLE_CRYPTODISK=y
grub-mkconfig -o /boot/grub/grub.cfg

# OR

# EFI (systemd-boot)
bootctl --path=/boot/efi install
nano /boot/efi/loader/entries/arch.conf
# Set options as follows:
#title   Arch Linux
#linux   /vmlinuz-linux
#initrd  /initramfs-linux.img
#initrd  /initramfs-linux-fallback.img
#options cryptdevice=/dev/sdX3:lvm root=/dev/mapper/arch-root resume=/dev/mapper/arch-swap rw 

# Regenerate initrd image
mkinitcpio -p linux

# Exit the chroot
exit

# Unmount all partitions
umount -R /mnt
swapoff -a

# Reboot into your new Arch Linux install.
reboot


# If at any time you need to get back into chroot via installation media:
cryptsetup luksOpen /dev/sdX3 sysroot
mount /dev/mapper/arch-root /mnt

# Legacy:
mount /dev/sdX1 /mnt/boot

# OR

# EFI:
mount /dev/sdX2 /mnt/boot
mount /dev/sdX1 /mnt/boot/efi

arch-chroot /mnt /bin/fish

GitLab Snippet · GitHub Gist

oragono-laced [Fork]

Oragono IRCd mod providing imageboard features and anonymity to IRC.

Warning: Highly experimental!

Features and ideas of this fork:

  • Tripcode and secure tripcode system (set password (/pass) to #tripcode, #tripcode#securetripcode or ##securetripcode to use) (90%)
  • Auditorium mode (+u) (inspircd style) (90%)
  • Greentext and basic ~markdown-to-irc~ formatting support (20%)
  • Channel mode for displaying link titles and description (+U) (99%)
  • Channel mode for group highlights per word basis (+H <string>) (i.e. /mode #channel +H everyone; /msg hey @everyone) (90%) (Add cooldown system (0%))
  • Private queries/whitelist mode (+P) which requires both users to be mutual contacts to use private messaging. (10%)
  • Automatically generated and randomized join/quit (quake kill/death style?) messages (0%)
  • Server statistics (join/quit/kick/ban counter, lines/words spoken) (0%)
  • Simple federation (IRC working over DHT or a similar system, provided worldwide for anyone to use; anyone should be able to host a connecting server with little to no knowledge required) (0%)
  • Build in a webrtc voice/video chat server and make (or modify an open source) webclient to support voice and video chatting (0%)
  • Web front-end for chat with trip authentication with discord-style themes, avatar support, automatically hosted for every server/client (0%)
  • Anonymity changes (reduced whois info, removed whowas info, completely hide or obfuscate ips/hostmasks, make users without tripcode completely anon and unidentifiable) (60%)
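
The tripcode idea can be illustrated as below. This is a simplified sketch using SHA-256, not the fork's actual algorithm (classic imageboard tripcodes are crypt(3)-based); `SERVER_SECRET` is an assumed per-server value:

```python
import hashlib

SERVER_SECRET = "change-me"  # illustrative: server-side secret mixed into secure tripcodes

def tripcode(password, secure=False):
    """Derive a short public code from a password. A regular tripcode depends only
    on the password; a secure tripcode also mixes in the server secret, so it
    cannot be brute-forced offline or reproduced on another server."""
    data = password + (SERVER_SECRET if secure else "")
    digest = hashlib.sha256(data.encode("utf-8")).hexdigest()
    return ("!!" if secure else "!") + digest[:10]
```

The same password always yields the same code, which is what lets anonymous users prove continuity of identity without registering.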

From ergochat/ergo

myweechat.md

My personal weechat configuration.

znc-httpadmin [Fork]

Added user count and user list API call to znc-httpadmin.

From prawnsalad/znc-httpadmin

researcx/SynDBB (Cyndi)

Forum software inspired by IRC, imageboards, Facepunch and SomethingAwful.
A hybridization of different aspects of classic internet forums, imageboards, and IRC.

Features:

  • File uploader with external upload support.
  • Anonymous file uploader.
  • Automatic Exif data removal on uploaded images.
  • Deleted files securely wiped using "shred".
  • File listing with file info and thumbnails.
  • Temporary personal image galleries.
  • Custom user-created channels.
  • List and grid (catalog) view modes for channels.
  • List and gallery view modes for threads.
  • Rating system for threads, quotes and IRC.
  • Site/IRC integration API.
  • Avatar history with the ability to re-use avatars without uploading them.
  • Custom emoticon submission (admin approval required).
  • QDB style quote database for IRC quotes (quotes are admin approved).
  • Simple pastebin.
  • Improved site and IRC API.
  • LDAP Authentication support (+ automatic migration)
  • JSON based configuration file.
  • Most aspects of the site configurable in config.json.
  • Display names (+ display name generator).
  • Username generator.
  • Summary cards for user profiles.
  • Profile and user tags.
  • NSFW profile toggle.
  • Tall avatar support (original avatar source image is used) for profiles (all members) and posts (donators).
  • Various configuration options for custom channels (access control, moderator list, NSFW toggle, anon posting toggle, imageboard toggle, etc)
  • Channel and thread info displayed on sidebar.
  • User flairs.
  • Multi-user/profile support (accounts can be linked together and switched between with ease).
  • Mobile layout.
  • Theme selector.
  • All scripts and styles hosted locally.
  • Scripts for importing posts from imageboards and RSS feeds.

More images...

file_download.py (unavailable)

Automated per-channel/server/buffer/query link/file archiver script for weechat (async).
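
Since the script is unavailable, here is a minimal sketch of its core step: pulling URLs out of a buffer line before handing them to a downloader. The regex and function name are illustrative, not the original code:

```python
import re

# Simple http(s) URL matcher; real-world link extraction needs more care.
URL_RE = re.compile(r"https?://\S+")

def extract_urls(message):
    """Pull every http(s) URL out of one message line, stripping trailing punctuation."""
    return [u.rstrip(".,)>") for u in URL_RE.findall(message)]
```

In the weechat plugin this would run on each incoming line, with downloads dispatched asynchronously per buffer.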

fp-ban (unavailable)

Evercookie based fingerprinting + user ban system. Previously used on the Space Station 13 server.

dnsbl.php

IRC DNSBL style user/bot blocking in PHP.

<?php
function CheckIfSpambot($emailAddress, $ipAddress, $userName, $debug = false)
{
$spambot = false;
$errorDetected = false;

if ($emailAddress != "")
{
  $xml_string = file_get_contents("http://www.stopforumspam.com/api?email=" . urlencode($emailAddress));
  $xml = new SimpleXMLElement($xml_string);

  if ($xml->appears == "yes") // The address was found in the database
  {
    $spambot = true; // Check failed. Result indicates dangerous.
  }
  elseif ($xml->appears == "no") // Check passed. Result returned safe.
  {
    $spambot = false; // Check passed. Result returned safe.
  }
  else
  {
    $errorDetected = true; // Test returned neither a positive nor a negative result. Service might be down?
  }
}

// -------------
// Check IP Address
// -------------
if ($spambot != true && $ipAddress != "")
{
  $xml_string = file_get_contents("http://www.stopforumspam.com/api?ip=" . urlencode($ipAddress));
  $xml = new SimpleXMLElement($xml_string);

  if ($xml->appears == "yes") // The address was found in the database
  {
    $spambot = true; // Check failed. Result indicates dangerous.
  }
  elseif ($xml->appears == "no") // Check passed. Result returned safe.
  {
    $spambot = false; // Check passed. Result returned safe.
  }
  else
  {
    $errorDetected = true; // Test returned neither a positive nor a negative result. Service might be down?
  }
}

// -------------
// Check Username
// -------------
if ($spambot != true && $userName != "")
{
  $xml_string = file_get_contents("http://www.stopforumspam.com/api?username=" . urlencode($userName));
  $xml = new SimpleXMLElement($xml_string);

  if ($xml->appears == "yes") // The username was found in the database
  {
    $spambot = true; // Check failed. Result indicates dangerous.
  }
  elseif ($xml->appears == "no") // Check passed. Result returned safe.
  {
    $spambot = false; // Check passed. Result returned safe.
  }
  else
  {
    $errorDetected = true; // Test returned neither a positive nor a negative result. Service might be down?
  }
}

// To debug, call the function with the debug flag set to true; it will then return whether an error was detected instead of the test result.
if ($debug == true)
{
  return $errorDetected; // If enabled, return whether or not an error was detected
}
else
{
  return $spambot; // Return test results as either true/false or 1/0
}
}
function ReverseIPOctets($inputip){
$ipoc = explode(".",$inputip);
return $ipoc[3].".".$ipoc[2].".".$ipoc[1].".".$ipoc[0];
}
function IsTorExitPoint($ip){
if (gethostbyname(ReverseIPOctets($ip).".".$_SERVER['SERVER_PORT'].".".ReverseIPOctets($_SERVER['SERVER_ADDR']).".ip-port.exitlist.torproject.org")=="127.0.0.2") {
  return true;
} else {
  return false;
}
}
function checkbl($ip){
$blacklisted = 0;
$whitelist = array(''); //ips of users who you wish to whitelist regardless of conditions below
$blacklist = array(''); //bad ips go here
$range_blacklist = array(''); //ip ranges go here e.g. 84.72.0.0
$city_blacklist = array(''); //cities go here
$region_blacklist = array(''); //regions go here

$geoip = geoip_record_by_name($ip);
$mask=ip2long("255.255.255.0");
$remote=ip2long($ip);

//check for tor
if (IsTorExitPoint($ip)) {
  $blacklisted = 1;
}

//check stopforumspam if ip is malicious
if (CheckIfSpambot('', $ip, '')){
  $blacklisted = 1;
}
//check if ip is in range_blacklist
foreach($range_blacklist as $single_range){
  if (($remote & $mask)==ip2long($single_range))
  {
    $blacklisted = 1;
  }
}

//check if geoip city is blacklisted
foreach($city_blacklist as $city){
  if ($geoip['city'] == $city)
  {
    $blacklisted = 1;
  }
}

//check if geoip region is blacklisted
foreach($region_blacklist as $region){
  if ($geoip['region'] == $region)
  {
    $blacklisted = 1;
  }
}

//check if ip is in the blacklist
if (in_array($ip, $blacklist)) {
  $blacklisted = 1;
}

//do stuff (returns 1 for blacklisted and 0 for safe)
if($blacklisted && !in_array($ip, $whitelist)){
  return 1;
}else{
  return 0;
}
}
if(isset($_REQUEST['ip'])){
echo checkbl($_REQUEST['ip']);
}else{
echo checkbl($_SERVER['REMOTE_ADDR']);
}
?>

GitLab Snippet · GitHub Gist

simple_bash_uploader.sh

Bash file/screenshot upload script (scrot compatible)

#!/bin/bash
# Simple Bash Uploader (simple-bash-upload.sh)
#   by researcx - https://researcx.gitlab.io/
#   Uploads a file/screenshot to a server of your choice and automatically copies the direct link to your clipboard.
#   Supports regular file uploading + full-screen, active window and selected area screenshots.
#   Usage: simple-bash-upload.sh [full|active|selection|filename.ext]
#   Can be used directly from console to upload files (./simple-bash-upload.sh file.png) or assigned to a custom action in Thunar (./simple-bash-upload.sh %f)
#   Can also be bound to run on certain keypresses such as print screen, alt+print screen and ctrl+print screen.
# Dependencies:
#   scrot (for screenshotting)
#   xclip (for copying the link to your clipboard)
#   libnotify (for notifying you of the uploads)
# Xfce4 keyboard shortcut fixes (selection mode doesn't work without these)
sleep 0.1
export DISPLAY=:0.0
# Configuration Options:
UPLOAD_SERVICE="My Awesome Server"
RANDOM_FILENAME=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
IMAGE_PATH="/home/$USER/Screenshots/$RANDOM_FILENAME.png"
REMOTE_USER="test"
REMOTE_SSH_AUTH="~/.ssh/my_ssh_key"
REMOTE_SERVER="example.org"
REMOTE_PORT="22"
REMOTE_PATH="/var/www/html/files/"
REMOTE_URL="https://example.org/files/"
if [ "$1" == "full" ]; then
MODE=""
elif [ "$1" == "active" ]; then
MODE="-u"
elif [ "$1" == "selection" ]; then
MODE="-s"
else
  FILE_PATH=$1
  FILE_NAME=$(basename $FILE_PATH)
  FILE_EXT=".${FILE_NAME##*.}"
  notify-send "$UPLOAD_SERVICE" "Upload of file '$RANDOM_FILENAME$FILE_EXT' started."
  scp -i $REMOTE_SSH_AUTH -P $REMOTE_PORT $FILE_PATH $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH$RANDOM_FILENAME$FILE_EXT
  if [ $? -eq 0 ];
  then
      echo -n $REMOTE_URL$RANDOM_FILENAME$FILE_EXT|xclip -sel clip
      notify-send "$UPLOAD_SERVICE" $REMOTE_URL$RANDOM_FILENAME$FILE_EXT
  else
      notify-send "$UPLOAD_SERVICE" "Upload failed!"
  fi
  exit
fi
scrot $MODE -z $IMAGE_PATH || exit
notify-send "$UPLOAD_SERVICE" "Upload of screenshot '$RANDOM_FILENAME.png' started."
scp -i $REMOTE_SSH_AUTH -P $REMOTE_PORT $IMAGE_PATH $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH
if [ $? -eq 0 ];
then
  echo -n "$REMOTE_URL$RANDOM_FILENAME.png"|xclip -sel clip
  notify-send "$UPLOAD_SERVICE" "$REMOTE_URL$RANDOM_FILENAME.png"
else
  notify-send "$UPLOAD_SERVICE" "Upload failed!"
fi

GitLab Snippet · GitHub Gist

Spacetopia Marketplace

Based on my XenForo forum shop system. Adds extra features for BYOND and Spacetopia integration.

Additional features:

  • Support for BYOND sprite files
  • Allows user submitted content.

XenForo Mods

Shop/Market System

A simple shop system, later made modular and given an internal API to make it able to work with any forum or CMS software.

Features:

  • Ability to buy items using real money or on-site currency.
  • Users can buy on-site currency with real money.
  • Users can submit their own items.
  • User accounts which serve as banks.
  • Automatically calculated item pricing (inflation) based on bank accounts' currency and item ownership.
  • Rating and flagging system for items.
  • Items have a redeem code for gifting or giveaways.
  • An inventory which displays each item you own.
  • Items can be either used, activated or downloaded depending on their type.
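
The inflation-based pricing could look something like the sketch below; the formula and parameter names are hypothetical, chosen only to illustrate scaling prices with the money supply and discounting widely owned items:

```python
def inflated_price(base_price, currency_in_circulation, baseline_supply, copies_owned):
    """Hypothetical pricing rule: scale the base price with the total money supply,
    then discount items that many users already own."""
    inflation = currency_in_circulation / max(baseline_supply, 1)
    scarcity = 1.0 / (1.0 + 0.01 * copies_owned)
    return round(base_price * inflation * scarcity, 2)
```

Any rule of this shape keeps prices meaningful as on-site currency accumulates in user banks.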

Smartness Points

Highlights bad grammar and misspellings in red and deducts points for each mistake as a disciplinary action. Based on the Facepunch Studios smartness system from around 2004-2005.

Features:

  • Supports a customizable list of words, thus can be used for more than just grammar/spelling mistakes.
  • Users will lose a point for each bad word.
  • By correcting a message, the user will gain back any points lost.
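
The scoring can be sketched as follows; the word list and function name are illustrative, not the mod's actual data:

```python
BAD_WORDS = {"teh", "alot", "recieve"}  # illustrative: configurable misspelling list

def score_delta(message):
    """Lose one point per flagged word; re-scoring a corrected post earns points back."""
    words = message.lower().split()
    return -sum(1 for w in words if w.strip(".,!?") in BAD_WORDS)
```

Re-running the same function on the edited post gives the refund for free: the delta of the corrected text is simply smaller.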

Imageboard style warn/ban notices

Appends "USER WAS BANNED FOR THIS POST (REASON)" and/or "USER WAS WARNED FOR THIS POST (REASON)" at the bottom of the users' post.

SS13/BYOND API

Features:

  • XenForo user profile info fetching system.
  • Trophy (achievement) get and set system.
  • BYOND ckey comparison using XenForo custom profile fields.
  • Fetch clothing and character customization from user profile fields.
  • Shop integration.
  • bdBank integration.

server.sh

Space Station 13 Linux Server Toolkit

#!/bin/bash
source ../byond/bin/byondsetup

cd `dirname $0`
isdefined=0
${1+ export isdefined=1}
if [ $isdefined == 0 ] ; then
echo "Space Station 13 Linux Server Toolkit"
echo "by researcx (https://github.com/researcx)"
echo "Parameters: start, stop, update, compile, version"
exit
fi

LONG=`git --git-dir=../space-station-13/.git rev-parse --verify HEAD`
SHORT=`git --git-dir=../space-station-13/.git rev-parse --verify --short HEAD`
VERSION=`git --git-dir=../space-station-13/.git shortlog | grep -E '^[ ]+\w+' | wc -l`

if [ $1 == "start" ]; then
DreamDaemon 'goonstation.dmb' -port 5200 -log serverlog.txt -invisible -safe &
elif [ $1 == "stop" ]; then
pkill DreamDaemon
elif [ $1 == "update" ]; then
echo 'Downloading latest content from .git'
git clone https://erikad2k5@bitbucket.org/d2k5productions/space-station-13 ../space-station-13/

echo 'Checking for updates'
sh -c 'cd ../space-station-13/ && /usr/bin/git pull origin master'

LONG=`git --git-dir=../space-station-13/.git rev-parse --verify HEAD`
SHORT=`git --git-dir=../space-station-13/.git rev-parse --verify --short HEAD`
VERSION=`git --git-dir=../space-station-13/.git shortlog | grep -E '^[ ]+\w+' | wc -l`


curl "http://cia.d2k5.com/status/status.php?type=update&ver=$VERSION&rev=$LONG"
echo ''
elif [ $1 == "compile" ]; then
echo "Compiling Space Station 13 (Revision: $VERSION)"

TIME="$(sh -c "time DreamMaker ../space-station-13/goonstation.dme &> build.txt" 2>&1 | grep real)"
#TIME="testing"

echo $TIME

LONG=`git --git-dir=../space-station-13/.git rev-parse --verify HEAD`
SHORT=`git --git-dir=../space-station-13/.git rev-parse --verify --short HEAD`
VERSION=`git --git-dir=../space-station-13/.git shortlog | grep -E '^[ ]+\w+' | wc -l`


BUILD="$(tail -1 build.txt)"
cp build.txt /usr/share/nginx/html/

curl "http://cia.d2k5.com/status/status.php?type=build&data=$BUILD&ver=$VERSION&time=$TIME&log=http://cia.d2k5.com/build.txt"
elif [ $1 == "version" ]; then
echo "Version Hash: $LONG ($SHORT)"
echo "Revision: $VERSION"
echo "Changes in this version: https://bitbucket.org/d2k5productions/space-station-13/commits/$LONG"
else
echo "exiting...";
fi

GitLab · GitHub

git-update.sh

Git repository auto-updater

#!/bin/sh
REPO="https://github.com/<user>/<repo>"
REPO_DIR="/path/to/repository" # don't name this PATH: that would clobber the shell's command search path
LATEST=`/usr/bin/git ls-remote $REPO refs/heads/master | /usr/bin/cut -f 1`
CURRENT=`/usr/bin/git -C $REPO_DIR rev-parse HEAD`
echo "Current Revision: $CURRENT"
echo "Latest Revision: $LATEST"
if [ "$LATEST" = "$CURRENT" ]; then
      echo 'No updates found!'
      exit
fi
/usr/bin/git -C $REPO_DIR pull

GitLab Gist · GitHub Gist

researcx/opensim-mod (currently unavailable)

OpenSimulator mods for XenForo bdBank currency and user integration.

More images...

scan_pics.php

Mass incremental+prefix+suffix photo scanner for direct URLs.

<?php
//scan_pics.php
//Mass image scanner for URLs.

//Valid known prefixes: DSC, DSCN, DSCF, DSC_, DSC, IMG, IMG_, Photo, PIC
//Valid suffixes: .jpg, .png, etc. Keep in mind that on Linux servers file names are case sensitive, so also searching for .JPG/.PNG may be useful, though cameras usually save lowercase .jpg. _DSCN.jpg occurs in rare cases.
//For more suffixes or prefixes read up on http://en.wikipedia.org/wiki/Design_rule_for_Camera_File_system

//Optimal usage: /scan_pics.php?url=http://filesmelt.com/dl/&prefix=DSC&suffix=.jpg
//You must edit the URL to your own likings.
//In the demonstration it was used on this URL with the DSC prefix although it only finds one photo.

//Additional parameters: &start=0, &end=9999 (most cameras only go up to 9999 so there's no point in going higher)
//The additional parameters are also good for certain number scans, i.e. from 600 to 699, such as trying to find a set of photos.

//All files that don't exist will error; any files that are found will appear as normal images.
//The page will take extremely long to load on a full 0-9999 scan; you'll only know what has loaded once it finishes.

//Functions used by script
function zerofill($mStretch, $iLength = 2)
{
  $sPrintfString = '%0' . (int)$iLength . 's';
  return sprintf($sPrintfString, $mStretch);
}

if(isset($_GET['start'])){
$start = $_GET['start'];
}else{
$start = 0;
}

if(isset($_GET['end'])){
if($_GET['end'] <= 9999){
  $end = $_GET['end'];
}else{
  $end = 9999;
}
}else{
$end = 9999;
}

if(isset($_GET['url'])){
$url = $_GET['url'];
}else{
die('No parameters specified.');
}

if(isset($_GET['prefix'])){
$prefix = $_GET['prefix'];
}else{
$prefix = null;
}

if(isset($_GET['suffix'])){
$suffix = $_GET['suffix'];
}else{
$suffix = '.jpg';
}

echo '<title>Scanning images from '.$prefix.$start.$suffix.' to '.$prefix.$end.$suffix.'.</title>';

for($i=$start;$i<$end + 1;$i++){
echo '<img src="'.$url.$prefix.zerofill($i,strlen($end)).$suffix.'" width="100" height="100" />';
}

?>

GitLab Snippet · GitHub Gist

Various Garry's Mod roleplay scripts + hud design.

Hit the giant enemy crab in its weak spot for massive damage.

Source mapping

  • de_hotel

Source modding

  • City 47
  • Texturing (custom map textures)
  • Model editing (custom player-models)

Various website designs



Markov forum bot

Using a combined pyborg IRC database, it takes the replies from a forum thread, adds them to its own database, and formulates a reply to the thread. It can also reply to individual posts. Triggered automatically at random, with an additional chance to reply when replied to or mentioned.
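
The core of such a bot is a word-level Markov chain. A minimal order-1 sketch, not pyborg's actual implementation:

```python
import random
from collections import defaultdict

class MarkovChain:
    """Order-1 word chain: learn word-to-next-word transitions, then walk them."""
    def __init__(self, rng=None):
        self.table = defaultdict(list)  # word -> list of observed next words
        self.rng = rng or random.Random()

    def learn(self, text):
        words = text.split()
        for a, b in zip(words, words[1:]):
            self.table[a].append(b)

    def reply(self, seed, max_words=20):
        """Start from a seed word and follow random transitions until a dead end."""
        out = [seed]
        while len(out) < max_words and self.table.get(out[-1]):
            out.append(self.rng.choice(self.table[out[-1]]))
        return " ".join(out)
```

Feeding it every post in a thread and seeding `reply()` with a word from the post being answered gives the on-topic-but-slightly-unhinged replies such bots are known for.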

SMF forum mods:

  • Neopets-style RP item shop
  • Smartness
  • Ban list, improved ban system
  • Upload site with file listing, thumbnails, search, user to user file sharing/transfers

Instant-messaging system with an IRC backend. (unavailable)

Grand Theft Auto Modding

  • San Andreas minor texture overhaul
  • San Andreas improved lighting
  • San Andreas improved water
  • SA:MP scripts (D&K Roleplay)
  • MTA:R mapping (D&K Racing)

ActiveWorlds City Builds

  • Rockford
  • Parameira
  • D2City

Contact Me:

  • Tox, Briar, XMPP, Telegram, Signal & more available:
  • [send me your details here and I will add you]
  • e-Mail:
  • click here
  • Coding (Python, PHP, C#), software & web development. Scripting, automation, database systems (SQL). Graphics & web design, video editing. Modding, spriting, game development. Project lead & community administration experience.

Support:

XMR
ETH
BTC

Image Galleries:

  • > directory listing - /Technology/
  • view more...

  • > directory listing - /Imagedump/
  • view more...

  • > directory listing - /Development/
  • > directory listing - /Photos/
  • > directory listing - /Cooking/
  • > directory listing - /Food/
  • > directory listing - /Music/
  • > directory listing - /Documents/
  • > directory listing - /Games/


  • -----BEGIN PGP PUBLIC KEY BLOCK-----
     
    xm8EYxG+OBMFK4EEACIDAwTVc8YnjDn1EOJanhfu5NVnL3g3AHyUJQMMiYl7TuNk 0n7TykT/coFrN6DmxhPVNJJcbU1hG+AhANqVCPMnaaNgB4doimIOcZDHXubK1+g5 UW5gzP+e56zC9kxv+wfbVLjNIEtlaXJhIFQgPGtlaS50cmVpLmE1MkBnbWFpbC5j b20+wpwEExMKACQFAmMRvjgCGy8DCwkHAxUKCAIeAQIXgAMWAgECGQEFCQeEzgAA CgkQ/hq9PaL7agc0GQGAlBmnpqWwboFuj3pirmL1m9njuQX11PmycUNH5YBnkMzP 2TCBii4GqO6xkU9jNhKOAYCAPc/CxMFCR8G11OsgJmr4X2vhzhcJXLerT7MW0kmP VAgTEwAr114XRhezAq+10IDOUgRjEb44EwgqhkjOPQMBBwIDBJRvFlmBp4P+9YnF 2VEx3xJ2hhzOOj1YizdeYrdftIId5m+QsFh6I1t+ch9pAulZSrpOUYO2+kkePBxw 954hGMzCwCcEGBMKAA8FAmMRvjgFCQeEzgACGy4AagkQ/hq9PaL7agdfIAQZEwoA BgUCYxG+OAAKCRBkBaUCmCPmJXtnAQDey6jpD7D4H8S8DP8mMEROFPYv//G8I8F0 YtVrThtOQQEAv8fcPgooEj+AWBfg00/w4DdH9OWG5Qt2SIBsuBc8HMqTJgF+Ji9t V0DvQVZ64mCKtd5AZR66owRHnLxGHC5Jg/aGdWXEWiHrDKnXd0asNxWrFlS1AX0Z MfjJXgjNym8wFSXRcrlxm2Wt8TBQE3IWQ7ES4UI4kPdL7DKh+JrlrMlVoNrKAkbO UgRjEb44EwgqhkjOPQMBBwIDBCd0qMyP/nuw1DIPucBnT2H8dQgFPZ7TOVwD3E6a kxyX32qE4EkUd0z3PCZgjn1QhOS2Ws+8TVSwsch7rSoCyUvCwCcEGBMKAA8FAmMR vjgFCQeEzgACGy4AagkQ/hq9PaL7agdfIAQZEwoABgUCYxG+OAAKCRCRRmJU4MPo oe3nAQDuh24FYacY8NTdXu/ai0woKZktPExNciBvqAR4VC81gwD8CoVDVdM7P4dY yTAr6WZ3ymkjgdKEISxkqrmTL0YfLqfmcwGA8aqBf44S1uAG9HVpQm0l7m4J+teg jdaXMxse2dAeHf0UbKRxDkUoB4vb+stA7PuoAXwLr/Ge+T1jgQPxpfikduiI4oco csqf/dOCXaWfD0IEibZxyGbPuzClKm23B7pdGYI= =3Uy5 =J4UH
    -----END PGP PUBLIC KEY BLOCK-----

     




    "Halfway through reading a Hacker News thread I kick my boot into the computer. Even when it's an original thread I can't stand it. It feels good to smash the computer though. I feel like I'm participating in the discussion."

     

    info

    user:researcx