266 Commits

Author SHA1 Message Date
brent saner
a0d1c1df5c fix ascii ref layout 2024-08-16 20:38:02 -04:00
brent saner
1d35f4deca make links look nicer 2024-08-16 20:27:59 -04:00
brent saner
1437639338 adding asciiref gen hook 2024-08-16 20:14:37 -04:00
brent saner
90b88d82e8 add anchors to ascii ref 2024-08-16 19:54:59 -04:00
e17005b729 add collapsing toc to ascii.html 2023-09-29 01:53:33 -04:00
aa0a7e5837 lol fixing the Debian fuckery fix 2022-10-12 16:07:42 -04:00
77b22b8b8a fix for Debian's fucking braindead packagers 2022-10-12 00:45:29 -04:00
6543b230d4 fixing deprecation warning 2021-10-29 06:33:51 -04:00
67338238af this should fix some issues with loop mounts 2021-01-19 01:14:33 -05:00
d8686ca943 make the example make more sense 2021-01-19 01:12:48 -05:00
brent s
b740d2bfff adding some changes 2021-01-19 01:12:07 -05:00
11a2db0acb moving relchk to its own project 2021-01-12 17:09:07 -05:00
379795ee06 adding example JSON 2021-01-12 04:57:49 -05:00
root
436bc3d083 changing to new format 2021-01-12 04:55:20 -05:00
4663c3cd02 arch done 2021-01-12 04:54:27 -05:00
f75e4ee896 okay, all good now. now to fix the arch iso downloader 2021-01-12 03:27:45 -05:00
root
88828685c2 updating sysresc relchk 2021-01-12 01:48:22 -05:00
brent s
4fa98eb805 fix for change of func name:
dns.resolver.resolve > dns.resolve.query
2020-11-21 13:34:37 -05:00
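(For context on that commit: dnspython 2.0 renamed `dns.resolver.query()` to `dns.resolver.resolve()`; the arrow in the message is written in reverse. A minimal compatibility sketch, assuming only that the resolver object exposes one of those two method names — `pick_resolve` and the stand-in classes below are hypothetical, not part of dnspython:)

```python
def pick_resolve(resolver):
    """Return the lookup method on a dnspython Resolver-like object:
    resolve() on dnspython >= 2.0, query() on older releases."""
    # getattr returns the bound method (truthy) if present, else None.
    return getattr(resolver, "resolve", None) or resolver.query


# Duck-typed stand-ins for the two dnspython generations:
class NewResolver:
    def resolve(self, name, rdtype):
        return ("resolve", name, rdtype)


class OldResolver:
    def query(self, name, rdtype):
        return ("query", name, rdtype)
```

With a real dnspython `Resolver`, `pick_resolve(dns.resolver.Resolver())("example.com", "A")` should work on either side of the 2.0 rename.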
916ea1dc2c updating todo 2020-10-04 02:42:35 -04:00
9b2eff59d8 upstream deprecation 2020-10-02 23:10:00 -04:00
13349d6d99 don't need to escape backslash 2020-08-23 00:38:53 -04:00
b0fba9b441 Merge branch 'master' of square-r00t.net:optools into master 2020-08-23 00:04:10 -04:00
6954e8dc4e adding ascii table 2020-08-23 00:03:55 -04:00
brent s
0febad432a fixing stale pid file bug 2020-08-16 12:59:48 -04:00
473833a58f removed repoclone scripts (they've been reformed into the RepoMirror repository) and added libvirt redirection 2020-08-02 03:48:24 -04:00
brent s
743edf045b adding count 2020-07-14 17:30:47 -04:00
brent s
8f3da5ee34 make that shit customizable 2020-07-14 17:26:42 -04:00
brent s
c2c051b6a3 delineate elements 2020-07-14 17:19:48 -04:00
brent s
6deef053d3 should probably strip that. 2020-07-14 17:17:04 -04:00
brent s
c4bb612381 adding get_title 2020-07-14 17:13:14 -04:00
289c2711b8 shut up man, i'm getting SO MANY cron emails about this 2020-05-14 12:52:25 -04:00
b2848970c3 still fiddling 2020-05-10 08:48:08 -04:00
b638e58dc8 think i need a small fix for this. it's not creating records it ought to. 2020-05-10 08:32:49 -04:00
b8592686e4 update todo 2020-04-23 21:48:47 -04:00
95aa8aa3bc publish DDNS 2020-04-21 00:56:28 -04:00
31eec2d3f3 fix for sshsecure on ssh versions 8.1+ 2020-03-13 02:34:49 -04:00
fcc2cb674f in KiB, not bytes. TODO: maybe convert to bytes? 2019-11-29 07:47:50 -05:00
add247d622 can't start an active guest and vice versa 2019-11-29 07:37:00 -05:00
542166de67 add exec 2019-11-29 04:51:05 -05:00
66ece65699 better_virsh.py. ASMD, redhat. 2019-11-29 04:47:25 -05:00
brent s
701949b8f7 nice. 2019-10-31 08:26:22 -04:00
brent s
72298d7a4c moving autorepo to Arch_Repo_Builder repo 2019-09-19 02:01:57 -04:00
brent s
62a7d65be5 committing 2019-09-18 03:49:52 -04:00
brent s
6c7f0a3a6f adding arch autorepo 2019-09-13 01:44:45 -04:00
brent s
86f94f49ef heh 2019-08-20 01:01:10 -04:00
brent s
026a296444 added README to BootSync 2019-08-20 00:59:24 -04:00
brent s
7474012ada adding pacman hook for bootsync 2019-08-19 00:34:38 -04:00
brent s
84b6c80b07 d'oh 2019-08-19 00:29:22 -04:00
brent s
3848a0bf7e lol whoops 2019-08-19 00:16:03 -04:00
brent s
31826960c1 finalized. hashtype incorporated into code, streamlined, etc. 2019-08-19 00:05:52 -04:00
brent s
af732a1d64 adding XML/XSD support for hashtype attr 2019-08-18 23:17:31 -04:00
brent s
0c0f6ee81b fixed! no more messages about missing UUID 2019-08-18 22:28:52 -04:00
brent s
c149a7b3b7 adding BootSync 2019-08-18 20:24:39 -04:00
brent s
3976fd631c optimized and loop bug fixed 2019-07-17 17:58:15 -04:00
brent s
c95f1f535b adding Arch mirror ranker (it's better than upstream), needs optimization 2019-07-17 17:32:09 -04:00
brent s
ea3c90d85d updating to its own repo 2019-06-05 21:51:16 -04:00
brent s
eb9bbd8b3b add XInclude support 2019-06-03 16:30:32 -04:00
brent s
76c898588f gorram it 2019-06-02 19:23:16 -04:00
brent s
7dd42eaf4d another logging bug 2019-06-02 19:17:57 -04:00
brent s
e6652df291 oops 2019-06-02 19:11:07 -04:00
brent s
2ab3116d52 fixing logging bug 2019-06-02 18:12:10 -04:00
brent s
5af337fca7 fixing repo initialization 2019-06-02 16:58:57 -04:00
brent s
68669a7fd5 mysql plugin needed some work 2019-06-02 11:56:41 -04:00
brent s
118e1355bc getting there... 2019-06-02 11:38:50 -04:00
brent s
419f266f0f need the absolute path for the plugin 2019-06-02 11:30:45 -04:00
brent s
e4b7bf85e9 whoops, did that in the wrong place 2019-06-02 03:49:00 -04:00
brent s
6d6d1e20b1 forgot to handle if no snapshots are found 2019-06-02 01:46:23 -04:00
brent s
001925af88 so i forgot that the yum module hasn't been ported to py3 yet, and the wrapper requires python3... gorram it, redhat. 2019-06-01 14:44:36 -04:00
brent s
6ba2b6287c forgot to recombine 'em. heh 2019-05-31 17:03:13 -04:00
brent s
7eee1c4658 oops 2019-05-31 16:50:43 -04:00
brent s
a89a6ec94b works! 2019-05-31 16:03:14 -04:00
brent s
4ef4a939e8 think it's working now 2019-05-31 12:28:07 -04:00
brent s
431b4e3425 untested, but pretty sure it's done 2019-05-24 13:37:40 -04:00
brent s
130746fa00 rewriting backup script; config and plugins done, just need to parse it and execute 2019-05-23 02:46:05 -04:00
brent s
36061cccb5 updating sample config 2019-05-22 14:18:50 -04:00
brent s
0422038c47 THERE we go 2019-05-22 14:18:20 -04:00
brent s
fbce0d448e hrmmm 2019-05-22 14:12:37 -04:00
brent s
b1383ff3d5 whoops? 2019-05-21 14:42:49 -04:00
brent s
b8012e8b4b adding xsd 2019-05-21 14:39:15 -04:00
brent s
58ee4cff4d updating sshsecure 2019-04-17 08:59:26 -04:00
brent s
55b61fab65 adding password generator 2019-02-18 11:29:25 -05:00
brent s
981d92db92 adding bootsync 2019-02-06 15:59:02 -05:00
brent s
b8622c4462 thx jthan 2019-01-31 09:30:15 -05:00
brent s
d744250c1b pip.main moved to pip.__main (or something like that?) but don't try to use it anyways. 2019-01-25 02:40:45 -05:00
brent s
ae2a7be09d bugfix 2019-01-18 18:53:49 -05:00
brent s
305da25420 prettier time output for timeout 2019-01-18 16:00:47 -05:00
brent s
623c0e3abd whoops.. 2019-01-18 15:56:13 -05:00
brent s
31e8c4acee i think i figured it out... i need the *parent* pid, not the pid itself. https://raamdev.com/2007/kill-inactive-and-idle-linux-users/ 2019-01-18 15:41:10 -05:00
brent s
c55b844ad1 no WONDER that wasn't working 2019-01-18 15:12:17 -05:00
brent s
5d4e218db5 fixes 2019-01-18 14:50:53 -05:00
brent s
423c509878 oop 2019-01-18 14:45:46 -05:00
brent s
94b326f1e5 oops 2019-01-18 14:42:37 -05:00
brent s
6cea3c012e added user_cull 2019-01-18 14:39:45 -05:00
brent s
262d10f55d centos 6 is a piece of shit 2019-01-17 08:08:46 -05:00
brent s
06bfb8f3de gorRAM IT 2019-01-17 06:00:32 -05:00
brent s
e091d94f91 oh gorram it. 2019-01-17 05:45:17 -05:00
brent s
a79802afa1 uhhh... preventing multiple simultaneous runs is important. 2019-01-17 04:02:40 -05:00
brent s
122c366490 sshsecure now restart sshd 2019-01-17 03:43:37 -05:00
brent s
aa8fa6f1c4 updates to sshsecure... HOPEFULLY it works with centos 6 now. 2019-01-17 02:25:35 -05:00
brent s
46357d8cf8 gorram it. perms change 2019-01-09 14:47:54 -05:00
brent s
c5821b4e56 adding file extractor 2019-01-09 14:44:42 -05:00
brent s
86875ff09f checking in ssh stuff (unfinished), and some updates to clientinfo 2018-12-29 09:35:16 -05:00
brent s
676a2be088 lol whoops 2018-12-04 10:55:11 -05:00
brent s
6d191405be fixing some vanilla centos 6 bullshit 2018-12-04 10:52:34 -05:00
brent s
ca4a9e4b08 added local RPM support for listing files in RPM 2018-11-26 06:49:12 -05:00
brent s
cfc48cac26 fixing bug in find_changed_confs with symlink-skipping 2018-11-17 03:24:38 -05:00
brent s
1fc59208b6 adding autopkg 2018-11-12 15:45:16 -05:00
brent s
69d13d5c97 updating minor spec break in repoclone for arch, other small notes/fixes 2018-11-08 19:04:11 -05:00
brent s
b17646ea4f i was straight-up goofin' 2018-11-07 12:40:37 -05:00
brent s
b27ee82a9d oops 2018-11-03 02:49:24 -04:00
brent s
33043a3499 adding some new scripts and updated hack(le)s 2018-11-03 02:37:19 -04:00
brent s
6f450ab68f updating some scripts - fixes, mostly. conf_minify works WAY better now. 2018-10-18 14:13:34 -04:00
brent s
d84a98520a lel, eff off CRLF 2018-09-27 15:30:53 -04:00
brent s
fa2fda8054 whoops! forgot -p for json and xml. 2018-09-27 14:40:47 -04:00
brent s
e1eefebf9d .. 2018-09-27 14:30:21 -04:00
brent s
f169080f59 boom,
better implementation of yumdb's search utility
2018-09-27 14:28:25 -04:00
brent s
ef28b55686 oops? https://hg.python.org/cpython/file/tip/Lib/importlib/__init__.py#l6 2018-08-14 14:15:44 -04:00
brent s
24b5899280 whoops! it's complete 2018-08-14 03:47:54 -04:00
brent s
81f85b7e48 adding mtree_to_xml.py 2018-08-14 03:42:18 -04:00
brent s
399819748f gorRAM IT 2018-08-07 17:53:37 -04:00
brent s
6078736f70 oops 2018-08-07 17:47:38 -04:00
brent s
120b576a38 adding restore functionality 2018-08-07 17:42:54 -04:00
brent s
b566970d57 .... 2018-08-07 12:27:01 -04:00
brent s
380301725d gorram it 2018-08-07 12:25:21 -04:00
brent s
cd9148bcec fixing some 3.7 stuff 2018-08-07 12:23:27 -04:00
brent s
4dba35f455 forcing abspath 2018-08-07 12:11:25 -04:00
brent s
5bd2a87d0c fixing symlink chk 2018-08-07 11:47:13 -04:00
brent s
91bff5412c fixing empty value allowance 2018-08-07 11:35:40 -04:00
brent s
8c9a3cd14b change to python3 instead of explicit 3.6 2018-08-07 10:54:59 -04:00
brent s
e1cd54a7b2 got color inversion working - for mIRC, anyways. haven't tested with an irssi log. still getting weird color bugs on irssi but i think mirc is done 2018-08-02 09:11:01 -04:00
brent s
e169160159 oops, i broke it 2018-08-02 07:30:43 -04:00
brent s
9f1515b96c checking irssi log parser rewrite progress.... got a TON done on it 2018-08-02 01:48:26 -04:00
brent s
aac603b4ee removing some extraneous output 2018-07-22 20:31:05 -04:00
brent s
5243da4d0a adding arch iso checker/downloader 2018-07-22 20:29:25 -04:00
brent s
a39e6e4bb6 works! yay 2018-06-15 23:09:28 -04:00
brent s
f23c20da99 thanks, amayer!
12:10:31 < amayer> r00t^2: Shouldn't this be "0" * 64 ? https://git.square-r00t.net/OpTools/tree/centos/find_changed_confs.py#n48
2018-05-21 12:24:50 -04:00
brent s
ba53a8d162 a little simplification and minor TODO-ing 2018-05-08 12:46:52 -04:00
brent s
e18ebb24fb aaaand pubkey parsing added as well. i think this is Done(TM) 2018-05-08 12:32:17 -04:00
brent s
38227cf938 change this to something more apropos 2018-05-08 12:13:25 -04:00
brent s
07ab9840ca fixing URL parsing 2018-05-08 10:04:05 -04:00
brent s
4f775159c8 fixing small bug 2018-05-08 05:27:12 -04:00
brent s
36c20eae91 adding ascii ref links and ssl_tls/certparser.py
(because jthan keeps forgetting how to use openssl cli)
2018-05-08 05:19:10 -04:00
brent s
eb33ecd559 todo, cleaning up timestamp file 2018-05-05 07:05:15 -04:00
brent s
b843a473bc actually, throw a .txt on there so it plays nicely with MIME 2018-05-05 06:57:56 -04:00
brent s
2d0e15f811 adding timestamp file 2018-05-05 06:51:23 -04:00
brent s
4640030373 adding logger lib and conf_minify 2018-04-28 07:59:30 -04:00
brent s
0836b93fee gorram it 2018-04-21 00:54:38 -04:00
brent s
20129f5ac0 "stop whining!" - arnold schwarzenegger 2018-04-21 00:52:25 -04:00
brent s
67c3eb93fa add option to skip symlinks in RPMs (apache, etc.) 2018-04-19 13:41:48 -04:00
brent s
0ed777e86b stop emailing me because of a temporary rsync error 2018-04-19 01:18:19 -04:00
brent s
c546d427fd adding a text file of quick python hacks 2018-04-17 14:35:06 -04:00
brent s
5b941f171f almost forgot to add in forced-ipv4/ipv6 support. whoops! 2018-04-16 01:15:35 -04:00
brent s
652d616471 better output options 2018-04-15 23:02:55 -04:00
brent s
6d04f262db done 2018-04-15 22:05:24 -04:00
brent s
ea414ca5b7 minor updates 2018-04-15 15:20:48 -04:00
brent s
3d537e42bc i think i fixed it- wasn't rendering correct rel_ver 2018-04-15 13:46:28 -04:00
brent s
3252303573 centos repo mirror script is done 2018-04-15 11:36:41 -04:00
brent s
166f5a9021 . 2018-04-14 17:53:55 -04:00
brent s
080ee26fff updating some sksdump stuff, adding a filesizes summer 2018-04-14 13:03:07 -04:00
brent s
5b2f4d5c0a adding exec to iso mirror sort, adding script to detect changes between rpm and local 2018-04-10 15:33:32 -04:00
brent s
591db419ad minor changes 2018-03-13 12:15:32 -04:00
brent s
7b55f4e3f6 should probably include these too 2018-03-08 19:31:29 -05:00
brent s
48364561bf Merge branch 'master' of square-r00t.net:optools 2018-02-13 00:09:43 -05:00
brent s
a8b9ecc7a0 intermediary commit 2018-02-13 00:09:36 -05:00
7f03396325 14:04:47 < jthan> r00t^2: can you add a return() to the main here?
14:04:49 < jthan> https://git.square-r00t.net/OpTools/tree/arch/repoclone.py
14:04:49 < jthan> lol
2018-01-31 20:42:46 -05:00
e4552a879f oops typo 2018-01-29 01:49:42 -05:00
brent s
caca1d1e84 new script, centos/isomirror_sort.py 2018-01-13 02:10:15 -05:00
brent s
3ccef84028 tweaks to the apacman installer 2018-01-12 19:22:26 -05:00
brent s
48eb809e84 adding rough beginning of mirror check/ranker/updater/etc. 2017-12-01 02:30:14 -05:00
brent s
2d42adaaf7 quick gzip fix 2017-11-26 09:00:47 -05:00
brent s
045bcfaa0d checking in license, etc. also checkpointing the irssi log parser; i'm about to remove the list-creation 2017-11-20 06:37:46 -05:00
brent s
9c528c4908 checking in all work done so far because what if my SSD dies? 2017-11-18 22:33:31 -05:00
brent s
b2109646f3 cleverness (mostly) confirmed. exploiting bugs in cmd.exe for fun and pretty printing ftw i guess? 2017-11-17 15:09:03 -05:00
brent s
7598e2bf6f is a clever hack only clever if it works? 🤔 (re: color codes) 2017-11-17 14:57:58 -05:00
brent s
aa791870be pushing for external testing 2017-11-14 15:22:38 -05:00
brent s
1bac4a9d78 i'm about to totally re-do how i'm approaching SSH tunneling, sooo... 2017-11-13 23:38:52 -05:00
brent s
fd68ba87b7 adding rfc.py 2017-11-12 13:44:06 -05:00
brent s
731609f689 fixing bug with replace mode 2017-11-08 23:12:36 -05:00
brent s
405bf79d56 adding hostscan 2017-11-08 17:00:11 -05:00
brent s
d264281284 ... 2017-10-29 23:49:19 -04:00
brent s
30a9e1fc58 wrong var name 2017-10-28 23:15:35 -04:00
brent s
d2f92dad86 ignore local urls.csv 2017-10-26 10:43:39 -04:00
brent s
598fd5fcde ffs 2017-10-26 07:22:36 -04:00
brent s
aaf6b16407 oops 2017-10-26 07:18:36 -04:00
brent s
7ffde14257 missed one 2017-10-26 06:30:33 -04:00
brent s
e41b3f5ab3 gorram it 2017-10-26 06:16:03 -04:00
brent s
e19ae130e9 missed one... 2017-10-26 06:14:05 -04:00
brent s
44fce6e456 oops 2017-10-26 06:13:31 -04:00
brent s
b09a07e3aa k, fix in for better cron handling. it's recommended you call it in cron with: -v -Ll warning
this will print to stdout anything above a warning level. (optionally you can do "-Ll info" too, if you want a little more verbosity.)
if you want full backup reports sent via cron, assuming you have a mailer set up, use info.
2017-10-26 05:11:37 -04:00
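(The cron recommendation in that commit drops into a crontab directly; a sketch, where the install path `/usr/local/bin/backup.py` is a hypothetical placeholder:)

```shell
# m  h  dom mon dow  command
30   3   *   *   *   /usr/local/bin/backup.py -v -Ll warning
# cron mails whatever the job prints, so only warning-and-above reaches your
# inbox; swap in "-Ll info" for full backup reports (requires a working mailer).
```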
brent s
ae118ee9ed modify for better error detection since some programs write to stderr for non-error output 2017-10-26 02:04:08 -04:00
brent s
5ab91b01f7 ... 2017-10-25 16:28:08 -04:00
brent s
a8914da258 some day my prince(ss) will come.
and by that i mean i'll stop making typos.
2017-10-25 03:24:22 -04:00
brent s
58736f7f95 typos. gorram typos. 2017-10-25 03:22:05 -04:00
brent s
b651901b91 ahem. 2017-10-25 03:14:33 -04:00
brent s
afd24b8070 fucking borg 2017-10-25 03:13:43 -04:00
brent s
3a61ea161a ┻━┻ ︵ヽ(`Д´)ノ︵ ┻━┻ 2017-10-25 03:11:47 -04:00
brent s
d4a5bf60db (╯°□°)╯︵ ┻━┻) 2017-10-25 03:10:35 -04:00
brent s
5834e8fafc oh right 2017-10-25 03:09:37 -04:00
brent s
c3e5baf04b gorram it. 2017-10-25 03:08:56 -04:00
brent s
836aacd425 whooops 2017-10-25 03:07:18 -04:00
brent s
b4ee009465 borg backup script added. ready for testing 2017-10-25 02:50:45 -04:00
brent s
33558610a6 fix for sksdump and adding journald support for the backup script 2017-10-24 06:04:54 -04:00
brent s
3bcdb408a1 logging added, needs to be modified to write to journald 2017-10-24 04:57:15 -04:00
brent s
6801047d0a adding script in-progress. i remove -v/--verbose in favor of the loglevel option (e.g. debug). 2017-10-23 03:10:06 -04:00
brent s
11f82ee44e updating todo again 2017-10-12 02:53:39 -04:00
brent s
f8b5d7e2d8 updating todos 2017-10-12 02:51:18 -04:00
brent s
ce317bf3f7 whew, mumble user cert hashing done. 2017-10-12 02:12:36 -04:00
brent s
b3a3427a9a todo update 2017-10-10 21:37:47 -04:00
brent s
7bc6ea3408 testing local clone hook 2017-10-10 21:19:31 -04:00
brent s
8add03fadb need to be able to idempotently only change the config files 2017-10-10 21:09:15 -04:00
brent s
f904052111 ...we don't need to restart for ls operations 2017-10-10 20:15:49 -04:00
brent s
704e590891 we need to restart murmur if we're updating the db directly, so in the future we need to RPC/DBUS/ICE this 2017-10-10 20:14:33 -04:00
brent s
08d3958b47 basic functionality. editing still not there, but it's Usable(TM) 2017-10-10 20:03:24 -04:00
brent s
c49112a28d lol. forgot to actually call it. 2017-10-09 16:32:13 -04:00
brent s
eb6999169e we want to narrow down the stdout vs stderr 2017-10-09 14:56:19 -04:00
brent s
b489c69772 one more 2017-10-09 14:46:47 -04:00
brent s
300beededb doh 2017-10-09 14:45:37 -04:00
brent s
7786f0f49d forgot the execute bit 2017-10-09 14:40:19 -04:00
brent s
8441148e36 think it's ready 2017-10-09 14:38:33 -04:00
brent s
6423c36f24 this should work 2017-10-09 09:42:26 -04:00
brent s
3776f89d9c this note too 2017-10-09 09:20:15 -04:00
brent s
e74c554643 checking in so i don't lose this snippet, but i need to do this totally different. 2017-10-09 09:18:37 -04:00
brent s
f47f2f8640 check-in for mirror checker 2017-10-08 20:05:50 -04:00
brent s
54751f9753 ffs. 2017-10-08 03:57:59 -04:00
brent s
a31a528e60 whoops 2017-10-08 03:54:54 -04:00
brent s
ef73b92929 updating to support throttling... 2017-10-08 03:48:51 -04:00
brent s
055a373885 oops. IPv6 encapsulation of IPv4... 2017-10-06 15:11:46 -04:00
brent s
1491db7e1e fixing url rendering on usage 2017-10-06 15:04:19 -04:00
brent s
a6c557097a whew. 2017-10-06 14:05:56 -04:00
brent s
8aaf23cdac adding net project with addr subproject 2017-10-05 21:17:04 -04:00
brent s
6c2dfce9a7 lol. 2017-10-05 09:49:11 -04:00
brent s
20185dea68 oops 2017-10-05 00:39:29 -04:00
brent s
a7fb958a2c whoops 2017-09-29 07:00:57 -04:00
brent s
e03be139ef adding aif scripts/config 2017-09-28 01:24:51 -04:00
brent s
2ab99f0f22 i think... we're done. still some TODOs but seems to be in a workable state. 2017-09-21 15:18:26 -04:00
brent s
4dedd79942 config system implemented 2017-09-19 05:59:01 -04:00
brent s
b2ba35504d and replacing. 2017-09-19 05:09:58 -04:00
brent s
4da7afdeaf adding the rewrite... 2017-09-19 05:09:33 -04:00
brent s
23a0dfedb1 sksdump fix. again. 2017-09-18 03:13:01 -04:00
brent s
fb7d964516 restructured to use config file and arguments 2017-09-15 12:50:45 -04:00
brent s
72c1532284 error handling 2017-09-15 10:24:06 -04:00
brent s
130074788a get cron to shut the hell up. too. many. emails. 2017-09-15 09:56:11 -04:00
brent s
30f508f40c ... 2017-09-15 09:48:32 -04:00
brent s
8ff59fdaf0 yikes 2017-09-15 09:47:05 -04:00
brent s
28e46f6f51 fuckING 2017-09-15 09:19:14 -04:00
brent s
e0a625853d need. sleep. 2017-09-15 09:17:34 -04:00
brent s
31ecf0b262 whoooops 2017-09-15 09:15:44 -04:00
brent s
fe48317d07 whoooops 2017-09-15 09:12:37 -04:00
brent s
9be695aea6 whew. major restructuring of repoclone... 2017-09-15 09:10:37 -04:00
brent s
b1aaca28d7 make this a little more prod-ready. a *little* more. 2017-09-13 18:51:46 -04:00
brent s
05c3fcc825 whoops 2017-09-13 18:44:23 -04:00
brent s
4cf5a6393a hitting memory issues on the dump box; need to sync then compress on remote 2017-09-13 18:32:28 -04:00
brent s
3909b0c783 oh come ON 2017-09-09 12:28:12 -04:00
brent s
5ace114ef8 ...and *i* need to be more careful about writing python after waking up 2017-09-09 12:27:26 -04:00
brent s
5ad4f0bda8 someone needs to write an editor that has a hotkey to disable the mouse pasting. it's gonna be a looong day. 2017-09-09 12:26:30 -04:00
brent s
3869b30198 whoops 2017-09-09 12:25:14 -04:00
brent s
f652aa7c35 fix to sksdump 2017-09-09 12:23:13 -04:00
brent s
6dbc713dc9 WHEW. test.py working now. still need to test pushing to a keyserver 2017-09-08 04:13:56 -04:00
brent s
20388431aa BROKEN AF, in the middle of a rewrite 2017-09-07 16:36:26 -04:00
brent s
eea9cf778e we can use asciidoctor to render man pages, apparently? something like:
asciidoctor -b manpage kant.1.adoc -o- | groff -Tascii -man
2017-09-05 03:35:20 -04:00
brent s
b93ac7368d restructuring, adding man page to let us make the help output less verbose 2017-09-05 00:00:17 -04:00
brent s
7df13e51e3 check-in.... 2017-09-04 20:45:08 -04:00
brent s
eddf7750c7 update to subdir mgmt, active work undergoing on kant, and some notes for future projects for arch pkg mgmt 2017-09-03 09:25:13 -04:00
brent s
efa84759da UX teak 2017-09-02 10:20:19 -04:00
brent s
86eba8b6ab let's make the compression a little more predictable 2017-09-02 10:01:42 -04:00
brent s
a1925e1053 a little better logging marks 2017-09-01 19:30:28 -04:00
143 changed files with 32728 additions and 448 deletions


@@ -0,0 +1,16 @@
#!/bin/bash
origdir="${PWD}"
docsdir="${PWD}/ref/ascii/"
# No asciidoctor available: nothing to regenerate, exit cleanly.
if ! command -v asciidoctor &> /dev/null; then
    exit 0
fi
cd "${docsdir}"
asciidoctor -o ascii.html ascii.adoc
cd "${origdir}"
git add "${docsdir}/ascii.html"

.gitignore (vendored): 2 lines changed

@@ -22,4 +22,6 @@ __pycache__/
*.run
*.7z
*.rar
*.sqlite3
*.deb
.idea/

LICENSE (new file, 674 lines)

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

TODO

@@ -0,0 +1,39 @@
- sshsecure is being re-written in golang
-vault, schema dumper (dump mounts, paths (optional w/switch or toggle), and meta information)
--ability to recreate from xml dump
-git
-net/addr needs DNS/PTR/allocation stuff etc.
-net/mirroring
-storage, see if we can access lvm and cryptsetup functions via https://github.com/storaged-project/libblockdev/issues/41
--http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.MDRaid.html
--http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Encrypted.html
--http://mindbending.org/en/python-and-udisks-part-2
--http://storaged.org/doc/udisks2-api/2.6.5/gdbus-org.freedesktop.UDisks2.Block.html
--https://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html
sshkeys:
-need to verify keys via GPG signature. we also need a more robust way of updating pubkeys - categorization, roles
-write API to get pubkeys, hostkeys? really wish DBs supported nesting
-separate by algo, but this is easy to do (split on space, [0])
snippet: create mtree with libarchive, bsdtar -cf /tmp/win.mtree --one-file-system --format=mtree --options='mtree:sha512,mtree:indent' /path/*
probably need to package https://packages.debian.org/source/stretch/freebsd-buildutils to get fmtree for reading
-net, add ipxe - write flask app that determines path based on MAC addr
-net, add shorewall templater
-port in sslchk
-script that uses uconv(?) and pymysql to export database to .ods
-IRC
-- I should use the python IRC module on PyPI to join an IRC network (freenode, probably, for my personal interests) and
run an iteration over all nicks in a channel with /ctcp <nick> VERSION. Handy when I'm trying to find someone running
a certain platform/client I have some questions about.
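The sshkeys "split on space, [0]" note above can be sketched in a few lines of Python; the key strings in the example are illustrative placeholders, not real key material:

```python
# Group SSH public keys by algorithm: the algorithm name is the first
# space-separated field of a standard one-line pubkey entry.
from collections import defaultdict

def keys_by_algo(pubkeys):
    grouped = defaultdict(list)
    for line in pubkeys:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        algo = line.split()[0]  # e.g. 'ssh-ed25519', 'ssh-rsa'
        grouped[algo].append(line)
    return dict(grouped)

# Illustrative (truncated) entries, not real keys:
keys = [
    'ssh-ed25519 AAAAC3Nza... user@host',
    'ssh-rsa AAAAB3Nza... user@host',
]
print(sorted(keys_by_algo(keys)))  # ['ssh-ed25519', 'ssh-rsa']
```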

aif/cfgs/base.xml

@@ -0,0 +1,62 @@
<?xml version="1.0" encoding="UTF-8" ?>
<aif xmlns:aif="https://aif.square-r00t.net"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://aif.square-r00t.net aif.xsd">
<storage>
<disk device="/dev/sda" diskfmt="gpt">
<part num="1" start="0%" size="10%" fstype="ef00" />
<part num="2" start="10%" size="100%" fstype="8300" />
</disk>
<mount source="/dev/sda2" target="/mnt/aif" order="1" />
<mount source="/dev/sda1" target="/mnt/aif/boot" order="2" />
</storage>
<network hostname="aiftest.square-r00t.net">
<iface device="auto" address="auto" netproto="ipv4" />
</network>
<system timezone="EST5EDT" locale="en_US.UTF-8" chrootpath="/mnt/aif" reboot="1">
<users rootpass="!" />
<service name="sshd" status="1" />
<service name="cronie" status="1" />
<service name="haveged" status="1" />
</system>
<pacman command="apacman -S">
<repos>
<repo name="core" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="extra" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="community" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib" enabled="true" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="multilib-testing" enabled="false" siglevel="default" mirror="file:///etc/pacman.d/mirrorlist" />
<repo name="archlinuxfr" enabled="false" siglevel="Optional TrustedOnly" mirror="http://repo.archlinux.fr/$arch" />
</repos>
<mirrorlist>
<mirror>http://mirror.us.leaseweb.net/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirrors.advancedhosters.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://ftp.osuosl.org/pub/archlinux/$repo/os/$arch</mirror>
<mirror>http://arch.mirrors.ionfish.org/$repo/os/$arch</mirror>
<mirror>http://mirrors.gigenet.com/archlinux/$repo/os/$arch</mirror>
<mirror>http://mirror.jmu.edu/pub/archlinux/$repo/os/$arch</mirror>
</mirrorlist>
<software>
<package name="sed" repo="core" />
<package name="python" />
<package name="openssh" />
<package name="vim" />
<package name="vim-plugins" />
<package name="haveged" />
<package name="byobu" />
<package name="etc-update" />
<package name="cronie" />
<package name="mlocate" />
<package name="mtree-git" />
</software>
</pacman>
<bootloader type="grub" target="/boot" efi="true" />
<scripts>
<script uri="https://aif.square-r00t.net/cfgs/scripts/pkg/python.sh" order="1" execution="pkg" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/pkg/apacman.py" order="2" execution="pkg" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/sshsecure.py" order="1" execution="post" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/sshkeys.py" order="2" execution="post" />
<script uri="https://aif.square-r00t.net/cfgs/scripts/post/configs.py" order="3" execution="post" />
</scripts>
</aif>


@@ -0,0 +1,98 @@
#!/usr/bin/env python3
import datetime
import os
import re
import shutil
import subprocess
from urllib.request import urlopen
pkg_base = 'apacman'
pkgs = ('', '-deps', '-utils')
url_base = 'https://aif.square-r00t.net/cfgs/files'
local_dir = '/tmp'
conf_options = {}
conf_options['apacman'] = {'enabled': ['needed', 'noconfirm', 'noedit', 'progress', 'purgebuild', 'skipcache', 'keepkeys'],
'disabled': [],
'values': {'tmpdir': '"/var/tmp/apacmantmp-$UID"'}}
conf_options['pacman'] = {'enabled': [],
'disabled': [],
'values': {'UseSyslog': None, 'Color': None, 'TotalDownload': None, 'CheckSpace': None, 'VerbosePkgLists': None}}
def downloadPkg(pkgfile, dlfile):
url = os.path.join(url_base, pkgfile)
# Prep the destination
os.makedirs(os.path.dirname(dlfile), exist_ok = True)
# Download the pacman package
with urlopen(url) as u:
with open(dlfile, 'wb') as f:
f.write(u.read())
return()
def installPkg(pkgfile):
# Install it
subprocess.run(['pacman', '-Syyu']) # Installing from an inconsistent state is bad, mmkay?
subprocess.run(['pacman', '--noconfirm', '--needed', '-S', 'base-devel'])
subprocess.run(['pacman', '--noconfirm', '--needed', '-S', 'multilib-devel'])
subprocess.run(['pacman', '--noconfirm', '--needed', '-U', pkgfile])
return()
def configurePkg(opts, pkgr):
cf = '/etc/{0}.conf'.format(pkgr)
# Configure it
shutil.copy2(cf, '{0}.bak.{1}'.format(cf, int(datetime.datetime.now(datetime.timezone.utc).timestamp())))
with open(cf, 'r') as f:
conf = f.readlines()
for idx, line in enumerate(conf):
l = line.split('=')
opt = l[0].strip('\n').strip()
if len(l) > 1:
val = l[1].strip('\n').strip()
# enabled options
for o in opts['enabled']:
if re.sub('^#?', '', opt).strip() == o:
if pkgr == 'apacman':
conf[idx] = '{0}=1\n'.format(o)
elif pkgr == 'pacman':
conf[idx] = '{0}\n'.format(o)
# disabled options
for o in opts['disabled']:
if re.sub('^#?', '', opt).strip() == o:
if pkgr == 'apacman':
conf[idx] = '{0}=0\n'.format(o)
elif pkgr == 'pacman':
conf[idx] = '#{0}\n'.format(o)
# values
for o in opts['values']:
if opts['values'][o] is not None:
if re.sub('^#?', '', opt).strip() == o:
if pkgr == 'apacman':
conf[idx] = '{0}={1}\n'.format(o, opts['values'][o])
elif pkgr == 'pacman':
conf[idx] = '{0} = {1}\n'.format(o, opts['values'][o])
else:
if re.sub('^#?', '', opt).strip() == o:
conf[idx] = '{0}\n'.format(o)
with open(cf, 'w') as f:
f.write(''.join(conf))
def finishPkg():
# Finish installing (optional deps)
for p in ('git', 'customizepkg-scripting', 'pkgfile', 'rsync'):
subprocess.run(['apacman', '--noconfirm', '--needed', '-S', p])
def main():
for p in pkgs:
pkg = pkg_base + p
fname = '{0}.tar.xz'.format(pkg)
local_pkg = os.path.join(local_dir, fname)
downloadPkg(fname, local_pkg)
installPkg(local_pkg)
for tool in ('pacman', 'apacman'):
configurePkg(conf_options[tool], tool)
finishPkg()
return()
if __name__ == '__main__':
main()


@@ -0,0 +1,3 @@
#!/bin/bash
pacman --needed --noconfirm -S python python-pip python-setuptools

aif/scripts/post/configs.py

@@ -0,0 +1,136 @@
#!/usr/bin/env python3
import os
import pathlib
import pwd
import subprocess
def byobu(user = 'root'):
homedir = os.path.expanduser('~{0}'.format(user))
subprocess.run(['byobu-enable'])
b = '{0}/.byobu'.format(homedir)
os.makedirs(b, mode = 0o755, exist_ok = True)
# The keybindings, and general enabling
confs = {'backend': 'BYOBU_BACKEND=tmux\n',
'color': 'BACKGROUND=k\nFOREGROUND=w\nMONOCHROME=0', # NOT a typo; the original source I got this from had no end newline.
'color.tmux': 'BYOBU_DARK="\\#333333"\nBYOBU_LIGHT="\\#EEEEEE"\nBYOBU_ACCENT="\\#75507B"\nBYOBU_HIGHLIGHT="\\#DD4814"\n',
'datetime.tmux': 'BYOBU_DATE="%Y-%m-%d "\nBYOBU_TIME="%H:%M:%S"\n',
'keybindings': 'source $BYOBU_PREFIX/share/byobu/keybindings/common\n',
'keybindings.tmux': 'unbind-key -n C-a\nset -g prefix ^A\nset -g prefix2 ^A\nbind a send-prefix\n',
'profile': 'source $BYOBU_PREFIX/share/byobu/profiles/common\n',
'profile.tmux': 'source $BYOBU_PREFIX/share/byobu/profiles/tmux\n',
'prompt': '[ -r /usr/share/byobu/profiles/bashrc ] && . /usr/share/byobu/profiles/bashrc #byobu-prompt#\n',
'.screenrc': None,
'.tmux.conf': None,
'.welcome-displayed': None,
'windows': None,
'windows.tmux': None}
for c in confs.keys():
with open('{0}/{1}'.format(b, c), 'w') as f:
if confs[c] is not None:
f.write(confs[c])
else:
f.write('')
# The status file: add some extras, and remove the session string, which is apparently broken.
# Holy shit I wish there was a way of storing compressed text in plaintext besides base64.
statusconf = ["# status - Byobu's default status enabled/disabled settings\n", '#\n', '# Override these in $BYOBU_CONFIG_DIR/status\n',
'# where BYOBU_CONFIG_DIR is XDG_CONFIG_HOME if defined,\n', '# and $HOME/.byobu otherwise.\n', '#\n',
'# Copyright (C) 2009-2011 Canonical Ltd.\n', '#\n', '# Authors: Dustin Kirkland <kirkland@byobu.org>\n', '#\n',
'# This program is free software: you can redistribute it and/or modify\n', '# it under the terms of the GNU ' +
'General Public License as published by\n', '# the Free Software Foundation, version 3 of the License.\n', '#\n',
'# This program is distributed in the hope that it will be useful,\n', '# but WITHOUT ANY WARRANTY; without even the ' +
'implied warranty of\n', '# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n', '# GNU General Public License ' +
'for more details.\n', '#\n', '# You should have received a copy of the GNU General Public License\n', '# along with this ' +
'program. If not, see <http://www.gnu.org/licenses/>.\n', '\n', "# Status beginning with '#' are disabled.\n", '\n', '# Screen has ' +
'two status lines, with 4 quadrants for status\n', 'screen_upper_left="color"\n', 'screen_upper_right="color whoami hostname ' +
'ip_address menu"\n', 'screen_lower_left="color logo distro release #arch session"\n', 'screen_lower_right="color network #disk_io ' +
'custom #entropy raid reboot_required updates_available #apport #services #mail users uptime #ec2_cost #rcs_cost #fan_speed #cpu_temp ' +
'battery wifi_quality #processes load_average cpu_count cpu_freq memory #swap disk #time_utc date time"\n', '\n', '# Tmux has one ' +
'status line, with 2 halves for status\n', 'tmux_left=" logo #distro release arch #session"\n', '# You can have as many tmux right ' +
'lines below here, and cycle through them using Shift-F5\n', 'tmux_right=" network disk_io #custom #entropy raid reboot_required ' +
'#updates_available #apport services #mail #users uptime #ec2_cost #rcs_cost #fan_speed #cpu_temp #battery #wifi_quality processes ' +
'load_average cpu_count cpu_freq memory #swap disk whoami hostname ip_address time_utc date time"\n', '#tmux_right="network ' +
'#disk_io #custom entropy raid reboot_required updates_available #apport #services #mail users uptime #ec2_cost #rcs_cost fan_speed ' +
'cpu_temp battery wifi_quality #processes load_average cpu_count cpu_freq memory #swap #disk whoami hostname ip_address #time_utc ' +
'date time"\n', '#tmux_right="network #disk_io custom #entropy raid reboot_required updates_available #apport #services #mail users ' +
'uptime #ec2_cost #rcs_cost #fan_speed #cpu_temp battery wifi_quality #processes load_average cpu_count cpu_freq memory #swap #disk ' +
'#whoami #hostname ip_address #time_utc date time"\n', '#tmux_right="#network disk_io #custom entropy #raid #reboot_required ' +
'#updates_available #apport #services #mail #users #uptime #ec2_cost #rcs_cost fan_speed cpu_temp #battery #wifi_quality #processes ' +
'#load_average #cpu_count #cpu_freq #memory #swap whoami hostname ip_address #time_utc disk date time"\n']
with open('{0}/status'.format(b), 'w') as f:
f.write(''.join(statusconf))
# The statusrc file is another lengthy one.
statusrc = ["# statusrc - Byobu's default status configurations\n", '#\n', '# Override these in $BYOBU_CONFIG_DIR/statusrc\n',
'# where BYOBU_CONFIG_DIR is XDG_CONFIG_HOME if defined,\n', '# and $HOME/.byobu otherwise.\n', '#\n', '# Copyright (C) ' +
'2009-2011 Canonical Ltd.\n', '#\n', '# Authors: Dustin Kirkland <kirkland@byobu.org>\n', '#\n', '# This program is free software: ' +
'you can redistribute it and/or modify\n', '# it under the terms of the GNU General Public License as published by\n',
'# the Free Software Foundation, version 3 of the License.\n', '#\n', '# This program is distributed in the hope that it will be ' +
'useful,\n', '# but WITHOUT ANY WARRANTY; without even the implied warranty of\n', '# MERCHANTABILITY or FITNESS FOR A PARTICULAR ' +
'PURPOSE. See the\n', '# GNU General Public License for more details.\n', '#\n', '# You should have received a copy of the GNU ' +
'General Public License\n', '# along with this program. If not, see <http://www.gnu.org/licenses/>.\n', '\n', '# Configurations that ' +
'you can override; if you leave these commented out,\n', '# Byobu will try to auto-detect them.\n', '\n', '# This should be auto-detected ' +
'for most distro, but setting it here will save\n', '# some call to lsb_release and the like.\n', '#BYOBU_DISTRO=Ubuntu\n', '\n',
'# Default: depends on the distro (which is either auto-detected, either set\n', '# via $DISTRO)\n', '#LOGO="\\o/"\n', '\n', '# Abbreviate ' +
'the release to N characters\n', '# By default, this is disabled. But if you set RELEASE_ABBREVIATED=1\n', '# and your lsb_release is ' +
'"precise", only "p" will be displayed\n', '#RELEASE_ABBREVIATED=1\n', '\n', '# Default: /\n', '#MONITORED_DISK=/\n', '\n', '# Minimum ' +
'disk throughput that triggers the notification (in kB/s)\n', '# Default: 50\n', '#DISK_IO_THRESHOLD=50\n', '\n', '# Default: eth0\n',
'#MONITORED_NETWORK=eth0\n', '\n', '# Unit used for network throughput (either bits per second or bytes per second)\n', '# Default: ' +
'bits\n', '#NETWORK_UNITS=bytes\n', '\n', '# Minimum network throughput that triggers the notification (in kbit/s)\n', '# Default: 20\n',
'#NETWORK_THRESHOLD=20\n', '\n', '# You can add an additional source of temperature here\n', '#MONITORED_TEMP=/proc/acpi/thermal_zone/' +
'THM0/temperature\n', '\n', '# Default: C\n', '#TEMP=F\n', '\n', '#SERVICES="eucalyptus-nc|NC eucalyptus-cloud|CLC eucalyptus-walrus ' +
'eucalyptus-cc|CC eucalyptus-sc|SC"\n', '\n', '#FAN=$(find /sys -type f -name fan1_input | head -n1)\n', '\n', '# You can set this to 1 ' +
'to report your external/public ip address\n', '# Default: 0\n', '#IP_EXTERNAL=0\n', '\n', '# The users notification normally counts ssh ' +
"sessions; set this configuration to '1'\n", '# to instead count number of distinct users logged onto the system\n', '# Default: 0\n',
'#USERS_DISTINCT=0\n', '\n', '# Set this to zero to hide seconds int the time display\n', '# Default 1\n', '#TIME_SECONDS=0\n']
with open('{0}/statusrc'.format(b), 'w') as f:
f.write(''.join(statusrc))
setPerms(user, b)
return()
def vim():
vimc = ['\n', 'set nocompatible\n', 'set number\n', 'syntax on\n', 'set paste\n', 'set ruler\n', 'if has("autocmd")\n',' au BufReadPost * if ' +
'line("\'\\"") > 1 && line("\'\\"") <= line("$") | exe "normal! g\'\\"" | endif\n', 'endif\n', '\n', '" bind F3 to insert a timestamp.\n', '" In ' +
'normal mode, insert.\n', 'nmap <F3> i<C-R>=strftime("%c")<CR><Esc>\n', '\n', 'set pastetoggle=<F2>\n', '\n', '" https://stackoverflow.com/' +
'questions/27771616/turn-off-all-automatic-code-complete-in-jedi-vim\n', 'let g:jedi#completions_enabled = 0\n', 'let g:jedi#show_call_' +
'signatures = "0"\n']
with open('/etc/vimrc', 'a') as f:
f.write(''.join(vimc))
setPerms('root', '/etc/vimrc')
return()
def bash():
bashc = ['\n', 'alias vi=/usr/bin/vim\n', 'export EDITOR=vim\n', '\n', 'if [ -f ~/.bashrc ];\n', 'then\n', ' source ~/.bashrc\n', 'fi \n',
'if [ -d ~/bin ];\n', 'then\n', ' export PATH="$PATH:~/bin"\n', 'fi\n', '\n', 'alias grep="grep --color"\n',
'alias egrep="egrep --color"\n', '\n', 'alias ls="ls --color=auto"\n', 'alias vi="/usr/bin/vim"\n', '\n', 'export HISTTIMEFORMAT="%F %T "\n',
'export PATH="${PATH}:/sbin:/bin:/usr/sbin"\n']
with open('/etc/bash.bashrc', 'a') as f:
f.write(''.join(bashc))
setPerms('root', '/etc/bash.bashrc')
return()
def mlocate():
subprocess.run(['updatedb'])
return()
def setPerms(user, path):
uid = pwd.getpwnam(user).pw_uid
gid = pwd.getpwnam(user).pw_gid
# os.walk() yields nothing for a plain file (e.g. /etc/vimrc), so handle that case first.
if os.path.isfile(path):
os.chown(path, uid, gid)
os.chmod(path, 0o644)
return()
for basedir, dirs, files in os.walk(path):
os.chown(basedir, uid, gid)
os.chmod(basedir, 0o755)
for f in files:
os.chown(os.path.join(basedir, f), uid, gid)
os.chmod(os.path.join(basedir, f), 0o644)
return()
def main():
byobu()
vim()
bash()
mlocate()
if __name__ == '__main__':
main()
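Because os.walk() yields nothing for a non-directory, a recursive-permissions helper like setPerms() needs to special-case plain-file arguments. A minimal self-contained sketch of that pattern (helper name and keyword defaults are mine):

```python
import os
import pwd

def set_perms(user, path, dmode=0o755, fmode=0o644):
    """Recursively chown 'path' to 'user' and normalize modes.
    Handles the plain-file case explicitly, since os.walk() yields
    nothing for a non-directory. Hypothetical variant of setPerms(),
    for illustration."""
    uid = pwd.getpwnam(user).pw_uid
    gid = pwd.getpwnam(user).pw_gid
    if os.path.isfile(path):
        os.chown(path, uid, gid)
        os.chmod(path, fmode)
        return
    for basedir, dirs, files in os.walk(path):
        os.chown(basedir, uid, gid)
        os.chmod(basedir, dmode)
        for f in files:
            fpath = os.path.join(basedir, f)
            os.chown(fpath, uid, gid)
            os.chmod(fpath, fmode)
```

Called on a directory it behaves like the os.walk() loop above; called on a single file it still applies ownership and the file mode instead of silently doing nothing.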

aif/scripts/post/hostscan.py Executable file

@@ -0,0 +1,206 @@
#!/usr/bin/env python3
# Note: for hashed known-hosts, https://gist.github.com/maxtaco/5080023
import argparse
import grp
import os
import pwd
import re
import subprocess
import sys
# Defaults
#def_supported_keys = subprocess.run(['ssh',
# '-Q',
# 'key'], stdout = subprocess.PIPE).stdout.decode('utf-8').splitlines()
def_supported_keys = ['dsa', 'ecdsa', 'ed25519', 'rsa']
def_mode = 'append'
def_syshostkeys = '/etc/ssh/ssh_known_hosts'
def_user = pwd.getpwuid(os.geteuid())[0]
def_grp = grp.getgrgid(os.getegid())[0]
class hostscanner(object):
def __init__(self, args):
self.args = args
if self.args['keytypes'] == ['all']:
self.args['keytypes'] = def_supported_keys
if self.args['system']:
if os.geteuid() != 0:
exit(('You have specified system-wide modification but ' +
'are not running with root privileges! Exiting.'))
self.args['output'] = def_syshostkeys
if self.args['output'] != sys.stdout:
_pardir = os.path.dirname(os.path.abspath(os.path.expanduser(self.args['output'])))
if _pardir.startswith('/home'):
_octmode = 0o700
else:
_octmode = 0o755
os.makedirs(_pardir, mode = _octmode, exist_ok = True)
os.chown(_pardir,
pwd.getpwnam(self.args['chown_user'])[2],
grp.getgrnam(self.args['chown_grp'])[2])
def getHosts(self):
self.keys = {}
_hosts = os.path.abspath(os.path.expanduser(self.args['infile']))
with open(_hosts, 'r') as f:
for l in f.readlines():
l = l.strip()
if re.search(r'^\s*(#.*)?$', l, re.MULTILINE):
continue # Skip commented and blank lines
k = re.sub(r'^([0-9a-z\.-]+(?::[0-9]+)?)\s*#.*$',
r'\g<1>',
l.strip().lower(),
flags = re.MULTILINE)
self.keys[k] = []
return()
def getKeys(self):
def parseType(k):
_newkey = re.sub('^ssh-', '', k).split('-')[0]
if _newkey == 'dss':
_newkey = 'dsa'
return(_newkey)
for h in list(self.keys.keys()):
_h = h.split(':')
if len(_h) == 1:
_host = _h[0]
_port = 22
elif len(_h) == 2:
_host = _h[0]
_port = int(_h[1])
else:
# Unparseable entry (e.g. a bare IPv6 address); skip it.
del(self.keys[h])
continue
_cmdline = ['ssh-keyscan',
'-t', ','.join(self.args['keytypes']),
'-p', str(_port),
_host]
if self.args['hash']:
#https://security.stackexchange.com/a/56283
# verify via:
# SAMPLE ENTRY: |1|F1E1KeoE/eEWhi10WpGv4OdiO6Y=|3988QV0VE8wmZL7suNrYQLITLCg= ssh-rsa ...
#key=$(echo F1E1KeoE/eEWhi10WpGv4OdiO6Y= | base64 -d | xxd -p)
#echo -n "192.168.1.61" | openssl sha1 -mac HMAC -macopt hexkey:${key} | awk '{print $2}' | xxd -r -p | base64
_cmdline.insert(1, '-H')
_cmd = subprocess.run(_cmdline,
stdout = subprocess.PIPE,
stderr = subprocess.PIPE)
if not re.match(r'\s*#.*', _cmd.stderr.decode('utf-8')):
_printerr = []
for i in _cmd.stderr.decode('utf-8').splitlines():
if i.strip() not in _printerr:
_printerr.append(i.strip())
print('{0}: errors detected; skipping ({1})'.format(h, '\n'.join(_printerr)))
del(self.keys[h])
continue
for l in _cmd.stdout.decode('utf-8').splitlines():
_l = l.split()
_key = {'type': _l[1],
'host': _l[0],
'key': _l[2]}
if parseType(_key['type']) in self.args['keytypes']:
self.keys[h].append(_key)
return()
def write(self):
if self.args['writemode'] == 'replace':
if os.path.isfile(self.args['output']) and self.args['output'] != sys.stdout:
os.rename(self.args['output'], '{0}.bak'.format(self.args['output']))
for h in self.keys.keys():
for i in self.keys[h]:
_s = '# Automatically added via hostscan.py\n{0} {1} {2}\n'.format(i['host'],
i['type'],
i['key'])
if self.args['output'] == sys.stdout:
print(_s, end = '')
else:
with open(self.args['output'], 'a') as f:
f.write(_s)
os.chmod(self.args['output'], 0o644)
os.chown(self.args['output'],
pwd.getpwnam(self.args['chown_user'])[2],
grp.getgrnam(self.args['chown_grp'])[2])
return()
def parseArgs():
def getTypes(t):
keytypes = t.split(',')
keytypes = [k.strip() for k in keytypes]
for k in keytypes:
if k not in ('all', *def_supported_keys):
raise argparse.ArgumentTypeError('Must be one or more of the following: all, {0}'.format(', '.join(def_supported_keys)))
return(keytypes)
args = argparse.ArgumentParser(description = ('Scan a list of hosts and present their hostkeys in ' +
'a format suitable for an SSH known_hosts file.'))
args.add_argument('-u',
'--user',
dest = 'chown_user',
default = def_user,
help = ('The username to chown the file to (if \033[1m{0}\033[0m is specified). ' +
'Default: \033[1m{1}\033[0m').format('-o/--output', def_user))
args.add_argument('-g',
'--group',
dest = 'chown_grp',
default = def_grp,
help = ('The group to chown the file to (if \033[1m{0}\033[0m is specified). ' +
'Default: \033[1m{1}\033[0m').format('-o/--output', def_grp))
args.add_argument('-H',
'--hash',
dest = 'hash',
action = 'store_true',
help = ('If specified, hash the hostkeys (see ssh-keyscan(1)\'s -H option for more info)'))
args.add_argument('-m',
'--mode',
dest = 'writemode',
default = def_mode,
choices = ['append', 'replace'],
help = ('If \033[1m{0}\033[0m is specified, the mode to use for the ' +
'destination file. The default is \033[1m{1}\033[0m').format('-o/--output', def_mode))
args.add_argument('-k',
'--keytypes',
dest = 'keytypes',
type = getTypes,
default = 'all',
help = ('A comma-separated list of key types to add (if supported by the target host). ' +
'The default is to add all keys found. Must be one (or more) of: \033[1m{0}\033[0m').format(', '.join(def_supported_keys)))
args.add_argument('-o',
'--output',
default = sys.stdout,
metavar = 'OUTFILE',
dest = 'output',
help = ('If specified, write the hostkeys to \033[1m{0}\033[0m instead of ' +
'\033[1m{1}\033[0m (the default). ' +
'Overrides \033[1m{2}\033[0m').format('OUTFILE',
'stdout',
'-S/--system-wide'))
args.add_argument('-S',
'--system-wide',
dest = 'system',
action = 'store_true',
help = ('If specified, apply to the entire system (not just the ' +
'specified/running user) via {0}. ' +
'Requires \033[1m{1}\033[0m in /etc/ssh/ssh_config (usually ' +
'enabled silently by default) and running with root ' +
'privileges').format(def_syshostkeys,
'GlobalKnownHostsFile {0}'.format(def_syshostkeys)))
args.add_argument(metavar = 'HOSTLIST_FILE',
dest = 'infile',
help = ('The path to the list of hosts. Can contain blank lines and/or comments. ' +
'One host per line. Can be \033[1m{0}\033[0m (as long as it\'s resolvable), ' +
'\033[1m{1}\033[0m, or \033[1m{2}\033[0m. To specify an alternate port, ' +
'add \033[1m{3}\033[0m to the end (e.g. ' +
'"some.host.tld:22")').format('hostname',
'IP address',
'FQDN',
':<PORTNUM>'))
return(args)
def main():
args = vars(parseArgs().parse_args())
scan = hostscanner(args)
scan.getHosts()
scan.getKeys()
scan.write()
if __name__ == '__main__':
main()
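The comment block inside getKeys() above sketches, in shell, how to verify a hashed known_hosts entry: the field after `|1|` is a base64 salt, and the final field is the base64 HMAC-SHA1 of the hostname keyed with that salt. The same check can be written pythonically (function name is mine; the scheme is the one described in those comments):

```python
import base64
import hashlib
import hmac

def check_hashed_host(hashed_field, hostname):
    """Check a hashed known_hosts host field ('|1|<salt b64>|<mac b64>')
    against a plaintext hostname. The stored MAC is HMAC-SHA1 of the
    hostname, keyed with the decoded salt."""
    _, version, salt_b64, mac_b64 = hashed_field.split('|')
    if version != '1':
        raise ValueError('unknown hash format: {0}'.format(version))
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, hostname.encode('utf-8'), hashlib.sha1).digest()
    # Constant-time comparison, since this is authentication material.
    return hmac.compare_digest(digest, base64.b64decode(mac_b64))
```

This is the inverse of what `ssh-keyscan -H` produces: scanning a host list with -H and then feeding each candidate hostname through a check like this confirms which entry belongs to which host.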


@@ -0,0 +1,31 @@
#!/usr/bin/env python3
import os
import pwd
from urllib.request import urlopen
keysfile = 'https://square-r00t.net/ssh/all'
def copyKeys(keystring, user = 'root'):
uid = pwd.getpwnam(user).pw_uid
gid = pwd.getpwnam(user).pw_gid
homedir = os.path.expanduser('~{0}'.format(user))
sshdir = '{0}/.ssh'.format(homedir)
authfile = '{0}/authorized_keys'.format(sshdir)
os.makedirs(sshdir, mode = 0o700, exist_ok = True)
with open(authfile, 'a') as f:
f.write(keystring)
for basedir, dirs, files in os.walk(sshdir):
os.chown(basedir, uid, gid)
os.chmod(basedir, 0o700)
for f in files:
os.chown(os.path.join(basedir, f), uid, gid)
os.chmod(os.path.join(basedir, f), 0o600)
return()
def main():
with urlopen(keysfile) as keys:
copyKeys(keys.read().decode('utf-8'))
if __name__ == '__main__':
main()


@@ -0,0 +1,428 @@
#!/usr/bin/env python3
# Pythonized automated way of running https://sysadministrivia.com/news/hardening-ssh-security
# TODO: check for cryptography module. if it exists, we can do this entirely pythonically
# without ever needing to use subprocess/ssh-keygen, i think!
# Thanks to https://stackoverflow.com/a/39126754.
# Also, I need to re-write this. It's getting uglier.
# stdlib
import datetime
import glob
import os
import pwd
import re
import signal
import shutil
import subprocess # REMOVE WHEN SWITCHING TO PURE PYTHON
#### PREP FOR PURE PYTHON IMPLEMENTATION ####
# # non-stdlib - testing and automatic install if necessary.
# # TODO #
# - cryptography module won't generate new-format "openssh-key-v1" keys.
# - See https://github.com/pts/py_ssh_keygen_ed25519 for possible conversion to python 3
# - https://github.com/openssh/openssh-portable/blob/master/PROTOCOL.key
# - https://github.com/pyca/cryptography/issues/3509 and https://github.com/paramiko/paramiko/issues/1136
# has_crypto = False
# pure_py = False
# has_pip = False
# pipver = None
# try:
# import cryptography
# has_crypto = True
# except ImportError:
# # We'll try to install it. We set up the logic below.
# try:
# import pip
# has_pip = True
# # We'll use these to create a temporary lib path and remove it when done.
# import sys
# import tempfile
# except ImportError:
# # ABSOLUTE LAST fallback, if we got to THIS case, is to use subprocess.
# has_pip = False
# import subprocess
#
# # Try installing it then!
# if not all((has_crypto, )):
# # venv only included after python 3.3.x. We fallback to subprocess if we can't do dis.
# if sys.hexversion >= 0x30300f0:
# has_ensurepip = False
# import venv
# if not has_pip and sys.hexversion >= 0x30400f0:
# import ensurepip
# has_ensurepip = True
# temppath = tempfile.mkdtemp('_VENV')
# v = venv.create(temppath)
# if has_ensurepip and not has_pip:
# # This SHOULD be unnecessary, but we want to try really hard.
# ensurepip.bootstrap(root = temppath)
# import pip
# has_pip = True
# if has_pip:
# pipver = pip.__version__.split('.')
# # A thousand people are yelling at me for this.
# if int(pipver[0]) >= 10:
# from pip._internal import main as pipinstall
# else:
# pipinstall = pip.main
# if int(pipver[0]) >= 8:
# pipcmd = ['install',
# '--prefix={0}'.format(temppath),
# '--ignore-installed']
# else:
# pipcmd = ['install',
# '--install-option="--prefix={0}"'.format(temppath),
# '--ignore-installed']
# # Get the lib path.
# libpath = os.path.join(temppath, 'lib')
# if os.path.exists('{0}64'.format(libpath)) and not os.path.islink('{0}64'.format(libpath)):
# libpath += '64'
# for i in os.listdir(libpath): # TODO: make this more sane. We cheat a bit here by making assumptions.
# if re.search('python([0-9]+(\.[0-9]+)?)?$', i):
# libpath = os.path.join(libpath, i)
# break
# libpath = os.path.join(libpath, 'site-packages')
# sys.prefix = temppath
# for m in ('cryptography', 'ed25519'):
# pipinstall(['install', 'cryptography'])
# sys.path.append(libpath)
# try:
# import cryptography
# has_crypto = True
# except ImportError: # All that trouble for nothin'. Shucks.
# pass
#
# if all((has_crypto, )):
# pure_py = True
#
# if pure_py:
# from cryptography.hazmat.primitives import serialization as crypto_serialization
# from cryptography.hazmat.primitives.asymmetric import rsa
# from cryptography.hazmat.backends import default_backend as crypto_default_backend
#
# We need static backup suffixes.
tstamp = int(datetime.datetime.utcnow().timestamp())
# TODO: associate various config directives with version, too.
# For now, we use this for primarily CentOS 6.x, which doesn't support ED25519 and probably some of the MACs.
# Bastards.
# https://ssh-comparison.quendi.de/comparison/cipher.html at some point in the future...
# TODO: maybe implement some parsing of the ssh -Q stuff? https://superuser.com/a/869005/984616
# If you encounter a version incompatibility, please let me know!
# nmap --script ssh2-enum-algos -PN -sV -p22 <host>
magic_ver = 6.5
ssh_ver = subprocess.run(['ssh', '-V'], stderr = subprocess.PIPE).stderr.decode('utf-8').strip().split()[0]
# FUCK YOU, DEBIAN. FUCK YOU AND ALL OF YOUR DERIVATIVES. YOU'RE FUCKING TRASH.
# YOU BELONG NOWHERE NEAR A DATACENTER.
ssh_ver = float(re.sub(r'^(?:Open|Sun_)SSH_([0-9\.]+)(?:p[0-9]+)?(?:,.*)?.*$', r'\g<1>', ssh_ver))
if ssh_ver >= magic_ver:
has_ed25519 = True
supported_keys = ('ed25519', 'rsa')
new_moduli = False
else:
has_ed25519 = False
supported_keys = ('rsa', )
new_moduli = False
# https://github.com/openssh/openssh-portable/commit/3e60d18fba1b502c21d64fc7e81d80bcd08a2092
if ssh_ver >= 8.1:
new_moduli = True
conf_options = {}
conf_options['sshd'] = {'KexAlgorithms': 'diffie-hellman-group-exchange-sha256',
'Protocol': '2',
'HostKey': ['/etc/ssh/ssh_host_rsa_key'],
#'PermitRootLogin': 'prohibit-password', # older daemons don't like "prohibit-..."
'PermitRootLogin': 'without-password',
'PasswordAuthentication': 'no',
'ChallengeResponseAuthentication': 'no',
'PubkeyAuthentication': 'yes',
'Ciphers': 'aes256-ctr,aes192-ctr,aes128-ctr',
'MACs': 'hmac-sha2-512,hmac-sha2-256'}
if has_ed25519:
conf_options['sshd']['HostKey'].append('/etc/ssh/ssh_host_ed25519_key')
conf_options['sshd']['KexAlgorithms'] = ','.join(('curve25519-sha256@libssh.org',
conf_options['sshd']['KexAlgorithms']))
conf_options['sshd']['Ciphers'] = ','.join((('chacha20-poly1305@openssh.com,'
'aes256-gcm@openssh.com,'
'aes128-gcm@openssh.com'),
conf_options['sshd']['Ciphers']))
conf_options['sshd']['MACs'] = ','.join((('hmac-sha2-512-etm@openssh.com,'
'hmac-sha2-256-etm@openssh.com,'
'umac-128-etm@openssh.com'),
conf_options['sshd']['MACs'],
'umac-128@openssh.com'))
# Uncomment if this is further configured
#conf_options['sshd']['AllowGroups'] = 'ssh-user'
conf_options['ssh'] = {'Host': {'*': {'KexAlgorithms': 'diffie-hellman-group-exchange-sha256',
'PubkeyAuthentication': 'yes',
'HostKeyAlgorithms': 'ssh-rsa'}}}
if has_ed25519:
conf_options['ssh']['Host']['*']['KexAlgorithms'] = ','.join(('curve25519-sha256@libssh.org',
conf_options['ssh']['Host']['*']['KexAlgorithms']))
conf_options['ssh']['Host']['*']['HostKeyAlgorithms'] = ','.join(
(('ssh-ed25519-cert-v01@openssh.com,'
'ssh-rsa-cert-v01@openssh.com,'
'ssh-ed25519'),
conf_options['ssh']['Host']['*']['HostKeyAlgorithms']))
def hostKeys(buildmoduli):
# Starting haveged should help lessen the time load a non-negligible amount, especially on virtual platforms.
if os.path.lexists('/usr/bin/haveged'):
# We could use psutil here, but then that's a python dependency we don't need.
# We could parse the /proc directory, but that's quite unnecessary. pgrep's installed by default on
# most distros.
with open(os.devnull, 'wb') as devnull:
if subprocess.run(['pgrep', 'haveged'], stdout = devnull).returncode != 0:
subprocess.run(['haveged'], stdout = devnull)
#Warning: The moduli stuff takes a LONG time to run. Hours.
if buildmoduli:
if not new_moduli:
subprocess.run(['ssh-keygen',
'-G', '/etc/ssh/moduli.all',
'-b', '4096',
'-q'])
subprocess.run(['ssh-keygen',
'-T', '/etc/ssh/moduli.safe',
'-f', '/etc/ssh/moduli.all',
'-q'])
else:
subprocess.run(['ssh-keygen',
'-q',
'-M', 'generate',
'-O', 'bits=4096',
'/etc/ssh/moduli.all'])
subprocess.run(['ssh-keygen',
'-q',
'-M', 'screen',
'-f', '/etc/ssh/moduli.all',
'/etc/ssh/moduli.safe'])
if os.path.lexists('/etc/ssh/moduli'):
os.rename('/etc/ssh/moduli', '/etc/ssh/moduli.old')
os.rename('/etc/ssh/moduli.safe', '/etc/ssh/moduli')
os.remove('/etc/ssh/moduli.all')
for suffix in ('', '.pub'):
for k in glob.glob('/etc/ssh/ssh_host_*key{0}'.format(suffix)):
os.rename(k, '{0}.old.{1}'.format(k, tstamp))
if has_ed25519:
subprocess.run(['ssh-keygen',
'-t', 'ed25519',
'-f', '/etc/ssh/ssh_host_ed25519_key',
'-q',
'-N', ''])
subprocess.run(['ssh-keygen',
'-t', 'rsa',
'-b', '4096',
'-f', '/etc/ssh/ssh_host_rsa_key',
'-q',
'-N', ''])
# We currently don't use this, but for simplicity's sake let's return the host keys.
hostkeys = {}
for k in supported_keys:
with open('/etc/ssh/ssh_host_{0}_key.pub'.format(k), 'r') as f:
hostkeys[k] = f.read()
return(hostkeys)
def config(opts, t):
special = {'sshd': {}, 'ssh': {}}
# We need to handle these directives a little differently...
special['sshd']['opts'] = ['Match']
special['sshd']['filters'] = ['User', 'Group', 'Host', 'LocalAddress', 'LocalPort', 'Address']
# These are arguments supported by each of the special options. We'll use this to verify entries.
special['sshd']['args'] = ['AcceptEnv', 'AllowAgentForwarding', 'AllowGroups', 'AllowStreamLocalForwarding',
'AllowTcpForwarding', 'AllowUsers', 'AuthenticationMethods', 'AuthorizedKeysCommand',
'AuthorizedKeysCommandUser', 'AuthorizedKeysFile', 'AuthorizedPrincipalsCommand',
'AuthorizedPrincipalsCommandUser', 'AuthorizedPrincipalsFile', 'Banner',
'ChrootDirectory', 'ClientAliveCountMax', 'ClientAliveInterval', 'DenyGroups',
'DenyUsers', 'ForceCommand', 'GatewayPorts', 'GSSAPIAuthentication',
'HostbasedAcceptedKeyTypes', 'HostbasedAuthentication',
'HostbasedUsesNameFromPacketOnly', 'IPQoS', 'KbdInteractiveAuthentication',
'KerberosAuthentication', 'MaxAuthTries', 'MaxSessions', 'PasswordAuthentication',
'PermitEmptyPasswords', 'PermitOpen', 'PermitRootLogin', 'PermitTTY', 'PermitTunnel',
'PermitUserRC', 'PubkeyAcceptedKeyTypes', 'PubkeyAuthentication', 'RekeyLimit',
'RevokedKeys', 'StreamLocalBindMask', 'StreamLocalBindUnlink', 'TrustedUserCAKeys',
'X11DisplayOffset', 'X11Forwarding', 'X11UseLocalHost']
special['ssh']['opts'] = ['Host', 'Match']
special['ssh']['args'] = ['canonical', 'exec', 'host', 'originalhost', 'user', 'localuser']
cf = '/etc/ssh/{0}_config'.format(t)
shutil.copy2(cf, '{0}.bak.{1}'.format(cf, tstamp))
with open(cf, 'r') as f:
conf = f.readlines()
conf.append('\n\n# Added per https://sysadministrivia.com/news/hardening-ssh-security\n\n')
confopts = []
# Get an index of directives pre-existing in the config file.
for line in conf[:]:
opt = line.split()
if opt:
if not re.match(r'^(#.*|\s+.*)$', opt[0]):
confopts.append(opt[0])
# We also need to modify the config file- comment out starting with the first occurrence of the
# specopts, if it exists. This is why we make a backup.
commentidx = None
for idx, i in enumerate(conf):
if re.match(r'^({0})\s+.*$'.format('|'.join(special[t]['opts'])), i):
commentidx = idx
break
if commentidx is not None:
idx = commentidx
while idx <= (len(conf) - 1):
conf[idx] = '#{0}'.format(conf[idx])
idx += 1
# Now we actually start replacing/adding some major configuration.
for o in opts.keys():
if o in special[t]['opts'] or isinstance(opts[o], dict):
# We need to put these at the bottom of the file due to how they're handled by sshd's config parsing.
continue
# We handle these a little specially too- they're for multiple lines sharing the same directive.
# Since the config should be explicit, we remove any existing entries specified that we find.
else:
if o in confopts:
# If I was more worried about recursion, or if I was appending here, I should use conf[:].
# But I'm not. So I won't.
for idx, opt in enumerate(conf):
if re.match(r'^{0}(\s.*)?\n$'.format(o), opt):
conf[idx] = '#{0}'.format(opt)
# Here we handle the "multiple-specifying" options- notably, HostKey.
if isinstance(opts[o], list):
for l in opts[o]:
if l is not None:
conf.append('{0} {1}\n'.format(o, l))
else:
conf.append('{0}\n'.format(o))
else:
# So it isn't something we explicitly save until the end (such as a Match or Host),
# and it isn't something that's specified multiple times.
if opts[o] is not None:
conf.append('{0} {1}\n'.format(o, opts[o]))
else:
conf.append('{0}\n'.format(o))
# NOW we can add the Host/Match/etc. directives.
for o in opts.keys():
if isinstance(opts[o], dict):
for k in opts[o].keys():
conf.append('{0} {1}\n'.format(o, k))
for l in opts[o][k].keys():
if opts[o][k][l] is not None:
conf.append('\t{0} {1}\n'.format(l, opts[o][k][l]))
else:
conf.append('\t{0}\n'.format(l))
with open(cf, 'w') as f:
f.write(''.join(conf))
return()
def clientKeys(user = 'root'):
uid = pwd.getpwnam(user).pw_uid
gid = pwd.getpwnam(user).pw_gid
homedir = os.path.expanduser('~{0}'.format(user))
sshdir = '{0}/.ssh'.format(homedir)
os.makedirs(sshdir, mode = 0o700, exist_ok = True)
if has_ed25519:
if not os.path.lexists('{0}/id_ed25519'.format(sshdir)) \
and not os.path.lexists('{0}/id_ed25519.pub'.format(sshdir)):
subprocess.run(['ssh-keygen',
'-t', 'ed25519',
'-o',
'-a', '100',
'-f', '{0}/id_ed25519'.format(sshdir),
'-q',
'-N', ''])
if not os.path.lexists('{0}/id_rsa'.format(sshdir)) and not os.path.lexists('{0}/id_rsa.pub'.format(sshdir)):
if has_ed25519:
subprocess.run(['ssh-keygen',
'-t', 'rsa',
'-b', '4096',
'-o',
'-a', '100',
'-f', '{0}/id_rsa'.format(sshdir),
'-q',
'-N', ''])
else:
subprocess.run(['ssh-keygen',
'-t', 'rsa',
'-b', '4096',
'-a', '100',
'-f', '{0}/id_rsa'.format(sshdir),
'-q',
'-N', ''])
for basedir, dirs, files in os.walk(sshdir):
os.chown(basedir, uid, gid)
os.chmod(basedir, 0o700)
for f in files:
os.chown(os.path.join(basedir, f), uid, gid)
os.chmod(os.path.join(basedir, f), 0o600)
# Persist results across calls. Without the global declaration, the assignment
# below would make 'pubkeys' function-local and raise UnboundLocalError on reuse.
global pubkeys
if 'pubkeys' not in globals():
pubkeys = {}
pubkeys[user] = {}
for k in supported_keys:
with open('{0}/id_{1}.pub'.format(sshdir, k), 'r') as f:
pubkeys[user][k] = f.read()
return(pubkeys)
def daemonMgr():
# In case the script is running without sshd running.
pidfile = '/var/run/sshd.pid'
if not os.path.isfile(pidfile):
return()
# We're about to do somethin' stupid. Let's make it a teeny bit less stupid.
with open(os.devnull, 'w') as devnull:
confchk = subprocess.run(['sshd', '-T'], stdout = devnull)
if confchk.returncode != 0:
for suffix in ('', '.pub'):
for k in glob.glob('/etc/ssh/ssh_host_*key{0}'.format(suffix)):
os.rename('{0}.old.{1}'.format(k, tstamp), k)
for conf in ('', 'd'):
cf = '/etc/ssh/ssh{0}_config'.format(conf)
os.rename('{0}.bak.{1}'.format(cf, tstamp), cf)
exit('OOPS. We goofed. Backup restored and bailing out.')
# We need to restart sshd once we're done. I feel dirty doing this, but this is the most cross-platform way I can
# do it. First, we need the path to the PID file.
# TODO: do some kind of better way of doing this.
with open('/etc/ssh/sshd_config', 'r') as f:
for line in f.readlines():
if re.search(r'^\s*PidFile\s+', line):
# Strip the trailing newline and any inline comment, or open() below fails.
pidfile = re.sub(r'^\s*PidFile\s+(.*?)\s*(?:#.*)?$', r'\g<1>', line.strip())
break
with open(pidfile, 'r') as f:
pid = int(f.read().strip())
os.kill(pid, signal.SIGHUP)
return()
def main():
self_pidfile = '/tmp/sshsecure.pid'
is_running = False
# First, check to see if we're already running.
# This is where I'd put a psutil call... IF I HAD ONE.
if os.path.isfile(self_pidfile):
is_running = subprocess.run(['pgrep', '-F', self_pidfile], stdout = subprocess.PIPE)
if is_running.stdout.decode('utf-8').strip() != '':
# We're still running. Exit gracefully.
print('We seem to still be running from a past execution; exiting')
exit(0)
else:
# It's a stale PID file.
os.remove(self_pidfile)
with open(self_pidfile, 'w') as f:
f.write(str(os.getpid()) + '\n')
_chkfile = '/etc/ssh/.aif-generated'
if not os.path.isfile(_chkfile):
# Warning: The moduli stuff can take a LONG time to run. Hours.
buildmoduli = True
hostKeys(buildmoduli)
for t in ('sshd', 'ssh'):
config(conf_options[t], t)
clientKeys()
with open(_chkfile, 'w') as f:
f.write(('ssh, sshd, and hostkey configurations/keys have been modified by sshsecure.py from OpTools.\n'
'https://git.square-r00t.net/OpTools/\n'))
daemonMgr()
os.remove(self_pidfile)
return()
if __name__ == '__main__':
main()
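The SIGHUP-restart block above first has to discover sshd's PID file path from sshd_config. A minimal, self-contained sketch of that lookup (the helper name and the default path are assumptions for illustration; the real script reads /etc/ssh/sshd_config):

```python
import re

def parse_pidfile(config_text, default='/var/run/sshd.pid'):
    """Return the PidFile path from sshd_config text, or the default.

    A sketch of the lookup sshsecure performs; the default path is an
    assumption and varies by distribution.
    """
    for line in config_text.splitlines():
        # Capture everything after the directive, up to an optional comment.
        m = re.search(r'^\s*PidFile\s+([^#]+)', line)
        if m:
            return m.group(1).strip()
    return default

print(parse_pidfile('Port 22\nPidFile /run/sshd.pid  # custom\n'))  # → /run/sshd.pid
```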

arch/arch_mirror_ranking.py Executable file
#!/usr/bin/env python3
import argparse
import datetime
# import dns # TODO: replace server['ipv4'] with IPv4 address(es)? etc.
import json
import re
import sys
from urllib.request import urlopen
##
import iso3166
servers_json_url = 'https://www.archlinux.org/mirrors/status/json/'
protos = ('http', 'https', 'rsync')
class MirrorIdx(object):
def __init__(self, country = None, proto = None, is_active = None, json_url = servers_json_url,
name_re = None, ipv4 = None, ipv6 = None, isos = None, statuses = False, *args, **kwargs):
_tmpargs = locals()
del (_tmpargs['self'])
for k, v in _tmpargs.items():
setattr(self, k, v)
self.validateParams()
self.servers_json = {}
self.servers = []
self.servers_with_scores = []
self.ranked_servers = []
self.fetchJSON()
self.buildServers()
self.rankServers()
def fetchJSON(self):
if self.statuses:
sys.stderr.write('Fetching servers from {0}...\n'.format(self.json_url))
with urlopen(self.json_url) as u:
self.servers_json = json.load(u)
return()
def buildServers(self):
_limiters = (self.proto, self.ipv4, self.ipv6, self.isos)
_filters = list(_limiters)
_filters.extend([self.name_re, self.country])
_filters = tuple(_filters)
if self.statuses:
sys.stderr.write('Applying filters (if any)...\n')
for s in self.servers_json['urls']:
# We handle these as "tri-value" (None, True, False)
if self.is_active is not None:
if s['active'] != self.is_active:
continue
if not any(_filters):
self.servers.append(s.copy())
if s['score']:
self.servers_with_scores.append(s)
continue
# These are based on string values.
if self.name_re:
if not self.name_re.search(s['url']):
continue
if self.country:
if self.country != s['country_code']:
continue
# The protocol is a string compare; s['protocol'] is always a non-empty string
# (and therefore truthy), so it can't be handled as a simple switch like the others.
if self.proto and s['protocol'].lower() != self.proto.lower():
continue
# These are regular True/False switches.
# We want to be *very* explicit about the ordering and inclusion/exclusion of these.
bool_limiters = (self.ipv4, self.ipv6, self.isos)
values = [s[k] for k in ('ipv4', 'ipv6', 'isos')]
valid = all([v for k, v in zip(bool_limiters, values) if k])
if valid:
self.servers.append(s)
if s['score']:
self.servers_with_scores.append(s)
return()
def rankServers(self):
if self.statuses:
sys.stderr.write('Ranking mirrors...\n')
self.ranked_servers = sorted(self.servers_with_scores, key = lambda i: i['score'])
return()
def validateParams(self):
if self.proto and self.proto.lower() not in protos:
err = '{0} must be one of: {1}'.format(self.proto, ', '.join(protos))
raise ValueError(err)
elif self.proto:
# The mirror-status JSON reports protocols in lowercase.
self.proto = self.proto.lower()
if self.country and self.country.upper() not in iso3166.countries:
err = ('{0} must be a valid ISO-3166-1 ALPHA-2 country code. '
'See https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes'
'#Current_ISO_3166_country_codes').format(self.country)
raise ValueError(err)
elif self.country:
self.country = self.country.upper()
if self.name_re:
self.name_re = re.compile(self.name_re)
return()
def parseArgs():
args = argparse.ArgumentParser(description = 'Fetch and rank Arch Linux mirrors')
args.add_argument('-c', '--country',
dest = 'country',
help = ('If specified, limit results to this country (in ISO-3166-1 ALPHA-2 format)'))
args.add_argument('-p', '--protocol',
choices = protos,
dest = 'proto',
help = ('If specified, limit results to this protocol'))
args.add_argument('-r', '--name-regex',
dest = 'name_re',
help = ('If specified, limit results to URLs that match this regex pattern (Python re syntax)'))
args.add_argument('-4', '--ipv4',
dest = 'ipv4',
action = 'store_true',
help = ('If specified, limit results to servers that support IPv4'))
args.add_argument('-6', '--ipv6',
dest = 'ipv6',
action = 'store_true',
help = ('If specified, limit results to servers that support IPv6'))
args.add_argument('-i', '--iso',
dest = 'isos',
action = 'store_true',
help = ('If specified, limit results to servers that have ISO images'))
is_active = args.add_mutually_exclusive_group()
is_active.add_argument('-a', '--active-only',
default = None,
const = True,
action = 'store_const',
dest = 'is_active',
help = ('If specified, only include active servers (default is active + inactive)'))
is_active.add_argument('-n', '--inactive-only',
default = None,
const = False,
action = 'store_const',
dest = 'is_active',
help = ('If specified, only include inactive servers (default is active + inactive)'))
return(args)
if __name__ == '__main__':
args = vars(parseArgs().parse_args())
m = MirrorIdx(**args, statuses = True)
for s in m.ranked_servers:
print('Server = {0}$repo/os/$arch'.format(s['url']))
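The filter-and-rank flow above boils down to: drop servers without a completion score, then sort ascending (lower mirror scores are better). A sketch with made-up mirror entries (the real script consumes the archlinux.org mirror-status JSON):

```python
# Hypothetical mirror entries, shaped like the 'urls' items in the status JSON.
servers = [
    {'url': 'https://a.example/archlinux/', 'score': 2.5},
    {'url': 'https://b.example/archlinux/', 'score': None},  # never synced; no score
    {'url': 'https://c.example/archlinux/', 'score': 1.1},
]
# Keep only scored servers, then sort ascending: lower score = better mirror.
scored = [s for s in servers if s['score']]
ranked = sorted(scored, key=lambda s: s['score'])
for s in ranked:
    print('Server = {0}$repo/os/$arch'.format(s['url']))
```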

arch/autopkg/maintain.py Executable file
#!/usr/bin/env python
import argparse
import json
import os
import sqlite3
import run
from urllib.request import urlopen
def parseArgs():
args = argparse.ArgumentParser(description = ('Modify (add/remove) packages for use with Autopkg'),
epilog = ('Operation-specific help; try e.g. "add --help"'))
commonargs = argparse.ArgumentParser(add_help = False)
commonargs.add_argument('-n', '--name',
dest = 'pkgnm',
required = True,
help = ('The name of the PACKAGE to operate on.'))
commonargs.add_argument('-d', '--db',
dest = 'dbfile',
default = '~/.optools/autopkg.sqlite3',
help = ('The location of the package database. THIS SHOULD NOT BE ANY FILE USED BY '
'ANYTHING ELSE! A default one will be created if it doesn\'t exist'))
subparsers = args.add_subparsers(help = ('Operation to perform'),
metavar = 'OPERATION',
dest = 'oper')
addargs = subparsers.add_parser('add',
parents = [commonargs],
help = ('Add a package. If a matching package NAME exists (-n/--name), '
'we\'ll replace it'))
addargs.add_argument('-b', '--base',
dest = 'pkgbase',
default = None,
help = ('The pkgbase; only really needed for split-packages and we will automatically '
'fetch if it\'s left blank anyways'))
addargs.add_argument('-v', '--version',
dest = 'pkgver',
default = None,
help = ('The current version; we will automatically fetch it if it\'s left blank'))
addargs.add_argument('-l', '--lock',
dest = 'active',
action = 'store_false',
help = ('If specified, the package will still exist in the DB but it will be marked inactive'))
rmargs = subparsers.add_parser('rm',
parents = [commonargs],
help = ('Remove a package from the DB'))
buildargs = subparsers.add_parser('build',
help = ('Build all packages; same effect as running run.py'))
buildargs.add_argument('-d', '--db',
dest = 'dbfile',
default = '~/.optools/autopkg.sqlite3',
help = ('The location of the package database. THIS SHOULD NOT BE ANY FILE USED BY '
'ANYTHING ELSE! A default one will be created if it doesn\'t exist'))
listargs = subparsers.add_parser('ls',
help = ('List packages (and information about them) only'))
listargs.add_argument('-d', '--db',
dest = 'dbfile',
default = '~/.optools/autopkg.sqlite3',
help = ('The location of the package database. THIS SHOULD NOT BE ANY FILE USED BY '
'ANYTHING ELSE! A default one will be created if it doesn\'t exist'))
return(args)
def add(args):
db = sqlite3.connect(args['dbfile'])
db.row_factory = sqlite3.Row
cur = db.cursor()
if not all((args['pkgbase'], args['pkgver'])):
# We need some additional info from the AUR API...
aur_url = 'https://aur.archlinux.org/rpc/?v=5&type=info&by=name&arg%5B%5D={0}'.format(args['pkgnm'])
with urlopen(aur_url) as url:
aur = json.loads(url.read().decode('utf-8'))['results']
if not aur:
raise ValueError(('Either something is screwy with our network access '
'or the package {0} doesn\'t exist').format(args['pkgnm']))
aur = aur[0]  # The API returns a list of results; we asked about exactly one package.
if ((aur['PackageBase'] != aur['Name']) and (not args['pkgbase'])):
args['pkgbase'] = aur['PackageBase']
if not args['pkgver']:
args['pkgver'] = aur['Version']
cur.execute("SELECT id, pkgname, pkgbase, pkgver, active FROM packages WHERE pkgname = ?",
(args['pkgnm'], ))
row = cur.fetchone()
if row:
if args['pkgbase']:
q = ("UPDATE packages SET pkgbase = ?, pkgver = ?, active = ? WHERE id = ?",
(args['pkgbase'], args['pkgver'], ('1' if args['active'] else '0'), row['id']))
else:
q = ("UPDATE packages SET pkgver = ?, active = ? WHERE id = ?",
(args['pkgver'], ('1' if args['active'] else '0'), row['id']))
else:
if args['pkgbase']:
q = (("INSERT INTO "
"packages (pkgname, pkgbase, pkgver, active) "
"VALUES (?, ?, ?, ?)"),
(args['pkgnm'], args['pkgbase'], args['pkgver'], ('1' if args['active'] else '0')))
else:
q = (("INSERT INTO "
"packages (pkgname, pkgver, active) "
"VALUES (?, ?, ?)"),
(args['pkgnm'], args['pkgver'], ('1' if args['active'] else '0')))
cur.execute(*q)
db.commit()
cur.close()
db.close()
return()
def rm(args):
db = sqlite3.connect(args['dbfile'])
cur = db.cursor()
cur.execute("DELETE FROM packages WHERE pkgname = ?",
(args['pkgnm'], ))
db.commit()
cur.close()
db.close()
return()
def build(args):
pm = run.PkgMake(db = args['dbfile'])
pm.main()
return()
def ls(args):
db = sqlite3.connect(args['dbfile'])
db.row_factory = sqlite3.Row
cur = db.cursor()
rows = []
cur.execute("SELECT * FROM packages ORDER BY pkgname")
for r in cur.fetchall():
pkgnm = r['pkgname']
rows.append({'name': r['pkgname'],
'row_id': r['id'],
'pkgbase': ('' if not r['pkgbase'] else r['pkgbase']),
'ver': r['pkgver'],
'enabled': ('Yes' if r['active'] else 'No')})
fmt = '|{name:<16}|{pkgbase:<16}|{ver:^9}|{enabled:^9}|{row_id:<8}|'
header = fmt.format(name = 'NAME', pkgbase = 'PACKAGE BASE', ver = 'VERSION', enabled = 'ENABLED', row_id = 'ROW ID')
sep = '=' * len(header)
out = []
for row in rows:
out.append(fmt.format(**row))
header = '\n'.join((sep, header, sep))
out.insert(0, header)
out.append(sep)
print('\n'.join(out))
cur.close()
db.close()
return()
def main():
rawargs = parseArgs()
args = vars(rawargs.parse_args())
if not args['oper']:
rawargs.print_help()
exit()
args['dbfile'] = os.path.abspath(os.path.expanduser(args['dbfile']))
if args['oper'] == 'add':
add(args)
elif args['oper'] == 'rm':
rm(args)
elif args['oper'] == 'build':
build(args)
elif args['oper'] == 'ls':
ls(args)
return()
if __name__ == '__main__':
main()
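The add() path above is a manual select-then-update-or-insert upsert. A hypothetical in-memory sketch of the update half; note that SQL SET clauses are comma-separated:

```python
import sqlite3

# In-memory stand-in for the autopkg DB; table shape is simplified.
db = sqlite3.connect(':memory:')
db.row_factory = sqlite3.Row
cur = db.cursor()
cur.execute("CREATE TABLE packages (id INTEGER PRIMARY KEY, pkgname TEXT, pkgver TEXT, active TEXT)")
cur.execute("INSERT INTO packages (pkgname, pkgver, active) VALUES (?, ?, ?)",
            ('foo', '1.0-1', '1'))
# Multiple SET assignments are separated by commas (AND would be parsed as a
# boolean expression assigned to the first column).
cur.execute("UPDATE packages SET pkgver = ?, active = ? WHERE pkgname = ?",
            ('1.1-1', '1', 'foo'))
db.commit()
row = cur.execute("SELECT pkgver FROM packages WHERE pkgname = 'foo'").fetchone()
print(row['pkgver'])  # → 1.1-1
```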

arch/autopkg/run.py Executable file
#!/usr/bin/env python
import grp
import json
import os
import pwd
import re
import shutil
import sqlite3
import subprocess
import tarfile
import urllib.request as reqs
import urllib.parse as urlparse
import setup
# I *HATE* relying on non-stdlib, and I hate even MORE that this is JUST TO COMPARE VERSION STRINGS.
# WHY IS THIS FUNCTIONALITY NOT STDLIB YET.
try:
from distutils.version import LooseVersion
has_lv = True
except ImportError:
has_lv = False
# The base API URL (https://wiki.archlinux.org/index.php/Aurweb_RPC_interface)
aur_base = 'https://aur.archlinux.org/rpc/?v=5&type=info&by=name'
# The length of the above. Important because of uri_limit.
base_len = len(aur_base)
# Maximum length of the URI.
uri_limit = 4443
class PkgMake(object):
def __init__(self, db = '~/.optools/autopkg.sqlite3'):
db = os.path.abspath(os.path.expanduser(db))
if not os.path.isfile(db):
setup.firstrun(db)
self.conn = sqlite3.connect(db)
self.conn.row_factory = sqlite3.Row
self.cur = self.conn.cursor()
self.cfg = setup.main(self.conn, self.cur)
if self.cfg['sign']:
_cmt_mode = self.conn.isolation_level # autocommit
self.conn.isolation_level = None
self.fpr, self.gpg = setup.GPG(self.cur, homedir = self.cfg['gpg_homedir'], keyid = self.cfg['gpg_keyid'])
self.conn.isolation_level = _cmt_mode
# don't need this anymore; it should be duplicated or populated into self.fpr.
del(self.cfg['gpg_keyid'])
self.my_key = self.gpg.get_key(self.fpr, secret = True)
self.gpg.signers = [self.my_key]
else:
self.fpr = self.gpg = self.my_key = None
del(self.cfg['gpg_keyid'])
self.pkgs = {}
self._populatePkgs()
def main(self):
self.getPkg()
self.buildPkg()
return()
def _chkver(self, pkgbase):
new_ver = self.pkgs[pkgbase]['meta']['new_ver']
old_ver = self.pkgs[pkgbase]['meta']['pkgver']
is_diff = (new_ver != old_ver) # A super-stupid fallback
if is_diff:
if has_lv:
is_diff = LooseVersion(new_ver) > LooseVersion(old_ver)
else:
# like, 90% of the time, this would work.
new_tuple = tuple(map(int, (re.split(r'\.|-', new_ver))))
old_tuple = tuple(map(int, (re.split(r'\.|-', old_ver))))
# But people at https://stackoverflow.com/a/11887825/733214 are very angry about it, hence the above.
is_diff = new_tuple > old_tuple
return(is_diff)
def _populatePkgs(self):
# These columns/keys are inferred by structure or unneeded. Applies to both DB and AUR API.
_notrack = ('pkgbase', 'pkgname', 'active', 'id', 'packagebaseid', 'numvotes', 'popularity', 'outofdate',
'maintainer', 'firstsubmitted', 'lastmodified', 'depends', 'optdepends', 'conflicts', 'license',
'keywords')
_attr_map = {'version': 'new_ver'}
# These are tracked per-package; all others are pkgbase and applied to all split pkgs underneath.
_pkg_specific = ('pkgdesc', 'arch', 'url', 'license', 'groups', 'depends', 'optdepends', 'provides',
'conflicts', 'replaces', 'backup', 'options', 'install', 'changelog')
_aur_results = []
_urls = []
_params = {'arg[]': []}
_tmp_params = {'arg[]': []}
self.cur.execute("SELECT * FROM packages WHERE active = '1'")
for row in self.cur.fetchall():
pkgbase = (row['pkgbase'] if row['pkgbase'] else row['pkgname'])
pkgnm = row['pkgname']
if pkgbase not in self.pkgs:
self.pkgs[pkgbase] = {'packages': {pkgnm: {}},
'meta': {}}
for k in dict(row):
if not k:
continue
if k in _notrack:
continue
if k in _pkg_specific:
self.pkgs[pkgbase]['packages'][pkgnm][k] = row[k]
else:
if k not in self.pkgs[pkgbase]['meta']:
self.pkgs[pkgbase]['meta'][k] = row[k]
# TODO: change this?
pkgstr = urlparse.quote(pkgnm)  # We perform against a non-pkgbased name for the AUR search.
_tmp_params['arg[]'].append(pkgstr)
l = base_len + (len(urlparse.urlencode(_tmp_params, doseq = True)) + 1)
if l >= uri_limit:
# We need to split into multiple URIs based on URI size because of:
# https://wiki.archlinux.org/index.php/Aurweb_RPC_interface#Limitations
_urls.append('&'.join((aur_base, urlparse.urlencode(_params, doseq = True))))
# Start the next batch with the package that would have pushed us over the limit.
_params = {'arg[]': [pkgstr]}
_tmp_params = {'arg[]': [pkgstr]}
else:
# Still under the limit, so this package joins the current batch.
_params['arg[]'].append(pkgstr)
_urls.append('&'.join((aur_base, urlparse.urlencode(_params, doseq = True))))
for url in _urls:
with reqs.urlopen(url) as u:
_aur_results.extend(json.loads(u.read().decode('utf-8'))['results'])
for pkg in _aur_results:
pkg = {k.lower(): v for (k, v) in pkg.items()}
pkgnm = pkg['name']
pkgbase = pkg['packagebase']
for (k, v) in pkg.items():
if k in _notrack:
continue
if k in _attr_map:
k = _attr_map[k]
if k in _pkg_specific:
self.pkgs[pkgbase]['packages'][pkgnm][k] = v
else:
self.pkgs[pkgbase]['meta'][k] = v
self.pkgs[pkgbase]['meta']['snapshot'] = 'https://aur.archlinux.org{0}'.format(pkg['urlpath'])
self.pkgs[pkgbase]['meta']['filename'] = os.path.basename(pkg['urlpath'])
self.pkgs[pkgbase]['meta']['build'] = self._chkver(pkgbase)
return()
def _drop_privs(self):
# First get the list of groups to assign.
# This *should* generate a list *exactly* like as if that user ran os.getgroups(),
# with the addition of self.cfg['build_user']['gid'] (if it isn't included already).
newgroups = list(sorted([g.gr_gid
for g in grp.getgrall()
if pwd.getpwuid(self.cfg['build_user']['uid'])
in g.gr_mem]))
if self.cfg['build_user']['gid'] not in newgroups:
newgroups.append(self.cfg['build_user']['gid'])
newgroups.sort()
# This is the user's "primary group"
user_gid = pwd.getpwuid(self.cfg['build_user']['uid']).pw_gid
if user_gid not in newgroups:
newgroups.append(user_gid)
os.setgroups(newgroups)
# If we used os.setgid and os.setuid, we would PERMANENTLY/IRREVOCABLY drop privs.
# Being that that doesn't suit the meta of the rest of the script (chmodding, etc.) - probably not a good idea.
os.setresgid(self.cfg['build_user']['gid'], self.cfg['build_user']['gid'], -1)
os.setresuid(self.cfg['build_user']['uid'], self.cfg['build_user']['uid'], -1)
# Default on most linux systems. reasonable enough for building? (equal to chmod 755/644)
os.umask(0o0022)
# TODO: we need a full env construction here, I think, as well. PATH, HOME, GNUPGHOME at the very least?
return()
def _restore_privs(self):
os.setresuid(self.cfg['orig_user']['uid'], self.cfg['orig_user']['uid'], self.cfg['orig_user']['uid'])
os.setresgid(self.cfg['orig_user']['gid'], self.cfg['orig_user']['gid'], self.cfg['orig_user']['gid'])
os.setgroups(self.cfg['orig_user']['groups'])
os.umask(self.cfg['orig_user']['umask'])
# TODO: if we change the env, we need to change it back here. I capture it in self.cfg['orig_user']['env'].
return()
def getPkg(self):
self._drop_privs()
for pkgbase in self.pkgs:
if not self.pkgs[pkgbase]['meta']['build']:
continue
_pkgre = re.compile('^(/?.*/)*({0})/?'.format(pkgbase))
builddir = os.path.join(self.cfg['cache'], pkgbase)
try:
shutil.rmtree(builddir)
except FileNotFoundError:
# We *could* use ignore_errors or onerrors params, but we only want FileNotFoundError.
pass
os.makedirs(builddir, mode = self.cfg['chmod']['dirs'], exist_ok = True)
tarball = os.path.join(builddir, self.pkgs[pkgbase]['meta']['filename'])
with reqs.urlopen(self.pkgs[pkgbase]['meta']['snapshot']) as url:
# We have to write out to disk first because the tarfile module HATES trying to perform seeks on
# a tarfile stream. It HATES it.
with open(tarball, 'wb') as f:
f.write(url.read())
tarnames = {}
with tarfile.open(tarball, mode = 'r:*') as tar:
for i in tar.getmembers():
if any((i.isdir(), i.ischr(), i.isblk(), i.isfifo(), i.isdev())):
continue
if i.name.endswith('.gitignore'):
continue
# We want to strip leading dirs out.
tarnames[i.name] = _pkgre.sub('', i.name)
# Small bugfix.
if tarnames[i.name] == '':
tarnames[i.name] = os.path.basename(i.name)
tarnames[i.name] = os.path.join(builddir, tarnames[i.name])
for i in tar.getmembers():
if i.name in tarnames:
# GOLLY I WISH TARFILE WOULD LET US JUST CHANGE THE ARCNAME DURING EXTRACTION ON THE FLY.
with open(tarnames[i.name], 'wb') as f:
f.write(tar.extractfile(i.name).read())
# No longer needed, so clean it up behind us.
os.remove(tarball)
self._restore_privs()
return()
def buildPkg(self):
self._drop_privs()
for pkgbase in self.pkgs:
if not self.pkgs[pkgbase]['meta']['build']:
continue
builddir = os.path.join(self.cfg['cache'], pkgbase)
os.chdir(builddir)
# subprocess.run(['makepkg']) # TODO: figure out gpg sig checking?
subprocess.run(['makepkg', '--clean', '--force', '--skippgpcheck'])
self._restore_privs()
for pkgbase in self.pkgs:
if not self.pkgs[pkgbase]['meta']['build']:
continue
builddir = os.path.join(self.cfg['cache'], pkgbase)
# The i686 isn't even supported anymore, but let's keep this friendly for Archlinux32 folks.
_pkgre = re.compile(('^({0})-{1}-'
'(x86_64|i686|any)'
r'\.pkg\.tar\.xz$').format('|'.join(re.escape(p) for p in self.pkgs[pkgbase]['packages'].keys()),
re.escape(self.pkgs[pkgbase]['meta']['new_ver'])))
fname = None
# PROBABLY in the first root dir, and could be done with fnmatch, but...
for root, dirs, files in os.walk(builddir):
for f in files:
if _pkgre.search(f):
fname = os.path.join(root, f)
break
if not fname:
raise RuntimeError('Could not find proper package build filename for {0}'.format(pkgbase))
destfile = os.path.join(self.cfg['dest'], os.path.basename(fname))
os.rename(fname, destfile)
# TODO: HERE IS WHERE WE SIGN THE PACKAGE?
# We also need to update the package info in the DB.
for p in self.pkgs[pkgbase]['packages']:
self.cur.execute("UPDATE packages SET pkgver = ? WHERE pkgname = ?",
(self.pkgs[pkgbase]['meta']['new_ver'], p))
self.cfg['pkgpaths'].append(destfile)
# No longer needed, so we can clear out the build directory.
shutil.rmtree(builddir)
os.chdir(self.cfg['dest'])
dbfile = os.path.join(self.cfg['dest'], 'autopkg.db.tar.gz') # TODO: Custom repo name?
cmd = ['repo-add', '--nocolor', '--delta', dbfile] # -s/--sign?
cmd.extend(self.cfg['pkgpaths'])
subprocess.run(cmd)
for root, dirs, files in os.walk(self.cfg['dest']):
for f in files:
fpath = os.path.join(root, f)
os.chmod(fpath, self.cfg['chmod']['files'])
os.chown(fpath, self.cfg['chown']['uid'], self.cfg['chown']['gid'])
for d in dirs:
dpath = os.path.join(root, d)
os.chmod(dpath, self.cfg['chmod']['dirs'])
os.chown(dpath, self.cfg['chown']['uid'], self.cfg['chown']['gid'])
return()
def close(self):
if self.cur:
self.cur.close()
if self.conn:
self.conn.close()
return()
def main():
pm = PkgMake()
pm.main()
if __name__ == '__main__':
main()
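_populatePkgs() has to batch package names into multiple RPC URLs because aurweb caps URI length. A simplified sketch of that batching (the function name and flush logic are illustrative; the 4443-byte limit matches the constant above):

```python
from urllib.parse import urlencode

aur_base = 'https://aur.archlinux.org/rpc/?v=5&type=info&by=name'

def batch_urls(names, limit=4443):
    """Split package names into info URLs that each stay under `limit` bytes.

    A simplified sketch of the batching in _populatePkgs(); the limit comes
    from the aurweb RPC interface documentation.
    """
    urls, batch = [], []
    for name in names:
        candidate = batch + [name]
        url = aur_base + '&' + urlencode({'arg[]': candidate}, doseq=True)
        if len(url) >= limit and batch:
            # Flush the current batch; this name starts the next one.
            urls.append(aur_base + '&' + urlencode({'arg[]': batch}, doseq=True))
            batch = [name]
        else:
            batch = candidate
    if batch:
        urls.append(aur_base + '&' + urlencode({'arg[]': batch}, doseq=True))
    return urls
```

With the real limit, a handful of names fits in one URL; an artificially tiny limit forces one URL per name.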

arch/autopkg/setup.py Executable file
#!/usr/bin/env python
import base64
import copy
import gpg
import grp
import json
import lzma
import os
import pwd
import re
from socket import gethostname
import sqlite3
# NOTE: The gpg homedir should be owned by the user *running autopkg*.
# Likely priv-dropping will only work for root.
#
dirs = ('cache', 'dest', 'gpg_homedir')
u_g_pairs = ('chown', 'build_user')
json_vals = ('chmod', )
blank_db = """
/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4H//AxNdACmURZ1gyBn4JmSIjib+MZX9x4eABpe77H+o
CX2bysoKzO/OaDh2QGbNjiU75tmhPrWMvTFue4XOq+6NPls33xRRL8eZoITBdAaLqbwYY2XW/V/X
Gx8vpjcBnpACjVno40FoJ1qWxJlBZ0PI/8gMoBr3Sgdqnf+Bqi+E6dOl66ktJMRr3bdZ5C9vOXAf
42BtRfwJlwN8NItaWtfRYVfXl+40D05dugcxDLY/3uUe9MSgt46Z9+Q9tGjjrUA8kb5K2fqWSlQ2
6KyF3KV1zsJSDLuaRkP42JNsBTgg6mU5rEk/3egdJiLn+7AupvWQ3YlKkeALZvgEKy75wdObf6QI
jY4qjXjxOTwOG4oou7lNZ3fPI5qLCQL48M8ZbOQoTAQCuArdYqJmBwT2rF86SdQRP4EY6TlExa4o
+E+v26hKhYXO7o188jlmGFbuzqtoyMB1y3UG+Hi2SjPDilD5o6f9fEjiHZm2FY6rkPb9Km4UFlH1
d2A4Wt4iGlciZBs0lFRPKkgHR4s7KHTMKuZyC08qE1B7FwvyBTBBYveA2UoZlKY7d22IbiiSQ3tP
JKhj8nf8EWcgHPt46Juo80l7vqqn6AviY7b1JZXICdiJMbuWJEyzTLWuk4qlUBfimP7k9IjhDFpJ
gEXdNgrnx/wr5CIbr1T5lI9vZz35EacgNA2bGxLA8VI0W9eYDts3BSfhiJOHWwLQPiNzJwd4aeM1
IhqgTEpk+BD0nIgSB3AAB+NfJJavoQjpv0QBA6dH52utA5Nw5L//Ufw/YKaA7ui8YQyDJ7y2n9L3
ugn6VJFFrYSgIe1oRkJBGRGuBgGNTS3aJmdFqEz1vjZBMkFdF+rryXzub4dst2Qh01E6/elowIUh
2whMRVDO28QjyS9tLtLLzfTmBk2NSxs4+znE0ePKKw3n/p6YlbPRAw24QR8MTCOpQ2lH1UZNWBM2
epxfmWtgO5b/wGYopRDEvDDdbPAq6+4zxTOT5RmdWZyc46gdizf9+dQW3wZ9iBDjh4MtuYPvLlqr
0GRmsyrxgFxkwvVoXASNndS0NPcAADkAhYCxn+W2AAGvBoCAAgB/TQWascRn+wIAAAAABFla
"""
def firstrun(dbfile):
dbdata = lzma.decompress(base64.b64decode(blank_db))
with open(dbfile, 'wb') as f:
f.write(dbdata)
return()
def main(connection, cursor):
cfg = {'orig_cwd': os.getcwd(),
'pkgpaths': []}
cursor.execute("SELECT directive, value FROM config")
for r in cursor.fetchall():
cfg[r['directive']] = r['value'].strip()
for k in cfg:
for x in (True, False, None):
if cfg[k] == str(x):
cfg[k] = x
break
if k in json_vals:
cfg[k] = json.loads(cfg[k])
if k == 'path':
paths = []
for i in cfg[k].split(':'):
p = os.path.abspath(os.path.expanduser(i))
paths.append(p)
cfg[k] = paths
if k in dirs:
if cfg[k]:
cfg[k] = os.path.abspath(os.path.expanduser(cfg[k]))
os.makedirs(cfg[k], exist_ok = True)
if k in u_g_pairs:
dflt = [pwd.getpwuid(os.geteuid()).pw_name, grp.getgrgid(os.getegid()).gr_name]
l = re.split(r':|\.', cfg[k])
if len(l) == 1:
l.append(None)
for idx, i in enumerate(l[:]):
if i in ('', None):
l[idx] = dflt[idx]
cfg[k] = {}
cfg[k]['uid'] = (int(l[0]) if l[0].isnumeric() else pwd.getpwnam(l[0]).pw_uid)
cfg[k]['gid'] = (int(l[1]) if l[1].isnumeric() else grp.getgrnam(l[1]).gr_gid)
cfg['orig_user'] = {'uid': os.geteuid(),
'gid': os.getegid()}
# Ugh. https://orkus.wordpress.com/2011/04/17/python-getting-umask-without-change/
cfg['orig_user']['umask'] = os.umask(0)
os.umask(cfg['orig_user']['umask'])
cfg['orig_user']['groups'] = os.getgroups()
for i in cfg['chmod']:
cfg['chmod'][i] = int(cfg['chmod'][i], 8)
cfg['orig_user']['env'] = copy.deepcopy(dict(os.environ))
os.chown(cfg['cache'], uid = cfg['build_user']['uid'], gid = cfg['build_user']['gid'])
os.chown(cfg['dest'], uid = cfg['chown']['uid'], gid = cfg['chown']['gid'])
return(cfg)
def GPG(cur, homedir = None, keyid = None):
g = gpg.Context(home_dir = homedir)
if not keyid:
# We don't have a key specified, so we need to generate one and update the config.
s = ('This signature and signing key were automatically generated using Autopkg from OpTools: '
'https://git.square-r00t.net/OpTools/')
g.sig_notation_add('automatically-generated@git.square-r00t.net', s, gpg.constants.sig.notation.HUMAN_READABLE)
userid = 'Autopkg Signing Key ({0}@{1})'.format(os.getenv('SUDO_USER', os.environ['USER']), gethostname())
params = {
#'algorithm': 'ed25519',
'algorithm': 'rsa4096',
'expires': False,
'expires_in': 0,
'sign': True,
'passphrase': None
}
keyid = g.create_key(userid, **params).fpr
# https://stackoverflow.com/a/50718957
q = {}
for col in ('keyid', 'homedir'):
if sqlite3.sqlite_version_info >= (3, 24, 0):
q[col] = ("INSERT INTO config (directive, value) "
"VALUES ('gpg_{0}', ?) "
"ON CONFLICT (directive) "
"DO UPDATE SET value = excluded.value").format(col)
else:
cur.execute("SELECT id FROM config WHERE directive = 'gpg_{0}'".format(col))
row = cur.fetchone()
if row:
q[col] = ("UPDATE config SET value = ? WHERE id = '{0}'").format(row['id'])
else:
q[col] = ("INSERT INTO config (directive, value) VALUES ('gpg_{0}', ?)").format(col)
cur.execute(q[col], (locals()[col], ))
return(keyid, g)
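The "Ugh" comment in setup.main() refers to the fact that os.umask() can only *set* a mask, returning the previous one. The standard workaround, sketched here as a hypothetical helper:

```python
import os

def current_umask():
    """Read the process umask without permanently changing it.

    os.umask() always sets a new mask and returns the old one, so we set a
    throwaway mask and immediately restore what was there before.
    """
    old = os.umask(0)
    os.umask(old)
    return old

mask = current_umask()
print(oct(mask))
```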

arch/buildup/pkgchk.py Executable file
#!/usr/bin/env python3
import argparse
import configparser
import hashlib
import os
import re
import shlex
import subprocess
import tarfile # for verifying built PKGBUILDs. We just need to grab <tar>/.PKGINFO, and check: pkgver = <version>
import tempfile
from collections import OrderedDict
from urllib.request import urlopen
class color(object):
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
vcstypes = ('bzr', 'git', 'hg', 'svn')
class pkgChk(object):
def __init__(self, pkg):
# pkg should be a string of a PKGBUILD,
# not the path to a file.
self.pkg = pkg
# The below holds parsed data from the PKGBUILD.
self.pkgdata = {'pkgver': self.getLex('pkgver', 'var'),
'_pkgver': self.getLex('_pkgver', 'var'),
'pkgname': self.getLex('pkgname', 'var'),
'sources': self.getLex('source', 'array')}
def getLex(self, attrib, attrtype):
# Parse the PKGBUILD and return actual values from it.
# attrtype should be "var" or "array".
# var returns a string and array returns a list.
# If the given attrib isn't in the pkgbuild, None is returned.
# The sources array is special, though - it returns a tuple of:
# (hashtype, dict) where dict is a mapping of:
# filename: hash
# filename2: hash2
# etc.
if attrtype not in ('var', 'array'):
raise ValueError('{0} is not a valid attribute type.'.format(attrtype))
_sums = ('sha512', 'sha384', 'sha256', 'sha1', 'md5') # in order of preference
_attrmap = {'var': 'echo ${{{0}}}'.format(attrib),
'array': 'echo ${{{0}[@]}}'.format(attrib)}
_tempfile = tempfile.mkstemp(text = True)
os.close(_tempfile[0])  # We only need the path; close the raw fd so it doesn't leak.
with open(_tempfile[1], 'w') as f:
f.write(self.pkg)
_cmd = ['/bin/bash',
'--restricted', '--noprofile',
'--init-file', _tempfile[1],
'-i', '-c', _attrmap[attrtype]]
with open(os.devnull, 'wb') as devnull:
_out = subprocess.run(_cmd, env = {'PATH': ''},
stdout = subprocess.PIPE,
stderr = devnull).stdout.decode('utf-8').strip()
if _out == '':
os.remove(_tempfile[1])
return(None)
if attrtype == 'var':
os.remove(_tempfile[1])
return(_out)
else: # it's an array
if attrib == 'source':
_sources = {}
_source = shlex.split(_out)
_sumarr = [None] * len(_source)
for h in _sums:
_cmd[-1] = 'echo ${{{0}[@]}}'.format(h + 'sums')
with open(os.devnull, 'wb') as devnull:
_out = subprocess.run(_cmd, env = {'PATH': ''},
stdout = subprocess.PIPE,
stderr = devnull).stdout.decode('utf-8').strip()
if _out != '':
os.remove(_tempfile[1])
return(h, OrderedDict(zip(_source, shlex.split(_out))))
else:
continue
# No match for checksums.
os.remove(_tempfile[1])
return(None, OrderedDict(zip(_source, shlex.split(_out))))
else:
os.remove(_tempfile[1])
return(shlex.split(_out))
return()
def getURL(self, url):
with urlopen(url) as http:
code = http.getcode()
return(code)
def chkVer(self):
_separators = []
# TODO: this is to explicitly prevent parsing
# VCS packages, so might need some re-tooling in the future.
if self.pkgdata['pkgname'].split('-')[-1] in vcstypes:
return(None)
# transform the current version into a list of various components.
if not self.pkgdata['pkgver']:
return(None)
if self.pkgdata['_pkgver']:
_cur_ver = self.pkgdata['_pkgver']
else:
_cur_ver = self.pkgdata['pkgver']
# This will catch like 90% of the software versions out there.
# Unfortunately, it won't catch all of them. I dunno how to
# handle that quite yet. TODO.
_split_ver = _cur_ver.split('.')
_idx = len(_split_ver) - 1
while _idx >= 0:
_url = re.sub('^[A-Za-z0-9]+::',
'',
list(self.pkgdata['sources'].keys())[0])
_code = self.getURL(_url)
_idx -= 1
def parseArgs():
_ini = '~/.config/optools/buildup.ini'
_defini = os.path.abspath(os.path.expanduser(_ini))
args = argparse.ArgumentParser()
args.add_argument('-c', '--config',
default = _defini,
dest = 'config',
help = ('The path to the config file. ' +
'Default: {0}{1}{2}').format(color.BOLD,
_defini,
color.END))
args.add_argument('-R', '--no-recurse',
action = 'store_false',
dest = 'recurse',
help = ('If specified, and the path provided is a directory, ' +
'do NOT recurse into subdirectories.'))
args.add_argument('-p', '--path',
metavar = 'path/to/dir/or/PKGBUILD',
default = None,
dest = 'pkgpath',
help = ('The path to either a directory containing PKGBUILDs (recursion ' +
'enabled - see {0}-R/--no-recurse{1}) ' +
'or a single PKGBUILD. Use to override ' +
'the config\'s PKG:paths.').format(color.BOLD, color.END))
return(args)
def parsePkg(pkgbuildstr):
p = pkgChk(pkgbuildstr)
p.chkVer()
return()
def iterDir(pkgpath, recursion = True):
filepaths = []
if os.path.isfile(pkgpath):
return([pkgpath])
if recursion:
for root, subdirs, files in os.walk(pkgpath):
for vcs in vcstypes:
if '.{0}'.format(vcs) in subdirs:
subdirs.remove('.{0}'.format(vcs))
for f in files:
if 'PKGBUILD' in f:
filepaths.append(os.path.join(root, f))
else:
for f in os.listdir(pkgpath):
if 'PKGBUILD' in f:
filepaths.append(os.path.join(pkgpath, f))
filepaths.sort()
return(filepaths)
def parseCfg(cfgfile):
def getPath(p):
return(os.path.abspath(os.path.expanduser(p)))
_defcfg = '[PKG]\npaths = \ntestbuild = no\n[VCS]\n'
for vcs in vcstypes:
_defcfg += '{0} = no\n'.format(vcs)
_cfg = configparser.ConfigParser(interpolation = configparser.ExtendedInterpolation())
# read_string() parses the literal default config; read() expects filenames.
_cfg.read_string(_defcfg)
_cfg.read(cfgfile)
# We convert to a dict so we can do things like list comprehension.
cfg = {s:dict(_cfg.items(s)) for s in _cfg.sections()}
if 'paths' not in cfg['PKG'].keys():
raise ValueError('You must provide a valid configuration ' +
'file with the PKG:paths setting specified and valid.')
cfg['PKG']['paths'] = sorted([getPath(p.strip()) for p in cfg['PKG']['paths'].split(',')],
reverse = True)
for p in cfg['PKG']['paths'][:]:
if not os.path.exists(p):
print('WARNING: {0} does not exist; skipping...'.format(p))
cfg['PKG']['paths'].remove(p)
# We also want to convert these to pythonic True/False
cfg['PKG']['testbuild'] = _cfg['PKG'].getboolean('testbuild')
for k in vcstypes:
cfg['VCS'][k] = _cfg['VCS'].getboolean(k)
return(cfg)
if __name__ == '__main__':
args = vars(parseArgs().parse_args())
if not os.path.isfile(args['config']):
raise FileNotFoundError('{0} does not exist.'.format(args['config']))
cfg = parseCfg(args['config'])
if args['pkgpath']:
args['pkgpath'] = os.path.abspath(os.path.expanduser(args['pkgpath']))
if os.path.isdir(args['pkgpath']):
for p in iterDir(args['pkgpath'], recursion = args['recurse']):
with open(p, 'r') as f:
parsePkg(f.read())
elif os.path.isfile(args['pkgpath']):
with open(args['pkgpath'], 'r') as f:
parsePkg(f.read())
else:
raise FileNotFoundError('{0} does not exist.'.format(args['pkgpath']))
else:
files = []
for p in cfg['PKG']['paths']:
files.extend(iterDir(p))
files.sort()
for p in files:
with open(p, 'r') as f:
parsePkg(f.read())
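chkVer() splits the current version on dots before comparing. A sketch of why numeric tuples are used instead of plain string comparison, with the caveat that non-numeric components (e.g. '1.2rc1') make this naive approach raise ValueError:

```python
import re

def ver_tuple(ver):
    # Naive split on '.' and '-': works for purely numeric version strings,
    # but chokes on suffixes like 'rc1' or 'beta'.
    return tuple(map(int, re.split(r'\.|-', ver)))

# Plain string comparison gets this wrong ('1.10' sorts before '1.9'
# lexically); the tuple form compares each component numerically.
print(ver_tuple('1.10-1') > ver_tuple('1.9-1'))  # → True
print('1.10' > '1.9')                            # → False
```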

## This configuration file will allow you to perform more
## fine-grained control of BuildUp.
## It supports the syntax shortcuts found here:
## https://docs.python.org/3/library/configparser.html#configparser.ExtendedInterpolation
[PKG]
# The path(s) to your PKGBUILD(s), or a directory/directories containing them.
# If you have more than one, separate with a comma.
paths = path/to/pkgbuilds,another/path/to/pkgbuilds
# If 'yes', try building the package with the new version.
# If 'no' (the default), don't try to build with the new version.
# This can be a good way to test that you don't need to modify the PKGBUILD,
# but can be error-prone (missing makedeps, etc.).
testbuild = no
[VCS]
# Here you can enable or disable which VCS platforms you want to support.
# Note that it will increase the time of your check, as it will
# actually perform a checkout/clone/etc. of the source and check against
# the version function inside the PKGBUILD.
# It's also generally meaningless, as VCS PKGBUILDs are intended
# to be dynamic. Nonetheless, the options are there.
# Use 'yes' to enable, or 'no' to disable (the default).
# Currently only the given types are supported (i.e. no CVS).
# THESE ARE CURRENTLY NOT SUPPORTED.
# Check revisions for -git PKGBUILDs
git = no
# Check revisions for -svn PKGBUILDs
svn = no
# Check revisions for -hg PKGBUILDs
hg = no
# Check revisions for -bzr PKGBUILDs
bzr = no
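The ExtendedInterpolation shortcuts mentioned in the header of this config can be sketched as follows; the [base] section, key names, and paths below are hypothetical examples, not part of BuildUp's actual config:

```python
import configparser

# Hypothetical config demonstrating the ${section:key} interpolation syntax.
cfg_text = """
[base]
root = /home/builder

[PKG]
paths = ${base:root}/pkgbuilds,${base:root}/aur
"""

cfg = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
cfg.read_string(cfg_text)
# ${base:root} is expanded from the [base] section on access.
print(cfg['PKG']['paths'])  # -> /home/builder/pkgbuilds,/home/builder/aur
```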

arch/mirrorchk.py (new file, 81 lines)
#!/usr/bin/env python3
import os
import re
import subprocess
import tempfile
from urllib.request import urlopen
# The local list of mirrors
mfile = '/etc/pacman.d/mirrorlist'
# The URL for the list of mirrors
# TODO: customize with country in a config
rlist = 'https://www.archlinux.org/mirrorlist/?country=US&protocol=http&protocol=https&ip_version=4&use_mirror_status=on'
# If local_mirror is set to None, don't do any modifications.
# If it's a dict in the format of:
# local_mirror = {'profile': 'PROFILE_NAME',
# 'url': 'http://host/arch/%os/$arch',
# 'state_file': '/var/lib/netctl/netctl.state'}
# Then we will check 'state_file'. If its contents match 'profile',
# then we will add 'url' to the *top* of mfile.
# TODO: I need to move this to a config.
local_mirror = {'profile': '<PROFILENAME>',
'url': 'http://<REPOBOX>/arch/$repo/os/$arch',
'state_file': '/var/lib/netctl/netctl.state'}
def getList(url):
with urlopen(url) as http:
l = http.read().decode('utf-8')
return(l)
def uncomment(url_list):
urls = []
if isinstance(url_list, str):
url_list = [u.strip() for u in url_list.splitlines()]
for u in url_list:
u = u.strip()
if u == '':
continue
urls.append(re.sub(r'^\s*#', '', u))
return(urls)
def rankList(mfile):
c = ['rankmirrors',
'-n', '6',
mfile]
ranked_urls = subprocess.run(c, stdout = subprocess.PIPE)
url_list = ranked_urls.stdout.decode('utf-8').splitlines()
for u in url_list[:]:
if u.strip() == '':
url_list.remove(u)
continue
if re.match(r'^\s*(#.*)$', u, re.MULTILINE | re.DOTALL):
url_list.remove(u)
return(url_list)
def localMirror(url_list):
# If checking the state_file doesn't work out, use netctl
# directly.
if not isinstance(local_mirror, dict):
return(url_list)
with open(local_mirror['state_file'], 'r') as f:
# Split into lines; iterating the raw string would yield single characters.
state = [s.strip() for s in f.read().splitlines()]
if local_mirror['profile'] in state:
url_list.insert(0, 'Server = {0}'.format(local_mirror['url']))
return(url_list)
def writeList(mirrorfile, url_list):
with open(mirrorfile, 'w') as f:
f.write('{0}\n'.format('\n'.join(url_list)))
return()
if __name__ == '__main__':
if os.geteuid() != 0:
exit('Must be run as root.')
urls = getList(rlist)
t = tempfile.mkstemp(text = True)
writeList(t[1], uncomment(urls))
ranked_mirrors = localMirror(rankList(t[1]))
writeList(mfile, ranked_mirrors)
os.remove(t[1])
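The uncomment() helper above strips the leading `#` that archlinux.org uses to ship mirrorlist entries disabled. A standalone sketch of the same transformation (the mirror URLs are hypothetical):

```python
import re

def uncomment(url_list):
    # Drop blank lines and strip a leading comment marker, keeping the URL.
    urls = []
    for u in url_list:
        u = u.strip()
        if not u:
            continue
        urls.append(re.sub(r'^\s*#\s*', '', u))
    return urls

lines = ['#Server = https://mirror.example.org/arch/$repo/os/$arch',
         '',
         'Server = https://mirror2.example.org/arch/$repo/os/$arch']
print(uncomment(lines))
```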

arch/reference (new file, 89 lines)
some random snippets to incorporate...
######################
this was to assist with https://www.archlinux.org/news/perl-library-path-change/
the following was used to gen the /tmp/perlfix.pkgs.lst:
pacman -Qqo '/usr/lib/perl5/vendor_perl' >> /tmp/perlfix.pkgs.lst ; pacman -Qqo '/usr/lib/perl5/site_perl' >> /tmp/perlfix.pkgs.lst
######################
#!/usr/bin/env python3
import datetime
import re
import os
import pprint
import subprocess
pkgs = []
pkglstfile = '/tmp/perlfix.pkgs.lst'
if os.path.isfile(pkglstfile):
with open(pkglstfile, 'r') as f:
pkgs = f.read().splitlines()
pkgd = {'rdeps': [],
'deps': [],
'remove': []}
for p in pkgs:
pkgchkcmd = ['apacman', '-Q', p]
with open(os.devnull, 'w') as devnull:
pkgchk = subprocess.run(pkgchkcmd, stdout = devnull, stderr = devnull).returncode
if pkgchk != 0: # not installed anymore; skip it and keep checking the rest
continue
cmd = ['apacman',
'-Qi',
p]
stdout = subprocess.run(cmd, stdout = subprocess.PIPE).stdout.decode('utf-8').strip().splitlines()
#pprint.pprint(stdout)
d = {re.sub(r'\s', '_', k.strip().lower()):v.strip() for k, v in (dict(k.split(':', 1) for k in stdout).items())}
# some pythonizations..
# list of things(keys) that should be lists
ll = ['architecture', 'conflicts_with', 'depends_on', 'groups', 'licenses', 'make_depends',
'optional_deps', 'provides', 'replaces', 'required_by']
# and now actually listify
for k in ll:
if k in d.keys():
if d[k].lower() in ('none', ''):
d[k] = None
else:
d[k] = d[k].split()
# Not necessary... blah blah inconsistent whitespace blah blah.
#for k in ('build_date', 'install_date'):
# if k in d.keys():
# try:
# d[k] = datetime.datetime.strptime(d[k], '%a %d %b %Y %H:%M:%S %p %Z')
# except:
# d[k] = datetime.datetime.strptime(d[k], '%a %d %b %Y %H:%M:%S %p')
#pprint.pprint(d)
if d['required_by']:
pkgd['rdeps'].extend(d['required_by'])
else:
if d['install_reason'] != 'Explicitly installed':
pkgd['remove'].append(p)
if d['depends_on']:
pkgd['deps'].extend(d['depends_on'])
#break
for x in ('rdeps', 'deps'):
pkgd[x].sort()
#for p in pkgd['rdeps']:
# if p in pkgd['deps']:
# pkgd['
#print('DEPENDENCIES:')
#print('\n'.join(pkgd['deps']))
#print('\nREQUIRED BY:')
#print('\n'.join(pkgd['rdeps']))
#print('\nCAN REMOVE:')
print('\n'.join(pkgd['remove']))
#cmd = ['apacman', '-R']
#cmd.extend(pkgd['remove'])
#subprocess.run(cmd)

arch/repo-maint.py (new executable file, 288 lines)
#!/usr/bin/env python3
import argparse
import io
import os
import pprint
import re
import sys
import tarfile
# PREREQS:
# Mostly stdlib.
#
# IF:
# 1.) You want to sign or verify packages (-s/--sign and -v/--verify, respectively),
# 2.) You want to work with delta updates,
# THEN:
# 1.) You need to install the python GnuPG GPGME bindings (the "gpg" module; NOT the "gpgme" module). They're
# distributed with the GPG source. They're also in PyPI (https://pypi.org/project/gpg/).
# 2.) You need to install the xdelta3 module (https://pypi.org/project/xdelta3/).
_delta_re = re.compile('(.*)-*-*_to*')
class RepoMaint(object):
def __init__(self, **kwargs):
# https://stackoverflow.com/a/2912884/733214
user_params = kwargs
# Define a set of defaults to update with kwargs since we
# aren't explicitly defining params.
self.args = {'color': True,
'db': './repo.db.tar.xz',
'key': None,
'pkgs': [],
'quiet': False,
'sign': False,
'verify': False}
self.args.update(user_params)
self.db_exts = {'db.tar': False, # No compression
'db.tar.xz': 'xz',
'db.tar.gz': 'gz',
'db.tar.bz2': 'bz2',
# We explicitly check False vs. None.
# For None, we do a custom check and wrap it.
# In .Z's case, we use the lzw module. It's the only non-stdlib compression
# that Arch Linux repo DB files support.
'db.tar.Z': None}
self.args['db'] = os.path.abspath(os.path.expanduser(self.args['db']))
self.db = None
_is_valid_repo_db = any(self.args['db'].lower().endswith(ext) for ext in self.db_exts)
if not _is_valid_repo_db:
raise ValueError(('Repo DB {0} is not a valid DB type. '
'Must be one of {1}.').format(self.args['db'],
', '.join(['*.{0}'.format(i) for i in self.db_exts])))
self.repo_dir = os.path.dirname(self.args['db'])
self.lockfile = '{0}.lck'.format(self.args['db'])
os.makedirs(self.repo_dir, exist_ok = True)
self.gpg = None
self.sigkey = None
if self.args['sign'] or self.args['verify']:
# Set up GPG handler.
self._initGPG()
self._importDB()
def _initGPG(self):
import gpg
self.gpg = gpg.Context()
if self.args['sign']:
_seckeys = [k for k in self.gpg.keylist(secret = True) if k.can_sign]
if self.args['key']:
for k in _seckeys:
if self.sigkey:
break
for s in k.subkeys:
if self.sigkey:
break
if s.can_sign:
if self.args['key'].lower() in (s.keyid.lower(),
s.fpr.lower()):
self.sigkey = k
self.gpg.signers = [k]
else:
# Grab the first key that can sign.
if _seckeys:
self.sigkey = _seckeys[0]
self.gpg.signers = [_seckeys[0]]
if not self.args['quiet']:
print('Key ID not specified; using {0} as the default'.format(self.sigkey.fpr))
if not self.sigkey:
raise RuntimeError('Private key ID not found, cannot sign, or no secret keys exist.')
# TODO: confirm verifying works without a key
return()
def _LZWcompress(self, data):
# Based largely on:
# https://github.com/HugoPouliquen/lzw-tools/blob/master/utils/compression.py
data_arr = []
rawdata = io.BytesIO(data)
for i in range(int(len(data) / 2)):
data_arr.insert(i, rawdata.read(2))
w = bytes()
b_size = 256
b = []
compressed = io.BytesIO()
for c in data_arr:
# c is already a two-byte bytes object; calling .to_bytes() on it would raise.
wc = w + c
if wc in b:
w = wc
else:
b.insert(b_size, wc)
compressed.write(b.index(wc).to_bytes(2, 'big'))
b_size += 1
w = c
return(compressed.getvalue())
def _LZWdecompress(self, data):
# Based largely on:
# https://github.com/HugoPouliquen/lzw-tools/blob/master/utils/decompression.py
b_size = 256
b = []
out = io.BytesIO()
for i in range(b_size):
b.insert(i, i.to_bytes(2, 'big'))
w = data.pop(0)
out.write(w)
i = 0
for byte in data:
x = int.from_bytes(byte, byteorder = 'big')
if x < b_size:
entry = b[x]
elif x == b_size:
entry = w + w[:2] # standard LZW corner case, using this scheme's two-byte symbols
else:
raise ValueError('Bad uncompressed value for "{0}"'.format(byte))
for y in entry:
if i % 2 == 1:
out.write(y.to_bytes(1, byteorder = 'big'))
i += 1
b.insert(b_size, w + entry[:2])
b_size += 1
w = entry
return(out.getvalue())
def _importDB(self):
# Get the compression type.
for ct in self.db_exts:
if self.args['db'].lower().endswith(ct):
if self.db_exts[ct] is None:
if ct.endswith('.Z'): # Currently the only custom one.
pass
def add(self):
# Fresh pkg set (in case the instance was re-used).
self.pkgs = {}
# First handle any wildcard
for p in self.args['pkgs'][:]:
if p.strip() == '*':
for root, dirs, files in os.walk(self.repo_dir):
for f in files:
abspath = os.path.join(root, f)
if f.endswith('.pkg.tar.xz'): # Recommended not to be changed per makepkg.conf
if abspath not in self.args['pkgs']:
self.args['pkgs'].append(abspath)
if self.args['delta']:
if f.endswith('.delta'):
if abspath not in self.args['pkgs']:
self.args['pkgs'].append(abspath)
self.args['pkgs'].remove(p)
# Then de-dupe and convert to full path.
self.args['pkgs'] = sorted(list(set([os.path.abspath(os.path.expanduser(d)) for d in self.args['pkgs']])))
for p in self.args['pkgs']:
pkgfnm = os.path.basename(p)
if p.endswith('.delta'):
pkgnm = _delta_re.sub(r'\g<1>', os.path.basename(pkgfnm))
return()
def remove(self):
for p in self.args['pkgs']:
pass
return()
def hatch():
import base64
import lzma
import random
h = ((
'/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4AB6AEtdABBok+MQCtEh'
'BisubEtc2ebacaLGrSRAMmHrcwUr39J24q4iODdNz7wfQl9e6I3C'
'ooyuOkptNISdo50CRdknGAU4JBBh+IQTkHwiAAAABW1d7drLmkUA'
'AWd7/+DtzR+2830BAAAAAARZWg=='
).encode('utf-8'),
(
'/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4AHEALtdABBpE/AVEKFC'
'fdT16ly2cCwT/MnXTY2D4r8nWgH6mLetLPn17nza3ZK+tSFU7d5j'
'my91M8fvPGu9Tf0NYkWlRU7vJM8r2V3kK/Gs6/GS7tq2qIum/C/X'
'sOnYUewVB2yMvlACqwp3gWJlmXSfwcpGiU662EmATS8kUgF+OdP+'
'EATXhM/1bAn07wJbVWPoAL2SBmJBo2zL1tXQklbQu1J20eWfd1bD'
'cgSBGqcU1/CdHnW6lcb6BmWKTg0p9IAAAEoEyN1gLkAMAAHXAcUD'
'AACXcduyscRn+wIAAAAABFla'
).encode('utf-8'))
h = lzma.decompress(base64.b64decode(h[random.randint(0, 1)]))
return(h.decode('utf-8'))
def parseArgs():
args = argparse.ArgumentParser(description = ('Python implementation of repo-add/repo-remove.'),
epilog = ('See https://wiki.archlinux.org/index.php/Pacman/'
'Tips_and_tricks#Custom_local_repository for more information.\n'
'Each operation has sub-help (e.g. "... add -h")'),
formatter_class = argparse.RawDescriptionHelpFormatter)
operargs = args.add_subparsers(dest = 'oper',
help = ('Operation to perform'))
commonargs = argparse.ArgumentParser(add_help = False)
commonargs.add_argument('db',
metavar = '</path/to/repository/repo.db.tar.xz>',
help = ('The path to the repository DB (required)'))
commonargs.add_argument('pkgs',
nargs = '+',
metavar = '<package|delta>',
help = ('Package filepath (for adding)/name (for removing) or delta; '
'can be specified multiple times (at least 1 required)'))
commonargs.add_argument('--nocolor',
dest = 'color',
action = 'store_false',
help = ('If specified, turn off color in output (currently does nothing; '
'output is currently not colorized)'))
commonargs.add_argument('-q', '--quiet',
dest = 'quiet',
action = 'store_true',
help = ('Minimize output'))
commonargs.add_argument('-s', '--sign',
dest = 'sign',
action = 'store_true',
help = ('If specified, sign database with GnuPG after update'))
commonargs.add_argument('-k', '--key',
metavar = 'KEY_ID',
dest = 'key',
help = ('Use the specified GPG key to sign the database '
'(only used if -s/--sign is active)'))
commonargs.add_argument('-v', '--verify',
dest = 'verify',
action = 'store_true',
help = ('If specified, verify the database\'s signature before update'))
addargs = operargs.add_parser('add',
parents = [commonargs],
help = ('Add package(s) to a repository'))
remargs = operargs.add_parser('remove',
parents = [commonargs],
help = ('Remove package(s) from a repository'))
addargs.add_argument('-d', '--delta',
dest = 'delta',
action = 'store_true',
help = ('If specified, generate and add package deltas for the update'))
addargs.add_argument('-n', '--new',
dest = 'new_only',
action = 'store_true',
help = ('If specified, only add packages that are not already in the database'))
addargs.add_argument('-R', '--remove',
dest = 'remove_old',
action = 'store_true',
help = ('If specified, remove old packages from disk after updating the database'))
# Removal args have no add'l arguments, just the common ones.
return(args)
def main():
if (len(sys.argv) == 2) and (sys.argv[1] == 'elephant'):
print(hatch())
return()
else:
rawargs = parseArgs()
args = rawargs.parse_args()
if not args.oper:
rawargs.print_help()
exit()
rm = RepoMaint(**vars(args))
if args.oper == 'add':
rm.add()
elif args.oper == 'remove':
rm.remove()
return()
if __name__ == '__main__':
main()
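The _LZWcompress()/_LZWdecompress() pair above exists for .db.tar.Z support. As a reference point, a minimal self-contained LZW round trip (the classic single-byte-symbol variant, not the two-byte-chunk scheme used above) can be sketched as:

```python
def lzw_compress(data: bytes) -> list:
    # Dictionary starts with all single-byte sequences.
    dict_size = 256
    d = {bytes([i]): i for i in range(dict_size)}
    w = b''
    out = []
    for byte in data:
        c = bytes([byte])
        wc = w + c
        if wc in d:
            w = wc
        else:
            out.append(d[w])
            d[wc] = dict_size
            dict_size += 1
            w = c
    if w:
        out.append(d[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    dict_size = 256
    d = {i: bytes([i]) for i in range(dict_size)}
    codes = list(codes)
    w = d[codes.pop(0)]
    out = bytearray(w)
    for k in codes:
        if k in d:
            entry = d[k]
        elif k == dict_size:
            entry = w + w[:1]  # the classic LZW corner case
        else:
            raise ValueError('Bad compressed code: {0}'.format(k))
        out.extend(entry)
        d[dict_size] = w + entry[:1]
        dict_size += 1
        w = entry
    return bytes(out)

data = b'TOBEORNOTTOBEORTOBEORNOT'
assert lzw_decompress(lzw_compress(data)) == data
```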

centos/extract_files_package.py (new executable file, 207 lines)
#!/usr/bin/env python
# Supports CentOS 6.9 and up, untested on lower versions.
# Lets you extract files for a given package name(s) without installing
# any extra packages (such as yum-utils for repoquery).
# NOTE: If you're on CentOS 6.x, since it uses such an ancient version of python you need to either install
# python-argparse OR just resign to using it for all packages with none of the features.
try:
import argparse
has_argparse = True
except ImportError:
has_argparse = False
import os
import re
import shutil
import tempfile
# For when CentOS/RHEL switch to python 3 by default (if EVER).
import sys
pyver = sys.version_info
try:
import yum
# Needed for verbosity
from yum.logginglevels import __NO_LOGGING as yum_nolog
has_yum = True
except ImportError:
has_yum = False
exit('This script only runs on the system-provided Python on RHEL/CentOS/other RPM-based distros.')
try:
# pip install libarchive
# https://github.com/dsoprea/PyEasyArchive
import libarchive.public as lap
is_ctype = False
except ImportError:
try:
# pip install libarchive
# https://github.com/Changaco/python-libarchive-c
import libarchive
if 'file_reader' in dir(libarchive):
is_legacy = False
else:
# https://code.google.com/archive/p/python-libarchive
is_legacy = True
is_ctype = True
except ImportError:
raise ImportError('Try yum -y install python-libarchive')
class FileExtractor(object):
def __init__(self, dest_dir, paths, verbose = False, *args, **kwargs):
self.dest_dir = os.path.abspath(os.path.expanduser(dest_dir))
self.verbose = verbose # TODO: print file name as extracting? Verbose as argument?
self.rpms = {}
if 'pkgs' in kwargs and kwargs['pkgs']:
self.pkgs = kwargs['pkgs']
self.yum_getFiles()
if 'rpm_files' in kwargs and kwargs['rpm_files']:
self.rpm_files = kwargs['rpm_files']
self.getFiles()
if '*' in paths:
self.paths = None
else:
self.paths = [re.sub('^', '.', os.path.abspath(i)) for i in paths]
def yum_getFiles(self):
import logging
yumloggers = ['yum.filelogging.RPMInstallCallback', 'yum.verbose.Repos', 'yum.verbose.plugin', 'yum.Depsolve',
'yum.verbose', 'yum.plugin', 'yum.Repos', 'yum', 'yum.verbose.YumBase', 'yum.filelogging',
'yum.verbose.YumPlugins', 'yum.RepoStorage', 'yum.YumBase', 'yum.filelogging.YumBase',
'yum.verbose.Depsolve']
# This actually silences everything. Nice.
# https://stackoverflow.com/a/46716482/733214
if not self.verbose:
for loggerName in yumloggers:
logger = logging.getLogger(loggerName)
logger.setLevel(yum_nolog)
# http://yum.baseurl.org/api/yum/yum/__init__.html#yumbase
yb = yum.YumBase()
yb.conf.downloadonly = True
yb.conf.downloaddir = os.path.join(self.dest_dir, '.CACHE')
yb.conf.quiet = True
yb.conf.assumeyes = True
for pkg in self.pkgs:
try:
p = yb.reinstall(name = pkg)
except yum.Errors.ReinstallRemoveError:
p = yb.install(name = pkg)
p = p[0]
# I am... not 100% certain on this. Might be a better way?
fname = '{0}-{3}-{4}.{1}.rpm'.format(*p.pkgtup)
self.rpms[pkg] = os.path.join(yb.conf.downloaddir, fname)
yb.buildTransaction()
try:
yb.processTransaction()
except SystemExit:
pass # It keeps passing an exit because it's downloading only. Get it together, RH.
yb.closeRpmDB()
yb.close()
return()
def getFiles(self):
for rf in self.rpm_files:
# TODO: check if we have the rpm module and if so, rip pkg name from it? use that as key instead of rf?
self.rpms[os.path.basename(rf)] = os.path.abspath(os.path.expanduser(rf))
return()
def extractFiles(self):
# TODO: globbing or regex on self.paths?
# If we have yum, we can, TECHNICALLY, do this with:
# http://yum.baseurl.org/api/yum/rpmUtils/miscutils.html#rpmUtils.miscutils.rpm2cpio
# But nope. We can't selectively decompress members based on path with rpm2cpio-like funcs.
# We keep getting extraction artefacts, at least with legacy libarchive_c, so we use a hammer.
_curdir = os.getcwd()
_tempdir = tempfile.mkdtemp()
os.chdir(_tempdir)
for rpm_file in self.rpms:
rf = self.rpms[rpm_file]
if is_ctype:
if not is_legacy:
# ctype - extracts to pwd
with libarchive.file_reader(rf) as reader:
for entry in reader:
if self.paths and entry.path not in self.paths:
continue
if entry.isdir():
continue
fpath = os.path.join(self.dest_dir, rpm_file, entry.path)
if not os.path.isdir(os.path.dirname(fpath)):
os.makedirs(os.path.dirname(fpath))
with open(fpath, 'wb') as f:
for b in entry.get_blocks():
f.write(b)
else:
with libarchive.Archive(rf) as reader:
for entry in reader:
if (self.paths and entry.pathname not in self.paths) or (entry.isdir()):
continue
fpath = os.path.join(self.dest_dir, rpm_file, entry.pathname)
if not os.path.isdir(os.path.dirname(fpath)):
os.makedirs(os.path.dirname(fpath))
reader.readpath(fpath)
else:
# pyEasyArchive/"pypi/libarchive"
with lap.file_reader(rf) as reader:
for entry in reader:
if (self.paths and entry.pathname not in self.paths) or (entry.filetype.IFDIR):
continue
fpath = os.path.join(self.dest_dir, rpm_file, entry.pathname)
if not os.path.isdir(os.path.dirname(fpath)):
os.makedirs(os.path.dirname(fpath))
with open(fpath, 'wb') as f:
for b in entry.get_blocks():
f.write(b)
os.chdir(_curdir)
shutil.rmtree(_tempdir)
return()
def parseArgs():
args = argparse.ArgumentParser(description = ('This script allows you to extract files for a given package '
'{0}without installing any extra packages (such as yum-utils '
'for repoquery). '
'You must use at least one -r/--rpm{1}.').format(
('name(s) ' if has_yum else ''),
(', -p/--package, or both' if has_yum else '')))
args.add_argument('-d', '--dest-dir',
dest = 'dest_dir',
default = '/var/tmp/rpm_extract',
help = ('The destination for the extracted package file tree (in the format of '
'<dest_dir>/<pkg_nm>/<tree>). '
'Default: /var/tmp/rpm_extract'))
args.add_argument('-r', '--rpm',
dest = 'rpm_files',
metavar = 'PATH/TO/RPM',
action = 'append',
default = [],
help = ('If specified, use this RPM file instead of the system\'s RPM database. Can be '
'specified multiple times'))
if has_yum:
args.add_argument('-p', '--package',
dest = 'pkgs',
#nargs = 1,
metavar = 'PKGNAME',
action = 'append',
default = [],
help = ('If specified, restrict the list of packages to check against to only this package. '
'Can be specified multiple times. HIGHLY RECOMMENDED'))
args.add_argument('paths',
nargs = '+',
metavar = 'path/file/name.ext',
help = ('The path(s) of files to extract. If \'*\' is used, extract all files'))
return(args)
def main():
if has_argparse:
args = vars(parseArgs().parse_args())
args['rpm_files'] = [os.path.abspath(os.path.expanduser(i)) for i in args['rpm_files']]
if not any((args['rpm_files'], args['pkgs'])):
exit(('You have not specified any package files{0}.\n'
'This is so dumb we are bailing out.\n').format((' or package names') if has_yum else ''))
else:
raise RuntimeError('Please yum -y install python-argparse')
fe = FileExtractor(**args)
fe.extractFiles()
return()
if __name__ == '__main__':
main()

centos/find_changed_confs.py (new executable file, 171 lines)
#!/usr/bin/env python
# Supports CentOS 6.9 and up, untested on lower versions.
# Definitely probably won't work on 5.x since they use MD5(?), and 6.5? and up
# use SHA256.
# TODO: add support for .rpm files (like list_files_package.py)
import argparse
import copy
import datetime
import hashlib
import os
import re
from sys import version_info as py_ver
try:
import rpm
except ImportError:
exit('This script only runs on RHEL/CentOS/other RPM-based distros.')
# Thanks, dude!
# https://blog.fpmurphy.com/2011/08/programmatically-retrieve-rpm-package-details.html
class PkgChk(object):
def __init__(self, dirpath, symlinks = True, pkgs = None):
self.path = dirpath
self.pkgs = pkgs
self.symlinks = symlinks
self.orig_pkgs = copy.deepcopy(pkgs)
self.pkgfilemap = {}
self.flatfiles = []
self.flst = {}
self.trns = rpm.TransactionSet()
self.getFiles()
self.getActualFiles()
def getFiles(self):
if not self.pkgs:
# pkgs defaults to None, so initialize the list before appending.
self.pkgs = []
for p in self.trns.dbMatch():
self.pkgs.append(p['name'])
for p in self.pkgs:
for pkg in self.trns.dbMatch('name', p):
# Get the canonical package name
_pkgnm = pkg.sprintf('%{NAME}')
self.pkgfilemap[_pkgnm] = {}
# Get the list of file(s) and their MD5 hash(es)
for f in pkg.fiFromHeader():
if not f[0].startswith(self.path):
continue
if f[12] == '0' * 64:
_hash = None
else:
_hash = f[12]
self.pkgfilemap[_pkgnm][f[0]] = {'hash': _hash,
'date': f[3],
'size': f[1]}
self.flatfiles.append(f[0])
return()
def getActualFiles(self):
print('Getting a list of local files and their hashes.')
print('Please wait...\n')
for root, dirs, files in os.walk(self.path):
for f in files:
_fpath = os.path.join(root, f)
_stat = os.stat(_fpath)
if _fpath in self.flatfiles:
_hash = hashlib.sha256()
with open(_fpath, 'rb') as r:
for chunk in iter(lambda: r.read(4096), b''):
_hash.update(chunk)
self.flst[_fpath] = {'hash': str(_hash.hexdigest()),
'date': int(_stat.st_mtime),
'size': _stat.st_size}
else:
# It's not even in the package, so don't waste time
# with generating hashes or anything else.
self.flst[_fpath] = {'hash': None}
return()
def compareFiles(self):
for f in self.flst.keys():
if f not in self.flatfiles:
if not self.orig_pkgs:
print(('{0} is not installed by any package.').format(f))
else:
print(('{0} is not installed by package(s) ' +
'specified.').format(f))
else:
for p in self.pkgs:
if f not in self.pkgfilemap[p].keys():
continue
if (f in self.flst.keys() and
(self.flst[f]['hash'] !=
self.pkgfilemap[p][f]['hash'])):
if not self.symlinks:
if ((not self.pkgfilemap[p][f]['hash'])
or re.search('^0+$',
self.pkgfilemap[p][f]['hash'])):
continue
r_time = datetime.datetime.fromtimestamp(
self.pkgfilemap[p][f]['date'])
r_hash = self.pkgfilemap[p][f]['hash']
r_size = self.pkgfilemap[p][f]['size']
l_time = datetime.datetime.fromtimestamp(
self.flst[f]['date'])
l_hash = self.flst[f]['hash']
l_size = self.flst[f]['size']
r_str = ('\n{0} differs per {1}:\n' +
'\tRPM:\n' +
'\t\tSHA256: {2}\n' +
'\t\tBYTES: {3}\n' +
'\t\tDATE: {4}').format(f, p,
r_hash,
r_size,
r_time)
l_str = ('\tLOCAL:\n' +
'\t\tSHA256: {0}\n' +
'\t\tBYTES: {1}\n' +
'\t\tDATE: {2}').format(l_hash,
l_size,
l_time)
print(r_str)
print(l_str)
# Now we print missing files
for f in sorted(list(set(self.flatfiles))):
if not os.path.exists(f):
print('{0} was deleted from the filesystem.'.format(f))
return()
def parseArgs():
def dirchk(path):
p = os.path.abspath(path)
if not os.path.isdir(p):
raise argparse.ArgumentTypeError(('{0} is not a valid ' +
'directory').format(path))
return(p)
args = argparse.ArgumentParser(description = ('Get a list of config ' +
'files that have changed ' +
'from the package\'s ' +
'defaults'))
args.add_argument('-l', '--ignore-symlinks',
dest = 'symlinks',
action = 'store_false',
help = ('If specified, don\'t track files that are ' +
'symlinks in the RPM'))
args.add_argument('-p', '--package',
dest = 'pkgs',
#nargs = 1,
metavar = 'PKGNAME',
action = 'append',
default = [],
help = ('If specified, restrict the list of ' +
'packages to check against to only this ' +
'package. Can be specified multiple times. ' +
'HIGHLY RECOMMENDED'))
args.add_argument('dirpath',
type = dirchk,
metavar = 'path/to/directory',
help = ('The path to the directory containing the ' +
'configuration files to check against (e.g. ' +
'"/etc/ssh")'))
return(args)
def main():
args = vars(parseArgs().parse_args())
p = PkgChk(**args)
p.compareFiles()
if __name__ == '__main__':
main()
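getActualFiles() above hashes each file in 4 KiB chunks so large files are never loaded into memory whole. The same pattern in isolation (the stream contents here are just a hypothetical example):

```python
import hashlib
import io

def sha256_stream(fobj, chunk_size=4096):
    # Read fixed-size chunks until EOF; iter() stops at the b'' sentinel.
    h = hashlib.sha256()
    for chunk in iter(lambda: fobj.read(chunk_size), b''):
        h.update(chunk)
    return h.hexdigest()

digest = sha256_stream(io.BytesIO(b'abc'))
print(digest)  # -> ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```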

centos/isomirror_sort.py (new executable file, 92 lines)
#!/usr/bin/env python3
# requires python lxml module as well
import os
import socket
import time
from urllib.request import urlopen
from urllib.parse import urlparse
from bs4 import BeautifulSoup
# The page that contains the list of (authoritative ISO) mirrors
URL = 'http://isoredirect.centos.org/centos/7/isos/x86_64/'
# The formatting on the page is pretty simple - no divs, etc. - so we need to
# blacklist some links we pull in.
blacklisted_link_URLs = ('http://bittorrent.com/',
'http://wiki.centos.org/AdditionalResources/Repositories')
mirrors = {}
dflt_ports = {'https': 443, # unlikely. "HTTPS is currently not used for mirrors." per https://wiki.centos.org/HowTos/CreatePublicMirrors
'http': 80, # most likely.
'ftp': 21,
'rsync': 873}
def getMirrors():
mirrors = []
with urlopen(URL) as u:
pg_src = u.read().decode('utf-8')
soup = BeautifulSoup(pg_src, 'lxml')
for tag in soup.find_all('br')[4].next_siblings:
if tag.name == 'a' and tag['href'] not in blacklisted_link_URLs:
mirrors.append(tag['href'].strip())
return(mirrors)
def getHosts(mirror):
port = None
fqdn = None
login = ''
# "mirror" should be a base URI of the CentOS mirror path.
# mirrors.centos.org is pointless to use for this!
#url = os.path.join(mirror, 'sha256sum.txt.asc')
uri = urlparse(mirror)
spl_dom = uri.netloc.split(':')
if len(spl_dom) >= 2: # more complex URI
if len(spl_dom) == 2: # probably domain:port?
try:
port = int(spl_dom[-1]) # int() of a list slice would raise TypeError, which the except below wouldn't catch
except ValueError: # ooookay, so it's not domain:port, it's a user:pass@
if '@' in uri.netloc:
auth = uri.netloc.split('@')
fqdn = auth[1]
login = auth[0] + '@'
elif len(spl_dom) > 2: # even more complex URI, which ironically makes parsing easier
auth = uri.netloc.split('@')
fqdn = spl_dom[1].split('@')[1]
port = int(spl_dom[-1])
login = auth[0] + '@'
# matches missing values and simple URI. like, 99%+ of mirror URIs being passed.
if not fqdn:
fqdn = uri.netloc
if not port:
port = dflt_ports[uri.scheme]
mirrors[fqdn] = {'proto': uri.scheme,
'port': port,
'path': uri.path,
'auth': login}
return()
def getSpeeds():
for fqdn in mirrors.keys():
start = time.time()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((fqdn, mirrors[fqdn]['port']))
mirrors[fqdn]['time'] = time.time() - start
sock.close()
return()
def main():
for m in getMirrors():
getHosts(m)
getSpeeds()
ranking = sorted(mirrors.keys(), key = lambda k: (mirrors[k]['time']))
for i in ranking:
str_port = ':' + str(mirrors[i]['port'])
if mirrors[i]['port'] in dflt_ports.values():
str_port = ''
print('{proto}://{auth}{0}{p}{path}'.format(i,
**mirrors[i],
p = str_port))
if __name__ == '__main__':
main()
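getHosts() above splits the netloc apart by hand; for comparison, the urlparse result object already exposes hostname, port, and username attributes that cover the same user:pass@host:port cases (the mirror hostnames below are hypothetical):

```python
from urllib.parse import urlparse

# A simple URI and a "complex" one with auth and an explicit port.
simple = urlparse('http://mirror.example.org/centos/7/isos/x86_64/')
complex_uri = urlparse('ftp://user:pass@mirror.example.org:2121/centos/')

print(simple.hostname, simple.port)  # port is None when not given explicitly
print(complex_uri.hostname, complex_uri.port, complex_uri.username)
```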

centos/list_files_package.py (new executable file, 155 lines)
#!/usr/bin/env python
# Supports CentOS 6.9 and up, untested on lower versions.
# Lets you get a list of files for a given package name(s) without installing
# any extra packages (such as yum-utils for repoquery).
# NOTE: If you're on CentOS 6.x, since it uses such an ancient version of python you need to either install
# python-argparse OR just resign to using it for all packages with none of the features.
try:
import argparse
has_argparse = True
except ImportError:
has_argparse = False
import json
import os
import re
# For when CentOS/RHEL switch to python 3 by default (if EVER).
import sys
pyver = sys.version_info
try:
import rpm
except ImportError:
exit('This script only runs on the system-provided Python on RHEL/CentOS/other RPM-based distros.')
def all_pkgs():
# Gets a list of all packages.
pkgs = []
trns = rpm.TransactionSet()
for p in trns.dbMatch():
pkgs.append(p['name'])
pkgs = list(sorted(set(pkgs)))
return(pkgs)
class FileGetter(object):
def __init__(self, symlinks = True, verbose = False, *args, **kwargs):
self.symlinks = symlinks
self.verbose = verbose
self.trns = rpm.TransactionSet()
self.files = {}
for p in kwargs['pkgs']:
if p not in self.files.keys():
self.getFiles(p)
if kwargs['rpm_files']:
self.getLocalFiles(kwargs['rpm_files'])
def getLocalFiles(self, rpm_files):
# Needed because the rpm module can't handle arbitrary rpm files??? If it can, someone let me know.
# According to http://rpm5.org/docs/api/classRpmhdr.html#_details I can.
import yum
for r in rpm_files:
pkg = yum.YumLocalPackage(ts = self.trns,
filename = r)
_pkgnm = pkg.hdr.sprintf('%{NAME}')
if _pkgnm in self.files:
continue
if self.verbose:
self.files[_pkgnm] = {}
else:
self.files[_pkgnm] = []
for f in pkg.hdr.fiFromHeader():
_symlink = bool(re.search(r'^0+$', f[12]))
if self.verbose:
if _symlink:
if self.symlinks:
self.files[_pkgnm][f[0]] = '(symbolic link or directory)'
continue
self.files[_pkgnm][f[0]] = f[12]
else:
# Skip if it is a symlink but they aren't enabled
if _symlink and not self.symlinks:
continue
else:
self.files[_pkgnm].append(f[0])
self.files[_pkgnm].sort()
return()
def getFiles(self, pkgnm):
for pkg in self.trns.dbMatch('name', pkgnm):
# The canonical package name
_pkgnm = pkg.sprintf('%{NAME}')
# Return just a list of files, or a dict of filepath:hash if verbose is enabled.
if self.verbose:
self.files[_pkgnm] = {}
else:
self.files[_pkgnm] = []
for f in pkg.fiFromHeader():
_symlink = bool(re.search(r'^0+$', f[12]))
if self.verbose:
if _symlink:
if self.symlinks:
self.files[_pkgnm][f[0]] = '(symbolic link)'
continue
self.files[_pkgnm][f[0]] = f[12]
else:
# Skip if it is a symlink but they aren't enabled
if _symlink and not self.symlinks:
continue
else:
self.files[_pkgnm].append(f[0])
self.files[_pkgnm].sort()
return()
def parseArgs():
args = argparse.ArgumentParser(description = ('This script allows you get a list of files for a given package '
'name(s) without installing any extra packages (such as yum-utils '
'for repoquery). It is highly recommended to use at least one '
'-r/--rpm, -p/--package, or both.'))
args.add_argument('-l', '--ignore-symlinks',
dest = 'symlinks',
action = 'store_false',
help = ('If specified, don\'t report files that are symlinks in the RPM'))
args.add_argument('-v', '--verbose',
dest = 'verbose',
action = 'store_true',
help = ('If specified, include the hashes of the files'))
args.add_argument('-r', '--rpm',
dest = 'rpm_files',
metavar = 'PATH/TO/RPM',
action = 'append',
default = [],
help = ('If specified, use this RPM file instead of the system\'s RPM database. Can be '
'specified multiple times'))
args.add_argument('-p', '--package',
dest = 'pkgs',
#nargs = 1,
metavar = 'PKGNAME',
action = 'append',
default = [],
help = ('If specified, restrict the list of packages to check against to only this package. Can '
'be specified multiple times. HIGHLY RECOMMENDED'))
return(args)
def main():
if has_argparse:
args = vars(parseArgs().parse_args())
args['rpm_files'] = [os.path.abspath(os.path.expanduser(i)) for i in args['rpm_files']]
if not any((args['rpm_files'], args['pkgs'])):
prompt_str = ('You have not specified any package names.\nThis means we will get file lists for EVERY SINGLE '
'installed package.\nThis is a LOT of output and can take a few moments.\nIf this was a mistake, '
'you can hit ctrl-c now.\nOtherwise, hit the enter key to continue.\n')
sys.stderr.write(prompt_str)
if pyver.major >= 3:
input()
elif pyver.major == 2:
raw_input()
args['pkgs'] = all_pkgs()
else:
args = {'pkgs': all_pkgs(),
'rpm_files': []}
gf = FileGetter(**args)
print(json.dumps(gf.files, indent = 4))
return()
if __name__ == '__main__':
main()

192
centos/list_pkgs.py Executable file

@@ -0,0 +1,192 @@
#!/usr/bin/env python
# Supports CentOS 6.9 and up, untested on lower versions.
# Lets you dump a list of installed packages for backup purposes
# Reference: https://blog.fpmurphy.com/2011/08/programmatically-retrieve-rpm-package-details.html
import argparse
import copy
import datetime
import io
import re
import sys
try:
import yum
except ImportError:
exit('This script only runs on RHEL/CentOS/other yum-based distros.')
# Detect RH version.
ver_re = re.compile(r'^(centos( linux)? release) ([0-9.]+) .*$', re.IGNORECASE)
# distro module isn't stdlib, and platform.linux_distribution() (AND platform.distro()) are both deprecated in 3.7.
# So we get hacky.
with open('/etc/redhat-release', 'r') as f:
ver = [int(i) for i in ver_re.sub(r'\g<3>', f.read().strip()).split('.')]
import pprint
repo_re = re.compile('^@')
class PkgIndexer(object):
def __init__(self, **args):
self.pkgs = []
self.args = args
self.yb = yum.YumBase()
# Make the Yum API shut the heck up.
self.yb.preconf.debuglevel = 0
self.yb.preconf.errorlevel = 0
self._pkgs = self._pkglst()
self._build_pkginfo()
if self.args['report'] == 'csv':
self._gen_csv()
elif self.args['report'] == 'json':
self._gen_json()
elif self.args['report'] == 'xml':
self._gen_xml()
def _pkglst(self):
pkgs = []
# Get the list of packages
if self.args['reason'] != 'all':
for p in sorted(self.yb.rpmdb.returnPackages()):
if 'reason' not in p.yumdb_info:
continue
reason = getattr(p.yumdb_info, 'reason')
if reason == self.args['reason']:
pkgs.append(p)
else:
pkgs = sorted(self.yb.rpmdb.returnPackages())
return(pkgs)
def _build_pkginfo(self):
for p in self._pkgs:
_pkg = {'name': p.name,
'desc': p.summary,
'version': p.ver,
'release': p.release,
'arch': p.arch,
'built': datetime.datetime.fromtimestamp(p.buildtime),
'installed': datetime.datetime.fromtimestamp(p.installtime),
'repo': repo_re.sub('', p.ui_from_repo),
'sizerpm': p.packagesize,
'sizedisk': p.installedsize}
self.pkgs.append(_pkg)
def _gen_csv(self):
if self.args['plain']:
_fields = ['name']
else:
_fields = ['name', 'version', 'release', 'arch', 'desc', 'built',
'installed', 'repo', 'sizerpm', 'sizedisk']
import csv
if sys.hexversion >= 0x30000f0:
_buf = io.StringIO()
else:
_buf = io.BytesIO()
_csv = csv.writer(_buf, delimiter = self.args['sep_char'])
if self.args['header']:
if self.args['plain']:
_csv.writerow(['Name'])
else:
_csv.writerow(['Name', 'Version', 'Release', 'Architecture', 'Description', 'Build Time',
'Install Time', 'Repository', 'Size (RPM)', 'Size (On-Disk)'])
_csv = csv.DictWriter(_buf, fieldnames = _fields, extrasaction = 'ignore', delimiter = self.args['sep_char'])
for p in self.pkgs:
_csv.writerow(p)
_buf.seek(0, 0)
self.report = _buf.read().replace('\r\n', '\n')
return()
def _gen_json(self):
import json
if self.args['plain']:
self.report = json.dumps([p['name'] for p in self.pkgs], indent = 4)
else:
self.report = json.dumps(self.pkgs, default = str, indent = 4)
return()
def _gen_xml(self):
from lxml import etree
_xml = etree.Element('packages')
for p in self.pkgs:
_attrib = copy.deepcopy(p)
for i in ('built', 'installed', 'sizerpm', 'sizedisk'):
_attrib[i] = str(_attrib[i])
if self.args['plain']:
_pkg = etree.Element('package', attrib = {'name': p['name']})
else:
_pkg = etree.Element('package', attrib = _attrib)
_xml.append(_pkg)
#del(_attrib['name']) # I started to make it a more complex, nested structure... is that necessary?
if self.args['header']:
self.report = etree.tostring(_xml, pretty_print = True, xml_declaration = True, encoding = 'UTF-8')
else:
self.report = etree.tostring(_xml, pretty_print = True)
return()
def parseArgs():
args = argparse.ArgumentParser(description = ('This script lets you dump the list of installed packages'))
args.add_argument('-p', '--plain',
dest = 'plain',
action = 'store_true',
help = 'If specified, only create a list of plain package names (i.e. don\'t include extra '
'information)')
args.add_argument('-n', '--no-header',
dest = 'header',
action = 'store_false',
help = 'If specified, do not print column headers/XML headers')
args.add_argument('-s', '--separator',
dest = 'sep_char',
default = ',',
help = 'The separator used to split fields in the output (default: ,) (only used for CSV '
'reports)')
rprt = args.add_mutually_exclusive_group()
rprt.add_argument('-c', '--csv',
dest = 'report',
default = 'csv',
action = 'store_const',
const = 'csv',
help = 'Generate CSV output (this is the default). See -n/--no-header, -s/--separator')
rprt.add_argument('-x', '--xml',
dest = 'report',
default = 'csv',
action = 'store_const',
const = 'xml',
help = 'Generate XML output (requires the LXML module: yum install python-lxml)')
rprt.add_argument('-j', '--json',
dest = 'report',
default = 'csv',
action = 'store_const',
const = 'json',
help = 'Generate JSON output')
rsn = args.add_mutually_exclusive_group()
rsn.add_argument('-a', '--all',
dest = 'reason',
default = 'all',
action = 'store_const',
const = 'all',
help = ('Parse/report all packages that are currently installed. '
'Conflicts with -u/--user and -d/--dep. '
'This is the default'))
rsn.add_argument('-u', '--user',
dest = 'reason',
default = 'all',
action = 'store_const',
const = 'user',
help = ('Parse/report only packages which were explicitly installed. '
'Conflicts with -a/--all and -d/--dep'))
rsn.add_argument('-d', '--dep',
dest = 'reason',
default = 'all',
action = 'store_const',
const = 'dep',
help = ('Parse/report only packages which were installed to satisfy a dependency. '
'Conflicts with -a/--all and -u/--user'))
return(args)
def main():
args = vars(parseArgs().parse_args())
p = PkgIndexer(**args)
print(p.report)
return()
if __name__ == '__main__':
main()
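The _gen_csv() pattern above (an in-memory buffer plus csv.DictWriter with extrasaction = 'ignore') can be sketched in isolation with just the stdlib; the package data below is made up for illustration:

```python
import csv
import io

def rows_to_csv(rows, fields, header=None, sep=','):
    # Mirror _gen_csv() above: write only the requested fields into an
    # in-memory buffer; extra dict keys are dropped via extrasaction='ignore'.
    buf = io.StringIO()
    if header:
        csv.writer(buf, delimiter=sep).writerow(header)
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction='ignore', delimiter=sep)
    writer.writerows(rows)
    buf.seek(0)
    return buf.read().replace('\r\n', '\n')

pkgs = [{'name': 'bash', 'version': '4.2', 'arch': 'x86_64', 'sizedisk': 123}]
print(rows_to_csv(pkgs, ['name', 'version', 'arch'], header=['Name', 'Version', 'Arch']))
```

Because the DictWriter silently ignores unrequested keys, the same package dicts can feed both the -p/--plain report (names only) and the full report without reshaping the data.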

119
git/remotehooks.py Executable file

@@ -0,0 +1,119 @@
#!/usr/bin/env python3
import ast # Needed for localhost cmd strings
import json
import os
import re
import subprocess # Used for local cmds even when the git module is available
import sys
modules = {}
try:
import git
modules['git'] = True
except ImportError:
import subprocess
modules['git'] = False
try:
import paramiko
import socket
modules['ssh'] = True
except ImportError:
modules['ssh'] = False
repos = {}
repos['bdisk'] = {'remotecmds': {'g.rainwreck.com': {'gitbot': {'cmds': ['git -C /var/lib/gitbot/clonerepos/BDisk pull',
'git -C /var/lib/gitbot/clonerepos/BDisk pull --tags',
'asciidoctor /var/lib/gitbot/clonerepos/BDisk/docs/manual/HEAD.adoc -o /srv/http/bdisk/index.html']}}}}
repos['test'] = {'remotecmds': {'g.rainwreck.com': {'gitbot': {'cmds': ['echo $USER']}}}}
repos['games-site'] = {'remotecmds': {'games.square-r00t.net':
{'gitbot':
{'cmds': ['cd /srv/http/games-site && git pull']}}}}
repos['aif-ng'] = {'cmds': [['asciidoctor', '/opt/git/repo.checkouts/aif-ng/docs/README.adoc', '-o', '/srv/http/aif/index.html']]}
def execHook(gitinfo = False):
if not gitinfo:
gitinfo = getGitInfo()
repo = gitinfo['repo'].lower()
print('Executing hooks for {0}:{1}...'.format(repo, gitinfo['branch']))
print('This commit: {0}\nLast commit: {1}'.format(gitinfo['currev'], gitinfo['oldrev']))
# Execute local commands first
if 'cmds' in repos[repo].keys():
for cmd in repos[repo]['cmds']:
print('\tExecuting {0}...'.format(' '.join(cmd)))
subprocess.call(cmd)
if 'remotecmds' in repos[repo].keys():
for host in repos[repo]['remotecmds'].keys():
if 'port' in repos[repo]['remotecmds'][host].keys():
port = int(repos[repo]['remotecmds'][host]['port'])
else:
port = 22
for user in repos[repo]['remotecmds'][host].keys():
print('{0}@{1}:'.format(user, host))
if modules['ssh']:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username = user, port = port)
try:
for cmd in repos[repo]['remotecmds'][host][user]['cmds']:
print('\tExecuting \'{0}\'...'.format(cmd))
stdin, stdout, stderr = ssh.exec_command(cmd)
stdout = stdout.read().decode('utf-8')
stderr = stderr.read().decode('utf-8')
print(stdout)
if stderr != '':
print(stderr)
except paramiko.AuthenticationException:
print('({0}@{1}) AUTHENTICATION FAILED!'.format(user, host))
except paramiko.BadHostKeyException:
print('({0}@{1}) INCORRECT HOSTKEY!'.format(user, host))
except paramiko.SSHException:
print('({0}@{1}) FAILED TO ESTABLISH SSH!'.format(user, host))
except socket.error:
print('({0}@{1}) SOCKET CONNECTION FAILURE! (DNS, timeout/firewall, etc.)'.format(user, host))
else:
for cmd in repos[repo]['remotecmds'][host][user]['cmds']:
try:
print('\tExecuting \'{0}\'...'.format(cmd))
subprocess.call(['ssh', '{0}@{1}'.format(user, host), cmd])
except Exception:
print('({0}@{1}) An error occurred!'.format(user, host))
def getGitInfo():
refs = sys.argv[1].split('/')
gitinfo = {}
if refs[1] == 'tags':
gitinfo['branch'] = False
gitinfo['tag'] = refs[2]
elif refs[1] == 'heads':
gitinfo['branch'] = refs[2]
gitinfo['tag'] = False
gitinfo['repo'] = os.environ['GL_REPO']
gitinfo['user'] = os.environ['GL_USER']
clientinfo = os.environ['SSH_CONNECTION'].split()
gitinfo['ssh'] = {'client': {'ip': clientinfo[0], 'port': clientinfo[1]},
'server': {'ip': clientinfo[2], 'port': clientinfo[3]},
'user': os.environ['USER']
}
if os.environ['GIT_DIR'] == '.':
gitinfo['dir'] = os.environ['PWD']
else:
#gitinfo['dir'] = os.path.join(os.environ['GL_REPO_BASE'], gitinfo['repo'], '.git')
gitinfo['dir'] = os.path.abspath(os.path.expanduser(os.environ['GIT_DIR']))
if modules['git']:
# This is preferred, because it's much faster and more flexible.
#https://gitpython.readthedocs.io/en/stable
gitobj = git.Repo(gitinfo['dir'])
commits = list(gitobj.iter_commits(gitobj.head.ref.name, max_count = 2))
else:
commits = subprocess.check_output(['git', 'rev-parse', 'HEAD..HEAD^1']).decode('utf-8').splitlines()
gitinfo['oldrev'] = re.sub(r'^\^', '', commits[1])
gitinfo['currev'] = re.sub(r'^\^', '', commits[0])
return(gitinfo)
#sys.exit(0)
def main():
execHook()
if __name__ == '__main__':
main()
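The ref-parsing logic at the top of getGitInfo() can be isolated into a small helper; a minimal sketch of the same branch/tag split (the gitolite environment lookups are omitted):

```python
def parse_ref(ref):
    # Gitolite-style hooks receive refs like 'refs/heads/master' or
    # 'refs/tags/v1.0'; split them into a branch or a tag, mirroring
    # getGitInfo() above (the unused field is set to False).
    parts = ref.split('/')
    info = {'branch': False, 'tag': False}
    if parts[1] == 'tags':
        info['tag'] = parts[2]
    elif parts[1] == 'heads':
        info['branch'] = parts[2]
    return info

print(parse_ref('refs/heads/master'))
print(parse_ref('refs/tags/v1.0'))
```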

69
git/remotehooks2.py Executable file

@@ -0,0 +1,69 @@
#!/usr/bin/env python3
import json
import os
import re
import sys
# Can we use paramiko for remotecmds?
try:
import paramiko
import socket
has_ssh = True
except ImportError:
has_ssh = False
# Can we use the python git module?
try:
import git # "python-gitpython" in Arch; https://github.com/gitpython-developers/gitpython
has_git = True
except ImportError:
has_git = False
class repoHooks(object):
def __init__(self):
with open(os.path.join(os.environ['HOME'],
'.gitolite',
'local',
'hooks',
'repo-specific',
'githooks.json'), 'r') as f:
self.cfg = json.loads(f.read())
self.repos = list(self.cfg.keys())
self.env = os.environ.copy()
if 'GIT_DIR' in self.env.keys():
del(self.env['GIT_DIR'])
self.repo = self.env['GL_REPO']
def remoteExec(self):
for _host in self.cfg[self.repo]['remotecmds'].keys():
if len(_host.split(':')) == 2:
_server, _port = [i.strip() for i in _host.split(':')]
else:
_port = 22
_server = _host.split(':')[0]
_h = self.cfg[self.repo]['remotecmds'][_host]
for _user in _h.keys():
_u = _h[_user]
if has_ssh:
_ssh = paramiko.SSHClient()
_ssh.load_system_host_keys()
_ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
_ssh.connect(_server,
int(_port),
_user)
for _cmd in _u['cmds']:
pass # DO STUFF HERE
else:
return() # no-op; no paramiko
def localExec(self):
pass
def main():
h = repoHooks()
if h.repo not in h.repos:
return()
if __name__ == '__main__':
main()

27
git/sample.githooks.json Normal file

@@ -0,0 +1,27 @@
# remotehooks.py should go in your <gitolite repo>/local/hooks/repo-specific directory,
# along with an (uncommented) copy of this file configured for your particular hooks.
# "cmds" is a list of commands performed locally on the gitolite server;
# "remotecmds" contains a nested dictionary of commands to run remotely.
{
"<REPO_NAME>": {
"remotecmds": {
"<HOST_OR_IP_ADDRESS>": {
"<USER>": {
"cmds": [
"<COMMAND_1>",
"<COMMAND_2>"
]
}
}
}
},
"<REPO2_NAME>": {
"cmds": [
[
"<LOCAL_COMMAND_1>",
"<LOCAL_COMMAND_2>"
]
]
}
}
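A minimal sketch of how a hook script might load and walk the structure above, along the lines of repoHooks in remotehooks2.py; the repo name, host, user, and command below are hypothetical:

```python
import json

# A tiny inline stand-in for githooks.json (comments stripped, as the
# header above requires; all names here are made up for illustration).
sample = json.loads('''
{
  "mysite": {
    "remotecmds": {
      "build.example.net": {
        "deploy": {"cmds": ["cd /srv/http/mysite && git pull"]}
      }
    }
  }
}
''')

def hooks_for(cfg, repo):
    # Flatten a repo's "remotecmds" tree into (host, user, cmd) triples,
    # ready to hand to an SSH executor.
    out = []
    for host, users in cfg.get(repo, {}).get('remotecmds', {}).items():
        for user, spec in users.items():
            for cmd in spec['cmds']:
                out.append((host, user, cmd))
    return out

print(hooks_for(sample, 'mysite'))
```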


@@ -1,353 +0,0 @@
#!/usr/bin/env python3
import argparse
import datetime
import email
import os
import re
import shutil
import subprocess
from io import BytesIO
from socket import *
import urllib.parse
import gpgme # non-stdlib; Arch package is "python-pygpgme"
# TODO:
# -attach pubkey when sending below email
# mail to first email address in key with signed message:
#Subj: Your GPG key has been signed
#
#Hello! Thank you for participating in a keysigning party and exchanging keys.
#
#I have signed your key (KEYID) with trust level "TRUSTLEVEL" because:
#
#* You have presented sufficient proof of identity
#
#The signatures have been pushed to KEYSERVERS.
#
#I have taken the liberty of attaching my public key in the event you've not signed it yet and were unable to find it. Please feel free to push to pgp.mit.edu or hkps.pool.sks-keyservers.net.
#
#As a reminder, my key ID, Keybase.io username, and verification/proof of identity can all be found at:
#
#https://devblog.square-r00t.net/about/my-gpg-public-key-verification-of-identity
#
#Thanks again!
def getKeys(args):
# Get our concept
os.environ['GNUPGHOME'] = args['gpgdir']
gpg = gpgme.Context()
keys = {}
allkeys = []
# Do we have the key already? If not, fetch.
for k in args['rcpts'].keys():
if args['rcpts'][k]['type'] == 'fpr':
allkeys.append(k)
if args['rcpts'][k]['type'] == 'email':
# We need to actually do a lookup on the email address.
with open(os.devnull, 'w') as f:
# TODO: replace with gpg.keylist_mode(gpgme.KEYLIST_MODE_EXTERN) and internal mechanisms?
keyout = subprocess.run(['gpg2',
'--search-keys',
'--with-colons',
'--batch',
k],
stdout = subprocess.PIPE,
stderr = f)
keyout = keyout.stdout.decode('utf-8').splitlines()
for line in keyout:
if line.startswith('pub:'):
key = line.split(':')[1]
keys[key] = {}
keys[key]['uids'] = {}
keys[key]['time'] = int(line.split(':')[4])
elif line.startswith('uid:'):
uid = re.split('<(.*)>', urllib.parse.unquote(line.split(':')[1].strip()))
uid.remove('')
uid = [u.strip() for u in uid]
keys[key]['uids'][uid[1]] = {}
keys[key]['uids'][uid[1]]['comment'] = uid[0]
keys[key]['uids'][uid[1]]['time'] = int(line.split(':')[2])
if len(keys) > 1: # Print the keys and prompt for a selection.
print('\nWe found the following keys for <{0}>...\n\nKEY ID:'.format(k))
for k in keys:
print('{0}\n{1:6}(Generated at {2}) UIDs:'.format(k, '', datetime.datetime.utcfromtimestamp(keys[k]['time'])))
for email in keys[k]['uids']:
print('{0:42}(Generated {3}) <{2}> {1}'.format('',
keys[k]['uids'][email]['comment'],
email,
datetime.datetime.utcfromtimestamp(
keys[k]['uids'][email]['time'])))
print()
while True:
key = input('Please enter the (full) appropriate key: ')
if key not in keys.keys():
print('Please enter a full key ID from the list above or hit ctrl-d to exit.')
else:
allkeys.append(key)
break
else:
if not len(keys.keys()) >= 1:
print('Could not find {0}!'.format(k))
continue
key = list(keys.keys())[0]
print('\nFound key {0} for <{1}> (Generated at {2}):'.format(key, k, datetime.datetime.utcfromtimestamp(keys[key]['time'])))
for email in keys[key]['uids']:
print('\t(Generated {2}) {0} <{1}>'.format(keys[key]['uids'][email]['comment'],
email,
datetime.datetime.utcfromtimestamp(keys[key]['uids'][email]['time'])))
allkeys.append(key)
print()
## And now we can (FINALLY) fetch the key(s).
# TODO: replace with gpg.keylist_mode(gpgme.KEYLIST_MODE_EXTERN) and internal mechanisms?
recvcmd = ['gpg2', '--recv-keys', '--batch', '--yes'] # We'll add the keys onto the end of this next.
recvcmd.extend(allkeys)
with open(os.devnull, 'w') as f:
subprocess.run(recvcmd, stdout = f, stderr = f) # We hide stderr because gpg, for some unknown reason, spits non-errors to stderr.
return(allkeys)
def sigKeys(keyids):
pass
def modifyDirmngr(op, args):
if not args['keyservers']:
return()
pid = str(os.getpid())
activecfg = os.path.join(args['gpgdir'], 'dirmngr.conf')
bakcfg = '{0}.{1}'.format(activecfg, pid)
if op in ('new', 'start'):
if os.path.lexists(activecfg):
shutil.copy2(activecfg, bakcfg)
with open(bakcfg, 'r') as read, open(activecfg, 'w') as write:
for line in read:
if not line.startswith('keyserver '):
write.write(line)
with open(activecfg, 'a') as f:
for s in args['keyservers']:
uri = '{0}://{1}:{2}'.format(s['proto'], s['server'], s['port'][0])
f.write('keyserver {0}\n'.format(uri))
if op in ('old', 'stop'):
if os.path.lexists(bakcfg):
with open(bakcfg, 'r') as read, open(activecfg, 'w') as write:
for line in read:
write.write(line)
os.remove(bakcfg)
else:
os.remove(activecfg)
subprocess.run(['gpgconf',
'--reload',
'dirmngr'])
return()
def serverParser(uri):
# https://en.wikipedia.org/wiki/Key_server_(cryptographic)#Keyserver_examples
# We need to make a mapping of the default ports.
server = {}
protos = {'hkp': [11371, ['tcp', 'udp']],
'hkps': [443, ['tcp']], # Yes, same as https
'http': [80, ['tcp']],
'https': [443, ['tcp']], # SSL/TLS
'ldap': [389, ['tcp', 'udp']], # includes TLS negotiation since it runs on the same port
'ldaps': [636, ['tcp', 'udp']]} # SSL
urlobj = urllib.parse.urlparse(uri)
server['proto'] = urlobj.scheme
lazy = False
if not server['proto']:
server['proto'] = 'hkp' # Default
server['server'] = urlobj.hostname
if not server['server']:
server['server'] = re.sub('^([A-Za-z]://)?(.+[^:][^0-9])(:[0-9]+)?$', '\g<2>', uri)
lazy = True
server['port'] = urlobj.port
if not server['port']:
if lazy:
p = re.sub('.*:([0-9]+)$', '\g<1>', uri)
server['port'] = protos[server['proto']] # Default
return(server)
def parseArgs():
def getDefGPGDir():
try:
gpgdir = os.environ['GNUPGHOME']
except KeyError:
try:
homedir = os.environ['HOME']
gpgdchk = os.path.join(homedir, '.gnupg')
except KeyError:
# There is no reason that this should ever get this far, but... edge cases be crazy.
gpgdchk = os.path.join(os.path.expanduser('~'), '.gnupg')
if os.path.isdir(gpgdchk):
gpgdir = gpgdchk
else:
gpgdir = None
return(gpgdir)
def getDefKey(defgpgdir):
os.environ['GNUPGHOME'] = defgpgdir
if not defgpgdir:
return(None)
defkey = None
gpg = gpgme.Context()
for k in gpg.keylist(None, True): # params are query and secret keyring, respectively
if k.can_sign and True not in (k.revoked, k.expired, k.disabled):
defkey = k.subkeys[0].fpr
break # We'll just use the first primary key we find that's valid as the default.
return(defkey)
def getDefKeyservers(defgpgdir):
srvlst = [None]
# We don't need these since we use the gpg agent. Requires GPG 2.1 and above, probably.
#if os.path.isfile(os.path.join(defgpgdir, 'dirmngr.conf')):
# pass
dirmgr_out = subprocess.run(['gpg-connect-agent', '--dirmngr', 'keyserver', '/bye'], stdout = subprocess.PIPE)
for l in dirmgr_out.stdout.decode('utf-8').splitlines():
#if len(l) == 3 and l.lower().startswith('s keyserver'): # It's a keyserver line
if l.lower().startswith('s keyserver'): # It's a keyserver line
s = l.split()[2]
if len(srvlst) == 1 and srvlst[0] == None:
srvlst = [s]
else:
srvlst.append(s)
return(','.join(srvlst))
defgpgdir = getDefGPGDir()
defkey = getDefKey(defgpgdir)
defkeyservers = getDefKeyservers(defgpgdir)
args = argparse.ArgumentParser(description = 'Keysigning Assistance and Notifying Tool (KANT)',
epilog = 'brent s. || 2017 || https://square-r00t.net',
formatter_class = argparse.RawTextHelpFormatter)
args.add_argument('-k',
'--keys',
dest = 'keys',
required = True,
help = 'A single or comma-separated list of keys to sign,\ntrust, and notify. Can also be an email address.')
args.add_argument('-K',
'--sigkey',
dest = 'sigkey',
default = defkey,
help = 'The key to use when signing other keys.\nDefault is \033[1m{0}\033[0m.'.format(defkey))
args.add_argument('-b',
'--batch',
dest = 'batchfile',
default = None,
metavar = '/path/to/batchfile',
help = 'If specified, a CSV file to use as a batch run\nin the format of (one per line):\n' +
'\n\033[1mKEY_FINGERPRINT_OR_EMAIL_ADDRESS,TRUSTLEVEL,PUSH_TO_KEYSERVER\033[0m\n' +
'\n\033[1mTRUSTLEVEL\033[0m can be numeric or string:' +
'\n\n\t\033[1m0 = Unknown\n\t1 = Untrusted\n\t2 = Marginal\n\t3 = Full\n\t4 = Ultimate\033[0m\n' +
'\n\033[1mPUSH_TO_KEYSERVER\033[0m can be \033[1m1/True\033[0m or \033[1m0/False\033[0m. If marked as False,\n' +
'the signature will be made local/non-exportable.')
args.add_argument('-d',
'--gpgdir',
dest = 'gpgdir',
default = defgpgdir,
help = 'The GnuPG configuration directory to use (containing\n' +
'your keys, etc.); default is \033[1m{0}\033[0m.'.format(defgpgdir))
args.add_argument('-s',
'--keyservers',
dest = 'keyservers',
default = defkeyservers,
help = 'The comma-separated keyserver(s) to push to. If "None", don\'t\n' +
'push signatures (local/non-exportable signatures will be made).\n'
'Default keyserver list is: \n\n\033[1m{0}\033[0m\n\n'.format(re.sub(',', '\n', defkeyservers)))
args.add_argument('-n',
'--netproto',
dest = 'netproto',
action = 'store',
choices = ['4', '6'],
default = '4',
help = 'Whether to use (IPv)4 or (IPv)6. Default is to use IPv4.')
args.add_argument('-t',
'--testkeyservers',
dest = 'testkeyservers',
action = 'store_true',
help = 'If specified, initiate a test connection with each\n'
'\nkeyserver before anything else. Disabled by default.')
return(args)
def verifyArgs(args):
## Some pythonization...
# We don't want to only strip the values, we want to remove ALL whitespace.
#args['keys'] = [k.strip() for k in args['keys'].split(',')]
#args['keyservers'] = [s.strip() for s in args['keyservers'].split(',')]
args['keys'] = [re.sub('\s', '', k) for k in args['keys'].split(',')]
args['keyservers'] = [re.sub('\s', '', s) for s in args['keyservers'].split(',')]
args['keyservers'] = [serverParser(s) for s in args['keyservers']]
## Key(s) to sign
args['rcpts'] = {}
for k in args['keys']:
args['rcpts'][k] = {}
try:
int(k, 16)
ktype = 'fpr'
except: # If it isn't a valid key ID...
if not re.match('^[\w\.\+\-]+\@[\w-]+\.[a-z]{2,3}$', k): # is it an email address?
raise ValueError('{0} is not a valid email address'.format(k))
else:
ktype = 'email'
args['rcpts'][k]['type'] = ktype
if ktype == 'fpr' and not len(k) == 40: # Security is important. We don't want users getting collisions, so we don't allow shortened key IDs.
raise ValueError('{0} is not a full 40-char key ID or key fingerprint'.format(k))
del args['keys']
## Batch file
if args['batchfile']:
batchfilepath = os.path.abspath(os.path.expanduser(args['batchfile']))
if not os.path.isfile(batchfilepath):
raise ValueError('{0} does not exist or is not a regular file.'.format(batchfilepath))
else:
args['batchfile'] = batchfilepath
## Signing key
if not args['sigkey']:
raise ValueError('A key for signing is required') # We need a key we can sign with.
else:
if not os.path.lexists(args['gpgdir']):
raise FileNotFoundError('{0} does not exist'.format(args['gpgdir']))
elif os.path.isfile(args['gpgdir']):
raise NotADirectoryError('{0} is not a directory'.format(args['gpgdir']))
try:
os.environ['GNUPGHOME'] = args['gpgdir']
gpg = gpgme.Context()
except:
raise RuntimeError('Could not use {0} as a GnuPG home'.format(args['gpgdir']))
# Now we need to verify that the private key exists...
try:
sigkey = gpg.get_key(args['sigkey'], True)
except gpgme.GpgmeError:
raise ValueError('Cannot use key {0}'.format(args['sigkey']))
# And that it is an eligible candidate to use to sign.
if not sigkey.can_sign or True in (sigkey.revoked, sigkey.expired, sigkey.disabled):
raise ValueError('{0} is not a valid candidate for signing'.format(args['sigkey']))
## Keyservers
if args['testkeyservers']:
for s in args['keyservers']:
# Test to make sure the keyserver is accessible.
# First we need to construct a way to use python's socket connector
# Great. Now we need to just quickly check to make sure it's accessible - if specified.
if args['netproto'] == '4':
nettype = AF_INET
elif args['netproto'] == '6':
nettype = AF_INET6
for proto in s['port'][1]:
if proto == 'udp':
netproto = SOCK_DGRAM
elif proto == 'tcp':
netproto = SOCK_STREAM
sock = socket(nettype, netproto)
sock.settimeout(10)
tests = sock.connect_ex((s['server'], int(s['port'][0])))
uristr = '{0}://{1}:{2} ({3})'.format(s['proto'], s['server'], s['port'][0], proto.upper())
if not tests == 0:
raise RuntimeError('Keyserver {0} is not available'.format(uristr))
else:
print('Keyserver {0} is accepting connections.'.format(uristr))
sock.close()
return(args)
def main():
rawargs = parseArgs()
args = verifyArgs(vars(rawargs.parse_args()))
modifyDirmngr('new', args)
fprs = getKeys(args)
sigKeys(fprs)
modifyDirmngr('old', args)
if __name__ == '__main__':
main()

2
gpg/kant/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
/gpgme.pdf
/tests


@@ -0,0 +1,18 @@
# NOTE: The python csv module does NOT skip
# commented lines!
# This is my personal key. Ultimate trust,
# push key, careful checking, notify
748231EBCBD808A14F5E85D28C004C2F93481F6B,4,1,3,1
# This is a testing junk key generated on a completely separate box,
# and does not exist on ANY keyservers nor the local keyring.
# Never trust, local sig, unknown checking, don't notify
A03CACFD7123AF443A3A185298A8A46921C8DDEF,-1,0,0,0
# This is jthan's key.
# assign full trust, push to keyserver, casual checking, notify
EFD9413B17293AFDFE6EA6F1402A088DEDF104CB,full,true,casual,yes
# This is paden's key.
# assign Marginal trust, push to keyserver, casual checking, notify
6FA8AE12AEC90B035EEE444FE70457341A63E830,2,True,Casual,True
# This is the email for the Sysadministrivia serverkey.
# Assign full trust, push to keyserver, careful checking, don't notify
<admin@sysadministrivia.com>, full, yes, careful, false
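Because of that, any consumer of this batch file has to strip the comment lines itself before handing the rest to the csv module; a minimal sketch (an illustration, not KANT's actual reader):

```python
import csv

batch = '''# comment line
748231EBCBD808A14F5E85D28C004C2F93481F6B,4,1,3,1
# another comment
<admin@sysadministrivia.com>, full, yes, careful, false
'''

def read_batch(text):
    # Drop comment and blank lines first, since csv.reader has no
    # comment support, then strip whitespace from each field.
    lines = [l for l in text.splitlines()
             if l.strip() and not l.lstrip().startswith('#')]
    return [[f.strip() for f in row] for row in csv.reader(lines)]

for row in read_batch(batch):
    print(row)
```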

15
gpg/kant/docs/README Normal file

@@ -0,0 +1,15 @@
GENERATING THE MAN PAGE:
If you have asciidoctor installed, you can generate the man page in one of two ways.
The first way:
asciidoctor -b manpage kant.1.adoc -o- | groff -Tascii -man | gzip -c > kant.1.gz
This will generate a fixed-width man page.
The second way (recommended):
asciidoctor -b manpage kant.1.adoc -o- | gzip -c > kant.1.gz
This will generate a dynamic-width man page; most modern versions of man(1) prefer this form.


@@ -0,0 +1,46 @@
The __init__() function of kant.SigSession() takes a single argument: args.
It should be a dict, structured like this:
{'batch': False,
'checklevel': None,
'gpgdir': '/home/bts/.gnupg',
'keys': 'EFD9413B17293AFDFE6EA6F1402A088DEDF104CB,admin@sysadministrivia.com',
'keyservers': 'hkp://sks.mirror.square-r00t.net:11371,hkps://hkps.pool.sks-keyservers.net:443,http://pgp.mit.edu:80',
'local': 'false',
'msmtp_profile': None,
'notify': True,
'sigkey': '748231EBCBD808A14F5E85D28C004C2F93481F6B',
'testkeyservers': False,
'trustlevel': None}
The gpgdir, sigkey, and keyservers are set from system defaults in kant.parseArgs() if it's run interactively.
This *may* be reworked in the future to provide a mechanism for external calls to kant.SigSession(), but for now,
it's up to you to provide all of the data in the dict in the above format.
It will then internally verify these items and do various conversions, so that self.args becomes this:
(Note that some keys, such as "local", are validated and converted to appropriate values later on,
e.g. 'false' => False.)
{'batch': False,
'checklevel': None,
'gpgdir': '/home/bts/.gnupg',
'keys': ['EFD9413B17293AFDFE6EA6F1402A088DEDF104CB',
'admin@sysadministrivia.com'],
'keyservers': [{'port': [11371, ['tcp', 'udp']],
'proto': 'hkp',
'server': 'sks.mirror.square-r00t.net'},
{'port': [443, ['tcp']],
'proto': 'hkps',
'server': 'hkps.pool.sks-keyservers.net'},
{'port': [80, ['tcp']],
'proto': 'http',
'server': 'pgp.mit.edu'}],
'local': 'false',
'msmtp_profile': None,
'notify': True,
'rcpts': {'EFD9413B17293AFDFE6EA6F1402A088DEDF104CB': {'type': 'fpr'},
'admin@sysadministrivia.com': {'type': 'email'}},
'sigkey': '748231EBCBD808A14F5E85D28C004C2F93481F6B',
'testkeyservers': False,
'trustlevel': None}
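The 'keys' -> 'rcpts' conversion described above can be sketched on its own; this paraphrases the classification logic (full fingerprint vs. email address) rather than reproducing verifyArgs() exactly:

```python
import re

def classify_keys(keys_str):
    # Split the comma-separated keys/emails and classify each entry,
    # mirroring the 'keys' -> 'rcpts' conversion described above.
    rcpts = {}
    for k in [re.sub(r'\s', '', i) for i in keys_str.split(',')]:
        try:
            int(k, 16)  # parses => it's hex; a full fingerprint is 40 chars
            ktype = 'fpr' if len(k) == 40 else None
        except ValueError:
            # Not hex; accept it if it looks like an email address
            # (regex simplified from the kant source for illustration).
            ktype = 'email' if re.match(r'^[\w.+-]+@[\w-]+\.[a-z]{2,}$', k) else None
        if ktype is None:
            raise ValueError('{0} is not a full fingerprint or email address'.format(k))
        rcpts[k] = {'type': ktype}
    return rcpts

print(classify_keys('EFD9413B17293AFDFE6EA6F1402A088DEDF104CB, admin@sysadministrivia.com'))
```

Shortened key IDs deliberately fail the length check, matching kant's refusal to sign anything less than a full 40-char fingerprint.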


@@ -0,0 +1,33 @@
The following functions are available within the SigSession() class:
getTpls()
Get the user-specified templates if they exist, otherwise set up stock ones.
modifyDirmngr(op)
*op* can be either:
new/start/replace - modify dirmngr to use the runtime-specified keyserver(s)
old/stop/restore - modify dirmngr back to the keyservers that were defined before modification
buildKeys()
build out the keys dict (see REF.keys.struct.txt).
getKeys()
fetch keys in the keys dict (see REF.keys.struct.txt) from a keyserver if they aren't found in the local keyring.
trustKeys()
set up trusts for the keys in the keys dict (see REF.keys.struct.txt). prompts for each trust not found/specified at runtime.
sigKeys()
sign keys in the keys dict (see REF.keys.struct.txt), either exportable or local depending on runtime specification.
pushKeys()
push keys in the keys dict (see REF.keys.struct.txt) to the keyservers specified at runtime (as long as they weren't specified to be local/non-exportable signatures; then we don't bother).
sendMails()
send emails to each of the recipients specified in the keys dict (see REF.keys.struct.txt).
serverParser(uri)
returns a dict of a keyserver URI broken up into separate components easier for parsing.
verifyArgs(locargs)
does some verifications, classifies certain data, calls serverParser(), etc.
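serverParser(uri) can be approximated with the stdlib; the default-port table below mirrors the one in the kant source (hkp is the fallback scheme), but this is a sketch, not the class's actual method:

```python
import urllib.parse

# Default ports per scheme, as listed in the kant source.
PROTOS = {'hkp': 11371, 'hkps': 443, 'http': 80, 'https': 443,
          'ldap': 389, 'ldaps': 636}

def parse_keyserver(uri):
    # Break a keyserver URI into proto/server/port, filling in defaults
    # for a bare hostname or a missing port.
    u = urllib.parse.urlparse(uri if '://' in uri else 'hkp://' + uri)
    proto = u.scheme or 'hkp'
    return {'proto': proto, 'server': u.hostname, 'port': u.port or PROTOS[proto]}

print(parse_keyserver('sks.mirror.square-r00t.net'))
print(parse_keyserver('hkps://pool.example.net'))
```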


@@ -0,0 +1,127 @@
TYPES:
d = dict
l = list
s = string
i = int
b = binary (True/False)
o = object
- pkey's dict key is the 40-char key ID of the primary key
- "==>" indicates the next item is a dict and the current item may contain one or more elements of the same format,
"++>" is a list,
"-->" is a "flat" item (string, object, int, etc.)
-"status" is one of "an UPGRADE", "a DOWNGRADE", or "a NEW TRUST".
keys(d) ==> (40-char key ID)(s) ==> pkey(d) --> email(s)
--> name(s)
--> creation (o, datetime)
--> key(o, gpg)
--> trust(i)
--> check(i)
--> local(b)
--> notify(b)
==> subkeys(d) ==> (40-char key ID)(s) --> creation
--> change(b)
--> sign(b)
--> status(s)
==> uids(d) ==> email(s) --> name(s)
--> comment(s)
--> email(s)
--> updated(o, datetime)*
* For many keys, this is unset. In-code, this is represented by having a timestamp of 0, or a
datetime object matching UNIX epoch. This is converted to a string, "Never/unknown".
For email templates, the keys dict is looped over per-key as "key".
So, for example, instead of specifying "keys['748231EBCBD808A14F5E85D28C004C2F93481F6B']['pkey']['name']",
you instead should specify "key['pkey']['name']". To get the name of e.g. the second UID,
you'd use "key['uids'][(uid email)]['name']".
e.g. in the code, it's this:
{'748231EBCBD808A14F5E85D28C004C2F93481F6B': {'change': None,
                                              'check': 0,
                                              'local': False,
                                              'notify': True,
                                              'pkey': {'creation': '2013-12-10 08:35:52',
                                                       'email': 'brent.saner@gmail.com',
                                                       'key': '<GPGME object>',
                                                       'name': 'Brent Timothy Saner'},
                                              'sign': True,
                                              'status': None,
                                              'subkeys': {'748231EBCBD808A14F5E85D28C004C2F93481F6B': '2013-12-10 08:35:52'},
                                              'trust': 2,
                                              'uids': {'brent.saner@gmail.com': {'comment': '',
                                                                                 'name': 'Brent Timothy Saner',
                                                                                 'updated': 'Never/unknown'},
                                                       'bts@square-r00t.net': {'comment': 'http://www.square-r00t.net',
                                                                               'name': 'Brent S.',
                                                                               'updated': 'Never/unknown'},
                                                       'r00t@sysadministrivia.com': {'comment': 'https://sysadministrivia.com',
                                                                                     'name': 'r00t^2',
                                                                                     'updated': 'Never/unknown'},
                                                       'squarer00t@keybase.io': {'comment': '',
                                                                                 'name': 'keybase.io/squarer00t',
                                                                                 'updated': 'Never/unknown'}}}}
but this is passed to the email template as:
{'change': None,
 'check': 0,
 'local': False,
 'notify': True,
 'pkey': {'creation': '2013-12-10 08:35:52',
          'email': 'brent.saner@gmail.com',
          'key': '<GPGME object>',
          'name': 'Brent Timothy Saner'},
 'sign': True,
 'status': None,
 'subkeys': {'748231EBCBD808A14F5E85D28C004C2F93481F6B': '2013-12-10 08:35:52'},
 'trust': 2,
 'uids': {'brent.saner@gmail.com': {'comment': '',
                                    'name': 'Brent Timothy Saner',
                                    'updated': '1970-01-01 00:00:00'},
          'bts@square-r00t.net': {'comment': 'http://www.square-r00t.net',
                                  'name': 'Brent S.',
                                  'updated': 'Never/unknown'},
          'r00t@sysadministrivia.com': {'comment': 'https://sysadministrivia.com',
                                        'name': 'r00t^2',
                                        'updated': 'Never/unknown'},
          'squarer00t@keybase.io': {'comment': '',
                                    'name': 'keybase.io/squarer00t',
                                    'updated': 'Never/unknown'}}}
(because the emails are generated by iterating over the keys).
the same structure is available via the "mykey" dictionary (e.g. to get the key ID of *your* first subkey,
you can use "mykey['subkeys']|first" in a template, since "subkeys" is keyed by fingerprint):
{'change': False,
 'check': None,
 'local': False,
 'notify': False,
 'pkey': {'creation': '2017-09-07 20:54:31',
          'email': 'test@test.com',
          'key': '<GPGME object>',
          'name': 'test user'},
 'sign': False,
 'status': None,
 'subkeys': {'1CD9200637EC587D1F8EB94198748C2879CCE88D': '2017-09-07 20:54:31',
             '2805EC3D90E2229795AFB73FF85BC40E6E17F339': '2017-09-07 20:54:31'},
 'trust': 'ultimate',
 'uids': {'test@test.com': {'comment': 'this is a testing junk key. DO NOT IMPORT/SIGN/TRUST.',
                            'name': 'test user',
                            'updated': 'Never/unknown'}}}
you also have the following variables/lists/etc. available for templates (via the Jinja2 templating syntax[0]):
- "keyservers", a list of the configured keyservers.
[0] http://jinja.pocoo.org/docs/2.9/templates/
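
the lookups described above can be exercised in plain Python against a pared-down key dict (the values here are dummies copied from the example above, not live key data):

```python
# Pared-down stand-in for a single "key" dict as passed to the templates.
key = {
    'pkey': {'name': 'Brent Timothy Saner',
             'email': 'brent.saner@gmail.com'},
    'uids': {'bts@square-r00t.net': {'name': 'Brent S.',
                                     'comment': 'http://www.square-r00t.net',
                                     'updated': 'Never/unknown'}},
}

# Equivalent of {{ key['pkey']['name'] }} in a Jinja2 template:
print(key['pkey']['name'])                           # Brent Timothy Saner
# A uid's name, keyed by that uid's email address:
print(key['uids']['bts@square-r00t.net']['name'])    # Brent S.
```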

gpg/kant/docs/kant.1 Normal file

@@ -0,0 +1,257 @@
'\" t
.\" Title: kant
.\" Author: Brent Saner
.\" Generator: Asciidoctor 1.5.6.1
.\" Date: 2017-09-21
.\" Manual: KANT - Keysigning and Notification Tool
.\" Source: KANT
.\" Language: English
.\"
.TH "KANT" "1" "2017-09-21" "KANT" "KANT \- Keysigning and Notification Tool"
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.ss \n[.ss] 0
.nh
.ad l
.de URL
\\$2 \(laURL: \\$1 \(ra\\$3
..
.if \n[.g] .mso www.tmac
.LINKSTYLE blue R < >
.SH "NAME"
kant \- Sign GnuPG/OpenPGP/PGP keys and notify the key owner(s)
.SH "SYNOPSIS"
.sp
\fBkant\fP [\fIOPTION\fP] \-k/\-\-key \fI<KEY_IDS|BATCHFILE>\fP
.SH "OPTIONS"
.sp
Keysigning (and keysigning parties) can be a lot of fun, and can offer someone with new keys a way into the WoT (Web\-of\-Trust).
Unfortunately, they can be intimidating to those new to the experience.
This tool offers a simple and easy\-to\-use interface to sign public keys (normal, local\-only, and/or non\-exportable),
set owner trust, specify level of checking done, and push the signatures to a keyserver. It even supports batch operation via a CSV file.
On successful completion, information about the keys that were signed and the key used to sign them is saved to ~/.kant/cache/YYYY.MM.DD_HH.MM.SS.
.sp
\fB\-h\fP, \fB\-\-help\fP
.RS 4
Display brief help/usage and exit.
.RE
.sp
\fB\-k\fP \fIKEY_IDS|BATCHFILE\fP, \fB\-\-key\fP \fIKEY_IDS|BATCHFILE\fP
.RS 4
A single key ID or a comma\-separated list of key IDs (see \fBKEY ID FORMAT\fP) to sign, trust, and notify. Can also be an email address.
If \fB\-b\fP/\fB\-\-batch\fP is specified, this should instead be a path to the batch file (see \fBBATCHFILE/Format\fP).
.RE
.sp
\fB\-K\fP \fIKEY_ID\fP, \fB\-\-sigkey\fP \fIKEY_ID\fP
.RS 4
The key to use when signing other keys (see \fBKEY ID FORMAT\fP). The default key is automatically determined at runtime
(it will be displayed in \fB\-h\fP/\fB\-\-help\fP output).
.RE
.sp
\fB\-t\fP \fITRUSTLEVEL\fP, \fB\-\-trust\fP \fITRUSTLEVEL\fP
.RS 4
The trust level to automatically apply to all keys (if not specified, KANT will prompt for each key).
See \fBBATCHFILE/TRUSTLEVEL\fP for trust level notations.
.RE
.sp
\fB\-c\fP \fICHECKLEVEL\fP, \fB\-\-check\fP \fICHECKLEVEL\fP
.RS 4
The level of checking that was done to confirm the validity of ownership for all keys being signed. If not specified,
the default is for KANT to prompt for each key we sign. See \fBBATCHFILE/CHECKLEVEL\fP for check level notations.
.RE
.sp
\fB\-l\fP \fILOCAL\fP, \fB\-\-local\fP \fILOCAL\fP
.RS 4
If specified, make the signature(s) local\-only (i.e. non\-exportable, don\(cqt push to a keyserver).
See \fBBATCHFILE/LOCAL\fP for more information on local signatures.
.RE
.sp
\fB\-n\fP, \fB\-\-no\-notify\fP
.RS 4
This requires some explanation. If you have MSMTP[1] installed and configured for the currently active user,
then we will send out emails to recipients letting them know we have signed their key. However, if MSMTP is installed and configured
but this flag is given, then we will NOT attempt to send emails. See \fBMAIL\fP for more information.
.RE
.sp
\fB\-s\fP \fIKEYSERVER(S)\fP, \fB\-\-keyservers\fP \fIKEYSERVER(S)\fP
.RS 4
The comma\-separated keyserver(s) to push to. The default keyserver list is automatically generated at runtime.
.RE
.sp
\fB\-m\fP \fIPROFILE\fP, \fB\-\-msmtp\-profile\fP \fIPROFILE\fP
.RS 4
If specified, use the msmtp profile named \fIPROFILE\fP. If this is not specified, KANT first looks for an msmtp profile named KANT (case\-sensitive). If it doesn\(cqt find one, it will use the profile specified as the default profile in your msmtp configuration. See \fBMAIL\fP for more information.
.RE
.sp
\fB\-b\fP, \fB\-\-batch\fP
.RS 4
If specified, operate in batch mode. See \fBBATCHFILE\fP for more information.
.RE
.sp
\fB\-D\fP \fIGPGDIR\fP, \fB\-\-gpgdir\fP \fIGPGDIR\fP
.RS 4
The GnuPG configuration directory to use (containing your keys, etc.). The default is automatically generated at runtime,
but will probably be \fB/home/<yourusername>/.gnupg\fP or similar.
.RE
.sp
\fB\-T\fP, \fB\-\-testkeyservers\fP
.RS 4
If specified, initiate a basic test connection with each set keyserver before anything else. Disabled by default.
.RE
.SH "KEY ID FORMAT"
.sp
Key IDs can be specified in one of two ways. The first (and preferred) way is to use the full 160\-bit (40\-character, hexadecimal) key ID.
A little\-known fact is that the fingerprint of a key:
.sp
\fBDEAD BEEF DEAD BEEF DEAD BEEF DEAD BEEF DEAD BEEF\fP
.sp
is actually the full key ID of the primary key; i.e.:
.sp
\fBDEADBEEFDEADBEEFDEADBEEFDEADBEEFDEADBEEF\fP
.sp
The second way to specify a key, as far as KANT is concerned, is to use an email address.
Do note that if more than one key matches the given email address (and there usually is more than one), you will be prompted to select
the specific correct key ID anyway, so it\(cqs usually a better idea to have the owner present their full key ID/fingerprint right from the get\-go.
.SH "BATCHFILE"
.SS "Format"
.sp
The batch file is a CSV\-formatted (comma\-delimited) file containing keys to sign and other information about them. It keeps the following format:
.sp
\fBKEY_ID,TRUSTLEVEL,LOCAL,CHECKLEVEL,NOTIFY\fP
.sp
For more information on each column, reference the appropriate sub\-section below.
.SS "KEY_ID"
.sp
See \fBKEY ID FORMAT\fP.
.SS "TRUSTLEVEL"
.sp
The \fITRUSTLEVEL\fP is specified by the following levels (you can use either the numeric or string representation):
.sp
.if n \{\
.RS 4
.\}
.nf
\fB\-1 = Never
0 = Unknown
1 = Untrusted
2 = Marginal
3 = Full
4 = Ultimate\fP
.fi
.if n \{\
.RE
.\}
.sp
It is how much trust to assign to a key, and the signatures that key makes on other keys.[2]
.SS "LOCAL"
.sp
Whether or not to push to a keyserver. It can be either the numeric or string representation of the following:
.sp
.if n \{\
.RS 4
.\}
.nf
\fB0 = False
1 = True\fP
.fi
.if n \{\
.RE
.\}
.sp
If \fB1/True\fP, KANT will sign the key with a local signature (and the signature will not be pushed to a keyserver or be exportable).[3]
.SS "CHECKLEVEL"
.sp
The amount of checking that has been done to confirm that the owner of the key is who they say they are and that the key matches their provided information.
It can be either the numeric or string representation of the following:
.sp
.if n \{\
.RS 4
.\}
.nf
\fB0 = Unknown
1 = None
2 = Casual
3 = Careful\fP
.fi
.if n \{\
.RE
.\}
.sp
It is up to you to determine the classification of the amount of checking you have done, but the following is recommended (it is the policy
the author follows):
.sp
.if n \{\
.RS 4
.\}
.nf
\fBUnknown:\fP The key is unknown and has not been reviewed
\fBNone:\fP The key has been signed, but no confirmation of the
ownership of the key has been performed (typically
a local signature)
\fBCasual:\fP The key has been presented and the owner is either
known to the signer or they have provided some form
of non\-government\-issued identification or other
proof (website, Keybase.io, etc.)
\fBCareful:\fP The same as \fBCasual\fP requirements but they have
provided a government\-issued ID and all information
matches
.fi
.if n \{\
.RE
.\}
.sp
It\(cqs important to check each key you sign carefully. Failure to do so may hurt others\(aq trust in your key.[4]
.SH "MAIL"
.sp
The mailing feature of KANT is very handy; it will let you send notifications to the owners of the keys you sign. This is encouraged because: 1.) it\(cqs courteous to let them know where they can fetch the signature you just made on their key, 2.) it\(cqs courteous to let them know if you did/did not push to a keyserver (some people don\(cqt want their keys pushed, and it\(cqs a good idea to respect that wish), and 3.) the mailer also attaches the pubkey for the key you used to sign with, in case your key isn\(cqt on a keyserver, etc.
.sp
However, since many ISPs block outgoing mail, one would typically use something like msmtp (http://msmtp.sourceforge.net/) to do this. Note that you don\(cqt even need msmtp to be installed; you just need to have msmtp configuration files set up via either /etc/msmtprc or ~/.msmtprc. KANT will parse these configuration files and use a purely pythonic implementation for sending the emails (see \fBSENDING\fP).
.sp
It supports templated mail messages as well (see \fBTEMPLATES\fP). It sends a MIME multipart email, in both plaintext and HTML formatting, for mail clients that may only support one or the other. It will also sign the email message using your signing key (see \fB\-K\fP, \fB\-\-sigkey\fP) and attach a binary (.gpg) and ASCII\-armored (.asc) export of your pubkey.
.SS "SENDING"
.sp
KANT first looks for ~/.msmtprc and, if not found, will look for /etc/msmtprc. If neither are found, mail notifications will not be sent and it will be up to you to contact the key owner(s) and let them know you have signed their key(s). If it does find either, it will use the first configuration file it finds and first look for a profile called "KANT" (without quotation marks). If this is not found, it will use whatever profile is specified as the default profile (e.g. \fBaccount default: someprofilename\fP in the msmtprc).
.SS "TEMPLATES"
.sp
KANT, on first run (even with a \fB\-h\fP/\fB\-\-help\fP execution), will create the default email templates (which can be found as ~/.kant/email.html.j2 and ~/.kant/email.plain.j2). These support templating via Jinja2 (http://jinja.pocoo.org/docs/2.9/templates/), and the following variables/dictionaries/lists are exported for your use:
.sp
.if n \{\
.RS 4
.\}
.nf
* \fBkey\fP \- a dictionary of information about the recipient\(aqs key (see docs/REF.keys.struct.txt)
* \fBmykey\fP \- a dictionary of information about your key (see docs/REF.keys.struct.txt)
* \fBkeyservers\fP \- a list of keyservers that the key has been pushed to (if an exportable/non\-local signature was made)
.fi
.if n \{\
.RE
.\}
.sp
And of course you can set your own variables inside the template as well (http://jinja.pocoo.org/docs/2.9/templates/#assignments).
.SH "SEE ALSO"
.sp
gpg(1), gpgconf(1), msmtp(1)
.SH "RESOURCES"
.sp
\fBAuthor\(cqs web site:\fP https://square\-r00t.net/
.sp
\fBAuthor\(cqs GPG information:\fP https://square\-r00t.net/gpg\-info
.SH "COPYING"
.sp
Copyright (C) 2017 Brent Saner.
.sp
Free use of this software is granted under the terms of the GPLv3 License.
.SH "NOTES"
1. http://msmtp.sourceforge.net/
2. For more information on trust levels and the Web of Trust, see: https://www.gnupg.org/gph/en/manual/x334.html and https://www.gnupg.org/gph/en/manual/x547.html
3. For more information on pushing to keyservers and local signatures, see: https://www.gnupg.org/gph/en/manual/r899.html#LSIGN and https://lists.gnupg.org/pipermail/gnupg-users/2007-January/030242.html
4. GnuPG documentation refers to this as "validity"; see https://www.gnupg.org/gph/en/manual/x334.html
.SH "AUTHOR(S)"
.sp
\fBBrent Saner\fP
.RS 4
Author(s).
.RE

gpg/kant/docs/kant.1.adoc Normal file

@@ -0,0 +1,195 @@
= kant(1)
Brent Saner
v1.0.0
:doctype: manpage
:manmanual: KANT - Keysigning and Notification Tool
:mansource: KANT
:man-linkstyle: pass:[blue R < >]
== NAME
KANT - Sign GnuPG/OpenPGP/PGP keys and notify the key owner(s)
== SYNOPSIS
*kant* [_OPTION_] -k/--key _<KEY_IDS|BATCHFILE>_
== OPTIONS
Keysigning (and keysigning parties) can be a lot of fun, and can offer someone with new keys a way into the WoT (Web-of-Trust).
Unfortunately, they can be intimidating to those new to the experience.
This tool offers a simple and easy-to-use interface to sign public keys (normal, local-only, and/or non-exportable),
set owner trust, specify level of checking done, and push the signatures to a keyserver. It even supports batch operation via a CSV file.
On successful completion, information about the keys that were signed and the key used to sign them is saved to ~/.kant/cache/YYYY.MM.DD_HH.MM.SS.
*-h*, *--help*::
Display brief help/usage and exit.
*-k* _KEY_IDS|BATCHFILE_, *--key* _KEY_IDS|BATCHFILE_::
A single key ID or a comma-separated list of key IDs (see *KEY ID FORMAT*) to sign, trust, and notify. Can also be an email address.
If *-b*/*--batch* is specified, this should instead be a path to the batch file (see *BATCHFILE/Format*).
*-K* _KEY_ID_, *--sigkey* _KEY_ID_::
The key to use when signing other keys (see *KEY ID FORMAT*). The default key is automatically determined at runtime
(it will be displayed in *-h*/*--help* output).
*-t* _TRUSTLEVEL_, *--trust* _TRUSTLEVEL_::
The trust level to automatically apply to all keys (if not specified, KANT will prompt for each key).
See *BATCHFILE/TRUSTLEVEL* for trust level notations.
*-c* _CHECKLEVEL_, *--check* _CHECKLEVEL_::
The level of checking that was done to confirm the validity of ownership for all keys being signed. If not specified,
the default is for KANT to prompt for each key we sign. See *BATCHFILE/CHECKLEVEL* for check level notations.
*-l* _LOCAL_, *--local* _LOCAL_::
If specified, make the signature(s) local-only (i.e. non-exportable, don't push to a keyserver).
See *BATCHFILE/LOCAL* for more information on local signatures.
*-n*, *--no-notify*::
This requires some explanation. If you have MSMTPfootnote:[\http://msmtp.sourceforge.net/] installed and configured for the currently active user,
then we will send out emails to recipients letting them know we have signed their key. However, if MSMTP is installed and configured
but this flag is given, then we will NOT attempt to send emails. See *MAIL* for more information.
*-s* _KEYSERVER(S)_, *--keyservers* _KEYSERVER(S)_::
The comma-separated keyserver(s) to push to. The default keyserver list is automatically generated at runtime.
*-m* _PROFILE_, *--msmtp-profile* _PROFILE_::
If specified, use the msmtp profile named _PROFILE_. If this is not specified, KANT first looks for an msmtp profile named KANT (case-sensitive). If it doesn't find one, it will use the profile specified as the default profile in your msmtp configuration. See *MAIL* for more information.
*-b*, *--batch*::
If specified, operate in batch mode. See *BATCHFILE* for more information.
*-D* _GPGDIR_, *--gpgdir* _GPGDIR_::
The GnuPG configuration directory to use (containing your keys, etc.). The default is automatically generated at runtime,
but will probably be */home/<yourusername>/.gnupg* or similar.
*-T*, *--testkeyservers*::
If specified, initiate a basic test connection with each set keyserver before anything else. Disabled by default.
== KEY ID FORMAT
Key IDs can be specified in one of two ways. The first (and preferred) way is to use the full 160-bit (40-character, hexadecimal) key ID.
A little-known fact is that the fingerprint of a key:
*DEAD BEEF DEAD BEEF DEAD BEEF DEAD BEEF DEAD BEEF*
is actually the full key ID of the primary key; i.e.:
*DEADBEEFDEADBEEFDEADBEEFDEADBEEFDEADBEEF*
The second way to specify a key, as far as KANT is concerned, is to use an email address.
Do note that if more than one key matches the given email address (and there usually is more than one), you will be prompted to select
the specific correct key ID anyway, so it's usually a better idea to have the owner present their full key ID/fingerprint right from the get-go.
== BATCHFILE
=== Format
The batch file is a CSV-formatted (comma-delimited) file containing keys to sign and other information about them. It keeps the following format:
*KEY_ID,TRUSTLEVEL,LOCAL,CHECKLEVEL,NOTIFY*
For more information on each column, reference the appropriate sub-section below.
=== KEY_ID
See *KEY ID FORMAT*.
=== TRUSTLEVEL
The _TRUSTLEVEL_ is specified by the following levels (you can use either the numeric or string representation):
[subs=+quotes]
....
*-1 = Never
0 = Unknown
1 = Untrusted
2 = Marginal
3 = Full
4 = Ultimate*
....
It is how much trust to assign to a key, and the signatures that key makes on other keys.footnote:[For more information
on trust levels and the Web of Trust, see: \https://www.gnupg.org/gph/en/manual/x334.html and \https://www.gnupg.org/gph/en/manual/x547.html]
=== LOCAL
Whether or not to push to a keyserver. It can be either the numeric or string representation of the following:
[subs=+quotes]
....
*0 = False
1 = True*
....
If *1/True*, KANT will sign the key with a local signature (and the signature will not be pushed to a keyserver or be exportable).footnote:[For
more information on pushing to keyservers and local signatures, see: \https://www.gnupg.org/gph/en/manual/r899.html#LSIGN and
\https://lists.gnupg.org/pipermail/gnupg-users/2007-January/030242.html]
=== CHECKLEVEL
The amount of checking that has been done to confirm that the owner of the key is who they say they are and that the key matches their provided information.
It can be either the numeric or string representation of the following:
[subs=+quotes]
....
*0 = Unknown
1 = None
2 = Casual
3 = Careful*
....
It is up to you to determine the classification of the amount of checking you have done, but the following is recommended (it is the policy
the author follows):
[subs=+quotes]
....
*Unknown:* The key is unknown and has not been reviewed
*None:* The key has been signed, but no confirmation of the
ownership of the key has been performed (typically
a local signature)
*Casual:* The key has been presented and the owner is either
known to the signer or they have provided some form
of non-government-issued identification or other
proof (website, Keybase.io, etc.)
*Careful:* The same as *Casual* requirements but they have
provided a government-issued ID and all information
matches
....
It's important to check each key you sign carefully. Failure to do so may hurt others' trust in your key.footnote:[GnuPG documentation refers
to this as "validity"; see \https://www.gnupg.org/gph/en/manual/x334.html]
== MAIL
The mailing feature of KANT is very handy; it will let you send notifications to the owners of the keys you sign. This is encouraged because: 1.) it's courteous to let them know where they can fetch the signature you just made on their key, 2.) it's courteous to let them know if you did/did not push to a keyserver (some people don't want their keys pushed, and it's a good idea to respect that wish), and 3.) the mailer also attaches the pubkey for the key you used to sign with, in case your key isn't on a keyserver, etc.
However, since many ISPs block outgoing mail, one would typically use something like msmtp (\http://msmtp.sourceforge.net/) to do this. Note that you don't even need msmtp to be installed; you just need to have msmtp configuration files set up via either /etc/msmtprc or ~/.msmtprc. KANT will parse these configuration files and use a purely pythonic implementation for sending the emails (see *SENDING*).
It supports templated mail messages as well (see *TEMPLATES*). It sends a MIME multipart email, in both plaintext and HTML formatting, for mail clients that may only support one or the other. It will also sign the email message using your signing key (see *-K*, *--sigkey*) and attach a binary (.gpg) and ASCII-armored (.asc) export of your pubkey.
=== SENDING
KANT first looks for ~/.msmtprc and, if not found, will look for /etc/msmtprc. If neither are found, mail notifications will not be sent and it will be up to you to contact the key owner(s) and let them know you have signed their key(s). If it does find either, it will use the first configuration file it finds and first look for a profile called "KANT" (without quotation marks). If this is not found, it will use whatever profile is specified as the default profile (e.g. *account default: someprofilename* in the msmtprc).
=== TEMPLATES
KANT, on first run (even with a *-h*/*--help* execution), will create the default email templates (which can be found as ~/.kant/email.html.j2 and ~/.kant/email.plain.j2). These support templating via Jinja2 (\http://jinja.pocoo.org/docs/2.9/templates/), and the following variables/dictionaries/lists are exported for your use:
[subs=+quotes]
....
* *key* - a dictionary of information about the recipient's key (see docs/REF.keys.struct.txt)
* *mykey* - a dictionary of information about your key (see docs/REF.keys.struct.txt)
* *keyservers* - a list of keyservers that the key has been pushed to (if an exportable/non-local signature was made)
....
And of course you can set your own variables inside the template as well (\http://jinja.pocoo.org/docs/2.9/templates/#assignments).
== SEE ALSO
gpg(1), gpgconf(1), msmtp(1)
== RESOURCES
*Author's web site:* \https://square-r00t.net/
*Author's GPG information:* \https://square-r00t.net/gpg-info
== COPYING
Copyright \(C) 2017 {author}.
Free use of this software is granted under the terms of the GPLv3 License.

gpg/kant/kant.py Executable file

@@ -0,0 +1,961 @@
#!/usr/bin/env python3
import argparse
import base64
import csv
import datetime
import json
import lzma
import operator
import os
import re
import shutil
import smtplib
import subprocess
from email.message import Message
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from functools import reduce
from io import BytesIO
from socket import *
import urllib.parse
import jinja2 # non-stdlib; Arch package is python-jinja2
import gpg # non-stdlib; Arch package is "python-gpgme" - see:
import gpg.constants # https://git.archlinux.org/svntogit/packages.git/tree/trunk/PKGBUILD?h=packages/gpgme and
import gpg.errors # https://gnupg.org/ftp/gcrypt/gpgme/ (incl. python bindings in build)
import pprint # development debug
class SigSession(object): # see docs/REFS.funcs.struct.txt
    def __init__(self, args):
        # These are the "stock" templates for emails. It's a PITA, but to save some space since we store them
        # inline in here, they're XZ'd and base64'd.
        self.email_tpl = {}
        self.email_tpl['plain'] = ('/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4ATxAnZdACQZSZhvFgKNdKNXbSf05z0ZPvTvmdQ0mJQg' +
'atgzhPVeLKxz22bhxedC813X5I8Gn2g9q9Do2jPPgXOzysImWXoraY4mhz0BAo2Zx1u6AiQQLdN9' +
'/jwrDrUEtb8M/QzmRd+8JrYN8s8vhViJZARMNHYnPeQK5GYEoGZEQ8l2ULmpTjAn9edSnrMmNSb2' +
'EC86CuyhaWDPsQeIamWW1t+MWmgsggE3xKYADKXHMQyXvhv/TAn987dEbzmrkpg8PCjxWt1wKRAr' +
'siDpCGvXLiBwnDtN1D7ocwbZVKty2GELbYt0f0CT7n5Pyu9n0P7QMnErM38kLR1nReopQp41+CsG' +
'orb8EpGGVdFa7sSWSANQtGTjx/1JHecpkTN8xX4kAjMWKYujWlZi/HzN7y/W5GDJM3ycVEUTsDRV' +
'6AusncRBFbo4/+K6cn5WCrhqd5jY2vDJR6KcO0O3usHUMzvOF0S0CZhUbA3Mil5DmPwFrdFrESby' +
'O1xH3uvgHpA5X91qkpEajokOOkY3FZm0oeANh9AMoMfDFTuqi41Nq9Myk4VKNEfzioChn9IfFxX0' +
'Luw6OyXtWJdpe3BvO7pWazLhvdIY4poh9brvJ25cG1kDMOlmC3NEb+POeqQ5aUr4XaRqFstk3grb' +
'8EjiGBzg18uHsbhjyReXnZprJjwzWUdwpV6j+2JFI13UEp16oTyTwyhHdpAmAg+lQJQxtcMpnUeX' +
'/xBkQGs+rqe0e/i8ZQ80XsLAoScxUL+45v9vANYV+lCWRnm/2GZOtCFs1Cb4t9hOeV0P1cwxw7fG' +
'b1A921JUkHbASFiv2EFsgf0lkvnMgz2slNXKcLuwB6X0CAAAALypR4JWDUR6AAGSBfIJAABGCaV4' +
'scRn+wIAAAAABFla')
        self.email_tpl['html'] = ('/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4AXfAtVdAB4aCobvStaHNqdVBn1LjZcL+G+98rmZ7eGZ' +
'Wqx+3LjENIv37L1aGPICZDRBrsBugzVSasRBHkdHMrWW7SsPfRzw6btQfASTHLr48auPJlJgXTnb' +
'vgDd2ELrs6p5m5Wip3qD4NeNuwj4QMcxszWF1vLa1oZiNAmCSunIF8bNTw+lmI50h2M6bXfx80Og' +
'T2HGcuTp07Mp+XLyZQJ5lbQyu5BRhwyKpu14sq9qrVkxmYt8AAxgUyhvRkooHSuug4O8ArMFXqqX' +
'usX9P3zERAsi/TqWIFaG0xoBdrWf/zpGtsVQ+5TtCGOfUHGfIBaNy9Q+FOvfLJFYEzxac992Fkd0' +
'as4RsN31FaySbBmZ8eB3zGbpjS7QH7CA70QYkRcYXcjWE9xHD3Wzxa3DFE0ihKAyVwakxvjgYa2B' +
'7G6uYO606c+a6vHfPhgvY7Eph+I7ip0btfBbcKZ+XBSd0DtCd7ZvI7vlGJdW2/OBXHfNmCndMP1W' +
'Ujd0ASQAQBbJr4rIxYygckSPWti4nBe9JpKTVWqdWRXWjeYGci1dKIjKs7JfS1PGJR50iuyANBun' +
'yQ9oIRafb3nreBqtpXZ4LKM5hC697BaeOIcocXyMALf0a06AUmIaRQfO3AZrPxyOPH3EYOKIMrjM' +
'EosihPVVyYuKUVOg3wWq5aeIC9zM7Htw4FNh2NB5QDYY6HxIqIVUfHCGz+4GaPBVaf0eie8kHaQR' +
'xj+DkAiWQDmN/JRZeTlsy4d3P8XcArOLmxzql/iDzFqtzpD5d91o8I3HU9BJlDJFPs8bC2eCjYs8' +
'o3WJET/UIch6YXQOemXa72aWdBVSytfKBMtL7uekd4ARGbFZYyW2x1agkAZGiWt7gwY8RVEoKyZH' +
'bbvIvOhQ/j1BDuJFJO3BEgekeLhBPpG7cEewseXjGjoWZWtGr+qFTI//w+oDtdqGtJaGtELL3WYU' +
'/tMiQU9AfXkTsODAjvduAAAAAIixVQ23iBDFAAHxBeALAADIP1EPscRn+wIAAAAABFla')
        # Set up a dict of some constants and mappings
        self.maps = {}
        # Keylist modes
        self.maps['keylist'] = {'local': gpg.constants.KEYLIST_MODE_LOCAL,  # local keyring
                                'remote': gpg.constants.KEYLIST_MODE_EXTERN,  # keyserver
                                # both - this is SUPPOSED to work, but doesn't seem to... it's unreliable at best?
                                'both': gpg.constants.KEYLIST_MODE_LOCAL|gpg.constants.KEYLIST_MODE_EXTERN}
        # Validity/trust levels
        self.maps['trust'] = {-1: ['never', gpg.constants.VALIDITY_NEVER],  # this is... probably? not ideal, but. Never trust the key.
                              0: ['unknown', gpg.constants.VALIDITY_UNKNOWN],  # The key's trust is unknown - typically because it hasn't been set yet.
                              1: ['untrusted', gpg.constants.VALIDITY_UNDEFINED],  # The key is explicitly set to a blank trust
                              2: ['marginal', gpg.constants.VALIDITY_MARGINAL],  # Trust a little.
                              3: ['full', gpg.constants.VALIDITY_FULL],  # This is going to be the default for verified key ownership.
                              4: ['ultimate', gpg.constants.VALIDITY_ULTIMATE]}  # This should probably only be reserved for keys you directly control.
        # Validity/trust reverse mappings - see self.maps['trust'] for the meanings of these
        # Used for fetching display/feedback
        self.maps['rtrust'] = {gpg.constants.VALIDITY_NEVER: 'Never',
                               gpg.constants.VALIDITY_UNKNOWN: 'Unknown',
                               gpg.constants.VALIDITY_UNDEFINED: 'Untrusted',
                               gpg.constants.VALIDITY_MARGINAL: 'Marginal',
                               gpg.constants.VALIDITY_FULL: 'Full',
                               gpg.constants.VALIDITY_ULTIMATE: 'Ultimate'}
        # Local signature and other binary (True/False) mappings
        self.maps['binmap'] = {0: ['no', False],
                               1: ['yes', True]}
        # Level of care taken when checking key ownership/valid identity
        self.maps['check'] = {0: ['unknown', 0],
                              1: ['none', 1],
                              2: ['casual', 2],
                              3: ['careful', 3]}
        # Default protocol/port mappings for keyservers
        self.maps['proto'] = {'hkp': [11371, ['tcp', 'udp']],  # Standard HKP protocol
                              'hkps': [443, ['tcp']],  # Yes, same as https
                              'http': [80, ['tcp']],  # HTTP (plaintext)
                              'https': [443, ['tcp']],  # SSL/TLS
                              'ldap': [389, ['tcp', 'udp']],  # Includes TLS negotiation since it runs on the same port
                              'ldaps': [636, ['tcp', 'udp']]}  # SSL
        self.maps['hashalgos'] = {gpg.constants.MD_MD5: 'md5',
                                  gpg.constants.MD_SHA1: 'sha1',
                                  gpg.constants.MD_RMD160: 'ripemd160',
                                  gpg.constants.MD_MD2: 'md2',
                                  gpg.constants.MD_TIGER: 'tiger192',
                                  gpg.constants.MD_HAVAL: 'haval',
                                  gpg.constants.MD_SHA256: 'sha256',
                                  gpg.constants.MD_SHA384: 'sha384',
                                  gpg.constants.MD_SHA512: 'sha512',
                                  gpg.constants.MD_SHA224: 'sha224',
                                  gpg.constants.MD_MD4: 'md4',
                                  gpg.constants.MD_CRC32: 'crc32',
                                  gpg.constants.MD_CRC32_RFC1510: 'crc32rfc1510',
                                  gpg.constants.MD_CRC24_RFC2440: 'crc24rfc2440'}
        # Now that all the static data's set up, we can continue.
        self.args = self.verifyArgs(args)  # Make the args accessible to all functions in the class - see docs/REF.args.struct.txt
        # Get the GPGME context
        try:
            os.environ['GNUPGHOME'] = self.args['gpgdir']
            self.ctx = gpg.Context()
        except Exception:
            raise RuntimeError('Could not use {0} as a GnuPG home'.format(self.args['gpgdir']))
        self.cfgdir = os.path.join(os.environ['HOME'], '.kant')
        if not os.path.isdir(self.cfgdir):
            print('No KANT configuration directory found; creating one at {0}...'.format(self.cfgdir))
            os.makedirs(self.cfgdir, exist_ok = True)
        self.keys = {}  # See docs/REF.keys.struct.txt
        self.mykey = {}  # ""
        self.tpls = {}  # Email templates will go here
        self.getTpls()  # Build out self.tpls
        return(None)
def getEditPrompt(self, key, cmd): # "key" should be the FPR of the primary key
# This mapping defines the default "answers" to the gpgme key editing.
# https://www.apt-browse.org/browse/debian/wheezy/main/amd64/python-pyme/1:0.8.1-2/file/usr/share/doc/python-pyme/examples/t-edit.py
# https://searchcode.com/codesearch/view/20535820/
# https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=doc/DETAILS
# You can get the prompt identifiers and status indicators without grokking the source
# by first interactively performing the type of edit(s) you want to do with this command:
# gpg --status-fd 2 --command-fd 2 --edit-key <KEY_ID>
if key['trust'] >= gpg.constants.VALIDITY_FULL: # For tsigning, it only prompts for two trust levels:
_loctrust = 2 # "I trust fully"
else:
_loctrust = 1 # "I trust marginally"
# TODO: make the trust depth configurable. 1 is probably the safest, but we try to guess here.
# "Full" trust is a pretty big thing.
if key['trust'] >= gpg.constants.VALIDITY_FULL:
_locdepth = 2 # Allow +1 level of trust extension
else:
_locdepth = 1 # Only trust this key
_map = {'cmds': ['trust', 'fpr', 'sign', 'tsign', 'lsign', 'nrsign', 'grip', 'list', # Valid commands
'uid', 'key', 'check', 'deluid', 'delkey', 'delsig', 'pref', 'showpref',
'revsig', 'enable', 'disable', 'showphoto', 'clean', 'minimize', 'save',
'quit'],
'prompts': {'edit_ownertrust': {'value': str(key['trust']), # Pulled at time of call
'set_ultimate': {'okay': 'yes'}}, # If confirming ultimate trust, we auto-answer yes
'untrusted_key': {'override': 'yes'}, # We don't care if it's untrusted
'pklist': {'user_id': {'enter': key['pkey']['email']}}, # Prompt for a user ID - can we change this to key ID?
'sign_uid': {'class': str(key['check']), # The certification/"check" level
'okay': 'yes'}, # Are you sure that you want to sign this key with your key..."
'trustsig_prompt': {'trust_value': str(_loctrust), # This requires some processing; see above
'trust_depth': str(_locdepth), # The "depth" of the trust signature.
'trust_regexp': None}, # We can "Restrict" trust to certain domains, but this isn't really necessary.
'keyedit': {'prompt': cmd, # Initiate trust editing
'save': {'okay': 'yes'}}}} # Save if prompted
return(_map)
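A minimal standalone sketch of how a dotted GPGME prompt identifier such as `sign_uid.okay` resolves against a nested prompt map like the one built above, using the same `reduce(operator.getitem, ...)` trick as `KeyEditor.mapDict()`. The `prompts` dict here is a trimmed-down illustration, not the full map:

```python
from functools import reduce
import operator

# A trimmed-down copy of the 'prompts' sub-map returned above.
prompts = {'sign_uid': {'class': '0',
                        'okay': 'yes'},
           'untrusted_key': {'override': 'yes'}}

def resolve(prompt_id, prompt_map):
    # 'sign_uid.okay' -> ['sign_uid', 'okay'] -> prompt_map['sign_uid']['okay']
    return reduce(operator.getitem, prompt_id.split('.'), prompt_map)

print(resolve('sign_uid.okay', prompts))  # yes
```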
def getTpls(self):
for t in ('plain', 'html'):
_tpl_file = os.path.join(self.cfgdir, 'email.{0}.j2'.format(t))
if os.path.isfile(_tpl_file):
with open(_tpl_file, 'r') as f:
self.tpls[t] = f.read()
else:
self.tpls[t] = lzma.decompress(base64.b64decode(email_tpl[t]),
format = lzma.FORMAT_XZ,
memlimit = None,
filters = None).decode('utf-8')
with open(_tpl_file, 'w') as f:
f.write('{0}'.format(self.tpls[t]))
print('Created: {0}'.format(_tpl_file))
return(self.tpls)
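A round-trip sketch of the template storage scheme `getTpls()` relies on: the default templates ship embedded as base64-wrapped xz data and are inflated on first use. The template text here is an illustrative placeholder:

```python
import base64
import lzma

original = 'Hello {{ key.pkey.email }},\n'  # hypothetical template snippet
# Pack the way the embedded defaults are stored...
packed = base64.b64encode(lzma.compress(original.encode('utf-8')))
# ...and unpack the way getTpls() does.
unpacked = lzma.decompress(base64.b64decode(packed),
                           format = lzma.FORMAT_XZ).decode('utf-8')
print(unpacked == original)  # True
```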
def modifyDirmngr(self, op):
if not self.args['keyservers']:
return()
_pid = str(os.getpid())
_activecfg = os.path.join(self.args['gpgdir'], 'dirmngr.conf')
_activegpgconf = os.path.join(self.args['gpgdir'], 'gpg.conf')
_bakcfg = '{0}.{1}'.format(_activecfg, _pid)
_bakgpgconf = '{0}.{1}'.format(_activegpgconf, _pid)
## Modify files
if op in ('new', 'start', 'replace'):
# Replace the keyservers
if os.path.lexists(_activecfg):
shutil.copy2(_activecfg, _bakcfg)
with open(_bakcfg, 'r') as read, open(_activecfg, 'w') as write:
for line in read:
if not line.startswith('keyserver '):
write.write(line)
with open(_activecfg, 'a') as f:
for s in self.args['keyservers']:
_uri = '{0}://{1}:{2}'.format(s['proto'],
s['server'],
s['port'][0])
f.write('keyserver {0}\n'.format(_uri))
# Use stronger ciphers, etc. and prompt for check/certification levels
if os.path.lexists(_activegpgconf):
shutil.copy2(_activegpgconf, _bakgpgconf)
with open(_activegpgconf, 'w') as f:
f.write('cipher-algo AES256\ndigest-algo SHA512\ncert-digest-algo SHA512\ncompress-algo BZIP2\nask-cert-level\n')
## Restore files
if op in ('old', 'stop', 'restore'):
# Restore the keyservers
if os.path.lexists(_bakcfg):
with open(_bakcfg, 'r') as read, open(_activecfg, 'w') as write:
for line in read:
write.write(line)
os.remove(_bakcfg)
else:
os.remove(_activecfg)
# Restore GPG settings
if os.path.lexists(_bakgpgconf):
with open(_bakgpgconf, 'r') as read, open(_activegpgconf, 'w') as write:
for line in read:
write.write(line)
os.remove(_bakgpgconf)
else:
os.remove(_activegpgconf)
subprocess.run(['gpgconf', '--reload', 'dirmngr']) # I *really* wish we could do this via GPGME.
return()
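A hedged sketch of the keyserver rewrite `modifyDirmngr()` performs, operating on strings instead of real config files. The server dicts mirror the shape of `self.args['keyservers']` (`proto`, `server`, `port`) as used above; the hostnames are made up:

```python
servers = [{'proto': 'hkps', 'server': 'keys.example.net', 'port': (443, 'tcp')}]

old_conf = 'keyserver hkp://old.example.org:11371\nlog-file /tmp/dirmngr.log\n'

# Drop any existing keyserver lines, keep everything else...
kept = [l for l in old_conf.splitlines() if not l.startswith('keyserver ')]
# ...then append one keyserver line per configured server.
for s in servers:
    kept.append('keyserver {0}://{1}:{2}'.format(s['proto'], s['server'], s['port'][0]))
new_conf = '\n'.join(kept) + '\n'
print(new_conf)
```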
def getKeys(self):
_keyids = []
_keys = {}
# Do we have the key already? If not, fetch.
for r in list(self.args['rcpts'].keys()):
if self.args['rcpts'][r]['type'] == 'fpr':
_keyids.append(r)
self.ctx.set_keylist_mode(self.maps['keylist']['remote'])
try:
_k = self.ctx.get_key(r)
except gpg.errors.GPGMEError:
print('{0}: We could not find this key on the keyserver.'.format(r)) # Key not on server
del(self.args['rcpts'][r])
_keyids.remove(r)
continue
self.ctx.set_keylist_mode(self.maps['keylist']['local'])
_keys[r] = {'fpr': r,
'obj': _k,
'created': _k.subkeys[0].timestamp}
if 'T' in str(_keys[r]['created']):
_keys[r]['created'] = int(datetime.datetime.strptime(_keys[r]['created'],
'%Y%m%dT%H%M%S').timestamp())
if self.args['rcpts'][r]['type'] == 'email':
# We need to actually do a lookup on the email address.
_keytmp = []
for k in self.ctx.keylist(r, mode = self.maps['keylist']['remote']):
_keytmp.append(k)
for k in _keytmp:
_keys[k.fpr] = {'fpr': k.fpr,
'obj': k,
'created': k.subkeys[0].timestamp,
'uids': {}}
# Per the docs (<gpg>/docs/DETAILS, "*** Field 6 - Creation date"),
# they may change this to ISO 8601...
if 'T' in str(_keys[k.fpr]['created']):
_keys[k.fpr]['created'] = int(datetime.datetime.strptime(_keys[k.fpr]['created'],
'%Y%m%dT%H%M%S').timestamp())
for s in k.uids:
_keys[k.fpr]['uids'][s.email] = {'comment': s.comment,
'updated': s.last_update}
if len(_keytmp) > 1: # Print the keys and prompt for a selection.
print('\nWe found the following keys for {0}...\n\nKEY ID:'.format(r))
for s in _keytmp:
print('{0}\n{1:6}(Generated at {2}) UIDs:'.format(s.fpr,
'',
datetime.datetime.utcfromtimestamp(s.subkeys[0].timestamp)))
for u in s.uids:
if u.last_update == 0:
_updated = 'Never/Unknown'
else:
_updated = datetime.datetime.utcfromtimestamp(u.last_update)
print('{0:42}(Updated {3}) <{2}> {1}'.format('',
u.comment,
u.email,
_updated))
print()
while True:
try:
key = input('Please enter the (full) appropriate key: ')
except EOFError: # ctrl-d, as offered in the prompt below
raise SystemExit('\nExiting.')
if key not in _keys.keys():
print('Please enter a full key ID from the list above or hit ctrl-d to exit.')
else:
_keyids.append(key)
break
else:
if len(_keytmp) == 0:
print('Could not find {0}!'.format(r))
del(self.args['rcpts'][r])
continue
_keyids.append(k.fpr)
print('\nFound key {0} for {1} (Generated at {2}):'.format(_keys[k.fpr]['fpr'],
r,
datetime.datetime.utcfromtimestamp(_keys[k.fpr]['created'])))
for email in _keys[k.fpr]['uids']:
if _keys[k.fpr]['uids'][email]['updated'] == 0:
_updated = 'Never/Unknown'
else:
_updated = datetime.datetime.utcfromtimestamp(_keys[k.fpr]['uids'][email]['updated'])
print('\t(Updated {2}) {0} <{1}>'.format(_keys[k.fpr]['uids'][email]['comment'],
email,
_updated))
print()
## And now we can (FINALLY) fetch the key(s).
for g in _keyids:
try:
self.ctx.op_import_keys([_keys[g]['obj']])
except gpg.errors.GPGMEError:
print('Key {0} could not be found on the keyserver'.format(g)) # The key isn't on the keyserver
self.ctx.set_keylist_mode(self.maps['keylist']['local'])
for k in _keys:
if k not in _keyids:
continue
_key = _keys[k]['obj']
self.keys[k] = {'pkey': {'email': _key.uids[0].email,
'name': _key.uids[0].name,
'creation': datetime.datetime.utcfromtimestamp(_keys[k]['created']),
'key': _key},
'trust': self.args['trustlevel'], # Not set yet; we'll modify this later in buildKeys().
'local': self.args['local'], # Not set yet; we'll modify this later in buildKeys().
'notify': self.args['notify'], # Same...
'sign': True, # We don't need to prompt for this since we detect if we need to sign or not
'change': None, # ""
'status': None} # Same.
# And we add the subkeys in yet another loop.
self.keys[k]['subkeys'] = {}
self.keys[k]['uids'] = {}
for s in _key.subkeys:
self.keys[k]['subkeys'][s.fpr] = datetime.datetime.utcfromtimestamp(s.timestamp)
for u in _key.uids:
self.keys[k]['uids'][u.email] = {'name': u.name,
'comment': u.comment,
'updated': datetime.datetime.utcfromtimestamp(u.last_update)}
del(_keys)
return()
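The timestamp check repeated in `getKeys()` can be summed up in one helper; per GnuPG's doc/DETAILS, key creation may be reported either as epoch seconds or as an ISO 8601 basic string (e.g. `20200914T123456`), so the `'T'` sniff decides which. A standalone sketch:

```python
import datetime

def normalize_created(created):
    # ISO 8601 basic format contains a 'T'; epoch seconds never do.
    if 'T' in str(created):
        return int(datetime.datetime.strptime(str(created),
                                              '%Y%m%dT%H%M%S').timestamp())
    return int(created)
```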
def buildKeys(self):
self.getKeys()
# Before anything else, let's set up our own key info.
_key = self.ctx.get_key(self.args['sigkey'], secret = True)
self.mykey = {'pkey': {'email': _key.uids[0].email,
'name': _key.uids[0].name,
'creation': datetime.datetime.utcfromtimestamp(_key.subkeys[0].timestamp),
'key': _key},
'trust': 'ultimate', # No duh. This is our own key.
'local': False, # We keep our own key array separate, so we don't push it anyways.
'notify': False, # ""
'check': None, # ""
'change': False, # ""
'status': None, # ""
'sign': False} # ""
self.mykey['subkeys'] = {}
self.mykey['uids'] = {}
for s in _key.subkeys:
self.mykey['subkeys'][s.fpr] = datetime.datetime.utcfromtimestamp(s.timestamp)
for u in _key.uids:
self.mykey['uids'][u.email] = {'name': u.name,
'comment': u.comment,
'updated': datetime.datetime.utcfromtimestamp(u.last_update)}
# Now let's set up our trusts.
if self.args['batch']:
self.batchParse()
else:
for k in list(self.keys.keys()):
self.promptTrust(k)
self.promptCheck(k)
self.promptLocal(k)
self.promptNotify(k)
# In case we removed any keys, we have to run this outside of the loops
for k in list(self.keys.keys()):
for t in ('trust', 'local', 'check', 'notify'):
self.keysCleanup(k, t)
# TODO: populate self.keys[key]['change']; we use this for trust (but not sigs)
return()
def batchParse(self):
# First we grab the info from CSV
csvlines = csv.reader(self.csvraw, delimiter = ',', quotechar = '"')
for row in csvlines:
row[0] = row[0].replace('<', '').replace('>', '')
try:
if self.args['rcpts'][row[0]]['type'] == 'fpr':
k = row[0]
else: # It's an email.
k = None
for i in list(self.keys.keys()):
if row[0] in list(self.keys[i]['uids'].keys()):
k = i
break
if k is None:
continue # No key matched this email; skip the row instead of spinning forever.
self.keys[k]['trust'] = row[1].lower().strip()
self.keys[k]['local'] = row[2].lower().strip()
self.keys[k]['check'] = row[3].lower().strip()
self.keys[k]['notify'] = row[4].lower().strip()
except KeyError:
continue # It was deemed to be an invalid key earlier
return()
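A standalone sketch of the batch CSV format `batchParse()` consumes — recipient, trust, local, check, notify — showing how angle brackets around emails are stripped and fields normalized. The address is an illustrative placeholder:

```python
import csv
import io

# One batch row, shaped like the CSV kant reads: rcpt,trust,local,check,notify
raw = io.StringIO('<alice@example.com>,marginal,no,casual,yes\n')
rows = []
for row in csv.reader(raw, delimiter = ',', quotechar = '"'):
    row[0] = row[0].replace('<', '').replace('>', '')  # Strip <> around emails
    rows.append([f.lower().strip() for f in row])
print(rows[0])  # ['alice@example.com', 'marginal', 'no', 'casual', 'yes']
```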
def promptTrust(self, k):
if 'trust' not in self.keys[k].keys() or not self.keys[k]['trust']:
trust_in = input(('\nWhat trust level should we assign to {0}? (The default is '+
'Marginal.)\n\t\t\t\t ({1} <{2}>)' +
'\n\n\t\033[1m-1 = Never\n\t 0 = Unknown\n\t 1 = Untrusted\n\t 2 = Marginal\n\t 3 = Full' +
'\n\t 4 = Ultimate\033[0m\nTrust: ').format(k,
self.keys[k]['pkey']['name'],
self.keys[k]['pkey']['email']))
if trust_in == '':
trust_in = 'marginal' # Has to be a str, so we can "pretend" it was entered
self.keys[k]['trust'] = trust_in
return()
def promptCheck(self, k):
if 'check' not in self.keys[k].keys() or self.keys[k]['check'] == None:
check_in = input(('\nHow carefully have you verified {0}\'s identity and ownership of the key? ' +
'(Default is Unknown.)\n' +
'\n\t\033[1m0 = Unknown\n\t1 = None\n\t2 = Casual\n\t3 = Careful\033[0m\nCheck level: ').format(k))
if check_in == '':
check_in = 'unknown'
self.keys[k]['check'] = check_in
return()
def promptLocal(self, k):
if 'local' not in self.keys[k].keys() or self.keys[k]['local'] == None:
if self.args['keyservers']:
local_in = input(('\nShould we locally sign {0} '+
'(if yes, the signature will be non-exportable; if no, we will be able to push to a keyserver) ' +
'(Yes/\033[1mNO\033[0m)? ').format(k))
if local_in == '':
local_in = False
self.keys[k]['local'] = local_in
return()
def promptNotify(self, k):
if 'notify' not in self.keys[k].keys() or self.keys[k]['notify'] == None:
notify_in = input(('\nShould we notify {0} (via <{1}>) (\033[1mYES\033[0m/No)? ').format(k,
self.keys[k]['pkey']['email']))
if notify_in == '':
notify_in = True
self.keys[k]['notify'] = notify_in
return()
def keysCleanup(self, k, t): # At some point, this WHOLE thing would probably be cleaner with bitwise flags...
s = t
_errs = {'trust': 'trust level',
'local': 'local signature option',
'check': 'check level',
'notify': 'notify flag'}
if k not in self.keys.keys():
return() # It was deleted already.
if t in ('local', 'notify'): # these use a binary mapping
t = 'binmap'
# We can do some basic stuff right here.
if str(self.keys[k][s]).lower() in ('n', 'no', 'false'):
self.keys[k][s] = False
return()
elif str(self.keys[k][s]).lower() in ('y', 'yes', 'true'):
self.keys[k][s] = True
return()
# Make sure we have a known value. These will ALWAYS be str's, either from the CLI or CSV.
value_in = str(self.keys[k][s]).lower().strip()
for dictk, dictv in self.maps[t].items():
if value_in == dictv[0]:
self.keys[k][s] = int(dictk)
elif value_in == str(dictk):
self.keys[k][s] = int(dictk)
if not isinstance(self.keys[k][s], int): # It didn't get set
print('{0}: "{1}" is not a valid {2}; skipping. Run kant again to fix.'.format(k, self.keys[k][s], _errs[s]))
del(self.keys[k])
return()
# Determine if we need to change the trust.
if t == 'trust':
cur_trust = self.keys[k]['pkey']['key'].owner_trust
if cur_trust == self.keys[k]['trust']:
self.keys[k]['change'] = False
else:
self.keys[k]['change'] = True
return()
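A trimmed-down sketch of the normalization `keysCleanup()` performs: yes/no words become booleans, and named levels resolve to their numeric IDs via a map shaped like `self.maps['trust']` (the exact map values here are illustrative assumptions, not the real ones):

```python
# Assumed shape: {numeric_id: (canonical_name, ...)}
trust_map = {-1: ('never',), 0: ('unknown',), 1: ('untrusted',),
             2: ('marginal',), 3: ('full',), 4: ('ultimate',)}

def normalize(value, valmap):
    v = str(value).lower().strip()
    if v in ('n', 'no', 'false'):
        return False
    if v in ('y', 'yes', 'true'):
        return True
    # Match either the canonical name or the numeric ID as a string.
    for num, names in valmap.items():
        if v == names[0] or v == str(num):
            return int(num)
    return None  # Unknown value; the caller drops the key and warns.

print(normalize('Marginal', trust_map))  # 2
```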
def sigKeys(self): # The More Business-End(TM)
# NOTE: If the trust level is anything but 2 (the default), we should use op_interact() instead and do a tsign.
self.ctx.keylist_mode = gpg.constants.KEYLIST_MODE_SIGS
_mkey = self.mykey['pkey']['key']
self.ctx.signers = [_mkey]
for k in list(self.keys.keys()):
key = self.keys[k]['pkey']['key']
for uid in key.uids:
for s in uid.signatures:
try:
signerkey = self.ctx.get_key(s.keyid).subkeys[0].fpr
if signerkey == _mkey.subkeys[0].fpr:
self.keys[k]['sign'] = False # We already signed this key
except gpg.errors.GPGMEError:
pass # Usually this just means we don't have the signer's key in our keyring
# And again, we loop. ALLLLL that buildup for one line.
for k in list(self.keys.keys()):
# TODO: configure to allow for user-entered expiration?
if self.keys[k]['sign']:
self.ctx.key_sign(self.keys[k]['pkey']['key'], local = self.keys[k]['local'])
return()
class KeyEditor(object):
def __init__(self, optmap):
self.replied_once = False # This is used to handle the first prompt vs. the last
self.optmap = optmap
return(None)
def editKey(self, status, args, out):
_result = None
out.seek(0, 0)
def mapDict(m, d):
return(reduce(operator.getitem, m, d))
if args == 'keyedit.prompt' and self.replied_once:
_result = 'quit'
elif status == 'KEY_CONSIDERED':
_result = None
self.replied_once = False
elif status == 'GET_LINE':
self.replied_once = True
_ilist = args.split('.')
_result = mapDict(_ilist, self.optmap['prompts'])
if not _result:
_result = None
return(_result)
def trustKeys(self): # The Son of Business-End(TM)
# TODO: add check for change
for k in list(self.keys.keys()):
_key = self.keys[k]
if _key['change']:
_map = self.getEditPrompt(_key, 'trust')
out = gpg.Data()
self.ctx.interact(_key['pkey']['key'], self.KeyEditor(_map).editKey, sink = out, fnc_value = out)
out.seek(0, 0)
return()
def pushKeys(self): # The Last Business-End(TM)
for k in list(self.keys.keys()):
if not self.keys[k]['local'] and self.keys[k]['sign']:
self.ctx.op_export(k, gpg.constants.EXPORT_MODE_EXTERN, None)
return()
class Mailer(object): # I lied; The Return of the Business-End(TM)
def __init__(self):
_homeconf = os.path.join(os.environ['HOME'], '.msmtprc')
_sysconf = '/etc/msmtprc'
self.msmtp = {'conf': None}
if not os.path.isfile(_homeconf):
if not os.path.isfile(_sysconf):
self.msmtp['conf'] = False
else:
self.msmtp['conf'] = _sysconf
else:
self.msmtp['conf'] = _homeconf
if self.msmtp['conf']:
# Okay. So we have a config file, which we're assuming to be set up correctly.
# Now we need to parse the config.
self.msmtp['cfg'] = self.getCfg()
return(None)
def getCfg(self):
cfg = {'default': None, 'defaults': {}}
_defaults = False
_acct = None
with open(self.msmtp['conf'], 'r') as f:
_cfg_raw = f.read()
for l in _cfg_raw.splitlines():
if re.match(r'^\s?(#.*|)$', l):
continue # Skip over blank and commented lines
_line = [i.strip() for i in re.split(r'\s+', l.strip(), maxsplit = 1)]
if _line[0] == 'account':
if re.match(r'^default\s?:\s?', _line[1]): # it's the default account specifier
cfg['default'] = _line[1].split(':', maxsplit = 1)[1].strip()
else:
if _line[1] not in cfg.keys(): # it's a new account definition
cfg[_line[1]] = {}
_acct = _line[1]
_defaults = False
elif _line[0] == 'defaults': # it's the defaults
_acct = 'defaults'
else: # it's a config directive
cfg[_acct][_line[0]] = _line[1]
for a in list(cfg):
if a != 'default':
for k, v in cfg['defaults'].items():
if k not in cfg[a].keys():
cfg[a][k] = v
del(cfg['defaults'])
return(cfg)
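A self-contained sketch of the msmtprc parsing `getCfg()` performs, fed from a string rather than a file; the `defaults`-into-accounts merge works the same way. The account name and host are made up:

```python
import re

raw = '''\
defaults
tls on
account work
host mail.example.com
port 587
account default : work
'''
cfg = {'default': None, 'defaults': {}}
acct = None
for l in raw.splitlines():
    if re.match(r'^\s?(#.*|)$', l):
        continue  # Skip blank and commented lines
    line = [i.strip() for i in re.split(r'\s+', l.strip(), maxsplit = 1)]
    if line[0] == 'account':
        if re.match(r'^default\s?:\s?', line[1]):  # "account default : NAME"
            cfg['default'] = line[1].split(':', maxsplit = 1)[1].strip()
        else:
            cfg.setdefault(line[1], {})
            acct = line[1]
    elif line[0] == 'defaults':
        acct = 'defaults'
    else:
        cfg[acct][line[0]] = line[1]
# Fold the defaults into every real account, then drop them.
for a in list(cfg):
    if a not in ('default', 'defaults'):
        for k, v in cfg['defaults'].items():
            cfg[a].setdefault(k, v)
del cfg['defaults']
print(cfg)
```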
def sendEmail(self, msg, key, profile): # This needs way more parsing to support things like plain ol' port 25 plaintext (ugh), etc.
if 'tls-starttls' in self.msmtp['cfg'][profile].keys() and self.msmtp['cfg'][profile]['tls-starttls'] == 'on':
smtpserver = smtplib.SMTP(self.msmtp['cfg'][profile]['host'], int(self.msmtp['cfg'][profile]['port']))
smtpserver.ehlo()
smtpserver.starttls()
smtpserver.ehlo() # we need to EHLO twice with a STARTTLS because email is weird.
elif self.msmtp['cfg'][profile]['tls'] == 'on':
smtpserver = smtplib.SMTP_SSL(self.msmtp['cfg'][profile]['host'], int(self.msmtp['cfg'][profile]['port']))
smtpserver.ehlo()
smtpserver.login(self.msmtp['cfg'][profile]['user'], self.msmtp['cfg'][profile]['password'])
smtpserver.sendmail(self.msmtp['cfg'][profile]['user'], key['pkey']['email'], msg.as_string())
smtpserver.close()
return()
def postalWorker(self):
m = self.Mailer()
if 'KANT' in m.msmtp['cfg'].keys():
_profile = 'KANT'
else:
_profile = m.msmtp['cfg']['default'] # TODO: let this be specified on the CLI args?
if 'user' not in m.msmtp['cfg'][_profile].keys() or not m.msmtp['cfg'][_profile]['user']:
return() # We don't have MSMTP configured.
# Reconstruct the keyserver list.
_keyservers = []
for k in self.args['keyservers']:
_keyservers.append('{0}://{1}:{2}'.format(k['proto'], k['server'], k['port'][0]))
# Export our key so we can attach it.
_pubkeys = {}
for e in ('asc', 'gpg'):
if e == 'asc':
self.ctx.armor = True
else:
self.ctx.armor = False
_pubkeys[e] = gpg.Data() # A data buffer for the exported pubkey (armored for .asc, binary for .gpg)
self.ctx.op_export_keys([self.mykey['pkey']['key']], 0, _pubkeys[e])
_pubkeys[e].seek(0, 0) # Read with e.g. _sigs['asc'].read()
for k in list(self.keys.keys()):
if self.keys[k]['notify']:
_body = {}
for t in list(self.tpls.keys()):
# There's gotta be a more efficient way of doing this...
#_tplenv = jinja2.Environment(loader = jinja2.BaseLoader()).from_string(self.tpls[t])
_tplenv = jinja2.Environment().from_string(self.tpls[t])
_body[t] = _tplenv.render(key = self.keys[k],
mykey = self.mykey,
keyservers = _keyservers)
b = MIMEMultipart('alternative') # Set up a body
for c in _body.keys():
b.attach(MIMEText(_body[c], c))
bmsg = MIMEMultipart()
bmsg.attach(b)
for s in _pubkeys.keys():
_attchmnt = MIMEApplication(_pubkeys[s].read(), '{0}.{1}'.format(self.mykey['pkey']['key'].fpr, s))
_attchmnt['Content-Disposition'] = 'attachment; filename="{0}.{1}"'.format(self.mykey['pkey']['key'].fpr, s)
bmsg.attach(_attchmnt)
# Now we sign the body. This incomprehensible bit monkey-formats bmsg to be a multi-RFC-compatible
# string, which is then passed to our gpgme instance's signing mechanism, and the output of that is
# returned as plaintext. Whew.
self.ctx.armor = True
_sig = self.ctx.sign((bmsg.as_string().replace('\n', '\r\n')).encode('utf-8'),
mode = gpg.constants.SIG_MODE_DETACH)
imsg = Message() # Build yet another intermediate message...