commit 0e6c3c6a4bd1037c992d55a0cbf8a652a2968453
parent e1f64570d6ca076a1cbb813706faa768cf0a1fe9
Author: nolash <dev@holbrook.no>
Date: Mon, 26 Apr 2021 08:55:47 +0200
Finish docker offline parts I and II
Diffstat:
23 files changed, 670 insertions(+), 138 deletions(-)
diff --git a/content/20210418_keccak.rst b/content/20210418_keccak.rst
@@ -9,7 +9,7 @@ In search of a slim KECCAK dependency
:slug: keccak-benchmarks
:summary: Compare performance and sizes of alternative KECCAK SHA3 implementations
:lang: en
-:status: published
+:status: draft
Implementations
===============
diff --git a/content/20210419_docker_python.rst b/content/20210419_docker_python.rst
@@ -1,40 +1,31 @@
-Offline Docker - Part II
-########################
+Local python repository
+#######################
-:date: 2021-04-19 17:35
-:modified: 2021-04-18 19:48
+:date: 2021-04-26 07:55
:category: Offlining
:author: Louis Holbrook
:tags: docker,python,devops
-:slug: docker-offline-python
+:slug: docker-offline-2-python
:summary: How to not be forced to be online when forced to use docker
+:series: Offline Docker
+:seriesprefix: docker-offline
+:seriespart: 2
:lang: en
:status: published
-Local python repository
-=======================
+In the previous part of this series we were able to connect a Docker network to a virtual interface on our host, neither of which has access to the internet. That means we are ready to host content for the container locally. And we will start by creating a local Python repository.
+
+Packaging the packages
+======================
+
+I'll be so bold as to assume that you are using ``pip`` to manage your packages. It gives you not only the option to *install* packages, but also to merely *download* them to storage. So let's do that and try to serve the packages.
-.. include:: code/docker-offline-python/pep503.sh
- :code: bash
.. code-block:: bash
$ pip download faker
- Collecting faker
- Downloading Faker-8.1.0-py3-none-any.whl (1.2 MB)
- |████████████████████████████████| 1.2 MB 173 kB/s
- Collecting python-dateutil>=2.4
- Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
- |████████████████████████████████| 227 kB 732 kB/s
- Collecting six>=1.5
- Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
- Collecting text-unidecode==1.3
- Using cached text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
- Saved ./Faker-8.1.0-py3-none-any.whl
- Saved ./python_dateutil-2.8.1-py2.py3-none-any.whl
- Saved ./six-1.15.0-py2.py3-none-any.whl
- Saved ./text_unidecode-1.3-py2.py3-none-any.whl
+ [...]
Successfully downloaded faker python-dateutil six text-unidecode
$ ls
Faker-8.1.0-py3-none-any.whl python_dateutil-2.8.1-py2.py3-none-any.whl six-1.15.0-py2.py3-none-any.whl text_unidecode-1.3-py2.py3-none-any.whl
@@ -49,6 +40,14 @@ Local python repository
ERROR: Could not find a version that satisfies the requirement faker (from versions: none)
ERROR: No matching distribution found for faker
+Dang, apparently not that simple. And indeed, if we read up on the `PEP 503 spec for Simple Repository API`_, we learn that we need to stick those package files into directories named after the packages. That means a bit of bash scripting:
+
+.. include:: code/docker-offline-python/pep503.sh
+ :code: bash
+
+
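+For reference, a minimal sketch of what such a script boils down to (the included ``pep503.sh`` above is the real deal; the snippet below merely illustrates the idea and assumes wheel-style filenames):
+
+.. code-block:: bash
+
+ #!/bin/bash
+ # usage: ./pep503-sketch.sh <source dir> <target dir>
+ src=$1
+ dst=$2
+ for f in $src/*.whl; do
+   [ -e "$f" ] || continue
+   # package name is the part before the first hyphen, normalized
+   # as per PEP 503: lowercase, runs of ".", "-", "_" become "-"
+   name=$(basename "$f" | cut -d '-' -f 1 | tr '[:upper:]' '[:lower:]' | sed -e 's/[-_.]\+/-/g')
+   mkdir -p "$dst/$name"
+   cp "$f" "$dst/$name/"
+ done
+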
+Armed with this, let's try again:
+
.. code-block:: bash
$ sh /home/lash/bin/shell/pep503.sh . packages
@@ -74,66 +73,72 @@ Local python repository
$ pip install --index http://localhost:8000/packages faker
Looking in indexes: http://localhost:8000/packages
Collecting faker
- Downloading http://localhost:8000/packages/faker/Faker-8.1.0-py3-none-any.whl (1.2 MB)
- |████████████████████████████████| 1.2 MB 129.5 MB/s
- Collecting python-dateutil>=2.4
- Downloading http://localhost:8000/packages/python-dateutil/python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
- |████████████████████████████████| 227 kB 128.3 MB/s
- Collecting text-unidecode==1.3
- Downloading http://localhost:8000/packages/text-unidecode/text_unidecode-1.3-py2.py3-none-any.whl (78 kB)
- |████████████████████████████████| 78 kB 151.9 MB/s
- Collecting six>=1.5
- Downloading http://localhost:8000/packages/six/six-1.15.0-py2.py3-none-any.whl (10 kB)
- Installing collected packages: six, python-dateutil, text-unidecode, faker
+ [...]
Successfully installed faker-8.1.0 python-dateutil-2.8.1 six-1.15.0 text-unidecode-1.3
-Docker networking
+Extra prepping
+==============
+
+There are some basic packages you will almost always need, and which ``pip`` will often expect to find in at least one of its available repositories, regardless of whether they are already installed or not. If you don't have these in your local offline repository when the internet goes out, that will block any build you are attempting. So let's make sure we have those packages around, too:
+
+.. code-block:: bash
+
+ $ pip download pip setuptools setuptools-markdown wheel
+ [...]
+ $ bash /home/lash/bin/shell/pep503.sh . packages
+ [...]
+
+
+Choosing a server
=================
-Moved files to Apache.
+As I tend to favor the classics, I still use the ``Apache Web Server`` to host stuff in my local environment. One practical (if not altogether safe) thing about it is that it will automatically bind to all interfaces. So to make the repository available, we simply link or add the ``packages`` directory to the document root, and restart the server, e.g. with
-Using Archlinux with python
+.. code-block:: bash
+
+ systemctl restart httpd
+
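+As for the linking itself, something along these lines should do (assuming Archlinux's default document root of ``/srv/http``; adjust to whatever your Apache configuration actually says):
+
+.. code-block:: bash
+
+ ln -s $(pwd)/packages /srv/http/packages
+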
+Of course you can use any HTTP server you like, as long as you know how to bind it to the virtual interface.
+
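+For instance, Python's own builtin module can be told which interface to bind to. A quick sketch, run from the directory that holds ``packages`` (with this, the URLs below would need ``:8000`` appended):
+
+.. code-block:: bash
+
+ python3 -m http.server --bind 10.1.2.1 8000
+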
+To verify that the service is providing what's needed, simply point your web browser to the location, e.g.
+
+.. code-block:: bash
+
+ lynx http://10.1.2.1/packages/
+
+Serving the packages
+====================
+
+Now that we are all prepped, the next step is to install packages from within a Docker container.
+
+Let's start with an Archlinux base image with basic python provisions.
.. include:: code/docker-offline-python/Dockerfile.pythonarch
:code: docker
-.. include:: code/docker-offline-python/Dockerfile.localpip
- :code: docker
+Build and tag it with ``pythonbase``. Assuming the Dockerfile above was saved as ``Dockerfile.pythonarch`` in the current directory, that amounts to:
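+
+.. code-block:: bash
+
+ docker build -t pythonbase -f Dockerfile.pythonarch .
+
+Then let's add the Dockerfile to test the repository: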
-With network turned off.
+.. code-block:: docker
+
+ [...]
+
+ RUN pip install --index http://10.1.2.1/packages --trusted-host 10.1.2.1 faker
+
+
+Now, turn the internet off (and the lights, too, if you'd like some extra suspense), and build the second file.
.. code-block:: bash
$ docker build .
Sending build context to Docker daemon 3.072kB
- Step 1/2 : FROM pythonarch
- ---> 881c5d055056
- Step 2/2 : RUN pip install --index http://localhost/python hexathon
- ---> Running in 6df44ab92324
- WARNING: The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
- Looking in indexes: http://localhost/python
- WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdb21779250>: Failed to establish a new connection: [Errno 111] Connection refused')': /python/hexathon/
- ^C
- $ docker build --network host .
- Sending build context to Docker daemon 3.072kB
- Step 1/2 : FROM pythonarch
- ---> 881c5d055056
- Step 2/2 : RUN pip install --index http://localhost/python hexathon
- ---> Running in 2c10ffcdf3ad
- WARNING: The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
- Looking in indexes: http://localhost/python
- Collecting hexathon
- Downloading http://localhost/python/hexathon/hexathon-0.0.1a7-py3-none-any.whl (14 kB)
- Installing collected packages: hexathon
- Successfully installed hexathon-0.0.1a7
+ [...]
+ Successfully installed faker-8.1.0 python-dateutil-2.8.1 six-1.15.0 text-unidecode-1.3
Removing intermediate container 2c10ffcdf3ad
---> 1ba83bb8e111
Successfully built 1ba83bb8e111
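+
+If the build can't reach the repository in your setup, it may help to point the build at the offline network explicitly. The classic builder's ``--network`` flag accepts a custom network name; whether this is needed at all depends on how your bridges and firewall rules ended up:
+
+.. code-block:: bash
+
+ docker build --network no-internet .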
-.. code-block:: docker
-
- [...]
- RUN pip install --index http://10.1.3.1/python --trusted-host 10.1.3.1 pip setuptools-markdown wheel
+..
+ .. _PEP 503 spec for Simple Repository API: https://www.python.org/dev/peps/pep-0503
diff --git a/content/20210420_docker_offline.rst b/content/20210420_docker_offline.rst
@@ -1,36 +1,77 @@
-Offline Docker - Part I
-#######################
+All you need is your host
+#########################
-:date: 2021-04-20 10:11
-:modified: 2021-04-20 10:11
+:date: 2021-04-26 07:54
:category: Offlining
:author: Louis Holbrook
:tags: docker,networking,iptables,iproute
-:slug: docker-offline
+:slug: docker-offline-1-routing
:summary: How to not be forced to be online when forced to use docker
+:series: Offline Docker
+:seriesprefix: docker-offline
+:seriespart: 1
:lang: en
:status: published
+Five years ago I decided that I wanted to be able to work from anywhere, anytime. Four years ago, that promise was kept. In part.
+
+I do not need to go to an office somewhere. I can work outside in a park if I want to. I can ride on to a new town every day. I only ever need to bring my trusty old `Tuxedo Laptop`_ wherever I go.
+
+All of this is true, as long as there is internet available. And, as it turns out, *good* internet available.
+
+This became especially obvious to me once I started working with a project that involves a collection of microservices contained in a Docker environment, which also makes extensive use of custom packages that change frequently alongside the development process. Turns out, every time I want to rebuild my cluster of containers while sitting in the sun in a park, I need my LTE modem to play along. If it doesn't, a single package that can't be reached will thwart the build.
+
+This does not feel much like freedom after all. So let's see how we can serve all of these locally from our host instead.
+
+First of all, we have to be able to reach our local host from the Docker containers. This is less straightforward than it may seem at first. The most obvious solution is to use the ``host`` network driver, but this exposes your *whole* localhost interface and routes to the internet, too. Aside from the security issues that raises, it can also trick you into assuming that some resources are available when they in fact will not be once you move on to a different environment. What we want is to *block* access to the internet, while *choosing* which services to let the Docker container use.
+
+Once we have this in place, we want to create local repositories for all the stuff we otherwise need to download. In this particular case, that means a **Docker** repository, a **Python** repository, a **nodejs** repository and a **linux** repository. We'll use Archlinux_ for this exercise, because that's been my home environment for the last four years.
+
+In fact, having your own mirror of all these and anything else you base most of your work on is not only a good idea for the purpose of *offlining* in itself; wasting bandwidth on items you've already downloaded hundreds of times is not exactly a nod to climate awareness either. And even more importantly, ensuring *availability* of software is something we should all participate in, and not merely defer to a couple of git repository giants.
+
+ .. _Tuxedo Laptop: https://www.tuxedocomputers.com/en
+
+..
+
+ .. _Archlinux: https://archlinux.org
+
+Reaching the local host
+=======================
+
+*Local host*, not *localhost*, mind you. Which means we need a different interface to connect to. And since we are not wiring anything up in a physical sense, a virtual interface seems the reasonable way to go.
+
+First, let's prepare a base Docker layer with some tools that you should never leave home without.
+
+
+Prepare the docker image
+------------------------
+
.. include:: code/docker-offline/Dockerfile.archbase
:code: docker
-Bring up a virtual interface
+Let's build this as an image called ``archbase``. Provided the content above is in a file called ``Dockerfile.archbase`` in your current directory:
.. code-block:: bash
- $ ip link add foo type dummy
+ $ docker build -t archbase -f Dockerfile.archbase .
+
-Find the subnet of the network
+Set up network interfaces
+-------------------------
+
+Bring up a virtual interface
.. code-block:: bash
- $ docker network inspect no-internet
+ $ ip link add foo type dummy
-Look for the config property:
+Find the subnet of the ``no-internet`` Docker network. This network relies on a builtin Docker option that provides exactly what the name advertises.
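+
+If you don't have such a network yet, creating one presumably looks something like this (the ``--internal`` flag being the builtin in question):
+
+.. code-block:: bash
+
+ docker network create --internal no-internet
+
+With the network in place, inspect it: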
.. code-block:: bash
+ $ docker network inspect no-internet
+ [...]
"Config": [
{
"Subnet": "10.1.1.0/24",
@@ -38,13 +79,17 @@ Look for the config property:
}
]
-Set up dummy interface in a different subnet.
+Assign an IP address to the dummy interface in a *different* subnet than the one the Docker network uses.
.. code-block:: bash
$ ip addr add 10.1.2.1/24 dev foo
-Find the bridge used by the docker container. Look for an ip address that matches the gateway of the docker network config.
+
+Traverse the firewall
+---------------------
+
+Find the bridge used by the Docker container. Look for an IP address that matches the gateway of the Docker network config.
.. code-block:: bash
@@ -63,19 +108,23 @@ Find the bridge used by the docker container. Look for an ip address that matche
inet6 fe80::1850:53ff:fe2b:9698/64 scope link
valid_lft forever preferred_lft forever
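+
+A shortcut, if you'd rather not eyeball the output: Docker conventionally names the bridge ``br-`` followed by the first twelve characters of the network id (a convention rather than a guarantee, so do verify against the output above):
+
+.. code-block:: bash
+
+ echo br-$(docker network inspect -f '{{ .Id }}' no-internet | cut -c 1-12)
+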
-Add the virtual interface to the bridge. This makes the request pass the iptables rules added by Docker.
+Add the virtual interface to the bridge.
.. code-block:: bash
$ ip link set foo master br-d4ddb68f9938
-Depending on your general iptables rules may have to explicitly allow inbound traffic, e.g.
+Long story short, the previous step will make the traffic from the container reach the ``INPUT`` chain in ``iptables``. Now we can make an exception for incoming traffic from the ``no-internet`` Docker bridge.
.. code-block:: bash
$ iptables -I INPUT 1 --in-interface br-d4ddb68f9938 --destination 10.1.2.1/24 -j ACCEPT
-Now a port on device `foo` should be reachable from the docker container. Let's use socat to check.
+
+Verify
+------
+
+Provided you don't have any other hurdles in your local ``iptables`` setup, a port on device ``foo`` should be reachable from the Docker container. We can use socat to check.
On local host:
@@ -95,4 +144,4 @@ The moment of truth
$ echo bar | socat - TCP4:10.1.2.1:8000
-Apache listens on all interfaces. Restart after create dummy interface, and webcontent is instantly available.
+Spoiler: ``bar`` should pop up on the local host side.
diff --git a/content/20210421_docker_vpn.rst b/content/20210421_docker_vpn.rst
@@ -9,7 +9,7 @@ Using Docker with VPN
:slug: docker-vpn
:summary: Using docker network while openvpn in running
:lang: en
-:status: published
+:status: draft
Need to route through the tun interface, which Docker doesn't seem to automatically do.
diff --git a/content/20210421_web_shapshot.rst b/content/20210421_web_shapshot.rst
@@ -0,0 +1,16 @@
+Web snapshots with proof
+########################
+
+:date: 2021-04-21 09:37
+:modified: 2021-04-21 09:37
+:category: Archiving
+:author: Louis Holbrook
+:tags: web,hash,chromium
+:slug: web-snapshot
+:summary: Generating proof of a web resource when you read and share
+:lang: en
+:status: draft
+
+
+.. include:: code/web-snapshot/webshot.sh
+ :code: bash
diff --git a/content/20210425_celery_document_graph.rst b/content/20210425_celery_document_graph.rst
@@ -0,0 +1,32 @@
+Documenting Celery task chains
+##############################
+
+:date: 2021-04-25 15:00
+:modified: 2021-04-25 15:00
+:category: Code
+:author: Louis Holbrook
+:tags: python,microservices,celery
+:slug: celery-document-graph
+:summary: How to document complex task chains in Python Celery using graphviz
+:lang: en
+:status: draft
+
+.. code-block:: python
+
+    import logging
+    import tempfile
+
+    from celery import current_app
+
+    logg = logging.getLogger()
+
+    # broker and config are expected to be provided by the surrounding
+    # application setup
+    current_app.conf.update({
+        'broker_url': broker,
+        })
+
+    result = config.get('CELERY_RESULT_URL')
+    if result[:4] == 'file':
+        rq = tempfile.mkdtemp()
+        current_app.conf.update({
+            'result_backend': 'file://{}'.format(rq),
+            })
+        logg.warning('celery backend store dir {} created, will NOT be deleted on shutdown'.format(rq))
+    else:
+        current_app.conf.update({
+            'result_backend': result,
+            })
+
+
diff --git a/content/code/docker-offline-python/Dockerfile.localpip b/content/code/docker-offline-python/Dockerfile.localpip
@@ -1,3 +1,3 @@
FROM pythonarch
-RUN pip install --index http://localhost/python hexathon
+RUN pip install --index http://localhost/python faker
diff --git a/content/code/docker-offline/Dockerfile.archbase b/content/code/docker-offline/Dockerfile.archbase
@@ -1,4 +1,4 @@
FROM archlinux:latest
RUN pacman -Sy && \
- pacman -S --noconfirm gnu-netcat socat inetutils iproute2 curl
+ pacman -S --noconfirm gnu-netcat socat inetutils iproute2
diff --git a/content/code/docker-vpn/docker_vpn_routes.sh b/content/code/docker-vpn/docker_vpn_routes.sh
@@ -1,8 +1,7 @@
#!/bin/sh
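+# openvpn (with redirect-gateway def1) typically installs a 0.0.0.0/1 route; pick its gateway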
-default_route_vpn_gateway=`ip route | awk '{if ($3 ~ /^tun/) { print $9; }}'`
+default_route_vpn_gateway=`ip route | awk '{if ($1 ~ /^0.0.0.0\/1$/) { print $3; }}'`
route_vpn_gateway=${VPN_GATEWAY:-$default_route_vpn_gateway}
-
echo "Adding default route to $route_vpn_gateway with /0 mask..."
ip route add default via $route_vpn_gateway
diff --git a/content/code/web-snapshot/webshot.sh b/content/code/web-snapshot/webshot.sh
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+# target directory where the finished snapshot bundles end up
+f=${HOME}/articles
+set +e
+# UTC timestamp for the bundle name, and a scratch directory to work in
+d=`TZ=UTC date +%Y%m%d%H%M`
+t=`mktemp -d`
+pushd $t
+# record the url, the response headers and the raw contents
+echo $@ > url.txt
+curl -s -I $@ > headers.txt
+curl -s -X GET $@ > contents.txt
+# hash the contents; the digest becomes part of the bundle name
+sha256sum contents.txt > contents.txt.sha256
+h=`cat contents.txt.sha256 | awk '{ print $1; }'`
+# render the page to pdf (headless chromium writes output.pdf by default)
+chromium --headless --print-to-pdf $@
+n=${d}_${h}
+mv output.pdf $n.pdf
+# bundle everything up as <timestamp>_<digest>.tar.gz
+tar -zcvf $f/$n.tar.gz *
+popd
+set -e
diff --git a/content/pages/identities.rst b/content/pages/identities.rst
@@ -8,9 +8,34 @@ Identities
Keys
====
-PGP (personal)
- ``59A844A4 84AC1125 3D3A3E9D CDCBD24D D1D0E001``
-PGP (code signing)
- ``0826EDA1 702D1E87 C6E28751 21D2E7BB 88C2A746``
-TOX
+PGP - personal
+ `59A844A4 84AC1125 3D3A3E9D CDCBD24D D1D0E001`_
+
+PGP - code signing
+ `0826EDA1 702D1E87 C6E28751 21D2E7BB 88C2A746`_
+
+PGP - `Delta Chat`_
+ `E19386B2 6EB1F4CC 9B14B4E3 D62A8A77 9612E773`_
+
+TOX_
``70459C0568A64737F127CA1505FA0485FBB69831C7BD6AC269E369285C7F2E0282283B2AFCD0``
+
+..
+
+ .. _Delta Chat: https://delta.chat/
+
+..
+
+ .. _59A844A4 84AC1125 3D3A3E9D CDCBD24D D1D0E001: https://holbrook.no/keys/louis.asc
+
+..
+
+ .. _0826EDA1 702D1E87 C6E28751 21D2E7BB 88C2A746: https://holbrook.no/keys/louis_dev.asc
+
+..
+
+ .. _E19386B2 6EB1F4CC 9B14B4E3 D62A8A77 9612E773: https://holbrook.no/keys/louis_deltachat.asc
+
+..
+
+ .. _TOX: https://tox.chat/
diff --git a/content/pages/projects.rst b/content/pages/projects.rst
@@ -3,18 +3,190 @@ Projects
:title: Projects
:author: Louis Holbrook
+:status: published
+
Active
======
-Logwarrior
+Chaintools
+----------
+
+This is a collection of three python3 blockchain libraries - chainlib_, chainsyncer_ and chainqueue_. Chainlib provides tooling and encodings for Solidity-EVM and Ethereum node networks. Chainqueue facilitates bulk sending of transactions. Chainsyncer processes all transactions in mined blocks, and executes pluggable code for each of them.
+
+`Crypto Dev Signer`_
+--------------------
+
+Provides a daemon for use in development that performs Ethereum signatures over standard JSON-RPC, along with a keystore with memory or SQL backends. It also contains a CLI tool to create and parse keystore files.
+
+Taint_
+------
+
+Taint tags crypto addresses, and merges tags for crypto addresses that trade with each other. It can be used as a simple forensic tool to check whether cryptographic identities are isolated from each other.
+
+`Eth Statsyncer`_
+-----------------
+
+Statsyncer collects and aggregates blockchain state, like gas prices, over time. It in turn serves this data over a JSON-RPC API.
+
+Ecuth_
+------
+
+Leverages the `HTTP HOBA challenge-response authentication scheme`_ to authenticate with PGP_ and Ethereum_ wallets. It is supported by the dependencies `python-http-hoba-auth`_ and `python-yaml-acl`_, which provide parsing of HOBA authorization strings and a simple YAML-based ACL structure respectively.
+
+w2625
+-----
+
+Performs parallel lookups over a collection of web2 and/or web3 sources for an asset. Primarily designed to be an intermediate stop for projects that wish to integrate with web3, but cannot risk relying fully on it given its lack of stability.
+
+librlp_
+-------
+
+A small implementation of the Recursive Length Prefix serialization format in C. A python interface pylibrlp_ is also provided.
+
+libswarm
+--------
+
+A small implementation of the BMT, Swarmhash and Single-Owner Chunk hashers and chunkers used in the `Swarm Network`_. Written in C.
+
+Logwarrior_
+-----------
+
+Work logging in the spirit of the absolutely awesome Taskwarrior_ and Timewarrior_ tools. Written in Python, it uses the filesystem as backend, and MIME Multiparts to allow attachments to the log items. The ambition is to integrate with Taskwarrior one day.
+
+Confini_
+--------
+
+Python module for parsing and merging content from ``.ini`` files in a directory. It enables overriding of the resulting variables by environment variables and command line arguments through dictionaries. Has an incomplete javascript companion confini-js_.
+
Hiatus
======
-Benford generator
+`Benford generator`_
+--------------------
+
+A small C library to generate number series that conform to the natural distribution according to `Benford's Law`_. [1]_
+
+
+Gitrefresh_
+-----------
+
+Mirroring tool to migrate your git repositories between computers without copying objects, and update existing repositories from remotes recursively. Written in ``bash``.
+
Abandoned
=========
-libswarm
+Simplesigner_
+-------------
+
+A library that aims to simplify mutually signing generic serializable items offline with handheld devices. Leverages Typescript and Protobuf.
+
+..
+ Project links
+
+..
+
+ .. _chainlib: https://gitlab.com/nolash/chainlib
+
+..
+
+ .. _chainsyncer: https://gitlab.com/nolash/chainsyncer
+
+..
+
+ .. _chainqueue: https://gitlab.com/nolash/chainqueue
+
+..
+
+ .. _Taint: https://gitlab.com/nolash/taint
+
+..
+
+ .. _Eth Statsyncer: https://gitlab.com/nolash/eth-stat-syncer
+
+..
+
+ .. _Crypto Dev Signer: https://gitlab.com/nolash/crypto-dev-signer
+
+..
+
+ .. _Ecuth: https://gitlab.com/nolash/ecuth
+
+..
+
+ .. _python-http-hoba-auth: https://gitlab.com/nolash/python-http-hoba-auth
+
+..
+
+ .. _python-yaml-acl: https://gitlab.com/nolash/python-yaml-acl
+
+
+..
+
+ .. _Confini: https://gitlab.com/nolash/python-confini
+
+..
+
+ .. _confini-js: https://gitlab.com/nolash/confini-js
+
+..
+
+ .. _librlp: https://gitlab.com/nolash/librlp
+
+
+..
+
+ .. _pylibrlp: https://gitlab.com/nolash/pylibrlp
+
+..
+
.. _Logwarrior: https://gitlab.com/nolash/logwarrior
+
+..
+
+ .. _Benford Generator: https://gitlab.com/nolash/libbenford
+
+..
+
+ .. _Gitrefresh: https://gitlab.com/nolash/cli-tools/-/tree/master/gitrefresh
+
+..
+
+ .. _Simplesigner: https://gitlab.com/nolash/simple-signer-js
+
+..
+ External projects
+
+..
+
+ .. _TaskWarrior: https://taskwarrior.org/
+
+..
+
+ .. _TimeWarrior: https://timewarrior.net/
+
+..
+
+ .. _Benford's Law: https://mathworld.wolfram.com/BenfordsLaw.html
+
+..
+
+ .. _Swarm Network: https://swarm.ethereum.org/
+
+..
+
+ .. _HTTP HOBA challenge-response authentication scheme: https://tools.ietf.org/id/draft-ietf-httpauth-hoba-00.html
+
+..
+
+ .. _PGP: https://gnupg.org/
+
+..
+
+ .. _Ethereum: https://ethereum.org/en/
+
+..
+
+ .. [1] A phenomenological law also called the first digit law, first digit phenomenon, or leading digit phenomenon. Benford's law states that in listings, tables of statistics, etc., the digit 1 tends to occur with probability ∼30%, much greater than the expected 11.1% (i.e., one digit out of 9). https://mathworld.wolfram.com/BenfordsLaw.html
diff --git a/lash/static/css/style.css b/lash/static/css/style.css
@@ -1,16 +1,161 @@
-.body {
- width: 100%;
+/* globals */
+a:hover {
+ text-decoration: underline;
+ filter: invert(.33);
+}
+
+a:visited {
+ color: #30303f;
+}
+
+body {
+ background-image: url('../images/bg.png');
+ background-repeat: repeat;
+ font-family: sans-serif;
+}
+
+section.body,
+header div {
+ padding-left: 2em;
+ padding-right: 2em;
+}
+
+ol {
+ list-style-type: none;
+}
+
+h2 {
+ color: #800000;
+ font-size: 1.7em;
+}
+
+h1 {
+ font-size: 2.2em;
+ text-transform: uppercase;
+}
+
+h1.top-body-title {
+ padding-top: 0.3em;
+ color: #800000;
+}
+
+
+header#banner a {
+ color: #000;
+ text-decoration: none;
+ font-weight: 900;
+}
+
+ul.entry-meta-parts li {
+ display: inline-block;
+}
+
+div.entry-meta {
+ font-weight: 900;
+}
+
+/* top menu */
+nav#menu {
+ text-transform: lowercase;
+ font-family: monospace;
}
nav#menu li {
display: inline-block;
}
-ol {
+nav#menu li.active a {
+ color: #f00;
+}
+
+nav#menu ul {
+ padding-inline-start: 0;
+ padding-left: 2em;
+ padding-bottom: 0.3em;
list-style-type: none;
+ padding-bottom: 0.7em;
+}
+
+header#banner h1,
+nav#menu li {
+ text-transform: lowercase;
+ font-family: monospace;
+ font-size: 1.5em;
+ padding-right: .7em;
}
-div.highlight, pre.code {
- background-color: #bbb;
- padding: 10px;
+nav#menu hr {
+ border: thin solid #000;
}
+
+/* metadata */
+
+ol.entry-series-parts,
+ol.entry-series-parts li,
+div.neighbors ul,
+div.neighbors ul li {
+ display: inline-block;
+ margin-block-start: 0;
+ margin-block-end: 0;
+ padding-inline-start: 0;
+}
+
+div.meta {
+ padding-bottom: 1em;
+}
+
+/* content */
+
+h1.entry-title a {
+ text-decoration: none;
+ color: #000;
+}
+
+a.category {
+ font-weight: 600;
+}
+
+/* foot */
+
+footer.body {
+ padding-top: 2em;
+}
+
+/* code */
+
+pre {
+ white-space: break-spaces;
+}
+div.highlight,
+pre.code {
+ background-color: #e5e0e0;
+ padding: 1em;
+ margin-bottom: 0.3em;
+}
+
+
+/* custom: identities */
+div#keys {
+ font-size: 1.2em;
+}
+
+ul li {
+ font-size: 1.2em;
+}
+
+div#keys dt {
+ margin-bottom: 0.3em;
+}
+
+div#keys dd {
+ margin-bottom: 0.6em;
+}
+
+div#keys dd a,
+div#keys dd tt {
+ font-size: 1.2em;
+ font-family: monospace;
+}
+
+
+
diff --git a/lash/static/images/bg.png b/lash/static/images/bg.png
Binary files differ.
diff --git a/lash/static/js/blink.js b/lash/static/js/blink.js
@@ -0,0 +1,13 @@
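+// fake a blinking terminal cursor by fading the #cursor element in and out on a timer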
+function cursorBlink() {
+ if (document.getElementById("cursor").style.opacity == "0") {
+ document.getElementById("cursor").animate({
+ opacity: 1
+ }, 150);
+ } else {
+ document.getElementById("cursor").style.opacity = "0.35";
+ document.getElementById("cursor").animate({
+ opacity: 0
+ }, 350);
+ }
+ setTimeout(cursorBlink, 400);
+}
diff --git a/lash/templates/article.html b/lash/templates/article.html
@@ -12,25 +12,25 @@
{% endblock %}
{% block content %}
-<section id="content" class="body">
<header>
- <h2 class="entry-title">
+ <h1 class="entry-title top-body-title">
<a href="{{ SITEURL }}/{{ article.url }}" rel="bookmark"
- title="Permalink to {{ article.title|striptags }}">{{ article.title }}</a></h2>
+ title="Permalink to {{ article.title|striptags }}">{% if article.series %}{{ article.series }}: {% endif %}{{ article.title }}</a></h1>
- <div class="category">
+ <div class="category meta">
Posted
<time class="published" datetime="{{ article.date.isoformat() }}">
{{ article.locale_date }}
</time>
-in <a href="{{ SITEURL }}/{{ article.category.url }}">{{ article.category }}</a>
+in <a class="category" href="{{ SITEURL }}/{{ article.category.url }}">{{ article.category|lower() }}</a>
{% for tag in article.tags %}
- <a href="{{ SITEURL }}/{{ tag.url }}">{{ tag }}</a>
+ <a href="{{ SITEURL }}/{{ tag.url }}">{{ tag }}</a>
{% endfor %}
</div>
- <ul>
+ <div class="neighbors meta">
+ <ul>
{% if article.prev_article %}
<li>
Previous: <a href="{{ SITEURL }}/{{ article.prev_article.url}}">
@@ -39,32 +39,39 @@ in <a href="{{ SITEURL }}/{{ article.category.url }}">{{ article.category }}</a>
</li>
{% endif %}
{% if article.next_article %}
+ {% if article.prev_article %}
+ |
+ {% endif %}
<li>
Next: <a href="{{ SITEURL }}/{{ article.next_article.url}}">
{{ article.next_article.title }}
</a>
</li>
{% endif %}
- </ul>
-<div class="tags">
-
+ </ul>
+ </div>
+ {% if article.series %}
+ <div class="entry-series meta">
+Part {{ article.seriespart }} of the series "{{ article.series }}"
+ <ol class="entry-series-parts">
+ {% for a in articles|sort(attribute="slug") %}
+ {% if article.seriesprefix in a.slug and article.slug != a.slug %}
+ <li>| <a href="{{ a.url }}" title="{{ a.title }} ">Part {{ a.seriespart }}</a></li>
+ {% endif %}
+ {% endfor %}
+ </ol>
+ </div>
+ {% endif %}
+ <div class="meta">
+<hr/>
</div>
- <hr/>
- </header>
+</header>
+
+<section id="content" class="body">
<div class="entry-content">
{{ article.content }}
</div><!-- /.entry-content -->
<footer class="meta">
- {% if article.tags %}
- <hr/>
- <div class="tags">
- Tags:
- {% for tag in article.tags %}
- <a href="{{ SITEURL }}/{{ tag.url }}">{{ tag }}</a>
- {% endfor %}
- </div>
- {% endif %}
-
</footer>
</section>
diff --git a/lash/templates/base.html b/lash/templates/base.html
@@ -13,26 +13,27 @@
<body id="index" class="home">
<header id="banner" class="body">
- <h1><a href="{{ SITEURL }}/">{{ SITENAME }}{% if SITESUBTITLE %} <strong>{{ SITESUBTITLE }}</strong>{% endif %}</a></h1>
+ <h1>> <a href="{{ SITEURL }}/">{{ SITENAME }}{% if SITESUBTITLE %} <strong>{{ SITESUBTITLE }}</strong>{% endif %}</a><span id="cursor">_</span></h1>
</header>
- <nav id="menu"><ul>
- {% for title, link in MENUITEMS %}
- <li><a href="{{ link }}">{{ title }}</a></li>
- {% endfor %}
- {% if DISPLAY_PAGES_ON_MENU %}
- {% for p in pages %}
- <li{% if p == page %} class="active"{% endif %}><a href="{{ SITEURL }}/{{ p.url }}">{{ p.title }}</a></li>
- {% endfor %}
- {% endif %}
- {% if DISPLAY_CATEGORIES_ON_MENU %}
+ <nav id="menu">
+ <ul>
{% for cat, null in categories %}
<li{% if cat == category %} class="active"{% endif %}><a href="{{ SITEURL }}/{{ cat.url }}">{{ cat }}</a></li>
{% endfor %}
- {% endif %}
- </ul></nav><!-- /#menu -->
+ {% for p in pages %}
+ <li{% if p == page %} class="active"{% endif %}><a href="{{ SITEURL }}/{{ p.url }}">{{ p.title }}</a></li>
+ {% endfor %}
+
+ {% for title, link in MENUITEMS %}
+ <li><a href="{{ link }}">{{ title }}</a></li>
+ {% endfor %}
+ </ul>
+ <hr/>
+ </nav><!-- /#menu -->
{% block content %}
{% endblock %}
<footer id="contentinfo" class="body">
+ <hr/>
<address id="about" class="vcard body">
Powered by <a href="https://getpelican.com/">Pelican</a>.
</address><!-- /#about -->
diff --git a/lash/templates/category.html b/lash/templates/category.html
@@ -0,0 +1,8 @@
+{% extends "index.html" %}
+
+{% block title %}{{ SITENAME }} - {{ category }} category{% endblock %}
+
+{% block content_title %}
+<h1 class="top-body-title">Category: {{ category }}</h1>
+{% endblock %}
+
diff --git a/lash/templates/index.html b/lash/templates/index.html
@@ -2,13 +2,14 @@
{% block content %}
<section id="content">
{% block content_title %}
-<h2>All articles</h2>
+<h1 class="top-body-title">All articles</h1>
{% endblock %}
<ol id="post-list">
{% for article in articles_page.object_list %}
<li><article class="hentry">
- <header> <h2 class="entry-title"><a href="{{ SITEURL }}/{{ article.url }}" rel="bookmark" title="Permalink to {{ article.title|striptags }}">{{ article.title }}</a></h2> </header>
+ <header> <h3 class="entry-title"><a href="{{ SITEURL }}/{{ article.url }}" rel="bookmark" title="Permalink to {{ article.title|striptags }}">{% if article.series %} {{ article.series }}: {% endif %}{{ article.title }}</a></h3> </header>
+ <p>{{ article.summary }}</p>
</article></li>
{% endfor %}
</ol><!-- /#posts-list -->
diff --git a/lash/templates/page.html b/lash/templates/page.html
@@ -0,0 +1,29 @@
+{% extends "base.html" %}
+{% block html_lang %}{{ page.lang }}{% endblock %}
+
+{% block title %}{{ SITENAME }} - {{ page.title|striptags }}{%endblock%}
+
+{% block head %}
+ {{ super() }}
+
+ {% import 'translations.html' as translations with context %}
+ {% if translations.entry_hreflang(page) %}
+ {{ translations.entry_hreflang(page) }}
+ {% endif %}
+{% endblock %}
+
+{% block content %}
+ <h1 class="top-body-title">{{ page.title }}</h1>
+ <section id="content" class="body">
+ {% import 'translations.html' as translations with context %}
+ {{ translations.translations_for(page) }}
+
+ {{ page.content }}
+ </section>
+
+ {% if page.modified %}
+ <p>
+ Last updated: {{ page.locale_modified }}
+ </p>
+ {% endif %}
+{% endblock %}
diff --git a/lash/templates/tag.html b/lash/templates/tag.html
@@ -0,0 +1,7 @@
+{% extends "index.html" %}
+
+{% block title %}{{ SITENAME }} - {{ tag }} tag{% endblock %}
+
+{% block content_title %}
+<h1 class="top-body-title">Tag: {{ tag }}</h1>
+{% endblock %}
diff --git a/pelicanconf.py b/pelicanconf.py
@@ -2,7 +2,7 @@
# -*- coding: utf-8 -*- #
AUTHOR = 'Louis Holbrook'
-SITENAME = 'Man Bytes Dog'
+SITENAME = 'man bytes gnu'
SITEURL = ''
PATH = 'content'
@@ -36,3 +36,6 @@ RELATIVE_URLS = True
DISPLAY_CATEGORIES_ON_MENU = True
PLUGINS = ['pelican.plugins.neighbors']
+
+MENUITEMS = [('tags', '/tags')]
+
diff --git a/proof.txt b/proof.txt
@@ -0,0 +1,2 @@
+https://mathworld.wolfram.com/BenfordsLaw.html cce671b93ea32540b69593141a590b94e1117d48482fb5868177b0607f7e5281
+https://www.python.org/dev/peps/pep-0503/ 90854c60f9dd9cfa5cbf73851c675fbd6f60568ae9dde22378389a5f4a8eec7a