<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Linux &#8211; Manuel Bogner&#039;s Blog</title>
	<atom:link href="https://blog.mbo.dev/archives/category/linux/feed" rel="self" type="application/rss+xml" />
	<link>https://blog.mbo.dev</link>
	<description>Solutions to everyday IT problems</description>
	<lastBuildDate>Mon, 12 Feb 2024 11:54:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://blog.mbo.dev/wp-content/uploads/2022/11/cropped-cropped-mbo-white_opt-32x32.png</url>
	<title>Linux &#8211; Manuel Bogner&#039;s Blog</title>
	<link>https://blog.mbo.dev</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Convert PSD files to PNG with ImageMagick in a simple bash script</title>
		<link>https://blog.mbo.dev/archives/2014</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Mon, 12 Feb 2024 11:54:02 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Mac]]></category>
		<category><![CDATA[Media]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=2014</guid>

					<description><![CDATA[Keeping original PSD files was quite good practice over the years but mostly other formats are needed down the road. Because opening Photoshop to export a file takes quite some time and isn&#8217;t efficient I wrote a small script to replace this task: This takes the file(s) to convert as argument(s). You can also just [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Keeping the original PSD files was good practice over the years, but other formats are usually needed down the road. Because opening Photoshop just to export a file takes quite some time and isn&#8217;t efficient, I wrote a small script for this task:</p>



<pre class="wp-block-code"><code>#!/usr/bin/env bash

if &#91;&#91; $# -eq 0 ]]; then
    echo "usage: $0 &lt;file(s) to convert>"
    exit 1
fi

for i in "$@"; do
    directory=$(realpath "$(dirname "$i")")
    file=$(basename "$i")
    name=${file%.*}
    # lower-case the extension so PSD and psd are both matched
    extension=$(echo "${file##*.}" | tr '&#91;:upper:]' '&#91;:lower:]')

    if &#91;&#91; "$extension" == "psd" ]]; then
        target="$directory/$name.png"
        if &#91;&#91; -e "$target" ]]; then
            echo "file already exists: '$target'"
        else
            echo "converting '$i'"
            convert "$i" -background none -flatten "$target" || exit 1
        fi
    fi
done</code></pre>



<p>This takes the file(s) to convert as argument(s). You can also just go with wildcards like <code>*.psd</code> to convert all PSD files in a folder. Already converted files will be skipped.</p>
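

<p>For example, assuming the script was saved as <code>psd2png.sh</code> (the name is just a placeholder) and made executable, calls could look like this:</p>



<pre class="wp-block-code"><code>chmod +x psd2png.sh

# single file
./psd2png.sh image.psd

# all PSD files in a folder
./psd2png.sh ~/Pictures/*.psd</code></pre>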



<p>You need ImageMagick installed on your machine to have access to the <code>convert</code> command.</p>
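

<p>If it is missing, ImageMagick can be installed via the usual package managers, for example:</p>



<pre class="wp-block-code"><code># macOS
brew install imagemagick

# Debian/Ubuntu
sudo apt install imagemagick</code></pre>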



<p>Tested with ImageMagick 7.1.1-27 on macOS Sonoma 14.3.1.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Get more security updates through Ubuntu Pro with &#8216;esm-apps&#8217; enabled</title>
		<link>https://blog.mbo.dev/archives/2006</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Tue, 23 Jan 2024 23:10:19 +0000</pubDate>
				<category><![CDATA[Ubuntu]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=2006</guid>

					<description><![CDATA[To disable this annoying advertisement in your shell simply run This will add .bak to the end of the file and should survive updates.]]></description>
										<content:encoded><![CDATA[
<p>To disable this annoying advertisement in your shell, simply run</p>



<pre class="wp-block-code"><code>sudo dpkg-divert --divert /etc/apt/apt.conf.d/20apt-esm-hook.conf.bak --rename --local /etc/apt/apt.conf.d/20apt-esm-hook.conf</code></pre>



<p>This renames the file by adding <code>.bak</code> to the end, and the diversion should survive updates.</p>
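

<p>Should you ever want the hint back, the diversion can be reverted again with dpkg-divert&#8217;s remove option:</p>



<pre class="wp-block-code"><code>sudo dpkg-divert --rename --remove /etc/apt/apt.conf.d/20apt-esm-hook.conf</code></pre>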
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to create docker containers for multiple platforms / architectures</title>
		<link>https://blog.mbo.dev/archives/1930</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Mon, 17 Apr 2023 11:07:56 +0000</pubDate>
				<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Mac]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1930</guid>

					<description><![CDATA[First you need to choose a base image that is available for the target platforms as well. Create your Dockerfile as usual and then build the container for different platforms. This example would create an amd64 and a aarch64 (arm64/v8) image: Based on these you can create a manifest and upload it: This would already [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>First you need to choose a base image that is available for the target platforms as well. Create your Dockerfile as usual and then build the container for different platforms.</p>



<p>This example would create an <em><strong>amd64</strong></em> and an <em><strong>aarch64</strong></em> (arm64/v8) image:</p>



<pre class="wp-block-code"><code># ARM
<strong>docker build</strong> --platform=linux/<strong>aarch64</strong> -t <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-aarch64</strong> .
<strong>docker push</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-aarch64</strong>

# AMD
<strong>docker build</strong> --platform=linux/<strong>amd64</strong> -t <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-amd64</strong> .
<strong>docker push</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-amd64</strong></code></pre>



<p>Based on these you can create a manifest and upload it:</p>



<pre class="wp-block-code"><code><strong>docker manifest create</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em> \
  --amend <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-aarch64</strong> \
  --amend <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-amd64</strong>
<strong>docker manifest push</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em></code></pre>



<p>This would already provide an image <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em> available for the <em>amd64</em> and <em>aarch64</em> platforms on Docker Hub. But for convenience we also want a <em>latest</em> tag for that manifest:</p>



<pre class="wp-block-code"><code><strong>docker manifest create</strong> <em>&lt;dockerhub-username>/&lt;image-name></em>:<strong>latest</strong> \
  --<strong>amend</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-aarch64</strong> \
  --<strong>amend</strong> <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em><strong>-amd64</strong>
<strong>docker manifest push</strong> <em>&lt;dockerhub-username>/&lt;image-name></em><strong>:latest</strong></code></pre>



<p>This reuses the same image digests as the versions uploaded before.</p>
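

<p>To check that the resulting manifest really contains both platforms, you can inspect it (the image name is of course a placeholder):</p>



<pre class="wp-block-code"><code>docker manifest inspect <em>&lt;dockerhub-username>/&lt;image-name>:&lt;version></em></code></pre>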



<p>I am not sure if this is the correct or best way, but at least it works. On my M1 Mac <code>buildx</code> didn&#8217;t work, so I fell back to manifests, which aren&#8217;t that complicated anyway.</p>



<p>Here is an example image that was created and uploaded like this: <a rel="noreferrer noopener" href="https://registry.hub.docker.com/r/mbopm/cyberchef" target="_blank">https://registry.hub.docker.com/r/mbopm/cyberchef</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>SMPTE LTC (Linear Time Code) on Linux and MacOS with ltc-tools and Jack Audio Connection Kit</title>
		<link>https://blog.mbo.dev/archives/1887</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Mon, 28 Nov 2022 21:08:14 +0000</pubDate>
				<category><![CDATA[General]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Mac]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1887</guid>

					<description><![CDATA[The following time code generation is based on real time clocks synced with ntp on mac or linux computers using UTC time. Make sure your machine has an internal clock source otherwise the time drift will be too big. This approach is a very cheap way to get proper time code into cameras. If your [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>The following time code generation is based on real-time clocks synced with NTP on Mac or Linux computers using UTC time. Make sure your machine has a stable internal clock source, otherwise the time drift will be too big.</p>
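

<p>To check whether your clock is actually NTP-synced, you can for example use the following commands (the server name is just an example):</p>



<pre class="wp-block-code"><code># Linux (systemd)
timedatectl | grep synchronized

# macOS: query an NTP server and show the offset
sntp time.apple.com</code></pre>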



<p>This approach is a very cheap way to get proper time code into cameras. If your device &#8211; like my notebook &#8211; has a built-in microphone, you could for example do the same as a Tentacle Sync E and record time code on one stereo channel of your digital camera and the microphone input on the other. Of course, syncing devices via NTP isn&#8217;t as accurate as syncing explicitly with extra hardware for time code generation, but this way I can use existing hardware instead of buying devices that cost hundreds of euros, aren&#8217;t shippable for weeks or months, and that I don&#8217;t use very often.</p>



<p><strong>In the following are my notes on how to use ltc-tools and jack for time code generation.</strong></p>



<p>First you need to install the dependencies:</p>



<pre class="wp-block-code"><code>brew install jack
brew install ltc-tools</code></pre>



<p>With these in place, you first have to start the jack daemon:</p>



<pre class="wp-block-code"><code>jackd --realtime -dcoreaudio</code></pre>



<p>Then start the time code generator with your desired frame rate:</p>



<pre class="wp-block-code"><code>jltcgen -f 25</code></pre>



<p>This exposes the time code as a jack port. On my system this port is named <code>genltc:ltc</code>. You can display all ports usable in jack via</p>



<pre class="wp-block-code"><code>jack_lsp</code></pre>



<p>On my machine this gives the following output:</p>



<pre class="wp-block-code"><code>system:capture_1
system:playback_1
system:playback_2
genltc:ltc</code></pre>



<p><code>capture_1</code> is my mic, <code>playback_1</code> the left and <code>playback_2</code> the right speaker.</p>



<p>To connect the time code generator with my left speaker (and immediately disconnect it again because of the annoying sound) the commands are:</p>



<pre class="wp-block-code"><code># output time code on left and mic on right channel
jack_connect genltc:ltc system:playback_1
jack_connect system:capture_1 system:playback_2

# disconnect it again
jack_disconnect genltc:ltc system:playback_1
jack_disconnect system:capture_1 system:playback_2</code></pre>



<p>If you don&#8217;t disconnect it, your device will make some weird sound, which is the time code. In case you connect the mic like in my sample, make sure to use headphones, otherwise you will create a feedback loop.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Remove blank lines within a file with bash</title>
		<link>https://blog.mbo.dev/archives/1877</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Sat, 12 Nov 2022 13:17:59 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1877</guid>

					<description><![CDATA[As it was quite hard to find a proper command to replace all empty lines within a text file &#8211; including empty lines at the start and end &#8211; I wanted to take down the following command which worked for me on macOS:]]></description>
										<content:encoded><![CDATA[
<p>As it was quite hard to find a proper command to remove the empty lines at the start and end of a text file, I wanted to note down the following command, which worked for me on macOS (with GNU sed installed, since BSD sed supports neither <code>-z</code> nor the <code>\|</code> alternation):</p>



<pre class="wp-block-code"><code>sed -i -z 's/^\n*\|\n*$//g' file.txt</code></pre>
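

<p>A quick way to convince yourself what it does, here with a throwaway file created on the fly:</p>



<pre class="wp-block-code"><code>printf '\n\nhello\nworld\n\n\n' > demo.txt
sed -i -z 's/^\n*\|\n*$//g' demo.txt
cat demo.txt
# hello
# world</code></pre>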
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Local OpenFaaS environment with minikube and docker driver</title>
		<link>https://blog.mbo.dev/archives/1868</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Fri, 28 Oct 2022 17:37:51 +0000</pubDate>
				<category><![CDATA[Development]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Mac]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1868</guid>

					<description><![CDATA[Following the documentation using arkade was (as stated in the docs) the fastest and easiest way to get a working local installation running. There are numerous outdated tutorials about how to run OpenFaaS locally but none of them really worked in my environment. I am using a MacBook M1 Max with docker driver. I have docker [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Following the documentation, using arkade was (as stated in the docs) the fastest and easiest way to get a working local installation running. There are numerous outdated tutorials about how to run OpenFaaS locally, but none of them really worked in my environment. I am using a MacBook M1 Max with the docker driver. I have docker 20.10.17, faas-cli 0.14.11 and minikube 1.27.1 installed via brew. I followed <a rel="noreferrer noopener" href="https://docs.openfaas.com/deployment/kubernetes/#1-deploy-the-chart-with-arkade-fastest-option" target="_blank">https://docs.openfaas.com/deployment/kubernetes/#1-deploy-the-chart-with-arkade-fastest-option</a>, installed arkade 0.8.48 via brew and ran <code>arkade install openfaas</code>. After a short moment all services were up and running, and with port 8080 forwarded to localhost I was able to connect to OpenFaaS.</p>



<p>Here is a summary of the commands if you have everything installed and want to bring up minikube with OpenFaaS:</p>



<pre class="wp-block-code"><code>minikube start
arkade install openfaas

# wait until the following command shows all services as ready:
kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"

kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &amp;

PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

# check if the service answers:
faas-cli list

# print password
echo $PASSWORD</code></pre>



<p>After this you can open http://localhost:8080 and log in with the <code>admin</code> user and the password printed above.</p>
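

<p>To see a function in action, you can deploy one from the OpenFaaS store, for example the classic figlet sample, and invoke it:</p>



<pre class="wp-block-code"><code>faas-cli store deploy figlet
echo "OpenFaaS" | faas-cli invoke figlet</code></pre>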
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Replace macOS&#8217;s sed with GNU&#8217;s sed on a mac</title>
		<link>https://blog.mbo.dev/archives/1865</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Fri, 21 Oct 2022 03:04:46 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Mac]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1865</guid>

					<description><![CDATA[If you don&#8217;t want to use different syntax on your mac with sed command then it can be easily replaced with GNU sed by installing it via This installs the program as gsed which is also not handy if you don&#8217;t want to write different scripts for Linux and macOS. If you want to get [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>If you don&#8217;t want to deal with different syntax for the <code>sed</code> command on your Mac, it can easily be replaced with GNU sed by installing it via</p>



<pre class="wp-block-code"><code>brew install gnu-sed</code></pre>



<p>This installs the program as <code>gsed</code>, which is not handy either if you don&#8217;t want to write different scripts for Linux and macOS. If you want to get rid of macOS&#8217;s sed, you need to add the following line to your <code>.zshrc</code> file and restart your terminal or source the config:</p>



<pre class="wp-block-code"><code>PATH="$(brew --prefix)/opt/gnu-sed/libexec/gnubin:$PATH"</code></pre>



<p>This puts the GNU programs in front of macOS&#8217;s programs in your PATH, so whatever GNU tools you install via brew then replace the existing versions.</p>
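

<p>To verify that the GNU version is now the one being picked up (the exact path depends on your brew prefix):</p>



<pre class="wp-block-code"><code>which sed
# e.g. /opt/homebrew/opt/gnu-sed/libexec/gnubin/sed
sed --version | head -n 1
# sed (GNU sed) 4.x</code></pre>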
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>docker on mac without docker-desktop</title>
		<link>https://blog.mbo.dev/archives/1707</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Sun, 12 Dec 2021 15:25:15 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Ubuntu]]></category>
		<category><![CDATA[Virtualization]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1707</guid>

					<description><![CDATA[I just had a discussion where I was told that docker-desktop isn&#8217;t usable anymore because of their new licensing. So I had a look if docker-desktop is really required. In the end it is just a nicer integration with some desktop app to manage the background vm. I installed virtualbox and set up a vm [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I just had a discussion where I was told that docker-desktop isn&#8217;t usable anymore because of their new licensing. So I had a look at whether docker-desktop is really required. In the end it is just a nicer integration, with a desktop app to manage the background VM.</p>



<p>I installed virtualbox and set up a VM with a shared host adapter so it can easily be accessed via IP. On that Ubuntu VM, which I access from outside via SSH, I installed docker as documented in the official docker documentation and gave my user the proper rights to use docker. The VM has proper internet access and can run docker containers with ports mapped to the shared host adapter.</p>



<p>From my mac I installed docker via brew (not desktop) and added a context for the vm:</p>



<pre class="wp-block-code"><code>docker context create vm --description "local ubuntu vm" --docker "host=ssh://manuel@ubuntu"</code></pre>



<p>The &#8220;ubuntu&#8221; hostname was added to my /etc/hosts with the configured IP of the VM, and manuel is my SSH user on the VM.</p>
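

<p>The corresponding /etc/hosts entry looks something like this (the IP is just an example, use whatever address your VM got on the shared adapter):</p>



<pre class="wp-block-code"><code>192.168.56.10	ubuntu</code></pre>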



<p>With the new context created the context list looks like this:</p>



<pre class="wp-block-code"><code>➜  ~ docker context ls
NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT                                KUBERNETES ENDPOINT   ORCHESTRATOR
default *           moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                                          swarm
desktop-linux       moby                                                          unix:///Users/manuel/.docker/run/docker.sock                         
vm                  moby                local ubuntu vm                           ssh://manuel@ubuntu</code></pre>



<p>After switching the context to <code>vm</code> I can easily work with docker running on the VM:</p>



<pre class="wp-block-code"><code>docker context use vm</code></pre>



<p>With the use of contexts I was able to get around docker-desktop completely. So the argument that docker isn&#8217;t usable anymore on a non-Linux machine is busted.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Upgrade postgresql 13 to 14</title>
		<link>https://blog.mbo.dev/archives/1664</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Sun, 07 Nov 2021 15:32:21 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1664</guid>

					<description><![CDATA[As a big fan of postgresql I&#8217;m following latest releases based on the stable updates channel. But every time a new major release comes out you end up with two running instances on your machine and you have to upgrade your data from the old to the new one. Here some steps how I did [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>As a big fan of postgresql I&#8217;m following the latest releases based on the stable updates channel. But every time a new major release comes out you end up with two running instances on your machine, and you have to upgrade your data from the old to the new one. Here are the steps for how I did it this time:</p>



<p>First take down all services that depend on the database by using systemctl and also bring down the two pg instances:</p>



<pre class="wp-block-code"><code>user@~$ sudo systemctl stop postgresql@13-main
user@~$ sudo systemctl stop postgresql@14-main</code></pre>



<p>Then check the clusters you have:</p>



<pre class="wp-block-code"><code>user@~$ sudo su - postgres
postgres@~$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
13  main    5432 online postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log</code></pre>



<p>If you already see a new cluster for 14, check that it&#8217;s empty and, if so, remove it with</p>



<pre class="wp-block-code"><code>postgres@~$ pg_dropcluster --stop 14 main</code></pre>



<p>Then upgrade your existing cluster and bring the new server up again:</p>



<pre class="wp-block-code"><code>postgres@~$ pg_upgradecluster 13 main
postgres@~$ exit
user@~$ sudo systemctl start postgresql@14-main</code></pre>



<p>After this, bring your dependent services up and test them. Once verified, you can drop the old cluster:</p>



<pre class="wp-block-code"><code>user@~$ sudo su - postgres
postgres@~$ pg_dropcluster --stop 13 main</code></pre>



<p>This should be it. To get rid of the old version you can also run</p>



<pre class="wp-block-code"><code>user@~$ sudo apt purge postgresql-13 postgresql-client-13</code></pre>



<p>The described process worked well for me, but be warned that it has the potential to delete all your databases, so be careful. Just to be sure I want to state here that I don&#8217;t take any responsibility for data loss or the problems resulting from it ;-) Having backups before starting this process would be a good idea!</p>
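

<p>A simple way to take such a backup of all databases beforehand is <code>pg_dumpall</code> (the target path is just an example):</p>



<pre class="wp-block-code"><code>user@~$ sudo su - postgres
postgres@~$ pg_dumpall > /tmp/backup-before-upgrade.sql
postgres@~$ exit</code></pre>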
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>btrfs raid1 with two 3.5&#8243; sata disks under ubuntu 20.04 lts</title>
		<link>https://blog.mbo.dev/archives/1363</link>
		
		<dc:creator><![CDATA[Manuel Bogner]]></dc:creator>
		<pubDate>Sat, 22 May 2021 13:51:04 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.coffeebeans.at/?p=1363</guid>

					<description><![CDATA[I installed a raid1 with two 3.5&#8243; sata disks in one of my machines. Instead of using mdadm as usual I decided to go with btrfs this time. Here the commands I used for that (as root): This was already working properly and I was able to copy my data on the newly created raid. [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>I installed a raid1 with two 3.5&#8243; sata disks in one of my machines. Instead of using mdadm as usual, I decided to go with btrfs this time. Here are the commands I used for that (as root):</p>



<pre class="wp-block-code"><code># overwrite partition tables
dd if=/dev/zero of=/dev/sdc bs=512 count=1024
dd if=/dev/zero of=/dev/sdd bs=512 count=1024
# create btrfs with single drive first
mkfs.btrfs -m single /dev/sdc

# get uuid and add it to fstab for automount
blkid | grep sdc
# => /dev/sdc: UUID="cc3a8b12-5b75-4c71-b5a7-ad151b69eb22" ...
echo "UUID=cc3a8b12-5b75-4c71-b5a7-ad151b69eb22 /data btrfs defaults,autodefrag 0	0" >> /etc/fstab
mount -a

# add second disk
btrfs device add /dev/sdd /data
# and change the fs to raid1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /data</code></pre>
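

<p>You can watch the balance progress and afterwards verify that both data and metadata ended up in the raid1 profile:</p>



<pre class="wp-block-code"><code>btrfs balance status /data
# should then show Data, RAID1 and Metadata, RAID1 lines
btrfs filesystem df /data</code></pre>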



<p>This was already working properly and I was able to copy my data on the newly created raid. I added the following to my /etc/hdparm.conf to turn off the disks when not needed:</p>



<pre class="wp-block-code"><code># see hdparm -S: values 1-240 mean multiples of 5 seconds
# e.g. 120 * 5sec = 600sec = 10min
# frequent spin downs will damage a disk, so I chose 240 (20min)!

/dev/sdc {
	spindown_time = 240
}

/dev/sdd {
	spindown_time = 240
}</code></pre>



<p>Haven&#8217;t tested everything yet, but it works as expected so far =)</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
