Resizing a LUKS-backed BTRFS RAID1 filesystem
Using Home Assistant to power my CI/CD server
Configuring EdgeRouter to provide IPv6 from tunnelbroker.net on my local network
Automatically restarting systemd services once the binary is updated
Announcing my new project: canastra.online
Using a Raspberry Pi Zero as an offline canastra server
Announcing my new project: What to cook?
Hi! I’m running Caddy and saving access logs to disk in the JSON format. I want to integrate fail2ban to block bots trying /wp-login.php and other known URLs, and I couldn’t find much about how to make fail2ban read Caddy’s logs. This is a hack that I quickly came up with, barely tested, but I managed to make it work:

/etc/fail2ban/filter.d/caddy-forbidden.local:

[Definition]
failregex = "client_ip":"<HOST>"(.*)"status":403
datepattern = "ts":<DATE>\.
ignoreregex =

Append to /etc/fail2ban/jail.local: ...
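A minimal sketch of what that /etc/fail2ban/jail.local entry could look like, assuming Caddy writes its JSON access log to /var/log/caddy/access.log (the path and the thresholds are placeholders, not values from the post):

[caddy-forbidden]
enabled  = true
port     = http,https
filter   = caddy-forbidden
logpath  = /var/log/caddy/access.log
maxretry = 3
bantime  = 3600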
Hi! I got some good feedback from @[email protected] related to my previous post: with flyway it’s possible to have a JavaMigration, which lets you write custom code. So let’s test this out today!

JavaMigration

It took me a while to make this work. I couldn’t figure out what was meant by adding the class to the db.migration package. Some places mentioned src/db/migration, some places mentioned src/main/java/db/migration and I even tried src/main/resources/db/migration (it shouldn’t work, but that’s where my SQL files are), but it was fruitless. In the end, @ComponentScan helped me again: I created a new sub-package migration in my Spring project and annotated my class with @Component, and that made it work, with the added benefit of allowing me to control the location of the files. ...
Hi! It has been a long time since I last wrote any meaningful Java code. Aside from some Jenkins plugin debugging here and there, the bulk of my Java experience is from the 2000s, mostly desktop apps (Swing/AWT), so after a nudge from a friend I decided to dust off my long-forgotten Java skills and set out to rewrite YT Email using Spring as a learning exercise.

First impressions

A big chunk of my software engineering experience is with PHP, so that’s what it’s easiest to compare things to, and right from the start I felt right at home, as most of the concepts are also present in Symfony/Doctrine and there is a high degree of translatability between the two frameworks. There are also a few things that Magento does similarly to Spring, just under different names. ...
Hi! I have an always-on Raspberry Pi at home, and once in a while I need to connect to something on my home network, or even exit to the internet as if I were at home (quite handy to access services that block datacenter/country IP ranges). This post documents all the steps needed to make it work.

Architecture

My home connection is behind a few layers of (CG)NAT, so I can’t connect to it directly from outside my home network. Instead, I’ll be tunnelling through a VPS that I own. This approach consists of two parts: a persistent SSH tunnel between the Raspberry Pi and a VPS, and a connection from my laptop to the Raspberry Pi, through the SSH tunnel. ...
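As a rough sketch of those two parts (hostnames, users and ports below are placeholders, not the ones from the post):

# On the Raspberry Pi: keep a reverse tunnel open to the VPS,
# exposing the Pi's SSH port as port 2222 on the VPS.
autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 tunnel@vps.example.com

# On the laptop: reach the Pi by jumping through the VPS.
ssh -J tunnel@vps.example.com -p 2222 pi@localhost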
Hi! I used to self-host a NAS on my home network out of a Raspberry Pi 4B and a couple of HDDs, but after a few years I learned that the Pi is somewhat underpowered for my needs, getting in the way of my backup strategy. Once in a while I would search for NUCs and other small form factor computers, but I never found something in the price range I was comfortable paying. Eventually, however, something clicked: the right solution had been under my nose the entire time, a 2011 Dell Vostro 3450. ...
Hi. A couple of years ago I “took inspiration” for an HTTP reverse proxy in Go from Stack Overflow without putting too much thought into it, and this week it came back to bite me. A co-worker found out that it was normalising some URLs (/something//else will 301-redirect to /something/else) against their will. So I decided to take the opportunity to understand better how net/http handles URLs, and here are my findings. ...
I briefly outlined my NAS setup in my ZFS backup strategy blogpost, but here’s a quick recap: I have a Raspberry Pi 4 4GB with a 1TB SATA HDD over USB running under the TV in my living room, and a second USB HDD for mirroring. I’ve been running this setup for around 18 months now, and unfortunately it doesn’t quite fit my needs. In the previous post, I focused too much on the remote/cloud backups for ZFS, so I just took it for granted that mirroring the disks would be trivial using ZFS. While ZFS does mirroring by default, I now understand that it’s intended as a solution for always-online disks, so I couldn’t rely on that feature without ZFS constantly nagging that the zpool is unhealthy and resilvering the disk every time I plugged it in. To get around that, I decided to keep the zpool with a single disk and zfs send the data to the second disk whenever I felt like it, mostly because once the disks were fully synced, the delta between the second disk and what’s stored on the cloud would be quite small (<100MB). ...
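A manual sync like that could boil down to something like the following sketch (pool, dataset and snapshot names are made up; the incremental send assumes the previous snapshot still exists on both sides):

# snapshot the live dataset
zfs snapshot -r tank/data@2023-01-15

# first time: full send to the pool on the second disk
zfs send -R tank/data@2023-01-15 | zfs receive -F backup/data

# afterwards: incremental send covering only the changes since the last sync
zfs send -R -i tank/data@2023-01-01 tank/data@2023-01-15 | zfs receive -F backup/data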
It has now been close to half a decade since I acquired my Lenovo Y720-15IBK, so I thought about writing an update on this machine, given it is still my main personal machine, seeing almost daily usage for general browsing and a bit of programming. I also game on it semi-regularly, having played a good deal of Forza Horizon 4 (Ultra @ 1080p) and ETS2 over this holiday season.

Hardware

Upgrading to a Kingston A2000 was the only hardware change I have made to this machine, mostly because I needed 1TB of storage. I haven’t seen the need to upgrade to a better machine, as I can still run the games I like to play (disclaimer: I only have 60Hz displays and I don’t care much about graphics quality anyway) and all my programming happens on Linux (no VMs), so the i7-7700HQ should be plenty for a little while. ...
As far as I remember, you can’t create a write-only key via Backblaze’s dashboard without also giving read access to the key. I want to use this specifically for uploaders on servers, so, if compromised, an attacker can’t read data out of the bucket.

$ curl https://api.backblazeb2.com/b2api/v2/b2_authorize_account -u "MASTER_KEY_ID:MASTER_KEY_SECRET"
{
  "apiUrl": "https://api003.backblazeb2.com",
  "authorizationToken": ".....",
}

Replace apiUrl and authorizationToken in the next command:

$ curl "https://$apiUrl/b2api/v2/b2_create_key" \
    -d '{"capabilities": ["listBuckets","writeFiles"],"keyName":"key-name","accountId":"MASTER_KEY_ID"}' \
    -H "Authorization: $authorizationToken"
{
  "accountId": "0f0f0f0f0f0f",
  "applicationKey": "K....",
  "applicationKeyId": "00....",
  "bucketId": null,
  "capabilities": [
    "listBuckets",
    "writeFiles"
  ],
  "expirationTimestamp": null,
  "keyName": "key-name",
  "namePrefix": null,
  "options": [
    "s3"
  ]
}

That’s all. ...
I am now doing some experiments running my own NAS at home (mostly out of boredom), and I went with a small solution that goes inside my IKEA PS with a Raspberry Pi 4, a couple of 1TB USB SATA disks and ZFS on Linux mirroring them. I have less than 200GB in data and a very stable 50Mbps uplink at home, so this post explains my strategy to backup my data in a remote location. ...
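For context, a mirrored pool of the kind described here is created with something along these lines (device paths are placeholders):

# create a mirrored pool out of the two USB disks
zpool create tank mirror \
    /dev/disk/by-id/usb-DISK_A /dev/disk/by-id/usb-DISK_B

# check the pool and its redundancy
zpool status tank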
It’s been almost a year since I purchased this gaming computer, and only now have I needed it for anything other than gaming. I have a friend who recently bought a notebook with an Nvidia GTX 1060 and installed Fedora on it, making it a very snappy workstation, so I decided to give Fedora a go again. My computer came with Windows 10 by default and I never changed anything there, so here I am writing down all the knowledge I gained from installing Fedora on this machine. ...
That’s not a new thing, but I happened to use it during the weekend to access some services back in Brazil that were IP-limited and HideMyAss couldn’t help with, so I asked a friend for some proxy help. What I did on my side:

- Opened a port on my modem to forward the connection to port 51000 on my computer;
- Started a container: docker run -p 51000:51000 -p 51001:51001 --rm -it ubuntu:xenial bash;
- Installed supervisord and openssh-server, and added GatewayPorts yes to /etc/ssh/sshd_config.

My friend had to run these commands in parallel: ...
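Purely as a guess at what the friend’s side of such a setup can look like (placeholder users, with MY_PUBLIC_IP standing in for the modem’s address), it would be a reverse tunnel back into the container, which GatewayPorts then exposes on port 51001:

# on the friend's machine: expose their local SSH daemon on port 51001 of the container
ssh -p 51000 -N -R 51001:localhost:22 root@MY_PUBLIC_IP

# on my machine: open a SOCKS proxy that exits through the friend's connection
ssh -p 51001 -N -D 1080 friend@localhost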
This week’s task was optimising Magento for large carts (100+ different products), and a profiler is a good tool to find small pieces of code that could be optimised. Blackfire was my choice, mostly because of previous experience, but also because it is not a resource hog like Xdebug, which was taking around 6 minutes while Blackfire took only 20 seconds for the same type of request.

Companion

Blackfire works great for profiling GET requests in the browser with the help of Companion, but it doesn’t allow you to profile POST requests. Also, Companion sends multiple requests to the same endpoint, but Magento’s place order is not stateless, so the first and all subsequent requests would differ substantially. ...
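One workaround, not necessarily the one this post settles on, is the blackfire CLI, which can wrap an arbitrary curl invocation, POST included (the URL, cookie and payload below are placeholders):

blackfire curl -X POST 'https://shop.example.com/checkout/onepage/saveOrder' \
    -H 'Cookie: frontend=SESSION_ID' \
    --data 'payment[method]=checkmo'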
Hi, it’s common to use the netcat utility with SSH’s ProxyCommand, which allows you to go through a bridge server, very useful when you need to connect to a host behind a firewall. Example:

# File: ~/.ssh/config
Host workbox
    HostName 192.168.1.92 # The IP address that the bridge server can see
    User anotherusername
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh [email protected] nc -w 120 %h %p

I had been using netcat until a couple of days ago, and it worked very smoothly. Then I set up a new server on another private network, behind a CentOS 7 firewall, and when I tried the configuration above, I got this: ...
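As an aside (not necessarily the fix this post arrives at): on reasonably recent OpenSSH the bridge doesn’t need netcat at all, since ssh itself can forward the connection; host names below are placeholders:

Host workbox
    HostName 192.168.1.92
    User anotherusername
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -W %h:%p bridgeuser@bridge.example.com
    # or, with OpenSSH 7.3 or newer, simply:
    # ProxyJump bridgeuser@bridge.example.com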
Today I was working on a 3rd party module. Something very simple, an AJAX Add to Cart button. But when I requested http://.../ajaxcart/cart/add, I got a 302 HTTP status, i.e. a redirect to the same requested URL, but in secure mode (HTTPS). After debugging, I understood that the controller wasn’t being called, so I tried to find a configuration problem. The Administration Panel was configured to be served over HTTPS (Configuration > General > Web > Secure > ???), and when I removed this option, everything worked as expected. ...
Quick tip to use regex on nginx’s server_name directive. Using a specific root directory for every subdomain:

server {
    listen 80;
    server_name ~^(.*)\.project\.com$;
    root /home/www/project/$1;
}

Using an environment variable with a single project:

server {
    listen 80;
    server_name ~^(.*)\.project\.com$;
    fastcgi_param CUSTOMER $1;
    root /home/www/project;
}

You can also use the entire domain name with this:

server {
    listen 80;
    server_name ~^(.*)\.(.*\..*)$;
    root /home/www/$2/subdomains/$1/public_html;
}

Tested with Nginx 1.4.x. ...
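A small variant on the same idea, not from the post itself: regex server_name also supports named captures, which read better than $1/$2 once the config grows:

server {
    listen 80;
    server_name ~^(?<customer>.*)\.project\.com$;
    fastcgi_param CUSTOMER $customer;
    root /home/www/project;
}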
Quick tip on KDE-based Debian: when you receive the following error trying to run an X application with sudo:

kassner@brian:~$ sudo unetbootin
No protocol specified
unetbootin: cannot connect to X server :0

run the following command and try again:

kassner@brian:~$ xhost SI:localuser:root
localuser:root being added to access control list
Quick tip: when you have code like this:

{% trans %}prefix.{{ varname }}{% endtrans %}

and receive this error:

Twig_Error_Syntax: A message inside a trans tag must be a simple text

you can use the following piece of code as a workaround:

{{ ("prefix." ~ varname)|trans }}

PS: maybe this is not the best way to go.
Hi folks, today I struggled with an already-known bug in Symfony2 when using UniqueEntity together with Entity Inheritance. In the bug discussion, @gentisaliu recommended using a custom repository, and I’m documenting here how to do that. Entities:

/**
 * @ORM\Table(name="parent")
 * @ORM\Entity(repositoryClass="Repository\Parent")
 * @UniqueEntity(fields={"name"}, repositoryMethod="findByName", message="Name already used.")
 * @ORM\InheritanceType("JOINED")
 * @ORM\DiscriminatorColumn(name="type", type="string")
 * @ORM\DiscriminatorMap({
 *     "a" = "ChildA",
 *     "b" = "ChildB"
 * })
 */
class Parent
{
}

/**
 * @ORM\Entity(repositoryClass="Repository\Parent")
 * @ORM\Table(name="child_a")
 */
class ChildA extends Parent
{
}

/**
 * @ORM\Entity(repositoryClass="Repository\Parent")
 * @ORM\Table(name="child_b")
 */
class ChildB extends Parent
{
}

repositoryMethod: ...
Hello. Today I bumped into a problem with ping:

kassner@brian$ ping git.company.local
ping: unknown host git.company.local

Of course, it’s a local address, so maybe I forgot to add the local DNS server. Let’s check:

kassner@brian$ dig A git.company.local

; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> A git.company.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15746
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 4

;; QUESTION SECTION:
;git.company.local.        IN    A

;; ANSWER SECTION:
git.company.local.    86400    IN    A    192.168.0.150

;; Query time: 0 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Thu Nov 14 12:05:45 2013
;; MSG SIZE  rcvd: 249

WTF? Oh, of course: I’m using a .local suffix, so Avahi will take action. It’s needless for my local network, so I just disabled it on Debian Wheezy: ...
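A sketch of the usual ways to stop Avahi from grabbing .local on Wheezy (assuming a sysvinit system and that mDNS isn’t needed at all; the exact steps the post takes are truncated above):

# stop the daemon and keep it from starting at boot
sudo service avahi-daemon stop
sudo update-rc.d avahi-daemon disable

# alternatively, drop the mdns4_minimal entry from the hosts line
# of /etc/nsswitch.conf, leaving something like:
#   hosts: files dns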
Hi folks, a little note on working with the product object in Magento:

/**
 * Using setStoreId two times, otherwise it will save the data to the default store
 */
$product = Mage::getModel('catalog/product')
    ->setStoreId($storeId)
    ->loadByAttribute('ean', $ean)
    ->setStoreId($storeId);
Mental note: if you are moving rewrite rules from .htaccess to the VirtualHost configuration and get a 400 Bad Request error, one or both of the tips below can be useful:

RewriteEngine On

# Use %{DOCUMENT_ROOT}
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -s [OR]
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -l [OR]
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]

# Use the absolute path
RewriteRule ^.*$ /home/www/html/index.php [NC,L]

That’s all.
Hello, a little tip to convert AVI files to DVD with embedded subtitles.

Installing tovid:

sudo apt-get install tovid

Converting the AVI file to a DVD ISO:

tovid -dvd -in Video.avi -subtitles Legenda.srt -out Video
/usr/share/tovid/makexml Video.mpg -out Video
export VIDEO_FORMAT=NTSC
/usr/share/tovid/makedvd Video.xml
mkisofs -dvd-video -udf -R -o Video.iso Video/

So, you just need to burn your ISO file. You can use Brasero.
Hello. After an intense debugging session in Magento, we (Filipe Ibaldo and I) found two problems in Magento’s code related to a slow Place Order with many products in the cart (not a large qty of a single product). One of the problems is the order item save, which is executed in app/code/core/Mage/Sales/Model/Entity/Order/Attribute/Backend/Parent.php, in the afterSave method, which took a lot of time on my machine (Core 2 Duo 2.26GHz/4GB RAM), about 0.3 seconds for each product. ...
Hello. Today I found a problem with Zend Framework and its class autoloader, where a class_exists() call fires the Zend Framework autoloader. The function has an option to ignore the autoloader, but then any class that needs to be loaded by Zend_Loader will return false. And if I call class_exists() without disabling the autoloader, I get a “file not exists” PHP warning, because Zend_Loader doesn’t check whether the file exists, it just includes it. ...