How to override DNS for private networks with BIND RPZ

In our private network we have services that are exposed to the internet and that should also be used by users sitting inside the network (physically or via VPN). We have a main DNS server at a cloud provider, serving service.example.com pointing to our firewall's internet-facing address, and an internal DNS server with all the same records duplicated, except service.example.com, which points to the address reachable in the private network.

This arrangement causes some maintenance trouble because every time we add a new domain to the main DNS, we need to duplicate the entry on the internal server.

Some time ago I looked into how to solve this a little more elegantly and found the response-policy zone (RPZ) feature in BIND. With this setup, the internal server only needs entries for the domains that must resolve differently in our private network, without duplicating all the other records.

Step-by-step

Configure a response-policy in the options:

options {
  [...]
  response-policy { zone "rpz"; };
};

Create the zone you referenced in the response-policy ("rpz"):

zone "rpz" {
  type master;
  file "rpz.db";
};

Then populate the zone file referenced ("rpz.db"):

$TTL 300
@    IN SOA localhost. root.localhost. (
          2023091201  ; serial
          86400       ; refresh, seconds
          7200        ; retry, seconds
          3600000     ; expire, seconds
          86400       ; minimum, seconds
)

@        IN NS dns.google.

; use the full domain here, but without a trailing period (entries are relative to the RPZ zone origin)
production.example.com   IN A 192.168.0.10
dev.example.com          IN A 192.168.0.10

bad.ads.example.com      IN A 127.0.0.1
evil.domain.example.com  IN A 127.0.0.1

Reload the configuration and you should be good to go.
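
To apply and sanity-check the changes, something like this should work (assuming rndc is set up and the server is listening locally; the domain is just the example used above):

named-checkconf
named-checkzone rpz rpz.db
rndc reload
dig @127.0.0.1 production.example.com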

References:
https://www.redpill-linpro.com/sysadvent/2015/12/08/dns-rpz.html

https://dnsrpz.info/

Quick Profiling Python Code

cProfile is my go-to Python profiler, as it is part of the standard library: no extra modules needed. When profiling with cProfile, it generates an output with the call count and time spent for each called function. The main way I use it is by specifying an output file for later inspection:

python3 -m cProfile -o output_file myscript.py

This will generate a file named output_file to be opened, sorted, and analysed later. This file can be read with the pstats module (which is also part of the standard library) by using:

python3 -m pstats output_file

You can call sort without arguments to see the available options; sort time and sort cumulative are the first ones I usually try, and then stats will show the ordered data.
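
A typical session inside the pstats browser looks like this (sort by cumulative time, show the top 10 entries, then quit):

% sort cumulative
% stats 10
% quit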

There is also snakeviz, which is a visualizer for the cProfile output file format. But this one you will need to install with:

python3 -m pip install snakeviz
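
Once installed, point it at the same output file and it will open an interactive visualization in your browser:

snakeviz output_file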

Accessing corporate Git repositories without a VPN

I usually just write here so I can remember these things later, but this time it is something meant to be found by others.

So, while it is normal to have git repositories accessible on the public internet (access-controlled or not), it is possible that your repositories are only reachable within your corporate network, meaning you would need to be inside the company or on a VPN to use them.

One of the ways this can be worked around is a port forward over an SSH connection, for example when updating a web app in an outside development environment.

The port forward itself is straightforward:

ssh user@remote_host -R 10443:git.company.com:443

This will listen for requests to port 10443 inside remote_host and redirect them through the SSH connection to git.company.com:443.

This way it should be easy to clone, pull, and push directly to the centralized repository:

git clone https://localhost:10443/path_to/repository.git
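
One caveat: the TLS certificate presented will be for git.company.com, not localhost, so certificate verification will likely fail. A quick (if blunt) workaround is to disable verification for that clone:

git -c http.sslVerify=false clone https://localhost:10443/path_to/repository.git

A cleaner option is to point git.company.com at 127.0.0.1 in /etc/hosts on remote_host and clone via https://git.company.com:10443/ instead, so the certificate hostname matches.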

And that’s it.

Of course, the machine you run the SSH command from needs to be able to access the repository, either from inside the network or using a VPN, but the development environment does not.

By the way, maybe check with the company's IT department if it's OK to do this! 😉

Hope this is useful for someone!

Change keyboard Compose behavior

For some time I've been wanting to change the default behavior of my keyboard (on an Ubuntu 18.04 machine) when using the composition keys (dead keys).
The default behavior is for the double quote key (") to be a compose key, in order to be able to input characters like ä ("+a) and ö ("+o), so if I want a literal ", I need to press "+spacebar. I was used to just double-tapping the " key to get " (from the Windows days, I think), so I wanted that, but the default result of "+" is the diaeresis sign (¨).

I don't know if it's due to the setup I have (a default American English keyboard with a pt_BR locale).
My input source is: “English (US, intl., with dead keys)”
And formats are defined as: “Brasil”

After several searches and a few dead ends (ibus and ibus-tables), I finally arrived at a solution, for my use case anyway. It turns out that the compose sequences are controlled by the files in

/usr/share/X11/locale

And the file I needed to change was this one:

/usr/share/X11/locale/pt_BR.UTF-8/Compose

The files are quite self-explanatory and they seem to work in a hierarchical way, with the more specialized layouts including the more generic ones and overriding where needed. The pt_BR file includes the en_US one, and the values I needed to override were in the English one. Copying the relevant section and changing the end result was easy, and I just added this to the end of the file.

<dead_diaeresis> <dead_diaeresis>   : "\""   diaeresis # DIAERESIS
<Multi_key> <quotedbl> <quotedbl>   : "\""   diaeresis # DIAERESIS
<dead_acute> <dead_acute>           : "'"   acute # ACUTE ACCENT
<Multi_key> <apostrophe> <apostrophe>   : "'"   acute # ACUTE ACCENT

As I could not find an easy way to reload this, restarting the X server solved the rest.
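
As a side note, instead of editing system files, the same overrides can usually go into a per-user ~/.XCompose file that first pulls in the system defaults (%L expands to the current locale's system Compose file), though some toolkits with their own input methods may ignore it:

include "%L"
<dead_diaeresis> <dead_diaeresis>   : "\""   diaeresis # DIAERESIS
<dead_acute> <dead_acute>           : "'"   acute # ACUTE ACCENT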

MariaDB replication not auto reconnecting

A few weeks ago I migrated some services to a new server running Debian 9. One of the changes from version 8 to 9 is that the default mysql-server package installs MariaDB instead of MySQL. This should be OK, as MariaDB is supposed to be compatible with MySQL.

This service needed to be a replication slave of another instance, which is not directly accessible from the internet and is running an old version of MySQL. The setup was straightforward: set up the SSH tunnel, imported the current data with the master settings, configured table name translation, issued the CHANGE MASTER TO ... and START SLAVE commands, and voilà. All was well.

Over the next few weeks I kept receiving alerts that replication had stopped. I was always blaming the SSH tunnel, which kept going down. But the tunnel was being brought back up automatically, and MariaDB was not reconnecting by itself, requiring me to manually issue STOP SLAVE; START SLAVE; in order to bring replication back.

I tried changing the SSH scripts to use autossh (very nice, by the way), but the only thing that changed was that my check script never needed to bring the tunnel back up.

After a bit of searching, I found this article in the MariaDB Knowledge Base: https://mariadb.com/kb/en/library/replication-slave-loses-connection-and-does-not-recover/. The fix was changing the default character set from utf8mb4 to utf8 on the slave (the master does not support the newer utf8mb4 charset), basically a search and replace of utf8mb4 with utf8 in the configuration. The replication issue stopped happening.
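
For reference, a minimal sketch of what that change can look like in the server configuration (assuming the character set is set globally in my.cnf; exact file locations vary between distributions):

[mysqld]
character-set-server = utf8
collation-server = utf8_general_ci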

utf8mb4 is the “new” UTF-8 character set that was added to MySQL in version 5.5.3 to address the issue that not all UTF-8 code points can be stored in 3 bytes, as explained in this article from Thomas Shay.

I just wish MariaDB had given me nicer error messages explaining why the connection was broken and why it was not trying to reconnect. All the searching through maximum-retries, master_retry_count, and global variables was no help at all.

bash SEGFAULT on chroot

After upgrading to kernel 4.18, a chroot I used somewhat frequently stopped working. Every time I tried to start it I just got this simple, but horrifying, message:

~# chroot /path/to/jail /bin/bash -i -l
Segmentation Fault

As the project I was working on did not depend on that chroot, I had set this aside until now.

It turns out that LEGACY_VSYSCALL emulation was disabled in Debian's newer kernels…

~# diff /boot/config-4.9.0-8-amd64 /boot/config-4.18.0-0.bpo.1-amd64 | grep VSYSCALL
 CONFIG_X86_VSYSCALL_EMULATION=y
-# CONFIG_LEGACY_VSYSCALL_NATIVE is not set
-CONFIG_LEGACY_VSYSCALL_EMULATE=y
-# CONFIG_LEGACY_VSYSCALL_NONE is not set
+# CONFIG_LEGACY_VSYSCALL_EMULATE is not set
+CONFIG_LEGACY_VSYSCALL_NONE=y

Luckily this can be changed on the kernel command line, so adding vsyscall=emulate to the GRUB configuration made it work.

So to solve the problem I just had to change the file /etc/default/grub so it contained a line such as:

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"
...

Call update-grub and reboot!
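
For the record, the exact commands:

~# update-grub
~# reboot

After the reboot, cat /proc/cmdline should list vsyscall=emulate among the boot parameters.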

Why?

It looks like these changes are related to ASLR (Address Space Layout Randomization): the legacy vsyscall page lives at a fixed address, so enabling the emulation weakens ASLR and can open the door to security vulnerabilities.

Sources

* This was the post I found that helped me solve this problem: https://github.com/moby/moby/issues/28705

Upgrading the linux-image in Debian stretch

In the last few days I was trying to make some BPF scripts work, and for that I thought I needed to upgrade my Debian to a newer kernel. The original version was 4.9 with all the Debian patches, and I decided to go for the latest one available, which was 4.18.

$ apt-cache search linux-image got me a lot of options, like these:

...
linux-image-amd64 - Linux for 64-bit PCs (meta-package)
...
linux-image-4.9.0-7-686 - Linux 4.9 for older PCs
linux-image-4.9.0-7-686-dbg - Debug symbols for linux-image-4.9.0-7-686
linux-image-4.9.0-7-686-pae - Linux 4.9 for modern PCs
linux-image-4.9.0-7-686-pae-dbg - Debug symbols for linux-image-4.9.0-7-686-pae
...
linux-image-4.18.0-0.bpo.1-686 - Linux 4.18 for older PCs
linux-image-4.18.0-0.bpo.1-686-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-686
linux-image-4.18.0-0.bpo.1-686-pae - Linux 4.18 for modern PCs
linux-image-4.18.0-0.bpo.1-686-pae-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-686-pae
linux-image-4.18.0-0.bpo.1-amd64 - Linux 4.18 for 64-bit PCs
linux-image-4.18.0-0.bpo.1-amd64-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-amd64
linux-image-4.18.0-0.bpo.1-cloud-amd64 - Linux 4.18 for x86-64 cloud
linux-image-4.18.0-0.bpo.1-cloud-amd64-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-cloud-amd64
linux-image-4.18.0-0.bpo.1-rt-686-pae - Linux 4.18 for modern PCs, PREEMPT_RT
linux-image-4.18.0-0.bpo.1-rt-686-pae-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-rt-686-pae
linux-image-4.18.0-0.bpo.1-rt-amd64 - Linux 4.18 for 64-bit PCs, PREEMPT_RT
linux-image-4.18.0-0.bpo.1-rt-amd64-dbg - Debug symbols for linux-image-4.18.0-0.bpo.1-rt-amd64
...

I know my machine is amd64 (like most home computers) and I assumed bpo stood for backports.
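
If in doubt, dpkg can tell you which architecture the system is running:

$ dpkg --print-architecture
amd64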

Ultimately I fired off this:

$ sudo apt-get install linux-image-4.18.0-0.bpo.1-amd64
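
Note that these 4.18 packages come from the stretch-backports repository; if apt cannot find them, you probably need an entry like this in your sources.list (followed by an apt-get update):

deb http://deb.debian.org/debian stretch-backports main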

But bcc (https://github.com/iovisor/bcc) didn’t like it, issuing this error:

chdir(/lib/modules/4.18.0-0.bpo.1-amd64/build): No such file or directory

So I tried to install the headers in order to build the kernel modules, but the linux-headers-4.18.0-0.bpo.1-amd64 package complained that the installed linux-compiler-gcc-6-x86 was not the correct one, with this message:

The following packages have unmet dependencies:
 linux-headers-4.18.0-0.bpo.1-all : Depends: linux-headers-4.18.0-0.bpo.1-all-amd64 (= 4.18.6-1~bpo9+1) but it is not
 going to be installed
E: Unable to correct problems, you have held broken packages.

Nice, don’t you love when apt tells you that you’ve held broken packages?

So I looked it up and found out that I had three different versions available:

$ apt-cache show linux-compiler-gcc-6-x86 | grep Version
Version: 4.18.6-1~bpo9+1
Version: 4.9.130-2
Version: 4.9.110-3+deb9u6

And, of course, the one I had installed was for the 4.9 kernel:

$ dpkg -l | grep linux-compiler-gcc-6-x86
ii  linux-compiler-gcc-6-x86             4.9.130-2                         amd64        Compiler for Linux on x86 (meta-package)

Installing the correct one was simple:

$ sudo apt-get install linux-compiler-gcc-6-x86=4.18.6-1~bpo9+1

Then installing the headers was just a matter of issuing the apt command:

$ sudo apt install linux-headers-4.18.0-0.bpo.1-amd64

Afterwards, as I was trying to use some new BPF features, I needed the userland headers properly installed, so I upgraded the linux-libc-dev package as well:

$ sudo apt-get install linux-libc-dev=4.18.6-1~bpo9+1

Update:
This guy’s post is way better:
http://jensd.be/818/linux/install-a-newer-kernel-in-debian-9-stretch-stable

Using LDAP to authenticate with a svnserve server

I had this already set up on another server, but we had to set up a new SVN server, even though we have already switched most of our stuff to git…

So, after setting up the svnserve daemon, we need to set up LDAP authentication.

We are using Debian servers, so I needed to install sasl2-bin in order to have the saslauthd daemon:

apt-get install sasl2-bin

After that, we need to set the daemon to start automatically by editing the file /etc/default/saslauthd and changing two lines:

#...
START=no
#...
#...
MECHANISMS="pam"
#...

to

#...
START=yes
#...
#...
MECHANISMS="ldap"
#...

Then the saslauthd daemon needs to know how to reach the LDAP server. We configure this in the file /etc/saslauthd.conf; it is as simple as:

ldap_servers: ldap://server.address.example.com
ldap_port: 389
ldap_version: 3
ldap_password_attr: userPassword
ldap_auth_method: bind
ldap_filter: (uid=%u)
ldap_search_base: ou=Users,dc=example,dc=com

The daemon will look for an entry with the uid=USERNAME in the base ou=Users,dc=example,dc=com and will check the password against the attribute userPassword.

You can test if it is working using the testsaslauthd app, like this:

user@svn:/svn# testsaslauthd -u username -p secret
0: OK "Success."

user@svn:/svn# testsaslauthd -u username -p wrongSecret
0: NO "authentication failed"

We can then start the daemon by running service saslauthd start.
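
One gotcha worth mentioning: on Debian, the saslauthd socket (/var/run/saslauthd/mux) is only accessible to the sasl group, so the account running svnserve may need to be added to it. Assuming svnserve runs as a (hypothetical) user named svn:

adduser svn sasl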

Now we need to change svnserve.conf so it will actually delegate authentication to SASL. Make sure that the [sasl] section of the file looks like this:

[sasl]
use-sasl = true
#...

And we need to register the svn app with SASL. Apps are registered by creating a file at /usr/lib/sasl2/appname.conf. svnserve uses the name svn internally, so we need to create the file /usr/lib/sasl2/svn.conf with the following contents:

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN

We are now all set. We only need to restart the svnserve daemon and voilà, it's done!

Setting up a UTF-8 environment in Linux

Based on this post.

Check if the locales package is installed (dpkg -l locales). Then run:

dpkg-reconfigure locales

and choose the desired locales (en_US.UTF-8, for me).

Then, to be sure, export the variables (in ~/.bashrc):

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LANGUAGE=en_US.UTF-8
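
To confirm everything took effect, the locale command prints the active settings; every variable should show en_US.UTF-8 (or whichever locale you chose):

$ locale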

You should also start tmux or screen with these parameters, to be sure:

$ tmux -u
$ screen -U