Sunday, December 13, 2015

SPF check with Zimbra

!!! This was a step-by-step guide for Zimbra 8.x.x on Ubuntu 12.04 and is definitely outdated. !!!

Spoofing is one of the real headaches for email server administrators, especially when scam/spam artists send email from your own domain. The best way to stop this is to implement an SPF check on your server and add a TXT record for the domain in the name servers. Most of the email servers I work with are running Zimbra Collaboration Suite, which uses a modified version of Postfix as the MTA. Even though Zimbra has a way to implement SPF using cbpolicyd, I could not find decent documentation or forum entries with enough detail; I had to go through five or six different documents before I got SPF to work successfully on Zimbra.
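For reference, the DNS side of this is a TXT record on your domain. A minimal sketch (example.com and the address range are placeholders; list your real sending hosts):

        example.com.    IN    TXT    "v=spf1 mx ip4:203.0.113.0/24 -all"

This says mail claiming to be from example.com may only originate from the domain's MX hosts and the listed network; everything else should hard-fail.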

Here you can find a step-by-step guide on how to implement SPF checking on Zimbra servers. Tested on Zimbra 8.0.7 on Ubuntu 12.04. Run all the commands below as the zimbra user.

Activate SPF-CHECK on Zimbra to minimize Spoofing

1) zmprov ms `zmhostname` +zimbraServiceInstalled cbpolicyd +zimbraServiceEnabled cbpolicyd

2) zmlocalconfig -e postfix_enable_smtpd_policyd=yes

3) zmprov mcf +zimbraMtaRestriction "check_policy_service inet:127.0.0.1:10031"

4) zmlocalconfig -e cbpolicyd_log_level=4

5) zmlocalconfig -e cbpolicyd_module_checkspf=1
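You can read the keys back to confirm they took effect; zmlocalconfig prints the current value when given key names:

            zmlocalconfig cbpolicyd_module_checkspf cbpolicyd_log_level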

6) In /opt/zimbra/backup, create a file group.sql with the following content:

BEGIN TRANSACTION;
INSERT INTO "policies" (Name,Priority,Description) VALUES('Zimbra CBPolicyd Policies', 0, 'Zimbra CBPolicyd Policies');
INSERT INTO "policy_members" (PolicyID,Source,Destination) VALUES(6, 'any', 'any');
COMMIT;

7) sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb < /opt/zimbra/backup/group.sql
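The hardcoded PolicyID of 6 here and in spf.sql below assumes the new policy is assigned ID 6, which is typical on a stock install where the database already ships with a handful of default policies. It is worth verifying before moving on:

            sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb "SELECT ID,Name FROM policies;"

If the new policy got a different ID, use that value instead of 6 in both files.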

8) In /opt/zimbra/backup/, create a file spf.sql with the following content:

BEGIN TRANSACTION;
INSERT INTO "checkspf" (PolicyID,Name,UseSPF,RejectFailedSPF,AddSPFHeader,Comment,Disabled) VALUES (6,'SPF Policy',1,1,1,'Zimbra CheckSPF Policy',0);
COMMIT;

9) sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb < /opt/zimbra/backup/spf.sql
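Note that RejectFailedSPF=1 makes cbpolicyd reject mail that hard-fails SPF; set it to 0 if you only want the SPF header added without rejecting. You can check the row was inserted with:

            sqlite3 /opt/zimbra/data/cbpolicyd/db/cbpolicyd.sqlitedb "SELECT * FROM checkspf;"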

10) If antivirus is disabled, add the following lines at the top of the /opt/zimbra/conf/zmconfigd/smtpd_sender_restrictions.cf file:

permit_sasl_authenticated
permit_mynetworks

If antivirus/antispam is enabled, cut the topmost line from /opt/zimbra/conf/zmconfigd/smtpd_sender_restrictions.cf and paste it as the last line of the file. The line should look like this:

%%contains VAR:zimbraServiceEnabled cbpolicyd^ check_policy_service inet:localhost:@@cbpolicyd_bind_port@@%%
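Either way, the goal of the ordering is that permit_sasl_authenticated and permit_mynetworks get evaluated before the cbpolicyd check, so mail from your own users and trusted networks is never SPF-checked. Roughly, the file should end up in this shape (the other template lines vary by Zimbra version and are elided here):

            permit_sasl_authenticated
            permit_mynetworks
            ...
            %%contains VAR:zimbraServiceEnabled cbpolicyd^ check_policy_service inet:localhost:@@cbpolicyd_bind_port@@%%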

11) Cut the topmost line of /opt/zimbra/conf/zmconfigd/smtpd_recipient_restrictions.cf and paste it as the third line from the bottom.

%%contains VAR:zimbraServiceEnabled cbpolicyd^ check_policy_service inet:localhost:@@cbpolicyd_bind_port@@%%


12) zmcontrol restart


Thursday, September 17, 2015

High Availability DNS with BIND9+GlusterFS+HeartBeat

!!! This is from 2015 using an already outdated method. You can use this as a guideline, but everything here is fairly outdated. !!!

While at work we had to create a cluster for a DNS server. The replication method we were using at the time was DRBD, but I was curious about other ways to replicate data in an Ubuntu Linux environment. That is when I found GlusterFS. I have done this on Ubuntu 12.04 as well as 14.04.

Okay, replication is not the best use of GlusterFS, but it works well for it, especially when the files being replicated are small, and that is exactly what DNS data is. This is a step-by-step guide for creating a High Availability DNS cluster.

Step 1 - Installing GlusterFS and Prerequisites


1) Add the following line to /etc/apt/sources.list (step 3 below adds the same PPA via add-apt-repository; either way works):

deb http://ppa.launchpad.net/gluster/glusterfs-3.5/ubuntu precise main

2) apt-get install python-software-properties
3) add-apt-repository ppa:gluster/glusterfs-3.5
4) apt-get update
5) apt-get install glusterfs-server
6) apt-get install xfsprogs
7) apt-get install fuse-utils (not available in Ubuntu 14.04, and not needed there)



Step 2 - Configuring the Nodes


Assume the disk used for replication is /dev/sdb, the primary server is server1 with IP 192.168.0.1, and the secondary server is server2 with IP 192.168.0.2.

1) Change the /etc/hosts file as follows on both servers.
       
        127.0.0.1        localhost
        192.168.0.1    server1
        192.168.0.2    server2

2) fdisk /dev/sdb (create a single partition, /dev/sdb1)
3) mkfs.xfs -i size=512 /dev/sdb1
4) Create a mount point to mount this disk.
        mkdir /mnt/gluster

5) Add line in /etc/fstab to mount this on startup
    
      UUID=(uuid of the dev) /mnt/gluster xfs defaults 0 0
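      You can read the UUID of the new partition with:

          blkid /dev/sdb1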

6) Run mount -a to check the new fstab line. 
7) Create a folder for replication
          mkdir /mnt/gluster/replicate

8) Find peers and add them to the trusted pool. (If you want to add additional nodes later, you have to probe them from a server already in the pool; you cannot probe the pool from a new server.)
       
    On Server1
             gluster peer probe server2

    On Server2
             gluster peer probe server1
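    You can verify the pool on either server with:

             gluster peer status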

9) On server1
      Here data is the name of the volume. We can use any name.

             gluster volume create data replica 2 transport tcp server1:/mnt/gluster/replicate server2:/mnt/gluster/replicate

10) Start the volume
        
               gluster volume start data
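        You can confirm the volume is started and lists both bricks with:

               gluster volume info data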

11) Create a directory /dns (or any name you like that is not currently used) to mount the replication folder
         
          mkdir /dns
12) Add another mount line to fstab like below. 
        
        on server1
           server1:/data /dns glusterfs defaults,_netdev 0 0
        on server2
           server2:/data /dns glusterfs defaults,_netdev 0 0 

13) Remount again.
       
         mount -a

We have now completed the replication. 
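A quick smoke test, once /dns is mounted on both servers:

         touch /dns/testfile        # on server1
         ls -l /dns/testfile        # on server2; the file should appear almost immediately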

Step 3 - DNS configurations 


1) Create a directory /dns/bind to store the bind9 related data.
      
      mkdir /dns/bind

2) Copy all directories containing zone files to /dns/bind (on the replicated mount).

3) Remove/rename the original directories and create symbolic links pointing to the new locations, as sketched below.

4) Do the same for the /etc/bind/named.conf.local file.
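As a sketch of steps 2-4, assuming the zone files live in /etc/bind/zones (a hypothetical path; adjust to your layout):

          cp -a /etc/bind/zones /dns/bind/
          mv /etc/bind/zones /etc/bind/zones.orig
          ln -s /dns/bind/zones /etc/bind/zones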

5) Make the following changes in the /etc/bind/named.conf file.

              Comment out/remove the following line:
                 include "/etc/bind/named.conf.local";
              Add the following line:
                 include "/dns/bind/named.conf.local";

6) Assuming all your bind-related files are under /dns/bind, add the following line to the /etc/apparmor.d/usr.sbin.named file.

          /dns/bind/** rw,
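Reload AppArmor afterwards so the new rule takes effect:

          service apparmor reload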

Now we have two servers with the same DNS zones, which will run on two separate IP addresses. This alone could be a setup for disaster recovery, where we can manually do the IP failover. But that is not good enough for us, is it?

This is where HeartBeat comes in. 

Step 4 - HeartBeat Configuration 

Here we assume that the published IP for your DNS server is 192.168.0.3. Clients will connect to this IP, which will initially be assigned to server1 through HeartBeat. If server1 fails, server2 will take over this IP and serve DNS queries.

1) apt-get install heartbeat

2) Create a file /etc/heartbeat/ha.cf and add the following details on both servers:

         debug 0
         debugfile /var/log/ha-debug
         logfacility local0
         keepalive 2
         deadtime 20 # timeout before the other server takes over
         bcast eth0
         node server1 server2 #node host names
         auto_failback on # very important or auto failover won't happen

3) Create a file /etc/heartbeat/haresources and add the following line on both servers (the file must be identical on both; server1 is the preferred node):

           server1 IPaddr::virtual.IP.Address/netmask/interface bind9
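With the published IP from above, and assuming a /24 netmask on eth0 (substitute your own values), the line would read:

           server1 IPaddr::192.168.0.3/24/eth0 bind9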

4) Create a file /etc/heartbeat/authkeys on both servers and add the following lines, replacing the shared secret on the last line with your own value (the same on both nodes):
          auth 3
          3 md5 sequenceofnumbers
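Heartbeat will refuse to start unless this file is readable by root only:

          chmod 600 /etc/heartbeat/authkeys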

5) Remove heartbeat from the startup programs (see the command below). This is because GlusterFS takes some time to initialize on reboot, and if heartbeat kicks in before GlusterFS it will not be able to start the bind services, which leads to unexpected behaviour.
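On Ubuntu 12.04/14.04 this can be done with update-rc.d:

          update-rc.d -f heartbeat remove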

6) Create a script /root/hbstart to start heartbeat, and make it executable with chmod +x /root/hbstart.

            #!/bin/sh
            /etc/init.d/heartbeat start

7) Call the script 1 minute after reboot with a cron job in root's crontab (crontab -e):

            @reboot /bin/sleep 60 && /root/hbstart