

  • Chapter 18. Taking Backups
    you need. For instance, to extract all /etc/portage files: tar xvzpf /media/usb/poormanbackup.tar.gz -C / etc/portage. Note the space between / and etc/portage.

    More Managed Backups. A slightly more complicated, but probably safer, method is to use tools that provide you with a managed backup solution. Managed means that you tell them what to backup (although they do propose certain backup situations) and they take care of the rest, such as repetitive backups, as well as easy restore activities in case things go wrong. If you take a look at the packages that Gentoo offers in its app-backup category, you'll find that there are plenty of them. I can't tell you which one is better, because that is a personal taste. But I will provide a few pointers so you can get started.

    Backup Ninja. The backupninja application is a tool which is scheduled to run every night (for instance) and that will read in the configuration file(s) in /etc/backup.d. Inside this directory, you define what needs to be backed up, how often, etc. For instance, the same poor man's backup settings of above would result in a file similar to the following.

    File /etc/backup.d/20-basic-system.rdiff:
      ## to backup regular files
      when = daily
      options = --exclude-special-files
      nicelevel = 15
      [source]
      label = myhostname.mydomain
      type = local
      keep = 60
      include = /etc
      include = /var/lib/portage/world
      include = /usr/local
      include = /home
      exclude = /home/*/.mozilla/firefox/*.default/Cache
      [dest]
      type = local
      directory = /var/backups/backupninja

    File /etc/backup.d/19-kernel-config.sh:
      zcat /proc/config.gz > /etc/kernel-config

    File /etc/backup.d/21-copy-to-usb.sh:
      rsync -avug /var/backups/backupninja /media/usb

    The result of this is that the kernel configuration file is generated first (into /etc/kernel-config), after which incremental backups are taken of the presented locations and stored in /var/backups/backupninja. Finally, these files are copied (synchronized) to /media/usb, which might be a USB stick or external disk drive. If you ever want to store that backup outside of your home (for instance at your parents' or kids' house), detach the USB stick and store it there. Just make sure another USB stick is attached and mounted for the next backup cycle.

    Bare Metal Recovery. When you want to make sure your system is up and running in no time, even when the entire system crashed, you need to take bare metal backups (full system backups) which you can restore quickly. A popular method to accomplish this is to use imaging software tools.

    PartImage. If you installed Gentoo using the sysresccd CD, then you already have the tools at your disposal to perform imaging backups. A popular tool is PartImage, and it is available on the sysresccd. The tool offers a graphical (or rather curses-based) interface, but you don't need to use that: the tool also supports command-line backups. An example usage would be the following: Boot from the …
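    The backup-and-restore cycle described above can be sketched as a small script. It demonstrates the round trip on a scratch tree; in real use ROOT would be / and DEST a mounted USB stick such as /media/usb, so every path below is an illustrative assumption.

```shell
#!/bin/sh
# Poor man's backup round trip, demonstrated on a scratch tree.
# In real use ROOT would be / and DEST a mounted USB stick such as
# /media/usb; every path below is an illustrative assumption.
set -eu

ROOT=$(mktemp -d)     # stands in for /
DEST=$(mktemp -d)     # stands in for /media/usb
RESTORE=$(mktemp -d)  # scratch restore target

# Fake a bit of system state worth backing up.
mkdir -p "$ROOT/etc/portage"
echo 'app-editors/vim' > "$ROOT/etc/portage/package.use"

# Back up: archive relative to ROOT, like the chapter's tar ... -C /.
tar czpf "$DEST/poormanbackup.tar.gz" -C "$ROOT" etc/portage

# Restore only etc/portage: note the space between the -C target and
# the member name, exactly as in "tar xvzpf ... -C / etc/portage".
tar xzpf "$DEST/poormanbackup.tar.gz" -C "$RESTORE" etc/portage

cmp "$ROOT/etc/portage/package.use" "$RESTORE/etc/portage/package.use" \
  && echo "restore verified"
```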

    Original URL path: http://swift.siphos.be/linux_sea/backup.html (2016-05-01)


  • Chapter 19. Using A Shell
    application has a file open, it gets a file descriptor assigned. This is a number that uniquely identifies a file for a specific application. The file descriptors 0, 1 and 2 are reserved for standard input, standard output and standard error output. The 2>&1 suffix tells Unix/Linux that the file descriptor 2 (standard error) should be redirected to file descriptor 1 (>&1).

    This also brings us to the redirection sign (>). If you want the output of a command to be saved in a file, you can use the redirect sign to redirect the output into a file: emerge -uDN world > /var/tmp/emerge.log. The above command will redirect all standard output to /var/tmp/emerge.log (i.e. save the output into a file). Note that this will not store the error messages (on standard error) in the file: because we did not redirect the standard error output, it will still be displayed on screen. If we want to store that output as well, we can either store it in the same file (emerge -uDN world > /var/tmp/emerge.log 2>&1) or store it in a different file (emerge -uDN world > /var/tmp/emerge.log 2> /var/tmp/emerge-errors.log). Now there is one thing you need to be aware of: the redirection sign needs to be directly attached to the output file descriptor, so it is "2>" and not "2 >" (with a space). In case of standard output, this is implied: ">" is actually the same as "1>".

    Grouping Commands. Shells also offer a way to group commands. If you do this, it is said that you create a sub-shell that contains the commands you want to execute. Now why would this be interesting? Well, suppose that you want to update your system, followed by an update of the file index. You don't want them to be run simultaneously, as that would affect performance too much, but you also don't want this to be done in the foreground: you want the shell free to do other stuff. We have seen that you can background processes by appending a & at the end: emerge -uDN world > emerge-output.log 2>&1 &. However, chaining two commands together with the background sign is not possible:
      emerge -uDN world > emerge-output.log 2>&1 & && updatedb &
      bash: syntax error near unexpected token `&&'
    If you drop the & from the command, both processes will run simultaneously. The grouping syntax comes to the rescue: ( emerge -uDN world > emerge-output.log 2>&1; updatedb ) &. This also works for output redirection: ( emerge --sync; layman -S ) > sync-output.log 2>&1 &. The above example will update the Portage tree and the overlays, and the output of both commands will be redirected to the sync-output.log file.

    Storing Commands. All the above examples are examples that are run live from your shell. You can, however, write commands in a text file and execute this text file. The advantage is that the sequence of commands is …
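    The redirection and grouping rules above can be tried safely by substituting harmless echo commands for emerge and updatedb (the log file name is illustrative):

```shell
#!/bin/sh
# Demonstrate redirection and command grouping with harmless commands
# standing in for emerge/updatedb; the log file name is illustrative.
set -eu

log=$(mktemp)

# Group two commands in a sub-shell, redirect stdout and stderr of the
# whole group into one file, and background the group as a single unit.
( echo "updating system"; echo "updating file index" ) > "$log" 2>&1 &
wait  # the shell stayed free meanwhile; wait for the background job

cat "$log"
```

    Both lines land in the same log, in order, showing that the group shares one redirection and one background job.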

    Original URL path: http://swift.siphos.be/linux_sea/shellscripting.html (2016-05-01)

  • Chapter 20. Tips and Answers to Questions
    list of new supported software. Users of that distribution can then just start upgrading their packages to the new software without requiring any reinstall. Distributions do make new releases often, but this is mostly because the installation media itself (installation CD) and tools are updated to fit the latest hardware available.

    The Role of the Community. The Gentoo Linux distribution offers the discussion mediums discussed in this chapter. The Gentoo Linux Forums are heavily used (over a thousand postings a day) web forums. Gentoo hosts its mailing lists itself; you can find an overview of the available mailing lists online. On the Freenode IRC network, Gentoo has a few official chat channels, including the generic #gentoo channel. There is also the official Gentoo Wiki.

    Running Linux. Organising your home directory should not be taken lightly. By using a good structure, you can easily backup important documents, access files you often need, and still keep a good overview. For instance, to create the directories as given in the exercise: mkdir doc pics work tmp. With this simple structure, the most important directory would be doc (personal documents) and perhaps work. You most likely do not want to back up the temporary files (tmp), and the pics folder might require a lower frequency on backups. Of course, you should attempt to use a structure you find the most appealing.

    The tar command allows you to group multiple files and directories into a single file (archive). It was originally created to allow administrators to back up multiple files on tape (tar is most likely short for tape archive). A tar file is not compressed: it is merely a concatenation of the files with additional metadata about the files (such as filename and permissions). This is where the gzip/bzip2 compression comes in: these compression methods do not support archiving, so one should first archive the files together in a tar file and then compress the tarfile using gzip or bzip2. gzip is the most used as it offers a fast compression algorithm; bzip2 is popular too because it has a higher compression rate. The combination (result) of tar with gzip/bzip2 creates what is called a tarball. These usually have the extension .tar.gz (or .tgz) for gzip, or .tar.bz2 for bzip2. The fourth compression is provided by the compress command. Just like gzip/bzip2, it compresses a single file; its extension is .Z, so a tarball would yield .tar.Z as an extension. compress is the oldest method of these four (compress, zip, gzip, bzip2) and is supported on all Unix and Unix-alike operating systems.

    Photorec is a software tool that allows you to recover removed files from a file system. In Gentoo, you can install this tool through app-admin/testdisk.

    The Linux File System. The command to recursively change a mode can be found in the manual page of chmod (man chmod). In effect, the command could be chmod -R o+r /tmp/test. All underlying directories of /tmp/test will …
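    The archive-then-compress flow can be exercised on a scratch home structure like the one from the exercise (all names are illustrative):

```shell
#!/bin/sh
# Build the doc/pics/work/tmp structure from the exercise, then create
# and inspect a tarball (tar archives, gzip compresses; here the two
# steps are combined through tar's -z flag). All names are illustrative.
set -eu

home=$(mktemp -d)
cd "$home"
mkdir doc pics work tmp
echo "important notes" > doc/notes.txt

# Archive only the directories worth keeping; tmp and pics are skipped,
# matching the backup priorities discussed above.
tar czf backup.tar.gz doc work

# A .tar.gz is just a gzip-compressed tar archive; list its members.
tar tzf backup.tar.gz
```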

    Original URL path: http://swift.siphos.be/linux_sea/tipsandanswers.html (2016-05-01)

  • Glossary
    of instructions to use by software engineers. By limiting the complexity of the instructions, RISC CPUs can be made faster (clock-wise) than more complex CPUs. CISC (Complex Instruction Set Computing): a CPU instruction design that attempts to offer a large list of powerful CPU instructions to use by software engineers. The powerful instructions allow engineers to perform complex tasks with a minimum of instructions. The CPU is optimized to perform …

    Original URL path: http://swift.siphos.be/linux_sea/glossary.html (2016-05-01)

  • Index
    and Child Relationships init scripts Introduction initial ram file system Kernel Modules initial root disk Kernel Modules initramfs see initial ram file system initrd see initial root disk inittab Services at System Boot Shutdown INPUT DEVICES Installing Xorg instruction set What is an Architecture intellectual property What are Software Licenses ip command Verify your Networking Abilities iptables Firewall Configuration IRC Online Communities Chat iw connect Accessing a Wireless Network dev Verifying Wireless Capabilities link Accessing a Wireless Network list Verifying Wireless Capabilities scan Accessing a Wireless Network iwconfig Verifying Wireless Capabilities iwlist Accessing a Wireless Network J JACK Using Sound Servers jobs Backgrounding Processes journal File Systems K KDE What is a Distribution kernel Kernel building Building a Linux Kernel configuring automatically Using Gentoo s genkernel Script configuring manually Manually Configuring a Kernel kernel module Kernel Modules kernel seed Manually Configuring a Kernel kernel space Introduction keymaps keymaps KEYWORDS Architecture Availability kill Sending Signals and Killing Processes L label Placing a File System on a Partition LABEL Labels versus UUIDs layman Using Third Party Software Repositories less Viewing Text Files Man Pages LICENSE License Approval license groups License Approval LINGUAS Languages link Files and File Structure hard Linking Files and Directories symbolic Files and File Structure Linking Files and Directories Linux Install Fest Local Communities linux kernel Linux as the Kernel of the Linux Operating System linux kernel module see kernel module Linux Standard Base Linux Standard Base Linux User Group Local Communities LinuxTag LinuxTag ln Linking Files and Directories loadkeys Keyboard Settings local local locale Locale Settings localmodconfig Loading the Kernel Configuration Utility localmount localmount localyesconfig Loading the Kernel Configuration Utility locate mlocate 
logrotate Maintaining Log Files with Logrotate ls Navigating lsattr Listing and Modifying Attributes LSB Linux Standard Base lsmod Working with Modules lsof Process Information lspci PCI Devices lsusb USB Devices LUG Local Communities M MAC Permissions and Attributes mailing list Mailinglists make conf Modifying Build Decisions make profile Switching System Profiles makewhatis Man Pages man Man Pages mandatory access control Permissions and Attributes manual page Man Pages mask Package States masquerading Forwarding Requests Master Boot Record Installing GRUB MBR Installing GRUB meminfo Memory memory management Kernel mkfs Placing a File System on a Partition mkfs ext2 Placing a File System on a Partition mkfs ext3 Placing a File System on a Partition mknod Managing Device Files mkswap Placing a File System on a Partition modinfo Working with Modules modprobe Working with Modules blacklist Loading Modules modprobe conf Working with Modules modprobe d Working with Modules modules Loading Modules service modules modules autoload modules more Viewing Text Files mount Mounting File Systems The mount Command and the fstab file mount point The mount Command and the fstab file MPlayer What is a Distribution mqueue File Systems mv Moving and Renaming Files Directories MySQL Free Software isn t Non Commercial N named pipe Files and File Structure ndiswrapper Support through Windows Drivers net lo net lo net Network File Server NFS newsgroup Internet and Usenet Forums NFS NFS

    Original URL path: http://swift.siphos.be/linux_sea/ix01.html (2016-05-01)

  • Chapter 2. Platform selection
    one from the various daemons. The local system logger is then configured to send the data to the central log server and, depending on the administrator's needs, locally as well. If local logs are needed, make sure that the logs are properly rotated using logrotate and a regular cron job.

    Administration. To administer the system and the components hosted on it, OpenSSH (for access to the system) and Puppet (for managing configuration settings) are used. Figure 2.5. Operating system administration. Standard operator/administrator access to the operating system is handled through the SSH secure shell. The OpenSSH daemon will be configured to use a central user repository for its authentication of users. This allows administrators to, for instance, change their password on a single system and ensure that the new password is then in use for all other systems as well. The SSH client is configured to download SSH fingerprints from the DNS server in case of a first-time connection to the server. The configuration management will be handled through Puppet, whose configuration repository will be managed through a version controlled system and pulled from the systems.

    Monitoring. Systems, and the components and services that are hosted further, will be monitored through Icinga. Figure 2.6. Operating system monitoring. The Icinga agent supports various plugins that allow monitoring of various aspects of the operating system and the services that run on it. The results of each query are then sent to the central Icinga database. The monitoring web interface, which is discussed later, interacts with the database to visually represent the state of the environment.

    Operations. Considering this is a regular platform with no additional services on it yet, there are no specific operations defined yet.

    Users. For the user management on a Linux system, a central LDAP service for the end user accounts and administrator accounts is used. The functional accounts though (the Linux users under which daemons run) are defined locally. This ensures that there is no dependency on the network or LDAP for those services. However, for security reasons, it is important that these users cannot be used to interactively log on to the system. The root account, which should only be directly used in case of real emergencies, should have a very complex password, managed by a secured password management application. End users are made part of one or more groups. These groups define the SELinux user assigned to them, and thus the rights they have on the system: even if they need root rights, their actions will be limited to those tasks needed for their role.

    Security. Additional services on the server are: compliance validation using openscap, inventory assessment using openscap, auditing through the Linux auditd daemon (sent through the system logger for immediate transport), a host-based firewall using iptables (managed through Puppet), and integrity validation of critical files.

    Compliance validation. To support a central compliance validation method, we use a local SCAP scanner (openscap) and centrally manage the configurations and results. This is implemented in a tool called pmcs (Poor Man's Central SCAP). Figure 2.7. Running compliance and inventory validation. The communication between the central server and the local server is HTTP(S) based.

    Inventory management. As SCAP content is used to do inventory assessment, pmcs is used here as well.

    Auditing. Auditing on Linux systems is usually done through the Linux audit subsystem. The audit daemon can be configured to provide auditing functionalities on various OS calls, and most security-conscious services are well able to integrate with auditd. The important part that still needs to be covered is to send the audit events to a central server. The system logger is leveraged for this, and auditd is configured to dispatch audit events to the local syslog.

    Host-based firewall. A host-based firewall will assist in reducing the attack space towards the server, ensuring that network-reachable services are only accessed
    from more or less trusted locations. Managing host-based firewall rules can be complex. We use the Puppet configuration management services to automatically provide the necessary firewall rules.

    Integrity validation of critical files. Critical files on the system are also checked for possibly unwanted manipulations. AIDE (Advanced Intrusion Detection Environment) can be used for this. In order to do offline scanning (so that malicious software inside the host cannot meddle with the integrity validation scans), snapshotting is used on storage level and scanning is done on the hypervisor.

    Pluggable Authentication Modules. Authentication management (part of access management) on a Linux server can be handled by PAM (Pluggable Authentication Modules). With PAM, services do not need to provide authentication services themselves. Instead, they rely on the PAM modules available on the system. Each service can use a different PAM configuration if it wants, although most of the time authentication is handled similarly across services. By calling PAM modules, services can support two-factor authentication out of the box, immediately use centralized authentication repositories, and more. PAM provides a flexible, modular architecture for the following services: authentication management, to verify if a user is who it says it is; account management, to check if that user's password has expired or if the user is allowed to access this particular service; session management, to execute certain tasks on logon or logoff of a user (auditing, mounting of file systems...); password management, offering an interface for password resets and the like.

    Principles behind PAM. When working with PAM, administrators quickly find out what the principles are that PAM works with. The first one is back-end independence. Applications that are PAM-aware do not need to incorporate any logic to deal with back-ends such as databases, an LDAP service, password files, WS-Security enabled web services, or other back-ends that have not been invented yet. By using PAM, applications segregate the back-end integration logic from their own. All they need to do is call PAM functions. Another principle is configuration independence. Administrators do not need to learn how to configure dozens of different applications on how to interact with an LDAP server for authentication. Instead, they use the same configuration structure provided by PAM. The final principle, which is part of the PAM name, is its pluggable architecture. When new back-ends need to be integrated, all the administrator has to do is install the library for this back-end (by placing it in the right directory on the system) and configure this module (most of the modules use a single configuration file). From that point onward, the module is usable for applications. Administrators can configure the authentication to use this back-end and usually just need to restart the application.

    How PAM works. Applications that want to use PAM link with the PAM library (libpam) and call the necessary functions that reflect the above services. Other than that, the application does not need to implement any specific features for these services, as it is all handled by PAM. So when a user wants to authenticate itself against, say, a web application, then this web application calls PAM, passing on the user id (and perhaps password or challenge), and checks the PAM return to see if the user is authenticated and allowed access to the application. It is PAM's task, underlyingly, to see where to authenticate against (such as a central database or LDAP server). Figure 2.8. Schematic representation of PAM. The strength of PAM is that everyone can build PAM modules to integrate with any PAM-enabled service or application. If a company releases a new service for authentication, all it needs to do is provide a PAM module that interacts with its service, and then all software that uses PAM can work with this service immediately: no need to rebuild or enhance those software titles.

    Managing PAM configuration. PAM
    configuration files are stored in /etc/pam.d and are named after the service for which the configuration applies. As the service name used by an application is often application-specific, you will need to consult the application documentation to know which service name it uses in PAM.

    Important: As the PAM configuration file defines how to authenticate users, it is extremely important that these files are very difficult to tamper with. It is recommended to audit changes on these files, perform integrity validation, keep backups, and more.

    Next to the configuration files, we also have the PAM modules themselves, inside /lib/security or /lib64/security. These locations are often forgotten by administrators to keep track of, even though they are equally important as the configuration files. If an attacker can overwrite modules, or substitute them with his own, then he also might have full control over the authentication results of the application.

    Important: As the PAM libraries are the heart of the authentication steps and methods, it too is extremely important to make them very difficult to tamper with. Again, auditing, integrity validation and backups are seriously recommended.

    The PAM configuration files are provided on a per-application basis, although one application configuration file can refer to other configuration file(s) to use the same authentication steps. Let's look at a PAM configuration file for an unnamed service:
      auth     required  pam_env.so
      auth     required  pam_ldap.so
      account  required  pam_ldap.so
      password required  pam_ldap.so
      session  optional  pam_loginuid.so
      session  required  pam_selinux.so close
      session  required  pam_env.so
      session  required  pam_log.so level=audit
      session  required  pam_selinux.so open multiple
      session  optional  pam_mail.so
    Notice that the configuration file is structured in the four service domains that PAM supports: authentication, account management, password management and session management. Each of the sections in the configuration file calls one or more PAM modules. For instance, pam_env.so sets the environment variables which can be used by subsequent modules. The return code provided by the PAM module, together with the control directive (required or optional in the above example), allows PAM to decide how to proceed:
      required - The provided PAM module must succeed in order for the entire service (such as authentication) to succeed. If a PAM module fails, other PAM modules are still called upon, even though it is already certain that the service itself will be denied.
      requisite - The provided PAM module must succeed in order for the entire service to succeed. Unlike required, if the PAM module fails, control is immediately handed back and the service itself is denied.
      sufficient - If the provided PAM module succeeds, then the entire service is granted. The remainder of the PAM modules is not checked. If, however, the PAM module fails, then the remainder of the PAM modules is handled, and the failure of this particular PAM module is ignored.
      optional - The success or failure of this particular PAM module is only important if it is the only module in the stack.
    Chaining of modules allows for multiple authentications to be done, multiple tasks to be performed upon creating a session, and more.

    Configuring PAM on the system. In order to connect the authentication of a system to a central LDAP server, the following lines need to be added to the /etc/pam.d/system-auth file (don't replace the file, just add the lines):
      auth     sufficient pam_ldap.so use_first_pass
      account  sufficient pam_ldap.so
      password sufficient pam_ldap.so use_authtok use_first_pass
      session  optional   pam_ldap.so
    Also install the sys-auth/pam_ldap and sys-auth/nss_ldap packages. A second step is to configure pam_ldap.so. For /etc/ldap.conf the following template can be used. Make sure to substitute the domain information with the one used in the environment:
      suffix dc=genfic,dc=com
      bind_policy soft
      bind_timelimit 2
      ldap_version 3
      nss_base_group ou=Group,dc=genfic,dc=com
      nss_base_hosts ou=Hosts,dc=genfic,dc=com
      nss_base_passwd ou=People,dc=genfic,dc=com
      nss_base_shadow ou=People,dc=genfic,dc=com
      pam_filter objectclass=posixAccount
      pam_login_attribute uid
      pam_member_attribute memberuid
      pam_password exop
      scope one
      timelimit 2
      uri ldap://ldap.genfic.com ldap://ldap1.genfic.com ldap://ldap2.genfic.com
    Secondly, /etc/openldap/ldap.conf needs to be available on all systems too:
      BASE dc=genfic,dc=com
      URI ldap://ldap.genfic.com:389 ldap://ldap1.genfic.com:389 ldap://ldap2.genfic.com:389
      TLS_REQCERT allow
      TIMELIMIT 2
    Finally, edit /etc/nsswitch.conf so that other services can also use the LDAP server (next to central authentication):
      passwd: files ldap
      group:  files ldap
      shadow: files ldap

    Learning more about PAM. Most (if not all) PAM modules have their own dedicated manual page (man pam_env). Other information is easily available on the Internet, including the Linux-PAM project at http://www.linux-pam.org and the PAM System Administration Guide at http://linux-pam.org/Linux-PAM-html/Linux-PAM_SAG.html.

    Gentoo Hardened. To increase security of the deployments, all systems in this reference architecture will use a Gentoo Hardened deployment. Within the Gentoo Linux community, Gentoo Hardened is a project that oversees the research, implementation and maintenance of security-oriented projects in Gentoo Linux. It focuses on delivering viable security strategies for high-stability production environments, and is therefore absolutely suitable for this reference architecture. Within this book's scope, all services are implemented on a Gentoo Hardened deployment with the following security measures in place: PaX, PIE/PIC, SSP, SELinux (as MAC), and grSecurity kernel improvements. The installation of a Gentoo Hardened system is similar to a regular Gentoo Linux one. All necessary information can be found on the Gentoo Hardened project page.

    PaX. The PaX project, part of grSecurity, aims to update the Linux kernel with defense mechanisms against exploitation of software bugs that allow an attacker access to the software's address space (memory). By
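    Once /etc/nsswitch.conf lists ldap next to files, the wiring can be smoke-tested with getent, which walks the same NSS stack that PAM-enabled services use. This is a hedged sketch: it assumes a glibc-based system providing getent, and the genfic.com servers above remain the book's example domain.

```shell
#!/bin/sh
# Smoke-test NSS lookups after editing /etc/nsswitch.conf.
# getent consults the configured sources in order (files, then ldap),
# so a known local user resolves even before the LDAP back-end is
# reachable, and directory users will appear once it is.
set -eu

# root always exists in /etc/passwd, so this works without LDAP.
getent passwd root

# Informational: total entries visible through all configured sources.
getent passwd | wc -l
```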
    exploiting this access, a malicious user could introduce or execute arbitrary code, or execute existing code without the application's intended behavior or with different data than expected. One of the defence mechanisms introduced is NOEXEC. With this enabled, memory pages of an application cannot be marked both writeable and executable. So either a memory page contains application code, but cannot be modified (kernel enforced), or it contains data, and cannot be executed (kernel enforced). The enforcement methods used are beyond the scope of this book, but are described online. Enforcing NOEXEC does have potential consequences: some applications do not work when PaX enforces this behavior. Because of this, PaX allows administrators to toggle the enforcement on a per-binary basis. For more information about this, see the Hardened Gentoo PaX Quickstart document (see resources at the end of this chapter). Note that this also requires PIE/PIC built code (see later).

    Another mechanism used is ASLR, or Address Space Layout Randomization. This thwarts attacks that need advance knowledge of addresses (for instance through observation of previous runs). With ASLR enabled, the address space is randomized for each application, which makes it much more difficult to guess where a certain code or data portion is loaded, and as such attacks will be much more difficult to execute successfully. This requires the code to be PIE-built. To enable PaX, the hardened-sources kernel in Gentoo Linux needs to be installed and configured according to the instructions found in the Hardened Gentoo PaX Quickstart document. Also install paxctl:
      emerge hardened-sources
      emerge paxctl

    PIE/PIC/SSP. The given abbreviations describe how source code is built into binary, executable code. PIC (Position Independent Code) is used for shared libraries, to support the fact that they are loaded in memory dynamically and without prior knowledge of the addresses. Whereas older methods use load-time relocation (where address pointers are all rewritten the moment the code is loaded in memory), PIC uses a higher abstraction of indirection towards data and function references. By building shared objects with PIC, relocations in the text segment in memory (which contains the application code) are not needed anymore. As such, these pages can be marked as non-writeable. To find out if there are libraries that still require text relocations, install the pax-utils package and scan the libraries for text relocations:
      emerge pax-utils
      scanelf -lpqt
      TEXTREL /opt/Citrix/ICAClient/libctxssl.so
    In the above example, the libctxssl.so file is not built with PIC and, as such, could be more vulnerable to attacks, as its code-containing memory pages might not be marked as non-writeable.

    With PIE (Position Independent Executables) enabled, executables are built in a fashion similar to shared objects: their base address can be relocated, and as such PaX's ASLR method can be put into effect to randomize the address in use. An application binary that is PIE-built will show up as a shared object file, rather than an executable file, when checking its ELF header:
      readelf -h /bin/ls | grep Type
        Type: DYN (Shared object file)
      readelf -h /opt/Citrix/ICAClient/wfcmgr.bin | grep Type
        Type: EXEC (Executable file)

    SSP, finally, stands for Stack Smashing Protection. Its purpose is to add additional buffers after memory allocations for variables and such, which contain a cryptographic marker (often called the canary). When an overflow occurs, this marker is also overwritten (after all, that's how overflows work). When a function would return, this marker is first checked to see if it is still valid. If not, then an overflow has occurred, and the application is stopped abruptly.

    Checking PaX and PIE/PIC/SSP results. If the state of the system needs to be verified after applying the security measures identified earlier, install paxtest and run it. The application supports two modes: kiddie and blackhat. The blackhat test gives the worst-case scenario back, whereas the kiddie mode runs tests that are more like the ones script kiddies would run. The paxtest application simulates certain attacks and presents plausible results to the reader. A full explanation of the tests ran can be found in …
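    The readelf check above can be wrapped in a small helper that classifies binaries by their ELF type. This sketch assumes binutils' readelf is installed and skips the check gracefully where it is not; the two sample binaries are illustrative.

```shell
#!/bin/sh
# Classify binaries as PIE-built (ELF type DYN) or not (EXEC),
# mirroring the "readelf -h ... | grep Type" check above.
# Assumes readelf from binutils; the loop is skipped when absent.

if command -v readelf >/dev/null 2>&1; then
  for bin in /bin/ls /bin/sh; do
    type=$(readelf -h "$bin" | awk '/Type:/ { print $2 }')
    case "$type" in
      DYN)  echo "$bin: DYN (PIE or shared object)" ;;
      EXEC) echo "$bin: EXEC (not PIE)" ;;
      *)    echo "$bin: unexpected ELF type $type" ;;
    esac
  done
else
  echo "readelf not found; install binutils to run this check"
fi
```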

    Original URL path: http://swift.siphos.be/aglara/platform.html (2016-05-01)

  • Chapter 3. The environment at large
    reason alone, differentiation between tenants is at the highest level: the most segregated level. Figure 3.1. Multi-tenant setup. The most important part here is that anything used within a tenant that might be shared across tenants (such as user ids for administration) is pushed to the tenant, never used directly from the tenant hub. This provides a clean, modular approach to handling tenants. When a tenant wants to leave the organization, the data flow is stopped and the tenant can continue with its internal architecture with little to no immediate adaptations. When a new tenant enters the organization, data is pushed and converted towards that tenant's internal services. Communication between the tenants directly should be done through the external gateways, as it is seen as B2B (Business-to-Business) communication.

    SLA groups. Larger environments will have different SLA groups. Those could be called production, preproduction, testing and sandbox, for instance. Smaller organizations might have just two, or even one, SLA group. Figure 3.2. SLA group structure. The segregation between the SLA groups not only makes proper service level agreements possible on the various services, but also controls communication flows between these SLA groups. For instance, communication of data between production and pre-production might be possible, but has to be governed through the proper gateways or control points. In the above figure, the SLA groups are layered, so direct communication between production and sandbox should not be possible unless through the three gateway levels. However, that is definitely not a mandatory setup. To properly design such SLA groups, make sure communication flows in either direction (which not only includes synchronous communication, but also file transfers and such) are properly documented and checked.

    Architectural positioning. The next differentiator is the architectural positioning. This gives a high-level overview …

    Original URL path: http://swift.siphos.be/aglara/environment.html (2016-05-01)

  • Chapter 4. DNS services
    credentials, confidential information and more. He only needs to modify the IP address replies that the DNS server would give, and have them point to his own website. This is called DNS hijacking, and it does not only introduce the risk of spoofed websites, but also changes in e-mail flows: for instance, the malicious user changes the MX records to point to his own e-mail server, so he can watch and even modify e-mail communication to the official domain. To provide some level of protection against these threats, a number of security changes are recommended:
    If the registrar supports DNSSEC, then enable DNSSEC on the zones as well. With DNSSEC, the resource records in the zone are digitally signed by private keys, and that key is signed by the parent domain (hence the need for the registrar and higher-level domains to support it too). This provides some additional protection against DNS hijacking, although it is not perfect: end users must use DNSSEC for their lookups, and should have a valid, trusted keystore containing the root DNS server keys.
    Enable SPF (Sender Policy Framework), so that when mail servers receive an e-mail that is supposed to be sent from the official locations, or by someone within the environment, the mail server can check the origin address against the SPF records in the DNS to validate whether they match.
    Enable DKIM (DomainKeys Identified Mail) to sign outgoing e-mails, and provide the DKIM public key in the DNS records so that DKIM-supporting mail servers can validate the signatures.
    Enable DMARC (Domain-based Message Authentication, Reporting and Conformance) to receive reports on how mails from that domain are treated, as well as to identify which protective measures the domain takes.
    DNSSEC. The DNSSEC structure adds a number of DNS records that help with the authentication of the DNS data, as well as with verification of the integrity of this data. This protects the DNS records from both hijacking and tampering (Figure 4.7, "DNSSEC overview"). The idea is that a record in the example,
    the record for www.genfic.com, has its various types signed. In the example, two types are provided, A and AAAA; each of these has its own signature in an RRSIG (Resource Record Signature) field. The RRSIG fields are digital signatures made with a private key called the ZSK (Zone Signing Key), of which the public key part is available in the zone in a DNSKEY (DNS public Key) field. This field is also signed by a private key called the KSK (Key Signing Key), likewise available in a DNSKEY field in the zone. Now, to verify that the KSK public key is the proper key, it is signed not only with the KSK private key, but also by the zone signing key of the parent domain, in our example the .com zone. That domain has a DS (Delegation Signer) field in the genfic.com zone record, which contains a hash of the genfic.com KSK public key; the field itself (and this hash) is signed through the parent zone's RRSIG field.
    In the drawing, NSEC (Next Secure) fields are also shown. These chain multiple secure records with each other, providing a way for requestors to make sure a record really does not exist: upon retrieval of a nonexisting record, the DNS server replies with data pointing out that, between records 1 and 2, there is no record for the requested one. Next to NSEC, it is also possible to use NSEC3. NSEC3 covers one of the possible security holes that NSEC introduces: malicious people can walk across all the NSECs to get a view of the entire zone. For fully public zones this is not an issue, but many DNS servers want to expose only a few of the records in a zone, and NSEC requires that all records are public. With NSEC3 (DNSSEC Hashed Authenticated Denial of Existence), not the record name itself (like www) but a hashed value of it is used. This prevents walking over the entire zone, while still providing validation that records do or don't exist.
    SPF (Sender Policy Framework). In SMTP (Simple Mail Transfer Protocol), mail clients need to pass on information about the mail they are sending. It starts with the HELO or EHLO statements, telling the
    mail server who the client system is, followed by information on whom the mail is from (MAIL FROM) and to whom the mail should be sent (RCPT TO):

      220 mail.internal.genfic.com
      HELO relay.internal.genfic.com
      250 relay.internal.genfic.com Hello relay.internal.genfic.com [10.24.2.3], pleased to meet you
      MAIL FROM: someone@genfic.com
      250 someone@genfic.com... Sender ok
      RCPT TO: otherone@genfic.com
      250 otherone@genfic.com... Recipient ok (will queue)
      DATA
      354 Enter mail, end with "." on a line by itself

    A first check that mail servers should do is to validate that the fully qualified hostname provided through the HELO or EHLO commands really comes from that host. If it doesn't, then it is possibly a fake mail. But other than that, there are few checks done to see if the mail is truly sent from and to those mail boxes. With SPF, mail servers will do one or two more checks: the mail server will first check if the host, as provided by the HELO/EHLO, is allowed to send mails, and the mail server will then check if this host is allowed to send mail for the given domain, as provided by the MAIL FROM. For this to happen, specific fields are added to the host record:

      genfic.com.                 IN SPF "v=spf1 mx a -all"
      genfic.com.                 MX 10 relay.internal.genfic.com.
      relay.internal.genfic.com.  IN SPF "v=spf1 ip4:10.24.2.3 a -all"
      relay.internal.genfic.com.  A 10.24.2.3

    The SPF records can be read as: mails for the genfic.com domain are allowed to be sent from any system that is defined as an MX record; all others are denied. The relay.internal.genfic.com system is allowed to send mail as long as it is really this system (with IP 10.24.2.3).
    DKIM. DKIM (DomainKeys Identified Mail) uses cryptographic signatures on the e-mail (and on important headers of the e-mail) to provide authenticity for the mail. Mail servers that use DKIM will sign the outgoing e-mails with a private key, whose public component can be obtained through DNS in the hostname._domainkey.domain TXT record. Receiving mail servers can then validate that the mail they receive, if it has a DKIM
    signature, has not been tampered with along the way. A mail can have DKIM signatures of non-authoritative domains though (domains that are not related to the mail). An extension of DKIM, called ADSP (Author Domain Signing Practices), provides a signature that is authenticated through the same domain as the mail says it is from. If ADSP is used, then an additional TXT record exists that informs recipients of this. Recipients should check this field to see whether they should find a proper DKIM signature from that domain or not. This TXT record is found at _adsp._domainkey.domain.
    DMARC. DMARC (Domain-based Message Authentication, Reporting and Conformance) is used to inform other systems how to treat non-conformance: if mails are not according to the SPF requests, or DKIM signatures are missing, or ... This information is available through the _dmarc.domain record, and it uses tags similar to SPF:

      _dmarc.genfic.com. TXT "v=DMARC1; p=quarantine; pct=20; rua=mailto:dmarc@genfic.com; sp=r; aspf=r"

    The above record tells other parties that: if mails fail the DKIM/SPF checks (generally called the DMARC checks), then 20% of those mails should be quarantined; daily reports are sent to dmarc@genfic.com; the policy for subdomains (sp) is relaxed; and the alignment mode for SPF (aspf) is relaxed.
    BIND. On the Internet, Berkeley's Internet Name Domain (BIND) server is the most popular DNS server to date. It has been plagued by its share of security issues, but is still very well supported on Gentoo and other platforms. The software was originally developed at Berkeley in the early 80s, in an open source model. Since 2010, the software has been maintained by the Internet Systems Consortium. Because it has such a long history, it has also seen quite a few updates and even rewrites. The current major version, BIND 9, is a rewrite that was tailored to answer the various security-related concerns that came from earlier versions. Sadly, even BIND 9 has seen its set of security vulnerabilities. Although a new major version is in the making (BIND 10), it is not made generally available for
    production use yet.
    From records to views. For DNS, the smallest set of information it returns is called a record. Records are grouped together in domains, and domains are grouped in zones. Within a record, there are 6 fields, including the name of the record and the type. In the next few sections, the syntax as used by BIND is described, although the concepts are the same for other DNS servers.
    Record structure. When an IP address for a host name is declared, this actually maps on a DNS record:

      www    IN    A    192.168.0.2

    This record states that, within the domain that this record is a part of, the www host resolves to 192.168.0.2. The type of the record here is A. An IPv6 address would result in an AAAA record:

      mail   IN    AAAA    2001:db8::10:1

    For the same host, there can be multiple records, as long as they are of a different type and don't make the definitions ambiguous:

      www    IN    A       192.168.0.2
             IN    AAAA    2001:db8::10:1

    There are many record types available, and more are being defined every few years. For instance, CNAME (which says that the given host name is an alias for another record), LOC (which provides location information for servers), SSHFP (which gives the SSH fingerprint information for a server), etc.
    Domains. Multiple records are combined within a domain. Comments are prepended with a semi-colon:

      $TTL 24h                        ; If records do not hold TTL information themselves, use this as default
      $ORIGIN internal.genfic.com.    ; Domain that is defined here
      @    1D    IN    SOA    ns1.internal.genfic.com. hostmaster.genfic.com. (
           2013020401    ; serial
           3H            ; refresh
           15            ; retry
           1w            ; expire
           3h )          ; minimum
           IN    NS      ns1.internal.genfic.com.    ; in the domain

    In the above example, the @ sign is shorthand for the domain ($ORIGIN). The first record in the domain declaration is the SOA record, and after this record there is an NS record.
    Zone. One or more domains and their affiliated records are stored in a zone file. A zone is a specific space within the global Domain Name System that is managed (administered) by a single authority. Other DNS servers might also serve the same zone, but they will not be
    marked as the authoritative DNS service for that zone.
    View. A view is not DNS-specific, but a feature that many DNS servers offer. A view uses different zone files depending on the requestor. For instance, a DNS server might serve both Internet-originating requests as well as internal requests. Internet-originating requests then need to receive the Internet-facing IP addresses as replies, but internal requests can use internal IP addresses:

      public$ dig +short @ns1.genfic.com www.genfic.com A
      123.45.67.89
      intern$ dig +short @ns1.genfic.com www.genfic.com A
      10.152.20.12

    Deployment and uses. Configuring BIND on Gentoo Linux is fairly similar to configuring BIND on other platforms. There are plenty of good and elaborate resources on BIND configuration on the Internet, some of which are mentioned at the end of this chapter.
    Installing BIND. First, install net-dns/bind. An overview of the USE flags used here is shown as well, as output of the equery command:

      $ equery u bind
      [ Legend : U - final flag setting for installation ]
      [        : I - package is installed with flag      ]
      [ Colors : set, unset                              ]
       * Found these USE flags for net-dns/bind-9.9.2_p1:
       U I
       berkdb       : Adds support for sys-libs/db (Berkeley DB) for MySQL
       caps         : Use Linux capabilities library to control privilege
       dlz          : Enables dynamic loaded zones, 3rd party extension
       doc          : Adds extra documentation (API, Javadoc, etc); it is recommended to enable per package instead of globally
       filter-aaaa  : Enable filtering of AAAA records over IPv4
       geoip        : Add geoip support for country and city lookup based on IPs
       gost         : Enables gost OpenSSL engine support
       gssapi       : Enable gssapi support
       idn          : Enable support for Internationalized Domain Names
       ipv6         : Adds support for IP version 6
       ldap         : Adds LDAP support (Lightweight Directory Access Protocol)
       mysql        : Adds mySQL Database support
       odbc         : Adds ODBC Support (Open DataBase Connectivity)
       postgres     : Adds support for the postgresql database
       python       : Adds optional support/bindings for the Python language
       rpz          : Enable response policy rewriting (rpz)
       rrl          : Response Rate Limiting (RRL),
       experimental
       sdb-ldap     : Enables the ldap sdb backend
       ssl          : Adds support for Secure Socket Layer connections
       static-libs  : Build static libraries
       threads      : Adds threads support for various packages (usually pthreads)
       urandom      : Use /dev/urandom instead of /dev/random
       xml          : Add support for XML files

      # emerge net-dns/bind net-dns/bind-tools
      # rc-update add named default

    Initial configuration. The configuration below is meant for a master DNS server. Start with /etc/bind/named.conf:

      options {
        directory "/var/bind";
        pid-file "/var/run/named/named.pid";
        statistics-file "/var/run/named/named.stats";
        listen-on { 127.0.0.1; };
        listen-on-v6 { 2001:db8:81:21::ac:98ad:5fe1; };
        allow-query { any; };
        zone-statistics yes;
        allow-transfer { 2001:db8:81:22::ae:6b01:e3d8; };
        notify yes;
        recursion no;
        version "nope";
      };

      // Access to DNS for local addresses (i.e. genfic-owned)
      view "local" {
        match-clients { 2001:db8:81:48::; };
        recursion yes;
        zone "genfic.com" {
          type master;
          file "pri/com.genfic";
        };
        zone "1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa" {
          type master;
          file "pri/1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa";
        };
      };

    That's it. The configuration will have this installation work as the master DNS server, and it will only accept DNS requests from IPv6 addresses within the defined IP range. For these requests, the pri/com.genfic file is used (this is the zone file that will contain the DNS records), and pri/1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa for the reverse lookups.
    The name of the reverse lookup zone is fairly difficult for people to work with. For this reason, create a symbolic link that makes this a lot easier:

      # ln -s 1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa genfic.com.inv

    This way, domain.inv is a symbolic link pointing to the reverse lookup zone definition.
    For the slave server, the setup is fairly similar: do not set the allow-transfer though (it is a slave server), set the type of the zone(s) to slave instead, and add in masters { 2001:db8:81:21::ac:98ad:5fe1; }; to each zone. That will tell BIND what the master of this particular zone is. On the slave, set the named_write_master_zones SELinux boolean to on, so that the named_t domain can write to the cache
    location. Finally, set the initial zone files for the organization:

      # cat /var/bind/pri/com.genfic
      $TTL 1h
      $ORIGIN genfic.com.
      @      IN SOA   ns.genfic.com. ns.genfic.com. (
             2012041101 1d 2h 4w 1h )
             IN NS    ns.genfic.com.
             IN NS    ns2.genfic.com.
             IN MX 10 mail.genfic.com.
             IN MX 20 mail2.genfic.com.
      genfic.com.  IN AAAA  2001:db8:81:80::dd:13ed:c49e
      ns     IN AAAA  2001:db8:81:21::ac:98ad:5fe1
      ns2    IN AAAA  2001:db8:81:22::ae:6b01:e3d8
      www    IN CNAME genfic.com.
      mail   IN AAAA  2001:db8:81:21::b0:0738:8ad5
      mail2  IN AAAA  2001:db8:81:22::50:5e9f:e569

      # cat /var/bind/pri/com.genfic.inv
      $TTL 1h
      @      IN SOA   1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa. ns.genfic.com. (
             2012041101 1d 2h 4w 1h )
             IN NS    ns.genfic.com.
             IN NS    ns2.genfic.com.
      $ORIGIN 1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
      1.e.f.5.d.a.8.9.c.a.0.0.0.0.0.0.1.2.0.0  IN PTR  ns.genfic.com.
      8.d.3.e.1.0.b.6.e.a.0.0.0.0.0.0.2.2.0.0  IN PTR  ns2.genfic.com.

    With the configuration done, start the named daemon:

      # rc-service named start

    Hardening zone transfers. The BIND system can get additional hardening by introducing transaction signatures (TSIG). To do so, create a shared secret key with dnssec-keygen. The generated key is then added to the named.conf file, like so:

      # dnssec-keygen -a HMAC-MD5 -b 128 -n HOST secundary
      # cat Ksecundary*.key
      secundary. IN KEY 512 3 157 d8fhe2frgY24WFedx348

      // In named.conf:
      key secundary {
        algorithm hmac-md5;
        secret "d8fhe2frgY24WFedx348";
      };

      // In named.conf's zone definition:
      allow-update { key secundary; };

    In the slave's configuration, add in an entry for the master and refer to this key as well:

      // In named.conf:
      key secundary {
        algorithm hmac-md5;
        secret "d8fhe2frgY24WFedx348";
      };
      server 2001:db8:81:21::ac:98ad:5fe1 {
        keys { secundary; };
      };

    It is not possible to use TSIG together with an IP address list though, so either use the keys, or use IP addresses (or use the keys and define local firewall rules).
    Hardening DNS records (DNSSEC). To use DNSSEC, first create two keypairs. One is the KSK (Key Signing Key), which is a long-term keypair. It will be used to sign the ZSKs (Zone Signing Keys).
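    The key creation described above can be sketched with BIND's dnssec-keygen and dnssec-signzone tools; the algorithm choice, key sizes and the <ksk-id>/<zsk-id> placeholders are illustrative assumptions, not values from the text:

```
# Create a long-term KSK and a regular ZSK for the zone (illustrative
# algorithm and key sizes; the tool generates the K* file names).
dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE genfic.com
dnssec-keygen -a RSASHA256 -b 1024 -n ZONE genfic.com

# Sign the zone file: the ZSK signs the records (RRSIG fields), the
# KSK signs the DNSKEY set.
dnssec-signzone -o genfic.com -k Kgenfic.com.+008+<ksk-id> \
    /var/bind/pri/com.genfic Kgenfic.com.+008+<zsk-id>
```

    The signed zone file (com.genfic.signed) is then referenced from named.conf instead of the plain zone file.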
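    The unwieldy reverse zone name used throughout this section can be derived mechanically from the IPv6 prefix. A small sketch; the helper function is hypothetical (not part of BIND), and it assumes the prefix is written out in full nibbles, without the "::" shorthand:

```shell
# Hypothetical helper: convert an expanded IPv6 prefix (full nibbles,
# no "::" shorthand) into the matching ip6.arpa reverse zone name.
ipv6_prefix_to_arpa() {
  # drop the colons, reverse the nibble string, dot-separate the
  # nibbles, and append the ip6.arpa suffix
  echo "$1" | tr -d ':' | rev | sed 's/./&./g; s/$/ip6.arpa/'
}

ipv6_prefix_to_arpa 2001:0db8:0081
# prints 1.8.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```

    This reproduces the zone name used in the named.conf example above, which is exactly why a genfic.com.inv symbolic link is worth creating.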

    Original URL path: http://swift.siphos.be/aglara/dnsserver.html (2016-05-01)