  • Chapter 5. DHCP services
    The basic setup shown further uses a high-available architecture, where the DHCP services keep track of the leases granted to the systems, as well as balance free IP addresses between the two DHCP servers. This allows, if a network split occurs, that each DHCP service can act independently, as it has its own pool of free IP addresses and has knowledge of the lease state of all currently assigned addresses.

    Figure 5.2. Standard HA architecture for DHCP

    Important: make sure that the two DHCP servers have their time synchronized, so the use of an NTP service for both systems is seriously recommended.

    Flows and feeds
    No flows or feeds are identified as of yet.

    Administration
    To administer DHCPd, a configuration management tool (puppet) is used to update configurations. Regular operations on the DHCP daemon (stopping, starting) are done through the standard Unix services.

    Figure 5.3. Administering DHCPd

    Monitoring
    To monitor DHCP operations, log events need to be captured. This allows the monitoring infrastructure to follow up on the IP pool balance. At regular times, the following log entries will be shown with the current balancing information:

        Oct 24 09:33:21 src@intranet dhcpd: pool 8124898 total 759  free 238  backup 266  lts -14
        Oct 24 09:33:32 src@intranet dhcpd: pool 8124898 total 759  free 237  backup 266  lts -14

    In the above example, from a pool with currently 759 IP addresses, 237 are not assigned and owned by the server, while the backup server has 266 IP addresses free. The lts parameter, which stands for "leases to send", tells that the system is to receive 14 leases (negative number) in order to balance the available IP addresses. Other monitoring is of course […]

    Operations
    In standard operation mode, the DHCP daemon receives requests and updates from DHCP relay services, or directly from servers in the same subnet. Any activity is logged through the system logger, which forwards the events to the log server for further processing. An NTP service is explicitly mentioned on the drawing, as time synchronisation is very important for the fail-over setup of the DHCP service.

    Figure 5.4. Operational flows and activities on DHCP service

    Users
    The DHCP daemon is an anonymous service: no users are defined beyond the administrators on operating system level.

    Security
    ISC DHCP is a popular component, so exploits against older versions are not that hard to find. Make sure that the DHCP service is running on the latest patch level and is properly hardened.

    ISC DHCP
    When you have systems that require dynamically allocated IP addresses, you will need a DHCP service.

    Installation and configuration
    The installation and configuration of DHCP is fairly simple and, similar to BIND, uses flat files for its configuration.

    Installation
    First install the DHCP server:

        emerge dhcp

    Do the same on the relay servers. Next, edit /etc/conf.d/dhcpd to configure the DHCP daemon to use IPv6:

        cat /etc/conf.d/dhcpd
        DHCPD_OPTS="-6"

    Master DHCP server […]
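    For reference, a minimal sketch of the dhcpd.conf failover declaration behind the load-balanced pool described above; all names and addresses are illustrative. The partner server uses the same declaration with secondary; instead of primary;.

        failover peer "genfic-ha" {
          primary;                      # "secondary;" on the partner server
          address 10.152.20.10;         # this server
          port 647;
          peer address 10.152.20.11;    # the partner DHCP server
          peer port 647;
          max-response-delay 60;
          max-unacked-updates 10;
          mclt 3600;                    # maximum client lead time (primary only)
          split 128;                    # primary initially owns half the pool
          load balance max seconds 3;
        }

        subnet 10.152.20.0 netmask 255.255.255.0 {
          pool {
            failover peer "genfic-ha";
            range 10.152.20.100 10.152.20.200;
          }
        }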

    Original URL path: http://swift.siphos.be/aglara/dhcp.html (2016-05-01)

  • Chapter 6. Certificates and PKI
    […] communication is targeted towards. This is where certificates come into play.

    Certificates
    A certificate is this same public key, together with data that identifies whom the public/private key belongs to. This certificate also has signatures attached, generated by the private keys of other certificates. The idea is that, if both Alice and Bob know a public key (certificate) of a party they both trust, and know that this public key is indeed of the trusted party, then Alice can send her own certificate, signed by this trusted party, to Bob.

    Figure 6.1. Certificates and CAs in a nutshell

    Bob then validates the signature on this certificate using the public key he has of the trusted party. If the signature indeed pans out, then Bob verifies that the certificate isn't in the list of revoked certificates, which is managed by the Certificate Authority. If that isn't the case, then Bob knows that the certificate he got is indeed from Alice (because the trusted party says so), and that the private key is not known to be lost or stolen (otherwise it would have been mentioned in the revocation list). The list of certificates that are trusted (the certificates of the Certificate Authority) is stored in a trust store.

    A malicious person now has a more difficult task. If a wrong certificate is generated, then the trusted party will probably not sign it. As a result, Chris cannot fake a certificate, because both Alice and Bob will check the signature of the certificate and the certificate revocation list before they agree that it is a valid certificate.

    Certificates in organizations
    In a larger organization or network, certificates play an important role. Many servers use SSL or TLS encryption. In this case, the client connects to the server and receives that server's certificate information. The client validates this certificate with the keys he has of trusted, authoritative actors. If the certificate is valid, then the client knows he is communicating with the correct service. Some magic occurs then to generate a session key only the client and the server know (search on the internet for "ssl handshake pre-master" for the details on this), so they can then use symmetric encryption algorithms for the rest of the communication.

    If self-signed certificates were used, then all the clients would need the public keys of all these systems in their own list of trusted keys (the trust store), which is a nightmare to manage. And if this were managed anyhow, why not just keep symmetric keys and not use public key infrastructure at all? It is also possible to have the clients accept self-signed certificates, but then these systems are vulnerable to MITM attacks. This is where good certificate management comes into play. Serious organizations need a way to sign certificates used in the architecture with a single key (or a very limited set of keys) and distribute this trusted public key to all the clients. The trust store of these clients is then much smaller.

    The service that signs certificates is called the Certificate Authority, or CA. Often a chain of keys is used: a top key, called the Root CA, which is extremely heavily protected, is used to sign a small set of subkeys or signing keys. These signing keys are then used to sign the certificates. Often these signing keys have different purposes: keys for signing certificates that will be used for code signing, keys for signing certificates that will be used for authenticating people or clients, keys for signing certificates that will be used for internet-facing services, etc.
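    The chain a server presents can be inspected with openssl's built-in TLS client; a small illustration (the host name is arbitrary):

        openssl s_client -connect www.example.com:443 -showcerts

    This prints the server certificate together with the intermediate (signing) certificates, which the client then chains up to a root CA found in its trust store.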
    Problems with Certificate Authorities
    On the Internet, many popular CA services exist, like CACert.org, Verizon, GeoTrust and more. All these companies try to become this trusted party that others seek, and offer signing services: they will sign certificates you create (some of them even generate public/private key pairs if you want). These services try to protect their own keys, and only sign certificates from customers they can validate are proper customers with correct identities. Although this seems like a valid model, it does have its flaws.

    - Such popular services are a more likely target for hackers and crackers. Once their keys are compromised, their public keys should be removed from all trust stores or, if proper validation of keys against a revocation list is implemented, made available in the revocation list. However, that also means that all certificates used for services that are signed by these companies will effectively stop working, as the clients will not trust the certificates anymore.
    - These services have a financial incentive to sign certificates. If they do not do proper validation of the certificate requests they get (because volume might be more important to them than integrity), they might sign a certificate from a malicious person who is pretending to be a valid customer.
    - The more of these keys are in the clients' trust store, the more chance that a malicious certificate is seen by the client as valid.

    The latter is important to remember. Assume that the client's trust store has 100 certificates that it trusts, even though the organization only uses a single one. If a malicious user creates a certificate that identifies itself as one of your services, and gets it signed by one of those 100 CAs, then the clients will trust the connection if it uses this malicious certificate, because it is signed with one of the trusted keys.

    Keeping a lid on CAs
    To keep the risk sufficiently low that a malicious certificate is being used, find a CA that is trusted fully, and keep the trust store limited to that CA. And whom better to trust than yourself? By using management software for certificates internally, many important activities can be handled without having to seek out (and pay) a large CA vendor:

    - Create a CA store.
    - Sign the certificate signing requests that users have sent in.
    - Revoke certificates (or at least signatures made on certificates) for end user certificates that should not be trusted anymore, for instance because they have been compromised.
    - Have users create a certificate in case they don't have the proper tools on their systems (although it is not recommended to generate private and public keys on a remote server).
    - Have users submit a certificate signing request to the system for processing, and then download the signed certificate from the site.

    CA service providers
    If managing a private CA is seen as a bit too difficult, one can also opt to use an online CA service provider. There are many paid-for service providers, as well as free ones. Regardless of the provider chosen, make sure that the internal certificates are only signed by a root certificate authority that is specific to this environment, and that the provider does not sign certificate requests by others with the root CA trusted internally. For this reason, be careful which service provider to choose, and make a clear distinction between certificates that will be used internally (not exposed to others) and those that are facing external environments.

    Certificate management protocols
    The Online Certificate Status Protocol (OCSP) is a simple protocol that clients can use to check if a certificate is still valid, without needing to re-download the Certificate Revocation List (CRL) over and over again and parse the list. An advantage is that you don't need to expose all the bad certificates that you know of (which might be a sort of information leakage), and that clients don't need to parse the CRL themselves, as it is now handled on a server level.
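    As an illustration of such a status check from the client side, using openssl's OCSP client; the certificate names and responder URL are placeholders (the issuer certificate is the CA certificate that signed the certificate being verified):

        openssl ocsp -issuer user-genfic.crt -cert alice.crt \
          -url http://ocsp.internal.genfic.com/ -resp_text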
    Architecture
    To provide a private CA, openssl will be used as the main interface. The system itself should of course be very secure, with an absolute minimum of users having access to the system, and even those users should not have direct access to the private key. A mail-driven system is used to allow users to request signing of their certificates: the certificate signing requests are stored in a folder, where one of the certificate administrators can then sign the requests and send them back to the client. Also, an OCSP daemon will be running to provide the necessary revocation validation.

    Flows and feeds
    In the next picture, the two main flows are shown: OCSP, to check the validity of a certificate, and mail, carrying the certificate signing requests.

    Figure 6.2. Flows and feeds for the CA server

    Administration
    Administration-wise, all actions are handled through logons on the system. The usual configuration management services apply, such as using Puppet for general system administration.

    Monitoring
    An important aspect in the setup is auditing. All CA activities should be properly audited, as well as operating system activities.

    Operations
    Figure 6.3. Operations on a CA server

    Users
    Three roles are identified for CA operations.

    Figure 6.4. User definitions for CA operations

    The first one is the regular system administrator. His job is to keep the system available, and he has privileged access to the system. However, the mandatory access control system in place prevents the admin from easily reaching the private key(s) used by the CA software. The CA admin has no system privileges, but is allowed to call and interact with the CA tools. The CA admin can generate new private keys and certificates, sign certificates, etc. However, he too cannot reach the private keys. The security admin has a few system privileges, but most importantly has the ability to update the MAC policy and read the private keys. This role is not involved in the daily operations, though.

    Security
    The mandatory access control in place (SELinux) prevents direct access to the private key. Of course, next to the MAC policy, other security requirements should be in place as well, such as limited role assignment (only a few people in the organization should be security admin), proper physical security of the system, etc.

    OpenSSL as CA
    As CA management software, the presented architecture uses the command line interface of openssl. Although more user-friendly interfaces seem to exist, they are either not properly maintained or very difficult to manage in the long term. The OpenSSL stack is a very common, well-maintained library for handling encryption and encryption-related functions, including certificate handling. Most, if not all, Linux/Unix systems have it installed.

    Setting up the CA
    Setting the defaults: the default settings for using OpenSSL are stored in the /etc/ssl/openssl.cnf file. Below are a few changes suggested when dealing with CA certificates:

        [ CA_default ]
        dir          = genficCA
        default_days = 7305      # 20 years

        [ req ]
        default_bits = 2048
    Setting up the root CA: the root CA is the top-level key which will be ultimately trusted. If an HSM device were used, then this key would be stored in the HSM device itself (and never leave it). But in case this isn't possible, create a root CA as follows:

        cd /etc/ssl/private
        openssl genrsa -des3 -out root-genfic.key 2048

    This generates root-genfic.key, which is the private key that will be used as the root key. Next, create a certificate from this key. In the example, a 20-year lifespan is used. The shorter the lifespan, the faster there is a need to refresh the key stores within the entire organization, which can be a costly activity. However, the longer the period, the more time malicious persons have to get to the key before a new one is generated.

        openssl req -new -x509 -days 7205 -key root-genfic.key -out root-genfic.crt
        Country Name (2 letter code) [AU]: BE
        State or Province Name (full name) [Some-State]: Antwerp
        Locality Name (eg, city) []: Mechelen
        Organization Name (eg, company) [Internet Widgits Pty Ltd]: Gentoo Fictional Inc
        Organizational Unit Name (eg, section) []:
        Common Name (e.g. server FQDN or YOUR name) []: GenFic Root CA
        Email Address []:

    This provides the root-genfic.crt file, which is the certificate: the public key of the key pair created before, together with identity information and key information. To view the certificate in all its glory, use openssl x509:

        openssl x509 -noout -text -in root-genfic.crt

    Finally, create the certificate revocation list, which is empty for now:

        mkdir genficCA
        touch genficCA/index.txt
        echo 01 > genficCA/crlnumber
        openssl ca -gencrl -crldays 365 -keyfile root-genfic.key \
          -cert root-genfic.crt -out root-genfic.crl

    Now put all files in the location(s) defined in openssl.cnf.

    Multilevel or hierarchical certificate authorities
    In many environments, the root key itself isn't used to sign end user or system certificates: a hierarchy is established to support a more flexible approach on certificates. Often this hierarchy reflects the use of the certificates (end user certificates, system certificates), and if the organization has multiple companies, these companies often have an intermediate certificate authority each. In this example, a simple hierarchy is used: below the root certificate, two signing certificates are used, one for end users and one for systems.

        root CA
        +- user CA
        +- system CA

    The root certificate created above will be stored offline (not reachable through the network, and preferably in a shut-down state or on an HSM device) together with the certificates created for the user and system CAs. This is needed in case these certificates need to be revoked, since the process of revoking a certificate (see later) requires the certificate.

    To support the two additional CAs, edit openssl.cnf accordingly. Add sections for each CA to support (copy the CA_default settings) and edit the directory and other settings where necessary. For instance, call them CA_root, CA_user and CA_system to identify the different certificate authorities. Then create the keys:

        openssl genrsa -des3 -out user-genfic.key 2048
        openssl genrsa -des3 -out system-genfic.key 2048

    Next, create certificate requests. Unlike the root CA, these will not be self-signed with the same key, but signed with the root CA key:

        openssl req -new -days 1095 -key user-genfic.key -out user-genfic.csr
        openssl req -new -days 1109 -key system-genfic.key -out system-genfic.csr

    If the command asks for a challenge password, leave it empty. The purpose of the challenge password is that, if the certificate is to be revoked, the challenge password needs to be given again as well. This gives some assurance that rogue administrators can't revoke certificates that they don't own. But as we will store the root key offline, rather than keep it available as a service, revocation of the certificates requires physical access to the keys anyhow.
    The .csr files (Certificate Signing Requests) contain the public key of the generated key pair, as well as identity information. Based on the CSR, the root CA will sign and create a certificate:

        openssl ca -name CA_root -days 1095 -extensions v3_ca \
          -out user-genfic.crt -infiles user-genfic.csr
        openssl ca -name CA_root -days 1109 -extensions v3_ca \
          -out system-genfic.crt -infiles system-genfic.csr

    Notice that different validation periods are given for the certificates. This is to support the plausible management activities that result when certificates suddenly expire. If the organization forgets that the certificates expire, their user certificates will expire first, and two weeks later the system certificates. Not only will this allow the continued servicing of the various systems while the user CA certificate is being updated, it will also allow the organization to prepare for the system CA update in time: they now have 14 days, as they noticed that the user CA certificate had expired and have time until the system CA certificate expires.

    When the certificate signing requests are handled, the genficCA/newcerts directory contains the two certificates. This allows the CA to revoke the certificates when needed. Finally, copy the signed certificates and prepare the directory structure for the two certificate authorities as well:

        mkdir genficSystemCA
        cd genficSystemCA
        touch index.txt
        echo 01 > serial
        echo 01 > crlnumber
        mkdir newcerts crl private
        mv system-genfic.key private/

    Protecting the root CA
    As mentioned before, it is good practice to protect the root CA. This can be done by handling the system as a separate, offline system (no network connection), although with the other CAs in place there is little reason for the root CA to be permanently available. In other words, it is fine to move it to some offline medium (even print it out) and store this in a very safe location. For instance, put it on a flash disk or tape and put it in a safe, or even better: two flash disks in two safes.

    Starting the OCSP server
    OpenSSL has an internal OCSP daemon that can be used to provide OCSP services internally. One way to start it is in a screen session:

        screen -S ocsp-server
        cd /etc/ssl/private/genficUserCA
        openssl ocsp -index index.txt -CA user-genfic.crt -rsigner user-genfic.crt \
          -rkey private/user-genfic.key -port 80
        Waiting for OCSP client connections...

    This way, the OCSP daemon will listen on port 80, within the screen session called ocsp-server. If the administrator ever needs to get to this screen session, he can run screen -x ocsp-server to attach to the session again. Of course, an init script can be created for this as well, instead of using screen.

    Daily handling
    With the certificate authority files in place, let's look at the daily operational tasks involved with certificates.

    User certificates
    A user certificate is used by a person to identify himself towards services. Such certificates can be used to authenticate a user for operating system access, but a more common use of user certificates is access towards websites: the user has a key and certificate loaded in the browser, or in some store that the browser has access to (in case the key itself is […]
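    As a reference for the daily flow this section introduces, a sketch of issuing a user certificate with the CA_user authority configured earlier; the file names for the requesting user are hypothetical:

        # on the user's system: generate a key pair and a signing request
        openssl genrsa -des3 -out alice.key 2048
        openssl req -new -key alice.key -out alice.csr
        # on the CA system: the CA admin signs the request with the user CA
        openssl ca -name CA_user -out alice.crt -infiles alice.csr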

    Original URL path: http://swift.siphos.be/aglara/certificates.html (2016-05-01)

  • Chapter 7. High Available File Server
    […] the /srv/data location is where the NFS-disclosed files are at:

        rsync -auq /srv/data remote-system:/srv/data

    Using a more change-based approach requires a bit more scripting, but nothing unmanageable. It basically uses the inotifywait tool to let the system signal when changes have been made to the files, and then triggers rsync when needed (a sketch follows below). For low-volume systems this works pretty well, but larger deployments will find the overhead of the rsync commands too much.
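    A minimal sketch of that change-triggered loop, assuming the inotify-tools package provides inotifywait; the remote host name and paths mirror the example above:

        #!/bin/sh
        # Block until something changes under /srv/data, then replicate.
        while inotifywait -r -e modify,create,delete,move /srv/data; do
            rsync -auq /srv/data remote-system:/srv/data
        done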
    Dedicated NFS with DRBD and Heartbeat
    Another setup for a high-available NFS is the popular NFS + DRBD + Heartbeat combination. This is an interesting architecture to look at if the virtualization platform used has too much overhead on the performance of the NFS system (you have the overhead of the virtualization layer, the logical volume layer, DRBD, and again a logical volume layer). In this case, DRBD is used to synchronize the storage, while Heartbeat is used to ensure the proper node in the setup is up and running; if one node is unreachable, the other one can take over.

    Figure 7.2. Alternative HA setup using DRBD and Heartbeat

    Simple replication
    Using rsync, we can introduce simple replication between hosts. However, there is a bit more to look at than just running rsync.

    Tuning file system performance (dir_index)
    First of all, you definitely want to tune your file system performance. Running rsync against a location with several thousands of files means that rsync will need to stat each and every file. As a result, you will have peak I/O loads while rsync is preparing its replication, so it makes sense to optimize the file system for this. One of the optimizations you might want to consider is to use dir_index-enabled file systems. When the file system (ext3 or ext4) uses dir_index (ext4 uses this by default), lookup operations on the file system use hashed b-trees to speed up the operation. This algorithm has a huge influence on the time needed to list directory contents. Let's take a quick (overly simplified, and thus also incorrect) view at the approach.

    Figure 7.3. HTree in a simple example

    In the above drawing, the left side shows a regular, linear structure. The top block represents the directory, which holds the references (numbers) of the files. The files themselves are stored in the blocks with the capital letters, which include metadata of the file, such as the modification time used by rsync to find the files it needs to replicate. The small letters contain some information about the file, for instance the file name. On the right side you see a possible example of an HTree (hashed b-tree). So how does this work? Let's assume you need to traverse all files (what rsync does) and check the modification times. How many steps would this take?

    In the left example (ext2), the code first gets the overview of file identifications. It starts at the beginning of the top block and finds one reference to a file (1); then it looks where the information about the second file is at. Because the name of a file ("a" in this case) is not of a fixed width, the reference to file 1 also contains the offset where the next file is at. So the code jumps to the second file (2), and so on. At the end, the code returns 1, 2, 5, 7, 8, 12, 19 and 21, and has done 7 jumps to do so (from 1 to 2, from 2 to 5, ...). Next, the code wants to get the modification time of each of those files. For the first file (1), it looks at its address and goes to the file. Then it wants to do the same for the second file, but since it only has the number (2) and not the reference to the file, it repeats the process: read 1, jump, read 2, and look at the file. To get the information about the last file, it does a sequence similar to read 1, jump, read 2, jump, read 5, jump, ..., read 21, look at the file. At the end, this took 36 jumps; in total, the code used 43 jumps.

    In the right example (using HTrees), the code gets the references first. The list of references (similar to the numbers in the left code) uses fixed structures, so within a node no jumps are needed. Starting from the top node (c, f), it takes 3 jumps (one to (a, b), one to (d, e) and one to (g, h)) to get an overview of all references. Next, the code again wants to get the modification time of each of those files. For the first file (A), it reads the top node, finds that "a" is less than "c", and goes to the first node lower, where it finds the reference to A. So for file A, it took 2 jumps. The same holds for file B. File C is referred to in the top node, so it only takes one jump, etc. At the end, this took 14 jumps, or 17 jumps in total.

    So even for this very simple example with just 8 files, the difference is 43 jumps (random-ish reads on the disk) versus 17 jumps. In algorithm terms this is O(n^2) versus O(n log n) speed: the order of the first example goes with n^2, whereas the second one is n log n ("order" here means that the impact, the number of jumps in our case, goes up the same way as the given formula; it doesn't give the exact number of jumps). For very large sets of files this gives a huge difference: 100^2 = 10,000 whereas 100 x log(100) = 200, and 1000^2 = 1,000,000 whereas 1000 x log(1000) = 3,000. Hence the speed difference.

    If you haven't set dir_index while creating the file systems […]
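    On file systems that already exist, dir_index can typically still be enabled afterwards; a sketch (the device name is illustrative, and the e2fsck -D run, which rebuilds the existing directory indexes, should happen on an unmounted file system):

        tune2fs -O dir_index /dev/sdb1
        e2fsck -fD /dev/sdb1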

    Original URL path: http://swift.siphos.be/aglara/fileserver.html (2016-05-01)

  • Chapter 8. A Gentoo build server
    […] profile:

        mkdir -p /srv/build/basic_bp/{portage,portage-settings,snapshot-cache,catalyst-store}
        cd /etc/catalyst
        cat catalyst-basic_bp.conf
        digests="sha512"
        contents="auto"
        distdir="/usr/portage/distfiles"
        envscript="/etc/catalyst/catalystrc"
        hash_function="crc32"
        options="autoresume metadata_overlay pkgcache seedcache snapcache"
        portdir="/srv/build/basic_bp/portage"
        sharedir="/usr/lib64/catalyst"
        snapshot_cache="/srv/build/basic_bp/snapshot-cache"
        storedir="/srv/build/basic_bp/catalyst-store"

    Next, create the stage3 and grp spec files. The templates for these files can be found in /usr/share/doc/catalyst-<version>/examples.

        cat stage3-basic_bp.conf
        # Static settings
        subarch: amd64
        target: stage3
        rel_type: default
        profile: hardened/linux/amd64/no-multilib/selinux
        cflags: -O2 -pipe -march=native
        portage_confdir: /srv/build/basic_bp/portage-settings
        # Dynamic settings; might be passed on as parameters instead.
        # Name of the seed stage3 file to use:
        source_subpath: stage3-amd64-20120401
        # Name of the portage snapshot to use:
        snapshot: 20120406
        # Timestamp to use on your resulting stage3 file:
        version_stamp: 20120406

        cat grp-basic_bp.conf
        # (similar as to stage3, plus the package selection)
        grp: packages
        packages: lighttpd nginx cvechecker bind apache

    The portage_confdir variable tells catalyst where to find the portage configuration specific files (all those you usually put in /etc/portage). The source_subpath variable tells catalyst which stage3 file to use as a seed file. You can opt to point this one to the stage3 built the day before, or to a fixed value, whatever suits you best. This seed file needs to be stored inside /srv/build/basic_bp/catalyst-store/builds. Copy the portage snapshot (portage-20120406.tar.bz2 and related files) into /srv/build/basic_bp/catalyst-store/snapshots. If you rather create your own snapshot, populate /srv/build/basic_bp/portage with the tree you want to use and then call catalyst to generate a snapshot for you from it.

    Warning: the grsecurity chroot restrictions that are enabled in the kernel will prohibit catalyst from functioning properly. It is advisable to disable the chroot restrictions when building through catalyst. You can toggle them through sysctl commands.

        catalyst -c catalyst-basic_bp.conf -s 20120406

    Next we have the two catalyst runs executed: to generate the new stage3 (which will be used by subsequent installations as the new seed stage3) and the grp build (which is used to generate the binary packages):

        catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf
        catalyst -c catalyst-basic_bp.conf -f grp-basic_bp.conf

    When the builds have finished, you will find the packages for this profile in /srv/build/basic_bp/catalyst-store/builds. These packages can then be moved to the NFS mount later.

    Automating the builds
    To automate the build process, remove the date-specific parameters and instead pass them on as parameters, like so:

        catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf -C version_stamp=20120406

    Next, if the grp build finishes successfully as well, we can use rsync to synchronize the […]
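    A sketch of wrapping such a daily run in a script; the date stamp is computed at run time, and passing snapshot= through -C alongside version_stamp is an assumption (the text above only shows the version_stamp override):

        #!/bin/sh
        # Daily catalyst run: snapshot the tree, then build stage3 and grp.
        STAMP=$(date +%Y%m%d)
        cd /etc/catalyst
        catalyst -c catalyst-basic_bp.conf -s ${STAMP} || exit 1
        catalyst -c catalyst-basic_bp.conf -f stage3-basic_bp.conf \
          -C version_stamp=${STAMP} snapshot=${STAMP} || exit 1
        catalyst -c catalyst-basic_bp.conf -f grp-basic_bp.conf \
          -C version_stamp=${STAMP} snapshot=${STAMP} || exit 1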

    Original URL path: http://swift.siphos.be/aglara/buildserver.html (2016-05-01)

  • Chapter 9. Database Server
    […] receiving the archive logs and, depending on the type of standby, either waits (keeping the logs on the file system until the standby is started, for instance due to a fail-over) or immediately applies the received WAL logs. If a standby waits, it is called a warm standby; if it immediately applies the received WAL logs, it is a hot standby. In case of a hot standby, the database is also open for read-only access. There are also methods for creating multi-master PostgreSQL clusters. In the presented architecture, the hot standby even has to signal to the master that it has received and applied the log before the commit is returned as successful to the client. Of course, if you do not need this kind of protection on the database, this is best left disabled, as it incurs a performance penalty.

    Administration
    PostgreSQL administration is mostly done using the psql command and the related pg_* commands.

    Monitoring
    When monitoring PostgreSQL, we should focus on process monitoring and log file monitoring.

    Operations
    When you are responsible for managing PostgreSQL systems, you will need to understand the internal architecture of PostgreSQL.

    Figure 9.4. Internal architecture for PostgreSQL

    The master postgres server process listens on the PostgreSQL port, waiting for incoming connections. Any incoming connection is first checked against the settings in the host-based authentication configuration file (pg_hba.conf). Only if this list allows the connection will it go through towards the postgres database, where the rights granted to the user then define what access is allowed.

    User management
    PostgreSQL supports a wide number of authentication methods, including PAM (Pluggable Authentication Modules) support, GSSAPI (including Kerberos), RADIUS, LDAP, etc. However, it is important to know that this only concerns authentication, not authorization. In other words, you still need to create the roles (users) in the PostgreSQL server and grant them whatever they need. Central user management in this case ensures that, if a person leaves the organization, his account can be immediately removed or locked, so that authenticating as this user against the databases is no longer possible. However, you will still need a method for cleaning up the role definitions in the databases.

    Security
    […]

    Deployment and uses
    Manual installation of the database server
    We start by installing the dev-db/postgresql-server package on the system:

        emerge dev-db/postgresql-server

    Next, edit the /etc/conf.d/postgresql-9.1 file to accommodate the settings of the cluster:

        PGDATA="/etc/postgresql-9.1"
        DATA_DIR="/var/lib/postgresql/9.1/data"
        PG_INITDB_OPTS="--locale=en_US.UTF-8"

    Now create the cluster: temporarily assign a password to the postgres user (it will be asked during the configuration), and afterwards lock the account again:

        passwd postgres
        emerge --config dev-db/postgresql-server:9.1
        passwd -l postgres
        restorecon -Rv /var/lib/postgresql/9.1/data

    To secure the cluster, we need to edit its configuration files before starting. Let's first make sure that we […]
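    As an illustration of the host-based authentication checks mentioned above, a restrictive pg_hba.conf sketch; the database name, role and network are assumptions for this environment:

        # TYPE  DATABASE  USER      ADDRESS              METHOD
        local   all       postgres                       peer
        host    gfdb      gfapp     2001:db8:81::/80     md5
        host    all       all       ::0/0                reject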

    Original URL path: http://swift.siphos.be/aglara/databaseserver.html (2016-05-01)

  • Chapter 10. Mail Server
    […] needs to be forwarded further (relayed) or delivered locally.

        mydestination = genfic.com

    mynetworks
    From which locations does Postfix accept mails to be routed further? E-mails received from the networks identified in mynetworks are accepted for routing by this Postfix daemon. Mails that originate from elsewhere are handled by the relay_domains parameter. You definitely want to have mynetworks set to the networks for which you accept mails:

        mynetworks = [::1] [2001:db8:81::1]/80

    relay_domains
    For which other domains does Postfix act as a relay server? E-mails that are not meant to be delivered locally will be checked against the relay_domains parameter, to see if Postfix is allowed to route them further or not. By explicitly making this variable empty, we tell Postfix it should not route e-mails that are not explicitly meant for it:

        relay_domains =

    relayhost
    Through which server will Postfix send out e-mails? If the relayhost parameter is set, Postfix will send all outgoing e-mails through this server. When unset, Postfix will send outgoing e-mails directly to the destination server (as seen through the MX DNS records). By surrounding the target with brackets, Postfix will not perform an MX record lookup for the given destination:

        relayhost = [mail-out.internal.genfic.com]

    Managing Postfix
    When Postfix is configured, the real work starts. Administering a mail infrastructure is about capacity management, queue management, integration and support of filters, anti-virus scanning, scalability and more. Mail administration should not be underestimated: it is an entire IT field of its own. In this chapter we'll take a look at only a few of these aspects.

    Standard operations
    Regular operations, being the stop, start and reload of the configuration, are best done through the service script:

        /etc/init.d/postfix {stop|start|reload}

    This will ensure proper transitioning SELinux-wise and, thanks to the dependency support within the init scripts, it will also handle potentially dependent services. For instance, if an anti-virus scanning service requires Postfix to be running, then bringing Postfix down will first bring the anti-virus scanning service down, and then Postfix. If you would do this through the postfix command itself, the anti-virus scanning service would remain running, which might cause headaches in the future (or just throw many errors and events).

    Queue management
    As can be seen from the architecture overview, Postfix uses a number of internal queues for processing e-mails. Physically, you will find these queues as directories inside /var/spool/postfix (together with quite a few others, some of which we'll encounter later). However, managing these queues is not done at the file level, but through the postqueue and postsuper commands. Generally, as a Postfix administrator, you want the incoming and active queues to be processed swiftly (as those are the queues where incoming or outgoing messages live until they are handled) and the deferred queue to be monitored: this is where mails that couldn't be delivered for now live […]
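    A few illustrative invocations of those two commands (the queue ID shown is hypothetical):

        postqueue -p                 # list the queue contents
        postqueue -f                 # flush: attempt delivery of all queued mail
        postsuper -d 7A1B2C3D4E      # delete one message from the queue
        postsuper -r ALL deferred    # requeue everything in the deferred queue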

    Original URL path: http://swift.siphos.be/aglara/mailserver.html (2016-05-01)

  • Chapter 11. Configuration management with git and Puppet
    […] Linux host itself as well, which isn't done through the gitolite-admin repository. For instance, a snippet for a repository puppet-was, to which the users john and dalia (both admins), jacob and eden have access:

        repo puppet-was
            RW+ = john dalia
            RW  = jacob
            R   = eden

    The gitolite documentation, referred to at the end of this chapter, has more information about the syntax and the abilities (including group support) of gitolite; a small group-based variant is sketched below.
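    For instance, the same access list written with a gitolite group; the @admins name is chosen for this illustration:

        @admins = john dalia

        repo puppet-was
            RW+ = @admins
            RW  = jacob
            R   = eden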
    Puppet
    The puppet master hosts the configuration entries for your environment, and manages the puppet clients' authentication through certificates.

    Architecture
    The puppet architecture is fairly simple, which is also one of its strengths.

    Flows
    The following diagram shows the flows and feeds that interact with the puppet processes.

    Figure 11.4. Flows towards and from puppet

    The most prominent flow is the one with configuration updates. These updates come from one of the Git repositories and are triggered locally on the puppet master server itself.

    Administration
    As puppet is an administration tool by itself, it comes as no surprise that the actual administration of puppet is done using the system-specific interactive shells (i.e. through SSH).

    Figure 11.5. Puppet administration

    The main administration task in puppet is handling the certificates. System administrators request a certificate through the puppet client: the client connects to the master and sends the signing request, which is then queued. The puppet admin then lists the pending certificate requests and signs those he knows are valid. When signed, the system administrator can retrieve the signed certificate and have it installed on the system (again through the puppet client), from which point on the system is known and can be managed by puppet.

    Monitoring
    When checking on the puppet operations, we need to make sure that:

    - the puppet agent is running or is scheduled to run;
    - the puppet agent ran within the last xx minutes (depending on the frequency of the data gathering);
    - the puppet agent did not fail.

    We might also want to include a check that n consecutive polls do not perform changes every time; in other words, the configuration has to be stable after n+1 requests.

    Operations
    During regular operations, the puppet agent frequently connects to the puppet master and sends all its facts (the state of the current system), from which the puppet master then devises how to update the system to match the configuration the system should be in.

    Figure 11.6. Regular operations of puppet

    The activities are by default triggered from the agent. It is possible (and we will do so later) to configure the agent to also listen for incoming connections from the puppet master. This allows administrators to push changes to systems without waiting for the agents to connect to the master.

    User management
    Puppet does not have specific user management features in it. If you want separate roles, you will need to implement them through the file access control mechanisms on the puppet master, and/or through the repositories that you use as the configuration repository.

    Security
    Make sure that no resources can be accessed through Puppet that are otherwise not accessible by unauthorized people. As Puppet includes a web-based file server, we need to configure it properly so that unauthorized access is not possible. Luckily, this is the default behavior of a Puppet installation.

    Setting up puppet master
    Installing puppet master
    The puppet master and the puppet client are both provided through the app-admin/puppet package:

        equery u puppet
        [ Legend : U - final flag setting for installation]
        [        : I - package is installed with flag     ]
        [ Colors : set, unset                             ]
         * Found these USE flags for app-admin/puppet-2.7.18:
         U I
         augeas              : Enable augeas support
         diff                : Enable diff support
         doc                 : Adds extra documentation (API, Javadoc, etc). It is
                               recommended to enable per package instead of globally
         emacs               : Adds support for GNU Emacs
         ldap                : Adds LDAP support (Lightweight Directory Access Protocol)
         minimal             : Install a very minimal build (disables, for example,
                               plugins, fonts, most drivers, non-critical features)
         rrdtool             : Enable rrdtool support
         ruby_targets_ruby18 : Build with MRI Ruby 1.8.x
         shadow              : Enable shadow support
         sqlite3             : Adds support for sqlite3 (embedded sql database)
         test                : Workaround to pull in packages needed to run with
                               FEATURES=test. Portage-2.1.2 handles this internally,
                               so don't set it in make.conf/package.use anymore
         vim-syntax          : Pulls in related vim syntax scripts
         xemacs              : Add support for XEmacs

        emerge app-admin/puppet

    Next, edit /etc/puppet/puppet.conf and add the following to enable the puppet master to bind on IPv6:

        [master]
        bindaddress = ::

    You can then start the puppet master service:

        run_init rc-service puppetmaster start

    Configuring as CA
    One puppet master needs to be configured as the certificate authority, responsible for handing out and managing the certificates of the various puppet clients:

        cat /etc/puppet/puppet.conf
        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        ssldir = $vardir/ssl

        [master]
        bindaddress = ::

    Configuring as non-CA hub
    The remaining puppet masters need to be configured as a hub. For these systems, disable the CA functionality:

        cat /etc/puppet/puppet.conf
        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        ca_server = puppet-ca.internal.genfic.com

        [master]
        bindaddress = ::
        ca = false

    Make sure no ssl directory is available:

        rm -rf $(puppet master --configprint ssldir)

    Next, request a certificate from the CA for this master. In the dns_alt_names parameter, specify all possible hostnames (fully qualified and not) that agents might use to connect to this particular master:

        puppet agent --test \
          --dns_alt_names "puppetmaster1.internal.genfic.com,puppet,puppet.internal.genfic.com"

    Then, on the CA server, sign the request:

        puppet cert list
        puppet cert sign <new-master-cert>

    Finally, retrieve the signed certificate back on the hub:

        puppet agent --test

    Repeat these steps for every hub you want to use. You can implement round-robin load balancing by using a round-robin DNS address allocation for the master hostname, such as puppet.internal.genfic.com.

    Configuring repositories
    As per our initial example, we will need to pull from […]
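    To connect agents to this split CA/hub setup, a sketch of the agent side of /etc/puppet/puppet.conf; the host names follow the chapter's examples, and the exact layout is an assumption:

        [main]
        logdir = /var/log/puppet
        rundir = /var/run/puppet

        [agent]
        server = puppet.internal.genfic.com        # round-robin alias for the hubs
        ca_server = puppet-ca.internal.genfic.com  # certificate requests go here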

    Original URL path: http://swift.siphos.be/aglara/centralcmdb.html (2016-05-01)

  • Chapter 12. Virtualization with KVM
    […] installations.

    Secure isolation
    In a security-sensitive environment, isolation is a very important concept. It ensures that a system or service can only access those resources it needs, while disallowing (and even hiding) the other resources. Virtualization allows architects to design the system so that it runs in its own operating system: from the viewpoint of the service, it has access to the resources it needs, but sees no others. On the host layer, the guests can then be properly isolated so they cannot influence each other. Having separate operating systems is often seen as a thorough implementation of isolation. Yet there are a lot of other means to isolate services, and running a service in a virtualized operating system is not the summum of isolation: breaking out of KVM has been done in the past and will most likely happen again. Other virtualization platforms have seen their share of security vulnerabilities at this level as well.

    Simplified backup/restore
    For many organizations, a bare-metal backup/restore routine is much more resource-hungry than regular, file-based backup/restore. By using virtualization, bare-metal backup/restore of the guests is a breeze, as it is now again a matter of working with files and processes. Ok, the name "bare metal" might not work anymore here, and you'll still need to back up your hypervisor. But if your hypervisor (host) installation is very standardized, this is much faster and easier than before.

    Fast deployment
    By leveraging the use of guest images, it is easy to create a single image and then use it as a master template for other instances. Need a web serving cluster? Set up one, replicate and boot. Need a few more during high load? Replicate and boot a few more. It becomes just that easy.

    Architecture
    To support the various advantages of virtualization mentioned earlier, we will need to take these into account in the architecture. For instance, high availability requires that the storage on which the guests are running is quickly (or even continuously) available on other systems, so these can take over (or quickly boot) the guests. The underlying storage platform we focus on is Ceph, which will be discussed in a later chapter. Also, we will opt for regular KVM running inside screen sessions. The screen sessions allow us to easily manage KVM monitor commands. The console itself will be available through VNC sessions. These sessions will by default not be started (so as not to consume memory and resources), but can be started through the KVM monitor when needed. All the other aspects of the hypervisor level are the same as what we will have with our regular operating system design, which is defined further down. This is because at the hypervisor level we will use Gentoo Linux as well. The flexibility of the operating system allows us to easily manage multiple guests in a secure manner (hence the secure containers displayed in the above picture). We will cover these secure containers […]
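    As a sketch of the screen-plus-KVM-monitor approach described above; the image path, resource sizes and VNC display are illustrative, and "kvm" is assumed to be the qemu-kvm wrapper the distribution installs:

        screen -S kvm-web01
        kvm -m 2048 -smp 2 \
          -drive file=/srv/kvm/web01.img,if=virtio \
          -monitor stdio \
          -vnc none
        # Later, from the KVM monitor inside the screen session, the VNC
        # console can be enabled on demand (display :1 is an example):
        # (qemu) change vnc :1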

    Original URL path: http://swift.siphos.be/aglara/hypervisor.html (2016-05-01)