Multi-Master Replication Of OpenLDAP Server on CentOS


Replication
Replicated directories are a fundamental requirement for delivering a resilient enterprise deployment.
OpenLDAP has various configuration options for creating a replicated directory. In previous releases, replication was discussed in terms of a master server and some number of slave servers. A master accepted directory updates from other clients, and a slave only accepted updates from a (single) master. The replication structure was rigidly defined and any particular database could only fulfill a single role, either master or slave.
As OpenLDAP now supports a wide variety of replication topologies, these terms have been deprecated in favor of provider and consumer: A provider replicates directory updates to consumers; consumers receive replication updates from providers. Unlike the rigidly defined master/slave relationships, provider/consumer roles are quite fluid: replication updates received in a consumer can be further propagated by that consumer to other servers, so a consumer can also act simultaneously as a provider. Also, a consumer need not be an actual LDAP server; it may be just an LDAP client.
The following sections will describe the replication technology and discuss the various replication options that are available.
Replication Technology
LDAP Sync Replication
The LDAP Sync Replication engine, syncrepl for short, is a consumer-side replication engine that enables the consumer LDAP server to maintain a shadow copy of a DIT fragment. A syncrepl engine resides at the consumer and executes as one of the slapd(8) threads. It creates and maintains a consumer replica by connecting to the replication provider to perform the initial DIT content load followed either by periodic content polling or by timely updates upon content changes.
Syncrepl uses the LDAP Content Synchronization protocol (or LDAP Sync for short) as the replica synchronization protocol. LDAP Sync provides a stateful replication which supports both pull-based and push-based synchronization and does not mandate the use of a history store. In pull-based replication the consumer periodically polls the provider for updates. In push-based replication the consumer listens for updates that are sent by the provider in real time. Since the protocol does not require a history store, the provider does not need to maintain any log of updates it has received. (Note that the syncrepl engine is extensible and additional replication protocols may be supported in the future.)
Syncrepl keeps track of the status of the replication content by maintaining and exchanging synchronization cookies. Because the syncrepl consumer and provider maintain their content status, the consumer can poll the provider content to perform incremental synchronization by asking for the entries required to make the consumer replica up-to-date with the provider content. Syncrepl also enables convenient management of replicas by maintaining replica status. The consumer replica can be constructed from a consumer-side or a provider-side backup at any synchronization status. Syncrepl can then automatically bring the consumer replica up to date with the current provider content.
Syncrepl supports both pull-based and push-based synchronization. In its basic refreshOnly synchronization mode, the provider uses pull-based synchronization where the consumer servers need not be tracked and no history information is maintained. The information required for the provider to process periodic polling requests is contained in the synchronization cookie of the request itself. To optimize the pull-based synchronization, syncrepl utilizes the present phase of the LDAP Sync protocol as well as its delete phase, instead of falling back on frequent full reloads. To further optimize the pull-based synchronization, the provider can maintain a per-scope session log as a history store. In its refreshAndPersist mode of synchronization, the provider uses a push-based synchronization. The provider keeps track of the consumer servers that have requested a persistent search and sends them necessary updates as the provider replication content gets modified.
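As a minimal sketch of how the two modes look in a consumer's slapd.conf (the host names, replication DN, and password here are hypothetical), the difference is essentially the type keyword; interval applies only to refreshOnly polling:

# pull-based: poll the provider once a day
syncrepl rid=001
         provider=ldap://provider.example.com
         type=refreshOnly
         interval=01:00:00:00
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

# push-based: keep a persistent search open and receive updates as they happen
syncrepl rid=002
         provider=ldap://provider.example.com
         type=refreshAndPersist
         retry="30 +"
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret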
With syncrepl, a consumer server can create a replica without changing the provider's configuration and without restarting the provider server, as long as the consumer server has appropriate access privileges for the DIT fragment to be replicated. The consumer server can likewise stop replication without any provider-side changes or restart.
Syncrepl supports partial, sparse, and fractional replications. The shadow DIT fragment is defined by general search criteria consisting of base, scope, filter, and attribute list. The replica content is also subject to the access privileges of the bind identity of the syncrepl replication connection.
The LDAP Content Synchronization Protocol
The LDAP Sync protocol allows a client to maintain a synchronized copy of a DIT fragment. The LDAP Sync operation is defined as a set of controls and other protocol elements which extend the LDAP search operation. This section introduces the LDAP Content Sync protocol only briefly. For more information, refer to RFC4533.
The LDAP Sync protocol supports both polling and listening for changes by defining two respective synchronization operations: refreshOnly and refreshAndPersist. Polling is implemented by the refreshOnly operation. The consumer polls the provider using an LDAP Search request with an LDAP Sync control attached. The consumer copy is synchronized to the provider copy at the time of polling using the information returned in the search. The provider finishes the search operation by returning SearchResultDone at the end of the search operation as in the normal search. Listening is implemented by the refreshAndPersist operation. As the name implies, it begins with a search, like refreshOnly. Instead of finishing the search after returning all entries currently matching the search criteria, the synchronization search remains persistent in the provider. Subsequent updates to the synchronization content in the provider cause additional entry updates to be sent to the consumer.
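Both operations can be observed with a plain LDAP client: OpenLDAP's ldapsearch exposes the Sync control through its -E option (the host name below is hypothetical):

# refreshOnly: one refresh cycle, then the search completes with SearchResultDone
ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=ro

# refreshAndPersist: refresh first, then stay connected and print updates as they arrive
ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=rp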
The refreshOnly operation and the refresh stage of the refreshAndPersist operation can be performed with a present phase or a delete phase.
In the present phase, the provider sends the consumer the entries updated within the search scope since the last synchronization. The provider sends all requested attributes, be they changed or not, of the updated entries. For each unchanged entry which remains in the scope, the provider sends a present message consisting only of the name of the entry and the synchronization control representing state present. The present message does not contain any attributes of the entry. After the consumer receives all update and present entries, it can reliably determine the new consumer copy by adding the entries added to the provider, by replacing the entries modified at the provider, and by deleting entries in the consumer copy which have not been updated nor specified as being present at the provider.
The transmission of the updated entries in the delete phase is the same as in the present phase. The provider sends all the requested attributes of the entries updated within the search scope since the last synchronization to the consumer. In the delete phase, however, the provider sends a delete message for each entry deleted from the search scope, instead of sending present messages. The delete message consists only of the name of the entry and the synchronization control representing state delete. The new consumer copy can be determined by adding, modifying, and removing entries according to the synchronization control attached to the SearchResultEntry message.
In the case that the LDAP Sync provider maintains a history store and can determine which entries are scoped out of the consumer copy since the last synchronization time, the provider can use the delete phase. If the provider does not maintain any history store, cannot determine the scoped-out entries from the history store, or the history store does not cover the outdated synchronization state of the consumer, the provider should use the present phase. The use of the present phase is much more efficient than a full content reload in terms of the synchronization traffic. To reduce the synchronization traffic further, the LDAP Sync protocol also provides several optimizations such as the transmission of the normalized entryUUIDs and the transmission of multiple entryUUIDs in a single syncIdSet message.
At the end of the refreshOnly synchronization, the provider sends a synchronization cookie to the consumer as a state indicator of the consumer copy after the synchronization is completed. The consumer will present the received cookie when it requests the next incremental synchronization to the provider.
When refreshAndPersist synchronization is used, the provider sends a synchronization cookie at the end of the refresh stage by sending a Sync Info message with refreshDone=TRUE. It also sends a synchronization cookie by attaching it to SearchResultEntry messages generated in the persist stage of the synchronization search. During the persist stage, the provider can also send a Sync Info message containing the synchronization cookie at any time the provider wants to update the consumer-side state indicator.
In the LDAP Sync protocol, entries are uniquely identified by the entryUUID attribute value. It can function as a reliable identifier of the entry. The DN of the entry, on the other hand, can be changed over time and hence cannot be considered as the reliable identifier. The entryUUID is attached to each SearchResultEntry or SearchResultReference as a part of the synchronization control.
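Because entryUUID is an operational attribute, it is only returned when requested explicitly (or via the special "+" selector); a quick way to inspect it (the filter below is just an example):

ldapsearch -x -b "dc=example,dc=com" "(uid=jdoe)" entryUUID entryCSN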
Syncrepl Details
The syncrepl engine utilizes both the refreshOnly and the refreshAndPersist operations of the LDAP Sync protocol. If a syncrepl specification is included in a database definition, slapd(8) launches a syncrepl engine as a slapd(8) thread and schedules its execution. If the refreshOnly operation is specified, the syncrepl engine will be rescheduled at the interval time after a synchronization operation is completed. If the refreshAndPersist operation is specified, the engine will remain active and process the persistent synchronization messages from the provider.
The syncrepl engine utilizes both the present phase and the delete phase of the refresh synchronization. It is possible to configure a session log in the provider which stores the entryUUIDs of a finite number of entries deleted from a database. Multiple replicas share the same session log. The syncrepl engine uses the delete phase if the session log is present and the state of the consumer server is recent enough that no session log entries are truncated after the last synchronization of the client. The syncrepl engine uses the present phase if no session log is configured for the replication content or if the consumer replica is too outdated to be covered by the session log. The current design of the session log store is memory based, so the information contained in the session log is not persistent over multiple provider invocations. It is not currently supported to access the session log store by using LDAP operations. It is also not currently supported to impose access control to the session log.
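On the provider side, the session log is a one-line addition to the syncprov overlay configuration; the argument is the maximum number of write operations it remembers (100 here is an arbitrary size):

overlay syncprov
syncprov-sessionlog 100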
As a further optimization, even when the synchronization search is not associated with any session log, no entries will be transmitted to the consumer server when there has been no update in the replication context.
The syncrepl engine, which is a consumer-side replication engine, can work with any backend. The LDAP Sync provider can be configured as an overlay on any backend, but works best with the back-bdb, back-hdb, or back-mdb backends.
The LDAP Sync provider maintains a contextCSN for each database as the current synchronization state indicator of the provider content. It is the largest entryCSN in the provider context such that no transactions for an entry having a smaller entryCSN value remain outstanding. The contextCSN could not just be set to the largest issued entryCSN because entryCSN is obtained before a transaction starts and transactions are not committed in the issue order.
The provider stores the contextCSN of a context in the contextCSN attribute of the context suffix entry. The attribute is not written to the database after every update operation though; instead it is maintained primarily in memory. At database start time the provider reads the last saved contextCSN into memory and uses the in-memory copy exclusively thereafter. By default, changes to the contextCSN as a result of database updates will not be written to the database until the server is cleanly shut down. A checkpoint facility exists to cause the contextCSN to be written out more frequently if desired.
Note that at startup time, if the provider is unable to read a contextCSN from the suffix entry, it will scan the entire database to determine the value, and this scan may take quite a long time on a large database. When a contextCSN value is read, the database will still be scanned for any entryCSN values greater than it, to make sure the contextCSN value truly reflects the greatest committed entryCSN in the database. On databases which support inequality indexing, setting an eq index on the entryCSN attribute and configuring contextCSN checkpoints will greatly speed up this scanning step.
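A slapd.conf fragment implementing both recommendations might look like this (the checkpoint values are arbitrary: write the contextCSN to the database every 100 operations or every 10 minutes, whichever comes first):

index entryCSN  eq
index entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10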
If no contextCSN can be determined by reading and scanning the database, a new value will be generated. Also, if scanning the database yielded a greater entryCSN than was previously recorded in the suffix entry's contextCSN attribute, a checkpoint will be immediately written with the new value.
The consumer also stores its replica state, which is the provider's contextCSN received as a synchronization cookie, in the contextCSN attribute of the suffix entry. The replica state maintained by a consumer server is used as the synchronization state indicator when it performs subsequent incremental synchronization with the provider server. It is also used as a provider-side synchronization state indicator when it functions as a secondary provider server in a cascading replication configuration. Since the consumer and provider state information are maintained in the same location within their respective databases, any consumer can be promoted to a provider (and vice versa) without any special actions.
Because a general search filter can be used in the syncrepl specification, some entries in the context may be omitted from the synchronization content. The syncrepl engine creates a glue entry to fill in the holes in the replica context if any part of the replica content is subordinate to the holes. The glue entries will not be returned in the search result unless ManageDsaIT control is provided.
Also, as a consequence of the search filter used in the syncrepl specification, it is possible for a modification to remove an entry from the replication scope even though the entry has not been deleted on the provider. Logically the entry must be deleted on the consumer, but in refreshOnly mode the provider cannot detect and propagate this change without the use of the session log on the provider.
How to configure multi-master replication
Let’s see how to configure multi-master replication of OpenLDAP Server on CentOS 6.4
In my previous post, I showed how to configure an OpenLDAP server with SASL/TLS. If you don't know how to configure it, please search this blog.
Some important points about multi-master replication:
In previous releases of OpenLDAP, replication was discussed in terms of a master server and some slave servers.
OpenLDAP version 2.4.x supports a multi-master replication model.
The LDAP Sync Replication engine, syncrepl for short, is a consumer-side replication engine that enables the consumer LDAP server to maintain a shadow copy of a DIT.
A provider replicates directory updates to consumers.
Consumers receive replication updates from providers.
In simple, layman's terms, provider means master and consumer means slave.
In multi-master replication, all providers also act as consumers.
In multi-master replication, syncrepl supports two synchronization operations, i.e. refreshOnly and refreshAndPersist.
In refreshOnly synchronization mode, the provider uses pull-based synchronization, where the consumer servers need not be tracked and no history information is maintained.
In refreshAndPersist mode of synchronization, the provider uses a push-based synchronization. The provider keeps track of the consumer servers that have requested the persistent search and sends them necessary updates as the provider replication content gets modified.
1) Copy the LDAP1 server's public key file to the LDAP2 server, and the LDAP2 server's public key file to the LDAP1 server, into /etc/openldap/certs
[root@ldap1 ~]# scp ldap2:/etc/pki/tls/certs/ldap2pub.pem  /etc/openldap/certs/
[root@ldap1 ~]# scp /etc/pki/tls/certs/ldap1pub.pem ldap2:/etc/openldap/certs/
2) Set the ownership of the copied public key files to the ldap user on the LDAP1 and LDAP2 servers
[root@ldap1 ~]# chown ldap. /etc/openldap/certs/ldap2pub.pem
[root@ldap2 ~]# chown ldap. /etc/openldap/certs/ldap1pub.pem
3) Configure /etc/openldap/slapd.conf as below on both the LDAP1 and LDAP2 servers
[root@ldap1 ~]# vim /etc/openldap/slapd.conf
#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include         /etc/openldap/schema/corba.schema
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/duaconf.schema
include         /etc/openldap/schema/dyngroup.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/java.schema
include         /etc/openldap/schema/misc.schema
include         /etc/openldap/schema/nis.schema
include         /etc/openldap/schema/openldap.schema
include         /etc/openldap/schema/ppolicy.schema
include         /etc/openldap/schema/collective.schema
# Allow LDAPv2 client connections.  This is NOT the default.
allow bind_v2
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral       ldap://root.openldap.org
pidfile         /var/run/openldap/slapd.pid
argsfile        /var/run/openldap/slapd.args
# Load dynamic backend modules
# - modulepath is architecture dependent value (32/64-bit system)
# - back_sql.la overlay requires openldap-server-sql package
# - dyngroup.la and dynlist.la cannot be used at the same time
# modulepath /usr/lib/openldap
# modulepath /usr/lib64/openldap
# moduleload accesslog.la
# moduleload auditlog.la
# moduleload back_sql.la
# moduleload chain.la
# moduleload collect.la
# moduleload constraint.la
# moduleload dds.la
# moduleload deref.la
# moduleload dyngroup.la
# moduleload dynlist.la
# moduleload memberof.la
# moduleload pbind.la
# moduleload pcache.la
# moduleload ppolicy.la
# moduleload refint.la
# moduleload retcode.la
# moduleload rwm.la
# moduleload seqmod.la
# moduleload smbk5pwd.la
# moduleload sssvlv.la
moduleload syncprov.la
# moduleload translucent.la
# moduleload unique.la
# moduleload valsort.la
# The next three lines allow use of TLS for encrypting connections using a
# dummy test certificate which you can generate by running
# /usr/libexec/openldap/generate-server-cert.sh. Your client software may balk
# at self-signed certificates, however.
#TLSCACertificatePath /etc/openldap/certs
#TLSCertificateFile "\"OpenLDAP Server\""
#TLSCertificateKeyFile /etc/openldap/certs/password
TLSCertificateFile "/etc/pki/tls/certs/ldap1pub.pem"
TLSCertificateKeyFile "/etc/pki/tls/certs/ldap1key.pem"
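# NOTE: the two paths above are for LDAP1; on the LDAP2 server, point them at
# that server's own certificate and key instead (ldap2pub.pem / ldap2key.pem).
# Likewise, adjust the tls_cacert paths in the syncrepl section further below
# so that each points at a certificate file that actually exists on that host.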
# Sample security restrictions
#       Require integrity protection (prevent hijacking)
#       Require 112-bit (3DES or better) encryption for updates
#       Require 63-bit encryption for simple bind
# security ssf=1 update_ssf=112 simple_bind=64
# Sample access control policy:
#       Root DSE: allow anyone to read it
#       Subschema (sub)entry DSE: allow anyone to read it
#       Other DSEs:
#               Allow self write access
#               Allow authenticated users read access
#               Allow anonymous users to authenticate
#       Directives needed to implement policy:
# access to dn.base="" by * read
# access to dn.base="cn=Subschema" by * read
# access to *
#       by self write
#       by users read
#       by anonymous auth
#
# if no access controls are present, the default policy
# allows anyone and everyone to read anything but restricts
# updates to rootdn.  (e.g., "access to * by * read")
#
# rootdn can always read and write EVERYTHING!
# enable on-the-fly configuration (cn=config)
database config
access to *
        by dn.exact="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage
        by * none
# enable server status monitoring (cn=monitor)
database monitor
access to *
        by dn.exact="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read
        by dn.exact="cn=Manager,dc=example,dc=com" read
        by * none
#######################################################################
# database definitions
#######################################################################
database        bdb
suffix          "dc=example,dc=com"
checkpoint      1024 15
rootdn          "cn=Manager,dc=example,dc=com"
rootpw          {SSHA}5h1vaYgy7fOLash39ZFKLQ3TOzqNYk/g
loglevel        256
sizelimit       unlimited
# Cleartext passwords, especially for the rootdn, should
# be avoided.  See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
# rootpw                secret
# rootpw                {crypt}ijFYNcSNctBYg
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
directory       /var/lib/ldap
# Indices to maintain for this database
index objectClass                       eq,pres
index ou,cn,mail,surname,givenname      eq,pres,sub
index uidNumber,gidNumber,loginShell    eq,pres
index uid,memberUid                     eq,pres,sub
index nisMapName,nisMapEntry            eq,pres,sub
# Replicas of this database
#replogfile /var/lib/ldap/openldap-master-replog
#replica host=ldap-1.example.com:389 starttls=critical
#     bindmethod=sasl saslmech=GSSAPI
#     authcId=host/ldap-master.example.com@EXAMPLE.COM
# Multi master replication
ServerID        1 "ldaps://ldap1.example.com"
ServerID        2 "ldaps://ldap2.example.com"
overlay         syncprov
syncprov-checkpoint     10 1
syncprov-sessionlog     100
syncrepl        rid=1
                provider="ldaps://ldap1.example.com"
                type=refreshAndPersist
                interval=00:00:00:10
                retry="5 10 60 +"
                timeout=1
                schemachecking=off
                searchbase="dc=example,dc=com"
                scope=sub
                bindmethod=simple
                tls_cacert=/etc/pki/tls/certs/ldap1pub.pem
                binddn="cn=Manager,dc=example,dc=com"
                credentials="redhat"
syncrepl        rid=2
                provider="ldaps://ldap2.example.com"
                type=refreshAndPersist
                interval=00:00:00:10
                retry="5 10 60 +"
                timeout=1
                schemachecking=off
                searchbase="dc=example,dc=com"
                scope=sub
                bindmethod=simple
                tls_cacert=/etc/openldap/certs/ldap2pub.pem
                binddn="cn=Manager,dc=example,dc=com"
                credentials="redhat"
MirrorMode      on
4) Convert the slapd.conf to cn=config format and re-initialize the slapd.d folder on both the LDAP1 and LDAP2 servers
[root@ldap1 ~]# rm -rf /etc/openldap/slapd.d/*
[root@ldap1 ~]# slaptest -u
[root@ldap1 ~]# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/
5) Change the ownership of /etc/openldap/slapd.d/ to the ldap user on the LDAP1 and LDAP2 servers
[root@ldap1 ~]# chown -R ldap. /etc/openldap/slapd.d/
6) Restart the slapd service on LDAP1 and LDAP2 Servers
[root@ldap1 ~]# service slapd restart
7) Check whether replication is working by adding an entry into the DIT on either server; the entry should then be visible via ldapsearch on both servers. For example:
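(testuser.ldif and the uid below are hypothetical; any valid entry under dc=example,dc=com will do.)

[root@ldap1 ~]# ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f testuser.ldif
[root@ldap2 ~]# ldapsearch -x -b "dc=example,dc=com" "(uid=testuser)"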
8) If there is any problem with replication, check the log file /var/log/ldap for more information and troubleshooting (see below for how this log gets populated).
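Note that slapd logs through the syslog local4 facility by default (the loglevel 256 set above logs connection and operation statistics), so /var/log/ldap will only exist if syslog is told to write it. A minimal sketch, assuming rsyslog on CentOS 6:

[root@ldap1 ~]# echo "local4.* /var/log/ldap" >> /etc/rsyslog.conf
[root@ldap1 ~]# service rsyslog restart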
Configuration terms used in /etc/openldap/slapd.conf for replication
rid -> replica ID; must be numeric and unique for each server
provider -> URI of the LDAP server to replicate from
type -> type of synchronization between the LDAP servers (refreshOnly or refreshAndPersist)
interval -> polling interval between synchronization runs in refreshOnly mode, in dd:hh:mm:ss format (10 seconds here); ignored in refreshAndPersist mode
retry -> retry schedule when the provider is unreachable; "5 10 60 +" means retry every 5 seconds up to 10 times, then every 60 seconds indefinitely
timeout -> how many seconds to wait for a response from the provider before giving up (1 second here)
schemachecking -> off means replicated entries will not be checked against the schema
searchbase -> the search base of the subtree that will be replicated to the other server
scope -> sub means the entire subtree under the search base will be replicated
bindmethod -> authentication method used for the replication connection (simple here)
binddn -> the DN used to authenticate the replication connection; it must have read access to the replicated content
credentials -> the password for the binddn




Multi-Master Replication Using cn=config (OLC)
In multi-master replication, two or more servers act as masters, and all of them are authoritative for changes in the LDAP directory. Client queries can be distributed across the multiple servers with the help of replication.
Environment
For Multi-Master replication, we are going to use three OpenLDAP servers. Details are below.

ldpsrv1.sysadminshare.local (192.168.12.10)
ldpsrv2.sysadminshare.local (192.168.12.20)
ldpsrv3.sysadminshare.local (192.168.12.30)
Install LDAP
Install LDAP packages on all of your servers.

yum -y install openldap compat-openldap openldap-clients openldap-servers openldap-servers-sql openldap-devel
Start the LDAP service and enable it to start automatically at system boot.

systemctl start slapd.service
systemctl enable slapd.service
Configure LDAP Logging
Configure syslog to enable LDAP logging.

echo "local4.* /var/log/ldap.log" >> /etc/rsyslog.conf
systemctl restart rsyslog
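Optionally, you can also control how much slapd itself logs by setting olcLogLevel; stats is a reasonable level for watching replication activity. This is a sketch of an optional step:

vi loglevel.ldif

dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats

ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif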
Configure OpenLDAP Multi-Master Replication
Copy the sample database configuration file to the /var/lib/ldap directory and update the file permissions. You need to perform the steps below on all of your OpenLDAP servers unless otherwise stated.

cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
chown ldap:ldap /var/lib/ldap/*
We will enable the syncprov module.

vi syncprov_mod.ldif
Copy and paste the below lines to the above syncprov_mod.ldif file.

dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/lib64/openldap
olcModuleLoad: syncprov.la
Now send the configuration to the LDAP server.

ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov_mod.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "cn=module,cn=config"
Enable Config Replication
Set the olcServerID on all servers. For example, set olcServerID to 1 on ldpsrv1, to 2 on ldpsrv2, and to 3 on ldpsrv3.

vi olcserverid.ldif
Copy and paste the below text into the above file.

dn: cn=config
changetype: modify
add: olcServerID
olcServerID: 1
Update the configuration on the LDAP server.

ldapmodify -Y EXTERNAL -H ldapi:/// -f olcserverid.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
We need to generate a password for LDAP configuration replication.

slappasswd
Output:

New password:
Re-enter new password:
{SSHA}MAfw/QNizKx4NxueW7CpCSN6jeDB5Z+C
You should generate a password hash on each server by running the slappasswd command, but use the same underlying password on every server, since each server will bind to the others with it.
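If you prefer to supply the password non-interactively, slappasswd can also take it on the command line; the hash differs on every run because of the random salt, but any hash of the same password will verify:

slappasswd -s YourSecretPassword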
Set a password for the configuration database.

vi olcdatabase.ldif
Copy and paste the below text into the above file. Put the password hash you generated in the previous step in this file.

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}MAfw/QNizKx4NxueW7CpCSN6jeDB5Z+C
Update the configuration on the LDAP server.

ldapmodify -Y EXTERNAL -H ldapi:/// -f olcdatabase.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={0}config,cn=config"
Now we will set up the configuration replication on all servers.

vi configrep.ldif
Copy and paste the below text into the above file. Replace credentials=x in each olcSyncRepl value with the plain-text password whose hash you set as olcRootPW in the previous step.

### Update Server ID with LDAP URL ###

dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1 ldap://ldpsrv1.sysadminshare.local
olcServerID: 2 ldap://ldpsrv2.sysadminshare.local
olcServerID: 3 ldap://ldpsrv3.sysadminshare.local

### Enable Config Replication###

dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

### Adding config details for confDB replication ###

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001 provider=ldap://ldpsrv1.sysadminshare.local binddn="cn=config"
  bindmethod=simple credentials=x searchbase="cn=config"
  type=refreshAndPersist retry="5 5 300 5" timeout=1
olcSyncRepl: rid=002 provider=ldap://ldpsrv2.sysadminshare.local binddn="cn=config"
  bindmethod=simple credentials=x searchbase="cn=config"
  type=refreshAndPersist retry="5 5 300 5" timeout=1
olcSyncRepl: rid=003 provider=ldap://ldpsrv3.sysadminshare.local binddn="cn=config"
  bindmethod=simple credentials=x searchbase="cn=config"
  type=refreshAndPersist retry="5 5 300 5" timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE
Now send the configuration to the LDAP server.

ldapmodify -Y EXTERNAL -H ldapi:/// -f configrep.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"

adding new entry "olcOverlay=syncprov,olcDatabase={0}config,cn=config"

modifying entry "olcDatabase={0}config,cn=config"
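To confirm the settings took effect, you can read the server IDs and the replication attributes back out of cn=config (a quick sanity check):

ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -s base olcServerID
ldapsearch -Y EXTERNAL -H ldapi:/// -b "olcDatabase={0}config,cn=config" olcSyncRepl olcMirrorMode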
Enable Database Replication
By this time, all of your LDAP configuration is replicated. Now we will enable replication of the actual data, i.e., the user database. Perform the steps below on any one of the nodes only; since the configuration database is replicated, the changes will propagate to the other nodes.
We need to enable the syncprov overlay for the hdb database.

vi syncprov.ldif
Copy and paste the below text into the above file.

dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
Update the configuration on the LDAP server.

ldapmodify -Y EXTERNAL -H ldapi:/// -f syncprov.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "olcOverlay=syncprov,olcDatabase={2}hdb,cn=config"
Setup replication for hdb database.

vi olcdatabasehdb.ldif
Copy and paste the below content to the above file. You may get an error for olcSuffix, olcRootDN, and olcRootPW if these are already present in your configuration; remove those stanzas from the LDIF if they are not required. As before, replace credentials=x with the actual password for the replication binddn (cn=ldapadm,dc=sysadminshare,dc=local).

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=sysadminshare,dc=local
-
replace: olcRootDN
olcRootDN: cn=ldapadm,dc=sysadminshare,dc=local
-
replace: olcRootPW
olcRootPW: {SSHA}xtbbtC/1pJclCPzo1n3Szac9jqavSphk
-
add: olcSyncRepl
olcSyncRepl: rid=004 provider=ldap://ldpsrv1.sysadminshare.local binddn="cn=ldapadm,dc=sysadminshare,dc=local" bindmethod=simple
  credentials=x searchbase="dc=sysadminshare,dc=local" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=005 provider=ldap://ldpsrv2.sysadminshare.local binddn="cn=ldapadm,dc=sysadminshare,dc=local" bindmethod=simple
  credentials=x searchbase="dc=sysadminshare,dc=local" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=006 provider=ldap://ldpsrv3.sysadminshare.local binddn="cn=ldapadm,dc=sysadminshare,dc=local" bindmethod=simple
  credentials=x searchbase="dc=sysadminshare,dc=local" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
-
add: olcDbIndex
olcDbIndex: entryUUID  eq
-
add: olcDbIndex
olcDbIndex: entryCSN  eq
-
add: olcMirrorMode
olcMirrorMode: TRUE
Once you have updated the file, send the configuration to the LDAP server.

ldapmodify -Y EXTERNAL  -H ldapi:/// -f olcdatabasehdb.ldif
Output:

SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "olcDatabase={2}hdb,cn=config"
Restrict access to the monitor database (olcDatabase={1}monitor) so that only the LDAP root (ldapadm) user can read it. Create a monitor.ldif file with the following content and apply it.

# vi monitor.ldif

dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=ldapadm,dc=sysadminshare,dc=local" read by * none
Once you have updated the file, send the configuration to the LDAP server.

ldapmodify -Y EXTERNAL  -H ldapi:/// -f monitor.ldif
Add the LDAP schemas.

ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
Generate a base.ldif file for your domain.

# vi base.ldif

dn: dc=sysadminshare,dc=local
dc: sysadminshare
objectClass: top
objectClass: domain

dn: cn=ldapadm,dc=sysadminshare,dc=local
objectClass: organizationalRole
cn: ldapadm
description: LDAP Manager

dn: ou=People,dc=sysadminshare,dc=local
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=sysadminshare,dc=local
objectClass: organizationalUnit
ou: Group
Build the directory structure.

ldapadd -x -W -D "cn=ldapadm,dc=sysadminshare,dc=local" -f base.ldif
Output:

Enter LDAP Password:
adding new entry "dc=sysadminshare,dc=local"

adding new entry "cn=ldapadm,dc=sysadminshare,dc=local"

adding new entry "ou=People,dc=sysadminshare,dc=local"

adding new entry "ou=Group,dc=sysadminshare,dc=local"
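You can also compare the replication state indicators at this point. Each server stores the provider contextCSN in the suffix entry, and once the servers are in sync all three should return identical values:

ldapsearch -x -H ldap://ldpsrv1.sysadminshare.local -b "dc=sysadminshare,dc=local" -s base contextCSN
ldapsearch -x -H ldap://ldpsrv2.sysadminshare.local -b "dc=sysadminshare,dc=local" -s base contextCSN
ldapsearch -x -H ldap://ldpsrv3.sysadminshare.local -b "dc=sysadminshare,dc=local" -s base contextCSN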
Test the LDAP replication
Let's create an LDAP user called "ldaptest" on any one of your master servers. To do that, create an .ldif file on ldpsrv1.sysadminshare.local (in my case).

[root@ldpsrv1 ~]# vi ldaptest.ldif
Update the above file with the content below.

dn: uid=ldaptest,ou=People,dc=sysadminshare,dc=local
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldaptest
uid: ldaptest
uidNumber: 9988
gidNumber: 100
homeDirectory: /home/ldaptest
loginShell: /bin/bash
gecos: LDAP Replication Test User
userPassword: {crypt}x
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
Add the user to the LDAP server using the ldapadd command.

[root@ldpsrv1 ~]# ldapadd -x -W -D "cn=ldapadm,dc=sysadminshare,dc=local" -f ldaptest.ldif
Output:

Enter LDAP Password:
adding new entry "uid=ldaptest,ou=People,dc=sysadminshare,dc=local"
Search for “ldaptest” on another master server (ldpsrv2.sysadminshare.local).

[root@ldpsrv2 ~]# ldapsearch -x cn=ldaptest -b dc=sysadminshare,dc=local
Output:

# extended LDIF
#
# LDAPv3
# base <dc=sysadminshare,dc=local> with scope subtree
# filter: cn=ldaptest
# requesting: ALL
#

# ldaptest, People, sysadminshare.local
dn: uid=ldaptest,ou=People,dc=sysadminshare,dc=local
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldaptest
uid: ldaptest
uidNumber: 9988
gidNumber: 100
homeDirectory: /home/ldaptest
loginShell: /bin/bash
gecos: LDAP Replication Test User
userPassword:: e2NyeXB0fXg=
shadowLastChange: 17058
shadowMin: 0
shadowMax: 99999
shadowWarning: 7

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1
Now, from ldpsrv2.sysadminshare.local, set a password for the user created on ldpsrv1.sysadminshare.local. If you are able to set the password, replication is working as expected.

[root@ldpsrv2 ~]# ldappasswd -s password123 -W -D "cn=ldapadm,dc=sysadminshare,dc=local" -x "uid=ldaptest,ou=People,dc=sysadminshare,dc=local"
Where,
-s specifies the new password for the user
-W prompts for the bind password of the binddn
-D is the distinguished name used to authenticate to the LDAP server
-x uses simple authentication; the final argument is the DN of the user whose password is being changed
In a master-slave replication topology, you cannot set the password for an LDAP user on a slave server.
Extras
Configure the LDAP clients to bind to all of the master servers.

authconfig --enableldap --enableldapauth --ldapserver=ldpsrv1.sysadminshare.local,ldpsrv2.sysadminshare.local,ldpsrv3.sysadminshare.local --ldapbasedn="dc=sysadminshare,dc=local" --enablemkhomedir --update
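Assuming the client's name service switch is now pointed at LDAP (authconfig sets up nslcd or sssd depending on the release), a quick client-side check is to resolve the replicated test user:

getent passwd ldaptest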