
Force Local Users and Groups with Ansible

I'm in the process of migrating a few Puppet modules over to Ansible, and along the way I've run into an unusual situation while creating users and groups. Here's some background: I have an application that refuses to complete its installation unless it can see certain users and groups in the local passwd and group files. It just so happens that these same users and groups also exist in LDAP.

Puppet has an attribute called "forcelocal" on its user and group resources that has always been able to create a local user or group in this situation, despite a matching user or group existing in LDAP. So I was a bit disappointed to discover that the similar "local" option in Ansible's group and user modules does not work the same way.

From the user module docs, the "local" option has the following behavior:

Forces the use of "local" command alternatives on platforms that implement it. This is useful in environments that use centralized authentication when you want to manipulate the local users. I.E. it uses luseradd instead of useradd. This requires that these commands exist on the targeted host, otherwise it will be a fatal error. (https://docs.ansible.com/ansible/latest/modules/user_module.html#user-module)
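
On paper that sounds exactly like Puppet's "forcelocal". For reference, here is roughly how I expected to use it; the user name is just a placeholder:

    - name: add a local user, even if the name exists in LDAP
      user:
        name: localuser
        local: yes
        state: present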

After reading that, I expected Ansible to create local users and groups regardless of whether the user or group was already found in LDAP. However, that is not the case. For whatever reason, specifying the local option does not create a local user or group if that user or group is already in LDAP and visible to your target server. Instead, Ansible simply marks the task as complete and happily moves on to the next step. Looking at the code for the module, it uses "grp" from the Python standard library to check for an existing entry, and since that lookup goes through the system's name service switch, it finds the user or group (albeit in LDAP) and moves on, which for my use case defeats the whole purpose of the local option. I would like to see the module do a further check for the specified user or group in /etc/passwd or /etc/group before reporting success.
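
You can see the distinction from Python itself. Here is a minimal sketch (not the module's actual code, and "localgroup" is a placeholder) of the two lookups involved: grp.getgrnam goes through NSS and will happily return an LDAP group, while reading /etc/group directly only sees local entries.

    import grp

    name = "localgroup"

    # NSS lookup: consults every source configured in /etc/nsswitch.conf,
    # including sss/LDAP, so this succeeds even for an LDAP-only group.
    try:
        grp.getgrnam(name)
        print("found via NSS (possibly LDAP)")
    except KeyError:
        print("not found in any source")

    # Local-only lookup: parse /etc/group directly. This is the extra
    # check I would like the module to perform when local is set.
    with open("/etc/group") as f:
        is_local = any(line.split(":")[0] == name for line in f)
    print("in /etc/group:", is_local)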

After a bit of head scratching and cursing, I read a few blog posts and Stack Exchange answers from others who had attempted to solve this, but none of them struck me as viable for my situation. Because I'm not enough of a programmer to fix this little bug, and because I only need to run this particular playbook once, when a server is deployed, I chose a bit of a compromise solution.

At first, I kicked around the idea of inserting the user and group directly into the passwd, shadow, and group files, but that just didn't seem like a clean solution. Plus, I assume this problem will be fixed at some point, so it seems easier to keep using the group and user modules than to have to rewrite the playbook in the near future.

So I decided to do the following: stop the sssd service (thus making the LDAP users and groups invisible to the server), add the users and groups with the Ansible modules, and then restart sssd.

Here is a slimmed down version of what I ended up doing in the playbook. Keep in mind that this only works if you are using sssd, and you should make sure that your server is in a state where sssd can safely be stopped while these tasks run. In my case that's fine, because I only run this sequence once, when the server is built.

    ---
    - name: stop sssd
      service:
        name: sssd
        state: stopped
    - name: add group
      group:
        name: localgroup
        gid: 1234
        state: present
    - name: add user
      user:
        name: localuser
        uid: 1234
        group: localgroup
        state: present
    - name: start sssd
      service:
        name: sssd
        state: started
        enabled: yes
    ...
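
These tasks also assume root privileges on the target. If you keep them in their own file, a minimal play wrapper might look like this; the host group and file name are placeholders:

    ---
    - hosts: newservers
      become: yes
      tasks:
        - import_tasks: local-users.yml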

Let me know if you've run into this, or if you have a better solution. I suspect you could make this a bit more universal by first testing whether the user and group already have entries in the passwd and group files before stopping sssd.
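
For example, something like this untested sketch might work. It uses the getent command's -s flag to query only the files backend (the user name is a placeholder), and only stops sssd when the entry is missing locally:

    - name: check whether the user is already in /etc/passwd
      command: getent -s files passwd localuser
      register: local_passwd
      changed_when: false
      failed_when: false

    - name: stop sssd
      service:
        name: sssd
        state: stopped
      when: local_passwd.rc != 0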