Working with large volumes of data always has its challenges. Add to the mix the complexity of hundreds, or sometimes thousands, of data sources, each with its own format, naming conventions, method of storage and method of data access, and you very quickly get a feel for the scale of the task at hand. Identity & Access Management (IAM) requires an organisation to understand every individual permission each user holds on each of its systems. To achieve this, the permissions of each user on each system must be collected.
Many of these challenges can be solved by using a good Data Management platform to automate the way the data is collected and pre-processed before it reaches the IAM solution. In this paper we discuss some of the common challenges and how staging the Identity Data can help.
Data quality is one of the key ingredients of a successful IAM implementation. Early analysis, before the IAM project starts, will quickly highlight the priorities the project should focus on before embarking on a costly technical deployment and integration with every business application.
Data is Inconsistent
It is common to see issues during the consumption of Identity or Access Control List (ACL) data caused by missing attributes or mismatched data. There are many possible causes, such as mistakes made during manual data input, an unreliable business process around the creation of that data, or, in some cases, problems with the technology or software that generated the data.
By collecting the data into a Data Management layer, analysis can be done to assess data quality and consistency. This can highlight where data is missing or where attributes from one source have no matching attributes in another source. The results can be presented as dashboards or reports, improving visibility of errors early in the Identity Management lifecycle. Rules around data quality can also trigger automated workflows to remediate the bad data or escalate it to specific data owners. Finally, by keeping a historic record of data collection health over time, trends and patterns can be recognised and used to fix the source of the issue, improving collection health in the future.
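As a minimal sketch of the kind of quality rule described above, the following assumes two collected sources are represented as lists of user records (dicts). The attribute names and the `check_quality` function are illustrative, not part of any real product API.

```python
# Illustrative staging-layer quality check: flag records with missing
# attributes, and application accounts with no matching identity in the
# authoritative (e.g. HR) feed. All names here are assumptions.

REQUIRED_ATTRS = {"employee_id", "email", "department"}

def check_quality(hr_records, app_records):
    """Return a list of (issue_type, ...) tuples describing quality problems."""
    issues = []
    for rec in hr_records:
        # An attribute counts as missing if absent or empty.
        missing = REQUIRED_ATTRS - {k for k, v in rec.items() if v}
        if missing:
            issues.append(("missing_attributes", rec.get("employee_id"),
                           sorted(missing)))
    hr_ids = {r.get("employee_id") for r in hr_records}
    for rec in app_records:
        # Accounts on the application with no matching HR identity.
        if rec.get("employee_id") not in hr_ids:
            issues.append(("unmatched_account", rec.get("employee_id")))
    return issues
```

Issues returned from a check like this could feed the dashboards, reports and remediation workflows mentioned above.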
Data is in an Unrecognisable Format
When collecting from a variety of data sources and platforms, the unique identifiers or attributes you collect may be in a format downstream systems cannot understand or, in the worst case, one that causes system failures during processing. Leading zeros, multi-value attributes and special characters are common causes. A data management layer enables the data to be corrected automatically, making it easier to consume further down the line.
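The three common cases named above could be handled with small normalisation rules like the following; the function names and the chosen allow-list are assumptions for illustration, not a fixed standard.

```python
import re

def normalise_id(raw_id: str) -> str:
    """Strip leading zeros that some sources prepend to numeric IDs."""
    return raw_id.lstrip("0") or "0"

def split_multivalue(raw: str, sep: str = ";") -> list:
    """Split a delimiter-packed multi-value attribute into a clean list."""
    return [v.strip() for v in raw.split(sep) if v.strip()]

def strip_special(raw: str) -> str:
    """Drop characters outside a conservative allow-list (letters, digits,
    underscore, dot, @ and hyphen) that downstream systems may reject."""
    return re.sub(r"[^\w.@-]", "", raw)
```

Rules like these would run once, in the staging layer, rather than being re-implemented by every consumer.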
Unsupported Data Collection Methods
Many source platforms only offer data exports via certain methods, such as web services, APIs or proprietary SDKs. By using a data management solution with a good variety of connector types, data from all platforms can be consumed into a single staging datastore. This can then be offered to data consumers as a consistent, simple database view or a RESTful web service call.
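The pattern can be sketched as follows: each connector hides its platform's collection method behind a common `fetch()` interface, and everything lands in one queryable store. The connector classes, the SQLite schema and the record fields are assumptions made for this sketch.

```python
import sqlite3

class CsvConnector:
    """Stand-in for a connector that parses a file-based export."""
    def __init__(self, rows):
        self.rows = rows
    def fetch(self):
        return self.rows

class ApiConnector:
    """Stand-in for a connector that calls a web service or SDK."""
    def __init__(self, client):
        self.client = client
    def fetch(self):
        return self.client.list_accounts()

def stage(connectors, db_path=":memory:"):
    """Pull from every connector into a single staging table."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS identities "
               "(source TEXT, user_id TEXT, permission TEXT)")
    for name, connector in connectors.items():
        for rec in connector.fetch():
            db.execute("INSERT INTO identities VALUES (?, ?, ?)",
                       (name, rec["user_id"], rec["permission"]))
    db.commit()
    return db
```

Consumers then query one table (or one web service in front of it) instead of integrating with every source's export mechanism.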
Adding Business Context to the Data
Effective Identity and Access Management processes are always built on good participation and buy-in from all areas of the business. Often the business use of a particular application platform may be unclear, and at other times a platform may contain access permissions for a multitude of business uses (or applications).
It is good practice to ensure business context is mapped to all applications and permissions, so it is easy to understand what access is being requested, revoked or approved. Once mappings have been created from permissions into business application groupings, your IAM and Governance solutions can remain completely business focused, leaving the technical fine-grained integrations to the data staging platform. Adding business-friendly descriptions at this stage further improves understanding of the permissions or authorisations you have collected.
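A mapping of this kind can be as simple as a lookup from technical permission names to a business application and description. The group names, applications and descriptions below are invented for the example.

```python
# Hypothetical permission-to-business-context mapping, maintained in the
# staging layer rather than in the IAM solution itself.
PERMISSION_MAP = {
    "AD_GRP_FIN_RW": {
        "application": "Finance Ledger",
        "description": "Read/write access to the finance ledger",
    },
    "SAP_ROLE_HR_VIEW": {
        "application": "HR Portal",
        "description": "View-only access to employee HR records",
    },
}

def enrich(raw_permissions):
    """Attach business context to raw permissions; anything unmapped is
    flagged so a data owner can supply the missing context."""
    enriched = []
    for perm in raw_permissions:
        context = PERMISSION_MAP.get(
            perm, {"application": "UNMAPPED", "description": perm})
        enriched.append({"permission": perm, **context})
    return enriched
```

The "UNMAPPED" marker gives reviewers a simple queue of permissions still needing business context.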
Impact on Collection Source
IAM collections from end platforms containing large volumes of data can have a performance impact on the platform itself, due to the length of time or amount of processing required. To reduce this impact, collections were traditionally (and in many organisations still are) scheduled outside office hours, when there are few active users. However, this becomes more complex in global organisations with locations spread across several time zones.
Staging the collection process using a data management layer can help in several ways:
Firstly, by providing its own flexible collection engine, the staging layer removes this overhead from the services that consume the data. They no longer have to perform multiple separate collections and can read everything in one go.
Secondly, collections can be triggered as well as scheduled. Data can be collected “Just in Time” or “Only when there are significant changes to collect”.
Thirdly, partial or change-log style collections can be used to reduce the load and volume collected.
Finally, by using simple logic to validate the collected data, the staging solution can call out to, or trigger collections from, the downstream services, again improving the efficiency and accuracy of the collected data.
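The change-log style collection mentioned above can be sketched as follows: only records modified since the last collection watermark are fetched and staged. The `modified` timestamp field is an assumption about the source's export format.

```python
# Hedged sketch of delta collection using a persisted watermark.
def collect_changes(records, last_watermark):
    """Return only records changed since the previous run, plus the new
    watermark to persist for the next run."""
    changed = [r for r in records if r["modified"] > last_watermark]
    new_watermark = max((r["modified"] for r in changed),
                        default=last_watermark)
    return changed, new_watermark
```

Between runs, only the watermark needs to be stored; the source platform is asked for far less data than a full export each time.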
Repeatable Application On-boarding Frameworks
Organisations with large application estates that strive for highly automated management of their employees' access need an efficient and repeatable method for on-boarding applications.
Many programmes of this nature adopt a “continuous delivery” approach to manage their delivery pipeline and ensure they realise a return on investment early in the programme.
Using a staging platform such as Securience Data Manager, which has standard out-of-the-box methods for data collection, allows applications to be on-boarded quickly without requiring a specialist technical skillset. The same qualification questions can be put to each application owner before on-boarding, and the same implementation steps repeated for each platform. Huge time savings are made with platforms of similar types by utilising templates or direct copies.
Automation of Data Preparation
Data preparation before feeding the IAM solution can often be a time-consuming, costly manual process, often run by large development teams who write scripts, test them and deploy them to production.
Utilising Securience Data Manager as a specialist data integration tool, organisations can remove the need for expensive development teams by automating all of the data preparation tasks.
Securience Data Manager
Securience Data Manager is a staging and orchestration solution with a host of connectors enabling integration with all types of end points. Controlled from a web-based reactive UI, a user can easily create an integration to an application and format the data as necessary.
Securience Data Manager is purpose built for efficient collection from thousands of end points, which allows the Identity Management solution to efficiently process the data from a single authoritative source. Technical and business stakeholders can work together to produce metrics and reports that analyse the sources, which are used to improve the quality and validity of the data. Combining the power of workflows and the scheduling engine, Securience Data Manager allows full automation to replace manual effort, creating dependable and auditable processes. Effort can be further reduced with the ability to export the business configuration to existing mainstream Identity Management (IDM) solutions.
For further information on Securience Data Manager or Identity & Access Management solutions from Securience, please visit our website, or contact us via email.