
Data Security & Identity Management: Essential 1st Cousins in IT Security

As a senior architect, sales engineer, and consultant out in the field working closely with senior IT security leaders and CISOs, I sometimes run into questions about the Vormetric product line as it relates to Identity Management. Having implemented identity management solutions for Fortune 500 companies around the world for a number of years, I truly appreciate when these questions arise, because it demonstrates an understanding that Data Security and Identity Management are essential first cousins in IT Security.

How are they first cousins within IT Security, and why are they both so essential, you ask? In a moment I’ll offer at least a one-line history statement to show how and why they are related, so we can then understand their vitality and importance within today’s enterprises.

Data Security & Identity Management Go Hand In Hand

But first, when you think about it, stripped all the way down, digital information is just digitized data on a storage medium of some sort. That’s it. Whether it’s raw data or data that is executed (i.e., a program, a script, etc.), to the storage device and the operating system that manage both, it’s all just “data.” The data itself, then, is at the heart of what every computing system is about. And data security is all about securing that data wherever it lives for the enterprise.

Digitized data just sitting on a storage device, however, is meaningless. Access to that data and a frame of reference – “this data is a ‘program,’ this data is ‘system data,’ this data is ‘user data,’” etc. – are what give data meaning and life. For someone who sells and/or implements in either the Data Security or Identity Management space, it can be very easy to don horse blinders and insist to customers that their solution is the essential piece:

“No! It’s all about protecting the data and having data protect itself!”

“Au contraire! It’s all about identities and governing access to the data!”

Actually, Data Security and Identity Management are symbiotic and synergistically linked – chicken and egg, needle and thread, wall and head (for all us cybersecurity professionals, I had to throw that one in there!), Batman and Robin, Oscar and Felix, Wallace and Gromit. (You get the picture… :-)) Ya gotta have both. Both are right, and neither by itself is the entire answer to the problem of securing data. Data without access is dead. But access governance that doesn’t drive protection and controls all the way down to the data level is insufficient. Both are essential and must be combined to provide an effective and efficient solution to data security and identity and access management and governance. They go hand in hand.

In terms of implementation… [Identity Management and Data Security] should be implemented top down and bottom up, somewhat simultaneously, designed to meet in the middle.

More »


The Problem of Non-User IDs in Organizations Today

February 4th, 2016 | Posted in General Idm/IAM, IdM Engagement

(The contents of this article are captured here in response to a post on SailPoint’s Identity Quotient Blog entitled “Third-Party Contractors: The Target Breach’s Bulls-eye.” I recommend reading that article first to establish context for this one.)

It is fairly well known that the Target breach leveraged third-party credentials phished from an associated Heating, Ventilation, and Air Conditioning (HVAC) vendor. This was the initial point of entry into the Target network.

However, the HVAC credentials were leveraged primarily for initial access; credit card data was not being accessed and siphoned using that specific HVAC ID. Nevertheless, controls around time of access and other metadata that could be policy-driven within SailPoint IdentityIQ for that third-party access are still cogent to the discussion, as the aforementioned SailPoint article points out.

What isn’t mentioned in the article is that SailPoint IdentityIQ, and ideally any IdM product, could and should play a very big part in gathering and providing governance around Non-User IDs (NUIDs): testing IDs, training IDs, B2B FTP IDs, generic admin IDs (which should be under privileged access management anyway), application IDs (huge!), etc.

Organizations typically have thousands, tens of thousands, and yes, even millions of ungoverned NUIDs proliferated across end-point servers and systems, orphaned and lying dormant in terms of overall access…

To an attacker, an ID is an ID is an ID. Any ID will suffice to establish a beachhead on a system and then begin trying to “walk” systems, ideally through the elevation of access. This is typically how deep penetration and spanning of internal networks has taken place in many recent breaches. When attacking a system and attempting to establish access, it doesn’t matter to the attacker whether the initial ID is technically a normal, established user ID (with or without governance around it) or a NUID that typically is not being properly tracked and governed within the organization. In fact, NUIDs represent an ideal target precisely because, in many organizations, they lack visibility and any normal, established governance.
More »


Great (SailPoint) Work Is Out There!

Today was it. Today was the day I finally broke down and went beyond lamenting that I can’t clone myself. Today was the day I looked in the mirror and called myself a little bit stupid and a little bit selfish.

The Problem I Wish Everyone Had

They always say start by defining the problem.

There are problems and then there are problems. Real problems are bad. Other problems are actually good to have. I’m happy to say I confront the latter almost every day, and I’d really like to share these problems with you. More on that later; if you want, you can be part of the solution to a lot of the open problems I know about.

But let’s face it… we all know it. Security is hot right now. And if you’ve done a good job in security and are somewhat known, it’s nuclear. My problem is that lots of fantastic opportunities come my way every day. I think about a lot of you out there. I get some really, really nice opportunities. And I lament that I can’t respond to them all.

Me At Vormetric

I’m doing well at Vormetric, and Vormetric is doing extremely well in the marketplace. Vormetric is poised on the edge of what I believe is a radical change in how enterprises go about Data Security and Encryption.

Vormetric does what it does extremely well; better than anyone else in the marketplace. So I’m set. I love what I do and, more importantly, what I can do for other people. Vormetric fills an important void. (And believe it or not, Data Security and Encryption have a direct tie-in to how enterprises should approach Identity Management, one I had never considered before and a lot of companies still aren’t considering: it’s the “bottom third” that Identity Management can’t touch. More on that in another post.)

Those are the things that really drive me at my core… what I can do to legitimately help other people in the mission-critical security space. That dovetails right in line with the theme of this posting. If you are interested, keep reading.
More »


SailPoint IIQ: Rule Modeling in Real Java :-)

I’ve been sitting on this article and concept for months and have had others ask me about it via email — whether I’ve ever done something like this before — and well… here it is.

Tired of No BeanShell Coding Validation!

It turns out I was sitting around in my hotel room in Bangalore on India Independence Day last year, whacking away on some client code, doing some data modeling using CSV. I had a somewhat involved BuildMap rule I was working on, and I was getting a null pointer exception I simply could not find. A few hours and one simple coding mistake later, I was finally on my way. But it was really discouraging to know that if I had been coding in Eclipse, the mistake would have been spotted immediately.

The next thought I had was actually two-fold. While I have at times written test harnesses in real Java using the SailPoint IIQ Java libraries (i.e., jars) and dropped my BeanShell code into methods to instantly validate the syntax, I have also wanted, at some point, to be able to simulate or partially simulate rule modeling and data modeling outside of SailPoint IIQ, using Java I had complete control over writing and executing.

So on this particular day, being particularly irked, I decided to combine those two wishes and see what I could do about having a place where I could not only drop, for instance, BuildMap rule code into Eclipse and instantly validate it, but also execute the code intended for SailPoint IIQ against the same connector sources I had connected to SailPoint IIQ (in development, of course!) and see and manipulate the results.

Once I was done iterating my development over a real dataset, I could take my validated Java code, drop it back into SailPoint IIQ as BeanShell, and have not only validated but also working code in SailPoint IIQ with very little or no modification.

Establishing SailPoint Context

One thing you will need, if you want to run your Java code in an actual SailPoint IIQ context outside of SailPoint IIQ proper, is to establish a SailPointContext in your code. This, I will tell you, while not impossible, is not easy to do. You need to implement the Spring Framework and a lot of other plumbing. If you are interested in doing this and have access to SailPoint Compass, you can read about establishing a SailPointContext there.

Since I didn’t have time for that much work, I decided almost immediately to implement a partial simulation that would allow me to (1) model and validate my rule and (2) model my data very simply and easily, without establishing a SailPointContext. I could still achieve my goal of iterating the solution to produce validated, working code to drop back into SailPoint IIQ.

The Code

Amazingly, the code for simulating a BuildMap rule, pointing it at the actual CSV I intend for SailPoint IIQ, and simulating an account aggregation task is not that complex. Once you have the code, if you understand how SailPoint IIQ works in general, you could conceivably re-engineer and simulate other segments of SailPoint IIQ processing, or model other rule types and/or data outside of SailPoint IIQ.
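To give a feel for it, here’s a minimal sketch of the harness idea, with everything SailPoint-specific stripped out. The class name, the CSV file name, and the buildMap() body are all mine for illustration; in a real DelimitedFile BuildMap rule, the connector hands you the cols and record arguments, which the main() loop below fakes from the CSV’s header and data rows.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the connector's aggregation loop. Class name, file name, and
// the buildMap() body are illustrative -- adjust them to your own sandbox.
public class BuildMapHarness {

    // Paste the body of your BeanShell BuildMap rule here. Because this is
    // real compiled Java, typos and null-pointer hazards surface in Eclipse
    // immediately instead of at aggregation time.
    static Map<String, Object> buildMap(List<String> cols, List<String> record) {
        Map<String, Object> map = new HashMap<String, Object>();
        for (int i = 0; i < cols.size(); i++) {
            map.put(cols.get(i), i < record.size() ? record.get(i) : null);
        }
        return map;
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader("accounts.csv"));
        List<String> cols = Arrays.asList(in.readLine().split(","));    // header row = schema columns
        String line;
        while ((line = in.readLine()) != null) {                        // each data row = one "account"
            List<String> record = Arrays.asList(line.split(",", -1));   // -1 keeps trailing empty fields
            System.out.println(buildMap(cols, record));                 // eyeball the simulated aggregation
        }
        in.close();
    }
}

Once buildMap() behaves over the real dataset, its body drops back into the rule in SailPoint IIQ essentially verbatim.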
More »


Stupid SailPoint Developer Tricks

Hello, mates — as they say Down Under, where I happen to be at the moment on a rather large SailPoint engagement. It’s been a while, and I’m sorry for that. I keep promising new and better content and haven’t delivered.

The last couple of months however have been absolutely crazy and there have been some changes on my end, as you perhaps can see. Now that things have shaped up a bit, maybe I can get back to the business at hand here on the blog, again as I have time.

Stupid Pet Tricks

When I was growing up and in college, a certain comedian became famous (partially) by having a segment on his show called “Stupid Pet Tricks.” Some were hilarious and some… belonged on the 1980s “Gong Show.” (If you’ve never heard of “The Gong Show,” trust me, you aren’t missing anything.)

Since that time, I’ve always thought of various developer tricks in the same light. Some are quite slick and useful and some… really just need to be buried. I’ll leave it to you to decide on this one.

Out of sheer laziness, while onboarding SailPoint applications that feature a BuildMap rule (e.g., BuildMap, JDBCBuildMap, and SAPBuildMap), I sometimes utilize a method for “printing debug statements” that I can see directly and immediately in connectorDebug, without having to jump into or tail the SailPoint IIQ log or application server logs.

It’s also just a bit less verbose, as the SailPoint IIQ logs typically carry a large class-identification prefix on each line, which can get rather cumbersome and make it more difficult to pick out one’s intended debug output.

Plus, I hate changing logging levels in log4j.properties, even though the SailPoint IIQ debug page allows me to load a new logging configuration dynamically. In short, I’m just the lazy, complaining type when it comes to SailPoint IIQ debug statements.

Someone mentioned this would be worth blogging about, so here goes. (At the very least, this is an easy article to write and perhaps will get me back into the blogging swing?!)

__DEBUG__ Schema

Now, I would definitely recommend doing this only in a local or designated sandbox, and then making sure you clean up before checking in your code. (You are using some form of source code control for your SailPoint IIQ development, aren’t you?!)
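The full write-up continues below the link, but here’s the rough shape of the trick as a sketch: the __DEBUG__ attribute name comes from the heading above, and everything else is illustrative. Add an optional String attribute named __DEBUG__ to the application schema in your sandbox, then stuff whatever you want to see into that key from the rule; connectorDebug echoes it right back beside the real attributes.

// Inside the BuildMap rule (BeanShell). Assumes an optional String
// attribute named __DEBUG__ has been added to the application schema
// in the sandbox -- remember to remove both before check-in!
Map map = new HashMap();
for (int i = 0; i < cols.size(); i++) {
    map.put(cols.get(i), record.get(i));
}
// Whatever lands in this key prints straight back in connectorDebug output:
// no log4j fiddling, no tailing of application server logs.
map.put("__DEBUG__", "cols=" + cols + " record=" + record);
return map;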
More »


SailPoint IIQ: Move Over, Rover

I’m getting ready to do some customer training on SailPoint IIQ v6.0. Getting ready for the trip has been a good impetus to get my rear end in gear and get up to date. I’ve been running SailPoint IIQ v5.5 “bare metal” on my MacBook Pro pretty much since it was released, and I have procrastinated getting SailPoint IIQ v6.0 installed on my laptop. (Mainly because I have SailPoint IIQ v6.0p5 running in the mad scientist lab on ESXi, accessible via VPN.)

Side By Side Approach

So, it was time to install SailPoint IIQ v6.0, but… I didn’t want to obliterate my SailPoint IIQ v5.5p6 installation; I have too many customizations, test applications, and rules I don’t want to lose and still want to be able to run live. I’ve been running SailPoint IIQ with a context root of /identityiq and with a MySQL database user of identityiq.

When I run multiple versions of SailPoint IIQ side by side on the same machine, I’ve adopted the practice of running each installation as /iiqXY, where XY is the version number. So I wanted to run /iiq55 and /iiq60 side by side from the same application server. (I could also run multiple instances of the application server, with one installation on one port, say 8080, and another on another port, say 8081.)

So how to “lift and load” the existing installation at /identityiq to /iiq55 without reinstalling everything and re-aggregating all my sources? Here’s what I did.

DISCLAIMER: I’m neither advocating nor discouraging this. Do this at your own risk, especially if your environment differs from mine. I make no claims or warranty of any kind. This worked for me. If it helps you… great.

The Environment

Here was my environment:

Operating System: Mac OS X Mountain Lion, v10.8.3
Application Server: Apache Tomcat v6.0.35
JRE: Java SE JRE (build 1.6.0_43-b01-447-11M4203) (64-bit)
SailPoint IIQ: SailPoint IIQ v5.5p6
IIQ Database: MySQL 5.5.15

Shut Everything Down

First, I shut everything down, which basically meant spinning down the entire Tomcat application server. The command you use and the location of your application server scripts may differ:

$ cd /Library/Apache/Tomcat6/bin
$ ./shutdown.sh

More »


SailPoint IIQ: Aggregating XML

From an answer to a client this morning on aggregating XML in SailPoint IIQ. I hope this helps others out there:

Regarding your question this morning on aggregating XML… I have seen XML aggregated through the OOTB RuleBasedFileParser connector. That connector requires a rule to be written to run the parser, and through that rule you could parse and aggregate XML. I mentioned this to one of our Solution Architects after our meeting; he was aware of the RuleBasedFileParser type but personally felt it was enough work that you may as well write a custom connector using the XML libraries Java makes available.

I think between him and me, I would say the following:

(1) From an overall perspective, it’s technically possible using the RuleBasedFileParser connector to aggregate XML.

(2) There may need to be a discussion about the XML itself to determine the level of complexity of what’s coming in, in which case:
(a) The RuleBasedFileParser may be an adequate choice.
(b) A custom connector for the XML may be in order.

One other approach could be:

(i) Use a DelimitedFile connector.
(ii) Write a pre-iterate rule leveraging the Java XML classes available to (a) read the XML and (b) create a CSV from it for the DelimitedFile connector to consume (a rough sketch of this conversion follows below).
(iii) Use the post-iterate rule to clean up.
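For illustration, here’s a minimal sketch of step (ii)’s conversion, the piece a pre-iterate rule would invoke. The <account> element and its child names are invented for this example, and the output is deliberately naive (no quoting or escaping of commas inside values), so treat it as a starting point rather than a finished parser:

import java.io.File;
import java.io.PrintWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// XML-to-CSV shim of the kind a pre-iterate rule could invoke. Element and
// file names are invented for illustration -- substitute your real feed.
public class XmlToCsv {

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("accounts.xml"));
        PrintWriter out = new PrintWriter("accounts.csv");
        out.println("accountId,firstName,lastName");   // header the DelimitedFile schema expects
        NodeList accounts = doc.getElementsByTagName("account");
        for (int i = 0; i < accounts.getLength(); i++) {
            Element a = (Element) accounts.item(i);
            out.println(text(a, "accountId") + "," + text(a, "firstName") + "," + text(a, "lastName"));
        }
        out.close();
    }

    // First matching child element's text, or empty if the tag is missing.
    private static String text(Element parent, String tag) {
        NodeList n = parent.getElementsByTagName(tag);
        return n.getLength() > 0 ? n.item(0).getTextContent().trim() : "";
    }
}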

As you can see, there is more than one way to skin the XML cat here. As with most things in SailPoint IIQ (as I demonstrate in at least one blog post), the product can be “tricked” in various places into doing what you ultimately want it to do.

As with any of this, it’s very common to have to sit down on an engagement and triage among a number of options to decide on the best implementation approach. I hope this information helps you with that process.

From the Twin Cities, where we shrug off the second day of Spring with a second helping of Winter, Amigos…


SailPoint IIQ: Best Practice – Native Change Detection

December 13th, 2012 | Posted in IAM Development, Vendor Specific

This should be a short post. What I want to offer is longer than what I can fit into a tweet (@IdMConsultant), but pretty simple to state. (But since I’m blogging, I will expand slightly… :-))

Background

For the new Native Change Detection (NCD) feature in SailPoint IIQ v6.0, SailPoint warns that NCD needs to be turned on after your first aggregation. Obviously, if NCD is turned on before this, all the “changes” from your first aggregation are going to kick off a lot of needless workflows (at best) and could result in some possibly serious consequences in terms of changes made downstream (at worst, depending on how you’ve customized the resulting lifecycle event workflow, especially if you’ve elected a heavy-handed approach to NCD).

Native Change Detection Best Practice

I would further this recommendation and state, as a best practice: don’t turn on NCD until the aggregations for an application have “matured,” that is, until you’ve worked through all the kinks that typically come with a production aggregation scenario. Almost always, there is something “forgotten” in an initial aggregation, or even the first two or three: a transformation rule has to be written… you forgot an attribute… your app owner and you decide another attribute needs to be added to the application… you forget to mark an entitlement… you don’t realize immediately that you aren’t getting all the expected data… etc.

(You can “mature” or solidify your application aggregations in one of two ways or a combination of both:

(1) Work out your aggregation details in lower environments. Attributes and schemas here should match what you plan to place into production. But since your data isn’t always the same in your lower environments as in production, you should also…

(2) Allow for a number of aggregations in Production with production data. I would recommend at least 2-3 validated aggregations with Production data to solidify expectations.)

Native Change Detection is a powerful new feature that, along with the other new features of v6.0, is quickly positioning SailPoint IIQ as THE authoritative governance application in the enterprise. So to recap:

Recap

(1) Don’t turn on Native Change Detection until aggregations for an application have matured or been solidified.

(2) Turn on Native Change Detection for only one application at a time!! Plan your usage of NCD, and either turn NCD on one application at a time or in small groups of related applications (e.g., Active Directory and Exchange). I really recommend one application at a time. If you don’t take this approach, I promise you… you are asking for trouble! :-)

(3) I would even go so far as to recommend enabling one NCD function (e.g., create, modify, or delete) at a time, at least in your earliest uses of NCD. So: one function, for one application, at a time.

Plan. Map. Forecast. Test. Execute. Mitigate. Don’t go “willy nilly” with this. :-)

Rising above 15″ of snow in the Twin Cities and wishing you the best with this fantastic new feature of Sailpoint IIQ!


@IdMConsultant for IdM Related Tweets

December 2nd, 2012 | Posted in General Idm/IAM, IAM Development, IT Industry, Security

I’ve been wanting for a while to create a dedicated channel on Twitter for tweeting content specific to Identity & Access Management. As of now, I’ll be doing exactly that via a new @IdMConsultant Twitter account. (Totally shocked that the handle was actually available!)

So look for short, I-hope-to-be-handy tweets on the various IdM products we implement, support, and provide expert advisory services on through Qubera Solutions. Expect tweets such as: “Implementing Full Text Search for #SailPoint #IIQ6? Don’t forget to copy the resulting index files across your server farm!” Qubera Solutions is IdM/IAM vendor-agnostic; we advise on and implement solutions that fit your specific needs and requirements, so expect tweets that are vendor-agnostic as well, but narrowed to just IdM/IAM.

(Traffic on my older and still-existing @TechnologEase Twitter account will carry more general content relating to technology and to what TechnologEase exists for, which is “Internet Consulting. Done Right.”)


SailPoint IIQ: BuildMap – I Told You So :-)

Okay, here’s an article I wasn’t planning on posting, but based on some feedback I received privately via email, I thought I would throw this one example out there. Sometimes the simplest and unlikeliest of examples can tell you a whole lot about the plumbing of a product such as SailPoint IIQ. Following up my most recent post on SailPoint IIQ BuildMap rules, this next exercise, I think, fits the bill of being quite revealing, even though it is simple and extremely unlikely to mirror the real world.

I Told You So :-)

In my last post, I indicated that BuildMap rules (as well as other rule hooks in SailPoint IIQ) generally do not care what you are doing inside them. In the case of the BuildMap rule, I stated that SailPoint IIQ does not do a single thing to validate your code. It does not validate it against your application schema; it trusts you 100% to wire your build map rule to your schema in the right way. The only thing SailPoint IIQ does do is map fields from your build map into a resource object (later in aggregation processing) that matches the schema, which is a short way of saying…

(1) If you don’t provide a field from your return map that matches the application schema, that field in the schema will be blank (or null), and…

(2) If you provide a field from your return map that does NOT match the application schema, that field in the build map will be dropped.

That’s it. The rest is up to you, and here’s a very small example that, in my mind, pretty much demonstrates everything about how build map rules work.

Setting This Up

Let’s set this up. Try this in your development sandbox. First, create a plain text file that has nothing in it but one number per line — lines numbered from, say, 1 to 25. Nothing else. This is easy to set up on the Linux command line. (For you Windows peeps, I’m sorry to say it may be just as easy to jump into Notepad and bang out 25 lines by hand! :-( :-))

$ perl -e 'for (1..25) { print "$_\n" }' > dummy25.txt
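The rest of the walkthrough is behind the link below, but to make the earlier point concrete, here is a minimal sketch of what a BuildMap rule body for this file might look like. The schema attribute names are my own guesses at a minimal delimited-file schema; the point is that SailPoint IIQ never inspects any of this. Keys that match the schema aggregate, keys that don’t are silently dropped, and schema attributes you never set simply come back blank.

// BuildMap rule body (BeanShell) for dummy25.txt. The attribute names
// (identity, displayName) are illustrative guesses, not a prescribed schema.
Map map = new HashMap();
String num = (String) record.get(0);            // the only "column" in the file

map.put("identity", "user" + num);              // fabricated account ID
map.put("displayName", "Dummy User " + num);    // fabricated display name
map.put("ghost", "not in schema");              // no matching schema attribute -> silently dropped
// any schema attribute we never put() here simply aggregates as blank/null
return map;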

More »
