
The Problem of Non-User IDs in Organizations Today

February 4th, 2016 | No Comments | Posted in General Idm/IAM, IdM Engagement

(The contents of this article respond to a post on SailPoint’s Identity Quotient Blog entitled “Third-Party Contractors: The Target Breach’s Bulls-eye.” I recommend reading that post to establish context for this article.)

It is well known and a matter of public record that the Target breach was carried out using third-party credentials phished from an associated heating, ventilation and air conditioning (HVAC) vendor. This was the initial point of entry into the Target network.

However, the HVAC credentials were primarily leveraged only for initial access. Credit card data was not being accessed and siphoned using that specific HVAC ID. Nevertheless, controls around time of access and other metadata that could be policy-driven within SailPoint IdentityIQ for that third-party access are still cogent to the discussion, as the aforementioned SailPoint article points out.

What isn’t mentioned in the article is that SailPoint IdentityIQ, and ideally any IdM product, could and should play a very big part in gathering and providing governance around Non-User IDs (NUIDs): testing IDs, training IDs, B2B FTP IDs, generic admin IDs (which should be under privileged access management anyway), application IDs (huge!), and so on.

Organizations typically have thousands, tens of thousands, and yes, even millions of ungoverned NUIDs proliferated across the enterprise, orphaned and lying dormant on end-point servers and systems…

To an attacker, an ID is an ID is an ID. Any ID will suffice to establish a beachhead on a system and then begin trying to “walk” systems, ideally through the elevation of access. This is typically how deep penetration and spanning of internal networks have taken place in many recent breaches. When attacking a system and attempting to establish access, it doesn’t matter to the attacker whether the initial ID is technically a normal, established user ID (with or without governance around it) or a NUID that typically is not being properly tracked and governed. In fact, NUIDs represent an ideal target precisely because, in many organizations, they lack visibility and the normal, established governance that user IDs have.
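
To make that governance gap concrete, here is a minimal sketch (plain Java with hypothetical file names, not SailPoint code) of the kind of correlation an IdM tool performs when it flags accounts that do not map back to any authoritative identity; anything left over is a candidate NUID that deserves an owner, a purpose and a review cycle:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Minimal illustration of orphan/NUID detection: flag accounts on an
 * endpoint that do not correlate to any authoritative (HR) identity.
 * File names and formats here are hypothetical.
 */
public class OrphanAccountReport {

    public static void main(String[] args) throws IOException {
        // Authoritative identities, one ID per line (hypothetical HR export).
        Set<String> authoritativeIds = new HashSet<>(
                Files.readAllLines(Paths.get("hr_identities.csv")));

        // Accounts pulled from an endpoint, one account ID per line.
        List<String> endpointAccounts =
                Files.readAllLines(Paths.get("server_accounts.csv"));

        // Anything that does not correlate is a candidate NUID/orphan
        // and should be brought under governance.
        for (String account : endpointAccounts) {
            if (!authoritativeIds.contains(account.trim())) {
                System.out.println("Uncorrelated account (possible NUID): " + account);
            }
        }
    }
}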
More »

History Demonstrates Strong Encryption Is Here To Stay

January 15th, 2016 | No Comments | Posted in Data Security, Security

(Originally published on LinkedIn – January 13th, 2016)

I am a very firm believer that knowing the background and history of things provides a much better forward-looking perspective and present decision-making capability. Would that this view were adopted more widely. If it were, the age-old George Santayana quote that “those who don’t remember the past are condemned to repeat it” would never have come into existence. The fact that mankind never really seems to learn the lessons of history also traps the unfolding of events in a cyclical pattern.

The Encryption Debate and History’s Lesson

Encryption, the technology that for years in the computing world has done its job quietly in the background and without much acclaim, is suddenly a topic that is all the rage due to recent and tragic world events. Lawmakers paint a gloomy picture in which, without the ability to intercept and decipher the encrypted communications of criminals and terrorists, national security is at serious risk. Technologists, on the other hand, myself included, maintain that implementing so-called “backdoor encryption” in effect weakens encryption for all of us, with severe consequences for our everyday security, our economy and our lives. Essentially, to weaken encryption would be to cut off our noses to spite our collective economic and everyday-life faces. Lawmakers, technologists and technology companies are digging their trenches, and the staunch faceoff, while mostly civil at the moment, continues.

In a recent interview for The Wall Street Journal, Max Levchin, co-founder of PayPal and a cryptography expert, questions along with other technologists (including yours truly) whether lawmakers really understand how encryption actually works. Levchin goes on to say that if we’re going to continue the national debate, let’s at least make sure lawmakers do in fact understand how encryption works technically. And perhaps few are more qualified to step up and provide that education than Max and other well-known cryptographers in the cryptographic community.

Not only do I question whether lawmakers understand how encryption works, I also question whether they’ve really taken into account how the world works. It would be easy for anyone to say “how the world works today,” but history, if we’re willing to learn from it, demonstrates the world has worked a certain way for a very long time when it comes to widespread technological innovation leveraged in conjunction with outside agendas.

Let’s take a quick lesson from history that coincidentally has ties to today’s date – January 13th – and see if history has anything to teach us concerning how the weakening of encryption would very likely play out were lawmakers to insist on their position through mandatory legislation.
More »

Considerations Around Application Encryption

December 22nd, 2015 | No Comments | Posted in Data Security, IT Industry, Security, Tools

For years, the use of encryption to protect data-at-rest on computers within the enterprise was solely the responsibility of developers who coded the applications that used or generated the data. Early on, developers had little choice but to “roll their own” encryption implementations. Some of these early implementations were mathematically sound and somewhat secure. Other implementations, while perhaps not mathematically sound, were adequate for the risk developers were attempting to mitigate.

As technology progressed, choices for encryption matured and solidified across development stacks. Callable libraries were born. Algorithms were perfected, significantly strengthened and pushed into the public domain. And beyond application encryption, encryption itself began to offer benefits to the enterprise at an operational level – within turnkey, off-the-shelf solutions that could be aimed at specific enterprise use cases such as end-point data loss prevention (DLP), encrypted backups, and full-disk encryption (FDE) among others.

Today however, when CISOs and senior security, software and enterprise architects think of protecting data-at-rest, their conceptions can sometimes harken back to days of old and they will insist that encryption solutions necessarily be implemented at the application layer.
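
For concreteness, here is a minimal sketch of what encryption “implemented at the application layer” typically means: the application itself calls a crypto library (the standard javax.crypto API in this example) to encrypt a field before it is ever written to storage. The key handling is deliberately simplified; in a real implementation the key would come from a key manager or HSM rather than being generated in place.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

/** Minimal application-layer encryption sketch using AES-GCM. */
public class FieldEncryptor {

    public static void main(String[] args) throws Exception {
        // In a real deployment the key would come from a key manager/HSM,
        // not be generated ad hoc inside the application.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        String sensitiveField = "4111-1111-1111-1111";

        // Encrypt the field before it is persisted (data-at-rest protection).
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(sensitiveField.getBytes(StandardCharsets.UTF_8));

        // Decrypt on read, using the same key and the IV stored alongside the ciphertext.
        Cipher decipher = Cipher.getInstance("AES/GCM/NoPadding");
        decipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        String plaintext = new String(decipher.doFinal(ciphertext), StandardCharsets.UTF_8);

        System.out.println("Round trip successful: " + plaintext.equals(sensitiveField));
    }
}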

And while it turns out there is no real fallacy in this thinking, and some benefits at this layer remain, there are considerations and tradeoffs surrounding application encryption that aren’t overtly obvious. These considerations and tradeoffs can get lost when they are not weighed against more recent turnkey, transparent solutions implemented at a different architectural layer, which deliver nearly the same benefit with much less risk and cost.

Let’s consider some of the ins and outs of application encryption. Hopefully the following thoughts and considerations will help those who are deep in the throes of making a decision around encryption of data-at-rest.
More »

Two Years Later: Reflections from “The Breach”

November 6th, 2015 | No Comments | Posted in Data Security, IT Industry, Security

President and CEO of Vormetric, Alan Kessler, blogged earlier this week about the far-reaching impacts of the Target breach – reflections from almost two years later. Alan observed in his article that the Target breach was the most visible mile marker in 2014, a year full of breaches that continued into 2015, and he went on to discuss and reflect on some of the other specific breaches.

In this article, I would like to reflect on some of the industry-wide changes that have taken place since the Target breach.

“The Breach”

The Target breach was so significant that for at least the first year afterward, it was referred to, especially in security circles and even on the news, as simply “The Breach.” And as Alan has already detailed, that breach was merely a harbinger of things to come, with major breach after major breach following.

But what has been the impact of all these breaches? As one would expect, reactions and responses to “The Breach” by organizations have been all over the map.  Some have, as the saying goes, not “let a good crisis go to waste” and have become better companies as a result. Others have not fared or reacted as well.

While “The Breach” and the major breaches afterward have led most major retailers to reevaluate their data security approach, the retail edition of the Vormetric 2015 Insider Threat Report shows that retailers still have a long way to go. Over 51% of retail respondents reported being very or even extremely vulnerable to insider threats – the highest rates measured in the study. Many of these organizations continue to invest in security using traditional approaches that have proven insufficient over the last two years.

While the threat obviously remains high and a number of organizations admit they still have a long way to go, changes have taken place since “The Breach” (hereafter referred to simply as the breach) that are moving the retail industry, and other industries, in a positive direction.
More »

History Foretells the Raising of the Ante: Securing Your Data Now All but Mandatory

August 31st, 2015 | No Comments | Posted in Data Security, IT Industry, Security

It’s been said that those who don’t learn from history are doomed to repeat it. In my last article, I wrote metaphorically about the medieval arms race to protect the pot of gold inside the castle from outside intruders. This time I want to draw upon history as the telescopic lens through which we forecast the journey into the future in a world full of advanced technology. Through this lens, we will see that the future is already here and history is beginning to write the same story again.

We’ll aim our history telescope backwards in time to the technological breakthrough of the automobile. As with any technology, the automobile was initially embraced by only a few. While the first automobile may have been designed and custom-built as early as the late 1600s, automobiles were not mass-produced and available to the general public until the turn of the 20th century. Widespread, generalized use of the automobile came about right after World War I, thanks to the genius of Henry Ford.

Even in the early days of the automobile, there existed enough power in these “new” devices to wreak havoc upon lives whenever there was an automobile accident. Victims of such accidents were often left holding the bag in terms of the costs and consequences, as were the drivers themselves, regardless of who was at fault. At some point the repeating scenario of “cause and victim” attracted the attention of governments, and the auto insurance industry was born through mandatory legislation. The ones wielding the wheel of this new technology were made accountable and the ante was raised.

Shift ahead to the 21st century and we behold the power of a world full of automation, driven by the wonders of computer technology. And while computer technology is no longer new either, the global use of computer technology as the business engine fueled by its gasoline of endless data tied to the consumer is starting to have the same effect whenever the “accidents” that we call breaches take place. Governments are beginning to wake up and take notice, and questions concerning liability are starting to be asked. In effect, the future is happening now, history is in the process of repeating itself, and the ante is being raised once again.

More »

Data Is The New Gold: Getting Data Security Right in Retail

August 28th, 2015 | No Comments | Posted in Data Security, Security

Traditional security has always been metaphorically tied to the medieval castle building of old: building thicker walls and drawbridges, creating multiple perimeters, raising larger armies, you know – the whole nine yards. This paradigm extends into the modern world, which maintains its fascination with sophisticated perimeters. For exhibit A, witness the recent Mission: Impossible Rogue Nation Hollywood blockbuster where sophisticated perimeter security was the primary obstacle to overcome.

But imagine changing that mindset from traditional perimeter-based security to a data-centric one. A data-centric approach, cast against the metaphorical medieval art of castle building, would result in thieves penetrating the outer defenses only to find the pot of gold filled with worthless tokens or paper notes.

Throughout the movie, traditional approaches didn’t stop Ethan Hunt (the protagonist, manipulated by the antagonist into doing his dirty work) and they won’t stop Ethan Hunt-like hackers from infiltrating retailers’ networks.

As the world progresses from a mere “information age” into an age of “big data,” it’s simple – the volume, granularity and sensitivity of individual data are growing exponentially. With this growth come severe risks and consequences should that valuable data be lost.
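
Cast against the pot-of-gold metaphor above, here is a minimal, hypothetical sketch of what a data-centric control such as tokenization looks like in code: the real values live only in a protected vault, and everything outside the vault (including a thief who breaches the perimeter) sees only worthless surrogate tokens. The class and method names are mine for illustration, not any particular product’s API.

import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal tokenization sketch: sensitive values are swapped for random
 * surrogate tokens; only the vault can map a token back to the real value.
 * In production the vault would be a hardened, separately secured service.
 */
public class TokenVault {

    private final Map<String, String> tokenToValue = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    /** Replace a sensitive value with a worthless-looking surrogate token. */
    public String tokenize(String sensitiveValue) {
        String token = "tok_" + Long.toHexString(random.nextLong());
        tokenToValue.put(token, sensitiveValue);
        return token;
    }

    /** Only callers with access to the vault can recover the original value. */
    public String detokenize(String token) {
        return tokenToValue.get(token);
    }

    public static void main(String[] args) {
        TokenVault vault = new TokenVault();
        String token = vault.tokenize("4111-1111-1111-1111");
        // The application, its database, and any attacker only ever see the token.
        System.out.println("Stored value: " + token);
        System.out.println("Recovered by the vault: " + vault.detokenize(token));
    }
}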
More »

Great (SailPoint) Work Is Out There!

Today was it. Today was the day I finally broke down and went beyond lamenting that I can’t clone myself. Today was the day I looked in the mirror and called myself a little bit of stupid and a little bit of selfish.

The Problem I Wish Everyone Had

They always say start by defining the problem.

There are problems and then there are problems. Real problems are bad. Other problems are actually good to have. I’m happy to say I confront the latter almost every day and I’d really like to share these problems with you. More on that later, including how you can be part of the solution to a lot of the open problems I know about, if you want.

But let’s face it… we all know it. Security is hot right now. And if you’ve done a good job in security and are somewhat known, it’s nuclear. My problem is that lots of fantastic opportunities come my way every day, and I think about a lot of you out there. I get some really, really nice opportunities, and I lament that I can’t respond to them all.

Me At Vormetric

I’m doing well at Vormetric and Vormetric is doing extremely well in the marketplace. Vormetric is poised on the edge of what I believe is a radical change in how enterprises go about Data Security and Encryption.

Vormetric does what it does extremely well, better than anyone else in the marketplace. So I’m set. I love what I do and, more importantly, what I can do for other people. Vormetric fills an important void. (And believe it or not, Data Security and Encryption has a direct tie-in to how enterprises should approach Identity Management, one I had never considered before and a lot of companies still aren’t considering: it’s the “bottom third” that Identity Management can’t touch. More on that in another post.)

Those are the things that really drive me at my core… what I can do to legitimately help other people in the mission-critical security space. That dovetails right in with the theme of this post. If you are interested, keep reading.
More »

SailPoint IIQ: Rule Modeling in Real Java :-)

I’ve been sitting on this article and concept for months and have had others ask me about it via email — whether I’ve ever done something like this before — and well… here it is.

Tired of No BeanShell Coding Validation!

It turns out I was sitting around in my hotel room in Bangalore on India Independence Day last year, whacking away on some client code, doing some data modeling using CSV. I had a somewhat involved BuildMap rule I was working on and was getting a null pointer exception I simply could not find. A few hours and one simple coding mistake later, I was finally on my way. But it was really discouraging to know that if I had been coding in Eclipse, the mistake would have been spotted immediately.

The next thought I had was actually two-fold. While I have at times written test harnesses in real Java using the SailPoint IIQ Java libraries (i.e., JARs) and dropped my BeanShell code into methods to instantly validate the syntax, I have also wanted to be able to simulate, or partially simulate, rule modeling and data modeling outside of SailPoint IIQ using Java I had complete control over writing and executing.

So on this particular day, being particularly irked, I decided to combine those two wishes and see what I could do about having a place where I could not only drop, for instance, BuildMap rule code into Eclipse and instantly validate it, but also execute the code I intended for SailPoint IIQ against connector sources I also had connected to SailPoint IIQ (in development, of course!) and see and manipulate the results.

Once I was done iterating my development over a real dataset, I could take my validated Java code, drop it back into SailPoint IIQ as BeanShell, and have not only validated but also working code in SailPoint IIQ with very little or no modification.

Establishing SailPoint Context

One thing you will need if you want to run your Java code in an actual SailPoint IIQ context outside of SailPoint IIQ proper is to establish a SailPointContext in your code. This, I will tell you, while not impossible, is not easy to do. You need to implement the Spring Framework and a lot of other plumbing. If you are interested in doing this and have access to SailPoint Compass, you can actually read about establishing a SailPointContext here.

Since I didn’t have the time for that much work, almost immediately I decided to implement a partial simulation that would allow me to (1) model and validate my rule and (2) model my data very simply and easily, without establishing a SailPointContext. I could still achieve my goal of iterating the solution to produce validated, working code to drop back into SailPoint IIQ this way.

The Code

Amazingly, the code for simulating a BuildMap rule, pointing it to the actual CSV I intend for SailPoint IIQ, and simulating an account aggregation task is not that complex. Once you have the code, if you understand how SailPoint IIQ works in general, you could conceivably re-engineer and simulate other segments of SailPoint IIQ processing or model other rule types and/or data outside of SailPoint IIQ.
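
The actual code is behind the “More” link, but a bare-bones sketch of the idea might look something like this. It is plain Java rather than SailPoint code: the buildMap method mimics the general shape of a delimited-file BuildMap rule (column names and record values in, attribute map out), and main stands in for the aggregation loop by walking a CSV. The file name and attribute names are hypothetical.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Partial simulation of a delimited-file BuildMap rule outside of IIQ.
 * The body of buildMap() is the code you would eventually paste back
 * into the rule as BeanShell.
 */
public class BuildMapSimulator {

    /** Mimics the BuildMap rule contract: columns and record in, attribute map out. */
    public static Map<String, Object> buildMap(List<String> cols, List<String> record) {
        Map<String, Object> map = new HashMap<>();
        for (int i = 0; i < cols.size(); i++) {
            map.put(cols.get(i), i < record.size() ? record.get(i) : null);
        }
        // Example of the kind of transformation a real rule might do.
        Object first = map.get("firstName");
        Object last = map.get("lastName");
        map.put("displayName", first + " " + last);
        return map;
    }

    /** Stands in for the aggregation task: iterate the CSV and build each map. */
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("accounts.csv"));
        List<String> cols = Arrays.asList(lines.get(0).split(","));
        for (String line : lines.subList(1, lines.size())) {
            List<String> record = Arrays.asList(line.split(","));
            System.out.println(buildMap(cols, record));
        }
    }
}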
More »

Stupid SailPoint Developer Tricks

Hello, mates — as they say Down Under, where I happen to be at the moment on a rather large Sailpoint engagement. It’s been a while, and I’m sorry for that. I keep promising more, new and better content and haven’t delivered.

The last couple of months however have been absolutely crazy and there have been some changes on my end, as you perhaps can see. Now that things have shaped up a bit, maybe I can get back to the business at hand here on the blog, again as I have time.

Stupid Pet Tricks

When I was growing up and in college, a comedian became famous (partially) by having a segment on his show called “Stupid Pet Tricks.” Some were hilarious and some… belonged on the 1980s “Gong Show.” (If you’ve never heard of “The Gong Show,” trust me, you aren’t missing anything.)

Since that time, I’ve always thought of various developer tricks in the same light. Some are quite slick and useful and some… really just need to be buried. I’ll leave it to you to decide on this one.

Out of sheer laziness, while onboarding SailPoint applications that feature a BuildMap rule (e.g., BuildMap, JDBCBuildMap and SAPBuildMap), I sometimes utilize a method for “printing debug statements” that I can see directly and immediately in connectorDebug, without having to jump into or tail the SailPoint IIQ log or application server logs.

It’s also a bit less verbose, as the SailPoint IIQ logs typically carry a large class identification prefix on each line, which can get rather cumbersome and makes it more difficult to pick out one’s intended debug output.

Plus I hate changing logging levels in log4j.properties even though the Sailpoint IIQ debug page allows me to load a new logging configuration dynamically. In short, I’m just a lazy, complaining type when it comes to Sailpoint IIQ debug statements.

Someone mentioned this would be worth blogging about, so here goes. (At the very least, this is an easy article to write and perhaps will get me back into the blogging swing?!)

__DEBUG__ Schema

Now, I would definitely recommend doing this only on a local or designated sandbox and then making sure you clean up before checking in your code. (You are using some form of source code control for your Sailpoint IIQ development, aren’t you?!)
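
The details are behind the “More” link, but here is a sketch of how a trick along these lines could work (my reconstruction, not necessarily exactly what the post goes on to describe): add a throwaway attribute, hypothetically named __DEBUG__, to the application schema, and have the BuildMap rule stuff its debug messages into that attribute so they show up right next to each record in the connectorDebug output.

import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the idea: stash debug text into a throwaway schema attribute
 * (hypothetically named __DEBUG__) so it appears alongside each record
 * in the connector debug output. Clean it up before check-in.
 */
public class DebugSchemaTrick {

    public static Map<String, Object> buildMap(java.util.List<String> cols,
                                               java.util.List<String> record) {
        Map<String, Object> map = new HashMap<>();
        for (int i = 0; i < cols.size() && i < record.size(); i++) {
            map.put(cols.get(i), record.get(i));
        }
        // "Print" a debug statement by attaching it to the record itself.
        map.put("__DEBUG__", "built map for record: " + map);
        return map;
    }

    public static void main(String[] args) {
        System.out.println(buildMap(
                java.util.Arrays.asList("id", "name"),
                java.util.Arrays.asList("jdoe", "John Doe")));
    }
}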
More »

SailPoint IIQ: Move Over, Rover

I’m getting ready to do some customer training on Sailpoint IIQ v6.0. Getting ready for the trip has been a good impetus to get my rear end in gear and get up to date. I’ve been running Sailpoint IIQ v5.5 “bare metal” on my MacBook Pro pretty much since Sailpoint IIQ v5.5 was released. I have procrastinated getting Sailpoint IIQ v6.0 installed on my laptop. (Mainly because I have Sailpoint IIQ v6.0p5 running in the mad scientist lab on ESXi accessible via VPN.)

Side By Side Approach

So, it was time to install SailPoint IIQ v6.0, but… I didn’t want to obliterate my SailPoint IIQ v5.5p6 installation; I have too many customizations, test applications and rules I don’t want to lose and still want to be able to run live. I’ve been running SailPoint IIQ with a context root of /identityiq and with a MySQL database user of identityiq.

When I run multiple versions of SailPoint IIQ side by side on the same machine, I’ve adopted the practice of running each installation as /iiqXY, where XY is the version number. So I wanted to run /iiq55 and /iiq60 side by side from the same application server. (I could also take the approach of running multiple instances of the application server and running one installation on one port, say 8080, and another on another, say 8081.)

So how to “lift and load” the existing installation at /identityiq to /iiq55 without reinstalling everything and re-aggregating all my sources? Here’s what I did.

DISCLAIMER: I’m neither advocating nor de-advocating this. Do this at your own risk, especially if your environment differs from mine. I make no claims or warranty of any kind. This worked for me. If it helps you… great.

The Environment

Here was my environment:

Operating System: Mac OS X Mountain Lion, v10.8.3
Application Server: Apache Tomcat v6.0.35
JRE: Java SE JRE (build 1.6.0_43-b01-447-11M4203) (64-bit)
SailPoint IIQ: SailPoint IIQ v5.5p6
IIQ Database: MySQL 5.5.15

Shut Everything Down

First, I shut everything down. This basically meant spinning down the entire Tomcat application server. The commands you use and the location of your application server scripts may differ:

$ cd /Library/Apache/Tomcat6/bin
$ ./shutdown.sh

More »
