
The Nasdaq Breach Illustrates the Need for Continuous Monitoring


Dear Nasdaq, call me.  I am here to help.

The Wall Street Journal reported late Friday that Nasdaq had discovered that they had been hacked.  The hackers never made it to the trading programs, but instead infiltrated an area of Nasdaq called Directors Desk where directors of publicly traded firms share documents about board meetings.

What caught my eye was the following quote from the AP story filed about the attack: “…Nasdaq OMX detected ‘suspicious files’ during a regular security scan on U.S. servers unrelated to its trading systems and determined that Directors Desk was potentially affected.”

People, people, people.  You have got to get on the continuous scanning bandwagon.  Seriously.

Connect the dots.  The story says that “the hackers broke into the service repeatedly over more than a year”.  Notice that the scans that found the suspicious files were “regular,” meaning periodic.  Monthly?  Quarterly?  How many of these regular scans were run before the activity was discovered?  I understand the need for network-based, agentless scans.  I also know their limits, and deep down inside, in a place most IT security people don’t want to admit, so do you.  “Regular” is not continuous.

Don’t stop yet, because the story says that the scan determined that the systems were “potentially affected”.  The diagnosis was partial because agentless scans, even credentialed scans, only get part of the story and therefore can only point out “potential” exploitation.

I have zero data about the actual attack and therefore am speaking in general terms.  But I am confident that a granular, continuous scanning tool should have been able to detect enough anomalous and exceptional artifacts on the Nasdaq servers to spot an attack like this.  The story says that suspicious files were ultimately discovered, so we know that there were persistent artifacts created by the attack.

This is a prime example of why you must have continuous, granular monitoring of endpoints and servers.  Periodic scans, while useful, leave too many blind spots.  A continuous scanning tool should have found the artifacts.  And if the tool used change detection like Triumfant, it would have flagged the files as anomalous within 24 hours of the attack, at a minimum.
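To make the distinction concrete, here is a rough sketch of what file-level change detection looks like in principle: baseline the file system, re-scan on a tight cycle, and flag anything that drifts.  This is illustrative Python, not Triumfant's code, and the example path is hypothetical.

    import hashlib
    import os

    def snapshot(root):
        """Return {path: sha256 hex digest} for every readable file under root."""
        state = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = hashlib.sha256()
                try:
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(65536), b""):
                            digest.update(chunk)
                except OSError:
                    continue  # unreadable file; skip rather than crash the scan
                state[path] = digest.hexdigest()
        return state

    def diff(baseline, current):
        """Flag files that appeared or changed since the baseline snapshot."""
        added = sorted(p for p in current if p not in baseline)
        changed = sorted(p for p in current
                         if p in baseline and current[p] != baseline[p])
        return added, changed

    # Usage: baseline once, re-scan on a tight loop, alert on any drift.
    # baseline = snapshot("/srv/directors-desk")   # hypothetical server path
    # added, changed = diff(baseline, snapshot("/srv/directors-desk"))

The point of the sketch is the cadence, not the hashing: a diff like this run continuously surfaces new files in hours, where a quarterly scan surfaces them in months.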

Don’t throw the shield argument at me here.  These attacks went on for over a year.  Triumfant would have spotted the artifacts in 24 hours or less.  If you can’t see that difference and want to live the lie of the perfect shield, you are on the wrong blog.  In fact, if those files triggered our continuous scan that looks for malicious actions (an autostart mechanism, opening a port, etc.), Triumfant would have flagged the files within 60 seconds.
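The “malicious actions” style of detection works the same way, just applied to live behavior instead of files.  Here is a rough sketch along those lines, again not Triumfant's code: baseline the set of listening ports and alert the moment a new one appears (psutil is an assumed third-party dependency, and the 60-second poll interval is mine, not a product number).  Autostart locations such as Run keys, services, and scheduled tasks can be watched with the same baseline-and-diff idea.

    import time
    import psutil  # third-party package; may require elevated privileges for port info

    def listening_ports():
        """Return the set of locally bound ports currently in LISTEN state."""
        return {
            conn.laddr.port
            for conn in psutil.net_connections(kind="inet")
            if conn.status == psutil.CONN_LISTEN
        }

    def watch(poll_seconds=60):
        """Alert as soon as a port appears that was not in the baseline."""
        baseline = listening_ports()
        while True:
            time.sleep(poll_seconds)
            new_ports = listening_ports() - baseline
            if new_ports:
                print("ALERT: new listening port(s):", sorted(new_ports))
                baseline |= new_ports  # alert once, then fold into the baseline

    # watch()  # in practice this runs inside an agent or service wrapper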

Regardless of which of our continuous scans would have detected the incident, Triumfant would have performed a deep analysis of the files and been able to show any changes to the affected machine that were associated with the placement of the suspicious files on the machine.  You likely could have deleted the word “potentially” from the conversation almost immediately.  I would also add that we would have built a remediation to end the attack.

Strong words for someone who has no details?  Perhaps.  But I would bet the farm that we would have found this attack in less than a year.

I don’t understand how we arrived at a place where organizations won’t implement continuous scanning.  Innovative solutions like Triumfant get throttled by old predispositions and the disconnect between IT security and the operations people who manage the servers and endpoints.  The security teams are forced to use agentless tools because the ops people refuse to consider a new agent, even if that agent is unobtrusive and allows them to remove other agents in a trade of functionality.  As a result, the IT security people are left to protect machines with periodic scans that cannot possibly see the detail available when an agent is used.

Machines get hacked, the organization is placed at risk, countless hours and dollars are spent investigating the problem, and then more hours and dollars are spent putting useless spackle over the cracks.  And that is worth refusing to even consider an agent?

Let me put it a different way.  We allow users to run whatever they want on endpoint machines, yet block IT security from deploying granular, continuous scanning tools that can actually detect attacks such as the one we see at Nasdaq.

What am I missing here?

Dear Nasdaq, call me.  Don’t rinse, repeat and be in the WSJ again.  I can help.  Promise.


Filed under: Continuous Monitoring, Endpoint Security
