Real-Time SIEM Integration

 

As mentioned previously, the Real-Time functionality can integrate with any SIEM but comes preconfigured with support for technologies such as LogRhythm, Splunk, Logpoint, MS Sentinel and DTEX agents. Below we discuss how some of these integrations can be configured, both on the customer's infrastructure and within the Cyber Risk Aware portal.

 

LogRhythm Integration

 

In the Orchestrator Installation guide we discussed the fact that all communication from the SIEM to the Orchestrator (On-Premise & Azure) is done via a webhook (the SIEM communicates by calling an API endpoint implemented within the Orchestrator). For LogRhythm to connect to this endpoint we use functionality in LogRhythm called Smart Response. Smart Response allows us to create a plugin containing a PowerShell script that accepts various parameters passed from a triggered alarm. This plugin will be provided by Cyber Risk Aware but will need to be installed in your instance of LogRhythm and associated with the alarms you have set up.
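The plugin itself is supplied by Cyber Risk Aware, so you do not need to write this script; purely as a rough sketch of the mechanism, the call it makes is comparable to the following PowerShell (the property names user and alarmName below are illustrative assumptions, not the plugin's actual contract):

# Illustrative sketch only - the real Smart Response script is supplied by Cyber Risk Aware.
# It reads the Orchestrator endpoint from the SMART_RESPONSE_URI environment variable
# (described in the next section) and forwards details of the triggered alarm.
$uri = [Environment]::GetEnvironmentVariable("SMART_RESPONSE_URI", "Machine")

# The property names below are assumptions for illustration purposes only.
$body = @{
    user      = "jsmith"             # impacted user passed in from the alarm
    alarmName = "Suspicious Login"   # name of the alarm rule that fired
} | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json"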

 

SMART_RESPONSE_URI Environment Variable

 

In order for the Smart Response to communicate with the Orchestrator, it needs to know the full URL to use. This needs to be configured as a system Environment Variable.

The name of the variable should be SMART_RESPONSE_URI and the value will depend on whether you are using the On Premise or Azure Orchestrator. 

If using the On Premise set up, then the URL value for the environment variable would take the format:

 {Orchestrator Site Path}/api/SIEM/logrhythm/smartresponse. (For example, this could be http://localhost:5555/api/SIEM/logrhythm/smartresponse).

If you are using the Azure Orchestrator, the URL value for the environment variable would be: 

https://orchestrationapi.azurewebsites.net/api/event/logrhythm/smartresponse?code={api-key}&orchid={id-of-orchestrator-from-portal}&orgId={organisation-id-from-portal}
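For example, the system variable can be created from an elevated PowerShell session (the On-Premise URL from above is used here for illustration; substitute your own Orchestrator path or the Azure URL as appropriate):

# Run from an elevated PowerShell prompt so the variable is created at the machine (system) level.
[Environment]::SetEnvironmentVariable(
    "SMART_RESPONSE_URI",
    "http://localhost:5555/api/SIEM/logrhythm/smartresponse",
    "Machine")

# Verify the value (new processes will pick it up; existing sessions may need to be restarted).
[Environment]::GetEnvironmentVariable("SMART_RESPONSE_URI", "Machine")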

Smart Response Plugin

 

To add the Smart Response plugin to LogRhythm, take the ARPlugin***.lpi file that was provided by Cyber Risk Aware and place it on the same server as the LogRhythm installation. Open the LogRhythm desktop client and select Deployment Manager from the toolbar.

 

 

Next, from the menu bar, select Tools -> Administration -> SmartResponse Plugin Manager.


In the resulting dialog, select Actions -> Import. Then select the ARPlugin.lpi file that you copied to the server and click Open.

This will install the plugin to LogRhythm.

 

 

Once the plugin has been installed, it can now be associated with both new and existing Alarms.

To associate the SmartResponse with an alarm, open the LogRhythm desktop client and select Deployment Manager. Then select the AI Engine tab. This should list all the Alarm Rules that have been created.

Double-click the Alarm Rule you wish to associate the SmartResponse with and select the Actions tab in the resulting dialog.

 

 

Click the Set Action drop-down. You should see the SmartResponse plugin listed here. Select the plugin.

 

Upon selecting the plugin, you will see the parameters list appear.

 

 

The majority of these parameters are auto-populated, but one will need to be set manually. Select the User parameter and change the Type to Alarm Field and the Value to User (Impacted) Identity.

 

 

Once complete, click the Save Action button and then OK. You will be asked to restart the AI Engine Servers to pick up the changes. This can be done by clicking the Restart AI Engine Servers button at the top of the AI Engine tab.

 

 

Repeat these steps for each alarm you wish to associate the SmartResponse with.

 

Splunk Integration

 

Splunk comes with built-in support for webhooks on its alerts. To configure a webhook on an alert in Splunk, follow the instructions below.

 

You can configure the webhook action when creating a new alert or editing an existing alert's actions. Follow one of the options below.
 

To create a new alert: From the Search page in the Search and Reporting app, select Save As > Alert. Enter alert details and configure triggering and throttling as needed.

To edit an existing alert: From the Alerts page in the Search and Reporting app, select Edit > Edit actions for the existing alert.

 

From the Add Actions menu, select Webhook.

Here, you will be able to provide the URL for the webhook. The URL will differ depending on whether you are using the On-Premise Orchestrator or the Azure-based Orchestrator.

For the On-Premise setup, the URL should take the format:

 

{Orchestrator Site Path}/api/SIEM/splunk/alert. (For example, this could be http://localhost:5555/api/SIEM/splunk/alert).

 

For the Azure based setup, the URL should take the format:

 

https://orchestrationapi.azurewebsites.net/api/event/splunk/alert?code={api-key}&orchid={id-of-orchestrator-from-portal}&orgId={organisation-id-from-portal}

 

After providing the URL, click Save. This step will need to be taken for each alert rule you create.
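If you want to confirm the endpoint is reachable before wiring it into an alert, a Splunk-style payload can be posted to it manually. The sketch below only approximates what Splunk's webhook action sends (a JSON body including fields such as search_name and result); it is not a specification of exactly which fields the Orchestrator reads.

# Hypothetical smoke test against the On-Premise endpoint; adjust the URL for your setup.
$uri = "http://localhost:5555/api/SIEM/splunk/alert"

# Rough approximation of a Splunk webhook delivery.
$payload = @{
    search_name = "Suspicious Login Alert"     # name of the Splunk alert that fired
    result      = @{ user = "jsmith" }         # example field identifying the offending user
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri $uri -Method Post -Body $payload -ContentType "application/json"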

 

MS Sentinel Integration

 

With MS Sentinel, you can configure Playbooks (Logic Apps) in Azure that run in response to a particular alert or event. Within these Playbooks, an action can be configured to forward the alert information to the Azure Orchestrator (the On-Premise setup is not currently compatible with MS Sentinel). This action effectively acts as a webhook.

 

We mentioned previously that the two items of information that need to be forwarded to the Orchestrator are an identifier for the alert that has occurred and an identifier for the user. The identifier for the user must be the user's Azure AD username. The Orchestrator integrates with the customer's Azure AD and uses the username as a lookup value to get the correct email address (the Cyber Risk Aware portal username). This information is used to locate the offending user in the Cyber Risk Aware instance and the alert subscription, so that the correct response can be coordinated.

 

When configuring the Playbook, create an HTTP POST action like the one below:

 

 

The setup above contains the URL that the event information is to be forwarded to, along with the properties identifying the correct Orchestrator, Organisation, offending user and the rule / alert that was triggered. Depending on how you have configured your alerts / rules, these user details and the alert name may be extracted differently. In the example above we take the rule name from the Alert Display Name attribute (NOTE: this must be an identical match for the name of the subscription created in your Cyber Risk Aware portal; see the documentation on setting up Real-Time events and responses).
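As a rough sketch of what that HTTP POST action amounts to, the equivalent call is shown below. The endpoint path (/api/event/sentinel/alert) and the body property names are assumptions made purely for illustration; use the values shown in the Playbook configuration supplied for your deployment.

# Illustrative only - in practice this call is made by the Logic App HTTP action.
# The endpoint path and property names below are assumed, not confirmed.
$uri = "https://orchestrationapi.azurewebsites.net/api/event/sentinel/alert" +
       "?code={api-key}&orchid={id-of-orchestrator-from-portal}&orgId={organisation-id-from-portal}"

$body = @{
    user     = "jsmith@customer.onmicrosoft.com"   # Azure AD username of the offending user
    ruleName = "Impossible Travel Detected"        # must exactly match the portal subscription name
} | ConvertTo-Json

Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json"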

 

It is also important to note that MS Sentinel does not fire these events in real time. It polls for the events based on the timeframe you provide in your alert setup. In the example below, the logs are polled every 5 minutes.

 

 

Logpoint Integration

 

As with the other integrations, with Logpoint we make use of the alert webhooks. In most integrations the webhook is performed as a POST in which a payload is received in the body. With Logpoint, however, the request is a GET (although a POST would also work). The important aspect here is that the query parameters match what the Orchestrator expects. For Logpoint, the two query parameters user and ruleName are required to identify both the offending user and the rule / alert that has been violated.

 

For the On-Premise Orchestrator, the expected URL format would be:

{Orchestrator Site Path}/api/SIEM/logpoint/alert?user={username}&ruleName={rule-name}. (For example, this could be 

http://localhost:5555/api/SIEM/logpoint/alert?user={username}&ruleName={rule-name}).

 

For the Azure based setup, the URL should take the format:

 

https://orchestrationapi.azurewebsites.net/api/event/logpoint/alert?code={api-key}&orchid={id-of-orchestrator-from-portal}&orgId={organisation-id-from-portal}&user={username}&ruleName={rule-name}

 

Provided the Orchestrator receives the correct parameters here, it will be able to locate the user in the Cyber Risk Aware instance and the rule that has been violated.
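For example, the On-Premise endpoint can be exercised manually with sample values substituted for the placeholders (the user and rule name below are purely illustrative):

# Hypothetical test of the Logpoint endpoint with example values in place of the placeholders.
$user     = "jsmith"
$ruleName = "Unusual File Access"   # must match the subscription name in the Cyber Risk Aware portal

$uri = "http://localhost:5555/api/SIEM/logpoint/alert" +
       "?user=$([uri]::EscapeDataString($user))&ruleName=$([uri]::EscapeDataString($ruleName))"

# Logpoint issues this request as a GET, so a GET is used here too.
Invoke-RestMethod -Uri $uri -Method Get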

 

 

DTEX Integration

 

The DTEX agents can forward information to the Cyber Risk Aware On-Premise or Azure Orchestrator. Like the other integrations, this is done via a webhook. When setting up the DTEX alerting rules, a webhook URL can be specified to forward the information to the Orchestrator. The payload that is forwarded looks like the following:

 
{
  "dataset": [
    {
      "occurred_at": "2016-12-30T00:00:00-05:00",
      "hits": [
        {
          "category": "Obfuscation (Unusual File Deletes)",
          "severity": "High",
          "updated_at": "2017-06-06T23:09:45.851852+00:00",
          "risk_score": 0.5,
          "category_id": "DELETE",
          "id": "82d47a730e8a91cb0c812bd2965ca136728812e30334e081139715a2ee346e8b"
        }
      ],
      "activities_count": 6,
      "user_name": "dev\\gary",
      "user_risk_score": 0.5
    }
  ]
}

 

The user_name and category properties are the only properties that the Orchestrator requires. These properties are used to identify the rule / alert that was triggered and the offending user. Depending on which Orchestrator you are using, the Orchestrator will find the identified user's email address from either on-premise Active Directory or Azure Active Directory. The URL for the webhook is configured as follows:

 

For the On-Premise Orchestrator, the expected URL format would be:

{Orchestrator Site Path}/api/SIEM/dtex/alert. (For example, this could be 

http://localhost:5555/api/SIEM/dtex/alert).

 

For the Azure based setup, the URL should take the format:

 

https://orchestrationapi.azurewebsites.net/api/event/dtex/alert?code={api-key}&orchid={id-of-orchestrator-from-portal}&orgId={organisation-id-from-portal}
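As with the other integrations, the endpoint can be smoke-tested by posting a payload of the shape shown above. The sketch below assumes the sample payload has been saved to a local file named dtex-sample.json (the file name is just an example) and uses the On-Premise URL; adjust for the Azure URL as needed.

# Hypothetical smoke test: post the sample payload from this section to the On-Premise DTEX endpoint.
$uri = "http://localhost:5555/api/SIEM/dtex/alert"

# dtex-sample.json is an assumed local copy of the example payload shown above.
$payload = Get-Content -Raw -Path ".\dtex-sample.json"

Invoke-RestMethod -Uri $uri -Method Post -Body $payload -ContentType "application/json"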

 

Other Integrations

 

For all other integrations, a small amount of development may be required by Cyber Risk Aware, effectively to build a parser for the expected webhook payload. For this, we would need a sample of the payload that would be forwarded from the customer's SIEM / monitoring technology. Development of the parser would generally have a quick turnaround.