Preface
This guide will help you complete the steps required in your AWS account and in the SkyFormation app so that you can fetch the AWS CloudWatch Logs events you need using the AWS Kinesis stream option and send them to your SIEM/SOC system of choice.
For application version 2.3.135 and above
Version 2.3.135 introduced a new feature that allows fetching data from all CloudWatch Logs groups using a single S3 bucket.
Behind the scenes, the application (a CLI sketch of this flow follows the list):
1. Identifies all existing CloudWatch Logs groups in the account (in a given region)
2. Requests AWS CloudWatch Logs to export small timeframes of data to an S3 bucket
3. Reads the exported data from the bucket once the export task is finished
4. Deletes the data from the bucket once it has finished reading it
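For illustration only, the AWS CLI equivalent of this flow would look roughly like the following. The application performs these calls internally; the log group name, bucket name, prefix, and timestamps below are example values:
# Export a small timeframe of one log group to the S3 bucket (timestamps are in milliseconds)
aws logs create-export-task \
    --log-group-name "/example/log-group" \
    --from 1609459200000 --to 1609462800000 \
    --destination "my-bucket" \
    --destination-prefix "cwl-export"
# Poll the export task until its status is COMPLETED
aws logs describe-export-tasks --task-id "<task-id returned above>"
# Read the exported objects, then delete them from the bucket
aws s3 cp s3://my-bucket/cwl-export/ . --recursive
aws s3 rm s3://my-bucket/cwl-export/ --recursive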
To support this process, a new field was added to the AWS connector configuration.
The additional permissions required for this process are:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateExportTask",
                "logs:DescribeExportTasks",
                "logs:CancelExportTask"
            ],
            "Resource": "*"
        }
    ]
}
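If you manage the connector's IAM identity with the AWS CLI, one way to attach the above policy is with put-user-policy. The user name, policy name, and file path here are examples and should match your own setup:
aws iam put-user-policy \
    --user-name skyformation-connector \
    --policy-name CWL-Export-To-S3 \
    --policy-document file://cwl-export-permissions.json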
Also, allow CloudWatch Logs to export (put objects) into the S3 bucket by setting the following permission on the S3 bucket (step #3 of the AWS export guide):
If the bucket is in your account, use the following policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:GetBucketAcl",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs",
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        }
    ]
}
If the bucket is in a different account, use the following policy instead. It includes an additional statement using the IAM user you created in the previous step.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:GetBucketAcl",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs",
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
            "Principal": {
                "AWS": "arn:aws:iam::SendingAccountID:user/CWLExportUser"
            }
        }
    ]
}
If the bucket is in a different account, and you are using an IAM role instead of an IAM user, use the following policy instead.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:GetBucketAcl",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs",
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
            "Principal": {
                "Service": "logs.us-west-2.amazonaws.com"
            }
        },
        {
            "Action": "s3:PutObject",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
            "Principal": {
                "AWS": "arn:aws:iam::SendingAccountID:role/CWLExportUser"
            }
        }
    ]
}
Copy the relevant policy JSON into the bucket policy:
Navigate to Services -> S3 and find your bucket. Click on your bucket, navigate to "Permissions" and then "Bucket Policy". Paste the JSON into the online editor and save.
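Alternatively, if you prefer the AWS CLI, the same policy can be applied with put-bucket-policy. The bucket name and file path below are examples:
aws s3api put-bucket-policy \
    --bucket my-exported-logs \
    --policy file://cwl-export-bucket-policy.json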
CloudWatch Logs groups are detected automatically every 10 minutes.
Newly discovered groups automatically start pulling data. To pause retrieval from a CloudWatch Logs group, click "Edit" and then "STOP" on the relevant endpoint.
For application versions prior to 2.3.135
Important note
The example below uses events published to CloudWatch Logs from AWS VPC Flow Logs, which are then streamed into a Kinesis stream. You can fetch any event type from AWS CloudWatch Logs using the same process, but only the event types listed below
are automatically parsed before being sent to your SIEM/SOC system of choice:
- AWS VPC Flow Logs
If you would like your SkyFormation for AWS Cloud Connector to automatically parse event
types from additional AWS sources, please contact support@skyformation.com.
Prerequisite steps in your SkyFormation app
(a) Make sure you have a SkyFormation for AWS Cloud Connector running and connected to the
AWS account you would like to start getting CloudWatch Logs events from.
Prerequisite steps in your AWS account
(a) Publish the AWS service events you would like to fetch to an AWS CloudWatch Logs group
This is an ad-hoc process that depends on the specific AWS event source whose events you
would like to fetch. Please consult your AWS admin on how to complete this step.
You will need the name of the AWS CloudWatch Logs group the events are forwarded to.
Example (relevant only for publishing VPC Flow Logs events):
Publishing Flow Logs to CloudWatch Logs Group
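For instance, VPC Flow Logs can be published to a CloudWatch Logs group with the AWS CLI roughly as follows; the VPC ID, log group name, and IAM role ARN are example values that must be replaced with your own:
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name "VpcFlowLogsGroup" \
    --deliver-logs-permission-arn "arn:aws:iam::123456789012:role/FlowLogsRole"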
(b) Install and configure the AWS CLI (see https://aws.amazon.com/cli/), as the following steps use it
(c) Associate a subscription filter with Kinesis on the CloudWatch Logs group used for
the VPC Flow Logs
To create a subscription filter for Kinesis
> NOTE: Create the Kinesis stream in the same region as the CloudWatch Logs for faster and easier configuration. Replace the region in the examples below with the real region name, e.g. us-east-1.
- Create a destination Kinesis stream (e.g. named VpcFlowLogsStream) using the following command:
aws kinesis create-stream --stream-name "VpcFlowLogsStream" --shard-count 1
- Wait until the Kinesis stream becomes Active (this might take a minute or two). You can use the following Kinesis describe-stream command to check the StreamDescription.StreamStatus property. In addition, note the StreamDescription.StreamARN value, as you will need it in a later step:
aws kinesis describe-stream --stream-name "VpcFlowLogsStream"
The following is example output:
{
    "StreamDescription": {
        "StreamStatus": "ACTIVE",
        "StreamName": "VpcFlowLogsStream",
        "StreamARN": "arn:aws:kinesis:region:123456789012:stream/VpcFlowLogsStream",
        "Shards": [
            {
                "ShardId": "shardId-000000000000",
                "HashKeyRange": {
                    "EndingHashKey": "340282366920938463463374607431768211455",
                    "StartingHashKey": "0"
                },
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49551135218688818456679503831981458784591352702181572610"
                }
            }
        ]
    }
}
- Create the IAM role that will grant CloudWatch Logs permission to put data into your Kinesis stream. First, you'll need to create a trust policy in a file (for example, ~/TrustPolicyForCWL.json). Use a text editor to create this policy. Do not use the IAM console to create it.
{
    "Statement": {
        "Effect": "Allow",
        "Principal": {
            "Service": "logs.region.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
    }
}
- Use the create-role command to create the IAM role, specifying the trust policy file. Note the returned Role.Arn value, as you will also need it for a later step:
aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL.json
{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Statement": {
                "Action": "sts:AssumeRole",
                "Effect": "Allow",
                "Principal": {
                    "Service": "logs.region.amazonaws.com"
                }
            }
        },
        "RoleId": "AAOIIAH450GAB4HC5F431",
        "CreateDate": "2015-05-29T13:46:29.431Z",
        "RoleName": "CWLtoKinesisRole",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
    }
}
- Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, you'll create a permissions policy in a file (for example, ~/PermissionsForCWL.json). Use a text editor to create this policy. Do not use the IAM console to create it.
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "kinesis:PutRecord",
            "Resource": "arn:aws:kinesis:region:123456789012:stream/VpcFlowLogsStream"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
        }
    ]
}
- Associate the permissions policy with the role using the following put-role-policy command:
aws iam put-role-policy --role-name CWLtoKinesisRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
- After the Kinesis stream is in Active state and you have created the IAM role, you can create the CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your Kinesis stream:
aws logs put-subscription-filter \
    --log-group-name "VpcFlowLogsGroup" \
    --filter-name "VpcFlowLogsStream" \
    --filter-pattern "" \
    --destination-arn "arn:aws:kinesis:region:123456789012:stream/VpcFlowLogsStream" \
    --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
- After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your Kinesis stream.
- Verify that the VPC Flow Logs are forwarded to the Kinesis stream as intended.
Run the Kinesis get-shard-iterator and get-records commands to fetch some Kinesis records:
aws kinesis get-shard-iterator --stream-name VpcFlowLogsStream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
{ "ShardIterator": "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP" }
aws kinesis get-records --limit 10 --shard-iterator "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
Note that you might need to make this call a few times before Kinesis starts to return data.
You should expect to see a response with an array of records. The Data attribute in a Kinesis record is Base64 encoded and compressed with the gzip format. You can examine the raw data from the command line using the following Unix commands:
echo -n "<Content of Data>" | base64 -d | zcat
The Base64 decoded and decompressed data is formatted as JSON with the following key elements (an illustrative decoded record follows the list):
- owner: The AWS Account ID of the originating log data.
- logGroup: The log group name of the originating log data.
- logStream: The log stream name of the originating log data.
- subscriptionFilters: The list of subscription filter names that matched with the originating log data.
- messageType: Data messages will use the "DATA_MESSAGE" type. Sometimes CloudWatch Logs may emit Kinesis records with a "CONTROL_MESSAGE" type, mainly for checking if the destination is reachable.
- logEvents: The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.
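As an illustration only, a decoded DATA_MESSAGE record could look roughly like the following; all values here are placeholders, not output from a real account:
{
    "owner": "123456789012",
    "logGroup": "VpcFlowLogsGroup",
    "logStream": "eni-0123456789abcdef0-all",
    "subscriptionFilters": ["VpcFlowLogsStream"],
    "messageType": "DATA_MESSAGE",
    "logEvents": [
        {
            "id": "<unique event id>",
            "timestamp": 1432826855000,
            "message": "<a single VPC Flow Logs record>"
        }
    ]
}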
- Add the following permissions policy to the IAM user or role used by the SkyFormation for AWS connector, to allow it to interact with the Kinesis stream created, VpcFlowLogsStream:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kinesis:GetShardIterator",
                "kinesis:GetRecords",
                "kinesis:DescribeStream"
            ],
            "Resource": "arn:aws:kinesis:region:123456789012:stream/VpcFlowLogsStream"
        }
    ]
}
Required AWS setup steps completed
You are now ready to configure your SkyFormation for AWS connector to fetch the
VPC Flow Logs from the Kinesis stream.
Configure the SkyFormation for AWS connector
Navigate to the SkyFormation for AWS connector connected to the AWS account the Kinesis stream belongs to, press its EDIT button and scroll to the list of the Connector's endpoints.
Locate the endpoint "ENDPOINT - CLOUDWATCH-LOGS - [%the VPC Flow Logs group streamed to kinesis%]", currently shown with the placeholder name "TMP-FLOW-LOGS", and set it to the CloudWatch Logs group you published the VPC Flow Logs into (e.g. VpcFlowLogs in our example).
Save the connector, press its EDIT button again and scroll to the list of the Connector's endpoints.
Press the relevant endpoint's START button.