Logging Cowrie logs to the ELK stack

[Screenshot: Kibana dashboard]

This entry will cover the basics of setting up the Cowrie SSH honeypot and Filebeat to export Cowrie's logs to Elasticsearch, so we can use Kibana to visualize them in charts.

Goal

We will have two servers with private networking between them. One will host the ELK stack and the other will run Cowrie + Filebeat.

The ELK server will receive and store the logs in Elasticsearch, so we can easily search and visualize them using Kibana, the Elasticsearch front-end.

The honeypot will just serve the fake SSH service and ship its logs to the ELK server.

Prerequisites

  • Two servers, ideally with private networking between them.
  • The ELK stack (Elasticsearch, Logstash, and Kibana) installed on the ELK server.
  • Cowrie, Filebeat, and Java 8 installed on the honeypot server.

Configuration

ELK Server

We need to set up Logstash so it will receive Filebeat's input on a port. Logstash configuration files have three sections:

  • input: Where to get the data from.
  • filter: Message filtering and processing. Here we can tell Logstash what we want to keep, and also add extra fields to the message, like GeoIP location.
  • output: Where the filtered messages will go.

Logstash can have multiple inputs, filters, and outputs. It can even have multiple configuration files: by default, Logstash will load every .conf file found in /etc/logstash/conf.d/.

Before actually creating a conf file for Cowrie, let's set up two local databases for GeoIP resolution. We will use MaxMind's free GeoIP databases.
These databases are local .dat files which are accurate to a certain extent. Keep in mind that, as local files, they do not get updated automatically. MaxMind updates its databases the first Thursday of each month, so you will need to re-download them manually or set up a cron job if you want to keep them updated (a sketch follows the commands below).

Without further ado:

# Create the directory structure
sudo mkdir -p /opt/logstash/vendor/geoip  
cd /opt/logstash/vendor/geoip  
# Download the databases
sudo wget "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"  
sudo wget "http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz"  
sudo gunzip *.dat.gz  
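
If you want to keep the databases updated automatically, a cron entry along these lines should work (a sketch: the 5:00 run time and the /etc/cron.d path are assumptions, and since cron cannot express "first Thursday" directly, we run on days 1-7 and test for Thursday):

# /etc/cron.d/geoip-update (sketch): refresh the databases once a month.
# Runs at 5:00 on days 1-7; the date test keeps only the first Thursday.
0 5 1-7 * * root [ "$(date +\%u)" = "4" ] && cd /opt/logstash/vendor/geoip && wget -q "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz" "http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz" && gunzip -f *.dat.gz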

Now, for the Logstash config file:

/etc/logstash/conf.d/cowrie.conf:

input {
    beats {
        port => 5045    # Pick an available port to listen on
    }
}

filter {  
    if [type] == "cowrie" {

        # Parse the JSON line that Filebeat shipped in the message field
        json {
            source => "message"
        }

        # Use Cowrie's own timestamp as the event date
        date {
            match => [ "timestamp", "ISO8601" ]
        }

        if [src_ip] {

            # Reverse-resolve the source IP to get the attacker's hostname
            dns {
                reverse => [ "src_host", "src_ip" ]
                action => "append"
            }

            geoip {
                source => "src_ip"  # With the src_ip field
                target => "geoip"   # Add the geoip one
                # Using the database we previously saved
                database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
                add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
            }

            # Get the ASN code as well
            geoip {
                source => "src_ip"
                database => "/opt/logstash/vendor/geoip/GeoIPASNum.dat"
            }

            mutate {
                convert => [ "[geoip][coordinates]", "float" ]
            }
        }
    }
}

output {  
    if [type] == "cowrie" {
        # Output to elasticsearch
        elasticsearch {
           hosts => ["localhost:9200"]  # Provided elasticsearch is listening on that host:port
           sniffing => true
           manage_template => false
           index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
           document_type => "%{[@metadata][type]}"
        }
        # For debugging
        stdout {
            codec => rubydebug
        }
    }
}

Logstash should now be ready to receive logs; we just need to start the service.

sudo service logstash start  
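
To verify that Logstash is actually listening on the Beats port (5045 in this example), you can check from the ELK server:

sudo netstat -tlnp | grep 5045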

Honeypot

Provided we have Cowrie already set up and logging to /home/cowrie/cowrie/log/cowrie.json, shipping that data to the ELK server is rather simple: we just need to set up Filebeat for it.
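
For reference, each line in cowrie.json is a self-contained JSON event, roughly like this (an abridged, illustrative sample; the exact fields vary per event type):

{"eventid": "cowrie.login.failed", "username": "root", "password": "123456", "src_ip": "203.0.113.7", "timestamp": "2016-07-20T13:30:01.123456Z", "session": "8ad43fb1", "sensor": "honeypot-01", "message": "login attempt [root/123456] failed"}

This is the message field that the json filter above parses, and src_ip is what feeds the dns and geoip filters.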

You should already have Java 8 and Filebeat installed on the honeypot. If that's the case, the only thing missing is configuring Filebeat, which you can do in /etc/filebeat/filebeat.yml.

filebeat:
  prospectors:
    -
      paths:
        - /home/cowrie/cowrie/log/cowrie.json
      input_type: log
      document_type: cowrie    # Becomes the "type" field we filter on in Logstash
output:  
  logstash:
    hosts: ["xxx.xxx.xxx.xxx:5045"]  # Whatever your ELK host is

shipper:  # Left empty on purpose, the defaults are fine

logging:  
    path: /var/log/filebeat
    name: current
    rotateeverybytes: 10485760 # = 10MB

Once configured, simply start the service:

sudo service filebeat start  
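
You can follow Filebeat's own log (we pointed it at /var/log/filebeat above) to confirm it picked up cowrie.json and connected to Logstash:

tail -f /var/log/filebeat/current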

Checking the config

The easiest way to check if your setup is correct is to head to Kibana and search for the logs with a type: "cowrie" query in the Discover tab. Keep in mind that Cowrie needs to have generated at least one log entry for anything to show up in Kibana.
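
Alternatively, you can query Elasticsearch directly from the ELK server; if everything is wired up you should see a filebeat-* index growing and hits for the cowrie type:

# List indices; look for filebeat-YYYY.MM.dd entries
curl 'localhost:9200/_cat/indices?v'
# Fetch one cowrie event
curl 'localhost:9200/filebeat-*/_search?q=type:cowrie&size=1&pretty'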

Troubleshooting

If Cowrie is logging to the specified file (log/cowrie.json by default) but no logs are showing up in Kibana, you can check Filebeat and Logstash individually.

Config tests

Both filebeat and logstash (located in /opt/logstash/bin/logstash) accept a config-test flag which you can use to check your config files for syntax errors, as shown below. Passing the configuration test does not mean your configuration is correct, it just means it has no syntax errors, but it is a starting point.
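
For example (flag spellings as of Filebeat 1.x / Logstash 2.x; check --help on your versions):

# Test the Filebeat config
sudo filebeat -configtest -c /etc/filebeat/filebeat.yml
# Test the Logstash config file we wrote earlier
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/cowrie.conf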

Debugging

If the config test passes but there is still no data in Elasticsearch / Kibana, you can run the services individually in the foreground in verbose mode to check for runtime errors.

For Logstash:

sudo /opt/logstash/bin/logstash -v -f /etc/logstash/conf.d/cowrie.conf  

For Filebeat:

sudo filebeat -e -v -c /etc/filebeat/filebeat.yml  # -e logs to stderr so output stays in the foreground

You may also want to:

  • Check that the port Logstash is trying to use as input is not used by any other process on the machine.
  • Check for privilege errors; files should be readable by whatever user you run your services as.
  • Check that the paths are correct for your machine.
  • Check that Logstash / Elasticsearch are up and accepting connections on the port and interface you are trying to connect to (see the commands below).
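
For the last two checks, something along these lines works (assuming the addresses and ports used throughout this post):

# Is Elasticsearch up and answering? (run on the ELK server)
curl 'localhost:9200'
# Can the honeypot reach the Logstash port? (run from the honeypot)
nc -vz xxx.xxx.xxx.xxx 5045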