Pfsense, Suricata and Kibana


In this article I will show how to configure the Pfsense firewall and the Suricata IDS with a Kibana dashboard. The proposed architecture provides a modern and functional IDS with a good graphical user interface without spending money on commercial products.

The solution makes it possible to monitor attack attempts against network services in real time and, if necessary, to start the related incident management process.

The GUI makes it simple and fast to get a lot of information about web attacks: class, type, remote IP, country, etc.

The architecture proposed is the following:

Pfsense and Suricata

Software used:

Pfsense 2.3: free and open source firewall.

Suricata 3.1: Intrusion Detection System.

Fluentd 2.3: open source data collector.

Elasticsearch 5.4: open source search and storage engine.

Kibana 5.4: Dashboard for creating powerful graphs for suricata alert visualization.

Let’s start with Pfsense and Suricata installation and configuration.

Pfsense and Suricata

Pfsense is an open source firewall based on the FreeBSD OS. In addition to managing access rules, NAT, load balancing and the other features of a normal firewall, it can integrate with other modules such as an Intrusion Detection System (Suricata and Snort), a Web Application Firewall (ModSecurity), Squid, etc.

Suricata is the Intrusion Detection System module used for our purpose: it is a high performance network IDS that is easily installable in Pfsense through the System Package Manager.

After installing it, the Intrusion Detection System GUI is shown in Pfsense under Services/Suricata.

suricata gui

The first thing to do is to install the Suricata rules: they are the method for matching threats against network traffic. A typical Suricata rule looks like this:

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Likely Fake Antivirus Download ws.exe"; flow:established,to_server; content:"GET"; http_method; content:"/install/ws.exe"; http_uri; nocase; reference:url,doc.emergingthreats.net/2010051; classtype:trojan-activity; sid:2010051; rev:4;)

An HTTP GET with the "/install/ws.exe" URI is detected as "ET TROJAN Likely Fake Antivirus Download ws.exe", belonging to the attack class "trojan-activity".
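In general a rule is made of an action (alert), a protocol, source and destination addresses and ports, and a list of options in parentheses (message, matching content, classification, unique sid). As an illustration only, a minimal hypothetical local rule that alerts on any established HTTP request for /test.php could look like this (sid 9000001 is an arbitrary local sid, not an Emerging Threats one):

alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"LOCAL TEST Request for /test.php"; flow:established,to_server; content:"/test.php"; http_uri; nocase; classtype:web-application-activity; sid:9000001; rev:1;)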

To configure the type of rules to download, click on the Global Settings section of Suricata:

suricata rules

 

It is also possible to download the Snort rules, because they are compatible. To do that, an oinkcode is necessary: read https://www.snort.org/oinkcodes to understand how to obtain one.

After inserting the rule sources, it's possible to download them from the Update tab.

After configuring the interface where Suricata will work (in our case the WAN), Suricata can be started.
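Optionally, as a quick sanity check (a sketch, assuming shell access to the Pfsense box via Diagnostics/Command Prompt or SSH), you can verify that the Suricata process is actually running on the configured interface:

# list any running Suricata processes on the Pfsense box
ps ax | grep suricata | grep -v grep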

Now it's time to select the rulesets that Suricata will load at startup in the WAN Categories tab. Click on Services/Suricata/WAN Categories:

suricata policy

The last step is to set the logging facility and priority, and to configure Pfsense to forward the logs to an external syslog server. Click on Status/System Logs/Settings:

pfsense syslog

 

The Suricata alerts are now configured to be forwarded to the syslog server, where they will be parsed by the fluentd client. Click on Services/Suricata/Global Settings:

suricata logs

Fluentd client

Fluentd is an open source data collector that can parse a log file and structure it into typed data. A simple plugin called grok is available for the parsing. The following URL can be used for testing the regular expressions written with grok: http://grokconstructor.appspot.com/do/match.

Before installing and configuring fluentd, it's necessary to configure the rsyslogd server:

[root@td-agent_client ~]# vi /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 5140
local1.*  /var/log/suricata.log
[root@td-agent_client ~]# systemctl restart rsyslog
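Optionally, a quick check (a sketch, assuming the ss and logger utilities are available on the client) that rsyslog is listening on UDP 5140 and writing local1 messages to the new file:

[root@td-agent_client ~]# ss -lun | grep 5140
[root@td-agent_client ~]# logger -d -n 127.0.0.1 -P 5140 -p local1.info "rsyslog forwarding test"
[root@td-agent_client ~]# tail -n 1 /var/log/suricata.log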

It's easier to understand how fluentd works by showing the installation and configuration on the td-agent client where the Suricata logs are available. The package that I will install is td-agent: the stable distribution of Fluentd.

[root@td-agent_client ~]# curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent2.sh | sh
[root@td-agent_client ~]# /usr/sbin/td-agent-gem install fluent-plugin-grok-parser
[root@td-agent_client ~]# /usr/sbin/td-agent-gem install fluent-plugin-record-reformer
[root@td-agent_client ~]# touch /var/log/suricata.log_pos
[root@td-agent_client ~]# chown td-agent:td-agent /var/log/suricata.log
[root@td-agent_client ~]# chown td-agent:td-agent /var/log/suricata.log_pos
[root@td-agent_client ~]# vi /etc/td-agent/td-agent.conf
<source>
@type tail
pos_file /var/log/suricata.log_pos
path /var/log/suricata.log
tag suricata_log
types code:integer,size:integer,reqtime:float,apptime:float
time_format %d/%b/%Y:%H:%M:%S %z
log_level trace
<parse>
@type grok
custom_pattern_path /etc/td-agent/grok_pattern.txt
grok_pattern %{SURICATA}
</parse>
</source>
<match suricata_log.**>
type forward
time_as_integer true
send_timeout 60s
recover_wait 10s
heartbeat_interval 1s
phi_threshold 16
hard_timeout 60s
flush_interval 0
<server>
name kibana
host 192.168.1.1
port 5141
</server>
<secondary>
@type file
path /var/log/td-agent/forward-failed
</secondary>
</match>
[root@td-agent_client ~]# more /etc/td-agent/grok_pattern.txt
SURICATA %{SYSLOGTIMESTAMP:date} .*ET (?:%{GREEDYDATA:attack}( \[Classification:)) (?:%{GREEDYDATA:class2})\].*\[Priority: (?:%{NUMBER:priority})\] \{(?:%{WORD:protocol}\}) (?:%{IP:remote}):(?:%{NUMBER:remote_port}) -> (?:%{IP:local}):(?:%{NUMBER:local_port})

The td-agent daemon tails the Suricata log and parses every record it receives using the grok rule defined in the grok_pattern.txt file. The parsed Suricata logs are then sent to another td-agent server that stores them in the Elasticsearch database.

[root@td-agent_client ~]# tail -f /var/log/suricata.log
Apr 23 16:43:44 192.168.2.1 facility[87410]: [1:2011716:3] ET SCAN Sipvicious User-Agent Detected (friendly-scanner) [Classification: Attempted Information Leak] [Priority: 2] {UDP} 62.138.3.173:5086 -> 164.132.193.215:5060

After starting td-agent with the /etc/init.d/td-agent start command, this is what happens behind the scenes (a quick check is sketched after the list):

  1. Fluentd tails the Suricata log file called /var/log/suricata.log.
  2. An alert is sent by Suricata to the syslog server and written to the log. A typical log line is like the previous one: SCAN Sipvicious User-Agent Detected.
  3. Fluentd, after parsing the log according to the grok rule, forwards it to the td-agent server with the suricata_log tag.
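To confirm that this chain actually comes up cleanly, a small check (a sketch, using the default td-agent log location) is to start the daemon and watch its own log for grok parsing errors:

[root@td-agent_client ~]# /etc/init.d/td-agent start
[root@td-agent_client ~]# tail -f /var/log/td-agent/td-agent.log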

As I said, the grok plugin has a web site where it is possible to test the grok rule. In this case this is the output:

Apr 23 16:43:44 192.168.2.1 facility[87410]: [1:2011716:3] ET SCAN Sipvicious User-Agent Detected (friendly-scanner) [Classification: Attempted Information Leak] [Priority: 2] {UDP} 62.138.3.173:5086 -> 164.132.193.215:5060
MATCHED
class2 Attempted·Information·Leak
priority 2
local 164.132.193.215
remote 62.138.3.173
date Apr·23·16:43:44
attack SCAN·Sipvicious·User-Agent·Detected·(friendly-scanner)
remote_port 5086
local_port 5060
protocol UDP

Perfect: the grok rule is correct and the Suricata alert is parsed fine.

Fluentd uses UDP and TCP for communicating with the server: the server port 5141 must be opened on the firewall.
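For example, if the td-agent server runs firewalld, the port can be opened like this (a sketch; adapt it to whatever firewall is actually in front of the server):

[root@td-agent_server ~]# firewall-cmd --permanent --add-port=5141/tcp
[root@td-agent_server ~]# firewall-cmd --permanent --add-port=5141/udp
[root@td-agent_server ~]# firewall-cmd --reload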

Now let's install and configure the fluentd server.

Fluentd server

The fluentd server installation procedure is the same already explained for the client. New plugins are installed in order to add information about the geographical location of IP addresses and to integrate fluentd with Elasticsearch.

This is the procedure for installing the geoip plugin:

[root@td-agent_server ~]# yum install geoip-devel --enablerepo=epel
[root@td-agent_server ~]# /usr/sbin/td-agent-gem install fluent-plugin-geoip

To install the elasticsearch plugin:

[root@td-agent_server ~]# /usr/sbin/td-agent-gem install fluent-plugin-elasticsearch

Td-agent server configuration is the following:

[root@td-agent_server~]# more /etc/td-agent/td-agent.conf
####
<match syslog.**>
type elasticsearch
logstash_format true
logstash_prefix system_log
</match>
<source>
type forward
port 5141
bind 0.0.0.0
</source>
<match suricata_log.**>
type geoip
geoip_lookup_key remote
<record>
city ${city["remote"]}
latitude ${latitude["remote"]}
longitude ${longitude["remote"]}
country_code3 ${country_code3["remote"]}
country ${country_code["remote"]}
country_name ${country_name["remote"]}
dma ${dma_code["remote"]}
area ${area_code["remote"]}
region ${region["remote"]}
location_array '[${longitude["remote"]},${latitude["remote"]}]'
</record>
remove_tag_prefix suricata_log.
tag newsuricata_log.
</match>
<match newsuricata_log.**>
host 192.168.2.1
type elasticsearch
logstash_format true
logstash_prefix suricata_log
</match>

The GeoIP plugin uses the remote IP address as the key to get all the needed information from the GeoIP database: these new parameters are added to the data received from the td-agent client and sent to Elasticsearch, which is listening on the 192.168.2.1:9200 TCP port.

geoipupdate is used to update the GeoIP database. Get the software from https://github.com/maxmind/geoipupdate/releases, compile and install it, and automate the database update in the crontab (a build sketch and the cron entry follow).
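A minimal build sketch, assuming a 2.x release tarball downloaded from the page above (the exact version and build dependencies may differ):

[root@td-agent_server ~]# yum install -y gcc make zlib-devel libcurl-devel
[root@td-agent_server ~]# tar xzf geoipupdate-2.*.tar.gz && cd geoipupdate-2.*
[root@td-agent_server ~]# ./configure && make && make install

Then schedule the update in the crontab: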

[root@td-agent_server~]#crontab -e
MAILTO=your@email.com
18 22 * * 5 /usr/local/bin/geoipupdate
# end of crontab

Let's go now to the final step: Kibana and Elasticsearch installation.

Elasticsearch

Elasticsearch is an open source engine that permits storing data in an index (you can think of it like a database) and searching it through RESTful APIs. RESTful APIs are a way to get and put data using HTTP methods: GET, POST, PUT and DELETE. Every index is mapped to different types (you can think of a type like a table) and every type contains different parameters, like the columns in a table.

The objects carried by the HTTP methods are JSON formatted and every index is created at run time: there is no need to create the index before putting data, as opposed to a SQL database.

The Elasticsearch installation is very simple: I suggest following this link https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html.

After installing it, Elasticsearch can be started with systemctl start elasticsearch.
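A quick way to verify that the node is up and answering on its REST port before pointing td-agent at it:

[root@td-agent_server ~]# curl 'http://localhost:9200/?pretty'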

As I said, there is no need to create any structure in the Elasticsearch database. The td-agent server puts the received records into Elasticsearch through the REST API. The index is created automatically by Elasticsearch when the first record is received. Very cool.
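The same mechanism can be seen by hand with a minimal illustration (the test index name and the field values here are just examples): index a JSON document, let Elasticsearch create the index on the fly, then search it back.

[root@td-agent_server ~]# curl -XPOST 'localhost:9200/test/demo' -d '{"remote":"62.138.3.173","attack":"SCAN example"}'
[root@td-agent_server ~]# curl 'localhost:9200/test/_search?q=remote:62.138.3.173&pretty'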

Elasticsearch changes the index every day: every index created has the day appended at the end of the tag. In our case the tag is suricata_log, which means that there will be an index named suricata_log-YYYY.MM.DD for each day.

We can get the index on elasticsearch by HTTP Get:

[root@td-agent_server ~]# curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open suricata_log-2016.04.26 1 1 0 0 159b 159b
yellow open suricata_log-2016.04.25 1 1 0 0 159b 159b
yellow open suricata_log-2016.04.24 1 1 0 0 159b 159b
yellow open suricata_log-2016.04.23 1 1 256 0 141.5kb 141.5kb
yellow open suricata_log-2016.04.22 1 1 222 0 115.6kb 115.6kb
yellow open test 1 1 0 0 159b 159b
yellow open .kibana 1 1 16 0 19.4kb 19.4kb

A different index has been created automatically for every day: a great thing.

Before going to Kibana, a problem must be solved (this problem is no longer present with Kibana/Elasticsearch 5.4, so you can skip this part). The string type is analyzed by default: it means that every string is split into separate words, using the space as separator. That is not good for us.

The way that I use to tell Elasticsearch to treat some parameters as not analyzed is to create the index before the first record is received. This can be done in this way:

[root@td-agent_server ~]# curl -XPOST localhost:9200/suricata_log-2016.04.26 -d '{
"settings" : {
"number_of_shards" : 1
},
"mappings" : {
"fluentd" : {
"properties" : {
"@timestamp":{"type":"date","format":"dateOptionalTime"},
"area":{"type":"long"},
"attack":{"type":"string","index":"not_analyzed"},
"city":{"type":"string"},
"class2":{"type":"string","index":"not_analyzed"},
"location_array": {"type": "geo_point"},
"country":{"type":"string"},
"country_code3":{"type":"string"},
"country_name":{"type":"string","index":"not_analyzed"},
"date":{"type":"string"},
"dma":{"type":"long"},
"hostname":{"type":"string"},
"latitude":{"type":"double"},
"local":{"type":"string"},
"local_port":{"type":"string"},
"longitude":{"type":"double"},
"message":{"type":"string"},
"priority":{"type":"string"},
"protocol":{"type":"string"},
"region":{"type":"string"},
"remote":{"type":"string"},
"remote_port":{"type":"string"}
}
}
}
}'

This process can be automated every day by a simple script:

#!/bin/bash
# Create tomorrow's suricata_log index with the not_analyzed mappings in place
next_day=$(date --date="next day" +"%d")
next_month=$(date --date="next day" +"%m")
next_year=$(date --date="next day" +"%Y")
curl -XPOST localhost:9200/suricata_log-${next_year}.${next_month}.${next_day} -d '{
"settings" : {
"number_of_shards" : 1
},
"mappings" : {
"fluentd" : {
"properties" : {
"@timestamp":{"type":"date","format":"dateOptionalTime"},
"area":{"type":"long"},
"attack":{"type":"string","index":"not_analyzed"},
"city":{"type":"string"},
"class2":{"type":"string","index":"not_analyzed"},
"location_array": {"type": "geo_point"},
"country":{"type":"string"},
"country_code3":{"type":"string"},
"country_name":{"type":"string","index":"not_analyzed"},
"date":{"type":"string"},
"dma":{"type":"long"},
"hostname":{"type":"string"},
"latitude":{"type":"double"},
"local":{"type":"string"},
"local_port":{"type":"string"},
"longitude":{"type":"double"},
"message":{"type":"string"},
"priority":{"type":"string"},
"protocol":{"type":"string"},
"region":{"type":"string"},
"remote":{"type":"string"},
"remote_port":{"type":"string"}
}
}
}
}'
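Saving the script for example as /usr/local/bin/create_suricata_index.sh (the path is just an assumption) and scheduling it shortly before midnight makes sure tomorrow's index already exists when the first record arrives:

[root@td-agent_server~]# crontab -e
50 23 * * * /usr/local/bin/create_suricata_index.sh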

Let’s go to Kibana now.

Kibana

Kibana is an open source data visualization platform that allows you to interact with your data through powerful graphics.

The installation is very simple: follow this link: https://www.elastic.co/downloads/kibana.

After installing Kibana and starting it with systemctl start kibana, you can log in to the dashboard at http://ip_address:5601 and configure an index pattern: suricata_log-*.

kibana default index

Now it's possible to create the first dashboard containing different visualizations. This is the result:

kibana suricata

kibana geoip attack

I won't explain how to build graphs in Kibana because it is very intuitive: you will learn more by using it than by reading my explanation.

More powerful graphs can be created: the only limit is your imagination.

Conclusion

The graphs are very useful: they permit monitoring the remote attackers and the attack classes. From the dashboard it's possible to start an incident process in case of a serious attack, or to trigger an action like blocking an IP, a range or a country in the firewall.

The solution proposed can be improved in this way:

  1. Creating another dashboard with the log data received directly from a reverse proxy and/or application server. The Suricata alerts can be cross-filtered with these data to build better visualizations.
  2. Creating another dashboard with the log data received from the ModSecurity WAF configured in Apache or nginx, and adding it to the Suricata alerts.
  3. Configuring Shield roles for your Kibana users to control what data those users can read. This is useful to give dashboard access to other teams, like the Help Desk.

With these improvements the solution becomes a very good alternative to a commercial IPS.

 

 
