Zeekurity Zen – Part VIII: How to Send Zeek Logs to Elastic

If you're looking for professional services on this topic or interested in other cybersecurity consulting services, please reach out to me via my Contact page to discuss further.
This is part of the Zeekurity Zen Zeries on building a Zeek (formerly Bro) network sensor.
Overview
In our Zeek journey thus far, we’ve:
- Set up Zeek to monitor some network traffic.
- Used Zeek Package Manager to install packages.
- Configured Zeek to send logs to Splunk for analysis.
- Uncovered notable events through basic threat hunting.
- Leveraged threat intelligence to elevate our threat hunting game.
- Detected malicious files by hash and extracted commonly exploited file types.
- Analyzed encrypted traffic through handshake details and fingerprinting.
In Part III, we sent our Zeek logs to Splunk and unlocked a new level of analysis and visualization capabilities. But what if we don’t have access to Splunk? What if instead of Splunk, we use Elastic?
For those new to Elastic, the “Elastic Stack” or “ELK Stack” is a popular open source data analytics platform. Elastic has become increasingly popular not only because it offers a free tier, but also because of its significant security-focused features and active user community.
Fortunately for us, Elastic natively supports Zeek logs and is a great way to analyze and visualize Zeek data.
To do this, we’ll walk through these steps:
- Configure Zeek to output logs in JSON format.
- Install and configure Filebeat to consume Zeek logs.
- Write simple Kibana queries.
Output Zeek logs to JSON
- Stop Zeek if it is currently running.
zeekctl stop
- Edit /opt/zeek/share/zeek/site/local.zeek and add the following.
# Output to JSON
@load policy/tuning/json-logs.zeek
- Restart Zeek and view the logs in /opt/zeek/logs/current to confirm they are now in JSON format.
zeekctl deploy
cd /opt/zeek/logs/current
less conn.log
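If you have jq installed (purely optional, but handy for pretty-printing), you can also spot-check that the entries parse as valid JSON:
# Pretty-print the most recent conn.log entry; assumes jq is installed
tail -1 /opt/zeek/logs/current/conn.log | jq .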
Configure Filebeat
This guide assumes you have already installed Filebeat. If not, refer to Elastic’s documentation and then come back here when you’re done.
- Switch back to your normal user.
su eric
- Stop Filebeat if it is currently running.
sudo systemctl stop filebeat
- Enable Filebeat’s Zeek module.
sudo filebeat modules enable zeek
- Set up Filebeat assets.
sudo filebeat setup -e
- Add Zeek’s log paths (e.g., /opt/zeek/logs/current/http.log) to the Zeek Filebeat configuration file. Assuming you installed Filebeat from the standard Elastic repositories, the configuration file will be in /etc/filebeat/modules.d/zeek.yml. For each log type you want to capture:
- Set the enabled field to true.
- Set the var.paths field to the log’s specific path.
The sample configuration file below enables all log types currently available in the Filebeat Zeek module. Note that while the “signature” log type is listed, it is not currently supported and will result in an error if enabled. This GitHub issue discusses the problem in further detail.
# Module: zeek
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.x/filebeat-module-zeek.html

- module: zeek
  capture_loss:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/capture_loss.log"]
  connection:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/conn.log"]
  dce_rpc:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dce_rpc.log"]
  dhcp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dhcp.log"]
  dnp3:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dnp3.log"]
  dns:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dns.log"]
  dpd:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dpd.log"]
  files:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/files.log"]
  ftp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ftp.log"]
  http:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/http.log"]
  intel:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/intel.log"]
  irc:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/irc.log"]
  kerberos:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/kerberos.log"]
  modbus:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/modbus.log"]
  mysql:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/mysql.log"]
  notice:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/notice.log"]
  ntlm:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ntlm.log"]
  ntp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ntp.log"]
  ocsp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ocsp.log"]
  pe:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/pe.log"]
  radius:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/radius.log"]
  rdp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/rdp.log"]
  rfb:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/rfb.log"]
  signature:
    enabled: false
    var.paths: ["/opt/zeek/logs/current/signature.log"]
  sip:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/sip.log"]
  smb_cmd:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/smb_cmd.log"]
  smb_files:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/smb_files.log"]
  smb_mapping:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/smb_mapping.log"]
  smtp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/smtp.log"]
  snmp:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/snmp.log"]
  socks:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/socks.log"]
  ssh:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ssh.log"]
  ssl:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/ssl.log"]
  stats:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/stats.log"]
  syslog:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/syslog.log"]
  traceroute:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/traceroute.log"]
  tunnel:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/tunnel.log"]
  weird:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/weird.log"]
  x509:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/x509.log"]

  # Set custom paths for the log files. If left empty,
  # Filebeat will choose the paths depending on your OS.
  #var.paths:
- Restart Filebeat to apply the configuration and confirm your Zeek logs are now properly ingested into Elasticsearch and available for analysis in Kibana.
sudo systemctl restart filebeat
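Before heading into Kibana, you can optionally sanity-check the pipeline from the command line. This is just a rough check and assumes Elasticsearch is listening locally on port 9200 without authentication; adjust the host and credentials to match your cluster.
# Confirm the Zeek module shows up under "Enabled"
sudo filebeat modules list
# Count Zeek events indexed so far; a non-zero count means ingestion is working
curl -s 'http://localhost:9200/filebeat-*/_count?q=event.module:zeek&pretty'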
Simple Kibana Queries
Once Zeek logs are flowing into Elasticsearch, we can write some simple Kibana queries to analyze our data. Let’s convert some of our previous sample threat hunting queries from Splunk SPL into Elastic KQL. Try taking each of these queries further by creating relevant visualizations using Kibana Lens.
Connections To Destination Ports Above 1024
event.module:zeek AND event.dataset:zeek.connection AND destination.port>1024
Query Responses With NXDOMAIN
event.module:zeek AND event.dataset:zeek.dns AND dns.response_code:NXDOMAIN
Expired Certificates
event.module:zeek AND event.dataset:zeek.x509 AND file.x509.not_after < now
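As a cross-check outside of Kibana, the same expired-certificate filter can be run directly against Elasticsearch. A minimal sketch, again assuming a local, unauthenticated cluster and the default filebeat-* index pattern:
# Count x509 records whose not_after timestamp is already in the past
curl -s 'http://localhost:9200/filebeat-*/_count?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"bool":{"filter":[{"term":{"event.dataset":"zeek.x509"}},{"range":{"file.x509.not_after":{"lt":"now"}}}]}}}'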
If you like my content and want to support me, I'd greatly appreciate you buying me a coffee. Thanks! 🙏
Hi, I followed the steps mentioned in your blog to send Zeek logs to Elastic. I installed Zeek version 4.0.7 and Filebeat version 7.17.5. My Elasticsearch and Kibana version is 7.15.0. Filebeat is unable to send Zeek logs to Elastic under the category event.module: "zeek". Instead, the logs are only visible in the Discover tab in general.
@timestamp: Jul 26, 2022 @ 08:56:48.537
agent.ephemeral_id: a330a046-34d5-48a0-8a57-c24ba0d97fe4
agent.hostname: bakhtawar
agent.id: 5542248b-ad82-4666-be57-6cb13db685de
agent.name: bakhtawar
agent.type: filebeat
agent.version: 7.17.5
container.id: ssl.log
ecs.version: 1.12.0
host.architecture: x86_64
host.containerized: false
host.hostname: bakhtawar
host.id: fd55a894765441258c780e102d780210
host.ip: 10.0.2.5, fe80::9b4:1ec:5e3a:c4c4
host.mac: 08:00:27:0e:7f:66
host.name: bakhtawar
host.os.codename: focal
host.os.family: debian
host.os.kernel: 5.15.0-41-generic
host.os.name: Ubuntu
host.os.platform: ubuntu
host.os.type: linux
host.os.version: 20.04.3 LTS (Focal Fossa)
input.type: filestream
log.file.path: /opt/zeek/logs/current/ssl.log
log.offset: 10,392
message:
However, when I view the logs under event.module: zeek, it shows no results. Can you please tell me why it is showing this abnormal behaviour? Thank you.
Hi Bakhtawar,
Elastic Stack requires the cluster to be at the same or a higher version than any connecting Filebeat. So you’ll either need to upgrade your Elasticsearch cluster to 7.17.5 or use an older Filebeat (7.15.0) to match.
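A quick way to compare the two versions side by side (assuming Elasticsearch is listening locally on port 9200 without authentication):
# Filebeat's version
filebeat version
# Elasticsearch's version (check version.number in the response)
curl -s 'http://localhost:9200/?pretty'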
Hope that helps!
Eric
Hi, I tried with Filebeat version 7.15.0 but I am still getting the same results.
Hm, the only thing I can think of is that maybe you didn’t set up Filebeat assets: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html#setup-assets
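If you did run setup, one thing worth checking is whether the Zeek ingest pipelines actually made it into Elasticsearch. A rough check, assuming a local, unauthenticated cluster:
# List the Zeek ingest pipelines created by "filebeat setup"
curl -s 'http://localhost:9200/_ingest/pipeline/filebeat-*-zeek-*?pretty' | grep '"filebeat-'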
Hi, this is what I get after running the sudo filebeat setup -e command.
********************************************************************************************************************
2022-07-28T11:47:15.098+0500 INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2022-07-28T11:47:15.098+0500 INFO instance/beat.go:673 Beat ID: 0a75a7d7-6c6b-442b-990d-6f322342b7cd
2022-07-28T11:47:15.108+0500 INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "0a75a7d7-6c6b-442b-990d-6f322342b7cd"}}}
2022-07-28T11:47:15.108+0500 INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "9023152025ec6251bc6b6c38009b309157f10f17", "libbeat": "7.15.0", "time": "2021-09-16T03:16:09.000Z", "version": "7.15.0"}}}
2022-07-28T11:47:15.108+0500 INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":6,"version":"go1.16.6"}}}
2022-07-28T11:47:15.109+0500 INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2022-07-28T08:39:52+05:00","containerized":false,"name":"bakhtawar","ip":["127.0.0.1/8","::1/128","10.0.2.5/24","fe80::9b4:1ec:5e3a:c4c4/64","172.18.0.1/16","172.17.0.1/16","fe80::42:cdff:fee4:b028/64","172.19.0.1/16","fe80::42:dff:fe0f:acc1/64","fe80::f429:e2ff:fe92:9230/64","fe80::349b:a2ff:feb2:ceb5/64","fe80::18b7:62ff:feb5:98d4/64"],"kernel_version":"5.15.0-41-generic","mac":["08:00:27:0e:7f:66","02:42:be:e2:76:55","02:42:cd:e4:b0:28","02:42:0d:0f:ac:c1","f6:29:e2:92:92:30","36:9b:a2:b2:ce:b5","1a:b7:62:b5:98:d4"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"20.04.3 LTS (Focal Fossa)","major":20,"minor":4,"patch":3,"codename":"focal"},"timezone":"PKT","timezone_offset_sec":18000,"id":"fd55a894765441258c780e102d780210"}}}
2022-07-28T11:47:15.109+0500 INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/home/bakhtawar", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 19178, "ppid": 19177, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2022-07-28T11:47:14.170+0500"}}}
2022-07-28T11:47:15.109+0500 INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.15.0
2022-07-28T11:47:15.110+0500 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2022-07-28T11:47:15.110+0500 INFO [publisher] pipeline/module.go:113 Beat name: bakhtawar
2022-07-28T11:47:15.146+0500 INFO beater/filebeat.go:117 Enabled modules/filesets: zeek (dhcp, smb_cmd, snmp, tunnel, files, intel, ntlm, dns, irc, mysql, ocsp, x509, notice, smb_mapping, socks, ssl, dce_rpc, dnp3, http, radius, rfb, ssh, weird, connection, dpd, ntp, rdp, signature, smb_files, kerberos, modbus, pe, sip, smtp, traceroute, capture_loss, ftp, stats, syslog), ()
2022-07-28T11:47:15.182+0500 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2022-07-28T11:47:15.184+0500 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
ILM policy and write alias loading not enabled.
Template loading not enabled.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2022-07-28T11:47:15.185+0500 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2022-07-28T11:47:17.080+0500 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2022-07-28T11:47:18.100+0500 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2022-07-28T11:48:35.142+0500 INFO instance/beat.go:848 Kibana dashboards successfully loaded.
Loaded dashboards
2022-07-28T11:48:35.142+0500 WARN [cfgwarn] instance/beat.go:574 DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
2022-07-28T11:48:35.142+0500 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2022-07-28T11:48:35.143+0500 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-28T11:48:35.144+0500 INFO kibana/client.go:167 Kibana url: http://localhost:5601
2022-07-28T11:48:35.170+0500 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
2022-07-28T11:48:35.195+0500 WARN fileset/modules.go:425 X-Pack Machine Learning is not enabled
Loaded machine learning job configurations
2022-07-28T11:48:35.195+0500 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2022-07-28T11:48:35.197+0500 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-28T11:48:35.201+0500 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2022-07-28T11:48:35.203+0500 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-28T11:48:35.253+0500 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.0-zeek-dhcp-pipeline"}
2022-07-28T11:48:35.299+0500 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.0-zeek-dns-pipeline"}
2022-07-28T11:48:35.358+0500 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.0-zeek-http-pipeline"}
2022-07-28T11:48:35.403+0500 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.0-zeek-ssl-pipeline"}
2022-07-28T11:48:35.463+0500 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.15.0-zeek-connection-pipeline"}
2022-07-28T11:48:35.463+0500 INFO cfgfile/reload.go:262 Loading of config files completed.
Loaded Ingest pipelines
*********************************************************************************************************************
Can you please point out any abnormal behaviour in this output?
Did you restart Filebeat after running setup? Should work now.
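If it still doesn’t show up after a restart, tailing Filebeat’s own logs will usually point at the problem (this assumes Filebeat was installed from the deb package and runs under systemd):
sudo systemctl restart filebeat
# Watch for harvester or Elasticsearch connection errors
sudo journalctl -u filebeat -f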
Hi Eric,
How would you do some of the other Splunk queries against Zeek data in Kibana? I’m having a hard time recreating them:
This one for example in Splunk:
TOP 10 SOURCES BY BYTES SENT
Recommended visualization: Statistics view
index=zeek sourcetype=zeek_conn
| stats values(service) as Services sum(orig_bytes) as B by id.orig_h
| sort -B
| head 10
| eval MB = round(B/1024/1024,2)
| eval GB = round(MB/1024,2)
| rename id.orig_h as Source
| fields Source B MB GB Services
or even this simple one:
TOP 10 DESTINATIONS BY NUMBER OF CONNECTIONS
Recommended visualization: Column Chart
index=zeek sourcetype=zeek_conn
| top id.resp_h
| head 10
Any help would be greatly appreciated.
Thanks
Hi kdz,
I suppose I should update that post with the Elastic equivalents. 🙂
To the specific questions you asked, try this:
Top 10 Sources by Bytes Sent
1. In Kibana, navigate to Analyze -> Visualize Library and click on “Create visualization.”
2. Click on “Lens.”
3. In the top left, click on “Add filter” and add a filter for event.dataset: zeek.connection.
4. For the visualization type select “Table.”
5. In the right-side menu of Table options, configure “Rows” to include “Top values of source.ip” and “Top values of network.protocol.” You can adjust how many “Top” values you want to show for each of these fields. If you just want an overall top 10, configure source.ip to show 10 values and network.protocol to show 1.
6. In the same menu of Table options, configure “Metrics” to include “Sum of source.bytes.” You can change the “Value format” to “Bytes (1024)” and adjust the number of “Decimals” to your liking.
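If you’d like to double-check the numbers outside of Lens, the same rollup can be expressed as a raw Elasticsearch aggregation. A rough sketch, assuming a local, unauthenticated cluster and the default filebeat-* index pattern:
# Top 10 source IPs by total bytes sent across Zeek connection logs
curl -s 'http://localhost:9200/filebeat-*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
    "size": 0,
    "query": {"term": {"event.dataset": "zeek.connection"}},
    "aggs": {
      "top_sources": {
        "terms": {"field": "source.ip", "size": 10, "order": {"bytes_sent": "desc"}},
        "aggs": {"bytes_sent": {"sum": {"field": "source.bytes"}}}
      }
    }
  }'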
Top 10 Destinations by Number of Connections
1 – 3: Same steps as above.
4. For the visualization type select “Bar vertical.”
5. In the right-side menu of options, configure “Horizontal axis” to include “Top values of event.dataset.”
6. In the same menu of options, configure “Vertical axis” to include “Count of Records.”
7. In the same menu of options, configure “Break down by” to include “Top values of destination.ip.”
Give those a try and see if they work for you.
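Similarly, the underlying data for the column chart can be pulled with a plain terms aggregation (same assumptions as above):
# Top 10 destination IPs by number of connection records
curl -s 'http://localhost:9200/filebeat-*/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
    "size": 0,
    "query": {"term": {"event.dataset": "zeek.connection"}},
    "aggs": {"top_destinations": {"terms": {"field": "destination.ip", "size": 10}}}
  }'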