Logstash is a data collection engine, part of the Elastic Stack alongside Elasticsearch and Kibana. Originally intended for log ingestion, it is now widely used as a general-purpose data ingestion pipeline: it can aggregate, filter and supplement log data before forwarding the logs to Elasticsearch for storage. It's written in Ruby and runs on JRuby, and plugins can be written in either Ruby or Java.


  • Events are the individual records (typically log entries) that Logstash processes.
  • inputs get data into Logstash, e.g.:
    • file
    • syslog
    • redis
    • beats receives events sent by Beats, lightweight shippers that collect metrics or events.
  • codecs alter the structural representation of events, operating as stream filters on either inputs or outputs, e.g.:
    • json encodes or decodes events as JSON.
  • filters extract data, or mutate or drop events based on conditions, e.g.:
    • grok parses unstructured text into structured data using predefined patterns.
    • mutate renames, removes, replaces and modifies field values.
    • drop discards events.
  • outputs send events to their final destination, terminating the Logstash pipeline.
    • elasticsearch sends events to Elasticsearch, the most common destination.
    • file writes events to an on-disk file.
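As a sketch, a grok filter using the built-in COMBINEDAPACHELOG pattern could parse Apache access-log lines held in the message field into structured fields such as clientip, verb, request and response (the sample assumes events arrive with a raw log line in message):

```
filter {
  grok {
    # Match the raw log line against a predefined pattern;
    # on success, the extracted fields are added to the event.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

Events that fail to match are tagged with _grokparsefailure rather than dropped, so downstream conditionals can handle them.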


Logstash plugins extend the server. They're distributed as RubyGems and are managed using the logstash-plugin CLI.
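For illustration, typical logstash-plugin invocations look like the following (the kafka output is a real plugin, chosen here as an arbitrary example; paths vary by installation):

```shell
# List installed plugins
bin/logstash-plugin list

# Install a plugin from RubyGems
bin/logstash-plugin install logstash-output-kafka

# Update all installed plugins
bin/logstash-plugin update
```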


A Logstash pipeline is configured as a set of input and output plugins, with optional codecs and filters. Configuration files are usually written to /etc/logstash/conf.d/*.conf.
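A minimal end-to-end pipeline config might look like this (the port and Elasticsearch host are assumptions for the sketch, not defaults you must use):

```
input {
  beats {
    # Listen for events shipped by Beats agents
    port => 5044
  }
}

filter {
  mutate {
    # Normalise a field name and drop one we don't need
    rename       => { "src_ip" => "client_ip" }
    remove_field => ["tmp_field"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

Logstash concatenates all files matching the glob into one pipeline, so inputs, filters and outputs can be split across several .conf files.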

  1. Grok