1. LogDrill indexes all log data, enabling fast and easy access to any log record during later phases of the analysis. LogDrill then normalizes logs from different sources (see the section on normalization below).
  2. Text parsing is applied to the indexed log lines at very high speed (around 130,000 EPS per core). The parsed data is collected in a “count cube” object, with all irrelevant information omitted from the original log lines.
  3. Queries are run on the count cube using a cube query language with an MDX-like syntax. Queries execute interactively, in under 1 second for 10 million records, and return multidimensional datasets depending on the query content.
  4. Query results can be conveniently visualized in reports, dashboards, charts and graphs. Once analysts define the criteria of normal operation, LogDrill raises an immediate alert on any unusual activity or deviation from the norm. LogDrill does not have to be monitored constantly: it can run in the background and will automatically identify threats.
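The count-cube idea can be illustrated with a minimal sketch in Python. The actual cube structure and query language are proprietary; the field names and the aggregation below are hypothetical:

```python
from collections import Counter

# Hypothetical parsed log records: only the dimension values are kept,
# everything irrelevant from the raw log line is already dropped.
records = [
    {"host": "web1", "status": "200"},
    {"host": "web1", "status": "500"},
    {"host": "web2", "status": "200"},
    {"host": "web1", "status": "200"},
]

# A "count cube" in miniature: counts indexed by a tuple of dimension values.
cube = Counter((r["host"], r["status"]) for r in records)

# A query then becomes a cheap aggregation over the cube,
# not a rescan of the raw logs.
errors_per_host = Counter()
for (host, status), n in cube.items():
    if status.startswith("5"):
        errors_per_host[host] += n

print(errors_per_host)  # Counter({'web1': 1})
```

Because the cube stores only counts per dimension combination, its size depends on the number of distinct value combinations, not on the number of raw log lines.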

Following the initial configuration of LogDrill, new cubes can be built and queries run in only a few clicks, after which you can immediately start your log analysis. LogDrill works as a mediator between huge datasets and the experts who analyse them. Because it is quicker than other log analysis tools, it allows dialogue-like queries that can be easily fine-tuned. Queries can be tested through a trial-and-error approach that gives analysts the freedom to experiment, making the work process faster and more creative than ever.






A significant proportion of the data generated today is poorly formed, machine-generated log data. Even the IT system of a medium-sized company can generate several GB of such data per day. This is where normalization plays its role: it converts data into a structured form (e.g. a database table), enabling experts to analyse data from different sources and gain deep insight into the whole system.
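As a minimal illustration of what normalization means here, the following Python sketch turns an unstructured log line into a structured record. The log format and field names are hypothetical, not LogDrill's actual rules:

```python
import re

# A hypothetical unstructured log line.
line = '2024-05-01 12:00:03 web1 sshd[412]: Failed password for root from 10.0.0.5'

# Normalization extracts named fields so the line becomes a structured record.
pattern = re.compile(
    r'(?P<date>\S+) (?P<time>\S+) (?P<host>\S+) '
    r'(?P<proc>\w+)\[(?P<pid>\d+)\]: (?P<message>.*)'
)
record = pattern.match(line).groupdict()
# record now maps cleanly onto database columns: date, time, host, proc, ...
print(record['host'], record['pid'])
```

Once every source's lines are reduced to the same set of named fields, logs from different systems can be stored in one table and queried together.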

Currently, Java regular expressions are the widespread solution on the market, but this approach has many limitations. The execution time of Java's backtracking regular expression engine can be arbitrarily long (depending on the pattern and the input), introducing a serious bottleneck in the data processing workflow. If you have to process hundreds of GB of data, this can easily take days, which is unacceptable in most situations.
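Python's `re` engine backtracks in the same way as Java's, so the problem can be demonstrated in a few lines. The pattern below is a classic pathological case (nested quantifiers), not one taken from any real log format:

```python
import re

# Nested quantifiers: when the overall match fails, the engine tries
# every way of splitting the run of 'a's between the two quantifiers.
pattern = re.compile(r'(a+)+b')

# 20 'a's followed by a character that makes the match fail.
text = 'a' * 20 + 'c'

result = pattern.search(text)
# result is None, but the engine explored on the order of 2**20 split
# combinations before giving up; each extra 'a' roughly doubles the work.
print(result)  # None
```

A pattern like this is harmless on short inputs but can stall a pipeline for hours on long ones, which is exactly the unpredictability the text describes.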

Recognizing standard formats is a widely supported feature, but according to our experiments it rarely goes that smoothly: formats have parameters, variations and different versions. Easy-to-use configuration control over these is essential for reliable processing. Check out the exciting features of our Log Normalizer.



We found backtracking engines unable to guarantee any execution speed, so they should not be the basis of a product or service that aims at strict SLA requirements. Our solution was shaped by solid scientific results: it uses a proprietary deterministic finite automata (DFA) approach that provides a fixed execution speed in the multi-hundred-MB/sec/core range.
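The key property of a DFA is that each input character costs exactly one table lookup, so the running time is linear in the input length regardless of the pattern. A toy table-driven DFA in Python, recognizing the language `a+b` (the states and alphabet are invented for illustration; LogDrill's engine is proprietary):

```python
# transitions[state][char] -> next state; a missing entry means "reject".
transitions = {
    0: {'a': 1},          # start: must see at least one 'a'
    1: {'a': 1, 'b': 2},  # reading 'a's; a 'b' moves to the accepting state
    2: {},                # accepting state; any further input rejects
}
ACCEPTING = {2}

def dfa_match(text):
    state = 0
    for ch in text:                        # exactly one lookup per character
        state = transitions[state].get(ch)
        if state is None:
            return False
    return state in ACCEPTING

print(dfa_match('aaab'))          # True
print(dfa_match('a' * 20 + 'c'))  # False, decided in exactly 21 steps
```

Note the contrast with a backtracking engine: a failing input of length n is rejected after at most n lookups, never after an exponential search.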



Our Log Normalizer engine comes in different packages on widely used Linux and Windows platforms:

  1. Command line interface for integration at process level.
  2. Flume connector to integrate into distributed log and data processing flows.
  3. Hadoop connector for fast parallel batch processing on HDFS files.

In each case the normalizer takes log data as input and produces CSV output, which can be further processed or inserted into a database.
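As a sketch of the log-in, CSV-out step in Python (the log format and field names are hypothetical, not the normalizer's actual rules):

```python
import csv
import io
import re

# Hypothetical field extraction rule.
pattern = re.compile(r'(?P<date>\S+) (?P<host>\S+) (?P<status>\d+)')

lines = [
    '2024-05-01 web1 200',
    '2024-05-01 web2 500',
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=['date', 'host', 'status'])
writer.writeheader()
for line in lines:
    m = pattern.match(line)
    if m:                       # unparseable lines would be skipped or logged
        writer.writerow(m.groupdict())

print(out.getvalue())
```

The resulting CSV has a fixed header and one row per log line, which is exactly the shape a database bulk-load step expects.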

Use our Log Normalizer to load your log data into your data warehouse in a structured way and analyze it in depth; this way you can connect Operational Intelligence with Business Intelligence.



Our Log Normalizer can work well above the disk I/O limit (in the multi-hundred-MB/sec/core range).



Our Log Normalizer comes with a Hadoop packaging that enables parallel processing of files on HDFS. In terms of time, this means you can normalize terabytes of data within minutes without tying up your scarce resources.
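Hadoop itself handles the distribution across the cluster; the underlying idea is simply that each file split can be normalized independently. As a local analogy only, the same map-style parallelism can be sketched with Python's multiprocessing (chunks and field names are invented):

```python
from multiprocessing import Pool
import re

# Hypothetical extraction rule applied identically to every chunk.
PATTERN = re.compile(r'(?P<host>\S+) (?P<status>\d+)')

def normalize_chunk(lines):
    """Normalize one chunk of log lines independently of all others."""
    out = []
    for line in lines:
        m = PATTERN.match(line)
        if m:
            out.append(m.groupdict())
    return out

if __name__ == '__main__':
    # Stand-in for HDFS splits: each chunk can run on a separate core.
    chunks = [
        ['web1 200', 'web1 500'],
        ['web2 200', 'not a log line'],
    ]
    with Pool() as pool:
        results = pool.map(normalize_chunk, chunks)
    records = [r for chunk in results for r in chunk]
    print(len(records))  # 3
```

Because chunks share no state, throughput scales with the number of workers, which is what makes the Hadoop deployment attractive for terabyte-scale batches.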


Easy configuration

Complex regular expressions tend to be write-only: you write them once and rewrite them every time you need to change something. Instead of this cumbersome approach we designed a user-friendly modular language for specifying log normalization rules. Key features of this language:

  1. Modular. Rules can be grouped into files and included in other files.
  2. Structured. Complex expressions can be split into multiple rules, resulting in an easily readable and editable structure.
  3. Line and block comments.
  4. Full regexp support if one prefers to use them.
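The rule language itself is proprietary, but the effect of splitting one opaque expression into named, reusable, commented rules can be approximated in Python with composed sub-patterns (all rule names below are invented):

```python
import re

# Each "rule" is a named, reusable fragment instead of one opaque expression.
RULES = {
    'ip':   r'\d{{1,3}}(?:\.\d{{1,3}}){{3}}',
    'time': r'\d{{2}}:\d{{2}}:\d{{2}}',
}
# Doubled braces survive .format() below as literal regex quantifiers.
RULES = {name: frag.format() for name, frag in RULES.items()}

# A higher-level rule composed from the fragments, kept readable with
# re.VERBOSE comments -- analogous to splitting one rule into several.
line_rule = re.compile(
    r'''
    (?P<time>   {time})  \s+   # timestamp rule, reused unchanged
    (?P<client> {ip})    \s+   # client address rule
    (?P<action> \w+)           # free-form action word
    '''.format(**RULES),
    re.VERBOSE,
)

m = line_rule.match('12:00:03 10.0.0.5 LOGIN')
print(m.groupdict())
```

Changing how an IP address is matched then means editing one fragment, not hunting through every expression that embeds it, which is the maintainability point the list above makes.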

