
Migration to Elasticsearch 7

· One min read

Spider has been migrated to the latest Elasticsearch version to date: 7.2. With it come many advantages, such as:

  • Confirmed RollUp API
  • Index lifecycle management
  • SQL interface
  • More and more Kibana features
  • Performance improvements

The migration was long and a bit painful: there were many breaking changes, both in scripts and in the ES Node.js library.

The most painful part: ES indices created in version 5 prevent ES7 from starting... and it is then impossible to go back to version 6 to open and migrate them, since the cluster is already partially migrated :( I had to copy the index files to a local ES6 installation and perform a remote reindex to recover the data into the cluster.

Finally, it is ready and running. And all search APIs have got a new parameter, avoidTotalHits, which avoids computing the total number of hits in the search response and makes searching a bit faster.
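For reference, here is a minimal sketch of what such a flag can translate to with the Elasticsearch 7 JavaScript client: setting track_total_hits to false skips the exact hit count computation. The wrapper function and index usage below are hypothetical; only the client call and the track_total_hits option are actual Elasticsearch 7 features.

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Hypothetical wrapper: when avoidTotalHits is set, ask Elasticsearch 7
// not to compute the exact total number of hits, which is a bit faster.
async function search(index: string, query: object, avoidTotalHits = false) {
  const { body } = await client.search({
    index,                             // e.g. a hypothetical 'spider-http' index
    track_total_hits: !avoidTotalHits, // false => no exact total hits computed
    body: { query },
  });
  return body.hits.hits;
}
```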

Redis reconnection on failure

· One min read

A recent failure in AWS storage revealed that Spider wasn't resilient to Redis failures.

  • I then upgraded all services to a better Redis reconnection pattern, with automatic re-registration of the Lua scripts. It works much better :) !
  • I also added fail-fast checks on the services in front of Whisperers that only store data in Redis: if Redis is not available, they answer straight away with a 502 error :) (both patterns are sketched below)
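As an illustration, here is a minimal sketch of such a pattern using the ioredis client (the actual client library and Lua scripts used by Spider are not specified here, so treat them as assumptions): a retry strategy reconnects with a capped backoff, Lua commands are re-registered on every ready event, and a fail-fast helper answers 502 when Redis is down.

```typescript
import Redis from 'ioredis';

// Hypothetical Lua script: the real Spider scripts are not shown here.
const STORE_LUA = `return redis.call('SET', KEYS[1], ARGV[1])`;

const redis = new Redis({
  host: 'redis',
  // Keep retrying forever, with a capped backoff (in milliseconds).
  retryStrategy: (attempt) => Math.min(attempt * 200, 2000),
});

redis.on('ready', () => {
  // Re-register Lua scripts after every (re)connection.
  redis.defineCommand('storePacket', { numberOfKeys: 1, lua: STORE_LUA });
});

// Fail-fast check for services that only store in Redis:
// answer 502 straight away when Redis is not available.
function redisAvailableOrFail(res: { status(code: number): { end(): void } }): boolean {
  if (redis.status !== 'ready') {
    res.status(502).end();
    return false;
  }
  return true;
}
```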

Spider is monitoring PRM

· One min read

To help the integration team, we installed Spider on PRM. And it works out of the box :)

The main difficulty was to run the Whisperer on an old Ubuntu 14.04. But, thanks to Docker, it runs without even noticing it :)

  • As the only Docker version available on Ubuntu 14.04 is a very old one, I had to copy the Docker image manually onto the server.
  • But thanks to very stable interfaces, this old Docker version can still run new Docker images!! So great!

Whisperers upgrade and versions

· One min read

Whisperers have got a technical upgrade:

  • Upgraded to Node 10 and the async/await programming pattern
  • Upgraded to the latest Node pcap library

To follow Whisperer migrations, I added a Whisperer version that is sent with the status to the server. The monitoring now shows this version in the Current Status grid.

Moreover, this grid has been improved to show (as sketched below):

  • In orange, the Whisperers that have not communicated for 2 minutes
  • In dark red, those that have not communicated for more than 1 day
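For illustration, a minimal sketch of the coloring rule; the field and function names are assumptions, only the 2 minutes and 1 day thresholds come from the feature itself.

```typescript
// Hypothetical status record kept for each Whisperer (field names assumed).
interface WhispererStatus {
  version: string;        // the Whisperer version, now sent with each status
  lastStatusDate: string; // ISO date of the last status received by the server
}

type StatusColor = 'normal' | 'orange' | 'darkred';

// No status for 2 minutes => orange; for more than 1 day => dark red.
function statusColor(status: WhispererStatus, now = Date.now()): StatusColor {
  const silentForMs = now - Date.parse(status.lastStatusDate);
  if (silentForMs > 24 * 60 * 60 * 1000) return 'darkred';
  if (silentForMs > 2 * 60 * 1000) return 'orange';
  return 'normal';
}
```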

Timeline component OpenSourced

· One min read

The Timeline is one of the components that took me the most brain hours, along with the map and the grid.

To let Flowbird's Streetsmart project use it, I decided to open source it. This is my first open source project... We'll see how it goes ;)

Repository

The component has been available for a couple of months, and has been successfully integrated in the Streetsmart monitoring UI. It embeds a small test application, packaged as a Docker image, that allows testing and even development with hot reloading.

Changes

Since its packaging as an open source, standalone component, the Timeline has evolved (see the Changelog) thanks to the Streetsmart UI team's ideas, and all changes are available both in Spider and in Streetsmart.

  • React 16 refactor
  • Lots of options to control the behavior
  • Left margin adapts to the legend length
  • Small icon displayed when the cursor is out of view
  • Autosliding with regular fetching when dragging the cursor left or right of the area
  • Visual improvements and bug fixes

Metricbeat integration in Cluster

· One min read

I added Elastic Metricbeat inside the Docker Swarm cluster to gather metrics and performance information for all the Docker containers in the cluster.

Nothing but Docker containers runs on the Spider cluster now.

This allowed adding graphs to the monitoring, showing containers' CPU and RAM usage.

It also allowed assessing the load of Traefik and Beats.

Saving user settings to server

· 2 min read

For a long time, user settings were saved in local storage; but if you opened a new session on another computer, you had to set everything up again the way you wanted.

Now, settings are saved on the server. Settings are split in two, the user's own settings and global settings (a rough sketch follows the lists below):

User settings

  • Grid settings
  • Timeline settings
  • Refresh delays
  • Workspace composition (opened panels and widths)
  • Selected view, mode and options
  • Sending of usage stats and errors to the server
  • ...

Global settings

  • URL templates
  • Merge patterns for replicas and clients
  • Correlation tokens
  • Stats configurations
  • Saved queries
  • ...
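To picture the split, here is a rough sketch of the two setting scopes as TypeScript shapes; every type and field name below is an assumption that loosely mirrors the lists above, not the actual Spider data model.

```typescript
// Hypothetical user-scoped settings (one document per user).
interface UserSettings {
  grid: object;                               // grid settings
  timeline: object;                           // timeline settings
  refreshDelays: { [panel: string]: number }; // refresh delays
  workspace: { openedPanels: string[]; widths: { [panel: string]: number } };
  selectedView: string;                       // selected view, mode and options
  sendUsageStatsAndErrors: boolean;
}

// Hypothetical global settings (one shared document).
interface GlobalSettings {
  urlTemplates: string[];
  mergePatterns: { replicas: string[]; clients: string[] };
  correlationTokens: string[];
  statsConfigurations: object[];
  savedQueries: { name: string; view: string; filter: string }[];
}
```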

Features

  • Settings are stored after a change to one of them, but with a delay of x seconds. As settings are often not changed alone, this reduces server calls (see the sketch after this list)
    • The user thus has to wait a bit before the settings are saved
    • The settings icon is shown with a white fill when a save is pending
    • The settings icon rotates while saving

  • The date of the last save is displayed in the settings panel

  • The list of saved settings is available in the source tab of the settings panel
  • Defaults
    • Settings saving is enabled by default on start (this cannot be changed)
    • Settings saving can be disabled in the settings panel
    • Settings saving is disabled by default when loading an external link (so as not to override the user's settings)
  • Global settings will be able to be overridden by team settings in the coming months
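Here is a minimal sketch of the debounced save described in the first bullet above; the delay value, endpoint and function names are assumptions, not the actual Spider implementation.

```typescript
// Hypothetical debounced save: changes are grouped and only sent to the
// server after a quiet period, to reduce server calls.
const SAVE_DELAY_MS = 5000; // assumed value for the "x seconds" delay

let pendingSave: ReturnType<typeof setTimeout> | undefined;

function onSettingsChanged(settings: object): void {
  showIconPending(); // white fill: a save is waiting
  if (pendingSave) clearTimeout(pendingSave);
  pendingSave = setTimeout(async () => {
    showIconSaving(); // rotating icon while saving
    await fetch('/api/settings/user', { // hypothetical endpoint
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(settings),
    });
    showIconIdle();
  }, SAVE_DELAY_MS);
}

// Hypothetical UI hooks, left empty in this sketch.
function showIconPending(): void {}
function showIconSaving(): void {}
function showIconIdle(): void {}
```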

Cloning an existing Whisperer configuration

· One min read

Very neat to avoid mistakes and to save time when setting up a new Whisperer:

  • Now, when editing a Whisperer configuration, you may clone the configuration values of an existing Whisperer to speed up your edition.
    • The edition is still in progress: you may accept or cancel the changes.
  • This is possible for all 3 current configuration tabs:
    • Capture config
    • Parsing config
    • Share
  • The list of Whisperers to clone from is taken from the filtered list in the Whisperers selection input of the menu. Thus, you can use Elasticsearch's power to filter the ones you want to clone from.
    • Easy and neat. You just have to know it ;)

 

I don't know why it took me so long to start implementing this; I overuse it already :) So neat !!
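For illustration, the cloning boils down to copying the three configuration sections of the source Whisperer into the edition in progress; the shapes and names below are hypothetical.

```typescript
// Hypothetical configuration shape: the three tabs that can be cloned.
interface WhispererConfig {
  capture: object; // Capture config tab
  parsing: object; // Parsing config tab
  share: object;   // Share tab
}

// Copy the configuration values of another Whisperer into the edition in
// progress; the user may still accept or cancel the pending changes.
function cloneConfig(
  edited: WhispererConfig,
  source: WhispererConfig,
  tabs: (keyof WhispererConfig)[] = ['capture', 'parsing', 'share'],
): WhispererConfig {
  const clone = { ...edited };
  for (const tab of tabs) {
    clone[tab] = JSON.parse(JSON.stringify(source[tab]));
  }
  return clone;
}
```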

Saved queries

· One min read

It is now possible to save existing queries and give them a name, to reload them afterwards.

Note: The query only includes the generated filter in the search input, no information is saved on the selected time, the current view or the filters input.

Saving a query

  • To save a query, click on the save icon

  • An input is shown to give it a name
    • If the query has already been saved (and is not modified), the name of the current query is shown
    • Changing the name will update / replace the saved query name

  • You may give the same name to different queries. It is up to you to remove old ones.
  • The saved queries are linked to the selected View (HTTP, TCP...)

Loading a saved query

  • Once saved, a query can be reloaded by clicking the load icon

  • The list of saved queries shows up, sorted alphabetically (as sketched below)

  • The search input is filled with the selected query and the search is executed
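As a sketch, a saved query can be pictured as a small record keyed by the selected view; the names below are assumptions, only the behavior (name, view link, alphabetical sorting, filter-only content) comes from the description above.

```typescript
// Hypothetical shape of a saved query: only the generated filter of the
// search input is stored, not the time range, view state or filters input.
interface SavedQuery {
  name: string;   // user-given name; duplicates are allowed
  view: string;   // the view it is linked to: 'HTTP', 'TCP', ...
  filter: string; // the generated filter from the search input
}

// List the saved queries of the current view, sorted alphabetically,
// as shown when clicking the load icon.
function listSavedQueries(all: SavedQuery[], view: string): SavedQuery[] {
  return all
    .filter((q) => q.view === view)
    .sort((a, b) => a.name.localeCompare(b.name));
}
```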

Select filter improvement

· One min read

The filters selection dropdown has been improved.

  • Previously, the select applied the current query to show you the available options.
    • If you had previously selected a value, no other value could be displayed, as the query was filtering them out.
  • Now, the select gathers the available options by running the current query, but without applying the current filter's own selection, thus allowing you to add more values to the selected options of this filter (as sketched below).

This is much more useful than showing a blank select dropdown ;-)
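For reference, here is a hedged sketch of that pattern as an Elasticsearch query: the options of a filter are gathered by applying the current query and all the other filters, but not the selection of the filter being populated. The field and function names are assumptions; only the general technique is implied by the description above.

```typescript
// Hypothetical query state: the global query plus per-field filter selections.
interface QueryState {
  baseQuery: object;                      // the current search query
  filters: { [field: string]: string[] }; // selected values, per filter field
}

// Build the Elasticsearch body used to gather the options of one filter.
// Every other filter still applies, but the filter's own selection does not,
// so already-selected values no longer hide the remaining options.
function optionsQuery(state: QueryState, field: string): object {
  const otherFilters = Object.entries(state.filters)
    .filter(([f, values]) => f !== field && values.length > 0)
    .map(([f, values]) => ({ terms: { [f]: values } }));

  return {
    size: 0,
    query: { bool: { must: [state.baseQuery], filter: otherFilters } },
    aggs: { options: { terms: { field, size: 50 } } },
  };
}
```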