
Timescale on sequence diagrams

· One min read

Sequence diagrams are useful for understanding request fan-out, and for seeing where time is spent and where to optimize the process.

But they had one flaw: requests and responses were drawn sequentially, so you could not easily see where time was lost.

So I made an improvement: you can now choose between different time scales to emphasize the gaps between calls:

  • Linear time scale: the visual gap increases linearly with the time difference between calls
  • Logarithmic time scale: differences between short calls are emphasized
  • Squared time scale: long delays are emphasized
  • Sequential time scale: as before
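The four scales boil down to mapping the time delta between two calls to a visual gap. Here is a minimal sketch of what such a mapping could look like; the constants and function names are illustrative, not Spider's actual implementation:

```javascript
// Map the time delta between two consecutive calls (in ms) to a visual
// gap (in pixels). Illustrative constants, not Spider's real values.
const MIN_GAP = 4;      // baseline gap so consecutive calls never touch
const PX_PER_MS = 0.1;  // linear scale factor

const timeScales = {
  sequential: () => MIN_GAP,                           // as before: constant spacing
  linear: (dt) => MIN_GAP + dt * PX_PER_MS,            // gap grows linearly with dt
  logarithmic: (dt) => MIN_GAP + Math.log1p(dt) * 10,  // spreads apart short deltas
  squared: (dt) => MIN_GAP + (dt * PX_PER_MS) ** 2,    // makes long delays stand out
};

function gapFor(scale, deltaMs) {
  return timeScales[scale](deltaMs);
}
```

With these numbers, a 100 ms delta yields a 14 px gap on the linear scale but a 104 px gap on the squared one, which is exactly why long delays jump out.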

Example:

Architecture upgrade: splitting a monolith

· 2 min read

One service in the Spider back-end had been growing too much. It included:

  • Whisperer configurations
  • User rights on the Whisperer
  • Whisperer current status
  • Whisperer status history
  • Whisperer host resolution

The last two were in different indices, but the first three 'data aggregates' were inside the same resource/document.

This resulted in a service that was complex to update, in conflicts in optimistic concurrency management, and in slow response times due to the size of the resources.

It needed splitting.

I first tried to split it logically from the resource perspective, extracting the configuration since it is the most stable data... But this was a bad idea: splitting configuration and rights greatly complicated access to the resources from the UI and from the other services that needed the information!

So I figured out I had to split the monolith from the client perspective.

As a result, I extracted from the original module:

  • An operating service to process status input and store both the status history and the current status
  • An operating service to process host input and store it
  • A configuration service to manage configuration and rights

This was much better. But things were still slow, because all these modules were reading from and writing to ES directly. So I switched to saving in Redis and configured pollers to serialize the data to ES. Everything needed to do this easily was already available from the saving processes of Packets, Sessions and Http communications. I also added a pure cache for Whisperer config resources:

  • On save, write to both Redis and ES
  • On read, read from Redis; on a miss, read from ES and save the result in Redis
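The config cache above is a classic cache-aside with write-through. Here is a minimal sketch, with plain Maps standing in for Redis and ES (the real services use async clients; the names and shapes are illustrative, not Spider's actual code):

```javascript
// Cache-aside sketch: Maps stand in for the Redis cache and the ES store.
const redis = new Map(); // fast cache
const es = new Map();    // durable store

async function saveConfig(id, config) {
  redis.set(id, config); // on save: write to both Redis...
  es.set(id, config);    // ...and ES
}

async function readConfig(id) {
  if (redis.has(id)) return redis.get(id);         // cache hit: served from Redis
  const config = es.get(id);                       // cache miss: fall back to ES...
  if (config !== undefined) redis.set(id, config); // ...and repopulate the cache
  return config;
}
```

For Status and Hosts, the pattern is different: writes go to Redis only, and pollers serialize the data to ES in the background, which is what took the client-facing latency off the critical path.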

All in all, requests from Whisperer clients to save Status or Hosts went from 200 ms+ to... 50 and 15 ms ;-) Yeah!!

Capture improvement

· One min read

New options have been added to capture settings:

  • Wait for resolving:
    • Don't capture packets to/from a host until its name has been resolved by the DNS.
    • This allows ignoring ALL packets from hosts on the 'Hosts to ignore' list, and avoids, for instance, spikes in capture when the first calls to a UI are made.
  • Track unresolved IPs
    • Capture (or not) packets from hosts whose names could not be resolved by the DNS.
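The two options combine into a per-packet capture decision. Here is one way it could look; the field and option names are illustrative, not the Whisperer's actual settings schema:

```javascript
// Sketch of the capture decision for one packet, combining the two
// options above. Illustrative names, not the real capture settings.
function shouldCapture(host, opts) {
  // host.resolvedName: the DNS name, or null while/if unresolved
  const resolved = host.resolvedName !== null;
  if (opts.waitForResolving && !resolved) return false;  // hold until DNS answers
  if (resolved && opts.hostsToIgnore.includes(host.resolvedName)) return false;
  if (!resolved && !opts.trackUnresolvedIp) return false; // drop unresolved hosts
  return true;
}
```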

New Circuit Breakers on Whisperer

· One min read

Whisperers have long been tracking their CPU and RAM usage. Now they check these metrics against limits, and a Whisperer can be configured to stop capturing when usage is above a defined threshold.

This limits the impact of Whisperers on the hosts they capture from during traffic spikes. Of course, you lose monitoring data... but you allow your system to cope with the surge in traffic.

By default, Whisperers check their CPU and RAM usage every 20 s. Once opened, the circuit breaker stops capture for the next 20 s, then checks again.
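The mechanics reduce to a small state machine fed by the periodic metric check. A minimal sketch, with the metrics injected rather than sampled on a timer, and with names that are illustrative rather than the Whisperer's real code:

```javascript
// Sketch of the capture circuit breaker: every check interval (20 s by
// default) the current usage is compared to the configured thresholds.
function makeCaptureBreaker({ cpuLimit, ramLimit }) {
  let open = false; // open breaker = capture stopped
  return {
    // called on each check tick with the latest usage ratios (0..1)
    check({ cpu, ram }) {
      open = cpu > cpuLimit || ram > ramLimit;
      return open;
    },
    capturing() {
      return !open;
    },
  };
}
```

Because the breaker re-evaluates on every tick, capture resumes automatically on the first check where usage is back under the thresholds.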

The circuit breakers are configured in Capture settings:

Select all on Grid

· One min read

A small improvement, but one that can save minutes :)

As Remi asked for it, it is now possible to 'select all' records in the loaded grid. It works like this:

  • It toggles all checkboxes of the grid
  • If a record was selected, it is not any more
  • If a record was not selected, it is selected now

This allows those use cases:

  • Select all but these two (invert selection)
  • Select all
  • Unselect all

Attention: this does not affect selected records that are not in the current grid:

  • If you had 5 records selected, then changed the time window and clicked 'select all' to select the 20 records displayed... you'll end up with 25 records in the selection.
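The behavior is a set inversion restricted to the rows currently loaded in the grid, which is what makes all three use cases fall out of one operation. A sketch with illustrative names:

```javascript
// 'Select all' as a set inversion over the rows currently in the grid.
// Records selected earlier but no longer displayed are left untouched.
function toggleAll(selection, gridRowIds) {
  const next = new Set(selection);
  for (const id of gridRowIds) {
    if (next.has(id)) next.delete(id); // was selected: unselect
    else next.add(id);                 // was not: select
  }
  return next;
}
```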

Note that the selection is limited to 100 records, to limit the size (and time) of the export.

Upgrade to ES 6.4.1

· One min read

Spider has been upgraded to Elasticsearch 6.4.1!!

It now benefits from APM access, SQL queries and so on :-) I'll tell you more later.

Import / Export Http Communications

· 2 min read

On request from Remi L., I increased the priority of this feature last week, and it is released today :-)

It is now possible to:

Export Http communications

  • Export a selection of Http communications, including:

Beware though: if the request or response body was encoded (gzipped or chunked for instance), it is still encoded, as transmitted on the wire.

Import them back

  • Import this export back into another Whisperer, of UPLOAD type.
    • It is like magic: you get back your saved communications and can analyze them in peace.
    • You may import many exports at once by selecting many files or by dragging & dropping them onto the upload icon.
    • Beware though: if you import data from different environments and the same time window into the same Whisperer... you may have some IP clashes, and some strange results ;)

The first identified use cases

  • Being able to save a selection of requests performed by an integrated client, to check for regressions later on.
  • Being able to compare different clients' integrations.
  • Being able to export client integrations from production to your own environment, to create automated tests from them.

This feature, combined with the previously released 'Diff' feature, adds even more power to Spider as a killer tool for integration :-)

Interested?

Anybody can export. However, you need your own Whisperer of UPLOAD type to be able to import back.

  • I created one for Remi for tests; ask me for one if you want.
  • For now, I'd rather not give everybody the right to create Whisperers.
  • Once created, you'll have all their configuration options, and will be able to share them with others (your team).
    • But only the owner of the Whisperer can upload to it.

Cheers, Thibaut

Merging clients 'replicas' on map

· One min read

Network map got a small improvement:

  • Now, clients with similar (or identical) identification are merged on the map, as is already done for server replicas.
    • This reduces the amount of 'noise' on the map, and shows a client connecting from several IPs (many stations, many devices, or one moving device) as one single client.

This behavior is active when the Merge option is enabled.

NB: This feature is not present on the sequence diagram.

Excel export of statistics

· 2 min read

After the second and third rounds of processing production statistics, we can say that, although the statistics are very easy to generate, working from screenshot exports is slow and painful!

So I shifted priorities, and invested some time in Excel export :)

And now, all statistics information can be exported to an Excel spreadsheet in one click. The export includes:

  • Stats metadata
  • Main statistics, with the Spider-generated graphics included as is
  • Source data of the previous graphics (heatmap, distribution, evolution)
  • And the most interesting: a pivot table with the grouped statistics
    • With the same color scale as on the website

Thanks to https://github.com/guyonroche/exceljs for the very nice library!

Examples


Response time statistics of GET parks/areas

Excel export:  Spider stats - Duration


Response time statistics on SIT1 by service, url, verb and status (truncated)

Excel export:  Spider stats - Duration, group by Server (merged), URL templates, Verb, Status

Enjoy! Thibaut