
Consent validation

· One min read

I've just added Consent validation of the Privacy terms.

This complies with the GDPR requirement to inform users of which private data is collected and how it is processed.

  • Consent is mandatory to use Spider
  • User consent is saved on the server and requested again when the terms change

The date of consent and the terms may be accessed later on the new Help page (see next post).

Grid link UX improvement

· One min read

When building training support, I found that the automatic filter applied when clicking the link icon in the grid was not using smartfilters.

I changed that quickly :) Now, from a /controlRights item in the grid to the fan-out display in the sequence diagram, you're only one click away!

New Help details

· One min read

Instead of only redirecting to https://spider-analyzer.io, the Help page now provides more information.

  • The classic About terms.
  • The Changelog - which moved here from its own details panel.
  • The list of Free and Open Source tools and libraries used, with their licences.
    • It takes a bit of time to... render ;)

The content is driven by a public JSON-LD manifest file, visible in the Manifest tab.
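
For illustration, here is a minimal sketch of what such a manifest could look like; the schema, field names, and values are assumptions for the example, not Spider's actual format.

```ts
// Hypothetical JSON-LD manifest shape, expressed as a TypeScript literal.
const manifest = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "Spider",
  url: "https://spider-analyzer.io",
  // One entry per Free and Open Source dependency, with its licence
  softwareRequirements: [
    { "@type": "SoftwareApplication", name: "some-library", license: "MIT" },
  ],
};
```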

New alert probe

· One min read

I just added an alert probe that alerts the administrator when the parsing delay gets over a threshold (default: 30s).

This complements the work done on parsing delay monitoring.
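
For a rough idea of the logic, here is a minimal sketch of such a probe; the helper functions and the polling period are assumptions, not Spider's actual implementation.

```ts
// Minimal sketch of a parsing-delay alert probe (all names are hypothetical).
const PARSING_DELAY_THRESHOLD_MS = 30_000; // default threshold: 30s

async function getMaxParsingDelayMs(): Promise<number> {
  return 0; // hypothetical: would read the current delay from the monitoring metrics
}

async function notifyAdministrator(message: string): Promise<void> {
  console.warn(`[ALERT] ${message}`); // stand-in for a real alert channel
}

async function checkParsingDelay(): Promise<void> {
  const delayMs = await getMaxParsingDelayMs();
  if (delayMs > PARSING_DELAY_THRESHOLD_MS) {
    await notifyAdministrator(
      `Parsing delay ${delayMs / 1000}s is over the ${PARSING_DELAY_THRESHOLD_MS / 1000}s threshold`,
    );
  }
}

// Poll periodically; the 10s period is an arbitrary example
setInterval(() => void checkParsingDelay(), 10_000);
```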


I'm now studying the possibility of adding the alert status to the monitoring UI, and the parsing delay to the monitor-write tooltip. ... for later!

Improved free time selection

· One min read

Playing with Spider during non-regression testing with very old pcap capture files, I kept fighting with the free time selection inputs on the right of the timeline.

It was difficult to move back to 2018 or so!

I figured out that validation and change acceptance needed to be done on both inputs together. So I redesigned the UX there, and it is much better now, IMO :)

Tell me what you think!

  • You may validate a change to a single input by pressing Enter (when there is no error)

  • You may validate a change to both inputs at once with the validation button

    • This allows moving far and fast in time by changing both inputs and validating only when finished.
  • When there is an error, the error text shows up with the possibility to cancel the change (see the sketch below).
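
A minimal sketch of the paired validation logic, with hypothetical names and types; this illustrates the idea, not Spider's actual code.

```ts
// Both inputs are always validated together: a change to one input is only
// accepted if it still forms a valid range with the other one.
interface TimeRange { from: Date; to: Date }

function validateRange(fromText: string, toText: string): TimeRange | string {
  const from = new Date(fromText);
  const to = new Date(toText);
  if (isNaN(from.getTime()) || isNaN(to.getTime())) return "Invalid date";
  if (from.getTime() >= to.getTime()) return "'From' must be before 'To'";
  return { from, to };
}

// Called on Enter in one input (paired with the other input's current text),
// or by the validation button (with both pending texts at once).
function onValidate(
  fromText: string,
  toText: string,
  apply: (range: TimeRange) => void,
  showError: (message: string) => void, // shows the error text, with a cancel option
): void {
  const result = validateRange(fromText, toText);
  if (typeof result === "string") {
    showError(result);
  } else {
    apply(result);
  }
}
```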

How does Spider cope with a 2x load for 15 min?

· 3 min read

Today, checking the monitoring at the end of the day, I found a spike of 'parsing errors' in the morning. The monitoring helped me find out why. Take the path with me:

1 - Looking at the logs dashboard 

We can see a spike in logs - nearly 6000!! - around 10:13. The aggregation by code shows us very easily that there have been parsing issues, and opening the log detail shows it was because of missing packets.

Let's find the root cause.

2 - Looking at the parsing dashboard

We can see an increase of Tcp sessions waiting to be parsed in the queue, and the parsing duration and delay increasing.

Many HTTP coms were still created, so there were no real errors, only an increase in demand.

There is a small red part in the Parsing status histogram, with 5603 sessions in error out of 56000.

3 - Further on, in the services dashboard

There is definitely an increase in input load, and an even bigger increase in created Http Coms. The input load almost doubled in size!

CPU is still good, with a clear increase for the parsing service.

4 - Looking at DB status

Redis doubled its load, with a high increase in RAM, but it came back to normal straight after :) Works like a charm!

Response time and content of Redis increased significantly, but nothing worrying. The spike was absorbed.

Elasticsearch shows a clear increase in new communications indexing.

5 - Then the whisperers dashboard gives us the answer

In fact, all was normal: it was only the performance team (SPT1 whisperer) that decided to capture one of their tests :-)


Those are good observability capabilities, don't you think? All in all, everything went well.

  • The spike was absorbed for almost 15 minutes,
  • But the parsing replicas were not enough to cope with the input load, and the parsing delay increased steadily
  • So much that Redis started removing data before it got parsed (when the parsing delay reached 45s, the TTL of packets; see the sketch after this list)
    • Watch again the second set of diagrams to check this.
  • Then the parsers started complaining about missing packets when parsing the Tcp sessions. The system was in 'safety' mode, avoiding a crash and shedding the extra load.
  • All went back to normal after SPT1 stopped testing.
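
To illustrate the mechanics at play, here is a minimal sketch; the key naming and client usage are assumptions for the example, not Spider's actual code.

```ts
import Redis from "ioredis";

const redis = new Redis();
const PACKET_TTL_SECONDS = 45; // the packets TTL mentioned above

// Hypothetical storage scheme: each captured packet lives in Redis with a
// 45s TTL. When the parsing delay grows past that TTL, packets expire before
// the parsers read them, causing the 'missing packets' errors seen above.
async function storePacket(packetId: string, payload: Buffer): Promise<void> {
  await redis.setex(`packet:${packetId}`, PACKET_TTL_SECONDS, payload);
}
```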

The system works well :) Yeah! Thank you for the improvised test, performance team!

We may also deduce from this event that the parsing service replicas could safely be increased to absorb such a spike, as the CPU usage still offered room for it. Auto scaling would be the best option in this case.

Cheers, Thibaut

Enhanced monitoring for parsing status

· 2 min read

When playing with chaos testing, I noticed that I had no metric telling me whether the parsing speed was fine or close to the limit. I knew when parsing was failing, but not whether it was about to fail.

I then designed and added new metrics for parsing speed:

  • Delay before parsing
  • Duration of parsing
  • Speed of parsing

The first KPI indicates whether the parsing 'power' is sufficient, as it must stay between 10s (the configured delay before parsing) and 45s (the TTL of packets in Redis).

The other KPIs indicate the speed of the parsers under the current load and will allow comparing performance improvements.
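
As an illustration, the three KPIs could be derived from per-session timestamps roughly as below; the field names are assumptions, not Spider's data model.

```ts
// Rough sketch of deriving the parsing KPIs from session timestamps.
interface ParsedSession {
  capturedAt: number;       // when the Tcp session was captured (ms epoch)
  parsingStartedAt: number; // when a parser picked it up
  parsingEndedAt: number;   // when parsing finished
  bytes: number;            // size of the parsed data
}

function parsingKpis(s: ParsedSession) {
  const delayMs = s.parsingStartedAt - s.capturedAt;        // must stay between ~10s and 45s
  const durationMs = s.parsingEndedAt - s.parsingStartedAt; // duration of parsing
  const speedBytesPerSec = s.bytes / (durationMs / 1000);   // speed of parsing
  return { delayMs, durationMs, speedBytesPerSec };
}
```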

In the main dashboard

As a new parsing page

I regrouped the previous parsing KPIs together:

  • Tcp to parse in queue - to check it is not increasing
  • Tcp parsing status - to check quality of parsing
  • Maximum parsing delay - to check it stays way below 45s
  • Parsing duration of a polled page of Tcp sessions (max 20) - to check speed
  • Amount of communications created from the parsing - to check we indeed created something :)

All in all...  1 day of work :)

Avoiding duplicates

· One min read

When capturing both sides of the same communication - for instance, when capturing from both the gateway and the service itself - Spider captures the same communication twice, with slightly different dates.

It is now possible to ask Spider to avoid duplicates.

Avoiding duplicated communications

With this option, Spider will generate the same id for the object on both sides of the communication, and only one will then be saved (and parsed).
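
One way to achieve this, sketched below with hypothetical names, is to derive the id from a normalized connection tuple so that both capture points compute the same value; Spider's actual scheme may differ.

```ts
import { createHash } from "node:crypto";

// Hash a normalized connection tuple so that both capture sides produce the
// identical id. A real scheme would also need something to distinguish
// successive connections on the same tuple (e.g. TCP sequence numbers).
function communicationId(
  ipA: string, portA: number,
  ipB: string, portB: number,
): string {
  // Sort the endpoints so the capture direction does not matter.
  const endpoints = [`${ipA}:${portA}`, `${ipB}:${portB}`].sort();
  return createHash("sha1").update(endpoints.join("|")).digest("hex");
}
```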

For this, select 'Avoid duplicated communications' on the Capture Config tab.

Then only one Tcp session will be created, and thus only one instance of each Http Communication.

Avoiding duplicated packets

You may also choose to avoid duplicated packets, in the advanced options of the Packets saving part of the Parsing Config tab. The option is visible only when saving packets.

Note that this asks more resources of the system, and should only be considered when doing statistics at the packet level (not often).

Changelog since May 2021

· One min read

It's been a while since I last wrote here.

Spider is progressing, but I spent much of my Spider time doing administrative and legal stuff. Its official public release is approaching :)

I nevertheless did some stuff:

  • Upgraded all services and UIs to Node 16 in August and September, with an upgrade of all libraries
  • Improved the UI so that it checks for a new version every time it receives focus, with an integrated changelog of UI versions displayed in the details panel by rendering the service's CHANGELOG.md file. You might have seen it already (a sketch of the focus check follows this list).
  • Improved teams configuration to allow copying a team's settings to the user's, in order to troubleshoot and improve them (the opposite already existed)
  • Added import/export of Whisperer configuration (decoding and parsing) from a file. This would have proven useful before, so it will again!
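
The focus-based version check looks roughly like the sketch below; the endpoint and the handling of a new version are assumptions for the example.

```ts
// Minimal sketch of a focus-triggered UI version check (hypothetical endpoint).
let knownVersion: string | undefined;

window.addEventListener("focus", async () => {
  const response = await fetch("/api/version"); // hypothetical endpoint
  const { version } = (await response.json()) as { version: string };
  if (knownVersion === undefined) {
    knownVersion = version; // first check: remember the current version
  } else if (version !== knownVersion) {
    console.info(`New UI version available: ${version}`); // e.g. prompt a reload
  }
});
```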

And I've spent some time solving my 'last' parsing issues, to support long communications and optimise parsing once again.

That's for next post ! :)

My 1st customer satisfaction survey

· One min read

In October last year, I performed my first customer satisfaction survey... And the results are great!

I read advice on some websites and took some templates as examples. The first version was too long, with too many choices and questions. I shortened it to get more feedback :)

Thanks to all of you that participated!

The summary in pictures:

Contact me if you'd like to know more!