
156 posts tagged with "features"


New tool to save HTTP stats

· 4 min read
Creator of Spider

Spider can be very useful for extracting system statistics.
But its short retention time (due to the high data volume) limits what you can do with these stats.

This new tool extracts HTTP statistics from the data captured by Spider and saves them into another index with a longer retention time.
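
To give an idea of the approach, here is a minimal sketch of it: aggregate HTTP statistics from the capture index and re-index them into a dedicated index with a longer retention time. The index names, field names and hourly granularity are assumptions for the example, not the tool's actual implementation.

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Assumed index names, not the actual Spider ones.
const CAPTURE_INDEX = 'spider-http-captures';
const STATS_INDEX = 'spider-http-stats';

async function saveHourlyHttpStats(): Promise<void> {
  // Aggregate the last hour of HTTP captures per status code (assumed field name).
  const res = await client.search({
    index: CAPTURE_INDEX,
    size: 0,
    query: { range: { '@timestamp': { gte: 'now-1h' } } },
    aggs: { by_status: { terms: { field: 'response.status', size: 50 } } },
  });

  // Save the aggregated stats into the long-retention index.
  const buckets: any[] = (res.aggregations as any).by_status.buckets;
  const operations = buckets.flatMap((b) => [
    { index: { _index: STATS_INDEX } },
    { '@timestamp': new Date().toISOString(), status: b.key, count: b.doc_count },
  ]);
  await client.bulk({ operations });
}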

Location extraction

· One min read

Today I got an idea from Enzo: add a link next to the Location header in the HTTP details, to allow loading the linked resource from the HTTP response.

As I liked the idea, I did it straight away :)

According to the official RFC, the Location may be absolute or relative, so Spider manages both:

  • Either a direct link to the absolute URI
  • Or a link built from the 'host' header, the 'x-forwarded-proto' header if present, and the relative 'location' header (see the sketch below).
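
As a rough illustration, here is a minimal sketch of how such a link could be built from the headers (the actual Spider implementation may differ):

function buildLocationLink(headers: Record<string, string>): string | undefined {
  const location = headers['location'];
  if (!location) return undefined;

  // Absolute URI: link to it directly.
  if (/^https?:\/\//i.test(location)) return location;

  // Relative URI: rebuild the base from 'host' and 'x-forwarded-proto' (defaulting to http).
  const host = headers['host'];
  if (!host) return undefined;
  const proto = headers['x-forwarded-proto'] ?? 'http';
  return new URL(location, `${proto}://${host}`).toString();
}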

The location is now available in the HTTP global tab. Thank you Enzo!

Progressive loading of Map

· One min read

Following the work and good results on the Timeline, the same progressive loading has been applied to the network map.

Although more complex to do, having done the Timeline before made it quite quick to implement (<10h).

Tests have proven it can display two full days of data for all monitored platforms on Streetsmart: an aggregation over 100GB of data.

I wouldn't say it was fast though; I stopped it before the end as there were nodes all over the place ;-)

Check it out! Fireworks!

Progressive loading of Timeline for massive data

· One min read

Timeout in Spider on production data :(

In the Flowbird production environment, Spider is capturing around 400GB per day. Impressive!

But it is a challenge in itself! While the capture works great, the UI, map and statistics get timeouts when loading the timeline over the whole day.

Progressive loading with composite aggregation

To improve the situation, I've added an option to the timeline (first) to load the data progressively, with pagination. It uses Elasticsearch's composite aggregation, and the results update the existing timeline data whenever possible, instead of resetting the whole timeline every time.
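
For illustration, here is a minimal sketch of paging a date histogram with Elasticsearch's composite aggregation. The index name, field name and interval are assumptions, not Spider's actual query.

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function loadTimelineProgressively(onPage: (buckets: any[]) => void): Promise<void> {
  const PAGE_SIZE = 1000;
  let after: Record<string, unknown> | undefined;
  let pageFull = true;

  while (pageFull) {
    const res = await client.search({
      index: 'spider-http-captures', // assumed index name
      size: 0,
      aggs: {
        timeline: {
          composite: {
            size: PAGE_SIZE, // one page of buckets per request
            sources: [{ time: { date_histogram: { field: '@timestamp', fixed_interval: '1m' } } }],
            ...(after ? { after } : {}),
          },
        },
      },
    });

    const agg = (res.aggregations as any).timeline;
    onPage(agg.buckets);   // merge this page into the existing timeline data
    after = agg.after_key; // the next request resumes after this key
    pageFull = agg.buckets.length === PAGE_SIZE;
  }
}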

The setting is activated by default and may be deactivated in the display settings:

Result: Pros & Cons mix

Pros

  • The progressive loading is really visible
  • It makes cancelling the expensive timeline loading query quite easy
  • It limits the load on the server when the user navigates quickly on the timeline

Cons

  • Loading the timeline data takes longer: pagination implies many requests to the server in sequence, with a request overhead for each call.

I think it is worth it.

Demo

New plugin hook - client-enrichment-plugin

· One min read

A new client-enrichment-plugin hook is available.

This plugin hook allows resolving the client identification extracted from JWT or Basic auth against an external system. (A simplified sketch of such a plugin is given after the Plugin API description below.)

The enrichment is applied to:

  • Grid and export
  • Filters
  • Details
  • Stats
  • And map!

Example

This sample plugin decodes Spider's own identifiers from JWT tokens to display the names of Whisperers and Users.

It is available here: https://gitlab.com/spider-plugins/spd-client-resolver

In grid & filters:

In map:

Plugin API

{
  inputs: {identification, mode},
  parameters: {},
  callbacks: {setDecodedClient, onShowInfo, onShowError, onShowWarning, onOpenResource},
  libs: {React}
}
  • identification: value of the identification to resolve (JWT sub field, or Basic auth login)
  • mode: 'REACT' or 'TEXT', depending on the expected output
  • onOpenResource({id, title, contentType, payload}): callback to open a downloaded payload in the details panel. XML and JSON are supported.
    • id: id of the resource, to manage breadcrumb
    • title: displayed at the top of details panel
    • contentType: application/json or application/xml are supported
    • payload: the resource content (string)
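
To make the hook shape more concrete, here is a simplified, hypothetical sketch of a client-enrichment plugin built only from the signature above. The resolution endpoint and the exact module shape expected by Spider are assumptions; see the spd-client-resolver repository for a real implementation.

// Hypothetical plugin module: the exact export shape expected by Spider may differ.
export default function clientEnrichmentPlugin({ inputs, callbacks, libs }: any) {
  const { identification, mode } = inputs;             // JWT 'sub' field, or Basic auth login
  const { setDecodedClient, onShowError } = callbacks;
  const { React } = libs;

  // Resolve the identification against a (hypothetical) external directory.
  fetch(`/directory/clients/${encodeURIComponent(identification)}`)
    .then((res) => res.json())
    .then((client) => {
      if (mode === 'TEXT') {
        setDecodedClient(client.displayName);          // plain text, e.g. for exports
      } else {
        // 'REACT' mode: return a small element for grid, details and map.
        setDecodedClient(React.createElement('span', { title: identification }, client.displayName));
      }
    })
    .catch((err) => onShowError(`Could not resolve ${identification}: ${err.message}`));
}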

Anonymous statistics as a user choice

· One min read

Users may now choose on their own to anonymize their usage statistics. The option is available in the Settings panel.

The statistics are anonymized so that no link can be made between the statistics and the user: UserId and email are replaced by a client-side generated UUID that is regenerated at each user login, or whenever the anonymous stats flag is changed.
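
As a rough sketch of the idea (illustrative names, not Spider's actual code):

// Regenerated at each user login, or when the anonymous stats flag is changed.
function regenerateAnonymousId(): string {
  const anonymousId = crypto.randomUUID(); // client-side generated UUID
  localStorage.setItem('anonymousStatsId', anonymousId);
  return anonymousId;
}

function buildStatsIdentity(user: { id: string; email: string }, anonymous: boolean) {
  if (!anonymous) return { userId: user.id, email: user.email };
  const anonymousId = regenerateAnonymousId();
  return { userId: anonymousId, email: anonymousId }; // no link back to the real user
}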

Spider is getting more and more ready.

Consent validation

· One min read

I've just added Consent validation of Privacy terms.

This complies with GDPR regulations, which require informing the user of the private data collected and the processing behind it.

  • Consent is mandatory to use Spider
  • User consent is saved on the server and requested again when the terms change

The date of consent and the terms may be accessed later on the new Help page. (See next post)