
🗓Date

14:30-16:30 UTC

👥Participants

ET-W2AT

WMO Secretariat

🥅Goals

  1. To discuss the number of shared service instances

🗣Discussion topics

Item: Shared services - Global Catalogue, Global Broker, Global Cache, Global Monitoring

Presenter: ALL

Notes:

  1. Global Catalogue

    1. harvesting local catalogues from NCs/DCPCs

    2. multiple GC instances

    3. (Rémy) if there are new updates on the GC, will the Global Broker or the NC have to run a catalogue?

      1. (Tom) it would be a normal pub/sub workflow (see the pub/sub sketch after these notes)

    4. (Jeremy) the catalogue uses a pub/sub mechanism; the primary function of the GC is to be searchable

    5. (Timo) try to think from the end-user side. One Global Catalogue is enough, as the OSCAR example shows.

    6. (Tom) OGC API - Records: implementation experience is important to the specification (see the catalogue search sketch after these notes)

    7. (Rémy) the Global Catalogue harvests data from the Global Cache; the goal is to limit the number of protocols that NCs have to support.

    8. (Peter) From an architecture perspective, multiple GC instances work; the downside of having only one GC is that its performance is uncertain.

    9. (Kai) Difference between the cache and the catalogue: the cache is more static.

    10. (Peiliang) with multiple GC instances that don’t talk to each other, how do we make sure the GC instances have the same content?

      1. (Jeremy) the monitoring system will check for discrepancies among GCat instances

      2. (Rémy) the GCache will not be fetching data; the GCat instances will be synchronized and the GBrokers will be in sync.

      3. (Kai) the WIS Catalogue harvests from centres, whereas the OSCAR catalogue is built from a single repository. The architectures of WIS and OSCAR (one platform provided for all to log in to) are different; this needs to be clarified.

      4. (Jeremy) in favour of the idea that each NC/DCPC maintains its catalogue locally, rather than a single platform where all NC/DCPCs register their metadata catalogues with a user/password

      5. (Peter) What Rémy is suggesting is easily implemented as just a cloud instance of wis2box (one wis2box in the cloud could be shared by multiple nations; it is just an issue of authentication/authorization).

  2. Global Monitoring

    1. (Rémy) it depends on the design whether only one GMon is enough (technically it works), but multiple GMon instances also work.

  3. Global Broker

    1. (Peter) At least 3

    2. (Jeremy) each GB should talk to the others. Before writing this into the Technical Regulations, we should follow the recommendation of TT-GISC.

    3. (Kai) a maximum number of hops; how high should the redundancy be? How many connections we can expect to fail determines the topology (see the topology sketch after these notes).

    4. (Henning) The number of connections depends on the number of Global Brokers.

    5. (Rémy) based on the Technical Regulations, all GISCs should talk to each other, but in practice it does not happen this way.

    6. (Hassan) We have experience with the GTS and NC/DCPCs. It is important to involve TT-GISC in solving these issues; future problems cannot be predicted.

    7. (Rémy) we need to solve these issues within a month. To run a Global Broker, centres need to declare themselves.
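
Pub/sub sketch referenced under the Global Catalogue discussion: a minimal Python subscriber for catalogue update notifications, assuming an MQTT-based Global Broker and the paho-mqtt 1.x callback style. The broker host and topic filter are placeholders, not agreed values.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "globalbroker.example.int"  # hypothetical Global Broker endpoint
TOPIC = "origin/a/wis2/#"                 # placeholder topic filter, not an agreed value

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the Global Broker is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # A Global Catalogue instance would parse the notification and update
    # (or re-harvest) the corresponding metadata record.
    notification = json.loads(msg.payload)
    print("metadata update notification on", msg.topic, ":", notification.get("id"))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```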
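Catalogue search sketch for the point that the GC's primary function is to be searchable: a minimal OGC API - Records query, assuming a hypothetical endpoint and collection name; only the standard /collections/{id}/items parameters are used.

```python
import requests

CATALOGUE_URL = "https://globalcatalogue.example.int/oapi"  # hypothetical endpoint
COLLECTION = "discovery-metadata"                           # hypothetical collection id

response = requests.get(
    f"{CATALOGUE_URL}/collections/{COLLECTION}/items",
    params={"q": "temperature", "limit": 10},  # free-text search per OGC API - Records
    timeout=30,
)
response.raise_for_status()

# Records come back as a GeoJSON feature collection.
for record in response.json().get("features", []):
    print(record.get("id"), "-", record.get("properties", {}).get("title"))
```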
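Topology sketch for the redundancy question (maximum hops, expected connection failures): a small pure-Python check of whether a set of broker-to-broker links stays fully connected after a given number of links fail. Broker names and links are illustrative only.

```python
from itertools import combinations

def reachable(brokers, links):
    """Return True if the broker connection graph is fully connected."""
    if not brokers:
        return True
    adj = {b: set() for b in brokers}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(brokers))]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node] - seen)
    return seen == set(brokers)

def survives_failures(brokers, links, failures):
    """True if the topology stays connected after losing any `failures` links."""
    return all(
        reachable(brokers, [l for l in links if l not in lost])
        for lost in combinations(links, failures)
    )

brokers = {"GB-A", "GB-B", "GB-C"}                              # hypothetical names; "at least 3"
links = [("GB-A", "GB-B"), ("GB-B", "GB-C"), ("GB-A", "GB-C")]  # full mesh of three brokers
print(survives_failures(brokers, links, failures=1))            # True: tolerates any single lost link
```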

✅Action items

  • To discuss the Global Cache next time

⤴Decision

  • Global Brokers should connect to other brokers
