
Item 1: Shared services (Global Catalogue, Broker, Cache, Monitoring)

Presenter: ALL

Notes:

  1. Global Catalogue

    1. harvesting local catalogues from NCs/DCPCs; the mechanism is to be agreed

    2. multiple GC instances could feed from the Global Cache

    3. (Rémy) when there are new updates on the GC, how will the Global Broker and the NC handle the catalogue update?

      1. (Tom) it would be a normal pub/sub workflow (see the sketch after this list)

    4. (Jeremy) a catalogue pub/sub mechanism; the primary function of the GC is to be searchable

    5. (Timo) try to think from the end-user side: what does a Global Catalogue have to achieve?

    6. One Global Catalogue is enough, as can be seen from the OSCAR example implemented by Meteo-Suisse.

    7. (Tom) OGC API - Records implementation: the team is working on it; we have worked with commercial and open-source software. Implementation is important to the specification.

    8. (Rémy) the Global Catalogue will harvest data from the Global Cache; the goal is to limit the number of protocols that NCs have to support.

    9. (Peter) From an architecture perspective, multiple Global Catalogue instances work. The downside of having only one Global Catalogue is that its performance is uncertain.

    10. (Kai) consider the difference between cache and catalogue; the catalogue is more static.

    11. (Peiliang) with multiple GC instances that don’t talk to each other, how do we make sure the GC instances have the same content?

    12. (Jeremy) the monitoring system will check for discrepancies among Global Catalogue instances

    13. (Rémy) the Global Cache will not be fetching data. No synchronisation between Global Catalogues; synchronisation is between Global Brokers.

    14. (Kai) the WIS Catalogue is harvested from NCs/DCPCs; the OSCAR catalogue is built from a simple repository. The architecture for WIS and for OSCAR (one platform provided for all to log in) is different. Clarification needs to be made: do we need a platform to enter metadata, or a mechanism to share metadata?

    15. (Jeremy) favours the idea that NCs/DCPCs maintain the catalogue locally, rather than a platform where all NCs/DCPCs register their metadata catalogue with a user/password

    16. (Peter) What Rémy is suggesting is easily implemented as a cloud instance of wis2box (one wis2box in the cloud could be shared by multiple nations; it is just a matter of authentication/authorization).
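
A minimal sketch of the pub/sub workflow mentioned in notes 3-4 above, assuming a Python client with paho-mqtt: a Global Catalogue instance subscribes to metadata notifications and re-indexes each record as it arrives. The broker endpoint, topic filter and credentials are illustrative placeholders, not agreed WIS2 values.

    # Sketch: a Global Catalogue subscribing to discovery-metadata
    # notifications over MQTT (paho-mqtt 1.x callback style).
    # Endpoint, topic and credentials are illustrative assumptions.
    import json
    import paho.mqtt.client as mqtt

    BROKER_HOST = "globalbroker.example.int"     # hypothetical Global Broker
    METADATA_TOPIC = "origin/a/wis2/+/metadata"  # hypothetical topic filter

    def on_connect(client, userdata, flags, rc):
        # Subscribe once connected; QoS 1 so updates are not lost.
        client.subscribe(METADATA_TOPIC, qos=1)

    def on_message(client, userdata, msg):
        # A real catalogue would fetch the linked metadata record and
        # (re)index it so it stays searchable; here we just print it.
        notification = json.loads(msg.payload)
        print("metadata update:", notification)

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.username_pw_set("everyone", "everyone")  # assumed read-only account
    client.connect(BROKER_HOST, 1883)
    client.loop_forever()

In this reading, no separate harvesting schedule is needed: the catalogue stays current as long as it stays subscribed, which is what makes the workflow "normal pub/sub".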

  2. Global Monitoring

    1. (Rémy) it depends on the design. We don’t need more than one Global Monitoring (from a technical perspective), but the design of the WIS2 system can work with multiple instances (with different language options).

    2. (Kai) we will have many types of monitoring (service monitoring, data monitoring, metadata quality, …). Use standards for collecting analytics across all monitoring functions (see the sketch below).
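
One way to read the point about standard analytics, sketched with the Prometheus/OpenMetrics exposition format via the prometheus_client Python library: every monitoring function (service, data, metadata quality) reports through one standard endpoint. The metric names and the discrepancy value are illustrative assumptions, not agreed definitions.

    # Sketch: exposing service, data and metadata-quality monitoring
    # through one standard metrics endpoint (Prometheus/OpenMetrics).
    # Metric names and values are illustrative placeholders.
    import random
    import time
    from prometheus_client import Counter, Gauge, start_http_server

    messages_received = Counter(
        "wis2_messages_received_total",
        "Notification messages received", ["centre"])
    catalogue_discrepancies = Gauge(
        "wis2_catalogue_discrepancies",
        "Records differing between Global Catalogue instances")

    start_http_server(8000)  # any standard monitoring stack can scrape this
    while True:
        messages_received.labels(centre="example-nc").inc()
        catalogue_discrepancies.set(random.randint(0, 3))  # placeholder check
        time.sleep(10)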

  3. Global Broker

    1. (Peter) At least 3 GBs are needed for efficiency; it will work with more GBs, and each instance should talk to at least 2 other brokers.

    2. (Jeremy) each GB should talk to the others. To write this into the Technical Regulations, we should follow the recommendation of TT-GISC.

    3. (Kai) a maximum number of hops; how high should the redundancy be? How many connections can we expect to fail? That determines the topology.

    4. (Henning) The number of connections depends on the number of Global Brokers.

    5. (Rémy) Based on the Technical Regulations, all GISCs should talk to each other, but it doesn’t happen that way in WIS1.

    6. (Jeremy) need to aim for having all messages everywhere

    7. (Rémy) how should the Technical Regulations be read to determine how the topology should be configured?

    8. (Jeremy) consider the TT-GISC recommendation, with P/INFCOM to optimise the network. Global Brokers must connect to at least 2 other instances.

    9. (Rémy) let’s keep this simple and avoid the problem by keeping the number of brokers low

    10. (Kai) need to define the requirements for redundancy. We need a path from every broker to every broker and must decide how many links can fail; this determines the topology (see the sketch after this list).

    11. (Rémy) the problem is not the number of brokers; the problem is how to show which brokers are performing well. We need an approval mechanism for allocating the shared services.

    12. (Hassan) We have experience with the GTS, with many RTHs, and the data exchange is fulfilled. We have made good progress on shared services; let’s agree on the Global Brokers from a technical angle, then use monitoring to be sure the GBs are performing well. Political issues can happen at any time and cannot be predicted. What we need is an approval mechanism to handle these political issues; we can use monitoring and an audit process for that.

    13. (Rémy) we don’t need to have 200 brokers (each RTH could be a candidate to host a GB)

    14. (Hassan) we can limit the number of GBs by saying only GISCs can offer shared services

    15. we need to solve these issues within a month. To run a Global Broker, candidates need to declare themselves.
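
Kai's redundancy question in note 10 can be made concrete: model the brokers and their connections as a graph and check that every broker can still reach every other broker after any F link failures. A self-contained sketch in plain Python; the broker names and links are made-up examples, not a proposed topology.

    # Sketch: does a broker mesh stay connected if up to F links fail?
    # Brokers and links are made-up examples, not a proposed topology.
    from itertools import combinations

    links = {("GB1", "GB2"), ("GB2", "GB3"), ("GB3", "GB1"),
             ("GB3", "GB4"), ("GB4", "GB1")}
    brokers = {b for link in links for b in link}

    def connected(edges):
        # Graph search from an arbitrary broker; the mesh is connected
        # if every broker is reachable.
        start = next(iter(brokers))
        seen, queue = {start}, [start]
        while queue:
            node = queue.pop()
            for a, b in edges:
                if node in (a, b):
                    other = b if node == a else a
                    if other not in seen:
                        seen.add(other)
                        queue.append(other)
        return seen == brokers

    F = 1  # number of simultaneous link failures to tolerate
    survives = all(connected(links - set(failed))
                   for failed in combinations(links, F))
    print(f"mesh survives any {F} link failure(s): {survives}")

In this made-up mesh every broker has at least 2 links, in line with notes 1 and 8, and the check passes for F = 1; the same test applied to candidate topologies would answer "how many links can fail" directly.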

✅ Action items

  • To be discussed next week:
  1. Connectivity & functions of the Global Cache

  2. What else should GISCs do? Shared services? How to support their AoR?
