Shared services: Global Catalogue, Broker, Cache, Monitoring
(ALL) Global Catalogue: harvesting of local catalogues from NCs/DCPCs; the mechanism is to be agreed. Multiple Global Catalogue instances could feed from the Global Cache.
(Rémy) If there are new updates on the Global Catalogue, does the NC have to run a catalogue through the Global Broker?
(Tom) It would be a normal pub/sub workflow.
(Jeremy) Catalogue pub/sub mechanism. The primary function of the Global Catalogue is to be searchable.
(Timo) Try to think from the end-user side: what does a Global Catalogue have to achieve? One Global Catalogue is enough, as can be seen from the OSCAR example implemented by MeteoSwiss.
(Tom) OGC API - Records implementation: the team is working on it; we have worked with commercial and open-source software. Implementation is important to the specification.
(Rémy) The Global Catalogue will harvest data from the Global Cache; the goal is to limit the number of protocols NCs have to support.
(Peter) From an architecture perspective, multiple Global Catalogue instances work. The downside is that the performance of a single Global Catalogue is uncertain.
(Kai) Consider the difference between cache and catalogue: the catalogue is more static.
(Peiliang) With multiple Global Catalogue instances that don't talk to each other, how do we make sure the instances have the same content?
(Jeremy) The monitoring system will check for discrepancies among Global Catalogue instances.
(Rémy) The Global Cache will not be fetching data. The Global Catalogues will end up synchronised because the Global Brokers are in sync: there is no synchronisation between Global Catalogues, synchronisation is between Global Brokers.
(Kai) The WIS Catalogue is harvested from NCs/DCPCs, whereas the OSCAR catalogue is fed from a single repository. The architectures of WIS and OSCAR (one platform provided for all to log in to) are different. Clarification is needed: do we need a platform to enter metadata, or a mechanism to share metadata?
(Jeremy) Favours the idea that NCs/DCPCs maintain the catalogue locally, rather than a platform where all NCs/DCPCs register their metadata catalogue with a user/password.
(Peter) What Rémy is suggesting is easily implemented as a cloud instance of wis2box (one wis2box in the cloud could be shared by multiple nations; it is just an issue of authentication/authorisation).
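Jeremy's point above could look something like the following: a minimal sketch of how a monitoring function might detect discrepancies between Global Catalogue instances by comparing their sets of metadata record identifiers. The instance names and record IDs are purely illustrative, not agreed WIS2 identifiers.

```python
# Hypothetical sketch: detect discrepancies between Global Catalogue
# instances by comparing the metadata record IDs each one holds.
# Instance names and record IDs below are illustrative only.

def catalogue_discrepancies(instances: dict) -> dict:
    """For each instance, return the record IDs it is missing
    relative to the union of records held by all instances."""
    all_records = set().union(*instances.values())
    return {name: all_records - records for name, records in instances.items()}

instances = {
    "gc-alpha": {"urn:md:centre-a:obs", "urn:md:centre-b:obs"},
    "gc-beta":  {"urn:md:centre-a:obs"},
}
missing = catalogue_discrepancies(instances)
print(missing["gc-beta"])  # records gc-beta has not yet harvested
```

A monitoring system would run such a comparison periodically and raise an alert when any instance lags behind the others.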
Global Monitoring
(Rémy) It depends on the design. Technically one Global Monitoring works, and from a technical perspective we don't need more than one; but multiple instances also work, and the design of the WIS2 system can work with multiple instances (e.g. different language options).
(Kai) We will have many types of monitoring (service monitoring, data monitoring, metadata quality, …). Use standards for collecting analytics for all monitoring functions.
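One standards-based option for Kai's point about collecting analytics is the Prometheus text exposition format, widely used for service metrics. The sketch below is an assumption for illustration; the metric and label names are not an agreed WIS2 convention.

```python
# Hypothetical sketch: render monitoring samples in the Prometheus
# text exposition format. Metric and label names are illustrative,
# not an agreed WIS2 standard.

def render_metrics(samples: list) -> str:
    """Render (name, labels, value) samples as Prometheus-style lines."""
    lines = []
    for name, labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

text = render_metrics([
    ("wis2_messages_received_total", {"broker": "gb-1", "centre": "centre-a"}, 1042),
    ("wis2_catalogue_records", {"instance": "gc-alpha"}, 5210),
])
print(text)
```

Using one exposition format across service monitoring, data monitoring, and metadata-quality monitoring would let a single collector aggregate them all.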
Global Broker
(Peter) At least 3 Global Brokers are needed for efficiency; it will work with more, and each instance should talk to at least 2 other brokers.
(Jeremy) Each Global Broker should talk to the others. For the Technical Regulations, we should follow the recommendation of TT-GISC.
(Kai) A maximum number of hops? How high should redundancy be? How many connections we can expect to fail determines the topology.
(Henning) The number of connections depends on the number of Global Brokers.
(Rémy) According to the Technical Regulations, all GISCs should talk to each other, but it doesn't happen that way in WIS1.
(Jeremy) We need to aim for having all messages everywhere.
(Rémy) How should the Technical Regulations be worded to say how the topology should be configured?
(Jeremy) Consider the TT-GISC recommendation; P/INFCOM to optimise the network. Global Brokers must connect to at least 2 other instances.
(Rémy) Let's keep this simple and avoid the problem by keeping the number of brokers low.
(Kai) We need to define the requirements for redundancy: we need a path from every broker to every broker and must decide how many links can fail. This determines the topology.
(Rémy) The problem is not the number of brokers; the problem is how to show which brokers are performing well. We need an approval mechanism for allocating the shared services.
(Hassan) We have experience with the GTS and NCs/DCPCs. It is important to involve TT-GISC in solving these issues. Future problems cannot be predicted.
(Rémy) With many RTHs the data exchange is fulfilled. We have made good progress on shared services; let's agree about the Global Brokers from a technical angle, then use monitoring to be sure the GBs are performing well. Political issues can happen at any time and cannot be predicted. What we need is an approval mechanism to handle these political issues.
We can use monitoring and an audit process for that.
(Rémy) We don't need 200 brokers (each RTH could be a candidate to host a Global Broker).
(Hassan) We can limit the number of Global Brokers by saying only GISCs can offer shared services; we need to solve these issues within a month. To run a Global Broker, centres need to declare themselves.
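Kai's redundancy requirement above (a path from every broker to every broker, tolerating a given number of failed links) can be checked mechanically for any candidate topology. The sketch below is illustrative only; the broker names and the ring topology are assumptions, not a proposed WIS2 layout.

```python
# Hypothetical sketch: verify that a candidate Global Broker topology
# stays fully connected after any k inter-broker links fail.
# Broker names and the example ring topology are illustrative only.
from itertools import combinations

def connected(nodes: set, links: set) -> bool:
    """Graph-search check that the link set connects all nodes."""
    seen, queue = set(), [next(iter(nodes))]
    while queue:
        n = queue.pop()
        if n in seen:
            continue
        seen.add(n)
        queue += [m for link in links if n in link for m in link if m not in seen]
    return seen == nodes

def survives_k_failures(nodes: set, links: set, k: int) -> bool:
    """True if the topology remains connected after removing any k links."""
    return all(connected(nodes, links - set(drop))
               for drop in combinations(links, k))

brokers = {"gb-1", "gb-2", "gb-3", "gb-4"}
# A ring: each broker talks to exactly 2 others, as per the minimum proposed.
ring = {frozenset(p) for p in [("gb-1", "gb-2"), ("gb-2", "gb-3"),
                               ("gb-3", "gb-4"), ("gb-4", "gb-1")]}
print(survives_k_failures(brokers, ring, 1))  # True: any single link can fail
print(survives_k_failures(brokers, ring, 2))  # False: two failures can split it
```

This makes Kai's point concrete: a ring of brokers with 2 connections each tolerates any one link failure but not two, so the required failure tolerance directly determines how densely the brokers must be interconnected.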