2022-03-10 ET-W2AT Plenary Meeting
Date
Mar 10, 2022 13:00-14:30 UTC
Participants
ET-W2AT
@Rémy Giraud
@Jeremy Tandy
@Tom Kralidis
@Kenji Tsunoda
@Dana Ostrenga
@peter.silva
@Baudouin Raoult
@Kai Wirt
Other Experts
@Kari Sheets
WMO Secretariat
@Enrico Fucile
@HADDOUCH Hassan
@Xiaoxia Chen
Apologies
@Pablo Loyber
@Li Xiang
@Henning Weber
@thorsten.buesselberg
@sabai.fatima
Goals
Endorse the recommendations from the weekly meetings of 7 February - 7 March
Discussion topics
No | Item | Presenter | Notes |
---|---|---|---|
1 | General | Jeremy | 1. WIS2 provides the "plumbing" for data, but does not define which data need to be shared. [agreed] 2. WIS2 will provide a structured method and framework for activity areas to develop their own domain-specific standards for vocabulary and topic structure. [agreed] |
2 | Global Broker | Jeremy | … or: At least one Global Broker will subscribe to messages from every NC/DCPC. Baudouin > There is no requirement for an NC/DCPC to publish messages relating to "static" (infrequently changing) datasets, such as archives. [agreed] Peter > there was talk about publishing metadata records using MQP... would those become optional also? [tbd] 4. For full global coverage, a Global Broker instance will republish messages from other Global Broker instances. [agreed] 5. A Global Broker instance will be connected to at least 2 other instances. [agreed] Kai > notes that users will need to subscribe to "originating centre" channel topics if the data isn't cached. [agreed] 6. Global Brokers should use distinct "channels" to keep messages from originating centres separate from messages originating from Global Cache instances (see the subscription sketch after the table). [agreed] |
3 | Global Cache | | 1. There will be multiple Global Cache instances. [agreed] 2. A Global Cache instance will serve as a primary Global Cache for a subset of NC/DCPCs, i.e., the Global Cache will download Data Objects to its cache directly from those NC/DCPCs. Jeremy > e.g., three-quarters of the data will be copied from other caches and one-quarter downloaded directly from NC/DCPCs. Enrico > what if there's only one cache? Or if there are only two caches, what happens if one cache dies? Baudouin > an NC/DCPC will [push cached data to at least one Global Cache instance?] >> ACTION: revise #2 on the basis of discussion. 3. For full global coverage, a Global Cache instance will download Data Objects and discovery metadata records from other instances. [agreed] 4. A Global Cache instance will operate independently of other Global Cache instances, albeit that one instance may download content from another. [agreed] 5. A Global Cache instance will store the discovery metadata records needed to populate the Global Catalogue. [agreed] Kai > so the Global Cache would need to manage update/delete of cached metadata records, e.g., to manage the lifespan of the records; cached data will expire after a given time. Peter > in Canada, we use MQ to synchronise [file] directories, with file "create", "update", and "remove" actions. Could we re-use that? That would mean we don't need extra logic in the Global Cache (see the cache-maintenance sketch after the table). Kai > we need this functionality at the Global Catalogue too. >> ACTION: Tom, Peter, Jeremy - investigate further. Peter > will the catalogue records end up being bigger than the cached data? Tom > no; don't see us managing more than a few thousand records. Metadata will be at a much higher level of granularity, and with OGC API - Records each record should be smaller. Peter > today, our number of metadata records is suppressed because we're only dealing with GTS data. Tom > no, because our concern is about curating _collections_ of data. >> ACTION: develop best practices to help Data Providers get discovery metadata at the right level of granularity. 6. A Global Cache is designed to support real-time distribution of content; it does not provide a "browse-able" interface where Data Consumers can discover what content is available. Peter > probably need a way for someone to "browse" the list of files in the Cache, e.g., with a WAF, to enable people to recover if a system loses its queue. Tom > maybe we need to explicitly call the Cache a "Web Accessible Folder": it doesn't provide a user-oriented interface, only a directory structure of files. >> ACTION: update #6. 7. Global Cache will hold "file" Data Objects - similar to how "file" Data Objects are shared on the GTS. Peter > this confuses me; please remove the caveat after the dash. >> ACTION: update #7. 8. Global Cache instances and NC/DCPCs use a consistent topic structure in their local message brokers. [agreed] |
4 | Global Discovery Catalogue | | 1. A single Global Discovery Catalogue instance is sufficient for WIS2. 2. Multiple Global Discovery Catalogue instances may be deployed for resilience. 3. Global Discovery Catalogue instances operate independently of each other. 4. A Global Discovery Catalogue instance is populated with discovery metadata records from a Global Cache instance, receiving messages about the availability of discovery metadata records via a Global Broker. 5. A Global Discovery Catalogue instance should connect to more than one Global Broker instance, discarding duplicate messages as needed. 6. The Global Discovery Catalogue advertises the availability of datasets and how/where to access them or subscribe to updates; it does not advertise the availability of specific Data Objects (i.e., data files). 7. A Global Discovery Catalogue instance will update the discovery metadata records it receives to add "association" links for subscription URLs at Global Broker instances. 8. The Global Discovery Catalogue implements the OGC API - Records standard, both record structure and search API (see the query sketch after the table). Tom > should we refer to this as the "Global Discovery Catalogue"? [agreed] Tom > suggest we add a requirement about "bootstrapping" the GDC from the Global Cache. [agreed] >> ACTION: Tom, Jeremy - make these changes. |
5 | Data Consumer | | … >> further discussion needed: Peter, Tom, Jeremy. 2. Data Consumers should subscribe to Global Brokers to receive "data availability" messages. Exceptionally, a Data Consumer may decide to subscribe directly to the local message broker at the originating NC/DCPC. Data Consumers should not subscribe to the local message broker at Global Cache instances. Peter > need clarification on who is meant by Data Consumer; a Member might offer a "national-level" broker. 3. Data Consumers who want to browse and download data should use the originating centre (i.e., NC/DCPC). The Global Cache may be accessed directly, but its primary purpose is to host files that are identified in real-time "data availability" messages. >> further discussion needed. Enrico > we've not discussed authentication / authorization / credentials; pick up the "data consumer" perspective, plus this, at another Monday meeting. 4. It doesn't matter whether a Data Consumer downloads a Data Object from any Global Cache instance or from the originating NC/DCPC directly: logically, it is the same Data Object. However, a Data Consumer may have a preferred Global Cache instance (e.g., due to latency or other performance criteria). An NC/DCPC will likely prefer Data Consumers to use the Global Cache, to reduce load on its systems. 5. Data Consumers will need to implement logic to discard "duplicate" messages (see the de-duplication sketch after the table). |
6 | Monitoring | | 1. WIS2 will standardise how [performance] metrics are published from WIS centres and GISC shared services. [agreed] 2. "Sensor-Centre" is a new role in WIS2. [agreed] 3. WIS2 will monitor the 'health' (i.e., performance) of components at NC/DCPCs as well as Global "shared services" components. [agreed] 4. Provision of [performance] metrics by NC/DCPCs (in standard form) is recommended, but not mandatory. [agreed] 5. Tech Regs will include requirements for Sensor Centres, covering both data and service availability, at a minimum in terms of how they expose metrics to the Global Monitoring dashboard(s) [the definition of the metrics themselves may be part of other WMO manuals?]. [agreed] 6. WIS2 provides the "plumbing" for capturing [performance] metrics, but does not define which metrics are needed; that is the responsibility of the other programmes/activities (see the metrics sketch after the table). [agreed] |
7 | Discovery metadata | | 1. WCMP2 based on OGC API - Records (see the record skeleton after the table). [agreed] |
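Subscription sketch (row 2, note 6). A minimal MQTT subscriber, written against the paho-mqtt v1 API, that listens to separate originating-centre and cache channels. The broker host name and the origin/cache topic prefixes are illustrative assumptions; the minutes only agree that the two kinds of messages must be kept on distinct channels.

```python
import paho.mqtt.client as mqtt

# Hypothetical Global Broker endpoint; only the requirement for distinct
# "origin" and "cache" channels comes from the meeting notes.
BROKER_HOST = "globalbroker.example.org"

def on_connect(client, userdata, flags, rc):
    # Subscribe to both channels at QoS 1; messages from originating
    # centres and from Global Cache instances arrive on separate topics.
    client.subscribe([("origin/#", 1), ("cache/#", 1)])

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```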
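Cache-maintenance sketch (row 3, note 5 and Peter's MQ directory-sync idea). One way a Global Cache could apply "create"/"update"/"remove" notifications to a local file store and expire old entries. The message fields, cache location, and retention period are all assumptions, not agreed design.

```python
import json
import time
from pathlib import Path

import requests

CACHE_DIR = Path("/var/cache/wis2")  # assumed cache location
TTL_SECONDS = 24 * 3600              # assumed retention period

def handle_notification(payload: bytes) -> None:
    """Apply one create/update/remove notification to the local cache.
    The 'action', 'url', and 'path' fields are illustrative only."""
    msg = json.loads(payload)
    target = CACHE_DIR / msg["path"]
    if msg["action"] in ("create", "update"):
        resp = requests.get(msg["url"], timeout=30)
        resp.raise_for_status()
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(resp.content)   # download the Data Object
    elif msg["action"] == "remove":
        target.unlink(missing_ok=True)     # drop it from the cache

def expire_old_entries() -> None:
    """Delete cached files older than the retention period."""
    cutoff = time.time() - TTL_SECONDS
    for path in CACHE_DIR.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
```

Re-using message-driven create/update/remove semantics (as Peter suggests) keeps the cache itself free of extra lifecycle logic beyond the expiry pass.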
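Query sketch (row 4, note 8). A minimal free-text search against an OGC API - Records endpoint, of the kind the Global Discovery Catalogue would expose. The catalogue URL and collection name are placeholders.

```python
import requests

# Placeholder catalogue endpoint and collection id; OGC API - Records
# serves records as GeoJSON features from .../collections/{id}/items.
GDC_URL = "https://gdc.example.org"

resp = requests.get(
    f"{GDC_URL}/collections/discovery-metadata/items",
    params={"q": "surface temperature", "limit": 10},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("features", []):
    # Records updated by the GDC would carry "association" links pointing
    # at Global Broker subscription URLs (row 4, note 7).
    print(record["id"], record["properties"].get("title"))
```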
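De-duplication sketch (row 5, note 5). One simple way a Data Consumer could discard duplicate messages: keep a bounded, insertion-ordered set of message identifiers and drop anything already seen. The identifier field and the eviction size are assumptions.

```python
from collections import OrderedDict

class Deduplicator:
    """Track message identifiers already seen (which field serves as the
    identifier is an assumption; the message format was not settled)."""

    def __init__(self, max_entries=100_000):
        self.seen = OrderedDict()
        self.max_entries = max_entries

    def is_new(self, msg_id):
        if msg_id in self.seen:
            return False          # duplicate: discard
        self.seen[msg_id] = None
        if len(self.seen) > self.max_entries:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True

dedupe = Deduplicator()
for msg_id in ("abc", "def", "abc"):
    print(msg_id, dedupe.is_new(msg_id))  # -> abc True, def True, abc False
```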
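Metrics sketch (row 6). How an NC/DCPC might expose metrics in a standard, scrape-able form, here using the Prometheus client library as one plausible mechanism. The metric names are placeholders; per note 6, which metrics are required is for the relevant programmes/activities to define.

```python
import time

from prometheus_client import Counter, Gauge, start_http_server

# Placeholder metrics; the required set is defined elsewhere (note 6).
messages_published = Counter(
    "wis2_messages_published_total",
    "Notification messages published by this centre")
broker_up = Gauge(
    "wis2_broker_up",
    "1 if the centre's local message broker is reachable, else 0")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        broker_up.set(1)           # a real exporter would probe the broker
        messages_published.inc()   # ...and count real publications
        time.sleep(10)
```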
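Record skeleton (row 7). The shape of a discovery metadata record in the OGC API - Records GeoJSON encoding, on which WCMP2 would build. Every value below is a placeholder and the link relation is illustrative; WCMP2 will define its own required properties.

```python
# Skeleton record in the OGC API - Records GeoJSON encoding.
record = {
    "id": "urn:x-wmo:md:example-centre:surface-obs",  # placeholder id
    "type": "Feature",
    "geometry": {  # spatial extent of the dataset (global here)
        "type": "Polygon",
        "coordinates": [[[-180, -90], [-180, 90], [180, 90],
                         [180, -90], [-180, -90]]],
    },
    "properties": {
        "title": "Surface observations from an example centre",
        "description": "Hourly synoptic surface observations.",
        "type": "dataset",
    },
    "links": [
        # A GDC-added "association" link pointing at a Global Broker
        # subscription URL (row 4, note 7) might look like this.
        {"rel": "items",
         "href": "mqtt://globalbroker.example.org/origin/example-centre"},
    ],
}
```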
Action items
Decisions