
Purpose

This use-case describes the means to test the availability and authenticity of a deployed web service.

The test should be generally accessible (locally or over the Internet) and should return a result with or without authentication, and with or without an existing authorization. The test should also be available on all listeners, in all protocols/dialects, and with all transport-security options.

The test should not rely on any particular eventing model, and it should not dilute metrics. The requested operation should always respond, except where a response is withheld to avoid denial of service.

Actors

System Administrators

Applications

Student Information System

High-level Story (Abstract)

A college or university deploys a web service such as the Academic History Web Service and wants to proactively detect problems by testing the service.

Detailed Story

After deploying and testing the web service, the appropriate administrator of the organization's monitoring tool configures it to invoke a Status request on the service at a desired interval.

If the invocation fails, that failure is reported by the monitoring tool.

A successful invocation returns the status (success or failure) of invocation attributes such as authentication and authorization, specific metrics (if requested), and dynamic/negotiated details (e.g., transport-security attributes).

The metrics are compared with threshold values set in the monitoring tool, and the appropriate administrators are alerted if thresholds are exceeded.
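The threshold comparison described above can be sketched as follows. This is an illustrative sketch only: the metric names come from the sample reply in this use case, but the threshold values and the `check_thresholds` function are hypothetical, not part of any specification.

```python
# Hypothetical sketch: compare metrics from a Status reply against
# monitor-configured thresholds and collect alerts for administrators.
# Metric names match the sample reply; thresholds are illustrative.

def check_thresholds(metrics, thresholds):
    """Return a list of alert strings for metrics that exceed thresholds."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds threshold {limit}")
    return alerts

thresholds = {"op.readPerson.avg-response-time": 5.0,
              "op.readPerson.failed": 1}
metrics = {"op.readPerson.success": 7,
           "op.readPerson.avg-response-time": 8.03,
           "op.readPerson.failed": 2}

alerts = check_thresholds(metrics, thresholds)
# Both the average response time (8.03 > 5.0) and the failure count
# (2 > 1) exceed their thresholds, so two alerts are produced.
```

In a real monitor, the alert list would feed whatever notification mechanism (email, pager, dashboard) the tool already provides.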

  1. The service is deployed with a SOAP endpoint using SSL.
  2. The monitor is configured to invoke the 'Status' operation, with an authentication token for 'operator'.
  3. The service is responding, so the Status invocation succeeds. The reply includes various name-value pairs:
response
  Status
    result                           success
    authn                            operator
    cipher                           AES256-SHA
  Metrics
    uptime-seconds                   5520
    ops-per-hour                     5.22
    op.readPerson.success            7
    op.readPerson.avg-response-time  8.03
    op.readPerson.failed             2
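A monitor consuming this reply would parse the name-value pairs into its two sections (Status and Metrics). The sketch below assumes a simple XML layout for the reply, since the actual wire format is not defined in this use case; the element and attribute names are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape for the Status reply; the real schema is not
# specified in this use case. Values are taken from the sample above.
SAMPLE_REPLY = """\
<response>
  <Status>
    <result>success</result>
    <authn>operator</authn>
    <cipher>AES256-SHA</cipher>
  </Status>
  <Metrics>
    <metric name="uptime-seconds">5520</metric>
    <metric name="ops-per-hour">5.22</metric>
    <metric name="op.readPerson.success">7</metric>
    <metric name="op.readPerson.avg-response-time">8.03</metric>
    <metric name="op.readPerson.failed">2</metric>
  </Metrics>
</response>"""

def parse_status_reply(xml_text):
    """Parse the (assumed) reply into a status dict and a metrics dict."""
    root = ET.fromstring(xml_text)
    status = {el.tag: el.text for el in root.find("Status")}
    metrics = {el.get("name"): float(el.text) for el in root.find("Metrics")}
    return status, metrics

status, metrics = parse_status_reply(SAMPLE_REPLY)
# status["result"] -> "success"; metrics["uptime-seconds"] -> 5520.0
```

Putting the metric name in an attribute (rather than an element name) avoids XML element names containing dots, such as op.readPerson.success, which are awkward to work with in some toolkits.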

To Do

Rewrite the detailed story with specifics, when available.


2 Comments

  1. Unknown User (richard.moon@sungardhe.com)

    What I heard in the discussion today is that each service would need to expose a specialized 'getEdUnifyMonitoringMetrics' operation.  This operation would return the kind of information that Steve's Academic History Web Service provides such as average response time, number of successful invocations, etc.  It would also act as a ping to tell you whether the service is up and running.

    In the example I gave, the IMS Person Management Service (PMS) would expose an additional monitoring operation that is not in the IMS specification.  That monitoring operation would need to report on all service operation invocations.  In other words, readPerson(), readPersons(), and updatePerson() metrics would all need to be included(?)

    It may be asking a lot for vendors and institutions to implement logic to capture the metrics and expose them via a new service operation.  This is not trivial.  In fact it probably should be delegated to some higher level ESB / Web Services Manager infrastructure component.  Again, it is asking a lot to implement such a component.

    At least for the first phase, we need to focus on simply registering the thousands of web services that are already out there without having to enhance them in any way.

    1. Great point.

      Whether such an operation is exposed may be tied to the question of compliance – with the extent of data (status and metrics) returned mapped to degrees of compliance.

      Pushing the implementation of the operation into lower, framework layers would be a goal.