Reveal API and Service design pattern v1.0

Document Revision

Date       | Version | Description of change | Author
2021-09-30 | 1.0     | Initial document      | Stefanus Heath

Overview

This specification outlines the design patterns and architectural structures of the Reveal refactor project. It defines the structure that Reveal microservices will conform to in order to create a stable, unified development environment.

The design aims to bring in proven patterns from the microservices and event-driven communities, while applying successful CI/CD principles to ensure a stable, tested and secure application.

Definitions, Acronyms and Abbreviations

Name     | Description
Plan     | Plan is the grouping of tasks against subjects
Location | 
Event    | 
Task     | A task is the object defining the linking of a form to a target subject within the plan
Patient  | 
Client   | OpenSRP server name for FHIR patient

Data Types

The system will maintain data types in the same manner as described at https://www.hl7.org/fhir/datatypes.html#code.
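For illustration, a minimal sketch of how a FHIR-style code could be modelled in Java, assuming Jackson for JSON serialisation; the EntityStatus type is hypothetical and simply echoes the entity_status values defined later in this document:

import com.fasterxml.jackson.annotation.JsonValue;

/**
 * Hypothetical example of a FHIR-style "code" value: a case-sensitive
 * token with no surrounding whitespace, serialised to JSON as its
 * lowercase string form.
 */
public enum EntityStatus {
  ACTIVE("active"),
  DELETED("deleted");

  private final String code;

  EntityStatus(String code) {
    this.code = code;
  }

  @JsonValue
  public String getCode() {
    return code;
  }
}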

Java Spring Services

Each Java-based microservice will use OpenJDK Java 11 to create a JAR file that will be run using the Google Distroless Java 11 containers with Undertow driving the web service.

Basics

Included libraries:

  • Spring Boot

  • Spring Data JPA

  • Spring Data Envers

  • Spring Cloud Sleuth

  • Spring Kafka

  • Spring Security OAuth

  • Spring Batch

  • Hibernate

  • springdoc-openapi-core

  • spring-boot-starter-undertow

  • spring-boot-starter-actuator

  • EXCLUDE spring-boot-starter-tomcat

  • org.keycloak:keycloak-authz-client

  • sonarqube

  • org.keycloak:keycloak-spring-boot-2-adapter

  • org.springframework.security.oauth:spring-security-oauth2

  • org.owasp:dependency-check-gradle



System health

Each service will implement the actuator pattern as provided by Spring Boot Actuator, from which the container and application health can be monitored. This will allow for ops-less operation on platforms such as Kubernetes, which can use the health endpoints to maintain StatefulSets of the running application.

####################################
# actuator configs
####################################
management.endpoints.web.exposure.include=health
management.endpoint.health.show-details=always

Distributed tracing

The system will implement the OpenTracing project (https://opentracing.io) via Spring Cloud Sleuth to publish trace events to Kafka, from where Zipkin can be used to analyse data-flow trends.

####################################
# tracing configs
####################################
spring.application.name=bar
spring.zipkin.sender.type=kafka

RESTful APIs

Documentation

All APIs will be documented and conform to the OpenAPI 3 specification, which can be viewed at https://swagger.io/specification. Each service will use the springdoc-openapi library (https://springdoc.org) to auto-generate Swagger documentation, which will be published in real time from the running service.

The configuration will be controlled by the application.properties file:

####################################
# springdoc configs
####################################
springdoc.api-docs.path=/api-docs
springdoc.version=1.0.0
springdoc.swagger-ui.enabled=false
spring.main.allow-bean-definition-overriding=false
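For illustration, a minimal sketch of a configuration bean that feeds the springdoc.version property into the generated document; the class name and API title are illustrative, not part of the specification:

import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenApiConfig {

  // Exposes the API metadata that springdoc publishes at /api-docs.
  @Bean
  public OpenAPI revealOpenApi(@Value("${springdoc.version}") String version) {
    return new OpenAPI()
        .info(new Info().title("Reveal Plan Service").version(version));
  }
}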

 

Methods

Reveal APIs will use the following methods:

Method | Description
GET    | Request to retrieve data, resulting in a 200 response with a JSON representation of the request's result.
POST   | Submission to create data, resulting in a 201 response with a JSON representation of the created object.
PUT    | Submission to amend data, resulting in a 200 response with a JSON representation of the updated object.
PATCH  | Submission of a partial object to effect a partial update to an object (such as a status change), resulting in a 200 response with a JSON representation of the entire updated object.
DELETE | Request to soft delete data, resulting in a 204 response to confirm success.
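For illustration, a minimal sketch of a controller following these conventions; Plan and PlanService are hypothetical types, not defined Reveal contracts:

import java.util.UUID;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/v1/plan")
public class PlanController {

  private final PlanService planService; // hypothetical service

  public PlanController(PlanService planService) {
    this.planService = planService;
  }

  // GET -> 200 with a JSON representation of the result
  @GetMapping("/{plan_id}")
  public ResponseEntity<Plan> get(@PathVariable("plan_id") UUID id) {
    return ResponseEntity.ok(planService.get(id));
  }

  // POST -> 201 with the created object
  @PostMapping
  public ResponseEntity<Plan> create(@RequestBody Plan plan) {
    return ResponseEntity.status(HttpStatus.CREATED).body(planService.create(plan));
  }

  // PUT -> 200 with the updated object
  @PutMapping("/{plan_id}")
  public ResponseEntity<Plan> update(@PathVariable("plan_id") UUID id,
                                     @RequestBody Plan plan) {
    return ResponseEntity.ok(planService.update(id, plan));
  }

  // PATCH -> 200 with the entire updated object
  @PatchMapping("/{plan_id}")
  public ResponseEntity<Plan> patch(@PathVariable("plan_id") UUID id,
                                    @RequestBody Plan partial) {
    return ResponseEntity.ok(planService.patch(id, partial));
  }

  // DELETE -> 204, soft delete only
  @DeleteMapping("/{plan_id}")
  public ResponseEntity<Void> delete(@PathVariable("plan_id") UUID id) {
    planService.softDelete(id);
    return ResponseEntity.noContent().build();
  }
}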

 

Response codes

All APIs will adhere to HTTP response codes as defined in RFC 7231 and the IANA status code registry. The full list can be viewed in the Wikipedia article "List of HTTP status codes", but this document highlights the typical codes that will be used by Reveal APIs.

Response | Description
200 OK | Standard response for successful HTTP requests.
201 Created | The request has been fulfilled, resulting in the creation of a new resource.
202 Accepted | The request has been accepted for processing, but the processing has not been completed.
204 No Content * | The server successfully processed the request, and is not returning any content.
400 Bad Request ** | The server cannot or will not process the request due to an apparent client error.
401 Unauthorized | For use when authentication is required and has failed or has not yet been provided.
403 Forbidden | The request contained valid data and was understood by the server, but the server is refusing action.
404 Not Found | The requested resource could not be found but may be available in the future.
405 Method Not Allowed | A request method is not supported for the requested resource.
409 Conflict ** | Indicates that the request could not be processed because of conflict in the current state of the resource.
410 Gone * | Indicates that the resource requested is no longer available and will not be available again.
422 Unprocessable Entity ** | The request was well-formed but was unable to be followed due to semantic errors.



* no body should be returned

** an informative description advising of the reason for the response should be returned in JSON format. See the section on Responses and error handling.

500 Internal Server Error is considered an unhandled exception and will always be treated as a bug. The aim is for the application to mature to a stable, tested state in which no unhandled exceptions are acceptable.

Codes not contained in this list should be discussed with the community before adoption.

Resources

The base_path should be the corresponding service followed by the version of the API, e.g. /base_path/v1/resource

Resource paths should be in singular, e.g. /base_path/v1/plan.  Singular has been chosen over plural to conform to the standard selected by the FHIR community.

All identifiers should be annotated and documented as {plan_id} or {task_id} in order to avoid ambiguous identifiers, even though the underlying variable is {id}. This is done purely to keep the documentation easy to read.

Request parameters

All search resources that could return more than one result should support the page, size and sort parameters provided by Spring Data JPA.

All mandatory request parameters and sorting criteria should be clearly annotated to ensure that the Swagger documents are accurate and comprehensive.
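A minimal sketch of a paged search endpoint, assuming springdoc's @ParameterObject is used to surface Spring Data's page, size and sort parameters in the generated documentation; Plan and PlanService are again hypothetical:

import org.springdoc.api.annotations.ParameterObject;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PlanSearchController {

  private final PlanService planService; // hypothetical service

  public PlanSearchController(PlanService planService) {
    this.planService = planService;
  }

  // Spring Data binds ?page=0&size=20&sort=name,asc into the Pageable.
  @GetMapping("/v1/plan")
  public Page<Plan> search(@RequestParam(name = "name", required = false) String name,
                           @ParameterObject Pageable pageable) {
    return planService.search(name, pageable);
  }
}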

Responses and error handling

All HTTP responses should be in JSON format, even error responses. When a response is 400, 409 or 422, a detailed message outlining the reason for the response should be returned.
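A minimal sketch of centralised error handling returning such a JSON body; the ErrorResponse shape is illustrative, not a defined Reveal contract:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class ApiExceptionHandler {

  // Illustrative error body: {"status": 400, "message": "..."}
  public static class ErrorResponse {
    public final int status;
    public final String message;

    public ErrorResponse(int status, String message) {
      this.status = status;
      this.message = message;
    }
  }

  // 400: the client sent an apparently invalid request.
  @ExceptionHandler(IllegalArgumentException.class)
  public ResponseEntity<ErrorResponse> handleBadRequest(IllegalArgumentException ex) {
    return ResponseEntity.status(HttpStatus.BAD_REQUEST)
        .body(new ErrorResponse(HttpStatus.BAD_REQUEST.value(), ex.getMessage()));
  }
}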

Security

Each service should include Spring Security OAuth2 in order to integrate with Keycloak, using RBAC and scopes to assign permissions.  The platform will use standard JWT tokens to authenticate and authorise requests.

Policies will drive the Policy Decision Point, with scopes assigned via roles.



The configuration will be controlled by the application.properties file:

####################################

# keycloak configs

####################################

keycloak.enabled=true
keycloak.realm=reveal
keycloak.auth-server-url=https://sso-ops.akros.online/auth
keycloak.resource={KEYCLOAK_RESOURCE}
keycloak.credentials.secret={KEYCLOAK_SECRET}
keycloak.bearer-only=true
keycloak.public-client=false
keycloak.cors=true
keycloak.ssl-required=external
security.ignore.paths[0]=/actuator
security.ignore.paths[1]=/actuator/health
security.ignore.paths[2]=/api-docs
security.ignore.paths[3]=/api-docs.yaml
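For illustration, a sketch of the Spring Security wiring following the Keycloak adapter's documented pattern; the permitted paths mirror the security.ignore.paths entries above, and the class name is illustrative:

import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;

@KeycloakConfiguration
public class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

  // Map Keycloak roles into Spring Security authorities.
  @Autowired
  public void configureGlobal(AuthenticationManagerBuilder auth) {
    KeycloakAuthenticationProvider provider = keycloakAuthenticationProvider();
    provider.setGrantedAuthoritiesMapper(new SimpleAuthorityMapper());
    auth.authenticationProvider(provider);
  }

  @Override
  protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
    return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
  }

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    super.configure(http);
    http.authorizeRequests()
        .antMatchers("/actuator/**", "/api-docs/**").permitAll()
        .anyRequest().authenticated();
  }
}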

 

Entity Abstract Globals

Envers
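Entity change history is expected to be audited with Hibernate Envers, pulled in via the Spring Data Envers library listed under Basics. A minimal sketch of an audited entity, using a hypothetical Plan:

import java.util.UUID;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

// Envers records every change to a corresponding audit table
// (suffixed _AUD by default) alongside the entity table.
@Entity
@Audited
public class Plan {

  @Id
  private UUID identifier;

  private String title;
}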

Entity management

Each database table will contain the following columns to manage the records on a row level:

  • entity_status ENUM (active, deleted), used to determine whether a record is active or has been soft-deleted. All delete actions will change the entity status to deleted but will never delete the actual database record. The default action of any fetch_all or fetch should disregard records whose entity status is deleted, and the system should treat them as non-existent (see the sketch after this list).
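A minimal sketch of one way to enforce this convention at the entity level, assuming Hibernate's @Where annotation is used to filter soft-deleted rows out of every fetch; the Task entity shown is hypothetical:

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Where;

// Rows are never physically deleted; Hibernate appends the @Where
// restriction to generated SQL so deleted records behave as non-existent.
@Entity
@Where(clause = "entity_status = 'active'")
public class Task {

  @Id
  private UUID identifier;

  // Stored as the lowercase tokens 'active' / 'deleted'.
  @Column(name = "entity_status", nullable = false)
  private String entityStatus = "active";

  public void softDelete() {
    this.entityStatus = "deleted";
  }
}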

Event Production

Kafka connector

Apache Kafka and Apache Zookeeper will be started up as part of the platform.

Topic Strategy

In order to facilitate non-blocking processing, the producers and consumers of the various domains should be configurable with independent topics. This will allow differing archival and retention strategies to be applied to the various event types.
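For illustration, a minimal sketch of declaring independent topics with their own retention and cleanup strategies using Spring Kafka's TopicBuilder; the topic names, partition counts and retention values are assumptions (aggregate_snapshots echoes the Kafka configuration below):

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfiguration {

  // Plan events: retained for 30 days.
  @Bean
  public NewTopic planEvents() {
    return TopicBuilder.name("plan_events")
        .partitions(3)
        .replicas(1)
        .config(TopicConfig.RETENTION_MS_CONFIG, String.valueOf(30L * 24 * 60 * 60 * 1000))
        .build();
  }

  // Snapshots: compacted so the latest aggregate state is always retained.
  @Bean
  public NewTopic aggregateSnapshots() {
    return TopicBuilder.name("aggregate_snapshots")
        .config(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT)
        .build();
  }
}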

Chronological Event Map Reduce

When an event is persisted into the system via the REST endpoint, the system will read the event and select the relevant aggregation records to affect. If the eventDate is greater than the last event date used to calculate the current aggregate, the event producer will continue the aggregation from the current aggregate. If the eventDate is less than the last event date used to calculate the current aggregate, the producer will retrieve the latest snapshot before the eventDate plus all the events since that snapshot, and calculate the new state of the aggregate by map-reducing the list of events in chronological order.
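A minimal sketch of this logic, using hypothetical Event, Aggregate and EventStore shapes; the real domain types are richer:

import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical shapes for the sketch.
interface Event { Instant eventDate(); }
interface Aggregate { Instant lastEventDate(); Aggregate apply(Event event); }
interface EventStore {
  Aggregate latestSnapshotBefore(Instant date);
  List<Event> eventsSince(Instant date);
}

public class ChronologicalAggregator {

  private final EventStore eventStore;

  public ChronologicalAggregator(EventStore eventStore) {
    this.eventStore = eventStore;
  }

  public Aggregate onEvent(Aggregate current, Event incoming) {
    if (!incoming.eventDate().isBefore(current.lastEventDate())) {
      // In-order event: continue aggregating from the current state.
      return current.apply(incoming);
    }
    // Out-of-order event: replay from the latest snapshot before the event.
    Aggregate snapshot = eventStore.latestSnapshotBefore(incoming.eventDate());
    List<Event> replay = new ArrayList<>(eventStore.eventsSince(snapshot.lastEventDate()));
    replay.add(incoming);
    replay.sort(Comparator.comparing(Event::eventDate));
    Aggregate state = snapshot;
    for (Event event : replay) {
      state = state.apply(event);
    }
    return state;
  }
}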

Snapshots

The system should use a Spring @Scheduled cron expression to schedule the generation of snapshot events for each aggregate at the primary level. The snapshot event should contain the current state of the aggregate as a base for reconstituting the state. When a new event is synchronised from a client, the consumer will scan for the newest snapshot before the date of the event affecting the aggregate and calculate the state from that point (see the sketch after the configuration below).

####################################

# Kafka configs

####################################

reveal.events.snapshot.scheduled.cron = 0 15 10 15 * ?

reveal.events.snapshot.topic = aggregate_snapshots

reveal.events.snapshot.group-id = reveal-server-snapshots

reveal.events.snapshot.auto-offset-reset = earliest
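A minimal sketch of the scheduled snapshot producer wired to the configuration above; Aggregate and AggregateRepository are hypothetical, and @EnableScheduling is assumed on a configuration class:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class SnapshotScheduler {

  private final KafkaTemplate<String, Aggregate> kafkaTemplate;
  private final AggregateRepository repository; // hypothetical
  private final String topic;

  public SnapshotScheduler(KafkaTemplate<String, Aggregate> kafkaTemplate,
                           AggregateRepository repository,
                           @Value("${reveal.events.snapshot.topic}") String topic) {
    this.kafkaTemplate = kafkaTemplate;
    this.repository = repository;
    this.topic = topic;
  }

  // Fires on the cron defined in reveal.events.snapshot.scheduled.cron.
  @Scheduled(cron = "${reveal.events.snapshot.scheduled.cron}")
  public void publishSnapshots() {
    repository.findAllPrimaryAggregates().forEach(aggregate ->
        kafkaTemplate.send(topic, aggregate.getId().toString(), aggregate));
  }
}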

 

Development Pipeline

The repository will contain a pipeline file that configures a pipeline within Azure DevOps. The pipeline will follow this pattern:

  • Pre-config

  • Gradle build

    • JUnit unit tests -- FAIL-POINT for failed unit tests

    • JaCoCo code coverage report -- FAIL-POINT for low coverage

    • SonarQube static code analysis -- FAIL-POINT for low quality

    • OWASP dependency security scanner -- FAIL-POINT for known vulnerabilities

  • Docker build

    • Publish container to relevant repository

  • Deployment

    • Auto deploy to integration test environment -- FAIL-POINT for deployment failure

    • Newman integration tests -- FAIL-POINT for failed integration tests

Endpoint testing methodology

After each build, the service should be started up for integration testing, after which a series of live integration tests is run against the API via Newman. Each resource should be tested in the following manner:

POST/PUT

  • Each POST and PUT endpoint should be tested with a positive object to confirm the 201 (POST) or 200 (PUT) status

  • Each mandatory field should be tested with positive and negative values, as well as a missing value, to confirm handling of 400 scenarios

GET

  • Each path parameter should be tested independently

  • All path parameters should be tested with default settings

  • All path parameters should be tested with negative settings

  • Each resource should be tested to confirm 200, 404, 410 statuses

PATCH

  • Each resource that supports patching should be tested to confirm 200 status

  • Each variable

DELETE

Deployed platform

Utility Services

The Reveal platform will contain the following supporting services:

Service          | Purpose
PostgreSQL       | Database for core data
Redis            | Cache for Superset
Apache Zookeeper | Configuration service for Apache Kafka
Apache Kafka     | Event store for Reveal
Prometheus       | Database for performance metrics
Keycloak         | Authentication and Authorization provider
Superset         | Data visualisation platform
Grafana          | Performance metric visualisation platform
Zipkin           | Tracing visualisation platform


