Version: v1

Introduction

Every microservice in the Godspeed framework has three fundamental abstractions, and the developer needs to work with just these three.

  • Events: Events trigger workflows. They are generated by event sources such as REST endpoints, gRPC, message buses, webhooks, websockets, S3, and more.
  • Workflows: Workflows are triggered by events. Beyond executing business logic, they provide orchestration over datasources and microservices, as well as data/API federation: they use datasources to store or retrieve data, join across various datasources, transform data, emit events, and send responses. The framework provides a YAML DSL with some inbuilt workflows. If YAML does not suffice for a particular case, developers can currently put JS/TS workflows alongside YAML workflows and use them. Support for other languages is planned for the future.
  • Datasources: Datasources are locations where data can be stored or read from. For example, API datasources (another microservice or a third party), datastores (RDBMS, document, key-value), file systems, S3 storage, etc. A microservice can use multiple datasources. The framework also provides abstractions for authn/authz, making it easy for the developer to express these in a low-code manner.
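
To give a feel for how the three abstractions fit together, here is an illustrative (not normative) sketch of an event and the workflow it triggers. The event key format, the workflow name `com.biz.greet_user`, and the inbuilt function `com.gs.return` are assumptions for the example, not exact framework syntax:

```yaml
# events/greet.yaml — an HTTP GET event (key format is illustrative)
/greet/:name.http.get:
  fn: com.biz.greet_user     # the workflow this event triggers (hypothetical name)

# functions/com/biz/greet_user.yaml — the workflow the event points to
summary: Greet the user by name
tasks:
  - id: greet
    fn: com.gs.return        # an inbuilt workflow assumed here for the sketch
    args: <% 'Hello ' + inputs.params.name %>
```

The event declares *when* something happens; the workflow declares *what* happens, composed from inbuilt or custom tasks.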

These abstractions allow the developer to focus purely on their business logic. 99.9%–100% of the typical functionality needed by the developer is covered by the framework's YAML-based DSL, so devs can forget about the low-level plumbing that accounts for roughly 90% of the work in a typical app development scenario. The framework aims to handle all of this low-level functionality and save the developer the effort of doing it themselves. Examples include creating controllers for endpoints, endpoint authentication/authorization, input validation, auto-telemetry with distributed context, setting up DB clients and authorizing DB access, authenticating third-party APIs, key management, creating Swagger docs or Postman collections, creating a basic test suite based on the documentation, etc.

There is a standard project structure that gives the developer a kickstart to their project, along with reference code/declarations for the kinds of things they can do using the framework.

Developer's work

The developer will use the CLI provided by the framework to set up a new microservice project and start developing. They will configure the events, datasources, and workflows for the required functionality, along with mappings, environment variables, and common configurations such as telemetry. To configure the datasources:

  • For datastores: they will either define the DB schema or autogenerate it from the existing database using the CLI.
  • For APIs: they will need to define the API's OpenAPI schema or provide a URL to it.
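
For instance, the two cases above might translate into datasource declarations along these lines; the file names, `type` values, and keys here are illustrative assumptions, not exact framework syntax:

```yaml
# datasources/main_db.yaml — a datastore whose schema is defined by hand
# or autogenerated from an existing database via the CLI
type: datastore
# ...connection details and model/schema references go here...

# datasources/payments_api.yaml — an external API described by its OpenAPI schema
type: api
schema: ./schemas/payments.openapi.yaml   # or a URL to the OpenAPI document
```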

Salient Features

Note

Some of the features mentioned here are in the product roadmap and planned for upcoming releases.

Schema driven development

The developer specifies the API and data schemas up front to start development.

YAML based DSL and configurations

We have a YAML-based DSL which makes it much easier and more succinct to express policies, business logic, and configurations. The resulting code is shorter and easier to comprehend than conventional program code, even for new learners. This DSL can be further customized by developers to meet custom requirements.
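
To give a flavour of the DSL, a workflow expressing a simple conditional policy might look roughly like this. The inbuilt function names `com.gs.switch` and `com.gs.transform` are assumed for the sketch, as is the task layout:

```yaml
summary: Apply a discount policy by customer tier
tasks:
  - id: pick_discount
    fn: com.gs.switch                       # branch on a value (assumed inbuilt)
    value: <% inputs.body.customer_tier %>
    cases:
      gold:
        - id: gold_discount
          fn: com.gs.transform              # shape a response value (assumed inbuilt)
          args: { discount: 0.2 }
      default:
        - id: no_discount
          fn: com.gs.transform
          args: { discount: 0 }
```

The policy reads as data: the branching structure is visible at a glance, without controller or routing boilerplate.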

Multi datastore support

The same model configuration and unified CRUD API (including full-text search and autosuggest) will provide interfaces to multiple kinds of datastores (SQL or NoSQL). The API aims to provide validation, relationship management, transactions, denormalization, and multilingual support. Each integration will support whatever subset of this functionality the nature of the store allows.

Data validation

The framework provides validation of third-party API requests and responses, datastore queries, and the requests and responses of its own API endpoints. The developer only needs to specify the schemas of the third-party APIs, their own microservice's API, and the datastore models; the rest is taken care of by the framework. For more complex validation scenarios, where customer journeys require conditional validation of incoming requests based on some attributes (in the database or in the query, i.e. subject, object, environment, payload), the developer can add such rules to the application logic as part of the workflows.
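
Concretely, the schema the developer specifies can be plain JSON Schema attached to the event definition, against which the framework validates the request body and response. The event key, workflow name, and exact nesting below are illustrative assumptions:

```yaml
# events/create_user.yaml — request/response schemas on an event (illustrative)
/users.http.post:
  fn: com.biz.create_user        # hypothetical workflow name
  body:
    content:
      application/json:
        schema:
          type: object
          required: [name, email]
          properties:
            name: { type: string }
            email: { type: string, format: email }
  responses:
    200:
      content:
        application/json:
          schema:
            type: object
            properties:
              id: { type: string }
```

A request missing `email`, for example, would be rejected by the framework before the workflow ever runs.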

Authentication

The microservice framework authenticates every incoming request based on a valid JWT and extracts the user's role and other info for further processing. An IAM provider like ORY Kratos can be integrated into the platform to provide the identity service. It generates a JWT that includes the user ID, profile information, and roles; this token is consumed by the microservices for user validation.
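
A minimal sketch of what the JWT verification settings might look like in the service's configuration; the `authn.jwt` key names, the issuer URL, and the environment-variable name are all assumptions for illustration:

```yaml
# config/default.yaml (illustrative)
authn:
  jwt:
    issuer: https://kratos.example.com/        # the IAM provider issuing tokens
    audience: my-microservice                  # this service's expected audience claim
    secretOrKey: <% process.env.JWT_SECRET %>  # verification key taken from the environment
```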

Authorization (Planned)

Each microservice will do the job of authorization for any request. Developers will write authorization rules for every microservice in simple configuration files. This will cover not only API endpoint access but also fine-grained data access in datastores. It will integrate with third-party authz services in a pluggable way, behind abstractions.

Distributed transactions (Planned)

Each domain’s orchestrator is able to use the Saga pattern to ensure distributed transactions across multiple microservices.

Autogenerated documentation

The framework provides autogenerated documentation via the CLI.

Autogenerated CRUD API (Planned)

The framework provides autogenerated CRUD APIs from the database model. Generated APIs can be extended by developers as per their needs.

Autogenerated test suite

The framework provides an autogenerated test suite for APIs via the CLI.

Multiple languages support

When YAML is not enough for a corner case, developers can write custom business logic in any language. If written in JS/TS, the code can be placed within the same microservice project. Other language support will work the same way and is planned for the future.

Observability

The framework provides automatic observability support with correlation for modern distributed systems via the OpenTelemetry spec, working in conjunction with the service mesh in use. The developer can extend this with customized observability, and it can integrate with any tool that supports OpenTelemetry.

Logging

The inbuilt logging mechanism logs both sync request/response cycles and async events, for both success and failure scenarios.

Monitoring

The framework allows the developer to monitor custom business metrics along with application-level metrics such as latency, successes, and failures.

Tracing

Every incoming sync or async request carries trace information in its headers. The same is propagated onward by the microservice framework when it makes a sync or async call to another service.
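
With OpenTelemetry, this trace information typically travels in the W3C Trace Context `traceparent` header, which would be propagated unchanged into outgoing calls so that spans across services share one trace. The values below are sample data:

```yaml
# W3C Trace Context header on an incoming request (sample values)
# format: version-<trace-id>-<parent-span-id>-<flags>
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6bb7177ba3f2-01
```

The trace-id stays constant across every hop; each service contributes its own span-id, which lets the observability backend stitch the request path back together.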