Standardization and Maintainability


As part of the series on the Pillars of Developer Infrastructure, this is our first article on Standardization and Maintainability.

CTOs and their teams face the challenge of ensuring productivity and smooth operations across diverse teams and projects. In software development, every developer and team brings unique experiences and styles, leading to differences in project implementations. As services grow and developers join and leave the team, this becomes a serious problem.

The Challenge: The Dilemma of Diversity

Without standardized infrastructure, companies descend into chaos, technical debt, and inefficiency. Diverse implementations and team skill sets hinder consistency. Large tech enterprises develop standardized infrastructure, but smaller companies struggle. This chaos leads to implementation mistakes, longer issue resolution times, and difficulties for new developers. Increased hiring and switching costs, dependency on tribal knowledge, roadmap delays, and high operational costs result in lost market opportunities, direct or indirect monetary losses, and reduced competitive advantage, and can be fatal for early- to mid-stage businesses.

Following guardrails and standardization is not a vitamin; it is a painkiller.

Let's delve into examples highlighting challenges without standardization, focusing on API development and infra automation. We will compare non-standardized examples with standardized approaches.

API Development without Standardization

In third-generation frameworks like Node.js, developers have the liberty to do anything, but this freedom comes at a cost. In most cases, two Node.js projects end up looking vastly different, leading to challenges in standardization.

Example A: Project Scaffolding

Developer 1, Project 1
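A hypothetical sketch of the modular layout Developer 1 might choose (the directory and file names are illustrative, not taken from an actual project):

project-1/
├── src/
│   ├── controllers/
│   │   └── user.controller.js
│   ├── models/
│   │   └── user.model.js
│   ├── routes/
│   │   └── user.routes.js
│   └── app.js
└── package.json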

Developer 2, Project 2
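And a hypothetical sketch of the flatter layout Developer 2 might choose (again illustrative):

project-2/
├── index.js      // routes, handlers, and DB access together
├── db.js
├── helpers.js
└── package.json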

In the above example, Project 1 follows a more modular approach, separating concerns into controllers, models, and routes, while Project 2 opts for a flatter structure. Imagine this startup growing to build 25 microservices, each structured differently, and the resulting chaos in maintenance, automation, and developer churn.

Example B: API Request Validation

Developer 1, Project 1: Validation in Controller Function Code

const exampleController = (req, res) => {
  const { name, email } = req.body;

  // Basic validation
  if (!name || typeof name !== 'string' || name.length < 3 || name.length > 30) {
    return res.status(400).json({ error: 'Name must be a string between 3 and 30 characters.' });
  }
  if (!email || typeof email !== 'string' || !email.includes('@')) {
    return res.status(400).json({ error: 'A valid email is required.' });
  }

  // Proceed with business logic if validation passes
  res.status(200).json({ message: 'Validation successful', data: req.body });
};

// Export the controller function
module.exports = exampleController;

In this example, validation is performed directly in the controller code, which is a bad practice. It lacks a single source of truth and schema-driven development. This approach requires separate Postman Collections and increases the risk of mistakes across multiple routes. Additionally, changing the standard 400 response status code for validation failures necessitates updates across all routes, impacting maintainability.

Developer 2, Project 2: Validation in middleware via functions

// Import required modules
const express = require('express');
const Ajv = require('ajv');
const addFormats = require('ajv-formats');

const app = express();
app.use(express.json());

const ajv = new Ajv();
addFormats(ajv); // required for the 'email' format

// JSON Schema acting as the validation contract for this route
const schema = {
  type: 'object',
  properties: {
    name: { type: 'string', minLength: 3, maxLength: 30 },
    email: { type: 'string', format: 'email' },
  },
  required: ['name', 'email']
};

// Generic middleware that validates req.body against a schema
const validateRequest = (schema) => {
  const validate = ajv.compile(schema); // compile once, reuse per request
  return (req, res, next) => {
    if (!validate(req.body)) {
      return res.status(400).json({ errors: validate.errors });
    }
    next();
  };
};

// Sample route that uses the validation middleware
app.post('/users', validateRequest(schema), (req, res) => {
  const { name, email } = req.body;

  // Simulate user creation
  const newUser = { id: Date.now(), name, email };

  // Respond with the created user
  res.status(201).json(newUser);
});

// Start the server
const PORT = 3000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});

While Developer 2 has taken a better approach, it is still not best practice. The best approach is covered in our earlier blog on schema-driven development and a single source of truth. Also check out this short video where I talk about the topic.

Example C: Infra Automation using Terraform & Tagging of Resources

The example below describes an infrastructure automation use case for adding tags to resources created by Terraform modules. Imagine I am deploying a complex infra with multiple moving pieces in a cloud provider and my organisation wants to add standard tags to each resource we provision. Each tag key has the following structure: `${department}-${region}-${environment}-${tag_name} = ${tag_value}`

Let's say we have 30 modules in our codebase that deploy 30 types of resources.

Approach A: Each developer sets tags from the input without checking whether the input is valid

main.tf

resource "generic_resource" {

  resource_name = "my-resource"

// tags are directly taken as input variables and assigned to 
// key values
  tags = {
    dept-us-east-1-dev-name        = var.name
    dept-us-east-1-dev-team        = var.team
    dept=us-east-1-dev-product_id  = var.product_id
    }
}

This approach has a major flaw: there is no validation of whether the input given by the module user follows the tagging standard my organization wants. This increases the chances of mistakes because there is no guardrail checking the tags. Hardcoding the tag keys this way is also a bad practice; it creates significant technical overhead if they ever need to change. It is important to keep key-value pairs as modular as possible so that changing them in one place is enough to update everything.

Approach B: The developer handles tag generation and validation in the module's code

// Prefixing var.dept, var.reg, and var.env to each tag name,
// regenerating the tags within each module's own code, and
// ensuring in the variable definitions that compulsory inputs
// are non-null

main.tf

resource "generic_resource" "this" {
  resource_name = "generic resource name"

  tags = {
    "${var.dept}-${var.reg}-${var.env}-name" = var.name
    "${var.dept}-${var.reg}-${var.env}-team" = var.team
    "${var.dept}-${var.reg}-${var.env}-id"   = var.id
  }
}

terraform.tfvars

dept = "org-vertical"
reg  = "us-east-1"
team = "developer"
env  = "test"
id   = "id"

In this approach, the developer takes input parameters like department, region, and environment along with simple key-value pairs for tags, and constructs the standardized tag keys inside the module. They may also have written Terraform tests to ensure that tags are set properly. This approach is better than the previous one because a guardrail is deployed within the module. But now my team of 8 developers has to build 30 modules, and each developer will have to repeat this effort. Repeated effort translates directly into inefficiency, delays, and mistakes. Can we do the job once and reuse it everywhere?

Examples of best practices brought by adopting standardized guardrails across all projects

Standardization begins with the foundations of your projects, from scaffolding to the guardrails your team employs, extending to the abstractions and configurations within your project's architecture.

Guardrails serve as the predefined pathways for your tech team, establishing clear boundaries that delineate what actions should and should not be taken. In software development, key guardrails include Schema-Driven Development, Configure Over Code, Security, and Decoupled Architecture.

If you use Godspeed's Meta-Framework, your project and its components will have a standardised structure, because the Meta-Framework brings four standard guardrails into API development: Schema-Driven Development, Configure Over Code, Security, and Decoupled Architecture.

Example A: Project Scaffolding

Here is what the scaffolding of all projects built using the Meta-Framework looks like.
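A rough sketch of that standard layout, based on Godspeed's documented project template (exact file and folder names may vary by version):

my-project/
├── config/            // environment and application configuration
├── src/
│   ├── events/        // API/event schemas (single source of truth)
│   ├── functions/     // event handlers containing business logic
│   ├── eventsources/  // e.g. Express, Apollo GraphQL configurations
│   └── datasources/   // e.g. databases and external APIs
└── package.json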

Example B: Request Input Validation

Again, here is a Godspeed Meta-Framework example of defining event/API schemas in a standard, universal format that is independent of event sources. A single API definition is exposed via both the Express and Apollo GraphQL servers:

http & graphql.post./apply/loan:
  summary: Apply for loan 
  fn: los.apply_for_loan #event handler
  body: #swagger/json-schema
    content:
      multipart/form-data:
         schema:
          $ref: '#/definitions/los/apply_loan'
  responses: #swagger/json-schema
    200:
      content:
        application/json:
          schema:
            type: object

Whether it is event sources or data sources, Godspeed's Meta-Framework enables you to efficiently reuse or create standardized plugins across your organization, spanning multiple projects. In addition to covering Schema-Driven Development and middleware as mentioned earlier, Godspeed extends its support to other crucial components such as authentication (authn), authorization (authz), OTEL-based telemetry, and more.

Curious to try out these best practices? You can get started in five minutes by going through the Getting Started section of Godspeed.

Example C: Infrastructure as Code - A common Terraform Tag Generation Module

As an alternative strategy to the tag handling in the above Terraform examples, let's develop a common tag-generation module, call it to generate tags, and then pass the generated tags to each module. This way the end user's inputs are still validated, and module developers don't have to make any effort for tag checking or generation.

module-tags.tf

locals {
  tags = merge(
    {
      "org:dept" = var.dept
      "org:team" = var.team
      "org:reg"  = var.reg
      "org:env"  = var.env
      "org:id"   = var.id
    },
    var.extra_tags
  )
}

output "final_tags" {
  description = "Output of all the standard tags"
  value       = local.tags
}

variable "dept" {
  description = "The department for which infrastructure is provisioned"
  type        = string
}

variable "team" {
  description = "The name of the team for which infrastructure is provisioned"
  type        = string
}

variable "reg" {
  description = "The region in which infrastructure is provisioned"
  type        = string
}

variable "env" {
  description = "The environment in which infrastructure is provisioned"
  type        = string
}

variable "id" {
  description = "The product ID for which infrastructure is provisioned"
  type        = string
}

variable "extra_tags" {
  description = "Any additional, non-standard tags to merge in"
  type        = map(string)
  default     = {}
}

The above scenario depicts how standardization can be implemented across different Terraform modules.

We create a dedicated module for tags, which only needs to be called during the execution of other modules. We initialize all the tags inside a locals block and use variables to refer to the values. In this way, tags stay standardized across the codebase, which promotes maintainability. A usage sketch follows below.
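Here is a minimal sketch of how a consuming project might wire this up (the source paths and the generic_resource module are hypothetical):

module "tags" {
  source = "./module-tags"

  dept = "org-vertical"
  team = "developer"
  reg  = "us-east-1"
  env  = "test"
  id   = "id"
}

module "generic_resource" {
  source        = "./modules/generic_resource"
  resource_name = "my-resource"

  // Standard tags are generated and validated once, then reused everywhere
  tags = module.tags.final_tags
}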

Key Benefits of Standardization

Better maintainability via guardrails

When all projects and automation implementations look the same, and all applications are built on standardized deployment and monitoring infrastructure, a company is highly productive, agile, and cost-effective, ready to take on any challenge with swiftness and nimbleness.

Streamlined Development and operations

Every implementation, irrespective of language or project specifics, adheres to a unified and consistent methodology. This enables seamless transitions, allowing developers to move between projects effortlessly.

Focus on the What, not the How

Because all the infrastructure for API development is laid out with best practices and made easy to use, software development becomes an efficient and cohesive experience across various technologies, enabling even junior developers to contribute much more.

Financial and Competitive Benefits

Reduced development costs and inefficiencies, reduced delays, reduced bugs, and reduced dependencies on tribal knowledge lead to a much more powerful, lean, and fast-moving organization.

Embracing a World of Standardization

10X tech orgs are built on standardization.

Standardization is not just a practice; following guardrails and standardization is not a vitamin but a painkiller, or even a lifesaver. It is a commitment to an efficient and unified approach. With Godspeed, the challenges posed by diverse frameworks dissolve, giving rise to a standardized and maintainable development ecosystem that stands as a testament to the power of a fourth-generation meta-framework.

The ecosystem is moving towards greater democratization and standardization. Godspeed's Meta-Framework is a giant step in that direction. Take a sneak peek at the future of software development with Godspeed – where standardization is not just a goal but a reality.

A passing thought for you: do you think standardization is a lifesaver, a painkiller, or a vitamin?

Hope you enjoyed the read. More to come!