Adding a Resource to Inventory#

This section covers the process of adding a new resource to Inventory, which involves extending the database schema and adding the logic needed to translate protobuf definitions into database entries, and vice versa.

Protobuf Definitions#

The first step is to define the new resource in the protobuf files, which are located under inventory/api/. Assume that a new resource called localaccount is being added: a new folder called localaccount is created to host the file localaccount.proto. Ensure that the protobuf definition looks like this:

message LocalAccountResource {
   option (ent.schema) = {gen: true};
   option (infrainv.schemaExtension) = {
      indexes: [
         {
            unique: false
            fields: [
               "username",
               "tenant_id"
            ]
         }
      ]
   };
   // resource identifier
   string resource_id = 1 [
      (ent.field) = {unique: true},
      (buf.validate.field).string = {
         pattern: "^localaccount-[0-9a-f]{8}$"
         max_bytes: 21
      },
      (buf.validate.field).ignore = IGNORE_IF_UNPOPULATED
   ];
   // Username provided by admin
   string username = 2 [
      (ent.field) = {
         optional: false
         immutable: true
      },
      (buf.validate.field).string = {
         pattern: "^[a-z][a-z0-9-]{0,31}$"
         max_bytes: 32
      }
   ];
   // Tenant Identifier.
   string tenant_id = 100 [
      (ent.field) = {
         immutable: true
         optional: false
      },
      (buf.validate.field).string = {
         uuid: true
         max_bytes: 36
      },
      (buf.validate.field).ignore = IGNORE_IF_UNPOPULATED
   ];
}

Note

When defining a new resource, use the ent options to trigger the related protoc plugin, which generates the ORM code for the resources with entgo together with the corresponding changes to the database schema.

The infrainv.schemaExtension options are custom extensions built as part of the Inventory project. In this case, they are used to define the indexes of the resource.

The buf.validate options define the validation rules for the fields and leverage the protovalidate plugin. Each section can be further customized through the options exposed by each plugin.

Note

A resource should always have at least four fields: resource_id, tenant_id, created_at, and updated_at. The created_at and updated_at fields are managed automatically by the Inventory service, but they are required even when the resource is never modified after creation.
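
For reference, the buf.validate rules shown above are enforced at run time through the protovalidate library. The following is a minimal sketch of that flow, assuming a placeholder import path for the generated Go bindings:

package main

import (
   "fmt"

   "github.com/bufbuild/protovalidate-go"

   // Placeholder import path; the real bindings are generated under pkg/api/localaccount.
   localaccountv1 "example.com/inventory/pkg/api/localaccount/v1"
)

func main() {
   v, err := protovalidate.New()
   if err != nil {
      panic(err)
   }
   account := &localaccountv1.LocalAccountResource{
      Username: "admin-user",                           // must match ^[a-z][a-z0-9-]{0,31}$
      TenantId: "3f2504e0-4f89-11d3-9a0c-0305e82c3301", // must be a valid UUID
   }
   if err := v.Validate(account); err != nil {
      fmt.Println("validation failed:", err)
      return
   }
   fmt.Println("validation passed")
}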

To complete the protobuf definition, update the inventory.proto file to include the new resource. This will allow the gRPC protocol to properly support the newly added resource. Ensure that the protobuf definition looks like this:

   import "os/v1/os.proto";
   import "ou/v1/ou.proto";
   import "provider/v1/provider.proto";
+  import "localaccount/v1/localaccount.proto";
   import "schedule/v1/schedule.proto";
   import "telemetry/v1/telemetry.proto";
   import "tenant/v1/tenant.proto";
   ...
      RESOURCE_KIND_TELEMETRY_GROUP = 120;
      RESOURCE_KIND_TELEMETRY_PROFILE = 121;
+     RESOURCE_KIND_LOCAL_ACCOUNT = 170;
   }
   ...
         telemetry.v1.TelemetryGroupResource telemetry_group = 100;
         telemetry.v1.TelemetryProfile telemetry_profile = 101;
+        localaccount.v1.LocalAccountResource local_account = 170;
      }
   }

Code and Documentation Generation#

Assuming that the protobuf definition is ready, the next step is to generate the code and the documentation. The Inventory Service exposes several make targets that support the developer workflow. Use the following command to generate the code and the documentation:

make generate

After running the command, the generated code and documentation will be available in the respective directories.

docs/inventory.md will be updated with the documentation associated with the new resource.

The folder internal/ent will contain:

  1. Schema changes

  2. The ORM code for the new resource

Generating the database schema is done in two phases:

  1. buf uses protoc-gen-ent to generate schema files in internal/ent/schema (see the sketch after this list), then

  2. ent transforms those schemas into database code with go generate.
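
As an illustration of the first phase, the schema file emitted for the new resource under internal/ent/schema might look roughly like the sketch below; the actual generated file will contain additional modifiers and annotations:

package schema

import (
   "entgo.io/ent"
   "entgo.io/ent/schema/field"
   "entgo.io/ent/schema/index"
)

// LocalAccountResource holds the schema definition generated from localaccount.proto.
type LocalAccountResource struct {
   ent.Schema
}

// Fields mirrors the fields declared in the protobuf message.
func (LocalAccountResource) Fields() []ent.Field {
   return []ent.Field{
      field.String("resource_id").Unique(),
      field.String("username").Immutable(),
      field.String("tenant_id").Immutable(),
   }
}

// Indexes mirrors the infrainv.schemaExtension option on the message.
func (LocalAccountResource) Indexes() []ent.Index {
   return []ent.Index{
      index.Fields("username", "tenant_id"),
   }
}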

The Golang* code that supports handling the protobuf definitions will be available in the folder pkg/api; in this case, a new folder called localaccount will be created. When adding a new resource, the Inventory protobuf definitions are also affected, so changes in pkg/api/inventory are expected.
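
Once generated, the Go bindings can be used to wrap the new resource in the generic Resource message exchanged over gRPC. The snippet below is only a sketch: the names of the oneof wrapper types are assumptions and must be taken from the generated code in pkg/api:

// Hypothetical client-side construction of the generic Resource wrapper.
res := &inv_v1.Resource{
   Resource: &inv_v1.Resource_LocalAccount{
      LocalAccount: &localaccountv1.LocalAccountResource{
         Username: "admin-user",
         TenantId: "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
      },
   },
}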

Finally, the generate target also produces Python* bindings, which will be extended to include the new protobuf definitions.

Database Schema Migration#

After generating the code, the next step is to write the database schema migration. This updates the database schema to include the new resource when an existing database is upgraded to a newer version of the Inventory code.

  1. Create the migration file by running the following:

    make migration-generate MIGRATION="add_localaccount"
    
  2. Update the atlas.sum file, which is used to spot any inconsistency in the migration process:

    make migration-hash
    

Inventory provides the lint target to verify the migration. When there are conflicting changes, for example removing a column, the migration will fail and you will need to resolve the conflict. See the atlas documentation for further details.

DAO Code Changes#

To complete the gRPC handling logic, you must update the gRPC server implementation to handle the new resource. The update amounts to extending the case statements in the internal/inventory/inventory.go file. The following code snippet shows an example of how to handle the new resource:

+  case inv_v1.ResourceKind_RESOURCE_KIND_LOCAL_ACCOUNT:
+     gresresp.Resource, err = srv.IS.GetLocalAccount(ctx, in.ResourceId)
default:
   zlog.MiSec().MiError("unknown Resource Kind: %s", kind).Msg("get resource parse error")
   return nil, errors.Errorfc(codes.InvalidArgument, "unknown Resource Kind: %s", kind)

Similar extensions are required for the find functions and the filter code, hosted in inventory/internal/store/find.go and inventory/internal/store/filter.go, respectively. This enables list and find operations on the new resource.

The next step is to update the DAO code to support the Create, Read, Update, and Delete (CRUD) operations exposed by the gRPC server and to implement the adaptation logic from the protobuf definitions to the database entries, and vice versa.

Updating the DAO code and supporting the CRUD operations require adding the localaccount.go and localaccount_validator.go files in the inventory/internal/store folder. These files are similar from one resource to another; customization is typically needed only when advanced sanity checks are required that cannot be expressed using the ent schema or SQL.
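
As a purely hypothetical illustration, a check in localaccount_validator.go could look like the following; the rule itself is invented here only to show where such custom logic lives:

// Hypothetical sanity check; real validators enforce constraints that cannot
// be expressed through the ent schema or SQL.
func validateLocalAccount(account *localaccountv1.LocalAccountResource) error {
   if strings.HasPrefix(account.GetUsername(), "root") {
      return errors.Errorfc(codes.InvalidArgument,
         "username %q uses a reserved prefix", account.GetUsername())
   }
   return nil
}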

Unless there is a need to support a new field format, the translation logic from protobuf to ent resources is already implemented by the generic mutation logic, and ent takes care of serializing and deserializing the ent data structs to and from the database. The only necessary step is to extend inventory/internal/store/conversions.go to implement the adaptation logic from ent resources to protobuf messages. This extension is required even when introducing a new field into an existing resource.
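
The ent-to-protobuf direction in conversions.go typically reduces to a field-by-field copy. A sketch of such a helper is shown below, with the type and field names assumed from the generated code:

// Hypothetical conversion helper from the ent entity to its protobuf counterpart.
func entLocalAccountToProto(account *ent.LocalAccountResource) *localaccountv1.LocalAccountResource {
   if account == nil {
      return nil
   }
   return &localaccountv1.LocalAccountResource{
      ResourceId: account.ResourceID,
      Username:   account.Username,
      TenantId:   account.TenantID,
   }
}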

Finally, make sure to extend the utility functions, such as those in inventory/internal/store/utils.go, to support the new resource. Store helpers are located in inventory/internal/store/store.go. Do not forget to extend the testing packages in the inventory/pkg/testing folder to cover the usage of the new resource. Other utility functions are located in inventory/pkg/util and inventory/pkg/validator (the latter is used to validate the protobuf messages).

Testing the New Resource#

Last but not least, test the new resource to ensure that the code and the extensions to the Inventory are working as expected.

This includes testing the CRUD operations for the new resource. Special-purpose tests are usually required to verify the behavior of validation rules or custom logic. Run the tests with the following command:

make test

This command runs the tests and reports the success or failure of the operations. It is important to review the test results and address any issues that arise. For each contribution, the usual requirement is to cover at least 80% of the lines of code with tests.
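
For validation rules such as the username pattern, a small table-driven test is often sufficient. The sketch below exercises the rules directly through protovalidate and, as before, assumes a placeholder import path for the generated bindings:

package localaccount_test

import (
   "testing"

   "github.com/bufbuild/protovalidate-go"

   // Placeholder import path for the generated Go bindings.
   localaccountv1 "example.com/inventory/pkg/api/localaccount/v1"
)

func TestLocalAccountUsernameValidation(t *testing.T) {
   v, err := protovalidate.New()
   if err != nil {
      t.Fatal(err)
   }
   cases := map[string]struct {
      username string
      valid    bool
   }{
      "lowercase accepted":     {username: "admin-user", valid: true},
      "uppercase rejected":     {username: "Admin", valid: false},
      "leading digit rejected": {username: "1admin", valid: false},
   }
   for name, tc := range cases {
      t.Run(name, func(t *testing.T) {
         account := &localaccountv1.LocalAccountResource{
            Username: tc.username,
            TenantId: "3f2504e0-4f89-11d3-9a0c-0305e82c3301",
         }
         err := v.Validate(account)
         if tc.valid && err != nil {
            t.Errorf("expected username %q to be valid, got: %v", tc.username, err)
         }
         if !tc.valid && err == nil {
            t.Errorf("expected a validation error for username %q", tc.username)
         }
      })
   }
}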