Installation artifacts lifecycle

  1. DGCLI downloads the installation artifacts from Public Update Servers.

  2. DGCLI places the fetched datasets into the installation artifacts storage. (Docker images can be placed into the Docker Registry.)

    See more in the DGCLI utility description.

    Note:

    The installation artifacts storage requires regular maintenance to clear out outdated installation artifacts. This helps prevent the storage space from overflowing.

    DGCLI neither tracks nor manages free space in the installation artifacts storage or the Docker Registry. It is recommended to set up monitoring for these parts of the infrastructure and to perform regular maintenance (a pruning sketch follows this list).

  3. All artifacts then migrate from the public network to the private network, so that they become available to Helm and the On-Premise services.

    The migration process can be implemented in different ways depending on the specifics of the project.

    Example:

    To implement migration of installation artifacts from the public network to the private network, you can install a Docker Registry and an S3-compatible storage in the private network and then configure synchronization between them and the corresponding entities in the public network, as sketched below.
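As an illustration of such synchronization, the following minimal sketch copies objects from a public S3-compatible bucket into a private one using boto3. The endpoint URLs, bucket names, and key layout are hypothetical placeholders; dedicated tools such as rclone or a registry's own replication features can serve the same purpose.

```python
"""Sketch: one-way sync of installation artifacts from the public
network into the private one. Endpoints, bucket names, and the key
layout are hypothetical placeholders."""
import boto3
from botocore.exceptions import ClientError

# Separate clients: the source bucket lives in the public network,
# the destination bucket in the private network.
src = boto3.client("s3", endpoint_url="https://public-storage.example.com")
dst = boto3.client("s3", endpoint_url="https://s3.private.example.com")

SRC_BUCKET = "onpremise-artifacts"  # hypothetical
DST_BUCKET = "onpremise-artifacts"  # hypothetical


def sync_bucket() -> None:
    """Copy every object that is missing or stale in the private bucket."""
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SRC_BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            try:
                head = dst.head_object(Bucket=DST_BUCKET, Key=key)
                if head["ETag"] == obj["ETag"]:
                    continue  # Already up to date.
            except ClientError:
                pass  # Absent in the private bucket: copy it below.
            # Stream through this host, since the two storages cannot
            # reach each other directly across the network boundary.
            body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"]
            dst.upload_fileobj(body, DST_BUCKET, key)


if __name__ == "__main__":
    sync_bucket()
```

Docker images can be mirrored between the two registries in a similar fashion, e.g., with a registry-to-registry copy tool such as skopeo.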
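The maintenance recommended in the note above can be as simple as a retention policy over the same storage. The sketch below keeps only the newest versions of each dataset; the one-prefix-per-dataset key layout and the retention count are assumptions made for illustration, not a documented convention.

```python
"""Sketch: prune outdated installation artifacts, keeping only the
newest versions per dataset. The one-prefix-per-dataset key layout
and the retention count are assumptions."""
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.private.example.com")
BUCKET = "onpremise-artifacts"  # hypothetical
KEEP = 2                        # versions to retain per dataset


def prune(prefix: str) -> None:
    """Delete all but the KEEP most recent objects under a prefix."""
    paginator = s3.get_paginator("list_objects_v2")
    objects = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        objects.extend(page.get("Contents", []))
    # Newest first, so everything past the first KEEP entries is outdated.
    objects.sort(key=lambda o: o["LastModified"], reverse=True)
    for obj in objects[KEEP:]:
        s3.delete_object(Bucket=BUCKET, Key=obj["Key"])


if __name__ == "__main__":
    prune("datasets/search/")  # hypothetical dataset prefix
```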

Installation of many services includes copying the required datasets from the installation artifacts storage (see the previous section) into one or multiple storages that the service will use, e.g., into a PostgreSQL database. Oftentimes, a special Kubernetes Importer job exists for this purpose, providing the following lifecycle for a dataset (a sketch of the flow follows the list):

  1. The job reads a manifest file from the installation artifacts storage. This file lists the objects stored there and their latest versions.

  2. The job uses the manifest to determine whether there is new data for the service. If there is no new data, the job stops.

  3. The job spawns several workers. Each worker fetches the necessary installation artifacts and imports the new data into the service's data storage as a separate copy.

  4. After the workers complete the data import, the job performs a series of health checks to ensure the integrity of the new data.

    If all checks complete successfully, the job removes the original data, replacing it with the new data.

    If one or more checks fail, the job stops the update process and requires action from the system administrator. The original data is left intact.
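A hedged Python sketch of this lifecycle is shown below. The manifest format, table and key names, version bookkeeping, and health check are illustrative assumptions, and the worker parallelism of step 3 is collapsed into a sequential loop; real Importer jobs are service-specific.

```python
"""Sketch of the Importer job lifecycle described above. The manifest
format, table names, and health check are assumptions made for
illustration; real Importer jobs are service-specific."""
import json

import boto3
import psycopg2

s3 = boto3.client("s3", endpoint_url="https://s3.private.example.com")
BUCKET = "onpremise-artifacts"  # hypothetical


def run_import(dsn: str) -> None:
    # 1. Read the manifest listing the stored objects and their
    #    latest versions.
    raw = s3.get_object(Bucket=BUCKET, Key="manifests/latest.json")
    manifest = json.loads(raw["Body"].read())

    conn = psycopg2.connect(dsn)
    with conn, conn.cursor() as cur:
        # 2. Stop early if the service already has the latest data.
        cur.execute("SELECT version FROM import_state")
        if manifest["version"] <= cur.fetchone()[0]:
            return

        # 3. Import the new data as a separate copy next to the data
        #    the service is currently using (sequential here; a real
        #    job distributes this across workers).
        cur.execute("CREATE TABLE objects_new (LIKE objects INCLUDING ALL)")
        for item in manifest["objects"]:
            body = s3.get_object(Bucket=BUCKET, Key=item["key"])["Body"]
            cur.copy_expert("COPY objects_new FROM STDIN WITH CSV", body)

        # 4. Health-check the copy. Raising aborts the transaction,
        #    so the original data is left intact on failure.
        cur.execute("SELECT count(*) FROM objects_new")
        if cur.fetchone()[0] == 0:
            raise RuntimeError("import produced no rows; aborting")

        # All checks passed: replace the original data with the copy.
        cur.execute("DROP TABLE objects")
        cur.execute("ALTER TABLE objects_new RENAME TO objects")
        cur.execute("UPDATE import_state SET version = %s",
                    (manifest["version"],))
```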

Different scenarios for updating services and datasets are possible.

  • Update a service with Helm.

    Helm updates the service in a way similar to how the Kubernetes Importer job updates the data (see the previous section): new instances of the service are deployed next to the current ones, and if health checks complete successfully, traffic is redirected to the new instances. Otherwise, the process stops and requires action from the system administrator.

  • Update a service and its data with Helm (not supported by some services).

    Helm launches the service's Kubernetes Importer job to update the data and then updates the service itself.

  • Update a service's data only (not supported by some services).

    The corresponding Kubernetes Importer job is scheduled to run regularly, for example, on a daily basis (see the sketch after this list).
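When the importer is packaged as a Kubernetes CronJob, an ad-hoc run can also be started outside the schedule. The hedged sketch below does this with the official kubernetes Python client, mirroring `kubectl create job --from=cronjob/<name>`; the namespace and CronJob name are hypothetical.

```python
"""Sketch: start an ad-hoc run of a service's Importer job from its
CronJob template, mirroring `kubectl create job --from=cronjob/<name>`.
The namespace and CronJob name are hypothetical."""
import time

from kubernetes import client, config


def trigger_import(namespace: str = "onpremise",
                   cronjob_name: str = "search-importer") -> None:
    config.load_kube_config()  # or config.load_incluster_config()
    batch = client.BatchV1Api()

    # Reuse the job template that the scheduled runs use
    # (batch/v1 CronJob, Kubernetes 1.21+).
    cronjob = batch.read_namespaced_cron_job(cronjob_name, namespace)
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(
            name=f"{cronjob_name}-manual-{int(time.time())}"),
        spec=cronjob.spec.job_template.spec,
    )
    batch.create_namespaced_job(namespace, job)


if __name__ == "__main__":
    trigger_import()
```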

Important note:

Some services may not support updating datasets, or their update process may differ from the one described above.

For a description of a specific service's update process, see its documentation in the Updating services section.