Search API | Urbi On-Prem | 2GIS Documentation

Installing Search API

Important note:

All passwords and keys in this section are given for illustration purposes only.

During a real installation, it is recommended to use stronger, more reliable passwords.

  1. Consider getting familiar with:

  2. Make sure the necessary preparation steps are completed:

    1. Preparation for installation
    2. Fetching installation artifacts
    3. Installing API Keys service
  3. Collect the necessary information that was set or retrieved on previous steps:

    Object | Example value | How to get value
    ------ | ------------- | ----------------
    Docker Registry mirror endpoint | docker.storage.example.local:5000 | See Fetching installation artifacts
    Kubernetes secret for accessing Docker Registry | onpremise-registry-creds | See Fetching installation artifacts
    Installation artifacts S3 storage domain name | artifacts.example.com | See Fetching installation artifacts
    Bucket name for installation artifacts | onpremise-artifacts | See Fetching installation artifacts
    Installation artifacts access key | AKIAIOSFODNN7EXAMPLE | See Fetching installation artifacts
    Installation artifacts secret key | wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY | See Fetching installation artifacts
    Path to the manifest file | manifests/1640661259.json | See Fetching installation artifacts
    API Keys service endpoint | keys.example.local | See Installing API Keys service
    Service tokens* | CATALOG_TOKEN, PLACES_TOKEN, GEOCODER_TOKEN, SUGGEST_TOKEN, CATEGORIES_TOKEN, REGIONS_TOKEN | See Installing API Keys service

    * For illustration purposes, it is assumed that service tokens for all the search products are available.

  4. Make sure that the following system requirements are met (the requirements are given for the minimal amount of replicas):

    • For testing environment:

      Service | vCPU | RAM | Storage
      ------- | ---- | --- | -------
      Search services | 5 | 20 GB | 15 GB in Kubernetes pod storage, 60 GB in PostgreSQL storage
      PostgreSQL storage | 6 | 12 GB | 60 GB
      Total amount | 11 | 32 GB | 75 GB
    • For production environment:

      Service | vCPU | RAM | Storage
      ------- | ---- | --- | -------
      Search services | 17 | 26 GB | 15 GB in Kubernetes pod storage, 60 GB in PostgreSQL storage
      PostgreSQL storage | 24 | 48 GB | 60 GB
      Total amount | 41 | 74 GB | 75 GB

    Note:

    Detailed requirements for each service are listed in the System requirements document.

  5. Choose domain names for the services.

    Example:

    • Domain name for Search API: search.example.com
    • Domain name for Catalog API: catalog.example.com
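The "Total amount" rows in the requirement tables of step 4 are the sums of the two service rows. A quick arithmetic check (vCPU, RAM in GB, storage in GB):

```shell
# Totals from the requirement tables:
#   testing    = Search services (5 vCPU, 20 GB RAM, 15 GB) + PostgreSQL (6 vCPU, 12 GB RAM, 60 GB)
#   production = Search services (17 vCPU, 26 GB RAM, 15 GB) + PostgreSQL (24 vCPU, 48 GB RAM, 60 GB)
echo "testing:    $((5 + 6)) vCPU, $((20 + 12)) GB RAM, $((15 + 60)) GB storage"
echo "production: $((17 + 24)) vCPU, $((26 + 48)) GB RAM, $((15 + 60)) GB storage"
```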

Place a PostgreSQL cluster with the domain name catalog-postgresql.storage.example.local in the private network. This instruction assumes that the cluster listens on the standard port 5432.

Configure the PostgreSQL cluster for usage as a storage:

  1. Connect to the cluster as a superuser (usually postgres).

  2. Create a database user that will be used for the service. Set a password for the user.

    create user dbuser_catalog password '650D7AmZjSR1dkNa';
    
  3. Create a database owned by this user.

    create database onpremise_catalog owner dbuser_catalog;
    
  4. Install necessary database extensions:

    \c onpremise_catalog
    
    CREATE EXTENSION postgis WITH SCHEMA public;
    CREATE EXTENSION jsquery WITH SCHEMA public;
    
  1. Create a Helm configuration file. See here for more details on the available settings.

    The example is prefilled with the necessary data collected on previous steps.

    values-search.yaml
    dgctlDockerRegistry: docker.storage.example.local:5000
    
    imagePullSecrets:
        - name: onpremise-registry-creds
    
    imagePullPolicy: IfNotPresent
    
    dgctlStorage:
        host: artifacts.storage.example.local:443
        bucket: onpremise-artifacts
        accessKey: AKIAIOSFODNN7EXAMPLE
        secretKey: wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY
        manifest: manifests/1640661259.json
    
    api:
        resources:
            limits:
                cpu: 1
                memory: 3G
            requests:
                cpu: 100m
                memory: 1G
    nginx:
        resources:
            limits:
                cpu: 1
                memory: 1G
            requests:
                cpu: 100m
                memory: 200Mi
    
    ingress:
        hosts:
            - host: search.example.com
    

    Where:

    • dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    • dgctlStorage: Installation Artifacts Storage settings.

      • Fill in the common settings to access the storage: endpoint, bucket, and access credentials.
      • manifest: fill in the path to the manifest file in the manifests/1640661259.json format. This file contains the description of pieces of data that the service requires to operate. See Installation artifacts lifecycle.
    • api.resources: computational resources settings for the API backend service. See the minimal requirements table for the actual information about recommended values.

    • nginx.resources: computational resources settings for the NGINX backend service. See the minimal requirements table for the actual information about recommended values.

    • ingress: configuration of the Ingress resource. Adapt it to your Ingress installation. The host must be accessible from outside of your Kubernetes cluster, so that users in the private network can reach the service.

  2. Deploy the service with Helm using the created values-search.yaml configuration file.

    helm upgrade --install --version=1.4.5 --atomic --values ./values-search.yaml search-api 2gis-on-premise/search-api
    
  1. Create a Helm configuration file. See here for more details on the available settings.

    The example is prefilled with the necessary data collected on previous steps.

    values-catalog.yaml
    dgctlDockerRegistry: docker.storage.example.local:5000
    
    imagePullSecrets:
        - name: onpremise-registry-creds
    
    imagePullPolicy: IfNotPresent
    
    api:
        postgres:
            host: catalog-postgresql.storage.example.local
            port: 5432
            name: onpremise_catalog
            username: dbuser_catalog
            password: 650D7AmZjSR1dkNa
    
    search:
        url: http://search.example.com
    
    keys:
        url: https://keys.example.local
        tokens:
            places: PLACES_TOKEN
            geocoder: GEOCODER_TOKEN
            suggest: SUGGEST_TOKEN
            categories: CATEGORIES_TOKEN
            regions: REGIONS_TOKEN
    

    Where:

    • dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    • api.postgres: the PostgreSQL database access settings. Use the values that you configured for the PostgreSQL cluster in the previous step.

      • host: host name of the server
      • port: port of the server
      • name: database name
      • username: user name
      • password: user password
    • search: the Search API service access settings.

      • url: URL of the service. This URL should be accessible from all the pods within your Kubernetes cluster.
    • keys: the API Keys service settings.

      • url: URL of the service. This URL should be accessible from all the pods within your Kubernetes cluster.
      • tokens: service tokens for sharing usage statistics with the API Keys service (see Installing API Keys service).
  2. Deploy the service with Helm using the created values-catalog.yaml configuration file.

    helm upgrade --install --version=1.4.5 --atomic --values ./values-catalog.yaml catalog-api 2gis-on-premise/catalog-api
    

To test that the Search API service is working, you can make a GET request to the status endpoint:

curl 'search.example.com/v2/status?f=common'

To test that the Catalog API service is working, you can do the following:

  1. Using API Keys Admin, create an API key that has access to Places API and Regions API.

  2. Make the following GET request, replacing:

    • API_KEY with the created key.
    • City with the name of any city you want to search for.
    curl 'catalog.example.com/3.0/items/geocode?key=API_KEY&q=City'
    

    This request will test the operability of Catalog API, Search API, and the PostgreSQL database.
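Note that the request URL must be quoted: otherwise the shell cuts the command at the & and runs curl in the background with a truncated URL. A small sketch of building the URL from variables (API_KEY and City are the placeholders from the step above, not real values):

```shell
# Placeholders from the step above; substitute the real key and city name.
API_KEY="API_KEY"
CITY="City"

# Build the URL first, then pass it to curl in double quotes so that
# '?' and '&' are never interpreted by the shell.
URL="catalog.example.com/3.0/items/geocode?key=${API_KEY}&q=${CITY}"
echo "$URL"
# curl "$URL"
```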

What's next?