Installing navigation API
Important note:
All passwords and keys in this section are given for illustration purposes only. During a real installation, use more complex and reliable passwords.
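One way to produce stronger replacement secrets up front is to generate them from the system's random source. A minimal sketch, assuming a Unix-like host with `/dev/urandom` (the variable names are illustrative, not part of the product):

```shell
# Generate random 24-character alphanumeric secrets to replace the example values.
gen_secret() {
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
  echo
}

DB_PASSWORD="$(gen_secret)"
S3_SECRET_KEY="$(gen_secret)"
echo "generated a ${#DB_PASSWORD}-character password"   # prints "generated a 24-character password"
```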
1. Before installing
- Consider getting familiar with:
- Make sure the necessary preparation steps are completed:
- Collect the necessary information that was set or retrieved in the previous steps:
| Object | Example value | How to get value |
| --- | --- | --- |
| Docker Registry mirror endpoint | docker.storage.example.local:5000 | See Fetching installation artifacts |
| Kubernetes secret for accessing Docker Registry | onpremise-registry-creds | See Fetching installation artifacts |
| Installation artifacts S3 storage domain name | artifacts.example.com | See Fetching installation artifacts |
| Bucket name for installation artifacts | onpremise-artifacts | See Fetching installation artifacts |
| Installation artifacts access key | AKIAIOSFODNN7EXAMPLE | See Fetching installation artifacts |
| Installation artifacts secret key | wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY | See Fetching installation artifacts |
| Path to the manifest file | manifests/1640661259.json | See Fetching installation artifacts |
| API Keys service endpoint | keys.example.local | See Installing API Keys service |
| Traffic API Proxy endpoint | traffic-proxy.example.local | See Installing Traffic API Proxy |
| Service tokens* | DIRECTIONS_TOKEN, PAIRS_DIRECTIONS_TOKEN, TRUCK_DIRECTIONS_TOKEN, PUBLIC_TRANSPORT_TOKEN, ROUTES_TOKEN, DISTANCE_MATRIX_TOKEN, TSP_TOKEN, ISOCHRONE_TOKEN, MAP_MATCHING_TOKEN | See Installing API Keys service |

\* For illustration purposes, it is assumed that service tokens for all the navigation products are available.
- Make sure that the resource requirements specified in the Helm charts are met. For more information on how to do this, refer to the System requirements document.
- Choose domain names for the services. Example:
  - Domain name for Navi-Castle: navi-castle.example.local
  - Domain name for Navi-Back: navi-back.example.local
  - Domain name for Distance Matrix Async API: navi-async-matrix.example.local
  - Domain name for Restrictions API: navi-restrictions.example.local
2. Prepare infrastructure required for the service
Configure PostgreSQL for Distance Matrix Async API
Place a PostgreSQL cluster with the domain name navi-async-matrix-postgresql.storage.example.local in the private network. This instruction assumes that the cluster works on the standard port 5432.
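Before configuring the cluster, you may want to verify that the endpoint is reachable from your workstation. A minimal bash-only sketch using the `/dev/tcp` feature (`probe` is a helper defined here, not a standard tool; the hostname and port are the example values):

```shell
# Report whether a TCP endpoint accepts connections.
probe() {
  if timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 NOT reachable"
  fi
}

probe navi-async-matrix-postgresql.storage.example.local 5432
```

The same probe works for the other backing services in this section (S3 on port 80, Kafka on port 9092).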
Configure the PostgreSQL cluster for usage as a storage:
- Connect to the cluster as a superuser (usually postgres).
- Create a database user that will be used by the service and set a password for it:

  ```sql
  create user dbuser_navi_async_matrix password 'wNgJamrIym8UAcdX';
  ```
- Create a database owned by this user:

  ```sql
  create database onpremise_navi_async_matrix owner dbuser_navi_async_matrix;
  ```
Configure PostgreSQL for Restrictions API
Place a PostgreSQL cluster with the domain name navi-restrictions-postgresql.storage.example.local in the private network. This instruction assumes that the cluster works on the standard port 5432.
Configure the PostgreSQL cluster for usage as a storage:
- Connect to the cluster as a superuser (usually postgres).
- Create a database user that will be used by the service and set a password for it:

  ```sql
  create user dbuser_restrictions password 'jwbK65iFrCCcNrkg';
  ```
- Create a database owned by this user:

  ```sql
  create database onpremise_restrictions owner dbuser_restrictions;
  ```
Configure S3 storage for Navi-Back
Place an S3-compatible storage (e.g., Ceph) with the domain name navi-back-s3.storage.example.local in the private network. This instruction assumes that the storage works on the standard port 80.
Configure the S3-compatible storage:
- Create a user that will be used by the service and remember its credentials. Example:
  - Access key: HZJQSA1JMOMLXALINTVY
  - Secret key: I2dAfvW0RRbjKj6ESn4gq5mwRJQ5ZCRSEqTWUWAf
- Choose a bucket name that will be used for the service. Example: naviback-bucket
Configure S3 storage for Distance Matrix Async API
Place an S3-compatible storage (e.g., Ceph) with the domain name navi-async-matrix-s3.storage.example.local in the private network. This instruction assumes that the storage works on the standard port 80.
Configure the S3-compatible storage:
- Create a user that will be used by the service and remember its credentials. Example:
  - Access key: TRVR4ESNMDDSIXLB3ISV
  - Secret key: 6gejRs5fyRGKIFjwkiBDaowadGLtmWs2XjEH18YK
- Choose a bucket name that will be used for the service. Example: navi-async-matrix-bucket
Configure Apache Kafka for Navi-Back
Place an Apache Kafka broker with the domain name navi-back-kafka.storage.example.local in the private network. This instruction assumes that the broker works on the standard port 9092.
Create a user that will be used by the service and remember its credentials. Example:

- Username: kafka-navi-back
- Password: Ea6fNe5Bbx56Y1s0
Configure Apache Kafka for Distance Matrix Async API
Place an Apache Kafka broker with the domain name navi-async-matrix-kafka.storage.example.local in the private network. This instruction assumes that the broker works on the standard port 9092.
Create a user that will be used by the service and remember its credentials. Example:

- Username: kafka-async-matrix
- Password: 1Y2u3gGvi6VjNHUt
Configure file storage for Navi-Castle
The Navi-Castle service stores some data as files. Choose a path where these files will be placed.
Example: /opt/castle/data
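As a sketch, you can create and sanity-check this directory on the host ahead of time (the path below is the example value; make sure the service account has read/write access to it):

```shell
# Create the Navi-Castle data directory from the example above.
CASTLE_DATA=/opt/castle/data
mkdir -p "$CASTLE_DATA"
# Verify the directory is writable:
touch "$CASTLE_DATA/.write-test" && rm "$CASTLE_DATA/.write-test"
echo "prepared $CASTLE_DATA"
```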
3. Install navigation API services
Install Navi-Castle service
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-castle.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise
imagePullSecrets: [onpremise-registry-creds]

dgctlStorage:
  host: artifacts.example.com
  bucket: onpremise-artifacts
  accessKey: AKIAIOSFODNN7EXAMPLE
  secretKey: wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY
  manifest: manifests/1640661259.json

resources:
  limits:
    cpu: 1000m
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 128Mi

persistentVolume:
  enabled: false
  accessModes: [ReadWriteOnce]
  storageClass: ceph-csi-rbd
  size: 5Gi

castle:
  castleDataPath: /opt/castle/data/
  # Only if you use Restrictions API
  restrictions:
    key: secret
    host: http://navi-restrictions.example.local

cron:
  enabled:
    import: true
    restriction: true
  schedule:
    import: '*/10 * * * *'
    restriction: '*/10 * * * *'
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3

replicaCount: 1
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `dgctlStorage`: Installation Artifacts Storage settings.
  - Fill in the common settings to access the storage: endpoint, bucket, and access credentials.
  - `manifest`: fill in the path to the manifest file in the `manifests/1640661259.json` format. This file contains the description of pieces of data that the service requires to operate. See Installation artifacts lifecycle.
- `resources`: computational resources settings for the service. See the minimal requirements table for the actual information about recommended values.
- `persistentVolume`: settings of the Kubernetes Persistent Volume Claim (PVC) that is used to store the service data.
  - `enabled`: flag that controls whether the PVC is enabled. If the PVC is disabled, a service's replica can lose its data.
  - `accessModes`: access mode for the PVC (default: none). Available modes are the same as for persistent volumes.
  - `storageClass`: storage class for the PVC.
  - `size`: storage size.

  Important note: Navi-Castle is deployed using a StatefulSet. This means that every Navi-Castle replica will get its own dedicated persistent storage with the specified settings. For example, if you configure the `size` setting as `5Gi`, then the total storage volume required for 3 replicas will be equal to `15Gi`.
- `castle`: Navi-Castle settings.
  - `castleDataPath`: path to the Navi-Castle data directory.
  - `restrictions.key`: key that will be used to interact with the Restrictions API service. An arbitrary string.
  - `restrictions.host`: URL of the Restrictions API service. This URL should be accessible from all the pods within your Kubernetes cluster.
- `cron`: the Kubernetes Cron Job settings. These settings are the same for all deployed Navi-Castle replicas. This job fetches actual data from Installation Artifacts Storage and updates the data on the Navi-Castle replica.
  - `enabled.import`, `enabled.restriction`: flags that control whether the jobs are enabled (default: `false`). If both jobs are disabled, no Navi-Castle replicas will get data updates.
  - `schedule.import`, `schedule.restriction`: schedules of the jobs in cron format.
  - `concurrencyPolicy`: the job concurrency policy.
  - `successfulJobsHistoryLimit`: a limit on how many completed jobs should be kept.
- `replicaCount`: number of the Navi-Castle service replicas. Note that each replica's pod will get its own dedicated cron job to fetch the actual data from Installation Artifacts Storage.
- Deploy the service with Helm using the created values-castle.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --values ./values-castle.yaml navi-castle 2gis-on-premise/navi-castle
  ```

  On its first start, a Navi-Castle replica will fetch the data from Installation Artifacts Storage. After that, the data will be updated on schedule by the Cron Job.
Install Navi-Back service
- Create the rules.conf file with the required set of rules.
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-back.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise

affinity: {}

hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  scaleDownStabilizationWindowSeconds: ''
  scaleUpStabilizationWindowSeconds: ''
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: ''

naviback:
  castleHost: navi-castle.example.local
  ecaHost: traffic-proxy.example.local
  forecastHost: traffic-proxy.example.local
  appPort: 443
  simpleNetwork:
    emergency: false

replicaCount: 1

resources:
  limits:
    cpu: 2000m
    memory: 16000Mi
  requests:
    cpu: 1000m
    memory: 1024Mi

# Only if you use Distance Matrix Async API
kafka:
  enabled: true
  server: navi-back-kafka.storage.example.local
  port: 9092
  groupId: group_id
  user: kafka-navi-back
  password: Ea6fNe5Bbx56Y1s0
  distanceMatrix:
    taskTopic: request_topic
    cancelTopic: cancel_topic

# Only if you use Distance Matrix Async API
s3:
  enabled: true
  host: navi-back-s3.storage.example.local:80
  bucket: naviback-bucket
  accessKey: HZJQSA1JMOMLXALINTVY
  secretKey: I2dAfvW0RRbjKj6ESn4gq5mwRJQ5ZCRSEqTWUWAf
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `affinity`: node affinity settings.
- `hpa`: Horizontal Pod Autoscaling settings.
- `naviback`: Navi-Back service settings.
  - `castleHost`: URL of the Navi-Castle service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `ecaHost`: domain name of the Traffic API Proxy service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `forecastHost`: URL of the Traffic forecast service. See the Traffic API Proxy service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `appPort`: HTTP port for the Navi-Back service.
  - `simpleNetwork.emergency`: enable support for emergency vehicle routes. Note that to be able to build such routes, you also need to add the `emergency` routing type to one of the projects in your rules.conf file.
- `replicaCount`: number of the Navi-Back service replicas.
- `resources`: computational resources settings for the service. See the minimal requirements table for the actual information about recommended values.
- `kafka`: access settings for the Apache Kafka broker used for interacting with Distance Matrix Async API.
  - `server`: Kafka hostname or IP address.
  - `port`: Kafka port.
  - `groupId`: Distance Matrix Async API group identifier.
  - `user` and `password`: credentials for accessing the Kafka server.
  - `distanceMatrix`: names of the topics for interacting with Distance Matrix Async API.
    - `taskTopic`: name of the topic for receiving new tasks from Distance Matrix Async API.
    - `cancelTopic`: name of the topic for canceling or finishing tasks.
- `s3`: access settings for the S3-compatible storage used for interacting with Distance Matrix Async API.
  - `host`: endpoint of the S3-compatible storage.
  - `bucket`: bucket name for storing the request data.
  - `accessKey`: S3 access key.
  - `secretKey`: S3 secret key.
- Deploy the service with Helm using the created values-back.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --values ./values-back.yaml navi-back 2gis-on-premise/navi-back
  ```
Install Navi-Router service
- Create the rules.conf file with the required set of rules.
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-router.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise

router:
  logLevel: Warning
  castleHost: http://navi-castle.example.local
  keyManagementService:
    enabled: true
    host: http://keys.api.example.com
    refreshIntervalSec: 30
    downloadTimeoutSec: 30
    apis:
      # directions: "DIRECTIONS_TOKEN"
      # distance-matrix: "DISTANCE_MATRIX_TOKEN"
      # pairs-directions: "PAIRS_DIRECTIONS_TOKEN"
      # truck-directions: "TRUCK_DIRECTIONS_TOKEN"
      # public-transport: "PUBLIC_TRANSPORT_TOKEN"
      # isochrone: "ISOCHRONE_TOKEN"
      # map-matching: "MAP_MATCHING_TOKEN"
      # ppnot: "PPNOT_TOKEN"
      # combo-routes: "COMBO_ROUTES_TOKEN"
      # free-roam: "FREE_ROAM_TOKEN"

replicaCount: 2

resources:
  limits:
    cpu: 2000m
    memory: 1024Mi
  requests:
    cpu: 500m
    memory: 128Mi
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `router`: Navi-Router service settings.
  - `logLevel`: logging level, default is `Warning`. Available levels: `Verbose`, `Info`, `Warning`, `Error`, `Fatal`.
  - `castleHost`: URL of the Navi-Castle service. This URL must be accessible from all the pods within your Kubernetes cluster.
  - `keyManagementService`: API Keys settings. If this parameter is omitted, the API key verification step will be skipped.
    - `enabled`: whether API Keys usage is turned on.
    - `host`: URL of the API Keys service endpoint. This URL must be accessible from all the pods within your Kubernetes cluster.
    - `refreshIntervalSec`: interval between key updates in seconds.
    - `downloadTimeoutSec`: timeout of key downloading in seconds.
    - `apis`: service tokens for sharing usage statistics with the API Keys service (see Fetching the service tokens).
- `replicaCount`: number of service replicas.
- `resources`: computational resources settings for the service. See the minimal requirements table for the actual information about recommended values.
- Deploy the service with Helm using the created values-router.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --values ./values-router.yaml navi-router 2gis-on-premise/navi-router
  ```
Install Navi-Front service
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-front.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise

affinity: {}

hpa:
  enabled: true
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 90

replicaCount: 2

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `affinity`: node affinity settings.
- `hpa`: autoscaling settings.
- `replicaCount`: number of service replicas.
- `resources`: computational resources settings for the service. See the minimal requirements table for the actual information about recommended values.
- Deploy the service with Helm using the created values-front.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --values ./values-front.yaml navi-front 2gis-on-premise/navi-front
  ```
Install Distance Matrix Async API service
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-navi-async-matrix.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise

dm:
  citiesUrl: http://navi-castle.example.local/cities.conf

s3:
  host: http://navi-async-matrix-s3.storage.example.local:80
  bucket: navi-async-matrix-bucket
  accessKey: TRVR4ESNMDDSIXLB3ISV
  secretKey: 6gejRs5fyRGKIFjwkiBDaowadGLtmWs2XjEH18YK

db:
  host: navi-async-matrix-postgresql.storage.example.local
  port: 5432
  name: onpremise_navi_async_matrix
  user: dbuser_navi_async_matrix
  password: wNgJamrIym8UAcdX

kafka:
  bootstrap: navi-async-matrix-kafka.storage.example.local:9092
  groupId: group_id
  user: kafka-async-matrix
  password: 1Y2u3gGvi6VjNHUt
  consumerCancelTopic: cancel_topic
  topicRules:
    - topic: request_topic
      default: true
    - topic: moscow_request_topic
      projects:
        - moscow

keys:
  url: keys.example.local
  token: DISTANCE_MATRIX_TOKEN
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `dm.citiesUrl`: URL of the information about cities provided by the Navi-Castle service.
- `s3`: access settings for the S3-compatible storage.
  - `host`: endpoint of the S3-compatible storage.
  - `bucket`: bucket name for storing the request data.
  - `accessKey`: S3 access key.
  - `secretKey`: S3 secret key.
- `db`: access settings for the PostgreSQL server.
  - `host`: hostname or IP address of the PostgreSQL server.
  - `port`: listening port of the PostgreSQL server.
  - `name`: database name.
  - `user` and `password`: credentials for accessing the database specified in the `name` setting. The user must be the owner of this database or a superuser.
- `kafka`: access settings for the Apache Kafka broker.
  - `bootstrap`: URL of the Kafka server.
  - `groupId`: Distance Matrix Async API group identifier.
  - `user` and `password`: credentials for accessing the Kafka server.
  - `consumerCancelTopic`: name of the topic for canceling or receiving information about finished tasks.
  - `topicRules`: information about the topics that Distance Matrix Async API will use to send the requests. Defined as a list where each element must have two parameters:
    - `topic`: name of the topic.
    - `projects` or `default`: parameters that define which requests to send to the topic. Distance Matrix Async API sends requests to different topics based on their projects. For each topic other than the default one, the `projects` setting must be defined containing a list of projects (see Rules list). For the default topic, the `default: true` setting must be defined. The default topic will be used to send the requests related to the projects not listed in any other topic's `projects`. The configuration must contain one and only one topic with `default: true`.
- `keys`: the API Keys service settings.
  - `url`: URL of the service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `token`: service token (see Installing API Keys service).
- Deploy the service with Helm using the created values-navi-async-matrix.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --values ./values-navi-async-matrix.yaml navi-async-matrix 2gis-on-premise/navi-async-matrix
  ```
Install Restrictions API service
- Add the following settings to your Navi-Castle configuration file:
```yaml
castle:
  restrictions:
    key: secret
    host: http://navi-restrictions.example.local

cron:
  enabled:
    import: true
    restriction: true
  schedule:
    import: '11 * * * *'
    restriction: '*/5 * * * *'
```
Where:

- `restrictions.key`: key that will be used to interact with the Restrictions API service. An arbitrary string.
- `restrictions.host`: URL of the Restrictions API service. This URL should be accessible from all the pods within your Kubernetes cluster.
- `cron.schedule`: cron interval for updating information about road closures.
- Add the following locations to the NGINX configuration of your Navi-Front service:

  ```yaml
  locationsBlock: |
    location /attract {
      proxy_pass http://navi-back.example.local;
    }
    location /edge {
      proxy_pass http://navi-back.example.local;
    }
  ```
- Create a Helm configuration file. See here for more details on the available settings. The example is prefilled with the necessary data collected in the previous steps.
values-restrictions.yaml
```yaml
dgctlDockerRegistry: docker.storage.example.local:5000/2gis-on-premise

db:
  host: navi-restrictions-postgresql.storage.example.local
  port: 5432
  name: onpremise_restrictions
  user: dbuser_restrictions
  password: jwbK65iFrCCcNrkg

api:
  attractor_url: http://navi-back.example.local/attract

cron:
  edges_url_template: http://navi-castle.example.local/restrictions_json/{project}/{date_str}_{hour}.json
  edge_attributes_url_template: http://navi-back.example.local/edge?edge_id={edge_id}&offset=200&routing=carrouting
  projects:
    - moscow

api_key: secret
```
Where:

- `dgctlDockerRegistry`: your Docker Registry endpoint where the On-Premise services' images reside.
- `db`: access settings for the PostgreSQL server.
  - `host`: hostname or IP address of the PostgreSQL server.
  - `port`: listening port of the PostgreSQL server.
  - `name`: database name.
  - `user` and `password`: credentials for accessing the database specified in the `name` setting. The user must be the owner of this database or a superuser.
- `attractor_url`: URL of the Navi-Back service. This URL should be accessible from all the pods within your Kubernetes cluster.
- `cron`: settings for retrieving information from the navigation services.
  - `edges_url_template`: URL of the Navi-Castle service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `edge_attributes_url_template`: URL of the Navi-Back service. This URL should be accessible from all the pods within your Kubernetes cluster.
  - `projects`: list of Navi-Back projects (see Rules file).
- `api_key`: key that will be used to interact with the navigation services. The value of this setting must match the value of the `restrictions.key` setting of the Navi-Castle service.
- Deploy the service with Helm using the created values-restrictions.yaml configuration file:

  ```shell
  helm upgrade --install --version=1.10.0 --atomic --wait-for-jobs --values ./values-restrictions.yaml navi-restrictions 2gis-on-premise/navi-restrictions
  ```
4. Test deployment
Test Navi-Castle service
To test that the Navi-Castle service is working, you can do the following:
- Port forward the service using kubectl:

  ```shell
  kubectl port-forward navi-castle-0 7777:8080
  ```
- Send a GET request to the root endpoint (`/`) using cURL or a similar tool:

  ```shell
  curl -Lv http://navi-castle.example.com:7777/
  ```
You should receive an HTML listing of all files and folders similar to the following:
```html
<html>
  <head>
    <title>Index of /</title>
  </head>
  <body>
    <h1>Index of /</h1>
    <hr />
    <pre>
<a href="../">../</a>
<a href="lost%2Bfound/">lost+found/</a>     09-Mar-2022 13:33    -
<a href="packages/">packages/</a>           09-Mar-2022 13:33    -
<a href="index.json">index.json</a>         09-Mar-2022 13:33    634
<a href="index.json.zip">index.json.zip</a> 09-Mar-2022 13:33    357
    </pre>
    <hr />
  </body>
</html>
```
Test Navi-Back service
To test that the Navi-Back service is working, you can do the following:
- Port forward the service using kubectl:

  ```shell
  kubectl port-forward navi-back-6864944c7-vrpns 7777:8080
  ```
- Create the following file containing the body of a Directions API request (the example is valid for Moscow):
data.json
```json
{
  "alternative": 1,
  "locale": "en",
  "point_a_name": "start",
  "point_b_name": "finish",
  "type": "jam",
  "points": [
    { "start": true, "type": "walking", "x": 37.616489, "y": 55.751225 },
    { "start": false, "type": "walking", "x": 37.418451, "y": 55.68355 }
  ]
}
```
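Before sending the request, you can verify that the file is well-formed JSON. A small sketch, assuming python3 is available on the workstation (the file content repeats the example above):

```shell
# Write data.json and confirm it parses as JSON.
cat > data.json <<'EOF'
{
  "alternative": 1,
  "locale": "en",
  "point_a_name": "start",
  "point_b_name": "finish",
  "type": "jam",
  "points": [
    { "start": true, "type": "walking", "x": 37.616489, "y": 55.751225 },
    { "start": false, "type": "walking", "x": 37.418451, "y": 55.68355 }
  ]
}
EOF
python3 -m json.tool data.json > /dev/null && echo "data.json is valid JSON"
```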
- Send the request using cURL or a similar tool:

  ```shell
  curl -Lv http://navi-back.example.com:7777/carrouting/6.0.0/global -d @data.json
  ```
You should receive a response with the following structure:
```
{
  "query": {..},
  "result": [{..}, {..}],
  "type": "result"
}
```
See the Navigation documentation for request examples.
Test Navi-Router service
To test that the Navi-Router service is working, you can do the following:
- Port forward the service using kubectl:

  ```shell
  kubectl port-forward navi-router-6864944c7-vrpns 7777:8080
  ```
- Create a file containing the body of a Directions API request, identical to the file from Testing the deployment of Navi-Back.
- Send the request using cURL or a similar tool:

  ```shell
  curl -Lv http://navi-router.example.com:7777/carrouting/6.0.0/global -d @data.json
  ```
You should receive a response containing the rule name: moscow_cr
Test Navi-Front service
To test that the Navi-Front service is working, you can do the following:
- Port forward the service using kubectl:

  ```shell
  kubectl port-forward navi-front-6864944c7-vrpns 7777:8080
  ```
- Create a file containing the body of a Directions API request, identical to the file from Testing the deployment of Navi-Back.
- Send the request using cURL or a similar tool:

  ```shell
  curl -Lv http://navi-front.example.com:7777/carrouting/6.0.0/global -d @data.json
  ```
You should receive a response with the following structure:
```
{
  "query": {..},
  "result": [{..}, {..}],
  "type": "result"
}
```
Test Distance Matrix Async API service
To test that the Distance Matrix Async API service is working, you can do the following:
- Port forward the service using kubectl:

  ```shell
  kubectl port-forward navi-async-matrix-6864944c7-vrpns 7777:8080
  ```
- Create the following file containing the body of the request (the example is valid for Moscow):

  ```json
  {
    "points": [
      { "lon": 37.573289, "lat": 55.699926 },
      { "lon": 37.614402, "lat": 55.706847 },
      { "lon": 37.552182, "lat": 55.675928 },
      { "lon": 37.620315, "lat": 55.669625 }
    ],
    "sources": [0, 1],
    "targets": [2, 3]
  }
  ```
- Send the request using cURL or a similar tool:

  ```shell
  curl -Lv https://navi-async-matrix.example.com/create_task/get_dist_matrix --header 'Content-Type: application/json' -d @data.json
  ```
You should receive a response with the following structure:

```json
{
  "task_id": "{TASK_ID}",
  "status": "TASK_CREATED"
}
```
- Request the task status using the TASK_ID parameter received in the previous step:

  ```shell
  curl -Lv https://navi-async-matrix.example.com/result/get_dist_matrix/{TASK_ID}
  ```
Perform the request multiple times if necessary, while the task is running. Eventually, you should receive a response with the following structure:
```json
{
  "task_id": "{TASK_ID}",
  "status": "TASK_DONE",
  "code": 200,
  "message": "start_time_ms=16516816106601123 calc_time_ms=14419 attract_time=4 build_time=28 points_count=3 source_count=1 target_count=2",
  "result_link": "http://navi-async-matrix-s3.storage.example.local:80/dm/{TASK_ID}.response.json"
}
```
- Download the calculation results using the URL received in the result_link field in the previous step. Make sure that the result is a valid JSON file.
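The repeated status requests can be wrapped in a small helper. A sketch assuming curl and python3 are available (`poll_dm_task` is a hypothetical name, not part of the product; pass it the result URL from the previous step):

```shell
# Poll the result endpoint until the task reports TASK_DONE.
poll_dm_task() {  # usage: poll_dm_task <result-url>
  while :; do
    status=$(curl -s "$1" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])') || return 1
    echo "status: $status"
    [ "$status" = "TASK_DONE" ] && return 0
    sleep 5
  done
}

# Example (TASK_ID comes from the create_task response):
# poll_dm_task "https://navi-async-matrix.example.com/result/get_dist_matrix/{TASK_ID}"
```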
Test Restrictions API service
To test that the service is working, send a GET request to the /healthcheck endpoint.
What's next?
- Find out how to update the service:
- Install other On-Premise products: