The DeGirum AI server software stack allows you to run AI model inferences initiated from multiple remote clients within your local network. It can be installed on hosts equipped with AI accelerator card(s).
The following table lists the operating systems, CPU architectures, and AI hardware accelerators supported by the DeGirum AI server software stack:
Operating System | CPU Architecture | Supported AI Hardware |
---|---|---|
Ubuntu Linux 20.04, 22.04 | x86-64 | DeGirum Orca, Google EdgeTPU, Intel® Myriad™ |
Ubuntu Linux 20.04, 22.04 | ARM AArch64 | DeGirum Orca |
Raspberry Pi OS (64 bit) | ARM AArch64 | DeGirum Orca |
Windows 10 | x86-64 | DeGirum Orca (Planned) |
macOS 12 | x86-64 | DeGirum Orca (Planned) |
macOS 12 | ARM AArch64 | DeGirum Orca (Planned) |
You have the following three options for running the DeGirum AI server:
- From the terminal directly on a Linux host (see section Starting AI Server from Terminal).
- As a Linux service (see section Starting AI Server as Linux Service).
- As a pre-built Docker container (see section Starting AI Server as Docker Container).
Note
Before starting the AI server, make sure that the device driver for the AI Accelerator is installed on the system. For Orca driver installation, see the Orca Driver page.
Starting AI Server from Terminal
To run PySDK AI Server from the terminal, perform the following steps:
- Create or select a user name to be used for all the following configuration steps. This user should have administrative rights on this host. The user name ai-user is used in the instructions below, but it can be changed to any other user name of your choice.
- For convenience of future maintenance, we recommend installing PySDK into a virtual environment, such as Miniconda. Make sure you have activated your Python virtual environment with Python 3.8 and have PySDK installed into it.
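For example, a minimal way to set this up with Miniconda could look like the following (the environment name degirum_env is just an example, and Miniconda is assumed to be already installed):
# create and activate a Python 3.8 virtual environment
conda create -n degirum_env python=3.8
conda activate degirum_env
# install PySDK into the activated environment
pip install degirum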
- Create a directory for the AI server local model zoo, for example:
mkdir /home/ai-user/zoo
- If you want to host models in the local model zoo (as opposed to the hosted model zoo), download all models of your choice from the DeGirum AI Hub Model Zoo into the directory created in the previous step by executing the following command:
degirum download-zoo --path /home/ai-user/zoo --token "token string" [--url "cloud zoo URL"]
- The `"token string"` is your API access token string obtained on [DeGirum AI Hub](https://cs.degirum.com/).
- The optional `"cloud zoo URL"` parameter is the DeGirum AI Hub zoo URL in the format `"https://cs.degirum.com/<organization>/<zoo>"`, where `<organization>` is the name of the organization that owns the hosted zoo, and `<zoo>` is the name of the zoo. Omit the `--url` parameter to use the DeGirum public hosted zoo.
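For example, to download models from a hypothetical hosted zoo named myzoo owned by the organization myorg, the command could look like this (the organization, zoo name, and token value are placeholders):
degirum download-zoo --path /home/ai-user/zoo --token "<your API token>" --url "https://cs.degirum.com/myorg/myzoo"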
- Start the DeGirum AI server process by executing the following command:
degirum server --zoo /home/ai-user/zoo
The AI server is now up and will run until you press ENTER in the same terminal where you started it. By default, the AI server listens on TCP port 8778. If you want to change the TCP port, pass the `--port` command-line argument when launching the server, for example:
degirum server --zoo /home/ai-user/zoo --port 8780
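To quickly verify from another terminal that the server is accepting connections, you can check that the chosen TCP port is in the listening state using standard Linux tooling (not specific to DeGirum); replace 8778 with your port if you changed it:
# list listening TCP sockets and filter by the server port
ss -ltn | grep 8778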
Starting AI Server as Linux Service
It is convenient to automate the AI server launch so that it starts automatically on each system startup. To do so, you need to define and configure a Linux system service. Please perform the following steps:
- Perform all steps described in section Starting AI Server from Terminal, except the last one (do not launch the server yet).
- Create a configuration file named degirum.service in the /etc/systemd/system directory. You will need administrative rights to create this file. You can use the following template as an example:
[Unit]
Description=DeGirum AI Service
[Service]
# >> you may want to adjust the working directory:
WorkingDirectory=/home/ai-user/
# >> you may want to adjust the path to your Python executable and --zoo model zoo path;
# >> also you may specify server TCP port other than default 8778 by adding --port <port> argument:
ExecStart=/home/ai-user/miniconda3/bin/python -m degirum.server --zoo /home/ai-user/zoo
Restart=always
# >> you may want to adjust the restart time interval:
RestartSec=10
SyslogIdentifier=degirum-ai-server
# >> you may want to change the user name under which this service will run.
# >> This user should have rights to access model zoo directory
User=ai-user
[Install]
WantedBy=multi-user.target
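If you create or later modify this unit file, reload the systemd configuration so the changes take effect (a standard systemd step, not specific to DeGirum):
sudo systemctl daemon-reload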
- Start the system service by executing the following command:
sudo systemctl start degirum.service
- Check the system service status by executing the following command:
sudo systemctl status degirum.service
- If the status is "Active", that means the configuration is good and the service is up and running.
- Then enable the service for automatic startup by executing the following command:
sudo systemctl enable degirum.service
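To inspect the AI server logs, you can use journalctl with either the service name or the SyslogIdentifier value set in the unit file above (standard systemd tooling, shown here as an example):
# follow the service log by unit name
sudo journalctl -u degirum.service -f
# or follow it by syslog identifier
sudo journalctl -t degirum-ai-server -f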
Starting AI Server as Docker Container
You may run the AI server from a pre-built Docker image. To do so, please perform the following steps:
- Make sure Docker is installed on the system. Please refer to the official Docker documentation for instructions.
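A quick way to confirm that Docker is installed and the Docker daemon is running (standard Docker commands, shown as an example):
docker --version
docker info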
- If you want to host models in the local model zoo (as opposed to the hosted model zoo), create and populate the local model zoo directory as explained at the beginning of the Starting AI Server from Terminal section.
- Download and start the DeGirum AI Server Docker container by running one of the following commands. For local model zoo hosting, the command is the following:
docker run --name aiserver -d -p 8778:8778 -v /home/ai-user/zoo:/zoo --privileged degirum/aiserver:latest
Here, /home/ai-user/zoo is the path to the local model zoo.
- To use only the hosted model zoo (no local model zoo), the command is the following:
docker run --name aiserver -d -p 8778:8778 --privileged degirum/aiserver:latest
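In either case, you can verify that the container started successfully by listing running containers and checking its logs (standard Docker commands):
docker ps --filter name=aiserver
docker logs aiserver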
Warning
We strongly recommend using an exact version tag instead of the `latest` tag to avoid situations where a newer image is not pulled from DockerHub because your local cache already has an image with the `latest` tag.
To force-remove the existing container, run the following commands, assuming you started your container with the `--name aiserver` option. If not, run the `docker ps` command to obtain the container name: it is listed in the last column. Use that container name instead of `aiserver`:
docker stop aiserver
docker rm aiserver
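After removing the old container, you can start a new one pinned to an exact version tag, as recommended above; the `<version tag>` below is a placeholder for an actual tag published on DockerHub (keep the -v option only if you host a local model zoo):
docker run --name aiserver -d -p 8778:8778 -v /home/ai-user/zoo:/zoo --privileged degirum/aiserver:<version tag>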