In this section we're going to use the Remote - Containers feature of Visual Studio Code to simplify the development experience. If you are using a different editor, skip to the section for other editors below.
The Visual Studio Code Remote - Containers extension lets you run a complete development environment inside the Spark SDK Docker container. The editor UI runs as a thin client on your desktop and connects to a headless instance running inside the Spark SDK Docker environment. This configuration allows us to ship a pre-configured set of Tasks for building and testing your code.
Start Visual Studio Code and click on the puzzle piece icon on the left-hand side to open the extensions panel. Use the search box at the top to find the "Remote - Containers" extension and install it.
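If you prefer to install extensions from the command line, you can also use the code CLI (the identifier below is the extension's Marketplace ID):
code --install-extension ms-vscode-remote.remote-containers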
Open the examples/algo_template directory using the File->Open Folder menu option. If you opted to add Visual Studio Code to your PATH during install, you can instead run code . from a command prompt in that directory.
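For example, assuming the SDK was unzipped to C:\spark_sdk (substitute your own path):
cd C:\spark_sdk\examples\algo_template
code .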
When the workspace opens, you should see a notification offering to re-open the workspace inside the container.
The first time you open the workspace inside the container it will take a few seconds to configure the spark_sdk Docker container. While the workspace is open in Visual Studio Code the container will keep running in the background. It will stop automatically when you close the editor.
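If you want to verify this, list the running containers from a separate command prompt; you should see a container based on the spark_sdk image:
docker ps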
Open algo_template.cpp from the file browser on the left or by pressing Ctrl+P and typing the name. Each Algo is a standard C++ class which implements IAlgoOrder. Placing your cursor on IAlgoOrder and pressing F12 will take you to the IAlgoOrder.h header file where you can browse the API. To learn more about how an Algo is instantiated, see the Algo Initialization section.
To build, press "Ctrl+Shift+B". You should see the build output at the bottom of your screen. If the build was successful, the resulting algo_template.so will be automatically copied to the plugins directory, where it will be loaded by the Spark server process on start.
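If you prefer the command line, the same build can be run from the integrated terminal inside the container (assuming the workspace folder is your current directory):
./build.sh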
To debug your Algo, set a breakpoint (F9), open the Debug panel (Ctrl+Shift+D), and choose "(gdb) Launch". This will launch the Spark Server inside the container with the debugger attached.
With sparkd running you can launch the Spark Client (spark_client.cmd) and it will connect to the running server. Log in with the user "bts" and no password, then open a ladder. You can now select your Algo in the order type dropdown and activate it by clicking on a price on the ladder.
While sparkd is running you can also access the Spark Server Admin interface on http://localhost:9000. Use the same "bts" login to access the admin interface. Here you can manage user and account permissions and risk limits.
To start the Spark Server without attaching a debugger, press "Ctrl+Shift+P", select "Tasks: Run Task", then "Start Sparkd", and choose "Never scan task output". This will open a new terminal where you can see any messages logged by the Sdk subsystem as well as any warnings or errors. To stop the Spark server, simply kill the terminal (trash can icon) or run the "Stop Sparkd" task.
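This task is roughly equivalent to starting the server manually from the integrated terminal inside the container:
cd /sparkbin
./sparkd run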
If you would like to understand more about how Docker is working behind the scenes, continue to the next section. Otherwise, continue to Algo Registration to learn how Algos are registered with the Spark server.
In this section you will get a closer look at how Docker works and how to compile and test your Algo using the command line.
Spark Server is built and runs only on Linux so you need a Linux development environment to effectively develop plug-ins for it. Docker lets you take the spark_sdk.img disk image and run it inside a virtual Linux machine running on your Windows desktop.
First, some terminology. A Docker "image" is a disk image containing a complete filesystem. The spark_sdk.img file you imported earlier is an image. A Docker "container" is what you get when you attach this image to a running Linux system. You can connect to a running container and use it to run commands like the C++ compiler or the Spark server. Directories from your local filesystem can be mapped into the Docker container when it starts so you can easily share files.
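You can see the distinction with the Docker CLI: docker images lists the images available on your machine (spark_sdk should appear once it has been imported), while docker ps lists the containers currently running.
docker images
docker ps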
The first step is to start the spark_sdk Docker container with the examples directory mapped so we can compile the examples inside the container. In your command prompt cd to the examples directory that you unzipped from the spark_sdk.zip file. Then run the following command to start the container.
Windows Command Prompt:
docker run --rm -it -v %CD%:/examples -u bts -w /examples --shm-size 2048m --name spark_sdk -h spark_sdk --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -p 9000:9000 -p 9001:9001 spark_sdk bash
PowerShell / Linux / Mac:
docker run --rm -it -v ${PWD}:/examples -u bts -w /examples --shm-size 2048m --name spark_sdk -h spark_sdk --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -p 9000:9000 -p 9001:9001 spark_sdk bash
The container will keep running as long as this prompt remains open. You can attach an additional shell to the running container with the following command:
docker exec -it spark_sdk bash
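Because the container was started with the --rm flag, it is removed as soon as the original bash prompt exits. You can also stop it by name from another prompt:
docker stop spark_sdk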
Let's look at the algo_template sample first. All of the samples are laid out similarly, with a build.sh script for compiling. As long as you are running build.sh inside the spark_sdk Docker container, it automatically copies the build output to /sparkbin/plugins/, where it needs to be to get loaded by the Spark server.
bts@spark_sdk:/examples$ cd algo_template/
bts@spark_sdk:/examples/algo_template$ ./build.sh
algo_template.so copied to /sparkbin/plugins/
Restart sparkd to see changes.
To start the Spark server, cd to /sparkbin/ and run ./sparkd run. Press Ctrl+C to stop it.
bts@spark_sdk:/sparkbin$ ./sparkd run
2019-10-25 16:21:09.383626|262552152803600|Gen|Warn |1 |main|132 |Run.cpp:53 |dev| ------------------------------------ Sparkd launched --------------------------------
You will see any Sdk log messages as well as any log messages at or above Warning level from other parts of the system. If you need additional diagnostic information, more detailed logs are stored in /sparkdata/data/logging/log. These logs will be automatically deleted when you stop the Docker container.
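If you want to keep the logs, copy them to your host before stopping the container (the destination folder name here is just an example):
docker cp spark_sdk:/sparkdata/data/logging/log ./sparkd-logs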
With sparkd running you can launch the Spark Client (spark_client.cmd) and it will connect to the running server. Log in with the user "bts" and no password, then open a ladder. You can now select your Algo in the order type dropdown and activate it by clicking on a price on the ladder.
While sparkd is running you can also access the Spark Server Admin interface on http://localhost:9000. Use the same "bts" login to access the admin interface. Here you can manage user and account permissions and risk limits.
You can debug your algo using gdb from the bash command prompt.
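For example, assuming your Algo was built with debug symbols, start gdb against the server binary and pass it the run argument:
cd /sparkbin
gdb --args ./sparkd run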
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
...
Reading symbols from ./sparkd...done.
(gdb) break algo_template.cpp:45
No source file named algo_template.cpp.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (algo_template.cpp:45) pending.
(gdb) run
Starting program: /sparkbin/sparkd run
If your editor supports debugging over the SSH protocol, you can start sshd from inside the container.
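As a minimal sketch, assuming openssh-server is available (or can be installed) in the container, the bts user has sudo rights, and you added something like -p 2222:22 to the docker run command above to publish the SSH port, you could run the following inside the container:
sudo apt-get update && sudo apt-get install -y openssh-server
sudo service ssh start
Your editor can then connect from your desktop with ssh -p 2222 bts@localhost.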
Continue to Algo Registration to learn how Algos are registered with the Spark server.