The eXpressWare development package comes with several example and benchmark programs. It is recommended to study these before designing your own application.
All SISCI example, demo and benchmark programs support various command line options; details are printed at runtime if you start an application with the -help option.
The purpose of the example programs is to demonstrate the basic usage of selected SISCI API functionality.
All programs share a common set of command line interface options:
'-rn XX' where XX is the nodeId of the remote system. The nodeId of a local system can be determined by running the 'query' SISCI utility.
'-server' or '-client' to select the server or client side functionality.
The shmem program code demonstrates how to create a basic SISCI program and exchange data using PIO. An interrupt is created and signalled when the data exchange is completed.
The memcopy program code demonstrates how to create a basic SISCI program and transfer data to a remote segment using the SCIMemCpy() function. An interrupt is created and signalled when the data exchange is completed.
The interrupt program code demonstrates how to trigger an interrupt on a remote system using the SISCI API. The receiver thread is blocking, waiting for the interrupt to arrive.
The data_interrupt program code demonstrates how to trigger an interrupt with data on a remote system.
The intcb program code demonstrates how to trigger an interrupt on a remote system. The receiver thread is notified using an interrupt callback function.
The dma program code demonstrates the basic use of DMA operations to move data between segments.
The MXH830 PCIe chip does not have a PCIe DMA engine. The SISCI software will automatically utilize system DMA if it is available on your system.
The dma program code demonstrates the basic use of DMA operations to move data between segments using the completion callback mechanism.
The rpcia program code demonstrates how to use the PCIe peer-to-peer functionality to enable remote systems to access a local PCIe resource / physical address within the system.
Please note that PCI Express peer-to-peer functionality is only available with some servers. Please ask your system vendor to confirm that PCI Express peer-to-peer functionality is supported.
The smartio_example program code demonstrates how to use the SISCI API SmartIO extension to access a Transparent PCIe device in the PCIe fabric.
The reflective_memory program code demonstrates how to use PCIe multicast / reflective memory functionality.
Please note that the MXH830 and MXH930 in 3 and 5 node configurations do not support PCIe multicast (yet).
The reflective_device program code demonstrates how to use the SISCI API to register a PCIe device to directly receive PCIe multicast data and how to enable a PCIe device to directly send PCIe multicast data to/from a multicast group.
This program requires the PCIe peer-to-peer functionality.
The reflective_write program code demonstrates how to use PCIe multicast / reflective memory functionality.
The probe program code demonstrates how to determine if a remote system is accessible via the PCIe network.
The purpose of the benchmark and demo programs is to demonstrate how to measure the actual communication performance over the PCIe network.
The source for all the benchmark and demo programs is installed if you select the development option when running the eXpressWare MSI installer.
The scibench2 program can be used to determine the actual CPU load/store performance to a remote or local segment.
The program copies data to the remote segment without any synchronization between the client and server side during the benchmark.
The send latency displayed by the application is the wall clock time to send the data once.
The scipp program can be used to determine the actual CPU store latency to a remote or local segment.
The program sends data to the remote system. The remote system polls for new data and sends a similar amount of data back when it detects the incoming message.
The dma_bench program can be used to determine the actual DMA performance to a remote or local segment.
The program connects to a remote segment and executes a series of single-sided DMA operations, copying data from a local segment to a remote segment. There is no synchronization between the client and server side during the benchmark.
The intr_bench program can be used to determine the actual latency for sending a remote interrupt.
The program implements an interrupt ping-pong benchmark where the client and server sides exchange interrupts and measure the full round-trip latency. The interrupt latency reported by the program is the average of the two systems, i.e. half the round-trip time. The latency measured is the full application-to-application latency.
The reflective_bench program can be used to benchmark the reflective memory / multicast functionality enabled by PCI Express networks.
The program implements a multicast data ping-pong benchmark where the client and server sides exchange multicast data.
Reflective memory functionality is fully supported in two node configurations and with a central switch.