
How NSDSP Works

The NSDSP Programmer can perform different tasks in different modes. It can program, debug, communicate over SPI or UART, manipulate pins, and perform other tasks depending on the operating mode. However, the principle of operation is always the same.

NSDSP always maintains two data flows. One flow transfers commands from the PC to NSDSP through USB and then further to the target. The other flow runs in the reverse direction: responses from the target are transferred back through NSDSP to the PC.

The two data flows are completely independent of each other and occur at the same time. You can send commands one by one and wait for the response after each command. Or you can send multiple commands back-to-back and then read all the responses.

The picture below illustrates the data flows. The red circles represent commands. The green circles represent responses. Your program calls one of the many available read functions to request information from the target. The command is created and put into the output buffer. NSDSP software anticipates that there might be more commands to follow and does not send it to NSDSP immediately. You need to flush the buffer for the command to start moving. Once flushed, the command travels to the target, and then the response is sent back to the PC. The responses accumulate on the PC, and once the response to your command is available, you can call one of the fetch functions to retrieve it.

As you can see, this method is not very efficient. Your program spends most of its time waiting for responses. One round trip on NSDSP-1 and NSDSP-2 takes 16 ms, so you can push through only about 60 commands per second.
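
The same flow can be sketched in C. All nsdsp_* names below are hypothetical placeholders introduced only to illustrate the command/response pattern; the actual NSDSP API names and signatures may differ.

  /* Hypothetical declarations used throughout the sketches on this page.
     The real NSDSP API may differ; these only illustrate the pattern. */
  #include <stdint.h>

  void    nsdsp_read_byte(void *conn, uint32_t addr);  /* queue a read command              */
  void    nsdsp_flush(void *conn);                      /* send the queued commands out      */
  void    nsdsp_wait(void *conn);                       /* block until a response arrives    */
  uint8_t nsdsp_fetch_byte(void *conn);                 /* retrieve one queued response      */
  int     nsdsp_response_available(void *conn);         /* non-zero if a response is waiting */

  /* Naive approach: one command, one full round trip per byte. */
  void read_block_slow(void *conn, uint32_t addr, uint8_t *dst, int count)
  {
      for (int i = 0; i < count; i++) {
          nsdsp_read_byte(conn, addr + i);  /* put one command into the output buffer */
          nsdsp_flush(conn);                /* push it to the device immediately      */
          nsdsp_wait(conn);                 /* wait for the ~16 ms round trip         */
          dst[i] = nsdsp_fetch_byte(conn);  /* retrieve the single response           */
      }
  }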

To avoid wasting time waiting, you can issue multiple commands by calling several read functions in a row. This approach is illustrated in the picture below.

If you call enough read functions to fill up the output buffer on the PC, it will be flushed automatically and the requests will start traveling. Each request generates a response, and these responses are received, accumulated, and fetched as needed.

As you can see in the picture, this creates a pipeline, which increases the throughput. In real life, this approach may be up to 1000 times faster. If you want fast communications, organizing a pipeline is mandatory.

In the animation above, you can observe how the read and fetch functions are called. First, you call a number of read functions to start the pipeline; then you start mixing read and fetch functions.

If the number of commands you want to send is small (less than 10,000), you can organize an effective pipeline by sending out all your commands first and then fetching the results. The following pseudo-code shows the method:

  for i = 1 to N {
    read()
  }

  flush()

  for i = 1 to N {
    wait()
    fetch()
  }
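
In C, the same pattern might look like the sketch below. It reuses the hypothetical nsdsp_* placeholders declared in the earlier sketch and assumes count stays under the limit of roughly 10,000 commands mentioned above.

  /* Simple pipeline: queue everything, flush once, then fetch everything. */
  void read_block_pipelined(void *conn, uint32_t addr, uint8_t *dst, int count)
  {
      for (int i = 0; i < count; i++)
          nsdsp_read_byte(conn, addr + i);  /* commands accumulate in the output buffer */

      nsdsp_flush(conn);                    /* send whatever is still in the buffer     */

      for (int i = 0; i < count; i++) {
          nsdsp_wait(conn);                 /* wait for the next response               */
          dst[i] = nsdsp_fetch_byte(conn);  /* responses come back in command order     */
      }
  }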

If you need more commands, you cannot send them all at once, because the constant influx of unhandled responses would overflow the input buffers. Therefore, you need to interleave sending commands and fetching responses. The pseudo-code to organize such a pipeline may look like this:

  outstanding = 0

  for i = 1 to N {
    read()
    outstanding++
  }

  for i = 1 to M {
    read()
    outstanding++
    while (response available) {
      fetch()
      outstanding--
    }
  }

  flush()

  while (outstanding > 0) {
    if (response available) {
      fetch()
      outstanding--
    }
  }

The principle is very simple. You first send a number of commands to prime the pipeline. A reasonable number is a few hundred or a few thousand commands.

Then you interleave sending new commands and checking for the responses until all the necessary commands are sent.

Then you flush the buffer, making sure that none of the commands are stuck in the buffer.

Finally you fetch all of the outstanding responses.
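
A C version of the interleaved pipeline might look like the sketch below. It again uses the hypothetical nsdsp_* placeholders declared earlier; the priming batch of 1000 commands is an arbitrary illustrative choice from the range suggested above.

  /* Interleaved pipeline for long command streams. */
  #define PRIME 1000   /* commands used to prime the pipeline */

  void read_block_streamed(void *conn, uint32_t addr, uint8_t *dst, long count)
  {
      long sent = 0;     /* commands queued so far     */
      long fetched = 0;  /* responses retrieved so far */

      /* Prime the pipeline with an initial batch of commands. */
      while (sent < count && sent < PRIME) {
          nsdsp_read_byte(conn, addr + sent);
          sent++;
      }

      /* Interleave sending new commands with fetching available responses. */
      while (sent < count) {
          nsdsp_read_byte(conn, addr + sent);
          sent++;
          while (nsdsp_response_available(conn))
              dst[fetched++] = nsdsp_fetch_byte(conn);
      }

      nsdsp_flush(conn); /* make sure no command is stuck in the output buffer */

      /* Drain the outstanding responses (outstanding = sent - fetched). */
      while (fetched < sent) {
          nsdsp_wait(conn);
          dst[fetched++] = nsdsp_fetch_byte(conn);
      }
  }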

The important thing is that every command must be matched with a corresponding fetch. Different commands produce different responses, and the type of every fetch function you call must match the type of the corresponding read function. If you call multiple read functions of different types, you must call the same number of fetch functions of the matching types in the same order.
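
The sketch below illustrates this ordering requirement. It uses the same hypothetical placeholders as before, plus two more invented names, nsdsp_read_word and nsdsp_fetch_word, standing in for a second read/fetch type.

  /* The fetch sequence must mirror the read sequence in type and order. */
  void     nsdsp_read_word(void *conn, uint32_t addr);  /* hypothetical word-sized read  */
  uint16_t nsdsp_fetch_word(void *conn);                /* hypothetical word-sized fetch */

  void mixed_reads(void *conn)
  {
      nsdsp_read_byte(conn, 0x1000);        /* command 1: byte read */
      nsdsp_read_word(conn, 0x2000);        /* command 2: word read */
      nsdsp_read_byte(conn, 0x3000);        /* command 3: byte read */
      nsdsp_flush(conn);

      /* Fetch in the same order and with the same types as the reads above. */
      nsdsp_wait(conn);
      uint8_t  a = nsdsp_fetch_byte(conn);  /* response 1: fetched as a byte */
      nsdsp_wait(conn);
      uint16_t b = nsdsp_fetch_word(conn);  /* response 2: fetched as a word */
      nsdsp_wait(conn);
      uint8_t  c = nsdsp_fetch_byte(conn);  /* response 3: fetched as a byte */
      (void)a; (void)b; (void)c;
  }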

If you send more commands than NSDSP can process, your code will block and wait until the existing commands are processed and NSDSP is ready to accept more. If the wait is too long, the operation may time out. Default timeouts are set based on the connection data rate and should work if there are no significant delays. If necessary, you can increase the timeouts to allow more blocking time.
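
As a purely illustrative sketch, raising the timeout before a long blocking sequence might look like the call below. nsdsp_set_timeout is another hypothetical placeholder; the actual NSDSP mechanism for adjusting timeouts may have a different name and parameters.

  /* Hypothetical placeholder -- the real NSDSP timeout API may differ. */
  void nsdsp_set_timeout(void *conn, unsigned milliseconds);

  void prepare_for_slow_operation(void *conn)
  {
      nsdsp_set_timeout(conn, 5000);  /* allow up to 5 seconds of blocking before timing out */
  }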


© 2007-2025 Northern Software Inc. All Rights Reserved.