Questions regarding parallel processing.


Hello everyone,

I am working on a project for RF signal data receiving and sampled data processing.

Right now, I have two separate functions: one continuously receives sampled RF signal data, and the other continuously processes the received data. In my current testing, I run the receiving function first and store the signal data in a file; the processing function then reads the data from that file and processes it. So it is a post-processing workflow.

Previously, my signal processing function was rather slow: processing one second's worth of received data took a couple of seconds. After optimization, it now takes much less than one second to process one second's data. So I am fairly sure the processing function can keep up with the receiving function, which would make real-time processing possible.

But now I am wondering: what is the best way to make the two functions run simultaneously? Is this a job for multi-threaded programming? I don't have any experience with this kind of problem, so can anyone offer help, sample code, or tutorials? I also found things online called FIFOs and forks. Are those different from multi-threaded programming?

I think that if the two functions can work together, they at least won't need to write the signal data to a file and read it back. Avoiding the file input/output should also simplify and speed up the program.

Thanks in advance.
I presume that the receiver writes a new file every time (or at least does not overwrite the old data)? Access to shared memory is the hard part of parallel computing.

FIFO means "first in, first out". A queue. It is not parallelism as such.

If both functions are quick enough, there may be no need for parallelism at all. A trivial approach would be to start a new process every second, IF multiple processes can receive (almost) simultaneously AND the M simultaneous processes do not exhaust resources (like memory).

MPI (Message Passing Interface) makes use of clusters rather than shared memory. With it you could have a "master" and a pool of "workers": the master receives data and sends it to the next available worker; each worker waits for data from the master, processes it, and reports back that it is ready. In effect there is a queue of free workers.

A threading solution can be very similar, except that the memory is seen by all threads rather than "passed". It is very important that two threads do not access the same data simultaneously: if A writes while B reads, B may get inconsistent data, and if both write, the result is probably corrupt.

Parallelism can be used at a lower level too: vectorization (SSE, ..., AVX), OpenMP, OpenCL, and GPGPU let a (processing) function use more hardware, for algorithms that suit that approach.