I have done a brief bit of reading on memory cache issues in multi-core systems. It seems that at least the higher levels of cache are typically shared between cores, which would initially cause me to say that (in ray casting) the threads should be working on image pixels adjacent to those that other threads are working on, so that they are all working in similar areas of memory and can reuse each other's cached data.
However, if multiple threads are accessing data in the same cache line (the smallest block of memory that the cache works with), and at least one of them is writing to it, each write forces the line to bounce between the cores' caches, stalling the threads while they wait for it. Because this happens at the granularity of a whole cache line (so-called false sharing), it can occur whenever threads are working on data that is merely close together, not necessarily the exact same data.
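To make the effect concrete, here is a small hypothetical test (not part of the project code) using the Boost thread library: two threads each increment their own counter, but in one case the counters share a cache line and in the other they are padded apart. The 64-byte padding, the iteration count, and the use of volatile are assumptions chosen purely for illustration; on a real multi-core machine the packed version should run noticeably slower.

#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstdio>

// Two counters packed next to each other: they will normally share a cache line.
struct Packed { long a; long b; };

// The same counters separated by padding so each sits on its own line
// (64 bytes is an assumed line size; it varies between CPUs).
struct Padded { long a; char pad[64]; long b; };

// Each thread hammers one counter only, so there is no logical sharing at all.
// volatile stops the compiler collapsing the loop into a single write.
void bump(volatile long* counter, long iterations) {
    for (long i = 0; i < iterations; ++i)
        ++*counter;
}

// Run two threads against a given pair of counters and report the wall time.
void run(long* a, long* b, const char* label) {
    const long n = 100000000L;
    boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time();
    boost::thread t1(bump, a, n);
    boost::thread t2(bump, b, n);
    t1.join();
    t2.join();
    boost::posix_time::time_duration elapsed =
        boost::posix_time::microsec_clock::local_time() - start;
    std::printf("%s: %ld ms\n", label, (long)elapsed.total_milliseconds());
}

int main() {
    Packed packed = { 0, 0 };
    Padded padded = { 0, { 0 }, 0 };
    run(&packed.a, &packed.b, "counters in one cache line");
    run(&padded.a, &padded.b, "counters padded apart");
    return 0;
}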
This false-sharing effect suggests that each thread should instead stick to its own separate portion of the image, which also means the threads will largely be working with different areas of the volume. Once the multi-threading is implemented I will be able to test both approaches and see whether the results match this theory.
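Below is a minimal sketch of how that partitioning might look with the Boost thread library. The castRay function, the image buffer, and the thread count are placeholders invented for illustration, not the project's actual code. Each thread renders one contiguous block of rows, so the pixels it writes (and, roughly, the volume data it reads) sit far from those touched by the other threads.

#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <vector>

const int WIDTH  = 640;
const int HEIGHT = 480;

// Row-major output buffer; stands in for the real image.
std::vector<unsigned int> image(WIDTH * HEIGHT);

// Placeholder for the real ray-casting routine.
unsigned int castRay(int x, int y) { return (x ^ y) & 0xFF; }

// Render rows [firstRow, lastRow): one contiguous slab of the image,
// so this thread's writes stay well away from the other threads' slabs.
void renderRows(int firstRow, int lastRow) {
    for (int y = firstRow; y < lastRow; ++y)
        for (int x = 0; x < WIDTH; ++x)
            image[y * WIDTH + x] = castRay(x, y);
}

int main() {
    const int threadCount   = 4;                    // e.g. one thread per core
    const int rowsPerThread = HEIGHT / threadCount;

    boost::thread_group threads;
    for (int i = 0; i < threadCount; ++i) {
        int first = i * rowsPerThread;
        int last  = (i == threadCount - 1) ? HEIGHT : first + rowsPerThread;
        threads.create_thread(boost::bind(renderRows, first, last));
    }
    threads.join_all();                             // wait for every slab to finish
    return 0;
}

The interleaved alternative would be just as easy to express (give thread i the rows where y % threadCount == i), so comparing the two schemes should only be a few lines' difference when I come to test them.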
The final project meeting of the year is tomorrow. I would like to talk a bit about the actual coding of multithreading, including potential libraries to use (the Boost thread library looks promising), and discuss the results of the data storage tests in my previous post.
Links:
http://www.embedded.com/design/multicore/202805545?pgno=3
http://communities.intel.com/openport/community/embedded/multicore/multicore-blog/blog/2008/10/08/cache-efficiency-the-multicore-performance-linchpin-to-packet-processing-applications