It is interesting that Kajiya predicted in 1991 that all rendering would eventually become volume rendering. The rapid improvement of hardware has done a lot to support volume rendering, as GPU-based research has increased. The paragraph about photorealism feels oddly placed to me. I haven't seen much photorealism applied to volume rendering, but I wonder whether it is popular these days and, if so, in what kinds of applications. Given today's hardware support, most of the methods are parallelizable, so it seems plausible to implement the introduced algorithms on the GPU.
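As a side note on why these methods parallelize so well: in ray-casting-style volume rendering, each ray (one per pixel) is processed completely independently, so a GPU can assign one thread per ray. Here is a minimal sketch of that structure, assuming a scalar volume in a NumPy array and simple front-to-back compositing along axis-aligned rays (the function and parameter names are my own, not from the paper):

```python
import numpy as np

def cast_rays(volume, opacity, color):
    """Front-to-back compositing along the z axis.

    volume : (nx, ny, nz) scalar field, values in [0, 1]
    opacity: maps scalar -> alpha in [0, 1]
    color  : maps scalar -> intensity in [0, 1]

    Each (x, y) ray is independent of every other ray, which is
    what makes the method embarrassingly parallel on a GPU.
    """
    nx, ny, nz = volume.shape
    acc_c = np.zeros((nx, ny))   # accumulated color per ray
    acc_a = np.zeros((nx, ny))   # accumulated opacity per ray
    for z in range(nz):          # march front to back through slices
        s = volume[:, :, z]
        a = opacity(s)
        c = color(s)
        acc_c += (1.0 - acc_a) * a * c   # composite using old opacity
        acc_a += (1.0 - acc_a) * a
    return acc_c

# Tiny smoke test with illustrative linear opacity/color ramps.
vol = np.random.rand(4, 4, 8)
img = cast_rays(vol, opacity=lambda s: 0.1 * s, color=lambda s: s)
print(img.shape)  # (4, 4)
```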
The paper says that creating static images from volumes of vector data was an unsolved problem back in 1992. Is it still unsolved?
It seems volume rendering needs a pre-processing stage after the initial data-acquisition step, and that it is applied to every slice.
Is pre-processing of the data required for all volume rendering?
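For what it's worth, a typical per-slice pre-processing step for CT-like data is intensity windowing and normalization before the slices are stacked into a volume. A minimal sketch, assuming raw slices arrive as arrays of Hounsfield-like values (the window bounds below are illustrative choices of mine, not from the paper):

```python
import numpy as np

def preprocess_slice(raw, lo=-1000.0, hi=1000.0):
    """Clamp a raw slice to a window [lo, hi] and rescale to [0, 1].

    Applied to every slice before stacking into a volume;
    lo/hi are illustrative bounds, not a standard.
    """
    clipped = np.clip(raw, lo, hi)
    return (clipped - lo) / (hi - lo)

# Stack 8 preprocessed slices into a (4, 4, 8) normalized volume.
slices = [preprocess_slice(np.random.uniform(-2000, 2000, (4, 4)))
          for _ in range(8)]
volume = np.stack(slices, axis=-1)
```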
What happens if there are false-negative surface pieces when using SF (surface-fitting) methods? Would there be a hole after rendering the dataset?
The color code used for CT data, for example, would be bone: white/opaque, muscle: red/semi-transparent, fat: beige/mostly transparent. But is there a standard for this color coding, or is it based on user selection?
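That bone/muscle/fat mapping is essentially a transfer function: a lookup from density to color and opacity. A minimal sketch of how such a user-editable mapping might be expressed (the thresholds and RGBA values below are my own illustrative choices, not a standard):

```python
# Illustrative (density_threshold, RGBA) entries for normalized
# densities in [0, 1]; real systems typically let the user edit
# this table interactively rather than fixing a standard.
TRANSFER_TABLE = [
    (0.2, (0.96, 0.90, 0.76, 0.05)),  # fat: beige, mostly transparent
    (0.5, (0.80, 0.15, 0.15, 0.40)),  # muscle: red, semi-transparent
    (1.0, (1.00, 1.00, 1.00, 0.95)),  # bone: white, nearly opaque
]

def classify(density):
    """Map a normalized density in [0, 1] to an RGBA tuple."""
    for threshold, rgba in TRANSFER_TABLE:
        if density <= threshold:
            return rgba
    return TRANSFER_TABLE[-1][1]

print(classify(0.1))  # fat-like voxel
print(classify(0.9))  # bone-like voxel
```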
What is a false positive? What are negative triangles?
The authors talk about ethical issues and the lack of standard means of validating algorithms. Are there standard ways of validating these algorithms now?