
Added renders in the repository

Ishan Jain 2023-05-27 18:02:19 +05:30
parent ad9497c1c2
commit be11f1c8fe
Signed by: ishan
GPG Key ID: 0506DB2A1CC75C27
4 changed files with 9 additions and 10 deletions

.gitignore vendored

@@ -1,3 +1,4 @@
target/
*.ppm
*.png
!renders/*

README.md

@@ -20,15 +20,15 @@ This improvement keeps on happening until 900 chunks, After that I didn't see an
This happens because some chunks finish a lot faster than others. For example, a chunk that only contains some basic background and doesn't have much going on will finish a lot faster than a chunk that consists of reflections on an object.
If the entire area has not been divided into very small chunks, you'll have a lot of idle cores towards the end, because there are no more chunks left and some cores will still be busy rendering a complex area of the image.
With that said, there is also no point in dividing the image into a million chunks, because there is still some overhead associated with each chunk.
For example, in Rust (AFAIK) I don't have a way to tell the compiler that it's okay to share a vector across multiple threads because each thread will only write to an exclusive section of that vector. So even though they are sharing the vector, they are not _really_ doing that.
Because of this, each thread in this project currently writes to a relatively small temporary buffer and, when it's done, this buffer is copied to the correct position in the larger buffer that holds the entire image. There is also the overhead of spawning all the OS threads.
For each resolution there is probably a sweet spot. In my tests at 500x500, it appears to be 900 chunks.
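A minimal sketch of the per-chunk buffer approach described above, assuming hypothetical names (`render_chunk`, `CHUNK_ROWS`) and a plain `u32` pixel type rather than the project's actual types:

```rust
use std::thread;

const WIDTH: usize = 500;
const HEIGHT: usize = 500;
const CHUNK_ROWS: usize = 50; // rows of the image handled by one chunk

// Stand-in for the real ray tracing work on `rows` rows starting at `_start_row`.
fn render_chunk(_start_row: usize, rows: usize) -> Vec<u32> {
    vec![0u32; rows * WIDTH]
}

fn main() {
    // The full image buffer that the temporary chunk buffers get copied into.
    let mut image = vec![0u32; WIDTH * HEIGHT];

    thread::scope(|s| {
        // One thread per chunk; each renders into its own small buffer.
        let handles: Vec<_> = (0..HEIGHT)
            .step_by(CHUNK_ROWS)
            .map(|start_row| s.spawn(move || (start_row, render_chunk(start_row, CHUNK_ROWS))))
            .collect();

        // Copy each finished chunk into its position in the full image.
        for handle in handles {
            let (start_row, chunk) = handle.join().unwrap();
            let offset = start_row * WIDTH;
            image[offset..offset + chunk.len()].copy_from_slice(&chunk);
        }
    });
}
```

The copy at the end is part of the per-chunk overhead mentioned above; with very many chunks, that copying plus the thread spawning starts to eat into the gains from better load balancing.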
[_Ray Tracing: The Next Week_](https://raytracing.github.io/books/RayTracingTheNextWeek.html)
@@ -36,10 +36,8 @@ For each resolution there is probably a sweet spot. In my tests at 500x500, it
# Renders
![[1] Motion Blur](https://user-images.githubusercontent.com/7921368/114031856-4d419b80-986b-11eb-8a56-b9ecc1785a6e.png)
![[1] Motion Blur](./renders/motion_blur.png)
[1] Motion Blur
![[2] Cornell Box](https://user-images.githubusercontent.com/7921368/114031671-1e2b2a00-986b-11eb-9528-4fd5525c43ea.png)
[2] Cornell Box
![[2] Final Scene](./renders/final_scene.png)
[2] Final Scene

BIN renders/final_scene.png (new file, 2.9 MiB)

BIN renders/motion_blur.png (new file, 2.3 MiB)