It's the weekend and you know what that means! Time to implement raytracing in one weekend using Ray Tracing in One Weekend!
Well I mean, idk you. It might be a Tuesday for you. This is a blog post after all…
The book is written in C++ so my goal is to do some ad-hoc conversion and get a result in Rust, then in a future video convert that code to wgpu.
What is Raytracing?
Raytracing itself is a term that has become more ambiguous over time as more techniques were developed. We’re using it to mean the idea that you can render a scene similar to the way in which a camera does. In the real world light is everywhere and the camera collects whatever portion of that light hits the sensor.
In our raytracing exploration, we'll be shooting rays out from the camera (since we don't care about light that doesn't hit the sensor) into a scene. Those rays will hit something, bounce around hitting other objects, and we'll take a bit of color from each of those collisions to determine what color should show up at a given pixel.
This can be called path tracing, because we’re following a ray as it bounces around, and the full sequence of all of those individual rays is called a path.
The objects in this scene will be mathematically constructed, so instead of having vertices and meshes, we'll have "here's the equation for the space a sphere takes up centered at the point 0,0,0 in the scene".
On to section 1: PPM files.
PPM
PPM is an image format I'd never heard of, but it's the one this series uses, so let's get into dealing with it.
The format is pretty straightforward but the description in the series didn’t connect with me.
There's a line declaring which flavor of the format we're using (P3), which the article describes as meaning the colors are "in ASCII". In practice that means we write the numbers out as plain text, using a u8-sized range for r, g, and b just like CSS, so we get a number from 0-255 for each color channel in a pixel.
Then we declare how many pixels there are, first the number of columns and then the number of rows, followed by the “max value”, which for us is 255.
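Putting that together, a tiny 3-wide by 2-tall example looks something like this (red, green, blue across the top; yellow, white, black across the bottom; the specific color values here are just my own illustration):

P3
3 2
255
255 0 0     0 255 0     0 0 255
255 255 0   255 255 255 0 0 0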
The series shows a 3-wide by 2-tall pixel image, but the code it shows is for a 256x256 image, so you don't get to check your output against a simple version… it's straight into the full pixel processing.
You also likely need additional software to display ppm images. I chose to add a vscode extension to view them, although it made viewing the 3x2 pixel image really tough and I had to save it out as a png and zoom in with different software to see it.
So far so good. I grabbed itertools for the cartesian product function, which is very nice when iterating over all the pixels in a grid and something I use all the time in advent of code each year.
For us, cartesian_product is a fancy math name for getting the x,y coordinate of every point in a grid whose width and height are defined by ranges.
This means we don’t have to deal with any nested for loops and indices.
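For example, (0..2).cartesian_product(0..3) yields (0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2): every cell of a 2x3 grid, in row-major order.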
Then it's just a matter of writing the output to a file.
use itertools::Itertools;
use std::{fs, io};

const IMAGE_HEIGHT: u32 = 256;
const IMAGE_WIDTH: u32 = 256;
const MAX_VALUE: u8 = 255;

fn main() -> io::Result<()> {
    // one "r g b" triplet per pixel, row by row
    let pixels = (0..IMAGE_HEIGHT)
        .cartesian_product(0..IMAGE_WIDTH)
        .map(|(y, x)| {
            let r = x as f64 / (IMAGE_WIDTH - 1) as f64;
            let g = y as f64 / (IMAGE_HEIGHT - 1) as f64;
            let b = 0.0;

            format!(
                "{} {} {}",
                r * 255.0,
                g * 255.0,
                b * 255.0
            )
        })
        .join("\n");

    // P3 header (format, dimensions, max value), then the pixel data
    fs::write(
        "output.ppm",
        format!(
            "P3
{IMAGE_WIDTH} {IMAGE_HEIGHT}
{MAX_VALUE}
{pixels}
"
        ),
    )?;

    Ok(())
}
Adding a progress indicator
With the PPM file written out and viewable, the series wants us to immediately implement a progress bar.
This is the first deviation from the series for us due to using Rust.
As soon as I saw we needed a progress indicator I knew I was headed for indicatif.
cargo add indicatif
cartesian_product doesn't seem like it's supposed to be an ExactSizeIterator, similar to Chain in the Rust stdlib, so we use .progress_count instead of .progress and we get a progress bar.
.progress_count(
    IMAGE_HEIGHT as u64 * IMAGE_WIDTH as u64,
)
At the current moment, our program runs too fast to show the progress bar for more than a frame of this video but that will change.
Vec3
Next is creating a whole vec3 interface. I plan on having a pretty heavy wgpu/Bevy angle to this work as I progress with more advanced implementations, so I’m going to grab the glam
crate instead, which provides DVec3, a 3 dimensional vec that uses f64 as the type for the x,y, and z components.
I don’t feel like I have enough context to make anything a Vec3 though, so I’m going to not make any adjustments to my own code here yet other than bringing DVec3 into scope to use later.
Ray
One part of this series that I found repeatedly awkward is that a lot of the code is given before being shown or told how it's going to be used.
So we have to build a Ray class here to represent a Ray from some origin point, angled in some direction.
The ray class is fairly straightforward, although I still don’t really know how this code is going to be used yet. I chose to make a simple translation to a struct with an associated function.
I’m going to keep using DVec3 instead of any kind of abstraction over that and refactor later if it makes sense, but I don’t know where we’re exactly headed so I don’t want to over abstract early.
struct Ray {
    origin: DVec3,
    direction: DVec3,
}

impl Ray {
    fn at(&self, t: f64) -> DVec3 {
        self.origin + t * self.direction
    }
}
Sending rays into the scene
We're supposed to send rays into the scene now, but first we do a bunch of little math to generate a viewport.
The coordinate system we’re going to use matches Bevy’s, so intuition about where objects are should translate well.
This section was a lot of fiddly math conversion. Nothing really that interesting. If we consider the image we’re generating to have a grid of pixels, then our 0,0 pixel is in the top left, and we store the distance between pixels so we can calculate positions later.
Overall, for every pixel in the image, we send a ray into the scene through that pixel's location. The origin for the Ray here is the camera center, and the direction points from the camera center toward the pixel's location.
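Roughly, the per-pixel ray construction looks like this (a sketch with illustrative names, not necessarily what my final code calls things):

// Start at the upper-left pixel's center, step by the per-pixel
// deltas to reach pixel (x, y), then aim a ray from the camera
// center through that point.
let pixel_center = pixel00_loc
    + x as f64 * pixel_delta_u
    + y as f64 * pixel_delta_v;
let ray = Ray {
    origin: camera_center,
    direction: pixel_center - camera_center,
};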
With the camera set up, we can move on to rendering a sphere.
Adding a Sphere
Ahh the meat and potatoes of ray tracing… or so you'd think I guess. At this point we're successfully shooting rays into the scene and rendering the colors of the positions they hit.
That means to render a sphere, we need to use the equation of a sphere to define where a sphere would be, then figure out if our ray intersects with that area.
fn hit_sphere(
    center: &DVec3,
    radius: f64,
    ray: &Ray,
) -> bool {
    let oc: DVec3 = ray.origin - *center;
    let a = ray.direction.dot(ray.direction);
    let b = 2.0 * oc.dot(ray.direction);
    let c = oc.dot(oc) - radius * radius;
    let discriminant = b * b - 4. * a * c;

    discriminant >= 0.
}
This process reminds me a lot of Signed Distance Fields, which inigo quilez has a lot of documentation on, but we’re doing intersections instead. If we were using SDFs, we’d have to use a slightly different approach called ray marching instead of calculating intersections directly.
The result doesn't look much like a sphere; it looks like a flat 2d circle. But that's only because we haven't shaded the sphere yet, which we're about to do in the next section using the surface normal vectors.
Surface Normals and Multiple Objects
Every point on a sphere has a normal, which is a vector that points exactly outward from the center of the sphere through that point on the surface. Using these vectors, normalized so they have a length of 1, gives us colors we can paint on the sphere.
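Computing the normal is cheap once we know how far along the ray the hit happened, and remapping it from the -1 to 1 range into 0 to 1 gives us something usable as a color. A sketch (not my exact code):

// point on the sphere where the ray hit
let hit_point = ray.at(t);
// outward unit normal: from the sphere's center through the hit point
let normal = (hit_point - sphere_center) / radius;
// remap each component from -1..1 into 0..1 so it can be a color
let color = 0.5 * (normal + DVec3::splat(1.0));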
This section was a bit more math and an in-the-weeds discussion of sqrt and why you shouldn't try to remove it, but the code to calculate the sphere normals seems a bit large, and some of the discussion isn't as relevant because we're using glam's DVec3, which already has a normalize function and other useful methods.
Another win for using Rust and foundational crates to power our raytracer.
Either way, the hit_sphere function is about to get a refactor.
Simplifying the ray-sphere intersection code
I felt a bit surprised by the "simplifying the ray-sphere intersection code" section because the code didn't feel like it got any simpler. It's the same number of lines and still uses the dot function and other small fiddly bits of math.
fn hit_sphere(
    center: &DVec3,
    radius: f64,
    ray: &Ray,
) -> f64 {
    let oc: DVec3 = ray.origin - *center;
    let a = ray.direction.length_squared();
    let half_b = oc.dot(ray.direction);
    let c = oc.length_squared() - radius * radius;
    let discriminant = half_b * half_b - a * c;

    if discriminant < 0. {
        -1.0
    } else {
        (-half_b - discriminant.sqrt()) / a
    }
}
I guess they mean mathematically simpler but it didn’t really come through in the resulting code, in my opinion.
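For reference, the trick is that b has a factor of 2 baked into it: with h = oc · direction, b = 2h, so the quadratic formula t = (-b ± √(b² − 4ac)) / 2a collapses to t = (-h ± √(h² − ac)) / a, which is exactly what the half_b version computes.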
The single-letter variable names were definitely not helping me here, even though I am already familiar with the quadratic formula.
In any case, nothing changes visually here and it's not like we have benchmarks, so it's hard to determine what the changes were actually for.
However, we immediately roll into creating an abstraction for hittable objects.
Hittable object abstraction
The next few sections are all very C++ specific. There’s an extended discussion about creating an abstract hittable class followed by introducing C++ language features.
trait Hittable {
    fn hit(
        &self,
        ray: &Ray,
        interval: Range<f64>,
    ) -> Option<HitRecord>;
}
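HitRecord isn't shown here, but it's basically a bag of data about the intersection, following the book's hit_record. Something along these lines (a minimal sketch; the exact fields grow over the course of the series):

struct HitRecord {
    point: DVec3,     // where the ray hit
    normal: DVec3,    // surface normal at that point
    t: f64,           // how far along the ray the hit happened
    front_face: bool, // did we hit the outside or the inside?
}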
There's also a number of constants we don't need to define (like PI, because we're using Rust and Rust has a PI constant in the standard library) and a class definition for what is basically Rust's Range type.
Overall these sections are all dedicated to adding a second sphere, but it definitely took a while to get through and parse out what was actually necessary, given that Rust seems to provide a lot more for us than C++ does.
The major point for us here is that I chose to implement Hittable as a trait. I’m not sure this was the most straightforward choice since later on we end up with an enum that encapsulates a lot of the same objects that implement Hittable, but it was definitely the choice that seemed to align most with the C++ code at the time and I wanted to make sure that I was able to finish the series rather than spending a lot of time refactoring.
We then roll into more abstraction, creating a camera class.
Camera Class
The series wants us to place the color function on the camera class, but to be honest that doesn’t make much sense to me since it doesn’t use anything on the camera class to do the work it needs to, so I left that implementation on the Ray.
Spoilers for Book 2 in the series: there is actually a background color that could be a reason to keep the color function on the camera class, but it's easily handled this way as well.
The Ray color function already has to accept the world of hittable objects, so we’re not really losing anything here.
Next comes the problem of pixels and what they actually are.
Anti-aliasing
The article “A Pixel Is Not a Little Square” that's linked to in this section seems like one of those “fallacies of distributed computing” or “falsehoods programmers believe about time/names/etc” type documents. Basically “hey, you think of this as a simple thing in normal life but it's actually horrendously complicated and not at all straightforward”.
In my mind the big lesson for this section is “pixels don’t exist” and they’re actually a representation of a bunch of samples we take.
We then get to ignore a section about C++ not having random numbers in some versions, similar to the way we got to ignore C++ not having Ranges. When we need random numbers, we’ll use the rand crate.
This also provided a great opportunity to make use of .sum on an Iterator to add up DVec3s, which is a fun function that I don’t actually use in practice that often.
We’ve now added some randomness to the direction our rays get sent through each pixel. By accumulating the randomized samples, we get smoother edges rather than jagged edges. This is anti-aliasing.
let multisampled_pixel_color = (0..self.samples_per_pixel)
    .map(|_| {
        self.get_ray(x as i32, y as i32)
            .color(&world)
            * 255.0
            * scale_factor
    })
    .sum::<DVec3>();
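get_ray is where the randomness lives: the ray still starts at the camera center, but it gets aimed at a point jittered around the pixel's center. A sketch, with illustrative field names:

fn get_ray(&self, x: i32, y: i32) -> Ray {
    // random offset within the half-pixel square around the center
    let px = -0.5 + rand::random::<f64>();
    let py = -0.5 + rand::random::<f64>();
    let pixel_sample = self.pixel00_loc
        + (x as f64 + px) * self.pixel_delta_u
        + (y as f64 + py) * self.pixel_delta_v;

    Ray {
        origin: self.center,
        direction: pixel_sample - self.center,
    }
}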
With the ability to sample more data from each pixel, we can start building more complicated materials.
Diffuse materials
Handling diffuse materials requires some randomness in scattering rays when they hit a surface as well as the ability to combine multiple light ray “bounces” after scattering the rays.
fn random_in_unit_sphere() -> DVec3 {
    let mut rng = rand::thread_rng();
    // rejection sampling: pick points in the unit cube until one
    // lands inside the unit sphere
    loop {
        let vec = DVec3::new(
            rng.gen_range(-1.0..1.),
            rng.gen_range(-1.0..1.),
            rng.gen_range(-1.0..1.),
        );
        if vec.length_squared() < 1. {
            break vec;
        }
    }
}

fn random_unit_vector() -> DVec3 {
    random_in_unit_sphere().normalize()
}
Since we're using recursion, we need to limit the depth of that recursion, and I was definitely hitting stack overflows before the series told me we were going to limit the depth. Another case of the answer showing up later in the series than the problem.
I had to skip ahead twice during this section to fix a program that was either broken (due to stack overflow) or producing what seemed like incorrect output (sphere colors being way too dark, 0.0 → 0.001 floating point rounding errors).
All in all, looking very cool at this point, if a bit noisy.
Lesson for this section is don’t be afraid to read ahead because the answers often are not in the section you’re currently reading, and may be 3 sections ahead.
Reflection
Changing to use lambertian reflection is a really simple change that affects the direction the rays scatter in. Previously we scattered rays that hit the sphere equally randomly, but now we’re using a “lambertian distribution” which is a fancy way of saying the rays bounce off closer to the normal direction more of the time.
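In code it's a tiny change to how the bounce direction gets picked (a sketch, using the random_unit_vector from earlier; the hit_record field names are illustrative):

// Offsetting a random unit vector by the surface normal biases the
// bounce direction toward the normal, which is the lambertian part.
let direction = hit_record.normal + random_unit_vector();
let scattered = Ray {
    origin: hit_record.point,
    direction,
};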
This results in stronger shadows and reflection from the “sky”.
Gamma correction
Image viewing programs assume that an image has been “gamma corrected” before being saved.
Currently our image colors are in linear space, but we want them to be in gamma space when we save them.
Gamma correction went just as quickly as the lambertian reflection change.
fn linear_to_gamma(scalar: f64) -> f64 {
    scalar.sqrt()
}
Run a square root on the x,y,z of the color vector and the gamma is corrected.
Then we've got our gamma correction, before and after.
On to more materials. In this case, Metal.
Metal
The Metal section spends a lot of time describing material abstractions before describing how they’re being used.
There’s an abstract class here to help define materials, but since I control everything here I made Material an enum. We won’t be extending this type outside of the confines of this program anyway.
enum Material {
    Lambertian { albedo: DVec3 },
    Metal { albedo: DVec3 },
}
I find myself flipping back and forth a lot because of things like functions that take arguments that are then mutated, but also return boolean values.
The pattern of creating empty variables then filling them mutably instead of returning them feels very strange. We have the ability to return Option types and use patterns like if-let though, so I chose to return HitRecords instead of mutating them.
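At the call sites, that turns the C++ "pass in a record, fill it, return a bool" pattern into an if-let (a sketch):

// world.hit returns Option<HitRecord> instead of mutating one we pass in
if let Some(hit_record) = world.hit(&ray, 0.001..f64::INFINITY) {
    // scatter off the material, recurse for the bounce color, etc.
} else {
    // miss: fall through to the background color
}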
The series also defines some utility functions that already exist in glam, like abs_diff_eq.
// Catch degenerate scatter direction
if scatter_direction.abs_diff_eq(
    DVec3::new(0., 0., 0.),
    1e-8,
) {
    scatter_direction = hit_record.normal;
}
This is also the first time I ran into any issues with borrowing moved data. Partially because the C++ code is trying to circularly reference itself at this location in the series.
I derived Clone in two places and kept at it.
Clamping
At this point I also had an issue with how dark my images were. It turns out that, due to flipping back and forth during this series, I had forgotten to go back and add a small clamping function from the gamma correction section.
.clamp is a widely implemented function in Rust land, so I used that and moved on to Dielectrics rather than implementing it myself.
let color = DVec3 {
    x: linear_to_gamma(
        multisampled_pixel_color.x,
    ),
    y: linear_to_gamma(
        multisampled_pixel_color.y,
    ),
    z: linear_to_gamma(
        multisampled_pixel_color.z,
    ),
}
.clamp(
    DVec3::splat(0.),
    DVec3::splat(0.999),
)
Dielectrics
In the Dielectrics section they say the hardest part to debug is the refracted ray, but honestly we haven’t debugged much of anything yet, and there’s been largely no discussion of debugging so far.
That said, adding the new glass material is straightforward since adding a new material now is just adding a new enum variant and an implementation.
enum Material {
    Lambertian { albedo: DVec3 },
    Metal { albedo: DVec3, fuzz: f64 },
    Dielectric { index_of_refraction: f64 },
}
So we add the glass material in the same way we’ve constructed the previous two materials and move on.
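The refraction itself is a direct translation of the book's vector math onto DVec3 (a sketch; the full version also handles total internal reflection and the Schlick reflectance approximation):

// Refract a unit-length direction `uv` about normal `n`, where
// `etai_over_etat` is the ratio of refractive indices.
fn refract(uv: DVec3, n: DVec3, etai_over_etat: f64) -> DVec3 {
    let cos_theta = (-uv).dot(n).min(1.0);
    let r_out_perp = etai_over_etat * (uv + cos_theta * n);
    let r_out_parallel =
        -(1.0 - r_out_perp.length_squared()).abs().sqrt() * n;
    r_out_perp + r_out_parallel
}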
We’ll want to start setting up more scenes, so moving the camera around and configuring it is what we get to next.
Customizing Camera settings
I found it extremely awkward to switch the scene and the camera variable settings at the same time. For some reason I made a mistake that caused no balls to appear in the scene, but I didn’t know if this was because the balls themselves weren’t rendering or because the camera was pointed in the wrong direction.
I will say that pointing the camera in certain directions greatly influences the render time of the scene.
The math in this section is also spread across a growing camera initialization section, which makes it harder to go back and check against.
There's also no easy way to "go back and check" the previous camera settings. It's very unclear why we default to certain settings and then immediately change them without checking the result here.
This is also what tends to make raytracing hard to get into, IMO: the complete lack of support and context when something goes wrong, which has an impact in the next section.
Defocus Blur
I ran into an issue implementing defocus blur that was hard to debug. My focus_distance didn't seem to be affecting the scene at all; the blur was applied uniformly across the entire image no matter what.
fn random_in_unit_disk() -> DVec3 {
    let mut rng = rand::thread_rng();
    // rejection sampling: pick points in the unit square on z = 0
    // until one lands inside the unit disk
    loop {
        let v = DVec3::new(
            rng.gen_range(-1.0..1.),
            rng.gen_range(-1.0..1.),
            0.,
        );
        if v.length_squared() < 1. {
            break v;
        }
    }
}
I ended up just… not fixing it and planning to come back to it later… although I haven’t yet.
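For the record, my understanding of how it's supposed to work: instead of every ray starting exactly at the camera center, the origin gets jittered around a small disk (the "lens"), while the rays still aim at points on the focus plane, so only things near the focus distance stay sharp. Roughly (a sketch of the book's approach, names illustrative):

// Sample a point on the defocus disk to use as the ray origin.
// defocus_disk_u/v span the disk, scaled by the defocus radius;
// pixel_sample is the jittered point on the focus plane.
let p = random_in_unit_disk();
let origin = camera_center
    + p.x * defocus_disk_u
    + p.y * defocus_disk_v;
let ray = Ray {
    origin,
    direction: pixel_sample - origin,
};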
End of the first weekend
In the end I was thankful for the progress bar we inserted at the beginning.
I set up the final scene and ran the program to render it, immediately realizing it was going to take a while.
The final image was about halfway done by the time I had completed the Rayon refactor.
.into_par_iter()
Adding Rayon is usually super easy, but because I chose to make Hittable a trait and store hittables as trait objects, I had to extend the requirements placed on the objects I was storing to also be Sync. Rayon is a work-stealing data-parallelism approach that distributes work to available threads, so references to our data have to be safe to send across threads.
struct HittableList {
    objects: Vec<Box<dyn Hittable + Sync>>,
}
This sped things up enough that the final image rendered in just under four minutes.
Adding Rayon is one of those things that I feel really shows off Rust’s ability as a language. Instead of .iter, .par_iter and we’re off to the races… quite literally.
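In practice that meant materializing the pixel coordinates first, since cartesian_product is a plain Iterator, and then fanning the per-pixel work out across threads. Roughly (a sketch; render_pixel here is a stand-in for the per-pixel work):

use rayon::prelude::*;

let pixels = (0..IMAGE_HEIGHT)
    .cartesian_product(0..IMAGE_WIDTH)
    .collect::<Vec<(u32, u32)>>()
    .into_par_iter()
    .map(|(y, x)| {
        // render the pixel and format it as an "r g b" string
        render_pixel(x, y)
    })
    .collect::<Vec<String>>()
    .join("\n");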
Refactoring
After that I continued to refactor my code and clean some of it up. I’m fairly happy with the refactor in terms of organization but I wasn’t able to translate as much of the series to things like introducing alternate shapes as I would’ve liked to.
let camera = Camera::init()
    .image_width(800)
    .aspect_ratio(16.0 / 9.0)
    .look_from(DVec3::new(-3., 4., 2.))
    .look_at(DVec3::NEG_Z)
    .vup(DVec3::Y)
    .samples_per_pixel(500)
    .max_depth(50)
    .vfov(90.)
    .build();
I tried boxes, cylinders, and rounded boxes. Each needs a bit more debugging before it works, but that's ok. I do wish there was a bit more explanation for implementing this kind of thing on the internet, but the best resource I found (iq's algorithms) doesn't include much explanation.
So I have a brute-force path tracer, sure, but there’s still a long way to go.
Luckily, there’s Ray Tracing: The Next Week.
Ray Tracing: The Next Week
Alright, so I’ve messed around enough with the path tracer we built last weekend and I feel like I’m ready to move on to the next installment in the raytracing series.
This one covers a whole bunch. Motion blur, AABB and BVH, texture mapping, perlin noise, and more.
It even looks like we get to make some quadrilaterals so maybe I can debug my broken box implementation then.
Overall the series claims that you can do these sections in any order, but from a light skim that’s only kind of true, so we’ll go through in order.
Motion Blur
The first section to complete is Motion Blur.
This introduces some concept of time, which we need to store in each ray because what we’re doing here is effectively launching rays at different points in time, which then determines where an object is and whether or not the ray will hit that object.
pub struct Ray {
    pub origin: DVec3,
    pub direction: DVec3,
    pub time: f64,
}
This seems to be a very static concept of time. Since this isn’t a realtime ray tracer I suppose that makes sense.
We then need to extend our Sphere struct. The C++ code just adds more fields to the class that may or may not be set, including a second center as well as an "is_moving" flag. Having both of these feels strange: if we have a second center, surely we're moving by definition.
pub struct Sphere {
    center: DVec3,
    radius: f64,
    material: Material,
    move_to: Option<DVec3>,
}
I chose to follow the C++ code’s model and refactor later if it makes sense.
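The actual lookup ends up being a small linear interpolation between the two centers, keyed off the ray's time (a sketch, give or take my exact field names):

impl Sphere {
    // Where is this sphere's center at `time`?
    fn center_at(&self, time: f64) -> DVec3 {
        match self.move_to {
            Some(move_to) => {
                self.center + time * (move_to - self.center)
            }
            None => self.center,
        }
    }
}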
Motion blur check. On to… efficiency structures?
BVH/AABB
So we don't really have a speed problem per se. The scenes we're working with are rendering fine, and while they could be much faster, writing a raytracer in a CPU-driven language like Rust instead of working with the GPU is always going to slow us down.
We didn’t write this to be fast, we wrote it to play with some light raytracing.
Our motion blur example takes about 2 minutes to run right now.
That said there’s some utility in becoming a little familiar with axis-aligned bounding boxes since they’re used in other contexts as well. This is, after all, something that exists inside Bevy anyway.
The bounding volumes were an interesting task… not because they were interesting, but because the conversion of the C++ code to something that wasn't C++ was kind of painful.
I spent some time working on it, but in the end decided it was more trouble than it was worth. In sum total the concepts are basically merging ranges and iterating over them.
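To give a flavor of what "merging ranges" means here: an axis-aligned bounding box is just one range per axis, and combining two boxes means taking the smallest start and the largest end on each axis. Something like this (a sketch, not code I ended up keeping):

use std::ops::Range;

// Merge two intervals along one axis into a single interval
// that covers both.
fn merge(a: &Range<f64>, b: &Range<f64>) -> Range<f64> {
    a.start.min(b.start)..a.end.max(b.end)
}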
Textures
The amount of back and forth is starting to wear me down now. In sections 4.1 and 4.2 we're supposed to implement a solid color and a checkered texture, but the code for actually using them in our materials is in 4.4, even though the examples of it working are in 4.2.
All in all getting the textures to work wasn't terribly hard, I just wish the code and explanations were given in an order that would've made the work a little easier to follow.
I went with the simple approach of making Texture an enum, which I’m finding to be a good default choice as I go through these series. There are a few abstract classes, but turning all of those into traits and dealing with them would’ve been a lot rougher than just using some enums.
#[derive(Clone)]
pub enum Texture {
    SolidColor(DVec3),
    Checkered { even: DVec3, odd: DVec3, scale: f64 },
}
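The checker itself boils down to a parity test on the hit point's scaled, floored coordinates: even cells get one color, odd cells the other. Roughly (a sketch of the Checkered variant's color lookup):

fn checker_color(
    even: DVec3,
    odd: DVec3,
    scale: f64,
    point: DVec3,
) -> DVec3 {
    let inv_scale = scale.recip();
    let x = (inv_scale * point.x).floor() as i32;
    let y = (inv_scale * point.y).floor() as i32;
    let z = (inv_scale * point.z).floor() as i32;

    if (x + y + z) % 2 == 0 {
        even
    } else {
        odd
    }
}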
Image mapping
With textures done, image mapping was easy to build on top of that infrastructure. We add a new enum, drop in a small implementation for picking pixels, and make use of the fact that Rust has the image crate, so we don’t need to implement our own image reader like the series does.
let earth_texture =
    Texture::load_image("assets/earthmap.jpg")?;

let mut world = vec![];

world.push(Sphere::new(
    DVec3::new(0., 0., 0.),
    2.,
    Material::Lambertian {
        albedo: earth_texture,
    },
));
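Picking pixels out of the image is the only real work: clamp the u,v texture coordinates, flip v because image rows run top to bottom, and index into the image. Roughly, with the image crate (a sketch, not my exact code):

use glam::DVec3;
use image::{DynamicImage, GenericImageView};

// Sample the image at texture coordinates (u, v) in 0..1.
fn sample(image: &DynamicImage, u: f64, v: f64) -> DVec3 {
    let u = u.clamp(0., 1.);
    let v = 1. - v.clamp(0., 1.); // flip v
    let x = (u * (image.width() - 1) as f64) as u32;
    let y = (v * (image.height() - 1) as f64) as u32;
    let pixel = image.get_pixel(x, y);

    DVec3::new(
        pixel[0] as f64 / 255.,
        pixel[1] as f64 / 255.,
        pixel[2] as f64 / 255.,
    )
}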
I find that implementing all of these little functions really takes a lot away from the series as a whole, making a lot of the work here not actually understanding raytracing but divining which functions are necessary, what their names are in the wider ecosystem, and whether the specific implementations translate well.
I chose to use the Bevy convention of having an assets/ folder to store images and such in.
I used the same image as the series but ended up with a different rotation when viewing my earth texture.
Perlin Noise
This whole section is basically not something you're going to need if you have access to a noise generation library. In Rust, that's noise-rs, which provides not only Perlin noise but a whole bunch of other noise types and ways to combine them.
#[derive(Clone)]
pub enum Texture {
    SolidColor(DVec3),
    Checkered { even: DVec3, odd: DVec3, scale: f64 },
    Image(DynamicImage),
    PerlinNoise(Perlin, f64),
    Turbulence(Perlin),
}
Quads, Lights
Quads are the first new shape introduced by the series, and instead of implementing a box later, the book chooses to put a number of these quads together to make a box-like object.
pub enum Shapes {
    Sphere(sphere::Sphere),
    Quad(quad::Quad),
}
Quads overall are pretty ok to implement.
They're also the basis for light sources.
pub enum Material {
    Lambertian { albedo: Texture },
    Metal { albedo: DVec3, fuzz: f64 },
    Dielectric { index_of_refraction: f64 },
    DiffuseLight(Texture),
}
pub fn emitted(
    &self,
    u: f64,
    v: f64,
    point: DVec3,
) -> DVec3 {
    match self {
        Material::DiffuseLight(texture) => {
            texture.color(u, v, point)
        }
        _ => DVec3::ZERO,
    }
}
Lights and emission produce a really nice effect. Basically, for each bounce we add the material's emitted light, which is zero for non-emissive materials, to the attenuated scatter color we're already rendering. With the background now black, only the surfaces that light actually reaches end up with any color.
This causes a bit of noise in our image that will stick with us to the end of this book. If a ray bounces off into nowhere and never hits any light, we’ll see a black pixel instead of a rendered color.
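In the color function that works out to something like this (a sketch, assuming scatter hands back the scattered ray plus an attenuation color; names are approximate):

// Emitted light plus the attenuated color of the scattered bounce.
// Rays that never reach a light just keep returning black.
let color_from_emission =
    material.emitted(hit.u, hit.v, hit.point);

match material.scatter(&ray, &hit) {
    Some((scattered, attenuation)) => {
        color_from_emission
            + attenuation * scattered.color(depth - 1, world)
    }
    None => color_from_emission,
}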
In any case, the effect is really fun and produced some of my favorite images from the series.
Instances
It turns out that moving and rotating objects is less about moving and rotating objects and more about moving and rotating the rays that hit them.
At this point I feel like the APIs I’ve chosen for building out hittable things and shapes are all straining a little bit. Oh well, I did the best I could with the information I had. The APIs I’ve chosen to go with all work, so at least we haven’t coded ourselves into an un-evolvable hole, even if they’re not particularly amazing to work with.
pub enum Shapes {
    Sphere(sphere::Sphere),
    Quad(quad::Quad),
    QuadBox(quad_box::QuadBox),
    Translate {
        offset: DVec3,
        object: Box<Shapes>,
    },
    RotateY {
        sin_theta: f64,
        cos_theta: f64,
        object: Box<Shapes>,
    },
}
The Translate and Rotate structs are added to the Shapes enum, with small modifications to the way their hit functions work to translate and rotate the incoming rays.
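For Translate, for example, the hit test nudges the incoming ray into the object's space instead of moving the object, then nudges the recorded hit point back out (a sketch of that match arm, with approximate field names):

Shapes::Translate { offset, object } => {
    // move the ray backwards by the offset...
    let moved_ray = Ray {
        origin: ray.origin - *offset,
        direction: ray.direction,
        time: ray.time,
    };
    // ...and move any resulting hit point forwards again
    object.hit(&moved_ray, interval).map(|mut hit| {
        hit.point += *offset;
        hit
    })
}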
This works surprisingly well, and the translation, rotation, and quads allow us to build what is commonly called the "Standard Cornell Box Scene".
Two boxes, translated and rotated into position, surrounded by walls with a light on the ceiling.
On to Fog.
Volumes
Fog and volumetric materials were a topic I was really excited to get to, and to be honest it helped drive me through some of the drier parts of the series.
Unfortunately at this point it only really took two paragraphs and a codeblock or two to implement the basic fog mechanisms.
pub enum Material {
    Lambertian { albedo: Texture },
    Metal { albedo: DVec3, fuzz: f64 },
    Dielectric { index_of_refraction: f64 },
    DiffuseLight(Texture),
    Isotropic { albedo: Texture },
}
It looks amazing, but also feels like there’s a lot that could be done here in the future as well.
Basically, rays shoot through a bounding shape, mostly passing through but sometimes scattering inside.
pub struct ConstantMedium {
    boundary: Box<Shapes>,
    neg_inv_density: f64,
    phase_function: Material,
}

impl ConstantMedium {
    pub fn new(
        boundary: Shapes,
        density: f64,
        texture: Texture,
    ) -> Self {
        Self {
            boundary: Box::new(boundary),
            neg_inv_density: -density.recip(),
            phase_function: Material::Isotropic {
                albedo: texture,
            },
        }
    }
}
Dense volumes are more prone to scatter, while less dense materials scatter less.
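The density shows up in the hit test: once a ray is inside the boundary, the distance it travels before scattering is random, and neg_inv_density controls how far that tends to be. A sketch of the core of it (distance_inside_boundary being how far the ray travels inside the shape):

// ln of a 0..1 random number is negative, so multiplying by
// -1/density gives a positive distance; higher density means the
// ray tends to scatter sooner.
let hit_distance =
    self.neg_inv_density * rand::random::<f64>().ln();

if hit_distance > distance_inside_boundary {
    // the ray passed all the way through without scattering
    return None;
}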
Overall a very cool look.
The final scene
So I've probably said a few times that performance hasn't really been a concern for us. I always knew the CPU-driven raytracer was going to be slow and that if I wanted performance we'd want to look to the GPU.
The final scene however really put that to the test. With the full settings they wanted to use it was going to take potentially days to render, which is obviously too slow.
I knocked down the settings to render in an hour, but there’s very obviously a lot… we’ll just say slow, about the current implementation.
I didn't code this for speed and didn't even finish implementing the AABB and BVH efficiency structures, so that's to be expected.
Honestly I don’t know that I could’ve coded this faster at this point given that I didn’t know where the series was going before I dove in.
Book 3?
At this point I’m happy with where the raytracer is and I’m ready to move on to a wgpu based implementation.
I skimmed the content of book 3 and don’t think that it will be of much use to me until I start building more raytracers and start running into real problems with them.
I’ve got a different book that some people suggested arriving soon: The Ray Tracer Challenge. Maybe I’ll do that.
I'm really interested in global illumination right now though, so I think I'll get into some of that. There's a paper that caught my interest, and some really interesting work by zaycev in Bevy's ecosystem to dig into.
Thanks for reading. If you go through the raytracing series, please let me know. You can find me in Discord, on Mastodon, or on YouTube.
Title (series): “Ray Tracing in One Weekend Series”
Title (book): “Ray Tracing in One Weekend”
Authors: Peter Shirley, Trevor David Black, Steve Hollasch
Version/Edition: v4.0.0-alpha.1
Date: 2023-08-06
URL (series): https://raytracing.github.io/
URL (book): https://raytracing.github.io/books/RayTracingInOneWeekend.html