With ray tracing we follow paths originating from a camera, going through an image plane (the pixels) until they hit an object in our 3D scene. If a ray hits something, we can store that information in the corresponding pixel on the image plane and construct an image this way. One piece of this information is the alpha, the transparency of the pixel. If we don't hit anything, we consider the pixel to be transparent, or fill it with whatever we define as the background color. We can also return the color specified in the object's material.
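This first step can be sketched in a few lines of Python. Everything here is illustrative: the function names, the dictionary-based scene, and the use of spheres as the only primitive are my own assumptions, not part of the article.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray-sphere hit distance, or None. `direction` normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace_primary(origin, direction, scene, background=(0.0, 0.0, 0.0, 0.0)):
    """Return the material color of the nearest hit as RGBA (alpha 1),
    or the transparent background color if the ray misses everything."""
    nearest = None
    for obj in scene:
        t = hit_sphere(origin, direction, obj["center"], obj["radius"])
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, obj)
    if nearest is None:
        return background
    r, g, b = nearest[1]["color"]
    return (r, g, b, 1.0)

# one red sphere in front of the camera
scene = [{"center": (0, 0, -5), "radius": 1.0, "color": (1.0, 0.0, 0.0)}]
print(trace_primary((0, 0, 0), (0, 0, -1), scene))  # hit  -> (1.0, 0.0, 0.0, 1.0)
print(trace_primary((0, 0, 0), (0, 1, 0), scene))   # miss -> transparent background
```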
However, just a color and a transparency value isn't particularly impressive to look at. To get more information like shading, shadows, or reflections out of our 3D scene, we need to cast more specialized rays. But instead of originating from the camera, those rays start where the previous ones ended: where we hit the object.
Taking it further: primary and secondary rays
To determine whether our current sample is lit by a light source or lies in shadow, we need to shoot shadow rays towards the light source. So for each hit point in our scene we shoot a ray towards the light. If it hits something, we know there must be an object in the way and our sample is in shadow; if it doesn't, the sample is lit by the light source. We may even have to do this multiple times in the case of a light source that is not just a point, like an area light. Depending on how much of the light is obstructed, this results in soft shadows.
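For a single point light, the shadow test can be sketched like this. Again, the helper names and the sphere-only occluders are assumptions made for the example, not something the article prescribes:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest ray-sphere hit distance, or None. `direction` must be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small bias avoids self-intersection

def in_shadow(hit_point, light_pos, occluders):
    """Shoot one shadow ray from the hit point towards a point light.
    True means some occluder blocks the segment to the light."""
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = [c / dist for c in to_light]
    for obj in occluders:
        t = hit_sphere(hit_point, direction, obj["center"], obj["radius"])
        if t is not None and t < dist:  # blocker lies between point and light
            return True
    return False

# a sphere sitting between the shaded point and the light
occluder = [{"center": (0, 2, 0), "radius": 0.5}]
print(in_shadow((0, 0, 0), (0, 5, 0), occluder))  # True  -> in shadow
print(in_shadow((0, 0, 0), (0, 5, 0), []))        # False -> lit
```

Note the check `t < dist`: an object on the far side of the light must not cast a shadow on our sample.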
Most of the time we also have more than one light source in the scene, so we need to repeat the process for every light source. This can, of course, add up to a considerable number of rays for one single pixel of the image.
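To make that cost concrete, here is a quick back-of-envelope count. All of the numbers are made up for illustration; real renderers budget these per scene:

```python
# Hypothetical budget -- every number here is invented for illustration:
primary_per_pixel = 4    # anti-aliasing samples per pixel
lights = 8               # light sources in the scene
shadow_per_light = 16    # shadow rays per light (e.g. for an area light)

shadow_rays = primary_per_pixel * lights * shadow_per_light
print(shadow_rays)  # 512 shadow rays for one single pixel
```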
But this, again, is only the beginning. There are other rays we can shoot for additional effects like reflection, refraction, translucency, and sub-surface scattering. The basic principle stays the same. We usually refer to the first rays (the red ones) as primary rays (or camera/eye rays) and all subsequent types as secondary rays (shadow, reflection, etc.). See the reading list below for more examples.
Secondary rays may themselves send out additional rays to determine the color to return. Think of a reflection ray that hits another object: you basically have to do the same thing you did starting from the camera to determine what the reflection actually looks like.
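In code, this "do the same thing again" is naturally a recursive call: the reflection ray goes back into the same trace function, with a depth counter so mutually reflecting mirrors cannot recurse forever. The structure below is a minimal sketch under my own assumptions (sphere-only scene, perfect mirrors, hard-coded background color):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest ray-sphere hit distance, or None. `direction` must be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small bias avoids self-intersection

def reflect(d, n):
    """Mirror direction d about unit normal n: r = d - 2(d.n)n."""
    dn = sum(a * b for a, b in zip(d, n))
    return [a - 2.0 * dn * b for a, b in zip(d, n)]

def trace(origin, direction, scene, depth=0, max_depth=3):
    """A reflection ray is traced exactly like a camera ray: find the
    nearest hit, then either return the material color or recurse."""
    if depth > max_depth:
        return (0.2, 0.2, 0.8)  # recursion limit reached: background
    nearest = None
    for obj in scene:
        t = hit_sphere(origin, direction, obj["center"], obj["radius"])
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, obj)
    if nearest is None:
        return (0.2, 0.2, 0.8)  # background color
    t, obj = nearest
    point = [o + t * d for o, d in zip(origin, direction)]
    normal = [(p - c) / obj["radius"] for p, c in zip(point, obj["center"])]
    if obj.get("mirror"):
        # secondary ray: restart the same procedure from the hit point
        return trace(point, reflect(direction, normal), scene, depth + 1, max_depth)
    return obj["color"]

# a mirror sphere in front of the camera reflects a red sphere behind the camera
scene = [
    {"center": (0, 0, -3), "radius": 1.0, "mirror": True},
    {"center": (0, 0, 3), "radius": 1.0, "color": (1.0, 0.0, 0.0)},
]
print(trace((0, 0, 0), (0, 0, -1), scene))  # the mirror shows the red sphere
```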
As discussed in the "sampling" chapter, the number of samples we collect is crucial for the amount of noise (or smoothness) in the result. This applies not only to primary rays but also to secondary rays. If, for example, an area light is used in our scene, we need more shadow rays to get clean, smooth shadows. If we only shoot one shadow ray per pixel, we get a very noisy and unrealistic result. Instead, we shoot several shadow rays towards the area light to estimate how much of the light is actually obstructed. So the same principle discussed in "sampling" applies here as well.
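The area-light case can be sketched by averaging many shadow rays aimed at jittered points on the light: the fraction of unblocked rays is the estimated visibility, which shades smoothly from 1.0 (fully lit) through the penumbra down to 0.0 (umbra). The disc-shaped light, sphere-only occluders, and fixed sample count are assumptions made for this example:

```python
import math, random

def hit_sphere(origin, direction, center, radius):
    """Nearest ray-sphere hit distance, or None. `direction` must be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small bias avoids self-intersection

def soft_shadow(hit_point, light_center, light_radius, occluders, n_samples=16):
    """Estimate visibility of a horizontal disc-shaped area light by
    shooting n_samples shadow rays to jittered points on the disc.
    Returns the unblocked fraction: 1.0 fully lit, 0.0 full umbra."""
    rng = random.Random(0)  # fixed seed keeps this sketch deterministic
    visible = 0
    for _ in range(n_samples):
        # pick a uniformly distributed sample point on the light's disc
        ang = rng.uniform(0.0, 2.0 * math.pi)
        rad = light_radius * math.sqrt(rng.random())
        target = (light_center[0] + rad * math.cos(ang),
                  light_center[1],
                  light_center[2] + rad * math.sin(ang))
        to_light = [t - p for t, p in zip(target, hit_point)]
        dist = math.sqrt(sum(c * c for c in to_light))
        direction = [c / dist for c in to_light]
        blocked = False
        for obj in occluders:
            t = hit_sphere(hit_point, direction, obj["center"], obj["radius"])
            if t is not None and t < dist:
                blocked = True
                break
        if not blocked:
            visible += 1
    return visible / n_samples

blocker = [{"center": (0, 2.5, 0), "radius": 2.0}]
print(soft_shadow((0, 0, 0), (0, 5, 0), 1.0, []))       # 1.0 -> fully lit
print(soft_shadow((0, 0, 0), (0, 5, 0), 1.0, blocker))  # 0.0 -> fully shadowed
```

With more samples the penumbra estimate gets smoother; with a single sample per pixel the visibility is either 0 or 1, which is exactly the noisy result described above.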