I am creating a small 3D rendering application. I decided to use simple flat shading for my triangles: calculate the cosine of the angle between the face normal and the light direction, and scale the light intensity by it.

But I'm not sure how exactly I should apply that shading coefficient to my RGB colors.

For example, imagine a surface at a 60-degree angle to the light source: cos(60°) = 0.5, so only half of the emitted light energy should be retained.

I could simply scale the RGB values by that coefficient, as in the following pseudocode:

```
double shade = max(0, cos(angle(normal, lightDir)))  // Lambert factor, clamped for back-facing surfaces
Color out = new Color(in.r * shade, in.g * shade, in.b * shade)
```
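To make the question concrete, here is a runnable Python sketch of that multiplicative version (the function names and the 0-255 channel range are my own assumptions; both vectors are assumed to be unit length):

```python
import math

def lambert_shade(normal, light_dir):
    """Cosine of the angle between two unit vectors, clamped to [0, 1]."""
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, cos_angle)

def shade_multiplicative(rgb, shade):
    """Scale each 0-255 channel by the Lambert factor."""
    return tuple(int(round(c * shade)) for c in rgb)

# Surface at 60 degrees to the light: cos(60°) = 0.5
normal = (0.0, 0.0, 1.0)
light_dir = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
s = lambert_shade(normal, light_dir)
print(shade_multiplicative((200, 150, 100), s))  # → (100, 75, 50)
```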

But the resulting colors get too dark even at small angles. After some thought, that seems logical: our eyes perceive roughly the logarithm of light energy (which is why we can see both in bright daylight and at night), and RGB values already encode that compressed scale.

My next attempt was to use that linear/logarithmic insight. Theoretically:

```
output energy = log(exp(input energy) * shade)
```

That can be simplified to:

```
output energy = log(exp(input energy)) + log(shade)
output energy = input energy + log(shade)
```
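The algebra can be sanity-checked numerically (I am using natural log and exp consistently here; note that exp of an RGB-scale value like 180 is already around 1e78, so the simplified additive form is also the only numerically practical one):

```python
import math

# Check the simplification numerically, using natural log/exp consistently.
v = 180.0  # an RGB-like perceptual value
s = 0.5    # shade coefficient for a 60-degree angle

lhs = math.log(math.exp(v) * s)  # "decode, scale, re-encode" form
rhs = v + math.log(s)            # simplified additive form

print(lhs, rhs)  # both ≈ 179.3069
```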

So such shading just amounts to adding the logarithm of the shade coefficient (which is negative, since the cosine is at most 1) to each RGB channel:

```
double shade = log(cos(angle(normal, lightDir)))  // <= 0, since cos <= 1
Color out = new Color(in.r + shade, in.g + shade, in.b + shade)
```
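And a runnable Python sketch of the additive variant (again, the function name and the 0-255 range are my assumptions). One thing worth noting: with the natural logarithm on a 0-255 scale, log(0.5) ≈ -0.69, so the darkening is barely visible, and the choice of log base effectively becomes a free contrast parameter:

```python
import math

def shade_additive(rgb, cos_angle):
    """Add log(shade) to each 0-255 channel, clamping at 0."""
    offset = math.log(max(cos_angle, 1e-6))  # <= 0, since cos_angle <= 1
    return tuple(max(0, int(round(c + offset))) for c in rgb)

print(shade_additive((200, 150, 100), 0.5))  # → (199, 149, 99): almost no change
```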

That seems to work, but is it correct? How is this done in real rendering pipelines?